Effect of ImageNet-1K (IN1K) pretraining on COCO-val:

| Model    | Pretrained | Scale | Epoch | AP<sup>val</sup><br>0.5:0.95 | AP<sup>val</sup><br>0.5 |
|----------|------------|-------|-------|------------------------------|-------------------------|
| RTCDet-N | -          | 640   | 500   | 37.0                         | 52.9                    |
| RTCDet-N | IN1K       | 640   | 500   |                              |                         |
| RTCDet-L | -          | 640   | 500   | 50.2                         | 68.0                    |
| RTCDet-L | IN1K       | 640   | 500   |                              |                         |
Results on the COCO-val:

| Model    | Batch | Scale | AP<sup>val</sup><br>0.5:0.95 | AP<sup>val</sup><br>0.5 | FLOPs<br>(G) | Params<br>(M) | Weight |
|----------|-------|-------|------------------------------|-------------------------|--------------|---------------|--------|
| RTCDet-N | 8xb16 | 640   | 37.0                         | 52.9                    | 8.8          | 3.2           |        |
| RTCDet-S | 8xb16 | 640   |                              |                         |              |               |        |
| RTCDet-M | 8xb16 | 640   |                              |                         |              |               |        |
| RTCDet-L | 8xb16 | 640   |                              |                         |              |               |        |
| RTCDet-X | 8xb16 | 640   | 50.7                         | 68.3                    | 165.7        | 43.7          |        |
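The FLOPs and Params columns can be reproduced with a generic profiler. Below is a minimal sketch using the third-party `thop` package; a torchvision ResNet-18 stands in for the detector, since the repo's own model builder is not shown here.

```python
# Sketch: how the Params (M) / FLOPs (G) columns can be reproduced with a profiler.
# A torchvision ResNet-18 is only a placeholder; swap in the RTCDet model built
# by this repo's own code to profile RTCDet itself.
import torch
import torchvision
from thop import profile  # third-party: pip install thop

model = torchvision.models.resnet18().eval()  # placeholder network

n_params = sum(p.numel() for p in model.parameters())
print(f"Params: {n_params / 1e6:.1f} M")

dummy = torch.randn(1, 3, 640, 640)  # one 640x640 RGB input, as in the table
with torch.no_grad():
    macs, _ = profile(model, inputs=(dummy,))
# thop counts multiply-accumulates; detection tables differ on whether
# "FLOPs" means MACs or 2x MACs, so check the convention before comparing numbers.
print(f"MACs: {macs / 1e9:.1f} G")
```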
Train RTCDet

Taking training RTCDet-S on COCO as the example,

Single GPU

```Shell
python train.py --cuda -d coco --root path/to/coco -m rtcdet_s -bs 16 -size 640 --wp_epoch 3 --max_epoch 300 --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --multi_scale
```
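For orientation, the flags `--fp16`, `--ema`, and `--multi_scale` correspond to mixed-precision training, an exponential moving average of the weights, and random input-size jitter. The sketch below is a generic PyTorch illustration of those three ideas, not the actual training loop in `train.py`; the model, dataloader, and loss handling are placeholders.

```python
# Generic sketch of --fp16 (AMP), --ema (weight averaging), and --multi_scale
# (random stride-32 input sizes) in a PyTorch training step. Placeholders only.
import random
import torch
import torch.nn.functional as F

def train_one_epoch(model, ema_model, loader, optimizer, scaler, device, ema_decay=0.9998):
    model.train()
    for images, targets in loader:
        images = images.to(device)

        # --multi_scale: rescale the batch to a random size divisible by 32
        new_size = random.randrange(320, 641, 32)
        images = F.interpolate(images, size=new_size, mode="bilinear", align_corners=False)

        # --fp16: mixed-precision forward/backward with loss scaling
        with torch.cuda.amp.autocast():
            loss = model(images, targets)  # placeholder: assume the model returns a scalar loss

        optimizer.zero_grad()
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()

        # --ema: keep an exponential moving average of the weights for evaluation
        with torch.no_grad():
            for p_ema, p in zip(ema_model.parameters(), model.parameters()):
                p_ema.mul_(ema_decay).add_(p.detach(), alpha=1.0 - ema_decay)

# Typical setup (placeholders):
#   scaler = torch.cuda.amp.GradScaler()
#   ema_model starts as a deep copy of model, kept in eval mode
```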
Multi GPU

```Shell
python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m rtcdet_s -bs 128 -size 640 --wp_epoch 3 --max_epoch 300 --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/
```
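The launcher starts one process per GPU (eight here, so `-bs 128` is 8x16 per device), and `--sybn` presumably enables synchronized BatchNorm across those processes. A minimal sketch of the standard PyTorch setup that such a launch implies; the actual `train.py` may organize this differently:

```python
# Sketch of the DistributedDataParallel setup implied by
# `python -m torch.distributed.run --nproc_per_node=8 ...`.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_distributed(model):
    # torch.distributed.run sets LOCAL_RANK / RANK / WORLD_SIZE for each process
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    model = model.cuda(local_rank)
    # --sybn: convert BatchNorm layers to SyncBatchNorm so statistics are
    # synchronized across all eight processes
    model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
    model = DDP(model, device_ids=[local_rank])
    # Note: the dataloader also needs a DistributedSampler so each process
    # sees a different shard of the dataset.
    return model, local_rank
```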
Test RTCDet

Taking testing RTCDet-S on COCO-val as the example,

```Shell
python test.py --cuda -d coco --root path/to/coco -m rtcdet_s --weight path/to/RTCDet_s.pth -size 640 -vt 0.4 --show
```
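Here `-vt 0.4` acts as the score threshold for visualization. The sketch below shows the usual post-processing such a threshold implies (confidence filtering followed by class-aware NMS) using torchvision ops; it is not necessarily how `test.py` filters detections internally.

```python
# Generic post-processing sketch: drop low-confidence boxes (-vt), then NMS.
import torch
from torchvision.ops import batched_nms

def filter_detections(boxes, scores, labels, score_thresh=0.4, iou_thresh=0.5):
    """boxes: (N, 4) xyxy float tensor, scores: (N,), labels: (N,) int64."""
    keep = scores >= score_thresh                           # confidence threshold (-vt 0.4)
    boxes, scores, labels = boxes[keep], scores[keep], labels[keep]
    keep = batched_nms(boxes, scores, labels, iou_thresh)   # class-aware NMS
    return boxes[keep], scores[keep], labels[keep]
```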
Evaluate RTCDet

Taking evaluating RTCDet-S on COCO-val as the example,

```Shell
python eval.py --cuda -d coco-val --root path/to/coco -m rtcdet_s --weight path/to/RTCDet_s.pth
```
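The AP numbers in the tables above follow the standard COCO protocol. A minimal sketch of how they are computed with pycocotools once detections are dumped to a results JSON; `eval.py` may wrap this differently, and the file paths are placeholders:

```python
# Sketch: standard COCO bbox evaluation with pycocotools.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ann_file = "path/to/coco/annotations/instances_val2017.json"   # ground truth
res_file = "path/to/detections.json"  # [{"image_id", "category_id", "bbox", "score"}, ...]

coco_gt = COCO(ann_file)
coco_dt = coco_gt.loadRes(res_file)

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()   # prints AP@[0.5:0.95], AP@0.5, etc.
```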
Demo

Detect with Image

```Shell
python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m rtcdet_s --weight path/to/weight -size 640 -vt 0.4 --show
```
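Conceptually, the image mode iterates a folder, runs the detector on each image, and draws the boxes that survive the `-vt` threshold. The sketch below illustrates that loop with OpenCV; `run_detector` is a stub standing in for this repo's actual inference call.

```python
# Sketch of an image-folder demo loop. The detector call is a stub.
import glob
import os
import cv2

def run_detector(img):
    """Stub: replace with RTCDet inference; should return (boxes_xyxy, scores, labels)."""
    return [], [], []

def demo_images(image_dir, score_thresh=0.4):
    for path in sorted(glob.glob(os.path.join(image_dir, "*.jpg"))):
        img = cv2.imread(path)
        for (x1, y1, x2, y2), score, label in zip(*run_detector(img)):
            if score < score_thresh:        # same role as -vt 0.4
                continue
            cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
            cv2.putText(img, f"{label}: {score:.2f}", (int(x1), int(y1) - 4),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imshow("detection", img)        # --show
        cv2.waitKey(0)                      # press any key for the next image
    cv2.destroyAllWindows()
```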
Detect with Video

```Shell
python demo.py --mode video --path_to_vid path/to/video --cuda -m rtcdet_s --weight path/to/weight -size 640 -vt 0.4 --show --gif
```
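The `--gif` flag suggests the annotated frames are also saved as a GIF. One generic way to do that is shown below, reading frames with OpenCV and writing the GIF with imageio; this is an assumption about the behavior, not the implementation in `demo.py`, and `detect_frame` is a stub.

```python
# Sketch: read a video, run detection per frame (stubbed), save frames as a GIF.
import cv2
import imageio

def detect_frame(frame):
    """Stub: replace with RTCDet inference + box drawing on one BGR frame."""
    return frame

def demo_video(video_path, gif_path="result.gif", max_frames=200):
    cap = cv2.VideoCapture(video_path)
    frames = []
    while len(frames) < max_frames:         # cap length so the GIF stays small
        ok, frame = cap.read()
        if not ok:
            break
        frame = detect_frame(frame)
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # imageio expects RGB
    cap.release()
    imageio.mimsave(gif_path, frames)       # frame duration can be tuned via writer options
```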
Detect with Camera

```Shell
python demo.py --mode camera --cuda -m rtcdet_s --weight path/to/weight -size 640 -vt 0.4 --show --gif
```