| Model  | Backbone  | Batch | Scale | AP<sup>val</sup><br>0.5:0.95 | AP<sup>val</sup><br>0.5 | FLOPs (G) | Params (M) | Weight | Logs |
|--------|-----------|-------|-------|------------------------------|-------------------------|-----------|------------|--------|------|
| YOLOv2 | ResNet-18 | 1xb16 | 640   | 28.4                         | 47.4                    | 38.0      | 21.5       | ckpt   | log  |
- For training, we train the redesigned YOLOv2 for 150 epochs on COCO.
- For data augmentation, we use SSD-style augmentation, including RandomCrop, RandomDistort, RandomExpand, RandomHFlip, and so on (see the augmentation sketch after this list).
- For the optimizer, we use AdamW with a weight decay of 0.05 and a per-image base learning rate of 0.001 / 64.
- For the learning-rate scheduler, we use cosine decay.
- For the batch size, we set it to 16 and use gradient accumulation to approximate a batch size of 256 (see the training-loop sketch after this list).
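The RandomCrop / RandomDistort / RandomExpand / RandomHFlip transforms named above are the project's own classes. As a rough illustration only, the sketch below builds a comparable SSD-style pipeline with torchvision's v2 transforms; the torchvision class names, parameters, and the 640 resize are approximations, not the repo's code.

```python
# A rough SSD-style augmentation pipeline, approximated with torchvision >= 0.16
# transforms. This only illustrates the transforms listed above; it is not the
# project's own RandomCrop/RandomDistort/RandomExpand/RandomHFlip implementation.
from torchvision.transforms import v2

ssd_like_augment = v2.Compose([
    v2.RandomPhotometricDistort(p=0.5),   # ~ RandomDistort: jitter brightness/contrast/saturation/hue
    v2.RandomZoomOut(fill=114, p=0.5),    # ~ RandomExpand: paste the image onto a larger gray canvas
    v2.RandomIoUCrop(),                   # ~ RandomCrop: SSD-style crop constrained by box IoU
    v2.RandomHorizontalFlip(p=0.5),       # ~ RandomHFlip
    v2.SanitizeBoundingBoxes(),           # drop boxes left degenerate by the crop
    v2.Resize((640, 640)),                # match the 640 training scale from the table
])
```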
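Putting the optimizer, scheduler, and batch-size notes together, the following is a minimal stand-in training loop. It assumes the per-image base learning rate is scaled by the effective batch size of 256 (an assumption); the placeholder model, dataloader, and loss are illustrative and are not the repo's trainer.

```python
import torch

# ---- hyperparameters taken from the notes above ----
batch_size = 16                   # physical batch per forward pass
accumulate = 256 // batch_size    # gradient accumulation -> effective batch of 256
base_lr    = 0.001 / 64 * 256     # per-image base lr (0.001/64) scaled by the effective batch
                                  # (assumption: the repo may scale by the physical batch instead)
num_epochs = 150                  # total training epochs on COCO

# Placeholders standing in for the real YOLOv2-R18 model and COCO dataloader.
model = torch.nn.Conv2d(3, 8, kernel_size=3)
dataloader = [(torch.randn(batch_size, 3, 640, 640),) for _ in range(accumulate)]

optimizer = torch.optim.AdamW(model.parameters(), lr=base_lr, weight_decay=0.05)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)

for epoch in range(num_epochs):
    for step, (images,) in enumerate(dataloader):
        loss = model(images).mean()        # stand-in for the detection loss
        (loss / accumulate).backward()     # average gradients over the accumulated steps
        if (step + 1) % accumulate == 0:
            optimizer.step()
            optimizer.zero_grad()
    scheduler.step()                       # cosine decay, stepped once per epoch
```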
## Train YOLOv2

### Single GPU

Taking training YOLOv2-R18 on COCO as an example:

```Shell
python train.py --cuda -d coco --root path/to/coco -m yolov2_r18 -bs 16 --fp16
```
### Multi GPU

Taking training YOLOv2-R18 on COCO as an example:

```Shell
python -m torch.distributed.run --nproc_per_node=8 train.py --cuda --distributed -d coco --root path/to/coco -m yolov2_r18 -bs 16 --fp16
```
## Test YOLOv2

Taking testing YOLOv2-R18 on COCO-val as an example:

```Shell
python test.py --cuda -d coco --root path/to/coco -m yolov2_r18 --weight path/to/yolov2.pth --show
```
## Evaluate YOLOv2

Taking evaluating YOLOv2-R18 on COCO-val as an example:

```Shell
python eval.py --cuda -d coco --root path/to/coco -m yolov2_r18 --weight path/to/yolov2.pth
```
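The AP columns in the table follow the standard COCO protocol; eval.py presumably wraps something like the pycocotools routine below. The file paths and the detections JSON are placeholders, not the repo's actual interface.

```python
# A minimal pycocotools sketch of how the COCO AP numbers in the table
# (AP 0.5:0.95 and AP 0.5) are typically computed from detection results.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("path/to/coco/annotations/instances_val2017.json")  # ground-truth annotations
coco_dt = coco_gt.loadRes("path/to/detections.json")               # detections in COCO JSON format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()    # match detections to ground truth per image and IoU threshold
coco_eval.accumulate()  # build precision/recall curves
coco_eval.summarize()   # prints AP@[0.5:0.95], AP@0.5, AP@0.75, etc.

ap_50_95 = coco_eval.stats[0]   # corresponds to the "AP 0.5:0.95" column
ap_50    = coco_eval.stats[1]   # corresponds to the "AP 0.5" column
```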
## Demo

### Detect with Image

```Shell
python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov2_r18 --weight path/to/weight --show
```
### Detect with Video

```Shell
python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov2_r18 --weight path/to/weight --show --gif
```
### Detect with Camera

```Shell
python demo.py --mode camera --cuda -m yolov2_r18 --weight path/to/weight --show --gif
```