README.md

YOLOv7:

  - For training, we train YOLOv7 and YOLOv7-Tiny for 300 epochs on 8 GPUs.
  - For data augmentation, we use YOLOX-style augmentation, including large-scale jitter (LSJ), Mosaic, and Mixup.
  - For the optimizer, we use AdamW with a weight decay of 0.05 and a base learning rate of 0.001 per 64 images, scaled linearly with the global batch size.
  - For the learning rate scheduler, we use cosine decay.
  - For YOLOv7's structure, we replace the coupled head with a YOLOX-style decoupled head.
  - In my view, YOLOv7 uses too many training tricks, such as anchor boxes, the auxiliary head, RepConv, Mosaic-9, and so on, which makes the YOLO design overly complicated and runs against the development philosophy of the YOLO series. Otherwise, why not just use the DETR series? It would be nothing more than some acceleration optimization on top of DETR. Therefore, I stayed faithful to my own technical aesthetics and implemented a cleaner and simpler YOLOv7. Without the benefit of all those tricks, however, I could not reproduce the full performance, which is a pity.
  - I have no more GPUs to train my YOLOv7-X.
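The optimizer and scheduler settings above can be sketched as follows. This is a minimal illustration of the linear LR scaling rule (0.001 per 64 images) combined with warmup and cosine decay, matching the `--wp_epoch 3` and `--max_epoch 300` flags used below; the `min_lr_ratio` floor is a hypothetical choice, not necessarily this repo's exact value.

```python
import math

def scaled_lr(base_lr_per_64=0.001, global_batch_size=64):
    # Linear scaling rule: the base LR is defined per 64 images,
    # so the effective LR grows linearly with the global batch size.
    return base_lr_per_64 / 64 * global_batch_size

def lr_at_epoch(epoch, max_epoch=300, wp_epoch=3, lr=0.002, min_lr_ratio=0.05):
    # Linear warmup for wp_epoch epochs, then cosine decay.
    # min_lr_ratio is an assumed floor, not taken from this repo's code.
    if epoch < wp_epoch:
        return lr * (epoch + 1) / wp_epoch
    t = (epoch - wp_epoch) / (max_epoch - wp_epoch)
    min_lr = lr * min_lr_ratio
    return min_lr + 0.5 * (lr - min_lr) * (1.0 + math.cos(math.pi * t))
```

For example, a global batch size of 128 gives `scaled_lr(0.001, 128)`, i.e. an effective LR of 0.002.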

Train YOLOv7

Single GPU

Taking YOLOv7-Tiny on COCO as an example:

```Shell
python train.py --cuda -d coco --root path/to/coco -m yolov7_tiny -bs 16 -size 640 --wp_epoch 3 --max_epoch 300 --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --multi_scale
```
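The `--multi_scale` flag enables multi-scale training. As a rough sketch of how such a scheme typically works (an assumed, common implementation, not necessarily this repo's exact range), a random input size around the base `-size` is sampled each iteration, rounded to a multiple of the network stride:

```python
import random

def sample_train_size(base_size=640, stride=32, scale_range=(0.5, 1.5)):
    # Pick a random training resolution around base_size, as a multiple
    # of the stride so feature-map shapes stay aligned.
    lo = int(base_size * scale_range[0] / stride)
    hi = int(base_size * scale_range[1] / stride)
    return random.randint(lo, hi) * stride
```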

Multi GPU

Taking YOLOv7-Tiny on COCO as an example:

```Shell
python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolov7_tiny -bs 128 -size 640 --wp_epoch 3 --max_epoch 300 --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/
```
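Note the batch size here: assuming `-bs` is the global batch size (the "8xb16" entries in the results table suggest 8 GPUs x 16 images each), each process launched by `torch.distributed.run` sees an equal share. A hypothetical helper, not part of this repo:

```python
def per_gpu_batch(total_bs=128, world_size=8):
    # Split the global batch evenly across distributed processes:
    # 128 images / 8 GPUs = 16 images per GPU.
    assert total_bs % world_size == 0, "batch size must divide evenly"
    return total_bs // world_size
```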

Test YOLOv7

Taking YOLOv7-Tiny on COCO-val as an example:

```Shell
python test.py --cuda -d coco --root path/to/coco -m yolov7_tiny --weight path/to/yolov7_tiny.pth -size 640 -vt 0.4 --show
```
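The `-vt 0.4` flag sets the confidence threshold for visualization. At test time, detections are typically filtered by this score and then deduplicated with non-maximum suppression (NMS); a minimal sketch of that standard post-processing step (not this repo's exact code):

```python
def iou(a, b):
    # Intersection-over-union of two boxes (x1, y1, x2, y2, ...).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(dets, score_thresh=0.4, iou_thresh=0.5):
    # dets: list of (x1, y1, x2, y2, score). Drop low-confidence boxes
    # (the -vt flag), then greedily suppress overlapping boxes.
    dets = sorted((d for d in dets if d[4] >= score_thresh),
                  key=lambda d: d[4], reverse=True)
    keep = []
    for d in dets:
        if all(iou(d, k) < iou_thresh for k in keep):
            keep.append(d)
    return keep
```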

Evaluate YOLOv7

Taking YOLOv7-Tiny on COCO-val as an example:

```Shell
python eval.py --cuda -d coco-val --root path/to/coco -m yolov7_tiny --weight path/to/yolov7_tiny.pth
```
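Evaluation reports the COCO metrics shown in the results table below. As a reminder of the standard COCO convention (not this repo's code): AP 0.5:0.95 averages AP over ten IoU thresholds, while AP 0.5 is the single-threshold variant.

```python
def coco_iou_thresholds():
    # COCO's primary metric AP@[0.5:0.95] averages AP over ten IoU
    # thresholds, from 0.50 to 0.95 in steps of 0.05.
    return [round(0.50 + 0.05 * i, 2) for i in range(10)]

def ap_50_95(ap_per_threshold):
    # Mean of the per-threshold AP values.
    return sum(ap_per_threshold) / len(ap_per_threshold)
```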

Demo

Detect with Image

```Shell
python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov7_tiny --weight path/to/weight -size 640 -vt 0.4 --show
```

Detect with Video

```Shell
python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov7_tiny --weight path/to/weight -size 640 -vt 0.4 --show --gif
```

Detect with Camera

```Shell
python demo.py --mode camera --cuda -m yolov7_tiny --weight path/to/weight -size 640 -vt 0.4 --show --gif
```

| Model | Backbone | Batch | Scale | AP (val) 0.5:0.95 | AP (val) 0.5 | FLOPs (G) | Params (M) | Weight |
|---|---|---|---|---|---|---|---|---|
| YOLOv7-Tiny | ELANNet-Tiny | 8xb16 | 640 | 39.5 | 58.5 | 22.6 | 7.9 | ckpt |
| YOLOv7 | ELANNet-Large | 8xb16 | 640 | 49.5 | 68.8 | 144.6 | 44.0 | ckpt |
| YOLOv7-X | ELANNet-Huge | | 640 | | | | | |