# YOLOv7

- For training, we train YOLOv7 and YOLOv7-Tiny for 300 epochs on 8 GPUs.
- For data augmentation, we use YOLOX-style augmentation, including large-scale jitter (LSJ), Mosaic, and Mixup (a minimal Mosaic sketch follows this list).
- For the optimizer, we use AdamW with a weight decay of 0.05 and a per-image learning rate of 0.001 / 64, scaled linearly by the total batch size (see the optimizer sketch below).
- For the learning rate scheduler, we use cosine decay.
- For YOLOv7's structure, we replace the coupled head with a YOLOX-style decoupled head (see the head sketch below).
- I think YOLOv7 uses too many training tricks, such as anchor boxes, the auxiliary head, RepConv, Mosaic-9, and so on. This makes the YOLO design overly complicated and runs against the development philosophy of the YOLO series; otherwise, why not simply use the DETR family, relative to which YOLO is little more than an acceleration-oriented optimization? I therefore stayed faithful to my own technical aesthetics and implemented a cleaner, simpler YOLOv7. Without the benefit of all those tricks, however, I did not reproduce the full performance, which is a pity.
- I have no more GPUs to train YOLOv7-X, so its results are not yet available.
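The Mosaic part of the augmentation pipeline can be sketched as follows. This is a minimal illustration, not this repo's actual implementation: the `mosaic4` helper and its variables are hypothetical, box/label remapping is omitted, and the 2x-size canvas would later be jittered and resized back to the training scale.

```python
# A minimal Mosaic-4 sketch (hypothetical helper, not this repo's code).
import random
import numpy as np

def mosaic4(images, img_size=640):
    """Paste 4 images into the quadrants around a random center point.

    Box/label remapping is omitted for brevity; a real Mosaic must also
    shift and clip the ground-truth boxes of each source image.
    """
    assert len(images) == 4
    s = img_size
    canvas = np.full((2 * s, 2 * s, 3), 114, dtype=np.uint8)  # gray canvas
    xc = random.randint(s // 2, 3 * s // 2)  # random mosaic center
    yc = random.randint(s // 2, 3 * s // 2)
    for i, img in enumerate(images):
        h, w = img.shape[:2]
        if i == 0:    # top-left quadrant
            x1, y1, x2, y2 = max(xc - w, 0), max(yc - h, 0), xc, yc
        elif i == 1:  # top-right quadrant
            x1, y1, x2, y2 = xc, max(yc - h, 0), min(xc + w, 2 * s), yc
        elif i == 2:  # bottom-left quadrant
            x1, y1, x2, y2 = max(xc - w, 0), yc, xc, min(yc + h, 2 * s)
        else:         # bottom-right quadrant
            x1, y1, x2, y2 = xc, yc, min(xc + w, 2 * s), min(yc + h, 2 * s)
        # Crop each source image to fit its quadrant on the canvas.
        canvas[y1:y2, x1:x2] = img[:y2 - y1, :x2 - x1]
    return canvas

# Usage: four random "images" combined into one 1280x1280 mosaic.
imgs = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(4)]
big = mosaic4(imgs)
```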
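In PyTorch terms, the optimizer and scheduler settings above amount to the following. This is a sketch under the stated hyperparameters (AdamW, weight decay 0.05, per-image learning rate 0.001 / 64, cosine decay over 300 epochs); `model` stands in for the actual detector and the training loop body is elided.

```python
import torch

batch_size = 8 * 16          # 8 GPUs x 16 images per GPU ("8xb16")
base_lr = 0.001 / 64         # per-image learning rate
lr = base_lr * batch_size    # linear scaling: 0.002 for the 8xb16 setting

model = torch.nn.Conv2d(3, 16, 3)  # placeholder for the actual detector
optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=0.05)

# Cosine decay of the learning rate over the full 300-epoch schedule.
max_epochs = 300
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=max_epochs)

for epoch in range(max_epochs):
    ...  # forward / backward / optimizer.step() for one epoch
    scheduler.step()
```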
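The decoupled head separates classification and box regression into independent conv stacks, in the YOLOX style, instead of predicting everything from one shared branch. The sketch below is illustrative rather than a copy of this repo's `yolov7_head.py`: the class name, channel width, and class count are assumptions.

```python
# A minimal sketch of a YOLOX-style decoupled head (illustrative only).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.SiLU(inplace=True),
    )

class DecoupledHead(nn.Module):
    """Separate conv stacks for classification and box regression."""
    def __init__(self, in_dim=256, num_classes=80):
        super().__init__()
        self.cls_feats = nn.Sequential(conv_block(in_dim, in_dim),
                                       conv_block(in_dim, in_dim))
        self.reg_feats = nn.Sequential(conv_block(in_dim, in_dim),
                                       conv_block(in_dim, in_dim))
        self.cls_pred = nn.Conv2d(in_dim, num_classes, 1)  # class scores
        self.reg_pred = nn.Conv2d(in_dim, 4, 1)            # box offsets
        self.obj_pred = nn.Conv2d(in_dim, 1, 1)            # objectness

    def forward(self, x):
        cls_feat = self.cls_feats(x)
        reg_feat = self.reg_feats(x)
        return self.cls_pred(cls_feat), self.reg_pred(reg_feat), self.obj_pred(reg_feat)

# Example: one 80x80 feature map from the PaFPN at stride 8.
head = DecoupledHead()
cls_out, reg_out, obj_out = head(torch.randn(1, 256, 80, 80))
```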
| Model | Backbone | Batch | Scale | AP<sup>val</sup><br>0.5:0.95 | AP<sup>val</sup><br>0.5 | FLOPs (G) | Params (M) | Weight |
|-------|----------|-------|-------|------------------------------|-------------------------|-----------|------------|--------|
| YOLOv7-Tiny | ELANNet-Tiny | 8xb16 | 640 | 39.5 | 58.5 | 22.6 | 7.9 | ckpt |
| YOLOv7 | ELANNet-Large | 8xb16 | 640 | 49.5 | 68.8 | 144.6 | 44.0 | ckpt |
| YOLOv7-X | ELANNet-Huge | | 640 | | | | | |