Commit 2381fdaa59 by yjh0410, 1 year ago
5 changed files with 125 additions and 60 deletions
  1. models/yolov1/README.md (+16 −15)
  2. models/yolov2/README.md (+16 −15)
  3. models/yolov3/README.md (+16 −15)
  4. models/yolov5/README.md (+16 −15)
  5. models/yolov5_af/README.md (+61 −0)

models/yolov1/README.md (+16 −15)

@@ -12,49 +12,50 @@
 |--------|------------|-------|-------|------------------------|-------------------|-------------------|--------------------|--------|
 | YOLOv1 | ResNet-18  | 1xb16 |  640  |                    |               |   37.8            |   21.3             | [ckpt](https://github.com/yjh0410/RT-ODLab/releases/download/yolo_tutorial_ckpt/yolov1_coco.pth) |
 
-- For training, we train redesigned YOLOv1 with 150 epochs on COCO. We also gradient accumulate.
-- For data augmentation, we only use the large scale jitter (LSJ), no Mosaic or Mixup augmentation.
-- For optimizer, we use SGD with momentum 0.937, weight decay 0.0005 and base lr 0.01.
-- For learning rate scheduler, we use linear decay scheduler.
+- For training, we train the redesigned YOLOv1 for 150 epochs on COCO.
+- For data augmentation, we use SSD-style augmentation, including RandomCrop, RandomDistort, RandomExpand, and RandomHFlip.
+- For the optimizer, we use AdamW with a weight decay of 0.05 and a per-image base learning rate of 0.001 / 64.
+- For the learning rate scheduler, we use cosine decay.
+- For the batch size, we set it to 16 and use gradient accumulation to approximate a batch size of 256.
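The gradient-accumulation recipe in the notes above (mini-batches of 16 approximating an effective batch of 256) can be sketched as follows. This is a minimal stand-in with dummy per-sample gradients and hypothetical names, not the repository's training loop:

```python
# Minimal sketch of gradient accumulation: sum scaled mini-batch gradients
# so one optimizer update sees the mean gradient over an effective batch.
batch_size = 16
target_batch_size = 256
accumulate = max(target_batch_size // batch_size, 1)  # 16 backward passes per step

grad = 0.0      # accumulated gradient (a single number here for clarity)
updates = 0     # how many optimizer steps actually ran
per_sample_grads = [1.0] * target_batch_size  # stand-in for real gradients

for step in range(accumulate):
    chunk = per_sample_grads[step * batch_size:(step + 1) * batch_size]
    # mini-batch mean, scaled by 1/accumulate so the running sum equals
    # the mean over all 256 samples
    grad += sum(chunk) / batch_size / accumulate
    if (step + 1) % accumulate == 0:
        updates += 1  # optimizer.step(); optimizer.zero_grad() would go here
```

Since each backward pass is pre-scaled by `1/accumulate`, the single update above behaves like one step on a batch of 256.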
 
 
 ## Train YOLOv1
 ### Single GPU
-Taking training YOLOv1 on COCO as the example,
+Taking training YOLOv1-R18 on COCO as an example,
 ```Shell
-python train.py --cuda -d coco --root path/to/coco -m yolov1 -bs 16 -size 640 --wp_epoch 3 --max_epoch 150 --eval_epoch 10 --no_aug_epoch 10 --ema --fp16 --multi_scale 
+python train.py --cuda -d coco --root path/to/coco -m yolov1_r18 -bs 16 --fp16 
 ```
 
 ### Multi GPU
-Taking training YOLOv1 on COCO as the example,
+Taking training YOLOv1-R18 on COCO as an example,
 ```Shell
-python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolov1 -bs 128 -size 640 --wp_epoch 3 --max_epoch 150  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda --distributed -d coco --root path/to/coco -m yolov1_r18 -bs 16 --fp16 
 ```
 
 ## Test YOLOv1
-Taking testing YOLOv1 on COCO-val as the example,
+Taking testing YOLOv1-R18 on COCO-val as an example,
 ```Shell
-python test.py --cuda -d coco --root path/to/coco -m yolov1 --weight path/to/yolov1.pth -size 640 -vt 0.3 --show 
+python test.py --cuda -d coco --root path/to/coco -m yolov1_r18 --weight path/to/yolov1.pth --show 
 ```
 
 ## Evaluate YOLOv1
-Taking evaluating YOLOv1 on COCO-val as the example,
+Taking evaluating YOLOv1-R18 on COCO-val as an example,
 ```Shell
-python eval.py --cuda -d coco-val --root path/to/coco -m yolov1 --weight path/to/yolov1.pth 
+python eval.py --cuda -d coco --root path/to/coco -m yolov1_r18 --weight path/to/yolov1.pth 
 ```
 
 ## Demo
 ### Detect with Image
 ```Shell
-python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov1 --weight path/to/weight -size 640 -vt 0.3 --show
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov1_r18 --weight path/to/weight --show
 ```
 
 ### Detect with Video
 ```Shell
-python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov1 --weight path/to/weight -size 640 -vt 0.3 --show --gif
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov1_r18 --weight path/to/weight --show --gif
 ```
 
 ### Detect with Camera
 ```Shell
-python demo.py --mode camera --cuda -m yolov1 --weight path/to/weight -size 640 -vt 0.3 --show --gif
+python demo.py --mode camera --cuda -m yolov1_r18 --weight path/to/weight --show --gif
 ```

models/yolov2/README.md (+16 −15)

@@ -12,49 +12,50 @@
 |--------|------------|-------|-------|------------------------|-------------------|-------------------|--------------------|--------|
 | YOLOv2 | ResNet-18  | 1xb16 |  640  |                    |               |   38.0            |   21.5             | [ckpt](https://github.com/yjh0410/RT-ODLab/releases/download/yolo_tutorial_ckpt/yolov2_coco.pth) |
 
-- For training, we train redesigned YOLOv2 with 150 epochs on COCO. We also gradient accumulate.
-- For data augmentation, we only use the large scale jitter (LSJ), no Mosaic or Mixup augmentation.
-- For optimizer, we use SGD with momentum 0.937, weight decay 0.0005 and base lr 0.01.
-- For learning rate scheduler, we use linear decay scheduler.
+- For training, we train the redesigned YOLOv2 for 150 epochs on COCO.
+- For data augmentation, we use SSD-style augmentation, including RandomCrop, RandomDistort, RandomExpand, and RandomHFlip.
+- For the optimizer, we use AdamW with a weight decay of 0.05 and a per-image base learning rate of 0.001 / 64.
+- For the learning rate scheduler, we use cosine decay.
+- For the batch size, we set it to 16 and use gradient accumulation to approximate a batch size of 256.
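The "per image base lr of 0.001 / 64" above implies a linear learning-rate scaling rule. A minimal sketch of that rule as we read it (an illustrative interpretation, not code from this repo):

```python
# Linear lr scaling (assumed interpretation of "per image base lr of
# 0.001 / 64"): the actual lr grows with the effective batch size.
PER_IMAGE_LR = 0.001 / 64

def scaled_lr(effective_batch_size):
    # lr per image times the number of images seen per optimizer step
    return PER_IMAGE_LR * effective_batch_size

lr_at_64 = scaled_lr(64)    # recovers the 0.001 base
lr_at_256 = scaled_lr(256)  # bs 16 with accumulation to 256
```

With accumulation to an effective batch of 256, this rule yields a learning rate four times the 0.001 base.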
 
 
 ## Train YOLOv2
 ### Single GPU
-Taking training YOLOv2 on COCO as the example,
+Taking training YOLOv2-R18 on COCO as an example,
 ```Shell
-python train.py --cuda -d coco --root path/to/coco -m yolov2 -bs 16 -size 640 --wp_epoch 3 --max_epoch 150 --eval_epoch 10 --no_aug_epoch 10 --ema --fp16 --multi_scale 
+python train.py --cuda -d coco --root path/to/coco -m yolov2_r18 -bs 16 --fp16 
 ```
 
 ### Multi GPU
-Taking training YOLOv2 on COCO as the example,
+Taking training YOLOv2-R18 on COCO as an example,
 ```Shell
-python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolov2 -bs 128 -size 640 --wp_epoch 3 --max_epoch 150  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda --distributed -d coco --root path/to/coco -m yolov2_r18 -bs 16 --fp16 
 ```
 
 ## Test YOLOv2
-Taking testing YOLOv2 on COCO-val as the example,
+Taking testing YOLOv2-R18 on COCO-val as an example,
 ```Shell
-python test.py --cuda -d coco --root path/to/coco -m yolov2 --weight path/to/yolov2.pth -size 640 -vt 0.3 --show 
+python test.py --cuda -d coco --root path/to/coco -m yolov2_r18 --weight path/to/yolov2.pth --show 
 ```
 
 ## Evaluate YOLOv2
-Taking evaluating YOLOv2 on COCO-val as the example,
+Taking evaluating YOLOv2-R18 on COCO-val as an example,
 ```Shell
-python eval.py --cuda -d coco-val --root path/to/coco -m yolov2 --weight path/to/yolov2.pth 
+python eval.py --cuda -d coco --root path/to/coco -m yolov2_r18 --weight path/to/yolov2.pth 
 ```
 
 ## Demo
 ### Detect with Image
 ```Shell
-python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov2 --weight path/to/weight -size 640 -vt 0.3 --show
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov2_r18 --weight path/to/weight --show
 ```
 
 ### Detect with Video
 ```Shell
-python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov2 --weight path/to/weight -size 640 -vt 0.3 --show --gif
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov2_r18 --weight path/to/weight --show --gif
 ```
 
 ### Detect with Camera
 ```Shell
-python demo.py --mode camera --cuda -m yolov2 --weight path/to/weight -size 640 -vt 0.3 --show --gif
+python demo.py --mode camera --cuda -m yolov2_r18 --weight path/to/weight --show --gif
 ```

models/yolov3/README.md (+16 −15)

@@ -12,49 +12,50 @@
 |----------|-------|-------|------------------------|-------------------|-------------------|--------------------|--------|--------|
 | YOLOv3-S | 1xb16 |  640  |                    |               |   25.2            |   7.3             |  |  |
 
-- For training, we train redesigned YOLOv3 with 150 epochs on COCO. We also gradient accumulate.
-- For data augmentation, we only use the large scale jitter (LSJ), no Mosaic or Mixup augmentation.
-- For optimizer, we use SGD with momentum 0.937, weight decay 0.0005 and base lr 0.01.
-- For learning rate scheduler, we use linear decay scheduler.
+- For training, we train the redesigned YOLOv3 for 300 epochs on COCO.
+- For data augmentation, we use RandomAffine, RandomHSV, Mosaic, and Mixup.
+- For the optimizer, we use AdamW with a weight decay of 0.05 and a per-image base learning rate of 0.001 / 64.
+- For the learning rate scheduler, we use cosine decay.
+- For the batch size, we set it to 16 and use gradient accumulation to approximate a batch size of 256.
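The cosine decay schedule mentioned above can be sketched as follows; the base lr, the final-lr ratio, and the absence of warmup are illustrative assumptions, not this repository's exact settings:

```python
import math

def cosine_lr(epoch, max_epochs=300, base_lr=0.004, final_lr_ratio=0.05):
    """Cosine decay from base_lr down to base_lr * final_lr_ratio."""
    min_lr = base_lr * final_lr_ratio
    # cos_factor goes smoothly from 1 (epoch 0) to 0 (epoch max_epochs)
    cos_factor = 0.5 * (1.0 + math.cos(math.pi * epoch / max_epochs))
    return min_lr + (base_lr - min_lr) * cos_factor

start = cosine_lr(0)    # the full base_lr
end = cosine_lr(300)    # base_lr * final_lr_ratio
```

The schedule decays slowly at first, fastest mid-training, and flattens out near the final learning rate.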
 
 
 ## Train YOLOv3
 ### Single GPU
-Taking training YOLOv3 on COCO as the example,
+Taking training YOLOv3-S on COCO as an example,
 ```Shell
-python train.py --cuda -d coco --root path/to/coco -m yolov3 -bs 16 -size 640 --wp_epoch 3 --max_epoch 150 --eval_epoch 10 --no_aug_epoch 10 --ema --fp16 --multi_scale 
+python train.py --cuda -d coco --root path/to/coco -m yolov3_s -bs 16 --fp16 
 ```
 
 ### Multi GPU
-Taking training YOLOv3 on COCO as the example,
+Taking training YOLOv3-S on COCO as an example,
 ```Shell
-python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolov3 -bs 128 -size 640 --wp_epoch 3 --max_epoch 150  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda --distributed -d coco --root path/to/coco -m yolov3_s -bs 16 --fp16 
 ```
 
 ## Test YOLOv3
-Taking testing YOLOv3 on COCO-val as the example,
+Taking testing YOLOv3-S on COCO-val as an example,
 ```Shell
-python test.py --cuda -d coco --root path/to/coco -m yolov3 --weight path/to/yolov3.pth -size 640 -vt 0.3 --show 
+python test.py --cuda -d coco --root path/to/coco -m yolov3_s --weight path/to/yolov3.pth --show 
 ```
 
 ## Evaluate YOLOv3
-Taking evaluating YOLOv3 on COCO-val as the example,
+Taking evaluating YOLOv3-S on COCO-val as an example,
 ```Shell
-python eval.py --cuda -d coco-val --root path/to/coco -m yolov3 --weight path/to/yolov3.pth 
+python eval.py --cuda -d coco --root path/to/coco -m yolov3_s --weight path/to/yolov3.pth 
 ```
 
 ## Demo
 ### Detect with Image
 ```Shell
-python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov3 --weight path/to/weight -size 640 -vt 0.3 --show
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov3_s --weight path/to/weight --show
 ```
 
 ### Detect with Video
 ```Shell
-python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov3 --weight path/to/weight -size 640 -vt 0.3 --show --gif
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov3_s --weight path/to/weight --show --gif
 ```
 
 ### Detect with Camera
 ```Shell
-python demo.py --mode camera --cuda -m yolov3 --weight path/to/weight -size 640 -vt 0.3 --show --gif
+python demo.py --mode camera --cuda -m yolov3_s --weight path/to/weight --show --gif
 ```

models/yolov5/README.md (+16 −15)

@@ -12,49 +12,50 @@
 |----------|-------|-------|------------------------|-------------------|-------------------|--------------------|--------|--------|
 | YOLOv5-S | 1xb16 |  640  |                    |               |   27.3            |   9.0             |  |  |
 
-- For training, we train redesigned YOLOv5 with 150 epochs on COCO. We also gradient accumulate.
-- For data augmentation, we only use the large scale jitter (LSJ), no Mosaic or Mixup augmentation.
-- For optimizer, we use SGD with momentum 0.937, weight decay 0.0005 and base lr 0.01.
-- For learning rate scheduler, we use linear decay scheduler.
+- For training, we train the redesigned YOLOv5 for 300 epochs on COCO.
+- For data augmentation, we use RandomAffine, RandomHSV, Mosaic, and Mixup.
+- For the optimizer, we use AdamW with a weight decay of 0.05 and a per-image base learning rate of 0.001 / 64.
+- For the learning rate scheduler, we use cosine decay.
+- For the batch size, we set it to 16 and use gradient accumulation to approximate a batch size of 256.
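The Mixup augmentation listed above can be sketched as follows. This toy version blends flat pixel lists; the real transform blends image tensors and also merges the two images' label sets (names and shapes here are hypothetical):

```python
import random

# Minimal sketch of Mixup: blend two samples with a Beta-distributed ratio.
def mixup(img_a, img_b, alpha=32.0):
    lam = random.betavariate(alpha, alpha)  # mixing ratio in (0, 1)
    mixed = [lam * a + (1.0 - lam) * b for a, b in zip(img_a, img_b)]
    return mixed, lam

random.seed(0)
# blending an all-ones "image" with an all-zeros one leaves every pixel == lam
mixed, lam = mixup([1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
```

A large `alpha` keeps the ratio near 0.5, so both source images stay clearly visible in the blend.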
 
 
 ## Train YOLOv5
 ### Single GPU
-Taking training YOLOv5 on COCO as the example,
+Taking training YOLOv5-S on COCO as an example,
 ```Shell
-python train.py --cuda -d coco --root path/to/coco -m yolov5 -bs 16 -size 640 --wp_epoch 3 --max_epoch 150 --eval_epoch 10 --no_aug_epoch 10 --ema --fp16 --multi_scale 
+python train.py --cuda -d coco --root path/to/coco -m yolov5_s -bs 16 --fp16 
 ```
 
 ### Multi GPU
-Taking training YOLOv5 on COCO as the example,
+Taking training YOLOv5-S on COCO as an example,
 ```Shell
-python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolov5 -bs 128 -size 640 --wp_epoch 3 --max_epoch 150  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda --distributed -d coco --root path/to/coco -m yolov5_s -bs 16 --fp16 
 ```
 
 ## Test YOLOv5
-Taking testing YOLOv5 on COCO-val as the example,
+Taking testing YOLOv5-S on COCO-val as an example,
 ```Shell
-python test.py --cuda -d coco --root path/to/coco -m yolov5 --weight path/to/yolov5.pth -size 640 -vt 0.3 --show 
+python test.py --cuda -d coco --root path/to/coco -m yolov5_s --weight path/to/yolov5.pth --show 
 ```
 
 ## Evaluate YOLOv5
-Taking evaluating YOLOv5 on COCO-val as the example,
+Taking evaluating YOLOv5-S on COCO-val as an example,
 ```Shell
-python eval.py --cuda -d coco-val --root path/to/coco -m yolov5 --weight path/to/yolov5.pth 
+python eval.py --cuda -d coco --root path/to/coco -m yolov5_s --weight path/to/yolov5.pth 
 ```
 
 ## Demo
 ### Detect with Image
 ```Shell
-python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov5 --weight path/to/weight -size 640 -vt 0.3 --show
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov5_s --weight path/to/weight --show
 ```
 
 ### Detect with Video
 ```Shell
-python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov5 --weight path/to/weight -size 640 -vt 0.3 --show --gif
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov5_s --weight path/to/weight --show --gif
 ```
 
 ### Detect with Camera
 ```Shell
-python demo.py --mode camera --cuda -m yolov5 --weight path/to/weight -size 640 -vt 0.3 --show --gif
+python demo.py --mode camera --cuda -m yolov5_s --weight path/to/weight --show --gif
 ```

models/yolov5_af/README.md (+61 −0)

@@ -0,0 +1,61 @@
+# Anchor-free YOLOv5
+
+- VOC
+
+|     Model   | Batch | Scale | AP<sup>val<br>0.5 | Weight |  Logs  |
+|-------------|-------|-------|-------------------|--------|--------|
+| YOLOv5-AF-S | 1xb16 |  640  |       82.4        | [ckpt](https://github.com/yjh0410/YOLO-Tutorial-v5/releases/download/yolo_tutorial_ckpt/yolov5_af_s_voc.pth) | [log](https://github.com/yjh0410/YOLO-Tutorial-v5/releases/download/yolo_tutorial_ckpt/YOLOv5-AF-S-VOC.txt) |
+
+- COCO
+
+|    Model    | Batch | Scale | AP<sup>val<br>0.5:0.95 | AP<sup>val<br>0.5 | FLOPs<br><sup>(G) | Params<br><sup>(M) | Weight |  Logs  |
+|-------------|-------|-------|------------------------|-------------------|-------------------|--------------------|--------|--------|
+| YOLOv5-AF-S | 1xb16 |  640  |                    |               |   26.9            |   8.9             |  |  |
+
+- For training, we train the redesigned YOLOv5-AF for 300 epochs on COCO.
+- For data augmentation, we use RandomAffine, RandomHSV, Mosaic, and YOLOX-style Mixup.
+- For the optimizer, we use AdamW with a weight decay of 0.05 and a per-image base learning rate of 0.001 / 64.
+- For the learning rate scheduler, we use cosine decay.
+- For the batch size, we set it to 16 and use gradient accumulation to approximate a batch size of 256.
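The Mosaic augmentation listed above pastes four training images into one 2x2 canvas. A toy sketch with tiny constant "images" (the real transform also jitters the mosaic center and remaps all four images' bounding boxes; names here are hypothetical):

```python
# Minimal sketch of Mosaic: paste four images into a 2x2 canvas.
def mosaic(imgs, tile=2):
    canvas = [[0] * (2 * tile) for _ in range(2 * tile)]
    # top-left, top-right, bottom-left, bottom-right paste offsets
    offsets = [(0, 0), (0, tile), (tile, 0), (tile, tile)]
    for img, (oy, ox) in zip(imgs, offsets):
        for y in range(tile):
            for x in range(tile):
                canvas[oy + y][ox + x] = img[y][x]
    return canvas

# four constant 2x2 "images" with pixel values 0..3
imgs = [[[i] * 2 for _ in range(2)] for i in range(4)]
canvas = mosaic(imgs)
```

Each quadrant of the canvas comes from a different source image, which is what lets one training sample expose the model to four images' objects at once.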
+
+
+## Train YOLOv5-AF
+### Single GPU
+Taking training YOLOv5-AF-S on COCO as an example,
+```Shell
+python train.py --cuda -d coco --root path/to/coco -m yolov5_af_s -bs 16 --fp16 
+```
+
+### Multi GPU
+Taking training YOLOv5-AF-S on COCO as an example,
+```Shell
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda --distributed -d coco --root path/to/coco -m yolov5_af_s -bs 16 --fp16 
+```
+
+## Test YOLOv5-AF
+Taking testing YOLOv5-AF-S on COCO-val as an example,
+```Shell
+python test.py --cuda -d coco --root path/to/coco -m yolov5_af_s --weight path/to/yolov5.pth --show 
+```
+
+## Evaluate YOLOv5-AF
+Taking evaluating YOLOv5-AF-S on COCO-val as an example,
+```Shell
+python eval.py --cuda -d coco --root path/to/coco -m yolov5_af_s --weight path/to/yolov5.pth 
+```
+
+## Demo
+### Detect with Image
+```Shell
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov5_af_s --weight path/to/weight --show
+```
+
+### Detect with Video
+```Shell
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov5_af_s --weight path/to/weight --show --gif
+```
+
+### Detect with Camera
+```Shell
+python demo.py --mode camera --cuda -m yolov5_af_s --weight path/to/weight --show --gif
+```