
update README

yjh0410 2 years ago
parent
commit
797afb21c1
2 changed files with 30 additions and 29 deletions
  1. 0 additions, 1 deletion
      models/detectors/yolox/README.md
  2. 30 additions, 28 deletions
      models/detectors/yolox2/README.md

+ 0 - 1
models/detectors/yolox/README.md

@@ -11,7 +11,6 @@
 - For data augmentation, we use large scale jitter (LSJ), Mosaic augmentation and Mixup augmentation.
 - For the optimizer, we use SGD with weight decay 0.0005 and a base per-image lr of 0.01 / 64.
 - For the learning rate scheduler, we use a cosine decay scheduler.
-- The reason for the low performance of my reproduced **YOLOX-L** has not been found out yet.
 
 ## Train YOLOX
 ### Single GPU

+ 30 - 28
models/detectors/yolox2/README.md

@@ -1,53 +1,55 @@
-# YOLOv4:
-
-|    Model    |     Backbone    | Batch | Scale | AP<sup>val<br>0.5:0.95 | AP<sup>val<br>0.5 | FLOPs<br><sup>(G) | Params<br><sup>(M) | Weight |
-|-------------|-----------------|-------|-------|------------------------|-------------------|-------------------|--------------------|--------|
-| YOLOv4-Tiny | CSPDarkNet-Tiny | 1xb16 |  640  |        31.0            |       49.1        |   8.1             |   2.9              | [ckpt](https://github.com/yjh0410/RT-ODLab/releases/download/yolo_tutorial_ckpt/yolov4_t_coco.pth) |
-| YOLOv4      | CSPDarkNet-53   | 1xb16 |  640  |        46.6            |       65.8        |   162.7           |   61.5             | [ckpt](https://github.com/yjh0410/RT-ODLab/releases/download/yolo_tutorial_ckpt/yolov4_coco.pth) |
-
-- For training, we train YOLOv4 and YOLOv4-Tiny with 250 epochs on COCO.
-- For data augmentation, we use the large scale jitter (LSJ), Mosaic augmentation and Mixup augmentation, following the setting of [YOLOv5](https://github.com/ultralytics/yolov5).
-- For optimizer, we use SGD with momentum 0.937, weight decay 0.0005 and base lr 0.01.
-- For learning rate scheduler, we use linear decay scheduler.
-- For YOLOv4's structure, we use decoupled head, following the setting of [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX).
-
-## Train YOLOv4
+# YOLOX2:
+
+|   Model  | Batch | Scale | AP<sup>val<br>0.5:0.95 | AP<sup>val<br>0.5 | FLOPs<br><sup>(G) | Params<br><sup>(M) | Weight |
+|----------|-------|-------|------------------------|-------------------|-------------------|--------------------|--------|
+| YOLOX2-N | 8xb16 |  640  |                        |                   |                   |                    |  |
+| YOLOX2-S | 8xb16 |  640  |                        |                   |                   |                    |  |
+| YOLOX2-M | 8xb16 |  640  |                        |                   |                   |                    |  |
+| YOLOX2-L | 8xb16 |  640  |                        |                   |                   |                    |  |
+| YOLOX2-X | 8xb16 |  640  |                        |                   |                   |                    |  |
+
+- For training, we train the YOLOX2 series for 300 epochs on COCO.
+- For data augmentation, we use large scale jitter (LSJ), Mosaic augmentation and Mixup augmentation.
+- For the optimizer, we use AdamW with weight decay 0.05 and a base per-image lr of 0.001 / 64.
+- For the learning rate scheduler, we use a linear decay scheduler.
+
+## Train YOLOX2
 ### Single GPU
-Taking training YOLOv4 on COCO as the example,
+For example, to train YOLOX2-S on COCO:
 ```Shell
-python train.py --cuda -d coco --root path/to/coco -m yolov4 -bs 16 -size 640 --wp_epoch 3 --max_epoch 300 --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --multi_scale 
+python train.py --cuda -d coco --root path/to/coco -m yolox2_s -bs 16 -size 640 --wp_epoch 3 --max_epoch 300 --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --multi_scale 
 ```
 
 ### Multi GPU
-Taking training YOLOv4 on COCO as the example,
+For example, to train YOLOX2-S on COCO with 8 GPUs:
 ```Shell
-python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolov4 -bs 128 -size 640 --wp_epoch 3 --max_epoch 300  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolox2_s -bs 128 -size 640 --wp_epoch 3 --max_epoch 300  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
 ```
 
-## Test YOLOv4
-Taking testing YOLOv4 on COCO-val as the example,
+## Test YOLOX2
+For example, to test YOLOX2-S on COCO-val:
 ```Shell
-python test.py --cuda -d coco --root path/to/coco -m yolov4 --weight path/to/yolov4.pth -size 640 -vt 0.4 --show 
+python test.py --cuda -d coco --root path/to/coco -m yolox2_s --weight path/to/yolox2_s.pth -size 640 -vt 0.4 --show 
 ```
 
-## Evaluate YOLOv4
-Taking evaluating YOLOv4 on COCO-val as the example,
+## Evaluate YOLOX2
+For example, to evaluate YOLOX2-S on COCO-val:
 ```Shell
-python eval.py --cuda -d coco-val --root path/to/coco -m yolov4 --weight path/to/yolov4.pth 
+python eval.py --cuda -d coco-val --root path/to/coco -m yolox2_s --weight path/to/yolox2_s.pth 
 ```
 
 ## Demo
 ### Detect with Image
 ```Shell
-python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov4 --weight path/to/weight -size 640 -vt 0.4 --show
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolox2_s --weight path/to/weight -size 640 -vt 0.4 --show
 ```
 
 ### Detect with Video
 ```Shell
-python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov4 --weight path/to/weight -size 640 -vt 0.4 --show --gif
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolox2_s --weight path/to/weight -size 640 -vt 0.4 --show --gif
 ```
 
 ### Detect with Camera
 ```Shell
-python demo.py --mode camera --cuda -m yolov4 --weight path/to/weight -size 640 -vt 0.4 --show --gif
-```
+python demo.py --mode camera --cuda -m yolox2_s --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```
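A note on the learning-rate convention used in both READMEs above: the base lr is given per image (0.01 / 64 for YOLOX with SGD, 0.001 / 64 for YOLOX2 with AdamW), which suggests it is scaled by the total batch size at training time. The sketch below shows one way the YOLOX2 settings (AdamW, weight decay 0.05, linear decay) could be wired up in PyTorch; the function name, the scaling rule, and the omission of warmup are assumptions for illustration, not the repository's actual training code.

```Python
# Minimal sketch (not the repository's code) of the optimizer/scheduler setup
# described in the yolox2 README: AdamW with weight decay 0.05, a base
# per-image lr of 0.001 / 64 scaled by the total batch size, and a linear
# decay of the lr over training.
import torch

def build_optimizer_and_scheduler(model, batch_size, max_epochs,
                                  base_lr_per_img=0.001 / 64, weight_decay=0.05):
    # Scale the per-image base lr by the total batch size
    # (e.g. 8 GPUs x 16 images = 128 -> lr = 0.002).
    lr = base_lr_per_img * batch_size
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    # Linear decay: the lr factor goes from 1.0 at epoch 0 towards 0 at the last epoch.
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda epoch: 1.0 - epoch / max_epochs)
    return optimizer, scheduler
```

With the 8xb16 configuration from the table this would give a starting lr of 0.002; the `--wp_epoch 3` warmup in the training commands would then ramp the lr up to that value before the decay takes over (warmup is omitted from the sketch).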