update all README files

yjh0410 · 2 years ago · commit ec10144a0e

+ 41 - 0
models/detectors/rtcdet/README.md

@@ -21,3 +21,44 @@
 - For optimizer, we use AdamW with weight decay 0.05 and a base per-image lr of 0.001 / 64.
 - For learning rate scheduler, we use a linear decay scheduler.
 - Due to my limited computing resources, I cannot train `RTCDet-X` with the setting of `batch size=128`.
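The per-image base lr in the notes above implies a linear scaling rule, i.e. the effective lr grows with the total batch size. This is my reading of the note, not a value taken from `train.py`; a quick sanity check:

```Shell
# base per-image lr is 0.001 / 64; under linear scaling, a total
# batch size of 128 would give an effective lr of 0.001 / 64 * 128
awk 'BEGIN { print 0.001 / 64 * 128 }'
```

Under that reading, the 8-GPU run below with `-bs 128` would train at roughly eight times the effective lr of the 16-image single-GPU run.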
+
+## Train RTCDet
+### Single GPU
+Taking training RTCDet-S on COCO as an example:
+```Shell
+python train.py --cuda -d coco --root path/to/coco -m rtcdet_s -bs 16 -size 640 --wp_epoch 3 --max_epoch 300 --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --multi_scale 
+```
+
+### Multi GPU
+Taking training RTCDet-S on COCO as an example:
+```Shell
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m rtcdet_s -bs 128 -size 640 --wp_epoch 3 --max_epoch 300  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+```
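A note on the launch above: I read `-bs 128` as the total batch size shared across the 8 processes started by `torch.distributed.run`, so each GPU sees 16 images per step. This is an assumption based on the single-GPU example, so check `train.py` if you hit out-of-memory errors. The assumed per-GPU share is simply:

```Shell
# assumed per-GPU batch: total batch size / number of processes
echo $((128 / 8))
```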
+
+## Test RTCDet
+Taking testing RTCDet-S on COCO-val as an example:
+```Shell
+python test.py --cuda -d coco --root path/to/coco -m rtcdet_s --weight path/to/rtcdet_s.pth -size 640 -vt 0.4 --show 
+```
+
+## Evaluate RTCDet
+Taking evaluating RTCDet-S on COCO-val as an example:
+```Shell
+python eval.py --cuda -d coco-val --root path/to/coco -m rtcdet_s --weight path/to/rtcdet_s.pth 
+```
+
+## Demo
+### Detect with Image
+```Shell
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m rtcdet_s --weight path/to/weight -size 640 -vt 0.4 --show
+```
+
+### Detect with Video
+```Shell
+python demo.py --mode video --path_to_vid path/to/video --cuda -m rtcdet_s --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```
+
+### Detect with Camera
+```Shell
+python demo.py --mode camera --cuda -m rtcdet_s --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```

+ 42 - 0
models/detectors/yolov1/README.md

@@ -8,3 +8,45 @@
 - For data augmentation, we only use large scale jitter (LSJ), with no Mosaic or Mixup augmentation.
 - For optimizer, we use SGD with momentum 0.937, weight decay 0.0005 and a base lr of 0.01.
 - For learning rate scheduler, we use a linear decay scheduler.
+
+
+## Train YOLOv1
+### Single GPU
+Taking training YOLOv1 on COCO as an example:
+```Shell
+python train.py --cuda -d coco --root path/to/coco -m yolov1 -bs 16 -size 640 --wp_epoch 3 --max_epoch 150 --eval_epoch 10 --no_aug_epoch 10 --ema --fp16 --multi_scale 
+```
+
+### Multi GPU
+Taking training YOLOv1 on COCO as an example:
+```Shell
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolov1 -bs 128 -size 640 --wp_epoch 3 --max_epoch 150  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+```
+
+## Test YOLOv1
+Taking testing YOLOv1 on COCO-val as an example:
+```Shell
+python test.py --cuda -d coco --root path/to/coco -m yolov1 --weight path/to/yolov1.pth -size 640 -vt 0.3 --show 
+```
+
+## Evaluate YOLOv1
+Taking evaluating YOLOv1 on COCO-val as an example:
+```Shell
+python eval.py --cuda -d coco-val --root path/to/coco -m yolov1 --weight path/to/yolov1.pth 
+```
+
+## Demo
+### Detect with Image
+```Shell
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov1 --weight path/to/weight -size 640 -vt 0.3 --show
+```
+
+### Detect with Video
+```Shell
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov1 --weight path/to/weight -size 640 -vt 0.3 --show --gif
+```
+
+### Detect with Camera
+```Shell
+python demo.py --mode camera --cuda -m yolov1 --weight path/to/weight -size 640 -vt 0.3 --show --gif
+```

+ 41 - 0
models/detectors/yolov2/README.md

@@ -8,3 +8,44 @@
 - For data augmentation, we only use large scale jitter (LSJ), with no Mosaic or Mixup augmentation.
 - For optimizer, we use SGD with momentum 0.937, weight decay 0.0005 and a base lr of 0.01.
 - For learning rate scheduler, we use a linear decay scheduler.
+
+## Train YOLOv2
+### Single GPU
+Taking training YOLOv2 on COCO as an example:
+```Shell
+python train.py --cuda -d coco --root path/to/coco -m yolov2 -bs 16 -size 640 --wp_epoch 3 --max_epoch 200 --eval_epoch 10 --no_aug_epoch 15 --ema --fp16 --multi_scale 
+```
+
+### Multi GPU
+Taking training YOLOv2 on COCO as an example:
+```Shell
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolov2 -bs 128 -size 640 --wp_epoch 3 --max_epoch 200  --eval_epoch 10 --no_aug_epoch 15 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+```
+
+## Test YOLOv2
+Taking testing YOLOv2 on COCO-val as an example:
+```Shell
+python test.py --cuda -d coco --root path/to/coco -m yolov2 --weight path/to/yolov2.pth -size 640 -vt 0.3 --show 
+```
+
+## Evaluate YOLOv2
+Taking evaluating YOLOv2 on COCO-val as an example:
+```Shell
+python eval.py --cuda -d coco-val --root path/to/coco -m yolov2 --weight path/to/yolov2.pth 
+```
+
+## Demo
+### Detect with Image
+```Shell
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov2 --weight path/to/weight -size 640 -vt 0.3 --show
+```
+
+### Detect with Video
+```Shell
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov2 --weight path/to/weight -size 640 -vt 0.3 --show --gif
+```
+
+### Detect with Camera
+```Shell
+python demo.py --mode camera --cuda -m yolov2 --weight path/to/weight -size 640 -vt 0.3 --show --gif
+```

+ 42 - 1
models/detectors/yolov3/README.md

@@ -9,4 +9,45 @@
 - For data augmentation, we use the large scale jitter (LSJ), Mosaic augmentation and Mixup augmentation, following the setting of [YOLOv5](https://github.com/ultralytics/yolov5).
 - For optimizer, we use SGD with momentum 0.937, weight decay 0.0005 and a base lr of 0.01.
 - For learning rate scheduler, we use a linear decay scheduler.
-- For YOLOv3's structure, we use decoupled head, following the setting of [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX).
+- For YOLOv3's structure, we use a decoupled head, following the setting of [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX).
+
+## Train YOLOv3
+### Single GPU
+Taking training YOLOv3 on COCO as an example:
+```Shell
+python train.py --cuda -d coco --root path/to/coco -m yolov3 -bs 16 -size 640 --wp_epoch 3 --max_epoch 300 --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --multi_scale 
+```
+
+### Multi GPU
+Taking training YOLOv3 on COCO as an example:
+```Shell
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolov3 -bs 128 -size 640 --wp_epoch 3 --max_epoch 300  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+```
+
+## Test YOLOv3
+Taking testing YOLOv3 on COCO-val as an example:
+```Shell
+python test.py --cuda -d coco --root path/to/coco -m yolov3 --weight path/to/yolov3.pth -size 640 -vt 0.4 --show 
+```
+
+## Evaluate YOLOv3
+Taking evaluating YOLOv3 on COCO-val as an example:
+```Shell
+python eval.py --cuda -d coco-val --root path/to/coco -m yolov3 --weight path/to/yolov3.pth 
+```
+
+## Demo
+### Detect with Image
+```Shell
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov3 --weight path/to/weight -size 640 -vt 0.4 --show
+```
+
+### Detect with Video
+```Shell
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov3 --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```
+
+### Detect with Camera
+```Shell
+python demo.py --mode camera --cuda -m yolov3 --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```

+ 42 - 1
models/detectors/yolov4/README.md

@@ -9,4 +9,45 @@
 - For data augmentation, we use the large scale jitter (LSJ), Mosaic augmentation and Mixup augmentation, following the setting of [YOLOv5](https://github.com/ultralytics/yolov5).
 - For optimizer, we use SGD with momentum 0.937, weight decay 0.0005 and a base lr of 0.01.
 - For learning rate scheduler, we use a linear decay scheduler.
-- For YOLOv4's structure, we use decoupled head, following the setting of [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX).
+- For YOLOv4's structure, we use a decoupled head, following the setting of [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX).
+
+## Train YOLOv4
+### Single GPU
+Taking training YOLOv4 on COCO as an example:
+```Shell
+python train.py --cuda -d coco --root path/to/coco -m yolov4 -bs 16 -size 640 --wp_epoch 3 --max_epoch 300 --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --multi_scale 
+```
+
+### Multi GPU
+Taking training YOLOv4 on COCO as an example:
+```Shell
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolov4 -bs 128 -size 640 --wp_epoch 3 --max_epoch 300  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+```
+
+## Test YOLOv4
+Taking testing YOLOv4 on COCO-val as an example:
+```Shell
+python test.py --cuda -d coco --root path/to/coco -m yolov4 --weight path/to/yolov4.pth -size 640 -vt 0.4 --show 
+```
+
+## Evaluate YOLOv4
+Taking evaluating YOLOv4 on COCO-val as an example:
+```Shell
+python eval.py --cuda -d coco-val --root path/to/coco -m yolov4 --weight path/to/yolov4.pth 
+```
+
+## Demo
+### Detect with Image
+```Shell
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov4 --weight path/to/weight -size 640 -vt 0.4 --show
+```
+
+### Detect with Video
+```Shell
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov4 --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```
+
+### Detect with Camera
+```Shell
+python demo.py --mode camera --cuda -m yolov4 --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```

+ 41 - 0
models/detectors/yolov5/README.md

@@ -13,3 +13,44 @@
 - For learning rate scheduler, we use a linear decay scheduler.
 - For YOLOv5's structure, we use a decoupled head, following the setting of [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX).
 - For **YOLOv5-M** and **YOLOv5-L**, increasing the batch size may improve performance. Due to my limited computing resources, I can only set the batch size to 16.
+
+## Train YOLOv5
+### Single GPU
+Taking training YOLOv5-S on COCO as an example:
+```Shell
+python train.py --cuda -d coco --root path/to/coco -m yolov5_s -bs 16 -size 640 --wp_epoch 3 --max_epoch 300 --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --multi_scale 
+```
+
+### Multi GPU
+Taking training YOLOv5-S on COCO as an example:
+```Shell
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolov5_s -bs 128 -size 640 --wp_epoch 3 --max_epoch 300  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+```
+
+## Test YOLOv5
+Taking testing YOLOv5-S on COCO-val as an example:
+```Shell
+python test.py --cuda -d coco --root path/to/coco -m yolov5_s --weight path/to/yolov5.pth -size 640 -vt 0.4 --show 
+```
+
+## Evaluate YOLOv5
+Taking evaluating YOLOv5-S on COCO-val as an example:
+```Shell
+python eval.py --cuda -d coco-val --root path/to/coco -m yolov5_s --weight path/to/yolov5.pth 
+```
+
+## Demo
+### Detect with Image
+```Shell
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov5_s --weight path/to/weight -size 640 -vt 0.4 --show
+```
+
+### Detect with Video
+```Shell
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov5_s --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```
+
+### Detect with Camera
+```Shell
+python demo.py --mode camera --cuda -m yolov5_s --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```

+ 42 - 1
models/detectors/yolov7/README.md

@@ -12,4 +12,45 @@
 - For learning rate scheduler, we use a cosine decay scheduler.
 - For YOLOv7's structure, we replace the coupled head with the YOLOX-style decoupled head.
 - In my view, YOLOv7 relies on too many training tricks, such as `anchor box`, `AuxiliaryHead`, `RepConv`, `Mosaic9x` and so on, which makes the overall design overly complicated and runs against the simplicity that has defined the YOLO series; otherwise, why not use the DETR series, which is essentially DETR with some acceleration optimizations? I therefore stayed faithful to my own technical aesthetics and implemented a cleaner, simpler YOLOv7. Without the blessing of so many tricks, however, I did not reproduce the full performance, which is a pity.
-- I have no more GPUs to train my `YOLOv7-X`.
+- I have no more GPUs to train my `YOLOv7-X`.
+
+## Train YOLOv7
+### Single GPU
+Taking training YOLOv7-Tiny on COCO as an example:
+```Shell
+python train.py --cuda -d coco --root path/to/coco -m yolov7_tiny -bs 16 -size 640 --wp_epoch 3 --max_epoch 300 --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --multi_scale 
+```
+
+### Multi GPU
+Taking training YOLOv7-Tiny on COCO as an example:
+```Shell
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolov7_tiny -bs 128 -size 640 --wp_epoch 3 --max_epoch 300  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+```
+
+## Test YOLOv7
+Taking testing YOLOv7-Tiny on COCO-val as an example:
+```Shell
+python test.py --cuda -d coco --root path/to/coco -m yolov7_tiny --weight path/to/yolov7_tiny.pth -size 640 -vt 0.4 --show 
+```
+
+## Evaluate YOLOv7
+Taking evaluating YOLOv7-Tiny on COCO-val as an example:
+```Shell
+python eval.py --cuda -d coco-val --root path/to/coco -m yolov7_tiny --weight path/to/yolov7_tiny.pth 
+```
+
+## Demo
+### Detect with Image
+```Shell
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolov7_tiny --weight path/to/weight -size 640 -vt 0.4 --show
+```
+
+### Detect with Video
+```Shell
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolov7_tiny --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```
+
+### Detect with Camera
+```Shell
+python demo.py --mode camera --cuda -m yolov7_tiny --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```

+ 42 - 1
models/detectors/yolox/README.md

@@ -11,4 +11,45 @@
 - For data augmentation, we use the large scale jitter (LSJ), Mosaic augmentation and Mixup augmentation.
 - For optimizer, we use SGD with weight decay 0.0005 and a base per-image lr of 0.01 / 64.
 - For learning rate scheduler, we use a cosine decay scheduler.
-- The reason for the low performance of my reproduced **YOLOX-L** has not been found out yet.
+- I have not yet found the reason for the low performance of my reproduced **YOLOX-L**.
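As in the other READMEs, the per-image base lr noted above suggests a linear scaling rule. Assuming that reading (it is not a value taken from the training code), the 8-GPU run below with a total batch size of 128 would use an effective lr of:

```Shell
# 0.01 / 64 per image, scaled by the assumed total batch size of 128
awk 'BEGIN { print 0.01 / 64 * 128 }'
```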
+
+## Train YOLOX
+### Single GPU
+Taking training YOLOX-S on COCO as an example:
+```Shell
+python train.py --cuda -d coco --root path/to/coco -m yolox_s -bs 16 -size 640 --wp_epoch 3 --max_epoch 300 --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --multi_scale 
+```
+
+### Multi GPU
+Taking training YOLOX-S on COCO as an example:
+```Shell
+python -m torch.distributed.run --nproc_per_node=8 train.py --cuda -dist -d coco --root /data/datasets/ -m yolox_s -bs 128 -size 640 --wp_epoch 3 --max_epoch 300  --eval_epoch 10 --no_aug_epoch 20 --ema --fp16 --sybn --multi_scale --save_folder weights/ 
+```
+
+## Test YOLOX
+Taking testing YOLOX-S on COCO-val as an example:
+```Shell
+python test.py --cuda -d coco --root path/to/coco -m yolox_s --weight path/to/yolox_s.pth -size 640 -vt 0.4 --show 
+```
+
+## Evaluate YOLOX
+Taking evaluating YOLOX-S on COCO-val as an example:
+```Shell
+python eval.py --cuda -d coco-val --root path/to/coco -m yolox_s --weight path/to/yolox_s.pth 
+```
+
+## Demo
+### Detect with Image
+```Shell
+python demo.py --mode image --path_to_img path/to/image_dirs/ --cuda -m yolox_s --weight path/to/weight -size 640 -vt 0.4 --show
+```
+
+### Detect with Video
+```Shell
+python demo.py --mode video --path_to_vid path/to/video --cuda -m yolox_s --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```
+
+### Detect with Camera
+```Shell
+python demo.py --mode camera --cuda -m yolox_s --weight path/to/weight -size 640 -vt 0.4 --show --gif
+```