no use_qfl

yjh0410 committed 1 year ago
Parent commit: 481dd7c684
2 files changed, 8 insertions and 16 deletions:
  1. config/model_config/rtcdet_config.py (+2, -2)
  2. models/detectors/rtcdet/README.md (+6, -14)

config/model_config/rtcdet_config.py (+2, -2)

@@ -55,7 +55,7 @@ rtcdet_cfg = {
         'loss_cls_weight': 0.5,
         'loss_box_weight': 7.5,
         'loss_dfl_weight': 1.5,
-        'use_qfl': True,
+        'use_qfl': False,
         'loss_cls_weight_qfl': 1.0,
         'loss_box_weight_qfl': 2.0,
         'loss_dfl_weight_qfl': 1.0,
@@ -115,7 +115,7 @@ rtcdet_cfg = {
         'loss_cls_weight': 0.5,
         'loss_box_weight': 7.5,
         'loss_dfl_weight': 1.5,
-        'use_qfl': True,
+        'use_qfl': False,
         'loss_cls_weight_qfl': 1.0,
         'loss_box_weight_qfl': 2.0,
         'loss_dfl_weight_qfl': 1.0,

models/detectors/rtcdet/README.md (+6, -14)

@@ -6,19 +6,11 @@
 - **Scratch**: We train the detector on COCO from scratch, without any pretrained weights for the backbone.
 
 For the small model:
-|   Model  | Pretrained | Scale | AP<sup>val<br>0.5:0.95 | AP<sup>val<br>0.5 | FLOPs<br><sup>(G) | Params<br><sup>(M) | Weight |
-|----------|------------|-------|------------------------|-------------------|-------------------|--------------------|--------|
-| RTCDet-S | Scratch    |  640  |                        |                   |                   |                    |  |
-| RTCDet-S | IN1K Cls   |  640  |                        |                   |                   |                    |  |
-| RTCDet-S | IN1K MIM   |  640  |                        |                   |                   |                    |  |
-
-For the large model:
-|   Model  | Pretrained | Scale | AP<sup>val<br>0.5:0.95 | AP<sup>val<br>0.5 | FLOPs<br><sup>(G) | Params<br><sup>(M) | Weight |
-|----------|------------|-------|------------------------|-------------------|-------------------|--------------------|--------|
-| RTCDet-L | Scratch    |  640  |                        |                   |                   |                    |  |
-| RTCDet-L | IN1K Cls   |  640  |                        |                   |                   |                    |  |
-| RTCDet-L | IN1K MIM   |  640  |                        |                   |                   |                    |  |
-
+|   Model  | Pretrained | Scale | Epoch | AP<sup>val<br>0.5:0.95 | AP<sup>val<br>0.5 | FLOPs<br><sup>(G) | Params<br><sup>(M) | Weight |
+|----------|------------|-------|-------|------------------------|-------------------|-------------------|--------------------|--------|
+| RTCDet-S | Scratch    |  640  |  300  |                        |                   |                   |                    |  |
+| RTCDet-S | IN1K Cls   |  640  |  300  |                        |                   |                   |                    |  |
+| RTCDet-S | IN1K MIM   |  640  |  300  |                        |                   |                   |                    |  |
 
 ## Results on the COCO-val
 |   Model  | Batch | Scale | AP<sup>val<br>0.5:0.95 | AP<sup>val<br>0.5 | FLOPs<br><sup>(G) | Params<br><sup>(M) | Weight |
@@ -31,7 +23,7 @@ For the large model:
 
 - For the backbone, we ... (not sure)
 - For training, we train the RTCDet series for 300 epochs on COCO.
-- For data augmentation, we use the large scale jitter (LSJ), Mosaic augmentation and Mixup augmentation, following the YOLOX.
+- For data augmentation, we use the large scale jitter (LSJ), Mosaic augmentation and Mixup augmentation, following the YOLOv8.
 - For the optimizer, we use AdamW with weight decay 0.05 and a base per-image lr of 0.001 / 64.
 - For the learning rate scheduler, we use a linear decay scheduler.
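
As a rough illustration of the training recipe described in the README (AdamW, weight decay 0.05, base per-image lr of 0.001 / 64 scaled by batch size, linear decay over 300 epochs), a minimal PyTorch sketch might look like the following. The `batch_size` value and the stand-in `model` are placeholders, not values taken from this repository.

```python
import torch

batch_size = 16
max_epochs = 300
base_lr_per_image = 0.001 / 64          # base per-image learning rate
lr = base_lr_per_image * batch_size     # linear scaling with the batch size

model = torch.nn.Conv2d(3, 16, 3)       # stand-in for the detector
optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=0.05)

# Linear decay of the learning rate toward zero across training epochs.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: 1.0 - epoch / max_epochs)

for epoch in range(max_epochs):
    # ... one epoch of training ...
    scheduler.step()
```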