
update ch11 readme in english

Gen TANG, 2 years ago
parent commit 60ed34c92b
1 file changed, 6 insertions(+), 6 deletions(-)

ch11_llm/README.md (+6 -6)

@@ -1,9 +1,9 @@
 
 |代码|说明|
 |---|---|
-|[char_gpt.ipynb](char_gpt.ipynb)| 从零开始实现GPT-2,并使用模型进行自然语言的自回归学习(根据背景文本预测下一个字母是什么) |
-|[gpt2.ipynb](gpt2.ipynb)| 使用开源的GPT-2模型 |
-|[lora_tutorial.ipynb](lora_tutorial.ipynb)| 实现简单版本的LoRA以及开源工具中LoRA的使用示例 |
-|[gpt2_lora.ipynb](gpt2_lora.ipynb)| 使用LoRA对GPT-2进行监督微调(微调方式并不是最优的) |
-|[gpt2\_lora_optimum.ipynb](gpt2_lora_optimum.ipynb)| 使用LoRA对GPT-2进行更优雅的监督微调 |
-|[gpt2\_reward_modeling.ipynb](gpt2_reward_modeling.ipynb)| 使用LoRA对GPT-2进行评分建模 |
+|[char_gpt.ipynb](char_gpt.ipynb)| Building GPT-2 from scratch and training it autoregressively on Python scripts (predicting the next character from the preceding context) |
+|[gpt2.ipynb](gpt2.ipynb)| Using the open-source GPT-2 model |
+|[lora_tutorial.ipynb](lora_tutorial.ipynb)| A simple LoRA implementation, plus examples of using LoRA via peft (see the sketch after this diff) |
+|[gpt2_lora.ipynb](gpt2_lora.ipynb)| Supervised fine-tuning (SFT) of GPT-2 with LoRA (note: this fine-tuning approach is not optimal) |
+|[gpt2\_lora_optimum.ipynb](gpt2_lora_optimum.ipynb)| A more elegant SFT of GPT-2 with LoRA |
+|[gpt2\_reward_modeling.ipynb](gpt2_reward_modeling.ipynb)| Reward modeling with GPT-2 using LoRA |
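
Several rows in the table above rely on attaching LoRA adapters to GPT-2 via Hugging Face peft. As context, here is a minimal sketch of that setup; the hyperparameters (rank, alpha, dropout) and the `target_modules` choice are illustrative assumptions, not values taken from the notebooks themselves.

```python
# Minimal sketch: wrap GPT-2 with a LoRA adapter using Hugging Face peft.
from transformers import GPT2LMHeadModel
from peft import LoraConfig, get_peft_model

# Load the base open-source GPT-2 model (124M parameters).
base_model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical LoRA configuration for illustration only; the notebooks
# may use different ranks, scaling, or target modules.
lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the LoRA update
    target_modules=["c_attn"],  # GPT-2's fused query/key/value projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Freeze the base weights and inject trainable LoRA matrices.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```

With a small rank such as r=8 on the attention projections, well under 1% of the model's parameters remain trainable, which is what makes LoRA-based SFT and reward modeling on GPT-2 cheap enough for the single-notebook experiments listed above.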