Update README.md
README.md CHANGED

@@ -19,10 +19,14 @@ library_name: transformers
The LucasInsight/Meta-Llama-3.1-8B-Instruct model is an enhanced version of the Meta-Llama3 project, incorporating the alpaca-gpt4-data-zh Chinese dataset. The model has been fine-tuned using Unsloth with 4-bit QLoRA and generates GGUF model files compatible with the Ollama inference engine.

+👋Join our [WeChat](./wechat.jpg)
+
**Model Overview**

The LucasInsight/Meta-Llama-3.1-8B-Instruct model builds on the Meta-Llama3 project with the addition of the alpaca-gpt4-data-zh Chinese dataset. The model was fine-tuned using Unsloth with 4-bit QLoRA, and the generated GGUF model files support the Ollama inference engine.

+👋Join our [WeChat group](./wechat.jpg)
+
**License Information**

This project is governed by the licenses of the integrated components:

@@ -101,4 +105,5 @@ This project is governed by the licenses of the integrated components:
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
-```
+```
+
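
For readers who want to reproduce a comparable setup, below is a minimal Python sketch of loading a Llama 3.1 8B Instruct base in 4-bit and attaching LoRA adapters with Unsloth, the QLoRA approach the README describes. The model name, sequence length, and LoRA hyperparameters are illustrative assumptions, not the exact settings used for this release.

```python
# Hypothetical sketch of the 4-bit QLoRA setup described above, using the
# Unsloth FastLanguageModel API. Names and hyperparameters are assumptions.
from unsloth import FastLanguageModel

# Load the instruct base model with 4-bit quantization (QLoRA-style).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-8B-Instruct",  # assumed base; adjust as needed
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of extra weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)

# Fine-tuning on the alpaca-gpt4-data-zh dataset and exporting the result to
# GGUF for Ollama would follow from here; both are omitted for brevity.
```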