Repository Details
Stars: 31.3k
Chinese: No
Language: Python
Active: Yes
Contributors: 58
Issues: 783
Organization: Yes
Latest: 2025-02-2
Forks: 2k
License: Apache-2.0

This project is a Python toolkit for fine-tuning and optimizing large language models (LLMs). It speeds up fine-tuning through dynamic quantization and memory-optimization techniques while cutting GPU memory usage by 70%-80%. It supports a wide range of hardware configurations and LLMs as well as ultra-long-context tasks, and it ships Jupyter Notebook examples that can be run online, lowering the barrier to entry for fine-tuning large models.
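To illustrate the kind of quantized, memory-efficient fine-tuning workflow described above, here is a minimal sketch using the generic Hugging Face stack (transformers, peft, bitsandbytes) rather than this project's own API; the base model name and hyperparameters are placeholders chosen for illustration.

```python
# Sketch of 4-bit quantized LoRA fine-tuning: the frozen base model is loaded
# in 4-bit precision to shrink GPU memory, and small LoRA adapters are the
# only trainable parameters. Model name and settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# 4-bit quantization keeps the frozen base weights compact in GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters: train a few million parameters instead of billions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction
```

The resulting model can then be passed to a standard training loop or trainer; the memory savings come from keeping the base weights quantized and frozen while only the adapter weights receive gradients.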