Repository Details
HelloGitHub Rating: 0 ratings
Open Source LLM Evaluation Framework
Free · MIT
Stars: 8k
Chinese: No
Language: Python
Active: Yes
Contributors: 263
Issues: 479
Organization: Yes
Latest: 0.4.7
Forks: 2k
License: MIT
[Image: lm-evaluation-harness]
This framework evaluates Large Language Models (LLMs), testing model performance across a wide range of tasks. It ships with over 60 academic benchmarks and supports multiple model frameworks, local models, cloud services (such as the OpenAI API), hardware acceleration, and user-defined custom tasks.
Included in: Vol.107
Tags: AI, LLM, Python
