Repository Details
Stars: 8k
Chinese: No
Language: Python
Active: Yes
Contributors: 263
Issues: 479
Organization: Yes
Latest release: 0.4.7
Forks: 2k
License: MIT

This framework evaluates Large Language Models (LLMs), testing model performance across a wide range of tasks. It ships more than 60 academic benchmarks and supports multiple model frameworks, locally hosted models, cloud services (such as OpenAI), hardware acceleration, and user-defined custom tasks.
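At its core, a harness like this loops over benchmark examples, scores each answer choice with a pluggable model backend, and aggregates a metric such as accuracy. The Python sketch below illustrates that pattern; all names in it (Example, EvalTask, run_eval, the toy scoring function) are hypothetical stand-ins invented for illustration, not the project's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List


# Hypothetical types illustrating the harness pattern, not the project's real API.
@dataclass
class Example:
    prompt: str
    choices: List[str]
    answer: int  # index of the correct choice


@dataclass
class EvalTask:
    name: str
    examples: List[Example]


def run_eval(task: EvalTask, score_fn: Callable[[str, str], float]) -> float:
    """Score each choice with the model backend, pick the argmax,
    and report accuracy over the task."""
    correct = 0
    for ex in task.examples:
        scores = [score_fn(ex.prompt, choice) for choice in ex.choices]
        predicted = max(range(len(scores)), key=scores.__getitem__)
        correct += int(predicted == ex.answer)
    return correct / len(task.examples)


if __name__ == "__main__":
    # Toy "model" that prefers the longest continuation; a real backend would be
    # a locally hosted model or a cloud API such as OpenAI's.
    toy_score = lambda prompt, choice: float(len(choice))
    task = EvalTask(
        name="toy-mc",
        examples=[Example("2 + 2 =", ["3", "4 (four)"], 1)],
    )
    print(f"{task.name} accuracy: {run_eval(task, toy_score):.2f}")
```

Swapping in a different backend only means supplying a different score_fn, which is the same extension point that lets such a framework support multiple model frameworks and custom tasks.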