LLM Evaluation
@camoneart/llm-evaluation
v1.0.0 • 1 month ago
Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performance, measuring AI application quality, or establishing evaluation frameworks.
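The automated-metrics side of such a framework can be sketched as a small scoring harness. This is a minimal illustrative example, not the package's actual API: the `EvalCase` type, `keyword_coverage` metric, and `evaluate` runner are hypothetical names chosen for the sketch, and keyword coverage stands in for whatever metric suite the agent configures.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    """A single test case: a prompt plus facts the answer must mention."""
    prompt: str
    expected_keywords: list

def keyword_coverage(response: str, case: EvalCase) -> float:
    """Fraction of expected keywords present in the response (case-insensitive)."""
    text = response.lower()
    hits = sum(1 for kw in case.expected_keywords if kw.lower() in text)
    return hits / len(case.expected_keywords)

def evaluate(responses, cases, threshold=0.8):
    """Score each (response, case) pair and report the pass rate at a threshold."""
    scores = [keyword_coverage(r, c) for r, c in zip(responses, cases)]
    pass_rate = sum(s >= threshold for s in scores) / len(scores)
    return scores, pass_rate

cases = [
    EvalCase("What is the capital of France?", ["paris"]),
    EvalCase("Name two primary colors.", ["red", "blue", "yellow"]),
]
responses = [
    "The capital of France is Paris.",
    "Red and blue are primary colors.",
]
scores, pass_rate = evaluate(responses, cases, threshold=0.6)
```

In practice a keyword metric like this is only a first line of defense; a real evaluation stack would layer it with model-graded rubrics, human feedback, and benchmark suites, as the description above suggests.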
prpm install @camoneart/llm-evaluation
📦 Package Info
- Format: claude
- Type: agent
- Category: testing-quality
- License: MIT
- Latest Version: 1.0.0
- Total Versions: 1
🔗 Links
- Repository: https://github.com/camoneart/claude-code
📋 Latest Version Details
- Version: 1.0.0
- Published: November 3, 2025
- Package Size: 0.27 KB