LLM Inference Optimization
@tachyon-beep/skillpacks-yzmir-llm-specialist-llm-inference-optimization
v1.0.0 • 1 month ago
Optimize LLM inference for latency, throughput, and cost in production systems.
prpm install @tachyon-beep/skillpacks-yzmir-llm-specialist-llm-inference-optimization
Package Info
- Format: claude
- Type: skill
- Category: ai-machine-learning
- License: CC-BY-SA-4.0
- Latest Version: 1.0.0
- Total Versions: 1
Latest Version Details
- Version: 1.0.0
- Published: November 11, 2025
- Package Size: 9.70 KB