
GPTQ

@zechenzhangagi/claude-ai-research-skills-10-optimization-gptq

v1.0.0 (published November 11, 2025)

Post-training 4-bit quantization for LLMs with minimal accuracy loss. Use it to deploy large models (70B, 405B) on consumer GPUs when you need a 4× memory reduction with <2% perplexity degradation, or for 3-4× faster inference than FP16. Integrates with transformers and PEFT for QLoRA fine-tuning.

prpm install @zechenzhangagi/claude-ai-research-skills-10-optimization-gptq
0 total downloads
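The 4× memory-reduction figure in the description can be sanity-checked with back-of-the-envelope arithmetic. The sketch below counts weight memory only (activations, KV cache, and quantization metadata such as scales and zero-points add some overhead in practice) and uses the 70B parameter count mentioned above:

```python
def weight_memory_gib(num_params: float, bits_per_param: float) -> float:
    """Approximate weight-only memory footprint in GiB.

    Ignores activations, KV cache, and quantization metadata
    (scales/zero-points), which add a small overhead in practice.
    """
    return num_params * bits_per_param / 8 / 2**30

fp16 = weight_memory_gib(70e9, 16)  # FP16 baseline: far beyond a single consumer GPU
int4 = weight_memory_gib(70e9, 4)   # 4-bit GPTQ: within reach of two 24 GB cards
print(f"70B @ FP16: {fp16:.1f} GiB, @ 4-bit: {int4:.1f} GiB ({fp16 / int4:.0f}x smaller)")
```

This is where the headline 4× comes from: the bit width drops from 16 to 4, so weight storage shrinks by exactly that factor, before any per-group metadata overhead.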


📦 Package Info

Format: claude
Type: skill
Category: data-science
License: MIT
Latest Version: 1.0.0
Total Versions: 1

📋 Latest Version Details

Version: 1.0.0
Published: November 11, 2025
Package Size: 4.05 KB