vLLM v0.20.0: DeepSeek V4, PyTorch 2.11, FlashAttention 4
vLLM v0.20.0 ships 752 commits from 320 contributors. Highlights: CUDA 13.0 as the default toolkit, PyTorch 2.11, Transformers v5, Python 3.14 support, FlashAttention 4 enabled by default, and a TurboQuant 2-bit KV cache that quadruples cache capacity.
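A minimal sketch of how KV-cache quantization is typically switched on in vLLM, via the existing `kv_cache_dtype` engine argument. The `"fp8"` value shown here already exists today; the release notes do not state the exact identifier for the new TurboQuant 2-bit mode, so treat the mapping to that argument as an assumption. The model ID is a small placeholder chosen only for illustration.

```python
# Sketch: enabling a quantized KV cache in vLLM through kv_cache_dtype.
# "fp8" is a currently supported value; the TurboQuant 2-bit mode from
# v0.20.0 would presumably plug into the same argument (assumption).
from vllm import LLM, SamplingParams

llm = LLM(
    model="facebook/opt-125m",  # small placeholder model for illustration
    kv_cache_dtype="fp8",       # quantized KV cache instead of the default dtype
)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```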