CVE-2025-62426
21.11.2025, 02:15
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.5.5 up to but not including 0.11.1, the /v1/chat/completions and /tokenize endpoints accept a chat_template_kwargs request parameter that is used in code before it is properly validated against the chat template. With suitably crafted chat_template_kwargs parameters, an attacker can block processing on the API server for long periods of time, delaying all other requests. This issue has been patched in version 0.11.1.
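To make the affected request surface concrete, the minimal sketch below shows how a client passes chat_template_kwargs to a vLLM OpenAI-compatible server. The base URL, model name, and the specific kwargs shown are placeholder assumptions for illustration, not the payload that triggers the delay described in this CVE.

```python
import requests

# Illustrative sketch only: a normal chat completion request that includes the
# chat_template_kwargs field. Before vLLM 0.11.1, this field was used during
# chat-template rendering before being fully validated against the template.
BASE_URL = "http://localhost:8000"  # assumed local vLLM OpenAI-compatible server

payload = {
    "model": "my-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello"}],
    # Extra keyword arguments forwarded to the chat-template rendering step.
    # The value here is a benign placeholder, not an exploit payload.
    "chat_template_kwargs": {"enable_thinking": False},
}

resp = requests.post(f"{BASE_URL}/v1/chat/completions", json=payload, timeout=30)
print(resp.status_code, resp.json())
```

Upgrading to 0.11.1 or later ensures these keyword arguments are validated against the chat template before they influence request processing.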
| Vendor | Product | Version |
|---|---|---|
| vllm | vllm | 0.5.5 ≤ 𝑥 < 0.11.1 |
| vllm | vllm | 0.11.1:rc0 |
| vllm | vllm | 0.11.1:rc1 |
𝑥 = vulnerable software versions