CVE-2025-46560
EUVD-2025-12612 (30.04.2025, 01:15)
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
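The pattern behind the complexity blow-up can be illustrated with plain Python list operations. The sketch below is illustrative only and does not reproduce vLLM's actual tokenizer code; `PLACEHOLDER_ID`, `repeat_len`, and the input sizes are assumptions chosen to contrast repeated list concatenation (quadratic) with an in-place `extend` (linear).

```python
# Illustrative sketch (not vLLM's actual code): why re-concatenating the
# output list on every placeholder expansion becomes quadratic, and a
# linear alternative. Token values are made up for the demo.

PLACEHOLDER_ID = -1  # hypothetical ID standing in for an <|image_|>/<|audio_|> slot


def expand_quadratic(tokens: list[int], repeat_len: int) -> list[int]:
    """Each `out = out + [...]` copies the entire prefix, so total work
    grows as O(n^2) in the number of expanded tokens."""
    out: list[int] = []
    for tok in tokens:
        if tok == PLACEHOLDER_ID:
            out = out + [tok] * repeat_len  # re-copies `out` every time
        else:
            out = out + [tok]
    return out


def expand_linear(tokens: list[int], repeat_len: int) -> list[int]:
    """In-place extension amortizes to O(n) total work."""
    out: list[int] = []
    for tok in tokens:
        if tok == PLACEHOLDER_ID:
            out.extend([tok] * repeat_len)
        else:
            out.append(tok)
    return out


if __name__ == "__main__":
    import time

    prompt = [PLACEHOLDER_ID] * 1000  # crafted input: many placeholders
    for fn in (expand_quadratic, expand_linear):
        start = time.perf_counter()
        fn(prompt, repeat_len=200)
        print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
```

On the crafted input, the concatenating variant re-copies the growing output list for every placeholder, while the extending variant touches each output token only once, which is why a specially crafted prompt can exhaust CPU time under the vulnerable pattern.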
Affected Products (NVD)
| Vendor | Product | Version |
|---|---|---|
| vllm | vllm | 0.8.0 ≤ x < 0.8.5 |

x = vulnerable software versions
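As a quick local check, the installed vLLM version can be compared against the affected range in the table above. This is a minimal sketch, assuming vllm is installed in the current environment and that the third-party packaging library is available for version comparison.

```python
# Check the locally installed vLLM version against the affected
# range [0.8.0, 0.8.5). Assumes the `packaging` library is installed.
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("vllm"))
affected = Version("0.8.0") <= installed < Version("0.8.5")
status = "affected, upgrade to >= 0.8.5" if affected else "not in the affected range"
print(f"vllm {installed}: {status}")
```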