CVE-2026-27893
EUVD-2026-16478 · 27.03.2026, 00:16
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
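The flaw class described above can be illustrated with a minimal sketch. This is not the actual vLLM source; the function and parameter names are hypothetical and stand in for a sub-component loader that should forward the user's `trust_remote_code` setting but instead hardcodes it:

```python
# Hypothetical sketch of the flaw pattern described in the advisory --
# NOT the actual vLLM code. A sub-component loader hardcodes
# trust_remote_code=True instead of forwarding the user's setting.

def load_subcomponent_buggy(repo_id: str, user_trust_remote_code: bool) -> dict:
    # BUG: the user's explicit opt-out is ignored; remote code from the
    # model repository would always be trusted and executed.
    return {"repo": repo_id, "trust_remote_code": True}

def load_subcomponent_fixed(repo_id: str, user_trust_remote_code: bool) -> dict:
    # FIX (as in 0.18.0, conceptually): propagate the user's choice
    # down to the sub-component loader.
    return {"repo": repo_id, "trust_remote_code": user_trust_remote_code}

if __name__ == "__main__":
    # User launches with --trust-remote-code=False, i.e. opts out.
    print(load_subcomponent_buggy("some/model", False)["trust_remote_code"])   # True
    print(load_subcomponent_fixed("some/model", False)["trust_remote_code"])   # False
```

The security property at stake is that an explicit opt-out must be honored along every code path that can trigger loading of repository-supplied code, not only at the top-level entry point.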
Affected Products (NVD)
| Vendor | Product | Version |
|---|---|---|
| vllm | vllm | 0.10.1 ≤ x < 0.18.0 |

x = vulnerable software versions
Common Weakness Enumeration