CVE-2023-29374
05.04.2023, 02:15
In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
Vendor | Product | Version |
---|---|---|
langchain | langchain | 𝑥 ≤ 0.0.131 |
𝑥
= Vulnerable software versions
References