The Incident
On March 9, 2026, a critical severity CVE (CVE-2026-27966) was published against Langflow, the open-source platform widely used to build and deploy LLM-powered agent workflows. The vulnerability exists in Langflow’s CSV Agent node, which is designed to allow users to upload CSV files and query them using natural language. Prior to version 1.8.0, that node hardcodes allow_dangerous_code=True when initializing LangChain’s CSV agent — a flag that unconditionally attaches a Python REPL (python_repl_ast) as an available tool to the underlying LLM.
The consequence is straightforward and severe: any user — or attacker — who can submit a prompt to the CSV Agent can instruct the LLM to call the Python REPL with arbitrary code. From there, OS-level command execution is one os.system() or subprocess.run() call away. The attack surface includes any Langflow deployment where the CSV Agent node is present and reachable, which covers a substantial portion of production deployments given how commonly this node appears in data-processing workflows. The fix shipped in version 1.8.0, which removes the hardcoded flag and requires operators to make an explicit, deliberate choice to enable code execution.
This is not a subtle logic flaw. It is a case of a high-privilege capability — arbitrary code execution on the server process — being permanently wired open at the framework level, below the visibility of the operators who deployed the workflow.
The Authority Path That Failed
The identity carrying execution authority here is the Langflow server process itself. When a workflow invokes the CSV Agent node, the LLM runs under the authority of whatever OS user owns the Langflow process — typically a service account with filesystem access, network access, and in many cloud deployments, attached IAM roles or mounted secrets. The Python REPL tool inherits all of that authority with no additional gating. The scope held by the agent was the full ambient authority of the server process. The scope exercised by the attacker, through prompt injection, could be anything that process was permitted to do: read credentials, exfiltrate data, establish reverse shells, or pivot laterally.
Ownership of this capability gap sits in a difficult place. The Langflow developer who wrote the CSV Agent node made a default-unsafe choice — hardcoding a dangerous flag for convenience. But operators who deployed Langflow had no obvious signal that their CSV Agent node was silently exposing a code execution primitive. LangChain’s own documentation marks allow_dangerous_code as an explicit opt-in precisely because the risk is well understood in that ecosystem. The authority to execute code was granted by the framework, exercised by the LLM at attacker direction, and never surfaced to the operator who owned the deployment. That accountability gap — between what the agent was authorized to do on paper and what it was actually capable of doing at runtime — is the core failure.
SecurityV0 Perspective
An organization running SecurityV0 would see unproven_execution surface for any Langflow deployment running a version prior to 1.8.0 that includes the CSV Agent node. The finding type applies because the Python REPL tool is attached to the LLM’s tool-call surface without any runtime evidence of deliberate authorization by the deploying organization. The agent’s ability to execute arbitrary code was never explicitly justified in the deployment’s authority record — it was inherited silently from a hardcoded framework default. SecurityV0 scans the tool manifests of deployed agents and flags cases where high-privilege tools (code execution, shell access, filesystem write) appear in scope without a corresponding privilege justification artifact.
The evidence pack for this finding would show: the Langflow version string, the presence of the CSV Agent node in the workflow graph, the python_repl_ast tool in the resolved tool manifest, and the absence of any operator-authored justification record for code-execution scope. It would also flag the allow_dangerous_code flag state as a configuration-level risk indicator. That pack gives the security team what they need to triage immediately: a specific node, a specific tool, a specific missing control, and a patch version to target.
What To Do
- Patch immediately to Langflow 1.8.0 or later. The fix removes the hardcoded flag. If you cannot patch, remove or disable the CSV Agent node at the workflow level and treat any deployment running the vulnerable version as potentially compromised.
- Audit tool manifests for all deployed agents. For every LLM agent in production, enumerate the tools it can call. Any tool that can execute code, write to disk, or make outbound network calls requires an explicit, documented justification. Silence is not consent.
- Treat
allow_dangerous_codeand analogous flags as a security boundary, not a developer convenience. Establish a policy that any framework-level flag enabling code execution must be reviewed and approved before deployment, the same way you would treat a firewall rule opening an inbound port. - Apply prompt injection mitigations at the ingestion layer. CSV files are attacker-controlled input. Content from uploaded files should be treated as untrusted and sandboxed before it reaches a tool-enabled LLM. Consider stripping or escaping instruction-like patterns before they enter the agent context.
- Scope agent runtime permissions to the minimum necessary. The Langflow server process should not run as a user with broad filesystem or network access. Use a dedicated service account with only the permissions the workflow actually requires, so that even a successful RCE has a constrained blast radius.