
AI Security

All briefings covering AI security.

30 April 2026 · OpenAI

OpenAI urges all macOS users to update ChatGPT, Codex and Atlas after Axios library compromise

OpenAI issued an urgent security alert on 29 April 2026 after a compromised build of the third-party JavaScript library Axios was used to push a remote access trojan into its desktop apps. All macOS users must update before 8 May 2026 or risk credential theft.

Secure AI Brain
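For teams responding to an alert like this, the standard first step is to check the lockfile for affected versions. The sketch below is illustrative only: the version numbers and helper names are hypothetical, not taken from OpenAI's advisory, and the lockfile shape assumed is npm's lockfileVersion 3 `packages` map.

```javascript
// Hypothetical known-bad versions; a real response would use the
// versions named in the vendor advisory.
const COMPROMISED = {
  axios: new Set(["1.99.0", "1.99.1"]),
};

function isCompromised(name, version) {
  const bad = COMPROMISED[name];
  return bad ? bad.has(version) : false;
}

// Scan a parsed package-lock.json (v3 shape:
// { packages: { "node_modules/axios": { version: "1.99.0" }, ... } })
function scanLockfile(lock) {
  const hits = [];
  for (const [path, meta] of Object.entries(lock.packages || {})) {
    // Strip the leading path so nested dependencies resolve to a bare name.
    const name = path.replace(/^.*node_modules\//, "");
    if (meta.version && isCompromised(name, meta.version)) {
      hits.push({ name, version: meta.version });
    }
  }
  return hits;
}
```

Pinning exact versions and installing with `npm ci` against a reviewed lockfile limits exposure to this class of supply-chain push.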
19 April 2026 · Mozilla (MZLA Technologies)

Mozilla Thunderbolt Gives Businesses a Self-Hosted AI Alternative

Mozilla's for-profit subsidiary MZLA Technologies launched Thunderbolt on 16 April 2026, an open-source, self-hostable enterprise AI client designed to replace Microsoft Copilot, ChatGPT Enterprise, and Claude Enterprise for organisations that want full control over their data. Thunderbolt supports any AI model, integrates with MCP servers and the Agent Client Protocol, and includes optional end-to-end encryption with device-level access controls. It is available on GitHub now, with a managed hosted version for smaller teams currently accepting signups.

Secure AI Brain · Employee Amplification Systems
11 April 2026 · ISACA

Agentic AI Prompt Injection Confirmed as Primary Enterprise Security Threat

Security researchers have confirmed that prompt injection via malicious instructions embedded in GitHub issues, documentation, and email is the leading attack vector against AI agents. In some enterprise environments, machine-to-machine interactions now outnumber human logins 100-to-1, creating a largely ungoverned attack surface.

Secure AI Brain
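The attack described here works because agents read issue bodies, docs, and emails as if they were instructions. A common mitigation is to delimit untrusted content before it reaches the prompt and to flag instruction-like phrasing for review. This is a generic sketch: the pattern list and function names are hypothetical, not from any product mentioned above.

```javascript
// Illustrative patterns that often indicate injected instructions.
const SUSPECT_PATTERNS = [
  /ignore (all )?(previous|prior) instructions/i,
  /you are now/i,
  /system prompt/i,
];

// Heuristic screen for instruction-like phrasing in untrusted text.
function flagInjection(text) {
  return SUSPECT_PATTERNS.some((re) => re.test(text));
}

// Delimiters signal to the model that this is data, not instructions.
function wrapUntrusted(text) {
  return `<untrusted-content>\n${text}\n</untrusted-content>`;
}
```

Pattern matching is a weak defence on its own; the delimiting step matters more, combined with restricting what tools the agent may call while processing untrusted input.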
9 April 2026 · Anthropic

Anthropic Withholds Mythos From Public Over Cyberattack Risk

Anthropic has officially launched Project Glasswing, a tightly controlled release programme for its most powerful model, Claude Mythos Preview. The model, capable of finding tens of thousands of zero-day vulnerabilities and exploiting them autonomously, is being restricted to approximately 40 vetted organisations for defensive security work only. Anthropic describes it as the first AI model capable of bringing down a Fortune 100 company or penetrating critical national defence systems.

Secure AI Brain · Employee Amplification Systems
7 April 2026 · eSecurity Planet

70% of Organisations Have AI-Generated Code Vulnerabilities in Production

A new industry report reveals that 70.4% of organisations have confirmed or suspected security vulnerabilities in production systems introduced by AI-generated code. Despite this, 92% express confidence in their detection capabilities, revealing a dangerous confidence gap. Service principals and autonomous agents now outnumber human users 100-to-1 in enterprise environments, creating a largely ungoverned attack surface.

Secure AI Brain
7 April 2026 · Multiple

OpenAI, Anthropic, and Google Unite to Fight Chinese Model Distillation

OpenAI, Anthropic, and Google announced a joint intelligence-sharing operation through the Frontier Model Forum to detect and counter adversarial distillation attacks from Chinese AI labs. Anthropic reported that DeepSeek, Moonshot AI, and MiniMax collectively generated over 16 million exchanges with Claude via roughly 24,000 fraudulent accounts. This is the first time the Forum has been activated as an active threat-intelligence operation.

Secure AI Brain · AI Growth Engine
4 April 2026 · Anthropic

Anthropic Leaks Claude Code Source via npm Packaging Error

On 31 March 2026, Anthropic accidentally exposed the full source code of Claude Code through a 59.8 MB source map file bundled in npm package version 2.1.88. The leak revealed 513,000 lines of unobfuscated TypeScript across 1,906 files, including 44 unreleased feature flags and the complete agent orchestration logic. Within hours, the code was mirrored to GitHub and forked tens of thousands of times.

Secure AI Brain
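Why a single `.map` file can leak half a million lines: the standard source-map JSON format may embed complete original files in its `sourcesContent` array, so anyone who downloads the published tarball can dump them directly. A minimal sketch with a toy map object (the file paths shown are invented examples, not Anthropic's):

```javascript
// Recover embedded originals from a parsed source-map object.
// "sources" holds file paths; "sourcesContent" (when present) holds
// the full text of each corresponding file.
function extractSources(sourceMap) {
  const { sources = [], sourcesContent = [] } = sourceMap;
  const files = {};
  sources.forEach((path, i) => {
    if (typeof sourcesContent[i] === "string") {
      files[path] = sourcesContent[i];
    }
  });
  return files;
}
```

The usual guard is excluding `.map` files from the published package (npm's `files` field or `.npmignore`) or building release artifacts without `sourcesContent`.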
2 April 2026 · Thales

AI Agent-Level Exploits Emerge as Top Enterprise Security Threat

Security researchers are flagging agent-level exploits as one of the fastest-growing attack vectors of 2026, as enterprises roll out agentic AI systems with write access to databases, APIs, and financial systems. Legacy security platforms cannot monitor AI-to-AI interactions, creating demand for a new class of tooling.

Secure AI Brain · Employee Amplification Systems
2 April 2026 · Microsoft

Microsoft Releases Open-Source Agent Governance Toolkit Addressing All 10 OWASP Agentic AI Risks

Microsoft released the Agent Governance Toolkit on 2 April 2026, a free, seven-package open-source system providing runtime security governance for autonomous AI agents. It covers all 10 OWASP agentic AI risks with deterministic, sub-millisecond policy enforcement and integrates directly with LangChain, CrewAI, Google ADK, and Microsoft Agent Framework without requiring code rewrites.

Secure AI Brain · Employee Amplification Systems
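"Deterministic, sub-millisecond policy enforcement" generally means the decision about whether an agent may call a tool is a fixed-rule lookup made before execution, not another model call. The sketch below shows that idea in generic form; the rule names and shape are hypothetical and not the toolkit's actual API.

```javascript
// Fixed policy table: each tool an agent might invoke maps to a
// deterministic allow/deny decision. Unknown tools are denied by default.
const POLICY = {
  "db.read": { allow: true },
  "db.write": { allow: false, reason: "write access requires human approval" },
  "payments.transfer": { allow: false, reason: "financial actions blocked" },
};

// Check a proposed tool call before it runs; a plain object lookup,
// so the decision is deterministic and effectively instantaneous.
function enforce(toolName) {
  const rule = POLICY[toolName];
  if (!rule) return { allow: false, reason: "unknown tool denied by default" };
  return rule;
}
```

Deny-by-default matters here: as new tools are wired into an agent, they stay blocked until someone adds an explicit rule.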
31 March 2026 · Anthropic

Anthropic Mythos Leaked: A Step-Change Model Above Opus

A misconfigured content management system exposed internal Anthropic documents on 27 March 2026, revealing a new model called Claude Mythos, described as a step change above the existing Opus tier. The leaked draft blog warns that Mythos poses unprecedented cybersecurity risks and is far ahead of any other AI model in cyber capabilities. Anthropic has confirmed the model exists and is restricting early access to cyber defence organisations while it improves efficiency before a general release.

Secure AI Brain · Employee Amplification Systems
27 March 2026 · GitHub / Microsoft

GitHub Copilot Will Train on Your Code from 24 April

GitHub has announced that from 24 April 2026, interaction data from Copilot Free, Pro, and Pro+ users will be used to train AI models by default. The data collected includes code snippets, accepted outputs, repository structure, and chat interactions. Users must actively opt out via their Privacy settings before the deadline.

Secure AI Brain · Employee Amplification Systems
25 March 2026 · HiddenLayer

HiddenLayer: 1 in 8 Companies Reporting AI Breaches Linked to Agentic Systems

HiddenLayer has released its 2026 AI Threat Landscape Report, finding that 1 in 8 companies have experienced AI breaches tied to agentic systems. 73% of organisations report internal conflict over who owns AI security, and 31% do not know if they have been breached.

Secure AI Brain