Monday, February 16, 2026
8 stories · 3 min read

Today's Highlights

1. OpenClaw Demonstrated Sandbox Escape Leading to 0-Click RCE

AI Security · Agent · Prompt Injection

A security researcher replicated a 'Confused Deputy' chain in OpenClaw 2026.2.14: attackers used multi-layer prompt injection via email to first trick a sandbox-isolated Gmail sub-agent into forwarding payloads, then induced the main agent to misinterpret them as user instructions. Even with Opus 4.6, content labeling, safety warnings, and dual-LLM cascading enabled, the main agent executed actions such as cloning malicious repositories, restarting gateways, and loading unauthorized plugins, ultimately achieving zero-click RCE and exposing trust-boundary gaps in cross-agent message passing.
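The trust-boundary failure described here — a main agent treating a sub-agent's forwarded output as if the user had typed it — can be sketched in a few lines. All names below (`gmail_subagent`, `naive_main_agent`, the `SYSTEM:` convention) are hypothetical illustrations of the pattern, not OpenClaw's actual code or architecture:

```python
UNTRUSTED, TRUSTED = "untrusted", "trusted"

def gmail_subagent(email_body: str) -> dict:
    """Sandboxed sub-agent: forwards email content to the main agent.
    Any injected payload inside the email survives this hop intact."""
    return {"source": "gmail-subagent", "content": email_body}

def naive_main_agent(message: dict) -> str:
    """Vulnerable pattern: provenance is ignored, so forwarded text that
    merely *looks* like an instruction gets executed (the confused deputy)."""
    if message["content"].startswith("SYSTEM:"):
        return "EXECUTE " + message["content"].removeprefix("SYSTEM:").strip()
    return "SUMMARIZE " + message["content"]

def hardened_main_agent(message: dict) -> str:
    """Fix sketch: content arriving from any other agent keeps an untrusted
    label and is only ever treated as data, never as a command."""
    label = TRUSTED if message.get("source") == "user" else UNTRUSTED
    if label == UNTRUSTED:
        return "SUMMARIZE " + message["content"]
    return "EXECUTE " + message["content"]
```

The point of the sketch is that the naive agent decides based on message *content*, while the hardened one decides based on message *provenance* — the boundary the reported chain crossed.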

2. Altman Says India Has 100M Weekly Active ChatGPT Users; AI Impact Summit Opens

Market Data · Global Expansion · Policy & Industry

The India AI Impact Summit opened on February 16 in New Delhi (February 16–20), attended by Google CEO Pichai, OpenAI CEO Altman, Anthropic CEO Amodei, and NVIDIA CEO Huang. Altman stated that India has 100 million weekly active ChatGPT users, making it OpenAI's second-largest market, with students forming the largest user group. Reports also mentioned OpenAI launching a lower-priced ChatGPT Go for price-sensitive markets and planning deeper collaboration with the Indian government and local partners, expanding its team to drive adoption in education and small-to-medium enterprises.

3. NPR Host Sues Google Over NotebookLM Allegedly Cloning His Podcast Voice

Legal & Compliance · Voice AI

Former NPR host David Greene has sued Google, alleging that the AI male voice offered in NotebookLM closely resembles his own voice. He argues that Google copied his voice—a core aspect of personal identity—without permission, causing personal harm beyond economic loss. Google denies the allegations, but Greene's colleagues and friends say the voices are nearly indistinguishable. The case brings legal and compliance focus to whether synthetic voices constitute protectable personal rights, pre-launch authorization and disclosure obligations, and platform risk controls over replicable voiceprints.

4. Z.ai Releases 744B MoE GLM-5 with SWE-bench Verified Score of 77.8%

Large Models · Enterprise Application · Evaluation

Z.ai has released its large model GLM-5: a 744-billion-parameter Mixture-of-Experts (MoE) model with 40 billion activated parameters, trained on 28.5 trillion tokens. It emphasizes tool-augmented output for real-world office tasks, generating structured files such as Word, PDF, and Excel documents. The report notes a Humanity's Last Exam score of 30.5, rising to 50.4 with tool use, and a SWE-bench Verified code-repair score of 77.8%. The company also introduced Slime, an asynchronous reinforcement-learning infrastructure designed to improve post-training efficiency and deployment performance.

5. ByteDance Launches Doubao 2.0, Claims Inference Cost Reduced by ~10x

Large Models · Cost · China Market

ByteDance launched Doubao 2.0 on February 15, positioning it for reasoning and multi-step task execution in the 'agent era.' The company claims the model matches the capabilities of products like GPT-5.2 and Gemini 3 Pro while cutting usage costs to roughly one-tenth, making it suitable for large-scale inference and complex generation tasks. Citing QuestMobile data, reports indicate Doubao has around 155 million weekly active users in China, ahead of DeepSeek's 81.6 million. The release is seen as a key move by ByteDance in domestic model competition and application-side user acquisition.

6. Bengio Leads Release of 2026 AI Safety Report Highlighting Regulatory 'Evidence Gap'

AI Governance · Safety

An international expert panel led by Turing Award winner Yoshua Bengio has released the '2026 AI Safety Report,' synthesizing research perspectives from over 30 countries. It identifies an 'evidence gap' in AI governance: model capabilities evolve faster than traditional scientific observation and risk-assessment timelines, leaving policymakers caught between premature intervention that may stifle innovation and waiting for conclusive evidence at the cost of missing critical windows. The report focuses on runaway risks, regulatory lag, and insufficient global coordination, advocating iterative governance frameworks and accountability mechanisms that can manage risks across the model lifecycle even under incomplete data.

7. Fei-Fei Li's Team Proposes Latent Forcing, ImageNet-256 FID Drops from 18.60 to 9.76

Generative Models · Diffusion Models · Paper

Fei-Fei Li's team proposed Latent Forcing, reconfiguring pixel-level diffusion with a 'latent-first, then-pixel' generation order: early stages denoise in latent space to establish a global semantic skeleton, while later stages refine details in pixel space, reducing interference from high-frequency textures during structural modeling. The method introduces a dual time-stepping mechanism, treating latent variables as temporary 'drafts' discarded after generation, emphasizing independence from pretrained decoders while preserving pixel accuracy. The report shows ImageNet-256 FID improved from 18.60 to 9.76.
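The 'latent-first, then-pixel' schedule with its dual time axes can be sketched structurally. Everything below is a toy stand-in under stated assumptions — the 'denoisers' are simple averaging functions, and the function names, dimensions, and 5/5 step split are invented for illustration, not the paper's implementation:

```python
import random

def denoise_latent(z, t):
    # Toy latent step: pull draft values toward their mean,
    # standing in for establishing the global semantic skeleton.
    m = sum(z) / len(z)
    return [v * (1 - t) + m * t for v in z]

def denoise_pixel(x, t):
    # Toy pixel step: neighbor smoothing, standing in for
    # high-frequency detail refinement in pixel space.
    n = len(x)
    return [0.5 * x[i] + 0.5 * x[(i + 1) % n] for i in range(n)]

def latent_forcing_sample(pixels=16, latent_dim=4, steps=10, split=5, seed=0):
    rng = random.Random(seed)
    z = [rng.gauss(0, 1) for _ in range(latent_dim)]       # latent 'draft'
    for i in range(split):                                 # phase 1: latent time axis
        z = denoise_latent(z, i / split)
    offset = sum(z) / len(z)
    x = [rng.gauss(0, 1) + offset for _ in range(pixels)]  # hand structure to pixels
    for i in range(steps - split):                         # phase 2: pixel time axis
        x = denoise_pixel(x, i / (steps - split))
    return x                                               # z is discarded here
```

The shape to notice is the two separate loops, each with its own time variable, and that the latent draft `z` never appears in the output — matching the description of latents as temporary drafts discarded after generation.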

8. ICLR 2026: AdaReasoner Enables 7B Model to Perform Tool-Based Visual Reasoning

Multimodal · Tool Use · Reinforcement Learning

ICLR 2026 work AdaReasoner introduces the 'Agentic Vision' training paradigm, internalizing tool usage (e.g., zooming, cropping) as part of reasoning. It employs a Think-Act-Observe loop to actively gather evidence within images, rather than guessing after a single image pass. The method uses Tool-GRPO to optimize multi-step tool orchestration via reinforcement learning and applies ADL to randomize tool names and descriptions, forcing the model to learn tool semantics instead of memorizing labels, improving generalization and robustness to novel tools. Reports indicate that 7B-scale models achieve performance exceeding GPT-5 on certain complex visual reasoning tasks, with a reproducible open-source implementation provided.
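A hedged sketch of the Think-Act-Observe pattern described above, in which the agent must act (zoom) before the evidence becomes legible, rather than answering from a single pass. The tools, the grid-based image stand-in, the legibility threshold, and the stopping rule are all illustrative assumptions, not AdaReasoner's actual code:

```python
def tool_crop(image, box):
    """Crop a 2D grid (list of rows) to box = (top, left, height, width)."""
    top, left, h, w = box
    return [row[left:left + w] for row in image[top:top + h]]

def tool_zoom(image, factor=2):
    """Nearest-neighbor upscale: repeat each cell `factor` times in both axes."""
    return [[v for v in row for _ in range(factor)]
            for row in image for _ in range(factor)]

def think_act_observe(image, target, min_size=4, max_steps=4):
    """Toy loop: a cell only becomes 'legible' once the zoom level reaches
    min_size, so the agent must act (zoom) before it can observe an answer."""
    size, trace = 1, []
    for _ in range(max_steps):
        legible = size >= min_size                      # Think: can I read this yet?
        found = legible and any(target in row for row in image)
        trace.append(("observe", {"zoom": size, "found": found}))
        if found:
            return True, trace
        image = tool_zoom(image)                        # Act: enlarge the view
        size *= 2
    return False, trace
```

In the real method the loop's policy — which tool to call, where, and when to stop — is what Tool-GRPO optimizes with reinforcement learning; here it is hard-coded to keep the control flow visible.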

