Friday, February 13, 2026
8 stories · 3 min read

Today's Highlights

1. Anthropic Raises $30 Billion, Valuation Soars to $380 Billion

Funding · Large Model · Enterprise Service

Anthropic announced on February 12 that it has completed a $30 billion Series G funding round, lifting its post-money valuation to $380 billion, up sharply from roughly $183 billion previously. The round was led by Coatue and Singapore's sovereign wealth fund GIC, with participation from Microsoft, NVIDIA, and others. The company disclosed annualized revenue of approximately $14 billion, about 80% of it from enterprise customers; its AI coding product Claude Code alone generates around $2.5 billion in annualized revenue. Anthropic said the funds will go toward expanding computing infrastructure, advancing frontier research, and deepening investment in enterprise products and commercialization.

2. OpenAI Launches GPT-5.3-Codex-Spark, Accelerated by Cerebras WSE-3

Model Release · AI Programming · Compute Chip

OpenAI has released GPT-5.3-Codex-Spark, a lightweight programming model built for low-latency inference and real-time collaborative coding, targeting high-frequency iterative scenarios such as rapid prototyping. The model runs on Cerebras' WSE-3 wafer-scale chip, which packs roughly 4 trillion transistors and is designed for ultra-low-latency workloads. This marks the first major deployment under the multi-year agreement, worth more than $10 billion, that the two companies signed last year. For now, Spark is available only as a research preview inside the Codex application for ChatGPT Pro users.

3. Google Upgrades Gemini 3 Deep Think, ARC-AGI-2 Reaches 84.6%

Model Update · Reasoning · AI Research

Google announced a major upgrade to the Deep Think reasoning mode in Gemini 3, aimed at handling complex problems in science, research, and engineering involving incomplete or messy data. Officially reported results include a 48.4% score on Humanity’s Last Exam (without tools), 84.6% on ARC-AGI-2 (verified by the ARC Prize Foundation), a Codeforces rating of 3455 Elo, and performance equivalent to a gold medal at the 2025 International Mathematical Olympiad. These capabilities are now available to Google AI Ultra subscribers via the Gemini app, while enterprise users can apply for early access through the Gemini API.

4. Ai2 Open Sources AutoDiscovery: Automatically Generates Hypotheses and Runs Experimental Loops

AI Research · Open Source · AI Agent

The Allen Institute for AI (Ai2) has introduced and open-sourced AutoDiscovery, an experimental system within its Asta scientific AI ecosystem. It is designed to generate hypotheses from data, design experiments, write and execute Python code, interpret the statistical results, and iterate on new hypotheses, closing the loop on a research workflow. The system measures information gain with 'Bayesian surprise' and uses Monte Carlo Tree Search (MCTS) to select exploration paths; it can retrieve from 108 million academic abstracts and 12 million full-text papers. According to the documentation, it supports uploaded datasets of up to 20 GB and can autonomously run up to 500 experiments and hypotheses to help researchers filter promising directions.
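Bayesian surprise is conventionally defined as the KL divergence between the belief over hypotheses after an experiment and the belief before it. A minimal sketch of that scoring step, assuming a discrete hypothesis space (the function and example values are illustrative, not Ai2's implementation; the full system couples this score with MCTS to pick which experiment to run next):

```python
import math

def bayesian_surprise(prior: list[float], posterior: list[float]) -> float:
    """KL(posterior || prior) over a discrete set of hypotheses:
    how much an experiment's outcome shifted the system's beliefs."""
    return sum(q * math.log(q / p)
               for p, q in zip(prior, posterior) if q > 0)

# Hypothetical example: four competing hypotheses under a uniform prior.
prior = [0.25, 0.25, 0.25, 0.25]
informative = [0.70, 0.10, 0.10, 0.10]    # belief concentrates on hypothesis 0
uninformative = [0.25, 0.25, 0.25, 0.25]  # belief unchanged

print(round(bayesian_surprise(prior, informative), 3))  # 0.446 -> worth exploring
print(bayesian_surprise(prior, uninformative))          # 0.0   -> deprioritize
```

Experiments whose outcomes barely move the posterior score near zero and are pruned, which is how the loop steers its limited experiment budget toward informative directions.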

5. Cloudflare Launches Markdown for Agents: Reduces HTML-to-Markdown Tokens by 80%

Agent Infrastructure · Web Optimization · Cost

Cloudflare has launched 'Markdown for Agents,' which lets agents request web content in Markdown by sending an Accept: text/markdown HTTP request header. The edge network then converts the HTML page to Markdown on the fly, cutting the token waste caused by scripts and layout noise. Cloudflare's example shows a page shrinking from 16,180 tokens as HTML to 3,150 as Markdown, a saving of roughly 80%, which lowers costs for long-context retrieval, web reading, and tool use. The feature also integrates with Content Signals Policy headers, letting websites declare authorization boundaries for training, search, and input use, giving agent traffic a more controlled content output.
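From the client side this is ordinary HTTP content negotiation: no special SDK is needed, only the extra header. A minimal sketch using Python's standard library (the URL is a placeholder, and whether Markdown actually comes back depends on the server honoring the header):

```python
import urllib.request

def fetch_as_markdown(url: str) -> str:
    """Request a Markdown rendering of a page via content negotiation.
    Servers that support it (e.g. Cloudflare's edge) convert the HTML;
    others simply ignore the header and return HTML as usual."""
    req = urllib.request.Request(url, headers={"Accept": "text/markdown"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# The header alone carries the negotiation.
req = urllib.request.Request("https://example.com/",
                             headers={"Accept": "text/markdown"})
print(req.get_header("Accept"))  # text/markdown
```

The quoted savings follow directly from the example numbers: 1 - 3,150 / 16,180 ≈ 0.805, i.e. about 80% fewer tokens for the same page content.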

6. GitHub Addresses AI-Generated Flood of Low-Quality PRs: Introducing PR Limits and Access Thresholds

Open Source Ecosystem · Developer Tools · Governance

GitHub warns that open source is entering an 'Eternal September': generative AI has drastically lowered the cost of filing Issues and PRs while maintainers' review capacity has stayed flat, so low-quality contributions and automation noise are eroding trust in collaboration. GitHub plans to give maintainers finer-grained, repository-level controls, including limits on PR volume, interaction restrictions, and conditional access and routing mechanisms that reserve bandwidth for trusted contributors. The platform is also exploring reputation systems and automated triage, such as scoring submissions against project guidelines, to relieve maintainers of the sorting work they currently absorb.

7. Xiaomi Open Sources Xiaomi-Robotics-0 VLA, 4.7B Embodied Model with ~80ms Latency

Embodied Intelligence · Open Source · Robotics

Xiaomi's technical team has open-sourced its first-generation embodied VLA model, Xiaomi-Robotics-0, a 4.7-billion-parameter model targeting real-time robotic control. It adopts a 'cerebrum + cerebellum' architecture: a multimodal VLM handles instruction and environment understanding, while a diffusion transformer (DiT) generates continuous action chunks. KV-cache sharing, asynchronous inference, and dedicated attention masks reduce overhead and jitter; the documentation claims end-to-end inference latency of roughly 80 ms, with an emphasis on real-time performance on consumer-grade hardware. The team also released a two-stage training recipe and the key mechanisms behind it, aiming to lower the barrier for developers to reproduce and deploy embodied intelligence models.

8. Xiaohongshu Open Sources FireRed-Image-Edit and Releases 15-Task RedEdit Bench

Image Editing · Open Source · Diffusion Model

Xiaohongshu's REDtech team has open-sourced the image editing model FireRed-Image-Edit and launched the RedEdit Bench benchmark, covering 15 common editing and restoration tasks. The team developed a multi-path data generation engine combining instruction-controlled synthesis, structure-controlled synthesis, and template-based synthesis, followed by rigorous cleaning and deduplication to improve complex instruction following and content consistency. Training follows a multi-stage pipeline, incorporating a Layout-Aware OCR reward during reinforcement learning to specifically constrain errors such as incorrect characters, misalignment, and layout collapse in tasks like poster text editing. Additionally, low-level vision tasks such as super-resolution, denoising, and deblurring are unified into instruction fine-tuning to enhance the model’s ability to handle diverse editing scenarios with a single architecture.

