Saturday, January 10, 2026
9 stories · 3 min read

Today's Highlights

1. DeepSeek Reportedly to Launch V4 in February, Focusing on Long Code Prompts

Model Release · Code Generation

Reuters, citing The Information, reports that DeepSeek plans to release its new flagship model V4 around mid-February, coinciding with the Lunar New Year, with a focus on optimized code generation. Sources say the company has run preliminary internal benchmarks in which V4 outperforms competing models such as OpenAI's GPT and Anthropic's Claude on coding tasks, and has achieved breakthroughs in handling ultra-long code prompts; the exact release date may still change.

2. EU Requires X to Retain Grok Data Until Year-End, Image Generation Becomes Paid Feature

Regulation · Content Safety

According to Reuters, on January 8th the European Commission required X to extend its retention of internal documents and data related to the xAI chatbot Grok until the end of 2026, to support an investigation into compliance issues such as generated image content. Meanwhile, xAI has restricted Grok's image generation and editing features on X to paying subscribers to reduce the risk of abuse; regulators and lawmakers in multiple countries continue to apply pressure, and the case may affect product features and moderation strategy.

3. Lightricks Open-Sources LTX-2: 19B Parameters for Synchronous 4K Audio-Visual Generation

Open Source · Video Generation · Audio Generation

Lightricks has reportedly open-sourced LTX-2, a 19-billion-parameter multimodal model that claims to generate 4K video and audio (dialogue, music, sound effects) synchronously within a single architecture, for clips of up to roughly 20 seconds at up to 50 FPS. The model uses a Diffusion Transformer with a dual-stream structure that aligns the audio and video timelines via cross-attention, provides controllable LoRAs for camera motion, structure/pose, and style, and offers optimization and quantization schemes for consumer-grade GPUs.
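LTX-2's actual implementation is not public detail in this summary; as a rough illustration only, the sketch below shows how cross-attention can let tokens on one modality's timeline (video) gather context from another's (audio). All names and shapes here are hypothetical and chosen for clarity, not taken from LTX-2.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(video_tokens, audio_tokens):
    """Video tokens attend to audio tokens (one direction of a dual stream).

    video_tokens: (Tv, D), audio_tokens: (Ta, D). Each video time step
    produces a weighted mix of audio time steps, which is how the two
    timelines can be aligned without sharing a sampling rate.
    """
    d = video_tokens.shape[-1]
    scores = video_tokens @ audio_tokens.T / np.sqrt(d)  # (Tv, Ta)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    return weights @ audio_tokens                        # (Tv, D) audio context

rng = np.random.default_rng(0)
video = rng.normal(size=(50, 64))   # e.g. 50 video frames, 64-dim tokens
audio = rng.normal(size=(200, 64))  # finer-grained audio timeline
ctx = cross_attention(video, audio)
print(ctx.shape)  # (50, 64): one audio-context vector per video frame
```

In a real dual-stream model this runs in both directions (audio attending to video as well) with learned query/key/value projections; the sketch omits those to keep the alignment mechanism visible.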

4. DeepMind Releases TRecViT Causal Video Model, Reduces Memory by 12x

Research Paper · Video Model

Google DeepMind published the TRecViT causal video modeling architecture on OpenReview. It decouples time, space, and channels: the time dimension is handled by gated linear recurrent units (LRUs), the spatial dimension by self-attention, and the channels by MLPs. The paper reports performance comparable to or better than the non-causal ViViT-L on tasks such as SSv2 and Kinetics-400, while using 3x fewer parameters, a 12x smaller memory footprint, and 5x fewer FLOPs, with inference throughput of roughly 300 frames per second.
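The time-axis component can be illustrated with a minimal sketch of a gated linear recurrence, h_t = a ⊙ h_{t-1} + b ⊙ x_t, which is causal by construction since h_t depends only on inputs up to t. This is a generic LRU illustration, not TRecViT's code; the gates here are fixed constants rather than learned parameters.

```python
import numpy as np

def linear_recurrence(x, a, b):
    """Gated linear recurrent unit along time: h_t = a * h_{t-1} + b * x_t.

    x: (T, D) sequence of per-patch features; a, b: (D,) gates (learned
    in a real model, fixed here). Unlike attention over time, the state
    is a single (D,) vector, which is where the memory savings of a
    recurrent time axis come from.
    """
    T, D = x.shape
    h = np.zeros(D)
    out = np.empty((T, D))
    for t in range(T):
        h = a * h + b * x[t]
        out[t] = h
    return out

x = np.ones((4, 2))
a = np.full(2, 0.5)  # decay gate: |a| < 1 keeps the recurrence stable
b = np.full(2, 0.5)
h = linear_recurrence(x, a, b)
print(h[:, 0])  # [0.5, 0.75, 0.875, 0.9375] — converging toward 1.0
```

In the full architecture this recurrence runs per spatial patch, with self-attention mixing patches within each frame and MLPs mixing channels.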

5. NIST Launches GenAI Image Challenge, Workshop in April

Evaluation · Regulation & Standards

The U.S. NIST has launched the GenAI Image Challenge, a standardized evaluation for generative images: the Image-G task covers text-to-image generation, and the Image-D task covers distinguishing whether an image was created by AI or by a human. Participants must register, sign a data license agreement, and submit system outputs on a set schedule; scores will be posted to a public leaderboard. Evaluation metrics include AUC and EER, and NIST plans to hold a workshop in April 2026 to summarize results and advance comparable standards.
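For the Image-D detection task, EER (equal error rate) is the operating point where the false positive rate equals the false negative rate. A minimal sketch of computing it from detector scores (not NIST's official scoring code; the threshold sweep is a simplification of ROC interpolation):

```python
import numpy as np

def eer(scores, labels):
    """Equal error rate for a detector scoring images as AI-generated.

    scores: higher = more likely AI; labels: 1 = AI-generated, 0 = human.
    Sweeps candidate thresholds and returns the rate at the point where
    false positive rate and false negative rate are closest.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    best = (1.0, 0.0)  # (fpr, fnr) with the largest gap as a start
    for t in np.unique(scores):
        pred = scores >= t
        fpr = np.mean(pred[labels == 0])   # human images flagged as AI
        fnr = np.mean(~pred[labels == 1])  # AI images missed
        if abs(fpr - fnr) < abs(best[0] - best[1]):
            best = (fpr, fnr)
    return (best[0] + best[1]) / 2

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(round(eer(scores, labels), 3))  # 0.333: at threshold 0.7, fpr = fnr = 1/3
```

AUC, the other listed metric, summarizes the same ROC curve across all thresholds rather than at a single operating point.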

6. Anthropic Reportedly Seeks $10 Billion in Funding, Valuation at $350 Billion

Funding

Anthropic is reportedly seeking a new funding round of approximately $10 billion, with lead investors including Singaporean sovereign fund GIC and Coatue, which could value the company at $350 billion post-funding. The report also states that the company expects to reach break-even for the first time in 2028 and plans to pursue an IPO. It has previously taken investment from Alphabet and Amazon and, under agreements with Microsoft and NVIDIA, has committed to spending at least $30 billion on Azure compute.

7. TrendForce: HBM4 Mass Production Delayed Until After Late Q1 2026

Hardware · HBM

Tom's Hardware, citing TrendForce, suggests HBM4 mass production may slip until after the end of Q1 2026. The cause is NVIDIA raising memory specifications for its next-generation Rubin platform while extending its Blackwell shipment strategy, prompting SK Hynix, Samsung, and Micron to adjust their HBM4 designs. The report says Rubin may require per-pin data rates exceeding 11 Gbps (possibly up to 13 Gbps) and per-stack bandwidth over 2.6 TB/s; the delay would also affect product timelines such as AMD's MI400.
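The per-pin and per-stack figures are related by simple arithmetic: bandwidth per stack is the per-pin rate times the interface width. The sketch below assumes HBM4's 2048-bit per-stack interface (double HBM3's 1024 bits); treat the pin count as an assumption when checking the reported numbers.

```python
# Per-stack bandwidth = per-pin data rate x interface width.
PINS = 2048  # data pins per HBM4 stack (assumed, 2x the HBM3 bus)

def stack_bandwidth_tbps(gbps_per_pin, pins=PINS):
    """Bandwidth of one stack in TB/s from a per-pin rate in Gb/s."""
    return gbps_per_pin * pins / 8 / 1000  # bits -> bytes, Gb -> TB

print(stack_bandwidth_tbps(10.0))  # 2.56 TB/s, near the reported "over 2.6 TB/s"
print(stack_bandwidth_tbps(13.0))  # 3.328 TB/s at the upper Rubin rate
```

At the cited floor of 11 Gbps, the same formula gives about 2.8 TB/s per stack, consistent with the "over 2.6 TB/s" requirement.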

8. Tailwind Labs Cuts 75% of Staff as AI Coding Undermines Documentation Monetization

Developer Tools · Business Model

Multiple community newsletters report that Tailwind Labs, the company behind Tailwind CSS, laid off about 75% of its engineering team this week. Tailwind CSS reportedly sees 75 million monthly downloads, but AI coding tools and agents have become proficient at generating Tailwind syntax, so developers increasingly rely on AI rather than the official documentation, eroding documentation traffic and paid conversions. Within 48 hours of the news, companies including Cursor, Shopify, and Vercel stepped in with sponsorships, underscoring the pressure on the 'documentation monetization' model for developer tools.

9. Havas Releases Ava Enterprise LLM Portal, Rollout in Phases from Spring

Enterprise AI · Product Launch

Advertising group Havas launched its global LLM portal Ava at CES, positioning it as a unified, compliant enterprise gateway that provides secure access to a choice of advanced models (reportedly including Gemini 3, GPT-5, and Claude Opus). Ava is planned to open in phases to clients and partners starting in Spring 2026 and will be promoted as part of the company's Converged.AI operating system; Havas also announced a partnership with Akkio to strengthen agent capabilities.


Don't Miss Tomorrow's Insights

Join thousands of professionals who start their day with AI Daily Brief