Google Enables Some Gemini Features by Default in Gmail, Covering 3 Billion Users
Product · Consumer App
Google announced that new Gemini features are rolling out to Gmail, reaching over 3 billion users: automatic email thread summarization, one-click "suggested replies" based on email context, and upgraded grammar checking. Some AI capabilities will be enabled by default in the inbox; users who do not want them must manually opt out in settings. The features will launch first in US English, with some advanced capabilities reserved for paid subscribers. The move is seen as a key step in Google's effort to embed Gemini deeply into its core consumer products and strengthen its generative-AI entry point.
SAMR and CAC Release New Regulations for Livestream E-commerce: AI Hosts Must Be Labeled, Effective February
Policy · Content Governance
The State Administration for Market Regulation (SAMR) and the Cyberspace Administration of China (CAC) have released the "Network Trading Platform Rule Supervision and Management Measures" and the "Livestream E-commerce Supervision and Management Measures," both taking effect on February 1, 2026. The new rules reinforce platform responsibilities around personal information protection, data security, rule publication, and public comment solicitation, and bring private-domain livestreams within regulatory scope. They require clear, continuous labeling of AI-generated content such as digital-human hosts. Traffic control also joins the governance toolkit: platforms may restrict features, throttle traffic, or shut down the accounts of violators.
OpenAI Conducts All-Stock 'Acqui-hire' of Convogo Team, Product to Shut Down
M&A · Enterprise Services
OpenAI announced an all-stock acquisition of the team behind the executive-coaching AI tool Convogo, a classic "acqui-hire": OpenAI is not taking on Convogo's intellectual property or product, but is hiring its three co-founders to strengthen capabilities related to its AI Cloud business. Convogo's existing product will be gradually shut down. Reports indicate this is OpenAI's ninth acquisition in the past year, continuing its strategy of building capabilities and delivery teams through M&A to advance commercialization and enterprise services.
Hyundai Plans Annual Production of 30,000 Atlas Humanoid Robots by 2028, Training Uses NVIDIA Stack
Robotics · Manufacturing
Hyundai Motor disclosed production plans for Boston Dynamics' Atlas humanoid robot at CES 2026: by 2028, its factory in Georgia, USA, will reach an annual capacity of 30,000 Atlas units, aimed at heavy physical labor in factories and similar settings, with humans shifting toward supervision and maintenance. Hyundai also plans to deploy Spot and Stretch robots in "software-defined factories" and to use the NVIDIA AI stack for training and development. The plan has raised union concerns about job displacement, while the company emphasizes it will create new jobs and skill demands.
AI21 Open-Sources Jamba2: 256K Context, Includes 3B and 52B MoE Variants
Open Source · Large Language Model
AI21 Labs released the Jamba2 model family as open source under Apache 2.0, targeting enterprise production scenarios such as knowledge Q&A and document summarization, with an emphasis on reliability and efficiency. The series includes a 3B dense version and a Mini mixture-of-experts (MoE) version (52B total parameters, 12B activated), with a 256K context window, focusing on factual accuracy and instruction following at a lower memory footprint. The models can be downloaded and served via Hugging Face and AI21 Studio, supporting local or cloud deployment for easy integration into enterprise workflows in controlled environments.
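The gap between 52B total and 12B activated parameters comes from mixture-of-experts routing: each token is sent to only a few experts, so per-token compute tracks the activated subset rather than the full model. A minimal top-k routing sketch (illustrative only; dimensions and router are mine, not Jamba2's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, top_k, d = 8, 2, 16            # route each token to 2 of 8 experts
W_gate = rng.normal(size=(d, n_experts))  # router ("gating") weights
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def moe_forward(x):
    """Send token x to its top-k experts and mix their outputs."""
    logits = x @ W_gate
    top = np.argsort(logits)[-top_k:]                          # k best experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over them
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=d)
y = moe_forward(x)
# Only top_k of n_experts weight matrices are touched per token (2/8 here),
# so activated parameters — not total parameters — drive inference cost.
```

This is why an MoE model can carry 52B parameters for capacity while paying roughly a 12B-model's compute per token.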
CrowdStrike Plans $740 Million Acquisition of SGNL to Strengthen Runtime Identity Access Control
Security · M&A
CrowdStrike announced the acquisition of identity-management startup SGNL for nearly $740 million to bolster identity security on its Falcon platform. SGNL positions itself as a "runtime" access-control layer: it makes dynamic decisions at access time, evaluates continuously, and revokes permissions when necessary, covering systems such as AWS IAM and Okta. CrowdStrike said the growth of cloud automation scripts, service accounts, and AI agents exposes gaps in static permission models and expands the attack surface. The deal is expected to close in CrowdStrike's fiscal first quarter of 2027.
DeepSeek Expands R1 Report to 86 Pages: ~$294K Training Cost and RL Stack Disclosed
Paper · Training · Open Source
DeepSeek updated the R1 technical report to version 2, expanding it from 22 to 86 pages with added detail on the GRPO algorithm, the reinforcement-learning training infrastructure, and the four-stage training pipeline, along with multiple released intermediate checkpoints. The report puts R1's incremental training cost at approximately $294,000, with the R1-Zero phase costing about $202,000 on 648 H800 GPUs for 198 hours. The team also added safety evaluations and deployment risk controls, reporting an average score of 95.0% across several safety benchmarks while acknowledging that boundary issues such as copyright and jailbreaking still need improvement.
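GRPO's core idea is to drop the learned value baseline of PPO: sample a group of completions per prompt, score them, and normalize each reward against the group's mean and standard deviation. A minimal sketch of that advantage computation (the full objective adds a clipped probability ratio and a KL penalty, which the report details):

```python
import statistics

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantage: z-score each reward within its own group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)          # population std over the group
    return [(r - mean) / (std + eps) for r in rewards]

# One prompt, four sampled completions scored by a reward model or verifier:
advs = grpo_advantages([1.0, 0.0, 1.0, 0.0])
# Completions above the group mean get positive advantage, below get negative,
# so the policy is pushed toward its better-than-average samples.
```

Because the baseline comes from the group itself, no separate critic network needs to be trained or stored, which is part of what keeps the RL stage cheap.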
TanStack Releases TanStack AI Toolkit: Framework-Agnostic, Positioned as Vercel AI SDK Alternative
Developer Tools · Open Source
TanStack released TanStack AI, an open-source AI toolkit in alpha that emphasizes framework agnosticism and reduced vendor lock-in: a single set of interfaces connects to OpenAI, Anthropic, Gemini, Ollama, and others, and can be reused across runtimes and frontend frameworks. Its "isomorphic" tool system stresses end-to-end TypeScript type safety, letting developers define tools once and then implement and call them on the client or server as appropriate. The project is positioned as an open-source alternative to the Vercel AI SDK, letting applications connect directly to the model services of their choice.
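The "one interface, many providers" idea is essentially the adapter pattern. A conceptual sketch (in Python for illustration; TanStack AI itself is TypeScript, and all names here are hypothetical, not the library's API):

```python
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    """One interface each backend (OpenAI, Anthropic, local model, ...) implements."""
    @abstractmethod
    def complete(self, messages: list) -> str: ...

class EchoProvider(ChatProvider):
    """Stand-in backend so the sketch runs without network access or API keys."""
    def complete(self, messages):
        return "echo: " + messages[-1]["content"]

def run_chat(provider: ChatProvider, user_text: str) -> str:
    """Application code depends only on the interface, never on a vendor SDK."""
    return provider.complete([{"role": "user", "content": user_text}])

reply = run_chat(EchoProvider(), "hello")
# Swapping providers means passing a different ChatProvider; run_chat is untouched.
```

This decoupling is what lets the same application code move between model vendors and runtime environments without rewrites.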
SimpleLLM Open-Sources 950-Line Inference Engine: 4,041 tok/s Throughput on H100
Open Source · Inference · Infrastructure
The open-source project SimpleLLM released a minimalist LLM inference engine of roughly 950 lines of code, aimed at learning and secondary development. It defaults to asynchronous serving with continuous batching, and uses techniques such as CUDA Graphs, slot-based KV caching, fused operators, FlashAttention 2, paged KV caching, and GQA to boost single-GPU throughput. The author's benchmark shows 4,041 tokens/sec on a single NVIDIA H100 at batch=64, slightly higher than vLLM. It currently supports only the OpenAI/gpt-oss-120b model, with plans to add tensor parallelism and support for more architectures.
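Paged KV caching treats GPU KV memory like virtual memory: each sequence owns a list of fixed-size blocks drawn from a shared pool, so continuous batching can admit and retire requests without fragmenting the cache. A toy allocator showing the bookkeeping (assumptions mine; this is not SimpleLLM's code):

```python
BLOCK_SIZE = 16  # tokens of K/V state per block

class PagedKVCache:
    """Toy allocator: each sequence owns fixed-size blocks from a shared pool."""
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))   # pool of free block ids
        self.tables = {}                      # seq_id -> [block ids]
        self.lengths = {}                     # seq_id -> tokens written

    def append_token(self, seq_id):
        n = self.lengths.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:               # current block full, grab a new one
            self.tables.setdefault(seq_id, []).append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def release(self, seq_id):
        """A finished request returns its blocks for newly admitted ones."""
        self.free.extend(self.tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

cache = PagedKVCache(num_blocks=8)
for _ in range(20):                           # 20 tokens -> ceil(20/16) = 2 blocks
    cache.append_token("req-a")
cache.release("req-a")                        # both blocks return to the pool
```

Because blocks are recycled the instant a request finishes, the scheduler can keep the batch full — the key to continuous batching's throughput advantage over static batches.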