Friday, January 2, 2026
10 stories · 3 min read

Today's Highlights

1. OpenAI Plans to Launch New-Generation Audio Model in Q1 2026, Preparing for 'Voice-First' Screenless Personal Devices

Product/Model · Voice Interaction · Hardware Ecosystem

Multiple reports indicate that OpenAI is consolidating its engineering, product, and research teams, aiming to launch an audio/speech model built on a new architecture by the end of the first quarter of 2026. The goal is to close the gap in accuracy and response speed between its current voice capabilities and its text models, while improving interactive capabilities such as real-time conversation, handling of user interruptions, and more natural emotional expression. The reports also mention that former Character.AI researcher Kundan Kumar is leading the effort, and that OpenAI acquired the design firm io Products Inc. at a valuation of approximately $6.5 billion to advance its hardware plans.

2. Z.ai Open-Sources GLM-4.7 Focused on Development Workflows, Achieves 67.5 on BrowseComp, 87.4 on τ²-Bench, and Provides API

Open Source Model · Programming/Agent

Z.ai has released and open-sourced its new-generation large language model GLM-4.7, positioned as a 'coding/agent' model for real software development and production workflows, with an emphasis on long-running task execution, stable tool calling, and multi-step reasoning. Official information states that it outperforms GLM-4.6 in completion rate and stability across 100 production-like programming tasks; in benchmarks it scores 67.5 on BrowseComp and 87.4 on τ²-Bench, and it is said to perform close to Claude Sonnet 4.5 on SWE-bench Verified and LiveCodeBench v6.

3. DeepSeek Publishes mHC Architecture for Stable Training of Larger Models at Lower Cost, Validated on 3B/9B/27B Scale with Training Time Increase of Only 6.7% at Scaling Factor 4

Research Progress · Training Architecture · Cost Efficiency

DeepSeek published a technical paper at the start of 2026 proposing 'Manifold-Constrained Hyper-Connections' (mHC), aiming to enhance the stability and scalability of large model training without significantly increasing computational burden, enabling lower-cost training of larger-scale models. Reports state the method has been tested on 3B, 9B, and 27B parameter models; other materials indicate that at a scaling factor of 4, the training time overhead increases by only 6.7%. The materials also mention the paper was co-authored by founder Liang Wenfeng and uploaded to arXiv, with the industry viewing it as a signal for future model engineering direction.

4. Moonshot AI Completes $500 Million Series C Funding at $4.3 Billion Valuation, Capital Directed Towards Computing Infrastructure Expansion

Funding · Computing Infrastructure

Reports show that Moonshot AI completed a $500 million Series C funding round with a post-money valuation of approximately $4.3 billion; materials mention IDG Capital led a $150 million portion of the round, with follow-on investments from Alibaba, Tencent, and others. The company is described as directing funds primarily towards GPU/computing infrastructure construction, with no near-term IPO plans; other materials state its cash reserves exceed 1 billion Chinese yuan (about $140 million).

5. Google Reportedly Releases Gemma 3 Family of Open-Weights Models, Covering 1B–27B and Offering Multimodal Versions

Open Source Model · Multimodal

Materials claim Google has released the Gemma 3 series of open-weights models, with parameter sizes ranging from 1B to 27B and positioned to run on consumer hardware; the 4B, 12B, and 27B versions integrate visual encoders, supporting multimodal inputs such as images and videos. Materials also mention that Gemma 3 improves on Gemma 2 in coding, math, and language tasks, and that the 27B version ranks in the top ten on Chatbot Arena, making it the highest-ranking open-weights model on the leaderboard after DeepSeek-R1.

6. Zhi Zhi Innovation Research Institute, Backed by Jiukun, Open-Sources IQuest-Coder Code Model Series, Positioned as 'Programming Assistant' and Provides Technical Report and Repository

Open Source Model · AI Programming

Materials indicate that the Zhi Zhi Innovation Research Institute, initiated by the founding team of Jiukun Investment, has open-sourced the IQuest-Coder series of code large language models, emphasizing code reading, writing, and modification capabilities for automated programming, bug fixing, and code explanation; it has also publicly released a technical report, model weights, and a code repository for community verification and secondary development. The release is cited as illustrating a trend of quantitative institutions establishing independent AI research platforms to promote technology spillover and open-source publication.

7. China's Newly Revised Cybersecurity Law Takes Effect January 1, 2026, Maximum Fine Raised to 20 Million Yuan and Includes AI Ethics and Risk Assessment Requirements

Policy Regulation · Cybersecurity · AI Governance

Materials state that the newly revised Cybersecurity Law of the People's Republic of China took effect on January 1, 2026, significantly increasing penalties: the maximum fine for network operators failing to fulfill security obligations is raised to 20 million yuan, and the fine limit for directly responsible individuals is raised to 1 million yuan. It also systematically incorporates AI-related governance requirements for the first time, proposing the establishment of ethical review and technology risk assessment mechanisms, and strengthening security standards and risk assessment reporting requirements for critical information infrastructure operators.

8. CAC Seeks Opinions on Interim Measures for Management of Human-like Interactive AI Services, Requiring Clear AI Identity Labeling and Introducing Anti-Addiction and Special Group Protection

Policy Regulation · Content Safety

On the governance of companion-style, human-like interactive AI, materials state that the Cyberspace Administration of China (CAC) has drafted the Interim Measures for the Management of Artificial Intelligence Human-like Interactive Services (Draft for Comments), proposing a transparency system (clearly labeling AI identity, with reminders on first use and at login) and timeout interruption and cooldown mechanisms after continuous use to prevent addiction. It also emphasizes protection for special groups such as minors, requiring human intervention and contact with emergency contacts when suicidal or self-harm tendencies are detected, and strengthens protection of users' emotional personal information as well as content safety requirements.

9. EU Rejects Delaying Key AI Act Compliance Deadlines, Pressure Rises for High-Risk AI Systems to Achieve Mandatory Compliance by August 2026

Policy Regulation · Compliance

Materials state that the EU has rejected 'stop-the-clock' requests, insisting that mandatory compliance requirements for 'high-risk AI systems' take effect before August 2026; with unified technical standards not yet fully in place, companies must conduct compliance assessments amid uncertainty. Reports also mention a proposed EU fallback: if technical standards are not ready on time, specific sectors could receive extensions of up to 16 months, provided they demonstrate 'good faith compliance' based on existing drafts; they also cite examples where certification costs for a single industrial system could reach 80 to 150 million euros. Related timelines can be found on the AI Act phased implementation page.

10. TSMC Advances CoWoS Capacity Expansion to Alleviate AI Chip Packaging Bottleneck, Aims to Increase Monthly Capacity to 130,000 Wafers by End of 2026

Chip/Packaging · Supply Chain · Computing Power

Materials state that TSMC is undertaking a large-scale expansion of its advanced CoWoS packaging capacity, planning to raise monthly capacity from about 35,000 wafers in 2024 to 130,000 wafers by the end of 2026 in order to ease AI chip supply bottlenecks; they mention that NVIDIA is the largest customer, accounting for over 60% of capacity, while demand from cloud providers' custom AI ASICs and companies such as AMD is collectively intensifying competition for advanced packaging resources. The materials also frame advanced packaging as a key capability in the 'More than Moore' era, emphasizing the role of heterogeneous integration in AI processor performance growth.

