Humans& raises $480 million in seed round, valued at $4.5 billion
Funding · Startup
Reuters reports that AI startup Humans& has closed a $480 million seed round at a pre-money valuation of approximately $4.5 billion. Founded by former researchers from OpenAI, Alphabet, and xAI, the company says the funds will go toward generative AI model development and team expansion. The round is described as one of the few seed rounds of this size in recent years, highlighting how capital is concentrating around top-tier talent and frontier technical directions. The report did not disclose lead investors, model parameters, or API timelines; outside observers will be watching the company's technology deployment and commercialization pace. With compute costs running high, its capital efficiency will face increasingly close scrutiny.
ServiceNow signs three-year agreement with OpenAI to integrate GPT-5.2
Enterprise Software · Partnership
CNBC reports that ServiceNow has signed a three-year partnership with OpenAI to integrate GPT-5.2 into its enterprise workflow platform. Financial terms were not disclosed. The collaboration covers AI agents and voice capabilities, aiming to accelerate automation for customers in customer service, IT, and business processes. In recent years, ServiceNow has bolstered its agent capabilities through acquisitions like Moveworks, and continues advancing planned acquisitions of Armis and Veza, emphasizing the creation of an 'AI Control Tower'. Such deep integrations with frontier models are shifting competitive focus in enterprise software toward agent orchestration and data access points.
StepFun open-sources Step3-VL-10B, discloses 1.2T training data
Open Source Model · Multimodal
StepFun (Jieyue Xingchen) has open-sourced its vision-language model Step3-VL-10B, a 10B-parameter model targeting visual understanding, OCR, and multimodal reasoning. The company says the model introduces a Parallel Coordination Reasoning (PaCoRe) mechanism, combined with large-scale reinforcement learning to improve perception in complex scenes and logical consistency. The model was pre-trained end-to-end with full-parameter training on 1.2 trillion high-quality data samples. Weights and deployment instructions have been released to ease developer customization. Positioned to approach the performance of larger models at lower compute cost, it targets multimodal agent backbones for both edge and cloud deployment; the team also notes that the 10B scale helps reduce deployment cost and inference latency.
Liquid AI open-sources LFM2.5-1.2B-Thinking, under 900MB on device
On-Device Inference · Open Source Model
Liquid AI has released and open-sourced LFM2.5-1.2B-Thinking, a compact reasoning model designed for on-device use. With a memory footprint under 900MB, it can run offline on smartphones and other edge devices. The company claims it outperforms Qwen3-1.7B's thinking mode on math, tool-use, and programming tasks, and reduces the rate of repetitive-loop failures to 0.36%. The project is supported by ecosystems such as Ollama, with over 6 million downloads reported on Hugging Face. Training combined curriculum-style reinforcement learning with model-merging strategies, emphasizing reduced 'doom looping' and improved stability.
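Liquid AI does not publish how it measures looping failures, but the failure mode itself is easy to picture: the model emits the same span of tokens back-to-back indefinitely. A minimal illustrative detector (a sketch, not Liquid AI's method; the function name and thresholds are hypothetical) flags output whose trailing n-gram is an immediate repeat:

```python
# "Doom looping": a model repeating the same token span endlessly.
# Illustrative detector (not Liquid AI's actual method): flag output
# whose final n tokens occur `repeats` times back-to-back.
def is_looping(tokens, n=3, repeats=2):
    """True if the last n tokens repeat `repeats` times consecutively."""
    if len(tokens) < n * repeats:
        return False
    tail = tokens[-n:]
    # Compare each of the last `repeats` n-gram windows against the tail.
    return all(tokens[-(i + 1) * n : -i * n or None] == tail
               for i in range(repeats))

print(is_looping(["a", "b", "c", "a", "b", "c"]))         # repeated 3-gram
print(is_looping(["the", "cat", "sat", "on", "a", "mat"]))  # no repetition
```

A production system would more likely penalize repetition during decoding or training rather than post-hoc detection, but the check conveys what the reported 0.36% failure rate is counting.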
AgentCPM open-sources two agent models: Explore 4B and Report 8B
AI Agent · Open Source
Tsinghua University's NLP Lab, Renmin University, ModelBest, and the OpenBMB community have jointly open-sourced AgentCPM, an agent infrastructure stack, releasing two models: Explore and Report. Explore is a lightweight 4B-parameter agent model that reaches SOTA on long-horizon agent benchmarks such as GAIA and HLE and is suitable for edge deployment. Report, built on MiniCPM4.1-8B, can generate long-form professional reports locally and offline. Code and toolchains are released under the Apache-2.0 license. The project emphasizes self-hosting and security, offering a reusable foundation for agent training and deployment.
Amazon Bedrock Knowledge Base launches multimodal retrieval, supporting audio-video RAG
Cloud Service · RAG · Multimodal
AWS announced the launch of multimodal retrieval capabilities in Amazon Bedrock Knowledge Bases, natively supporting RAG workflows for text, images, audio, and video. The new feature leverages Amazon Nova's multimodal embeddings to encode different media types into a shared vector space, enabling cross-modal recall. AWS also provides an automated pipeline converting multimedia into timestamped text representations, facilitating compliance and long-content search, with managed ingestion, chunking, embedding, and vector storage. For enterprises, this reduces engineering overhead from building custom multimedia preprocessing pipelines and maintaining multiple indexing systems.
Moonshot AI rumored at $4.8B valuation, new trillion-parameter model begins API beta
Funding · Large Model
Media reports suggest Moonshot AI (Kimi) has reached a post-money valuation of approximately $4.8 billion, with over 10 billion RMB in cash reserves. The funds will go toward expanding compute clusters and developing next-generation models. Reports indicate the company is preparing a new trillion-parameter model focused on strengthening multimodal, agent, and AI-coding capabilities, incorporating a linear attention mechanism called 'Delta Attention'. The materials note that the model's API has entered internal testing, with applications open to verified enterprise users; official release timing and pricing remain unconfirmed. Much of this stems from market rumors and media reporting, and absent official confirmation it should be treated as plans rather than established facts.
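No technical details of 'Delta Attention' are public, but what makes any linear attention attractive at trillion-parameter scale is its recurrence: instead of a KV cache that grows with sequence length, the model keeps a fixed-size state updated once per token. The toy sketch below shows that generic recurrence only (it is not Moonshot's mechanism; dimensions and values are made up):

```python
# Generic linear attention recurrence (a sketch; the reported 'Delta
# Attention' is unpublished). The state S has fixed size d x d:
#   S_t = S_{t-1} + outer(v_t, k_t);   y_t = S_t @ q_t
d = 2  # toy head dimension

def outer(v, k):
    return [[vi * kj for kj in k] for vi in v]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

S = [[0.0] * d for _ in range(d)]  # state size independent of sequence length
tokens = [([1.0, 0.0], [1.0, 0.0], [2.0, 0.0]),   # (q, k, v) per token
          ([0.0, 1.0], [0.0, 1.0], [0.0, 3.0])]
outputs = []
for q, k, v in tokens:
    S = mat_add(S, outer(v, k))    # constant-time, constant-memory update
    outputs.append(mat_vec(S, q))
print(outputs)
```

Delta-rule variants in the literature additionally subtract the old value associated with a key before writing the new one, which is presumably what the name alludes to, but that remains speculation until Moonshot publishes details.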
Musk announces open-sourcing of 𝕏's redesigned recommendation algorithm
Open Source · Recommendation System
Ifanr reports that Musk has announced the open-sourcing of 𝕏's redesigned recommendation algorithm, with the core pipeline driven by a Grok-based Transformer model emphasizing 'zero manual feature engineering'. Public materials break the recommendation flow into retrieval and scoring stages, assigning weights and filters to actions such as likes, reposts, and blocks to balance relevance and diversity. While the open code lets external researchers directly examine traffic-distribution logic and potential biases, actual deployment policies remain controlled by the platform. The team says the algorithm will keep iterating, with regular repository updates. The added transparency may shape discussions around advertising and content governance, pushing traffic-ranking rules further into the public domain.
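The retrieval-then-scoring structure described above can be sketched as a weighted sum over predicted engagement probabilities, with strongly negative actions acting as filters. This is an illustration of the general pattern only; the action names, weights, and the 0.1 filter threshold are hypothetical, not 𝕏's published values:

```python
# Illustrative second-stage scoring: combine predicted per-action
# probabilities with hand-set weights. Weights here are made up.
ACTION_WEIGHTS = {"like": 1.0, "repost": 2.0, "reply": 1.5, "block": -10.0}

def score(candidate):
    """Weighted sum of predicted action probabilities for one post."""
    return sum(ACTION_WEIGHTS[a] * p for a, p in candidate["probs"].items())

def rank(candidates, k=2):
    # Hard filter: drop posts likely to trigger a block, then sort by score.
    visible = [c for c in candidates if c["probs"].get("block", 0.0) < 0.1]
    return [c["id"] for c in sorted(visible, key=score, reverse=True)[:k]]

posts = [
    {"id": "a", "probs": {"like": 0.30, "repost": 0.05, "reply": 0.02, "block": 0.01}},
    {"id": "b", "probs": {"like": 0.10, "repost": 0.20, "reply": 0.01, "block": 0.02}},
    {"id": "c", "probs": {"like": 0.50, "repost": 0.10, "reply": 0.05, "block": 0.30}},
]
print(rank(posts))  # post "c" is filtered out despite its high like probability
```

The open-sourced repository is where the real weights and filters can be inspected; the point of the sketch is only that such numbers, once public, directly encode the relevance-versus-diversity trade-offs the article mentions.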
DeepSeek updates FlashMLA code, 'MODEL1' hints at new architecture
Research Update · Open Source Project
DeepSeek's recent update to the FlashMLA repository includes references to 'MODEL1': the commit touches 114 files and mentions MODEL1 28 times, with the references appearing alongside V32. Observers speculate that MODEL1 may not be a minor revision of DeepSeek-V3.2 but an experimental new architecture, differing in KV-cache layout, sparsity handling, and FP8 decoding, all key memory-optimization areas. For now this information comes solely from code changes; whether the model will ship, and what it can do, awaits official confirmation. Such 'roadmap-from-repository' signals also suggest domestic model teams are weighting efficiency optimization more heavily.
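Why the KV cache is such a high-value target is clear from back-of-envelope arithmetic: standard multi-head attention caches full keys and values per layer per token, while an MLA-style design caches a small compressed latent instead. The numbers below (60 layers, a 128-head x 128-dim configuration, a 576-dim cached latent, FP16 elements) are illustrative assumptions, not DeepSeek's actual configs:

```python
# Back-of-envelope KV-cache sizing: why cache layout and element
# precision (e.g. FP8 halving bytes_per_elem) dominate memory cost.
def kv_cache_bytes(layers, tokens, per_token_dim, bytes_per_elem):
    """Total cache = layers * tokens * cached dims per token * element size."""
    return layers * tokens * per_token_dim * bytes_per_elem

layers, tokens = 60, 128_000  # hypothetical model depth and context length

# Standard MHA caches full K and V: heads * head_dim * 2 dims per token.
mha = kv_cache_bytes(layers, tokens, per_token_dim=128 * 128 * 2,
                     bytes_per_elem=2)
# MLA-style caching stores one small compressed latent per token instead.
mla = kv_cache_bytes(layers, tokens, per_token_dim=576, bytes_per_elem=2)

print(f"MHA cache: {mha / 2**30:.1f} GiB")  # hundreds of GiB
print(f"MLA cache: {mla / 2**30:.1f} GiB")  # single-digit GiB
```

Dropping `bytes_per_elem` to 1 (FP8 decoding, one of the areas the MODEL1 changes reportedly touch) would halve either figure again, which is why these three levers appear together in the speculation above.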