Three reasons why DeepSeek’s new model matters


1. It holds its own against the best closed and open models.

In terms of performance, V4 is, perhaps unsurprisingly, a big jump from R1, and it appears to be a strong alternative to virtually all of the latest frontier models. On the major benchmarks, according to results shared by the company, DeepSeek V4-Pro competes with leading closed-source models, matching the performance of Anthropic’s Claude-Opus-4.6, OpenAI’s GPT-5.4, and Google’s Gemini-3.1. Compared with other open-source models, such as Alibaba’s Qwen-3.5 and Z.ai’s GLM-5.1, DeepSeek V4 exceeds them all on coding, math, and STEM problems, making it one of the strongest open-source models ever released.

DeepSeek also says that V4-Pro now ranks among the strongest open-source models on benchmarks for agentic coding tasks and performs well on other tests that measure its ability to carry out multistep tasks. Its writing ability and world knowledge also lead the field, according to benchmarking results shared by the company.

In a technical report released alongside the model, DeepSeek shared results from an internal survey of 85 experienced developers: More than 90% included V4-Pro among their top model choices for coding tasks.

DeepSeek says it has specifically optimized V4 for popular agent frameworks such as Claude Code, OpenClaw, and CodeBuddy.

2. It delivers on a new approach to memory efficiency.

One of the key innovations of V4 is its long context window: the amount of text the model can process at once. Both versions can handle 1 million tokens, which is large enough to fit all three volumes of The Lord of the Rings and The Hobbit combined. The company says this context window size is now the default across all DeepSeek services, and that it matches what is offered by cutting-edge versions of models like Gemini and Claude.
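The book comparison checks out as a back-of-the-envelope estimate. The sketch below assumes a common rule of thumb of roughly 0.75 English words per token, and uses rough public word counts for the books; neither figure comes from DeepSeek.

```python
# Back-of-the-envelope check: do LOTR + The Hobbit fit in 1M tokens?
# WORDS_PER_TOKEN is a rough rule of thumb for English text, not exact.
WORDS_PER_TOKEN = 0.75

lotr_words = 480_000   # approximate length of the full trilogy
hobbit_words = 95_000  # approximate

total_tokens = (lotr_words + hobbit_words) / WORDS_PER_TOKEN
print(f"~{total_tokens:,.0f} tokens")  # comfortably under 1,000,000
```

At roughly 770,000 tokens, all four books fit with room to spare.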

But what matters is not just that DeepSeek has made this leap, but how it did so. V4 makes significant architectural changes relative to the company’s earlier models, especially in the attention mechanism, the component of an AI model that relates each part of a prompt to every other part. As the prompt gets longer, these pairwise comparisons become much more costly, making attention one of the main bottlenecks for long-context models.
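To see why those comparisons blow up, here is a minimal sketch (not DeepSeek's implementation) of the cost of standard full attention: every token is scored against every other token, so the work grows quadratically with prompt length.

```python
# Minimal illustration of the quadratic cost of vanilla self-attention:
# the score matrix has one entry per (query token, key token) pair.

def attention_comparisons(num_tokens: int) -> int:
    """Pairwise query-key comparisons in standard full attention."""
    return num_tokens * num_tokens

# Growing the prompt 1,000x (1K -> 1M tokens) grows the work 1,000,000x.
for n in (1_000, 10_000, 1_000_000):
    print(f"{n:>9,} tokens -> {attention_comparisons(n):,} comparisons")
```

A 1-million-token prompt implies a trillion pairwise scores per attention layer, which is exactly why long-context models need a cheaper alternative to the naive mechanism.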
