Meta’s latest open-source AI models are a shot across the bow of the more expensive closed models from OpenAI, Google, Anthropic and others. They are also good news for businesses, which could see the cost of deploying artificial intelligence (AI) fall, according to experts.

The social media giant has released two models from its Llama family: Llama 4 Scout and Llama 4 Maverick. They are Meta’s first natively multimodal models, meaning they were built from the ground up to handle text and images rather than having those capabilities bolted on.

Llama 4 Scout’s unique proposition is a context window of up to 10 million tokens, which translates to roughly 7.5 million words. The record holder to date has been Google’s Gemini 2.5, at 1 million tokens with 2 million on the way. The bigger the context window, the area where users enter their prompts, the more data and documents one can upload to the AI chatbot.
Ilia Badeev, head of data science at Trevolution Group, told PYMNTS that his team was still marveling at Gemini 2.5’s 1 million-token context window when Llama 4 Scout came along with 10 million. “This is an enormous number. With 17 billion active parameters, we get a ‘mini’ level model (super-fast and super-cheap) but with an astonishingly large context. And as we know, context is king,” Badeev said. “With enough context, Llama 4 Scout’s performance on specific applied tasks could be significantly better than many state-of-the-art models.”
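As a rough illustration of the tokens-to-words figures cited above, the short sketch below assumes the common rule of thumb of about 0.75 words per token; the true ratio varies by tokenizer and by the text being encoded, so the numbers are estimates rather than specifications.

# Back-of-the-envelope conversion from context-window tokens to words.
# Assumes ~0.75 words per token (an assumed average, not a Llama 4 spec).

WORDS_PER_TOKEN = 0.75

def tokens_to_words(tokens: int, words_per_token: float = WORDS_PER_TOKEN) -> int:
    """Estimate how many English words fit in a context window of `tokens` tokens."""
    return round(tokens * words_per_token)

print(tokens_to_words(10_000_000))  # Llama 4 Scout: ~7,500,000 words
print(tokens_to_words(1_000_000))   # Gemini 2.5: ~750,000 words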