Mistral AI

Mistral AI develops open-source foundation models. These are an alternative to closed, proprietary AI models like those developed by OpenAI, Google, and Anthropic. The company also focuses on developing more efficient and cost-effective models, targeting the “performance-cost frontier.” For example, in October 2023 a third-party source estimated that the Mistral 7B model was about 187x cheaper than GPT-4 and 9x cheaper than GPT-3.5. As of December 2023, the Mistral 7B model had been downloaded 2.1 million times. As of January 2024, the Mixtral 8x7B model had been downloaded 337K times. As of April 2024, Mistral AI counted companies such as Brave, BNP Paribas, Orange, and Cloudflare among its customers.

Founding Date

Jan 1, 2023

Headquarters

Paris, Ile-de-France

Total Funding

$1B

Stage

Series B

Employees

11-50

Careers at Mistral AI

Memo

Updated

April 20, 2024

Reading Time

22 min

Thesis

Businesses are increasingly recognizing the potential of generative AI across various sectors. According to a June 2023 report, generative AI could add between $2.6 trillion and $4.4 trillion annually to the world economy. The potential market for foundation models — the starting point to develop other AI models — “may encompass the entire economy” according to a September 2023 report.

In 2020, OpenAI’s paper "Scaling Laws for Neural Language Models" showed that large language models (LLMs) improve directly with increases in model size, data, and computing power. As such, AI has seen significant progress due to advancements in computing technology and an exponential increase in data availability and variety. Data worldwide increased by 60-70% annually as of November 2022. From 1956 to 2015, computing performance had increased a trillion-fold. General compute power was estimated to increase tenfold by 2030, and AI compute power would increase by a factor of 500 during the same period, according to a 2021 report.

Founding Story

Mistral AI was founded by Arthur Mensch (CEO), Timothee Lacroix (CTO), and Guillaume Lample (CSO) in 2023. The three met at university, where they all studied artificial intelligence between 2011 and 2014.

Mensch spent much of his career attempting to make AI and machine-learning systems more efficient, first as a postdoctoral researcher at the École Normale Supérieure between 2018 and 2020, and later at Google DeepMind, where he contributed to projects such as Retro and Chinchilla between 2020 and 2023. Lacroix and Lample worked at Meta’s AI division from 2014 to 2023, first as research interns and later as PhD students and researchers. They worked together on papers such as “LLaMA: Open and Efficient Foundation Language Models,” published in February 2023.

In 2021, the three founders started talking about the direction they saw AI taking, noting the development of the technology was accelerating and that there was an opportunity to do things differently; instead of following a proprietary model approach, they argued for an open-source approach. In a December 2023 interview, Mensch stated that the goal behind Mistral AI was “to create a European champion with a global vocation in generative artificial intelligence, based on an open, responsible, and decentralized approach to technology." In a February 2024 interview, Mensch highlighted that efficiency was a key aspect of Mistral AI, stating that “we want to be the most capital-efficient company in the world of AI. That’s the reason we exist.”

In September 2023, Mistral launched Mistral 7B, a 7-billion-parameter open-source AI model the team claimed was better than models twice its size. In December 2023, French President Emmanuel Macron praised the company, stating “Bravo to Mistral, that's French genius.” By January 2024, Mistral had hired over half the team behind Meta’s LLaMA model to work on its open-source models.

Product

Mistral AI develops foundation large language models (LLMs). As of April 2024, all its models are designed to be open source, licensed under Apache 2.0, and available for free. The company also offers “optimized” versions of its AI models via its developer platform, for which it charges following a usage-based business model.

AI Models

Mistral 7B: Released in September 2023, Mistral 7B is the company’s first model. At the time of release, Mistral AI claimed this model, made up of 7 billion parameters, outperformed “all currently available open models up to 13B parameters on all standard English and code benchmarks.”

Mistral 7B is fluent in English and code. Leveraging transformer architecture, it integrates components such as sliding window attention, rolling buffer cache, and pre-fill & chunking to improve efficiency and performance.

Sliding window attention can be explained using a metaphor: imagine someone is on a train moving through a landscape, but the window only allows them to see a few meters around them at any given time. As the train moves forward, their view shifts, revealing new parts of the landscape while losing sight of the parts already passed. Similarly, with sliding window attention, a model focuses on only a portion of the entire data (like the words in a sentence) at one time. This lets the model process long sequences efficiently by working on smaller, more manageable windows, reducing computational cost while still allowing each word to be influenced by its surrounding context.
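
The idea can be sketched in a few lines of Python — an illustrative toy, not Mistral's implementation — as a boolean mask that lets each token attend only to itself and the few tokens immediately before it:

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Causal sliding-window mask: token i may attend only to the
    `window` most recent tokens (itself included), never to the future."""
    i = np.arange(seq_len)[:, None]   # query positions (rows)
    j = np.arange(seq_len)[None, :]   # key positions (columns)
    return (j <= i) & (j > i - window)

mask = sliding_window_mask(6, 3)  # True where attention is allowed
```

With a window of 3 over 6 tokens, the last token attends to positions 3 through 5 but not to position 2 — the cost per token stays fixed no matter how long the sequence grows.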

To understand rolling buffer cache, imagine someone playing a video game on a console. To ensure the game runs smoothly without loading pauses, the console keeps the most recent and relevant data (like the immediate game environment) in memory, discarding older, less relevant data as the player moves through the game world. The rolling buffer cache works similarly in computing, storing recent inputs and then moving older, less relevant data out of the cache as new data comes in. This process allows the system to efficiently manage memory resources by ensuring only the most current and necessary data is kept ready for quick access, which is key for processing large amounts of data without overwhelming the system's memory capacity.
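
The same idea can be sketched as a fixed-size buffer indexed modulo its length — a toy illustration of the mechanism, not Mistral's actual cache:

```python
class RollingBufferCache:
    """Toy fixed-size cache: position p is stored at slot p % size,
    so each new entry overwrites the oldest one."""

    def __init__(self, size: int):
        self.size = size
        self.slots = [None] * size

    def store(self, pos: int, value) -> None:
        self.slots[pos % self.size] = value

    def get(self, pos: int):
        return self.slots[pos % self.size]

cache = RollingBufferCache(4)
for pos, token in enumerate(["a", "b", "c", "d", "e"]):
    cache.store(pos, token)
# slot 0 now holds "e": position 4 has overwritten position 0
```

Memory use is capped at `size` entries regardless of how long the sequence runs, which is the property that matters for long inputs.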

For pre-fill and chunking, imagine someone is planning to cook a large meal, and the recipe calls for numerous ingredients. Instead of measuring and cutting each ingredient as they go, they prep everything beforehand — chopping vegetables, measuring spices, and so on — dividing them into smaller, manageable portions (or "chunks"). This way, when it's time to cook, they can focus on combining these pre-prepped portions in the right order, without the need to pause and prepare each one. This method streamlines the cooking process, making it more efficient and ensuring that each step is ready to go as soon as needed. Similarly, "pre-fill and chunking" in the computational context means pre-loading the model with chunks of data (the "ingredients") so that processing (or "cooking") can happen more smoothly and efficiently, without the need to process the entire dataset from scratch every time a new piece is needed.
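
As a rough sketch (the helper names here are hypothetical, not Mistral's API), chunking a token sequence and pre-filling a cache one piece at a time looks like:

```python
def chunk(tokens: list, size: int) -> list:
    """Split a long prompt into fixed-size pieces ("chunks")."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def prefill(cache: list, tokens: list, size: int) -> list:
    """Feed the prompt into the cache one chunk at a time, so each
    step handles a small pre-prepared portion rather than the whole
    prompt at once."""
    for piece in chunk(tokens, size):
        cache.extend(piece)
    return cache
```

Each pass touches only one chunk, so memory stays bounded while the full prompt still ends up processed in order.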

As of April 2024, users can access Mistral 7B in several ways: (1) download the model directly; (2) use Mistral’s API through La Plateforme; (3) run the model locally with Ollama using the command “ollama run mistral”; or (4) access the model through Hugging Face.

Mixtral 8x7B: In December 2023, Mistral AI released its second model, Mixtral 8x7B. As of December 2023, according to the company, Mixtral outperformed Llama 2 70B on “most benchmarks” with 6x faster inference, and matched or outperformed OpenAI’s GPT-3.5 on “most standard benchmarks.”

Mixtral 8x7B is a high-quality sparse mixture-of-experts model (SMoE) with open weights. Think of SMoE as a talent show where each participant (expert) has a unique skill, and the judges (the gating network that controls how decisions are weighed) decide which acts to showcase based on the audience's current mood (the input data). Instead of having every act perform every time, which would be time-consuming and irrelevant, the judges pick a few acts that best match the audience's interests, combining their performances into an engaging show. This lets the show adapt to different audiences efficiently, using only the most relevant talents — mirroring how SMoE selects the “experts” that process each piece of data. The SMoE technique increases the number of parameters in a model while controlling cost and latency, as the model only uses a fraction of the total parameter set per token. Consequently, Mixtral 8x7B has 46.7B total parameters but only uses 13B parameters per token, so it processes input and generates output at the same speed and cost as a 13B model.
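
A toy version of this sparse routing can be written in a few lines — illustrative only; the gating matrix, expert functions, and `top_k` below are made-up stand-ins, not Mixtral's actual weights:

```python
import numpy as np

def smoe_layer(x, gate_w, experts, top_k=2):
    """Toy sparse mixture-of-experts: score every expert with a gating
    matrix, run only the top_k highest-scoring ones, and mix their
    outputs with softmax weights."""
    scores = x @ gate_w                       # one routing score per expert
    top = np.argsort(scores)[-top_k:]         # indices of the chosen experts
    w = np.exp(scores[top] - scores[top].max())
    w = w / w.sum()                           # softmax over chosen experts only
    return sum(wi * experts[i](x) for wi, i in zip(w, top))
```

Only `top_k` of the experts execute per input, which is why total parameter count can grow without a matching growth in per-token compute.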

Mixtral 8x7B handles a context of 32K tokens and is fluent in English, French, Italian, German, Spanish, and code. Like Mistral 7B, it is licensed under Apache 2.0 and can be used for free. As of April 2024, both Hugging Face and Perplexity AI allowed users to enable Mixtral 8x7B in a chat interface, and developers could access the model through Mistral AI’s La Plateforme.

Mixtral 8x22B: Launched in April 2024, Mixtral 8x22B is Mistral AI’s third model. This model was built on 176 billion parameters, has a context window of 65K tokens, and was released under the Apache 2.0 license. As of April 2024, there is no mention of this model on the company’s website; Mistral AI announced the launch of Mixtral 8x22B through a magnet link posted on social media platform X.

La Plateforme

La Plateforme is Mistral AI’s developer platform. This platform leases “optimized” versions of the company’s models to developers through generative endpoints, accessible via an API. Its goal is to offer efficient deployment and tailored customization for various use cases. As of April 2024, La Plateforme offers three “optimized” commercial models, each tailored for specific performance and cost needs: Mistral Small, Mistral Large, and Mistral Embed.

Mistral Small offers “cost-efficient reasoning for low-latency workloads.” Mistral Large offers “top-tier reasoning” and is designed for high-complexity tasks. As of April 2024, the company claimed that Mistral Large ranked “second among all models generally available through an API” and provided “top-tier reasoning capabilities.”

Both models are fluent in English, French, Italian, German, and Spanish, and are “strong in code.” They support a 32K-token context window and offer native function-calling capabilities and JSON output. As of April 2024, Mistral AI described these models as “concise, useful, unopinionated, with fully modular moderation control.”

Mistral Embed is a “state-of-the-art” semantic model designed to extract representations of text. Specifically, it converts text into mathematical vectors of 1,024 dimensions each. These high-dimensional vectors represent textual information numerically, capturing semantic nuances so that the similarity between different text segments can be understood and quantified. This embedding technique helps parse vast corpora of text to identify information relevant to a given context, which in turn helps generative models produce outputs that are both contextually aware and tailored to the specific informational needs of an application.
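
For illustration, similarity between two such embedding vectors is typically measured with cosine similarity; the 1024-dimensional vectors below are random stand-ins for what an embedding model like Mistral Embed would return:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """1.0 means the vectors point the same way (very similar text);
    values near 0 mean the texts are essentially unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Random 1024-dimensional stand-ins for real embedding vectors
rng = np.random.default_rng(0)
doc, query = rng.normal(size=1024), rng.normal(size=1024)
score = cosine_similarity(doc, query)
```

In a retrieval pipeline, a query embedding is compared against every document embedding this way, and the highest-scoring documents are passed to a generative model as context.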

As of April 2024, Mistral Embed only supported English “for now.” According to the company, this model achieved a retrieval score of 55.26 on the Massive Text Embedding Benchmark (MTEB).

Le Chat

Le Chat is Mistral AI’s chatbot service, functionally equivalent to OpenAI’s ChatGPT but powered by Mistral AI’s foundation models. As of April 2024, Le Chat can use Mistral Large, Mistral Small, or Mistral Next, a prototype model “designed to be brief and concise.” As of April 2024, users can access Le Chat for free. Mistral AI also offers Le Chat Enterprise, a service designed for businesses to “empower your team’s productivity with self-deployment capacities, and fine-grained moderation mechanisms.”

Market

Customer

As of April 2024, Mistral AI targets “the performance-cost frontier” for businesses looking to implement generative AI in their offerings. For example, a third-party source estimated the Mistral 7B model was around 187x cheaper than OpenAI’s GPT-4 and 9x cheaper than GPT-3.5 as of October 2023. As of April 2024, notable Mistral AI customers include Lamini, Arcane, Lindy, Hugging Face, Brave, Cloudflare, Pretto, BNP Paribas, Orange, and MongoDB.

Market Size

The global artificial intelligence market was valued at $150.2 billion in 2023 and was expected to reach $1.35 trillion by 2030, growing at a CAGR of 36.8%. Generative AI — the technology that foundation models underpin — could add between $2.6 trillion and $4.4 trillion annually to the world economy, according to a June 2023 report. In the banking industry, for example, the technology could add $200 billion to $340 billion annually as of June 2023. In retail and consumer packaged goods, generative AI could add $400 billion to $660 billion a year as of June 2023. Foundation models, such as the ones developed by Mistral AI, underpin much of this growth by providing versatile platforms that can be customized for a variety of applications across multiple industries. As such, estimating the market size for foundation models can be challenging; a September 2023 report suggested that “the potential market for foundation models may encompass the entire economy.” A November 2023 report estimated that foundation models would generate $11.4 billion in revenue by 2028.

Competition

While companies such as OpenAI, Anthropic, and Google develop “proprietary” models — i.e., models owned and controlled by the entity, with restricted access — Mistral AI develops “open” models — i.e., models accessible to the public for free and distributed under an open-source license.

OpenAI

Founded in 2015 as a non-profit, OpenAI transitioned to a “capped-profit” structure in 2019. It is known for its Generative Pre-trained Transformer (GPT) series of AI models, first introduced in 2018. The company had raised $11.3 billion across eight funding rounds as of a $300 million venture round in April 2023. In February 2024, OpenAI reportedly completed a deal that valued the company at $80 billion.

Initially, OpenAI had an open approach to model development; for example, it released the source code and model weights for GPT-2 in November 2019. The company later changed its approach; following the launch of GPT-4 in March 2023, co-founder Ilya Sutskever stated “we were wrong” about OpenAI’s development of open models. The company launched its first consumer product, ChatGPT, in November 2022. ChatGPT garnered 100 million monthly active users within two months of launch. As of April 2024, the service had approximately 180.5 million users, of which 100 million were active weekly.

Anthropic

Founded in 2021, Anthropic conducts AI research and develops products with an emphasis on safety. The company develops Claude, a family of closed foundation AI models trained and deployed through a method it dubbed “Constitutional AI,” in which the only human oversight during training is a list of rules, principles, and ethical norms. Anthropic was founded by former OpenAI employees who left OpenAI due to “differences over the group’s direction after it took a landmark $1 billion investment from Microsoft in 2019,” according to a May 2021 article. The company had raised $7.6 billion as of its $2 billion corporate round in October 2023, led by Google. As of December 2023, Anthropic was reportedly raising another $750 million at an $18.4 billion valuation.

Meta AI

Established in 2013, Meta AI develops the LLaMA series of open-source foundation AI models, which compete directly with Mistral AI’s models: Meta AI’s LLaMA 2 7B and LLaMA 2 13B compete with Mistral 7B, and LLaMA 2 70B competes with Mixtral 8x7B. Although the LLaMA models are considered comparatively less performant than other models — LLaMA 2 70B ranked 34th on the Hugging Face Open LLM Leaderboard as of April 2024 — Meta AI has made substantial contributions to AI research, including the development of the open-source machine learning library PyTorch. In the ongoing debate between open and closed models, Mistral AI and Meta AI share a common vision, advocating for openness and accessibility in AI technologies.

Google AI

Google has been advancing AI research since it acquired DeepMind in 2014, notably with projects like AlphaGo and the 2017 research paper "Attention Is All You Need," which introduced the transformer architecture. Between 2014 and 2023, Google’s AI efforts were split between Google Brain and DeepMind; in April 2023, the company merged the two into a single unit, Google DeepMind. In December 2023, Google launched Gemini, a closed foundation model designed to compete with the likes of GPT-4; the company’s Bard chatbot, launched in March 2023, was rebranded to Gemini in February 2024. As of April 2024, the company claimed its model had superior performance to GPT-4 in most benchmarks.

Cohere

Cohere develops open and closed generative AI models optimized for enterprise use. It was founded in 2018 by Aidan Gomez, a former Google Brain researcher and one of the original authors of the 2017 "Attention Is All You Need" paper. Its proprietary large language models (LLMs) offer services like summarization, text creation, and classification to corporate clients via its API. These models are designed to be augmented with additional training data from users. Like Mistral AI, Cohere offers a chatbot assistant, dubbed Coral, as well as Cohere Embed, an embedding model that competes directly with Mistral Embed. In March 2024, Cohere announced the launch of Command-R, a “new LLM aimed at large-scale production workloads.” In April 2024, according to the crowdsourced LLM leaderboard Chatbot Arena, Command-R placed 6th, behind variations of products like Claude 3 and GPT-4 but ahead of Mistral Large and Mistral Medium. The company had raised $434.9 million in total funding as of April 2024. It was valued at $2.1 billion as of its $270 million Series C round in May 2023.

Business Model

Mistral AI offers all its models for free through the Apache 2.0 open-source license. The company also charges for “optimized” versions of its products using a pay-as-you-go business model, accessible through La Plateforme. For every million tokens (approximately 750K words), Mistral AI charges a fee; this fee varies depending on model endpoint, input, and output. As of April 2024, Mistral AI’s pricing was as follows:

As of April 2024, the company also charged $0.1 per 1 million tokens for its embedding API (Mistral Embed model). As of April 2024, all endpoints had a rate limit of five requests per second, 2 million tokens per minute, and 10 billion tokens per month.
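
As a worked example of this pay-as-you-go model, using the embedding rate above:

```python
def token_cost(tokens: int, price_per_million: float) -> float:
    """Usage-based cost: tokens consumed times the per-million-token rate."""
    return tokens / 1_000_000 * price_per_million

# At the embedding rate of $0.10 per 1M tokens, embedding a
# 250K-token corpus costs $0.025.
cost = token_cost(250_000, 0.10)
```

The same arithmetic applies to the chat endpoints, with input and output tokens billed at their respective per-million rates.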

Mistral AI also offers its Le Chat service to enterprises, but as of April 2024, there was no publicly available pricing information for this service.

Traction

Mistral AI’s revenue as of April 2024 is unknown; however, in a January 2024 interview, CFO Florian Bressand highlighted that the company had entered “hyper-growth,” noting significant customer segments in financial services, banking, insurance, telecommunications, and utilities.

One way to determine the success of Mistral AI’s products, at least its open models, is through the number of downloads on Hugging Face. The Mistral 7B model had been downloaded 2.1 million times as of December 2023. The Mixtral 8x7B model had been downloaded 337K times as of January 2024.

Mistral AI had formed two notable partnerships as of April 2024: one with Microsoft and another with Snowflake. It partnered with Microsoft in February 2024 to make its open and commercial models available on Microsoft Azure; alongside this, Microsoft invested $16.3 million in the company. In March 2024, Mistral AI partnered with Snowflake to make its models more accessible to enterprises by integrating them with Snowflake's Cortex.

Valuation

As of April 2024, Mistral AI had raised a total of $536.8 million across five funding rounds from investors including a16z, Databricks Ventures, Lightspeed Venture Partners, and Microsoft. Mistral AI closed a $113 million seed round in June 2023, four weeks after its launch, at a $260 million valuation. In December 2023, the company raised a $415 million round at a $2 billion valuation. In February 2024, Mistral AI sealed a distribution partnership with Microsoft; as part of this partnership, Microsoft invested $16.3 million into the company. As of April 2024, Mistral AI was reportedly raising “hundreds of millions of dollars” and seeking a $5 billion valuation.

Key Opportunities

Expansion into Emerging Markets

According to the International Monetary Fund (IMF), emerging market economies represented a 53% share of the world economy on a global PPP-adjusted GDP scale as of 2013, and are expected to grow at a faster rate than developed markets. In January 2024, the IMF highlighted that AI was set to impact 40% of jobs in emerging markets and 26% of jobs in low-income countries. This indicates a substantial demand for AI technologies in these regions to fuel economic development, improve societal outcomes, and enhance competitiveness on the global stage. However, according to a 2020 report, the biggest obstacle for emerging markets adopting AI is the technology’s cost. Moreover, most emerging markets include countries in Asia, Africa, Latin America, and parts of Eastern Europe, where English is not the native language.

Mistral AI’s models are designed to “target the performance-cost frontier”. According to a third-party source, the Mistral 7B model was 187x cheaper than GPT-4, and 9x cheaper than GPT-3.5 as of October 2023. This model also outperformed comparative models like LLaMA 2 13B on “all benchmarks” as of September 2023. Another of Mistral AI’s value propositions is its models’ multi-lingual capacity. The Mistral Small and Mistral Large models are not only fluent in English, but also in French, Italian, German, and Spanish. Moreover, the Mistral Large model outperformed the comparative model LLaMA 2 70B in French, Italian, German, and Spanish as of February 2024. These two facts — the models’ cost-effectiveness and multi-lingual capabilities — signal that Mistral AI is strategically positioned to capture emerging markets.

Customized AI Solutions for SMBs

While Mistral AI primarily targets enterprises as of April 2024, there is an opportunity for the company to expand into serving small-to-medium businesses (SMBs). As of 2024, the artificial intelligence market for SMBs was estimated to reach $90.7 billion by 2027, growing at a CAGR of 22.1%. An August 2023 survey found that AI had become a priority for 53% of small businesses, up from 41% in April 2023. SMBs, however, can be price-sensitive when it comes to vendors, and AI model implementation can be expensive. According to one third-party source, Mistral AI’s models are significantly cheaper than their competitors’, making them well-suited to meet the needs of SMBs.

Continued Expansion of Data & Compute

OpenAI's 2020 paper "Scaling Laws for Neural Language Models" demonstrated that the performance of language models improves directly with increases in model size, data, and computing power. As of April 2024, both the amount of data and computing power were growing exponentially. Data worldwide was increasing by 60-70% annually as of November 2022, providing foundation models like those developed by Mistral AI with more information for training, which improves their accuracy and functionality. Additionally, from 1956 to 2015, computing performance had increased a trillion-fold, partly due to Moore's Law, which suggests a doubling of computing power approximately every two years. According to a 2021 report, general compute power was estimated to increase tenfold by 2030; AI compute power specifically would increase by a factor of 500 during the same period. Mistral AI can utilize these advancements to enhance the performance of its AI models, making them more efficient and integrated into technological solutions across various industries.

Key Risks

Regulatory Concerns

Regulatory concerns, specifically EU regulation, pose a significant risk for Mistral AI. The EU's AI Act, the world's first major legislation on AI, strictly regulates general-purpose models and applies to models operating in the EU. For Mistral Large, Mistral AI will have to comply with transparency requirements and EU copyright law, which includes disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training. Since EU AI regulation is stricter than AI regulation in the US as of April 2024, and Mistral AI is a European company, there is a risk that the company’s development could be slowed down; Mistral AI needs to make sure it is complying with all EU regulation requirements while its competitors abroad can operate more freely. For example, in February 2024 Microsoft’s investment into Mistral AI drew regulator scrutiny; an EU AI commission spokesperson stated that the commission was “looking into agreements that have been concluded between large digital market players and generative AI developers and providers.”

Hiring Top Talent

As of October 2023, the demand for AI talent capable of building sophisticated AI models exceeded its supply. A May 2023 report found that “to be an industry leader in five years, companies need a clear and compelling AI talent strategy today, but many organizations are hitting a brick wall.” Talent poaching is becoming an issue. For example, by January 2024, Mistral had hired over half the team behind Meta’s LLaMA model to work on its open-source models. In November 2023, Salesforce CEO Marc Benioff offered to match any resignation from OpenAI with full cash and equity if the worker joined Salesforce. As AI technology continues to develop, the gap between AI talent supply and demand could widen, putting additional pressure on Mistral AI to not only attract but also retain top AI talent, which could slow its growth.

Summary

Mistral AI aims to shape the future of artificial intelligence by developing open-source alternatives to proprietary models created by the likes of OpenAI and Google. The company also puts emphasis on creating models that are efficient and cost-effective, coupled with a commitment to transparency and accessibility. This approach can help Mistral AI expand into markets where AI technology has not seen significant penetration, such as emerging economies and SMBs. As available data and compute power grows exponentially, Mistral AI can further enhance the performance and efficiency of its AI models. However, AI regulation, particularly European AI regulation, as well as the widening gap between AI talent supply and demand, pose challenges.

Disclosure: Nothing presented within this article is intended to constitute legal, business, investment or tax advice, and under no circumstances should any information provided herein be used or considered as an offer to sell or a solicitation of an offer to buy an interest in any investment fund managed by Contrary LLC (“Contrary”) nor does such information constitute an offer to provide investment advisory services. Information provided reflects Contrary’s views as of a time, whereby such views are subject to change at any point and Contrary shall not be obligated to provide notice of any change. Companies mentioned in this article may be a representative sample of portfolio companies in which Contrary has invested in which the author believes such companies fit the objective criteria stated in commentary, which do not reflect all investments made by Contrary. No assumptions should be made that investments listed above were or will be profitable. Due to various risks and uncertainties, actual events, results or the actual experience may differ materially from those reflected or contemplated in these statements. Nothing contained in this article may be relied upon as a guarantee or assurance as to the future success of any particular company. Past performance is not indicative of future results. A list of investments made by Contrary (excluding investments for which the issuer has not provided permission for Contrary to disclose publicly, Fund of Fund investments and investments in which total invested capital is no more than $50,000) is available at www.contrary.com/investments.

Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by Contrary. While taken from sources believed to be reliable, Contrary has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Please see www.contrary.com/legal for additional important information.

Authors

David Burton

Fellow

© 2024 Contrary Research · All rights reserved
