UPDATED 16:02 EDT / AUGUST 24 2023

AI

Modular nabs $100M for its AI programming language and inference engine

Modular Inc., the creator of a programming language optimized for developing artificial intelligence software, has raised $100 million in fresh funding.

General Catalyst led the investment, which was announced this morning. Alphabet Inc.’s GV startup fund participated as well, along with several other institutional backers. Modular will use the capital to enhance its AI programming language and its other product, a software tool called AI Engine that promises to make companies’ neural networks faster.

Developers usually write AI models in the Python programming language. It has a relatively simple, concise syntax that allows a neural network to be implemented with less effort than in other languages. But that simplicity comes at the cost of performance: Programs written in Python can be slow.
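As a rough illustration of that conciseness, the few lines of Python below define a small neural network. PyTorch is used here purely as one popular example library; it is not mentioned by Modular and is not part of its products.

```python
# Illustrative only: a tiny feed-forward network defined in a few lines of Python.
# PyTorch is used as one common example library; it is not part of Modular's stack.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(784, 128),  # input features -> hidden layer
    nn.ReLU(),            # nonlinearity
    nn.Linear(128, 10),   # hidden layer -> 10 output classes
)

scores = model(torch.randn(1, 784))  # run one dummy input through the network
print(scores.shape)                  # torch.Size([1, 10])
```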

Modular has developed a programming language called Mojo that it positions as a faster alternative. The language’s syntax is nearly identical to that of Python, which means it’s relatively easy to use. The key difference, according to Modular, is that Mojo is up to 35,000 times faster.
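Mojo is designed as a superset of Python, so straightforward Python code is intended to run in Mojo unchanged. The snippet below is ordinary Python, shown only to illustrate that shared syntax; Mojo’s speed comes from optional additions such as typed functions and ahead-of-time compilation rather than from different-looking code.

```python
# Ordinary Python, shown only to illustrate the syntax Mojo shares with it.
# Mojo is designed as a superset of Python, so a simple function like this is
# intended to work unchanged; Mojo adds optional typed "fn" functions and
# compilation for code that needs maximum performance.
def dot(a, b):
    """Return the dot product of two equal-length sequences."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```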

One reason for Python’s slow performance is that it includes a so-called memory safety mechanism. That mechanism helps developers avoid common bugs, such as buffer overflows, that occur when a program mismanages the RAM of the server on which it’s running. Automating memory management saves developers time, but it also slows down their code.
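In practice, that safety shows up as run-time checks. Indexing past the end of a list, for example, raises an error instead of silently reading adjacent memory, and checks like these are part of the overhead the interpreter pays on every access. A minimal illustration:

```python
# Python validates every index at run time, which prevents buffer-overflow-style
# bugs but adds a small cost to each access.
values = [10, 20, 30]

try:
    print(values[5])  # out-of-bounds read attempt
except IndexError as err:
    # An unsafe language could silently read neighboring memory here;
    # Python instead stops with a clear error.
    print("caught:", err)
```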

Modular says its Mojo language provides a memory safety mechanism much like Python’s, but without a significant performance impact. The language also promises to ease developers’ work in other ways: Mojo reduces the amount of manual coding required to ensure that an AI model will run well on multiple types of chips.

The company offers the programming language alongside a software platform called AI Engine. The platform is designed to speed up companies’ existing AI models, including those not written in Mojo. Modular claims the AI Engine can increase the inference performance of neural networks by a factor of more than seven without requiring any code changes. 

One way the platform speeds up AI models is by translating their code into the Mojo language. Additionally, it applies several other optimizations that can further improve a neural network’s hardware efficiency.

While performing inference, an AI application carries out calculations that involve data snippets known as constants. Modular’s AI Engine can perform such calculations at compile time, the point when developers turn an AI’s raw code into a functioning program. This removes the need to repeat those calculations every time the neural network performs inference, which speeds up processing.
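The general technique is often called constant folding: any value that depends only on constants is computed once, ahead of time, rather than on every inference call. The sketch below conveys the idea in plain Python and NumPy; it is a simplified example, not Modular’s actual implementation.

```python
import numpy as np

# Simplified sketch of constant folding (not Modular's implementation).
WEIGHTS = np.random.rand(256, 256)  # constants known before inference
SCALE = 0.5

# Naive version: recomputes the constant product on every call.
def predict_naive(x):
    return x @ (WEIGHTS * SCALE)

# Folded version: the constant product is computed once, up front.
FOLDED_WEIGHTS = WEIGHTS * SCALE

def predict_folded(x):
    return x @ FOLDED_WEIGHTS

x = np.random.rand(1, 256)
assert np.allclose(predict_naive(x), predict_folded(x))  # same result, less per-call work
```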

The platform also uses a performance optimization technique called operator fusion. It involves combining two operations in an AI model into a single, more efficient operation that requires less hardware to run.
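A rough way to picture fusion: instead of running a matrix multiply, a bias addition and an activation as three separate passes that each produce an intermediate result, a fused kernel performs the combined computation in one step. The NumPy sketch below only conveys the concept; real fusion happens at the compiler and kernel level, and the function names here are illustrative.

```python
import numpy as np

# Conceptual sketch of operator fusion (illustrative, not Modular's implementation).
x = np.random.rand(32, 64)
w = np.random.rand(64, 16)
b = np.random.rand(16)

# Unfused: three separate operators, each materializing an intermediate array.
def linear_relu_unfused(x):
    y = x @ w                  # 1) matrix multiply
    y = y + b                  # 2) bias add
    return np.maximum(y, 0.0)  # 3) ReLU activation

# Fused: the same math written as one combined expression; an AI compiler
# would lower this to a single kernel that avoids the intermediate arrays.
def linear_relu_fused(x):
    return np.maximum(x @ w + b, 0.0)

assert np.allclose(linear_relu_unfused(x), linear_relu_fused(x))
```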

Modular debuted the AI Engine and Mojo in June following a $30 million funding round. The company says that more than 120,000 developers have since expressed interest in the two technologies. The AI Engine is currently available through an early access program, while Mojo is set to roll out early next month. 

Image: Unsplash
