May has been a whirlwind of major open source releases, packed in-person events, and deep technical content!
We kicked it off with the release of Modular Platform 25.3 on May 6th, a major milestone in open source AI. This drop included more than 450k lines of Mojo and MAX code, featuring the full Mojo standard library, the MAX AI Kernels, and the MAX serving library. It’s all open source, and you can install it in seconds with pip install modular, whether you’re working locally or in Colab with A100 or L4 GPUs.
That momentum carried into the real world when we hosted our first-ever community meetup at Modular HQ in Los Altos and our first GPU Kernel Hackathon at AGI House, where over 100 engineers and researchers came together to build, learn, and push the limits of Mojo and MAX.
Keep reading for all the highlights from this packed month, including talks, tutorials, technical deep dives, open source projects, and more.
Blogs, Tutorials, and Videos
- Earlier this month, over 100 engineers and researchers from across the AI ecosystem gathered at AGI House in Hillsborough, California, for the Modular GPU Kernel Hackathon!
- Before the hacking began, we were lucky to hear from an all-star lineup of speakers. Check out all the talk recordings:
- Democratizing AI compute together: Chris Lattner, CEO of Modular
- Open GenAI on AMD: Ramine Roane, CVP of AI at AMD
- A Hundred PyTorch Backends: Mark Saroufim, cofounder of GPU MODE & software engineer at Meta
- triton_lite, a Triton clone in Mojo: Jeff Niu, member of technical staff at OpenAI
- Claude on Three Accelerators: Simon Boehm and Sasha Krassovsky, members of technical staff at Anthropic
- Catch up with the hackathon highlight reel and the full recap blog post, featuring the list of winning projects.
- Couldn’t make it in person to our recent meetup at Modular HQ in Los Altos? See what went down with the recordings of Chris Lattner and Jack Clayton’s talks.
- Chris Lattner and Abdul Dakkak gave a marathon (2+ hours!) deep dive on Mojo during a recent GPU MODE livestream. Don’t miss the recording of this packed session on Mojo’s capabilities and our vision for high-performance GPU programming, featuring tons of 🤯 moments.
- Ever wanted to sit in on one of our internal tech talks? You’re in luck! Modular Tech Talks is an exclusive series featuring internal presentations from our engineering team, explaining the inner workings of our tech stack.
- Kernel Programming and Mojo🔥 explores the unique architecture of the Mojo compiler and shares how our team addresses the challenges of developing for modern GPUs.
- MAX Graph Compilation to Execution highlights how the MAX graph compiler leverages Mojo to beautifully balance performance, control, and usability, delivering a novel way to develop dynamic models.
- You can now call Mojo code directly from Python with the latest nightly release! Tap into Mojo's performance benefits without rewriting your existing Python projects. The updated Mojo Manual explores how this works under the hood, while examples in our GitHub repo let you get hands-on, even including an example of calling Mojo code that runs on a GPU from Python. For a quick taste, see the minimal sketch after this list.
- Mojo is now supported for GPU challenges on both LeetGPU and Tensara! Time to put your Mojo skills to work and crush the leaderboards.
- Learn GPU programming in Mojo by solving GPU puzzles:
- Start here and complete puzzles to earn stickers.
- Unpack our recently dropped section, Understanding GPU Performance: The Roofline Model, which explores a visual performance model for reasoning about GPU optimization. The model's core formula is sketched after this list.
- Interface with Python via MAX Graph custom ops in Part III, starting with a 1D convolution op.
- In Community Meeting #16, the team shared important updates about Modular Platform 25.3, major open sourcing news, and an updated Mojo roadmap.
- Chris Lattner joined Kevin Ball on Software Engineering Daily’s podcast to discuss his engineering journey, his current work on AI infrastructure, and all things Modular.
- Our repository is trending this month on GitHub!
- Mojo's parameter system helps you write flexible, type-safe code without adding runtime cost. Our recent guest blog post by Brian Grenier dives into how this works, with real examples and LLVM IR output from the Mojo compiler. A small parameter example appears after this list.
- In part 3 of our intro to GPU programming video series, you'll learn how the Mandelbrot set is defined, how to implement the computation in Mojo, and how to run your code on GPUs. A scalar sketch of the escape-time iteration appears after this list.
- Dig into the latest installments in our Democratizing AI Compute blog post series.
- The Electronic Engineering Times recently covered how we’re delivering performant portability across GPUs and giving AI back to developers.
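First, the promised sketch of calling Mojo from Python. This is a minimal, illustrative outline of the pattern described in the Mojo Manual, not the exact code from our examples repo; the binding API is still evolving across nightlies, so treat names like PythonModuleBuilder, def_function, and the max.mojo.importer import hook as assumptions to verify against the current docs.

```mojo
# mojo_module.mojo -- a sketch of exposing one Mojo function to Python.
from python import PythonObject
from python.bindings import PythonModuleBuilder
from os import abort
import math


fn factorial(n: PythonObject) raises -> PythonObject:
    # Convert the incoming Python integer, compute in Mojo, return a Python int.
    return math.factorial(Int(n))


@export
fn PyInit_mojo_module() -> PythonObject:
    # CPython looks for PyInit_<module name> when importing the module.
    try:
        var module = PythonModuleBuilder("mojo_module")
        module.def_function[factorial]("factorial")
        return module.finalize()
    except e:
        return abort[PythonObject](String("error creating Python module: ", e))

# Assumed Python-side usage, via the Mojo import hook:
#   import max.mojo.importer  # installs the hook
#   import sys; sys.path.insert(0, "")
#   import mojo_module
#   print(mojo_module.factorial(5))
```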
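Next, the roofline model in a nutshell: a kernel's attainable throughput is capped by whichever roof it hits first, the memory-bound roof (arithmetic intensity times peak memory bandwidth) or the compute-bound roof (peak FLOP/s). The hardware numbers in this sketch are illustrative placeholders, not figures from the puzzles.

```mojo
# Roofline model in one line: attainable FLOP/s is bounded by whichever roof
# a kernel hits first. All hardware numbers below are illustrative.
fn roofline_flops(intensity_flops_per_byte: Float64,
                  peak_flops: Float64,
                  peak_bandwidth_bytes_per_s: Float64) -> Float64:
    return min(peak_flops, intensity_flops_per_byte * peak_bandwidth_bytes_per_s)


fn main():
    # Example: 2 FLOPs per byte on a GPU with 1 TB/s of memory bandwidth and
    # 100 TFLOP/s of peak compute is memory bound at roughly 2 TFLOP/s.
    print(roofline_flops(2.0, 100e12, 1e12))
```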
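For a taste of what Brian's post covers, here is a toy example (not taken from the post) of Mojo's compile-time parameters: the value in square brackets is resolved at compile time, so the compiler specializes and unrolls the function with no runtime dispatch.

```mojo
# `count` is a compile-time parameter (square brackets), so each call site
# stamps out a specialized, fully unrolled version of this function.
fn repeat[count: Int](msg: String):
    @parameter
    for i in range(count):
        print(i, msg)


fn main():
    repeat[3]("hello")  # specialized for count == 3 at compile time
```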
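And for the Mandelbrot episode: the set is defined by iterating z ← z² + c from z = 0 and asking whether z stays bounded. Below is a scalar escape-time sketch of that iteration; the names and structure are illustrative, while the video vectorizes the computation and moves it onto the GPU.

```mojo
alias MAX_ITERS = 200


fn escape_time(cx: Float64, cy: Float64) -> Int:
    # Iterate z = z*z + c (expanded into real/imaginary parts) and count how
    # many steps it takes |z| to exceed 2, i.e. |z|^2 > 4.
    var x: Float64 = 0
    var y: Float64 = 0
    for i in range(MAX_ITERS):
        if x * x + y * y > 4:
            return i  # escaped: c is outside the Mandelbrot set
        var next_x = x * x - y * y + cx
        y = 2 * x * y + cy
        x = next_x
    return MAX_ITERS  # never escaped within the budget: treat c as inside


fn main():
    print(escape_time(0.0, 0.0))  # origin is in the set -> MAX_ITERS
    print(escape_time(2.0, 2.0))  # far outside -> escapes immediately
```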
Awesome MAX + Mojo
- TilliFe released a research preview of NABLA, a framework for differentiable programming in Mojo.
- gonsolo shared a CPU and GPU raytracer in Mojo.
- jklaivins built firehose, a flexible Mojo logging library.
- miktavarez created TLSE bindings to enable HTTPS requests.
- rd4com built ui-terminal-mojo, a terminal user interface framework in Mojo.
Open-Source Contributions
If you’ve recently had your first PR merged, message Caroline Frasca in the forum to claim your epic Modular swag!
Check out the recently merged contributions from our valued community members:
- soraros [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39]
- martinvuyk [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47]
- bgreni [1]
- ratulb [1]
- winding-lines [1]
- kasmith11 [1]
- ljoukov [1][2]
- auris [1]
- Ahajha [1][2]
- ssslakter [1]
- OwenJRJones [1]
- sstadick [1][2][3][4]
- kirillbobyrev [1][2]
- msaelices [1][2][3][4][5][6]
- sibarras [1]
- shogo314 [1]
- owenhilyard [1]
- astrobdr [1]
- Hundo1018 [1]
- gabrieldemarmiesse [1][2][3]
- yeison [1]
- EKami [1]
- Laerte [1]