If you're excited about building the future of machine learning infrastructure, send us your resume and a cover letter explaining why you'd be a great fit for the Synnada/Mithril team via our General Application Form. We'll review your application and get back to you promptly.
Synnada is an open-core company and a leading contributor to Apache DataFusion, a fast-growing open-source project. Mithril is an innovative open-source project created by Synnada to revolutionize how machine learning models are composed and deployed. Our mission is to create an “LLVM” of Neural Networks — a powerful intermediate representation and compilation framework that enables unprecedented optimization and cross-framework compatibility. By separating model architecture from execution strategy, Mithril aims to enable a new generation of ML systems that are more flexible, performant, and maintainable.
We are seeking software engineers who are still computer scientists at heart. Does learning about new intermediate representations and optimization algorithms excite you? Do you enjoy reading papers on state-of-the-art (or sometimes, old but elegant) compilation techniques and automatic differentiation systems, as well as implementing them and seeing them transform real models? Do you spend more time reading arxiv.org/ML, exploring compiler design patterns, or reimplementing autograd engines than you think you should?
This role is perfect for individuals with a passion for systems architecture and ML infrastructure, and an interest in implementing these in a high-performance, multi-framework environment. At Mithril, you will get the opportunity to build an innovative model compilation engine, design IR transformations, optimize execution strategies, and create new backend implementations. You will play a crucial part in our mission to build a universal, high-performance foundation for ML systems that powers the next generation of AI applications.
At the intersection of data processing and artificial intelligence lies the future of computing—and Synnada stands at this crucial crossroads. As a venture-backed open-core company and primary contributor to Apache DataFusion, we're not just participating in the evolution of data systems; we're actively shaping it. Having raised $4.4M USD from notable institutional investors including Expeditions Fund, 500 EE, Collective Spark, and DayOne, we're also backed by industry giants like Andy Grove (creator of DataFusion) and Wes McKinney (creator of Arrow, pandas).
We're building what we call "Spark 2.0" — a next-generation distributed compute engine that seamlessly unifies data processing and AI workflows. In a world where traditional data systems struggle to keep pace with the demands of modern AI, we're creating a solution that scales naturally with the complexity of today's data-driven applications. Our technology represents a fundamental rethinking of how data and AI systems interact, offering a unified compute layer that processes both data and AI workloads with unprecedented efficiency.
As members of both the AWS CTO Fellowship and StartX, Stanford's leading accelerator, we have unique access to cutting-edge innovations, networks, and mentorship opportunities that inform our technical direction. Our goal is ambitious yet clear: to build a unified, highly scalable system that integrates seamlessly across distributed data and AI pipelines, setting a new standard for how organizations process and analyze data at scale. By combining the innovation speed of open source with enterprise-grade reliability, we're creating the foundation for the next generation of data-driven applications.
Like its namesake from Tolkien's works, Mithril represents something both lightweight and incredibly strong. Our framework aims to be both easy to use and powerful enough for the most demanding applications. Just as LLVM revolutionized traditional compiler infrastructure, Mithril aims to transform how we build and optimize machine learning systems.
Mithril represents a fundamental rethinking of how we build and deploy machine learning models. By providing a robust intermediate representation and compilation framework, we're creating a more flexible and maintainable approach to ML system design. Our core principles include: