Petuum CEO Aurick Qiao, PhD, and Director of Engineering Tong Wen, PhD, demoed the new Petuum Platform for scaling enterprise MLOps and announced that they are now accepting applications for private beta customers.
In their talk, "Supercharging MLOps with Composability, Automation, and Scalability," at the Open Data Science Conference (ODSC) East, Aurick Qiao, PhD, and Tong Wen, PhD, of machine learning startup Petuum unveiled their new enterprise MLOps platform for AI/ML teams, now in private beta.
Petuum helps enterprise AI/ML teams operationalize and scale their machine learning pipelines to production with the world’s first composable platform for MLOps. After years of development at CMU, Berkeley, and Stanford, as well as dozens of customer engagements in finance, healthcare, energy, and heavy industry, Petuum announced a limited release of their platform through an exclusive private beta for select customers.
“We have spent the last five years working with customers on the hard problems in MLOps, and have learned how to multiply AI team productivity through extensive research. The Petuum Platform helps AI teams do more with less.” – Aurick Qiao, CEO
Petuum’s enterprise MLOps platform is built around principles of composability, openness, and extensibility. With universal standards for data, pipelines, and infrastructure, AI applications can be built from reusable building blocks and managed as part of a repeatable assembly-line process. Petuum’s users don’t need to worry about infrastructure or DevOps expertise, glue code, or tuning, and can instead focus on rapidly deploying more projects in less time, with fewer resources, and with less help from others.
“In training alone, we have seen 3 to 8 times faster time to value. The infrastructure orchestration and Pythonic deployment system are easy enough for a data scientist to use.” – Tong Wen, Director of Engineering
The end-to-end platform includes the AI OS, which provides low/no-code Kubernetes orchestration optimized for AI workloads. Universal Pipelines allow low-expertise users to compose and execute DAGs with modular DataPacks for any kind of data. The low/no-code Deployment Manager can upgrade, reuse, and reconfigure pipelines in production with observability and user management. The platform also hosts an experiment manager for amortized autotuning and optimization of pipelines of models and systems.
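To make the composability idea concrete, here is a minimal sketch of reusable pipeline steps composed into a DAG in plain Python. The class and method names (`Pipeline`, `Step`, `add`, `run`) are illustrative assumptions for exposition, not the actual Petuum API:

```python
# Hypothetical sketch of a composable pipeline DAG -- illustrative only,
# not the Petuum Platform API (all names here are invented).
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Step:
    """A reusable building block: a named function plus its dependencies."""
    name: str
    fn: Callable[[Dict[str, Any]], Any]
    deps: List[str] = field(default_factory=list)

class Pipeline:
    """Composes Steps into a DAG and executes them in dependency order."""
    def __init__(self):
        self.steps: Dict[str, Step] = {}

    def add(self, name, fn, deps=()):
        self.steps[name] = Step(name, fn, list(deps))
        return self  # enable fluent chaining

    def run(self):
        results: Dict[str, Any] = {}

        def resolve(name):
            # Memoized depth-first execution: each step runs once,
            # after all of its dependencies have produced outputs.
            if name in results:
                return results[name]
            step = self.steps[name]
            inputs = {d: resolve(d) for d in step.deps}
            results[name] = step.fn(inputs)
            return results[name]

        for name in self.steps:
            resolve(name)
        return results

# Usage: a three-step load -> transform -> aggregate pipeline.
pipe = (Pipeline()
        .add("load", lambda _: [1.0, 2.0, 3.0])
        .add("scale", lambda inp: [x * 10 for x in inp["load"]], deps=["load"])
        .add("report", lambda inp: sum(inp["scale"]), deps=["scale"]))
print(pipe.run()["report"])  # 60.0
```

Because each step declares only its named inputs, steps can be swapped or reused across pipelines without glue code, which is the assembly-line property the platform describes.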
Petuum’s award-winning team has grown out of the CASL open source consortium and comprises thought leaders across all categories of machine learning operations. Petuum’s customers have seen improvements of 50% or more in time to value and in the productivity of ML teams and resources. These efficiencies only increase with scale.
“This is the Petuum omniverse. With Petuum AI OS you can wrap up anything and everything, as long as it runs with Docker and normal compute systems. In that sense, you not only have this graph system, you also want to standardize all of your pipelines.” – Guowei He, Inception Institute of Artificial Intelligence