How Uber is Running Ray on Kubernetes

If you’re running distributed ML and want to avoid the infra mess, Uber’s Ray-on-Kubernetes stack is a blueprint worth studying.
Image by Mohit Pandey
Companies are in a bind over whether to adopt Kubernetes or ditch it. Some have moved away from it entirely, while many others, after trying and testing monolithic architectures, have been moving their workloads back. Both approaches have their pain points, and no single perfect solution exists.

Last year, ride-hailing and logistics firm Uber decided to upgrade its machine learning platform and shift its ML workloads to Kubernetes. In typical Uber fashion, it didn't just migrate; it also built some of its own tools along the way to make everything run smoothly. In a recent blog, the Uber tech team explained this transition and the motivation behind it.

ML pipelines deal with huge volumes of data, especially during model training.
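Uber's internal tooling isn't public, but the general shape of running Ray on Kubernetes can be seen in the open-source KubeRay operator, which manages Ray clusters through a custom resource. The manifest below is an illustrative sketch, not Uber's configuration; the cluster name, image tag, and resource limits are assumptions:

```yaml
# A minimal RayCluster custom resource for the KubeRay operator.
# Names, versions, and resource requests here are illustrative only.
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: ml-training-cluster        # hypothetical name
spec:
  headGroupSpec:
    rayStartParams:
      dashboard-host: "0.0.0.0"    # expose the Ray dashboard inside the pod
    template:
      spec:
        containers:
          - name: ray-head
            image: rayproject/ray:2.9.0   # pick a version matching your jobs
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
  workerGroupSpecs:
    - groupName: gpu-workers
      replicas: 2
      minReplicas: 1
      maxReplicas: 8               # the operator can scale within these bounds
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-worker
              image: rayproject/ray:2.9.0
              resources:
                limits:
                  nvidia.com/gpu: "1"   # one GPU per worker pod
```

A cluster like this would be created with `kubectl apply -f`, after which training jobs can be submitted to the head node's job endpoint; scaling the worker group up or down is then a Kubernetes concern rather than bespoke infrastructure.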

Mohit Pandey
Mohit writes about AI in simple, explainable, and often funny words. He's especially passionate about chatting with those building AI for Bharat, with the occasional detour into AGI.