Microsoft, Nvidia, and AWS are all eager to cash in on the buzz around DeepSeek’s AI reasoning model, R1, by offering it through their cloud services.
Since DeepSeek dropped its R1 model, it’s been gaining a ton of attention, and now big players like Microsoft, Nvidia, and Amazon Web Services (AWS) are rushing to integrate this open-source AI into their platforms.
In India, Union Minister of Electronics and Information Technology Ashwini Vaishnaw gave a shoutout to the Chinese AI startup, saying that its large language models (LLMs) will be hosted on local servers. He mentioned that hosting these open-source models in India could help tackle data privacy concerns.
Vaishnaw also noted that DeepSeek’s models are set to be hosted on India’s new AI Compute Facility, which is gearing up to launch soon with 10,000 GPUs ready to go.
On the local front, Indian AI startup Krutrim, founded by Ola CEO Bhavish Aggarwal, announced that developers can access R1 through Krutrim Studio for just Rs 45 per million tokens. Aggarwal emphasized that India needs to keep up with AI advancements, stating, “@Krutrim has ramped up efforts to create top-notch AI. Our cloud now has DeepSeek models live on Indian servers, and our pricing is the lowest in the world.”
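As a back-of-the-envelope illustration of that pricing, here's what Rs 45 per million tokens works out to for a given workload. The helper function below is hypothetical (not part of Krutrim Studio), and assumes the quoted flat rate applies to all tokens:

```python
def krutrim_cost_inr(tokens: int, rate_per_million: float = 45.0) -> float:
    """Estimate the cost in rupees of a workload of `tokens` tokens
    at Krutrim's quoted rate of Rs 45 per million tokens."""
    return tokens / 1_000_000 * rate_per_million

# A 2.5-million-token workload at the quoted rate:
print(krutrim_cost_inr(2_500_000))  # → 112.5 rupees
```

So a chatbot that churns through a few million tokens a day would cost on the order of a few hundred rupees at this rate.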
Here’s a quick rundown of how some tech giants are getting in on the DeepSeek action:
Microsoft
Microsoft revealed that R1 is now available on Azure AI Foundry and GitHub. Azure AI Foundry is the company's platform for enterprise-facing AI services. Microsoft says it put R1 through rigorous safety testing to minimize risks, and it is keen to see how developers and companies use R1 to solve real-world problems. It also said that customers with Copilot+ PCs will soon be able to run "distilled" versions of R1 right on their devices. Interestingly, the move comes after OpenAI, in which Microsoft is a major investor, accused DeepSeek of using its technology to train DeepSeek's models.
Nvidia
Nvidia announced that DeepSeek-R1 is now live on its NIM microservice platform, which is designed to make it easier for developers to deploy custom and pre-trained AI models. Nvidia said the full 671-billion-parameter DeepSeek-R1 model is available for developers to experiment with, and that it can serve up to 3,872 tokens per second on a single NVIDIA HGX H200 system. An API for R1 will follow soon, allowing enterprises to create customized DeepSeek-R1 microservices for their specific AI needs.
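To put that throughput figure in perspective, here's a rough estimate of how long generating a response would take at the quoted rate. The helper below is purely illustrative and assumes the full 3,872 tokens/second is sustained for the whole generation, which real deployments may not achieve:

```python
def generation_time_seconds(tokens: int, tokens_per_second: float = 3872.0) -> float:
    """Rough time to generate `tokens` output tokens at the throughput
    Nvidia quotes for DeepSeek-R1 on one HGX H200 system."""
    return tokens / tokens_per_second

# e.g. a long 10,000-token answer at the quoted rate:
print(round(generation_time_seconds(10_000), 2))  # → 2.58 seconds
```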
So, it looks like the race is on for these tech giants to leverage DeepSeek’s growing popularity!