#MSBuild Announcements: Making AI Easier with Updates to Azure Machine Learning, #MLOps, and #DevOps for Machine Learning

I’ve been using Azure Machine Learning for some time, and I’m excited about the new updates that make it easier for AI novices to build, train, and deploy machine learning models. I believe in the power of AI, and I believe that everyone should get a chance to use it.

Microsoft are helping businesses to have easy access to AI, from conception to modelling through to business value creation and sustainability, by making it production-ready. Azure Machine Learning helps data scientists and developers build and train AI models faster, then easily deploy those models to the cloud or the edge. By simplifying AI, Microsoft makes it easier to de-risk getting started with it.

Microsoft have announced a new automated machine learning user interface which is zero-code, meaning that you create your models visually by dragging and dropping. I’m also pleased to be delivering a Build session on Wednesday at 2pm, where I get to show off the new machine learning notebooks for code-first model development.

Right now, these capabilities are available in preview, so why not head over and have a play?

There are also new capabilities to help you transition your models to production, at scale. New MLOps, or DevOps for Machine Learning, capabilities simplify the end-to-end lifecycle from model creation to deployment. To help make AI easier for the business to manage, it’s also possible to monitor your models through Azure DevOps integration. If you want to know more about MLOps, check out this video with Seth Juarez (Twitter), who is fantastic at explaining things; it’s well worth watching.
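To give a flavour of what MLOps with Azure DevOps integration can look like in practice, here is a minimal sketch of an Azure Pipelines YAML definition that retrains and registers a model whenever code is pushed. This is an illustrative sketch only, not an official template: the script names (`train.py`, `register_model.py`) are hypothetical placeholders for your own project’s scripts.

```yaml
# Hypothetical azure-pipelines.yml sketch: retrain and register a model on each push.
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.x'

  - script: pip install azureml-sdk
    displayName: 'Install Azure ML SDK'

  # Hypothetical training script that produces a model artifact.
  - script: python train.py
    displayName: 'Train model'

  # Hypothetical script that registers the trained model in the Azure ML workspace,
  # so deployments and monitoring can track model versions over time.
  - script: python register_model.py
    displayName: 'Register model'
```

The key idea is that the model lifecycle lives in source control alongside the code, so every change is trained, versioned, and deployable through the same pipeline your developers already use.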

At Build, it was also announced that there is now high-speed inferencing from cloud to edge. This enables low-latency and low-cost inferencing with the general availability of hardware-accelerated models that run on FPGAs in Azure. This capability is also available in preview on Azure Data Box Edge. ONNX Runtime support for NVIDIA TensorRT and Intel nGraph enables high-speed inferencing on NVIDIA and Intel chipsets.

To summarise, Microsoft are making AI production-ready and accessible to businesses, from conception to modelling through to business value creation and sustainability, with Azure Machine Learning helping data scientists and developers build, train, and deploy models faster to the cloud or the edge.
These Azure Machine Learning updates are available now, with some capabilities still in preview.
