Tuesday, December 3, 2024

Announcing fine-tuning for customization and support for new models in Azure AI 

To truly harness the power of generative AI, customization is key. In this blog, we share the latest Microsoft Azure AI updates.

AI has revolutionized the way we approach problem-solving and creativity in various industries. From generating realistic images to crafting human-like text, these models have shown immense potential. However, to truly harness their power, customization is key. We are announcing new customization updates on Microsoft Azure AI including:

  • General availability of fine-tuning for Azure OpenAI Service GPT-4o and GPT-4o mini.
  • Availability of new models including Phi-3.5-MoE and Phi-3.5-vision through serverless endpoints, Meta’s Llama 3.2, the Saudi Data and AI Authority’s (SDAIA) ALLaM-2-7B, and updated Command R and Command R+ from Cohere.
  • New capabilities that expand on our enterprise promise including upcoming availability of Azure OpenAI Data Zones.
  • New responsible AI features including Correction, a capability in Azure AI Content Safety’s groundedness detection feature, new evaluations to assess the quality and security of outputs, and Protected Material Detection for Code.
  • Full Network Isolation and Private Endpoint Support for building and customizing generative AI apps in Azure AI Studio.

Unlock the power of custom LLMs with Azure AI 

Customization of LLMs has become an increasingly popular way for our users to gain the power of best-in-class generative AI models, combined with the unique value of proprietary data and domain expertise. Fine-tuning has become the preferred choice to create custom LLMs: faster, cheaper, and more reliable than training models from scratch.

Azure AI is proud to offer tooling that enables customers to fine-tune models across Azure OpenAI Service, the Phi family of models, and over 1,600 models in the model catalog. Today, we’re excited to announce the general availability of fine-tuning for both GPT-4o and GPT-4o mini on Azure OpenAI Service. Following a successful preview, these models are now fully available for customers to fine-tune. We’ve also enabled fine-tuning for small language models (SLMs) with the Phi-3 family of models.

Azure OpenAI Service fine-tuning GPT-4o

Whether you’re optimizing for specific industries, enhancing brand voice consistency, or improving response accuracy across different languages, GPT-4o and GPT-4o mini deliver robust solutions to meet your needs. 
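Fine-tuning starts with training data. Azure OpenAI Service fine-tuning consumes examples in a chat-style JSONL format, one `{"messages": [...]}` object per line. Below is a minimal sketch of preparing such a file for a brand-voice use case; the example content and file handling are illustrative, not a prescribed recipe.

```python
import json

# Illustrative training examples for tuning brand voice. Each record is a
# short conversation: a system prompt setting the persona, a user turn,
# and the assistant reply we want the fine-tuned model to imitate.
SYSTEM = "You are Contoso's support assistant. Answer concisely in the brand voice."

examples = [
    {
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Head to Settings > Security > Reset password, and we'll email you a link."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "Can I change my plan mid-cycle?"},
            {"role": "assistant", "content": "Absolutely. Upgrades apply right away; downgrades take effect next billing cycle."},
        ]
    },
]

def to_jsonl(rows):
    """Serialize examples as JSONL, ready to upload as a training file."""
    return "\n".join(json.dumps(row, ensure_ascii=False) for row in rows)

training_jsonl = to_jsonl(examples)
```

From here, you would write `training_jsonl` to a file, upload it to your Azure OpenAI resource, and create a fine-tuning job against a GPT-4o or GPT-4o mini base model via the SDK or Azure AI Studio; exact base-model names and versions vary by region and over time, so check your resource's model list.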

Lionbridge, a leader in the field of translation automation, has been one of the early adopters of Azure OpenAI Service and has leveraged fine-tuning to further enhance translation accuracy. 

“At Lionbridge, we have been tracking the relative performance of available translation automation systems for many years. As a very early adopter of GPTs on a large scale, we have fine-tuned several generations of GPT models with very satisfactory results. We’re thrilled to now extend our portfolio of fine-tuned models to the newly available GPT-4o and GPT-4o mini on Azure OpenAI Service. Our data shows that fine-tuned GPT models outperform both baseline GPT and Neural Machine Translation engines in languages like Spanish, German, and Japanese in translation accuracy. With the general availability of these advanced models, we’re looking forward to further enhance our AI-driven translation services, delivering even greater alignment with our customers’ specific terminology and style preferences.”—Marcus Casal, Chief Technology Officer, Lionbridge.

Nuance, a Microsoft company, has been a pioneer in AI-enabled healthcare solutions since 1996, starting with the first clinical speech-to-text automation for healthcare. Today, Nuance continues to leverage generative AI to transform patient care. Anuj Shroff, General Manager of Clinical Solutions at Nuance, highlighted the impact of generative AI and customization: 

“Nuance has long recognized the potential of fine-tuning AI models to deliver highly specialized and accurate solutions for our healthcare clients. With the general availability of GPT-4o and GPT-4o mini on Azure OpenAI Service, we’re excited to further enhance our AI-driven services. The ability to tailor GPT-4o’s capabilities to specific workflows marks a significant advancement in AI-driven healthcare solutions”—Anuj Shroff, General Manager of Clinical Solutions at Nuance.

For customers focused on low costs, small compute footprints, and edge compatibility, Phi-3 SLM fine-tuning is proving to be a valuable approach. Khan Academy recently published a research paper showing their fine-tuned version of Phi-3 performed better at finding and fixing student math mistakes compared to other models.

A platform for customization quality 

Fine-tuning is about much more than just training models. From data generation to model evaluation to scaling custom models for production workloads, Azure provides a unified platform: data generation via powerful LLMs, AI Studio Evaluation, built-in safety guardrails for fine-tuned models, and more. Alongside the general availability of GPT-4o and GPT-4o mini fine-tuning, we recently shared an end-to-end distillation flow for retrieval-augmented fine-tuning, showing how to leverage Azure AI for custom, domain-adapted models.

We are hosting a webinar on October 17, 2024, to unpack the essentials and practical recipes to get started with fine-tuning. We hope you will join us to learn more.

Expanding model choice

With over 1,600 models, the Azure AI model catalog offers the broadest selection of models for building generative AI applications. Azure AI models are now also available through GitHub Models, so developers can quickly prototype and evaluate the best model for their use case.
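GitHub Models exposes catalog models behind an OpenAI-compatible chat-completions endpoint, so prototyping can be as simple as one authenticated HTTP request. The sketch below assembles such a request with only the standard library; the endpoint URL, model identifier, and `GITHUB_TOKEN` environment variable reflect my understanding of the offering and may differ in your setup.

```python
import json
import os
import urllib.request

# Assumed GitHub Models inference endpoint (OpenAI-compatible).
ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"

def build_request(model, user_prompt, token):
    """Assemble an OpenAI-style chat-completions request for quick prototyping."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_request(
    "Phi-3.5-MoE-instruct",  # illustrative model id; check the catalog for exact names
    "Summarize retrieval-augmented generation in one sentence.",
    os.environ.get("GITHUB_TOKEN", "<token>"),
)

# Send only once a real token is configured:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the payload shape matches the OpenAI chat-completions schema, swapping models during evaluation is just a matter of changing the `model` string.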

I am excited to share new model availability, including: 

  • Phi-3.5-MoE-instruct, a Mixture-of-Experts (MoE) model, and Phi-3.5-vision-instruct, available through serverless endpoints and through GitHub Models. Phi-3.5-MoE-instruct, with 16 experts and 6.6B active parameters, provides multilingual capability, competitive performance, and robust safety measures. Phi-3.5-vision-instruct (4.2B parameters), now available through managed compute, enables reasoning across multiple input images, opening up new possibilities such as detecting differences between images.
  • Meta’s Llama 3.2 11B Vision Instruct and Llama 3.2 90B Vision Instruct. These are Llama’s first multimodal models and are available via managed compute in the Azure AI model catalog. Inferencing through serverless endpoints is coming soon.
  • SDAIA’s ALLaM-2-7B. This new model is designed to facilitate natural language understanding in both Arabic and English. With 7 billion parameters, ALLaM-2-7B aims to serve as a critical tool for industries requiring advanced language processing capabilities.
  • Updated Command R and Command R+ from Cohere, available in Azure AI Studio and through GitHub Models. Known for their strength in retrieval-augmented generation (RAG) with citations, multilingual support in over 10 languages, and workflow automation, the latest versions offer better efficiency, affordability, and user experience. They feature improvements in coding, math, reasoning, and latency, with Command R being the fastest and most efficient model yet.

Achieve AI transformation with confidence

Earlier this week, we unveiled Trustworthy AI, a set of commitments and capabilities to help build AI that is secure, safe, and private. Data privacy and security, core pillars of Trustworthy AI, are foundational to designing and implementing new solutions. To help meet regulatory and compliance standards, Azure OpenAI Service provides robust enterprise controls so organizations can build with confidence. We continue to invest in expanding these controls and recently announced the upcoming availability of Azure OpenAI Data Zones to further enhance data privacy and security. Building on the existing strength of Azure OpenAI Service’s data processing and storage options, the new Data Zones feature gives customers a choice between Global, Data Zone, and regional deployments, allowing them to keep data at rest within the Azure region they choose for their resource. We are excited to bring this to customers soon.

Additionally, we recently announced full network isolation in Azure AI Studio, with private endpoints to storage, Azure AI Search, Azure AI services, and Azure OpenAI Service supported via managed virtual network (VNET). Developers can also chat with their enterprise data securely using private endpoints in the chat playground. Network isolation prevents entities outside the private network from accessing its resources. For additional control, customers can now enable Entra ID for credential-less access to Azure AI Search, Azure AI services, and Azure OpenAI Service connections in Azure AI Studio. These security capabilities are critical for enterprise customers, particularly those in regulated industries using sensitive data for model fine-tuning or retrieval augmented generation (RAG) workflows.

In addition to privacy and security, safety is top of mind. As part of our responsible AI commitment, we launched Azure AI Content Safety in 2023 to provide guardrails for generative AI. Building on this work, Azure AI Content Safety features—including prompt shields and protected material detection—are on by default and available at no cost in Azure OpenAI Service. Further, these capabilities can be used as content filters with any foundation model in our model catalog, including Phi-3, Llama, and Cohere. We also announced new capabilities in Azure AI Content Safety, including:

  • Correction to help fix hallucination issues in real time before users see them, now available in preview.
  • Protected Material Detection for Code to help detect pre-existing content and code. This feature helps developers explore public source code in GitHub repositories, fostering collaboration and transparency, while enabling more informed coding decisions.
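Groundedness detection with Correction is exposed as a REST operation on an Azure AI Content Safety resource: you submit the model's answer plus the grounding sources it should have relied on, and the service flags ungrounded spans and, when correction is requested, proposes a rewritten answer. The sketch below assembles what I understand the preview request body to look like; the route, api-version, and field names are assumptions from preview documentation and may change.

```python
# Assumed preview route and version for groundedness detection with correction.
API_VERSION = "2024-09-15-preview"
ROUTE = "/contentsafety/text:detectGroundedness"

def build_groundedness_request(llm_output, sources, correct=True):
    """Build the JSON body asking Content Safety to check an answer against
    grounding sources and, when ungrounded, propose a corrected answer."""
    return {
        "domain": "Generic",          # or "Medical" for clinical content
        "task": "Summarization",      # task type the text is checked under
        "text": llm_output,           # the model output to verify
        "groundingSources": sources,  # the documents the answer must be grounded in
        "correction": correct,        # ask the service to rewrite ungrounded spans
    }

body = build_groundedness_request(
    "The device ships with a 10-year warranty.",
    ["The Contoso device includes a 2-year limited warranty."],
)
```

The body would be POSTed to `https://<resource>.cognitiveservices.azure.com{ROUTE}?api-version={API_VERSION}` with the resource key in an `Ocp-Apim-Subscription-Key` header; with correction enabled, the response can include corrected text alongside the detected ungrounded spans, letting an application fix the answer before users see it.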

Lastly, we announced new evaluations to help customers assess the quality and security of outputs and how often their AI application outputs protected material.

Get started with Azure AI

As a product builder, it is exciting and humbling to bring new AI innovations to customers, including models, customization, and safety features, and to see the real transformation customers are driving. Whether you work with an LLM or an SLM, customizing a generative AI model boosts its potential, allowing businesses to address specific challenges and innovate in their fields. Create the future today with Azure AI.
