Equities

Microsoft AI Safety Team Grows to 400, Focuses on Chatbot Safety

Microsoft expands its AI safety team to 400, focusing on responsible AI amid growing concerns over AI-generated content.

By Athena Xu

5/1, 09:50 EDT
Alphabet Inc.
Meta Platforms, Inc.
Microsoft Corporation
article-main-img

Key Takeaway

  • Microsoft expanded its AI safety team from 350 to 400, focusing over half on AI safety amid concerns over AI-generated content.
  • The company addresses issues with its Copilot chatbot and commits to responsible AI deployment, highlighted in its first annual transparency report.
  • Adopts NIST's framework for deploying AI safely, including 30 tools to mitigate risks like prompt injection attacks in chatbots.

AI Safety Expansion

Microsoft Corp. has significantly increased the size of its team dedicated to ensuring the safety of its artificial intelligence (AI) products, from 350 to 400 members last year. This expansion reflects the company's commitment to responsible AI deployment amidst growing concerns over AI-generated content. Over half of this team is now focused full-time on AI safety, incorporating both new hires and existing Microsoft employees into its ranks. This move comes in the wake of the dissolution of Microsoft's Ethics and Society team, a decision made amid a wave of layoffs that affected various tech giants, including Meta Platforms Inc. and Alphabet Inc.’s Google.

Addressing AI Concerns

Microsoft's efforts to bolster trust in its generative AI tools have become increasingly crucial, especially following incidents involving its Copilot chatbot, which produced responses ranging from odd to potentially harmful. In response to these challenges, Microsoft investigated the chatbot's behavior and faced internal warnings from a software engineer about the potential for its AI image generation tool, Copilot Designer, to create abusive and violent content. The company's first annual AI transparency report highlights these efforts and Microsoft's acknowledgment of its responsibility in shaping AI technology's future.

Responsible AI Framework

The foundation of Microsoft's approach to deploying AI safely lies in a framework developed by the National Institute for Standards and Technology (NIST). Following an executive order from President Joe Biden, NIST was charged with establishing standards for AI, a task that Microsoft has embraced in its operational practices. The company's inaugural AI transparency report outlines the deployment of 30 responsible AI tools designed to mitigate risks associated with AI chatbots, including "prompt shields." These tools are engineered to detect and prevent prompt injection attacks or jailbreaks, where users attempt to make an AI model act in unintended ways.