Equities
Microsoft expands its AI safety team to 400, focusing on responsible AI amid growing concerns over AI-generated content.
By Athena Xu
Microsoft Corp. has significantly expanded the team dedicated to ensuring the safety of its artificial intelligence (AI) products, growing it from 350 to 400 members over the past year. The expansion reflects the company's commitment to responsible AI deployment amid growing concerns over AI-generated content. More than half of the team now works on AI safety full-time, and its ranks include both new hires and existing Microsoft employees. The move comes in the wake of the dissolution of Microsoft's Ethics and Society team, a decision made during a wave of layoffs that affected tech giants including Meta Platforms Inc. and Alphabet Inc.'s Google.
Microsoft's efforts to bolster trust in its generative AI tools have become increasingly crucial, especially after incidents in which its Copilot chatbot produced responses ranging from odd to potentially harmful. Microsoft investigated the chatbot's behavior and also faced internal warnings from a software engineer that its AI image generation tool, Copilot Designer, could create abusive and violent content. The company's first annual AI transparency report highlights these efforts and acknowledges Microsoft's responsibility in shaping the technology's future.
The foundation of Microsoft's approach to deploying AI safely lies in a framework developed by the National Institute of Standards and Technology (NIST). Under an executive order from President Joe Biden, NIST was charged with establishing standards for AI, a task Microsoft has embraced in its operational practices. The company's inaugural AI transparency report outlines the deployment of 30 responsible AI tools designed to mitigate risks associated with AI chatbots, including "prompt shields." These tools are engineered to detect and block prompt injection attacks, or jailbreaks, in which users attempt to make an AI model act in unintended ways.
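To make the "prompt shield" idea concrete, here is a minimal, purely illustrative sketch of a pattern-based screen for injection-style prompts. The patterns, function name, and approach are assumptions for illustration; Microsoft's actual tooling is far more sophisticated and is not public in this form.

```python
import re

# Hypothetical patterns typical of jailbreak attempts -- not Microsoft's actual rules.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard (the|your) (system|previous) prompt",
    r"you are now (in )?developer mode",
    r"pretend (you are|to be) an? unrestricted",
]

def looks_like_injection(user_prompt: str) -> bool:
    """Return True if the prompt matches a known jailbreak pattern."""
    text = user_prompt.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)

# A shield like this would run as a screening step before the prompt
# reaches the model, rejecting or flagging suspicious inputs.
```

In practice, production systems pair such heuristics with trained classifiers, since simple pattern matching is easy to evade with rephrasing.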