Equities

Microsoft Bolsters AI, Security Teams After Hacks

Microsoft enhances cybersecurity with new deputy CISO roles amid high-profile breaches, expands AI safety team to 400.

By Mackenzie Crow

5/2, 19:54 EDT

Key Takeaways

  • Microsoft appoints deputy CISOs across product groups to strengthen cybersecurity, responding to high-profile breaches.
  • The company faces scrutiny over security practices, with recent hacks by Russian and Chinese groups targeting executives' emails.
  • Microsoft expands AI safety team to 400, emphasizing responsible AI development amidst growing concerns over AI-generated content.

Strengthening Cybersecurity Measures

Microsoft Corp. has announced the addition of deputy chief information security officers (CISOs) to its product groups, a move aimed at enhancing the company's resilience to cyberattacks. This decision comes in the wake of criticism over Microsoft's handling of several high-profile security breaches. The newly appointed executives will report to Igor Tsyganskiy, Microsoft's global chief information security officer since December. This organizational change is part of Microsoft's broader strategy to integrate security considerations more deeply into its product development process. Ann Johnson, a veteran Microsoft security executive, has been appointed deputy CISO for customer outreach and regulated industries, emphasizing the importance of customer communication in Microsoft's security efforts.

Responding to Cybersecurity Challenges

Microsoft's cybersecurity practices have come under scrutiny following incidents involving state-sponsored and criminal hacking groups. Earlier this year, a Russian group accessed the email accounts of top Microsoft executives, prompting a significant internal response to mitigate the intrusion. More recently, a Chinese-linked hacking group exploited a Microsoft access tool to breach the email accounts of high-profile U.S. officials. These incidents have prompted calls for urgent reforms within the company, including from the US Cyber Safety Review Board and US Senator Ron Wyden, who has proposed legislation setting cybersecurity standards for collaboration software.

Commitment to Responsible AI Development

In addition to bolstering its cybersecurity framework, Microsoft has expanded the team dedicated to ensuring the safety and responsibility of its artificial intelligence (AI) products. The company grew the AI safety team from 350 to 400 people, focusing on the challenges posed by AI-generated content. The expansion is part of Microsoft's Secure Future Initiative and follows the dissolution of its Ethics and Society team. Microsoft's efforts to build trust in its AI tools include investigating incidents involving its Copilot chatbot and committing to a responsible AI deployment framework based on standards from the National Institute of Standards and Technology (NIST).