Equities
Microsoft enhances cybersecurity with new deputy CISO roles amid high-profile breaches, expands AI safety team to 400.
By Mackenzie Crow
Microsoft Corp. has announced the addition of deputy chief information security officers (CISOs) to its product groups, a move aimed at enhancing the company's resilience to cyberattacks. This decision comes in the wake of criticism over Microsoft's handling of several high-profile security breaches. The newly appointed executives will report to Igor Tsyganskiy, Microsoft's global chief information security officer since December. This organizational change is part of Microsoft's broader strategy to integrate security considerations more deeply into its product development process. Ann Johnson, a veteran Microsoft security executive, has been appointed deputy CISO for customer outreach and regulated industries, emphasizing the importance of customer communication in Microsoft's security efforts.
Microsoft's cybersecurity practices have been under scrutiny following incidents involving state-sponsored and criminal hacking groups. Early this year, a Russian state-backed group accessed the email accounts of senior Microsoft executives, prompting a significant internal effort to contain the intrusion. In an earlier breach, a Chinese-linked hacking group exploited a Microsoft access tool to compromise the email accounts of high-profile U.S. officials. These incidents have prompted calls for urgent reform within the company, including from the U.S. Cyber Safety Review Board and U.S. Senator Ron Wyden, who has proposed legislation setting cybersecurity standards for collaboration software.
In addition to bolstering its cybersecurity framework, Microsoft has expanded the team dedicated to ensuring the safety and responsibility of its artificial intelligence (AI) products. The company increased the AI safety team from 350 to 400 people, focusing on the challenges posed by AI-generated content. This expansion is part of Microsoft's Secure Future Initiative and follows the dissolution of its Ethics and Society team. Microsoft's efforts to promote trust in its AI tools include investigating incidents involving its Copilot chatbot and committing to a responsible AI deployment framework based on standards from the National Institute of Standards and Technology (NIST).