13 January 2025

Microsoft takes legal action against hackers exploiting AI for malicious purposes

Microsoft has announced that it is taking legal action against a threat actor accused of operating a “hacking-as-a-service” infrastructure designed to bypass the safety controls of its generative artificial intelligence (AI) services, such as the Azure OpenAI Service. The tech giant alleges that the group exploited these services to create harmful and offensive content, in violation of the company’s terms of use and security safeguards.

The company’s Digital Crimes Unit (DCU) reported that it first discovered the malicious activity in July 2024. The group reportedly developed software to harvest exposed customer credentials scraped from public websites. Using these credentials, the group accessed generative AI services such as DALL-E and manipulated them to produce harmful content, including violent or inappropriate images. Microsoft claims that the group then monetized this access by selling it to other malicious actors.
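Credential exposure of this kind is typically opportunistic: keys committed to public repositories or pasted on public sites can be found with simple pattern matching. As a purely defensive illustration, here is a minimal Python sketch that flags strings resembling exposed Azure-style API keys in text; the 32-character hex format and the context hints are assumptions made for the example, and production secret scanners such as gitleaks or truffleHog rely on far richer rule sets.

```python
import re

# Heuristic pattern for an Azure Cognitive Services-style key: a bare
# 32-character hex string. The exact key format is an assumption here.
KEY_PATTERN = re.compile(r"\b[0-9a-f]{32}\b", re.IGNORECASE)

# Only report matches whose line also mentions an API-key-like term,
# which cuts down on false positives from ordinary hashes.
CONTEXT_HINTS = ("openai", "azure", "api_key", "api-key", "subscription")

def find_candidate_keys(text: str) -> list[str]:
    """Return hex strings that look like exposed API keys."""
    hits: list[str] = []
    for line in text.splitlines():
        if any(hint in line.lower() for hint in CONTEXT_HINTS):
            hits.extend(KEY_PATTERN.findall(line))
    return hits

if __name__ == "__main__":
    sample = 'AZURE_OPENAI_API_KEY = "0123456789abcdef0123456789abcdef"'
    print(find_candidate_keys(sample))  # ['0123456789abcdef0123456789abcdef']
```

Automation of this sort is exactly why exposed keys are exploitable at scale: the same pattern matching that lets a defender audit a repository lets an attacker trawl thousands of public sites.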

Microsoft has revoked the group’s access to its AI tools and implemented stronger safeguards to prevent future incidents. In addition, the company secured a court order to seize the website aitism[.]net.

The lawsuit, filed in the US District Court for the Eastern District of Virginia, names ten individuals involved in the illegal operation. According to Microsoft, these individuals violated multiple laws, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and the Racketeer Influenced and Corrupt Organizations Act. The group allegedly used stolen API keys for Microsoft’s Azure OpenAI Service to create and distribute harmful content. Documents indicate that some of these keys belonged to US-based companies.

The group used custom software to circumvent Microsoft’s built-in AI safety protocols. The software enabled them to reverse-engineer flagged phrases and bypass the filters that prevent the creation of violent, hateful, or misleading AI-generated content. Microsoft also claims that the group’s tool could strip metadata from generated media, allowing the output to evade detection as AI-generated.
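Metadata stripping matters here because image generators can embed provenance information, such as C2PA Content Credentials, that marks output as AI-generated. The Python sketch below is a crude heuristic check for whether a file still carries common provenance byte markers; the marker list is an assumption made for illustration, and a spec-compliant check would parse and cryptographically verify the C2PA manifest with dedicated tooling.

```python
from pathlib import Path

# Byte markers that commonly appear when C2PA Content Credentials or XMP
# provenance data is embedded in an image. This list is a heuristic
# assumption, not a spec-compliant test.
PROVENANCE_MARKERS = (b"c2pa", b"jumb", b"http://ns.adobe.com/xap/1.0/")

def has_provenance_hint(path: str) -> bool:
    """Return True if the raw file bytes contain any known provenance marker."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in PROVENANCE_MARKERS)

if __name__ == "__main__":
    # "generated.png" is a hypothetical file; an image whose provenance
    # metadata was stripped would return False here.
    print(has_provenance_hint("generated.png"))
```

Note that the absence of provenance markers is a weak signal on its own, since many ordinary image pipelines strip metadata incidentally; it becomes meaningful only in combination with other evidence.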

As part of the investigation, Microsoft secured a temporary restraining order that allowed it to seize the malicious domain and redirect its traffic to the DCU’s sinkhole for further analysis.

Both Microsoft and OpenAI have previously reported the use of their services by nation-state groups from China, Iran, North Korea, and Russia for purposes such as disinformation campaigns and reconnaissance. A report from the European police agency Europol warned that tools similar to OpenAI’s ChatGPT make it possible “to impersonate an organization or individual in a highly realistic manner.” The UK’s National Cyber Security Centre has likewise warned that AI is likely to heighten the risk of cyberattacks.

