Microsoft has announced that it is taking legal action against a threat actor accused of operating a “hacking-as-a-service” infrastructure designed to bypass the safety controls of its generative artificial intelligence (AI) services, such as the Azure OpenAI Service. The tech giant alleges that the group exploited these services to create harmful and offensive content, in violation of the company’s terms of use and security safeguards.
The company’s Digital Crimes Unit (DCU) reported that it first discovered the malicious activity in July 2024. The group reportedly developed software that exploited exposed customer credentials scraped from public websites. Using these credentials, the group accessed generative AI services, such as DALL-E, and manipulated them to produce harmful content, including violent or otherwise inappropriate images. Microsoft claims the group then monetized this access by selling it on to other malicious actors.
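That scraping step is mundane but effective: an Azure OpenAI key pasted into a public repository or configuration file is enough to authenticate API calls as the paying customer. As a rough illustration of the kind of audit defenders can run against their own code, the Python sketch below flags strings that look like leaked keys. The 32-character hexadecimal pattern and the context keywords are heuristic assumptions, not an official key-format specification.

```python
import re
import sys
from pathlib import Path

# Heuristic: Azure Cognitive Services / Azure OpenAI keys are commonly
# 32-character hexadecimal strings. This will also match unrelated hex
# values, so hits are leads to verify, not proof of a leak.
KEY_PATTERN = re.compile(r"\b[0-9a-f]{32}\b", re.IGNORECASE)

# Context words that raise confidence a hex string is actually a key
# (assumed hints, not an exhaustive list).
CONTEXT_HINTS = ("api_key", "api-key", "openai", "azure", "subscription")

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like exposed keys."""
    hits = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return hits
    for lineno, line in enumerate(text.splitlines(), start=1):
        if KEY_PATTERN.search(line) and any(h in line.lower() for h in CONTEXT_HINTS):
            hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for path in root.rglob("*"):
        if path.is_file() and path.suffix in {".py", ".js", ".json", ".env", ".txt", ".yml", ".yaml"}:
            for lineno, line in scan_file(path):
                print(f"{path}:{lineno}: possible exposed key -> {line}")
```

Managed secret-scanning services, such as GitHub’s push protection, perform a more robust version of this same pattern matching at scale.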
Microsoft has revoked the group's access to its AI tools and implemented stronger safeguards to prevent future incidents. In addition, the company secured a court order to seize a website, aitism[.]net, used in the operation.
The lawsuit, filed in the US District Court for the Eastern District of Virginia, names ten individuals allegedly involved in the operation. According to Microsoft, these individuals violated multiple US laws, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and the Racketeer Influenced and Corrupt Organizations Act. The group allegedly used stolen API keys for Microsoft’s Azure OpenAI Service to create and distribute harmful content; court documents indicate that some of these keys belonged to US-based companies.
The group used custom software to circumvent Microsoft’s built-in AI safety protocols. The software enabled its operators to reverse-engineer which phrases were flagged and to bypass the filters that prevent the creation of violent, hateful, or misleading AI-generated content. The tool also allegedly stripped metadata from the generated media, undermining efforts to identify content as AI-generated.
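Metadata stripping matters because provenance schemes such as Content Credentials (C2PA) travel in an image’s embedded metadata; remove it and the usual automated checks come back empty. The Pillow-based sketch below shows the kind of heuristic inspection such stripping defeats. The specific metadata keys checked here are assumptions that vary by format and encoder, so an empty result is a signal, not proof.

```python
from PIL import Image  # pip install Pillow

def provenance_signals(path: str) -> dict:
    """Report metadata that a provenance-stripping tool would remove.

    An empty result does not prove an image is AI-generated or laundered;
    it only shows that no EXIF/XMP survived, which is exactly what
    stripping produces.
    """
    with Image.open(path) as img:
        exif = img.getexif()
        return {
            "format": img.format,
            "exif_tag_count": len(exif),        # drops to 0 after stripping
            "software_tag": exif.get(0x0131),   # EXIF "Software" field, if any
            # XMP packets often carry Content Credentials pointers; the info
            # keys below are where Pillow surfaces XMP for some formats and
            # are an assumption, not a guaranteed location.
            "has_xmp": any(k in img.info for k in ("XML:com.adobe.xmp", "xmp")),
        }

if __name__ == "__main__":
    import sys
    print(provenance_signals(sys.argv[1]))
```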
As part of the investigation, Microsoft secured a temporary restraining order that allows it to seize the malicious domain and redirect its traffic to the DCU’s sinkhole for further analysis.
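Sinkholing works at the DNS layer: once the seized domain is re-pointed at Microsoft-controlled servers, requests from the group’s customers are logged for analysis instead of reaching the operators. The sketch below shows how an analyst might confirm that a domain now resolves into a sinkhole; the IP addresses are placeholders from the TEST-NET-3 documentation range, since the DCU’s actual sinkhole ranges are not public.

```python
import socket

# Placeholder sinkhole addresses (TEST-NET-3); a real check would compare
# against internal threat-intelligence data, which Microsoft does not publish.
KNOWN_SINKHOLE_IPS = {"203.0.113.10", "203.0.113.11"}

def check_sinkholed(domain: str) -> None:
    """Resolve a domain and report whether it lands on a known sinkhole IP."""
    try:
        _, _, ips = socket.gethostbyname_ex(domain)
    except socket.gaierror as err:
        print(f"{domain}: did not resolve ({err})")
        return
    for ip in ips:
        verdict = "sinkholed" if ip in KNOWN_SINKHOLE_IPS else "not on sinkhole list"
        print(f"{domain} -> {ip}: {verdict}")

if __name__ == "__main__":
    # Substitute the domain under investigation, e.g. the one named in the order.
    check_sinkholed("example.com")
```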
Both Microsoft and OpenAI have previously reported the use of their services by nation-state groups from China, Iran, North Korea, and Russia for purposes such as disinformation campaigns and reconnaissance. A report from the European police agency Europol warned that tools similar to OpenAI’s ChatGPT make it possible “to impersonate an organization or individual in a highly realistic manner.” The UK’s National Cyber Security Centre has likewise warned of the hacking risks posed by the misuse of AI tools.