Government-backed hacker groups associated with Russia, North Korea, Iran, and China are using artificial intelligence (AI) and large language models (LLMs) to improve their cyber attack operations, according to Microsoft and OpenAI.
OpenAI said in a blog post that it had terminated accounts associated with five state-backed threat actors: two China-affiliated groups tracked as Charcoal Typhoon and Salmon Typhoon, the Iran-affiliated Crimson Sandstorm, the North Korea-linked Emerald Sleet, and the Russia-associated group tracked as Forest Blizzard.
The threat actors used OpenAI's services to query open-source information, translate content, find coding errors, and run basic coding tasks, the company said.
More specifically, Charcoal Typhoon (aka RedHotel, Aquatic Panda, and Bronze University) used AI to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely intended for phishing campaigns. Salmon Typhoon (aka Sodium) used the services to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system.
Crimson Sandstorm (aka Curium) used AI for scripting support related to app and web development, for generating content for spear-phishing campaigns, and for researching ways malware could evade detection.
The Emerald Sleet (previously Thallium) group used AI to identify experts and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns.
The Forest Blizzard threat actor, for its part, primarily used the services for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks, OpenAI said.
Microsoft also announced a set of principles designed to mitigate the risks posed by the malicious use of its AI tools and APIs by state-backed hackers, advanced persistent manipulators (APMs), and cybercriminal syndicates, and to shape effective guardrails and safety mechanisms around its models.
These principles include identifying and acting against malicious threat actors' use of its platforms, notifying other AI service providers, collaborating with other stakeholders, and maintaining transparency.