Microsoft warns: Cyber threat organizations are increasingly exploiting artificial intelligence to carry out attacks


IT House News, March 8: Microsoft stated that cyber threat organizations are increasingly using artificial intelligence in their operations to accelerate attacks, expand malicious activities, and lower the technical barriers across the entire cyberattack process.

A recent report from Microsoft’s Threat Intelligence team indicates that attackers are employing generative AI tools for various tasks, including information reconnaissance, phishing, infrastructure setup, malware creation, and post-intrusion activities.

In most cases, AI is used to craft phishing emails, translate content, summarize stolen data, debug malware, and assist in scripting or configuring attack infrastructure.

“Microsoft Threat Intelligence has observed that most malicious uses of AI currently focus on generating text, code, or multimedia content with large language models. Threat groups use generative AI to write phishing bait, translate content, summarize stolen data, generate or debug malware, and build scripts or attack infrastructure,” Microsoft warns. “In these applications, AI acts as an efficiency multiplier, reducing technical difficulty and speeding up execution, while operators still control the attack targets, objectives, and deployment decisions.”

Microsoft has observed multiple cyberattack groups integrating AI into their operations, including North Korea-linked groups tracked as “Jasper Sleet” and “Coral Sleet,” which use this technology for remote IT infiltration campaigns.

In such operations, AI tools help generate realistic identities, resumes, and communication content to deceive Western companies into hiring the operatives, and to maintain access after onboarding.

Jasper Sleet leverages generative AI platforms to streamline the creation of fake digital identities. For example, members prompt AI to generate lists of names and email formats matching specific cultural backgrounds to fit particular profiles. Sample prompts include:

Prompt Example 1: “Generate 100 Greek names.”

Prompt Example 2: “Use the name Jane Doe to generate a set of email address formats.”

Jasper Sleet also uses generative AI to review job postings on professional platforms related to software development and IT, extracting and summarizing required skills to craft fake identities tailored for specific positions.

The report also details how AI is used to assist in malware development and infrastructure setup: threat groups utilize AI coding tools to generate and optimize malicious code, troubleshoot errors, or port malware components across different programming languages.

Some malware experiments have shown AI-powered features, such as dynamically generating scripts or modifying behavior during runtime.

Microsoft also observed Coral Sleet using AI to quickly build fake company websites, configure attack infrastructure, and test or troubleshoot deployment content.

When AI security defenses attempt to block these malicious uses, threat groups employ bypass (jailbreak) techniques to trick large language models into generating malicious code or content.

Beyond generative AI, Microsoft researchers have also found that threat groups are beginning to experiment with autonomous AI agents that can perform tasks independently and adapt based on results.

However, Microsoft notes that currently, AI is mainly used to assist decision-making rather than launching fully autonomous attacks.

Since many IT infiltration attacks rely on abusing legitimate permissions, Microsoft recommends that organizations treat such scams and similar behaviors as internal risks.

Additionally, because these AI-driven attacks resemble traditional cyberattacks, defenders should focus on detecting abnormal credential usage, strengthening identity systems against phishing, and protecting AI systems that could become future attack targets.
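The guidance on detecting abnormal credential usage can be illustrated with a toy baseline-deviation check. This is a minimal sketch with hypothetical field names (user, source country); real detection systems draw on far richer signals such as device fingerprints, ASN, login timing, and impossible-travel heuristics:

```python
from collections import defaultdict

def build_baseline(history):
    """Map each user to the set of source countries seen in historical logins."""
    baseline = defaultdict(set)
    for user, country in history:
        baseline[user].add(country)
    return baseline

def flag_anomalies(baseline, new_events):
    """Flag logins from a country the user has never logged in from before."""
    return [(user, country) for user, country in new_events
            if country not in baseline.get(user, set())]

# Toy data: alice has only ever logged in from the US, bob from Germany.
history = [("alice", "US"), ("alice", "US"), ("bob", "DE")]
baseline = build_baseline(history)
alerts = flag_anomalies(baseline, [("alice", "KP"), ("bob", "DE")])
print(alerts)  # [('alice', 'KP')]
```

A first-seen-country check like this is deliberately simple; its value is in showing where such logic sits in a pipeline, not in the rule itself, which attackers can evade with residential proxies.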

IT House notes that Microsoft is not the only entity observing threat groups increasingly using AI to launch attacks and lower entry barriers. Google recently reported that threat groups are abusing Gemini AI throughout the entire attack process, an observation echoed by Amazon. Amazon and the cybersecurity blog Cyber and Ramen also recently reported an incident in which a threat group used multiple generative AI services to breach more than 600 FortiGate firewalls.
