Over the past year, one word has come to dominate the conversation at the intersection of AI and cybersecurity: speed. The urgency is real, but speed is not the most important change we are seeing across the threat landscape. Threat actors, from nation-states to cybercrime groups, are now incorporating AI into how they plan, develop, and sustain cyberattacks. The goals haven’t changed, but AI has sharpened the tempo, iteration, and scale of those attacks.
Like defenders, these attackers usually keep a human in the loop; these are not fully autonomous AI campaigns. AI reduces friction across the life of an attack, helping threat actors conduct reconnaissance, exploit vulnerabilities, vibe-code malware, and monetize stolen data more quickly. The security leaders I spoke with at the RSAC™ 2026 Conference this week are prioritizing the tooling and strategic changes needed to stay ahead of this shift in the threat landscape.
Capability: Embedded, not emergent
The scale of what we are tracking makes it impossible to ignore. Threats are emerging everywhere. The United States alone accounts for about 25% of observed cases, followed by the United Kingdom, Israel, and Germany. The volume tracks economic and geopolitical significance.1
But the big change is not where, it’s how. Threat actors are embedding AI into the way they work across reconnaissance, malware development, and post-compromise operations. The motives are familiar: identity theft, financial gain, and espionage. The precision, persistence, and scale are not.
Email is still the fastest way in
Email remains the fastest and cheapest path to initial access. What has changed is the sophistication AI brings to crafting a message that gets someone to click.
When AI is integrated into the phishing process, we see click-through rates of up to 54%, compared with around 12% for most traditional campaigns. That is a 4.5-fold improvement in effectiveness. It is not the result of more volume, but of better targeting. AI lets threat actors research local context and tailor messages to specific people and roles, collapsing the effort needed to design a convincing lure. Combine that improved capability with infrastructure built to bypass multifactor authentication (MFA), and the result is a phishing operation that is more resilient, more targeted, and far harder to defend against at scale.
A 4.5x jump in click-through rates changes the risk calculus for every organization. It also shows that AI is not just being used to do more of the same; it is being used to do it better.
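The arithmetic behind that comparison is worth spelling out, since relative-lift figures are easy to misquote. A minimal sketch using the two rates reported above (variable names are my own):

```python
# Click-through rates reported in the post.
baseline_ctr = 0.12  # ~12% for traditional phishing campaigns
ai_ctr = 0.54        # up to 54% when AI shapes the lure

fold_increase = ai_ctr / baseline_ctr        # how many times the baseline
percent_increase = (fold_increase - 1) * 100  # increase over the baseline

print(f"{fold_increase:.1f}x the baseline rate")   # 4.5x
print(f"{percent_increase:.0f}% increase")         # 350% increase
```

Note that 4.5 times the baseline is a 350% increase over it; conflating the two is a common source of inflated headline numbers.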
Tycoon2FA: What industrial cybercrime looks like
Tycoon2FA is an example of how the actor we track as Storm-1747 turned phishing into a scaled, resilient operation. Understanding how it worked tells us where the threat is heading, and it shaped the conversations in the RSAC 2026 session rooms this week that focused on ecosystems rather than individual actors.
Tycoon2FA was not just a phishing kit; it was a phishing-as-a-service platform generating tens of millions of phishing emails per month. It was linked to nearly 100,000 compromised organizations as of 2023. At its peak, it accounted for 62% of all phishing attempts Microsoft blocked each month. The operation was built around adversary-in-the-middle attacks designed to defeat MFA. It intercepted credentials and session tokens in real time, allowing attackers to authenticate as legitimate users without triggering warnings, even after passwords were reset.
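The reason stolen session tokens defeat MFA is that the token, not the second factor, is what the service trusts after sign-in. A minimal defensive sketch (my own illustration, not Microsoft's detection logic, with hypothetical names and fingerprints) of the kind of check that catches a relayed token: flag a token that reappears from a client other than the one it was issued to.

```python
from dataclasses import dataclass

@dataclass
class Session:
    token_id: str
    issued_to: str  # fingerprint of device + network at sign-in

issued: dict[str, Session] = {}

def on_sign_in(token_id: str, client_fp: str) -> None:
    """Record which client a session token was originally issued to."""
    issued[token_id] = Session(token_id, client_fp)

def is_suspicious(token_id: str, client_fp: str) -> bool:
    """True if a known token shows up from an unexpected client,
    the telltale of an adversary-in-the-middle token relay."""
    sess = issued.get(token_id)
    return sess is not None and sess.issued_to != client_fp

on_sign_in("tok-123", "victim-laptop|198.51.100.7")
print(is_suspicious("tok-123", "victim-laptop|198.51.100.7"))  # False
print(is_suspicious("tok-123", "attacker-proxy|203.0.113.9"))  # True
```

Real-world mitigations in this spirit include token binding and continuous access evaluation, which revoke or re-challenge a session when its context changes.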
But technical capability is only part of the story. The bigger change is structural. Storm-1747 did not work alone. This was modular cybercrime: one service handled phishing templates, another provided infrastructure, another managed email delivery, another monetized access. It was, in effect, an assembly line for identity theft. Services could be rented, metered, and sold by subscription.
That is the example that reframed the debate this week: this is not about one sophisticated actor; it is about an ecosystem that industrializes access and lowers the barrier to entry for everyone who joins it. And that is exactly what AI is doing across the broader threat landscape: making the capabilities of advanced players available to all.
Disruption: Closing the threat intelligence loop
Our Digital Crimes Unit disrupted Tycoon2FA earlier this month, seizing 330 sites in collaboration with Europol and industry partners. But the goal was not just to take down websites; it was to put pressure on the supply chain. Cybercrime today runs on malicious services that lower the barrier to entry. Identity is the primary target, and MFA bypass is now packaged as a feature. Disrupting one service forces the market to adapt. Sustained pressure fragments the ecosystem. By targeting the economic engine behind attacks, we can change the risk calculus.
Every disruption generates signal. Signal feeds intelligence. Intelligence sharpens detection. Detection drives response. That is how we turn threat actions into durable defenses, and how deterrence compounds over time. Microsoft’s ability to see at scale, act at scale, and share intelligence at scale is a key differentiator, and it makes a difference because of how we use it.
AI across the attack lifecycle
Step back from any single campaign and take the broader view: AI does not show up in just one phase of an attack; it appears across the entire lifecycle. At RSAC 2026 this week, I offered a framework to help defenders prioritize their response:
- In reconnaissance: AI accelerates infrastructure discovery and persona development, compressing the time between target selection and first contact.
- In resource development: AI streamlines the creation of the tooling, lures, and infrastructure that later stages depend on.
- In initial access: AI improves voice cloning, deepfakes, and automated messaging built from compromised data, producing lures that are difficult to distinguish from legitimate communications.
- In persistence and evasion: AI scales false identities and automates communications that keep attackers embedded while blending in with normal operations.
- In weaponization: AI supports malware development, rapid iteration, and real-time debugging, generating tools that adapt to the victim’s environment instead of relying on static signatures.
- In post-compromise operations: AI adapts to specific victim contexts and, in some cases, automates ransom negotiations.
The motives have not changed: identity theft, financial gain, and espionage. What has changed is the tempo, the speed of iteration, and the ability to test and refine at scale. AI isn’t just accelerating cyberattacks; it’s improving them.
What comes next
At RSAC 2026 this week, I shared three themes that help explain the AI-driven shift in the threat landscape.
The first is agentic threats. The conditions we are preparing for have changed. The barrier to launching sophisticated attacks has collapsed. What once required the resources of a state or a well-organized criminal enterprise is now available to anyone with the right tools and the patience to use them. The motives have not changed dramatically; the precision, speed, and volume have.
The second is the software supply chain. Knowing which software and agents you are running, and being able to account for their behavior, is no longer just a compliance function. The agent ecosystem will become the most vulnerable surface in the enterprise. Organizations that cannot answer basic questions about their agents’ provenance and behavior will not be able to protect them.
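The most basic of those questions is an inventory check: which agents are running, and is each one accounted for? A minimal sketch, with entirely hypothetical agent names and data, of comparing observed agents against an approved register:

```python
# Approved agent register: name -> accountable owner and pinned version.
approved = {
    "ticket-triage-agent": {"owner": "secops", "version": "1.4"},
    "patch-rollout-agent": {"owner": "it-ops", "version": "0.9"},
}

# Agents observed running in the environment (e.g., from telemetry).
observed = ["ticket-triage-agent", "crypto-miner-agent", "patch-rollout-agent"]

# Anything observed but not approved is unaccounted for and needs triage.
unaccounted = [name for name in observed if name not in approved]
print(unaccounted)  # ['crypto-miner-agent']
```

Real deployments would extend this with provenance (who deployed it, from what artifact) and behavioral baselines, but even this simplest form answers a question many organizations currently cannot.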
The third is the value of human talent in a security operation increasingly mediated by agentic systems. The security analyst as practitioner gives way to the security analyst as orchestrator. The talent model organizations are hiring against today is outdated. Agents can help shield people from error, but human oversight, including the ability to override agent decisions, is a governance requirement today, not an afterthought. The SOC of the future requires a very different kind of defender.
The moment to lead with clear strategic direction, clear priorities, and strong ethical accountability is now.
If AI is integrated across the attack lifecycle, then intelligence and defense must be integrated across the lifecycle too. Microsoft Threat Intelligence will continue to track, report, and act on what we see in real time. The patterns are visible. The intelligence is there.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert content on security issues. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
1Microsoft Digital Defense Report 2025.