Cybercrime gets an upgrade
Artificial Intelligence is reshaping the business of cybercrime.
Gil Baram, Leah Walker
10.22.2025
The integration of artificial intelligence (AI) into cybercrime is altering the threat landscape in both subtle and significant ways. While much of the public discourse on AI and cyber issues remains focused on speculative scenarios involving autonomous cyberweapons or large-scale geopolitical attacks, a more immediate transformation is underway: AI is making common forms of cybercrime faster, cheaper, and easier to execute.
Rather than introducing wholly new categories of attack, AI is improving the speed, scale, and affordability of familiar techniques (e.g., phishing, identity fraud, and ransomware) while also complicating attribution and response. Tasks that once required technical skill and manual effort, like crafting convincing phishing emails, impersonating individuals, or developing malware, can now be carried out using off-the-shelf generative tools and pre-trained models. This shift is not only making existing tactics more effective but also lowering the barrier to entry. A growing number of actors, including those with limited resources or expertise, can now engage in cybercrime at scale.
Two factors explain the change. First, generative models and other toolkits reduce the specialized labor once required to prepare convincing deception or to automate exploitation. Second, automation shortens timelines and boosts volume, expanding the speed and scope of reconnaissance, content creation, and vulnerability detection. The result is not uniformly sophisticated attacks but rather a larger number of attacks, across a wider attack surface, with a greater likelihood of success.
Alongside these technical shifts, the cybercrime market itself is evolving. Echoing the rise of ransomware-as-a-service, a small number of developers now create AI-powered tools that are leased or sold to less technical users. This commodification of AI-enabled cybercrime further democratizes access, allowing less technically savvy criminals to launch complex attacks with minimal effort and investment.
A case in point occurred in early 2024, when attackers used AI-generated video and voice deepfakes to impersonate executives at Arup, a multinational engineering firm. An employee, believing they were on a legitimate video call, was persuaded to transfer $25 million. The attack involved little technical intrusion. Instead, the attackers exploited trust in visual communication, an approach that is becoming more common as generative tools improve in fidelity and ease of use.
These trends were explored in a recent tabletop exercise at UC Berkeley, which brought together cybersecurity practitioners, researchers, and law enforcement. The exercise highlighted that basic controls, such as sender authentication and access restrictions, remain essential. Pre-approved communication protocols and cross-team coordination also helped reduce confusion and mitigate risk during simulated incidents, especially in scenarios involving AI-generated deception.
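To make "sender authentication" concrete, the sketch below (a minimal illustration, not drawn from the exercise itself) checks whether a purported sender's domain publishes SPF and DMARC records; domains without such policies are easier to spoof in AI-generated phishing campaigns. It assumes the dnspython package is installed, and the domain and function names are illustrative only.

```python
# Minimal sketch: check whether a sender's domain publishes SPF and DMARC
# records, one form of the "sender authentication" controls noted above.
# Assumes dnspython (pip install dnspython); domain names are illustrative.
import dns.resolver


def get_txt_records(name: str) -> list[str]:
    """Return all TXT record strings for a DNS name, or an empty list."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return []
    return [b"".join(r.strings).decode("utf-8", "replace") for r in answers]


def check_sender_authentication(domain: str) -> dict:
    """Report whether a domain publishes SPF and DMARC policies."""
    spf = [r for r in get_txt_records(domain) if r.lower().startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.lower().startswith("v=dmarc1")]
    return {
        "domain": domain,
        "spf_published": bool(spf),
        "dmarc_published": bool(dmarc),
        "dmarc_policy": dmarc[0] if dmarc else None,
    }


if __name__ == "__main__":
    # Example: flag mail claiming to come from a domain with no DMARC policy
    # for manual review before acting on payment or credential requests.
    print(check_sender_authentication("example.com"))
```

A check like this is only a first filter; it verifies that a domain can be authenticated, not that a particular message or video call is genuine, which is why the pre-approved communication protocols discussed in the exercise still matter.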
As AI capabilities continue to advance, the nature of cybercrime is likely to evolve further, not necessarily toward dramatic new threat categories, but through the steady refinement and scaling of existing techniques. However, despite growing awareness, significant gaps remain in understanding the practical adoption and impact of AI-enabled cybercrime tools. Addressing these uncertainties will require ongoing interdisciplinary research, combining technical analysis with insights from the social sciences and policy studies. Regular tabletop exercises, alongside sustained dialogue with private and public sector stakeholders, will be essential for tracking emerging tactics and for assessing and improving the tools and measures available to defenders.