AI May Not Be Revolutionizing Cybercrime as Predicted, Study Finds

For three years, cybersecurity experts and AI researchers have warned that generative AI would spawn a new wave of sophisticated hackers. An academic study, however, finds that these anticipated threats have largely not materialized; instead, tools like ChatGPT are primarily being used for creating spam and generating adult content.

Published on arXiv by academics from the University of Cambridge and other institutions, the paper, titled ‘Stand-Alone Complex or Vibercrime?’, examines actual AI adoption in cybercrime circles rather than vendor predictions. According to the researchers, the study is one of the first to empirically investigate early AI usage patterns within these underground networks.

The investigation analyzed 97,895 forum posts spanning November 2022 to the present, sourced from the Cambridge Cybercrime Centre’s CrimeBB dataset. The team applied topic models, manually reviewed more than 3,200 threads, and supplemented this with ethnographic engagement in the forums.

Their findings challenge prevailing doomsday scenarios: 97.3% of the posts were categorized as ‘other,’ indicating no genuine criminal use of AI, and only 1.9% pertained to vibe coding tools. Despite alarming 2023 headlines about malicious chatbots like WormGPT and FraudGPT, the forum data showed these products mostly eliciting requests for free access, speculative discussion, and complaints about their efficacy.

One developer of a popular Dark AI service even acknowledged that the product was mainly a marketing ploy, admitting it was essentially an unrestricted version of ChatGPT. By late 2024, jailbreaks for mainstream models had become disposable commodities, with most losing effectiveness within days; open-source alternatives remained available but were resource-intensive and outdated.

The study’s authors highlight that the guardrails on mainstream AI systems are unexpectedly effective, a finding they acknowledge is counterintuitive coming from a critical paper. It also stands in contrast to Anthropic’s 2025 report on Claude Code allegedly being used in hacking campaigns against multiple organizations; the Cambridge data showed no sign of that trend.

Hackers are mostly using AI coding tools the same way mainstream developers do: as code autocompletion and a Stack Overflow substitute for skilled programmers, while low-skill actors continue to rely on pre-built scripts. Forum discussions revealed skepticism about AI-generated tools, citing risks like insecure code and supply chain vulnerabilities. One hacker warned that reliance on AI could lead to rapid skill degradation.

Contrary to Europol’s 2025 warnings of fully autonomous AI running criminal networks, the study found disruption concentrated at the low end of the market. SEO spammers use LLMs for blog spam amid falling ad revenues, romance scammers employ voice cloning and image generation, and get-rich-quick schemes peddle AI-generated eBooks.

One troubling market the researchers identified involved nude-image services, with one operator offering AI-based adult content production at low prices. The model resembles the low-margin, high-volume spam industry of the past, but with more advanced tools.

The researchers suggest that AI’s biggest impact on cybercrime may not be enhancing criminal capabilities but pushing skilled developers out of legitimate jobs and into underground activity through market disruption and layoffs. The paper notes growing anxiety on the forums over the labor-market impacts of these technologies, which could drive skilled developers toward fraudulent schemes and cybercrime.
