AI is already making online crimes easier. It could get much worse.
AI Summary
According to an MIT Technology Review report, cybercriminals are increasingly using AI tools to scale up scams, phishing campaigns, and malware development, though experts say fully autonomous AI-driven attacks remain limited in practice. Research cited in the article indicates that at least half of spam email is now LLM-generated, and that AI-assisted targeted email fraud has doubled year over year to 14% as of April 2025, per researchers at Columbia University, the University of Chicago, and Barracuda Networks. Reports from Google and Anthropic document state-linked actors using AI tools such as Gemini and Claude to assist in cyberattacks, though both companies acknowledged significant limits on the models' autonomy and effectiveness during those operations.
Why it matters
The documented rise in AI-assisted fraud and cybercrime, including a reported $25 million deepfake scam at engineering firm Arup, has direct financial implications for corporations and financial institutions and could raise compliance and cybersecurity costs across industries. For AI companies, the dual-use nature of large language models is drawing heightened regulatory and reputational scrutiny, with implications for how both open-source and closed-source AI products are governed and deployed commercially.
Scoring rationale
The article covers AI-enabled cybercrime and deepfakes with only tangential market relevance: it mentions Google Gemini, Anthropic Claude, and Microsoft Security, but focuses primarily on cybersecurity threats rather than market-moving AI developments.
Impacted tickers
This summary was generated by AI from the original article published by MIT Technology Review AI. AIMarketWire does not provide trading advice. Always refer to the original source for complete reporting.