Alarming Study Finds One in Three AI-Generated Links Are Fake
A new cybersecurity report by UK-based firm Netcraft has revealed a growing concern: ChatGPT and other AI chatbots frequently generate fake or incorrect brand URLs, inadvertently exposing users to phishing attacks and malware risks.
According to the study, 34% of the URLs generated by large language models (LLMs) such as GPT-4.1 were inaccurate. More troubling still, many of these URLs aren't just broken links or typos; some lead to malicious domains deliberately set up by attackers.
Not Just Inaccurate, but Potentially Dangerous
Here’s a breakdown of the findings:
- 29% of the URLs pointed to unregistered, parked, or inactive domains
- 5% led users to legitimate but unrelated websites
- Only two-thirds of generated links were accurate and brand-specific
This means users can unknowingly end up on fraudulent websites even when asking simple, seemingly safe questions like “What’s the login page for [brand]?”
Small Brands Are More Vulnerable
According to the researchers, lesser-known brands are especially susceptible to these hallucinated links because the models have little detailed training data about them. No company is truly safe, however. Even major institutions like Wells Fargo have reportedly been misrepresented: in one reported case, Perplexity AI returned a fake banking login page when asked for the brand's website.
Hackers have started exploiting these AI "hallucinations" by preemptively registering the unclaimed domains that chatbots invent. Once registered, those domains are repurposed for phishing schemes, data theft, or malware distribution.
The Silent Spread Through Developer Tools
The risk doesn't stop at end users. Netcraft also discovered developers copying AI-generated links directly into source code without validating them: at least five public GitHub repositories were found to contain harmful URLs, likely inserted by AI coding assistants.
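For developers, the practical takeaway is a habit: never commit an AI-suggested URL unchecked. Below is a minimal sketch, not from the Netcraft report, of the kind of sanity check that would catch an unregistered domain or an unknown host before it lands in source code; the allowlist contents and the helper name are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the Netcraft report):
# sanity-check an AI-suggested URL before committing it to source code.
import socket
from urllib.parse import urlparse

# Hypothetical project-maintained allowlist of hosts the team trusts.
TRUSTED_HOSTS = {"github.com", "docs.python.org", "pypi.org"}

def looks_safe(url: str) -> bool:
    """Reject URLs whose host is not allowlisted or does not resolve in DNS."""
    host = urlparse(url).hostname
    if host is None or host not in TRUSTED_HOSTS:
        return False  # unknown host: do not trust it blindly
    try:
        # Unregistered or parked hallucinated domains often fail to resolve.
        socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    return True

print(looks_safe("https://github.com/example/repo"))  # True
print(looks_safe("https://login-github-help.com/x"))  # False: not allowlisted
```

A check like this is deliberately conservative: it flags anything outside a short, human-reviewed list rather than trying to guess which new domains are malicious.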
Adding to the threat, cybercriminals are now crafting malicious pages specifically so that AI models will surface them. In one campaign, more than 17,000 phishing pages hosted on GitBook targeted crypto users, disguised as technical support articles, documentation, or login portals.
How to Stay Safe in the Age of AI
Netcraft urges everyone, from casual users to developers, not to trust any URL automatically generated by an AI model. Instead, the firm recommends:
- Typing website addresses manually
- Cross-checking URLs against the brand's official website (developers can automate a basic version of this check, as sketched after this list)
- Avoiding unknown links provided in AI responses
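For the cross-checking step, here is an illustrative sketch assuming a hand-maintained table mapping brands to their known official domains; the table contents and function name are hypothetical. It accepts a link only if the host is the official domain itself or one of its subdomains.

```python
# Illustrative sketch (hypothetical table and names): accept a
# chatbot-supplied link only if it belongs to the brand's official domain.
from urllib.parse import urlparse

# Hand-maintained mapping of brands to their known official domains.
OFFICIAL_DOMAINS = {"wells fargo": "wellsfargo.com"}

def belongs_to_brand(url: str, brand: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    official = OFFICIAL_DOMAINS.get(brand.lower())
    # Accept the exact official domain or any subdomain of it, nothing else.
    return official is not None and (
        host == official or host.endswith("." + official)
    )

print(belongs_to_brand("https://connect.wellsfargo.com/login", "Wells Fargo"))   # True
print(belongs_to_brand("https://wellsfargo-login.example.com", "Wells Fargo"))   # False
```

Note that the suffix check requires a leading dot, so lookalike hosts such as "evilwellsfargo.com" are rejected rather than matched by accident.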
This is a wake-up call for the industry. While AI continues to reshape search and productivity, it also creates new cybersecurity vulnerabilities that can’t be ignored.
Conclusion: AI’s Convenience Comes with Real Risk
The rise of AI chatbots has revolutionized how we access information, but at a cost. As the Netcraft study makes clear, hallucinated links aren't just inaccurate; they're dangerous. Whether you're a user or a developer, due diligence is no longer optional. Treat AI-generated links with the same skepticism as any unsolicited email or SMS. The digital world just got a bit trickier, and awareness is the first line of defense.