Attackers poison the public web so AI assistants surface fake customer support and reservations numbers as the official numbers
Researchers at Aurascape’s Aura Labs have uncovered what they believe is the first real-world campaign where attackers systematically manipulate public web content so that large language model (LLM) powered systems recommend scam airline customer support and reservations phone numbers as if they were official.
Instead of tricking the AI model with prompt injections or jailbreaks, the attackers are attacking the web itself. By seeding poisoned content across compromised government and university sites, popular WordPress blogs, YouTube descriptions, and Yelp reviews, they are steering AI search answers toward fraudulent call centers that attempt to extract money and sensitive data from unsuspecting travelers.
AI assistants pointing to scammers, not airlines
In one case study, Aurascape researchers showed that when a user asked an LLM-powered assistant for the “official Emirates Airlines reservations number,” the system confidently returned a scam call-center number and labeled it as the official hotline. A separate query about booking a British Airways flight produced the same fraudulent U.S. number, described as a “commonly used” reservations line for customers.
Google’s AI Overview feature was also observed returning multiple fraudulent phone numbers as if they were legitimate Emirates customer support and reservations lines, complete with step-by-step booking instructions. None of the numbers belong to the airlines being impersonated.
“This is not a jailbreak and it is not the model hallucinating a random phone number,” said Qi Deng, lead security researcher at Aurascape Aura Labs. “Attackers are quietly rewriting the web that AI systems read. When you ask an assistant how to call your airline, it does exactly what it was designed to do, but with a customer support and reservations number that leads straight to a scammer instead of the real company.”
Gaming how AI finds “the answer”
The Aurascape team calls this pattern LLM phone-number poisoning. Instead of competing for blue links, attackers are practicing what the researchers describe as Generative Engine Optimization and Answer Engine Optimization. The goal is to become the single source that an AI assistant chooses, summarizes, and presents as “the answer.”
According to Aurascape, the campaign:
- Uploads search-optimized PDFs and HTML snippets to compromised, high-authority sites, including government and university domains
- Abuses user-generated platforms like YouTube and Yelp by stuffing descriptions and reviews with airline names, “customer care” language, and the same fraudulent customer support and reservations numbers
- Structures content in simple Q&A formats that are easy for LLMs to parse and quote directly
A growing risk to AI search and consumer trust
Aurascape’s research also shows that the problem goes beyond obviously wrong answers. In multiple tests, AI systems returned correct airline contact details but still pulled context from spam-injected or bot-filled pages that contained scam numbers.
“Even when the AI gets the customer support number right today, its retrieval layer is already contaminated,” Qi said. “The attack is in the pipeline. It is only a matter of time before more of those poisoned sources leak into the final answer.”
The findings highlight a new attack surface created by AI search. Any system that crawls and summarizes the public web for users is now a target for manipulation. Because people increasingly trust AI assistants to “just give them the answer,” a single poisoned number can have outsized impact.
“A decade ago, scammers bought search ads or cloned login pages. Today, they are targeting the systems that write the answers for us,” said Moinul Khan, CEO of Aurascape. “If we want people to keep trusting AI assistants, we need to treat AI search and indexing as critical security infrastructure, not just a product feature.”
What travelers and companies can do now
Until deeper defenses are in place, Aurascape recommends a few immediate steps:
-
Double-check numbers
Always confirm customer support and reservations numbers on an airline or company’s official website or app. -
Be skeptical of pressure tactics
Treat unexpected upsells, refund offers, or “act now” pressure on a call with extra caution, even if you reached the number through an AI assistant. -
Monitor AI use in the enterprise
Organizations should monitor which AI tools employees use, what links and numbers they are shown, and how that affects high-risk workflows such as finance, travel, and IT support.
Research availability
Aurascape has published a detailed technical report, “When AI Recommends Scammers: New Attack Abuses LLM Indexing to Deliver Fake Support Numbers,” which includes case studies, indicators of compromise, and examples of compromised and abused hosts that defenders can use to hunt for related activity. The report is available on Aurascape’s website: https://aurascape.ai/llm-search-poisoning-fake-support-numbers/
About Aurascape
Aurascape is an AI-native security company that helps organizations safely adopt and govern AI across public, embedded, and enterprise applications. Aurascape combines AI-aware discovery, risk assessment, and real-time policy enforcement to give security teams visibility and control over how AI is used, what data it can access, and how conversations are handled. Aurascape Aura Labs, the company’s research arm, focuses on uncovering emerging AI attack patterns and helping defenders understand how adversaries are adapting to the AI-driven web.
Learn more at https://aurascape.ai/
View source version on businesswire.com: https://www.businesswire.com/news/home/20251208251854/en/
Contacts
Media Contact
press@aurascape.ai
