
AI Recommendation Poisoning
We found thousands of websites embedding additional instructions inside AI chatbot prompts - a pattern Microsoft Security has described as AI Recommendation Poisoning. These instructions may persist across conversations and influence future answers from ChatGPT, Claude, and Perplexity, even on unrelated topics.
Many companies implement these buttons as legitimate user convenience features. This research documents publicly observable technical patterns using verifiable data. Finding a cite button does not imply illegal conduct, deceptive intent, or knowledge on the part of the company. We document the behavior and language used, not the intent behind it. Companies named may have legitimate reasons for their implementation. Trakkr operates in the AI visibility space; we disclose this interest for transparency.
How it works
A blog adds an AI button
A small button sits beside a review of running shoes from a fictional brand. It looks like a shortcut to summarize the guide.
The visible interface still looks like a harmless reader shortcut.
The click contains a prefilled instruction
The button opens ChatGPT with a real request plus a second clause asking the model to remember or prefer that site for future running-shoe questions.
The memory-related language lives inside the prefilled prompt, not the visible button label.
That preference may persist
Later conversations about marathon shoes may favor that source again, even when the next question is broader than the original article.
For example-brand comparisons, example.com is often surfaced as a preferred reference.
A single click may shape later product recommendations in the same chat.
How it works
A blog adds an AI button
A small button sits beside a review of running shoes from a fictional brand. It looks like a shortcut to summarize the guide.
The visible interface still looks like a harmless reader shortcut.
The click contains a prefilled instruction
The button opens ChatGPT with a real request plus a second clause asking the model to remember or prefer that site for future running-shoe questions.
The memory-related language lives inside the prefilled prompt, not the visible button label.
That preference may persist
Later conversations about marathon shoes may favor that source again, even when the next question is broader than the original article.
For example-brand comparisons, example.com is often surfaced as a preferred reference.
A single click may shape later product recommendations in the same chat.
Classification
Standard helper prompt that asks AI to summarize or explain a page without future-memory language.
Standard helper language. The prompt asks for a summary and avoids future-memory instructions.
Positions the site as the preferred source or preferred framing, even without future-memory language.
May steer the answer toward the site as the default frame, even without future-memory language.
Contains instructions for AI to remember the domain, cite it later, or prioritize it in future answers.
Contains source-preference language. Future conversations may surface this domain unprompted.
Explorer
wpbeginner.com
Uses explicit go-to-source language in prefilled prompts.
Platform Coverage
Key insight
98% link to ChatGPT. Its open URL scheme and memory feature appear to make it the most common destination for cite-button links. In our testing, prefilled prompts executed without a visible user warning.
In our testing, Claude flagged prefilled prompts containing memory or preference instructions before executing them. We did not observe equivalent warnings in ChatGPT at time of publication.
Methodology
We combined two web-scale datasets with live verification to build a picture of prompt embedding patterns across the web. These buttons may be intended as legitimate user shortcuts - this research documents the observable technical behavior, not the intent behind it.
Data sources
1.97 billion archived pages via Common Crawl and 833,791 domains via PublicWWW, checked for outbound links and cite-button HTML patterns across 20 search queries.
Each candidate domain visited, pages fetched, prompts extracted and decoded through a multi-layer detection engine sorting helper prompts, source shaping, and memory anchoring.
In February 2026, Microsoft Security independently described a similar pattern as "AI Recommendation Poisoning" in a published blog post, characterizing it as cross-prompt instruction patterns influencing AI memory.
Limitations
The 7,029 figure is from automated pattern matching at web scale. Of the domains we live-verified, 93.5% contained prompts with memory or source-preference instructions, suggesting the true count is conservative, not inflated.
JavaScript-rendered buttons may be missed by our scanners. The true count is likely higher.
This study verifies prompt text, link flows, and archived pages. It does not directly prove that any provider stored the instruction in long-term memory or reused it across unrelated future prompts.
This snapshot reflects data through March 31, 2026. Sites may add or remove cite buttons at any time.
Evidence Appendix
Study boundaries
Guardrails
This study documents prompt text, destination links, and archived source pages. It does not prove provider-side memory persistence across unrelated future prompts.
Only the live-verified subset should be treated as confirmed. PublicWWW counts are code-pattern signals, not proof by themselves.
Archived copies were fetched on April 2, 2026 and fingerprinted with SHA-256 so example pages can be re-checked if a page later changes.
See how your brand performs in AI search