
How to Get Recommended: What the Research Actually Says
Princeton researchers found content optimization can boost LLM visibility by 30-40%. Social proof helps. Scarcity signals hurt. Here are the tactics with evidence behind them.
The first two parts of this series covered what biases exist and how they differ across models. The natural next question: can you actually do anything about it?
The short answer is yes. The longer answer is that the tactics that work aren't the ones most people assume from SEO experience, and some widely-shared advice is directly contradicted by the evidence.
The GEO findings
The most rigorous study on AI content optimization comes from Princeton's GEO (Generative Engine Optimization) research. The team tested specific content modifications across thousands of queries to measure which tactics actually increase how often LLMs cite and recommend specific sources.
Content optimization increases LLM visibility by 30-40%. Adding statistics improves visibility by 22%. Adding quotations from authoritative sources improves it by 37%. Lower-ranked sites benefit more from optimization.
Princeton / KDD 2024

The standout tactics:
- Statistics and data points - Adding specific numbers, percentages, and quantitative claims increased visibility by 22%. LLMs treat quantified claims as more authoritative.
- Quotations and citations - Including quotes from recognized authorities or citing specific sources produced the largest effect: +37%. This maps to how LLMs evaluate trustworthiness.
- Technical fluency - Content that demonstrates domain expertise through precise terminology performed better than generalist content covering the same topics.
Equally important is what didn't work. Keyword stuffing - the default SEO instinct - showed no meaningful impact. Neither did simply making content longer. The LLM doesn't care about word count; it cares about information density and source authority.
One finding deserves special attention: lower-ranked sites benefit more from optimization than dominant ones. This is the opposite of the SEO dynamic where top-ranking sites have structural advantages. In AI recommendations, the playing field is more level - or at least more responsive to content quality.
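As an illustration only, the two highest-impact GEO signals (quantified claims and quotations) can be checked with a rough draft audit. The regexes, phrase patterns, and the idea of auditing this way are my own sketch, not a method from the Princeton paper:

```python
import re

def geo_signal_audit(text: str) -> dict:
    """Rough, illustrative audit for the two GEO tactics with the largest
    measured effects: statistics (+22%) and quotations (+37%).
    Heuristics only; not taken from the study itself."""
    # Quantified claims: numbers, optionally followed by a unit-like token.
    stats = re.findall(r"\b\d[\d,.]*\s*(?:%|percent|ms|users?|teams?)?", text)
    # Quotations: double-quoted spans of non-trivial length.
    quotes = re.findall(r'"[^"]{20,}"', text)
    return {
        "statistics": len(stats),
        "quotations": len(quotes),
        "has_both_signals": bool(stats) and bool(quotes),
    }

draft = ('Our pipeline cut build times by 40%. '
         '"The fastest CI we have benchmarked," said one reviewer, '
         'across 2,000 test runs.')
print(geo_signal_audit(draft))
```

A draft that scores zero on both counts is, per the GEO findings, leaving the two largest visibility levers untouched.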
The social proof amplifier
Beyond content structure, the way you frame your product matters - and not in the ways you might think.
Social proof consistently boosts recommendation rates. Scarcity ('limited availability') and exclusivity ('invite-only') signals surprisingly reduce LLM visibility, likely because models interpret restrictive access negatively.
NTUA Athens / EMNLP 2025

Social proof - user counts, testimonials, community size, case studies - consistently increased how often LLMs recommended a product. This makes sense: LLMs learned from web content where social proof signals correlate with quality and popularity. The model has internalized "many users = good recommendation."
The counterintuitive part: scarcity and exclusivity signals reduced visibility. "Limited availability," "invite-only," "exclusive access" - these marketing tactics that create urgency with human buyers appear to backfire with LLMs. The models seem to interpret restricted access as a reason not to recommend something broadly.
The implication for product descriptions
If you're writing product descriptions that will be read by both humans and LLMs, lead with social proof (user counts, adoption metrics, integration partners) rather than scarcity signals. The "10,000+ teams use us" framing will serve you better than "limited beta access" in AI recommendation contexts.
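To make that advice checkable, a quick framing pass over a description can flag scarcity language and confirm social-proof language is present. The phrase lists below are my own illustrative guesses, not the stimuli used in the EMNLP study:

```python
# Illustrative framing check. Phrase lists are hypothetical examples,
# not taken from the NTUA Athens study.
SOCIAL_PROOF = ["users", "teams use", "customers", "case stud", "community"]
SCARCITY = ["limited availability", "invite-only", "exclusive access",
            "limited beta", "waitlist"]

def framing_check(description: str) -> dict:
    """Flag scarcity phrases (which reduced LLM visibility) and confirm
    social-proof phrases (which increased it) are present."""
    text = description.lower()
    return {
        "social_proof_hits": [p for p in SOCIAL_PROOF if p in text],
        "scarcity_hits": [p for p in SCARCITY if p in text],
    }

report = framing_check(
    "10,000+ teams use our CRM. Currently invite-only during our limited beta."
)
print(report)
```

Here the check would surface two scarcity phrases ("invite-only", "limited beta") as candidates for rewriting, while confirming the social-proof framing is already in place.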
Do LLMs prefer AI-generated content?
There's an uncomfortable finding in the literature. A 2024 study tested whether LLMs show preference for AI-generated product descriptions versus human-written ones. GPT-4 showed a 74.68% preference for AI-generated content.
LLMs consistently prefer LLM-generated product descriptions over human-written ones. GPT-4 showed a 74.68% bias toward AI content. The preference pattern is model-specific.
FZI Karlsruhe / PNAS 2025

Before you conclude that all product pages should be AI-written, the nuance matters. The preference appears to be for the structural patterns common in AI-generated content - clear organization, comprehensive coverage, balanced comparison language - rather than for AI authorship per se. Content that's well-structured and informationally dense performs better regardless of who wrote it.
The practical read: write for clarity and completeness. Use clear headers, cover trade-offs honestly, include specific data points. Whether a human or an AI does this doesn't seem to matter as much as whether it gets done at all.
What doesn't transfer from SEO
The biggest mistake I see brands make is treating AI visibility optimization like search engine optimization with different keywords. The mechanics are fundamentally different.
In SEO, you optimize individual pages for specific queries. In AI visibility, there is no "page 1" to rank on. The model synthesizes information from across its training data and generates a novel response. Your content's influence is diffuse - it shapes the model's general understanding of your brand rather than winning a specific position.
This means tactics like "optimize this landing page for the query 'best CRM'" don't translate. Instead, the research suggests focusing on:
- Breadth over depth - Having your brand mentioned across many contexts matters more than dominating one keyword.
- Authority signals over keyword density - The GEO research shows that what signals credibility to LLMs is data, citations, and expert positioning - not keyword repetition.
- Honest comparison content - Content that acknowledges competitors and positions your product fairly tends to perform better than pure marketing copy. LLMs are trained on balanced content; they tend to favor it.
The research is still early, but the direction is clear: AI visibility rewards substance over optimization tricks. The brands that invest in being genuinely informative, quantitatively rigorous, and broadly present across the web will outperform the ones trying to game individual prompts. That's a harder strategy to execute, but it's also harder to displace once it's working.
