Core Concepts
How AI visibility works - the mechanics behind your scores.
- AI search gives answers, not links - being mentioned is the new ranking
- Three things determine if you appear: training data, relevance, and sentiment
- Each AI model has different blind spots - that's normal
- Citations show where AI learns about you - they're your improvement roadmap
If you've run your first research and you're staring at your dashboard wondering why your numbers look the way they do, this page is for you.
AI search is a different game
Here's what trips people up: AI search isn't an evolution of Google. It's fundamentally different.
Google shows you a list of links. You scan, click, evaluate. AI gives you a direct answer. One synthesized response. No clicking required.
With traditional search, you're competing for position. With AI, you're competing for existence.
Right now, millions of people are asking ChatGPT for product recommendations. They're using Claude to compare solutions. They're querying Perplexity for advice. If your brand doesn't show up in those responses, you're losing customers you'll never know about.
What determines whether you appear
Three things decide if AI mentions your brand when someone asks a relevant question.
1. Training data presence
AI models learn from data. If your brand appeared in quality sources that were part of the training data, the model knows you exist.
What feeds AI knowledge:
- Wikipedia pages
- Industry publications
- Major news coverage
- Authoritative blogs
- Well-linked content
What doesn't help directly:
- Paid ads (AI ignores these)
- Most social media (often excluded from training)
- Gated content behind logins (crawlers can't access it)
The implication? Earned media matters more than ever. Being mentioned on authoritative sites isn't just good for SEO anymore. It's how you become part of AI's knowledge base.
2. Contextual relevance
Knowing your brand exists isn't enough. When someone asks "What's the best note-taking app?", AI has to make a connection: your brand → notes → apps → productivity → this question.
If your content clearly establishes what you do and what category you belong to, AI makes the connection easily. If your positioning is vague or scattered, it doesn't.
Clarity wins. "We're an all-in-one workspace" is harder for AI to place than "We're a note-taking app for teams."
3. Sentiment and positioning
When AI mentions you, how does it frame you? There's a big difference between:
- "Notion is widely regarded as the leading workspace tool..."
- "Notion is one of several options in this space..."
- "While Notion is popular, many users prefer..."
Your citation sources shape this framing. If AI primarily learns about you from competitor comparison articles where you're positioned as the alternative, that follows you into responses. If it learns from glowing reviews and thought leadership, that comes through too.
Understanding prompts
Prompts are the questions you track - what real people ask AI when they're researching solutions like yours.
Think of prompts as the evolution of keywords. Instead of targeting "project management software," you track "What project management tool should a remote startup use?"
Different types reveal different things:
Discovery prompts - Are you in the consideration set at all? ("What are the best tools for managing remote teams?")
Comparison prompts - How do you stack up head-to-head? ("How does Slack compare to Microsoft Teams for small businesses?")
Reputation prompts - What does AI think of your quality and fit? ("Is Asana worth it for project management?")
Educational prompts - Are you seen as an authority? ("How should a startup handle team communication?")
A healthy prompt mix covers all four.
Understanding citations
Citations are the sources that influence AI responses. When Perplexity says "According to G2..." or when ChatGPT's knowledge clearly draws from a particular article, that's a citation.
This is where Trakkr gets actionable:
- Where AI learns about you. Which sites mention your brand? What do they say?
- Where the gaps are. Which authoritative sites mention your competitors but not you? Those are opportunities.
- What's driving your position. If AI frames you as the underdog, look at your citations. There's probably an "X vs Y" article somewhere positioning you that way.
Citations are your roadmap. If you want better AI visibility, start by understanding and influencing your citation profile.
The improvement cycle
AI visibility isn't a one-time audit. It's ongoing.
1. Measure your visibility across prompts and models. This is your baseline.
2. Understand what's driving those scores. Look at citations. See where you're strong, where you're weak.
3. Act on what you learn. Get mentioned on authoritative sources. Address citation gaps. Create content that establishes your positioning clearly.
4. Re-measure to see impact. Some changes show up quickly (especially on Perplexity, which uses real-time search). Others take longer as models retrain.
Then repeat. The brands winning at AI visibility treat this as continuous.
Metrics reference
This section explains every metric in Trakkr. Bookmark it - you'll come back here.
Visibility Score
What it measures: How prominently AI models mention your brand.
How it's calculated: When AI lists brands, higher positions earn more points. First position = 10 points, second = 9, down to tenth = 1 point. Your score is total points earned divided by maximum possible.
| Position | Points |
|---|---|
| 1st | 10 |
| 2nd | 9 |
| 3rd | 8 |
| ... | ... |
| 10th | 1 |
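The point system above can be sketched in a few lines of Python. This is an illustration, not Trakkr's exact implementation - in particular, the 0-100 normalization is an assumption inferred from the 40/60/80 benchmark thresholds.

```python
def visibility_score(positions, prompt_count, max_position=10):
    """Sketch of the position-points scoring described above.

    positions: list positions (1 = first) where the brand appeared,
    one entry per mention. Prompts with no mention earn 0 points.
    prompt_count: total prompts tracked.
    """
    # 1st place = 10 points, 2nd = 9, ... 10th = 1, beyond 10th = 0.
    points = sum(max(0, max_position + 1 - p) for p in positions)
    # Maximum possible: first place on every tracked prompt.
    max_points = prompt_count * max_position
    return round(100 * points / max_points, 1)

# Mentioned 1st, 3rd, and 10th across 5 tracked prompts:
# (10 + 8 + 1) / 50 * 100 = 38.0
print(visibility_score([1, 3, 10], prompt_count=5))  # 38.0
```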
What's good: 40+ is solid. 60+ is excellent. 80+ means you dominate your tracked prompts.
Presence Rate
What it measures: The percentage of prompts where your brand appears at all.
How it's calculated: Prompts where you're mentioned ÷ total prompts × 100
What's good: 50%+ means solid coverage. 75%+ is excellent. 100% means you appear on every prompt you track.
Average Position
What it measures: When AI lists brands, where do you typically appear?
How it's calculated: Sum of all your positions ÷ number of mentions
What's good: 1-2 is excellent. 3-4 is good. 5+ means you're appearing but as an afterthought.
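Both ratios are simple enough to sketch directly from the formulas above:

```python
def presence_rate(mentioned_prompts, total_prompts):
    """Percentage of tracked prompts where the brand appears at all."""
    return 100 * mentioned_prompts / total_prompts

def average_position(positions):
    """Mean list position across all mentions (lower is better)."""
    return sum(positions) / len(positions)

# Mentioned on 6 of 8 prompts, at positions 1, 2, 2, 4, 1, 2:
print(presence_rate(6, 8))                    # 75.0 -> solid coverage
print(average_position([1, 2, 2, 4, 1, 2]))   # 2.0  -> excellent
```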
Mentions
What it measures: Raw count of times your brand appeared across all prompts and models.
Why it matters: Raw volume shows scale. Compare actual mentions to the maximum possible (every prompt on every model) to see your overall footprint.
Citations
What it measures: Unique URLs that AI models reference when discussing your brand.
Why it matters: More citations from authoritative sources = stronger AI presence.
Demand Score
What it measures: How many people are likely asking about this topic in AI chat platforms.
How it's calculated: We combine two main signals:
1. Search volume - Real clickstream data showing how often people search for related topics
2. LLM affinity - How naturally the query fits AI conversation patterns (creative requests and complex comparisons score higher than simple facts)
We also apply modifiers for query specificity - longer, more specific queries typically indicate niche topics with lower overall demand.
| Score | Meaning |
|---|---|
| 70+ | High demand - valuable real estate |
| 40-69 | Medium demand - solid opportunity |
| <40 | Lower demand - niche or specialized |
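The blend of signals can be sketched as a weighted sum. Note the weights and the specificity penalty below are illustrative assumptions - Trakkr's actual model is not published here.

```python
def demand_score(search_volume, llm_affinity, specificity_penalty=0.0,
                 volume_weight=0.6, affinity_weight=0.4):
    """Illustrative blend only - weights and penalty are assumptions.

    search_volume: normalized search demand, 0-100
    llm_affinity: how conversational/AI-suited the query is, 0-100
    specificity_penalty: deduction for long, highly specific queries
    """
    base = volume_weight * search_volume + affinity_weight * llm_affinity
    return round(max(0.0, base - specificity_penalty), 1)

# Broad comparison query: strong search demand, high AI affinity
print(demand_score(80, 90))                           # 84.0 -> high demand
# Same signals, but a long-tail query with a specificity deduction
print(demand_score(80, 90, specificity_penalty=20))   # 64.0 -> medium demand
```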
Why it matters: Prompts with higher demand scores represent more valuable real estate. If you're not appearing on high-demand prompts, you're missing more potential customers than you would on low-demand ones.
How to use it: Prioritize improving your visibility on high-demand prompts first - that's where the traffic is.
AI Volume
What it measures: The estimated number of times people ask AI platforms (ChatGPT, Gemini, Claude, Perplexity, Copilot) about a given topic each month.
Why it's an estimate: Unlike Google, AI platforms don't publish query volume data. We combine multiple data sources to produce the best estimate possible, then round conservatively - the real number is likely higher.
How it's calculated: We use a three-tier waterfall, choosing the highest-confidence data available:
| Confidence | Label | How it works |
|---|---|---|
| High | Measured | Direct panel data from AI search platforms. Most accurate. |
| Medium | Calibrated estimate | Derived from Google search volume using learned ratios per query type (e.g. comparison queries have higher AI crossover than navigational ones). |
| Low | Projected estimate | Classified by topic type when no search data is available. Shown as a range rather than a specific number. |
The confidence tier is shown in the tooltip when you hover over a volume number. Higher-confidence estimates display a specific number with a ~ prefix. Low-confidence estimates display a range.
Platform breakdown: Total volume is split across platforms based on current market share data. ChatGPT accounts for the largest share (~72%), followed by Gemini (~12%), Claude (~6%), Perplexity (~5%), Copilot (~3%), and others.
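The waterfall and the platform split can be sketched together. The tier selection mirrors the table above; the market-share figures are the approximate percentages quoted in this section and shift over time.

```python
# Approximate market-share split from the paragraph above.
PLATFORM_SHARE = {"ChatGPT": 0.72, "Gemini": 0.12, "Claude": 0.06,
                  "Perplexity": 0.05, "Copilot": 0.03, "Other": 0.02}

def estimate_ai_volume(measured=None, google_volume=None,
                       crossover_ratio=None, projected_range=None):
    """Pick the highest-confidence tier available (the waterfall)."""
    if measured is not None:
        return "high", measured                       # direct panel data
    if google_volume is not None and crossover_ratio is not None:
        # Calibrated: Google volume scaled by a learned per-type ratio.
        return "medium", round(google_volume * crossover_ratio)
    return "low", projected_range                     # shown as a range

def split_by_platform(total_volume):
    """Divide total monthly volume across platforms by market share."""
    return {p: round(total_volume * s) for p, s in PLATFORM_SHARE.items()}

tier, volume = estimate_ai_volume(google_volume=40_000, crossover_ratio=0.25)
print(tier, volume)                          # medium 10000
print(split_by_platform(volume)["ChatGPT"])  # 7200
```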
How to use it: Focus your optimization efforts on high-volume prompts first. A prompt with 10K monthly AI queries represents far more potential exposure than one with 100. Pair volume with your visibility score to find the biggest opportunities: high volume + low visibility = high-impact prompt to improve.
Competitive Metrics
Win Rate - How often you rank higher than a specific competitor when both appear. 55%+ means you're winning. 70%+ means you dominate them.
Share of Voice - Your visibility as a proportion of total visibility across all tracked competitors.
Head-to-Head - Direct comparison against one competitor. Shows wins, losses, ties.
Competitive Gap - The visibility difference between you and a competitor. Positive = you're ahead.
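The competitive metrics above are all simple ratios over head-to-head results, sketched here for clarity:

```python
def win_rate(matchups):
    """matchups: (your_pos, their_pos) pairs for prompts where both appear.
    A lower position number wins."""
    wins = sum(1 for you, them in matchups if you < them)
    return 100 * wins / len(matchups)

def share_of_voice(your_score, competitor_scores):
    """Your visibility as a share of total tracked visibility."""
    total = your_score + sum(competitor_scores)
    return 100 * your_score / total

def competitive_gap(your_score, their_score):
    """Positive = you're ahead."""
    return your_score - their_score

matchups = [(1, 3), (2, 1), (1, 4), (2, 5)]
print(win_rate(matchups))              # 75.0 -> you dominate them
print(share_of_voice(45, [30, 15]))    # 50.0
print(competitive_gap(45, 30))         # 15
```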
Perception Metrics
Overall Perception - How positively AI describes your brand. Combines scores across 20 attributes in 5 categories. 75+ is excellent, 60-74 is good, below 60 needs work.
| Category | What it measures |
|---|---|
| Trust & Reliability | Trustworthy, dependable, established |
| Quality & Performance | Product quality, craftsmanship |
| Value & Experience | Value proposition, customer experience |
| Market Position | Leadership, competitive standing |
| Innovation & Appeal | Modernity, desirability |
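As a rough sketch, the overall score can be read as an average over all 20 attribute scores. The attribute names and values below are hypothetical, and the real model may weight categories differently.

```python
# Hypothetical 0-100 attribute scores: 4 per category x 5 categories = 20.
scores = {
    "Trust & Reliability":   [82, 78, 85, 80],
    "Quality & Performance": [75, 70, 72, 74],
    "Value & Experience":    [68, 71, 65, 70],
    "Market Position":       [60, 62, 58, 64],
    "Innovation & Appeal":   [77, 73, 79, 75],
}

def overall_perception(scores):
    """Simple mean over all attributes; actual weighting may differ."""
    flat = [s for attrs in scores.values() for s in attrs]
    return round(sum(flat) / len(flat), 1)

print(overall_perception(scores))  # 71.9 -> "good" band (60-74)
```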
Trends
7-Day Change - Short-term momentum. Compares now vs 7 days ago.
30-Day Change - Medium-term trajectory. Compares now vs 30 days ago.
Green = improving. Red = declining. Small consistent trends matter over time.
Citation Quality
Citation Quality Score - Average authority of sources citing your brand.
Domain Authority - How authoritative a citing website is. Forbes > random blog.
Citation Sentiment - Whether sources discuss you positively, negatively, or neutrally.
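The quality score reduces to an average over source authority. This sketch assumes a 0-100 domain-authority scale; the domains and values are made up for illustration.

```python
def citation_quality(citations):
    """Mean domain authority across citing sources (0-100 assumed)."""
    return round(sum(authority for _, authority in citations)
                 / len(citations), 1)

cites = [("forbes.com", 94), ("g2.com", 88), ("smallblog.net", 22)]
print(citation_quality(cites))  # 68.0 - one weak source drags it down
```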
Important notes
There's no universal "good" score. 40% visibility might be excellent in a crowded market with 50 competitors, or terrible in a niche where you should dominate. Compare against competitors and track your own trend.
Model performance varies. You might score 80% on Claude and 30% on Perplexity. Normal. Each model has different training data, different biases, different retrieval approaches.
Presence vs position are different. Presence asks: were you mentioned at all? Position asks: where in the list? Both matter, but presence comes first.
Where to go from here
See your scores
Head to your Dashboard to see visibility across all models.
Find citation gaps
Your citation profile shows why your scores are what they are. This is where actionable insights live.
Track competitors
See head-to-head matchups. Find which prompts they're winning.
