Pro Audit snapshot across 5 AI engines, benchmarked against 5 competitors. 6,750 total responses analysed.
Ahrefs sits at 66.4% share of voice — 6.0 percentage points behind semrush.com. That gap is closable. It's not a position of weakness; it's a position where targeted action can shift the ranking within a quarter. The data below shows exactly where that action should focus.
There's a significant divergence across engines — Ahrefs achieves 83.7% on Perplexity but only 49.9% on Gemini, a spread of nearly 34 percentage points. This divergence matters: it means some engines are finding and surfacing Ahrefs while others aren't. The engine-by-engine breakdown below identifies which engines to prioritise.
Ahrefs's triangulated domain authority (88.0) sits marginally below the cohort median (88.3). This creates a slight headwind: even well-structured content may be passed over in favour of higher-authority competitors. The recommendation section addresses this with a parallel track of authority-building alongside content optimisation.
Share of voice across all audited brands, broken down by engine. Highlighted cells indicate the leader per engine. Ahrefs is marked with ★.
| Rank | Brand | Overall SOV | Gemini | Claude | Perplexity | ChatGPT | AI Overviews |
|---|---|---|---|---|---|---|---|
| 1 | semrush.com | 72.4% | 54.4% | 83.9% | 87.9% | 62.4% | 73.5% |
| 2 | Ahrefs ★ | 66.4% | 49.9% | 72.8% | 83.7% | 58.1% | 67.5% |
| 3 | seranking.com | 37.9% | 30.0% | 39.8% | 40.7% | 30.1% | 48.8% |
| 4 | moz.com | 35.2% | 28.2% | 35.6% | 44.5% | 38.6% | 29.1% |
| 5 | similarweb.com | 3.8% | 1.6% | 3.0% | 3.3% | 4.5% | 6.4% |
| 6 | sistrix.com | 1.1% | 0.9% | 0.1% | 3.3% | 0.0% | 1.1% |
semrush.com leads on 5 of 5 engines, making it the dominant competitor in this audit. Ahrefs does not hold the lead on any individual engine — closing this gap on even one engine would meaningfully shift the overall position. The margins on Perplexity (4.2 points), ChatGPT (4.3) and Gemini (4.5) are tight — a few points of improvement on any of them could flip the per-engine ranking.
The three most significant patterns the audit uncovered — derived from 6,750 classified AI responses.
Each AI search engine has its own training data, retrieval method, and ranking logic. A brand can dominate one engine and be invisible on another. Understanding the per-engine pattern tells you where to focus.
Ahrefs trails semrush.com on every engine, by margins that vary widely:

- Perplexity: 83.7% (4.2 points behind)
- Claude: 72.8% (11.1 points behind)
- AI Overviews: 67.5% (6.0 points behind)
- ChatGPT: 58.1% (4.3 points behind)
- Gemini: 49.9% (4.5 points behind)

In each case the engine's retrieval model favours semrush.com's content — likely due to stronger structured data, more third-party citations, or better content alignment with the queries that engine is answering.
Each AI engine pulls from different sources, weights signals differently, and surfaces different competitors. The summary chart above gives the headline. Below: each engine analysed on its own terms — wins, losses, cited sources, and a specific action for that engine.
Brand SOV: 67.5%. Sentiment is overwhelmingly positive (76%) across 910 mentions.
Engine-specific action: Ahrefs sits at rank 2 on AI Overviews. Focus on the top-cited sources for this engine — placement on those domains is the most direct path to citation share here.
Brand SOV: 58.1%. Sentiment is overwhelmingly positive (78%) across 232 mentions.
No voice gaps on this engine — the brand appears wherever competitors do.
Engine-specific action: Ahrefs sits at rank 2 on ChatGPT. Focus on the top-cited sources for this engine — placement on those domains is the most direct path to citation share here.
Brand SOV: 72.8%. Sentiment is broadly positive (60%) across 979 mentions.
Engine-specific action: Ahrefs sits at rank 2 on Claude. Focus on the top-cited sources for this engine — placement on those domains is the most direct path to citation share here.
Brand SOV: 49.9%. Sentiment is overwhelmingly positive (77%) across 671 mentions.
Engine-specific action: Ahrefs sits at rank 2 on Gemini. Focus on the top-cited sources for this engine — placement on those domains is the most direct path to citation share here.
Brand SOV: 83.7%. Sentiment is broadly positive (57%) across 1,129 mentions.
Engine-specific action: Ahrefs sits at rank 2 on Perplexity. Focus on the top-cited sources for this engine — placement on those domains is the most direct path to citation share here.
Modern LLMs don’t answer prompts directly — they fan out into sub-queries, search each one, then synthesise. The reasoning trail captured live during this audit is below. The keywords AI is exploring most often are where your next content opportunities live.
| # | Sub-query | Frequency | Engines |
|---|---|---|---|
| 1 | leading SEO tools 2026 | 18 | Claude, Gemini |
| 2 | SEO tools for startups | 16 | Claude, Gemini |
| 3 | SEO tools no implementation fees | 14 | Claude, Gemini |
| 4 | emerging SEO tools 2026 | 14 | Claude, Gemini |
| 5 | SEO tools GDPR compliant contracts | 14 | Claude, Gemini |
| 6 | most acquired SEO tools companies | 14 | Claude, Gemini |
| 7 | SEO tools for distributed teams | 12 | Gemini |
| 8 | affordable SEO tools for small businesses | 12 | Gemini |
| 9 | best free SEO tools | 12 | Gemini |
| 10 | best SEO tools platforms 2026 | 12 | Claude |
Original prompt: Which SEO tools brands have the most loyal customers?
Each row below is a sub-query the AI engines explored during this audit. If you don’t currently rank organically for these queries, you are missing a primary AI citation pathway. These are discrete, actionable content gaps you can close.
Opportunities are ranked by frequency. Audit your current organic ranking for each in your preferred SEO tool, then prioritise the high-frequency queries where you rank below position 10.
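The prioritisation rule above can be sketched as a simple filter. A minimal example — the sub-queries come from the fan-out table, but the `current rank` values are hypothetical placeholders, not audit output:

```python
# (sub-query, fan-out frequency, current organic rank or None if not ranking).
# Replace the rank values with real data from your preferred SEO tool.
subqueries = [
    ("leading SEO tools 2026", 18, 4),
    ("SEO tools for startups", 16, 14),
    ("SEO tools no implementation fees", 14, None),  # not ranking at all
    ("emerging SEO tools 2026", 14, 22),
    ("best free SEO tools", 12, 3),
]

# Priority targets: sub-queries where you rank below position 10 (or not at all).
targets = [
    (query, freq) for query, freq, rank in subqueries
    if rank is None or rank > 10
]

# Highest-frequency gaps first — these are the primary citation pathways.
targets.sort(key=lambda t: -t[1])
for query, freq in targets:
    print(f"{freq:>3}x  {query}")
```

With the placeholder ranks above, "SEO tools for startups" (frequency 16) surfaces as the top gap, while the queries already ranking in the top 10 drop out.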
Prompts are grouped into intent clusters that reflect how real buyers search. Each card includes buyer-journey context explaining why that cluster matters commercially.
The spread between Ahrefs's strongest cluster (Category leadership, 73.7%) and weakest (Pricing and procurement, 59.4%) is 14.3 percentage points. This tells you where AI engines associate Ahrefs with the category — and where they don't. The weakest cluster is typically the highest-ROI target for new content, because you're starting from a low base where even small improvements are visible.
When AI engines mention Ahrefs, what tone do they use? Sentiment is classified per-mention across 3,921 total brand references. The dominant frame is positive.
A majority-positive sentiment profile is a competitive advantage. AI engines are not just mentioning Ahrefs — they're framing it favourably. Protect this by continuing to generate positive third-party signals: reviews, press coverage, industry awards, and customer testimonials.
How sentiment varies by engine. Different engines pull from different sources — a brand can be celebrated by one and ignored by another.
Authority signals triangulated across Ahrefs Domain Rating and Moz Domain Authority, alongside referring-domain count and organic traffic value for full context. Higher authority correlates with higher AI citation rates — engines trust domains that the wider web trusts.
Ahrefs is quietly outperforming its authority score. Organic traffic value ($1,759.6k/mo) and referring domains (93,250) are both above the cohort median, but triangulated authority (88.0) sits marginally below it (88.3). The lag is likely a measurement artefact — DR and DA update slowly. The traffic and links suggest the authority score will catch up; protect the trajectory by maintaining content cadence and link quality.
| Domain | Triangulated authority | Ahrefs DR | Moz DA | Ref. domains | Organic keywords | Organic traffic value | Sources |
|---|---|---|---|---|---|---|---|
| Ahrefs ★ | 88.0 | 91.0 | 85.0 | 93,250 | 78,326 | $1,759.6k/mo | ahrefs_dr, moz_da |
| semrush.com | 89.0 | 92.0 | 86.0 | 13,792 | 148,326 | $5,894.5k/mo | ahrefs_dr, moz_da |
| moz.com | 91.0 | 91.0 | 91.0 | 102,992 | 36,195 | $779.1k/mo | ahrefs_dr, moz_da |
| similarweb.com | 88.5 | 89.0 | 88.0 | 2,517 | 75,165 | $1,205.6k/mo | ahrefs_dr, moz_da |
| seranking.com | 70.0 | 85.0 | 55.0 | 17,886 | 31,679 | $537.8k/mo | ahrefs_dr, moz_da |
| sistrix.com | 67.5 | 80.0 | 55.0 | 345 | 8,170 | $24.0k/mo | ahrefs_dr, moz_da |
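The triangulated scores in the table are consistent with a plain mean of Ahrefs DR and Moz DA. A sketch under that assumption — the audit's actual blending may weight the sources differently:

```python
def triangulated_authority(ahrefs_dr: float, moz_da: float) -> float:
    """Blend two authority scores into one figure; equal weighting assumed."""
    return (ahrefs_dr + moz_da) / 2

# Reproduces the table rows:
print(triangulated_authority(91, 85))  # Ahrefs    -> 88.0
print(triangulated_authority(85, 55))  # seranking -> 70.0
print(triangulated_authority(80, 55))  # sistrix   -> 67.5
```

The seranking.com and sistrix.com rows show why triangulation matters: a 30-point DR/DA disagreement collapses into one comparable number instead of letting either source dominate.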
20 prompts where AI engines named one or more competitors but did not mention Ahrefs. Sorted by commercial importance, competitor count, and response substance. These are the highest-leverage placements to target.
A breakdown of the third-party domains AI engines pull from when answering category questions. The placement opportunities at the bottom are the most actionable list in this audit.
The 15 most-cited domains across all five engines for this category. Brand-highlighted rows are Ahrefs's own domains.
| Domain | Citations | Engines citing it |
|---|---|---|
| marketermilk.com | 1,333 | ai_overviews, claude, gemini, perplexity |
| onelittleweb.com | 1,322 | ai_overviews, claude, gemini, perplexity |
| trysight.ai | 895 | ai_overviews, claude, gemini, perplexity |
| almcorp.com | 742 | ai_overviews, claude, gemini, perplexity |
| morningscore.io | 687 | ai_overviews, claude, gemini, perplexity |
| gartner.com | 660 | ai_overviews, chatgpt, claude, gemini |
| zapier.com | 556 | ai_overviews, chatgpt, claude, gemini, perplexity |
| semrush.com | 537 | ai_overviews, chatgpt, claude, gemini, perplexity |
| aioseo.com | 516 | ai_overviews, claude, gemini, perplexity |
| nightwatch.io | 513 | ai_overviews, claude, gemini, perplexity |
| techradar.com | 511 | ai_overviews, chatgpt, claude, gemini, perplexity |
| riffanalytics.ai | 510 | ai_overviews, claude, gemini, perplexity |
| seranking.com | 505 | ai_overviews, claude, gemini, perplexity |
| rankability.com | 498 | ai_overviews, claude, gemini, perplexity |
| reddit.com | 480 | ai_overviews, gemini |
Each engine has its own canon — the small set of domains it consistently cites. Getting placed on these is a direct path to citation share.
Domains where your competitors are cited but Ahrefs is not.
| Domain | Competitor citations | Engines citing it |
|---|---|---|
| seenos.ai | 2 | chatgpt |
| guestpost.uk | 2 | ai_overviews, claude |
16 prioritised actions derived from the audit data. Each includes an impact rating, timeline, and specific implementation steps.
Full transparency on the audit methodology, data collection, and classifier reliability.
Gemini, Claude, Perplexity, ChatGPT, AI Overviews. Each engine was queried independently with the same prompt panel. Responses were captured in full and classified by an AI classifier for brand mention, position, citation type, and sentiment.
5,796 of 6,750 responses were successfully classified (85.9% success rate). 954 responses failed classification and were excluded from the SOV denominator to avoid deflating scores.
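Under this methodology, share of voice reduces to a simple ratio over the classified responses only. A sketch of the computation as described — the function name and the example mention count are illustrative, not the audit's actual code:

```python
def share_of_voice(brand_mentions: int, classified: int) -> float:
    """SOV (%) = responses mentioning the brand / successfully classified responses.
    Failed classifications are excluded from the denominator entirely,
    so they neither inflate nor deflate the score."""
    return 100 * brand_mentions / classified

total_responses = 6750
failed = 954
classified = total_responses - failed  # 5796 — the SOV denominator

# e.g. a brand mentioned in 3,849 classified responses (hypothetical count):
print(f"{share_of_voice(3849, classified):.1f}%")  # ~66.4%
```

Excluding the 954 failures from the denominator is the conservative choice: dividing by all 6,750 responses would understate every brand's share by roughly 14%.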
The brand name “Ahrefs” closely resembles the HTML anchor syntax (`a href`), which appears frequently in technical content. When an AI engine uses the string in a non-brand context, our classifier conservatively marks the response as ambiguous rather than guessing. This protects the accuracy of the SOV figures by ensuring only confident classifications are counted. The 5,796 successfully classified responses provide a robust sample for all metrics in this report.
SOV figures carry a ±3 percentage-point confidence interval at the 95% level: in 95 of 100 repeated audits, an interval constructed this way would contain the true share of voice.
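A ±3-point margin is in line with what a standard normal approximation gives at this sample size. A sketch using the Wald interval for a proportion — the audit's exact interval method isn't specified, so this is an assumption:

```python
import math

def wald_margin(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion, in percentage points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Per-engine sample: 150 prompts x 9 repeats = 1,350 responses before
# classification failures. At an SOV near 66%, the margin is ~2.5 points.
print(round(wald_margin(0.664, 1350), 1))
```

That ~2.5-point figure widens once classification failures shrink the effective sample, which is consistent with the report quoting a conservative ±3.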
The full panel of 150 prompts, grouped by intent cluster. Each prompt was sent to every engine 9 times. Use this to understand exactly what AI engines were asked and cross-reference with the cluster analysis above.