Does Blocking AI Bots Hurt SEO? The Complete Answer (2026)
Short answer: no — blocking AI training bots has no effect on your Google Search rankings. But there is a real trade-off worth understanding around AI search engines like Perplexity and SearchGPT. Here is the full explanation.
TL;DR
- ✅ Blocking GPTBot, ClaudeBot, CCBot, Bytespider, Google-Extended = zero effect on Google rankings
- ✅ Returning 403 to AI bots = zero effect on SEO (Googlebot gets a normal 200)
- ✅ Adding noai to robots.txt = not processed by Google Search
- ⚠️ Blocking PerplexityBot or OAI-SearchBot = reduces AI search visibility (Perplexity, SearchGPT) — a business choice, not an SEO risk
- ❌ Using `User-agent: *` with `Disallow: /` = blocks Googlebot — this DOES hurt SEO (don't do this)
SEO crawlers and AI training bots are completely different programs
The confusion comes from thinking Google is one thing. It is not. Google runs many different crawlers for different purposes, and they are identified by different user agent strings:
| Bot | User-Agent token | Purpose | Block to stop AI? |
|---|---|---|---|
| Googlebot | Googlebot | Google Search indexing | ❌ Never — kills rankings |
| Googlebot-Image | Googlebot-Image | Google Image Search | ❌ Never |
| Google-Extended | Google-Extended | Gemini / Vertex AI training | ✅ Yes — safe |
| AdsBot-Google | AdsBot-Google | Google Ads quality check | ❌ Never |
| Storebot-Google | Storebot-Google | Google Shopping | ❌ Never |
The same separation applies to Microsoft. bingbot is Bing Search — block it and you disappear from Bing. BingPreview is used for Bing Chat and Copilot features — safe to block.
When you implement bot blocking correctly — targeting specific AI training user agents — Google's search crawlers are unaffected. Your pages are indexed identically. Your rankings do not move.
How robots.txt directives interact with SEO
Google Search processes only two rule directives in robots.txt: `Allow` and `Disallow`, grouped under `User-agent` lines. Anything else is ignored for search purposes.
Correct — named bots only
```
# robots.txt — CORRECT: named AI bot tokens, Googlebot unaffected

User-agent: *
Allow: /            # Googlebot and all other crawlers: full access

User-agent: GPTBot
Disallow: /         # Only blocks GPTBot

User-agent: ClaudeBot
Disallow: /         # Only blocks ClaudeBot

User-agent: Google-Extended
Disallow: /         # Only blocks Gemini training — NOT Googlebot
```
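You can sanity-check a file like this before deploying it with Python's standard-library `urllib.robotparser`; the example below mirrors the robots.txt above and confirms Googlebot stays allowed while the named AI bots are blocked:

```python
import urllib.robotparser

# The "correct" robots.txt from above: wildcard Allow, named AI bots blocked.
robots_txt = """\
User-agent: *
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

url = "https://example.com/some-page"
print(parser.can_fetch("Googlebot", url))        # True  — search crawling unaffected
print(parser.can_fetch("GPTBot", url))           # False — AI training bot blocked
print(parser.can_fetch("Google-Extended", url))  # False — Gemini training blocked
```

Running this kind of check in CI is a cheap way to catch a robots.txt edit that would accidentally block a search crawler.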
Wrong — wildcard Disallow
```
# robots.txt — WRONG: blocks Googlebot too

User-agent: *
Disallow: /         # Blocks EVERYTHING including Googlebot = site vanishes from Google
```
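The same standard-library check makes the damage obvious — with a wildcard `Disallow: /`, even Googlebot is refused:

```python
import urllib.robotparser

# The "wrong" robots.txt from above: wildcard Disallow blocks every crawler.
robots_txt = """\
User-agent: *
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

url = "https://example.com/some-page"
print(parser.can_fetch("Googlebot", url))  # False — your site vanishes from Google
print(parser.can_fetch("GPTBot", url))     # False
```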
The one mistake that CAN hurt SEO
Using User-agent: * with Disallow: / blocks every crawler including Googlebot. This removes your site from Google Search. Always use specific named user agent tokens when blocking AI bots.
What about the noai directive?
The noai and noimageai directives in robots.txt are not part of the Robots Exclusion Standard and are not processed by Google Search. They are signals to compliant AI crawlers but have zero effect on search indexing or rankings. You can use them as an additional signal alongside proper Disallow rules.
Do 403 responses to AI bots affect Google's crawling?
No — and here is why. Your bot-blocking middleware checks the incoming `User-Agent` header and returns 403 only when it detects a known AI bot string. Googlebot's user agent contains the token `Googlebot/2.1` — which is not in your AI bot list — so it receives a normal 200 response with your full page content.
What each crawler sees
- `Googlebot/2.1` — search indexing
- `GPTBot/1.1` — AI training
- `ClaudeBot` — AI training
- `Google-Extended` — Gemini training
- `Mozilla/5.0` (human browser) — real visitor

The 403 responses that AI bots receive are invisible to Google's ranking systems. Google never sees them. Your search presence is completely unaffected.
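The check described above can be sketched in a few lines. This is a framework-agnostic illustration — the bot list and helper name are examples, not a specific product's API:

```python
# Illustrative sketch of user-agent-based 403 blocking.
# Token list is an example; maintain your own from crawler documentation.
AI_BOT_TOKENS = (
    "GPTBot", "ClaudeBot", "CCBot", "Bytespider",
    "anthropic-ai", "cohere-ai",
)

def status_for(user_agent: str) -> int:
    """Return 403 for known AI training bots, 200 for everyone else."""
    ua = user_agent.lower()
    if any(token.lower() in ua for token in AI_BOT_TOKENS):
        return 403
    return 200

# Googlebot's full UA contains "Googlebot/2.1", which matches nothing in the AI list:
print(status_for("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"))  # 200
print(status_for("Mozilla/5.0 (compatible; GPTBot/1.1; +https://openai.com/gptbot)"))          # 403
```

In a real deployment the same predicate would sit in your server or edge middleware, deciding the response status before the page is rendered.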
The real trade-off: AI search engines
This is where it gets nuanced. There are two distinct categories of AI bots:
AI training crawlers
Scrape your content to build datasets for training AI models. Blocking these prevents your content from being used without your consent. No SEO impact whatsoever.
GPTBot, ClaudeBot, CCBot, Bytespider, Google-Extended, cohere-ai
AI search indexers
Index your content to power AI-generated answers in search engines like Perplexity, SearchGPT, and You.com. Blocking these means your content won't appear in those answers. No Google SEO impact — but real AI search visibility impact.
PerplexityBot, OAI-SearchBot, ChatGPT-User, YouBot, iaskspider
Whether to block AI search indexers is a business decision, not an SEO decision. Ask yourself:
- Does traffic from Perplexity or SearchGPT matter to my business?
- Am I happy for AI search engines to summarise my content without a click-through?
- Is appearing in AI answers worth the content usage by the AI company?
Many sites block AI training crawlers (GPTBot, ClaudeBot) while allowing AI search indexers (PerplexityBot, OAI-SearchBot) — getting AI search visibility without contributing to training datasets.
What about AEO — Answer Engine Optimization?
AEO is the practice of optimising content to appear in AI-generated answers. If AEO matters to your business, you need AI search engines to crawl your content. This means you should allow AI search indexers even if you block AI training crawlers.
Selective blocking for AEO
```
# robots.txt — block training, allow AI search

User-agent: *
Allow: /

# Block AI TRAINING crawlers
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: anthropic-ai
User-agent: Google-Extended
User-agent: CCBot
User-agent: Bytespider
User-agent: cohere-ai
Disallow: /

# Allow AI SEARCH indexers (good for AEO)
# PerplexityBot — not listed above, so gets the default Allow: /
# OAI-SearchBot — not listed above, so gets the default Allow: /
# YouBot — not listed above, so gets the default Allow: /
```
By only listing training crawlers in the Disallow block and omitting AI search indexers, you allow AI search engines to index your content while preventing training data scraping.
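One way to keep the two lists maintainable is to generate the file from an explicit list of training crawlers. A minimal sketch — the lists and function name are illustrative, not a standard API:

```python
# Illustrative helper: build a "block training, allow AI search" robots.txt.
TRAINING_BOTS = [
    "GPTBot", "ClaudeBot", "anthropic-ai", "Google-Extended",
    "CCBot", "Bytespider", "cohere-ai",
]
SEARCH_BOTS = ["PerplexityBot", "OAI-SearchBot", "YouBot"]  # left unlisted = allowed

def build_robots_txt(training_bots) -> str:
    """Emit a wildcard Allow, then one Disallow group naming every training bot."""
    lines = ["User-agent: *", "Allow: /", "", "# Block AI TRAINING crawlers"]
    for bot in training_bots:
        lines.append(f"User-agent: {bot}")
    lines.append("Disallow: /")
    return "\n".join(lines) + "\n"

robots = build_robots_txt(TRAINING_BOTS)
print(robots)
```

Because the search indexers never appear in the generated file, they inherit the default `Allow: /`, which is exactly the AEO-friendly behaviour described above.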
Does X-Robots-Tag: noai affect SEO?
The X-Robots-Tag: noai, noimageai response header is a signal to compliant AI crawlers that you do not consent to your content being used for AI training. It is not recognised by Google Search.
Google recognises the following values in X-Robots-Tag for search purposes: noindex, nofollow, nosnippet, noarchive, and a few others. noai is not in that list.
Setting X-Robots-Tag: noai on your pages has zero effect on how Googlebot indexes them or how Google ranks them. Use it as an additional layer alongside proper middleware blocking — it harms nothing and signals your intent to compliant crawlers.
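Attaching the header is a one-liner in most stacks; here is a framework-agnostic sketch using a plain headers dict (real frameworks expose this as middleware or an after-response hook, and the helper name is illustrative):

```python
# Illustrative: attach the AI-training opt-out signal to a response's headers.
def add_noai_header(headers: dict) -> dict:
    """Add X-Robots-Tag: noai, noimageai; Google Search ignores these values."""
    headers = dict(headers)  # copy so the caller's dict is untouched
    headers["X-Robots-Tag"] = "noai, noimageai"
    return headers

response_headers = add_noai_header({"Content-Type": "text/html"})
print(response_headers["X-Robots-Tag"])  # noai, noimageai
```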
The safe approach: block training, allow search
For most sites, this approach maximises protection without any SEO or AI search trade-off:
| Layer | Targets | Effect |
|---|---|---|
| Block in robots.txt | GPTBot, ClaudeBot, anthropic-ai, Google-Extended, CCBot, Bytespider, cohere-ai, FacebookBot, Amazonbot | No SEO impact. No AI search impact. |
| Allow in robots.txt (omit from Disallow) | Googlebot, Bingbot, PerplexityBot, OAI-SearchBot, YouBot | SEO preserved. AI search visibility preserved. |
| Server middleware (403) | Same AI training list as above | Hard block. No SEO impact — Googlebot gets 200. |
| X-Robots-Tag: noai | All responses | Signal to compliant crawlers. No SEO effect. |
FAQ
Does blocking GPTBot or ClaudeBot hurt my Google rankings?
No. GPTBot and ClaudeBot are AI training crawlers used by OpenAI and Anthropic. Google Search uses Googlebot — a completely different program with a different user agent. Blocking GPTBot has zero effect on how Googlebot indexes your site or how Google ranks it.
Is Google-Extended the same as Googlebot?
No. Googlebot is Google's search indexing crawler — it powers Google Search rankings. Google-Extended is a separate token Google uses for Gemini and Vertex AI training. Blocking Google-Extended has no effect on your search rankings. Only blocking Googlebot would affect search.
Does returning a 403 to AI bots hurt SEO?
No — as long as you return 403 only to AI bot user agents, not to Googlebot. Your middleware checks User-Agent and returns 403 for AI training strings. Googlebot uses a different user agent (Googlebot/2.1) and receives a normal 200 response. The 403s AI bots see are invisible to Google's ranking systems.
Does adding noai to robots.txt affect how Google indexes my pages?
No. The noai and noimageai directives are not recognised by Google Search. Google only processes Allow and Disallow in robots.txt. Adding User-agent: GPTBot / Disallow: / does not affect Googlebot's behaviour.
Will blocking AI bots affect my visibility in Perplexity or ChatGPT answers?
Blocking PerplexityBot reduces Perplexity visibility. Blocking OAI-SearchBot reduces SearchGPT visibility. This is not an SEO issue — it does not affect Google rankings. It is an AI search visibility question. Consider allowing AI search indexers while blocking AI training crawlers.
What is the one mistake that CAN hurt SEO when blocking AI bots?
Using User-agent: * Disallow: / in robots.txt. This blocks every crawler including Googlebot, removing your site from Google Search entirely. Always block AI bots by name with specific User-agent tokens.
Is your site protected from AI bots?
Run a free scan to check your robots.txt, meta tags, and overall AI readiness score.