How to Block AI Bots on SvelteKit
SvelteKit's static/ directory, file-based +server.ts routing, and hooks.server.ts handle hook give you multiple layers to control AI crawlers — from polite opt-out to hard 403 blocking before any route is processed.
Quick fix — create static/robots.txt
Place in your project root's static/ folder (same level as src/). SvelteKit copies it verbatim to /robots.txt at build time — no config needed.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
All Methods
static/robots.txt (Recommended)
Easy · SSG + SSR
static/robots.txt
SvelteKit copies everything in static/ verbatim to the build output. A plain robots.txt here works in all modes with no configuration or plugins.
Plain text only — no Svelte syntax, no front matter. Works with every adapter.
src/routes/robots.txt/+server.ts
Easy · SSG + SSR
src/routes/robots.txt/+server.ts
A SvelteKit API endpoint that generates robots.txt dynamically. Useful for environment-based rules. Pre-rendered to a static file by adapter-static; served dynamically in SSR.
Directory must be named "robots.txt" (not "robots") to map to /robots.txt. Export prerender = true for SSG.
svelte:head in +layout.svelte — global noai tag
Easy · SSG + SSR
src/routes/+layout.svelte
Add <svelte:head><meta name="robots" content="noai, noimageai" /></svelte:head> to the root layout. Applies to every page without any additional configuration.
Can also be added directly to src/app.html for a zero-config global approach.
X-Robots-Tag via handle hook
Easy · SSR only
src/hooks.server.ts
Set X-Robots-Tag: noai, noimageai as an HTTP response header on every page via the SvelteKit handle hook. More authoritative than the HTML meta tag — works at the HTTP layer.
Requires an active server (adapter-node, adapter-vercel, adapter-cloudflare). Does not run in adapter-static SSG mode.
hooks.server.ts — hard bot blocking (SSR only)
Intermediate · SSR only
src/hooks.server.ts → handle
Intercept all requests in the handle hook and return a 403 Response for matched AI bot user agents. The most powerful method — bots are blocked before SvelteKit processes any route.
Only works in SSR mode. In adapter-static, the hook is never executed. Combine with robots.txt for full coverage.
Method 1: static/robots.txt
SvelteKit copies every file in static/ verbatim to the build output root. Create static/robots.txt in your project root (same level as src/ and svelte.config.js). It becomes yourdomain.com/robots.txt automatically — no vite config, no plugin, no import.
User-agent: *
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: OAI-SearchBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: meta-externalagent
Disallow: /

User-agent: Amazonbot
Disallow: /

User-agent: Applebot-Extended
Disallow: /

User-agent: xAI-Bot
Disallow: /

User-agent: DeepSeekBot
Disallow: /

User-agent: MistralBot
Disallow: /

User-agent: Diffbot
Disallow: /

User-agent: cohere-ai
Disallow: /

User-agent: AI2Bot
Disallow: /

User-agent: Ai2Bot-Dolma
Disallow: /

User-agent: YouBot
Disallow: /

User-agent: DuckAssistBot
Disallow: /

User-agent: omgili
Disallow: /

User-agent: omgilibot
Disallow: /

User-agent: webzio-extended
Disallow: /

User-agent: gemini-deep-research
Disallow: /
Method 2: src/routes/robots.txt/+server.ts
SvelteKit maps files to URLs based on their directory path. Create the directory src/routes/robots.txt/ and add a +server.ts file that exports a GET function returning a text/plain Response. The directory name must be robots.txt (not robots) so it maps to /robots.txt.
For adapter-static, add export const prerender = true so SvelteKit pre-renders the endpoint to a static file at build time. In SSR mode, the endpoint is served dynamically on each request.
// src/routes/robots.txt/+server.ts
import type { RequestHandler } from '@sveltejs/kit';
export const prerender = true; // required for adapter-static
const AI_BOTS = [
'GPTBot', 'ChatGPT-User', 'OAI-SearchBot',
'ClaudeBot', 'anthropic-ai',
'Google-Extended', 'Bytespider',
'CCBot', 'PerplexityBot',
'meta-externalagent', 'Amazonbot',
'Applebot-Extended', 'xAI-Bot',
'DeepSeekBot', 'MistralBot',
'Diffbot', 'cohere-ai',
'AI2Bot', 'Ai2Bot-Dolma',
'YouBot', 'DuckAssistBot',
'omgili', 'omgilibot',
'webzio-extended', 'gemini-deep-research',
];
export const GET: RequestHandler = () => {
const lines = [
'User-agent: *',
'Allow: /',
'',
...AI_BOTS.flatMap((bot) => [`User-agent: ${bot}`, 'Disallow: /', '']),
'Sitemap: https://yourdomain.com/sitemap.xml',
].join('\n');
return new Response(lines, {
headers: { 'Content-Type': 'text/plain; charset=utf-8' },
});
};

Conflict warning
If you have both static/robots.txt and src/routes/robots.txt/+server.ts, the static file takes precedence. Use one or the other — not both.
Method 3: svelte:head in +layout.svelte
Add a <svelte:head> block to your root layout to inject the noai and noimageai meta tags on every page. This applies to all routes that use the default layout (everything under src/routes/ by default).
<!-- src/routes/+layout.svelte -->
<svelte:head>
  <meta name="robots" content="noai, noimageai" />
</svelte:head>

<slot />
Alternatively, add it directly to src/app.html for a zero-config global approach that doesn't depend on layout rendering:
<!-- src/app.html -->
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<link rel="icon" href="%sveltekit.assets%/favicon.png" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<!-- Block AI training crawlers -->
<meta name="robots" content="noai, noimageai" />
%sveltekit.head%
</head>
<body data-sveltekit-preload-data="hover">
<div style="display: contents">%sveltekit.body%</div>
</body>
</html>

For per-page control, add <svelte:head> in individual page files. Per-page tags merge with (and can override) the layout tag:
<!-- src/routes/blog/[slug]/+page.svelte — allow AI to index but not train -->
<svelte:head>
  <!-- Overrides parent layout noai only for this page -->
  <meta name="robots" content="index, follow" />
</svelte:head>
Method 4: X-Robots-Tag via hooks.server.ts
The X-Robots-Tag HTTP header is more authoritative than the HTML meta tag because it applies at the HTTP layer — bots that download pages without rendering JavaScript still see the directive. Set it on every response using the handle hook in src/hooks.server.ts:
// src/hooks.server.ts
import type { Handle } from '@sveltejs/kit';
export const handle: Handle = async ({ event, resolve }) => {
const response = await resolve(event);
response.headers.set('X-Robots-Tag', 'noai, noimageai');
return response;
};

Combine with static/robots.txt and noai meta tags for full coverage across all adapter types.

Method 5: Hard Blocking in hooks.server.ts (SSR only)
The handle hook runs before SvelteKit processes any route. Return a 403 Response early for matched bot user agents, and the bot never reaches any page — no HTML is rendered, no content is served.
You can combine header injection and hard blocking in a single handle hook:
// src/hooks.server.ts
import type { Handle } from '@sveltejs/kit';
const BLOCKED_UAS = /GPTBot|ClaudeBot|anthropic-ai|CCBot|Bytespider|Google-Extended|PerplexityBot|meta-externalagent|Amazonbot|Applebot-Extended|xAI-Bot|DeepSeekBot|MistralBot|Diffbot|cohere-ai|AI2Bot|YouBot|DuckAssistBot|omgili|omgilibot|webzio-extended|gemini-deep-research|OAI-SearchBot|ChatGPT-User/i;
export const handle: Handle = async ({ event, resolve }) => {
const ua = event.request.headers.get('user-agent') ?? '';
if (BLOCKED_UAS.test(ua)) {
return new Response('Forbidden', {
status: 403,
headers: { 'Content-Type': 'text/plain' },
});
}
const response = await resolve(event);
// Also set X-Robots-Tag for any crawlers that aren't hard-blocked
response.headers.set('X-Robots-Tag', 'noai, noimageai');
return response;
};

SSR only — adapter-static won't execute this
When using adapter-static, SvelteKit pre-renders your site at build time and no hooks.server.ts code runs at request time. The only runtime enforcement available for static sites is at the infrastructure layer: Cloudflare WAF, Netlify Edge Functions, or similar.
Method 6: Cloudflare WAF (All Adapters)
For static SvelteKit sites on Cloudflare Pages (or any site behind Cloudflare), a WAF rule blocks AI bots at the network edge before any request reaches your server — the most effective option for adapter-static deployments and the best defence against bots that ignore robots.txt:
# Cloudflare WAF — Custom Rule (Expression Editor)
(http.user_agent contains "GPTBot") or
(http.user_agent contains "ClaudeBot") or
(http.user_agent contains "anthropic-ai") or
(http.user_agent contains "CCBot") or
(http.user_agent contains "Bytespider") or
(http.user_agent contains "Google-Extended") or
(http.user_agent contains "PerplexityBot") or
(http.user_agent contains "DeepSeekBot") or
(http.user_agent contains "MistralBot") or
(http.user_agent contains "Diffbot") or
(http.user_agent contains "cohere-ai") or
(http.user_agent contains "meta-externalagent") or
(http.user_agent contains "Amazonbot") or
(http.user_agent contains "xAI-Bot") or
(http.user_agent contains "AI2Bot")

→ Action: Block
Cloudflare Pages users: go to your Pages project → Settings → WAF to create the rule. Cloudflare proxy users: Security → WAF → Custom rules. For Netlify deployments, use Netlify Edge Functions or the static/_headers file to add X-Robots-Tag: noai, noimageai to all responses.
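For Netlify, the static/_headers approach mentioned above is a plain text file that SvelteKit copies to the publish directory; Netlify reads it at deploy time. A minimal sketch (the `/*` path rule applies the header to every response):

```
# static/_headers — Netlify custom headers file
/*
  X-Robots-Tag: noai, noimageai
```

Because this is enforced by Netlify's CDN rather than your app code, it works even with adapter-static, where no hooks run.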
SSG vs SSR: Which Methods Work Where
| Method | adapter-static | adapter-node/vercel |
|---|---|---|
| static/robots.txt | ✅ Yes | ✅ Yes |
| +server.ts endpoint (prerender:true) | ✅ Yes | ✅ Yes |
| svelte:head noai in +layout.svelte | ✅ Yes (baked in HTML) | ✅ Yes |
| app.html noai meta tag | ✅ Yes | ✅ Yes |
| X-Robots-Tag via handle hook | ❌ No server | ✅ Yes |
| Hard blocking via handle hook | ❌ No server | ✅ Yes |
| Cloudflare WAF | ✅ Yes (edge) | ✅ Yes (edge) |
AI Bots to Block
These 25 user agents cover the major AI training crawlers and AI search bots. The robots.txt blocks shown above include all of them.
Frequently Asked Questions
Where do I put robots.txt in a SvelteKit project?
Place robots.txt in the static/ directory at your SvelteKit project root. SvelteKit copies everything in static/ verbatim to the build output, so static/robots.txt becomes yourdomain.com/robots.txt automatically. This is a plain text file — no Svelte syntax, no front matter. It works in both SSG (adapter-static) and SSR modes. Alternatively, create src/routes/robots.txt/+server.ts for a dynamic endpoint that generates rules programmatically.
What is the src/routes/robots.txt/+server.ts approach?
SvelteKit's file-based routing supports non-HTML endpoints. Create the directory src/routes/robots.txt/ and add a +server.ts file that exports a GET function returning a text/plain Response. This is useful for dynamic rules — for example, blocking everything in non-production environments, or pulling bot lists from a config file. The endpoint is pre-rendered to a static file by adapter-static (with the correct prerender = true export), and served dynamically in SSR mode. Note: the directory must be named robots.txt, not robots — SvelteKit maps it to the URL /robots.txt.
How do I add a noai meta tag to every SvelteKit page?
Two options: (1) src/routes/+layout.svelte with svelte:head — add <svelte:head><meta name="robots" content="noai, noimageai" /></svelte:head> in your root layout file. This applies to every page that uses this layout (all pages by default). (2) src/app.html — add <meta name="robots" content="noai, noimageai" /> directly in the HTML template's <head> section. The app.html approach is simpler but less flexible if you need per-page control. For per-page control, add svelte:head blocks in individual page files.
Does the hooks.server.ts handle hook work in SSG (adapter-static) mode?
No. The handle hook in hooks.server.ts only runs when SvelteKit is running as an active server (adapter-node, adapter-vercel, adapter-cloudflare, etc.). In SSG mode with adapter-static, the site is pre-rendered to static HTML files and there is no server to run hooks — the hook code is completely ignored. For SSG deployments, use static/robots.txt plus noai meta tags, and platform-level tools (Cloudflare WAF, Netlify Edge Functions) for runtime bot blocking.
Can I set X-Robots-Tag as an HTTP header in SvelteKit?
Yes, using the handle hook in hooks.server.ts. In your handle function, call resolve(event) to get the response, then use response.headers.set('X-Robots-Tag', 'noai, noimageai') before returning it. This applies the header to every page response. X-Robots-Tag is more authoritative than the HTML meta tag because it applies at the HTTP layer — bots that download HTML without rendering JavaScript still see the directive. This only works in SSR mode.
Will blocking AI bots break SvelteKit's SEO or sitemap generation?
No. Blocking GPTBot, ClaudeBot, CCBot, and other AI training bots does not affect Googlebot, Bingbot, or other search engine crawlers. If you use a sitemap library (svelte-sitemap or a custom +server.ts endpoint), it continues generating normally. SvelteKit's built-in SEO features — page titles, meta tags, canonical URLs — are completely unaffected. Your search rankings remain unchanged.