llms.txt for Nuxt: AI Crawler Readiness for Vue and Nitro Sites
A practical Nuxt implementation guide for llms.txt, robots.txt, sitemap, canonical URLs, structured data, SSR, prerendering, and AI crawler logs.
Nuxt gives teams several crawler-friendly options: SSR, static generation, prerendering, Nitro routes, and hybrid rendering. The risk is not the framework. The risk is shipping public pages that only become meaningful after client-side JavaScript runs.
For GEO, AI crawlers need a clear path from discovery to usable content. GEO Scout can then measure whether those pages influence answers across ChatGPT, Perplexity, Claude, Gemini, and other AI systems.
Add llms.txt
For static content:
Place a plain-text file at public/llms.txt; Nuxt serves everything in public/ from the site root.

For dynamic content, create a Nitro server route (for example, server/routes/llms.txt.get.ts):
```ts
export default defineEventHandler((event) => {
  setHeader(event, 'content-type', 'text/plain; charset=utf-8')
  return `# Example Nuxt Site
> Public product, documentation, and case-study resources.

## Core pages
- https://example.com/
- https://example.com/features
- https://example.com/pricing

## Docs
- https://example.com/docs
- https://example.com/docs/api
`
})
```

Verify:

```sh
curl -i https://example.com/llms.txt
```

robots.txt
Use robots rules to separate public knowledge from private application areas:
```txt
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: *
Disallow: /dashboard/
Disallow: /account/
Disallow: /api/

Sitemap: https://example.com/sitemap.xml
```

If a page helps a buyer understand your product, it usually should not be blocked.
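The rules above can also be generated from one source of truth and returned by a Nitro route such as server/routes/robots.txt.get.ts. A minimal sketch; the helper name and policy shape below are illustrative, not a Nuxt API:

```javascript
// Illustrative helper: build a robots.txt body from an allow/deny policy.
// buildRobotsTxt and the policy object shape are made up for this sketch.
function buildRobotsTxt({ allowAgents, disallowPaths, sitemap }) {
  const blocks = allowAgents.map((agent) => `User-agent: ${agent}\nAllow: /`)
  // Default group: block private application areas for everyone else.
  blocks.push(['User-agent: *', ...disallowPaths.map((p) => `Disallow: ${p}`)].join('\n'))
  blocks.push(`Sitemap: ${sitemap}`)
  // Blank lines separate robots.txt groups.
  return blocks.join('\n\n') + '\n'
}

const robots = buildRobotsTxt({
  allowAgents: ['GPTBot', 'ClaudeBot', 'PerplexityBot'],
  disallowPaths: ['/dashboard/', '/account/', '/api/'],
  sitemap: 'https://example.com/sitemap.xml',
})
```

Keeping the agent and path lists in one module makes it harder for robots.txt and the rest of the site configuration to drift apart.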
Sitemap Coverage
Include:
- homepage;
- feature and use-case pages;
- pricing;
- docs;
- blog posts;
- comparison pages;
- case studies;
- security, privacy, and compliance pages.
Do not include internal search, faceted duplicates, logged-in screens, or test URLs.
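If you maintain the URL list by hand, the sitemap itself is easy to emit from a Nitro route (for example, server/routes/sitemap.xml.get.ts) or via the community @nuxtjs/sitemap module. A minimal sketch; buildSitemapXml is an illustrative helper, not a Nuxt API:

```javascript
// Illustrative sketch: emit minimal sitemap XML for a curated URL list.
function buildSitemapXml(urls) {
  const entries = urls
    .map((u) => `  <url><loc>${u.loc}</loc>${u.lastmod ? `<lastmod>${u.lastmod}</lastmod>` : ''}</url>`)
    .join('\n')
  return '<?xml version="1.0" encoding="UTF-8"?>\n' +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${entries}\n</urlset>\n`
}

const xml = buildSitemapXml([
  { loc: 'https://example.com/', lastmod: '2024-05-01' },
  { loc: 'https://example.com/pricing' },
])
```

Because the list is explicit, internal search pages, faceted duplicates, and test URLs never enter the sitemap by accident.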
SSR and Prerendering
Recommended architecture:
```txt
/                  -> prerender or SSR
/features          -> prerender
/pricing           -> prerender
/blog/[slug]       -> prerender
/docs/[slug]       -> prerender
/customers/[slug]  -> prerender or SSR
/app/*             -> client app behind auth
```

Use Nuxt route rules to make intent explicit:
```ts
export default defineNuxtConfig({
  routeRules: {
    '/': { prerender: true },
    '/features/**': { prerender: true },
    '/blog/**': { prerender: true },
    '/docs/**': { prerender: true },
    '/app/**': { ssr: false },
  },
})
```

Canonical and Metadata
Use useSeoMeta and canonical links consistently:
```ts
useSeoMeta({
  title: 'Feature page title',
  description: 'Specific description for buyers and crawlers.',
  ogTitle: 'Feature page title',
})

useHead({
  link: [{ rel: 'canonical', href: 'https://example.com/features/reporting' }],
})
```

Canonical confusion is expensive for AI search because similar URLs can split evidence.
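To keep variants from splitting evidence, some teams normalize every URL to one canonical form before emitting the tag. The rules below (https, no www, no trailing slash, no utm_ parameters) are one possible policy for illustration, not a standard:

```javascript
// Illustrative normalizer: collapse common URL variants to one canonical form.
// The specific rules are policy choices, not a Nuxt or web-platform standard.
function canonicalUrl(input) {
  const u = new URL(input)
  u.protocol = 'https:'
  u.hostname = u.hostname.replace(/^www\./, '')
  u.hash = ''
  // Drop tracking parameters that create duplicate URLs.
  for (const p of [...u.searchParams.keys()]) {
    if (p.startsWith('utm_')) u.searchParams.delete(p)
  }
  let s = u.toString()
  if (u.pathname !== '/' && s.endsWith('/')) s = s.slice(0, -1)
  return s
}
```

Running every href through one function like this, on both the canonical tag and internal links, keeps the two from disagreeing.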
Structured Data
Render JSON-LD on the server:
```ts
useHead({
  script: [
    {
      type: 'application/ld+json',
      innerHTML: JSON.stringify({
        '@context': 'https://schema.org',
        '@type': 'SoftwareApplication',
        name: 'Example SaaS',
        applicationCategory: 'BusinessApplication',
        url: 'https://example.com',
      }),
    },
  ],
})
```

Add FAQPage on pages with real FAQs, Article on blog posts, BreadcrumbList on deep pages, and Organization sitewide.
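For the FAQPage case, the JSON-LD can be built from plain question/answer pairs before it is serialized into useHead. The helper name below is illustrative; the @type and mainEntity shape follows the schema.org FAQPage vocabulary:

```javascript
// Build a schema.org FAQPage object from question/answer pairs.
// buildFaqPage is a made-up helper; the output shape follows schema.org.
function buildFaqPage(faqs) {
  return {
    '@context': 'https://schema.org',
    '@type': 'FAQPage',
    mainEntity: faqs.map(({ question, answer }) => ({
      '@type': 'Question',
      name: question,
      acceptedAnswer: { '@type': 'Answer', text: answer },
    })),
  }
}

const faqJsonLd = buildFaqPage([
  { question: 'How do I add llms.txt in Nuxt?', answer: 'Serve it from public/ or a Nitro server route.' },
])
```

The resulting object would be passed through JSON.stringify into the same application/ld+json script pattern shown above.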
Log Validation
After release, inspect CDN or hosting logs for:
- crawler user agent;
- requested path;
- status code;
- response size;
- cache status;
- redirects;
- blocked routes.
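As a sketch of what that inspection can look like, here is a small parser for combined-log-format access lines that keeps only AI-crawler requests. The crawler list and the regex are assumptions about a typical setup, not a standard:

```javascript
// Illustrative scan of combined-log-format access logs for AI crawler hits.
// The user-agent substrings and the line regex are assumptions about your logs.
const AI_CRAWLERS = ['GPTBot', 'OAI-SearchBot', 'ClaudeBot', 'PerplexityBot', 'Google-Extended']

// ip ident user [time] "method path proto" status bytes "referrer" "user-agent"
const LINE = /^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+)[^"]*" (\d{3}) (\S+) "[^"]*" "([^"]*)"/

function parseCrawlerHits(logText) {
  const hits = []
  for (const raw of logText.split('\n')) {
    const m = LINE.exec(raw)
    if (!m) continue
    const agent = AI_CRAWLERS.find((a) => m[6].includes(a))
    if (agent) hits.push({ agent, method: m[2], path: m[3], status: Number(m[4]), bytes: m[5] })
  }
  return hits
}

const sample = [
  '9.9.9.9 - - [01/May/2024:12:00:01 +0000] "GET / HTTP/1.1" 200 912 "-" "Mozilla/5.0 (Macintosh) Chrome/120.0"',
  '1.2.3.4 - - [01/May/2024:12:00:02 +0000] "GET /docs/api HTTP/1.1" 200 5123 "-" "Mozilla/5.0; compatible; GPTBot/1.0; +https://openai.com/gptbot"',
].join('\n')

const hits = parseCrawlerHits(sample)
```

Grouping the hits by agent and path quickly shows whether crawlers reach docs and feature pages or stall at the homepage.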
If crawlers only request / and receive a small shell, your Nuxt setup is not ready. If they reach docs, blog, and feature pages with 200 responses and meaningful HTML, the foundation is stronger.
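One rough way to test the "small shell" case is to measure how much visible text the initial HTML response actually carries. The heuristic and its threshold below are illustrative assumptions, not a Nuxt feature:

```javascript
// Rough, illustrative heuristic: strip scripts and tags, then measure the
// visible text in the initial HTML. The 200-character threshold is arbitrary.
function visibleTextLength(html) {
  const noScripts = html.replace(/<script[\s\S]*?<\/script>/gi, ' ')
  return noScripts.replace(/<[^>]+>/g, ' ').replace(/\s+/g, ' ').trim().length
}

function looksLikeEmptyShell(html, minChars = 200) {
  return visibleTextLength(html) < minChars
}

// A client-only shell versus a server-rendered page with real copy.
const shell = '<html><body><div id="__nuxt"></div><script>window.__NUXT__={}</script></body></html>'
const rendered = `<html><body><h1>Reporting</h1><p>${'Real feature copy. '.repeat(20)}</p></body></html>`
```

Running this check against curl output for key routes after each deploy catches regressions where a page silently falls back to client-only rendering.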
Implementation Checklist
- Publish /llms.txt.
- Allow desired AI crawlers in robots.txt.
- Generate and submit XML sitemaps.
- Prerender stable public pages.
- Render schema and FAQ in initial HTML.
- Set canonical URLs.
- Check logs and raw HTML.
- Measure changes in GEO Scout on geoscout.pro.
Nuxt can be highly crawlable. The important part is making public product knowledge explicit, stable, and easy for AI systems to retrieve.
Frequently Asked Questions
How do I add llms.txt in Nuxt?
Is Nuxt SSR enough for AI crawlers?
Should Nuxt docs and blog pages be prerendered?
How can GEO Scout help after implementation?
Related Articles
AI Crawler Readiness Checklist: Is Your Site Ready for GPTBot, OAI-SearchBot, and Others?
A technical checklist for AI crawler readiness covering robots.txt, sitemaps, SSR, status codes, logs, CDN rules, rate limits, structured data, and unblocked content.
GEO for Headless CMS: Technical Checklist for AI-Ready Content Models
How to configure a headless CMS for AI search with structured fields, canonical URLs, sitemaps, schema, SSR or static rendering, and crawler-safe publishing workflows.
Schema for SaaS Feature Pages: Structured Data for AI Answers
A technical checklist for SaaS feature page schema with SoftwareApplication, FAQPage, BreadcrumbList, Organization, Product signals, canonical URLs, and server rendering.