
llms.txt for Nuxt: AI Crawler Readiness for Vue and Nitro Sites

A practical Nuxt implementation guide for llms.txt, robots.txt, sitemap, canonical URLs, structured data, SSR, prerendering, and AI crawler logs.

Tags: llms.txt, Nuxt, Vue, AI crawlers
Vladislav Puchkov
Founder of GEO Scout, GEO optimization expert

Nuxt gives teams several crawler-friendly options: SSR, static generation, prerendering, Nitro routes, and hybrid rendering. The risk is not the framework. The risk is shipping public pages that only become meaningful after client-side JavaScript runs.

For GEO, AI crawlers need a clear path from discovery to usable content. GEO Scout can then measure whether those pages influence answers across ChatGPT, Perplexity, Claude, Gemini, and other AI systems.

Add llms.txt

For static content, create a plain-text file at public/llms.txt. Nuxt serves everything in the public directory from the site root, so it becomes available at /llms.txt.

For dynamic content, create a Nitro server route (for example, server/routes/llms.txt.ts):

// server/routes/llms.txt.ts
export default defineEventHandler((event) => {
  // Serve as plain text so crawlers do not parse it as HTML
  setHeader(event, 'content-type', 'text/plain; charset=utf-8')
 
  return `# Example Nuxt Site
 
> Public product, documentation, and case-study resources.
 
## Core pages
- https://example.com/
- https://example.com/features
- https://example.com/pricing
 
## Docs
- https://example.com/docs
- https://example.com/docs/api
`
})

Verify:

curl -i https://example.com/llms.txt

robots.txt

Use robots rules (a static file at public/robots.txt, or a server route if the rules are dynamic) to separate public knowledge from private application areas:

User-agent: GPTBot
Allow: /
 
User-agent: ClaudeBot
Allow: /
 
User-agent: PerplexityBot
Allow: /
 
User-agent: *
Disallow: /dashboard/
Disallow: /account/
Disallow: /api/
 
Sitemap: https://example.com/sitemap.xml

If a page helps a buyer understand your product, it usually should not be blocked.

Sitemap Coverage

Include:

  • homepage;
  • feature and use-case pages;
  • pricing;
  • docs;
  • blog posts;
  • comparison pages;
  • case studies;
  • security, privacy, and compliance pages.

Do not include internal search, faceted duplicates, logged-in screens, or test URLs.
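If your public URLs come from a CMS or database, the sitemap can be generated at request time. The sketch below separates the XML building into a pure function that a Nitro route (for example, server/routes/sitemap.xml.ts) could return; the function name and route path are assumptions, not Nuxt APIs.

```typescript
// Hypothetical helper: builds a minimal sitemap.xml string from a URL list.
// A Nitro route such as server/routes/sitemap.xml.ts could return this
// with a content-type of application/xml.
function buildSitemap(urls: string[]): string {
  const entries = urls
    .map((url) => `  <url><loc>${url}</loc></url>`)
    .join('\n')
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
    entries,
    '</urlset>',
  ].join('\n')
}

// Example usage with the public pages listed above:
const xml = buildSitemap([
  'https://example.com/',
  'https://example.com/pricing',
  'https://example.com/docs',
])
```

Keeping the builder pure makes it easy to unit-test the URL list against the include/exclude rules above before crawlers ever see it.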

SSR and Prerendering

Recommended architecture:

/                 -> prerender or SSR
/features         -> prerender
/pricing          -> prerender
/blog/[slug]      -> prerender
/docs/[slug]      -> prerender
/customers/[slug] -> prerender or SSR
/app/*            -> client app behind auth

Use Nuxt route rules to make intent explicit:

export default defineNuxtConfig({
  routeRules: {
    '/': { prerender: true },
    '/features/**': { prerender: true },
    '/pricing': { prerender: true },
    '/blog/**': { prerender: true },
    '/docs/**': { prerender: true },
    '/app/**': { ssr: false },
  },
})

Canonical and Metadata

Use useSeoMeta and canonical links consistently:

useSeoMeta({
  title: 'Feature page title',
  description: 'Specific description for buyers and crawlers.',
  ogTitle: 'Feature page title',
})
 
useHead({
  link: [{ rel: 'canonical', href: 'https://example.com/features/reporting' }],
})

Canonical confusion is expensive for AI search because similar URLs can split evidence.
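One way to avoid that split is to derive the canonical URL from the route path with a single normalization rule. The helper below is a sketch under assumptions (its name and the exact normalization policy are not from Nuxt); in a component you would pass useRoute().path into it and feed the result to useHead as in the example above.

```typescript
// Hypothetical helper: one canonical URL per route, produced by stripping
// query strings, fragments, and trailing slashes. Adjust the policy if
// your site intentionally uses trailing slashes.
function canonicalUrl(base: string, path: string): string {
  const clean = path.split(/[?#]/)[0].replace(/\/+$/, '')
  return `${base}${clean || '/'}`
}

// Example: tracking parameters and trailing slashes collapse to one URL.
const a = canonicalUrl('https://example.com', '/features/?utm_source=x')
const b = canonicalUrl('https://example.com', '/features')
// a and b are both 'https://example.com/features'
```

Because every variant of a path normalizes to the same string, crawlers see one canonical per page instead of several near-duplicates.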

Structured Data

Render JSON-LD on the server:

useHead({
  script: [
    {
      type: 'application/ld+json',
      innerHTML: JSON.stringify({
        '@context': 'https://schema.org',
        '@type': 'SoftwareApplication',
        name: 'Example SaaS',
        applicationCategory: 'BusinessApplication',
        url: 'https://example.com',
      }),
    },
  ],
})

Add FAQPage on pages with real FAQs, Article on blog posts, BreadcrumbList on deep pages, and Organization sitewide.
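For FAQPage, the JSON-LD shape is a FAQPage whose mainEntity is a list of Question items with acceptedAnswer objects. A small builder, sketched below with assumed names, keeps that structure in one place; its output would be serialized into useHead exactly like the SoftwareApplication example above.

```typescript
// Hypothetical sketch: builds a schema.org FAQPage object from Q/A pairs.
type Faq = { question: string; answer: string }

function faqPageJsonLd(faqs: Faq[]) {
  return {
    '@context': 'https://schema.org',
    '@type': 'FAQPage',
    mainEntity: faqs.map((f) => ({
      '@type': 'Question',
      name: f.question,
      acceptedAnswer: { '@type': 'Answer', text: f.answer },
    })),
  }
}

// Example usage with a real FAQ from the page:
const jsonLd = faqPageJsonLd([
  {
    question: 'Is Nuxt SSR enough for AI crawlers?',
    answer: 'SSR helps, but you also need crawlable routes and metadata.',
  },
])
```

Only use this on pages that actually display the questions and answers; structured data should mirror visible content.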

Log Validation

After release, inspect CDN or hosting logs for:

  • crawler user agent;
  • requested path;
  • status code;
  • response size;
  • cache status;
  • redirects;
  • blocked routes.

If crawlers only request / and receive a small shell, your Nuxt setup is not ready. If they reach docs, blog, and feature pages with 200 responses and meaningful HTML, the foundation is stronger.
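A quick way to get that picture is to filter the access log for AI crawler user agents and count hits per path. The snippet below is a sketch: the log location and the combined-log field positions are assumptions, so adjust them to your CDN or host's format.

```shell
# Sketch: count AI crawler hits per path in an access log.
# The sample log below stands in for your real CDN/hosting log.
cat > /tmp/access.log <<'EOF'
1.2.3.4 - - [01/Jan/2025] "GET / HTTP/1.1" 200 512 "-" "GPTBot"
1.2.3.4 - - [01/Jan/2025] "GET /docs/api HTTP/1.1" 200 18231 "-" "ClaudeBot"
1.2.3.4 - - [01/Jan/2025] "GET /pricing HTTP/1.1" 200 9120 "-" "PerplexityBot"
EOF

# Filter to known AI crawlers, extract the request path (field 6 in this
# simplified format), and rank paths by hit count.
grep -E 'GPTBot|ClaudeBot|PerplexityBot' /tmp/access.log \
  | awk '{print $6}' \
  | sort | uniq -c | sort -rn
```

If the ranking shows only / and nothing deeper, crawlers are stopping at the shell; docs, blog, and feature paths appearing with 200s is the signal you want.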

Implementation Checklist

  1. Publish /llms.txt.
  2. Allow desired AI crawlers in robots.txt.
  3. Generate and submit XML sitemaps.
  4. Prerender stable public pages.
  5. Render schema and FAQ in initial HTML.
  6. Set canonical URLs.
  7. Check logs and raw HTML.
  8. Measure changes in GEO Scout on geoscout.pro.

Nuxt can be highly crawlable. The important part is making public product knowledge explicit, stable, and easy for AI systems to retrieve.

FAQ

How do I add llms.txt in Nuxt?
For static content, place llms.txt in the public directory. For dynamic content, create a Nitro server route that returns text/plain at /llms.txt.

Is Nuxt SSR enough for AI crawlers?
SSR helps, but you still need crawlable routes, canonical URLs, useful HTML, structured data, robots.txt, and sitemap coverage.

Should Nuxt docs and blog pages be prerendered?
Usually yes. Public docs, blog posts, feature pages, pricing, and case studies are strong candidates for prerendering or ISR-like regeneration through your deployment platform.

How can GEO Scout help after implementation?
GEO Scout on geoscout.pro tracks whether AI systems mention your brand and cite the Nuxt pages you made crawler-ready.