IndexNow for Next.js: Faster Discovery for AI Search and Bing Copilot
How to implement IndexNow in Next.js for published and updated pages, including API routes, keys, sitemaps, canonical URLs, and GEO measurement.
IndexNow is a push mechanism: instead of waiting for a crawler to rediscover a changed page, your site notifies participating search engines that a URL changed. For GEO, this matters because some AI search experiences depend on web indexes and fresh source discovery.
GEO Scout does not require IndexNow, but it helps teams measure whether faster discovery correlates with better AI visibility in geoscout.pro.
What to Submit
Submit canonical public URLs when they are:
- created;
- substantially updated;
- redirected;
- deleted;
- restored after an error;
- important for commercial prompts.
Good candidates:
/features/*
/pricing
/docs/*
/blog/*
/compare/*
/customers/*
/case-studies/*
/security

Do not submit internal search results, preview URLs, private dashboards, or parameter duplicates.
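The candidate list above can be enforced with a small allowlist check before queuing URLs. This is a sketch: the prefixes mirror the sections listed in this article, and the exclusion patterns (`/search`, `/preview`, query strings) are assumptions to adapt to your routes.

```typescript
// Example allowlist of submittable path prefixes (mirrors the candidates above).
const SUBMIT_PREFIXES = [
  '/features/', '/pricing', '/docs/', '/blog/',
  '/compare/', '/customers/', '/case-studies/', '/security',
]

// Returns true only for canonical public paths worth submitting to IndexNow.
export function isSubmittable(pathname: string): boolean {
  // Exclude parameter duplicates, internal search, and preview URLs (assumed patterns).
  if (pathname.includes('?') || pathname.startsWith('/search') || pathname.startsWith('/preview')) {
    return false
  }
  // Match either the exact section root or anything under a listed prefix.
  return SUBMIT_PREFIXES.some(
    (p) => pathname === p.replace(/\/$/, '') || pathname.startsWith(p),
  )
}
```

Wiring this into the submission path keeps accidental dashboard or search URLs out of your IndexNow logs.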
Key File
Create a key file at the root:
```
https://example.com/YOUR_KEY.txt
```

In Next.js, a static file can live in:
```
public/YOUR_KEY.txt
```

Keep the key value available to the server as an environment variable if your submission code needs it:
```
INDEXNOW_KEY=your-key
SITE_URL=https://example.com
```

Submission Route
Create a protected server route that your CMS or deployment process can call:
```typescript
import { NextResponse, type NextRequest } from 'next/server'

export async function POST(request: NextRequest) {
  // Only trusted systems hold this token; reject everything else.
  const token = request.headers.get('authorization')
  if (token !== `Bearer ${process.env.INDEXNOW_SUBMIT_TOKEN}`) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
  }

  // Validate the body before forwarding it to IndexNow.
  const { urls } = (await request.json()) as { urls?: unknown }
  if (!Array.isArray(urls) || urls.length === 0) {
    return NextResponse.json({ error: 'urls must be a non-empty array' }, { status: 400 })
  }

  const host = new URL(process.env.SITE_URL!).host

  const response = await fetch('https://api.indexnow.org/indexnow', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({
      host,
      key: process.env.INDEXNOW_KEY,
      keyLocation: `${process.env.SITE_URL}/${process.env.INDEXNOW_KEY}.txt`,
      urlList: urls,
    }),
  })

  return NextResponse.json({ ok: response.ok, status: response.status })
}
```

Call this route only from trusted systems.
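The payload construction inside the route can also be pulled into a pure helper, which makes it unit-testable without network access. The function name here is an assumption, not part of the IndexNow protocol:

```typescript
// Builds the JSON body the IndexNow endpoint expects, given the site URL,
// the key, and the list of changed URLs. Pure function: no env, no fetch.
export function buildIndexNowPayload(siteUrl: string, key: string, urls: string[]) {
  const host = new URL(siteUrl).host
  return {
    host,
    key,
    keyLocation: `${siteUrl}/${key}.txt`, // must match the public key file location
    urlList: urls,
  }
}
```

The route then only handles auth, validation, and the fetch, while payload shape stays covered by tests.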
CMS Hook
When an editor publishes a page:
- build the canonical URL;
- update the sitemap;
- revalidate the page if using ISR;
- submit the URL to IndexNow;
- log the submission status.
This keeps content discovery aligned with publishing.
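The publish steps above can be sketched as a single hook with injected dependencies. Everything here is an assumption to adapt: the `/blog/` path scheme, the dependency names, and how your CMS actually triggers revalidation and submission.

```typescript
// Hypothetical dependencies supplied by your CMS/deployment integration.
type PublishDeps = {
  revalidate: (path: string) => Promise<void>          // e.g. ISR revalidation
  submitToIndexNow: (url: string) => Promise<boolean>  // e.g. calls the route above
  log: (entry: { url: string; submitted: boolean }) => void
}

// Runs the publish pipeline: canonical URL -> revalidate -> submit -> log.
export async function onPublish(slug: string, siteUrl: string, deps: PublishDeps): Promise<string> {
  const canonicalUrl = `${siteUrl}/blog/${slug}`     // build the canonical URL
  await deps.revalidate(`/blog/${slug}`)             // revalidate the page if using ISR
  const submitted = await deps.submitToIndexNow(canonicalUrl) // submit to IndexNow
  deps.log({ url: canonicalUrl, submitted })         // record the submission status
  return canonicalUrl
}
```

Injecting the dependencies keeps the hook testable and makes the ordering of steps explicit in one place.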
Canonical Guardrails
Before submitting, normalize:
- protocol;
- host;
- trailing slash policy;
- locale prefix;
- lowercase slugs if your site uses them;
- removed query parameters;
- redirected legacy URLs.
Submitting duplicate URL variants can dilute signals and make logs harder to interpret.
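The guardrails above can be centralized in one normalizer. The specific policies below (force HTTPS, strip `www.`, drop query strings, lowercase slugs, no trailing slash) are example choices, not requirements; pick the variants your canonical tags already use.

```typescript
// Normalizes a URL to one canonical form before IndexNow submission.
export function normalizeCanonical(input: string): string {
  const url = new URL(input)
  url.protocol = 'https:'                       // one protocol
  url.host = url.host.replace(/^www\./, '')     // one host (apex, in this sketch)
  url.search = ''                               // remove query parameters
  url.hash = ''
  let path = url.pathname.toLowerCase()         // lowercase slugs
  if (path.length > 1 && path.endsWith('/')) {
    path = path.slice(0, -1)                    // no trailing slash (policy choice)
  }
  url.pathname = path
  return url.toString()
}
```

Running every candidate URL through this before submission prevents the duplicate-variant noise described above.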
Robots and Rendering Still Matter
IndexNow is not a bypass. If a submitted URL is blocked by robots.txt or returns thin content, submission will not help.
Check:
```shell
curl -A "Bingbot/2.0" -I https://example.com/features/reporting
curl -A "Bingbot/2.0" -s https://example.com/features/reporting | head -80
```

The page should return 200, canonical metadata, body content, and JSON-LD in the initial response.
Measurement
Use logs to confirm discovery:
- Did Bingbot request the submitted URL?
- How soon after submission?
- Was the response 200?
- Did it hit the canonical page?
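The log questions above reduce to one lookup over parsed access-log entries. The entry shape and field names here are assumptions; map them from whatever your log pipeline emits.

```typescript
// Assumed shape of a parsed access-log entry; ts is a unix timestamp in seconds.
type LogEntry = { ts: number; userAgent: string; path: string; status: number }

// Returns seconds between submission and the first Bingbot 200 on the path,
// or null if Bingbot never fetched it successfully after submission.
export function firstBingbotHit(entries: LogEntry[], path: string, submittedAt: number): number | null {
  const hit = entries.find(
    (e) =>
      e.path === path &&
      e.status === 200 &&
      e.ts >= submittedAt &&
      /bingbot/i.test(e.userAgent),
  )
  return hit ? hit.ts - submittedAt : null
}
```

Tracking this delta per submission gives you the "how soon after submission" answer as a number you can trend.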
Use GEO Scout to confirm visibility:
- Did Bing Copilot or other AI surfaces cite the page?
- Did brand mentions improve for related prompts?
- Did competitor pages lose source share?
IndexNow is a speed layer. GEO still depends on useful content, structured evidence, and clean crawler access.
Frequently Asked Questions
Does IndexNow directly update ChatGPT or Claude?
Should IndexNow replace sitemaps?
Where should the IndexNow key live in Next.js?
How can GEO Scout measure IndexNow value?
Related Articles
AI Crawler Logs in Vercel: How to Debug GEO Access
How to use Vercel logs and headers to validate GPTBot, ClaudeBot, PerplexityBot, Googlebot, Bingbot, robots.txt, redirects, rendering, and AI crawler readiness.
AI Crawler Readiness Checklist: Is Your Site Ready for GPTBot, OAI-SearchBot, and Others?
A technical checklist for AI crawler readiness covering robots.txt, sitemaps, SSR, status codes, logs, CDN rules, rate limits, structured data, and unblocked content.
llms.txt for Next.js: Implementation Checklist for AI Crawler Readiness
How to add llms.txt, robots.txt, sitemap, canonical tags, structured data, and server-rendered content to a Next.js site for AI crawlers.