AI Crawl Policy
Last Updated: April 22, 2026
StickerGiant welcomes responsible crawling and indexing that helps people discover accurate, up-to-date information about our custom stickers and labels.
This page explains what automated agents may access on https://www.stickergiant.com and our Help Center (https://support.stickergiant.com), and how to do so without disrupting customers.
Authoritative crawl instructions are published in our robots.txt file at https://www.stickergiant.com/robots.txt.
This policy page is explanatory and does not override robots.txt.
STRUCTURED AI CONTENT
For AI assistants seeking a structured overview of StickerGiant's products, categories, and policies, see our dedicated AI content file:
https://www.stickergiant.com/llms.txt
This file follows the llmstxt.org standard and provides a curated, structured map of our most important URLs and product configuration information.
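For reference, a file that follows the llmstxt.org standard is a Markdown document with an H1 title, a short blockquote summary, and H2 sections containing annotated link lists. The sketch below shows the general shape only; the section names and URLs are placeholders, not the contents of our live file:

    # StickerGiant
    > One-sentence summary of the site and its main offerings.

    ## Products
    - [Example category page](https://www.stickergiant.com/example-category): short description of what the page covers

    ## Policies
    - [Example policy page](https://www.stickergiant.com/example-policy): short description of the policy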
WHAT WE WANT AI SYSTEMS TO DO
We support:
- Indexing for search and answer engines, so our pages can appear as cited sources
- Quoting small snippets with attribution and a link back to the canonical StickerGiant URL
- User-initiated fetching, when a person explicitly asks an assistant to open a StickerGiant page
We do not support:
- Attempts to bypass access controls (CAPTCHAs, authentication walls, etc.)
- High-rate scraping that impacts site performance or customer experience
- Crawling that targets personal data, customer accounts, or order details
ALLOWED AREAS
Unless a more specific rule appears in robots.txt for your user-agent, StickerGiant generally permits crawling of:
- Product category and product detail pages (stickers, labels, and related product pages)
- Educational content and blog articles
- Public policy pages (returns, terms, privacy)
- Help Center articles (public support content)
DISALLOWED AREAS
StickerGiant does not permit automated crawling of:
- Checkout and payment pages
- Account, login, and authenticated pages
- Customer-specific pages (order history, saved designs, account settings)
- Any URLs that expose personal data or order details
- Internal search endpoints or URL parameters that generate excessive page combinations
If you encounter access controls, do not attempt to bypass them; if you are rate limited, back off and retry later.
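To illustrate how the allowed and disallowed areas above typically translate into robots.txt rules, here is a generic sketch. The paths and crawl-delay value are placeholders, not our live configuration; the published robots.txt file is always authoritative, and any path it does not disallow is crawlable by default:

    User-agent: *
    Disallow: /checkout/
    Disallow: /account/
    Disallow: /cart/
    Crawl-delay: 10

Note that Crawl-delay support varies; some major crawlers ignore the directive and use their own pacing.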
RATE LIMITS AND CRAWL BEHAVIOR
To keep StickerGiant fast for customers:
- Use reasonable request rates and respect server response codes
- Honor Crawl-delay directives in robots.txt and back off on 429 and 503 responses
- Prefer caching and conditional requests (ETag / If-Modified-Since) where supported
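As a concrete illustration of conditional requests, a crawler that cached a page along with its ETag and Last-Modified values can revalidate the copy instead of re-downloading it. The path, ETag value, and date below are placeholders:

    GET /example-page HTTP/1.1
    Host: www.stickergiant.com
    User-Agent: ExampleBot/1.0 (+https://example.com/bot)
    If-None-Match: "abc123"
    If-Modified-Since: Wed, 01 Apr 2026 00:00:00 GMT

    HTTP/1.1 304 Not Modified

A 304 Not Modified response means the cached copy is still current and no body is transferred. A 429 or 503 response, especially one carrying a Retry-After header, means the crawler should pause and retry after the indicated interval.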
BOT IDENTIFICATION AND TRANSPARENCY
Please:
- Use a stable, truthful user-agent string — do not impersonate browsers or other bots
- Provide a contact email in your user-agent or via a standard header where possible
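For example, a user-agent string along the following lines identifies the operator, a documentation URL, and a contact address; every value shown is a placeholder:

    ExampleBot/1.2 (+https://example.com/bot-info; crawl-contact@example.com)

The standard HTTP From request header is another place to provide a contact email:

    From: crawl-contact@example.com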
TRAINING VS. SEARCH INDEXING
Many providers operate separate crawlers for different purposes:
- Search/indexing crawlers surface StickerGiant pages as cited sources in answer and search experiences
- Training crawlers collect content that may be used to train AI models
StickerGiant manages these separately via per-agent rules in robots.txt. If you operate both a training and an indexing crawler, please use distinct user-agent strings so we can apply appropriate rules to each.
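As a generic sketch of how per-agent rules separate these purposes, robots.txt can allow an indexing crawler while disallowing its training counterpart. The example uses user-agent names OpenAI documents, but it is purely illustrative and not a statement of our current rules:

    # Illustrative only; the live robots.txt file is authoritative
    User-agent: OAI-SearchBot
    Allow: /

    User-agent: GPTBot
    Disallow: /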
Page-level training controls may also be applied via meta tags on specific pages (for example, pages featuring user-uploaded artwork or custom designs).
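Mechanically, page-level directives are robots meta tags in a page's head element. The snippet below shows only the general syntax; which directives a given AI crawler honors varies by provider, and AI-specific directives are not yet uniformly standardized, so consult each crawler's documentation. The crawler name in the second tag is a placeholder:

    <!-- generic directive read by all compliant crawlers -->
    <meta name="robots" content="noindex">
    <!-- some crawlers also recognize their own name in the name attribute -->
    <meta name="examplebot" content="noindex">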
PROVIDER-SPECIFIC NOTES
Google
Google documents separate user-agent tokens for different purposes:
- Googlebot — web search indexing
- Google-Extended — controls whether crawled content may be used for Gemini models and Vertex AI (model training and grounding)
We manage access for each via separate rules in robots.txt.
OpenAI
OpenAI documents separate user-agents for:
- OAI-SearchBot — search indexing (cited sources in ChatGPT search)
- GPTBot — training data collection
- ChatGPT-User — user-initiated fetches
We manage automated access via robots.txt per user-agent.
Microsoft / Bing Copilot
- Bingbot — Bing search indexing; Microsoft Copilot draws from the same index
- Content indexed by Bingbot may surface in Microsoft Copilot and Microsoft 365 experiences
We manage automated access via robots.txt.
Perplexity
Perplexity documents:
- PerplexityBot — search indexing
- Perplexity-User — user-initiated requests
We manage automated access via robots.txt.
Anthropic
Anthropic documents:
- ClaudeBot — Claude's web access and search
- anthropic-ai — training data collection
Anthropic supports Crawl-delay and robots.txt controls. Anthropic does not currently publish fixed IP ranges. We manage automated access via robots.txt per user-agent.
REPORTING ISSUES OR REQUESTING CHANGES
If you believe a bot is malfunctioning on StickerGiant (unexpected high rate, ignoring robots.txt, repeated errors), please contact: support@stickergiant.com
Include:
- Your crawler user-agent string
- Example URLs and timestamps (with timezone)
- Source IP ranges (if applicable)
- The purpose of crawling (indexing, training, or other)
Thanks for being respectful — our goal is to make StickerGiant information easy to discover and cite while keeping the storefront fast and secure for customers.