
What the AI Visibility Score on FlinnSchema Actually Means

AI Visibility · Scoring · How It Works

It's Not a Vanity Metric

When you run a free audit on FlinnSchema, you get a percentage score. But unlike most "website scores" that give you a feel-good number, this one is built from 26 individual factors, each weighted by how much it actually influences whether AI search engines like ChatGPT, Perplexity, Gemini, and Grok will recommend your business.

This article breaks down exactly what goes into that number, why certain factors matter more than others, and what the 90% cap is about.

26 Factors Across 7 Categories

Every audit analyses your website across 26 distinct factors, grouped into seven categories:

Structured Data

This is the most heavily weighted category. It covers:

  • Schema Markup (JSON-LD) — Whether your site has structured data at all, and how well it's implemented. This single factor carries the highest impact multiplier in the entire audit because it's the primary language AI engines use to understand your business.
  • Schema Completeness — Having schema is one thing. Having complete, detailed schema with all recommended properties filled in is another.
  • Schema Type Coverage — Does your site have Organisation, Product, FAQ, Review, and other relevant schema types? Each type helps AI engines understand a different aspect of your business.
  • FAQ Structured Data — FAQ schema gets its own factor because it's uniquely powerful. When someone asks an AI engine a question your FAQ answers, it can pull directly from your structured data.
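As a sketch of what that structured-data category is checking for, here is a minimal FAQ schema block of the kind an audit would look for. The question and answer text are placeholders, not FlinnSchema's actual content:

```
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Do you offer free audits?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Yes. Every audit starts with a free scan across 26 factors."
    }
  }]
}
</script>
```

Because the question and answer are machine-readable, an AI engine answering a matching query can quote them directly rather than guessing from surrounding prose.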

Trust and Authority

  • Reviews and Trust Signals — We pull live data from Trustpilot, Google Reviews, Feefo, and Reviews.io. AI engines weight third-party trust signals heavily when deciding whether to recommend a business.
  • E-E-A-T Signals — Experience, Expertise, Authoritativeness, and Trustworthiness. This measures whether your content demonstrates genuine expertise and whether your site establishes your credentials.

AI Readiness

  • LLM Content Readability — How easily can a large language model parse, understand, and summarise your content? This looks at sentence structure, information density, heading hierarchy, and whether key facts are stated clearly.
  • AI Crawler Access — Whether GPTBot, ClaudeBot, PerplexityBot, GoogleOther, and Bytespider are allowed in your robots.txt. If these bots are blocked, AI engines literally cannot index your site.
  • Conversational Content — Does your content contain natural, quotable language? AI engines prefer content they can cite directly in conversational responses.
  • LLMs.txt File — A relatively new standard. A structured file at /llms.txt that gives AI systems a concise overview of your business, services, and key facts.
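To illustrate the AI Crawler Access factor above, a robots.txt that explicitly admits the five bots the audit checks might look like this (a sketch, not a recommended universal configuration — you may have reasons to exclude some crawlers):

```
# Explicitly allow the AI crawlers the audit checks for
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: GoogleOther
Allow: /

User-agent: Bytespider
Allow: /
```

A blanket `Disallow: /` for any of these user agents means that engine simply never sees your content, no matter how good your schema is.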

Community and Social

  • Reddit Presence — AI engines, particularly Perplexity and ChatGPT, pull heavily from Reddit discussions. Being mentioned on Reddit increases your chances of citation.
  • Social Profiles and sameAs — Whether your schema includes links to your LinkedIn, Twitter, Facebook, and other social profiles, which helps AI engines confirm your identity.
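The sameAs check above looks for a property like this inside your Organisation schema. The profile URLs here are hypothetical placeholders:

```
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Ltd",
  "url": "https://example.com",
  "sameAs": [
    "https://www.linkedin.com/company/example-ltd",
    "https://twitter.com/exampleltd",
    "https://www.facebook.com/exampleltd"
  ]
}
```

Each sameAs link gives AI engines an independent source to cross-reference when confirming your business is who it claims to be.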

Content Quality

  • Content Depth and Quality — Whether your pages have enough substantive information for AI to work with. Thin pages with little content rarely get cited.
  • Content Freshness — AI engines favour recently updated content. Stale pages signal an inactive business.
  • Heading Structure — Proper H1 through H6 hierarchy makes content scannable for both humans and AI.

Technical Foundation

  • Meta Tags and SEO — Title tags, meta descriptions, and other standard SEO elements.
  • Open Graph and Social Meta — How your pages appear when shared on social media.
  • Robots.txt — Whether your robots.txt is properly configured for all crawlers.
  • Sitemap.xml — Whether AI and traditional crawlers can discover all your content.
  • Internal Linking — How well your pages connect to each other.
  • Image SEO — Alt text, descriptive filenames, and image optimisation.
  • Semantic HTML — Proper use of HTML5 elements like article, nav, main, and section.
  • HTTPS Security — SSL certificate and secure connections.
  • Mobile Viewport — Responsive design configuration.
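As a sketch of what the Semantic HTML factor rewards, a page body built from HTML5 landmark elements rather than anonymous divs looks like this:

```
<body>
  <nav><!-- site navigation --></nav>
  <main>
    <article>
      <h1>Page title</h1>
      <section>
        <h2>Subtopic</h2>
        <p>Content AI engines can locate by landmark.</p>
      </section>
    </article>
  </main>
  <footer><!-- contact details --></footer>
</body>
```

The landmarks tell a crawler which part of the page is the actual content, which is navigation, and which is boilerplate, so the model spends its context on the right text.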

Performance

  • Page Performance and Size — Load times and page weight. Bloated pages slow down AI crawler processing.
  • HTML Quality and Accessibility — Clean, valid HTML that's accessible to all users and systems.

Not All Factors Are Equal — Impact Multipliers

This is the part most scoring tools get wrong. They treat every factor equally. FlinnSchema doesn't.

Each of the 26 factors has an impact multiplier that reflects how much it actually affects AI visibility. The multipliers range from 0.2x to 2.2x:

  • High impact (1.5x to 2.2x) — Schema markup, E-E-A-T, schema completeness, schema types, LLM readability, reviews, AI crawler access, FAQ schema, conversational content, and LLMs.txt. These are the factors that directly determine whether AI engines can understand and trust your business.
  • Standard impact (0.8x to 1.4x) — Reddit presence, content freshness, social profiles, content depth, robots.txt, and sitemap. Important but not the primary drivers.
  • Low impact (0.2x to 0.7x) — Internal linking, image SEO, semantic HTML, meta tags, Open Graph, heading structure, page performance, HTML quality, HTTPS, and mobile viewport. These are table stakes. Most websites pass them already, and they have a smaller effect on AI recommendations.

The final score is calculated by applying these multipliers to each factor's raw score, then converting to a percentage. This means a small improvement in a high-impact factor moves your score far more than perfecting a low-impact one.
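A minimal sketch of that calculation in Python. The factor names, raw scores, and multipliers below are illustrative assumptions, not FlinnSchema's actual values — the point is how a weighted average with a cap behaves:

```python
# Hypothetical factors: name -> (raw score 0..1, impact multiplier).
# Values are examples only, not FlinnSchema's real weights.
FACTORS = {
    "schema_markup":     (0.9, 2.2),  # high impact
    "llm_readability":   (0.6, 1.6),  # high impact
    "content_freshness": (0.8, 1.0),  # standard impact
    "image_seo":         (1.0, 0.3),  # low impact
}

def visibility_score(factors, cap=90.0):
    """Weighted average of raw scores, scaled to a percentage and capped."""
    total_weight = sum(mult for _, mult in factors.values())
    weighted = sum(raw * mult for raw, mult in factors.values())
    return min(cap, round(100 * weighted / total_weight, 1))

print(visibility_score(FACTORS))
```

Note the asymmetry the multipliers create: lifting the 2.2x schema factor by a tenth of a point moves the score more than seven times as far as the same lift on the 0.3x image factor would.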

Why the Score Is Capped at 90%

You'll notice that the maximum achievable score is 90%, not 100%. This is deliberate.

Some factors — particularly reviews, Reddit presence, and E-E-A-T — require real-world effort over time. You can't game a Trustpilot rating overnight. You can't manufacture genuine Reddit discussions about your business. And you can't fake expertise when AI engines can cross-check your claims against multiple independent sources.

The 90% cap keeps scores honest. It means a score of 75% is genuinely strong, and there's always room to grow through continued investment in your business's online reputation and content quality. If you see a tool giving you a perfect 100%, it's probably not measuring the things that actually matter.

What Your Score Tells You

Here's a rough guide to what different score ranges mean:

  • Below 30% — AI search engines likely can't understand your business at all. You're probably missing schema markup entirely, and your content may not be structured for AI comprehension.
  • 30% to 50% — Some basics are in place, but major gaps exist. You likely have some schema but it's incomplete, or you're missing key trust signals.
  • 50% to 65% — A solid foundation. Most technical factors are covered, but the high-impact areas like schema completeness, reviews, and AI-specific optimisations need work.
  • 65% to 80% — Strong. Your site is well-structured for AI engines and you have good trust signals. Improvements at this stage are about refinement and consistency.
  • Above 80% — Excellent. You're in the top tier for AI visibility. Maintaining this score requires ongoing content freshness, review management, and staying current with AI search developments.

Free vs Premium — What You See

The free audit gives you your overall score and shows which of the 26 factors are passing, warning, or failing. That's enough to know where you stand.

Premium unlocks the full picture: detailed findings for every factor, specific recommendations tailored to your site, competitor comparison, LLM prompt testing across four AI engines, and a prioritised roadmap that tells you exactly what to fix first for maximum impact.

Run your free audit to see your score, or explore Premium for the full breakdown.

Want to check your AI visibility?

Run a free audit on your website and see how visible you are to ChatGPT, Perplexity, and other AI search engines.

Run Free Audit