Scannica

The catalog. Read the rules.

Every check Scannica runs, grouped by category and keyed by stable rule codes that you'll see in the app, in exports, and in any ticket pushed to Jira / GitHub / Monday / ClickUp. Click any code with a description to read what the rule detects, why it matters, and how to fix it.

Total
81
Errors
19
Warnings
35
Notices
27
01 · rule catalog

Every rule, every code.

81 checks across 7 categories. Click any code to read the description, the remediation, and the references. The URL updates with the rule code, so any selection is a shareable permalink.

74 / 81 descriptions published — the rest are en route.

SEO

22 rules
6 err 9 warn 7 notice

Performance

9 rules
2 err 6 warn 1 notice

Accessibility

9 rules
3 err 5 warn 1 notice

Best Practices

11 rules
4 warn 7 notice

Security

15 rules
7 err 6 warn 2 notice

GDPR

6 rules
1 err 4 warn 1 notice

GEO Readiness

9 rules
1 warn 8 notice
02 · deep dives

Four checks, close up.

A sampler of what actually sets Scannica apart once a crawl lands. Core Web Vitals from the rendered DOM. Redirect topology that walks every hop. Security findings with CWE references attached. Sitemap diff against the live crawl.

Rendered DOM · Core Web Vitals

Measure the page users actually see.

A headless browser collects real LCP, CLS, and INP per URL — then Scannica diffs the rendered DOM against the static HTML. Pages that only exist after hydration? Caught.

  • SPA and hydrated sites surface rendered-only links and content.
  • Render-blocking scripts flagged with byte counts, not vibes.
  • DOM size and depth checks catch builder-exported monsters.
/pricing · audited quality 74 / 100
LCP
1.8s
CLS
0.04
INP
312ms
TTFB
520ms
Rendered ↔ Static diff 18 nodes only in rendered
+ <article data-ssr="false">
+   <h1>Plans built for scale</h1>
+   <a href="/plans/enterprise">...</a>
  <section id="faq">
    <h2>Frequently asked</h2>
  </section>
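The rendered-vs-static comparison above can be approximated with nothing but the standard library. A minimal sketch, assuming we only care about anchor hrefs — Scannica diffs whole DOM nodes, and `LinkCollector`, `rendered_only_links`, and the sample markup are all illustrative, not the product's API:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags using the stdlib parser."""
    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.add(href)

def rendered_only_links(static_html, rendered_html):
    """Links present after hydration but absent from the static response."""
    found = []
    for html in (static_html, rendered_html):
        collector = LinkCollector()
        collector.feed(html)
        found.append(collector.links)
    return found[1] - found[0]

# Toy pages mirroring the diff above: the enterprise link only
# exists in the rendered DOM.
static = '<section id="faq"><h2>Frequently asked</h2></section>'
rendered = static + '<article><a href="/plans/enterprise">Enterprise</a></article>'
```

Calling `rendered_only_links(static, rendered)` here yields the single hydrated-only link, which is exactly the class of content a static-HTML crawler would never see.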
Redirect topology

Every hop, named.

Scannica walks every redirect to its terminus and surfaces loops before they surface in Search Console. Chains longer than two, mixed 301/302, HTTP→HTTPS hand-offs — all flagged with the URL that started the mess.

REDIRECT_CHAIN REDIRECT_LOOP HTTP_TO_HTTPS_REDIRECT
  1. 301 http://example.com/products
    → https
  2. 301 https://example.com/products
    → www
  3. 301 https://www.example.com/products
    → /store
  4. 302 https://www.example.com/store
    → locale prefix
  5. 200 https://www.example.com/en/store
    final
4 hops · flagged as REDIRECT_CHAIN
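The hop walk above reduces to a small loop. A sketch under the assumption that a `resolve` callable returns `(status, location)` for a URL — the real crawler issues HTTP requests and reads `Location` headers; the flag names come from the chip row above, and `walk_redirects` is illustrative:

```python
def walk_redirects(start, resolve, max_hops=10):
    """Follow redirects to their terminus; flag chains and loops."""
    seen, hops = {start}, []
    url = start
    while len(hops) < max_hops:
        status, location = resolve(url)
        if status not in (301, 302, 307, 308) or location is None:
            break  # terminus reached
        hops.append((status, url, location))
        if location in seen:
            return {"hops": hops, "flag": "REDIRECT_LOOP"}
        seen.add(location)
        url = location
    # chains longer than two hops get flagged, matching the rule above
    flag = "REDIRECT_CHAIN" if len(hops) > 2 else None
    return {"hops": hops, "flag": flag}

# Toy topology mirroring the example chain above.
chain = {
    "http://example.com/products":      (301, "https://example.com/products"),
    "https://example.com/products":     (301, "https://www.example.com/products"),
    "https://www.example.com/products": (301, "https://www.example.com/store"),
    "https://www.example.com/store":    (302, "https://www.example.com/en/store"),
    "https://www.example.com/en/store": (200, None),
}
result = walk_redirects("http://example.com/products", lambda u: chain[u])
# 4 hops, flagged REDIRECT_CHAIN
```

The `seen` set is what turns an infinite loop into a `REDIRECT_LOOP` finding instead of a hung crawl.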
Security · CWE-tagged

Findings your security team will actually read.

HTTPS enforcement, CSP directives, HSTS, cookie flags, mixed content — each flag carries the rule code AND a CWE reference so your engineers can triage without a translator.

CWE refs
linked on every security rule
Scope
per-URL
not just top-level headers
Response · https://example.com/checkout 4 issues
Strict-Transport-Security
max-age=15552000; includeSubDomains
Content-Security-Policy
SEC_CSP_UNSAFE_EVAL
script-src 'self' 'unsafe-eval' *.vendor.com
X-Frame-Options
SEC_PERMISSIONS_POLICY_MISSING
missing
Set-Cookie: session
SEC_COOKIE_NO_HTTPONLY
HttpOnly ✗ · SameSite ✗
Mixed content
SEC_MIXED_CONTENT_PASSIVE
3 http:// resources on https page
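The shape of the card above can be sketched with a few checks over response headers. A hedged illustration: `SEC_CSP_UNSAFE_EVAL` and `SEC_COOKIE_NO_HTTPONLY` are codes from the card, `SEC_HSTS_MISSING` is a hypothetical stand-in, and the real rules are far more thorough than substring tests:

```python
def audit_findings(headers, cookies):
    """Flag common per-response header weaknesses.

    `headers` maps header name -> value; `cookies` is a list of
    Set-Cookie values. A sketch, not Scannica's actual rule logic.
    """
    findings = []
    csp = headers.get("Content-Security-Policy", "")
    if "'unsafe-eval'" in csp:
        findings.append("SEC_CSP_UNSAFE_EVAL")
    hsts = headers.get("Strict-Transport-Security", "")
    if "max-age" not in hsts:
        findings.append("SEC_HSTS_MISSING")  # hypothetical code
    for cookie in cookies:
        if "httponly" not in cookie.lower():
            findings.append("SEC_COOKIE_NO_HTTPONLY")
    return findings

# The /checkout response from the card: HSTS is fine, but the CSP
# allows eval and the session cookie lacks HttpOnly.
checkout = {
    "Strict-Transport-Security": "max-age=15552000; includeSubDomains",
    "Content-Security-Policy": "script-src 'self' 'unsafe-eval' *.vendor.com",
}
issues = audit_findings(checkout, ["session=abc123; Secure"])
```

Each returned code would then carry its CWE reference (e.g. CSP eval issues map to code-injection CWEs) so the finding lands in a tracker already triaged.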
Sitemap diff

Orphans, found.

Feed in one or many sitemap.xml files. Scannica diffs them against the live crawl: pages listed but unreachable, pages crawled but absent from the sitemap, stale entries pointing at 404s. Migration QA without the spreadsheet.

sitemap.xml (127)
  • /
  • /about
  • /blog/launch
  • /blog/roadmap
  • /legal/old-tos
  • + 122 more
crawl (214)
  • /
  • /about
  • /blog/launch
  • /blog/v2-notes
  • /internal/staging
  • + 209 more
4 missing
12 orphan
123 matched
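At its core, the diff is three set operations. A sketch over the sample paths shown above; it ignores HTTP status codes, so the stale-entries-pointing-at-404s check isn't modelled here:

```python
def sitemap_diff(sitemap_urls, crawled_urls):
    """Diff sitemap entries against the live crawl (as sets of paths)."""
    sitemap, crawl = set(sitemap_urls), set(crawled_urls)
    return {
        "missing": sitemap - crawl,   # listed but never reached in the crawl
        "orphan":  crawl - sitemap,   # crawled but absent from the sitemap
        "matched": sitemap & crawl,
    }

# The visible heads of the two lists above.
report = sitemap_diff(
    ["/", "/about", "/blog/launch", "/blog/roadmap", "/legal/old-tos"],
    ["/", "/about", "/blog/launch", "/blog/v2-notes", "/internal/staging"],
)
```

On the sample, `/blog/roadmap` and `/legal/old-tos` come back missing, while `/blog/v2-notes` and `/internal/staging` surface as orphans — the migration-QA signal the spreadsheet used to provide.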
03 · FAQ

Short questions. Honest answers.

Pre-1.0 means some things are still moving. Where they are, the answer says so explicitly.

  • Does Scannica send my crawl data anywhere?

    No. The crawl engine runs inside the desktop app. The only outbound network traffic is the HTTP requests your crawl makes to the site you're auditing. There is no Scannica analytics backend, no telemetry, no cloud sync — audit a staging host from a VPN-isolated laptop and nothing leaks.
  • Which platforms are supported?

    macOS (Apple Silicon and Intel), Windows 10/11, and Linux. Builds are produced via Tauri 2 from a single Rust + SvelteKit codebase. Specific package formats are still being decided as part of the pre-1.0 release work.
  • How big a site can it crawl?

    The Free tier caps a single crawl at 500 URLs and 10 page audits — enough to evaluate the loop end-to-end. Pro raises the cap to 100K URLs and 1K audits. Enterprise removes the cap entirely. The session database runs SQLite in WAL mode and the URL frontier uses xxh3 for dedup, so the caps are licensing limits, not engine limits; the practical ceiling is your disk.
  • Can I pause a crawl and resume later?

    Yes. Pause and resume are first-class Tauri commands — session state and the crawl frontier persist to the project file, so closing the laptop mid-crawl and resuming on the train is the supported flow.
  • Does it render JavaScript?

    Yes — Scannica can route each URL through a headless browser, collect Core Web Vitals, and diff the rendered DOM against the static response to surface content that only appears after hydration. JS rendering is a Pro tier feature.
  • What's GEO Readiness, exactly?

    Generative Engine Optimization. The category groups checks that matter when retrieval-augmented surfaces — Google AI Overviews, ChatGPT, Perplexity, Copilot — read your page: entity density, author entity markup, FAQ + Article schema, llms.txt, citation signals, Speakable schema. No incumbent SEO auditor ships these. GEO Readiness checks are Pro+.
  • What format are the projects in?

    A .scannica bundle is a ZIP containing the session SQLite database and the crawl configuration. Portable, archivable, diff-able, and small enough to email. Save and load require Pro.
  • What can I export?

    From any audit: per-URL CSV, per-issue CSV, JSON of either, an HTML report, and the rebuilt sitemap as XML. PDF and Markdown exports are not on the current roadmap. Exports are Pro+.
  • Where do findings end up — Jira, GitHub, etc.?

    Pro+ ships integrations with Jira, GitHub Issues, Monday, ClickUp (one-click create from a finding), and Google Search Console (pair crawl results with GSC data). No Slack, no email-out yet.
  • Is it open source?

    The crawl engine and rule set are proprietary. The licensing layer verifies a JWT issued out of band; a purchase flow doesn't exist yet because Scannica is pre-1.0 and the pricing model isn't finalised.
  • Is there a CLI?

    Not today. Scannica ships only as a Tauri desktop app. A headless / CI runner isn't on the near-term roadmap; if it's important to your team, mention it when you request access.
  • Are audits scheduled? Can it run on a cron?

    No scheduler in v0.1. The expected loop is human-driven — open the app, kick off a crawl, compare the report to last week's via the .scannica diff. Scheduling may follow once the licensing layer is solid.
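As a footnote to the project-format answer above: because a .scannica bundle is a plain ZIP wrapping a SQLite database, it can be inspected with the standard library alone. A sketch under loud assumptions — `session.db` is a guessed member name, since the FAQ doesn't document the internal layout:

```python
import pathlib
import sqlite3
import tempfile
import zipfile

def open_bundle(path):
    """Extract a .scannica bundle (a plain ZIP) and open its session DB.

    Assumes the database is stored as 'session.db' inside the archive;
    the real member names aren't documented on this page.
    """
    workdir = pathlib.Path(tempfile.mkdtemp())
    with zipfile.ZipFile(path) as zf:
        zf.extractall(workdir)
    return sqlite3.connect(workdir / "session.db")
```

This portability is the point of the format: the same bundle that emails to a colleague also diffs week-over-week in the app.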
still exploring

Read enough? Get on the invite list.