The catalog. Read the rules.
Every check Scannica runs, grouped by category and keyed by stable rule codes that you'll see in the app, in exports, and in any ticket pushed to Jira / GitHub / Monday / ClickUp. Click any code with a description to read what the rule detects, why it matters, and how to fix it.
- Total: 81
- Errors: 19
- Warnings: 35
- Notices: 27
Every rule, every code.
81 checks across 7 categories. Click any code to read the description, the remediation, and the references. The URL updates with the rule code, so any selection is a shareable permalink.
74 / 81 descriptions published — the rest are en route.
- SEO: 22 rules
- Performance: 9 rules
- Accessibility: 9 rules
- Best Practices: 11 rules
- Security: 15 rules
- GDPR: 6 rules
- GEO Readiness: 9 rules

Four checks, close up.
A sampler of what actually sets Scannica apart once a crawl lands. Core Web Vitals from the rendered DOM. Redirect topology that walks every hop. Security findings with CWE references attached. Sitemap diff against the live crawl.
Measure the page users actually see.
A headless browser collects real LCP, CLS, and INP per URL — then Scannica diffs the rendered DOM against the static HTML. Pages that only exist after hydration? Caught.
- → SPA and hydrated sites surface rendered-only links and content.
- → Render-blocking scripts flagged with byte counts, not vibes.
- → DOM size and depth checks catch builder-exported monsters.
+ <article data-ssr="false">
+   <h1>Plans built for scale</h1>
+   <a href="/plans/enterprise">...</a>
  <section id="faq">
    <h2>Frequently asked</h2>
  </section>
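Conceptually, the rendered-vs-static comparison is a set difference over two parses of the same URL. A minimal sketch in Rust, assuming the scraper crate for parsing; the function names are illustrative, not Scannica's internals:

```rust
use scraper::{Html, Selector};
use std::collections::HashSet;

fn extract_hrefs(html: &str) -> HashSet<String> {
    let doc = Html::parse_document(html);
    let links = Selector::parse("a[href]").unwrap();
    doc.select(&links)
        .filter_map(|a| a.value().attr("href").map(str::to_owned))
        .collect()
}

/// Links that exist only after hydration: present in the rendered DOM,
/// absent from the static HTML response.
fn rendered_only_links(static_html: &str, rendered_html: &str) -> Vec<String> {
    let static_links = extract_hrefs(static_html);
    extract_hrefs(rendered_html)
        .into_iter()
        .filter(|href| !static_links.contains(href))
        .collect()
}
```

Run the same difference the other way and you catch content the server sends but hydration removes.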
Every hop, named.
Scannica walks every redirect to its terminus and surfaces loops before they surface in Search Console. Chains longer than two, mixed 301/302, HTTP→HTTPS hand-offs — all flagged with the URL that started the mess.
REDIRECT_CHAIN · REDIRECT_LOOP · HTTP_TO_HTTPS_REDIRECT
- 301 http://example.com/products → https
- 301 https://example.com/products → www
- 301 https://www.example.com/products → /store
- 302 https://www.example.com/store → locale prefix
- 200 https://www.example.com/en/store (final)
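Hop-walking amounts to disabling the HTTP client's auto-follow and recording each response yourself. A sketch under that assumption, using reqwest; relative Location resolution and rule emission are elided, and this is not Scannica's engine code:

```rust
use reqwest::{header::LOCATION, redirect::Policy, Client};
use std::collections::HashSet;

/// Follow redirects manually so every hop gets recorded.
/// Returns (status, url) per hop; stops at a loop or a non-3xx terminus.
async fn walk_redirects(start: &str) -> reqwest::Result<Vec<(u16, String)>> {
    let client = Client::builder().redirect(Policy::none()).build()?;
    let mut hops = Vec::new();
    let mut seen = HashSet::new();
    let mut url = start.to_string();

    while seen.insert(url.clone()) {
        let resp = client.get(&url).send().await?;
        let status = resp.status().as_u16();
        hops.push((status, url.clone()));
        match resp.headers().get(LOCATION) {
            // A real crawler must resolve relative Location values
            // against the current URL before following them.
            Some(loc) if (300..400).contains(&status) => {
                url = loc.to_str().unwrap_or_default().to_owned();
            }
            _ => return Ok(hops), // terminus: 200, 404, ...
        }
    }
    eprintln!("REDIRECT_LOOP: revisited {url}"); // same URL seen twice
    Ok(hops)
}
```

A chain report like the one above is then just `hops.len() > 2` plus the scheme and status transitions between consecutive entries.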
Findings your security team will actually read.
HTTPS enforcement, CSP directives, HSTS, cookie flags, mixed content — each flag carries the rule code AND a CWE reference so your engineers can triage without a translator.
- CWE refs: ✓ linked on every security rule
- Scope: per-URL, not just top-level headers
| Header / check | Rule code | Observed |
| --- | --- | --- |
| Strict-Transport-Security | — | max-age=15552000; includeSubDomains |
| Content-Security-Policy | SEC_CSP_UNSAFE_EVAL | script-src 'self' 'unsafe-eval' *.vendor.com |
| X-Frame-Options | SEC_PERMISSIONS_POLICY_MISSING | missing |
| Set-Cookie: session | SEC_COOKIE_NO_HTTPONLY | HttpOnly ✗ · SameSite ✗ |
| Mixed content | SEC_MIXED_CONTENT_PASSIVE | 3 http:// resources on https page |
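Each row above reduces to a predicate over response headers mapped to a rule code. A toy version with illustrative logic; the rule codes are Scannica's, but the CWE mappings in the comments are examples rather than the shipped references:

```rust
use std::collections::HashMap;

// Headers assumed lower-cased with one value per name, for brevity.
fn security_findings(headers: &HashMap<String, String>) -> Vec<&'static str> {
    let mut findings = Vec::new();
    if let Some(csp) = headers.get("content-security-policy") {
        if csp.contains("'unsafe-eval'") {
            findings.push("SEC_CSP_UNSAFE_EVAL"); // cf. CWE-95, eval injection
        }
    }
    if let Some(cookie) = headers.get("set-cookie") {
        if !cookie.to_ascii_lowercase().contains("httponly") {
            findings.push("SEC_COOKIE_NO_HTTPONLY"); // cf. CWE-1004
        }
    }
    findings
}
```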
Orphans, found.
Feed in one or many sitemap.xml files. Scannica diffs them against the live crawl: pages listed but unreachable, pages crawled but absent from the sitemap, stale entries pointing at 404s. Migration QA without the spreadsheet.
In the sitemap:
- /
- /about
- /blog/launch
- /blog/roadmap
- /legal/old-tos
- + 122 more

In the crawl:
- /
- /about
- /blog/launch
- /blog/v2-notes
- /internal/staging
- + 209 more
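The two lists fall out of a pair of set differences between sitemap URLs and crawled URLs. A minimal sketch; the struct and field names are assumptions, not Scannica's types:

```rust
use std::collections::BTreeSet;

struct SitemapDiff {
    listed_not_crawled: BTreeSet<String>, // orphans and stale entries
    crawled_not_listed: BTreeSet<String>, // pages the sitemap forgot
}

fn diff_sitemap(sitemap: &BTreeSet<String>, crawl: &BTreeSet<String>) -> SitemapDiff {
    SitemapDiff {
        listed_not_crawled: sitemap.difference(crawl).cloned().collect(),
        crawled_not_listed: crawl.difference(sitemap).cloned().collect(),
    }
}
```

Checking which listed-but-not-crawled entries answer 404 then costs one extra request per URL.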
Short questions. Honest answers.
Pre-1.0 means some things are still moving. Where they are, the answer says so explicitly.
Does Scannica send my crawl data anywhere?
No. The crawl engine runs inside the desktop app. The only outbound network traffic is the HTTP requests your crawl makes to the site you're auditing. There is no Scannica analytics backend, no telemetry, no cloud sync — audit a staging host from a VPN-isolated laptop and nothing leaks.
Which platforms are supported?
macOS (Apple Silicon and Intel), Windows 10/11, and Linux. Builds are produced via Tauri 2 from a single Rust + SvelteKit codebase. Specific package formats are still being decided as part of the pre-1.0 release work.
How big a site can it crawl?
The Free tier caps a single crawl at 500 URLs and 10 page audits — enough to evaluate the loop end-to-end. Pro raises the cap to 100K URLs and 1K audits. Enterprise removes the cap entirely. The session database runs SQLite in WAL mode and the URL frontier uses xxh3 for dedup, so once the tier caps are out of the way the practical ceiling is your disk.
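To make the dedup claim concrete: a frontier can store a 64-bit xxh3 digest per URL instead of the URL itself. A sketch assuming the xxhash-rust crate (xxh3 feature); Scannica's actual frontier is not public:

```rust
use std::collections::HashSet;
use xxhash_rust::xxh3::xxh3_64;

#[derive(Default)]
struct Frontier {
    seen: HashSet<u64>, // 8 bytes per URL ever enqueued
    queue: Vec<String>, // URLs still waiting to be fetched
}

impl Frontier {
    fn push(&mut self, url: String) {
        // insert() returns false if the digest was already present,
        // so duplicates are dropped in O(1) without storing the URL twice.
        if self.seen.insert(xxh3_64(url.as_bytes())) {
            self.queue.push(url);
        }
    }
}
```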
Can I pause a crawl and resume later?
Yes. Pause and resume are first-class Tauri commands — session state and the crawl frontier persist to the project file, so closing the laptop mid-crawl and resuming on the train is the supported flow.
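For a sense of what "first-class Tauri commands" means in practice, this is the general shape such a command takes in Tauri 2; the command name and state type here are hypothetical:

```rust
use std::sync::Mutex;
use tauri::State;

struct CrawlSession {
    paused: bool, // the real state would also carry the frontier, etc.
}

// A Tauri 2 command callable from the SvelteKit frontend via invoke().
#[tauri::command]
fn pause_crawl(session: State<'_, Mutex<CrawlSession>>) -> Result<(), String> {
    session.lock().map_err(|e| e.to_string())?.paused = true;
    Ok(()) // persistence to the project file would happen here
}
```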
Does it render JavaScript?
Yes — Scannica can route each URL through a headless browser, collect Core Web Vitals, and diff the rendered DOM against the static response to surface content that only appears after hydration. JS rendering is a Pro tier feature.
What's GEO Readiness, exactly?
Generative Engine Optimization. The category groups checks that matter when retrieval-augmented surfaces — Google AI Overviews, ChatGPT, Perplexity, Copilot — read your page: entity density, author entity markup, FAQ + Article schema, llms.txt, citation signals, Speakable schema. No incumbent SEO auditor ships these. GEO Readiness checks are Pro+.
What format are the projects in?
A .scannica bundle is a ZIP containing the session SQLite database and the crawl configuration. Portable, archivable, diff-able, and small enough to email. Save and load require Pro.
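Since the bundle is plain ZIP, any ZIP tool can inspect it. A sketch using the zip crate; the expected entry names in the comment are guesses, as the exact layout isn't specified here:

```rust
use std::fs::File;
use zip::ZipArchive;

fn list_bundle(path: &str) -> zip::result::ZipResult<()> {
    let mut archive = ZipArchive::new(File::open(path)?)?;
    for i in 0..archive.len() {
        let entry = archive.by_index(i)?;
        // Expect something like a session SQLite database plus a crawl config.
        println!("{} ({} bytes)", entry.name(), entry.size());
    }
    Ok(())
}
```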
What can I export?
From any audit: per-URL CSV, per-issue CSV, JSON of either, an HTML report, and the rebuilt sitemap as XML. PDF and Markdown exports are not on the current roadmap. Exports are Pro+.
Where do findings end up — Jira, GitHub, etc.?
Pro+ ships integrations with Jira, GitHub Issues, Monday, ClickUp (one-click create from a finding), and Google Search Console (pair crawl results with GSC data). No Slack, no email-out yet.
Is it open source?
No. The crawl engine and rule set are proprietary. The licensing layer verifies a JWT issued out of band; a purchase flow doesn't exist yet because Scannica is pre-1.0 and the pricing model isn't finalised.
Is there a CLI?
Not today. Scannica ships only as a Tauri desktop app. A headless / CI runner isn't on the near-term roadmap; if it's important to your team, mention it when you request access.
Are audits scheduled? Can it run on a cron?
No scheduler in v0.1. The expected loop is human-driven — open the app, kick off a crawl, compare the report to last week's via the .scannica diff. Scheduling may follow once the licensing layer is solid.