Methodology

How AI Status Dashboard works

We combine official status feeds, normalized incidents, and real-time probes to give teams a trustworthy picture of AI provider health in under a minute.

1. Official status ingestion

We ingest public status endpoints (Statuspage, Instatus, Status.io, RSS, and provider APIs) on a tight cadence. Incidents, components, and maintenance windows are normalized into a unified format so you can compare providers side by side.
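
As a rough sketch, a unified incident record might look like the following TypeScript shape. The field names here are illustrative assumptions for this example, not our production schema:

    // Illustrative sketch of a normalized incident record. Field names are
    // assumptions, not the actual schema.
    type IncidentState = "investigating" | "identified" | "monitoring" | "resolved";

    interface NormalizedIncident {
      provider: string;                 // e.g. "openai"
      source: "statuspage" | "instatus" | "statusio" | "rss" | "provider_api";
      id: string;                       // stable ID derived from the source feed
      title: string;
      state: IncidentState;
      affectedComponents: string[];
      startedAt: string;                // ISO 8601 timestamps throughout
      updatedAt: string;
      resolvedAt?: string;
    }

Normalizing to one shape is what makes side-by-side comparison possible: every downstream view reads the same fields regardless of which status platform a provider uses.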

2. Synthetic probes

We run deterministic canary requests against AI provider APIs to measure latency, errors, and response correctness. If probes are inconclusive or blocked by account limits, we default to operational and flag the signal as advisory rather than declaring an outage.
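
A minimal probe might look like the sketch below (Node 18+ for built-in fetch). The endpoint, canary prompt, and correctness check are hypothetical placeholders; the point is the classification logic, where inconclusive outcomes become advisory signals rather than outages:

    // Minimal synthetic-probe sketch. URL, prompt, and checks are assumptions.
    type ProbeResult =
      | { signal: "healthy"; latencyMs: number }
      | { signal: "degraded"; latencyMs: number; reason: string }
      | { signal: "advisory"; reason: string }; // inconclusive: never declare an outage

    async function runCanaryProbe(url: string, apiKey: string): Promise<ProbeResult> {
      const started = Date.now();
      try {
        const res = await fetch(url, {
          method: "POST",
          headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
          body: JSON.stringify({ prompt: "Reply with exactly: pong" }), // deterministic canary
        });
        const latencyMs = Date.now() - started;
        if (res.status === 401 || res.status === 403 || res.status === 429) {
          // Blocked by auth or account limits: inconclusive, flag as advisory.
          return { signal: "advisory", reason: `HTTP ${res.status}` };
        }
        if (!res.ok) return { signal: "degraded", latencyMs, reason: `HTTP ${res.status}` };
        const body = await res.text();
        if (!body.includes("pong")) return { signal: "degraded", latencyMs, reason: "unexpected response" };
        return { signal: "healthy", latencyMs };
      } catch (err) {
        // A single network failure is evidence, but on its own it is only advisory.
        return { signal: "advisory", reason: String(err) };
      }
    }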

3. Telemetry overlays

Optional telemetry allows teams to compare their own experience to global signals. By default we do not log prompts or outputs; we store only the aggregates needed for reliability insights.
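
For illustration, an aggregate-only telemetry record could be as small as this (field names assumed). Note what is absent: no prompts, no outputs, no per-request content:

    // Sketch of an aggregate-only telemetry bucket. Field names are assumptions.
    interface TelemetryWindow {
      provider: string;
      windowStart: string;     // ISO 8601 start of the aggregation bucket
      requestCount: number;
      errorCount: number;
      latencyP50Ms: number;
      latencyP95Ms: number;
    }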

4. Evidence-first alerts

Every alert is backed by timestamps, thresholds, and raw evidence. This reduces false positives and helps teams decide when to fail over or throttle usage.
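
As a hedged sketch, an evidence-backed alert payload could carry the rule that fired, the observed value, and the raw signals behind it (the shape below is illustrative, not our wire format):

    // Illustrative evidence-first alert shape.
    interface Alert {
      provider: string;
      firedAt: string;                  // ISO 8601 timestamp
      rule: { metric: string; threshold: number; windowMinutes: number };
      observed: number;                 // the value that crossed the threshold
      evidence: Array<{ at: string; kind: "probe" | "official" | "telemetry"; detail: string }>;
      advisory: boolean;                // true when signals could not be fully verified
    }

Carrying the evidence with the alert lets an on-call engineer audit the trigger before deciding to fail over or throttle.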

5. Transparency & trust

We always link to official status pages and make it clear when data is aggregated or when signals are advisory. If we cannot verify a status confidently, we default to operational.

Build on AIStatusDashboard

Use stable public surfaces to integrate reliability data into your workflows. A minimal fetch example follows the list below.

  • MCP endpoint: https://aistatusdashboard.com/mcp
  • OpenAPI: https://aistatusdashboard.com/openapi.json
  • Public JSON: /api/public/v1/status/summary, /api/public/v1/incidents
  • Datasets: /datasets/incidents.ndjson, /datasets/metrics.csv
  • Discovery audit: /docs/discoverability-audit.md
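
For example, the status summary can be fetched directly. The response shape is not shown here; the OpenAPI document above is the source of truth for the contract:

    // Fetch the public status summary (Node 18+ ESM or any modern browser).
    const res = await fetch("https://aistatusdashboard.com/api/public/v1/status/summary");
    if (!res.ok) throw new Error(`Unexpected HTTP ${res.status}`);
    const summary = await res.json();
    console.log(summary); // see /openapi.json for the response schema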