PEER Audit

Digital JEDI Compliance
for B Corp Certification

PEER stands for Performance, Experience, Emissions, and Ranking. It is a four-part audit that measures how green, usable, and inclusive your website is. It runs over 200 checks and gives you a proof-backed report — built to meet B Lab's JEDI Standards V2.2.

210+

Individual checks

98

Accessibility rules tested

7

ISO 30071-1 checks

<4 min

Scan duration

4

Carbon methodologies

Free P.E.E.R. Website Scanner

Enter your URL. We'll score your site on speed, user experience, carbon output, and search ranking — in seconds.

The Problem We Solve

B Corp asks firms to show their commitment to justice, equity, diversity, and inclusion across their whole business — including their website. JEDI Standards V2.2 sets clear rules for web access, inclusive language, and product inclusivity. Most firms have no way to measure these.

The usual route is a manual audit costing thousands. It arrives months late with findings already out of date. PEER automates the checks you can measure and gives you audit-grade proof in under four minutes.

What We Test

1

WCAG 2.2 AAA Accessibility

JEDI Standard 2.m

B Lab's JEDI Standards V2.2 says websites should meet WCAG AAA — the top level of web access rules. PEER tests the full WCAG 2.2 set. Not just Level A or AA, but the complete AAA standard.

Automated Checks

98 accessibility rules in total: 87 WCAG rules across Levels A (32), AA (24), and AAA (31), plus 4 custom OYNK rules and 7 ISO 30071-1 checks
Colour contrast at AA (4.5:1) and AAA (7:1) with pixel-level screenshot verification
Image accessibility — every image checked for alt text (WCAG 1.1.1)
Keyboard navigation — tab traversal, focus traps, focus indicators, skip links
Screen reader compatibility — ARIA landmarks, heading hierarchy, unlabelled icons
Form accessibility — label association, ARIA errors, tab order, placeholder misuse
Touch target sizing — WCAG 2.5.8, minimum 24×24 CSS pixels
Page language, auto-refresh, link disambiguation
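The AA (4.5:1) and AAA (7:1) contrast thresholds above come from WCAG's published definitions of relative luminance and contrast ratio. As a minimal sketch of that calculation (the formulas are WCAG's; the function names are ours, and this works on flat colours rather than screenshots):

```python
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG definition)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (L1 + 0.05) / (L2 + 0.05), lighter over darker."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible ratio, 21:1.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
passes_aa = ratio >= 4.5    # WCAG AA threshold for normal text
passes_aaa = ratio >= 7.0   # WCAG AAA threshold for normal text
```

PEER's pixel-level screenshot verification extends this idea to rendered pixels, which also handles partly transparent text.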

Flagged for Manual Review

WCAG 2.2 has some rules that need a human to judge. PEER flags these and lists them as next steps:

  • Focus Appearance (2.4.13) — focus indicator area and contrast
  • Dragging Movements (2.5.7) — single-pointer alternatives for drag actions
  • Accessible Authentication (3.3.8 / 3.3.9) — no cognitive function tests for login
  • Consistent Help (3.2.6) — help mechanism placement across pages
  • Redundant Entry (3.3.7) — previously entered information auto-populated

Custom Detection Rules

axe-core misses issues that require comparing elements against each other. PEER runs four additional rules on every page:

Duplicate carousel content — Finds cloned content in marquees and infinite scrolls that isn’t hidden from screen readers with aria-hidden="true" (WCAG 1.3.1)
Identical link text, different destinations — Flags links sharing the same text (e.g. multiple “Read More” links) that point to different URLs (WCAG 2.4.9)
Decorative punctuation — Detects standalone typographic ornaments (em dashes, bullets, curly quotes) that screen readers will announce out loud (WCAG 1.3.1)
Inconsistent alt text — Finds the same image used multiple times on a page with different alt text values (WCAG 1.1.1)
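The "identical link text, different destinations" rule reduces to grouping links by their visible text and flagging any group that resolves to more than one URL. A simplified sketch, assuming the links have already been scraped from the page (element collection and URL normalisation omitted):

```python
from collections import defaultdict

def flag_ambiguous_links(links: list[tuple[str, str]]) -> dict[str, set[str]]:
    """links is a list of (visible_text, href) pairs from one page.
    Returns link texts that point at more than one distinct destination,
    a WCAG 2.4.9 concern."""
    by_text: dict[str, set[str]] = defaultdict(set)
    for text, href in links:
        by_text[text.strip().lower()].add(href)
    return {text: hrefs for text, hrefs in by_text.items() if len(hrefs) > 1}

links = [
    ("Read More", "/blog/post-1"),
    ("Read More", "/blog/post-2"),
    ("Contact", "/contact"),
]
flagged = flag_ambiguous_links(links)
# {'read more': {'/blog/post-1', '/blog/post-2'}}
```

The inconsistent-alt-text rule is the same pattern inverted: group images by source URL and flag any group with more than one distinct alt value.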
2

Product and Service Inclusivity

JEDI Standard 2.q

JEDI Standards say you must check products for inclusivity — mainly font size, colour contrast, and screen reader support. PEER tests all three.

Font Size Assessment

Finds text smaller than 12px on desktop or 16px on mobile. Counts how many items are too small and shows the worst cases.

Colour and Contrast

Every text item checked against WCAG contrast rules. Pixel-level checks using screenshots, with support for partly see-through text.

Screen Reader Compatibility

Scored 0 to 100. Checks ARIA landmarks, icon labels, and heading order. Each fault costs points based on how much it matters.

Keyboard Navigation

Scored 0 to 100. Tests tab order, focus movement, trap spots, focus indicator display, and skip links.

Mobile Usability

Scored 0 to 100. Checks viewport, side scroll, fixed widths, tap targets (48px min), and screen blockers with a four-level scale.
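PEER's exact weightings for these 0-to-100 scores aren't published here, but a deduction model of this general shape matches the description "each fault costs points based on how much it matters": start from 100, subtract a severity-weighted penalty per fault, and clamp at zero. The severity values below are illustrative assumptions, not PEER's actual weights.

```python
# Illustrative severity weights; PEER's real values are not published here.
PENALTIES = {"critical": 25, "serious": 10, "moderate": 5, "minor": 1}

def usability_score(faults: list[str]) -> int:
    """Score 0-100: each fault costs points based on its severity."""
    score = 100 - sum(PENALTIES[severity] for severity in faults)
    return max(0, score)  # clamp: a badly broken page bottoms out at 0

usability_score(["critical", "minor", "minor"])  # 73
```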

3

Inclusive Communications

JEDI Standard 2.p

JEDI Standards say all public content — including websites — must use inclusive, easy-to-read language. PEER tests both readability and inclusive language as part of the audit.

Content Readability

  • Flesch-Kincaid Grade Level (target: Grade 10 or below)
  • Flesch Reading Ease (0–100 scale)
  • Gunning Fog Index
  • Reading age, words per sentence, syllables per word
  • Analyses main content area only for accuracy
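The three readability measures above are standard published formulas built on the same three counts: words, sentences, and syllables. A self-contained sketch using the standard coefficients (syllable counting itself is omitted; the counts are passed in):

```python
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level (standard published coefficients)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """0-100 scale; higher is easier to read."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def gunning_fog(words: int, sentences: int, complex_words: int) -> float:
    """Complex words = words of three or more syllables."""
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))

# A 100-word sample with 10 sentences and 150 syllables:
grade = flesch_kincaid_grade(100, 10, 150)  # 6.01, comfortably under Grade 10
```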

Inclusive Language Scanning

  • 30+ patterns across five categories
  • Gendered, ableist, exclusionary, cultural, age-related
  • Each flag includes category, count, and suggested replacement
  • Density metric: flags per 1,000 words scanned
Found | Category | Suggested Alternative
chairman | Gendered | chairperson / chair
manpower | Gendered | workforce / staffing
handicapped | Ableist | disabled / person with a disability
suffering from | Ableist | living with / has
wheelchair-bound | Ableist | wheelchair user
rockstar | Exclusionary | skilled / talented
tribe | Cultural | team / community / group
elderly | Age | older adults / seniors
digital native | Age | digitally skilled
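Mechanically, a scan like this is word-boundary pattern matching plus a density metric. A simplified sketch using a few of the patterns from the table above (the full tool covers 30+ patterns across five categories):

```python
import re

# A small sample of the patterns; categories and replacements as in the table.
PATTERNS = {
    r"\bchairman\b": ("Gendered", "chairperson / chair"),
    r"\bmanpower\b": ("Gendered", "workforce / staffing"),
    r"\bwheelchair-bound\b": ("Ableist", "wheelchair user"),
    r"\belderly\b": ("Age", "older adults / seniors"),
}

def scan(text: str) -> tuple[list[tuple[str, str, str]], float]:
    """Returns (flags, density), where density is flags per 1,000 words."""
    flags = []
    for pattern, (category, suggestion) in PATTERNS.items():
        for match in re.finditer(pattern, text, re.IGNORECASE):
            flags.append((match.group(0), category, suggestion))
    word_count = len(text.split())
    density = len(flags) / word_count * 1000 if word_count else 0.0
    return flags, density

flags, density = scan("The chairman praised the manpower of the team.")
# 2 flags in 8 words -> density of 250 per 1,000 words
```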
4

Carbon Emissions Measurement

Every PEER audit includes a full carbon check using the EcoPigs Digital Carbon Method v2.0. It covers page weight, resource breakdown, data centre energy, network energy, device energy, live grid data, green hosting checks, and waste analysis.

Score | Purpose
Baseline | Global grid average, no green hosting credit. For cross-site comparison.
Traditional | What Website Carbon Calculator would report (SWDM methodology). Like-for-like comparison.
Live | Real-time emissions using live grid intensity for the hosting country. Most accurate snapshot.
Measured | CDP-instrumented device energy with per-segment grid splitting. Most granular methodology.
Grade | Max gCO2/visit | Approx. Page Weight
A+ | ≤ 0.095 g | ~700 KB (top 12%)
A | ≤ 0.186 g | ~1,370 KB (top 28%)
B | ≤ 0.341 g | ~2,520 KB (median)
C | ≤ 0.493 g | ~3,640 KB
D | ≤ 0.656 g | ~4,840 KB
F | > 0.846 g | > 6,250 KB
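As a rough illustration of how page weight becomes grams of CO2 per visit: transferred bytes are converted to energy with a kWh-per-GB intensity factor, then multiplied by the grid's carbon intensity. The factors below are generic published reference points (0.81 kWh/GB from the Sustainable Web Design Model, 473 gCO2/kWh from the Ember global average cited in this document), not PEER's pipeline, which splits energy across data centre, network, and device segments and applies grade calibration:

```python
KWH_PER_GB = 0.81          # system-wide energy intensity (SWDM reference value)
GRID_G_CO2_PER_KWH = 473   # global average grid intensity (Ember)

def grams_co2_per_visit(page_bytes: int,
                        kwh_per_gb: float = KWH_PER_GB,
                        grid_intensity: float = GRID_G_CO2_PER_KWH) -> float:
    """Deliberately simplified single-factor estimate: bytes -> kWh -> gCO2."""
    energy_kwh = page_bytes / 1e9 * kwh_per_gb
    return energy_kwh * grid_intensity

# A 1 MB page under these assumptions:
grams_co2_per_visit(1_000_000)  # ~0.38 g per visit
```

Swapping in a live grid-intensity figure for the hosting country instead of the global average is essentially what the Live score does.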
5

Performance and Core Web Vitals

Google's Core Web Vitals are proven ranking signals. They also shape how users feel about your site. PEER measures all three core metrics in a real browser.

Metric | Good | Needs Improvement | Poor
LCP | < 2.5 s | 2.5–4.0 s | > 4.0 s
CLS | < 0.1 | 0.1–0.25 | > 0.25
TTFB | < 600 ms | 600–1,800 ms | > 1,800 ms
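Grading against these bands is a simple two-threshold lookup. A sketch using the thresholds from the table above (boundary handling at exactly 2.5 s etc. is our assumption):

```python
# (good_below, poor_above) thresholds from the table above.
THRESHOLDS = {
    "LCP": (2.5, 4.0),     # seconds
    "CLS": (0.1, 0.25),    # unitless layout-shift score
    "TTFB": (600, 1800),   # milliseconds
}

def grade_metric(metric: str, value: float) -> str:
    good, poor = THRESHOLDS[metric]
    if value < good:
        return "good"
    return "needs improvement" if value <= poor else "poor"

grade_metric("LCP", 2.1)    # 'good'
grade_metric("TTFB", 900)   # 'needs improvement'
```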

Additional Performance Checks

HTTP/2 protocol detection
Brotli/gzip compression verification
Cache header analysis
Render-blocking resource detection
Third-party resource impact
Font optimisation (format, preload, font-display)
6

SEO and Search Visibility

Check | What We Test
Page titles | Presence, length (50–60 chars optimal), uniqueness
Meta descriptions | Presence, length (150–160 chars optimal)
Heading structure | Single H1, valid hierarchy, no empty headings
Structured data | JSON-LD presence, Schema.org types detected
Crawlability | robots.txt, sitemap.xml, indexability status
Canonical URLs | Presence, self-referencing, host consistency
Internal linking | Orphan page detection, content depth analysis
Mobile-first | Viewport, overflow, tap targets, text size, mobile speed
7

ISO 30071-1 Digital Accessibility Maturity

ISO 30071-1 is the international standard for embedding accessibility into how organisations build and maintain digital products. It goes beyond WCAG — which tests whether a page is accessible — and asks whether the organisation has the processes to keep it accessible.

Most of ISO 30071-1 requires human assessment: team training, procurement policies, user research with disabled participants. But seven checks can be detected from the page itself. PEER runs all seven automatically.

Check | ISO Clause | What PEER Detects
Accessibility statement | Clause 8 (Communication) | Whether the page links to an accessibility statement. ISO 30071-1 requires organisations to publish one covering conformance level, known issues, and a feedback mechanism.
Skip navigation | Activity 3 (Design) | Whether a skip-to-content link exists early in the page, letting keyboard and screen reader users bypass repeated navigation.
Landmark regions | Activity 3 (Design) | Whether the page uses semantic HTML landmarks (<main>, <nav>, <header>, <footer>) that screen readers use to navigate page structure.
Third-party widget accessibility | Clause 5 (Procurement) | Whether embedded third-party content (cookie banners, chat widgets, maps, social embeds, video players) has accessible labelling. Iframes need title attributes; dialogs need aria-label.
Reduced motion support | Activity 3 (Design) | Whether the site respects prefers-reduced-motion when animations are present. Users with vestibular disorders need the option to disable motion.
Dark mode and high contrast | Activity 3 (Design) | Whether the site supports the prefers-color-scheme and prefers-contrast CSS media queries for users with light sensitivity or low vision.
SPA focus management | Activity 3 (Design) | Whether single-page applications (React, Next.js, Vue, Angular) manage focus on navigation. Without this, screen reader users lose their place when views change.
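Several of these signals can be approximated with pattern checks against the delivered HTML and CSS. A deliberately simplified sketch (real detection needs a rendered DOM and full stylesheet parsing; the patterns here are illustrative assumptions, not PEER's actual rules):

```python
import re

def iso_maturity_signals(html: str, css: str) -> dict[str, bool]:
    """Boolean maturity signals for a few of the seven checks above."""
    return {
        # A link whose href mentions 'accessibility' is a crude statement proxy.
        "accessibility_statement": bool(
            re.search(r'href="[^"]*accessibility[^"]*"', html, re.I)),
        # Skip link: an early anchor targeting the main content region.
        "skip_link": bool(re.search(r'href="#(main|content)[^"]*"', html, re.I)),
        # Semantic landmarks screen readers navigate by.
        "landmarks": all(tag in html.lower() for tag in ("<main", "<nav")),
        # Media queries signalling motion and display preference support.
        "reduced_motion": "prefers-reduced-motion" in css,
        "color_scheme": "prefers-color-scheme" in css,
    }

html = '<nav><a href="#main">Skip to content</a></nav><main></main>'
css = "@media (prefers-reduced-motion: reduce) { * { animation: none; } }"
signals = iso_maturity_signals(html, css)
```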

What PEER Reports

Each check returns a pass or fail with a maturity signal. The ISO 30071-1 section of the report shows:

  • Checks passed out of 7
  • Maturity signals — boolean flags for each check, showing at a glance which organisational practices are visible from the website
  • Specific violations with element selectors, HTML snippets, and fix guidance

What ISO 30071-1 Requires Beyond Automation

The seven automated checks cover roughly 30% of ISO 30071-1. The remaining 70% requires human consultancy across nine maturity dimensions: motivation, responsibility, capability, support, policies, governance, users, lifecycle, and communication. OYNK offers full ISO 30071-1 consultancy programmes for organisations that want to go further.

How the P.E.E.R. Score Works

Pillar | Weight | What It Covers
Performance | 25% | Core Web Vitals, resource optimisation, network performance
Experience | 25% | WCAG compliance, mobile usability, forms, keyboard, screen reader
Emissions | 30% | Carbon per page view, green hosting, waste, grid intensity
Ranking | 20% | SEO technical health, content quality, mobile-first, structured data

Grade Scale

A+ (95+) → A (90+) → B+ (85+) → B (80+) → C+ (75+) → C (70+) → D (60+) → F (below 60)

Floor Rules

A site cannot get an overall A+ unless every pillar hits at least B+. Fast speed alone cannot hide poor access or high carbon.
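Putting the weights, grade scale, and floor rule together gives a scoring model like the one sketched below. The weighted sum and grade boundaries come from the tables above; the choice to cap a floor-rule violation at A (rather than lower) is our assumption, since the source only says A+ is withheld.

```python
WEIGHTS = {"performance": 0.25, "experience": 0.25,
           "emissions": 0.30, "ranking": 0.20}

# Grade scale from above: minimum score for each grade, highest first.
GRADE_FLOORS = [(95, "A+"), (90, "A"), (85, "B+"), (80, "B"),
                (75, "C+"), (70, "C"), (60, "D"), (0, "F")]

def to_grade(score: float) -> str:
    return next(grade for floor, grade in GRADE_FLOORS if score >= floor)

def overall_grade(pillars: dict[str, float]) -> str:
    """Weighted P.E.E.R. score with the floor rule: no overall A+
    unless every pillar hits at least B+ (85)."""
    score = sum(pillars[name] * weight for name, weight in WEIGHTS.items())
    grade = to_grade(score)
    if grade == "A+" and min(pillars.values()) < 85:
        grade = "A"  # assumption: floor rule demotes to the next grade down
    return grade

# Fast and green, but weak SEO: weighted 96, yet no A+ because ranking < 85.
overall_grade({"performance": 100, "experience": 100,
               "emissions": 100, "ranking": 80})  # 'A'
```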

JEDI Standards V2.2 Coverage

JEDI Requirement | What It Requires | What PEER Tests
JEDI2.m | Website meets WCAG AAA criteria | 98 accessibility rules (87 WCAG + 4 custom + 7 ISO 30071-1), pixel-verified contrast, full axe-core scan
JEDI2.q | Product/service assessed for inclusivity (font, colour, screen readers) | Font size, contrast ratio, screen reader score, keyboard score, mobile usability
JEDI2.p | Inclusive external communications | Readability (Flesch-Kincaid), inclusive language scan (30+ bias patterns), structural accessibility

PEER also runs 7 automated ISO 30071-1 checks that signal organisational accessibility maturity — accessibility statement presence, skip navigation, landmark structure, third-party procurement, motion preferences, display preferences, and SPA focus management. These go beyond WCAG compliance and indicate whether accessibility is embedded in design and procurement processes.

ON_TRACK

All three requirements pass

PARTIAL_COMPLIANCE

Some pass, none fail

ACTION_REQUIRED

One or more requirements fail

What Makes PEER Different

Built for B Corp, not bolted on

Most access tools only test WCAG AA. PEER tests AAA because that is what B Lab actually asks for. The readability checks, language scans, and JEDI proof format exist because B Corp needs them.

Evidence, not opinions

Every finding names the element, the rule it breaks, and the measured value. Contrast faults are checked pixel by pixel against screenshots. Readability scores use proven formulas on each page. Nothing is guesswork.

Carbon methodology that stands up to scrutiny

Four carbon scores using different methods, backed by research from IEA, Ember, and Malmodin. Live grid data comes from the National Grid ESO API. Green hosting is verified by the Green Web Foundation.

ISO 30071-1 built in

Most accessibility tools stop at WCAG. PEER also checks for signals of organisational accessibility maturity mapped to ISO 30071-1 — the international standard for embedding accessibility into how digital products are built and maintained.

Multi-page, multi-device

PEER scans both desktop and mobile, testing up to 14 pages per audit. Scores show the whole site, not just the homepage.

Actionable remediation

Every issue links to a fix code with severity, what is wrong, and how to solve it. 39 fix codes span access (15), speed (10), SEO (8), and green web (6).

Frameworks and Standards Referenced

Standard | Version | How PEER Uses It
WCAG | 2.2 | Full Level A, AA, and AAA automated testing
ISO/IEC 30071-1 | 2019 | 7 automated checks across Clauses 5, 7, and 8: accessibility statement, skip links, landmarks, third-party widgets, motion, contrast, focus management
B Lab JEDI Standards | V2.2 (Feb 2026) | JEDI2.m, JEDI2.q, JEDI2.p compliance assessment
Google Core Web Vitals | 2024 thresholds | LCP, CLS, TTFB measurement and grading
GHG Protocol | Corporate Standard | Scope 3 Category 1 alignment for digital emissions
IEA World Energy Outlook | 2024 | Energy intensity baselines
Ember Global Electricity Review | 2024 | Grid carbon intensity (473 gCO2/kWh global average)
Malmodin & Lundén | 2023 | Data centre and network energy factors
Green Web Foundation | Current | Green hosting verification
National Grid ESO | Live API | Real-time UK grid carbon intensity
HTTP Archive Web Almanac | 2025 | Carbon grade calibration against real web distribution
DEFRA | 2025 | UK electricity emission factors
Sustainable Web Design Model | v4 | Comparative reference (traditional score)
Flesch-Kincaid / Gunning Fog | Not applicable | Content readability grading

What PEER Does Not Do

PEER is an automated tool. It covers the parts of JEDI you can measure and verify. It does not replace:

Manual Accessibility Testing

Keyboard-only walkthroughs, screen reader tests with NVDA/JAWS/VoiceOver, testing with real users who have disabilities, and the remaining 70% of ISO 30071-1 that requires human assessment — team training, procurement policies, user research methodology, and governance processes. OYNK offers consultancy programmes for the full standard.

Organisational JEDI Measures

Equity audits, diversity policies, stakeholder engagement, workforce demographics

Policy Review

JEDI2.c policy review against JEDI principles

Supplier Diversity Tracking

JEDI2.o demographic data on suppliers

Commitment Statements

JEDI2.a public JEDI commitment approved by management

The Digital Audit? Covered.

PEER shows exactly where you stand on the digital rules, with proof an assessor can verify. That part is covered.

Ready to Audit Your Digital JEDI Compliance?

Get a proof-backed PEER Audit report on access, inclusivity, carbon, speed, and SEO — mapped straight to B Lab's JEDI Standards V2.2.

Book a PEER Audit

Under 4 minutes. Over 200 checks. Auditor-grade evidence.