PEER Audit

Digital JEDI Compliance
for B Corp Certification

PEER (Performance, Experience, Emissions, Ranking) is a four-pillar digital audit framework that measures a website's sustainability, accessibility, and inclusivity. It produces an evidence-backed report covering over 200 individual checks — built specifically to address B Lab's JEDI Standards V2.2.

200+ individual checks
87 WCAG rules tested
<4 min scan duration
4 carbon methodologies

The Problem We Solve

B Corp certification requires companies to demonstrate commitment to justice, equity, diversity, and inclusion across their entire operation — including their digital presence. JEDI Standards V2.2 includes specific requirements around website accessibility, inclusive communications, and product inclusivity that most companies have no way to measure.

The typical approach is manual audits costing thousands of pounds, delivered months after engagement, with findings that are already outdated. PEER automates the measurable parts and produces auditor-grade evidence in under four minutes.

What We Test

1. WCAG 2.2 AAA Accessibility

JEDI Standard 2.m

B Lab's JEDI Standards V2.2 explicitly recommends websites meet the AAA criteria of the Web Content Accessibility Guidelines. PEER tests against the full WCAG 2.2 specification — not just Level A or AA, but the complete AAA standard.

Automated Checks

  • 87 WCAG rules across Levels A (32), AA (24), and AAA (31)
  • Colour contrast at AA (4.5:1) and AAA (7:1) with pixel-level screenshot verification
  • Image accessibility — every image checked for alt text (WCAG 1.1.1)
  • Keyboard navigation — tab traversal, focus traps, focus indicators, skip links
  • Screen reader compatibility — ARIA landmarks, heading hierarchy, unlabelled icons
  • Form accessibility — label association, ARIA errors, tab order, placeholder misuse
  • Touch target sizing — WCAG 2.5.8, minimum 24×24 CSS pixels
  • Page language, auto-refresh, link disambiguation

Flagged for Manual Review

WCAG 2.2 includes criteria requiring human judgement. PEER identifies these and lists them as remaining steps:

  • Focus Appearance (2.4.13) — focus indicator area and contrast
  • Dragging Movements (2.5.7) — single-pointer alternatives for drag actions
  • Accessible Authentication (3.3.8 / 3.3.9) — no cognitive function tests for login
  • Consistent Help (3.2.6) — help mechanism placement across pages
  • Redundant Entry (3.3.7) — previously entered information auto-populated
2. Product and Service Inclusivity

JEDI Standard 2.q

JEDI Standards require that products and services are assessed for inclusivity — specifically font size, colour, and screen reader compatibility. PEER tests all three.

Font Size Assessment

Detects text rendered below 12px (desktop) or 16px (mobile). Counts affected elements and samples the smallest font sizes found.
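The threshold rule above can be sketched in a few lines. This is an illustrative sketch of the stated rule (below 12px on desktop, below 16px on mobile), not PEER's actual implementation; the function name and return shape are assumptions.

```python
DESKTOP_MIN_PX = 12
MOBILE_MIN_PX = 16

def flag_small_text(font_sizes_px, viewport):
    """Flag computed font sizes below the viewport's minimum.

    font_sizes_px: list of computed font sizes in CSS pixels.
    viewport: "desktop" or "mobile".
    Returns the count of affected elements and the smallest samples,
    mirroring the counting-and-sampling behaviour described above.
    """
    threshold = MOBILE_MIN_PX if viewport == "mobile" else DESKTOP_MIN_PX
    flagged = sorted(s for s in font_sizes_px if s < threshold)
    return {"count": len(flagged), "smallest_samples": flagged[:5]}

result = flag_small_text([10, 11, 14, 18], viewport="desktop")
# result["count"] == 2; result["smallest_samples"] == [10, 11]
```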

Colour and Contrast

Every text element checked against WCAG thresholds. Pixel-level verification using screenshots with RGBA alpha channel support for semi-transparent text.
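The underlying contrast check follows the WCAG definition of relative luminance and contrast ratio. A minimal sketch, using the standard WCAG formulas and the AA/AAA thresholds cited above (PEER's pixel-level screenshot pipeline adds steps, such as alpha compositing, that this sketch omits):

```python
def _channel(c8):
    # sRGB channel (0-255) linearised per the WCAG relative-luminance definition
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # WCAG contrast ratio: (L1 + 0.05) / (L2 + 0.05), lighter colour first
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

ratio = contrast_ratio((0, 0, 0), (255, 255, 255))  # black on white -> 21.0
passes_aa = ratio >= 4.5   # normal text, Level AA
passes_aaa = ratio >= 7.0  # normal text, Level AAA
```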

Screen Reader Compatibility

Scored 0–100. ARIA landmark presence, unlabelled icon controls, heading level jumps — each with weighted penalties.
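A weighted-penalty score of this kind can be sketched as follows. The penalty weights here are assumptions for illustration only; the document does not publish PEER's actual weights.

```python
# Assumed weights, for illustration only -- not PEER's published values.
PENALTIES = {
    "missing_landmark": 10,  # no <main>/<nav>/ARIA landmark on the page
    "unlabelled_icon": 5,    # per icon-only control with no accessible name
    "heading_jump": 3,       # per skipped heading level (e.g. h2 -> h4)
}

def screen_reader_score(findings):
    """findings: dict mapping issue type -> occurrence count. Returns 0-100."""
    penalty = sum(PENALTIES[issue] * count for issue, count in findings.items())
    return max(0, 100 - penalty)

score = screen_reader_score(
    {"missing_landmark": 1, "unlabelled_icon": 2, "heading_jump": 1}
)
# 100 - 10 - 10 - 3 = 77
```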

Keyboard Navigation

Scored 0–100. Tab traversal, focus movement, trap detection, focus indicator visibility, skip link functionality.

Mobile Usability

Scored 0–100. Responsive viewport, horizontal overflow, fixed-width detection, tap target sizing (48×48px minimum), viewport obstruction detection with four-stage classification.

3. Inclusive Communications

JEDI Standard 2.p

JEDI Standards require that external communications — including websites — use inclusive, accessible language. PEER tests content readability and inclusive language alongside structural accessibility.

Content Readability

  • Flesch-Kincaid Grade Level (target: Grade 10 or below)
  • Flesch Reading Ease (0–100 scale)
  • Gunning Fog Index
  • Reading age, words per sentence, syllables per word
  • Analyses main content area only for accuracy
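The readability metrics above use the standard published formulas. A sketch, taking word, sentence, and syllable counts as inputs (PEER's syllable counting and main-content extraction are separate steps not shown here):

```python
def fk_grade(words, sentences, syllables):
    # Flesch-Kincaid Grade Level, standard published coefficients
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def flesch_reading_ease(words, sentences, syllables):
    # Flesch Reading Ease; higher means easier to read
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

grade = fk_grade(words=100, sentences=5, syllables=150)  # 9.91
meets_target = grade <= 10  # PEER's stated target: Grade 10 or below
```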

Inclusive Language Scanning

  • 30+ patterns across five categories
  • Gendered, ableist, exclusionary, cultural, age-related
  • Each flag includes category, count, and suggested replacement
  • Density metric: flags per 1,000 words scanned
Found | Category | Suggested Alternative
chairman | Gendered | chairperson / chair
manpower | Gendered | workforce / staffing
handicapped | Ableist | disabled / person with a disability
suffering from | Ableist | living with / has
wheelchair-bound | Ableist | wheelchair user
rockstar | Exclusionary | skilled / talented
tribe | Cultural | team / community / group
elderly | Age | older adults / seniors
digital native | Age | digitally skilled
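Pattern-based scanning of this kind can be sketched with a handful of the patterns from the table. This is a minimal illustration, not PEER's full rule set (30+ patterns across five categories), and the output shape is an assumption:

```python
import re

# Three illustrative patterns drawn from the table above.
PATTERNS = {
    r"\bchairman\b": ("gendered", "chairperson / chair"),
    r"\bwheelchair[- ]bound\b": ("ableist", "wheelchair user"),
    r"\bdigital native\b": ("age", "digitally skilled"),
}

def scan(text):
    """Return per-pattern flags plus a flags-per-1,000-words density."""
    words = len(text.split())
    flags = []
    for pattern, (category, suggestion) in PATTERNS.items():
        count = len(re.findall(pattern, text, re.IGNORECASE))
        if count:
            flags.append({"pattern": pattern, "category": category,
                          "count": count, "suggest": suggestion})
    total = sum(f["count"] for f in flags)
    density = total / words * 1000 if words else 0
    return {"flags": flags, "flags_per_1000_words": density}
```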
4. Carbon Emissions Measurement

Every PEER audit includes a full carbon footprint assessment using the EcoPigs Digital Carbon Methodology v2.0 — covering page weight, resource breakdown, data centre energy, network energy, user device energy, live grid intensity, green hosting verification, and waste analysis.

Score | Purpose
Baseline | Global grid average, no green hosting credit. For cross-site comparison.
Traditional | What Website Carbon Calculator would report (SWDM methodology). Like-for-like comparison.
Live | Real-time emissions using live grid intensity for the hosting country. Most accurate snapshot.
Measured | CDP-instrumented device energy with per-segment grid splitting. Most granular methodology.
Grade | Max gCO2/visit | Approx Page Weight
A+ | ≤ 0.095g | ~700 KB (top 12%)
A | ≤ 0.186g | ~1,370 KB (top 28%)
B | ≤ 0.341g | ~2,520 KB (median)
C | ≤ 0.493g | ~3,640 KB
D | ≤ 0.656g | ~4,840 KB
F | > 0.846g | > 6,250 KB
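The general shape of a baseline-style estimate is simple: transfer bytes become energy, and energy becomes grams of CO2 via a grid intensity. This sketch is not the EcoPigs methodology; the energy intensity is left as a caller-supplied parameter because the calibrated factor is not stated here. Only the 473 gCO2/kWh default and the grade thresholds come from this document.

```python
# Grade thresholds (gCO2 per visit) from the table above.
GRADES = [("A+", 0.095), ("A", 0.186), ("B", 0.341),
          ("C", 0.493), ("D", 0.656)]

def grams_per_visit(page_bytes, kwh_per_gb, grid_g_per_kwh=473):
    """Estimate gCO2 per page view.

    kwh_per_gb is an assumed, caller-supplied energy intensity;
    473 gCO2/kWh is the Ember global average cited in this document.
    """
    kwh = (page_bytes / 1e9) * kwh_per_gb
    return kwh * grid_g_per_kwh

def grade(grams):
    # Map grams per visit to the letter grades in the table above.
    for letter, limit in GRADES:
        if grams <= limit:
            return letter
    return "F"
```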
5. Performance and Core Web Vitals

Google's Core Web Vitals are confirmed ranking signals and directly affect user experience. PEER measures LCP and CLS, plus the supporting TTFB metric, using real browser instrumentation.

Metric | Good | Needs Improvement | Poor
LCP | < 2.5s | 2.5–4.0s | > 4.0s
CLS | < 0.1 | 0.1–0.25 | > 0.25
TTFB | < 600ms | 600–1,800ms | > 1,800ms
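The banding above is a straightforward threshold lookup. A sketch using the table's values (metric keys are illustrative names, not PEER's internal identifiers):

```python
# (good_below, poor_above) per metric, from the table above.
THRESHOLDS = {
    "lcp_s":   (2.5, 4.0),
    "cls":     (0.1, 0.25),
    "ttfb_ms": (600, 1800),
}

def rate(metric, value):
    """Band a measured value as good / needs improvement / poor."""
    good, poor = THRESHOLDS[metric]
    if value < good:
        return "good"
    if value > poor:
        return "poor"
    return "needs improvement"
```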

Additional Performance Checks

  • HTTP/2 protocol detection
  • Brotli/gzip compression verification
  • Cache header analysis
  • Render-blocking resource detection
  • Third-party resource impact
  • Font optimisation (format, preload, font-display)
6. SEO and Search Visibility

Check | What We Test
Page titles | Presence, length (50–60 chars optimal), uniqueness
Meta descriptions | Presence, length (150–160 chars optimal)
Heading structure | Single H1, valid hierarchy, no empty headings
Structured data | JSON-LD presence, Schema.org types detected
Crawlability | robots.txt, sitemap.xml, indexability status
Canonical URLs | Presence, self-referencing, host consistency
Internal linking | Orphan page detection, content depth analysis
Mobile-first | Viewport, overflow, tap targets, text size, mobile speed
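The length checks in the first two rows reduce to a range test. An illustrative sketch using the optimal ranges stated above (the function and its message format are assumptions, not PEER's output):

```python
def check_length(text, lo, hi, name):
    """Report presence and length of a title or meta description
    against the optimal character range stated in the table."""
    n = len(text or "")
    if n == 0:
        return f"{name}: missing"
    if n < lo:
        return f"{name}: too short ({n} chars, optimal {lo}-{hi})"
    if n > hi:
        return f"{name}: too long ({n} chars, optimal {lo}-{hi})"
    return f"{name}: ok ({n} chars)"

check_length("Example page title", 50, 60, "title")
check_length("", 150, 160, "meta description")  # -> "meta description: missing"
```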

How the P.E.E.R. Score Works

Pillar | Weight | What It Covers
Performance | 25% | Core Web Vitals, resource optimisation, network performance
Experience | 25% | WCAG compliance, mobile usability, forms, keyboard, screen reader
Emissions | 30% | Carbon per page view, green hosting, waste, grid intensity
Ranking | 20% | SEO technical health, content quality, mobile-first, structured data

Grade Scale

A+ (95+) → A (90+) → B+ (85+) → B (80+) → C+ (75+) → C (70+) → D (60+) → F (below 60)

Floor Rules

A website cannot achieve an overall A+ unless every pillar scores at least B+. High performance cannot mask poor accessibility or excessive emissions.
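The weighting and floor rule can be sketched together. The weights and grade boundaries come from this document; how the floor is enforced numerically is an assumption (here, capping just below the A+ boundary), since the document only states the rule, not the mechanism.

```python
WEIGHTS = {"performance": 0.25, "experience": 0.25,
           "emissions": 0.30, "ranking": 0.20}

def peer_score(pillars):
    """Weighted overall score with the floor rule: no A+ unless
    every pillar is at least B+ (85). The 94.9 cap is an assumption."""
    overall = sum(WEIGHTS[p] * s for p, s in pillars.items())
    if overall >= 95 and min(pillars.values()) < 85:
        overall = 94.9
    return overall

def letter(score):
    # Grade scale from the section above.
    for grade, floor in [("A+", 95), ("A", 90), ("B+", 85), ("B", 80),
                         ("C+", 75), ("C", 70), ("D", 60)]:
        if score >= floor:
            return grade
    return "F"

# One weak pillar blocks A+ even when the weighted average reaches 95.
capped = peer_score({"performance": 80, "experience": 100,
                     "emissions": 100, "ranking": 100})
```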

JEDI Standards V2.2 Coverage

JEDI Requirement | What It Requires | What PEER Tests
JEDI2.m | Website meets WCAG AAA criteria | 87 WCAG rules, pixel-verified contrast, full axe-core scan
JEDI2.q | Product/service assessed for inclusivity (font, colour, screen readers) | Font size, contrast ratio, screen reader score, keyboard score, mobile usability
JEDI2.p | Inclusive external communications | Readability (Flesch-Kincaid), inclusive language scan (30+ bias patterns), structural accessibility

ON_TRACK: all three requirements pass
PARTIAL_COMPLIANCE: some pass, none fail
ACTION_REQUIRED: one or more requirements fail
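The roll-up logic follows directly from those definitions. A sketch, with each requirement reporting one of three labels ("pass", "partial", "fail" are assumed labels for illustration):

```python
def jedi_status(results):
    """Roll per-requirement results up to an overall compliance status.

    results: dict mapping requirement -> "pass" | "partial" | "fail".
    """
    if any(r == "fail" for r in results.values()):
        return "ACTION_REQUIRED"     # one or more requirements fail
    if all(r == "pass" for r in results.values()):
        return "ON_TRACK"            # all three requirements pass
    return "PARTIAL_COMPLIANCE"      # some pass, none fail

jedi_status({"JEDI2.m": "pass", "JEDI2.q": "partial", "JEDI2.p": "pass"})
# -> "PARTIAL_COMPLIANCE"
```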

What Makes PEER Different

Built for B Corp, not bolted on

Most accessibility tools test against WCAG AA. PEER tests against AAA because that is what B Lab's JEDI Standards actually recommend. The readability analysis, inclusive language scanning, and JEDI-specific evidence structure exist because B Corp requires them.

Evidence, not opinions

Every finding includes the specific element, the standard violated, and the measured value. Contrast failures are pixel-verified against screenshots. Readability scores are computed per-page using established algorithms. Nothing is subjective.

Carbon methodology that stands up to scrutiny

Four independent carbon scores using different methodologies, referenced against peer-reviewed research (IEA, Ember, Malmodin), with live grid data from the National Grid ESO API. Green hosting verified through the Green Web Foundation.

Multi-page, multi-device

PEER scans across desktop and mobile viewports, testing up to 14 page views per audit. Scores reflect the whole site, not just the homepage.

Actionable remediation

Every issue maps to a specific remediation code with severity, description, and implementation guidance. 39 issue codes across accessibility (15), performance (10), SEO (8), and sustainability (6).

Frameworks and Standards Referenced

Standard | Version | How PEER Uses It
WCAG | 2.2 | Full Level A, AA, and AAA automated testing
B Lab JEDI Standards | V2.2 (Feb 2026) | JEDI2.m, JEDI2.q, JEDI2.p compliance assessment
Google Core Web Vitals | 2024 thresholds | LCP, CLS, TTFB measurement and grading
GHG Protocol | Corporate Standard | Scope 3 Category 1 alignment for digital emissions
IEA World Energy Outlook | 2024 | Energy intensity baselines
Ember Global Electricity Review | 2024 | Grid carbon intensity (473 gCO2/kWh global average)
Malmodin & Lundén | 2023 | Data centre and network energy factors
Green Web Foundation | Current | Green hosting verification
National Grid ESO | Live API | Real-time UK grid carbon intensity
HTTP Archive Web Almanac | 2025 | Carbon grade calibration against real web distribution
DEFRA | 2025 | UK electricity emission factors
Sustainable Web Design Model | v4 | Comparative reference (traditional score)
Flesch-Kincaid / Gunning Fog | n/a | Content readability grading

What PEER Does Not Do

PEER is an automated measurement tool. It covers the measurable, verifiable parts of JEDI compliance. It does not replace:

Manual Accessibility Testing

Keyboard-only walkthroughs, screen reader testing with NVDA/JAWS/VoiceOver, testing with real users with disabilities

Organisational JEDI Measures

Equity audits, diversity policies, stakeholder engagement, workforce demographics

Policy Review

JEDI2.c policy review against JEDI principles

Supplier Diversity Tracking

JEDI2.o demographic data on suppliers

Commitment Statements

JEDI2.a public JEDI commitment approved by management

The Digital Audit

PEER tells you exactly where you stand on the digital requirements, with evidence an assessor can verify. That part is covered.

Ready to Audit Your Digital JEDI Compliance?

Get an evidence-backed PEER Audit report covering accessibility, inclusivity, carbon emissions, performance, and SEO — mapped directly to B Lab's JEDI Standards V2.2.

Book a PEER Audit

Under 4 minutes. Over 200 checks. Auditor-grade evidence.