Case Study · UXR-226 · Comprehension Study · Lifeway Christian Resources · 2026

Free Shipping Banner Comprehension Study

Two banner concepts. One communicates free shipping clearly. One doesn't. The data tells you which.

90% free-shipping clarity (Design A)
20% free-shipping clarity (Design B)
60% preferred Design A overall
5 research questions answered
01 — Context

Part of a research program, not a one-off study.

This study was paired with UXR-224, which validated the checkout savings progress bar concept. With the progress bar moving toward production, the team needed to evaluate two competing banner designs for communicating the progressive discount and free-shipping system to customers in the cart.

UXR-224 and UXR-226 ran in parallel and were reviewed together when making the final production recommendation. That's how a research program compounds value — individual studies inform each other.

Research question: Do users correctly understand how the savings system works — and which banner design communicates it most clearly?

02 — The Two Approaches

Same system. Different communication strategies.

Both banners communicated the same progressive discount and free shipping system. The difference was in emphasis and framing.

Design A

Status-Forward

  • Highlights the current discount tier the customer has unlocked
  • Visual progress bar shows how close they are to the next tier
  • Shows savings applied and shipping status clearly

Design B

Instruction-Forward

  • Leads with explanatory messaging about how discounts work
  • Tells users what to do to reach the next tier
  • Less visual emphasis on current status

03 — Research Questions

Five questions the study needed to answer.

04 — The Results

A 70-point gap in the metric that matters most.

Design A vs. Design B — Key Metrics

Overall preference: Design A 60% · Design B 25%
Free-shipping threshold correctly identified: Design A 90% ★ · Design B 20%

70 pts: the free-shipping clarity gap

90% of Design A users correctly identified the free-shipping threshold. Only 20% of Design B users did. That gap is the deciding factor — not preference ratings, not ease scores.
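
A gap this size holds up even under conservative assumptions about sample size. As a rough illustration (not part of the original study), here is a minimal Python sketch using Fisher's exact test with a hypothetical 20 participants per design; both the per-cell counts and the choice of test are my assumptions, since the sample sizes aren't reported in this write-up:

    # Rough significance check for the 90% vs. 20% clarity gap.
    # n=20 per design is a HYPOTHETICAL sample size; the real counts
    # are not reported in this case study.
    from scipy.stats import fisher_exact

    n = 20                       # assumed participants per design
    a_correct = round(0.90 * n)  # Design A: 18 of 20 identified the threshold
    b_correct = round(0.20 * n)  # Design B: 4 of 20 identified the threshold

    contingency = [
        [a_correct, n - a_correct],  # Design A: correct / incorrect
        [b_correct, n - b_correct],  # Design B: correct / incorrect
    ]
    _, p_value = fisher_exact(contingency)
    print(f"p = {p_value:.6f}")  # lands far below 0.05 even at this small n

Under these assumed counts, the gap is significant even at 20 participants per cell, which is why a 70-point difference reads as decisive rather than noise.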

05 — The Nuance

Design A wins the study. Design B has something worth keeping.

Design A is the right call

The 70-point gap in free-shipping clarity makes Design A the defensible choice. When the majority can't correctly identify the threshold, the design is failing at its core job — regardless of how other metrics score.

Design B's copy strategy has merit

Design B's explanatory approach produced stronger comprehension of the discount progression mechanics. The instructional framing works for discounts even where it fails for shipping clarity, and that copy approach can be integrated into Design A.

The recommendation is a hybrid

Ship Design A's visual structure, with Design B's instructional language applied to the discount tier sections: the clarity of A combined with the comprehension strengths of B where B actually outperformed.

06 — The Key Finding

Design A wins on the metric that matters most.

When the majority can't correctly identify the free-shipping threshold, the design is failing at its core job — regardless of how ease ratings or preference scores compare. 90% versus 20% is not a close call.

07 — Recommendation & Impact

Ship Design A. Integrate Design B's instructional copy.

2 banner designs tested
5 research questions answered
90% vs. 20% clarity gap
1 production decision directly informed
Primary Recommendation

Ship Design A

The 70-point gap in free-shipping clarity is the deciding factor. Design A correctly communicates the threshold to 90% of users — the core job of this banner.

Incorporate Design B's instructional copy

Design B's explanatory framing showed stronger comprehension of discount progression mechanics. Selectively apply that copy approach to Design A's discount tier sections without disrupting its structural clarity.

08 — Reflection

What worked. What I'd change.

What worked well

  • Testing both zero state and progress state — a complete picture of each design across the user journey
  • Running UXR-224 and UXR-226 in parallel built a richer evidence base than either study alone
  • Identifying the nuance in Design B's copy strength meant the recommendation wasn't a binary winner/loser — it was actionable

What I'd do differently

  • Add a moderated component for at least a subset of participants — seeing where users pause or re-read surfaces confusion that post-task questions miss
  • Test the hybrid design explicitly before recommending it, rather than inferring it would work