
7 Steps to Running Pay-What-You-Want Pricing Experiments
Pay-What-You-Want (PWYW) pricing sounds almost too good to be true. Let people choose the price, and somehow your revenue might hold up (or even beat a fixed price). But I’ll be honest: it doesn’t work “because it’s a nice idea.” It works because you set it up in a way that nudges people toward fair payments—and you measure what happens so you can course-correct fast.
In this post, I’m going to lay out a practical 7-step process I use when I test PWYW. I’ll also call out what I’ve seen go wrong (and why). If you’ve ever wondered, “Will customers actually pay more than the minimum?”—this is the part where you stop guessing and start running experiments.
Key Takeaways
- PWYW works best when you have trust, a clear reason to pay, and customers who care about fairness (not just the lowest price).
- PWYW tends to perform better when your product has low marginal costs (eBooks, digital goods, services) and fulfillment risk is low.
- To make PWYW experiments meaningful, you need to track both conversion and payment distribution (not just average revenue).
- Anchors (suggested minimums/default amounts) and framing (supporting a cause/community) often shift how much people pay.
- Customer behavior under PWYW is uneven: some pay generously, many pay close to the suggestion, and a smaller group pays near zero.
- You can reduce downside by using a realistic suggested minimum, clear value communication, and guardrails like limited-time tests.
- Common failures include running PWYW without a measurement plan, using weak framing, or placing the offer in a highly price-sensitive audience.

Step 1: Understand what you’re actually testing with PWYW
PWYW pricing means the customer chooses the amount they’ll pay for your product or service. The “trick” isn’t the pricing model itself—it’s what the model triggers in people: fairness concerns, trust, reciprocity, and sometimes guilt (in a useful way).
Here’s what I look for first: is your audience likely to treat the transaction as a fair exchange, or as a “get it as cheap as possible” deal? If the latter is the vibe, PWYW can turn into a race to the bottom.
One widely discussed example is Radiohead’s In Rainbows, released as PWYW. Fans could download the album for whatever they wanted, and many paid more than a typical album price. The key takeaway for experiments: this wasn’t “random goodwill.” Radiohead had a huge, loyal audience and a strong brand story—exactly the kind of context where PWYW can work.
So before you copy a pricing headline, ask yourself: do you have trust, a narrative, and low risk? If yes, you can run a controlled test. If no, you’ll likely learn the wrong lesson (like “people don’t pay” rather than “people don’t trust”).
Step 2: Pick the right conditions (and the wrong ones to avoid)
PWYW isn’t universal. In my experience, it’s much more likely to work when customers:
- Care about fairness (they want to pay “what it’s worth,” not “what I can get away with”).
- Trust you (they believe you’ll deliver what you promise).
- Have a reason to pay beyond the product itself—like supporting a mission, creator, or community.
- See clear value quickly (PWYW fails when people don’t understand what they’re buying).
On the flip side, I’d be cautious if you’re in a super price-sensitive, highly competitive market where buyers are already trained to compare discounts. If they’re used to fixed prices and promotions, PWYW may just remove friction for bargain behavior.
Also, low marginal cost helps. Digital goods (eBooks, templates, software), or services where you can control fulfillment time, are easier to make PWYW sustainable. If each sale costs you heavily, one or two “near-zero” customers can hurt more than you expect.
Step 3: Design the experiment like you mean it (metrics, sample size, duration)
If you don’t measure PWYW properly, you’ll end up with vibes instead of decisions. Here’s a simple setup that keeps things honest.
Choose your primary metric
In PWYW tests, don’t rely on only one number. I usually track:
- Conversion rate (did PWYW change how many people buy?)
- Average order value (AOV) and median payment (averages get skewed by generous outliers)
- Payment distribution (how many paid $0, near the suggested amount, or far above it)
- Gross margin per order (especially for services)
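As a minimal sketch, these metrics can all be computed from a list of order payments and a visitor count. The numbers below are hypothetical, and `unit_cost` and the $24 anchor are assumptions for illustration:

```python
from statistics import mean, median

# Hypothetical PWYW test data: payment per order, in dollars.
payments = [0, 5, 24, 24, 30, 24, 0, 12, 50, 24]
visitors = 200          # people who saw the offer
unit_cost = 4.0         # assumed cost to fulfill one order

conversion_rate = len(payments) / visitors
aov = mean(payments)                    # skewed by generous outliers
median_payment = median(payments)       # more robust "typical" payment
gross_margin_per_order = mean(p - unit_cost for p in payments)

# Payment distribution relative to the suggested amount ($24 here).
anchor = 24
below = sum(p < anchor for p in payments)
at = sum(p == anchor for p in payments)
above = sum(p > anchor for p in payments)

print(conversion_rate, aov, median_payment)
print(below, at, above)
```

Note how the mean (19.3) and the median (24) disagree even in this tiny example: that gap is exactly why averages alone mislead.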
Set a test duration that matches your sales cycle
A weekend might be enough for low-ticket digital products, but for anything with a longer consideration cycle, you’ll need more time. A practical rule: run the test until you have enough transactions to see a stable distribution. If you only get 20 orders, your “results” can be dominated by a handful of generous customers.
Decide what “success” means before you launch
Example success criteria I’ve used:
- Revenue-neutral target: PWYW should achieve at least the same gross revenue per visitor as the fixed-price version.
- Distribution target: median payment should be above your cost-covering threshold.
- Engagement target: conversion should rise enough that even lower median payments still beat the baseline.
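Pre-registering criteria like these is easier if you write them down as code before launch. A sketch, with hypothetical dict keys and round-one numbers:

```python
def pwyw_beats_baseline(test, baseline, cost_floor):
    """Apply pre-registered success criteria to a PWYW test.

    `test` and `baseline` are dicts with hypothetical keys:
      revenue, visitors, median_payment, conversions
    `cost_floor` is the median payment needed to cover costs.
    Returns a pass/fail verdict per criterion.
    """
    rpv_test = test["revenue"] / test["visitors"]
    rpv_base = baseline["revenue"] / baseline["visitors"]
    return {
        "revenue_neutral": rpv_test >= rpv_base,
        "median_covers_costs": test["median_payment"] >= cost_floor,
        "conversion_lift": (test["conversions"] / test["visitors"])
                           > (baseline["conversions"] / baseline["visitors"]),
    }

# Hypothetical round-one numbers vs a fixed-price baseline.
result = pwyw_beats_baseline(
    test={"revenue": 1930, "visitors": 1000,
          "median_payment": 24, "conversions": 95},
    baseline={"revenue": 1800, "visitors": 1000,
              "median_payment": 29, "conversions": 62},
    cost_floor=8,
)
print(result)
```

Because the verdicts are written before launch, there's no room to rationalize a weak result afterward.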
One more thing: keep the offer context consistent. Same landing page, same product description, same audience targeting. If you change everything at once, you won’t know what caused the change.
Step 4: Use anchors (suggested minimums/defaults) without being sneaky
PWYW works better when customers aren’t staring at a blank field. People need a reference point. That’s where anchors come in.
For example, if you sell an eBook, you might offer a suggested minimum of $24. The point isn’t to force payment—it’s to provide a reasonable starting value so customers can align with what “fair” looks like.
In my own tests, I’ve noticed a pattern: when you remove the suggestion entirely, many customers choose very low numbers or zero. When you add a clear suggestion, the distribution tightens and the median usually rises—even if conversion stays similar.
What I recommend you test (as separate variations):
- No suggestion vs suggested minimum
- Different suggestion levels (e.g., $10 vs $24 vs $40)
- Different defaults (pre-filled amount vs empty field)
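To keep these variations comparable, each visitor should see the same variant on every visit. A common way to do that is deterministic hash-based bucketing; this sketch assumes you have some stable `visitor_id` (a cookie or account ID):

```python
import hashlib

# Variants: no suggestion, then three suggestion levels (hypothetical).
VARIANTS = [None, 10, 24, 40]

def assign_anchor(visitor_id: str):
    """Deterministically bucket a visitor into one anchor variant,
    so repeat visits always show the same suggestion."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# The same visitor always lands in the same bucket:
print(assign_anchor("visitor-123"), assign_anchor("visitor-123"))
```

Hashing (rather than random assignment per pageview) matters here: a visitor who sees $10 on one visit and $40 on the next will anchor on the lower number.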
Just don’t make the suggestion absurdly high. If it’s disconnected from perceived value, customers will either ignore it or feel manipulated.
Step 5: Frame the payment reason (value, community, or mission)
People don’t pay only for the object—they pay for the story around it. That’s why framing matters.
Here are three framing angles that tend to work:
- Support the creator: “Your payment helps fund new updates.”
- Support a community: “Proceeds support local workshops / access for others.”
- Pay what it’s worth: “If you found it valuable, please cover what you can.”
In experiments reported in the literature, framing and social motivations can push customers to pay more than they otherwise would. For example, studies in service settings show that fairness perceptions and satisfaction influence giving behavior, and some customers pay generously when they feel loyalty or a genuine desire to do good.
What I’d avoid: vague “support us” copy with no specifics. If you can say what the funds do (even in a single sentence), you’ll usually see a better response.
Step 6: Evaluate outcomes by looking at payment distribution and margin
This is where most teams mess up. They look at average revenue and declare victory or failure. But PWYW often produces a wide spread: some people pay a lot, many pay the suggestion, and a chunk pays near zero.
So I recommend you analyze at least:
- Median payment (what a “typical” buyer pays)
- Percent paying below/at/above the anchor
- Revenue per visitor (so you can compare fairly to fixed pricing)
- Gross margin per visitor (so you don’t win with cashflow but lose on sustainability)
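These four views fit in one small summary function. A sketch with hypothetical numbers, comparing against a $29 fixed-price baseline:

```python
from statistics import median

def evaluate_pwyw(payments, visitors, unit_cost, anchor):
    """Summarize a PWYW test: distribution relative to the anchor
    plus per-visitor economics."""
    n = len(payments)
    return {
        "median_payment": median(payments),
        "share_below": sum(p < anchor for p in payments) / n,
        "share_at_or_above": sum(p >= anchor for p in payments) / n,
        "revenue_per_visitor": sum(payments) / visitors,
        "margin_per_visitor": sum(p - unit_cost for p in payments) / visitors,
    }

# Hypothetical PWYW cell vs a fixed-price baseline cell.
pwyw = evaluate_pwyw([0, 5, 24, 24, 30, 24, 0, 12, 50, 24],
                     visitors=200, unit_cost=4, anchor=24)
fixed_rpv = (62 * 29) / 1000   # 62 sales at $29 across 1000 visitors

print(pwyw)
print(pwyw["revenue_per_visitor"], "vs baseline", fixed_rpv)
```

In this made-up example the PWYW median looks healthy ($24) but revenue per visitor still trails the fixed-price baseline, which is exactly the kind of conclusion an average-only analysis would miss.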
Also check for side effects. Did PWYW change your refund rate? Did it attract a different audience segment? Did customers who paid more leave better reviews? Those are real signals about whether PWYW is creating goodwill or just discount seekers.
There’s also evidence from various real-world settings that PWYW can increase total revenue when customers have a reason to pay (like charity or social causes). But you’ll only know if it’s true for your product if you run the baseline comparison and track the full distribution.
Step 7: Iterate with decision rules (what you change next)
After the first round, don’t “wing it.” Decide what you’ll do based on what you observe. Here are practical decision rules:
If conversion drops
- Make value clearer above the fold (people hesitate when price feels uncertain).
- Try a lower anchor or a pre-filled amount so customers don’t work too hard.
- Consider a time-boxed PWYW test to reduce decision paralysis.
If conversion is fine but median payment is too low
- Increase the suggested minimum slightly (small steps, not big leaps).
- Improve framing: add one concrete line about what payments support.
- Try a default amount pre-selected in the checkout field.
If you see a “pay near zero” spike
- Move PWYW to a more loyal audience (email list, existing customers, community channels).
- Lower the “friction to pay fairly” by showing recommended amounts and explaining why.
- Set expectations in your copy: “Please pay what you can—most customers cover $X–$Y.”
If you see strong payments but margin is still weak
- Re-check your unit economics (especially for physical fulfillment or heavy support time).
- Limit PWYW to digital tiers or cap access if costs spike.
- Use PWYW for onboarding or campaigns, not as your permanent default (at least until you’re confident).
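The decision rules above can be encoded so each round produces an action, not a debate. This is a sketch; the metric keys and the 20% zero-payment threshold are assumptions you'd tune to your own economics:

```python
def next_action(metrics):
    """Map round-one observations to the decision rules above.

    `metrics` keys (hypothetical): conversion_delta (vs baseline),
    median_payment, cost_floor, zero_share, margin_per_order.
    Rules are checked in priority order; the first match wins.
    """
    if metrics["conversion_delta"] < 0:
        return "clarify value above the fold; try a lower anchor or pre-filled amount"
    if metrics["median_payment"] < metrics["cost_floor"]:
        return "raise the suggested minimum slightly; add one concrete framing line"
    if metrics["zero_share"] > 0.20:   # assumed threshold for a "near-zero spike"
        return "move PWYW to a loyal audience; show recommended amounts"
    if metrics["margin_per_order"] <= 0:
        return "re-check unit economics; limit PWYW to digital tiers"
    return "hold steady; vary one variable (anchor, copy, framing) next round"

print(next_action({"conversion_delta": 0.01, "median_payment": 24,
                   "cost_floor": 8, "zero_share": 0.10,
                   "margin_per_order": 15}))
```

The priority ordering is itself a design choice: here a conversion drop outranks a low median, since no conversions means no distribution to analyze.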
Once you have a repeatable pattern, you can run a second experiment with one variable at a time—anchor level, suggested copy, or framing angle. That’s how PWYW becomes a system, not a one-off gamble.

How PWYW pricing performs across different markets
PWYW tends to do best when buyers feel like they’re making a fair choice. That usually shows up in niches where people already value the creator, the mission, or the community.
It can also perform better in settings where buyers see fewer alternatives. If customers don’t have 10 cheaper substitutes, they’re more likely to pay what feels fair rather than trying to undercut you.
Digital products like eBooks often see better results with PWYW because the business risk is lower, and customers can evaluate the value quickly—especially when you include a suggested minimum. Research discussing anchoring effects supports the idea that suggested values can nudge payments toward a “reasonable” range [7].
In contrast, highly competitive markets can be rough. If buyers are trained to hunt for discounts, PWYW might mostly remove the price anchor and let bargain behavior dominate.
Customer behavior and motivations under PWYW
Under PWYW, payment amounts vary for a bunch of reasons: social preferences, mood, income, and—big one—how much customers trust that you’re delivering real value.
Some people pay more because they want to support you. Others pay close to the suggestion because they’re trying to be “fair” without overthinking. And yes, some pay less because they’re price-conscious or because the product didn’t land for them.
Research in service and giving contexts suggests fairness perceptions and satisfaction matter. One example often cited from service studies is that certain customers pay above suggested amounts when they feel loyalty or a desire to do good [2].
So when you design your PWYW offer, you’re really designing for motivations. Clear value, transparent reasons to pay, and trust-building messaging usually beat gimmicks.
Optimizing payment strategies in PWYW models
If you want higher payments, your best levers are usually anchors and framing—not fancy checkout tricks.
Start with suggested minimums. If an eBook has a suggested minimum of $24, customers get a reference point for “reasonable.” Anchoring research supports the idea that these cues shift payment behavior [7].
Next, test framing. Tell customers what their payment does—supports a community, funds updates, keeps the project going. Then measure how it changes median payment and the share of customers paying at/above the anchor.
Keep the testing simple:
- Try two to three suggestion levels.
- Try two versions of the donation/support message (one specific, one generic).
- Track conversion, median payment, and revenue per visitor for each version.
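That testing grid reduces to one summary per variant. A sketch with hypothetical payments for a specific vs generic support message:

```python
from statistics import median

def summarize(variant):
    """Per-variant summary for the simple test grid above.
    `variant` is a dict with hypothetical keys: payments, visitors."""
    p, v = variant["payments"], variant["visitors"]
    return {
        "conversion": len(p) / v,
        "median_payment": median(p) if p else 0,
        "revenue_per_visitor": sum(p) / v,
    }

# Hypothetical results for two support-message framings.
specific = {"payments": [10, 24, 24, 30, 24, 40, 24, 15], "visitors": 400}
generic  = {"payments": [5, 10, 24, 0, 12, 24, 8],        "visitors": 400}

for name, variant in [("specific copy", specific), ("generic copy", generic)]:
    print(name, summarize(variant))
```

With real traffic you'd also want enough orders per variant for the medians to stabilize (see Step 3) before declaring a winner.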
You don’t need to overcomplicate it. PWYW is already a behavioral experiment. Your job is to isolate variables and learn what your audience responds to.
Common challenges (and what to do when they show up)
Challenge: customers pay far below what you expected.
Fix: use a realistic suggested minimum and explain what “fair payment” means in plain language. If your anchor is too high relative to perceived value, people won’t follow it.
Challenge: brand perception worries.
Fix: don’t hide the value. Emphasize quality, outcomes, and what the customer gets. If PWYW is tied to supporting your work, say that clearly—so customers don’t assume you’re discounting.
Challenge: revenue volatility.
Fix: run limited-time tests and analyze the distribution. If your median holds steady and your revenue per visitor beats baseline, volatility is manageable. If not, you’ll need to adjust anchors or target a different audience segment.
Challenge: you didn’t measure well enough.
Fix: next time, track conversion + payment distribution + margin. PWYW without a measurement plan is just gambling with extra steps.
Bottom line: treat each PWYW test like a learning loop. You’re not trying to “win” on day one—you’re trying to figure out what makes your customers pay fairly.
FAQs
What is PWYW pricing?
PWYW pricing lets customers choose how much to pay for a product or service. It often allows payments starting at zero, and it relies on perceived value, fairness, and trust—so customers decide what feels right to them.
When does PWYW work best?
PWYW tends to work best when customers trust the seller, understand the value quickly, and have a reason to pay fairly (like supporting a mission or community). It also helps when your audience isn’t overly conditioned to hunt for the lowest price.
How do I measure a PWYW experiment?
Look at conversion rate, median payment, payment distribution (including how many pay below/at/above your suggested amount), and revenue per visitor. If you have costs, also track gross margin per visitor. Customer feedback can help explain why people paid more or less.
What are the biggest challenges with PWYW?
The biggest challenges are undervaluation (people paying too little), difficulty predicting revenue, and the risk of confusing customers about value. The fix is careful positioning, clear value communication, realistic suggested amounts, and a measurement plan so you can iterate.
References
- [2] Bolton, Lisa E., Luk Warlop, and Joseph W. Alba. “Consumer Perceptions of Price (Un)Fairness.” Journal of Consumer Research, 2003.
- [7] Tversky, Amos, and Daniel Kahneman. “Judgment under Uncertainty: Heuristics and Biases.” Science, 1974.