How to Get the Price Right (2024)

This article was co-authored with Duncan Gilchrist. Sample code, along with basic simulation results, is available on GitHub.


You have a product and you’re selling it. So the price is right. Or is it?

In this post, we work through price setting — what to optimize for, how to learn from historical data, and which experiments to run to find your pricing sweet spot.

Our goal is to provide you with the quantitative tools to price optimally in your own context. As Economists by training and Data Scientists by trade, we do dig into the technical details. But we’ve tried to make the core insights equally accessible to those who prefer to skim over the equations and code.

In order to price better, you’ll first need to decide what “better” means in your context. Here are three potential optimization functions, along with the contexts in which they may be most relevant:

Optimization Option 1: Maximize profit by setting prices to extract maximal surplus.

In symbols, this is Max (p - c)*q where p is the price you set for the good, c is the marginal cost of providing that good, and q is the quantity sold. Since the quantity sold is itself a function of the price, we can equivalently write this as Max (p - c)*q(p).


This optimization function yields the best possible short-term unit economics and is common among businesses outside of Silicon Valley. But it ignores long-run considerations and doesn’t provide much surplus to consumers (or, in the marketplace context, to suppliers either), so likely produces slower growth than the alternatives below.

Optimization Option 2: Maximize quantity of goods sold rather than worrying directly about immediate profits.

This is just Max q(p) where q is the quantity sold (still a function of the price directly). Here, the company deliberately under-prices — relative to short-run profit-maximization — in order to grow quickly. Strategies like this are sometimes called penetration pricing.

While this optimization function yields the fastest growth, it can be quite expensive, especially if there are real marginal costs per unit sold. If you’re going to pursue it, you’ll want to have a compelling case for why quantity sold (or number of users buying) is in and of itself so valuable to you. Here are four potential reasons:

  1. Network effects. As we covered last time, there are cases where a product becomes more valuable the more others are using it. In these cases, underpricing in the early days can help get the flywheel going faster and can pay off with a more valuable product down the line.
  2. Supply-side economies of scale. Marginal costs, c, are not a function of quantity, q, in the formulas above. But if the product is cheaper to produce at scale, consider maximizing quantity until units sold are high enough that marginal costs become manageable.
  3. Learning by doing. You may want lots of volume so you can learn fast, for example because the company faces high fixed costs of keeping shop and uncertainty about demand, and so needs to learn quickly whether or not to stay in the game. This isn’t an uncommon strategy among young tech firms: raise venture funding to build a business that is wildly unprofitable today but has the potential to become profitable once you’ve found product-market fit.
  4. Competitive considerations. Focusing on volume can help a company stay ahead of the competition and capture a good share of a growing market. (But note that the practice of predatory pricing, in which a firm deliberately sets prices very low in an attempt to drive competition out of the market, is illegal in many countries, including the United States.)

Optimization Option 3: Hybrid model. As the name suggests, this is a blend of Options 1 and 2. The idea is to grow quickly, but within your means.

One way to think about this is to maximize quantity sold subject to keeping losses below some threshold, i.e., Max q(p) s.t. (p - c)*q(p) >= -X, where X is the most you’re willing to “spend” for that growth. In setting X, weigh the value of growth (see above) against the costs (for a startup, this could be the cost of the capital needed to fund the spend).
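As an illustration of the hybrid rule, here is a minimal sketch in Python. The linear demand curve q(p) = a - b*p and all parameter values are our own assumptions, not estimates: the rule picks the lowest price, and hence the highest quantity, whose per-period losses stay within the budget X.

```python
# Hedged sketch of the hybrid rule. The linear demand curve q(p) = a - b*p
# and the parameter values below are illustrative assumptions.
def hybrid_price(a, b, c, X):
    """Maximize q(p) subject to (p - c) * q(p) >= -X, on a one-cent grid."""
    best = None  # (price, quantity)
    for i in range(int(a / b * 100) + 1):
        p = i / 100
        q = a - b * p
        if (p - c) * q >= -X and (best is None or q > best[1]):
            best = (p, q)
    return best

# Willing to lose up to X = 50 per period, with marginal cost c = 4:
price, quantity = hybrid_price(a=100, b=10, c=4, X=50)
# The rule lands at the lowest loss-feasible price (here 3.26), well below
# the short-run profit-maximizing price of 7.00 for this demand curve.
```

Note the structure: since demand slopes down, maximizing quantity within the loss budget always means picking the lowest feasible price.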

Deriving optimal price.

For now, suppose we’re maximizing profit (Option 1) and consider a simple model with a constant marginal cost, c, and fixed cost, F, such that profits, π, can be defined by π = (p - c) * q - F.

To find the price that maximizes those profits, we take the first derivative with respect to price and set it equal to zero. The first order condition is

dπ/dp = q + (p - c) * (dq/dp) = 0

which rearranges to the optimal price:

p* = c - q / (dq/dp)

This is a magically simple formula, and one worth keeping in mind. A couple call-outs:

First, fixed costs drop out entirely. Why? As long as we have a product, we’ll incur fixed costs no matter how much we sell. So we won’t have to worry about factoring engineering headcount into pricing.

Second, the optimal price includes a “markup” over marginal cost that is inversely related to the price elasticity of demand. This is captured in the second (and core) term, -q / (dq/dp). The dq/dp represents the change in quantity consumed for a given change in price. Most of this post is focused on how to estimate this value and, correspondingly, profits at any given price. But first, some high-level attributes:

  • dq/dp is almost always negative, i.e., demand is downward sloping. (The exception is the rare Giffen good, which is actually more demanded at higher prices.)
  • If dq/dp is close to zero, i.e., demand is inelastic, then the optimal price, p*, will be high.
  • If dq/dp is a large negative number, i.e., demand is highly elastic, then the optimal price, p*, will approach the marginal cost, c.
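To build intuition for the formula, here is a sketch under an assumed linear demand curve q(p) = a - b*p (our assumption for illustration; the function names are ours too). Plugging dq/dp = -b into p* = c - q/(dq/dp) and solving gives the closed form p* = (a + b*c) / (2*b), and a brute-force search over the profit function recovers the same price:

```python
# Sketch under an assumed linear demand curve q(p) = a - b*p, so dq/dp = -b.
# Substituting into p* = c - q/(dq/dp) and solving: p* = (a + b*c) / (2*b).
def optimal_price_linear(a, b, c):
    return (a + b * c) / (2 * b)

def optimal_price_grid(a, b, c, steps=100_000):
    # Brute-force check: maximize profit (p - c) * q(p) over a fine grid.
    best_p, best_profit = 0.0, float("-inf")
    for i in range(steps + 1):
        p = i * (a / b) / steps  # prices from 0 up to the choke price a/b
        profit = (p - c) * (a - b * p)
        if profit > best_profit:
            best_p, best_profit = p, profit
    return best_p

# With a = 100, b = 10, c = 4, both approaches give p* = 7.0.
```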

Optimal price-setting boils down to answering two questions:

  1. What is dq/dp today, i.e., how price-sensitive are our users at current prices? This will inform whether we want to increase or decrease prices relative to where we are now.
  2. How should we extrapolate dq/dp, i.e., how do price sensitivities change as prices change? This will inform by how much we want to increase or decrease prices.

The remainder of this post centers on statistical estimation of dq/dp — both at current prices and across other potential prices — first using only historical data and then incorporating experimentation. We focus on the profit-maximizing context, but the same price sensitivities are key no matter the optimization function. For example, if the objective is maximizing quantity sold, we’ll still want to know how much in profits we’re forgoing to understand the tradeoffs we’re making and know when to adjust course.

Step 2. Learn from historical price variation

In this section we cover econometric techniques for estimating price sensitivities based on price variation in historical data. We’re assuming the product has been sold at two or more different prices. (If yours hasn’t, you might skip to Step 3.)

First, we’ll need to think carefully about why price changed historically as this will have key implications for the validity of the methods used. In particular, we need to distinguish between two reasons:

Reason 1. Price changed because demand shifted.

For example, you might have increased price around a big PR push or a major improvement that made the product more valuable. Maybe you thought, “People are really loving this thing! We can start charging more.”

For price changes like these that are co-timed with demand shifts, simply regressing quantity sold on price will produce a biased estimate of the price elasticity. In particular, the model will capture both the effect of the price increase (lower quantity) and the co-timed effect of the “people are really loving this thing” (higher quantity). As a result, we risk underestimating true price sensitivity.

In extreme cases the (biased) results can even suggest demand is upward sloping, with more people buying at higher prices. What’s effectively happening is that by naively estimating off of price changes co-timed with demand shifts we can end up accidentally tracing out the (upward-sloping) supply curve — not the demand curve! This isn’t a new mistake (see Working’s classic 1927 paper in the QJE), but it remains a common one. Since standard regression analysis won’t suffice in this context, we’ll need more robust methods for causal inference, several of which we cover below.

Reason 2. Price changed for any reason besides a demand change.

Estimation is more straightforward if the price change wasn’t co-timed with a shift in the demand curve. Maybe we raised price because marginal costs went up, or maybe we just hired a new PM who had a different gut feeling. Whatever the reason, this type of price variation is ripe for a simple regression model approach to estimating price sensitivity. We’ll start there, and build up to dealing with the hairier case of price changes co-timed with demand shifts.

Simple(st) regression model.

Assuming the price variation was driven by something other than demand shifts, we can estimate users’ price sensitivity with a simple regression. We work through three different models using simulated data, and compare the results. The first two models assume pretty good instrumentation on the site; the third allows for coarser data.

Before continuing we note that a wide range of demand model specifications are possible and that we focus here on the simplest possible setup to make the framework clear and build intuition. More sophisticated models, such as the canonical BLP model of Berry, Levinsohn, and Pakes (ECMA 1995), can incorporate random coefficients on price and other characteristics, and allow for highly flexible substitution patterns in the multi-product case. But simple models are always the place to start.

Let’s assume there’s logging at the user by product page level, i.e., we see historical product pageviews for each user, and know both the price the user was shown on that page, and whether or not she chose to buy.

In our Jupyter notebook on GitHub we simulate this historical data. We suppose that on each of 90 days, 10,000 users came to the site and hit the product page (where they are shown the price of the product). In the first half of the sample the price was low, and then it went up — so some users saw a lower price and some saw a higher price. The value of consuming the good has a component that is common across users, plus an i.i.d. user-specific component. Users also have a common price sensitivity. Together, these determine whether or not each user chose to buy the good at the price she was shown.

(Feel free to download the notebook and play around with the different simulation parameters. It can be helpful to understand how an econometric model performs on simulated data — where you’re squarely in the driver’s seat — before implementing it in the wild.)

We structure the data in a dataframe, df, with one row per user x product page view; each row contains the price the user saw and a purchase_binary that is set to 1 if the user purchased (let’s say in some window, e.g., within 24 hours), and zero otherwise. With this data, we estimate price sensitivity in two ways: with an Ordinary Least Squares (OLS) model and with a Logit model.

Simple Model 1: Estimate OLS on purchase binary data.

For the OLS model, we simply estimate an ordinary least squares regression of the purchase binary on price and then estimate the price elasticity as a function of the coefficient on price from the model, the average price, and the average conversion rate.

The R code is straightforward:

ols <- df %>%
  lm(purchase_binary ~ prices, data = .) %>%
  summary()

ols_elasticity <- ols$coefficients[2] * mean(df$prices) /
  mean(df$purchase_binary)

In our simulated data, the estimated price elasticity is -0.20. Recall that price elasticity is the percentage change in quantity purchased for a given percentage change in price. So for a 10% increase in price, the model predicts a 2% decrease in quantity purchased.

Since the increase in price is proportionally greater than the decrease in quantity, raising prices increases revenue in the current range. More generally, so long as the absolute value of the price elasticity is less than one, raising prices increases revenue; but once the absolute value of the price elasticity exceeds one, we’re on the other side of the revenue curve and should lower prices.
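As a quick sanity check of that claim, here is the arithmetic for the elasticity of -0.20 estimated above (plain Python, just to make the revenue logic explicit):

```python
# Back-of-envelope check using the elasticity of -0.20 from the simulation.
elasticity = -0.20
price_change = 0.10                          # raise price by 10%
quantity_change = elasticity * price_change  # quantity falls by ~2%
revenue_multiplier = (1 + price_change) * (1 + quantity_change)
# 1.10 * 0.98 = 1.078: revenue rises ~7.8% in this range, as expected
# whenever the absolute value of the elasticity is below one.
```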

Simple Model 2: Estimate Logit on purchase binary data.

We again regress the purchase binary on price, but estimate a Logit model. Here, the price elasticity takes a slightly different form, but can similarly be estimated as a function of the coefficient on price from the model, the average price, and the average conversion rate. In R:

logit <- df %>%
  glm(formula = purchase_binary ~ prices,
      family = binomial(link = 'logit'), data = .) %>%
  summary()

logit_elasticity <- logit$coefficients[2] *
  (1 - mean(df$purchase_binary)) *
  mean(df$prices)

Simple Model 3: Estimate OLS on total quantity purchased data.

The third model is an OLS model (like Model 1) but can be estimated with less complete data: it requires only that we observe the total quantity purchased on a given day (or week, or month), and the price charged on that day (week, month). This will be useful in cases where we might lack user-level eventing, and even works when we don’t know how many people hit the product page. Here, each row in the data frame is simply a day and includes the price charged in that period and the number of sales made.

We run a linear regression of number of sales on price, and compute the elasticity as a function of the resulting price coefficient, the average price, and the average quantity. In R:

quantity_model <- df_by_day %>%
  lm(quantity ~ prices, data = .) %>%
  summary()

quantity_elasticity <- quantity_model$coefficients[2] *
  mean(df_by_day$prices) /
  mean(df_by_day$quantity)

Comparing Price Elasticities.

How do the results of these models compare? In our example, the elasticities estimated from the OLS binary and Logit binary models are nearly identical (within a few basis points of each other). And depending on the time-series structure of the data, the elasticity estimates resulting from the aggregated versus disaggregated OLS models are also likely to be similar.
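The aggregation point is easy to check with a small simulation. The sketch below (in Python, with illustrative parameters, not those of our notebook) shows that with a balanced panel, i.e., the same number of users per day, the user-level and day-level OLS elasticities coincide:

```python
import random

# Illustrative simulation: 90 days, 1,000 users per day; price is 1.0 for
# the first 45 days, then 1.5. Each user buys with probability
# 0.30 - 0.10 * price. All numbers here are assumptions for illustration.
random.seed(0)
days, n = 90, 1000
rows = []  # one (price, purchase_binary) tuple per user-day
for d in range(days):
    price = 1.0 if d < 45 else 1.5
    for _ in range(n):
        rows.append((price, 1 if random.random() < 0.30 - 0.10 * price else 0))

def ols_slope(xs, ys):
    # Slope of a simple OLS regression of ys on xs: cov(x, y) / var(x).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Model 1 style: purchase binary on price, at the user level.
px = [p for p, _ in rows]
py = [y for _, y in rows]
user_elasticity = ols_slope(px, py) * (sum(px) / len(px)) / (sum(py) / len(py))

# Model 3 style: daily quantity on daily price.
day_p = [1.0] * 45 + [1.5] * 45
day_q = [sum(y for _, y in rows[d * n:(d + 1) * n]) for d in range(days)]
agg_elasticity = (ols_slope(day_p, day_q) *
                  (sum(day_p) / days) / (sum(day_q) / days))
# With a balanced panel the two estimates agree up to rounding.
```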

So why bother with all three?

First, the disaggregated data allows us to control for individual-level features that might otherwise induce noise — or even bias — in our estimates. We return to this below when we cover controlled regression. Of course, if all we have is aggregate data, then that’s what we’ll use.

Second, while the OLS model treats the demand curve as a straight line, the Logit’s functional form assumes some curvature in demand. So the two functional forms can diverge somewhat in their optimal price recommendations: if the model is forced to extrapolate away from observed data then it will rely more on its functional form. It’s often worth trying different functional forms to see how much the results are due to the functional form; if the results are highly dependent on model choice then we should ask how reasonable the model’s underlying assumptions are.

Estimated Demand, the Implied Profit Function, & Optimal Price.

Now to the question of the day: What do our estimated elasticities imply for the relationship between price and profits and, correspondingly, optimal prices?

Each model’s estimates allow us to trace out the demand curve and the profit function it implies. These can be computed directly from the estimates. In R:

demand_curves <- data.frame(prices = seq(from = 0, to = 3, by = 0.01))

demand_curves %<>%
  mutate(
    ols = n *
      (ols$coefficients[1] + ols$coefficients[2] * prices),
    logit = n *
      (exp(logit$coefficients[1] + logit$coefficients[2] * prices) /
       (1 + exp(logit$coefficients[1] + logit$coefficients[2] * prices)))
  ) %>%
  gather(model, quantities, -prices)

profit_curves <- demand_curves %>%
  mutate(profit = quantities * prices)

Panel A of the figure below shows the demand curves from the OLS model (Model 1) and from the Logit model (Model 2); Panel B shows the corresponding profit curves assuming (for now) that marginal costs are zero.

[Figure: Panel A, estimated demand curves (OLS vs. Logit); Panel B, implied profit curves]

If we’re currently to the left of the peak on the profit curve, elasticity of demand is less than one (in absolute value), so increasing prices will increase revenue; if, in contrast, we’re to the right of the peak, elasticity of demand is greater than one (in absolute value), and we should decrease prices.

The optimal price, p*, is the global maximum of the curves in Panel B. In our example, the Logit model implies an optimal price within 5% of that implied by the OLS model. This adds confidence to our estimates. But it’s important to note that in our simulated data we actually only observe two prices — denoted by the dashed blue lines — and that optimal prices predicted by both models are substantially higher than either of these. If this were the real world, we would want to be confident in the out-of-sample properties of our models before trusting the models to get it exactly right so far outside our data. In practice, we’d likely want to test a moderate increase in price and re-run the analysis to make sure the curvature doesn’t change.

What about marginal costs? We’ve so far assumed that marginal costs are zero. This might be close enough to true in some tech companies where once the product is built the marginal cost of serving an additional user is hosting, plus maybe a little customer support. But in many other contexts marginal costs are substantial. What then?

Our formula for optimal pricing tells us that p* = c - q / (dq/dp). Here, marginal costs are a bit sneaky — they enter directly, through the c, but also indirectly because a change in marginal cost will change prices which in turn changes both q and dq/dp. This makes optimal prices “approximately” linear in marginal costs: the indirect effect of changes in c on q and dq/dp moderates slightly the overall change in price. So if marginal costs increase by $1, optimal prices are likely to increase by a bit less.
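To make the pass-through logic concrete, here is the linear-demand case (an assumption on our part; other demand shapes imply different, sometimes much higher, pass-through rates). With q(p) = a - b*p the closed form is p* = (a + b*c) / (2*b), so dp*/dc = 1/2 and exactly half of a cost increase passes into the optimal price:

```python
# Pass-through in the assumed linear-demand case q(p) = a - b*p, where
# p* = (a + b*c) / (2*b) and hence dp*/dc = 1/2: each $1 of marginal cost
# raises the optimal price by exactly $0.50. Other demand shapes imply
# different pass-through rates (constant-elasticity demand implies more
# than full pass-through, for example).
def p_star(a, b, c):
    return (a + b * c) / (2 * b)

base = p_star(a=100, b=10, c=4)    # 7.0
bumped = p_star(a=100, b=10, c=5)  # 7.5
pass_through = bumped - base       # 0.5 per $1 of marginal cost
```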

Recall that the core challenge at hand is to estimate the causal effect of price on quantity consumed. From there, it’s straightforward to trace out the demand curve and implied profit curve.

We started out by making the econometrics easy, assuming that changes in prices are unrelated to changes in demand. Of course, life with real-world data is seldom that simple. Techniques for causal inference, like the ones we covered in our earlier post, can help make headway; here we touch briefly on the specifics most relevant to pricing. Which of the methods you choose to use to recover your product’s demand curve will depend on your historical pricing context.

Controlled regression

Or what to do when some (but not all) price changes are co-timed with product — or other demand — changes.

Assume some (but not all) of the price changes were co-timed with measurable improvements in product quality. If we naively run a regression model without any controls, we might think we have upward sloping demand (Panel A below). This is a common mistake! The good news is that if the product changes were measurable, we can add them as controls and recover the true elasticity (Panel B).

[Figure: Panel A, naive regression without controls (apparently upward-sloping demand); Panel B, regression controlling for product quality, recovering the true elasticity]

Besides quality, two other biggies to consider controlling for are time trends and user-level demographics.

Controlling for time trends helps capture time-varying demand shifters that at best add noise and at worst add bias. These demand shifters can include the changing popularity of your product, seasonal and holiday-driven variation, and / or the changing composition of users visiting your site (especially important if conversion is changing across cohorts).

Controlling for key user-level features can remove additional potential confounders (and can be interacted with price to produce richer elasticity estimates). User income tends to be a useful covariate: higher income people are often less price sensitive, so to the extent that user wealth is heterogeneous it is likely to be highly explanatory. You can even do this with aggregated data, e.g., by matching census-level incomes to IP addresses. If available, it’s also worth including proxies that capture how much users value your product relative to other products. For example, if you make software that runs exclusively on Apple computers, then knowing — and controlling for — the fact that a site visitor is running OS X captures something important about her inherent willingness to pay.

Regression Discontinuity Design

Or how to estimate off a single, global price change.

Regression discontinuity design, also known as RDD, is a statistical approach to causal inference that focuses on a cut-point that, within some narrow range, can be thought of as a local randomized experiment. Here, our cut-point is simply a price change. In brief, we compare quantity sold just before the change to quantity sold just after. If we have enough users coming in within some narrow band around the change, we can use this discontinuity to estimate the price elasticity.
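A minimal sketch of this before-and-after comparison follows. The event-log shape, the window length, and the arc-style elasticity formula are our own assumptions for illustration; a real analysis would also compute standard errors and apply sample restrictions:

```python
# Illustrative RDD-style computation: compare conversion in a narrow window
# just before the price change to the window just after. The event-log
# fields (timestamp, price_shown, purchased) are assumptions.
def rdd_elasticity(events, change_time, window):
    """events: iterable of (timestamp, price_shown, purchased 0/1) tuples."""
    pre = [e for e in events if change_time - window <= e[0] < change_time]
    post = [e for e in events if change_time <= e[0] < change_time + window]
    conv_pre = sum(e[2] for e in pre) / len(pre)
    conv_post = sum(e[2] for e in post) / len(post)
    p_pre, p_post = pre[0][1], post[0][1]
    pct_q = (conv_post - conv_pre) / conv_pre  # % change in conversion
    pct_p = (p_post - p_pre) / p_pre           # % change in price
    return pct_q / pct_p
```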

A couple caveats in the pricing context:

First, we’re assuming no confounding discontinuities. As long as we’re looking in a narrow time window, it’s probably safe to assume that market demand hasn’t changed dramatically over that period; but we will also want to make sure we don’t have a marketing push, or a change in our product, over the period in question.

Second, remember that the goal is to estimate the steady state effect of the price on quantity sold. If we look across all users, some of whom were served the price in both periods, we may be capturing not only the effect of the price itself but also the effect of the price change.

Let’s consider an example: Jane comes to the site on Monday and sees the product costs $49 but isn’t quite ready to buy; on Tuesday the price change is implemented; on Wednesday, she comes back and sees the product costs $39. Jane buys on Wednesday, not just because it’s $39, but because it’s cheaper than she expected (and than it used to be); she might even be worried the lower price won’t last.

Because we are trying to estimate the general equilibrium effect of price — and the effect of the price change will be only temporary — to optimize correctly we need to parse out the effect of price itself. How can we isolate the effect of the price from the effect of the price change?

One blunt but effective way is to restrict the sample to users hitting the site for the first time. Since these users were never exposed to any historical price (at least not onsite), they are unlikely to be responding to the change.

A second, more nuanced approach is to restrict the sample to users, both new and existing, who are being served a price for the first time. Relative to the first option, this one preserves more data but also requires better logging. Of course you’ll want to impose these sample restrictions not just for the “post change” period, but also for the “pre change” period so that users are comparable throughout the range.

A third approach is to build a structural model that explicitly incorporates the dynamics of users observing price changes. This is probably only worth doing if you’re planning on changing prices often or if you have a lot of repeat traffic; in that case check out Hendel and Nevo (ECMA 2006) for a formal treatment.

Difference-in-Differences

Or how to estimate off a partial price change (for some products or some users).

Difference-in-differences, or DD, is similar in spirit to RDD except it relies on the existence of two groups — one that was served the new price (after some date) and another that experienced no such change. Because causal inference through DD considers a control group, it is generally more robust to confounders.

Suppose we sell two unrelated products, and changed price for one but not for the other. This gives a natural “control” in the product where price did not change. We can then use trends over time in sales of the control product to calculate the counterfactual we would have expected in the treatment product absent the price change.
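The basic DD computation is just two subtractions. A hedged sketch, with illustrative daily sales figures of our own invention:

```python
from statistics import mean

# Hedged DD sketch: the treated product's price changed; the control
# product's did not. All sales figures below are illustrative.
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    treated_change = mean(treat_post) - mean(treat_pre)
    control_change = mean(ctrl_post) - mean(ctrl_pre)  # counterfactual trend
    return treated_change - control_change

# Treated sales fall from ~100 to ~80 while the control drifts up from
# ~50 to ~55, so the price change itself is credited with -20 - 5 = -25.
effect = did_estimate([100, 100], [80, 80], [50, 50], [55, 55])
```

Dividing the resulting quantity effect by the percentage price change (and by baseline quantity) then yields the elasticity, just as in the RDD case.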

Our causal inference post provides more detail on DD, including a graphical representation, sample R code, and details on the assumptions needed for internal and external validity of the resulting estimates. Briefly, a couple notes in the pricing context:

First, just as with RDD, we may want to estimate on new users only (or users new to observing price) to avoid confounding the effect of the price with the effect of the price change. Second, if the “control” is another product, we’ll want to think about the extent to which the products are complements or substitutes; if they’re closely related, the price change in one product will shift demand for the other, invalidating it as a true control. Third, the example above suggests using other products as controls; alternatively, we can consider whether there are control users, e.g., if users in some markets or regions experienced a price change while users in others did not.

Instrumental Variables Model

Or how to estimate off of changing cost structure, exchange rates, or other quasi-random variation.

In an instrumental variable model, or IV for short, we “instrument” for price with some other feature, Z, that impacts quantity sold only through its effect on price. We can use the resulting estimates to back out the causal effect of price on quantity sold.

The technical implementation of IV is relatively straightforward, but finding good instruments can be challenging. Here, an instrument must satisfy both of the following:

  1. It must move price, i.e., produce a strong first stage, and
  2. It must influence quantity sold only through its effect on price, i.e., the exclusion restriction must hold.

The good news is that in the pricing context we often have natural instruments to consider:

  • If we change prices when the cost of underlying goods changes (i.e., when marginal costs change), we can instrument for price with marginal costs.
  • If we are in international markets but charge in fixed USD, we can instrument for price with variation in exchange rates.
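As a sketch of the mechanics: with a single instrument, the IV estimate reduces to a ratio of covariances (the Wald estimator, which equals 2SLS in the one-instrument case). The data and variable names below are illustrative, constructed so that price is endogenous, moving with a demand shock, while the instrument is not:

```python
# Hedged IV sketch. With a single instrument z (say, marginal cost) the IV
# estimate of dq/dp is cov(z, q) / cov(z, p), the Wald estimator. All data
# below are illustrative.
def cov(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def iv_slope(z, p, q):
    return cov(z, q) / cov(z, p)

z = [0, 1, 2, 3]                  # instrument (e.g., unit cost)
u = [1, -1, -1, 1]                # demand shocks, uncorrelated with z
p = [1 + zi + ui for zi, ui in zip(z, u)]           # price moves with both
q = [10 - 2 * pi + 3 * ui for pi, ui in zip(p, u)]  # true dq/dp = -2
naive = cov(p, q) / cov(p, p)     # OLS slope, biased toward zero here
causal = iv_slope(z, p, q)        # recovers the true slope of -2
```

The naive OLS slope here is attenuated because the demand shock raises both price and quantity; the instrument strips that co-movement out.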

Step 3. Run some experiments to get the additional insights you need

If we’re low on exogenous variation in prices or are looking to really nail pricing, experimenting with prices directly can be a good next step.

The most straightforward approach is a full randomization — i.e., AB testing price. But many companies shy away from full randomization. Whether we’re comfortable running a pricing AB test probably depends on a number of factors, including the nature of our product, our stage of development, and the sensitivity of our users.

A more palatable approach may be to create exogenous variation in prices by changing prices over time, and / or across products and user groups. We can then use the causal inference methods described earlier to analyze price sensitivities. For example, if we introduce an abrupt price change we can implement a regression discontinuity design; and if we introduce an abrupt price change but restrict it to some geographies, we’ve set ourselves up to estimate price elasticity with a difference-in-differences approach.

Even then, price changes have risks. Variation in pricing over time can diminish user trust, create confusion, and in some cases result in negative PR. Frequently changing pricing can also create expectations among users that prices will change and lead to “wait and see” behavior in anticipation of future price drops.

There are a few other, potentially lighter-weight ways to learn something about the demand curve. These include promotion or discount tests, painted door tests, and discrete choice surveys. Each has pros and cons. Briefly:

Promotion tests. We’ve all gotten discount coupons by email or even, in the old days, through the letterbox. Randomly changing prices for some users and not others through promotions can feel more justifiable than randomizing price directly; for example, the promotion may be addressed to the “most valued customers” or customers who “haven’t been seen in awhile.” So promotion tests generally don’t create distrust (and may engender goodwill). But promotions are not ideal when it comes to estimating price elasticities. First, we often end up estimating the combined effect of receiving a promotional message and of the discount itself. Second, even if we send all users similar messaging and just randomize the size of the promotion, we’re measuring sensitivity to promotions, which is likely different from sensitivity to price. Third, promotions only allow us to test lower (never higher) prices; in principle one could increase base price across the board, but the brand implications of perpetual promos are a complicated beast.

Painted door tests. The idea is to show higher prices to some individuals (e.g., in paid ads or on the product discovery pages), and measure the effect on click through. These tend to be simple tests to implement — often just a quick front-end change. But because they are “painted doors”, what is measured is the effect of price on click through, not on conversion (ultimately all users pay the same price). For painted door tests to be useful in estimating true price elasticity, we need to believe abandonment on the payment page is price invariant (or otherwise known); unfortunately, it often is not.

Discrete choice surveys. In discrete choice surveys, we pitch different products at different prices to current or potential users, ask them to choose among them, and use their responses to estimate willingness to pay. The results can be interesting for understanding willingness to pay for different types of products, although the approach does require that each product be relatively easy to describe. Unless we’re running these on our own users, we should also expect the surveys to be expensive to push out to meaningful samples.

So far we’ve assumed a single product with a single price across all users. But we often have multiple products, different types of users, and an evolving competitive landscape. The profit curve we trace out could look different for different products, groups of users, and time periods; correspondingly, optimal prices likely differ across products, users, and time.


In this context, it can be tempting to try to run loads of pricing experiments. But keep in mind that each experiment has costs, not just in terms of engineering and analytics time, but more crucially in terms of the consistency of user experience. Running too many tests at once can also invalidate the results, for example if there are interactions across tests and / or if users become aware of the testing. So we want to be smart about which experiments we run.

One approach is to build an experimental strategy around working hypotheses, as opposed to taking a more throw-prices-at-the-wall-and-see-what-sticks approach. And economic theory provides some insights that can help us generate testable working hypotheses in a range of pricing contexts. (These can also be helpful as we think about how to evolve our product to support higher prices.)

Elasticity Insight 1: A given increase in price results in less of a hit to quantity when fewer and / or worse substitutes are available. Some applications:

  • Consider higher prices for products that have only limited, imperfect, and / or expensive substitutes. (The exception is products with few substitutes but limited current uptake, e.g., niche art or a product we have to “talk people into” as these have small aggregate demand.)
  • If comfortable charging different prices to different users, consider higher prices for users who have relatively less access to substitutes, and / or who value the product relatively more than the alternatives.

Elasticity Insight 2: A given increase in price will result in less of a hit to quantity when the price represents a smaller share of the buyer’s income. Some applications:

  • If the product has a global user base, consider relatively higher prices in markets where consumers have higher incomes. One simple implementation is to incorporate purchasing power parity conversion rates into a global pricing model so as to make the “cost” of the product a comparable market basket of goods across countries (the World Bank publishes the full index here).
  • If selling multiple products, consider relatively higher prices on the products that appeal more to high-income consumers.
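One simple sketch of the PPP implementation mentioned above: scale a base US price by each market’s price level ratio (PPP conversion factor divided by the market exchange rate), so the price represents a comparable basket of goods everywhere. The ratios below are placeholders, not the World Bank’s actual figures:

```python
base_price_usd = 20.00

# Price level ratios (PPP conversion factor / market exchange rate).
# Illustrative values only -- in practice, pull the World Bank's published index.
price_level_ratio = {
    "US": 1.00,
    "DE": 0.95,
    "BR": 0.55,
    "IN": 0.30,
}

# PPP-adjusted local prices (still quoted in USD) representing a
# comparable basket of goods in each market.
local_price_usd = {country: round(base_price_usd * ratio, 2)
                   for country, ratio in price_level_ratio.items()}
```

A real implementation would also round to locally sensible price points (e.g., charm prices in local currency) rather than applying the ratio mechanically.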

Elasticity Insight 3: A given increase in price will result in more of a hit to quantity in the long run than the short run because users will eventually find alternatives, or competitors might come knocking. Some applications:

  • Track the longer run effects of a price change; the initial quantity hit from a price increase (for example) may be a lower bound on the true effect, especially if substitutes are available or become available.
  • Set the product up to sustain higher prices over time through continued innovation and product differentiation: actively build out IP, exclusive content, brand loyalty, and other competitive moats.
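A back-of-the-envelope way to track the first application: compare the quantity hit immediately after a price change with the hit several periods later. The weekly quantities below are illustrative; week 0 is the price change:

```python
# Illustrative weekly quantities around a price increase.
weekly_quantity = [100, 102, 98, 101,        # four weeks before the change
                   90, 88, 84, 80, 78, 76]   # six weeks after the change

baseline = sum(weekly_quantity[:4]) / 4               # pre-change average
short_run = weekly_quantity[4] / baseline - 1         # first post-change week
long_run = sum(weekly_quantity[-2:]) / 2 / baseline - 1  # latest two weeks
```

In this toy series the long-run drop is more than twice the first week’s, consistent with the initial hit being a lower bound on the true effect. A real analysis would of course need a control for trend and seasonality (e.g., a holdout market or pre-period forecast) before attributing the drift to the price change.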

Finally, while theory can provide a clean framework for pricing, it should not be applied in a vacuum. Especially when money is at stake, behavioral factors have significant influence on user decision-making. So in price setting — and price testing — it’s good also to keep in mind the power of simplicity for reducing cognitive load, of transparency for building trust, and of fairness for strengthening the relationship with users.

Good luck getting the price right!

Comments, suggestions, or other ideas? We’d love to hear from you! You can find us on Twitter — @emilygsands and @dsgilchrist or email us at [email protected] and [email protected].
