Monday, April 5, 2010

Switching Costs and Learning

Came across a really interesting paper the other day by Matthew Osbourne of the U.S. DOJ's Antitrust Division. I'll be working through it over the next couple of days.

When it comes to introducing new products, there are lots of things going on that are easily confused. For starters, suppose that the new product is introduced at a low introductory price, with the idea that the price will increase later. If we observe someone trying the product and then abandoning it, are we seeing a person who is very price sensitive, or are we seeing someone who learned that he didn't like the product? If you aren't careful, these two behaviors will look the same.

Another thing to consider is the counter-intuition behind a Bass-style diffusion process. (A Bass diffusion model tries to separate the effect of early adopters from that of late adopters, with the result that the approach to full penetration of the new product can take several different shapes.) If it is true that learning has value (and it is true), then we would expect to see very rapid experimentation with new products, leading to a very rapid achievement of the steady-state penetration.
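As a quick reference for the shape of that penetration path, here is a minimal sketch of the standard Bass model in Python; the parameter values are arbitrary and purely for illustration.

    import numpy as np

    def bass_cumulative_adoption(t, p, q):
        # Cumulative fraction of the market that has adopted by time t.
        # p: coefficient of innovation (external influence, e.g. advertising)
        # q: coefficient of imitation (internal influence, word of mouth)
        e = np.exp(-(p + q) * t)
        return (1.0 - e) / (1.0 + (q / p) * e)

    # Arbitrary illustrative parameters: imitation well above innovation
    # gives the familiar slow-start, S-shaped penetration path.
    t = np.linspace(0, 20, 21)
    print(np.round(bass_cumulative_adoption(t, p=0.03, q=0.38), 3))

With imitation (q) well above innovation (p), the curve starts slowly and only later accelerates toward full penetration.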

That rapid rush to the steady state isn't what we see, typically. And one really good reason why is switching costs. If consumers have switching costs, the value of learning has to be weighed against those costs, and against the possibility that you will learn you don't like the new product and have to incur the cost of switching back to the original product.
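A back-of-the-envelope way to see the trade-off (my own illustration, not anything from the paper; every number is hypothetical):

    def expected_value_of_trial(p_like, gain_if_liked, switching_cost):
        # p_like: probability the consumer ends up liking the new product
        # gain_if_liked: value of having found a better product
        # switching_cost: cost of switching back if the trial disappoints
        return p_like * gain_if_liked - (1.0 - p_like) * switching_cost

    # Hypothetical numbers: the same 40% chance of liking the product is
    # worth a trial when switching back is cheap, but not when it is costly.
    print(round(expected_value_of_trial(0.4, 10.0, 2.0), 2))   # 2.8  -> try it
    print(round(expected_value_of_trial(0.4, 10.0, 12.0), 2))  # -3.2 -> stay put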

So, the consumer's decision is risky, dynamic, and forward-looking. Turns out it's pretty important, too. According to Osbourne, ignoring learning will lead to models that underestimate the own-price elasticity of new products by 30%, while ignoring switching costs will lead to underestimates of up to 60%.

So, it's a pretty big deal: a firm can spend hundreds of millions introducing a new product, and getting the pricing right matters. I'll be blogging some insights from the paper and looking for applications to various industries.

Monday, March 29, 2010

More CRM

So, we have information about customers -- our own and those not our own -- and we want to accomplish a few things:

  1. We want to hold on to the customers with the greatest value
  2. We want to encourage customers to increase their value
  3. We want to change non-customers into customers

Not that complicated, really. And let's suppose that we could categorize everybody in the market with a nice vector containing their

  1. Strength of preference for the firm
  2. Contribution margin
  3. Responsiveness to promotions

This assumes away a pretty big set of problems, but I want to focus on what a policy should look like from a strategic perspective. And there are interesting strategic problems with the goals of CRM. A quick outline of them looks like this:

Customers have a high value either because their intrinsic preference for our firm makes them unlikely to switch or else because they have a high contribution margin. If they have a strong preference for our firm, there seems to be little reason to invest much in trying to retain them. If they have a high contribution margin, they become prime targets for other firms, who will invest resources trying to poach them. Obviously, having one firm investing in retention and another firm investing in poaching can result in high-value customers becoming lower-value customers, which wasn't what we wanted.

Making investments to increase a customer's margin turns that customer into a high-value customer, which increases their appeal as a target for other firms and could put us back into the bidding war outlined in the previous paragraph.

The customers that are the most attractive targets for switching to our firm are also those customers their current firm is most interested in keeping.

So, strategic interactions might matter quite a lot. To complicate matters, we sometimes start asking the wrong sorts of questions. For example, when it comes to loyalty programs, we might get into a debate over whether to target customers who occasionally make large purchases or customers who frequently make small purchases. Who can possibly say, without knowing why the customers purchase as they do?

So, are the frequent customers simply those who respond to promotions, or are they displaying a strong attachment to the brand? The right policy is determined by the answer to this question. Are the infrequent customers less prone to respond to promotions? If so, then competitors' attempts to poach them might be less effective -- suggesting a lower level of retention efforts would be required.

In short, we simply can't look at customers on a single dimension and expect to develop CRM policies that are right.

Tuesday, February 23, 2010

Strategic CRM

One of the interesting CRM challenges is identifying whom to treat. One recent discussion among analytical marketing types on LinkedIn was about precisely this question. I think I can paraphrase it here:



For a customer loyalty program, is it right to choose customers on the basis of their total sales, total visits, etc., or is it better to choose customers based on their trends? In other words, who is more likely to be loyal: a customer who makes one or two big purchases infrequently or one who makes medium purchases frequently?

And the idea is that this firm wants to set up some sort of loyalty program to keep the good purchase decisions going.


The reason why this question might cause the strategically inclined to wonder a little bit is because both sets of customers seem to be telling you that they are loyal to the firm's products. When you have a customer who chooses your firm with a high probability already, what do you hope to accomplish with a loyalty program?


One way to think about it is with a simple logistic probability structure.

[Figure: a logistic curve, with the customer's loyalty score on the horizontal axis and the probability of purchasing from the firm on the vertical axis]

Right. So, this is pretty simple -- a two-dimensional representation of probability. We might think of X as the loyalty of the customer, with higher loyalty making it more likely that the customer will purchase from your company. (The numerical values in the figure are arbitrary.)

Like everything in economics, the decision is made by comparing the marginal (incremental) effect of the proposed policy with the marginal cost. So, suppose we have some program that we figure will increase the loyalty score by a unit, from 25 to 26, 35 to 36, or whatever. Well, if we institute the program for people with loyalty scores of 35, the increase in probability is really small: you are already capturing nearly every purchase decision from those loyal customers to begin with.

Same thing is true on the lowest end of the loyalty scale: there just isn't much to be gained by bumping up these low-loyalty customers by a unit or two.

Clearly, the greatest response comes from increasing the loyalty of that bunch in the middle: the slope of the graph is highest for these customers. Which means that the investment in loyalty pays off best there.
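To make that concrete, here is a minimal sketch of the same calculation in Python; the midpoint and slope of the logistic curve are arbitrary illustration values, not estimates from any data.

    import math

    def purchase_probability(loyalty, midpoint=25.0, slope=0.3):
        # Logistic mapping from a loyalty score to the probability of purchase.
        # midpoint and slope are arbitrary: probability is 0.5 at the midpoint.
        return 1.0 / (1.0 + math.exp(-slope * (loyalty - midpoint)))

    def marginal_gain(loyalty):
        # Increase in purchase probability from a one-unit bump in loyalty.
        return purchase_probability(loyalty + 1) - purchase_probability(loyalty)

    # The payoff to a one-unit loyalty bump is tiny at the extremes and
    # largest in the middle of the scale, where the curve is steepest.
    for score in (10, 25, 35):
        print(score, round(marginal_gain(score), 4))

The exact numbers don't matter; the shape does: the middle of the curve is where a unit of loyalty buys the most probability.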

So, in an important way, the discussion question misses the point. Who cares whether frequent purchases of small value imply greater loyalty than occasional purchases of large value? Both these groups are probably out on the right-hand end of the loyalty scale anyway, and that means we are probably more interested in finding a way to exclude them from whatever loyalty program we'd want to inaugurate.

I think, following a really interesting article in Marketing Science by Musalem and Joshi, that we want to focus on three attributes of our customers: intrinsic preference for our company (intrinsic loyalty), margin (or lifetime value or something similar), and responsiveness to retention and acquisition efforts. Not only that, but an effective CRM program should consider the strategic interactions between competing firms through time.

And those subjects are going to be discussed next.

Wednesday, January 13, 2010

Solving Other People's Problems

There is an interesting literature on the ways managers select which projects to pursue. Not to give the game away, but it doesn't have much to do with selecting NPV-positive projects. Sorry if that disillusions anybody.

The decision is made like any other -- by utility maximization on the part of the decision-maker. And the question for anybody who wants a decision-maker to implement a particular program is what actually goes into that utility function.

This is on my mind because of a couple recent events. Most recently, I engaged in a small consulting project for an event marketing firm that was having a little problem with getting the subsidiary of one of their clients to sign off on a marketing plan that the parent company had already endorsed.

In this case, the problem was pretty straightforward: the reason the subsidiary managers have their jobs is that they have successfully argued that the subsidiary's objectives and strategies are different enough from the parent firm's that the parent would have no hope of running the subsidiary effectively without them. You can't say all that and then turn around and adopt an event marketing plan that is essentially the parent's plan with some small tweaks to include the subsidiary's brand.

Part of the value of the marketing proposal is its distinctness from the parent's proposal. That it is distinct is probably the most important thing about any project -- from the perspective of the subsidiary.

The other thing was a recent proposal from some consultants to the firm where I used to work. It's not a secret that firms in that industry have a significant gap in understanding pricing -- it is the legacy of having been a regulated service provider for so long. Anyway, the consultants from one of the big firms came in and basically made three points.
  1. You guys don't have any idea how to price your product
  2. If you priced properly, you could increase profit by a very significant amount
  3. You should pay us a very tiny fraction of that amount to tell you how to solve point 1

Do the NPV on that. Heck, do the NPV after assuming that the consultants were mentally unstable circus barkers overstating the benefits by a factor of twenty or thirty, and it is still so positive that you have to do it. Not in the budget? Dude, this is a goose that lays golden eggs; take up a collection.
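Just to show the arithmetic, here is a hedged sketch; every number in it is hypothetical and only chosen to show how lopsided the calculation stays even after a heavy haircut to the consultants' claims.

    def npv(cash_flows, rate):
        # Net present value of a list of annual cash flows (year 0 first).
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    # Hypothetical numbers: suppose the consultants claim $50M a year in
    # extra profit, we discount that claim by a factor of 25, the fee is
    # $2M up front, and we count five years of benefit at a 10% rate.
    claimed_annual_benefit = 50_000_000
    haircut = 25
    fee = 2_000_000
    flows = [-fee] + [claimed_annual_benefit / haircut] * 5
    print(round(npv(flows, 0.10)))  # still comfortably positive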

In this case the problem is also pretty simple. Marketing -- the division that asked for the consultants to present all this -- was measured on volume. For them, the profit targets were merely a constraint, not an objective. What they are supposed to do -- in this particular compensation scheme -- is to maximize volume subject to a profit constraint. In an industry where everybody knows the price elasticity is less than unity, the marketing function is really not interested in learning how expensive (in terms of foregone profit) it is for the firm to put up with their volume goals.

They don't want to hear the consultants setting a pricing policy to increase profit. What they want is an argument that lets them off the hook for low volume numbers -- now and in the future. What they want is the ability to support the claim that some numbers are low because of the profit constraint. And that has a whole different NPV.

And so the problem with solving other people's problems is simply that they are so unlikely to come out and tell you what the problem really is. Whether working from outside or inside the firm, figuring out what goes in the utility function and why is just about the most important question you have to answer.

Tuesday, January 5, 2010

Principal Component Analysis

One problem for analytical marketers is that lots of questions don't have particularly strong theoretical roots.

Take a question like the speed with which an innovation is adopted by the market. There is an awful lot of information about product diffusion and, to be honest, there is nothing easier than finding some parameters to drop into a Bass model. The problem is with forecasting the diffusion of new products.

Basically, if you plan to use a Bass model, you are going to have to select some parameters. And it isn't clear what criteria matter for that problem. Does the similarity of the product matter more than the similarity of markets? And what do we mean by similarity? Is a product aimed at the same demographic in a different country a similar market?

Right? There are going to be lots of questions like this, requiring some sort of good judgment to be used. And that's a pretty big problem, since the whole point of analytical marketing is to make these judgments rigorous.

One possible improvement is a principal component analysis. Basically, the goal is to reduce the number of important factors that determine an outcome to the fewest possible -- subsuming the secondary factors into the smaller set of important ones.
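For anyone who wants to see the mechanics, here is a minimal sketch on a made-up table of launch descriptors; the descriptor names, the random data, and the choice of two components are all just illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # Made-up data: 50 hypothetical product launches described by four
    # descriptors (say, price point, market size, similarity to existing
    # products, advertising intensity).
    X = rng.normal(size=(50, 4))

    pca = PCA(n_components=2)
    scores = pca.fit_transform(X)         # each launch reduced to 2 composite factors
    print(pca.explained_variance_ratio_)  # share of variance carried by each factor
    print(pca.components_)                # loadings of the original descriptors

With real data you would keep the components that carry most of the variance and interpret their loadings -- which is the "fewest important factors" step described above.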

Marketing Science had an interesting paper back in 2009 by Sood, James, and Tellis doing precisely this. Their complaint was that almost all the literature on product diffusion was based on some adaptation of the Bass model. So, by using a principal factor analysis, along with additional clustering and so on, they were able to firm up which elements matter most.

Of course, factor analysis / PCA is an atheoretical procedure. And it is good to remember that sometimes an atheoretical analysis can serve to introduce rigor to the process.