Long-ish post on one of Tirole's sub-cases -- yes, I'm back to information transmission. I just can't quit it!
So, the simple Tirole model I've been talking about makes a couple of analytical cuts -- one on whether the Sender of information (instructor, analyst, whoever) knows the value of the information to the Receiver (student, marketing manager, etc.). For this post, let's look at the case where S doesn't know R's valuation of the information.
This actually happens quite a lot, in my experience. For example, beyond the immediate grade, plenty of students in Econ 101 don't really attach any value to the course. Or, people who are new at a job might not know the true value of a proposal because they don't understand the web of marketing initiatives currently being implemented: maybe they don't know the opportunity cost of their idea. Point is, this happens plenty.
So, the Sender has done some analysis on a project that has an uncertain payoff for the Receiver. So, he is going to present it and see what happens.
Now, the Receiver will know perfectly well how valuable the project is going to be -- IF he makes the investment in time and effort to understand the proposal. But making that investment is a choice variable in Tirole's model, and the level of effort he puts in is a function of the Receiver's ex ante belief about the quality of the proposal. Proposals that have a high value to the Receiver get implemented -- that's what makes them "good."
See how quickly things get complicated? Even in a simple model?
Tirole measures this ex ante belief as a parameter of congruence, alpha. Big alpha just means the Receiver figures the idea/analysis is probably pretty good: you came from the right school, look smart, speak French, whatever.
Small alpha? Mismatched type fonts, southern accent, northern accent, French accent, who knows?
Point is, when we don't know the real payoffs to the client (the Receiver), we look at two cases. First case: the Receiver thinks we probably have a bad idea. Second case: the Receiver thinks we probably have a good idea.
So, suppose this alpha happens to be low. (By low, I mean below some critical value that causes behavior to flip. Turns out there is such a critical value, derived via first-order conditions of expected utility maximization, etc.) If alpha is low, then both Sender and Receiver make investments in communicating / understanding the results that maximize their respective utilities, subject to this low value. So, there is some marginal condition that describes how hard the two work to communicate. Communication efforts are made, information might get understood, projects might get implemented, we might all be surprised. Pretty standard, expected-marginal-cost-equals-expected-marginal-benefit stuff.
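If you want to see the "critical value" flip in miniature, here's a toy sketch. To be clear: these functional forms are my own inventions for illustration, not Tirole's. I assume effort e buys understanding with probability e, a good proposal is worth B, effort costs a quadratic c*e^2/2, and there's a fixed attention cost k just for engaging at all. The fixed cost is what creates the threshold where behavior flips.

```python
# Toy sketch of the Receiver's effort choice. NOT Tirole's actual model --
# the functional forms below (linear understanding probability, quadratic
# effort cost, fixed attention cost k) are illustrative assumptions.

def receiver_effort(alpha, B=10.0, c=20.0, k=0.5):
    """Effort level maximizing expected net payoff.

    Expected payoff of engaging: alpha * B * e - c * e**2 / 2 - k.
    The first-order condition gives the interior optimum
    e* = alpha * B / c; the Receiver engages only if the payoff
    at e* beats doing nothing (payoff zero).
    """
    e_star = min(alpha * B / c, 1.0)
    payoff = alpha * B * e_star - c * e_star**2 / 2 - k
    return e_star if payoff > 0 else 0.0

for a in (0.1, 0.3, 0.6, 0.9):
    print(a, receiver_effort(a))
```

With these made-up parameters, effort is exactly zero below a critical alpha (here around 0.45) and rises smoothly with alpha above it -- the "marginal condition" region and the "assumes S's ideas are terrible" region in one picture.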
It could be that the alpha is sufficiently low that the equilibrium effort level is zero -- R really assumes S's ideas are terrible. All this is pretty much what you'd expect. So, cool.
Or not cool. If the Receiver has very low expectations, or if he has very high costs of understanding the analysis, then communication can break down. With very poor results.
Very poor.
The second-worst meeting I was ever involved in ended with the Sender telling the Receiver something like, "Look, understanding this analysis is just part of the standard 21st century marketing skill set. Surely they taught those at whichever business school you went to."
Wasn't me who said it. But, no, things did not get better after that for the analytics team.
Now suppose that alpha is high; R has an ex ante belief that the project is going to have a good payoff for him and that he will, therefore, implement it. Now things get interesting.
They get interesting because what happens next depends on the sort of oversight R is supposed to exercise over the proposal. If the oversight is what Tirole calls "executive," then R must really, truly understand the proposal in order to implement it. He's got to implement it himself, under his own direction.
So, if R is a student, he is going to have to take a test on the subject. If R is a marketing manager, maybe he is going to have to understand the pricing analysis in order to implement it on a case-by-case basis. Point is, the way communication effort gets determined is exactly the same for high ex ante beliefs as it is for low ex ante beliefs about the project's congruence.
So, prior beliefs don't change the regime here.
The effort in understanding the analysis will be calculated the same way as it was when the Receiver assumed the Sender's idea was bad. It's just that, in this case, we get higher equilibrium levels of investment ('cause everybody assumes it is a good idea) and a different motivation (R is on the hook for implementation, not just trying to figure out whether it is a good idea or not).
Where they do matter is when the oversight is merely "supervisory." By which Tirole means that the Receiver can simply approve a project and it gets implemented. If the analysis simply said, "Change the price from X to Y," the marketing guys wouldn't have to understand the analysis in order to make the change. They would just have to believe the analysis.
With the ex ante belief about alpha being high and the oversight to be exercised being supervisory, communication will actually break down.
!!??
Nobody will make efforts to communicate because
1. The Receiver already figures the proposal is pretty good (that's what a high alpha means, after all) and will rubber stamp the idea, and
2. The Sender is smart enough to take "yes" for an answer.
So, in this case, the Sender has real authority when it comes to projects that get implemented. The Receiver becomes a rubber stamp for whatever ideas the Sender cooks up. In this case there isn't any real communication at all, maybe just a cheesy PowerPoint presentation with a couple of un-serious area charts, some arrows, and hand-waving predictions.
But it's all good. Receiver gets to not invest in costly understanding but still gets projects that are probably really good. Sender gets effective control of project selection. What's not to like?
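The executive-versus-supervisory contrast can also be sketched numerically. Again, these payoffs are my stylized assumptions, not Tirole's: a good proposal pays B, a bad one costs L, and understanding the analysis costs a flat c. Under supervisory oversight the Receiver has a third option -- implement blind -- that executive oversight takes away.

```python
# Toy comparison of the two oversight modes. The payoffs (B for a good
# proposal, -L for a bad one, flat understanding cost c) are my own
# illustrative assumptions, not the functional forms in Tirole's model.

def receiver_choice(alpha, oversight, B=10.0, L=8.0, c=3.0):
    """Return the Receiver's best option: 'invest', 'rubber_stamp', or 'pass'."""
    # Understand first, then implement only the good proposals.
    invest = alpha * B - c
    if oversight == "supervisory":
        # Supervisory oversight allows implementing without understanding.
        stamp = alpha * B - (1 - alpha) * L
        best = max(invest, stamp, 0.0)
        if best == stamp:
            return "rubber_stamp"
    else:
        # Executive oversight: can't implement what you don't understand.
        best = max(invest, 0.0)
    return "invest" if best == invest else "pass"

print(receiver_choice(0.9, "supervisory"))  # high alpha + approval power
print(receiver_choice(0.9, "executive"))    # high alpha, must understand
print(receiver_choice(0.2, "supervisory"))  # low alpha
```

The three outcomes line up with the post: high alpha plus supervisory oversight produces the rubber stamp, high alpha plus executive oversight still forces real understanding, and low alpha kills the effort entirely.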
So, in the case where S does not know R's payoff, we expect two sorts of outcomes. First, it might be that both sides make an effort to communicate and understand the analysis. That happens because either
1) the Receiver is going to have to implement the idea himself, or because
2) the Receiver doesn't think the analysis is going to be very good (but thinks it is worth finding out if he is right or not).
The second outcome is where communication breaks down and nobody makes an effort to communicate. That is because either
3) the Receiver thinks the idea is going to be plenty good and, since he doesn't have to understand it to make it work, he might as well rubber stamp whatever ideas come his way, or else
4) the Receiver has a very low guess about the quality of the idea (and the guess is so low that it isn't worth it to bother finding out more).