Due to an event he deemed highly unlikely, tech analyst Ben Thompson recently changed his position on Facebook regulation. After publishing his turnaround piece, he allowed his podcast co-host and longtime proponent of regulating Facebook, James Allworth, to write a piece for a Stratechery Daily Update (paywalled). In it, Allworth addresses (what he perceives as) the shortcoming in Thompson’s analysis which led him to the wrong conclusion.
It’s a great episode that shines a light on how different analytical frameworks might influence your conclusions (other meta things to like about it: the fact that Allworth’s text appeared on Thompson’s own page; it illustrates my belief that analytical best practice isn’t to make no mistakes [predicting the future is hard, it turns out!] but to address and correct them as soon as reality proves you wrong).
I won’t go into details regarding the Facebook situation as a) the piece I refer to is for subscribers only and b) they aren’t essential (if you are interested anyway without being a Stratechery subscriber, listen to this podcast). All you need to know is that Allworth proposes that we should make a distinction between incentives and motivations when analyzing behaviour, an idea he attributes to Frederick Herzberg’s research on motivation. To put it simply, incentives (or ‘hygiene factors’, as Herzberg calls them) are rational, extrinsic factors while motivation is influenced by intrinsic factors. That’s hardly news today but I assume it was rather unconventional thinking in 1987 when Herzberg published the article.
I wrote a response in the Stratechery members forum which I want to share with you in a slightly edited version:
When I started working as a consultant years ago, I approached problems in an exclusively rational way. However (maybe because I didn’t work with tech firms – where an engineering mindset tends to dominate), I learned one thing quickly: The reasons why things happen or don’t happen in organizations are very often not rational at all (or, at least, not what we usually consider rational).
Let me give you an example. Once upon a time companies hired consultants to find out whether or not social media was a relevant marketing channel/customer touchpoint for them. More than once I experienced the following string of events:
- Digital savvy employees argued it might make sense to experiment with social media
- They had a pretty good idea about what to do but hired a consultant regardless because their bosses opposed it
- We did some extended research and analysis. More often than not, the results ended up proving the employees’ initial intentions right¹
- We gave an elaborate presentation to said boss. But to no avail: s/he still doubted the usefulness
- A few weeks later things suddenly changed: The boss spent a weekend with her/his kids and witnessed them using social media. Getting the company on social media turned into a top priority
(note: making a right decision on wrong grounds is almost² as bad as making a wrong one)
This (admittedly rather non-strategic) example is indicative of a broader theme: Decisions are often governed not by facts but by personal viewpoints and experiences. After some time in consulting I eventually realized that every manager is also a human being, with all the good, the bad, and the ugly consequences that come with that. Similar patterns in decision-making emerged in far more complex, critical projects as well.
If I had to summarize my consulting work of those days in one sentence it would read: I helped my clients to identify and implement innovations relevant to their business. To be successful at that, understanding not only incentives (on all levels) but also individual motivations – relationships, corporate politics, personal preferences – is key.
Early on, that shocked me. Later, however, I came to the conclusion that the exclusively rational approach to analysis taught in most business schools represents a fallacy (I feel like that’s slowly changing thanks to Kahneman et al.). I call it the irrationality of rationality.
By focusing almost exclusively on the rational, we ignored the fact that human beings often don’t act rationally. Game-theory-optimal behaviour and reality are, for the most part, two different things. If you don’t account for the irrational human layer in your analysis, you will often ignore an important aspect of reality. Certainly a rather inconvenient one, but empiricism isn’t about convenience, is it?
Eventually, all good analysis comes down to figuring out the important variables to consider. To do so, you have to understand the overall system. What makes this complex, however, is that the variables often depend on a given actor’s viewpoint: In many situations, your personal incentives aren’t perfectly aligned with the company’s.
I once dubbed this the problem of misalignment (I prefer the term over the common principal-agent problem because it emphasizes the fact that you can often increase the alignment!). Think, e.g., of the financial crisis of 2007/08. I personally talked to investment bankers who back then dealt with CDOs and admitted they were aware of the accumulated risks. Yet, at the same time, they only had to avoid a blow-up ’til year’s end to collect their bonus³. Take a look at many situations in companies and you will discover such conflicts of interest.
Therefore, I find the distinction between motivation and incentives useful, if only as a hedge against ignorance. While you can just as well think about the pursuit of a personal goal as an incentive, making the distinction helps you to avoid carelessly overlooking it.
¹ Careful readers will realize that incentives would favor this outcome. That is true. It’s a matter of integrity (and professional pride, frankly) not to let those conflicts of interest cloud your advice. I guess what I’m saying is: To be a good consultant (i.e. one who acts in the interest of the client’s business) is to be one who acts against one’s own incentives. It might also mean you make less money in the short term. I like to think that balances out over the long run.
² I think we should weigh the actual outcome to some degree. You may disagree though.
³ Judging whether or not the banker’s decision is ‘rational’ is actually quite tough. It might seem so at first sight. But what if s/he consciously contributes to a systemic blow-up which also jeopardizes her/his personal savings?!