This week, I gave a talk at a company that’s getting started with data-driven practices and A/B experimentation specifically. My talk was concerned with the enablers required for this, such as continuous deployment (or DevOps), the specific ways in which organizations can apply data-driven practices and A/B testing, and the importance of value modeling so that you know what you’re optimizing for.
The interesting thing was that several participants were either very quiet or made remarks that seemed to question the entire approach. Initially surprised, I focused on the content, but over time it became clear to me that something else was going on: the critics were the so-called experts in their specific part of the product portfolio and, apparently, were worried that the data might show their expert opinions to be wrong.
My favorite definition of an expert is someone who tells you why something can’t be done, and to some extent this viewpoint was confirmed during the talk. We all appreciate being experts at something, and our sense of self-worth stems from the identity confirmation that comes with being an expert. Within groups, the status of individual members is derived from their reputation, for instance, as an expert. So, what I was doing with my talk was presenting people with the possibility that their reputation and status within the group, as well as their sense of self-worth, might be taken away from them because they’ve ‘built their house on quicksand’ and data from the field might not back up their beliefs.
Many organizations have historically had very limited, biased and highly delayed data from products in the field. As a result, this data is irrelevant for decision-making. So, if we can’t make decisions based on data, what’s the alternative? The alternative is to base decisions on the expertise available inside the company. And this is where the competition and jockeying for position start. The result is often a situation where each area within the product, and consequently within the company, receives ‘something’ in terms of budget, people and responsibility — not because these areas are the most critical from a business perspective, but for historical reasons and because of skillful political play.
Data-driven and experimental practices provide one of the most effective ways to return to a customer- and market-centric way of working in which decisions are based on the real world, rather than on internal politics. This means your role as an expert is evolving as well. Instead of relying on your many years of expertise, you now need to concern yourself with:

- frequently validating the beliefs you and the organization hold, by collecting data from the field or running experiments to confirm or disprove them;
- interpreting data coming back from the field concerning system and customer behavior based on your best understanding, while highlighting aspects that are difficult to explain with current models;
- developing hypotheses to be tested, based on discrepancies between what you thought you knew and the insights the data provides;
- accepting that, in most cases, you don’t know and need experiments and data to answer questions.
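To make “running experiments to confirm or disprove” concrete, here is a minimal sketch of how an A/B test result might be evaluated. It uses a standard two-proportion z-test on hypothetical conversion counts (the numbers and function name are illustrative, not from the talk), with only the Python standard library:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    conv_a/conv_b: number of conversions in variant A/B.
    n_a/n_b: number of users exposed to variant A/B.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical data: the expert's favored design is variant B
z, p = ab_test_z(conv_a=120, n_a=2400, conv_b=161, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The point is less the arithmetic than the stance it enforces: the expert’s belief becomes a hypothesis that the field data is allowed to reject.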
Western society has reached a stage where confidence in experts is lower than it has been in decades. Rather than blaming the uneducated masses, perhaps we, as experts, should reflect on our own behavior and preconceptions and consider whether we ourselves are responsible for this decline in trust. It feels really good to go out and pontificate about your expert-based viewpoints and beliefs, but it hurts you and society if, a few months or a year later, you have to go out and claim the opposite of what you first believed.
As an expert, the correct answer to almost all questions should be “I don’t know, but I know how we can find out”. The real expert is humble and realizes how little he or she actually knows.