Jan Bosch is a research center director, professor, consultant and angel investor in startups. You can contact him at firstname.lastname@example.org.
Reading time: 4 minutes
So, here’s a little puzzle for you: you have a piece of rope that fits exactly around the earth at the equator. You now place 1-meter-long sticks all around the equator and put the rope on top of these sticks. How much longer do you need to make the rope? Of course, you remember from one of your science classes that the circumference of the earth at the equator is a bit over 40,000 kilometers. So, maybe something like a meter for every kilometer? If that was your first thought, you’re really far off. The right answer is about 6.28 meters! Since the circumference of a circle is 2πr, raising the rope by 1 meter everywhere adds only 2π × 1 ≈ 6.28 meters, regardless of how big the circle is. Surprised?
It turns out that humans are notoriously bad at answering these kinds of questions. One of my favorite illustrations of this is that people who start working out tend to gain rather than lose weight. Why is that? It’s because they tend to significantly overestimate the number of calories they burn by training and then eat too much to compensate for the estimated rather than the actual number of calories. Instead of relying on our guesses and estimates, we should rely on accurate and timely data.
The third rule for thriving in a digital world is to instrument the processes and activities you use to accomplish the outcomes you quantitatively defined in rule 2 (based on your purpose as clarified in rule 1) and then to use the data to ensure you hit the intended outcomes. If you don’t (or don’t do so optimally), you can of course correct or change the processes and activities.
This is far from a new idea, but it’s becoming a lot easier in a digital world. As an example, the quantified-self movement has been around since at least the 1970s. With the emergence of wearable devices, however, it really took off. Now, scores of people are tracking their steps, heart rate, sleep patterns and other factors using their smartwatch or similar. When I ask my wife in the morning how she slept, the typical response is “I don’t know yet” as her Fitbit hasn’t yet synchronized with her iPhone. She’ll only answer the question after reviewing the graphical summary in the app.
Even if we’ve used data before, digitalization makes it much easier to collect data more frequently and, in many cases, more accurately. For many diabetics, measuring blood sugar levels used to involve pricking a finger, putting a drop of blood on a test strip and inserting the strip into a reader to get an answer. Because it was a bit painful, messy and time-consuming, many measured their blood sugar levels less frequently than was good for them. Now, most wear a patch with a small needle and sensor on their upper arm and have a reader that can read the sensor whenever needed. The result is much more accurate administration of insulin doses based on what the body needs and, consequently, a much better health outcome.
Instrumenting the activities and processes you use to achieve your outcomes is necessary to collect the data for decision-making, adjusting and improving. Although necessary, it’s not sufficient: the next step is to actually use the data you collect. Virtually all the companies I work with collect vast amounts of data. However, when it comes to decision-making and prioritizing functionality, even those in R&D almost always fail to look at the available data. As I wrote earlier, many look for ways to explain away any gaps between their own beliefs and the data.
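As a hypothetical illustration of what instrumenting an activity can look like in software, the sketch below wraps a function with a decorator that records call counts and durations; the activity name, the metrics store and the `handle_order` function are all invented for the example:

```python
import time
from collections import defaultdict

# Invented in-memory metrics store: activity name -> call count and total time
metrics = defaultdict(lambda: {"calls": 0, "total_seconds": 0.0})

def instrumented(activity):
    """Decorator that records how often an activity runs and how long it takes."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                m = metrics[activity]
                m["calls"] += 1
                m["total_seconds"] += time.perf_counter() - start
        return inner
    return wrap

@instrumented("handle_order")
def handle_order(order_id):
    # Stand-in for a real business activity
    return f"processed {order_id}"

handle_order(42)
handle_order(43)
print(metrics["handle_order"]["calls"])  # 2
```

In a real system, the counters would flow to a monitoring backend rather than an in-process dictionary, but the principle is the same: the measurement happens automatically as the activity runs, not as an afterthought.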
Interestingly, for all the traditionalism and conservatism in academia, over the last decades there has been quite a bit of adoption of quantitative data and metrics. Sites like Google Scholar count the citations for each article, calculate the h-index of researchers, the impact factor of journals, and so on. This often meets vehement opposition among certain groups of researchers, but I must admit that I appreciate the quantification of research outcomes. In the end, most research is funded by government money that comes from taxes levied on citizens. Consequently, I believe researchers have a moral and ethical obligation to spend their time on the topics with the most positive impact on society. Someone who writes papers nobody reads, and who consequently isn’t cited, fails to deliver on that obligation. And although the current metrics are far from ideal and cause some unconstructive behaviors, rather than categorically rejecting quantification, we should strive to improve the ways we measure impact.
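The h-index mentioned above is simple to compute: a researcher has index h if h of their papers each have at least h citations. A minimal sketch (plain Python, made-up citation counts):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3: one blockbuster paper doesn't raise it
```

The second example shows both the strength and the weakness the text alludes to: the metric rewards sustained, cited output rather than a single hit, but it also compresses a career into one number.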
In most companies I work with, finance is the department that’s by far the most quantified and data-driven. This leads to many boards spending disproportionate amounts of time analyzing revenue, margins, EBITDA, trends over time, and so on. The challenge is that financial data tends to be quite lagging, which makes it hard to use for proactive adjustment and control. My typical request is for other departments to adopt data-driven practices as well. Surprisingly, it’s R&D that’s struggling, while that’s exactly where the key opportunities lie: knowing how customers use our offerings, measuring which features are used or ignored, and so on. There are numerous ways to adopt data-driven practices across the board.
To thrive in a digital world, as a company and as a professional, the first step is to be clear on your purpose. The second is to translate that purpose into tangible outcomes. The third is to instrument the activities and processes you use to achieve the desired outcomes and to track whether you’re actually making progress. It’s about collecting the data and then also using it for decision-making. It’s very easy to create a story about why a decision makes sense, but as W. Edwards Deming said: in God we trust; everyone else must bring data.