

How modern data analytics enables better decision-making

by Dr. Julian Senoner

This is a reproduction of the article “Please explain it to me! How modern data analytics enables better decisions,” which we wrote for the United Nations Conference on Trade and Development (UNCTAD).

Why AI’s potential is still underutilized in decision-making

British mathematician and inventor Charles Babbage (1791-1871) once said, "Errors using inadequate data are much less than those using no data at all." Three industrial revolutions later, it's surprising how often decisions are still made on gut feeling, without data. But it doesn't have to be like that. The distinguishing factor of the ongoing fourth industrial revolution is the unparalleled access to and connection of data. However, having data is one thing; making good use of it is another. That's where AI comes in.

Management and policymaking are about decision-making, which is best when grounded in facts. Facts, on the other hand, are verified truths derived from analyzing and interpreting data. The challenge, then, is to collect and present data in a form that can be turned into information and actionable knowledge. Luckily, the rapid developments in computer science and information technology give decision-makers faster and better access to more data. In addition, AI can help decision-makers establish the needed facts by generating data-driven insights that were previously inaccessible to humans. But, as this article shows, it isn't a silver bullet. A special type of AI is needed.

With the recent rise of AI models, there have been impressive developments in both creative content generation and automation. Yet when it comes to decision-making, AI still faces two critical challenges. First, the complexity and opacity often found in AI models can deter trust and adoption. Many AI models operate in a "black-box" fashion, where users cannot comprehend how a suggestion was made. When domain experts are unable to validate the AI's outputs against their own knowledge, they tend to distrust them. Second, AI systems are currently limited in their ability to perform causal reasoning, which is a critical element in any decision-making process. Knowing that one event relates to another is interesting, but knowing what causes the events to happen is a game-changer.
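To make the black-box distinction concrete, here is a minimal sketch (illustrative only, with made-up feature names and weights, not any vendor's actual method): a tiny linear defect scorer whose output can either be reported as a single opaque number or decomposed into per-feature contributions that a domain expert can sanity-check.

```python
# Hypothetical inspection features and learned weights for a toy defect scorer.
WEIGHTS = {"solder_temp_dev": 0.6, "vibration": 0.3, "humidity": 0.1}

def predict(sample):
    """Black-box view: only the final defect score is returned."""
    return sum(WEIGHTS[f] * v for f, v in sample.items())

def explain(sample):
    """Explainable view: the same score, broken into per-feature contributions."""
    contributions = {f: WEIGHTS[f] * v for f, v in sample.items()}
    return dict(sorted(contributions.items(), key=lambda kv: -kv[1]))

sample = {"solder_temp_dev": 0.9, "vibration": 0.2, "humidity": 0.5}
print(predict(sample))  # one opaque number
print(explain(sample))  # the same number, broken into reasons an expert can check
```

The point is not the model (real systems use far richer attribution methods) but the interface: the second view lets an inspector see *why* a unit was flagged and judge whether the reason matches their own experience.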

Hands-on experience from the industry

The key to addressing AI's two drawbacks is to design systems that are explainable and can be augmented with domain knowledge. To give a concrete example, consider the following. We conducted a field experiment with Siemens, where we observed factory workers engaged in a visual quality inspection task on electronic products. Participants were divided into two groups: one aided by a "black-box" AI and the other by an AI capable of providing explanations for its recommendations. The group using the explainable AI significantly outperformed the factory workers who received recommendations from the "black-box" AI. Even more interestingly, users of the explainable AI system knew better when to trust the AI and when to rely on their own domain expertise, thereby outperforming the AI system alone. Hence, when humans work together with AI, the results are superior to letting the AI make decisions alone!

Explainability is not only helpful for creating trust among decision-makers. It can also be leveraged to combine the strengths of humans and AI. AI can sift through vast amounts of data, whereas humans can complement this with a physical understanding of the process to establish cause-and-effect relationships. Consider, for example, our research in a semiconductor fabrication facility, where we provided process experts with explainable AI tools to identify the root causes of quality issues. While the AI was able to reveal complex correlations between various production factors and the quality of the outcomes, it was the human experts who translated these insights into actionable improvements. By cross-referencing the AI's explanations with their own domain knowledge, the experts were able to design targeted experiments to confirm the underlying causes of quality losses. The result? Quality losses plummeted by over 50%. This case underscores the indispensable role of human expertise in interpreting data and applying it within the context of established cause-and-effect relationships.
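The division of labor described above can be sketched in a few lines (with invented factor names and numbers, not data from the semiconductor study): the machine ranks production factors by how strongly they correlate with quality loss, and the human expert treats the top-ranked factor as a *hypothesis* to confirm with a targeted experiment, since correlation alone does not establish cause.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical process logs: factor readings and the observed quality loss.
factors = {
    "oven_temp":  [1.0, 2.0, 3.0, 4.0, 5.0],
    "line_speed": [2.0, 1.0, 4.0, 3.0, 5.0],
}
quality_loss = [1.1, 2.1, 2.9, 4.2, 5.0]

# Machine's job: rank candidate factors, strongest correlate first.
ranked = sorted(factors, key=lambda f: -abs(pearson(factors[f], quality_loss)))
print(ranked)  # the expert's job: test the top candidate in a controlled experiment
```

Everything after the `print` is human work: only a designed experiment, guided by physical understanding of the process, can turn the top-ranked correlation into a confirmed root cause.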


The key message is that data is proliferating and AI is here to help, but without a human in the loop, don't expect better decision-making. We need to build AI systems and tools that support the human decision-maker in getting to facts faster. For this, recent developments in Explainable AI and Causal AI offer a promising path forward. Such tools allow users to reason about the inner workings of AI systems and to incorporate their own domain knowledge when judging the AI's output. They help explain the causal relations and patterns that the AI is picking up from the data, ultimately enabling decision-makers to make better decisions.

Dr. Julian Senoner

Julian Senoner is the Co-Founder and CEO of EthonAI. He holds a PhD from ETH Zurich, where he worked with several Fortune 500 companies to bring cutting-edge AI to the manufacturing sector. His award-winning research has been published in leading field journals such as Management Science and Production and Operations Management. In 2022, Forbes recognized Julian in its 30 under 30 list.