EthonAI contributes article to World Economic Forum
EthonAI Co-Founders Julian Senoner and Torbjørn Netland have recently contributed an agenda article to the World Economic Forum. The article highlights the importance of explainable AI in manufacturing and gives insights into real-world case studies in the industry.
AI has revolutionized the world of work, but its adoption in manufacturing has been slower than one might expect. One key reason is the pervasive reluctance among domain experts to trust "black box" algorithms. This is not merely a psychological barrier but also a practical one, as it prevents experts from cross-referencing AI-generated recommendations with their own knowledge and experience.
Our Co-Founders’ article argues that explainable AI (XAI) addresses this trust issue. XAI clarifies the reasoning behind algorithmic recommendations, which not only answers ethical and accountability questions but also improves collaboration between humans and AI. Research from leading manufacturers shows that this transparency allows humans and AI systems to work together more effectively.
Two case studies in the article substantiate these claims. The first describes a field experiment at Siemens where production workers using XAI significantly outperformed workers using "black-box AI" systems. Moreover, workers augmented by XAI knew better when to trust the AI and when to rely on their own expertise, thereby outperforming the AI system alone. The second case study was carried out with a semiconductor producer, where explainable AI helped process engineers reduce yield losses by more than 50%.
The article concludes that the key to successful AI adoption in manufacturing is not to replace human expertise but to enhance it. For this to happen, AI algorithms must be transparent, allowing workers to understand how and why a specific recommendation was made.
The full article is available on the World Economic Forum’s website.