
How to Build Explainable AI

We’ve heard much about ‘black box’ algorithms and AI: secret systems that analyse and predict your credit score and search results, your likelihood of getting the flu or winning a game. ‘Black box’ means secret, a business secret, just as Coca-Cola would never give away the recipe for its popular drink, and such systems are often built by commercial big tech companies that are not going to give away their prediction models.

But in the days of the GDPR, such secrecy is not acceptable. You must be open about how you use other people’s data (any AI system is built on data), and the GDPR also demands a certain level of explainability. At the very least, if you want to practise data ethics and you use machine learning and artificial intelligence, you need to explain your algorithms – the criteria and parameters behind them.

We don’t yet have a standard for how to explain an algorithm, but Erlin Gulbenkoglu from the Finnish AI consultancy Silo.ai proposes SHAP, a model interpretability method under active development. It can explain, for example, how an AI algorithm arrived at a 95% likelihood that this person will win the game.

The red variables push the model towards ‘yes, he will win’; the blue variables push in the opposite direction.
“In that way the human can assess the variables behind it and see if it makes sense,” said Erlin Gulbenkoglu, who spoke at MyData2018 in Helsinki last week.
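To make the idea concrete, here is a minimal sketch of how SHAP values are typically computed with the open-source shap library. The data, model and feature names are hypothetical stand-ins for a “will this person win the game?” predictor, not Silo.ai’s actual system.

```python
# Minimal SHAP sketch (hypothetical data and model).
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for "will this person win the game?" data.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values: each feature's contribution pushing
# the prediction for a single row up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Contributions for the first person: positive values push towards
# "yes, wins" (red in a force plot), negative values push the other way (blue).
print(dict(zip([f"feature_{i}" for i in range(X.shape[1])], shap_values[0])))

# In a notebook, shap.force_plot(explainer.expected_value, shap_values[0], X[0])
# renders the red/blue visualisation described above.
```

A human reviewer can then check whether the features with the largest contributions are ones that should plausibly drive the prediction.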

She will speak at the European Data Ethics Forum 2018, where she will say much more about how to explain AI.

In AI research and development, more and more attention is being directed towards building interpretable, anonymous, secure and fair AI systems. Erlin Gulbenkoglu specialises in privacy-enhancing technologies. In her master’s thesis, she worked on differentially private data analytics and applied differentially private algorithms in the context of machine learning. At Silo.ai, she is currently developing privacy standards for AI based on the privacy-by-design principle and the fair use of AI.
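As a rough illustration of what differentially private analytics looks like in practice, here is a minimal sketch of the Laplace mechanism, a classic building block of differentially private algorithms. The function name and parameters are hypothetical and are not drawn from her thesis or from Silo.ai’s work.

```python
# Illustrative sketch of the Laplace mechanism (not Silo.ai's code).
import numpy as np

def private_count(values, epsilon=1.0):
    """Return a differentially private count: true count plus Laplace noise.

    A counting query changes by at most 1 when one person's data is added or
    removed (sensitivity = 1), so noise drawn from Laplace(0, 1/epsilon)
    yields epsilon-differential privacy for the released count.
    """
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return len(values) + noise

# A smaller epsilon means more noise and a stronger privacy guarantee.
print(private_count(range(1000), epsilon=0.5))
```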