Building trust with computer vision AI

Oleg Chudakov

Oleg is a Senior C++/Qt Developer with 11+ years of experience in programming and designing mobile and web-based applications. Focused on the Qt framework and the OpenCV library, Oleg participates in major image processing and image analysis projects run by ScienceSoft.

Editor’s note: AI-powered computer vision has the potential to radically change major industries. Read on to learn why it’s important to keep AI algorithms understandable and transparent for human users. And reach out to ScienceSoft’s software development services to get effective and trustworthy enterprise applications with a high degree of sophistication.

Computer vision belongs to the artificial intelligence domain, which means that teaching computers to see is as difficult as solving the central problem of AI: making machines as smart as people. Although current computer vision algorithms are far from completing this task, those based on machine learning are already complex enough that we no longer understand exactly how they reach their decisions and conclusions. This is a disturbing thought, especially in such demanding industries as healthcare, manufacturing, or national security.

How do they know?

Machine learning has proved extremely efficient at analyzing and classifying images. Techniques such as support vector machines, random forests, probabilistic graphical models, and deep neural networks (DNNs) are widely used in image analysis, robot vision, face recognition, and more. But despite their strong performance, these systems are notorious for being black boxes: the details of their decision-making process remain obscure even to their developers. This is a major drawback because the user cannot see which features the system considers important, cannot spot its strengths and weaknesses, and cannot directly influence its reasoning.

This uncertainty calls the reliability of these algorithms into question and restricts them to a merely supportive role, since a human supervisor is still required to ensure that the output is correct.

Making machines explain themselves

In August 2016, the Defense Advanced Research Projects Agency (DARPA) launched the Explainable Artificial Intelligence (XAI) program, calling for ingenious research proposals in the area of machine learning. “Explainable AI will be essential, if users are to understand, trust, and effectively manage this emerging generation of artificially intelligent partners,” the agency states.

DARPA notes that researchers have already proposed several approaches to the problem, but none of them provides a complete solution. Still, these approaches point to some promising directions for further research:

  • Deep explanation: modifying deep learning techniques so that machines learn more explainable features. For example, Matthew Zeiler, Ph.D. in Computer Science, has proposed a deconvolutional neural network that visualizes the output at different layers of a convolutional neural network. It may also be possible to train one deep neural network to explain the operation of another DNN.
  • Interpretable models: using more structured and interpretable models for machine learning. For instance, there is a variety of Bayesian algorithms that are inherently more comprehensible. A major additional task here is to improve their performance, which still lags behind that of less explainable techniques.
  • Model induction: developing an algorithm that observes and analyzes the behavior of a black-box machine learning model in order to infer an explainable model from it (see the sketch after this list).
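
To make the model induction idea more concrete, below is a minimal sketch of a global surrogate model, one common way to realize this approach: an opaque classifier is trained first, and a shallow decision tree is then fitted to the classifier's predictions so that its behavior can be read off the tree. The dataset, the choice of models, and the fidelity check are illustrative assumptions on our part, not part of the DARPA program.

```python
# Minimal sketch of model induction via a global surrogate (illustrative only).
# A shallow decision tree is trained to mimic an opaque classifier's outputs,
# giving a human-readable approximation of its behavior.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Small image classification task: 8x8 grayscale handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": a multilayer perceptron whose internal reasoning is opaque.
black_box = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
black_box.fit(X_train, y_train)

# The surrogate learns from the black box's predictions, not the true labels,
# so it models the black box itself rather than the original task.
surrogate = DecisionTreeClassifier(max_depth=5, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the interpretable surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Black-box accuracy on the test set: {black_box.score(X_test, y_test):.2f}")
print(f"Surrogate fidelity to the black box: {fidelity:.2f}")
```

If the fidelity is high, the tree's decision rules offer a readable approximation of what the black box has learned; if it is low, that is itself a signal that the model relies on patterns too complex for a simple explanation.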

Of course, there can be different and even rather unexpected approaches. For example, Tsvi Achler, a researcher with degrees in both computer science and neuroscience, has proposed a brain-inspired method that promises not only to make artificial neural networks more transparent but also to allow their recognition patterns to be changed without completely retraining the network.

Waiting for the results

DARPA stopped accepting submissions for the XAI program on November 1, 2016, and the work has begun. The research may take considerable time, so for now we can only wait for new announcements about this promising project.

There is a strong possibility that, in the long run, a new generation of machine learning algorithms will emerge: logically transparent and even smarter than the existing ones. That would make these techniques much more credible and eventually drive the adoption of computer vision in healthcare, logistics, the automotive and electronics industries, and beyond.
