By Mykola Galushka

Introduction to DeepTrace



DeepTrace is an explainable AI (XAI) engine designed to work side by side with existing AI solutions and help users understand the decision-making process performed by AI. Understanding this process is fundamental to building trust in any AI system, and that trust is essential if we are to rely on AI predictions with confidence.


Not every AI system requires explainability. However, if its predictions have a significant impact on human life, the ability of an AI system to explain its decision-making process is crucial.


In the following use case, we will demonstrate how DeepTrace can help doctors improve their experience of working with AI. As an example, we chose an AI model that makes a presumptive diagnosis of urinary system disease. It has been trained on a data set published by the UCI Machine Learning Repository and, for a specified set of patient symptoms, predicts one of the four following outcomes (a minimal training sketch follows the list):


  • Inflammation of urinary bladder;

  • Nephritis of renal pelvis origin;

  • Inflammation of urinary bladder and nephritis of renal pelvis origin;

  • Other cause of inflammation.
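
The original article does not describe the underlying model, so the following is only a minimal sketch of how such a classifier could be trained with scikit-learn. It assumes a hypothetical local CSV copy of the UCI data ("diagnosis.csv") with illustrative column names; the data set carries two binary diagnoses, which combine into the four outcomes above.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("diagnosis.csv")  # hypothetical local copy of the UCI data

    # Yes/no symptoms to 0/1; temperature stays numeric.
    yes_no = {"yes": 1, "no": 0}
    symptoms = ["temperature", "nausea", "lumbar_pain",
                "urine_pushing", "micturition_pains", "urethra_burning"]
    X = df[symptoms].replace(yes_no)

    # The two binary diagnosis columns combine into the four outcomes above.
    label_map = {(1, 0): "Inflammation of urinary bladder",
                 (0, 1): "Nephritis of renal pelvis origin",
                 (1, 1): "Inflammation of urinary bladder and nephritis of renal pelvis origin",
                 (0, 0): "Other cause of inflammation"}
    bladder = df["bladder_inflammation"].map(yes_no)
    nephritis = df["nephritis"].map(yes_no)
    y = [label_map[pair] for pair in zip(bladder, nephritis)]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("hold-out accuracy:", model.score(X_test, y_test))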


For example, suppose a doctor submits a patient's symptoms.

Our AI model predicts that the disease is “Nephritis of renal pelvis origin”. From this bare answer, it is impossible to understand the rationale behind the prediction. A doctor can only rely on his or her own experience to guess at the logic behind it. Wouldn't it be beneficial if the AI explained why this prediction was made?


The inability of many commercial AI systems to explain their results can be overcome with DeepTrace. If a doctor submits the same patient's symptoms via DeepTrace, the prediction comes back with its probability and the impact of each symptom.


There is no clear-cut winner among the predictions. The AI model "thinks" that the disease is “Nephritis of renal pelvis origin” with 50% probability; however, it could be the “Other cause of inflammation” with 29% probability. In this case, a doctor needs to understand which symptoms support the prediction and which ones play against it. On the DeepTrace query panel, the doctor can see that “Temperature of patient” and “Lumbar pain” have strong positive impacts on the obtained prediction (25.2% and 29.1% respectively). On the other hand, a symptom such as “Occurrence of nausea” produces a negative impact of 20.8%.


In other words, if specific symptoms are more common for the predicted outcome, their impact is positive; if they are less associated with the predicted disease, their impact is negative. The percentage quantifies how strong that impact is.
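
DeepTrace's own attribution method is not described here, so the following is only an illustrative stand-in: a leave-one-out probability delta, where each symptom is replaced with a neutral baseline and the shift in the predicted class's probability is measured. It reuses the model and X_train from the training sketch; the patient record is hypothetical, and the numbers it prints will not match the article's figures.

    import pandas as pd

    patient = pd.DataFrame([{"temperature": 38.5, "nausea": 0, "lumbar_pain": 1,
                             "urine_pushing": 0, "micturition_pains": 1,
                             "urethra_burning": 0}])

    proba = model.predict_proba(patient)[0]
    top = proba.argmax()
    print(model.classes_[top], f"({proba[top]:.0%})")

    baseline = X_train.mean()  # neutral reference value per symptom
    for col in patient.columns:
        probe = patient.copy()
        probe[col] = baseline[col]
        delta = proba[top] - model.predict_proba(probe)[0][top]
        # delta > 0: the symptom supports the prediction;
        # delta < 0: it plays against the prediction.
        print(f"{col:18s} impact {delta:+.1%}")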


In addition to presenting symptom impact values, DeepTrace provides the doctor with a clear set of constraints for the obtained prediction. In our example, the outcome generated by the AI system is determined by the following conditions:


  • Temperature of patient ≥ 38.05

  • Lumbar pain = yes

  • Micturition pains = yes

As long as these conditions are met, the rest of the patient’s symptoms will not affect the predicted outcome.
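
A constraint set of this kind behaves like an "anchor" rule: a condition that pins the prediction down regardless of the other inputs. How DeepTrace derives these constraints is not stated, but such a rule can be checked directly against the model by enumerating the unconstrained symptoms, as in this sketch (reusing the model from above):

    from itertools import product
    import pandas as pd

    # Hold the rule's conditions fixed (38.5 is one representative value
    # above the 38.05 threshold; a fuller check would sweep temperatures too).
    fixed = {"temperature": 38.5, "lumbar_pain": 1, "micturition_pains": 1}
    free = ["nausea", "urine_pushing", "urethra_burning"]

    # Enumerate every combination of the remaining yes/no symptoms.
    rows = [{**fixed, **dict(zip(free, combo))}
            for combo in product([0, 1], repeat=len(free))]
    grid = pd.DataFrame(rows)[["temperature", "nausea", "lumbar_pain",
                               "urine_pushing", "micturition_pains",
                               "urethra_burning"]]

    # A single class in the output means the prediction never changed.
    print(set(model.predict(grid)))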


With DeepTrace, a doctor can explore the AI decision space even beyond the specified patient's symptoms. What if the patient's temperature were slightly lower? The answer to this and many other questions can also be found with the exploration tools.

For example, if the temperature drops to 37.7, the AI system makes a completely different prediction: “Other cause of inflammation”.
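
This kind of what-if query is easy to express against the model directly. The sketch below sweeps the hypothetical patient's temperature downwards and reprints the prediction at each step, reusing the model and patient from the earlier sketches:

    import numpy as np

    # Lower the temperature in 0.1-degree steps and watch where the
    # prediction flips (around the 38.05 threshold from the rule above).
    for t in np.arange(39.0, 37.0, -0.1):
        what_if = patient.copy()
        what_if["temperature"] = round(float(t), 1)
        print(f"{t:4.1f} -> {model.predict(what_if)[0]}")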


The final say will always rest with the doctor. However, DeepTrace is a simple and powerful tool that helps knowledge professionals, such as doctors, make better decisions by working in partnership with AI.
