What Does “Explainable AI” Mean for Your Business?
Text analysis: what is the difference between machine learning/deep learning and a semantic analysis based approach?
The fundamental difference lies in the level of understanding the two approaches provide. Classic machine learning and deep learning algorithms do not treat text as something with structure and meaning, but simply as a sequence of symbols (keywords) that occur together with a certain frequency. In essence, algorithms of this type recognize the most statistically frequent and relevant patterns but do not “understand” anything about the text. It follows that, for a system of this type, a text that makes no sense, or is syntactically incorrect, looks identical to one that is written correctly.
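A minimal sketch makes this concrete. The snippet below (an illustrative example, not code from any particular product) represents text as a bag of word frequencies, the kind of representation frequency-based methods build on, and shows that a sentence and its nonsensical shuffle look identical:

```python
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Represent text purely as word frequencies, ignoring order and meaning."""
    return Counter(text.lower().split())

correct = "the contract was signed by the supplier"
scrambled = "supplier the by signed was contract the"

# To a frequency-based model, the two texts are indistinguishable.
print(bag_of_words(correct) == bag_of_words(scrambled))  # True
```

The two strings contain the same words with the same counts, so any purely statistical view of them collapses into the same representation.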
Instead, an approach based on semantic analysis examines text much as a person would, imitating some of the cognitive processes we all use instinctively to understand the meaning of a text. To do this, the software must have rich, deep knowledge of the world and of language (usually stored in a knowledge graph) and use algorithms written specifically for understanding text. It is a much more specific and complex approach that requires more initial investment, but it is the only one that can go beyond simply counting word sequences and understand structure, relationships and meaning in a way that resembles how people approach text.
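To illustrate what “knowledge stored in a knowledge graph” can buy you, here is a deliberately tiny sketch (the triples and relation names are hypothetical, chosen only for the example): facts are stored as subject-relation-object triples, and a simple inference follows "is_a" edges transitively to derive facts that are never stated explicitly.

```python
# A toy knowledge graph: (subject, relation, object) triples.
KG = {
    ("poodle", "is_a", "dog"),
    ("dog", "is_a", "animal"),
    ("dog", "capable_of", "barking"),
}

def is_a(entity: str, category: str) -> bool:
    """Infer category membership by following 'is_a' edges transitively."""
    if (entity, "is_a", category) in KG:
        return True
    parents = [o for (s, r, o) in KG if s == entity and r == "is_a"]
    return any(is_a(p, category) for p in parents)

print(is_a("poodle", "animal"))  # True: poodle -> is_a -> dog -> is_a -> animal
```

No triple says a poodle is an animal; the system derives it from background knowledge, which is exactly the kind of step a purely frequency-based model cannot take.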
An approach based on semantic analysis can also be trusted and easily understood by humans. That is called “Explainable AI” (as opposed to black-box AI). With Explainable AI, you can understand how the software arrives at a decision and why it produces a specific output, and because you can follow every step, you can improve the system incrementally.
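As a hedged sketch of what “following all the steps” can look like in practice (the rules and categories below are invented for illustration), a rule-driven classifier can return its decision together with a trace of exactly which rule fired:

```python
# Hypothetical rules mapping textual cues to a category, each with a
# human-readable reason so every decision can be explained.
RULES = [
    ("mentions an invoice", lambda t: "invoice" in t, "billing"),
    ("mentions an error or crash", lambda t: "error" in t or "crash" in t, "support"),
]

def classify(text: str):
    """Return (category, explanation): the decision is traceable to a rule."""
    trace = []
    for reason, condition, category in RULES:
        if condition(text.lower()):
            trace.append(reason)
            return category, trace
    return "other", trace

category, why = classify("Please resend invoice 2041")
print(category, why)  # billing ['mentions an invoice']
```

When the output is wrong, the trace points directly at the rule to fix, which is what makes incremental improvement possible; a black-box model offers no such handle.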
How important is human activity for algorithm training?
Simple: it’s fundamental. Without human intelligence (and knowledge), an algorithm cannot work, except in the most trivial cases. Even the most sophisticated artificial intelligence software cannot work without the help of experts with special technical knowledge. Software that programs itself, learns complex things by itself, and maintains itself has not yet been invented.
How reliable are machines in the analysis of complex texts?
Given the complexity and breadth of the themes that could be covered, there isn’t a single answer. In general, it is not yet possible to reach the level of reliability of people and, for the most complex problems, this will not be achieved anytime soon. That said, people are not perfect either: they make mistakes due to fatigue, distraction or lack of knowledge. In the most favorable scenarios, software can reduce the work of human analysis by 90-95%; across a wider variety of scenarios, the average reduction is around 30-40%, with the possibility of significant growth over the next 4-5 years.
Where has semantic technology made the biggest contribution and what is the effect of increased computing power?
Semantic technology is the only technology able to address problems where it is necessary to understand, even partially, the content of a text, whether a short email or a report of dozens of pages. Only with semantic technology can we understand the meaning of words and phrases, identify relationships between concepts and/or entities, and make inferences from elements extracted from a text. In principle, the non-trivial problems of understanding text can only be solved with semantic technology. This does not mean that all problems of this type can be solved but, more simply, that other technologies stop at a lower level of complexity.
Computing power is a critical factor for the results achieved in recent years. Semantic technology (like other technologies) benefits directly and measurably from the increase in computing power, so I expect even more significant improvements once we have computers that are much faster.
President and CTO, Expert System