Watson may yet prove smarter than Holmes after all, if IBM gets its way.

Watson was first unveiled on the game show Jeopardy two years ago. Jeopardy is a quiz show in which contestants are given clues phrased as answers and must come up with the matching questions. Unsurprisingly, even with no Holmes on the scene, Watson beat its two human opponents, widely considered the best Jeopardy players around, hands down. Having defeated the best players humanity could field, Watson is now ready to take on the task of curing cancer.

In development at IBM since 2005, Watson was always envisioned as having a higher purpose in medicine. Watson is an artificial intelligence (AI) system that can answer questions posed to it in natural language (i.e. a language spoken by humans). It does this by analysing connections and trends in data that humans might miss. The clever machine used this ability to win the game show, and is now applying it to cancer treatment: it compares a patient’s medical records against its massive database, then recommends treatments together with confidence levels for how effective each is likely to be. However, Holmes, and indeed the rest of us, need not be worried – Watson isn’t as smart as it seems.

Watson is ready to take on the task of curing cancer.

There are things that can stump Watson. Natural language poses an extreme challenge to the AI, with its subtle nuances and implicit meanings; this showed during the Jeopardy match, when Watson got several clues wrong because they hinged on implied meaning. Watson also depends heavily on its database to generate its answers. It is fed data and, using machine learning algorithms, ‘learns’ that information to build its database and make the necessary connections. So if a piece of information is not in its database, Watson simply does not know it, and it cannot make an ‘educated guess’ either. This all boils down to Watson not being what philosophers call a “Strong AI”.
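To make that limitation concrete, here is a deliberately crude sketch of the kind of database-bound recommender described above. It is purely illustrative: the conditions, treatments and confidence figures are invented, and a real system like Watson is vastly more sophisticated. The point is simply that such a system can rank what it already knows, but returns nothing at all for anything outside its database.

# Toy illustration only: a database-bound recommender in the spirit of the
# article's description, not IBM Watson's real architecture or data.

# Invented knowledge base mapping a condition to (treatment, confidence) pairs.
KNOWLEDGE_BASE = {
    "lung cancer, stage II": [
        ("surgery followed by chemotherapy", 0.82),
        ("radiation therapy", 0.64),
    ],
    "breast cancer, stage I": [
        ("lumpectomy plus radiation", 0.88),
        ("hormone therapy", 0.71),
    ],
}

def recommend(patient_record):
    """Return treatments ranked by confidence, or None if the diagnosis
    is not in the database; the system never makes an educated guess."""
    options = KNOWLEDGE_BASE.get(patient_record.get("diagnosis"))
    if options is None:
        return None
    return sorted(options, key=lambda pair: pair[1], reverse=True)

print(recommend({"diagnosis": "lung cancer, stage II"}))         # ranked list with confidences
print(recommend({"diagnosis": "a condition it was never fed"}))  # None: no guess

Real systems use far richer matching and scoring than this, of course, but the basic dependence on what is already in the database is the same.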

A “Strong AI” is, roughly, an AI whose intelligence matches or exceeds that of humans, whereas a “Weak AI” is simply an AI that can act intelligently. A “Strong AI” would be able to understand and act on information, showing signs of sentience. Watson is not a “Strong AI”. It is simply a machine following a set of instructions to analyse what it has in its database; it does not “understand” that data in the way humans do. This situation is illustrated by the “Chinese Room” thought experiment put forward by the philosopher John Searle.

Watson is not what philosophers call a “Strong AI”

In the “Chinese Room” thought experiment, Searle imagines a computer (or, in his original telling, a person shut in a room with a rulebook of instructions) that responds to input Chinese characters so convincingly that it passes the Turing Test: a Chinese speaker outside would believe they were conversing with another live Chinese speaker. The question Searle raises is whether the computer literally “understands” Chinese, or is merely simulating the ability to understand it. This is precisely the distinction between “Strong AI” and “Weak AI”.
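The argument can be caricatured in a few lines of code (the rulebook entries below are invented for illustration): a program that maps incoming strings of Chinese characters to canned replies can appear to converse fluently while never attaching any meaning to the symbols it shuffles.

# A crude caricature of the Chinese Room: a 'rulebook' pairs each incoming
# string of symbols with an appropriate-looking reply. The entries are
# invented purely for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather today?" -> "Lovely today."
}

def chinese_room(symbols):
    """Follow the rulebook mechanically; meaning is never consulted."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # looks fluent to an observer outside the room

Whether the rulebook has two entries or two billion, nothing inside the room understands Chinese, and Searle’s point is that the same goes for any program that merely manipulates symbols.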

Sentient AI remains firmly in the realm of science fiction, and we are unlikely to see anything like the Matrix films come true in the near future. Watson may well enable us to cure cancer more effectively, but it is not going to be able to think for itself any time soon.