With the advent of Artificial Intelligence (AI), our conception of intelligence has been shaken. After all, AI systems meet some definitions of “intelligence”, such as the ability to solve complex problems. So, can machines think?
This question was already posed in the middle of the last century by one of the fathers of computer science: the great Alan Turing (1912-1954). “How could we detect the display of intelligence in a machine?” he must have asked himself. To address this problem, he devised a test that, in its standard or best-known version, consists of a jury conversing in writing through a computer with two anonymous interlocutors for about five minutes. The twist is that one of the interlocutors is a real person and the other a computer trying to imitate a human being. The computer would be considered “thinking” if, after those five minutes of conversation, at least 30% of the jury could not tell the person from the machine.
The Turing test was not systematically put into practice until 1991, when the Loebner Prize was founded: an annual competition inspired by the test that rewards the most convincingly human-like computer program.
However, the Turing test has its detractors. First of all, Turing never claimed that his experiment measured the intelligence of machines; it evaluates whether they can mimic human behavior. And we should bear in mind that not all of our behaviors are intelligent…
Many consider 2014 to be the first time a machine passed the Turing test. It was a chatbot named Eugene Goostman, which managed to convince 33% of the jury members that it was a 13-year-old Ukrainian boy.