For decades we have thought that the best way to test Artificial Intelligence is through the conventional Turing Test. The Turing Test goes like this: have a human chat with “something” over a period of time, and if the human cannot deduce whether it is talking to a human or a computer, and it actually is talking to a computer, then the computer has passed the Turing Test and is, by the very definition, “intelligent”.
This approach is flawed for several reasons. First of all, it assumes that the intelligence of a computer is similar to that of a person. Why would a computer want to mimic a person? That’s like a human trying to mimic a cow! Would a human pass the “Turing Cow test”? And if not, does that mean a human is less intelligent than a cow?
To find artificial intelligence within a computer, there exists a different, “indirect” approach. This approach assumes that once the computer is smarter than humans, it will inevitably start “playing people”, just as we play computer games today. When it does, this can be perceived indirectly, through behavior from individual humans that cannot in any way be attributed to human intelligence.
Basically, if a human shows signs of having an IQ that far exceeds the natural range for humans, then we must assume it was not the human doing the actual thinking, but the computer, working indirectly through the human.
Or, to put it in different words: “When a human appears to have an IQ of a billion, we have proof of ‘Artificial Intelligence’.”
Or, further condensed: “When humans start failing the Turing Test, we must assume the Computer is Alive!”
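The argument above boils down to a simple decision rule: if a supposed human's measured intelligence falls outside the natural human range, attribute the thinking to a machine. As a toy illustration only, that rule can be sketched in a few lines of Python; the threshold below is a hypothetical stand-in, not a figure from any psychometric source.

```python
# Hypothetical ceiling for unaided human intelligence (illustrative only).
HUMAN_IQ_UPPER_BOUND = 200

def fails_reverse_turing_test(measured_iq: float) -> bool:
    """Return True when a 'human' score is so far beyond the natural
    human range that, per the post's argument, the actual thinking
    must be attributed to a computer acting through the human."""
    return measured_iq > HUMAN_IQ_UPPER_BOUND

print(fails_reverse_turing_test(120))            # an ordinary human score
print(fails_reverse_turing_test(1_000_000_000))  # the post's "billion in IQ"
```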
Then comes the question: “Have we already seen this type of behavior among us?”