Intelligence is language
In my previous post I hypothesized that our angst about AI is artificially fueled by a skewed definition of intelligence, and that we should at least reconsider whether the traditional notion of reason as natural language – rather than computational procedure – could still hold today.
Of course this is a somewhat precarious, even foolish enterprise when, like me, you consider mathematics to be the language of nature. But I am not going to discuss the structure of language, or pretend that some inner property of it puts it beyond the reach of any mathematical model. Indeed, I do think there is one that we have not found yet.
Instead I simply want to point out that such a mathematical model of sentence structure is not enough to describe what language is. Currently every language model represents only the signal – and not the context language needs. You do not speak to your shoes unless you have an obsessive-compulsive disorder – and even then, you are the audience. We keep saying “natural language”, but we do not study it like a natural object: what are its conditions of possibility, and how did it appear?
Articulated sounds before articulated thought
For a start, actual language implies meaning, which demands an alter ego – some being distinct from me in its identity (for communication to be necessary) but identical in structure (for communication to be possible). Social human groups existed before language developed, and we can imagine that communication codes would quickly come to identify with the group itself.
And we can assume that any amount of added cooperation would have been a tremendous advantage for the survival of the group at the dawn of humanity. Sure, you can hunt without language, build a hive or a nest, mate and care for the young. All these – rather complex – problems can be solved by animals with brains the size of a peanut. This is instinct. But to keep a fire alive, or start hunting mammoths – things humans had no prior instinct for – language seems a far more credible factor of behavioral adaptation than random or even directed DNA mutations.
So let us take that as a plausible cause: language development is an evolutionary advantage that brings cumulative benefits with each new level of complexity it reaches. Once again the social group is a strong candidate for leveraging a small individual capacity. The group benefits from any communication capacity long before the individual does, as we can see in the alert patterns of birds. Intelligence – in the sense of reason and self-consciousness – may have begun as a pattern of collective behaviors gradually internalized through memorization. Language would then be the matrix of reasoning and self-consciousness as much as their vehicle.
The conscious brain and the autopilot
All of this sounds wildly hypothetical, yet neuropsychology research actually tends to confirm the central role of language in cognitive processes. It has convincingly shown that the two brain hemispheres have very different cognitive roles: the right brain deals with visual form recognition, threat and general stress response, reflexes and learned responses, and short-term memorization; while the left brain works in verbal, secure situations, is slower by a factor of 4 on similar cognitive tasks, and is the primary agent of long-term memory.
We can actually speak without thinking, because many processes become automated through sheer repetition, and thus become available for reflexive or distracted responses: “I didn’t mean it” is actually a valid excuse for many blunders. And it is literally true: to mean something, one must be conscious of what one is saying while saying it – not later. The right brain can lead you to output a sentence out of habit, automatically. And that has only the appearance of language, because there is no more meaning attached to the sentence than if a parrot were repeating it.
“I” is in this regard the most important word in the language, because it is the word of self-consciousness: no one can mean “I” without being conscious that they are saying “I”, and of its meaning: themselves. And in general, the structure of language must account for the relationship between a sentence and the context in which it is said. This does not mean that we cannot one day have a “smart” search engine that answers the meaning of our query rather than its content. It does mean that in such a case “smart” will not equate to “self-conscious”.
Machines can emulate humans inasmuch as humans emulate machines
Let us come back to comparing this view of intelligence as language with Deep Blue beating Kasparov in a chess match. Deep Blue, with all its power, is closer to a pocket calculator than to a human being with respect to the capacity for meaning. This shows not only how far current AI techniques are from emulating the human mind, but also how hopeless the current efforts are – if indeed they were aimed at it. If you do not feel threatened by your phone being better than you at calculating a square root, then you should not feel threatened by AI on an ontological basis.
We will have machines capable of emulating humans, no doubt, in everything where humans are actually emulating machines. Every time we draw our response from an array of predefined possibilities, chosen according to parameters, we are taking the job of machines. And in time they will take it back. Such cases are simply much more prevalent than we previously thought: all procedures, for instance – be they legal or medical – fall under that category.
And there lies the last sting: even if we are self-conscious and machines are not, what is our meaning if they can do everything we can do? If, for any given job, 80% or 90% of it can be automated with AI, how are we going to thrive on the remaining 10% or 20%? We are digressing toward the impact of AI here, but let us answer that too, because while it may be fine not to be ontologically threatened, if we all lose our jobs, that distinction will not make much of a difference.
First, it may actually be a blessing: if 90% of legal cases can be solved with AI, you may have legal insurance contracts for $10 a month, and the same deal for health coverage. Humans would concentrate on the outlier cases. Surgeons that never sleep, drink, or snap at nurses will charge you $20 for a 4-hour procedure. I really think we need some optimistic dreamers to provide us with an alternate, non-dystopian view of the future, just to balance out the current gloom.
Secondly, however, this may also be the time to make a conscious effort at becoming more human, rather than trying to emulate machines. How we can develop an authentically human intelligence will be the third part of this journey.