Recognizable Intelligence

You would think that after 70+ years of the study of artificial intelligence (not to mention thousands of years of relevant philosophical discourse) we would at least have arrived at a usefully precise consensus definition of what "intelligence" really is. But unfortunately this simply isn't the case. I've been writing a lot about intelligence lately and I find myself constantly spending irksomely large amounts of time carefully considering what sort of adjectives to put in front of "intelligence" in order to make my point with sufficient precision. Am I talking about "general intelligence"? "Human-like intelligence"? "Human-level intelligence"? Inevitably, because I don't really have a good way to make absolutely rigorous definitions of any of those things, I find myself going back and changing the terms I'm using, often completely rewriting what started out as relatively clear-sounding sentences to remove troublesome words like "intelligence", "understanding", or "meaning". The rewritten sentences may have more rigor, but they almost always suffer dramatically in clarity and brevity.

To help rectify this situation I'm going to start using "Recognizable Intelligence" or "Recognizably Intelligent" (RI) as a subjective term for talking about intelligences which either I, or an abstract reasonable expert human, would eventually come to regard as being intelligent given enough interaction and evaluation. That is to say, an RI is a "know it when I see it" kind of intelligence. You could view RI as intelligence that passes a very relaxed version of the Turing test: instead of requiring that a putative intelligent agent emulate a human mind so well as to be indistinguishable from one, you just require that the expert judge would eventually give their stamp of approval of "recognizably intelligent", whatever that means to that particular judge.

Examples of Recognizable Intelligences

Just to help elucidate what I mean by RI, I think it will be helpful to give some examples. Obviously humans of various flavors are clearly RI, even though they can have dramatically different sorts of intelligence from each other. But for me personally I would also definitely include many animals: dogs, cats, octopi, ravens, etc. Your own personal evaluations may vary; since RI is a subjective evaluation, that is fine. For me, most insects simply wouldn't qualify. I am not at all convinced, for example, that a fruit fly is anything more than a set of finely honed reflexes, and for me that is just not enough to qualify it. Likewise, the most recent batch of LLMs like GPT-4 are clearly missing something that I think is important in order to give my wholehearted stamp of approval. It may well be the case that GPT-4 is smarter in some sense than your typical octopus, but whatever that intelligence is, I don't recognize it; it is in some ways perhaps an even more alien kind of intelligence than an octopus's. I can reason about the goals and thoughts of an octopus, but LLMs just don't seem to think the way we do, if it is fair to say that they think at all.

Why not AGI?

I think the immediate first question a lot of people might have is: why don't I use the phrase "General Intelligence"? There are two reasons: 1) we don't have any better definition of what it means to be "general" than we do of what it really means to be intelligent, and 2) I'm not at all convinced that humans really do qualify as "general intelligences", or for that matter that a truly general intelligence (say, an embodiment of an AIXI agent) would necessarily be recognizably intelligent to us humans.

To elaborate a little on that first point, a really rigorous definition of "general" is either almost useless or itself not rigorously definable. Without going into too much detail, a perfectly good (but useless) definition of generality is that a "general" intelligence should be at least as good at predicting patterns as a completely random predictor (a consequence of the fact that almost all functions are Kolmogorov random). But this of course doesn't capture what we really mean when we say that an intelligence is "general". We want a general intelligence to be able to gainfully tackle "any" sort of problem, but what we really mean is that it should be able to tackle just about any "natural" sort of problem. And figuring out what the distribution of "natural" problems looks like may very well be harder than figuring out how to make an intelligence that can solve problems like a human. For a much more in-depth version of this argument and a discussion of various flavors of what "generality" really means, consider giving Francois Chollet's "On the Measure of Intelligence" a read. He more or less comes to the conclusion that a useful definition of what it means to be generally intelligent (though he does not actually employ that phrase) is to have priors that enable rapid learning of certain kinds of useful patterns. Since human intelligence is the one consensus example we have of what such a set of priors might look like, he ends up supporting the idea that to make truly general AI we should figure out how to make AI that think in a human-like way, with human-like priors about the world. On this point I agree in principle. But as a guiding principle for new research I think we need to aggressively resist what I see as a rather myopic focus on intelligence as simply being defined as whatever it is that our human minds do.
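To make that "useless definition" point a bit more concrete, here is a minimal sketch of my own (it is not from Chollet's paper, and the "threshold functions" family below is purely a hypothetical stand-in for a "natural" problem distribution): averaged over the space of all possible Boolean functions, every fixed predictor achieves exactly chance-level accuracy, so "at least as good as random over all problems" rules nothing out; only once you restrict attention to some structured family of problems do one predictor's priors start to matter.

```python
from itertools import product

n = 3
inputs = list(product([0, 1], repeat=n))  # all 2^n input bit-vectors

def accuracy(predict, truth_table):
    """Fraction of inputs on which the predictor matches the target function."""
    return sum(predict(x) == y for x, y in zip(inputs, truth_table)) / len(inputs)

# Every possible Boolean function on n inputs, represented as a truth table
# (2^(2^n) of them; almost all are incompressible, i.e. Kolmogorov random).
all_functions = list(product([0, 1], repeat=len(inputs)))

predictors = {
    "always_zero": lambda x: 0,
    "parity": lambda x: sum(x) % 2,
    "majority": lambda x: int(sum(x) > n / 2),
}

# Over ALL functions, every predictor averages exactly 0.5 -- chance level.
for name, predict in predictors.items():
    mean_acc = sum(accuracy(predict, f) for f in all_functions) / len(all_functions)
    print(f"{name:>11} over all functions:       {mean_acc:.3f}")

# A hypothetical "natural" family: threshold functions f(x) = [sum(x) >= k].
natural = [tuple(int(sum(x) >= k) for x in inputs) for k in range(n + 2)]

# Over this restricted family, the predictor whose prior matches the family
# (majority) pulls ahead, while the others stay at chance.
for name, predict in predictors.items():
    mean_acc = sum(accuracy(predict, f) for f in natural) / len(natural)
    print(f"{name:>11} over threshold functions: {mean_acc:.3f}")
```

Nothing about this toy example is specific to Boolean functions; it is just the smallest setting I could think of in which "good on average over everything" collapses to chance while "good on a structured family" does not.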

Don't We Need a Rigorous Definition of Intelligence?

You may ask why on earth I am spending all this time trying to lay out an argument for an imprecise, subjective definition of intelligence. If the problem is that we don't have a rigorous consensus definition of intelligence that will help us make progress in AI research, shouldn't we be trying to better hone objective and rigorous definitions of intelligence?

Counterintuitively, I think the answer to that question is no. I think the best way to make my argument might actually be to go ahead and let none other than Alan Turing mock what is essentially what I have here called recognizable intelligence. These are the first few lines of the famous paper in which what we now think of as the "Turing test" was proposed.

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd.

Turing couldn't hope to cut through the millennia of philosophical arguments about what intelligence, understanding, meaning, or thinking really were simply by trying to make some set of rigorous definitions. It was clear to him that no such set of definitions could ever have a chance of achieving something like consensus without also first having achieved some sort of empirical support. So instead he tried to sidestep the need to talk about what really constitutes thinking or intelligence altogether. All at once he had given researchers in the decades to follow permission to pursue the creation of intelligence in machines without needing to even try to define what intelligence really is.

Now, my working definition of RI does differ from Turing's mocked "Gallup poll" in at least one important aspect. Like Turing, I think it is very important that the person making the evaluation of intelligence should themselves be an expert judge; Geoff Hinton or Gary Marcus, say, are examples that come to mind. Someone who doesn't really study cognition, or who has no real understanding of what might be going on under the hood of an AI agent, is going to be easier to fool. And since we have made the task of fooling ourselves into thinking that an AI is actually a human a central challenge of AI for the past 70 years, it is more important now than ever that the person making any assessment of intelligence is in some sense an expert tester. I don't think we need any specific criterion for what constitutes such an expert, however; anyone who considers themselves an AI researcher should, I think, qualify.

Why We Need RI as a Term

Why do I think this is important? Well, as I've already mentioned, I am tired of using phrases like "truly intelligent" and then second-guessing myself. I think that there are just too many flavors of intelligence to really nail them all down. Much as I don't think it really makes sense to talk about "general" intelligence as being any one well-defined thing, I don't think there is any one thing that is true intelligence. Likewise, I think that researchers have leaned on the goal of achieving human intelligence as a bit too much of a crutch.

That is because, I think, the Turing test has over time morphed into a sort of Turing mandate: first achieve human capabilities in some task, and then you have strengthened the argument that however you achieved that capability is somehow causally connected to whatever real intelligence is. But I think this is stifling the conversation somewhat. In my opinion we don't necessarily need to be able to mimic humans perfectly to understand what intelligence really is. Moreover, I think we have finally hit a sort of threshold where we will start to see AI that become incredibly good at seeming human without having anything like human cognition running under the hood.

Turing argued that if human behavior could be achieved then of course intelligence must also have been achieved. That argument was a necessary defense of the then young and vulnerable field of AI. The scientists of that day could not afford to waste time rehashing the previous centuries of philosophical arguments about the nature of minds. In that setting, putting on the cloak of unassailable objectivity by rigorously defining tasks and then, one by one, creating machines that could match human performance in those tasks was the right approach, and it remained so for many decades thereafter. But I think we have arrived in a different era, one where we need to begin really taking a hard look at what we think intelligence is. In order to facilitate that conversation, I think it may actually be worthwhile to embrace terms for which we admit we do not have precise, objective, rigorous definitions while we are still figuring things out. Maybe "recognizable intelligence" isn't quite the right abstraction, but it seems like a useful one to help with my own thinking and writing, at least.
