Minimal Intelligences
A question that I have recently been using to guide my thoughts about AI, and about intelligence in general, is to stop and ask myself whether or not I think some particular thing is absolutely required for recognizable intelligence (RI).
In the past I was much more likely to be pondering a question of the form: "What is modern AI missing?". Framing the question this way invites me to think of any of the myriad slippery mental concepts that we tend to apply to humans or animals, and then try to figure out which of these things is most valuable, and which of these qualities might potentially be made mathematically rigorous, measurable, implementable, etc. But this well of thought is almost infinitely deep. You can very comfortably spend years researching mechanisms for memory, or symbolic reasoning, or something a little more off the beaten path like how to incorporate "playfulness" into an AI system.
You could call this mode of thought the "additive" approach to AI, because you can just keep generating capabilities or properties of AI that you might like to tackle and add them to the AI toolbox one at a time. One hopes that when we have accumulated enough knowledge about enough mental capacities, we will have finally achieved an understanding of what intelligence really is. Certainly, if we somehow manage to figure out how to give a single AI all of the mental capacities that we have as humans, then we will have made an incredible stride towards the understanding of intelligence in general. Though I worry that even if such a feat were achieved, the complexity of such a system might make interpreting exactly why and how it works almost as hard as figuring out what is really going on in the human brain.
But even more importantly, I think the additive approach is actually a very poor guide as to which avenues of research are most promising to pursue. For example, consider the three candidate mental capacities mentioned above: memory, symbolic reasoning, and playfulness. All of these seem incredibly important to my mind. I would have a hard time picking just one of these three as being more important than the others for future research (though just for the record, I think I would have to pick playfulness, because it seems to me to capture much of what modern AI is missing that biological RIs possess).
When stuck purely in the additive mode of thinking, I might spend a lot of time thinking about exactly the right way to make each mental capacity precisely measurable, and also about how one could implement such a capacity in an AI. I think that this mode of thinking is very natural and can be a very productive direction for research. But I have recently come to think that it is really important to also spend a moment to follow up and ask yourself the subtractive question, "Am I sure that this is really necessary?".
In fact I find that, for me personally, a much more powerful version of this same question is:
Can I imagine a recognizable intelligence which does not have this mental capacity?
In this form the question has started to really change the way I think about what it means to be recognizably intelligent. Consider those three example mental capacities I mentioned above: memory, symbolic reasoning, and playfulness.
Memoryless RI
First up, memory: is it possible for me to imagine an entity which I think I would clearly recognize as being intelligent if I were allowed to interact with it, but which does not have a memory? My gut reaction, without putting much thought into it, is that of course any RI would absolutely need a memory! If there is no memory then there can be no learning, which seems rather important. But I think it is good to give extra scrutiny to things which feel like they "must" be true but for which you have no hard proof. Putting a little thought into what a powerful but memoryless intelligence might be like, one of the first things that comes to mind is actually the modern transformer-based LLM. I want to stress that I'm not saying that modern LLMs pass the test of being recognizable intelligences (for me they do not yet qualify as RI). Rather, I'm saying that I do not feel I can rule out a possible future in which something very much like our current LLMs does qualify as RI, though I would be very surprised if that turned out to be true.
If we are including the training process as part of our considerations, then I would definitely say that LLMs have some sort of memory, since they will learn to reproduce documents from their training data. But if you consider an LLM operating purely in inference mode, with no updates to its weights, then any text which was previously input to the LLM has no effect on future outputs, and so in that sense an LLM can be said to have no memory. You could argue that the input context is actually acting as the memory of the LLM, since in principle it could be used to represent a history of interactions with the model. I think that is a fair argument in a sense, but even if you want to think of the context as acting as some sort of effective memory, it certainly is a strange sort of memory. So I think one could make a strong argument that future, much stronger LLMs could eventually constitute an example of an RI which is also effectively memoryless.
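To make the distinction concrete, here is a toy sketch (entirely my own illustration, with a made-up frozen_model stand-in rather than any real system): in pure inference mode the model is a fixed function of its context, so the only way a past interaction can influence a future output is by being copied back into that context.

```python
# Toy sketch: a "frozen" model used purely in inference mode. The weights are
# baked into the function and never updated, so no state survives between calls.
def frozen_model(context: str) -> str:
    # A pure function of its input: identical context always yields identical output.
    return f"<reply conditioned on {len(context)} characters of context>"

history = ""  # any effective "memory" lives outside the model, in the context we build up
for user_turn in ["hello", "what did I just say?"]:
    history += f"\nUser: {user_turn}"   # past turns only matter if we feed them back in
    reply = frozen_model(history)
    history += f"\nModel: {reply}"
print(history)
```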
But for me there is a much more compelling example of memoryless RI: humans with brain damage that prevents memory formation. For example, consider the story of Clive Wearing (https://en.wikipedia.org/wiki/Clive_Wearing), who has a short term memory of 20-30 seconds but who cannot form any long term memories. If I ask myself the question of whether or not I think Clive is really recognizably intelligent in the way that I am, I have to say yes.
To me this is an extremely surprising answer, because it means that we can delete memory from the list of things that an RI absolutely must have. There is of course an important nuance here, since in both of the examples we just considered there was an initial learning phase during which some form of implicit or explicit long term memory was present, and in both cases there is some sort of effective working memory (though that is debatable for the LLM). But even so, this seems to me to be a compelling argument that even though memory is extremely important to the application of intelligence, perhaps it isn't one of the single most important things to focus on if the goal is to create a minimal, clearly recognizable intelligence.
Non-Symbolic RI
What about symbolic reasoning: could I imagine some sort of entity which to me would seem definitely recognizably intelligent but which was completely incapable of symbolic reasoning?
In the most general sense, I think there may be reason to argue that a "symbol" is anything which has a representational relationship with something other than itself. So by that very broad definition, yes, absolutely every intelligent entity must have symbols of some sort, because an intelligence which couldn't represent anything couldn't think anything. Such an entity might be better regarded as constituting a part of the environment rather than an agent acting in the environment.
But when I say "symbolic reasoning" I'm talking specifically about a kind of rule based manipulation of discrete objects and relationships between objects. So for example any sort of thing that might go by the name of an "algebra" or a "calculus" consists of ways of manipulating what I'm calling "symbols" and any system that can manipulate those symbols (well or poorly) has some level of symbolic reasoning.
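As a deliberately tiny illustration of what I mean by rule based manipulation of discrete objects, here is a toy sketch (the expression encoding and rewrite rules are invented purely for this example) of a system that simplifies algebraic expressions by applying explicit rules:

```python
# Toy symbolic reasoning: expressions are discrete nested tuples such as
# ("add", "x", ("mul", 0, "y")), and "reasoning" is the recursive application
# of rewrite rules like a + 0 -> a and a * 0 -> 0.
def simplify(expr):
    if not isinstance(expr, tuple):
        return expr                           # a bare symbol or number
    op, a, b = expr[0], simplify(expr[1]), simplify(expr[2])
    if op == "add" and a == 0:
        return b                              # 0 + b -> b
    if op == "add" and b == 0:
        return a                              # a + 0 -> a
    if op == "mul" and (a == 0 or b == 0):
        return 0                              # a * 0 -> 0
    if op == "mul" and a == 1:
        return b                              # 1 * b -> b
    return (op, a, b)

print(simplify(("add", "x", ("mul", 0, "y"))))   # prints: x
```

The point is just that the objects being manipulated are discrete and the manipulations are explicit rules; that is the kind of capability in question here.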
The first place my brain goes when thinking about this is to ask whether or not extremely sophisticated symbolic manipulation capabilities would tend to increase the likelihood that I personally would give some putative RI my stamp of approval. So for example, suppose that I'm interacting with a system which has access to an integrated Boolean satisfiability solver. If that AI were also able to translate interesting real world questions into satisfiability problems (and to be clear, such a translation exists for any problem in NP, precisely because Boolean satisfiability is NP-complete), then the ability to solve those problems would give the AI a boost. A not very powerful solver could find solutions to Sudoku puzzles, and a very powerful solver (really an almost infeasibly powerful one) could in principle be used to play a perfect game of Chess or Go or any other deterministic, perfect information, two player game.
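To make that kind of translation concrete, here is a rough sketch (my own illustration, not a description of any particular AI system) of how a Sudoku puzzle can be encoded as a Boolean satisfiability problem and handed to an off-the-shelf solver; it assumes the python-sat (pysat) package is available.

```python
# Sketch: encode 9x9 Sudoku as CNF clauses and solve with a SAT solver (pysat).
from pysat.solvers import Glucose3

def var(r, c, v):
    """Variable id meaning 'cell (r, c) holds value v+1'."""
    return 81 * r + 9 * c + v + 1

def groups():
    """All rows, columns, and 3x3 boxes as lists of (row, col) cells."""
    rows = [[(r, c) for c in range(9)] for r in range(9)]
    cols = [[(r, c) for r in range(9)] for c in range(9)]
    boxes = [[(3 * br + i, 3 * bc + j) for i in range(3) for j in range(3)]
             for br in range(3) for bc in range(3)]
    return rows + cols + boxes

def sudoku_clauses(givens):
    """Build CNF clauses; `givens` maps (row, col) -> clue value 1..9."""
    clauses = []
    for r in range(9):
        for c in range(9):
            clauses.append([var(r, c, v) for v in range(9)])          # at least one value per cell
            for v1 in range(9):
                for v2 in range(v1 + 1, 9):
                    clauses.append([-var(r, c, v1), -var(r, c, v2)])  # at most one value per cell
    for v in range(9):
        for cells in groups():
            for i in range(9):
                for j in range(i + 1, 9):
                    (r1, c1), (r2, c2) = cells[i], cells[j]
                    clauses.append([-var(r1, c1, v), -var(r2, c2, v)])  # each value at most once per group
    for (r, c), value in givens.items():
        clauses.append([var(r, c, value - 1)])                        # clues as unit clauses
    return clauses

def solve_sudoku(givens):
    with Glucose3(bootstrap_with=sudoku_clauses(givens)) as solver:
        if not solver.solve():
            return None
        model = {lit for lit in solver.get_model() if lit > 0}
        return [[next(v + 1 for v in range(9) if var(r, c, v) in model)
                 for c in range(9)] for r in range(9)]

# Example: an almost empty grid has many valid completions; the solver returns one of them.
print(solve_sudoku({(0, 0): 5, (4, 4): 9}))
```

The encoding itself is the easy, mechanical part; the interesting step is noticing that a problem like Sudoku has this shape at all, which is exactly the point I want to make next.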
But I am suspicious that the thing which really looks like the sort of intelligence I could recognize is actually the ability to transform interesting real world problems into solvable Boolean satisfiability problems in the first place. So having extremely sophisticated logical capabilities could no doubt be very useful to an RI; however, I'm not convinced that such capabilities are actually necessary for an intelligent entity to be RI.
Another direction of thought which also makes me suspicious of the need for symbolic manipulation in an RI is the historical overemphasis (in my opinion) on the importance of language to thought. Human language is very comfortably symbolic, and our language likely influenced our invention of the more rigorous kinds of symbolic reasoning like logic and the various kinds of algebras. In a similar vein, I think it is entirely plausible that creatures which are very smart, but which do not have (so far as we can tell) any sort of complex language, also may not have sophisticated symbolic reasoning capabilities. But nevertheless such creatures can pass the test of being clearly RI. The prime example of this for me is the octopus. Octopi do have a rudimentary sort of language based on displays which involve changing the color and texture of their skin and the placement of their tentacles. But octopi aren't social creatures, and it seems that there are just a handful of kinds of sentiments, like threat displays, that they may want to communicate to each other. Despite this apparent lack of language, octopi are very curious and like to solve puzzles. But those puzzles are physical in nature, not logical: give an octopus a fun little physical toy with a clever opening mechanism and it will probably figure it out. Try to teach an octopus to solve Sudoku puzzles for bits of fish and you will probably be out of luck.
So although I am not at all certain about it, I think that it is very plausible that RI do not require any sort of sophisticated symbolic reasoning capabilities.
Unplayful RI
Humans are playful (or at any rate the small ones are). Furthermore, I think that playfulness is key both to why humans ended up as intelligent as we are and to how I tend to recognize intelligence in other humans. Someone who has a compulsive tendency to tackle puzzles and/or invent puzzles of their own also tends to be someone with high intelligence. Though I think it is important to consider the argument that the causal arrow in humans potentially points the other direction: perhaps the more intelligent you are, the more likely you are to enjoy tackling puzzles. But really I'm not talking about high intelligence per se so much as I'm talking about a compulsion to play with things, be they physical objects or mental constructs. You don't have to give a baby a reward in order to make them interested in stacking blocks, toggling switches, figuring out how to fit things together, etc.
If I try to envision an intelligent entity without playfulness, the purely logical robot characters of Asimov like R. Daneel Olivaw (https://en.wikipedia.org/wiki/R._Daneel_Olivaw) come to mind, or for a more recent reference, someone like Commander Data from Star Trek. But of course such fictional characters cannot give me much empirical evidence one way or another; they are just mental archetypes which I can't help but envision. Also, because they are fictional characters it is easy to almost completely separate the property of curiosity (which both of these characters have in abundance) from the property of playfulness, but in AI we could actually build, those two properties seem likely to be very difficult to achieve independently of each other.
If it isn't already clear from the way I have talked about it so far, I am convinced that thinking carefully about playfulness is a potentially very important direction for future AI research. Of the mental capacities I've mentioned so far, I think playfulness might be the most important one. However, I am not certain that playfulness is strictly required for RI. The argument is more or less the same as the argument for memory: some amount of experimentation and/or learning is probably very useful for achieving apparent intelligence, but just like memory, turning off the tendency for experimentation and learning at some point does not make the resulting entity unintelligent. To put it another way, adults that have lost their sense of playfulness and/or curiosity about the world still retain a recognizable kind of intelligence. So even though I think playfulness may be an essential part of what has made humans the way we are, I again have to conclude that it is not hard to imagine entities that are not playful and yet are still clearly RI.
Minimal Capacity RI
This way of thinking inevitably gives rise to the question: what is a minimal set of mental capacities that an RI could have? I think this is very much in contrast to the usual way of thinking about making progress in AI, in which we simply want to add more and more capabilities and mental capacities. From my perspective, a completely minimal artificial RI (ARI) would likely tell us a great deal more about intelligence than an ARI which had enough mental capacities to pass the Turing test, the reason being that minimal examples are just much easier to analyze. But more importantly, a minimal RI seems likely to be much easier to achieve than something as complex as human-like cognition.
I think the process of trying to get rid of aspects of intelligence which may not be absolutely essential may, somewhat paradoxically, be just as fruitful and enlightening as continuing the more standard march forward of figuring out how to incorporate ever more mental capacities into new AI using new approaches. This is not to say that I think there will be much of a cottage industry of papers where people remove capabilities from existing AI one at a time (though now that I say it, who knows? maybe that would be enlightening). Rather, I'm just trying to make the argument that it is worth carefully thinking about whether or not everything we associate with intelligence (human or otherwise) is really necessary to recreating it.