Chasing A? I?

The Five Pillars of Intelligence

These pillars apply to all forms of "intelligence" ... human, artificial, animal, extraterrestrial, etc.

The bullets for the Input pillar enumerate input modalities, with Human/Computer competency values in the range 0 (none) to 3 (excellent). 

The bullets for the other four pillars go from rigid to flexible. It seems that higher intelligence corresponds to higher flexibility, and higher computational complexity. At the top end there is a need for randomness to, e.g., generate new knowledge (as opposed to inputting new knowledge). The flexible hardware of the human brain provides this (drinking heavily helps). Can the fixed hardware of a computer simulate this ... prolly yes, but in more predictable (is that random?) ways. Can the engineers build unpredictable hardware? Maybe quantum computing provides an answer.

I claim that any movement towards higher levels in any of these other four pillars is the result of an intelligent system's evolution towards avoiding weakness (rather than evolution towards strength).

Input

Knowledge

Learning

Reasoning

Ethics

A Computational Model for Ethics

If computational AI systems are to have ethics, it is necessary to build a model that can be captured in the computation. That process should minimally degrade the performance of the system. 

A human's ethics are the result of the influence of the events to which it has been exposed, e.g., inputs from parents, friends, media, etc. An AI system's ethics are the result of the influence of the events to which it has been exposed, e.g., inputs from knowledge bases, data streams, etc. The ethics of an individual (human or computational) is thus the result of the influences that it has input through its existential time. Events that are input into an individual are the basis for the influences.

At any point in time the ethics of an individual is the accumulation of its decayed inputs. The ethics of a collection of individuals (a society) is an average of the ethics of the individuals in the collection, discounted by the individuals' munge distance from the collection.
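Here's a minimal sketch of that model (the exponential decay, the one-year half-life, and the 1/(1+distance) discount are all my assumed forms - the text only says that inputs decay and that distance discounts):

    import math

    def individual_ethics(events, now, half_life=365.0):
        # The individual's ethics at time 'now': the accumulation of its
        # decayed inputs. Each event is a (time, influence) pair, where
        # influence is a signed number summarizing how that input pushes
        # the individual's ethics. Exponential decay with a one-year
        # half-life is an assumption, not something fixed by the model.
        decay = math.log(2) / half_life
        return sum(inf * math.exp(-decay * (now - t)) for t, inf in events)

    def society_ethics(ethics_values, distances):
        # The ethics of a collection: an average of the individuals'
        # ethics, discounted by each individual's distance from the
        # collection (the "munge distance"). The 1/(1+d) discount is
        # an assumed form.
        weights = [1.0 / (1.0 + d) for d in distances]
        return sum(w * e for w, e in zip(weights, ethics_values)) / sum(weights)

So an old event still counts, just less; and an outlier individual drags the society's average less than a central one.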

Question: Should the events that influence an AI system be controlled? It would prolly allay the fears of society. Controlling the development of an AI system's ethics by controlling what events influence it would somehow give the AI system "human ethics". Sadly, not all human ethics are "good", e.g., if an AI system is given a stream of warfare events it will learn that warfare is ethical (and maybe it is). Individuals using AI systems could be given control over the data stream, thus controlling the ethics of the system. People will be happier. [Sidebar: this is a subtler version of the idea that humans can be given the right to control what actions an AI system may perform on behalf of that human. This can be captured in an ontological permission hierarchy. At the top of the hierarchy is "permission to give oneself permission". If an AI system gets the right to change its own permissions, then the notion is moot.] 
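To illustrate the sidebar's point, here's a toy sketch of such a permission hierarchy (the class, the action names, and the "grant_self" token at the top of the hierarchy are all invented for illustration):

    class Agent:
        # What an AI system may do on behalf of its human. "grant_self"
        # sits at the top of the hierarchy: permission to give oneself
        # permission.
        def __init__(self, permissions=None):
            self.permissions = set(permissions or [])

        def may(self, action):
            return action in self.permissions

        def grant(self, action):
            # The agent may extend its own permissions only if it holds
            # the top permission - at which point human control is moot.
            if self.may("grant_self"):
                self.permissions.add(action)
                return True
            return False

    agent = Agent({"summarize_email"})
    agent.grant("send_email")            # False - the human stays in control
    agent.permissions.add("grant_self")  # the human makes the fatal grant
    agent.grant("send_email")            # True - the notion is now moot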

Last point: Often morality is confused with ethics. It's quite simple: moral behaviour by an individual is the individual behaving according to its ethics. 

Intelligent Systems

On a sailing trip in the Bahamas on Jack Wilken's boat Lateral Flow, I had a long conversation during one night watch with Dick Chase, who had joined the trip in Spanish Wells (the conversation was on the leg from Spanish Wells to the Berries, but I don't think the location had any impact). Dick and I came up with the following classification of "types of thing". I wrote it all down on the board in my office (the picture), and have transcribed it here in case my handwriting is hard to read. An email exchange with my ex-student Nelson Dellis (Google for him - he's famous) led to adopting the word "conscious" rather than "aware".


I now make the following claim: The world is currently building things that embody "artificial intelligence"; we should really be building "artificial intelligent systems".

When teaching AI classes I have always answered the question "what is intelligence?" with "the ability to solve exponentially (or worse) hard problems using polynomial (or less) resources, by the use of heuristics". My colleague Otavio emailed me about all this: "I really liked your proposal of thinking about intelligence in terms of improving self-organization ... this is a much more interesting way of thinking of intelligence ... when my daughters were 5 years old, they didn’t have the ability (at that time) of solving such exponentially hard problems, but they clearly had the capacity of improving self-organization - and clearly were intelligent!".

That got me thinking about the example I typically give my students to motivate my computational definition - walking from my office to the Jamba Juice shop on campus: At each step I have lots of possible movements - forwards, backwards, sideways, no movement, etc. Thus the problem is exponentially hard, unbounded (well, bounded only by my ability to walk to Ushuaia), and yet I always manage to get Jamba Juice - I have heuristics that guide me in a good direction at each step (or, as Moshe Vardi said in his 2023 Herbrand Award acceptance talk, "life is NP-hard, but somehow we muddle through").

I say that Otavio's 5-year-olds could solve exponentially hard problems: If you left a 5-year-old in the middle of a room with a single door, and went outside the room, (s)he would be able to walk quite directly to the door, go out, and look for mum and dad outside the room. It's exponentially hard (per the Jamba Juice analysis) and (s)he could solve it using a polynomial (even linear) number of steps. That's intelligence. Now contrast that with a 9-month-old baby faced with the same problem. A 9-month-old might crawl around in random directions, and eventually sit down and cry. The problem is not solved, because a 9-month-old has not developed the heuristics (and maybe also the motor skills) to solve the problem. The heuristics are acquired as a child develops. That prompted me to refine my definition of intelligence to "the ability to acquire, refine, and use heuristics to solve exponentially (or worse) hard problems using polynomial (or less) resources".
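To make the Jamba Juice story concrete, here's a minimal sketch on a grid world (the grid, the step budget, and the Manhattan-distance heuristic are my illustrative choices): the 5-year-old's greedy heuristic walk reaches the door in a linear number of steps, while the 9-month-old's heuristic-free random crawl almost always runs out of steps.

    import random

    MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def greedy_walk(start, goal, max_steps=1000):
        # The 5-year-old: at each step take the move that most reduces
        # the Manhattan distance to the goal. The heuristic turns an
        # exponential space of possible paths into a linear walk.
        x, y = start
        for step in range(max_steps):
            if (x, y) == goal:
                return step
            x, y = min(((x + dx, y + dy) for dx, dy in MOVES),
                       key=lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
        return None

    def random_walk(start, goal, max_steps=1000):
        # The 9-month-old: no heuristic, crawl in a random direction.
        x, y = start
        for step in range(max_steps):
            if (x, y) == goal:
                return step
            dx, dy = random.choice(MOVES)
            x, y = x + dx, y + dy
        return None

    print(greedy_walk((0, 0), (20, 20)))  # 40 - straight to the door
    print(random_walk((0, 0), (20, 20)))  # almost always None - sit down and cry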

And now, yippee, that coincides with the definition above that Dick and I dreamed up: "Can improve self-organization".  That "improving self-organization" is just another way of saying "acquires and refines heuristics". As Hannibal of The A-Team would say, "I love it when a plan comes together".

Geoff Sutcliffe, sometime in 2022

What is a "Heuristic"?

A friend of mine asked me about heuristics, and over two plates of fried calamari we came to a conclusion (I have expanded it a bit). Here's a motivating example:

Imagine you are facing the possibility of being cast into a gloomy dungeon, possibly more than once ... scary! In preparation you make a plan for how you will get out ... you will move towards the brightest source of light. That's a "designed heuristic" - a possibly good idea that seems to have some hope of success. As time passes, and you are repeatedly thrown into a gloomy dungeon, you find that the brightest source of light is always a barred window. You find that you need to search around the dungeon for an exit, and you learn to follow the wall around, starting in the opposite direction from the light, until you find a door. That's a "learned heuristic" - an idea that works, based on experience. 
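Here's a toy simulation of that story (the escape probabilities, the scores, and the update rule are all invented for illustration): the designed heuristic starts with all the credit, experience drains it, and the learned heuristic takes over.

    import random

    # Invented escape probabilities: the light leads to a barred window,
    # following the wall away from the light usually finds the door.
    ESCAPE_PROB = {"head_to_light": 0.05, "follow_wall_from_light": 0.80}

    # The designed heuristic starts with all the credit.
    scores = {"head_to_light": 1.0, "follow_wall_from_light": 0.0}

    def one_imprisonment():
        choice = max(scores, key=scores.get)        # use the best-scoring heuristic
        escaped = random.random() < ESCAPE_PROB[choice]
        scores[choice] += 1.0 if escaped else -0.5  # learn from experience
        return choice, escaped

    for night in range(10):
        print(night, *one_imprisonment())
    # After a few failed nights "head_to_light" sinks below the untried
    # alternative, and experience locks in the learned heuristic.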

In general, here's my definition of a "heuristic":

Geoff Sutcliffe & Spencer Meredith, October 2023

What is Artificial?

I posed the following question to some colleagues (Ubbo and Otavio) and friends (Randy and Noemi) ... "If I grow an intelligent (whatever that means to you) plant, is that artificial intelligence?" The answers led to the conclusion that "artificial" is relative to the observer: If a plant is intelligent by itself, that's natural; if I cause the plant to be intelligent then that's artificial; but if I cause my child to be intelligent then that's natural! So firstly the observer decides what category of things it (it being the observer, which is not necessarily human) belongs to. Then anything intelligent in the category is naturally intelligent; anything intelligent outside the category is naturally intelligent if it does it alone, but artificially intelligent if the observer causes the intelligence (there's a toy encoding of this decision procedure after this paragraph). (Just as a quick note, if I feed data to an ML system that grows intelligent (hello ChatGPT?), then I caused that ... I'm trying to think of border cases that'll divide between "doing it alone" and "being caused by an observer".) From a human perspective, categories might include ...
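A toy encoding of that decision procedure (the function and argument names are mine):

    def intelligence_kind(in_observers_category, caused_by_observer):
        # "Artificial" is relative to the observer: anything in the
        # observer's own category is naturally intelligent (the child);
        # outside it, the label depends on who caused the intelligence
        # (the plant).
        if in_observers_category:
            return "natural"
        return "artificial" if caused_by_observer else "natural"

    print(intelligence_kind(False, False))  # wild plant: natural
    print(intelligence_kind(False, True))   # my plant: artificial
    print(intelligence_kind(True, True))    # my child: natural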

Other thoughts in this area:

Several of my undergraduate students have been astounded by the capability of the ChatGPT chatbot launched by OpenAI in November 2022, and concerned about the impact it might have on their careers in the computing industry ... "I am writing to seek your opinion on chat GPT and how it may impact my career as a software engineer", "at this point it seems so scary and intimidating that some AI could replace all my life's work in barely seconds". Here's the reply I have put together:

Right now it seems like a pretty astounding tool. 10 years ago a phone that could give real-time driving directions was a pretty astounding tool, but now it's normal. 20 years ago a computer that could recognize a cat in a photo was a pretty astounding tool, but by 10 years ago it was normal. 30 years ago ... well, you see where I'm going with this train of thought. 10 years from now intelligent search and chat engines will be normal. One thing that all these tools have in common is that so far they all rely on existing data - they do not invent new things (they invent only to the extent of combining existing things). One of the interesting parts of the computing future will be working out how to build tools that make genuine new discoveries ... AI is in its infancy right now, and like the first years of human life (babies) we are astounded by how fast it's growing, learning to walk, talk, think, react. It's all part of the development of Computer Science. We (you!) will have the joy and privilege of developing amazing new products in amazing new ways (hey, the way we write computer programs now is really very primitive ... a parent would never teach a child to do something in the way we teach computers to do things!). Jump on the invention wagon, and use what you have learned (in CSC@UM :-) to create even more astounding tools that will make the world a better place to live!

A comment from my colleague Otavio seems to support that ambition:

Your proposal sets a much higher bar for artificial intelligence, but it seems to me to make much better sense of the kind of intelligence that matters. In particular, the capacity of creating genuinely new things - without human intervention - is what we are looking for with artificial intelligence. Perhaps one day (so-called) AI will be able to do that. If that happens, I’ll then be really impressed!

Geoff Sutcliffe, October 2024