Chasing A? I?

On a sailing trip in the Bahamas on Jack Wilken's boat Lateral Flow, I had a long conversation during one night watch with Dick Chase, who had joined the trip in Spanish Wells (the conversation was on the leg from Spanish Wells to the Berries, but I don't think the location had any impact). Dick and I came up with the following classification of "types of thing". I wrote it all down on the board in my office (the picture), and have transcribed it here in case my handwriting is hard to read (and an email exchange with my ex-student Nelson Dellis (Google for him - he's famous) led to adopting the word "conscious" rather than "aware"):

I now make the following claim: The world is currently building things that embody "artificial intelligence"; we should really be aiming to build "artificial intelligent systems". 

Somehow the word "artificial" has escaped definition in this context (although we provided a definition for "intelligent"), so read on for more thoughts ...

Thoughts about Artificial:

I posed the following question to some colleagues (Ubbo and Otavio) and friends (Randy and Noemi) ... "If I grow an intelligent (whatever that means to you) plant, is that artificial intelligence?" The answers led to the conclusion that "artificial" is relative to the observer: If a plant is intelligent by itself, that's natural; if I cause the plant to be intelligent then that's artificial; but if I cause my child to be intelligent then that's natural! So firstly the observer decides what category of things it belongs to (it being the observer, which is not necessarily human, hence "it" rather than "she/he"). Then anything intelligent in the category is naturally intelligent; anything intelligent outside the category is naturally intelligent if it became intelligent alone, but artificially intelligent if the observer caused the intelligence. (Just as a quick note, if I feed data to an ML system that grows intelligent (hello GPT chat?), then I caused that ... I'm still trying to think of border cases that divide "doing it alone" from "being caused by an observer".) From a human perspective, categories might include ...

Thoughts about Intelligence:

When teaching AI classes I have always answered the question "what is intelligence" with "the ability to solve exponentially (or worse) hard problems using polynomial (or less) resources, by the use of heuristics". My colleague Otavio emailed me about all this, "I really liked your proposal of thinking about intelligence in terms of improving self-organization ... this is a much more interesting way of thinking of intelligence ... when my daughters were 5 years old, they didn’t have the ability (at that time) of solving such exponentially hard problems, but they clearly had the capacity of improving self-organization - and clearly were intelligent!". That got me thinking about the example I typically give my students to motivate my computational definition - walking from my office to the Jamba Juice shop on campus: At each step I have lots of possible movements - forwards, backwards, sideways, no movement, etc. Thus the problem is exponentially hard and unbounded (well, bounded only by my ability to walk to Ushuaia), and yet I always manage to get Jamba Juice - I have heuristics that guide me in a good direction at each step (or, as Moshe Vardi said in his 2023 Herbrand Award acceptance talk, "life is NP-hard, but somehow we muddle through"). I say that Otavio's 5-year-olds could solve exponentially hard problems: If you left a 5-year-old in the middle of a room with a single door, and went outside the room, (s)he would be able to walk quite directly to the door, go out, and look for mum and dad outside the room. It's exponentially hard (per the Jamba Juice analysis) and (s)he could solve it using a polynomial (even linear) number of steps. That's intelligence. Now contrast that with a 9-month-old baby faced with the same problem. A 9-month-old might crawl around in random directions, and eventually sit down and cry. The problem is not solved, because a 9-month-old has not developed the heuristics (and maybe also the motor skills) to solve the problem.
The heuristics are acquired as a child develops. That prompted me to refine my definition of intelligent to "the ability to acquire, refine, and use heuristics to solve exponentially (or worse) hard problems using polynomial (or less) resources".
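The walk-to-Jamba-Juice argument can be made concrete with a toy sketch (none of this code is from the original text - the grid, the Manhattan-distance heuristic, and the function names are all my illustrative assumptions): on a grid there are exponentially many possible walks of a given length, yet a simple "step in a direction that reduces the distance to the goal" heuristic reaches the goal in a linear number of steps.

```python
def heuristic_walk(start, goal):
    """Greedy walk guided by a Manhattan-distance heuristic."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # the exponential branching
    def distance(p):                              # the heuristic estimate
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    path = [start]
    while path[-1] != goal:
        here = path[-1]
        # Take the neighbouring step the heuristic likes best.
        path.append(min(((here[0] + dx, here[1] + dy) for dx, dy in moves),
                        key=distance))
    return path

path = heuristic_walk((0, 0), (7, 5))
# There are 4^12 possible walks of length 12, but the heuristic needs
# exactly 12 steps - the Manhattan distance from (0, 0) to (7, 5).
print(len(path) - 1)  # 12
```

Without the heuristic, a walker choosing among the four moves blindly is the 9-month-old: exponentially many walks, almost none of which reach the goal.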

And now, yippee, that coincides with the definition above that Dick and I dreamed up: "Can improve self-organization". "Improving self-organization" is just another way of saying "acquires and refines heuristics". As Hannibal of The A-Team would say, "I love it when a plan comes together".

Geoff Sutcliffe, sometime in 2022


What is a "Heuristic"?

A friend of mine asked me about heuristics, and over two plates of fried calamari we came to a conclusion (I have expanded it a bit). Here's a motivating example:

Imagine you are facing the possibility of being cast into a gloomy dungeon, possibly more than once ... scary! In preparation you make a plan for how you will get out ... you will move towards the brightest source of light. That's a "designed heuristic" - a possibly good idea that seems to have some hope of success. As time passes, and you are repeatedly thrown into a gloomy dungeon, you find that the brightest source of light is always a barred window. You find that you need to search around the dungeon for an exit, and you learn to follow the wall around, starting in the opposite direction from the light, until you find a door. That's a "learned heuristic" - an idea that works, based on experience.
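The distinction can be sketched in a few lines of code (a toy model of my own, not from the conversation - the dungeon representation and all names are illustrative assumptions): in these dungeons the bright light is always a barred window, so the designed heuristic never succeeds, and keeping score across repeated escapes is what turns a hopeful guess into a learned preference.

```python
def towards_light(dungeon):
    # Designed heuristic: a hopeful guess made before any experience.
    return dungeon["light_is_exit"]

def follow_wall(dungeon):
    # Learned heuristic: discovered by searching around the wall.
    return dungeon["has_door"]

# Ten trips to the dungeon: the light is always a barred window.
dungeons = [{"light_is_exit": False, "has_door": True} for _ in range(10)]

scores = {"towards_light": 0, "follow_wall": 0}
for d in dungeons:
    scores["towards_light"] += towards_light(d)  # True counts as 1
    scores["follow_wall"] += follow_wall(d)

# Experience (the scores) is what refines the heuristics: next time,
# pick whichever heuristic has actually worked before.
best = max(scores, key=scores.get)
print(best)  # follow_wall
```

The scorekeeping is the "acquire and refine" part of the definition above; the designed heuristic is where you start when you have no scores yet.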

In general, here's my definition of a "heuristic":

Geoff Sutcliffe & Spencer Meredith, October 2023

Other thoughts in this area:

Several of my undergraduate students have been astounded by the capability of the ChatGPT chatbot launched by OpenAI in November 2022, and concerned about the impact it might have on their careers in the computing industry ... "I am writing to seek your opinion on chat GPT and how it may impact my career as a software engineer", "at this point it seems so scary and intimidating that some AI could replace all my life's work in barely seconds". Here's the reply I have put together:

Right now it seems like a pretty astounding tool. 10 years ago a phone that could give real-time driving directions was a pretty astounding tool, but now it's normal. 20 years ago a computer that could recognize a cat in a photo was a pretty astounding tool, but by 10 years ago it was normal. 30 years ago ... well, you see where I'm going with this train of thought. 10 years from now intelligent search and chat engines will be normal. One thing that all these tools have in common is that so far they all rely on existing data - they do not invent new things (they invent only to the extent of combining existing things). One of the interesting parts of the computing future will be working out how to build tools that make genuinely new discoveries ... AI is in its infancy right now, and as in the first years of human life (babies) we are astounded by how fast it's growing, learning to walk, talk, think, and react. It's all part of the development of Computer Science. We (you!) will have the joy and privilege of developing amazing new products in amazing new ways (hey, the way we write computer programs now is really very primitive ... a parent would never teach a child to do something in the way we teach computers to do things!). Jump on the invention wagon, and use what you have learned (in CSC@UM :-) to create even more astounding tools that will make the world a better place to live!

A comment from my colleague Otavio seems to support that ambition:

Your proposal sets a much higher bar for artificial intelligence, but it seems to me to make much better sense of the kind of intelligence that matters. In particular, the capacity of creating genuinely new things - without human intervention — is what we are looking for with artificial intelligence. Perhaps one day (so-called) AI will be able to do that. If that happens, I’ll then be really impressed!