Hmm, that’s quite the puzzler. Luckily, with the miracle of technology at my fingertips, I can employ a very sophisticated technique to attack this problem: go and read what some random people have written on Wikipedia.
If you go to the Wikipedia article on “Artificial Intelligence”, you will find a short section labelled “Definitions”, which contains the following:
> Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.
Wow, that’s impressive – as a (questionably) intelligent human, I don’t think even I can do that.
Suppose I have a difficult goal, say being able to run 100m in under 10 seconds. I don’t really know how to achieve it, so I go to my friend for advice and say “Hey, could you give me some advice on how to achieve my goal of running really fast?”. And he’s like “Yeah sure, that’s easy, just take actions that maximise your chance of successfully achieving your goal”.
My friend’s advice is obviously correct, but it is difficult to follow for two main reasons:
- We can’t practically deal with all potential choices that we could make. In the above example there are so many different exercise plans, dietary plans, questionable plans to obtain radioactive superpowers, etc. that we can’t really consider all of them.
- Given a single choice of action, it is difficult to know what the chance of me achieving my goal is – how likely is it that my toxic waste-based exploits will actually result in super-speed?
Each of these two points is complex enough to receive many, many posts of its own (don’t worry, they’re on their way), but for now I will just outline a possible solution for each.
Reducing our choices is probably the more obvious one of the two for most people – we in fact do this every day, every single time we make a choice. Whenever I get dressed in the morning I don’t consider what will happen if I put my underpants on my head – I restrict my attention to more sensible choices. We can do something similar in more or less any context (to a varying degree of success depending on our domain-specific knowledge).
To address the second issue, we can use probability theory, which allows us to quantify the chance of an event happening. We can now directly compare choices by success probability. (I am skipping a heck of a lot of details here, all of which I will revisit in time).
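Putting the two ideas together, here is a minimal sketch in Python of what “making a good choice” looks like once we have done both steps. The candidate actions and their success probabilities below are entirely made up for illustration – in reality, producing them well is exactly the hard part discussed above.

```python
# Step 1: restrict attention to a small set of sensible choices
# (no radioactive superpowers) instead of every conceivable plan.
# Step 2: attach to each choice an (assumed) estimate of
# P(run 100m in under 10 seconds | we take this action).
candidate_actions = {
    "sprint training programme": 0.05,
    "strength and diet plan": 0.03,
    "do nothing": 0.001,
}

# With both steps done, "take actions that maximise your chance of
# success" becomes a one-liner: pick the candidate with the highest
# estimated success probability.
best_action = max(candidate_actions, key=candidate_actions.get)

print(best_action)  # → sprint training programme
```

The point of the sketch is not the specific numbers, but the shape of the computation: once the choice set is small and each choice has a probability attached, “maximise your chance of success” is something a computer can actually do.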
If we can do these two things well, and are also able to convert all of this into a format that computers can understand, then we know how to do artificial intelligence. Wasn’t that easy?
As you might have guessed, it isn’t quite that simple – we do now have a great way for computers to make “good choices”, but it rests on some really, really strong assumptions. We have assumed that the way in which we narrow down the choices we consider is “reasonable” in some sense, we have assumed that we know how to assign these “probability numbers” well, and we have assumed that we can “speak computer” well enough to instruct a computer to perform these tasks instead of us.
All three of these are very active areas of research today, so satisfying these assumptions is actually really, really hard. Nonetheless, we have at least broken down our original query into a slightly more concrete set of problems – consider this an introduction to the more technical side of AI. In time (and quite a large number of further posts), I will try to address each of these three problems in more detail, so stay tuned.