Artificial Intelligence
Artificial Intelligence is a difficult science to describe, because it has fuzzy borders with psychology, computer science, mathematics, philosophy, statistics, physics, biology and many other disciplines. It is characterized in many ways, some of which are given below. I'll use these characterizations to introduce several important issues in Artificial Intelligence.
Long Term Goals
What is the science of Artificial Intelligence trying to achieve? At a very high level, you will hear AI researchers described as either 'weak' or 'strong'. The 'strong' AI people believe that computers can achieve consciousness (although they may not be working on consciousness issues themselves). The 'weak' AI people don't go that far. Other people talk of the difference between 'Big AI' and 'Small AI'. Big AI is the attempt to build robots with intelligence equalling that of humans, such as Lieutenant Commander Data from Star Trek. Small AI is about making programs work for small problems and trying to generalize the techniques so that they work on bigger problems. Most AI researchers don't worry about questions such as consciousness and concentrate on some of the following long term goals.
Firstly, many researchers want to:
- Produce machines which exhibit intelligent behavior.
In this sense, machines could simply be personal computers, or they could be robots with embedded systems, or a mixture of both. Why would we want to build intelligent systems? One answer appeals to the reasons why we use computers in general: to accomplish tasks which, if we did them by hand, would be error prone. For instance, how many of us would not reach for a calculator if asked to multiply two six digit numbers together? If we scale this up to more intelligent tasks, then it should be possible to use computers to do some fairly complicated things reliably. This reliability is useful when a task is beyond the limitations of the human brain, or when human intuition is counter-productive, as in the Monty Hall problem described below, which many people - some of whom call themselves mathematicians - get wrong.
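To make the point about counter-productive intuition concrete, the following sketch (a simple Monte Carlo simulation, with an illustrative function name of my own) estimates the win rates in the Monty Hall game for switching versus staying:

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of games won."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)          # door hiding the car
        choice = random.randrange(3)       # contestant's first pick
        # The host opens a door that is neither the pick nor the car.
        # (Which of the two goat doors he opens doesn't affect the odds.)
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        if choice == car:
            wins += 1
    return wins / trials
```

Running `play(True)` gives roughly 2/3 and `play(False)` roughly 1/3, which is exactly the result that trips up human intuition.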
Another reason to build intelligent machines is to enable us to do things we couldn't do before. A large part of science already depends on the use of computers, and more intelligent applications are increasingly being employed. Of course, the capacity of intelligent software to extend our abilities is not limited to science: people are working on AI programs that can make a creative input to human activities such as composing, painting and writing.
Finally, in building intelligent machines, we may learn something about the nature of intelligence in humans and other species. That deserves a category of its own. Another reason to study AI is to help us to:
- Understand human intelligence in society.
Artificial Intelligence can be seen as just the latest tool in the philosopher's toolbox for answering questions about human intelligence, following in the footsteps of logic, biology, mathematics, cognitive science, psychology and others. Some questions that philosophy has wrangled with are: "We know that we are more 'intelligent' than the other animals, but what does this really mean?" and "How many of the activities that we call intelligent can be replicated by computation (i.e., algorithmically)?"
For example, the ELIZA program described below is a classic example from the sixties in which a very simple program raised serious questions about the nature of human intelligence. Amongst other things, ELIZA led psychologists and philosophers to question what it means to 'understand' in natural language (e.g., English) conversations.
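To give a flavour of just how simple an ELIZA-style program can be, here is a minimal sketch (not Weizenbaum's original script, and with only a handful of made-up rules) that reflects a user's statement back as a question:

```python
import re

# A few illustrative rules in the spirit of ELIZA's keyword scripts;
# the real program used a much larger script of ranked patterns and
# also swapped pronouns ("my" -> "your") in the echoed text.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def respond(sentence):
    """Return a canned response by matching the first applicable rule."""
    text = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I am worried about my exams."))
# -> How long have you been worried about my exams?
```

A program this shallow clearly 'understands' nothing, yet transcripts of conversations with ELIZA convinced some users otherwise, which is precisely why it provoked the philosophical questions above.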
In saying that Artificial Intelligence helps us understand the nature of human intelligence in society, we should note that AI researchers increasingly study multi-agent systems: roughly speaking, collections of AI programs which communicate and cooperate/compete on small jobs towards the completion of larger tasks. This means that the social nature of intelligence, rather than just individual intelligence, is now within the range of computational study in AI.
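As a toy illustration of that idea (decomposing a big task into small jobs that agents work on and report back), the hedged sketch below has several 'agents' pull chunks from a shared queue and communicate partial results; all names are my own and nothing here is a real multi-agent framework:

```python
from queue import Queue
from threading import Thread

# Large task: sum a million numbers. Each agent repeatedly takes a
# small job (a chunk) and communicates a partial result back.
jobs, results = Queue(), Queue()

def agent():
    while True:
        chunk = jobs.get()
        if chunk is None:            # sentinel: no more work
            break
        results.put(sum(chunk))      # report a partial result

numbers = list(range(1_000_000))
chunks = [numbers[i:i + 100_000] for i in range(0, len(numbers), 100_000)]
for c in chunks:
    jobs.put(c)

workers = [Thread(target=agent) for _ in range(4)]
for w in workers:
    jobs.put(None)                   # one sentinel per agent
    w.start()
for w in workers:
    w.join()

total = sum(results.get() for _ in chunks)
print(total == sum(numbers))         # True
```

Real multi-agent research involves far richer communication and negotiation than a shared queue, but the division of a big task into cooperatively solved small jobs is the same in spirit.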
Of course, humans are not the only living things, and the question of life (including intelligent life) poses even bigger questions. Indeed, some Artificial Life (ALife) researchers have grand plans for their software. They want to use it to:
- Give birth to new life forms.
A study of Artificial Life will surely shed light on what it means for a complex system to be 'alive'. Moreover, ALife researchers hope that, in creating artificial life forms, intelligent behavior will emerge given time, much as it did in human evolution. There are also practical applications of the ALife approach. In particular, evolutionary algorithms (where parameters and programs are evolved to perform a specific task, rather than to exhibit signs of life) are becoming mainstream in AI.
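As a concrete taste of an evolutionary algorithm, the sketch below evolves a bit string towards an arbitrary target pattern using mutation and selection only (a minimal '(1+1)' scheme; realistic evolutionary algorithms usually also maintain populations and use crossover):

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]  # arbitrary goal pattern

def fitness(candidate):
    """Number of bits that match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def evolve(generations=1000, mutation_rate=0.1):
    """Start from a random individual; keep mutants that are no worse."""
    best = [random.randint(0, 1) for _ in TARGET]
    for _ in range(generations):
        mutant = [1 - b if random.random() < mutation_rate else b
                  for b in best]
        if fitness(mutant) >= fitness(best):
            best = mutant
    return best

print(evolve())  # usually converges to TARGET well within 1000 generations
```

Here the 'task' is trivial, but the same loop of random variation plus selection is what gets scaled up when parameters or whole programs are evolved to perform useful work.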
A less obvious long term goal of AI research is to:
- Add to scientific knowledge.
This is not to be confused with the application of AI programs to other sciences, discussed later. Rather, the point is that some AI researchers don't write intelligent programs and are certainly not interested in breathing life or human intelligence into programs. They are interested in the scientific problems that arise during the study of Artificial Intelligence. One example is the question of algorithmic complexity: how badly will a particular algorithm scale at solving a specific problem (in terms of the time taken to find the solution) as the problem instances get larger? These studies certainly have a bearing on the other long term goals, but the pursuit of knowledge itself is often overlooked as a reason for AI to exist as a scientific discipline. However, we won't be covering issues such as algorithmic complexity here.
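To illustrate what 'scaling with problem size' means, this small sketch (my own example, not from the text) counts the comparisons made by linear and binary search on a sorted list as the instance size doubles:

```python
def linear_search(items, target):
    """Return the number of comparisons, scanning left to right."""
    for steps, value in enumerate(items, start=1):
        if value == target:
            return steps
    return len(items)

def binary_search(items, target):
    """Return the number of comparisons, halving the range each time."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

# Worst case (target at the far end): the linear cost doubles with the
# input, while the binary cost grows by only about one comparison per
# doubling - linear versus logarithmic complexity.
for n in (1_000, 2_000, 4_000, 8_000):
    data = list(range(n))
    print(n, linear_search(data, n - 1), binary_search(data, n - 1))
```

This difference in growth rate, rather than the raw running time on any one instance, is what complexity analysis is about.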