Artificial intelligence

From New World Encyclopedia


Artificial intelligence (also known as machine intelligence and often abbreviated as AI) is intelligence exhibited by any manufactured (i.e. artificial) system. The term is often applied to general-purpose computers, and it also names the field of scientific investigation into the theory and practical application of such intelligence. In works of science fiction, "the AI" frequently refers to a singular discrete or distributed mechanism that exhibits artificial intelligence.

Modern AI research is concerned with producing useful machines that automate human tasks requiring intelligent behavior. Examples include scheduling resources such as military units, answering customers' questions about products, understanding and transcribing speech, and recognizing faces in CCTV footage. As such, it has become an engineering discipline, focused on providing solutions to practical problems. AI methods were used to schedule units in the first Gulf War, and the costs saved by this efficiency have repaid the US government's entire investment in AI research since the 1950s. AI systems are now in routine use in many businesses, hospitals, and military units around the world, as well as being built into many common home computer software applications and video games. (See Raj Reddy's AAAI paper for a comprehensive review of real-world AI systems in deployment today.)

AI methods are often employed in cognitive science research, which tries to model subsystems of human cognition. Historically, AI researchers aimed for the loftier goal of so-called strong AI: simulating complete, human-like intelligence. This goal is epitomised by the fictional strong AI computer HAL 9000 in the film 2001: A Space Odyssey. It is unlikely to be met in the near future and is no longer the subject of most serious AI research. The label "AI" has something of a bad name due to the failure of these early expectations, aggravated by various popular science writers and media personalities, such as Professor Kevin Warwick, whose work has raised expectations of AI research far beyond its current capabilities. For this reason, many AI researchers say they work in cognitive science, informatics, statistical inference, or information engineering. AI has seen many research paradigms, including symbolic, connectionist, and Bayesian approaches, and there is still no consensus on the best way to proceed. Recent research areas include Bayesian networks and artificial life.

History

Prehistory of AI

Humans have always speculated about the nature of mind, thought, and language, and searched for discrete representations of their knowledge. Aristotle tried to formalize this speculation by means of syllogistic logic, which remains one of the key strategies of AI. The first is-a hierarchy was created by Porphyry of Tyre in about 260 C.E. Classical and medieval grammarians explored more subtle features of language that Aristotle shortchanged, and mathematician Bernard Bolzano made the first modern attempt to formalize semantics in 1837.

Early computer design was driven mainly by the complex mathematics needed to target weapons accurately, with analog feedback devices inspiring an ideal of cybernetics. The expression "artificial intelligence" was introduced as a 'digital' replacement for the analog 'cybernetics'.

Development of AI theory

Much of the (original) focus of artificial intelligence research draws from an experimental approach to psychology, and emphasizes what may be called linguistic intelligence (best exemplified in the Turing test).

Approaches to artificial intelligence that do not focus on linguistic intelligence include robotics and collective intelligence approaches, which focus on active manipulation of an environment, or consensus decision making, and draw from biology and political science when seeking models of how "intelligent" behavior is organized.

AI also draws from animal studies, in particular with insects, which are easier to emulate as robots (see artificial life), as well as animals with more complex cognition, including apes, who resemble humans in many ways but have less developed capacities for planning and cognition. Some researchers argue that animals, which are apparently simpler than humans, ought to be considerably easier to mimic. But satisfactory computational models for animal intelligence are not available.

Seminal papers advancing AI include "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943) by Warren McCulloch and Walter Pitts, "Computing Machinery and Intelligence" (1950) by Alan Turing, and "Man-Computer Symbiosis" (1960) by J.C.R. Licklider. See cybernetics and Turing test for further discussion.

There were also early papers that denied the possibility of machine intelligence on logical or philosophical grounds, such as "Minds, Machines and Gödel" (1961) by John Lucas.

With the development of practical techniques based on AI research, advocates of AI have argued that opponents of AI have repeatedly changed their position on tasks such as computer chess or speech recognition that were previously regarded as "intelligent" in order to deny the accomplishments of AI. Douglas Hofstadter, in Gödel, Escher, Bach, pointed out that this moving of the goalposts effectively defines "intelligence" as "whatever humans can do that machines cannot".

John von Neumann (quoted by E.T. Jaynes) anticipated this in 1948 by saying, in response to a comment at a lecture that it was impossible for a machine to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!". Von Neumann was presumably alluding to the Church-Turing thesis which states that any effective procedure can be simulated by a (generalized) computer.

In 1969 McCarthy and Hayes started the discussion about the frame problem with their essay, "Some Philosophical Problems from the Standpoint of Artificial Intelligence".

Experimental AI research

Artificial intelligence began as an experimental field in the 1950s with such pioneers as Allen Newell and Herbert Simon, who founded the first artificial intelligence laboratory at Carnegie Mellon University, and John McCarthy and Marvin Minsky, who founded the MIT AI Lab in 1959. They all attended the Dartmouth College summer AI conference in 1956, which was organized by McCarthy, Minsky, Nathaniel Rochester of IBM, and Claude Shannon.

Historically, there have been two broad styles of AI research: the "neats" and the "scruffies". "Neat", classical, or symbolic AI research generally involves symbolic manipulation of abstract concepts, and is the methodology used in most expert systems. Parallel to this are the "scruffy", or "connectionist", approaches, of which artificial neural networks are the best-known example; these try to "evolve" intelligence by building systems and then improving them through some automatic process, rather than systematically designing something to complete the task. Both approaches appeared very early in AI history. Throughout the 1960s and 1970s scruffy approaches were pushed into the background, but interest revived in the 1980s when the limitations of the "neat" approaches of the time became clearer. However, it has become clear that contemporary methods using both broad approaches have severe limitations.

Artificial intelligence research was very heavily funded in the 1980s by the Defense Advanced Research Projects Agency in the United States and by the Fifth Generation Computer Systems project in Japan. The failure of the work funded at the time to produce immediate results, despite the grandiose promises of some AI practitioners, led to correspondingly large cutbacks in funding by government agencies in the late 1980s, and to a general downturn in activity in the field known as the AI winter. Over the following decade, many AI researchers moved into related areas with more modest goals, such as machine learning, robotics, and computer vision, though research in pure AI continued at reduced levels.

Modern AI

Modern AI research focuses on practical engineering tasks. (Supporters of Strong AI may call this approach 'weak AI'.)

There are several fields of AI, one of which is natural language. Many weak AI fields have specialised software or programming languages created for them. For example, one of the 'most-human' natural language chatterbots, A.L.I.C.E., uses the markup language AIML, created specifically for it and its various clones, the Alicebots. Nevertheless, A.L.I.C.E. is still based on pattern matching without any reasoning. This is the same technique that ELIZA, the first chatterbot, used back in 1966. Jabberwacky is a little closer to strong AI, since it learns how to converse from the ground up based solely on user interactions. Even so, the result is still very poor, and it is reasonable to say that there is currently no general-purpose conversational artificial intelligence.
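The pattern-matching technique behind such chatterbots can be sketched in a few lines. The rules and responses below are invented for illustration; they are not taken from ELIZA or A.L.I.C.E., but they show how a response can be produced with no reasoning at all:

```python
import random
import re

# Each rule pairs a regular-expression pattern (the "if") with one or
# more response templates (the "thens"). These rules are illustrative only.
RULES = [
    (r"\bhello\b", ["How are you today?"]),
    (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bmy (\w+)\b", ["Tell me more about your {0}."]),
]

def respond(line):
    """Match the input against each pattern in turn; no reasoning involved."""
    for pattern, templates in RULES:
        match = re.search(pattern, line.lower())
        if match:
            # Echo captured words back into the canned template.
            return random.choice(templates).format(*match.groups())
    return "Please go on."  # fallback when nothing matches

print(respond("Hello there"))   # How are you today?
print(respond("I feel tired"))  # e.g. Why do you feel tired?
```

The illusion of understanding comes entirely from the author of the rules; the program itself only shuffles text.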

Viewed with a moderate dose of cynicism, AI can be seen as "the set of computer science problems without good solutions at this point". Once a sub-discipline produces useful work, it is carved out of artificial intelligence and given its own name. Examples include pattern recognition, image processing, neural networks, natural language processing, robotics, and game theory. While the roots of each of these disciplines are firmly established as having been part of artificial intelligence, they are now thought of as somewhat separate.

Whilst progress towards the ultimate goal of human-like intelligence has been slow, many spinoffs have come in the process. Notable examples include the languages LISP and Prolog, which were invented for AI research but are now used for non-AI tasks. Hacker culture first sprang from AI laboratories, in particular the MIT AI Lab, home at various times to such luminaries as McCarthy, Minsky, Seymour Papert (who developed Logo there), and Terry Winograd (who abandoned AI after developing SHRDLU).

Many other useful systems have been built using technologies that at least once were active areas of AI research. Some examples include:

  • Chinook was declared the Man-Machine World Champion in checkers (draughts) in 1994.
  • Deep Blue, a chess-playing computer, beat Garry Kasparov in a famous match in 1997.
  • InfoTame, a text analysis search engine developed by the KGB for automatically sorting millions of pages of communications intercepts.
  • Fuzzy logic, a technique for reasoning under uncertainty, has been widely used in industrial control systems.
  • Expert systems are being used to some extent industrially.
  • Machine translation systems such as SYSTRAN are widely used, although results are not yet comparable with human translators.
  • Natural language processing
  • Neural networks have been used for a wide variety of tasks, from intrusion detection systems to computer games.
  • Optical character recognition systems can translate arbitrary typewritten European script into text.
  • Handwriting recognition is used in millions of personal digital assistants.
  • Speech recognition is commercially available and is widely deployed.
  • Computer algebra systems, such as Mathematica and Macsyma, are commonplace.
  • Computer vision systems are used in many industrial applications ranging from hardware verification to security systems.
  • Program synthesis
  • Robotics
  • AI planning methods were used to automatically plan the deployment of US forces during Gulf War I. This task would have cost months of time and millions of dollars to perform manually, and DARPA stated that the money saved on this single application was more than their total expenditure on AI research over the last 30 years.

The vision of artificial intelligence replacing human professional judgment has arisen many times in the history of the field, and today "expert systems" are routinely used to augment or replace professional judgment in some specialized areas of engineering and medicine. A familiar (if modest) example was Clippy, the paperclip assistant in Microsoft Office, which tried to predict what advice the user would like.

Micro-World AI

The real world is full of distracting and obscuring detail: generally science progresses by focusing on artificially simple models of reality (in physics, frictionless planes and perfectly rigid bodies, for example). In 1970 Marvin Minsky and Seymour Papert, of the MIT AI Laboratory, proposed that AI research should likewise focus on developing programs capable of intelligent behaviour in artificially simple situations known as micro-worlds. Much research has focused on the so-called blocks world, which consists of coloured blocks of various shapes and sizes arrayed on a flat surface.
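A blocks-world state is simple enough to represent directly, which is what made reasoning about it tractable. The following sketch is illustrative only (the predicates and the single move rule are invented, not taken from any particular historical system):

```python
# A toy blocks-world state: each fact is an ("on", X, Y) tuple meaning
# "block X rests on Y" (where Y may be the table).
state = {("on", "A", "table"), ("on", "B", "A"), ("on", "C", "table")}

def clear(block, facts):
    """A block is clear when no other block rests on top of it."""
    return not any(f[2] == block for f in facts if f[0] == "on")

def move(block, dest, facts):
    """Move a clear block onto a clear destination (or onto the table)."""
    if not clear(block, facts):
        raise ValueError(block + " is not clear")
    if dest != "table" and not clear(dest, facts):
        raise ValueError(dest + " is not clear")
    old = next(f for f in facts if f[0] == "on" and f[1] == block)
    return (facts - {old}) | {("on", block, dest)}

state = move("B", "C", state)     # put B on top of C
print(("on", "B", "C") in state)  # True
```

Because the whole world fits in a handful of facts, a planner can search exhaustively over moves like this, something that is hopeless in the cluttered real world.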

Languages, Programming Style and Software Culture

GOFAI research is often done in Lisp or Prolog. Bayesian work often uses MATLAB or Lush (a numerical dialect of Lisp); these languages include many specialist probabilistic libraries. Real-life, and especially real-time, systems are likely to use C++. AI programmers are often academics who emphasise rapid development and prototyping rather than bulletproof software engineering practices; hence the use of interpreted languages that enable rapid command-line testing and experimentation. AI culture is historically tied to the Unix and hacker cultures.

The most basic AI program is a single if-then statement, such as "If A, then B." If you type the letter 'A', the computer shows you the letter 'B'. In effect, you are teaching the computer to perform a task: you input one thing, and the computer responds with something you told it to do or say. Nearly all programs contain if-then logic. A more complex example: if you type "Hello.", the computer responds "How are you today?" This response is not the computer's own thought but a line you wrote into the program beforehand. Whenever you type "Hello.", the computer always responds "How are you today?" To the casual observer the computer may seem alive and thinking, but the response is automated. AI is often a long series of if-then (or cause-and-effect) statements.

A randomizer can be added to this. The randomizer creates two or more response paths. For example, if you type "Hello", the computer may respond with "How are you today?" or "Nice weather" or "Would you like to play a game?" Three responses (or 'thens') are now possible instead of one, each with an equal chance of appearing. This is similar to a pull-cord talking doll that can respond with any of a number of sayings. A computer AI program can have thousands of responses to the same input, which makes it less predictable and closer to how a real person would respond. When thousands of inputs ('ifs'), not just "Hello.", and thousands of responses ('thens') are written into the AI program, the computer can talk (or type) with most people, provided those people know which input lines to type.
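The if-then-plus-randomizer scheme described above can be sketched in a few lines; the inputs and canned responses here are invented examples:

```python
import random

# The "ifs" are the dictionary keys; the "thens" are the lists of
# canned responses, one of which is chosen at random.
RESPONSES = {
    "Hello.": ["How are you today?", "Nice weather.",
               "Would you like to play a game?"],
    "Goodbye.": ["See you later."],
}

def reply(line):
    """Look the input up and pick one response, like a pull-cord doll."""
    options = RESPONSES.get(line)
    if options is None:
        return "I don't understand."  # no matching "if"
    return random.choice(options)

print(reply("Hello."))  # one of the three canned responses
```

Scaling this up to thousands of entries changes nothing fundamental: the program still only replays what its author wrote in.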

Many games, like chess and strategy games, use action responses instead of typed responses, so that players can play against the computer. Robots with AI brains would use If-Then statements and randomizers to make decisions and speak. However, the input may be a sensed object in front of the robot instead of a "Hello." line, and the response may be to pick up the object instead of a response line.

AI research in various countries

AI research is carried out all over the world, often in national universities and laboratories.

United Kingdom

In the United Kingdom, the most noted universities for AI research are Edinburgh and Sussex, although AI-related research activities can be found in most universities in the country. Since the publication of the Lighthill report, UK funding for AI has dried up, although research continues under more politically acceptable headings such as "Informatics", "Information Engineering", and "Inference". Microsoft runs a large AI research group in Cambridge, which works closely with Cambridge University. HP Labs in Bristol, BT in Ipswich, and various government defence agencies also research AI applications.

AI in Business

According to Haag, Cummings, et al. (2004), there are four common techniques of artificial intelligence used in business:

  • Expert Systems
  • Neural Networks
  • Genetic Algorithms
  • Intelligent Agents

Expert systems apply reasoning capabilities to reach a conclusion: an expert system can process large amounts of known information and provide conclusions based on it.
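This kind of rule-based reasoning can be sketched as a simple forward-chaining loop; the rules and facts below are invented for illustration and do not come from any real expert system:

```python
# Each rule pairs a set of required conditions with a conclusion.
# These toy medical rules are illustrative only.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    """Repeatedly fire any rule whose conditions are all satisfied,
    until no new conclusions can be added (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}))
# includes 'flu_suspected' and 'refer_to_doctor'
```

Note that the second rule can only fire after the first has added its conclusion, which is why the loop runs to a fixed point rather than making a single pass.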

Neural networks are AI systems capable of finding and differentiating between patterns. Police departments use neural networks to identify corruption.

Genetic algorithms apply a survival-of-the-fittest process to generate increasingly better solutions to a problem. Investment brokers use genetic algorithms to create the best possible combination of investment opportunities for their clients.
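The survival-of-the-fittest idea can be sketched with a toy genetic algorithm. The problem here (evolving bit-strings toward all ones) and all parameters are invented for illustration only:

```python
import random

LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(bits):
    return sum(bits)  # more ones = fitter

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits):
    # Flip one random bit.
    bits[random.randrange(LENGTH)] ^= 1
    return bits

random.seed(0)  # deterministic run for reproducibility
population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]  # "survival of the fittest"
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best))  # close to LENGTH after a few generations
```

Real applications differ only in the encoding and the fitness function; a broker's version would score portfolios rather than counting ones.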

An intelligent agent is software that assists you, or acts on your behalf, in performing repetitive computer-related tasks. Examples include data mining programs and monitoring and surveillance agents.


Logic programming was sometimes considered a field of artificial intelligence, but this is no longer the case.

Machines displaying some degree of intelligence

There are many examples of programs displaying some degree of intelligence. Some of these are:

  • Twenty Questions - A neural-net based game of 20 questions
  • The Start Project - a web-based system which answers questions in English.
  • Brainboost - another question-answering system
  • Cyc, a knowledge base with vast collection of facts about the real world and logical reasoning ability.
  • Jabberwacky, a learning chatterbot
  • ALICE, a chatterbot
  • Alan, another chatterbot
  • Albert One, multi-faceted chatterbot
  • ELIZA, a program which pretends to be a psychotherapist, developed in 1966
  • PAM (Plan Applier Mechanism) - a story understanding system developed by Robert Wilensky in 1978.
  • SAM (Script applier mechanism) - a story understanding system, developed in 1975.
  • SHRDLU - an early natural language understanding computer program developed in 1968-1970.
  • Creatures, a computer game with breeding, evolving creatures coded from the genetic level upwards using a sophisticated biochemistry and neural network brains.
  • Lucy - a BBC news story covered this latest creation of Creatures creator Steve Grand.
  • AARON - artificial intelligence, which creates its own original paintings, developed by Harold Cohen.
  • Eurisko - a language for solving problems which consists of heuristics, including heuristics for how to use and change its heuristics. Developed in 1978 by Douglas Lenat.
  • X-Ray Vision for Surgeons - a group in MIT which researches medical vision.
  • Neural networks-based programs for backgammon and go.
  • Talk to William Shakespeare - William Shakespeare chatbot
  • Chesperito - A chat/info bot on #windows95 channel on the DALnet IRC network.
  • Ultra Hal, multimedia chatterbot with learning capabilities.
  • ALI (Artificial Language Intelligence), chatterbot and chatterbot builder with advanced artificial intelligence, easy scripting, and machine learning capabilities.
  • djuzeppe Online AI-bot and online Editor for its knowledge base.

AI Researchers

There are many thousands of AI researchers (see Category:Artificial intelligence researchers) around the world at hundreds of research institutions and companies. Among the many who have made significant contributions are:

  • Alan Turing
  • Boris Katz
  • Doug Lenat
  • Douglas Hofstadter
  • Geoffrey Hinton
  • John McCarthy
  • Karl Sims
  • Kevin Warwick
  • Igor Aleksander
  • Marvin Minsky
  • Seymour Papert
  • Maggie Boden
  • Mike Brady
  • Oliver Selfridge
  • Raj Reddy
  • Judea Pearl
  • Rodney Brooks
  • Roger Schank
  • Terry Winograd
  • Rolf Pfeifer
  • James Hendler
  • Ali Sohani
  • Sankar K Pal

Further reading

Non-fiction

The following are considered seminal works in the field. A longer list is at Important publications in artificial intelligence.

  • Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig ISBN 0130803022
  • Gödel, Escher, Bach : An Eternal Golden Braid by Douglas R. Hofstadter
  • Understanding Understanding: Essays on Cybernetics and Cognition by Heinz von Foerster
  • In the Image of the Brain: Breaking the Barrier Between Human Mind and Intelligent Machines by Jim Jubak
  • Today's Computers, Intelligent Machines and Our Future by Hans Moravec, Stanford University
  • The Society of Mind by Marvin Minsky, ISBN 0671657135 15 March 1998
  • Perceptrons: An Introduction to Computational Geometry by Marvin Minsky and Seymour Papert ISBN 0262631113 28 December 1987
  • The Brain Makers: Genius, Ego and Greed In The Quest For Machines That Think by HP Newquist ISBN 0672304120.

Sources

  • John McCarthy: Proposal for the Dartmouth Summer Research Project On Artificial Intelligence.
  • John Searle: "Minds, Brains and Programs." Behavioral and Brain Sciences 3 (3): 417-457, 1980.

See also

  • List of fictional computers
  • List of fictional robots and androids


Applications

  • Artificial intelligence agent
  • Bio-inspired computing
  • Clinical decision support system
  • Computer game bot
  • Game AI
  • List of Artificial Intelligence projects

Uncategorised

  • Collective intelligence — The idea that a relatively large number of people co-operating in one process can lead to reliable action.
  • Friendly AI — A model for creating artificial intelligence which is moral and "safe".
  • Game programming AI
  • K-line (artificial intelligence)
  • Mindpixel — A project to collect simple true / false assertions and collaboratively validate them with the aim of using them as a body of human common sense knowledge that can be utilised by a machine.
  • Truth maintenance systems — by Gerald Jay Sussman and Richard Stallman



Credits

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. The history of earlier contributions by wikipedians, and of this article since it was imported to New World Encyclopedia, is accessible to researchers.

Note: Some restrictions may apply to use of individual images which are separately licensed.