Contents

  • Unit 1   Microelectronics and electronic circuits
    • ● Introduction to Microelectronics
    • ● How does a logic gate in a microchip work?
    • ● General electronics circuits
    • ● Reading: Nanotechnology--Getting Us Over the Brick Wall
  • Unit 2  Modern Electronic Design
    • ● Introduction to configurable computing
    • ● Cutting Critical Hardware
    • ● The Future of Configurable Computing
    • ● Reading: FPGAs
  • UNIT 3 Computer architecture and microprocessors
    • ● Computer architecture
    • ● CPU Design Strategies: RISC vs. CISC
    • ● VLIW Microprocessors
    • ● Embedded System
  • UNIT 4 Information network, protocols and applications
    • ● Computer networks
    • ● TCP/IP
    • ● Internet of Things
    • ● Technology Roadmap of the IoT
  • UNIT 5 Information Security and Biometrics Technology
    • ● Introduction to computer security
    • ● Encryption Methods
    • ● An Overview of Biometrics
  • Unit 6   Digital Signal Processing and Applications
    • ● Introduction to Digital Signal Processing (DSP)
    • ● Typical DSP Applications
    • ● DSP System Implementation solution
  • Unit 7   Speech Signal Processing
    • ● Speech Sampling and Processing
    • ● Speech Coding and Text-to-Speech (TTS) Synthesis
    • ● Speech Recognition and Other Speech Applications
  • Unit 8   Digital Images Processing
    • ● Representation of Images
    • ● Introduction to digital image processing
    • ● Fingerprint identification, hand geometry and face retrieval
  • UNIT 9   Modern TV Technology
    • ● Television Video Signals
    • ● Related Technologies
    • ● HDTV
  • UNIT 10  Telecommunication Network
    • ● Introduction to “Communication Systems”
    • ● Satellite Communications
    • ● What is CTI?
  • Unit 11  Optical Fiber Communication
    • ● The General Optical Fiber Communication System
    • ● Advantages of Optical Fiber Communication
    • ● Historical Development
  • UNIT 12 Artificial intelligence techniques and applications
    • ● Artificial Intelligence Techniques
    • ● Expert systems and robotics
    • ● Development of AI
  • UNIT 13 Writing Scientific Papers in English
    • ● Writing Scientific Papers in English

12-3    Development of AI

The Classical Period: Game Playing and Theorem Proving

Artificial intelligence is scarcely younger than conventional computer science; the beginnings of AI can be seen in the first game-playing and puzzle-solving programs written shortly after World War II. Game-playing and puzzle-solving may seem somewhat remote from expert systems, and insufficiently serious to provide a theoretical basis for real applications. However, a rather basic notion about computer-based problem solving can be traced back to early attempts to program computers to perform such tasks.

State Space Search

The fundamental idea that came out of early research is called state space search, and it is essentially very simple. Many kinds of problem can be formulated in terms of three important ingredients:

a starting state, such as the initial state of the chess board;

a termination test for detecting final states or solutions to the problem, such as the simple rule for detecting checkmate in chess;

a set of operations that can be applied to change the current state of the problem, such as the legal moves of chess.

One way of thinking of this conceptual space of states is as a graph in which the states are nodes and the operations are arcs. Such spaces can be generated as you go. For example, you could begin with the starting state of the chess board and make it the first node in the graph. Each of White's possible first moves would then be an arc connecting this node to a new state of the board. Each of Black's legal replies to each of these first moves could then be considered as operations which connect each of these new nodes to a changed state of the board, and so on.
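To make these three ingredients concrete, here is a minimal sketch in Python. The tiny number puzzle, the helper names, and the bound on the space are all invented for illustration; a chess board has far too many states to enumerate this way.

```python
from collections import deque

# Three ingredients of a toy problem (invented for illustration):
start_state = 0                            # a starting state
def is_goal(s): return s == 11             # a termination test
def operations(s): return [s + 1, s * 2]   # operators producing new states

def breadth_first_search(start):
    """Search the state space graph, generating nodes as we go."""
    frontier = deque([[start]])            # queue of paths, not just states
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path                    # states from start to a solution
        for nxt in operations(state):
            if nxt not in visited and nxt <= 100:  # bound the space
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                            # space exhausted, no solution

print(breadth_first_search(start_state))   # [0, 1, 2, 4, 5, 10, 11]
```

Because successor states are computed only when a node is expanded, the graph is generated as the search goes, exactly as described above.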

The simplest form of state space search is generate-and-test, and the algorithm is easy to specify:

(a) Generate a possible solution, in the form of a state in the search space, for example, a new board position as the result of a move.

(b) Test to see if this state is actually a solution by seeing if it satisfies the conditions for success, such as checkmate.

(c) If the current state is a solution, then quit, else go back to step (a).

In addition to game playing, another principal concern of artificial intelligence that began in the 1950s was theorem proving. Roughly speaking, theorem proving involves showing that some statement in which we are interested follows logically from a set of special statements, the axioms (which are known or assumed to be true), and is therefore a theorem.[1] As an example, suppose we have the two axioms "If something can go wrong, it will" and "My computer can go wrong", expressed as sentences in some formal language, such as the predicate calculus. Then we can derive a sentence representing "My computer will go wrong" as a theorem, using only the inference rules of the calculus.
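A minimal sketch of that derivation in Python follows; naive string matching stands in for genuine unification over the predicate calculus, and the rule encoding is invented for illustration.

```python
# One rule ("if X can go wrong, X will go wrong") and one known fact,
# mirroring the two axioms in the example above.
RULE_IF, RULE_THEN = " can go wrong", " will go wrong"
facts = {"My computer can go wrong"}

def forward_chain(facts):
    """Apply modus ponens repeatedly until no new sentence is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for fact in list(derived):
            if fact.endswith(RULE_IF):            # does the pattern match?
                subject = fact[: -len(RULE_IF)]   # bind X, e.g. "My computer"
                theorem = subject + RULE_THEN
                if theorem not in derived:
                    derived.add(theorem)          # a newly proved theorem
                    changed = True
    return derived

print(forward_chain(facts))
# contains 'My computer will go wrong' (set order may vary)
```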

Heuristic Search

Given that exhaustive search is not feasible for anything other than small search spaces, some means of guiding the search is required. A search that uses one or more items of domain-specific knowledge to traverse a state space graph is called a heuristic search. A heuristic is best thought of as a rule of thumb; it is not guaranteed to succeed, in the way that an algorithm or decision procedure is, but it is useful in the majority of cases.

A simple form of heuristic search is hill climbing. This involves giving the program an evaluation function which it can apply to the current state of the problem to obtain a rough estimate of how well things are going. For example, a simple evaluation function for a chess-playing program might involve a straightforward comparison of material between the two players. The program then seeks to maximize this function when it applies operators, such as the moves of chess. The algorithm for hill climbing is given below:

(a) Generate a possible solution as with step (a) of generate-and-test.

(b) From this point in the state space, apply rules that generate a new set of possible solutions; for example, the legal moves of chess that can be made from the current state.

(c) If any state in the newly derived set is a solution, then quit with success, else take the "best" state from the set, as judged by the evaluation function, make it the current state, and go back to step (b).
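A minimal sketch of this loop in Python, with an invented numeric problem standing in for chess and its material-count evaluation; the explicit check for a local maximum is an addition here, so that the greedy loop always terminates.

```python
def hill_climb(start, successors, evaluate, is_solution, max_steps=1000):
    """Greedy hill climbing: always move to the best-scoring successor."""
    current = start                        # step (a): an initial candidate
    for _ in range(max_steps):
        candidates = successors(current)   # step (b): new possible solutions
        if not candidates:
            return None                    # dead end: no moves available
        best = max(candidates, key=evaluate)
        if is_solution(best):
            return best                    # step (c): quit with success
        if evaluate(best) <= evaluate(current):
            return None                    # stuck on a local maximum
        current = best                     # make the "best" state current
    return None

# Invented toy problem: reach 42 by adding or subtracting 1 or 3.
result = hill_climb(
    start=0,
    successors=lambda s: [s + 1, s - 1, s + 3, s - 3],
    evaluate=lambda s: -abs(s - 42),       # higher means closer to the goal
    is_solution=lambda s: s == 42,
)
print(result)                              # 42
```

Like any heuristic, the evaluation function can mislead the search: a state that scores well locally may still lie on a path that leads nowhere, which is exactly why a heuristic is not guaranteed to succeed.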

Among the most important discoveries of this period were the twin realizations that:

problems of whatever kind could, in principle, be reduced to search problems so long as they were formalized in terms of a start state, an end state, and a set of operations for traversing a state space;

the search had to be guided by some representation of knowledge about the domain of the problem.

The Romantic Period: Computer Understanding

The mid-1960s to the mid-1970s represents what I call the Romantic Period in artificial intelligence research. At this time, people were very concerned with making machines "understand", by which they usually meant the understanding of natural language, especially stories and dialogue. Winograd's (1972) SHRDLU system was arguably the climax of this epoch: a program which was capable of understanding a quite substantial subset of English by representing and reasoning about a very restricted domain (a world consisting of children's toy blocks).

The program exhibited understanding by modifying its "blocks-world" representation in response to commands, and by responding to questions about both the configuration of blocks and its "actions" upon them. Thus it could answer questions like "What is the color of the block supporting the red pyramid?" and derive plans for obeying commands such as "Place the blue pyramid on the green block."

Other researchers attempted to model human problem-solving behavior on simple tasks, such as puzzles, word games and memory tests. The aim was to make the knowledge and strategy used by the program resemble the knowledge and strategy of the human subject as closely as possible.[2] Empirical studies compared the performance of program and subject in an attempt to see how successful the simulation had been.

Nevertheless, the new emphasis on knowledge representation proved to be extremely fruitful. Newell and Simon generated a kind of knowledge representation known as production rules, which has since become a mainstay of expert systems design and development. They also pioneered a technique known as protocol analysis, whereby human subjects were encouraged to think aloud as they solved problems, and such protocols were later analysed in an attempt to reveal the concepts and procedures employed. [3] This approach can be seen as a precursor of some of the knowledge elicitation techniques that knowledge engineers use today. These psychological studies showed just how hard the knowledge representation problem was, but demonstrated that it could be addressed in a spirit of empirical inquiry, rather than philosophical debate.

The Modern Period: Techniques and Applications

What I shall call the Modern Period stretches from the latter half of the 1970s to the present day. It is characterized by an increasing self-consciousness and self-criticism, together with a greater orientation towards techniques and applications. The flirtation with psychological aspects of understanding is somewhat less central than it was.

The disillusionment with general problem-solving methods, such as heuristic search, has continued apace. Researchers have realized that such methods overvalue the concept of "general intelligence" traditionally favored by psychologists, at the expense of the domain-specific ability that human experts possess. Such methods also undervalue simple common sense, particularly the ability of humans to avoid, identify and correct errors.

The conviction has grown that the heuristic power of a problem solver lies in the explicit representation of relevant knowledge that the program can access, and not in some sophisticated inference mechanism or some complicated evaluation function. [4] Researchers have developed techniques for encoding human knowledge in modules which can be activated by patterns. These patterns may represent raw or processed data, problem states or partial problem solutions. Early attempts to simulate human problem solving (Newell and Simon, 1972) strove for uniformity in the encoding of knowledge and simplicity in the inference mechanism. Later attempts to apply the results of this research to expert systems have typically allowed themselves more variety.
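As a rough illustration of such pattern-activated modules, here is a minimal production-system sketch in Python. The rules, facts, and function names are invented; real expert systems use far richer pattern languages and conflict-resolution strategies.

```python
# Each production rule pairs a pattern over working memory with a
# conclusion that is added to memory when the pattern matches.
rules = [
    ("thermal",  lambda m: "fan failed" in m and "cpu hot" in m,
                 "risk of shutdown"),
    ("diagnose", lambda m: "risk of shutdown" in m,
                 "recommend: replace fan"),
]

def run_production_system(facts, rules):
    """Fire matching rules until working memory stops changing."""
    memory = set(facts)
    fired = True
    while fired:
        fired = False
        for name, pattern, conclusion in rules:
            if pattern(memory) and conclusion not in memory:
                memory.add(conclusion)     # one rule's conclusion...
                fired = True               # ...may activate another rule
    return memory

print(run_production_system({"fan failed", "cpu hot"}, rules))
# {'fan failed', 'cpu hot', 'risk of shutdown', 'recommend: replace fan'}
# (set order may vary)
```

Note that the knowledge lives in the rules, which can be added or edited one at a time, while the inference mechanism itself stays simple and uniform.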

It became clear that there were advantages attached to the strategy of representing human knowledge explicitly in pattern-directed modules, instead of encoding it into an algorithm that could be implemented using more conventional programming techniques:

The process of rendering the knowledge explicit in a piecemeal fashion seemed to be more in tune with the way that experts store and apply their knowledge. In response to requests as to how they do their job, few experts will provide a well-articulated sequence of steps that is guaranteed to terminate with success in all situations. Rather, the knowledge that they possess has to be elicited by asking what they would do in typical cases, and then probing for the exceptions.

This method of programming allows for fast prototyping and incremental system development. If the system designer and programmer have done their jobs properly, the resultant program should be easy to modify and extend, so that errors and gaps in the knowledge can be rectified without major adjustments to the existing code. If they have not done their jobs properly, changes to the knowledge may well have unpredictable effects, since there may be unplanned interactions between modules of knowledge.

Practitioners realized that a program does not have to solve the whole problem, or even be right all of the time, in order to be useful. An expert system can function as an intelligent assistant, which enumerates alternatives in the search for a solution, and rules out some of the less promising ones. The system can leave the final judgement, and some of the intermediate strategic decisions, to the user and still be a useful tool.

The Modern Period has seen the development of a number of systems that can claim a high level of performance on non-trivial tasks, for example, the R1 system for configuring computer systems. A number of principles have emerged which distinguish such systems from both conventional programs and earlier work in AI.