
Modern game-playing programs

Lecture



A public demonstration of game-playing programs is to artificial intelligence roughly what Grand Prix motor racing is to the automobile industry: state-of-the-art game programs are blindingly fast, highly tuned machines that embody first-rate engineering, but they are of little use to the ordinary customer. Some researchers even believe that game playing lies outside the mainstream of artificial intelligence research. Nevertheless, the field continues to generate not only widespread admiration but also a steady stream of innovations that are later adopted by the wider developer community.

Chess. In 1957, Herbert Simon predicted that within 10 years a computer would defeat the human world chess champion. Forty years later, Deep Blue defeated Garry Kasparov in a six-game exhibition match. Simon was wrong, but only by a factor of 4. Kasparov wrote:

The decisive game of the match was Game 2, which left a deep impression on my memory ... We saw something that went well beyond our wildest expectations of how well a computer would be able to foresee the long-term positional consequences of its decisions. The machine refused to move to a position that had a clear but short-term advantage, showing a very human sense of danger.

The Deep Blue program was developed by Murray Campbell, Feng-hsiung Hsu, and Joseph Hoane at IBM [216], building on the Deep Thought project developed earlier by Campbell and Hsu at Carnegie Mellon University (CMU). The winning machine was a parallel computer with 30 IBM RS/6000 processors, which ran the "software search," and 480 custom VLSI chess processors, which performed move generation (including move ordering), the "hardware search" for the last few levels of the tree, and the evaluation of leaf nodes. Deep Blue searched 126 million nodes per second on average, with a peak speed of 330 million nodes per second.

The program generated up to 30 billion positions per move, routinely reaching a search depth of 14. It was based on a standard iterative-deepening alpha-beta search with a transposition table, but the key to its success appears to have been its ability to generate extensions beyond the nominal depth limit for sufficiently interesting lines of forced moves. In some cases the search reached a depth of 40 plies (half-moves).
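As a rough illustration of this kind of search (not Deep Blue's actual code), a minimal sketch of iterative-deepening alpha-beta with a transposition table might look like the following; the GameState interface used here (key, legal_moves, apply, evaluate, is_terminal) is a hypothetical placeholder.

```python
# Minimal sketch of iterative-deepening alpha-beta with a transposition table.
# The GameState interface (key, legal_moves, apply, evaluate, is_terminal)
# is hypothetical; real chess engines are far more elaborate.
import math

transposition = {}  # position key -> (searched depth, exact value)

def alpha_beta(state, depth, alpha, beta, maximizing):
    cached = transposition.get(state.key())
    if cached is not None and cached[0] >= depth:
        return cached[1]                     # reuse a deep-enough exact result
    if depth == 0 or state.is_terminal():
        return state.evaluate()              # static evaluation at the leaves

    alpha0, beta0 = alpha, beta
    if maximizing:
        value = -math.inf
        for move in state.legal_moves():     # good move ordering gives early cutoffs
            value = max(value, alpha_beta(state.apply(move), depth - 1,
                                          alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                        # beta cutoff
    else:
        value = math.inf
        for move in state.legal_moves():
            value = min(value, alpha_beta(state.apply(move), depth - 1,
                                          alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break                        # alpha cutoff

    if alpha0 < value < beta0:               # store only exact values; real engines
        transposition[state.key()] = (depth, value)  # also record bound flags
    return value

def iterative_deepening(root, max_depth):
    value = None
    for depth in range(1, max_depth + 1):    # shallow passes seed the table
        value = alpha_beta(root, depth, -math.inf, math.inf, True)
    return value
```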

The evaluation function involved over 8,000 features, many of them describing highly specific patterns of pieces. An opening book of about 4,000 positions was used, along with a database of 700,000 grandmaster games from which the program could extract consensus recommendations. The system also used a large endgame database of solved positions, containing all positions with five pieces and many with six pieces. This database had the effect of substantially extending the effective search depth, allowing Deep Blue in some cases to play perfectly even when it was many moves away from checkmate.
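For illustration only, an evaluation function of the general kind described above can be sketched as a weighted sum of features; the three feature names and the position.feature() accessor below are purely hypothetical, and Deep Blue's real evaluator combined thousands of hand-tuned features.

```python
# Toy sketch of a feature-weighted evaluation function. The feature names
# and the position.feature() accessor are hypothetical examples.

FEATURE_WEIGHTS = {
    "material_balance": 1.0,   # e.g. pawn-equivalents of material for the side to move
    "king_safety": 0.5,
    "passed_pawns": 0.3,
}

def evaluate(position):
    """Score a position as a weighted sum of its feature values."""
    return sum(weight * position.feature(name)
               for name, weight in FEATURE_WEIGHTS.items())
```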

The success of Deep Blue reinforced the already widespread belief that progress in computer game playing comes mainly from ever more powerful hardware, a view that IBM itself encouraged. Deep Blue's creators, on the other hand, argue that the search extensions and the carefully designed evaluation function were also critical.

We also know that recent algorithmic improvements have allowed programs running on standard personal computers to win every World Computer Chess Championship since 1992, often defeating massively parallel opponents capable of searching 1,000 times more nodes per second.

A variety of pruning heuristics are used to reduce the effective branching factor to less than 3 (compared with the actual branching factor of about 35). The most important of these is the null-move heuristic, which generates a good lower bound on the value of a position using a shallow search in which the opponent is allowed to move twice in a row at the outset. This lower bound often allows alpha-beta pruning without the expense of a full-depth search. Also important is futility pruning, which helps decide in advance which moves will cause a beta cutoff in the successor nodes.
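A rough sketch of the null-move idea, layered on a negamax-style alpha-beta search, is shown below; the state interface (in_check, make_null_move, apply, legal_moves, evaluate, is_terminal) is hypothetical, and real engines add many refinements.

```python
# Sketch of the null-move heuristic in negamax form: let the side to move
# pass (the opponent effectively moves twice), probe the result at reduced
# depth, and if the score still reaches beta, prune without a full search.
# The state interface is a hypothetical placeholder.

R = 2  # typical depth reduction for the null-move probe

def search(state, depth, alpha, beta):
    if depth == 0 or state.is_terminal():
        return state.evaluate()

    # Null-move pruning: skip our move and probe with a zero-width window.
    if depth > R + 1 and not state.in_check():
        null_score = -search(state.make_null_move(), depth - 1 - R, -beta, -beta + 1)
        if null_score >= beta:
            return beta                      # fail-high: this subtree is pruned

    for move in state.legal_moves():
        score = -search(state.apply(move), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta                      # ordinary beta cutoff
        alpha = max(alpha, score)
    return alpha
```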

The Deep Blue team declined the chance to play the rematch offered by Kasparov. Instead, the most recent major match, in 2002, pitted the program Fritz against world champion Vladimir Kramnik. The eight-game match ended in a draw. The conditions of the match were much more favorable to the human, and the hardware was an ordinary personal computer rather than a supercomputer. Even so, Kramnik commented: "It is now clear that the top program and the world champion are playing at approximately the same level."

Checkers. Beginning in 1952, Arthur Samuel of IBM, working in his spare time, developed a checkers program that learned its own evaluation function by playing itself thousands of times. Samuel's program began as a novice, but after only a few days of self-play it had improved to the point where it could beat Samuel himself (although he was not a strong player).

In 1962 the program defeated Robert Nealy, a blind checkers champion, through an error on his part. Many at the time concluded that computers already played checkers better than people, but this was not yet true. Still, considering that Samuel's computing equipment (an IBM 704) had 10,000 words of main memory, magnetic tape for long-term storage, and a 0.000001-GHz processor, the win remains a great achievement.

Many tried to surpass this achievement, until finally Jonathan Schaeffer and his colleagues developed Chinook, which runs on ordinary personal computers and uses alpha-beta search. Chinook uses a precomputed database of all 444 billion positions with eight or fewer pieces on the board, which allows it to play the endgame flawlessly. Chinook came in second at the 1990 U.S. Open and earned the right to challenge for the world championship.

The program then ran into a problem in the person of Marion Tinsley. Dr. Tinsley had been world champion for over 40 years, losing only three games in all that time. In his first match against Chinook, Tinsley suffered his fourth and fifth losses, but still won the match 20.5-18.5. The world championship match between Tinsley and Chinook in August 1994 ended prematurely when Tinsley had to withdraw for health reasons.

Chinook then became the official world champion. Schaeffer believes that, with enough computing power, the endgame database could be enlarged to the point where a forward search from the initial position would always reach solved positions, i.e., the game of checkers would be completely solved. (Chinook has occasionally announced a win as early as move 5.) This kind of exhaustive analysis has been done by hand for 3x3 tic-tac-toe and by computer for Qubic (4x4x4 tic-tac-toe), Go-Moku (five in a row), and Nine-Men's Morris (Mill).
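Endgame databases of this kind are typically built by retrograde analysis: positions whose outcome is already known are labeled first, and the labels are then propagated backwards through their predecessors. The sketch below illustrates the idea under the assumption of hypothetical position helpers (is_won, is_lost, predecessors, successors); real tablebases also record distance-to-win counters and use compact encodings.

```python
# Minimal sketch of retrograde analysis for building an endgame database.
# Values are from the point of view of the side to move; the position
# helpers are hypothetical.

def build_tablebase(all_positions):
    value = {}                                    # position -> "win" or "loss"
    frontier = []
    for p in all_positions:                       # start from decided positions
        if p.is_won():
            value[p] = "win"
            frontier.append(p)
        elif p.is_lost():
            value[p] = "loss"
            frontier.append(p)

    while frontier:                               # propagate labels backwards
        new_frontier = []
        for p in frontier:
            for q in p.predecessors():            # positions one half-move earlier
                if q in value:
                    continue
                if value[p] == "loss":
                    value[q] = "win"              # some move reaches a lost position
                    new_frontier.append(q)
                elif all(value.get(s) == "win" for s in q.successors()):
                    value[q] = "loss"             # every move reaches a won position
                    new_frontier.append(q)
        frontier = new_frontier
    return value                                  # positions left unlabeled are draws
```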

Remarkable work by Ken Thompson and Lewis Stiller solved all five-piece chess endgames and some six-piece endgames, and the results have been made available on the Internet. Stiller discovered one case in which a forced mate existed but required 262 moves; this caused some consternation, because the rules of chess require some "progress" to be made within every 50 moves, otherwise the game is declared a draw.

Reversi, also known as Othello, is probably more popular as a computer game than as a board game. It has a smaller search space than chess, usually 5 to 15 legal moves, but evaluation expertise had to be developed from scratch. In 1997, the Logistello program defeated the human world champion, Takeshi Murakami, by a score of 6-0. It is now generally acknowledged that humans cannot compete with computers at Othello.

Backgammon. Section 6.5 explains why the element of uncertainty introduced by dice rolls makes deep search an expensive luxury. Most effort in backgammon has therefore gone into improving the evaluation function. Gerry Tesauro combined Samuel's reinforcement learning method with neural network techniques to develop a remarkably accurate evaluator that is used with a search to depth 2 or 3.

After playing more than a million training games against itself, Tesauro's TD-Gammon program earned a place among the top three players in the world. Some of the program's recommendations for opening moves in backgammon have departed radically from the "received wisdom" handed down from generation to generation over many centuries.
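As a loose sketch of the temporal-difference idea behind this approach (not TD-Gammon itself, which used a multilayer neural network and TD(lambda) rather than a lookup table), self-play training of a value estimate could look like the following; the game/state interface, the outcome() reward, and the choose_move policy (for example, greedy over the estimate after a shallow search) are all hypothetical.

```python
# Simplified sketch of learning an evaluation function by temporal-difference
# self-play. Here the value function is a plain table of "probability that
# White wins"; the game/state interface, roll_dice(), outcome() and
# choose_move() are hypothetical placeholders.

ALPHA = 0.1        # learning rate

def estimate(state, values):
    return values.get(state.key(), 0.5)            # prior: 50% chance White wins

def td_self_play(game, values, choose_move, episodes=1_000_000):
    for _ in range(episodes):
        state = game.initial_state()
        while not state.is_terminal():
            nxt = state.apply(choose_move(state, values), game.roll_dice())
            # Terminal target is the actual outcome (1 = White won, 0 = lost);
            # otherwise use the current estimate of the successor.
            target = nxt.outcome() if nxt.is_terminal() else estimate(nxt, values)
            # TD(0) update: nudge the estimate toward the successor's estimate.
            values[state.key()] = (estimate(state, values)
                                   + ALPHA * (target - estimate(state, values)))
            state = nxt
```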

Go is the most popular board game in Asia, and mastering it requires as much professional dedication as chess. Because the board is 19x19, the branching factor starts at 361, which is far too daunting for conventional search methods. Until 1997 there were no competent programs at all, but now programs often make respectable moves.

Most of the best programs combine pattern recognition techniques (when this pattern of stones appears, consider this move) with limited search (to decide, for instance, whether particular stones can be captured, while staying within a local region). At the time this book was written, the strongest programs appeared to be Chen Zhixing's Goemate and Michael Reiss's Go4++, each rated at about 10 kyu (the level of a weak amateur).
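A toy sketch of this pattern-plus-local-search combination is given below; the Board/Pattern interface and the local_search() helper are hypothetical placeholders rather than the actual technique used by those programs.

```python
# Toy sketch: match small stone patterns around each candidate point and,
# when a pattern fires, run a shallow search confined to that local region
# to score the suggested move. All helpers here are hypothetical.

def suggest_moves(board, patterns, local_depth=4):
    scored = []
    for point in board.empty_points():
        for pattern in patterns:
            if pattern.matches(board, point):            # "this shape appears here..."
                score = local_search(board, point,       # "...so consider this move"
                                     depth=local_depth,
                                     region=pattern.region_around(point))
                scored.append((score, point))
                break                                    # one suggestion per point
    scored.sort(key=lambda sp: sp[0], reverse=True)
    return [point for _, point in scored]
```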

Go is an area likely to benefit from intensive research using more sophisticated reasoning methods. Success may come from finding ways to integrate several lines of local reasoning about each of the many loosely connected "subgames" into which the whole game can be decomposed. Such techniques would be of enormous value for intelligent systems in general.

Bridge is a game of incomplete information: each player's cards are hidden from the other players. Bridge is also a multiplayer game, with four players rather than two, although the players are paired into two teams. Optimal play in bridge can include elements of information gathering, communication, bluffing, and careful weighing of probabilities. Many of these techniques are used in the Bridge Baron™ program, which won the 1997 computer bridge world championship. Although it does not play optimally, Bridge Baron is one of the few successful game-playing systems that use complex, hierarchical plans built around high-level ideas, such as finessing and squeezing, that are familiar to bridge players.

The 2000 computer bridge world championship was won decisively by the GIB program. GIB uses the "averaging over clairvoyance" method with two important modifications. First, rather than examining how well each choice works for every possible arrangement of the hidden cards (of which there can be up to 10 million), the program examines a random sample of 100 arrangements. Second, GIB uses explanation-based generalization to compute and cache general rules for optimal play in various standard classes of situations, which lets it decide each hand exactly. GIB's tactical accuracy makes up for its inability to reason about information. It finished 12th in a field of 35 in the par contest (involving play of the hand only) at the 1998 human world championship, far exceeding the expectations of many human experts.
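GIB's first modification, sampling deals rather than enumerating them, can be sketched as follows; the sample_consistent_deal() and solve_with_open_cards() helpers are hypothetical stand-ins for GIB's actual machinery.

```python
# Sketch of deal sampling for a hidden-card game: draw a limited number of
# deals consistent with what is known, evaluate each candidate action against
# each sampled deal as if all cards were visible, and pick the action with
# the best total score. Both helper functions are hypothetical.

def choose_action(my_hand, known_info, candidate_actions, n_samples=100):
    totals = {action: 0.0 for action in candidate_actions}
    for _ in range(n_samples):
        deal = sample_consistent_deal(my_hand, known_info)      # one imagined layout
        for action in candidate_actions:
            totals[action] += solve_with_open_cards(deal, action)  # perfect-info score
    return max(candidate_actions, key=lambda a: totals[a])
```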
