
Environment properties

Lecture



There is no doubt that the range of task environments that can arise in artificial intelligence is vast. Nevertheless, it is possible to identify a fairly small number of dimensions along which task environments can be categorized. These dimensions largely determine the most appropriate agent design and the applicability of each of the principal families of techniques for implementing the agent. The definitions given here are informal.
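
Each dimension discussed below is, in effect, one boolean (or near-boolean) attribute of a task environment. A minimal sketch of that idea, assuming Python; the class and instance names are illustrative, not standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskEnvironment:
    """One point in the classification space sketched in this lecture."""
    fully_observable: bool
    deterministic: bool
    episodic: bool
    static: bool        # False covers both dynamic and semi-dynamic cases
    discrete: bool
    single_agent: bool

# Two of the lecture's running examples, encoded along the dimensions below:
taxi_driving = TaskEnvironment(
    fully_observable=False, deterministic=False, episodic=False,
    static=False, discrete=False, single_agent=False)
chess_with_clock = TaskEnvironment(
    fully_observable=True, deterministic=True, episodic=False,
    static=False,  # semi-dynamic: the clock changes the score over time
    discrete=True, single_agent=False)
```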

Fully observable or partially observable

If the agent’s sensors give it access to the complete state of the environment at each point in time, then the task environment is called fully observable. In essence, a task environment is fully observable if the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the performance measure.

Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world. An environment may be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data; for example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi driver has no information about what maneuvers other drivers intend to perform.
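
A minimal sketch of why partial observability forces internal state, assuming a two-square vacuum world (squares A=0 and B=1); the class and its attributes are illustrative inventions, not from the lecture:

```python
# Because the dirt sensor is local, the agent must remember which squares
# it believes are still unverified -- that memory is its internal state.
class VacuumAgent:
    def __init__(self):
        self.unverified = {0, 1}   # internal state: squares not yet checked
        self.position = 0

    def act(self, local_dirt: bool) -> str:
        if local_dirt:
            return "Suck"                        # clean the current square
        self.unverified.discard(self.position)   # current square verified clean
        if not self.unverified:
            return "NoOp"                        # belief: whole world is clean
        self.position = 1 - self.position        # go check the other square
        return "Right" if self.position == 1 else "Left"
```

In a fully observable version of the same world, the `unverified` set would be unnecessary: the percept itself would report the state of every square.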

Deterministic or stochastic

If the next state of the environment is completely determined by the current state and the action executed by the agent, then the environment is called deterministic; otherwise, it is stochastic. In principle, an agent in a fully observable, deterministic environment does not have to act under conditions of uncertainty.

If the environment is only partially observable, however, it may appear to be stochastic. This is especially true if the environment is complex and it is hard for the agent to keep track of all its unobserved aspects. For this reason, it is often more convenient to classify an environment as deterministic or stochastic from the point of view of the agent. With this interpretation, the taxi-driving environment is clearly stochastic, since no one can predict the behavior of all the other vehicles exactly; moreover, in any car a tire may blow out or the engine may stall without warning.
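
The contrast can be made concrete with transition functions over a toy integer state space; a sketch under those assumptions, not a general formalism:

```python
import random

# Deterministic: the successor state is a function of (state, action).
def deterministic_step(state: int, action: int) -> int:
    return state + action

# Stochastic: the same (state, action) pair induces a distribution over
# successors -- here a move "slips" and leaves the state unchanged 20% of
# the time, the kind of uncertainty a taxi faces from other vehicles.
def stochastic_step(state: int, action: int, slip: float = 0.2) -> int:
    return state if random.random() < slip else state + action
```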

Episodic or sequential

In an episodic task environment, the agent’s experience is divided into atomic episodes. Each episode consists of the agent perceiving the environment and then performing a single action. Crucially, the next episode does not depend on the actions taken in previous episodes. In episodic environments, the choice of action in each episode depends only on the episode itself.

Many classification tasks are episodic. For example, an agent that has to spot defective parts on an assembly line bases each decision on the current part alone, regardless of previous decisions; moreover, the current decision does not affect whether the next part is defective. In sequential environments, on the other hand, the current decision can affect all future decisions.

Tasks such as playing chess and driving a taxi are sequential: in both cases, short-term actions can have long-term consequences. Episodic environments are much simpler than sequential ones, because the agent does not need to think ahead.
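
The structural difference shows up in what each decision is allowed to depend on. A sketch with hypothetical stub policies (the threshold and the action names are invented for illustration):

```python
def classify_part(measurement: float) -> str:
    """Hypothetical defect detector: thresholds a single reading."""
    return "defective" if measurement > 0.5 else "ok"

def episodic_agent(measurement: float) -> str:
    # Episodic: the decision depends only on the current percept.
    return classify_part(measurement)

def sequential_agent(measurement: float, history: list) -> str:
    # Sequential: the decision may depend on everything seen so far,
    # because current actions can have long-term consequences.
    history.append(measurement)
    recent = history[-3:]
    return "brake" if sum(recent) / len(recent) > 0.5 else "cruise"
```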

Static or dynamic

If the environment can change while the agent is deliberating over its next action, then the environment is called dynamic for that agent; otherwise, it is static. Static environments are easier to deal with, because the agent need not keep observing the world while deciding on an action, nor need it worry about spending too much time thinking.

Dynamic environments, by contrast, are continuously asking the agent what it wants to do; if it has not decided yet, that counts as a decision to do nothing. If the environment itself does not change with the passage of time but the agent’s performance score does, then the environment is called semi-dynamic. The taxi-driving environment is clearly dynamic, since the other cars and the taxi itself keep moving while the driving algorithm deliberates about what to do next. Chess played with a clock is semi-dynamic, and crossword puzzle solving is static.
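
A minimal sketch of the semi-dynamic case, assuming a chess-clock-like rule in which the board stays fixed but deliberation itself consumes the score (the function and its arguments are illustrative):

```python
import time

def play_with_clock(choose_move, position, clock_remaining: float):
    start = time.monotonic()
    move = choose_move(position)      # the board is static while we think...
    clock_remaining -= time.monotonic() - start   # ...but the clock is not
    if clock_remaining <= 0:
        raise TimeoutError("lost on time, though the position never changed")
    return move, clock_remaining
```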

Discrete or continuous

The distinction between discrete and continuous environments can apply to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. For example, a discrete-state environment such as a chess game has a finite number of distinct states. Chess also has a discrete set of percepts and actions. Taxi driving is a continuous-state and continuous-time problem: the speed and location of the taxi and of the other vehicles sweep through a range of continuous values, and these changes occur smoothly over time.

Taxi-driving actions are also continuous (continuous adjustment of the steering angle, and so on). Strictly speaking, input from digital cameras is discrete, but it is usually treated as representing continuously changing speeds and locations.
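
A sketch of the two kinds of action space under those examples (the move list and the steering range are illustrative assumptions):

```python
# Discrete: a finite, enumerable set of actions, as in chess.
CHESS_ACTIONS = ["e2e4", "g1f3", "d2d4"]   # a few moves from a finite set

# Continuous: every real value in a range is a distinct action, as in steering.
def steering_action(angle_degrees: float) -> float:
    return max(-30.0, min(30.0, angle_degrees))   # clamp to an assumed range
```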

Single-agent or multi-agent

At first glance, the distinction between single-agent and multi-agent environments may seem simple enough. For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess operates in a two-agent environment. When analyzing this classification, however, some subtleties arise.

First of all, we described above on what basis an entity may be viewed as an agent, but we did not say which entities must be treated as agents. Does an agent A (the taxi driver, say) have to treat an object B (another vehicle) as an agent, or can it be treated merely as a stochastically behaving object, comparable to waves breaking on a beach or leaves blowing in the wind? The key distinction is whether B’s behavior is best described as maximizing a performance measure of its own whose value depends on agent A’s behavior.

In chess, for example, the opponent entity is trying to maximize its performance measure, which, by the rules of chess, minimizes agent A’s performance measure. Thus chess is a competitive multi-agent environment. In the taxi-driving environment, on the other hand, avoiding collisions maximizes the performance measure of all agents, so it is an example of a partially cooperative multi-agent environment. It is also partially competitive because, for example, only one car can occupy a parking space.

The agent-design problems that arise in multi-agent environments are often quite different from those in single-agent environments; for example, communication often emerges as a rational behavior in multi-agent environments, and in some partially observable competitive environments stochastic behavior is rational because it avoids the pitfalls of predictability.
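
A classic illustration of that last point, using rock-paper-scissors (an example of ours, not from the lecture):

```python
import random

# Any deterministic policy can eventually be learned and exploited...
def predictable_policy(round_number: int) -> str:
    return ["rock", "paper", "scissors"][round_number % 3]

# ...while the uniform mixed strategy gives the opponent nothing to predict.
def mixed_strategy(round_number: int) -> str:
    return random.choice(["rock", "paper", "scissors"])
```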
