
Reflex agents

Lecture



The simplest kind of agent is the simple reflex agent. Such agents select actions on the basis of the current percept, ignoring the rest of the percept history. The program for this agent is shown in the listing.


function Reflex-Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left


Note that this vacuum-cleaner agent program is indeed very small compared with the corresponding table. The most obvious reduction comes from ignoring the percept history, which cuts the number of possibilities from 4^T down to just 4. A further small reduction comes from the fact that when the current square is dirty, the action does not depend on the vacuum cleaner's location.
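The pseudocode above can be sketched directly in Python. The location names "A" and "B" and the string-valued percepts are assumptions of this sketch:

```python
# A minimal sketch of the Reflex-Vacuum-Agent pseudocode,
# assuming a two-square world with locations "A" and "B".
def reflex_vacuum_agent(percept):
    """Map a (location, status) percept directly to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"      # dirt overrides everything else
    elif location == "A":
        return "Right"     # move toward the other square
    else:                  # location == "B"
        return "Left"
```

Because the agent looks only at the current percept, the whole table of 4^T percept sequences collapses into these four cases.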

Imagine yourself as the driver of an automated taxi. If the car in front brakes and its brake lights come on, you should notice this and begin braking too. In other words, some processing is performed on the visual input to establish the condition we call "car-in-front-is-braking". This condition then triggers some established connection in the agent's program to the action initiate-braking. Such a connection is called a condition-action rule and is written as follows:

if car-in-front-is-braking then initiate-braking


Humans also have a large number of such connections, some of which are learned responses (as in driving a car), while others are innate reflexes (such as blinking when an object approaches the eye). In various chapters of this book, several different ways will be shown in which such connections can be learned and implemented.

The program in the listing is specific to one particular vacuum-cleaner environment. A more general and flexible approach is first to build a general-purpose interpreter for condition-action rules and then to create rule sets for specific task environments. The figure gives the structure of this general program in schematic form, showing how the condition-action rules allow the agent to make the connection from percept to action. (Do not worry if this seems trivial; more interesting possibilities will appear soon.)

In such diagrams, rectangles denote the current internal state of the agent's decision process, and ovals represent the background information used in the process. The agent program, which is also very simple, is shown in the listing. The Interpret-Input function generates an abstracted description of the current state from the percept, and the Rule-Match function returns the first rule in the set of rules that matches the given state description. Note that the description in terms of "rules" and "matching" is purely conceptual; actual implementations can be as simple as a collection of logic gates implementing a Boolean circuit.


function Simple-Reflex-Agent(percept) returns an action
  static: rules, a set of condition-action rules

  state <- Interpret-Input(percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action[rule]
  return action
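As one possible reading of this pseudocode, the interpreter can be sketched in Python. Representing rules as (condition, action) pairs and passing the percept through unchanged as the "state" are assumptions of this sketch, not a fixed design:

```python
# Sketch of the general Simple-Reflex-Agent program: a rule interpreter
# plus a problem-specific rule set. Rules are assumed to be
# (condition_predicate, action) pairs; Rule-Match returns the first match.
def make_simple_reflex_agent(rules, interpret_input):
    def agent(percept):
        state = interpret_input(percept)    # Interpret-Input
        for condition, action in rules:     # Rule-Match: first rule that fires
            if condition(state):
                return action               # Rule-Action[rule]
        return None                         # no rule matched
    return agent

# The braking rule from the text, over a hypothetical state description
# that the visual-processing stage is assumed to produce.
braking_rules = [
    (lambda s: s.get("car_in_front_is_braking"), "initiate-braking"),
]
taxi_agent = make_simple_reflex_agent(braking_rules, lambda p: dict(p))
```

Defining a new agent then amounts to supplying a new rule set, while the interpreter itself stays unchanged.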


Simple reflex agents have the admirable property of being extremely simple, but they turn out to be of very limited intelligence. The agent whose program is shown in the listing works only if the correct decision can be made on the basis of the current percept alone, in other words, only if the environment is fully observable. Even a small degree of unobservability can cause serious trouble.

For example, the braking rule given above assumes that the condition car-in-front-is-braking can be determined from the current percept (a single frame of video) if the car in front has a centrally mounted brake light. Unfortunately, some older models have different configurations of taillights, brake lights, and turn signals, so it is not always possible to tell from a single image whether such a car is braking. A simple reflex agent driving behind it would either brake continuously and unnecessarily or, even worse, never brake at all.


A similar problem arises in the vacuum-cleaner world. Suppose that a simple reflex vacuum agent's location sensor has failed and only the dirt sensor is working. Such an agent has just two possible percepts: [Dirty] and [Clean]. It can Suck in response to [Dirty], but what should it do in response to [Clean]? Moving Left fails (forever) if it happens to start in square A, and moving Right fails (forever) if it happens to start in square B. For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable.

Escape from infinite loops is possible if the agent can randomize its actions, that is, introduce an element of chance into them. For example, if the vacuum-cleaner agent perceives [Clean], it might flip a coin to choose between Left and Right. It is easy to show that the agent will reach the other square in an average of two steps. Then, if that square is dirty, the agent will clean it and the cleaning task will be complete.
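The coin flip, and the claim that the other square is reached in two steps on average, can be checked with a short simulation. The square names, action strings, and wall behavior are assumptions of this sketch:

```python
import random

# Randomized reflex vacuum agent with a failed location sensor:
# the percept is only the dirt status, so on "Clean" it flips a coin.
def randomized_vacuum_agent(status, rng):
    if status == "Dirty":
        return "Suck"
    return rng.choice(["Left", "Right"])

def steps_to_other_square(start, rng):
    """Count moves until the agent leaves its starting square.
    Moving into the wall (Left from A, Right from B) makes no progress."""
    steps = 0
    while True:
        action = randomized_vacuum_agent("Clean", rng)
        steps += 1
        if (start == "A" and action == "Right") or \
           (start == "B" and action == "Left"):
            return steps

rng = random.Random(0)
trials = [steps_to_other_square("A", rng) for _ in range(10_000)]
print(sum(trials) / len(trials))   # close to the expected value of 2
```

Each flip succeeds with probability 1/2, so the number of steps is geometrically distributed with mean 2, which the simulated average confirms.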

Hence, a randomized simple reflex agent can outperform a deterministic simple reflex agent.

created: 2014-09-22
updated: 2024-11-14



