
Motion planning

Lecture



None of the motion-planning algorithms discussed above addressed the most important characteristic of robotic tasks: their uncertainty. In robotics, uncertainty arises from partial observability of the environment and from stochastic (or unmodeled) effects of the robot's actions. Errors can also arise from the use of approximation algorithms, such as particle filtering, because of which the robot will not have an exact belief state even when the stochastic nature of the environment is modeled perfectly.

In most modern robots, decision making relies on deterministic algorithms, such as the various path-planning algorithms considered so far. For this purpose, it is customary to extract the most likely state from the state distribution produced by the localization algorithm. The advantage of this approach is that it reduces the amount of computation. Planning paths through configuration space is hard enough by itself; if we had to work with the full probability distribution over states, the task would become even harder. Ignoring the uncertainty in this way is justified only when the uncertainty is small.
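The extraction step described above can be sketched in a few lines. This is a minimal illustration with hypothetical grid-cell states and probabilities, not a real localizer: the belief is a discrete distribution over poses, and the deterministic planner is handed only its argmax.

```python
# Hypothetical localization belief: P(pose) over a few grid cells.
belief = {
    (0, 0): 0.10,
    (0, 1): 0.25,
    (1, 1): 0.55,   # most likely pose
    (2, 2): 0.10,
}

# Maximum-likelihood state: the single pose passed to a deterministic planner,
# discarding the rest of the distribution (and with it, the uncertainty).
ml_state = max(belief, key=belief.get)
print(ml_state)   # (1, 1)
```

The remaining 45% of probability mass is simply ignored, which is exactly why the approach is safe only when the distribution is sharply peaked.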

Unfortunately, uncertainty cannot always be ignored. In some problems, the uncertainty under which the robot acts simply becomes too large. For example, how could a deterministic path planner control a mobile robot that has no information about where it is? In general, if the robot's true state is not the one picked by the maximum-likelihood rule, the resulting control actions can be far from optimal. Depending on the magnitude of the error, they can lead to all sorts of undesirable effects, such as collisions with obstacles.

Robotics has applied a number of techniques for working under uncertainty. Some of them are based on algorithms for decision making under uncertainty. If the robot faces uncertainty only in its transitions from one state to another, while the state itself is fully observable, the problem is best modeled as a Markov decision process, or MDP (Markov Decision Process). The solution of an MDP is an optimal policy, which tells the robot what to do in every possible state. The robot can therefore recover from motion errors of all kinds, whereas the single path produced by a deterministic planner can be far less robust. In robotics, the term navigation function is commonly used instead of policy; the value function can be converted into such a navigation function simply by following its gradient.
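The idea of a navigation function can be sketched concretely. The following is a toy example, not a production planner: value iteration on a small deterministic 4x4 grid (hypothetical obstacles and unit step costs) computes the cost-to-goal for every cell, and a policy is then recovered from any start by greedily stepping to the lowest-valued neighbor, i.e. by tracking the gradient of the value function.

```python
# Toy 4x4 grid world (hypothetical layout): compute a navigation function
# by value iteration, then follow its gradient from any cell to the goal.
N = 4
GOAL = (3, 3)
OBSTACLES = {(1, 1), (2, 1)}
STEP_COST = 1.0

def neighbors(cell):
    """Free cells reachable in one deterministic move."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < N and 0 <= nc < N and (nr, nc) not in OBSTACLES:
            yield (nr, nc)

# Value iteration: V[s] = min over moves of (step cost + V[next state]).
V = {(r, c): float("inf") for r in range(N) for c in range(N)
     if (r, c) not in OBSTACLES}
V[GOAL] = 0.0
for _ in range(2 * N * N):  # enough sweeps to converge on this small grid
    for s in V:
        if s != GOAL:
            V[s] = min((STEP_COST + V[n] for n in neighbors(s)),
                       default=float("inf"))

def navigate(start):
    """Gradient tracking: from every cell, step to the cheapest neighbor.
    Unlike a single fixed path, this works from ANY cell the robot ends up
    in, so motion errors are corrected automatically."""
    path, s = [start], start
    while s != GOAL:
        s = min(neighbors(s), key=V.get)
        path.append(s)
    return path

print(navigate((0, 0)))
```

The key property, and the reason the text calls this more reliable than a deterministic planner's single path, is that the navigation function is defined everywhere: if the robot is knocked off course, restarting the greedy descent from the new cell still leads to the goal.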

The resulting robot-control problem is a partially observable MDP, or POMDP (partially observable MDP). In such situations, the robot usually maintains an internal belief state, like the one described in Section 25.3. The solution of a POMDP is a policy defined over the robot's belief states; in other words, the input to the policy is an entire probability distribution. This lets the robot base its decisions not only on what it knows, but also on what it does not know. For example, a robot that is uncertain about an important state variable can rationally trigger an action to collect information. Such behavior is impossible in the MDP framework, since MDPs assume full observability. Unfortunately, exact POMDP solution methods are inapplicable in robotics: no such methods are known for continuous spaces, and discretization usually produces POMDPs that are too large to solve with known algorithms. All that can be done at present is to keep the uncertainty about the pose to a minimum; for example, the coastal-navigation heuristic requires the robot to stay close to known landmarks in order to reduce its pose uncertainty. This, in turn, gradually reduces the uncertainty in mapping new landmarks found nearby, which then allows the robot to explore further territory.
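The two POMDP ingredients mentioned above, a belief state and a policy defined over it, can be sketched in miniature. This is a hypothetical two-room example, not a POMDP solver: the belief is updated by Bayes' rule after a measurement, and a hand-written policy over beliefs triggers an information-gathering ("sense") action whenever the entropy of the belief is too high.

```python
import math

def measurement_update(belief, likelihood):
    """Bayes filter measurement step: b'(s) is proportional to P(z|s) * b(s)."""
    posterior = {s: likelihood[s] * p for s, p in belief.items()}
    z = sum(posterior.values())          # normalizer
    return {s: p / z for s, p in posterior.items()}

def entropy(belief):
    """Shannon entropy in bits: a measure of how uncertain the belief is."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def policy(belief, threshold=0.6):
    """A policy over beliefs, not states: gather information when uncertain.
    The threshold is an illustrative choice, not a derived quantity."""
    return "sense" if entropy(belief) > threshold else "move_to_goal"

belief = {"room_A": 0.5, "room_B": 0.5}   # maximally uncertain: 1 bit
print(policy(belief))                      # "sense"

# A sensor reading that strongly favors room_A sharpens the belief.
belief = measurement_update(belief, {"room_A": 0.9, "room_B": 0.1})
print(policy(belief))                      # "move_to_goal"
```

The point of the sketch is the signature of `policy`: it takes the whole distribution, so "how uncertain am I?" is part of its input, which is exactly what an MDP policy over fully observable states cannot express.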

created: 2014-09-22
updated: 2021-03-13





