
Simulation of Human Reasoning 9. Increasing the complexity of the bot's behavior model without increasing the size of the program code. INSTRUCTIONS.

Lecture



Before talking about instructions for simulated reasoning, we need to explain what an instruction is in general...
Our whole life runs on patterns. All of our behavior (however rational it may seem) is in fact just the execution of "instructions" laid down in us by nature (instincts, reflexes) and by our environment (parents, teachers, friends, and our own life experience).

From early childhood to the very end we are learning something. At an early stage (childhood and adolescence) the learning process is mostly "external" (kindergarten, school, university); later it becomes "internal" (trial and error). "Patterned behavior" is not a "defect" of rational behavior but simply the NORMAL state of the system. The system of motivations (carrot and stick) and the meaning of life (the goal) recede into the background before the need to perform "intermediate tasks" ("You are not going to the cinema until you wash the dishes!"). The society we live in imposes such restrictions on us. And if you look closely... there is no purely "rational" behavior at all, just as there is no free will. For the most part we simply execute the (instructions) that are built into us (genes) and loudly call it the meaning of life.

What will NOT be in this article:
1. Neural networks (they are not needed here yet).
2. Self-learning (we will get to it later).
3. Rational behavior (only bare reflexes and AI).

So, in order to explain the principles, we have to think abstractly. Forget about "reasoning" for now and talk about pure AI. Let's try to simulate the behavior of a bot in an abstract environment.
To begin with, let's define the problem (the technical specification):

Write a small and simple program that simulates the behavior (reactions) of a bot pursuing a task (a purpose, a goal), and leave open the possibility of gradually complicating that task by introducing additional conditions of the "environment", while the original size of the program remains unchanged.

Is such a condition feasible at all if it is not known in advance what exactly may be "added" to the "bot-environment system" (the world), and can we even speak of adequate reactions from the bot under such a problem statement? Oddly enough, it is possible, with the help of instructions.

An instruction (in this context) is a "model of behavior": the bot's reaction to a change in the "environment".

If someone likes the idea and wants to "take it to its logical conclusion" (up to releasing a game in which the player plays the role of a god, sending rain, hail, and earthquakes at a bot that stubbornly strives toward its goal), I have nothing against it.

So:

1. Goal / purpose.
The bot needs to go from point A (its spawn point at the start of the game) to point B (the exit from the room / the passage to the next level).

2. Calculating the trajectory of movement. Priority of goals.
The problem is not as simple as it looks. Since the shortest distance between two points is a straight line, the "path" is easily computed from the difference between the coordinates of the "exit" and the "current position of the bot"; this difference gives the bot's "motion vector" and the specific direction of its next step.
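As a minimal sketch (the 2D coordinate representation and all names here are my own illustrative assumptions, not fixed by the article), the "motion vector" toward the exit might be computed like this:

```python
import math

def step_toward(bot_xy, exit_xy):
    """Return a unit direction vector from the bot toward the exit."""
    dx = exit_xy[0] - bot_xy[0]
    dy = exit_xy[1] - bot_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)          # already standing at the goal
    return (dx / dist, dy / dist)  # direction of the next step

print(step_toward((0, 0), (3, 4)))  # (0.6, 0.8)
```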
However (we will keep complicating the task), if we add partitions to the room, randomly placed objects, or a maze of walls, we will need a "trajectory calculation algorithm" with "obstacle avoidance". Next, let's add (for example) berries / strawberries / cherries to our world, and an analogue of the "feeling of hunger" to our bot. Now the "trajectory of movement" to the exit (goal achievement) is no longer limited to "bypassing obstacles": we must also monitor the "state" of our "stomach", and as soon as that state becomes critical, we must change the "priority of goals" and compute the "motion vector" toward the nearest "strawberry" so as not to "die of hunger on the way". Only when our "state" becomes satisfactory can we return to "fulfilling the purpose".
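A hedged sketch of this priority switch (the hunger threshold, the Manhattan distance, and the function names are assumptions made for illustration):

```python
# Goal-priority switching: food overrides the exit when hunger is critical.
HUNGER_CRITICAL = 0.8  # assumed threshold on a 0..1 scale

def choose_target(bot_pos, hunger, exit_pos, food_positions):
    """Pick the current target: nearest food if hunger is critical, else the exit."""
    if hunger >= HUNGER_CRITICAL and food_positions:
        # Override the main goal: head for the nearest "strawberry".
        return min(food_positions,
                   key=lambda f: abs(f[0] - bot_pos[0]) + abs(f[1] - bot_pos[1]))
    return exit_pos  # otherwise keep pursuing the purpose of life

print(choose_target((0, 0), 0.9, (10, 10), [(2, 1), (5, 5)]))  # (2, 1)
```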
Let's go further and add, for example, "smell" (say, in a "corner of the map" there is a toilet, and around it, at some distance, a zone where it "stinks" strongly). Now the task is complicated by choice.
For example, "avoiding the stink" is still far from the most important factor: "dying of malnutrition" is more dangerous than "a quick snack next to the toilet". So, while computing the "trajectory of movement", you have to make choices: keep the stomach full when necessary, "pass the toilet on the leeward side", and not forget the "meaning of life" for the sake of which we live.
And then God (that is, the user) decides to add a "temperature factor": a frost of -30° across the map and several fires randomly scattered around it.
The task has become many times more complicated; now our trajectory cannot be "straight".
Rather, it resembles a zigzag dash "from fire to fire": you cannot step into a "fire" (you burn), you cannot stray far from one (you freeze), hunger will not wait, and that wretched toilet stinks across a quarter of the map...
The farther into the forest... the longer and more complicated our algorithm for "calculating the trajectory of movement" becomes.
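One way to fold all of these competing factors into a single decision (my own hedged sketch, not a method prescribed by the article) is to score each candidate step with a weighted cost function; every weight, penalty, and position below is an illustrative assumption:

```python
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def step_cost(cell, goal, fires, toilet, hunger, food):
    """Score one candidate cell; the bot moves to the cheapest neighbor."""
    cost = distance(cell, goal)                            # progress toward the exit
    cost += 4.0 * max(0.0, 3.0 - distance(cell, toilet))   # stink penalty near the toilet
    cost += 2.0 * min(distance(cell, f) for f in fires)    # frost: far from every fire is cold
    if any(distance(cell, f) < 0.5 for f in fires):
        cost += 100.0                                      # stepping into a fire burns
    if hunger > 0.8:
        cost += 3.0 * min(distance(cell, b) for b in food) # critical hunger pulls toward berries
    return cost

# Pick the cheapest of four neighboring cells (all positions are illustrative).
neighbors = [(1, 0), (0, 1), (-1, 0), (0, -1)]
best = min(neighbors, key=lambda c: step_cost(
    c, goal=(10, 10), fires=[(2, 2)], toilet=(-5, -5), hunger=0.9, food=[(1, 3)]))
print(best)  # the neighbor with the lowest combined cost
```

The zigzag "from fire to fire" emerges by itself: the frost term punishes straying from fires while the fire term forbids stepping into them.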

3. Model of behavior. Instructions.
And so, we are required to implement a "model of behavior", but it is not stipulated that we must develop this "line of behavior" ourselves by trial and error. On the contrary, as "new conditions" appear, the "algorithm of behavior" will be supplied not as "program code" but in a "conditional language" (something like a universal formula). In this way our program boils down (ultimately) not to developing numerous algorithms (that is the problem of "God", who adds rain, hail, and other elements, together with the "rules for survival in extreme conditions"). Our task is only to write an interpreter of "instructions" and to execute them strictly. And that is "a completely different task", and not necessarily a simpler one ;)
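To make this concrete, here is a minimal sketch of how "new conditions" might arrive as data rather than as code; the rule format is entirely my assumption:

```python
# A minimal sketch: instructions arrive as data, not as program code.
instructions = [
    {"when": "hunger > 0.8", "do": "seek nearest food"},
    {"when": "freezing",     "do": "seek nearest fire"},
    {"when": "always",       "do": "move toward exit"},  # the purpose of life
]

# "God" (the user) can append new rules at run time; the interpreter
# that executes them (sketched in the next section) never grows.
instructions.append({"when": "stink > 0.5", "do": "step leeward"})
print(len(instructions))  # 4 rules, same interpreter
```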

4. Writing instructions. Templates.
Before writing a "program for executing instructions", we need to come up with a few of the instructions themselves (so that I am not accused of trolling, I will probably have to offer my own version ;) but that is a task for a separate article, with diagrams and drawings). Having come up with several "instructions" (that is, specific bot behavior algorithms), we can identify "typical patterns" that we can program. And having "patterns of behavior" (turn around, take a step, etc.), we can express the instructions in abstract form. The point is that the "interpreter of instructions" takes an "instruction", expands it into a list of "patterns of behavior", and builds the list of subroutines that must be executed, as sketched below.
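Here is a hedged sketch of such an interpreter; the pattern names, the instruction format, and the grid representation are all assumptions made for illustration:

```python
# An instruction interpreter: it expands an instruction (a list of
# pattern names) into concrete subroutines and executes them in order.

def turn_left(bot):
    bot["heading"] = (bot["heading"] + 90) % 360

def step(bot):
    bot["pos"] = advance(bot["pos"], bot["heading"])

def advance(pos, heading):
    moves = {0: (1, 0), 90: (0, 1), 180: (-1, 0), 270: (0, -1)}
    dx, dy = moves[heading]
    return (pos[0] + dx, pos[1] + dy)

PATTERNS = {"turn_left": turn_left, "step": step}  # the template collection

def interpret(instruction, bot):
    """Expand an instruction into pattern subroutines and execute them."""
    for name in instruction:
        PATTERNS[name](bot)

bot = {"pos": (0, 0), "heading": 0}
interpret(["step", "turn_left", "step"], bot)  # walk, turn, walk
print(bot)  # {'pos': (1, 1), 'heading': 90}
```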

5. Adding new templates to the template collection without programming.
This task is no more difficult than the previous one. Each complex pattern consists of several simple ones (turn / take a step / compute the difference of coordinates), so if the necessary template is not in the collection, we need to figure out how to assemble it from the ones that already exist.
It would seem a pointless exercise: just write a longer instruction and be done with it.
But there is a catch. Limiting the "number of templates" loaded into memory limits the "complexity of the bot's behavior model", while adding a "new template", even one combined from several "old" ones, yields a pseudo-complication of that behavior (everything depends on the available templates).
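Continuing the interpreter sketch above (and reusing its PATTERNS and interpret definitions), a composite template might be assembled and registered like this; the "turn_around" template is my illustrative assumption:

```python
# Composing a new template out of existing ones, with no new behavior code.
def make_composite(names):
    """Build a new template as a fixed sequence of existing templates."""
    def composite(bot):
        for name in names:
            PATTERNS[name](bot)
    return composite

# "turn_around" did not exist in the collection; assemble it from two
# left turns and register it alongside the original templates.
PATTERNS["turn_around"] = make_composite(["turn_left", "turn_left"])

bot = {"pos": (0, 0), "heading": 0}
interpret(["turn_around", "step"], bot)
print(bot)  # {'pos': (-1, 0), 'heading': 180}
```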

6. Independent search for "instructions" by trial and error.
I promised that this would not be in the article ;)) and so it will not be.
I propose you think at your leisure about how to implement it. After all, having "patterns of behavior" and several "sample instructions", you could try to find the one that fits... or at least the one that does not get the bot killed before the situation on the map changes "for the better".

7. Mood. Pseudo-emotions as the deciding factor in resolving inevitable collisions.
So, when choosing a "path", a problem arises: an equiprobable choice between two or more directions of equal significance. What do we do? Emotions, habits, intuition, and the like are alien to a bot.
Why, actually? Is it so hard, for example, to make it "left-handed"? If food lies at an equal distance on either side, it will choose the direction to the left, simply because it likes it (walking left ;).
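A minimal sketch of such a pseudo-emotional tie-breaker (the Manhattan scoring and the "lefty" trait are my illustrative assumptions):

```python
# When several targets score equally, a fixed "left-handed" preference decides.
def pick_target(bot_pos, candidates):
    """Choose the closest target; on a tie, prefer the leftmost one."""
    def dist(c):
        return abs(c[0] - bot_pos[0]) + abs(c[1] - bot_pos[1])
    best = min(dist(c) for c in candidates)
    tied = [c for c in candidates if dist(c) == best]
    return min(tied, key=lambda c: c[0])  # "lefty": smallest x wins

# Two berries at equal distance: the bot walks left, because it likes it.
print(pick_target((0, 0), [(2, 0), (-2, 0)]))  # (-2, 0)
```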

created: 2014-09-23
updated: 2021-12-06



