Lecture
Nearest Neighbor Method
Training in this case amounts to memorizing all the objects of the training sample. When an unrecognized object is presented to the system, the system assigns it to the image whose "representative" turned out to be closest to it (Fig. 7).

Fig. 6. Example of linearly inseparable sets
Fig. 7. The "minimum distance" decision rule

This is the nearest neighbor rule. The k-nearest-neighbors rule works as follows: a hypersphere of a certain volume is built, centered on the unrecognized object, and recognition is carried out by a majority vote of the "representatives" of the images that fall inside the hypersphere. The subtlety here lies in choosing the volume of the hypersphere correctly (reasonably): it should be large enough that a relatively large number of "representatives" of different images fall into it, yet small enough not to smooth out the nuances of the boundary separating the images.

The nearest neighbor method has the disadvantage that it requires storing the entire training sample rather than a generalized description of it. On the other hand, it gives good results on control tests, especially when a large number of objects is presented for training. To reduce the number of memorized objects, combined decision rules can be applied, for example, a combination of the fragmented standards method and the nearest neighbors method. In this case, only those objects that fall into the zone where hyperspheres of some level intersect are memorized, and the nearest neighbor method is applied only to recognizable objects that fall into this intersection zone. In other words, not all objects of the training sample are memorized, but only those lying near the boundary separating the images.
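The two rules described above can be sketched in a few lines of Python. This is a minimal illustration, not an implementation from the lecture: the data layout (a list of `(point, label)` pairs) and the function names are assumptions, and the k-nearest-neighbors variant uses a fixed neighbor count `k` rather than a fixed hypersphere volume, which is an equivalent common formulation.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two coordinate tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor(train, query):
    """Nearest neighbor rule: the label of the single closest
    memorized training object wins."""
    _, label = min(train, key=lambda pair: euclidean(pair[0], query))
    return label

def k_nearest(train, query, k=3):
    """k-nearest-neighbors rule: majority vote among the k closest
    memorized training objects."""
    neighbors = sorted(train, key=lambda pair: euclidean(pair[0], query))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

# Tiny two-image training sample: image 'A' near the origin, image 'B' near (5, 5).
train = [((0, 0), 'A'), ((0, 1), 'A'), ((5, 5), 'B'), ((5, 6), 'B')]
print(nearest_neighbor(train, (1, 1)))  # closest representative belongs to 'A'
print(k_nearest(train, (4, 5), k=3))    # two of the three neighbors belong to 'B'
```

Note that both functions keep the whole of `train` in memory, which is exactly the storage disadvantage the lecture mentions.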
Potential function method
The name of the method is related, to some extent, to the following analogy (for simplicity, assume that two images are recognized). Imagine that the objects are points in some space, and that charges are placed at these points: a positive charge if the object belongs to the first image and a negative charge if it belongs to the second (Fig. 8).

Fig. 8. Illustration of potential function synthesis

The function describing the distribution of the electrostatic potential in such a field can be used as a decision rule (or for its construction). If the potential at a point generated by a single charge is given by a potential function, then the total potential at that point, created by all the charges, is the sum of the individual potentials. As in physics, the potential function decreases as the Euclidean distance between the points increases. Most often, a potential function is used that has a maximum at zero distance and decreases monotonically to zero as the distance grows.

Recognition can then be carried out as follows. At the point where the unidentified object is located, the total potential is computed. If it is positive, the object is assigned to the first image; if it is negative, to the second.

With a large training sample, these calculations are rather cumbersome, so it is often more advantageous not to compute the potential directly but instead to estimate the boundary separating the classes (images) or to approximate the potential field.

Choosing the form of the potential functions is not easy. If they decay very rapidly with distance, an error-free separation of the training sample can be achieved; however, this causes certain troubles when recognizing unidentified objects (the reliability of the decision is reduced, and the zone of uncertainty grows). With potential functions that are too flat, the number of recognition errors may grow unreasonably, including on the training objects themselves.
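The recognition procedure above can be sketched as follows. This is a hedged illustration: the kernel K(r²) = 1 / (1 + α·r²) is one common choice of a potential function with a maximum at zero distance that decays monotonically to zero; the lecture does not prescribe a specific form, so the kernel, the parameter `alpha`, and the function names here are assumptions.

```python
def potential(train, x, alpha=1.0):
    """Total potential at point x.
    Each training object contributes a charge of +1 (first image, label 'A')
    or -1 (second image, label 'B'), weighted by a kernel that is maximal
    at distance 0 and decays monotonically to 0 with squared distance r2.
    """
    total = 0.0
    for point, label in train:
        r2 = sum((a - b) ** 2 for a, b in zip(point, x))
        charge = 1.0 if label == 'A' else -1.0
        total += charge / (1.0 + alpha * r2)  # assumed kernel K(r2)
    return total

def classify(train, x, alpha=1.0):
    """Positive total potential -> first image, negative -> second."""
    return 'A' if potential(train, x, alpha) > 0 else 'B'

# Same tiny two-image sample as before: 'A' near the origin, 'B' near (5, 5).
train = [((0, 0), 'A'), ((0, 1), 'A'), ((5, 5), 'B'), ((5, 6), 'B')]
print(classify(train, (1, 1)))  # dominated by nearby positive charges
print(classify(train, (5, 5)))  # dominated by nearby negative charges
```

Making `alpha` large makes the kernel decay very rapidly (perfect separation of the training sample, but unreliable decisions far from any training point), while making it very small flattens the potential, illustrating the trade-off discussed above.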
Certain recommendations in this regard can be obtained by considering the method of potential functions from a statistical standpoint (restoring the probability distribution density, or the boundary separating the sample, using a procedure of the stochastic-approximation type). This question is beyond the scope of this course of lectures.