
Transductive learning (transduction)

Learning with partial supervision (semi-supervised learning), in which predictions are to be made only for the instances of the test sample.

Transduction (machine learning)

From Wikipedia, the free encyclopedia

In logic, statistical inference, and supervised learning, transduction or transductive inference is reasoning from observed, specific (training) cases to specific (test) cases. In contrast, induction is reasoning from observed training cases to general rules, which are then applied to the test cases. The distinction is most interesting in cases where the predictions of the transductive model are not achievable by any inductive model. Note that this is caused by transductive inference on different test sets producing mutually inconsistent predictions.

Transduction was introduced by Vladimir Vapnik in the 1990s, motivated by his view that transduction is preferable to induction since, according to him, induction requires solving a more general problem (inferring a function) before solving a more specific problem (computing outputs for new cases). A similar observation was made much earlier by Bertrand Russell: "we shall reach the conclusion that Socrates is mortal with a greater approach to certainty if we make our argument purely inductive than if we go by way of 'all men are mortal' and then use deduction" (Russell 1912, chap VII).

An example of learning that is not inductive would be the case of binary classification, where the inputs tend to cluster in two groups. A large set of test inputs may help in finding the clusters, thus providing useful information about the classification labels. The same predictions would not be obtainable from a model that induces a function based only on the training cases. Some people may call this an example of the closely related semi-supervised learning, although Vapnik's motivation is quite different. An example of an algorithm in this category is the Transductive Support Vector Machine (TSVM).
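To make the cluster effect concrete, here is a minimal Python sketch. A true TSVM is not shown (it is not part of scikit-learn); scikit-learn's LabelSpreading, a related graph-based transductive method, stands in for it, and the data layout and parameter values are illustrative assumptions, not part of the original article:

  import numpy as np
  from sklearn.semi_supervised import LabelSpreading

  rng = np.random.default_rng(0)

  # Two clusters of inputs; only one labeled point per cluster.
  X = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
                 rng.normal(loc=+2.0, size=(50, 2))])
  y = np.full(100, -1)          # -1 marks an unlabeled point
  y[0], y[50] = 0, 1            # a single label in each cluster

  # The transductive model sees labeled AND unlabeled inputs together,
  # so the cluster structure of the test inputs shapes the labeling.
  model = LabelSpreading(kernel='rbf', gamma=0.5).fit(X, y)
  print(model.transduction_[:5], model.transduction_[50:55])

With one label per cluster, the spread labels follow the cluster structure that only the unlabeled inputs reveal; a purely inductive model fit to two labeled points could not recover it.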

A third possible motivation for transduction arises through the need to approximate. If exact inference is computationally prohibitive, one may at least try to make sure that the approximations are good at the test inputs. In this case, the test inputs could come from an arbitrary distribution (not necessarily related to the distribution of the training inputs), which would not be allowed in semi-supervised learning. An example of an algorithm falling in this category is the Bayesian Committee Machine (BCM).
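A rough numerical sketch of the BCM combination rule (a simplified reading of Tresp's scheme, with made-up numbers): each committee member, trained on its own chunk of the data, returns a Gaussian predictive mean and variance at a test input, and the committee adds precisions while correcting for the prior being counted M times instead of once:

  import numpy as np

  def bcm_combine(means, variances, prior_var):
      """Combine M Gaussian predictions at one test input (BCM rule)."""
      means, variances = np.asarray(means), np.asarray(variances)
      M = len(means)
      # Combined precision: sum of member precisions, minus the prior
      # precision that would otherwise be counted M times instead of once.
      precision = np.sum(1.0 / variances) - (M - 1) / prior_var
      var = 1.0 / precision
      mean = var * np.sum(means / variances)
      return mean, var

  # Three members trained on disjoint chunks of the training data,
  # each queried at the same test input (illustrative numbers).
  print(bcm_combine(means=[0.9, 1.1, 1.0],
                    variances=[0.2, 0.3, 0.25],
                    prior_var=1.0))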

Contents

  • 1 Example Problem
  • 2 Transduction Algorithms
    • 2.1 Partitioning Transduction
    • 2.2 Agglomerative Transduction
    • 2.3 Manifold Transduction
  • 3 References
  • 4 External links

Example Problem

The following example problem contrasts some of the unique properties of transduction against induction.

  [Figure: a collection of points, a few labeled A, B, or C, the rest unlabeled (?)]

A collection of points is given, such that some of the points are labeled (A, B, or C), but most of the points are unlabeled (?). The goal is to predict appropriate labels for all of the unlabeled points.

The inductive approach to solving this problem is to use the labeled points to train a supervised learning algorithm, and then have it predict labels for all of the unlabeled points. With this problem, however, the supervised learning algorithm will have only a handful of labeled points on which to base a predictive model, and it will certainly struggle to capture the structure of the data. For example, if a nearest-neighbor algorithm is used, then the points near the middle will be labeled "A" or "C", even though it is apparent that they belong to the same cluster as the point labeled "B".

Transduction has the advantage of being able to consider all of the points, not just the labeled ones, while performing the labeling task. In this case, a transductive algorithm would label the unlabeled points according to the clusters to which they naturally belong. The points in the middle would therefore most likely be labeled "B", because they are packed very close to that cluster.
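A minimal cluster-then-label sketch of this behavior, assuming three Gaussian clusters with one labeled point each (the coordinates are invented, since the original figure is not reproduced here):

  import numpy as np
  from sklearn.cluster import KMeans

  rng = np.random.default_rng(1)

  # Three well-separated clusters; one labeled point in each.
  centers = np.array([[-4.0, 0.0], [0.0, 0.0], [4.0, 0.0]])
  X = np.vstack([c + rng.normal(scale=0.5, size=(30, 2)) for c in centers])
  labels = np.array([None] * len(X), dtype=object)
  labels[0], labels[30], labels[60] = 'A', 'B', 'C'

  # Transduction: cluster ALL points, then spread each known label
  # over the cluster that contains it.
  clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
  for i in (0, 30, 60):               # indices of the labeled points
      labels[clusters == clusters[i]] = labels[i]

  print(labels[31:36])                # middle-cluster points come out 'B'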

An advantage of transduction is that it may make better predictions with fewer labeled points, because it uses the natural breaks found in the unlabeled points. One disadvantage of transduction is that it builds no predictive model. If a previously unknown point is added to the set, the entire transductive algorithm must be re-run on all of the points in order to predict a label. This can be computationally expensive if the data arrive incrementally in a stream. Further, it can cause the predictions for some of the old points to change (which may be good or bad, depending on the application). A supervised learning algorithm, by contrast, can label new points instantly at very little computational cost.

Transduction algorithms

Transduction algorithms can broadly be divided into two categories: those that seek to assign discrete labels to unlabeled points, and those that seek to regress continuous labels for unlabeled points. Algorithms that predict discrete labels tend to be derived by adding partial supervision to a clustering algorithm. These can be further subdivided into two categories: those that cluster by partitioning and those that cluster by agglomerating. Algorithms that regress continuous labels tend to be derived by adding partial supervision to a manifold learning algorithm.

Partitioning transduction

Partitioning transduction can be thought of as top-down transduction. It is a semi-supervised extension of partition-based clustering. It is typically performed as follows:

  Consider the set of all points to be one large partition.
  While any partition P contains two points with conflicting labels:
    Partition P into smaller partitions.
  For each partition P:
    Assign the same label to all of the points in P.

Of course, any reasonable partitioning technique could be used with this algorithm. Max-flow min-cut partitioning schemes are very popular for this purpose.
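Below is a runnable sketch of this top-down loop. Repeated 2-means splits stand in for a max-flow/min-cut partitioner, an assumption made purely to keep the example short:

  import numpy as np
  from sklearn.cluster import KMeans

  def partition_transduce(X, y):
      """X: (n, d) array; y: length-n list with None for unlabeled points."""
      parts, done = [np.arange(len(X))], []   # start: one large partition
      while parts:
          p = parts.pop()
          seen = {y[i] for i in p if y[i] is not None}
          if len(seen) <= 1:                  # no conflicting labels: keep it
              done.append((p, seen.pop() if seen else None))
          else:                               # conflict: split and re-check
              halves = KMeans(n_clusters=2, n_init=10).fit_predict(X[p])
              parts += [p[halves == 0], p[halves == 1]]
      out = list(y)
      for p, label in done:                   # same label for all points in P
          for i in p:
              out[i] = label
      return out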

Agglomerative Transduction

Agglomerative transduction can be thought of as bottom-up transduction. It is a semi-supervised extension of agglomerative clustering. It is typically performed as follows:

  Compute the pair-wise distances, D, between all the points.
  Sort D in ascending order.
  Consider each point to be a cluster of size 1.
  For each pair of points {a, b} in D:
    If (a is unlabeled) or (b is unlabeled) or (a and b have the same label):
      Merge the two clusters that contain a and b.
      Label all points in the merged cluster with the same label.
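The same loop as runnable Python, using a simple union-find to track clusters; the helper names and the O(n^2) pair enumeration are implementation choices of this sketch, not part of the pseudocode above:

  import numpy as np
  from itertools import combinations

  def agglomerate_transduce(X, y):
      """X: (n, d) array; y: length-n list with None for unlabeled points."""
      y = list(y)
      parent = list(range(len(X)))     # union-find: every point starts as
                                       # its own cluster of size 1
      def find(i):
          while parent[i] != i:
              parent[i] = parent[parent[i]]
              i = parent[i]
          return i

      # All pairs, sorted by ascending pairwise distance.
      pairs = sorted(combinations(range(len(X)), 2),
                     key=lambda p: np.linalg.norm(X[p[0]] - X[p[1]]))
      for a, b in pairs:
          ra, rb = find(a), find(b)
          if ra == rb:
              continue                 # already in the same cluster
          if y[a] is None or y[b] is None or y[a] == y[b]:
              parent[ra] = rb          # merge the two clusters
              label = y[a] if y[a] is not None else y[b]
              if label is not None:    # spread the label over the merge
                  for i in range(len(X)):
                      if find(i) == rb:
                          y[i] = label
      return y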

Manifold transduction

Manifold-learning-based transduction is still a very young field of research.

References

  • V. N. Vapnik. Statistical Learning Theory. New York: Wiley, 1998. (See pages 339-371.)
  • V. Tresp. A Bayesian Committee Machine. Neural Computation, 12, 2000.
  • B. Russell. The Problems of Philosophy. Home University Library, 1912.
