  • Representation Learning - Introduction
    Personal study / Deep learning basics · 2023. 8. 26. 17:09

    reference : Deep Learning (Ian Goodfellow)

     

    Generally speaking, a good representation is one that makes a subsequent learning task easier. 

     

    The choice of representation will usually depend on the choice of the subsequent learning task.

     

    We can think of feedforward networks trained by supervised learning as performing a kind of representation learning. 

     

    Specifically, the last layer of the network is typically a linear classifier, such as a softmax regression classifier. The rest of the network learns to provide a representation to this classifier. Training with a supervised criterion naturally leads to the representation at every hidden layer taking on properties that make the classification task easier. 
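    This view can be made concrete with a minimal numpy sketch (all sizes and weights here are toy values, not from the book): a hidden layer produces the representation, and the final layer is just a softmax linear classifier reading that representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): 4 input features, 8 hidden units, 3 classes.
X = rng.normal(size=(5, 4))          # batch of 5 inputs
W1 = rng.normal(size=(4, 8)) * 0.1   # hidden layer: learns the representation
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 3)) * 0.1   # last layer: a linear (softmax) classifier
b2 = np.zeros(3)

# The "rest of the network": a ReLU hidden layer computing the representation h.
h = np.maximum(0.0, X @ W1 + b1)

# The softmax regression classifier operates only on h, never on X directly.
logits = h @ W2 + b2
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

print(probs.shape)        # (5, 3)
print(probs.sum(axis=1))  # each row is a distribution over the 3 classes
```

    Training with cross-entropy backpropagates through both layers, which is what shapes h into a representation that makes the linear classification easy.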

     

    Supervised training of feedforward networks does not involve explicitly imposing any condition on the learned intermediate features. Other kinds of representation learning algorithms are often explicitly designed to shape the representation in some particular way. 

     

    Just like supervised networks, unsupervised deep learning algorithms have a main training objective but also learn a representation as a side effect. Regardless of how a representation was obtained, it can be used for another task. Alternatively, multiple tasks can be learned together with some shared internal representation. 
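    A toy illustration of the shared-representation idea (sizes, names, and the two example tasks are all hypothetical): two task-specific heads read the same hidden features produced by one shared trunk.

```python
import numpy as np

rng = np.random.default_rng(1)

X = rng.normal(size=(6, 4))

# Shared trunk: one internal representation h used by every task.
W_shared = rng.normal(size=(4, 8)) * 0.1
h = np.tanh(X @ W_shared)

# Task-specific heads reuse the same h.
W_classify = rng.normal(size=(8, 3)) * 0.1   # e.g. a 3-class classification task
W_regress = rng.normal(size=(8, 1)) * 0.1    # e.g. a scalar regression task

class_logits = h @ W_classify
regression_out = h @ W_regress

print(class_logits.shape, regression_out.shape)  # (6, 3) (6, 1)
```

    When both heads' losses are summed and backpropagated, the gradients from both tasks shape the same W_shared, which is what "learned together" means here.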

     

    Most representation learning problems face a tradeoff between preserving as much information about the input as possible and attaining nice properties (such as independence).
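    One concrete way to see this tradeoff (a toy numpy sketch, not an example from the book): PCA whitening attains decorrelated features, and the input is fully recoverable only as long as no components are dropped; truncating to fewer components improves compactness at the cost of information.

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlated 2-D data (hypothetical toy distribution).
A = np.array([[2.0, 1.0], [1.0, 1.0]])
X = rng.normal(size=(2000, 2)) @ A.T
Xc = X - X.mean(axis=0)

# PCA whitening: rotate onto eigenvectors and rescale each component.
cov = Xc.T @ Xc / len(Xc)
eigvals, eigvecs = np.linalg.eigh(cov)
Z = Xc @ eigvecs / np.sqrt(eigvals)   # whitened representation

# The "nice property": components of Z are (approximately) decorrelated.
print(np.round(Z.T @ Z / len(Z), 2))  # close to the identity matrix

# The tradeoff: keeping only the top component would discard the variance
# carried by the other direction, i.e. lose information about the input.
```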

     

    Representation learning is particularly interesting because it provides one way to perform unsupervised and semi-supervised learning. We often have very large amounts of unlabeled training data and relatively little labeled training data. Training with supervised learning techniques on the labeled subset often results in severe overfitting. Semi-supervised learning offers the chance to resolve this overfitting problem by also learning from the unlabeled data. Specifically, we can learn good representations for the unlabeled data, and then use these representations to solve the supervised learning task. 
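    The two-step recipe can be sketched as follows (a hedged toy version: PCA stands in for the representation learner and a nearest-centroid rule for the supervised classifier; any pair would do, and all data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)

# Plenty of unlabeled data, very few labeled examples (toy setup).
X_unlabeled = rng.normal(size=(1000, 10))
X_labeled = rng.normal(size=(10, 10))
y_labeled = np.array([0, 1] * 5)

# Step 1: learn a representation from the UNLABELED data (here: 3-D PCA).
mean = X_unlabeled.mean(axis=0)
_, _, Vt = np.linalg.svd(X_unlabeled - mean, full_matrices=False)

def project(X):
    """Map raw inputs into the representation learned without labels."""
    return (X - mean) @ Vt[:3].T

# Step 2: solve the supervised task in that representation.
H = project(X_labeled)
centroids = np.stack([H[y_labeled == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((H[:, None] - centroids) ** 2).sum(-1), axis=1)
print(pred.shape)  # (10,)
```

    The point is the division of labor: the representation is fit on the large unlabeled set, so the small labeled set only has to fit a very simple model, which mitigates overfitting.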

     

     
