How Does Deep Learning Work?

Animation description:

Excited about deep learning? Watch as Expect Labs' Simon Handley takes us through the inner workings of this fascinating subfield of machine learning.
TRANSCRIPT:
Hi, I am Simon Handley and I work at Expect Labs. I want to talk a little bit more today about deep learning. So deep learning is a collection of machine learning techniques developed in response to problems people found with backpropagation. And backpropagation is a machine learning technique that was developed in the eighties for learning feed-forward neural networks. So the idea here is you have some input layer that maybe corresponds to an image, and you have some output layer with maybe some classifications, and you try to learn some hidden neurons, some hidden layers between the input and output layers. Backprop works okay, sort of, except that if you are trying to learn many hidden layers, it just falls over. The reason is that backpropagation learns by propagating errors back through the network, and people found that those errors decay exponentially, so doing more than one or two hidden layers just didn't really work.
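To make that failure mode concrete, here is a minimal sketch (not from the talk; the layer sizes, input, and learning rate are invented) of backpropagation in a small feed-forward network written in plain NumPy. Printing the average magnitude of the backpropagated error at each layer typically shows it shrinking as it moves back toward the input, which is the decay described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical network: one input layer, four hidden layers, one output layer.
layer_sizes = [64, 32, 32, 32, 32, 10]
weights = [rng.normal(0, 0.5, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

x = rng.normal(size=(1, layer_sizes[0]))          # a single made-up input
target = np.zeros((1, layer_sizes[-1]))
target[0, 3] = 1.0                                # pretend it belongs to class 3

# Forward pass: keep every layer's activations for the backward pass.
activations = [x]
for W in weights:
    activations.append(sigmoid(activations[-1] @ W))

# Backward pass: propagate the output error back through every layer,
# updating the weights and printing how large the error signal still is.
delta = (activations[-1] - target) * activations[-1] * (1 - activations[-1])
for i in reversed(range(len(weights))):
    print(f"layer {i}: mean |error signal| = {np.abs(delta).mean():.2e}")
    grad = activations[i].T @ delta               # gradient for this layer's weights
    delta = (delta @ weights[i].T) * activations[i] * (1 - activations[i])
    weights[i] -= 0.1 * grad                      # one gradient-descent step
```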
The key insight for improving backpropagation was that, rather than learning an entire monolithic network and propagating errors back through it, it works better to break the task down into a series of smaller tasks. So what they do is take the input layer and use unsupervised learning to learn a new representation of it, and that becomes a hidden layer; then you repeat that iteratively. For each hidden layer, you learn a new hidden layer that is a representation of the previous one, and at the very end, once you've done that, you do a supervised learning step on the last hidden layer to learn the output layer. The really cool thing is that it actually works, which is not a given. It turns out that by doing this you get much higher representational power when you do the supervised learning of the outputs from those hidden layers.
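A rough sketch of that greedy, layer-by-layer procedure, again with invented data, labels, and layer sizes: each hidden layer is trained as a small autoencoder on the previous layer's activations (unsupervised), and only the final output layer is trained against the labels (supervised):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(data, n_hidden, epochs=200, lr=0.1):
    """Unsupervised step: learn a hidden representation by reconstructing `data`."""
    n_in = data.shape[1]
    W_enc = rng.normal(0, 0.1, (n_in, n_hidden))
    W_dec = rng.normal(0, 0.1, (n_hidden, n_in))
    for _ in range(epochs):
        h = sigmoid(data @ W_enc)                 # encode
        recon = sigmoid(h @ W_dec)                # decode
        err = recon - data                        # reconstruction error
        d_dec = err * recon * (1 - recon)
        d_enc = (d_dec @ W_dec.T) * h * (1 - h)
        W_dec -= lr * h.T @ d_dec / len(data)
        W_enc -= lr * data.T @ d_enc / len(data)
    return W_enc

X = rng.random((200, 64))                         # fake unlabeled inputs
y = rng.integers(0, 2, size=(200, 1)).astype(float)  # fake binary labels

# Unsupervised stage: stack hidden layers one at a time, each trained on the
# representation produced by the layer below it.
layer_input, encoders = X, []
for n_hidden in (32, 16):                         # assumed layer sizes
    W = train_autoencoder(layer_input, n_hidden)
    encoders.append(W)
    layer_input = sigmoid(layer_input @ W)

# Supervised stage: learn only the output layer on top of the last hidden layer.
W_out = rng.normal(0, 0.1, (layer_input.shape[1], 1))
for _ in range(200):
    pred = sigmoid(layer_input @ W_out)
    W_out -= 0.1 * layer_input.T @ ((pred - y) * pred * (1 - pred)) / len(X)
```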
So to summarize, deep learning is a kind of semi-supervised representation learning, and by semi-supervised I mean it contains aspects of both supervised and unsupervised learning. On the supervised side, it has an error function based on the known outputs and classifications. But it also has an unsupervised aspect, in the sense that it learns hidden layers by just looking for patterns in the inputs, and those patterns are not driven by some error function. And it is a kind of representation learning, meaning that it tries to find new representations of the inputs. Thanks!
