I am working through Google's Machine Learning Crash Course. The notes in this post cover the "Neural Networks" module.
Does "deep learning" imply neural networks?
The introductory video refers to "deep neural networks", so I'm wondering what the relationship is between deep learning and neural networks.
Yes, according to Quora's "Does deep learning always mean neural network or can include other ML techniques?".
"To give you some context, modern Convolutional Networks contain on orders of 100 million parameters and are usually made up of approximately 10-20 layers (hence deep learning)" – https://cs231n.github.io/neural-networks-1/
"Deep Learning is simply a subset of the architectures (or templates) that employs 'neural networks'" – https://towardsdatascience.com/intuitive-deep-learning-part-1a-introduction-to-neural-networks-aaeb3a1500df (TDS)
"Deep learning" in Google's glossary links to "deep model": "A type of neural network containing multiple hidden layers."
"However, until 2006 we didn't know how to train neural networks to surpass more traditional approaches, except for a few specialized problems. What changed in 2006 was the discovery of techniques for learning in so-called deep neural networks." – http://neuralnetworksanddeeplearning.com/about.html
Towards Data Science's "Intuitive Deep Learning Part 1a: Introduction to Neural Networks" clarifies that "deep learning" is a subset of machine learning. I guess they're both "learning". I like the comparison of an algorithm to a recipe: in this context, ML optimizes a recipe, and deep learning is a subset of those optimization techniques.
When to use neural networks?
Small data with linear relationships → LSR
Large data with linear relationships → gradient descent
Large data with simple, nonlinear relationships → feature crosses
Large data with complex, nonlinear relationships → NN
"Neural nets will give us a way to learn nonlinear models without the use of explicit feature crosses" – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises
"Neural networks, a beautiful biologically-inspired programming paradigm which enables a computer to learn from observational data" – http://neuralnetworksanddeeplearning.com/index.html
NN "have the flexibility to model many complicated relationships between input and output" – https://towardsdatascience.com/intuitive-deep-learning-part-1a-introduction-to-neural-networks-aaeb3a1500df
"That's not to say that neural networks aren't good at solving simpler problems. They are. But so are many other algorithms. The complexity, resource-intensiveness and lack of interpretability in neural networks is sometimes a necessary evil, but it's only warranted when simpler methods are inapplicable" – https://www.quora.com/What-kinds-of-machine-learning-problems-are-neural-networks-particularly-good-at-solving
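To make the feature-cross vs. neural-network distinction concrete for myself, here's a toy sketch (my own made-up data, not from the course): the label depends on whether x1 and x2 share a sign, which no line over the raw features can separate, but a single hand-engineered cross feature x1*x2 handles it with one threshold.

```python
import numpy as np

# Toy data: one class where x1 and x2 have the same sign, the other where
# they differ. This is a simple nonlinear pattern in the raw features.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))
y = (np.sign(X[:, 0]) == np.sign(X[:, 1])).astype(int)

# A hand-engineered feature cross makes the classes separable by a threshold.
cross = X[:, 0] * X[:, 1]
print("accuracy with cross feature:", np.mean((cross > 0) == y))  # 1.0
```

A neural network with a hidden layer could learn an equivalent nonlinear boundary from the raw x1 and x2 alone, which is what the "no explicit feature crosses" quote above is getting at.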
Why are there multiple layers?
"each layer is effectively learning a more complex, higher-level function over the raw inputs" – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/anatomy
"A single-layer neural network can only be used to represent linearly separable functions … Most problems that we are interested in solving are not linearly separable." – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
The universal approximation theorem states that a single hidden layer, given enough neurons, can approximate any continuous function – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
"How many hidden layers? Well if your data is linearly separable (which you often know by the time you begin coding a NN) then you don't need any hidden layers at all. Of course, you don't need an NN to resolve your data either, but it will still do the job." – https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw
"One hidden layer is sufficient for the large majority of problems." – https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw
"Even for those functions that can be learned via a sufficiently large one-hidden-layer MLP, it can be more efficient to learn it with two (or more) hidden layers" – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
"Multi-layer" implies at least one hidden layer: "It has an input layer that connects to the input variables, one or more hidden layers" – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
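To convince myself of the "one hidden layer is sufficient" idea, here's a toy sketch (my own construction with hand-picked weights, not a trained network): a single hidden layer of ReLU units pieced together into a piecewise-linear approximation of sin(x).

```python
import numpy as np

# Universal-approximation toy: one hidden layer of ReLU units approximates
# sin(x) on [0, 2*pi]. Each hidden unit "turns on" at one breakpoint; the
# output weights encode the change in slope at that breakpoint.
def relu(z):
    return np.maximum(0, z)

xs = np.linspace(0, 2 * np.pi, 200)
breakpoints = np.linspace(0, 2 * np.pi, 20)        # one hidden unit per breakpoint
step = breakpoints[1] - breakpoints[0]

# Target slope on each segment = derivative of sin at the segment midpoint;
# each unit's output weight is the slope change introduced at its breakpoint.
slopes = np.cos(breakpoints + step / 2)
weights = np.diff(np.concatenate(([0.0], slopes)))

hidden = relu(xs[:, None] - breakpoints[None, :])  # hidden layer activations
approx = hidden @ weights                          # output = weighted sum of ReLUs

print("max abs error:", np.max(np.abs(approx - np.sin(xs))))  # small (well under 0.05)
```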
Chris Olah's "Neural Networks, Manifolds, and Topology", linked from the crash course, visualizes how data sets that intersect in n dimensions may be disjoint in n + 1 dimensions, which enables a linear solution. Other than that, though, Olah's article was over my head. Articles like the TDS one are more my speed.
Why are some layers called "hidden"?
"The interior layers are sometimes called 'hidden layers' because they are not directly observable from the systems inputs and outputs." – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
How many layers do I need?
Task 4 in the exercise recommends playing around with the hyperparameters to get a certain loss, but the combinatorial complexity makes me wonder if there's an intuitive way to think about the role of layers and neurons. 🤔
"Regardless of the heuristics you might encounter, all answers will come back to the need for careful experimentation to see what works best for your specific dataset" – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
"In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers." – https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw
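For example (my own numbers, just to make the rule concrete): with 10 input features and a single output neuron, those rules suggest one hidden layer of roughly (10 + 1) / 2 ≈ 5 or 6 neurons as a starting point.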
"3 neurons are enough because the XOR function can be expressed as a combination of 3 half-planes (ReLU activation)" – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises. It seems that narrowing the problem space to ReLU enables some deterministic optimization.
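Out of curiosity, I hand-wired one such combination to check the idea (my own construction, not the playground's learned weights): 3 ReLU neurons plus a linear output compute XOR on binary inputs.

```python
import numpy as np

# XOR from 3 ReLU neurons (hand-picked weights, not learned):
# h1 = ReLU(x1), h2 = ReLU(x2), h3 = ReLU(x1 + x2 - 1); out = h1 + h2 - 2*h3
def relu(z):
    return np.maximum(0, z)

def xor_net(x1, x2):
    h1 = relu(x1)
    h2 = relu(x2)
    h3 = relu(x1 + x2 - 1)
    return h1 + h2 - 2 * h3

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, xor_net(x1, x2))  # prints 0, 1, 1, 0
```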
"The sigmoid and hyperbolic tangent activation functions cannot be used in networks with many layers due to the vanishing gradient problem" – https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/
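The rough arithmetic behind that: the sigmoid's derivative is at most 0.25, so backpropagating through, say, 10 sigmoid layers multiplies the gradient by at most 0.25^10 ≈ 10^-6 (ignoring the weights), which is why the early layers barely learn.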
"use as big of a neural network as your computational budget allows, and use other regularization techniques to control overfitting" – https://cs231n.github.io/neural-networks-1/#arch
"a model with 1 neuron in the first hidden layer cannot learn a good model no matter how deep it is. This is because the output of the first layer only varies along one dimension (usually a diagonal line), which isn't enough to model this data set well" – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises
"A single layer with more than 3 neurons has more redundancy, and thus is more likely to converge to a good model" – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises
Two hidden layers with eight neurons in the first and two in the second performed well (~0.15 loss) on repeated runs.
Heuristics from the spiral solution video:
- Tune number of layers and nodes. Max neurons in the first layer, tapering down a couple layers to the output is a reasonable start. Each neuron takes time to train, though, so reduce total neurons if training is too slow. This is reinforced by the practice exercise, which started with two layers of 20 and 12 neurons, and then tried to reduce the number of neurons while keeping loss stable.
- Reduce the learning rate to smooth the loss curve
- Add regularization to further smooth the loss curve
- Feature engineering helps with noisy data
- Try different activation functions. Ultimately, tanh had the best fit
- Iterate from 1
Even after all this, tuning hyperparameters still seems combinatorially complex.
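To keep the heuristics straight, here's roughly what that starting point could look like in Keras (a sketch of my own; the layer sizes, learning rate, and regularization strength are placeholders to experiment with, not values from the course):

```python
import tensorflow as tf

# Two hidden layers tapering toward the output, tanh activation, L2
# regularization, and a modest learning rate -- all knobs to tune, per the
# heuristics above. Assumes just the 2 raw input features (as in the
# playground's spiral data set).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="tanh",
                          kernel_regularizer=tf.keras.regularizers.l2(0.01),
                          input_shape=(2,)),
    tf.keras.layers.Dense(12, activation="tanh",
                          kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy")
# model.fit(X_train, y_train, epochs=200)  # then prune neurons while loss holds
```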
Activation functions
A neural net consists of layers. Nodes in the bottom layer are linear equations. Nodes in a "hidden" layer transform a linear node into a nonlinear node using an "activation function". The crash course states "any mathematical function can serve as an activation function".
A sigmoid is an example of an activation function. I remember from the module on logistic regression (notes) that we used a sigmoid to transform a linear equation into a probability.
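As a quick sanity check for myself, here's what a few common activation functions do to the same weighted sum (the weights and inputs below are made up for illustration):

```python
import numpy as np

# One node's linear part, z = w.x + b, pushed through different activations.
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w, b = np.array([0.4, -0.2, 0.1]), 0.05
x = np.array([1.0, 2.0, 3.0])
z = w @ x + b                        # linear part of the node

print("sigmoid:", sigmoid(z))        # squashes z into (0, 1), probability-like
print("tanh:   ", np.tanh(z))        # squashes z into (-1, 1)
print("relu:   ", np.maximum(0, z))  # passes positive z, zeros out negative z
```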
Why is it called a "neuron"?
The glossary definition for "neuron" is pretty good: 1) "taking in multiple input values and generating one output value", and 2) "The neuron calculates the output value by applying an activation function." Aside: this reminds me of lambda architecture. I appreciate TDS clarifying that neurons "often take some linear combination of the inputs", like w1x1 + w2x2 + w3x3. I suppose this is what the glossary means by "a weighted sum of input values".
TDS references a single image from the biological motivations section of Stanford's CS231n, but I find both of the images from that section useful for comparison.
I like TDS' definition of a "layer": "a 'neural network' is simply made out of layers of neurons, connected in a way that the input of one layer of neuron is the output of the previous layer of neurons". In that context, the hidden layer diagrams from the crash course make sense.
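Putting the pieces together, here's a minimal sketch of that definition (random placeholder weights, not a trained network): each neuron computes a weighted sum of the previous layer's outputs and applies an activation, and the layers chain together.

```python
import numpy as np

# A tiny forward pass: input layer -> hidden layer of 4 ReLU neurons ->
# single sigmoid output neuron. Weights are random stand-ins; training
# would learn them.
rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0, z)

x = np.array([0.5, -1.2, 3.0])                   # input layer (3 features)

W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # hidden layer: 4 neurons
h = relu(W1 @ x + b1)                            # weighted sums + activation

W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)    # output layer: 1 neuron
y = 1 / (1 + np.exp(-(W2 @ h + b2)))             # sigmoid output, probability-like

print("hidden layer:", h)
print("output:", y)
```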