{"id":3416,"date":"2017-04-27T11:43:27","date_gmt":"2017-04-27T15:43:27","guid":{"rendered":"http:\/\/www.goodmath.org\/blog\/?p=3416"},"modified":"2017-04-27T11:43:27","modified_gmt":"2017-04-27T15:43:27","slug":"introduction-to-neural-networks","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2017\/04\/27\/introduction-to-neural-networks\/","title":{"rendered":"Introduction to Neural Networks"},"content":{"rendered":"<p> In preparation for starting a new job next week, I&#8217;ve been doing some reading about neural networks and deep learning. The math behind neural networks is pretty interesting, so I thought I&#8217;d take my notes, and turn them into some posts.<\/p>\n<p> As the name suggests, the basic idea of a neural network is to construct a computational system based on a simple model of a neuron. If you look at a neuron under a microscope, what you see is something vaguely similar to:<\/p>\n<p><a href=\"https:\/\/i0.wp.com\/www.goodmath.org\/blog\/wp-content\/uploads\/2017\/04\/neuron.gif\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/www.goodmath.org\/blog\/wp-content\/uploads\/2017\/04\/neuron.gif?resize=300%2C224\" alt=\"\" width=\"300\" height=\"224\" class=\"alignright size-medium wp-image-3450\" \/><\/a><\/p>\n<p> It&#8217;s a cell with three main parts:<\/p>\n<ul>\n<li> A central body;<\/li>\n<li> A collection of branched fibers called <em>dendrites<\/em> that receive signals and carry them to the body; and<\/li>\n<li> A branched fiber called an <em>axon<\/em> that sends signals produced by the body.<\/li>\n<\/ul>\n<p> You can think of a neuron as a sort of analog computing element. Its dendrites receive inputs from some collection of sources. The body has some criteria for deciding, based on its inputs, whether to &#8220;fire&#8221;. If it fires, it sends an output using its axon.  <\/p>\n<p> What makes a neuron fire? It&#8217;s a combination of inputs. 
Different terminals on the dendrites have different signaling strengths. When the combined inputs reach a threshold, the neuron fires. Those different signal strengths are key: a system of neurons can learn how to interpret a complex signal by varying the strength of the signal from different dendrites. <\/p>\n<p> We can think of this simple model of a neuron in computational terms as a computing element that takes a set of <em>weighted<\/em> input values, combines them into a single value, and then generates an output of &#8220;1&#8221; if that value exceeds a threshold, and 0 if it does not.<\/p>\n<p> In slightly more formal terms, we can describe a perceptron as a tuple <img src='http:\/\/l.wordpress.com\/latex.php?latex=%28n%2C%20%5Ctheta%2C%20b%2C%20t%29&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='(n, \\theta, b, t)' style='vertical-align:1%' class='tex' alt='(n, \\theta, b, t)' \/>, where:<\/p>\n<ol>\n<li> <img src='http:\/\/l.wordpress.com\/latex.php?latex=n&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='n' style='vertical-align:1%' class='tex' alt='n' \/> is the number of inputs to the machine. 
We&#8217;ll represent a given input as a vector <img src='http:\/\/l.wordpress.com\/latex.php?latex=v%3D%5Bv_1%2C%20...%2C%20v_n%5D&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='v=[v_1, ..., v_n]' style='vertical-align:1%' class='tex' alt='v=[v_1, ..., v_n]' \/>.<\/li>\n<li> <img src='http:\/\/l.wordpress.com\/latex.php?latex=%5Ctheta%20%3D%20%5B%5Ctheta_1%2C%20%5Ctheta_2%2C%20...%2C%20%5Ctheta_n%5D&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='\\theta = [\\theta_1, \\theta_2, ..., \\theta_n]' style='vertical-align:1%' class='tex' alt='\\theta = [\\theta_1, \\theta_2, ..., \\theta_n]' \/> is a vector of <em>weights<\/em>, where <img src='http:\/\/l.wordpress.com\/latex.php?latex=%5Ctheta_i&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='\\theta_i' style='vertical-align:1%' class='tex' alt='\\theta_i' \/> is the weight for input <img src='http:\/\/l.wordpress.com\/latex.php?latex=i&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='i' style='vertical-align:1%' class='tex' alt='i' \/>.<\/li>\n<li> <img src='http:\/\/l.wordpress.com\/latex.php?latex=b&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='b' style='vertical-align:1%' class='tex' alt='b' \/> is a <em>bias<\/em> value.<\/li>\n<li> <img src='http:\/\/l.wordpress.com\/latex.php?latex=t&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='t' style='vertical-align:1%' class='tex' alt='t' \/> is the <em>threshold<\/em> for firing.<\/li>\n<\/ol>\n<p> Given an input vector <img src='http:\/\/l.wordpress.com\/latex.php?latex=v&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='v' style='vertical-align:1%' class='tex' alt='v' \/>, the machine computes the combined, weighted input value <img src='http:\/\/l.wordpress.com\/latex.php?latex=I&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='I' style='vertical-align:1%' class='tex' alt='I' \/> by taking the <em>dot product<\/em> <img 
src='http:\/\/l.wordpress.com\/latex.php?latex=v%20%5Ccdot%20w%20%3D%20%5B%5Ctheta_1v_1%20%2B%20%5Ctheta_2v_2%20%2B%20...%20%2B%20%5Ctheta_nv_n%5D&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='v \\cdot w = [\\theta_1v_1 + \\theta_2v_2 + ... + \\theta_nv_n]' style='vertical-align:1%' class='tex' alt='v \\cdot w = [\\theta_1v_1 + \\theta_2v_2 + ... + \\theta_nv_n]' \/>. If <img src='http:\/\/l.wordpress.com\/latex.php?latex=I%20%2B%20b%20%5Cge%20t&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='I + b \\ge t' style='vertical-align:1%' class='tex' alt='I + b \\ge t' \/>, the neuron &#8220;fires&#8221; by producing a 1; otherwise, it produces a zero.<\/p>\n<p> This version of a neuron is called a <em>perceptron<\/em>. It&#8217;s good at a particular kind of task called <em>classification<\/em>: given a set of inputs, it can answer whether or not the input is a member of a particular subset of values. A simple perceptron is limited to <em>linear classification<\/em>, which I&#8217;ll explain next.<\/p>\n<p> To understand what a perceptron does, the easiest way to think of it is graphical. Imagine you&#8217;ve got an input vector with two values, so that your inputs are points in a two dimensional cartesian plane. The weights on the perceptron inputs define a line in that plane. The perceptron fires for all points <em>above<\/em> that line &#8211; so the perceptron classifies a point according to which side of the line it&#8217;s located on. 
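<\/p>
<p> That decision rule is simple enough to sketch directly in code. Here&#8217;s a minimal Python version (the function and parameter names are mine, purely for illustration): <\/p>

```python
def perceptron_fire(inputs, weights, bias, threshold=0.0):
    """Fire (output 1) if the weighted input sum plus bias reaches the threshold."""
    combined = sum(w * v for w, v in zip(weights, inputs))  # the dot product
    return 1 if combined + bias >= threshold else 0

# With weights [1, 1] and bias -1, the perceptron fires for points on or
# above the line x + y = 1 in the plane:
print(perceptron_fire([2.0, 3.0], [1.0, 1.0], bias=-1.0))  # above the line -> 1
print(perceptron_fire([0.0, 0.0], [1.0, 1.0], bias=-1.0))  # below the line -> 0
```

<p>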
We can generalize that notion to higher dimensional spaces: for a perceptron taking <img src='http:\/\/l.wordpress.com\/latex.php?latex=n&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='n' style='vertical-align:1%' class='tex' alt='n' \/> input values, we can visualize its inputs as an <img src='http:\/\/l.wordpress.com\/latex.php?latex=n&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='n' style='vertical-align:1%' class='tex' alt='n' \/>-dimensional space, and the perceptron weights define a hyperplane that slices the <img src='http:\/\/l.wordpress.com\/latex.php?latex=n&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='n' style='vertical-align:1%' class='tex' alt='n' \/>-dimensional input space into two sub-spaces.  <\/p>\n<p> Taken by itself, a single perceptron isn&#8217;t very interesting. It&#8217;s just a fancy name for something that implements a linear partition. What starts to unlock its potential is <em>training<\/em>. You can take a perceptron and initialize all of its weights to 1, and then start testing it on some input data. Based on the results of the tests, you alter the weights. 
After enough cycles of repeating this, the perceptron can <em>learn<\/em> the correct weights for any linear classification.<\/p>\n<p> The traditional representation of the perceptron is as a function <img src='http:\/\/l.wordpress.com\/latex.php?latex=h&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='h' style='vertical-align:1%' class='tex' alt='h' \/>:<\/p>\n<p align=center><img decoding=\"async\" src=\"http:\/\/s0.wp.com\/latex.php?latex=%5Cdisplaystyle++++h%28x%2C+%5Ctheta%2C+b%29+%3D+%5Cleft%5C%7B++++%5Cbegin%7Barray%7D%7Bcl%7D+++++0%2C+%26+x+%5Ccdot+%5Ctheta+%2B+b+%3C+0+%5C%5C+++%2B1%2C+%26+x+%5Ccdot+%5Ctheta+%2B+b+%5Cge+0+%5Cend%7Barray%7D+++++%5Cright.+&#038;bg=ffffff&#038;fg=000000&#038;s=0&#038;c=20201002\" alt=\"&#92;displaystyle    h(x, &#92;theta, b) = &#92;left&#92;{    &#92;begin{array}{cl}     0, &amp; x &#92;cdot &#92;theta + b &lt; 0 &#92;&#92;   +1, &amp; x &#92;cdot &#92;theta + b &#92;ge 0 &#92;end{array}     &#92;right. \" class=\"latex\" \/><\/p>\n<p>Using this model, learning is just an optimization process, where we&#8217;re trying to find a set of values for <img decoding=\"async\" src=\"http:\/\/s0.wp.com\/latex.php?latex=%7B%5Ctheta%7D&#038;bg=ffffff&#038;fg=000000&#038;s=0&#038;c=20201002\" alt=\"{&#92;theta}\" class=\"latex\" \/> that minimize the errors in assigning points to subspaces.<\/p>\n<p> A linear perceptron is an implementation of this model based on a very simple notion of a <em>neuron<\/em>. A perceptron takes a set of weighted inputs, adds them together, and then if the result exceeds some threshold, it &#8220;fires&#8221;.<\/p>\n<p> A perceptron whose weighted inputs don&#8217;t exceed its threshold produces an output of 0; a perceptron which &#8220;fires&#8221; based on its inputs produces a value of +1.<\/p>\n<p> Linear classification is very limited &#8211; we&#8217;d like to be able to do things that are more interesting than just linear. 
We can do that by adding one thing to our definition of a neuron: an <em>activation function<\/em>. Instead of just checking if the value exceeds a threshold, we can take the dot-product of the weights and the inputs, and then apply a function to that result before comparing it to the threshold.<\/p>\n<p> With an activation function <img src='http:\/\/l.wordpress.com\/latex.php?latex=f&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='f' style='vertical-align:1%' class='tex' alt='f' \/>, we can define the operation of our more powerful perceptron in two phases. First, the perceptron computes the <em>logit<\/em>, which is the same old dot-product of the weights and the inputs. Then it applies the activation function to the logit, and based on the output, it decides whether or not to fire.<\/p>\n<p> The logit is defined as:<\/p>\n<p><center><img src='http:\/\/l.wordpress.com\/latex.php?latex=%20z%20%3D%20%28%5CSigma_%7Bi%3D0%7D%5E%7Bn%7D%20w_i%20x_i%29%20%2B%20b%20&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title=' z = (\\Sigma_{i=0}^{n} w_i x_i) + b ' style='vertical-align:1%' class='tex' alt=' z = (\\Sigma_{i=0}^{n} w_i x_i) + b ' \/><\/center><\/p>\n<p> And the perceptron as a whole is a classifier:<\/p>\n<p align=center><img decoding=\"async\" src=\"http:\/\/s0.wp.com\/latex.php?latex=%5Cdisplaystyle+h%28x%2C+%5Ctheta%29+%3D+++++++%5Cleft%5C%7B+++++++%5Cbegin%7Barray%7D%7Bcl%7D+++++++0%2C+%26+f%28z%29+%3C+0+%5C%5C+++++++%2B1%2C+%26+f%28z%29+%3E%3D+0+++++++%5Cend%7Barray%7D+++++++%5Cright.&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;displaystyle h(x, &#92;theta) =       &#92;left&#92;{       &#92;begin{array}{cl}       0, &amp; f(z) &lt; 0 &#92;&#92;       +1, &amp; f(z) &gt;= 0       &#92;end{array}       &#92;right.\" class=\"latex\" \/><\/p>\n<p> Like I said before, this gets interesting when you get to the point of training.  The idea is that before you start training, you have a neuron that doesn&#8217;t know anything about the things it&#8217;s trying to classify. 
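<\/p>
<p> Before going further into training, the two-phase operation just described is easy to sketch in code. This is purely an illustration of my own, using tanh as an example activation (the model doesn&#8217;t fix a particular function): <\/p>

```python
import math

def classify(inputs, weights, bias, activation=math.tanh):
    # Phase 1: compute the logit -- the weighted sum of the inputs plus the bias.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Phase 2: apply the activation function, and fire iff the result is non-negative.
    return 1 if activation(z) >= 0 else 0

print(classify([1.0, 1.0], [0.5, 0.5], bias=-0.25))  # z = 0.75, tanh(z) > 0 -> 1
print(classify([0.0, 0.0], [0.5, 0.5], bias=-0.25))  # z = -0.25, tanh(z) < 0 -> 0
```

<p>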
You take a collection of values where you know their classification, and you put them through the network. Each time you put a value through, you look at the result &#8211; and if it&#8217;s wrong, you adjust the weights of the inputs. Once you&#8217;ve repeated that process enough times, the edge-weights will, effectively, encode a curve (a line in the case of a linear perceptron) that divides between the categories. The real beauty of it is that you don&#8217;t need to know where the line really is: as long as you have a large, representative sample of the data, the perceptron will discover a good separation.  <\/p>\n<p> The concept is simple, but there&#8217;s one big gap: <em>how<\/em> do you adjust the weights?  The answer is: calculus! We&#8217;ll define an error function, and then use the slope of the error curve to push us towards the minimum error.<\/p>\n<p> Let&#8217;s say we have a set of training data. For each value <img src='http:\/\/l.wordpress.com\/latex.php?latex=i&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='i' style='vertical-align:1%' class='tex' alt='i' \/> in the training data, we&#8217;ll say that <img src='http:\/\/l.wordpress.com\/latex.php?latex=t%5E%7B%28i%29%7D&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='t^{(i)}' style='vertical-align:1%' class='tex' alt='t^{(i)}' \/> is the &#8220;true&#8221; value (that is, the correct classification) for value <img src='http:\/\/l.wordpress.com\/latex.php?latex=i&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='i' style='vertical-align:1%' class='tex' alt='i' \/>, and <img src='http:\/\/l.wordpress.com\/latex.php?latex=y%5E%7B%28i%29%7D&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='y^{(i)}' style='vertical-align:1%' class='tex' alt='y^{(i)}' \/> is the value produced by the current set of weights of our perceptron. 
Then the cumulative error for the training data is: <\/p>\n<p><center><img src='http:\/\/l.wordpress.com\/latex.php?latex=E%20%3D%20%5Cfrac%7B1%7D%7B2%7D%5Csum_%7Bi%7D%28t%5E%7B%28i%29%7D%20-%20y%5E%7B%28i%29%7D%29%5E2&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='E = \\frac{1}{2}\\sum_{i}(t^{(i)} - y^{(i)})^2' style='vertical-align:1%' class='tex' alt='E = \\frac{1}{2}\\sum_{i}(t^{(i)} - y^{(i)})^2' \/><\/center><\/p>\n<p> <img src='http:\/\/l.wordpress.com\/latex.php?latex=t%5E%7B%28i%29%7D&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='t^{(i)}' style='vertical-align:1%' class='tex' alt='t^{(i)}' \/> is given to us with our training data. <img src='http:\/\/l.wordpress.com\/latex.php?latex=y%5E%7B%28i%29%7D&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='y^{(i)}' style='vertical-align:1%' class='tex' alt='y^{(i)}' \/> is something we know how to compute. Using those, we can view the error as a function of the outputs <img src='http:\/\/l.wordpress.com\/latex.php?latex=y&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='y' style='vertical-align:1%' class='tex' alt='y' \/>, and thus of the weights that produce them. <\/p>\n<p> Let&#8217;s think in terms of a two-input example again. We can create a three dimensional space around the ideal set of weights: the x and y axes are the input weights; the z axis is the size of the cumulative error for those weights. For a given error value <img src='http:\/\/l.wordpress.com\/latex.php?latex=z&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='z' style='vertical-align:1%' class='tex' alt='z' \/>, there&#8217;s a contour in that space containing all of the weight assignments that produce that level of error. All we need to do is follow the contours downhill towards the minimum.<\/p>\n<p> In the simple cases, we could just use Newton&#8217;s method directly to rapidly converge on the solution, but we want a general training algorithm, and in practice, most real learning is done using a non-linear activation function. 
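<\/p>
<p> As a quick aside, the cumulative error formula translates directly into code; a minimal sketch (the function name is mine): <\/p>

```python
def cumulative_error(true_values, predicted_values):
    """E = 1/2 * sum over the training data of (t - y)^2."""
    return 0.5 * sum((t - y) ** 2 for t, y in zip(true_values, predicted_values))

# Three training samples, two of them misclassified:
print(cumulative_error([1, 0, 1], [1, 1, 0]))  # 0.5 * (0 + 1 + 1) = 1.0
```

<p>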
That produces a problem: on a complex error surface, it&#8217;s easy to overshoot and miss the minimum. So we&#8217;ll scale the process using a meta-parameter <img src='http:\/\/l.wordpress.com\/latex.php?latex=%5Cepsilon&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='\\epsilon' style='vertical-align:1%' class='tex' alt='\\epsilon' \/> called the <em>learning rate<\/em>.<\/p>\n<p> For each weight, we&#8217;ll compute a change based on the partial derivative of the error with respect to the weight:<\/p>\n<p><center><img src='http:\/\/l.wordpress.com\/latex.php?latex=%20%5CDelta%20w_k%20%3D%20-%20%5Cepsilon%20%5Cfrac%7B%5Cpartial%20E%7D%7B%5Cpartial%20w_k%7D&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title=' \\Delta w_k = - \\epsilon \\frac{\\partial E}{\\partial w_k}' style='vertical-align:1%' class='tex' alt=' \\Delta w_k = - \\epsilon \\frac{\\partial E}{\\partial w_k}' \/><\/center><\/p>\n<p> For our linear perceptron, using the definition of the cumulative error <img src='http:\/\/l.wordpress.com\/latex.php?latex=E&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='E' style='vertical-align:1%' class='tex' alt='E' \/> above, we can expand that out to:<\/p>\n<p><center><img src='http:\/\/l.wordpress.com\/latex.php?latex=%5CDelta%20w_k%20%3D%20%5CSigma_i%20%5Cepsilon%20x_k%5E%7B%28i%29%7D%28t%5E%7B%28i%29%7D%20-%20y%5E%7B%28i%29%7D%29&#038;bg=FFFFFF&#038;fg=000000&#038;s=0' title='\\Delta w_k = \\Sigma_i \\epsilon x_k^{(i)}(t^{(i)} - y^{(i)})' style='vertical-align:1%' class='tex' alt='\\Delta w_k = \\Sigma_i \\epsilon x_k^{(i)}(t^{(i)} - y^{(i)})' \/><\/center><\/p>\n<p> So to train a single perceptron, all we need to do is start with everything equally weighted, and then run it on our training data. After each pass over the data, we compute the updates for the weights, and then re-run until the values stabilize.<\/p>\n<p> So far, it&#8217;s all pretty easy. Even with a complex activation function, though, a single neuron on its own can&#8217;t do very much. 
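<\/p>
<p> The training procedure just described can be sketched as a short program: start with equal weights, make a pass over the training data, apply the weight updates, and repeat. This is illustrative code of my own, not from any library; it also updates the bias, treating it as a weight on a constant input of 1: <\/p>

```python
def predict(x, weights, bias):
    """A linear perceptron: fire iff the weighted input sum plus bias is non-negative."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0

def train_perceptron(samples, targets, epochs=100, learning_rate=0.1):
    """samples: list of input vectors; targets: list of 0/1 classifications."""
    n = len(samples[0])
    weights = [1.0] * n      # start with everything equally weighted
    bias = 0.0
    for _ in range(epochs):
        # Accumulate delta_w_k = sum_i epsilon * x_k^(i) * (t^(i) - y^(i))
        # over one full pass of the training data.
        deltas = [0.0] * n
        delta_b = 0.0
        for x, t in zip(samples, targets):
            y = predict(x, weights, bias)
            for k in range(n):
                deltas[k] += learning_rate * x[k] * (t - y)
            delta_b += learning_rate * (t - y)   # bias as a weight on input 1
        weights = [w + d for w, d in zip(weights, deltas)]
        bias += delta_b
    return weights, bias

# Learn logical AND, which is linearly separable:
w, b = train_perceptron([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1])
print([predict(x, w, b) for x in [[0, 0], [0, 1], [1, 0], [1, 1]]])  # [0, 0, 0, 1]
```

<p>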
But when we start combining collections of neurons together, so that the outputs of some neurons become inputs to other neurons, and we have multiple neurons providing outputs &#8211; that is, when we assemble neurons into networks &#8211; it becomes amazingly powerful. So that will be our next step: to look at how to put neurons together into networks, and then train those networks.<\/p>\n<p> As an interesting sidenote: most of us, when we look at this, think about the whole thing as a programming problem. But in fact, in the original implementation of the perceptron, it was an analog electrical circuit.  The weights were assigned using circular potentiometers, and were updated during training by electric motors rotating the knobs on the potentiometers!<\/p>\n<p> I&#8217;m obviously not going to build a network of potentiometers and motors. But in the next post, I&#8217;ll start showing some code using a neural network library. At the moment, I&#8217;m still exploring the possible ways of implementing it. The two top contenders are TensorFlow, which is a library with a Python interface; and R, which is a statistical math system that has a collection of neural network libraries. If you have any preference between the two, or for something else altogether, let me know!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In preparation for starting a new job next week, I&#8217;ve been doing some reading about neural networks and deep learning. The math behind neural networks is pretty interesting, so I thought I&#8217;d take my notes, and turn them into some posts. 
As the name suggests, the basic idea of a neural network is to construct [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[322],"tags":[325,323,324],"class_list":["post-3416","post","type-post","status-publish","format-standard","hentry","category-machine-learning","tag-deep-learning","tag-neural-networks","tag-perceptron"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-T6","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/3416","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=3416"}],"version-history":[{"count":24,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/3416\/revisions"}],"predecessor-version":[{"id":3452,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/3416\/revisions\/3452"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=3416"}],"wp:term":[{
"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=3416"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/tags?post=3416"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}