Neural Networks Simplified
Artificial Intelligence is the key to our future, and it goes hand in hand with neural networks. Not many people know what neural networks actually are, so in this article I’ll give a beginner’s explanation of how they work.
Vocabulary
You know that feeling when you’re watching a YouTube video or reading an article, and they suddenly throw a big word at you? Then you’re like, “wait… what?” Yeah, it sucks. That’s why I included a glossary at the end of this article in case you hit an unfamiliar keyword or need a refresher. Hope it helps!
Let’s dive into our neural networks.
Structure of a Neural Network
In the image below, we can see a basic example of a neural network. The colours don’t matter; I just used them to make the layers stand out. Each circle is a node, and the lines connecting the nodes form our neural network.

When making a neural network, you can make it your own. For this one, I chose three input neurons, five nodes for the hidden layer (more on that later), and two output nodes. If you ever make your own, you can choose however many neurons and hidden layers you’d like! There is a suggested way to do it, but we won’t get into that now.
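To make the 3–5–2 shape above concrete, here’s one way to describe it in a few lines of Python (the layer sizes come from the example; the representation is my own):

```python
# Number of neurons in each layer of the example network:
layer_sizes = [3, 5, 2]  # inputs, hidden layer, outputs

# Every neuron connects to every neuron in the next layer,
# so the number of connecting lines (weights) is:
n_connections = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
print(n_connections)  # 3*5 + 5*2 = 25 connections
```

Changing the numbers in `layer_sizes` is all it takes to describe a different architecture.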
The Process
Now we’re going to look at the actual process. Let’s say we want our network to successfully determine whether an image is of a dog. The correct answer for one output neuron would be 1 (for yes), and for the other, 0 (for no).

First, we input our data. For this example, I’ll just randomize the values to keep things easy. You can see the first neuron was assigned a value of 30, the second 4, and the last 15. Our next step is to assign values to our weights (the lines connecting our neurons).

It’s important to note that the values for weights are assigned randomly. As our neural network runs through iteration after iteration, the weights change bit by bit, helping our network towards our goal of accuracy. We call this a learnable parameter.

We’re getting closer to the end! Next up is your bias. When you assign your bias, it’s also random. Just like our weights, bias is a learnable parameter, meaning it’ll be adjusted as we go through iteration after iteration. Biases are assigned to the hidden layer(s) and the output layer of your network.
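Putting weights and biases together, the full set of learnable parameters for our 3–5–2 network can be sketched like this (a rough illustration with names of my own choosing, not code from the article):

```python
import numpy as np

rng = np.random.default_rng(42)

# Learnable parameters: all start out random, all get adjusted during training.
params = {
    "w_hidden": rng.standard_normal((3, 5)),  # input -> hidden weights
    "b_hidden": rng.standard_normal(5),       # one bias per hidden neuron
    "w_output": rng.standard_normal((5, 2)),  # hidden -> output weights
    "b_output": rng.standard_normal(2),       # one bias per output neuron
}

total = sum(p.size for p in params.values())
print(total)  # 15 + 5 + 10 + 2 = 32 learnable parameters
```

Note that the input layer gets no biases: only the hidden layer(s) and the output layer do, just as described above.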

How It Works
Now, here’s where things get tricky. To get from your input to your output, your data will be passing through all these weights, biases, and layers to somehow form an answer.
This all happens within your hidden layers.
The math behind neural networks is difficult, and because this is a beginner’s guide, we won’t be getting into that as much. If you can understand the concepts of neural networks and learn a bit of Python, the math can come later. All we’re going to do today is the basic stuff.
Now, let’s say we want to determine the value of this neuron within your hidden layer.

What we’d do is locate all the weights and values connected to the neuron, just like this:

Here’s where we do the actual math. For our three-input example, the formula to find the value of the neuron is a weighted sum of the inputs plus a bias:

value = i₁w₁ + i₂w₂ + i₃w₃ + b

In this formula, i stands for our inputs, w for the weights, and b for the bias; the subscripts match each input with the weight on its connection. The actual numbers are specific to our example and will be different for any other network (unless it’s identical). Once we substitute our values, it’ll look like this:

Once we have this, we’d run it through a sigmoid function. We won’t get into the details now, but essentially what a sigmoid function does is ‘squish’ any number into a value between 0 and 1, letting us know how strongly each neuron is lit up.
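For the curious, the sigmoid function itself is just one line of Python, implementing the standard formula σ(x) = 1 / (1 + e^(−x)):

```python
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

print(sigmoid(-5))  # ≈ 0.0067 — large negative inputs end up near 0
print(sigmoid(0))   # 0.5, exactly in the middle
print(sigmoid(5))   # ≈ 0.9933 — large positive inputs end up near 1
```

No matter how big or small the weighted sum gets, the result always lands between 0 and 1.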
That was the process for one neuron. We would repeat the same process for every neuron in our network. Luckily, if you’re able to code Python and create your own neural network, it’ll do the math for you, making things much easier.
Here’s a more general explanation of the exact formula for one neuron — the example we just did was a little specific.
Inputs are multiplied by the weights connecting them to the neuron and all added together — that’s how weights determine the effect our inputs have on the outcomes of our network. Then the bias is added to this sum, further shifting the output. Once this has happened, we put the result through a sigmoid function, which squishes it into a value between 0 and 1, letting us know how strongly the neuron will ‘light up’. This process is repeated for every neuron until we reach our outputs.
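The one-neuron recipe above (multiply, sum, add bias, squash) fits in a few lines of Python. This is a sketch using the example’s input values with made-up weights and bias, since the article’s specific numbers aren’t reproduced here:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def neuron_value(inputs, weights, bias):
    # Multiply each input by its weight and add everything together...
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    # ...then add the bias and squash the result with the sigmoid.
    return sigmoid(weighted_sum + bias)

inputs = [30, 4, 15]        # the example input values
weights = [0.2, -0.5, 0.1]  # made-up weights for illustration
bias = -1.0                 # made-up bias

print(neuron_value(inputs, weights, bias))  # ≈ 0.989
```

Repeating `neuron_value` for every neuron in every layer is all a forward pass through the network really is.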
Backpropagation
Don’t be discouraged if your network is completely wrong at first. It’s practically impossible to guess every single weight and bias correctly and make your neural network 100% accurate right away. It takes hundreds, if not thousands, of iterations to get your neural network extremely accurate. Every single time you run your data through, you’ll go through a process called backpropagation. This is the process of making small tweaks to your weights and biases, fine-tuning them to become more accurate.

Backpropagation is one of the most important steps in training your neural net. Tuning your weights and biases properly lowers the error rate, making the algorithm more accurate and reliable. As with the rest of the network, it isn’t necessary to learn the math behind it right away, so don’t worry too much if you don’t understand it yet.
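To give a feel for what those “small tweaks” look like, here is a heavily simplified sketch: training a single sigmoid neuron by gradient descent. This is my own toy illustration, not the full backpropagation algorithm, which also chains these updates backward through every layer:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# One neuron with one input, trying to learn: input 1.0 -> target 1.0
w, b = 0.1, 0.0          # start from (nearly) random guesses
x, target = 1.0, 1.0
learning_rate = 0.5

for step in range(1000):
    out = sigmoid(w * x + b)        # forward pass
    error = out - target            # how wrong were we?
    grad = error * out * (1 - out)  # slope of the error w.r.t. the weighted sum
    w -= learning_rate * grad * x   # small tweak to the weight...
    b -= learning_rate * grad       # ...and to the bias

print(sigmoid(w * x + b))  # much closer to 1.0 than where we started
```

Each pass nudges the weight and bias a little in the direction that reduces the error, and after enough iterations the output settles near the target.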
The Math Within Neural Networks
The math in this article is just scratching the surface of everything that goes on in your algorithm. If you truly want to learn and understand the math behind neural networks, I will warn you — it is difficult. I would suggest learning Python and building things before trying to learn the math because it isn’t necessary in order to build a neural network. If you can, by all means, go for it; learning the math will provide you with a deeper understanding of what’s happening within your neural network, but if you don’t understand it, don’t sweat it.
Python
Just a brief paragraph on Python — it’s a great programming language to learn, considering it’s one of the most popular languages in the world. You can code your neural networks with Python, and knowing it opens up many job opportunities.
Here are two free courses to get you started:
1. Codecademy teaches Python 2
2. CS Circles teaches Python 3
If you have the choice, learn Python 3: Python 2 reached end of life in 2020 and is no longer maintained. The basics are similar enough, though, that either course will get you started.
Glossary
Backpropagation: short for backward propagation of errors, backpropagation is the process of moving backward through your network, making small adjustments to your weights and biases based on the result you received to help make your output more accurate
Bias: an additional parameter in the neural network which is used to adjust the output along with the weighted sum of the inputs to the neurons
Hidden layers: this layer is part of your neural network; they’re neuron nodes stacked in between your input and output layers, and this is where the magic happens inside your network
Input: the data you feed the algorithm
Iteration: a run-through of your neural network
Learnable parameter: learnable parameters are just like the name implies; parameters that will be learned by the model during the training procedure, becoming more and more accurate every iteration
Learning rate: the size of the steps your network takes when adjusting its weights and biases each iteration. A high learning rate can overshoot the best answer and settle on a wrong one, while a low learning rate can take a very long time to get there
Output: the answer/result your algorithm gives you
Parameter: a number or other measurable value that will define a system or set the conditions of its operation
Training: the process of teaching your neural network to learn and adapt to become more accurate
Momentum: a technique from gradient descent; once your algorithm has been adjusting weights and biases consistently in one direction, momentum keeps the updates moving in that direction, smoothing and speeding up learning (hence the name)
Neural Network: a series of neurons trained to recognize patterns and interpret data through clustering or labeling. Neural networks are essentially algorithms, loosely inspired by the way our own brains work
Neurons: also known as nodes, neurons in neural networks are named after the neurons in our own brains and bodies. Our neurons send signals to the rest of our body to perform actions like blinking and moving our limbs. Neurons in a neural network hold numbers and values, helping the network find an output
Weight: weights are numbers randomly assigned to connections between neurons. As your network goes through multiple run-throughs, weights will be adjusted every time to make the output more accurate. They control the signal between two neurons, and how much influence the input will have on the output
I hope this article helped you get started with Artificial Intelligence!
If you enjoyed this article, connect with me on LinkedIn or email me at connoristheboss@gmail.com!