Another Easy Neural Network

Python and Scikit-Learn Digits, Part 1


History

I have recently begun my first internship as a Machine Learning and AI Developer. It’s a big title! Ludicrously enough, I have no experience in this field. Nevertheless, I find myself writing neural networks daily. So today, I am going to show how easy it is to write one. We’ll start simple, and as I go along, I will explain the nitty-gritty behind it all.

If you wanna master something, teach it.

The above quote captures my purpose in writing this article, and I hope you enjoy reading it as much as I enjoyed writing it. Let us begin!


Requirements

I’m going to make this a “go-to” for everything regarding building a neural network, so skip ahead if the requirements below are already installed on your computer.

  • Python 3.7+ w/ Anaconda
  • Jupyter Notebook

Okay, installing all this is relatively straightforward. Anaconda is kind enough to handle the heavy lifting for you. If you aren’t familiar with the platform, let me explain.

Anaconda is a distribution of Python and R for data scientists. It’s used by over 11 million people worldwide and is an industry standard for developing, testing, and training on a single machine. It lets you manage libraries, dependencies, and environments with Conda, a package and environment management system that runs on your computer. I come from a web development background, so I like to view it as the npm/yarn of Python. However, Conda works for any language; at the moment, Python simply has one of its most significant communities.

Once everything is installed and ready to go, open your terminal and enter the command «jupyter notebook» to launch Jupyter Notebook… Yes, it is that simple. This command will start a local server for you!

Learning the Dataset

Welcome to Jupyter

Let’s first begin by importing our general packages. If you are not familiar with Jupyter Notebooks, then I recommend just jumping right in and messing around with the cell block system, or watching a quick crash course. It’s honestly reasonably straightforward: you enter the code in a cell block, and when you want to run it, you click Run in the toolbar or use one of the many handy hotkeys. There is more to the software; however, for this article, we won’t need to go that deep.

Okay, so in your first cell we’ll begin with the following:
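The original notebook cell isn’t embedded here, so here is a minimal sketch of what it looks like (the variable name digits is my own choice):

    # Import the built-in handwritten digits dataset from Scikit-Learn
    from sklearn.datasets import load_digits

    # Assign the dataset to a variable so we can explore it
    digits = load_digits()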

Analyzing the Data

We start by importing a built-in dataset provided by Scikit-Learn. Scikit-Learn is a Python library that provides data scientists with the data-mining and data-analysis tools needed to create machine learning models. The dataset we just imported will provide the information necessary to take advantage of those tools.

However, datasets can vary in form depending on their information. It would be unwise to jump straight into building our model before analyzing our data’s format.

After assigning that dataset to a variable, go ahead and run it in the next cell block, like below:
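Roughly like this, simply evaluating the variable so Jupyter prints it out:

    # Display the raw dataset object: its arrays plus the DESCR text
    digits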

It’s quite the mess of arrays full of data. Beneath all of the data should be a paragraph of text containing the Attribute Information, which will inform you that the data consists of 8x8 pixel images. It will also describe the data in further detail.

One of the attributes we can access on our data is “.images,” which returns an array of matrices. Now you may be asking, “Wait a second… these aren’t images?” Technically that is correct; however, recall how a computer generates images. Computers read images as integers! Go ahead and call the data as shown below and try to find the pattern within the array.
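For example, something like the following (indexing the first image is my choice; any index works):

    # Each entry of digits.images is an 8x8 matrix of pixel intensities
    digits.images[0]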

You’ll find that it looks oddly like a number. Images are just arrays of numbers telling the computer where to place each pixel. Neat, right? It’ll be easier to see if we plot the array as a graph. Let’s go ahead and import Matplotlib, a 2D plotting library for Python that lets you graph information. Check it out:
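A minimal version of that cell, assuming the same digits variable from above:

    import matplotlib.pyplot as plt

    # Render the first 8x8 image; cmap only sets the grayscale color scheme
    plt.imshow(digits.images[0], cmap=plt.cm.gray_r)
    plt.show()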

Behold, a zero! We can also verify this by checking its target data. Quick note: if you are not familiar with Matplotlib, the “cmap” argument only defines the color scheme for the plot.
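Checking the target is one line, again using index 0 to match the image we plotted:

    # The label for the first image; it should print 0
    digits.target[0]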

Build

Okay, so we have our dataset imported, we’ve analyzed the layout of each image, and now all that is left to do is to begin splitting our data for the neural network.

Let’s first import the packages we’ll require to prepare our data for the neural network.
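Something along these lines (a minimal sketch; train_test_split is pulled in for the splitting step that follows):

    # The classifier and the utility for splitting data into train/test sets
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split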

Notice the type of neural network we import: MLPClassifier from Scikit-Learn. This model is a multi-layer perceptron classifier, which is your general-purpose learning neural network. There are many different forms of neural networks, and each performs certain tasks better than others. It’s our job to know which one to use, and in this simple case, the MLP neural network fits just fine.

The next step is to split our data into training and testing variables, which will allow us to begin training our neural network. We’ll split it into training images and training labels, with testing images and testing labels saved for later use after fitting the data.
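A sketch of the split, assuming the digits variable from earlier (the test size and random seed are my own placeholder values, since the original cell isn’t shown):

    # digits.data holds each 8x8 image flattened into 64 features
    x_train, x_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=42
    )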

You can call each variable individually to see how Scikit-Learn split the data. It’s a little interesting. However, let’s cut straight to creating our machine learning model.
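Here is a minimal sketch of that cell; the hidden-layer sizes below are placeholders I picked, since the original values aren’t shown:

    # Three hidden layers that gradually decrease in density
    mlp = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=1000)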

Above, we begin by assigning our neural network to a variable with a set number of hidden layers. Hidden layers are the “layers” of perceptrons linked between the input and the output of the data. Here is a beautiful video providing a creative simulation of a neural network. Pay attention to the multi-layer section of the video.

You’ll notice that there are now three layers compared to one in the first section, and they gradually decrease in density. Also, in the top left, it notes the three hidden layers and the total number of perceptrons. This is what we are defining in our MLPClassifier function: the numbers represent how many perceptrons are in each layer leading to the output. Another thing to notice in this video is the number of input perceptrons. We have 64 because our images are 8x8 pixels; in the video, you’ll see 784 because the images being fed through that simulation are 28x28 pixels. Anyway, that was just a fun little fact to help you better understand the network. The simulation is essentially our model!

Now let’s fit our data to our network. Fitting the model will train the network on our training data and its target labels.
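One line does it, assuming the variables defined above:

    # Train the network on the training images and their labels
    mlp.fit(x_train, y_train)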

Evaluation Time

Now that our neural network is done training, let’s test it and see its accuracy.

Below we import a Scikit-Learn module that provides metrics for evaluating the model’s accuracy. We then create a predictions variable by applying the predict method to the test images we split off earlier.

The “accuracy_score” function returns a score comparing the predicted data stored in our predictions variable against the test labels we held back as answers.
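Put together, the evaluation cell looks roughly like this (a sketch, reusing the names from earlier):

    from sklearn.metrics import accuracy_score

    # Predict labels for the held-out test images
    predictions = mlp.predict(x_test)

    # Compare the predictions against the true test labels
    accuracy_score(y_test, predictions)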

Not a bad score! Typically anything above 80% is excellent for a model.

Time for Action

Now that everything is set, we can put the pieces together and begin applying the model.
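For example, a sketch like the following picks one test image, plots it, and compares the network’s guess to the true label (the index is an arbitrary choice of mine):

    # Pick a sample from the test set
    index = 0

    # Show the image we are asking the network about
    plt.imshow(x_test[index].reshape(8, 8), cmap=plt.cm.gray_r)
    plt.show()

    # Compare the model's prediction with the actual label
    print("Prediction:", mlp.predict(x_test[index].reshape(1, -1))[0])
    print("Actual:", y_test[index])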

As you can see, our model is working well!

Closing

In closing, I’d like to challenge you to check out some other models. You can follow me on Medium for more tutorials and information regarding machine learning and AI as they’re published.

The next phase of this project will be implementing this model into an actual application. We’ll be using Flask to accomplish this by deploying it to the internet.

Thank you and for now, stay tuned!

