# Recognising Handwritten Digits

Well, the best way to learn something new is to get your hands dirty with it. So let’s learn how to train a model to serve our purpose.

This article is a continuation of my previous one, Getting Started With TensorFlow.

**Handwritten Digit Recognition Using TensorFlow**

So far we have seen a basic implementation of linear regression using this library. Let’s see a real application of it. Here, we will build a simple model to identify handwritten digits. Obviously, to train a model we need a large amount of data. Since we are just starting out, let’s use the MNIST dataset for this purpose.

Here, we will train the model on about 55k samples and then evaluate it on about 10k test images.


So, let’s get started.

```python
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np

# Download (if needed) and load the MNIST dataset with one-hot labels.
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

sess = tf.InteractiveSession()
```

Here, we are just reading our data from the MNIST database. You might be wondering about the difference between the InteractiveSession used here and the Session used in the previous article. There isn’t much difference: with an InteractiveSession, eval() and run() can be called directly, which avoids having to pass an explicit Session object to run operations.

```python
# Placeholders for the input images (28x28 = 784 pixels) and one-hot labels.
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

# Model parameters, initialised to zero.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

sess.run(tf.global_variables_initializer())

# The model: one raw score (logit) per digit class.
y = tf.matmul(x, W) + b
```

Here we declare the variables and placeholders; please refer to the previous article for the difference between the two.
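Under the hood, this model is just a single matrix multiply plus a bias. A minimal NumPy sketch of the same forward pass, using made-up shapes rather than the trained weights:

```python
import numpy as np

# A fake batch of 5 flattened 28x28 images (784 pixels each).
x = np.random.rand(5, 784).astype(np.float32)

# Zero-initialised parameters, mirroring tf.zeros above.
W = np.zeros((784, 10), dtype=np.float32)
b = np.zeros(10, dtype=np.float32)

# y = xW + b: one raw score (logit) per digit class, per image.
logits = x @ W + b
print(logits.shape)  # (5, 10)
```

With zero weights every logit is zero, which is why the softmax below starts out assigning equal probability to all ten digits.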

```python
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

# Train for 1000 steps on mini-batches of 100 images.
for i in range(1000):
    batch = mnist.train.next_batch(100)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})

# Save the trained parameters so we do not have to retrain later.
weight = np.array(sess.run(W))
const = np.array(sess.run(b))
np.savetxt('weights.txt', weight)
np.savetxt('const.txt', const)

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

print("\n\n")
print("training completed with accuracy =",
      accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
```

Now, the cross_entropy is the cost that we need to minimize while training our model. To avoid training the model again and again, I am writing the trained weights to a file for later testing. Note the distinction: we used **mnist.train** for training and **mnist.test** for evaluation. When you run this code, the final line reports the test accuracy.
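What softmax_cross_entropy_with_logits computes can be written out by hand: squash the logits into probabilities with softmax, then take the negative log-probability of the true class. A NumPy sketch for a single example, with illustrative values rather than MNIST data:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # raw scores for 3 classes
label = np.array([1.0, 0.0, 0.0])    # one-hot true class

probs = softmax(logits)
cross_entropy = -np.sum(label * np.log(probs))
print(round(cross_entropy, 3))  # 0.417
```

The loss is small when the model puts high probability on the true class and grows without bound as that probability approaches zero, which is exactly what gradient descent exploits.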

So, our model appears to be about 92% accurate. This is quite poor compared to state-of-the-art models, but since we have not used any advanced neural-network concepts yet, it is acceptable.
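The accuracy op above just compares predicted and true class indices. The same check in plain NumPy, with toy predictions rather than the actual MNIST run:

```python
import numpy as np

# Toy logits for 4 samples over 3 classes, and their one-hot labels.
y = np.array([[0.1, 0.8, 0.1],
              [0.7, 0.2, 0.1],
              [0.2, 0.3, 0.5],
              [0.9, 0.05, 0.05]])
y_ = np.array([[0, 1, 0],
               [1, 0, 0],
               [0, 1, 0],   # this prediction will be wrong
               [1, 0, 0]], dtype=np.float32)

# tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) in NumPy terms:
correct = np.argmax(y, axis=1) == np.argmax(y_, axis=1)
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # 0.75
```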

Well, does this give you a feel for what is happening? I don’t think so. Of course we have trained and tested on the dataset, but we have not looked at any images, so we cannot say how simple or complicated they are. Let’s see how the model does in that regard.

To test this, just clone the Learning_TensorFlow repo and run predict_digit.py after training your model.

There, I have taken 100 input samples and evaluated them on the basis of our trained weights, so now you can easily compare the results. Since we already know the accuracy, you need not worry about that.
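Since the trained parameters were saved with np.savetxt above, a prediction script along the lines of predict_digit.py boils down to a single matrix multiply followed by argmax. A sketch with stand-in values (random weights and a random image here, since the real files come from your own training run):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the real artefacts:
W = rng.standard_normal((784, 10)) * 0.01  # would be np.loadtxt('weights.txt')
b = np.zeros(10)                           # would be np.loadtxt('const.txt')
image = rng.random(784)                    # would be a flattened 28x28 test image

# Same forward pass as training; the highest-scoring class is the prediction.
logits = image @ W + b
print("predicted digit:", int(np.argmax(logits)))
```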

Now, **how do we achieve higher accuracy, say about 99%?** Well, with neural networks we can do that: such a model turns out to be about **99.2%** accurate on the MNIST test images.

That’s all. We will continue with neural networks in the next article. Thanks for reading. :)

Next article: https://medium.com/@rahulvernwal/how-should-i-start-with-cnn-c62a3a89493b