Hello guys, in this post we will learn about MNIST (Modified National Institute of Standards and Technology) handwritten digit recognition. We will build a deep neural network model that recognizes handwritten digits. MNIST is a dataset that contains 60,000 training images and 10,000 testing images of handwritten digits, all of them grayscale. You can find the full code here.
Below are some sample images from the MNIST dataset.
Now let's dive into the code!
Setup
First we will import some necessary modules like NumPy and Keras. NumPy is used to perform algebraic and numerical operations, and Keras is a deep learning API written in Python that runs on top of the machine learning platform TensorFlow (TensorFlow is an end-to-end, open-source machine learning platform).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
Load the dataset
Now let's load the MNIST dataset.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
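As a quick sanity check (an optional sketch, assuming matplotlib is installed for the plotting part), you can print the array shapes and look at a few of the digits:
print(x_train.shape, y_train.shape)  # expected: (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # expected: (10000, 28, 28) (10000,)

import matplotlib.pyplot as plt

# show the first five training digits with their labels
fig, axes = plt.subplots(1, 5, figsize=(10, 2))
for i, ax in enumerate(axes):
    ax.imshow(x_train[i], cmap="gray")      # each image is a 28x28 grayscale array
    ax.set_title(f"Label: {y_train[i]}")
    ax.axis("off")
plt.show()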
Scaling and Normalizing the images
We scale the images so that each pixel value lies in the range [0, 1]. To do that, we divide each pixel by 255.
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
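If you want to verify the scaling (optional), the pixel values should now span 0.0 to 1.0 instead of 0 to 255:
print(x_train.min(), x_train.max())  # expected: 0.0 1.0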
Prepare the data
Now the training and testing images should be in the shape (num_images, img_height, img_width, num_channels). For the MNIST dataset, img_height = img_width = 28 and num_channels = 1 (grayscale images).
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], x_train.shape[2], 1)
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1], x_test.shape[2], 1)
Similarly, the training and testing labels should be in the shape (num_images, num_classes). Here num_classes = 10 (digits 0-9), so we one-hot encode the labels.
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
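After these two steps the data is in the shape the network expects; a quick check (again an optional sketch) would show:
print(x_train.shape)   # expected: (60000, 28, 28, 1)
print(y_train.shape)   # expected: (60000, 10)
print(y_train[0])      # a one-hot vector; e.g. label 5 maps to [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]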
Build the Model
Now let's build our CNN (Convolutional Neural Network) model.
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])
model.summary()
This prints a summary of the model, which has roughly 34k trainable parameters.
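To see where that parameter count comes from, here is the arithmetic for each layer (a sketch using the standard Conv2D and Dense parameter formulas and the layer shapes above):
# Conv2D(32, 3x3) on a 1-channel input: (3*3*1 + 1) * 32 weights and biases
conv1_params = (3 * 3 * 1 + 1) * 32          # 320
# Conv2D(64, 3x3) on 32 input channels: (3*3*32 + 1) * 64
conv2_params = (3 * 3 * 32 + 1) * 64         # 18,496
# spatial size shrinks 28 -> 26 -> 13 -> 11 -> 5, so Flatten outputs 5*5*64 = 1600 features
dense_params = (5 * 5 * 64 + 1) * 10         # 16,010
print(conv1_params + conv2_params + dense_params)   # 34,826 parameters in total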
Train the model
Now we will compile and fit our model. Here we use categorical cross-entropy as the loss function and the Adam optimizer.
model.compile(loss="categorical_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=15, validation_split=0.2)
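model.fit returns a History object, so if you assign the call above to a variable (e.g. history = model.fit(...)), you can plot how the accuracy evolves over the epochs; a minimal sketch, assuming matplotlib is available:
import matplotlib.pyplot as plt

# 'history' refers to the return value of the model.fit call above
plt.plot(history.history["accuracy"], label="train accuracy")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()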
Evaluate our model
Now let's see how well our model performs. We will use model.evaluate to evaluate the model on the test set.
score = model.evaluate(x_test, y_test, verbose=0)
print("Test loss:", score[0])
print("Test accuracy:", score[1])
This gives us 0.99 accuracy and 0.01 loss.
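To run the trained model on individual images (an optional sketch), call model.predict and take the argmax over the ten class probabilities:
# each row of 'probs' is a vector of 10 class probabilities from the softmax layer
probs = model.predict(x_test[:5])
predicted_digits = np.argmax(probs, axis=1)
true_digits = np.argmax(y_test[:5], axis=1)   # y_test was one-hot encoded above
print("predicted:", predicted_digits)
print("actual:   ", true_digits)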
Finally, we have built our very first model, and it gives 99% accuracy. You can find the full code on my GitHub account, which is mentioned below. You can find my Introduction to Deep Learning here.
Thank you!
Contacts:
Phone: +91 9182530027
Email: hunnurjirao2000@gmail.com
GitHub: github.com/hunnurjirao