Simple image classification using TensorFlow and CIFAR-10

Almost one year after following cs231n online and doing the assignments, I met the CIFAR-10 dataset again.

This time, instead of implementing my Convolutional Neural Network from scratch using numpy, I had to implement mine using TensorFlow, as part of one of the Deep Learning Nano Degree assignments.

As an aside, since this course reuses some content from the free Deep Learning course, they took the time to fix that course's subpar presentation. Good. :-)

The ConvNet

TensorBoard-generated graph

What I ended up with was a fairly simple ConvNet. To meet the specifications, we were expected to achieve at least 50% accuracy on the training, validation, and test sets. So I went with a network comprising four layers (or five, if you count pooling as a separate layer):

  • A convolutional layer with a [ReLU activation](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) function, made up of:
    • A convolution with 64 kernels of size 3x3, followed by
    • A max pooling layer of size 3x3 with 2x2 strides
  • A fully connected layer mapping the output of the previous layer to 384 outputs with a [ReLU activation](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) function
  • Another fully connected layer mapping the 384 outputs of the previous layer to 192 outputs with a [ReLU activation](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) function
  • A final layer mapping the 192 outputs to the 10 classes in CIFAR-10, to which SoftMax is applied.
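To see where the fully connected layer's input size comes from, it helps to walk the tensor shapes through the network. This is a small sketch, assuming SAME padding and a stride of 1 for the convolution (the post doesn't state the padding scheme, so those are assumptions):

```python
import math

def same_out(size, stride):
    # With SAME padding, output size = ceil(input_size / stride)
    return math.ceil(size / stride)

h = w = 32    # CIFAR-10 images are 32x32...
depth = 3     # ...with 3 color channels

# Convolution, 64 kernels, stride 1 -> 32x32x64 (assuming SAME padding)
h, w, depth = same_out(h, 1), same_out(w, 1), 64

# 3x3 max pooling with 2x2 strides -> 16x16x64
h, w = same_out(h, 2), same_out(w, 2)

# Flattened size feeding the first fully connected layer
flattened = h * w * depth
print(h, w, depth, flattened)  # 16 16 64 16384
```

So the first fully connected layer maps 16384 inputs down to 384 outputs.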

On top of that, all layers were regularized with Dropout.
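The architecture above can be sketched in a few lines. This is not the original code (the assignment used TensorFlow 1.x primitives); it is a minimal reconstruction using the Keras API, and the padding scheme and dropout rate are assumptions, since the post doesn't specify them:

```python
import tensorflow as tf

def build_model():
    """Sketch of the four-layer ConvNet described above.

    Kernel counts and sizes come from the post; padding="same" and
    the 0.5 dropout rate are assumptions.
    """
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 3)),  # CIFAR-10 images
        # Convolutional layer: 64 kernels of size 3x3, ReLU
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        # Max pooling of size 3x3 with 2x2 strides
        tf.keras.layers.MaxPooling2D(pool_size=3, strides=2, padding="same"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Flatten(),
        # Fully connected: 384 outputs, ReLU
        tf.keras.layers.Dense(384, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        # Fully connected: 192 outputs, ReLU
        tf.keras.layers.Dense(192, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        # Final layer: the 10 CIFAR-10 classes, with SoftMax
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
```

Training it is then a matter of compiling with a cross-entropy loss and calling `fit` on the CIFAR-10 data.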

Classification Performance

This neural network, trained over 50 epochs, achieved \~66% validation accuracy and \~65% test accuracy. Pretty good for a small project.

Sample Classification

Source code

Source code can be found on GitHub.