Dr. Mark Magic, "Action Recognition Using Python and Recurrent Neural Network"
English | ISBN: 1798429047 | 2019 | 111 pages | PDF | 989 KB
* Updated in May 2019!
* Research fields: Computer Vision and Machine Learning.
* Book Topic: Action recognition from videos.
* Recognition Tool: Recurrent Neural Network (RNN) with an LSTM (Long Short-Term Memory) layer and a fully connected layer.
* Programming Language: Step-by-step implementation with Python in Jupyter Notebook.
* Major Steps: Building the network, training the network, testing the network, and comparing the network with an SVM (Support Vector Machine) classifier.
* Processing Units to Execute the Codes: CPU and GPU (on Google Colaboratory).
* Image Feature Extraction Tool: Pretrained VGG16 network.
* Dataset: UCF101 (the first 15 actions, 2010 videos).
* Main Results: For the testing data, the highest prediction accuracy from the RNN is 86.97%, which is slightly higher than that from the SVM classifier (86.09%).
* Detailed Description:
A Recurrent Neural Network (RNN) is a great tool for video action recognition.
This book builds an RNN with an LSTM (Long Short-Term Memory) layer and a fully connected layer to perform video action recognition.
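As a rough illustration only (the book does not state its deep learning framework in this description; Keras is assumed here, and the layer sizes, sequence length, and optimizer are hypothetical), such a network could be sketched as follows:

# Minimal sketch of an RNN for action recognition: one LSTM layer followed by
# a fully connected softmax layer. Framework (Keras) and sizes are assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

num_frames = 30        # frames sampled per video (assumed)
feature_dim = 4096     # length of one VGG16 fully-connected feature vector
num_classes = 15       # the first 15 UCF101 actions

model = Sequential([
    LSTM(256, input_shape=(num_frames, feature_dim)),  # LSTM layer over frame features
    Dense(num_classes, activation='softmax'),           # fully connected output layer
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])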
The RNN was trained and evaluated with VGG16 features saved in .mat files. The features were extracted from images with a modified pretrained VGG16 network, and the images were converted from videos in the UCF101 dataset, which contains 101 different actions and 13,320 videos. Note that only the first 15 actions in this dataset were used for the recognition task.
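The feature-extraction step might look roughly like the sketch below, which assumes Keras' pretrained VGG16 and SciPy for writing the .mat files; truncating the network at the fc2 layer and the file name are assumptions, not the book's exact choices:

# Sketch of extracting per-frame VGG16 features and saving them to a .mat file.
# Assumes frames have already been converted from a video as 224x224 RGB images.
import numpy as np
from scipy.io import savemat
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model

base = VGG16(weights='imagenet')
# "Modified" pretrained VGG16: here the network is cut at the fc2 layer (an
# assumption), so each image yields a 4096-dimensional feature vector.
extractor = Model(inputs=base.input, outputs=base.get_layer('fc2').output)

frames = np.random.rand(30, 224, 224, 3) * 255.0     # placeholder for real video frames
features = extractor.predict(preprocess_input(frames))
savemat('video_0001_features.mat', {'features': features})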
The code was implemented step by step with Python in Jupyter Notebook, and it can be executed on both CPUs and GPUs; free GPUs on Google Colaboratory were used as the hardware accelerator for most of the calculations.
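When running on Google Colaboratory, a quick check such as the one below (an illustrative snippet assuming TensorFlow 2.x, not code from the book) confirms whether the free GPU is visible:

# Check whether a GPU is available, e.g. the free GPU on Google Colaboratory.
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
print('GPU available:', len(gpus) > 0, gpus)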
To achieve a higher testing accuracy, the architecture of the network was adjusted, and the parameters of the network and its optimizer were fine-tuned.
For comparison purposes only, an SVM (Support Vector Machine) classifier was trained and tested.
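A comparable SVM baseline can be set up with scikit-learn, for example as in the sketch below; the linear kernel, the placeholder data, and representing each video by a single pooled feature vector are assumptions, not the book's exact setup:

# Sketch of the SVM baseline, assuming scikit-learn and that each video is
# represented by one vector (e.g., the mean of its per-frame VGG16 features).
import numpy as np
from sklearn.svm import SVC

# Placeholder data; in the book, these would come from the saved VGG16 features.
X_train = np.random.rand(100, 4096); y_train = np.random.randint(0, 15, 100)
X_test  = np.random.rand(20, 4096);  y_test  = np.random.randint(0, 15, 20)

clf = SVC(kernel='linear')           # the kernel choice here is an assumption
clf.fit(X_train, y_train)
print('SVM test accuracy:', clf.score(X_test, y_test))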
For the first 15 actions in the UCF101 dataset, the highest prediction accuracy on the testing data from the RNN is 86.97%, which is slightly higher than that from the SVM classifier (86.09%).
In conclusion, the performance of the RNN and the SVM classifier is approximately the same for the task in this book, which is somewhat disappointing. However, RNNs do have advantages in many other Computer Vision and Machine Learning applications, and the implementation in this book can serve as an introduction to the topic and a stepping stone to further study.