Before you begin the project, set up a virtual environment for it to run in. In this part, we move straight into the source code to keep the article concise and clear.
Prepare your dataset
First, import the TensorFlow-related libraries.
Then prepare the data (※1) we will use. There are 152 images in total, divided into five classes: anger, happiness, neutral, surprise, and sad. The data folder is structured as follows.
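The original folder listing is not reproduced here; a plausible layout, with one subfolder per class (the exact folder names are assumptions based on the five classes above), would be:

```
data/
├── anger/
├── happiness/
├── neutral/
├── sad/
└── surprise/
```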
Then we use OpenCV to read the files from the folder and build a data list and a label list. The image size is set to 28x28, so the input is small and model training is fast.
Then we split the data at a ratio of 8:2, and reshape and normalize it before passing it to a CNN.
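A sketch of the split and preprocessing using scikit-learn's `train_test_split` (the variable names are assumptions; random arrays stand in for the loaded dataset so the snippet runs on its own):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Random stand-ins for the arrays built in the loading step.
data = np.random.randint(0, 256, size=(152, 28, 28), dtype=np.uint8)
labels = np.random.randint(0, 5, size=(152,))

# 8:2 split between training and test sets.
x_train, x_test, y_train, y_test = train_test_split(
    data, labels, test_size=0.2, random_state=42)

# Add the channel dimension expected by Conv2D, and scale pixels to [0, 1].
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0
```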
Create and train your CNN model
Here we create a sample CNN model.
And compile the model.
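The two steps above might be sketched as follows. The exact architecture, layer sizes, and optimizer are assumptions; this is just a small CNN that fits the 28x28 grayscale input and the five classes. Since the labels are plain integers, sparse categorical cross-entropy is the matching loss.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # anger, happiness, neutral, surprise, sad

# A small CNN: two conv/pool stages, then a dense classifier head.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Integer labels, so sparse categorical cross-entropy.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```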
Train the model and check the training process.
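A sketch of the training call. The epoch count, batch size, and validation split are assumptions; random arrays and a trivial model stand in for the prepared data and the CNN so the snippet is self-contained.

```python
import numpy as np
from tensorflow.keras import layers, models

# Random stand-ins for the prepared training data.
x_train = np.random.rand(121, 28, 28, 1).astype("float32")
y_train = np.random.randint(0, 5, size=(121,))

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# fit() returns a History object; its .history dict records the loss
# and accuracy of every epoch, which is how we check the training process.
history = model.fit(x_train, y_train,
                    epochs=3, batch_size=16,
                    validation_split=0.2, verbose=0)
```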
Check the model information with this line.
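The line in question is presumably Keras's built-in summary (a trivial model stands in here so the snippet runs on its own):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(5, activation="softmax"),
])

# Prints each layer's name, output shape, and parameter count.
model.summary()
```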
Or visualize the training results with matplotlib.pyplot.
If you are not satisfied with the result, try adding more data or modifying the network architecture, then retrain and compare the results across models.
If you are satisfied with the result, save the model.
Predict with the trained model
For the test data we want to predict on, we repeat the dataset-preparation steps above, load the model we saved, and use it to predict on the prepared test data. Since data preparation is the same as before, we skip ahead to loading the model.
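Loading is the mirror image of saving (the filename matches the assumption made when saving; the snippet writes a small model first so it is self-contained, whereas in the article this file comes from the training step):

```python
from tensorflow.keras import layers, models

# Produce a saved model file so the snippet runs on its own.
models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(5, activation="softmax"),
]).save("emotion_model.h5")

# load_model() restores the architecture and weights in one call.
model = models.load_model("emotion_model.h5")
```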
Use the model to predict on the test data. For each predicted class index, map it back to the class name we defined.
Check the result.
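The prediction and index-to-name mapping might be sketched as follows (an untrained stand-in model and random test images are used so the snippet runs on its own; with an untrained model the printed classes are of course arbitrary):

```python
import numpy as np
from tensorflow.keras import layers, models

CLASS_NAMES = ["anger", "happiness", "neutral", "surprise", "sad"]

# Stand-in model and random test images.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(5, activation="softmax"),
])
x_test = np.random.rand(4, 28, 28, 1).astype("float32")

# predict() returns one probability per class; argmax picks the index,
# which we map back to the human-readable class name.
probabilities = model.predict(x_test, verbose=0)
predicted_indices = np.argmax(probabilities, axis=1)
predicted_classes = [CLASS_NAMES[i] for i in predicted_indices]
print(predicted_classes)
```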
Remember that your prediction results may differ from mine, since the model varies each time it is trained. Swap in your own data and your own classes, and try it yourself!
This is the last part and the end of this series. If you have any questions about this part, feel free to ask. Thanks for being with us on this journey.
※1 See data reference at https://zenodo.org/records/3451524