In this chapter, we implement a neural network model for the text classification task and apply it to the news classification dataset used in Chapter 6. You may want to use a deep learning framework such as PyTorch, TensorFlow, or Chainer.
70. Generating Features through Word Vector Summation
Let us consider converting the dataset from problem 50 into feature vectors. That is, we want to create a matrix $X$ (the sequence of feature vectors of all instances) and a vector $Y$ (the sequence of gold labels of all instances):

$$
X = \begin{pmatrix} \boldsymbol{x}_1 \\ \boldsymbol{x}_2 \\ \vdots \\ \boldsymbol{x}_n \end{pmatrix} \in \mathbb{R}^{n \times d},
\quad
Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} \in \mathbb{N}^{n}
$$

Here, $n$ represents the number of instances in the training data, and $\boldsymbol{x}_i \in \mathbb{R}^d$ and $y_i \in \mathbb{N}$ represent the $i$-th feature vector and target (gold) label, respectively. Note that the task is to classify a given headline into one of four categories: "Business", "Science", "Entertainment" and "Health". Let $\mathbb{N}_{<4}$ denote the set of natural numbers smaller than 4 (including zero); the gold label of a given instance can then be represented as $y_i \in \mathbb{N}_{<4}$. Let $L$ denote the number of labels (here, $L = 4$).
The feature vector $\boldsymbol{x}_i$ of the $i$-th instance is computed as follows:

$$
\boldsymbol{x}_i = \frac{1}{T_i} \sum_{t=1}^{T_i} \mathrm{emb}(w_{i,t})
$$

Here, the $i$-th instance consists of $T_i$ tokens $(w_{i,1}, w_{i,2}, \dots, w_{i,T_i})$, and $\mathrm{emb}(w) \in \mathbb{R}^d$ is the (size-$d$) word vector corresponding to the word $w$. In other words, the $i$-th article headline is represented as the average of the word vectors of all words in the headline. For the word embeddings, use pretrained word vectors of dimension 300 (i.e., $d = 300$).
The gold label $y_i$ of the $i$-th instance is defined as follows:

$$
y_i = \begin{cases}
0 & (\text{the } i\text{-th article belongs to the "Business" category}) \\
1 & (\text{the } i\text{-th article belongs to the "Science" category}) \\
2 & (\text{the } i\text{-th article belongs to the "Entertainment" category}) \\
3 & (\text{the } i\text{-th article belongs to the "Health" category})
\end{cases}
$$
Note that you do not have to strictly follow the definition above as long as there is a one-to-one mapping between the category names and the label indices.
Based on the specifications above, create the following matrices and vectors and save them into binary files:
- Training data feature matrix: $X_{\rm train} \in \mathbb{R}^{N_t \times d}$
- Training data label vector: $Y_{\rm train} \in \mathbb{N}^{N_t}$
- Validation data feature matrix: $X_{\rm valid} \in \mathbb{R}^{N_v \times d}$
- Validation data label vector: $Y_{\rm valid} \in \mathbb{N}^{N_v}$
- Test data feature matrix: $X_{\rm test} \in \mathbb{R}^{N_e \times d}$
- Test data label vector: $Y_{\rm test} \in \mathbb{N}^{N_e}$

Here, $N_t$, $N_v$ and $N_e$ represent the numbers of instances in the training data, validation data and test data, respectively.
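The featurization above can be sketched in NumPy. The toy embedding table, the label-index mapping, and the file names below are placeholders of my own; for the real exercise you would load the 300-dimensional pretrained vectors (e.g. with gensim's `KeyedVectors.load_word2vec_format`) and read the headlines from the problem-50 TSV files:

```python
import numpy as np

# Hypothetical toy embedding table standing in for the pretrained
# 300-dimensional word vectors (d = 4 here only to keep the example small).
d = 4
emb = {
    "stocks": np.array([1.0, 0.0, 0.0, 0.0]),
    "rise":   np.array([0.0, 1.0, 0.0, 0.0]),
    "again":  np.array([0.0, 0.0, 1.0, 0.0]),
}
# Category code -> label index (b: Business, t: Science, e: Entertainment, m: Health).
label_index = {"b": 0, "t": 1, "e": 2, "m": 3}

def featurize(headline: str) -> np.ndarray:
    """Average the word vectors of all in-vocabulary tokens."""
    vecs = [emb[w] for w in headline.split() if w in emb]
    if not vecs:                       # no known token: fall back to zeros
        return np.zeros(d)
    return np.mean(vecs, axis=0)

# One (headline, category) pair per instance; real data comes from problem 50.
data = [("stocks rise again", "b"), ("rise again", "e")]
X = np.vstack([featurize(h) for h, _ in data])    # feature matrix
Y = np.array([label_index[c] for _, c in data])   # label vector
np.save("train_feature.npy", X)   # file names are placeholders
np.save("train_label.npy", Y)
```

Repeating the same loop for the validation and test splits yields the six files listed above.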
71. Building a Single-Layer Neural Network
Load the matrices and vectors from problem 70 and compute the following operations on the training data:

$$
\hat{\boldsymbol{y}}_1 = \mathrm{softmax}(\boldsymbol{x}_1 W), \quad
\hat{Y} = \mathrm{softmax}(X_{[1:4]} W)
$$

Here, $\mathrm{softmax}$ refers to the softmax function, and $X_{[1:4]} \in \mathbb{R}^{4 \times d}$ is a vertical concatenation of $\boldsymbol{x}_1, \boldsymbol{x}_2, \boldsymbol{x}_3, \boldsymbol{x}_4$:

$$
X_{[1:4]} = \begin{pmatrix} \boldsymbol{x}_1 \\ \boldsymbol{x}_2 \\ \boldsymbol{x}_3 \\ \boldsymbol{x}_4 \end{pmatrix}
$$

The matrix $W \in \mathbb{R}^{d \times L}$ holds the weights of the single-layer neural network. You may initialize the weights randomly for now (we will learn the parameters in later problems). Note that $\hat{\boldsymbol{y}}_1 \in \mathbb{R}^L$ represents a probability distribution over the categories. Similarly, $\hat{Y} \in \mathbb{R}^{4 \times L}$ stacks the probability distributions of the first four instances in the training data.
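A minimal NumPy sketch of these operations, with random stand-in features in place of the matrices from problem 70 (the max-shift inside the softmax is a standard trick for numerical stability):

```python
import numpy as np

rng = np.random.default_rng(0)
d, L = 300, 4
W = rng.normal(scale=0.01, size=(d, L))   # randomly initialized weight matrix

def softmax(z: np.ndarray) -> np.ndarray:
    """Row-wise softmax, shifted by the row max for numerical stability."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

X = rng.normal(size=(4, d))       # stand-in for x_1 .. x_4 from problem 70
y_hat1 = softmax(X[0] @ W)        # probability distribution for the first instance
Y_hat = softmax(X @ W)            # distributions for the first four instances
```

Each row of `Y_hat` sums to one, as a probability distribution over the four categories must.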
72. Calculating loss and gradients
Calculate the cross-entropy loss and the gradients with respect to the matrix $W$ on a single training sample $\boldsymbol{x}_1, y_1$ and on a set of samples $\boldsymbol{x}_1, \boldsymbol{x}_2, \boldsymbol{x}_3, \boldsymbol{x}_4$ (with gold labels $y_1, y_2, y_3, y_4$). The loss on a single sample is calculated using the following formula:

$$
l_i = -\log\left[\text{probability that the instance } \boldsymbol{x}_i \text{ is classified into the gold label } y_i\right]
$$
The cross-entropy loss for a set of samples is the average of the losses of each sample included in the set.
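A sketch of the loss and gradient in NumPy. It relies on the standard identity that, for softmax followed by cross-entropy, the gradient of the mean loss with respect to the scores $XW$ is $(P - \mathrm{onehot}(Y))/B$, hence $\partial l / \partial W = X^\top (P - \mathrm{onehot}(Y)) / B$:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def loss_and_grad(X, Y, W):
    """Mean cross-entropy loss over a batch and its gradient w.r.t. W.

    X: (B, d) features, Y: (B,) integer gold labels, W: (d, L) weights.
    """
    B = X.shape[0]
    P = softmax(X @ W)                         # (B, L) predicted distributions
    loss = -np.log(P[np.arange(B), Y]).mean()  # average negative log-likelihood
    P[np.arange(B), Y] -= 1.0                  # P - onehot(Y), in place (loss already taken)
    grad = X.T @ P / B
    return loss, grad
```

With $W = 0$ every prediction is uniform, so the loss on any sample is $\log L = \log 4$, which makes a handy sanity check.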
73. Learning with stochastic gradient descent
Update the matrix $W$ using stochastic gradient descent (SGD). The training should be terminated with an appropriate criterion, for example, "stop after 100 epochs".
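A self-contained NumPy sketch of plain SGD with the "stop after 100 epochs" criterion, one weight update per sample (learning rate and initialization scale are illustrative choices):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def train_sgd(X, Y, L, lr=0.1, epochs=100, seed=0):
    """Plain SGD: one update per training sample, fixed epoch budget."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], L))
    for _ in range(epochs):
        for i in rng.permutation(len(X)):   # visit samples in random order
            p = softmax(X[i] @ W)           # (L,) predicted distribution
            p[Y[i]] -= 1.0                  # gradient of the loss w.r.t. the scores
            W -= lr * np.outer(X[i], p)     # dl/dW = x_i (p - onehot(y_i))^T
    return W
```

On a small linearly separable toy set this drives the training accuracy to 1.0 well within the epoch budget.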
74. Measuring accuracy
Find the classification accuracy on both the training data and the evaluation data using the matrix $W$ obtained in problem 73.
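Because the softmax is monotone in the scores, the predicted category is simply the argmax of $XW$, so accuracy needs no probability computation at all. A one-function sketch:

```python
import numpy as np

def accuracy(X, Y, W):
    """Fraction of instances whose highest-scoring category matches the gold label."""
    return float((np.argmax(X @ W, axis=1) == Y).mean())
```

Call it once with the training split and once with the evaluation split to compare the two numbers.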
75. Plotting loss and accuracy
Modify the code from problem 73 so that the loss and accuracy on both the training data and the evaluation data are plotted on a graph after each epoch. Use this graph to monitor the progress of learning.
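The plotting step can be sketched with matplotlib, assuming a `history` dict collected during training with one value per epoch (the key names and file name are my own convention):

```python
import matplotlib
matplotlib.use("Agg")            # file-only backend; no display required
import matplotlib.pyplot as plt

def plot_history(history, path="learning_curve.png"):
    """Plot per-epoch loss and accuracy curves side by side."""
    fig, (ax_l, ax_a) = plt.subplots(1, 2, figsize=(10, 4))
    for key in ("train_loss", "valid_loss"):
        ax_l.plot(history[key], label=key)
    for key in ("train_acc", "valid_acc"):
        ax_a.plot(history[key], label=key)
    ax_l.set_xlabel("epoch"); ax_l.set_ylabel("loss"); ax_l.legend()
    ax_a.set_xlabel("epoch"); ax_a.set_ylabel("accuracy"); ax_a.legend()
    fig.savefig(path)
    plt.close(fig)
```

A widening gap between the training and validation curves is the usual sign of overfitting.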
76. Saving Checkpoints
Modify the code from problem 75 to write out checkpoints to a file after each epoch. Checkpoints should include the values of the parameters, such as the weight matrices, and the internal state of the optimization algorithm.
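For a NumPy implementation, a pickle round-trip is the simplest checkpoint format; with PyTorch, `torch.save`/`torch.load` on a dict of `state_dict`s plays the same role. A minimal sketch (the dict keys are my own):

```python
import pickle

def save_checkpoint(path, epoch, W, optimizer_state):
    """Write parameters and optimizer internals to `path` after an epoch."""
    with open(path, "wb") as f:
        pickle.dump({"epoch": epoch, "W": W, "optimizer": optimizer_state}, f)

def load_checkpoint(path):
    """Restore a checkpoint written by save_checkpoint."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

Writing one file per epoch (e.g. `checkpoint_{epoch}.pkl`) lets you resume training or roll back to the best epoch later.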
77. Training with Mini-batches
Modify the code from problem 76 to calculate the loss/gradient and update the matrix $W$ for every $B$ samples (a mini-batch). Compare the time required for one learning epoch while varying the mini-batch size $B$ (for example, $B = 1, 2, 4, 8, \dots$).
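The timing comparison can be sketched as below, with random stand-in data; larger batches mean fewer (but larger) matrix multiplications per epoch, which is usually faster on vectorized hardware:

```python
import time
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def run_epoch(X, Y, W, lr=0.1, batch_size=1):
    """One pass over the data, updating W once per mini-batch of B samples."""
    for start in range(0, len(X), batch_size):
        xb = X[start:start + batch_size]
        yb = Y[start:start + batch_size]
        P = softmax(xb @ W)
        P[np.arange(len(yb)), yb] -= 1.0        # P - onehot(yb)
        W -= lr * xb.T @ P / len(yb)

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 300))                 # stand-in training features
Y = rng.integers(0, 4, size=512)
timings = {}
for B in (1, 2, 4, 8):
    W = np.zeros((300, 4))
    t0 = time.perf_counter()
    run_epoch(X, Y, W, batch_size=B)
    timings[B] = time.perf_counter() - t0
print(timings)
```

The printed wall-clock times depend on the machine; the point is the trend as $B$ grows.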
78. Training on a GPU
Modify the code from problem 77 so that it runs on a GPU.
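With PyTorch (one of the frameworks suggested at the start of the chapter), moving training to a GPU amounts to placing the tensors and parameters on the same device. A sketch with random stand-in data; the device fallback lets the identical script run on machines without a GPU:

```python
import torch

# Use the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in data; in practice load the problem-70 tensors and move them
# with .to(device).
X = torch.randn(64, 300, device=device)
Y = torch.randint(0, 4, (64,), device=device)
W = torch.zeros(300, 4, device=device, requires_grad=True)

loss = torch.nn.functional.cross_entropy(X @ W, Y)
loss.backward()                    # W.grad is computed on `device` as well
with torch.no_grad():
    W -= 0.1 * W.grad              # one SGD step, entirely on `device`
```

All tensors participating in one operation must live on the same device, so the only change from the CPU version is threading `device` through every tensor constructor.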
79. Multilayer Neural Networks
Modify the code from problem 78 to create a high-performing classifier by changing the architecture of the neural network. Try introducing bias terms and multiple layers.
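One possible architecture, sketched in PyTorch: a single hidden layer with ReLU and dropout. The layer sizes, dropout rate, and learning rate are illustrative choices, not a tuned configuration; `nn.Linear` includes a bias term by default:

```python
import torch
from torch import nn

# One hidden layer; sizes are illustrative (input 300 matches d = 300).
model = nn.Sequential(
    nn.Linear(300, 128),   # bias=True by default
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(128, 4),     # one output score per category
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()   # applies softmax + log-loss internally

X = torch.randn(32, 300)            # stand-in batch
Y = torch.randint(0, 4, (32,))
for _ in range(5):                  # a few illustrative updates
    optimizer.zero_grad()
    loss = criterion(model(X), Y)
    loss.backward()
    optimizer.step()
```

Deeper or wider variants are a matter of swapping layers in the `nn.Sequential`; compare them by the validation accuracy from problem 75.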