Reference no: EM133142433
Machine Learning Assignment -
1. Least Squares and Double Descent Phenomenon (17P)
The goal of this assignment is to learn about linear least-squares regression and the double descent phenomenon, shown in Figure 1. In the classical learning regime, the U-shaped risk curve indicates a bad test error even while the training error is very low, i.e. the model does not generalize well to new data. A highly over-parameterized model with large capacity, however, allows the test error to go down again in a second descent ("double descent"), which can sometimes be observed in over-parameterized deep learning settings.
Tasks -
1. Rewrite eq. (2) in pure matrix/vector notation, such that there are no sums left in the final expression. Use Φ = φ(x) for the feature transform, which can be computed prior to the optimization. Additionally, state the matrix/vector dimensions of all occurring variables.
2. Analytically derive the optimal parameters w* from eq. (2); a sketch of the expected shape of this derivation follows the task list.
3. Give an analytic expression to compute predictions ŷ given w*.
This can also be interpreted as a small feed-forward neural network with one hidden layer for an input x ∈ ℝ^d and output ŷ ∈ ℝ. Draw a simple schematic of this neural network and include exemplary labels of its neurons and connections.
4. Create a training dataset comprising input data x = {x1, ..., xN} and corresponding targets y = {y1, ..., yN} with N = 200, d = 5 and σ = 2 according to eq. (1).
In the same manner, create a test dataset with N_t = 50 for both test input data and test targets.
5. Generate M = 50 d-dimensional random feature vectors v = {v1, ..., vM} on the unit sphere.
6. Implement the computation of w* from the training data using a QR decomposition; a minimal sketch follows this task list. Further, compute the mean squared error defined in eq. (4) for both the training and test data based on the optimal parameters w*.
7. Use λ = 1 × 10⁻⁸ to reproduce the double descent behaviour. Run this experiment for a number of feature vectors M = {10k + 1 | k ∈ {0, 1, 2, ..., 60}} and save the training and test loss in each run. For each M, repeat the experiment r = 5 times to obtain averaged scores.
8. Plot both the averaged (over the r = 5 runs) train and test errors as a function of the number of feature vectors M in the same plot. Include the standard deviation of each setting in addition to the averaged loss. Give an interpretation of your results.
9. Repeat the same experiment for λ ∈ {1 × 10⁻⁵, 1 × 10⁻³} and explain the influence of λ. Include the resulting curves containing train and test error for each λ in two additional subplots.
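For orientation on items 1-3, here is a hedged derivation sketch assuming eq. (2) is the standard ridge-regularized least-squares objective; the actual equations are in the attached file, so verify the objective before relying on this:

    \[
      L(w) = \|\Phi w - y\|_2^2 + \lambda \|w\|_2^2,
      \qquad \Phi \in \mathbb{R}^{N \times M},\ w \in \mathbb{R}^{M},\ y \in \mathbb{R}^{N}
    \]
    \[
      \nabla_w L = 2\Phi^\top(\Phi w - y) + 2\lambda w = 0
      \;\Longrightarrow\;
      w^* = (\Phi^\top \Phi + \lambda I)^{-1} \Phi^\top y,
      \qquad
      \hat{y} = \varphi(x)^\top w^*
    \]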
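The Python sketch below covers items 4-9 under loud placeholders: the data model in make_data stands in for eq. (1), the ReLU feature map in transform is only an assumed φ, and eq. (4) is taken to be the plain MSE. Swap in the definitions from the attached assignment file.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(N, d, sigma):
        # Placeholder for eq. (1): noisy targets from a random linear model.
        X = rng.standard_normal((N, d))
        w_true = rng.standard_normal(d)
        y = X @ w_true + sigma * rng.standard_normal(N)
        return X, y

    def unit_sphere_features(M, d):
        # M random d-dimensional direction vectors v_m on the unit sphere.
        V = rng.standard_normal((M, d))
        return V / np.linalg.norm(V, axis=1, keepdims=True)

    def transform(X, V):
        # Hypothetical feature transform phi(x) = max(0, Vx) (random ReLU
        # features); replace with the phi defined in the assignment.
        return np.maximum(0.0, X @ V.T)                # (N, M)

    def fit_primal(Phi, y, lam):
        # w* = (Phi^T Phi + lam I)^{-1} Phi^T y, computed via a QR
        # decomposition of the stacked system [Phi; sqrt(lam) I] w = [y; 0],
        # which is numerically gentler than forming the normal equations.
        M = Phi.shape[1]
        A = np.vstack([Phi, np.sqrt(lam) * np.eye(M)])
        b = np.concatenate([y, np.zeros(M)])
        Q, R = np.linalg.qr(A)                         # A = QR, R triangular
        return np.linalg.solve(R, Q.T @ b)

    def mse(Phi, y, w):
        # Mean squared error, assumed to be what eq. (4) denotes.
        return np.mean((Phi @ w - y) ** 2)

    N, Nt, d, sigma, lam, runs = 200, 50, 5, 2.0, 1e-8, 5
    X, y = make_data(N, d, sigma)
    Xt, yt = make_data(Nt, d, sigma)

    for M in [10 * k + 1 for k in range(61)]:          # M = 1, 11, ..., 601
        errs = []
        for _ in range(runs):                          # r = 5 repetitions
            V = unit_sphere_features(M, d)
            w = fit_primal(transform(X, V), y, lam)
            errs.append((mse(transform(X, V), y, w),
                         mse(transform(Xt, V), yt, w)))
        print(M, np.mean(errs, axis=0), np.std(errs, axis=0))

Feeding the printed means and standard deviations into an errorbar plot over M should reproduce the double descent shape, with the test-error peak near the interpolation threshold M ≈ N.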
2. Dual Representation (8P)
The linear least squares problem from Task 1 can be reformulated in its dual representation, where an equivalent solution can be obtained.
Tasks -
1. Analytically compute the optimal parameters a* from eq. (5). State the dimensions of the matrix that has to be inverted in the process and compare them to those required in Task 1. When is it favourable to use the primal solution, and when the dual?
2. Give an analytic expression to compute predictions ŷ given a* using eq. (7), such that you only rely on K and do not need to compute the features Φ explicitly.
3. For the train data x, compute the kernel matrix as given in eq. (6). Repeat the same process for the test data, ensuring that the resulting kernel matrices are of dimensionality ℝ^(N×N) and ℝ^(N_t×N), respectively.
4. Implement the computation of a* and report the mean squared error on the train and test data, using λ = 1 × 10⁻⁸; a minimal sketch follows this task list.
5. Use exactly the same datasets as in Task 1. For the train data x, compare the kernel K and ΦΦᵀ. For different numbers of features M = {10, 200, 800}, evaluate both terms and plot row n = 10 of both resulting N × N matrices in one plot. Describe the influence of M. For each M, compute the mean absolute error between the two 1D arrays, i.e. MAE(Kn, (ΦΦᵀ)n) = (1/N) Σᵢ₌₁ᴺ |(Kn)ᵢ − ((ΦΦᵀ)n)ᵢ|.
Compare the train and test errors obtained with the primal solution to those of the dual solution for each setting of M.
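A matching sketch for the dual route, under the assumptions that eq. (6) defines the kernel via feature inner products (K = ΦΦᵀ), that eq. (5) yields a* = (K + λI)⁻¹y, and that eq. (7) predicts from kernel values alone; the data and feature map below are only stand-ins, so replace them with the actual Task 1 objects.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in data and features; in practice reuse the Task 1 datasets and phi.
    N, Nt, d, M, lam = 200, 50, 5, 50, 1e-8
    X, Xt = rng.standard_normal((N, d)), rng.standard_normal((Nt, d))
    y, yt = rng.standard_normal(N), rng.standard_normal(Nt)
    V = rng.standard_normal((M, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    phi = lambda Z: np.maximum(0.0, Z @ V.T)       # assumed feature map, as before

    # Assumed eq. (6): Gram matrix of the features, K = Phi Phi^T.
    K = phi(X) @ phi(X).T                          # (N, N), train vs. train
    K_t = phi(Xt) @ phi(X).T                       # (N_t, N), test vs. train

    # Assumed solution of eq. (5): a* = (K + lam I)^{-1} y. The matrix inverted
    # here is N x N, whereas the primal inverts M x M -- so the dual pays off
    # when M >> N and the primal when N >> M.
    a = np.linalg.solve(K + lam * np.eye(N), y)

    # Assumed eq. (7): predictions from kernel values only, no explicit
    # features needed at prediction time.
    print("train MSE:", np.mean((K @ a - y) ** 2))
    print("test  MSE:", np.mean((K_t @ a - yt) ** 2))

Note that if eq. (6) instead defines an analytic kernel k(x, x′) that the random features only approximate, the row-wise MAE of item 5 should shrink as M grows; with K literally equal to ΦΦᵀ, as assumed here, it is zero by construction.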
Attachment:- Machine Learning Assignment File.rar