COMP4528: Computer Vision
Semester 1, 2023
Tutorial 6
Question 1 Matrix Algebra
1. Let A = \begin{bmatrix} 2 & 6 \\ 2 & 0 \end{bmatrix} and B = \begin{bmatrix} 0 & 1 \\ 2 & 8 \end{bmatrix}. Compute AB.
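A quick numerical check (a minimal NumPy sketch; the matrix entries follow the reconstruction above, which was recovered from a garbled source and should be verified against the original sheet):

```python
import numpy as np

# Matrices as reconstructed above (entry order is an assumption).
A = np.array([[2, 6],
              [2, 0]])
B = np.array([[0, 1],
              [2, 8]])

print(A @ B)  # matrix product AB
```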
2. Let x = \begin{bmatrix} 5 \\ 1 \end{bmatrix} and y = \begin{bmatrix} 0 \\ 8 \end{bmatrix}. Compute ∥x − y∥2.
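The Euclidean norm can be checked the same way (a sketch, assuming the vectors reconstructed above; note the answer is unchanged if the two entries of each vector are swapped):

```python
import numpy as np

x = np.array([5, 1])
y = np.array([0, 8])

# ||x - y||_2: square root of the sum of squared differences
print(np.linalg.norm(x - y))  # sqrt(5^2 + (-7)^2) = sqrt(74)
```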
3. Let w = \begin{bmatrix} w1 \\ w2 \end{bmatrix}, x = \begin{bmatrix} 8 \\ 6 \end{bmatrix}, and L = ½ (w⊤x − 4)². Compute ∂L/∂w1.
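By the chain rule, ∂L/∂w1 = (w⊤x − 4) · x1. A small finite-difference check confirms this (a sketch assuming the reconstructed x = [8, 6]⊤; the test point for w is hypothetical):

```python
import numpy as np

x = np.array([8.0, 6.0])
w = np.array([0.5, 1.0])  # hypothetical test point

L = lambda w: 0.5 * (w @ x - 4) ** 2

# Analytic gradient from the chain rule: (w^T x - 4) * x_1
analytic = (w @ x - 4) * x[0]

# Finite-difference approximation of dL/dw_1
eps = 1e-6
numeric = (L(w + np.array([eps, 0])) - L(w - np.array([eps, 0]))) / (2 * eps)

print(analytic, numeric)  # should agree closely
```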
Question 2 Back Propagation
1. Back-propagation through the computational graph. The current values are w0 = 0.2, w1 = 0.2, w2 = 0.3, x0 = 2, x1 = 3. p and q are intermediate variables computed at the indicated points in the computational graph, and L is the output of the graph. Provide the gradient ∂L/∂w0 based on the back-propagated gradient calculation.
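The graph itself is not reproduced in this text, so the sketch below assumes, purely for illustration, the common worked example L = σ(w0·x0 + w1·x1 + w2), with p = w0·x0 + w1·x1 + w2 and q = σ(p); substitute the actual graph from the tutorial figure:

```python
import numpy as np

w0, w1, w2 = 0.2, 0.2, 0.3
x0, x1 = 2.0, 3.0

# Forward pass (hypothetical graph; replace with the tutorial's figure)
p = w0 * x0 + w1 * x1 + w2    # intermediate p
q = 1.0 / (1.0 + np.exp(-p))  # intermediate q = sigmoid(p)
L = q                         # output of the graph

# Backward pass: multiply local gradients along the path to w0
dL_dq = 1.0
dq_dp = q * (1.0 - q)         # derivative of the sigmoid
dp_dw0 = x0
dL_dw0 = dL_dq * dq_dp * dp_dw0

print(dL_dw0)
```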
2. Given the linear regression model ŷ = w⊤x + b and the loss function L(y, ŷ) = ½ (y − ŷ)². The initial model weights are w = \begin{bmatrix} -6 \\ 4 \end{bmatrix} and b = −8. What are the new model weights after performing one gradient descent step with learning rate 0.01 on the training data x = \begin{bmatrix} 8 \\ 1 \end{bmatrix}, y = 1?
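For this loss, ∂L/∂w = (ŷ − y)·x and ∂L/∂b = ŷ − y, so one update is w ← w − η(ŷ − y)x. A sketch using the values as reconstructed above (entry order recovered from a garbled source):

```python
import numpy as np

w = np.array([-6.0, 4.0])
b = -8.0
x = np.array([8.0, 1.0])  # entry order is an assumption
y = 1.0
lr = 0.01

y_hat = w @ x + b          # model prediction
grad_w = (y_hat - y) * x   # dL/dw for L = 0.5*(y - y_hat)^2
grad_b = (y_hat - y)       # dL/db

w -= lr * grad_w
b -= lr * grad_b
print(w, b)
```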
3. Given a logistic regression model Softmax(Wx + b) where W = \begin{bmatrix} 2 & 0 & 4 \\ 2 & 2 & 6 \end{bmatrix} and b = \begin{bmatrix} -1 \\ 1 \end{bmatrix}. The Softmax function is defined as Softmax(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}. For the training data, you have x = \begin{bmatrix} 4 \\ -2 \\ -5 \end{bmatrix} and the ground-truth label y = 0 (i.e., the one-hot target \begin{bmatrix} 1 \\ 0 \end{bmatrix}). Calculate the gradient of the cross-entropy loss L = -\sum_{c=1}^{M} y_{o,c} \log(p_{o,c}) with respect to the bias vector b.
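A useful identity here is that for Softmax followed by cross-entropy, ∂L/∂b = p − y, where p is the predicted probability vector and y the one-hot target. A sketch under the reconstruction above (the entries of W, b, and x are assumptions recovered from the garbled source):

```python
import numpy as np

W = np.array([[2.0, 0.0, 4.0],
              [2.0, 2.0, 6.0]])  # reconstructed entries
b = np.array([-1.0, 1.0])
x = np.array([4.0, -2.0, -5.0])
y = np.array([1.0, 0.0])         # one-hot target for class 0

z = W @ x + b            # logits
p = np.exp(z - z.max())  # numerically stabilized softmax
p /= p.sum()

grad_b = p - y           # dL/db for softmax + cross-entropy
print(grad_b)
```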