MATH 3333 3.0 - Winter 2022 Assignment 3
Question 1: Given the following data, please apply the Fisher linear discriminant method. There are two classes, C1 and C2 . The class C1 has five observations:
    x1   x2
     2    3
     3    7
     4    8
     5   12
     6   10
The class C2 has six observations:
    x1   x2
     2    1
     3    2
     4    2
     5    3
     6    4
     7    6
a) Compute the mean of the first class µ1, and the mean of the second class µ2.
b) Compute the within-class variation Sw = S1 + S2, where S1 and S2 are the variations within C1 and C2, respectively.
d) Find the optimal projection v which leads to the maximum separation of the projected observations.
e) Find the cutoff point (1/2)(v^T µ1 + v^T µ2).
g) Given a new observation (5, 3), which class does it belong to?
x1 <- rbind(c(2,3), c(3,7), c(4,8), c(5,12), c(6,10))       # class 1 observations
x2 <- rbind(c(2,1), c(3,2), c(4,2), c(5,3), c(6,4), c(7,6)) # class 2 observations
s1 <- 4*cov(x1)                   # S1: (n1 - 1)*cov gives the sum of squares
s2 <- 5*cov(x2)                   # S2: (n2 - 1)*cov
sw <- s1 + s2                     # within-class variation
mu1 <- apply(x1, 2, mean)         # class means
mu2 <- apply(x2, 2, mean)
v <- solve(sw) %*% (mu1 - mu2)    # optimal projection direction
cut <- 1/2 * t(v) %*% (mu1 + mu2) # cutoff point
t(v) %*% c(5, 3)                  # projection of the new observation
projsample1 <- x1 %*% v           # projected class 1 samples
projsample2 <- x2 %*% v           # projected class 2 samples
> projsample1
[,1]
[1,] 0.1104167
[2,] 0.9729167
[3,] 0.8666667
[4,] 1.7291667
[5,] 0.6541667
>
> projsample2
[,1]
[1,] -0.5354167
[2,] -0.6416667
[3,] -1.0708333
[4,] -1.1770833
[5,] -1.2833333
[6,] -1.0666667
>
> cut<-1/2*t(v)%*%(mu1+mu2)
> cut
[,1]
[1,] -0.04791667
>
> t(v)%*%mu1
[,1]
[1,] 0.8666667
>
>
> t(v)%*%mu2
[,1]
[1,] -0.9625
>
>
> t(v)%*%c(5,3)
[,1]
[1,] -1.177083
a) Compute the mean of the first class µ1, and the mean of the second class µ2.
> mu1
[1] 4 8
> mu2
[1] 4.5 3.0
b) Compute the within-class variation Sw = S1 + S2, where S1 and S2 are the variations within C1 and C2, respectively.
> sw
     [,1] [,2]
[1,] 27.5   35
[2,] 35.0   62
d) Find the optimal projection v which leads to the maximum separation of the projected observations.
> v
[,1]
[1,] -0.4291667
[2,] 0.3229167
e) Find the cutoff point (1/2)(v^T µ1 + v^T µ2).
> cut <- 1/2 * t(v) %*% (mu1 + mu2)
> cut
            [,1]
[1,] -0.04791667
g) Given a new observation (5, 3), which class does it belong to?
> t(v)%*%mu1
[,1]
[1,] 0.8666667
>
>
> t(v)%*%mu2
[,1]
[1,] -0.9625
>
>
>
> t(v)%*%c(5,3)
[,1]
[1,] -1.177083
Since the projection of the sample is closer to the projection of the class 2 mean, the new observation belongs to class 2.
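The rule can be checked end to end by assigning the new observation to the class whose projected mean is closer (a small R sketch reusing the data above; the variable names `proj_new`, `m1`, `m2` and `assigned` are introduced here for illustration):

```r
x1 <- rbind(c(2,3), c(3,7), c(4,8), c(5,12), c(6,10))        # class 1
x2 <- rbind(c(2,1), c(3,2), c(4,2), c(5,3), c(6,4), c(7,6))  # class 2
sw <- 4*cov(x1) + 5*cov(x2)           # within-class variation S1 + S2
mu1 <- colMeans(x1); mu2 <- colMeans(x2)
v <- solve(sw) %*% (mu1 - mu2)        # Fisher direction

proj_new <- drop(t(v) %*% c(5, 3))    # projection of the new observation
m1 <- drop(t(v) %*% mu1)              # projected class 1 mean
m2 <- drop(t(v) %*% mu2)              # projected class 2 mean
assigned <- if (abs(proj_new - m1) < abs(proj_new - m2)) 1 else 2
assigned                              # 2: (5, 3) is assigned to class 2
```

Equivalently, one can compare the projection -1.177 to the cutoff -0.048: it falls on the class 2 side.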
Question 2: In the forensic glass example, we classify the type of the glass shard into six categories based on three predictors. The categories are: WinF, WinNF, Veh, Con, Tabl, and Head. The three predictors are the mineral concentrations of Na, Mg, and Al. Attached is the R output of the multinomial logistic regression. The R function vglm considers the last group as the baseline category. The estimates of the five intercepts and the estimates of the 15 slopes are provided in the output. The model contains 20 parameters, which are estimated on 214 cases.
a) Let p_ij denote the probability that the ith observation belongs to class j. Formulate the logistic model for the five log odds log(p_i1/p_i6), log(p_i2/p_i6), log(p_i3/p_i6), log(p_i4/p_i6), and log(p_i5/p_i6):
log(p_i1/p_i6) = 1.61 - 2.48 x1 + 3.84 x2 - 3.71 x3,
log(p_i2/p_i6) = 3.44 - 2.03 x1 + 1.69 x2 - 1.70 x3,
log(p_i3/p_i6) = 0.99 - 1.40 x1 + 3.29 x2 - 3.00 x3,
log(p_i4/p_i6) = 0.067 - 2.38 x1 + 0.05 x2 - 0.26 x3,
log(p_i5/p_i6) = 0.339 - 0.15 x1 + 0.69 x2 - 1.39 x3.
b) The ith piece of glass shard is obtained and the Na, Mg, Al concentrations are: 0.20, 0.06, and 0.11, respectively. Calculate the probabilities p_i1, p_i2, p_i3, p_i4, p_i5, and p_i6. Based on the predicted class probability, which type of glass does this piece belong to?
x1<-0.20
x2<-0.06
x3<-0.11
eta1<-1.61-2.48*x1+3.84*x2-3.71*x3
eta2<-3.44-2.03*x1+1.69*x2-1.70*x3
eta3<-0.99-1.40*x1+3.29*x2-3.00*x3
eta4<-0.067-2.38*x1+0.05*x2-0.26*x3
eta5<-0.339-0.15*x1+0.69*x2-1.39*x3
exp(eta1)/(1+exp(eta1)+exp(eta2)+exp(eta3)+exp(eta4)+exp(eta5))
exp(eta2)/(1+exp(eta1)+exp(eta2)+exp(eta3)+exp(eta4)+exp(eta5))
exp(eta3)/(1+exp(eta1)+exp(eta2)+exp(eta3)+exp(eta4)+exp(eta5))
exp(eta4)/(1+exp(eta1)+exp(eta2)+exp(eta3)+exp(eta4)+exp(eta5))
exp(eta5)/(1+exp(eta1)+exp(eta2)+exp(eta3)+exp(eta4)+exp(eta5))
1/(1+exp(eta1)+exp(eta2)+exp(eta3)+exp(eta4)+exp(eta5))
#probability for the first class:
> exp(eta1)/(1+exp(eta1)+exp(eta2)+exp(eta3)+exp(eta4)+exp(eta5))
[1] 0.09707709
#probability for the second class:
> exp(eta2)/(1+exp(eta1)+exp(eta2)+exp(eta3)+exp(eta4)+exp(eta5))
[1] 0.7260402
#probability for the third class:
> exp(eta3)/(1+exp(eta1)+exp(eta2)+exp(eta3)+exp(eta4)+exp(eta5))
[1] 0.06780293
#probability for the fourth class:
> exp(eta4)/(1+exp(eta1)+exp(eta2)+exp(eta3)+exp(eta4)+exp(eta5))
[1] 0.02464577
#probability for the fifth class:
> exp(eta5)/(1+exp(eta1)+exp(eta2)+exp(eta3)+exp(eta4)+exp(eta5))
[1] 0.04637244
#probability for the sixth class:
> 1/(1+exp(eta1)+exp(eta2)+exp(eta3)+exp(eta4)+exp(eta5))
[1] 0.03806158
The probabilities for each class are given above. The second class has the highest probability, so the observation is predicted to belong to the second class.
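The same six probabilities can be obtained in one step by stacking the coefficient estimates, setting the baseline linear predictor to 0, and applying the softmax; this is just a compact restatement of the calculations above (the matrix name `B` is introduced here):

```r
x <- c(0.20, 0.06, 0.11)                 # Na, Mg, Al concentrations
B <- rbind(c(1.610, -2.48, 3.84, -3.71), # intercept and slopes, classes 1-5
           c(3.440, -2.03, 1.69, -1.70), # (class 6 is the baseline category)
           c(0.990, -1.40, 3.29, -3.00),
           c(0.067, -2.38, 0.05, -0.26),
           c(0.339, -0.15, 0.69, -1.39))
eta <- c(B %*% c(1, x), 0)               # linear predictors; baseline eta = 0
prob <- exp(eta) / sum(exp(eta))         # softmax gives the six probabilities
which.max(prob)                          # 2: class 2 has the highest probability
```

The probabilities sum to 1 by construction, which is a useful sanity check on the six separate ratios computed above.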
Question 3:
a. In this question, we consider the discriminant analysis method for multivariate normal data. Given K classes C1, C2, ..., CK, we assign prior probabilities P(Cj), j = 1, ..., K. Given that X belongs to class Cj, the conditional distribution of X is multivariate normal with mean µj and covariance matrix Σj. Then, by the Bayes formula,
P(Cj | X) = P(Cj) P(X | Cj) / Σ_k P(Ck) P(X | Ck).
Then we can use P(Cj | X) as the discriminant function: we assign X to class j if P(Cj | X) > P(Cj' | X) for all other classes j'. As the denominator is a constant that does not depend on j, we can use P(Cj) P(X | Cj) as the discriminant function, or equivalently log P(X | Cj) + log P(Cj). The discriminant function is denoted by gj(X).
gj(X) = log P(X | Cj) + log P(Cj)
      = -(1/2)(X - µj)^T Σj^(-1) (X - µj) - (1/2) log|Σj| + log P(Cj),   (1)
up to an additive constant -(p/2) log(2π) that is the same for every class.
Consider the case that Σj = σ²I. In this case, all the predictors are independent with different means and equal variance σ². Please simplify gj(X) and show that it is a linear function of X.
With Σj = σ²I we have Σj^(-1) = I/σ², and log|Σj| is the same for every class, so dropping constant terms gives gj(X) = -(1/(2σ²))(X - µj)^T (X - µj) + log P(Cj). Expanding the quadratic:
gj(X) = -(1/(2σ²))(X^T X - 2 µj^T X + µj^T µj) + log P(Cj)
      = (1/σ²) µj^T X - (1/(2σ²)) µj^T µj + log P(Cj),   (2)
where the term X^T X is dropped because it is common to all classes. Writing wj = µj/σ² and wj0 = -(1/(2σ²)) µj^T µj + log P(Cj), we have gj(X) = wj^T X + wj0: a discriminant function linear in X with an intercept (bias) term wj0.
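The simplified discriminant (2) can be coded directly as a small helper function (a sketch; the function name `g` and its arguments are introduced here for illustration):

```r
# Linear discriminant score for the Sigma_j = sigma^2 * I case:
#   g_j(X) = mu_j^T X / sigma^2 - mu_j^T mu_j / (2 sigma^2) + log P(C_j)
g <- function(x, mu, prior, sigma2) {
  sum(mu * x) / sigma2 - sum(mu * mu) / (2 * sigma2) + log(prior)
}

# Example with the parameters of part b) below: sigma^2 = 2, prior 1/4
g(c(0.5, 0.4), c(2, -1), prior = 0.25, sigma2 = 2)   # -2.336294
```

Because the X^T X term was dropped, the score is linear in x, which is exactly what part a) asks to show.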
b) In this example, we have three classes, each a 2-dimensional Gaussian distribution, with µ1 = (2, -1)^T, µ2 = (4, 3)^T, µ3 = (2, 3)^T and Σ1 = Σ2 = Σ3 = 2I2, where I2 is the 2 × 2 identity matrix. We assume the priors p(C1) = p(C2) = 1/4 and p(C3) = 1/2. Let X = (0.5, 0.4)^T. Calculate g1(X), g2(X) and g3(X), and classify the observation X to one of the classes.
Solution: Plug the parameters of each class into (2), with σ² = 2 and X = (0.5, 0.4)^T:
g1(X) = -2.336294,
g2(X) = -6.036294,
g3(X) = -2.843147.
x <- c(0.5, 0.4)
mu1 <- c(2, -1)
mu2 <- c(4, 3)
mu3 <- c(2, 3)
> 1/2*t(mu1)%*%x - t(mu1)%*%mu1/4 + log(0.25)
          [,1]
[1,] -2.336294
>
> 1/2*t(mu2)%*%x - t(mu2)%*%mu2/4 + log(0.25)
          [,1]
[1,] -6.036294
>
> 1/2*t(mu3)%*%x - t(mu3)%*%mu3/4 + log(0.5)
          [,1]
[1,] -2.843147
For any X, we will assign the membership j if the corresponding discriminant function gj (X) has the highest value. Here g1 (X) is highest, thus we assign it to class 1.
Question 4: Analyze the student math performance data. Apply linear discriminant analysis and quadratic discriminant analysis to the dataset. The response variable is "schoolsup" and the three predictors are "G1", "G2" and "G3". Please randomly select 300 observations as the training set and use your two models to predict the "schoolsup" status of the remaining students. Repeat this cross-validation five times and calculate the average misclassification errors of the two models. Which method performs better for this data set, linear discriminant analysis or quadratic discriminant analysis?
library(MASS)                       # lda() and qda()
errlin <- errquad <- rep(0, 5)      # misclassification errors per repetition
neval <- nrow(student.mat) - 300    # size of the hold-out set
for (k in 1:5) {
  train <- sample(1:nrow(student.mat), 300)
  ## linear discriminant analysis
  model_1 <- lda(schoolsup ~ G1 + G2 + G3, student.mat[train, ])
  predlin <- predict(model_1, student.mat[-train, ])$class
  tablin <- table(student.mat$schoolsup[-train], predlin)
  errlin[k] <- (neval - sum(diag(tablin))) / neval
  ## quadratic discriminant analysis
  model_2 <- qda(schoolsup ~ G1 + G2 + G3, student.mat[train, ])
  predquad <- predict(model_2, student.mat[-train, ])$class
  tabquad <- table(student.mat$schoolsup[-train], predquad)
  errquad[k] <- (neval - sum(diag(tabquad))) / neval
}
merrlin=mean(errlin)
merrquad=mean(errquad)
merrlin
[1] 0.1284211
merrquad
[1] 0.1326316
The performance of the two methods is very similar; LDA is slightly better than QDA in this case. The comparative result depends on the random splits, so students who obtain somewhat different error rates may simply have used a different random seed.
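Since the comparison depends on random splits, fixing the seed before the loop makes the run reproducible. A minimal self-contained sketch (the seed value 3333 is arbitrary; 395 is the number of rows in the student-mat data, so the hold-out set has 95 observations):

```r
set.seed(3333)                  # arbitrary seed, fixed for reproducibility
split1 <- sample(1:395, 300)    # a training split of the 395-row data
set.seed(3333)
split2 <- sample(1:395, 300)    # resetting the seed reproduces the same split
identical(split1, split2)       # TRUE
```

Placing a single `set.seed()` call before the cross-validation loop makes all five splits, and hence merrlin and merrquad, reproducible.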
Question 5:
Suppose we have two-class observations with p-dimensional predictors. We have samples x1, ..., xn, with n1 samples from Class 1 and n2 samples from Class 2. Let v be a unit vector. The projection of sample xi onto the line in direction v is given by the inner product yi = v^T xi. Let µ1 and µ2 be the means of class 1 and class 2, and let ȳ1 and ȳ2 be the means of the projections of class 1 and class 2. Denote the variance of the projected samples of class 1 by s1² = Σ_{i∈C1} (yi - ȳ1)² and the variance of the projected samples of class 2 by s2² = Σ_{i∈C2} (yi - ȳ2)². The Fisher linear discriminant projects onto the direction v which maximizes
(ȳ1 - ȳ2)² / (s1² + s2²).
Let the variation of the original samples of class 1 be S1 = Σ_{i∈C1} (xi - µ1)(xi - µ1)^T and the variation of the original samples of class 2 be S2 = Σ_{i∈C2} (xi - µ2)(xi - µ2)^T. Define the within-class variation
Sw = S1 + S2
and the between-class variation SB = (µ1 - µ2)(µ1 - µ2)^T. Prove that the objective function can be simplified as
J(v) = (v^T SB v) / (v^T Sw v).
Proof: Using yi = v^T xi and ȳ1 = v^T µ1,
s1² = Σ_{i∈C1} (v^T xi - v^T µ1)²
    = Σ_{i∈C1} (v^T (xi - µ1))²
    = Σ_{i∈C1} (v^T (xi - µ1))((xi - µ1)^T v)
    = v^T [ Σ_{i∈C1} (xi - µ1)(xi - µ1)^T ] v
    = v^T S1 v.   (3)
Similarly, s2² = v^T S2 v. Thus s1² + s2² = v^T (S1 + S2) v = v^T Sw v.
Recall the between-class variation SB = (µ1 - µ2)(µ1 - µ2)^T. SB measures the separation between the two classes before projection, while Sw measures the variation within each class before projection.
For the numerator, using ȳ1 = v^T µ1 and ȳ2 = v^T µ2,
(ȳ1 - ȳ2)² = (v^T µ1 - v^T µ2)²
           = (v^T (µ1 - µ2))(v^T (µ1 - µ2))
           = (v^T (µ1 - µ2))((µ1 - µ2)^T v)
           = v^T SB v.
Thus our objective function can be simplified as
J(v) = (v^T SB v) / (v^T Sw v).   (4)
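The identity can be verified numerically with the data from Question 1 (a sketch; any unit direction v works, here an arbitrary one):

```r
x1 <- rbind(c(2,3), c(3,7), c(4,8), c(5,12), c(6,10))        # class 1
x2 <- rbind(c(2,1), c(3,2), c(4,2), c(5,3), c(6,4), c(7,6))  # class 2
mu1 <- colMeans(x1); mu2 <- colMeans(x2)
Sw <- 4*cov(x1) + 5*cov(x2)              # within-class variation S1 + S2
SB <- (mu1 - mu2) %*% t(mu1 - mu2)       # between-class variation
v  <- c(1, 1) / sqrt(2)                  # an arbitrary unit direction

y1 <- x1 %*% v; y2 <- x2 %*% v           # projected samples
lhs <- (mean(y1) - mean(y2))^2 /
       (sum((y1 - mean(y1))^2) + sum((y2 - mean(y2))^2))
rhs <- drop(t(v) %*% SB %*% v) / drop(t(v) %*% Sw %*% v)
all.equal(lhs, rhs)                      # TRUE: the two forms of J(v) agree
```

The agreement holds for every direction v, since each step of the proof above is an exact algebraic identity.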
2022-04-18