Reference no: EM132490380
Question 1. Solve the following problems from Chapter 2 of the textbook: 12, 13, 14(a), (b), (c), and 27.
Note: The problem numbers listed in this assignment are from the second edition of the textbook. Check the edition you have and make sure you are solving the right problems.
Question 2. Consider a two-category classification problem with two-dimensional feature vector X = (x1, x2). The two categories are ω1 and ω2.
p(X | ω1) ~ N(µ1, Σ1),    µ1 = [-1  2]^T,
p(X | ω2) ~ N(µ2, Σ2),    µ2 = [ 1  2]^T,
P(ω1) = P(ω2) = 1/2,
and
Σ1 = [1  1]        Σ2 = [2  0]
     [1  2],            [0  2].
(a) Calculate the Bayes decision boundary.
(b) Generate 50 random patterns from each of the two class-conditional densities and plot them in the two-dimensional feature space. Also draw the decision boundary and ellipses of concentration on this plot.
(c) Calculate the Bhattacharyya error bound.
(d) Generate 1,000 additional test patterns from each class and determine the empirical error rate based on the decision boundary in 2(a). Compare this error with the bound in part 2(c).
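Parts (c) and (d) can be checked numerically. The sketch below (not part of the assignment) computes the Bhattacharyya bound for the given parameters and estimates the empirical error of the Bayes rule on simulated test patterns; the plotting step of part (b) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters from the problem statement
mu1, mu2 = np.array([-1.0, 2.0]), np.array([1.0, 2.0])
S1 = np.array([[1.0, 1.0], [1.0, 2.0]])
S2 = np.array([[2.0, 0.0], [0.0, 2.0]])
P1 = P2 = 0.5

def log_gaussian(x, mu, S):
    """Log of the bivariate normal density at x."""
    d = x - mu
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (d @ np.linalg.inv(S) @ d) - 0.5 * logdet - np.log(2 * np.pi)

def decide(x):
    """Bayes decision rule with equal priors: pick the larger class-conditional density."""
    return 1 if log_gaussian(x, mu1, S1) >= log_gaussian(x, mu2, S2) else 2

# Bhattacharyya bound: P(err) <= sqrt(P1*P2) * exp(-k(1/2))
Sbar = 0.5 * (S1 + S2)
dmu = mu2 - mu1
k_half = (dmu @ np.linalg.inv(Sbar) @ dmu) / 8 + 0.5 * np.log(
    np.linalg.det(Sbar) / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2))
)
bound = np.sqrt(P1 * P2) * np.exp(-k_half)

# Empirical error on 1,000 test patterns per class
X1 = rng.multivariate_normal(mu1, S1, 1000)
X2 = rng.multivariate_normal(mu2, S2, 1000)
errors = sum(decide(x) != 1 for x in X1) + sum(decide(x) != 2 for x in X2)
emp_err = errors / 2000
print(f"Bhattacharyya bound: {bound:.4f}, empirical error: {emp_err:.4f}")
```

As expected, the empirical error rate falls below the Bhattacharyya bound, which upper-bounds the Bayes error.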
Question 3. Consider the following bivariate class-conditional density function for feature vector

X = [x1]        µ = [µ1]        Σ = [σ1^2  σ12 ]
    [x2],           [µ2],           [σ21   σ2^2],    with σ12 = σ21.
(a) What is the expression for the Euclidean distance between point X and mean vector µ?
(b) What is the expression for the Mahalanobis distance between point X and mean vector µ? Simplify this expression by expanding the quadratic term.
(c) Compare the expressions in 3(a) and 3(b). How does the Mahalanobis distance differ from the Euclidean distance? When are the two distances equal? When is it more appropriate to use the Mahalanobis distance over the Euclidean distance? Support your answers with illustrations.
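The contrast asked about in 3(c) can be illustrated numerically. In the sketch below the covariance values are illustrative assumptions, not from the problem: two points at equal Euclidean distance from µ can sit at very different Mahalanobis distances when the variances differ.

```python
import numpy as np

# Illustrative (assumed) parameters: a correlated 2-D distribution
mu = np.array([0.0, 0.0])
Sigma = np.array([[4.0, 1.5],
                  [1.5, 1.0]])  # symmetric: sigma12 = sigma21, as required

def euclidean(x, mu):
    """Ordinary Euclidean distance ||x - mu||."""
    return np.sqrt((x - mu) @ (x - mu))

def mahalanobis(x, mu, Sigma):
    """Mahalanobis distance sqrt((x - mu)^T Sigma^{-1} (x - mu))."""
    d = x - mu
    return np.sqrt(d @ np.linalg.inv(Sigma) @ d)

# Two points at the same Euclidean distance from mu...
a = np.array([2.0, 0.0])   # along the high-variance direction
b = np.array([0.0, 2.0])   # along the low-variance direction
print(euclidean(a, mu), euclidean(b, mu))                    # equal
print(mahalanobis(a, mu, Sigma), mahalanobis(b, mu, Sigma))  # b is farther

# With Sigma = I the two distances coincide
assert np.isclose(mahalanobis(a, mu, np.eye(2)), euclidean(a, mu))
```

The point along the low-variance direction is "farther" in the Mahalanobis sense, which is exactly why that distance is preferred when features are correlated or have unequal variances; with Σ = I the two distances are equal.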
Question 4. The class-conditional density functions of a discrete random variable X for four pattern classes are shown below:
 x | p(x | ω1) | p(x | ω2) | p(x | ω3) | p(x | ω4)
---+-----------+-----------+-----------+-----------
 1 |    1/3    |    2/3    |    1/6    |    2/5
 2 |    2/3    |    1/3    |    5/6    |    3/5
The loss function λ(αi | ωj) is as follows, where action αi means "decide pattern class ωi":

    | ω1 | ω2 | ω3 | ω4
----+----+----+----+----
 α1 |  0 |  2 |  3 |  4
 α2 |  1 |  0 |  1 |  8
 α3 |  3 |  2 |  0 |  2
 α4 |  5 |  3 |  1 |  0
Assume the following class prior probabilities: P (ω1) = 1/4, P (ω2) = 1/4, P (ω3) = 1/8, P (ω4) = 3/8.
(a) Compute the conditional risk for each action as follows:
R(αi | x) = Σ_{j=1}^{4} λ(αi | ωj) P(ωj | x),    i = 1, ..., 4
(b) Compute the overall risk R, given as:
R = Σ_{i=1}^{2} R(α(xi) | xi) p(xi)
where α(xi) is the decision rule minimizing the conditional risk for xi.
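Both computations reduce to a few matrix products: posteriors via Bayes' rule, conditional risks via the loss matrix, then the prior-weighted sum over the two values of x. A minimal NumPy sketch using the tables and priors above:

```python
import numpy as np

# Class-conditional probabilities p(x | omega_j): rows x = 1, 2; columns omega_1..omega_4
p_x_given_w = np.array([[1/3, 2/3, 1/6, 2/5],
                        [2/3, 1/3, 5/6, 3/5]])
priors = np.array([1/4, 1/4, 1/8, 3/8])

# Loss matrix lambda(alpha_i | omega_j): rows alpha_1..alpha_4, columns omega_1..omega_4
lam = np.array([[0, 2, 3, 4],
                [1, 0, 1, 8],
                [3, 2, 0, 2],
                [5, 3, 1, 0]])

# Evidence p(x) and posteriors P(omega_j | x) via Bayes' rule
p_x = p_x_given_w @ priors                       # shape (2,)
post = (p_x_given_w * priors) / p_x[:, None]     # shape (2, 4)

# Conditional risk R(alpha_i | x) = sum_j lambda(alpha_i | omega_j) P(omega_j | x)
R = post @ lam.T                                 # shape (2, 4): rows x, columns alpha_i
best = R.argmin(axis=1)                          # minimum-risk action for each x
overall_risk = sum(R[k, best[k]] * p_x[k] for k in range(2))
print(R)
print("actions:", best + 1, "overall risk:", overall_risk)
```

With these numbers the minimum-risk action is α3 for both x = 1 and x = 2, and the overall risk works out to exactly 2.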
Question 5. In a particular binary hypothesis testing application, the conditional density for a scalar feature x given class ω1 is
p(x | ω1) = k1 exp(-x^2/10).
Given class ω2, the conditional density is
p(x | ω2) = k2 exp(-(x - 2)^2/2).
(a) Find k1 and k2, and plot the two densities on a single graph.
(b) Assume that the prior probabilities of the two classes are equal, and that the loss for choosing correctly is zero. If the losses for choosing incorrectly are λ12 = 1 and λ21 = 5, what is the expression for the Bayes risk?
(c) Find the decision regions which minimize the Bayes risk, and indicate them on the plot you made in part (a).
(d) For the decision regions in part (c), what is the numerical value of the Bayes risk?
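The constants follow by matching each density to a Gaussian: exp(-x^2/10) = exp(-x^2/(2·5)) gives k1 = 1/√(10π), and exp(-(x-2)^2/2) gives k2 = 1/√(2π). A numerical sketch verifying the normalization and evaluating the minimum-risk decision and the Bayes risk on a fine grid (plotting omitted):

```python
import numpy as np

# Normalizing constants: p(x|w1) ~ N(0, 5), p(x|w2) ~ N(2, 1)
k1 = 1 / np.sqrt(10 * np.pi)
k2 = 1 / np.sqrt(2 * np.pi)

x = np.linspace(-20, 20, 200001)
dx = x[1] - x[0]
p1 = k1 * np.exp(-x**2 / 10)
p2 = k2 * np.exp(-(x - 2)**2 / 2)

# Sanity check: both densities integrate to 1 (Riemann sum on the grid)
assert np.isclose((p1 * dx).sum(), 1.0, atol=1e-6)
assert np.isclose((p2 * dx).sum(), 1.0, atol=1e-6)

# With equal priors and losses lambda12 = 1, lambda21 = 5,
# decide w1 wherever lambda21*P(w1)*p1 > lambda12*P(w2)*p2, i.e. 5*p1 > p2.
decide_w1 = 5 * p1 > p2

# Bayes risk = integral of min(lambda12*P(w2)*p2, lambda21*P(w1)*p1) dx
risk_density = np.minimum(0.5 * 1 * p2, 0.5 * 5 * p1)
bayes_risk = (risk_density * dx).sum()
print("k1 =", k1, "k2 =", k2, "Bayes risk ≈", bayes_risk)
```

Note the heavy loss λ21 = 5 for misclassifying ω1 makes 5·p1 exceed p2 for every x, so the minimum-risk rule always decides ω1 and the Bayes risk equals λ12·P(ω2) = 1/2. Verifying this kind of degenerate region against the plot in part (a) is exactly what part (c) asks for.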