CIS 6930 Trustworthy Machine Learning
Data Privacy
Problem 1: Syntactic Metrics
Consider the data set depicted in Table 1. Answer the following questions. (Justify your answers as appropriate.)
Age   | Zip Code | Sex | Diagnosis
------|----------|-----|--------------
30-39 | 32607    | M   | Broken Leg
30-39 | 32607    | M   | Cancer
40-49 | 32611    | F   | Heart Disease
20-29 | 32607    | F   | Cancer
20-29 | 32607    | F   | Heart Disease
40-49 | 32611    | M   | Hypertension
Table 1: Anonymized Data Set 1.
1. What are the quasi-identifier(s)? What are the sensitive attribute(s)?
2. What is the largest integer k such that the data set satisfies k-anonymity? What is the largest integer l such that the data set satisfies l-diversity?
3. Modify the data set using generalization and suppression to ensure that it satisfies 3-anonymity and 2-diversity. Here we are looking for a solution that minimally affects the utility of the data. Write the modified data set below.
Now consider the data set depicted in Table 2. Answer the following questions. (Justify your answers as appropriate.)
Age   | Zip Code | Sex | Credit Score | Yearly Income | Loan
------|----------|-----|--------------|---------------|---------
30-39 | 32607    | M   | 678          | 90k           | Approved
30-39 | 32607    | M   | 799          | 120k          | Approved
40-49 | 32611    | F   | 451          | 35k           | Declined
20-29 | 32607    | F   | 783          | 30k           | Approved
20-29 | 32607    | F   | 560          | 70k           | Declined
40-49 | 32611    | M   | 725          | 22k           | Declined
Table 2: Anonymized Data Set 2.
1. Suppose that in this case Age, Zip Code, and Sex are quasi-identifiers while Credit Score, Yearly Income, and Loan are sensitive attributes.
Propose a way to apply k-anonymity and l-diversity in this case that follows the spirit of these notions.
Note: You do not need to anonymize the table itself, only explain how you would apply k-anonymity and l-diversity.
Hint: what difference, if any, is there between this question and Problem 1 (a)?
2. Your student friend Alice (who is not in the anonymized data set) was recently declined for a loan despite her 30k yearly income. She thinks she may have been discriminated against.
In order to be transparent (and to follow the philosophy of the GDPR), the bank that declined Alice's loan has published the following transparency report about its loan approval algorithm.
• If yearly income ≥ 50k then: return APPROVED
• If yearly income ≥ 25k:
- If student:
∗ If credit score ≥ 550 then: return APPROVED
∗ Else: return DECLINED
- Else (not student) if credit score ≥ 500 then: return APPROVED
• If yearly income ≥ 20k and credit score ≥ 650 then: return APPROVED
• return DECLINED
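For concreteness, here is a minimal Python sketch of the published decision procedure (the function name loan_decision and the is_student flag are illustrative; incomes are expressed in thousands):

def loan_decision(yearly_income_k, credit_score, is_student):
    # Direct transcription of the bank's published transparency report.
    if yearly_income_k >= 50:
        return "APPROVED"
    if yearly_income_k >= 25:
        if is_student:
            return "APPROVED" if credit_score >= 550 else "DECLINED"
        elif credit_score >= 500:
            return "APPROVED"
    if yearly_income_k >= 20 and credit_score >= 650:
        return "APPROVED"
    return "DECLINED"

For example, loan_decision(30, 560, True) returns "APPROVED" while loan_decision(30, 540, True) returns "DECLINED".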
What can you infer about Alice assuming that the transparency report accurately reflects the loan approval process? What do you conclude about the possible tension between algorithmic transparency and privacy? (Explain your answer.)
Problem 2: Randomized Response & Local Differential Privacy
Social science researchers at the University of Florida want to conduct a study to explore the prevalence of crime among students. Specifically they want to ask questions of the form: have you ever committed crime X? (Here X stands for a specific crime or crime category.)
Researchers are ethical so they want to carefully design the study to ensure that participants respond truthfully and that privacy is protected. They reached out to you, a CNT5410 student, to evaluate their methods.
Consider a participant who is asked the question: have you ever committed crime X? This question admits a yes or no answer. Before answering, the participant is instructed to use the following algorithm to compute a "noisy" answer from their true answer and to report only the noisy answer to the researchers.
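The algorithm listing itself is not reproduced in this text. Purely for illustration, assuming the standard randomized-response mechanism with a single parameter p (report the true answer with probability p, otherwise report the opposite answer), the participant's procedure could be sketched as:

import random

def noisy_answer(true_answer, p):
    # true_answer: 1 for YES, 0 for NO (matching the encoding used below).
    # Assumed mechanism (the course's actual listing may differ):
    # with probability p report the truth, otherwise flip the answer.
    if random.random() < p:
        return true_answer
    return 1 - true_answer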
Answer the following questions.
1. Suppose the researchers obtain noisy answers z1, z2, ..., zn from the n study participants. You can assume that YES is encoded as 1 and NO is encoded as 0. Explain how the researchers can estimate the true proportion of YES answers from the noisy answers. Specifically, give formulae for (1) the expected number of YES answers.
2. Consider the following definition of (Local) Differential Privacy.
Definition 1. A randomized algorithm F which takes input in some set X satisfies ε-differential privacy (for some ε > 0) if for any two input records x ∈ X, x′ ∈ X and any output z ∈ Range(F): Pr{z = F(x)} ≤ e^ε · Pr{z = F(x′)}.
Does the noisy answer algorithm satisfy Definition 1? Produce a proof or a counter-example. If it does, also give an expression for ε in terms of p.
3. Now consider the following (more general) variant of the algorithm.
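As with the first algorithm, the listing is omitted here. A common two-parameter generalization (an assumption, for illustration only) reports the true answer with probability p and otherwise reports a fresh random answer that is YES with probability p′:

import random

def noisy_answer_general(true_answer, p, p_prime):
    # Assumed generalization (the actual variant in the course materials may differ):
    # with probability p report the true answer; otherwise report a random
    # answer that is YES (1) with probability p_prime and NO (0) otherwise.
    if random.random() < p:
        return true_answer
    return 1 if random.random() < p_prime else 0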
Prove that this general variant satisfies ε-(local) differential privacy (Definition 1). Give an expression for ε in terms of p and p′.
4. Suppose we can arbitrarily set p and p′. Explain the trade-off between minimizing ε and minimizing the error between the true answers and those estimated from the noisy answers.
Problem 3: Implementing DP Mechanisms
For this problem you will implement several differential privacy mechanisms we talked about in class. Please use the comments in the Python files provided to guide you in the implementation.
For this question, we will use the dataset data/ds.csv. It contains pairs of age and yearly income for several individuals. For the purpose of calculating sensitivity, assume that the age range for any individual is [16, 100].
0. What is the (global) sensitivity of mean_age_query()? (You can assume that the size of the dataset is known.)
1. Fill in the implementation of laplace_mech() and gaussian_mech(). Also fill in the (global) sensitivity in the mean_age_query() function.
You can test your implementation by running: 'python3 hw1.py problem3.1'. How close are the noisy answers to the true answer?
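For reference (this is not the graded hw1.py code, and the signatures below are assumptions), the standard Laplace and Gaussian mechanisms can be sketched as:

import numpy as np

def laplace_mech(true_answer, sensitivity, epsilon):
    # Laplace mechanism: add noise drawn from Laplace(0, sensitivity / epsilon).
    return true_answer + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

def gaussian_mech(true_answer, sensitivity, epsilon, delta):
    # Gaussian mechanism for (epsilon, delta)-DP with the usual calibration
    # sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon.
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return true_answer + np.random.normal(loc=0.0, scale=sigma)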
2. Complete the implementation of dp_accuracy_plot() and run it for ε = 0.1, 0.5, 1.0, 5.0 on mean_age_query(). Paste the plots below.
To run the code: 'python3 hw1.py problem3.2 <epsilon>'. By default, figures are saved in ./plots and named based on the value of ε. What do you conclude?
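One way such an accuracy plot is often produced (a sketch only; the real dp_accuracy_plot() stub, its signature, and its output format may differ):

import numpy as np
import matplotlib.pyplot as plt

def dp_accuracy_sketch(true_answer, sensitivity, epsilon, trials=1000):
    # Empirical error of the Laplace mechanism at a fixed epsilon.
    noisy = true_answer + np.random.laplace(0.0, sensitivity / epsilon, size=trials)
    plt.hist(np.abs(noisy - true_answer), bins=50)
    plt.xlabel("absolute error")
    plt.ylabel("count")
    plt.title(f"epsilon = {epsilon}")
    plt.savefig(f"plots/accuracy_eps_{epsilon}.png")  # assumes ./plots exists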
3. Implement the function called budget_plot(). Use it to produce a plot of the budget of naive composition and advanced composition (refer to the course materials for details) when using gaussian_mech() to perform mean_age_query() m > 1 times. Plot the naive composition and advanced composition budgets (i.e., total privacy budget ε′) for varying m from 1 to 200, keeping δ ≤ 2^-30. Use base ε = 0.1. Paste the plot below. For what values of m is naive composition better than advanced composition? (Justify your answer.)
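For orientation only, here is a sketch of the comparison assuming the standard composition bounds (naive: m·ε; advanced: ε·sqrt(2m·ln(1/δ′)) + m·ε·(e^ε − 1) with slack δ′). The bounds budget_plot() should actually use are the ones from the course materials.

import numpy as np
import matplotlib.pyplot as plt

def budget_sketch(base_epsilon=0.1, delta_prime=2.0**-30, max_m=200):
    m = np.arange(1, max_m + 1)
    naive = m * base_epsilon
    advanced = (base_epsilon * np.sqrt(2 * m * np.log(1 / delta_prime))
                + m * base_epsilon * (np.exp(base_epsilon) - 1))
    plt.plot(m, naive, label="naive composition")
    plt.plot(m, advanced, label="advanced composition")
    plt.xlabel("m (number of queries)")
    plt.ylabel("total privacy budget")
    plt.legend()
    plt.savefig("plots/budget.png")  # assumes ./plots exists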
Problem 4: Simple Linear Model with Differential Privacy?
Consider training a simple linear model on a dataset D = {(x1, y1), (x2, y2), . . . , (xn, yn)}. Here xi ∈ R and yi ∈ {-1, +1} for all i = 1, 2, . . . , n. The model is:
f_{w,b}(x) = sign(wx + b),
where w and b are the model's parameters and sign returns the sign of its input (i.e., +1 if the input is ≥ 0, -1 otherwise).
We would like to train the model using D to obtain the optimal set of parameters for the model. But we want to do so in a way that satisfies ε-differential privacy. An additional restriction here is that: w, b ∈ {-1, 0, +1}.
1. (5 pts) Ignoring privacy for now, explain how you could find the optimal values for w and b given D as the training dataset. Keep in mind the constraint that w and b can only take values in the set {-1, 0, +1}. (Hint: ERM.)
Let T = {-1, 0, +1} × {-1, 0, +1}; that is, T is the set of possible parameter values for the tuple (w, b). Given a tuple t = (w, b) ∈ T and a dataset D, define the quality score function q as: q(D, (w, b)) = Σ_{i=1}^{n} 1[f_{w,b}(x_i) = y_i]. In other words, q simply counts the number of correct predictions on the training set when the model uses parameters (w, b).
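For reference, a minimal sketch of the exponential mechanism over the nine candidate tuples in T, with q as the quality score (all names are illustrative, and the sensitivity argument is whatever value you determine in the next question):

import math
import random

def quality(D, w, b):
    # q(D, (w, b)): number of training points classified correctly by f_{w,b}.
    return sum(1 for x, y in D if (1 if w * x + b >= 0 else -1) == y)

def exponential_mechanism(D, epsilon, sensitivity):
    candidates = [(w, b) for w in (-1, 0, 1) for b in (-1, 0, 1)]  # the set T
    # Sample each candidate with probability proportional to
    # exp(epsilon * q(D, t) / (2 * sensitivity)).
    weights = [math.exp(epsilon * quality(D, w, b) / (2 * sensitivity))
               for w, b in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]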
2. What is the (global) sensitivity Δq of the quality score function? (Justify your answer.)
3. Let F denote the application of the exponential mechanism to D using q as the quality function. Does F satisfy ε-differential privacy? If yes, explain why. If not, explain why not.
4. Suppose we would like to remove the restriction that w, b ∈ {-1, 0, +1}, so we now allow w, b ∈ R. Using the same quality function q, does the application of the exponential mechanism to D change? Does it still satisfy ε-differential privacy? (Justify your answer.)
Attachment: Trustworthy Machine Learning.rar