Question 1:
Consider these 2 tables.
left table

| id   | item          | type    |
| 0021 | pasta alfredo | main    |
| 0310 | fruit bowl    | dessert |

right table

| id   | allergen | vegetarian |
| 0110 | honey    | yes        |
| 0021 | gluten   | no         |
For each join type below, give the corresponding result table; a pandas sketch for checking your answers follows the list. [1 mark each for a complete table]
(a) inner
(b) left
(c) outer
(d) right
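A minimal pandas sketch for checking the four result tables, assuming pandas is available; the column names come from the tables above and the DataFrame variable names are my own:

```python
import pandas as pd

# The two tables from the question, keyed on the shared id column.
left = pd.DataFrame({
    "id":   ["0021", "0310"],
    "item": ["pasta alfredo", "fruit bowl"],
    "type": ["main", "dessert"],
})
right = pd.DataFrame({
    "id":         ["0110", "0021"],
    "allergen":   ["honey", "gluten"],
    "vegetarian": ["yes", "no"],
})

# Print the result of each join type in turn.
for how in ["inner", "left", "outer", "right"]:
    print(f"--- {how} join ---")
    print(left.merge(right, on="id", how=how))
```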
Question 2
Consider the following regular expression meta operators:
( ) [ ] { } . * + ? ^ $ | \
For each of the following, give a couple of examples of strings which the regular expression would match. Describe (colloquially, in a manner that a non-technical person would understand) the set of strings that the pattern is designed to match. Each regular expression is enclosed in the pair of '/' characters below. [2 marks each]
(a) /^[A-Za-z][a-z]+@[a-z\.]+$/
(b) /\+?\d+(\d)\1{2,3}/
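If it helps to experiment, the two patterns can be tried out with Python's re module; the candidate strings below are deliberate placeholders for your own examples, not suggested answers:

```python
import re

# The two patterns from (a) and (b), without the surrounding '/' delimiters.
patterns = {
    "(a)": r"^[A-Za-z][a-z]+@[a-z\.]+$",
    "(b)": r"\+?\d+(\d)\1{2,3}",
}

# Replace these with the strings you want to test.
candidates = ["your test string here", "another test string"]

for label, pattern in patterns.items():
    for s in candidates:
        # re.search reports whether the pattern matches anywhere in the
        # string; pattern (a) is anchored with ^ and $, so it effectively
        # requires the whole string to match.
        print(label, repr(s), bool(re.search(pattern, s)))
```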
Question 3
You are employed by a medical research company to develop a cancer screening test. This involves developing a classifier to predict whether a patient has a particular type of cancer based on a blood test result. You wish to figure out whether a decision tree or a 5-nn classifier performs better at this task. You train both classifiers on your dataset, then test how well each classifier performs on that same dataset. You find the decision tree has a 3% higher classification accuracy than the 5-nn classifier.
a. Suggest two reasons why you might prefer the 5-nn classifier despite this result
b. You then train a 1-nn classifier on your cancer dataset and test it on the same dataset. Do you expect the accuracy to be higher than, lower than, or about the same as the 5-nn classifier? Why?
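A hedged sketch of the experimental setup, using scikit-learn with synthetic data standing in for the blood-test dataset (which is not available here); as in the question, each classifier is evaluated on the same data it was trained on:

```python
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic placeholder data, not real blood-test results.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

for name, clf in [
    ("decision tree", DecisionTreeClassifier(random_state=0)),
    ("5-nn", KNeighborsClassifier(n_neighbors=5)),
    ("1-nn", KNeighborsClassifier(n_neighbors=1)),
]:
    clf.fit(X, y)
    # Accuracy measured on the training data itself, as the question describes.
    print(name, clf.score(X, y))
```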
Question 4
Given the Levenshtein distance matrix below, provide the values for w, x, y and z.
Note that the operations add, delete and substitute (replace) all have a cost of 1.
|   | # | c | h | o | o | s | e |
| # | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
| c | 1 |   |   |   |   |   |   |
| a | 2 |   | x |   |   |   | w |
| k | 3 |   |   |   | y |   |   |
| e | 4 |   |   |   |   |   | z |
a. w =
b. x =
c. y =
d. z =
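A short dynamic-programming sketch that rebuilds the full matrix for "cake" (rows) against "choose" (columns), from which w, x, y and z can be read off; add, delete and substitute each cost 1, as stated above:

```python
def levenshtein_matrix(a, b):
    # d[i][j] = edit distance between a[:i] and b[:j]
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i          # delete all of a[:i]
    for j in range(len(b) + 1):
        d[0][j] = j          # add all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # add
                          d[i - 1][j - 1] + cost)  # substitute or match
    return d

for row in levenshtein_matrix("cake", "choose"):
    print(row)
```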
Question 5
You scrape 500 newswire articles from www.abc.net.au in order to perform text classification on this constructed dataset, creating a bag-of-words representation of all the documents. Before you can use this data for any task, you need to think about how you might store and preprocess this information.
a. When you scrape the data, you want to make sure you don't lose important information such as where the article was found (under Politics, Business, Sports, Science, etc), the title of the article, the author, and the date it was published. In what data format would you store this information and why?
b. Given that you have scraped so few articles, you're worried that you have a data sparsity problem. What text processing procedure would you apply to mitigate this and why?
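A minimal bag-of-words sketch with scikit-learn, illustrating the document representation mentioned in the question; the three toy "articles" are placeholders rather than scraped data:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder documents standing in for the scraped articles.
articles = [
    "Parliament debates the new budget",
    "Local team wins the grand final",
    "Researchers publish new climate study",
]

vectorizer = CountVectorizer()
bow = vectorizer.fit_transform(articles)   # sparse document-term matrix

print(vectorizer.get_feature_names_out())  # the vocabulary (the "words")
print(bow.toarray())                       # per-document word counts
```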
Question 6
Given the following dataset, assume that each feature can take the value '1' or '2' and that the class label can take the value '1', '2' or '3':
| Instance ID | Feature1 | Feature2 | Class label |
| 1           | 1        | 1        | 1           |
| 2           | 1        | 1        | 1           |
| 3           | 2        | 1        | 1           |
| 4           | 2        | 1        | 2           |
| 5           | 2        | 2        | 2           |
| 6           | 2        | 2        | 2           |
a. Compute the mutual information between Feature1 and Feature2 using a base 2 logarithm.
b. Assume we also have a new feature, Feature3. What is the maximum possible mutual information between Feature1 and Feature3? Give a possible vector for Feature3 so that the mutual information between Feature1 and Feature3 is maximised.
c. Calculate the chi-square value for Feature1 and state how many degrees of freedom are present. How could you use this value to determine whether Feature1 is a good predictor of the class label?
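For checking hand calculations, a standard-library sketch of mutual information with a base-2 logarithm and of the chi-square statistic, with the six instances transcribed from the table above:

```python
from collections import Counter
from math import log2

# Columns of the table above, one value per instance.
feature1 = [1, 1, 2, 2, 2, 2]
feature2 = [1, 1, 1, 1, 2, 2]
label    = [1, 1, 1, 2, 2, 2]

def mutual_information(xs, ys):
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    # Sum over observed (x, y) pairs of P(x, y) * log2(P(x, y) / (P(x) P(y))).
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def chi_square(xs, ys):
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    observed = Counter(zip(xs, ys))
    # Sum over all cells of (observed - expected)^2 / expected.
    return sum((observed[(x, y)] - px[x] * py[y] / n) ** 2 / (px[x] * py[y] / n)
               for x in px for y in py)

print("MI(Feature1, Feature2)      =", mutual_information(feature1, feature2))
print("chi-square(Feature1, label) =", chi_square(feature1, label))
```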
Question 7
Free email providers frequently employ sophisticated data analysis techniques to scan a user's email messages and extract data. This can be used to deliver targeted advertising, as well as to conduct research projects. It may also be sold to third parties.
a. List three stakeholders in this process
b. Describe two advantages and two disadvantages that affect at least one of these stakeholders
c. Suggest how two of Zook's 10 rules for responsible big data research can be applied to mitigate the disadvantages you have identified
Question 8
A data scientist has a dataset containing 200 documents. 100 of these are full-length academic articles about a particular topic and the other 100 documents are the abstracts from the same articles. Without any text pre-processing, he uses a bag-of-words model to represent each document as a feature vector, then performs K-means clustering to cluster the documents into 12 clusters.
a. Are the full-length articles likely to be assigned to the same clusters as their abstracts? Why or why not?
b. What pre-processing could you do to improve the likelihood of full-length articles being assigned to the same cluster as their abstracts?
c. The data scientist performs the clustering a second time without making any changes to the algorithm or pre-processing. He is surprised to find documents are assigned to different clusters. Suggest why this might be the case
d. The data scientist is not sure which of the two clustering results to use. Suggest how he could determine which result is better.
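A minimal sketch of the pipeline the data scientist follows (bag of words, then K-means), with a handful of toy documents standing in for the 200 articles and abstracts and a smaller cluster count to suit the toy data; note that no random seed is fixed here:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

# Toy documents: each "article" is followed by an "abstract"-like paraphrase.
documents = [
    "deep learning for image recognition",
    "image recognition with deep learning",
    "bacterial resistance to antibiotics",
    "antibiotic resistance in bacteria",
]

X = CountVectorizer().fit_transform(documents)           # bag-of-words vectors
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # cluster assignments
print(labels)
```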
Question 9
A data scientist wishes to understand whether there is a correlation between a patient's income and the time they spent in hospital after contracting a particular disease. The data scientist obtains information from the national census, which includes the date of birth, postcode, gender and income of everyone in Australia, but not personally identifying information such as name or address. He also obtains information from hospitals containing the date of birth, postcode, gender and time spent in hospital for 10000 patients hospitalised for the disease over a two-year period.
He then links the two datasets together using the edit distance algorithm to calculate the similarity between the dates of birth, postcodes and genders in the two datasets. The edit distances for each of these features are evenly weighted to produce a similarity score. Each individual in the hospital dataset is linked to the individual with the closest similarity score in the census dataset.
After performing the linkage, he computes the Pearson's correlation between income and time spent in hospital. He finds the magnitude of the correlation is 0.15, which is much lower than he expected.
a. Suggest four possible reasons the correlation is much lower than expected
b. Another data scientist runs a similar analysis and finds a Pearson's correlation of 0.8. He concludes that having a low income causes patients to be discharged sooner. Is this conclusion justified from the analysis? Why or why not?
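A hedged sketch of the linkage step, run on two tiny made-up record sets; the field values and the matching rule (evenly weighted edit distances over date of birth, postcode and gender, with the smallest total taken as the best match) are a reconstruction of the description above, not a definitive recipe:

```python
def edit_distance(a, b):
    # Standard Levenshtein distance with unit costs.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(a)][len(b)]

# (date of birth, postcode, gender, income) and
# (date of birth, postcode, gender, days in hospital) -- made-up records.
census   = [("1970-01-05", "3000", "F", 52000), ("1985-11-30", "3121", "M", 87000)]
hospital = [("1970-01-05", "3000", "F", 12),    ("1985-11-03", "3121", "M", 4)]

def record_distance(rec_a, rec_b):
    # Evenly weighted sum of edit distances over the three linkage fields.
    return sum(edit_distance(rec_a[i], rec_b[i]) for i in range(3))

for h in hospital:
    best = min(census, key=lambda c: record_distance(h, c))
    print("hospital record", h, "-> linked to census record", best)
```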
Question 10
An online movie rental store wishes to develop a recommender system to recommend films to its customers. A table of film ratings is provided below:
| User  | Film1 | Film2 | Film3 | Film4 |
| Anne  | 4     |       | 3     |       |
| Bob   | 2     |       |       | 4     |
| Chris |       | 2     | 3     |       |
| Dave  |       | 3     |       | 3.5   |
Use the item-based recommender systems approach described in lectures to predict Bob's rating for Film3 based on the two most similar users. Show the key intermediate calculations.
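One common item-based formulation, offered only as a checking aid (cosine similarity between the rating columns, with missing ratings treated as zero, then a similarity-weighted average of Bob's known ratings); the exact formulation taught in the lectures may differ, so follow that method in your answer:

```python
import numpy as np

users = ["Anne", "Bob", "Chris", "Dave"]
films = ["Film1", "Film2", "Film3", "Film4"]
# Ratings from the table above; missing ratings are stored as 0 purely for
# the vector arithmetic.
R = np.array([
    [4, 0, 3, 0  ],   # Anne
    [2, 0, 0, 4  ],   # Bob
    [0, 2, 3, 0  ],   # Chris
    [0, 3, 0, 3.5],   # Dave
])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

target = films.index("Film3")
bob = users.index("Bob")

# Similarity of Film3 to each film Bob has already rated.
sims = {f: cosine(R[:, target], R[:, films.index(f)]) for f in ["Film1", "Film4"]}

# Similarity-weighted average of Bob's existing ratings for those films.
pred = sum(s * R[bob, films.index(f)] for f, s in sims.items()) / sum(sims.values())
print(sims, round(pred, 2))
```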