Reference no: EM132537547
Data Cleansing and Integration
Introduction
Nowadays there are many job hunting websites, such as Seek.com and Adzuna.com. These job hunting sites all manage a job search system, where job hunters can search for relevant jobs based on keywords, salary, categories, etc.
Job advertisement data analysis is becoming increasingly important and beneficial for job hunting sites, as the results can be used to improve the experience of users searching for jobs.
For this assessment, you are required to write Python (version 3.7) code to clean and integrate job advertisement datasets from different sources. There are two major tasks in this assessment, which must be completed in order.
• In Task 1, you will need to find and fix problems in a given job advertisement dataset.
• Then in Task 2, you will integrate the cleaned dataset (the output from Task 1) and the 2nd dataset with different formatting.
Task 1. Auditing and Cleansing the Job dataset
In this task, you are given a job advertisement dataset dataset1_with_error.csv. You are required to inspect and audit this dataset to identify data problems and to fix those problems.
The description of each column and its required format in the output cleaned dataset are shown in Table 1. Generic and major data problems that might be found in the data include:
• Typos and spelling mistakes
• Irregularities, e.g., abnormal data values and data formats
• Integrity constraint violations
• Outliers
• Duplications
• Missing values
• Inconsistency, e.g., inhomogeneity in values and types in representing the same data
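The checks above can be sketched with pandas. This is a minimal illustration only: the column names (Id, Title, Salary) and the toy rows are assumptions, since the real schema comes from Table 1.

```python
import pandas as pd

# Toy rows mimicking dataset1_with_error.csv; column names are assumed,
# not taken from Table 1.
df = pd.DataFrame({
    "Id": [1, 2, 2, 3],
    "Title": ["Data Analyst", "data analyst ", "data analyst ", None],
    "Salary": [70000, 65000, 65000, -1],
})

# Missing values: count NaNs per column.
print(df.isna().sum())

# Duplications: count fully duplicated rows.
print(df.duplicated().sum())

# Irregularities / inconsistency: normalise whitespace and casing.
df["Title"] = df["Title"].str.strip().str.title()

# Integrity constraint violations: e.g. salary must be non-negative.
violations = df[df["Salary"] < 0]
print(len(violations))
```

The same pattern (inspect, then transform or drop) applies to each problem type; outliers and spelling mistakes usually need column-specific rules on top of these generic checks.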
Task 2. Integrating the Job datasets
In this task, you will be given a 2nd job advertisement dataset dataset2.csv. You will then integrate this dataset with the output from Task 1 (dataset1_solution.csv).
To complete this task successfully, you are required to do the following:
1. Resolving schema conflicts and merging data: Inspect and compare the schemas of dataset1_solution.csv and dataset2.csv to identify and resolve any schema conflicts.
You will need to write Python code to
a. Resolve any schema conflicts. You will need to adopt the schema in dataset1_solution.csv (refer to Table 1) as your global schema as much as you can (please DO NOT change the attribute names).
b. Implement the semantic mapping and integrate the two data sets dataset1_solution.csv and dataset2.csv to produce one unified table.
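Steps a and b can be sketched as a column rename followed by a concatenation. The attribute names and the mapping below are illustrative assumptions; the real global schema is the one in Table 1.

```python
import pandas as pd

# Stand-ins for the two sources; attribute names are assumed.
d1 = pd.DataFrame({"Id": [1], "Title": ["Data Analyst"], "Location": ["Melbourne"]})
d2 = pd.DataFrame({"job_id": [2], "job_title": ["Data Engineer"], "city": ["Sydney"]})

# Semantic mapping: rename dataset2 attributes onto the global schema,
# keeping the dataset1_solution.csv attribute names unchanged.
mapping = {"job_id": "Id", "job_title": "Title", "city": "Location"}
d2 = d2.rename(columns=mapping)

# Merge the two sources into one unified table.
unified = pd.concat([d1, d2], ignore_index=True)
print(unified.columns.tolist())
```

Conflicts that a rename cannot fix (e.g. a salary range in one source vs. a single figure in the other) need an explicit conversion into the global schema's format before concatenating.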
2. Resolving data conflicts: Inspect tuples and instances for data conflicts in the unified table. In this step, you are required to do the following:
a. Use the pandas library to detect and resolve duplications in the unified table.
b. Identify a proper global key for the integrated job data and explain your chosen key in the notebook.
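A minimal sketch of steps a and b, again with assumed column names and toy data: drop exact duplicates, then check that the candidate global key is unique before adopting it.

```python
import pandas as pd

# Unified table containing an overlapping advertisement (hypothetical data).
unified = pd.DataFrame({
    "Id": [1, 2, 2],
    "Title": ["Data Analyst", "Data Engineer", "Data Engineer"],
    "Location": ["Melbourne", "Sydney", "Sydney"],
})

# a. Detect and drop exact duplicate rows.
unified = unified.drop_duplicates().reset_index(drop=True)

# b. A proper global key must uniquely identify every row in the
# integrated data; verify this before committing to the key.
print(unified["Id"].is_unique)
```

If the source-specific ids clash across the two datasets, a composite key (or a freshly generated surrogate key) may be needed; the notebook should explain whichever choice is made.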
3. Finally, you should output the integrated dataset as dataset_integrated.csv
Attachment:- Data Cleansing and Integration.rar