MMI223995 Big Data Platforms - Glasgow Caledonian University
DESIGN A DATA PIPELINE TO PROCESS AN OPEN DATASET
Many open datasets are freely available relating to just about any area of interest or activity, including data about government, science, health, sport, music and so on. Valuable insights can be gained from analysis of such data. Analyses can be performed using data pipelines built from platforms that ingest datasets and then transform and/or combine, store and query/visualise the data as required.
In this assignment you will consider the design and implementation of such a data pipeline, using platforms of the kinds that you have learned about in this module. The focus of the assignment is on the use of platforms capable of scaling to handle "big data", so your design should be based on the use of distributed platforms.
A. Design report
You should first research possible datasets and select the data that you want to use as the basis for your assignment. A list of resources to help you find suitable data will be made available on GCU Learn (see Reading & Links), but you may make use of suitable data from any source that you find. You may want to try to find data related to an area that you have a personal interest in and knowledge about.
IMPORTANT: before proceeding you MUST get approval from me for your choice of dataset. You must email me with the following information, and await approval:
• Name/nature of the dataset(s)
• URL(s) from which dataset(s) can be downloaded
• A brief description of the purpose for which you propose to use the data
If the dataset you choose is not considered to be suitable, or has been chosen already by another student, you may be asked to find an alternative. I expect that each student will use a different dataset.
You should then proceed to devise and report on a high-level design for a data pipeline that could be used to perform your proposed analysis.
This design should consider:
Overall concept
• The original format of the data (e.g. CSV, JSON) and illustration of the data schema.
• Any transformation to be applied to the data as ETL (Extract, Transform, Load)
• Potential analyses and/or visualisations to be performed. Given the focus of this module, I expect that analyses will be based on relatively simple filtering, projection and aggregation, rather than on ML (Machine Learning) algorithms, although there is no specific restriction on the analyses you can include.
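As an indication of the scale intended here, a filtering/projection/aggregation analysis might look like the minimal PySpark sketch below. It ingests a hypothetical CSV dataset of air-quality readings; the file path, column names and schema are invented for illustration only and are not part of the assignment.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: ingest the raw CSV (hypothetical path and schema)
    readings = spark.read.csv("data/air_quality.csv",
                              header=True, inferSchema=True)

    # Transform: filter out invalid rows and project the columns of interest
    clean = (readings
             .filter(F.col("pm25").isNotNull() & (F.col("pm25") >= 0))
             .select("city", "date", "pm25"))

    # Aggregate: mean PM2.5 reading per city
    summary = clean.groupBy("city").agg(F.avg("pm25").alias("avg_pm25"))
    summary.show()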
Platforms
• The key components of the pipeline: for each component you should select a suitable big data platform (e.g. specific data store, file system, analytic engine) and describe the purpose of that component within your solution
Integration and deployment
• Interaction/integration between components, e.g. writing results from an analytic engine to a data store
• File formats in cases where file system storage will be used
• Software that would need to be installed or provisioned, including connector libraries where required (see the sketch after this list)
• Physical deployment of components
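As an illustration of the provisioning point above, Spark can pull connector libraries from Maven at session start via its spark.jars.packages option. The sketch below assumes the MongoDB Spark connector; the Maven coordinate, version and connection URI are illustrative and should be checked against the connector's own documentation.

    from pyspark.sql import SparkSession

    # The coordinate/version below are illustrative -- check the connector
    # documentation for the release matching your Spark version
    spark = (SparkSession.builder
             .appName("connector-sketch")
             .config("spark.jars.packages",
                     "org.mongodb.spark:mongo-spark-connector_2.12:10.2.1")
             .config("spark.mongodb.write.connection.uri",
                     "mongodb://localhost:27017")
             .getOrCreate())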
You should base your choices on the module content and on additional research, and you should justify your choices. You should include appropriate references. Marks will be awarded on the basis of depth, completeness and relevance of the content within each of the above areas. Your report should be submitted in the form of a Word or PDF document.
B. Prototype
You should implement a prototype that illustrates the processing stages required for your solution to part A, for example ETL, query, visualisation.
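For instance, the visualisation stage might be as simple as converting a small aggregated result to pandas and plotting it. The sketch below reuses the hypothetical summary DataFrame from the earlier ETL sketch and assumes pandas and matplotlib are installed.

    import matplotlib.pyplot as plt

    # Collect the (small) aggregated result to the driver for plotting
    pdf = summary.toPandas()

    pdf.plot.bar(x="city", y="avg_pm25", legend=False)
    plt.ylabel("Average PM2.5")
    plt.tight_layout()
    plt.show()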
You should prepare your prototype in the form of a notebook, either a Jupyter notebook that runs on the local machine or in a cloud service within Azure, or a Databricks notebook, and you should make use of markdown cells to document your work. The first markdown cell should contain a descriptive title for your prototype and your name and student number.
It is suggested that you use Python as the programming language for your implementation, although Scala is an option if you use Databricks. You should explain the purpose of the processing, where it fits into the overall data pipeline, and the steps involved in data ingest, processing and output. You may wish to implement integration of components where appropriate to illustrate your design, e.g. integration of an analytic engine and a data store.
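As one example of such integration, the sketch below persists the aggregated results of the earlier ETL sketch from the analytic engine (Spark) into a data store (MongoDB). It assumes a session configured with the connector as shown earlier; the database and collection names are illustrative.

    # Write the hypothetical `summary` DataFrame into MongoDB via the
    # Spark connector configured at session start
    (summary.write
            .format("mongodb")
            .mode("append")
            .option("database", "pipeline")
            .option("collection", "results")
            .save())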
Note that while your design should be based on the use of platforms deployed on clusters, it is sufficient for testing your prototype to run on a local standalone computer or on the limited (single-node) clusters typically available in the free tier of cloud-based services.
Your prototype and documentation should be submitted in the form of a single Jupyter or Databricks notebook exported as HTML or PDF, including the output from executing the code in all the code cells. The marker should be able to view in the exported notebook the results of "running" the prototype.