MMI223995 Big Data Platforms - Glasgow Caledonian University
DESIGN A DATA PIPELINE TO PROCESS AN OPEN DATASET
There are many open datasets freely available relating to just about any area of interest or activity, including data about government, science, health, sport, music and so on. There are valuable insights to be gained from analysis of such data. Analyses can be performed using data pipelines which are built from platforms that ingest datasets and then transform and/or combine, store and query/visualise the data as required.
In this assignment you will consider the design and implementation of such a data pipeline, using platforms of the kinds that you have learned about in this module. The focus of the assignment is on the use of platforms capable of scaling to handle "big data", so your design should be based on the use of distributed platforms.
Your assignment consists of two parts.
A. Design report
You should first research possible datasets and select the data that you want to use as the basis for your assignment. A list of resources to help you find suitable data will be made available on GCU Learn (see Reading & Links), but you may make use of suitable data from any source that you find. You may want to try to find data related to an area that you have a personal interest in and knowledge about.
You should then proceed to devise and report on a high-level design for a data pipeline that could be used to perform your proposed analysis. The pipeline should include stages as appropriate for: ingest, ETL, storage, analysis/visualisation. The pipeline should be designed for deployment on a single cloud service provider, and the platforms for each stage should be deployable or available as managed services on that provider's infrastructure. You will need to research the offerings that are available for your chosen cloud provider.
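Purely as an illustration of what a high-level stage-to-platform mapping might look like (the choice of provider and services is yours to research and justify), one hypothetical mapping for a single provider could be sketched as:

    # Hypothetical stage-to-service mapping for one cloud provider (AWS is
    # assumed here only for illustration; any provider offering equivalent
    # managed services could be chosen instead).
    pipeline_design = {
        "ingest": "Amazon Kinesis Data Streams (streaming) or upload to Amazon S3 (file)",
        "etl": "Spark jobs on Amazon EMR or AWS Glue",
        "storage": "Amazon S3, storing Parquet files",
        "analysis": "Amazon Athena (SQL queries over S3)",
        "visualisation": "Amazon QuickSight dashboards",
    }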
This design should consider:
Overall concept
• The original format of the data (e.g. CSV, JSON) and an illustration of the data schema.
• Source of the data, e.g. file or streaming. Even if your chosen dataset is only available as file(s), your design may consider a scenario in which that data would be streamed, if streaming makes sense for your use case.
• Any transformations to be applied to the data as ETL (Extract, Transform, Load).
• Potential analyses and/or visualisations to be performed. Given the focus of this module, I expect that analyses will be based on relatively simple filtering, projection and aggregation, rather than on ML (Machine Learning) algorithms, although there is no specific restriction on the analyses you can include.
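For illustration only, a minimal PySpark sketch of this style of analysis (filtering, projection and aggregation) is shown below. The dataset, file path and column names are hypothetical and would be replaced by those of your chosen data.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-analysis").getOrCreate()

    # Hypothetical open dataset: air quality readings in CSV format
    readings = (spark.read
                .option("header", True)
                .option("inferSchema", True)
                .csv("/data/air_quality_readings.csv"))

    # Filtering: keep only readings from 2021 onwards
    recent = readings.filter(F.col("year") >= 2021)

    # Projection: keep only the columns needed for the analysis
    projected = recent.select("site", "pollutant", "value")

    # Aggregation: average value per site and pollutant
    summary = projected.groupBy("site", "pollutant").agg(F.avg("value").alias("avg_value"))

    summary.show()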
Platforms
• The key components of the pipeline: for each component you should select a suitable big data platform (e.g. a specific data store, file system or analytic engine) and describe the purpose of that component within your solution.
• Interaction/integration between components, e.g. writing results from the analytic engine to a data store.
• Software and services that would need to be installed or provisioned and the process of doing so in each case.
• Implementation details, for example: file formats in cases where file system storage will be used; query languages/mechanisms to be used, etc.
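To illustrate the kind of implementation detail expected, the sketch below assumes Parquet as the file format and Spark SQL as the query mechanism; the data, paths and column names are hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("example-storage").getOrCreate()

    # Hypothetical aggregated results produced by an earlier pipeline stage
    summary = spark.createDataFrame(
        [("Site A", "NO2", 41.2), ("Site B", "NO2", 35.7)],
        ["site", "pollutant", "avg_value"],
    )

    # Store the results using a columnar file format (Parquet)
    summary.write.mode("overwrite").parquet("/data/summary.parquet")

    # Register the stored data as a temporary view and query it with Spark SQL
    spark.read.parquet("/data/summary.parquet").createOrReplaceTempView("summary_view")
    top_sites = spark.sql(
        "SELECT site, avg_value FROM summary_view WHERE pollutant = 'NO2' ORDER BY avg_value DESC"
    )
    top_sites.show()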
You should base your choices on the module content and on additional research, and you should justify your choices. You should include appropriate references. Marks will be awarded on the basis of depth, completeness and relevance of the content within each of the above areas. Your report should be submitted in the form of a Word or PDF document.
B. Prototype
You should implement a prototype that illustrates the processing stages required for your solution to part A, for example ETL, query, visualisation.
You should prepare your complete prototype in the form of a Databricks notebook using Apache Spark for data processing, and you should use markdown cells to document your work. The first markdown cell should contain a descriptive title for your prototype and your name and student number. It is suggested that you use Python as the programming language for your implementation, although Scala is an option on Databricks.
Each processing stage of your pipeline should be represented by one or more executable notebook cells. Storage within your pipeline may be represented by file storage in the Databricks filesystem or by in-memory data structures. Your comments at each point should explain the purpose of the processing and where it fits into the overall data pipeline. It should be clear in your prototype where data is being transferred from a storage platform to an analytic platform, or vice versa.
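For example, a single ETL cell might look like the sketch below; it assumes a hypothetical CSV file already uploaded to the Databricks filesystem (DBFS), and the path and column names would be replaced with those of your own dataset.

    # ETL cell: illustrates data being transferred from file storage (DBFS,
    # the storage platform here) into Spark DataFrames (the analytic platform).
    from pyspark.sql import functions as F

    # Extract: read the raw file from the Databricks filesystem
    # (`spark` is predefined in a Databricks notebook)
    raw = spark.read.option("header", True).csv("/FileStore/tables/my_dataset.csv")

    # Transform: cast the measurement column to a numeric type and drop missing values
    clean = (raw
             .withColumn("value", F.col("value").cast("double"))
             .dropna(subset=["value"]))

    # Load: write the cleaned data back to DBFS as Parquet, representing the
    # storage stage of the pipeline
    clean.write.mode("overwrite").parquet("/FileStore/tables/my_dataset_clean.parquet")

A markdown cell placed above a cell such as this would state which pipeline stage it represents and why.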