Generate the corpus vocabulary with the same structure as sample_vocab.txt

Reference no: EM132617886

Task 1: Parsing Text Files

This assessment touches on the very first step of analyzing textual data, i.e., extracting data from semi-structured text files. Each student is provided with a dataset that contains information about COVID-19 related tweets (please find your own directory "part1" from here). Each text file contains information about the tweets, i.e., the "id", "text", and "created_at" attributes. Your task is to extract the data and transform it into XML format with the following elements:

1. id: a 19-digit number (a quick format check follows this list).

2. text: the actual tweet.

3. created_at: the date and time at which the tweet was created.
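
As a hedged illustration of the first element's format, the snippet below (using a made-up id value) checks that a string consists of exactly 19 digits; your own extraction pattern should enforce the same constraint.

    import re

    # Illustrative only: "1234567890123456789" is a made-up value standing in
    # for an extracted id. fullmatch anchors the pattern to the whole string,
    # so only strings of exactly 19 digits pass.
    assert re.fullmatch(r"\d{19}", "1234567890123456789") is not None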

The XML file must have the same structure as the sample folder. Please note that, as we are dealing with large datasets, manual checking of outputs is impossible and output files will be processed and marked automatically. Therefore, any deviation from the XML structure (i.e., sample.xml), such as wrong key names (caused by different spelling, different upper/lower case, etc.), a wrong hierarchy, or unhandled XML special characters, will result in a zero for the output mark, as the marking script will fail to load your file. (Hint: run your code on the provided example and make sure that it produces exactly the same output as the sample output. You can also use the "xmltodict" package to make sure that your XML is loadable.) Besides the XML structure, the following constraints must also be satisfied (a sketch of the workflow under these constraints follows the list):

1. The "id"s must be unique, so if there are multiple instances of the same tweets, you must only keep one of them in your final XML file.

2. Non-English tweets should be filtered out from the dataset, and the final XML should only contain tweets in the English language. For the sake of consistency, you must use the langid package to classify the language of a tweet.

3. The re, os, and langid packages are the only Python packages you are allowed to use for Task 1 of this assessment (e.g., "pandas" is not allowed!). Any other package that you would need to "import" before use is not allowed.
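
A minimal sketch of that workflow, under the constraints above, is given below. The layout of the raw text files is not reproduced in this brief, so TWEET_RE, the output file name, and the element names written at the end are hypothetical placeholders; the authoritative structure is whatever sample.xml defines, and your own regular expressions must be designed around your own "part1" files.

    import os
    import re
    import langid

    # Hypothetical pattern: the real expressions must be designed around the
    # layout of your own "part1" files ("id", "text", "created_at" fields).
    TWEET_RE = re.compile(
        r'"id":\s*"(?P<id>\d{19})".*?'
        r'"text":\s*"(?P<text>.*?)".*?'
        r'"created_at":\s*"(?P<created_at>[^"]+)"',
        re.DOTALL,
    )

    def escape_xml(text):
        """Escape XML special characters by hand (no XML libraries are allowed)."""
        return (text.replace("&", "&amp;")
                    .replace("<", "&lt;")
                    .replace(">", "&gt;")
                    .replace('"', "&quot;")
                    .replace("'", "&apos;"))

    tweets = {}  # keyed by id, so each duplicate tweet is kept only once
    for name in os.listdir("part1"):
        with open(os.path.join("part1", name), encoding="utf-8") as f:
            raw = f.read()
        for m in TWEET_RE.finditer(raw):
            text = m.group("text")
            if langid.classify(text)[0] != "en":   # keep English tweets only
                continue
            tweets.setdefault(m.group("id"), (text, m.group("created_at")))

    # Write the output; the element names and hierarchy below are placeholders,
    # and the authoritative structure is whatever sample.xml defines.
    with open("student_number.xml", "w", encoding="utf-8") as out:
        out.write('<?xml version="1.0" encoding="UTF-8"?>\n<data>\n')
        for tid, (text, created_at) in tweets.items():
            out.write('  <tweet id="%s" created_at="%s">%s</tweet>\n'
                      % (tid, created_at, escape_xml(text)))
        out.write('</data>\n')

The setdefault call implements the uniqueness constraint (the first occurrence of an id wins); substitute whichever duplicate-handling rule your own interpretation of the brief requires.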

The output and the documentation will be marked separately in this task, and each carries its own mark.

Output
See sample.xml for detailed information about the output structure. The following must be performed to complete the assessment.
• Designing efficient regular expressions in order to extract the data from your dataset
• Storing and submitting the extracted data into an XML file, <your_student_number>.xml, following the format of sample.xml (a loadability check sketch follows this list)
• Explaining your code and your methodology in task1_<your_student_number>.ipynb
• A pdf file, "task1_<your_student_number>.pdf". You can first clear all the output in the jupyter notebook task1_<your_student_number>.ipynb and then export it as a pdf file. This pdf will be passed to Turnitin for plagiarism check.
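
As suggested in the hint above, a quick way to confirm that the submitted file is loadable is to round-trip it through xmltodict; "123456789.xml" below is a placeholder for your own <your_student_number>.xml.

    import xmltodict

    # Parse the generated file; if parse() raises no exception, the XML is
    # well-formed and should at least be loadable by the marking script.
    with open("123456789.xml", encoding="utf-8") as f:
        parsed = xmltodict.parse(f.read())

    # Compare the resulting keys and hierarchy against a parse of sample.xml.
    print(list(parsed.keys()))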

Methodology
The report should demonstrate the methodology (including all steps) to achieve the correct results.

Documentation

The solution used to produce the output must be explained in a well-formatted report (with appropriate sections and subsections). Please remember that the report must explain both the obtained results and the approach used to produce those results. You need to explain both the designed regular expressions and the approach you took to design them.

Task 2: Text Pre-Processing

This assessment touches on the next step of analyzing textual data, i.e., converting the extracted data into a proper format. In this assessment, you are required to write Python code to preprocess a set of tweets and convert them into numerical representations (which are suitable as input to recommender-system/information-retrieval algorithms).

The dataset that we provide contains 80+ days of COVID-19 related tweets (from late March to mid-July 2020). Please find your .xlsx file in the folder "part2" from this link. The Excel file contains 80+ sheets, where each sheet contains 2000 tweets. Your task is to extract and transform the information in the Excel file by performing the following tasks:

1. Generate the corpus vocabulary with the same structure as sample_vocab.txt. Please note that the vocabulary must be sorted alphabetically.

2. For each day (i.e., each sheet in your Excel file), calculate the top-100 frequent unigrams and the top-100 frequent bigrams according to the structure of sample_100uni.txt and sample_100bi.txt. If you have fewer than 100 bigrams for a particular day, just include the top-n bigrams for that day (n < 100).

3. Generate the sparse representation (i.e., the doc-term matrix) of the Excel file according to the structure of sample_countVec.txt.

Please note that the following steps must be performed (not necessarily in the same order) to complete the assessment; a sketch combining them appears after the list.

1. Using the "langid" package, only keeps the tweets that are in English language.

2. The word tokenization must use the following regular expression, "[a-zA-Z]+(?:[-'][a-zA-Z]+)?"

3. The context-independent and context-dependent (with the threshold set to more than 60 days) stop words must be removed from the vocab. The provided context-independent stop words list (i.e., stopwords_en.txt) must be used.

4. Tokens should be stemmed using the Porter stemmer.

5. Rare tokens (with the threshold set to less than 5 days) must be removed from the vocab.

6. The sparse matrix must be created using CountVectorizer.

7. Tokens with a length of less than 3 should be removed from the vocab.

8. The first 200 meaningful bigrams (i.e., collocations) must be included in the vocab.
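
A sketch combining these steps is given below. It is a hedged outline, not a reference solution: the output file name, the "text" column name, the exact ordering of the filtering steps, and the PMI ranking used for the collocations are all assumptions that you would need to adapt and justify in your own report; pandas, NLTK, and scikit-learn are assumed to be permitted for Task 2 (the Task 1 package restriction does not apply here).

    import re
    from collections import Counter

    import langid
    import pandas as pd
    from nltk.stem import PorterStemmer
    from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder
    from sklearn.feature_extraction.text import CountVectorizer

    # Placeholder file name; sheet_name=None loads every sheet (one sheet = one day).
    sheets = pd.read_excel("student_number.xlsx", sheet_name=None)

    TOKEN_RE = re.compile(r"[a-zA-Z]+(?:[-'][a-zA-Z]+)?")      # step 2
    stemmer = PorterStemmer()                                   # step 4
    stopwords = set(open("stopwords_en.txt").read().split())    # step 3 (context-independent)

    # Tokenise the English tweets of each day (the "text" column name is an assumption).
    day_tokens = {}
    for day, df in sheets.items():
        tokens = []
        for tweet in df["text"].astype(str):
            if langid.classify(tweet)[0] != "en":               # step 1
                continue
            words = TOKEN_RE.findall(tweet.lower())
            tokens.extend(w for w in words
                          if w not in stopwords and len(w) >= 3)  # steps 3 and 7
        day_tokens[day] = tokens

    # Task 2.2: per-day top-100 unigrams (bigram counts can be built analogously).
    top100_uni = {day: Counter(toks).most_common(100) for day, toks in day_tokens.items()}

    # Steps 3 and 5: document frequency over days, computed on stemmed tokens, to
    # drop context-dependent (> 60 days) and rare (< 5 days) vocabulary entries.
    df_counts = Counter()
    for toks in day_tokens.values():
        df_counts.update({stemmer.stem(t) for t in toks})
    vocab = {t for t, c in df_counts.items() if 5 <= c <= 60}
    # Task 2.1: the vocabulary file must list sorted(vocab), i.e. alphabetical order.

    # Step 8: first 200 meaningful bigrams (ranking them by PMI is an assumption).
    bigram_measures = BigramAssocMeasures()
    finder = BigramCollocationFinder.from_documents(day_tokens.values())
    bigrams = finder.nbest(bigram_measures.pmi, 200)

    # Step 6: sparse doc-term matrix, one "document" per day, built from the
    # surviving stemmed unigrams; str.split keeps hyphens and apostrophes intact.
    day_docs = [" ".join(s for s in (stemmer.stem(t) for t in toks) if s in vocab)
                for toks in day_tokens.values()]
    doc_term = CountVectorizer(analyzer=str.split).fit_transform(day_docs)
    print(doc_term.shape)                                       # (days, vocabulary size)

Whether stemming happens before or after the stop-word removal, and whether the bigrams are recomputed on the filtered tokens, are deliberate choices the brief leaves open; document whichever ordering you adopt in your report.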
