K-nearest neighbor for text classification, Computer Engineering

Assignment Help:

Assignment 2: K-nearest neighbor for text classification.

The goal of text classification is to identify the topic of a piece of text (a news article, a blog post, etc.). Text classification has obvious utility in the age of information overload, and it has become a popular testbed for applying machine learning algorithms. In this project, you will have the opportunity to implement k-nearest neighbor and apply it to text classification on the well-known Reuters news collection.

1. Download the dataset from my website. It was created from the original collection and contains a training file, a test file, the list of topics, and a description of the train/test file format.

2. Implement the k-nearest neighbor algorithm for text classification. Your goal is to predict the topic of each news article in the test set. Try the following distance or similarity measures with their corresponding representations (a sketch of all three appears after the list).

a. Hamming distance: each document is represented as a Boolean vector, where each bit represents whether the corresponding word appears in the document.

b. Euclidean distance: each document is represented as a numeric vector, where each number represents how many times the corresponding word appears in the document (it could be zero).

c. Cosine similarity with TF-IDF weights (a popular metric in information retrieval): each document is represented by a numeric vector as in (b), but now each number is the TF-IDF weight for the corresponding word (defined below). The similarity between two documents is the dot product of their corresponding vectors divided by the product of their norms.
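For concreteness, here is a minimal Python sketch of the three measures, assuming each document has already been parsed into a dict mapping each word to its count (the names hamming_distance, euclidean_distance, cosine_similarity, and vocabulary are illustrative, not part of the provided files):

import math

def hamming_distance(doc_a, doc_b, vocabulary):
    # (a) Count the vocabulary words that appear in exactly one of the two documents.
    return sum((w in doc_a) != (w in doc_b) for w in vocabulary)

def euclidean_distance(doc_a, doc_b, vocabulary):
    # (b) Treat each document as a vector of word counts over the vocabulary.
    return math.sqrt(sum((doc_a.get(w, 0) - doc_b.get(w, 0)) ** 2 for w in vocabulary))

def cosine_similarity(weights_a, weights_b):
    # (c) weights_a and weights_b map each word to its TF-IDF weight (see step 3).
    dot = sum(v * weights_b.get(w, 0.0) for w, v in weights_a.items())
    norm_a = math.sqrt(sum(v * v for v in weights_a.values()))
    norm_b = math.sqrt(sum(v * v for v in weights_b.values()))
    return 0.0 if norm_a == 0.0 or norm_b == 0.0 else dot / (norm_a * norm_b)

Note that (a) and (b) are distances (smaller means closer), while (c) is a similarity (larger means closer); this distinction matters in step 4.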

3. Let w be a word, d be a document, and N(d,w) be the number of occurrences of w in d (i.e., the number in the vector in (b)). TF stands for term frequency, and TF(d,w) = N(d,w)/W(d), where W(d) is the total number of words in d. IDF stands for inverse document frequency, and IDF(w) = log(D/C(w)), where D is the total number of documents and C(w) is the number of documents that contain the word w; the base of the logarithm is irrelevant, so you can use e or 2. The TF-IDF weight for w in d is TF(d,w)*IDF(w); this is the number you should put in the vector in (c). TF-IDF is a clever heuristic that takes into account the "information content" each word conveys, so that frequent words like "the" are discounted and document-specific ones are amplified. You can find more details online or in a standard IR textbook.
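As a sketch of this weighting scheme (document frequencies come from the training set; the names tfidf_weights, doc_counts, and corpus_counts are placeholders, not part of the dataset format):

import math

def tfidf_weights(doc_counts, corpus_counts):
    # doc_counts: word -> N(d, w) for one document d.
    # corpus_counts: list of such dicts, one per training document.
    D = len(corpus_counts)
    doc_freq = {}                                  # C(w): number of documents containing w
    for counts in corpus_counts:
        for w in counts:
            doc_freq[w] = doc_freq.get(w, 0) + 1
    total_words = sum(doc_counts.values())         # W(d)
    weights = {}
    for w, n in doc_counts.items():
        tf = n / total_words                       # TF(d, w) = N(d, w) / W(d)
        idf = math.log(D / doc_freq.get(w, 1))     # IDF(w) = log(D / C(w)); unseen words get C(w) = 1
        weights[w] = tf * idf
    return weights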

4. You should try k = 1, k = 3, and k = 5 with each of the representations above. Notice that with a distance measure, the k nearest neighbors are the ones with the smallest distance to the test point, whereas with a similarity measure, they are the ones with the highest similarity scores.
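One possible shape for the prediction step, assuming train is a list of (representation, topic) pairs and score is a two-argument function such as one of the measures sketched above (for the distance measures you could bind the vocabulary with functools.partial); the names here are again illustrative:

from collections import Counter

def knn_predict(test_repr, train, score, k, larger_is_better):
    # Score every training document against the test document.
    scored = [(score(test_repr, train_repr), topic) for train_repr, topic in train]
    # Sort descending for similarities, ascending for distances.
    scored.sort(key=lambda pair: pair[0], reverse=larger_is_better)
    top_topics = [topic for _, topic in scored[:k]]
    # Majority vote among the k nearest neighbors (ties broken arbitrarily).
    return Counter(top_topics).most_common(1)[0][0]

Running this with k = 1, 3, and 5 for each of the three representations lets you compare the resulting predictions on the test set.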

 

 

