K-nearest neighbor for text classification

Assignment 2: K-nearest neighbor for text classification.

The goal of text classification is to identify the topic of a piece of text (news article, blog post, etc.). Text classification has obvious utility in the age of information overload, and it has become a popular testbed for machine learning algorithms. In this project, you will have the opportunity to implement k-nearest neighbor and apply it to text classification on the well-known Reuters news collection.

1. Download the dataset from my website. It was created from the original collection and contains a training file, a test file, the list of topics, and a description of the train/test file format.

2. Implement the k-nearest neighbor algorithm for text classification. Your goal is to predict the topic of each news article in the test set. Try the following distance or similarity measures with their corresponding representations (minimal sketches of all three appear after this list).

a. Hamming distance: each document is represented as a boolean vector, where each bit represents whether the corresponding word appears in the document.

b. Euclidean distance: each document is represented as a numeric vector, where each number represents how many times the corresponding word appears in the document (it could be zero).

c. Cosine similarity with TF-IDF weights (a popular metric in information retrieval): each document is represented by a numeric vector as in (b). However, now each number is the TF-IDF weight for the corresponding word (as defined below). The similarity between two documents is the dot product of their corresponding vectors, divided by the product of their norms.

3. Let w be a word, d be a document, and N(d,w) the number of occurrences of w in d (i.e., the number in the vector in (b)). TF stands for term frequency, and TF(d,w) = N(d,w)/W(d), where W(d) is the total number of words in d. IDF stands for inverse document frequency, and IDF(w) = log(D/C(w)), where D is the total number of documents and C(w) is the number of documents that contain the word w; note that IDF depends only on the word, not on any particular document. The base of the logarithm is irrelevant; you can use e or 2. The TF-IDF weight of w in d is TF(d,w)*IDF(w); this is the number you should put in the vector in (c). TF-IDF is a clever heuristic that accounts for the "information content" each word conveys, so that frequent words like "the" are discounted and document-specific ones are amplified (see the TF-IDF sketch after this list). You can find more details online or in any standard IR textbook.

4. Try k = 1, k = 3, and k = 5 with each of the representations above. Notice that with a distance measure, the k nearest neighbors are the training documents with the smallest distance to the test point, whereas with a similarity measure, they are the ones with the highest similarity scores (see the prediction sketch below).
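For reference, here is a minimal sketch of the three measures in Python. It assumes each document is stored as a dict mapping word to count, with vocab the union of all words seen; these names and this representation are illustrative choices, not part of the provided dataset format.

    import math

    def hamming_distance(d1, d2, vocab):
        # (a) Boolean view: count vocabulary positions where presence differs.
        return sum((w in d1) != (w in d2) for w in vocab)

    def euclidean_distance(d1, d2, vocab):
        # (b) Count view: straight-line distance between the count vectors.
        return math.sqrt(sum((d1.get(w, 0) - d2.get(w, 0)) ** 2 for w in vocab))

    def cosine_similarity(v1, v2):
        # (c) TF-IDF view: dot product divided by the product of the norms.
        dot = sum(x * v2.get(w, 0.0) for w, x in v1.items())
        n1 = math.sqrt(sum(x * x for x in v1.values()))
        n2 = math.sqrt(sum(x * x for x in v2.values()))
        return dot / (n1 * n2) if n1 and n2 else 0.0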
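The TF-IDF weights in (c) follow directly from the definitions in step 3. This sketch computes one weight vector per document; it assumes IDF is estimated from whatever collection is passed in (typically the training set), and tfidf_vectors is an illustrative name.

    import math
    from collections import Counter

    def tfidf_vectors(docs):
        # docs: list of token lists, one per document.
        # Returns a dict of word -> TF-IDF weight for each document, using
        # TF(d,w) = N(d,w)/W(d) and IDF(w) = log(D/C(w)) as defined in step 3.
        D = len(docs)
        doc_freq = Counter(w for doc in docs for w in set(doc))  # C(w)
        vectors = []
        for doc in docs:
            counts = Counter(doc)   # N(d,w)
            total = len(doc)        # W(d)
            vectors.append({w: (n / total) * math.log(D / doc_freq[w])
                            for w, n in counts.items()})
        return vectors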
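Finally, prediction in step 4 reduces to a scored sort plus a majority vote. A sketch, again with illustrative names; the higher_is_closer flag captures the distance-versus-similarity distinction noted above.

    from collections import Counter

    def knn_predict(test_doc, train_docs, train_labels, score, k,
                    higher_is_closer=False):
        # Score every training document against the test document, keep the
        # k nearest, and take a majority vote over their topics.
        scored = sorted(((score(test_doc, d), label)
                         for d, label in zip(train_docs, train_labels)),
                        key=lambda pair: pair[0], reverse=higher_is_closer)
        return Counter(label for _, label in scored[:k]).most_common(1)[0][0]

For example, knn_predict(doc, train_docs, train_labels, lambda a, b: hamming_distance(a, b, vocab), k=3) runs variant (a); for variant (c), precompute TF-IDF vectors and pass cosine_similarity with higher_is_closer=True.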
