Example calculation of entropy, Computer Engineering


Example Calculation:

Suppose we are working with a set of examples S = {s1, s2, s3, s4} under a binary categorisation into positives and negatives, such that s1 is positive and the rest are negative. Suppose further that we want to calculate the information gain of an attribute A, and that A can take the values {v1, v2, v3}. Finally, assume that the attribute values are:

A(s1) = v2, A(s2) = v2, A(s3) = v3, A(s4) = v1

To work out the information gain for A relative to S, we first need to calculate the entropy of S. To use our formula for binary categorisations, we need to know the proportion of positives and the proportion of negatives in S. These are p+ = 1/4 and p- = 3/4, so we can calculate:

Entropy(S) = -(1/4)log2(1/4) - (3/4)log2(3/4) = -(1/4)(-2) - (3/4)(-0.415) = 0.5 + 0.311 = 0.811
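As a sanity check, here is a minimal Python sketch (the function name binary_entropy is our own) that reproduces this value, taking 0*log2(0) to be zero as the text does below:

    import math

    def binary_entropy(p_pos, p_neg):
        """Entropy of a binary categorisation; 0*log2(0) is taken as 0."""
        return -sum(p * math.log2(p) for p in (p_pos, p_neg) if p > 0)

    print(binary_entropy(1/4, 3/4))  # ~0.811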

Note that to do this calculation on a calculator, you may need to remember that log2(x) = ln(x)/ln(2), where ln is the natural logarithm. Next, we need to calculate the weighted entropy Entropy(Sv) for each value v = v1, v2, v3, where the weighting involves multiplying by (|Sv|/|S|). Remember that Sv is the set of examples from S which have value v for attribute A. This means that: Sv1 = {s4}, Sv2 = {s1, s2}, Sv3 = {s3}.
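For instance, a quick check in Python (using the standard math module) confirms the log identity:

    import math

    # log2(x) = ln(x)/ln(2): both expressions agree
    print(math.log(0.75) / math.log(2))  # -0.415...
    print(math.log2(0.75))               # -0.415...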

We now need to carry out the following calculations:

(|Sv1|/|S|) * Entropy(Sv1) = (1/4) * (-(0/1)log2(0/1) - (1/1)log2(1/1)) = (1/4)(-0 - (1)log2(1)) = (1/4)(-0 - 0) = 0

(|Sv2|/|S|) * Entropy(Sv2) = (2/4) * (-(1/2)log2(1/2) - (1/2)log2(1/2)) = (1/2) * (-(1/2)(-1) - (1/2)(-1)) = (1/2) * (1) = 1/2

(|Sv3|/|S|) * Entropy(Sv3) = (1/4) * (-(0/1)log2(0/1) - (1/1)log2(1/1)) = (1/4)(-0 - (1)log2(1)) = (1/4)(-0 - 0) = 0

Note that we have taken 0*log2(0) to be zero, which is standard. In our calculation, we only required log2(1) = 0 and log2(1/2) = -1. We now add these three values together and subtract the sum from our result for Entropy(S) to give the final answer:

Gain(S,A) = 0.811 - (0 + 1/2 + 0) = 0.311 
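Putting the whole calculation together, here is a self-contained Python sketch (the function and variable names, such as information_gain, are our own, and the attribute values are those assumed above):

    import math
    from collections import defaultdict

    def entropy(labels):
        """Entropy of a list of category labels; 0*log2(0) is taken as 0."""
        n = len(labels)
        return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                    for c in set(labels))

    def information_gain(examples, attribute, label):
        """Gain(S, A) = Entropy(S) - sum over v of (|Sv|/|S|) * Entropy(Sv)."""
        gain = entropy([label[s] for s in examples])
        subsets = defaultdict(list)
        for s in examples:
            subsets[attribute[s]].append(label[s])  # group S into the sets Sv
        for sv in subsets.values():
            gain -= (len(sv) / len(examples)) * entropy(sv)
        return gain

    # The example from the text: s1 is positive, the rest negative, and
    # A(s1) = v2, A(s2) = v2, A(s3) = v3, A(s4) = v1.
    S = ["s1", "s2", "s3", "s4"]
    A = {"s1": "v2", "s2": "v2", "s3": "v3", "s4": "v1"}
    label = {"s1": "+", "s2": "-", "s3": "-", "s4": "-"}
    print(information_gain(S, A, label))  # ~0.311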

Next, we will look at how information gain can be used in practice in an algorithm to construct decision trees.

