# Esaias Pech

## Software Design Engineer, Georgia Tech OMSCS Student, Udacity Self Driving Car Nanodegree student

# Exercise 3.2

This is the truth table:

Entropy of the collection is 1:

The Information Gain of a2 is zero:
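The two results stated for this exercise can be checked with a short entropy/information-gain computation. The sketch below is mine, not from the book; the `data` list is my transcription of the exercise's six-example training set (three positive, three negative, over attributes `a1` and `a2`) and may differ from your edition.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a collection of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, attr):
    """Information gain of splitting `rows` (dicts with a 'label' key) on `attr`."""
    gain = entropy([r["label"] for r in rows])
    for value in {r[attr] for r in rows}:
        subset = [r["label"] for r in rows if r[attr] == value]
        gain -= len(subset) / len(rows) * entropy(subset)
    return gain

# My transcription of the Exercise 3.2 training set (may differ by edition):
data = [
    {"a1": "T", "a2": "T", "label": "+"},
    {"a1": "T", "a2": "T", "label": "+"},
    {"a1": "T", "a2": "F", "label": "-"},
    {"a1": "F", "a2": "F", "label": "+"},
    {"a1": "F", "a2": "T", "label": "-"},
    {"a1": "F", "a2": "T", "label": "-"},
]

print(entropy([r["label"] for r in data]))  # 1.0 — a 3/3 split is maximally uncertain
print(info_gain(data, "a2"))                # ~0.0 — a2 leaves the +/- ratio unchanged
```

Splitting on a2 produces subsets that each still contain equal numbers of positive and negative examples, so the expected entropy after the split is unchanged and the gain is zero.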

# Exercise 3.1.d

# Exercise 3.1.c XOR

# Exercise 3.1.b

# Solutions to exercises found in Machine Learning by Tom M. Mitchell

# Exercise 3.1.a

This is the truth table:

This is a summary obtained from the truth table; the entropy shown is for the whole set:

From the truth table, we calculate the Entropy and Gain of every attribute:

ID3 uses the Information Gain to decide which attribute to use as the next node in the tree; in this case we select A as the root node since it has the highest Information Gain.
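This selection step can be sketched in a few lines. The example below is a minimal illustration (not the exercise's truth table): a hypothetical four-row dataset in which attribute A perfectly separates the classes while B carries no information, so ID3's highest-gain rule picks A as the root.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a collection of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, attr):
    """Information gain of splitting `rows` (dicts with a 'label' key) on `attr`."""
    gain = entropy([r["label"] for r in rows])
    for value in {r[attr] for r in rows}:
        subset = [r["label"] for r in rows if r[attr] == value]
        gain -= len(subset) / len(rows) * entropy(subset)
    return gain

# Hypothetical data: A predicts the label perfectly, B is pure noise.
rows = [
    {"A": "T", "B": "T", "label": "+"},
    {"A": "T", "B": "F", "label": "+"},
    {"A": "F", "B": "T", "label": "-"},
    {"A": "F", "B": "F", "label": "-"},
]

# ID3's choice of the next node: the attribute with the highest gain.
root = max(["A", "B"], key=lambda a: info_gain(rows, a))
print(root)  # A — Gain(A) = 1.0, Gain(B) = 0.0
```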

I’m taking my 3rd class in the OMSCS program by Georgia Tech, which is Machine Learning, taught by Prof. Charles Isbell and Prof. Michael Littman (I previously took Computer Vision by Prof. Aaron Bobick and Knowledge-Based AI by Prof. David Joyner).

The book that we are using is Machine Learning by Tom M. Mitchell. At the end of every chapter there is a set of exercises. As I was working through the exercises, I often found myself wanting to corroborate my solution to a problem but couldn’t find one, so I decided to document my solutions on my blog in the hope that they help others like me.

If you find a mistake or a better way to do this, please let me know as I would be more than happy to learn from you!

I will do my best to be diligent and consistent at blogging, but this is going to be a low-priority task (more like the ‘idle’ task in my scheduler).

---

Links to solutions: