I started the Self-Driving Car Nanodegree in January 2017. I was originally selected for the October 2016 cohort, but for personal reasons I couldn’t join then.
So far the Nanodegree complements nicely what I’ve been learning in Georgia Tech’s OMSCS program; some of the videos (e.g. the ones on Neural Networks) were taken from the Machine Learning class in the OMSCS program.
The first project is to detect lane lines on the road. The objective is to get familiar with some basic Computer Vision concepts: Gaussian filtering, Canny edge detection, and the Hough Transform.
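The project itself is typically built on OpenCV (`cv2.GaussianBlur`, `cv2.Canny`, `cv2.HoughLinesP`), but the voting idea behind the Hough Transform is easy to sketch in plain NumPy. The function and the synthetic edge image below are illustrative, not the project code:

```python
import numpy as np

def hough_lines(edges, n_thetas=180):
    """Minimal Hough transform: vote every edge pixel into a (rho, theta) grid.

    edges: 2-D 0/1 array, e.g. the output of a Canny edge detector.
    Returns the accumulator, the theta values (radians), and the rho offset.
    """
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))   # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_thetas))
    acc = np.zeros((2 * diag, n_thetas), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # Line in normal form: rho = x*cos(theta) + y*sin(theta).
        # Shift by diag so the rho index is always >= 0.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_thetas)] += 1
    return acc, thetas, diag

# A synthetic "edge image" containing only the diagonal line y = x.
edges = np.zeros((50, 50), dtype=np.uint8)
for i in range(50):
    edges[i, i] = 1

acc, thetas, diag = hough_lines(edges)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
print(theta_idx, acc.max())  # 135 50: all 50 pixels vote for theta = 135 degrees
```

Every pixel on a line votes for the same (rho, theta) cell, so strong lines show up as peaks in the accumulator; `cv2.HoughLinesP` applies the same idea with thresholding and segment extraction on top.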
This is my final result:
This semester (Fall 2016) in the Georgia Tech OMSCS program, I took Computational Photography. It was a great class!
The final assignment is to build a portfolio showcasing our results from the different assignments we completed during the semester.
If you are interested in knowing more about the class, visit the following link: https://cs6475.wordpress.com/fall-2016/
There were 11 assignments:

1. A Photograph is a Photograph: share one picture to get started with the class
2. Image I/O & Python Setup: set up your computing environment
3. 2 Pictures with Epsilon Difference
4. Gradients and Edges: computing with images
5. Build a Pinhole Camera
6. Experiment with Image Blending
7. Use Feature Detection
8. Build a Simple Panorama
9. Experiments with HDR
10. Photos of Space: generate panoramas and Photosynths
11. Build a Video Texture
Click here to see the Portfolio
This is the truth table:
Entropy of the collection is 1:
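The entropy value quoted here follows from the standard definition H(S) = −Σ p·log₂(p). The collection itself is in the book’s figure, so a balanced boolean collection stands in for it in this quick check:

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p)

# A collection with equal numbers of positive and negative examples
print(entropy([0.5, 0.5]))  # 1.0
```

Any boolean collection split evenly between the two classes has entropy exactly 1 bit, which is the maximum for two classes.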
The Information Gain of a2 is zero:
Entropy and Information Gain per attribute:
Decision Tree (solution):
This is the well-known XOR truth table:
This is the summary; notice that the Entropy of the set is 1.
Notice that the Entropy of every attribute value is 1 and the Information Gain of every attribute is zero.
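This zero-gain property of XOR can be verified numerically; a small self-contained sketch (not the book’s notation, just the standard formulas):

```python
import math
from collections import Counter

# XOR truth table: attributes a1, a2 and target a1 XOR a2
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def entropy(labels):
    """Shannon entropy in bits of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain(rows, attr):
    """Information Gain of splitting rows (last column = target) on attr."""
    labels = [r[-1] for r in rows]
    subsets = {}
    for r in rows:
        subsets.setdefault(r[attr], []).append(r[-1])
    remainder = sum(len(s) / len(rows) * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

print(entropy([r[-1] for r in rows]))  # 1.0
print(gain(rows, 0), gain(rows, 1))    # 0.0 0.0
```

Splitting on either attribute leaves two subsets that are themselves perfectly balanced (entropy 1), so the weighted remainder equals the original entropy and the gain is zero — which is why a greedy, gain-driven learner like ID3 has no preferred first split for XOR.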
This is the decision tree (solution):
This is the truth table:
This is a summary obtained from the truth table; the entropy shown is for the whole set:
From the truth table, we calculate the Entropy and Gain of every attribute:
ID3 uses the Information Gain to decide which attribute to use as the next node in the tree. In this case we select A as the root node, since it has the highest Information Gain.
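ID3’s root selection is just an argmax over the per-attribute gains. The table below is made up (the exercise’s table is in the book’s figure) and chosen so that attribute A perfectly predicts the target, giving it the highest gain:

```python
import math
from collections import Counter

# Hypothetical rows (A, B, C, target) -- NOT the exercise's table,
# only chosen so that A alone determines the target.
rows = [
    (1, 0, 0, 1),
    (1, 1, 0, 1),
    (1, 0, 1, 1),
    (0, 1, 1, 0),
    (0, 0, 1, 0),
    (0, 1, 0, 0),
]

def entropy(labels):
    """Shannon entropy in bits of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain(rows, attr):
    """Information Gain of splitting rows (last column = target) on attr."""
    labels = [r[-1] for r in rows]
    subsets = {}
    for r in rows:
        subsets.setdefault(r[attr], []).append(r[-1])
    remainder = sum(len(s) / len(rows) * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

# ID3's greedy choice: the attribute with the highest Information Gain
root = max(range(3), key=lambda a: gain(rows, a))
print("root attribute index:", root)  # 0, i.e. A
```

Because A splits the set into two pure subsets, its remainder is zero and its gain equals the full set entropy (1 bit), so `max` picks it as the root.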
This is the decision tree (solution):
I’m taking my third class in Georgia Tech’s OMSCS program: Machine Learning, taught by Prof. Charles Isbell and Prof. Michael Littman. (I previously took Computer Vision with Prof. Aaron Bobick and Knowledge-Based AI with Prof. David Joyner.)
The book we are using is Machine Learning by Tom M. Mitchell. At the end of every chapter there is a set of exercises; as I worked through them, I often wanted to corroborate my solutions but couldn’t find them published anywhere, so I decided to document them on my blog in the hope that they help others like me.
If you find a mistake or a better way to do this, please let me know as I would be more than happy to learn from you!
I will do my best to be diligent and consistent about blogging, but this is going to be a low-priority task (more like the ‘idle’ task in my scheduler).
Links to solutions: