Lane Detection – Self Driving Car Nanodegree

I started the Self Driving Car Nanodegree in January 2017. I was initially selected for the October 2016 cohort, but for personal reasons I couldn’t join then.

So far the Nanodegree complements nicely what I’ve been learning in Georgia Tech’s OMSCS program; some of the videos used (e.g. Neural Networks) were taken from the Machine Learning class in the OMSCS program.

The first project is to detect lane lines on the road. The objective is to get familiar with some basic concepts of Computer Vision (Gaussian filter, Hough Transform, Canny Edge Detection).

This is my final result:

Computational Photography Portfolio

This semester (Fall 2016) in the Georgia Tech OMSCS program, I took Computational Photography. It was a great class!

The final assignment is to make a portfolio to showcase our results from the different assignments we had during the semester.

If you are interested in knowing more about the class, visit the following link: https://cs6475.wordpress.com/fall-2016/

There were 11 assignments:

| Assignment # | Title | Goal |
|---|---|---|
| 1 | A Photograph is a Photograph | Share one picture to get started with the class |
| 2 | Image I/O & Python Setup | Set up your computing environment |
| 3 | Epsilon Photography | Two pictures with an epsilon difference |
| 4 | Gradients and Edges | Computing with images |
| 5 | Camera Obscura | Build a pinhole camera |
| 6 | Blending | Experiment with image blending |
| 7 | Feature Detection | Use feature detection |
| 8 | Panoramas | Build a simple panorama |
| 9 | HDR | Experiments with HDR |
| 10 | Photos of Space | Generate panoramas and photosynths |
| 11 | Video Textures | Build a video texture |

Click here to see the Portfolio

Exercise 3.1.b

This is the truth table:

3_1_b_TruthTable

This is a summary obtained from the truth table; the entropy shown is for the whole set:

3_1_b_TruthTableSummary

From the truth table, we calculate the entropy and the information gain of every attribute:

3_1_b_Calculations

ID3 uses the information gain to decide which attribute to use as the next node in the tree. In this case we select A as the root node since it has the highest information gain.
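The entropy and information-gain computations behind this choice can be sketched in a few lines. The dataset below is a made-up illustration (the actual truth table for 3.1.b is in the image above); the function names are my own.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels: -sum(p * log2(p))."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(examples, attr, label_key="label"):
    """Entropy reduction from splitting `examples` on attribute `attr`."""
    labels = [e[label_key] for e in examples]
    before = entropy(labels)
    # Weighted average entropy of the subsets induced by each attribute value
    after = 0.0
    for value in {e[attr] for e in examples}:
        subset = [e[label_key] for e in examples if e[attr] == value]
        after += len(subset) / len(examples) * entropy(subset)
    return before - after
```

For example, if attribute A perfectly predicts the label while B is uncorrelated with it, `information_gain` returns the full set entropy for A and 0 for B, so ID3 would pick A, exactly the reasoning used above.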

This is the decision tree (solution):
3_1_b_DecisionTree

Solutions to exercises found in Machine Learning by Tom M. Mitchell

I’m taking my 3rd class in the OMSCS program at Georgia Tech, which is Machine Learning, taught by Prof. Charles Isbell and Prof. Michael Littman (I previously took Computer Vision by Prof. Aaron Bobick and Knowledge-Based AI by Prof. David Joyner).

The book we are using is Machine Learning by Tom M. Mitchell. At the end of every chapter there is a set of exercises. As I was working through them, I often found myself wanting to corroborate my solutions, but I couldn’t find them anywhere, so I decided to document them on my blog in the hope that they help others like me.

If you find a mistake or a better way to do this, please let me know as I would be more than happy to learn from you!

I will do my best to be diligent and consistent with blogging, but this is going to be a low-priority task (more like the ‘idle’ task in my scheduler).

—–

Links to solutions:

Exercise 3.1.a

Exercise 3.1.b

Exercise 3.1.c

Exercise 3.1.d

Exercise 3.2