Today I decided to play around with a Beaglebone Blue that I bought last week.
It’s very easy to bring it up:
1) I followed this guide: http://beagleboard.org/getting-started. I tried to install the drivers for Windows 10, but the installation kept failing; then I noticed that the same Getting Started guide says the drivers aren't necessary if you're running one of the latest builds.
2) After flashing an SD card with the latest Debian build, I powered up the board and my PC was assigned an IP address (192.168.7.1).
3) I logged in to the BBB via SSH, via a web browser, and via Cloud9. Cloud9 is a very cool IDE; it isn't super fast, but it's very useful, and I can see myself using it going forward.
4) Since I’m a student at Georgia Tech, I have access to a free full license of MATLAB, so I decided to try Simulink. It’s actually very easy: I downloaded and installed the support package (https://www.mathworks.com/hardware-support/beaglebone-blue.html), then ran and modified the GettingStarted Simulink file (see image below).
Very simple to implement and to see running on the hardware. The whole process, from flashing the SD card to reading a switch and controlling LEDs in Simulink, took less than an hour. I think I’m going to have a lot of fun with the BeagleBone Blue! (They also have blocks to control motors!)
“In probability theory, the expected value of a random variable, intuitively, is the long-run average value of repetitions of the experiment it represents.”
I decided to give it a try in MATLAB.
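As a quick sanity check (my own arithmetic, not part of the quote above), the theoretical expected value of a fair six-sided die is:

```latex
E[X] = \sum_{k=1}^{6} k \cdot \frac{1}{6} = \frac{1+2+3+4+5+6}{6} = 3.5
```

so the running mean computed in the code below should converge toward 3.5.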
% Expected Value of a Die
numOfIterations = 100000;
meanArray = zeros([1 numOfIterations]);
values = zeros([1 numOfIterations]);
for i = 1:numOfIterations
    % Get a random integer in the range [1, 6]
    values(i) = randi(6);
    % Calculate the running mean so far
    meanArray(i) = mean(values(1:i));
end
% Display the latest calculated mean
disp(meanArray(numOfIterations));
This is the result:
The following code creates three floating-point matrices in RAM (imgA, imgB, imgC) and three on the GPU (cuImgA, cuImgB, cuImgC). It initializes imgA to 1.0 and imgB to 0.13, subtracts them on the GPU, copies the result from the GPU back to RAM, and displays the resulting image.
Hope this helps!
#include <iostream>
#include "opencv2/core.hpp"       // Mat, Size
#include "opencv2/core/cuda.hpp"  // cuda::GpuMat, device queries
#include "opencv2/cudaarithm.hpp" // cuda::absdiff
#include "opencv2/highgui.hpp"    // OpenCV window I/O

using namespace std;
using namespace cv;

int main()
{
    // Declare operands
    Mat imgA(Size(512, 512), CV_32F);
    Mat imgB(Size(512, 512), CV_32F);
    Mat imgC;
    cuda::GpuMat cuImgA, cuImgB, cuImgC;

    // Display information on the current environment setup
    cout << "CUDA Device count: \t" << cuda::getCudaEnabledDeviceCount() << endl;
    cout << "CUDA Device: " << cuda::getDevice() << endl;

    // Set the images to different values
    imgA = 1;
    imgB = 0.13;

    // Copy data from RAM to GPU
    cuImgA.upload(imgA);
    cuImgB.upload(imgB);

    // Perform arithmetic operation
    cuda::absdiff(cuImgA, cuImgB, cuImgC);

    // Download result to RAM from GPU
    cuImgC.download(imgC);

    cout << "Image A: " << imgA.at<float>(255, 0) << endl;
    cout << "Image B: " << imgB.at<float>(0, 255) << endl;
    cout << "Image C: " << imgC.at<float>(255, 255) << endl;

    // Display image
    imshow("Result", imgC);

    // Wait for a keystroke in the window
    waitKey(0);
    return 0;
}
To build OpenCV with CUDA support on Windows, you will need:
- CUDA Toolkit (I’m using version 9.1)
- OpenCV (I’m using version 3.4.1)
- CMake (I’m using version 3.10.1)
- Visual Studio 2017
Open CMake and select the sources folder as well as the folder where the build files will be generated (I created a folder named buildDATE).
Click on “Configure”. It will take some time to configure everything, and it may ask you to select the “Generator” (in this case, select Visual Studio 15 2017 Win64).
Click on Generate.
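As an aside, the same configure-and-generate step can be done from the command line instead of the CMake GUI. This is only a sketch: the folder names (buildDATE, opencv-3.4.1) are hypothetical, and the flag set is minimal; the GUI exposes many more options.

```shell
:: Run from inside the build folder, next to the OpenCV sources (hypothetical paths)
cd buildDATE
cmake -G "Visual Studio 15 2017 Win64" ^
      -D WITH_CUDA=ON ^
      -D CMAKE_INSTALL_PREFIX=../install ^
      ../opencv-3.4.1
```

This generates the same OpenCV.sln solution that the GUI produces.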
Go to the build folder and open the OpenCV.sln file (the solution for Visual Studio).
Under CMakeTargets, right-click on ALL_BUILD and select Build. This will take a long time (over two hours in my case).
I ran into a problem where the opencv_highgui module was throwing linker errors; I had to manually add window_w32.cpp to the opencv_highgui project to get it to link correctly.
Verify that all the modules built correctly. Every module will generate a .lib and .dll file.
Under CMakeTargets, right click on INSTALL and build. This will take some time but not as much as the ALL_BUILD project. This will generate a folder named “install/x64/vc15” with 2 subfolders: “bin” and “lib”.
After this, you are ready to start developing with OpenCV 3.4.1 + CUDA, next post will be a classic “Hello World”.
I wanted to connect to a cheap Wansview IP camera from Python using OpenCV. It’s very easy; see for yourself:
I started the Self Driving Car Nanodegree in January 2017, I was initially selected for the October 2016 cohort but for personal reasons I couldn’t join then.
So far the Nanodegree complements nicely what I’ve been learning in the Georgia Tech OMSCS program; some of the videos used (e.g. Neural Networks) were taken from the Machine Learning class in the OMSCS program.
The first project is to detect lanes in the road; the objective is to get familiar with some basic concepts of Computer Vision (Gaussian filtering, the Hough transform, Canny edge detection).
This is my final result:
This semester (Fall 2016) in the Georgia Tech OMSCS program, I took Computational Photography. It was a great class!
The final assignment is to make a portfolio to showcase our results from the different assignments we had during the semester.
If you are interested in knowing more about the class, visit the following link: https://cs6475.wordpress.com/fall-2016/
There were 11 assignments:
1. A Photograph is a Photograph: share one picture to get started with the class
2. Image I/O & Python Setup: set up your computing environment
3. Two Pictures with an Epsilon Difference
4. Gradients and Edges: computing with images
5. Build a Pinhole Camera
6. Experiment with Image Blending
7. Use Feature Detection
8. Build a Simple Panorama
9. Experiments with HDR
10. Photos of Space: generate panoramas and PhotoSynths
11. Build a Video Texture
Click here to see the Portfolio