Category Archives: HEXAPOD VISION

MOTION TRACKING USING OPENCV

Hello Friends,

While researching various trackers for my hexapod project, I came across a very simple Python script that tracked objects on the basis of movement. But it was based on the old Matlab API, so I wanted to implement it in OpenCV. Tracking an object in a video is an important part of the field of robotics. For example, suppose you want to track moving vehicles at traffic signals (Project Transpose, IIM Ahmedabad), track moving obstacles for an autonomous robot (Project Hexapod, BITS Pilani K K Birla Goa Campus), or search for signs of life in unmanned areas.

You can download the code from here: https://github.com/abhi-kumar/OPENCV_MISC/blob/master/track_motion.cpp

Let's go through the major snippets of the code.

#include <stdio.h>
#include <cv.h>
#include <highgui.h>

These are the headers for the old C-based OpenCV modules.

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/video/tracking.hpp"

These are the headers for the new C++-based OpenCV modules.

#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

The standard C and C++ libraries.

float MHI_DURATION = 0.05;                
int DEFAULT_THRESHOLD = 32;
float MAX_TIME_DELTA = 12500.0;
float MIN_TIME_DELTA = 5;
int visual_trackbar = 2;

These are the parameters used by the tracking function. Note that they may need tuning depending on the type of camera being used.
1. Timestamp – the current time, in milliseconds or other units.
2. MHI_DURATION – the maximal duration of the motion track, in the same units as the timestamp.
3. DELTA_TIME – the minimal (or maximal) allowed difference between MHI values within a pixel neighborhood.

updateMotionHistory(motion_mask,motion_history,timestamp,MHI_DURATION);			
calcMotionGradient(motion_history, mg_mask, mg_orient, 5, 12500.0, 3);
segmentMotion(motion_history, seg_mask, seg_bounds, timestamp, 32);

To understand these three key calls, go through these links:
1. http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html#updatemotionhistory

2. http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html#calcmotiongradient

3. http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html#segmentmotion
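To build intuition for what updateMotionHistory does, here is a plain-C++ sketch of its per-pixel rule as described in the documentation, operating on flat buffers instead of cv::Mat (a simplified illustration, not OpenCV's actual implementation):

```cpp
#include <vector>

// Per-pixel rule behind a motion-history image (simplified sketch):
// where the motion mask fires, stamp the pixel with the current time;
// elsewhere, clear stamps older than (timestamp - duration).
void updateMotionHistorySketch(const std::vector<unsigned char>& mask,
                               std::vector<float>& mhi,
                               float timestamp, float duration) {
    for (std::size_t i = 0; i < mhi.size(); ++i) {
        if (mask[i])
            mhi[i] = timestamp;               // motion seen right now
        else if (mhi[i] < timestamp - duration)
            mhi[i] = 0.0f;                    // too old, forget it
    }
}
```

Where the mask fires, the pixel is stamped with the current time; everywhere else, stamps older than MHI_DURATION are cleared, which is why the most recent motion shows up as the brightest region of the history image.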

Now, to compile and run the application on Ubuntu:
1. Download the code.
2. Open a terminal and navigate to the folder containing the code (assuming you named the file “track_motion.cpp”).
3. Type:
a) chmod +x track_motion.cpp
b) g++ -ggdb `pkg-config --cflags opencv` -o `basename track_motion.cpp .cpp` track_motion.cpp `pkg-config --libs opencv`
c) ./track_motion

The default trackbar position is the binary view, in which any detected motion is tracked in white. Changing the trackbar position to “1” gives a grayscale view; in the same way, “0” is RGB and “3” is HSV.

Here is a demo video link to get an overview of the different views of the application:

I hope you benefit from this code.

Thank you 🙂


WORKING WITH OPENCV IN WINDOWS

Hello Friends,

Until today I had been working with OpenCV on the Ubuntu platform, but we should also know how to use the OpenCV libraries with Visual Studio on Windows. Here I will explain how to set up OpenCV on both 32-bit and 64-bit versions of Windows.

Step-I:

Download OpenCV from here: http://opencv.org/downloads.html

Double-click the downloaded exe file, set the extraction folder name to “opencv”, and extract it to your C drive.

Step-II:

Now we need to add the path of the extracted libraries to Environment variables.
Go to “Control Panel” >> “System and Security” >> “System”, click on “Advanced system settings”, and a window will appear:

Now click on “Environment Variables”; under “System variables” select “Path” and edit it by adding these entries to it:
64-Bit users
C:\opencv\build\x64\vc10\bin;C:\opencv\build\common\tbb\intel64\vc10;

32-Bit users
C:\opencv\build\x86\vc10\bin;

This assumes you are using Visual Studio 2010 (vc10).
You are basically adding the bin folders to the PATH, so make sure the paths are correct and adjust them if necessary.

Click on “OK” in every window that has been opened to make the changes.

Step-III:
Open Visual Studio 2010, create a new Visual C++ Win32 console application project, and give it a name.

Now in that select the “View” menu and click on “Property Manager”.

Step-IV:
For 64-bit version users only.

Select the “Project” Menu and click on “Properties”.

Click on “Configuration Manager”.
Select “Release” as the “Active solution configuration”.
Open the “Active solution platform” drop-down and click on “<New...>”.

In the dialog that appears, select “x64” as the new platform, click on “OK”, and close the Configuration Manager.

Step-V:
Now in the properties window,
Select “Configuration Properties” >> “C/C++” >> “General” and edit “Additional Include Directories” by adding these paths to it:

C:\opencv\build\include\opencv;C:\opencv\build\include

Here we are adding the paths to the include folders; make sure they are correct for your computer.

Select “Configuration Properties” >> “C/C++” >> “Preprocessor”, select “Preprocessor Definitions”, and edit it by adding this to it:
_CRT_SECURE_NO_WARNINGS

Step-VI:
Now in the properties window,
Select “Configuration Properties” >> “Linker” >> “General”, then select and edit “Additional Library Directories”:

64-Bit version users add this line to it:
C:\opencv\build\x64\vc10\lib;

32-Bit version users add this line to it:
C:\opencv\build\x86\vc10\lib;

Make sure the path to the lib folder is correct as per your setup.

Select “Configuration Properties” >> “Linker” >> “Input”, click on “Additional Dependencies”, and edit it by adding:

opencv_core246.lib
opencv_imgproc246.lib
opencv_highgui246.lib
opencv_ml246.lib
opencv_video246.lib
opencv_features2d246.lib
opencv_calib3d246.lib
opencv_objdetect246.lib
opencv_contrib246.lib
opencv_legacy246.lib
opencv_flann246.lib

Note: in “…246.lib”, 246 is the OpenCV version; for me it is OpenCV 2.4.6, so make the appropriate changes according to the version you downloaded.

Click on “Apply” and then “OK”.

Now DELETE everything in the generated source file and copy in the test code from here: https://github.com/abhi-kumar/OPENCV_MISC/blob/master/tracker.cpp
Note: at the top of the code, add this line: #include "stdafx.h"

Keep the build mode as Release and run it.

Get the details of the code from : https://abhishek4273.wordpress.com/2014/07/05/track-the-region-of-interest/

So, now you have integrated OpenCV with Visual Studio on Windows.

Thanks 🙂

TRACK THE REGION OF INTEREST

Authors: Abhishek Kumar Annamraju, Akashdeep Singh, Adhesh Shrivastava

Hello Friends,

Let's go through some interesting stuff that computer vision can provide. Confused by the title?

With the application I am going to introduce, you can track a region in a live streaming video. Suppose you take a live stream from your webcam and draw a rectangle in that window with your mouse; in the following frames the application will track that region for as long as it remains in the frame. The main crux of the application is “good features to track” and “optical flow”.

Seems interesting!

Download the code from here:
https://github.com/abhi-kumar/OPENCV_MISC/blob/master/tracker.cpp

Now let's understand the code:

#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <iostream>
#include <stdio.h>
#include <stdlib.h>

These are the standard C and C++ libraries.

#include <cv.h>
#include <highgui.h>

These headers are from the old C-based OpenCV API.

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/video/tracking.hpp"

These headers are from the new C++-based OpenCV API, also called OpenCV2.

using namespace cv;
using namespace std;

Including the standard namespaces.

IplImage* frame, * img1;
CvPoint point;
int drag = 0;
int x_point,width_point,y_point,height_point;

Initialising the parameters used to capture video from the webcam and to handle the mouse.

int key = 0;
CvRect rect;
Rect region_of_interest;
int test;
Mat src,src_gray,image,src_gray_prev,src1,src_gray1,copy,copy1,frames,copy2;
int maxCorners = 23;
RNG rng(12345);
vector<Point2f> corners,corners_prev,corners_temp;
double qualityLevel = 0.01;
double minDistance = 10;
int blockSize = 3;
bool useHarrisDetector = false;
double k = 0.04;
vector<uchar> status;
vector<float> err;
float x_cord[100];
float y_cord[100];

These are the parameters we will use for “good features to track” and “optical flow”.

void mouseHandler(int event, int x, int y, int flags, void* param)
{
    if (event == CV_EVENT_LBUTTONDOWN && !drag)
    {
        point = cvPoint(x, y);
        drag = 1;
    }
    
    if (event == CV_EVENT_MOUSEMOVE && drag)
    {
        img1 = cvCloneImage(frame);
        cvRectangle(img1,point,cvPoint(x, y),CV_RGB(255, 0, 0),1,8,0);
        cvShowImage("result", img1);
    }
    
    if (event == CV_EVENT_LBUTTONUP && drag)
    {
        rect = cvRect(point.x, point.y, x - point.x, y - point.y);
        x_point = point.x;
        y_point = point.y;
        width_point = x - point.x;
        height_point = y - point.y;
        cvShowImage("result", frame);
        drag = 0;
    }

    
    if (event == CV_EVENT_RBUTTONUP)
    {
        drag = 0;
    }
}

This is the code that draws and selects a region of interest in the video window.
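The geometry in the handler boils down to one computation: the selection rectangle is anchored at the button-down point and sized by the distance to the button-up point. A minimal sketch of that step (the struct and helper names are hypothetical, not from the post's code):

```cpp
// Hypothetical stand-in for CvRect.
struct RectI { int x, y, width, height; };

// Build the selection rectangle from the drag start (button-down)
// and drag end (button-up) coordinates, as the handler above does.
RectI rectFromDrag(int startX, int startY, int endX, int endY) {
    RectI r;
    r.x = startX;
    r.y = startY;
    r.width  = endX - startX;
    r.height = endY - startY;
    return r;
}
```

Note that, like the handler above, this yields a negative width or height if the drag goes up or left; the code assumes a top-left to bottom-right drag.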

int main(int argc, char *argv[])
{
    CvCapture* capture = cvCaptureFromCAM(0);
    if (!capture) {
        printf("Cannot initialize webcam!\n");
        exit(0);
    }
    cvNamedWindow("result", CV_WINDOW_AUTOSIZE);
    int small, big;

    int x = 1;

The above snippet initialises the webcam capture.

while( key != 'q' )
    {
        frame = cvQueryFrame( capture );

These lines keep fetching frames until you press the key “q”.

if (rect.width>0)

To check if the rectangle has been chosen or not

if (corners.size() == 0 || x == 0)
{
    Mat frames(frame);
    src = frames.clone();
    cvtColor(src, src_gray, CV_BGR2GRAY);
    cv::Mat mask1 = cv::Mat::zeros(src.size(), CV_8UC1);
    cv::Mat roi(mask1, cv::Rect(x_point, y_point, width_point, height_point));
    roi = cv::Scalar(255, 255, 255);
    copy1 = src.clone();
    goodFeaturesToTrack(src_gray, corners, maxCorners, qualityLevel,
                        minDistance, mask1, blockSize, useHarrisDetector, k);

    int rad = 3;
    for (int i = 0; i < corners.size(); i++)
    {
        circle(copy1, corners[i], rad,
               Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)),
               -1, 8, 0);
    }
    IplImage test1 = copy1;
    IplImage* test2 = &test1;
    x = 1;

    cvShowImage("result", test2);
}

If the rectangle has just been drawn in the previous frame, the above code finds good features and saves them.
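The key trick in this snippet is the mask: goodFeaturesToTrack only returns corners where the mask is non-zero, so filling the selected rectangle with 255 restricts feature detection to the region of interest. A sketch of that mask construction on a flat byte buffer (a simplified stand-in for the cv::Mat code above; the helper name is hypothetical):

```cpp
#include <vector>

// Build a single-channel mask: 0 outside the ROI, 255 inside. This is
// the role mask1 plays for goodFeaturesToTrack in the snippet above.
std::vector<unsigned char> makeRoiMask(int imgW, int imgH,
                                       int rx, int ry, int rw, int rh) {
    std::vector<unsigned char> mask(static_cast<std::size_t>(imgW) * imgH, 0);
    for (int y = ry; y < ry + rh; ++y)
        for (int x = rx; x < rx + rw; ++x)
            mask[static_cast<std::size_t>(y) * imgW + x] = 255;  // inside ROI
    return mask;
}
```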

else
{
    src_gray_prev = src_gray.clone();
    corners_prev = corners;
    Mat framess(frame);
    src = framess.clone();
    cvtColor(src, src_gray, CV_BGR2GRAY);
    cv::Mat mask = cv::Mat::zeros(src.size(), CV_8UC1);
    cv::Mat roi(mask, cv::Rect(x_point, y_point, width_point, height_point));
    roi = cv::Scalar(255, 255, 255);
    Mat copy;
    copy = src.clone();
    goodFeaturesToTrack(src_gray, corners, maxCorners, qualityLevel,
                        minDistance, mask, blockSize, useHarrisDetector, k);

    calcOpticalFlowPyrLK(src_gray_prev, src_gray, corners_prev, corners, status, err);

    int r = 3;
    for (int i = 0; i < corners.size(); i++)
    {
        circle(copy, corners[i], r,
               Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)),
               -1, 8, 0);
        x_cord[i] = corners[i].x;
        y_cord[i] = corners[i].y;
    }
    IplImage test3 = copy;
    IplImage* test4 = &test3;
    cvShowImage("result", test4);
}

Now, once the features have been saved, they are tracked.
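calcOpticalFlowPyrLK also fills the status vector with 1 for every point whose flow was found and 0 otherwise. The post's code does not filter on it, but a more robust tracker typically keeps only the successfully tracked points, roughly like this sketch (the helper name and point type are hypothetical):

```cpp
#include <utility>
#include <vector>

using Pt = std::pair<float, float>;  // stand-in for cv::Point2f

// Keep only points whose Lucas-Kanade status flag is 1 (successfully
// tracked), the usual way calcOpticalFlowPyrLK's status output is used.
std::vector<Pt> keepTracked(const std::vector<Pt>& pts,
                            const std::vector<unsigned char>& status) {
    std::vector<Pt> out;
    for (std::size_t i = 0; i < pts.size() && i < status.size(); ++i)
        if (status[i])
            out.push_back(pts[i]);
    return out;
}
```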

    cvSetMouseCallback("result", mouseHandler, NULL);
    key = cvWaitKey(10);
    if ((char) key == 'r')
    {
        rect = cvRect(0, 0, 0, 0);
        cvResetImageROI(frame);
        x = 0;
    }
    cvShowImage("result", frame);

Calling the mouse handler function and setting up the key to reset the region of interest.

That covers the major explanation of the code.

Compilation and running (for Ubuntu users):
1) Save the code and name it tracker.cpp
2) Open a terminal, navigate to the folder where you saved the code, and type:
a) chmod +x tracker.cpp
b) g++ -ggdb `pkg-config --cflags opencv` -o `basename tracker.cpp .cpp` tracker.cpp `pkg-config --libs opencv`
c) ./tracker

Now the video window will open; draw the box and play with the tracker. If you want to reset the tracker and draw a new box, press the key “r”.

I hope you like the application. I will be back with a more robust, revised version in a few days.

Thank you 🙂

OBJECT DETECTION USING MULTIPLE TRAINCASCADED XML FILES

AUTHORS: Abhishek Kumar Annamraju, Akashdeep Singh, Adhesh Shrivastava

Hello Friends

In our previous post we learnt how to generate a cascade.xml file using opencv_traincascade. We usually use it with the facedetect.cpp sample present in the OpenCV folder.

Now what if we want to run multiple xml files over the same image and display a combined result?

Well, after a little struggle we came up with a code that performs the above task in a very fast way. So far it has only been implemented for images; we will try to extend it to videos too.

Download the code from here: https://github.com/abhi-kumar/OPENCV_MISC/blob/master/objectDetectionMultipleCascadeInput.cpp

The code is well commented, so you will understand everything once you go through it carefully.
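The post's code is linked rather than reproduced, but the core problem it solves, reporting one combined result without counting the same object once per cascade, can be sketched with a simple overlap test: a detection is dropped if it mostly overlaps a box already kept. All names below are hypothetical illustration, not the repository's code:

```cpp
#include <algorithm>
#include <vector>

struct Box { int x, y, w, h; };  // stand-in for cv::Rect

// Fraction of the smaller box covered by the intersection of the two.
double overlapRatio(const Box& a, const Box& b) {
    int ix = std::max(0, std::min(a.x + a.w, b.x + b.w) - std::max(a.x, b.x));
    int iy = std::max(0, std::min(a.y + a.h, b.y + b.h) - std::max(a.y, b.y));
    double inter = static_cast<double>(ix) * iy;
    double smaller = static_cast<double>(std::min(a.w * a.h, b.w * b.h));
    return smaller > 0 ? inter / smaller : 0.0;
}

// Merge detections collected from several cascades: drop any box that
// mostly overlaps one already kept, so one car is not reported twice.
std::vector<Box> mergeDetections(const std::vector<Box>& all, double thresh) {
    std::vector<Box> kept;
    for (const Box& b : all) {
        bool duplicate = false;
        for (const Box& k : kept)
            if (overlapRatio(b, k) > thresh) { duplicate = true; break; }
        if (!duplicate)
            kept.push_back(b);
    }
    return kept;
}
```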

Some of the results of the code:

I used it to detect cars; the UIUC car database was chosen as the dataset.

The cascade files used can be downloaded from here: https://github.com/abhi-kumar/CAR-DETECTION.git

Here are some of the results:
*******************************************************************************
Detection time:38.2002 ms

INPUT1

OUTPUT1

**********************************************************************************

Detection time:38.4898 ms

INPUT2

OUTPUT2

**********************************************************************************

You are always welcome to download the code and modify it for better uses. Any contribution to the code is highly appreciated.

Thank you 🙂

See also :- http://blindperception.wordpress.com/

TRAINCASCADE AND CAR DETECTION USING OPENCV

AUTHORS: Abhishek Kumar Annamraju, Akashdeep Singh, Adhesh Shrivastava

Hello Friends

My last post explained how segmentation can be used to detect roads.
This post will explain the following things:
1.Optimum use of traincascade
2.Creating xml files for object detection
3.Using multiple xml files to detect object,here it is cars
4.Using multiple xml files without detecting a single object twice.

Haartraining is stated to provide better results than traincascade but it is extremely slow.Sometimes it may take one to two weeks to train a classifier.So we shifted our goal to traincascade.

The whole procedure from now on will focus on car detection.

STEP-1:
Download the image database from here : http://cogcomp.cs.illinois.edu/Data/Car/

STEP-2:
Inside a folder:
1. Copy all the positive images into a folder named pos.
2. Copy all the negative images into a folder named neg.
3. Create a folder named data to store the cascade files generated later on.

STEP-3:
Open a terminal, navigate to the required folder, and type:

1. find pos -iname "*.pgm" -exec echo \{\} 1 0 0 100 40 \; > cars.info

The result will be a file like this: https://github.com/abhi-kumar/CAR-DETECTION/blob/master/cars.info

2. find neg -iname "*.pgm" > bg.txt
The result will be a file like this: https://github.com/abhi-kumar/CAR-DETECTION/blob/master/bg.txt

3. opencv_createsamples -info cars.info -num 550 -w 48 -h 24 -vec cars.vec
(the width and height parameters change with the database; -num is the number of images in the pos folder)

4. opencv_traincascade -data data -vec cars.vec -bg bg.txt -numStages 10 -minHitRate 0.999 -maxFalseAlarmRate 0.5 -numPos 500 -numNeg 500 -w 48 -h 24
a) -numPos and -numNeg must be set according to the number of photos you have in the pos and neg folders respectively.
b) numPos < the number of samples in the vec file
c) choosing minHitRate and maxFalseAlarmRate:

To understand how the parameters of training affect the output refer : http://scholarpublishing.org/index.php/AIVP/article/view/1152

For example, suppose you have 1000 positive samples and you want your system to detect 900 of them, so the desired hit rate is 900/1000 = 0.9. Note that minHitRate is applied per stage, so the overall hit rate is roughly minHitRate^numStages; commonly minHitRate is set to 0.999.

Similarly, suppose you have 1000 negative samples. Because they are negative, you don't want your system to detect them, but since it has some error it will detect a few, say about 490 of them, so the false alarm rate is 490/1000 = 0.49. maxFalseAlarmRate is also applied per stage, so the overall false alarm rate is roughly maxFalseAlarmRate^numStages; commonly it is set to 0.5.
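The arithmetic above can be checked directly: since every stage must independently pass, the overall rates are the per-stage rates raised to the number of stages. A small sketch:

```cpp
#include <cmath>

// Each stage must pass with at least the per-stage rate, so the overall
// rate after N stages is perStageRate^N. This applies to both the hit
// rate (minHitRate) and the false alarm rate (maxFalseAlarmRate).
double overallRate(double perStageRate, int numStages) {
    return std::pow(perStageRate, numStages);
}
```

With 10 stages, a per-stage minHitRate of 0.999 still keeps about 99% of positives overall, while a per-stage maxFalseAlarmRate of 0.5 drives the overall false alarm rate below 0.1%.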

Note:

1) The number of negative images must be greater than the number of positive images.

2) Try to set numPos to 0.9 × the number of positive samples, and 0.99 as the minHitRate.

3) The vec-file has to contain >= (numPos + (numStages - 1) * (1 - minHitRate) * numPos) + S samples, where S is the count of samples from the vec-file that can be recognized as background right away.
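Note 3 can be turned into a quick calculator for sizing the vec-file (a sketch of the inequality above; S must still be estimated for your own data):

```cpp
#include <cmath>

// Minimum number of samples the vec-file must contain for training to
// finish, per the inequality above. S is the count of samples consumed
// because they are recognized as background right away.
int requiredVecSamples(int numPos, int numStages, double minHitRate, int S) {
    double need = numPos + (numStages - 1) * (1.0 - minHitRate) * numPos + S;
    return static_cast<int>(std::ceil(need));
}
```

For the command above (numPos 500, 10 stages, minHitRate 0.999) and, say, S = 50, this gives 500 + 9 * 0.001 * 500 + 50 = 554.5, i.e. at least 555 samples, which is why the vec-file was created with 550 positives and numPos set to only 500.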

This will generate an xml file in the data folder. The face detection code in the OpenCV examples can then be used, only with a change in the xml file name.

A better way of using the xml file is to run it through the code I made for object detection using multiple xml files.
Refer: https://abhishek4273.wordpress.com/2014/03/16/object-detection-using-multiple-traincascaded-xml-files/

Code and other xml files (do read the README.md): https://github.com/abhi-kumar/CAR-DETECTION.git


Here are some of the results:
*******************************************************************************
Detection time:38.2002 ms

INPUT1

OUTPUT1

**********************************************************************************

Detection time:38.4898 ms

INPUT2

OUTPUT2

**********************************************************************************

You are always welcome to download the code and modify it for better uses. Any contribution to the code is highly appreciated.

Thank you 🙂

Science lovers must see this : http://tanvik3394.wordpress.com/

See also :- http://blindperception.wordpress.com/

FAST ROAD/PATH DETECTION USING OPENCV

AUTHORS: Abhishek Kumar Annamraju, Akashdeep Singh, Adhesh Shrivastava

Hello Friends

Today I would like to discuss with you how we are trying to implement road detection for our robotics project of developing a hexapod.

Map building, object detection, obstacle detection, etc. all emerge out clearly when we successfully implement background detection. Robotics enthusiasts like to call this background detection “basic path detection”.

OpenCV has always been our favourite tool for image processing, so again we are using it for road detection.

Inspired by how well the watershed algorithm works for image segmentation (research paper: http://www.ias-iss.org/ojs/IAS/article/viewFile/852/755), we wanted to implement it for the detection of paths. The main aim of this code is to segment out the road as FAST as possible, but we also need to take care of accuracy.

The code goes here. You can download it from https://github.com/abhi-kumar/Hexapod/blob/master/road_detection.cpp

#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <iostream>
#include <stdio.h>

using namespace std;
using namespace cv;

class findroad {                             // class that separates roads from images
private:
    Mat path;
public:
    void setpath(Mat& image)
    {
        image.convertTo(path, CV_32S);
    }

    Mat getroad(Mat& image)                  // major working function of the class
    {
        watershed(image, path);              // using the watershed segmenter
        path.convertTo(path, CV_8U);
        return path;
    }
};

int main(int argc, const char** argv)
{
    if (argc < 2) {
        printf("usage: %s <image>\n", argv[0]);
        return 1;
    }
    Mat image1 = imread(argv[1], 1);
    Mat image;
    resize(image1, image, Size(500, 500), 0, 0, INTER_LINEAR);
    Mat gray;
    cvtColor(image, gray, CV_BGR2GRAY);
    threshold(gray, gray, 100, 255, THRESH_BINARY);   // thresholding the grayscale image
    double t = 0;
    t = (double)cvGetTickCount();                     // setting up the timer
    Mat Erode;
    erode(gray, Erode, Mat(), Point(2, 2), 7);        // eroding the image 7 times
    Mat Dilate;
    dilate(gray, Dilate, Mat(), Point(2, 2), 7);      // dilating the image 7 times
    threshold(Dilate, Dilate, 1, 50, THRESH_BINARY_INV);
    Mat path_trace(gray.size(), CV_8U, Scalar(0));
    path_trace = Erode + Dilate;                      // morphological addition
    findroad road;                                    // creating an object of our class
    road.setpath(path_trace);                         // preparing the input markers
    namedWindow("detected road");
    namedWindow("input image");
    Mat road_found = road.getroad(image);
    road_found.convertTo(road_found, CV_8U);
    imshow("detected road", road_found);
    imshow("input image", image);
    t = (double)cvGetTickCount() - t;                 // time of detection
    printf("road got detected in = %g ms\n", t / ((double)cvGetTickFrequency() * 1000.));
    cout << endl << "cheers" << endl;
    imwrite("ROAD1.jpg", image);
    imwrite("ROAD1_DETECTED.jpg", road_found);
    waitKey(0);
    return 0;
}
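The morphology in main() is the heart of the marker preparation: erosion keeps only pixels whose whole neighbourhood is foreground (sure-road markers), while dilation grows the foreground and its inverted threshold marks sure background. As a plain-C++ illustration of what erode() does, here is a simplified single-pass 3x3 binary erosion (OpenCV applies it with a chosen anchor and, in the code above, 7 iterations):

```cpp
#include <vector>

// 3x3 binary erosion on a small grid: a pixel survives only if its
// entire 3x3 neighbourhood is foreground. Border pixels are treated
// as background in this simplified sketch.
std::vector<std::vector<int>> erode3x3(const std::vector<std::vector<int>>& img) {
    int h = static_cast<int>(img.size());
    int w = h ? static_cast<int>(img[0].size()) : 0;
    std::vector<std::vector<int>> out(h, std::vector<int>(w, 0));
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            int keep = 1;
            for (int dy = -1; dy <= 1 && keep; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (!img[y + dy][x + dx]) { keep = 0; break; }
            out[y][x] = keep;
        }
    return out;
}
```

Dilation is the dual operation (a pixel survives if any neighbour is foreground), which is why adding the eroded image to the inverted dilation gives watershed both sure-foreground and sure-background markers.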

********************************************************************************

We are still developing the code to get better and better results.
The following were our best outcomes.

This result was obtained in 109.48 ms.

ROAD1

ROAD1_DETECTED

***************************************************************************************
This result was obtained in 114.475 ms

ROAD1

ROAD1_DETECTED
****************************************************************************************

This result was obtained in 98.91 ms

ROAD3

ROAD3_DETECTED

****************************************************************************************

This result was obtained in 110.336 ms

ROAD4

ROAD4_DETECTED

***************************************************************************************

But still, as the image gets more complex and more crowded with objects, the result fades away; for example, the two following road detections:

******************************************************************************
This result was obtained in 107.042 ms

ROAD5

ROAD5_DETECTED

**********************************************************************************

This result was obtained in 114.656 ms

ROAD6

ROAD6_DETECTED

***********************************************************************************

Well, with vehicles and other objects the code becomes more vulnerable. We are trying our level best to come up with an optimum solution.

Note: I have made the code open source so that everyone can benefit from it, and so that it can be modified by others to function in a better way. You can download the code from here: https://github.com/abhi-kumar/Hexapod/blob/master/road_detection.cpp

Any modified code can be mailed to this address:
abhishek4273@gmail.com

Your contribution is highly appreciated.

Thank You 🙂

See also :- http://blindperception.wordpress.com/