Let us now see what SURF is.

SURF Keypoints of my palm

SURF stands for Speeded Up Robust Features. It is an algorithm which extracts distinctive keypoints and descriptors from an image. More details on the algorithm can be found here, and a note on its implementation in OpenCV can be found here. A set of SURF keypoints and descriptors can be extracted from an image and then used later to detect the same object in other images. SURF uses an intermediate representation called the Integral Image, which is computed from the input image and is used to speed up calculations over any rectangular area. Each entry of the integral image holds the sum of all pixel values in the rectangle between the origin and that (x, y) coordinate. With it, the sum over any rectangle can be computed in constant time, independent of the rectangle's size, which is particularly useful when working with large images. The SURF detector is based on the determinant of the Hessian matrix, and the SURF descriptor describes how pixel intensities are distributed within a scale-dependent neighborhood of each interest point detected by the Fast Hessian detector.

Object detection using SURF is scale and rotation invariant, which makes it very powerful. It also does not require the long and tedious training needed by cascaded Haar-classifier based detection. The detection time of SURF is a little longer than Haar, but this is rarely a problem: in most situations a robot can afford some tens of milliseconds more for detection. Since the method is rotation invariant, objects can be successfully detected in any orientation. This is particularly useful for mobile robots, which may encounter objects at orientations different from the trained image. For example, a robot trained on the upright image of an object may have to detect the same object fallen over, and detection using Haar features fails miserably in this case. OK, let us now move from theory to practice and see how things actually work.

The OpenCV library provides a detection example called find_obj.cpp. It can be found in the OpenCV-x.x.x/samples/c/ folder of the source tar file, where x.x.x stands for the version number. It loads two images, finds the SURF keypoints and descriptors of each, compares them, and reports a match if there is one. But this sample code is a bit tough for beginners, so let us move slowly, step by step. As a first step, we can find the SURF keypoints and descriptors in a frame captured from the webcam. The code is given below:

//********** SURF implementation in OpenCV **********//
//** Loads video from the webcam, grabs frames, computes SURF keypoints **//
//** and descriptors, and marks them on the frame **//
//**** author: achu_wilson@rediffmail.com ****//

#include <stdio.h>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc_c.h>

int main(int argc, char** argv)
{
    CvMemStorage* storage = cvCreateMemStorage(0);
    cvNamedWindow("Image", 1);
    int key = 0;
    CvCapture* capture = cvCreateCameraCapture(0);
    CvMat* image = 0;

    while (key != 'q')
    {
        IplImage* frame = cvQueryFrame(capture);
        if (!frame)
            break;
        image = cvCreateMat(frame->height, frame->width, CV_8UC1);

        // Convert the BGR image obtained from the camera into grayscale
        cvCvtColor(frame, image, CV_BGR2GRAY);

        // Sequences for storing the SURF keypoints and descriptors
        CvSeq *imageKeypoints = 0, *imageDescriptors = 0;
        int i;

        // Extract SURF points, setting the algorithm parameters
        CvSURFParams params = cvSURFParams(500, 1);
        cvExtractSURF(image, 0, &imageKeypoints, &imageDescriptors, storage, params);
        printf("Image Descriptors: %d\n", imageDescriptors->total);

        // Draw the keypoints on the captured frame
        for (i = 0; i < imageKeypoints->total; i++)
        {
            CvSURFPoint* r = (CvSURFPoint*)cvGetSeqElem(imageKeypoints, i);
            CvPoint center;
            int radius;
            center.x = cvRound(r->pt.x);
            center.y = cvRound(r->pt.y);
            radius = cvRound(r->size * 1.2 / 9. * 2);
            cvCircle(frame, center, radius, CV_RGB(255, 0, 0), 1, 8, 0);
        }
        cvShowImage("Image", frame);

        key = cvWaitKey(10);
        cvReleaseMat(&image);
        cvClearMemStorage(storage);
    }

    cvReleaseCapture(&capture);
    cvDestroyWindow("Image");
    return 0;
}

The explanation of the code is straightforward. It captures a frame from the camera, then converts it into grayscale (because the OpenCV SURF implementation works on grayscale images). The function cvSURFParams sets the various algorithm parameters, and cvExtractSURF extracts the keypoints and descriptors into the corresponding sequences. Circles are then drawn with the keypoints as centers and radii derived from the keypoint size. Below are some images showing the captured keypoints.

SURF keypoints of my mobile phone
The above picture shows the SURF keypoints captured of me holding a mobile phone. The background wall has no strong intensity variations and hence contains no keypoints. About 125 keypoints on average are detected in the above image, as shown in the terminal. Below are some more images.
SURF Keypoints of my palm

From the above set of images it can be clearly seen that SURF keypoints are pixels whose intensities differ greatly from those of their immediate neighbors, and the descriptor captures the relationship between each keypoint and its neighborhood. Once the SURF keypoints and descriptors of two images have been calculated, they can be compared using techniques such as nearest-neighbor matching or k-means clustering. SURF is used not only in object detection but also in many other applications, such as 3-D reconstruction.

Here are a few more screenshots of object recognition using SURF: