Archive for August, 2011



This project is an Interactive Voice Response (IVR) system based on a personal computer. The data of an institution or firm is stored on the computer as a MySQL database. Parents or students can then access and retrieve data from this database simply by calling a predetermined mobile number. When prompted by the computer, the user provides student details such as the admission number, semester number and exam code. The computer then speaks back the requested data using a speech synthesizer. The block diagram is given below:

BLOCK DIAGRAM


Roller Robot- A Clone of Recon Scout




                    It was a video about a reconnaissance robot called Recon Scout that inspired me to design a clone of it. The Recon Scout is a rugged, stealthy and mobile reconnaissance robot which can be thrown through a window, over a wall or down a flight of stairs, and it lands ready to go. You can even drop it from an unmanned aerial reconnaissance vehicle. Once deployed, you can control its movement from a distance using a hand-held operator control unit.

The Roller Bot

The Recon Scout helps you explore hostile or dangerous environments by providing real-time, mission-critical reconnaissance video. My clone is not as rugged as the original Recon Scout and may not withstand extreme conditions; it is also much bigger than the original. The body is built from a PVC pipe with both ends closed by PVC caps. Holes are drilled through each cap to mount the motors. The robot is powered by a 12 V, 4.5 Ah lead-acid battery, and the motors are 12 V, 100 rpm DC geared motors. A tail-like structure is added to improve the robot's stability while it is moving.

The roller robot also carries a wireless camera operating in the 2.4 GHz ISM band. The camera has a number of infrared LEDs, which help greatly with night vision. The robot is controlled using a custom-built RF remote control and the common L293D motor driver H-bridge, although any off-the-shelf RF remote controller can be used. Here are some photos:

The camera, surrounded by IR LEDs

The Power button and the charging jack




Grippers are an important part of robots. They are used to grasp and carry objects, and they play the most important role in robotic arms. Most grippers cost a lot of money and may not be affordable for hobbyists, so the best option is to build your own. When it is built from so-called 'useless junk parts', it becomes even more interesting. So let us dive into the details.

My first prototype

To construct this gripper, all you need is the mechanism of an old CD/DVD player, some acrylic sheets (or even some small wooden boards) for the gripper jaws, some screws and a little hot glue. The mechanism is shown below:

The mechanism of a CD player

An up-close view

Detailed view

Another detailed view

As seen in the images above, the CD/DVD player mechanism consists of a rack-and-pinion arrangement that drives the laser eye across the disc, and a separate stepper-motor section that spins the disc. The rack is driven by a simple DC motor through a pair of gears that increase torque and reduce speed, giving fine control. We can now cut away the part we do not need, i.e., the stepper-motor section. It will then look like this:

The mechanism with the stepper motor section removed

What we are left with is the mechanism shown above: just the eye and the rack-and-pinion mechanism that drives it. Running the motor moves the eye towards or away from the stationary part visible at the bottom of the picture. If we now replace the eye with something else (preferably something shaped like a gripper jaw), we end up with a simple working robotic gripper. You get the idea; use your imagination to work out something better. Images of my first working prototype are given below.

My first prototype

Open arms

Closed arms

I also added an infrared LED and its sensor to the tips of the gripper to detect whether an object is present between the jaws, and it worked quite well.

SURF in OpenCV




Let us now see what SURF is. SURF stands for Speeded-Up Robust Features. It is an algorithm which extracts distinctive keypoints and descriptors from an image. More details on the algorithm can be found here, and a note on its implementation in OpenCV can be found here.

SURF Keypoints of my palm

                    A set of SURF keypoints and descriptors can be extracted from an image and then used later to detect the same image. SURF uses an intermediate representation called the integral image, which is computed from the input image and is used to speed up calculations over any rectangular area. Each value of the integral image is the sum of the pixel values in the rectangle from the origin to the given (x, y) coordinate. This makes the computation time independent of the size of the area, which is particularly useful with large images. The SURF detector is based on the determinant of the Hessian matrix, and the SURF descriptor describes how pixel intensities are distributed within a scale-dependent neighborhood of each interest point found by the Fast-Hessian detector.

Object detection using SURF is scale- and rotation-invariant, which makes it very powerful. It also doesn't require the long and tedious training needed by cascaded Haar-classifier based detection. The detection time of SURF is a little longer than Haar, but in most situations it doesn't matter much if the robot takes a few tens of milliseconds more for detection. Since the method is rotation-invariant, objects can be detected successfully in any orientation. This is particularly useful for mobile robots, which may have to recognize objects in orientations different from the trained image: say the robot was trained with an upright image of an object and has to detect a fallen one. Detection using Haar features fails miserably in this case. OK, let's now move from theory to practice, the way things actually work.

The OpenCV library provides a detection example called find_obj.cpp. It can be found in the OpenCV-x.x.x/samples/c/ folder of the source tarball, where x.x.x stands for the version number. It loads two images, finds their SURF keypoints and descriptors, compares them and reports a match if there is one. But this sample code is a bit tough for beginners, so let us move slowly, step by step. As the first step, we can find the SURF keypoints and descriptors in a frame captured from the webcam. The code is given below:

//*******************surf.cpp******************//
//********** SURF implementation in OpenCV*****//
//** Grabs frames from the webcam, computes SURF keypoints **//
//** and descriptors, and marks the keypoints **//

//****author: achu_wilson@rediffmail.com****//

#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    CvMemStorage* storage = cvCreateMemStorage(0);
    cvNamedWindow("Image", 1);
    CvScalar red = CV_RGB(255, 0, 0);
    CvCapture* capture = cvCreateCameraCapture(0);
    CvMat* image = 0;

    while (cvWaitKey(30) != 'q')
    {
        IplImage* frame = cvQueryFrame(capture);
        if (!frame)
            break;
        if (!image)
            image = cvCreateMat(frame->height, frame->width, CV_8UC1);

        // Convert the BGR frame obtained from the camera into grayscale
        cvCvtColor(frame, image, CV_BGR2GRAY);

        // Sequences for storing the SURF keypoints and descriptors
        CvSeq *imageKeypoints = 0, *imageDescriptors = 0;

        // Extract SURF points: Hessian threshold 500, extended descriptors
        CvSURFParams params = cvSURFParams(500, 1);
        cvExtractSURF(image, 0, &imageKeypoints, &imageDescriptors, storage, params);
        printf("Image Descriptors: %d\n", imageDescriptors->total);

        // Draw a circle around each keypoint on the captured frame
        int i;
        for (i = 0; i < imageKeypoints->total; i++)
        {
            CvSURFPoint* r = (CvSURFPoint*)cvGetSeqElem(imageKeypoints, i);
            CvPoint center = cvPoint(cvRound(r->pt.x), cvRound(r->pt.y));
            int radius = cvRound(r->size * 1.2 / 9. * 2);
            cvCircle(frame, center, radius, red, 1, 8, 0);
        }
        cvShowImage("Image", frame);

        cvClearMemStorage(storage);  // reuse the storage for the next frame
    }

    cvReleaseCapture(&capture);
    cvDestroyWindow("Image");
    return 0;
}

        The explanation of the code is straightforward. It captures a frame from the camera, then converts it into grayscale (because the OpenCV SURF implementation works on grayscale images). The function cvSURFParams sets the various algorithm parameters, and cvExtractSURF extracts the keypoints and descriptors into the corresponding sequences. Circles are then drawn with the keypoints as centers and radii proportional to the keypoint size. Below are some images showing the captured keypoints.
SURF keypoints of my mobile phone

The above picture shows the SURF keypoints captured while I held up a mobile phone. The background wall has no strong intensity variations, and hence no keypoints appear there. An average of about 125 keypoints are detected in this image, as shown in the terminal. Below are some more images.
SURF Keypoints of my palm

From the above set of images, it can be clearly seen that SURF keypoints are pixels whose intensity neighborhoods differ greatly from their surroundings, and the descriptor captures the relationship between each keypoint and its neighboring pixels. Once the SURF keypoints and descriptors of two images are calculated, they can be compared using one of many algorithms, such as nearest-neighbour matching or k-means clustering. SURF is useful not only for object detection; it is also used in many other applications, like 3-D reconstruction.

Here are a few more screenshots of object recognition using SURF:
