Herkulex Servos and PyHerkulex

Herkulex are smart servos manufactured by Dongbu Robotics. Smart servos far outperform common RC servos in performance and build quality, making them an ideal choice for robots. Herkulex servos sit between common RC servos and high-end harmonic-geared servo motors.


V-REP, a Powerful Simulator for ROS

V-REP (Virtual Robot Experimentation Platform) is often called the Swiss Army knife among robot simulators. It is a comprehensive tool suitable for beginners as well as robotics gurus, and it provides interfaces in C/C++, Python, Lua, Java, Matlab and URBI. Moreover, it is cross-platform and works flawlessly on Windows, Linux and Mac. What attracted me to V-REP is that it can be used for fast prototyping and verification and for fast algorithm development, and it also makes it possible to test sensors and vision algorithms. V-REP handles dynamics and kinematics simulations pretty well. Two physics engines, Bullet and ODE, are included, and the user can choose whichever suits their needs. It also lets users create their own user interfaces for their robots, and it ships with plenty of common robot models by default.

Dr. Marc Freese from Coppelia Robotics explains the main strengths of V-REP compared to similar products: “V-REP allows the user to create virtually any robotic system quickly, thanks to a built-in script interpreter and more than 300 different API functions: sophisticated sensors, actuators or whole robots can be edited from within the simulator, which offers an integrated development environment. Created models can be reused by a simple drag-and-drop operation”.

ROS has a major share of the robotics research and development happening worldwide, and it has a very active community. More and more robots are supported by ROS every day, and plenty of core research in robotics algorithms, especially robot navigation, manipulation and cognitive robotics, happens in ROS. The default simulator that comes with ROS is Gazebo. But in my experience Gazebo is not as stable as V-REP and crashes often. Gazebo is also resource hungry and needs a decent machine to run properly. Here comes the importance of V-REP: it is comparatively lightweight and runs fine even on my personal laptop, which has no dedicated graphics card and an Intel Core 2 Duo processor. V-REP has an extensive ROS API, so the advantages of ROS and V-REP can be combined for a much better solution.

GSM Controlled Robot Car

This is a robot car that can be controlled using any mobile phone. It was a project I did years ago while learning digital electronics in college, at a time when I knew little about microcontroller programming; otherwise almost the whole circuit could have been replaced by a small microcontroller. A mobile phone is placed inside the car. To control the car, just call the phone in the car after enabling DTMF keypad tones on the calling phone, then press 2 to move forward, 8 for backward, and 4/6 for left/right control. The following video shows it in action, although it is not being controlled by another phone in the video.
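The control scheme boils down to a lookup from DTMF keys to motor directions. The original build did this with a hardware DTMF decoder and discrete logic, but the equivalent lookup a small microcontroller would perform can be sketched in a few lines of Python. The exact key assignments beyond 2 (forward) and 8 (backward) are assumptions for illustration:

```python
# Illustrative DTMF-to-motion mapping for the robot car.
# Each entry is (left_motor, right_motor): +1 forward, -1 reverse, 0 stop.
# Keys 2 and 8 match the project; 4/6 for turning and 5 for stop are assumed.
DTMF_TO_MOTORS = {
    '2': (+1, +1),  # forward
    '8': (-1, -1),  # backward
    '4': (-1, +1),  # spin left
    '6': (+1, -1),  # spin right
    '5': (0, 0),    # stop
}

def drive(key):
    """Return motor directions for a DTMF key, stopping on unknown keys."""
    return DTMF_TO_MOTORS.get(key, (0, 0))
```

Defaulting unknown keys to stop is a safe failure mode: a mis-decoded tone halts the car instead of sending it off in a random direction.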

Speech Recognition in ROS / Linux

Speech recognition in ROS/Linux has traditionally been done using projects like CMU Sphinx or Julius, but they have limited vocabularies and are not very stable, so reliable speech recognition was confined to Windows/Mac users. Initially I used a Windows virtual machine inside Ubuntu for speech processing, even though it was quite resource consuming. A good alternative is the speech recognition Google built into Chrome. The speech samples are sent to Google's servers for processing, which return the recognized text along with a confidence value. This is quite easy to use, and it offers the advantage of speaker-independent recognition. The only disadvantage is the detection delay: it normally takes about 3 seconds for the speech to be recognized. A simple Python script for speech recognition is shown below.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import shlex, subprocess, os, json
print "talk something"
# record from the default ALSA device at 16 kHz, stopping after 1.5 s of silence
os.system('sox -r 16000 -t alsa default recording.flac silence 1 0.1 1% 1 1.5 1%')
# POST the FLAC sample to the speech API (endpoint URL omitted here)
cmd = 'wget -q -U "Mozilla/5.0" --post-file recording.flac --header="Content-Type: audio/x-flac; rate=16000" -O - ""'
args = shlex.split(cmd)
output, error = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
if not error:
    a = json.loads(output)
    speech = a['hypotheses'][0]['utterance']
    confidence = a['hypotheses'][0]['confidence']
    print "you said:", speech, "with", confidence, "confidence"


I have also created a ROS package for speech recognition. It can be run by checking out the GitHub repo and running 'rosrun gspeech'. It publishes two topics: /speech and /confidence. The first is the detected speech, while the latter is the confidence level of the detection.
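A node that consumes these two topics can be sketched as below. This is a hypothetical listener, not part of the package: the node name, the command list, and the assumption that both topics carry std_msgs/String messages are mine, so check the package source for the actual types before using it:

```python
# Hypothetical listener for the gspeech topics (/speech and /confidence).
# Message types are assumed to be std_msgs/String for both topics.

def handle_speech(text, commands=('forward', 'backward', 'left', 'right', 'stop')):
    """Map a recognized utterance to a known command, or None if unrecognized."""
    text = text.strip().lower()
    return text if text in commands else None

def main():
    # Call this under ROS (e.g. from a node script) with gspeech running.
    import rospy
    from std_msgs.msg import String
    rospy.init_node('gspeech_listener')
    rospy.Subscriber('/speech', String,
                     lambda msg: rospy.loginfo(handle_speech(msg.data)))
    rospy.Subscriber('/confidence', String,
                     lambda msg: rospy.loginfo('confidence: %s' % msg.data))
    rospy.spin()
```

Filtering utterances against a small command list is a cheap way to reject the occasional misrecognition before it reaches the robot.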

Long range RF link using NRF24L01+ RF Transceiver

Creating a two-way wireless link is fun and exciting. Recently I ordered some RF modules based on the nRF24L01+ transceiver from SparkFun (link here or here). Ready-made RF transceivers like this save the hobbyist from designing RF circuits and PCBs, which can get quite complex and expensive.

Automatic Curtain Controller

This curtain controller uses a microcontroller to control the opening of the curtain veils, and thereby the light intensity in the room. It features both an automatic mode and a manual override mode. In automatic mode, the preferred intensity level is set with a pot, and the control system adjusts the veils to always maintain the user-set level. As the name indicates, in manual override mode the user is free to choose any opening of the veils.
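The automatic mode amounts to a simple feedback loop: compare the measured light level against the pot setpoint and nudge the curtain opening accordingly. A minimal proportional sketch is below; the function names, the 0-1 opening scale, and the gain are illustrative assumptions, not the actual firmware:

```python
# Illustrative proportional step for the curtain's automatic mode.
# setpoint and measured are normalized light levels in [0, 1];
# opening is the curtain position (0 = closed, 1 = fully open).

def curtain_step(setpoint, measured, opening, gain=0.1):
    """Return the new curtain opening after one control step."""
    error = setpoint - measured          # room too dark -> positive error
    opening += gain * error              # open more when too dark
    return min(1.0, max(0.0, opening))   # clamp to the physical travel
```

Clamping to the physical travel matters in practice: without it, accumulated error would command openings the mechanism cannot reach.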
Circuit Diagram

URDF Model Of Chippu

URDF stands for Unified Robot Description Format, which is an XML format for representing a robot model. More information about URDF can be found in the tutorials. Once we have the URDF model of our robot, we can use it in rviz to visualize what the robot is perceiving, or use it in Gazebo for simulation.
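To give a flavor of the format, here is a minimal hand-written URDF fragment for a two-wheeled base: links describe the rigid bodies and their geometry, while joints connect them and define how they move. The names and dimensions are illustrative, not Chippu's actual values:

```xml
<?xml version="1.0"?>
<robot name="chippu">
  <!-- chassis -->
  <link name="base_link">
    <visual>
      <geometry><box size="0.2 0.15 0.05"/></geometry>
    </visual>
  </link>
  <!-- one wheel; a real model would mirror this for the other side -->
  <link name="left_wheel">
    <visual>
      <geometry><cylinder radius="0.03" length="0.01"/></geometry>
    </visual>
  </link>
  <!-- a continuous joint lets the wheel spin freely about its axis -->
  <joint name="left_wheel_joint" type="continuous">
    <parent link="base_link"/>
    <child link="left_wheel"/>
    <origin xyz="0 0.08 0" rpy="-1.5708 0 0"/>
    <axis xyz="0 0 1"/>
  </joint>
</robot>
```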
Here is a small video showcasing the URDF model of Chippu.