This is a small simulation done for the MBZIRC challenge, in which a UAV must autonomously land on a moving rover.
Continue reading “Autonomous landing of Quadcopter on a Moving rover”
We recently did a small demo in our free time: beating the popular 2048 game. See it in action below.
Herkulex are smart servos manufactured by Dongbu Robotics. Smart servos are far superior to RC servos in performance and quality, which makes them an ideal choice for building robots. Herkulex servos sit between common RC servos and high-end harmonic-geared servo motors.
V-REP (Virtual Robot Experimentation Platform) is often called the Swiss Army knife among robot simulators. It is a comprehensive tool that can be used by beginners as well as robotics gurus. It provides interfaces in C/C++, Python, Lua, Java, Matlab and URBI, and it is cross-platform, working flawlessly on Windows, Linux and Mac. What attracted me to V-REP is that it can be used for fast prototyping and verification and fast algorithm development, and it makes it possible to test sensors and vision algorithms. V-REP handles dynamics and kinematics simulations pretty well. Two dynamics engines, Bullet and ODE, are available, and the user can choose whichever suits their needs. It also lets the user create custom user interfaces for the robots, and it ships with plenty of common robot models by default.
Dr. Marc Freese from Coppelia Robotics explains the main strengths of V-REP compared to similar products: “V-REP allows the user to create virtually any robotic system quickly, thanks to a built-in script interpreter and more than 300 different API functions: sophisticated sensors, actuators or whole robots can be edited from within the simulator, which offers an integrated development environment. Created models can be reused by a simple drag-and-drop operation”.
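To give a feel for how little code is needed, here is a minimal sketch that talks to a running simulation through the Python remote API. It assumes the remote API bindings (vrep.py and the remoteApi shared library) are available, that the simulator is listening on port 19997, and that the scene contains a joint named 'left_motor' (a placeholder name, not something from a real scene):

# Minimal sketch: connect to V-REP over the legacy remote API and spin a joint
import vrep

clientID = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)  # connect to V-REP
if clientID == -1:
    raise RuntimeError('Could not connect to V-REP. Is the simulator running?')

# 'left_motor' is only a placeholder; use the joint name from your own scene
err, motor = vrep.simxGetObjectHandle(clientID, 'left_motor', vrep.simx_opmode_oneshot_wait)
if err == vrep.simx_return_ok:
    # spin the joint at 1 rad/s
    vrep.simxSetJointTargetVelocity(clientID, motor, 1.0, vrep.simx_opmode_oneshot)

vrep.simxFinish(clientID)  # close the connection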
ROS has a major share of the robotics research and development happening worldwide, and it has excellent community support. More and more robots gain ROS support every day, and plenty of core research in robotics algorithms, especially robot navigation, manipulation and cognitive robotics, happens on top of ROS. The default simulator that comes with ROS is Gazebo, but in my experience Gazebo is not as stable as V-REP and crashes often. Gazebo is also resource hungry and needs a decent machine to run properly. This is where V-REP comes in: it is comparatively lightweight and runs fine even on my personal laptop, which has no dedicated graphics card and an Intel Core 2 Duo processor. V-REP also has an extensive ROS API, so the advantages of ROS and V-REP can be combined into a much better solution.
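As a rough sketch of how the two fit together, the ROS side can be an ordinary rospy node publishing commands that a V-REP child script subscribes to through the ROS plugin. The topic name '/vrep/cmd_vel' below is only an assumption for illustration, not a topic V-REP provides out of the box:

#!/usr/bin/env python
# Sketch of the ROS side of a ROS + V-REP setup: publish velocity commands
# that a V-REP child script could subscribe to via the ROS plugin.
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node('vrep_commander')
    pub = rospy.Publisher('/vrep/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz
    cmd = Twist()
    cmd.linear.x = 0.2     # drive forward slowly
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    main()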
Speech recognition in ROS/Linux has traditionally been done using projects like CMU Sphinx or Julius. But they lack a large vocabulary and are not very stable, so reliable speech recognition was confined to Windows/Mac users. Initially I was using a Windows virtual machine inside Ubuntu to do speech processing, even though it was quite resource consuming. A good alternative is to use the speech recognition built into Chrome by Google: the speech samples are sent to Google's servers for processing, which return the recognized speech along with a confidence value. It is quite easy to use and offers speaker-independent recognition. The only disadvantage is the delay; it normally takes about 3 seconds for the speech to be recognized. A simple Python script for speech recognition is shown below.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import shlex, subprocess, os

print "talk something"
# Record from the default ALSA device at 16 kHz until 1.5 s of silence, saving a FLAC file
os.system('sox -r 16000 -t alsa default recording.flac silence 1 0.1 1% 1 1.5 1%')
# POST the recording to Google's speech API (the same service Chrome uses)
cmd = 'wget -q -U "Mozilla/5.0" --post-file recording.flac --header="Content-Type: audio/x-flac; rate=16000" -O - "http://www.google.com/speech-api/v1/recognize?lang=en-us&client=chromium"'
args = shlex.split(cmd)
output, error = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
if not error:
    a = eval(output)  # the response is a JSON-like dict holding the hypotheses
    #a = eval(open("data.txt").read())
    confidence = a['hypotheses'][0]['confidence']
    speech = a['hypotheses'][0]['utterance']
    print "you said: ", speech, " with ", confidence, "confidence"
I have also created a ROS package for speech recognition. It can be run by checking out the GitHub repo and running 'rosrun gspeech gspeech.py'. It publishes two topics: /speech and /confidence. The first is the detected speech, while the latter is the confidence level of the detection.
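A minimal listener for these topics could look like the sketch below. I am assuming here that both topics carry std_msgs/String messages; check the package source if the actual message types differ:

#!/usr/bin/env python
# Sketch of a node that listens to the two topics published by gspeech
import rospy
from std_msgs.msg import String

def on_speech(msg):
    print "heard:", msg.data

def on_confidence(msg):
    print "confidence:", msg.data

rospy.init_node('gspeech_listener')
rospy.Subscriber('/speech', String, on_speech)
rospy.Subscriber('/confidence', String, on_confidence)
rospy.spin()  # keep the node alive and process incoming messages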