How do I measure my robot's movement speed

Let me show an easy way I use to measure the speed of a robot's movements. It is important to make the measurements with an accuracy of about 0.1 s to optimize the delays between commands.

I use my camera to record a video of the movements, and then convert the video file into a set of pictures with a command like this:

ffmpeg -ss 00:00:04 -t 40 -i ../MVI_3219.MOV G_%04d.png

ffmpeg is a cross-platform utility released under the GNU licenses (LGPL/GPL)

-ss <position> – start position

-t <duration> – limit the duration of data read from the input file

-i <input_file> – the video file

G_%04d.png – an example of a C printf-style template for the generated image names. %04d is substituted with the frame number, zero-padded to four digits: 0000, 0001, 0002, ….

The generated pictures look like this:

We can see that the transition lasts 7 frames.

From the metadata that ffmpeg displays, we can get the video frame rate:

major_brand     : qt
minor_version   : 537331968
compatible_brands: qt  CAEP
encoder         : Lavf56.36.100
Stream #0:0(eng): Video: png, rgb24, 1920x1080, q=2-31, 200 kb/s, 23.98 fps, 23.98 tbn, 23.98 tbc (default)

It is 23.98 frames per second in this case (that is almost 24). So the transition lasts:

7 frames / 24 fps ≈ 0.29 s
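A minimal Python sketch of this calculation: it pulls the frame rate out of the ffmpeg stream line shown above and converts a frame count into seconds. The stream line and the 7-frame count are taken from this example.

```python
import re

# The stream line printed by ffmpeg (copied from the metadata above).
stream_line = ("Stream #0:0(eng): Video: png, rgb24, 1920x1080, q=2-31, "
               "200 kb/s, 23.98 fps, 23.98 tbn, 23.98 tbc (default)")

# Pull the frame rate out of the "... fps" field.
fps = float(re.search(r"([\d.]+) fps", stream_line).group(1))

# Duration of a transition that spans a given number of frames.
def frames_to_seconds(frames, fps):
    return frames / fps

duration = frames_to_seconds(7, fps)
print(f"{fps} fps -> 7 frames = {duration:.2f} s")  # 7 / 23.98 ≈ 0.29 s
```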

How the robot sees the world

For me, Computer Vision is one of the most exciting and important topics in robotics. Humans are said to get about 80% of all information about the world through their eyes, so the ability to collect visual information is a big advantage for any robot. The basic visual information our brain supplies is what objects are around and where they are. Let's consider how robots do that.

Computer vision object detection consists of the following major steps:

1. Image preprocessing – improving the quality of the image: reducing noise, adjusting the light and contrast.

2. Extracting points that potentially belong to the object we are looking for. This robot applies a gradient filter to detect potential borders – the green dots in the picture.

3. Segmenting the extracted points and removing small segments. Rough surfaces also produce their own mini-borders in the image; a "real" object has a long, solid border that helps to distinguish it.

4. Constructing objects from the extracted segments using specific patterns. The robot is interested only in objects he can pick up with his gripper, so the objects should not be too small or too large. Also, the black gripper parts at the bottom right and left of the image should not be detected as objects.

5. When the objects are detected, the robot knows their coordinates on the image – these coordinates need to be translated into angles: deviations from the main direction of the camera.

6. Knowing the configuration of the robot's neck, the robot calculates the position and direction of the camera; using that, he calculates the location of the detected objects on the floor. The detection accuracy of this robot is a few millimeters!
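Steps 2 and 3 can be sketched in a few lines of Python. This is not the robot's actual code – just a toy illustration on a tiny grayscale grid, with a made-up gradient threshold and minimum segment size: a gradient filter marks candidate border points, then connected groups of points that are too small are discarded as surface noise.

```python
# Toy illustration of steps 2-3 (hypothetical code, not the robot's own):
# mark pixels with a strong brightness gradient, then drop tiny segments.

image = [  # small grayscale image: a bright 3x3 object plus one noisy pixel
    [0, 0,   0,   0,   0, 0],
    [0, 200, 200, 200, 0, 0],
    [0, 200, 200, 200, 0, 0],
    [0, 200, 200, 200, 0, 0],
    [0, 0,   0,   0,   0, 0],
    [0, 0,   0,   0,   0, 90],
]

H, W = len(image), len(image[0])
THRESHOLD = 50  # assumed gradient threshold

# Step 2: gradient filter - candidate border points (the "green dots").
def gradient(y, x):
    gx = image[y][min(x + 1, W - 1)] - image[y][x]
    gy = image[min(y + 1, H - 1)][x] - image[y][x]
    return abs(gx) + abs(gy)

points = {(y, x) for y in range(H) for x in range(W)
          if gradient(y, x) > THRESHOLD}

# Step 3: group neighbouring points into segments (8-connectivity),
# then drop the small ones.
def segments(pts):
    pts, result = set(pts), []
    while pts:
        stack, seg = [pts.pop()], set()
        while stack:
            y, x = stack.pop()
            seg.add((y, x))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    n = (y + dy, x + dx)
                    if n in pts:
                        pts.remove(n)
                        stack.append(n)
        result.append(seg)
    return result

MIN_SIZE = 4  # segments shorter than this are treated as surface roughness
borders = [s for s in segments(points) if len(s) >= MIN_SIZE]
print(len(borders), "solid border(s) kept")
```

Here the lone noisy pixel produces only a two-point mini-border, so the size filter removes it, while the object's long border survives.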

Having calculated the location of the objects, the robot can easily move to grab them.
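The geometry behind steps 5 and 6 can be sketched like this. All numbers below (focal length, image centre, camera height and tilt) are made up for illustration, not this robot's real parameters: a pixel offset from the image centre becomes an angle, and together with the camera's height and downward tilt that angle gives a position on the floor.

```python
import math

# All parameters below are hypothetical, for illustration only.
FOCAL_PX = 1000.0            # focal length in pixels (from camera calibration)
CX, CY = 960, 540            # image centre of a 1920x1080 frame
CAM_HEIGHT = 0.40            # camera height above the floor, metres
CAM_TILT = math.radians(30)  # downward tilt of the camera from horizontal

# Step 5: pixel coordinates -> angular deviation from the camera axis.
def pixel_to_angles(px, py):
    yaw = math.atan((px - CX) / FOCAL_PX)    # left/right of the camera axis
    pitch = math.atan((py - CY) / FOCAL_PX)  # below (+) / above (-) the axis
    return yaw, pitch

# Step 6: intersect the viewing ray with the floor plane.
def floor_position(px, py):
    yaw, pitch = pixel_to_angles(px, py)
    down = CAM_TILT + pitch                # total downward angle of the ray
    forward = CAM_HEIGHT / math.tan(down)  # distance ahead of the camera
    sideways = forward * math.tan(yaw)     # offset to the side
    return forward, sideways

x, y = floor_position(960, 540)  # an object seen at the image centre
print(f"forward {x:.3f} m, sideways {y:.3f} m")
```

With these numbers, an object at the image centre lies 0.40 / tan(30°) ≈ 0.69 m ahead of the camera, straight along its axis.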

Follow my posts and you will read the details of every step of this process.

See you!