i wanted to reuse the neck construction from my former trashbots:
i’m thinking about adding stereo vision to trashbot, as the raspberry pi has enough oomph for some kind of computer vision and it seems that opencv supports this camera.
there aren’t too many “3d” cams out there that actually fulfill my requirements:
- cheap (<100€), so i can buy more when i kill one
- 3d or at least two cameras
- supported by raspbian et al.
- small enough to fit in a robot’s head and
- light (the higher up the hardware sits in the bot, the more it impacts stability when walking)
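the payoff of two cameras is depth from disparity. just to get a feel for the numbers, here’s a tiny sketch of the triangulation math. the baseline and focal length are made-up placeholders, not values from any of these cameras (opencv’s stereo matchers, e.g. StereoBM, would produce the actual disparity map):

```python
# Depth from disparity for a stereo pair: Z = f * B / d.
# Baseline and focal length below are placeholder values; a real rig
# needs calibration first (e.g. with OpenCV's stereoCalibrate).

def depth_from_disparity(disparity_px, baseline_m=0.06, focal_px=700.0):
    """Triangulated depth in metres for a given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    # with these placeholder numbers, 35 px of disparity is ~1.2 m away
    print(round(depth_from_disparity(35.0), 2))
```

the formula also shows why the baseline matters for a robot head: with only a few centimetres between the lenses, far-away objects produce sub-pixel disparities and the depth estimate gets noisy fast.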
for trashbot 6 i planned to replace the arduino nano with a raspberry pi 2. i also moved the board below the hips as, luckily, the three mg996r servos of the hip are as wide as the raspberry pi:
but as you can see on the lower left (where the edimax wifi plug is inserted), the usb ports are pretty much flush with the outer servos, i.e. the attached legs will not leave much space for usb plugs.
most of this work has been done over christmas, but only now did i find the time to at least do a quick tour around the bot. here are some highlights:
- arduino nano changed to raspberry pi 2 & wifi
- moved controller from back to hips
- added 16 channel i2c servo controller
- new foot construction adding an additional degree of freedom (DoF)
- reconstructed legs that are lighter and now hold the new batteries
- power distribution board including i2c current / voltage measurement
- accelerometer and gyro sensor (i2c) moved to “belly” instead of neck
- added arms with shoulders (two DoF)
- changed spine, reduced complexity
- removed head for now (being redesigned)
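most of these 16 channel i2c servo boards are pca9685 based; assuming that’s the case here, the pwm maths for driving one servo looks roughly like this (the pulse widths are typical hobby servo values, check your servo’s datasheet):

```python
# Angle to PCA9685 tick count (sketch, assuming a PCA9685-style board).
# The chip divides each PWM period into 4096 ticks; at 50 Hz one tick
# is ~4.88 us, and hobby servos expect 1.0-2.0 ms pulses for 0-180 deg.

def angle_to_ticks(angle_deg, min_us=1000, max_us=2000, freq_hz=50):
    if not 0 <= angle_deg <= 180:
        raise ValueError("angle out of range")
    pulse_us = min_us + (max_us - min_us) * angle_deg / 180.0
    tick_us = 1_000_000 / (freq_hz * 4096)  # duration of one tick in us
    return round(pulse_us / tick_us)

if __name__ == "__main__":
    print(angle_to_ticks(0), angle_to_ticks(90), angle_to_ticks(180))
```

on the pi you’d then write that tick count to the channel’s registers over i2c (the adafruit pca9685 libraries wrap exactly this calculation).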
I love minimalism, and I love constructing things with a minimal amount of material and complexity. Of course, the question then is how replicable, durable and maintainable the result is.
In the present case it became obvious that at some point Trashbot would need an additional degree of freedom in his foot, namely an ankle that allows him to lean forward. The present prototype is actually from August last year. But now I’m revisiting my designs, as Trashbot recently got the long-awaited additional hip servos and the logical next step is to add the ankle servos.
So let’s start with the layout:
Last week I added three more degrees of freedom to Trashbot, two hip servos for forward movement (“kicking”) and one to the bone.
This week I found some time to do the first single-servo movements and tests, to check out the new geometry of the bot, since the broader hips will affect the center of gravity etc. Here’s the first attempt at what the normal gait would do: shift the body to one foot:
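A body shift like that is basically easing a handful of servos from one pose to another. A minimal sketch of the idea (the joint names and angles are made-up placeholders, not Trashbot’s real calibration):

```python
import math

# Ease the servos from a neutral stance into a "lean onto the left
# foot" pose. All angles are placeholder values for illustration.

NEUTRAL = {"hip_l": 90.0, "hip_r": 90.0, "ankle_l": 90.0, "ankle_r": 90.0}
SHIFT_LEFT = {"hip_l": 80.0, "hip_r": 100.0, "ankle_l": 95.0, "ankle_r": 85.0}

def pose_at(t):
    """Pose for t in [0, 1], cosine-eased so the move starts and ends gently."""
    s = (1 - math.cos(math.pi * t)) / 2
    return {j: NEUTRAL[j] + s * (SHIFT_LEFT[j] - NEUTRAL[j]) for j in NEUTRAL}

if __name__ == "__main__":
    for step in range(5):
        print(pose_at(step / 4))
```

The easing is the point of the exercise: a cosine ramp avoids the abrupt accelerations that shake the skeleton.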
So, definitely software teaching me how to improve hardware… Next draft iteration:
I’ve been working on Trashbot for quite a while now, but the basic gait mechanism is still the same as in version one. The hips’ movements and the distance between the legs define the possible step length. This is annoying since the robot is rather tall and you’d expect him to walk a bit faster than he actually does. However, moving faster induces stronger vibrations in the skeleton and makes him fall much more easily.
Also, the upper part of the body will tilt more strongly when …
Slowly inching towards the Blackbird2 on the Oculus Rift DK2.
It installed DirectPlay on Windows 10, so that’s a tribute to old tech, I guess.
So I get the video on my desktop now, having updated to Nvidia driver 358.87 for my GTX980 and the latest Oculus Rift driver (v8.0).
What we get is here:
As you can see, it’s somehow black and white. The capture says it sees 525 lines (PAL60) or 625 lines (PAL_N); with the latter setting I also get some sort of colour, but it changes rapidly and is mostly false…
I can also put the video on full screen, but it seems that with the latest Oculus drivers I can’t use the DK2 as a second display, so I don’t know how to get the camera video into the DK2. Clearly, more research needed…
I’ve been dreaming of building a remote-controlled 3D camera rig for the Oculus Rift for over a year now. Recently, I came across the Blackbird 3D First Person View camera here in a forum. Then I learned that the producer of this nice thing had just launched version 2, which is natively capable of producing images suitable for the Oculus Rift by rotating the videos, gluing them next to each other and distorting them adequately. I had to buy it right away. That was about three weeks ago, and today I finally received it.
I thought I’d just share the unboxing with you; I hope I’ll have time to play with it (the rest of the hardware is on my desk already: an analog display, wireless transmission, USB video digitisation and a 2 DoF servo gimbal to be remote controlled). Yay!
You can see some similar projects here.
So, here’s BabblePi’s software: CMU Sphinx running in phoneme detection mode, i.e. it is not recognising text or words but actual phonemes (transcriptions of speech sounds). It then speaks this sequence back using espeak, again running in phoneme mode:
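espeak accepts phoneme strings directly when they are wrapped in double square brackets ([[...]]), but its Kirshenbaum symbols differ from the ARPAbet output Sphinx produces, so some translation is needed in between. A stripped-down sketch of that step (the mapping below covers only a handful of phonemes; the real table is much longer):

```python
import subprocess

# Map a few ARPAbet phonemes (what CMU Sphinx emits) to espeak's
# Kirshenbaum symbols. This is an illustration, not the complete table.
ARPABET_TO_KIRSHENBAUM = {
    "HH": "h", "AH": "V", "L": "l", "OW": "oU", "EH": "E", "R": "r",
}

def to_espeak_phonemes(arpabet):
    """'HH AH L OW' -> '[[hVloU]]', espeak's direct phoneme input syntax."""
    symbols = [ARPABET_TO_KIRSHENBAUM[p] for p in arpabet.split()]
    return "[[" + "".join(symbols) + "]]"

def babble_back(arpabet):
    # Requires espeak to be installed on the Pi.
    subprocess.run(["espeak", to_espeak_phonemes(arpabet)])

if __name__ == "__main__":
    print(to_espeak_phonemes("HH AH L OW"))  # [[hVloU]]
```

On the recognition side, pocketsphinx’s allphone decoding mode yields roughly such ARPAbet sequences (plus SIL markers and the like), which would then be fed into this translation.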
So when we’re looking at BabblePi and how it listens to and repeats words and sentences, we see that …