i wanted to reuse the neck construction from my former trashbots:
unfortunately, i had to remove the bike lamp because the stripped minoru camera is broader (about 9 cm), so i had to find a lamp head it fits into.
i’m thinking about adding stereo vision to trashbot, as the raspberry pi has enough oomph to do some kind of computer vision and it seems that opencv supports this camera.
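just to sketch the idea: if the stripped minoru shows up as two ordinary webcams, a first stereo test with opencv could look roughly like this. the device indices and block-matcher parameters are guesses, and without calibration the disparity is only a crude depth cue:

```python
# rough stereo test, assuming the minoru appears as two V4L2
# devices (indices 0 and 1 are guesses and vary by setup)
import cv2

left = cv2.VideoCapture(0)
right = cv2.VideoCapture(1)
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

while True:
    ok_l, frame_l = left.read()
    ok_r, frame_r = right.read()
    if not (ok_l and ok_r):
        break
    # block matching works on greyscale images
    gray_l = cv2.cvtColor(frame_l, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(frame_r, cv2.COLOR_BGR2GRAY)
    disparity = stereo.compute(gray_l, gray_r)
    # scale the 16-bit fixed-point disparity down for display
    cv2.imshow("disparity", cv2.convertScaleAbs(disparity, alpha=255.0 / (64 * 16)))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

left.release()
right.release()
cv2.destroyAllWindows()
```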
there aren’t too many “3d” cams out there that actually fulfil my requirements:
for trashbot 6 i planned to replace the arduino nano with a raspberry pi 2. i also moved the board below the hips as, luckily, the three mg996r servos of the hip are as wide as the raspberry pi:
but as you can see on the lower left (where the edimax wifi plug is inserted), the usb ports are pretty much flush with the outer servos, i.e. the attached legs will not leave much space for usb plugs.
most of this work was done over christmas, but only now have i found the time to at least do a quick tour around the bot. here are some highlights:
I always love to stick to minimalism and to construct stuff with a minimal amount of material and complexity. Of course, the question then is how replicable, durable and maintainable the result is.
In the present case it became obvious that, at some point in time, Trashbot would need an additional degree of freedom in his foot, namely an ankle that allows him to lean forward. The present prototype is actually from August last year. But now I’m revisiting my designs, as Trashbot recently got the long-awaited additional hip servos, and the logical next step is to add the ankle servos.
So let’s start with the layout:
Last week I added three more degrees of freedom to Trashbot: two hip servos for forward movement (“kicking”) and one to the bone.
This week I found some time to do the first single-servo movements and tests to check out the bot’s new geometry, since the broader hips will affect the center of gravity etc. Here’s the first attempt to do what the normal gait would do: shift the body over one foot:
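In code, that test is little more than easing a single hip servo from neutral towards the stance foot. A minimal sketch, assuming pigpio and a hip servo on GPIO 18 (pin, pulse widths and timing are made-up values, not Trashbot’s actual configuration):

```python
import time
import pigpio

HIP_GPIO = 18      # assumed wiring
CENTER_US = 1500   # neutral pulse width in microseconds
SHIFT_US = 1700    # lean towards the stance foot (made-up value)

pi = pigpio.pi()

def ease(start_us, target_us, steps=50, dt=0.02):
    """Move slowly so the skeleton doesn't start to swing."""
    for i in range(1, steps + 1):
        pw = start_us + (target_us - start_us) * i / steps
        pi.set_servo_pulsewidth(HIP_GPIO, int(round(pw)))
        time.sleep(dt)

ease(CENTER_US, SHIFT_US)   # shift the body over one foot
time.sleep(1.0)
ease(SHIFT_US, CENTER_US)   # and back to neutral
pi.set_servo_pulsewidth(HIP_GPIO, 0)  # stop sending pulses
pi.stop()
```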
So, definitely software teaching me how to improve hardware… Next draft iteration:
I’ve been working on Trashbot for quite a while now, but the basic gait mechanism is still the same as in version one. The hips’ movements and the distance between the legs define the possible step length. This is annoying since the robot is rather tall and you’d expect him to walk a bit faster than he actually does. However, moving faster induces stronger vibrations in the skeleton and makes him fall over much more easily.
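For a back-of-the-envelope feel for the limit: treating each leg as a rigid pendulum of length L that the hip swings through a total angle θ, the step length comes out to roughly s = 2·L·sin(θ/2). The numbers below are invented, just to show the order of magnitude:

```python
from math import sin, radians

L = 0.30      # assumed leg length in metres
theta = 20.0  # assumed total hip swing in degrees

s = 2 * L * sin(radians(theta) / 2)
print(f"step length ≈ {s * 100:.1f} cm")  # ≈ 10.4 cm
```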
Also, the upper part of the body will tilt “stronger” when …
Slowly inching towards the Blackbird2 on the Oculus Rift DK2.
Right now, I have the ancient Logilink video capture card running on Windows 10 and found a piece of software that is old enough to be compatible with it but new enough to run on Windows 10.
It installed DirectPlay on Windows 10, so that’s the tribute to old tech, I guess.
So I get the video on my desktop now, having updated to Nvidia driver 358.87 for my GTX 980 and the latest Oculus Rift driver (v0.8).
Here’s what we get:
As you can see, it’s somehow black and white. The capture software says it sees 525 lines (PAL60) or 625 lines (PAL_N); with the latter setting, I also get some sort of colors, but they change rapidly and are mostly false…
I can also put the video on full screen, but it seems that with the latest Oculus drivers I can’t use the DK2 as a second output display, so I don’t know how to get the camera video into the DK2. Clearly, more research is needed…
I’ve been dreaming of building a remote-controlled 3D camera rig for the Oculus Rift for over a year now. Recently, I came across the Blackbird 3D First Person View camera in a forum. And then I learned that the producer of this nice thing had just launched version 2, which is natively capable of producing images suitable for the Oculus Rift by rotating the videos, gluing them next to each other and distorting them adequately. I had to buy it right away. That was about three weeks ago, and today I finally received it.
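To make the “rotate and glue” idea concrete, here is a rough OpenCV illustration of the geometry (illustration only: the Blackbird2 does all of this on-board, the rotation directions are my guesses, and the Rift’s barrel distortion step is left out):

```python
import cv2
import numpy as np

def side_by_side(frame_left, frame_right):
    # the sensors are assumed to be mounted rotated by 90 degrees,
    # so un-rotate each eye's image first ...
    l = cv2.rotate(frame_left, cv2.ROTATE_90_CLOCKWISE)
    r = cv2.rotate(frame_right, cv2.ROTATE_90_COUNTERCLOCKWISE)
    # ... then glue them next to each other, one half per eye
    return np.hstack((l, r))
```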
I thought I’d just share the unboxing with you. I hope I’ll have time to play with it (the rest of the hardware is on my desk already: an analog display, wireless transmission, USB video digitisation and a 2-DoF servo gimbal to be remote-controlled). Yay!
You can see some similar projects here.
So, here’s BabblePi’s software: CMU Sphinx running in phoneme-detection mode, i.e. it is not recognising text or words but actual phonemes (transcriptions of speech sounds). It then speaks this sequence back using espeak, again running in phoneme mode:
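The loop could look roughly like this. A sketch only: it assumes pocketsphinx’s allphone mode and espeak’s [[...]] phoneme input, the model file name varies by installation, and the tiny CMU-to-Kirshenbaum phone map is nowhere near complete (a real one needs all ~40 CMU phones):

```python
# babble loop sketch: decode phonemes with pocketsphinx in allphone
# mode, then speak them back via espeak's raw phoneme input.
import subprocess

# illustrative subset of a CMU-to-Kirshenbaum mapping (incomplete!)
CMU_TO_ESPEAK = {"HH": "h", "AH": "V", "L": "l", "OW": "oU", "SIL": " "}

def decode_phonemes(wav_path):
    # the phonetic LM file name depends on the installed model
    out = subprocess.run(
        ["pocketsphinx_continuous", "-infile", wav_path,
         "-allphone", "en-us-phone.lm.bin", "-logfn", "/dev/null"],
        capture_output=True, text=True,
    ).stdout
    return out.split()  # e.g. ["SIL", "HH", "AH", "L", "OW", "SIL"]

def babble(phones):
    # espeak accepts phoneme mnemonics wrapped in double brackets
    mnemonics = "".join(CMU_TO_ESPEAK.get(p, "") for p in phones)
    subprocess.run(["espeak", f"[[{mnemonics}]]"])

babble(decode_phonemes("heard.wav"))
```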
So when we look at BabblePi and how it listens to and repeats words and sentences, we see that …