Author Archives: ramin assadollahi

Trashbot upper body and neck servos revived

I recently split my Trashbot in half to finally get a handle on the walking patterns of the lower part, as I changed the controller from an Arduino Nano to a Raspberry Pi. Here’s the upper part:

Trashbot upper body incl neck & head

With the recent progress of running the Oculus Rift from a Pi 3 and the experiments in streaming video from the Blackbird 2 stereo cam, I thought it was a great idea to attach the camera to the upper part of Trashbot and send the Oculus head orientation to the neck-controlling Arduino Mini Pro.

First step is to get the Arduino running with the PC again. But oh, that shitty servo:

So, before diving deeper into head synchronous robotic telepresence, I’ll need to fix that bugger…

Wireless streaming stereo video from an RC rover to the Oculus Pi

The next iteration was of course to simply try out the wireless transmission of video. And this doesn’t make sense if you still have the camera attached to yourself, so I “augmented” another project of mine, the rover:

Arduino / RC mixed autonomy Rover

I also put together a little self-sufficient stereo video transmission pack, including the Blackbird 2 camera:

Battery, video sender, stereo camera

Then I attached the two. I love modular designs where you can recombine your projects easily, so these two modules simply each have their own battery etc. Here’s the full setup:

Oculus Pi wireless streaming video rover setup

Here’s the video:

(Don’t know what the heck Pinnacle Studio was thinking, putting that black frame around the video when exporting.)

Strangely, we seem to get some kind of “interlace distortion” when the car is moving too quickly. I don’t quite know whether that is an artefact of the analog video transmission or of the digitisation process itself:

interlace artifacts

When playing with this, you intuitively move your head to look around. So I could read out the head movements and send them to the Arduino on the rover to actually move the camera. I’d also need to add another servo to the camera mount to actually be able to move it along at least two axes. Let’s see…
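Just to capture the idea, here is a rough Python sketch of that head-to-servo loop. It is only a sketch under assumptions: get_head_orientation() is a hypothetical placeholder (it could be fed by the OpenHMD head tracking I already use), the serial port and baud rate are guesses, and the Arduino sketch is assumed to parse lines of the form "pan,tilt".

# Rough sketch: forward Rift head yaw/pitch to the rover's Arduino as pan/tilt angles.
# get_head_orientation() is a hypothetical placeholder; port, baud rate and the
# "pan,tilt\n" line protocol on the Arduino side are assumptions, not working code.
import time
import serial  # pyserial

ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)

def get_head_orientation():
    """Placeholder: return (yaw, pitch) of the headset in degrees."""
    return 0.0, 0.0  # replace with real tracking data

def to_servo(angle_deg):
    # Map an angle around the neutral position onto the 0-180 degree servo range.
    return max(0, min(180, int(90 + angle_deg)))

while True:
    yaw, pitch = get_head_orientation()
    ser.write('{},{}\n'.format(to_servo(yaw), to_servo(pitch)).encode())
    time.sleep(0.02)  # ~50 Hz is plenty for hobby servos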

 

2 3D cams & 1 2D cam on the DK2 @ Pi3

Cryptic title for a blog post, I know. But I urgently needed to try out my cameras with the Pi3 for use with the Oculus Rift DK2 (video below). My last attempt to stream video locally into the Rift was successful, so I wanted more. The initial single 2D camera was a Logitech C525 webcam (about $60):

logitech hd webcam c525

I have two stereoscopic cameras. The Minoru is “kind of” a cheap camera at about 70€, but then again not really, because it’s just 2× 640×480 resolution… I stripped the camera down to reduce its size, weight and volume so I can actually use it on the Trashbot, since I know it is supported by Raspbian as a camera.

Minoru Stereoscopic Cam, around 70€

I have also already experimented with the Blackbird 2, an analog camera for streaming video from drones to video goggles. However, that attempt was not really successful, since the camera software on the PC (!) was laggy and I suspected that it might be due to the Easycap video capture card (10-20€).

BlackBird 2, around 180 USD

But let’s see what happens when I run it on the Pi 3:

Learning: the 190€ combo beats the other two BY FAR in experience.

PS: In the video, you can see that I attached the cameras to a different Pi than the one attached to the Oculus. This is because the Oculus Pi has display settings tuned towards the goggles, and I’d need to invest time to make the config switchable via software in order to toggle between the goggles and a real external display.


Oculus Pi untethered, pt. 2: a camera!

Last time, I was able to get the Oculus Rift DK2 to run on the Raspberry Pi 3, including the head tracking. However, the first interactions showed that it’s cumbersome to work with the desktop (since it isn’t split across the two eyes but really treats the LCD as one screen) and also to use the keyboard.

Also, in the context of making the Oculus mobile and untethered, it is necessary to have a camera onboard, at least until I get the XTion to work. Interestingly, it is not that easy to find software that can simply display a video stream from a local webcam; most blogs just describe how to stream from a remote webcam or how to make a local USB cam accessible via some web service.

My last attempt to get the analog stereo cam to work was not really satisfying, since the latencies on a PC plus some weird display software shipped as an .exe were not the ideal setup to really improve things.

So after some research, I found a git repository for streaming a local webcam to a dedicated view, independent of the desktop. To build it, you need to install CMake and libbsd-dev first:

sudo apt-get install libbsd-dev
sudo apt-get install cmake
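For reference, roughly the same effect (a local webcam stream in its own fullscreen window, independent of the desktop) can also be had with a few lines of OpenCV. This is not the code from that repository, just a minimal sketch that assumes python3-opencv is installed and the webcam shows up as /dev/video0:

# Minimal fullscreen preview of a local USB webcam with OpenCV (sketch, not the repo's code).
import cv2

cap = cv2.VideoCapture(0)  # assumes the cam is /dev/video0
cv2.namedWindow('cam', cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty('cam', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('cam', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()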

I was also able to mount the camera in a nice position without any additional mechanics:

Cam fits nicely next to the cables of the Rift, and there is enough space on the straps to route the cables.

Here’s the video walk through with some live feed to see the latency:

Next, I may try to either position the video on one eye only, or even double the stream to both eyes. Or I may try to use the Minoru stereo cam that I was working on last year.
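Building on the same loop as in the sketch above, the “double the stream to both eyes” part is mostly a per-frame resize plus a side-by-side stack on the DK2’s 1920×1080 panel (960×1080 per eye). A naive sketch, again with OpenCV and without any lens distortion correction:

# Sketch: show the same webcam frame once per eye, side by side, no lens correction.
import cv2
import numpy as np

EYE_W, EYE_H = 960, 1080  # half of the DK2 panel per eye

cap = cv2.VideoCapture(0)
cv2.namedWindow('eyes', cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty('eyes', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    eye = cv2.resize(frame, (EYE_W, EYE_H))
    cv2.imshow('eyes', np.hstack((eye, eye)))  # left eye | right eye
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()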


Getting XTion / Kinect to run on Pi 3, pt 2

So, after yesterday’s compile of the PointCloudLibrary worked (even though I couldn’t install some of the packages beforehand), today I compiled OpenNI, although I was not able to install g++-multilib (following Larry’s tutorial).

Well, the error messages seemed to be mostly about generating the documentation, but I don’t care too much about that since Google / StackExchange et al. are the best documentation…

Also, the installation worked (within seconds).

Next up was the installation of the Kinect driver / software package. Then it hailed errors. So it seems that g++-multilib is necessary for the SensorKinect package to compile. Getting hold of it will be an adventure, since it doesn’t seem to be available in Raspbian.

Interestingly, an older tutorial from 2013 says to install g++, not g++-multilib. However, trying to install that just led to the message that I already had the latest version.

In another old blog article, I found another version of this tutorial with an error similar to the one I got when calling make in the Build directory. And luckily, there was a reply by another visitor from May this year saying that there are two files to change slightly in order to compile it.

I don’t quite understand what exactly has been changed there, but I’ll try it out tomorrow to see whether this will work.

So to sum up:

  • swap file extended
  • compiled PCL
  • compiled OpenNI
  • to succeed: compile SensorKinect

 

Using the XTion / Kinect on Raspberry Pi 3 pt. 1

Recently, I managed to use the Oculus Rift as a display for the Pi 3, including listening to the gyros via OpenHMD. And I found another set of instructions for running the XTion RGBD camera on the Pi 3 as well. For the sake of simplicity, I’m setting up a new system first and will then “fuse” the two.

The first prerequisite in Larry’s tutorial is to add decent swap space to the OS; his OS is Ubuntu, mine is Debian Jessie Pixel, and there it is done differently. On the default Raspberry Pi OS, you change the size of the swap space in the file /etc/dphys-swapfile, raising CONF_SWAPSIZE=100 from 100 MB to 2000 MB. I love StackExchange for that… (after rebooting, the command free -h should show you 2.0 GB of swap space).

Under Raspbian Jessie Pixel, I was not able to install libvtk (sudo apt-get install libvtk5.10-qt4 libvtk5.10 libvtk5-dev). Mono, Qt and the JDK seem quite heavy (about one GB of space) and I don’t know whether I really need them; I hope not, since I really want to code in Python and don’t intend to use Qt. I installed them anyway, just to stay as close to the recipe as possible (sudo apt-get install mono-complete and sudo apt-get install qt-sdk openjdk-8-jdk openjdk-8-jre).

sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl also didn’t work; you need to sudo apt-get install software-properties-common first (see StackExchange again). Then it starts to work, but finally I get “NoDistroTemplateException: Error could not find a distribution template for Raspbian/jessie”. Damn. For fun, I googled Jochen Sprickerhof and found his PCL site. As the URL says, it’s geared towards Ubuntu, which might explain the error I got. When you google “installing PCL on raspbian” you get a couple of descriptions of how people did it, so I’ll fall back to those should something go wrong…

So I skipped this Launchpad part, just to see how far I would get. My feeling was that I wasn’t apt-getting anything afterwards anyway (but actually git cloning), so maybe this has no immediate effect; let’s see. Git cloning of the PointCloudLib worked (after entering user and password for git).

I was also able to run the cmake command and start make. The blog says it will take about 7 hours to compile, so I’m off to bed now…

UPDATE: it worked! (sudo make install)

Oculus Rift DK2 on a Raspberry Pi3: towards untethered VR.

I’ve long been dreaming of connecting these two important technologies and running them in a way that I actually understand. I’m not good at Android programming (for those cardboard VRs) and not really good at Unity either (although it would be fun to dive deeper), and somehow my driver situation with the DK2 and Windows 10 has deteriorated.

There have been earlier attempts to run the Oculus on a Pi, but in my eyes the latest iteration I found fits me best (including Python bindings and 3d libraries).

So recently, I came across Wayne Keenan’s blog; he actually did exactly what I’m dreaming of, namely running the DK2 controlled by the Pi3. I’m not so much interested in maximising the complexity of the geometry in VR, but really more in the interaction design and in what untethered VR (or even AR) can actually feel like.

His GitHub recipe works really nicely, so the installation was a matter of 30 minutes:

Pi3, DK2 and mini keyboard


three iterations of lightweight, organic building blocks for robots

In this post I compare all three versions of my lightweight structure experiments, with a couple of pictures and descriptions that are not given in the video below:

Version 1 had no holes in the roof and floor, and that created problems for the printer; also, I was printing the PLA at 200 degrees. I learned that lower temperatures are better for bridge structures. Download both files on Thingiverse.

Images: solid Tinkercad versions 1 & 2, Voronoi versions 1 & 2

So for version 2 I decided to work with round holes in all three planes; cylindrical holes always yield a more organic appearance compared to triangles / prisms, and I want to get more organic-looking structures. Download version 2, solid and Voronoi versions, on Thingiverse.

Images: Tinkercad rounded box v31, ungrouped, solid and Voronoi versions

Version 3 is a refinement of version 2 in that the smaller holes are bigger in diameter to better distinguish them from the automatically generated Voronoi holes. Also, a greater diameter yields more surface on which to add the organic structure. Finally, I reduced the centre holes to get a better average “wall” thickness, i.e. the distance between one hole and another or between a hole and the sides of the box. In this way, the printer had a better chance to actually realise the Voronoi holes between the cylinders.

Images: v4 solid in Tinkercad, v4 solid top view

Finally, I produced the Voronoi version with a thicker structure, meaning that the lines are thicker and the printer ideally has a better chance to “find” them again in the next layer:

Images: v4 Voronoi versions 1, 2 and 3

Final versions for download here.

Learnings when printing:

I also reduced the printing temperature to 190 degrees from layer 3 on. The first layers I printed at 200 degrees to allow for better sticking to the heat bed. I changed the bed to glass and used a glue stick. These delicate structures have only a couple of touch points with the heat bed, so it is of utmost importance to have them stick well. The servo housing shown at the end of the video was printed on pure glass without glue or spray, but the organic structures need more attention…


non-reversible reverse engineering of an electric toothbrush

I always wanted to study how the inductive charging in such devices actually works, so I decided to open up an old Braun toothbrush.

Here are two GIFs:

toothbrush standing

toothbrush

 

You can open the toothbrush by destroying the snap-ins underneath the plastic protection:



This is the final charging setup. In the end it didn’t work anymore; the battery is probably dead. I may connect a standard 1.2 V NiMH battery to it and see what it does. The multimeter showed about 300 mV on the receiving coil, which seems a bit too low to me, but that might also just mean that the battery is dead?


Here are the chips, if anyone’s interested:


Diving into 3d-printed lightweight structures

I recently bought the cheapest Prusa i3 clone on the planet, an Anet A8 from Gearbest. Lovely to build and certainly great for an electronics hobbyist / maker. I would never buy a 600€ or even 2000€ printer today, since I don’t think I have that many usage scenarios.

But there was always one dream: to print “bones” for my walking robots to make them lighter and more “bionic”. So one of the first things I designed on Tinkercad was this structure:

light weight structure

 

I’ve been searching for a while for ways to make structures more “bone”-like, i.e. to get some kind of organic or cell-like holes into them. That’s not too easy.

One tutorial I found works with Autodesk Meshmixer, a piece of software that is really nice for working with 3d models and viewing them. The tutorial leads to structures like this:

The problem is that this depends on the number of points in the 3d model itself, and tessellating them is cumbersome imho.

Another piece of software I found is the Voronoization Online service, which is EXACTLY what I want (upload STL, set parameters, download STL, bang!). The point is that you don’t have to care about the points your model consists of:

meshed box

And here’s the video comparing the solid version and the Voronoi version:

This 6×3×3 cm structure weighs five grams at 20% infill; its 3×1.5×1.5 cm solid sibling weighs six grams (but of course the solid one is MUCH sturdier).

I’ll now get into testing these parameters and understanding them in the context of 3d-printability. Exciting!