Developing for the Oculus Quest on the Oculus Quest.

Shortcut: for instructions on how to run a server on the Quest for developing a-frame webVR apps that run on the Quest, scroll down to the how-to.

Background: Reading William Gibson’s Neuromancer as a child, I was always intrigued by the idea of getting into, staying in and working in “the matrix” / cyberspace. Today, VR is still not where Gibson conceptualised it (we don’t have meaningful and conventionalised data visualisation and navigation in 3D yet), but we’re making baby steps. When I first saw this video in 2014, I fell in love with the idea of writing live-updating code in VR:

Inspecting the code today, it isn’t built with a-frame or webVR as far as I can tell, but with three.js, which afaik underlies a-frame and most webVR frameworks. It’s not hosted anywhere anymore, so you can’t try it out on your headset.

Having owned the DK2 and the CV1, I find the Oculus Quest is finally the device of my dreams, as it doesn’t depend on external sensor setups, cables or a PC. For the moment, I don’t care about the resolution and the GPU. I want to explore the idea of having a fully fledged computer on your body to work with. It’s really a new device category. Right now, content and software are developed outside the HMD and then tested on it (think Unity 3D etc.). Even for phones this is the case: you can’t develop for Android on Android without anything else (at least last time I checked). From my perspective (I grew up with the Commodore C64), the computer to consume on should be the computer to produce on.

The Quest is a computer: it has a huge screen, an OS and an internet connection, it runs Android, and it’s Bluetooth capable. I came across a-frame and think that, for the moment, this is the go-to development framework compared to Unity 3D et al. However, to develop with a-frame, you also need a web server (although systems like glitch.com and CodePen allow you to work in the browser). Thus, ideally this server itself should also run on the Quest (my ideal being a self-contained system).

Here is a little how-to:

  1. [This post assumes that you have enabled developer mode and are capable of sideloading via ADB or SideQuest.]
  2. Keyboard: I successfully paired a Bluetooth keyboard (using a sideloaded BT lister app; this one works on the Quest, others didn’t). It works in the Oculus Browser but not in the (sideloaded) Firefox Reality browser (current beta version: 1.2.3; use *oculusvr-arm64-release-signed-aligned.apk for the Oculus Quest and start it from “unknown sources”). It also works in Termux (see below).
  3. [Optional: I can successfully work in Glitch in the Oculus Quest browser (here’s something I composed from two other glitches plus some of my own code: https://glitch.com/~gallery-appear-disappear; use the grip button on all the objects: the triceratops will produce boxes with physics, the sphere will change the environment, and you can resize the picture using both controllers). Note: you can leave out this step, but it’s nice to see that you can work on code in a VR browser and experience it in the same browser.]
  4. I successfully sideloaded Termux (it “contains” / “makes accessible” Linux on Android and interfaces with the hardware, read here) and can run it in the OculusTV environment (a bigger screen size would be nice, though).
  5. In Termux (keep it up to date by issuing apt-get update and apt-get upgrade), we should install:
    1. pkg install nano (command-line text editor)
    2. pkg install python (needed in some cases)
    3. pkg install nodejs (Node.js and npm)
    4. pkg install git (for pulling git repositories)
  6. Next, we fetch a-frame by issuing git clone https://github.com/aframevr/aframe.git and change into that directory.
  7. We install the dependencies with npm install.
  8. And we start the server with npm start.

It takes a while for the system to put together the server and start it; finally, it will print something like “server started at http://192.168.178.xx:9000”. Call that URL from your Oculus Browser (or Firefox Reality) and voilà:

Certainly, it is still a bit cumbersome to switch between Termux (which has to be launched via OculusTV) and the browser, and yes, nano is not the perfect tool to work on JavaScript and HTML, BUT: we have it working! A self-contained system running on the Oculus Quest for developing a-frame / webVR applications.
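By the way, cloning and building the whole a-frame repository is only necessary if you want to hack on a-frame itself. If you just want to serve your own app (an HTML file that pulls a-frame from its CDN), the Python we installed above should be enough. Here is a minimal sketch of such a static server; save it e.g. as serve.py in your project folder (the file name is just my choice):

import http.server
import socketserver

PORT = 8080  # any free port

# serve the current directory over HTTP so the Quest's browser can fetch it
with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    print("serving on port " + str(PORT))
    httpd.serve_forever()

Start it with python serve.py inside Termux and open http://localhost:8080 in the Oculus Browser; since the server runs on the Quest itself, localhost works.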


Other experiments:

  • I was able to connect a webcam via OTG to my phone and found an app in the Google Play Store that can actually stream video to the phone’s screen. But sideloading it to the Quest and starting it there doesn’t deliver a live stream. (The intention is to be able to look at my keyboard in VR.)
  • I found some webVR code that can use the webcam as a texture on an object. It works on my PC but not on the Quest (neither in the Oculus Browser nor in Firefox Reality, although the latter has an enable button for webcams).
  • I installed OVRVNC to log in to my Raspberry Pi, planning to connect a webcam to that, stream video from there and run a web server. However, on the Quest it doesn’t connect to my Pi; a VNC session from a PC works.

PocketPi!

PocketPi in its current incarnation.

TL;DR: a 14×11cm portable Raspberry Pi Zero W with a 3000mAh battery on an Adafruit PowerBoost 1000, a Hyperpixel 4-inch display, a keyboard and a 4-port USB hub. Scroll down for videos, STL files and the shopping list.

I always wanted to be able to carry a little computer around that reduces outside distractions: no email, ideally no browser, no messaging, etc. I actually enjoy the “silence” that I have with the Raspberry Pis. They allow me to focus on writing software for robots and the like.

Sometimes, you want to code on the go. My second shot in that direction was the StickPi, a little computer without a battery but with an e-paper display and a couple of buttons. I actually keep using it; it’s my most used self-designed computer.

My first shot hasn’t been published yet: the SlatePi. It is a 5-inch computer with a small keyboard and a 4000mAh battery, but it seems to be too large for everyday use.

So I built another computer: today’s PocketPi. It is the symbiosis of the first two: portable, but with a full keyboard and a battery included.

It all started with the idea of building a Pi Zero and a display into my favorite mobile keyboard:

I wanted to replace the touchpad with that display.

So at some point, I started to decompose that keyboard to understand how much space there actually is to fit in a Pi Zero W, a battery and the display.

The decomposed keyboard, with the left part of the PCB and the soft keys already cut off.

I quickly learned that including the driver for an ST7735 1.8-inch display is cumbersome, and that its 128×160 resolution might not actually be nice to really work with; it may be better suited to a RetroPie machine.

So I decided to use another display that I already had at home: a 3.5-inch display for the Pi (without HDMI) with a more decent 480×320 resolution.

[I also removed the original battery plug and the keyboard’s switch to reduce height.]

However, I didn’t want to increase the size of the device by the height of the diplay and thus decided that I don’t need the left (media-related) buttons of the keyboard:

Another goal was to ideally keep the height of the original keyboard, which is about 12mm; but looking at the standard headers of GPIO displays, it became clear that I needed to apply some tricks:

plug deprived display, pi with a plug

I soldered the rails of a DIP plug to the Pi and removed the whole plug from the display, cutting the pins down to a bare minimum.

great height reduction.

I also decided that I wanted a full-size HDMI out, so I bought shorter and more flexible cables than the ones I had at home and dismantled them:

decomposing an HDMI cable reduces weight and increases flexibility.

Finally, I also wanted to add a decent non-OTG USB port to the machine, as OTG adaptors simply SUCK.

Different USB hubs to select from.

I went with a little board that already included the OTG “logic” and had one USB port on the side, so the keyboard receiver could stay within the device’s case.

Before decomposing the 4-port USB hub.

During the journey, I decided to upgrade to the final Hyperpixel 4-inch display with a decent 800×480 resolution. The advantage is the only slightly increased size (4mm) compared to my 3.5-inch before, plus it can be switched off via GPIO. So this is the evolution of the displays over the project:

1.8 inch, 2.8 inch, 3.5 inch, 4.0 inch
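Being switchable via GPIO is handy: you can blank the screen in software while the Pi keeps running. A minimal sketch of how that toggling could look in Python; note that the backlight pin below is an assumption on my part (check Pimoroni’s pinout for your board revision, and the Hyperpixel driver may claim the pin itself):

import RPi.GPIO as GPIO

BACKLIGHT = 19  # assumed Hyperpixel backlight pin (BCM numbering), verify for your board

GPIO.setmode(GPIO.BCM)
GPIO.setup(BACKLIGHT, GPIO.OUT)

def set_screen(on):
    # drive the backlight pin high to light the panel, low to darken it
    GPIO.output(BACKLIGHT, GPIO.HIGH if on else GPIO.LOW)

set_screen(False)  # display dark, Pi keeps running in the background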

I also added the Adafruit PowerBoost 1000 and a rather flat 3000mAh LiPo to the mix. The Adafruit board is rather expensive (20-30€) and only supports a 1A charging current; I’d love to have a cheaper charger board with 2A at some point.

With the power source in place, it was time to wire it all up. Note that I added another switch to the keyboard so that I could switch off the keyboard but let the Pi run for background computations.

As you can see, I wired the USB hub directly to the Pi to save some more weight and space:

Wiring the OTG USB hub directly to the Pi.

Another trick to save height, now with the new Hyperpixel display (the screen’s plug is okay, so I needed a new Pi Zero with just short pins instead of the DIPs), is to solder the pins with the plastic spacer pushed on from behind and then remove the spacer plus the excess pin length on the back:

After the system was established, it was time to design the case and get a feeling for the locations of everything:

[earlier version with the 7 port USB hub and 3.5inch screen.]
[earlier version with the 3.5 inch screen and first attempt to make the display switchable.]

A later version then had spacing elements between the components to hold them in place. The HDMI output cable was also added at that point:

As mentioned before, the screen switch for the 3.5-inch didn’t work: you could switch the display off (cutting the 5V with the physical switch) but not on again, since the OS wouldn’t re-recognise the screen.

So the whole case design (Tinkercad) underwent a couple of iterations:

As you can see in iterations 7 & 8, the battery was rotated 90° to landscape and the HDMI cable now runs between the battery and the charger.

During these iterations the device grew a bit in dimensions, to a final 11×14cm. That’s 3cm more than the original keyboard’s case at 8×14cm. But that’s the price for a 4-inch screen with 800×480 resolution…

designing the holes for the keys is a pain in the butt…

So that’s the currently final layout:

Time to look at the functionality in a video:

FIRST BOOT UP!

As I wanted to take the PocketPi on my Easter holidays, I needed to trade the flat screw “lips” for bold, rounded ones to give the whole thing more stability:

bold screw holders, not nice, but survived the holidays / travel.

I installed Raspbian Stretch Lite and added the PIXEL desktop plus Geany as a Python IDE. I also configured two of the buttons to zoom the text in Geany; the right mouse key on the keyboard is really handy, the touchscreen works as a left mouse button, and I added multiple desktops to easily switch between apps. Here’s a video of the currently final device in operation:

LOVE the keyboard illumination.

As I have promised on the Facebook Raspberry Pi group, here are the STL files: https://www.thingiverse.com/thing:3623114

Here is the shopping list:

It’s roughly 130€ plus 3D print.

Woah! Featured in:
hackaday: https://hackaday.com/2019/05/16/pocketpi-is-exactly-what-it-sounds-like
hackster.io: https://blog.hackster.io/pocketpi-raspberry-pi-zero-w-keyboard-computer-is-an-iterative-success-981821d46c48
geeky-gadgets.com:
https://www.geeky-gadgets.com/pocketpi-raspberry-pi-pocket-computer-17-05-2019/

locomotion in VR

I just came across this video on Facebook:

and found the original on youtube:

The original video is from 2009; the computer vision breakthrough with deep learning came in 2012, and all the technologies for understanding human body posture and movement have been developed further since then. See for example this research from ETH Zurich predicting body movements:

Also, moving tiles are already in production at huge warehouses:

 

I was a backer of the Kickstarter campaign for the Omni by Virtuix, but didn’t get one, as the machine is so heavy that the developers decided to ship it only to professional companies, not to private persons, and not internationally. The Omni is a treadmill:

Treadmills try to keep you in place physically; they are not really good at sensing where you are. In contrast, the tiles concept above actually works around you, as it has to understand where you are going. It is a real interface between the virtual environment (it has to know what comes next) and your movements (and thus has to track you decently). It may be clumsy (what if the person is running?) and inefficient right now, but in my eyes it has all the potential to let you run as you like and feel the environment.

 

Improvements necessary:

  • It has to become smarter at seeing and predicting body movements.
  • It has to become faster, and maybe even smaller.
  • The plates possibly have to tilt to emulate environments better (going up a hill).

I would love to see this concept advancing. That said, I’m still in love with the bionic chair by Govert Flint as it would work in constrained spaces, i.e. for consumer use.

 

StickPi – a Raspberry Pi Zero W with GPIO buttons and an e-paper display

UPDATE: HACKADAY featured StickPi! So honored!

I always wanted to have a sturdy and rigid Raspberry Pi that is mobile and as small as possible. Recently I designed a Raspberry Pi 3 with a 5-inch display, a built-in keyboard and a battery / charging circuit. It’s nice, as it’s about the size of a DIN A5 paper sheet.

Then I came across these USB boards that you can pogo-pin to your Pi Zero, similar in design to what the guy at NODE did.

Pi Zero and USB power board

When working with a Pi Zero, I wanted to connect via VNC so I could run the PIXEL desktop and Geany on the Pi to develop and run software. When doing so, you quickly want the Pi to display the WiFi network and IP address it’s connected to. I chose an e-paper display from Waveshare (do yourself a favor and buy the black/white display only, not the one with three colors, as those don’t support partial updates, i.e. they refresh very slowly!), bought such a USB power board, and attached the Zero to a battery:

Pi Zero plus epaper plus USB board and battery
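Grabbing those two values in Python is simple; here is a minimal sketch of the data-gathering part (the drawing itself is display-specific; Waveshare’s Python demos take a PIL image onto which you’d render this string):

import subprocess

def wifi_ssid():
    # iwgetid -r prints just the SSID of the connected network
    try:
        return subprocess.check_output(["iwgetid", "-r"]).decode().strip()
    except subprocess.CalledProcessError:
        return "no wifi"

def ip_address():
    # hostname -I lists the Pi's addresses; take the first one
    parts = subprocess.check_output(["hostname", "-I"]).decode().split()
    return parts[0] if parts else "no ip"

print(wifi_ssid() + " @ " + ip_address())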

Once you have this kind of display, you learn that the physical interaction with the machine is one-way: consumption. And immediately you think that it would be nice to have interaction methods to select a WiFi network, shut down the machine, select operation modes, etc.

So I sat down and designed a PCB (not printed, but wires on a protoboard of a pretty ideal size) to add some buttons for directions as well as two “shoulder buttons”:

three pieces 1

To keep the design compact, and since the e-paper display would consume the full GPIO header, I soldered the pins on the Pi to go through the PCB:

three pieces 2

I chose a wiring that can use the buttons without additional resistors (the Pi’s internal pull-ups do the job), so the whole system works with only input pins plus GND; see the sketch further down:

button gpio wiring

This led to quite a compact design altogether; see also the first version of the PETG case:

The case itself is designed in Tinkercad and I made it available on Thingiverse:

StickPi TinkerCAD

Pi in case

There is still room for improvement on the case, but right now I’m happy enough with it to actually keep it and write software. Querying the keys is really easy:

 

button read code
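The screenshot above shows the idea; here is a minimal sketch of that kind of polling loop (the pin numbers are examples, adjust them to your wiring):

import time
import RPi.GPIO as GPIO

# example BCM pins; each button connects its pin to GND, and the Pi's
# internal pull-ups replace external resistors, so a pressed key reads LOW
BUTTONS = {"up": 5, "down": 6, "left": 13, "right": 26, "l": 20, "r": 21}

GPIO.setmode(GPIO.BCM)
for pin in BUTTONS.values():
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        pressed = [name for name, pin in BUTTONS.items() if GPIO.input(pin) == GPIO.LOW]
        if pressed:
            print("pressed: " + ", ".join(pressed))
        time.sleep(0.05)  # polling interval, also acts as crude debouncing
except KeyboardInterrupt:
    GPIO.cleanup()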

Finally, see a little interaction of the buttons and the screen here; the software simply queries the buttons continually and displays values on the e-paper:

(On the top left, you can see the Raspberry Pi 3 case with the integrated keyboard; I’m still not happy with that design, which is why I haven’t published it yet.)

I may venture into actually designing (EAGLE files) and ordering PCBs for the buttons to make the whole thing more “defined” and better fitting to the case.

quick 3D print of a toy helicopter’s rotor blades

 

The rotorblades couldn’t resist my kids, so there was the job to 3d print them. I did this in 40min in Tinkercad, I certainly did the connection of the blades in a different manner than the original, you see that the blades are connected even around the shaft. I also added those blue supports that should give additional strength.

Anet A8 printing the bad boy in grey, similar to the original:


You can see there is some infill structure; I hope the rotors will last better than the originals. I also made them 3mm thick, which is about double the original, though PLA is also a bit more brittle. The two together:

 

And the new blades on the helicopter (about a 40-minute print, via OctoPrint):


In the back you can see the rendered G-code file in Simplify3D. A nice one-and-a-half-hour repair!

 

 

Trashbot upper body and neck servos revived

I recently split my Trashbot in half to finally get a grip on the walking patterns of the lower part, as I changed the controller from an Arduino Nano to a Raspberry Pi. Here’s the upper part:

Trashbot upper body incl neck & head

With the recent progress of running the Oculus Rift from a Pi 3 and the experiments streaming video from the Blackbird 2 stereo cam, I thought it was a great idea to attach the camera to the upper part of Trashbot and send the Oculus head orientation to the neck-controlling Arduino Pro Mini.
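On the Pi side, the plan could look roughly like this. A sketch only: how you read the Rift’s orientation depends on your tracking stack (OpenHMD etc.), so get_head_orientation() is a stub here, and the serial port is an example:

import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # example port and baud rate

def get_head_orientation():
    # stub: return (yaw, pitch) in degrees from your Rift tracking code
    return 0.0, 0.0

while True:
    yaw, pitch = get_head_orientation()
    # simple one-line CSV protocol that the Arduino can parse into servo angles
    ser.write("{:.1f},{:.1f}\n".format(yaw, pitch).encode())
    time.sleep(0.02)  # roughly 50 Hz updates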

The first step is to get the Arduino running with the PC again. But oh, that shitty servo:

So, before diving deeper into head-synchronous robotic telepresence, I’ll need to fix that bugger…

Wireless streaming stereo video from an RC rover to the Oculus Pi

The next iteration was of course to simply try out the wireless transmission of video. And this doesn’t make sense if you still have the camera attached to yourself, so I “augmented” another project of mine, the rover:

Arduino / RC mixed autonomy Rover

I also built a little self-sufficient stereo video transmission pack, including the Blackbird 2 camera:

Battery, video sender, stereo camera

And attached the two together. I love modular designs where you can recombine your projects easily; these two simply have two batteries etc. Here’s the full setup:

Oculus Pi wireless streaming video rover setup

Here’s the video:

(Don’t know what the heck Pinnacle Studio was thinking to put the black frame around the video when exporting.)

Strangely, we seem to get some kind of “interlace distortion” when the car is moving too quickly. I don’t quite know whether that is an artefact of the analog video transmission or of the digitisation process itself:

interlace artifacts

When playing with this, you intuitively move your head to look around. So I could read out the head movements and send them to the Arduino on the rover to actually move the camera. I’d also need to add another servo to the camera mount to be able to move it along at least two axes. Let’s see…

 

2 3D cams & 1 2D cam on the DK2 @ Pi3

Cryptic title for a blog post, I know. But I urgently needed to try out my cameras with the Pi 3 for use with the Oculus Rift DK2 (video below). My last attempt to stream video locally into the Rift was successful, so I wanted more. The initial single 2D camera was a Logitech C525 webcam (for $60):

logitech hd webcam c525

I have two stereoscopic cameras. The Minoru is “kind of” a cheap camera at about 70€, but then again not, because it’s just 2× 640×480 resolution… I stripped the camera down to reduce its size, weight and volume so I could actually use it on the Trashbot, since I know it is supported by Raspbian as a camera.

Minoru Stereoscopic Cam, around 70€

I had also already experimented with the Blackbird 2, an analog camera for streaming video from drones to video goggles. However, that attempt was not really successful, since the camera software on the PC (!) was laggy, and I suspected the EasyCap video capture card (10-20€).

BlackBird 2, around 180 USD

But let’s see, what happens when I run it on the Pi 3:

Learning: the 190€ combo beats the other two BY FAR in experience.

PS: In the video, you can see that I attached the cameras to a different Pi than the one attached to the Oculus. This is because the Oculus Pi has display settings tuned towards the goggles, and I need to invest time to make the config switchable via software to actually change between the goggles and a real external display.

Oculus Pi untethered, pt. 2: a camera!

Last time, I was able to get the Oculus Rift DK2 to run on the Raspberry Pi 3, including the head tracking. However, the first interactions showed that it’s cumbersome to work with the desktop (since it’s not split across the two eyes but really uses the LCD as one screen) and also to use the keyboard.

Also, in the context of making the Oculus mobile and untethered, it is necessary to have a camera onboard, at least until I get the Xtion to work. Interestingly, it is not too easy to find software that can simply display a video stream from a local webcam; most blogs just describe how to stream from a remote webcam or how to make a local USB cam accessible via some web service.

My last attempt to get a stereo analog cam to work was not really cool, since the latencies on a PC plus some weird display software shipped as an .exe were not the ideal setting to really improve things.

So after some research, I found a git repository for streaming a local webcam to a dedicated view, independent of the desktop. You need to install CMake and libbsd-dev to make it run:

sudo apt-get install libbsd-dev
sudo apt-get install cmake

Also, I was able to mount the camera in a nice position without additional mechanics:

The cam fits nicely between the cables of the Rift, and there’s enough space on the straps to route the cables.

Here’s the video walk through with some live feed to see the latency:

Next, I may try to position the video on one eye only, or even duplicate the stream to both eyes. Or I may try to use the Minoru stereo cam that I worked on last year.

 

 

Getting Xtion / Kinect to run on Pi 3, pt 2

So after yesterday’s compile of the PointCloudLibrary that worked although I couldn’t install some of the files before, I compiled today OpenNI although I was not able to install g++-multilib (following Larry’s tutorial).

Well, the errors seemed to be mostly about generating documentation, but I don’t care too much about that, since Google / StackExchange et al. are the best documentation…

Also, the installation worked (within seconds).

Next up was the installation of the Kinect driver / software package. Then it hailed errors. So it seems that g++-multilib is necessary for the SensorKinect package to compile. Getting it will be an adventure, since it doesn’t seem to be available in Raspbian.

Interestingly, I found an older tutorial from 2013 that calls for installing g++, not g++-multilib. However, trying to install that just led to the message that I already had the latest version.

In another old blog article, I found another version of this tutorial, with an error similar to the one I got when calling make in the Build directory. And luckily, there was a reply by another visitor from May this year saying that there are two files to change slightly to be able to compile it.

I don’t quite understand what exactly has been changed there, but I’ll try out tomorrow to see whether this will work.

So to sum up:

  • swap file extended
  • PCL compiled
  • OpenNI compiled
  • still to do: compile SensorKinect