Category Archives: virtual reality

All things related to VR, Oculus Rift, Unity 3d, etc.

Developing for the Oculus Quest on the Oculus Quest.

Shortcut: for instructions on how to run a server on the Quest for developing a-frame webVR apps that run on the Quest, scroll down to the how-to.

Background: Reading William Gibson’s Neuromancer as a child, I was always intrigued by the question of how to get into, stay in and work in “the matrix” / cyberspace. Today, VR is still not where Gibson conceptualised it (we don’t have meaningful and conventionalised data visualisation and navigation in 3D yet), but we’re making baby steps. When I first saw this video in 2014, I fell in love with the idea of writing live-updating code in VR:

Inspecting the code today, it isn’t built in a-frame or webVR as far as I understand, but in three.js, which (afaik) underlies webVR. It’s not hosted anywhere anymore, so you can’t try it out on your headset.

Having owned the DK2 and the CV1, the Oculus Quest is finally the device of my dreams, as it doesn’t depend on external sensor setups, cables or a PC. For the moment, I don’t care about the resolution and the GPU. I want to explore the idea of having a fully fledged computer on your body to work with. It’s really a new device category. Right now, content and software are developed outside the HMD and then tested on it (think Unity 3D etc.). Even for phones this is the case: you can’t develop for Android on Android without anything else (at least last time I checked). From my perspective (I grew up with the Commodore C64), the computer you consume on should be the computer you produce on.

The Quest is a computer: it has a huge screen, an OS, an internet connection; it runs Android and is Bluetooth-capable. I came across a-frame and think that, for the moment, this is the go-to development framework compared to Unity 3D et al. However, to develop for a-frame, you also need a web server (although systems like glitch.me and pencode allow you to work in the browser). Ideally, this server itself should therefore also run on the Quest (my ideal is a self-contained system).

Here is a little how-to:

  1. [This post assumes that you have enabled developer mode and are able to sideload via ADB or SideQuest.]
  2. Keyboard: I successfully paired a Bluetooth keyboard (using a sideloaded BT lister app; this one works on the Quest, others didn’t). It works in the Oculus Browser but not in the (sideloaded) Firefox Reality browser (current beta version: 1.2.3; use *oculusvr-arm64-release-signed-aligned.apk for the Oculus Quest and start it from “unknown sources”). It also works in Termux (see below).
  3. [Optional: I can successfully work in Glitch in the Oculus Quest browser (here’s something I composed from two other glitches plus some of my own code: https://glitch.com/~gallery-appear-disappear; use the grip button on all the objects, the triceratops will produce boxes with physics, the sphere will change the environment, and you can resize the picture using both controllers). You can leave out this step, but it’s nice to see that you can work on code in a VR browser and experience it in the same browser.]
  4. I successfully sideloaded Termux (it “contains” / “makes accessible” Linux in Android and interfaces with the hardware, read here) and can run it in the OculusTV environment (a bigger screen size would be nice, though).
  5. In Termux (keep it up to date by issuing apt-get update and apt-get upgrade), we should install:
    1. pkg install nano (command-line text editor)
    2. pkg install python (needed in some cases)
    3. pkg install nodejs (Node.js and npm)
    4. pkg install git (for pulling git repositories)
  6. Next, we install a-frame by issuing git clone https://github.com/aframevr/aframe.git and changing into that directory
  7. We install the dependencies with npm install
  8. And we start the dev server with npm start

It takes a while for the system to put the server together and start it; finally, it will print something like “server started at http://192.168.178.xx:9000”. Call that URL from your Oculus Browser (or Firefox Reality) and voilà:

Certainly, it is still a bit cumbersome to switch between Termux (which has to be launched via OculusTV) and the browser, and yes, nano is not the perfect tool for working on JavaScript and HTML, BUT: we have it working! A self-contained system running on the Oculus Quest for developing a-frame / webVR applications.
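To work on your own content, you can use nano to drop a minimal scene into the cloned repo and reload it in the browser. Here is a sketch of such a scene (the file name, the component name and the version pinned in the script URL are my own choices, not part of the repo); a-frame’s oculus-touch-controls emit gripdown events that you can react to:

<!-- myscene.html: a minimal a-frame scene (sketch) -->
<html>
  <head>
    <script src="https://aframe.io/releases/0.9.2/aframe.min.js"></script>
    <script>
      // hypothetical component: recolour the box whenever the grip button is pressed
      AFRAME.registerComponent('grip-recolour', {
        init: function () {
          var box = document.querySelector('#box');
          this.el.addEventListener('gripdown', function () {
            var colour = '#' + Math.floor(Math.random() * 0xffffff).toString(16).padStart(6, '0');
            box.setAttribute('color', colour);
          });
        }
      });
    </script>
  </head>
  <body>
    <a-scene>
      <a-box id="box" position="0 1.5 -2" color="#4CC3D9"></a-box>
      <a-entity oculus-touch-controls="hand: right" grip-recolour></a-entity>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>

Save it somewhere in the cloned directory and open it via the server URL above; the exact path depends on how the dev server maps the directory.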


Other experiments:

  • I was able to connect a webcam via OTG to my phone and found an app in the Google Play Store that can actually stream video to the phone’s screen. But after sideloading it to the Quest and starting it there, it doesn’t deliver a live stream. (The intention is to be able to look at my keyboard in VR.)
  • I found some webVR code that can use the webcam as a texture on an object. It works on my PC but not on the Quest (neither in the Oculus Browser nor in Firefox Reality, although the latter has an enable switch for webcams). See the sketch after this list.
  • I installed OVRVNC to log in to my Raspberry Pi, connect a webcam to that, stream video from there and run a webserver. However, on the Quest it doesn’t connect to my Pi; a VNC session from a PC works.
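For reference, the basic pattern such webcam-texture code uses (a sketch from memory under the assumption of a three.js scene, camera and renderer already being set up; not the exact code I found): grab the camera with getUserMedia, feed it into a video element and wrap that in a THREE.VideoTexture:

// sketch: webcam feed as a texture on a cube (assumes scene, camera, renderer exist)
var video = document.createElement('video');
video.autoplay = true;
video.muted = true;
navigator.mediaDevices.getUserMedia({ video: true })
  .then(function (stream) { video.srcObject = stream; video.play(); })
  .catch(function (err) { console.error('webcam access failed:', err); });

var texture = new THREE.VideoTexture(video);
var cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ map: texture })
);
scene.add(cube);

Note that getUserMedia only works in a secure context (HTTPS or localhost), which may well be part of the problem on the Quest.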

locomotion in VR

I just came across this video on Facebook:

and found the original on youtube:

The original video is from 2009; the computer vision breakthrough through deep learning came in 2012, and all the technologies for understanding human body posture and movement have been developed further since then. See for example this research from ETH Zurich predicting body movements:

Also, moving tiles are already in production at huge warehouses:

I was a supporter in the Kickstarter campaign of the Omni by Virtuix, but didn’t get one, as the machine is so heavy that the developers decided to ship it only to professional companies, not to private persons, and not internationally. The Omni is a treadmill:

Treadmills try to keep you in place physically; they are not really good at sensing where you are. In contrast, the tiles concept above actually works around you, as it has to understand where you are going. It is a real interface between the virtual environment (as it has to know what comes next) and your movements (and thus has to track you decently). It may be clumsy (what if the person is running?) and inefficient right now, but in my eyes it has all the potential to make you feel like you are running freely and sensing the environment.

Improvements necessary:

  • It has to become smarter at seeing and predicting body movements.
  • It has to become faster, and maybe then even smaller.
  • Possibly the plates have to tilt in different directions to emulate environments better (e.g. going up a hill).

I would love to see this concept advancing. That said, I’m still in love with the bionic chair by Govert Flint, as it would work in constrained spaces, i.e. be suitable for consumer use.

Wireless streaming stereo video from an RC rover to the Oculus Pi

The next iteration was of course to simply try out wireless transmission of the video. And this doesn’t make sense if you still have the camera attached to yourself, so I “augmented” another project of mine, the rover:

Arduino / RC mixed autonomy Rover

I also built a little self-sufficient stereo video transmission pack, including the Blackbird 2 camera:

Battery, video sender, stereo camera

And attached the two. I love modular designs where you can recombine your projects easily, so this combination simply carries two batteries, etc. Here’s the full setup:

Oculus Pi wireless streaming video rover setup

Here’s the video:

(I don’t know what the heck Pinnacle Studio was thinking, putting that black frame around the video when exporting.)

Strangely, we seem to get some kind of “interlace distortion” when the car is moving too quickly. I don’t quite know whether that is an artefact of the analog video transmission or of the digitisation process itself:

interlace artifacts

When playing with this, you intuitively move your head to look around. So I could read out the head movements and send them to the Arduino on the rover to actually move the camera. I’d also need to add another servo to the camera mount to be able to move it along at least two axes. Let’s see…


2 3D cams & 1 2D cam on the DK2 @ Pi3

Cryptic title for a blog post, I know. But I urgently needed to try out my cameras with the Pi3 for use on the Oculus Rift DK2 (video below). My last attempt to stream video locally into the Rift was successful, so I wanted more. The initial single 2D camera was a Logitech webcam C525 (for 60$):

logitech hd webcam c525

I have two stereoscopic cameras. The Minoru, which is “kind of” a cheap camera at about 70€, but then again not, because it offers just 2× 640×480 resolution… I stripped the camera down to reduce its size, weight and volume so I could actually use it on the Trashbot, since I know that it is supported as a camera by Raspbian.

Minoru Stereoscopic Cam, around 70€

I have also already experimented with the Blackbird 2, an analog camera for streaming video from drones to video goggles. However, that attempt was not really successful, since the camera software on the PC (!) was laggy, and I suspected it might be due to the EasyCap video capture card (10-20€).

BlackBird 2, around 180 USD

But let’s see what happens when I run it on the Pi 3:

Learning: the 190€ combo beats the other two BY FAR in experience.

PS: In the video, you can see that I attached the cameras to a different Pi than the one attached to the Oculus. This is because the Oculus Pi has display settings tuned for the goggles, and I need to invest time to make the config switchable via software in order to change between the goggles and a real external display.
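A pragmatic sketch of how that switching could work (the file names are hypothetical, my own convention; on the Pi, the display settings live in /boot/config.txt): keep two prepared configs and copy the desired one into place before rebooting:

# hypothetical helper: swap between a DK2-tuned config and a normal HDMI config
sudo cp /boot/config-dk2.txt /boot/config.txt    # settings tuned for the goggles
# sudo cp /boot/config-hdmi.txt /boot/config.txt # ...or settings for a real external display
sudo reboot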

Oculus Pi untethered, pt. 2: a camera!

Last time, I was able to get the Oculus Rift DK2 to run on the Raspberry Pi 3, including the head tracking. However, the first interactions showed that it’s cumbersome to work with the desktop (since it’s not split across the two eyes but really uses the LCD as one screen) and also to use the keyboard.

Also, in the context of making the Oculus mobile and untethered, it is necessary to have a camera onboard, at least until I get the Xtion to work. Interestingly, it is not too easy to find software that can simply display a video stream from a local webcam; most blogs just describe how to stream from a remote webcam or how to make a local USB cam accessible via some web service.

My last attempt to get a stereo analog cam to work was not really satisfying, since the latencies on a PC, plus some weird display software shipped as an .exe, were not the ideal setting to really improve things.

So after some research, I found a git repository for streaming a local webcam to a dedicated view, independent of the desktop. You need to install CMake and libbsd-dev to make it run:

sudo apt-get install libbsd-dev
sudo apt-get install cmake
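
After that, the usual CMake build sequence should produce the viewer (a sketch; the repository URL and directory name depend on the actual repo, so they are placeholders here):

git clone <repository-url>    # the webcam-viewer repo (link in the post)
cd <repository-directory>
mkdir build && cd build
cmake ..
make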

Also, I was able to mount the camera in a nice position without additional mechanics:

The cam fits nicely along the cables of the Rift, and there is enough space on the straps to route the cables.

Here’s the video walk through with some live feed to see the latency:

Next, I may try to either position the video on one eye only, or even duplicate the stream to both eyes. Or I may try to use the Minoru stereo cam that I was working on last year.
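As a quick browser-based sketch of the both-eyes idea (my own idea of an alternative route, not the viewer from the repo above): grab the camera once and assign the same MediaStream to two side-by-side video elements, roughly matching the DK2’s per-eye layout:

<!-- sketch: mirror one webcam stream onto both halves of the screen -->
<html>
  <body style="margin: 0; display: flex;">
    <video id="left" autoplay muted style="width: 50%;"></video>
    <video id="right" autoplay muted style="width: 50%;"></video>
    <script>
      navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
        // the same MediaStream object can feed both video elements
        document.getElementById('left').srcObject = stream;
        document.getElementById('right').srcObject = stream;
      });
    </script>
  </body>
</html>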


Oculus Rift DK2 on a Raspberry Pi3: towards untethered VR.

I’ve long been dreaming of connecting these two important technologies and running them in a way that I actually understand. I’m not good at Android programming (for those cardboard VRs) and not really good at Unity (although it would be fun to dive deeper there), and somehow my driver situation with the DK2 and Windows 10 has deteriorated.

There have been earlier attempts to run the Oculus on a Pi, but in my eyes the latest iteration I found is the best fit for me (including Python bindings and 3D libraries).

So recently, I came across Wayne Keenan’s blog; he actually did exactly what I’m dreaming of, namely running the DK2 controlled by the Pi3. I’m not so much interested in maximising the complexity of geometry in VR, but really more in the interaction design and in what untethered VR (or even AR) can actually feel like.

His GitHub recipe really works nicely, and so the installation was a matter of 30 minutes:

Pi3, DK2 and mini keyboard


Blackbird 2, 3D FPV camera and the Oculus Rift DK2, pt. 1

Slowly inching towards the Blackbird2 on the Oculus Rift DK2.

Right now, I have the ancient Logilink video capture card running on Windows 10, and I found software that is old enough to be compatible with it but new enough to run on Windows 10.

It installed DirectPlay on Windows 10, so that’s the tribute to old tech, I guess.

So I get the video on my desktop now, having updated to Nvidia driver 358.87 for my GTX 980 and the latest Oculus Rift driver (v8.0).

Here’s what we get:

blackbird video on PC

As you can see, it’s somehow black and white. The capture software says it sees 525 lines (PAL60) or 625 lines (PAL_N); with the latter setting I also get some sort of colour, but it changes rapidly and is mostly false…

I can also put the video on full screen, but it seems that with the latest drivers for the Oculus I can’t use the DK2 as a second output display, so I don’t know how to get the camera video into the DK2. Clearly, more research is needed…

Blackbird 2 FPV 3D camera unboxing

I’ve been dreaming of building a remote-controlled 3D camera rig for the Oculus Rift for over a year now. Recently, I came across the Blackbird 3D First Person View camera here in a forum. And then I learned that the producer of this nice thing had actually just launched version 2, which is natively capable of producing images suitable for the Oculus Rift by rotating the videos, glueing them next to each other and distorting them adequately. I had to buy it right away. That was about three weeks ago, and today I finally received it.

I thought I’d just share the unboxing with you. I hope I’ll have time to play with it (the rest of the hardware is on my desk already: an analog display, wireless transmission, USB video digitisation and a 2-DoF servo gimbal to be remote-controlled). Yay!

You can see some similar projects here.

make munich 14, pt 2: oculus rift and camera

one of the projects i would also like to tinker with some day is a camera mount that is moved by three servos along three axes. the servos are controlled by the head orientation data of the oculus rift, and the camera’s footage arrives in the rift.

in the project shown here, dynamixel servos were used, about 30€ apiece, which are networked serially, reducing the complexity. however, only a 2d camera was mounted, or rather only one camera chip instead of two, so presumably no real 3d impression could arise in the rift.

moreover, a relatively slow laptop sat in between, so the latencies probably weren’t really convincing. a pity, actually, because in my opinion latencies are considerably more important for a vivid impression in virtual reality.

here’s the video:


this person here has pursued it somewhat more seriously, just for comparison:


concepts for moving in virtual space while staying safe in the real world.

we all know the omni by virtuix that allows for walking / running in virtual space while remaining in one spot in the real world. i’ve backed this project (i hope they will deliver soon) as the user experience seems conceptually well executed, using special shoes that can be tracked by the floor:

but just today i came across a “chair” that actually frees your body from the floor, i.e. it allows for moving more freely in the virtual space while the physical device is actually home-compatible compared to “lawn mower man” kind of interfaces. govert flint did this as a graduation project at the design school in eindhoven:

it seems that it has different ways of collecting data; i’ve read about accelerometers, and probably even potentiometers would do the job. then, they added some software to actually recognise gestures. i think we should go further than that and really use the whole body posture. that way, we could use the kinect to record the posture and simplify the design of the chair. it would be great to try that out (i do love the possibility to swing left and right, that’s going beyond a bar stool). actually, that guy should do a kickstarter project.

btw: the original lawnmower man interface is akin to a gyroscope:

but that’s clearly not consumer compatible. so i’m sold on that chair thingy.

then there’s the zürcher hochschule der künste, where the user lies on a “bed” and uses wings attached to the arms to fly like a bird. also quite inspiring, but again, in terms of practicality, not for me at home: