Slowly inching towards the Blackbird2 on the Oculus Rift DK2.
Right now, I have the ancient Logilink video capture card running on Windows 10 and found software that is old enough to be compatible with it, yet young enough to run on Windows 10.
It installed DirectPlay on Windows 10, so that’s the tribute to old tech, I guess.
So I get the video on my desktop now, and I have updated to Nvidia driver 358.87 for my GTX 980 and the latest Oculus Rift driver (v8.0).
Here’s what we get:
As you can see, it’s somehow black and white. The capture software reports 525 lines (PAL60) or 625 lines (PAL_N); with the latter setting I also get some sort of color, but it changes rapidly and is mostly false…
I can also put the video on full screen, but it seems that with the latest Oculus drivers I can’t use the DK2 as a second output display, so I don’t know how to get the camera video into the DK2. Clearly, more research is needed…
I’ve been dreaming of building a remote-controlled 3D camera rig for the Oculus Rift for over a year now. Recently, I came across the Blackbird 3D First Person View camera in a forum. Then I learned that the producer of this nice thing had just launched version 2, which is natively capable of producing images suitable for the Oculus Rift by rotating the two videos, gluing them next to each other and distorting them adequately. I had to buy it right away. That was about three weeks ago, and today I finally received it.
I thought I’d just share the unboxing of it with you; I hope I’ll have time to play with it (the rest of the hardware is on my desk already: an analog display, wireless transmission, USB video digitisation and a 2-DoF servo gimbal to be remote controlled). Yay!
one of the projects i’d also like to tinker with some day is a camera mount that is moved by three servos along three axes. the servos are driven by the head orientation data of the oculus rift, and the camera’s feed is displayed in the rift.
in the project at hand, dynamixel servos were used, about €30 apiece, which are daisy-chained over a serial bus, which reduces complexity. however, only a 2d camera was mounted, i.e. just one camera chip instead of two, so presumably no real 3d impression could arise in the rift.
on top of that, a relatively slow laptop sat in between, so the latencies probably weren’t really convincing. a pity, actually, because latency is in my opinion essential for a lively impression in virtual reality.
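just to make the mapping concrete, here’s a minimal sketch of how rift yaw data could be pushed to such a servo, assuming an AX-12-class dynamixel on a serial port (the register address, the ±150° range, the port name and baud rate are my assumptions, not details from the project):

```csharp
using System;
using System.IO.Ports;

// Minimal sketch: map a head yaw angle (degrees, from the rift's tracker)
// to a dynamixel AX-12 goal position and send it over the serial bus.
// Protocol 1.0 packet: 0xFF 0xFF, id, length, instruction, params..., checksum.
class HeadToServo
{
    const byte InstructionWrite = 0x03;
    const byte GoalPositionAddr = 0x1E; // goal position register (assumption: AX-12-class servo)

    static SerialPort port = new SerialPort("COM3", 1000000); // port name and baud rate are assumptions

    static void SetYaw(byte servoId, float yawDegrees)
    {
        // AX-12: positions 0..1023 cover roughly -150..+150 degrees.
        float clamped = Math.Max(-150f, Math.Min(150f, yawDegrees));
        int position = (int)((clamped + 150f) / 300f * 1023f);

        byte posLo = (byte)(position & 0xFF);
        byte posHi = (byte)((position >> 8) & 0xFF);
        byte length = 5; // instruction + address + 2 data bytes + checksum
        byte checksum = (byte)~(servoId + length + InstructionWrite + GoalPositionAddr + posLo + posHi);

        byte[] packet = { 0xFF, 0xFF, servoId, length, InstructionWrite, GoalPositionAddr, posLo, posHi, checksum };
        port.Write(packet, 0, packet.Length);
    }

    static void Main()
    {
        port.Open();
        SetYaw(1, 30f); // servo id 1: turn the camera 30° to the right
    }
}
```

the nice thing about the serial daisy-chain is that pitch and roll would just be more servo ids on the same port.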
here’s the video:
this guy here went about it a bit more seriously, just for comparison:
we all know the omni by virtuix that allows for walking / running in the virtual space while remaining in one spot in the real world. i’ve backed this project (i hope they will deliver soon), as it seems that the user experience is conceptually well executed through special shoes that can be tracked by the floor:
but just today i came across a “chair” that actually frees your body from the floor, i.e. it allows for moving more freely in the virtual space while the physical device is actually home-compatible compared to “lawn mower man” kind of interfaces. govert flint did this as a graduation project at the design school in eindhoven:
it seems that it has different ways of collecting data; i’ve read about accelerometers, and probably even potentiometers would do the job. then, they added some software to actually recognise gestures. i think we should go further than that and really use the whole body posture. that way, we could use the kinect to record the posture and simplify the design of the chair; i’ve sketched the idea below. it would be great to try that out (i do love the possibility to swing left and right, that’s going beyond a bar stool). actually, that guy should do a kickstarter project.
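to make the posture idea a bit more concrete, here’s a toy sketch of how shoulder-versus-hip lean could drive navigation. the joint positions would come from the kinect’s skeleton tracker; the names, thresholds and scaling are purely my own guesses:

```csharp
using System;

// Toy sketch: turn upper-body lean into a movement vector, the way the
// chair (or a kinect skeleton) could drive navigation. Joint positions
// are assumed to come from some skeleton tracker; the rest is arithmetic.
struct Vec3 { public float X, Y, Z; public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; } }

class PostureNavigation
{
    // Returns (strafe, forward) in -1..1, from how far the shoulders
    // lean away from the hips.
    static (float strafe, float forward) LeanToMotion(Vec3 shoulderCenter, Vec3 hipCenter)
    {
        float dx = shoulderCenter.X - hipCenter.X; // sideways lean: strafe / steer
        float dz = shoulderCenter.Z - hipCenter.Z; // forward lean: accelerate
        const float maxLeanMeters = 0.25f;         // assumption: full deflection at 25 cm of lean
        const float deadZoneMeters = 0.03f;        // ignore tiny posture noise

        float Map(float lean) =>
            Math.Abs(lean) < deadZoneMeters ? 0f
            : Math.Max(-1f, Math.Min(1f, lean / maxLeanMeters));

        return (Map(dx), Map(dz));
    }

    static void Main()
    {
        // Example: shoulders 10 cm in front of and 5 cm to the right of the hips.
        var (strafe, forward) = LeanToMotion(new Vec3(0.05f, 1.4f, 0.10f), new Vec3(0f, 1.0f, 0f));
        Console.WriteLine($"strafe={strafe:F2} forward={forward:F2}"); // strafe=0.20 forward=0.40
    }
}
```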
btw: the original lawnmower man interface is akin to a gyroscope:
but that’s clearly not consumer compatible. so i’m sold on that chair thingy.
then there’s the züricher hochschule für künste, who let the user lie on a “bed” and use wings attached to the arms to fly like a bird. also quite inspiring, but again, in terms of practicality, not for me at home:
okay, so i had some more time and actually tried to find out how to bring all the little hardware pieces together to form a working system. for the moment, it seems to me that the game engine unity3d is the best way to integrate most of the stuff i have, although there are some nice things with point clouds i’d like to do with the asus xtion sensor (a derivative of the primesense depth technology that also powers microsoft’s kinect).
but let’s start small, otherwise there’s just flat chaos. first of all, i’d like to show you what the current problem is and then describe how i got here. somehow i can’t find any information on the internet, so i think it’s worthwhile to write it up and get some people’s minds around it.
here’s what i built in about ten iterations with unity3d, leap motion and the oculus rift:
here’s me manipulating the cubes with the leap motion on the table and the hmd on my head. i had to remove the leap motion from the hmd (see my last post taping it on) during experimentation, as at first i didn’t get any tracking and later the hands were somehow mirrored, so i decided to simplify the setup and get it running on the table first.
here are two things i learned in unity:
in the box (a prefab from the leap motion package), the rift camera has to be IN FRONT of the leap motion controller icon. the screenshot above shows the correct positioning from the top view (the leap motion icon is in the middle, to the right of the lamp icon). it may sound trivial, but it took me two hours with many different settings (the leap motion config menu also allows for changing orientations, etc.).
also, the z-axis must not be mirrored. secondly, i scaled the sandbox by a factor of 40; this allows you to move freely in the box, and your hands seem roughly in proportion to the cubes (default size from unity).
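for reference, here are the same two lessons condensed into a little setup script (just a sketch: the object names are whatever your scene hierarchy calls the leap prefab and the rift camera rig, and the exact offset is from my scene):

```csharp
using UnityEngine;

// Sketch of the two lessons above, applied from code instead of the editor.
// "LeapController" and "OVRCameraRig" are placeholders for whatever your
// scene actually names those objects.
public class SandboxSetup : MonoBehaviour
{
    void Start()
    {
        Transform leapController = GameObject.Find("LeapController").transform; // the leap motion prefab
        Transform riftCamera = GameObject.Find("OVRCameraRig").transform;       // the rift camera rig

        // Lesson 1: the rift camera must sit IN FRONT of the leap controller
        // icon along z, with z NOT mirrored. The offset here is illustrative.
        riftCamera.position = leapController.position + new Vector3(0f, 0.3f, 0.5f);
        riftCamera.rotation = Quaternion.identity;

        // Lesson 2: scale the whole sandbox by 40 so your hands appear roughly
        // in proportion to unity's default-size cubes.
        transform.localScale = Vector3.one * 40f;
    }
}
```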
one thing i still don’t know: how can i actually get a screen recording while i’m playing? this is unity’s output (ok, ok i did 11 iterations):
“direct to rift” works for me with my hmd, but on the pc screen there’s just a black window (as in my second video above). without “direct to rift” i see a live window, but the hmd isn’t showing anything. of course, the hmd is then used as a second screen on the pc, and i could move the window to that screen, but then it won’t be full screen. the second problem is that the software captures the mouse for controlling the navigation in the virtual space, so i can’t move the window as soon as the animation actually starts… any ideas? i haven’t found anything in unity’s “build settings”.
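one thing i may try for the recording problem: dumping numbered screenshots from inside unity while playing and stitching them into a video afterwards. a sketch using the unity-4.x-era Application.CaptureScreenshot call (i haven’t verified that it grabs the “direct to rift” output, and the frame rate will certainly suffer):

```csharp
using UnityEngine;

// Sketch: dump a numbered screenshot every frame while capture is on.
// The PNGs can be stitched into a video afterwards (e.g. with ffmpeg).
public class FrameDumper : MonoBehaviour
{
    int frameIndex = 0;
    public bool capture = false; // can also be toggled in the inspector

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.F9)) capture = !capture; // F9 toggles capture (my choice of key)

        if (capture)
        {
            Application.CaptureScreenshot(string.Format("frame_{0:D5}.png", frameIndex));
            frameIndex++;
        }
    }
}
```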
okay, so i’ve tried the device on a couple of family members, and since the virtual desktop example is fast enough for my current little computer, it was a great experience for everybody.
it seems, though, that the head tracking in the example is just for turning your head; somehow it’s not tracking your relative height, i.e. looking under the table doesn’t work.
okay, this is a new era. let’s face it. ever since william gibson’s neuromancer (the guy who invented the word cyberspace), i’ve been waiting to dive into data. in the mid-nineties, i bought myself a cybermaxx helmet. and yes, i recently…
YEAH! can’t wait to hook the oculus rift dk2 up to my wii fit board. i think that navigation in the virtual space via finger, keyboard, mouse, wii nunchuck, razer hydra etc. is suboptimal. you have to have your hands free to do the things that hands do in the real space. walking is obviously not part of that…
so i would like to stand on the board and read out the pressure sensors of the wii fit board, accelerating ahead and steering left / right just by shifting my body weight, similar to riding a skateboard.
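the mapping itself is just arithmetic once the four corner weights are read out (over bluetooth, via whatever library; the corner naming, the scaling and the minimum-weight cutoff below are my assumptions):

```csharp
using System;

// Sketch: turn the wii fit board's four corner load values (kg) into a
// steering / acceleration pair. Getting the values over bluetooth is left
// to whatever library you use; this is just the center-of-pressure math.
class BalanceBoardNav
{
    // topLeft, topRight, bottomLeft, bottomRight: weight on each corner in kg.
    static (float steer, float accel) CenterOfPressure(
        float topLeft, float topRight, float bottomLeft, float bottomRight)
    {
        float total = topLeft + topRight + bottomLeft + bottomRight;
        if (total < 5f) return (0f, 0f); // nobody on the board

        // Center of pressure in -1..1: right minus left, front minus back.
        float steer = ((topRight + bottomRight) - (topLeft + bottomLeft)) / total;
        float accel = ((topLeft + topRight) - (bottomLeft + bottomRight)) / total;
        return (steer, accel);
    }

    static void Main()
    {
        // Example: leaning forward and slightly to the right.
        var (steer, accel) = CenterOfPressure(20f, 28f, 16f, 16f);
        Console.WriteLine($"steer={steer:F2} accel={accel:F2}"); // steer=0.10 accel=0.20
    }
}
```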
and the leap motion. it is of utmost importance to use your hands in the space, and leap motion did a great pivot recently when adapting their SDK to the oculus rift, i.e. adding support for finding your hands even when the leap motion controller is not on the table but attached to your helmet. can’t wait to see this. plus, the leap motion controller / sdk has built-in support for gesture recognition, so you can easily implement a couple of basic interactions with objects in the virtual space.
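as a taste of that built-in gesture support, here’s roughly what listening for a swipe looks like with the v2 C# bindings, as far as i understand them (a sketch, not tested with the hmd mount):

```csharp
using Leap;
using System;

// Sketch: poll the leap motion for swipe gestures via the v2 C# bindings.
class SwipeListener
{
    static void Main()
    {
        Controller controller = new Controller();
        controller.EnableGesture(Gesture.GestureType.TYPE_SWIPE);

        while (true) // poll forever; a real app would hook this into its frame loop
        {
            Frame frame = controller.Frame();
            foreach (Gesture gesture in frame.Gestures())
            {
                if (gesture.Type == Gesture.GestureType.TYPE_SWIPE)
                {
                    SwipeGesture swipe = new SwipeGesture(gesture);
                    // Direction is a unit vector; x > 0 means swiping to the right.
                    Console.WriteLine("swipe, direction x = " + swipe.Direction.x);
                }
            }
            System.Threading.Thread.Sleep(16); // ~60 Hz polling
        }
    }
}
```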
and my two asus xtions; the guy below uses three kinects. it will be a great thing to fuse the real world and the virtual world. i’d love to track my body via openni and / or see myself in the virtual space. the rift’s head tracking and possibly the xtion’s spatial position tracking will allow me to walk in space but also give real out-of-body experiences in the helmet. soo cool, check out the video below: