This is my first Chromebook and I really love it: it's ARM-driven, has Android support and runs Linux in a VM. The only thing I don't like is the original kickstand that comes with it, as it weighs 240g.
So I ordered this 46g foldable tablet stand and removed the lower lip that usually holds a tablet in place. For the Chromebook, the lip has to go because the keyboard attaches directly to the tablet part via a broad, cloth-like connector.
As this is an experiment, I attached it with "power strips", a double-sided adhesive tape that you can remove easily without leaving residue behind.
Finally, I placed it in the middle of the tablet part:
And voilà, this is how it looks:
I may design something similar in 3D with even flatter angles and more rounded edges when folded, so that I can put it into a bag without the risk of it snagging on anything.
Shortcut: for instructions on how to run a server on the Quest for developing a-frame webVR apps that run on the Quest, scroll down to the how-to.
Background: Reading William Gibson's Neuromancer as a child, I was always intrigued by how to get into, stay in and work in "the matrix" / cyberspace. Today, VR is still not where Gibson conceptualised it (we don't have meaningful, conventionalised data visualisation and navigation in 3D yet), but we're making baby steps. When I first saw this video in 2014, I fell in love with the idea of writing live-updating code in VR:
Inspecting the code today, it isn't built in a-frame or webVR as far as I understand, but in three.js, which as far as I know underlies webVR. It's not hosted anywhere anymore, so you can't try it out on your headset.
Having owned the DK2 and the CV1, the Oculus Quest is finally the device of my dreams, as it doesn't depend on external sensor setups, cables or a PC. For the moment, I don't care about the resolution and the GPU. I want to explore the idea of having a fully fledged computer on your body to work with. It's really a new device category. Right now, content and software is developed outside the HMD and then tested on it (think Unity 3D etc.). Even for phones this is the case: you can't develop for Android on Android without anything else (at least last time I checked). From my perspective (I grew up with the Commodore C64), the computer you consume on should be the computer you produce on.
The Quest is a computer: it has a huge screen, an OS, an internet connection, it runs Android and it is Bluetooth capable. I came across a-frame and think that, for the moment, this is the go-to development framework compared to Unity 3D et al. However, to develop for a-frame, you also need a web server (although systems like glitch.com and CodePen let you work in the browser). Ideally, this server itself should also run on the Quest (my ideal is a self-contained system).
Keyboard: I successfully paired a Bluetooth keyboard (using a sideloaded BT lister app; this one works on the Quest, others didn't) and it works in the Oculus Browser but not in the (sideloaded) Firefox Reality browser (current version in beta: 1.2.3; use *oculusvr-arm64-release-signed-aligned.apk for the Oculus Quest and start it from "unknown sources"). It also works in Termux (see below).
[Optional: I can successfully work in Glitch in the Oculus Quest browser (here's something I composed from two other glitches plus some of my own code: https://glitch.com/~gallery-appear-disappear; use the grip button on all the objects, the triceratops will produce boxes with physics, the sphere will change the environment, and you can resize the picture using both controllers). Note: You can leave out this step, but it's nice to see that you can work on code in a VR browser and experience it in the same browser.]
I successfully sideloaded Termux (it provides a Linux environment on Android and interfaces with the hardware, read here) and can run it in the Oculus TV environment (a bigger screen size would be nice, though).
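For reference, sideloading from a PC works with adb; a minimal sketch, assuming developer mode is enabled on the Quest (the APK filename is illustrative):

adb devices             # the Quest should show up as a connected device
adb install termux.apk  # the app then appears under "unknown sources"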
In Termux (keep it up to date by issuing apt-get update and apt-get upgrade), we should install:
pkg install nano (command-line text editor)
pkg install python (needed in some cases)
pkg install nodejs (Node.js and npm)
pkg install git (for pulling git repositories)
Next, we install a-frame by issuing git clone https://github.com/aframevr/aframe.git and change into that directory.
We install the dependencies with npm install.
And we start the server with npm start.
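For reference, the complete sequence in Termux looks like this:

pkg install nano python nodejs git
git clone https://github.com/aframevr/aframe.git
cd aframe
npm install
npm start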
It takes a while for the system to put the server together and start it; finally, it will print something like "server started at http://192.168.178.xx:9000". Open that address in your Oculus Browser (or Firefox Reality) and voilà:
Certainly, it is still a bit cumbersome to switch between Termux (which has to be launched via Oculus TV) and the browser, and yes, nano is not the perfect tool to work on JavaScript and HTML, BUT: we have it working! A self-contained system running on the Oculus Quest for developing a-frame / webVR applications.
I was able to connect a webcam via OTG to my phone and found an app in the Google Play Store that can actually stream video to the phone's screen. But sideloading it to the Quest and starting it there doesn't deliver a live stream. (The intention is to be able to look at my keyboard in VR.)
I found some webVR code that can use the webcam as a texture on an object. It works on my PC but not on the Quest (neither in the Oculus Browser nor in Firefox Reality, although the latter has an enable button for webcams).
I installed OVRVNC to log in to my Raspberry Pi, planning to connect a webcam to that, stream video from there and run a web server. However, on the Quest it doesn't connect to my Pi; a VNC session from a PC works.
TL;DR: a 14x11cm portable Raspberry Pi Zero W with an Adafruit PowerBoost 1000 and a 3000mAh LiPo, a Hyperpixel 4-inch display, a keyboard and a 4-port USB hub. Scroll down for videos, STL files and the shopping list.
I've always wanted to be able to carry a little computer around that reduces the outside distractions: no email, ideally no browser, no messaging, etc. I actually enjoy the "silence" that I have with the Raspberry Pis. They allow me to focus on writing software for robots and the like.
Sometimes, you want to code on the go. My second shot in that direction was the StickPi, a little computer without a battery but with an e-paper display and a couple of buttons. I actually keep using it; it's my most-used self-designed computer.
My first shot hasn't been published yet: the SlatePi. It is a 5-inch computer with a small keyboard and a 4000mAh battery, but it seems to be too large for everyday use.
So I built another computer: today's PocketPi. It is the symbiosis of the first two: portable, but still with a full keyboard and a battery included.
It all started with the idea to build a Pi Zero and a display into my favorite mobile keyboard:
So at some point, I started to take that keyboard apart to understand how much space there is to actually fit in a Pi Zero W, a battery and the display.
I quickly learned that including the driver for a ST7735 1.8-inch display is cumbersome and that the resolution (128×160) might actually not be nice to really work with; it may be better suited for a RetroPie machine.
So I decided to use another display that I already had at home: a 3.5-inch display for the Pi (without HDMI) with a more decent 480×320 resolution.
However, I didn't want to increase the size of the device by the height of the display and thus decided that I didn't need the left (media-related) buttons of the keyboard:
Another goal was to keep the height of the original keyboard, which is about 12mm. Looking at the standard plugs of GPIO displays, it became clear that I needed to apply some tricks:
I soldered the rails of a DIP socket to the Pi and removed the whole socket from the display, cutting the pins down to a bare minimum.
I also decided that I wanted a full-size HDMI out, so I bought shorter and more flexible cables than I had at home and dismantled them:
Finally, I also wanted to add a decent non-OTG USB port to the machine, as OTG adaptors simply SUCK.
I went with a little board that already includes the OTG "logic" and has one USB port on the side, so the keyboard receiver can stay inside the device's case.
During the journey, I decided to upgrade to the final Hyperpixel 4-inch display with a decent 800×480 resolution. The advantage is the only slightly increased size (4mm) compared to my 3.5-inch before, plus it can be switched off via GPIO. So this is the evolution of the displays over the project:
I also added the Adafruit PowerBoost 1000 and a rather flat 3000mAh LiPo to the mix. The Adafruit is rather expensive (20-30€) and only supports 1A charging current; I'd love to have a cheaper charger board with 2A at some point.
With the power source in place, it was time to wire it all up. Note that I added another switch to the keyboard so that I could switch off the keyboard but let the Pi run for background computations.
As you can see, I wired the USB hub directly to the Pi to save some more weight and space:
Another trick to save height, now with the new Hyperpixel display (the screen's plug is okay, so I needed a new Pi Zero without the DIP headers, just short pins), is to solder the pins with the plastic spacer from behind and then remove the spacer plus the excess pin length on the back:
After the system was established, it was time to design the case and get a feeling for the locations of everything:
A later version then had spacing elements between the components to hold them in place. The HDMI output cable was also added at that point:
As mentioned before, the screen switch for the 3.5-inch didn't work: you could switch the display off (cutting the 5V with the physical switch) but not on again, since the OS wouldn't re-recognise the screen.
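The Hyperpixel, in contrast, can be switched off in software via GPIO. A minimal sketch using raspi-gpio, where the pin number is only a placeholder (check Pimoroni's pinout for the actual backlight pin):

raspi-gpio set 19 op dl    # drive the (assumed) backlight pin low: display off
raspi-gpio set 19 op dh    # drive it high again: display on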
So the whole case design (Tinkercad) underwent a couple of iterations:
As you can see in iterations 7 & 8, the battery was rotated 90° to landscape and the HDMI cable is going between battery and battery charger.
During these iterations the device grew a bit, to a final 11x14cm. That's 3cm more than the original keyboard's case at 8x14cm. But anyway, that's the price for a 4-inch screen with an 800×480 resolution…
So that’s the currently final layout:
Time to look at the functionality in a video:
As I wanted to take the PocketPi on my Easter holidays, I needed to trade the flat screw "lips" for bolder, rounded ones to give the whole thing more stability:
I installed Raspbian Stretch Lite and added the PIXEL desktop plus Geany as a Python IDE. I also configured two of the buttons to zoom the text in Geany; the right mouse key on the keyboard is really handy, the touch screen works as the left mouse button, and I added multiple desktops to easily switch between apps.
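For reference, a minimal sketch of that software setup on top of Stretch Lite, assuming the usual Raspbian package names:

sudo apt-get update
sudo apt-get install raspberrypi-ui-mods    # the PIXEL desktop
sudo apt-get install geany                  # lightweight IDE, used here for Python

Here's a video of the currently final device in operation: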
The original video is from 2009; the breakthrough in computer vision came in 2012 with deep learning. All the new technologies for understanding human body posture and movements have been developed further since then. See for example this research from ETH Zurich predicting body movements:
Also, moving tiles are already in production in huge warehouses:
I was a supporter of the Kickstarter campaign for the Omni by Virtuix, but didn't get one, as the machine is so heavy that the developers decided to ship it only to professional companies, not to private persons, and not internationally. The Omni is a treadmill:
Treadmills try to keep you in place physically; they are not really good at sensing where you are. In contrast, the tiles concept above actually works around you, as it has to understand where you are going. It is a real interface between the virtual environment (as it has to know what comes next) and your movements (and thus has to track you decently). It may be clumsy (what if the person is running?) and inefficient right now, but in my eyes it has all the potential to make you feel like you are running as you like and sensing the environment. Some things still have to happen:
It has to become smarter at seeing and predicting body movements.
It has to become faster, and maybe even smaller.
Possibly the plates have to tilt to emulate environments better (going up a hill).
I would love to see this concept advancing. That said, I’m still in love with the bionic chair by Govert Flint as it would work in constrained spaces, i.e. for consumer use.
Then I came across these USB boards that you can pogo-pin to your Pi Zero, similar in design to what the guy at NODE did.
When working with a Pi Zero, I want to connect via VNC so I can run the PIXEL desktop and Geany on the Pi to develop and run software. When doing so, you quickly want the Pi to display the WiFi network and IP address it's connected to. I chose an e-paper display from Waveshare (do yourself a favour and buy the black/white display only, not the one with three colours, as those don't support partial updates, i.e. they refresh very slowly!).
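Collecting the WiFi name and IP address to show is straightforward; a minimal sketch (the rendering on the e-paper itself goes through Waveshare's Python demo library, not shown here):

SSID=$(iwgetid -r)                     # SSID of the connected WiFi
IP=$(hostname -I | awk '{print $1}')   # first assigned IP address
echo "WiFi: ${SSID:-not connected}"
echo "IP:   ${IP:-none}"

I bought such a USB power board and attached the Zero to a battery: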
Once you have this kind of display, you learn that the physical interaction with the machine is one-way: consumption. And immediately you think that it would be nice to have interaction methods to select a WiFi network, shut down the machine, select operation modes, etc.
The rotor blades couldn't withstand my kids, so the job was to 3D print new ones. I did this in 40 minutes in Tinkercad. I connected the blades in a different manner than the original; you can see that the blades are also connected around the shaft. I also added those blue supports that should give additional strength.
The Anet A8 printing the bad boy in grey, similar to the original:
You can see there is some infill structure; I hope these rotors will last better than the originals. I also made them 3mm thick, which is about double the original, but PLA is also a bit more brittle. The two together:
And the new blades on the helicopter (about a 40-minute print, via OctoPrint):
In the back you see the rendered G-code file in Simplify3D. A nice one-and-a-half-hour repair!
The next iteration was of course to simply try out the wireless transmission of video. And this doesn’t make sense if you still have the camera attached to yourself, so I “augmented” another project of mine, the rover:
Arduino / RC mixed autonomy Rover
I also built a little self-sufficient stereo video transmission pack, including the Blackbird 2 camera:
Battery, video sender, stereo camera
And attached the two to each other. I love modular designs where you can easily recombine your projects. So this combination simply has two batteries, etc. Here's the full setup:
Oculus Pi wireless streaming video rover setup
Here’s the video:
(I don't know what the heck Pinnacle Studio was thinking when it put the black frame around the video on export.)
Strangely, we seem to get some kind of "interlace distortion" when the car is moving too quickly. I don't quite know whether that is an artefact of the analog video transmission or of the digitisation process itself:
When playing with this, you intuitively move your head to look around. So I could read out the head movements and send them to the Arduino on the rover to actually move the camera. I'd also need to add another servo to the camera to be able to move it along at least two axes. Let's see…
Cryptic title of a blog post, I know. But I urgently needed to try out my cameras with the Pi 3 for use with the Oculus Rift DK2 (video below). My last attempt to stream video locally into the Rift was successful, so I wanted more. The initial single 2D camera was a Logitech webcam C525 (for $60).
Logitech HD Webcam C525
I have two stereoscopic cameras. The Minoru, which is "kind of" a cheap camera at about 70€, but then again not, because it's just 2x 640×480 resolution… I stripped the camera down to reduce its size, weight and volume to actually use it on the Trashbot, since I know it is supported by Raspbian as a camera.
But let's see what happens when I run it on the Pi 3:
Learning: the 190€ combo beats the other two BY FAR in experience.
PS: In the video, you see that I attached the cameras to a different Pi than the one attached to the Oculus. This is because the Oculus Pi has display settings tuned towards the goggles, and I need to invest time to make the config changeable via software to actually switch between the goggles and a real external display.
Last time, I was able to get the Oculus Rift DK2 to run on the Raspberry Pi 3, including the head tracking. However, the first interactions showed that it's cumbersome to work with the desktop (since it's not distributed across the two eyes but really uses the LCD as one screen) and also to use the keyboard.
Also, in the context of making the Oculus mobile and untethered, it is necessary to have a camera onboard, at least until I get the Xtion to work. Interestingly, it is not too easy to find software that simply displays a video stream from a local webcam; most blogs just describe how to stream from a remote webcam or how to make a local USB cam accessible via some web service.
My last attempt to get a stereo analog cam to work was not really cool, since the latencies on a PC plus some weird display software as an .exe were not the ideal setting to really improve things.
So after some research, I found a git repository for streaming a local webcam to a dedicated view, independent of the desktop. You need to install CMake and libbsd-dev to make it run.
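A sketch of the setup, assuming a Debian-based Raspbian; the dependencies install as usual and the build follows the standard CMake pattern (the repository's own README is authoritative):

sudo apt-get install cmake libbsd-dev   # the build dependencies named above
mkdir build && cd build                 # inside the cloned repository
cmake ..
make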