Virtual Reality: The New Frontier

There was a huge virtual reality (VR) craze when I was in high school. I remember going to a Virtual Reality Expo with this exhibit that let you experience the weightless magic of riding on a cloud, just like Goku in Dragon Ball. Another attraction had this game with neat, high-speed LEDs that lit up from top to bottom, giving the game unique, 3D-like effects. Needless to say, the thrills were virtually endless.  

These past few years, VR has stepped back into the spotlight. The advent of products like the Oculus Rift and Sony’s Project Morpheus has brought all sorts of advancements to the world of head-mounted display technology. Thanks to cutting-edge computer graphics, improvements in GPUs, and other new technology, an insanely real, fully immersive virtual reality experience is now well within our grasp.

The most exciting part of this new generation of VR may very well be Google’s Cardboard kit. It stands out from the competition thanks to its affordable price and sheer simplicity. Anyone can get their hands on this kit.

In this post, we’ll get into the nitty-gritty of the Cardboard kit. My goal for this project is to build a game demo starring a robot that’s piloted through the magic of VR.

Nothin’ Hard About Cardboard

Google Cardboard is a VR attachment designed for smartphones. Yes, it is made of cardboard. The design schematics are available for anyone interested in building their own VR headset. All of the required materials (magnifying glass lenses, some cardboard, and a few strips of fairly sturdy tape) can be purchased at your local dollar store.

fig1

Bringing Cardboard to Life–the Easy Way

I built two Cardboard VR kits by following the schematics posted on Google’s website. The lenses really did come from my local dollar store: all you have to do is buy a magnifying glass and tear it apart. To my surprise, the magnifying glass I bought included two lenses. Lucky me! I got all the lenses I needed for the price of one.

fig2

Cutting out the body is very easy if you have a laser cutter on hand. Since I didn’t, I printed out a template of what I wanted my VR headset to look like, taped it onto a piece of cardboard, and cut it out with a regular set of cutters.

Since cardboard boxes don’t usually come with bands to wrap around your head, I decided to create a very basic headband by using some Velcro tape and elastic bands from an arts and crafts kit. Voila!

Here’s my first cardboard headset in all its glory.

fig3

Google Cardboard with Unity (Durovis Dive Version)

The first thing I tried was Durovis Dive’s head-tracking tech. I downloaded the SDK from the developer page and imported the Unity package to get started.

I took the “Dive_Camera” prefab from the imported “Dive/Prefabs/” folder and added it to the scene I was working on. This gives you a stereoscopic camera rig complete with head tracking. By setting this camera as your main camera, you can easily adapt the plugin for use with almost any smartphone VR attachment, including Cardboard.
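If you’d rather wire the rig up from a script than drag the prefab in by hand, a minimal sketch might look like this. One assumption on my part: the prefab has to live under a Resources folder for Resources.Load to find it, which isn’t how the package ships.

using UnityEngine;

// Minimal sketch: spawn the head-tracked stereo rig at startup.
// Assumes the Dive_Camera prefab was copied under a Resources folder;
// dragging the prefab into the scene in the editor does the same job.
public class VRCameraSetup : MonoBehaviour {
    void Start () {
        GameObject prefab = Resources.Load<GameObject>("Dive_Camera");
        // Parent the rig to this object (e.g. the robot's cockpit) so the
        // head-tracked cameras follow the player.
        GameObject rig = (GameObject)Instantiate(prefab, transform.position, transform.rotation);
        rig.transform.parent = transform;
    }
}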

Google Cardboard with Unity (Cardboard SDK Version)

Next, I tried out the official Google Cardboard Unity SDK. The Unity package can be downloaded from the Download and Samples page linked from the Cardboard Developer Page (Unity SDK). Import the package into your project to create a Cardboard folder.

Just like with Durovis Dive, add the “CardboardMain” prefab found in the “Cardboard/Prefabs/” folder. This adds the stereoscopic camera functionality. Now you’re ready to roll with Google Cardboard.

Which One to Choose...

I tried both SDKs in this grand experiment, but I didn’t have time to put together a proper comparison of the two. I ended up using Durovis Dive for the app simply because the Cardboard Unity SDK hadn’t been released yet when I started.

How Does It Work?

When working with VR attachments for smartphones, one of the biggest challenges lies in designing the user interface. With the Cardboard headset, I solved this conundrum with a switch based on two magnets and the smartphone’s magnetic sensor. The sensor detects the change in the magnetic field when the magnets move, which effectively gives you a single button’s worth of input. This setup is pretty sweet.
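For the curious, here’s roughly what detecting that magnetic blip looks like in Unity using the stock magnetometer API. This is a sketch of the idea only; the threshold is a made-up number you’d tune per device, not detection code any SDK actually ships.

using UnityEngine;

// Sketch of magnet-switch detection: watch the magnetometer for a sudden
// jump in field strength, which is what sliding the Cardboard magnet causes.
// The threshold is an assumption you would tune per device.
public class MagnetTrigger : MonoBehaviour {
    const float THRESHOLD = 30.0f; // change in field strength treated as a "click"
    float lastMagnitude;

    void Start () {
        Input.location.Start();    // the compass needs location services on some devices
        Input.compass.enabled = true;
        lastMagnitude = Input.compass.rawVector.magnitude;
    }

    void Update () {
        float magnitude = Input.compass.rawVector.magnitude;
        if (Mathf.Abs(magnitude - lastMagnitude) > THRESHOLD) {
            Debug.Log("Magnet switch pulled");
        }
        lastMagnitude = magnitude;
    }
}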

For the second Cardboard headset, I used a piece of conductive fabric and a lever to create a mechanism that touches the screen whenever a button is pressed. This setup is pretty cool too. That said, neither input method allows for much interaction. They both add a bit of lag to every press and are a little cumbersome to control. This could create a few hurdles for games with a lot of action.
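On the software side, the fabric lever is refreshingly boring: it registers as an ordinary finger tap, so a plain touch check covers it. A minimal sketch:

using UnityEngine;

// The conductive-fabric lever shows up as a normal touch on the screen,
// so reading it is just a single-tap check.
public class LeverButton : MonoBehaviour {
    void Update () {
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began) {
            Debug.Log("Lever pressed");
        }
    }
}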

It Takes Two, Baby

That’s when it hit me. Why not use a smartphone’s acceleration sensor and touch panel as a controller? Smartphones are flooding the market at an incredible rate. What’s wrong with taking two of my old phones lying around in my desk drawer and using them as controllers?

This plan calls for a combination of three smartphones–one for display, one for controlling the left side of the robot, and one more for controlling the right. My wife and I had an old iPhone 4 and an iPhone 4s we weren’t using anymore. Jackpot! I decided to use these as my controllers.

Connect the Controllers to the VR Headset via Wireless Network

Here comes the tricky part. How exactly do you connect the two controller smartphones to the smartphone running and displaying the game? I decided to use WebSocket to solve this quandary: the brain smartphone and both controllers connect to a relay server over WebSocket connections.

fig4

On the Unity client side, I built the receiver with a modified version of KLab’s C# WebSocket for Unity. It looked a lot like this:

using UnityEngine;
using System.Collections;
using WebSocketSharp;

public class WebSocketClient : MonoBehaviour {
   public const int RIGHT = 0;
   public const int LEFT = 1;
   public const int B1 = 0;
   public const int B2 = 1;
   public Vector3[] accel = new Vector3[2];
   public bool[,] button = new bool[2,2]{{false,false},{false,false}};

   private WebSocket ws;

   // Use this for initialization
   void Start () {
       ws = new WebSocket ("ws://URL/chat/test");
       ws.OnMessage += (object sender, MessageEventArgs e) => {
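           // Messages arrive as "ID:side:type:payload", e.g. "1:L:AC:x:y:z"
           // for acceleration or "1:L:B:1:DOWN" for a button event.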
           string [] message = e.Data.Split(new char[]{':'});
           int controllerNo = RIGHT; // default to RIGHT if the controller tag is missing (not ideal)
           switch(message[1]){
           case "R":
               controllerNo = RIGHT;
               break;
           case "L":
               controllerNo = LEFT;
               break;
           default:
               break;
           }
           switch(message[2]){
           case "AC":
               accel[controllerNo] = new Vector3(float.Parse(message[3]),
                                                 float.Parse(message[4]),
                                                 float.Parse(message[5]));
               break;
           case "B":
               button[controllerNo,int.Parse(message[3])-1] =
                   message[4].Equals("DOWN"); // true while the button is held down
               break;
           default:
               break;
           }
       };
       ws.Connect ();
   }

   // Update is called once per frame
   void Update () {
   }
}

This class receives the messages from the controller phones and exposes their state so other scripts can pilot the robot in the game.
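For example, a consumer script might poll those public fields each frame. This is only an illustrative sketch; the field wiring and the choice of button are my placeholders, not the demo’s actual code.

using UnityEngine;

// Sketch of a consumer script polling the receiver's public fields.
public class RobotWeapons : MonoBehaviour {
    public WebSocketClient receiver; // assign in the inspector

    void Update () {
        // In this sketch, B1 on the right-hand controller is the attack button.
        if (receiver.button[WebSocketClient.RIGHT, WebSocketClient.B1]) {
            Debug.Log("Fire!");
        }
    }
}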

I made the controller as a plain HTML file. Not exactly fancy, but simply opening it in a browser works just fine.

<!DOCTYPE html>
<html><head>
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black">
</head>

<body style='user-select: none; -webkit-user-select: none;'>
<script type="text/javascript">
document.addEventListener('touchmove', function(e) {
 e.preventDefault();
},false);

var ws;
ws = new WebSocket("ws://URL/chat/test");

function send(message){
 if (ws.readyState === WebSocket.OPEN) { // don't fire before the socket opens
  ws.send(message);
 }
}

window.addEventListener("devicemotion",function(evt){
 var x = evt.accelerationIncludingGravity.x; // side-to-side (short axis)
 var y = evt.accelerationIncludingGravity.y; // top-to-bottom (long axis)
 var z = evt.accelerationIncludingGravity.z; // in and out of the screen
 send("1:L:AC:"+x+":"+y+":"+z);
},false);
</script>
<div style="width: 100px; height: 100px; background-color: red; margin: 20px; float: right;" ontouchstart="send('1:L:B:1:DOWN')" ontouchend="send('1:L:B:1:UP')" ></div>
<div style="clear: right;"></div>
<br>
<br>
<div style="width: 100px; height: 100px; background-color: blue; margin: 20px; float: right;" ontouchstart="send('1:L:B:2:DOWN')" ontouchend="send('1:L:B:2:UP')" ></div>
</body></html>

With this HTML, I changed the hard-coded ID and left/right tag in the messages for each controller to create two independent controllers―one for the left side of the robot, and one for the right.

The relay server simply broadcasts every message it receives to all connected clients over a plain WebSocket connection. I used this Clojure and Aleph chatroom service for reference.
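Since the Unity client already talks websocket-sharp, here’s what an equivalent broadcast relay looks like sketched with that same library’s server classes. To be clear, this is my substitution for illustration; the relay I actually ran was the Clojure/Aleph chatroom above.

using WebSocketSharp;
using WebSocketSharp.Server;

// Sketch of a broadcast relay built on websocket-sharp's server classes:
// the same "echo everything to everyone" idea as the Clojure/Aleph chatroom.
public class ChatRelay : WebSocketBehavior {
    protected override void OnMessage (MessageEventArgs e) {
        // Rebroadcast every controller message to all connected clients,
        // including the headset phone running the game.
        Sessions.Broadcast (e.Data);
    }
}

public class RelayServer {
    public static void Main () {
        var server = new WebSocketServer ("ws://0.0.0.0:8080");
        server.AddWebSocketService<ChatRelay> ("/chat/test");
        server.Start ();
        System.Console.ReadKey (true); // keep the relay running until a key press
        server.Stop ();
    }
}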

An Ode to Cyber Troopers Virtual-On

It’s hard to talk about piloting a futuristic robot with two joysticks without bringing up Cyber Troopers Virtual-On, one of the most innovative titles in the “giant fighting robots” gaming genre. I borrowed a few ideas from the series for my game. Each controller phone’s tilt angle is read from its built-in acceleration sensor. Tilt both the left and right controller phones forward and the robot walks forward; tilt them both backwards and it moves in reverse. Tilt one forward and the other backward to make the robot spin around. Tilt them sideways in the same direction to move the robot side to side. Spread the controllers apart horizontally to make the robot jump, and bring them closer together to make it descend. Classy.
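Here’s a hedged sketch of how that tilt mapping might look on the Unity side, reading the receiver class from earlier. The speeds and axis choices are my own guesses, and the jump/descend gesture is left out since detecting the controllers spreading apart takes more than a per-frame tilt read.

using UnityEngine;

// Sketch of the Virtual-On-style twin-tilt mapping described above.
// Thresholds and speeds are guesses; accelerometer sign conventions differ
// between Android and iOS (see below), so flip axes per controller as needed.
public class TwinTiltDrive : MonoBehaviour {
    public WebSocketClient receiver; // assign in the inspector
    public float moveSpeed = 5.0f;
    public float turnSpeed = 60.0f;

    void Update () {
        float left  = receiver.accel[WebSocketClient.LEFT].y;  // forward/back tilt
        float right = receiver.accel[WebSocketClient.RIGHT].y;

        // Both tilted the same way: walk forward or backward.
        float forward = (left + right) * 0.5f;
        transform.Translate(Vector3.forward * forward * moveSpeed * Time.deltaTime);

        // Tilted in opposite directions: spin in place.
        float spin = (left - right) * 0.5f;
        transform.Rotate(Vector3.up, spin * turnSpeed * Time.deltaTime);

        // Sideways tilt in the same direction: strafe.
        float strafe = (receiver.accel[WebSocketClient.LEFT].x +
                        receiver.accel[WebSocketClient.RIGHT].x) * 0.5f;
        transform.Translate(Vector3.right * strafe * moveSpeed * Time.deltaTime);
    }
}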

The screen on the controller phones includes buttons for attacking and boosting. Tapping the buttons triggers the desired effect.

fig7

As you can see in the photo above, I used an in-browser HTML page for the controller screen. (Not a lot of bells and whistles here.) The red square is the attack button, and the blue one is the boost button. It works on both Android and iOS devices, but the acceleration sensor’s axes are oriented differently on each OS, so be careful.

Everybody Do the Robot

Here’s a shot of me using my Cardboard headset. Looking good.


fig5

And here’s a screenshot of the actual display.

fig6

Summary

This experiment asks a lot since you need three phones to make it work. All things considered though, it’s not that hard to get your hands on three smartphones these days. When I started scrounging around the house, I found a 3GS, 4, 4s, 5, 5s, 6, and a first-generation iPad, and that’s just for iOS! For Android, we had an HTC Aria, a Kindle Fire HD, and a Kobo Arc. You probably have at least three old phones lying around somewhere.

There’s a little bit of latency that creeps in when controller messages travel through the server over the WebSocket connection, but it’s barely noticeable once you start playing. With a little in-game tweaking, you can smooth over what little lag remains.
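One example of the kind of tweaking I mean: ease toward the latest controller value instead of applying it raw, so a late packet blends in over a few frames rather than popping. A minimal sketch, with a smoothing factor you’d tune by feel:

using UnityEngine;

// Sketch of hiding network jitter: ease toward the latest controller value
// instead of applying it raw, so a late packet blends in over a few frames.
public class SmoothedInput : MonoBehaviour {
    public WebSocketClient receiver; // assign in the inspector
    public float smoothing = 10.0f;  // higher = snappier, lower = floatier (a guess to tune)
    Vector3 smoothedLeft;

    void Update () {
        smoothedLeft = Vector3.Lerp(
            smoothedLeft,
            receiver.accel[WebSocketClient.LEFT],
            smoothing * Time.deltaTime);
        // Use smoothedLeft in place of the raw value when driving the robot.
    }
}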

Overall, this project was a blast. I loved making the VR headsets. Time was flying and so was I. My only regret is that this article fails to convey the full extent of how much fun this project really was! My goal was to create a robot that you could climb into and pilot yourself. The demo’s gameplay was pretty lackluster, but I’m looking forward to taking what I learned from this project and applying it to the games we’ll make in the future.

Thanks for reading!