A Journey to the Virtual World

Welcome to our Christmas post on our work on data visualization in immersive media!

Technology is evolving rapidly and has become a part of our daily lives. One of the top trending technologies today is virtual reality (VR), and we at Imaginea Labs are exploring it with gusto.

Table of contents

  1. Virtual reality primer
  2. VR headsets
  3. Difference between VR and AR
  4. VR experiment at Imaginea Labs
  5. Virtual building walkthrough
  6. Data Visualization
  7. Interactions
  8. Learnings
  9. Conclusion
  10. Future Works
  11. Useful Resources

Virtual reality primer

The idea of virtual reality is to digitally simulate real environments, on platforms such as video games or 360-degree videos, experienced through various VR devices. The effect is achieved by feeding the brain simulated senses such as vision, touch, hearing, and even smell. Remember seeing VR stalls in malls advertising ‘4D VR’, ‘9D VR’, and so on? Each sense added to the virtual reality space lends a new dimension of value to the whole experience. At Imaginea Labs, however, our focus is mainly on achieving VR through 360-degree vision.

VR headsets

VR headsets help us enter the world of virtual reality. These devices come in two kinds:

  • One is a shell into which we insert our smartphone, using its features like the screen, gyroscope, accelerometer, and audio. Examples: Google Cardboard, Samsung Gear VR, etc.
  • The other is a complete stand-alone VR unit where everything is built in and no phone is needed. It has a screen, all the required sensors, a sound system, and the option to display the VR screen on a TV or computer. Examples: Oculus Rift, HTC Vive, etc.

These two types of VR headsets have one thing in common: both have two lenses to give a stereoscopic, 3D depth effect. Think of something we have all tried: checking one eye's vision by closing the other. Each eye sees the same scene but from a slightly different viewpoint, with a large overlapping region. When we watch with both eyes, the brain is trained to find the similarities between these offset images and stitch them into a single wide view. In the same way, a VR application renders a split screen, one image per eye; when we view the images through a VR headset, we achieve a 360-degree vision.


Difference between VR and AR

There exists a crucial difference between the two. In virtual reality, we create an entire artificial environment, where whatever we see is unreal.


Augmented reality (AR), on the other hand, is where we see artificial objects in a real environment, mostly as overlays. There is one camera layer that shows reality, and on top of it an augmented layer where 3D objects add more value to the scene. A good example of an AR tool is Google Glass.

VR experiment at Imaginea Labs

The applications of virtual reality range from video games and prototype visualization to training students with simulations. At Imaginea Labs, we have experimented with a few applications, through which we have learnt 3D modeling, the 3D depth effect, the parallax effect, and gesture building for intuitive interaction. In this blog, we will focus on a virtual walkthrough and data visualization, with various modes of interaction designed to engage users in the virtual world.

Virtual building walkthrough

The purpose of a virtual walkthrough is to solve navigation issues. These problems generally occur in large spaces such as malls, where we might struggle to find a store or the way to it. A virtual walkthrough app gives us an opportunity to explore a building virtually. If we do a walkthrough using the app, we can be sure of the basic layout and routes of the building. Here, we took our own office building as the place to explore virtually.


While building such an app, we had to work on the look and feel as well as the interactions, which included actions like opening a door and walking in a specific direction. We used Maya for modeling the building and a Bluetooth controller for the interactions.

We started by setting a base for the building, that is, a floor with grids. With the help of the grid units, we created the building layout. Then we worked on the colors, textures, and materials, using a number of reference pictures of the building that we had taken.


En route to building the app, we faced a few challenges. After modeling the building in Maya, we imported it into Unity to add lights and interactions. That is when we spotted pixelated edges in the model. Also, objects in the building looked smaller or bigger than they did in Maya, so we had to iterate to eliminate the scaling issues. In general, we aim for low-polygon 3D models, that is, models with fewer vertices and joints; this is another area we worked on for performance improvement.

One area for improvement is VR sickness. In this application, users felt disoriented because mentally they feel they are walking while physically they remain in one place.

Data Visualization

The idea of data visualization is to convey information in visual forms. For long, we have visualized data as tables, charts, plots, and graphs. We thought it appropriate to experiment with data visualization in virtual reality now: what would it be like to enter a 3D data world and take a walk to examine data points and their relations?

So we experimented with a few applications to explore data visualization in virtual reality and solved issues on the go. In the process, we realised how apt it is to use VR concepts for visualizing data. Let us take a closer look at the experiments below.

3D point cloud

We started simple by creating a data point cloud in 3D space from an alcohol consumption dataset. A point in 3D space is positioned along three axes, and if we need to add more dimensions we can use color and size. Based on this, we did the mapping in the following manner:

  • Each point represents a country
  • Position of the point is based on the spirits, beer, and wine consumption in the country
  • Color of the point is based on whether spirits/ beer/ wine is consumed
  • Size of the point represents the total volume of pure alcohol consumed

One could see more information by clicking on the data point.
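
To make the mapping concrete, here is a minimal sketch of how such a point cloud could be built in Unity. The CountryRecord fields, the prefab, and the scaling factors are illustrative assumptions for the example, not our exact implementation.

using System.Collections.Generic;
using UnityEngine;

// Hypothetical record for one row of the alcohol consumption dataset.
public struct CountryRecord
{
    public string name;
    public float spirits, beer, wine;        // servings consumed
    public float totalLitresPureAlcohol;
    public Color dominantDrinkColor;         // color chosen per drink type
}

public class PointCloudBuilder : MonoBehaviour
{
    public GameObject pointPrefab;           // e.g. a small sphere
    public float positionScale = 0.02f;      // assumed scale factors
    public float sizeScale = 0.05f;

    public void Build(List<CountryRecord> records)
    {
        foreach (var record in records)
        {
            // Position: spirits -> x, beer -> y, wine -> z.
            Vector3 pos = new Vector3(record.spirits, record.beer, record.wine) * positionScale;
            GameObject point = Instantiate(pointPrefab, pos, Quaternion.identity, transform);
            point.name = record.name;

            // Size: total volume of pure alcohol consumed.
            point.transform.localScale = Vector3.one * (1f + record.totalLitresPureAlcohol * sizeScale);

            // Color: which drink the point represents.
            point.GetComponent<Renderer>().material.color = record.dominantDrinkColor;
        }
    }
}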

Graph with nodes and edges

The next experiment was aimed at replicating a LAN structure in 3D space by plotting points and linking them. The network graph was created from an XML dataset consisting of nodes and their connection details.

node tag:

  id       x        y        z        group  name               status  type
  node_1   3744.53  4253.76  6190.83  1      156.145.220.12825  Okay    PseudoNode
  node_2   3988.03  3771.95  5946.03  1      156.145.210.7630   Okay    PseudoNode
  :        :        :        :        :      :                  :       :
  node_55  3686.3   4267.01  6234.78  1      156.145.220.150    Okay    Router

link tag:

  id      source  target   color  label  status  type
  link_1  node_1  node_55  red    1000   Up      Half

We did the mapping in the following manner (a construction sketch follows the list):

  • Each node represents a server/client in the network
  • Edges represent connections between the nodes
  • Pulses along the edges show the communication flow of the connection
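
As an illustration, here is a minimal sketch of how the parsed records could be turned into spheres and LineRenderer edges in Unity. The NodeRecord/LinkRecord types, the prefab, and the material are assumptions made for the example, and the pulse animation is omitted.

using System.Collections.Generic;
using UnityEngine;

public class NetworkGraphBuilder : MonoBehaviour
{
    public GameObject nodePrefab;    // e.g. a small sphere
    public Material edgeMaterial;

    // Hypothetical records, assumed to be parsed from the XML beforehand.
    public struct NodeRecord { public string id; public Vector3 position; }
    public struct LinkRecord { public string source, target; public Color color; }

    private readonly Dictionary<string, GameObject> nodes = new Dictionary<string, GameObject>();

    public void Build(List<NodeRecord> nodeRecords, List<LinkRecord> linkRecords)
    {
        // Plot each node as a sphere at its (x, y, z) position.
        foreach (var n in nodeRecords)
        {
            GameObject go = Instantiate(nodePrefab, n.position, Quaternion.identity, transform);
            go.name = n.id;
            nodes[n.id] = go;
        }

        // Draw each link as a line between its source and target nodes.
        foreach (var l in linkRecords)
        {
            var edge = new GameObject("edge_" + l.source + "_" + l.target);
            edge.transform.SetParent(transform);

            var line = edge.AddComponent<LineRenderer>();
            line.material = edgeMaterial;
            line.startColor = line.endColor = l.color;
            line.startWidth = line.endWidth = 0.5f;
            line.positionCount = 2;
            line.SetPosition(0, nodes[l.source].transform.position);
            line.SetPosition(1, nodes[l.target].transform.position);
        }
    }
}
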
Flight visualization

In order to make flight booking more interesting, we thought of going beyond the traditional ticket-booking user interface and creating a VR app for it. In this application, we used a wide galaxy view in which we created a simulation of the Earth with cities/airports highlighted. For data, we leveraged real-time flight data APIs. The app's features are listed below:

  • Controlling Earth rotations and position
  • Selecting cities/ airports to view weather report
  • Selecting cities/ airports to view the flights for those places
  • Checking flight information by clicking on them (Flight audio is added for a better experience.)

With the flight visualization app, we fixed a couple of problems that arose along the way. We saw that as we zoomed in to the Earth, it turned pixelated and the texture quality degraded. We couldn't use a high-quality texture directly, as the mobile processor could not handle it. So we decided to stream the texture, similar to how it is done in the Google Maps app. This way, the zoomed-in picture turned out a lot sharper and showed more detail. We settled on five levels of detail (LOD) and updated textures at runtime only for the Earth's visible portion. Moreover, we uploaded all the textures to the cloud to manage space. This technique helped us a lot in improving the user experience by rendering high-resolution Earth textures in the app.
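
As a rough illustration of this approach (not our exact implementation), the sketch below picks a level of detail from the camera distance, downloads the corresponding texture tile for the visible region, and applies it to the globe. The tile URL pattern, the LOD thresholds, and the class name are assumptions made up for the example.

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class EarthTextureStreamer : MonoBehaviour
{
    public Renderer earthRenderer;
    // Hypothetical tile location; the real textures were hosted on our own cloud storage.
    public string tileUrlPattern = "https://example.com/earth/{0}/{1}_{2}.jpg";

    // Pick one of five LODs based on how close the camera is to the globe.
    int LevelOfDetail(float cameraDistance)
    {
        if (cameraDistance > 20f) return 0;
        if (cameraDistance > 10f) return 1;
        if (cameraDistance > 5f)  return 2;
        if (cameraDistance > 2f)  return 3;
        return 4;
    }

    public void RefreshVisibleTile(float cameraDistance, int tileX, int tileY)
    {
        int lod = LevelOfDetail(cameraDistance);
        StartCoroutine(DownloadAndApply(string.Format(tileUrlPattern, lod, tileX, tileY)));
    }

    IEnumerator DownloadAndApply(string url)
    {
        using (UnityWebRequest request = UnityWebRequestTexture.GetTexture(url))
        {
            yield return request.SendWebRequest();
            if (string.IsNullOrEmpty(request.error))
            {
                // Swap in the sharper texture for the currently visible portion of the Earth.
                earthRenderer.material.mainTexture = DownloadHandlerTexture.GetContent(request);
            }
        }
    }
}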

Another issue was low performance caused by an unacceptable drop in frame rate. Using Unity's profiler, we found that each city GameObject (GameObjects are the basic objects in Unity that represent characters, props, and scenery) ran its own Update function. Since there are more than 4000 city instances in the Earth object, this led to a huge overhead. So instead of using per-object Update functions, we used event broadcasting to perform the necessary work, and it ran smoothly as expected.
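
The pattern we mean is roughly the following: a single manager runs one Update and broadcasts a C# event, and every city subscribes to it instead of having its own Update. This is a simplified sketch with hypothetical class and event names, not our exact code.

using System;
using UnityEngine;

// One manager drives all cities from a single Update call.
public class GlobeTicker : MonoBehaviour
{
    public static event Action<float> OnGlobeTick;

    void Update()
    {
        // Broadcast once per frame instead of running 4000+ Update functions.
        if (OnGlobeTick != null)
            OnGlobeTick(Time.deltaTime);
    }
}

// Attached to each city instance; note that it has no Update of its own.
public class City : MonoBehaviour
{
    void OnEnable()  { GlobeTicker.OnGlobeTick += Tick; }
    void OnDisable() { GlobeTicker.OnGlobeTick -= Tick; }

    void Tick(float deltaTime)
    {
        // Per-city work that previously lived in Update goes here.
    }
}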

Interactions

Among the factors that decide the quality of a VR experience is the interaction between the user and the VR application. In this domain, along with head tracking and 3D positional audio, we worked on hand interaction and voice control as well. We cover each type of interaction below.

Bluetooth joystick

We started off with this familiar controller, which comes bundled with a handful of VR headset brands. It has two modes of operation: one for games and another as a mouse mode. It has a 360-degree control rocker and a few more buttons; we used the rocker for navigation and the buttons for triggering custom events. The following is a code sample for the joystick controller:


Joystick.cs

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Joystick : MonoBehaviour {
    public GameObject player;
    public float speed = 0.9f;
    public float xVal;
    public float yVal;
    public float zVal;

    float lastUpdateTime = 0;
    float cancelInterval = 0.2f;

    // Update is called once per frame
    void Update()
    {
        yVal = 0;

        if (Input.GetAxis("Vertical") > 0)
            zVal = 1 * speed * Time.deltaTime;

        if (Input.GetAxis("Vertical") < 0)
            zVal = - 1 * speed * Time.deltaTime;

        if (Input.GetAxis("Vertical") == 0)
            zVal = 0;

        if (Input.GetAxis("Vertical") != 0)
            Debug.Log("Vertical pressed: "+ Input.GetAxis("Vertical"));

        if (Input.GetAxis("Horizontal") > 0)
            xVal = 1 * speed * Time.deltaTime;

        if (Input.GetAxis("Horizontal") < 0)
            xVal = -1 * speed * Time.deltaTime;

        if(Input.GetAxis("Horizontal") == 0)
            xVal = 0;

        if (Input.GetAxis("Horizontal") != 0)
            Debug.Log("Horizontal pressed: " + Input.GetAxis("Horizontal"));

        player.transform.Translate(xVal, yVal, zVal, Space.World);

        // Debounce: ignore repeated button triggers within cancelInterval even if the user presses only once.
        bool pass = false;
        if (Time.time - lastUpdateTime > cancelInterval)
        {
            pass = true;
            lastUpdateTime = Time.time;
        }
        if (!pass)
            return;

        if (Input.GetMouseButtonUp(0))
        {
            Debug.Log("A pressed");
        }

        if (Input.GetAxis("Fire1") > 0)
        {
            Debug.Log("C pressed"); // Joystick button 0 
        }

        if (Input.GetAxis("Fire2") > 0)
        {
            Debug.Log("A pressed");
        }

        if (Input.GetAxis("Fire3") > 0)
        {
            Debug.Log("D ressed"); // not working
        }

        if (Input.GetAxis("Jump") > 0)
        {
            Debug.Log("B pressed");

        }
    }
}

After implementing and using it, we found that this controller is the best fit for applications that call for fewer interactions, such as navigation and triggering events.

Mobile controller

While working on the flight visualization application, we could use the Bluetooth controller only for minimal interactions, as it has a small number of controls. Also, we couldn't rely on click-based interactions alone, since the VR space is vast and invites more creative input. So we thought of using a mobile phone as a controller for creating interactive gestures in VR apps; most mobile applications already use finger gestures and sensors for interaction.

In this case, we needed two mobiles: one for the VR application, used with the VR headset, and another as a controller. We connected these mobiles via WiFi. Here are the two applications we created:

  • Controller app: It sends orientation and finger gesture data to the VR application using the touch, accelerometer, and gyroscope sensors.
  • Mobile controller VR package: It acts as a server and has APIs to handle the data that the controller app sends. The data is mapped to a laser pointer to move it along all three axes. VR applications can leverage this package to let a laser pointer point at and select objects in the 3D space (a sketch of the pointer mapping follows this list).
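
Before getting into the networking setup, here is a minimal sketch of what the pointer mapping amounts to. The class, field names, and smoothing factor are hypothetical, and the gyroscope values are assumed to arrive already converted to Euler angles.

using UnityEngine;

// Rotates a laser pointer in the VR scene from the controller phone's gyroscope data.
public class LaserPointerDriver : MonoBehaviour
{
    public Transform laserPointer;      // the pointer object in the VR scene
    public float smoothing = 10f;       // assumed smoothing factor

    // Called whenever fresh orientation data arrives from the controller app.
    public void OnControllerOrientation(Vector3 gyroEulerAngles)
    {
        Quaternion target = Quaternion.Euler(gyroEulerAngles);
        laserPointer.rotation = Quaternion.Slerp(
            laserPointer.rotation, target, smoothing * Time.deltaTime);
    }

    void Update()
    {
        // Cast a ray from the pointer; whatever it hits is the current selection candidate.
        RaycastHit hit;
        if (Physics.Raycast(laserPointer.position, laserPointer.forward, out hit, 100f))
        {
            Debug.Log("Pointing at: " + hit.collider.name);
        }
    }
}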

The idea was to create a basic client-server connection between the VR app and the controller app using Unity3D's UNET API. We managed to create a basic networking setup and initialize the server on the launch of the VR application.

Controller App

The controller app sends user input data to the VR application. To send data, we created a script inheriting NetworkBehaviour and attached it to a cube prefab. In the sample script, we declared SyncVar attributes for the data and Command methods to update the data on the server.

To test the controller app, we connected to the server (the VR application) by entering its IP address. Once the connection was established, we could see the cube rotating with respect to the controller device's data. Using the following code, we were able to send acceleration values; in the same way, we could send single taps, double taps, swipes, and gyroscope values.

RemoteController.cs

using UnityEngine;
using UnityEngine.Networking;

public class RemoteController : NetworkBehaviour
{
    [SyncVar]
    public Vector3 inputAcceleration;
    public Vector3 gyro;
    
    [SyncVar]
    public int touchCount;

    void Start()
    {
        DontDestroyOnLoad(transform.gameObject);
        Input.gyro.enabled = true;
    }

    void Update()
    {
        if (isClient)
        {
            inputAcceleration = Input.acceleration;
            gyro = Input.gyro.attitude.eulerAngles; // gyroscope orientation
            CmdSendAcceleration(inputAcceleration);
            CmdSendTouchCount(Input.touchCount);
        }
    }

    [Command]
    public void CmdSendAcceleration(Vector3 v)
    {
        inputAcceleration = v;
    }

    [Command]
    public void CmdSendTouchCount(int t)
    {
        touchCount = t;
    }
}

Mobile controller VR package

The VR application containing this package initializes the server at launch and does not require any user input. In this sample script, OnServerAddPlayer is called when a client connects to the server. In the Update function, we read the accelerometer, gyroscope, and touch input values from the client. We used the controller data as follows:

  • Touch count - for choosing how the camera object moves along the three axes
  • Accelerometer - for navigating the camera
  • Gyroscope - for controlling the laser pointer

AppInput.cs

using UnityEngine;
using UnityEngine.Networking;

public class AppInput : NetworkManager
{
    public GameObject VRCamera;
    public float zoomSpeed = 1.0f;              // scale factor applied to accelerometer input
    private GameObject clientGameObj;
    private NetworkManager networkManager;
    private RemoteController remoteCtrl;
    private Vector3 accel;
    
    void Start ()
    {

        NetworkServer.Reset();
        networkManager = GetComponent<NetworkManager>();

        //Initialize the server
        networkManager.StartServer();
    }

    void Update()
    {
        if (clientGameObj)
        {
            remoteCtrl = clientGameObj.GetComponent<RemoteController>();
            accel = remoteCtrl.inputAcceleration;

            accel *= zoomSpeed;

            //For navigation
            if (remoteCtrl.touchCount == 1)
            {
                VRCamera.transform.Translate(accel.x, 0, accel.z);
            }
            else if (remoteCtrl.touchCount == 2)
            {
                VRCamera.transform.Translate(0, accel.z, 0);
            }    
            else
            {
                VRCamera.transform.Translate(Vector3.zero);
            }
        }
    }

    public override void OnServerAddPlayer(NetworkConnection conn, short playerControllerId)
    {
        base.OnServerAddPlayer(conn, playerControllerId);
        clientGameObj = conn.playerControllers[playerControllerId].gameObject;
    }

    public override void OnServerRemovePlayer(NetworkConnection conn, PlayerController player)
    {
        base.OnServerRemovePlayer(conn, player);
        if(player.gameObject == clientGameObj)
        {
            clientGameObj = null;
        }
    } 
}

[Figure: controller diagram for the mobile controller setup]

The following are the gestures used in the app for basic navigation:

  • Neutral gesture
  • Zoom in: single touch + tilt towards the user
  • Zoom out: single touch + tilt away from the user
  • Pan right: single touch + tilt to the right
  • Pan left: single touch + tilt to the left
  • Pan top: double touch + tilt away from the user
  • Pan bottom: double touch + tilt towards the user

It is clear that the mobile controller was much more interactive than the Bluetooth controller, as it enabled us to use gestures to perform events. This controller is well suited for car games or anything that involves steering gestures. However, it did not help much for the flight visualization application, as the gestures felt unnatural, and thus unstable and unintuitive, and the sensor data was not very accurate.

Leap Motion controller

Our experiment also involved trying a Leap Motion device, more so as hand detection is very common in VR applications nowadays. A Leap Motion controller has IR cameras and LEDs to detect the finger joints of hands visible within a distance of one meter. This device allows us to create realistic and natural interactions for the application, but it is only supported on Windows/Mac systems. We planned to use Leap Motion for an Android app, so we thought of sending the Leap Motion data to the application via a socket connection, and found an open-source application, Coloreality, that does exactly that. The desktop system acted as the server and we created a mobile app to act as the client.


From the Leap Motion data, we used the left/right hand, forearm, palm, wrist, fingers, and finger joints, along with their position, rotation, direction, and velocity vectors and the grab strength.


We did create a few gestures for the VR application, but some of them did not work as expected because the Leap Motion data we got from the device was inconsistent and unreliable. These were the challenges we faced:

  • Data points for hands were getting swapped often, so we couldn’t focus on left/ right hand specific gestures
  • Angle glitches were frequent, so we couldn’t work on palm rotation gestures

There were some gestures that were very intuitive and accurate, so we were able to use them in the flight visualization application (a detection sketch for one of them follows the list):

  • Left/ right hand closed and move - for translating and rotating the Earth in 3D space
  • Both hands closed - to enable/ disable city selection mode
  • Clapping - to reset the Earth in 3D space
  • Index finger - for pointing and selecting city/ flight.
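
For instance, the "both hands closed" toggle can be detected from the grab strength alone. This is a minimal sketch; the HandData structure and the threshold are hypothetical stand-ins for the data actually received over the socket.

using UnityEngine;

// Hypothetical shape of the hand data received over the socket from Coloreality.
public struct HandData
{
    public bool isLeft;
    public float grabStrength;      // 0 = open hand, 1 = closed fist
    public Vector3 palmPosition;
    public Vector3 palmVelocity;
}

public class GestureDetector : MonoBehaviour
{
    public float closedThreshold = 0.9f;   // assumed grab-strength threshold
    private bool selectionMode;
    private bool wasBothClosed;

    // Called once per received frame of hand data.
    public void OnHands(HandData left, HandData right)
    {
        bool bothClosed = left.grabStrength > closedThreshold &&
                          right.grabStrength > closedThreshold;

        // Toggle city selection mode on the frame both hands first close.
        if (bothClosed && !wasBothClosed)
        {
            selectionMode = !selectionMode;
            Debug.Log("City selection mode: " + selectionMode);
        }
        wasBothClosed = bothClosed;
    }
}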

The virtual reality experience feels more realistic with the Leap Motion controller, as we use our own hands to interact with objects in the 3D virtual world. Initially, though, after integration we found a time lag in the Leap Motion data owing to a weak hotspot signal: with a weak signal, the data flow was slow and we faced discontinuity and flickering in the virtual hand object.

Voice control

With Leap Motion, we could use our index finger to select cities and view the flight route in the flight visualization app. But expecting the user to rotate the view and hunt for cities is tedious. So, to add a novel touch, we tried voice commands.

Speech recognition plays a very important role in ironing out the immersion-breaking flaws of a purely controller-based interface. With this in mind, we created a voice recognition API so that any VR application that needs voice support can integrate it and trigger relevant actions based on the user's voice commands. We used the Google Cloud Speech API for the implementation: we recorded the user's voice for a few (configurable) seconds and sent the audio file to Google Cloud to get its text transcription. We then focused on keywords to carry out actions. For example, to select two cities, we would enable the city selection mode and say “show me flight timings for Chennai to Kolkata trip”. We would then filter the keywords from the sentence (show, flight timings, Chennai, and Kolkata), look up the action defined for each keyword, and execute the actions in sequence.
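
A minimal sketch of the keyword-to-action step is shown below, assuming the transcription has already come back from the speech API. The keyword table, handler bodies, and class name are illustrative, not our production code.

using System;
using System.Collections.Generic;
using UnityEngine;

public class VoiceCommandRouter : MonoBehaviour
{
    // Illustrative keyword table; each keyword maps to an action to run.
    private Dictionary<string, Action<string>> handlers;

    void Awake()
    {
        handlers = new Dictionary<string, Action<string>>
        {
            { "show",           keyword => Debug.Log("Opening results view") },
            { "flight timings", keyword => Debug.Log("Switching to flight timings") },
            { "chennai",        city => Debug.Log("Selecting city: " + city) },
            { "kolkata",        city => Debug.Log("Selecting city: " + city) }
        };
    }

    // Called with the text returned by the speech-to-text service.
    public void OnTranscription(string sentence)
    {
        string lower = sentence.ToLowerInvariant();

        // Run the action for every keyword found in the sentence.
        foreach (var entry in handlers)
        {
            if (lower.Contains(entry.Key))
                entry.Value(entry.Key);
        }
    }
}

// Example: router.OnTranscription("show me flight timings for Chennai to Kolkata trip");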


Learnings

While working on these visualizations, we encountered several problems; the steps taken to eliminate them are as follows:

  • GVR camera restrictions: Since there were restrictions on applying translation and rotation to a GVR camera, we created a custom virtual reality view with two cameras placed side by side. They referenced the eye positions to produce the 3D effect, and we added a forward shift from the center of rotation to produce the parallax effect (see the sketch after this list).
  • Jitter: This slowed down user movement in the VR space. We found that rendering did not complete on time and the frame rate dropped. Much of the per-frame work happens in Update functions, and if a scene has thousands of GameObjects each running its own Update, one can imagine the performance drop. To resolve this, we used event broadcasting, as explained in the flight visualization section above.
  • Nausea: For virtually navigating the user around the globe, we chose to pivot the camera attached to the user's head. This created a disorienting feel and produced motion sickness. We resolved it by creating four equally spaced viewpoints around the globe, one in each quadrant; the user could select any visible viewpoint to get teleported there and view the Earth from that point.
  • Rendering quality: While building the flight visualization application, we found that the globe's texture quality was below par, which was a big UX turn-off. A higher-resolution texture alone was not a solution, since the mobile processor wouldn't be able to handle it. So, to improve the rendering, we implemented the streaming texture technique.
  • Creating a virtual 3D space: Creating a virtual space is not easy, as there are many factors to focus on, from measurements, aspect ratios, and texturing to animation and lighting. We learnt many 3D modeling concepts while creating the office model in Maya.
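
As a rough sketch of that custom two-camera view, the rig below renders the left and right eyes to the two halves of the screen and offsets both from the rotation pivot. The eye separation and forward-shift values are assumed for the example, not our exact numbers.

using UnityEngine;

// Builds a simple split-screen stereo rig: two cameras side by side,
// offset from the head pivot to give depth and a parallax effect.
public class StereoRig : MonoBehaviour
{
    public float eyeSeparation = 0.064f;   // assumed inter-eye distance in metres
    public float forwardShift = 0.1f;      // assumed shift from the centre of rotation

    void Start()
    {
        CreateEye("LeftEye",  new Rect(0f, 0f, 0.5f, 1f), -eyeSeparation / 2f);
        CreateEye("RightEye", new Rect(0.5f, 0f, 0.5f, 1f), eyeSeparation / 2f);
    }

    Camera CreateEye(string eyeName, Rect viewport, float xOffset)
    {
        var eye = new GameObject(eyeName).AddComponent<Camera>();
        eye.transform.SetParent(transform, false);

        // Shift each eye sideways and in front of the pivot this rig rotates around.
        eye.transform.localPosition = new Vector3(xOffset, 0f, forwardShift);

        // Render this eye to its half of the screen.
        eye.rect = viewport;
        return eye;
    }
}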

Conclusion

On the visualization front, we explored various techniques to see virtual reality's impact on data visualization. As we went along in this rapid-fire iteration, we came across a number of roadblocks: getting the parallax effect right in the 3D point cloud, jittering and slow frame rates due to the large amount of data in the graph visualization, and nausea in the flight visualization. There were also successes for good measure: an immersive 3D feel in the flight visualization app, new modeling concepts in Maya for creating the 3D model of our office walkthrough, and lighting concepts in Unity. Each challenge we faced made us bring forth a solution, which in turn increased the impact of VR on data visualization. The evolution of the VR experience across our experiments, in terms of rendering speed and quality, is clearly visible from where we started (the 3D point cloud) to where we reached (the flight visualization).

On the interaction front, we found that joystick and mobile controller interactions had to be taught to users, while with the Leap Motion controller users were able to figure out the gestures by themselves (since the gestures we created closely match natural human interaction). We also found that the gestures possible with the joystick and the mobile were limited, whereas Leap Motion paved the way for many more robust ones, making it clearly the more capable and flexible controller of the three. We also worked on voice command interactions for a better experience. In conclusion, Leap Motion and voice control were found to be the most comfortable for user interaction.

Future Works

Some of our future work on these applications includes:

  • Virtual walkthrough
    • Humans with AI - The aim is to create a more realistic environment where, as we enter the office, colleagues work, move around, and greet us as they pass by.
    • Readable books - We can stock up on virtual books and allow users to read them.
  • Flight visualization
    • Booking experience - As we already have almost all features ready for this app, we will try to add more use cases.
    • Voice command integration.
    • More intuitive gestures through Leap Motion; we have a basic version presently.

Useful Resources

About Ashok Regar
An engineer who is keen to explore whatever he comes across

About Ramakrishnan Mohan
An engineer passionate about blending artistic ideas with technology

About Anitha Manikandan
A fast-track developer who loves solving problems with cutting-edge technologies