How We Developed VR for Mobile
17th November 2016
VR in elearning is still new and most of us are still finding our feet, especially when it comes to developing VR for use on smartphones. We decided to show our work and share what we've learned, in the hope that others can learn from us and that we can contribute to the continued growth of VR in elearning.
Creating our first virtual reality application has been quite a learning experience for us. While VR has been around in various forms for several decades, as a mass market proposition it is still a very new medium and is still in the process of finding its way. VR has often been described as being in the ‘wild west’ – a technological no-man’s land, where almost anything goes and most of the rules have yet to be written.
This article looks at these challenges and some of the lessons we have learned.
Desktop or Mobile?
When we first started thinking about VR app development, it was soon clear that there were two ways we could go – the high-end desktop option, and the more affordable mobile option. The high-end option called for a consumer-level VR head-mounted display, such as the Oculus Rift or HTC Vive, coupled with a high-spec PC to power it. The mobile option required only a smartphone and low-cost headset.
After weighing up the pros and cons of each, we decided that the low-end mobile option was the better choice. While not as visually advanced, we felt that this was offset by the lower cost and accessibility.
Which Game Engine?
The game engine is the software framework that powers the VR application, and forms the foundation of the development process. Our next decision was which 3D game engine would work best for us. There are many choices on the market, each with their own capabilities, cost options and user communities. Among the most popular game engines are Unity, Unreal and CryEngine. After some experimentation, we decided to go with Unity – a popular choice for mobile VR development.
Research and Design
Our first VR app required us to develop a 3D scissor-lift, which the user could interact with in various ways. Early on, we enlisted the help of a training expert from IPAF, with whom we had collaborated on creating Mobile Elevating Work Platform (MEWP) training, to ensure that our virtual scissor-lift was as close as possible to the real thing – from the accuracy of the operator controls to the locations of the safety warning decals. We had to fully understand the functions and operation of the unit, and then translate that to a 3D virtual environment in a way that would be clear and easy to follow – not an easy task! To help us, we shot a lot of reference video of equipment in operation and procedure checklists being carried out. We also put together a storyboard showing all the processes we needed to cover.
Realism and Immersion
The latest generation of graphics cards coupled with modern 3D engines are capable of rendering a remarkable level of realism, which is now beginning to make its way into desktop VR. However, mobile VR cannot yet match what desktop VR can deliver.
We discovered that we cannot yet incorporate the features necessary for high realism, such as ray traced reflections, real-time shadows and advanced material shaders. In terms of processing load, they are simply too expensive for current mobile hardware, and maintaining an acceptable frame-rate requires that some compromises be made. However, as the capabilities of mobile devices continue to advance, we can certainly expect things to improve significantly.
Frame Rate and Performance
In mobile VR, there are many technical constraints we have to stay within. Compared to desktop VR, we have much less processing power to work with, and a poorly designed VR experience can result in dizziness and/or motion sickness for the user. One of the things we can do to prevent this is to maintain a constant frame-rate of at least 60 frames per second. That means 60 individual images must be calculated and displayed for every second of use.
The main factor affecting frame-rate is the visual complexity of the scene. All our 3D models are composed of 3D polygons. In mobile VR, we are restricted to a total scene size of around 50,000 - 100,000 polygons. On top of that, we also have to render the scene twice (once for each eye). Clearly, we need to ensure we are streamlining things as much as possible.
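To put those numbers in perspective, the frame-rate target and polygon budget can be sketched as a quick back-of-the-envelope calculation. This is purely illustrative arithmetic using the rough figures quoted above, not measured values from our app:

```python
# Rough frame-budget arithmetic for mobile VR (illustrative figures only).

TARGET_FPS = 60
frame_budget_ms = 1000 / TARGET_FPS  # time available to render one frame
print(f"Per-frame budget: {frame_budget_ms:.1f} ms")  # ~16.7 ms

# Taking the upper end of the 50,000-100,000 polygon scene budget,
# rendered once per eye, 60 times a second:
scene_polygons = 100_000
eyes = 2
polygons_per_second = scene_polygons * eyes * TARGET_FPS
print(f"Polygons rasterised per second: {polygons_per_second:,}")  # 12,000,000
```

Every millisecond a frame overruns that ~16.7 ms budget shows up as judder, which is why the optimizations below matter so much.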
Largely thanks to the gaming industry, the principles of developing real-time 3D applications are already well-established. Here are some of the steps we take when designing our VR environments:
- All our 3D models are designed to be highly efficient. We try to use the fewest number of polygons necessary to define an object’s form
- Where high detail is required, we place the detail in the textures instead
- Rather than using dozens of individual textures, we group textures for multiple objects into a single high-resolution image. This technique is known as texture atlasing, and is far more efficient
- Where possible, we re-use models within the 3D environment. For example, rather than modelling a large, sprawling wall, we model only a single wall section and duplicate it as necessary
When combined, these optimizations result in better performance and a correspondingly higher frame-rate.
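As an illustration of the texture-atlasing idea, mapping a model's own UV coordinates into its tile of a shared atlas is just a scale and an offset. The 2×2 atlas layout below is a made-up example, not the layout we actually used:

```python
def atlas_uv(u, v, tile_col, tile_row, tiles_per_side=2):
    """Map a local UV coordinate (0-1 range) into one tile of a square atlas.

    tile_col / tile_row select which cell of the atlas grid this model's
    texture was packed into.
    """
    scale = 1.0 / tiles_per_side
    return (tile_col * scale + u * scale, tile_row * scale + v * scale)

# The centre of a model's own texture maps to the centre of tile (1, 0):
print(atlas_uv(0.5, 0.5, tile_col=1, tile_row=0))  # (0.75, 0.25)
```

Because all the atlased objects now share one texture (and typically one material), the renderer can batch them together instead of paying a separate draw-call cost per object.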
User Interface and Control
Desktop VR makes use of external input devices, such as gaming joypads and handheld motion controllers. Although Bluetooth-based controllers are available for mobile VR, we decided to keep things simple. Our mobile VR applications make use of ‘gaze control’, allowing the entire application to be controlled simply by looking at specific objects and markers within the virtual environment. The advantages of this approach are:
- No external controller is required
- Control and navigation can be mastered within minutes - ideal for learners who are new to VR
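At its core, gaze control is just a test of whether an object sits close enough to the centre of the user's view. In Unity this would normally be done with a raycast from the camera; the plain-Python sketch below shows only the underlying vector maths, with a hypothetical 5-degree selection cone:

```python
import math

def is_gazed_at(camera_pos, camera_forward, target_pos, max_angle_deg=5.0):
    """Return True if target_pos lies within max_angle_deg of the gaze direction.

    camera_forward is assumed to be a unit vector.
    """
    to_target = [t - c for t, c in zip(target_pos, camera_pos)]
    dist = math.sqrt(sum(x * x for x in to_target))
    if dist == 0:
        return False  # target coincides with the camera
    # Cosine of the angle between the forward vector and the target direction
    cos_angle = sum(f * x for f, x in zip(camera_forward, to_target)) / dist
    return cos_angle >= math.cos(math.radians(max_angle_deg))

# Looking straight down -Z at an object two metres ahead:
print(is_gazed_at((0, 0, 0), (0, 0, -1), (0, 0, -2)))  # True
# The same object offset a metre to the side falls outside the cone:
print(is_gazed_at((0, 0, 0), (0, 0, -1), (1, 0, -2)))  # False (~27 degrees off axis)
```

In a real app, a selection is usually confirmed only after the gaze dwells on the object for a second or so, which prevents accidental triggers as the user looks around.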
Developing VR for mobile is an area of enormous potential. If you want to find out how VR for mobile could work for your business, then get in touch!