The display of virtual reality covers a full 360 degrees. Unlike screen-based UI, VR-UI has no border, which brings both opportunities and challenges to UI design.

Introduction

Recently, the emergence of head-mounted displays has broken the rules of graphic design and screen-based UI design. When we design VR-UI for a head-mounted display, we may run into the following problems.

Problem 1: The Border of the UI Is Absent.

For desktop and mobile apps and websites, the border of the UI is the screen of the device. In VR, however, the concept of a screen has disappeared, which brings new opportunities and challenges to designers. The UI layout lacks a reference for positioning its elements, and the size, position (along the x, y, and z axes), and even the shape must all be defined by the designer.

When we design a VR menu in Unity, everything may look fine in edit mode. But once we put on the head-mounted display, we run into various problems, such as oversized elements, wide head turns, and unfocused content. Designers have to test in the HMD many times to ensure design quality.

Problem 2: No Design Software for VR-UI.

Whether for app UI or for web UI, the design draft is almost identical to what the product looks like after development. For VR, however, the design draft tends to differ from the finished product. When we design VR-UI, two approaches are commonly used in the industry. One is to design the scenes and elements in graphic design software such as Sketch or Photoshop and then hand them to developers to implement. The other is to design directly in a game engine such as Unity. The problem with the former is that designers have to communicate with developers and revise their designs again and again. The problem with the latter is that the learning curve for designers is steep, much like asking them to build a website directly with HTML and CSS.

To address the two problems above, I have built a layout framework for VR based on several reference documents. The framework standardizes the visual focus areas within which designers arrange elements, and it also improves development efficiency. In the next section, I will introduce two terms that are fundamental to VR-UI.

FOV and PPD

FOV and PPD are fundamental concepts, just as PPI is for mobile devices. The former is an important ergonomic constraint determined by the physiology of the eyes, while the latter is constrained by the display performance of the head-mounted display. The two should be considered together.


FOV (Field of View)

FOV is the range we can see with our eyes. As shown, angle a is the horizontal FOV and angle b is the vertical one. For head-mounted displays, FOV usually refers to the horizontal FOV, because the vertical FOV of the human eye is much smaller than that of mainstream HMDs. The human horizontal FOV is about 120°, while that of mainstream HMDs is between 90° and 110°. This is why we see black edges on both sides of our view in an HMD.

PPD (Pixels per Degree)

PPD takes into account both the screen resolution and the FOV of the device. To calculate PPD, we divide the screen resolution by the FOV. Only when PPD exceeds 60 will we see little graininess in a head-mounted display, yet the PPD of current HMDs is below 13, which leaves many consumers far from satisfied.

Take the Oculus DK2 as an example: its monocular resolution is 960px × 1080px and its FOV is 100°, so its PPD is 960 / 100 = 9.6. Conversely, PPD can be used to calculate how many pixels we can see in the vertical direction without moving our eyes: 9.6 (PPD) × 55° (the vertical FOV of static eyes) = 528px, which is much smaller than the vertical resolution of the screen, 1080px. In other words, both the top and bottom edges of the screen fall outside the first glance.
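As a minimal sketch of this arithmetic (the helper names are mine, purely illustrative), the DK2 figures above can be reproduced as follows:

```python
def pixels_per_degree(resolution_px: float, fov_deg: float) -> float:
    """PPD: screen resolution divided by the field of view."""
    return resolution_px / fov_deg

def visible_pixels(ppd: float, eye_fov_deg: float) -> float:
    """Pixels that fall within a given eye FOV at the device's PPD."""
    return ppd * eye_fov_deg

# Oculus DK2: 960 x 1080 px per eye, 100 degree horizontal FOV
ppd = pixels_per_degree(960, 100)            # 9.6
vertical_visible = visible_pixels(ppd, 55)   # 528 px of the 1080 px column
print(ppd, vertical_visible)
```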


As shown in the graph below, the horizontal FOV of static eyes is 120°, while that of rolling eyes is about 200°. The best horizontal FOV is 60°. The vertical FOV of static eyes is 55° (25° upward and 30° downward), while that of rolling eyes is 120° (50° upward and 70° downward). The best vertical FOV is 30°.

Framework

As mentioned above, we now have the figures for both the best FOV and the maximal FOV. To use them in design, however, we need to convert them into pixels. Taking the Oculus DK2 as an example, with a PPD of 9.6, the resolution corresponding to the best horizontal FOV is 576px (60 × 9.6 = 576). The following diagram shows the other FOV-to-resolution conversions, and a canvas built from this data is given as well.
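A short sketch of these conversions, using the DK2's PPD of 9.6 and the FOV figures quoted in the previous section (the labels are mine, purely illustrative):

```python
PPD_DK2 = 9.6  # Oculus DK2: 960 px / 100 degrees

# FOV figures from the previous section, in degrees
FOV_DEG = {
    "best horizontal": 60,
    "static horizontal": 120,
    "rolling horizontal": 200,
    "best vertical": 30,
    "static vertical": 55,
    "rolling vertical": 120,
}

# Convert each angle into the pixel span it covers at the DK2's PPD;
# spans larger than the 960 x 1080 screen extend beyond the display.
for name, deg in FOV_DEG.items():
    print(f"{name}: {deg} deg -> {deg * PPD_DK2:.0f} px")
# e.g. "best horizontal: 60 deg -> 576 px"
```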


Distance-Related Reading Performance


The canvas below is the framework for UI layout in VR; it is based on a UI fixed to the head reference frame.

The red wireframe represents what people can see in the device when rolling their eyes.

The black wireframe stands for the monocular screen size of Oculus DK2.

The grey area is the theoretical field of view. In practice, people can only see within the red wireframe; outside it they see nothing but darkness. Important or frequently viewed information should not be placed in this area, as it is difficult to read and easily causes eyestrain.

The light blue area is what people can see without rolling their eyes. It is appropriate to place UI elements in this area: on the one hand, elements here are unlikely to cover the scene or disturb the main task; on the other hand, content here is comfortable to read.

The dark blue area is the most comfortable field of view. When the user's main task is to interact with the UI, important or frequently viewed elements can be placed here. But when the main task is not related to the UI, placing UI elements in this area may interfere with the user's actions.

Oculus DK2 Canvas


The following canvas is a simplified version that is more convenient for Unity development. With this framework, designers can position elements, modify the structure, and arrange the sequence more effectively, and developers can place the elements with less effort.

Simplified Canvas
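To illustrate how such a canvas can guide placement, here is a small, hypothetical sketch: the zone sizes are the DK2 numbers derived above, while the function, element dimensions, and offsets are my own, not part of any tool. It checks whether a UI element stays inside the most comfortable zone.

```python
# Zone sizes in DK2 pixels, centered on the line of sight (from the canvas above)
COMFORT_ZONE = (576, 288)   # best FOV: 60 x 30 degrees at 9.6 PPD
STATIC_ZONE = (1152, 528)   # static-eye FOV: 120 x 55 degrees at 9.6 PPD

def fits_in_zone(element_w: int, element_h: int,
                 offset_x: int, offset_y: int,
                 zone: tuple[int, int]) -> bool:
    """True if an element, centered at (offset_x, offset_y) relative to the
    line of sight, lies entirely inside the given centered zone."""
    zone_w, zone_h = zone
    return (abs(offset_x) + element_w / 2 <= zone_w / 2 and
            abs(offset_y) + element_h / 2 <= zone_h / 2)

# A hypothetical 400 x 120 px menu placed 60 px above the line of sight
print(fits_in_zone(400, 120, 0, 60, COMFORT_ZONE))   # True
print(fits_in_zone(400, 120, 0, 160, COMFORT_ZONE))  # False: pushed too high
```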

Implementation

The last section introduced a framework for UI layout in VR, but how should it be used, and how can it be integrated into Unity development? This section addresses these questions by comparing the VR-UI design process before and after adopting the framework.


before (design draft)

Can only imagine the virtual environment and estimate the approximate position of the UI

after (design draft)

Arrange the UI elements according to the framework, providing a reference for subsequent development


before (development)

Estimate the relative position of the UI elements according to the design draft

after (development)

- Place the framework in the virtual environment and adjust its position

- Adjust the position of the UI elements according to the framework and the design draft


before (test)

- The upper and lower parts of the UI cannot be seen at first glance, even though they are on the first screen. Some elements even fall outside the screen.

- To interact with the UI, the user has to turn their head through a wide range frequently, which causes fatigue and can induce motion sickness.

- The result does not meet expectations.

after (test)

- The UI elements are clear and can be seen at first glance.

- The user does not have to turn their head.

- The result is mostly consistent with expectations.


With the framework, the UI elements rarely need to be adjusted after testing. Without it, developers have to reposition the UI elements again and again, and if the UI is complicated, the process may even go back to the design draft.

Conclusion

In the virtual environment, there is no border for the scene or other objects. For VR-UI, however, a border should be drawn because of the limits of the human eye and of current head-mounted displays: the UI should be placed in the intersection of the human FOV and the HMD FOV. Only then can we see the UI at first glance.

Dedicated VR design software will remain absent in the short term. To improve the efficiency of design and development, related design guidelines should be established. Beyond the framework presented in this article, the specifications of individual UI elements still need to be worked out in future work.