CWI Box groep5

From Fontys VR-Wiki

Introduction

The purpose of the course Interactive Virtual Environments (2IV55) is to introduce the main concepts for designing and evaluating Virtual Environments (VEs). The focus will be on the techniques needed to implement a VE. The goal of the assignment is to acquire basic knowledge about and get some hands-on experience in the field of VEs with respect to a given problem statement. The following sections describe the problem statement, planning for the assignment and concepts based on a collection of research papers. This is followed by a description of the evaluation method and analysis of this evaluation method. Lastly, we describe the observations made during the evaluation, the implementation and conclude the assignment with an evaluation of these observations.


Problem Statement

The Centrum Wiskunde & Informatica (CWI) has an experimental Virtual Reality (VR) application, called the 'box', which shows a number of paper airplanes flying inside a wireframe cube. The 'box' is written in C using OpenGL and is used to demonstrate various aspects of immersive VR. The assignment comprises the following tasks:

  • Port the application so that it can be viewed in the Fontys CAVE for egocentric and exocentric viewing.
  • Enumerate and develop various additional effects that are designed to increase the immersive experience.
  • Experimentally quantify the benefits of these effects.


Presence and immersion

In the problem statement, the term immersive experience is used. We explore its meaning here. In [1], a distinction is made between presence and immersion as follows:



Immersion includes the extent to which the computer displays are extensive, surrounding, inclusive, vivid and matching. (...) Immersion is an objective description of what any particular system does provide. Presence is a state of consciousness, the (psychological) sense of being in the virtual environment.



In other words, immersion can be measured objectively, while presence is mainly a subjective experience. Changes to the five aforementioned properties (i.e. changes to the immersive experience) may or may not have a direct influence on presence. These changes are to be quantified and the influence on presence is to be measured using questionnaires to elicit the subjective experience of the user. From now on, we will use the term presence to denote the immersive experience (item 2 of the problem statement).

In the paper, the five properties are explained:



The displays are more extensive the more sensory systems that they accommodate. They are surrounding to the extent that information can arrive at the person's sense organs from any (virtual) direction. They are inclusive to the extent that all external sensory data (from physical reality) is shut out. Their vividness is a function of the variety and richness of the sensory information they can generate. In the context of visual displays, for example, colour displays are more vivid than monochrome, and displays depicting shadows are more vivid than those that do not. Vividness is concerned with the richness, information content, resolution and quality of the displays. Finally, immersion requires that there is a match between the participant's proprioceptive feedback about body movements, and the information generated on the displays. A turn of the head should result in a corresponding change to the visual display, and, for example, to the auditory displays so that sound direction is invariant to the orientation of the head. Matching requires body tracking, at least head tracking, but generally the greater the degree of body mapping, the greater the extent to which the movements of the body can be accurately reproduced.



In the description of VR concepts, the immersive aspects of the implemented feature will be explained. For clarity, each of the five aspects will be denoted in teletype font, just as in the above text.


Planning

In order to complete the specified tasks, the assignment has been divided into several phases.

Initially, all group members will download and experiment with Juggler, a necessary component for this assignment. In this phase of the project, no explicit tasks are assigned to the group members; these will be established later, when there is a better understanding of the application's architecture and a division into components can be made.

Concerning the porting of the application, whether egocentric or exocentric viewing is implemented first, or whether they are implemented simultaneously, will be determined by first examining examples from the Fontys CAVE. These examples were requested at the first introduction to the CAVE and were provided shortly thereafter.

Next, the 'box' application will be ported. In order to facilitate collective code ownership and further development of the completed application at a later time, a coding standard has to be used by the group members. Note that the coding will be the primary form of documentation of the implementational details, such as specific values of constants or variables and used datatypes, whereas the report will provide insight in the global implementational aspects, such as rationales for choosing a specific technique.

Following the implementation of the existing functionality, additional functionality is to be added. Examples of such functionality are movement (translation and rotation of the views, controlled by some input device such as the buttons of a Wiimote, the remote control of the Wii game console) and sound. In later sections, we will clarify which additional functionality would be most desired and most feasible. Such decisions can best be made after the application has been ported to the CAVE, so that they can be based on the experience acquired during the first phases of the project.

Lastly, experiments will be conducted to determine the difference in presence between the application with and without additional functionality. It would be most desirable to have test subjects that have not yet grown accustomed to the application, since the group members that implement the application might be biased towards one approach or the other. At the end of the project, depending on the amount of time there is to perform tests in the CAVE, we wish to invite two or three students that are not involved in the project or course to join us at the CAVE to compare the different approaches and functionality.

Description of VR Concepts and Related Work

In this section, the various virtual reality concepts used in this assignment are described. Only an overview of each concept is given here; the relation of each concept to the assignment is described later. Since the assignment concerns implementing several concepts to enhance the user's sense of presence, many concepts could potentially be used. The available time is limited, however, so even potentially interesting concepts may be left out of the implementation and/or the evaluation. The concepts are first listed, after which a more detailed description is given for each of them. The virtual reality concepts are:

  • Head-tracking
  • Egocentric viewing
  • Exocentric viewing
  • Body Movement
  • Sound
  • Shadows
  • Update rates
  • Input latency


Head-tracking

The first virtual reality concept discussed is head-tracking, which is essential for creating an immersive virtual environment, as it establishes a match between the user's movement and the generated scene. Without head-tracking, it would not be possible to determine the correct view when a person looks into the environment. The concept itself is simple: the position and orientation of the subject's head are tracked. Since the position of the eyes depends on the position and orientation of the head, the view of the environment can be updated whenever the head moves. Head-tracking can be done in several ways:

  • Magnetic
  • Optical
  • Acoustical
  • Mechanical

Several issues/properties are important when implementing effective head-tracking:

  • The distance between the eyes of the subject and the head-tracking device.
  • The resolution of the device, which determines the amount of change that is detected.
  • The range of the positions tracked by the system.
  • The frequency at which the device checks for new data.
  • The end-to-end latency between the actual movement and the visual change of the environment.
  • The noise that interferes with the tracking device.
  • The size and weight of the device.
  • The ability to detect multiple objects.

In [2] it is concluded that the reported presence of a participant in an immersive VE is likely to be positively associated with the amount of whole body movement (such as crouching down and standing up), and head movements (looking around and looking up and down) appropriate to the context offered by the VE.


Egocentric/Exocentric viewing

Another concept that is important when talking about virtual environments is the viewing technique used. All views can be separated into two categories: egocentric and exocentric. A view is said to be egocentric when the subject stands within the environment and looks outward; i.e. the environment is perceived from the viewpoint of the subject's own position and orientation. An example of egocentric viewing is a CAVE, where the subject stands between several walls onto which the environment is projected.

Ive groep5 wiki egocentric.jpg

A view is said to be exocentric when the subject looks into the environment from the outside; i.e. the environment is perceived from a viewpoint that differs from the position and orientation of the subject. An example of exocentric viewing is a responsive workbench, where the subject looks at the workbench but is not fully present in the environment.

Ive groep5 wiki exocentric.jpg

An observer's personal space is considered to be the space worked within by the (stationary) observer; it extends up to 1.5 m [3]. Action space is the space between 1.5 and 30 meters from the observer, and vista space lies further than 30 meters from the observer. Personal space seems nearly Euclidean in nature (i.e. the user is capable of estimating not only distances between objects, but also between objects and the space's origin), whereas the other two seem largely affine (i.e. distances between objects can be estimated, but distance to the origin cannot). When the observer is allowed to move and act within the environment, however, action space also appears to approach a Euclidean nature.

The study in [4] found that reported presence was higher for egocentric than for exocentric immersion.

Spatialized sound

The next concept that can potentially improve the subject's sense of presence is sound, for it makes the scene more extensive by adding more sensory input. Virtual environments depend heavily on visual perception, and many concepts are centered around this sense. However, other senses can also contribute to the feeling a subject has when navigating through a virtual environment. One of those senses is hearing; to use it, the environment can be extended with sound. Sound can be added in several ways. A first option is to add a continuous noise that does not come from a specific direction. A stronger sense of presence could, however, be achieved when sounds come from a certain direction, which changes when the corresponding object changes its location in the environment. Sound may therefore also add to the degree in which the environment is surrounding, since sounds may come from more directions than just the one in which the user is looking.

In [5], a significant difference in reported levels of presence in the virtual environment was discovered when spatialized sound was added to a stereoscopic display. However, the addition of spatialized sound did not significantly increase the overall realism of the virtual environment. These results are surprising, as one would expect that as new sensory channels of information are added to a display medium, one would associate a greater sense of overall realism with that medium. One explanation might be that the term 'realism' carries some additional semantic load, implying the 'visual realism' of the environment in the user's mind. As a result, if users do not perceive a change in the visual scene, they may not perceive a change in the overall realism of the environment, even though they report higher levels of presence and interactive fidelity for that environment. A second contributing factor might be that subjects associate degrees of realism predominantly with changes in the visual information channel over other sensory channels.


Shadows

All concepts discussed so far leave the graphical representation of the environment unchanged: all objects inside the virtual environment remain the same, and only the viewpoint that determines how a subject looks at those objects changes. The virtual reality concept described in this section does change the graphical representation, since the actual colors of the scenery depend on the shadows that other objects cast. Adding shadows, however, is not that simple, since there are a few options to consider:

  • Where does the light in the environment come from? If the light comes from right above the objects, all shadows are small. The shadows increase in size as the light source changes its location.
  • How detailed do the shadows look? For example, consider the shadow of a person. One option is to visualize it by drawing an ellipse, while another option would be to actually compute the shadow for all body parts and take the sum of those results.
  • Which technique is used? Some techniques are faster than others, but the results will be less realistic. For example, shadows with soft edges (like most real shadows) are computationally expensive to render, whereas shadows with hard edges can be rendered faster but are less realistic.

When implemented correctly, shadows can potentially increase the presence of subjects in virtual environments. In [1], the proposition is considered that shadows, by increasing the degree of vividness of the visual displays, will enhance the sense of presence. The amount of increase is likely to be determined by the technique used to produce the shadows and the amount of detail in each shadow. Simply implementing the most realistic technique is not always an option, since this can also lead to reduced update rates and added latency when rendering takes too much time.

An important effect of adding shadows to an environment is that the subject will be able to better determine the location of objects in the environment. A good example of the influence of shadows is shown in the figure below. The four globes in the environment have the same location in both pictures. However, the pictures are perceived in a different way, caused by the location of the shadows corresponding to the globes.

Ive groep5 wiki shadow.jpg

Input latency

The last concept discussed in this document is input latency, also called end-to-end latency. Latency is the time between an action taking place and the corresponding update of the environment. Latency exists because signals have to travel from the input device to the output device. For example, the head tracker that follows the movements of the subject is the input device. From there, signals are sent to the system, which processes them and updates the environment accordingly. The time needed to travel from the input device to the system is one part of the final end-to-end latency. Finally, the system sends signals to actually update the environment visible to the user, e.g. by sending a signal to the screens of the CAVE. The time taken to travel to the output device is also part of the total latency. If, for some reason, more signals are sent between receiving the input and sending the output, these will increase the latency as well.

Although much can be done to reduce the latency, for example by using a faster transmission medium or faster hardware, the latency can never be reduced to zero. The main reason for this is that signals can never travel faster than the speed of light. Hence, there will always be some end-to-end latency, although it can be reduced in several ways.

Evaluation Method

Below, the evaluation method used is described. Since it is difficult to objectively quantify abstract notions like 'presence' and 'realism', the evaluation method has to consist of empirical tests. A lot of research has already been done regarding such empirical tests (see [6]). In the evaluation, a volunteer tests the created application. In order to quantify the effect on presence of the features that contribute to immersion, we need to establish how different settings will be compared. Since the goal is to experimentally quantify the benefits of the added effects, it is required to evaluate the effects as independently as possible. First, we identify a so-called 'base case', which we shall refer to in the evaluation method. This base case is the 'box' as it was when ported to the CAVE, without additional effects. Below, an overview is presented of the properties of this base case:

  • Head-tracking: Off
  • Viewing: Egocentric; box slightly smaller than the CAVE, not rotated or translated.
  • Scaling: Not allowed.
  • Rotating: Not allowed.
  • Translating: Not allowed.
  • Spatialized sound: Off
  • Shadows: Off
  • Wireframe: White, visible
  • Faces: Light-blue
  • Number of airplanes: 4
  • Speed: Normal

The most important features are head-tracking, ego-/exocentric viewing, spatialized sound and shadows. The tests will focus on identifying the benefits of these effects. Spatialized sound and exocentric viewing use the observer's position; therefore, head-tracking will be enabled in all following cases. Lastly, the user will have free choice in manipulating the box to compare the minor features that were added. During all tests, the user will be free to move around inside the CAVE. The cases that we will evaluate are the following:

  1. Base case.
  2. Base case with head-tracking enabled. Goal is to identify the benefits of head-tracking. User will compare with case 1.
  3. Base case with head-tracking enabled and exocentric viewing. Scaling, rotating and translating allowed. Goal is to identify the benefits of exocentric viewing. User will compare with case 2.
  4. Base case with head-tracking enabled and sounds enabled. Goal is to identify the benefits of spatialized sound. User will compare with case 2.
  5. Base case with head-tracking enabled and shadows enabled. Goal is to identify the benefits of shadows. User will compare with case 2.
  6. Base case with head-tracking enabled. Goal is to identify the combined benefits of both major (head-tracking, exocentric viewing, sounds and shadows) and minor (color of the box, number of airplanes and speed of flight) effects. User has free choice of manipulating the box and will be asked to find the configuration in which his sense of presence is highest.

In order to be able to measure the results of the evaluation, a questionnaire will be used which asks the test persons questions about how absorbed they feel in the virtual environment and how realistic the environment looks to them. The last thing needed to evaluate the application regarding presence and realism, is an adequate number of persons to test the application and fill in the questionnaire. Since the evaluation for this project is just for educational purposes, only a limited number of test persons are used.

In each setting, the observer will be asked to compare the simulations as described above and will answer the questions given in appendix Questionnaires. In each setting, the user is free to walk around.

Results

We have chosen to first evaluate the box ourselves and then, if time allowed, ask other people to partake in the experiment. Unfortunately, due to organizational and practical reasons, there was not enough time to perform an elaborate user test. Instead, three of our group members evaluated the application together. Each answered the questionnaires (see appendix Questionnaires) individually, and although the application was discussed, scores were not compared during the test. Although three persons is a small test group, many of our findings are the same or similar. Where they are not, we will discuss the difference and its likely cause. In the appendix Results, the specific scores can be found.


Q2: Head-tracking

All participants indicated a heightened sense of presence when head-tracking is enabled. Also, the sense of realism, sense of objects moving and the estimation of distances between the user and objects and between objects improves or stays the same. We deem it likely that in a larger test set the mean value of the aforementioned aspects also increases with respect to the base case.

There is, however, room for improvement. The CAVE did not respond to the user input in time, causing a delay between user movement and the change in view. This delay was not immediately apparent to the user when he moved in a calm and controlled manner. However, when the user moved suddenly and from a standing position, the delay was clearly visible. This seems to be a problem with the head-tracking of the CAVE. Also, the position of the box did not change accordingly when the user was moving. For example, when the user moved forward towards the box, the box moved away a little: when the user covers some distance in reality, the distance he covers in the box is about one third of it. The user clearly notices this discrepancy (both in egocentric and in exocentric viewing). When these two problems are solved, we expect the user's sense of presence to increase, along with their sense of realism, and all other aspects of the user experience may benefit as well. These benefits will most likely propagate to all other settings using head-tracking (i.e. all values should increase equally for Q3-6 as well).


Q3: Exocentric

All participants indicated a heightened sense of presence when exocentric viewing is used. This increase is, for each person, as large as the increase associated with enabling head-tracking (remember, Q3-6 are compared to Q2, since all use head-tracking), which suggests that adding exocentric viewing to the range of views the user may choose is just as influential as head-tracking. Also, the sense of realism, the sense of objects moving and the estimation of distances between the user and objects and between objects improve or stay the same. Furthermore, the scores for each of these questions lie closer together than the scores the three participants gave in Q2 (i.e. the standard deviation decreases), which suggests that in a larger test set the mean values of these three aspects would also increase.

Furthermore, two specific circumstances showed the benefits of exocentric viewing. When standing in an egocentric viewpoint, airplanes would sometimes fly through the user. This was experienced as unrealistic and would startle the user only the first few times, after which (s)he did not respond to it again. When standing in an exocentric viewpoint very near the box, airplanes would sometimes fly towards the user, changing direction just in time so as not to hit the walls of the box. This resulted in a higher sense of realism and users would still respond to the situation (e.g. ducking or stepping aside) instead of ignoring the situation because it was unrealistic.

Another test we performed was to gradually move the box further away from the observer and note what happened to the realism of the scene. The resulting experience is that the box gradually turns from a three-dimensional shape (a cube containing three-dimensional objects) into a two-dimensional one (a flat screen showing two-dimensional objects). Note, however, that the box becomes smaller and smaller as it moves further away; this might also strongly contribute to the experience that it becomes 'flat'.

These results differ from [4]. We believe this might be due to the surrounding nature of the CAVE: Even though the user is looking at the box from an exocentric viewpoint, he is still standing in an egocentric application. This might imply that some of the benefits of egocentric viewing still apply, even if an exocentric viewpoint is chosen. For example, the vividness of the display may be affected positively due to an exocentric viewpoint because the planes are a little less distinctive and thus their artificial appearance might be less of a bother. Also, there may be an unintentional positive effect to the match between the feedback about movement and the information generated on the display; In egocentric viewing, planes frequently fly through the user, yet there is no proprioceptive feedback, resulting in a poor match. Because such conflicts do not occur in exocentric viewing, this may be the cause of the positive results we see here. Also, as will be discussed in section Implementation, there is a delay in the visual display. Exocentric viewing may diminish or even cancel out the effects of this delay.

Q4: Spatialized sound

For spatialized sound, the change in the observers' sense of presence is inconsistent yet on average the same: one participant indicates that the sense of presence increases slightly (by one point in a hundred), another that it stays the same, and yet another that it decreases slightly (by two points in a hundred). The sense of realism, the sense of objects moving and the estimation of distances between the user and objects and between objects stay exactly the same (except for a one-point decrease in one answer by one test person). These results give no clear indication of improvement or degradation. One person indicated having difficulty identifying sounds; the other two had no trouble at all. However, localizing sounds turned out to be very difficult.

Although the observers' answers did not provide significant results, there were occasions when a user would suddenly flinch or duck when a sound came 'nearby', especially when this coincided with an airplane being close to the user.

These results differ from the results in [5], where a significant difference is discovered. As will be explained in section Implementation, this might be due to implementational problems.

Q5: Shadows

All participants indicated a heightened sense of presence when shadows were shown. The increase is, for one person, equally large as the increase that is associated with the enabling of head-tracking, for the other two the increase is smaller. For all other aspects there is a clear increase, which is however smaller than the increase for exocentric viewing. All users note that it is easier to judge where airplanes are in the box, because shadows compensate for the fact that there is no ceiling projection in the CAVE. All three users indicate that the added value of shadows becomes more clear when at least ten airplanes are added to the simulation.

Although we did not investigate whether the three observers were visually or auditorily dominant, these results seem to correspond to the results in [1].


Q6: Free choice

In the final test, the observers were given full freedom over the application and were asked to choose the settings for the box such that they experienced the largest amount of presence and realism. Interestingly, all three ended up with the same colors for the framework and faces, shadows enabled, sound disabled and exocentric viewing. All users preferred the white box with black framework. All users chose to make the framework of the box visible, because they felt it provided important depth information. All users found the box with transparent faces weakest in terms of realism, because a mirror image of airplanes and the framework can be seen in the corners of the CAVE due to the strong reflection that occurs (e.g. airplanes that fly in a corner can be seen reflected on the other screen of the CAVE), and also because it is not possible to project shadows. One user indicated wanting to add more airplanes (about ten to fifteen in total). Although all users tried it, none of them chose a setting with an adapted speed of the airplanes in the end, because it negatively affected the realism: if the planes moved slower, they appeared to be suspended in the air rather than flying; when they moved faster, their movement did not correspond to the expected movement of paper planes. All users indicated that they experienced an additional sense of presence and realism when one of the vertical edges of the box was in the center of the CAVE, such that the user could walk around it. The users also noted that this edge was not displayed perfectly (it clearly bent where it touched a CAVE wall).

Implementation

Methods

Below, we will briefly introduce the concepts that have been implemented.

  • Head-tracking One of the first additional effects is head-tracking. The user's view of the box should be dependent on his or her viewing direction.
  • Egocentric viewing The box is viewed from an egocentric point of view by default. The Wiimote is used in several CAVE applications to navigate and zoom. Zooming in until the box walls coincide with the CAVE walls would be a form of egocentric viewing. There is a maximum to the amount of box scaling that a user can do, for the following reasons:
    • Zooming in until certain features of the box can no longer be seen (due to the user's distance to the objects, (s)he might only see an all-white or all-yellow surrounding) is not useful to the viewer, as it will result in the user having the feeling of there not being any objects to look at / interact with.
    • An unlimited amount of zooming in might cause problems in use, pertaining to practical issues such as the user not being able to 'navigate' properly due to the application's vehement response to his/her head movements, resulting in the user seeing nothing but a white-and-yellow flashing environment.
    • Zooming in until the user is a tiny speck in the box is therefore a waste of resources.
  • Exocentric viewing The Wiimote is used in several CAVE applications to navigate and zoom. Zooming out until the box is the size of a suitcase or shoe box would be a form of exocentric viewing. There is a minimum to the amount of box scaling that a user can do, for the following reasons:
    • Zooming out until certain features of the box can no longer be distinguished (planes, user position symbol) is not useful to the viewer, as it will result in the user having the feeling of there not being any objects to look at / interact with.
    • An unlimited amount of zooming out might cause problems in accuracy, pertaining to practical issues such as floating point precision and artefacts or faults that can be the result of such issues.
    • Zooming out until the box is no longer visible to the human eye is therefore a waste of resources.
  • Spatialized sound Another additional effect is the addition of sound to the application. The internal speaker of the Wiimote has not been used.
  • Shadows Shadows have been added to the planes.
  • Rotation/Translation The user can move the box by translation or rotation using either the keyboard or the Wiimote buttons.
  • Faces and wireframe The color of the wireframe and faces of the box can be changed and the wireframe can be disabled or enabled. The following color schemes are available:
    • Light-blue faces and white wireframe.
    • Light-blue faces, wireframe disabled.
    • White faces and transparent wireframe.
    • White faces, wireframe disabled.
    • Transparent faces and white wireframe.
    • Transparent faces, wireframe disabled.

    The environment of the box is an infinite black space.

  • Airplane speed The speed of the airplanes can be changed. Airplanes can also fly backwards by making the speed negative.
  • Adding/removing planes The user can add or remove airplanes.
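    The scaling limits described above amount to clamping the box's scale factor between a minimum (exocentric) and a maximum (egocentric) value. A minimal sketch in C, with illustrative bounds; the actual limits used in the application are not stated here:

```c
/* Hypothetical scale bounds; the application's real limits may differ. */
#define MIN_SCALE 0.1f   /* zoomed out: box roughly shoe-box sized */
#define MAX_SCALE 10.0f  /* zoomed in: box walls near the CAVE walls */

/* Clamp a requested scale so the user can neither shrink the box
 * until it vanishes nor grow it past the CAVE walls. */
static float clamp_scale(float s)
{
    if (s < MIN_SCALE) return MIN_SCALE;
    if (s > MAX_SCALE) return MAX_SCALE;
    return s;
}
```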

    Input devices

    Both Wiimote and keyboard input are enabled in the application. The table below gives an overview of the controls.

    Controls for the box

    ESC move-mode (default)
    F1 move-mode
    F2 rotate-mode
    F3 scale-mode
    ----
    F10 cycle through the different colorings of the box
    1 blue box with wires
    2 blue box without wires
    3 white box with wires
    4 white box without wires
    5 black box with wires
    6 black box without wires
    9 fixed positions for the planes (press a different number [1..6] to turn this off)
    ----
    R reset all to the starting state
    P reset the position and rotation
    I sound on / off
    G debug on / off
    O shadows on / off
    ----
    WiiMote A Button rotate through the modes
    WiiMote B Button reset the position and rotation
    WiiMote Home Button reset all to the starting state
    ----
    Move-mode (default):
    W A S D Q E moving the box
    + - adding and removing planes
    ----
    WiiMote C Z DU DR DD DL moving the box
    WiiMote NunChuck moving the box
    WiiMote Plus Min adding and removing planes
    ----
    Rotate-mode (red wires):
    W A S D Q E rotating the box
    + - changing the speed of the planes
    ----
    WiiMote C Z DU DR DD DL rotating the box
    WiiMote NunChuck rotating the box
    WiiMote Plus Min changing the speed of the planes
    ----
    Scale-mode (green wires):
    W A S D Q E scaling the box
    + - total scaling of the box
    ----
    WiiMote C Z DU DR DD DL scaling the box
    WiiMote NunChuck scaling the box
    WiiMote Plus Min total scaling of the box
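    As an illustration, the mode cycling bound to the Wiimote A button in the table above could look as follows in C; the type and function names are ours, not the application's:

```c
/* The three interaction modes from the controls table. */
typedef enum { MODE_MOVE, MODE_ROTATE, MODE_SCALE } Mode;

/* Advance to the next mode, as the Wiimote A button does:
 * move -> rotate -> scale -> move. */
static Mode next_mode(Mode m)
{
    switch (m) {
    case MODE_MOVE:   return MODE_ROTATE;
    case MODE_ROTATE: return MODE_SCALE;
    default:          return MODE_MOVE;  /* MODE_SCALE wraps around */
    }
}
```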


    Issues

    Several issues arose while porting and extending the application. Firstly, the CAVE application did not generate the correct sounds. When we removed all airplanes but one, paused the application (airplane speed set to zero) and manually moved the box around the user (thereby moving the airplane around the user), the sounds were correct. However, when the airplanes were moving, the sounds did not correspond to their locations. It is unknown whether this is some form of delay, because to the user it was not recognizable as such (the sounds seemed completely off). In hindsight, one aspect of the application may have contributed to the problem: the wireframe of the box is very detailed, which may have caused some delay.

    Secondly, due to issues with the head-tracker, sound has been implemented under the assumption that the user is at the center of the application. Again, the above test then produced correct sounds. It is possible, however, that this design decision contributes to the problems with sound.

    Thirdly, the Doppler effect (see http://nl.wikipedia.org/wiki/Dopplereffect) has not been implemented, because VRJuggler's audio layer does not support it.
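    Had the audio layer supported it, the Doppler shift for an airplane could have been derived from its radial velocity toward the listener with the standard formula; this is textbook physics, not code from the application:

```c
#define SPEED_OF_SOUND 343.0f  /* m/s in air at room temperature */

/* Doppler pitch factor for a source moving with radial velocity
 * v_radial (m/s) toward (+) or away from (-) a stationary listener:
 * f_observed = f_source * c / (c - v_radial). A factor above 1 raises
 * the pitch (approaching plane), below 1 lowers it (receding plane). */
static float doppler_factor(float v_radial)
{
    return SPEED_OF_SOUND / (SPEED_OF_SOUND - v_radial);
}
```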

    Conclusion

    The benefits of head-tracking and exocentric viewing were the most significant. Shadows had a positive effect as well, though a smaller one. Contrary to expectations, spatialized sound did not heighten the sense of presence.

    For head-tracking, this result corresponds to the referenced studies. Further improvements are possible by optimizing the application so that there is less delay between user input and output, and so that there is a one-to-one correspondence between the user's physical movement and his or her movement in the virtual environment. We believe that such improvements would further increase the benefits of head-tracking and, subsequently, of the other added effects.

    For exocentric viewing, the result differed from the literature. We believe this is because the benefits of ego- and exocentric views were combined in the CAVE: although the user views the box from an exocentric viewpoint, (s)he is still standing inside the CAVE and looking at the environment as a whole from an egocentric viewpoint. It may also be a matter of interpretation: the literature we used considers exocentric viewing on a workbench, screen, or other device where the user is physically outside the environment; it does not deal with ego- and exocentric viewing in a CAVE. We conclude that exocentric viewing is beneficial to the 'box' application in a CAVE, and additional literature on similar types of viewing in a CAVE could help support this claim.

    For shadows, the result corresponds to the studied literature. Further improvements are possible by improving the shape of the shadows and adding effects such as anti-aliasing.
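    For reference, a common way to obtain such planar shadows in OpenGL is to flatten the airplane geometry onto the floor with a projection matrix and redraw it in a dark color. Whether the application uses exactly this construction is an assumption; the sketch below shows the classic matrix for a point light and the ground plane y = floor_y:

```c
/* Build the classic planar-shadow matrix that projects geometry onto
 * the plane y = floor_y as seen from a point light at (lx, ly, lz).
 * Column-major layout, as OpenGL's glMultMatrixf expects. */
static void shadow_matrix(float m[16], float lx, float ly, float lz,
                          float floor_y)
{
    float d = ly - floor_y;  /* light height above the floor plane */

    m[0] = d;    m[4] = -lx;      m[8]  = 0.0f; m[12] = lx * floor_y;
    m[1] = 0.0f; m[5] = -floor_y; m[9]  = 0.0f; m[13] = ly * floor_y;
    m[2] = 0.0f; m[6] = -lz;      m[10] = d;    m[14] = lz * floor_y;
    m[3] = 0.0f; m[7] = -1.0f;    m[11] = 0.0f; m[15] = ly;
}
```

After multiplying this matrix onto the modelview stack, redrawing the airplanes with lighting disabled and a dark color yields hard shadows on the box floor.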

    For spatialized sound, the results differed from the literature. We attribute this to the problems in the implementation. Because users did respond to the sound, especially when it happened to coincide with the location of an airplane, we expect spatialized sound to yield significant benefits once the current problems are solved. We also believe that implementing the Doppler effect would have benefited the user's experience considerably, had the audio layer supported it.

    Adding minor effects, such as changing the colors of the wireframe and faces of the box, the number of airplanes, and the speed of the airplanes, proved useful: when given full freedom, users preferred to change the box to one with a black frame and white faces, and they would add airplanes. The speed of the airplanes was left unchanged. This suggests that such minor effects may also contribute to the user's experience, although they have not been quantified individually.

    We feel that the chosen approach of distinguishing six cases was well suited to this situation: by comparing the results to a base case, the individual benefit of each of the four main added effects can be determined without interference from the other effects.

    Appendix

    Questionnaires

    In this appendix, the questionnaires used during the experiment are described. Every effect evaluated during the experiment had its own questionnaire, which the subjects had to fill in. The results and conclusions drawn from the experiment are discussed in the main document. To save space, only one instance of each questionnaire is shown; during the experiment, every questionnaire had to be filled in three times: for personal space, action space, and vista space.


    Basic box questionnaire

    The questionnaire used while testing the basic box, without any additions. Note that these questions are also included in every other questionnaire.

    Basic box questionnaire
    (1) If your level of presence in the real world is 100, and your level of presence is 1 if you have no presence, rate your level of presence in this virtual world.
    (2) How realistic did the virtual world appear to you?
    (3) How aware were you of your display and control devices?
    (4) How compelling was your sense of objects moving through the virtual world?
    (5) How well were you able to determine the exact distance between you and the objects?
    (6) How well were you able to determine the exact distance between the objects themselves?


    Head-tracking questionnaire

    The questionnaire used while testing the box with addition of head-tracking.

    Head-tracking questionnaire
    (1) If your level of presence in the real world is 100, and your level of presence is 1 if you have no presence, rate your level of presence in this virtual world.
    (2) How realistic did the virtual world appear to you?
    (3) How aware were you of your display and control devices?
    (4) How compelling was your sense of objects moving through the virtual world?
    (5) How well were you able to determine the exact distance between you and the objects?
    (6) How well were you able to determine the exact distance between the objects themselves?
    Domain-specific questions
    (a) How natural did your interactions with the environment seem?
    (b) How compelling was your sense of moving around inside the virtual world?
    (c) How much delay did you experience between your actions and expected outcomes?


    Exocentric questionnaire

    Exocentric questionnaire
    (1) If your level of presence in the real world is 100, and your level of presence is 1 if you have no presence, rate your level of presence in this virtual world.
    (2) How realistic did the virtual world appear to you?
    (3) How aware were you of your display and control devices?
    (4) How compelling was your sense of objects moving through the virtual world?
    (5) How well were you able to determine the exact distance between you and the objects?
    (6) How well were you able to determine the exact distance between the objects themselves?


    Spatialized sound questionnaire

    Spatialized sound questionnaire
    (1) If your level of presence in the real world is 100, and your level of presence is 1 if you have no presence, rate your level of presence in this virtual world.
    (2) How realistic did the virtual world appear to you?
    (3) How aware were you of your display and control devices?
    (4) How compelling was your sense of objects moving through the virtual world?
    (5) How well were you able to determine the exact distance between you and the objects?
    (6) How well were you able to determine the exact distance between the objects themselves?
    Domain-specific questions
    (i) How well were you able to identify sounds?
    (ii) How well were you able to localize sounds?


    Shadows questionnaire

    Shadows questionnaire
    (1) If your level of presence in the real world is 100, and your level of presence is 1 if you have no presence, rate your level of presence in this virtual world.
    (2) How realistic did the virtual world appear to you?
    (3) How aware were you of your display and control devices?
    (4) How compelling was your sense of objects moving through the virtual world?
    (5) How well were you able to determine the exact distance between you and the objects?
    (6) How well were you able to determine the exact distance between the objects themselves?


    Free choice questionnaire

    Free choice questionnaire
    (1) If your level of presence in the real world is 100, and your level of presence is 1 if you have no presence, rate your level of presence in this virtual world.
    (2) How realistic did the virtual world appear to you?
    (3) How aware were you of your display and control devices?
    (4) How compelling was your sense of objects moving through the virtual world?
    (5) How well were you able to determine the exact distance between you and the objects?
    (6) How well were you able to determine the exact distance between the objects themselves?

    Results

    Basic box results

    Question | User 1 | User 2 | User 3
    (1)      | 15     | 10     | 10
    (2)      | 2      | 2      | 1
    (3)      | 6      | 6      | 6
    (4)      | 4      | 4      | 2
    (5)      | 2      | 5      | 2
    (6)      | 3      | 6      | 1


    Head-tracking results

    Question | User 1 | User 2 | User 3
    (1)      | 25     | 15     | 15
    (2)      | 2      | 2      | 2
    (3)      | 6      | 6      | 6
    (4)      | 5      | 4      | 3
    (5)      | 3      | 5      | 3
    (6)      | 4      | 6      | 2
    ----
    (a)      | 3      | 2      | 4
    (b)      | 4      | 3      | 3
    (c)      | 4      | 4      | 6


    Exocentric results

    Question | User 1 | User 2 | User 3
    (1)      | 35     | 20     | 20
    (2)      | 3      | 4      | 4
    (3)      | 5      | 6      | 6
    (4)      | 6      | 5      | 5
    (5)      | 4      | 5      | 5
    (6)      | 5      | 6      | 5


    Spatialized sound results

    Question | User 1 | User 2 | User 3
    (1)      | 26     | 15     | 13
    (2)      | 2      | 2      | 2
    (3)      | 6      | 6      | 6
    (4)      | 5      | 4      | 3
    (5)      | 3      | 4      | 3
    (6)      | 4      | 6      | 2
    ----
    (i)      | 6      | 2      | 6
    (ii)     | 1      | 1      | 1


    Shadows results

    Question | User 1 | User 2 | User 3
    (1)      | 30     | 20     | 16
    (2)      | 3      | 3      | 3
    (3)      | 6      | 6      | 6
    (4)      | 6      | 5      | 4
    (5)      | 5      | 5      | 3
    (6)      | 5      | 6      | 3


    Free choice results

    Question | User 1 | User 2 | User 3
    (1)      | 40     | 25     | 23
    (2)      | 4      | 5      | 5
    (3)      | 5      | 5      | 6
    (4)      | 7      | 5      | 6
    (5)      | 5      | 6      | 6
    (6)      | 5      | 7      | 6

    Bibliography

    [1] M. Slater, M. Usoh, and Y. Chrysanthou, The influence of dynamic shadows on presence in immersive virtual environments, in VE '95: Selected papers of the Eurographics workshops on Virtual environments '95, (London, UK), pp. 8--21, Springer-Verlag, 1995.

    [2] M. Slater, A. Steed, J. McCarthy, and F. Maringelli, The influence of body movement on subjective presence in virtual environments, Human Factors, vol. 40, 1998.

    [3] J. Cutting, How the eye measures reality and virtual reality, Behavior Research Methods, Instruments and Computers, vol. 29, no. 1, pp. 27--36, 1997.

    [4] M. Slater, V. Linakis, M. Usoh, and R. Kooper, Immersion, presence, and performance in virtual environments: An experiment with tri-dimensional chess, in ACM Virtual Reality Software and Technology (VRST), pp. 163--172, 1996.

    [5] C. Hendrix and W. Barfield, Presence in virtual environments as a function of visual and auditory cues, in VRAIS '95: Proceedings of the Virtual Reality Annual International Symposium (VRAIS'95), (Washington, DC, USA), p.74, IEEE Computer Society, 1995.

    [6] B.G. Witmer and M.J. Singer, Measuring presence in virtual environments: A presence questionnaire, Presence: Teleoperators and Virtual Environments, vol. 7, no. 3, pp. 225--240, 1998.