CWI Box groep8

Introduction

Virtual environments represent a new approach to human-computer interfaces and have introduced a new interaction paradigm in the form of virtual reality. An individual no longer simply observes images on a computer screen but becomes actively involved in a three-dimensional virtual world. This requires the integration of various input and output devices, such as sound and motion, to provide the user with the illusion of being immersed in a computer generated environment.

Abstractly, a virtual environment is a collection of computer generated objects which interact with each other and with the user. Specifying a virtual environment involves specifying both the appearance of the objects in that environment and their behavior. Since the objects are computer generated, their behavior is defined by a program that updates the graphical representation of each object; usually that program acts on data associated with the objects. The problem of specifying the interactions between objects is the problem of specifying the functions defining those interactions, the objects on which they act, and the object data that is affected.

The main goal of a virtual environment is to give the user a sense of presence and immersion. We will create a virtual environment in which the human participant is “immersed” in two ways. First, through the VE system displaying the sensory data depicting his or her surroundings; part of the immediate surroundings consists of a representation of the participant’s body, and the environment is displayed from the unique position and orientation defined by the participant’s viewpoint within the environment. Second, through input signals describing the disposition and dynamic behavior of the human body in response to the stimuli presented by the environment. Immersion may lead to a sense of presence: presence refers to experiencing the virtual environment rather than the actual physical locale. Presence in a VE depends on one’s attention shifting from the physical environment to the VE, but does not require the total displacement of attention from the physical locale. Throughout this work we will discuss the aspects involved in the realization of a VE.

Porting the CWIbox to the CAVE

The VR application 'box' we got from CWI is an animation of a number of airplanes flying in a box. This CWI box is written in OpenGL. The first task we had to perform was porting this CWI box to the CAVE. As described in the section 'Chosen technologies', we have chosen to port the application to OpenSceneGraph and integrate it with VRJuggler. To do this, we used the following approach. The first step was to port the application to OpenSceneGraph. Since the original application was written in OpenGL, we had to redraw the airplanes in OpenSceneGraph. OpenSceneGraph treats airplanes and the like as objects which have vertices, faces and several other properties such as color. All this information is stored in a geometric object, and the objects are all stored in a scene graph which is drawn by the OpenSceneGraph viewer. OpenGL, on the other hand, is state oriented: each time a frame is drawn, every object must be drawn separately by calling its drawing procedure. Porting was done by storing the geometric data retrieved from the airplane drawing procedure in a geometric object and adding a procedure which updates the positions of the planes.
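
A minimal sketch of this step is shown below, assuming the vertex data has already been extracted from the original OpenGL drawing routine; the names (makePlaneGeode, PlaneUpdater) are ours and not part of the original CWI code.

 // Sketch: wrap the vertex data from the old OpenGL drawing routine in an
 // osg::Geometry and hang it under a transform whose update callback moves
 // the plane each frame.
 #include <osg/Geode>
 #include <osg/Geometry>
 #include <osg/MatrixTransform>
 #include <osg/NodeCallback>
 
 osg::ref_ptr<osg::Geode> makePlaneGeode(const osg::ref_ptr<osg::Vec3Array>& verts)
 {
    osg::ref_ptr<osg::Geometry> geom = new osg::Geometry;
    geom->setVertexArray(verts.get());
    geom->addPrimitiveSet(
       new osg::DrawArrays(osg::PrimitiveSet::TRIANGLES, 0, verts->size()));
 
    osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    geode->addDrawable(geom.get());
    return geode;
 }
 
 // Update callback that repositions one plane every frame.
 class PlaneUpdater : public osg::NodeCallback
 {
 public:
    PlaneUpdater() : mT(0.0f) {}
 
    virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
    {
       mT += 0.01f;   // placeholder motion; the real update follows the flight path
       osg::MatrixTransform* xform = static_cast<osg::MatrixTransform*>(node);
       xform->setMatrix(osg::Matrix::translate(osg::Vec3(mT, 0.0f, 0.0f)));
       traverse(node, nv);   // continue traversal into the children
    }
 
 private:
    float mT;
 };
 
 // Usage: parent the geode under an osg::MatrixTransform and attach the callback:
 //    planeTransform->setUpdateCallback(new PlaneUpdater);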

Cwibox.png

This resulted in an application that can be run on a single computer, written in OpenSceneGraph and C++. The CAVE is a virtual environment which runs on a cluster of computers. To view the same scene in the CAVE we had to integrate the application with VRJuggler, which supports multiple viewports spread over a network and a cluster of computers. VRJuggler uses a configuration file with the information it requires to run one application on a cluster of computers. This configuration includes the positions of the screens and information to synchronize the viewing on all cluster nodes. Basically, VRJuggler takes a scene graph and the positions of the screens as input and renders the same scene on each screen. None of us had any experience with VRJuggler, hence we adapted an existing application called 'Afrika'. Afrika has the same structure as our application: it is also written in OpenSceneGraph and integrated with VRJuggler to make it run in the CAVE. We used the structure of Afrika and replaced its scene graph with the airplanes and a box.

An application that contains only static objects would be finished after this step, but airplanes are not static objects and their positions should be the same on all screens. We had to introduce a shared variable to achieve that. A shared variable is updated by one single computer, the server. The server updates the positions of the planes and all other computers only read the positions from the shared variable before drawing the scene. It took quite some time to get the synchronization of the shared variables to work in the CAVE, but eventually it worked and we could conclude the porting of the CWI box and begin increasing the immersion of the CWI box simulation. More about this in the section 'Implementation issues'.
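
As a rough sketch of what such a shared variable looks like, based on our recollection of VRJuggler's application data mechanism (cluster::UserData); the class name PlaneData, the GUID and the member names are ours, and the exact readObject/writeObject signatures differ between VRJuggler versions:

 // Sketch of a cluster-shared variable holding the plane positions. Older
 // VRJuggler releases return vpr::ReturnStatus from readObject/writeObject
 // instead of void; treat this as a sketch rather than copy-paste code.
 #include <vector>
 #include <vpr/IO/SerializableObject.h>
 #include <vpr/IO/ObjectReader.h>
 #include <vpr/IO/ObjectWriter.h>
 #include <plugins/ApplicationDataManager/UserData.h>
 
 class PlaneData : public vpr::SerializableObject
 {
 public:
    std::vector<float> positions;                 // x, y, z per plane, flattened
 
    virtual void writeObject(vpr::ObjectWriter* writer)
    {
       writer->writeUint32(positions.size());
       for (unsigned int i = 0; i < positions.size(); ++i)
       {
          writer->writeFloat(positions[i]);
       }
    }
 
    virtual void readObject(vpr::ObjectReader* reader)
    {
       positions.resize(reader->readUint32());
       for (unsigned int i = 0; i < positions.size(); ++i)
       {
          positions[i] = reader->readFloat();
       }
    }
 };
 
 // In the application class:
 //    cluster::UserData<PlaneData> mPlaneData;
 // During initialisation, every node registers the same GUID:
 //    mPlaneData.init(vpr::GUID("6f0f5fe0-0000-0000-0000-000000000000"));
 // Each frame, only the server node (mPlaneData.isLocal()) writes new plane
 // positions; all nodes read them before drawing the scene.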

Increasing the immersion

To enhance the immersive effect of the simulation we will increase both the depth and the breadth of the sensory input. Increasing the depth is done by making the scene more realistic, for example by creating an environment. The breadth of the sensory input is increased by adding audio. We will describe the implemented improvements to increase the immersion. All improvements can be toggled on and off in the simulation to create tests with the presence or absence of each addition.

Environment

In the original CWI box the airplanes fly in a simple box. Instead of just a box, we added an environment to make the scene more realistic; it also enables egocentric and exocentric viewing. The environment consists of two parts, a sky and a terrain, each provided with a bitmap to make the scene more realistic. The sky is simply a bitmap projected on the walls of the CAVE, whereas the terrain is a 3D object with variations in altitude.
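
As an illustration, a bitmap can be applied to a part of the environment in OpenSceneGraph roughly as follows; the function name and file name are placeholders, not the actual assets we used:

 // Sketch: load a bitmap and attach it as a texture to a piece of geometry.
 #include <string>
 #include <osg/Geode>
 #include <osg/Texture2D>
 #include <osgDB/ReadFile>
 
 osg::ref_ptr<osg::Geode> makeTexturedNode(osg::Drawable* quad, const std::string& file)
 {
    osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    geode->addDrawable(quad);
 
    osg::ref_ptr<osg::Image> image = osgDB::readImageFile(file);   // e.g. "sky.png"
    osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
    tex->setImage(image.get());
 
    // Attach the texture to texture unit 0 of this node's state set.
    geode->getOrCreateStateSet()->setTextureAttributeAndModes(
       0, tex.get(), osg::StateAttribute::ON);
    return geode;
 }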

More realistic actors

The actors in the application are the airplanes, which are rather primitively drawn. They resemble folded paper airplanes and can have several colors. Our idea was to create more realistic actors, such that the subject would get the feeling they are real and would avoid collisions with the flying objects. Instead of creating more realistic airplanes, we made a simple model of a bee. In the real world it is much more likely that something like a bee flies around buzzing people than paper airplanes flying around autonomously. The subject is likely to be familiar with this and we therefore expect it to add to an immersive experience.

Bees.png

Flight paths

In the CWI box the planes fly in a straight line and make a turn when they reach the border of the box. This flight path is not very interesting and rather predictable. We added two more realistic flight paths. The first flight path is a smooth curve around a static point in space. In this case the planes, or bees in our simulation, fly independently of the subject's position and movements. The second implemented flight path is also a smooth curve, but now the bees circle around and towards the subject's head, such that it appears as if a bee is attacking. Our thought was that besides the more realistic flight path, the subject could get immersed in the sense that he or she would try to evade the bees.
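
A minimal sketch of the two paths, with illustrative names and parameters (the actual curves in the simulation were tuned differently):

 // Each bee follows a circular curve around a centre point. For the
 // "attacking" path the centre is the tracked head position and the radius
 // shrinks over time so the bee spirals towards the subject.
 #include <cmath>
 #include <osg/Vec3>
 
 osg::Vec3 beePosition(double t,              // elapsed time in seconds
                       const osg::Vec3& centre,
                       float radius,
                       float angularSpeed,    // radians per second
                       float height)
 {
    const float a = static_cast<float>(angularSpeed * t);
    return centre + osg::Vec3(radius * std::cos(a),
                              radius * std::sin(a),
                              height);
 }
 
 // Static path:    centre is a fixed point in the world, radius constant.
 // Attacking path: centre = head position from the tracker, and the radius is
 //                 decreased a little every frame so the bee closes in.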

Interaction with the environment

We now have an environment in which bees fly according to a specified flight path. However, the subject has no influence on the environment and can only move by walking within the walls of the CAVE. To give the subject some freedom of movement, we added a platform and gave the subject control over its position, so that he or she is no longer restricted to the limited space of the CAVE. Moving around cannot be done physically, since the subject has to remain in the CAVE, so it is done with the Wii remote. A consequence is that moving around is no longer realistic; the platform counters this: the subject is standing on the platform while moving, and the Wii remote is merely the device that controls it. The platform can fly around freely in all directions, including up and down, and rotate sideways. When the subject is completely immersed in the simulation he or she should never step off the floating platform, since this could result in serious injuries.
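
A sketch of how such platform control can look through VRJuggler's (Gadgeteer) device interfaces; the device alias names and the speed are placeholders, since the actual mapping of the Wii remote depends on the configuration files:

 // Sketch: move the platform from a digital button and an analog axis.
 #include <cmath>
 #include <gadget/Type/DigitalInterface.h>
 #include <gadget/Type/AnalogInterface.h>
 #include <gmtl/Vec.h>
 
 class PlatformNav
 {
 public:
    void init()
    {
       mForward.init("WiiButtonA");   // hypothetical device alias
       mTurn.init("WiiPitch");        // hypothetical analog alias, 0..1
    }
 
    // Called once per frame to move the platform.
    void update(float dt, gmtl::Vec3f& platformPos, float& platformHeading)
    {
       const float speed = 2.0f;      // metres per second, arbitrary
       if (mForward->getData() == gadget::Digital::ON)
       {
          platformPos[0] += std::sin(platformHeading) * speed * dt;
          platformPos[2] -= std::cos(platformHeading) * speed * dt;
       }
       // Map the analog value (0..1) to a turn rate of roughly -1..1 rad/s.
       platformHeading += (mTurn->getData() - 0.5f) * 2.0f * dt;
    }
 
 private:
    gadget::DigitalInterface mForward;
    gadget::AnalogInterface  mTurn;
 };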

Fog

In the real world it is noticeable that objects that are further away have a lower contrast, due to the dust particles floating around in the air. These particles limit the view and also reflect the light. We often see this when standing on a high building looking over a great distance: the further away we look, the whiter things seem to be. To simulate this, we added fog, which gives a similar experience. Simply adding fog to the simulation is not as easy as it seems, since the density of the fog should also be realistic; otherwise it would have too much or too little effect. Figuring this out is a matter of testing and we had only limited time, so we chose a density which seemed reasonable. During the tests we noticed that the density was a bit too high.
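
In OpenSceneGraph fog can be attached to the root of the scene graph roughly as follows; the density value shown is just an example of the parameter we had to tune by hand:

 // Sketch: attach exponential fog to the whole scene.
 #include <osg/Fog>
 #include <osg/Group>
 
 void addFog(osg::Group* root, float density)
 {
    osg::ref_ptr<osg::Fog> fog = new osg::Fog;
    fog->setMode(osg::Fog::EXP2);                       // exponential fall-off
    fog->setDensity(density);                           // e.g. 0.02f; ours turned out too high
    fog->setColor(osg::Vec4(0.9f, 0.9f, 0.9f, 1.0f));   // whitish haze
 
    osg::StateSet* ss = root->getOrCreateStateSet();
    ss->setAttributeAndModes(fog.get(), osg::StateAttribute::ON);
    ss->setMode(GL_FOG, osg::StateAttribute::ON);
 }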

Actor sound

When a bee flies by, the first thing we notice is the buzzing sound it makes. The bee itself is small and hardly annoying or noticeable within the limited field of view of our eyes. Therefore, it would be a great improvement to add a buzzing sound for every bee in the simulation. We decided to use VRJuggler's facility for localized audio, the Sonix layer, which uses OpenAL, and used it to emit a buzzing sound at the position of every bee. Whenever the position of a bee changes, or the position of the environment with respect to the subject changes, the simulation updates the position of the buzzing sound for that bee. This way the subject can hear the bees flying around his head.

Environmental sound

Another, similar improvement was the addition of environmental sounds. When you walk in the countryside it is never completely quiet. There is always the sound of the wind and of animals like birds and grasshoppers. To draw attention away from the CAVE itself, which is surrounded by computers and projectors that produce a lot of noise, we added the sound of wind to the environment. With audio added to the simulation, the user is exposed not only to the visual aspects of the environment but also to the auditory ones, even though he might not be aware of it.

Implementation issues

During the implementation we encountered a number of difficulties. Below is a list of the ones that claimed most of the valuable project time.

Documentation

The documentation and tutorials/examples of OpenAL are outdated. A look at the header files was needed to get things working. The documentation of VRJuggler was better, but still incomplete. We sometimes had to dive into the header files to find the answer we needed. In extreme cases, trial and error was used to find out the inner workings of the shared variables. More on that later. The documentation of OpenSceneGraph seems quite complete but a bit scattered (and sometimes outdated). For example, NodeMasks are mentioned in the documentation, but nowhere is it explained what the different node masks mean or how they should be used.

Sound

As stated before, we chose to use OpenAL/Sonix for the audio in the simulations. One thing we did not try in the final implementation, but that could have worked, was to use OpenAL directly. In the porting process we did some experimentation with OpenAL, and using OpenAL directly for positional sound worked quite easily. For integration reasons, we chose to use the Sonix layer in our final implementation. On our local computers positional sound did work, as long as OpenAL worked; the latter was apparently quite difficult to realise on the Windows machines. On the CAVE cluster, however, using sound resulted in segmentation faults during the startup of the simulations. In a number of cases this resulted in a blue screen on the server computer. One of the causes suggested by the CAVE people was the format of the WAVE files used for the samples. We tried a few formats, with no success. Due to the instability, we decided to abandon sound for the tests.
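
For reference, the direct OpenAL experiment boiled down to roughly the following; buffer loading and context creation are omitted, and buzzBuffer is assumed to hold a decoded WAVE sample:

 // Sketch of direct OpenAL positional sound: one looping source per bee,
 // positioned in 3D space around the listener.
 #include <AL/al.h>
 
 ALuint makeBuzzSource(ALuint buzzBuffer)
 {
    ALuint src;
    alGenSources(1, &src);
    alSourcei(src, AL_BUFFER, buzzBuffer);
    alSourcei(src, AL_LOOPING, AL_TRUE);      // the buzzing never stops
    alSourcePlay(src);
    return src;
 }
 
 // Per frame: move the source to the bee and keep the listener at the subject.
 void updateBuzz(ALuint src, const float beePos[3], const float headPos[3])
 {
    alSource3f(src, AL_POSITION, beePos[0], beePos[1], beePos[2]);
    alListener3f(AL_POSITION, headPos[0], headPos[1], headPos[2]);
 }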

Shared variables

The problem that took most of our time to solve was the synchronisation of the cluster nodes using shared variables. While there are some examples available, not all quirks were documented and it took quite a lot of trial and error to find some of the details. For example, the order in which the shared variables are specified in the XML configuration files actually matters: the shared variables that are instantiated first in the program should be the first in the configuration file. If not, the program starts without complaints, but no synchronisation occurs. Also, in the program itself, the shared variables have to be used quite precisely. Reading and writing of the variables should occur in the correct phases of the rendering loop; if not, synchronisation will fail once again.
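
The pattern we eventually found to work is sketched below, continuing the PlaneData sketch above; the class and helper names are ours, and the exact phase in which VRJuggler distributes the application data is not documented, so this reflects trial and error rather than an official recipe:

 // Sketch of where the reads and writes ended up in the VRJuggler frame loop:
 // write in preFrame on the node that owns the data, read in latePreFrame
 // everywhere. advanceFlightPaths and applyPositionsToSceneGraph are our own
 // helper functions, declared elsewhere.
 class BeeApp : public vrj::OsgApp
 {
 public:
    virtual void preFrame()
    {
       if (mPlaneData.isLocal())             // only the server node advances the bees
       {
          advanceFlightPaths(mPlaneData->positions);
       }
    }
 
    virtual void latePreFrame()
    {
       // By now the shared data has been distributed over the cluster; every
       // node reads the same positions and applies them to its scene graph.
       applyPositionsToSceneGraph(mPlaneData->positions);
    }
 
    // ... scene graph setup and the PlaneData definition omitted ...
 private:
    cluster::UserData<PlaneData> mPlaneData;
 };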

Flock of Birds

During the tests, the Flock of Birds (the head tracking) crashed regularly, sometimes up to once every session. This led to a freeze of the head tracking and consequently rather lengthy restarts of the simulation during the tests. We have found no explanation for these failures, apart from the observation that the occurrence of the crashes seemed to decrease when the head tracker was kept inside the CAVE.

Other issues

It turned out that there are some subtle differences between versions of VRJuggler and OpenSceneGraph. This could result in things working on our test cluster, but not on the CAVE cluster. Usually this manifested itself as runtime errors during the startup of the simulation, or as compilation/linking errors. Finally, we decided to use the same versions of VRJuggler and OpenSceneGraph on one of the laptops as on the CAVE cluster, to avoid compatibility issues. Also, the software was recompiled on the CAVE cluster server computer. One other thing we noticed was the varying performance of the simulation. The first time we got the simulation running, the bees were moving synchronously on all the screens, but we were rather shocked by the dramatically poor performance. Fortunately, the next time we started the simulation, the lag had vanished. The performance issues seem to be non-deterministic and usually a restart of the simulation resolved them. Until now, we have not been able to determine the origin of the performance lag.

Tests

Now that we have a method of measuring immersion, we have to expose the subjects to enough virtual environments to fill in the questionnaire. For this purpose, we designed a number of tests. The simulations we used for the tests differ in realism, interaction and distraction factors. After each test, a questionnaire was filled in by the subjects. In total, we used 7 subjects for the tests, all with different non-computer-science backgrounds.

Boxed

This test is the most similar to the original CWIbox. The user is placed in a simple box. To make the simulation more realistic and to improve immersion, sky, terrain and fog were added to the simulation. This test was meant as an initial static, non-interactive encounter for the subjects with a virtual environment. To our knowledge, none of the test subjects had encountered a virtual environment like the CAVE before. In the initial situation, the box has exactly the same dimensions as the CAVE, i.e. the walls of the CAVE coincide with the walls of the box. In the front wall a rectangular window was placed to let the user look outside. This resulted in an exocentric simulation, i.e. the simulation was taking place outside the CAVE. In a few cases, we shifted the box a little bit backward with respect to the CAVE. This made it possible for the users to look just over the edge of the window, and it resulted in a number of users almost hitting the walls of the CAVE with their head. The task the users were given was counting the number of bees that were flying around the box.

Beetest1.png

Move it

In the second test, we disabled the fog and shadows to reduce the graphical aspects of immersion. On the other hand, the simulation was made interactive: self-navigation was added and the center of the flight path was coupled to the head position of the user (i.e. "attacking bees"). The user was given a Wii remote to control the platform they were standing on. The purpose of the test was to investigate the influence of self-navigation on the immersion, given the reduced eye candy and the absence of sound. Just as in the other simulations, the task of the user was to count the number of bees. An added difficulty was that the flight path of the bees was centered around the user's head, so it was not possible to navigate to a position with a good overview of the bees. Users seemed to like being able to navigate themselves, sometimes being more busy looking around in the world than counting bees. In those cases we kindly reminded them of their task, since time was limited.

Beetest2.png

Vertigo

In the third test, the interactivity was removed again. Fog was added, and originally actor sound ("bzzzz") and environmental sound (wind) were also to be added for this test, to make the simulation as rich and realistic as possible. For the reasons previously mentioned, sound was not available during the test. The purpose of this test was twofold: first of all we wanted to test what the effect on immersion was of removing the interactivity while adding realism. Next to this, we were also interested in the behavior of the user on the platform. The user was given the task to count the bees once again, but this time the user was placed on a platform well above the ground. The bees were flying at ground level, in the fog, right below the platform. On a few occasions, we removed the platform while the user was concentrating on counting the bees. When asked about that experience, the subject replied that falling gave "a strange sensation". Nevertheless, no obvious shock or anxiety was observed via the webcam.

Beetest3.png

Darkness

In the fourth test we removed all visual aspects of the simulation, leaving the subject in an empty black environment standing on a plateau. Fog, shadow, sky and ground were disabled. The user was given a Wii remote to navigate the plateau they were standing on. The purpose of this test was to see how all the visual enhancements influenced or distracted the immersion of the user, i.e. a sort of blank test. The task of the user in this test was finding and counting the bees. Since the user had no visual cues to orient by, this could be a difficult exercise. The idea was that the user could use sound to locate the bees, and then navigate to them and count them. Since sound was not working, the exercise was even more difficult. This was the only simulation in which users could move themselves to the optimal position for counting the bees (they could move their own position and the flight path of the bees was centered around a stationary point). One interesting observation was that the users remained at quite some distance from the bees while counting them, but usually they stayed at the same height as the bees, i.e. they were counting the bees while looking straight ahead. A quirk was the palm tree that was not disabled together with the terrain. This resulted in a palm tree floating in space, which could be used as a visual beacon for orientation.

Beetest4.png

Results and discussion

The answers to the questionnaires were merged together and divided into four different types: the control, sensory, distraction and realism factors. The picture below shows the results of the test group by type and test. It should be noted that the lower the distraction value, the less distracted the test subjects were; so in this case the lower the value, the better in relation to immersion. This value is shown in red to signify this difference. We can immediately see that the third test scored much lower than the other tests. It is likely this was caused by the fact that the fog used in this test was quite dense. This made it hard for the users to see the bees and to count them. Apparently this significantly lowered the sense of immersion of the users. The low value of the control factor was obviously caused by the fact that the users could not move around in the simulation. Having trouble seeing the bees through the fog probably made the users want to move around, and thus the users were probably more critical of the control factors in this test.

Beeresults.png

The lack of realism is due to the fact that the fog was denser than a user would expect from reality. This obviously showed in the way the users experienced the realism of the simulation. Other realism-enhancing features used in the simulation were also less visible due to the dense fog, and equally the lack of visual information due to the fog was the reason for the low sensory factor. The distraction factor is, however, remarkably low. The fact that the users had trouble seeing the bees means that they had to try really hard to find them. This increased the difficulty of the task, which made them much less susceptible to outside influences. The more challenging a task is, the more concentrated the user has to be to complete it, and thus the users were less distracted, resulting in lower distraction values.

The results further indicate that the second and fourth test were the most immersive. Only the distraction factor was quite bad for these tests. The reason for this could be that the task was less complicated and thus the users were more easily distracted by what was going on around them. It is interesting to see that the fourth test also scored quite badly on the distraction factor. In the fourth test all graphical enhancements were disabled to give the user fewer distractions. This did, however, have no effect. It seems the users were more distracted by, for example, handling the controls or the people that were watching them. The control factors for the second and fourth tests were obviously quite high, as the users had full control of the simulation. This control allowed them to easily navigate to the bees and count them. The sensory factor was rated high as well. Especially for the second test this is quite obvious, as most visual enhancements were enabled and there was no fog present to obscure the view of the bees. This may also be the reason that the fourth simulation got high ratings for the sensory factor: while there were no visual enhancements, the users could quite easily see the bees flying in the distance, which may have increased the sensory rating for this test.

The first test scores quite average in comparison to the other tests. A reason for this might be that it was the first test. While the first test probably scored quite well in general, the users had no reference from previous tests, which probably caused them to give fairly average answers. We can, however, note that the realism factor of this test was somewhat low. This might be caused by the fact that the users were standing in a white box looking out on the world. A more realistically textured box might have improved the realism factor somewhat. From these results we can conclude that a challenging task can greatly reduce the amount of distraction the user experiences from sources outside the simulation, and since less distraction increases immersion, this is a very important tool to increase immersion. The test with the best results on the other factors was test two. The increased control over the simulation could have contributed to a greater sense of immersion. The reason the distraction factor was high in this test is that the increased control made it easier for the users to complete their task, and thus they were more quickly and more easily distracted. Another factor that could have influenced the realism factor was that the bees were flying towards the user in this test.

The platform

Another purpose of the Vertigo test was to see how the users would react to the virtual platform they were standing on. During the Vertigo test the users were positioned high above the bees, which were flying at ground level right below them. This meant that the view of the bees was obstructed by the platform, and thus the user had to look over the edge of the platform to be able to count all the bees. Since they were standing in a simulated environment, however, the users could simply have stepped off the platform and easily counted the bees without the platform obstructing their view. We observed, however, that all but one of the users did not do this. They all stepped to the edge of the platform and leaned over it to count the bees below them. We did not tell them before the start of the test that stepping off the platform was not allowed, or that anything special would happen if they did. Nevertheless, the users assumed by themselves that they had to remain on the platform. There was only one user who did step off the platform. However, this user was told beforehand that the purpose of the test was to see if the users remained on the platform, and so she was interested to see what would happen if she were to step off it.

Conclusion

Based on our results we conclude that to make a simulation immersive one needs to first make it as graphically realistic as possible and then give the user a task that is fairly challenging. With these two things one can make a simulation that conveys an improved sense of immersion. From the results we can also conclude that navigation has more impact on immersion than visual cues: the tests with navigation had a higher overall immersion score. This is in agreement with the literature. Comparing our results with the results of the literature study, we indeed see that the amount of control over the simulation influences immersion, as was to be expected. Not all additions to the application had a noticeable effect on the immersiveness of the simulation. In our case, interaction of the actors with the user did not show any influence on the level of immersion. Based on the literature, this is not what we expected. This might have been caused by a lack of graphical realism of the actors. We were not able to test the influence of sound on immersion; however, it is likely that sound increases the immersion of the simulation, as is suggested in the literature. We also observed that users tend to stay on the virtual platform. There can be several reasons for this, but it seems that people tend to apply the rules of the real world to a simulated world, and thus one does not step off a platform that is floating high in the air. It seems likely that the more a user is immersed in the simulation, the less he will notice the difference between the real world and the simulation. We did not find any literature to compare this result with.