
Virtual Subjectiveness Analysis of ArtAnim’s “Walking Through a Pharaoh’s Tomb”

Posted on December 3, 2018

Interaction Models

This paper analyzes ArtAnim’s mixed reality virtual experience (VE) Walking Through a Pharaoh’s Tomb from the point of view of Virtual Subjectiveness (VS) (Parés & Parés, 2006) [1].

We explore the virtual experience’s physical interfaces and their mappings in the virtual world, as well as the Virtual Subjectiveness’ logical interface and its behaviours. We also investigate the level of interaction of the application, and its interaction design process. Finally, we offer a critique of the mixed reality experience.

Keywords

Analysis; ArtAnim; critique; interaction models; logical interface; physical interfaces; mixed reality; virtual experience; virtual reality; virtual subjectiveness.

High Level Analysis

The high-level Virtual Subjectiveness of ArtAnim’s Walking Through a Pharaoh’s Tomb is the first-person point of view of a human (female or male) exploring a dimly lit Egyptian tomb. Before entering the experience, “users had the choice to impersonate a standard avatar or to use their own avatar obtained by 3D scanning” (Chagué & Charbonnier, 2015) [2], which could then include their own clothing.

The potential body motions (walking around, jumping, moving arms and feet) and positions (standing, crouching, etc.) of that tomb explorer in the virtual world are the same as a typical human’s. As the users’ heads, hands, feet, and backs are tracked in the real world and mapped into the virtual world, users are able to look down at themselves and see their full body represented in the virtual world. Users looking at themselves or their limbs would recognize them as their own, since their motion matches that of the virtual avatar they embody. This embodiment allows players to feel more present in the virtual world (Chagué & Charbonnier, 2015, 2016) [2][3].

ArtAnim’s virtual world does not contain any reflective surfaces, as it is set in a dusty underground Egyptian tomb. We could, however, extrapolate that if there were any, users could see themselves, or rather their avatars.

Other players can also participate simultaneously. As all users are represented by avatars that all other users can see, direct physical interaction and collaboration (shaking hands, exchanging physical items like the torch) are possible in the virtual experience.

Chief amongst the objects with which users can interact is a baton that serves in the virtual world as a torch. This item is key, since the Egyptian tomb is very dimly lit. When multiple users participate in the experience at the same time, collaboration is required, as they need to share the light from the virtual torch.

Since two users can simultaneously see the real/virtual torch, as well as each other, it is possible for them to pass the torch, even throw it accurately from a distance, and catch it.

By the same token, nothing but goodwill, common sense, and savoir-vivre prevents a user from clubbing another, since an avatar means there is an actual physical person at that virtual location.

The experience’s goal is for players to discover the tomb’s hieroglyphs and main room. Most hieroglyphs are only virtually present. When users hover a hand over some of them, an additional layer of textual information is displayed, although no haptic feedback is provided. Also present on a wall, both physically and virtually, is an extruded hieroglyph in which users must place a small disc, also physically present in the room.
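
The paper does not describe how these overlays are triggered; since the hands are tracked, a plausible implementation is a simple proximity check between a hand and each hieroglyph. Below is a minimal sketch in Python, with hypothetical names and thresholds, not ArtAnim’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Hotspot:
    position: tuple   # hieroglyph centre in world space, in metres
    radius: float     # hover distance that triggers the overlay
    text: str         # the additional layer of textual information

def distance(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def update_overlays(hand_position, hotspots):
    """Return the overlay texts for every hotspot the hand currently hovers over."""
    return [h.text for h in hotspots if distance(hand_position, h.position) <= h.radius]

# Example: a tracked hand 20 cm from a hieroglyph with a 30 cm trigger radius.
wall = [Hotspot((1.0, 1.5, 0.0), 0.3, "Cartouche describing the pharaoh")]
print(update_overlays((1.0, 1.5, 0.2), wall))  # -> ['Cartouche describing the pharaoh']
```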

Besides visual and physical stimuli, sound and music are also present, coming from loudspeakers located on the walls of the experience. The HMD also has headphones that cover the users’ ears. Sound is used in a generally similar way to how video games use it: a stereotypically eerie Egyptian background soundtrack plays throughout. When users finally solve the puzzle that leads to the main room, the walls make a rumbling noise, and the soundtrack changes to a much more victorious one as the room is revealed.

Physical Interfaces

Walking Through a Pharaoh’s Tomb is made up of quite a few physical interfaces (see fig. 1). Let us investigate those that pertain only to the Virtual Subjectiveness (VS).


Fig. 1: The hardware setup

First is the head-mounted display (HMD), which comprises position and orientation sensors (gyroscopes and accelerometers) as inputs, and two screens for stereoscopic output. The HMD offers the standard 6 degrees of freedom: three for position and three for orientation.

There are also markers on the hands, feet, and backpack of the players:

[…] users were equipped with a set of rigid bodies positioned on their feet and hands. Each rigid body was a cluster of reflective spherical markers (Ø 14 mm) arranged in a unique geometrical pattern allowing the motion capture system to identify it (e.g., left hand player 1) and to compute its absolute position and orientation in the 3D space. (Chagué & Charbonnier, 2015) [2]

Each human hand is said to have 7 degrees of freedom, from the cumulative degrees of freedom of the shoulder, elbow, and wrist (commonly counted as three at the shoulder, one at the elbow, and three at the wrist, including forearm rotation). [7] Since only the hands are tracked by the motion capture, we will see below (see Mappings section) how the joints’ positions are calculated.

Concerning the feet, we can use the same logic of cumulating the legs’ joints’ degrees of freedom (three at the hip, one at the knee, and two at the ankle), which leads us to the conclusion that there are 6 degrees of freedom per foot. [8]

Logical Interface

Physical interfaces are mapped to the logical interface, and one of the representations of this interface is an avatar in the VE, a “tomb explorer.” This avatar is chosen before the experience begins; we have already described its appearance and selection above (see High Level Analysis section).


Fig. 2: VS using left hand to trigger text overlay,
and right hand to move torch to light up wall

Through the HMD’s displays, users are presented with a stereoscopic view, which enables them to see the VE from the point of view of the avatar’s head. Users can see their own virtual bodies and limbs, as well as manipulate said limbs directly through their motion in the physical world (see fig. 2). Users can also move their heads around, and thus change their point of view in the virtual world.

As described below (see Mappings section), the tracking of the real hands and feet, as well as of the HMD and the backpack, allows the application to position the avatar’s skeleton coherently in the virtual world. From this, users are able to properly understand their position, motion, and volume in the VE.

Mappings

The data obtained from the sensors of the HMD (x, y, z, and acceleration for each) is mapped to the position and orientation of the head of the player’s avatar. The real-world head motion is translated at a 1:1 ratio to the VS’ point of view. As such, the HMD uses this point of view to display the virtual world to the user.
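
To illustrate this 1:1 mapping, here is a minimal sketch of how a rendering loop could apply the tracked pose directly to the virtual camera. The types and names are assumptions made for illustration; ArtAnim has not published their implementation.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple      # (x, y, z) in metres, in tracking-space coordinates
    orientation: tuple   # unit quaternion (x, y, z, w)

@dataclass
class Camera:
    position: tuple = (0.0, 0.0, 0.0)
    orientation: tuple = (0.0, 0.0, 0.0, 1.0)

def update_virtual_camera(camera, hmd_pose):
    # 1:1 mapping: no scaling and no offset beyond aligning the two
    # coordinate systems, so one real metre of head motion is one virtual metre.
    camera.position = hmd_pose.position
    camera.orientation = hmd_pose.orientation

camera = Camera()
update_virtual_camera(camera, Pose((0.2, 1.7, 0.5), (0.0, 0.0, 0.0, 1.0)))
print(camera.position)  # (0.2, 1.7, 0.5): the avatar's head follows exactly
```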

Motion capture obtains many points in real space from the many cameras tracking the reflective markers with infrared light. Markers on the hands, feet, and backpacks of the players are respectively mapped to the virtual hands, feet, and body skeleton. Here also, their positions are translated at a 1:1 ratio to the VS.

With the data obtained from the markers and motion capture, and with inverse kinematics, the application is able to position the arms and legs of the avatar in virtual space. In short, inverse kinematics associates the hands and feet with a virtual skeleton model, which is defined by the typical limitations of a human body. For example, a human elbow is unable to rotate a full 360°. Combining these limitations with the position of the hand, the positions of the virtual hands and arms, as well as those of the virtual feet and legs, can be generated in real time. This allows a 1:1 mapping of the real-world body position to the position of the VS in the virtual world. [4]
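
To make the idea concrete, here is a toy two-bone solver in two dimensions (shoulder-elbow-wrist in a plane), using the law of cosines with an illustrative elbow limit. Production systems, presumably including ArtAnim’s, solve this in 3D over a full skeleton; this sketch only shows how a hand position plus joint limits yields plausible joint angles.

```python
import math

def solve_arm_2d(shoulder, hand, upper_len, fore_len, min_elbow=math.radians(30)):
    """Return (shoulder_angle, elbow_angle) in radians for a planar two-bone arm."""
    dx, dy = hand[0] - shoulder[0], hand[1] - shoulder[1]
    dist = max(min(math.hypot(dx, dy), upper_len + fore_len), 1e-9)  # clamp reach

    # Law of cosines gives the elbow's interior angle from the three side lengths.
    cos_elbow = (upper_len**2 + fore_len**2 - dist**2) / (2 * upper_len * fore_len)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Human-style joint limit: the elbow cannot fold fully shut, and acos
    # already prevents hyper-extension past a straight arm (pi radians).
    elbow = max(elbow, min_elbow)

    # The shoulder aims at the hand, offset by the triangle's shoulder angle.
    cos_shoulder = (upper_len**2 + dist**2 - fore_len**2) / (2 * upper_len * dist)
    shoulder_angle = math.atan2(dy, dx) + math.acos(max(-1.0, min(1.0, cos_shoulder)))
    return shoulder_angle, elbow

# Shoulder at the origin, hand half a metre to the right, 30 cm bone lengths.
print(solve_arm_2d((0.0, 0.0), (0.5, 0.0), 0.3, 0.3))
```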

As opposed to many virtual reality (VR) experiences, which make use of game controllers to enable users to move around in space, here users have to actually move around in a real-world space to move in the VE. Motion capture tracker data is also used to position the VS in the virtual space. Distances and positions, in meters, are then converted to virtual meters, as is often the case in 3D generated/rendered worlds. Here as well, the 1:1 relation applies: a user moving a single step forward in reality makes the VS move forward the exact same distance in the VE.

The HMD’s headphones, which cover the ears of users, are likely used to generate binaural sound effects. For example, the sound of fire crackling from the torch can be spatialized as the torch is moved around the head. Other sound-emitting elements in the VE can also be spatialized by a combination of the loudspeakers’ output and the HMD headphones’ output.
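
As an illustration of the simplest form such spatialization could take, the sketch below computes constant-power stereo gains from the torch’s azimuth relative to the head. True binaural rendering uses head-related transfer functions and interaural time differences; this simplified panning model is our assumption, not a description of ArtAnim’s audio pipeline.

```python
import math

def stereo_gains(head_pos, head_yaw, source_pos):
    """Return (left_gain, right_gain) for a sound source, given the head pose."""
    dx = source_pos[0] - head_pos[0]
    dz = source_pos[2] - head_pos[2]
    # Azimuth of the source relative to the direction the head is facing.
    azimuth = math.atan2(dx, dz) - head_yaw
    # Map azimuth to a pan value from -1 (full left) to +1 (full right).
    pan = max(-1.0, min(1.0, math.sin(azimuth)))
    # Constant-power panning keeps perceived loudness stable across the arc.
    angle = (pan + 1.0) * math.pi / 4.0
    return math.cos(angle), math.sin(angle)

# Torch held to the user's right: most of the crackle goes to the right ear.
print(stereo_gains((0.0, 1.7, 0.0), 0.0, (0.5, 1.2, 0.0)))  # ~(0.0, 1.0)
```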


Fig. 3: Table of the different mappings

Logical Interface Behaviours

The logical interface enables users to explore the area freely, restricted only by virtual walls.

With the 1:1 mapping of their real-to-virtual body motions, users are able to use their hands to interact with the VE’s elements and environment, for example by picking up and moving objects such as boxes (see fig. 4).

When other real human players are present, they can interact with each other as well, for example by shaking hands or exchanging the torch (see fig. 5).

Finally, there is nothing preventing users from talking to each other. However, since there are no sound inputs into the system, speech is not a direct part of the VE, but only a part of reality.


Fig. 4: User manipulating a box


Fig. 5: Users throwing the torch

Level of Interaction

Following Parés & Parés (2006) [1], we believe this mixed-reality virtual experience qualifies as a contributive experience. Let us explore how ArtAnim’s experience exceeds the definitions of explorative and manipulative experiences.

There is no doubt that there is an explorative aspect to this project, as the whole theme is that of an explorer discovering a Pharaoh’s tomb untouched, safe from the pillaging that occurred in reality. However, even if there were no interaction beyond exploring, the experience would at least qualify as a manipulative one, since the different mappings of the physical interfaces affect the positions and motions of the users’ virtual avatars.

Manipulation happens in mixed reality: in real life, users are able to pick up a cube, virtually represented as a coffin, and a stick, virtually represented as a flaming torch. Moving these real objects around the room and leaving them elsewhere changes the position of the virtual objects, but does not alter their state; they are finite and cannot be modified.

How then do users contribute to the experience? As mentioned before, players must investigate the Pharaoh’s tomb in order to discover the main room. To do so, users must manipulate a real/virtual disc and place it in a hieroglyph/cardboard cut-out. Once that task is completed, a virtual wall slides up, revealing the path for more exploration. At that moment, the virtual world is modified, as the wall’s state is changed.
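
This contributive moment can be summarized as a small state change driven by tracked positions. The sketch below is a toy model with hypothetical names and tolerances, not ArtAnim’s logic.

```python
def check_puzzle(disc_pos, slot_pos, wall, tolerance=0.05):
    """Open the wall once the tracked disc sits within tolerance of the slot."""
    dist = sum((d - s) ** 2 for d, s in zip(disc_pos, slot_pos)) ** 0.5
    if dist <= tolerance and wall["state"] == "closed":
        wall["state"] = "opening"  # the virtual wall slides up: a lasting change
    return wall

wall = {"state": "closed"}
print(check_puzzle((1.00, 1.20, 0.02), (1.0, 1.2, 0.0), wall))  # {'state': 'opening'}
```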

Interaction Design Approach

We believe ArtAnim took an interaction-driven approach to develop their experience. Since they are the creators of the technological setup, they wished to showcase the interactions which their product enables.

By presenting situations in which cooperation was needed—sharing the torch light, for example—it is clear that their goal was not to present content, but to build an experience depending on the interaction between players, and between the players and the VE system.

Critique

While ArtAnim’s Walking Through a Pharaoh’s Tomb mixed reality experience is technically interesting and impressive, there are a few shortcomings and some room for improvement.

First, we acknowledge that this application is a proof of concept to showcase ArtAnim’s Real Virtuality, the setup combining a VR HMD with their motion capture technology. As such, we understand that the agency wished to show examples of the creative interactions their system enables (as described above, see Interaction Design Approach section).

Fig. 6: Static fingers

The authors claim that their application allows full immersion and presence, as if in real life, thanks to the mapping of real limbs into virtual space. However, the players’ virtual hands are static (see fig. 6). While all other limbs seem to move in ways natural to humans, users looking at their virtual fingers, and trying to move them, would obtain no visual reaction. This leads us to conclude that users noticing these drawbacks would be drawn out of immersion, if not thrust straight into the uncanny valley. We do acknowledge that ArtAnim is already aware of this limitation, and is already considering improvements (Chagué & Charbonnier, 2015) [2].

Observing documentation of users interacting in an open space, represented virtually as a maze, we believe there is also another risk of breaking the experience. Collisions with walls and other objects are both virtual and real, as this mixed reality experience incorporates both, and users quickly understand that objects from reality are also solid in the VE.

Depending on the location of the experience—since it is a travelling experience, adapted to the physical surroundings where it is presented—some walls may only be virtual, while others could be hard and real. Users expecting a wall may be disappointed by the lack of solidity they face when encountering a virtual-only wall. As with the torch, there is nothing to prevent users from simply plowing through virtual walls until they reach the secret sarcophagus chamber, although a user behaving this way would likely bump into an actual real and solid wall at some point.


There are some alcoves in the wall designs of the tomb, and if users were to put their hands into them, the immersion could be broken either by encountering a fully solid wall where the alcove should recede, or by finding emptiness when touching the sides of the alcove.

Additionally, as visibly dusty—and maybe mouldy—as the tomb is in the virtual environment, users touching the solid walls and objects would only find even and dry surfaces.

The Virtual Subjectiveness seems to work quite well, especially given the setting in which this mixed reality experience takes place. Having a 1:1 mapping of motion and gestures, as well as of the point of view, is appropriate for this “tomb explorer” context.

While users may not initially know that their hands serve as triggers for some virtual text overlays, people do tend to use their hands to discover their surroundings. Seeing the overlay once, even if by happenstance, would entice users to try their hands on other objects that could potentially provide more information.

Conclusion

We analyzed ArtAnim’s Walking Through a Pharaoh’s Tomb mixed reality experience, which is, according to the authors themselves, a proof of concept for building more elaborate applications.

We investigated how the application uses multiple physical interfaces and maps the data streams obtained from those interfaces into a virtual space. We then analyzed all this from the point of view of Virtual Subjectiveness (VS) (Parés & Parés, 2006) [1].

We believe this experience and its setting are interesting, but ultimately lacking in depth. The use of virtual reality combined with motion capture, while creative and likely to improve in future applications, feels like a novelty to attract people who would not otherwise visit cultural and heritage spaces. However, if this method allows a greater number of people to engage with cultural and heritage content and institutions, whether virtually or on location, then this novelty is genuinely helpful.

Credits

Written in collaboration with:

References

  1. Parés, N., & Parés, R. (2006). Towards a Model for a Virtual Reality Experience: the Virtual Subjectiveness. Presence: Teleoperators and Virtual Environments, 15(5), 524–538. http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=CF71B881C60ECBB9D3C5CF1D40EBB24?doi=10.1.1.104.8365&rep=rep1&type=pdf.
  2. Chagué, S., & Charbonnier, C. (2015). Digital cloning for an increased feeling of presence in collaborative virtual reality environments, (October), 22–28. https://doi.org/10.15221/15.251.
  3. Chagué, S., & Charbonnier, C. (2016). Real Virtuality : A Multi-User Immersive Platform Connecting Real and Virtual Worlds. In Virtual Reality International Conference (pp. 2–3). https://doi.org/10.1145/2927929.2927945.
  4. [Adam Savage’s Tested]. (2015, August 15). Real Virtuality Multiplayer VR Demo [Video File]. Retrieved from https://www.youtube.com/watch?v=DNMkLgTSn4k.
  5. [Artanim Foundation]. (2015, July 7). Real Virtuality – Siggraph 2015 Immersive Realities finalist – Gameplay footage [Video File]. Retrieved from https://www.youtube.com/watch?v=ur4KQakkmA8.
  6. [Artanim Foundation]. (2015, July 15). Walking through a Pharaoh Tomb – A visit combining Oculus and Mocap [Video File]. Retrieved from https://www.youtube.com/watch?v=iAacQLEFF_Q.
  7. Degrees of freedom. (2016). In XinReality, Virtual Reality and Augmented Reality Wiki. Retrieved from https://xinreality.com/wiki/Degrees_of_freedom. Accessed November 28, 2018.
  8. Muscle. (2018). In Encyclopaedia Britannica Online. Retrieved from https://www.britannica.com/science/muscle. Accessed November 28, 2018.

List of Figures

Fig. 1: Hardware setup.
Chagué, S., & Charbonnier, C. (2016). Real Virtuality : A Multi-User Immersive Platform Connecting Real and Virtual Worlds. In Virtual Reality International Conference (pp. 2–3). https://doi.org/10.1145/2927929.2927945.

Fig. 2: VS using left hand to trigger text overlay, and right hand to move torch to light up wall.
[Artanim Foundation]. (2015, July 15). Walking through a Pharaoh Tomb – A visit combining Oculus and Mocap [Video File]. Retrieved from https://www.youtube.com/watch?v=iAacQLEFF_Q.

Fig. 3: Table of the different mappings.

Fig. 4: User manipulating a box.
[Adam Savage’s Tested]. (2015, August 15). Real Virtuality Multiplayer VR Demo [Video File]. Retrieved from https://www.youtube.com/watch?v=DNMkLgTSn4k.

Fig. 5: Users throwing a torch.
Ibid.

Fig. 6: Static fingers.
[Artanim Foundation]. (2015, July 7). Real Virtuality – Siggraph 2015 Immersive Realities finalist – Gameplay footage [Video File]. Retrieved from https://www.youtube.com/watch?v=ur4KQakkmA8.


