By CDH Guest Author on October 31, 2019
Written by Megan Reusche
This blog series is titled “Extended Reality (XR) Series” because “XR” is a term that encompasses the immersive technologies of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) and allows for a flexible designation of a variety of combinations of virtual and real environments, as well as human-machine interactions. The articles will highlight VR and AR projects spanning various disciplines at UCLA, explore immersive technology as an educational tool, and consider the ethical and technical implementation of these technologies in higher education.
In the first article of this four-part series, HumTech sat down in spring 2019 to talk to Dr. Maja Manojlovic, a UCLA lecturer and Faculty Advisor for Writing II Pedagogy. Her current research examines the effects that VR technologies and cinematic 3D aesthetics have on our embodied experience of temporality and spatiality. She is also working on a few projects that study how VR shapes our identities, empathy, and relationships with others.
The goal of this first article is to provide readers with an all-encompassing introduction to these technologies, particularly to VR, that will prepare them for the upcoming installments.
1. How would you define immersive technology to someone who is unfamiliar with the term?
There are many immersive technologies we’re all very familiar with. For example, we know of various writing technologies, from chalk, brush, and pen all the way to mechanical typewriters, PCs, and smartphones. With the last two, I’ve touched on screen-based technologies, from cinema and giant advertising screens to iPhones and smartwatches, all of which quite literally both enclose and expand our lives. Can you even imagine functioning without immersing yourself in the activities and tools they offer? From playing Red Dead Redemption 2 to taking the SATs or GREs, our interaction with screens is essential to how we move through our lives. Well, immersive media such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) repurpose conventional screens and expand our field of interactive experience.
When immersed in a VR environment, we no longer perceive the screen, although even the slickest headsets, such as the Oculus Quest, weigh quite heavily on our faces. Also, the degree of interactivity with the VR environment depends on the type of VR experience. 360 VR video, for example, emulates the cinematic experience: we sit or stand in physical space and observe the 360 VR environment, but can’t interact with it. There are also VR experiences where we can move in both physical and virtual space as well as interact with objects in the VR environment.
A beautiful example of a 360 cinematic VR experience is Gina Kim’s Bloodless. We’re taken along as the director traces the last moments in the life of a sex worker killed in 1992 in Dongducheon, an area of South Korea where sex workers would meet American GIs. They filmed on location where the crime took place. What remains imprinted in my memory is the moment when I encountered the sex worker face to face, just before she enters the room where she is violently killed. She looked straight into my eyes and then “passed through” me. That is, she moved through the space my virtual body occupied. In this way, her virtual yet palpably embodied figure touched my real embodied and affective self, and affected me just as a real-life, intimate encounter with another person would.
An example of an interactive VR experience, where you move through space and interact with it, is the VR game John Wick. I was introduced to it at the IMAX VR Center on Fairfax Avenue, which unfortunately closed down last year. As you might have guessed, this is an action-packed first-person shooter (FPS). You’re John Wick (yes, Keanu Reeves’s John Wick!) standing on top of a skyscraper while your enemies come at you from everywhere: climbing up from lower floors, shooting at you from a helicopter, crawling out of every corner, and doing their best to kill you. I was crouching to aim and shoot, moving fast, in and out of cover, keeping an eye on my entire surroundings, reloading my gun, ahhh… After five minutes I was ready to be done. By the end of this 15-minute VR game I was completely wiped; my leg muscles were sore for days afterwards. The most intense workout ever! And although I enjoyed the game, I was also viscerally affected by it. I got as close as it gets to the real experience of being a target and fearing for my life while aggressively defending and attacking. I suspect many of those who enjoy this VR experience also end up opposed to guns and war in real life.
I won’t go in depth on AR and MR. Let me just say that the screens used for AR (a smartphone or iPod) and MR (eyewear, i.e., “glasses”) mostly serve as tools for accessing and interacting with virtual objects. In AR, the virtual world is overlaid on the “real” one, while in MR, virtual and “real”-world objects “recognize” one another.
To wrap it up, immersive technologies of VR/AR/MR, along with haptic technologies, are also called Extended Reality (XR). I like this term because it allows for a more flexible designation of a variety of combinations of virtual and real environments, as well as human-machine interactions.
XR also provides a refined definition of immersive technologies: they’re not immersive in the sense of simply offering an escape from reality. Instead, they expand our field of vision beyond the 2D frame by situating our embodied selves in a 360 environment. This 360 VR environment might be similar to the real one, but it isn’t the same. The combinations of virtual and real experiences emerging from XR technologies therefore, and paradoxically, give us an opportunity to confront and re-examine reality. Should we take this opportunity, we might end up with a novel, embodied perspective on the world and our place in it. So XR technologies are both immersive, in the sense of inviting submersion in fictional worlds, and emersive, in the sense that the experiences they generate emerge into the real world.
2. Why is immersive tech important to you?
My interest in immersive tech goes way back. I’m talking about being eight and pretending to go to sleep when my parents said good night, only to then turn on the light and continue reading the book I started earlier. To me, reading wasn’t about the text, about words. It was about the images and vistas emerging in my imagination that I inhabited in a very real way. A world building of sorts. My passion for books — ah, those analog pleasures! — naturally transitioned to film and, later on, to my exploration of how we conceptualize and experience space and time.
Digital technologies, which took over celluloid film, its production, distribution, and aesthetics in the ’90s, made the issues of space-time even more palpable for me. Indeed, my dissertation examined the ways in which filmmakers in the 2000s deploy the digital to reimagine spatiality and temporality in cinema. Specifically, I looked at an international selection of films, such as Russian Ark (Russia, 2002), Charlie and the Chocolate Factory (USA, 2005), Waking Life (USA, 2001), Demonlover (France, 2002), Paprika (Japan, 2007), and Speed Racer (USA, 2008). I argued that digital aesthetics in these films reconfigure our senses, perception, affect, and the way we make sense of the world and ourselves. To explain that, I developed the concept of the interval/interstice, which describes both the experience and the process of temporal suspension and spatial gap that occur when our senses, affect, and cognition are challenged by a (digital) image we cannot immediately recognize by “pasting” it onto our previous experience. I’m now revising this work into a book, Interval/Interstice: The Aesthetics of Digital Cinema.
Now you see my interest in immersive tech is not just an abstract fad. On the contrary, it’s practically autobiographical. I am absolutely fascinated by the potential of XR to both reconfigure and expand our horizons of possible embodied experiences, affectivity, and consciousness. I’m particularly interested in articulating the deep structures of these experiential reconfigurations and how we make sense of them. That said, I also want to point out that I believe XR, as we currently know them, represent only a transitional phase toward ideas, concepts, and tech we are not even imagining yet. As fascinating as these XR technologies are at the moment, they’re also clunky and somewhat ineffective. What we have now, compared to what is to come, is like the 16th-century camera obscura in relation to a 3D 360 4K VR camera today. However, with the exponential development of technology, we won’t be waiting five centuries, but rather a few decades, for these new technologies to take their place in our daily lives.
XR technologies are fascinating. And yet, I’m always reminded of Martin Heidegger’s essay “The Question Concerning Technology,” in which, back in 1954, he invites us to catch “sight of what comes to presence in technology, instead of merely gaping at the technological.” Ever since my first VR experience in 2013, when Nonny de la Peña showed me her project Hunger in Los Angeles, I have known that this technology offers an unprecedented (at least for me) platform for interdisciplinary research into human embodied consciousness, as well as our relationships with one another, other species, and our environment.
How so? Well, let me return to Nonny’s Hunger in L.A. This is a piece of immersive journalism that emplaces the user on a street in downtown LA where homeless people are standing in line, waiting for a meal at a food bank. You begin walking around (in both physical and virtual space) to explore the environment when you suddenly hear yelling and screams (these are documentary recordings of the actual event). As you would in real space, you look for the source of the sound, see the line in front of the food bank, and notice a group of people gathering around something. As you approach, you realize they’ve gathered around a person lying on the ground in a diabetic coma. The surge of emotions you experience, mixed with the urge to help, to do something, is as real as it gets, and just as subjective. People cry, try to physically help, call 911, or talk to people in line to get more information, all in VR. Most important, though, is that once you’ve experienced this piece of immersive journalism in VR, it remains etched into your being. You feel an urge to do something in the real world.
This is where VR gets real. By situating you in a virtual environment where you feel physically present, while you remain aware of your body functioning in the “real” world, VR becomes a technology of self-knowledge. In a unique way, it “trains” your consciousness to revise its habitual “reading” of the input it gets from the nervous system. As with the (once new) digital aesthetics in cinema I described earlier, VR challenges our cognition. It causes us to simultaneously suspend the way we make sense of our being-in-the-world and consider the gap between our “real” and “virtual” experiences.
Ultimately, all XR prompt you to confront yourself. As Hunger in L.A. demonstrates, this self-confrontation happens both in terms of diving into your subconscious and acting externally. XR are neither the first nor the only media with such effects, but they are slightly different. Let me briefly explain why I think so by comparing cinema and VR. If cinema, as the philosopher Henri Bergson pointed out over a century ago, reflected our thought processes, then VR reflects our perceptual processes. While the visual holds primacy in both cinema and VR, the latter elevates the importance of sound as a narrative device beyond what cinema has accomplished. Moreover, VR deploys haptic devices that stimulate the sense of touch. Our perception in VR therefore engages, and is informed by, a fuller sensory palette than in any other medium. In addition, let’s not forget that we are fully emplaced in the VR environment, can physically move in it, and can interact with its objects. All this allows us to examine the world and ourselves in a novel way, through those intervals/interstices (pauses/gaps) in our habitual interpretations of both the virtual and the real.
So basically, XR are important to me because they both reflect and engage our sensorium, perception, emotions, and cognition in ways that not only allow us to explore these processes in depth, but also enable us to consider our existence in a holistic way. Once I’ve learned to think deeply about how I feel, it might be easier to maintain awareness of the other person and respond to their situation in an empathetic way. Once I’ve learned to experience the world as an emotionally textured, 3D, 360 environment, I might be better attuned to interacting with it in a mutually supportive way.
3. What types of projects are you currently working on and what tools do you use to create them?
I am currently working on two interdisciplinary research projects that are VR/AR-related: Tongva VR/AR and Reconnect: The Amazon Medicine Garden. As we speak, I’m focused on Tongva VR/AR, which aims to create VR/AR models of two Tongva villages: Yaanga, one of the largest Tongva villages near downtown Los Angeles, and Koruuvanga in West Los Angeles.
Tongva VR/AR will continue building on the UCLA American Indian Studies Center’s “Mapping Indigenous LA” project. My research will continue to address the issues surrounding the colonization of indigenous identities, languages, architectures, and geographies. However, I anticipate that 3D models of Tongva villages and artifacts in VR/AR environments will offer yet another alternative to the existing indigenous (hi)stories. By creating 3D VR/AR environments, I’m hoping to explore the space of Tongva villages from a phenomenological perspective. I’m interested in how an embodied experience in the intensified space of VR/AR environments might modulate the way users make sense of the Tongva’s lived experience, both individually and collectively. What happens once we can no longer distance ourselves from our object of inquiry, as we would when “reading” a two-dimensional “text” or a map? Do we relate differently to the space of a Tongva village and its culture once we are situated in its three-dimensional environment? If so, might we say that we are “re-embodying” our cognition by incorporating feeling and affect?
In addition, this project will serve as a teaching platform for a Digital Humanities course that folds in the interests of students in the Environmental and Urban Humanities. Students who want to build on Tongva VR/AR once it is available to be experienced can explore numerous avenues of inquiry into the life and culture of the Tongva peoples.
This project was also selected for the 2019-2020 Digital Research Accelerator Program headed by Anthony Caldwell, who is in charge of the UCLA Scholarly Innovation Lab. The program begins in the fall, and I am very excited to collaborate with Anthony and the rest of the DH team. To begin with, Anthony will support me in creating 3D models of the two original Tongva villages, which I then want to turn into an interactive VR experience using the Unity game engine. He will also mentor me in using Vectorworks to create AR content.
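For readers curious what “interactive VR” looks like in practice, the sketch below is not the Tongva VR/AR pipeline itself (which will be built in Unity), but a minimal, browser-based analogue written in TypeScript with the Three.js WebXR API: it places a stand-in artifact in a scene and lets a VR controller grab and release it. All object names here are illustrative placeholders, not assets from the project.

```typescript
// Minimal WebXR sketch: put a placeholder "artifact" in a VR scene and let
// controller 0 grab it while the trigger is held. Not the author's Unity
// workflow; an analogous illustration only.
import * as THREE from 'three';
import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  70, window.innerWidth / window.innerHeight, 0.01, 50
);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true; // turn on WebXR rendering
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer)); // "Enter VR" button

// Stand-in for a scanned village artifact; a real project would load a glTF model.
const artifact = new THREE.Mesh(
  new THREE.BoxGeometry(0.2, 0.2, 0.2),
  new THREE.MeshStandardMaterial({ color: 0x8b5a2b })
);
artifact.position.set(0, 1.2, -0.5);
scene.add(artifact);
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1));

// Controller 0: squeezing the trigger re-parents the artifact to the hand
// (so it follows the controller); releasing returns it to the scene.
const controller = renderer.xr.getController(0);
scene.add(controller);
controller.addEventListener('selectstart', () => controller.attach(artifact));
controller.addEventListener('selectend', () => scene.attach(artifact));

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

In a Unity workflow like the one described above, this kind of grab-and-release interaction would typically be handled by the engine’s built-in XR interaction components rather than hand-rolled event listeners.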
I’ve already mentioned the research questions I’m asking with Tongva VR/AR, but I think the project is also important from a pedagogical perspective. Tongva VR/AR is set up for a student-centered, project-based pedagogy that engages students in decision-making about the direction of their research as well as in hands-on activities, such as creating in VR/AR. This generates a different kind of classroom environment: less hierarchical, with an emphasis on experimentation, exploration, and creativity. What happens when, instead of producing and interacting with 2D texts and images, both students and professors become directly implicated in classroom content situated in a 3D environment?
Reconnect: The Amazon Medicine Garden is another project I’ve been working on since I participated in the 2018 Oculus Launch Pad. This project is about preserving the living space and culture of the Yawanawá, an indigenous tribe of the Amazon, by creating a garden of their medicinal plants. I would like to create an interactive VR archive of these plants and the individual and cultural histories they support. For the Yawanawá, plants are the archives of their knowledge and culture. In this sense, this VR project would not only archive the plants for the Yawanawá, but also offer a model for a different way of relating to one’s environment, one that acknowledges its holistic vitality rather than merely viewing it as a resource to be used.
4. Can you tell me more about how you incorporate 3D aesthetics and VR into your classes?
I just finished teaching a course in the Professional Writing Minor called Trends in Multimedia Environments: Essentials of Videogame Rhetoric and Design. In small groups, students created incredible pitches for their original games; one of the groups created a VR game. I am hoping to display these truly outstanding projects on a website supported by HumTech.
This summer [2019], I am teaching a Digital Humanities course on Emerging Media: XR Technologies, Immersive Environments, and Embodied Experience. We explore how we engage our senses, perception, and emotions to make sense of various VR environments. We will also consider the ethical implications of their effects on both our personal identity and social interactions. Students will visit 3-D Space: The Center for Stereoscopic Photography, Art, Cinema, and Education, curated by Eric Kurland. They will create in Tilt Brush VR under the guidance of painter Mariam Oskoui, as well as film with a 360 GoPro camera and learn how to stitch images and record sound in a workshop led by the 360/VR producer and director Nir Netzer. Overall, students will interact and experiment with 360 VR films, immersive journalism, and VR games. By the end of the course, they will collaboratively develop and design a professional pitch-deck for a VR experience prototype. Again, I am hoping to be able to display their work on a website.
I am also developing a course on Alternative Pedagogies for the Graduate Certificate in Writing Pedagogy. This course will deal specifically with multimodal approaches to teaching afforded by the immersive environments of XR. As interest in pedagogies enabled by XR grows internationally, I’m looking forward to having our graduate students experiment with new approaches to teaching.
5. Which tools would you list in this article for professors and grad students who might be looking to add immersive technology into their research or classrooms?
I’ve been very fortunate to have the support of HumTech staff for both my Videogame Rhetoric and Design course and the upcoming VR courses. Tom Garbelotti and Andrew Jessup built a “cart” specifically for my class. It includes a computer, a PS4, and an Oculus Rift S. In addition to these, my DH 150 course in VR will use a Rylo 360 camera and a Zoom H4n Pro Handy Recorder to make 360 videos and record binaural sound. 360 video is one of the easiest ways to begin exploring immersive environments, both conceptually and practically. Having recently tried out the Oculus Quest, I think that is definitely the way to go!
6. Have you experienced any challenges and constraints with the equipment you use?
Let me just say that I wouldn’t be able to do this without HumTech and their incredible support, both in building the tech for my needs and in brainstorming solutions for the pedagogical issues that can arise when using these technologies.
7. How is the project planning coming along for bringing the XR Initiative to UCLA? What do you envision the XR Initiative to look like and what impact do you hope it has on students and faculty at UCLA?
Ideally, the XR Initiative will provide a unique platform for people with diverse interests from different areas of campus to come together, creating a point of convergence for interdisciplinary dialogue. We still need to develop an appropriate vocabulary to talk about XR technologies and the experiences they generate. And since XR are here to stay, it is important to consider their ethical implications and to talk about both their beneficial and their perilous effects.
As far as short-term goals are concerned, I hope the XR Initiative continues to have support in organizing monthly or bi-weekly Open House events, where various UCLA labs with XR equipment open their doors to visitors. Over the last couple of months we held Open Houses mostly for faculty, so they could get to know one another and spark new conversations around these technologies and their uses. I am hoping that, starting this fall, we can continue with these events but also invite both graduate and undergraduate students to join them.
Doug Daniels, the driving force behind organizing these events, has also kept a blog record of these Open Houses. It’s on our website; everyone should check it out and keep an eye out for future events! Also, if you’re interested in the current and future plans for XR on campus, or want to contribute to the XR Initiative in some way, feel free to contact me at mmanojlo@ucla.edu and I’ll be happy to put you on the mailing list.
Besides continuing to expand the website content and making it both functional and visually appealing, we will also begin organizing informal gatherings, such as mini-workshops and mini-symposia, where faculty and students can get to know one another, exchange and generate ideas, present their work (both technical and research), and get friendly feedback on work in progress.
And finally, we’re planning to organize a larger conference that brings together researchers, developers, artists, entrepreneurs, and industry professionals to think about, build, and envision an XR that supports an environmentally sustainable and culturally inclusive future. I’d be very excited to see UCLA buzzing with XR-related conversations and activities involving diverse, international participants.
8. Any final thoughts?
XR technologies reshuffle conventional approaches to research and its implementation, both in our classrooms and in communities outside of academia. In other words, they give academic researchers an opportunity to become more intensely involved in industry as well as in raising the quality of education in underprivileged communities. In the latter case, we would be making content and methods available to students who previously had no access to them.
One of my Oculus Launch Pad colleagues, Kai Frazier, brings VR versions of national museums and galleries to public schools whose students otherwise wouldn’t have access to such content.
I experienced Nonny de la Peña’s project After Solitary, which gave me the opportunity to stand next to an inmate in solitary confinement as he talked about his experience and how it affected him. I was emplaced in the claustrophobic environment of the cell and felt his embodied presence and the timbre of his voice resonating in me. This would not have been possible without VR.
This is precisely why XR is such an incredible experiential machine, and why we need it in education: it allows us to experience, explore, and experiment with places, feelings, and objects that we otherwise might never have come into contact with or thought about in a meaningful way.
In the next article, we speak with Francesca Albrezzi, a Ph.D. student in UCLA’s Department of World Arts and Cultures/Dance, about the role of VR in higher education and the easy-to-use tools that professors and students can use to create VR projects.
Photo courtesy of Unsplash