
BrainFacts.org

This interactive brain model is powered by the Wellcome Trust and developed by Matt Wimsatt and Jack Simpson; reviewed by John Morrison, Patrick Hof, and Edward Lein. Structure descriptions were written by Levi Gadye, Alexis Wnuk, and Jane Roskams.

Copyright © Society for Neuroscience (2017). Users may copy images and text, but must provide attribution to the Society for Neuroscience if an image and/or text is transmitted to another party, or if an image and/or text is used or cited in User’s work.



Nature Video | 24 July 2019

Exploring the human brain with virtual reality

  • Shamini Bundell


Virtual-reality technology is being used to decode the inner workings of the human brain. By tasking people and rodents with solving puzzles inside virtual spaces, neuroscientists hope to learn how the brain navigates the environment and remembers spatial information. In this documentary, Shamini Bundell visits three neuroscience labs that are using virtual-reality technology to explore the brain. She uncovers the many benefits — and unsolved challenges — of performing experiments in virtual worlds.

For more stories at the cutting edge of neuroscience, including a forgotten aspect of memory that is challenging conventional thinking, visit Nature Outlook: The brain.

doi: https://doi.org/10.1038/d41586-019-02154-x

This Nature Video is editorially independent. It is produced with third-party financial support. Read more about Supported Content.



8K Brain Tour: Interactive 3D visualization of terabyte-sized nanoscale brain images at 8K resolution

Creative Commons Attribution 4.0 International

Kaori Kikuchi, Nickolaos Savidis, Yosuke Bando, Kazuhiro Hiwada, Mika Kanaya, Takahito Ito, Shoh Asano, Edward Boyden


8K Brain Tour is a visualization system for terabyte-scale, three-dimensional (3D) microscopy images of brains. High-resolution (8K, or 7680 x 4320 pixels), large-format (85”, or 188 cm x 106 cm), touch-sensitive interactive rendering allows viewers to dive into massive datasets capturing large numbers of neurons and to investigate nanoscale and macroscale structures of those neurons simultaneously.


The image shown on the 8K display is a rendering of a slice of the part of the mouse brain called the hippocampus. The specimen was physically expanded 4.5-fold using Expansion Microscopy before being imaged under a light-sheet microscope, resulting in a 3D image consisting of 25,000 x 14,000 x 2,000 voxels, each representing a volume of around 50 x 50 x 200 nanometers. The dataset size is 5 terabytes.


A high-resolution, large-scale dataset like this calls for high-resolution visualization; otherwise, viewers would have to choose between zooming into a small area of the data to see it in detail or zooming out to look at the entire dataset at low resolution. With 8K rendering, this trade-off is significantly relaxed. As an example, suppose the entire image above is shown on an 8K display: without zooming in, the small region in the yellow rectangle already reveals details as shown below. Viewers can therefore observe microscopic thorny structures (called dendritic spines, which are where synapses are located) without losing sight of the macroscopic layered structure of neurons in the hippocampus.
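As a rough back-of-the-envelope illustration (our arithmetic, based on the slice and display dimensions quoted above), the linear downsampling factor needed to fit one full 25,000 x 14,000-pixel slice on screen is:

\[
\text{8K } (7680 \times 4320):\ \max\left(\frac{25000}{7680}, \frac{14000}{4320}\right) \approx 3.3
\qquad
\text{full HD } (1920 \times 1080):\ \max\left(\frac{25000}{1920}, \frac{14000}{1080}\right) \approx 13
\]

So an 8K canvas retains roughly four times more linear detail than a full-HD one when the whole slice is in view.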


In order to realize interactive visualization of terabytes of data at a high resolution, we developed a volume renderer, named BrainTour, that takes full advantage of graphics processing units (GPUs) and solid-state drives (SSDs) by optimizing data transfer from SSDs to keep feeding data to GPUs. As a result, BrainTour requires only a single desktop computer with commodity hardware as listed below.

  • 1x CPU with 6 cores at 3.0 GHz
  • 2x GPUs each with 11 GB video memory

The BrainTour renderer is a Windows application that can read 3D images in TIFF format. It first converts data into a preprocessed format offline and then performs interactive visualization.
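The text does not spell out BrainTour's internals, but the core pattern it describes, keeping the GPUs fed by overlapping SSD reads with rendering, is a classic bounded producer-consumer pipeline. The sketch below illustrates that pattern only; the brick size, file names, and the print statement standing in for a GPU upload are our own illustrative choices, not BrainTour code.

```python
# Producer-consumer sketch: a disk thread prefetches volume "bricks" from
# SSD into a bounded queue while the render loop consumes them, so the
# consumer (standing in for the GPU) is never starved waiting on I/O.
import os
import queue
import tempfile
import threading

import numpy as np

BRICK_SHAPE = (64, 64, 64)  # voxels per brick; illustrative, not from the text
N_BRICKS = 16

# Write synthetic bricks to disk so the example is self-contained.
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(N_BRICKS):
    path = os.path.join(tmpdir, f"brick_{i:03d}.npy")
    np.save(path, np.random.randint(0, 256, BRICK_SHAPE, dtype=np.uint8))
    paths.append(path)

bricks = queue.Queue(maxsize=4)  # bounded queue caps prefetch memory use


def prefetch(brick_paths):
    """Disk thread: keep the queue topped up so rendering never waits on the SSD."""
    for p in brick_paths:
        bricks.put(np.load(p))  # blocks while the queue is full
    bricks.put(None)            # sentinel: no more bricks


threading.Thread(target=prefetch, args=(paths,), daemon=True).start()

while True:
    brick = bricks.get()
    if brick is None:
        break
    # Stand-in for "upload brick to GPU and composite it into the frame":
    print("rendered brick with mean intensity", brick.mean())
```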


Here is another specimen example, showing the entire brain of a fruit fly, expanded and captured using a lattice light-sheet microscope. This 11-terabyte dataset consists of 15,000 x 28,000 x 6,600 voxels at a voxel size of 24 x 24 x 44 nanometers. A fly-through movie was created using the BrainTour renderer.


Teaching the Virtual Brain

  • Original Paper
  • Open access
  • Published: 23 May 2022
  • Volume 35, pages 1599–1610 (2022)


  • Javier Hernández-Aceituno (ORCID: orcid.org/0000-0001-8885-0605) ¹
  • Rafael Arnay ¹
  • Guadalberto Hernández ²
  • Laura Ezama ³
  • Niels Janssen ³


As a complex three-dimensional organ, the inside of a human brain is difficult to properly visualize. Magnetic Resonance Imaging provides an accurate model of the brain of a patient, but its medical or educational analysis as a set of flat slices is not enough to fully grasp its internal structure. A virtual reality application has been developed to generate a complete three-dimensional model based on MRI data, which users can explore internally through random planar cuts and color cluster isolation. An indexed vertex triangulation algorithm has been designed to efficiently display large amounts of complex three-dimensional vertex clusters in simple mobile devices. Feedback from students suggests that the resulting application satisfactorily complements theoretical lectures, as virtual reality allows them to better observe different structures within the human brain.


Introduction

The human brain is a complex three-dimensional object that resides inside our cranium. Our understanding of the brain has increased enormously with the advent of recent neuro-imaging techniques such as Magnetic Resonance Imaging (MRI) [1, 2]. The MR technique creates 3D matrices that contain signal intensity values determined by the specific magnetic properties of the tissues in particular locations of the brain. The most commonly used method for visualizing these 3D matrices relies on displaying 3D versions of the images on a 2D computer monitor [3].

This standard method of visualization is used for diagnosis and prognosis in clinical contexts, to study brain function in research contexts, and to study the underlying principles of neuro-anatomy and physiology in educational contexts. However, despite its widespread use, this visualization method is problematic because the 2D images do not preserve accurate depth information, and do not permit easy interaction. Consequently, improving methods for visualization may be beneficial to this wide range of contexts.

This work presents a mobile application to expedite the teaching of brain anatomy by visualizing MR images and the different brain structures using Virtual Reality (VR). Although techniques for VR have been around for years [4], recent technological advancements in small-scale computing have made VR accessible to the masses. Specifically, modern mobile phones now possess sufficient computing power to render a fully interactive VR experience [5]. Our particular VR setup relied on a low-cost solution: a standard Android phone (LG Nexus 5x with Android 6.0 Marshmallow) combined with a VR headset [6]. We used the Unity3D platform as a local rendering engine [7].

Related Work

Visualizing the human brain using VR is not new; early approaches date back to 2001. The advantages of using VR are that it preserves accurate depth information, and that it potentially allows for a natural interaction with the visualized object. Zhang et al. [8] displayed diffusion tensor magnetic resonance images using a virtual environment, which consisted of an \(8\times 8\times 8\) foot cube with rear-projected front and side walls and a front-projected floor; in this setup, the user wore a pair of LCD shutter glasses which supported stereo-viewing. Ten years later, Chen et al. [9] presented a virtual reality visualization method which used a two-screen immersive projection system, required passive projection and glasses for a stereoscopic 3-D effect, and involved an Intersense IS-900 6-DOF tracking system with head-tracker and wand.

As technology evolved, VR systems were integrated into ever smaller devices, such as mobile phones and virtual reality glasses. Kosch et al. [10] used VR as input stimulus to display real-time, three-dimensional measurements using a brain–computer interface. Soeiro et al. [11] proposed a mobile application which used virtual and augmented reality to display the human brain and allowed the user to show or hide complete regions. Prior applications primarily convert the MR images to surface meshes and do not permit the examination of the internal structure of the brain. The application presented in this paper allows the user to make arbitrary cuts that reveal the underlying brain structure directly from the MR image, and to generate voxel clusters from arbitrary seed points, with high detail and fidelity to the original data.

The benefits of the educational application of VR systems have been extensively studied before: Schloss et al. [12] included audio to narrate information as part of guided VR neuroanatomy tours, and Stepan et al. [13] used computed tomography and highlighted the ventricular system and cerebral vasculature to create a focused interactive model. Several different technologies have also been used to produce educational VR experiences, differing in the origin of the presented anatomical data (magnetic resonance [14], dissection [15]), the hardware which runs the applications (HTC Vive [16], Dextrobeam [17]), or the software which presents the simulation (virtual presentations [18], fully interactive applications [19]). All works, however, agree that allowing students to study anatomical models in an interactive virtual environment greatly improves their understanding of the matter. The presented work builds upon this concept and introduces a new slicing feature that allows students to explore the human brain in greater depth.

This paper is organized as follows: “Educational Application” explains the educational goals of the presented work; “Virtual Brain in the Classroom” then details the protocol used to introduce students to the developed application, which is then described in “Material and Methods”, along with the actions a user can perform in it; “Calculation” then presents the algorithms upon which the application is based; “Results and Discussion” studies the user feedback regarding the presented work; finally, “Conclusions” provides a conclusion on the usefulness of the application.

Educational Application

The VRBrain application will be used in two separate courses. The first course, Biological Psychology (BP), forms part of the Master's degree in Biomedicine at the University of La Laguna. The course consists of 3 ECTS credits (European Credit Transfer and Accumulation System) and takes place across a period of 3 weeks in daily 2-hour sessions. The course is typically taken by students who intend to pursue a doctoral degree in the PhD program in Medicine, where a Master's degree is required. The focus of the course is on how the various human cognitive and behavioral skills are implemented in the brain and how they are affected by disease.

The course is divided into two main sections: One section that examines these issues using the Magnetic Resonance Imaging (MRI) technique and one that examines these issues using the Electro-Encephalography (EEG) technique. These two techniques permit insight into brain structure and function and allow for the examination of the brain under pathological circumstances. The defined skills that students are required to have mastered at the completion of the course are the following:

  • Understand basic anatomical organization of the human brain
  • Understand how brain pathology can affect basic functions like memory and language
  • Understand how brain pathology can affect basic functions like attention and perception

The first section of the course explains the MRI technique and details how this tool has been used to understand basic functions like memory and language. The course also examines how brain pathology that affects these functions can be elucidated using MRI. For example, abnormal aging and Alzheimer's Disease are commonly known to be related to memory problems, and MRI has played a pivotal role in showing that such memory dysfunctions are associated with reductions in gray matter volume that start in a specific brain region called the hippocampal formation. In addition, another brain pathology, cerebral stroke, sometimes leads to a specific language problem called Broca's aphasia, and MRI has shown that such problems are associated with lesions in a part of the brain called the left inferior frontal gyrus.

In order to understand these issues, students should learn the brain’s division into its main structures (the cerebral lobes, the ventricles, the meninges, etc.), as well as know some of the finer details of the organization of the brain (e.g., the parcellation of the cerebral cortex into its main areas, frontal lobe, temporal lobe, etc.). The BP course is then focused on the hypothesized function of these different parts of the brain and the role they may play in pathology.

In the second part of the course, the EEG technique will be used to address similar issues in the context of attention and perception. However, given that the EEG technique cannot easily yield images of the internal structure of the brain, the VRBrain application will be primarily used in the first section of the course. The specific structure of this first section is as follows:

  • Basic concepts in functional Magnetic Resonance Imaging (fMRI): physical basis, biological basis, hands-on experience (7 hours)
  • Learning and Memory: aging, Alzheimer's Disease (4 hours)
  • Language: language disorders and the brain, language lateralization, recent evidence (4 hours)

The section on fMRI and the hands-on experience are intended to use the VRBrain application.

The second course in which we intend to use the VRBrain application is the Undergraduate Thesis Project (UTP), which is mandatory under the Spanish university system. The UTP is a 6 ECTS credit course which takes place in the second semester of the fourth year of the Psychology Degree at the University of La Laguna. The aim of this course is to allow students to develop their own interest in a given research topic. They work together with a professor to establish a research idea and then do autonomous work to perform the research and write the thesis.

Given the large number of final-year students in the Psychology degree who need to complete the UTP, students choose from a number of research themes proposed by the professors in the Psychology department who teach the UTP course. Some of these research themes are:

  • Educational Psychology
  • Personality Psychology: mindfulness, stress, anxiety
  • Basic Psychology: neuro-imaging, neuro-anatomy, language, memory

The VRBrain application will be used within the theme related to neuro-imaging and neuro-anatomy. The research topics in this theme involve the investigation of the brain and its pathology, so it is useful for students to understand the basic neuro-anatomy of the brain. Here the VRBrain application will be very useful.

Virtual Brain in the Classroom

As we pointed out in “Educational Application”, the main goal of the application is to increase the understanding of neuro-anatomy in students at both the undergraduate and master's degree levels. In the classroom setting we will implement the following protocol for using the VRBrain application. The duration of the entire protocol is around 2 hours.

First, we will divide the students in the class into small groups of 3 to 4 people. This ensures that the application is used by everyone, including those who do not have an Android device. In addition, given that the number of VR headsets is limited, this also ensures that everyone will be able to use the application.

Second, given the complexity of the application, we will first give students time to become familiar with it. This means they are free to start up the application and explore the various menus using the Bluetooth controller. In the next section they are required to complete a small quiz in which we present a number of targeted questions that require finding and understanding basic neuro-anatomy of the human brain. This mainly relies on the first functionality of the application (see “Material and Methods”). For example, we have questions such as:

  • Describe in anatomical terms the location of the human hippocampus in relation to the amygdala.
  • Does Broca's region lie in the frontal or temporal lobe?
  • What is the main function of the occipital lobe?

Answering these questions relies on a 3D understanding of the brain, as well as having read the information that appears in the textbox when a given structure is highlighted within the VRBrain application.

In addition, we also ask students to examine the internal structure of the brain using the second functionality of the application (see “Brain Slicing” in “Material and Methods”). Within this functionality we will ask questions such as:

  • Estimate the distance from the superior part of the brain to the lateral ventricle in centimeters.
  • Find the hippocampal area by slicing the brain in coronal slices.
  • Find the corpus callosum by slicing the brain in sagittal slices.

Answering these questions requires inspecting the internal structure of the brain, which is implemented in the VRBrain application. We hope that by studying these questions in the classroom the students will develop further insight into the 3D structure of the brain and improve their understanding of human neuro-anatomy. This will in turn improve their understanding of the larger topics related to brain function and pathology in the respective courses.

Material and Methods

The application is divided into two main parts. First, a series of functionalities have been implemented to color and highlight different parts of the brain and show information about them. Second, a functionality has been developed in which the user can make cuts in the brain in planes orthogonal to the point of view. In this application, the user can interact with menu entries by looking directly at them for a short period of time. The analog stick is used to rotate the virtual representation of the brain. Figure 1 shows the main menu of the presented application.

Figure 1: Main menu of the Virtual Brain application

Brain Slicing

Brain Slicing is a functionality for studying the internal anatomy of the brain. The user can rotate the view around the virtual brain and perform cuts in a plane orthogonal to the point of view. The virtual representation of the brain is made from MRI data that must be preprocessed to generate both the internal information of the brain and the cortical surface. In the next sections, the preprocessing step and the calculations necessary to carry out the cuts, both in the cortex and in the internal representation of the brain, are detailed.

Preprocessing

In the first step of the process, the raw MRI data obtained from a patient scan is stored in the standard Digital Imaging and Communications in Medicine (DICOM) format. As this image format generally does not permit easy manipulation, all DICOM images were transformed into a data format called the Neuroimaging Informatics Technology Initiative (NIfTI) format [20]. The NIfTI file format includes the affine coordinate definitions relating voxel index to spatial location and codes to indicate the spatial and temporal order of the captured brain image slices. Although the developed application accepts files of any size, the default resolution of the examples in the presented work is \(256\times 256\times 128\) voxels.
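As an aside, the NIfTI fields described above are easy to inspect with nibabel, a widely used Python NIfTI reader (the application itself is built on Unity3D, so this is not the authors' code, and the file name is illustrative):

```python
# Minimal sketch: read a NIfTI volume and map a voxel index to scanner space.
import nibabel as nib
import numpy as np

img = nib.load("brain.nii.gz")  # illustrative path to a NIfTI file
data = img.get_fdata()          # e.g. a 256 x 256 x 128 intensity array
affine = img.affine             # 4x4 matrix: voxel index -> spatial location (mm)

# Homogeneous coordinates turn voxel index (i, j, k) into a position (x, y, z):
i, j, k = 128, 128, 64
x, y, z = (affine @ np.array([i, j, k, 1.0]))[:3]
print(data.shape, (x, y, z))
```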

This NIfTI file is first processed through the BrainSuite cortical surface identification tool, which produces a three-dimensional vertex mesh of the brain cortex [21]. This step is necessary because a raw representation of the voxels of a NIfTI file most commonly offers a dull and unrealistic appearance. However, in order to properly display any segmentation of a brain, both the cortical and inner data are required; therefore, both the cortex mesh and the voxel data matrix are loaded into the visualization program.

The user can then freely rotate the view around the virtual representation of the brain, and they may choose to perform three different actions: cutting off a section by defining an intersection plane, isolating a specific same-colored region of the brain, and restoring the whole brain to its original state. “Calculation” explains how these operations are executed. The flowchart of the presented application is displayed in Fig. 2.

Figure 2: Flowchart of the brain slicing functionality

A second mesh is created when a cut or an isolation is produced, showing the faces inside the brain that become visible (Fig. 3). A three-dimensional mesh normally contains the locations of all vertices, some metadata regarding their normal vectors, colors and/or texture mapping, and a list of triangles, which describe the connections between vertices in order to form visible faces.

Figure 3: Cortex (C) and inner mesh (M) of the virtual brain

The vertices of the inner brain mesh are the centers of all the voxels provided by the NIfTI file, and their color is the gray level defined by their fMRI value. To generate a clearer image, shading is not taken into account, so the normal vectors of the vertices are unnecessary and ignored. Visible faces only appear at the edge between a visible region of the brain and a hidden one, so the triangles of the inner mesh are calculated every time the user produces a cut or an isolation, as explained in “Calculation”.

Calculation

To select a plane with which to cut off a portion of the brain, a point in space P and a normal vector N are needed. Both elements are extracted from the point of view of the user, relative to the center of the brain: the orientation vector of the camera in the scene equals \(-N\), while P is located at the center of the closest active brain voxel on which the user focuses their gaze (Fig. 4). Point P can also define a seed voxel to isolate a same-colored region of the brain.

Figure 4: Plane point (P) and normal vector (N), as defined by the user camera (C)

Once the user selects a cut plane, all voxels of the virtual brain are classified according to their relative position. Let Q be the center of a voxel; its signed distance to the plane is calculated as the scalar product \(N\cdot \left( P-Q\right)\). If this value is negative, the voxel is located between the cut plane and the user camera and must be removed.

The cortex mesh is also updated every time a brain section is cut off. Removing its vertices is not necessary, since they will not be visible unless they are referenced by at least one triangle. Therefore, when the user defines a cut plane, only the list of triangles of the mesh is updated, by removing every triangle which contains one or more vertices located between the plane and the user camera (Fig. 5).

Figure 5: Examples of arbitrary plane cuts
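In NumPy terms, the two updates just described amount to a signed-distance test per voxel and a triangle filter on the cortex mesh. A minimal sketch, with our own variable names and random stand-in geometry (the paper implements this inside Unity3D):

```python
# Sketch of the cut-plane updates: classify voxels by signed distance to the
# plane, then drop cortex triangles that reference any removed vertex.
import numpy as np

P = np.array([0.0, 0.0, 0.0])  # point on the cut plane (the gazed-at voxel)
N = np.array([0.0, 0.0, 1.0])  # plane normal (opposite of the camera direction)

# 1) Voxel classification: signed distance N . (P - Q) for each voxel center Q.
Q = np.random.rand(1000, 3) * 2 - 1     # stand-in voxel centers
signed_dist = (P - Q) @ N
visible = signed_dist >= 0              # negative: between plane and camera, removed

# 2) Cortex mesh update: keep only triangles whose three vertices all survive.
verts = np.random.rand(500, 3) * 2 - 1            # stand-in mesh vertices
tris = np.random.randint(0, 500, (900, 3))        # vertex indices per triangle
vert_visible = ((P - verts) @ N) >= 0
tris_kept = tris[vert_visible[tris].all(axis=1)]  # drop triangles crossing the cut

print(visible.sum(), "voxels kept,", len(tris_kept), "triangles kept")
```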

To calculate the triangles of the inner brain mesh, all \(2\!\times \!2\!\times \!2\) vertex neighborhoods of the inner brain mesh are studied individually. Since visible faces only form at the edge between visible and invisible regions of the brain, triangles will only connect visible vertices that are close to at least one hidden vertex (Fig. 6). A neighborhood contains only 8 vertices, each of which can be either visible or invisible, so the number of possible face combinations per neighborhood is \(2^8\). To decrease calculation time during execution, these combinations are precalculated, as shown in Algorithm 1, and reused for each vertex neighborhood of the virtual brain.

Algorithm 1 (shown as an image in the original article)

Figure 6: Examples of triangulation neighborhoods, where white vertices are invisible and dark vertices are visible
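Algorithm 1 itself appears only as an image in the original, so it is not reproduced here. The sketch below illustrates just the table-driven idea the text describes: enumerate all \(2^8\) visibility patterns of a \(2\times 2\times 2\) neighborhood once, and record for each pattern which visible corners border a hidden corner (only those can spawn faces). The reduction to corner lists, rather than full triangle lists, is our simplification.

```python
# Precompute, for each of the 256 visibility masks of a 2x2x2 neighborhood,
# the visible corners adjacent to at least one hidden corner.
import itertools

CORNERS = list(itertools.product((0, 1), repeat=3))  # the 8 corners of a cell


def adjacent(a, b):
    """Corners sharing an edge differ in exactly one coordinate."""
    return sum(x != y for x, y in zip(a, b)) == 1


TABLE = {}
for mask in range(256):  # one entry per 8-bit visibility pattern
    visible = [c for i, c in enumerate(CORNERS) if mask >> i & 1]
    hidden = [c for i, c in enumerate(CORNERS) if not mask >> i & 1]
    # Face-generating corners: visible and next to at least one hidden corner.
    TABLE[mask] = [c for c in visible if any(adjacent(c, h) for h in hidden)]

# At render time each neighborhood is reduced to its 8-bit mask and the
# precomputed entry is reused instead of re-deriving the faces:
print(TABLE[0b00001111])  # one half of the cell visible, the other hidden
```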

Once the inside of the brain becomes visible, the user can select an isolation seed by focusing their gaze on a specific brain voxel. When this happens, every voxel in the brain is marked as invisible; then, the visibility of the seed and of every neighboring voxel with a similar enough gray level, given a threshold, is restored, as shown in Algorithm 2. The result is a single cluster of voxels of similar color (Fig. 7). The cortex is completely hidden in this situation, so that the cluster can be seen clearly, and the triangulation process described in Algorithm 1 creates a visible mesh around the isolated voxels.

Algorithm 2 (shown as an image in the original article)

Figure 7: Example of a cluster of same-colored voxels
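Algorithm 2 is likewise shown only as an image, but the prose above reads as a standard flood fill. A plain sketch of that reading follows; the 6-connectivity and the comparison against the seed's gray level are our assumptions, not details confirmed by the text:

```python
# Flood-fill sketch of the isolation step: hide everything, then restore the
# seed and every connected voxel whose gray level is within a threshold.
from collections import deque

import numpy as np

vol = np.random.rand(64, 64, 64)  # stand-in gray-level volume
seed = (32, 32, 32)
threshold = 0.1                   # similarity threshold (our choice)

visible = np.zeros(vol.shape, dtype=bool)  # step 1: mark every voxel invisible
visible[seed] = True
frontier = deque([seed])
while frontier:
    x, y, z = frontier.popleft()
    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
        n = (x + dx, y + dy, z + dz)
        inside = all(0 <= c < s for c, s in zip(n, vol.shape))
        if inside and not visible[n] and abs(vol[n] - vol[seed]) <= threshold:
            visible[n] = True  # similar enough: the voxel joins the cluster
            frontier.append(n)

print(visible.sum(), "voxels in the isolated cluster")
```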

Finally, the restoration action simply returns all voxels back to their original state and resets the triangles of the cortex mesh.

Paint Areas and Selection Tool

The application also includes functionalities to paint areas of the brain, to highlight both internal and external structures and to display information about them. Virtual reality is used so that the user can better appreciate the shape and spatial arrangement of these structures within the brain.

Selection Tool

The selection tool allows the user to select a part of the brain and visualize its shape and location; information about its functions is also displayed. To select a region, the user must first select the zone in which it is located, in a menu on the left side. Then, a menu is displayed to the right of the user, listing the structures which the selected zone contains. Finally, the selected area is shown in green inside a semitransparent brain, as seen in Fig. 8. Figure 9 shows a flowchart of this functionality and Fig. 10 shows the different interfaces which the user can access in the selection tool environment.

Figure 8: Selected area of the brain shown in green inside a semitransparent brain

Figure 9: Flowchart of the Selection tool

Figure 10: Selection tool interface: area where information is displayed (A), zone selection menu (B) and area selection menu (C)

Paint Areas

The functionality to paint areas allows the user to visualize different cortical areas of the brain in color. The interface is composed of two main buttons that activate two different color schemes: one to visualize the hemispheres and the other to show the major lobes. In addition, the user has access to buttons to color each area individually. If the area is already colored, clicking the button turns it semi-transparent, so that the internal structure of the brain is exposed. Figures 11 and 12 show the interface and the flowchart of this functionality, respectively.

Figure 11: Interface of the Paint areas functionality

Figure 12: Flowchart of the Paint areas functionality

Results and Discussion

The developed VR application was presented to 32 students aged between 19 and 37, and their opinions on its performance and usefulness were collected in an anonymous five-point Likert-scale satisfaction questionnaire [22]. Table 1 shows the mean, standard deviation, and \(95\%\) confidence interval of the questionnaire results.
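Table 1 itself is not reproduced in this text, but for reference, the summary statistics it reports can be computed from raw Likert responses as follows (the responses below are made up for illustration):

```python
# Mean, sample standard deviation, and t-based 95% CI for Likert responses.
import numpy as np
from scipy import stats

responses = np.array([5, 4, 5, 3, 4, 5, 4, 4, 5, 3])  # illustrative 1-5 ratings
mean = responses.mean()
sd = responses.std(ddof=1)  # sample standard deviation
ci_low, ci_high = stats.t.interval(0.95, df=len(responses) - 1,
                                   loc=mean, scale=stats.sem(responses))
print(f"mean={mean:.2f}, sd={sd:.2f}, 95% CI=[{ci_low:.2f}, {ci_high:.2f}]")
```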

These results show that students found the application helpful in their learning process, as reflected by their opinions of the “Paint areas” and “Selection tool” functionalities (means above 4.44). This is in line with previous works, which found that the usage of virtual reality significantly improves test grades [17], since it helps students to better understand the three-dimensional structures of the human brain [18], also improving their satisfaction and decreasing their reluctance to learn neuroanatomy [14].

The “Brain slicing” option scored only an average of 3.33 out of 5. This tool was not originally designed as an educational device, but as a way for medical experts to explore the brain of a real patient and search for anomalies; as such, students found it too complicated to use and not instructive enough. Further iterations of this work may attempt to adapt this option to increase its formative potential.

Students also mostly agree that the presented application should be used in future courses, and they find its usage easy and intuitive, with the “Brain slicing” option again scoring lower than the other functionalities.

Conclusions

A VR brain exploration application has been developed as a medical and educational tool. The presented system builds a three-dimensional brain model from MRI data and a basic cortex model, then allows the user to cut slices off in order to study its interior and to isolate vertex clusters by color. Students can also highlight and analyze different areas of the brain in order to complement their anatomical knowledge.

A satisfaction questionnaire showed very positive feedback from the students who tested the application; they reported that the educational side of the presented work was very useful to them, as it helped them better understand the theoretical explanations provided by the teacher.

The results obtained in the present study fit well with previous works, such as [13, 14, 16, 17], or [19], but the implemented virtual experience allowed for a greater degree of interaction than Schloss et al. [12], Lopez et al. [18] and de Faria et al. [15], due to its unique vertex triangulation algorithm and novel exploration tools, which may also be used for medical and non-educational purposes. Based on student feedback, further iterations of the presented work will improve some of the presented features to increase their approachability in an academic environment.

Data Availability

All collected data is included as part of the presented work.

Code Availability

All developed code is available at github.com/jhaceituno/brain3D.

Huettel SA, Song AW, McCarthy G (2009) Functional Magnetic Resonance Imaging. Freeman, USA


Lauterbur P (1973) Image formation by induced local interactions: examples employing nuclear magnetic resonance. Nature 242:190–191

Rinck PA (2019) Magnetic resonance in medicine: a critical introduction. BoD–Books on Demand

Steuer J (1992) Defining virtual reality: Dimensions determining telepresence. Journal of Communication 42(4):73–93


Henrysson A, Billinghurst M, Ollila M (2005) Virtual object manipulation using a mobile phone. In: Proceedings of the 2005 international conference on Augmented tele-existence, ACM, pp 164–171

Google (2021) Google AR & VR. https://arvr.google.com/vr/

Unity Technologies (2005) Unity3D. https://unity3d.com, accessed 2017-06-20

Zhang S, Demiralp Ç, DaSilva M, Keefe D, Laidlaw D, Greenberg B, Basser P, Pierpaoli C, Chiocca E, Deisboeck T (2001) Toward application of virtual reality to visualization of DT-MRI volumes. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2001, Springer, pp 1406–1408

Chen B, Moreland J, Zhang J (2011) Human brain functional MRI and DTI visualization with virtual reality. In: ASME 2011 World Conference on Innovative Virtual Reality, American Society of Mechanical Engineers, pp 343–349

Kosch T, Hassib M, Schmidt A (2016) The brain matters: A 3D real-time visualization to examine brain source activation leveraging neurofeedback. In: Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, ACM, pp 1570–1576

Soeiro J, Cláudio AP, Carmo MB, Ferreira HA (2016) Mobile solution for brain visualization using augmented and virtual reality. In: Information Visualisation (IV), 2016 20th International Conference, IEEE, pp 124–129

Schloss KB, Schoenlein MA, Tredinnick R, Smith S, Miller N, Racey C, Castro C, Rokers B (2021) The UW Virtual Brain Project: An immersive approach to teaching functional neuroanatomy. Translational Issues in Psychological Science

Stepan K, Zeiger J, Hanchuk S, Del Signore A, Shrivastava R, Govindaraj S, Iloreta A (2017) Immersive virtual reality as a teaching tool for neuroanatomy. In: International Forum of Allergy & Rhinology, Wiley Online Library, vol 7, pp 1006–1013

Ekstrand C, Jamal A, Nguyen R, Kudryk A, Mann J, Mendez I (2018) Immersive and interactive virtual reality to improve learning and retention of neuroanatomy in medical students: a randomized controlled study. Canadian Medical Association Open Access Journal 6(1):E103–E109

de Faria JWV, Teixeira MJ, Júnior LdMS, Otoch JP, Figueiredo EG (2016) Virtual and stereoscopic anatomy: when virtual reality meets medical education. Journal of Neurosurgery 125(5):1105–1111


van Deursen M, Reuvers L, Duits JD, de Jong G, van den Hurk M, Henssen D (2021) Virtual reality and annotated radiological data as effective and motivating tools to help social sciences students learn neuroanatomy. Scientific Reports 11(1):1–10

Kockro RA, Amaxopoulou C, Killeen T, Wagner W, Reisch R, Schwandt E, Gutenberg A, Giese A, Stofft E, Stadie AT (2015) Stereoscopic neuroanatomy lectures using a three-dimensional virtual reality environment. Annals of Anatomy-Anatomischer Anzeiger 201:91–98

Lopez M, Arriaga JGC, Álvarez JPN, González RT, Elizondo-Leal JA, Valdez-García JE, Carrión B (2021) Virtual reality vs traditional education: Is there any advantage in human neuroanatomy teaching? Computers & Electrical Engineering 93:107282

Souza V, Maciel A, Nedel L, Kopper R, Loges K, Schlemmer E (2020) The effect of virtual reality on knowledge transfer and retention in collaborative group-based learning for neuroanatomy students. In: 2020 22nd Symposium on Virtual and Augmented Reality (SVR), IEEE, pp 92–101

Cox RW, Ashburner J, Breman H, Fissell K, Haselgrove C, Holmes CJ, Lancaster JL, Rex DE, Smith SM, Woodward JB, Strother SC (2004) A (sort of) new image data format standard: NIfTI–1. Neuroimage 22:e1440

Shattuck DW, Leahy RM (2002) BrainSuite: An automated cortical surface identification tool. Medical Image Analysis 6(2):129–142

Likert R (1932) A technique for the measurement of attitudes. Archives of Psychology


Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. No funding was received for the presented work.

Author information

Authors and Affiliations

Departamento de Ingeniería Informática y de Sistemas, Universidad de La Laguna, Avda. Astrofísico Fco. Sánchez s/n, La Laguna, 38204, Canary Islands, Spain

Javier Hernández-Aceituno & Rafael Arnay

Departamento de Fisiología, Universidad de La Laguna, Campus de Ofra s/n, La Laguna, 38071, Canary Islands, Spain

Guadalberto Hernández

Departamento de Psicología Cognitiva, Social y Organizacional, Instituto de Tecnologías Biomédicas e Instituto Universitario de Neurociencia, Universidad de La Laguna, Campus de Ofra s/n, La Laguna, 38071, Canary Islands, Spain

Laura Ezama & Niels Janssen


Corresponding author

Correspondence to Javier Hernández-Aceituno.

Ethics declarations

Ethics Approval

No experiments on humans or animals were performed.

Informed Consent

Informed consent was obtained from all individual participants included in the study.

Consent for Publication

No personal data of any participant is revealed in the study.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Hernández-Aceituno, J., Arnay, R., Hernández, G. et al. Teaching the Virtual Brain. J Digit Imaging 35, 1599–1610 (2022). https://doi.org/10.1007/s10278-022-00652-5


Received: 03 August 2021

Revised: 31 March 2022

Accepted: 03 May 2022

Published: 23 May 2022

Issue Date: December 2022

DOI: https://doi.org/10.1007/s10278-022-00652-5


  • Virtual reality
  • Brain exploration

The Science of Virtual Reality


Your Brain in a Virtual World

Virtual reality (VR) technologies play with our senses to transport us to any world that we can imagine. How do VR environments convince your brain to take you to these different places? How does your brain, in turn, react as you explore a virtual world?

Creating a Virtual Environment

Your brain builds on your past experience to develop “rules” by which to interpret the world. For example, the sky tells you which way is up. Shadows tell you where light is coming from. The relative size of things tells you which one is farther away. These rules help your brain operate more efficiently.

VR developers take these rules and try to provide the same information for your brain in the virtual world. In an effective virtual environment, moving objects should follow your expectations of the laws of physics. Shading and texture should allow you to determine depth and distance. Sometimes, when the virtual cues don’t quite match your brain’s expectations, you can feel disoriented or nauseated. Because the human brain is much more complex than even the most sophisticated computer, scientists are still trying to understand which cues are most important to prioritize in VR.

The Next Wave: Multisensory Virtual Reality

VR technology is also revealing new insights into how the brain works. When you navigate through space, your brain creates a mental map using an “inner GPS”—a discovery that was awarded the Nobel Prize in 2014. However, recent studies with rats in virtual environments show that their brains don’t create the same detailed map as in a real physical space. Visual processing is just a subset of the rich multisensory integration taking place in your brain all the time. As new VR technologies start to engage more of our senses, their effects may be even more compelling.

Applications for Health

Today, the interaction between VR and the brain has already led to applications in health and medicine, including treatment of post-traumatic stress disorder, surgical training, and physical therapy. Scientists are even exploring whether VR can change social attitudes by helping people see the world from a different person’s point of view.



This is Your Brain on VR … The Neuroscientist’s Perspective


I don’t want to jinx it, but all signs point to us peering over the edge of the tipping point for virtual reality here in the States.

That tipping point is tied to two trends.

On one end is the surge of investments in location-based VR experiences like VR escape rooms, VR roller coasters, and adult arcades like LA’s Two Bit Circus. These destinations allow friends, families and colleagues to play and explore VR together.

On the other is the launch of the Oculus Quest — a VR headset with no wires, no need for a high-powered PC to operate, and a $399 price point that makes it comparable to the nearly 40 million video game consoles currently in American homes.

If you build it and make it affordable — they will come.

Both trends point to an onslaught of immersive VR experiences both inside and outside the home — and as a technology enthusiast and gamer, I’m excited for what’s to come.

But as a student of media, technology and how the two impact our everyday lives, I wonder how the widespread adoption of VR will affect our collective sense of mental and physical well-being. 

After all, no one thought smartphones would become a leading source of depression in teens or lead to increased anxiety amidst the convenience of having always-on internet access, did they?

We need to understand what VR actually does (or doesn’t do) to our brains, in order to understand any potential impact on our mental health. And who better to give us a basic understanding of what happens to our brains in VR, than a neuroscientist?

Dr. Sook-Lei Liew is an Assistant Professor and head of USC’s Neural Plasticity and Neurorehabilitation Lab, and among other things, she’s working on studies to see if VR can help stroke patients recover their mobility.

To be clear, we’re still in the early phases of understanding how VR might affect the brain and body — let alone the psyche. But unlike all the hand-wringing and remorse we feel because of the studies that continue to expose the negative impact of devices like smartphones, perhaps we have the opportunity to gauge the potential impact of VR before everything turns into a dystopian tech wasteland (a la Ready Player One).

Tameka Kee: Healthy neurons and rehabilitating those that aren’t is the name of the game for a neuroscientist — but not necessarily studying virtual reality. What made you turn your interest to VR and its potential for “rehabbing the brain?”

Sook-Lei Liew: VR offers a few unique strengths that I believe [may be] really useful for brain recovery and training.

First, the embodiment aspect — VR gives people a chance to take on a new body, and tricks the brain into exhibiting behaviors associated with that body.

For instance, studies by Mel Slater and Jeremy Bailenson have shown that if you’re given a child’s body in VR, you start to show more childlike behaviors. Similarly, if you’re given the body of a different gender or race, you start to act accordingly.

When I learned about this, I started to ask, what if someone who can’t move their body after a stroke gets a body they can move in VR? Can this help trick their brains towards recovery? That’s when I started to look more into VR for research.

What has your own experience been like in a headset?

SL: I’ve definitely been impressed by the embodiment aspect — I get real butterflies in my stomach when I’m walking a plank above a city, even though I know I’m in my lab on solid ground.

That said, there are still some challenges with [the current state of] VR that will keep us from becoming some sort of sci-fi world where people love VR so much [that] they don’t want to be outside of it. I’d say at this point, I haven’t worn a headset that I would want to be in for more than an hour, just due to comfort, eye strain and other factors.

Speaking of eye strain, VR headsets currently have restrictions for children under the age of 13 because of the potential for eye damage. There are also physical challenges for adults in terms of motion sickness and dizziness. Have you uncovered any intel about VR and a potentially negative impact on the brain?

SL: There is so much we don’t know about how VR affects the brain yet!

Most research studies with VR have primarily looked at changes in behavior [as opposed to] looking at direct changes in the brain. We are starting to measure brain activity using EEG while people use VR, and also fMRI (or functional magnetic resonance imaging) before and after people use VR, but we’re just at the beginning of what I believe will be a long foray into this topic.

One thing we do know is that when it comes to learning motor skills, [the way] people learn in VR is not the same as how they learn in the real world. That indicates to us that what happens in the brain when it processes stimuli and tries to do new computations in VR is different from in the real world. How exactly it differs I think will depend largely on the task, but in any case, as we learn more, we can take advantage of these aspects for more tailored approaches in VR.

Specifically regarding the eye damage/strain, I would be wary of this as a problem. I think we’re learning more and more about what happens when our eyes look at screens for an extended time, and have very little knowledge about what happens when they look at screens in VR.

Specifically regarding motion sickness and dizziness – it’s definitely a limiting factor. The hope is that as the technology improves, these symptoms will be reduced, but it’s a wait and see scenario.

There have also been some studies that show VR has the ability to help people slip into a flow or meditative state. What are one or two things you’ve learned about VR and its ability to heal?

SL: Well, preliminarily (and fresh from the lab), we’re seeing some promise for our VR-based brain computer interface actually resulting in motor improvements for stroke — both in [the patient’s] better ability to move, and in subtle brain changes with our brain imaging and brain stimulation. We need to do a lot more research, but we are starting to see that it has some ability to promote physical recovery and neural plasticity, so that’s really exciting.

In terms of the more psychological flow state, I am not the expert, but my USC colleague Dr. Vangelis Lympouridis works on VR for pain management and meditation, and does see this happen.

Are there any physiological factors that make some VR experiences “feel” more immersive than others?

SL: Yes, research from Mel Slater and Mavi Sanchez-Vives’s group has shown that you need to build a sensorimotor contingency between the [virtual version of yourself] and your own body to feel more embodied.

That is, you move your real hand, and your virtual hand moves exactly the same. Or you see something touch your virtual hand and you feel something touch your real hand in the same way.

Linking the visual stimuli in VR with real world sensory stimuli really helps you to feel more embodied in the environment.

What’s been the most surprising or unexpected pattern or insight you’ve uncovered in your research thus far?

SL: Although we’ve been focused on helping people regain motor function after stroke, some of our participants have reported more general changes in their mood, cognition, sleep and the [overall way in which] they view their bodies. That’s been pretty exciting. We aren’t sure yet what specifically about the [VR] intervention does this, but it is encouraging and promising!

Dr. Liew will share more of her findings when she headlines The In.flux Reality Mixer in Los Angeles on December 1. 



National Institute on Alcohol Abuse and Alcoholism (NIAAA)

Alcohol and Your Brain: A Virtual Reality Experience

Updated: 2023

Welcome to Alcohol and Your Brain, an interactive activity for youth ages 13 and older to learn about alcohol’s effects on five areas of the brain.

This educational experience shares age-appropriate messages through engaging visuals, informative billboards, and narration.

Two versions of this activity are available: one formatted for the virtual reality (VR) environment and one in a standard video format.

The VR version creates an immersive experience. Using VR headsets, participants take a rollercoaster ride through the human brain, pausing at stations to learn about key brain regions that are affected by alcohol—and how alcohol, in turn, affects behavior.

There is also a video version for viewing the experience without a VR headset. Informative chapter breaks allow viewers to see an outline of key stops and jump to a particular brain region. An accessible version of the video provides audio description of the visuals.

How to Get NIAAA’s Alcohol and Your Brain

For anyone age 13+ with Quest, Quest 2, or Meta Quest Pro VR headsets, the free NIAAA app is available through Oculus App Lab.

Parents and educators can share the YouTube video with students on any computer or mobile device (an audio-described video is also available).



Hacking the Inner Ear for VR—And for Science

Sarah Zhang

Virtual reality, as it exists now, works because humans trust their eyes above all else. And in a VR headset, the possibilities of what you can see are pretty much infinite. What you can feel, on the other hand, is not. You'll pretty much feel like you're sitting on your couch. Forget zooming through space. Or rocking on a boat in stormy seas. But what if virtual reality, as it might exist in the future, also fools the inner ear that keeps track of motion?

That’s where galvanic vestibular stimulation comes in—a fancy name for a simple procedure. The vestibular system keeps you situated in space by relying on the subtle movements of fluid and tiny crystals in your inner ears. Put an electrode behind each ear, hook up a 9-volt battery, and you can stimulate the nerves that run from your inner ears to the brain. Zap with GVS and your head suddenly feels like it’s rolling to the right. Reverse the electrodes and you feel your head roll to the left.

GVS, or at least this basic version of it, is absurdly easy. The internet is full of VR enthusiasts who have hooked up their own GVS rigs and are happy to teach you how to do it, too. At the Game Developers Conference in 2013, Palmer Luckey, the boy wonder who founded Oculus VR, talked about his own experiments in GVS. “VR potentially hypothetically in theory could be a good fit for GVS technology,” he said. “The problem with GVS,” he continued, “...oh, there’s so many problems.” We’ll get to that later.

For a technology that gets mentioned so often in the same breath as VR, galvanic vestibular stimulation is pretty old-fashioned. In 1790, Alessandro Volta—yes, that Volta—stuck the electrodes of a newly invented battery in his ears. He felt an explosion in his head, heard the sound of boiling “tenacious matter,” and then promptly passed out. Volta’s battery would have produced about 30 volts. Do not try this at home.

At lower voltages, researchers can steer people using GVS like a remote control. Basically, if you feel your head rolling to the right, you’ll jerk to the left to compensate. It looks pretty eerie. For a while, neurophysiologist Tim Inglis’ lab at the University of British Columbia paired a flight simulator with GVS. Turn the yoke to the left, and a zap behind the ears made it feel like your head rolled to the left, too. But the flight simulator was a cheap, crude one, and GVS’s control of the vestibular system, it turns out, is pretty crude, too.


Current GVS technology is like banging on a keyboard with your fist. Electrodes behind the ear stimulate many, many nerves at once rather than just a few—and more precise control is still a ways off. For now, it’s easy enough for GVS to simulate rolling your head toward your shoulder, but the feeling of simply turning left or right with your head upright is tougher to replicate. People also seem to vary widely in their sensitivity to a particular voltage, so it’s not a one-size-fits-all solution. Plus, any mismatch in timing between vestibular and visual changes can produce its own motion sickness.
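To make those problems concrete, here is a toy sketch—purely hypothetical, with made-up numbers, names, and a made-up safety ceiling—of how a VR runtime might scale a GVS pulse by a per-user calibrated gain and skip mistimed frames rather than deliver a zap that lags the visuals:

```python
from dataclasses import dataclass

# Hypothetical safety ceiling; real GVS hardware and safe limits vary widely.
MAX_CURRENT_MA = 1.5

@dataclass
class GvsUser:
    gain_ma_per_deg_s: float  # calibrated: current per deg/s of perceived head roll
    latency_budget_ms: float  # max visual-vestibular offset before skipping a pulse

def gvs_command(user: GvsUser, desired_roll_deg_s: float, visual_lag_ms: float) -> float:
    """Return electrode current in mA (sign encodes roll direction), or 0 to skip."""
    if visual_lag_ms > user.latency_budget_ms:
        return 0.0  # stale frame: a mistimed zap is worse than none (motion sickness)
    current = user.gain_ma_per_deg_s * desired_roll_deg_s
    return max(-MAX_CURRENT_MA, min(MAX_CURRENT_MA, current))

# Two users calibrated to very different sensitivities, as the research suggests.
sensitive = GvsUser(gain_ma_per_deg_s=0.02, latency_budget_ms=20)
resistant = GvsUser(gain_ma_per_deg_s=0.09, latency_budget_ms=20)
print(gvs_command(sensitive, desired_roll_deg_s=30, visual_lag_ms=12))  # 0.6
print(gvs_command(resistant, desired_roll_deg_s=30, visual_lag_ms=12))  # clamped to 1.5
print(gvs_command(resistant, desired_roll_deg_s=30, visual_lag_ms=40))  # 0.0 (skipped)
```

The per-user gain is the sticking point: nothing in today’s hobbyist rigs calibrates it reliably, which is exactly why the one-size-fits-all approach fails.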

Even if GVS is still far off from the living room, it’s become an interesting tool for neuroscientists studying the brain. “The technique allows you to electronically send an error message,” says Inglis. His lab is studying exactly how GVS perturbs balance when you’re constantly shifting your entire weight between the two poles of your legs, aka walking. It could help identify people with movement disorders—and it could help them, too. Inglis' colleagues at UBC are looking into how low-level GVS might help Parkinson’s patients with tremors.

The vestibular system also physically links up with higher areas of the brain, and in recent years, scientists have been looking into how very low-level GVS could affect higher brain function: tactile sensation, face recognition, and memory. Certain brain disorders might be the result of chronic brain inactivity, and stimulation through the vestibular system just might get things working again. But these studies tend to be small, and scientists are rightly skeptical. University of Kent psychologist David Wilkinson, who is now doing a study on how GVS restores recognition to patients with face blindness, recalls when he first heard about the cognitive effects of GVS: It was at an academic talk that he attended solely for the free food—in this case, corn dogs. “The corn dog fell out of my mouth,” he says.

These studies underscore that stimulating nerves behind the ear is an inelegant process—one whose effects scientists have yet to fully understand. Companies do already make GVS devices that cost several thousand dollars, largely for labs, but even those devices don’t offer the kind of control you’d need for VR. Inglis, who talks about someday creating a true artificial vestibular stimulation system, has some advice about the current state of things: “If someone is trying to sell it, don’t buy it.”


Virtual reality boosts and retunes brain rhythms crucial for learning and memory

New research suggests VR could one day be used for the early diagnosis and treatment of memory disorders, ranging from Alzheimer’s to ADHD.

It always feels good to get into a rhythm: dancing to a thumping beat, running with your feet reliably hitting the ground, or feeling the groove while playing an instrument. It turns out that even our brains have rhythms that keep them functioning properly. We need these rhythms for attention, sleep, learning, memory, and figuring out where (and when) we are. When these rhythms are lost, the brain’s ability to learn and remember is impaired – a characteristic of many neurologic disorders.

New research from Professor Mayank Mehta, head of UCLA’s W.M. Keck Center for Neurophysics and a UCLA professor of physics and neurology, and postdoctoral scholar Karen Safaryan has uncovered a new way of boosting and retuning these vital brain rhythms using virtual reality (VR). By placing rats in VR, the researchers strengthened one important brain rhythm and even introduced an entirely new one. This exciting research, supported by the W.M. Keck Foundation, NIH, and AT&T and published in Nature Neuroscience, paves the way for possible treatments for Alzheimer’s, depression, epilepsy, schizophrenia, and more.


These brain rhythms happen in an area of the brain important for learning and memory, known as the hippocampus. Previous Nobel-winning research has shown that neurons (the cells that make up the circuitry of our brains) in the hippocampus encode information about location, making it the GPS system of the brain. There’s also a very important rhythm to the firing of these neurons, discovered over 70 years ago and known as the theta rhythm. This rhythm gets stronger when our brains are working hard at learning or navigating the space around us. Past work from the Mehta lab also shows that the precise frequency of the theta rhythm is important for a brain’s flexibility and learning ability (also known as neuroplasticity). In theory, tuning this rhythm back to its correct frequency would be a promising treatment approach, but until now no one had figured out a way to do so.

The Mehta Lab’s brain-boosting virtual reality is unlike commercially available VR video games—no headsets, and most importantly, no lag to make the subjects dizzy and disoriented. The subjects – rats – walk around on a treadmill where everything they see is controlled by the scientists. In this virtual world, the rats have to navigate to virtual spouts, where they are given sugar water as a reward. All the while, the scientists monitor their brain activity, watching when their neurons fire. In rats, as in humans, tasks like navigating to rewards work the hippocampus, the exact region of the brain that this study set out to observe.

Interestingly, the VR experience affected the rhythms of the rats’ brains. “The rhythmicity of theta oscillations was boosted by more than 50% in the VR,” Safaryan said.

This is a significant improvement. “No other manipulation, pharmacological or otherwise, has demonstrated such robust boosting of theta rhythm,” Mehta said. He explained that virtual reality is so effective “because VR reacts to every movement of the subject, which in turn modifies the subject’s brain, and all of this happens very fast, totally unlike a TV.”
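For readers curious what “boosting rhythmicity” looks like as a measurement, here is a minimal sketch—in Python, purely for illustration—of one common way to quantify a rhythm’s strength: band power from a Welch spectrum. The 6–10 Hz theta band, the synthetic signal, and the metric itself are assumptions for the example, not the study’s actual analysis pipeline:

```python
import numpy as np
from scipy.signal import welch

def band_power(lfp: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
    """Integrate the Welch PSD over [f_lo, f_hi] Hz as a rhythm-strength proxy."""
    freqs, psd = welch(lfp, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.trapz(psd[mask], freqs[mask])

# Synthetic example: an 8 Hz "theta" oscillation buried in noise, with a
# larger amplitude in a mock "VR" condition than in a "real world" condition.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
real_world = np.sin(2 * np.pi * 8 * t) + rng.normal(0, 1, t.size)
vr = 1.5 * np.sin(2 * np.pi * 8 * t) + rng.normal(0, 1, t.size)

theta_rw = band_power(real_world, fs, 6, 10)  # rat theta is roughly 6-10 Hz
theta_vr = band_power(vr, fs, 6, 10)
print(f"theta power boost in 'VR': {theta_vr / theta_rw:.2f}x")
```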

Their research also showed that nearly 60% of the hippocampus temporarily shuts down while in VR – something no currently known drug can do. Finding a way to turn off parts of the hippocampus could be an important breakthrough for disorders where the neurons are hyper-excited, such as epilepsy or Alzheimer’s.

Not only does VR boost the theta rhythm, but Mehta’s team found it also induces an entirely new brain rhythm, termed the eta rhythm. Different frequencies of brain rhythms are important for different types of learning, so observing eta is yet another window into how our brains learn. Eta even happens in a different part of the neuron – it’s dominant in the central cell bodies, whereas theta is dominant in the dendrites, the connecting tendrils. “That was really mind-blowing,” Mehta said. “Two different parts of the neuron seem to be keeping a different beat!” This suggests that types of VR could be developed as a therapy to re-tune the brain’s rhythms, an exciting new technology with the potential to help many.

The Mehta Lab also found a possible conductor for the neurons’ beats: GABA-containing inhibitory neurons, which tend to shut down connecting neurons. GABA is an important neurotransmitter already targeted by some pharmaceuticals, such as anti-anxiety medications, so it’s possible that it could be yet another target to help re-tune the brain’s rhythms in combination with VR.

With all this research, the Mehta lab is bringing together two disciplines – physics and biology – to build a new understanding, using physics-style analytic theories and hardware to analyze biological systems. While building our understanding of the brain’s complex functions and creating new technology, Mehta and his team have taken a bold step forward in finding treatments for neurological disorders. They are piloting the idea that just like rats in VR, we too will be able to boost and retune our rhythms and improve our brains by simply roaming around in virtual reality.

Story by Briley Lewis, graduate student in Physics & Astronomy at UCLA.


These Researchers Want to Make MRIs More Comfortable With Virtual Reality


Key Takeaways

  • Getting an MRI scan done can be uncomfortable, especially for children, which sometimes hinders the accuracy of the results.
  • To alleviate the discomfort of getting an MRI scan, researchers developed a virtual reality system to distract the patient.
  • This VR system incorporates the sounds and movements of an MRI into the experience to fully immerse the patient.

Undergoing a magnetic resonance imaging scan, also known as an MRI, can often be an uncomfortable experience for many patients, especially children. This unease often leads to fidgeting, which can ruin test results. Because of this, researchers have long tried to find ways to improve the experience.

One team of researchers wants to take this optimization to a new level.  

Scientists at King’s College London are developing an interactive virtual reality (VR) system to be used during MRI scans. The system immerses the patient in a VR environment, distracting them from the test. It even integrates key MRI features, like vibrations and sounds from the machine, into the VR experience to make it more realistic.

Ideally, this should distract the patient during the procedure while keeping them calm and still enough for the MRI to be carried out successfully. The research was published in August in the journal Scientific Reports.

Although the project is still in its early days, it shows promise—the next steps will be perfecting and testing it on large groups of patients. The researchers are hopeful technology like this could improve the test for children, individuals with cognitive difficulties, and people with claustrophobia or anxiety.

Remaining Calm During an MRI Is Crucial

“Many people describe being inside an MRI scanner and in particular lying down in the narrow and noisy tunnel as being a very strange experience, which for some can induce a great deal of anxiety,” lead researcher Kun Qian, a post-doctoral researcher in the Centre for the Developing Brain at King’s College London, tells Verywell.

“This is exacerbated during the scan itself, as people are also asked to relax and stay as still as possible, but at the same time are always aware that they are still inside this very alien environment," Qian adds.

This discomfort can affect both image quality and the scan’s success; MRI scans frequently fail due to anxiety. For example, scanning failure rates are as high as 50% in children aged 2 to 5 and 35% in children aged 6 to 7, according to Qian.

“This results in a great deal of time and resources being lost, and potentially can significantly affect clinical management,” Qian says, with many clinics having to sedate or use anesthesia on the patient. “So our VR system could potentially make a profound difference by not only improving scanning success rates but also by avoiding the need for sedation or anesthesia.”

The creative spark behind this project occurred when researcher Tomoki Arichi gifted Joseph Hajnal, another researcher on Qian’s team, VR goggles for Christmas. 

“Professor Hajnal realized that whilst using the goggles, he was completely unaware of what was going on around him because of the strong immersive experience,” Qian says. “He realized that this could be an exciting way to also address the difficulties with anxiety around having an MRI scan.” 

As a result, the team then went on to develop the new technology.

How Does the VR Technology Work?

This new virtual reality system will be fully immersive and ideally distract the patient from the MRI occurring around them. Here’s how it will work.

The headset is what’s called light-tight, so the patient can't see their surrounding environment and can only see what the VR system is showing them. The projector will immediately go live as soon as the patient is ready, so they are immersed in this virtual experience from the second the scan starts to when it ends.  

Sensations such as the scanner noise, the table movement, and the table vibration are all integrated into the virtual experience. When the scanner vibrates, the VR depicts a construction scene. When the scanner moves or makes a noise, so does the character.

To interact with the virtual environment, the patient uses their eyes: they can navigate just by looking at objects in the virtual world. And because the system doesn’t strap a headset onto the user’s head, there should be no problems with motion sickness—usually one of the drawbacks of VR, according to Qian.
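The team’s exact interface isn’t published as code, but dwell-based gaze selection—the standard way to “click” with your eyes—can be sketched in a few lines of Python. Everything below, including the 0.8-second threshold and the object names, is a hypothetical illustration rather than the researchers’ implementation:

```python
from typing import Optional

DWELL_SECONDS = 0.8  # assumed threshold; real systems tune this per task

class GazeSelector:
    """Select a virtual object once gaze has rested on it long enough."""

    def __init__(self) -> None:
        self._target: Optional[str] = None
        self._since = 0.0

    def update(self, gazed_object: Optional[str], now: float) -> Optional[str]:
        if gazed_object != self._target:        # gaze moved: restart the dwell timer
            self._target, self._since = gazed_object, now
            return None
        if gazed_object is not None and now - self._since >= DWELL_SECONDS:
            self._since = now                   # fire once, then re-arm
            return gazed_object
        return None

# Example: frames arriving at 60 Hz while the user stares at a menu button.
selector = GazeSelector()
for frame in range(60):
    hit = selector.update("play_movie_button", now=frame / 60)
    if hit:
        print(f"selected {hit} at frame {frame}")  # fires once, at 0.8 s
```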

What This Means For You

MRIs can be stressful. For now, VR technology isn’t yet available during the exam. But if you’re feeling anxious about the experience, you can have a friend or family member present and try to control your breathing. Some places even offer the option to listen to music during your test.

The Future of VR in Health Care

“This is a perfect example of what is increasingly being considered by the healthcare sector and regulatory bodies around the world as a critical use case for virtual reality,” Amir Bozorgzadeh, co-founder and CEO of Virtuleap, a health and education VR startup, tells Verywell.

VR is the first digital format in which the user is immersed in an ecologically valid experience that fully tricks the body into believing the experience is real, he explains. 

“It doesn't matter if I know I'm physically in my living room; to the whole body, meaning the autonomic nervous system, the vestibular balance system, and my proprioception, I am in the simulated experience,” Bozorgzadeh says.

That’s why this phenomenon creates a safe environment for medical examinations. On the other hand, according to Bozorgzadeh, there still hasn’t been enough research on the effects of long-form VR. It is, after all, still an emerging technology.

For now, this newly designed VR for MRIs seems to be a step in the right direction.

“In our initial user tests, we were very pleased to find that the system has been tolerated very well, with no headaches or discomfort reported at all,” Qian says. “However, this is something we need to systematically test with large numbers of subjects in the coming months.”

Qian explains that his team would also like to develop more content specifically for vulnerable groups like patients with anxiety—potentially tailoring the virtual environment to them down the line.

Qian, K., Arichi, T., Price, A. et al. An eye tracking based virtual reality system for use inside magnetic resonance imaging systems. Sci Rep 11, 16301 (2021). https://doi.org/10.1038/s41598-021-95634-y

UC San Diego Health. Magnetic Resonance Imaging (MRI).

By Sofia Quaglia Sofia Quaglia is a science and health writer based between Italy, the United Kingdom, and the United States.


“Tricking the Brain” Using Immersive Virtual Reality: Modifying the Self-Perception Over Embodied Avatar Influences Motor Cortical Excitability and Action Initiation

Karin A. Buetler

1 Motor Learning and Neurorehabilitation Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland

Joaquin Penalver-Andres

2 Psychosomatic Medicine, Department of Neurology, University Hospital of Bern (Inselspital), Bern, Switzerland

Özhan Özen

Luca Ferriroli

René M. Müri

3 Gerontechnology and Rehabilitation Group, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland

4 Department of Neurology, University Neurorehabilitation, University Hospital of Bern (Inselspital), University of Bern, Bern, Switzerland

Dario Cazzoli

5 Neurocenter, Luzerner Kantonsspital, Lucerne, Switzerland

Laura Marchal-Crespo

6 Department of Cognitive Robotics, Delft University of Technology, Delft, Netherlands

Associated Data

The dataset presented in this study can be found online in the following repository: doi: 10.5281/zenodo.5522866.

To offer engaging neurorehabilitation training to neurologic patients, motor tasks are often visualized in virtual reality (VR). Recently introduced head-mounted displays (HMDs) make it possible to realistically mimic the body of the user from a first-person perspective (i.e., avatar) in a highly immersive VR environment. In this immersive environment, users may embody avatars with different body characteristics. Importantly, body characteristics impact how people perform actions. Therefore, altering body perceptions using immersive VR may be a powerful tool to promote motor activity in neurologic patients. However, the ability of the brain to adapt motor commands based on a perceived modified reality has not yet been fully explored. To fill this gap, we “tricked the brain” using immersive VR and investigated whether multisensory feedback modulating the physical properties of an embodied avatar influences motor brain networks and control. Ten healthy participants were immersed in a virtual environment using an HMD, where they saw an avatar from a first-person perspective. We slowly transformed the surface of the avatar (i.e., the “skin material”) from human to stone. We enforced this visual change by repetitively touching the real arm of the participant and the arm of the avatar with a (virtual) hammer, while progressively replacing the sound of the hammer against skin with a stone-hitting sound played via loudspeaker. We applied single-pulse transcranial magnetic stimulation (TMS) to evaluate changes in motor cortical excitability associated with the illusion. Further, to investigate whether the “stone illusion” affected motor control, participants performed a reaching task with the human and stone avatar. Questionnaires assessed the subjectively reported strength of embodiment and illusion. Our results show that participants experienced the “stone arm illusion.” In particular, they rated their arm as heavier, colder, stiffer, and more insensitive when immersed with the stone than with the human avatar, without the illusion affecting their experienced feeling of body ownership. Further, the reported illusion strength was associated with enhanced motor cortical excitability and faster movement initiations, indicating that participants may have physically mirrored and compensated for the embodied body characteristics of the stone avatar. Together, immersive VR has the potential to influence motor brain networks by subtly modifying the perception of reality, opening new perspectives for the motor recovery of patients.

Introduction

Stroke represents a leading cause of long-term disability in adults worldwide, with one-third of chronic stroke patients requiring assistance during activities of daily living ( Feigin et al., 2014 ). Intensive and costly neurorehabilitation interventions are an integral part of the therapy following stroke, aiming at regaining (part of) the motor functionality of patients. Within this context, robotic neurorehabilitation has been receiving increasing interest to provide more cost-effective therapy ( Lum et al., 2012 ). Robotic-assisted interventions allow for repetitive, high-intensity, and task-specific training, lowering costs and personal limitations (e.g., fatigue) and optimizing the potential of motor recovery of patients ( Marchal-Crespo and Reinkensmeyer, 2009 ).

To increase the engagement of patients during training, motor tasks are often visualized in virtual reality (VR), allowing the simulation of various real and imaginary activities of daily living ( Lee et al., 2003 ; Perez-Marcos et al., 2018 ). VR further offers the possibility to individualize the virtual environment to the needs of the patients, and to provide standardized and safe training ( Rose et al., 2005 ; Marchal-Crespo and Reinkensmeyer, 2008 ). A large body of research has demonstrated the efficacy of VR therapy in (robotic) stroke rehabilitation ( Adamovich et al., 2004 ; Deutsch et al., 2004 ; Jang et al., 2005 ). However, in standard clinical VR settings, computer screens are used to display the virtual training environment. Here, the patient interacts with the virtual elements using an abstract virtual representation (e.g., a cursor). While this symbolic interaction provides useful visual guidance, it strongly deviates from interactions required in the real world and, therefore, may limit the transfer of acquired skills into activities of daily living ( de Mello Monteiro et al., 2014 ; Bezerra et al., 2018 ).

Recently emerging head-mounted displays (HMDs) provide highly immersive virtual training environments. In this immersive virtual environment, the user interacts with a virtual self-representation perceived from first-person perspective (i.e., an avatar), realistically mimicking the body of the user. Previous work has suggested that immersive virtual reality, compared with screens, may further promote motor training because it enhances embodiment over the avatar ( Wenk et al., 2021 ), i.e., the body of the avatar is –at least partially– processed like the own (virtual) body ( Kilteni et al., 2012a ). In the immersive virtual training environment, the user may experience the feeling of body ownership over the avatar. Body ownership –one out of the three components of embodiment together with agency [i.e., the feeling of initiating and being in control of the own actions; (e.g., David et al., 2008 ; Braun et al., 2018 )] and location [i.e., the experienced location of the body in space; (e.g., Blanke, 2012 )]– is the cognition that a body and/or its parts belong to oneself ( Blanke, 2012 ). Body ownership results from the integration and interpretation of multimodal sensory information in the brain, importantly, visual, somatosensory, and proprioceptive signals ( Botvinick and Cohen, 1998 ; Maravita et al., 2003 ; Ehrsson, 2004 ). Neuroimaging studies have shown that body ownership relies on frontal premotor, somatosensory, temporoparietal junction, and insular brain regions ( Botvinick and Cohen, 1998 ; Maravita et al., 2003 ; Ehrsson, 2004 ; Tsakiris, 2010 ).

Even though our body and its physical features and capabilities (e.g., the size of body parts and/or the color or material of the skin) usually do not change –and one could assume that the perception of the own body is stable–a vast amount of research has shown that bodily self-perceptions are continuously updated in the brain in response to sensory signals related to the body characteristics ( de Vignemont, 2010 ; Serino and Haggard, 2010 ; Tsakiris, 2010 , 2017 ; Longo and Haggard, 2011 ; Blanke et al., 2015 ). Consequently, multisensory feedback can be used to modulate the self-perception of the body, as for example, in the well-known “rubber hand illusion” paradigm, first introduced by Botvinick and Cohen (1998) . Here, an experimenter simultaneously strokes the hidden real hand of a participant and a rubber hand placed in front of the participant. The simultaneously felt stroking on the real hand and the visual perception of the rubber hand being stroked has been shown to reliably induce the feeling of body ownership over the rubber hand in the participant. The rubber hand illusion has also been demonstrated by providing auditory instead of visual feedback. In the “marble hand illusion,” Senna et al. (2014) touched the (hidden) hand of the participant with a hammer that was coupled with stone-hitting sound. This led participants to experience their own hand to be more stone-like than in the control condition (e.g., they rated their own hand as stiffer, heavier, harder, unnatural, and less sensitive). Various variations of the rubber hand illusion paradigm have shown that body ownership can be experimentally induced in a part of a body or an entire body other than one’s own in healthy young ( Ehrsson, 2004 ; Tsakiris and Haggard, 2005 ; Tsakiris et al., 2006 ; Lloyd, 2007 ; Haans et al., 2008 ; Kammers et al., 2009 ; van der Hoort et al., 2011 ; Kalckert and Ehrsson, 2012 ; Lopez et al., 2012 ; Pozeg et al., 2014 ; Crea et al., 2015 ; Flögel et al., 2016 ; Wen et al., 2016 ; Burin et al., 2017 ; Riemer et al., 2019 ; Matsumoto et al., 2020 ), elderly ( Burin and Kawashima, 2021 ), and neurologic patients ( Zeller et al., 2011 ; Lenggenhager et al., 2012 ; Burin et al., 2015 ; Wawrzyniak et al., 2018 ). The demonstrated flexibility of the brain is indeed crucial to preserve a stable body image while the perceptual characteristics of the body constantly vary in everyday life. For example, the skin color may change depending on light, and the size and shape of body parts are influenced by posture and distance. Therefore, to save resources, the brain is trained to accept deviations resulting from a mismatch between sensory signals [such as the proprioceptive incongruency between the location of the real hand and the rubber hand in the case of the rubber hand illusion paradigm; ( Knoblich, 2006 ; Makin et al., 2008 ; Tsakiris, 2010 )].

Numerous studies have shown that immersive VR is an especially powerful tool to alter body perceptions. The rubber hand illusion has, for example, been replicated numerous times in VR –in the so-called “virtual hand illusion”– where congruent (e.g., haptic or tactile) feedback is provided to the real hand of the participant together with visual feedback in VR [i.e., on the hand of the avatar seen by the participant; ( Slater, 2008 ; Perez-Marcos et al., 2009 ; Sanchez-Vives et al., 2010 ; Pyasik et al., 2020 ; Jeong and Kim, 2021 ; Kanayama et al., 2021 )]. Further, the high visuo-motor or visuo-proprioceptive synchrony –i.e., the high spatial and temporal correlation between the performed movement and the visually perceived feedback on the display– in immersive VR has also been shown to induce strong embodiment over the avatar, without the need for additional tactile stimulation ( Sanchez-Vives et al., 2010 ; Carey et al., 2019 ; Odermatt et al., 2021 ). Notably, immersive VR, together with additional sensory feedback, can be used to induce embodiment over “unrealistic” avatars ( Kilteni et al., 2012b ; Preston and Newport, 2012 ). For example, Kilteni et al. (2012b) induced a “very long arm illusion” by visually elongating a virtual arm and simultaneously providing haptic feedback to the real hand of the participant which was visually reproduced in the VR (namely, the touching of a grass surface).

Importantly, body perceptions impact how people interact with the environment. When we perform actions, it is critical to keep track of, for example, the size and shape of the different body parts ( Head and Holmes, 1911 ; Holmes and Spence, 2004 ; Maravita and Iriki, 2004 ). In a series of experiments, Tajadura-Jiménez et al. (2012 , 2015b , 2016) used real-time auditory feedback to induce illusionary ownership over elongated arms (e.g., by providing sounds that implied to originate from a greater distance when participants tapped their hand on a surface). Crucially, the authors showed that the illusion of having a longer arm also influenced the real arm movements of participants, similarly to what would be expected if the illusionary body characteristics were real ( Tajadura-Jiménez et al., 2016 ). Further, in another study by Kilteni et al. (2013) , the authors showed that participants who embodied dark-skinned avatars using immersive VR improved their drumming patterns compared to light-skinned avatars, therefore, not only showing that participants expected people with darker skin color to be better at drumming than people with lighter skin color, but also that they embodied the darker-skinned avatar.

The finding that motor actions are influenced by manipulating the self-perception of the own body using immersive VR may have important applications for neurorehabilitation. The flexibility of the brain regarding embodiment could be exploited to induce the feeling of body ownership over avatars with different body characteristics, modulating underlying motor brain networks and performance and optimizing recovery. For example, embodying a virtual stone arm may increase the physical engagement during training, similarly to lifting an empty bottle that is believed to be full. However, the ability of the brain to adapt motor commands based on a perceived modified reality has not yet been fully explored. Evidence suggests that the embodiment of an artificial limb may go hand in hand with the disembodiment of the own limb (for a review see Golaszewski et al., 2021 ). Previous neurophysiological studies using non-invasive brain stimulation techniques (transcranial magnetic and direct current stimulation) and electroencephalography (EEG) have evidenced attenuated activity in motor ( della Gatta et al., 2016 ; Fossataro et al., 2018 ) and somatosensory ( Tajadura-Jiménez et al., 2012 ; Zeller et al., 2015 ; Hornburger et al., 2019 ; Isayama et al., 2019 ; Sakamoto and Ifuku, 2021 ) brain areas, along with enhanced error tolerance ( Raz et al., 2020 ) during the experience of illusionary body ownership. However, most studies on the neural correlates underlying embodiment investigated illusionary body ownership over a rubber hand ( Botvinick and Cohen, 1998 ; Ehrsson, 2004 ; Tsakiris and Haggard, 2005 ; Haans et al., 2008 ). Importantly, in the rubber hand illusion paradigm, the hand of the participant is not located at the same place as the rubber hand. To overcome this proprioceptive mismatch and to embody the rubber hand, the brain may be forced to disembody the own hand, lowering neural activity in the corresponding brain areas. Further, the experience of illusionary body ownership is commonly associated with congruent multisensory feedback (for example, applied to the participant’s real hand and a rubber hand), while the control condition (i.e., low body ownership) is associated with incongruent feedback, previously shown to introduce confounding congruency effects ( Rao and Kayser, 2017 ; Odermatt et al., 2021 ). Yet, immersive VR allows for congruent multisensory feedback with high visuo-proprioceptive congruency –i.e., the motor actions are spatially and temporally highly correlated with the visual feedback perceived through the HMD– and therefore, disembodiment of the own limb may not be necessary, allowing for a more naturalistic embodiment. This is in line with previous work showing that body illusions based on unimodal sensory feedback and without proprioceptive mismatch enhance activity in motor brain networks: visual kinesthetic illusions, in which the illusory feeling of motion of a static body part is induced by mechanically vibrating the tendon muscle of a physically constrained joint, have been associated with an increase in motor cortical excitability (for a review see Dilena et al., 2019 ). Therefore, in this study, we aimed to “trick the brain” in a naturalistic fashion using immersive VR and to investigate whether multisensory feedback modulating the physical properties of an embodied avatar influences motor brain networks and control.
To allow for a more naturalistic embodiment, we decided to change the skin material rather than the limb size or shape (e.g., elongated arm), to not introduce a proprioceptive mismatch during the illusion which could be associated with reduced motor activity.

Ten healthy participants were immersed in VR with an HMD, where they saw an avatar from a first-person perspective. We applied multisensory feedback (i.e., auditory, tactile, and visual) to induce a “stone arm illusion,” inspired by the marble hand illusion work of Senna et al. (2014) . We slowly transformed the surface of the avatar (i.e., the “skin material”) from human to stone. We enforced this visual change by repetitively touching the real arm of the participant and the arm of the avatar with a (virtual) hammer, while progressively replacing the sound of a hammer against skin with the sound of a hammer against stone played via a loudspeaker. To study changes in motor brain networks associated with the illusion, we applied single-pulse transcranial magnetic stimulation (TMS) over the primary motor cortex. Applying TMS through the scalp over the primary motor cortex elicits action potentials in motor neurons of the brain, which can be captured as motor evoked potentials (MEPs) with electromyographic recordings on the corresponding muscles ( Rothwell, 1997 ; Wolf et al., 2005 ; Groppa et al., 2012 ). Amplitude and latency of the MEPs are influenced by the cortical excitability of the motor system and corticospinal tract and can therefore be used as an index of physiological state changes in the primary motor area ( Barker et al., 1985 ; Rothwell et al., 1987a , b ; Bestmann, 2012 ; Bestmann and Krakauer, 2015 ; Rossini et al., 2015 ; Schmidt and Brandt, 2021 ). Further, to investigate whether the “stone illusion” affected action execution, participants performed a reaching task visualized in VR with the human and stone avatar, i.e., they had to reach as fast and accurately as possible from a resting hand position on a table to spheres appearing above the table. Finally, we used questionnaires to assess the subjectively reported strength of the embodiment and illusion.

We expected that the immersive, highly congruent multisensory feedback in VR would induce strong body ownership over the avatar across both the human and stone conditions. In addition, we expected higher subjectively rated “stone feeling” in the stone versus human avatar condition, indicating the presence of a “stone arm illusion.” We further hypothesized that the stone arm illusion or the subjectively experienced stone feeling would be associated with enhanced motor cortical excitability, reflecting an adaptation of motor brain processes to the altered body image. Further, we hypothesized that the stone arm illusion or stronger subjectively experienced stone feeling would enforce accelerated movement patterns and/or motor overshooting in the reaching movements, due to an overestimation of the weight of the real arm. Finally, to better understand the nature of the stone arm illusion, we were further interested in exploring the relationship between the stone feeling and embodiment components (as stronger subjectively experienced stone feeling may hamper agency but not body ownership), and between the reaching movements and cortical excitability in the human and stone condition.

Materials and Methods

Participants.

We recruited 10 healthy participants [five female; age (M ± SD) = 29.4 ± 6.5 years] from the campus of the University of Bern, Switzerland. All participants reported to be right-handed when asked to indicate their dominant hand and to have normal or corrected-to-normal vision. None of them had a psychiatric or neurological clinical history. Participants were naïve to the hypotheses of the experiment. The study was approved by the local ethics committee and all participants gave written informed consent.

Experimental Setup

An overview of the experimental setup can be seen in Figure 1 . A head-mounted display (HTC Vive, HTC, Taiwan and Valve, United States), two trackers, and one controller (HTC Vive, Taiwan and Valve, United States) were employed in the VR setup. Two trackers were attached on the right upper arm and wrist of the participant with Velcro ® straps to record the motion kinematic data and visually animate the avatar in the VE. The controller was operated by the experimenter to animate a virtual hammer. The kinematic data of the trackers were continuously collected at a sampling rate of ∼50 Hz in the Unity game engine and stored for offline analysis (version 2018.3.0f2; Unity Technologies, United States).

Figure 1.

Experimental setup and virtual environment. (A) Participant wearing the head-mounted display (HMD) and receiving transcranial magnetic simulation (TMS) over the primary motor cortex. (B) Electromyographic recordings in the shape of MEPs elicited by the TMS pulses were obtained from the first dorsal interosseous (FDI) muscle of the right hand of the participant placed on an armrest and with the tracker around wrist and upper arm. (C) The first-person perspective point of view of the participant in the VR during the multisensory feedback in the human and (D) stone condition. (E) The first-person perspective of the participant during the questionnaires, and (F) the motor task.

A 4-button response box (The Black Box ToolKit Ltd., United Kingdom) placed on a table was used by the participants to answer the questionnaires in VR. A loudspeaker located on the right side of the same table provided the auditory feedback. The data obtained via the response box were collected in the Unity game engine and stored for offline analysis.

A Magstim 200 Mono Pulse stimulator (Magstim Ltd., United Kingdom) and a figure-of-eight coil were used to apply TMS pulses through the scalp of the head of the participant over the primary motor area ( Figure 1A ). A TMS navigation system (Localite GmbH, Germany) was employed for the co-registration of the position and orientation of the coil with the head of the participant. Electromyographic recordings in the shape of MEPs elicited by the TMS pulses were obtained using the Dantec Keypoint G4 Workstation (Natus Medical Incorporated, United States) from the right hand of the participant in a belly-tendon montage by means of Ag/AgCl surface tab electrodes with a diameter of 5 mm (Medtronic Ltd., United Kingdom). The active electrode was placed over the belly of the first dorsal interosseous (FDI) muscle, the reference electrode over the proximal interphalangeal joint of the index finger (tendon), and the ground electrode over the abductor digiti minimi ( Figure 1B ). The electromyographic raw signal was amplified, recorded with a sampling rate of 48 kHz, and stored for offline analysis using Keypoint.Net software (version 2.31; Natus Medical Incorporated, United States).

Virtual Environment and Avatar

The virtual environment was built in Unity game engine (version 2018.3.0f2; Unity Technologies, United States) and consisted of a virtual living room. A male and a female avatar were designed in MakeHuman (open source software version 1.1.1) 1 . Participants saw the gender-matched avatar from a first-person perspective sitting on a chair in front of a table, i.e., they could see the upper body (arms, shoulder) and parts of the legs of the avatar ( Figures 1C,D ). The right arm of the avatar was animated using the position of the trackers of HTC Vive placed on the right upper arm and wrist of participants. The left arm of the avatar was rendered to be located under the virtual table (i.e., the left hand of the avatar was neither animated nor visible in VR). A controller operated by the experimenter was employed to animate a virtual hammer.

Experimental Procedure

The whole experiment was completed in a single session with a total duration of approximately 60–70 min. Participants were seated comfortably at a table with their right hand placed on a soft armrest in a predefined position in front of them, matching the hand of the avatar on the virtual table in VR.

The experiment consisted of five phases, i.e., a baseline phase (phase 0) and four experimental phases (phases 1–4; Figure 2A ). Task instructions were presented outside VR before the start of the experiment. The baseline phase was performed outside VR to assess the motor hotspot for the TMS application on the head of the participant [see section “Motor Hotspot Definition (Phase 0)”]. Then, participants were immersed in VR with their right hand placed on the armrest and the left hand on the response box (to fill in the questionnaires). Before starting the experiment, participants could visually explore the virtual environment. In each experimental phase, participants performed three measurement blocks, i.e., a questionnaire block [see section “Questionnaire Blocks (Phases 1–4)”], an MEP evaluation block [see section “Motor Evoked Potential Evaluation Blocks (Phases 1–4)”], and a motor task block [see section “Motor Task Blocks (Phases 1–4)”], while continuously being immersed in VR. The phases or blocks were manually initiated by the experimenter. Phases 1 and 4 were performed with the avatar animated with human skin while phases 2 and 3 were performed with the avatar animated with a stone surface ( Figure 2B ).

Figure 2.

Experimental procedure. (A) Experimental protocol, and (B) exemplar overview of the virtual environment, including the female version of the avatar with animated human surface (left), mixed surface during the transformation (middle), and stone surface (right).

The first experimental phase performed with a human-skinned avatar (i.e., first human avatar condition) started with the questionnaire block (QT1). Then, participants underwent the first MEP evaluation block (MEP1) and finished with a motor task block (MT1). After phase 1, participants received the multisensory feedback for approximately 50 s during which we induced the “stone arm illusion” [see section “Experimental Conditions (Phases 1–4)”]. After the skin transformation was finished, phases 2 and 3 (i.e., first and second stone arm conditions) started with alternating multisensory feedback and measurement blocks. First, participants answered the questionnaires (QT2), followed by the MEP evaluation (MEP2), another questionnaire (QT3), and a motor task block (MT2). Then, participants received another MEP evaluation block (MEP3), followed by a motor task block (MT3). Between each experimental block in phases 2 and 3, participants received multisensory feedback for 15 s, i.e., they felt/saw a hammer touching their real/the arm of the avatar triggering a stone sound from the speaker. The order of phases 2 and 3 was selected to prioritize the TMS evaluation and questionnaire ratings over the motor task, in case the reaching movements would break the illusion. After phases 2 and 3, we transformed the avatar back to a human skin surface by applying the multisensory feedback for around 50 s [see section “Experimental Conditions (Phases 1–4)”]. Finally, the fourth phase started (i.e., second human avatar condition), where participants first received the MEP evaluation (MEP4), then performed the motor task (MT4) and finished by filling in the questionnaires (QT4) for the last time. Finally, participants were taken out of the immersive VR and debriefed about the study aim.

Motor Hotspot Definition (Phase 0)

Before the experimental phases in the VR environment, we determined the location of the “motor hotspot,” i.e., the stimulation site on the head of the participant reliably producing high-amplitude TMS-induced MEPs recorded from the FDI muscle of the right hand of the participant. Participants were asked to relax the muscles in their arm and hand. Complete muscle relaxation was monitored via audiovisual feedback. Then, single-pulse TMS was applied by a blinded experimenter (i.e., naïve to the experimental conditions) over the primary motor cortex. Stimulation intensity started at 10% and was slowly increased in increments of 2–5%. The region over the skull where the stimulation induced reliable MEPs of the first dorsal interosseous muscle across 10 consecutive trials was defined as the “motor hotspot.” Mean stimulation intensity across all participants was 49% (SD = 6.5; range 40–57%). The coil position of the hotspot was marked directly on the scalp to ensure accurate coil repositioning. Since the efficiency (i.e., the stimulus intensity required to bring corticospinal neurons to firing threshold) and type (i.e., direct axonal versus indirect trans-synaptic) of TMS stimulation are highly influenced by the orientation of the neural element within the induced electric field ( Bonato et al., 2006 ; Thut et al., 2011 ), we co-registered the M1 hotspot location on the participant’s head with the TMS coil using a neuronavigation system (Localite GmbH, Germany). The whole baseline procedure took around 10–15 min.

Experimental Conditions (Phases 1–4)

The two experimental conditions represent the embodiment of a human arm/hand avatar and a stone avatar, respectively, which we modulated using congruent multisensory feedback. After experimental phase 1 (resp., after phase 3), we induced a “stone arm illusion” (resp., “human arm illusion”) by gradually transforming the surface of the avatar (i.e., the “skin material”) from human to stone (resp., from stone to human; Figure 2B ). We enforced this visual change by gently and repetitively touching the real forearm of the participant at ∼1 Hz with an HTC Vive controller while touching the forearm of the avatar in the VR with a virtual hammer animated using the position and orientation of the controller. We progressively replaced the sound of a hammer against skin played from the loudspeaker on the table with the sound of a hammer against stone (resp., vice versa; see “ Supplementary Material ” for an exemplar video). The stone-hitting sound was generated by recording the sound of a real hammer hitting a real stone. The skin-hitting sound was generated by recording the sound of a real hammer hitting a real arm. Of note, the tactile feedback (i.e., the touch with the controller on the forearm of the participant) did not change across the transformation. The transformation lasted for approximately 50 s.
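A minimal sketch of this kind of progressive crossfade is given below, assuming a linear mix between the two recorded hit sounds over the transformation (the exact mixing function is not specified in the text, and the code is an illustration rather than the authors’ implementation):

```python
import numpy as np

TRANSFORM_SECONDS = 50.0  # approximate transformation duration reported above
TAP_RATE_HZ = 1.0         # approximate tap rate reported above

def tap_mix(tap_time_s: float, skin_hit: np.ndarray, stone_hit: np.ndarray) -> np.ndarray:
    """Blend the two hit sounds for a tap occurring tap_time_s into the transformation."""
    w_stone = np.clip(tap_time_s / TRANSFORM_SECONDS, 0.0, 1.0)  # 0 = all skin, 1 = all stone
    return (1.0 - w_stone) * skin_hit + w_stone * stone_hit

# Example with dummy 100 ms mono clips at 44.1 kHz standing in for the recordings.
n = int(0.1 * 44_100)
skin = np.random.default_rng(1).normal(0, 0.1, n)
stone = np.random.default_rng(2).normal(0, 0.3, n)

for tap in np.arange(0.0, TRANSFORM_SECONDS + 1.0, 10.0):  # taps at 0, 10, ..., 50 s
    mixed = tap_mix(tap, skin, stone)
    rms = np.sqrt(np.mean(mixed ** 2))
    print(f"t = {tap:4.0f} s, stone weight = {min(tap / TRANSFORM_SECONDS, 1.0):.1f}, rms = {rms:.3f}")
```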

Questionnaire Blocks (Phases 1–4)

Participants filled in two questionnaires to assess the subjectively reported embodiment and the perceptual correlates of the stone arm illusion (i.e., stone feeling). Questionnaires were presented in VR to keep them standardized and to facilitate immersion, and were answered by the participants with the left hand on the response box ( Figure 1E ).

The stone feeling questionnaire consisted of four items on a 7-point Likert scale, indicating how cold/hot, light/heavy, soft/stiff, and sensitive/insensitive participants rated their right arm (see Table 1 ). The questionnaire was adapted from the study by Senna et al. (2014) . The embodiment questionnaire consisted of eight items adapted from established questionnaires ( Longo et al., 2008 ; Bassolino et al., 2018 ) that were rated on a 7-point Likert scale from −3 (strongly disagree) to 3 (strongly agree). The three main components of embodiment (body ownership, agency, and location) and disownership were assessed. In addition, control items unrelated to the body illusion were included to validate the specificity of potential illusion effects ( Table 2 ). Participants took around 1–3 min to fill in the questionnaires.

Stone feeling questionnaire.

Adapted from Senna et al. (2014) .

Embodiment questionnaire.

Q1–4, Q6–8 ( Longo et al., 2008 ); Q5 ( Bassolino et al., 2018 ).

Motor Evoked Potential Evaluation Blocks (Phases 1–4)

During the MEP evaluation blocks, participants received single-pulse TMS over the left primary motor cortex, i.e., contralateral to the electromyographic leads at the marked optimal site [i.e., motor hotspot, see section “Motor Hotspot Definition (Phase 0)”] for first dorsal interosseous muscle activation of the right hand. The consistent coil orientation across MEP blocks was verified using a neuronavigation system (Localite GmbH, Germany). A total of 20 ± 2 TMS pulses were applied, and the corresponding MEPs recorded in each block with an inter-pulse interval of approx. 3 s. The total duration of an MEP evaluation block was around 2 min.

Motor Task Blocks (Phases 1–4)

Participants were asked to perform a motor task consisting of reaching, as fast and accurately as possible, with their right arm from the hand’s resting position on the armrest on the table to blue spheres appearing above ( Figure 1F ). The resting initial position was indicated with a green sphere in the virtual environment. After reaching a blue sphere, participants were asked to bring their hand back to the rest position (green sphere) until the next blue sphere appeared. One block consisted of four trials/blue spheres, i.e., two blue spheres placed 32 cm and two blue spheres placed 36 cm above the table (i.e., 20 and 24 cm above the resting position/armrest, respectively). The two different reaching distances were selected to minimize the possibility of potential movement anticipation strategies of participants. All blue spheres were placed above the initial position of the hand resting on the table (i.e., the vertical projection over the green sphere). The order of the spheres was randomized to minimize anticipation. One motor task block lasted for around 1 min.

Metrics and Data Processing

Stone feeling.

To quantify the subjectively experienced stone feeling, the mean of the coldness (of note, this item was reverse-scored for the analyses, so that positive values reflect coldness), heaviness, stiffness, and insensitivity item ratings was calculated for each participant and condition (i.e., human, stone).

To quantify the subjectively experienced level of embodiment over the avatar, the means of the body ownership (Q1–Q2), agency (Q4), location (Q3), disownership (Q5–Q6), and control (Q7–Q8) items were calculated for each participant and condition (i.e., human, stone).
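A minimal sketch of this scoring (in Python for illustration; the authors’ analyses were run in R) is shown below, assuming the items are coded 1–7 so that the coldness item is reverse-scored as 8 − x; the exact numeric coding is an assumption:

```python
import numpy as np

def stone_feeling_score(cold_hot, light_heavy, soft_stiff, sensitive_insensitive):
    """Mean stone feeling per participant; coldness reversed so high = cold."""
    coldness = 8 - np.asarray(cold_hot)  # reverse-score on an assumed 1-7 coding
    items = np.vstack([coldness, light_heavy, soft_stiff, sensitive_insensitive])
    return items.mean(axis=0)

# Example: three participants' ratings in one condition (fabricated numbers).
score = stone_feeling_score(
    cold_hot=[2, 3, 1],  # low raw value = cold under the assumed coding
    light_heavy=[6, 5, 7],
    soft_stiff=[6, 6, 7],
    sensitive_insensitive=[5, 6, 6],
)
print(score)  # [5.75 5.5  6.75]
```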

Cortical Excitability

Cortical excitability was quantified using the peak-to-peak amplitude from the TMS-induced MEPs ( Di Lazzaro and Rothwell, 2014 ; Schulz et al., 2014 ; Bestmann and Krakauer, 2015 ; Rossini et al., 2015 ; Smith et al., 2019 ; Ammann et al., 2020 ). The peak-to-peak MEP amplitude (mV) was calculated as the voltage difference between the maximum positive and maximum negative peak in the electromyographic potential occurring 15–80 ms after TMS pulse onset and averaged across participants and conditions (i.e., human and stone).
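A minimal sketch of this computation, assuming the EMG trace is a NumPy array time-locked to the TMS pulse (the synthetic trace and variable names are illustrative):

```python
import numpy as np

FS_HZ = 48_000  # EMG sampling rate reported in the methods

def mep_peak_to_peak(emg: np.ndarray, pulse_idx: int, fs: float = FS_HZ) -> float:
    """Peak-to-peak amplitude (mV) in the 15-80 ms window after the TMS pulse."""
    lo = pulse_idx + int(0.015 * fs)
    hi = pulse_idx + int(0.080 * fs)
    window = emg[lo:hi]
    return float(window.max() - window.min())

# Synthetic trace: baseline noise plus a biphasic deflection ~25-32 ms post-pulse.
t = np.arange(int(0.2 * FS_HZ)) / FS_HZ
emg = 0.02 * np.random.default_rng(3).normal(size=t.size)   # noise, in mV
emg += 1.2 * np.exp(-(((t - 0.025) / 0.002) ** 2))          # positive peak
emg -= 0.9 * np.exp(-(((t - 0.032) / 0.002) ** 2))          # negative peak
print(f"MEP amplitude: {mep_peak_to_peak(emg, pulse_idx=0):.2f} mV")
```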

Kinematic Variables

Due to the uneven sampling rate in Unity (∼50 Hz), data were linearly interpolated every 15 ms (= 66.67 Hz). We calculated the maximum speed ( m/s ), the time to the maximum speed ( s ), the maximum acceleration ( m/s² ), and path length ( m ) of the reaching movements. We selected these kinematic variables based on ones previously used in the literature to quantify motor performance ( Shishov et al., 2017 ; Basalp et al., 2021 ). Since we expected that the real sensory feedback during the motor task might break the stone illusion (i.e., due to visuo-motor or visuo-proprioceptive synchrony), we calculated the kinematic variables for both the first 150 ms after movement onset [a period in which the cerebellum is assumed to not yet have received updated sensory feedback ( Miall et al., 2007 ), and therefore reflecting feedforward kinematics associated with movement initiation] and from movement onset until the visual outer border of the sphere was crossed (defined by a collider in Unity). Further, the time ( s ) from movement onset until the visual outer border of the sphere was reached (defined by a collider in Unity) was computed. Finally, motor overshooting ( m ) was quantified by calculating the highest point reached by the center of the hand in the upward movement after movement onset minus the height of the center of the blue sphere. Movement onset was defined as the time point when 2% of the maximum velocity after the presentation of the sphere was reached. Each kinematic variable was averaged per participant and condition (i.e., human, stone).
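A minimal sketch of this pipeline—resampling onto a 15 ms grid, onset detection at 2% of peak speed, and the 150 ms feedforward window—follows the definitions above (a hypothetical Python reconstruction, not the authors’ code):

```python
import numpy as np

DT = 0.015  # resampling interval in s (= 66.67 Hz), as in the methods

def reach_kinematics(t: np.ndarray, xyz: np.ndarray) -> dict:
    """t: (n,) timestamps in s; xyz: (n, 3) hand positions in m."""
    grid = np.arange(t[0], t[-1], DT)
    pos = np.column_stack([np.interp(grid, t, xyz[:, i]) for i in range(3)])
    vel = np.gradient(pos, DT, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    acc = np.gradient(speed, DT)

    onset = int(np.argmax(speed >= 0.02 * speed.max()))  # 2% of max velocity
    ff = slice(onset, min(onset + int(round(0.150 / DT)), len(grid)))  # first 150 ms

    return {
        "max_speed_m_s": speed[ff].max(),
        "time_to_max_speed_s": float(np.argmax(speed[ff])) * DT,
        "max_acceleration_m_s2": acc[ff].max(),
        "path_length_m": float(np.sum(np.linalg.norm(np.diff(pos[ff], axis=0), axis=1))),
    }

# Example: a synthetic 24 cm upward reach sampled unevenly at ~50 Hz.
rng = np.random.default_rng(4)
t = np.cumsum(rng.uniform(0.018, 0.022, 40))
t -= t[0]
z = 0.24 * (1 - np.cos(np.pi * t / t[-1])) / 2  # smooth rise from 0 to 0.24 m
xyz = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
print(reach_kinematics(t, xyz))
```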

Data Analysis

The data of both human arm (phases 1 and 4) and stone arm (phases 2 and 3) conditions were averaged for each participant to account for the time factor, which may be associated with intra-subject habituation or fatigue effects across experimental phases.

To investigate whether the stone feeling and the embodiment questionnaires, MEP amplitudes, and the kinematic variables differentiated between the human and stone condition, parametric (paired t -tests) and non-parametric (Wilcoxon Signed-Rank tests) pairwise comparisons were performed when applicable.

Further, Pearson product-moment or Spearman’s rank correlation analyses (depending on the statistical distribution of the datasets) were conducted to study the relationship between (1) stone feeling items and MEP amplitudes, (2) stone feeling items and kinematic variables, (3) stone feeling items and embodiment components, and (4) kinematic variables and MEP amplitudes. Correlation analyses were performed separately for the human and stone condition.

Assumptions for parametric testing were checked using normality tests (Kolmogorov–Smirnov, p > 0.05). Outlier trials (more than ± 2.5 SDs from the mean of the participant) were excluded from the analyses. All p values were corrected for multiple hypothesis testing using Tukey–Kramer and Bonferroni–Holm corrections (between-conditions comparisons) and the Benjamini–Hochberg false discovery rate (correlation analyses). Statistical analyses were performed with R v. 4.1.1 and the significance threshold was set at α < 0.05. If not otherwise stated, two-sided hypothesis testing was applied (one-sided testing is indicated where there was a clear directed a priori hypothesis).
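A minimal sketch of this decision logic—normality check, then a paired t-test or Wilcoxon test, with Benjamini–Hochberg correction for a family of p values—is shown below. It is written in Python for illustration (the authors used R), with fabricated numbers:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def paired_compare(human: np.ndarray, stone: np.ndarray, alpha: float = 0.05):
    """Paired t-test if the differences look normal (KS test), else Wilcoxon."""
    diff = stone - human
    z = (diff - diff.mean()) / diff.std(ddof=1)
    normal = stats.kstest(z, "norm").pvalue > alpha
    if normal:
        return "paired t-test", stats.ttest_rel(stone, human).pvalue
    return "wilcoxon", stats.wilcoxon(stone, human).pvalue

# Ten participants' condition means (fabricated for the example).
rng = np.random.default_rng(5)
human = rng.normal(2.0, 0.8, 10)
stone = human + rng.normal(1.2, 0.8, 10)
print(paired_compare(human, stone))

# FDR correction across a family of correlation p values:
pvals = [0.03, 0.02, 0.08, 0.50]
rejected, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(rejected, p_adj.round(3))
```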

A summary of the results with the statistics is represented in Table 3 .

Table 3. Descriptive statistics and results of the pairwise comparisons. Mean (standard deviation) or median (25% quantile–75% quantile) are reported. *Indicates significance at the 0.05 level.

Between Conditions Differences

Stone Illusion

Pairwise comparisons showed significant (one-sided) stone illusion effects: subjects rated their right arm as colder, heavier, stiffer, and more insensitive in the stone versus the human condition (Figure 3A).

Figure 3. Between conditions differences. (A) Rated stone feeling, (B) subjectively experienced embodiment, (C) cortical excitability assessed via motor evoked potential (MEP) amplitudes, and (D) time to maximum speed in the feedforward kinematics reflecting movement initiation across phases. H1, first human condition; S1, first stone condition; S2, second stone condition; H2, second human condition. Bar plots: error bars represent the standard deviation. Boxplots: whiskers extend 1.5 times the inter-quartile range above the upper or below the lower quartile, horizontal solid lines show the median, and box boundaries show the inter-quartile range. *p < 0.05 for pairwise comparisons between the human (mean of H1 and H2) and stone (mean of S1 and S2) conditions.

Pairwise comparisons did not show significant differences in the subjectively reported embodiment components (i.e., body ownership, agency, and location), disembodiment, and control items between the human and stone condition ( Figure 3B ).

The pairwise comparison did not reveal a significant (one-sided) modulation of the MEP amplitude between the human and stone condition ( Figure 3C ).

We found a significant (one-sided) effect of the illusion on the time to maximum speed in the feedforward kinematics, which was longer in the stone than in the human condition (Figure 3D). None of the other kinematic variables differed significantly between the human and stone avatar conditions in the pairwise comparisons.

Correlation Analyses

Stone Feeling and Motor Evoked Potentials

Correlation analyses revealed a significant relationship between the subjectively reported stone feeling and the MEP amplitude in the stone but not the human condition (Figure 4A). The stronger the rated coldness [rs(18) = 0.44, p(one-sided) = 0.03] and stiffness [rs(18) = 0.53, p(one-sided) = 0.02], the higher the cortical excitability. We further found a trend for an association between the rated heaviness and the MEP amplitude [rs(18) = 0.40, p(one-sided) = 0.08]. The MEP amplitudes were not associated with the insensitivity item of the stone feeling [rs(18) = 0.08, p(one-sided) = 0.5].

Figure 4. Results of the correlation analyses for the human (lighter blue/circles) and stone (darker blue/diamonds) conditions. (A) Stone illusion strength and MEP amplitude reflecting cortical excitability. (B) Stone illusion strength and feedforward (FF) path length reflecting the average speed during movement initiation. (C) Stone illusion strength and path length for the movement until the sphere. (D) Average speed in the feedforward kinematics and cortical excitability. Of note, because several values were very similar across the human and stone conditions, the number of individually visible markers (i.e., circles and diamonds) may be lower than the number of measurement points (i.e., 20). *p < 0.05.

Stone Feeling and Embodiment

A significant negative correlation between the stone feeling and agency (but not body ownership) was observed in both the human and stone conditions. In the stone condition, coldness ratings were associated with reduced agency [rs(18) = −0.56, p = 0.02]. In the human condition, insensitivity was associated with reduced agency [rs(18) = −0.55, p = 0.02], while there was a trend for heaviness [rs(18) = −0.41, p = 0.08].

Stone Feeling and Kinematic Variables

We also found significant positive correlations between the stone feeling items and kinematic variables in the stone but not the human condition. The higher participants rated the coldness [rp(18) = 0.57, p(one-sided) = 0.02] and stiffness [rp(18) = 0.44, p(one-sided) = 0.03] of their arm, the longer the paths performed within the 150 ms after movement onset in the stone condition (Figure 4B). Further, the coldness item was associated with longer paths in the movements until the sphere in the stone condition [rp(18) = 0.55, p = 0.04; Figure 4C].

Kinematic Variables and Motor Evoked Potentials

Finally, we found a significant correlation between the feedforward kinematics (i.e., within the 150 ms after movement onset) and MEP amplitudes in the stone but not the human condition. The higher the cortical excitability, the longer the performed paths [rs(18) = 0.72, p = 0.03; Figure 4D]. For the movements until the sphere, we also found significant associations between the kinematics and the MEP amplitude in the stone condition: higher cortical excitability was associated with longer paths [rs(18) = 0.51, p = 0.03] and higher maximum acceleration [rs(18) = 0.48, p = 0.03]. Further, we found a trend for an association between motor overshooting and the MEP amplitudes in the stone condition [rs(18) = 0.43, p = 0.06].

No further correlation analyses reached significance. A summary of the correlation analyses is presented in Table 4.

Table 4. Significant correlations (p < 0.05) between measures for the human (H) and stone (S) conditions. Plus and minus signs indicate whether a correlation is positive or negative.

Discussion

We “tricked the brain” using immersive VR to investigate whether multisensory feedback modulating the physical properties of an embodied avatar influences motor brain networks and motor control. Ten healthy participants were immersed in a virtual environment using an HMD, where they saw an avatar from a first-person perspective. We slowly transformed the visual appearance of the human-skinned avatar into an avatar with a stone surface. To enforce the “stone arm illusion,” we simultaneously applied auditory and tactile feedback during the visual transformation, i.e., participants saw and felt a (virtual) hammer touching their real arm or the arm of the avatar, triggering a progressively changing human-to-stone hitting sound from a loudspeaker. Participants filled in questionnaires to report their level of embodiment and the experienced stone feeling, had single-pulse TMS applied over the primary motor cortex, and performed an arm reaching task to study how the “stone arm illusion” affected motor cortical excitability and action execution.

The Strength of Subjectively Experienced “Stone Arm Illusion” Is Associated With Enhanced Motor Cortical Excitability

In line with our expectation, our participants indeed experienced the “stone arm illusion.” They rated their arm as colder, heavier, stiffer, and more insensitive when we enforced illusory ownership over a stone versus a human avatar using multisensory feedback in immersive VR. The adaptation of the participants to the stone illusion is further visible in the aftereffects found after the transformation back to the human avatar: participants rated their own arm as less heavy, stiff, and insensitive after the illusion compared with baseline (i.e., the first human block). The stone illusion is thus a result of both the relatively enhanced stone feeling during immersion with the stone compared with the human avatar and the relatively lowered stone ratings below baseline after the transformation back to the human avatar. Importantly, the stone illusion did not impact the experienced level of embodiment: participants reported high body ownership and agency over the avatar, independently of the condition. Our results are in line with a large body of research that used multisensory feedback with (Slater, 2008; Perez-Marcos et al., 2009; Kilteni et al., 2012b, 2016; Pyasik et al., 2020; Tambone et al., 2021) and without immersive VR (Botvinick and Cohen, 1998; Ehrsson, 2004; Petkova and Ehrsson, 2008; van der Hoort et al., 2011; Burin et al., 2017) to induce various body illusions. These findings established the view that body perceptions are continuously updated in the brain in response to sensory signals related to the body (Blanke, 2012; Tsakiris, 2017). A certain flexibility of the brain regarding body perception is indeed crucial to maintain a stable body image, even though body characteristics constantly change in response to external influences such as light, temperature, and posture.

We further showed that the strength of the reported stone feeling was associated with enhanced cortical excitability, i.e., with an increased amplitude of the TMS-induced motor evoked potentials in the stone but not the human avatar condition. More precisely, the subjectively rated coldness and stiffness of the participants’ own arms in the stone condition were associated with enhanced motor excitability, while we found a tendency for the rated heaviness to be associated with the MEP amplitudes. This finding may indicate that participants physically mirrored the embodied body characteristics of the stone avatar: they may have increased the muscle tension or activity in their own arm, (unconsciously) mimicking the stiffness of the stone avatar with increasing illusion strength. Cortical excitability is generally thought to reflect the responsiveness of the brain to exogenous and/or endogenous signals (Rosanova et al., 2011). In the case of the primary motor cortex, excitability is linked with a decreased motor threshold and is modulated, for example, during action preparation and/or execution (Starr et al., 1988; Bestmann, 2012; Bestmann and Krakauer, 2015). In line with our conclusion, previous studies have shown that muscle contractions enhance primary motor cortical excitability (Arányi et al., 1998; Yahagi et al., 2003; Perez and Cohen, 2008; Perez et al., 2012). However, since our experimental setup was already crowded, we did not add electromyographic recordings during the experiment to objectively verify our conclusion on the enhanced muscle tone. Alternatively, the embodied “stone feeling” may have increased the perceived difficulty of controlling the arm, as task difficulty has previously been shown to enhance motor cortical excitability (Pearce and Kidgell, 2009; Watanabe et al., 2018).

Importantly, the subjectively experienced illusion strength was only correlated with the MEP amplitude in the stone but not the human avatar condition. The stone avatar condition was, moreover, temporally embedded between the human avatar conditions, minimizing the possibility that the neurophysiological effects were impacted by confounding factors related to the duration of the experiment, such as room temperature or fatigue (which could be equally expected for the human and stone conditions). Since the average cortical excitability did not differ between the human and stone conditions (i.e., in the between-conditions analyses), our findings suggest that the MEP amplitudes are crucially modulated during the presence of the stone illusion depending on the subjectively experienced illusion strength.

In line with this conclusion, we found a negative association between the subjectively reported stone feeling, namely, the rated coldness, and reported agency, indicating that the stronger participants experienced the stone illusion, the less in control they felt over their arm. Importantly, the “stone feeling” did not affect the experienced level of body ownership over the avatar. The correlation between the reported stone feeling and the reported embodiment was only present for agency, but not for body ownership and location, the two other embodiment components (Kilteni et al., 2012a). Therefore, the strength of the stone illusion impacted how well participants thought they could control their arm, but they kept experiencing the virtual stone arm as their own. However, the analyses showed a similar pattern for the human avatar condition: agency (but not body ownership) was also negatively associated with the reported stone feeling in the human avatar condition. Therefore, carry-over effects may have influenced the results, i.e., the stone feeling may have persisted into the human condition, hampering the feeling of agency when immersed with the human avatar. Alternatively, other inter-subject variables, such as fatigue or body temperature, which could also have been captured by the questionnaire, may have contributed to the reduced reported agency.

To the best of our knowledge, we are the first to show modulated motor brain processing associated with altered body perceptions using immersive VR. Our findings on neurophysiological effects extend previous studies reporting, for example, affective ( Tajadura-Jiménez et al., 2015a ), (social) cognitive ( D’Angelo et al., 2019 ; Burin and Kawashima, 2021 ; Clausen et al., 2021 ; Tambone et al., 2021 ), and motor ( Kilteni et al., 2013 ; Tajadura-Jiménez et al., 2015a , 2016 ) effects of body illusions.

Interestingly, the observed enhancement in excitability associated with the strength of the illusory self-perception suggests a more “complete” body illusion using immersive VR compared with classical paradigms, notably the rubber hand illusion. Non-invasive brain stimulation studies showed that the activity in motor (della Gatta et al., 2016; Fossataro et al., 2018) and somatosensory (Zeller et al., 2015; Hornburger et al., 2019; Isayama et al., 2019) brain areas was attenuated during the experience of illusory ownership over a rubber hand (i.e., during synchronous but not asynchronous multisensory stimulation). These findings have previously been discussed as an indication of the disembodiment of the real hand necessary to embody a rubber hand (for a review see Golaszewski et al., 2021). The use of immersive VR with highly congruent visuo-motor and proprioceptive feedback (in contrast to the rubber hand illusion, no proprioceptive mismatch is present) may allow highly realistic illusions to be induced without the necessity for the user/brain to disembody the own limbs. As a consequence, the highly realistic visuo-proprioceptive synchrony experienced in immersive VR illusions may influence brain networks similarly to what could be expected from modifying real body characteristics (Tajadura-Jiménez et al., 2016).

Together, immersive VR may be an especially powerful tool to realistically modify the perception of the bodily self and influence associated brain networks. Our results show that participants embodied the stone avatar in immersive VR and that the reported illusion strength was associated with the motor cortical excitability, indicative of a physical mirroring of the embodied body characteristics of the stone arm.

The “Stone Arm Illusion” Influences Movement Initiation in a Reaching Task

We further predicted that the modulated body perception associated with the stone arm illusion in immersive VR would impact motor actions, as body characteristics such as shape and weight critically influence interactions with the environment (Head and Holmes, 1911; Maravita and Iriki, 2004). Our participants performed a simple goal-oriented motor task visualized in the virtual environment, i.e., they had to reach as fast and accurately as possible with their hand from a resting position to vertically appearing spheres. Since we expected that the reaching movement itself might break the illusion, we analyzed, in addition to the whole movement until the sphere, the feedforward kinematics within the first 150 ms after movement onset, during which the cerebellum would not yet have received updated sensory (e.g., proprioceptive) feedback (Miall et al., 2007).

We found that participants in the stone condition were marginally slower to reach the maximum speed within the first 150 ms of the movement than in the human condition. Since the maximum speed/path length in the feedforward movement did not differ across conditions, this result indicates a slightly slower movement initiation, on average, when the motor task was performed with the stone versus the human avatar. This finding is contrary to what we expected: we predicted that, if the stone illusion worked, participants would overestimate the weight of their real arm, reflected in enhanced acceleration patterns and motor overshooting, similar to lifting a water bottle that is expected to be full but is actually empty.

However, our correlation analyses suggest that the movement initiation may critically depend on the subjectively experienced illusion strength. The reported stone feeling, namely, the experienced coldness and stiffness, was associated with longer paths within the first 150 ms after movement onset, indicating on average faster movements (average speed defined as the path length over the 150 ms time window). Therefore, participants with stronger illusion effects may not only have more strongly physically mirrored (see section “The Strength of Subjectively Experienced ‘Stone Arm Illusion’ Is Associated With Enhanced Motor Cortical Excitability”) but also compensated for the embodied body characteristics of the stone avatar. Stronger experienced illusions might have led participants to put more physical engagement into the task to compensate for the additional (illusory) weight of the stone arm, resulting in faster feedforward movements or initiations than in participants with weaker experienced illusions. In contrast, weak illusion effects during the stone avatar condition could have hampered movement initiation: participants with relatively weak illusion effects likely experienced a stronger sensory mismatch in the stone condition than participants with stronger illusion effects, and incongruency of information in virtual environments has previously been shown to hamper reaction times and motor performance, independently of the experienced body ownership over the avatar (Odermatt et al., 2021). The facilitated versus hampered feedforward movement, depending on the illusion strength, could explain why, across the whole group, only a marginal overall slowing of movement initiation was found in the stone versus the human avatar condition. Apart from this marginal effect on movement initiation, the kinematic variables were not influenced by the human versus stone condition when considering the whole group, suggesting that the physical compensation reflected in the movement initiation may be critically modulated by the subjectively experienced illusion strength.

Our conclusion of a physical compensation for the embodied stone characteristics in the motor task may be further supported by the association between cortical excitability and faster movement initiation in the stone, but not the human, condition. The higher the cortical excitability associated with the subjectively reported stiffness and coldness, the faster, on average, were the reaching movements within 150 ms after movement onset (namely, reflected in longer performed paths). It is possible that the previously discussed potential physical mirroring and/or compensation of the embodied body characteristics of the stone avatar “chronically” activated and increased the cortical excitability in motor brain areas, boosting movement initiation. Previous research has shown that motor cortical excitability is correlated with the force (Baud-Bovy et al., 2008; Perez and Cohen, 2009; Barthélemy et al., 2012) and speed of performed movements (Uehara et al., 2011). However, it needs to be pointed out that the assessment of cortical excitability was temporally separated from the motor task. Even though the literature points toward short-term influences of motor actions on cortical excitability within the range of milliseconds to seconds (Chen et al., 1998; Chen and Hallett, 1999), the exact time course of corticospinal excitability remains poorly understood.

Our results suggest stronger illusion effects on the feedforward kinematics than on the “whole” movement, contrasting with previous studies reporting relatively long-lasting motor effects. However, in these studies, the motor task was part of the multisensory feedback used to induce the illusion. Tajadura-Jiménez et al. (2012, 2015b, 2016), for example, instructed blindfolded participants to tap with their hand on a surface, eliciting sound implying a longer arm (e.g., the provision of a lighter sound to simulate increased distance). The authors showed that the illusion of having a longer arm slowed and prolonged the real arm movements of participants. The same authors also found altered gait patterns when participants had the illusion of owning a lighter versus heavier body, modulated with different sounds provided with each footstep (Tajadura-Jiménez et al., 2015a). In both studies, the movements were directly coupled with the real-time auditory feedback modulating the body characteristics. Kilteni et al. (2013), in one of the few studies that used immersive VR to create illusions, showed that participants who embodied dark-skinned avatars exhibited more variable and faster drumming movements compared with those embodying light-skinned avatars. Even though no additional sensory enforcement was induced during the motor task there, participants were continuously performing the movements. Conversely, in our experiment, the reaching movements were performed between resting periods, without the audio-tactile feedback used to enforce the stone arm illusion. Therefore, the (sudden) proprioceptive feedback linked with the movement in the motor task may have been more likely to disrupt or lower the illusion effects compared with the setups used in previous studies.

Together, we replicate and extend previous findings with our “stone arm illusion.” We show that a modulated self-perception induced by multisensory feedback in immersive VR impacts motor control in a way that may crucially depend on the subjectively experienced illusion strength. Participants with higher reported illusion strength performed, on average, faster reaching movements, indicating that they may have physically compensated for the embodied body characteristics of the stone avatar. In contrast, the incongruent multisensory information associated with weaker experienced illusion effects may have hampered movement initiation.

Clinical Implications

Our finding that motor brain activity can be influenced by a perceived modified reality may have important applications for neurorehabilitation. The use of immersive virtual training environments may help to subtly “trick the brain” and change the self-perception of neurologic patients, enhancing the engagement of motor brain networks during motor training. In addition, future studies may investigate to what extent motor brain networks can be activated via the mere immersion into VR environments, without actual motor execution, especially in neurologic patients.

Further, the embodiment of avatars with different body characteristics may offer a playful possibility to implicitly increase the (physical) engagement during training, optimizing motor recovery. However, our results show that the effects of body illusions highly depend on the subjectively experienced illusion strength, and do not necessarily follow from the experimental manipulation of multisensory information. Our findings may be of special relevance for clinical settings considering previous work suggesting that the strength of body illusions in neurologic patients depends on their motor impairment. Using the rubber hand illusion paradigm, Burin et al. (2015) showed that hemiplegic patients reported stronger illusion effects for their impaired hand and weaker effects for the unimpaired hand than healthy controls. The chronic absence of movements may enhance the flexibility of the brain with regard to body ownership for the paralyzed limb, while the healthy limb may be more strictly embodied ( Burin et al., 2015 ). Future studies are needed to investigate how to enforce and optimize body illusions and their behavioral and neural benefits in patients.

Study Limitations

Similar to previous studies, we did not find consistent illusion effects across the different kinematic parameters (Tajadura-Jiménez et al., 2016). As previously highlighted by Tajadura-Jiménez et al. (2016), the small sample size limits the findings and conclusions of the present study. A further reason for the lack of stronger effects in the kinematic variables may lie in the experimental setup. The soft armrest on which participants placed their arm during the experiment was slightly sticky and, therefore, noticeable for the participants during the motor task. This may have reduced the immersion in VR, increasing attention to the sensory feedback associated with the movement and mitigating illusion effects. In addition, the motor task consisted of only four trials per block (i.e., eight trials per condition) to lower the risk that the performed movements would break the illusion; conclusions drawn from the movement performance should therefore be treated with caution.

Further, the questionnaire used to assess the stone feeling may have captured confounds such as inter-subject variability in physical or mental fatigue, motivation, or perceived body temperature. This could also explain why not all stone feeling items showed effects consistent with the experimental manipulation. Future studies may implement additional questionnaires or sensors (e.g., temperature, skin conductance) to objectively assess or control for inter-subject confounds.

Moreover, even though we aimed at balancing our two conditions as much as possible to control for intra-subject confounds, for example, related to the duration of the experiment, some differences remained. Participants experienced more audio-tactile feedback during the stone than during the human condition, to enforce the stone arm illusion between the measurement blocks. The tactile feedback may have increased the awareness of the own arm, in turn influencing motor brain networks; attention, for example, has been shown to modulate motor cortical excitability (Conte et al., 2007). Further, the third block was the only block in which the MEP evaluation was performed directly after the motor task, and carry-over effects could have impacted our cortical excitability results. However, analyses excluding the third block did not significantly change our findings, consistent with reports of short-lasting effects of movements on cortical excitability (Chen et al., 1998; Chen and Hallett, 1999). In addition, learning effects may confound our results despite our balanced design. Indeed, learning effects were present in most kinematic performance variables; for example, participants showed more accurate (as reflected in shorter path lengths in the movements until the sphere) and faster reaching movements at the end compared with the beginning of the experiment. However, learning may not have occurred linearly across blocks and may, therefore, have differently affected the performance in the human condition (consisting of the first and last experimental blocks) and the stone condition (consisting of the two embedded experimental blocks). Averaging the blocks of each condition might, therefore, not have fully accounted for learning effects.

Finally, correlation analyses do not reveal the directionality of an association. For example, it is possible that participants with enhanced cortical excitability were more “prone” to experience body illusions and/or to perform faster movements. Similarly, better performance in VR may have enforced the embodiment of the stone avatar (Grechuta et al., 2017, 2019). Future studies are needed to disentangle the directionality and causality of the relationship between VR illusion effects and behavioral or neural correlates.

Conclusion

The goal of this study was to “trick the brain” using immersive VR and to investigate how modulated physical properties of an embodied avatar influence motor brain networks and action execution. Our results show that participants indeed experienced the “stone arm illusion.” The reported illusion strength was associated with enhanced motor cortical excitability and faster movements, indicating that participants may have physically mirrored and compensated for the embodied body characteristics of the stone avatar. Together, altering the perception of one’s own body and the associated motor brain networks in a subtle way using immersive VR may have important applications for neurorehabilitation and boost the motor recovery of neurologic patients.

Data Availability Statement

Ethics Statement

The studies involving human participants were reviewed and approved by the Ethics Committee of the Canton of Bern (KEK). The patients/participants provided their written informed consent to participate in this study.

Author Contributions

KB, JP-A, and LM-C designed the study. JP-A set up the experiment. KB, JP-A, and DC tested all subjects. KB and JP-A analyzed the data. ÖÖ contributed to the analysis of kinematic data. KB and LM-C wrote the manuscript. All authors edited and revised the manuscript and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

We would like to thank all members of the Motor Learning and Neurorehabilitation Laboratory and the Eyelab for their kind support during this project. A special thanks goes to our participants.

Footnotes

1. http://www.makehumancommunity.org

Funding

This work was supported by the Swiss National Science Foundation through grant PP00P2 163800, the Swiss National Center of Competence in Research (NCCR Robotics), and a UniBE ID Grant.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnhum.2021.787487/full#supplementary-material

References

• Adamovich S. V., Merians A. S., Boian R., Tremaine M., Burdea G. S., Recce M., et al. (2004). A virtual reality based exercise system for hand rehabilitation post-stroke: transfer to function. Conf. Proc. IEEE Eng. Med. Biol. Soc. 7, 4936–4939. doi: 10.1109/IEMBS.2004.1404364
• Ammann C., Guida P., Caballero-Insaurriaga J., Pineda-Pardo J. A., Oliviero A., Foffani G. (2020). A framework to assess the impact of number of trials on the amplitude of motor evoked potentials. Sci. Rep. 10:21422. doi: 10.1038/s41598-020-77383-6
• Arányi Z., Mathis J., Hess C. W., Rösler K. M. (1998). Task-dependent facilitation of motor evoked potentials during dynamic and steady muscle contractions. Muscle Nerve 21, 1309–1316.
• Barker A. T., Jalinous R., Freeston I. L. (1985). Non-invasive magnetic stimulation of human motor cortex. Lancet 325, 1106–1107. doi: 10.1016/S0140-6736(85)92413-4
• Barthélemy D., Alain S., Grey M. J., Nielsen J. B., Bouyer L. J. (2012). Rapid changes in corticospinal excitability during force field adaptation of human walking. Exper. Brain Res. 217, 99–115. doi: 10.1007/s00221-011-2977-4
• Basalp E., Wolf P., Marchal-Crespo L. (2021). Haptic training: which types facilitate (re)learning of which motor task and for whom? Answers by a review. IEEE Trans. Haptics 14, 722–739. doi: 10.1109/TOH.2021.3104518
• Bassolino M., Franza M., Bello Ruiz J., Pinardi M., Schmidlin T., Stephan M. A., et al. (2018). Non-invasive brain stimulation of motor cortex induces embodiment when integrated with virtual reality feedback. Eur. J. Neurosci. 47, 790–799. doi: 10.1111/ejn.13871
• Baud-Bovy G., Prattichizzo D., Rossi S. (2008). Contact forces evoked by transcranial magnetic stimulation of the motor cortex in a multi-finger grasp. Brain Res. Bull. 75, 723–736. doi: 10.1016/j.brainresbull.2008.01.005
• Bestmann S. (2012). “Functional modulation of primary motor cortex during action selection,” in Cortical Connectivity, eds Chen R., Rothwell J. C. (Berlin: Springer), 183–205. doi: 10.1007/978-3-662-45797-9_10
• Bestmann S., Krakauer J. W. (2015). The uses and interpretations of the motor-evoked potential for understanding behaviour. Exper. Brain Res. 233, 679–689. doi: 10.1007/s00221-014-4183-7
• Bezerra Í. M. P., Crocetta T. B., Massetti T., Silva T. D., da Guarnieri R., de Meira C. M., et al. (2018). Functional performance comparison between real and virtual tasks in older adults: a cross-sectional study. Medicine 97:e9612. doi: 10.1097/MD.0000000000009612
• Blanke O. (2012). Multisensory brain mechanisms of bodily self-consciousness. Nat. Rev. Neurosci. 13, 556–571. doi: 10.1038/nrn3292
• Blanke O., Slater M., Serino A. (2015). Behavioral, neural, and computational principles of bodily self-consciousness. Neuron 88, 145–166. doi: 10.1016/j.neuron.2015.09.029
• Bonato C., Miniussi C., Rossini P. M. (2006). Transcranial magnetic stimulation and cortical evoked potentials: a TMS/EEG co-registration study. Clin. Neurophysiol. 117, 1699–1707. doi: 10.1016/j.clinph.2006.05.006
• Botvinick M., Cohen J. (1998). Rubber hands ‘feel’ touch that eyes see. Nature 391, 756. doi: 10.1038/35784
• Braun N., Debener S., Spychala N., Bongartz E., Sörös P., Müller H. H. O., et al. (2018). The senses of agency and ownership: a review. Front. Psychol. 9:535. doi: 10.3389/fpsyg.2018.00535
• Burin D., Kawashima R. (2021). Repeated exposure to illusory sense of body ownership and agency over a moving virtual body improves executive functioning and increases prefrontal cortex activity in the elderly. Front. Hum. Neurosci. 15:674326. doi: 10.3389/fnhum.2021.674326
• Burin D., Livelli A., Garbarini F., Fossataro C., Folegatti A., Gindri P., et al. (2015). Are movements necessary for the sense of body ownership? Evidence from the rubber hand illusion in pure hemiplegic patients. PLoS One 10:e0117155. doi: 10.1371/journal.pone.0117155
• Burin D., Pyasik M., Salatino A., Pia L. (2017). That’s my hand! Therefore, that’s my willed action: how body ownership acts upon conscious awareness of willed actions. Cognition 166, 164–173. doi: 10.1016/j.cognition.2017.05.035
• Carey M., Crucianelli L., Preston C., Fotopoulou A. (2019). The effect of visual capture towards subjective embodiment within the full body illusion. Sci. Rep. 9:2889. doi: 10.1038/s41598-019-39168-4
• Chen R., Hallett M. (1999). The time course of changes in motor cortex excitability associated with voluntary movement. Can. J. Neurol. Sci. 26, 163–169. doi: 10.1017/S0317167100000196
• Chen R., Yaseen Z., Cohen L. G., Hallett M. (1998). Time course of corticospinal excitability in reaction time and self-paced movements. Ann. Neurol. 44, 317–325. doi: 10.1002/ana.410440306
• Clausen S., Tajadura-Jiménez A., Janssen C. P., Bianchi-Berthouze N. (2021). Action sounds informing own body perception influence gender identity and social cognition. Front. Hum. Neurosci. 15:688170. doi: 10.3389/fnhum.2021.688170
• Conte A., Gilio F., Iezzi E., Frasca V., Inghilleri M., Berardelli A. (2007). Attention influences the excitability of cortical motor areas in healthy humans. Exper. Brain Res. 182, 109–117. doi: 10.1007/s00221-007-0975-3
• Crea S., D’Alonzo M., Vitiello N., Cipriani C. (2015). The rubber foot illusion. J. Neuroeng. Rehabil. 12:77. doi: 10.1186/s12984-015-0069-6
• D’Angelo M., di Pellegrino G., Frassinetti F. (2019). The illusion of having a tall or short body differently modulates interpersonal and peripersonal space. Behav. Brain Res. 375:112146. doi: 10.1016/j.bbr.2019.112146
• David N., Newen A., Vogeley K. (2008). The “sense of agency” and its underlying cognitive and neural mechanisms. Conscious. Cogn. 17, 523–534. doi: 10.1016/j.concog.2008.03.004
• de Mello Monteiro C. B., Massetti T., da Silva T. D., van der Kamp J., de Abreu L. C., Leone C., et al. (2014). Transfer of motor learning from virtual to natural environments in individuals with cerebral palsy. Res. Dev. Disabil. 35, 2430–2437. doi: 10.1016/j.ridd.2014.06.006
• de Vignemont F. (2010). Body schema and body image—Pros and cons. Neuropsychologia 48, 669–680. doi: 10.1016/j.neuropsychologia.2009.09.022
• della Gatta F., Garbarini F., Puglisi G., Leonetti A., Berti A., Borroni P. (2016). Decreased motor cortex excitability mirrors own hand disembodiment during the rubber hand illusion. eLife 5:e14972. doi: 10.7554/eLife.14972
• Deutsch J. E., Merians A. S., Adamovich S., Poizner H., Burdea G. C. (2004). Development and application of virtual reality technology to improve hand use and gait of individuals post-stroke. Restor. Neurol. Neurosci. 22, 371–386.
• Di Lazzaro V., Rothwell J. C. (2014). Corticospinal activity evoked and modulated by non-invasive stimulation of the intact human motor cortex. J. Physiol. 592, 4115–4128. doi: 10.1113/jphysiol.2014.274316
• Dilena A., Todd G., Berryman C., Rio E., Stanton T. R. (2019). What is the effect of bodily illusions on corticomotoneuronal excitability? A systematic review. PLoS One 14:e0219754. doi: 10.1371/journal.pone.0219754
• Ehrsson H. H. (2004). That’s my hand! Activity in premotor cortex reflects feeling of ownership of a limb. Science 305, 875–877. doi: 10.1126/science.1097011
• Feigin V. L., Forouzanfar M. H., Krishnamurthi R., Mensah G. A., Connor M., Bennett D. A., et al. (2014). Global and regional burden of stroke during 1990-2010: findings from the Global Burden of Disease Study 2010. Lancet 383, 245–254. doi: 10.1016/s0140-6736(13)61953-4
• Flögel M., Kalveram K. T., Christ O., Vogt J. (2016). Application of the rubber hand illusion paradigm: comparison between upper and lower limbs. Psychol. Res. 80, 298–306. doi: 10.1007/s00426-015-0650-4
• Fossataro C., Bruno V., Giurgola S., Bolognini N., Garbarini F. (2018). Losing my hand. Body ownership attenuation after virtual lesion of the primary motor cortex. Eur. J. Neurosci. 48, 2272–2287. doi: 10.1111/ejn.14116
• Golaszewski S., Frey V., Thomschewski A., Sebastianelli L., Versace V., Saltuari L., et al. (2021). Neural mechanisms underlying the Rubber Hand Illusion: a systematic review of related neurophysiological studies. Brain Behav. 11:e02124. doi: 10.1002/brb3.2124
• Grechuta K., Guga J., Maffei G., Rubio Ballester B., Verschure P. F. M. J. (2017). Visuotactile integration modulates motor performance in a perceptual decision-making task. Sci. Rep. 7:3333. doi: 10.1038/s41598-017-03488-0
• Grechuta K., Ulysse L., Rubio Ballester B., Verschure P. F. M. J. (2019). Self beyond the body: action-driven and task-relevant purely distal cues modulate performance and body ownership. Front. Hum. Neurosci. 13:91. doi: 10.3389/fnhum.2019.00091
• Groppa S., Oliviero A., Eisen A., Quartarone A., Cohen L. G., Mall V., et al. (2012). A practical guide to diagnostic transcranial magnetic stimulation: report of an IFCN committee. Clin. Neurophysiol. 123, 858–882. doi: 10.1016/j.clinph.2012.01.010
• Haans A., IJsselsteijn W. A., de Kort Y. A. W. (2008). The effect of similarities in skin texture and hand shape on perceived ownership of a fake limb. Body Image 5, 389–394. doi: 10.1016/j.bodyim.2008.04.003
• Head H., Holmes G. (1911). Sensory disturbances from cerebral lesions. Brain 34, 102–254. doi: 10.1093/brain/34.2-3.102
• Holmes N. P., Spence C. (2004). The body schema and multisensory representation(s) of peripersonal space. Cogn. Process. 5, 94–105. doi: 10.1007/s10339-004-0013-3
• Hornburger H., Nguemeni C., Odorfer T., Zeller D. (2019). Modulation of the rubber hand illusion by transcranial direct current stimulation over the contralateral somatosensory cortex. Neuropsychologia 131, 353–359. doi: 10.1016/j.neuropsychologia.2019.05.008
• Isayama R., Vesia M., Jegatheeswaran G., Elahi B., Gunraj C. A., Cardinali L., et al. (2019). Rubber hand illusion modulates the influences of somatosensory and parietal inputs to the motor cortex. J. Neurophysiol. 121, 563–573. doi: 10.1152/jn.00345.2018
• Jang S. H., You S. H., Hallett M., Cho Y. W., Park C.-M., Cho S.-H., et al. (2005). Cortical reorganization and associated functional motor recovery after virtual reality in patients with chronic stroke: an experimenter-blind preliminary study. Arch. Phys. Med. Rehabil. 86, 2218–2223. doi: 10.1016/j.apmr.2005.04.015
• Jeong H., Kim J. (2021). Development of a guidance system for motor imagery enhancement using the virtual hand illusion. Sensors 21:2197. doi: 10.3390/s21062197
• Kalckert A., Ehrsson H. H. (2012). Moving a rubber hand that feels like your own: a dissociation of ownership and agency. Front. Hum. Neurosci. 6:40. doi: 10.3389/fnhum.2012.00040
• Kammers M. P. M., de Vignemont F., Verhagen L., Dijkerman H. C. (2009). The rubber hand illusion in action. Neuropsychologia 47, 204–211. doi: 10.1016/j.neuropsychologia.2008.07.028
• Kanayama N., Hara M., Kimura K. (2021). Virtual reality alters cortical oscillations related to visuo-tactile integration during rubber hand illusion. Sci. Rep. 11:1436. doi: 10.1038/s41598-020-80807-y
• Kilteni K., Bergstrom I., Slater M. (2013). Drumming in immersive virtual reality: the body shapes the way we play. IEEE Trans. Vis. Comput. Graph. 19, 597–605. doi: 10.1109/TVCG.2013.29
• Kilteni K., Grau-Sánchez J., Veciana De Las Heras M., Rodríguez-Fornells A., Slater M. (2016). Decreased corticospinal excitability after the illusion of missing part of the arm. Front. Hum. Neurosci. 10:145. doi: 10.3389/fnhum.2016.00145
• Kilteni K., Groten R., Slater M. (2012a). The sense of embodiment in virtual reality. Presence Teleoperat. Virtual Environ. 21, 373–387. doi: 10.1162/PRES_a_00124
• Kilteni K., Normand J.-M., Sanchez-Vives M. V., Slater M. (2012b). Extending body space in immersive virtual reality: a very long arm illusion. PLoS One 7:e40867. doi: 10.1371/journal.pone.0040867
• Knoblich G. (ed.) (2006). Human Body Perception From the Inside Out. Oxford: Oxford University Press.
• Lenggenhager B., Pazzaglia M., Scivoletto G., Molinari M., Aglioti S. M. (2012). The sense of the body in individuals with spinal cord injury. PLoS One 7:e50757. doi: 10.1371/journal.pone.0050757
• Lee J. H., Ku J., Cho W., Hahn W. Y., Kim I. Y., Lee S.-M., et al. (2003). A virtual reality system for the assessment and rehabilitation of the activities of daily living. Cyberpsychol. Behav. 6, 383–388. doi: 10.1089/109493103322278763
• Lloyd D. M. (2007). Spatial limits on referred touch to an alien limb may reflect boundaries of visuo-tactile peripersonal space surrounding the hand. Brain Cogn. 64, 104–109. doi: 10.1016/j.bandc.2006.09.013
• Longo M. R., Haggard P. (2011). Weber’s illusion and body shape: anisotropy of tactile size perception on the hand. J. Exper. Psychol. 37, 720–726. doi: 10.1037/a0021921
• Longo M. R., Schüür F., Kammers M. P. M., Tsakiris M., Haggard P. (2008). What is embodiment? A psychometric approach. Cognition 107, 978–998. doi: 10.1016/j.cognition.2007.12.004
• Lopez C., Bieri C. P., Preuss N., Mast F. W. (2012). Tactile and vestibular mechanisms underlying ownership for body parts: a non-visual variant of the rubber hand illusion. Neurosci. Lett. 511, 120–124. doi: 10.1016/j.neulet.2012.01.055
• Lum P. S., Godfrey S. B., Brokaw E. B., Holley R. J., Nichols D. (2012). Robotic approaches for rehabilitation of hand function after stroke. Am. J. Phys. Med. Rehabil. 91, S242–S254. doi: 10.1097/PHM.0b013e31826bcedb
• Makin T. R., Holmes N. P., Ehrsson H. H. (2008). On the other hand: dummy hands and peripersonal space. Behav. Brain Res. 191, 1–10. doi: 10.1016/j.bbr.2008.02.041
• Maravita A., Iriki A. (2004). Tools for the body (schema). Trends Cogn. Sci. 8, 79–86. doi: 10.1016/j.tics.2003.12.008
• Maravita A., Spence C., Driver J. (2003). Multisensory integration and the body schema: close to hand and within reach. Curr. Biol. 13, R531–R539. doi: 10.1016/S0960-9822(03)00449-4
• Marchal-Crespo L., Reinkensmeyer D. J. (2008). Haptic guidance can enhance motor learning of a steering task. J. Motor Behav. 40, 545–556. doi: 10.3200/JMBR.40.6.545-557
• Marchal-Crespo L., Reinkensmeyer D. J. (2009). Review of control strategies for robotic movement training after neurologic injury. J. Neuroeng. Rehabil. 6:20. doi: 10.1186/1743-0003-6-20
• Matsumoto N., Nakai R., Ino T., Mitani A. (2020). Brain activity associated with the rubber foot illusion. Neurosci. Lett. 721:134820. doi: 10.1016/j.neulet.2020.134820
• Miall R. C., Christensen L. O. D., Cain O., Stanley J. (2007). Disruption of state estimation in the human lateral cerebellum. PLoS Biol. 5:e316. doi: 10.1371/journal.pbio.0050316
• Odermatt I. A., Buetler K. A., Wenk N., Özen Ö., Penalver-Andres J., Nef T., et al. (2021). Congruency of information rather than body ownership enhances motor performance in highly embodied virtual reality. Front. Neurosci. 15:678909. doi: 10.3389/fnins.2021.678909
• Pearce A. J., Kidgell D. J. (2009). Corticomotor excitability during precision motor tasks. J. Sci. Med. Sport 12, 280–283. doi: 10.1016/j.jsams.2007.12.005
• Perez M. A., Cohen L. G. (2008). Mechanisms underlying functional changes in the primary motor cortex ipsilateral to an active hand. J. Neurosci. 28, 5631–5640. doi: 10.1523/JNEUROSCI.0093-08.2008
• Perez M. A., Cohen L. G. (2009). Scaling of motor cortical excitability during unimanual force generation. Cortex 45, 1065–1071. doi: 10.1016/j.cortex.2008.12.006
• Perez M. A., Soteropoulos D. S., Baker S. N. (2012). Corticomuscular coherence during bilateral isometric arm voluntary activity in healthy humans. J. Neurophysiol. 107, 2154–2162. doi: 10.1152/jn.00722.2011
• Perez-Marcos D., Bieler-Aeschlimann M., Serino A. (2018). Virtual reality as a vehicle to empower motor-cognitive neurorehabilitation. Front. Psychol. 9:2120. doi: 10.3389/fpsyg.2018.02120
• Perez-Marcos D., Slater M., Sanchez-Vives M. V. (2009). Inducing a virtual hand ownership illusion through a brain-computer interface. Neuroreport 20, 589–594. doi: 10.1097/WNR.0b013e32832a0a2a
• Petkova V. I., Ehrsson H. H. (2008). If I were you: perceptual illusion of body swapping. PLoS One 3:e3832. doi: 10.1371/journal.pone.0003832
• Pozeg P., Rognini G., Salomon R., Blanke O. (2014). Crossing the hands increases illusory self-touch. PLoS One 9:e94008. doi: 10.1371/journal.pone.0094008
• Preston C., Newport R. (2012). How long is your arm? Using multisensory illusions to modify body image from the third person perspective. Perception 41, 247–249. doi: 10.1068/p7103
• Pyasik M., Tieri G., Pia L. (2020). Visual appearance of the virtual hand affects embodiment in the virtual hand illusion. Sci. Rep. 10:5412. doi: 10.1038/s41598-020-62394-0
• Rao I. S., Kayser C. (2017). Neurophysiological correlates of the rubber hand illusion in late evoked and alpha/beta band activity. Front. Hum. Neurosci. 11:377. doi: 10.3389/fnhum.2017.00377
• Raz G., Gurevitch G., Vaknin T., Aazamy A., Gefen I., Grunstein S., et al. (2020). Electroencephalographic evidence for the involvement of mirror-neuron and error-monitoring related processes in virtual body ownership. Neuroimage 207:116351. doi: 10.1016/j.neuroimage.2019.116351
• Riemer M., Trojan J., Beauchamp M., Fuchs X. (2019). The rubber hand universe: on the impact of methodological differences in the rubber hand illusion. Neurosci. Biobehav. Rev. 104, 268–280. doi: 10.1016/j.neubiorev.2019.07.008
• Rosanova M., Casarotto S., Pigorini A., Canali P., Casali A. G., Massimini M. (2011). “Combining transcranial magnetic stimulation with electroencephalography to study human cortical excitability and effective connectivity,” in Neuronal Network Analysis, Vol. 67, eds Fellin T., Halassa M. (Totowa, NJ: Humana Press), 435–457. doi: 10.1007/7657_2011_15
• Rose F. D., Brooks B. M., Rizzo A. A. (2005). Virtual reality in brain damage rehabilitation: review. Cyberpsychol. Behav. 8, 241–262. doi: 10.1089/cpb.2005.8.241
• Rossini P. M., Burke D., Chen R., Cohen L. G., Daskalakis Z., Di Iorio R., et al. (2015). Non-invasive electrical and magnetic stimulation of the brain, spinal cord, roots and peripheral nerves: basic principles and procedures for routine clinical and research application. An updated report from an I.F.C.N. Committee. Clin. Neurophysiol. 126, 1071–1107. doi: 10.1016/j.clinph.2015.02.001
• Rothwell J. C. (1997). Techniques and mechanisms of action of transcranial stimulation of the human motor cortex. J. Neurosci. Methods 74, 113–122. doi: 10.1016/S0165-0270(97)02242-5
• Rothwell J. C., Day B. L., Thompson P. D., Dick J. P., Marsden C. D. (1987a). Some experiences of techniques for stimulation of the human cerebral motor cortex through the scalp. Neurosurgery 20, 156–163. doi: 10.1097/00006123-198701000-00032
• Rothwell J. C., Thompson P. D., Day B. L., Dick J. P. R., Kachi T., Cowan J. M. A., et al. (1987b). Motor cortex stimulation in intact man: 1. General characteristics of EMG responses in different muscles. Brain 110, 1173–1190. doi: 10.1093/brain/110.5.1173
• Sakamoto M., Ifuku H. (2021). Attenuation of sensory processing in the primary somatosensory cortex during rubber hand illusion. Sci. Rep. 11:7329. doi: 10.1038/s41598-021-86828-5
• Sanchez-Vives M. V., Spanlang B., Frisoli A., Bergamasco M., Slater M. (2010). Virtual hand illusion induced by visuomotor correlations. PLoS One 5:e10381. doi: 10.1371/journal.pone.0010381
• Schmidt S. H., Brandt S. A. (2021). “Motor threshold, motor evoked potential, central motor conduction time,” in The Oxford Handbook of Transcranial Stimulation, 2nd Edn, eds Wassermann E. M., Peterchev A. V., Ziemann U., Lisanby S. H., Siebner H. R., Walsh V. (Oxford: Oxford University Press). doi: 10.1093/oxfordhb/9780198832256.013.11
• Schulz H., Übelacker T., Keil J., Müller N., Weisz N. (2014). Now I am Ready—Now I am not: the influence of pre-TMS oscillations and corticomuscular coherence on motor-evoked potentials. Cereb. Cortex 24, 1708–1719. doi: 10.1093/cercor/bht024
• Senna I., Maravita A., Bolognini N., Parise C. V. (2014). The marble-hand illusion. PLoS One 9:e91688. doi: 10.1371/journal.pone.0091688
• Serino A., Haggard P. (2010). Touch and the body. Neurosci. Biobehav. Rev. 34, 224–236. doi: 10.1016/j.neubiorev.2009.04.004
• Shishov N., Melzer I., Bar-Haim S. (2017). Parameters and measures in assessment of motor learning in neurorehabilitation; a systematic review of the literature. Front. Hum. Neurosci. 11:82. doi: 10.3389/fnhum.2017.00082
• Slater M. (2008). Towards a digital body: the virtual arm illusion. Front. Hum. Neurosci. 2:6. doi: 10.3389/neuro.09.006.2008
• Smith V., Maslovat D., Carlsen A. N. (2019). StartReact effects are dependent on engagement of startle reflex circuits: support for a subcortically mediated initiation pathway. J. Neurophysiol. 122, 2541–2547. doi: 10.1152/jn.00505.2019
• Starr A., Caramia M., Zarola F., Rossini P. M. (1988). Enhancement of motor cortical excitability in humans by non-invasive electrical stimulation appears prior to voluntary movement. Electroencephalogr. Clin. Neurophysiol. 70, 26–32. doi: 10.1016/0013-4694(88)90191-5
• Tajadura-Jiménez A., Basia M., Deroy O., Fairhurst M., Marquardt N., Bianchi-Berthouze N. (2015a). “As light as your footsteps: altering walking sounds to change perceived body weight, emotional state and gait,” in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, eds Begole B., Kim J., Woo W., Inkpen K. (Seoul: Association for Computing Machinery), 2943–2952. doi: 10.1145/2702123.2702374
• Tajadura-Jiménez A., Tsakiris M., Marquardt T., Bianchi-Berthouze N. (2015b). Action sounds update the mental representation of arm dimension: contributions of kinaesthesia and agency. Front. Psychol. 6:689. doi: 10.3389/fpsyg.2015.00689
• Tajadura-Jiménez A., Marquardt T., Swapp D., Kitagawa N., Bianchi-Berthouze N. (2016). Action sounds modulate arm reaching movements. Front. Psychol. 7:1391. doi: 10.3389/fpsyg.2016.01391
• Tajadura-Jiménez A., Väljamäe A., Toshima I., Kimura T., Tsakiris M., Kitagawa N. (2012). Action sounds recalibrate perceived tactile distance. Curr. Biol. 22, R516–R517. doi: 10.1016/j.cub.2012.04.028
• Tambone R., Poggio G., Pyasik M., Burin D., Dal Monte O., Schintu S., et al. (2021). Changing your body changes your eating attitudes: embodiment of a slim virtual avatar induces avoidance of high-calorie food. Heliyon 7:e07515. doi: 10.1016/j.heliyon.2021.e07515
• Thut G., Veniero D., Romei V., Miniussi C., Schyns P., Gross J. (2011). Rhythmic TMS causes local entrainment of natural oscillatory signatures. Curr. Biol. 21, 1176–1185. doi: 10.1016/j.cub.2011.05.049
• Tsakiris M. (2010). My body in the brain: a neurocognitive model of body-ownership. Neuropsychologia 48, 703–712. doi: 10.1016/j.neuropsychologia.2009.09.034
• Tsakiris M. (2017). The multisensory basis of the self: from body to identity to others. Q. J. Exper. Psychol. 70, 597–609. doi: 10.1080/17470218.2016.1181768
• Tsakiris M., Haggard P. (2005). The rubber hand illusion revisited: visuotactile integration and self-attribution. J. Exper. Psychol. Hum. Percept. Perform. 31, 80–91. doi: 10.1037/0096-1523.31.1.80
• Tsakiris M., Prabhu G., Haggard P. (2006). Having a body versus moving your body: how agency structures body-ownership. Conscious. Cogn. 15, 423–432. doi: 10.1016/j.concog.2005.09.004
• Uehara K., Higashi T., Tanabe S., Sugawara K. (2011). Alterations in human motor cortex during dual motor task by transcranial magnetic stimulation study. Exper. Brain Res. 208, 277–286. doi: 10.1007/s00221-010-2478-x
• van der Hoort B., Guterstam A., Ehrsson H. H. (2011). Being Barbie: the size of one’s own body determines the perceived size of the world. PLoS One 6:e20195. doi: 10.1371/journal.pone.0020195
• Watanabe H., Mizuguchi N., Mayfield D. L., Yoshitake Y. (2018). Corticospinal excitability during actual and imaginary motor tasks of varied difficulty. Neuroscience 391, 81–90. doi: 10.1016/j.neuroscience.2018.08.011
• Wawrzyniak M., Klingbeil J., Zeller D., Saur D., Classen J. (2018). The neuronal network involved in self-attribution of an artificial hand: a lesion network-symptom-mapping study. Neuroimage 166, 317–324. doi: 10.1016/j.neuroimage.2017.11.011
• Wen W., Muramatsu K., Hamasaki S., An Q., Yamakawa H., Tamura Y., et al. (2016). Goal-directed movement enhances body representation updating. Front. Hum. Neurosci. 10:329. doi: 10.3389/fnhum.2016.00329
• Wenk N., Penalver-Andres J., Buetler K. A., Nef T., Müri R. M., Marchal-Crespo L. (2021). Effect of immersive visualization technologies on cognitive load, motivation, usability, and embodiment. Virtual Real. doi: 10.1007/s10055-021-00565-8
  • Wolf S. L., Butler A. J., Alberts J. L., Kim M. W. (2005). Contemporary linkages between EMG, kinetics and stroke rehabilitation. J. Electromyogr. Kinesiol. 15 229–239. 10.1016/j.jelekin.2005.01.002 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Yahagi S., Ni Z., Takahashi M., Takeda Y., Tsuji T., Kasai T. (2003). Excitability changes of motor evoked potentials dependent on muscle properties and contraction modes. Motor Control 7 328–345. [ PubMed ] [ Google Scholar ]
  • Zeller D., Gross C., Bartsch A., Johansen-Berg H., Classen J. (2011). Ventral premotor cortex may be required for dynamic changes in the feeling of limb ownership: a lesion study . J. Neurosci . 31 , 4852–4857. 10.1523/JNEUROSCI.5154-10.2011 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zeller D., Litvak V., Friston K. J., Classen J. (2015). Sensory processing and the rubber hand illusion—an evoked potentials study. J. Cogn. Neurosci. 27 573–582. 10.1162/jocn_a_00705 [ PubMed ] [ CrossRef ] [ Google Scholar ]

MIT.nano’s Immersion Lab opens for researchers and students

The MIT.nano Immersion Lab, MIT’s first open-access facility for augmented and virtual reality (AR/VR) and interacting with data, is now open and available to MIT students, faculty, researchers, and external users.

The powerful set of capabilities is located on the third floor of MIT.nano in a two-story space resembling a black-box theater. The Immersion Lab contains embedded systems and individual equipment and platforms, as well as data capacity to support new modes of teaching and applications such as creating and experiencing immersive environments, human motion capture, 3D scanning for digital assets, 360-degree modeling of spaces, interactive computation and visualization, and interfacing of physical and digital worlds in real time.

“Give the MIT community a unique set of tools and their relentless curiosity and penchant for experimentation is bound to create striking new paradigms and open new intellectual vistas. They will probably also invent new tools along the way,” says Vladimir Bulović, the founding faculty director of MIT.nano and the Fariborz Maseeh Chair in Emerging Technology. “We are excited to see what happens when students, faculty, and researchers from different disciplines start to connect and collaborate in the Immersion Lab — activating its virtual realms.”

A major focus of the lab is to support data exploration, allowing scientists and engineers to analyze and visualize their research at the human scale with large, multidimensional views, enabling visual, haptic, and aural representations. “The facility offers a new and much-needed laboratory to individuals and programs grappling with how to wield, shape, present, and interact with data in innovative ways,” says Brian W. Anthony, the associate director of MIT.nano and faculty lead for the Immersion Lab.

Massive data is one output of MIT.nano, as the workflow of a typical scientific measurement system within the facility requires iterative acquisition, visualization, interpretation, and data analysis. The Immersion Lab will accelerate the data-centric work of MIT.nano researchers, but also of others who step into its space, driven by their pursuits of science, engineering, art, entertainment, and education.

Tools and capabilities

The Immersion Lab not only assembles a variety of advanced hardware and software tools, but is also an instrument in and of itself, says Anthony. The two-story cube, measuring approximately 28 feet on each side, is outfitted with an embedded OptiTrack system that enables precise motion capture via real-time active or passive 3D tracking of objects, as well as full-body motion analysis with the associated software.
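To make the tracking concrete, the following is a minimal sketch of the geometry an optical motion-capture system solves on every frame: recovering a rigid body’s rotation and translation from a handful of tracked markers. The marker layout, the simulated frame, and the function are invented for illustration; this is a plain Kabsch fit in Python, not the OptiTrack software itself.

```python
# Illustrative only: estimating a rigid body's rotation R and translation t
# from tracked marker positions, in the spirit of what optical motion-capture
# software does internally. All data here is made up; this is not the
# OptiTrack/NatNet API.
import numpy as np

def rigid_body_pose(ref_markers: np.ndarray, live_markers: np.ndarray):
    """Kabsch algorithm: best-fit R, t mapping the reference marker layout
    onto the live capture (both N x 3 arrays of 3D points)."""
    ref_c = ref_markers.mean(axis=0)
    live_c = live_markers.mean(axis=0)
    H = (ref_markers - ref_c).T @ (live_markers - live_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = live_c - R @ ref_c
    return R, t

# Three markers rigidly attached to an object, plus one simulated frame
# where the object has rotated 30 degrees about z and translated.
ref = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.2, 0.0]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
live = ref @ R_true.T + np.array([0.5, 0.2, 1.0])

R, t = rigid_body_pose(ref, live)
print(np.round(t, 3))   # recovered translation: [0.5 0.2 1. ]
```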

Complementing the built-in systems are stand-alone instruments that study the data, analyze and model the physical world, and generate new, immersive content, including:

  • a Matterport Pro2 photogrammetric camera to generate 3D, geographically and dimensionally accurate reconstructions of spaces (Matterport can also be used for augmented reality creation and tagging, virtual reality walkthroughs, and 3D models of the built environment);
  • a Lenscloud system that uses 126 cameras and custom software to produce high-volume, 360-degree photogrammetric scans of human bodies or human-scale objects;
  • software and hardware tools for content generation and editing, such as 360-degree cameras, 3D animation software, and green screens;
  • backpack computers and VR headsets to allow researchers to test and interact with their digital assets in virtual spaces, untethered from a stationary desktop computer; and
  • hardware and software to visualize complex and multidimensional datasets, including HP Z8 data science workstations and Dell Alienware gaming workstations.

Like MIT.nano’s fabrication and characterization facilities, the Immersion Lab is open to researchers from any department, lab, and center at MIT. Expert research staff are available to assist users.

Support for research, courses, and seminars

Anthony says the Immersion Lab is already supporting cross-disciplinary research at MIT, working with multiple MIT groups for diverse uses — quantitative geometry measurements of physical prototypes for advanced manufacturing, motion analysis of humans for health and wellness uses, creation of animated characters for arts and theater production, virtual tours of physical spaces, and visualization of fluid and heat flow for architectural design, to name a few.

The MIT.nano Immersion Lab Gaming Program is a four-year research collaboration between MIT.nano and video game development company NCSOFT that seeks to chart the future of how people interact with the world and each other via hardware and software innovations in gaming technologies. In the program’s first two calls for proposals in 2019 and 2020, 12 projects from five different departments were awarded $1.5M of combined research funding. The collaborative proposal selection process by MIT.nano and NCSOFT ensures the awarded projects are developing industrially impactful advancements, and that MIT researchers are exposed to technical practitioners at NCSOFT.

The Immersion Lab also partners with the Clinical Research Center (CRC) at the MIT Institute for Medical Engineering and Science to generate a human-centric environment in which to study health and wellness. Through this partnership, the CRC has provided sensors, equipment, and expertise to capture physiological measurements of a human body while immersed in the physical or virtual realm of the Immersion Lab.

Undergraduate students can use the Immersion Lab through sponsored Undergraduate Research Opportunities Program (UROP) projects. Recent UROP work includes jumping as a new form of locomotion in virtual reality and analyzing human muscle lines using motion capture software. Starting with MIT’s 2021 Independent Activities Period, the Immersion Lab will also offer workshops, short courses, and for-credit classes in the MIT curriculum.

Members of the MIT community and general public can learn more about the various application areas supported by the Immersion Lab through a new seminar series, Immersed, beginning in February. This monthly event will feature expert talks on the fields in which the lab is currently active, highlighting future goals for its immersive technologies. Slated topical areas include motion in sports, uses for photogrammetry, rehabilitation and prosthetics, and music/performing arts.

New ways of teaching and learning

Virtual reality makes it possible for instructors to bring students to environments that are hard to access, either geographically or at scale. New modalities for introducing the language of gaming into education allow for students to discover concepts for themselves.

As a recent example, William Oliver, associate professor in electrical engineering and computer science, is developing Qubit Arcade to teach core principles of quantum computing via a virtual reality demonstration. Users can create Bloch spheres, control qubit states, measure results, and compose quantum circuits in an intuitive 3D representation with virtualized quantum gates.
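The mapping a tool like this has to implement is standard quantum mechanics rather than anything proprietary: a single-qubit state a|0⟩ + b|1⟩ corresponds to a point on the unit (Bloch) sphere. Below is a small Python sketch of that textbook conversion; it is not Qubit Arcade’s actual code.

```python
# The Bloch-sphere mapping: a normalized single-qubit state a|0> + b|1>
# becomes the point (x, y, z) = (<sigma_x>, <sigma_y>, <sigma_z>) on the
# unit sphere. Pure textbook math, shown here for illustration.
import numpy as np

def bloch_vector(a: complex, b: complex) -> np.ndarray:
    norm = np.sqrt(abs(a)**2 + abs(b)**2)
    a, b = a / norm, b / norm                 # normalize the state
    x = 2 * (np.conj(a) * b).real             # <sigma_x>
    y = 2 * (np.conj(a) * b).imag             # <sigma_y>
    z = abs(a)**2 - abs(b)**2                 # <sigma_z>
    return np.array([x, y, z])

print(bloch_vector(1, 0))                         # |0> -> north pole [0 0 1]
print(bloch_vector(1/np.sqrt(2), 1/np.sqrt(2)))   # |+> -> equator [1 0 0]
```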

IMES Director Elazer Edelman, the Edward J. Poitras Professor in Medical Engineering and Science, is using the Immersion Lab as a teaching tool for interacting with 3D models of the heart. With the 3D and 4D visualization tools of the Lab, Edelman and his students can see in detail the evolution of congenital heart failure models, something his students could previously only study if they happened upon a case in a cadaver.

“Software engineers understand how to implement concepts in a digital environment. Artists understand how light interacts with materials and how to draw the eye to a particular feature through contrast and composition. Musicians and composers understand how the human ear responds to sound. Dancers and animators understand human motion. Teachers know how to explain concepts and challenge their students. Hardware engineers know how to manipulate materials and matter to build new physical functionality. All of these fields have something to contribute to the problems we are tackling in the Immersion Lab,” says Anthony.

A faculty advisory board has been established to help the MIT.nano Immersion Lab identify opportunities enabled by the current tools and those that should be explored with additional software and hardware capabilities. The lab’s advisory board currently comprises seven MIT faculty from six departments. Such broad faculty engagement ensures that the Immersion Lab engages in projects across many disciplines and launches new directions of cross-disciplinary discoveries.

Visit nanousers.mit.edu/immersion-lab to learn more.


We created a VR tool to test brain function. It could one day help diagnose dementia

Research Theme Fellow in Health and Wellbeing, Western Sydney University

Senior Lecturer in Psychology, Western Sydney University

Disclosure statement

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Western Sydney University provides funding as a member of The Conversation AU.

If you or a loved one have noticed changes in your memory or thinking as you’ve grown older, this could reflect typical changes that occur with ageing. In some cases though, it might suggest something more, such as the onset of dementia.

The best thing to do if you have concerns is to make an appointment with your GP, who will probably run some tests. Assessment is important because if there is something more going on, early diagnosis can enable prompt access to the right interventions , supports and care.

But current methods of dementia screening have limitations , and testing can be daunting for patients.

Our research suggests virtual reality (VR) could be a useful cognitive screening tool, and mitigate some of the challenges associated with current testing methods, opening up the possibility it may one day play a role in dementia diagnosis.

Where current testing is falling short

If someone is worried about their memory and thinking, their GP might ask them to complete a series of quick tasks that check things like the ability to follow simple instructions, basic arithmetic, memory and orientation.

These sorts of screening tools are really good at confirming cognitive problems that may already be very apparent. But commonly used screening tests are not always so good at detecting early and more subtle difficulties with memory and thinking, meaning such changes could be missed until they get worse.

A clinical neuropsychological assessment is better equipped to detect early changes. This involves a comprehensive review of a patient’s personal and medical history, and detailed assessment of cognitive functions, including attention, language, memory, executive functioning, mood factors and more. However, this can be costly and the testing can take several hours.

Testing is also somewhat removed from everyday experience, not directly tapping into activities of daily living.

Enter virtual reality

VR technology uses computer-generated environments to create immersive experiences that feel like real life. While VR is often used for entertainment, it has increasingly found applications in health care, including in rehabilitation and falls prevention.

Using VR for cognitive screening is still a new area. VR-based cognitive tests generally create a scenario such as shopping at a supermarket or driving around a city to ascertain how a person would perform in these situations.

Notably, they engage various senses and cognitive processes such as sight, sound and spatial awareness in immersive ways. All this may reveal subtle impairments which can be missed by standard methods.

VR assessments are also often more engaging and enjoyable, potentially reducing anxiety for those who may feel uneasy in traditional testing environments, and improving compliance compared to standard assessments.

Most studies of VR-based cognitive tests have explored their capacity to pick up impairments in spatial memory (the ability to remember where something is located and how to get there), and the results have been promising.

Given that VR’s potential for assisting with the diagnosis of cognitive impairment and dementia remains largely untapped, our team developed an online computerised game (a form of semi-immersive VR) to see how well a person can remember, recall and complete everyday tasks. In our VR game, which lasts about 20 minutes, the user role-plays a waiter in a cafe and receives a score on their performance.
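As a flavour of what such a score could capture, here is a hypothetical sketch of one plausible scoring element: comparing the items a player delivers against the order they were asked to remember. The item names and the scoring rule are invented for illustration and are not the actual code behind our game.

```python
# Hypothetical recall scoring for a "waiter" memory task: compare what the
# player delivered against the customer's order. Items and weights invented.
ordered = {"flat white", "banana bread", "orange juice"}
delivered = {"flat white", "orange juice", "croissant"}

correct = ordered & delivered       # items remembered correctly
missed = ordered - delivered        # forgotten items
intrusions = delivered - ordered    # items that were never ordered

recall_score = len(correct) / len(ordered)   # 0.67 in this example
print(f"recall: {recall_score:.2f}, missed: {missed}, intrusions: {intrusions}")
```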

To assess its potential, we enlisted more than 140 people to play the game and provide feedback. The results of this research are published across three recent papers.

Testing our VR tool

In our most recently published study, we wanted to verify the accuracy and sensitivity of our VR game to assess cognitive abilities.

We compared our test to an existing screening tool (called the TICS-M) in more than 130 adults. We found our VR task was able to capture meaningful aspects of cognitive function, including recalling food items and spatial memory.
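For readers curious what a comparison like this involves computationally, the sketch below shows a generic convergent-validity check: correlating per-participant scores on a new task with scores on an established screen. The data are simulated and the analysis choices are assumptions; the real analyses are reported in the linked papers.

```python
# Illustrative convergent-validity check: correlate per-participant scores
# on a new screening task with an established one. Simulated data only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
tics_m = rng.normal(32, 4, size=130)                   # simulated TICS-M scores
vr_score = 0.8 * tics_m + rng.normal(0, 3, size=130)   # correlated VR game scores

rho, p = spearmanr(vr_score, tics_m)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
```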

We also found younger adults performed better in the game than older adults, which echoes the pattern commonly seen in regular memory tests.

In a separate study, we followed ten adults aged over 65 while they completed the game, and interviewed them afterwards. We wanted to understand how this group – whom the tool would target – perceived the task.

These seniors told us they found the game user-friendly and believed it was a promising tool for screening memory. They described the game as engaging and immersive, expressing enthusiasm to continue playing. They didn’t find the task created anxiety.

For a third study, we spoke to seven health-care professionals about the tool. Overall they gave positive feedback, and noted its dynamic approach to age-old diagnostic challenges.

However, they did flag some concerns and potential barriers to implementing this sort of tool. These included resource constraints in clinical practice (such as time and space to carry out the assessment) and whether it would be accessible for people with limited technological skills. There was also some scepticism about whether the tool would be an accurate method to assist with dementia diagnosis.

While our initial research suggests this tool could be a promising way to assess cognitive performance, this is not the same as diagnosing dementia. To improve the test’s ability to accurately detect those who likely have dementia, we’ll need to make it more specific for that purpose, and carry out further research to validate its effectiveness.

We’ll be conducting more testing of the game soon. Anyone interested in giving it a go to help with our research can register on our team’s website.


Virtual-reality concerts could redefine the live-music experience — but it'll take more time

  • VR concerts have emerged as a creative outlet for artists such as Travis Scott and Teflon Sega.
  • Companies like Meta, AmazeVR, and Wave are leaning into the tech, creating hyperrealistic VR performances.
  • This article is part of "Build IT," a series about digital tech and innovation trends that are disrupting industries.

Working in an industry that's constantly evolving is no small feat for musicians. With changes in trends and music-sharing mediums, artists must keep an ear to the ground to build profitable careers.

Artists' most lucrative sources of revenue are tour and merchandise sales. In a previous statement to Business Insider, Live Nation said : "Artists make more money from touring than any other piece of their business including recorded, streaming, and more."

Giving fans a memorable show is crucial, and some artists are looking to virtual reality to do just that. Part of it has to do with the pandemic; shutdowns in 2020 pushed musicians to use different channels to engage with their fans. Travis Scott, for example, partnered with "Fortnite" that year to bring his virtual-reality concert, "Astronomical," to a global audience. It was successful: Forbes reported that the event drew in 12 million viewers and raked in about $20 million, including from merchandise sales.

But interest in immersive concerts precedes the pandemic. In 2019, the entertainment-technology company Wave launched a multichannel virtual platform for live concerts that could reach fans through various digital means: video games, livestreaming channels, social networks, and more. Wave has since collaborated with the likes of The Weeknd, Justin Bieber, John Legend, Calvin Harris, and Tinashe.

The enigmatic singer Teflon Sega, who performs as a virtual-reality avatar, also worked with Wave to put on a virtual concert in 2022. The artist told BI that he embraced virtual reality — and his digital metaverse persona — after he was dropped by a record label and needed a space to freely share his music.

"A full live performance as an avatar was always something I was excited about but couldn't wrap my head around how to execute," he said. "When Wave showed interest in teaming up for a concert series, it was a no-brainer."

He added: "The market for immersive concerts and live experiences in VR will be a massive shift in the paradigm of performance."

Why artists and fans are interested in VR concerts

The appeal of VR concerts lies heavily in their experimental nature. Artists can try new ways to express themselves creatively and connect with listeners in digital environments. For fans, skirting painful concert-ticket prices, avoiding large crowds at venues, and experiencing their favorite musicians in an alternate world are convincing perks.

Meta is one of the tech giants that has capitalized on these benefits. In November, Meta Quest launched its Music Valley concert series. The ongoing series has featured virtual performances by artists such as Doja Cat, Blackpink, Victoria Monét, Jack Harlow, and Jorja Smith.

Sarah Malkin, the director of metaverse entertainment content at Meta, told BI that the virtual concerts served both creative and strategic purposes. "Often our VR shows are timed with key artist moments like tour launches or new music drops, helping performers reach their fans across multiple touchpoints and present their music or tour experiences in totally unique ways," she said.

AmazeVR, an immersive-concert platform that launched in 2015, is another leader in the VR-entertainment space and has partnered with the likes of Sony Music and Roc Nation.

One of its most notable collaborations was with Megan Thee Stallion — the star of AmazeVR's first VR-concert tour. Over three months, the artist's VR concert was available to watch at select theaters in different cities. Attendees who purchased tickets went to a designated theater where they were given VR headsets to view the performance.

Steve Lee, the company's CEO and cofounder, told BI that the debut tour was successful, selling out in 15 US cities and allowing consumers to "enjoy the magic of seeing their favorite artist in one of the most realistic forms — proving the power of VR."

How VR-concert tech works

One of the ways Meta captures VR concerts is by using purpose-built cameras to film artists at their real-life shows.

"We determine the optimal placement of our specialized VR cameras in collaboration with the artists and their teams, integrating our capture system into their shows to provide the best possible viewing experience," Malkin said.

Meta and the Diamond Bros, a creative-production company, used this approach to film a VR version of Doja Cat's sold-out Detroit show from her tour last year.

Jason Diamond, the director and executive producer of the Diamond Bros, met with the artist's team to map out filming. They decided to use a stereoscopic technique, which employs a camera with two lenses that create a 3D effect. Diamond said the triangular stage was "perfect for VR" as it allowed them to give a full view of everything.
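The depth cue a two-lens rig captures follows from simple geometry: a feature matched across the left and right views is offset by a pixel disparity, and distance falls out by similar triangles. The numbers below are invented for illustration and say nothing about the Diamond Bros' actual rig.

```python
# Why two lenses give depth: matching a feature across the left and right
# views yields a pixel disparity d, and depth follows from similar
# triangles as Z = f * B / d. All numbers are invented for illustration.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to a point (metres) from stereo disparity."""
    return focal_px * baseline_m / disparity_px

# A feature offset by 40 px between lenses spaced 65 mm (roughly human
# interpupillary distance), with a 1400 px focal length:
print(depth_from_disparity(focal_px=1400, baseline_m=0.065, disparity_px=40))  # ~2.3 m
```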

At AmazeVR, Lee and his team are developing proprietary technologies to elevate VR music experiences. The company is using visual-effect modules and a custom renderer (used to generate an image from a 2D or 3D model) powered by artificial intelligence and Unreal Engine, a real-time 3D-creation tool. With the help of a top engineering team, the company's technologies allow it to capture immersive concert footage in 8K resolution, creating sharp visuals.

"These technologies empower us to produce hyperrealistic VR concerts that make attendees feel as though they are face-to-face with the artists," Lee said.
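A back-of-envelope calculation hints at why 8K capture demands custom tooling. Assuming uncompressed 7680 × 4320 RGB frames at 60 fps (an upper bound, since real pipelines compress heavily):

```python
# Back-of-envelope: what "8K" means for a capture/render pipeline.
# Assumes 7680x4320 frames, 3 bytes per pixel (uncompressed RGB), 60 fps;
# these are illustrative assumptions, not AmazeVR's actual settings.
width, height, bytes_per_px, fps = 7680, 4320, 3, 60

frame_bytes = width * height * bytes_per_px   # ~100 MB per frame
rate_gb_s = frame_bytes * fps / 1e9           # ~6 GB/s before compression
print(f"{frame_bytes / 1e6:.0f} MB/frame, {rate_gb_s:.1f} GB/s uncompressed")
```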

The downside of VR

VR has its kryptonite. Only about 13% of households in the US own a VR headset, so the technology isn't ubiquitous. In a survey by Arris Composites, 67% of respondents who didn't own a VR headset said its hefty price tag kept them from purchasing one, and 33% said they didn't think they would actually use it.

Some people who paid a pretty penny for an Apple Vision Pro have returned the headset because they simply didn't know how to set it up or didn't think the image quality was worth the cost.

Discomfort is another issue: Prolonged headset use could cause neck fatigue and eye strain, and according to the Food and Drug Administration, motion sickness is "the most commonly reported negative side effect" of VR. There's also the possibility of VR distancing users too far from reality, which could rewire their brains if they don't moderate headset use.

Where could VR go from here?

Looking ahead, Amy Dorsey, the managing director of Dorsey Pictures, the company that produces Meta's Red Rocks Live in VR concert series, told BI that developments in extended reality would usher in "the next iteration of entertainment."

Malkin said she and her Meta team believe those advancements would involve "greater real-time interactivity," including live avatar engagement between artists and their fans.

Immersive VR Tours Promote Inclusive Recreation

DELTA Grant Enables Parks, Recreation and Tourism Management Students to Create Interactive Videos

The physical, mental and emotional health benefits of being in nature are well-documented and almost universally recognized – but not everyone’s experience of outdoor recreation is the same. Also, not all people have equal access to opportunities and time in nature, or feel safe and welcomed in public outdoor locations.

Enter Virtual Reality (VR) technology, which creates an immersive and interactive digital environment using 360-degree video, audio and other sensory inputs through the use of a headset and motion controllers. 

Department of Parks, Recreation and Tourism Management Assistant Teaching Professor Nathan Williams, a longtime advocate of inclusive outdoor recreation, embraces VR in his courses. He sees VR as a way to help his students better understand their environments and the sociocultural factors impacting the experience of outdoor recreation for everyone.

VR allows users to enjoy the natural world or other digital environments from a safe, comfortable and navigable environment of their choosing. Also, for those with physical or cognitive disabilities or mobility impairment, VR provides opportunities to interact with nature without barriers to entry, such as cost, transportation or personal identity.

And the best part? Williams’ students not only consume VR experiences in the classroom, they create their own. 

An Experiential Outdoor Classroom

With support from two consecutive DELTA Exploratory Course Grants, Williams partnered with a team of DELTA specialists to address a unique instructional challenge: teaching students how to be VR content creators while exploring questions of accessibility and inclusion in outdoor recreation.

Students in PRT 152: Introduction to Parks, Recreation, Tourism, and Event Management collaboratively created VR recreation experiences, later published in a publicly available interactive VR map of North Carolina recreation spots.

The resulting map, NC: State of Recreation, divides the state into its existing geographic regions: mountain, Piedmont and coastal. Users select a region of interest to explore recreational areas from mountain hikes to sandy beaches. Many of the locations have personal significance for the students, adding to their enthusiasm for creating the experience to share with the community.

“While there are a number of research studies on the potential benefit of VR experiences for wellness, there is very little practical application of these technologies on college campuses and in the broader world,” Williams said. “The development of this project will set an example of application of virtual recreation content that can be a model for other universities and communities.”

In sending his students out into nature to conduct their coursework, Williams has effectively created an experiential outdoor classroom in which students can explore, examine and create in collaboration with one another.   

“As students are increasingly less engaged in outdoor activities while encountering challenges with mental health and wellness,” Williams explained, “integrating this technology will create opportunities for them to explore destinations virtually that they can later visit in-person on their own to access these wellness benefits.” 

Using DELTA Expertise to Improve Student Success

While exploring the question of how immersive VR experiences can positively impact peoples’ real-world behaviors and interactions with outdoor recreation, this project focused on and met four guiding student learning outcomes: 

  • Encourage students to explore recreation activities in North Carolina
  • Encourage students to see VR experiences as a form of recreation
  • Support student-created VR/360 recreational experiences
  • Create a platform for posting and browsing student-created experiences

During the first year of the grant, Williams and the DELTA team identified appropriate hardware and software, outlining best practices for student engagement with VR technology. The second year emphasized the process of student content creation for the interactive map. 

Supporting student content production in VR was a new endeavor for the DELTA team, and the process was not without its challenges — such as the number of students with no prior experience using VR technology or its corresponding software. 

After reviewing platforms for creating VR content, the team decided to use Wonda VR, an affordable, efficient option. Then, the DELTA production team focused on creating a clear, concise process for students to follow as they executed and scaled up their projects.

“We asked ourselves, what would a finished project look like?” explained DELTA Digital Learning Team Lead Arthur Earnest. “What camera would the students use? How would they handle video clips? How many new programs would they need to learn?”

Early iterations of the map experimented with various graphics, icons, background images and color palettes before implementing the visually relaxing, easy-to-navigate and consistent design of the final product. 

Check out the interactive map, NC: State of Recreation.

Then it was time to capture some video using the Insta360, a dual-lens, 360-degree camera that allows users to record hands-free, first-person point-of-view videos. The Insta360 is available for checkout to NC State faculty and students from the VR Studio in the D.H. Hill Jr. Library.

To simulate what the experience would be like for students new to the technology, and create plausible examples from which they could learn, Earnest had the enviable task of traversing North Carolina — from the mountains around Boone to the beaches of Ocean Isle — as he filmed himself enjoying nature.  

“It became ingrained for me that every time I traveled to a new place, I thought I should be filming this for the class,” he said. 

Supporting Student-created Content Production

Students began in the classroom by engaging with two VR experiences using DELTA’s cutting-edge PICO NEO-3 VR headsets. Following a step-by-step presentation and video tutorial of how to use the equipment, they quickly learned to navigate the headsets and the Insta360 camera while DELTA staff stood by — ready to troubleshoot. 

Students were then turned loose to record five (or more) 60-second 360-degree videos, with or without themselves in the shot, along with voiceover narration in a self-view video describing the location and its significance to them. The narration would become a box-in-picture for the panoramic video. 
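For anyone wanting to reproduce that composite outside Wonda VR, a rough offline equivalent can be scripted around ffmpeg's overlay filter, as sketched below. The filenames, overlay size and encoder settings are placeholders, not the course's actual workflow.

```python
# Hypothetical offline equivalent of the students' "box-in-picture" step:
# scale the self-view narration clip and overlay it in a corner of the
# equirectangular 360-degree footage. Filenames are placeholders; in the
# course this compositing happened inside Wonda VR, not a script.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "pano_360.mp4",            # 360-degree panoramic video
    "-i", "narration_selfie.mp4",    # student's self-view narration
    "-filter_complex",
    "[1:v]scale=640:-1[pip];[0:v][pip]overlay=W-w-40:H-h-40[vout]",
    "-map", "[vout]",                # composited video
    "-map", "1:a",                   # keep the narration audio
    "-c:v", "libx264", "-crf", "18",
    "pano_with_narration.mp4",
], check=True)
```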

Although many students stayed close to campus, creating experiences near Lake Raleigh, the Rose Garden and Court of the Carolinas, others traveled as far as Blowing Rock State Park and the coastal plains near Wilmington. 

Partnering with the VR Studio in the D.H. Hill Jr. Library, Williams and members of the DELTA team hosted an open VR workshop mid-semester, helping with hardware and software questions that would inform the next iteration of the course. 

At the end of each semester, Williams can move selected student work to a publicly accessible space — the virtual map. This curated collection is then showcased at the D.H. Hill Jr. Library Visualization Studio on the wraparound screen for any interested members of the NC State community. Other faculty members who may be considering incorporating VR technology into their classrooms and want to see its practical course application are welcome in this space. 

“Watching students go from being intimidated by VR to being excited about making content for it was such a cool thing to see happen,” said Lead Interaction Designer Ben Huckaby. “And with multiple modes of interacting with the content (web, headset, the Visualization Lab), students were able to engage with the VR experience and with each other in a meaningful way.”

Williams plans to bring the completed project to local public high schools to encourage outdoor recreation while highlighting NC State’s technological capabilities and opportunities in the Parks, Recreation and Tourism Management department. 

Meanwhile, the DELTA team continues to refine the process for using VR technology in the classroom and strategize the scalability of the project for other courses at NC State. 

“The goal was to create an easily reproducible process that can be used in any course,” said Instructional Designer and Project Lead Caitlin McKeown.

At the Vanguard of Extended Reality

DELTA began using VR to develop immersive learning experiences for students more than a decade ago by investing in hardware, software and staff to leverage newly available 360-degree video capture systems. In 2013, the first two of many DELTA Grants were awarded for the creation of virtual classroom field trips. 

Augmented Reality (AR) occurs when a headset or phone superimposes virtual objects in a physical space, while VR uses a headset or goggles to immerse the user in a virtual environment. Extended reality (XR) is an umbrella term to refer to AR, VR and mixed reality (MR) .

Since then, DELTA has partnered with dozens of NC State faculty members on projects to improve student learning outcomes using XR. For example, students in the College of Veterinary Medicine explored a remote African village, students in the Prestage Department of Poultry Science built their own feed mills and students studying organic chemistry conducted remote lab experiments using a first-person point of view to simulate being in the lab.

“DELTA is very interested in the use of XR for education and has used 360 video, both browser-based and for VR, in previous DELTA Grants to great success,” explained Huckaby. “I am looking forward to seeing how this project will motivate and influence other faculty to explore XR creation as part of their courses in the future and how DELTA can support them in these efforts.”

For these DELTA Exploratory Grants, Williams served as Principal Investigator (PI), along with the following team of DELTA specialists:

  • Caitlin McKeown, Lead Instructional Designer (Project Lead)
  • Mike Cuales, Director of Digital Media Innovation
  • Arthur Earnest, Instructional Media Producer
  • Grant Eaton, Multimedia Designer
  • Ben Huckaby, Lead Interaction Developer/Designer
  • Dan Spencer, Research Scholar
  • Chris Willis, Associate Director, Research and Analysis

DELTA Grants enhance the digital learning environment at NC State by leveraging the latest technologies, applying research-based standards of excellence and exploring advancements in instructional design. Discover how to partner with DELTA to develop new instructional technologies for your course with DELTA Grants.

