Effective interventions for increasing people’s intention to get vaccinated are crucial for global health, especially considering COVID-19. We devised a novel intervention using virtual reality (VR) consisting of a consultation with a general practitioner for communicating the benefits of COVID-19 vaccination and, in turn, increasing the intention to get vaccinated against COVID-19. We conducted a preregistered online experiment with a 2×2 between-participant design. People with eligible VR headsets were invited to install our experimental application and complete the ten-minute virtual consultation study at their own discretion.

We study two approaches for predicting an appropriate pose for a robot to take part in group formations typical of social human conversations, subject to the physical layout of the surrounding environment. One method is model-based and explicitly encodes key geometric aspects of conversational formations. The other is data-driven: it implicitly models key properties of spatial arrangements using graph neural networks and an adversarial training regimen. We evaluate the proposed approaches through quantitative metrics designed for this problem domain and via a human experiment. Our results suggest that the proposed methods are effective at reasoning about the environment layout and conversational group formations. Although designed to output a single pose at a time, they can also be applied repeatedly to simulate conversational spatial arrangements. However, the methods showed different strengths: the geometric approach was more successful at avoiding poses generated in non-free areas of the environment, whereas the data-driven method was better at capturing the variability of conversational spatial formations. We discuss ways to address open challenges for the pose generation problem and other interesting avenues for future work.

Modeling visual attention is an important aspect of simulating realistic virtual humans. This work proposes a parametric model and method for generating real-time saliency maps from the perspective of virtual agents, approximating those of vision-based saliency approaches. The model aggregates a saliency score from user-defined parameters for objects and characters in an agent’s view and uses it to output a 2D saliency map, which can be modulated by an attention field to incorporate 3D information as well as a character’s state of attentiveness. The aggregate and parameterized structure of the method allows the user to model a range of diverse agents, and the user may also expand the model with additional layers and parameters. The proposed method can be combined with normative and pathological models of the human visual field and with gaze controllers, such as the recently proposed model of egocentric distractions for casual pedestrians that we use in our results. This is an extended version of a short paper published in MIG 2021. The extension includes an optimization approach that fits the parameters of the proposed model to established saliency models such as SALICON, using a much larger and more realistic urban test set.

Previous research involving healthy participants has reported that seeing a moving virtual body from the first-person perspective induces the illusion of ownership and agency over that virtual body. When a person is sitting and the virtual body runs, it is possible to measure physiological, behavioral, and cognitive reactions comparable to those that occur during actual movement. Capitalizing on this evidence, we hypothesized that virtual training could also induce neuroendocrine effects that prompt a decreased psychosocial stress response, as occurs after physical training. While sitting, 26 healthy young adults watched a virtual avatar running for 30 min from the first-person perspective (experimental group), while another 26 participants watched the virtual body from the third-person perspective (control group). We found a decreased salivary alpha-amylase concentration (a biomarker of the stress response) after the virtual training in the experimental group only, as well as a decreased subjective feeling of state anxiety (but no difference in heart rate). We argue that the virtual illusion of a moving body from the first-person perspective can initiate a cascade of events, from the perception of the visual illusion to physiological activation that triggers other biological effects, such as the neuroendocrine stress response.
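The saliency-map abstract above describes a concrete data flow: user-defined saliency scores for objects in the agent's view are aggregated into a 2D map, which is then modulated by an attention field and the agent's attentiveness. A minimal toy sketch of that idea is given below; all names, the Gaussian form of the attention field, and the normalization step are our own illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def saliency_map(objects, shape=(64, 64), attentiveness=1.0, sigma=12.0):
    """Toy saliency map for a virtual agent's view.

    objects: list of dicts with "pos" (x, y pixel position in the view)
             and "weight" (user-defined saliency parameter).
    attentiveness: scales a Gaussian attention field centred on the view
                   (a hypothetical stand-in for the paper's attention field).
    """
    h, w = shape
    smap = np.zeros(shape)
    # Aggregate per-object, user-defined saliency scores into the 2D map.
    for obj in objects:
        x, y = obj["pos"]
        smap[y, x] += obj["weight"]
    # Modulate by an attention field that peaks at the view centre and
    # flattens out as attentiveness approaches zero.
    yy, xx = np.mgrid[0:h, 0:w]
    field = np.exp(-((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (2 * sigma ** 2))
    smap *= 1.0 + attentiveness * field
    # Normalize to [0, 1] for display or comparison with other maps.
    return smap / smap.max() if smap.max() > 0 else smap

# Example: two equally weighted objects; the one near the view centre
# ends up more salient after attention-field modulation.
m = saliency_map([{"pos": (32, 32), "weight": 2.0},
                  {"pos": (5, 5), "weight": 2.0}])
```

A real implementation would rasterize object extents rather than single pixels and derive the attention field from gaze direction and 3D scene information, but the aggregate-then-modulate structure is the part the abstract describes.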