IEEE 2017-2018 Project Titles on Virtual Reality

This study presented a Leap Motion somatosensory-controlled switch module. The switches were implemented with relays, and the open or closed state of each switching circuit was controlled by the sensing of the Leap Motion somatosensory module. The virtual switches on the screen were designed as five circular buttons. When the sensed hand virtually touched a circular button, a program written in the Processing language sent an instruction code to the Arduino MEGA module, which applied a high or low signal to the transistor switches. In this way, a four-channel Leap Motion somatosensory-controlled switching module was implemented. To test the module, bulbs were connected to the switching outputs; consequently, the LED modules could be turned on or off by touching the virtual buttons on the screen.
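The touch-to-switch pipeline described above can be sketched in a few lines. This is a hypothetical illustration of the virtual-button hit test and the desktop-to-Arduino command protocol; the class names, the single-character `H`/`L` command format, and the channel numbering are assumptions, not the paper's actual code.

```python
# Hypothetical sketch: hit-test the tracked fingertip against circular
# on-screen buttons and emit the serial command the Arduino would receive.
from dataclasses import dataclass
import math

@dataclass
class CircleButton:
    cx: float            # button centre x (screen pixels)
    cy: float            # button centre y
    r: float             # button radius
    channel: int         # relay channel driven by this button
    state: bool = False  # False = relay open ("dark"), True = closed ("light")

def touch(buttons, x, y):
    """Toggle any button whose circle contains the fingertip and return
    the serial commands to send, e.g. 'H2' = drive channel 2 high."""
    commands = []
    for b in buttons:
        if math.hypot(x - b.cx, y - b.cy) <= b.r:
            b.state = not b.state
            commands.append(("H" if b.state else "L") + str(b.channel))
    return commands

buttons = [CircleButton(100 + 120 * i, 240, 40, channel=i) for i in range(4)]
print(touch(buttons, 105, 245))  # fingertip inside button 0 -> ['H0']
print(touch(buttons, 105, 245))  # touch again -> ['L0'] (toggle off)
```

On the Arduino side, a matching loop would parse each command and call the corresponding `digitalWrite` on the transistor-switch pin.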
When traveling virtually through large scenes, long distances and varying detail densities render fixed movement speeds impractical. However, adjusting the travel speed manually requires users to control an additional parameter, which can be uncomfortable and demands cognitive effort. Although automatic speed-adjustment techniques exist, many of them are problematic in indoor scenes. We therefore propose to adjust travel speed automatically based on viewpoint quality, originally a measure of the informativeness of a viewpoint. In a user study, we show that our technique is easy to use, allowing users to reach targets faster and expend fewer cognitive resources than when choosing their speed manually.
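The core idea of coupling speed to viewpoint quality can be sketched as a simple mapping. This is a minimal illustration under assumptions: the quality score is taken to be normalized to [0, 1], and the linear blend between a minimum and maximum speed is a placeholder for the paper's actual adaptation function.

```python
# Minimal sketch: map a normalized viewpoint-quality score to a travel
# speed, so the user flies fast through uninformative (e.g. empty) views
# and slows down where there is much to see.
def travel_speed(quality, v_min=0.5, v_max=20.0):
    """quality: 0 = uninformative view, 1 = highly informative view.
    Returns a speed in scene units per second (assumed range)."""
    q = min(max(quality, 0.0), 1.0)        # clamp to [0, 1]
    return v_max - (v_max - v_min) * q     # high quality -> slow travel

print(travel_speed(0.0))  # empty corridor -> 20.0 (fast)
print(travel_speed(1.0))  # detail-rich room -> 0.5 (slow)
```

The key design property is that the user never touches a speed control: the single quality input drives the whole adjustment.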
Autism Spectrum Disorder (ASD) is a highly prevalent neurodevelopmental disorder with enormous individual and social cost. In this paper, a novel virtual reality (VR)-based driving system is introduced to teach driving skills to adolescents with ASD. The driving system is capable of gathering eye gaze, electroencephalography, and peripheral physiology data in addition to driving performance data. The objective of this paper is to fuse multimodal information to measure cognitive load during driving so that driving tasks can be individualized for optimal skill learning. Individualization of ASD intervention is an important criterion due to the spectrum nature of the disorder. Twenty adolescents with ASD participated in our study, and the data collected were used for systematic feature extraction and classification of cognitive load based on five well-known machine learning methods. Subsequently, three information fusion schemes were explored: feature-level fusion, decision-level fusion, and hybrid fusion. Results indicate that multimodal information fusion can be used to measure cognitive load with high accuracy. Such a mechanism is essential since it will allow individualization of driving skill training based on cognitive load, which will facilitate acceptance of this driving system for clinical use and eventual commercialization.
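The first two fusion schemes contrasted above can be illustrated in miniature. This sketch is only an illustration of the general idea: the toy feature vectors and the majority vote stand in for the study's actual per-modality features and five machine-learning classifiers.

```python
# Illustrative sketch of the two basic fusion schemes (assumed toy data).
def feature_level_fusion(gaze_feats, eeg_feats, physio_feats):
    """Feature-level fusion: concatenate the per-modality feature
    vectors into one vector before training a single classifier."""
    return gaze_feats + eeg_feats + physio_feats

def decision_level_fusion(decisions):
    """Decision-level fusion: each modality's classifier votes for a
    cognitive-load label ('high'/'low'); the majority label wins."""
    return max(set(decisions), key=decisions.count)

fused = feature_level_fusion([0.2, 0.9], [0.4], [0.7, 0.1])
print(fused)                                           # [0.2, 0.9, 0.4, 0.7, 0.1]
print(decision_level_fusion(["high", "low", "high"]))  # high
```

Hybrid fusion, the third scheme, combines both: some modalities are merged at the feature level, and those merged classifiers then vote at the decision level.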
This paper presents the remote control of a mobile robot via the internet. To address the problem of unpredictable delay time, a direct teleoperation architecture is proposed. This architecture allows us to minimize the trajectory error of the robot movement controlled over the internet. This work also demonstrates the use of virtual reality in the context of remote control. Virtual reality can be used in a conventional manner to simulate the behavior of a system, but also in parallel with the real system to improve the quality of control. To validate our work, we conducted teleoperation experiments in various places. Experimental results show the effectiveness of the proposed architecture.
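One common building block for coping with unpredictable network delay, sketched here only as an illustration and not as the paper's architecture, is to timestamp every command and discard any that arrive too late, so the robot never executes a stale trajectory point.

```python
# Hedged sketch: drop teleoperation commands older than a freshness
# threshold. The 0.2 s threshold and tuple format are assumptions.
def filter_stale(commands, now, max_age=0.2):
    """commands: list of (timestamp, command) tuples, timestamps in
    seconds; keep only commands younger than max_age at time `now`."""
    return [cmd for ts, cmd in commands if now - ts <= max_age]

inbox = [(9.70, "turn_left"), (9.95, "forward"), (10.0, "stop")]
print(filter_stale(inbox, now=10.0))  # ['forward', 'stop']
```

Dropping stale commands trades completeness for trajectory accuracy: the robot follows only commands that still reflect the operator's current intent.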
Virtual reality has been widely explored to immerse users in environments other than those considered to be their surrounding realities. We discuss the possibility of immersion not in another environment but in another person's body. The power of the body swap illusion opens up a great deal of possibilities and applications in several areas, such as neuroscience, psychology, and education. For this experiment, we used a low-budget system that reproduces a person's head movements as if one's own head were in another body viewed through a head-mounted display (HMD) while having body agency, i.e., controlling the movements of another real body as if it were a "real avatar". In this pilot study, we describe the tool in detail and discuss its feasibility and preliminary results based on the analysis of the participants' perceptions, collected through validated questionnaires and in-depth interviews. We observed that the system does promote higher levels of realism and involvement ("presence") compared with an immersion experience without body agency. Moreover, spontaneous declarations by the participants also showed how impactful this experience may be. Future applications of the tool are discussed.
Flight simulators with a physical mock-up are dependent on the aircraft type and have high costs. To overcome the cost issue, a generic virtual reality flight simulator is designed. Virtual buttons are used without a physical mock-up to make the virtual reality flight simulator independent of the aircraft type. The classic virtual hand metaphor is employed to interact with the virtual objects. This paper examines the virtual hand-button interaction in the generic virtual reality flight simulator where no haptic feedback is provided. The effect of the collision volume of a virtual button during the virtual hand-button interaction is determined. It is concluded that a change in the collision volume within aircraft design limits does not have a significant impact on the interaction. We also investigate different virtual hand avatars. We find that the accuracy of hand-button interaction depends on the hand avatar rather than the collision volume: representing a smaller part of the hand avatar results in less efficient interaction. This shows that the size and shape of hand avatars play a major role in virtual reality simulator design. This finding contributes to the various virtual reality applications that exploit the virtual hand metaphor.
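The hand-button collision test whose volume the study varies can be sketched geometrically. Modelling the fingertip as a sphere and the button's collision volume as an axis-aligned box is an assumption made for illustration; the simulator's actual collision shapes are not specified here.

```python
# Geometric sketch: does a fingertip sphere intersect a button's
# axis-aligned collision box? Clamp the sphere centre to the box and
# compare the squared distance to the squared radius.
def sphere_aabb_collides(center, radius, box_min, box_max):
    d2 = 0.0
    for c, lo, hi in zip(center, box_min, box_max):
        nearest = min(max(c, lo), hi)  # closest point of the box, per axis
        d2 += (c - nearest) ** 2
    return d2 <= radius ** 2

# Fingertip (1 cm radius) approaching a 4 cm-wide, 1 cm-deep button plate:
print(sphere_aabb_collides((0.0, 0.0, 0.05), 0.01,
                           (-0.02, -0.02, 0.0), (0.02, 0.02, 0.01)))   # False
print(sphere_aabb_collides((0.0, 0.0, 0.015), 0.01,
                           (-0.02, -0.02, 0.0), (0.02, 0.02, 0.01)))   # True
```

Enlarging or shrinking `box_min`/`box_max` is the kind of collision-volume change the study tests; the finding is that, within design limits, this matters less than which part of the hand avatar is rendered.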