CECD  

Virtual Training Studio
Main Participants: Satyandra K. Gupta, D.K. Anand, J.E. Brough, M. Schwartz, and A. Thakur

Sponsors: This project is sponsored by the Naval Surface Warfare Center at Indian Head, Maryland, the Center for Energetic Concepts Development at the University of Maryland, and the National Science Foundation.

Keywords: Virtual Environments, Virtual Reality, Training, Assembly Instruction Generation, and Assembly Planning


Motivation

The workforce in most industries requires continual training and retraining. Current training methods, for the most part, involve a combination of paper-based manuals, DVD/video-based instructions, and hands-on master-apprentice training. Due to the rapid influx of new and changing technologies and their associated complexities, accelerated training is a necessity in order to maintain an advanced and educated workforce. We believe that existing training methods can be further improved in terms of cost, effectiveness, time expenditure, and quality through the use of digital technologies such as virtual environments (VE). The advent of personal virtual environments offers many new possibilities for creating accelerated training technologies.

Our exploratory studies indicated that people prefer to utilize the virtual environment differently for training purposes, depending on the task at hand and their individual training styles. We found that sometimes it is useful to get 3D visual cues from 3D animation and sometimes it is useful to see images of real parts. Sometimes practicing assembly tasks in the virtual environment helps facilitate training and aids in transferring that knowledge to real life. To meet these requirements, we developed a system that supports three different training modes. Developing these training modes and providing the ability to seamlessly switch between them required us to develop several new features.

The virtual environment-based training system we have developed is called the Virtual Training Studio (VTS). The VTS aims to improve existing training methods through a VE-based multimedia training infrastructure that allows users to learn using different modes of instruction presentation, focusing mainly on the cognitive aspects of training rather than on highly realistic physics-based simulation. The VTS is composed of three modules: Virtual Workspace, Virtual Author, and Virtual Mentor. Virtual Workspace provides the underlying VE multi-modal infrastructure; it serves as the platform on which the other two modules run and integrates the hardware and software into a cohesive package. Virtual Author allows non-programmers to quickly create new tutorials. Virtual Mentor, running on top of the Virtual Workspace, checks for user errors, assists users in the training process, and provides additional details to further clarify the required action.


Main Results and Their Anticipated Impact

Overview of the Virtual Training Studio: The VTS was designed to be an affordable Personal Virtual Environment (PVE) for training. We developed a low-cost wand design and use an off-the-shelf head-mounted display (HMD). Both the level of physics-based modelling that has been implemented and the hardware selected reflect this design decision.

The user interacts with the tutorial using the HMD and a wireless wand. Four optical trackers (infrared cameras) and two gyroscopes track the position and orientation of the user and the wand. The wand consists of an off-the-shelf wireless presenter, an infrared LED, and a wireless gyroscope. Inside the virtual reality environment, the user manipulates parts and buttons using a virtual laser pointer controlled by the wireless wand. A wireless gyroscope and another infrared LED are mounted on the HMD. The cameras track the two LEDs and use triangulation to recover their x, y, and z positions. Haptics and gloves were avoided in order to keep the cost of the system down. After user testing with a glove-based version of the system that used two 5DT Data Gloves, we decided to build a wand-based system because of the complications of the glove-based user interface and the simplicity and user-friendliness of the wand interface. The glove-based interface, when integrated with our system, forced users to memorize gestures and caused excessive arm and body movement. These problems could have been overcome with a more expensive glove, but we decided against that to reduce the system cost.
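The triangulation step mentioned above can be sketched as follows. This is a minimal illustration, not the VTS code: it assumes each calibrated camera yields a ray toward an LED (camera position plus unit viewing direction), and it estimates the LED position as the midpoint of the shortest segment connecting the two rays.

```python
# Sketch of two-ray optical triangulation (illustrative, not the VTS code).
# Each ray is given as a point p and a unit direction d.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def triangulate(p1, d1, p2, d2):
    """Estimate the 3D point seen along rays p1 + t*d1 and p2 + s*d2."""
    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b              # near zero when the rays are parallel
    t = (b * e - c * d) / denom        # parameter along ray 1
    s = (a * e - b * d) / denom        # parameter along ray 2
    q1 = [p + t * di for p, di in zip(p1, d1)]   # closest point on ray 1
    q2 = [p + s * di for p, di in zip(p2, d2)]   # closest point on ray 2
    return tuple((u + v) / 2 for u, v in zip(q1, q2))
```

With noisy measurements the two rays rarely intersect exactly, which is why the midpoint of the common perpendicular is a natural estimate.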

The software infrastructure of the VTS was built using a combination of programming languages: C/C++, Python, and OpenGL. Additionally, a number of libraries were used: WorldViz's Vizard for general-purpose loading and transformation of VRML models, ColDet for collision detection, the GNU Triangulated Surface Library (GTS) for segmentation, and wxPython for the graphical user interface.

Virtual Workspace: Virtual Workspace houses the framework for manipulating objects, detecting collisions, and executing animations, and it integrates the hardware with the software to give the user an intuitive, easy-to-use interface to the virtual environment. Virtual Workspace also acts as the platform for the Virtual Author and the Virtual Mentor. A major new feature of the Virtual Workspace is the dynamic generation of animations. The current version places the user in a furnished room with a table at the center and a projector screen on one of the walls. Parts used in the tutorial are placed on the table, while video and text instructions are displayed on the projector screen. The user interacts with the VE using a single wand, represented in the VE as a virtual laser pointer, to pick up, move, and rotate objects and to click buttons on the control panel at the front of the room. The implementation also includes the option of interacting with the VE through a desktop personal computer (PC) interface. Virtual Workspace offers three primary training modes: 3D Animation Mode, which lets users view the entire assembly via animations; Interactive Simulation Mode, a fully user-driven mode in which users manually perform the assembly tasks; and Video Mode, which lets users view the entire assembly via video clips. Trainees can switch between these modes at any time with the click of a button.
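One way the three training modes and seamless switching might be organized is sketched below. This is a hypothetical illustration, with invented class and attribute names; the point is that session state (here, the current step index) survives a mode switch, so a trainee can resume the same step in a different mode.

```python
from enum import Enum

# Hypothetical sketch of mode switching in a training session;
# names are invented for illustration, not taken from the VTS code.

class Mode(Enum):
    ANIMATION_3D = "3D Animation"
    INTERACTIVE_SIMULATION = "Interactive Simulation"
    VIDEO = "Video"

class TrainingSession:
    def __init__(self):
        self.mode = Mode.ANIMATION_3D
        self.step = 0                   # index of the current assembly step

    def switch_mode(self, mode):
        self.mode = mode                # step is intentionally preserved

    def next_step(self):
        self.step += 1
```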

Virtual Author: The goal of the Virtual Author is to enable an instructor to quickly create multimedia training instructions for use in the Virtual Workspace without writing any code. The Virtual Author package includes a Pro/ENGINEER (ProE) plug-in, written with the ProE Toolkit, which allows an engineer to load an assembly into ProE and export it to the file formats used in the VTS: VRML and STL. We chose the VRML and STL formats to ensure that the VTS can work with a wide variety of CAD systems.

The instructor begins the authoring process by loading a set of VRML and STL CAD models into a tool called Part Loader, where the instructor declares a tutorial-specific dictionary for Virtual Author. The dictionary is created by giving names to parts and selected features. The instructor also uses the tool to specify the initial arrangement of the CAD models on the virtual table. At the end of the dictionary declaration process, the tool generates a data file that is loaded into Virtual Author at startup. The instructor then steps into the virtual environment and performs a virtual demonstration, which the system monitors and records.
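The kind of data file Part Loader could generate might look like the sketch below. Every field name here is invented for illustration, not the actual VTS format; the idea is that each CAD model is paired with the human-readable part and feature names used later in generated text instructions, plus an initial pose on the virtual table.

```python
import json

# Hypothetical tutorial dictionary; field names are illustrative only.
part_dictionary = {
    "tutorial": "rocket_motor",
    "parts": [
        {
            "model": "casing.wrl",                # exported VRML model
            "name": "motor casing",               # name used in instructions
            "features": {"f1": "threaded end", "f2": "nozzle seat"},
            "table_pose": {"position_m": [0.2, 0.0, 0.5],
                           "rotation_deg": [0, 90, 0]},
        }
    ],
}

# Serialized form of the data file Virtual Author would load at startup.
data_file = json.dumps(part_dictionary, indent=2)
```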

During the virtual demonstration, the instructor picks up one part or subassembly with a single virtual laser pointer and inserts it into another part or subassembly. Hence, there is always a moving subassembly and a receiving subassembly that remains stationary. After the instructor carries out the assembly for a particular step, the Virtual Author performs motion smoothening by calculating the final assembly path, calculating the insertion point, and more precisely realigning the held assembly with the receiving assembly. Motion smoothening is necessary because the system does not prevent one object from passing through another upon collision, and because highly precise placement and alignment of parts may not be possible inside the virtual environment. Preventing the parts from intersecting at all during the motion would have required computationally expensive constraint management techniques that could slow down the scene refresh rate, so we allow CAD models to intersect with each other during virtual demonstrations. Most such intersections are eliminated from the training instructions by the motion smoothening technique, which lets Virtual Author cope with the minor placement and orientation errors that result from the lack of non-penetration constraints and haptic feedback.
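The final realignment in motion smoothening can be sketched as below, under the simplifying assumption that the extracted alignment constraint is a single insertion axis: the instructor's slightly misplaced final part position is projected onto that axis, removing the small offset left by the demonstration. The function name and the single-axis assumption are illustrative.

```python
# Illustrative sketch: snap a recorded final position onto an insertion axis.

def snap_to_axis(final_pos, axis_point, axis_dir):
    """Project final_pos onto the line axis_point + t * axis_dir (unit axis_dir)."""
    rel = [f - a for f, a in zip(final_pos, axis_point)]
    t = sum(r * d for r, d in zip(rel, axis_dir))    # signed distance along the axis
    return [a + t * d for a, d in zip(axis_point, axis_dir)]
```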

For each step demonstrated in the virtual environment, the instructor also declares part symmetries and specifies the symmetry types. The part symmetry information is later used by Virtual Workspace to allow trainees to assemble parts using alternate insertion locations and orientations. For each step, highly detailed text instructions are generated automatically by combining data about collision detection, part motion, and alignment constraints with the dictionary declared by the instructor. Text instructions enable trainees to refresh their memories of the assembly process on the shop floor, where VE installations are not available. Automated generation of text instructions reduces generation time and ensures that no step is missing from the text instructions. In addition, Virtual Author automatically generates data for dynamic animations and interactive simulation for later use in Virtual Workspace.
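A hedged sketch of how n-fold rotational symmetry might be honored when validating a trainee's part orientation: a rotation about the symmetry axis is acceptable if it matches the demonstrated angle modulo 360/n degrees. The function name and tolerance are illustrative, not the actual VTS check.

```python
# Illustrative symmetry-aware orientation check (not the VTS code).

def orientation_ok(user_deg, target_deg, n_fold, tol_deg=10.0):
    """True if user_deg matches target_deg up to an n-fold symmetry rotation."""
    step = 360.0 / n_fold                    # angular period of the symmetry
    diff = (user_deg - target_deg) % step
    return min(diff, step - diff) <= tol_deg
```

For a 4-fold symmetric part, for example, an orientation of 92 degrees is accepted against a target of 0 degrees, since 92 is within tolerance of the 90-degree symmetry position.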

During the final phase, the instructor also has the option of loading video clips (.avi files) and audio (.wav files) and associating them with each step. Both the motion smoothening techniques and the automatic generation of text from motion depend heavily on the extraction of alignment constraints from polygonal models. We developed a simple, heuristics-based method for extracting planar and cylindrical surfaces, and their characteristics, from triangulated polyhedral models.
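A simplified illustration of the kind of heuristic described here is to recover planar faces of a triangulated model by grouping triangles whose normals and plane offsets agree. Cylindrical-face detection and the thresholds in the real system are more involved; this sketch, with invented names, only shows the planar case.

```python
import math
from collections import defaultdict

def tri_normal(a, b, c):
    """Unit normal of triangle (a, b, c) from the cross product of two edges."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    mag = math.sqrt(sum(x * x for x in n)) or 1.0
    return [x / mag for x in n]

def planar_groups(triangles, ndigits=3):
    """Group triangles lying in the same plane (rounded normal + offset)."""
    groups = defaultdict(list)
    for tri in triangles:
        n = tri_normal(*tri)
        offset = sum(n[i] * tri[0][i] for i in range(3))   # plane offset n . p
        key = (tuple(round(x, ndigits) for x in n), round(offset, ndigits))
        groups[key].append(tri)
    return list(groups.values())
```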

Virtual Mentor: The goal of the Virtual Mentor is to simulate the classical master-apprentice training model by monitoring the actions of the user in the Virtual Workspace and assisting the user at appropriate times to enhance the trainee’s understanding of the assembly/disassembly process. If users make repeated errors, then the system will attempt to clarify instructions by adaptively changing the level of detail and inserting targeted training sessions. The instruction level of detail will be changed by regulating the detail of text/audio instructions and regulating the detail level of visual aids such as arrows, highlights, and animations. The current version of the Virtual Mentor performs the following tasks:

  • Error detection and presentation of very specific error messages.
  • Handling symmetries during interactive simulation.
  • Extensive logging.
  • Testing.

In the most interactive mode, Interactive Simulation, the user first positions and orients a part so that the interfaces align and the components can be assembled, and then clicks a 'Complete' button. If the part is positioned and oriented correctly near the insertion marker, allowing for a certain margin of error, the assembly of the part is completed via animation. If the orientation or position of the part is incorrect, an error message is shown and the user must realign the part before the assembly can be completed. In this manual mode, Virtual Mentor must check for alternate orientations and insertion positions based on the symmetries that were specified in the Virtual Author.
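The 'Complete'-button check described above can be sketched as follows; the tolerances are invented examples, not the actual VTS values, and the function assumes the orientation error has already been reduced modulo any declared symmetries.

```python
import math

# Illustrative pose-acceptance check (tolerances are made-up examples).

def pose_acceptable(pos, target_pos, orient_err_deg, pos_tol=0.02, ang_tol=15.0):
    """Accept if within pos_tol (meters) of the marker and ang_tol (degrees)."""
    return math.dist(pos, target_pos) <= pos_tol and orient_err_deg <= ang_tol
```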

The extensive logging that the Virtual Mentor currently performs is the first step toward an adaptive Virtual Mentor that adjusts the level of detail and provides dynamic, performance-based hints. Currently, log analysis and the adaptation or annotation of ambiguous instructions are performed manually. Ongoing work aims to achieve a higher level of automation in this area.

Case Study: We conducted a detailed user study involving 30 subjects and two tutorials to assess the performance of our system. The subjects were drawn from three groups: ten undergraduate engineering students, ten graduate engineering students, and ten working engineers. The purpose of the study was to gather a large amount of data from each user and mine it to better understand how people train in the VTS. Also of interest were which features and training modes were used most, how long people trained, and how users responded in pre- and post-training questionnaires.

The main findings of the study were as follows:

  • During the first study, involving a rocket motor, 94.4% of the steps were performed correctly by the users in the physical demonstration after completing the training. During the second study, involving a model airplane engine, 97.3% of the steps were performed correctly in the physical demonstration after training. None of the users tested in these two studies had previously assembled a rocket motor or a model airplane engine similar to the ones used in these experiments. These results clearly show that our system can be used successfully for training assembly operations.
  • Users show different preferences for training modes based on the task at hand and individual training styles (i.e. different people chose to train differently on the same task).
  • All three main training modes were used during the studies. 94% of the subjects used interactive simulation while 81% used the 3D animation mode and 18% used the video mode. (Note: the percentages will not sum to 100% because subjects were allowed to use more than one training method.)
  • Certain task characteristics make certain training modes more popular. Video mode and hints were, on average, used more often on steps where the geometry or the part orientation was considered complex. Interactive simulation and 3D animation were very close to a 50–50 distribution, but still slightly favored the more complex steps.
  • All three training modes work satisfactorily during user studies and users are able to successfully learn using them. The use of the three modes varied little between the tutorials. Had there been significant problems with any one mode, we believe trainees would have used it much less in the second tutorial.
  • Users are able to seamlessly switch back and forth between training modes and utilize multiple training modes on the same task. Two of the most popular training paths were completing the assembly in 3D animation and then trying to complete it again using the interactive simulation mode (3D-IS) and alternating between 3D animation and interactive simulation for each step (3D/IS).
  • Novel features that have been implemented to support training modes and switching back and forth between modes have satisfactory computational performance for assembly tasks requiring ten steps or less. Training time was virtually unaffected by the fraction of time it takes to switch between modes or to check for errors during interactive simulation.
  • Our implementation of a VR-based training system matches or exceeds users' expectations in most cases. Pre- and post-training questionnaires allowed us to capture users' likes and dislikes of the VTS. When asked to rank-order seven training methods used for manufacturing processes before being exposed to the VTS, users gave the VR-based method an average rank of 3. After being allowed to train in the VTS, that average rank improved to 2, just behind master-apprentice style training.
  • Subjects who are not prone to VR-induced motion sickness are, on average, able to learn 10-step assembly sequences in training sessions of 17.4 minutes. These training sessions did not have any adverse effect on the subjects.
  • The wand-based interface is an effective user interface for tasks where the primary objective is cognitive learning as opposed to motor skill development. This interface is significantly less expensive than a haptics type of interface.
  • The average training time for a simple step was 90 s while the average training time for a complex step was 116 s, a 29% increase.
  • As people get more experienced (i.e. move from the first tutorial to the second tutorial) with our system they tend to utilize the wand rotation feature more often. This is primarily due to gaining familiarity with the VTS and the wand rotation feature. This results in an average per step time reduction from 108 s per step on the rocket tutorial to 101 s on the airplane engine tutorial, a reduction of 6.5%.

Related Publications

The following papers provide more details on the above-described results.

  • J.E. Brough, M. Schwartz, S.K. Gupta, D.K. Anand, R. Kavetsky, and R. Pettersen. Towards development of a virtual environment-based training system for mechanical assembly operations. Accepted for publication in Virtual Reality.
  • M. Schwartz, S.K. Gupta, D.K. Anand, J.E. Brough, and R. Kavetsky. Using virtual demonstrations for creating multi-media training instructions. Accepted for publication in the CAD Conference, Hawaii, June 2007.
  • J.E. Brough, M. Schwartz, S.K. Gupta, D.K. Anand, C.F. Clark, R. Pettersen, and C. Yeager. Virtual Training Studio: A step towards virtual environment assisted training. IEEE Virtual Manufacturing Workshop, Alexandria, Virginia, March 2006.



 Contact:

Dr. Satyandra K. Gupta
Phone:  301.405.4311
Email:  skgupta@umd.edu

Website: Dr. S. K. Gupta

 

 
