EIRP Proceedings, Vol 14, No 1 (2019)



Building Virtual Environments for Optimizing

Learning Processes inside the Modern Organization



Dragos Sebastian Cristea1, Luminița Arhip2, Marius Ivanov3, Cristina Chelariu4, Carmen-Catalina Rusu5



Abstract: This article presents different ways in which new virtual reality technologies can be used in the development of software applications dedicated to supporting organizational learning/training processes. Virtual reality can offer employees of enterprises in the Romanian economy access to an advanced training system by creating virtual environments that simulate real scenarios in which users have total freedom of movement and interaction with the constituent elements. Virtual reality addresses the senses and perception. The VR technologies presented in this article address not only basic senses such as vision or hearing, but also senses such as balance or intuition. The brain is thus assisted by the entire sensory system in receiving a rich flow of information that travels from the environment to the mind. Therefore, if artificially created information is presented to the senses, the way in which reality is perceived is also altered. In this context, we also present the current state of the virtual reality technologies that can be used to implement VR applications, as well as the essential principles applicable to the development of virtual reality applications. At the same time, the article shows how the use of virtual reality can support three important aspects of vocational training: work experience - users have the opportunity to be placed in unfamiliar workplaces with a 360-degree perspective; learning skills - virtual reality helps to find the balance between acquiring knowledge and building experience, allowing the same scenarios to be run several times without additional costs or inconveniences; and access to a different perspective - allowing the user to perform actions normally performed by experienced employees.

Keywords: virtual reality; organizational learning; virtual environments; refresh rate; VR glasses



1. Introduction

Although virtual reality has been discussed since the late 1980s, it gained major momentum in the early 2010s with the first versions of the Oculus Rift. Since then, various projects have been created and developed that have contributed to the growth of the VR domain. Virtual reality is by no means a new concept. The combination of words emerged before the 1950s, first through illustrations and texts describing an alternative reality and then through machines that mimicked the consumer's journey into unknown worlds. From the very beginning, VR was intended to immerse users in an alternative world, including them in it and allowing them to interact with the environment, giving them the feeling of being elsewhere while being able to move and make decisions in real time. In the 1970s there were attempts to build the "magic theater", with rather small success. The world of video games was the one that, in the late 1980s and early 1990s, gave a new impetus to this technology. Releases such as the Sega VR or Nintendo's Virtual Boy tried to bring the user into the game with headsets that were rudimentary compared to today's, but they enjoyed rather limited success. These, however, were just the beginning of what was to come. The world of entertainment continues to spur the creation of new technology that will change the way we enjoy content.

Virtual reality can be defined as a set of technologies and devices that, combined, are used to create an immersive simulation in a three-dimensional environment. The virtual environment is a replica of the real world and is achieved using three-dimensional cues, such as depth perception and sound, and tools such as controllers, to allow the user to interact with it. The user's movement is tracked using either a head-mounted device or external sensors. Virtual reality is used in a wide range of applications such as video games, engineering, education, psychological therapy, e-commerce, marketing, etc. In games, for example, virtual reality allows the player to interact with different parts of the virtual world. In engineering and education, mechanical modeling using CAD software allows engineers and students to develop and manipulate models in a way similar to physical objects (Neelakantam, 2017).

Designing good virtual reality implies a sound understanding of both perception and technology. It involves good communication between human and machine, indicating which interactions are possible, what is happening at the moment, and what might happen in the future. Human-centered virtual reality design is based on real-world observations; it does not rely solely on software/hardware and engineering considerations, but also on an understanding of human behavior and how our mind works. An ideal virtual reality allows the user to physically walk around objects in the virtual environment just as in the real world.

A number of major products arrived on the market in 2016, from companies like Oculus VR, Sony and Google. Since the acquisition of Oculus, Facebook has already bought 11 AR/VR companies, which indicates its intention to make VR and AR the next frontier. The large investments and acquisitions of these giants suggest that the technologies will become an integral part of the platforms that provide us with content. According to recent estimates by Goldman Sachs, the AR and VR markets will grow to $95 billion by 2025. The greatest demand for this type of technology comes from the creative industries - the gaming industry, live events, video entertainment and retail - but these technologies will find even wider applications in sectors such as health, education, the military and real estate.



2. Virtual Reality Technologies Review

The first category of virtual reality technologies is represented by PC-based systems. HTC Vive is the VR system created by HTC in collaboration with Valve. It attaches to a PC and works through Valve's well-known gaming platform, Steam. It is currently considered to be the best VR system on the market. The roughly 70 sensors that the Vive comes with provide 360-degree tracking of the headset, and its 90 Hz refresh rate reduces the delay between frames (decreasing the latency), a delay that can cause motion sickness. The commonly cited threshold for compelling VR is a latency at or below 20 milliseconds. When latency exceeds 60 milliseconds, the disjunction between one's head motions and the motions of the virtual world starts to feel out of sync, causing discomfort and disorientation. Fortunately, this is not a very common problem with the applications available for the HTC Vive. Users should consider not only the price, which is not very friendly (and does not include the PC), but also the generous space that the HTC Vive requires.
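To make the relation between refresh rate and latency concrete, the following is a minimal illustrative sketch (not part of any headset's software) that computes the per-frame time budget implied by a 90 Hz display and classifies a measured latency against the 20 ms and 60 ms thresholds quoted above:

```python
# Minimal illustrative sketch using the figures quoted in the text above:
# relate a display's refresh rate to the motion-to-photon latency budget.

def frame_time_ms(refresh_rate_hz: float) -> float:
    """Time available to render a single frame, in milliseconds."""
    return 1000.0 / refresh_rate_hz

def comfort_rating(latency_ms: float) -> str:
    """Classify latency against the 20 ms / 60 ms thresholds mentioned above."""
    if latency_ms <= 20.0:
        return "compelling"
    if latency_ms <= 60.0:
        return "tolerable"
    return "discomfort likely"

if __name__ == "__main__":
    print(round(frame_time_ms(90), 2))   # ~11.11 ms per frame at 90 Hz
    print(comfort_rating(18.0))          # compelling
    print(comfort_rating(75.0))          # discomfort likely
```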

The next important VR system is the Oculus Rift. Developed by Palmer Luckey, funded through Kickstarter and enthusiastically taken over by Facebook for a modest $2 billion, the Rift connects to the PC's DVI and USB ports and tracks the user's head movements to produce 3D images on its stereo screens. The consumer edition of the Rift uses a resolution of 2160x1200, working at 233 million pixels per second with a 90 Hz refresh rate. It does not give the user the same freedom of movement as the HTC Vive, yet its applications are better rated, as Oculus has its own application/game production studios that provide strong interactivity and VR user experience. Unlike the HTC Vive, the Oculus is not a VR system focused primarily on motion. Being connected to a PC, the Oculus has much more power at its disposal than the PlayStation VR. Oculus offers not only games but also varied applications such as Discovery VR, which allows the exploration of wrecks or other places around the world through 360-degree video. In fact, most applications are based on 360-degree photos, reminiscent of the Samsung Gear VR. The new FOVE 0 implements what is called "interactive eye tracking", differentiating itself from the Oculus Rift or PlayStation VR. An infrared sensor in the headset monitors the eyes of the wearer, providing a new means of control and a new level of realism. FOVE can simulate the depth of the visual field, thanks to a system that knows exactly what the user is looking at. The result is the one expected: the virtual seems more real. The FOVE 0 has a 5.7-inch 1440p display, a field of view above 100 degrees, a 90 fps frame rate and eye tracking measured at 120 fps.
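The 233-million-pixels-per-second figure follows directly from the quoted resolution and refresh rate; a short illustrative check:

```python
# Illustrative check of the pixel throughput quoted for the consumer Rift.
width, height, refresh_hz = 2160, 1200, 90
pixels_per_second = width * height * refresh_hz
print(pixels_per_second)  # 233280000, i.e. roughly 233 million pixels per second
```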

Figure 1. VR headsets

The second category is represented by console-based systems. The PlayStation VR is what most of the general public perceives to be "decent virtual reality." It is not a perfect VR system, being an accessory for the PS4, but its target audience is console owners. It is an accessible VR system, a reference point for mainstream virtual reality. Even if the technology does not compare to that of its competitors, the PlayStation VR remains an option to consider. The third category consists of mobile phone-based systems: the cheapest way to experience virtual reality is Google Cardboard.

Practically, the phone is inserted into a pair of cardboard glasses. The phone contains gyroscopic sensors and positioning systems that track the movements of the head to a decent level. Quality is lost in terms of lenses, processing and graphics, a problem shared by all VR systems for mobile phones. The new Samsung Gear VR does not look very different from its predecessor, but it has had some improvements that justify the investment. New vents reduce much of the lens-fogging phenomenon. The VR experience depends on the type of Samsung phone used and does not compare to the experience of headsets that connect to a PC. It is compatible with the Galaxy Note 7, S7, S7 Edge, Note 5, S6, S6 Edge and S6 Edge+.

One of the best VR platforms is Google's Daydream VR. It is currently only compatible with Google's Pixel phone, but it is likely to become compatible with other phones. Microsoft HoloLens brings together virtual reality and augmented reality, the headset combining real-world elements with virtual holographic images. Kinect-based technology is used to recognize gestures and voice commands. The headset has a visual field of around 34 degrees on both axes at very high definition, which is still low compared to the Vive or Oculus. The headset has its own embedded Windows 10 system and works on batteries, so there is no need for a PC connection. HoloLens 2, announced on February 24, 2019 at MWC in Barcelona, represents the next generation of mixed reality smart glasses developed and manufactured by Microsoft. These smart glasses represent the evolution of the pioneering Microsoft HoloLens and are more business oriented than their predecessor; Microsoft highlighted three main improvements of this device: immersiveness, ergonomics and business friendliness. HoloLens 2 has a diagonal field of view of 52 degrees, improving over the 34-degree field of view of the first edition of HoloLens, while keeping a resolution of 47 pixels per degree.

Figure 2. VR glasses and Google Cardboard

The direct competition for Microsoft HoloLens 2 is represented by a pair of AR glasses from the Magic Leap company. The Magic Leap One is equipped with an LCOS screen manufactured by OmniVision, offering a resolution of 1280 x 960. The Magic Leap One is built around an Nvidia Parker SoC, whose CPU includes two 64-bit Denver 2.0 cores and four 64-bit ARM Cortex-A57 cores. The GPU of the Magic Leap One is an Nvidia Pascal with 256 CUDA cores. The Magic Leap One comes with its own controller, which provides six degrees of freedom of movement without the need for additional external sensors. The Magic Leap One also offers eye-tracking and hand-tracking features; the latter allows the user to interact with their hands in a natural way.

Even though HoloLens is Microsoft's only MR device, its Windows 10 operating system is heavily used by VR headset producers. We can therefore present six Windows-based VR devices that are available to global consumers.

Samsung HMD Odyssey is probably the best Windows Mixed Reality (WMR) headset. It has two 3.5-inch AMOLED displays, each with a resolution of 1440 x 1600 pixels, and an immersive 110-degree field of view. The headset supports 360-degree spatial sound, made possible by built-in AKG headphones; the HMD Odyssey is the only Windows Mixed Reality headset to have integrated headphones. There is also a built-in microphone array, which can be used to communicate with other users. The headset has an IPD range of 60-72 mm and features inside-out position tracking. There are two cameras, each supporting six degrees of freedom (DOF) for improved motion-control accuracy. The included wireless controllers also feature six-degrees-of-freedom support, and the headset can be controlled via Xbox One controllers as well.

HP VR1000-127il represents another WMR headset solution. It has two 2.89-inch displays with a per-eye resolution of 1440 x 1440 pixels, a 95-degree field of view and two front-facing cameras for inside-out motion tracking, complete with six degrees of freedom (DOF). To connect to a Windows 10 PC, the HP VR1000-127il uses standard HDMI and USB ports.

Asus HC102 is a luxury device conceived to enhance user comfort. It has a fabric-like finish and quick-drying materials with antibacterial properties, while featuring adjustment mechanisms that make it easy to adjust the headset with one hand. It has two 2.89-inch displays, each with a resolution of 1440 x 1440 pixels. The combined display has a brightness of 100 nits and a 95-degree field of view. The HC102 uses two cameras for inside-out tracking, and the two included wireless controllers come with six-degrees-of-freedom (DOF) support. Its integrated sensors include an accelerometer, a gyroscope, a magnetometer and a proximity sensor.

Dell Visor has a wide 110-degree field of view, allowing the visualization of and interaction with an expanded virtual reality space (and all of its elements) at all times. Its dual display comprises two 2.89-inch LCD (RGB subpixel) panels, each with a resolution of 1440 x 1440 pixels and a pixel density of 706 ppi. With a refresh rate of 90 Hz, the panels use Fresnel lenses that deliver sharp focus and enhanced focal depth. The Dell Visor features inside-out tracking, and the bundled controllers come with six-degrees-of-freedom (DOF) support, allowing for seamless movement in VR environments.

Acer AH101-D8EY is probably the best-looking WMR headset on the market, featuring a visor done in a glossy shade of blue, with the rest of the headset in matte black. It has two 2.89-inch displays with a per-eye resolution of 1440 x 1440 pixels and a refresh rate of 90 Hz. The device features inside-out tracking using B+W VGA cameras. The visor can be flipped up easily, allowing a quick transition between the real and virtual worlds. The AH101-D8EY is fully compatible with all the Mixed Reality applications available from the Microsoft Store. It has a 100-degree field of view and a maximum IPD of 63 mm.

Lenovo Explorer weighs only 380 grams, making it the lightest Windows Mixed Reality headset. It has two 2.89-inch LCD panels, each with a resolution of 1440 x 1440 pixels. The display has a refresh rate of 90 Hz and a 110-degree field of view. Its sensors include a magnetometer, an accelerometer, a gyroscope and a proximity sensor. The Lenovo Explorer comes with two wireless controllers with six-degrees-of-freedom support. Apart from that, it can be handled via Xbox controllers, and even keyboard and mouse, and it also includes support for Cortana.

The main element of a VR system is the virtual reality glasses. When using such a headset, the user sees only the contents of a display. The headset is equipped with sensors (gyroscope, accelerometer) that detect the movements of the user's head, and the display shows the image in the direction the user looks, so as to create the sensation that the user is inside the scene. There are currently two types of virtual reality glasses on the market: the first type, with no built-in screen, requires a phone to provide the display, and the second type has a built-in screen. In the first category, devices are available in a wide range of prices and capabilities. In fact, anyone can make their own VR glasses at home using Google Cardboard, a model that can be printed to manufacture the device. Of course, the experience is not the same as with more sophisticated glasses, but Google Cardboard gives an idea of the VR experience. On the other hand, there are devices designed for the mobile phone that are usually similar to plastic glasses and include adjustable lenses and ergonomic accessories. With a decent screen, the experience is more than pleasant. To enjoy content, it is enough to search the app store (App Store or Google Play) for "VR", where there are many applications suitable for these devices. In the middle category, there are devices like the Samsung Gear VR, which includes some additional sensors to improve the experience but still requires a mobile phone (in this case, a Galaxy). Samsung has an agreement with Oculus, so the experience is enhanced by quality Oculus applications. Finally, we have VR headsets with built-in screens. The most recent of these was the PlayStation VR, released in October 2016. It connects to the PS4 to deliver unique video games and experiences through its OLED screen, with a 100-degree field of view and reduced latency. The most important device on the market today is the virtual reality product of the Taiwanese company HTC, the Vive. This is a system that incorporates two multi-function, battery-powered controllers that are visible in VR, two sensors that are installed in the room and are in charge of tracking the controllers and the headset, and the headset itself. The headset carries all the intelligence, has two OLED screens with a resolution of 1080 x 1200 pixels each and a 90 Hz refresh rate.
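As a rough illustration of how the sensor-based head tracking described above works, a simplified complementary filter can blend integrated gyroscope rates with the accelerometer's gravity reading. This sketch is purely illustrative and not the firmware of any particular headset; real devices also fuse magnetometer and camera data and track position as well as orientation:

```python
import math

# Simplified complementary filter for head orientation (pitch/roll only).
# Sensor inputs and the blending factor are hypothetical example values.

def complementary_filter(pitch_deg, roll_deg, gyro_rates_dps, accel_g, dt, alpha=0.98):
    """Blend integrated gyroscope rates with accelerometer tilt estimates."""
    gx, gy, _ = gyro_rates_dps          # angular rates in degrees per second
    ax, ay, az = accel_g                # accelerometer reading in g units

    # Integrate gyroscope rates (fast response, but drifts over time).
    pitch_gyro = pitch_deg + gx * dt
    roll_gyro = roll_deg + gy * dt

    # Tilt angles estimated from gravity (noisy, but drift-free).
    pitch_acc = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    roll_acc = math.degrees(math.atan2(-ax, az))

    # Weighted blend: mostly gyro, slowly corrected by the accelerometer.
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
    roll = alpha * roll_gyro + (1 - alpha) * roll_acc
    return pitch, roll

# Example: one frame at 90 Hz with the head pitching up at 30 degrees/second.
print(complementary_filter(0.0, 0.0, (30.0, 0.0, 0.0), (0.0, 0.0, 1.0), dt=1 / 90))
```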

In 2019, the most important VR companies proposed a new generation of VR headsets based on inside-out tracking technologies, where the environment sensors are located on the headset itself, bringing a completely new level of mobility when performing VR actions. Among the best headsets announced we can mention: a) Oculus Quest, a standalone headset that offers 6 degrees of freedom (3 axes of rotation and 3 axes of translation), powered by a computing architecture similar to the one found in the Samsung Galaxy S8; b) Oculus Rift S, a tethered headset that uses inside-out tracking; c) HTC Vive Cosmos, a PC-based headset with inside-out tracking that can connect to other devices, with new controllers and a 2880x1600 resolution; d) HTC Vive Focus Plus, an inside-out headset that supports six-degrees-of-freedom tracking, powered by a Qualcomm Snapdragon 835 and equipped with a 3K AMOLED display; e) HP Reverb, a headset outfitted with two 2.89-inch screens with a 2160x2160 resolution per eye (4320x2160 combined), a 90 Hz refresh rate, a 114-degree field of view and six-degrees-of-freedom (6DoF) positional tracking with no need for any external sensors - it has its own spatial audio headphones and two front-facing cameras to enable augmented reality applications; f) Valve Index, a headset that uses two custom 1440x1600 LCD panels with full RGB subpixels, where each pixel has three subpixels instead of just two. The refresh rate of the Index increases to 120 Hz, its pixel persistence is down to 0.33 ms (the lowest of any headset, which should virtually eliminate motion blur) and its maximum field of view is around 130 degrees.
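Pixel persistence can be read as the fraction of each frame during which the display is actually lit; a quick illustrative calculation with the Valve Index figures quoted above:

```python
# Illustrative duty-cycle calculation for a low-persistence display.
refresh_hz = 120
persistence_ms = 0.33
frame_ms = 1000 / refresh_hz             # ~8.33 ms per frame at 120 Hz
duty_cycle = persistence_ms / frame_ms   # ~0.04, i.e. lit for about 4% of each frame
print(round(frame_ms, 2), round(duty_cycle * 100, 1))
```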



3. Developing VR Environments for Optimizing Organizational Learning Processes. Study Case: PSI Application

The developed framework application, based on the use of interactive 3D concepts, is designed to train industrial personnel in the field of labor protection, fire prevention and firefighting. Virtual training, with the help of the developed application, has the advantage of being available to a practically unlimited number of trainees, safely and at low cost. The instructional method used aims to expose trained individuals to situations that may occur in the real work environment, with the advantage of offering the possibility of repeating and practising actions and procedures until the knowledge is learned and the skills required of employees in similar situations at work are acquired. The proposed scenario includes two work variants: assisted and free. In the assisted demo, the learner is shown how to use the specific virtual reality devices, together with theoretical rules and notions related to concrete fire protection and firefighting procedures. The learner is then instructed to go through the steps required to stop the fire, receives feedback, and can only complete the application if the required steps are performed correctly. The free option leaves the learner to act in VR as if exposed to the real situation at the workplace. He can make mistakes, has unlimited time, the application runs to the end, and he finally receives a score reflecting the time taken and the extent to which he achieved the goal. The chosen field, dealing with procedures related to labor protection and firefighting, is applicable to all industrial sectors, and the developed application can be used in personnel training regardless of the activity carried out. Learning in the virtual environment is well suited to the training of hired personnel.
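The contrast between the two variants can be summarised as a small configuration object. The sketch below is illustrative only; the type name, field names and values are hypothetical and do not reflect the application's actual code (the two-minute limit is taken from the assisted-demo message shown later in Table 2):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical configuration for the two PSI training variants described above.
@dataclass
class ScenarioConfig:
    name: str
    show_hints: bool             # written messages and step-by-step guidance
    block_on_errors: bool        # assisted mode only completes when steps are correct
    time_limit_s: Optional[int]  # None means unlimited time
    scored: bool                 # free mode ends with a score (time + goal achieved)

ASSISTED = ScenarioConfig("assisted demo", show_hints=True,
                          block_on_errors=True, time_limit_s=120, scored=False)
FREE = ScenarioConfig("free demo", show_hints=False,
                      block_on_errors=False, time_limit_s=None, scored=True)
```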

Adults learn best when they have a concrete purpose, are focused on problem solving, and carry out practical actions themselves. Virtual reality provides such a learning environment: it increases motivation, maintains user interest and yields results at least as good as conventional methods. The PSI application begins with the user entering a space called the holodeck. In this space, the user is presented with the content of the experience he/she is about to have, namely the three segments that can be accessed according to his/her interest. The three segments can be accessed in any order and unconditionally. A user who is experiencing VR for the first time has the opportunity to access the VR Tutorial, which explains all the commands he will need. A user who wants to go through the free VR experience can do so without having to go through the assisted demo.

Figure 3. Choosing VR experience type

In the VR tutorial, the user learns how to teleport, manipulate and interact with objects. The following table describes the different interactions the user can have in the VR environment.

Table 1. Tools for VR environment interactions

Written message | What the user sees
Welcome! For starters, be aware of the use of navigation aids. | (screenshot)
The clue controller must be in the right hand. Check now! | (screenshot)
How do you move? It's simple. Use the Teleport button. Try it! | (screenshot)
In this application you will need to pick up objects. You will do this by using the Trigger. Pick up the extinguisher near you. | (screenshot)
Observe the components of a fire extinguisher. | Texts in the print screen: Discharge lever, Discharge locking pin and seal, Discharge hose, Discharge nozzle, Discharge orifice, Pressure gauge, Carrying handle, Data plate, Body
Remove the pin with your right hand. | At this point, the user removes the extinguisher's pin.
Press the Trigger to release the fire extinguisher. | At this point, the user presses the button and the white jet exits the extinguisher.
You want to drop the extinguisher? Press here: | (screenshot)
Here's how an extinguisher works in real life. | Hint of animation: https://www.youtube.com/watch?v=w4jHpHoYZhk from 1:14; https://www.youtube.com/watch?v=f4FEirH8kCE from 0:40 to 0:58; https://www.

In the demo-assisted scenario, the user cannot fail or be burned (the fire stays constant or is extinguished, but the scene does not escalate as it can in the unguided simulation). The user goes through the following sequence of activities: 1) the entrance is made via a walkway; 2) when entering the workshop, the user sees the fire (which has already started) - to enhance the sensation, a sound is used to attract the user's attention; the user then needs to find the right fire extinguisher and use it by directing it correctly. Once in the workshop, the following situations can occur: 3) the user sees the fire already produced; 4) there are 3 extinguishers on the wall, and above each fire extinguisher there are metal tabs showing black-and-white icons for fire classes A, B and E; 5) if the user chooses the appropriate extinguisher and uses it correctly, he extinguishes the fire and the scene is over; 6) if the user chooses an extinguisher that is not suitable for the given situation, he must go through Learning Step 1; 7) if the user chooses the wrong extinguisher, he can use it (he can discharge the extinguishing agent) but cannot extinguish the fire; 8) if the user chooses the appropriate extinguisher, he no longer passes through Learning Step 1 and proceeds to extinguishing the fire; 9) if he extinguishes the fire correctly, the exercise is over; 10) if he does not use the extinguisher correctly, he must go through Learning Step 2. The inability to extinguish the fire may be due to the fact that the user did not act at all, failed to remove the pin, or did not correctly direct the extinguishing agent (at the base of the fire). The table below presents the events that can take place during the assisted demo scenario, and a minimal code sketch of this branching follows Table 2.

Table 2. VR events in assisted demo scenario

Scene | Written messages
(screenshot) | Welcome! Cross the walkway and descend the stairs. You have to get into the workshop.
(screenshot) | Continue on the teleport points.
(screenshot) | Open the left door with your right hand.
(screenshot) | Flammable liquids have caught fire. Safely extinguish the fire. You only have two minutes for that. After 5 seconds without action, the message appears: Choose the appropriate fire extinguisher.
(screenshot) | Pay attention to the labels on the wall.
The user chooses the wrong extinguisher (class A or E). | Learning Step 1 - texts from the print screen: Class A - Fire produced by the burning of solids: wood, paper, textiles, rubber, plastics. Class B - Fire produced by the burning of liquids or liquefiable solids: petrol, petroleum, alcohols, paints, oils, wax, paraffin. Class E - Fire produced by electrical equipment and installations: electrical panels, transformers, computers, servers. In this environment you could be confronted with the following types of fires. Each type of fire corresponds to a type of fire extinguisher.
The user selects the correct extinguisher (class B) but does not act. | After 10 seconds of inactivity, the message appears: Operate the extinguisher! After another 5 seconds of inactivity: Remove the pin and act!
The user operates the extinguisher correctly, but does not direct the jet to the fire base. | Learning Step 2: The extinguishing agent must be directed to the base of the fire.
(screenshot) | Press the anti-panic bar to exit.
Finish | You have finished the exercise. You can repeat the assisted tour or try the Free Demo from the original menu.
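The branching described before Table 2 (wrong extinguisher leads to Learning Step 1, incorrect use leads to Learning Step 2) can be summarised as a small state machine. The sketch below is illustrative only; state and event names are hypothetical and do not come from the application's code:

```python
# Illustrative state machine for the assisted-demo branching described above.
TRANSITIONS = {
    ("choose_extinguisher", "correct_class"):    "operate_extinguisher",
    ("choose_extinguisher", "wrong_class"):      "learning_step_1",
    ("learning_step_1", "acknowledged"):         "choose_extinguisher",
    ("operate_extinguisher", "fire_out"):        "finished",
    ("operate_extinguisher", "pin_not_removed"): "learning_step_2",
    ("operate_extinguisher", "jet_not_at_base"): "learning_step_2",
    ("learning_step_2", "acknowledged"):         "operate_extinguisher",
}

def next_state(state: str, event: str) -> str:
    """Return the next scenario state; stay in place on unknown events."""
    return TRANSITIONS.get((state, event), state)

# Example run: wrong extinguisher first, then correct choice and correct use.
state = "choose_extinguisher"
for event in ["wrong_class", "acknowledged", "correct_class", "fire_out"]:
    state = next_state(state, event)
print(state)  # finished
```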

Learning Step 1 refers to the three classes of fire the user needs to know. The following images are suggestions for presenting the three classes of fire in this demo. Through this presentation, the user receives information about the materials that are part of certain classes of fires.

Table 3. Materials and fire classes

Class A - Fire produced by burning solid materials: wood, paper, textiles, rubber, plastics.

Class B - Fire produced by the burning of liquids or liquefiable solids: petrol, petroleum, alcohols, paints, oils, wax, paraffin.


Class E - Fire produced by electrical equipment and installations: electrical panels, transformers, computers, servers.

Learning Step 2 refers to instructing the user on how to use the extinguisher (essentially because the jet has to be directed at the base of the fire rather than at the top of it). The steps for operating the extinguisher must be performed in the following order: a) the user must take out the pin ("Use the right hand to remove the pin"); b) the user has to press the lever to release the extinguishing agent ("Press the trigger to put out the fire"); c) the user has to turn the extinguisher towards the base of the fire ("Turn the jet to the fire base"). In the free demo scenario, the user must complete the mission without any help tools; in this case there are no written messages and no other hints for the user. To end this journey successfully, the user must extinguish the fire by using the appropriate extinguisher correctly.
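A minimal sketch of how the required ordering of the three operation steps could be checked is given below; function and step names are hypothetical and purely illustrative of the rule stated above:

```python
from typing import List

# Illustrative check that the extinguisher steps occur in the required order.
REQUIRED_ORDER = ["remove_pin", "press_lever", "aim_at_base"]

def steps_in_order(performed: List[str]) -> bool:
    """True if the required steps appear among the performed actions, in order."""
    it = iter(performed)
    return all(step in it for step in REQUIRED_ORDER)

print(steps_in_order(["remove_pin", "press_lever", "aim_at_base"]))  # True
print(steps_in_order(["press_lever", "remove_pin", "aim_at_base"]))  # False
```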



4. Evaluation of Interfaces in Virtual Reality

Interestingly, during all these years of IT development, the methods for improving user experience have focused on making it easier and more interesting for a human to interact with the computer using the limited devices available, such as a mouse. The industry's effort to facilitate the connection between human and computer began with the need to adapt the way in which people naturally interact with surrounding objects to the constraints imposed by the computer interface: flat screens, 2D views, the mouse, etc. But now, for the first time, people have the ability to interact with the computer the way they naturally do with other elements of their environment: grabbing objects, speaking naturally and responding physically to events.

Virtual reality (VR) systems may suffer from serious usage problems such as conceptual disorientation and the inability to manipulate objects (Kaur et al., 1996); however, no dedicated methods for assessing the usability of VR systems have been reported. There is a need for better-designed VR systems (Bolas, 1994; Loeffler & Anderson, 1994) that support perception, navigation, exploration and engagement (Wann & Mon-Williams, 1996). Significant problems in using current VR systems have been reported by Miller (1994).

In addition, Kaur et al. (1996), in a field study of design practice, found that designers did not have a coherent approach to design (especially interaction design), lacked an understanding of the usage concepts underpinning VR, and did not use conventional HCI methods or guidelines. Standard evaluation methods (for example, Nielsen, 1993) may reveal some usability problems, but as Hook and Dahlback (1992) note, no current evaluation method addresses the specific problems of VR applications. It can be argued that conventional usability assessment methods do exist, such as heuristic evaluation (Nielsen, 1993), cooperative evaluation with users to diagnose problems (Monk et al., 1993) or the cognitive walkthrough (Wharton et al., 1994). However, Nielsen's heuristics, for example, do not address problems of locating and manipulating objects or navigating in 3D worlds, while cognitive walkthroughs (Wharton et al., 1994) were not designed to address perceptual orientation, navigation, or the interpretation of change in the virtual world.

Laboratory-based usability evaluation (Monk et al., 1993) can identify failures and interaction problems, but these methods offer little guidance towards a solution. It is therefore necessary to support the usability assessment process by addressing the issues specific to VR, as well as the link between problem identification and specific interface design guidelines. In order to achieve successful interaction, the user needs knowledge of the domain and the environment on the one hand, and support provided by the machine on the other. Table 4 shows the stages of interaction together with the sources and supplies of knowledge, which we call "generic design properties" (GDPs). GDPs express abstract requirements that a design must meet to ensure successful interaction, but they need to be specialized into concrete design guidelines. The GDP specialization is described in Kaur et al. (1999); space limitations preclude a broad description in this paper.

Table 4. Stages of interaction with sources and supplies of knowledge

Steps | Question | GDP | User knowledge
1. Objective | (i) Can the user form or remember his/her task? (ii) Can the user formulate an intention? | 1. Compatible task flow | Task knowledge; domain knowledge - appropriate situation
2. Location of active environment | (iii) Are the objects or part of the environment visible? | 2. Clear environmental structure | Domain knowledge - the layout of the environment
3. Locating objects | (iv) Can objects be localized? | 2. Clear environmental structure; 5. Recognizable objects | Domain knowledge - objects and their location
4. Approaching and orientation | (v) Can the users get closer and orientate themselves in order to accomplish the task? | 6. Approachable objects; 7. Components of the object; 4. Flexible operation | Domain knowledge - object structure; task knowledge - orientation actions
5. Specifying the action | (vi) Can the user decide what to do and how? | 3. Purpose of the action | Task knowledge - details
6. Manipulation of objects | (vii) Is manipulation easy? | 4. Flexible operation; 7. Components of the object; 8. Locating areas for handling | -
7. Feedback recognition | (viii) Are the consequences visible? | 9. Visible effects | Domain knowledge - expected effects
8. Feedback evaluation | (ix) Can the user interpret the change? | 10. Interpretable effect | Domain knowledge; task knowledge - expected effects
9. Next action | (x) Can the user decide what he does next? | 3. Purpose of the action | Task knowledge

The questions are aimed at identifying potential user problems; the answers are then cross-referenced to the GDPs, indicating the possible causes of usability issues. The evaluator walks through the task for each subgoal in the sequence, progressing around the cycle of action with the following questions (a minimal code sketch of this checklist is given after the question list):

  1. Can the user form or recall his/her goal of activity? The answer to this question will be yes unless the user has poor knowledge of the task. In this case, a memory aid can be displayed, such as a list of step points; otherwise, training should be provided for the task. The goal-formation stage requires the user's task knowledge, while locating the active part of the virtual environment (VE) requires indications of where the appropriate objects are (GDP 2). These can be delivered by the environment itself or by other design features, such as overview maps. The distribution and appearance of objects should correspond to the user's expectations based on their memory of the real-world task (GDP 1);

  2. Can the user specify the intention to do what they need to do? At this stage, the user needs either a procedural memory of how to perform the task or cues and affordances in the VE to suggest the best way to act. Affordances should be provided; otherwise, it should be indicated where the user might find them;

  3. Are the objects or part of the environment necessary for the realization of the action visible? If they are not, the user will have to look for them;

  4. Can the objects required for the task be located? Objects may be hidden or not visible even if the user is sure that the appropriate part of the environment has been reached. The necessary object should be highlighted or shown. Important objects should be detailed in order to help the user recognize them. If highlighting would violate the naturalness of the environment, speech cues may be used instead. Object localization requires the user's domain knowledge and clear indications from the system representation (GDP 2, 5);

  5. Can the users approach the object so that the required action can be performed?;

  6. Can the user decide what action to take and how? Objects may not suggest cues or affordances for action. If the user cannot decide, the problem may be either a lack of detailed task knowledge or an unclear design of the virtual objects. The detailed representation of the object should be improved. If naturalness is not vital, the object can be animated to suggest actions to the user; otherwise spoken or text instructions can be displayed. The user's self-representation must be able to perform the operations users intend, so that the action can be planned and carried out as naturally as possible. Objects should have easy-to-recognize subcomponents and should be made accessible by providing clear orientation cues so that the user can position himself/herself;

  7. Can the user easily perform manipulation or action?;

If the action is difficult, it may exceed the physical capabilities of users (for example, manipulations are too precise or require special perceptual-motor coordination); alternatively, the user may not have acquired the necessary physical skills. Additional details are required for successful handling, e.g. cues showing the parts that can be grabbed, handled, turned, etc. (GDP 7, 8). If the scale of the object is not vital, it can be resized by scaling so that it is easier to manipulate; alternatively, the required action can be simplified or automated. The design of the object can also be improved so that manipulations are easier for the user to control. If the user employs a virtual tool, this may require adapting the object in 3D to make the interaction more natural. Evaluating this stage usually involves taking feedback into consideration.

  8. Is the consequence of the user's action visible? Feedback may be absent, ambiguous, or hidden (meaning that it happens in another part of the environment, outside the user's field of view). Changes to the shape of objects should be true to the real world. The ways in which the object can be modified are important because, ideally, feedback should include haptic representations as well as audio and visual representations. Remote feedback must be signaled to the user and its location displayed. If the feedback is unclear, the object can be highlighted to indicate a change. If force feedback is not possible, a cross-modal substitution may be used, e.g. changing the tone of a sound or changing a color to visualize an action. Speech and/or text clarifications may be necessary to explain complex changes. Once the action takes place, the objects in the environment and their characteristics must be easily maneuverable within the normal limits of human perceptual-motor skills. This involves not only the design of the objects acted upon, but also the representation of the self and, possibly, of a virtual tool. All three must be easy to operate in order for the interaction to be successful;

  9. Can the user interpret a change? If the principle of naturalness is not violated, feedback should always be clear and unambiguous. The user should be able to interpret the effect in the light of his/her task/domain knowledge and the relationship between the effect and the observable virtual environment. If the effect of the change is unclear, the feedback may be clarified or explained to the user. Explanations may be required for complex effects; alternatively, the global effect can be shown in slow motion to facilitate interpretation. Moreover, feedback that is recognizable and easy to interpret (GDP 9, 10) is an important ingredient in the success of the user's action. Operations may be visible, but without such force feedback the limits of the user's actions and manipulations are often difficult to judge;

  10. Can the user decide what to do next? At this stage, the interaction path branches. If the user has completed a task, the next step is to form the next goal, so the protocol is repeated starting with question (i). Alternatively, if the user is within a procedure, the next step is to select the action to be taken. Failure at this step can be caused by forgetfulness or inadequate task knowledge; however, failure can also be due to a misleading virtual environment that gives inadequate cues. In this case, the environment must be redesigned to suggest the next action so that it is compatible with the user's task. Note that this step may involve iterations between questions (vi) and (vii) for closely interrelated actions. When the user is skilled, the decision on the next action is automatic.
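As announced above, the following is a minimal sketch of how the Table 4 walkthrough could be encoded as a reusable checklist data structure. The type name, field names and the evaluation loop are hypothetical; they simply mirror the question-to-GDP cross-referencing described in this section, and only a few stages are shown:

```python
from dataclasses import dataclass
from typing import Dict, List

# Illustrative encoding of the Table 4 walkthrough: each interaction stage
# carries its diagnostic questions and the GDPs implicated by a "no" answer.
@dataclass
class Stage:
    name: str
    questions: List[str]
    gdps: List[str]

CHECKLIST = [
    Stage("Objective",
          ["Can the user form or recall the task?",
           "Can the user formulate an intention?"],
          ["1. Compatible task flow"]),
    Stage("Locating objects",
          ["Can objects be localized?"],
          ["2. Clear environmental structure", "5. Recognizable objects"]),
    Stage("Feedback recognition",
          ["Are the consequences of the action visible?"],
          ["9. Visible effects"]),
    # ... the remaining stages of Table 4 follow the same pattern.
]

def report_problems(answers: Dict[str, bool]) -> List[str]:
    """Cross-reference 'no' answers to the GDPs that may explain them."""
    problems = []
    for stage in CHECKLIST:
        for question in stage.questions:
            if answers.get(question) is False:
                problems.append(f"{stage.name}: check {', '.join(stage.gdps)}")
    return problems

print(report_problems({"Can objects be localized?": False}))
```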



Conclusions

The flexibility provided by a VR system will most likely attract those responsible for managing the learning processes inside an organization. Even though the costs are still high compared to conventional multimedia systems, it is easy to see how many different applications such a system can support. It is possible to imagine that a time will come when current training procedures will be enhanced by VR systems that allow immersion in different realities, where different experiments can be performed. For example, VR-based laboratories will be able to integrate chemistry, physics, engineering or human-reaction sessions into a cohesive, unitary experience. Desktop-based VR systems will be used in the first instance, followed by partial-immersion VR systems, where projection screens can be adapted to support wide-screen representations. Finally, fully immersive systems will be introduced, but this will happen when acceptable head-coupled displays become available at a reasonable cost. In terms of issues, there is a need to develop new skills related to the use of VR tools in learning environments. It would be beneficial to be able to quantify/benchmark VR systems against more traditional methods of teaching. Among other VR-related issues that should be addressed, we can mention: costs; technology obsolescence; the fact that interaction in a 3D environment can be problematic with 2D devices such as a mouse; and the fact that current tools may be good for creating virtual environments and object databases, but these need to be linked to a carefully structured training programme with quantifiable learning benefits. Still, by using VR-based systems, an organization can obtain many key benefits, such as: flexibility; upgradeability; learning processes based on a sense of presence, which is important in many educational contexts; a high degree of interaction, with 3D interactions and visualisations; an easily achieved sense of true scale in 3D environments (important in fields such as architecture); safer processes with fewer restrictions (it is possible to experiment with the implications of exceeding certain limits); access to otherwise dangerous situations; and situations where observation of internal workings/structures is important to aid understanding.



References

Bolas, M. (1994). Designing virtual environments. In Loeffler, C.G. & Anderson, T. (Eds). The Virtual Reality casebook. New York: Van Nostrand Reinhold.

Kaur, K. (1998). Designing virtual environments for usability. Doctoral Thesis, Centre for HCI Design, School of Informatics, City University.

Kaur, K.; Maiden, N.A.M. & Sutcliffe, A.G. (1996). Design practice and usability problems with virtual environments. In: Virtual Reality World '96 conference, Stuttgart, Proceedings (IDG Conferences).

Kaur, K.; Maiden, N.A.M. & Sutcliffe, A.G. (1999). Interacting with virtual environments: an evaluation of a model of interaction. Interacting with Computers, Vol. 11, pp. 403-426.

Loeffler, C.E. & Anderson, T. (1994). The virtual reality casebook. New York: Van Nostrand Reinhold.

Miller, L.D. (1994). A usability evaluation of the Rolls-Royce virtual reality for aero engine maintenance system. Masters Thesis. University College London.

Monk, A.; Wright, P.; Haber, J. & Davenport, L. (1993). Improving your human computer interface. Prentice Hall.

Neelakantam, S. & Pant, T. (2017). Learning Web-based Virtual Reality: Build and Deploy Web-based Virtual Reality Technology. India: Apress. ISBN-13: 978-1-4842-2710-7, DOI 10.1007/978-1-4842-2710-7.

Nielsen, J. (1993). Usability engineering. New York: Academic Press.

Wann, J. & Mon-Williams, M. (1996). What does virtual reality NEED? Human factors issues in the design of three-dimensional computer environments. International Journal of Human Computer Studies, 44, pp. 829-847.

Wharton, C.; Reiman, J.; Lewis, C. & Polson, P. (1994). The cognitive walkthrough method: a practitioner’s guide. In Nielsen, J. & Mack, R.L. (Eds). Usability inspection methods, pp. 105-140. New York: J Wiley.

http://karlkapp.com/principles-for-creating-a-successful-virtual-reality-learning-experience/.

https://elearningindustry.com/instructional-design-strategies-virtual-reality-learning.

https://www.learningsolutionsmag.com/articles/2385/four-essentials-for-effective-learning-using-virtual-reality.

http://store.steampowered.com/app/607590/Earthquake_Simulator_VR/.

http://www.passfirevr.com/.

https://uploadvr.com/varjo-vr-1-impressions/.

https://www.theverge.com/2018/1/10/16875494/pimax-8k-vr-headset-design-comfort-pixels-resolution-ces-2018.

https://www.theverge.com/2019/2/19/18231495/varjo-vr-1-human-eye-resolution-dual-screen-business-headset.

https://www.pcmag.com/article/342537/the-best-virtual-reality-vr-headsets.

https://www.lifewire.com/best-windows-mixed-reality-headsets-4173017.



1 Assistant Professor, University Dunarea de Jos, Romania, Address: Strada Domnească 47, Galați, Romania, Tel.: 0336 130 108, Corresponding author: dragoscristea@yahoo.com

2 Philologue, SC ALTFACTOR SRL, Galati, Romania, Address: Strada Portului 7, Galați 800032, Romania, Tel.: 0236 407 030.

3 Engineer, SC ALTFACTOR SRL, Galati, Romania, Address: Strada Portului 7, Galați 800032, Romania, Tel.: 0236 407 030

4 Philologue, SC ALTFACTOR SRL, Galati, Romania, Address: Strada Portului 7, Galați 800032, Romania, Tel.: 0236 407 030.

5 Senior Lecturer, PhD, University Dunarea de Jos, Romania, Address: Strada Domnească 47, Galați, Romania, Tel.: 0336 130 108.
