10:15 - 11:15 Presentation Session 1 (M)
M1 |
Lisa W. Billman |
Human System Performance Evaluations of Heterogeneous Intelligent Autonomous Systems |
||
The Office of Naval Research's Intelligent Autonomy program is developing and demonstrating autonomous control and human interface technologies that support management of multiple heterogeneous unmanned systems by a single operator. Accomplishing this requires significant increases in autonomous control and a greatly reduced need for human intervention compared with many current systems, which require one or more dedicated and skilled operators to control even a single system. The applications under consideration are long, complex missions with many interdependencies that will sometimes require significant human collaboration with the autonomous systems in order to coordinate unmanned systems planning and execution with changing situations in a dynamic operational space. Further, there may be significant differences between how human operators and highly advanced autonomous systems conceptualize planning and execution. In addition, designers of autonomous systems are unlikely to have a full understanding of how users in the field will want to utilize these systems, and there will be times when unforeseen problems arise. As a result, it will be important for operators to be able to interact with these systems at a variety of different levels within different control loops. Thus, there is a strong need to determine how best to design the entire autonomous system in a way that supports the role of the human in the system, rather than assuming this can be solved with a good user interface design. For the Intelligent Autonomy program, it is critical to examine the performance of the total system, including the human in the loop, as well as to examine what factors affect the ability of the human to collaborate effectively with the automation.
The particular type of system being examined has some complex features that can make it difficult to evaluate, including highly heterogeneous vehicles that have different types of on-board autonomous control systems. These platforms also have significant differences in communications (e.g., a high-altitude UAV may have relatively good communications with an operator, while a UUV may have extended periods without communications or only low-bandwidth communications). The vehicles also may be widely distributed geographically throughout the course of the mission. Finally, this program is focusing on vehicles that can operate with various levels of autonomy, which the operators can adjust. One consequence is that it may be important to examine factors that are difficult to measure, such as the operator's trust in, and mental model of, the autonomous system. This presentation will review a series of operator-in-the-loop usability assessments of a variety of software packages developed under the program, focusing in particular on the metrics used during these evaluations. These software packages are assessed in order to evaluate human system performance for very complex autonomous systems. It is important to note that these are not meant to be solely human performance evaluations, or assessments of hardware/software performance. Rather, these assessments are intended to assess human-in-the-loop system performance, which includes hardware, software, liveware, and the environment.
| ||||
M2 |
Harry L. Litaker Jr., Shelby Thompson, and Ronald D. Archer |
Evolution of the Mobile Information System (MIST) |
||
The Mobile Information SysTem (MIST) had its origins in the need to determine whether commercial off-the-shelf (COTS) technologies could improve intravehicular activity (IVA) crew maintenance productivity on the International Space Station (ISS). It began with an exploration of head-mounted displays (HMDs), but quickly evolved to include voice recognition, mobile personal computing, and data collection. The unique characteristic of the MIST lies in its mobility: a worn vest contains a mini-computer and supporting equipment, and a headband carries attachments for an HMD, lipstick camera, and microphone. Data are then captured directly by the computer running Morae or similar software for analysis. To date, the MIST system has been tested in numerous environments, including two parabolic flights on NASA's C-9 microgravity aircraft and several mockup facilities ranging from the ISS to the Altair Lunar Sortie Lander. Functional capabilities have included its lightweight and compact design, commonality across systems and environments, and usefulness in remote collaboration. Human factors evaluations have demonstrated the MIST's ability to be worn for long durations (approximately four continuous hours) with no adverse physical deficits, moderate operator compensation, and low workload, as measured by the Corlett-Bishop Discomfort Scale, Cooper-Harper Ratings, and the NASA Task Load Index (TLX), respectively. Additionally, development of the system has spawned several new applications useful in research. For example, by employing only the lipstick camera, microphone, and a compact digital video recorder (DVR), we created a portable, lightweight data collection device. Video is recorded from the participant's point of view (POV) through the camera mounted on the side of the head. Both the video and audio are recorded directly into the DVR located on a belt around the waist.
These data are then transferred to another computer for video editing and analysis. Another application has been discovered using simulated flight, in which a kneeboard is replaced with a mini-computer and the HMD to project flight paths and glide slopes for lunar ascent. As technologies evolve, so will the system and its applications for research and space system operations.
| ||||
M3 |
Laura D. Strater, Erik S. Connors, and Fleet C. Davis |
Collaborative Control of Heterogeneous Unmanned Vehicles |
||
The increase in unmanned systems on the battlefield creates current and future requirements for critical collaboration, communications, and control technologies to support the teams of warfighters tasked with commanding their operations. These technology solutions, however, must be based on a sound understanding of the operational needs of these warfighting teams. The current research describes a preliminary attempt to identify both the collaboration and information display requirements of operators controlling and monitoring heterogeneous UVs in future U.S. Navy operations. In this future operational environment, operators are expected to control vehicles and sensor systems across multiple platforms in a dynamic manner, with the support of automation, to conduct a variety of remote tasks. In this effort, we investigated the requirements for operators controlling unmanned aerial vehicles (UAVs), unmanned underwater vehicles (UUVs), and unmanned surface vehicles (USVs) to identify collaboration requirements and interface designs that best support their command and control requirements. For the preliminary investigation, we identified a pair of missions in which vehicles of these three types might collaborate: mine warfare (MIW) and anti-terrorism/force protection (AT/FP). Using these missions, we conducted a preliminary cognitive task analysis using the goal-directed task analysis (GDTA) methodology for each of the three domains. Where possible, vehicle operators were interviewed as Subject Matter Experts (SMEs) for the domain. The GDTA seeks to identify the operator's goals in the missions, the questions or decisions the operator has related to these goals, and the information requirements necessary to make sound decisions.
These information requirements, or SA requirements, are further hierarchically structured to identify the Level 1 (perception), Level 2 (comprehension), and Level 3 (projection) requirements corresponding to the three levels of SA (Endsley, 1995). For one of the three vehicle types, USVs, we were unable to identify SMEs to interview because the vehicles are still in testing. In each case, literature searches supplemented the information we were able to obtain through SME interviews. After developing the three preliminary GDTAs, we compared them to one another to identify critical collaboration requirements and information needs. The SA-oriented design process (Endsley, Bolte, and Jones, 1993), a principled approach to designing for high levels of operator SA, was then applied using the GDTA goal hierarchies. Here the SA-oriented design principles were applied to develop a set of dual-screen displays. One display focused on supporting the team collaboration environment, providing support for shared SA, a common schedule, coordination requirements such as vehicle hand-off and communications relay, and team tasks. The second screen provides a UV tools display that supports planning, vehicle scheduling, payload, systems, and vehicle controls. This research describes a process for conducting an empirical investigation of the cognitive requirements in a domain and translating the output of that research into a goal hierarchy that then drives the development of information displays organized around the users' goals, within the context of a future collaborative environment.
|
||||
M4 |
Vikki B. Sanders; Atkins Global, and Ben Woodcock |
Practical Issues of Applying Human Factors in Offshore Installation Design | ||
The aim of this presentation is to discuss the practical challenges of applying Human Factors best practice in the offshore oil and gas industry, and how Human Factors consultants can collaborate with design and operating companies to ensure sufficient integration of Human Factors (HF) in the design process. Offshore installations present inherent challenges for the HF practitioner; for example, with limited real estate available to the designers and operators of offshore installations, creating an ideal working environment is not always possible. To overcome this, it is of paramount importance that HF is integrated into the design of new offshore installations and the refurbishment of existing installations. Although Human Factors is considered increasingly valuable in this area, HF consultants are often faced with reluctance when it comes to fully embracing HF in the design process. How, as practitioners, do we overcome this? In order to promote Human Factors in industries such as oil and gas, it is necessary to adopt a pragmatic approach to the application of HF: to identify those areas where safety and performance can be most influenced; to utilize a considered application of established HF practices; and to demonstrate the value of applying HF in these arenas, with the ultimate goal of fully establishing Human Factors as an integral part of the design process. This presentation will demonstrate the intrinsic challenges that face HF practitioners and will illustrate practical solutions using real-life examples from the oil and gas industry. |
11:15 - 12:45 Lunch and Poster Session
P1 |
Jacob T. Fleming, Brian R. Johnson, Frank T. Durso; Texas Tech University, and Jerry M. Crutchfield; Federal Aviation Administration |
Information Needs of Ground and Local Air Traffic Controllers: When a runway becomes unavailable |
||
The purpose of this study was to determine the information needs of air traffic controllers in a tower environment. Six male professional air traffic controllers controlled traffic in a high-fidelity control tower simulator, but without access to any equipment. During this task, all of the displays (e.g., radar) that a controller would normally have access to were turned off, leaving only out-the-window information. If a controller needed information, he would ask for it orally rather than visually glean it from a display. Answers to questions were provided by a simulator support staff member, called Information, who was monitoring the displays to which a controller would normally have access. We also indicated to the controllers that it was permissible to ask for information that they would like but that is not available on today's displays. We found that ground control and local control have similar, but not identical, information needs. For ground controllers, aircraft identification and location requests constituted over half of the information requests. Aircraft type, graphic depictions of the airspace, and flight rules (IFR or VFR) were requested less often, accounting for 9%, 8%, and 8%, respectively, of the ground controller's requests. For local controllers, aircraft identification, flight rules, and departure procedures constituted over half of the information requests, with location and aircraft type also being popular requests. Thus, the two positions seem to need some of the same information: clearly aircraft identification and, to a lesser extent, flight rules and location. Among our more interesting findings was the change in information needs when a runway became unavailable, especially for the ground controller. Although runways are often unavailable for short periods, we closed the runway for a prolonged period. This allowed us an extended period during which to determine the relative value of different types of information.
If the proportion of requests for a particular piece of information, say aircraft identification, starts at baseline, rises (or falls) during closure, and then returns to baseline, then the pattern implies that the treatment (i.e., the runway closure) caused the change in the proportion. For the ground controller, the unavailability of a runway caused an increase in the need for aircraft identification information. After the reopening of the runway, the baseline was largely recovered. In contrast, the proportion of requests for aircraft identification by local controllers continued to increase, regardless of the runway closure. The unavailable runway also seemed to cause a decrease in the proportion of flight rules requests, but once again, the change was only for the ground controller. These results can be useful in the development of dynamic displays that change as the situation, and therefore the information needs, of the controller changes. Runway closures do not seem to require any great change in the information required by the local controller, but a dynamic display for the ground controller could make aircraft identification and flight rules more salient when a runway closes.
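The baseline/closure/reopen comparison described above can be sketched as a small computation over categorized information requests; the request types and counts below are hypothetical illustrations, not the study's data.

```python
# Sketch of the baseline/closure/reopen comparison: compute the proportion
# of requests of each type within each phase. If a type's proportion rises
# during closure and returns toward baseline afterward, that pattern is
# attributed to the closure. Request types and counts are hypothetical.
from collections import Counter

def request_proportions(requests):
    counts = Counter(requests)
    total = len(requests)
    return {kind: n / total for kind, n in counts.items()}

phases = {
    "baseline": ["id"] * 50 + ["location"] * 30 + ["rules"] * 20,
    "closure":  ["id"] * 70 + ["location"] * 20 + ["rules"] * 10,
    "reopen":   ["id"] * 52 + ["location"] * 28 + ["rules"] * 20,
}
props = {phase: request_proportions(reqs) for phase, reqs in phases.items()}
# "id" requests rise from 50% at baseline to 70% during closure, then
# return near baseline after reopening.
```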
|
||||
P2 |
Arathi Sethumadhavan, Francis T. Durso; Texas Tech University, and Jerry Crutchfield; FAA |
Computing Information Relevance from Task Analysis: An Illustration Using Air Traffic Control |
||
Relevance is a pervasive term used in several domains. However, relevance holds different definitions for pragmatists (e.g., Sperber & Wilson, 2004), for information scientists (e.g., Xu & Chen, 2006), and for human factors psychologists (e.g., Sanders & McCormick, 1993). Understanding relevance has important applied consequences. Quantifying the relevance of information can be helpful in effective display design. For example, pieces of information that are more relevant should be more accessible in a display than less relevant ones. We illustrate, using the air traffic control (ATC) domain, a methodology that can be used to compute the relevance of information. The Federal Aviation Administration (FAA) has plans to modernize the National Airspace System, including the control tower. Identifying the relevance of pieces of information could ultimately be used to generate a set of information requirements for aiding the design of ATC tower interfaces. Our methodology computes the relevance of a piece of information by taking into account three aspects of the ATC tasks that use the information: the number of different tasks that make use of the information, the frequency of occurrence of those tasks, and the criticality of those tasks. Thus, information that is required by several tasks that are very frequent and very critical can be considered more relevant than information that is required by fewer tasks of low frequency and low criticality. The methodology can be used to compute the relevance of a piece of information for a particular component of a system (e.g., the local controller) or for the entire system (e.g., the control tower). The number of tasks, the frequency, and the criticality of the tasks using a piece of information were obtained from a cognitive analysis of the ATC domain. In support of the validity of the methodology, we were able to confirm the value of weather information and traffic information in ATC towers.
The method also pointed to the relevance of information about routine procedures and status information. The method can be used to derive information relevance, a characteristic of information that has implications for display design for any domain.
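The weighting scheme described above can be sketched as a short computation; the task names, frequency weights, and criticality weights below are hypothetical placeholders, not values from the actual ATC cognitive analysis.

```python
# Sketch of the relevance computation described above: the relevance of a
# piece of information accumulates over every task that uses it, with each
# task contributing its frequency weighted by its criticality. Task names
# and weights here are hypothetical, not from the ATC analysis.

def information_relevance(tasks):
    """tasks: list of (info_items, frequency, criticality) tuples."""
    relevance = {}
    for info_items, frequency, criticality in tasks:
        for item in info_items:
            relevance[item] = relevance.get(item, 0.0) + frequency * criticality
    return relevance

tasks = [
    (["weather", "traffic"], 0.9, 0.8),       # frequent, critical task
    (["weather", "runway status"], 0.5, 0.9),
    (["flight plan"], 0.2, 0.3),              # infrequent, low criticality
]
scores = information_relevance(tasks)
# "weather" is used by two frequent/critical tasks, so it ranks highest.
ranked = sorted(scores, key=scores.get, reverse=True)
```

Information used by many frequent, critical tasks naturally floats to the top of the ranking, matching the abstract's reasoning.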
|
||||
P3 |
Winfred Arthur Jr., Tobin B. Kyte, Anton J. Villado; Texas A&M University, Curtis A. Morgan, and Stephen S. Roop; Texas Transportation Institute |
Assessing the Utility of Crew Resource Management Training: Introduction of a Subject Matter Expert-Based Utility Analysis Approach
|
||
This paper presents a subject matter expert-based approach to implementing a utility analysis-based evaluation of Crew Resource Management (CRM) and other related training programs and interventions. Empirical evaluation studies are typically used to generate the specific parameters used in assessing the utility of interventions such as training programs and personnel selection systems. However, particular circumstances and situations may make it difficult, if not impossible, to conduct such studies. In response to circumstances where formal empirical evaluation studies are not possible, we present a detailed description and demonstration of a subject matter expert-based approach to implementing utility analysis, using a CRM case study in commercial aviation as an illustrative example. Our subject matter expert-based approach to utility analysis was effective in generating utility estimates when traditional methods of utility analysis were not feasible. Results of the case study suggested that CRM training has clear positive monetary benefits for the commercial airline industry. This paper demonstrates the efficacy of our subject matter expert-based approach to assessing the utility of CRM and other related training programs and interventions. The primary advantage of this approach is that it permits the implementation of a utility analysis in circumstances and situations where it would otherwise not be possible to do so. This approach also permits the collection of data necessary to conduct sensitivity analyses. Applications of this approach include its use in assessing the utility of other types of training programs and interventions in different settings and industries where the use of traditional utility analysis methods is not feasible.
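One way to sketch the kind of utility estimate described above is the classic Brogden-Cronbach-Gleser formula as extended to training evaluation; in the SME-based approach, the effect size and the dollar value of a standard deviation of performance would come from expert judgment rather than an empirical study. All numbers below are hypothetical.

```python
# Minimal sketch of the classic utility-analysis formula for training:
#   Delta_U = N * T * d_t * SDy - N * C
# where N = number of trainees, T = years the training effect lasts,
# d_t = effect size of training on job performance, SDy = dollar value of
# one standard deviation of job performance, and C = cost per trainee.
# All parameter values below are hypothetical, not from the case study.

def training_utility(n_trainees, years, effect_size, sd_y, cost_per_trainee):
    benefit = n_trainees * years * effect_size * sd_y
    cost = n_trainees * cost_per_trainee
    return benefit - cost

delta_u = training_utility(n_trainees=500, years=2, effect_size=0.3,
                           sd_y=40_000, cost_per_trainee=5_000)
```

Sensitivity analysis, mentioned in the abstract, amounts to recomputing `delta_u` over plausible ranges of the SME-supplied parameters.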
|
||||
P4
|
Sebastian O. Thomas, Rochelle E. Evans, and Ahmad Rahmati; Rice University |
User performance on the iPhone touchscreen keyboard |
||
Touchscreens are becoming predominant among portable devices. The iPhone utilizes touchscreen technology and implements a virtual keyboard, which allows for some flexibility in its screen layout and overall presentation. Unlike hard keyboards such as those seen on the BlackBerry, a major concern regarding the use of the iPhone's soft keyboard is its lack of tactile feedback. This feedback allows users to focus more on what they are typing rather than on the keyboard, and its absence has been shown to increase error rates for the iPhone. Previous research has failed to address performance on portable touchscreen keyboards for varying key and finger sizes. The iPhone's keyboard is displayed in a default portrait orientation, and some applications also support a landscape orientation. In the portrait orientation, a key has a width of 4 mm and a height of 6 mm. In the landscape orientation, a key has a width and height of 6 mm and 5.5 mm, respectively. It is conceivable that the smaller surface area of the portrait orientation would be more difficult to hit accurately, particularly for individuals with larger fingers. An additional consideration is the way users utilize their fingers when typing. The iPhone's user manual instructs users to begin typing with the index finger and, with practice, to graduate to two-thumb typing in order to type faster. This study sought to explore how performance on the iPhone's soft keyboard (measured by words per minute and error rate) varied across the two orientations of the keyboard and the index-finger versus two-thumb typing strategies for novice iPhone users with varying finger sizes. Participants typed short sentences over ten sessions (five in each of the landscape and portrait orientations). One half of each session was allocated to the index-finger typing style; the other half, to the two-thumb method.
Exploratory analyses have thus far revealed that users' typing performance improved over the five sessions irrespective of orientation or typing style. Comparing performance across typing styles showed that error rates were consistently low in the index-finger condition, while the two-thumb condition showed a higher and more erratic error rate. Words per minute (wpm) were also lower in the two-thumb condition (contrary to the user manual), particularly for participants with larger fingers. Participants with larger fingers typically had worse performance across sessions regardless of condition; however, by the end of the fifth session for each keyboard orientation, even the slowest participant was typing at an error-corrected rate of approximately 20 wpm. In an absolute sense, while it may not be suitable for lengthy typing tasks, the iPhone's soft keyboard seems to be appropriate for limited typing tasks such as those required for text messaging. There appear to be, at least in the short term, definite performance deficits for individuals with larger fingers, particularly when utilizing the two-thumb typing style. These users seemed to perform best when typing with their index finger in the landscape orientation, which suggests that this orientation be supported by a wider range of applications.
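The two performance measures reported above can be sketched using the common text-entry conventions (one "word" equals five characters); the character counts, timing, and error counts below are hypothetical.

```python
# Sketch of the reported measures under standard text-entry conventions:
# words per minute treats five characters as one word, and error rate is
# errors per transcribed character. All values here are hypothetical.

def words_per_minute(transcribed_chars, seconds):
    return (transcribed_chars / 5) / (seconds / 60)

def error_rate(errors, transcribed_chars):
    return errors / transcribed_chars

chars, secs, errs = 120, 60, 6
wpm = words_per_minute(chars, secs)  # 120 chars in one minute -> 24 wpm
err = error_rate(errs, chars)        # 6 errors in 120 chars -> 5%
```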
|
||||
P5
|
Vickie Nguyen, Ashitra Megasari, April Amos, Angie Romine, S. Camille Peres, Magdy Akladios; University of Houston-Clear Lake, Philip Kortum, and Claudia Ziegler; Rice University |
Software Ergonomics Assessment Using Subjective and Objective Tools: A Literature Review |
||
Previous research has shown that there is a high prevalence of upper-extremity musculoskeletal disorders in the workplace associated with improper workstation placement and repetitive work tasks. A minimum of fifteen hundred to two thousand muscle exertions per hour can be tolerated before tiring and/or damaging the muscles, but workers typically exert about nine thousand muscle exertions per hour. Even with the advent of proper workstation ergonomics and breaks between work tasks, musculoskeletal problems still occur. Usability tests have been run on software to determine ease of use and the amount of efficient output, but workers in the field who spend multiple hours on the software can still experience upper musculoskeletal problems. The software may have passed usability tests, but because of monitor size requirements, modifications by management, multiple open windows, and icons/font sizes that are hard to view on larger monitors, musculoskeletal disorders are still prevalent. For example, intense muscle exertions can still result from repetitive movements of the mouse cursor to icons in order to implement commands. Although keyboard shortcuts could help reduce these repetitive movements, workers do not adopt them because they vary from software to software. In addition, besides usability testing and workstation ergonomics, correlations have been found between subjective, objective, and physiological measures regarding software and its effects on mental workload, but only a few studies have focused on the effects of the software on physiological data due to repetitive movements made to implement commands. This poster will present a review of previous literature regarding this subject and will present pilot data on the possible relationship between subjective, objective, and physiological data concerning software ergonomics.
|
||||
P6
|
Samuel G. McLellan, and Andrew W. Muddimer; Schlumberger |
User Interfaces & Aesthetics: a real-world 1/20 second test |
||
In recent efforts to find out how quickly people form first impressions, researchers such as Dr. Gitte Lindgaard from the HotLab (Human Oriented Technology Lab) at Carleton University in Ontario found that in as little as 50 ms (1/20 of a second), users formed judgments about webpages they glimpsed. This past fall, we ran two separate subject groups inside Schlumberger through two distinct 1/20-second tests: one duplicating a benchmark test, using the same webpages, performed by Dr. Lindgaard, and one consisting of real-world screens from our own web applications. But we had a secondary motive as well, something we haven't seen in the research literature yet. Among the screens were two screenshots of the same web-based application: one before and one after user experience specialists had worked with the product development team to incorporate design niceties without changing basic functionality. We wanted to see whether differences in screens resulting from design enhancements or improvements would be recognizable, in a statistically significant way, at first glance. This poster shows that in both user test groups, the ratings for the screenshots, each shown for only 1/20 of a second, are almost identical across groups, and comparing the two screenshots for the same application, 1/20 of a second was enough to show that the revised application is rated, on average and across users, far better than the original design. We ran an ANOVA against our samples and found that the difference visible in the data and charts is, in fact, statistically significant.
|
||||
P7
|
Christopher M. Sader, and Keith S. Jones; Texas Tech University |
Does Social Value Orientation determine whether someone enjoys a video game? |
||
Video games have become a large and profitable industry. Therefore, researchers have begun to study why video games are enjoyable. Interestingly, Holbrook et al. (1984) demonstrated that certain individual differences influence whether a player enjoys a video game. Specifically, they found that matching a player's preferred cognitive style (visual or verbal) to their game (visual or verbal) increased enjoyment. This suggests that designing video games to match a player's personality can enhance enjoyment. However, additional research is needed to determine whether this is true for other individual differences. One variable that holds promise is Social Value Orientation. Social Value Orientations (SVOs) describe people's motivation to behave in a certain way when the outcome of those behaviors affects both themselves and another. Typical SVOs include being cooperative, competitive, or individualistic. Cooperative people generally behave in a manner that maximizes the outcome for both themselves and the other. Competitive people generally engage in behaviors that maximize their own outcome while minimizing the other's outcome. Individualistic people generally behave so as to maximize their own outcomes, regardless of the other's outcomes. Interestingly, different types of video games seem to reflect these SVOs. In cooperative games, players assist other players so that everyone benefits. In competitive games, players obtain the highest score possible while ensuring that their opponents' scores are very low. In individualistic games, players maximize their own score without regard to anyone else. Given Holbrook et al.'s (1984) findings, one might expect that a person with a cooperative SVO would enjoy a cooperative game more, while a person with a competitive or individualistic SVO would show greater enjoyment of a competitive or individualistic game, respectively.
Such an outcome would be consistent with previous research, which demonstrated that SVOs can be used to predict whether someone will participate in a given activity, e.g., being a volunteer (McClintock & Allison, 1989) or using public transportation (Van Vugt, Van Lange, & Meertens, 1996). To date, however, no research has determined whether matching a person's SVO to a type of game will result in a greater level of enjoyment. We begin to do so here. The current study assessed Social Value Orientation using the Ring Measure of Social Values. Once participants were classified as cooperative, competitive, or individualistic, they played three specialized modifications of the Unreal Tournament 2004 PC game. The modifications required cooperative, competitive, or individualistic play, and they were presented in a random order. In the cooperative game, players worked alongside a computerized partner to ward off swarms of monsters and maximize a team kill score. In the competitive game, players worked against a computerized opponent to maximize their own score while ensuring that the opponent's score was as low as possible. In the individualistic game, players tried to maximize their own score. After playing each game, participants rated their overall enjoyment of the game, their perceived skill at playing the game, and the game's difficulty. We expected that matching the player's SVO to the game would increase enjoyment. Results will be discussed.
|
||||
P8
|
John R. Morris |
A Flow-Diagrammatic Approach for Presenting Participant Navigation Patterns during Web Site Usability Testing |
||
The translation of research results acquired in the usability testing arena into usable recommendations for clients and system developers can be a difficult process. While several approaches exist to diagnose and rank usability issues found during testing, the documentation of user activities observed during tasks can provide added benefit for developers. Because programmers and other Web site developers are already familiar with their use, flow diagrams offer a practical approach for presenting the navigation routes taken by participants during testing. Flow diagrams are often used in Web site development to visualize the elements and structure of a proposed or existing design. We have been exploring the use of a flow-diagrammatic approach as a way to facilitate the transfer of Web site usability findings to developers. The use of flow diagrams can dramatically enhance the comprehension of testing results by tapping into the existing knowledge of clients and system developers. A benefit of using flow diagrams, in contrast to other similar approaches such as link analyses, is that the direction of activities is captured. Another common approach to representing the succession of linked Web pages is simply to list the links in the order they were selected. While this is certainly a straightforward approach, combining the results of several users may be confusing at best, if not an impossible chore. Flow diagrams excel in the presentation of information comprising the activities of multiple users. A comparative usability evaluation was conducted on two proposed design styles for the Texas Tech University School of Law Web site. The aim of the Web site was to attract prospective law students to the program. One design style reflected the Admissions portion of the site, while the other was created for the Financial Aid section. Each design was partially built out, with as many as four tiers of depth in some sections.
Eight tasks were conducted by five undergraduate prospective law students. Task flow diagrams were created for each of the eight tasks to depict the navigation activities of the prospective law students. The diagrams were created with boxes representing Web pages and arrows between them indicating the direction of navigation. For connections between Web pages occurring more than once in the same direction, a numerical value was placed along the line of the arrow to illustrate the frequency of the relationship. The resulting diagrams displayed easily recognizable traits, such as navigational routes that were common to multiple users and tasks where users experienced difficulty. Analyses assessing the usefulness of this approach were not conducted; however, the system developers expressed great interest in the flow diagrams and later reported that they were crucial in determining user behaviors and interests with certain aspects of the proposed designs.
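The arrow-frequency bookkeeping described above can be sketched programmatically. The following minimal Python sketch (page names and sessions are invented for illustration) aggregates directed page-to-page transitions across participants, yielding the counts that label the flow-diagram arrows:

```python
from collections import Counter

def edge_frequencies(sessions):
    """Count each directed page-to-page transition across all sessions.

    `sessions` is a list of page-name sequences, one per participant.
    The counts become the numeric labels on the flow-diagram arrows.
    """
    counts = Counter()
    for pages in sessions:
        # Consecutive pairs (page, next_page) are the directed arrows.
        counts.update(zip(pages, pages[1:]))
    return counts

# Two hypothetical participants navigating the Admissions section:
sessions = [
    ["Home", "Admissions", "Apply", "Admissions", "FAQ"],
    ["Home", "Admissions", "FAQ"],
]
freq = edge_frequencies(sessions)
# The Home -> Admissions arrow would carry the label freq[("Home", "Admissions")].
```

Because direction is part of the key, A -> B and B -> A are counted separately, which is exactly the property the abstract cites as the advantage over link analyses.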
|
||||
P9
|
Lauren FV Schraff; Stephen F. Austin State University, and Phillip Kortum; Rice University |
The Immediate Benefit of Disappearing Web Links |
||
Introduction: Removal of a web page link is normally considered to be detrimental to user performance on subsequent visits to a web page. This study investigated the impact of removing a shortcut link based on the time delay of the subsequent visit. Method: Eighty university students participated in a 2 (no-link group) x 2 (delay) experiment. The first no-link group (YN) returned to search for a target after a previous single search on the site when there was a shortcut link leading directly to the target. The second no-link group (N) made their first visit to the site when there was no shortcut link present. Participants either searched the site for their second time immediately following their first visit or one week later. The experiment used a replica of a publicly available education web site. In one version of the web site, a shortcut link was present in the left-hand navigation. Other than link presence or absence, the two site versions were identical. In all conditions, participants were told to search for the same piece of information as quickly as possible. Although the location of the information was the same, the specific answer was varied to ensure that participants were not using simple memory to complete the second task. Results: For each dependent variable (search time and page count), a 2 (no-link group) x 2 (delay) ANOVA was performed. There was a main effect for no-link group, F(1, 76) = 3.72, p = .05, with the YN group (99 sec) finding the target significantly faster than the N group (132 sec). No-link group significantly interacted with delay, F(1, 76) = 3.79, p = .05, such that the YN zero delay group (67 sec) was significantly faster than all other groups (YN one week delay, 131 sec; N zero delay, 134 sec; N one week delay, 130 sec). There were no significant main effects or interaction for page count (YN zero delay, 10.5 pages; YN one week delay, 10.2 pages; N zero delay, 10.6 pages; N one week delay, 9.3 pages). 
Discussion: At first it might seem surprising that the loss of a web site link (YN group) did not negatively impact user performance on the immediate subsequent visit. While the number of pages searched was similar for all groups, the time it took to find the target on the subsequent visit was vastly improved for those who initially had the shortcut link. Why might this happen? Our hypothesis is that users have retained an iconic memory of the target web page. Thus, while they searched the same number of pages, this iconic memory allowed them to do so with significantly greater speed. In other words, they were more quickly able to reject non-target web pages. This effect should not be due to the users learning the site, because in the first task they should have only seen the home page and the target page due to the use of a shortcut link.
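The 2 x 2 analysis reported above follows the standard sums-of-squares decomposition for a balanced two-factor design. As a hedged illustration (not the study's own analysis code, and with invented search times), the F statistics can be computed as:

```python
from statistics import mean

def two_way_anova(cells):
    """F statistics for a balanced two-factor design.

    `cells` maps (level_A, level_B) -> equal-length lists of scores.
    Returns (F_A, F_B, F_AB) from the textbook balanced-layout
    sums-of-squares decomposition.
    """
    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})
    n = len(next(iter(cells.values())))        # scores per cell
    a, b = len(a_levels), len(b_levels)
    grand = mean(x for v in cells.values() for x in v)
    m_a = {i: mean(x for j in b_levels for x in cells[(i, j)]) for i in a_levels}
    m_b = {j: mean(x for i in a_levels for x in cells[(i, j)]) for j in b_levels}
    m_ab = {k: mean(v) for k, v in cells.items()}
    ss_a = b * n * sum((m_a[i] - grand) ** 2 for i in a_levels)
    ss_b = a * n * sum((m_b[j] - grand) ** 2 for j in b_levels)
    ss_ab = n * sum((m_ab[(i, j)] - m_a[i] - m_b[j] + grand) ** 2
                    for i in a_levels for j in b_levels)
    ss_err = sum((x - m_ab[k]) ** 2 for k, v in cells.items() for x in v)
    ms_err = ss_err / (a * b * (n - 1))
    return (ss_a / (a - 1) / ms_err,
            ss_b / (b - 1) / ms_err,
            ss_ab / ((a - 1) * (b - 1)) / ms_err)

# Invented search times (sec) for a 2 (link history) x 2 (delay) layout:
data = {
    ("YN", "immediate"): [65, 69, 67],
    ("YN", "one_week"):  [128, 131, 134],
    ("N",  "immediate"): [132, 136, 134],
    ("N",  "one_week"):  [129, 131, 130],
}
f_group, f_delay, f_inter = two_way_anova(data)
```

Each F is a mean square divided by the error mean square, with 1 degree of freedom per effect in the 2 x 2 case.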
|
||||
P10
|
Michael K. Anthony; SRA International, Katarina Derek; SRA International, and Dorothy Buckholdt; USAFSAM ADL |
Blended Learning Program: Aerospace Medicine Primary Course (AMP) |
||
The C-17 is a recent addition to the United States Air Force Aeromedical Evacuation (AE) inventory. Because of demand for actual aircraft in theatre, United States Air Force School of Aerospace Medicine (USAFSAM) instructors trained AE students using photographs of critical C-17 aircraft components during lecture. In support of a larger Advanced Distributed Learning effort, SRA International developed a virtual 3D C-17 for use as a lecture insertion technology. The tool provides instructors a navigable 3D aircraft that enables free play familiarization with internal and external aircraft systems. Additional functionalities include high-resolution photographs, animations, video procedures for 30 systems or locations, and 3D configurable personnel and stanchions (i.e., stretchers). This paper describes the development, evolution, and deployment of the stand-alone procedural trainer and assessment tool as well as its future potential. Use of the tool will support existing research validating the efficacy of such tools.
|
||||
P11
|
James Dykes |
Mesopic Contrast Acuity and Contrast Sensitivity |
||
PURPOSE Photopic and scotopic luminance efficiency functions accurately predict contrast acuity and contrast sensitivity for photopic (above about 10^1 cd/m2) and scotopic (below about 10^-2 cd/m2) luminances. Current research focuses on the transition between photopic and scotopic vision (mesopic). This is particularly relevant to understanding automotive accidents and cockpit compatibility. The current research modeled contrast acuity and sensitivity based on photoreceptor sensitivity across the mesopic range (between low photopic and high scotopic: 34 to 0.002 cd/m2). METHOD Wearing an NVG, the experimenter tested 12 observers with normal acuity and color vision (8 males and 4 females, ages 18-28 years). On a calibrated Sony Artisan 20” CRT, observers identified Regan-font contrast acuity letters and the orientation of circular bar gratings of four spatial frequencies (0.125, 0.25, 0.5, and 0.833 cpd) varying in contrast. The stimulus background was Illuminant D (x, y = .313, .328) at 35 cd/m2. Eight luminances were generated by ND gels sandwiched between clear acrylic sheets. Given the CRT gamut, individual Weber photoreceptor contrasts (Rabin) were maximized while holding Y contrast close to zero and minimizing contrasts for the other photoreceptors. The l- and m-cone contrasts were set lower than the rod and s-cone contrasts to compensate for human CSF differences. During the 120-minute session, luminances were run from high to low, and a subset of 96 chart x luminance combinations was presented based on pilot data. The observers first dark adapted for 15 minutes by playing a low-resolution video game on a paper-white monitor ND-filtered to 35 cd/m2. RESULTS Due to contrast differences, logMAR acuity was separately fit for reciprocal l- vs. m-cone contrast sensitivity and for reciprocal s-cone vs. rod contrast for each luminance condition. 
For 34 cd/m2, l-cone contrast better predicted acuity than m-cone contrast; but m-cone was the better predictor for 8.5 cd/m2 and less. For 34 and 8.5 cd/m2, s-cone contrast better predicted acuity than rod contrast; but rod contrast was the better predictor for 2.14 cd/m2 and below. For 0.034 cd/m2 and below, rod contrast was the only relevant predictor. Base logMAR acuity was adequately fit by log cd/m2 (r2 = .88). Overall logMAR acuity for all 54 data points was adequately fit by log cd/m2, l- or m-cone contrast, and s-cone or rod contrast (r2 = .90). While noisier (also with some cpd caveats), the contrast sensitivity data mirrored the contrast acuity data. Preliminary data from five participants reveal no difference based on green adaptation to the same paper-white game filtered to 35 cd/m2 by a gel matched to the peak wavelength of an NVG (albeit with a wider notch). CONCLUSIONS Modeling contrast acuity and contrast sensitivity based on photoreceptor contrast is consistent with the established photopic and scotopic ranges, provides an understandable fit to the transition through the mesopic ranges, and may provide insights to help reduce traffic accidents and improve cockpit compatibility. ACKNOWLEDGEMENTS This research was funded by an AFRL/HE HBCU/MI grant to the PI with invaluable guidance and expertise provided by Drs. Leon McLin and Tom Kuyk.
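The reported fit of base acuity to log luminance is an ordinary least-squares regression. The sketch below (acuity values are invented and the luminance levels are assumed, not the study's data) shows how such a fit and its r^2 are computed:

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares line y = b0 + b1*x, plus r^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx                      # slope
    b0 = my - b1 * mx                   # intercept
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b0, b1, 1 - ss_res / ss_tot  # r^2 = explained variance fraction

# Hypothetical: logMAR acuity worsens (rises) as luminance falls.
lums = [34, 8.5, 2.14, 0.53, 0.13, 0.034, 0.0085, 0.002]   # cd/m2, assumed levels
log_lum = [math.log10(L) for L in lums]
acuity = [-0.1, 0.0, 0.1, 0.25, 0.4, 0.6, 0.8, 1.0]        # invented logMAR values
b0, b1, r2 = linfit(log_lum, acuity)
```

With this sign convention the slope comes out negative: logMAR increases (acuity degrades) as log luminance decreases.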
|
||||
P12
|
Aniko Sandor, Shelby G. Thompson, Kritina L. Holden, and Jennifer L. Boyer; Lockheed Martin |
The Effect of Software Label Alignment and Orientation on Visual Search Time |
||
Two studies investigated the effects of label alignment and label orientation in software displays. Some of the most common elements on a visual display are columns of labeled data values and edge key labels. There are two frequently used methods of aligning text labels on a display: labels can be aligned to the left margin ("left-aligned") or aligned to the data value ("data-aligned"). Long labels are sometimes also wrapped where space is limited. Study 1 varied label alignment, label length (short and two-word labels), and text wrapping. Fixed-length groupings contained four labels that were all short or all long, while mixed-length groupings contained two short and two long labels. Participants were presented with a target label, and after they clicked on it, a column of labeled data was presented that contained the target label. The task was to find and click on the value that corresponded to the target label. The results showed that for four-label groupings, label alignment did not significantly affect search times. Response times were reduced when label lengths varied within the grouping. Overall, performance was better for both label lengths when long labels were not wrapped. Therefore, based on this study, wrapping labels is not recommended. The advantage of mixed-length groupings over fixed-length groupings may be due to the fact that in mixed-length groupings one can quickly reduce the search set based on label length. Study 2 investigated the effect of label orientation on target identification using four orientations: horizontal, 90° left and right, and marquee. Participants were presented either short words or acronyms/abbreviations at four locations on the screen: top-left, top-right, bottom-left, and bottom-right. The experimenter read aloud the target word prior to the participant clicking a start button that brought up a random arrangement of four items on the screen. 
Once the participant located the target word, they were instructed to click it as quickly as possible using a mouse. Responses to horizontal text were fastest and responses to marquee text were slowest, although there was no significant difference between horizontal and 90° right text, nor between marquee and either 90° rotated condition. In addition, there was no significant difference between response times to the two 90° rotated text conditions. Furthermore, response times revealed that participants conducted their visual search from top to bottom and left to right, with response times to targets presented at the top-left location being faster than to targets in the bottom-right location. Future studies will look at more realistic displays and tasks to further investigate the effects of label alignment and orientation.
|
||||
P13
|
Ron Macklin |
Evaluation of design criteria for use in the development and specification of hand crank mechanisms |
||
Semiconductor manufacturing equipment (SME) presents many challenges to the ergonomics professional. One such challenge is in the area of mechanical movement of heavy chamber lids using simple hand cranks. Hand cranks are found in a number of applications beyond SME; they are also found on support equipment such as lift devices traditionally used in warehousing environments. These crank mechanisms are not always designed specifically for the application in which they are employed; many are common off-the-shelf winch-type devices with a rated load limit that end up being deployed with little consideration of how they will be used within the work environment. This presents challenges to the ergonomics professional, especially in light of the minimal guidance available on this subject. Published guidance on the use of hand cranks gives information on force, orientation, and direction, and some gives guidance on the number of turns per unit time. However, what seems to be missing is guidance that the design engineer can take and put directly to use in the development of their product. Additionally, these criteria must also be assessable by third-party evaluators, who are typically called upon to make a judgment of the designer's work. One approach that the ergonomics task force within SEMI Standards has been considering is to develop criteria for hand cranks based upon a force-by-time relationship. The task force chose to develop this curve based upon Rohmert's work involving muscle fatigue. This criterion gives the designer more control by not limiting them to prescribed radii, while at the same time giving guidance on the maximum forces permissible within a given time that the crank is in use. This proposed approach gives the designer something to use during the design stage, whereas before they had no real criteria by which to design, or to specify if outsourcing their design. 
The problem with this approach is that it does not take into account the speed at which the user of the crank can operate the device. This discussion will review some of the published information on this subject, develop a framework by which we can evaluate how best to address the development of design-level criteria that can be used by equipment suppliers regardless of industry, and look further into whether the speed at which the crank is operated is a compelling factor.
|
||||
P14
|
Andy Y. Su, Alicia Ling, and Tim Borden |
SoniFinder - A pilot study on sonification of 3D object location |
||
The SoniFinder is a monaural auditory interface simulation that represents relative positional information in three dimensions, designed to enable the user to locate a target object such as a car in a large multistory parking garage. Previous work involving sonified navigational data has been mostly two-dimensional and relies on stereo panning to convey direction. While this has generally been shown to be effective in gross outdoor navigation, 2D stereo auditory signals differentiate poorly between frontal and rear locations. The lack of elevation information also prevents them from being effective in many navigational situations. The current study uses monaural sonification, which is intended to eventually enable a portable implementation embedded within mobile electronic devices. The spatial dimensions of range, heading, and elevation are mapped onto the auditory dimensions of intensity, interstimulus interval, and pitch. The mapping of intensity to range is naturalistic, as sound emitters in the real world are loudest when they are proximal to the listener. The mapping of interstimulus interval to heading leverages the familiar hot/cold Geiger counter metaphor, with the rate of stimulus presentation being greatest in the target direction. The mapping of pitch to elevation was chosen by the experimenters because of the direct height metaphor, with ascending note patterns suggesting that the target is above the user, and vice versa for descending patterns. Seven users were tested on an object location task. A Java applet was constructed to resemble a clock face, with a clock hand rotating in sync with sound sequences conveying location information. The clock hand represented an imaginary locator device being panned around the user, who was represented by the center of the clock circle. 
Users were asked to judge from the sounds whether the target was above, below, or level with themselves, whether the range was increasing or decreasing compared with the previous trial, and the heading of the target. Correct identification rates were 31% for elevation, 71% for range, and 55% for heading. For the heading dimension, the mean angle off target was 33 degrees with an SD of 41 degrees. While inferential statistics are not meaningful with the small sample gathered, the results seem to indicate that the added mapping of elevation to pitch is not very effective in conveying target location, as performance was approximately equal to chance. This poor performance may have been caused by the small size of the pitch manipulation, which should be increased to make the pitch differences more noticeable. The data also suggest that there may be large individual differences in the effectiveness of the directional heading mapping, with some users performing almost perfectly and others being consistently off the mark. Future work should explore ways to make the elevation mapping less problematic, or alternate mapping combinations, so that the elevation information can be conveyed efficiently alongside range and heading information.
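The three mappings described in the abstract can be sketched as simple transfer functions. All numeric ranges below (the maximum range, the ISI bounds, the 440 Hz reference tone, and the one-octave elevation span) are invented for illustration and are not the SoniFinder's actual parameters:

```python
def sonify(range_m, heading_err_deg, elevation_m, max_range=100.0):
    """Map a target's relative position to audio parameters.

    - range     -> intensity: naturalistic, loudest when the target is near
    - heading   -> inter-stimulus interval: Geiger-counter metaphor,
                   fastest clicks when pointed at the target
    - elevation -> pitch: height metaphor, higher notes for targets above
    """
    # Intensity in [0, 1], falling linearly with distance.
    intensity = max(0.0, 1.0 - range_m / max_range)
    # ISI in seconds: 0.1 s when dead-on, stretching to 1.0 s at
    # 180 degrees off target.
    isi = 0.1 + 0.9 * min(abs(heading_err_deg), 180) / 180
    # Pitch in Hz: one octave around 440 Hz across +/-10 m of elevation.
    clamped = max(-10.0, min(10.0, elevation_m))
    pitch = 440.0 * 2 ** (clamped / 10.0)
    return intensity, isi, pitch
```

A wider pitch span (e.g., two octaves) would be one way to act on the abstract's suggestion that the pitch manipulation was too small to be noticeable.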
|
||||
P15
|
Camille B. Major |
A Case Study: Developing a Proactive Industrial Evaluation Process |
||
With a mature ergonomics program in place, employees and management were able to identify situations that required an ergonomics evaluation, but only AFTER the illness was reported. A renewed focus on lowering injury rates even further sparked benchmarking visits with other manufacturing industries to identify best practices that could be adopted. Using Six Sigma methodology, an industrial survey evaluation process was designed. It includes an employee self-assessment survey, deep-dive analysis techniques, and a database for tracking corrective actions. Each step of the process is based on commonly used research that focuses on risk factors, exposure durations, and standards to minimize risk of injury. This case study will focus on one Texas site of 3,000+ employees and its impact on the rest of the company.
|
||||
P16
|
Vicky E. Byrne; Lockheed Martin, Gordon A. Vos; Wyle Laboratories, Mihriban Whitmore, and Susan D. Baggerman; NASA |
Early Impacts of a Human-in-the-Loop Evaluation in a Space Vehicle Mock-up Facility |
||
The development of a new space vehicle, the Orion Crew Exploration Vehicle (CEV), provides Human Factors engineers an excellent opportunity to have an impact early in the design process. This case study highlights a Human-in-the-Loop (HITL) evaluation conducted in a Space Vehicle Mock-Up Facility and describes the human-centered approach and how the findings are impacting design and operational concepts early in space vehicle design. The focus of this HITL evaluation centered on the activities that astronaut crewmembers would be expected to perform within the functional internal volume of the Crew Module (CM) of the space vehicle. The primary objective was to determine if there are aspects of a baseline vehicle configuration that would limit or prevent the performance of dynamic, volume-driving activities (e.g., six crewmembers donning their suits in an evacuation scenario). A second objective was to step through concepts of operations for known systems and evaluate them in integrated scenarios. The functional volume for crewmember activities is closely tied to every aspect of system design (e.g., avionics, safety, stowage, seats, suits, and structural support placement). As this evaluation took place before the Preliminary Design Review of the space vehicle, with some designs very early in development, it was not meant to determine definitively that the crewmembers could complete every activity, but rather to provide inputs that could improve developing designs and refine concepts of operations definitions. Each session consisted of a crew of four to six participants who were brought into the mockup and asked to enact scenarios within the internal volume of the space vehicle with representative equipment. Participants were system/subsystem developers and astronaut crewmembers. 
The scenarios highlighted for this discussion involved a crew of six on a 3-day mission, with the expectation that the internal cabin would be reconfigured with seats and suits stowed and nominal operations conducted in a shirt-sleeve environment. Activities for the crew of six included in-space cabin reconfiguration, group meal/meal preparation, medical event/patient treatment, and space suit donning for an evacuation return to Earth. Each participant was assigned an activity (e.g., set up waste/hygiene area, stow equipment, monitor avionics panel) until the end of the scenario. Some activities required translation across the volume through areas of other activities. Human factors observations and participant comments were recorded, and post-evaluation questionnaires were administered. The activities were reported to be satisfactorily performed in the mock-up in this early design configuration. Potential obstructions within the functional volume and systems concept of operations issues were identified. An example of a potential obstruction involved the waste/hygiene area and interference from other internal cabin structures. An example of an incongruous concept of operations, involving the medical care area in the vehicle and a concept for stowage of equipment in that same area, also became apparent: stowed items covered the area expected for medical treatment, and in an emergency situation, removing items to perform patient care is undesirable. Based on their participation in the evaluation and on out-briefs of findings, system developers have been updating space vehicle design concepts and concept of operations definitions.
|
||||
P17
|
Daniel Nguyen; University of Texas at San Antonio, and Gregory G. Manley |
Assessing Personality Through Interviews: Examining the Role of Self Monitoring and Impression Management in Faking Behavior |
||
Aside from measuring cognitive ability, many organizations also employ personality tests. Personality tests have gained popularity due to the variety of applications in which they may be used. Whether self-reported questionnaires are used in selection or in predicting behavior (contextual task performance, counterproductive behavior, success), organizations typically assess personality through paper-and-pencil surveys. Because organizations widely accept and use these measures, some applicants attempt to misrepresent themselves. Some have even tried to gain the upper hand by buying books on how to beat these personality tests, or in other words, by participating in lying (otherwise known as response distortion or faking behavior in the literature). Another standard selection method used in organizations is the employment interview. Aside from self-report paper-and-pencil measures of personality, observations are able to assess personality as well. The employment interview can be viewed as an observational social interaction test of personality. Even though interviews are able to assess the same personality traits as paper-and-pencil measures, some critics are skeptical of using interviews. Interviews are viewed by these critics as being prone to response distortion in the same manner as paper-and-pencil measures. However, in the case of interviews, faking behavior is referred to as impression management. This study examines a college sample and investigates whether or not interviews are able to validly assess the personality domains of the Five-Factor Model. Based on previous research, extroversion is expected to have the highest agreement among raters, while neuroticism is expected to have the lowest level of agreement. Furthermore, self-report measures and interviews are compared in order to examine the susceptibility of each method to faking behavior. 
Interview questions were developed to measure the domains of neuroticism, extroversion, openness, agreeableness, and conscientiousness. Participants completed a mock interview in either an honest condition or an applicant condition, and only individuals in the applicant condition completed a self-monitoring and impression management questionnaire. Interviews are expected to be less susceptible to faking behavior than paper-and-pencil self-report measures due to the cognitive demands, the behavioral nature, and the perceived control of outcome. Two d-statistics (Cohen's d) will be computed. One d-statistic will examine the interview condition and find the standardized difference between the honest condition and the applicant condition (D1). Likewise, the other d-statistic will examine self-reported personality NEO ratings and find the standardized difference between the honest condition and the applicant condition (D2). Because the interview method is expected to be less susceptible to faking good, the d-statistic for interviews is expected to be significantly smaller than the d-statistic for the paper-and-pencil self-reported personality measure. Furthermore, a test of significance will be conducted to determine the significance of the difference between the two d-statistics (D2-D1). Additionally, self-monitoring will be examined with respect to the five personality domains, impression management, and interview structure, but it is expected to be positively related only to impression management.
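The planned D1/D2 statistics use Cohen's d with a pooled standard deviation. A minimal sketch of the textbook computation (the ratings below are invented, not study data) is:

```python
from statistics import mean, variance

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation.

    Uses the standard pooled-variance form with sample variances
    (n - 1 denominators). Illustrative of D1/D2, not the study's code.
    """
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * variance(group1) +
                  (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / pooled_var ** 0.5

# Hypothetical NEO conscientiousness ratings in the two conditions:
honest    = [3.1, 3.4, 2.9, 3.3]
applicant = [4.0, 4.2, 3.8, 4.1]
d2 = cohens_d(applicant, honest)   # positive d = inflation under faking
```

The study's hypothesis then amounts to expecting the interview-based D1 (computed the same way) to be significantly smaller than D2.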
|
||||
P18
|
Keith M. White |
Claim validation: The missing element in determining work relatedness of an injury claim |
||
Across the United States and Canada, there is considerable variance within the health care and workers' compensation systems in determining the work relatedness of ergonomics-related injury claims. Claim validation is the missing element in most post-injury claim activity and is essential because it is the first point in determining causation. Claim validation includes the collection and communication of objective workplace conditions to the health care provider (HCP) and the HCP's response with an objective determination of whether the injury claim is work related. Further, this process is an essential service that benefits both employee and employer. To benefit the employee, a correct determination is essential for his/her well-being and quality of life. Specifically, if a work task is implicated but lacks risk factors for the diagnosed condition, the employee could continue to suffer until the true cause is identified and resolved. To benefit the employer, a correct determination is essential to conserve valuable resources so that the organization can remain a work-producing entity. Specifically, if the employer modifies a work task that is implicated as an MSD cause but is void of risk factors for the diagnosed condition, considerable resources (time, material, and funding) may be consumed that would be better allocated to a job that has risk factors. This presentation reviews the need for claim validation, explains the claim validation process, and presents two case examples: dorsal ganglion cysts and trigger finger. Carpal tunnel syndrome will be reviewed as a third example as time permits.
|
||||
P19
|
Nikolaus Walch |
Improving Orthodontic Quality through structured SureSmile MACROS |
||
Orthodontic archwire prescriptions require several steps, where each step is vulnerable to inaccuracies and miscommunications. To reduce inaccuracies and better control the quality of treatment, a study to capture a series of “best methods” was initiated, examining the activities in each step leading to an archwire order. This resulted in the following changes: a formal procedure for evaluating treatment of the patient, a redesigned software interface to submit orthodontic prescriptions, a formal procedure to implement the orthodontist’s treatment prescription, and a method to check the accuracy of the prescription against the plan. I will present how this study was conducted in-house and with customer participation, and the improvements it has led to.
|
||||
P20
|
Eston T. Betts, and Patricia R. Delucia |
The Relationship between Patient Wait Time and Patient Satisfaction |
||
Patient satisfaction is associated with self-reported treatment compliance and patient outcomes. One factor that influences patient satisfaction is wait time: how long a patient waits during a visit to a health care facility. Here, we measured the time spent at each component of a patient's visit to a cancer treatment facility and its correlation with patient satisfaction. Results suggest that reducing the total time of a patient's visit to a health care facility will improve not only patient satisfaction with how long the entire visit takes, but also satisfaction with other aspects of the visit not including waiting. Moreover, results suggest that the time spent in the examination room waiting for the doctor is highly associated with patient satisfaction with the overall time spent during the entire visit. In short, our results suggest several ways to improve patient satisfaction at a cancer treatment facility. The implication is that such improvements will lead to greater treatment compliance and ultimately to better patient outcomes.
|
||||
P21
|
Andrew Muddimer, Trevor Hicks, Thomas Parmeson; Schlumberger, Ken Harry; Ken R. Harry Associates, Gayle Smith; Vanguard Environments, and Anne Helmick-Lyon; Inscape/JohnsonSimon Resources |
Environmental changes: Improvements to our working environment |
||
History: in 2003, cost savings meant a change from individual offices to a semi-open concept, leaving some walls but removing the doors. Four years later, the POD concept (3-4 people in the space of two offices) has been accepted and we are looking for improvements. What ideas did we have, and what was the final solution? Implemented changes to the work environment included glass walls with sliding doors, adjustable-height desking, new chairs, telephone headsets, harbor rooms, retreat rooms, color coding of areas of the building, new artwork, and a general refresh of paint.
|
||||
P22
|
Paul L. Derby, Keith S. Jones, and Elizabeth A. Schmidlin; Texas Tech University |
An Investigation of the Prevalence of Replications in Human Factors Research |
||
Replication research benefits the scientific community because it increases the confidence, and sometimes the scope, of previous findings. The current consensus suggests that replication research is uncommon outside of the hard sciences for a variety of reasons. Many believe that they lack the time, subjects, or funds necessary to reproduce original findings (Smith, 1970). Also, conducting a replication risks that the study might produce different results (Lindsay & Ehrenberg, 1993). Lastly, by emphasizing the production of original findings, journal editors may discourage researchers from pursuing replications (Lindsay & Ehrenberg, 1993). If correct, this will negatively impact the field of Human Factors. Currently, there is no way to understand this impact because we do not know how often replications are conducted in Human Factors. This study is the first to investigate the extent to which Human Factors professionals conduct replications. To do so, eight articles (hereafter referred to as parent articles) were selected from the 1991 issues of the journal Human Factors. Each article that had referenced one of the eight parent articles between 1991 and September 2006 (hereafter referred to as child articles) was also retrieved (N = 127). Two investigators coded and compared each child article against its 1991 parent article to determine whether the child article replicated its parent article and the type of replication. Our results indicated that six of the eight parent articles were replicated at least once. Each parent article was replicated 0 to 10 times (M = 3.75; SE = 1.41). Of the total number of individual replications (N = 30), 18 replications were conducted by the author(s) or co-author(s) of the parent article, while 12 replications were conducted by someone other than the author(s) or co-author(s) of the parent article. In other words, many of the replications were conducted by the parent article's authors or their co-authors. 
In addition, we found that the number of replications for a given parent article was significantly correlated with the number of child articles associated with that parent article (r = .76, p < .03). Generally, parent articles were more likely to be replicated when they had many child articles. Our results also indicated that 25% of the parent articles had not yet been replicated. Perhaps the results of these parent articles should not be blindly accepted until follow-up work has been conducted. From a more positive perspective, however, our results indicated that there may be more replications within Human Factors than was first expected, despite the belief that social scientists rarely replicate previous research (Hendrick, 1990; Kelly, 2006; Lindsay & Ehrenberg, 1993; Schneider, 2004; Smith, 1970). This is a positive finding in that each time an article is replicated, the generalizability and reliability of its findings are increased (Kelly, 2006; Lindsay & Ehrenberg, 1993). Moreover, it seems that many researchers are taking responsibility for replicating their own research. This is encouraging as well, although it would benefit the Human Factors community if researchers also replicated others' research.
|
||||
P23
|
Sarah P. Everett, and James H. Pratt |
Usability Testing Method for Communication by Customer Pairs |
||
The primary purpose of cell phones and other such devices is to facilitate communication between two or more parties. However, when these devices are tested in a laboratory setting, often only a single participant is observed using the device, with minimal interaction with the experimenter. This methodology fails to assess behavior in a realistic environment and misses the natural interaction of users with each other. In addition, it can be difficult for the experimenter to carefully observe user behaviors while sending messages, calling, or otherwise enabling the tasks. This poster describes a method for testing communication devices by using pairs of friends as participants. Two versions of an Instant Messenger (IM) application on a cell phone were tested by bringing in sets of participants who knew each other. The friends were given time to carry on free-form conversations as well as specified tasks. This allowed them to chat with each other as they would on an everyday basis, without the formality that is often seen between strangers. Overall, this method worked extremely well, providing better data for the experimenters and a more natural, enjoyable experience for participants. A number of usability issues were captured, and recommendations were made to address them. Feedback from clients indicated that they were happy with the method and hoped their next study would employ it as well. Advantages, disadvantages, and operational issues arising from this method are further discussed in the poster.
|
||||
P24
|
Samuel G. McLellan, and Andrew W. Muddimer |
Longitudinal Usability & SUS: the effect of user experience on standard instruments |
||
Longitudinal studies involve testing over time, or testing that takes into consideration previous user experience with a product or product versions. The literature is sparse on examples of the explicit effect of user experience on user satisfaction metrics in industry-standard survey instruments. This poster looks at explicit examples, collected from real-world product development experience in 2007, of the effects of user profiles on ratings for commercial products that utilize one such instrument, the System Usability Scale (SUS). Our findings with one suite of products, across 50+ users and multiple geographic locations, support the claim that level of experience with a product or product version can have a substantial effect on SUS ratings: in general, as user experience increases, so do overall SUS scores. |
1:30 - 2:30 Presentation Session II (A)
A1
|
Adrian O. Salinas |
Human Systems Integration Requirements in Action |
||
The existence of requirements is paramount to weapon system design and development. Requirements afford the opportunity for government systems engineers to stipulate the foreseen needs that the user and maintainer require to accomplish their mission effectively. After the capability gaps are identified for a particular mission, the requirements identify the needed solution to fill those gaps. Those requirement solutions will hopefully translate to an existing technology or a technology in development. Requirements make the most impact when accomplished early in the acquisition life cycle, and they then incur minimal cost impact because redesign is avoided. An essential portion of the requirements that all weapon systems should include are those associated with human participation in the operation and maintenance of the system. Those essential human elements are captured in the composition of the Human Systems Integration (HSI) process, which includes manpower, personnel, training, human factors engineering, environment, safety, occupational health, survivability, and habitability. Requirements have always existed in some form or fashion, whether verbal or documented, ever since items have needed to be made or manufactured. This holds particularly true for all weapon systems, whether manned or unmanned, including flying machines. The human element is evident even in requirements for flying machines dating back to 1908, which addressed HSI by stipulating such characteristics as that the flying machine be able to accommodate two persons and be capable of being quickly and easily assembled; these involve the HSI domains of manpower and human factors engineering, respectively.
The challenge in the composition of requirements for the HSI practitioner is to identify areas within the requirements documents where HSI-related information can be inserted that describes the system with specific and measurable human attributes for operations and maintenance. This challenge forces the HSI practitioner to become thoroughly familiar with the operational and maintenance aspects of the system under review. The process of reviewing documents and ensuring that human-attributable requirements are written will be discussed, categorized, and examined.
|
||||
A2 |
Camille B. Major; Raytheon, Inc |
Bringing Engineers into the World of Ergonomics |
||
Many times, the world of ergonomics is a concept reserved for the Ergonomics Specialist. From evaluation requests to ergonomics training, the Specialist holds the answers and the guidelines and is an invaluable resource to the company. However, to develop an ergonomics culture, the knowledge must be shared. By sharing ergonomics information, guidelines, and analysis techniques with the Engineering departments, the knowledge base is expanded, and others are welcomed and empowered to become involved in the solution-focused world of ergonomics. This case study focuses on the effective training of more than 20 manufacturing and design engineers, covering topics such as ergonomics, evaluation techniques, and internal web-based resources. The program has evolved and grown from standard tools, such as the NIOSH Lifting Index, to virtual analysis tools, such as Jack.
|
||||
A3 |
Melanie D. Davis; 3M, and Miriam Joffe; Auburn Engineers |
Cost-Effective Management of a Global Ergonomics Program |
||
The success of any ergonomics program revolves around a solid process that maintains a global perspective while providing detailed procedures for individual responsibility, timetables, data collection, data analysis, cost-effective solution development, and data storage. We invite you to see an overview of how 3M has developed a very successful ergonomics program that meets these criteria while relying on a mix of EHS professionals, engineers, and management teams in plants across the globe.
|
||||
A4 |
Keith M. White |
Lean Manufacturing v. Ergonomics: It does not have to be a fight |
||
Lean manufacturing is considered a critical strategy for competing in today's global economy. With that goal in mind, the focus is on maximizing manufacturing gains in productivity and quality. However, this focus is often shortsighted to the extent that other necessary elements of a lean manufacturing approach are minimized or, worse, neglected. This minimization can have significant negative impacts on the process being improved and even on the organization's bottom line. Research from the aviation and automotive industries, along with hands-on experience, shows that ergonomics is routinely minimized not only during lean events but throughout the overall continuous improvement process. This is troubling because one of the seven wastes that Lean seeks to eliminate is motion: there is a stated effort to reduce any wasted motion to pick up or stack parts, including walking. Among its many concerns, ergonomics also seeks to eliminate wasted motion, and ergonomics can be simply integrated into any continuous improvement activity. While this presentation provides evidence of the problems between lean manufacturing and ergonomics, the focus is on describing how ergonomics can partner with lean manufacturing for true, permanent gains. Actual examples are provided to demonstrate that this integration is painlessly possible. |
2:45 - 3:45 Presentation Session III (AII)
AII-1 |
Michael K. Anthony, Katarina Derek; SRA International, and Dorothy Buckholdt; USAFSAM ADL |
Aeromedical Evacuation C-17 Virtual Walkthrough |
||
The traditional Aerospace Medical Primary (AMP) classroom instruction was restructured into two delivery modalities in response to the United States Air Force School of Aerospace Medicine's (USAFSAM) need to avoid the high costs associated with presenting an on-site course, the Secretary of the Air Force mandate for implementation of internet-based courseware, and the 2003 Utilization and Training Workshop (U&TW) requirement for an additional four weeks of instruction (resulting in a 10-week course). SRA International re-engineered the classroom instruction into a four-week advanced distributed learning course (Web AMP) and a six-week in-residence (I-R AMP) follow-on practical portion. Web AMP provides all of the updated U&TW-required critical knowledge and skills, which can be accessed as a refresher, used as an educational resource, or customized to the needs of international students. It consists of 12 modules, 139 lessons, 4,900 pages of content, 5,040 static images, 564 animated or interactive exercises, and 12 exams. Module exams are comprised of questions randomly selected from a large bank of test questions specific to each module, and a passing score is required to progress to subsequent modules. The first class of students completed Web AMP in approximately 60-80 hours. Additional benefits of the Web AMP/I-R AMP blended learning program include the fact that Web delivery increases the course capacity from 200 to 400 students per year while avoiding TDY-to-school costs of $980,000 per year. Additionally, once Web AMP is approved for CME credit, it will save the USAF an additional $10,000 to $20,000 per USAF physician. This paper describes the development of this blended learning program, including its current and future savings and benefits. This blended approach to learning supports current research findings and trends in successful training development.
|
||||
AII-2 |
Keith S. Jones, and Brian R. Johnson |
Why Do Tele-Operated Robots Get Stuck? |
||
After 9/11, researchers used robots to assist search-and-rescue operations (Casper, 2002; Murphy, 2004). While doing so, they experienced several human-robot interaction issues. For example, Casper reported that operators frequently got robots stuck (2.1 times per minute). Consequently, researchers began to study factors that influence an operator's ability to judge whether a robot is larger or smaller than an aperture (e.g., camera height and viewing distance: Fune, et al., 2006; view point: Moore & Pagano, 2006; Moore, et al., 2007). Interestingly, those studies found that judgments were fairly accurate (Fune, et al., 2006; Moore & Pagano, 2006; Moore, et al., 2007). Given such accuracy, it seems unlikely that field robots got stuck because operators misjudged whether the robot was smaller than an aperture. If so, then why did field robots get stuck? One possibility is that operators could accurately judge that the robot was smaller than an aperture, but were unable to drive the robot through that aperture without getting it stuck. This possibility was tested in the present experiment. To do so, eighteen participants completed seven blocks of trials. During each trial, participants carried out four activities. First, they viewed an aperture via a camera mounted on the robot. Second, they drove the robot toward the aperture and stopped before crossing a line a short distance in front of the aperture. Third, they judged whether the robot could pass through the aperture. Fourth, they tried to drive the robot through the aperture. During each block, participants completed these four activities with fourteen apertures. Ten of the fourteen were test apertures: five were wider than the robot, and the other five were narrower. The remaining four apertures were fillers, selected at random from the test apertures.
Including fillers varied the proportions of larger and smaller apertures across trials, which discouraged participants from basing their judgments on impressions about the relative frequency of larger and smaller apertures. The results indicated that judgments were relatively accurate. Generally, participants judged that the robot a) could pass through apertures that were wider than the robot, and b) could not pass through apertures that were narrower than the robot. However, the results also indicated that participants routinely drove the robot into the sides of apertures that were wider than the robot. Thus, these results indicate that operators understood that the robot was smaller than a given aperture, but were sometimes unable to drive the robot through that aperture. Consequently, it is possible that robots became stuck in field settings because operators decided to pass through an aperture that was bigger than the robot but too small for the operator to drive the robot through. If correct, this suggests that operators need to base decisions about aperture pass-ability on the dynamics associated with their control of the robot, rather than on the robot's physical dimensions. It remains to be seen whether operators are sensitive to those dynamics. Future research will examine that issue.
|
||||
AII-3 |
Allyson R. Hall, Keith S. Jones, Patricia R. Delucia, and Brian R. Johnson; Texas Tech University |
Does Metric Feedback Hinder All Actions Related to Distance? |
||
People are inaccurate at estimating distances between themselves and targets. Accordingly, training protocols have been implemented in an attempt to improve distance estimation. Typically, distance estimation training consists of an individual estimating the distance to a target and then being told the actual distance to that target in a given metric (e.g., feet). Studies have shown that providing trainees with metric feedback improves their metric distance estimations. However, available evidence suggests that such training affects cognitive processing rather than perception (Richardson & Waller, 2005). If this is true, metric feedback training could decrease the accuracy of certain actions related to distance. This could occur if, after receiving metric feedback, trainees attempt to unnecessarily apply cognitive strategies to a task that is accomplished more precisely with perception/action (Heft, 1993). Basically, metric feedback could hinder certain actions because it encourages trainees to use cognition for a task that requires perception. In other words, metric feedback may encourage trainees to think too much about what they are doing, and applying cognition to a task that does not require it could result in degraded performance. Interestingly, this explanation makes a novel prediction: while metric feedback may hinder certain actions related to distance, it may not hinder all of them. Specifically, metric feedback might not hinder actions that are guided by cognitive processing. Our hypothesis for the negative effect of metric feedback suggests that metric feedback should only hinder actions that do not normally require cognition, whereas action tasks that do have a cognitive component should not degrade after metric feedback training. To investigate this hypothesis, two separate studies were conducted. Both studies will be described.
The first examined whether receiving metric feedback during verbal distance estimation training negatively affected a subsequent distance-related action that does not normally require cognition (i.e., throwing a beanbag to a target). Specifically, participants were instructed to throw a beanbag to targets placed at various distances. The results indicated that metric feedback improved subsequent estimations of distance; however, subsequent throwing performance degraded. The second study examined the effects of metric feedback on a cognitively-guided action task (i.e., throwing a beanbag to a specified metric distance). For example, participants were asked to throw the beanbag so that it came to rest 25 ft away. The results of this study indicated that receiving metric feedback training improved both subsequent verbal estimations and cognitively-guided throws. Therefore, the results of these two studies confirmed our hypothesis that metric feedback hinders only certain actions. Specifically, Experiment 1 demonstrated that metric feedback degraded certain distance-related actions that do not normally require cognition (i.e., throwing to a target), while Experiment 2 demonstrated that metric feedback did not hinder tasks that have a cognitive component (i.e., throwing to a specific distance). Overall, the current studies suggest that trainees must know whether their distance estimation training should be applied to untrained tasks: doing so may benefit certain tasks, while others may suffer from it.
|
||||
AII-4 |
Zachary O. Toups, and Andruid Kerne |
From ethnography to design: Non-mimetic simulation for team coordination |
||
Fire emergency responders work in dangerous, dynamic situations with lives on the line. Responders work in teams, which are distributed in and around an emergency incident. Team members must coordinate in order to effectively rescue victims and fight fires. Explicit communication occupies cognitive resources and saturates the radio communication channel. Implicit coordination is the ability of high-performance teams to work in concert with little or no communication. Because implicit coordination skills are valuable, we seek to design team coordination teaching systems. We have undertaken an ethnographic investigation of fire emergency response work and teaching practice at one of the world's largest firefighter schools, Brayton Fire Training Field. At the field, we interviewed experts on communication and coordination practices and observed student burn training exercises. The student exercises are live simulations, in which students use real equipment to fight fires in buildings. The data from the investigation indicate that switching between radio and face-to-face communication is important for effective coordination. Further, the perspectives of team members shift as they move around the incident, and some members have access to information artifacts. Thus, emergency responders handle the benefit and responsibility of this information differential by gathering, integrating, and sharing information. From this analysis, we develop design implications for systems that teach team coordination skills. The design implications are best instantiated through a new form of simulation, non-mimetic simulation, which focuses simulation resources. Non-mimetic simulations are operational environments that capture abstract aspects of the working environment; in this case, the information, communication, and team structure aspects of fire emergency response. They are grounded in practice, but do not directly mimic concrete aspects of the environment, because doing so is expensive.
Non-mimetic simulation is meant as a complementary tool to existing teaching methods. In this talk, we discuss our ethnographic investigation at Brayton Fire Training Field. We share our data and analysis indicating ways that fire emergency responders coordinate in practice. The data and analysis suggest design implications for systems teaching implicit coordination skills, and a new means of instantiating these implications: non-mimetic simulation. |