1.2 Contributions
This work describes an interface design that is a result of an evolutionary process. This design
was validated through user testing, which showed improved awareness of the robot’s
surroundings with each new version.
Few researchers in the HRI domain iterate interface designs via user testing. Notable
exceptions are the Idaho National Laboratory (INL) and Swarthmore College. Due to the
relative paucity of literature describing HRI design iterations, one of this thesis's contributions is
the documentation of our evolutionary process.
This work puts into action many of the guidelines produced by Yanco, Drury and Scholtz [2004]
and Scholtz et al. [2004]. We provide a map of where the robot has been as well as fused sensor
information. We do not make the user tab through multiple windows to find the information
they need; all of the important information is displayed in a single window. We provide more
spatial information about the robot in the environment, in the form of a map as well as an
easy-to-interpret distance panel showing the current distance sensor readings. This information
makes it easy to tell whether the robot is close to an obstacle. We also indicate which camera is
currently the main one, and we overlay a crosshair on the video to show that camera's current
pan/tilt position.
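To illustrate the last point, the sketch below shows one way a pan/tilt crosshair overlay could be
computed. It is a minimal, hypothetical example: the function name, angle ranges, and the
assumption of a linear mapping from angles to pixels are for illustration only and are not taken
from the interface's actual implementation.

    # Minimal sketch of mapping a camera's pan/tilt angles to a crosshair
    # position on the video frame. The function name, angle ranges, and the
    # linear mapping are illustrative assumptions.

    def crosshair_position(pan_deg, tilt_deg, frame_w, frame_h,
                           pan_range=(-90.0, 90.0), tilt_range=(-30.0, 30.0)):
        """Return (x, y) pixel coordinates for the crosshair overlay."""
        pan_min, pan_max = pan_range
        tilt_min, tilt_max = tilt_range
        # Normalize each angle into [0, 1] within its mechanical range.
        u = (pan_deg - pan_min) / (pan_max - pan_min)
        v = (tilt_deg - tilt_min) / (tilt_max - tilt_min)
        # Pan moves the crosshair left/right; tilt moves it up/down
        # (inverted so that tilting up moves the crosshair toward the top).
        x = int(round(u * (frame_w - 1)))
        y = int(round((1.0 - v) * (frame_h - 1)))
        return x, y

    # Example: camera centered (pan = 0, tilt = 0) on a 640x480 frame.
    print(crosshair_position(0.0, 0.0, 640, 480))   # -> (320, 240)

A real overlay would also need to account for lens field of view and any mechanical offsets, but
the idea is the same: the crosshair position is driven directly by the camera's current pan/tilt state.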
Some original guidelines were also created as a result of this work and can be added to those
already mentioned from prior work. For instance, using this system, we showed that having
multiple cameras, especially one facing the rear of the robot, greatly improved situation
awareness (SA). Therefore, we state that to improve SA most effectively, if at least two cameras
are present on the robot system, one should face forward and one backward. We also showed
that being able to see at least part of the robot's chassis in the video stream leads to improved
SA, which aligns with the prior guideline about using the cameras to inspect the robot. Also, in
our interface, we presented all of the important information on or around the video screen.
Operators pay attention primarily to the video, so information will only be noticed if it is
overlaid on the video or directly adjacent to it. Although we did not explicitly test the validity of
this claim in our experiments, we have observed that users ignore information that is not near
the video screen. Therefore, as a guideline, we state that all important information needed by the
operator should be presented on or directly adjacent to the video.