Human Augmentation

Human augmentation is a field of research that aims to enhance human abilities through medicine or technology. Historically, this has been achieved by consuming chemical substances that improve a selected ability or by installing implants, which requires medical operations. Both of these methods of augmentation can be invasive. Augmented abilities have also been achieved with external tools, such as eyeglasses, binoculars, microscopes, or highly sensitive microphones. More recently, augmented reality and multimodal interaction technologies have enabled non-invasive ways to augment humans.

Since the invention of direct manipulation and graphical user interfaces (Sutherland, 1963; Engelbart, 1968; Shneiderman, 1982; Lipkie et al., 1982), developments in mainstream human-technology interaction have been incremental. In the past, humans had to adapt to computers; in the future, computers will adapt to humans. Here we use the term ‘natural’ to refer to interaction that closely resembles the innate ways humans act and interact with physical objects. Human-centric user interface paradigms include perceptual interfaces (Turk, 2014), augmented reality (AR) (Schmalstieg and Höllerer, 2016), virtual reality (VR) (van Krevelen and Poelman, 2010; Jerald, 2015), and ubiquitous computing (Weiser, 1993).

Human augmentation as a field is still so young that there is no commonly agreed-upon definition, even though the number of articles and books on the topic is increasing. In her book ‘Augmented Human’, Papagiannis (2017) focuses mainly on the potential of augmented reality and offers no definition for the field. For the purposes of this article and the research community, we present the following definition: Human augmentation is an interdisciplinary field that addresses methods, technologies, and their applications for enhancing the sensing, action, and/or cognitive abilities of a human. This is achieved through sensing and actuation technologies, fusion and fission of information, and artificial intelligence (AI) methods.

Human augmentation can further be divided into three main categories of augmentation:

• Augmented senses (aka enhanced senses, extended senses) are achieved by interpreting available multisensory information and presenting content to the human through selected human senses. Sub-classes include augmented vision, hearing, haptic sensation, smell, and taste.

• Augmented action is achieved by sensing human actions and mapping them to actions in local, remote or virtual environments. Sub-classes include motor augmentation, amplified force and movement, speech input, gaze-based controls, teleoperation, remote presence, and others.

• Augmented cognition (aka enhanced cognition) is achieved by detecting the human cognitive state, using analytical tools to interpret it correctly, and adapting the computer’s response to match the current and predicted needs of the user (e.g., providing stored or recorded information during natural interaction).

Wearable interactive technology is an essential component in enabling human augmentation. It offers seamless integration with the physical and digital world around us and can empower the user with non-invasive, easy-to-use extensions for interacting with smart objects and the augmented information of the hybrid physical-virtual world of the future.

Crossmodal interaction allows the characteristics of one sensory modality to be transformed into stimuli for another. This can benefit people with disabilities as well as elderly people with deteriorating sensory abilities.

We have included examples of different approaches to augmented senses, action, and cognition. The aim of this section is to familiarize the reader with methods, systems, and experiments that show both the extensive potential and the variety of disciplines involved in this field.

1. Augmented senses

Augmented senses use methods and technologies either to compensate for sensory impairments (mostly visual and auditory) or to exceed the capabilities of the existing senses. In the first case, the sensory signals for the impaired senses are amplified significantly or supplemented through other healthy senses. For example, haptic actuators can be used to describe the surroundings to a blind person (Maidenbaum et al., 2014; Shull and Damian, 2015) or to convey speech signals to a deaf person (Novich and Eagleman, 2015). In the second case, the human senses are augmented by using additional sensors to observe signals beyond normal human sensory capabilities and transforming them into a format suitable for human use (Evreinov et al., 2017; Farooq, 2017).
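As a minimal sketch of the sensory-substitution principle described above, the following Python fragment maps readings from a hypothetical distance sensor onto vibration intensities for a wrist-worn haptic actuator. The sensor range, the linear mapping, and the device itself are illustrative assumptions, not a description of the systems cited here.

```python
# Minimal sketch of sensory substitution: translating distance readings
# into vibration intensity on an assumed wrist-worn haptic band.
# Sensor range and mapping are illustrative assumptions.

def distance_to_vibration(distance_m: float,
                          min_range_m: float = 0.2,
                          max_range_m: float = 4.0) -> float:
    """Map a distance reading to a vibration amplitude in [0, 1].

    Nearby obstacles produce strong vibration, distant ones weak vibration.
    """
    # Clamp the reading to the sensor's assumed working range.
    d = max(min_range_m, min(distance_m, max_range_m))
    # Linear inverse mapping: closer -> stronger.
    return 1.0 - (d - min_range_m) / (max_range_m - min_range_m)


if __name__ == "__main__":
    for reading in (0.3, 1.0, 2.5, 4.0):
        print(f"{reading:.1f} m -> vibration {distance_to_vibration(reading):.2f}")
```

The same pattern generalizes to other crossmodal mappings, for example translating sound level into visual cues or non-visible-spectrum imagery into audio.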

Further techniques for building augmented senses include sensors designed for specific uses, such as cameras for very dim light or for the non-visible spectrum, auditory or vibration sensors within mobile devices, or even large-scale sensor arrays, such as networks of remote sensors continuously broadcasting environmental information and global positioning systems for tracking the movements of objects and individuals. Integrating these with distributed sensor networks, such as smart traffic systems and location-specific information sources, can increase our awareness of the surrounding world. No matter which types of sensors are used or how they are configured, augmented sensing has the potential to increase visual acuity, auditory reception, the olfaction threshold, gustatory perception, and haptic sensation beyond current natural human abilities. Yet with the addition of all these sensors, the amount of sensory information will grow dramatically; it will therefore be critical how this information is processed and presented to the individual user with respect to their context, task, and needs over time.
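To illustrate the context-dependent filtering that this paragraph calls for, the sketch below ranks incoming sensor readings by their relevance to the user’s current task and surfaces only the top few. The stream names, tasks, relevance weights, and presentation budget are hypothetical placeholders, not part of any cited system.

```python
# Sketch of context-dependent filtering of augmented-sense data: many sensor
# streams are available, but only those relevant to the current task are
# surfaced. Stream names, tasks, and weights are hypothetical.

from dataclasses import dataclass

@dataclass
class Reading:
    stream: str        # e.g. "thermal_camera", "air_quality", "traffic_feed"
    value: float
    relevance: dict    # task name -> priority weight in [0, 1]

def select_for_presentation(readings, task, budget=2):
    """Return the few readings most relevant to the current task."""
    ranked = sorted(readings, key=lambda r: r.relevance.get(task, 0.0),
                    reverse=True)
    return [r for r in ranked[:budget] if r.relevance.get(task, 0.0) > 0.0]

if __name__ == "__main__":
    readings = [
        Reading("thermal_camera", 36.5, {"search_and_rescue": 0.9}),
        Reading("air_quality", 42.0, {"cycling": 0.8, "search_and_rescue": 0.3}),
        Reading("traffic_feed", 7.0, {"cycling": 0.9}),
    ]
    print(select_for_presentation(readings, task="cycling"))
```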

2. Augmented action

The earliest examples of augmenting human action were related to motion augmentation. For instance, prosthetic limbs restored some of the capabilities of an amputated limb. Recently, new digital technologies have enabled augmenting action in ways that go beyond natural human motor and sensory limits. For example, exoskeletons enable paralyzed people to walk on robotic feet (Dollar and Herr, 2008; Young and Ferris, 2017).

To enable the next step in augmented action, it is necessary to understand the user’s cognitive state by measuring, for example, brain activity. This leads to advanced augmented action technologies such as neuroprosthetics (Leuthardt et al., 2009), which can enable thought control of remote robots (Warwick et al., 2004) and control of prosthetic fingers with a brain-machine interface (Hotson et al., 2016).
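A brain-machine interface of this kind can be thought of as a decode-and-map loop: neural features are decoded into a discrete intent, which is then mapped to an actuator command. The sketch below shows only that loop structure; the decoder is a toy stand-in, whereas the cited systems use trained classifiers on recorded neural data.

```python
# Sketch of the control loop behind BMI-based augmented action: decode a
# discrete intent from neural features, then map it to a command.
# The decoder here is a toy stand-in, not a real neural decoder.

from typing import Sequence

COMMANDS = {0: "rest", 1: "flex_index_finger", 2: "extend_index_finger"}

def decode_intent(features: Sequence[float]) -> int:
    """Toy decoder: pick the class whose feature channel is strongest."""
    best = max(range(len(features)), key=lambda i: features[i])
    return best if features[best] > 0.5 else 0  # fall back to "rest"

def drive_prosthesis(features: Sequence[float]) -> str:
    """Turn decoded intent into a command string for a (hypothetical) controller."""
    return COMMANDS[decode_intent(features)]

if __name__ == "__main__":
    print(drive_prosthesis([0.1, 0.8, 0.2]))  # -> "flex_index_finger"
```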

3. Augmented cognition

Augmented cognition is a form of human-technology interaction in which a tight coupling between user and computer is achieved via physiological and neurophysiological sensing of the user’s cognitive state (Stanney et al., 2009a). Augmented cognition integrates information detected from the user to adapt the computer’s output to match the user’s situational needs. Finally, one long-term goal in human-technology interaction is to use the knowledge of human cognition to build machines that can think like humans. Zheng et al. (2017) and Ren et al. (2017) suggest that hybrid-augmented intelligence could take cognition far beyond human abilities.
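As a rough illustration of the adaptation step, the sketch below adjusts how much information is pushed to the user based on an estimated cognitive load. The load estimate, thresholds, and policy values are illustrative assumptions; they are not taken from the studies cited above.

```python
# Sketch of the adaptation step in augmented cognition: an estimated
# cognitive load (e.g. from physiological sensing) adjusts how much
# information is presented and through which modality.
# Thresholds and policy values are illustrative assumptions.

def presentation_policy(cognitive_load: float) -> dict:
    """Choose how to present information given a load estimate in [0, 1]."""
    if cognitive_load > 0.75:
        return {"items_per_screen": 1, "modality": "audio", "defer_noncritical": True}
    if cognitive_load > 0.4:
        return {"items_per_screen": 3, "modality": "visual", "defer_noncritical": True}
    return {"items_per_screen": 6, "modality": "visual", "defer_noncritical": False}

if __name__ == "__main__":
    for load in (0.2, 0.6, 0.9):
        print(load, presentation_policy(load))
```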
