Thoughts on natural interaction
Posted on July 31, 2020, in Human-Computer Interaction for User Experience Design
How people interact with computers and other technological devices is likely to be drastically changed by natural interaction, and by the applications and devices that incorporate it. In 2017, a popular phrase users directed to the voice-activated personal assistant Alexa was “help me relax.” This is the kind of request a person might make of a friend, and it suggests that people are starting to address devices naturally. Ahmed Bouzid observed that “[u]ntil now, all of us have bent to accommodate tech…Now the user interfaces are bending to us” (Anders 2017).
Which applications would you most like to see adopt natural interfaces? Can you think of instances where natural interaction may not work? Discuss your ideas on the forum, and engage with posts by your fellow participants.
First, I would like to comment on how interactions are presented to us as “natural” or “unnatural.” Words matter when qualifying technology and interaction, and I believe this choice of word is misleading. I understand how natural it is to interact through conversation, or through computer vision that reads moods and emotions from a face, as we do between humans. However, I disagree with the term “unnatural,” as it carries a judgemental tone best left to believers in intelligent design.
I would instead propose the term “learned interactions.” It is not true that pen-based gestures are natural: we spend a great deal of time teaching children to master a pen and write the various alphabets we have invented. Another kind of gesture often believed to be natural, playing music on a piano or a guitar for example, is not natural either; it was learned through painstaking practice and repetition. It stands to reason, then, that mouse and keyboard interactions are also learned, and no more “unnatural” than pen-based or instrument-based interactions. Tool-based interactions all fall under the “learned interactions” bucket.
As for applications, I would like to present two examples:
1. Speech interaction: Many jobs required, in the pre-COVID era at least, that many people share the same room while typing a great deal on their computers, whether text for newspapers or numbers and data in spreadsheets for banks. These people often need deep concentration, so if they instead had to speak to their respective computers endlessly, their work might take longer, and the noise this would generate would make the shared environment hard to sustain. Here, the tool-assisted interaction of mouse and keyboard is ideal. It could likely be improved with touchscreens and other devices that make interacting more natural, but speech interaction would not be a good fit.
2. Direct gestures as opposed to a control panel: Jobs that involve moving gigantic machines, such as cranes, often require workers to interact with a control panel akin to a game controller, with buttons, knobs, levers, etc. In many cases these panels are now attached to workers’ belts, so they no longer need to be stuck in a control cabin, which is already an improvement. A more natural improvement would be this: instead of workers standing static with their hands on controllers, workers’ body motions could be mapped to those of the crane, so that when a worker makes a grabbing gesture with their hands, the crane physically grabs an object.
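The crane idea above boils down to a mapping from recognized body gestures to machine commands. As a minimal sketch only, here is what that mapping layer might look like; every gesture name and crane command below is hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: translating recognized body gestures into crane
# commands. Gesture names and the command vocabulary are invented for
# illustration; a real system would come from a gesture-recognition pipeline.

GESTURE_TO_COMMAND = {
    "hand_close": "GRAB",
    "hand_open": "RELEASE",
    "arm_raise": "HOIST_UP",
    "arm_lower": "HOIST_DOWN",
    "arm_sweep_left": "SLEW_LEFT",
    "arm_sweep_right": "SLEW_RIGHT",
}

def interpret(gesture: str) -> str:
    """Map a recognized gesture to a crane command.

    Any unrecognized gesture maps to a safe no-op ("HOLD"), since heavy
    machinery should never act on ambiguous input.
    """
    return GESTURE_TO_COMMAND.get(gesture, "HOLD")

if __name__ == "__main__":
    for g in ["hand_close", "arm_raise", "wave"]:
        print(f"{g} -> {interpret(g)}")
```

The design point worth noting is the default: defaulting to a harmless "HOLD" rather than guessing is exactly the kind of safety consideration that would matter before letting natural gestures drive a crane.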