Module 6 Assignment: Reflect on the future of HCI
Posted on September 1, 2020 · Human-Computer Interaction for User Experience Design
Think of the future directions of human-computer interaction discussed in this module. Consider the following two directions specifically: developments that enable human-machine teams, and developments in brainwave-based interactions.
In two paragraphs of 200 words each, reflect on the future you envision for human-computer interaction in those two directions, and on how the business environment might change when each of these technologies is integrated into business processes and procedures.
In this module, we discovered the research of Prof. Rus, in which she and her team investigate how machines can interact with humans through a natural language interface so they can work together. This type of interaction is much more promising than other research streams that try to completely automate the behaviour of machines based solely on artificial intelligence or machine learning. First, we know that much of the training data is biased, or imperfect at best, which causes issues. Second, full automation would remove agency from humans. Human-machine teams could lead to quite interesting collaborations: for example, an operator instructing a machine by voice while performing a task that requires minute manual dexterity, such as a firefighter attempting a rescue and requesting tools or light. We know that some jobs are not very rewarding, and instead of automating those jobs and putting people out of work, human-machine teams could have those people direct and maintain one machine, or perhaps several, instead. If we wish to empower our users with our products, we must also empower workers, not by taking away their ability to work, but by augmenting it. Human-machine teams are a good way to respect users and the society in which they evolve.
As for brainwave-based interactions, there are ethical concerns that we should resolve before deploying such technologies. As Prof. Rus mentioned, signals obtained by EEG are not well understood, save for the error-related potential signal, so extrapolating from unclear data could lead to disastrous results. Even with more data gathered, systems can be misled. The current debates around facial recognition and machine-learning-enabled decision software are examples of this: biased or dirty data can reproduce harmful behaviours. If we address those issues, and if we take a more utopian approach, we could create human-machine interactions that feel almost supernatural: what if a machine could guess the desires of users by analysing their brainwaves? If such a development occurs, it would be important to leverage some of the lessons from Prof. Rus's research on human-machine teams. Instead of creating machines that would act independently according to analyses of EEG signals, those machines should present their intent to users and obtain authorization to act. Over time, that learning could be used to improve the autonomy of the machines. If this starts to sound like Asimov's Laws, it's probably because he gave a lot of thought to how autonomous machines would, or would not, be allowed to act.
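The "present intent, then act only with authorization" pattern described above can be sketched in a few lines of code. This is a purely illustrative Python sketch of my own, not anything from Prof. Rus's work: the `Intent` class, the confidence threshold, and the `authorize` callback are all assumptions standing in for a real intent-inference pipeline and a real user-consent interface.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """A hypothetical inferred user desire (e.g. decoded from EEG signals)."""
    action: str
    confidence: float  # 0.0 to 1.0; how sure the system is of the inference

def propose_and_act(intent: Intent, authorize, threshold: float = 0.95) -> str:
    """Act autonomously only above a high confidence threshold;
    otherwise present the intent to the user and wait for authorization."""
    if intent.confidence >= threshold:
        # Trust earned over time could let the machine act on its own.
        return f"acting: {intent.action}"
    if authorize(intent):
        # The human stays in the loop and grants explicit permission.
        return f"acting (authorized): {intent.action}"
    return "standing by"
```

In a real system, `authorize` would be an interface presented to the user, and the threshold would rise or fall as the machine's inferences are confirmed or rejected over time, which is how the gradual growth of autonomy described above could be implemented.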
Additionally, it would be important to clarify who would have the right to access the signals obtained from humans: only the machine? The corporations that own the machines? The corporations that own the EEG headsets? There are many such points that should be clarified. I addressed a few of them in an article a few years ago, but little progress has been made in that field since.
I strongly believe that human-machine teams in which humans retain control of the communication flow and the final decision to act are a better path to investigate for the future of interactions.
LO5: Reflect on the future directions in the field of human-computer interaction, such as human-machine teams and brainwave-based interfaces.