Design for Man or Machine? The Lines are Blurring

23 April 2019

One of a series of interviews with Dr Chris McGinley, Research Fellow and Leader in the Age and Diversity Space at the Helen Hamlyn Centre for Design.

In this article we explore with Chris how technology is affecting human interactions and social connections. Do designers now need to be more careful than ever as the lines blur between human and machine?

Do you feel at a loss if you leave home without your mobile phone? Rely on Siri’s advice more than your friends’? Or find yourself minding your manners when speaking to Alexa? If you answered yes to any of those questions, or had to mull over your responses, your boundaries between man and machine may already be blurred. We are in a time of tech pervasiveness, and it seems timely to consider the ethical implications of the design of voice-controlled services, digital assistants, robot citizens and the new wave of digital companions all around us. Many people’s lives have undoubtedly been positively impacted by smart devices, but is it time we became more mindful of our interactions?

[Image: Sophia at SXSW, Sophiabot]

We asked Chris how he feels about technology taking over from humans. Here’s what he said. 

Keeping Humans and Machines Separate
“The boundaries between what technology traditionally has been and what it now is are increasingly blurring - you go onto a live chat and you wonder, ‘is there a real person on the other end’? With virtual assistants, for example, I have doubts around the intent to make them look and feel more ‘human’ - it isn’t actually a person, so should we just be transparent about that? Such tech advances seem to work best when they have a distinct and clear purpose – they don’t currently have the flexibility and adaptive nature of a real person, and that’s fine – they’re not replacing people, they’re more often assisting with some of the more mundane tasks, or with more trivial things, such as playing music.”

Chris mentions his own experiences where he has questioned the transparency of his own tech - “several times I’ve been talking about a very particular and non-typical topic, and shortly after I’ve taken a look at Facebook or the likes, and there it is, a prompt to an article or advertisement uncannily close to what I had been discussing earlier – too specific to be coincidence? So of course, you wonder how that happened – ‘is my tech listening when I don’t want it to’? Everyone knows some tech does - Siri is waiting for the prompt ‘hey Siri’ - but Apple have made clear this is buffered and not being collected; that’s not part of their business model. But can the same be said for other groups that actually are interested in data collection – are they listening? There’s something in that doubt that is incredibly uncomfortable for users who hope to manage their own privacy.”

Amazon’s Alexa has also recently been reported to laugh at people when ‘she’ hears certain humorous conversations. When it works, this design feature is creepy enough for some; however, when it doesn’t work it can create far more unease - as reported recently when people heard Alexa laughing to herself in the middle of the night, unprompted, and clearly due to some design fault. Chris argues that instances like these demonstrate that, “while the designers may have had the intention of creating something warm and human-like, a simple mistake shifted the experience into a weird and unsettling space - Alexa inviting itself into moments of people’s day where it is not invited. The naturalness is a long way off. Alexa’s random creepy laugh is a funny and harmless example, but what other unforeseen invasive impacts might we experience in the future”?


While Chris expresses some concern, he explains that his caution depends on context and intention - “There are definitely positive potentials - if we’re designing a technology to entertain a child, like Cozmo the robot, or, say, something to offer some tactile comfort in a care home, like Paro the seal, which uses recognition technology to more convincingly engage and connect, I understand that, and I see the value in making those exchanges as seamless and convincing as possible, as they are very particular use scenarios. However, we start to tread on thin ice when we blur this ‘knowingness’ with, for example, personalised ‘human-like’ assistants; here we lose some of that transparency. We’re dealing with important personal and confidential data and interactions, and we are trying to convince people that a human-like experience can be had, so we risk lulling people into a false sense of security. If you consider where human connection might be sought, say in moments of strain or anxiety, current assistants are not designed for that interaction, yet might try to provide people with general help, unrelated to the role they’ve been designed for.”

An example of this can be seen when Siri is told “I feel depressed.” Her first reply is human-like - “I am here to listen”; however, from the second interaction onwards it becomes apparent that Siri doesn’t know what the conversation is really about, or how best to respond. In such moments of crisis this blurred interaction has a clear downside.


Anticipating Unintended Consequences
Many technologies intended for convenience and social good can also bring unintended consequences.

Chris cited one example from a project in Hong Kong where they were researching care home experiences. In one care home they investigated, the monthly payment had previously been made over a desk within the care home; however, this was changed to an online service for convenience. Once this happened, the same relatives who had previously dropped off payments, and then spent some time with their loved ones, stopped coming to the care home entirely. Effectively, the new, efficient payment system had removed the need to visit in person, but an unforeseen consequence of this was fewer social visits, leaving elderly residents more isolated.

Closer to home in the UK, people’s traditional routines are being disrupted by digital intervention as exchanges such as banking and shopping move online – and with this, the high street exchanges disappear. While older citizens’ routines might once have involved walking to the shops to carry out a number of daily tasks - getting some fresh air and bumping into familiar faces along the way - the closing of physical outlets has left many people remaining in their homes, often disconnected from the very technologies that have replaced their established exchange points. A huge part of wellbeing is also feeling a sense of ‘belonging’, and if older communities become digitally isolated - where do they belong? Designers must consciously consider the implications of technologies on people and communities.

Enhancing Benefits and Moving Forward Ethically
We are seeing new innovations emerge that try to undo the impact of their technological predecessors. Chris remarks, “the world is more connected, yet ironically many people feel more alone – we should create technologies that can enhance the connections people have, bringing them together and ensuring more face-to-face time again. There is a lot of conversation now in the design community around blending digital and physical worlds and how we can reconstruct or reinstate the physical connections that we seem to be losing. We were recently involved in a project with ECAL EPFL in Switzerland where we were exploring older people and solidarity, and how tech can be used to find out what events and the likes are going on in people’s lives, and to put some intelligence into planners to solidify their social networks, presenting more opportunities to connect and actually meet. So taking social media and using it in ways that prompt actual physical interaction rather than purely online exchange”.

This can also be seen in the growing number of millennials deleting their social media accounts and taking ‘social media detoxes’ to improve their mental health and social lives, as well as to become less visible to parents who keep tabs on them over social media. Technology is most valuable when its benefits and trade-offs are clear; the opposite is true when it becomes vague and all-consuming, draining both attention and time.

So how do we now move forward to minimise the negative consequences that some of these invasive and often socially disruptive technologies bring? Chris believes that transparency is the way forward for designers, remarking, “to my mind, things that are designed well don’t pretend to be something that they’re not. It’s time to be transparent and show technologies for what they really are, what their intended purpose is; technology shouldn’t be unsettling, it should be a wonderful thing that enhances human experience”.

As technology accelerates, regulators, governments and even designers themselves cannot accurately predict the impacts these new innovations will have on society. Therefore, we need to ensure we are especially rigorous at the drawing board, under the pens of the creators. With design having such a crucial role in constructing these technologies and shaping the future, let’s make sure that we are driven by the right values, and that we are doing it right.


Interview conducted by Tina Woods, CEO, and Jasmine Eskenzi, Content Manager, at Collider Health

Dr Chris McGinley, Research Fellow, Leader of Age & Diversity Space, Helen Hamlyn Centre for Design

Chris is an award-winning people-centred designer, researcher and writer. He has developed projects including medication packaging to address compliance issues with GlaxoSmithKline, cooker redesigns with a focus on the elderly for Osaka Gas, design research to prevent alcohol-related violence for the Home Office, and hospital furnishings for the Department of Health. His research interests are based around design ethnography and the role empathy plays in informing research-driven, people-centred design. He is a member of the AXA Health Tech & You Expert Group.