Tech discourse is using the term “humanise” to describe everything from designing more personable chatbots, through to ensuring an AI overlord doesn’t enslave the human race.
I find this use of the term confused, and I believe we need a more sharply defined agenda for it to be useful.
For example, at a Google For Startups event in Berlin this week, billed “Humanising AI,” there was no critical discussion about what it means to humanise AI.
This prevented the conversation from yielding valuable takeaways, despite the interesting work the panellists are doing at AI startups in Berlin.
The confusion might stem from the word “humanise” having two different definitions: 1) to make something more humane or civilised, or 2) to give something a human character.
When used in the context of technology, the term describes at least nine different topics:
Joyful to use. To humanise is to create technology which goes beyond pure functionality to offer pleasurable interaction, such as through ergonomics or softer aesthetics.
Personable interfaces. To humanise is to make automated interfaces more personable and joyful to use, whilst not trying to convince us that they are human — such as making chattier chatbots.
Explaining tech to humans. To humanise is to educate human users by explaining complex systems, to help foster trust in those systems. For example, explaining to users how a machine learning algorithm interprets a CV.
Reading human factors. To humanise is to make technology better able to interpret and react to human factors, such as recognising the emotions of users.
Ethics in the algorithm. To humanise is to design AIs and algorithms which conform to ethical and political intuitions — such as avoiding biases, or not manipulating human users.
Robots doing the drudge. To humanise is to use intelligent automation to do the menial, repetitive work, releasing humans to do the creative work we are uniquely capable of.
Act like a human being. To humanise is to create AIs or robots which can, at best, pass as human, or at least act and appear convincingly human.
Human-like intelligence. To humanise is to work towards developing AIs with human-level or human-like general intelligence.
Safeguarding humanity. To humanise is to design AI with safeguards in order to promote a future which is good for humans — that is, protecting against an AI-powered catastrophe by ensuring AIs share our goals.
Clearly, these topics involve very different activities, from design tasks, to political and ethical dilemmas, to state-of-the-art science. They also operate on extremely different levels of importance and abstraction.
Nevertheless, there is a shared agenda behind these dimensions of humanising tech: They are all about designing technology to be better for humans and better at interacting with humans.
Humanising tech means designing technology to be better for humans and better at interacting with humans.
Understood in this way, humanising tech usefully describes a spectrum of activities all reaching towards a shared agenda.
It’s not hairsplitting to be pernickety about definitions here. To humanise technology, people from very different fields must successfully communicate a shared vision of what they believe is good for humans.
And that vision needs critical attention. Biases can colour our concept of what it means to be human and what is good for humans — just as much as they can be built into algorithms.