‘I’m sorry, Dave. I’m afraid I can’t do that.’ Die-hard science fiction fans will recognise the line from the cult classic 2001: A Space Odyssey, writes Berniece Hieckmann, Head of Metropolitan GetUp. Delivered by HAL 9000, the spaceship’s intelligent onboard computer, it marks the very moment when the downside of sharing a spacecraft with a machine significantly smarter than you is revealed, with dire consequences for the crew.

But Kubrick’s sci-fi epic is certainly not the only pop-culture example that graphically depicts just how pear-shaped things can go when mankind cedes control to machine: search for movies that relate to artificial intelligence (AI) and a long list of dystopian tales springs up.
Yet it seems that our real-life fears relate more to economic than to physical survival. ‘Will AI threaten jobs?’ is a concern that began to rise around the time of the third industrial revolution, when mechanisation evolved into automation and digitalisation.
Can AI think like a human?
To establish whether this fear is well founded, we need to ask whether AI can truly emulate the thinking processes of humans, which means taking a look at the landscape.
AI is not entirely autonomous, and the vast majority of the commercial value it currently generates comes from supervised learning models, meaning that algorithms learn from examples that humans have labelled, and people remain in the loop to validate what the models learn. The sentient version, like the eerie HAL, remains very much in the realm of science fiction.
Even deep learning models, still in their infancy, are at bottom large neural networks. So while their decisions might look like magic, they are only the product of a complex algorithmic process. Unsupervised learning models can figure things out on their own, yet they are also far from autonomous; they simply find patterns in data, in ways that can be difficult to unthread.
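The supervised/unsupervised distinction above can be made concrete with a toy sketch in plain Python. This is purely illustrative: the data, the threshold rule and the two-means routine are invented stand-ins for real models, chosen only to show where the human labels enter (and where they don’t).

```python
# Supervised: a human supplies the labels; the algorithm learns a rule.
points = [0.1, 0.2, 0.9, 1.0]
labels = [0, 0, 1, 1]  # human-provided "ground truth"

# Learn the simplest possible rule: a threshold midway between class means.
mean0 = sum(p for p, l in zip(points, labels) if l == 0) / 2
mean1 = sum(p for p, l in zip(points, labels) if l == 1) / 2
threshold = (mean0 + mean1) / 2

def predict(x):
    return 0 if x < threshold else 1

# Unsupervised: the same data, but no labels. The algorithm groups points
# by proximity; a human must still decide what each cluster *means*.
def two_means(xs, iterations=10):
    c0, c1 = min(xs), max(xs)  # crude initialisation
    for _ in range(iterations):
        g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return g0, g1

print(predict(0.95))      # -> 1
print(two_means(points))  # -> ([0.1, 0.2], [0.9, 1.0])
```

In both cases the output is the product of a mechanical procedure; the difference is only whether a person told the algorithm what the answers should look like.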
Within the realm of South African financial services, we often engage transfer learning models, in the form of bots. We don’t build these models from scratch because that takes enormous capital investment – we replicate and transfer learnings from overseas models, which we then make relevant to our market by programming in local nuances.
So while AI cannot currently make judgements without human intervention, will it be able to in future? Consider that computers make use of sensors while human beings use senses, which are far more complex and intuitive input systems that enable us to feel as well as to think. Tech can convert things to data, but not feelings.
On this question, Marcus, Rossi &amp; Veloso (2016) reference the Turing Test, which measures whether a machine can fool a person into believing it is human. Advancing AI can emulate human behaviour such as passion, motivation and intelligence. But empathy and judgement are derived from sensory experience that machines are not capable of having, and in humans, passion that is not balanced with empathy, or intelligence that is not balanced with judgement, makes anti-social behaviour probable. It is empathy and judgement that make us human, and they are hard to replicate through algorithms.
So, is society approaching its ‘Sorry, Dave’ moment? No. Computers cannot think like humans, and are unlikely to be able to any time soon.
From man to machine
Here’s a curveball: there’s a far greater risk to humanity that people will become like machines.
Consider the technological evolution of human physicality (mind-controlled prosthetic limbs, 3D-printed organ replacements, DNA splicing) juxtaposed with the influence of tech on human cognitive development (social media’s governance of our relationships, work-from-home arrangements dulling sensory interaction), and a very real threat comes into view.
With the prevalence of cyberbullying, we see countless examples of people losing their conscience behind their device screens every day. When our body and our emotions are no longer central to our identity, we start to cross the threshold from man to machine.
The intersection between AI and humanity
We should be excited about the potential that AI wields for both business and society. However, clear ethics-driven frameworks must be put in place, as innovation will always precede regulation.
In business, we need to keep pace with AI or risk becoming redundant. Company leaders generally realise this but tend to relegate it to the domain of data scientists. Yet once we recognise the threat tech can pose to humanity, how can we not bring these discussions into the boardroom?
It’s the responsibility of business leaders to drive the creation of proper AI governance frameworks that keep the ethical boundaries clear and ensure human rights are not infringed. These frameworks should cover everything from fairness and the minimisation of algorithmic bias to ethical standards that prevent harm. We need a holistic, strategic understanding of AI models and their evolution so that we can anticipate possible threats to our stakeholders.
As we inch towards the fifth industrial revolution, there’s a growing consciousness around impact. We understand that AI can make us better, but if we don’t exercise human judgement, we run the risk of relinquishing our humanity.
Human society is contingent on trust. For technology to add value to us, we need to be able to trust it, while realising that it’s our responsibility to make it trustworthy.