But, whatever our reaction, it is increasingly becoming a big part of the technological landscape. Unfortunately, the focus of too many discussions is grounded more in science fiction than in science fact. This may make for provocative and alarming headlines, but it seldom enables discussions to focus on the right areas.
Take, for example, Dr Stephen Hawking’s claims that AI could be the best or worst thing to happen to humanity. Immediately our minds go to various sci-fi films of robots ‘deciding’ to take over the world (note the verb implying a conscious choice). Could this happen to us?
How big is Google Brain?
Then we hear of the provocatively named Google Brain research facility and bold claims by Google CEO Sundar Pichai on its Google Translate function: ‘As of the previous weekend, Translate had been converted to an AI-based system for much of its traffic… The AI system had demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime.’
The gains sound impressive, and with such rapid advancements we start to wonder: ‘What else are Google developing in their Brain? How long before they will be able to create conscious machines – surely it can’t be long?’
To bring some clarity to such questions we need to understand a few terms. First, artificial intelligence is not the same as artificial consciousness. Or, to put it another way, there is an important distinction between ‘weak AI’ and ‘strong AI’. The former refers to non-sentient technology, AI that cannot have conscious states – no mind, no subjective awareness, no choices – just machine learning (like your browser ‘learns’ what you like to search for). Strong AI, by contrast, would possess the full range of human cognitive abilities – a kind of digital soul.
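To see how unmysterious weak AI really is, consider a minimal, hypothetical sketch of the kind of ‘learning’ a browser does when it suggests past searches. Nothing here resembles a mind; the ‘learning’ is just counting:

```python
from collections import Counter

# A toy illustration of 'weak AI': no mind, no awareness, no choices --
# the 'learning' is nothing more than counting past behaviour.
history = Counter()

def record_search(query: str):
    """Remember that the user searched for this term."""
    history[query] += 1

def suggest(prefix: str):
    """Suggest the most frequent past search starting with the prefix."""
    matches = [(count, q) for q, count in history.items() if q.startswith(prefix)]
    return max(matches)[1] if matches else None

record_search("bread recipes")
record_search("bread recipes")
record_search("breaking news")
print(suggest("brea"))  # the more frequent query wins: 'bread recipes'
```

However sophisticated a real system's statistics become, the same point holds: it is pattern-matching over data, not subjective awareness.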
Consciousness from a machine?
No one has yet created anything even close to strong AI, and as Christians we should be highly sceptical that they ever will. A Christian theology of the human soul should raise big questions about any view that thinks consciousness can arise from a machine, no matter how sophisticated. Changing the terminology from AI to machine learning helps to clarify what we are really talking about.
But such debates arguably miss more pressing concerns.
First, think about the way machine learning will profoundly affect the labour market. Forbes recently identified ten areas where machine learning is already being used and where it is likely to grow rapidly, including data security, financial trading, banking, and healthcare diagnosis. These are areas with wide-reaching social implications that need to be considered. What if radiographers were quickly replaced by computers? This may save money, but what impact will it have on integrated healthcare delivery and patient care in which the human element is so important?
Outsourcing moral judgments
Secondly, machine learning will increasingly be used for moral decisions. Based on utility-maximising models (where the ‘utility’ is variously defined), public policies, community interventions and individual decisions will rapidly start to be shaped more by data-driven algorithms than collective moral reasoning or (dare we say) prayerful scriptural reflection. The risk here is not just that we outsource our moral judgments to computers, but that we allow important moral decisions to be hidden behind the source code, away from scrutiny and without appropriate oversight.
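A hypothetical sketch makes the worry concrete: a utility-maximising chooser simply returns whichever option scores highest, and the moral judgment lives entirely in how ‘utility’ is defined. The option names and weights below are invented for illustration:

```python
# A toy 'utility-maximising' chooser: the option with the highest score wins.
# The moral weight is hidden in the definition of 'utility' -- change the
# scoring function and the 'right' answer changes, with no debate visible.
def choose(options, utility):
    """Return the option with the highest utility score."""
    return max(options, key=utility)

# Hypothetical policy options, scored on two crude dimensions.
options = [
    {"name": "fund clinics", "cost_saved": 2, "people_helped": 9},
    {"name": "automate triage", "cost_saved": 8, "people_helped": 4},
]

# Utility defined as pure cost saving favours automation...
print(choose(options, lambda o: o["cost_saved"])["name"])    # automate triage
# ...while weighting people helped flips the decision.
print(choose(options, lambda o: o["people_helped"])["name"]) # fund clinics
```

The algorithm itself is trivial; what matters morally is the scoring function, and that is exactly the part most likely to be buried in source code away from public scrutiny.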
Such advancements may promise efficiency or ‘labour saving’ improvements, but have we really considered the costs they carry with them?
Pete is co-author of Virtually Human: Flourishing in a Digital Age. For more resources visit www.virtuallyhuman.co.uk