The pros and cons of robot carers
If you have seen science fiction television series such as Humans or Westworld, you might be imagining a near future where intelligent, humanoid robots play an important role in meeting the needs of people, including caring for children or older relatives.
The reality is that current technologies in this sector are not yet very humanoid, but a range of robots is nonetheless already being used in our care services, including disability, aged care, education and health.
Our new research, published today by the Australia and New Zealand School of Government, finds that governments need to carefully plan for the inevitable expansion of these technologies to safeguard vulnerable people.
The care crisis and the rise of robots
Australia, like a number of other advanced liberal democracies, is anticipating a future with an older population and a more complex mix of chronic illness and disease. Many care organisations already operate under tight fiscal constraints and report difficulty recruiting enough qualified staff.
In the future, a shrinking working-age population and a growing number of retirees will compound this problem. Add to this equation the fact that consumer expectations are rising, and future care services start to look like they are facing something of a perfect storm.
Robots are increasingly becoming a feature of our care services, capable of fulfilling a number of roles, from manual tasks through to social interaction. Their wider use has been heralded as an important way of dealing with our impending care crisis. Countries such as Japan see robots playing a key role in filling workforce gaps in care services.
A number of Australian residential aged care facilities are using Paro, a therapeutic robot that looks and sounds like a baby harp seal. Paro interacts by moving its head, wide heavily lashed eyes and flippers, making sounds and responding to particular forms of touch on its furry coat.
Paro has been used extensively in aged care in the United States, Europe and parts of Asia, typically among people living with dementia.
Nao is an interactive companion robot with a humanoid form, standing just 58cm tall.
Nao has gone through a number of iterations and has been used for a variety of applications worldwide, including helping children in paediatric rehabilitation and supporting work in educational and research institutes.
The double-edged sword of technology
Robots are capable of enhancing productivity and improving quality and safety. But there is also potential for misuse and unintended consequences.
Concerns have been expressed about the use of robots potentially reducing privacy, exposing people to data hacking, or even inflicting physical harm.
We also lack evidence about the potential long-term implications of human-machine interactions.
Our research explored the roles robots should and, even more critically, should not play in care delivery, as well as the stewardship role government should take in shaping this space. To do so, we interviewed 35 policy, health care and academic experts from across Australia and New Zealand.
We found that despite these technologies already being in use in aged care facilities, schools and hospitals, government agencies don’t typically think strategically about their use and often aren’t aware of the risks and potential unintended consequences.
This means the sector is largely being driven by the interests of technology suppliers. In some cases, providers are purchasing these technologies to differentiate themselves in the market, without always engaging in critical analysis of what the technologies actually deliver.
Our study participants identified that robots were “leveraged” as something new and attractive to keep young people interested in learning, or as “a conversation starter” with prospective families exploring aged care providers.
But there are significant risks as the technologies become more developed. Drawing on research in other emerging technologies, our participants raised concerns about addiction and reliance on the robot. What would happen if the robot broke or became obsolete, and who would be responsible if a robot caused harm?
As artificial intelligence develops, robots will develop different levels of capability for “knowing” the humans they care for. This raises concerns about potential hacking and security issues. On the flip side, it raises questions of inequity if different levels of care are available at different price points.
Participants were also concerned about the unintended consequences of robot relationships on human relationships. Families may feel that the robot proxy is sufficient companionship, for instance, and leave their aged relative socially isolated.
What should governments do?
Government has an important role to play by regulating the rapidly developing market.
We suggest a responsive regulatory approach, which relies on the sector to self- and peer-regulate, and to escalate issues to formal regulation as they arise. Such engagement will require education, behaviour change and a variety of regulatory measures that go beyond formal rules.
Government has an important role in helping providers understand the different technologies available and their evidence base. Care providers often struggle to access good evidence about technologies and their effectiveness. As such, they're largely being informed by the market rather than by high-quality evidence.
Many of the stakeholders we spoke to for our research also see a role for government in helping generate an evidence base that’s accessible to providers. This is particularly important where technologies may have been tested, but in a different national context.
Many respondents called for the establishment of industry standards to protect against data and privacy threats, as well as the loss of jobs.
Finally, governments have a responsibility to ensure vulnerable people aren’t exploited or harmed by technologies. And they must also ensure robots don’t replace human care and lead to greater social isolation.
This article was written by Helen Dickinson, an Associate Professor in the Public Service Research Group at UNSW, and Catherine Smith, a Research Fellow at the University of Melbourne. It is republished from The Conversation under a Creative Commons license. Read the original article.
Helen Dickinson is Associate Professor of Public Service Research at the School of Business at the University of New South Wales in Canberra. Her expertise lies in public services, particularly in relation to governance, leadership, commissioning, priority setting and decision-making.
Alan Stevenson
November 11, 2018 at 4:41 pm
Helen Dickinson suggested self- and peer-regulation regarding the use of new technologies. During my travels around Australia I occasionally drop in to aged care facilities in country towns. Nearly all of them have at least one computer available for the residents, usually connected to the internet, yet more often than not few residents use it. I enjoy sharing my knowledge, and my offer of a short talk on computers and the internet is usually met with interest. After explaining the intricacies of the PC and some of the interesting sites on the internet, I answer questions. These questions have indicated that the residents had been afraid of damaging the machine through ignorance. After about an hour, they appear to become more self-confident and eager to use it.
It would appear that the machines were donated, one or two residents were given an introductory course, and the matter was then left there.
In 1999, an inquisitive physicist named Sugata Mitra installed a computer in a slum in New Delhi, India, and then walked away. Local children congregated and began trying to use the unfamiliar device. When Mitra returned a few days later, they had already taught themselves to surf the internet.
In follow-up research, Mitra found that groups of people of varying ages, left to their own devices, can form what he calls a hive mind, working through problems together and coming up with the right answer. This happened without the aid of adults. He asked more and more difficult questions and, in time, got more and more in-depth answers. For this work he received the backing of the TED organisation and is continuing his research. He is not suggesting that teachers are not needed, but that instead of merely disseminating facts, they should teach their pupils to ask questions and search for the answers themselves.
Children are more self-confident than most aged care residents and tend to communicate more with their peers. Maybe if we taught those residents to approach AI, technology and robots in the same way the New Delhi kids approached the computer, we could help them achieve a happier lifestyle. Most aged care facilities I have been in do not appear to encourage peer-to-peer communication, and that is a loss for everyone.