Media and Developing Minds

Emerging Technologies: Children’s Engagement with Intelligent and Interactive Media

October 16, 2018

 
Jakki Bailey, PhD

Moderator

Scott C. and Vickie S. Reeve Endowed Faculty Fellow, Assistant Professor, School of Information, University of Texas at Austin

 
Patricia Greenfield, PhD

Distinguished Professor of Psychology, University of California Los Angeles; Director of Children’s Digital Media Center, Los Angeles

Rachel Severson, PhD

Assistant Professor of Developmental Psychology, Director of The Minds Lab, Director of the Experimental Psychology PhD Program, Affiliate Faculty of Human and Family Development Minor, University of Montana

Mark Mon-Williams, PhD

Chair in Cognitive Psychology, University of Leeds; Professor of Psychology, Bradford Institute of Health Research; Professor of Paediatric Vision, The Norwegian Centre for Vision; Turing Fellow, The Alan Turing Institute

Overview

Dr. Bailey:  Virtual reality, augmented reality, and social robots aren’t new inventions at this point.  They are, however, being integrated into children’s lives in ways, and to a degree, that we haven’t seen before.  What are the implications of this development?

Dr. Mon-Williams: We’re completely failing our children.  Public health isn’t working.  In the English city of Bradford (population about 350,000), a “whole systems” approach is being used to try to solve that problem.  This coordinated effort involves local and central governments, along with education, health, social care, and justice services.

Some epidemiological data illustrate the extent of the challenge.  For example, the diabetes rate in Bradford is more than 10%; in some segments of the community, every other adult has diabetes.  This health crisis threatens to bring the UK’s National Health Service down.  Parallel to its poor health outcomes, Bradford’s educational achievement is also below the national average.  These two factors are highly related (and health literacy and digital literacy begin with literacy itself).

The evidence-based problem-solving effort involves a large-scale longitudinal study.  For four years, every pregnant woman in Bradford was invited to join it, and about 80% did so.  Those mothers were followed through pregnancy and birth.  Now, the study cohort includes approximately 13,500 children whose demographic attributes are representative of the city’s population.  Its size and diversity make it possible to identify the particular determinants of children’s physical and mental health, educational attainment, and social mobility.  The impact of digital technology is an open question, and the answer will in turn depend on ethnicity and socioeconomic position.

One of this study’s objectives is to assess the positive role of immersive technologies in children’s health and education outcomes.  These technologies are going to become ubiquitous, and they will change everything about the way we live, work, and play. Do we observe these changes passively, or do we try to get ahead of the process and steer it in a socially beneficial direction?  There are six ways in which immersive technologies can help children.

The first is improving children’s motor abilities.  Immersive technologies make it possible to “gamify” both habilitation and rehabilitation.  Examples include VR and robotic system training for children with cerebral palsy, and classroom technologies to help kids struggling with handwriting.  

The second is removing cultural inequalities.  Children in a low socioeconomic position don’t go to museums.  As part of the Bradford study, all of the region’s museums are working to provide all 206 local schools with virtual access to the cultural knowledge formerly accessible only to visitors.

Third, these new technologies are empowering teachers to identify children with motor and cognitive issues.  The less time teachers have to wait for an educational psychologist to get involved, the sooner they can provide intervention and support.

Fourth, immersive technologies can be used to cultivate children’s empathy.  To explore this possibility, children in Bradford and Saudi Arabia are now being given virtual access to each other.  

Fifth, virtual reality systems make it possible to avoid some of the known physical risks associated with screen use.  Staring at screens puts pressure on the visual system.  It also creates the kinds of musculoskeletal problems that account for a substantial number of lost workdays.  VR systems pose risks for developing visual systems, too, but knowing about such issues early in the adoption cycle makes timely improvement possible.

Finally, just as immersive technologies make healthier interactions with digital devices possible, they also create opportunities for more naturalistic interactions with digital content.

Dr. Severson:  Social robot technology is being adopted rapidly.  In the first quarter of 2017, about 7% of US homes had an Amazon Alexa, Google Home, or similar digital home assistant.  By the first quarter of 2018, such devices were present in 27% of US homes, and the estimate for year-end 2018 is over 50%.  As scientists and as a society, we need to think critically about the implications of having such personified technologies in children’s lives.

First, how do children think about these interactive personified technologies?  Children think about social robots and other personified technologies in an entirely different way from adults – as ‘sort of’ alive – a view referred to as the “new ontological category” hypothesis.  That is, children know that these devices are technological rather than biological entities, but still think of them as having emotions, thoughts, the capacity for friendship, and a right to moral treatment.  This category of being, between animate and inanimate, is applied by children from preschool age through late childhood and, to some extent, into early adolescence.

The resulting attributions guide children’s actions and judgements.  For example, a child lets a dinosaur robot win a game of tug of war.  Why?  Because the child views the robot as having feelings – not human or animal feelings, but programmed feelings that still require accommodation.  This was a common perception among children in the study, across age groups (5, 7, and 9 years): robots need to be treated in a moral manner.  In this way, although children understand the structural differences between personified technologies and humans or animals, they nevertheless see them as functionally similar.  That is, a robot has programmed feelings, but they are feelings nonetheless, and it is therefore not right to hurt those feelings.  We don’t yet know whether this is a view that children will outgrow.

Second, context matters.  Lab studies by Brian Scassellati (Yale), Rachael Burns (Max Planck), Kerstin Dautenhahn (Hertfordshire), and others suggest that personified technologies may be particularly useful for interventions with children on the autism spectrum.  Cynthia Breazeal’s group (MIT) has evidence that children consider robots to be knowledgeable and credible sources of new information.  This suggests uses in formal and informal learning environments such as schools and museums.  The flip side is that these technologies may be problematic in other contexts.  For example, will children treat social robots as confidants, or turn to them for guidance in social and moral deliberations?  Questions also exist about deploying electronic devices as virtual friends.  Even high-fidelity mimicry of human-to-human interaction is not necessarily good, in part because it lacks authenticity.  It’s appropriate to proceed with caution.

Dr. Greenfield:  Children and adolescents treat robots and virtual agents as virtual human beings (with rights, feelings, and conversational competence).  Youngsters also distinguish the conventional from the moral based on an agent’s responses, whether the device in question is a three-dimensional, embodied robot or a purely virtual character.  This raises the question of moral education.

Adults might be able to use these technologies to teach children about right and wrong, and about respecting others’ feelings.  The underlying assumption, that such teaching would transfer to the real world, is testable. Adults might instead choose to teach children that these devices are not “real” and therefore, have no moral or emotional standing.  Could this kind of lesson also transfer to the real world, causing kids to stop thinking of other humans in moral and emotional terms? This, too, requires study. Other topics for further research include how differently autistic children respond to humans’ and robots’ social behaviors, and how children’s play with robots and similar devices differs from traditional pretend play.  

It’s concerning that children might start preferring robots to real people, and that parents might start using robots as babysitters.  This could displace parent-child interactions and the bonding that they produce, and lead to the further degeneration of social life and social development.  Parents already are noticing that questions that children used to ask adults (for example, about homework) are now being directed to virtual assistants. This is making children more independent, but that doesn’t answer the question, “Is this good or bad?”

Another interesting socialization issue is, what is the developmental effect of a 4-year-old bossing around a virtual assistant?  Some parents have observed that these devices are making children more curious, but far less polite.

These influences are accelerating existing historical trends toward increasingly individualistic child behavior.  Children may be feeling empowered by their command of Alexa, and frustrated when real people are less compliant.  What happens when two children who’ve grown up playing with Alexa play with each other, and each expects the other to be the compliant playmate?

Other research questions include: How will childhood interactions with virtual assistants influence children’s communication styles?  Will they favor the kinds of simple questions that their devices handle best, and the simple language that those devices employ?  How will these digitally assisted children learn interpersonal skills?  Will the “smart home” services that some devices provide produce physically lazy children?  Why get up when Alexa will turn off that light for you?

Questions and Answers

Audience Question: What differences were observed between how children played with personified social robots and how they played with stuffed animals?

Dr. Severson: Play with stuffed animals involved a lot of pretend play (such as endowing them with animation).  Children also employed many of the little props available to them when they played with stuffed animals.  In contrast, play with robots principally involved “calls for reciprocity” (that is, efforts to induce the robot to respond).

 

Audience Question (Dr. Gentile): Based on studies involving 18-month-olds, Felix Warneken argues that compassion, empathy, and prosocial behavior start very young.  Does the work discussed (about children’s attribution of moral and emotional attributes to electronic devices) support a similar argument?

Dr. Severson: Children do tend to think about these entities empathetically.  Kids have big hearts, and they can include robots in that.  Among kids aged 9 to 12 – judged against the more stringent criteria for moral standing drawn from social domain theory – half meet all of those criteria for attributing moral standing to robots.

 

Audience Question: What about parasocial relationships?  Is it simply natural to treat these objects as living entities – part of who we are as people, and of why we have imaginations and create in this way?

Dr. Greenfield: Think of it as a model for how to treat humans.  The reality-testing approach – telling children the device isn’t real – could have negative effects if it transfers to how they treat humans.

 

Audience Question (Dr. Bailey): Compared to prior technologies, these immersive ones have different affordances.  VR blocks out the physical world and creates a shared space; you’re in the content.  Augmented reality imposes an overlay on the physical world, allowing children to interact with both the physical and the digital.  Robots are physically embodied, but digital assistants present as disembodied voices.  What particular opportunities and risks are associated with these differences?

Dr. Mon-Williams: We are giving Bradford children who’ve never been to the seaside a virtual version of that experience, so that they can acquire the language they need to write essays about it.  This raises a moral and ethical issue, though.  Is a virtual substitute for the actual experience sufficient?  No.  We really should be taking the children to the seaside.  We need to think critically about how we use this tool.

 

Audience Question: There is anecdotal evidence from clinical practice that children with a failed or very dysfunctional primary attachment with their caregivers – for example, children in foster care – are susceptible to unhealthy attachments to these devices.  Would including parents (and looking closely at those primary attachments) in these comparative play studies (conventional stuffed animals and robots) tell us more about which kids are likely to benefit from personified social robots? Also, how do we get virtual handwriting instruction – motor system training, motor planning, and motor execution – to transfer to the actual physical task?

Dr. Mon-Williams: The current UK standard for addressing a child’s handwriting difficulties is a referral to hospital-based occupational therapy, for which there’s about a five-year waiting list.  At the end of that wait, the child gets a half-hour of help every four weeks.  It’s just not working.  The immersive technology alternative is a robotic system that gives a gentle, motherly push back toward the correct movements whenever the child starts making a large error.  The robot isn’t doing the work; it’s steering the child toward doing the work – active learning.  The robot constrains the child’s errors, but the amount of help decreases as the child’s performance improves.  The technology allows this form of occupational therapy to be turned into a game, and to be scaled so that human therapists’ time can be allocated to children with more extreme, complex needs.

 

Audience Question: Commercial manufacturers of these personified technology devices will be financially motivated to embed advertising and to induce certain behaviors that may or may not be in the young users’ interests.  Early examples from brand-centric children’s software (Mattel and Disney) suggest as much.  Who will protect the children?  And does the benefit of playing with a robot (as opposed to playing creatively) outweigh the harm from embedded marketing?

Dr. Mon-Williams: This is the kind of protection that society needs to provide to the next generation.  That may mean that central or local governments need to produce guidelines and laws.

Dr. Severson: That’s a huge concern (although there are some non-commercial alternatives in use now, such as Cynthia Breazeal’s Jibo).  Children aren’t necessarily the end users the device designers have in mind.  That’s how the politeness issue came up with Alexa (for which there now are politeness plug-ins).  Now that kids’ uses are becoming known, the manufacturers are thinking about how to make more money off of that.
