
The Future of Childhood and Technology
October 15, 2018

Justine Cassell, PhD
Associate Dean of Technology Strategy and Impact, School of Computer Science, Carnegie Mellon University; Co-Director of the Yahoo-CMU InMind Partnership on the Future of Personal Assistants
Overview
Criticizing an Anachronistic Paradigm. The current discussion about technology’s influence on children bears a frustrating resemblance to prior generations’ assessments of innovations like television, trains, the telegraph, and the printing press. A blanket defense of technology now would be as facile as blanket condemnations of the past. What’s needed instead is a way of “parameterizing” or thinking through what it is we’re doing – not whether it’s good or bad but how we think about it. The search for that begins at the origin of some of today’s dominant "metaphors of technology."
An Old Paradigm Rooted in Inaccurate Assumptions. J.C.R. Licklider was an influential psychoacoustician, computer scientist, and human factors expert whose career spanned the early decades of the computer era. In his 1960 paper “Man-Computer Symbiosis”, Licklider wrote, “The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today." This established the dominant conception of computing as something done by a middle-aged white man, sitting alone at a computer, thinking through a math problem.
We need to replace that 1960s-style paradigm with something more realistic (with respect to populations, processes, venues, form factors, purposes, and social contexts) and more useful. What are we assuming about the race, age, and gender of users? About the physical character of the computing experience (which today may look like playing with fabric or toys or the cloud rather than a seat at a computer)? About whether we are thinking (as Licklider might have assumed), or feeling, sensing, or moralizing? Are we just doing math with our technologies, or are we telling stories, creating relationships, changing ourselves, or improving the world? Are we acting alone? In pairs? As a society, or as a collection of neurons?
The extent and importance of these questions highlight the need for a new model of technology assessment – a “new parameterization”. It must reflect our growing knowledge of today’s children and today’s technology. In other words, we must liberate ourselves from historical presuppositions about what a computer is, what role a screen plays, by whom such devices are used, for what purposes, and in what social settings. Among other things, task-based technology assessment is insufficient.
Technology and Education; Informing vs. Evoking. In this conceptual context, the question of whether emerging technologies are “the best thing to happen to education” demands reconsideration. It “… comes from a white man or a white boy sitting alone at a computer working on a math problem,” and reflects “… an unexamined, extremely old-fashioned notion of what learning is, that we are empty vessels – tabulae rasae – into which knowledge is poured.”
“We know that’s not the case, and yet somehow, when we adopt new tools we turn back to very old models of pedagogy. So we know, for example, that education is far more likely to come from curiosity, from argument, from interaction, from mental or cognitive buttings of heads.” Can we use screens to evoke, rather than to teach?
This matters because curiosity is among the “socioemotional skills that are so predictive of learning.” Assuming that coming face-to-face with the environment and with others is the origin of curiosity, “… what role do those others play and what part could a screen play in the brain regarding curiosity in a social context?”
Technology and Social Learning. More broadly, how can we use technology to draw out our most positive characteristics? That question has informed observational research on children in tech-free settings (in collaboration with Dr. Jessica Hammer). The study team created an interactive game – a “virtual child” – that induced the children playing with it to enumerate hypotheses and deduce evidence. The virtual child mined data from its young playmates during these interactions. This revealed that, for any particular child “A”, that child’s own actions were less predictive of his or her curiosity than were the actions of the other children in the environment.
This is a new finding. No current technology is capitalizing on it. It supports the view that asking whether computers can teach material is beside the point. The more relevant question is, can computers teach skills that better predict lifelong learning, such as the love of learning and the desire for discovery?
More on Implicit Race, Gender, and Wealth-Based Biases in Media Effects Research. Better study design is needed to refine our questions and obtain more useful answers. For example, in the child development literature, what was considered “typical” language development for children was actually typical for the middle-class white children being studied. Technologies are being created with non-diverse user populations and solitary uses in mind, but they are being used by diverse populations in social settings. Enrolling more diverse populations in media effects research, and studying more social (as opposed to solitary) activities and processes, will help.
The Importance of Small Data and the Limitations of Big Data. Today’s reliance on big data contrasts with work that Dr. Cassell and others did 20 years ago. In a study with no scientific controls, they gave internet access to approximately 3,000 children from 139 countries, and followed up in 21 countries five years later to explore concepts of self-efficacy, agentivity, and self-esteem. Facebook dismissed the data from this small sample at the time, noting that it collected that much data every second. There is a role for data from small samples, however, even in a “big data” world. “Our behaviors – the behaviors of ours that are interesting – are not common, and deep learning algorithms are not ready for that yet.”
Technology and Identity. Although “identity formation is probably what we spend the most time working on as we grow … very little technology is intentionally built to support the work of identity formation.” This points to a conundrum in youth media; namely, that parents are hostile to technologies useful for identity formation (such as Instagram) precisely because they allow children to experiment with alternate versions of themselves.
Work by Samantha Finkelstein (while Dr. Cassell’s advisee) illustrates technology’s potential role in identity formation. She investigated the instructive capabilities of a “virtual child”. One version of the technology spoke to children only in Mainstream Classroom English. The other first brainstormed with children in African-American Vernacular English for three minutes, then switched to Mainstream Classroom English for their class presentations. Those three minutes, and the sense of affiliation that speaking in a familiar style produced, were associated with higher use of science discourse. This demonstrates how, when it comes to assessing technology, “… the population is as important as the task.”
Beyond Screens. Technologies that transcend the traditional screen, such as augmented reality and virtual reality, also challenge the way that researchers, social critics, and others approach digital media. Examples include interactive books, augmented quilts, construction toys, stuffed animals, and the digital assistants populating our homes.
In a study by Cassell and others, children presented with computers in unconventional, toy-like forms attributed identities to them. That is, “Children themselves decided who they were talking to.” This supports the assertion that learning happens in the context of encounters – confrontations – between different understandings of things.
This view, and evidence that certain conflict behaviors among children contribute to learning, invites a question. “[W]hy are we looking at the future of technology as being things that fill up our children rather than challenge them and listen to them?”
Conclusion. Prominent thinkers disagree about whether building relationships with objects is bad for people. One view is that doing so can allow children to develop and expand their capacity to love. Research with technology and children with autism spectrum disorders suggests as much. Those children, who ordinarily struggle to form social bonds, were better able to do so when technology reduced the immediacy and difficulty that they experienced when interacting with other children.
“You can believe that these technologies are the end of make-believe, or you can believe that they’re mirrors that we hold up to ourselves and to our representatives that we may not be very happy about. Are they, in fact, the sign of a kind of moral panic about who we have become? And for that reason, we're truly scared.”
This has important implications for what we do about emerging technologies. Equal AI, a nonprofit with which Dr. Cassell is affiliated, seeks to identify and reduce the infiltration of unintentional biases into learning tools in artificial intelligence. Among these are distortions that arise from the nature of the data used by AIs, the representations embedded in them, and even the hiring practices of the organizations behind them.