
Modern Methods in the Study of Media Violence

October 16, 2018

 
Moderator

Douglas Gentile, PhD

Professor of Developmental Psychology, Director of the Media Research Laboratory, Iowa State University

 
Craig Anderson, PhD

Distinguished Professor of Psychology, Director of the Center for Study of Violence, Iowa State University

Sandra Calvert, PhD

Developmental and Child Psychologist; Professor of Psychology, Director of the Children’s Digital Media Center, Affiliated Faculty Member at the McCourt School of Public Policy, Georgetown University

Tom Hummer, PhD

Assistant Research Professor of Psychiatry, Indiana University School of Medicine

Overview

Dr. Anderson: There is no real replication crisis, and little media violence controversy, among people who really know the media violence research literature.  There are methodological, statistical, conceptual, and interpretational reasons why some studies underestimate effects or produce non-significant results. Many of these poor-quality studies are done in good faith by people who simply are not aware of the errors they are making. There is also a growing constituency within the field of media effects that commits these errors intentionally, to serve a “denialist” agenda.  Scholars who seek to produce valid results need to understand these error types and avoid them.

In media violence (MV) research, there are six principal mistakes being made in the design of randomized, controlled studies (i.e., experiments).  These are:

  • Inadequate sample size (power analyses based on meta-analytic effect sizes recommend at least 100 participants per condition for main effects, and more if interactions are hypothesized; see the sketch after this list)
  • Inappropriate comparison conditions (failing to provide truly non-violent stimuli to compare with violent ones)
  • Lack of appropriate cover story (to frame the study experience in a neutral way)
  • Lack of suspicion check (to confirm the plausibility of the cover story)
  • Weak dependent variables (for example, using a single-item dependent variable, or a dependent variable that isn’t relevant to most of the study participants)
  • Inappropriate dependent variables (trait measures in a short-term experiment)  
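
A minimal sketch of the kind of power analysis recommended above, in Python with statsmodels. The meta-analytic correlation (r ≈ .20) and the 80% power target are illustrative assumptions, not figures from the talk:

```python
import math
from statsmodels.stats.power import TTestIndPower

# Assumed meta-analytic correlation for a media violence main effect
r = 0.20
d = 2 * r / math.sqrt(1 - r ** 2)  # convert r to Cohen's d (~0.41)

# Per-condition n needed to detect d at 80% power, two-tailed alpha = .05
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
print(f"Required participants per condition: {math.ceil(n_per_group)}")  # ~96
```

For an effect of this assumed size, the answer lands just under 100 participants per condition, consistent with the recommendation above.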

There is a similar list of mistakes that undermine cross-sectional and longitudinal media violence effect studies.  It includes:

  • Inadequate sample size
  • Overcontrol (e.g., inappropriate “control” variables, such as controlling for trait aggression when aggression is the dependent variable)
  • Lack of appropriate cover story
  • Lack of suspicion check
  • Weak measure of media violence exposure (for example, measuring “time on media” rather than “time on media violence”, using too few items, or covering too short a time period)
  • Weak measure of dependent variables (for example, using a state measure when a trait measure is better)
  • Inappropriate time lags between longitudinal assessments

Five kinds of statistical errors also are undermining media violence effects research:

  • Mindless formulaic analysis (including short-sighted preregistration)
  • Failure to check for outliers and non-normal distributions (see the sketch after this list)
  • Overcontrol by use of inappropriate covariates (such as controlling for one “trait aggression” measure when another “trait aggression” variable is the dependent variable)
  • Overcontrolling for what are, conceptually, mediating variables
  • Undercontrolling (by lack of appropriate covariates)
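
A minimal sketch of the outlier and distribution checks named above, in Python with numpy and scipy. The data are simulated stand-ins for a hypothetical aggression measure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = rng.normal(loc=3.0, scale=1.0, size=120)  # hypothetical aggression scores

# Flag potential outliers; |z| > 3 is a common (but arbitrary) cutoff
z = np.abs(stats.zscore(scores))
print("Potential outliers at indices:", np.where(z > 3)[0])

# Shapiro-Wilk test for non-normality before using normal-theory statistics
w, p = stats.shapiro(scores)
print(f"Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")
```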

There are also six main ways in which conceptual and interpretational errors compromise the findings of research in this field.  They are:

  • Misunderstanding the definition of media violence (e.g., MV doesn’t require blood and gore)
  • Confusing theoretically distinct concepts with each other (for example, aggression, aggressive cognition, and aggressive feelings)
  • Misunderstanding the role of correlational studies in studying causal questions (for example, failing to test for plausible alternative explanations, or failing to recognize that correlations are only one part of a triangulation process)
  • Ecological fallacies (jumping from inferences about groups to inferences about individual-level processes)
  • Using theoretically inappropriate populations
  • Interpreting a lack of correlation as proof of a lack of causation

The mistakes described above are avoidable.  Bear them in mind when designing your studies and interpreting your findings, and you will be more likely to produce accurate, useful information.

Dr. Calvert: The American Psychological Association (“APA”) convened a task force to update knowledge and policy regarding the relationship between violent video game use and adverse outcomes.  The task force applied a number of different filters and screening methods, and a long list of inclusion criteria and key variables to find and sort relevant scholarship in this field since the 2005 APA report on media violence.   These were analyzed using both systematic evidentiary review and meta-analysis.

The APA task force’s key findings were:

  1. Exposure to violent video games is associated with increased aggressive behavior, thoughts, and feelings, and decreased prosocial behavior, empathy, and sensitivity to aggression.  The findings on this are robust across methodologies.
  2. Exposure to violent video games is a risk factor for aggressive outcomes.  The magnitude of this risk is the same for adolescents, college students, and young adults.  There is not enough research to confirm that the same is true for children under ten.
  3. There is not enough published evidence to reach a conclusion about whether or not such exposure is also linked to more extreme outcomes, such as delinquency, criminal behavior, or other violent outcomes.  More research is needed in this area.
  4. There are major gaps in the literature regarding gender, ethnicity, and socioeconomic status; dose response relation effects; user motivations; and video game properties (such as first-person versus third-person player perspectives, and cooperation versus competition).
  5. Everyone – children and adults – needs to be better educated about this topic.
  6. Interventions are needed to help decrease the impact of violent video game exposure on children.
  7. The Entertainment Software Rating Board’s system for rating games requires refinement.

Dr. Hummer: The need for MRI subjects to lie still in a dark tube imposes some limitations on how naturalistic such imaging studies can be.  It is not impossible to image the brain of someone playing a video game in a realistic setting, but it is hard. That means that such imaging can only be part of the process of explaining what happens inside a gamer’s brain.

Electroencephalography overcomes some of MRI’s limitations.  Subjects don’t need to stay still, and EEG has better temporal resolution than fMRI.  That makes it better at distinguishing between early primary responses to stimuli and secondary, top-down effects (which an fMRI could not discern).  EEG also offers a direct measure of brain activity, as opposed to the measures of blood flow and oxygen metabolism that fMRI provides. On the downside, EEG has lower spatial resolution.  EEG can be performed outside of lab settings, but the “noise” associated with such testing in uncontrolled settings invites skepticism.

Traditional fMRI measures differences in brain activity associated with different conditions.  When this tool is used to perform simple comparisons, the data it generates is easy to interpret.  Simple designs are unrealistic, however. The more complicated the behavior or experience being measured, the more difficult it is to interpret an fMRI of the brain involved.  For this reason, fMRI is a better way of imaging the brain’s reaction to general or cumulative effects rather than specific moments (such as the differences between violent and non-violent incidents within a video game).  

Even when we see changes in the brain’s energy use, we still need to explain what is causing those changes.  For example, is the underlying mechanism attention, effort, or the duration of a particular activity?  It is hard to parse activity that unfolds within 100-200 milliseconds. It’s also relevant to ask whether more brain activity is better or worse than less in any particular context.  Is more energy use a sign of higher engagement? Is less energy use a sign of greater efficiency? Answering these questions requires correlating research imaging with clinical and behavioral information from outside the lab.

Many recent studies go beyond the activation of brain regions, and look at the brain as a dynamic system instead.  The seed-based method (which focuses on particular regions of interest within the brain) and the more data-driven Independent Component Analysis are two examples.  These are better suited to more naturalistic viewing designs. Looking at which networks are involved across the entire duration of a viewing experience (comparing the brain under different conditions, as opposed to during different momentary events) reveals correlations within subjects.
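
A minimal sketch of these two approaches, using Python’s nilearn library. The file name, seed coordinate, and parameter values are hypothetical placeholders, not details from the talk:

```python
import numpy as np
from nilearn.maskers import NiftiMasker, NiftiSpheresMasker
from nilearn.decomposition import CanICA

func_img = "subject_rest.nii.gz"  # hypothetical 4D fMRI run

# Seed-based method: correlate one region of interest with every voxel
seed_ts = NiftiSpheresMasker(seeds=[(0, -52, 18)], radius=8,
                             standardize=True).fit_transform(func_img)
brain_ts = NiftiMasker(standardize=True).fit_transform(func_img)
seed_corr = brain_ts.T @ seed_ts / seed_ts.shape[0]  # voxelwise correlation map

# Independent Component Analysis: data-driven decomposition into networks
canica = CanICA(n_components=20, random_state=0)
canica.fit([func_img])
network_maps = canica.components_img_  # one spatial map per component
```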

Brain structure is another study topic.  An understanding of brain development across adolescence is a prerequisite for interpreting structural data about adolescent brains.  For example, brain size peaks around age 10-11, and gray matter thins out after that. As for white matter, available measurement techniques include Diffusion Tensor Imaging and Voxel-Based Morphometry.  These kinds of images have shown a negative association between TV violence exposure and white matter.  That association must be viewed in the context of two facts about Attention Deficit Hyperactivity Disorder.  First, many studies have found it to be a disorder of neurodevelopmental delay. Second, clinical populations, such as those with ADHD, may be associated with greater media violence exposure.

Connectome imaging defines connectivity among all brain regions to form a network of hubs, modules, and subnetworks.  This technique has not been used much to study media exposure, but it has promise.

Imaging cannot tell the whole story.  It must be combined with other methods, measures, and models, and interpreted in the context of what we know about brain development across childhood and adolescence.  In this research area as in others, we need clear, testable models. We also need more neuroimaging of clinical populations.

Dr. Gentile: Wherever the research and whatever the methodology, there is strong evidence that exposure to violent video game content is associated with some aspect of aggression.  Even critics of that conclusion usually find the same effect sizes that its proponents do. They just interpret them differently.

The core of the methodological debate is theory.  Even Christopher Ferguson, PhD, the strongest critic of the violent video game literature – who denies any effect of violent video game play on aggressive behavior – acknowledges that such content has an effect on aggressive cognition and feelings.  The idea that something that influences thoughts and feelings can’t affect behavior contradicts a basic tenet of psychology. The connection may not be simple, linear, or mechanistic, but there is almost no theory that would deny its existence.  Learning theory says as much. You can’t practice something over and over again and be unaffected by that experience. In this light, we should acknowledge valid criticism, but we should also be critical of the critics. We need to ask if there is any reasonable theory under which their analyses and conclusions make sense.

Questions and Answers

Audience Question: Can you describe the magnitude of the effect of violent video game exposure, in terms of its relationship to other risk factors for aggressive behavior?

Dr. Hummer: An effect size of 0.15 is not uncommon in media violence research studies.  By older standards, a 0.2 correlation would be a small effect. In psychology, the average effect size is about 0.25.  By comparison, the large-scale study of low-dose aspirin (versus a placebo) after a first heart attack was suspended after two years because the benefit to the treatment group was so clear, and yet that correlation was just 0.04.  Note that, due to intentional overcontrol (a consequence of conservative study design), violent media effects research probably underestimates correlations.

Dr. Calvert: Controlling for publication bias in these studies also reduces the effect size.
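
One way to make correlations of this size concrete is Rosenthal and Rubin’s binomial effect size display (BESD), which re-expresses an r as a difference in outcome rates between two groups. A minimal sketch in Python, applied to the values quoted above:

```python
# Binomial effect size display: r maps to outcome rates of 50% +/- r/2
for r in (0.04, 0.15, 0.25):
    low, high = 0.5 - r / 2, 0.5 + r / 2
    print(f"r = {r:.2f} -> outcome rates of {low:.1%} vs. {high:.1%}")
```

On this display, even the aspirin study’s correlation of 0.04 separates outcome rates of 48% and 52%.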

Audience Question: The APA study seemed useful for helping parents understand the impact of media, and for providing a research-based foundation for guiding their children’s media consumption.  Widely reported attacks on that study, by Chris Ferguson, are dismaying. What are his basic criticisms? And isn’t his position as chair of a relevant division of the APA undermining the organization’s work in this area?

Dr. Calvert: One of Dr. Ferguson’s criticisms is that the whole field is biased.  He attempted to overturn the APA task force report, but there was no evidence to support his view and he failed.

Audience Question: Since the debate is largely thrown off by competing definitions and improperly matched control and experimental groups, what constitutes a violent video game?

Dr. Calvert: The published study used the industry’s own ESRB ratings.  That’s why we asked for a better rating system from an objective third party.

Dr. Anderson: Something like 80% to 90% of games rated for everyone, or for everyone over age 10, contain characters harming each other.  That’s inconsistent with the standard in media effects research, in which violence generally has been understood for the last six decades to mean “characters harming each other”.  Violence doesn’t have to be bloody or gory or involve screaming. Such imagery almost certainly increases physiological desensitization to more of the same, but we don’t know whether it increases the effect that killing or harming (within a game) has on aggressive behavior.  Teasing apart these variables, and looking at long- versus short-term effects, would take an impossibly large and complicated study.

Audience Question (Jeff R. Temple, PhD): After the Santa Fe high school shooting, Texas legislators tried to frame violent video games and mental health as the causes of such incidents. What is the recommendation for policy makers, when the clear culprit is firearms?

Dr. Calvert: Violent video game use is a factor.  Another major risk factor is access to guns.  Most schools are accessible to people with semi-automatic weapons.  This doesn’t diminish the role of violent video game use as a risk factor, but it is something to consider in light of our cultural heritage and the right to bear arms.

Audience Question (Dimitri Christakis, MD, MPH): From a public health perspective, the effect sizes that we’re discussing are large at a population level.  They translate to tens or hundreds of thousands of kids having aggressive thoughts and feelings and maybe taking aggressive actions.  From a clinical perspective, parents want to know whether playing violent video games is going to make their children aggressive, or even a bully.  Doing more studies that either do or don’t replicate prior work, arguing about methodological weaknesses, doing another meta-analysis or launching another task force – none of these seem like the way forward.  The effect sizes probably misrepresent individual children’s risks, and we know that some children are more susceptible than others. We should be creating reliable clinical tools with which doctors can identify kids at particular risk.  This idea of differential susceptibility means that some children will need this kind of help even if the next task force finds no effect. Why aren’t we taking a more personalized medicine approach?

Dr. Calvert: Even if I don’t go out and kill people, there probably is a better use of my time than to be playing violent video games.  All developmental theory tells us that everything we do, in every moment, has an impact on who we are. Parents should think seriously about what it is that their children, and their families, are doing with their moments.

Dr. Anderson: We’re getting correlations of around 0.2 from longitudinal and cross-sectional studies, after controlling for several things, but looking at the data in different ways can produce much higher correlations.  Experimental research supports this concern. Take more risk factors into account, and extrapolate to the small number of extreme effects, and a child with high exposure to violent games could become 20% or 30% more likely to have a fight during the school year. As for the notion that some populations are more or less vulnerable to the effects of violent video games than others, the data don’t show that.  For kids who have a lot of risk factors, their risk goes up if you add media violence to their lives. Among kids who have no other risk factors for aggressive behavior, adding high media violence still raises their risk a little bit.
