Monday, July 19, 1999
PERSPECTIVE ON PSYCHOLOGY
Copyright 1999 Los Angeles Times. All Rights Reserved

Uproar Over Sexual Abuse Study Muddies the Waters
Suppressing credible but unpopular scientific findings won't reduce the
number of incidents.
By CAROL TAVRIS

I guess I should be reassured to know that Congress
disapproves of pedophilia and the sexual abuse of children. On
July 12, the House voted unanimously to denounce a study
that the resolution's sponsor, Matt Salmon (R-Ariz.), called "the
emancipation proclamation of pedophiles." In a stunning display of
scientific illiteracy and moral posturing, Congress misunderstood the
message, so it condemned the messenger.

What got Congress riled was an article last year in the journal
Psychological Bulletin, which is to behavioral science what the
Journal of the American Medical Assn. is to medicine. Articles must
pass rigorous peer review, during which they are scrutinized for their
methods, statistics and conclusions. The authors of the article--Bruce
Rind, Philip Tromovitch and Robert Bauserman--statistically analyzed
59 studies, involving more than 37,000 men and women, on the
effects of childhood sexual abuse on college students. (A previous
paper reviewed studies of more than 12,000 adults in the general
population.)

The findings, reported with meticulous detail and caution, are
astonishing. The researchers found no overall link between childhood
sexual abuse and later emotional disorders or unusual psychological
problems in adulthood. Of course, some experiences, such as rape by
a father, are more devastating than others, such as seeing a flasher in
an alley. But the children most harmed by sexual abuse are those
from terrible family environments, where abuse is one of many awful
things they have to endure.

Perhaps the researchers' most inflammatory finding, however,
was that not all experiences of child-adult sexual contact have equal
emotional consequences, nor can they all be lumped together as "abuse."
Being molested at the age of 5 is not comparable to choosing to have
sex at 15. Indeed, the researchers found that two-thirds of males
who, as children or teenagers, had had sexual experiences with adults
did not react negatively.

Shouldn't this be good news? Shouldn't we be glad to know which
experiences are in fact traumatic for children, and which are not
upsetting to them? Shouldn't we be pleased to get more evidence of
the heartening resilience of children? And "more" evidence it is, for
abundant research now shows that most people, over time, cope
successfully with adversity--even war. Many not only survive, but
find meaning and strength in the experience, discovering
psychological resources they did not know they had.

But the fact that many people survive life's losses and cruelties is
surely no endorsement of child abuse, rape or war. A criminal act is
still a criminal act, even if the victim eventually recovers. If I get over
having been mugged, it's still illegal for someone to mug me, and if I
recover from rape, my recovery should offer no mercy for rapists. If
a child eventually recovers from molestation by an adult, pedophilia is
still illegal and wrong. Moreover, the fact that many people recover
on their own says nothing about the importance of promoting
interventions that help those who cannot.

The article by Rind and his colleagues, however, has upset two
powerful constituencies: religious fundamentalists and other
conservatives who think this research endorses pedophilia and
homosexuality, and psychotherapists who believe that all sexual
experiences in childhood inevitably cause lifelong psychological harm.
These groups learned about the research last December, when the
National Assn. for the Research and Therapy of Homosexuality, or
NARTH, posted an attack on the paper on its Web site.

NARTH endorses the long-discredited psychoanalytic notion that
homosexuality is a mental disorder and that it is a result of seduction
in childhood by an adult. Thus NARTH was exercised by the study's
findings that most boys are not traumatized for life by experiences
with older men (or women) and that these experiences do not "turn
them" into homosexuals.

NARTH's indictment of the article was picked up by right-wing
magazines, organizations and radio talk-show hosts, notably Laura
Schlessinger. They in turn contacted allies in Congress, and soon the
study was being used as evidence of the liberal agenda to put a
pedophile in every home, promote homosexuality and undermine
"family values."

The conservatives found further support from a group of clinicians
who still maintain that childhood sexual abuse causes "multiple
personality disorder" and "repressed memories." These ideas have
been as discredited by research as the belief that homosexuality is a
mental illness or a chosen "lifestyle," but their promulgators cannot let
them go. These clinicians want to kill the Rind study because they
fear that it will be used to support malpractice claims against their
fellow therapists. And, like their right-wing allies, they claim the
article will be used to protect pedophiles in court.

But all scientific research, on any subject, can be used wisely or
stupidly. For clinicians to use the "exoneration of pedophiles"
argument to try to suppress this article's important findings, and to
smear the article's authors by impugning their scholarship and
motives, is particularly reprehensible. They should know better. The
Bible can be used wisely or stupidly, too.

And so the American Psychological Assn. (the journal's publisher)
has been under constant attack by the Christian Coalition, Republican
members of Congress, panicked citizens, radio talk-show hosts and a
consortium of clinicians that reads like a "who's who" in the multiple
personality disorder and repressed-memories business. The APA has
responded that future articles on sensitive subjects will be more
carefully considered for their "public policy implications" and that the
article would be re-reviewed by independent scholars. It assured
Congress that "the sexual abuse of children is a criminal act that is
reprehensible in any context."

These placatory gestures are understandable given the ferocity of
the attacks. But the APA missed its chance to educate the public and
Congress about the scientific method, the purpose of peer review and
the absolute necessity of protecting the right of its scientists to publish
unpopular findings. Researchers cannot function if they have to
censor themselves according to potential public outcry or are silenced
by social pressure, harassment or political posturing from those who
misunderstand or disapprove of their results.

On emotionally sensitive topics such as sex, children and trauma,
we need all the clear-headed information we can get. We need to
understand what makes most people resilient, and how to help those
who are not. We need to understand a lot more about sexuality,
including children's sexuality. Congress and clinicians may feel a
spasm of righteousness by condemning scientific findings they dislike.
But their actions will do no more to reduce the actual abuse of
children than posting the Ten Commandments in schools will improve
children's morality.
- - -

Carol Tavris is a social psychologist who writes frequently on
behavioral research.
 

The Authors Respond


The following message is from Bruce Rind. His email address is:
RIND@VM.TEMPLE.EDU

Mike,

I wanted to forward to you some of my responses to Ray Fowler. After the May 12 Family
Research Council press conference, he emailed me that congressmen were using two main
methodological criticisms to attack the study. Paul J. Fink, he told me, sent these criticisms
to Dr. Laura in a letter. She apparently relayed them to the congressmen.  The first criticism
is that 60% of the data in our meta-analysis came from a single study done 40 years ago
that was flawed, making our paper flawed. The second is that about 38% of the studies
we included were "unpublished," which, they claim, invalidates the whole study. Now
that Congress has condemned our study and the APA has basically given its blessing
to this (and is congratulated in the resolution for reversing course), I think it is important
that fellow researchers be aware of the outrageous invalidity of these two methodological
criticisms put out by Fink and his colleagues at the "leadership council." Fink in interviews
for the Philadelphia Inquirer has called our study "perverse" and "terrible"; David Spiegel,
his collaborator, said in a New York Times interview that our study had serious methodological flaws
and that we "used meta-analysis the way a drunk uses a lamppost--for support, rather than
illumination." The "60%" and "unpublished" arguments are central to their attacks. You may
put our refutations--and I mean refutations, not merely answers--on your list server.

We are interested in hearing from other researchers about their reactions to the information we provide below and any other comments they might have regarding the methodology of our review.

Here are the refutations that we sent to Fowler two months ago in May:
______________________________________________________

The following 15 lines (numbered) are a quote from Dr. Paul Fink in a letter to Dr. Laura "critiquing" our meta-analysis. Below these lines, we debunk his critique.

1 Of the 59 studies included in the analysis, over 60% of the data is (sic)
2 drawn from one single study done over 40 years ago.
3 The authors loaded their analysis with data involving primarily mild
4 adult-child interactions involving no physical contact. Rather than
5 focusing on child sexual abuse, the 1956 study on which they largely relied,
6 asked about college student's encounters with sexual deviants during
7 childhood and adolescence, usually in public places. Based on the nature of
8 these mild experiences, it is not surprising that the students described
9 little permanent harm. Nonetheless, the authors of the Rind study
10 generalized these findings to all sexual abuse.
11 It is as if a study that purports to examine the effects of
12 being shot in the head contained a majority of cases in which
13 the marksman missed. Such research might demonstrate that
14 being shot in the head generally has no serious or lasting
15 effects.

We show below that Fink's criticisms are completely specious. His claim that, of the 59 studies we included, over 60% of the data is (sic) drawn from one single study done over 40 years ago (lines 1,2) is blatantly false. His claims that we "loaded" our analysis with these data (line 4), that we "largely relied" on these data (line 5), and that we generalized these data to all sexual abuse (line 10) are similarly blatantly false. Fink is referring to a study by Landis (1956). Here are the facts:

(1) The Landis study was NOT used in any of our meta-analyses, which were the primary and most important analyses in the study, from which we concluded that sexually abused students were only slightly less well adjusted than control students.

(2) We only used the Landis data for self-reported reactions and effects. Regarding self-reported reactions, data from 9 female and 9 male samples were combined (see Table 7, p. 36) to get overall reactions. The Landis data made up 35% of the female data and 30% of the male data (33% of male and female combined). The Landis data were the most negative of all studies; if we had been trying to doctor the results in favor of positive reactions, we would have calculated the unweighted means for reactions. Instead, we used weighted means, giving substantial weight to the Landis study. Below we present the means as presented in the paper versus the means WITHOUT the Landis study, to show how inclusion of the Landis study negatively biased the means, which contradicts Fink's assertion that we "loaded" our analysis:

           As presented in paper     WITHOUT Landis
           pos   neut   neg          pos   neut   neg
women      11    18     72           16    18     66
men        37    29     33           50    25     24

The table on the right shows the effect of removing Landis, which
clearly goes against Fink's argument of "loading" to minimize
reports of harm. If we had included Landis but reported
UNWEIGHTED means, then we would have gotten the following:

           pos   neut   neg
women      14    18     68
men        43    27     30

This shows that using weighted means, as we did, and including the Landis study, as we did, gave the highest values of overall negative reactions, which contradicts the "loading" imputation.

(3) In the self-reported effects analyses, we reviewed the 6 male and 5 female samples that had this information. Here, the Landis data made up 53% of the total N for males and 68% of the total N for females (combined = 63%). Thus, this may be what Fink was referring to when he claimed that over 60% of the data is (sic) drawn from one single study (see Table 8, p. 37).

We first examined self-reported negative effects on subjects' current sex lives or attitudes. For males we noted that, of the 5 samples that had data, the percentage of reported negative effects ranged from 0.4% (Landis) to 16% (Condy). If we had been trying to "load" the overall mean, we would have used the weighted mean to give more weight to Landis' very low percentage. But we did not do this; instead we used the UNWEIGHTED mean, which yielded 8.5% negative reports (using the weighted mean to take advantage of Landis' low percentage would have yielded 4.4%). If we merely dropped Landis' study, the overall negative mean would change trivially from 8.5% to 10.5%. In the case of females, for negative effects on current sex lives or attitudes, only two samples had data: 2.2% (Landis) and 24% (Fritz et al.). We gave the UNWEIGHTED mean of 13%, when the weighted mean of 3.8% would have "loaded" the results.

Next, we considered lasting general negative effects. Those based on males came from only 3 samples (Fishman 27%, Landis 0%, and West & Woodhouse 0%). We did not give a mean. For females, 3 samples had data of lasting effects (Hrabowy 25%, Nash & West 20%, and Landis 3%). We did not give a mean. What we did do was to conclude properly that lasting negative self-reported effects occurred for only a minority of students--a conclusion that holds INDEPENDENTLY of inclusion of the Landis data.
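
To make the pooling mechanics concrete, here is a minimal sketch in Python of weighted versus unweighted means. All percentages and sample sizes below are hypothetical, chosen only to illustrate the two directions of the argument above; the actual figures are in Tables 7 and 8 of the paper.

def unweighted_mean(percentages):
    # Simple average: every sample counts equally.
    return sum(percentages) / len(percentages)

def weighted_mean(percentages, ns):
    # Sample-size-weighted average: large samples count more.
    return sum(p * n for p, n in zip(percentages, ns)) / sum(ns)

# Hypothetical case 1: the largest sample is the MOST negative (as
# Landis was for self-reported reactions). Weighting by N RAISES the
# pooled percentage of negative reactions, the opposite of "loading"
# the data toward benign results.
pct_neg = [85.0, 60.0, 55.0]
ns      = [500, 100, 100]
print(round(unweighted_mean(pct_neg), 1))    # 66.7
print(round(weighted_mean(pct_neg, ns), 1))  # 77.1

# Hypothetical case 2: the largest sample is the LEAST negative (as
# Landis was for self-reported effects). Here weighting would LOWER
# the pooled percentage, so reporting the unweighted mean, as the
# paper did, avoids exploiting the large sample's low figure.
pct_neg2 = [0.4, 16.0, 12.0]
ns2      = [900, 100, 100]
print(round(unweighted_mean(pct_neg2), 1))   # 9.5
print(round(weighted_mean(pct_neg2, ns2), 1)) # 2.9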

(4) Fink's point about "loading" our analysis with "primarily mild adult-child interactions involving no physical contact" (lines 3,4) is also false. We included all the studies that were available at the time, 16 of which included only cases of physical contact. We examined in our meta-analysis whether CSA-symptom relations varied as a function of contact vs. non-contact CSA. They did not (see p. 33). Thus, we were not trying to "load" the data, as Fink imputes. Moreover, we established that abuse severity was the same in the college samples as in national probability samples (see Table 1, p. 30).
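
For readers unfamiliar with this kind of moderator test, the standard fixed-effect approach transforms each correlation to Fisher's z, weights it by inverse variance (n - 3), and asks whether the subgroup means differ by more than chance. A minimal sketch, with hypothetical correlations and sample sizes rather than the paper's data:

import math

def pooled_z(rs, ns):
    # Fisher z-transform each r; weight by n - 3 (inverse variance).
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    ws = [n - 3 for n in ns]
    return sum(z * w for z, w in zip(zs, ws)) / sum(ws), sum(ws)

# Hypothetical subgroups: contact vs. non-contact CSA studies.
z1, w1 = pooled_z([0.10, 0.12, 0.09], [300, 200, 400])
z2, w2 = pooled_z([0.08, 0.11],       [250, 350])

# Q_between is chi-square distributed with 1 df under the null
# hypothesis that the two subgroups share one underlying effect;
# a small value means the moderator makes no detectable difference.
z_grand = (z1 * w1 + z2 * w2) / (w1 + w2)
q_between = w1 * (z1 - z_grand) ** 2 + w2 * (z2 - z_grand) ** 2
print(round(q_between, 3))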

(5) Fink's analogy to being shot in the head versus having the marksman miss shows that he is not well read in the child sexual abuse literature, in which it has often been claimed that non-contact CSA can be just as traumatizing as contact CSA. Thus, it was completely appropriate to examine the Landis data. Fink's analogy is particularly poor, given recent school shootings: is Fink implying that the high school students in Littleton who barely missed being hit by bullets will have no lasting effects?

(6) IN SUMMARY, the Landis data played no role in the MOST IMPORTANT analyses in the article, which were the meta-analyses, from which we derived our most important conclusions. Second, for self-reported reactions, we analyzed the Landis data, which were the most negative of all studies, in such a way as to give them maximum impact, which contradicts imputations of "loading" our analyses. Third, for self-reported effects, we analyzed the Landis data, which were less negative than most other studies, in such a way so as to minimize their overall impact, which again contradicts Fink's assertion of "loading" the data.

(7) CONCLUSION: Fink misrepresented how we analyzed the Landis data. Above, we showed how we analyzed the Landis data to do just the opposite of "loading" them, as Fink wrongly charged. Further, the section of the paper to which Fink refers constitutes relatively minor analyses; the major part of the analyses and conclusions in the paper comes from the meta-analyses, to which Landis's data are completely irrelevant. Given Fink's misrepresentation of our analyses, and his feeding of this misrepresentation to Dr. Laura and ultimately to Congress, with all the grave consequences of media sensationalism and political pandering, the question should now turn to why he has done this.

In conclusion, we add that our handling of the Landis data MAXIMIZED the reporting of negative outcomes, rather than MINIMIZING it. This is the EXACT OPPOSITE of what our critics have claimed.

The next email is our response to the claim that about 38% of the studies we used were "unpublished"--of course, calling doctoral dissertations unpublished is debatable, because they are part of the public record, being available at the Library of Congress, etc. The criticism is that these studies were never subjected to peer review or published, which supposedly invalidates our meta-analysis:

We included 36 published studies along with 23 unpublished studies (21 doctoral dissertations and 2 master's theses). The critics cite this information from p. 27, but then deceptively do NOT cite the follow-up information on p. 34, in which we statistically compared results from the published and unpublished studies. In comparing the mean effect sizes (i.e., associations between CSA and symptoms) of the two groups, we found them NOT to be statistically significantly different at the conventional .05 level. The mean published versus unpublished effect sizes were r = .11 and r = .08, respectively, which are certainly NOT different in a practical sense. For comparison, the mean effect size in national probability samples was r = .09, with which the unpublished data were completely consistent.

Moreover, the critics fail to mention our findings regarding the homogeneity (i.e., consistency) of the effect sizes across the studies (see p. 31). Of the 54 effect sizes meta-analyzed, all but 3 were consistent with the mean effect size of r = .09. The three outliers were all published studies. Thus, all unpublished studies were consistent with the overall trend, demonstrating that they were in no way anomalous and that they in no way biased the overall results.
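
The homogeneity claim can be checked with the standard Q statistic: each study's Fisher-z effect is compared with the inverse-variance-weighted mean, and the sum of weighted squared deviations is referred to a chi-square distribution with k - 1 degrees of freedom. A minimal sketch with hypothetical effect sizes near r = .09 (not the paper's 54 actual values):

import math

def fixed_effect_summary(rs, ns):
    # Pooled correlation and Q homogeneity statistic (fixed effect).
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    ws = [n - 3 for n in ns]  # inverse variance of Fisher z
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_bar) ** 2 for w, z in zip(ws, zs))
    return math.tanh(z_bar), q, len(rs) - 1  # pooled r, Q, df

# Hypothetical mix of published and unpublished effect sizes.
rs = [0.11, 0.08, 0.09, 0.10, 0.07, 0.12]
ns = [300, 250, 400, 350, 200, 450]
r_pooled, q, df = fixed_effect_summary(rs, ns)
print(round(r_pooled, 3), round(q, 2), df)
# A Q well below the chi-square critical value for df degrees of
# freedom indicates the studies are consistent with a single
# underlying effect size.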

Furthermore, the doctoral dissertations were generally very well done studies, often better than published ones, because they typically included more measures and better designs, reflecting the supervision of a group of university professors with Ph.D.s. Including unpublished studies is STANDARD practice in conducting meta-analyses. Any good meta-analyst attempts to locate unpublished studies relevant to the issue he or she is reviewing. This is because of the "file drawer" problem--i.e., there is a potential bias in academic journals toward publishing only studies with significant results; consequently, much research on a phenomenon that comes up with nonsignificant results may go unpublished regardless of its quality (and the research quality of the dissertations was generally quite good). Thus, as indicated by the file drawer problem, including "unpublished" doctoral dissertations in all likelihood INCREASED, rather than decreased, the validity of our overall results.
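
One standard way meta-analysts quantify vulnerability to the file drawer problem (a technique not used in the paper itself, offered here only as an illustration) is Rosenthal's fail-safe N: the number of hidden null-result studies that would have to exist to pull a combined result below significance. A minimal sketch with hypothetical Z scores:

def fail_safe_n(z_scores, z_alpha=1.645):
    # Rosenthal's fail-safe N at one-tailed alpha = .05:
    # N = (sum of Z)^2 / Z_alpha^2 - k.
    k = len(z_scores)
    return (sum(z_scores) ** 2) / (z_alpha ** 2) - k

# Hypothetical: 10 studies, each with a modest Z of 1.2. About 43
# unpublished null studies would be needed to overturn the result.
print(round(fail_safe_n([1.2] * 10)))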

Finally, the 36 published studies alone make this review as extensive as or more extensive than previous meta-analyses on CSA (e.g., Jumper had 26 studies; Neumann et al. had 38). In terms of assessing nonclinical samples, our 36 published studies are by far the most ever employed (only about half of Jumper's and Neumann et al.'s were nonclinical).

SUMMARY: The critics are selective in what information they cite from our article. They claim we used a large percentage (38%) of unpublished studies, but don't bother to mention that the unpublished results are consistent with the published results, both statistically and practically. They also fail to mention that the unpublished studies were almost all doctoral dissertations that had to go through a rigorous process of review by groups of university professors with Ph.D.s.

Forwarded by:

Michael Bailey
Department of Psychology
Northwestern University
Evanston, IL 60208-2710
847-491-7429
Fax: 847-491-7859
http://www.psych.nwu.edu/psych/people/faculty/bailey/bailey.html
jm-bailey@nwu.edu