Autism News Beat

An evidence-based resource for journalists


Texas two step

May 5th, 2008 · 11 Comments · Critical thinking

There is no credible evidence that mercury causes autism. It doesn’t matter if the mercury comes from vaccines, coal-fired power plants, forest fires, or UFO tailpipe exhaust. What mercury can cause is mercury poisoning, which is nasty and horrible, but the symptoms are distinct from autism and not easily confused. Unless you’re an anti-vaccine activist who wants to pull a fast one on deadline-stressed reporters.

So if you’re a news editor or reporter in the Lone Star State, beware of a much-publicized epidemiological study of coal-fired power plants and autism. Some parents of autistic children gathered in front of the Dallas federal courthouse last week to call attention to the study, led by Raymond Palmer, PhD, associate professor of family and community medicine at the University of Texas Health Science Center at San Antonio. Palmer reports that community autism prevalence is reduced by 1 to 2 percent with each 10 miles of distance from the pollution source. Unfortunately, Palmer’s study is yet one more example of a biased researcher cherry-picking data to “prove” a hypothesis. An honest scientist looks for data to “test” the hypothesis.

This was Palmer’s second bite at the apple – his 2006 study on the same topic was widely criticized for failing to control for confounds such as urbanicity. His second attempt fell short, and you can read why here and here.

But junk science is to some people what bloated carrion is to a jackal, and fringe websites, and at least one law firm, are slavering over Palmer’s population study.

Epidemiological studies have not been kind to the anti-vaccine movement. Thimerosal, a mercury-based preservative, has been absent from scheduled childhood vaccines long enough that today’s 3- to 5-year-olds should be autism-free, if a certain hypothesis were valid. Other epidemiological studies in Europe and elsewhere have failed to confirm a link between vaccines and autism.

But no matter. Vaccine hysteria pays fealty to science, but its true master is public relations. Websites such as AgeOfAutism.com regularly exhort their readers to bombard media outlets with spurious studies and unverifiable anecdotes, all aimed at getting journalists on their side. Sometimes it works. The latest call to action is aimed at four Texas media outlets that ran their own stories on reaction to the Palmer study: the Ft. Worth Star Telegram, WFAA-TV, KVTV, and KDAF.

An empty bucket, as you say in Texas, makes the most rattle, so grab your earplugs:

The media needs to hear from parents! If all these news sources receive emails from parents living everywhere in the U.S. and beyond telling them about the heavy metal levels in children with autism, pointing out the changes that occur after chelation and other bio-medical treatment, they may write more. We need to make it clear that something terrible is happening to our children but that there is hope. We can stop the exposure to toxins and we can recover these kids.

There is no credible evidence that children with autism have more heavy metals than their neurotypical peers. There are no peer reviewed studies that show chelation is an effective treatment for autism, and no good reason to suppose it would be.

By all means keep writing about autism. Tell the world about these children, their challenges, and the wonderful gifts they bring. And when reporting on the science, call a pediatric neurologist at the nearest medical college, or an immunologist, or the American Academy of Pediatrics. And when readers tell you they cured their kids with a special diet or a swim with the dolphins, show some skepticism. Purity of motive does not confer accuracy – dirt shows up on the cleanest cotton.

Something terrible happens to children with autism each time a credulous reporter repeats unverifiable and deliberately misleading stories about these kids. There is hope, but it has nothing to do with quack medical treatments and improbable conspiracy theories. Because when you get down to it, kids are kids, even ones with autism. And the best hope for any child with a disability is accommodation and acceptance.



11 responses so far

  • 1 Ms. Clark // May 5, 2008 at 1:52 am

    Nicely put. Did Palmer ever state his conflict of interest? That he is a believer in bogus cures for his autistic son? This guy is scarcely neutral. Maybe he was planning on making a career as an “expert witness” as I suspect that DeSoto and/or Hitlan are/were.

  • 2 isles // May 6, 2008 at 1:45 pm

    Good to see you beating the anti-vax crowd to the punch. They’ve left science so far behind they don’t even distinguish between methylmercury and ethylmercury anymore.

    I can understand parents sitting down at their computers and coming up with wrong ideas about autism, but there’s no excuse for reporters doing the same. They have experts at their disposal. They shouldn’t choose to pay more attention to weepers and shouters than to scientists.

  • 3 Lisa // May 12, 2008 at 3:20 pm

    wish I found it as easy as you do to tell “junk” science from the real deal…

    my usual source is PubMed.com; I look to see whether an article is cited there, and who the co-authors are.

    If the whole group is legit, and the citation is from a legit peer reviewed journal, it seems reasonable to think that the study is legit…

    when I look at the critiques of anti-mercury studies (from sites like yours and similar), I find myself in an infinite research loop. You say the mercury baby hair studies are b.s., and maybe they are – but how do I find out? Dr. X says they’re legit; Dr. Y says they’re not – but do Drs. X or Y have axes to grind? funding from the wrong sources? how do I find out? Source A says Dr. X is a genius, Source B says he’s a liar… and on it goes…

    Given that I’m not a biologist or a researcher by training, how do I parse out the dueling studies in a manner that respects all the players?

    (This is a serious question – not being cute!)

    Best,

    Lisa
    (about.com guide to autism)

  • 4 Joseph // May 12, 2008 at 3:56 pm

    Lisa: What I do is follow the debate and try to understand the points from each side. That’s how science works. Results are found, but some are discarded because of better science. Replication matters. There are people who cherry-pick certain papers over others, and pretend certain findings don’t exist, which means they aren’t being objective. I and others like me like to understand all studies on a matter, if that’s possible, and why they get the results they get.

    Some say it’s peer-reviewed papers vs. non-peer-reviewed. That’s a heuristic that is not bad, but that’s not the absolute way to tell where the debate stands.

    Within peer-reviewed papers there are levels of quality and so forth. For example, a double-blind placebo-controlled study is more conclusive than a retrospective study. Case reports, expert opinion and things like that are the lowest levels of evidence there are.

    Some say it’s authority. Certainly, if you prefer not to get immersed in the science, this is a perfectly valid way to get reasonably reliable information. But it doesn’t always work. I’ve given my reasons why I think authority is wrong on ABA, for example.

  • 5 culvercitycynic // May 12, 2008 at 4:10 pm

    To my knowledge, Ms. Clark, Prof. Palmer has not yet stated his conflict of interest, and that is beyond troubling. I think proper ethical procedure would dictate that he convey his interests and conflicts somehow, while protecting all interested parties’ privacy. It’s important that those discussing this study know why he’s conducting this kind of research.

  • 6 Prometheus // May 12, 2008 at 4:20 pm

    Lisa,

    The best way to be able to tell the good science from the junk science is to be well-educated in the field. And by “well-educated”, I don’t mean having a “Google PhD” in the field – nor do I mean having read all the abstracts on PubMed (or even all the papers).

    What “well-educated” means is – sad to say – taking the courses, getting a degree and doing research (not “internet research”, but real laboratory research) in the field. It’s not something you can do in your “spare time” and there are no “short-cuts”.

    Clearly, this route is not for everybody. The next best approach is to critically evaluate each study. Part of that process is as you’ve outlined above – legitimate researchers publishing in a peer-reviewed journal. However, legitimate researchers often get carried away by their hypotheses and I’ve seen lots of absolute rubbish published in peer-reviewed journals – and not just about autism.

    By the way – “conflict of interest” cuts both ways. In fact, the “mavericks” often are more swayed by their conflicts than people who are getting a research grant. Research grants – once given – aren’t easily rescinded. Someone whose livelihood or reputation depends on getting a specific answer has a much bigger motivation to come up with that specific answer.

    So, what can you do? You can read the articles. If you don’t understand what they’re talking about or the methods they’re using, you’ll have to admit that you can’t evaluate the article on your own. At that point, you can either give up or you can find somebody who can explain the paper and what it means – preferably not somebody with a vested interest in the matter.

    A few “red flags” of “junk science” to look for:

    [1] The conclusions aren’t supported by the data.

    This is the flaw in the Holmes et al “Baby Haircut” paper. They got unusual results that didn’t support their “mercury-causes-autism” hypothesis. Then they made up a mechanism to explain it that wasn’t supported by their data and conflicted with what is known about hair and mercury.

    A better explanation of their data would be that low hair mercury – and thus low blood mercury (and low mercury exposure) – is associated with autism. That conclusion may not “make sense”, but it is all that you can conclude from their data.

    [2] There are uncontrolled/unmeasured/unreported variables.

    The recent Palmer et al study (and the preceding Palmer study) is a good example of this. They looked at power plant emissions and ignored the available data on total mercury emissions and deposition. They also ignored wind and other confounding variables. And their “explanation” fails to explain why the states with the highest mercury deposition have lower autism prevalence than those with lower mercury deposition.

    [3] The conclusion seems unrelated to the data.

    This is a variation of [1], but an important one in the autism field. Many studies – the James et al study comes to mind – look at one response to mercury/thimerosal and pretend that it is an analogue to autism. Autoimmune disorders in a strain of mice prone to autoimmune disorders, for example.

    Lisa, I’m not trying to say that you should just be quiet and listen to what the “experts” say – far from it. For one thing, some of the so-called “experts” are among the worst offenders. Instead, I’m suggesting that you be extremely skeptical of anybody claiming that they are “right” and “mainstream medicine/science” is wrong.

    Every once in a while, the “maverick” claiming that he/she is right and everybody else is wrong turns out to be on the right track. The other 9,999 times out of ten thousand, they’ve made a mistake.

    Prometheus

  • 7 Matt // May 12, 2008 at 5:47 pm

    Lisa,

    this is a toughie. I can say that certain papers just pop out to me as possible junk–and this was in the early days of trying to learn about autism.

    The best example to me is the Geier “Early Downward Trends…” paper. The problem is, many of the red flags were because of my background.

    That said, some things to check.

    The journals people publish in: check that they are actually peer reviewed. “Medical Hypotheses” is a great example of a non-peer-reviewed journal. While “Medical Veritas” is, as I recall, peer reviewed, it is not an excellent journal.

    So, how can you get a first measure? Search for the “impact factor” of the journal.

    It is difficult to get a read on the authors. If they are publishing in low-value journals, that is one sign. But other issues? Not everyone has the background and time to, say, compare a paper by one author vs. another and demonstrate that the author is, well, borrowing, shall we say, from another.

    http://neurodiversity.com/weblog/article/108/bibliographic-mergers-acquisitions

    While many would claim that the government is biased, when the courts are telling people that certain authors are not qualified to be experts, that says something very strong about their work.

    Check their publications on scholar.google.com. One thing that will tell you is how often a paper is referenced. While not a perfect measure, it does give information about how many other authors out there are reading and using that paper.

    Of course, when a Greek God gives you advice, it is worth listening as well…

  • 8 Lisa // May 13, 2008 at 5:18 am

    Thanks so much to all; your answers are wonderful, and in many cases I am able to get unbiased (or overtly and thus transparently biased) expert perspectives on relatively non-controversial topics (eg, sensory integration, AIT, HBOT, and so forth). There was a recent double-blind placebo-based research study done on sensory integration, and I wrote about it – but in that case it was NOT peer reviewed or replicated – OR published in a legit journal… making me wonder whether the design of the study outweighed its presentation or not!

    Vaccines seem to be a uniquely difficult subject, for so many reasons that we’ve already covered to some degree…

    But Joseph, you say: “Some say it’s authority. Certainly, if you prefer not to get immersed in the science, this is a perfectly valid way to get reasonably reliable information. But it doesn’t always work. I’ve given my reasons why I think authority is wrong on ABA, for example.”

    Now – there’s a sacred cow! I’ve certainly heard what sound like very legit arguments that ABA is not all it’s cracked up to be, and interviewed several experts who have given me real hope that ABA is becoming (Slooowly) a more humanistic intervention. But surely there are properly structured, replicated studies to turn to with ABA? They certainly LOOK like such when I read them?! AND the American Pediatric Association supports ABA wholeheartedly, which is surely a meaningful endorsement?

    Not?

    All the best,

    Lisa (autism.about.com)

  • 9 Joseph // May 13, 2008 at 12:43 pm

    I’ve certainly heard what sound like very legit arguments that ABA is not all it’s cracked up to be, and interviewed several experts who have given me real hope that ABA is becoming (Slooowly) a more humanistic intervention. But surely there are properly structured, replicated studies to turn to with ABA?

    Lisa: My argument is here. Essentially, I argue that while there is a lot of low-quality evidence supporting ABA as an intervention, the best quality evidence produces results that are not very impressive at all. This is a red flag (with homeopathy being a good example of something that has turned out in a similar manner). I also list some problems with Lovaas (1987) which is the study (or promotional material, for some) that started it all.

    It’s impressive that ABA is supported by authority, but then, expert opinion is only Level-III evidence.

    Actually, you’ll note Interverbal (Jonathan) showed up in the comments and essentially agreed with the argument. Interverbal is a teacher, ABA therapist, and generally favors ABA. But he’s also a very scientifically-minded blogger.

  • 10 laurentius-rex // May 19, 2008 at 2:29 pm

    I have said it before and I will say it again.

    This fixation with Mercury and the arguments for and against is not the big issue.

    It is merely a reflection of the media’s propensity to like stories with simple plots, a simple causation, be that pet shampoo, mobile phones or whatever the cause du jour is.

    The big issue is not the number of papers being written or published about mercury, because in the big picture of autism research (and I subscribe to a series of autism alerts so I can see for myself) the amount of time that the science devotes to mercury is insignificant.

    Ok, so Thoughtful House snuck into IMFAR, and there were sessions devoted to environmental epigenesis. You have to give the punters what they want or they won’t come to the show. But this fixation is obscuring what is really happening, never mind what is not happening, in research. Instead of all this bull about proving or disproving the mercury hypothesis, let us look at the paucity of research into interventions, and at the low profile that outfits like Research Autism get in the glamour stakes.

  • 11 Harold L Doherty // Jun 10, 2008 at 2:22 am

    Joseph states: “It’s impressive that ABA is supported by authority, but then, expert opinion is only Level-III evidence.”

    What Joseph ignores is the fact that the authorities he references – which, apart from the AAP, also include the MADSEC (Maine) Autism Task Force, the US Surgeon General’s Office, the Association for Science in Autism Treatment and the New York State Department of Health, to name only some of the better known “authorities” – base their conclusions on A SUBSTANTIAL BODY OF EVIDENCE:

    AAP:

    The effectiveness of ABA-based intervention in ASDs has been well documented through 5 decades of research by using single-subject methodology21,25,27,28 and in controlled studies of comprehensive early intensive behavioral intervention programs in university and community settings.29–40 Children who receive early intensive behavioral treatment have been shown to make substantial, sustained gains in IQ, language, academic performance, and adaptive behavior as well as some measures of social behavior, and their outcomes have been significantly better than those of children in control groups.31–40

    MADSEC Autism Task Force Report:

    Over the past 30 years, several thousand published research studies have documented the effectiveness of ABA across a wide range of:
    • populations (children and adults with mental illness, developmental disabilities and learning disorders)
    • interventionists (parents, teachers and staff)
    • settings (schools, homes, institutions, group homes, hospitals and business offices), and
    • behaviors (language; social, academic, leisure and functional life skills; aggression, self-injury, oppositional and stereotyped behaviors)

    The effectiveness of ABA-based interventions with persons with autism is well documented, with current research replicating already-proven methods and further developing the field.

    Documentation of the efficacy of ABA-based interventions with persons with autism emerged in the 1960s, with comprehensive evaluations beginning in the early 1970s. Hingtgen & Bryson (1972) reviewed over 400 research articles pertinent to the field of autism that were published between 1964 and 1970. They concluded that behaviorally-based interventions demonstrated the most consistent results. In a follow-up study, DeMeyer, Hingtgen & Jackson (1981) reviewed over 1,100 additional studies that appeared in the 1970s. They examined studies that included behaviorally-based interventions as well as interventions based upon a wide range of theoretical foundations. Following a comprehensive review of these studies, DeMeyer, Hingtgen & Jackson (1982) concluded “. . . the overwhelming evidence strongly suggest that the treatment of choice for maximal expansion of the autistic child’s behavioral repertoire is a systematic behavioral education program, involving as many child contact hours as possible, and using therapists (including parents) who have been trained in the behavioral techniques” (p.435).

    Support of the consistent effectiveness and broad-based application of ABA methods with persons with autism is found in hundreds of additional published reports.

    Baglio, Benavidiz, Compton, et al (1996) reviewed 251 studies from 1980 to 1995 that reported on the efficacy of behaviorally-based interventions with persons with autism. Baglio, et al (1996) concluded that since 1980, research on behavioral treatment of autistic children has become increasingly sophisticated and encompassing, and that interventions based upon ABA have consistently resulted in positive behavioral outcomes. In their review, categories of target behaviors included aberrant behaviors (i.e., self-injury, aggression), language (i.e., receptive and expressive skills, augmentative communication), daily living skills (self-care, domestic skills), community living skills (vocational, public transportation and shopping skills), academics (reading, math, spelling, written language), and social skills (reciprocal social interactions, age-appropriate social skills).

    In 1987, Lovaas published his report of research conducted with 38 autistic children using methods of applied behavior analysis 40 hours per week. Treatment occurred in the home and school setting. After the first two years, some of the children in the treatment group were able to enter kindergarten with assistance of only 10 hours of discrete trial training per week, and required only minimal assistance while completing first grade. Others, those who did not progress to independent school functioning early in treatment, continued in 40 hours per week of treatment for up to 6 years. All of the children in the study were re-evaluated between the ages of six and seven by independent evaluators who were blind as to whether the child had been in the treatment or control groups. There were several significant findings:

    1) In the treatment group, 47% passed “normal” first grade and scored average or above on IQ tests. Of the control groups, only one child had a normal first grade placement and average IQ.
    2) Eight of the remaining children in the treatment group were successful in a language disordered classroom and scored a mean IQ of 70 (range = 56-95). Of the control groups, 18 students were in a language disordered class (mean IQ = 70).
    3) Two students in the treatment group were in a class for autistic or retarded children and scored in the profound MR range. By comparison, 21 of the control students were in autistic/MR classes, with a mean IQ of 40.
    4) In contrast to the treatment group, which showed significant gains in tested IQ, the control groups’ mean IQ did not improve. The mean post-treatment IQ was 83.3 for the treatment group, while only 53.3 for the control groups.

    In 1993, McEachin, et al investigated the nine students who achieved the best outcomes in the 1987 Lovaas study. After a thorough evaluation of adaptive functioning, IQ and personality conducted by professionals blind as to the child’s treatment status, evaluators could not distinguish treatment subjects from those who were not.

    Subsequent to the work of Lovaas and his associates, a number of investigators have addressed outcomes from intensive intervention programs for children with autism. For example, the May Institute reported outcomes on 14 children with autism who received 15 – 20 hours of discrete trial training (Anderson, et al, 1987). While results were not as striking as those reported by Lovaas, significant gains were reported which exceeded those obtained in more traditional treatment paradigms. Similarly, Sheinkopf and Siegel (1998) have recently reported on interventions based upon discrete trial training which resulted in significant gains in the treated children’s IQ, as well as a reduction in the symptoms of autism. It should be noted that subjects in the May and Sheinkopf and Siegel studies were given a far less intense program than those of the Lovaas study, which may have implications regarding the impact of intensity on the effectiveness of treatment.
