Evidence Check: Bryce Wylde’s 21 Favourite Papers

A few weeks ago, I wrote an article critical of Bryce Wylde’s appearance on Canada AM where he indicated that homeopathic treatments were of benefit for cuts, bruises, burns, and bug bites. Mr. Wylde responded in the comments section of that post (leading to further discussion here) where he provided a list of his “favourite scientific documents” for my review.

As Mr. Wylde took the time to respond to criticism with a list of citations that are his favourite, I must assume that he intends this list to be persuasive supporting data for homeopathy, if not the best data available. Given that he prides himself on his evidence-based practice and discusses homeopathy in regular media appearances, I expect that if there’s good data to support homeopathy, he would have it. I also expect that Mr. Wylde, in using scientific papers to validate his position, values the scientific method and recognizes that science is not inherently biased against homeopathy or ineffective in evaluating its effects.

Before reviewing the papers, let’s consider some background on evidence to ensure that we are all clear on what that word means in the context of science.

Evidence-Based Practice

There’s an apparent philosophical distinction between what homeopaths and what science advocates consider evidence-based (or science-based) practice. Perhaps this is why, after reviewing the same research, scientists and homeopaths have come to different conclusions regarding the efficacy of homeopathy. This point was made after the British Homeopathic Association faced strong criticism for their interpretation of data in the Evidence Check recently conducted in the United Kingdom. The House of Commons committee noted:

73. We regret that advocates of homeopathy, including in their submissions to our inquiry, choose to rely on, and promulgate, selective approaches to the treatment of the evidence base as this risks confusing or misleading the public, the media and policy-makers.

In the view of homeopathy supporters, it seems that any literature or personal testimonial supporting the use of homeopathy is evidence of efficacy. However, the scientific view is that there are several factors that need to be taken into account. Even if we set aside prior plausibility, the treatment has to be demonstrably and repeatedly effective in objective contexts in high-quality research. A high-quality, objective study reduces as much bias as possible by employing certain standard methodologies and appropriate statistical analyses. It is important to separate objective change from personal perception, because feeling better is not the same as affecting the course of an illness, and a patient’s condition can worsen while they subjectively feel better.

In evidence-based practice, research (supporting and non-supporting) must be evaluated for both content and quality. Considering literature out of context is colloquially called “cherry picking” and is undesirable because it gives a skewed representation of the data — any single trial could be an outlier. A high-quality way to review a large body of data in a short time is to examine systematic reviews, such as Cochrane Reviews, which comprehensively summarize the research on a particular topic.

With these principles in mind, each citation is reviewed below.

Literature Review

For the sake of brevity, the following critical appraisals are not exhaustive. The most obvious or problematic issues with each paper are noted, with links to further discussion where applicable. As Mr. Wylde provided no contextual analysis, it’s not clear why these papers in particular are his favourites. I invite him to provide this context in the comments, if he so desires.


None of the 21 provided citations had any direct relevance to the topic of first aid (the topic on Canada AM). Most of the studies had conclusions that were not representative of the literature, inadequate statistical analysis or power, and/or significant methodological flaws. Even the most marginal positive results were reported enthusiastically by the authors, whereas negative results were downplayed or said to call for “further research” — despite reviews demonstrating negative overall results that become more pronounced as study quality improves. This pattern is not necessarily due to devious attempts at misrepresenting data; rather, it can arise from unintentional investigator biases, hence the value of peer review and independent replication. For more information, see:

  • Edzard Ernst’s analysis of homeopathy-related Cochrane reviews (cached here) and systematic reviews (here).
  • The NHS Homeopathy Evidence Check (pdf) from the UK.
  • A 2003 critical overview of homeopathy, concluding: “There is a lack of conclusive evidence on the effectiveness of homeopathy for most conditions. Homeopathy deserves an open-minded opportunity to demonstrate its value by using evidence-based principles, but it should not be substituted for proven therapies.”

Readers who are in a hurry can skip to the conclusion.

Clinical trials

Jacobs et al. (2003). Homeopathy for childhood diarrhea: combined results and meta-analysis from three randomized, controlled clinical trials. Pediatric Infectious Disease Journal, 22: 229-234.

  • Unfortunately homeopathy was not directly compared to oral rehydration therapy (standard treatment), only to placebo. So this study does not show that homeopathy has clinical usefulness for childhood diarrhea. Later investigation by the same authors in Honduras showed negative results. This systematic review of similar research shows a general lack of support for homeopathic treatments in children and adolescents.

Vickers et al. (2006). Homoeopathic Oscillococcinum for preventing and treating influenza and influenza-like syndromes (Cochrane review). In: The Cochrane Library. Chichester, UK: John Wiley & Sons, Ltd. CD001957.

  • From that review: “Trials do not show that homoeopathic Oscillococcinum can prevent influenza. However, taking homoeopathic Oscillococcinum once you have influenza might shorten the illness, but more research is needed.” This is not exactly a glowing recommendation, particularly given that the average time shortened was about 6 hours. This paper is discussed in more detail here and here. Notably, a 2009 update to this review has been withdrawn.

Taylor et al. (2000). Randomised controlled trials of homoeopathy versus placebo in perennial allergic rhinitis with overview of four trial series. British Medical Journal, 321: 471-476.

  • Public full text here. One of the main outcomes (visual analogue scale scores) of this study showed no significant difference. Criticisms of this study, among others, can be found here in a PubMed topic review of homeopathy for allergic rhinitis. For example, the study lacked the statistical power appropriate for its number of participants, so no firm conclusion can be drawn from it.

Frass et al. (2005). Adjunctive homeopathic treatment in patients with severe sepsis: a randomized, double-blind, placebo-controlled trial in an intensive care unit. Homeopathy, 94: 75–80.

  • Only 35 participants were in each group (homeopathy and placebo). After 30 days, there was no significant difference. After 180 days there was a barely significant difference. Due to the small numbers, even 1-2 deaths would have significantly affected these results. Dr. Mark Crislip, an infectious disease specialist, reviews the paper here and questions the 6 month endpoint.
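The fragility of small trials can be shown with a quick calculation. The survivor counts below are hypothetical, chosen purely for illustration (the trial’s actual figures are not reproduced here), and the sketch uses a simple pooled two-proportion z-test written from scratch with the standard library:

```python
from math import erf, sqrt

def two_proportion_p(success_a, success_b, n):
    """Two-sided p-value from a pooled two-proportion z-test
    (normal approximation), for two groups of equal size n."""
    p_a, p_b = success_a / n, success_b / n
    pooled = (success_a + success_b) / (2 * n)          # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (2 / n))          # standard error
    z = abs(p_a - p_b) / se
    # two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# Hypothetical 180-day survivors out of 35 per group:
print(two_proportion_p(26, 17, 35))  # ~0.027 -> "significant"
# Move just two deaths into the better-performing group:
print(two_proportion_p(24, 17, 35))  # ~0.089 -> not significant
```

Shifting just two outcomes out of 70 moves the result across the p = 0.05 line, which is why a barely significant endpoint in a 35-per-group trial warrants skepticism.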

Oberbaum et al. (2001). A randomized, controlled clinical trial of the homeopathic medication Traumeel S in the treatment of chemotherapy-induced stomatitis in children undergoing stem cell transplantation. Cancer, 92: 684-690.

  • Public full text here. Though the results of this single trial were significant, there were only 15 participants in each treatment group: 10 people in the Traumeel group developed stomatitis, whereas 14 people in the control group did. A 2009 Cochrane Review on the general topic of the adverse effects of cancer treatments, which included this paper, states that these trials require replication, as “the risk of bias was unclear, and four further studies reported negative results.” The authors of the review conclude: “There is no convincing evidence for the efficacy of homeopathic medicines for other adverse effects of cancer treatments.” Homeopathic treatments for cancer are also discussed here.
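To see how little the incidence counts above (10 of 15 versus 14 of 15) establish on their own, here is a minimal two-sided Fisher exact test built from the standard library. This is a from-scratch sketch, not the trial’s own statistical analysis, which may have rested on other outcome measures:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].
    Sums the probabilities of every table with the same margins that is
    no more likely than the observed one (hypergeometric enumeration)."""
    row1, row2 = a + b, c + d
    col1 = a + c
    denom = comb(row1 + row2, col1)
    def p_of(k):  # probability of a table with k in the top-left cell
        return comb(row1, k) * comb(row2, col1 - k) / denom
    p_obs = p_of(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # tiny tolerance guards against float round-off at the observed table
    return sum(p_of(k) for k in range(lo, hi + 1)
               if p_of(k) <= p_obs * (1 + 1e-9))

# Oberbaum et al.: 10/15 stomatitis cases with Traumeel vs 14/15 with placebo
print(round(fisher_exact_two_sided(10, 5, 14, 1), 3))  # -> 0.169
```

The raw incidence difference alone does not reach significance, so the trial’s positive result rests on a fragile base of 30 participants.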

Frei et al. (2005). Homeopathic treatment of children with attention deficit hyperactivity disorder: a randomised, double blind, placebo controlled crossover trial. European Journal of Pediatrics, 164: 758-767.

  • Bias is suggested in the abstract, where the authors state that they aimed to “obtain scientific evidence of the effectiveness of homeopathy in ADHD”. Objective researchers should test the hypothesis that homeopathy is effective for ADHD, not set out to confirm it. Perhaps unsurprisingly, this study showed significant results. However, a 2009 Cochrane review on the topic of homeopathy and ADHD concluded: “There is currently little evidence for the efficacy of homeopathy for the treatment of ADHD.”

Brinkhaus et al. (2006). Homeopathic arnica therapy in patients receiving knee surgery: Results of three randomised double-blind trials. Complementary Therapies in Medicine, 14: 237-246.

  • Of the three trials in this study of oral Arnica treatment, only one was significant and it had only 57 participants. There is not enough statistical power to draw firm conclusions and previous reviews of better-designed studies than this one have shown that homeopathic Arnica is not a promising avenue for acute treatment. This 1998 review states that Arnica is not supported beyond placebo effects; this 2001 review of more robust studies concurs.

Adler et al. (2009). Homeopathic Individualized Q-potencies versus Fluoxetine for moderate to severe depression: double-blind, randomized non-inferiority trial. Evidence-based Complementary and Alternative Medicine: eCAM. doi:10.1093/ecam/nep114

  • This study had negative results, concluding only that the feasibility of such research was demonstrated. One would think that feasibility had already been sufficiently determined, given there were enough studies to populate these 2005 and 2007 reviews of similar research. Both conclude that homeopathy is not effective for treating depression, with one citing the low quality of available research.

Cost effectiveness

Cost, while relevant to economic impact, is not relevant to the efficacy of a particular treatment. Given the lack of objective established efficacy for homeopathy beyond the placebo effect, one wonders how any associated cost (beyond base ingredients) is ethically justified.

Rossi et al. (2009). Cost–benefit evaluation of homeopathic versus conventional therapy in respiratory diseases. Homeopathy, 98: 2-10.

  • This was a non-blinded, non-randomized study of the treatment costs for patients of a homeopathic clinic compared to retrospective matched controls receiving conventional therapy. Insufficient evidence is provided to demonstrate that the groups were properly matched (e.g., diagnoses were not verified, which could bias the results if the homeopathy group was less ill on average than the conventional therapy group). The drug-tracking methodology is unclear and homeopathy costs were not tracked. Therefore, the title (“versus”) is misleading and conclusions that costs were “reduced” in the homeopathy group are inappropriate. As there was no intervention, the only fair observation would be that costs “differed” between the two groups, which could be due to several factors. There is no evaluation of the appropriateness of the treatments given, nor of the efficacy of homeopathy for respiratory illnesses (for that, see this review and this Cochrane review).

Witt et al. (2005). Outcome and costs of homeopathic and conventional treatment strategies: a comparative cohort study in patients with chronic disorders. Complementary Therapies in Medicine, 13: 79-86.

  • Public full text here (pdf). “Health economic data were obtained for a subgroup of 38% of the patients.” There was no explanation as to why 62% of the participants were excluded, even though these data are the primary endpoint. In any case, the study concluded: “In the present study, there were no significant differences between the overall costs incurred by patients according to the homoeopathic or conventional treatment strategies.” And the study “does not provide firm data on the comparative efficacy of conventional and homoeopathic treatments.”

Kneis et al. (2009). Economic evaluation of Sinfrontal® in the treatment of acute maxillary sinusitis in adults. Applied Health Economics and Health Policy, 7: 181-191.

  • This paper is based on a previous trial that investigated Sinfrontal with apparent bias — the primary objective was to “demonstrate the efficacy of [Sinfrontal]”, as opposed to testing the hypothesis of efficacy. The methodology was questionable, as diagnosis of acute sinusitis is not reliably determined by x-ray and participants were allowed to use “saline inhalations, paracetamol, and over-the-counter medications” throughout the study. Either of these factors could potentially bias the results. The economic evaluation is also questionable, as antibiotics are not typically indicated for acute sinusitis. No solid conclusions can be reliably drawn from these two studies. Some criticisms of the rationale behind the methodology of the underlying trial can be found here.


Witt et al. (2005). Homeopathic medical practice: long-term results of a cohort study with 3,981 patients. BMC Public Health, 5: 115.

  • Public full text here. The results were based on self- and homeopath-reported outcomes (1-10 scale) and quality of life measures at baseline and intervals. There were “major” improvements in quality of life for adults with severe disease and young children. The authors conclude that homeopathy may play a beneficial role in the long-term care of chronic patients. However, the study had no control group, so there’s no way to know whether these improvements were due to the homeopathic intervention, a placebo effect, or other variables.

Spence et al. (2005). Homeopathic treatment for chronic disease: a 6-year university-hospital based outpatient observational study. Journal of Alternative and Complementary Medicine, 5: 793-798.

  • Patients self-reported (on a Likert scale) feeling better after homeopathic treatment. The authors conclude from these data that “Homeopathic intervention offered positive health changes to a substantial proportion of a large cohort of patients with a wide range of chronic diseases.” Unfortunately, they did not measure health changes. They measured subjective patient perception of health. Though interesting, this is not the same as an actual change in health outcome, as is implied by the conclusion. This study had no control group, so these improvements are indistinguishable from placebo effects.

Biological models

Belon et al. (2004). Histamine dilutions modulate basophil activation. Inflammation Research, 53: 181-188.

Aguejouf et al. (2008). Prothrombotic and Hemorrhagic Effects of Aspirin. Clinical and Applied Thrombosis/Hemostasis, doi:10.1177/1076029608319945.

  • Table 3 and Figure 3 from this animal study (rats) demonstrate quite clearly that the dilutions of Aspirin were no better than saline or salicylate. Also shown is that ASA, at a non-homeopathic dose of 100 mg/kg, significantly reduced the number and duration of emboli.

Witt et al. (2007). The in vitro evidence for an effect of high homeopathic potencies – A systematic review of the literature. Complementary Therapies in Medicine, 15: 128-138.

  • The authors state that 75% of studies produced positive results, but from the conclusion: “No positive result was stable enough to be reproduced by all investigators. A general adoption of succussed controls, randomization and blinding would strengthen the evidence of future experiments.” In other words, though there were many positive studies found, they were not of high enough quality to be consistently replicated in order to draw solid conclusions from the data.

Endler et al. (2010). Repetitions of fundamental research models for homeopathically prepared dilutions beyond 10^-23: a bibliometric study. Homeopathy, 99: 25-36.

  • This paper was a literature search of studies about “high homeopathic potencies that have been subjected to laboratory-internal, multicenter or independent repetition trials”. Of the studies they included, almost a third did not support previous research and this went up to over half if the studies were independent replications. So what this paper might actually show is that there is bias in homeopathy research, highlighting the importance of independent replication. Also, repeatability is an irrelevant measure if both the original research and the replication are of poor quality (which wasn’t assessed).


The following three papers come from the same special issue of the journal Homeopathy, investigating the concept of “water memory”. The blog Bad Science discussed the entire series in an online journal club and copies of each paper can be found there.

Rey (2007). Can low temperature thermoluminescence cast light on the nature of ultra-high dilutions? Homeopathy, 96: 170-174.

  • This study aims “to demonstrate that the high dilutions are physically different from the diluent and have, indeed, an ‘individual personality’.” There were no controls and the methods do not describe how many samples were analyzed. Graphs are presented with no accompanying statistical analysis. No justification is provided for the methods which do not resemble traditional homeopathic preparation processes. New Scientist discussed the article here.

Elia et al. (2007). The “memory of water”: an almost deciphered enigma. Dissipative structures in extremely dilute aqueous solutions. Homeopathy, 96: 163-169.

  • The methods and materials used aren’t disclosed and the results contain no statistics, nor even a description of the number of trials run — for all we know, there was only one trial, making the results indistinguishable from chance. Consider this graph from the paper presented without units, error bars, or proper labeling. The authors state “we cannot derive reproducible information concerning the influence of the different degrees of homeopathic dilution or the nature of the active principle (solute) on the measured physicochemical parameters.”  With such poor detail in their methods and results, lack of reproducibility is not surprising.

Chaplin (2007). The memory of water: an overview. Homeopathy, 96: 143-150.

  • From the introduction: “whether homeopathy works or not is a mostly separate issue from the content of this paper … It follows that simply proving that water does have a memory does not prove that homeopathic medicines work.” This paper discusses how impurities might affect water’s structure, but this is not clearly linked to memory or to increased potency after dilution, such as in homeopathic preparations. No rationale is given for why every dilute aqueous solution doesn’t have potent effects due to a memory of past solutes. More in-depth discussion can be found here.


Teixeira (2006). Evidence of the principle of similitude in modern fatal iatrogenic events. Homeopathy, 95: 229-236.

  • This meta-analysis discusses the withdrawal effects and side-effects of pharmaceutical drugs. These are used to justify homeopathic treatment, as homeopathy is so dilute that the remedies produce no undesirable effects. However, though it is an interesting summary of the research on several pharmaceutical drugs, this paper does not establish homeopathic efficacy — undesirable drug effects provide no justification for the clinical use of homeopathy.


A review of this literature in its broader scientific context demonstrates that the efficacy of homeopathy does not match that of available therapeutic interventions and that it does not appear to be effective beyond the placebo effect. Positive effects are generally found in poor-quality studies suffering from multiple methodological and analytical issues, and these effects do not persist in higher-quality studies. No evidence has been provided, nor does any appear to exist, to suggest that homeopathy is an appropriate or necessary intervention, as either a first-line or a co-treatment, for self-limiting, acute, or chronic conditions.

Mr. Wylde’s list of citations reinforces, rather than addresses, concerns about homeopathy. As a self-proclaimed evidence-based practitioner, he is presumably familiar with the principles of evaluating research and is open to peer review, an important aspect of scientific discourse. Yet he has chosen these papers apparently without an appreciation of the numerous limitations identified in each. In addition, he apparently failed to consider a number of well-designed negative studies and systematic reviews that are far more persuasive in their findings due to their adherence to objective scientific standards. One wonders why this research is apparently inapplicable or valueless to discussions about homeopathy.

There are many medical treatments that do not pan out in the long run, however few of them have the marketing and subsequent social support that homeopathy has. Therapies with an even less abysmal research history than homeopathy (for example, bloodletting and balancing the four humours, high dose chemotherapy for breast cancer, antiarrhythmic agents following a heart attack, etc) have been discarded for more promising avenues of treatment. Yet homeopathy remains due to persistent misunderstanding of the placebo effect, confusion between subjective assessment of illness and objective health outcome, and almost religious devotion in the face of copious non-supporting evidence.

Given appropriate evidence, I would of course re-evaluate my position. But because of the negative history of homeopathic research, this evidence would have to be relatively extraordinary. Until then, I remain skeptical and so should conscientious health consumers.

*Papers were gathered and reviewed by Kim Hebert and Scott Gavura. Special thanks to @hanna_louise, @psweetman, @xtaldave, @ethicsblogger, and @coxar for their assistance in the literature retrieval.

33 Responses to “Evidence Check: Bryce Wylde’s 21 Favourite Papers”

  1. Moderation says:

    Excellent review. Often on blogs such as this, the authors don’t have the time or inclination to methodically dismantle the “shotgun blast of research papers” approach that supporters of alt med practices sometimes use in the comment sections. I believe alt med mavens think that if they throw enough crap research at the wall, some of it will eventually stick. Thank you for your time and effort.

  2. Dianne Sousa says:

    I’ll echo the previous commenter on the quality and utility of this review. I wonder, to what degree did Mr. Wylde expect you (or anyone else) to read and consider each paper? Judging from the batch of citations that were submitted by him, I suppose that he expected the number to be impressive in itself, such that you wouldn’t bother with a careful consideration of them.

    I think that the arguments regarding homeopathy being cost effective are the most curious. Even if cost effectiveness is shown, at some point one would have to explain why homeopathy is so cost effective. Proponents put this argument forward without realizing that this is a red flag for skepticism. It might also be a good target for chipping away at the facade of legitimacy that homeopathy has.

    • Kim Hebert says:

      I agree Dianne, there are many sub-topics of homeopathy that could be discussed in much greater depth, which we will likely address separately in the future.

  3. Rob Tarzwell says:

    Way to go, Kim! Personally, I could never pull this off, because the increasing dose of inanity would start to make my blood boil.

    Newsflash: Taking Water for Serious Diseases Does Nothing!!

    I did have to laugh at the diarrhea trial where they used a placebo. If the placebo was a tablet, then at least the active treatment arm would be getting a bit of rehydration.

  4. And here’s Bryce’s response:
    Bryce Wylde

    Enough snark?

    • Erik Davis says:

      So let me get this straight. Kim called Bryce out for touting unsubstantiated treatments on TV. Bryce responded to say, “They’re not unsubstantiated, here’s 21 citations that prove homeopathy works.” Kim, despite already knowing that the broader body of literature disagrees, suspended her disbelief and went painstakingly through ALL 21 PAPERS — only to find that they didn’t prove much of anything because they’re either off-point or so riddled with methodological errors as to have no evidentiary value. But rather than respond substantively, as she did, he just taunts her and says there are yet more citations.

      After 21 papers — the ones he told you were his “favorite scientific documents” — I think he’s had a more than fair shot, and there’s no reason to believe that if you went through another 30 or 40 the picture would change. Nor would his advocacy, because in the end, he’s just proven that he really doesn’t care about evidence as long as the Vaughan Medical Centre thrives.

      I started reading this thread doubting his science, and ended it doubting his motives. Those papers aren’t evidence, they’re shiny objects designed to mesmerize his patients — not meant to be read, but admired.

  5. gmcevoy says:

    What an unconscionable douche Wylde is. It boggles that two other unconscionable douches of the Naturoquackery variety, Rochoutas(?) & Tardik, laugh at his branch of the $CAM tree as per that revealing Coren roundtable.

    Not really boggling though is it? It’s the nature of the beast.

    Rather than respond to any of the excellent critical analysis, Wylde provides a jovial kiss-off link to more studies which is literally the same list of favourites with 30 or so more studies. It seems his 21 faves were lazily clipped from the top of the new list to make it appear he gathered them for you.

    It would seem Wylde hasn’t read his new, or more complete, list of shiny objects (shiny) either for he includes Shang et al. A meta study from 2005 that concludes:

    “Biases are present in placebo-controlled trials of both homoeopathy and conventional medicine. When account was taken for these biases in the analysis, there was weak evidence for a specific effect of homoeopathic remedies, but strong evidence for specific effects of conventional interventions. This finding is compatible with the notion that the clinical effects of homoeopathy are placebo effects.”

    The beast always talks up science and then ignores or misrepresents the results it provides. Always.

    What? My list of 21 fave studies is unsupportive or contradictory of homeopathic efficacy? Here’s a bunch more… is not the response of someone that takes their profession or this conversation seriously.

    I can see why Shang et al – http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(05)67177-2/abstract – wouldn’t be in Wylde’s top 21 studies.

    But given that Shang et al concludes homeopathy is nothing more than nothing, I canna ken why it came in at 26 at all. There is a good chance many of the other referenced papers prior to 2005 were included in Shang. When Ms. Hebert’s analysis is added, I have to conclude the rest of Wylde’s unfavourite papers will be just as void of backing evidence.

    I also have the backing of two professional doctors, Rochoutas(?) and Tardik.

    • Erik Davis says:

      gmcevoy: Appreciate the comments and references. Would appreciate if you could avoid the name calling in the future though…we’re open to all comments, but we try to keep things civil. Thanks!

  6. Papers in support of homeopathy medicine published in lancet not listed by Bryce Wylde

    http://www.ncbi.nlm.nih.gov/pubmed/9310601(1997) //homeopathy is 2.45 times more effective than placebo

    http://www.thelancet.com/journals/lancet/article/PIIS0140-6736%2886%2990410-1/abstract (1986) //hayfever

    • Jon C says:

      The first link doesn’t work.

      The second link to The Lancet is for a study from 1986 with few details on how it was performed, other than that it uses data gleaned from “assessed symptom scores”, which to a layman such as myself seems highly subjective.

      Coincidently, on the same linked page, there are links to two other studies on homeopathy.

      The first paper is a 1997 meta-analysis of a placebo controlled trial of homeopathy.

      The conclusion from this study is: “The results of our meta-analysis are not compatible with the hypothesis that the clinical effects of homoeopathy are completely due to placebo. However, we found insufficient evidence from these studies that homoeopathy is clearly efficacious for any single clinical condition.”

      The second paper is a “Comparative study of placebo-controlled trials of homoeopathy and allopathy” from 2005

      The conclusion from this study is: “there was weak evidence for a specific effect of homoeopathic remedies, but strong evidence for specific effects of conventional interventions. This finding is compatible with the notion that the clinical effects of homoeopathy are placebo effects.”

      These studies are hardly endorsements for homeopathy. Following the time line, it appears the more we study homeopathy the more apparent it becomes that it is not effective as a remedy for anything.

      • Kim Hebert says:

        For the first linked paper (strip off the 1997 part), I think Nancy must be focusing on the sentence “The results of our meta-analysis are not compatible with the hypothesis that the clinical effects of homeopathy are completely due to placebo.” and not the sentence “However, we found insufficient evidence from these studies that homeopathy is clearly efficacious for any single clinical condition.” Note that the odds ratio was 1.66-1.78 for higher quality studies, indicating that as research quality goes up, the effect of homeopathy goes down.

        There is legitimate scientific discussion to be had here: Why the lack of support for the application of homeopathy to a particular ailment? Why is “not completely due to placebo” being uncritically equated to objective clinical efficacy (by Nancy, apparently)? Why do the effects decrease with increasing study quality?

    • Brian says:

      So saith not-a-doctor Nancy Malik, or to use her professional title, Nancy Malik.

  7. Dave says:

    Dear Kim – I would like to make several comments critical of your post. It seems to me that your comments regarding evidence imply that the only evidence worth assessing is Cochrane systematic reviews and the highest-quality, replicated clinical trials. In fact, according to Sackett DL, Straus SE, Richardson WS, et al. (Evidence-based medicine: how to practice and teach EBM), evidence-based medicine comprises many different forms of evidence and data. Case reports are evidence and data. Observational studies are evidence and data. Clinical experiences are evidence and data. All of these things together inform our decisions regarding whether a treatment is valid. Of course there is a hierarchy, and the most unbiased evidence deserves to be at the top of the list and should be given the most attention; however, that does not mean the rest is not valid.
    I feel that you did not approach this “task” with a spirit of scientific curiosity or inquiry, but rather that you were motivated by simply finding any possible critique of the 21 studies outlined. That is one of the simplest tasks given the power of Google these days, and I’m sure it did not take you long to come up with your commentary. No study is perfect. If you were similarly motivated, I’m sure you could come up with an equally exhaustive exposé on almost any topic in medicine (or science in general). That said, I would like to rebut the comments on your first example. Regarding the Jacobs study, you comment that the treatment was effective against placebo but may not be as effective as oral rehydration. Are you conceding that homeopathy works better than placebo? Next you say that later investigation showed negative results, but as you may have noticed when you were doing your research, that later investigation was a completely different protocol using a completely different (albeit homeopathic) intervention. Therefore it is rather disingenuous of you to mention it. Regarding the systematic review quoted, the only fault found in that review was that it hasn’t been replicated by another group. That is an issue of funding, not science.
    Studies such as Jacobs et al. should make scientists go “hmmm… interesting”. Those studies have some very compelling results in spite of their relatively minor methodological problems. It’s up to objective scientists to put aside their biases, be curious about compelling results, and not dismiss them out of hand.

    • Kim Hebert says:

      I hope you will appreciate that I did not have the space to go into detail for each paper. Many links are provided for more information. Readers are free to look at that information and come to whatever conclusion they’d like. I encourage readers to consider each paper in the context of the full body of literature (as full as can be reasonably assessed by one person).

      I did not say that reviews were the only evidence worth assessing, I said that looking at reviews was a high quality way to assess a lot of information in context in a short time. I did not have the space to give a full history on evidence, but in a nutshell, homeopathy does not fit the pattern of a promising intervention: positive results persisting with increasing study quality.

      There is almost no such thing as a perfect paper, as you say; however, by looking at flaws we can see which common issues within a particular field must be addressed (and/or are failing to be addressed). Presenting a list of citations without such critical appraisal tends to exaggerate support. Many of the reviews/papers on homeopathy consistently (and as recently as in the past year) cited poor quality and small treatment groups as reasons for being unable to draw solid conclusions about efficacy. These flaws are absolutely reasonable to point out, given that these criticisms have been present in the literature for years.

      What clinical/ethical justification is there to sell insufficiently supported products to health consumers? Why do a few positive studies (with results possibly due to chance because of low numbers and poor methodology) trump several more better-designed negative studies? These are not questions asked out of stubborn refusal, but of legitimate inquiry.

      It’s telling that this basic level of scientific criticism of CAM treatments, which is applied consistently throughout scientific discourse, is often perceived as closed-minded dogmatism or bullying. What are your views on the other papers? Is there anything in particular that you think I should read? Rather than question my motives, provide me with the scientific context that you apparently feel I’m missing and let’s have a discussion.

      • Dave says:

        Hi Kim – Thank you for your response. I guess since we are discussing the wider realm of publication bias, the suitability of selling insufficiently supported products, and scientific context, I would have you take a look at the following paper – http://help.senate.gov/old_site/Hearings/2009_02_23/Kemper.pdf – and two ideas that arise from it.
        The first is the issue of publication bias. The authors note: “It should be noted that publication bias in CAM research is opposite that of conventional medicine; that is, negative studies are more likely to be published in well-known journals, and positive studies are more likely to be published in foreign-language journals.” This is based on the following study: http://www.jclinepi.com/article/S0895-4356(05)00059-4/abstract. Although there hasn’t been a study performed (to the best of my knowledge) on reverse publication bias in homeopathy trials in particular, this study does suggest that CAM research may be subject to reverse publication bias. Therefore I don’t buy your argument that better-designed negative studies are being trumped by poorer-quality studies.
        The second issue I would highlight is safety. Figure 1 of the Kemper paper refers to a “common sense guide to CAM treatment recommendations” quoted from the following paper: Kemper K, Cohen M. Ethics meet complementary and alternative medicine: new light on old principles. Contemp Pediatr. 2004;21:65. That chart recommends that treatments be tolerated, even absent evidence of effectiveness, so long as they are safe.
        This would be my argument for a clinical/ethical justification for allowing homeopathic medicines in the marketplace. Because the products are safe, because there is (I would argue) a lot of “low level” evidence to support effectiveness (with two of Bryce’s 21 articles defending that thesis: Spence (2005) and Witt (2005)), and because there is some compelling evidence that may show efficacy, homeopathic products should not be denied to health consumers. Because these products are safe, homeopathic products should be given a long leash when it comes to availability while research is ongoing.
        I’m not sure if giving a counter-argument to your critique of the other papers would be productive. As you have pointed out, most (perhaps all) of the efficacy studies are under-powered, be they negative or positive for homeopathy (I don’t know of any that performed any type of sample-size calculation prior to launching into a trial). I don’t know of any trial that claims to be proof that homeopathy works.

      • Kim Hebert says:

        Do these studies show bias against homeopathy, or that homeopathy is more inherently supported in some cultures and by some researchers, possibly leading to positive research bias? With such results, one must consider the possibility of bias on either or both “sides”.

        If homeopathy claims to be efficacious, then it is also claiming that it does something. That something can be measured objectively. The pattern of effects decreasing with increasing quality suggests that homeopathy is not clinically valuable because its effects are strongest when there are more subjective biases present and weakest when measured objectively. Homeopathy is not specially exempt from the same standards that are applied to other medical treatments.

        Safety is only one aspect of importance in medical treatment. There is also efficacy, for which homeopathy promoters have had 200 years to provide solid evidence. If I create a mythos surrounding a rock that keeps pain away, am I justified in selling it for profit because it causes no harm even though there’s no evidence that it works beyond placebo? After all, people can get similar rocks from their backyards for free. How much more research should be devoted to Geotherapy if study after study is either too poorly done to draw solid conclusions or shows that there are no therapeutic effects?

        One must also consider the possible psychological consequences of homeopathic treatment without appropriate evidence of clinical efficacy beyond the placebo effect. There is the nocebo effect, in which the patient experiences undesirable effects. There is also the possibility that an underlying condition can worsen while the patient subjectively feels better. Furthermore, if the treatment doesn’t work but a patient believes it should have, they could become convinced that their condition is worse than it is. Are these effects ethical? Are they harmless?

        “I don’t know of any trial that claims that they are proof that homeopathy works.” I never said they did. As I did say, readers should consider each of these studies in the context of the broader literature to observe patterns in the research. The pattern suggests that homeopathy is not effective.

  8. gmcevoy says:

    Mr. Davis, as you wish. Just consider the d-word a vernacular version of charlatan, mountebank, snakeoil salesman…

    Those aren’t just names being called; they’re quite accurate descriptors for someone like me to use, someone who questions the motives of those (the descriptees?) whose only evidence for their claims seems to be their relative fame and the size of their offices.

    As one wag said of homeopathy, “At least snakeoil has some snakeoil in it.”

    The kind of intellectual footsie displayed by Wylde and his ilk irks me because it defecates on honest scientific inquiry and what it means to be a doctor. Beast is a valid metaphor for an industry which does that.

    I have been reading aboot this for quite some time now and it comes as no surprise Wylde includes links to papers that don’t support homeopathy, but actually show it is as effective as nothing. Many, many do, and have done, ever since Hahnemann vomited it up a coupla hundred years ago or so.

    As Ms. Hebert notes, Wylde “must be focusing on the sentence ‘The results of our meta-analysis are not compatible with the hypothesis that the clinical effects of homeopathy are completely due to placebo.’ and not the sentence ‘However, we found insufficient evidence from these studies that homeopathy is clearly efficacious for any single clinical condition.’”

    Cherry picking is ubiquitous, pandemic amongst that crowd. Much like the religious cultists, they must forage for fruit ’cause that’s the only way to make it work.

    A recent example:

    “Regarding the systematic review quoted, the only fault found in that review was that it hasn’t been replicated by another group. That is an issue of funding, not science.”

    That really isn’t all it had to say, was it?

    “Studies of homeopathy, including data of adult populations, concluded that studies with better methodological quality tend to yield less positive results.39-41 Our systematic review of double-blind, placebo-controlled RCTs that assessed only children and adolescents also does not show convincing evidence of effectiveness and therefore does not allow any recommendations. Other reviews of homeopathy for pediatric populations have reached more favorable conclusions.16,42,43 However, these reviews were not systematic and therefore are open to bias.”

    “Homeopathic remedies are generally regarded as safe.44 Only a few mild adverse events were reported in the reviewed RCTs (Table 1). This finding is supported by several postmarketing surveillance studies, which reported only a few adverse events.45,46 However, homeopathy is not totally devoid of risks. According to homeopathic beliefs, aggravations of symptoms occur in approximately 20% of patients.47 Also, it may delay effective treatment or diagnosis.48,49 One example for this is the reluctance of some homeopaths to recommend immunizations.50,51”

    This is why Wylde included Shang in his list, there must be one sentence in there somewhere that can be squinted at and spun positively. Or he Googled and included it without reading the paper at all.

    I really hope the folks handling my guineapiggishness don’t behave as cavalierly with their jobs…

    The one good thing you can say aboot homeopathy, IIRC, is that it led to the development of RCTs. Ironically it would seem.

  9. K.O. Myers says:

    The investment in time and energy to compile this list will pay dividends for years to come. Any time any one of us is confronted with this list, or any of the studies on it, we’ll be able to say “actually,” and link to Kim’s outstanding dissection. Thanks very much for your effort here, Kim. You’ve done us all a valuable service.

  10. Dave says:

    I hesitate to continue this thread because I feel your argument is becoming polemic and you haven’t defended any of your statements with any scientific references.
    However, I will respond. The statement that “effects decrease with improved trial quality” I’m guessing refers to the 2005 Shang et al study (http://www.homeovet.cl/BRIONES/Are%20the%20clinical%20effects%20of%20homoeopathy%20placebo%20effects%20Comparative%20study%20of%20placebo-controlled%20t.pdf). I would like to remind you that the study showed that both conventional trials and homeopathic trials show decreasing effects as their trial quality improves. I hope you are not saying that conventional medicine is also ineffective, since its apparent efficacy also decreases as trial quality increases.
    Furthermore, the Shang study has been critiqued in the peer reviewed Journal of Clinical Epidemiology (http://www.ncbi.nlm.nih.gov/pubmed/18834714) drawing into question the validity of their results.
    Regarding homeopathy being the equivalent of geotherapy, what do you want me to say? I have already stated that I think there is compelling evidence for efficacy, and you insist that there isn’t. We each accuse the other of bias. I am trying to back up my statements with articles from peer-reviewed journals.
    Regarding the potential detrimental consequences of homeopathic treatment, I ask that you please back up your statements with research. The two long term observational studies I have mentioned (Spence 2005 and Witt 2005) show a net positive benefit of homeopathic treatment. I haven’t come across any research that shows that either a nocebo effect or that homeopathic treatments put patients at risk of masking underlying health conditions. If you have any work on the subject, I would be more than happy to take a look.

    • Kim Hebert says:

      I have defended my statements with several scientific references. See the article above.

      The statement “effects decrease with improved trial quality” refers to the body of literature, not just Shang. As I’ve said many times, one study cannot be considered in a vacuum – we have to consider the context. This is not my opinion, this is how science is done. In any case, the study concluded that there was still strong evidence of specific effects with conventional treatments, but not for homeopathy – that is the difference. So no, I am not saying what you think I am.

      Conventional treatments aren’t free of scrutiny, by the way. It’s entirely possible for there to be bias in conventional treatment research. And when that is the case, it absolutely needs to be rectified and examined by independent critical appraisal in much the same way as I have done here with Bryce’s list. All medical interventions require scrutiny to protect the public. Bias isn’t an accusation or something to be ashamed of, it’s something to be recognized (or called out by independent parties) and avoided with appropriate objective controls.

      Two studies (Spence and Witt) do not negate the entire context of the literature, especially ones that had no comparison group to distinguish the effects from placebo intervention.

      Some nocebo effects are suggested by some of the papers above, in which some patients experienced temporary exacerbation of symptoms after beginning placebo or homeopathy treatment. This review suggests there are about as many homeopathic “aggravations” as placebo in the research they covered, suggesting a possible nocebo effect rather than a clinical side-effect of treatment. Though, I would like to see more research on the matter. There are many anecdotal examples of people who have been harmed by delaying conventional treatment. And there is at least some research to support the trend of delayed conventional treatment in patients who use alternative medicine. Unfortunately I do not have time for a full review on this topic, but hopefully the papers I’ve linked can direct you to further literature.

      In any case, “doesn’t hurt you” is not the same thing as clinical justification for use, especially given the poor evidence of efficacy. Magic rocks “don’t hurt” anyone either, but should we sell them for profit? Is that ethical?

      • Dave says:

        I agree with you, Kim, that we cannot study in a vacuum and that we have to consider the context. I also agree that bias is present in all research, be it conventional, alternative, or whatnot. I also agree that bias is something to be recognized and pointed out.
        I too would love to see more research on homeopathic aggravations. There have been a few more studies done in this area (http://www.balcas.com.co/cinac/pdf/adversos_homeopatia.pdf
        http://www.clubdelarazon.org/downloads/homeopatia45-95.pdf) and they all suffer from poor methodology and are thus not conclusive either way. Thankfully, reporting guidelines for all clinical trials are pushing the inclusion of adverse events, including the inclusion of homeopathic aggravations.
        I think we are going to have to agree to disagree regarding the context of the literature. When I look at the literature in its full context, I see a lot of compelling evidence at all levels – definitely enough to keep pursuing research, and definitely enough not to deny individual patients the option of seeking out homeopathic treatment. I furthermore think that this evidence, even if it is inconclusive, should be given more weight when assessing the overall acceptability of an intervention, given the admittedly low adverse-event profile (a low incidence of adverse events is seen both when clinical trials report them and in federal reporting mechanisms). Again I would direct you to the Pediatrics article quoted previously.
        There is a lot of anecdotal evidence of harm. Scanning the list you provided, it seems to me that many of those harms occurred in an unregulated environment or else by under qualified health professionals. Most, if not all, of the harms mentioned in the link you provided were the result of poor judgement on behalf of the health professional involved. These scenarios are an argument for the regulation of a homeopathic profession and not for a claim that homeopathic treatment in itself is harmful.
        In their wisdom, the Ontario government is proceeding with the regulation of the homeopathic profession (http://www.collegeofhomeopaths.on.ca/) in order to protect the public and to integrate homeopaths into the medical referral system. The regulated profession will also ensure educational standards, which will help to mitigate the types of events listed in your reference. I think the issue of delayed medical treatment will also be addressed by a regulated profession of homeopathy. As homeopaths become integrated into the health care system, they will be required to be trained in recognizing medical emergencies as well as conditions that need prompt conventional medical treatment. Once integrated more fully into the medical system, homeopaths will be in a better position to refer promptly under those conditions. There has not been a lot of research in this area, so it is hard to say whether my assumptions are well founded. As far as I know, the sky hasn’t fallen since chiro, naturopathy, or TCM became regulated.

    • Scott Gavura says:

      The critique by Lüdtke and Rutten does not invalidate Shang et al. It actually reinforces it. David Gorski discussed it in detail here:


      • Dave says:

        Thanks for the reference. This commentary would have more validity if it were published in a peer reviewed journal or publication. Do you know if it was published elsewhere?

  11. gmcevoy says:

    I was reading Dave’s latest comment, and when I got to:

    “Furthermore, the Shang study has been critiqued in the peer reviewed Journal of Clinical Epidemiology (http://www.ncbi.nlm.nih.gov/pubmed/18834714) drawing into question the validity of their results.”

    I wonder if that’s what this other paper really says and then am happily unsurprised to find out, thanks to Scott, that it doesn’t say that at all:

    “The amazing thing about Lüdtke and Rutten’s study, though, is just how much handwaving is involved to try to make this result sound like a near-refutation of Shang et al.”


    “…the entire discussion is nothing more than an attempt to handwave, obfuscate, and try to convince readers that there is some problem with Shang et al that renders its conclusions much less convincing than they in fact are. Indeed, I fear very much for them. They’ll get carpal tunnel syndrome with all that handwaving. We’re talking cherry picking subset analysis until they can find a subset that shows an ‘effect.’”

    It is also worth noting (also from Scott’s link):

    “In any case, the study to which he refers, entitled The conclusions on the effectiveness of homeopathy highly depend on the set of analyzed trials, and coming from a clearly pro-homeopathy source, Dr. Rutten of the Association of Dutch Homeopathic Physicians, it’s hot off the presses (the electronic presses, that is, given that this is an E-pub ahead of print) in the October issue of the Journal of Clinical Epidemiology. Suffice it to say that, as always, Mr. Ullman is reading far more into the study than it, in fact, actually says.”

    A paper from a pro-homeopathic source and Dana Ullman. No chance of bias there.

    “I would like to remind you that the study showed that both conventional trials and homeopathic trials show decreasing effects as their trial quality improves.”

    refers to this about Shang:

    “The higher the quality of the study and the greater the number of subjects, the closer to 1.0 its odds ratio tended to be. The same was true for trials of allopathy as well, not surprisingly.”

    But when homeopathy is examined further:

    “However, analysis of the highest quality trials showed a range of odds ratios with a 95% confidence interval that overlapped 1.0, which means that there was no statistically significant difference between them and 1.0; i.e., there was no detectable effect.”

    and conventional medicine?

    “For the very highest quality trials of allopathy, however, there was still an odds ratio less than 1.0 whose confidence level did not include 1.0.”
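(An editorial aside on the statistics in these excerpts: an odds ratio whose 95% confidence interval includes 1.0 indicates no detectable effect. The sketch below uses invented 2×2 counts, not data from Shang et al., and the standard Wald interval for the log odds ratio, just to make the check concrete.)

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% Wald CI for a 2x2 table.

    a, b = improved / not improved on treatment
    c, d = improved / not improved on placebo
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for a 2x2 table
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts for illustration only (not real trial data)
or_, lo, hi = odds_ratio_ci(30, 70, 25, 75)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
if lo < 1.0 < hi:
    print("CI includes 1.0 -> no detectable effect")
```

With these made-up counts the interval spans 1.0, so such a trial would be read as showing no detectable effect, which is the situation described above for the highest-quality homeopathy trials.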

    Interesting that Shang was blown out of the water in Dec 2008 and here it is Aug 2010 and Wylde still cites it as a near favourite paper on the scientific validity of homeopathy that doesn’t actually endorse homeopathy.

  12. There are in fact many good misses by Bryce Wylde:

    Rheumatology (Oxford University Press)

    http://bit.ly/9cs6C2 (2004) FULL TEXT // LM potency for fibromyalgia

    http://rheumatology.oxfordjournals.org/cgi/content/full/39/7/714 (2000) FULL TEXT //osteo-arthritis of the knee

    • Scott Gavura says:

      Link #1. Bell et al. Improved clinical status in fibromyalgia patients treated with individualized homeopathic remedies versus placebo. Thanks for linking to the PDF. Look at table 2: there are no significant differences between the two groups on any of the nine outcome measures.

      Link #2. van Haselen et al. A randomized controlled trial comparing topical piroxicam gel with a homeopathic gel in osteoarthritis of the knee. Look at Figure 2. The confidence intervals overlap. That means there’s no significant difference between the groups. Given there’s no placebo group, there’s no evidence the homeopathic product provides efficacy beyond that of the control. It could be that the control product is itself a placebo.

  13. It seems that Bryce Wylde is quite preoccupied with defending his image without ever resorting to science. I posted a criticism of what could be considered a rather minor “mistake” he made on his television show, and he popped up there as well. Without a list this time, though – simply with the “recommendation” to read “Homeopathy”, a journal filled largely with pathetic imitations of scientific research.


  1. [...] to actually go through a huge list of references, and appraise all of them. Kim has written up a comprehensive post that critically appraises every article.  Guess what? Homeopathy hasn’t been shown to work more effectively than a placebo.  Listen [...]

  2. [...] Here are the papers that homeopaths use as evidence debunked in one handy blog post. [...]

  3. [...] Here are the papers that homeopaths use as evidence debunked in one handy blog post. [...]

  4. [...] Evidence Check: Bryce Wylde’s 21 Favourite Papers This entry was posted in Thoughts and tagged Bryce Wylde, Homeopathy, research, Skeptic North. Bookmark the permalink. ← Regarding Today’s Election [...]

  5. [...] priori criticisms. Bryce Wylde even linked to his same old list of homeopathic evidence, including 21 papers which were already demonstrated to lack scientific rigor and to sometimes have nothing to do with homeopathy at [...]

  6. [...] bias in papers found to have artificial 'proof' results including systematic methodological error: Evidence Check: Bryce Wylde Examination of failures including illegal marketing claims: Homeopathy: The Ultimate Fake Review [...]

  • Kim Hebert

    Kim Hébert is an occupational therapist. She is interested in the promotion of science and reason, particularly regarding therapeutic health interventions. She blogs occasionally about occupational therapy and other health topics at Science-Based Therapy. Her hobbies are art and astronomy. All views expressed by Kim are her personal views alone, and do not necessarily reflect the opinions of current or former employers, associations, or other affiliations. All information is provided for discussion purposes only, and should not be used as a replacement for consultation with a licensed and accredited health professional.