Bob's Medical Humbug Detection Kit

Read this first!

Medical humbug is a particularly nasty bit of work. Unfortunately, there's quite a lot of it. It goes without saying that medical humbug can cost tons of money and worse, tons of lives. Even if the "treatment" itself is harmless, a great harm may be done in that the patient no longer seeks real medical attention.

To make matters even worse, medical humbug can be among the most difficult to detect. Proper medical training takes many years, and it can frequently take a thorough understanding of medicine in order to differentiate between real medicine and junk science. In other words, laypeople are in a lot of danger.

But all is not bad. Though it can take medical training to fully understand the implications of medical humbug, there are several tools we can add to our kits that will help us spot it before it's too late. They're not foolproof. But they're better than nothing.

Let's look at some of the tools.

Too Good To Be True

One of your greatest tools for the detection of medical humbug is knowing when something seems too good to be true. If someone says they have a cure for cancer, it sounds too good to be true. And it probably is. Such a claim shouldn't automatically be discarded if there's real evidence to back it up, but you should be skeptical.

Miracle cures usually aren't.

Placebo Effect

There is such a thing as a placebo effect. As James Randi tells us, it works great for imaginary ailments. But it's useless against a real disease.

Many fraudulent medical claims come supported by tons of testimonials. Testimonials are anecdotal evidence, precisely what science tells us not to trust. The reason these testimonials exist at all is largely the placebo effect.

Put simply, the placebo effect is getting a positive response from nothing. Studies conducted on some new drug, for example, are usually double-blind studies. What this means is there are two groups of patients. One group gets the drug, while the other gets a sugar pill (the placebo). Neither the patient nor the administrator knows who gets what (preventing the administrator from unknowingly tipping the patient off).

If the drug works while the placebo doesn't, that's considered good evidence that the drug may actually be doing something, and research will continue. If both work equally well, on the other hand, it's a good sign that the effect is imaginary.
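The mechanics of such a study can be sketched in a few lines of code. This is a minimal illustration of random, blinded assignment, not a real trial protocol; the patient names and container codes are invented for the example.

```python
import random

# Minimal sketch of double-blind assignment. Patients are randomly split
# into drug and placebo groups; the key linking patients to groups is
# held by a third party, so neither patients nor administrators know
# who got what until the trial is unblinded.
patients = [f"patient-{i}" for i in range(10)]
random.seed(42)  # fixed seed so the example is reproducible
random.shuffle(patients)

half = len(patients) // 2
key = {p: "drug" for p in patients[:half]}
key.update({p: "placebo" for p in patients[half:]})

# What the administrator actually sees: identical-looking pills in
# coded containers, never the key itself.
blinded_labels = {p: f"container-{i}" for i, p in enumerate(patients)}
```

The point of the shuffle plus the withheld key is that no one interacting with a patient can tip them off, even unknowingly.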

A placebo effect is, to a certain degree, "mind over matter." If you believe a treatment will help you, you may feel a positive result; if you don't believe, you'll likely feel none. It can help to relieve symptoms, but it can't cure a disease. It also seems genuinely useful for some psychological conditions, but that's outside the scope of this writing.

Basically, a placebo effect can help with something like pain relief, but not something like cancer. What many people don't realize is that most minor ailments clear up on their own in fairly short order. The body has a wonderful healing mechanism. When that happens, whatever treatment you're receiving at the time gets the credit.

The trouble with the placebo effect is that, through testimonials, it seems to give undue credit to claims that have no other merit at all.

Natural Cures

Nature is wonderful. We've evolved alongside nature in such a way that nature can provide us with many useful tools for healing. This cannot be denied. However, the trick is to know when nature is going to help us, and when we need to turn to advanced medical science for the answer.

It's difficult to know, that's for sure. Studies can be done on individual cases or claims to determine the truth, but there's no one-size-fits-all answer well suited to our toolbox.

A general rule of thumb, however, is to always be cautious of natural "cures."

If their claim is that they can help keep you healthy, they may be right. Proper nutrition is very important and very useful (in this case, I must tell you to do as I say, not as I do). However, when they start claiming they can actually cure something, a red flag should go up. They're not necessarily lying, but it's precisely the sort of claim many quacks like to make.

"Alternative Medicine"

When someone mentions "alternative medicine," your antennae should immediately go up. Ask yourself the question: Why is it alternative?

It's true that there are many EXPERIMENTAL forms of medicine. That's all well and good. That's how science moves on: through experimentation. When someone says they're researching an experimental method, there's full disclosure. They are making the patient aware that this is only experimental. They might think it will have a positive effect, but they're making no promises. It's an experiment.

However, when they discuss alternative medicine, they're giving the impression that this is a valid and accepted procedure, albeit one that not everyone practices. This can be very misleading.

Not all alternative medicines are quackery. However, once one has been proven to work, it's no longer "alternative medicine." It's a valid and accepted part of the collected medical knowledge.

There are alternatives IN medicine (you could try drugs or surgery, for instance). But there's not really any such thing as alternative medicine.

Statistics

Many medical claims appear to be scientific because they're backed up by statistics. However, you need to add a good understanding of how these statistics work to your toolkit. If you understand how they work, you'll know the right things to look for and the right questions to ask when confronted by such a claim.

The first rule: Remember what Disraeli told us: "There are three kinds of lies: lies, damned lies, and statistics."

The simple fact is that statistics themselves are pure. If gathered properly, the numbers don't lie. However, they can be manipulated by people who do.

There are a few good things to do when confronted with statistics.

First, demand definitions. If you look back at the general humbug detection kit, you'll notice there's a section on "weasel words." Well, statistics bear this same problem. The numbers may be true, but the definitions of what they describe may have been changed.

Another thing you must know is that statistics don't demonstrate cause and effect. Statistics demonstrate a correlation. A different type of study must determine what the actual cause was.

Understand epidemiology! Epidemiology is the statistical study of disease: it measures the incidence and distribution of a disease in a population. There are four main types of such studies: clinical trials, case-control studies, cohort studies, and ecologic studies.

Clinical trials are the most scientific of the lot. These are experimental studies in which an experimenter assigns exposures to patients and measures the results.

Case-control studies are observational. They begin with a group of patients with a disease, and examine the patients' histories to see if some connection may be found.

Cohort studies are similar to case-control studies, but reversed. They begin with disease-free patients, and follow them in the future to determine if certain exposures can be linked to the disease.

Ecologic studies are the worst of the lot. They follow populations, rather than individuals. They try to determine if a disease rate is higher in a particular population than another and draw connections to certain possible causes.

Understand that none of these are inherently wrong. However, some are more conclusive than others. For instance, an ecologic study may point a scientist in the right direction, giving him an idea of what to examine more closely in the future. It should not be considered conclusive on its own, though.

Understand relative risk! Relative risk (RR) is a statistical comparison between two groups of people. It is not the same thing as risk, however: it's the ratio of the disease rate in one group to the disease rate in the other, which only suggests that a risk may exist.

Relative risk is generally written down in the form of a simple number, such as "0.5," "1.0" or "3.5."

It's important to know what these numbers mean. A RR of 1.0 shows no statistical difference in the disease rate between the two groups. For example, if a study is conducted to determine if say, Chemical X causes cancer or not, a RR of 1.0 means that those exposed to Chemical X are no more likely to get cancer than those not exposed.

A RR of 0.5 shows a protective effect. Those exposed to Chemical X are 50% less likely to get cancer than those not exposed. If the RR is 2.0, those exposed to Chemical X are 100% more likely (double) to get cancer than those not exposed.
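The arithmetic behind those numbers is simple enough to write down. Here's a minimal sketch, using made-up exposure counts (the 20-in-1,000 and 10-in-1,000 figures are invented for illustration):

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Ratio of the disease rate in the exposed group to that in the unexposed group."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Hypothetical numbers: 20 cancer cases among 1,000 people exposed to
# Chemical X, versus 10 cases among 1,000 people not exposed.
rr = relative_risk(20, 1000, 10, 1000)
print(rr)  # 2.0 -- the exposed group's disease rate is double the unexposed group's
```

Notice that the function says nothing about *why* the rates differ; it only compares frequencies, which is exactly the limitation discussed below.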

Note: When I say "more likely" or "less likely," I'm engaging in a bit of rhetoric. Since statistics aren't science, I feel I should point out that these numbers are only measuring the frequency in which a disease occurs in a particular population.

See how it works? Well, there's a general rule of thumb regarding "statistical significance." Relative risks ranging between 0.5 and 2.0 are almost always discarded as not statistically significant: too many other factors could account for such small differences. Some regard even an RR as high as 3.0 with strong suspicion.

Even if we do get a study that demonstrates strong statistical significance, the research isn't over yet. You need to make sure you're not confusing correlation with causation, by figuring out what actually causes this association.

If you're measuring food poisoning and determine that people who ate the steak were four times more likely to have food poisoning than those who didn't (a RR of 4.0), you have something good to go on. Now your job is to look at that steak and find the "bug" that actually caused the illness. Only then can you scientifically state that the steak was at fault.

You should always beware of big or scary-sounding numbers. If someone says that people who wear socks to bed are four times more likely to commit suicide, it might sound scary. But "times more likely" doesn't mean much on its own: when the baseline risk is tiny, even four times a tiny number is still a tiny number.
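To see why, compare the relative jump with the absolute numbers. A quick sketch, with an invented base rate (the 1-in-10,000 figure is hypothetical):

```python
# Hypothetical base rate: 1 case per 10,000 people among non-sock-wearers.
cases_per_10000_unexposed = 1
rr = 4.0  # the scary-sounding "four times more likely"

# Absolute rate in the exposed group: still only 4 per 10,000.
cases_per_10000_exposed = cases_per_10000_unexposed * rr
print(cases_per_10000_exposed)  # 4.0
```

The relative risk quadrupled, but the absolute risk moved from 0.01% to 0.04%. The headline number and the real-world number tell very different stories.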

Old Medicine

People often talk about alternative medicine (particularly traditional Eastern medicine such as acupuncture) in terms such as this: "It's been around for thousands of years. Surely there's something to it!"

Well, so has religion, and that doesn't make it any more true.

I often wonder, if it's been around for thousands of years, why it hasn't evolved in all that time. All other aspects of medical science are completely different from what they were using in those days. Why do these practices still survive?

My "blanket" answer is that it's not real science. Real science evolves and changes as new discoveries are made, while superstition and religion (but I repeat myself) are rigid and unchanging. That's not entirely fair, but it's USUALLY true.

There's one exception to that rule: It may indeed be real science if it actually works, and there's no room for improvement.

Acupuncture's only changes since it was introduced in ancient China are: 1) It's explained and sold in more modern terms; and 2) The equipment used is more modern. The actual methods are exactly the same. And that makes me wonder…

Those are a few tools to get you started. As always, please add your own tools to the kit as you discover them.

An excellent reference I highly recommend you use is Quackwatch. Quackwatch is operated by Stephen Barrett, M.D., and is one of the best skeptical websites I've found for information on alternative medicine and general quackery.
