I’ve had a number of people email me about a new study appearing in the Archives of Internal Medicine purportedly showing that statins really do provide benefit to those who take them regularly. As you can see from the heading of an email piece I pasted above, even Medscape is all over this article and blasting it out to physicians all over the world.
I’m sad to say that this is the same kind of paper I would have been taken in by 20 years ago before I really understood how to read the scientific literature critically. In fact, I would have used it myself to justify giving statins to all kinds of people, and I’m sure other physicians are doing so right now. But I would have been in error to base my prescribing on this paper, and all the other docs out there giving statins like they were candy are in error as well.
If you don’t want to read a dissection of this study, let me just tell you up front that it doesn’t really mean a thing. It certainly doesn’t prove that you should rush out and get started on statins. If, however, you do want to learn about how perniciously deceptive these kinds of studies are and how to analyze them, read on.
Here’s the deal. Researchers went back and combed through the records of a large HMO in Israel and pulled those of patients who had been prescribed statins from 1998-2006. Since the HMO provided the statin prescriptions, there were records of how many of the people prescribed statins actually filled their prescriptions (and, one would assume, took the medications). Then the researchers figured out how many of those people died. The final step was to compare the list of those who died with the list of those who took their statin prescriptions (or, more accurately, those who filled their statin prescriptions). After crunching all these data, it turned out that patients who filled over 90 percent of their prescriptions were 45 percent less likely to die than those who filled under 10 percent of their prescriptions. To the uncritical reader (including, obviously, the Medscape writers and the peers who reviewed this piece for the journal in which it was published), this appears to be pretty persuasive evidence that statins confer some kind of benefit in terms of preventing death. After all, those who took them lived while those who didn’t died.
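To see how a headline figure like “45 percent less likely to die” falls out of this kind of record-matching, here’s a back-of-the-envelope sketch. The counts below are entirely made up for illustration – the paper’s raw numbers aren’t reproduced here – but the arithmetic is the standard relative-risk calculation:

```python
# Hypothetical counts, invented for illustration -- NOT from the study.
high_fill = {"n": 10_000, "deaths": 330}   # filled >90% of prescriptions
low_fill  = {"n": 10_000, "deaths": 600}   # filled <10% of prescriptions

risk_high = high_fill["deaths"] / high_fill["n"]    # death risk, high fillers
risk_low  = low_fill["deaths"] / low_fill["n"]      # death risk, low fillers

# "45 percent less likely to die" is a relative risk reduction:
relative_risk_reduction = 1 - risk_high / risk_low

print(f"relative risk reduction: {relative_risk_reduction:.0%}")
```

Note that nothing in this calculation tells you *why* the high fillers died less often – whether it was the drug or something about the kind of person who fills 90 percent of their prescriptions. That’s the whole problem with the study.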
As I say, these kinds of studies are pretty beguiling. But do they really mean anything?
Before we get to the specifics of this study, let’s contemplate this type of study in general to see why the data they generate is often misleading.
The gold standard for scientific studies is the randomized, double-blind, placebo-controlled trial. In this type of study, researchers randomize the study population into two similar groups and give the members of one group the drug being studied and the other a placebo. Double-blind means that neither the researchers nor the subjects know who got what. At the end of the trial, the data are analyzed to determine whether the study drug showed any difference in efficacy as compared to the placebo. If it did, then it can be said that the drug works to treat whatever condition was being studied – or that it decreases all-cause mortality, if that is the end point of the study.
It’s impossible to do these gold-standard studies with diet and/or exercise because a) they involve lifestyle changes and b) they can’t be double-blinded. When it comes to diet and exercise, there are basically two ways studies can be done. Researchers can allow subjects to self-select which arm of the study they want to be in, or researchers can assign subjects to one arm or the other. Neither of these choices is optimal, but they are all that are available.
If I decide that I’m going to compare a very-low-carb diet to a very-low-fat diet, I can recruit volunteers and ask them which diet they would prefer. If readers of this blog were recruited into such a study, I would assume most would opt for the very-low-carb diet. Those who are fans of Dean Ornish would opt for the other. What you end up with is people in each arm of the study who are already believers in the diet they will be following, and they will be more likely to remain on that diet until the end of the study. At the end, the data will be a little polluted because they really don’t prove that one diet is superior to the other – they only prove that people who self-select into a diet do better on it than people who self-select into the other. That last point is an important one, especially when applied to exercise. More about which in a moment.
The other way to study diet is to gather a group of people together and randomize them into one diet group or the other. That takes the self-selection bias out of the equation, but it creates other problems. If a person committed ideologically to a low-carb diet gets randomized into the low-fat group (or vice versa), there are problems with compliance. Most nutritional studies randomized this way end up with large numbers of dropouts. If you do an intention-to-treat analysis of the data (which includes the dropouts), you usually find little difference between the two diets. If you look only at those subjects who hung in there for the duration on whichever diet they were randomized to, it raises the issue of whether these subjects may have been the same ones who would have self-selected into that same diet if given the chance, which creates the same problems as self-selection. These issues make diet studies difficult to do and difficult to interpret validly. It’s even worse with exercise.
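A small numerical sketch shows how intention-to-treat analysis dilutes an apparent diet effect. The numbers below are invented purely for illustration – not from any real trial – and dropouts are imputed as having lost nothing, a common conservative choice:

```python
# Hypothetical diet trial, invented for illustration -- not real data.
# 100 subjects randomized to each arm; dropouts imputed as zero loss.
N = 100
arms = {
    "low_carb": {"completers": 50, "avg_loss_kg": 10.0},
    "low_fat":  {"completers": 70, "avg_loss_kg": 6.0},
}

# Per-protocol analysis: look only at those who finished the diet.
per_protocol = {name: arm["avg_loss_kg"] for name, arm in arms.items()}

# Intention-to-treat: average over everyone randomized, dropouts included.
itt = {name: arm["avg_loss_kg"] * arm["completers"] / N
       for name, arm in arms.items()}

print("per-protocol:", per_protocol)   # completers only
print("intention-to-treat:", itt)      # everyone randomized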
I get a ton of email and comments from people who can’t come to grips with the idea that there is no proof that exercise brings about weight loss. I say this because it is difficult to come by this proof. Even those who are adamant that exercise brings about weight loss agree that pretty intensive exercise is required to do so. The typical prescription to just get out and move a little more virtually everyone realizes is worthless. Most people believe that it’s intensive exercise that does the trick. Maybe so, but how do you prove it?
If you randomize people into an intensive exercise group and another into a no exercise group to see which loses the most weight (assuming diet is held constant), how many of those sedentary people are going to stick with the intensive exercise for any length of time. They will be the dropouts. If you allow people to self select, all the people who enjoy exercise will put themselves into the exercise group while those who hate it will put themselves into the sedentary group. Then if those in the exercise group do lose weight, how can you tell it’s the exercise and not due to some other component of a person who will commit to an intensive exercise program that brings about the weight loss? The answer is that you can’t tell. Which is why the notion that exercise brings about weight loss is similar to a particular religious belief: it is accepted as an article of faith, not as a product of scientific investigation.
You can send me a comment (as several people have done) telling me how you were stuck in your weight loss efforts at 220 pounds and then you decided to start high intensity interval training. After a couple of months of this, you lost 25 more pounds. Therefore that’s proof that exercise brings about weight loss. Wrong! That’s proof that in you exercise brought about weight loss. There may be something different about you that allows you to commit to such a regimen that others might have difficulty following AND allows you to lose weight. This sounds ridiculous, but it is true. And it is the key to understanding why this statin study is bogus in terms of whether or not taking statins makes people live longer.
Almost thirty years ago a study was published in the New England Journal of Medicine looking at this very idea. The study that inspired the article didn’t start out looking at this idea, but one of the investigators noted a key piece of the data and published on it. The study was looking at clofibrate, a pre-statin cholesterol lowering drug and all cause mortality. Subjects were randomized into two groups – those in one group got the drug, those in the other got the placebo. After the subjects were on either the drug or the placebo for five years, researchers calculated the mortality from the number of deaths in each group. Turned out that the five-year mortality of those on clofibrate was 20.0 percent whereas the five-year mortality of those on the placebo was 20.9 percent, or essentially the same. Taking the drug was no different than taking the placebo, i.e., the drug was worthless. Had one of the researchers not looked a little closer, that would have been the end of the story.
When the data were looked at from the perspective of how many people actually took the drug as prescribed, the researcher discovered that those subjects who took at least 80 percent or more of their clofibrate had a five year mortality of only 15.0 percent, substantially less than the overall five-year mortality. Those who took their clofibrate sporadically had a five-year mortality of 24.6 percent, significantly higher than those who took it as directed, a piece of data that would seem to confirm the efficacy of clofibrate. Right? Not necessarily. Let’s look at compliance with the placebo.
Turns out that those subjects on the placebo who regularly took their placebo had a five-year mortality of 15.1 percent while those who took their placebo sporadically had a five-year mortality of 28.3 percent. What this study really showed was that there is something intrinsic to people who religiously take their medicine that makes them live longer. There was no difference between the drug and placebo in either those who took them regularly or those who took them sporadically, but there was a huge difference in mortality between those who took either drug or placebo on schedule and those who didn’t.
Lest you think this was a bizarre one-of-a-kind study, another study published a few years ago in The Lancet showed a virtually identical outcome. Patients taking a medication for congestive heart failure were compared to those taking placebo. Those taking the drug (Candesartan) showed no difference in mortality compared to those taking placebo. But when compliance was evaluated, those taking either the drug or the placebo as directed had much lower mortality than those taking either one sporadically. In fact, as you can see from the graph below, the mortality curves were almost identical.
So there is something about adherers to a drug regimen that promotes longevity as compared to non-adherers.
Getting back to our statin study, how do we know that the decreased risk of death in those who religiously stuck with their statin prescriptions as compared to those who didn’t came about because they were adherers and not because of the statins? We don’t. In fact, based on the two studies I detailed above, it’s much likelier that the decreased mortality in those who took all their statins came about not because of the statins, but because those who stuck with them are adherers and have what ever quality it is that adherers have that makes them live longer. And, if this is the case in this study as in the others, the statins don’t really do anything at all.
Despite its not really proving that statins confer greater longevity, the study does provide some interesting admissions and entertaining confabulations.
First, the study authors admit that there is no gold standard, randomized controlled study data showing that statins are of benefit in preventing death except for one group of people (and they even get that wrong).
The beneficial effects on cardiovascular mortality of treatment with statins to decrease levels of low-density lipoprotein cholesterol (LDL-C) have been established in several long-term, placebo-controlled trials.
The value of primary prevention with statin therapy in the reduction of overall mortality has recently been questioned.
A pooled analysis of 8 randomized trials in primary prevention populations showed that statins did not reduce overall mortality, indicating that lipid-lowering therapy with statins should not be prescribed for true primary prevention in women of any age or in men older than 69 years.
What they’re saying here is that statins have been shown to reduce mortality from heart disease in those who have elevated LDL, which is true. But this decrease in deaths from heart disease is compensated for by an increase in deaths from cancer and other causes, so there really isn’t a gain. You’re still dead. Just maybe not from heart disease, but what difference does it make. Are you going to spend $200 per month for the rest of your life and stay on medications that may make you feel lousy and lose your memory just so you can die of something other than heart disease?
In the last paragraph in the quote above, the authors confess that the data from actual randomized control trials show that statins confer no all-cause mortality benefits to women of any age and to men over 69. They are playing a little fast and loose with the truth here because as I have posted before, the gold standard trials have shown no benefit for women and no benefit to men over 65 or to men under 65 who have never had heart disease. The only improvement in all-cause mortality has been in men under 65 who have been diagnosed with heart disease, and even that benefit is so small that many people question if the extra cost and side effects of the statins are worth it.
So the authors of this study acknowledge that there has never been a randomized control trial that has shown any benefit to taking statins, but that doesn’t stop them. They forge ahead trying to figure a reason that all these clinical trials haven’t shown an advantage.
Because clinical trials do not usually include individuals with multiple comorbid conditions or those receiving an extensive list of medications, there are considerable concerns regarding the applicability of findings from randomized clinical trials to the general population of patients seen in routine clinical practice.
Aha! They are saying that because the randomized controlled trial didn’t show what they wanted them to show – that statins worked for everyone all the time (thus the “considerable concerns”) – that they need to figure out a better way to study them, one that involves patients with a lot of problems so that they don’t have to randomize them and confront failure yet again.
In light of the controversy surrounding lipid-lowering treatment for reduction of mortality among primary prevention populations, we undertook the present study to evaluate the effect of statin therapy in a large and diverse cohort of patients treated for dyslipidemia in a single health maintenance organization.
Interesting take. There is no controversy. The randomized controlled studies clearly show very little benefit to statin therapy in terms of decreasing all-cause mortality, the one statistic that really counts. The controversy arises because the statinators simply don’t want to believe what these carefully performed trials tell them. They by God want statins to work. And they’re going to keep looking and fiddling with the data until they get a study that tells them what they want to hear whether the data is valid or not.
It’s pitiful that they are so desperate.
Don’t fall for the false promise of this or any other version of an observational study. These kinds of studies do not prove causality. Nor do they prove that a drug regimen works. The patients in this study who religiously took their statins had better all-cause mortality than those who didn’t. But, as we saw above, adherers always have better all-cause mortality than non-adherers. In this case, was it that the adherers lived longer or was it that statins conferred some sort of benefit. We can’t tell. But we do know that in the real studies, the randomized control trials, statins didn’t do squat, so my vote would be that what we’re seeing here is an adherer effect and not a statin effect.
My advice is to continue to regard statins with a jaundiced eye. So far, we haven’t seen any evidence that justifies the expense and the side effects of these drugs.
Monthly Book Reviews
I have been writing a series of book reviews each month that I email to subscribers. If you're interested and want to get on the list, sign up here (or above where it says Get free email alerts in the upper right). I'll send you an email notice of all new blog posts plus all my monthly book reviews. Also, you will get a link to all the previous month's book reviews I've sent. Hope to see you aboard.