Mediterranean Diet - Were the Heart Disease Statistics Flawed?

The New England Journal of Medicine is correcting six papers found to contain statistical errors: five with mislabeled statistics, and a sixth on whether the Mediterranean diet prevents heart disease.

Despite the errors going unnoticed until now, the journal system worked as intended: the NEJM launched an inquiry within days of the accusations.


So What Spurred This?

The trigger was a controversial analysis published in June 2017.

John Carlisle, an anesthesiologist at Torbay Hospital in Torquay, U.K., took a deep dive into 5,087 randomized, controlled trials.

Using a computer program, Carlisle looked for a very specific type of anomaly in these trials.

What Was He Looking For?

Signs that volunteers had been assigned to treatments nonrandomly, even though the trials claimed the assignments were random.

Nonrandom assignment can skew a trial's results. For example, if many more elderly people end up in the control group while younger people receive the experimental treatment, the drug may appear to have fewer side effects simply because the people taking it are healthier.

So across eight journals, Carlisle analyzed baseline features of the volunteers:

  • Height
  • Weight
  • Age

When the groups matched too perfectly, or differed too much, he suspected the assignments weren't truly random, whether through misconduct or honest error.
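The idea behind this check can be sketched in code. The snippet below is a hypothetical simplification, not Carlisle's actual program: his published method used t-distributions and Monte Carlo simulation, while this sketch applies a plain normal (z) approximation to the summary statistics a paper would report.

```python
import math

def baseline_p_value(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sided p-value for the difference in a baseline variable
    (e.g. age) between two trial arms, from reported summary stats.
    Normal approximation; a simplified stand-in for Carlisle's method."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)  # SE of the difference in means
    z = (mean1 - mean2) / se
    # two-sided p-value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Illustrative arms: mean age 52.1 vs 52.0 years, SD ~8, 250 people each
p = baseline_p_value(52.1, 8.0, 250, 52.0, 8.1, 250)
print(round(p, 3))
```

Under genuine randomization, baseline p-values like this should be spread uniformly between 0 and 1 across many trials and variables. Values piling up near 1 (groups suspiciously identical) or near 0 (groups far too different) is exactly the anomaly Carlisle's program hunted for.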

About 2% of the papers he ran through the program, roughly 100, fell into the "questionable" category. While these studies throw up questions, they don't necessarily provide answers. Within days of Carlisle's report, the NEJM homed in on the 11 papers whose anomalies were the most glaring.

Out of those 11 papers, six turned out to contain mistakes. In five of the cases, statistical terms had been mixed up, for example writing "standard deviation" in place of "standard error."

Standard Deviation - Standard deviation is a measure used to quantify the amount of variation or dispersion of a set of data values.

Standard Error - The standard error of a statistic is the standard deviation of its sampling distribution, or an estimate of that standard deviation. It describes the uncertainty of the statistic itself and shrinks as the sample grows.

As you can see, swapping the two terms can make results look far more, or far less, precise than they really are.
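The gap between the two is easy to see numerically. A minimal sketch, using made-up heart-rate data rather than numbers from any of the corrected trials:

```python
import math

# Illustrative sample: resting heart rates (bpm) for 25 hypothetical subjects
data = [62, 71, 68, 75, 59, 80, 66, 73, 70, 64,
        77, 69, 61, 72, 67, 74, 63, 78, 65, 70,
        68, 76, 60, 71, 69]

n = len(data)
mean = sum(data) / n

# Standard deviation: spread of the individual values (sample SD, n-1 denominator)
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# Standard error: uncertainty of the *mean* itself
se = sd / math.sqrt(n)

print(f"SD = {sd:.2f}, SE = {se:.2f}")
```

With n = 25 the standard error is exactly five times smaller than the standard deviation (sqrt(25) = 5), so mislabeling one as the other makes a group look dramatically more, or less, homogeneous than it actually was.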

The sixth paper was a large clinical trial in Spain, published in 2013, which reported that a Mediterranean diet could prevent heart disease, especially in people at risk. “It turned out when we contacted the investigators, they had already been working on it, they had seen the same thing we had and been concerned,” says NEJM Editor-in-Chief Jeffrey Drazen in Waltham, Massachusetts.

Almost 7,500 people had enrolled in the trial, which began up to 15 years ago, so tracking down what may have gone wrong was no easy task. A month-long inquiry by the Spanish researchers and NEJM staff uncovered that up to 1,588 people in the trial weren't properly randomized.

Some were assigned the same diet as someone else in the same household. This is a common feature of diet studies, but it wasn't reported in the original paper. “The investigator realized he couldn’t get people to travel as far as they needed so he made his study ‘cluster randomized,’” by clinic rather than by individual, Drazen says.
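The distinction between randomizing individuals and randomizing clusters can be sketched in a few lines. The clinic and diet names below are hypothetical, chosen only to mirror the structure of a diet trial:

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical clinics and trial arms (not the actual study's labels)
clinics = ["clinic_A", "clinic_B", "clinic_C", "clinic_D", "clinic_E", "clinic_F"]
diets = ["mediterranean + olive oil", "mediterranean + nuts", "control (low fat)"]

# Individual randomization: each person is assigned independently
individual = {f"person_{i}": random.choice(diets) for i in range(12)}

# Cluster randomization: a whole clinic gets one diet, so everyone
# attending it -- including members of the same household -- shares an arm
cluster = {clinic: random.choice(diets) for clinic in clinics}
```

The catch is that people within one clinic (or household) resemble each other, so their outcomes are correlated. A cluster-randomized trial must account for that correlation in its analysis, which is why leaving the design change out of the original paper mattered.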

The authors reanalyzed the data without those 1,588 people and found that, despite the missteps, the conclusion held: nuts, olive oil, and fatty fish remained a net positive for heart health. The result lost some of the original paper's statistical "oomph," but the finding stood.

The editor-in-chief of the Canadian Journal of Anesthesia said an inquiry into its flagged papers is in progress, though it is running slowly due to limited resources. At Anesthesiology, editors looked over the papers and found no obvious grounds to retract them. The other targeted journals either did not respond or do not wish to investigate.

Carlisle's paper drew heavy criticism after it was published because its method assumed that variables such as height and weight were unrelated, when in reality they are correlated. Carlisle concedes this limitation and agrees that not every paper he flagged necessarily contains errors.

Eight papers in Carlisle's own journal, Anaesthesia, were pinpointed as worth probing. “We wrote to the authors and got two responses … The others have not responded,” he says.

Of the two responses, one led to a published correction, though it did not affect the paper's conclusions. In the other case, the authors no longer had the patient data the journal requested.

How Far Should He Push?

Carlisle isn't sure how far to push and drill into possible errors in these reports, especially given the time and money a deep dive requires.

While the errors found so far are minor, Carlisle wonders whether they point to a broader statistical problem lurking in the dozens of papers he didn't examine.

NEJM Editor-in-Chief Jeffrey Drazen was unsettled enough by the findings at his own journal that he put his manuscript editors through a statistics course and added extra scrutiny of the statistics in accepted papers.
