When decision-makers say they are “guided by science,” they really mean that they are guided by scientists – specifically the scientists whose guidance they choose to heed.
That is generally a sensible practice. Whether the issue is COVID-19, climate change or anything else where public policy interacts with nature, the world is a very complicated place. There are an infinite number of ways to measure and describe it.
Science is a process of discovery, measurement and testing. Its goal is to identify the truth, or at least theories most likely to be true based on the current state of knowledge. Most of us rely on scientists to identify and distinguish between that which is true, that which is most likely to be true, and that which we believe or suspect to be true subject to further testing.
Science in the abstract is objective; scientists who perform it are not. They are human. They have biases and agendas and personal ambitions, just like anyone else. They are vulnerable to groupthink and peer pressure. Fear, excitement or anger can overpower their professional judgment. The check on these human frailties is the objectivity of scientific measurement. We judge the soundness of scientific results, in large part, by the ability of other scientists to replicate those results under similar conditions.
Sound science produces results that are consistent both internally – the conclusions follow from data that itself holds up to scrutiny – and externally, in the context of other knowledge or results that can be replicated. Sound science knows what it knows and how it knows it. It also acknowledges what it does not yet know.
The scientific method as we understand it today has been practiced and taught since the dawn of the Age of Enlightenment, more than three centuries ago. Yet it did not prevent last week’s debacle, in which two of the world’s leading medical journals almost simultaneously retracted influential papers on the treatment of COVID-19 because the science underlying their conclusions was not demonstrably sound.
The Lancet, an international medical journal owned by Dutch publisher Elsevier, drew most of the attention. It published a study concluding that the anti-malarial drug hydroxychloroquine, which had been promoted by President Donald Trump (among others) at an earlier stage in the pandemic, was associated with a significantly increased risk of death in COVID-19 patients. The study supposedly drew on a database of more than 96,000 patient records from more than 600 hospitals on every continent except Antarctica. Policymakers responded immediately. The World Health Organization and several countries, including France, Belgium and Italy, discontinued clinical trials of the drug in COVID-19 patients.
But researchers noticed discrepancies in the data summarized in the study. The study found more fatalities associated with COVID-19 in Australia than Australia had reported at the time. The clinical results were suspiciously consistent all over the world, despite sharp disparities in patient demographic profiles. The dosages reported were above the standard dosages used in North America. All this data was provided by Surgisphere, a Chicago company that reportedly has 11 employees and whose founder, Dr. Sapan Desai, was one of the Lancet study’s four authors.
It turned out that none of the study’s other authors had seen the underlying data, which Desai’s company declined to make available either to them or to outside auditors. Critics, inside and outside of the scientific community, also questioned how such a small and little-known company could have acquired such a large global database. One such question that jumped to my mind: How would Surgisphere have gotten data from Iran, which was one of the early centers of the pandemic but which is under strict American sanctions?
After first trying to edit the study to partially address these discrepancies, The Lancet retracted it at the request of the other three authors.
The New England Journal of Medicine published a second study involving Desai and Surgisphere. That study purported to show that the use of certain blood-pressure medications was not associated with an increased risk of death from COVID-19. It was also withdrawn after all the authors – including, bizarrely, Desai himself – said they were unable to access the underlying data that Desai’s company provided.
Science magazine reported on its website that a third study involving Desai, which had been posted online, was removed. That paper reported a clinical benefit from ivermectin, an anti-parasitic drug, in COVID-19 treatment. As a result, some Latin American countries where the pandemic has been gaining speed deployed the drug. The paper was a “preprint,” meaning it had not been subject to the peer review necessary for publication in a medical journal.
Desai is a common link in these three episodes, but it would be a mistake to focus on him or his company alone. The scientific publication process broke down and sent policymakers veering in directions they would not otherwise have gone. (The WHO restarted its clinical trials of hydroxychloroquine after The Lancet withdrew the paper in question.)
Amid a pandemic that has claimed more than 400,000 lives globally and continues to kill more people every day, everyone is eager to distinguish treatments that work from those that do not, and especially from those that do more harm than good. This eagerness may be the major explanation for the rush to publish results built on such an obscure and shaky foundation of data. Cardiac surgeon Dr. Mandeep Mehra of Harvard University, a co-author of all three studies, cited exactly this pressure. In a personal statement following the retractions, Mehra said, “I did not do enough to ensure that the data source was appropriate for this use. For that, and for all the disruptions—both directly and indirectly—I am truly sorry.”
But there is also the reality that publishing, especially in such prestigious outlets as The Lancet and the NEJM, offers tangible and intangible rewards to researchers and academics. Those journals are seldom interested in publishing results that merely confirm earlier conclusions. They see their role as presenting groundbreaking research.
Under editor Richard Horton, The Lancet in particular has something of a history of activist science. The Lancet published the hydroxychloroquine study soon after it had run an unsigned editorial, which presumably reflected its editor’s view. That editorial held that the Trump administration “is obsessed with magic bullets.” It further advised that “Americans must put a president in the White House come January, 2021, who will understand that public health should not be guided by partisan politics.” This digression into partisan politics, in telling Americans whom they “must” put in the White House, did not help the publication’s credibility and further called into question how closely the journal looked at the study it was publishing.
The Lancet’s retraction does not mean hydroxychloroquine is an effective COVID-19 treatment, or an ineffective one. It does not mean it is likely to harm patients, or that it is risk free. All it tells us is that the process to evaluate a study’s soundness broke down, whether due to haste, politics or something else. As for the hypothesis in that paper – or the other two papers based on Surgisphere’s data – we will have to wait for the slower grind of the scientific method to take its course.
We humans owe an incalculable debt to science. We live longer, healthier and more fulfilling lives than our species has ever enjoyed thanks to the advances it has brought us. Scientific knowledge may be the closest we ever come to pure truth, but it reaches us via scientists, who are only human. We do well to recognize and allow for their limitations, especially in times when emotions run high.
Posted by Larry M. Elkin, CPA, CFP®