The pharmaceutical landscape for COVID-19 treatments is rapidly changing. Since my post last week on hydroxychloroquine, three new reports have come out regarding the drug’s use: two related to cardiac side-effects, and a separate clinical study of the drug’s capacity to clear the virus and improve blood markers of infection and inflammation. The cardiology reports noted a high percentage of heart rhythm disturbances in hydroxychloroquine-treated COVID-19 patients in the form of QT prolongation (which predisposes patients to more malignant heart rhythms like ventricular tachycardia and fibrillation). The first study, from a Boston teaching hospital, noted dangerous QT prolongations in 19% of patients treated with hydroxychloroquine and 21% of those receiving hydroxychloroquine plus azithromycin (both drugs are associated with this effect, so it’s not surprising that the combination was worse than hydroxychloroquine alone), while a second report from Lyon, France, found similar QT prolongation in 18% of hydroxychloroquine-treated ICU patients. More than 90% of treated patients in the French study experienced at least some QT prolongation, meaning that the drug is almost certainly inappropriate for widespread use in the outpatient setting. And it’s not just the heart that is affected. Hydroxychloroquine is associated with a high rate of non-cardiac side-effects, including severe nausea, which resulted in the drug’s discontinuation in 11% of the Boston study participants.
Meanwhile, a retrospective study from the Cleveland Clinic Abu Dhabi (pre-publication) of 34 COVID-19 positive patients (21 treated, 13 controls) found that hydroxychloroquine actually prolonged the time to cure as measured by nasal swab testing. In a group of moderately ill patients (42% with pneumonia), the mean time to viral clearance was a full week longer in hydroxychloroquine-treated patients (17 days) than in those not receiving the drug (10 days). This joins a growing number of studies offering counter-evidence to the original French report that set the President’s Twitter fingers ablaze. The findings of the Abu Dhabi study also contradict those of an earlier Chinese study (pre-publication) that noted improvements in inflammatory blood markers and cell counts in COVID-19 patients treated with the drug. In this study, the drug improved nothing.
In the one randomized, controlled trial that examined important patient outcomes like death and/or ICU admissions, hydroxychloroquine was a bust (pre-publication). Overall, the data strongly suggests no benefit, and likely harm, when hydroxychloroquine is used to treat COVID-19 infections.
So how to decide when to give an experimental treatment for a new, potentially fatal disease? It has to do with the quality and quantity of pre-existing research in cell culture and animal trials, along with human trial data where the drug has been used to treat other conditions. For a novel infectious disease, there will never be pre-existing, randomized controlled trial data—after all, the infection is new. Very often, however, in the rush to do something, the harms of treatments are trivialized or ignored altogether. For a completely safe drug, there needn’t be a lot of high-quality evidence of benefit before a recommendation for widespread use, but this is the only situation where “What do you have to lose?” applies. And this was not the case with hydroxychloroquine.
To understand why both benefits and harms need be considered, let’s examine a theoretical scenario involving a drug capable of saving 10% of severely ill COVID-19 patients, but one that is also associated with a serious side-effect, independent of the disease being treated, that kills 1% of those taking it. At first glance, this might appear a good trade-off, but it’s not. The drug only benefits those with severe disease likely to die, while harming 1% of all patients taking it. When administered indiscriminately to all comers, such a drug would harm far more people than it would help. In the case of COVID-19, where 80% of people have mild disease and less than 2% die from it, if you administered the drug to 1,000 infected patients without pre-selecting for disease severity, two lives would be saved (ten percent of the two percent likely to die, times 1,000 patients treated = 0.1 x 0.02 x 1,000 = 2), but the drug would also kill ten people (1 percent harmed, times 1,000 patients treated = 0.01 x 1,000 = 10). This doesn’t mean that such a drug would be of no value—when used in appropriately selected ICU patients, the potential benefit would justify the risk of harm, but there is no way to know the answers to these sorts of questions ahead of time. It takes research and time.
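The arithmetic in this hypothetical scenario can be laid out explicitly. The rates below are the scenario's made-up numbers, not data from any real trial:

```python
# Hypothetical scenario: a drug saves 10% of the ~2% of patients who
# would otherwise die, but its own toxicity kills 1% of everyone taking it.
treated = 1_000
mortality_rate = 0.02   # share of infected patients who would die of the disease
rescue_rate = 0.10      # share of those the drug saves
toxicity_rate = 0.01    # share of all takers killed by the side-effect

# Given indiscriminately, the drug can only save people who were going to
# die, but it can kill anyone who takes it.
lives_saved = treated * mortality_rate * rescue_rate   # 1,000 x 0.02 x 0.10 = 2
lives_lost = treated * toxicity_rate                   # 1,000 x 0.01 = 10
print(f"saved: {lives_saved:.0f}, lost: {lives_lost:.0f}")
```

Restricting the same drug to the sickest patients flips the ledger: among 1,000 ICU patients who would mostly die without treatment, the 10% rescue rate dwarfs the 1% toxicity, which is the point of the paragraph above.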
On May 1st, the FDA issued a second Emergency Use Authorization, this time for the current drug du jour, remdesivir. There were differences in the approval process for remdesivir versus hydroxychloroquine. Although a compound’s potential benefits are often discovered serendipitously (e.g. penicillin), human trials are rarely conducted before first gathering a host of pre-clinical data. In vitro, chloroquine had been shown to increase the pH of intracellular phagolysosomes while also inhibiting viral binding at cell surface ACE2 receptor sites, and this provided theoretical evidence that the drug and its analogues (including hydroxychloroquine) might inhibit viral fusion and replication in vivo (i.e. in humans). Following the SARS (Severe Acute Respiratory Syndrome) coronavirus outbreak in late 2002, chloroquine had been shown to inhibit replication of the virus in kidney cell cultures isolated from African green monkeys.
To justify an Emergency Use Authorization, however, requires more than just theoretical and pre-clinical data. There should also be at least a modicum of case reports and, preferably, actual trial data to suggest safety and effectiveness. In the case of hydroxychloroquine, there was extensive safety data for use in patients with rheumatologic diseases at doses similar to those employed against COVID-19. There was no such safety data, however, when the drug was combined with azithromycin, an antibiotic that also affects cardiac conduction.
As to effectiveness, at the time of Trump’s tweet promoting use, there was just the small French study (see post: “I’m skeptical about … hydroxychloroquine,” 4/30/20), and the completely non-scientific, case-report musings from an upstate New York family doctor. In both instances, pronouncements of benefit came via YouTube, not from peer-reviewed medical journals. When the FDA issued its Emergency Use Authorization for the hydroxychloroquine/azithromycin combination to treat COVID-19 on March 27th, it was issued in the absence of adequate safety and efficacy data.
As to remdesivir, phase 1 trial data collected by the manufacturer (Gilead) in 138 healthy subjects showed no evidence of major side-effects other than a trend toward elevated liver enzymes. The drug was well-tolerated (but ineffective) when administered to 175 Ebola victims between November 2018 and August 2019, as part of a compassionate use study on anti-viral therapies. Finally, there was additional safety data from a compassionate use study wherein remdesivir was given to 61 sick COVID-19 patients across nine countries during the month of March. The most common side-effects were elevated liver enzymes, diarrhea, rash, and worsening kidney function. Side effects were more common in sicker patients, resulting in the drug’s discontinuation in four.
In the compassionate use for COVID-19 study, complete data was available for 53 of 61 treated patients. All patients received remdesivir via IV infusion (there is no pill form). The patients were sick; more than half were on ventilators, while another 8% were on ECMO (extracorporeal membrane oxygenation, a sophisticated blood oxygenation system that requires cardiac bypass capability). Nearly 60% of mechanically ventilated patients were able to be extubated (i.e. taken off the ventilator), and nearly half were able to leave the hospital during 18 days of follow-up. Because the study lacked a control group, we don’t know whether these results are better than in patients treated without remdesivir. While acknowledging that patient populations across various studies are not directly comparable, the authors nonetheless note an 18% mortality in their remdesivir-treated ventilator patients, compared to a 66% mortality reported in ventilator patients in Wuhan, China, not treated with remdesivir. Meanwhile, in a separate report, the mortality rate in more than 1,100 ventilator patients in NYC was 24.5%. Being placed on a ventilator is bad news no matter where it occurs. At best, the data suggests a possible utility for remdesivir in sick ventilator patients, but certainly doesn’t prove it.
There are a lot of problems with this kind of report, not the least of which is that the analysis was sponsored by the drug’s manufacturer (Gilead), which also helped write and edit the manuscript. With potentially billions of dollars at stake, it’s concerning that Gilead was so involved in every aspect of the trial. It’s also concerning that they were only able to find 61 suitable candidates for their drug, despite tens of thousands of sick COVID-19 patients in the nine countries where remdesivir was tried. Although it’s not entirely clear how and why specific patients were selected, it’s very clear that this was not a random cohort. It’s also clear that, in the absence of a control group, it’s impossible to draw any firm conclusions.
[As an aside, it’s interesting to note that when an unproven treatment is administered to a patient who survives, it’s always the treatment that gets the credit, but if the person dies, it’s always the disease that gets the blame (witness the cure claims of people like Michigan state representative Karen Whitsett, who swears it was the hydroxychloroquine that saved her). The truth is that, without a proper control group, it’s impossible to ascribe either credit or blame to a treatment. It’s just anecdote.]
Another problem with the remdesivir paper is that the median time between symptom onset and the initiation of the drug was twelve days! In that amount of time, most patients who are going to die likely have already done so, thereby removing themselves from the pool of possible candidates by having drowned. The drug is purported to work by hindering viral replication, and after twelve days, most patients have already turned the corner on viral loads. Late deaths are more often due to the tsunami of cytokine release, a function of the body’s immune response to the virus, not of the number of viruses within it. There is little reason to think that administering remdesivir late in the course would be helpful. This is where tocilizumab, a drug that markedly inhibits the body’s immune response, has the potential to work.
In fairness, Gilead’s press release acknowledged many of the study’s design flaws. “While the outcomes observed in this compassionate use analysis are encouraging, the data are limited,” said Dr Merdad Parsey, Gilead’s Chief Medical Officer. “Gilead has multiple clinical trials underway for remdesivir with initial data expected in the coming weeks.”
And sure enough, data soon followed, with the first published, randomized, double-blind, placebo-controlled trial of a pharmacologic agent for COVID-19 appearing in the Lancet just 17 days later. The trial, conducted at ten Chinese hospitals, enrolled 158 COVID-19 positive patients to receive remdesivir, while another 79 received a placebo. Patients were moderately sick, with pneumonia and low oxygen levels at the time of enrollment. The primary outcome was the time to clinical improvement as measured on a six-point scale (1 = discharged or meeting discharge criteria; 2 = hospitalized without need for supplemental oxygen; 3 = hospitalized with need for supplemental oxygen; 4 = hospitalized with need for high-flow oxygen; 5 = hospitalized requiring ventilator or ECMO support; and 6 = death). “Improvement” was defined as a decline of two points or more. The results were underwhelming: the median time to clinical improvement in remdesivir patients was 21 days, while in the placebo group it was 23 days. There was also no difference in mortality or in the time to viral clearance based on swab testing. Remdesivir patients required shorter ventilator support than placebo patients (7 days versus 15.5 days), but the reason for this observation is unclear. The total number of ventilator patients was too small to determine whether the observed difference was simply the result of random chance, and it makes little sense that a drug that fails to clear the virus faster or reduce mortality would somehow shorten the need for ventilator support. The authors’ conclusion: “remdesivir was not associated with statistically significant clinical benefits.”
But that’s not how the press reported it. Everything I saw on the news and in the media touted the drug as a success, latching on to just a few lines of an otherwise lengthy manuscript: “in patients receiving remdesivir or placebo within 10 days of symptom onset … those receiving remdesivir had a numerically faster time to clinical improvement than those receiving placebo (median 18 days vs 23 days).” Suddenly, the drug shortened symptoms by five days! The difference? It has to do with when the drug was started. The average time from symptom onset to initiation of the drug was eleven days. The benefit was only observed in the 71 patients who started the drug early. There was no benefit in the 84 patients who started the drug late, and when the two groups were combined, as per the trial’s intent, there was no difference. Nobody lied, but the press wasn’t completely honest either. The result wasn’t statistically significant and the study certainly didn’t prove that the drug shortens symptom duration. Also, it’s not really fair to cherry-pick results, particularly from a trial that enrolled just over half of the intended number of patients.
To detect a statistically significant benefit (i.e. one not likely to have occurred through chance), it’s possible to calculate beforehand the minimum number of patients required to demonstrate that benefit, provided you know a few things about the disease. For example, using COVID-19 mortality data from NYC as a reference, to determine with 95% certainty whether a drug is capable of reducing the mortality rate by 10% would require a sample size of roughly 2,174 patients (math geeks have at it). In the Chinese trial, the authors had intended to enroll 453 patients based on certain a priori assumptions and calculations. They started the trial on February 6th, but due to excellent (albeit draconian) Chinese containment measures, the number of cases petered out by March 12th, causing researchers to shut down the study prematurely. They published the results they had, not the ones they intended to have. This doesn’t diminish their effort, but it does diminish any claim of proof that remdesivir shortens the duration of COVID-19 symptoms by five days—which is why the authors never made such a claim in the first place. This also illustrates the danger of trusting stories with “clickbait” headlines. Nuance, nuance, nuance.
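For the math geeks, the calculation above can be sketched with the standard normal-approximation formula for comparing two proportions. The baseline mortality rate and statistical power below are illustrative assumptions of mine, not the exact figures behind the 2,174 number quoted above, so the output will differ; the point is how the pieces fit together:

```python
from math import ceil, sqrt
from statistics import NormalDist

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Patients needed PER GROUP to detect a difference between two
    proportions, using the classic normal-approximation formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_b = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2                       # pooled proportion
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Illustrative assumption: 25% baseline mortality, and a drug that cuts it
# by 10% in relative terms (to 22.5%).
n = two_proportion_sample_size(0.25, 0.225)
print(n, "patients per group")
```

Note how unforgiving the formula is: the smaller the absolute difference between the two proportions, the larger the required enrollment, which is why a trial truncated at half its planned size, like the Chinese remdesivir study, cannot rule out a modest benefit.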
My takeaway is that IV remdesivir might be useful in a subset of hospitalized COVID-19 patients when administered early in the course of disease. Gilead has announced that they have more supportive data coming in the form of an NIH-sponsored randomized controlled trial from Nebraska that began enrolling patients in mid-February. Based on what we know so far, there is room for optimism.
As opposed to hydroxychloroquine, which was approved on a wing and a prayer and a presidential “hunch,” remdesivir was granted an Emergency Use Authorization only after a review of pre-clinical data, the results of an observational cohort study, and a randomized, controlled trial. This is as it should be.
The question often arises: Why not just let patients decide? The answer is plain: when doctors have inadequate information about the benefits and harms of a treatment, truly informed consent is impossible. Without informed consent, there is no patient autonomy. When evidence is absent, emotion fills the void. Playing a hunch is a bad idea when it comes to horse racing, but far worse when it comes to making life-and-death decisions. Demand evidence. Trust the process. Hope for the best.
(Disclosure: I own shares of Gilead as part of a personal portfolio.)
- Nicholas J. Mercuro et al., “Risk of QT Interval Prolongation Associated with Use of Hydroxychloroquine with or Without Concomitant Azithromycin Among Hospitalized Patients Testing Positive for Coronavirus Disease 2019 (COVID-19),” JAMA Cardiol 2020; doi: 10.1001/jamacardio.2020.1834 – Published online May 1, 2020.
- Francis Bessière et al., “Assessment of QT Intervals in a Case Series of Patients With Coronavirus Disease 2019 (COVID-19) Infection Treated with Hydroxychloroquine Alone or in Combination with Azithromycin in an Intensive Care Unit,” JAMA Cardiol 2020; Research Letter – Published online May 1, 2020.
- *Jihad Mallat et al., “Hydroxychloroquine is Associated with Slower Viral Clearance in Clinical COVID-19 Patients with Mild to Moderate Disease: A Retrospective Study,” medRxiv 2020; https://doi.org/10.1101/2020.04.27.20082180.
- Fabio Taccone et al., “Hydroxychloroquine in the Management of Critically Ill Patients with COVID-19: The Need for an Evidence Base,” Lancet Respir Med 2020 – Published online April 15, 2020 – https://doi.org/10.1016/S2213-2600(20)30172-7.
- Manli Wang et al., “Remdesivir and Chloroquine Effectively Inhibit the Recently Emerged Novel Coronavirus (2019-nCoV) In Vitro,” Cell Res 2020; 30: 269–271; https://doi.org/10.1038/s41422-020-0282-0.
- Martin Vincent et al., “Chloroquine is a Potent Inhibitor of SARS Coronavirus Infection and Spread,” Virology J 2005; 2: 69; doi:10.1186/1743-422X-2-69.
- *Brandi Williamson et al., “Clinical Benefit of Remdesivir in Rhesus Macaques Infected with SARS-CoV-2,” bioRxiv 2020; doi: 10.1101/2020.04.15.043166.
- *Philippe Gautret et al., “Hydroxychloroquine and Azithromycin as a Treatment of COVID-19: Results of an Open-Label Non-Randomized Clinical Trial,” Int J Antimicrob Agents – In Press, 17 March 2020 – doi: 10.1016/j.ijantimicag.2020.105949.
- *Matthieu Mahévas et al., “No Evidence of Clinical Efficacy of Hydroxychloroquine in Patients Hospitalised for COVID-19 Infection and Requiring Oxygen: Results of a Study using Routinely Collected Data to Emulate a Target Trial,” medRxiv 2020; doi: 10.1101/2020.04.10.20060699.
- European Medicines Agency, “Summary on Compassionate Use: Remdesivir, Gilead,” April 3, 2020 (https://www.ema.europa.eu/en/documents/other/summary-compassionate-use-remdesivir-gilead_en.pdf).
- Sabue Mulenga et al., “A Randomized, Controlled Trial of Ebola Virus Disease Therapeutics,” NEJM 2019; 381 (24): 2293-303.
- Jonathan Grein et al., “Compassionate Use of Remdesivir for Patients with Severe Covid-19,” NEJM 2020; doi: 10.1056/NEJMoa2007016.
- Safiya Richardson et al., “Presenting Characteristics, Comorbidities, and Outcomes Among 5700 Patients Hospitalized With COVID-19 in the New York City Area,” JAMA 2020; doi:10.1001/jama.2020.6775 – Published online April 22, 2020 (corrected April 24, 2020).
- Josh Farkas, “Eleven Reasons the NEJM Paper on Remdesivir Reveals Nothing,” PulmCrit, April 11, 2020.
- “Data on 53 Patients Treated with Investigational Antiviral Remdesivir Through the Compassionate Use Program Published in New England Journal of Medicine,” Gilead Sciences Inc – Press Release, April 10, 2020.
- Yeming Wang et al., “Remdesivir in Adults with Severe COVID-19: A Randomised, Double-Blind, Placebo-Controlled, Multicentre Trial,” Lancet 2020 – Published online April 29, 2020 – https://doi.org/10.1016/S0140-6736(20)31022-9.
- John Norrie, “Remdesivir for COVID-19: Challenges of Underpowered Studies,” Lancet 2020 – Published online April 29, 2020 – https://doi.org/10.1016/S0140-6736(20)31023-0.
- Justin Morgenstern, “Remdesivir: The First Real Trial,” First 10EM, May 1, 2020.
- James Sanders et al., “Pharmacologic Treatments for Coronavirus Disease 2019 (COVID-19): A Review,” JAMA 2020 – Published Online April 13, 2020 – doi:10.1001/jama.2020.6019.
*indicates pre-publication, non-peer-reviewed data