Can Bayesian Analysis Save Paxlovid?

Bayes Covid-19 Clinical Trials Stan Causal Inference

Can Bayesian analysis be used to understand the impact of a treatment even though the frequentist results are not significant?

Michael DeWitt https://michaeldewittjr.com
2021-12-15

Paxlovid

Paxlovid is a new antiviral (the protease inhibitor nirmatrelvir paired with ritonavir) that provides excellent protection against hospitalization for patients infected with SARS-CoV-2. Pfizer, the manufacturer, had two endpoints: a primary endpoint in “High Risk” patients and a secondary endpoint in “Standard Risk” patients. I thought it was neat that BioNTech/Pfizer used a Bayesian analysis for their BNT162b2 vaccine, so I was a little surprised when I saw a frequentist p-value for the secondary endpoint:

From their press release.

A follow-on analysis at 80% of enrolled patients was consistent with these findings. In this analysis, 0.7% of those who received PAXLOVID were hospitalized following randomization (3/428 hospitalized with no deaths), compared to 2.4% of patients who received placebo and were hospitalized or died (10/426 hospitalized with no deaths); p=0.051.

Oh no, p > 0.05; let’s throw away the drug for standard-risk patients. This seems like a perfect time to do a Bayesian analysis and see what our uncertainty could be regarding the effect in this seemingly underpowered arm of the trial (e.g., we would have liked to have seen more people in this cohort, because the smallest effect we would reasonably care about might be, say, a 20% reduction in relative risk, or even less as our hospitals fill with patients).
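To get a rough sense of why this arm looks underpowered, here is a quick sketch using base R's power.prop.test with illustrative assumptions (a placebo hospitalization risk of about 2.4% and a 20% relative risk reduction as the smallest effect of interest). The required sample size comes out in the thousands per arm, far beyond the roughly 430 enrolled per arm here.

# Rough power calculation (illustrative assumptions, not from the trial protocol):
# placebo risk of ~2.4% and a 20% relative risk reduction in the treatment arm
power.prop.test(p1 = 0.024, p2 = 0.024 * 0.8, power = 0.80, sig.level = 0.05)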

Reanalysis

First we need to recreate the data:

library(cmdstanr)
library(tidybayes)
library(posterior)
library(ggplot2)
library(dplyr)
library(ggdist)

dat_trials <- tibble::tribble(
  ~"hospitalized", ~"arm_n", ~"arm",
  3, 428, "treatment",
  10, 426, "control"
) %>%
  mutate(prop = hospitalized/arm_n)


dat_trials %>%
  ggplot(aes(arm, prop))+
  geom_point()+
  scale_y_continuous(labels = scales::percent)+
  labs(
    title = "Percent of Arm Hospitalized",
    y = "Percent",
    x = NULL
  )+
  theme_classic()

Seems like there is an effect! We can also reproduce the frequentist analysis:

prop.test(x = c(3,10), n = c(428,426),
          alternative = "less", correct = TRUE)

    2-sample test for equality of proportions with continuity
    correction

data:  c(3, 10) out of c(428, 426)
X-squared = 2.8407, df = 1, p-value = 0.04595
alternative hypothesis: less
95 percent confidence interval:
 -1.000000000 -0.000353954
sample estimates:
     prop 1      prop 2 
0.007009346 0.023474178 

Ok, so getting p < 0.05 here is a bit odd, given that the press release reported p = 0.051. Maybe they used a different test or made some multiple-testing adjustment? Moving on….
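The press release does not say exactly which test was used, so the reported 0.051 may simply reflect a different specification. For example, a two-sided test or an exact test gives a different p-value than the one-sided test above; a quick check (not claiming this reproduces Pfizer's exact method):

# Two-sided version of the same proportion test
prop.test(x = c(3, 10), n = c(428, 426),
          alternative = "two.sided", correct = TRUE)

# Fisher's exact test on the 2x2 table of hospitalized vs. not hospitalized
fisher.test(matrix(c(3, 428 - 3, 10, 426 - 10), nrow = 2, byrow = TRUE))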

Enter Stan

I am going to use the following Stan code, which was posted on Robert Kubinec’s blog and in turn originated from a post by Eric Novik (both sources are linked in the comments at the top of the file).

Nothing too crazy in this code: it just fits the observed data with binomial likelihoods, and then we can look at both the relative effect and the absolute effect.

writeLines(readLines("trial.stan"))
//based on https://www.robertkubinec.com/post/vaccinepval/
// Which is based on https://rpubs.com/ericnovik/692460
data {
  int<lower=1> r_c; // num events, control
  int<lower=1> r_t; // num events, treatment
  int<lower=1> n_c; // num cases, control
  int<lower=1> n_t; // num cases, treatment
  real a[2];   // prior values for treatment effect

}
parameters {
  real<lower=0, upper=1> p_c; // binomial p for control
  real<lower=0, upper=1> p_t; // binomial p for treatment
}
transformed parameters {
  real RR  = 1 - p_t / p_c;  // Relative effectiveness
}
model {
  (RR - 1)/(RR - 2) ~ beta(a[1], a[2]); // prior for treatment effect
  r_c ~ binomial(n_c, p_c); // likelihood for control
  r_t ~ binomial(n_t, p_t); // likelihood for treatment
}
generated quantities {
  real effect   = p_t - p_c;      // treatment effect
  real log_odds = log(p_t / (1 - p_t)) - log(p_c / (1 - p_c));
}
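The prior line in the model block may look odd. It places the beta prior on theta = (RR - 1)/(RR - 2), which is the same as (1 - RR)/(2 - RR), the reparameterization Pfizer/BioNTech used for vaccine efficacy, and this quantity works out to p_t / (p_t + p_c). A quick numerical check of that algebra (illustrative values only):

# Check that (RR - 1)/(RR - 2) equals p_t / (p_t + p_c) when RR = 1 - p_t/p_c
p_c <- 0.024; p_t <- 0.007                  # illustrative values only
RR  <- 1 - p_t / p_c
c((RR - 1) / (RR - 2), p_t / (p_t + p_c))   # the two numbers should match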

Additionally, we need to supply a prior. I am going to use exactly the same prior that Pfizer/BioNTech used for their BNT162b2 vaccine as a starting point:

curve(dbeta(x, shape1 = .700102, shape2 = 1), 
      from = 0, to = 1, 
      ylab = "Density", xlab = "Relative Risk Reduction",
      main = "Prior Distribution")

This distribution places more density near a relative risk reduction of zero (i.e., on the drug not having much of an effect), so we are being a bit pessimistic.
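Strictly speaking, the beta prior in the Stan model sits on theta = (1 - RR)/(2 - RR) rather than on the relative risk reduction itself, so a quick simulation (a sketch, not in the original analysis) is a useful way to see what it implies on the RR scale:

# Simulate from the prior on theta and map to the relative-risk-reduction scale
# (inverting theta = (1 - RR)/(2 - RR) gives RR = (1 - 2*theta)/(1 - theta))
set.seed(2021)
theta    <- rbeta(1e5, shape1 = 0.700102, shape2 = 1)
rr_prior <- (1 - 2 * theta) / (1 - theta)
quantile(rr_prior, c(0.05, 0.5, 0.95))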

Run the Code

mod <- cmdstan_model("trial.stan")

dat <- list(
  n   = sum(dat_trials$arm_n),         # total enrolled (not used by the model)
  r_c = dat_trials[2, ]$hospitalized,  # events, control
  r_t = dat_trials[1, ]$hospitalized,  # events, treatment
  n_c = dat_trials[2, ]$arm_n,         # cases, control
  n_t = dat_trials[1, ]$arm_n,         # cases, treatment
  a   = c(0.700102, 1)                 # beta prior hyperparameters
)
fit <- mod$sample(dat, refresh = 0)
Running MCMC with 4 parallel chains...

Chain 1 finished in 0.1 seconds.
Chain 2 finished in 0.1 seconds.
Chain 3 finished in 0.0 seconds.
Chain 4 finished in 0.0 seconds.

All 4 chains finished successfully.
Mean chain execution time: 0.0 seconds.
Total execution time: 0.4 seconds.

Analyzing the Results

First we can look at our summaries (note that I did check the chains for mixing, and we have adequate samples in the tails; a minimal sketch of that kind of check comes first, then the summary table).
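This assumes the bayesplot package is installed (it is not loaded above); cmdstanr's built-in diagnostics are also handy:

# Quick convergence/mixing checks
fit$cmdstan_diagnose()
bayesplot::mcmc_trace(fit$draws(c("p_c", "p_t", "RR")))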

fit$summary() %>% 
  select(variable, median, mean, q5,q95, rhat) %>% 
  knitr::kable(digits = 2)
variable   median    mean      q5     q95   rhat
lp__       -74.16  -74.47  -76.56  -73.46      1
p_c          0.03    0.03    0.01    0.04      1
p_t          0.01    0.01    0.00    0.02      1
RR           0.69    0.63    0.17    0.90      1
effect      -0.02   -0.02   -0.03    0.00      1
log_odds    -1.17   -1.21   -2.31   -0.19      1

So it seems like there is evidence of an effect (a relative risk reduction of roughly 17-90%, the 5th to 95th percentiles above)!

draws <- fit$draws() %>% as_draws_df()


draws %>%
  # median_qi(RR, .width = c(.5, .89, .995)) %>%
  ggplot(aes(y = "RR", x = RR)) +
  stat_halfeye()+
  scale_color_brewer()+
  theme_classic()+
  coord_cartesian(expand = FALSE)+
  labs(
    title = "Posterior Distribution of Relative Risk Reduction"
  )

Ok, so again it looks like the bulk of the posterior distribution sits on a positive effect. If I said I were only interested in a drug with at least a 30% relative risk reduction, I could compute the posterior probability of that directly:

mean(as.numeric(draws$RR>.3))
[1] 0.91125
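The same one-liner generalizes to other thresholds (a quick sketch, not part of the original analysis):

# Posterior probability that the relative risk reduction exceeds various thresholds
sapply(c(0, 0.2, 0.3, 0.5), function(th) mean(draws$RR > th))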

Ok, so a 91% posterior probability of at least a 30% relative risk reduction isn’t bad! In the middle of a pandemic, with standard-risk patients presenting and an oral treatment option available, I would take that probability. I could also look at what the absolute effect would mean for hospitalizations:

draws %>%
  # effect = p_t - p_c is negative when the drug helps, so flip the sign to
  # plot hospitalizations avoided as a positive number
  ggplot(aes(y = "", x = -effect * 1000)) +
  stat_halfeye()+
  scale_color_brewer()+
  theme_classic()+
  coord_cartesian(expand = FALSE)+
  labs(
    title = "Hospitalizations Avoided per 1,000 Patients",
    y = "Density",
    x = "Hospitalizations Avoided"
  )

If I could prevent something like 0-40 hospitalizations per 1,000 patients treated with a cheap-ish oral therapy, I would take it. A reminder that if I admit 20 patients and they each stay 4-5 days, I have tied up a ton of “bed days.”
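To put that in bed-day terms, here is a rough sketch that propagates the posterior through the 4-5 day length-of-stay figure above (treating length of stay as uniform between 4 and 5 days is my own illustrative assumption):

# Rough bed-days avoided per 1,000 treated patients, assuming each avoided
# admission would have lasted 4-5 days (illustrative assumption)
set.seed(123)
bed_days <- -draws$effect * 1000 * runif(nrow(draws), min = 4, max = 5)
quantile(bed_days, c(0.05, 0.5, 0.95))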

Summary

The important point here is that Bayesian analysis provides a way to do these kinds of analyses and not throw away data when a pre-specified endpoint is narrowly missed. In this case, the “standard risk” patients would likely benefit from the drug.

Corrections

If you see mistakes or want to suggest changes, please create an issue on the source repository.

Reuse

Text and figures are licensed under Creative Commons Attribution CC BY 4.0. Source code is available at https://github.com/medewitt/medewitt.github.io, unless otherwise noted. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

Citation

For attribution, please cite this work as

DeWitt (2021, Dec. 15). Michael DeWitt: Can Bayesian Analysis Save Paxlovid?. Retrieved from https://michaeldewittjr.com/programming/2021-12-15-can-bayesian-analysis-save-paxlovid/

BibTeX citation

@misc{dewitt2021can,
  author = {DeWitt, Michael},
  title = {Michael DeWitt: Can Bayesian Analysis Save Paxlovid?},
  url = {https://michaeldewittjr.com/programming/2021-12-15-can-bayesian-analysis-save-paxlovid/},
  year = {2021}
}