About that Florida study on vaccine harm
The Science™ can be more art than science. Not that there is anything wrong with that …
The Florida Department of Health has posted a study about the hazards posed by the mRNA therapies (the “vaccines”). The Florida study purported to find vivid evidence of an increased prospect of “cardiac-related death” “in the 28 days following vaccination.” These cardiac-related, post-vax fatalities were concentrated among “males aged 18-39.” At the same time, however, the study indicated no statistically discernible increase in all-cause mortality across all people captured in the study. So, there is a discernible increase in cardiac-related fatalities, but no discernible increase in overall mortality? Shouldn’t an increase in “cardiac-related death” also show up as an increase in total mortality? What’s going on?
I did some digging around. But, first, a little more context.
The result on cardiac mortality created a bit of a stir. Tucker Carlson featured it in a segment on October 10. (Go to minute 21:05.) Meanwhile, our friends over at UnHerd performed the great service of getting some (broadly skeptical) commentary about the study.
I say “great service,” because there’s nothing wrong with skepticism. Sorting out what the results really are—and are not—is an important project. That said, my own impression is that the interviewees on UnHerd made little progress in making the results accessible. Hence my own bit of digging around.
What did the Florida Department of Health do?
Ideally, the Department would conduct a quasi-experiment: Do something like get a random sample of, say, 44,000 people, randomly administer the jab to half of those people, and administer a placebo (saline solution) to the other 22,000. Then track the performance of those people for a long time.
That makes for a lot of work, and time is something we would hope to finesse. But, this type of randomized control study is the kind of thing that Pfizer itself did in 2021. It administered the vax to about 22,000 people and administered a placebo to another 22,000 people. Pfizer then checked up on these people after six months.
What happened? Pfizer claimed that some modest number of people in the control group (the placebo group) contracted COVID, and only about 5% as many people in the vaxxed group contracted COVID. That was the basis for the headline result that the vax was “95% effective”.
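As a quick check on the arithmetic, here is how that kind of headline number falls out of case counts alone. The counts below are hypothetical round numbers chosen only to reproduce the “95%” figure; they are not Pfizer’s actual tallies:

```python
# Hypothetical, illustrative case counts (not Pfizer's actual tallies),
# chosen only to show how a "95% effective" headline is computed.
placebo_cases = 160  # COVID cases among ~22,000 placebo recipients
vaxxed_cases = 8     # COVID cases among ~22,000 vaccine recipients

# "Efficacy" is one minus the ratio of attack rates. With equal-sized
# groups, the group sizes cancel and only the case counts matter.
efficacy = 1 - vaxxed_cases / placebo_cases
print(f"headline efficacy: {efficacy:.0%}")  # -> 95%
```

Note what the metric does and does not capture: it is a ratio of infection counts, and it says nothing by itself about overall mortality or harm.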
We get this result from a Pfizer study titled “Six Month Safety and Efficacy of the BNT162b2 mRNA COVID-19 Vaccine.” But, buried on page 23 of a subsequent filing to the FDA, we learn the actual topline result: “From Dose 1 through the March 13, 2021 data cutoff date, there were a total of 38 deaths, 21 in the COMIRNATY [Pfizer vaccine] group and 17 in the placebo group.” In other words, more vaxxed people died than unvaxxed people. The vaccine did not actually make people better off, but “95% effective” is the result we hear about. That gross misrepresentation of results is what we get for fetishizing the wrong metrics (COVID infections and fatalities) in place of the right metrics (overall mortality and harm).
Meanwhile, the Florida Department of Health implemented a statistical technique that’s been around for nearly 30 years: a version of the stuffily titled “self-controlled case series method.”
Here’s the motivation for this method: Conducting randomized control studies takes a lot of time and expense, but we may have data lying around from “surveillance programs” such as VAERS (the “Vaccine Adverse Event Reporting System”). The reports that derive from such surveillance programs are not extracted from a randomly selected set of the population. Thus, we might worry that examining reports from a non-random sample of people will not allow us to extract results about how vaccines would perform if we really were to administer them to the general population. Can we really draw generalizable conclusions about vaccine performance by restricting attention to these exceptional, self-reported cases of vaccine harm?
The easy answer is that we can always make general statements about vaccine performance if we impose a lot of (possibly implausible) assumptions about how self-reports of vaccine harm become manifest. In a more orthodox study, we might remain agnostic about whether, and for how long, “adverse effects” obtain after vaccination. Such effects may depend on a whole host of individual attributes (like age and indicators of an individual’s general health). But, in these “self-controlled” studies, we impose a big assumption: there is a specific window of time during which a vaxxed individual may be susceptible to vaccine-induced harm. That’s where this business of “28 days following mRNA vaccination” comes from in the Florida study.
That is trick #1: Impose a heavy assumption that the prospect of vaccine harm is binary. It’s “on” during a “risk period”—a risk period that we, the researchers, just assume. It’s “off” outside the “risk period.” The Florida researchers assume that the “risk period” amounts to a 28-day window.
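In code, trick #1 amounts to nothing more than an indicator function. Here is a minimal sketch in Python (my own illustration; the names and the function are mine, not the Department’s):

```python
RISK_WINDOW_DAYS = 28  # assumed by the researchers, not estimated from data

def in_risk_period(day: int, vaccination_day: int) -> bool:
    """True only during the assumed post-vaccination risk window.

    The prospect of vaccine harm is modeled as strictly "on" inside
    this window and strictly "off" everywhere else.
    """
    return 0 <= day - vaccination_day < RISK_WINDOW_DAYS
```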
There are three other tricks to the “self-controlled case series method.”
“Proportional Hazards” assumption: We assume that there is some baseline likelihood that an individual will experience an event that looks a lot like a vaccine-induced “adverse effect,” whether during the specified risk period, beyond the risk period, or even before vaccination. But, if a vaccine really does increase the likelihood of an adverse effect during the risk period, then we assume that the effect amounts to shifting the baseline hazard up by some fixed proportion of that baseline. Having imposed such an assumption, we may then be able to conclude something in the spirit of “the hazard increased by 84% relative to the baseline.”
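Written out in symbols (my notation, not the Department’s), the assumption looks like this, with the on/off indicator from trick #1 doing all the work:

```latex
% Proportional-hazards assumption, sketched in my own notation.
% h_{0i}(t) is person i's baseline hazard; x_i(t) is the on/off
% risk-window indicator from trick #1.
\[
  h_i(t) = h_{0i}(t)\, e^{\beta x_i(t)},
  \qquad
  x_i(t) =
  \begin{cases}
    1 & \text{inside the assumed 28-day risk period,} \\
    0 & \text{otherwise.}
  \end{cases}
\]
```

A reported “84% increase relative to the baseline” is then just the statement that e^β = 1.84.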
“Partial Likelihood” estimation: An individual’s baseline hazard may be a really complex thing, but the “proportional hazards” assumption and a little algebra allow us to ignore it. That algebra amounts to constructing a “partial likelihood” function. Partial likelihood allows us to concentrate on figuring out the proportional change in the underlying likelihood of experiencing an adverse effect during the specified “risk period” without having to figure out what that baseline likelihood is.
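To see how the baseline drops out, take the simplest possible case (my simplification, not the study’s exact likelihood): a constant within-person baseline hazard and exactly one event. Conditional on the event occurring somewhere in the observation period, the probability that it lands inside the risk window is

```latex
\[
  \Pr(\text{event in risk window} \mid \text{exactly one event})
  = \frac{h_{0i}\, e^{\beta}\, \tau_1}{h_{0i}\, e^{\beta}\, \tau_1 + h_{0i}\, \tau_0}
  = \frac{e^{\beta}\, \tau_1}{e^{\beta}\, \tau_1 + \tau_0},
\]
```

where τ₁ and τ₀ are the lengths of the risk and non-risk periods. The baseline hazard cancels from numerator and denominator, and that cancellation is exactly what licenses ignoring it.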
The “Self-control” method: We formulate a partial likelihood separately for each individual and build up the overall estimation by summing the logarithms of those individually specific partial likelihoods. Basically, we assume that each individual may experience an adverse effect at most one time during the pre-specified “risk period,” and at most one time in any other pre-specified period, whether before or after the risk period.
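Under those simplifying assumptions, the estimation can collapse into something almost embarrassingly simple. Here is a toy version in Python; every number is invented for illustration (and deliberately chosen to echo the hypothetical “84%” figure from above). In the simplest case, where everyone shares the same window lengths and within-person baselines are constant, the estimate of the relative incidence reduces to the ratio of event rates inside versus outside the window:

```python
# Toy self-controlled estimate. All counts below are invented for
# illustration; they are chosen to echo the hypothetical "84%" figure
# used earlier.
tau_risk = 28        # days inside the assumed risk window
tau_control = 337    # observed days outside the window (hypothetical)
events_risk = 11     # pooled self-reported events inside the window
events_control = 72  # pooled self-reported events outside the window

# With common windows and constant baselines, maximizing the summed
# (log) partial likelihoods reduces to comparing event *rates* inside
# vs. outside the window. Baselines cancel, and a person with no events
# contributes nothing (a log partial likelihood of zero).
rel_incidence = (events_risk / tau_risk) / (events_control / tau_control)
print(f"estimated relative incidence: {rel_incidence:.2f}")  # -> 1.84
```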
“Proportional hazards” and “partial likelihood estimation” are very standard things in “survival analysis.” Survival analysis is about figuring out how long things (or people) will last. What is the life expectancy of a light bulb or of a rooftop compressor for an air-conditioning system? How well do people fare after certain invasive surgeries? It is, however, the combination of all four tricks—the proportional hazards assumption, partial likelihood estimation, the business of “self-control,” and the crude assumption about the timing of vaccine hazards—that allows the researcher to justify using non-random data (self-reports of vaccine adverse effects) to make generalizable statements about vaccine performance. The algebra says that, if a vaxxed person does not experience anything bad, whether before or after vaccination, then that person’s experience does not inform the inquiry into vaccine hazards. (The logarithm of that person’s partial likelihood is zero.) We can ignore those people and focus on the self-reported harms.
That last bit makes for a remarkable conclusion: We really can impose enough structure on how we look at data so that we can argue that we can ignore most vaccine experiences and just concentrate on the (possibly very small) number of bad, self-reported cases.
Fine. That’s the method that the Florida Department of Health applied. The “adverse effect” of principal interest to the Department was plain old death, meaning that, if someone were to die during the assumed 28-day “risk period,” then that person would not be around to die after the risk period had expired. That fact could potentially bias the results, although the direction of the bias is uncertain. The commentators in the UnHerd piece complained about this potential for bias.
This business of assuming a binary, on/off “risk period” is a little discomforting. Instead of allowing the data to speak for themselves, we impose this structure in order to justify restricting attention to non-random data on self-reports of possible vaccine-induced adverse effects.
Imposing structure is part of the art of data analysis. We have to impose models of how data are generated in order to draw conclusions about things like vaccine performance. And, when we don’t have ideal data, we may have to impose even more assumptions to allow us to draw conclusions from the data we have. It can be like squeezing water from a rock. At some point, however, the assumptions may amount to a mass of band-aid solutions, and we may end up with a mess of a research project.
Is that “scientific”? Does that amount to incontrovertible “Science,” as in “The Science™”?
I submit that The Science™ is more art than science. That is not a problem per se. A problem is that the equations and numbers and concepts—“proportional hazards,” “partial likelihood estimation,” and such—give the art a false precision. That false precision invests the art with the appearance of hard-nosed truth, and the fiction of incontrovertible truth enables the fetishization of (sometimes irresponsible) Science.
So, the Department applied a method that has become pretty standard in research on vaccine effects. That method involves some nontrivial shoehorning of data into poorly fitting glass slippers, and the Department might even have had to indulge in a small extra bit of shoehorning. But, it got some plausible results. At the same time, the method was not able to yield results suggesting that the vax induces an increase in overall mortality. Granted, vaccine effects may be very drawn out and complex. The on/off, 28-day assumption enables a very narrow kind of “self-control” analysis, but it doesn’t preclude studies that remain agnostic about the effects, whether long-term or short-term, that further research may yet reveal—or that research from Pfizer had already revealed. Over the course of six months, the vax had not proven to leave people better off. (I discussed the Pfizer episode and other episodes of bad science more fully here.) With vivid evidence of persistent, elevated excess mortality in the developed countries of the world, we can yet reasonably wonder about the role of the vaccines in inducing some nontrivial measure of that excess mortality.