Limitations of Excluding Unvaccinated Participants in Vaccine Research: Analysis of Yale Data (Preprint), Reevaluating Immune Biomarkers // The Case for Disease Tolerance
They would never make the mistake of providing uncompromised data—there’s too much at stake!
But let’s take a step back: how is COVID-19 even identified in these datasets? The selection bias starts right there—with how infections are classified. Once you accept their framework, you’ve already lost.
You can run calculations across the entire dataset, and it will always favour vaccination, because the dataset itself is structured to produce that result. The key? Overrepresentation of favourable data.
---------------------------------
Selection Bias: The Core Issue
- Individuals who had no major adverse events after vaccination were more likely to continue to additional doses.
- Those who experienced adverse reactions often stopped—leaving higher-dose groups artificially "healthier."
- As a result, the healthier higher-dose groups dominate the pooled data, so their lower infection rates are overrepresented in any whole-dataset calculation.
This isn’t just a coincidence; it’s selection bias baked into the study design, and the toy simulation below shows how the dropout pattern alone produces it.
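Here is a minimal sketch of that mechanism, assuming purely hypothetical numbers (a baseline health score of 70, a 15% per-dose adverse-event risk, a 20-point health penalty). In this model no dose has any effect on health whatsoever, yet the highest-dose group still comes out looking healthiest:

```python
import random
from statistics import mean

random.seed(0)

def simulate(n=100_000, max_doses=4, p_adverse=0.15):
    """Toy model of healthy-user dropout. No dose has any true effect
    on health; an adverse event both lowers health and ends dosing."""
    health_by_doses = {d: [] for d in range(1, max_doses + 1)}
    for _ in range(n):
        health = 70 + random.gauss(0, 10)   # baseline self-reported score
        doses = 0
        for _ in range(max_doses):
            doses += 1
            if random.random() < p_adverse:
                health -= 20                # adverse event harms health...
                break                       # ...and the participant stops
        health_by_doses[doses].append(health)
    return {d: mean(v) for d, v in health_by_doses.items()}

for doses, avg in sorted(simulate().items()):
    print(f"{doses} dose(s): mean health {avg:5.1f}")
```

Running this prints roughly 50 for the one-, two- and three-dose groups and roughly 67 for the four-dose group: everyone who reached the final dose got there precisely because nothing went wrong along the way. The comparison groups are selected, not comparable.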
---------------------------------
Effect Sizes and Skewed Associations
- IgG4 levels in the high-dose group were more than double those in the low-dose group (~31 mg/dL difference).
- Global Health (GHVAS scores): Each additional dose was linked to ~10 points higher health score (0-100 scale).
- Regression models show a strong positive link between dose count, self-reported health, and IgG4 levels.
🚨 But causation is a problem:
- PVS participants (post-vaccination syndrome: those with health problems after vaccination) stopped at lower dose counts.
- Their lower dose count reflects their poor health; the causality runs from health to dose count, not the reverse.
- The observational effect is large and statistically significant, but it is heavily shaped by the study design, as the regression sketch below demonstrates.
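To gauge how large a spurious "effect size" this dropout pattern can manufacture, the sketch below (same hypothetical model and parameters as above) regresses health score on dose count by ordinary least squares. The true per-dose effect in the simulation is exactly zero:

```python
import random
from statistics import mean

random.seed(1)

# Same hypothetical dropout model as above, flattened into (dose, health) pairs.
xs, ys = [], []
for _ in range(100_000):
    health = 70 + random.gauss(0, 10)
    doses = 0
    for _ in range(4):
        doses += 1
        if random.random() < 0.15:   # adverse event: -20 health, stop dosing
            health -= 20
            break
    xs.append(doses)
    ys.append(health)

# Ordinary least-squares slope of health on dose count: cov(x, y) / var(x).
mx, my = mean(xs), mean(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
var = sum((x - mx) ** 2 for x in xs) / len(xs)
print(f"OLS slope: {cov / var:+.2f} health points per additional dose")
```

The slope comes out around +6 to +7 health points per additional dose, in the same direction and rough magnitude as the ~10 points reported above, and at a sample size of 100,000 it would be overwhelmingly "statistically significant" despite a true effect of zero.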
---------------------------------
🚨 The Bigger Question: Biomarker Manipulation?
- Consider this possibility: biomarker levels can be triggered and adjusted with each successive dose, independently of disease status.
- First dose, booster, next booster… does the vaccine itself induce biomarker responses artificially?
- If biomarker elevation isn’t directly linked to disease status, but still changes with dose count, that suggests a separate mechanism at play.
- Could it be that these biomarkers don’t reflect natural immunity but rather an engineered response?
More doses → More biomarker activation → Stronger dataset justification.
Once you understand how selection bias is built into the system, you realize that the data will always "prove" what they need it to prove.
🚨 It’s reproducible not because it’s true, but because the system ensures it is!
An R² value of 1 and an extremely low sum of squares might not indicate genuine model robustness but could instead be artifacts of overfitting or inherent limitations in the dataset.
In the Yale study, antibody responses were modelled using asymmetrical sigmoidal five-parameter least-squares fits (reported with an R² of 1) and supplemented by linear models that identified significant predictors. But a five-parameter sigmoid has enough degrees of freedom to pass almost exactly through a small set of calibration points, so an R² of 1 is nearly guaranteed regardless of measurement quality. And if the underlying antibody response data are inherently unreliable, even these impressive statistical indicators lose their meaning. This raises serious doubts about the precision and generalisability of the model validation process.
This underscores a critical point in scientific research: even when advanced statistical tools are applied, if the data acquisition methods are imprecise, the results—and any conclusions drawn—are rendered meaningless!
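A minimal sketch of that point, using six made-up calibration values and SciPy's curve_fit (everything here is synthetic and purely illustrative): an asymmetrical five-parameter logistic has so much flexibility that it passes almost exactly through a handful of points no matter how those points were measured, so the resulting R² near 1 certifies the parameter count, not the data.

```python
import numpy as np
from scipy.optimize import curve_fit

def five_pl(x, a, d, c, b, g):
    """Asymmetric 5PL sigmoid: a/d are the lower and upper asymptotes,
    c the inflection point, b the slope, g the asymmetry."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

# Six made-up "calibration" points: five parameters vs. six observations.
x = np.array([0.1, 0.5, 2.0, 8.0, 32.0, 128.0])
y = np.array([0.05, 0.11, 0.48, 1.32, 1.95, 2.10])

popt, _ = curve_fit(five_pl, x, y, p0=[0.05, 2.1, 5.0, 1.0, 1.0], maxfev=20_000)

residuals = y - five_pl(x, *popt)
ss_res = float(np.sum(residuals ** 2))
ss_tot = float(np.sum((y - y.mean()) ** 2))
print(f"R^2 = {1 - ss_res / ss_tot:.5f}")   # effectively 1: flexibility, not accuracy
```

Five free parameters fitted to six observations leave a single residual degree of freedom; a near-perfect fit is the expected outcome, not evidence of robustness.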
🚨🚨 This is a big issue in science 🚨🚨
My short article highlights serious concerns about relying on antibody response data for regression and model fitting: https://x.com/m_a_n_u______/status/1894725032565473398