Worcester Polytechnic Institute  ·  BUS596: Master of Science Capstone Project  ·  Spring 2026

The Compliance Trap

Readmission Displacement, Infection Illusions, and Multi-Program Penalty Burden in U.S. Acute Care Hospitals

Team 7  ·  Advisor: Prof. Jim Ryan

2,833
hospitals
analyzed
550
in blind-spot
zone
+17.9
excess EDAC
(heart failure)
47.4%
face 2+ simultaneous
CMS penalties

Research Poster

The Compliance Trap

Full academic poster.

BUS596 Capstone Poster - Team 7 - WPI 2026

Research Findings

Three ways the system misfires

Three programs. Three failure modes. One system.

RQ1

The Infection Illusion

550
hospitals with low HAI scores but above-median mortality - appearing compliant while underperforming on survival
HAI–Mortality correlation: r = 0.015, p = 0.46 (not significant)
  • HAC program detects only 9.8% of blind-spot hospitals
  • HAI composite explains essentially zero variance in mortality (R² = 0.0002)
  • Detection failure is non-random (χ² = 62.75, p < 0.001)
RQ2

Readmission Displacement

+17.9
excess days in acute care (EDAC - heart failure) for HRRP-penalized hospitals
23.5% attenuation M1→M3 - most behaviorally robust finding in the study
  • EDAC gap persists after controlling for hospital quality (***)
  • Non-penalized hospitals route more spending through SNFs (39.0% vs 38.0%)
  • Consistent with displacement: metric compliance achieved through routing, not care
RQ3

Multi-Program Convergence

47.4%
of U.S. acute care hospitals face 2 or more simultaneous CMS penalty programs
Every additional penalty predicts −1.41 pts HCAHPS & −0.48 CMS stars (p < 0.001)
  • 336 hospitals face all three penalties simultaneously
  • Convergent group: mean star rating 2.26 vs. system avg 3.07
  • PSI-90 rises from 0.943 → 1.148 across penalty groups (↑ safety events)

Hospital Explorer

Look up any hospital

2,833 U.S. acute care hospitals. Filter by state, penalty status, or ownership type.


Research Design

Data & Methodology

Cross-sectional OLS analysis of 2,833 U.S. acute care hospitals using 11 merged CMS public-use files. Three research questions, nine regression models, one analytical framework.

2,833
Hospitals
Analyzed
11
CMS Files
Merged
96
Variables
Engineered
9
OLS Regression
Models
3
Research
Questions

Analytical Pipeline

01
Data Collection
11 CMS Hospital Quality Reporting public-use files merged at hospital level via Provider ID
02
Cleaning & Imputation
Excluded hospitals missing core HRRP or HAC variables. Median imputation for CLABSI (35.7%) and CAUTI (29.0%) with missingness flags
03
EDA
Correlation matrices, quadrant heatmaps, distribution plots, t-tests and chi-square group comparisons
04
OLS Regression
HC3 heteroskedasticity-robust SE. Three-model attenuation framework per RQ. Subgroup robustness checks by ownership
05
Interpretation
Attenuation magnitude separates genuine behavioral response from quality-selection artifact
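Step 02 of the pipeline (median imputation with missingness flags) can be sketched in a few lines of Python. This is a minimal illustration under assumed data shapes, not the study's actual code; field names like `clabsi` are placeholders.

```python
from statistics import median

def impute_with_flag(records, field):
    """Median-impute missing values and add a 0/1 missingness flag.

    `records` is a list of dicts; None marks a missing value.
    Field names here are illustrative, not the study's variable names.
    """
    observed = [r[field] for r in records if r[field] is not None]
    med = median(observed)
    for r in records:
        r[f"{field}_missing"] = 1 if r[field] is None else 0
        if r[field] is None:
            r[field] = med
    return records

hospitals = [
    {"provider_id": "010001", "clabsi": 0.8},
    {"provider_id": "010005", "clabsi": None},   # missing -> imputed, flagged
    {"provider_id": "010012", "clabsi": 1.2},
]
impute_with_flag(hospitals, "clabsi")
# hospitals[1]["clabsi"] is now the median of the observed values,
# and hospitals[1]["clabsi_missing"] == 1
```

Keeping the flag lets the regression models test whether missingness itself predicts outcomes, which is what the RQ1 robustness check relies on.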

CMS Source Files

HRRP Penalties & Readmission Rates
HAC Composite & Penalty Status
VBP Total Performance Score
HCAHPS Patient Satisfaction
CMS Star Ratings (Overall)
PSI-90 Patient Safety Composite
EDAC - Heart Failure
EDAC - Pneumonia
SNF & Home Health Spending Share
30-Day Mortality (4 conditions)
HAI Composite (CLABSI, CAUTI, MRSA, C.diff)
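Merging these files at hospital level (step 01 of the pipeline) amounts to an outer join on Provider ID. A minimal stdlib sketch, with illustrative field names rather than the actual CMS column headers:

```python
def merge_files(*files):
    """Outer-join-style merge of hospital-level extracts on Provider ID.

    Each file is a list of dicts carrying a "provider_id" key.
    Later files add columns to existing hospitals or create new rows.
    """
    merged = {}
    for table in files:
        for row in table:
            merged.setdefault(row["provider_id"], {}).update(row)
    return list(merged.values())

hrrp = [{"provider_id": "010001", "hrrp_penalty": 1}]
hcahps = [{"provider_id": "010001", "hcahps_overall": 88.2},
          {"provider_id": "010005", "hcahps_overall": 91.0}]
rows = merge_files(hrrp, hcahps)
# rows[0] combines both files; rows[1] has HCAHPS data only
```

In practice a pandas outer merge does the same thing; the point is that every hospital keeps one row and missing program data surfaces as absent columns rather than dropped hospitals.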

Three-Model Attenuation Framework

How we distinguish behavioral response from quality-selection artifact
M1 - Unadjusted

Baseline Effect

Raw association. No controls. Captures the total observed gap between penalized and non-penalized groups.

M2 - + Ownership & Region

Hospital-Type Controls

Adds ownership type (nonprofit, for-profit, government) and geographic region. Removes structural between-group differences.

M3 - + CMS Star Rating

Baseline Quality Control

Adds CMS star rating as a pre-existing quality proxy. Large attenuation here = selection bias, not behavioral effect.

Low attenuation (<30%) - Effect survives controls. Genuine behavioral response to CMS incentives.
High attenuation (>40%) - Effect explained by pre-existing quality. Quality-selection artifact.
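The attenuation statistic itself is just the share of the M1 coefficient absorbed by M3's controls. A sketch using the reported RQ2 heart-failure numbers; the implied M3 coefficient is our back-calculation for illustration, not a value reported in the study:

```python
def attenuation(b_m1, b_m3):
    """Percent of the unadjusted (M1) coefficient absorbed
    once the full controls (M3) are added."""
    return 100 * (b_m1 - b_m3) / b_m1

# The reported +17.94-day EDAC gap (M1) with 23.5% attenuation
# implies an M3 coefficient of roughly 17.94 * (1 - 0.235) ≈ 13.72 days.
b_m3 = 17.94 * (1 - 0.235)
round(attenuation(17.94, b_m3), 1)  # 23.5 -> below the 30% behavioral cutoff
```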

Regression Results by Research Question

RQ1

The Infection Illusion

Dependent variable: 30-day mortality (4 outcomes)
Key predictor: HAI composite score
HAI-Mortality correlation: r = 0.015, p = 0.46 (ns)
Blind-spot hospitals: 675 (23.8%) low HAI + high mortality
HAC detection rate: only 12.6% of blind-spot hospitals flagged
Quadrant chi-square: χ²(1) = 3.61, p = 0.057
M1-M3 attenuation: <30% - behavioral signal
Robustness check: missingness-flag specification - no change
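The quadrant chi-square is a standard 2x2 test of independence, which has a closed-form Pearson statistic. The cell counts below are made up for illustration; they are not the study's actual quadrant counts.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    e.g. blind-spot status (rows) vs. HAC penalty flag (columns)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts, purely to show the computation:
round(chi2_2x2(10, 20, 30, 40), 3)
```

With one degree of freedom, a statistic above about 3.84 rejects independence at p < 0.05, which is why the reported χ²(1) = 3.61 lands just short of significance.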
RQ2

Readmission Displacement

Dependent variables: EDAC-HF, EDAC-PN, SNF share
Key predictor: HRRP penalty status (binary)
EDAC gap - heart failure: +17.94 days (p < 0.0001)
EDAC gap - pneumonia: +21.29 days (p < 0.0001)
M1 R-squared (HF / PN): 0.059 / 0.093
M1-M3 attenuation (HF): 23.5% - strongest behavioral signal
SNF routing finding: non-penalized hospitals route MORE to SNF
Robustness: consistent across all 3 ownership types
RQ3

Multi-Program Convergence

Dependent variables: HCAHPS, CMS Stars, PSI-90
Key predictor: penalty count (0, 1, 2, 3)
HCAHPS coefficient (M1): -1.41 pts/penalty (p < 0.0001)
R-squared M1 / M3: 0.119 / 0.321
M1-M3 attenuation: 55% - quality-selection artifact
Hospitals with 2+ penalties: 47.4% of all U.S. acute care
Convergent group (all 3 RQs): 155 hospitals (5.5%)
Geographic concentration: 56.8% in the South
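The M1 penalty-count fit implies a simple linear prediction for the group means. A sketch using the reported slope; taking the zero-penalty group mean (89.78) as the intercept is our approximation, since the fitted intercept is not reported:

```python
def predicted_hcahps(penalty_count, intercept=89.78, slope=-1.41):
    """Linear M1 prediction: each additional concurrent penalty is
    associated with -1.41 HCAHPS points. Slope from the poster;
    intercept approximated by the zero-penalty group mean."""
    return intercept + slope * penalty_count

round(predicted_hcahps(3), 2)  # ~85.55, close to the observed
                               # three-penalty group mean of 85.82
```

The small gap between the linear prediction and the observed three-penalty mean suggests the dose-response relationship is close to, but not exactly, linear.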

Data Visualizations

The numbers behind the findings

FY2026 CMS dataset.

RQ1 Infection Illusion chart

Left: 550 hospitals (19.4%) fall in the blind-spot zone - low HAI composite but above-median mortality. The HAC program detects only 9.8% of them.  |  Right: Scatter of HAI composite vs. 30-day mortality. Near-zero correlation (r = 0.015) means HAI compliance tells us almost nothing about whether patients survive.

RQ2 Readmission Displacement chart

Left: HRRP-penalized hospitals show dramatically higher excess days in acute care (EDAC) post-discharge - +17.9 days for heart failure, +21.3 days for pneumonia. Only 22–24% of this gap is explained by pre-existing quality differences.  |  Right: Non-penalized hospitals route more spending through SNFs (39.0% vs 38.0%) and home health (7.0% vs 6.0%) - consistent with readmission displacement via post-acute routing.

RQ3 Multi-Program Burden chart

Left: HCAHPS overall rating drops from 89.776 (0 penalties) to 85.824 (3 penalties) - a 3.95-point gap (p < 0.001).  |  Center: CMS star rating falls from 3.532★ to 2.256★.  |  Right: PSI-90 composite rises from 0.943 → 1.148 - values above 1.0 indicate worse-than-expected patient safety events. Every additional concurrent penalty is independently associated with worse outcomes across all three measures.

So What Now

From findings to fixes

CMS is measuring hospitals. Hospitals are managing the measurements. Someone needs to close the gap.

Policy Implications
RQ1 - HAC Program

Co-monitor mortality alongside HAI metrics

Hospitals meeting the HAI threshold but with above-median mortality should be flagged regardless of penalty status. A clean infection record combined with high mortality is not quality - it is an illusion.

Low implementation cost No new data required
RQ2 - HRRP Program

Make EDAC a co-primary HRRP metric

Weight EDAC at 50% of the readmission score. This eliminates the financial incentive to discharge patients to SNFs to game the 30-day window - the exact displacement mechanism we document at 22-24% behavioral attenuation.

Structural reform EDAC data exists
RQ3 - Multi-program

Cap penalties & create convergence grants

Hospitals in 2+ concurrent programs should face a penalty cap. Redirect penalty revenue as improvement grants targeting the 155 convergent hospitals - 56.8% of which are in the South, disproportionately serving Medicaid populations.

Equity-focused Budget-neutral design
Future Research Directions
01

Longitudinal panel analysis

Track the same hospitals across 5+ years to establish Granger causality - does penalty exposure precede quality decline, or do low-quality hospitals self-select into penalized status?

02

Patient-level discharge routing

Link hospital penalty status to Medicare claims data. Directly test whether post-acute discharge patterns (SNF vs. home) change in the quarters immediately following HRRP penalty announcements.

03

Geographic disparity mapping

Model whether the South's disproportionate convergent-penalty burden reflects differential CMS scoring design, payer-mix constraints, or structural care infrastructure gaps - each demands a different policy lever.

04

Composite scoring simulation

Simulate alternative composite weighting schemes (e.g., EDAC at 50%, mortality as veto condition) to quantify how many of the 155 convergent hospitals exit the penalty corridor under reformed metrics.
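A reweighting simulation of this kind reduces to recomputing a blended score per hospital and re-checking who crosses the penalty threshold. A minimal sketch; the function, weights, and inputs are hypothetical, not a CMS formula, and both components are assumed pre-standardized to a common scale:

```python
def reformed_readmission_score(readmit_component, edac_component, w_edac=0.5):
    """Hypothetical blended readmission score with EDAC at weight w_edac
    (the 50% weighting proposed above). Higher = worse."""
    return (1 - w_edac) * readmit_component + w_edac * edac_component

# A hospital that games the 30-day window (good readmission component,
# poor EDAC) loses its advantage once EDAC carries half the weight:
round(reformed_readmission_score(0.2, 1.4), 2)  # 0.8
```

Running this over all 2,833 hospitals and re-applying the penalty cutoffs would answer the question posed here: how many of the 155 convergent hospitals exit the penalty corridor.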

Conclusion

CMS is measuring hospitals.
Hospitals are managing the measurements.

Across three research questions, using nine OLS models on 2,833 hospitals, we find consistent evidence that simultaneous CMS penalty programs produce behavioral responses that diverge from their intended outcomes. Hospitals penalized for readmissions show EDAC gaps unexplained by quality. Hospitals penalized for HACs post better HAI numbers but not better mortality. Hospitals in multiple programs score worse on every metric - not because they are worse hospitals, but because they are operating under the weight of compounding incentive structures.

Attenuation analysis distinguishes this from quality-selection: 22-24% in RQ2 signals genuine behavioral response. The mechanism is not hidden. The data is public. The fix is structural.

22-24%
M1-to-M3 attenuation in RQ2 - behavioral signal, not selection artifact
155
Hospitals convergent across all three research questions
56.8%
Of convergent-penalized hospitals located in the South
The Bottom Line
When the score becomes the goal,
the goal stops being care.
CMS built a measurement system. Hospitals built a response system.
Neither system is measuring what patients actually need.
Goodhart's Law, in healthcare  ·  2,833 hospitals  ·  WPI BUS596 Capstone  ·  2026