Methodology
How we measure college value — and what we don't measure.
1. Why We Don't Rank
College rankings collapse complex, multi-dimensional outcomes into a single number. This obscures more than it reveals. A school that excels at launching first-generation students into the middle class may score poorly on metrics that reward selectivity, endowment size, or peer assessment.
Research supports skepticism of composite rankings. NORC at the University of Chicago (2024) found that students and families prioritize affordability and career outcomes — not prestige. Malcolm Gladwell has argued that US News rankings reward wealth concentration rather than educational quality. Pierre Bourdieu's theory of cultural capital suggests that rankings reflect and reinforce existing social hierarchies rather than measuring institutional effectiveness.
Instead of telling you which school is “best,” we show you the data that matters for your specific situation: What will you earn? What will you owe? Can you pay your bills? What's the realistic probability of finishing?
2. How We Measure Value — The Prosperity Gauge
The Prosperity Gauge answers the question every student actually asks: “If I graduate from this program, can I pay my bills?”
No other college comparison tool shows this. Georgetown shows 40-year net present value. PayScale shows 20-year ROI. US News shows ordinal rankings. We show your estimated monthly budget after graduation.
The calculation chain:
- Start with median annual earnings 1 year after graduation (College Scorecard)
- Estimate federal income tax, state income tax, and FICA (Social Security + Medicare)
- Calculate monthly take-home pay
- Subtract monthly student loan payment (10-year amortization at 5%)
- Subtract estimated rent (state-level Census ACS median)
- What remains covers food, transportation, savings, and everything else
Health levels: Comfortable (≥30% of gross income remaining), Tight (15–30%), Stressed (<15%).
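The calculation chain above can be sketched in Python. This is a minimal illustration, not our production code: the flat 22% combined tax rate is a placeholder for the actual federal + state + FICA estimate, which varies by state and income.

```python
def prosperity_gauge(annual_earnings, median_debt, monthly_rent,
                     tax_rate=0.22):
    """Sketch of the Prosperity Gauge chain.

    `tax_rate` is an illustrative combined federal + state + FICA
    rate; the real estimate depends on state and income level.
    """
    monthly_take_home = annual_earnings * (1 - tax_rate) / 12

    # Monthly loan payment: 10-year amortization at 5% annual interest
    r, n = 0.05 / 12, 120
    loan_payment = median_debt * r * (1 + r) ** n / ((1 + r) ** n - 1)

    remaining = monthly_take_home - loan_payment - monthly_rent

    # Health levels are defined as a share of gross monthly income
    share_of_gross = remaining / (annual_earnings / 12)
    if share_of_gross >= 0.30:
        level = "Comfortable"
    elif share_of_gross >= 0.15:
        level = "Tight"
    else:
        level = "Stressed"
    return remaining, level
```

For example, $48,000 in earnings with $25,000 of debt and $1,200 rent leaves roughly $1,655 a month, comfortably above the 30% line.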
3. Completion-Weighted ROI
Standard ROI assumes you graduate. But not everyone does. A program with +200% ROI and a 40% graduation rate has a very different expected outcome than one with +200% ROI and a 90% graduation rate.
We show both numbers side by side:
- If you graduate: standard ROI = (earnings − debt) / debt
- Expected value: grad_rate × ROI_if_graduate + (1 − grad_rate) × (−50%)
The −50% for non-completers is a midpoint estimate: students who don't finish typically borrow roughly half the median debt but receive no earnings premium. This is a simplification — reality varies — but it's a defensible baseline consistent with the FREOPP methodology.
The gap between these two numbers is the “completion risk cost.” A 120-point gap means completion risk is eating most of the program's apparent value.
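The expected-value blend above is a one-liner. The example below reuses the +200% ROI scenario from this section; the −50% non-completer figure is the midpoint estimate described above.

```python
def completion_weighted_roi(roi_if_graduate, grad_rate,
                            non_completer_roi=-0.50):
    """Blend graduate and non-completer outcomes into an expected ROI.

    non_completer_roi defaults to the -50% midpoint estimate.
    """
    return grad_rate * roi_if_graduate + (1 - grad_rate) * non_completer_roi

# Same +200% ROI, very different expected outcomes:
low = completion_weighted_roi(2.00, 0.40)   # 0.40*2.00 + 0.60*(-0.50) = 0.50
high = completion_weighted_roi(2.00, 0.90)  # 0.90*2.00 + 0.10*(-0.50) = 1.75
```

At a 40% graduation rate the completion risk cost is 150 points (200% if-you-graduate versus 50% expected); at 90% it shrinks to 25 points.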
Why this matters for first-generation students: National 4-year graduation rates for first-gen students range from 27% to 51%, compared to 68% for continuing-generation students. Completion risk disproportionately affects the students who can least afford it.
Limitation: We use institution-level graduation rates because program-level completion data is not available in the College Scorecard. Actual completion rates vary by program.
4. Debt-to-Earnings Ratio
The debt-to-earnings (D/E) ratio measures whether graduates can reasonably service their student loan debt from their post-graduation earnings. We use the same methodology as the federal Gainful Employment (GE) rule (effective July 2024).
The calculation:
annual_payment = median_debt × 0.12950
D/E ratio = annual_payment / annual_earnings
Where 0.12950 is the annual payment factor for a 10-year loan at 5% interest:
PMT = P × r(1+r)^n / ((1+r)^n − 1)
    = P × 0.05 × (1.05)^10 / ((1.05)^10 − 1) ≈ P × 0.12950
Federal thresholds:
- Pass (<8%): Debt payments are manageable relative to earnings
- Warning (8–15%): Approaching the federal threshold; debt payments may strain budgets
- Fail (>15%): Would fail the federal Gainful Employment standard. Programs exceeding this threshold in 2 of 3 years face loss of federal financial aid eligibility.
Source: 34 CFR 668.403; Federal Register Vol 88, No 194 (Oct 10, 2023).
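The calculation and banding above can be sketched as follows; the factor is derived rather than hard-coded so the 10-year, 5% assumption is explicit.

```python
def debt_to_earnings(median_debt, annual_earnings, rate=0.05, years=10):
    """Annual D/E ratio using the 10-year, 5% amortization factor."""
    r, n = rate, years
    factor = r * (1 + r) ** n / ((1 + r) ** n - 1)  # ~0.12950 at 5%, 10 yr
    annual_payment = median_debt * factor
    return annual_payment / annual_earnings

def de_band(ratio):
    """Map a D/E ratio to the thresholds defined in this section."""
    if ratio < 0.08:
        return "Pass"
    if ratio <= 0.15:
        return "Warning"
    return "Fail"
```

For example, $30,000 of median debt against $45,000 in earnings gives a ratio of about 8.6%, which lands in the Warning band.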
5. Cost-of-Living Adjustment
A $50,000 salary in Mississippi buys more than $50,000 in California. We adjust for this using BEA Regional Price Parities (RPPs), which measure the relative price level in each state compared to the national average (100).
adjustment = 1 + α × ((100 / cost_index) − 1)
adjusted_earnings = nominal_earnings × adjustment
where α = 0.65 (Moretti partial adjustment factor)
Example: $50,000 in Mississippi (RPP 87.3) = $54,728 purchasing power
Example: $50,000 in California (RPP 112.6) = $46,363 purchasing power
Why the partial adjustment (Moretti, 2013): Full deflation by RPP overcorrects because nominal wages already partially embed local cost information. Employers in expensive cities pay more partly because costs are higher. We apply a partial adjustment factor (α = 0.65, the midpoint of the recommended 0.5–0.7 range) to dampen the correction and avoid double-counting geographic wage premiums.
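The adjustment formula reproduces the two examples above directly:

```python
ALPHA = 0.65  # Moretti partial adjustment factor (midpoint of 0.5-0.7)

def adjust_for_cost_of_living(nominal_earnings, rpp, alpha=ALPHA):
    """Partial purchasing-power adjustment using a state RPP (national = 100)."""
    adjustment = 1 + alpha * ((100 / rpp) - 1)
    return nominal_earnings * adjustment

# The examples from this section:
adjust_for_cost_of_living(50_000, 87.3)   # Mississippi: ~ $54,728
adjust_for_cost_of_living(50_000, 112.6)  # California:  ~ $46,363
```

Setting α = 1 would recover full RPP deflation; α = 0 would leave nominal earnings untouched.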
We also provide an “earnings premium” view (used by FREOPP) that compares program graduate earnings to high school graduate earnings in the same state. This avoids the double-counting problem entirely because both earnings figures are subject to the same geographic effects.
6. Selection Bias
Institutional earnings data reflects both educational quality AND student characteristics. Schools that admit higher-achieving students will report higher post-graduation earnings, but it's unclear how much of that is the school's contribution versus the students' pre-existing advantages.
Dale & Krueger (2002, 2014) found that when you control for the selectivity of schools a student was admitted to (but chose not to attend), the earnings premium of attending a more selective school largely disappears for most students. In other words: the student matters more than the school.
The critical exception: Dale & Krueger also found that this does NOT hold for first-generation and low-income students. For these groups, attending a more selective school does produce significantly higher earnings. The hypothesis: these students benefit more from the networks, resources, and signaling that selective institutions provide, because they lack the family-based social capital that continuing-generation students already have.
Chetty et al. (2017) identified specific institutions that function as genuine “mobility engines” — schools with high bottom-quintile representation AND high rates of movement to the top quintile. Many of these are not traditionally prestigious institutions.
We disclose this because you should know: when you see earnings data on this site, you are seeing the outcomes of the students who went there, not necessarily the causal effect of the institution itself.
7. Data Sources
| Source | What We Use |
|---|---|
| College Scorecard (Dept. of Education) | Earnings (1yr, 4yr), median debt, graduation rates, admission rates, demographics, Pell grant rates, program-level data |
| BEA Regional Price Parities | State-level cost indices for purchasing power adjustment |
| Census ACS | State-level median rent estimates |
| BLS OEWS | Occupational wage data by state (planned) |
All federal data is publicly available. We do not use proprietary surveys, peer assessments, or “reputation scores.”
8. Limitations
- Federal-aid-only sample: College Scorecard earnings data only covers students who received federal financial aid. Students who paid entirely out of pocket are excluded, which may skew earnings figures.
- 1-year earnings snapshot: Our primary earnings metric is median earnings 1 year after graduation. This captures early-career outcomes but misses career trajectory. Some fields (e.g., medicine, law) have low starting earnings but steep growth curves.
- Institution vs. program graduation rates: We use institution-level graduation rates for completion-weighted ROI because the College Scorecard does not provide program-level completion data. Actual completion rates vary significantly by program within the same school.
- State vs. MSA geographic granularity: Cost-of-living data is currently at the state level. State averages can mask significant within-state variation (e.g., New York City vs. upstate New York). MSA-level data is planned for a future update.
- Non-completer ROI is estimated: We use −50% as a midpoint estimate for non-completers. The actual financial impact of not completing varies enormously depending on how long the student attended, how much they borrowed, and their earnings trajectory.
- Tax estimation is approximate: The Prosperity Gauge uses simplified federal + state + FICA tax estimates. Actual tax liability depends on filing status, deductions, and credits (e.g., student loan interest deduction).
- No causal claims: All metrics on this site are descriptive, not causal. We show what graduates earn and owe, not what the school caused them to earn. See Selection Bias above.
Questions about our methodology? Let us know. We believe transparency is a feature, not a vulnerability.