When each metric is computed in its native form, it produces values on very different scales:
| Metric | Raw output range | Meaning of a “small” value |
| --- | --- | --- |
| SAM | 0 – π/2 radians | Spectra point in nearly the same direction (shape match) |
| SID | 0 – ∞ (bits) | Information content is nearly identical |
| SED | 0 – ∞ (reflectance²) | Little absolute reflectance difference |
| EMD | 0 – ∞ (mass × distance) | Minimal effort to realign bands |
If you fed those raw numbers straight into a classifier or colour ramp, you would need a different threshold (or even a different colour bar) for every metric.
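To see how differently the raw values are scaled, here is a minimal sketch that computes all four metrics for a single pair of spectra. It uses standard textbook formulations (symmetric KL divergence for SID, squared Euclidean distance for SED, cumulative-sum EMD across bands); the pipeline's own formulas may differ in detail, and the example spectra are invented.

```python
import numpy as np

def raw_metrics(x, y, eps=1e-12):
    """Textbook forms of the four metrics for two 1-D reflectance spectra.
    Illustrative only; the pipeline's own formulas may differ in detail."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    # SAM: angle between the spectra, in [0, pi/2] for non-negative reflectance
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + eps)
    sam = np.arccos(np.clip(cos_theta, -1.0, 1.0))

    # SID: symmetric KL divergence between band-probability distributions (bits)
    p = x / (x.sum() + eps)
    q = y / (y.sum() + eps)
    sid = np.sum(p * np.log2((p + eps) / (q + eps))) + \
          np.sum(q * np.log2((q + eps) / (p + eps)))

    # SED: squared Euclidean distance (reflectance^2)
    sed = np.sum((x - y) ** 2)

    # EMD: 1-D earth mover's distance between the normalised spectra
    emd = np.sum(np.abs(np.cumsum(p) - np.cumsum(q)))

    return {"SAM": sam, "SID": sid, "SED": sed, "EMD": emd}

# Two similar spectra: every metric reports "close", but on very different scales
x = np.array([0.10, 0.20, 0.35, 0.30, 0.05])
y = np.array([0.12, 0.18, 0.33, 0.32, 0.05])
print(raw_metrics(x, y))
```

Even for this near-identical pair, the four numbers land on entirely different scales, which is exactly the problem the normalisation below solves.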
What the 0-to-1 normalisation does #
- SAM – The angle is inherently bounded, so we convert it directly to a similarity:
  - 1 = identical spectral shape
  - 0 = orthogonal shape
- SID, SED, EMD – Their raw distances are unbounded, so we:
  - Find a robust upper cap (the 99th percentile of all pixel distances in the scene)
  - Scale each pixel's distance by that cap
  - Flip distance → similarity: 1 − (distance / cap), clipped to [0, 1] (see the sketch below)
Result: every metric now outputs 1 for a perfect match and 0 for the worst practical mismatch within the scene.
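A minimal sketch of that normalisation, assuming the per-pixel metric values sit in NumPy arrays (the function names here are illustrative, not part of the tool's API):

```python
import numpy as np

def sam_to_similarity(angle_rad):
    """SAM is bounded, so rescale directly: 0 rad -> 1.0, pi/2 rad -> 0.0."""
    return 1.0 - np.asarray(angle_rad) / (np.pi / 2)

def distance_to_similarity(dist, cap_percentile=99):
    """Unbounded distances (SID, SED, EMD): cap at a robust scene-wide
    percentile, scale by that cap, then flip so 1 = perfect match."""
    dist = np.asarray(dist, dtype=float)
    cap = np.percentile(dist, cap_percentile)   # robust upper cap
    scaled = np.clip(dist / cap, 0.0, 1.0)      # anything beyond the cap saturates
    return 1.0 - scaled                         # flip distance -> similarity

# Example with synthetic per-pixel SID distances on an arbitrary scale
sid = np.random.gamma(shape=2.0, scale=0.5, size=(512, 512))
sid_sim = distance_to_similarity(sid)
print(sid_sim.min(), sid_sim.max())             # always within [0, 1]
```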
Why this helps you #
- Cross-comparison – You can display several similarity layers side-by-side without tweaking legends; a value of 0.8 always means “high similarity,” whichever metric produced it.
- Unified thresholds – Rules like “keep pixels ≥ 0.7” work across all four metrics; no need to remember that for SAM the cut-off is 0.1 rad, for SED it’s 0.004, etc.
- Consistent analytics – Aggregating or stacking metrics (e.g., averaging SAM and SID similarities) is safe because they live on the same numeric scale, as the snippet below shows.
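For instance, once the layers are stacked on the shared 0–1 scale, one threshold and one aggregation rule cover every metric (the random arrays below stand in for real similarity layers):

```python
import numpy as np

# Placeholders for the four normalised similarity layers of one scene
sam_sim = np.random.rand(512, 512)
sid_sim = np.random.rand(512, 512)
sed_sim = np.random.rand(512, 512)
emd_sim = np.random.rand(512, 512)

stack = np.stack([sam_sim, sid_sim, sed_sim, emd_sim])   # shape (4, H, W)

# One rule for all metrics: keep pixels with similarity >= 0.7
masks = stack >= 0.7

# Safe to aggregate because every layer shares the same scale
mean_similarity = stack.mean(axis=0)           # average similarity per pixel
consensus = np.all(masks, axis=0)              # pixels all four metrics accept
print(mean_similarity.shape, int(consensus.sum()))
```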
Think of it like converting temperatures from °F, K, and °C to a single “percentage of boiling point” scale—once everything is in 0–1 you can compare, combine, and threshold effortlessly.