If I'm reading the unicode-math code correctly, the way it handles ssty just fails to do what the feature is intended for.
Instead of providing a mechanism that tracks whether a glyph is actually being set in a super/subscript context, unicode-math simply treats ssty as a size feature: it applies +ssty=1 to math smaller than \sf@size at \normalsize, +ssty=0 to math between \sf@size and \tf@size at \normalsize, and no ssty at all to math above \tf@size at \normalsize.

That is what produces the pattern of prime behaviours we observe, and why fonts behave so differently depending on which ssty variants they actually contain. For fonts that have only one ssty variant per prime and no others, one can, with the current unicode-math version, simply issue a SizeFeatures specification that turns ssty on at all sizes; see the sketch below. For fonts with a different set of variants, who knows.
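Something along these lines is what I have in mind (untested sketch; XITS Math is only an example font, the 6/9 break points are placeholders for whatever \sf@size and \tf@size work out to in a given class, and I repeat Script=Math in each range since an explicit SizeFeatures overrides unicode-math's own defaults):

  \usepackage{unicode-math}
  \setmathfont{XITS Math}[
    SizeFeatures={
      % same features in every size range, so ssty stays on at all sizes
      {Size=-6,  Script=Math, RawFeature=+ssty},
      {Size=6-9, Script=Math, RawFeature=+ssty},
      {Size=9-,  Script=Math, RawFeature=+ssty}
    }
  ]

A bare +ssty selects the first ssty alternate, which is all that the single-variant fonts discussed above provide anyway.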
Best,
Jura