Status: Assigned
Owner: ----
Labels: Type-Bug Priority-Medium

New issue 4206 by [email protected]: TurboFan's Float(32|64)(Min|Max) operations are not consistent between architectures.
https://code.google.com/p/v8/issues/detail?id=4206

I've attached a small JavaScript file that can demonstrate the problem. Example output:

$ out/x64.optdebug/d8 fmin.js --always-opt
MathMin(NaN, 0) -> NaN
TernaryMin(NaN, 0) -> 0
$ out/arm64.optdebug/d8 fmin.js --always-opt
MathMin(NaN, 0) -> NaN
TernaryMin(NaN, 0) -> NaN
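
For reference, the reproduction looks roughly like this (a sketch only; the attached fmin.js may differ in detail):

  function MathMin(a, b) {
    return Math.min(a, b);
  }

  function TernaryMin(a, b) {
    return (a < b) ? a : b;
  }

  // NaN < 0 is false, so TernaryMin(NaN, 0) should return 0,
  // while Math.min(NaN, 0) should return NaN.
  print("MathMin(NaN, 0) -> " + MathMin(NaN, 0));
  print("TernaryMin(NaN, 0) -> " + TernaryMin(NaN, 0));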

Using Float64Min as an example:

`CommonOperatorReducer::ReducePhi` will turn `(a < b) ? a : b` into Float64Min, if `machine()->HasFloat64Min()`. On x86, this is implemented using `minsd`, but on ARM64 it is implemented using `fmin`, which is _not_ equivalent.

Briefly (the sketch below illustrates the difference):
- ARM64's fmin behaves like `Math.min(a, b)`: a NaN operand propagates, and -0 is treated as smaller than +0.
- x86's minsd behaves like `(a < b) ? a : b`: it returns the second operand when either input is NaN or when the operands compare equal (e.g. -0 and +0).
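
The same difference can be seen in plain JavaScript (a sketch; `ternaryMin` is just an illustrative helper, not anything in the tree):

  function ternaryMin(a, b) { return (a < b) ? a : b; }

  // NaN handling: Math.min propagates NaN, the ternary picks b.
  print(Math.min(NaN, 0));       // NaN (the fmin-style result)
  print(ternaryMin(NaN, 0));     // 0   (the minsd-style result)

  // Signed zeros: print 1/x to tell -0 from +0.
  print(1 / Math.min(-0, 0));    // -Infinity: -0 is treated as smaller
  print(1 / ternaryMin(-0, 0));  // Infinity:  -0 < 0 is false, so b (+0) is chosen

So any lowering of the ternary pattern to Float64Min needs to preserve the minsd-style semantics on every architecture.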

Since the reducer only matches the ternary sequence, ARM64 produces the wrong result for code like this. Matters are further confused by Crankshaft's HMathMinMax, which _does_ behave like Math.min/max.

What's the best way to fix this? Since the semantics of fmin and minsd seem to be architecture-specific, I suggest removing `HasFloat64Min()` and the like, and moving the logic in `CommonOperatorReducer::ReducePhi` to architecture-specific instruction selectors. Is that sensible, or is there a better way to approach this?

Attachments:
        fmin.js  597 bytes
