On 6/2/21 1:36 PM, Alex Bennée wrote:

Richard Henderson <richard.hender...@linaro.org> writes:

Rename to parts$N_minmax.  Combine 3 bool arguments to a bitmask,
return a tri-state value to indicate nan vs unchanged operand.
Introduce ftype_minmax functions as a common optimization point.
Fold bfloat16 expansions into the same macro as the other types.

Signed-off-by: Richard Henderson <richard.hender...@linaro.org>
---
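(A loose illustration of the "bitmask of flags" shape the commit message describes; the flag names and toy function below are invented for this sketch and are not the identifiers the patch introduces.)

#include <math.h>
#include <stdio.h>

/* Invented flag bits standing in for the three former bool arguments. */
enum {
    minmax_is_min = 1 << 0,   /* pick the smaller rather than the larger    */
    minmax_is_num = 1 << 1,   /* minNum/maxNum: a lone quiet NaN is ignored */
    minmax_is_mag = 1 << 2,   /* compare magnitudes, not signed values      */
};

/* Toy double version, just to show the single-flags calling convention. */
static double toy_minmax(double a, double b, int flags)
{
    double xa = (flags & minmax_is_mag) ? fabs(a) : a;
    double xb = (flags & minmax_is_mag) ? fabs(b) : b;

    if (isnan(a) || isnan(b)) {
        if (flags & minmax_is_num) {
            return isnan(a) ? b : a;   /* prefer the non-NaN operand */
        }
        return a + b;                  /* propagate a NaN */
    }
    if (flags & minmax_is_min) {
        return xa < xb ? a : b;
    }
    return xa < xb ? b : a;
}

int main(void)
{
    printf("%g\n", toy_minmax(-3.0, 2.0, minmax_is_min));                 /* -3 */
    printf("%g\n", toy_minmax(-3.0, 2.0, minmax_is_min | minmax_is_mag)); /* 2 */
    return 0;
}

The real parts$N_minmax of course operates on FloatParts and folds the NaN handling into the tri-state return; the above only shows the shape of the flags argument.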
<snip>
-        switch (a.cls) {
-        case float_class_normal:
-            a_exp = a.exp;
-            break;
-        case float_class_inf:
-            a_exp = INT_MAX;
-            break;
-        case float_class_zero:
-            a_exp = INT_MIN;
-            break;
-        default:
-            g_assert_not_reached();
-            break;
-        }
-        switch (b.cls) {
-        case float_class_normal:
-            b_exp = b.exp;
-            break;
-        case float_class_inf:
-            b_exp = INT_MAX;
-            break;
-        case float_class_zero:
-            b_exp = INT_MIN;
-            break;
-        default:
-            g_assert_not_reached();
-            break;
-        }
-
<snip>
+    if (unlikely(ab_mask != float_cmask_normal)) {
+        switch (a->cls) {
+        case float_class_normal:
+            break;
+        case float_class_inf:
+            a_exp = INT16_MAX;

We've gone from INT_MIN/MAX of our internal representation to INT16_MIN/MAX
- I guess that doesn't break until we get to Float256, but why the change?

Ah, because I've changed from "bool a_less" to "int cmp". I subtract the two exponents and I don't want that subtraction to overflow.
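To spell the overflow concern out with made-up numbers (illustrative only, not code from the patch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /*
     * Old sentinels: comparing inf against zero would want
     * INT_MAX - INT_MIN, which overflows int (undefined behaviour),
     * hence the old "bool a_less" style comparison.
     */

    /*
     * New sentinels: the difference of any two values in
     * [INT16_MIN, INT16_MAX] always fits in int, so
     * "int cmp = a_exp - b_exp" is safe.  Real exponents are far
     * smaller anyway: float128 has a 15-bit exponent field, well
     * under INT16_MAX; a float256 (19-bit exponent) would be the
     * first format to outgrow it, as noted above.
     */
    int a_exp = INT16_MAX;    /* inf  */
    int b_exp = INT16_MIN;    /* zero */
    int cmp = a_exp - b_exp;  /* 65535 */

    printf("cmp = %d\n", cmp);
    return 0;
}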


r~
