Seems ok to me as a user. Note also that there is the TF32 format (tf for
tensor-float), which takes up 32 bits but uses only 19 of them (a short
sketch of what that means in practice follows the switch lists below). So,
the switches could be:

-t f2    IEEE half
-t fh    IEEE half
-t fb    brain
-t ft    tensor

or

-t ft4  tensor

(where the trailing 4 indicates the four-byte, i.e. 32-bit, width)
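
As a rough illustration of the "32 bits, 19 used" point: TF32 keeps the
IEEE single-precision sign bit and 8-bit exponent but only the top 10 of
the 23 mantissa bits, so rounding a float toward zero into TF32 amounts to
clearing the low 13 bits of its bit pattern. A minimal C sketch (the
function name is just for illustration, and this truncates rather than
rounds):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Keep the sign bit, the 8 exponent bits, and the top 10 mantissa
   bits; the low 13 bits of the 32-bit container are cleared.  */
static float
float_to_tf32 (float f)
{
  uint32_t bits;
  memcpy (&bits, &f, sizeof bits);   /* type-pun without UB */
  bits &= 0xFFFFE000u;               /* zero the low 13 mantissa bits */
  memcpy (&f, &bits, sizeof f);
  return f;
}

int
main (void)
{
  printf ("%.9g -> %.9g\n", 3.14159265f, float_to_tf32 (3.14159265f));
  return 0;
}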

Eyal

-----Original Message-----
From: Paul Eggert <egg...@cs.ucla.edu> 
Sent: Friday, 2 February 2024 3:47
To: Pádraig Brady <p...@draigbrady.com>; Rozenberg, Eyal (Consultant) 
<eyal.rozenb...@gehealthcare.com>; 68...@debbugs.gnu.org
Subject: Re: bug#68871: Can't use od to print half-precision floats


On 2/1/24 13:59, Pádraig Brady wrote:
>
> bfloat16 looks like a truncated single precision IEEE, so we should be 
> able to just pad the extra 16 bits with zeros when converting to 
> single precision internally for processing.

Sounds good. This would mean od could work even if the platform doesn't support 
bfloat16_t, since od.c could fall back on the above code (though I suppose it 
could be endianness-dependent).
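
Something like this minimal sketch of the zero-padding fallback could serve
(the name bf16_to_float is just illustrative; it ignores the byte order of
the raw input, i.e. the endianness point above):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* A bfloat16 value is the high 16 bits of an IEEE single, so padding
   16 zero bits below it reconstructs the float exactly.  */
static float
bf16_to_float (uint16_t h)
{
  uint32_t bits = (uint32_t) h << 16;  /* pad the low 16 bits with zeros */
  float f;
  memcpy (&f, &bits, sizeof f);        /* host-endian reinterpretation */
  return f;
}

int
main (void)
{
  /* 0x4049 is the bfloat16 pattern for 3.140625 (pi, truncated).  */
  printf ("%g\n", bf16_to_float (0x4049));
  return 0;
}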
