I've been thinking about the meaning of your proposal.

I think you're saying that

(a) there should be no f8-storage-class in the SRFI because there's no IEEE standard format for 8-bit floating-point numbers;

(b) unless and until such a standard exists, even if an implementation supports an 8-bit format, it shouldn't be called "*the* f8-storage-class" but should be given a different name; and

(c) if the IEEE ever settles on a single standard for an 8-bit floating-point format, then the name "f8-storage-class" should be reserved for a storage-class (if any) supporting that format.

That brings up the larger issue of what f32-storage-class and f64-storage-class mean.

The term "IEEE" appears only once in the document, in the discussion of an example, and not in the context of f{32|64}-storage-class.

I'll ask some questions:

1. Should the document say that if a Scheme supports 32- and 64-bit IEEE floating-point numbers, then the storage classes f{32|64}-storage-class should be reserved for those formats?

2. If a Scheme supports non-IEEE formats natively (an unlikely possibility at this point, I know), should f{32|64}-storage-class refer to the native formats, or should classes for the native formats be required to have another name?

Perhaps the SRFI needs a Post-Finalization Note clarifying the floating-point storage classes.

Brad

On 3/14/23 12:13 PM, John Cowan wrote:


On Mon, Mar 13, 2023 at 4:36 PM Bradley Lucier <[email protected]> wrote:

    https://ieeexplore.ieee.org/abstract/document/9515082


I'm not an IEEE member and this paper isn't in You-Know-Where either, so I have no access to it.

    and another paper, 8-bit Numerical Formats for Deep Neural Networks,
    that investigates one of the issues you mention: various ways of
    interpreting the bit patterns of 8-bit floating point and how useful
    each variation may be:

    https://deepai.org/publication/8-bit-numerical-formats-for-deep-neural-networks


This paper definitely implies that standardizing on an f8 format is not only premature but may be the Wrong Thing.  In addition, squeezing out a little bit more range is more important to them than supporting the full IEEE repertoire of special values, ±inf.0 and +nan.0.

Matters may have shaken out more by ~2030, when the next revision of IEEE 754 can be expected, but I doubt it.  Perhaps you know better than I do, being closer to where the rubber meets the road.

    As long as you didn't specifically call an array's getter or setter
    (explicitly, or implicitly through array-ref, array-set!, etc.), then
    all the Bawden-style transformations of slicing and dicing and
    rearranging arrays would work just fine.


That's quite true.  You would need to make sure that the relevant procedures in the f8-storage-class signaled an error.  By the same token, the SRFI should specify the array procedures that don't work on f8-arrays.  That would be satisfactory.
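
For instance (just a sketch, ignoring the storage class's other fields and whatever constructor the SRFI actually uses), the two accessors could simply refuse to interpret the bytes:

(define (f8-getter body i)
  ;; No standard 8-bit floating-point format exists, so reading an
  ;; element as a number is not meaningful.
  (error "f8-storage-class: getter unsupported (no standard 8-bit float format)"))

(define (f8-setter body i value)
  ;; Likewise for writing.
  (error "f8-storage-class: setter unsupported (no standard 8-bit float format)"))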

But I have a more radical proposal.  Remove the f8-storage-class unless and until there is a corresponding standard, at which time a new SRFI can add it back.  Instead, provide a simple (make-f8-storage-class getter-converter setter-converter) procedure that returns a wrapped version of a u8 storage class.  The idea is that the getter-converter translates a u8 Scheme value into whatever floating-point Scheme value would be the Right Thing, and the setter-converter is the inverse transformation.  That allows full use of f8-arrays, given a little bit of specialized code that understands the particular f8 format in use.
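
To make that concrete, here is a rough sketch of one possible getter-converter, assuming a hypothetical 1-4-3 (sign/exponent/mantissa) layout with exponent bias 7 and no inf/nan encodings; the actual candidate formats (E4M3, E5M2, and so on) differ in exactly these details, which is why the converters belong with the user's code rather than in the SRFI:

(define (f8->flonum byte)
  ;; byte is an exact integer in [0, 255]: 1 sign bit, 4 exponent bits
  ;; (bias 7), 3 mantissa bits.
  (let ((sign      (if (< byte 128) 1.0 -1.0))
        (exp-bits  (modulo (quotient byte 8) 16))
        (mant-bits (modulo byte 8)))
    (if (zero? exp-bits)
        (* sign (/ mant-bits 8.0) (expt 2.0 -6))                        ; subnormal
        (* sign (+ 1.0 (/ mant-bits 8.0)) (expt 2.0 (- exp-bits 7)))))) ; normal

(f8->flonum #b00111000)   ; => 1.0

The setter-converter would be the inverse, rounding a Scheme real to the nearest representable bit pattern (and deciding what to do on overflow), and make-f8-storage-class would just wrap the two converters around the u8 storage class's getter and setter.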

What do you think of this?
