Yes. But not this week. I am curious what the discussion was in the C/C++ standards committees when they deliberated this topic.

We might just know the right person to help with this.  Let me check.


A related question in my mind is whether there should be imag(*) equivalents of these constants,

Yes is probably the answer. Sorry, lazy me. I have not done anything with complex(*) and imag(*) in any language for nearly 20 years so those types tend to slip my mind.

You're not the only one...


Assuming Chapel just passes these through to the underlying C compilers, and without having checked by running any Chapel, some C compilers would let you define in the Chapel code

        proc fpINFINITY(type t) param real(64) where param t == real(64)
                return 0x1.0p1024;

        proc fpINFINITY(type t) param real(32) where param t == real(32)
                return 0x1.0p128f;

and produce the right results when called from Chapel. And we have had hex floating-point constants in Chapel for a while now. Thanks, Michael.

This is what I was imagining/wondering if we could do as well... Without trying to compile this, I think that the where and param keywords are reversed (the order being "say things about the return value first, then apply any where clause constraints").
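
For reference, my hedged guess at the corrected form -- untested here, and assuming the out-of-range hex literals fold to infinity the way the ASIDE below hopes (Chapel has no 'f' suffix, so the 32-bit value is written with a cast):

        // untested sketch: 'param' return intent before the return type,
        // and a plain 'where' clause constraining the type argument
        proc fpINFINITY(type t) param : real(64) where t == real(64) {
          return 0x1.0p1024;
        }

        proc fpINFINITY(type t) param : real(32) where t == real(32) {
          return 0x1.0p128 : real(32);
        }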


Not sure I have that code correct as far as Chapel goes. I only just got educated on that technique a few hours ago!!! Indeed, defining those constants that way is what GCC used to do until it replaced them with a compiler builtin.

Another question for the historians: Why did gcc change? To aid with optimization by recognizing the built-in rather than the constant value?


I assume that a 'proc param', if that works, means it gets done at compile time. I cannot see how a param definition would let you define a constant with the same name for multiple floating types, real or imaginary.

You're exactly right. 'param' means evaluated at compile-time; the advantage of the 'proc' is that we can parameterize by type. Otherwise, we'd need to invent names like INF64, INF32, etc. if they were plain-old constants. This is part of what I'm hoping you'll take a stand on in your feature request issue -- what do you think would be most appealing as a programmer: constants, procedures accepting types, methods on types, methods on values, etc.? As well as naming. I must say that I'm finding the ALL_CAPS names a little screamy and old-fashioned at this point and wonder if we should go the route of using 'infinity' or simply 'inf' (though I'm still inclined to use something like 'NaN' for that case given that it's so pervasive). I wonder what all the other young languages are doing for their infinity constants...


ASIDE - Note that the above two numbers,

        0x1.0p128f and 0x1.0p1024

are illegal floating point constants. Their evaluation by the underlying C/C++ compiler that Chapel invokes would generate a floating-point exception. But luckily, GCC at least ignores that and just inserts whatever value pops out of the calculation, which should be the correct value for infinity for that word size. So I guess you would say we are relying on current compiler behaviour. I hope no overly zealous compiler writer gets up and tells us we are defining numbers which are out of range and flags them as errors!
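
If someone wants to poke at that, a minimal experiment (untested; hex float literals and plain writeln are the only features assumed) would be:

        // does the compiler fold an out-of-range hex literal to infinity,
        // or reject it?  That is exactly the open question above.
        var big64: real(64) = 0x1.0p1024;            // one past the largest finite real(64)
        var big32: real(32) = 0x1.0p128 : real(32);  // one past the largest finite real(32)
        writeln(big64, " ", big32);                  // hoped-for output: inf inf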

Hmmm... This seems like it could be problematic from a portability standpoint, given that we don't want to be overly tied to gcc and want to maintain our current level of support for the Intel, Cray, PGI, IBM, etc. compilers.


They're definitely standalone constants at present, though we could make them into methods on a real value -- or better, the real type -- if that was considered attractive.

That should be a discussion for everybody: object-oriented or procedure-oriented.

Let's have this discussion on the GitHub issue (we're trying to do more design on GitHub than on mailing lists due to the better support for search, cross-linking, formatting, etc.).


I believe we _ought_ to also be able to create overloads on 32- vs. 64-bit real types like so:

         proc type (real(32)).INFINITY return INFINITY;
         proc type (real(64)).INFINITY return INFINITY;

That would be illegal as they differ, having the bit-wise representations

        0x7f800000 = 0x1.0p128f (if such a number exists)
and
        0x7ff0000000000000 = 0x1.0p1024 (again if such a number exists)

respectively.

Sorry, I meant the above only as a rough (and poor) sketch of the approach that would compile today, without my having to learn what the different floating-point values for the infinities would be. You're of course right that they'd have to return different values.

If (assuming) you have access to the C constants HUGE_VAL and HUGE_VALF defined in

        /usr/include/bits/huge_val.h
and
        /usr/include/bits/huge_valf.h

then you can probably consider trying

        proc type (real(32)).INFINITE return HUGE_VALF;
and
        proc type (real(64)).INFINITE return HUGE_VAL;

We don't have access to those C constants at compile-time today which gets back to the current implementation and why they're not param procs...
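
To spell that out, a rough sketch of the run-time approach (untested; whether 'math.h' needs an explicit 'require' here is a guess on my part):

        require "math.h";                  // may or may not be necessary
        extern const HUGE_VAL:  real(64);  // C macros surfaced as extern consts
        extern const HUGE_VALF: real(32);  // (run-time values, hence not params)

        proc fpINFINITY(type t) where t == real(64) { return HUGE_VAL; }
        proc fpINFINITY(type t) where t == real(32) { return HUGE_VALF; }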


but I'm having trouble getting them to compile today. I suspect this is due to a bug in the compiler related to 'real's being treated as somewhat more of a special type than typical generics...

Again, beyond my knowledge. That I will leave to the compiler gurus.

Yeah, this is definitely in our court.


If the 'type' keyword were dropped on the declarations above, they'd apply to values of those types rather than the types themselves.

I do not fully understand but it's probably irrelevant to this discussion.

Maybe easier to see with an example:

        proc int.foo() { ... }  // define a method on integers

Lets me call:

        42.foo();

but not:

        int.foo();

Whereas:

        proc type int.bar() { ... }  // define a method on the int type

Lets me call:

        int.bar();

but not:

        42.bar();
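
Or, pasted together into one snippet that should compile as-is (the writeln bodies are just placeholders for illustration):

        proc int.foo()      { writeln("value method on ", this); }  // method on int values
        proc type int.bar() { writeln("type method on int"); }      // method on the int type

        42.foo();     // ok: receiver is an int value
        int.bar();    // ok: receiver is the int type
        // int.foo(); // error: 'foo' requires a value receiver
        // 42.bar();  // error: 'bar' requires a type receiver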



I really only mentioned these types because careful consideration of the implications of what to do with these word sizes might resolve any thought-bubble issues that are not so clear if we are just looking at 32/64-bit. There was a lot of momentum on those types in publications a few years ago, but that does not seem to have fully translated into silicon yet. ARM still only handles 16-bit reals for storage, not for data processing. But I would think that the prospect of 32 16-bit reals in one of Intel's 512-bit AVX registers would have people drooling over the performance potential, even if error analyses of the imprecision gave people nightmares. 'NVIDIA's Pascal GPU implements IEEE 16bit arithmetic' but those 7 words are about all I know on the topic, except that I think it is targeted at deep learning. I am struggling with normal learning! Only Sparc supports 128-bit reals at the instruction-set level so far as I know, admittedly implemented in software. But I would bet money on 128-bit reals being in Power11 CPUs sometime in the next decade.

Got it. My guess is that 64-bit reals will continue to be the default in Chapel for quite some time to come (hopefully forever), but that we've established a language in which we could add real(16) and real(128) to the type hierarchy without too much trouble.

Thanks again for your thoughts here, as always,
-Brad

