I'm catching up on this thread belatedly (and am reading pretty quickly
while doing so), but wanted to chime in on a few things (some of which
occurred off-list):
First, going back to Michael's original proposed workaround:
> var m = x * (i:real(32));
I wanted to point out that you could also define your own overload of '*'
for real(32) and int(32) operands as a means of getting the originally desired/expected
behavior. We're also currently working on adding user-defined coercions
to the language in which case you could introduce your own coercion from
int(32) to real(32) (I believe -- the uncertainty coming from the fact
that it isn't done yet, so I can't test it :). And naturally these things
could be put into a 'MixIntReal32' library for frequent re-use. I'm not
going to argue that these are preferable to getting it right in the
language, just pointing out that there are workarounds other than
introducing explicit casts everywhere.
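For instance, here's a minimal sketch of what such overloads might look
like (untested, so treat it as an illustration rather than a guarantee),
wrapped up in the hypothetical 'MixIntReal32' module mentioned above:

module MixIntReal32 {
  // cast the int(32) operand and defer to the built-in real(32)*real(32)
  proc *(lhs: real(32), rhs: int(32)): real(32) {
    return lhs * (rhs: real(32));
  }
  proc *(lhs: int(32), rhs: real(32)): real(32) {
    return (lhs: real(32)) * rhs;
  }
}

With that module 'use'd, 'x * i' should resolve to one of these exact-match
overloads rather than coercing the int(32) up to real(64).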
Later, Damian proposed:
> I would prefer the rigour of saying that
>
> int(width) can coerce to real(width)
>
> and let the program(mer) address overflow issues which should be addressed
> anyway by them. Just my 2c.
I don't feel strongly confident that we're right here, but I also don't
yet feel confident that we're wrong either. Preserving bit widths for
things like, say, int(32)*int(32) or real(32)*real(32) makes sense to me
from a notion of keeping arithmetic closed on a certain type; but once
we're mixing types, it isn't clear to me that preserving bit widths is the
obvious right thing to do.
> Anytime you have a rule and then break it as an "important convenience"
> is bound to buy you grief and make a rod for your own back.
I view the rule/convenience that we're establishing as being that int(64)
really ought to coerce to real(128), but since we don't have a real(128),
it makes sense to cap it at real(64). So to me, following the rule would
suggest introducing a real(128) rather than having int(?w) coerce to
real(w).
But again, I want to emphasize that I'm not the expert here by a long
shot, and would be curious what other users who work with smaller
bitwidths think is right/expected (not to imply I don't trust your opinion,
Damian :), I just don't want to thrash). Why don't I kick off a new
thread about that, in case others haven't kept up with this one, like
me.
I'll note that a quick glance at Java and C#, which we typically used as
references for many of our implicit conversion rules, shows that both
permit coercions from ints to floats/doubles (and ditto for longs, for
that matter), which suggests to me that we may have done the
well-intentioned, but wrong, thing here.
> are the same and make the default width of real a compile time
> option, e.g.
>
> chpl --default-width=32 ....
> or
> chpl --default-width=64 ....
>
> (where the latter is the default case to match what it is now)
I'll mention that I'm personally not crazy about the notion of changing a
language's default width via a compiler flag, having been bitten by Fortran
codes that I compiled "incorrectly" by not overriding the default widths
on the command-line. I also worry about the effect on modules that have
been written assuming a certain precision and later get compiled by a user
who decides to change the width on the command-line.
That said, we've long envisioned mechanisms for changing the default width
like this, but via the source code rather than a flag. Let me enumerate
some of those ideas:
1) One mechanism we've discussed is a module (real or artificial) that
could be 'use'd to change the default width. For example, imagine:
use DefaultWidth32;
as a means of setting the default width for a given scope to be something
different.
We've also talked about various ways in which the source code could
redefine the default widths of 'int' or 'real'. Here are a few that have
been tantalizing, but have never quite stuck:
2) One technique might be via a type alias that overrides their default
definitions, along the lines of:
type int = int(32);
(though this runs into circularity problems given Chapel's current rules
about scoping and order of evaluation, so it doesn't actually work. But one of
the reasons we've avoided making 'int' and 'real' into reserved words is
to leave the door open for redefinitions like this...).
3) We've also kicked around a notion of implementing ints/reals as a
record within the internal modules:
record int {
  param width = 64;
  ...
}
in which case we could write them in a way that supported changing the
record's width using a config param, say:
config param defaultWidth = 64;
record int {
  param width = defaultWidth;
  ...
}
but this gets us back to, essentially, a command-line flag to set that
default value... (or perhaps the default value of defaultWidth could
itself be specified via a module...).
4) Even without any changes, though, I think that our current
parameterization of types saves our bacon here by a long shot compared to
C (say), since, if you wanted to write a multiple precision code, you
could write:
config param dfw = 64;
var x : real(dfw);
and now you've got the command-line flag that you wanted, yet in a way
that doesn't affect others' modules which haven't bought into your scheme
for dialing different widths. This seems flexible and safe to me.
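(As an aside, and hedging a bit since I'm writing this from memory: the
config param can then be set at compile time with the -s flag, e.g.

chpl -sdfw=32 myprog.chpl    # real(dfw) means real(32) in this build
chpl myprog.chpl             # real(dfw) means real(64), the default

where 'myprog.chpl' is just an illustrative file name.)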
5) And of course, you could create your own type aliases to make this
simpler:
type myreal = real(dfw);
type myint = int(dfw);
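Sketching a bit further (and echoing the 'distance' example from your
mail), a routine written against those aliases flips between 32- and
64-bit number crunching with that one flag:

config param dfw = 64;
type myreal = real(dfw);
type myint = int(dfw);

// every declaration below tracks dfw with no further edits
proc distance(x: myreal, y: myreal, z: myreal): myreal {
  return sqrt(x*x + y*y + z*z);
}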
I'd be curious to hear your reactions to any of these concepts (in fact, let
me go back and number them to help with that... OK, done).
> B) Enforce .. int(width) can coerce to real(width)
>
> and with procedure overloading, you pretty much have a language which can
> have a single source that can be used to build both 32-bit versions and
> 64-bit versions of say a finite element code or an optimization code or
> whatever.
>
> And by 32-bit or 64-bit version, I do not mean address space sizes, I mean
> where the number crunching uses either IEEE 754 binary32 or binary64. And
> while binary32 computations run quicker than binary64, we will always need
> such multiple versions.
>
> This transparency of floating point sizes solves the holy grail of many
> programs, an area which is just not really addressed by anything native.
> Some people address the problem with preprocessors but that is grubby and
> makes debugging painful.
>
> Having a language whose default precision depends on how you compile it
> makes Chapel today's solution to an important problem, even outside the
> main "raison d'etre" of Chapel.
I find this enticing.
> Note that real(?w) as well as other things makes argument list longer
> and it would be nice to allow multiple arguments to be specified against
> some particular type like in Pascal (which then separates by a colon),
> i.e.
>
> proc distance(x, y, z : T) : T
You've touched on one of the aspects of the language that makes me wince
every time I think of it. Pulling back the curtain a bit, I like that we
ended up saying that for a declaration like:
var a, b: int,
    c, d = 1.2,
    e, f: string = "hi";
* a and b are integers, default initialized
* c and d are reals, initialized to 1.2
* e and f are strings, initialized to "hi"
I think this is reasonably clear and concise while also being precise.
For formal argument lists, as you're noting, we took a different approach,
with some hesitancy, because it (a) is asymmetric with the variable
declaration case above and (b) doesn't support sharing of complex types,
which is often important. Our thinking was that one might start with a
completely generic function declaration:
proc foo(x, y, z) { ... }
and then at some point want to specify the type of one of the arguments,
say z:
proc foo(x, y, z: int) { ... }
but if we took the variable declaration approach, this would unduly
influence x and y, which didn't seem like what one "wanted" from a
procedure declaration by default. Ditto for optional arguments:
proc foo(x, y, z = 2) { ... }
Here, it didn't seem natural to expect that because z has a default value,
x and y should as well. So these caused us to take the approach of having
each formal argument be considered completely independently.
But, as you point out, the converse case is that if I want to write:
proc foo(x: [1..n] real, y: [1..n] real, z: [1..n] real) ...
it's a lot of repeated typing (and it gets worse if things are
distributed, etc.). So, while Michael points out that there are some
approaches one can use to get around this, it remains not completely
satisfying to me.
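For what it's worth, here's a sketch of the flavor of workaround I have in
mind (a guess on my part, not necessarily what Michael proposed): query the
domain and element type from the first formal and reuse them for the rest:

proc foo(x: [?D] ?t, y: [D] t, z: [D] t) { ... }

This avoids repeating the full array type, though it's arguably not as
direct as a Pascal-style grouping would be.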
I believe we considered having a mixed semicolon/comma approach, but shied
away from it, thinking that it seemed odd to have semicolons in the formal
argument list given that we didn't expect/want them in the actual argument
list.
So, at this point, I'd be happy for someone to propose something that (a)
supported both mental models, (b) seemed intuitive, and (c) was
backwards-compatible with what we've got today (or to invent a time
machine so we can go back and make a different decision without rewriting
all existing module code :).
(Damian, if your mail contained such a proposal, I think I need more
detail as I wasn't seeing it).
Thanks for the healthy debate and opinions on this,
-Brad