> At 05:32 PM 9/23/2001 +0200, Bart Lateur wrote:
> >On Thu, 13 Sep 2001 06:27:27 +0300 [ooh I'm far behind on these lists],
> >Jarkko Hietaniemi wrote:
> >
> > >I always see this claim ("why would you use 64 bits unless you really
> > >need them big, they must be such a waste") being bandied around, without
> > >much hard numbers to support the claims.
> >
> > >Unless you are thinking of huge and/or multidimensional arrays
> > >of tightly packed integers, I don't think you should care.
> >
> >We're talking bytecode. That will indeed be a case of "huge arrays of
> >tightly packed integers".
>
> For bytecode, it's not a big problem, certainly not one I'm worried about.
> Machines that want 64-bit ints have, likely speaking, more than enough
> memory to handle the larger bytecode.
>
That's not the problem.  The problem is portability of the byte-code.
I've heard various discussions about byte-code compiling.  One was
that it is not considered kosher to let a program compile code and
write into a "/usr/local/scripts" directory or any such thing.  Many
companies want to be able to distribute "byte-compiled" code (as with
Python).

Though I don't know what the powers on high want, are we saying here and
now that the byte-code is strictly non-portable?  That Parrot code is only
as good as the machine it was compiled on?  That 64-bit-compiled Parrot
code will not work on 32-bit machines?
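
(For what it's worth, it doesn't have to be all-or-nothing.  Here's a
sketch, and only a sketch, of how byte-code can stay portable; every name
in it is made up, not anything Parrot has decided on.  The idea: the file
declares its own word size and byte order in a header, and the loader
widens and byte-swaps as it reads.)

    #include <stdint.h>

    /* Hypothetical portable byte-code header: the file says how wide
       its integers are and in what byte order, instead of assuming the
       host's layout. */
    typedef struct {
        uint8_t magic[4];   /* e.g. "PBC\0" */
        uint8_t wordsize;   /* bytes per integer in this file: 4 or 8 */
        uint8_t byteorder;  /* 0 = little-endian, 1 = big-endian */
    } bc_header;

    /* Read one integer of the file's word size into the host's widest
       integer type, honoring the file's byte order. */
    uint64_t read_word(const uint8_t *p, const bc_header *h)
    {
        uint64_t w = 0;
        for (int i = 0; i < h->wordsize; i++) {
            int shift = (h->byteorder == 0)
                            ? 8 * i                      /* little-endian */
                            : 8 * (h->wordsize - 1 - i); /* big-endian */
            w |= (uint64_t)p[i] << shift;
        }
        return w;
    }

The price is a fixup pass (or per-read conversion) when the file's layout
doesn't match the host's; files already in native layout could still be
loaded directly.  A 64-bit file on a 32-bit interpreter would still have
to reject, or truncate, values that don't fit the host's integer type.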

If we're making that distinction, then I'm under the impression that a
whole heck of a lot more optimizations can be made to the byte-code than
simple word size, not least of which would be in-line assembly (from
XS code); see the sketch below for what I mean.
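
(Again purely hypothetical, with invented names: once the byte-code is
declared machine-specific anyway, an op could carry a raw pointer to
natively compiled code, e.g. emitted from an XS-style extension at
compile time, and jump straight into it.)

    #include <stdint.h>

    /* Hypothetical machine-specific op.  The operand stream holds a raw
       function pointer to native code, baked in when the byte-code was
       generated.  Only valid on the exact machine and ABI it was built
       on, which is precisely the portability trade-off above. */
    typedef intptr_t opcode_t;
    typedef opcode_t *(*native_fn)(opcode_t *pc, void *interp);

    opcode_t *op_call_native(opcode_t *pc, void *interp)
    {
        native_fn fn = (native_fn)pc[1]; /* operand: native code address */
        return fn(pc + 2, interp);       /* run it, resume after operands */
    }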

-Michael
