Dear Prof. Tierney, thank you very much for answering my question. It is good
to know that the loss of efficiency can be small.
I came to this question after using R to implement a few low-level algorithms:
a KD-tree and a recursive algorithm for the conditional Poisson binomial. R's
speed has been
The 64-bit version of VisualWorks Smalltalk has an immediate ShortDouble,
which sacrifices two bits of exponent for a tag. It thus has the same
precision as an IEEE double, but one fourth as much range. Overflows
automatically get promoted to ordinary Doubles, which are pointers to objects.
Thanks -- that's good to know.
Best,
luke
On Fri, 23 Feb 2007, Jeffrey J. Hallman wrote:
> The 64-bit version of VisualWorks Smalltalk has an immediate ShortDouble,
> which sacrifices two bits of exponent for a tag. It thus has the same
> precision as an IEEE double, but one fourth as much range.
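For anyone curious how an encoding like that ShortDouble works, below is a
rough sketch in C of the general idea. It is not VisualWorks' actual layout:
the tag value, bit positions, and function names are all invented for
illustration. The word keeps the sign bit and all 52 mantissa bits of an IEEE
double, but only 9 of the 11 exponent bits, which frees two low bits for a
tag. Precision is therefore unchanged, while the exponent range shrinks; any
value whose exponent no longer fits is the overflow case that must be boxed.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Word layout (invented for this sketch): [sign:1][exp:9][mant:52][tag:2] */
    #define TAG_SHORT  0x1u   /* hypothetical tag marking an immediate float */
    #define IEEE_BIAS  1023   /* bias of the 11-bit IEEE double exponent     */
    #define SHORT_BIAS 255    /* bias of the reduced 9-bit exponent          */

    /* Try to encode d as a tagged immediate; return 0 if it must be boxed. */
    static int encode_short(double d, uint64_t *out)
    {
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);
        uint64_t sign = bits >> 63;
        uint64_t mant = bits & 0xFFFFFFFFFFFFFULL;
        int64_t  e    = (int64_t)((bits >> 52) & 0x7FF) - IEEE_BIAS + SHORT_BIAS;

        if ((bits << 1) == 0) {               /* +/- 0.0 always fits */
            *out = (sign << 63) | TAG_SHORT;
            return 1;
        }
        if (e <= 0 || e >= 0x1FF)             /* exponent out of the 9-bit   */
            return 0;                         /* range (also boxes subnormals,
                                                 infinities and NaNs)        */
        *out = (sign << 63) | ((uint64_t)e << 54) | (mant << 2) | TAG_SHORT;
        return 1;
    }

    /* Recover the original double from an immediate word. */
    static double decode_short(uint64_t w)
    {
        uint64_t sign = w >> 63;
        uint64_t e    = (w >> 54) & 0x1FF;
        uint64_t mant = (w >> 2) & 0xFFFFFFFFFFFFFULL;
        uint64_t bits = (sign << 63)
                      | (e ? (e - SHORT_BIAS + IEEE_BIAS) << 52 : 0)
                      | mant;
        double d;
        memcpy(&d, &bits, sizeof d);
        return d;
    }

    int main(void)
    {
        double samples[] = { 3.141592653589793, 1e100, 0.0 };
        for (int i = 0; i < 3; i++) {
            uint64_t w;
            if (encode_short(samples[i], &w))
                printf("%g fits: round-trips to %.17g\n",
                       samples[i], decode_short(w));
            else
                printf("%g overflows the short exponent: box it\n",
                       samples[i]);
        }
        return 0;
    }

Keeping every mantissa bit is what preserves precision; only the exponent
range shrinks, which is why promotion to a boxed Double is the escape hatch
rather than a loss of accuracy.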
I think the short answer is not much.
Longer answer: In an interpreted framework with double precision
floating point scalars there is little chance of avoiding fresh
allocations for each scalar; given that, the overhead associated with
length checks can be made negligible. (That isn't to say it
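To make that concrete, here is a minimal sketch of the mechanism being
described, written against R's C API (the function name add_scalars is
invented for the example). Even an operation on two "scalars" receives
length-1 vectors and must allocate a fresh length-1 vector for its result;
the length check it performs is trivially cheap next to that allocation.

    #include <Rinternals.h>

    /* Hypothetical .Call entry point: adds two R "scalars".  Each
       argument arrives as a length-1 double vector, and the result has
       to be a freshly heap-allocated length-1 vector as well. */
    SEXP add_scalars(SEXP a, SEXP b)
    {
        /* The type/length check: a couple of comparisons... */
        if (!isReal(a) || !isReal(b) || LENGTH(a) != 1 || LENGTH(b) != 1)
            error("expected length-1 numeric vectors");

        /* ...versus this allocation, which an interpreter has to repeat
           for every scalar intermediate it produces. */
        SEXP ans = PROTECT(allocVector(REALSXP, 1));
        REAL(ans)[0] = REAL(a)[0] + REAL(b)[0];
        UNPROTECT(1);
        return ans;
    }

Built with R CMD SHLIB and loaded via dyn.load(), this can be invoked as
.Call("add_scalars", 1.5, 2.5); the allocation in the middle is the cost
that dominates, as described above.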
I have been comparing R with other languages and systems. One peculiar feature
of R is that it has no scalar type; a scalar is just a vector of length one. I
wondered how much of a performance penalty this design causes, particularly in
situations with many scalars in a program. Thanks.
Jason Liao,