http://d.puremagic.com/issues/show_bug.cgi?id=785


--- Comment #26 from Witold Baryluk <bary...@smp.if.uj.edu.pl> 2012-02-03 01:01:05 PST ---
Hi,

I was waiting for support of this mainly to write a generic and more
future-proof library. As stated, I cannot currently even try conditional
compilation using cent/ucent. Also, some platforms allow a 64-bit * 64-bit ->
full 128-bit product, so it could possibly be represented in the future as
cent/ucent, with some basic arithmetic implemented (like bit shifting, or
extracting the upper/lower 64 bits). I'm also starting to do some research on
FPGAs (with D!) and was wondering whether an FPGA could augment the main CPU
with wider arithmetic on demand. It would be good to have this as transparent
as possible, so that at a minimum I can write:


    static if (is(cent)) {
        ...
    }

in my library, or something very similar.
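
For instance, a library could then select the best available representation
like this (Emulated128 is a hypothetical fallback type, purely for
illustration):

    static if (is(cent))
        alias Int128 = cent;          // compiler-supported 128-bit integer
    else
        alias Int128 = Emulated128;   // hypothetical software fallback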

It would also be good to have in Phobos something similar to the existing
BigInt, but behaving more like normal arithmetic in terms of overflows and
integer promotions. For example, WideUInt!(32) would be equivalent to uint;
WideUInt!(128) would be equivalent to ucent, or emulated using two ulongs; and
WideUInt!(256) would probably be something like BigInt, but with truncation of
results (so multiplication, subtraction, addition, and powers, like
WideUInt!(256) * WideUInt!(256), would always return WideUInt!(256), never less
or more). This may sometimes be wasteful (when the value is actually small),
but it is useful especially in cryptography, where one wants to treat these 32
bytes of memory as an unsigned integer and also be sure that the time of each
operation is constant, independent of the range or value of the operands:
hashes, symmetric or asymmetric encryption. It makes it easier to use a general
WideUInt template, which will use hardware support or fall back to software.

Addition and bitwise logic can easily be translated to SIMD, with the
potential loops statically unrolled at compile time. Things like multiplication
emulation can also be done faster in such software, even for big sizes, because
the library can determine at compile time which algorithm (schoolbook,
Karatsuba, FFT) is best for a particular width (this also makes it possible to
base the choice on the sizes of the left and right operands), statically,
without any runtime checks (a big win). Similarly for Karatsuba and FFT, the
optimal structure of the recurrences can be calculated at compile time by the
library, removing many conditionals from the runtime code and allowing
aggressive inlining. I know many of these things can be solved by a good JIT
compiler and VM (like HotSpot, by specializing code and loops based on runtime
constants and then aggressively optimizing them, including unrolling,
vectorizing, and inlining calls), but I prefer to rely on the deterministic
behaviour of the compiler and do as much as possible statically at compile
time.
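
As a rough illustration of what I mean (names and layout are mine, not a
proposed Phobos API), fixed-width wrapping addition might look like:

    // Hypothetical sketch only: bits must be a multiple of 64.
    struct WideUInt(uint bits) if (bits >= 64 && bits % 64 == 0)
    {
        ulong[bits / 64] limbs;   // little-endian limb order

        // Wrapping addition: the result is always WideUInt!(bits);
        // the carry out of the top limb is discarded (mod 2^bits).
        WideUInt opBinary(string op : "+")(WideUInt rhs) const
        {
            WideUInt r;
            ulong carry = 0;
            foreach (i; 0 .. limbs.length)  // trip count known at compile time
            {
                immutable s  = limbs[i] + rhs.limbs[i];
                immutable c1 = s < limbs[i] ? 1UL : 0UL;   // carry from first add
                r.limbs[i] = s + carry;
                immutable c2 = r.limbs[i] < s ? 1UL : 0UL; // carry from carry add
                carry = c1 + c2;
            }
            return r;   // final carry dropped: truncating semantics
        }
    }

Since the limb count is a compile-time constant, the compiler is free to fully
unroll this loop for small widths.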

Is this something Phobos would like to have? Cryptographers will probably want
to implement their own even if Phobos provides such functionality, because I
guess Phobos will aim at maximum performance, while cryptographers will aim at
security and the absence of side-channel attacks, and these may be
contradictory aims (however, a cryptographer could use Phobos' WideUInt!(K) and
implement their own fast and safe powmod(a, b, n), with n = 2^K constant, so
rather powmod!(K)(a, b)). Given the existing presence of BigInt in Phobos, I
think it should be relatively easy to add WideInt/WideUInt.
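
To make the constant-time point concrete, here is a toy sketch with K fixed at
64, so that plain ulong wrap-around plays the role of reduction mod 2^K (the
name powmod64 is mine, just for illustration):

    // Branch-free square-and-multiply, modulo 2^64 via ulong wrap-around.
    // Runs a fixed 64 iterations regardless of the exponent's value
    // (assuming the hardware multiplier itself is constant-time).
    ulong powmod64(ulong a, ulong b)
    {
        ulong r = 1;
        foreach (i; 0 .. 64)
        {
            immutable mask = -(b >> i & 1);  // all ones iff bit i of b is set
            r *= (a & mask) | (1 & ~mask);   // multiply by a or by 1, no branch
            a *= a;
        }
        return r;
    }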

While we are on the subject of wide types, I have another question/problem
about the language.

I was wondering whether, in the case of quad-precision floats, one should use
the 'real' keyword or something else. I think we should add quad-precision
floats (plus complex ones) as a possibility in the language, alongside the
existing float and double, for three reasons: they are commonly used on PowerPC
('long double', probably not in hardware, but implemented by most compilers in
software); there is some speculation about future hardware support for them;
and they can be cheaply emulated in software using the double-double scheme
(which is not exactly equivalent to 128-bit IEEE-754 arithmetic, but has a
guaranteed 107 bits of precision, compared to 113 bits in IEEE-754 quad
precision; the range is also limited to about 10^308 instead of 10^4932, but
this is not a practical limitation in any real application). Some SPARC ISAs
have reserved bits for quad precision in the FPU; I also know that on PowerPC
some C/Fortran compilers interpret 'long double' as quad precision but actually
perform double-double calculations. The Sun Studio compiler on SPARC probably
does the same, and I know the Intel Fortran and C compilers do this on Itanium.

So one can argue that there is almost no hardware support for quad precision,
and that it should therefore be left to a library, using structs and a software
implementation. But as reality also shows, there is demand for quad precision.
On the other hand, somebody could say that it is easily possible to synthesize
such hardware in an FPGA coupled with a CPU (ARM) on demand. I was just going
to ask, theoretically: if some compiler (most probably a modified LDC with its
own LLVM backend) wanted to support quad precision, would it need a new keyword
(sic!), like 'quad' and 'cquad', or could it just use 'real' and 'creal'?
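
For reference, the core of the double-double scheme is an error-free
transformation (Knuth's two-sum); a minimal sketch in D (type and function
names are mine):

    struct DD { double hi, lo; }   // value is the unevaluated sum hi + lo

    // Knuth's two-sum: returns (s, err) such that s + err == a + b exactly.
    DD twoSum(double a, double b)
    {
        immutable s   = a + b;
        immutable bv  = s - a;
        immutable err = (a - (s - bv)) + (b - bv);
        return DD(s, err);
    }

    // Add a plain double to a double-double value (simplified; a complete
    // implementation also needs multiplication via error-free products,
    // renormalization, etc.).
    DD add(DD x, double y)
    {
        immutable t = twoSum(x.hi, y);
        return twoSum(t.hi, t.lo + x.lo);  // fold in low word and renormalize
    }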

Sorry for cluttering the bug report, but I was thinking that this discussion is
essentially equivalent to the 'cent' discussion: future-proofing the language
and the libraries.
