--- Comment #7 from Don <> 2011-03-23 13:35:05 PDT ---
(In reply to comment #6)
> (In reply to comment #5)
> > That's a false consistency. T ^^ int  is the common operation, not T ^^ T.
> > Really. BigInt ^^ BigInt isn't a BigInt. It's too big to be representable.
> Python3 (and Lisp-family languages, and others) use multi-precision integers
> by default, but most people don't store large numbers in them; most of the
> time they store small numbers, and most people don't use them for
> cryptography.

Those are not systems languages. The comparison is irrelevant.

> I am not going to use D BigInts for cryptography. 99.9% of the time the
> numbers inside my BigInts will be less than 63 bits long. I'd like to use
> BigInt as in Python to avoid the problems caused by int, because currently D
> has no integer overflow checks, because Walter doesn't want them.

OK, now the truth comes out. You shouldn't be using BigInt for that. That's a
very simple task.

> The first and by a wide margin most important purpose of multi-precision
> integers is not to represent huge numbers or to do cryptography, but to free
> the mind of the programmer from being forced to think all the time about
> possible overflows breaking the code he/she is writing, freeing that part of
> attention, and allowing him/her to focus more on the algorithm instead.

Sorry, the viewpoint that BigInt is a workaround for your personal hobby horse
(integer overflow) has ABSOLUTELY ZERO support from me.
BigInt is for arithmetic on big integers. It is not for "freeing the
programmer's mind" of overflow.
A type that can be used as a drop-in replacement for integers, but warns on
overflow, is 100 times simpler than BigInt. Why don't you just write it?
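To illustrate, such a drop-in type can be sketched in a few lines of D. This
is only a sketch of the idea, not code from the thread: the struct name
`CheckedInt`, the abort-on-overflow policy, and the use of druntime's
`core.checkedint` overflow-detecting primitives (which postdate this comment)
are all assumptions for illustration.

```d
import core.checkedint : adds, muls, subs;

/// Hypothetical drop-in replacement for int that aborts on overflow.
struct CheckedInt
{
    int value;

    CheckedInt opBinary(string op)(CheckedInt rhs) const
    {
        bool overflow = false;
        // adds/subs/muls perform the operation and set the flag on overflow.
        static if (op == "+")      immutable r = adds(value, rhs.value, overflow);
        else static if (op == "-") immutable r = subs(value, rhs.value, overflow);
        else static if (op == "*") immutable r = muls(value, rhs.value, overflow);
        else static assert(0, "unsupported operator " ~ op);
        assert(!overflow, "integer overflow in '" ~ op ~ "'");
        return CheckedInt(r);
    }
}
```

Every value stays a machine word, so unlike BigInt there is no allocation and
no multi-word arithmetic; the only cost is the per-operation overflow check.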
