I believe that the complexity of binary addition, as determined by the
expansion of an addition algorithm into a true Boolean formula, grows
exponentially as the number of addends increases. Of course, the length of
the addends also affects the complexity.
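
To make that concrete, here is a rough Python sketch (my own illustration,
not a worked-out proof): it builds the sum bits of several n-bit addends as
explicit Boolean formula trees over the input bits, with no sharing of
sub-formulas, and prints how the formula size grows as addends are added.
The representation and the size measure are just illustrative choices.

    class Node:
        """A Boolean formula node; size counts variable occurrences (no sharing)."""
        def __init__(self, op, kids=(), name=None):
            self.op, self.kids, self.name = op, kids, name
            self.size = 1 if op == 'var' else sum(k.size for k in kids)

    def var(name): return Node('var', name=name)
    def AND(a, b): return Node('and', (a, b))
    def OR(a, b):  return Node('or', (a, b))
    def XOR(a, b): return Node('xor', (a, b))

    def ripple_add(x_bits, y_bits):
        """Add two equal-length bit vectors of formulas; return the sum-bit formulas."""
        carry, out = None, []
        for a, b in zip(x_bits, y_bits):
            if carry is None:
                out.append(XOR(a, b))
                carry = AND(a, b)
            else:
                out.append(XOR(XOR(a, b), carry))
                carry = OR(AND(a, b), AND(carry, XOR(a, b)))
        return out   # the final carry is dropped to keep the width fixed

    n = 4   # bits per addend
    addends = [[var("x%d_%d" % (k, i)) for i in range(n)] for k in range(8)]
    acc = addends[0]
    for k, nxt in enumerate(addends[1:], start=2):
        acc = ripple_add(acc, nxt)
        print(k, "addends: formula size ~", sum(b.size for b in acc))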

On Tue, Aug 7, 2012 at 7:22 PM, Jim Bromer <jimbro...@gmail.com> wrote:

> The positional notation system is a powerful compression of a
> representation of a number. The decimal number system and the binary number
> system are both examples of this method.  If you were to use a single mark
> to represent 1, two marks to represent 2, three marks to represent 3 and so
> on, you would find that the positional notation system has a logarithmic
> compression value.  Decompressed, a value of n (significant) bits
> expressed in the binary system would expand into on the order of 2^n
> sticks (in the unary system that I just mentioned).  This is a major
> representational compression method and it represents one of the greatest
> historical achievements. I feel that the system fails badly and leaves a
> lot of artifacts when used to express fractional amounts but I don't want
> to get into that can of worms (which has already been opened for centuries).
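>
> Here is a tiny Python sketch of that point (my own illustration; the
> numbers are just examples): it compares the length of a number written in
> unary "sticks" with the number of bits in its binary form.
>
>     for n_bits in (4, 8, 16, 20):
>         value = 2 ** n_bits - 1          # largest value with n_bits significant bits
>         sticks = value                   # one mark per unit in the unary system
>         print(n_bits, "binary bits ->", sticks, "unary sticks")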
>
> The methods of addition and multiplication of numbers expressed in the
> positional notation system represent major procedural compression methods
> because they can be used to compute their functions so efficiently.  I
> always felt that since multiplication was more efficient than addition
> (when used to do what they do), it had to be the more powerful system.
> Cross multiplication (where you multiply the values in the ones column of
> the two multiplicands, then multiply the ones column of one multiplicand
> by the twos column of the other, and so on) seemed to be the
> most amazing procedural compression method.  However, I was wrong.  The
> addition of multiple numbers is the most powerful procedural engine.  When
> you try to express the addition of multiple binary numbers in terms of
> a true Boolean formula (on the input bit fields of the addends),
> it expands into a formula of roughly exponential complexity.  (That is,
> the formula is itself exponentially long relative to some function of the
> number of bits in the addends and the number of addends.)  If you look at
> cross multiplication alone, ignoring
> the subsequent step when you add the partial products of the
> multiplication, you would find that it expands into a Boolean formula
> that is roughly n^2 terms long (based on the input fields of the
> multiplicands, each of which is n bits long).  But when you realize that
> the addition of the partial products has a complexity which is at least
> exponential relative to the input bit fields, you can see that this
> addition step is the main engine of efficiency.  So it is true that
> multiplication is more complex than addition, but it is the part where
> the additions are being done which is most complex (that is to say, the
> part which is most efficient).
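>
> A rough Python sketch of the two stages (my own illustration; bit order
> and names are arbitrary): forming the partial products of cross
> multiplication takes one AND per pair of input bits, n*n in total, and
> what remains is exactly the multi-operand addition described above.
>
>     def partial_products(a_bits, b_bits):
>         """Cross multiplication: one AND per pair of bits (little-endian input)."""
>         rows = []
>         for j, bj in enumerate(b_bits):
>             # row j is (a AND b_j), shifted left by j positions
>             rows.append([0] * j + [ai & bj for ai in a_bits])
>         return rows
>
>     a = [1, 0, 1, 1]    # bits of 13, least significant first
>     b = [1, 1, 0, 1]    # bits of 11, least significant first
>     rows = partial_products(a, b)
>     print(len(a) * len(b), "partial-product bits in", len(rows), "rows;")
>     print("summing those rows is the multi-operand addition step.")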
>
> (Of course there are some cases where this analysis of the complexity may
> not hold; one can find shortcuts for some situations, as when one of the
> multiplicands is zero or one. But in general, the best case of procedural
> efficiency, as found through an expansion into a true Boolean formula, can
> be found in binary addition. You can say that multiplication is more
> powerful, but without binary addition you've got squat.)
>
> (I have not worked the details out but I am confident that this analysis
> is in the ballpark.)
>
> Jim Bromer
>
>
>


