Jim,

> I do believe that projection, first produced from primal or programmed
> methods and then later from learned reactions, is obviously a necessary
> part of general intelligence.

I think you're talking about instincts, conditioned responses, & analogues. No, 
they're not part of GI: there's nothing general about them.

> Since such projections are obviously necessary, I don't see any reason not
> to amp them up so long as they do not interfere with the ability of an AGI
> program to do some genuine learning from induction-like methods.

My point is that you can't put the cart before the horse. Deduction / pattern 
projection is a feedback; to serve GI, it must be directly determined by 
previously discovered patterns. Well, unless you're doing pure math, but that's 
not part of our basic learning algorithm. Lots of intelligent people are 
totally clueless about math.
   
> It will take me some time to figure out what you are saying in the rest of 
> the message.


Strange, I thought it was crystal-clear :).

Boris,
It is going to take some time for me to understand what you wrote. But I do 
have a quick reaction to something easy to understand.

You said:
This reflects our basic disagreement: you (& most logicians, mathematicians, & 
programmers) start from deduction / pattern projection, which is based on 
direct operations. And I think real GI must start from induction / pattern 
discovery, which is intrinsically an inverse operation. It's pretty dumb to 
generate / project patterns at random, vs. first discovering them in the real 
world & projecting accordingly.

Well, no and yes. No, I don't disagree that GI must start from induction and 
pattern discovery, but I do believe that projection, first produced from primal 
or programmed methods and then later from learned reactions, is obviously a 
necessary part of general intelligence. Since such projections are obviously 
necessary, I don't see any reason not to amp them up so long as they do not 
interfere with the ability of an AGI program to do some genuine learning from 
induction-like methods.
It will take me some time to figure out what you are saying in the rest of the 
message.

Jim.

On Wed, Aug 8, 2012 at 10:21 AM, Boris Kazachenko <bori...@verizon.net> wrote:

  Jim,

  I agree with your focus on binary computational compression, but, as you 
said, that efficiency depends on specific operands. Even though low-power 
operations (addition) are more efficient for most data, it's the exceptions 
that matter. Most data is noise; what we care about is patterns. So, to improve 
both representational & computational compression, we need to quantify it for 
each operand & operation. And the atomic operation that quantifies compression 
is what I call comparison, which starts with an inverse, vs. direct, arithmetic 
operation. This reflects our basic disagreement: you (& most logicians, 
mathematicians, & programmers) start from deduction / pattern projection, which 
is based on direct operations. And I think real GI must start from induction / 
pattern discovery, which is intrinsically an inverse operation. It's pretty 
dumb to generate / project patterns at random, vs. first discovering them in 
the real world & projecting accordingly.

  This is how I proposed to quantify compression (pattern strength) in my 
intro, part 2: 


  "AIT quantifies compression for sequences of inputs, while I define match for 
comparisons among individual inputs. On this level, a match is a lossless 
compression by replacing a larger comparand with its derivative (miss), 
relative to the smaller comparand. In other words, a match is the complement of 
a miss. That’s a deeper level of analysis, which I think can enable a far more
incremental (thus potentially scalable) approach.

  Given incremental complexity of representation, initial inputs should have 
binary resolution. However, average binary match won’t justify the cost of 
comparison, which adds a syntactic overhead of newly differentiated match & 
miss to positionally distinct inputs. Rather, these binary inputs are 
compressed by digitization: a selective carry, aggregated & then forwarded up 
the hierarchy of digits. This is analogous to hierarchical search, explained in 
the next chapter, where selected templates are compared & conditionally 
forwarded up the hierarchy of expansion levels: a “digital hierarchy” of a 
corresponding coordinate. Digitization is done on inputs within a shared 
coordinate, the resolution of which is adjusted by feedback. This resolution 
must form average integers that are large enough for an average match between 
them (a subset of their magnitude) to merit the above-mentioned costs of 
comparison.
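
  For illustration, a minimal Python sketch of one possible reading of this 
digitization (the binary base and the overflow rule are assumptions, not fixed 
by the definition above): binary inputs within a shared coordinate feed a 
counter whose digits forward a carry up one level whenever they overflow.

    class DigitHierarchy:
        def __init__(self, base=2):
            self.base = base
            self.digits = [0]  # digits[0] is the lowest level

        def feed(self, bit):
            # Aggregate a binary input; selectively carry & forward
            # any overflow up the hierarchy of digits.
            self.digits[0] += bit
            level = 0
            while self.digits[level] >= self.base:
                self.digits[level] -= self.base
                if level + 1 == len(self.digits):
                    self.digits.append(0)
                self.digits[level + 1] += 1  # forwarded up
                level += 1

        def value(self):
            return sum(d * self.base ** i for i, d in enumerate(self.digits))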

  Hence, the next order of compression is comparison across coordinates 
(initially defined with binary resolution as before | after input). Any 
comparison is an inverse arithmetic operation of incremental power: Boolean 
AND, subtraction, division, logarithm, & so on. Binary match is a sum of AND: 
partial identity of uncompressed bit strings, & miss is !AND. Binary comparison 
is useful for digitization, but it won’t further compress the integers produced 
thereby. In general, the products of a given-power comparison are further 
compressed only by a higher-power comparison between them, where match is the 
*additive* compression.
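
  In Python, that binary comparison might be sketched as follows (the 8-bit 
width is an arbitrary assumption):

    def binary_compare(a, b, width=8):
        # Match is the AND of the two bit strings: their partial identity.
        match_bits = a & b
        # Miss is !AND, masked to the shared bit width.
        miss_bits = ~match_bits & ((1 << width) - 1)
        # Summed binary match: the count of coinciding 1 bits.
        return bin(match_bits).count("1"), match_bits, miss_bits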

  Thus, initial comparison between digitized integers is done by subtraction, 
which increases match by compressing miss from !AND to difference, in which 
opposite-sign bits cancel each other via carry | borrow. The match is increased 
because it is the complement of the difference, equal to the smaller of the 
comparands.

  All-to-all comparison across a 1D queue of pixels forms signed derivatives; 
complemented by these, new inputs can losslessly & compressively replace older 
templates. At the same time, current input match determines whether individual 
derivatives are also compared (vs. aggregated), forming successively higher 
derivatives. “Atomic” comparison is between a single-variable input & a 
template (older input):
  Comparison: match = min(input, template); miss = dif (i - t), aggregated over 
the span of constant sign.
  Evaluation: match - average_match_per_average_difference_match, formed on the 
next search level.
  This evaluation is for comparing higher derivatives, vs. evaluation for 
higher-level inputs explained in part 3. It can also be increasingly complex, 
but I will need a meaningful feedback to elaborate.
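
  A rough Python rendering of that atomic comparison (a sketch: treating each 
input's predecessor as its template, and aggregating match alongside the 
difference per span, are interpretations, not fixed by the text above):

    def atomic_compare(inputs):
        # match = min(input, template), miss = difference (i - t),
        # with results aggregated over spans of constant difference sign.
        spans, span_match, span_diff, prev_sign = [], 0, 0, None
        for t, i in zip(inputs, inputs[1:]):  # template = older input
            match, diff = min(i, t), i - t
            sign = diff >= 0
            if prev_sign is not None and sign != prev_sign:
                spans.append((span_match, span_diff))  # close the span
                span_match, span_diff = 0, 0
            span_match += match
            span_diff += diff
            prev_sign = sign
        spans.append((span_match, span_diff))
        return spans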

  Division further reduces difference to a ratio, which can then be reduced to 
a logarithm, & so on. Thus, complementary match is increased with the power of 
comparison. But the costs may grow even faster, for both operations & 
incremental syntax to record incidental sign, fraction, irrational fraction. 
The power of comparison is increased if current match plus miss predict an 
improvement, as indicated by higher-order comparison between the results from 
different powers of comparison. This meta-comparison can discover algorithms, 
or meta-patterns..."
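
  The escalating powers of comparison read naturally as successive reductions 
of the miss; a Python sketch (how the complementary match is credited at the 
higher powers is left open above, so only the miss is shown, and positive 
comparands are assumed for division & logarithm):

    import math

    def reduced_miss(i, t, power, width=8):
        # Each power of comparison further reduces the miss left by
        # the previous, cheaper one.
        if power == 0:
            return ~(i & t) & ((1 << width) - 1)  # Boolean miss: !AND
        if power == 1:
            return i - t                          # subtraction: difference
        if power == 2:
            return i / t                          # division: ratio
        if power == 3:
            return math.log(i / t)                # logarithm of the ratio
        raise ValueError("power must be 0..3")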


  http://www.cognitivealgorithm.info/2012/01/cognitive-algorithm.html 

  From: Jim Bromer 
  Sent: Wednesday, August 08, 2012 9:08 AM
  To: AGI 
  Subject: [agi] Re: Addition is the Engine of Computation


  Rereading my original message, I realized that what I said may not have been 
very easy to read. It is, however, interesting, and serious programmers should 
be aware of it.

  An algorithm can be (and usually is) like a procedural compression method.  
(I have sometimes called it a transformational compression method.)  That is 
not to say that algorithms typically compress data, but that they are extremely 
efficient. For instance, multiplication is defined as the repeated addition of 
a particular number.  However, the standard multiplier algorithm is much more 
efficient than doing the repeated additions because it is what I am calling a 
procedural compression method.  Furthermore, both addition and multiplication 
can use binary numbers which, as I explained, are extremely efficient 
compressions of the representations of values.
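
  To make the contrast concrete, here is an illustrative Python sketch (not 
from the original message): multiplication by its definition as repeated 
addition, next to the standard shift-and-add multiplier, whose work grows with 
the bit length of the multiplier rather than with its value.

    def mul_repeated_addition(a, b):
        # Multiplication by definition: b additions of a (b >= 0).
        total = 0
        for _ in range(b):
            total += a
        return total

    def mul_shift_and_add(a, b):
        # The standard multiplier: one shifted partial product per set
        # bit of b, and those partial products are added together.
        total = 0
        while b:
            if b & 1:
                total += a  # add this partial product
            a <<= 1         # shift: the next partial product
            b >>= 1
        return total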

  It is difficult to compare the complexity (that is, the efficiency) of these 
compressed procedural methods, so one way to do so is to expand the algorithm 
into a true formula of Boolean logic where only AND, OR, negation, and 
parentheses are used with atomic Boolean variables. If you can find the 
shortest Boolean formula, then we can say (if we agree to do so) that its 
length is a measure of the complexity of the algorithm. Most programmers 
(including myself) do not know how to go about this, and the problem of 
efficiently finding the most efficient Boolean formula has probably not yet 
been solved. However, we can expand some algorithms into Boolean formulas and 
at least get an intuitive sense of just how efficient these algorithms are.
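
  For a taste of such an expansion, here is a one-bit full adder written with 
nothing but AND, OR, and negation (a Python sketch over truthy bits; counting 
its literals gives the kind of intuitive complexity measure meant above):

    def full_adder(a, b, cin):
        # a XOR b, spelled out in AND/OR/NOT only
        axb = (a and not b) or (not a and b)
        s = (axb and not cin) or (not axb and cin)  # sum bit
        cout = (a and b) or (axb and cin)           # carry out
        return s, cout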

  What I said in this thread is that I was surprised to find that the real 
engine of efficiency in arithmetic procedures is found in additions of multiple 
addends. Multiplication is more efficient than repeated addition for one class 
of multiple addends, but even there it is my opinion that the real power of 
multiplication comes from the addition of the multiple partial products.
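
  There is a standard hardware technique behind cheap multiple-addend 
addition, carry-save addition, which compresses three addends to two using 
only bitwise operations (this is textbook material, not something claimed in 
the thread; a Python sketch):

    def carry_save(x, y, z):
        # Digit-wise sum with carries ignored, plus the deferred
        # carries shifted into place; x + y + z == s + c.
        s = x ^ y ^ z
        c = ((x & y) | (x & z) | (y & z)) << 1
        return s, c

  Chaining such steps over many addends (the partial products of a multiplier, 
for instance) defers all carry propagation to a single final addition.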

  Most examples of compressed data cannot be used without decompressing the 
data. The benefit of addition and multiplication is that it is not necessary 
to 'decompress' the binary representations of values in order to use them in 
arithmetic operations. So addition and multiplication (and the 
computational-numerical methods which are derived from addition) are not only 
procedural compressions, but they also use data in its compressed form without 
first decompressing it! This is an amazing power, and I believe that it is 
exactly where the power of computation comes from.
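
  As a toy illustration (a sketch, with bit lists, least significant bit 
first, standing in for the 'compressed' representations): ripple-carry 
addition works digit by digit on the binary forms themselves, with no unary 
'decompression' of the values.

    def add_binary(a_bits, b_bits):
        # Add two binary-represented values directly, bit by bit.
        out, carry = [], 0
        for k in range(max(len(a_bits), len(b_bits))):
            a = a_bits[k] if k < len(a_bits) else 0
            b = b_bits[k] if k < len(b_bits) else 0
            total = a + b + carry
            out.append(total & 1)  # this output bit
            carry = total >> 1     # carry into the next position
        if carry:
            out.append(carry)
        return out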

  What does this have to do with you?
  It is my opinion, based on this analysis, that if you can -effectively- use 
multiple-addend addition in your programs, you would be well advised to 
consider doing so. The problem with many supposed numerical 'solutions' to AGI is that 
no one has been able to find an effective method to use numbers to represent or 
model the problem space of AGI.

  Jim Bromer



