Jim:
I am talking about my own theories now; please try to remain calm:

Boris:
It's hard, because you're maddeningly vague :). And you can't help being vague, 
because you don't get that complexity of representation, & all relevant 
definitions, must be incremental.
You can only be explicit if you start from minimal complexity.

Jim:
I think it is important to be able to store, or to find, the data that 
represents the basis of a concept or of a group of related concepts, in order 
to resolve some issues that will become apparent as the AGI program learns 
about the concept or relates it to other concepts. However, the program will 
not be able to store all the input it is exposed to, and this basis has to be 
derived from, or represent, a collection of variations on the primary subject; 
for those reasons the concept has to be composed of generalizations and 
variations.

Boris: Yes, I call those match & miss :).
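A minimal sketch of one way to read match & miss, purely illustrative: match 
as the shared magnitude of two comparands (their minimum) and miss as their 
difference. These definitions are an assumption for the sketch, not 
necessarily Boris's exact ones.

def compare(a, b):
    # Compare two scalar inputs into match & miss.
    # match: the shared quantity, a generalization over both inputs
    # miss:  the residual variation, what one input adds over the other
    match = min(a, b)
    miss = a - b
    return match, miss

# usage: compare(5, 3) -> (3, 2): 3 units are shared, 2 are variation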

Jim:
Are you thinking of storing representations of all primitives that would be 
used by your program (raw sensory data) so that comparisons might be later made 
against some of them?

Boris:
That would be buffering; it's optional for inputs that are pruned out, i.e. 
not selected for immediate search. The same cost-benefit analysis applies, but 
the cost of buffering is a lot lower than that of search. This is done on all 
levels.
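A toy illustration of that triage, with made-up constants (the specific costs 
and the three-way split are assumptions, not Boris's numbers):

SEARCH_COST = 10.0   # hypothetical cost of immediate comparison/search
BUFFER_COST = 0.5    # buffering is assumed to be far cheaper than search

def triage(value, projected_match, buffer):
    # Route one input by cost-benefit analysis.
    if projected_match > SEARCH_COST:
        return "search"           # selected for immediate search
    if projected_match > BUFFER_COST:
        buffer.append(value)      # pruned out of search, but cheap to keep
        return "buffer"
    return "discard"              # not worth even buffering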

Jim:
Or are the compressions going to be taken from generalizations of the 
variations of sensory data that commonly represent a particular event to be 
gauged?

Boris:
Generalization is compression. There are all kinds of possible variations: 
syntactic complexity of inputs is incremental, & individual variables are 
pruned just like multi-variable inputs.
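One way to picture per-variable pruning, strictly as a sketch (the variable 
names and cost table are invented): a pattern is a bundle of derived 
variables, and each variable passes or fails the same cost filter that whole 
inputs do.

def prune_variables(pattern, filter_costs):
    # pattern: dict of derived variables, e.g. {"match": 7, "miss": -1}
    # filter_costs: hypothetical per-variable cost of retaining a variable
    # Keep a variable only if its magnitude exceeds its cost.
    return {name: value for name, value in pattern.items()
            if abs(value) > filter_costs.get(name, 0.0)}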

Jim:
Are comparisons going to be made against partial decompressions of previously 
compressed representations?

Boris:
This would be a comparison to feedback, & that's only cost-efficient if the 
feedback is aggregated over all inputs of a higher-level search span. I call 
it evaluation for elevation, rather than comparison.
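A rough sketch of evaluation against aggregated feedback (using the mean as 
the aggregate is my assumption; the point is one evaluation per input rather 
than one comparison per input-feedback pair):

def evaluate_for_elevation(inputs, feedback):
    # feedback: values fed down over a higher-level search span
    # Aggregate the feedback once, then use it as a single threshold.
    threshold = sum(feedback) / max(len(feedback), 1)
    return [x for x in inputs if x > threshold]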

Jim:
You don't have to continue if you don't want to; however, I am curious about 
what you are talking about.

Boris: I'd love to continue, as long as we're talking substance :).

Jim:
I guess you must mind my being rude since you are not able to appreciate the 
substance of my criticisms. 

Boris:
I think it's the other way around :). It should be obvious to both of us that I 
am a lot ruder than you. What we disagree on is which one of us doesn't 
appreciate the substance :).

> Boris:
> My whole approach is about cognitive economics, I quantify costs & benefits 
> on the lowest level of representation (& consistently translated on 
> incremental higher levels). 
> That's the basis for predictive search pruning, which is what scalability is 
> all about.

Jim:
Are you saying that you consistently use costs and benefits derived at the 
lowest level of representation through all incremented higher levels?

Boris: Yes, except that the "opportunity cost" of utilized computational 
resources is feedback from relatively higher levels. Benefit is projected 
match: current match, also adjusted by such downward feedback.

Jim:
And then you are saying that is the basis of pruning searches based on 
predictions... (of what is being looked for?)

Boris:
Yes, inputs are forwarded to higher levels if their additive projected match 
exceeds the opportunity cost of the thus-expanded search. You're maximizing 
the predictive power of the whole system.
I have a lot more details in my intro: 
http://www.cognitivealgorithm.info/2012/01/cognitive-algorithm.html 
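Putting the pieces together as a toy pipeline, strictly as a sketch: the 
projection factors and opportunity costs below are invented stand-ins for the 
quantities Boris says are derived at the lowest level and fed back from higher 
ones.

def run_hierarchy(matches, levels):
    # levels: list of (projection_factor, opportunity_cost) per level.
    # An input is forwarded only if its projected match exceeds the
    # opportunity cost of the thus-expanded search.
    current = matches
    for factor, cost in levels:
        current = [m for m in current if m * factor > cost]
    return current

# usage: three levels with rising opportunity costs
run_hierarchy([1.0, 4.0, 9.0, 2.5], [(1.5, 2.0), (1.2, 4.0), (1.1, 8.0)])
# -> [9.0]: only the strongest match survives all three filters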



On Sat, Aug 18, 2012 at 11:29 AM, Boris Kazachenko <bori...@verizon.net> wrote:

  > Boris:
  We are living proof that in our world effective scalability *is* feasible.
  Jim:
  I was talking about artificial general intelligence.
  Boris:
  The distinction *is* artificial, it's all about algorithms.

  Jim:
  I am interested in discovering the weak points of your theory so that I am 
better able to understand it. Sorry if that is rude. 

  Boris:
  I don't mind "rude", as long as it's interesting. Which it would be if you 
did address my weak points, but you can't unless you have a stronger 
alternative. I think the weak points are problems I am currently working on, 
but I can't explain them if you don't understand those that I already solved.

  Jim:
  I was asking you why you think that your approach to scalability would make 
your AGI method feasible. I agree that we have to be able to examine the 
foundations of an (artificial) idea to make a convergence of (artificial) 
thoughts scalable -in some cases- but I was also saying that the reference to 
raw sensory data is not generally sufficient for general (artificial) reasoning.

  Boris: You must've missed this:

  > I said it's necessary, not sufficient.
  > My whole approach is about cognitive economics, I quantify costs & benefits 
on the lowest level of representation (& consistently translated on incremental 
higher levels). 
  > That's the basis for predictive search pruning, which is what scalability 
is all about.

  If you agree with this, show me who else is doing it. If there isn't anyone, 
then I am a frontrunner.
  Jim: So you are saying that ontological hierarchy always piggybacks 
epistemological hierarchy

  Boris:
  No, I said nothing of the sort, & in fact it's the reverse: ontological 
hierarchy is the external reality; epistemological hierarchy is the sequence 
in which we discover an infinitesimal subset of it, via iterative application 
of an *unsupervised* pattern-discovery algorithm that *always* starts with 
analog, uncompressed data. If it's not analog, then it's already part of our 
collective epistemological hierarchy.

  Jim: Aren't you effectively saying that ontological hierarchy has to be 
reduced to raw sensory data since that has been the basis of your scalability 
argument?

  Boris: On the contrary, it's manifested to us via raw sensory data...



  From: Jim Bromer 
  Sent: Saturday, August 18, 2012 10:14 AM
  To: AGI 
  Subject: Re: [agi] *Subtraction* is the Engine of Computation


  Boris:
  We are living proof that in our world effective scalability *is* feasible.

  Jim:
  I was talking about artificial general intelligence.

  I am interested in discovering the weak points of your theory so that I am 
better able to understand it.  Sorry if that is rude.  The philosophical issue 
that I was discussing is whether or not you actually have some evidence - or 
really good reasons - to think that your approach to AGI will eventually work 
(without some major advancement in computer science outside of your work.)  
Your response that "we are living proof..." really did not answer my question 
in this case.  I am not arguing against the possibility of artificial general 
intelligence but I was asking you why you think that your approach to 
scalability would make your AGI method feasible.  I agree that we have to be 
able to examine the foundations of an (artificial) idea to make a convergence 
of (artificial) thoughts scalable -in some cases- but I was also saying that 
the reference to raw sensory data is not generally sufficient for general 
(artificial) reasoning.

  Take a look at what you said in response to my comments:
  Boris: You are confusing ontological hierarchy, in which we always start 
from some arbitrary point, & epistemological hierarchy, in which the brain / 
civilization of brains *always* starts with analog / un-encoded / uncompressed 
data. GI is the algorithm of *unsupervised* pattern discovery; supervised 
education always piggybacks on the former, done by prior generations.

  Jim: So you are saying that ontological hierarchy always piggybacks 
epistemological hierarchy which (I believe you are saying) is the algorithm of 
*unsupervised* pattern discovery based on analog uncompressed data.  Aren't you 
effectively saying that ontological hierarchy has to be reduced to raw sensory 
data since that has been the basis of your scalability argument?

  I wasn't confusing ontological hierarchy with epistemological hierarchy, by 
the way. The question, which is irrelevant to your presentation but relevant 
to my effort to understand its substance, is whether or not you realized that 
I hadn't. If it was a mistake that you made, then OK; but if you were 
misrepresenting my views in order to dismiss my comments, then I would 
discontinue taking that tack with you, because I have learned that it is 
almost hopeless to continue with people who do that. In the one case, you 
simply misunderstood what I was saying; in the other, you will insist that I 
am the one who does not understand a foundation of what we are talking about, 
in order to avoid dealing with an issue of relative complexity that no one has 
solved.

  Jim Bromer


  On Sat, Aug 18, 2012 at 9:51 AM, Boris Kazachenko <bori...@verizon.net> wrote:

    Jim,

    It would help if you tried to address specific points that I made.

    > You and I do not need to understand the particle physics of cellular 
microbiology in order to study an introductory text of biology. And in order to 
learn what the text is presenting, we do not need to reduce everything 
mentioned in the text to the order of particle physics...

    You are confusing ontological hierarchy, in which we always start from 
some arbitrary point, & epistemological hierarchy, in which the brain / 
civilization of brains *always* starts with analog / un-encoded / uncompressed 
data. GI is the algorithm of *unsupervised* pattern discovery; supervised 
education always piggybacks on the former, done by prior generations.

    > For example, to really understand what is presented in the biology text 
we do not need to recall the sensory experience of reading.

    Sensory experience is how we learn phonemes, the alphabet, words, & the 
concepts behind basic words in the first place.

    > Also, some scalability issues cannot be resolved just by having the 
foundations of the subject (or object) handy. 

    I said it's necessary, not sufficient. My whole approach is about cognitive 
economics, I quantify costs & benefits on the lowest level of representation. 
That's the basis for predictive search pruning, which is what scalability is 
all about.

    > The potential complexity of interrelations (as in derivable 
interrelations) may make scalability infeasible.

    We are living proof that in our world effective scalability *is* feasible.

    http://www.cognitivealgorithm.info/2012/01/cognitive-algorithm.html 



    From: Jim Bromer 
    Sent: Saturday, August 18, 2012 8:44 AM
    To: AGI 
    Subject: Re: [agi] *Subtraction* is the Engine of Computation


    Boris,
    You and I do not need to understand the particle physics of cellular 
microbiology in order to study an introductory text of biology.  And in order 
to learn what the text is presenting, we do not need to reduce everything 
mentioned in the text to the order of particle physics.

    So while I agree that we need to go to the basis of knowledge to resolve 
some scalability issues, and derived knowledge is often based on raw sensory 
experience, the point that I am trying to make is that the basis of knowledge 
that we have to use in many scalability scenarios is not raw sensory 
experience.

    For example, to really understand what is presented in the biology text we 
do not need to recall the sensory experience of reading.  (I guess it would be 
nice to be able to do that but it is not necessary for the problem of learning 
to understand what the text referred to.) So we really do not need to reduce 
all problems to primitive forms.

    Also, some scalability issues cannot be resolved just by having the 
foundations of the subject (or object) handy.  The potential complexity of 
interrelations (as in derivable interrelations) may make scalability infeasible.

    Jim Bromer