Just to clarify: "Epistemological hierarchy is the sequence in which we 
discover an infinitesimal subset of the ontological hierarchy":
Yes, that subset does piggyback on the epistemological hierarchy, if that's 
what you meant.



From: Boris Kazachenko 
Sent: Saturday, August 18, 2012 11:29 AM
To: AGI 
Subject: Re: [agi] *Subtraction* is the Engine of Computation


> Boris:
We are living proof that in our world effective scalability *is* feasible.
Jim:
I was talking about artificial general intelligence.
Boris:
The distinction *is* artificial; it's all about algorithms.

Jim:
I am interested in discovering the weak points of your theory so that I am 
better able to understand it. Sorry if that is rude. 

Boris:
I don't mind "rude", as long as it's interesting. Which it would be if you did 
address my weak points, but you can't unless you have a stronger alternative. I 
think the weak points are problems I am currently working on, but I can't 
explain them if you don't understand those I have already solved.

Jim:
I was asking you why you think that your approach to scalability would make 
your AGI method feasible. I agree that we have to be able to examine the 
foundations of an (artificial) idea to make a convergence of (artificial) 
thoughts scalable, in some cases, but I was also saying that the reference to 
raw sensory data is not generally sufficient for general (artificial) reasoning.

Boris: You must've missed this:

> I said it's necessary, not sufficient.
> My whole approach is about cognitive economics: I quantify costs & benefits 
> on the lowest level of representation (& translate them consistently to 
> incrementally higher levels). 
> That's the basis for predictive search pruning, which is what scalability is 
> all about.

If you agree with this, show me who else is doing it. If there isn't anyone, 
then I am a frontrunner.
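
A minimal sketch of that kind of cost/benefit bookkeeping & pruning, in Python. 
It is purely illustrative: the element structure, the fixed comparison cost & 
the threshold below are assumptions made for the example, not the actual 
algorithm.

def prune_by_predictive_value(elements, cost_per_compare, min_net_gain):
    # Keep only elements whose accumulated benefit (e.g. match to neighbors)
    # exceeds the cost of comparing them further; the rest are pruned from
    # deeper search, which is what keeps higher levels affordable.
    selected = []
    for elem in elements:
        net_value = elem["match"] - cost_per_compare
        if net_value > min_net_gain:
            selected.append(elem)
    return selected

# Usage: outputs of a lower level, each carrying an accumulated "match" value.
level_0 = [{"id": 0, "match": 5.0}, {"id": 1, "match": 0.5}, {"id": 2, "match": 3.2}]
level_1_inputs = prune_by_predictive_value(level_0, cost_per_compare=1.0, min_net_gain=0.0)
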
Jim: So you are saying that ontological hierarchy always piggybacks 
epistemological hierarchy

Boris:
No, I said nothing of the sort, & in fact it's the reverse: ontological 
hierarchy is the external reality; epistemological hierarchy is the sequence 
in which we discover an infinitesimal subset of that, via iterative 
application of an *unsupervised* pattern-discovery algorithm that *always* 
starts with analog, uncompressed data. If it's not analog, then it's already 
part of our collective epistemological hierarchy. 
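
For illustration only, a toy version of such an iteration on raw 1-D input, in 
Python, using subtraction (per the subject line) as the elementary comparison. 
The grouping rule (runs of same-signed differences) is an assumption made for 
the example, not a description of the actual algorithm.

def discover_patterns(signal):
    # Group consecutive samples into runs where the sign of the adjacent
    # difference stays the same; each run is a minimal "pattern" that a higher
    # level could then compare & compress further.
    patterns, current, prev_sign = [], [signal[0]], None
    for prior, nxt in zip(signal, signal[1:]):
        diff = nxt - prior                  # subtraction as the elementary comparison
        sign = (diff > 0) - (diff < 0)
        if prev_sign is None or sign == prev_sign:
            current.append(nxt)
        else:
            patterns.append(current)        # close the run, start a new pattern
            current = [nxt]
        prev_sign = sign
    patterns.append(current)
    return patterns

# Raw, uncompressed input, e.g. digitized sensor samples:
print(discover_patterns([1, 2, 4, 3, 2, 2, 5]))   # -> [[1, 2, 4], [3, 2], [2], [5]]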

Jim: Aren't you effectively saying that ontological hierarchy has to be reduced 
to raw sensory data since that has been the basis of your scalability argument?

Boris: On the contrary, it's manifested to us via raw sensory data...



From: Jim Bromer 
Sent: Saturday, August 18, 2012 10:14 AM
To: AGI 
Subject: Re: [agi] *Subtraction* is the Engine of Computation


Boris:
We are living proof that in our world effective scalability *is* feasible.

Jim:
I was talking about artificial general intelligence.

I am interested in discovering the weak points of your theory so that I am 
better able to understand it.  Sorry if that is rude.  The philosophical issue 
that I was discussing is whether or not you actually have some evidence - or 
really good reasons - to think that your approach to AGI will eventually work 
(without some major advancement in computer science outside of your work).  
Your response that "we are living proof..." really did not answer my question 
in this case.  I am not arguing against the possibility of artificial general 
intelligence but I was asking you why you think that your approach to 
scalability would make your AGI method feasible.  I agree that we have to be 
able to examine the foundations of an (artificial) idea to make a convergence 
of (artificial) thoughts scalable, in some cases, but I was also saying that 
the reference to raw sensory data is not generally sufficient for general 
(artificial) reasoning.

Take a look at what you said in response to my comments:
Boris: You are confusing ontological hierarchy, in which we always start from 
some arbitrary point, & epistemological hierarchy, in which the brain / 
civilization of brains *always* starts with analog / un-encoded / uncompressed 
data. GI is the algorithm of *unsupervised* pattern discovery; supervised 
education always piggybacks on the former, done by prior generations. 

Jim: So you are saying that ontological hierarchy always piggybacks 
epistemological hierarchy, which (I believe you are saying) is the algorithm of 
*unsupervised* pattern discovery based on analog uncompressed data.  Aren't you 
effectively saying that ontological hierarchy has to be reduced to raw sensory 
data since that has been the basis of your scalability argument?

I wasn't confusing ontological hierarchy with epistemological hierarchy, by 
the way.  The question, which is irrelevant to your presentation but relevant 
to my effort to understand the substance of your presentation, is whether or 
not you realized that I hadn't.  If it was a mistake that you made, then OK, 
but if you were misrepresenting my views in order to dismiss my comments, then 
I would discontinue taking that tack with you, because I have learned that it 
is almost hopeless to continue with people who do that.  In the one case, you 
simply misunderstood what I was saying; in the other, you will insist that I 
am the one who does not understand a foundation of what we are talking about, 
in order to avoid dealing with an issue of relative complexity that no one has 
solved.

Jim Bromer


On Sat, Aug 18, 2012 at 9:51 AM, Boris Kazachenko <bori...@verizon.net> wrote:

  Jim,

  It would help if you tried to address specific points that I made.

  > You and I do not need to understand the particle physics of cellular 
microbiology in order to study an introductory text of biology. And in order to 
learn what the text is presenting, we do not need to reduce everything 
mentioned in the text to the order of particle physics...

  You are confusing ontological hierarchy, in which we always start from some 
arbitrary point, & epistemological hierarchy, in which the brain / 
civilization of brains *always* starts with analog / un-encoded / uncompressed 
data. GI is the algorithm of *unsupervised* pattern discovery; supervised 
education always piggybacks on the former, done by prior generations. 

  > For example, to really understand what is presented in the biology text we 
do not need to recall the sensory experience of reading.

  Sensory experience is how we learn phonemes, the alphabet, words, & the 
concepts behind basic words in the first place.

  > Also, some scalability issues cannot be resolved just by having the 
foundations of the subject (or object) handy. 

  I said it's necessary, not sufficient. My whole approach is about cognitive 
economics: I quantify costs & benefits on the lowest level of representation. 
That's the basis for predictive search pruning, which is what scalability is 
all about.

  > The potential complexity of interrelations (as in derivable interrelations) 
may make scalability infeasible.

  We are living proof that in our world effective scalability *is* feasible.

  http://www.cognitivealgorithm.info/2012/01/cognitive-algorithm.html 



  From: Jim Bromer 
  Sent: Saturday, August 18, 2012 8:44 AM
  To: AGI 
  Subject: Re: [agi] *Subtraction* is the Engine of Computation


  Boris,
  You and I do not need to understand the particle physics of cellular 
microbiology in order to study an introductory text of biology.  And in order 
to learn what the text is presenting, we do not need to reduce everything 
mentioned in the text to the order of particle physics.

  So while I agree that we need to go to the basis of knowledge to resolve some 
scalability issues, and derived knowledge is often based on raw sensory 
experience, the point that I am trying to make is that the basis of knowledge 
that we have to use in many scalability scenarios is not raw sensory 
experience.

  For example, to really understand what is presented in the biology text we do 
not need to recall the sensory experience of reading.  (I guess it would be 
nice to be able to do that but it is not necessary for the problem of learning 
to understand what the text referred to.) So we really do not need to reduce 
all problems to primitive forms.

  Also, some scalability issues cannot be resolved just by having the 
foundations of the subject (or object) handy.  The potential complexity of 
interrelations (as in derivable interrelations) may make scalability infeasible.

  Jim Bromer