Thanks for the smiley faces, Boris...
I disagree that you have to multiply all the vectors in a pattern by a
relative distance to a target coordinate in order to
combine imagined complex ideas and related observations.  Our theories are
very different.  (On the other hand, I am interested in conjectures about
conceptual vectors and things like that.)

I am interested in a continuation of the explanation of your theories and I
hope to get back to it soon.
Jim Bromer
On Tue, Aug 21, 2012 at 7:57 AM, Boris Kazachenko <[email protected]> wrote:

> Jim,
>
> >Where Boris and I disagree is that I feel that because of relativity the
> input source of an idea may not be the most elemental source of the idea
> that needs to be considered.
>
> Right, but that's the simplest assumption; you must make it unless you
> know otherwise.  And you only know otherwise if you've discovered a more
> "elemental" (stable) source on some higher level of search &
> generalization.  That would generate focusing / motor feedback, always
> derived from prior feedforward.  As I keep saying, complexity must be
> incremental :).
>
> > One simple example is that we can use our imagination and study of the
> subject of the concept in order to extend our ideas about the subject
> beyond those ideas which came directly from observations of it.
>
> This is interactive pattern projection, but you have to discover those
> patterns first.  Technically, you simply multiply all the vectors in a
> pattern by the relative distance to a target coordinate.  Then you compare
> multiple patterns projected to the same coordinate, & multiply the
> difference by the relative strength of each pattern.  That gives you a
> combined prediction, or a probability distribution if the patterns are
> mutually exclusive :).
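The projection-and-combination step described above can be sketched in a few
lines.  This is only a toy illustration under assumptions of my own: a
"pattern" is taken to be a list of per-variable vectors (derivatives) formed
at some coordinate, with a scalar strength, and the compare-and-weight step
is simplified to a strength-weighted average of the projections.  All names
and structures here are hypothetical, not from any actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    vectors: list[float]   # per-variable derivatives (rates of change)
    coordinate: float      # coordinate where the pattern was formed
    strength: float        # assumed measure of match / reliability

def project(p: Pattern, target: float) -> list[float]:
    """Multiply each vector by the relative distance to the target coordinate."""
    distance = target - p.coordinate
    return [v * distance for v in p.vectors]

def combine(patterns: list[Pattern], target: float) -> list[float]:
    """Weight each projected pattern by its relative strength and sum,
    yielding a combined prediction at the target coordinate."""
    total = sum(p.strength for p in patterns)
    n = len(patterns[0].vectors)
    prediction = [0.0] * n
    for p in patterns:
        w = p.strength / total  # relative strength as a normalized weight
        for i, v in enumerate(project(p, target)):
            prediction[i] += w * v
    return prediction
```

If the patterns were mutually exclusive, the normalized weights `w` could be
read as a probability distribution over them instead of being summed into a
single prediction.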
>
>
>
> From: Jim Bromer <[email protected]>
> Sent: Monday, August 20, 2012 7:44 PM
> To: AGI <[email protected]>
> Subject: Re: [agi] Uncertainty, causality, entropy, self-organization,
> and Schroedinger's cat.
>
> Sergio, I will give you an example of a dedicated effort to communicate an
> idea from my own experience.  I have tried over and over to talk about
> relativism in human thought.  Very few people even made the effort to try to
> understand what I was saying.  One effect of conceptual relativism is that
> when you use concepts to consider other concepts, the concepts you use will
> affect the concept under consideration.  This is simple to understand, and
> yet I don't remember anyone actually discussing it with me.  It is one of
> those things that people either ignore, don't understand, or don't care
> about.
>
> So I can't say that this is an idea that everyone in AGI has been waiting
> for.
>
> Now if I could use it to create an actual AGI program then some people
> would become curious.  However, the problem is that this idea introduces
> the potential for so much complexity that it is not an effective and
> simplifying idea.  So I keep repeating it every once in a while, waiting for
> someone who might have something useful to say about it.  But I don't
> actually expect anyone to have anything useful to say about the matter.
>
> One thing that Boris and I seem to agree on is that you have to be able
> to refer to the source of a concept (or information) in order to resolve
> some issues related to data derived from it.  (Since we need to use
> generalizations, you would have to refer to the simplest generalization
> of the source, or to an elemental source event that characterizes the class
> of the generalization of the concept, in order to resolve some issues that
> concern the derived concept or information.  Boris talks about
> scalability.)  Where Boris and I disagree is that I feel that, because of
> relativity, the input source of an idea may not be the most elemental source
> of the idea that needs to be considered.  One simple example is that we can
> use our imagination and study of the subject of a concept to extend our
> ideas about the subject beyond those that came directly from observations
> of it.  So our most elemental ideas about matter, for example, come not
> only from our macro observations of it but also from the application of
> our imagination to understanding various theories about its particles
> and waves.
>
> I know that some people must be able to understand what I just said,
> because it was all pretty basic stuff.  But since the AGI guys cannot
> convert those simple ideas into a computer program, they do not seem too
> interested.
>
> So I have a good idea but it is not a great idea that explains how someone
> might actually create an AGI program.
>
> Jim Bromer
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
