Alan said:
Capture a second frame at a time when the input has changed in some way.
Take a second frame and then subtract the first frame. Take this
difference as a new thing to play with and see if it can be used to
compress the first frame. Then take the third frame, attempt to compress
it, then subtract it from the compressed second frame, and just continue
in that manner. The system will have limits, especially early on with
how fast it can acquire data, but it should radically outperform machine
learning as it matures.
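Alan's frame-differencing scheme above could be sketched roughly like this (a minimal illustration, not his actual system; the function name and the choice of zlib as the compressor are my own assumptions):

```python
import zlib

import numpy as np


def delta_stream_sizes(frames):
    """Delta-code a stream: store the first frame whole, then store each
    later frame as its difference from the previous frame, and return the
    zlib-compressed size of each stored payload (illustrative only)."""
    sizes = []
    prev = None
    for frame in frames:
        arr = np.asarray(frame, dtype=np.int16)
        # First frame is kept as-is; later frames become differences.
        payload = arr if prev is None else arr - prev
        sizes.append(len(zlib.compress(payload.tobytes())))
        prev = arr
    return sizes
```

When consecutive frames change only slightly, the differences are highly regular and compress far better than the raw frames, which is the leverage the scheme is betting on.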

I am saying that abstraction is more than picking out items from a
list and it is a lot more than subtracting frame 1 from frame 2, and
it is more than applying compression to some simple product of
comparative operation. It is a kind of compression, and comparative
operations are essential, but it is more than that. To produce
abstraction, which in my view is more like a combination of discrete
objects and less like a deep learned net, your program would need to
develop different kinds of algorithms to deal with different kinds of
input. I haven't read Derrida yet, and I am not just talking about
text, but I think it is essential to compare a tentative or simple
abstraction against other kinds of abstractions that might seem
relevant, and against other types of input or constructions based on
abstractions. But the algorithms of abstraction have to
be more than some initial functions of a theory of simple picking out
of features as if the features were going to be superficially obvious.
Jim Bromer
On Mon, Oct 29, 2018 at 7:55 PM Alan Grimes <[email protected]> wrote:
>
> Stanley Nilsen wrote:
> > responses below
> >
> > On 10/29/18 1:32 AM, Nanograte Knowledge Technologies via AGI wrote:
> >>
> >> The question is, how should such a component of
> >> abstraction/deabstraction be successfully engineered as a component
> >> of an AGI service? Considering what you shared about your
> >> architecture, I would understand you are saying that you would
> >> develop the blueprint by virtue of moving from an abstraction of
> >> information, to a codable (deabstraction) of said information "text".
> >> If so, that is in compliance with the classical SDLC where analysis
> >> and design moved from conceptual- to logical- to physical
> >> architectural layers.
> > The concept I see fitting into AGI doesn't involve de-abstraction.
> > Once abstraction occurs, there is no way to go back the other way.
> > The abstraction has the purpose of being able to compare two things
> > that are not easily compared.  The "information" that was used in the
> > abstraction process is replaced by a less specific representation that
> > doesn't reveal anything about its past (except that it came from a
> > specific promoter which is our tie back into the choice.)
>
> WRONG WRONG WRONG, COMPLETELY AND UTTERLY WRONG, spectacularly wrong
> even... Wrong enough even to be completely backwards!!!!!
>
> First, the opposite of abstraction is usually "resolution".
>
> Your stupid, miseducated, underperforming brain is actually
> CONTINUOUSLY resolving abstract representations to produce your
> conscious experience though it is obviously too oblivious to realize
> that. =\
>
> I'm not trying to be especially cruel to you individually, I'm mostly
> expressing my frustration with this line of thought and how much time it
> wastes. It is not an exaggeration to say that the ratio of resolution to
> abstraction in human cognition in the adult brain is on the order of
> several hundred to one, if not more... It is different in young children
> for both physiological and developmental reasons.
>
> > The promoter is essentially watching the conditions, and when
> > conditions are satisfied, the promoter gives a bid that is based on
> > the value of the action - an action that was related to this
> > promoter.   So in a simple sense, the promoter, when activated says
> > "hey, I'm promoter 472346 and my benefit bid is 14."  If there is no
> > promoted benefit higher than 14 then the promoter's action will be
> > initiated.   The abstracted value of 14 compares with other promoters
> > whose calculations are virtually unrelated to how this promoter came
> > to 14.  The "relatedness" comes from all promoters being set up by a
> > common abstractor.
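Stanley's promoter/bid mechanism could be sketched as a toy like this (a hypothetical sketch with made-up condition and action names; the only point it demonstrates is that activated promoters bid and the highest benefit wins):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class Promoter:
    """Watches conditions and, when they are satisfied, bids the
    expected benefit of its associated action."""
    pid: int
    bid: float
    trigger: Callable[[Dict[str, bool]], bool]
    action: str


def select_action(promoters: List[Promoter],
                  conditions: Dict[str, bool]) -> Optional[str]:
    """Return the action of the highest-bidding activated promoter,
    or None when no promoter's conditions are satisfied."""
    active = [p for p in promoters if p.trigger(conditions)]
    return max(active, key=lambda p: p.bid).action if active else None
```

Note that, as Stanley says, the bids are comparable even though each promoter computes its own value by unrelated means; the common scale is what the abstractor would have to supply.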
>
> How about cortical columns (abstractions) produce a signal that is
> modified by feedback to be either more or less... Then through both
> feed-forward and feed-back connections, a world model is built.
>
> >> I know, this would probably play havoc with linear and
> >> component-based programming techniques, as it should. I think the
> >> simplest reason for this is that AGI functionality probably is not a
> >> teachable construct. Would we need a different programming approach?
> >> Probably. Would this approach be derived from an AGI-services
> >> architecture? Probably. Which leaves the question; what would the
> >> SDLC for an AGI-services architecture look like?
> > Programming approach is fairly straightforward for most of the
> > system, but the "magic sauce" is in the abstractor.   One can think of
> > the system as built on assertion of facts which become "conditions."
> > Conditions are distributed to promoters who use the data to make
> > simple calculations.   The action of asserting a fact is itself the
> > result of the unit taking an action that was promoted - or on a lower
> > level, sensors could assert a fact of some condition.
> >
> > The hard programming is building an abstractor, and the abstractor is
> > likely the last part of the system to mature.  The abstractor is
> > tasked with setting a value for an expected outcome and having that
> > value be comparable to values for other outcomes.  It's complicated,
> > but the advantage is that we know what we are trying to achieve as we
> > build the code that does abstraction.   The rest of the system is not
> > mysterious.
> 
> 
> Well, there are two basic approaches. There is the conventional Machine
> Learning/Deep Learning approach, which has had some amount of success by
> training recognizers for abstractions. These approaches have currently
> hit a wall because they
> 
> -> have finite layers (can't really be recursive),   (see the
> cortico-thalamo-cortical loop in the brain)
> -> have a fixed structure and can't recruit task-specific groups like
> the human brain can.
> -> have rigid hierarchies and can't borrow intuition from different
> channels/layers.
> 
> There is another approach that should produce a theoretically optimal
> intelligence, but probably at excessive cost.
> 
> Let the input be a stream.
> Capture an initial frame and let that be your starting point.
> 
> Capture a second frame at a time when the input has changed in some way.
> Take a second frame and then subtract the first frame. Take this
> difference as a new thing to play with and see if it can be used to
> compress the first frame. Then take the third frame, attempt to compress
> it, then subtract it from the compressed second frame, and just continue
> in that manner. The system will have limits, especially early on with
> how fast it can acquire data, but it should radically outperform machine
> learning as it matures.
> 
> --
> Please report bounces from this address to [email protected]
> 
> Powers are not rights.
> 

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T586df509299da774-Mfe8c7d24c84e25003c18db5f
Delivery options: https://agi.topicbox.com/groups/agi/subscription
