I think we have finally hit upon a fundamental divide between (at least) two 
schools of thinking with regard to AGI. Recently, I worked through part of the 
"current" thinking on AI and ML, as presented by a university in Iceland.

What I appreciated most about the course was the trouble they took to 
establish a classification for systems theory, AI, and ML. From that 
information, and some shared here on the forum, I realized that the working 
model of AI and ML they presented (e.g., self-driving cars) is not the AGI 
that I and others may have been envisaging. It is merely the commercial 
interest of the day. Of course there are overlaps, but still, a fundamental 
difference seemingly exists.

For example, AI still tries to code ever-smarter robots. No matter how you look 
at it, the premise is that intelligence must be coded, and the apparent 
assumption is that most abstractions of advanced human functioning cannot be 
coded at all and should therefore be ignored. In my view, such thinking is 
highly unscientific.

Some of us have come to realize that machine intelligence, and its ability to 
explain itself (consciousness), cannot simply be coded by clever people. It 
requires much, much more than that. That is, unless, in frustration, the 
industry decides to dumb it down.

To most, this thinking might seem absolutely wrong. 'Absolutely' is a terrible 
word in the AGI context, for first, it presumes to know absolutely; and 
second, 'wrong' assumes to know right. However, we are entering a world of 
incompleteness and non-absolutes, and the term 'deabstraction' - as the 
antithesis of 'abstraction' - may just as easily mean 'absolution', or 'shoe', 
or 'your cat's whiskers'. It's a variable.

It's the world of general relativity, with some special relativity thrown into 
the mix for good measure. How does one "engineer", or code, that? At the 
theoretical beginning of AGI, there can be no absolute rights or wrongs. Until 
the frameworks and architectures have been completed and standardized 
scientifically, it is all still up in the air. If that were not so, all 
development would cease right away.

I do understand this seeming frustration, but evidently it must lie mostly 
with your own inability to shift your thinking to a different plane, and with 
a lack of interest in, and comprehension of, what others are saying.

Rob
________________________________
From: Alan Grimes <[email protected]>
Sent: Tuesday, 30 October 2018 1:55 AM
To: AGI; Stanley Nilsen
Subject: Re: [agi] Abstraction is not simple

Stanley Nilsen wrote:
> responses below
>
> On 10/29/18 1:32 AM, Nanograte Knowledge Technologies via AGI wrote:
>>
>> The question is, how should such a component of
>> abstraction/deabstraction be successfully engineered as a component
>> of an AGI service? Considering what you shared about your
>> architecture, I would understand you are saying that you would
>> develop the blueprint by virtue of moving from an abstraction of
>> information, to a codable (deabstraction) of said information "text".
>> If so, that is in compliance with the classical SDLC where analysis
>> and design moved from conceptual- to logical- to physical,
>> architectural layers.
> The concept I see fitting into AGI doesn't involve de-abstraction.
> Once abstraction occurs, there is no way to go back the other way.
> The abstraction has the purpose of being able to compare two things
> that are not easily compared.  The "information" that was used in the
> abstraction process is replaced by a less specific representation that
> doesn't reveal anything about its past (except that it came from a
> specific promoter, which is our tie back into the choice).

WRONG, WRONG, WRONG, COMPLETELY AND UTTERLY WRONG, spectacularly wrong
even... Wrong enough even to be completely backwards!!!!

First, the opposite of abstraction is usually "resolution".

Your stupid, miseducated, underperforming brain is actually
CONTINUOUSLY resolving abstract representations to produce your
conscious experience, though it is obviously too oblivious to realize
that. =\

I'm not trying to be especially cruel to you individually; I'm mostly
expressing my frustration with this line of thought and how much time it
wastes. It is not an exaggeration to say that the ratio of resolution to
abstraction in adult human cognition is on the order of
several hundred to one, if not more... It is different in young children
for both physiological and developmental reasons.

> The promoter is essentially watching the conditions, and when
> conditions are satisfied, the promoter gives a bid that is based on
> the value of the action - an action that was related to this
> promoter.   So in a simple sense, the promoter, when activated says
> "hey, I'm promoter 472346 and my benefit bid is 14."  If there is no
> promoted benefit higher than 14 then the promoter's action will be
> initiated.   The abstracted value of 14 compares with other promoters
> whose calculations are virtually unrelated to how this promoter came
> to 14.  The "relatedness" comes from all promoters being set up by a
> common abstractor.
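For concreteness, the promoter/bid mechanism described above might be sketched 
like this. This is a minimal reading only: the `Promoter` class, the shared 
fact dictionary, and the highest-bid-wins arbitration are my assumptions, not 
anything Stanley specified.

```python
# Hypothetical sketch of the promoter/bid scheme: each promoter
# watches for its conditions over a set of asserted facts and, when
# they are satisfied, bids the expected benefit of its action. The
# highest active bid wins and its action is initiated.

class Promoter:
    def __init__(self, pid, condition, benefit, action):
        self.pid = pid              # e.g. 472346
        self.condition = condition  # predicate over asserted facts
        self.benefit = benefit      # abstracted value, e.g. 14
        self.action = action        # callable to run if this bid wins

    def bid(self, facts):
        """Return the benefit bid if conditions are satisfied, else None."""
        return self.benefit if self.condition(facts) else None

def arbitrate(promoters, facts):
    """Run the action of the active promoter with the highest bid."""
    active = [(p.bid(facts), p) for p in promoters
              if p.bid(facts) is not None]
    if not active:
        return None
    _, winner = max(active, key=lambda bp: bp[0])
    return winner.action()
```

In this sketch, promoter 472346 with benefit bid 14 wins whenever no higher 
bid is active, matching the example in the quoted text.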

How about this: cortical columns (abstractions) produce a signal that is
modified by feedback to be either stronger or weaker. Then, through both
feed-forward and feed-back connections, a world model is built.
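A toy illustration of that feed-forward/feedback loop. This is a sketch only, 
not a brain model; the linear dynamics, the `weight`, and the `feedback_gain` 
parameter are all assumptions introduced for illustration.

```python
# Toy feed-forward/feedback settling loop: a "column" emits an
# estimate, a higher level feeds back a prediction, and the
# feed-forward error nudges the estimate up or down until the
# incoming signal and the top-down prediction agree.
def settle(observation, weight=1.0, feedback_gain=1.0,
           steps=200, lr=0.1):
    estimate = 0.0
    for _ in range(steps):
        prediction = weight * estimate          # top-down feedback
        error = observation - prediction        # bottom-up residual
        estimate += lr * feedback_gain * error  # feedback-modified update
    return estimate
```

With a nonzero gain the estimate converges toward the observation; with the 
gain at zero, no feedback flows and nothing is learned.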

>> I know, this would probably play havoc with linear and
>> component-based programming techniques, as it should. I think the
>> simplest reason for this is that AGI functionality probably is not a
>> teachable construct. Would we need a different programming approach?
>> Probably. Would this approach be derived from an AGI-services
>> architecture? Probably. Which leaves the question; what would the
>> SDLC for an AGI-services architecture look like?
> Programming approach is fairly straightforward for most of the
> system, but the "magic sauce" is in the abstractor.   One can think of
> the system as built on assertion of facts which become "conditions."
> Conditions are distributed to promoters who use the data to make
> simple calculations.   The action of asserting a fact is itself the
> result of the unit taking an action that was promoted - or on a lower
> level, sensors could assert a fact of some condition.
>
> The hard programming is building an abstractor, and the abstractor is
> likely the last part of the system to mature.  The abstractor is
> tasked with setting a value for an expected outcome and having that
> value be comparable to values for other outcomes.  It's complicated,
> but the advantage is that we know what we are trying to achieve as we
> build the code that does abstraction.   The rest of the system is not
> mysterious.


Well, there are two basic approaches. There is the conventional Machine
Learning/Deep Learning approach, which has had some amount of success by
training recognizers for abstractions. These approaches have currently
hit a wall because they:

-> have finite layers (can't really be recursive)   (see the
cortico-thalamo-cortical loop in the brain);
-> have a fixed structure and can't recruit task-specific groups the way
the human brain can;
-> have rigid hierarchies and can't borrow intuition from different
channels/layers.

The other approach should produce a theoretically optimal intelligence, but
at probably excessive cost.

Let the input be a stream.
Capture an initial frame and let that be your starting point.

Capture a second frame at a time when the input has changed in some way.
Subtract the first frame from the second. Take this
difference as a new thing to play with and see if it can be used to
compress the first frame. Then take the third frame, attempt to compress
it, then subtract it from the compressed second frame, and just continue
in that manner. The system will have limits, especially early on, in
how fast it can acquire data, but it should radically outperform machine
learning as it matures.
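One plausible reading of that procedure is plain delta coding: keep the first 
frame whole and store each later frame only as its difference from the 
predecessor. The frame representation (tuples of numbers) and the absence of a 
further compression step are my simplifications; the original description 
leaves both open.

```python
# Minimal delta-coding sketch of the frame-differencing idea:
# the first frame is stored as the starting point, every later
# frame as a difference against the previous one. Decoding
# reverses the process by summing the differences back up.

def delta_encode(frames):
    """frames: list of equal-length tuples of numbers."""
    if not frames:
        return []
    encoded = [frames[0]]  # initial frame kept whole
    for prev, cur in zip(frames, frames[1:]):
        encoded.append(tuple(c - p for c, p in zip(cur, prev)))
    return encoded

def delta_decode(encoded):
    if not encoded:
        return []
    frames = [encoded[0]]
    for diff in encoded[1:]:
        prev = frames[-1]
        frames.append(tuple(p + d for p, d in zip(prev, diff)))
    return frames
```

When successive frames change little, the differences are mostly zeros, which 
is exactly what makes the stream easier to compress further.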



--
Please report bounces from this address to [email protected]

Powers are not rights.


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T586df509299da774-Ma5719daa7cd409af37baac8b
Delivery options: https://agi.topicbox.com/groups/agi/subscription
