Todor said:

It's true, though (in a different sense), that the typical old-school
AI-ers believe that "intelligence" is their own 30-40-50-year-old
developed brain and, more precisely, some specific reasoning
abilities (which they can't trace back; they can't trace back even one
hour of its operation).

They try to design a system that behaves like this brain - which in
fact CAN'T learn a lot of elementary things, such as the arts - without
having (and without understanding) the appropriate developmental
history, which would otherwise explain everything and is in fact
easier to build.

This AI-ers' approach may also be called design "from scratch" in the
sense that it lacks history/background, but it's also not from scratch
- it's an attempt to design a fully developed system, one that has the
required subsystems present in their "final" stage from the very
beginning.
--------------------------------


The main problem with AGI is that we cannot see it work effectively at
an elementary stage.  It builds up some steam, shoots out a puff of
black smoke and jolts forward for a few seconds, but it never takes
off.  To me that indicates that it is necessary to write a
(hoped-to-be) AGI program that can do some genuine learning to fill in
those gaps.  If that were possible then it would not need a lot of
preliminary AGI-stuff programming.  But it would still need some.

In order to understand what kinds of things it might need, I start
with some theories that seem to span unformed incidental reactions to
create a relational integration of higher insights.  So I start off
with ideas like 'creating theories,' 'using trial and error
methods,' 'isolating effects that are being studied,' 'comparing
against control groups,' 'forming generalizations from experiences
of a kind,' 'using abstract reasoning based on derived relational
insight,' and so on.  While these are all types of ideas that require
higher-level reasoning abilities, the point is that a computer could
hypothetically be programmed to use them with events that occur in the
IO data environment, and there is no good reason not to try using them
in an AGI program that could learn from scratch (one that could learn
new kinds of things).

The question is not whether babies are born scientists with competent
mastery of these techniques, but whether the scientific ideas can be
used as techniques of primitive learning.  There is no question, for
example, that the idea of 'trial and error learning' is versatile
enough to be seen as a naïve technique of learning.  The claim that
one can rule out the possibility that babies use primitive versions of
these kinds of methods, without some good basis for that ruling, is
not sound.

There is research suggesting that young children who are learning to
speak do start to exhibit the capability of isolating new words by
using a control of a familiar sentence form, and that they isolate new
sentence forms by using a control of familiar words.  (Then they
mysteriously jump to a higher level of forming new sentences that do
not exhibit these methods as clearly.)  My calling it a "control" does
not mean that they are actually forming controls, but the fact that
this is not something they are consciously deciding to do (as far
as we can tell) does not mean that they are not effectively doing it.
Since using a controlled 'background' of familiar objects (of thought)
is a method that can be used to try new things (of thought) that occur
in somewhat familiar settings, this could be a naïve process
that can actually work.  But to establish it as an AGI method one has
to go and actually write a program - even a simple program - that can
exhibit this. That is the real issue.
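A minimal sketch of the kind of simple program this calls for, assuming a toy vocabulary and hand-written sentence frames (all the words, frames and function names below are invented for illustration): an unknown token is isolated only when every other token fits a familiar frame, the familiar frame acting as the "control".

```python
# Toy sketch: isolating new words against a "control" of familiar
# sentence frames. Vocabulary and frames are invented for illustration.

KNOWN_WORDS = {"this", "is", "a", "the", "ball", "dog", "see"}
KNOWN_FRAMES = [("this", "is", "a", None),   # None marks the open slot
                ("see", "the", None)]

def match_frame(utterance, frame):
    """Return the token in the open slot if the utterance fits the frame."""
    if len(utterance) != len(frame):
        return None
    novel = None
    for tok, slot in zip(utterance, frame):
        if slot is None:
            novel = tok
        elif tok != slot:
            return None
    return novel

def learn(utterance):
    """Try each familiar frame as a control; hypothesize one new word."""
    for frame in KNOWN_FRAMES:
        candidate = match_frame(utterance, frame)
        if candidate is not None and candidate not in KNOWN_WORDS:
            KNOWN_WORDS.add(candidate)
            return candidate
    return None

print(learn("this is a giraffe".split()))  # -> giraffe
print(learn("see the giraffe".split()))    # -> None (already learned)
```

The point is not the trivial string matching but the shape of the method: the familiar background does the isolating, so the learner never has to analyze the novel token itself.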

On Thu, Nov 28, 2013 at 4:15 PM, Todor Arnaudov <[email protected]> wrote:
> Thank you for sharing this story, Steve!
>
> A vote from me for the interpreters (virtual machines) and
> (self-)modification/improvement.
> Hard-coded compiled software is too limited and fragile. It should always be
> part of a hybrid system, with part of the system compiled (maximally
> optimized) and part flexible and able to adjust to the needs (such as
> your Huffman-coded opcodes or different programming languages/domain-specific
> ones), to self-repair and to improve.
>
> Also, congratulations on pushing the limits of the tiny hardware!
>
>
> Steve> These were CAREFULLY designed so that NO add-on extensions would ever
> be needed
>
> Todor: Indeed! This touches one of the popular definitions of AGI: that it
> should solve problems which the designers "haven't conceived" or for which
> the system wasn't "designed". If the sense of "design" is "what the
> designer understood, wrote down on paper, specifically marked, etc." - OK.
>
> However, if the system is really and consistently GENERAL - globally or for
> its domain - then all possible problems would be conceived... :-) That's
> the criterion for true generality.
>
> The specific *INSTANCES* of the problems might not be predefined "on paper"
> with full details - but that's one of the purposes of the AGI, the versatile,
> limitless self-improver: to fill in and execute all the instances,
> combinations, details, versions, etc. automatically, and much faster and
> deeper than a human would.
>
> The "problem" is the methods for quick and efficient improvement,
> accumulation, generalization and then specialization (application), etc.;
> it should be possible to develop a basis that starts to develop quickly
> enough with minimum resources...
>
> ...
>
> Sequences of coordinate adjustments and applications of forces
>
> For example, a superficial analyst may say that humans learn ever-novel
> motions/behaviors of the hands - never-ending new gestures which weren't
> performed in the past: grasping an apple, a bolt, a pencil, a
> soccer ball, a tennis ball, a tennis racket, a pebble, a knife, headphones,
> a car's tire... etc.
>
> However, it's "novel" only to the analyst/evaluator who doesn't realize that
> it doesn't matter WHAT is grasped, but HOW it's grasped. When the "HOW" is
> generalized, it's all the same - just sequences of adjustments of
> coordinates and forces applied by the hand.
>
> That's all, and it was ALL conceived by the design - hands/body are
> universal actuators, because they allow free manipulation and application
> of forces in 3D. The possible sequences are many, but it's all CONFIGURABLE,
> and each of the vectors/coordinates within the sequence is simple. There's
> NOTHING "radically new".
>
> When one grasps an integrated circuit, it's not true to say that "the human
> body wasn't conceived to pick up integrated circuits - such things didn't
> exist - so it learns new behaviors".
>
> It's "new" only if you compare WHAT's grasped, while the essential thing
> about the behavior/motion is HOW the hand performs it.
>
> Actually, the hand WAS designed to pick up and operate anything within
> certain boundary dimensions (calculable from any particular person's
> hand/body dimensions and biomechanics - allowing some variation achievable
> by practice, or reduced by lack of practice and aging), boundary weight,
> sharpness, etc.
>
> There's nothing new for the body.
>
> Here some would say: well, maybe you're right, but, for example, the robotic
> hand is "radically new". That would be wrong again, because from the above
> POV the essential part is the same - any physical actuator, any
> mechanical part, engine or system is about applying given forces at given
> coordinates in given sequences in time. Sorry, inventors: it's already
> been invented...
>
> The "new" part is that the analysts don't get the real (essential,
> generalized) purpose of the actuators, but pick out inessential details
> which can be vastly generalized and compressed.
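The generalization above - same HOW, different WHAT - can be sketched as a parametric sequence of coordinate and force adjustments; the aperture limit and friction constant below are made-up numbers for illustration, not biomechanics:

```python
# Sketch: any grasp as a sequence of (fingertip aperture, grip force)
# adjustments; the WHAT (apple, bolt, IC) only supplies parameters.

def grasp_sequence(object_width_mm, object_weight_g, steps=4):
    """Close from a fixed maximum aperture down to the object's width,
    ramping grip force up to what its weight requires."""
    max_aperture = 100.0                      # assumed max hand opening, mm
    required_force = 0.02 * object_weight_g   # assumed friction model, N
    seq = []
    for i in range(1, steps + 1):
        t = i / steps                          # progress from 0 to 1
        aperture = max_aperture + t * (object_width_mm - max_aperture)
        force = t * required_force
        seq.append((round(aperture, 1), round(force, 2)))
    return seq

# Different WHATs, same HOW:
print(grasp_sequence(80, 150))   # an apple
print(grasp_sequence(10, 5))     # a bolt
```

Grasping an apple and grasping a bolt then differ only in the parameters fed to the same sequence generator.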
>
> Steve>these were CAREFULLY designed so that NO add-on extensions would ever
> be needed, though we did add-in some additional capabilities before we
> finished. With each interpreter able to do everything that was needed in its
> domain AND NO MORE, there could be NO system-crashing bugs, malware, etc.
>
> Todor: Another good point, the final one. (I'd rather allow extensions,
> though; my system does, and I think it should.)
>
> However, the system should never crash beyond self-repair; it should be
> designed to "live" forever. It could be allowed to crash non-"fatally" or to
> "respawn": some of its experiments may cause some of its subordinate
> systems to stop working, it may even halt, but it should be capable of
> switching to an alternative subsystem while restarting the malfunctioning
> subsystems, using backup subsystems, watchdogs, backup copies of its mind, etc.
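What's described here is essentially the supervisor/watchdog pattern: let subsystems crash non-fatally, respawn them a bounded number of times, then fall back to a backup copy. A minimal single-process sketch (class names, the simulated failure, and the restart threshold are all illustrative):

```python
# Sketch of a watchdog that respawns a crashed subsystem and falls back
# to a backup after repeated failures.

class Subsystem:
    def __init__(self, name, flaky=False):
        self.name, self.flaky, self.calls = name, flaky, 0
    def step(self):
        self.calls += 1
        if self.flaky and self.calls <= 2:     # simulate two crashes
            raise RuntimeError(f"{self.name} crashed")
        return f"{self.name} ok"

def supervise(primary, backup, max_restarts=1):
    """Run primary; respawn it on failure, then switch to the backup."""
    restarts = 0
    active = primary
    while True:
        try:
            return active.step()
        except RuntimeError:
            if active is primary and restarts < max_restarts:
                restarts += 1                  # respawn the same subsystem
            else:
                active = backup                # switch to the backup copy

print(supervise(Subsystem("vision", flaky=True), Subsystem("vision-backup")))
# -> vision-backup ok
```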
>
> Steve>I suspect that if programmed intelligence is ever developed, it will
> start with something REALLY SIMPLE that is then successively modified and
> enhanced to be what we call intelligent. With this approach, each step is
> tractably doable. With a "design intelligence from scratch" approach, it
> appears to be obviously beyond human ability.
>
> Todor: IMO there's a bit of confusion above. Newborn babies ARE AGIs.
> Designing an AGI "baby body" (and its appropriate environment and
> teachers) - a seed AI, a fetus of intelligence, a Self-Improving General
> Intelligence, a Versatile Limitless Self-Improver, one capable of
> self-improving step by step - IS designing intelligence "from scratch".
>
> It's true, though (in a different sense), that the typical old-school
> AI-ers believe that "intelligence" is their own 30-40-50-year-old developed
> brain and, more precisely, some specific reasoning abilities (which they
> can't trace back; they can't trace back even one hour of its operation).
>
> They try to design a system that behaves like this brain - which in fact
> CAN'T learn a lot of elementary things, such as the arts - without having (and
> without understanding) the appropriate developmental history, which would
> otherwise explain everything and is in fact easier to build.
>
> This AI-ers' approach may also be called design "from scratch" in the sense
> that it lacks history/background, but it's also not from scratch - it's an
> attempt to design a fully developed system, one that has the required
> subsystems present in their "final" stage from the very beginning.
>
>
> === Todor "Tosh" Arnaudov ===
>
> .... Twenkid Research:  http://research.twenkid.com
>
> .... Author of the world's first University courses in AGI  (2010, 2011):
> http://artificial-mind.blogspot.com/2010/04/universal-artificial-intelligence.html
>
> .... Todor Arnaudov's Researches Blog: http://artificial-mind.blogspot.com
>



-- 
Jim Bromer


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424