Hi Rob,

I think the trickiest aspect of deep universal principles like
"least action" is using them to actually model and design complex
systems...

In physics, many intervening layers of theory have been required to
get from basic principles like least-action to the physics of
building complex things like engines or semiconductors....  Of course,
the theories underlying engines or semiconductors ultimately do boil
down to least-action, but they also involve many other key concepts,
and often involve detailed trains of thought that make no explicit
reference to least-action...

Note that nobody can yet derive the Periodic Table of Elements from
underlying physics alone.... instead it's derived from underlying
physics PLUS some data from macroscopic chemistry experiments to
calibrate some constants...

I suppose that Marcus Hutter's theory of general intelligence aspires
to have a foundational status in AGI similar to that of least-action
in physics....  It is founded on Occam's Razor as embodied in
Solomonoff induction, so in the end it's a very similar notion to
least-action....
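To make the Occam's Razor connection concrete, here's a toy sketch (my
own illustration, not Hutter's actual AIXI formalism): Solomonoff-style
prediction weights each hypothesis by a prior of 2^-(description
length), so shorter hypotheses dominate the prediction.  The hypothesis
names and lengths below are made up for illustration.

```python
# Toy Occam-prior prediction: weight each hypothesis by 2^-length,
# then predict by summing the weights of hypotheses agreeing on the
# next bit.  (Illustrative only; real Solomonoff induction sums over
# all programs on a universal Turing machine.)

def occam_prior(length_bits):
    """Prior mass assigned to a hypothesis of the given description length."""
    return 2.0 ** -length_bits

# Hypothetical hypotheses consistent with observed data "0101",
# each with a made-up description length in bits and its next-bit guess.
hypotheses = [
    {"name": "alternate 0/1", "length": 5,  "next_bit": 0},
    {"name": "repeat 0101",   "length": 8,  "next_bit": 0},
    {"name": "random noise",  "length": 20, "next_bit": 1},
]

total = sum(occam_prior(h["length"]) for h in hypotheses)
p_next_0 = sum(occam_prior(h["length"])
               for h in hypotheses if h["next_bit"] == 0) / total
print(round(p_next_0, 6))  # prints 0.999973 -- short hypotheses dominate
```

The point of the toy: the 20-bit "noise" hypothesis contributes almost
nothing, so the prediction is essentially decided by the shortest
descriptions, which is Occam's Razor in quantitative form.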

I have some half-finished math work of my own that tries to show
that, when interpreted in terms of an appropriate computational model,
entropy maximization is really the same thing as program length
minimization.  As least-energy and maximum-entropy are closely tied
together, there may be some basis here for a common foundation for
physics and intelligence, which is exciting...
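One standard fact that motivates this direction (not the half-finished
math itself): for an ideal code assigning -log2(p) bits to a symbol of
probability p, the expected code length equals the Shannon entropy, so
entropy is exactly the minimal expected description length.  A quick
numerical check:

```python
import math

def entropy(probs):
    """Shannon entropy of a distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_code_length(probs):
    """Expected length of an ideal code giving each symbol -log2(p) bits."""
    return sum(p * -math.log2(p) for p in probs if p > 0)

dist = [0.5, 0.25, 0.125, 0.125]
H = entropy(dist)
L = expected_code_length(dist)
print(H, L)  # prints 1.75 1.75 -- entropy = minimal expected description length
```

This identity is why "maximize entropy" and "minimize description
length" can plausibly end up being two views of the same optimization.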

Read Montague is one neuroscientist who has explicitly tied the
brain's need for energy minimization into the brain's reinforcement
learning activity and other dynamics...
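For concreteness, the reward-prediction-error learning that Montague
and colleagues tied to dopamine signaling can be sketched as a minimal
TD(0) update (an illustrative toy of my own, not his actual model; the
learning rate and discount values are arbitrary):

```python
# Minimal TD(0) value update: the prediction error (delta) plays the
# role of the dopamine-like reward-prediction-error signal.

def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """Move the value estimate toward reward + discounted next value."""
    delta = reward + gamma * next_value - value  # prediction error
    return value + alpha * delta

v = 0.0
# Repeatedly observe a reward of 1.0 followed by a terminal state:
for _ in range(100):
    v = td_update(v, reward=1.0, next_value=0.0)
print(round(v, 3))  # prints 1.0 -- the estimate converges toward the true reward
```

The prediction error shrinks toward zero as the estimate improves,
which is the qualitative signature matched against dopamine recordings.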

And yet there are lots of layers between this sort of foundational
mathematical/philosophical thinking and building specific complex
systems...

For instance, human minds have multiple largely distinct memory
systems, dealing with semantic, procedural, episodic, sensorimotor
memory, etc.   It may be that this fact is indirectly derived from
some underlying principles of least-energy, program-length
minimization, etc.   And yet the derivation of the multiple-memory
aspect of the human mind from these underlying principles is currently
sketchy at best....

So when one sets about building an AGI, I don't think one can proceed
based solely on core underlying principles.  Deriving the complex
higher-level structures and dynamics from these underlying principles
would require intervening layers of theory that do not currently
exist.  Instead one must proceed partly guided by underlying
abstract principles, and partly by cruder methods like analogy to the
existing intelligent systems we observe in the world...

-- Ben G



On Thu, Jan 3, 2013 at 8:38 AM, Rob Clennell
<[email protected]> wrote:
> ("stuff like expert systems are just not going to yield AGI even if you
> worked on them for centuries with billions of dollars...")
>
> Ben,
>
> I think you're right to point out that deep thinking is required to make
> sure AGI research is moving forward on the right track.
>
> I would say the fundamental problem is a version of Gödel's incompleteness
> theorem(s).  An AGI system (that is required to ‘think for itself’) cannot
> do so if it is based on a set of ‘rules' because those rules were created by
> the programmer, not by the AGI system.  The AGI system must somehow create
> its own rules.
>
> So what does the computer actually do?  My argument is that this is where
> defining the 'ability to cause change' in a system is critically important.
>
> We make a very subtle assumption in our normal experience concerning
> 'change'.  We tend to assume that if we are not 'causing' change
> (influencing a system) then 'no events should be taking place'.  Given that
> events do take place, we then categorise these events as an objective
> independent system ('physics').
>
> My view is that this idea that 'physics' is an independent system is a false
> assumption, because we need to first figure out what the minimum state of
> the system can be at the most profound information-level before we can
> determine whether physics is a 'separate' system.  When we do this we find
> that our information system must obey a minimum principle like the principle
> of least action.  Nearly all of physics has been shown to obey the
> principle of least action, so I would argue that it is false to assume
> the existence of an independent system (physics) because this system's
> properties are the same as the absolute-minimum information scenario that
> can occur.
>
> This is important for AGI because a programmer can create a system to find
> the minimum expression scenario and still show that the outcome is not an
> expression of any arbitrary rules but is an approximation to the minimum
> sequence of events that can actually logically exist.  As such, there is
> nothing arbitrary (other than the approximation) about the output.  We
> thereby have a system that outputs a scenario that is not a product of a
> programmer's arbitrary rules and so does not suffer from Gödel's
> incompleteness.  This is because the only rule is 'that there are no
> rules', i.e. it must compute the minimum scenario that can possibly
> occur.  If the output were something other than this scenario, that
> output would be arbitrary and subject to Gödel's incompleteness problem.
>
> Rob
> -------------------------- Rob Clennell
> Director of Development
> Generic AI Ltd
> 63 High Bridge Street
> Newcastle upon Tyne
> NE1 6BX
> T: 0191 260 5777
> [email protected]
> ----- Original Message ----- From: Ben Goertzel
> To: AGI
> Sent: Thursday, January 03, 2013 11:34 AM
> Subject: Re: [agi] Using Prediction As a Method of Validation. Try Using It
> In Your Real Life,
>
>
> I agree that using theories to make predictions about the results of
> computational experiments, and then seeing if the predictions are correct
> (and if not, in what ways they are incorrect), is a good way of exploring
> theories...
>
>
> However, theories of AGI don't generally give specific guidance about the
> number of man-months it will take to implement and test a given capability,
> let alone about the number of calendar-months it will take given real-world
> uncertainties about team size and allocation, etc. ....  Most commonly an AI
> or computer science theory will constrain "number of man-months" (or similar
> measures -- setting the "mythical man-month" factor aside) for implementing
> an algorithm/structure/system only within an order of magnitude or so...
>
>
> So if one is talking about confirming or disconfirming an AGI theory, one
> should be talking about predictions of the form "A software system
> implementing approach X, when given input A, will probably give output
> something like B" ...
>
>
> The ultimate problem with most of the AI gurus of the late 1960s and early
> 1970s is not that their temporal predictions of progress rate were wrong,
> but that their underlying ideas about how the mind works and how to build
> AGI were woefully incomplete....  Had their basic ideas been right, but
> merely taken 40 years rather than 10 to implement -- then we would have AGI
> now and those early AI guys would be considered on a par with Newton and
> Archimedes....  Instead, subsequent work showed that the REASON their early
> optimistic AI projections were wrong was NOT that they underestimated the
> amount of practical work or system testing/debugging needed in implementing
> their ideas in the real world, but rather that their basic ideas were not
> adequate... (i.e. stuff like expert systems are just not going to yield AGI
> even if you worked on them for centuries with billions of dollars...)
>
>
> These are very elementary points, and not so interesting to go over in
> depth, IMO ;-/ ...
>
>
> ben
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
>



-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now