Re: [agi] Defining "understanding" (was Re: Newcomb's Paradox)

2008-05-12 Thread Matt Mahoney
--- Jim Bromer <[EMAIL PROTECTED]> wrote:

> Matt Mahoney said,
> "A formal explanation of a program P would be a equivalent program Q,
> such
> that P(x) = Q(x) for all x.  Although it is not possible to prove
> equivalence in general, it is sometimes possible to prove nonequivalence
> by finding x such that P(x) != Q(x), i.e. Q fails to predict what P will
> output given x."
> 
> But I have a few problems with this, although his one example was ok.
> First, there are explanations of ideas that cannot be expressed using
> the kind of formality he was talking about. Second, there are ideas
> that are inadequate when expressed only using the methods of formality
> he mentioned.  Third, an explanation needs to be usable relative to
> some other purpose.  For example, predicting how long something will
> take to fall to the ground is a start, but if a person understands
> Newton's law of gravity, he will be able to utilize it in other
> gravities as well.  And he may be able to relate it to real-world
> situations where precise measurements are not available.  And he might
> apply his knowledge of Newton's laws to see the dimensional
> similarities (of length, mass, force and so on) between different
> kinds of physical formulas.

Remember that the goal is to test for "understanding" in intelligent
agents that are not necessarily human.  What does it mean for a machine to
understand something?  What does it mean to understand a string of bits?

I propose prediction as a general test of understanding.  For example, do
you understand the sequence 0101010101010101?  If I asked you to predict
the next bit and you did so correctly, then I would say you understand it.

If I want to test your understanding of X, I can take a description of
X, give you part of it, and test whether you can predict the rest.  If I
want to test whether you understand a picture, I can cover part of it
and ask you to predict what might be there.

Understanding = compression.  If you can find a shorter description of
a string (a program that generates it) and then use that program to
predict subsequent symbols correctly, then I would say you understand
the string (or its origin).
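
A minimal sketch of this idea (my illustration only; zlib stands in for
a real adaptive model, and for very short strings its header overhead
swamps the signal, hence the long test pattern):

  import zlib

  def predict_next_bit(history: str) -> str:
      # Predict the continuation with the shorter description: append
      # each candidate bit and keep whichever compresses smaller.
      # Ties are broken in favour of '0'.
      def size(s: str) -> int:
          return len(zlib.compress(s.encode()))
      return '0' if size(history + '0') <= size(history + '1') else '1'

  print(predict_next_bit('01' * 200))  # '0' continues the pattern

A real compressor-based predictor would use an adaptive model rather
than recompressing from scratch, but the principle is the same:
compression and prediction are two views of one thing.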

This is what Hutter's universal intelligent agent does.  The significance
of AIXI is not that it solves AI (AIXI is not computable), but that it
defines a mathematical framework for intelligence.
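
For reference, AIXI's action selection can be written (roughly, in the
notation of Hutter's book) as an expectimax over all programs q
consistent with the interaction history, weighted by 2^-length:

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         (r_k + \cdots + r_m)
         \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, the a's are the agent's actions,
and the o's and r's are observations and rewards up to horizon m.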


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Defining "understanding" (was Re: Newcomb's Paradox)

2008-05-12 Thread Jim Bromer
Matt Mahoney said,

"Prediction can be used as a test of understanding lots of things.  For
example, if I wanted to test whether you understand Newton's law of
gravity, I would ask you to predict how long it will take an object of a
certain mass to fall from a certain height."

That is ok, but you would need to show that the prediction was accomplished 
using Newton's law of gravity.  This is what Matt was getting at when he said,

"A formal explanation of a program P would be a equivalent program Q, such
that P(x) = Q(x) for all x.  Although it is not possible to prove
equivalence in general, it is sometimes possible to prove nonequivalence
by finding x such that P(x) != Q(x), i.e. Q fails to predict what P will
output given x."

But I have a few problems with this, although his one example was ok. First, 
there are explanations of ideas that cannot be expressed using the kind of 
formality he was talking about. Second, there are ideas that are inadequate 
when expressed only using the methods of formality he mentioned.  Third, an 
explanation needs to be usable relative to some other purpose.  For example, 
predicting how long something will take to fall to the ground is a start, but 
if a person understands Newton's law of gravity, he will be able to utilize 
it in other gravities as well.  And he may be able to relate it to real-world 
situations where precise measurements are not available.  And he might apply 
his knowledge of Newton's laws to see the dimensional similarities (of length, 
mass, force and so on) between different kinds of physical formulas.

Jim Bromer

----- Original Message -----

From: Matt Mahoney <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Saturday, May 10, 2008 8:25:51 PM
Subject: [agi] Defining "understanding" (was Re: Newcomb's Paradox)

--- Stan Nilsen <[EMAIL PROTECTED]> wrote:

> I'm not understanding why an *explanation* would be ambiguous.  If I 
> have a process / function that consistently transforms x into y, then 
> doesn't the process serve as a non-ambiguous explanation of how y came 
> into being? (presuming this is the thing to be explained.)

A formal explanation of a program P would be an equivalent program Q, such
that P(x) = Q(x) for all x.  Although it is not possible to prove
equivalence in general, it is sometimes possible to prove nonequivalence
by finding x such that P(x) != Q(x), i.e. Q fails to predict what P will
output given x.
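
Equivalence is undecidable in general, but the falsification direction
is mechanical. A toy sketch (the two programs here are hypothetical
stand-ins, not anything real):

  import random

  def find_counterexample(P, Q, gen_input, trials=10000):
      # A single witness x with P(x) != Q(x) proves nonequivalence;
      # finding none proves nothing, since equivalence is undecidable.
      for _ in range(trials):
          x = gen_input()
          if P(x) != Q(x):
              return x
      return None

  P = lambda x: x * x
  Q = lambda x: x * x if x < 1000 else 0   # wrong on large inputs
  print(find_counterexample(P, Q, lambda: random.randint(0, 10**6)))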

Prediction can be used to test understanding of lots of things.  For
example, if I wanted to test whether you understand Newton's law of
gravity, I would ask you to predict how long it will take an object of a
certain mass to fall from a certain height.  If I wanted to test whether
you understand French, I could give you a few lines of text in French and
ask you to predict what the next word will be.
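
(The prediction itself is a one-liner: neglecting air resistance, an
object dropped from height h lands after t = sqrt(2h/g), independently
of its mass; from h = 20 m, that is sqrt(40/9.8), about 2.0 seconds.
Noticing that the mass drops out is itself part of the understanding
being tested.)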


-- Matt Mahoney, [EMAIL PROTECTED]




  




[agi] Weak AGI / strong chatbot

2008-05-12 Thread Bob Mottram
This seems to occupy a no man's land between your average chatbot
and AGI proper, and I think it's intended primarily for use in games.

http://fora.tv/2008/02/13/SILVIA_Artificial_Intelligence_Platform

There seems to be little or no learning going on here, although there
does appear to be some kind of context detection/matching.



Re: [agi] Self-maintaining Architecture first for AI

2008-05-12 Thread Russell Wallace
On Mon, May 12, 2008 at 2:58 PM, William Pearson <[EMAIL PROTECTED]> wrote:
> It will be a process inside a heavily modified QEMU VM.  I'm taking
> baby steps towards modding the SPARC architecture in QEMU, as that has
> the biggest open hardware following and should be relatively clean
> compared to the heavily crufty x86 arch. But time is lacking at the
> moment to learn the arch properly, and my low-level coding skills
> could use a brush-up as well. Things I will change about the arch
> include the BIOS stage, the MMU, and memory protection. I'll also add
> domain and capability creation/destruction instructions.
>
> I could possibly create my own arch a lot more quickly, but it would
> be nowhere near as optimised or as cross-platform. And I would have to
> figure out more about interfacing with displays and other IO than I
> would have to if I mod QEMU.

Fair enough, thanks for the clarification. I think you'd be better off
creating your own and punting optimization to version 2.0, but if
you've decided on this direction, I wish you the best of luck with
it; and let us know if you get something up and running.



Re: [agi] Self-maintaining Architecture first for AI

2008-05-12 Thread William Pearson
2008/5/12 Russell Wallace <[EMAIL PROTECTED]>:
> On Mon, May 12, 2008 at 10:52 AM, William Pearson <[EMAIL PROTECTED]> wrote:
>  > So let us get back to the problem: a new process is being trialled.
>  > Whichever process is creating the new process will set it up with its
>  > own domain and a bank, and put some credit in it. Then it pays some
>  > credit to register it with a low-priority scheduler.
>
>  Right, that's cool. The aspects I'm curious about are still much lower
>  level - by "process", do you mean an operating system process, or are
>  you going to write a VM, or use an off-the-shelf one?

It will be a process inside a heavily modified QEMU VM.  I'm taking
baby steps towards modding the SPARC architecture in QEMU, as that has
the biggest open hardware following and should be relatively clean
compared to the heavily crufty x86 arch. But time is lacking at the
moment to learn the arch properly, and my low-level coding skills
could use a brush-up as well. Things I will change about the arch
include the BIOS stage, the MMU, and memory protection. I'll also add
domain and capability creation/destruction instructions.

I could possibly create my own arch a lot more quickly, but it would
be nowhere near as optimised or as cross-platform. And I would have to
figure out more about interfacing with displays and other IO than I
would have to if I mod QEMU.

> What will the
>  programs consist of, machine code, byte code, Lisp S-expressions...?
>

Machine code initially, although I plan to create a C compiler plus
extensions to run on the system, with further languages added as
needed. Compilers and interpreters are likely to be very credit-rich
systems, but they will have to be a lot smarter in how they use those
credits than they are at the moment.

Lisp S-expressions hide too much of the underlying resource usage.
Although, to be honest, I don't know how the Lisp-based machines
work; a version of them may be possible. But I figure it is best to
work with what most people know at the moment. I would like to move to
a more parallel computer system with persistent memory in the future.

>  (I'm of the opinion that systems of the type you propose are
>  potentially a good idea, but are likely to succeed or fail based on
>  the nitty-gritty engineering details.)
>

True, there are a number of issues. Things like loading programs from
persistent to transient memory are problematic (having a single program
responsible for booting the whole system is not advisable IMO).

My rules of thumb are:

1) As far as possible, avoid giving any one program the ability to muck
up or take over all the other programs.
2) There are more ways a program can do this than you can initially
think of; look harder.

If I could find enough people interested in this topic,
I would branch off a mailing list.

  Will Pearson



[agi] The Messy World [for Steve R]

2008-05-12 Thread Mike Tintner
Steve R,

Reading the following in a psychology group made me think of your diagnostic 
program - the real world is generally messy. The principle applies to all AGI 
programs.
  "there is of course an interesting phenomenon in medicine generally where 
theories of causality may swing with the development of new data - consider 
things like stomach ulcers, generally seen as a purely medical problem until 
theories of stress (and especially Brady's work) had us all believing this was 
the key. Then we had helicobacter come onto the scene, and everything became 
medical again - until we discovered that we still had to explain why many 
people had helicobacter but no sign of ulcers. It is often *far* from easy to 
identify causes or even to specify "one" cause."



Re: [agi] Self-maintaining Architecture first for AI

2008-05-12 Thread Russell Wallace
On Mon, May 12, 2008 at 10:52 AM, William Pearson <[EMAIL PROTECTED]> wrote:
> So let us get back to the problem: a new process is being trialled.
> Whichever process is creating the new process will set it up with its
> own domain and a bank, and put some credit in it. Then it pays some
> credit to register it with a low-priority scheduler.

Right, that's cool. The aspects I'm curious about are still much lower
level - by "process", do you mean an operating system process, or are
you going to write a VM, or use an off-the-shelf one? What will the
programs consist of, machine code, byte code, Lisp S-expressions...?

(I'm of the opinion that systems of the type you propose are
potentially a good idea, but are likely to succeed or fail based on
the nitty-gritty engineering details.)



Re: [agi] Self-maintaining Architecture first for AI

2008-05-12 Thread William Pearson
2008/5/11 Russell Wallace <[EMAIL PROTECTED]>:
> On Sun, May 11, 2008 at 7:45 AM, William Pearson <[EMAIL PROTECTED]> wrote:
>> I'm starting to mod qemu (it is not a straightforward process) to add
>> capabilities.
>
> So if I understand correctly, you're proposing to sandbox candidate
> programs by running them in their own virtual PC, with their own
> operating system instance? I assume this works recursively, so a
> qemu-sandboxed program can itself run qemu?

Not quite what I meant. All the AI programs will run in the
architecture I create; the fact that I am emulating it on QEMU just
means I can hopefully go to silicon easily one day. The architecture I
create will be set up so that it is as close to impossible as can be
for an unproven program to disrupt a proven one.

I'll try and give an example of how I envisage the system working. A
few bits of terminology first.

Domain - A region of memory in which normal memory access can take
place. A new domain can be created, using a system call, if the region
of memory is not already inside a domain.

Bank - A domain specialised in storing and transferring credit.
System calls are used to create and destroy banks. Read and write
capabilities are never created for them.

Credit - A money-like quantity. It is conserved, and handed out to
programs depending on the current performance of the system.

Capabilities - Inter-domain pointers that can be used to access memory.
They can only be created by code within a domain, for regions of
memory within that domain.

Bidding - Periodically, credit is used to bid for system capabilities
(special domain capabilities that allow domains to be destroyed). If no
bids are entered, the current controller stays the same. Bidding is
also likely to be used for heavily contested non-system capabilities,
such as being able to put a process on a high-priority scheduler.
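
To make the mechanics concrete, here is a toy sketch in Python
(illustrative only; all the names are made up, and the real thing will
be instructions on the modified architecture, not Python objects):

  class Bank:
      # Holds credit; transfers conserve the total amount in the system.
      def __init__(self, credit=0):
          self.credit = credit

      def transfer(self, other, amount):
          assert 0 <= amount <= self.credit, "insufficient credit"
          self.credit -= amount
          other.credit += amount

  class Domain:
      # A region of memory owned by one program; only code inside it
      # can mint capabilities (pointers) into it.
      def __init__(self, owner):
          self.owner = owner

  def trial_new_process(parent_bank, stake, fee, scheduler_bank):
      # The creating process sets the child up with its own domain and
      # bank, stakes some credit, and pays the low-priority scheduler.
      child_domain = Domain(owner="child")
      child_bank = Bank()
      parent_bank.transfer(child_bank, stake)
      child_bank.transfer(scheduler_bank, fee)
      return child_domain, child_bank

  parent, scheduler = Bank(100), Bank(0)
  _, child = trial_new_process(parent, stake=20, fee=5,
                               scheduler_bank=scheduler)
  print(parent.credit, child.credit, scheduler.credit)  # 80 15 5

A child that earns nothing simply runs dry; nothing it does can touch
another program's memory or credit.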

So let us get back to the problem: a new process is being trialled.
Whichever process is creating the new process will set it up with its
own domain and a bank, and put some credit in it. Then it pays some
credit to register it with a low-priority scheduler. The process itself
will then have to interact with the rest of the system to try and earn
some credit to pay for its place on the scheduler and keep up the
domain (some of these values might be 0 if the system is below
capacity). The setting-up process will likely insert some code into the
system so that when the new process is given credit, a portion is
funnelled back to itself, so it makes a credit profit on the trial of
the new program. A program that consistently makes trials that
negatively impact the system as a whole will slowly lose credit and be
unable to make new trials.

If the program is truly malicious or badly coded, any negative effect
it has on the system will be limited in duration: it lasts only until
the program runs out of credit. Any errors it causes will be limited to
itself (it can't corrupt other programs' memory). There is no
equivalent of root to try to get access to; all programs have to be
part of the economy.


 Will Pearson
