Re: [agi] small code small hardware

2007-03-29 Thread YKY (Yan King Yin)

Let's take a poll?

I believe that a minimal AGI core, *sans* KB content, may be around 100K
lines of code.

What are other people's estimates?

YKY

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] AGI interests

2007-03-29 Thread YKY (Yan King Yin)

On 3/28/07, Russell Wallace [EMAIL PROTECTED] wrote:

Do you have a source of finance? This is not a rhetorical question; if you
have, I'd be very interested in working for money.

Yes, I think I have seed capital that is enough to get a conventional
startup started.  Also I believe getting subsequent VC funding is not that
difficult.

But it seems that AGI is more complex than a conventional startup.

It is not programming that is the problem, at least not yet.  Currently we
need the ability of a group of people to come up with *algorithms* that work
within a certain architecture.  Some of these algorithms are not well-known
to standard AI students (eg inference, belief revision, probabilistic logic,
etc).

I estimate that a *minimal* AGI core may consist of ~100K lines of code.  So
programming is not the problem.  We need people who have a good
understanding of the right algorithms.

Russell -- I recognize that our ideas are quite similar and that your views
are usually very sensible, so there is a good chance that we can work
together.  But IMO we should wait for more partners.

Another option is to work as a spin-off of Novamente, licensing some of
Ben's technology (eg probabilistic logic).  Also we may borrow an
established knowledge representation scheme, eg Cyc, or Novamente's.

A dilemma is that starting from scratch seems too much work, but working
with older tech (eg Cyc) requires modifications as well.

YKY



Re: [agi] small code small hardware

2007-03-29 Thread Mark Waser
I'll go you one better . . . . I truly believe that the minimal AGI core, sans
KB content, is 0 lines of code . . . .

Just like C compilers are written in C, the AGI should be entirely written in
its knowledge base (eventually) to the point that it can understand itself,
rewrite itself, and recompile itself in its entirety.  The problem is
bootstrapping to that point.

Personally, I find all of these wild-ass guess-timates and opinion polls quite
humorous.  Given that we can't all even agree on what an AGI is, much less how
to do it, how can we possibly think that we can accurately estimate its
features?

Mark


Re: [agi] AGI interests

2007-03-29 Thread Russell Wallace

On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


Yes, I think I have seed capital that is enough to get a conventional
startup started.  Also I believe getting subsequent VC funding is not that
difficult.



That's more than most people have! I think the reason a lot of AGI people
think in terms of working for free is that if it's in the "when I'm
independently wealthy" category, it might as well also go in the "and I'll
make it open source" category.

But it seems that AGI is more complex than a conventional startup.




Longer and trickier route to getting a product, that's for sure.

It is not programming that is the problem, at least not yet.  Currently we

need the ability of a group of people to come up with *algorithms* that work
within a certain architecture.  Some of these algorithms are not well-known
to standard AI students (eg inference, belief revision, probabilistic logic,
etc).



Right, and before that the architecture that will enable these algorithms to
work together.

I estimate that a *minimal* AGI core may consist of ~100K lines of code.  So

programming is not the problem.  We need people who have a good
understanding of the right algorithms.



Agreed.

Russell -- I recognize that our ideas are quite similar and that your views

are usually very sensible, so there is a good chance that we can work
together.  But IMO we should wait for more partners.



Thanks for the compliment! Sure, no hurry.

Another option is to work as a spin-off of Novamente, licensing some of

Ben's technology (eg probabilistic logic).  Also we may borrow an
established knowledge representation scheme, eg Cyc, or Novamente's.

A dilemma is that starting from scratch seems too much work, but working
with older tech (eg Cyc) requires modifications as well.



I think there's value in being able to connect to older tech (I'd look at
things like relational databases before e.g. Cyc). I think it is necessary
to start from scratch; it's not _that_ much work (I agree with your estimate
of a version 1 framework being on the order of 100 kloc) and I think the
risk of having to do it is much less than the risk of ending up with the
wrong design due to trying to build on existing work.



Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 08:42:53PM +0800, YKY (Yan King Yin) wrote:

I believe that a minimal AGI core, sans KB content, may be around 100K
lines of code.

I don't know what 'KB' content is. But the kLoCs are irrelevant, because
the data is where it's at, and it's huge.
 
What are other people's estimates?

10^17 sites, 10^23 OPs/s total. The transformation
complexity might very well be 100 kLoC, or even 10 kLoC.

But that code is worthless without the magic data.

-- 
Eugen* Leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820   http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] small code small hardware

2007-03-29 Thread Russell Wallace

On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:



Let's take a poll?

I believe that a minimal AGI core, *sans* KB content, may be around 100K
lines of code.

What are other people's estimates?



Sounds right to me. I'd put the framework (sans content) as roughly
comparable to a web browser, IDE or CAD program, for which 100 kloc seems
about the order of magnitude of size for a first version.



Re: [agi] small code small hardware

2007-03-29 Thread Jean-Paul Van Belle
I guess (50 to 100 modules) x (500 to 2500 locs) x fudge factor x
language factor,
with fudge factor = 2 to 4 and language factor = 1 for eg Python; 5 for
eg C++,
i.e. a minimum of 50 klocs (Python), which is what I wishfully think;
realistically probably closer to 5000 klocs of C++.
That's of course for the prototype, which may or may not bootstrap.
However, the devil's in the data (you're on the money there, Mark) and,
more importantly, the architecture and algorithms.
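Jean-Paul's back-of-the-envelope formula is easy to sketch numerically. The module counts, LOC ranges, and factors below are his figures from the message above; the function itself is just an illustrative toy, not anyone's actual project plan:

```python
def kloc_estimate(modules, loc_per_module, fudge, language_factor):
    """Jean-Paul's formula: modules x LOC/module x fudge x language factor,
    returned in thousands of lines of code (kLOC)."""
    return modules * loc_per_module * fudge * language_factor / 1000.0

# Optimistic corner: 50 modules x 500 LOC, fudge factor 2, Python (factor 1)
low = kloc_estimate(50, 500, 2, 1)       # -> 50.0 kLOC
# Pessimistic corner: 100 modules x 2500 LOC, fudge factor 4, C++ (factor 5)
high = kloc_estimate(100, 2500, 4, 5)    # -> 5000.0 kLOC
print(low, high)
```

The two corners reproduce the 50 kloc (Python) and 5000 kloc (C++) endpoints quoted in the message.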

Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

YKY (Yan King Yin) [EMAIL PROTECTED] wrote on 2007/03/29 14:42:53:
| What are other people's estimates?



Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 09:16:09AM -0400, Mark Waser wrote:

I'll go you one better . . . . I truly believe that the minimal AGI
core, sans KB content, is 0 lines of code . . . .

In theory, a TOE can be quite small. In theory, you could have
a low-level physical simulation that happens to be an intelligent
system. In practice, however... As they say, in theory, there is no
difference between practice and theory. In practice, there is.
 
Just like C compilers are written in C, the AGI should be entirely
written in its knowledge base (eventually) to the point that it can

What is the knowledge base between your ears written in?

understand itself, rewrite itself, and recompile itself in its

What makes you think the system can ever understand itself, whatever
that term means exactly? Evolution doesn't understand anything, but 
as an optimization process it produced us from prebiotic ursoup, which
is nothing to sneeze at.

entirety.  The problem is bootstrapping to that point.

Since nobody here knows, how about evolution? Empirically validated
is not good enough for you?
 
Personally, I find all of these wild-ass guess-timates and opinion
polls quite humorous.  Given that we can't all even agree on what an

It's okay as long as everybody agrees they're wild-ass guesstimates.

AGI is, much less how to do it, how can we possibly think that we can

I dunno about you, but I see a general intelligence (admittedly, not much
of an intelligence) every morning in the shaving mirror. As I said, you'll
know AGI when it hits the job market and the news.

accurately estimate its features?



Re: [agi] small code small hardware

2007-03-29 Thread Pei Wang

On 3/29/07, Mark Waser [EMAIL PROTECTED] wrote:


I'll go you one better . . . . I truly believe that the minimal AGI core,
sans KB content, is 0 lines of code . . . .

Just like C compilers are written in C, the AGI should be entirely written
in its knowledge base (eventually) to the point that it can understand
itself, rewrite itself, and recompile itself in its entirety.  The problem
is bootstrapping to that point.


I have to disagree. The following is adapted from my chapter in the
AGI collection 
(http://www.springer.com/west/home/generic/order?SGWID=4-40110-22-43950079-0):

*. Complete self-modifying is an illusion. As Hofstadter put it, "below
every tangled hierarchy lies an inviolate level" [in GEB]. If we
allow a system to modify its meta-level knowledge, i.e., its inference rules
and control strategy, we need to give it (fixed) meta-meta-level knowledge
to specify how the modification happens. As flexible as the human mind
is, it cannot modify its own laws of thought.

*. Though high-level self-modifying will give the system more flexibility, it
does not necessarily make the system more intelligent. Self-modifying at
the meta-level is often dangerous, and it should be used only when the
same effect cannot be produced at the object level. To assume "the more
radical the changes can be, the more intelligent the system will be" is
unfounded. It is easy to allow a system to modify its own source code,
but hard to do it right.

Even if you write a C compiler in C, or a Prolog interpreter in Prolog
(which is much easier), it cannot be used without something else that
understands at least a subset of the language.
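Pei's bootstrapping point can be illustrated with a classic quine, a program whose output is exactly its own source. This toy is my illustration, not from the thread: even a program that fully "contains" itself still needs the external Python interpreter (the inviolate level it cannot replace) to run at all.

```python
# The two lines below form a classic Python quine: running them prints
# exactly those two lines (the comments are not part of the quine).
# The program describes itself, but only the *interpreter* can execute it.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The %r/%% trick makes the string reproduce its own quoting, much as a C compiler written in C reproduces itself only once some earlier compiler already exists.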

Pei



Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 09:35:57AM -0400, Pei Wang wrote:

 I have to disagree. The following is adapted from my chapter in the
 AGI collection 
 (http://www.springer.com/west/home/generic/order?SGWID=4-40110-22-43950079-0):

I have to disagree with your disagreement. Provably optimal computational
substrates and representation can be optimized by co-evolution. This
process is open-ended. 
 
 *. Complete self-modifying is an illusion. As Hofstadter put it, below
 every tangled hierarchy lies an inviolate level [in GEB]. If we
 allow a system to modify its meta-level knowledge, i.e., its inference rules
 and control strategy, we need to give it (fixed) meta-meta-level knowledge
 to specify how the modification happens. As flexible as the human mind

Stochastic optimization doesn't have any blinkers. Of course, it
takes a population, because most of these are fatal.

 is, it cannot modify its own laws of thought.
 
 *. Though high-level self-modifying will give the system more flexibility, 
 it
 does not necessarily make the system more intelligent. Self-modifying at

If intelligence is infoprocessing capability, then any process that
maximizes the ops/g and ops/J will also optimize for intelligence.

 the meta-level is often dangerous, and it should be used only when the
 same effect cannot be produced at the object level. To assume "the more
 radical the changes can be, the more intelligent the system will be" is
 unfounded. It is easy to allow a system to modify its own source code,
 but hard to do it right.

Yes, it took evolution a while before it learned to evolve. ALife hasn't
reached that first milestone yet.
 
 Even if you write a C compiler in C, or a Prolog interpreter in Prolog
 (which is much easier), it cannot be used without something else that
 understands at least a subset of the language.

The whole language metaphor in AI is a crock. It makes so many smart
people go chasing wild geese up blind alleys.



Re: [agi] small code small hardware

2007-03-29 Thread Russell Wallace

On 3/29/07, Pei Wang [EMAIL PROTECTED] wrote:


*. Though high-level self-modifying will give the system more flexibility, it
does not necessarily make the system more intelligent. Self-modifying at
the meta-level is often dangerous, and it should be used only when the
same effect cannot be produced at the object level. To assume "the more
radical the changes can be, the more intelligent the system will be" is
unfounded. It is easy to allow a system to modify its own source code,
but hard to do it right.



Yep. Supporting data: Eurisko, which had largely unrestricted
self-modification ability and did a few interesting things with it, but
would rather quickly banjax itself and require human intervention to fix it.



Re: [agi] small code small hardware

2007-03-29 Thread YKY (Yan King Yin)

On 3/29/07, Jean-Paul Van Belle [EMAIL PROTECTED] wrote:

I guess (50 to 100 modules) x (500 to 2500 locs) x fudge factor x language
factor
with fudge factor = 2 to 4 and language factor = 1 for eg Python; 5 for eg
C++

50-100 modules?  Sounds like you have a very unconventional architecture.

From what you say, Python sounds like a pretty good *procedural* language --
would you say it's the easiest way to build an AGI prototype?

YKY



Re: [agi] small code small hardware

2007-03-29 Thread Jean-Paul Van Belle
Re number of modules - ask any neuroscientist how many modules there are
in the brain... and see which you think you can do without. My approach
was to list the important brain modules, delete those that I thought I
could do without, and add a very few that they haven't located but that
seem needed. Some modules end up being split into smaller ones as you
start delving into implementation issues.

Re PYTHON - hey, I thought we just *had* the language debate. FWIW, in a
previous life I've coded in Fortran and various flavours of Basic.
Python gives a fast learning curve, high productivity, high readability
(important if you have gaps between programming time); it *is* OO but
also procedural/functional - I like that mesh - and offers
self-modification, the efficient data structures which I need, and lots
of community support, e.g. MontyLingua gives you a natural language
parser for free. Low performance is an issue, but one could always
inline C. So Python it is for my first prototype. I don't recommend
people change their current language though if they're happy with it.
Still early days for me.



Re: [agi] small code small hardware

2007-03-29 Thread Russell Wallace

On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


From what you say, Python sounds like a pretty good *procedural* language
-- would you say it's the easiest way to build an AGI prototype?



Remember this is for the framework (rather than content) we're talking
about, so a procedural language is appropriate. I've done a bit of Python,
it's nice and easy to use - but for the framework, whatever one is most
productive in is probably the best choice.



Re: [agi] small code small hardware

2007-03-29 Thread kevin . osborne

 Let's take a poll?
 I believe that a minimal AGI core, sans KB content, may be around 100K lines 
of code.
 What are other people's estimates?


from: 
http://web.archive.org/web/20060306104407/www.etla.org/cpan-sloccount-report.txt
Perl CPAN:
 15,000,000.

from: http://www.dwheeler.com/sloc/
GNU/Linux:
 30,000,000.

from: http://en.wikipedia.org/wiki/Source_lines_of_code
Windows Vista:
 50,000,000.

50M LOC to code an OS to interface with an AGI (i.e. us).

Thing is, we do all the smart stuff.

An OS without a user is for all intents and purposes a 'dumb terminal'.

from: http://faculty.washington.edu/chudler/facts.html
Average number of neurons in a human brain:
 100,000,000,000.
Number of dendrites, axons and synapses this equates to:
 too bloody much :-)

In "The 21st Century Brain", neuroscientist Steve Rose states that the
current estimate of the 'degrees of separation' between neurons in the
brain is 2-3.
Say 2.5^10e9 interconnects, which is a number too big for even a
crypto BigInt calculator, if not a number too big for computronium :-)

Now I know it's wrong to bunch all of the brain into lines of code;
there's obviously a lot of it which is simply data points for memory
etc. and a host of other 'non-processing' functions.

But that said, with the numbers involved, even if only a small
percentage of those interconnects provide processing ability, that's
still a ridiculously large number.
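For a sense of scale, the "too bloody much" figure can be bounded with simple arithmetic. The neuron count is the one quoted above; the synapses-per-neuron range is a common textbook estimate I am assuming for illustration, not a figure from this thread:

```python
# Back-of-the-envelope synapse count.
neurons = 100_000_000_000           # 10^11, the figure quoted above
syn_low, syn_high = 1_000, 10_000   # assumed ~10^3-10^4 synapses per neuron

low = neurons * syn_low             # 10^14
high = neurons * syn_high           # 10^15
print(f"{low:.0e} to {high:.0e} synapses")
```

Even the low end is five orders of magnitude beyond anything measured in lines of code, which is kevin's point.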

You could argue that a lot of all this is the same kind of functions
just operating in 'parallel' with a lot of 'redundancy'.

I'm not sure I buy that. Evolution is a miserly mistress. If thinking
could have been achieved with less, it would have been, and any
'extra' would have no means of selection.

The (also ridiculously large) number of years involved in mammalian
brain evolution all led towards what we bobble around with us today.

I think there is an untold host of support functions necessary to take
a Von Neumann machine to a tipping-point|critical-mass where it can
truly think for itself. To even begin to equate to the generalised
abilities of an imbecile.

Not to be discouraging though. :-)



Re: [agi] small code small hardware

2007-03-29 Thread Jean-Paul Van Belle
IMHO
IF you can provide a learning environment similar in complexity to our
world
THEN maximum code size (zipped using Matt Mahoney's algorithm) < the
portion of non-redundant DNA devoted to the brain
/IMHO
 
Some random thoughts.
Any RAM location can link to any other RAM location so there are more
interconnects.
The structure of RAM can be described very succinctly.
A CPU has 800 million transistors - a much more generous instruction
set than our brain.
 
Most likely we're *all* way off the mark ;-)



Re: [agi] small code small hardware

2007-03-29 Thread BillK

On 3/29/07, kevin osborne wrote:
snip

You could argue that a lot of all this is the same kind of functions
just operating in 'parallel' with a lot of 'redundancy'.

I'm not sure I buy that. Evolution is a miserly mistress. If thinking
could have been achieved with less, it would have been, and any
'extra' would have no means of selection.

The (also ridiculously large) amount of years involved in mammalian
brain evolution all led towards what we bobble around with us today.

I think there is an untold host of support functions necessary to take
a Von Neumann machine to a tipping-point|critical-mass where it can
truly think for itself. To even begin to equate to the generalised
abilities of an imbecile.



I think you have too high an opinion of Evolution.
Evolution is kludge piled upon kludge.
This is because evolution via natural selection cannot construct
traits from scratch. New traits must be modifications of previously
existing traits. This is called historical constraint.

There are many examples available in nature of bad design.

So it is not unlikely that a lot of the human brain processing is a
redundant hangover from earlier designs.  Of course, it is not a
trivial problem to decide which functions are not required to create
AGI.   :)

BillK



Re: [agi] small code small hardware

2007-03-29 Thread Mark Waser

*. Complete self-modifying is an illusion. As Hofstadter put it, "below
every tangled hierarchy lies an inviolate level" [in GEB]. If we
allow a system to modify its meta-level knowledge, i.e., its inference rules
and control strategy, we need to give it (fixed) meta-meta-level knowledge
to specify how the modification happens. As flexible as the human mind
is, it cannot modify its own laws of thought.


I've always disagreed with Hofstadter's argument since it pre-supposes a 
single static system (in terms of available knowledge).  Your argument is 
exactly the same.  Why does the meta-meta-level knowledge have to be fixed? 
And why can't a system spawn a subsystem that is separate enough to change 
that inviolable level?


Also, your statement about the human mind is 100% specious and irrelevant, 
not to mention the fact that I don't find the human mind particularly 
flexible.


*. Though high-level self-modifying will give the system more flexibility, it
does not necessarily make the system more intelligent. Self-modifying at
the meta-level is often dangerous, and it should be used only when the
same effect cannot be produced at the object level. To assume "the more
radical the changes can be, the more intelligent the system will be" is
unfounded. It is easy to allow a system to modify its own source code,
but hard to do it right.


I agree with this paragraph 100%.


Even if you write a C compiler in C, or a Prolog interpreter in Prolog
(which is much easier), it cannot be used without something else that
understands at least a subset of the language.


Hunh?  The AGI needs to understand its language.  Don't you understand, and
can't you explain, your *logical* thought processes?  Can't you write a
computer program to emulate any single given one (given sufficient time,
etc.)?  I'm not sure where you're going with this . . . .




Re: [agi] small code small hardware

2007-03-29 Thread Mark Waser
 50-100 modules?  Sounds like you have a very unconventional architecture.

Depends upon what you call a module and whether you're only counting true core
modules (and not counting any specializations, descendants, compositions, etc.,
etc., etc.).

How many keywords do you have in any given programming language, and what can
you do with that language (short answers: well under fifty, and *anything*)?

As I said before, how can you make or argue an estimate without even agreeing
upon terms and baselines?
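Mark's "well under fifty" keyword claim is easy to check concretely for at least one language. Python here is just a convenient example of a general-purpose language, not a claim about which language Mark had in mind:

```python
import keyword

# Python's reserved words, straight from the standard library.
kw = keyword.kwlist
print(len(kw), sorted(kw)[:5])  # the exact count varies slightly by version

# Mark's point: a few dozen primitives suffice for a Turing-complete language.
assert len(kw) < 50
```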


Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 04:46:59PM +0200, Jean-Paul Van Belle wrote:

Some random thoughts.
 
Any RAM location can link to any other RAM location so there are more
interconnects.

Not so fast. Memory bandwidth is very limited (~20 GByte/s current,
GDDR3/GPUs are much better, agreed), and the access
pattern is not flat. Predictable and local accesses are preferred,
whereas worst case can be as low as 5% of advertised peak.

The difference between CPU speed and memory bandwidth growth
is a linear semi-log plot, too.

However, the limited fan-out factors are not a problem with 
active media and even simple packet-switched signalling mesh.
Embedded DRAM, wide bus ALU (with in-register parallelism)
meshed up with a packet-switched signalling fabric is the bee's
knees -- but you can't buy these yet.
 
The structure of RAM can be described very succinctly.

RAM alone doesn't compute. Try hardware CAs. These are pretty
regular, too, and actually pack a lot of punch, especially in 3d.
(In fact, the best possible classical computational substrate is
a molecular-cell CA).
 
A CPU has 800 million transistors - a much more generous instruction
set than our brain.

I have absolutely no idea what you mean by this. I'm hazarding
that you yourself don't, either.
 


[agi] AGI and Web 2.0

2007-03-29 Thread YKY (Yan King Yin)

How does the new phenomenon of web-based collaboration change the way we
build an AGI?  I feel that something is amiss in a business model if we
don't make use of some form of Web 2.0.

I think rooftop8000 is on the right track by thinking this way, but he may
not have it figured out yet.

Obviously, commonsense knowledge (ie KB contents) can be acquired from the
internet community.  But what about the core?  Can we build it using
web-collaboration too?

One of my strong convictions is that opensource should be combined with
commercial.  That will result in the most productive and satisfying
organization, IMO.

Suppose we opensource an AGI codebase, so people can contribute by adding to
/ modifying it.  Then we should have a way to measure the contribution's
value and reward the contributor accordingly.  What we need is:

1. a way to decide which contributions to accept (by voting?)
2. a way to measure the *value* of each contribution (perhaps voting as
well)

A problem is that we cannot take universal ballots every time on every
trivial issue.  So probably we need a special administrative committee for
decision-making.
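The two mechanisms listed above (accept-by-vote, value-by-vote) could be prototyped very simply. Everything in this sketch — the function names, the quorum of 5 and the 0-10 scoring — is invented for illustration, not a design from the thread:

```python
# Toy sketch of YKY's two mechanisms: accepting contributions by vote and
# measuring their value by vote. All names and thresholds are hypothetical.

def accepted(votes_for, votes_against, quorum=5):
    """Accept a contribution if enough members voted and a majority approve."""
    total = votes_for + votes_against
    return total >= quorum and votes_for > votes_against

def contribution_value(scores):
    """Value a contribution as the mean of member-assigned scores (0-10)."""
    return sum(scores) / len(scores) if scores else 0.0

print(accepted(6, 2))                  # quorum met, majority in favour
print(contribution_value([7, 9, 8]))   # mean score
```

A real scheme would also need vote weighting and reward payout, but even this toy shows the committee question: someone still has to pick the quorum and scoring rules.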

This idea is worth trying because it may cut down on development costs.

YKY



Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 08:40:02AM -0700, David Clark wrote:

I would like to know what computer executes data without code.  None
that I have used since 1976 so please educate me!

The distinction is a bit arbitrary. Machine instructions are nothing
but data to the CPU.

But the lack of distinction between code and data in biological
tissue processing is significant. Such systems are best seen as
state, and their evolution (the state space variety) as iterative
transformation on that state.

Considering the memory bottleneck, you don't get a lot of refreshes/s
on a typical 10^9-word node. With current technology you get roughly 10
MBytes/node if you want to match the refresh rate of neuronal circuitry,
which is not a lot of state/node, so you need an awful lot of nodes.
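The figure can be reproduced with back-of-envelope arithmetic (Python; the 1 GB/s effective memory bandwidth and ~100 Hz neural update rate are my assumed inputs, not numbers from the post):

```python
# How much state can one node hold if it must sweep all of it at the
# refresh rate of neuronal circuitry?

bandwidth = 1e9      # bytes/s of effective memory bandwidth (assumed)
refresh_hz = 100     # rough update rate of neural circuits (assumed)

state_per_node = bandwidth / refresh_hz   # bytes touchable per refresh
print(state_per_node / 1e6)               # -> 10.0 (MBytes/node)
```

Conversely, a 10^9-word (~8 GB) node at that bandwidth gets only about 0.1 full sweeps per second — hence the need for an awful lot of nodes.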
 
Even though some state designs can put logic into data instead of
program code, and even though program code is stored as data, they
aren't the same.

The distinction between storage and processing, between code and
data is arbitrary. It's an earmark of a particular technology, and a
rather pitiful technology, which goes back directly to the Jacquard loom.
We're stuck in a bad optimum for time being, but luckily people have
started running into enough limitations (recent multicore mania is a
symptom) so they're willing to abandon the conventional approach,
because it no longer offers enough ROI, especially long-term.

To estimate given insufficient knowledge is problematic; to estimate
given NO knowledge produces useless conjectures.

Yes. This is why I stick to what biology can do in a given volume, because
it's the only working instance we can analyze.




Re: [agi] AGI and Web 2.0

2007-03-29 Thread Russell Wallace

On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


Obviously, commonsense knowledge (ie KB contents) can be acquired from the
internet community.  But what about the core?  Can we build it using
web-collaboration too?



I think the framework at least initially needs to be written by a small team
of dedicated people. Open sourcing it later would be an option, but I think
it needs to get to a working version before it'll start being possible for
outside/casual contributions to be useful. (From what I remember off the top
of my head, most successful open source projects have followed a similar
model: an individual or small team produces the first working version, and
after that other people get involved.)



Re: [agi] small code small hardware

2007-03-29 Thread David Clark
Some of the points you make below might be correct about self modification
but *absolutely no* modification of code or new code solutions can be had if
your AGI doesn't contain a programming language.  An AGI with the tools to
create programs (or change existing ones) would surely have more options to
create solutions than one that doesn't.

Having the ability to modify code at the lower or higher meta levels doesn't
mean that it has to.

-- David Clark


- Original Message - 
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 29, 2007 6:35 AM
Subject: Re: [agi] small code small hardware


 On 3/29/07, Mark Waser [EMAIL PROTECTED] wrote:
 
  I'll go you one better . . . . I truly believe that the minimal AGI
core,
  sans KB content, is 0 lines of code . . . .
 
  Just like C compilers are written in C, the AGI should be entirely
  written in its knowledge base (eventually) to the point that it can
  understand itself, rewrite itself, and recompile itself in its
  entirety.  The problem is bootstrapping to that point.

 I have to disagree. The following is adapted from my chapter in the
 AGI collection
(http://www.springer.com/west/home/generic/order?SGWID=4-40110-22-43950079-0
):

 *. Complete self-modification is an illusion. As Hofstadter put it,
 "below every tangled hierarchy lies an inviolate level" [in GEB]. If we
 allow a system to modify its meta-level knowledge, i.e., its inference
 rules and control strategy, we need to give it (fixed) meta-meta-level
 knowledge to specify how the modification happens. As flexible as the
 human mind is, it cannot modify its own laws of thought.

 *. Though high-level self-modification will give the system more
 flexibility, it does not necessarily make the system more intelligent.
 Self-modification at the meta-level is often dangerous, and it should be
 used only when the same effect cannot be produced at the object level. To
 assume that the more radical the changes can be, the more intelligent the
 system will be is unfounded. It is easy to allow a system to modify its
 own source code, but hard to do it right.

 Even if you write a C compiler in C, or a Prolog interpreter in Prolog
 (which is much easier), it cannot be used without something else that
 understands at least a subset of the language.

 Pei





Re: [agi] small code small hardware

2007-03-29 Thread David Clark
Even if your estimate of 65 different brain modules is correct, why couldn't an 
AGI combine any of these into a bigger module, or create any number of modules, 
to accomplish what one module does in a human?  My point is, I see no connection 
between the number of modules needed in an AGI and the number in a human.  On 
top of that, some of these human modules might take orders of magnitude more 
code in an AGI than another module.  (Not all 65 human modules are of equal 
complexity, or would be if coded in an AGI.)

I see no rational basis for estimating the size of the code required to 
create an AGI from what exists in our brains.

-- David Clark
  - Original Message - 
  From: Jean-Paul Van Belle 
  To: agi@v2.listbox.com 
  Sent: Thursday, March 29, 2007 8:24 AM
  Subject: Re: [agi] small code small hardware


  True - many definitions of modules  ;-)
  My definition: unique functionality - as usually reflected in a different 
type of data being manipulated (i.e. different input and/or output types). I 
cannot reduce the number of different functional modules below 65. Many modules 
embed more than one function and all inherit and/or specialize general methods.



Re: [agi] small code small hardware

2007-03-29 Thread David Clark
- Original Message - 
From: Eugen Leitl [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 29, 2007 8:55 AM
Subject: Re: [agi] small code small hardware


 On Thu, Mar 29, 2007 at 08:40:02AM -0700, David Clark wrote:

 I would like to know what computer executes data without code.  None
 that I have used since 1976 so please educate me!

 The distinction is a bit arbitrary. Machine instructions are nothing
 but data to the CPU.

I said that too.

 But the lack of distinction between code and data in biological
 tissue processing is significant. Such systems are best seen as
 state, and their evolution (the state space variety) as iterative
 transformation on that state.

Can you program a CPU made of biological material?  I program using normal
silicon based computers, the last I looked!

 Considering the memory bottleneck, you don't get a lot of refreshes/s
 on a typical 10^9-word node. With current technology you get roughly 10
 MBytes/node if you want to match the refresh rate of neuronal circuitry,
 which is not a lot of state/node, so you need an awful lot of nodes.

I think the quality of algorithms matters more than quantity.  On this we
can just agree to disagree.  I don't relate any computer cycles or memory
speed to what humans can do.  I program computers using normal machine code.
If I had a different tool to accomplish your vision, then I might see your
algorithm in a different light.

 Even though some state designs can put logic into data instead of
 program code, and even though program code is stored as data, they
 aren't the same.

 The distinction between storage and processing, between code and
 data is arbitrary. It's an earmark of a particular technology, and a
 rather pitiful technology, which goes back directly to the Jacquard loom.

The difference between data and code may be arbitrary but you have to admit
that they aren't the same on modern day computers that we are all using.
Pitiful compared to what?  My first computer had an 8080 CPU and it was
state-of-the-art and great at the time!

 We're stuck in a bad optimum for time being, but luckily people have
 started running into enough limitations (recent multicore mania is a
 symptom) so they're willing to abandon the conventional approach,
 because it no longer offers enough ROI, especially long-term.

A person can only work with the tools they have.  Better or different tools
can create an environment that makes failed algorithms from the past work.
I sadly don't possess such hardware, and if you don't either, then you should
create solutions based on the tools you have.

 To estimate given insufficient knowledge is problematic; to estimate
 given NO knowledge produces useless conjectures.

 Yes. This is why I stick to what biology can do in a given volume, because
 it's the only working instance we can analyze.

Biology is the only concrete example we can study BUT our tools are not the
tools of biology.  We need to work with computer techniques if our solution
is to be had on that computer.  Other direct analogies to biology are nice
but not necessarily directly helpful.

-- David Clark




Re: [agi] AGI and Web 2.0

2007-03-29 Thread Bob Mottram

Yes that's usually the way it works.  Initially you need one person or a
small team to produce something which is at least good enough to be run and
tested by others.  Improvements can be made from there on.



On 29/03/07, Russell Wallace [EMAIL PROTECTED] wrote:


On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 Obviously, commonsense knowledge (ie KB contents) can be acquired from
 the internet community.  But what about the core?  Can we build it using
 web-collaboration too?


I think the framework at least initially needs to be written by a small
team of dedicated people. Open sourcing it later would be an option, but I
think it needs to get to a working version before it'll start being possible
for outside/casual contributions to be useful. (From what I remember off the
top of my head, most successful open source projects have followed a similar
model: an individual or small team produces the first working version, and
after that other people get involved.)





Re: [agi] small code small hardware

2007-03-29 Thread Pei Wang

Well, once again we need to distinguish two different levels of
language. In my NARS, the system's knowledge/beliefs are represented
in a language called Narsese, which has the ability to describe a
sequence of system operations. In that sense, the system can create
and modify its programs by which given tasks are processed. On the
other level, all the Narsese sentences are treated as data by the
system's implementation language, Java, whose code the system cannot
modify. In theory, Narsese can be extended to include all Java
functionality (though I don't think it will be necessary), but even
after that, the system still doesn't/cannot/shouldn't modify its own
source code.

If what you are after is just flexibility in behavior, I think there
are much better ways to achieve it than self-modifying source-code.
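
Pei's two-level design can be caricatured in a few lines (Python standing in for Java; the rule format is purely illustrative and is not Narsese):

```python
# Fixed "meta-level": an interpreter the system cannot rewrite.
# Mutable "object-level": beliefs held as data, which the system may
# create and modify freely.

rules = {("wet", "rain"): 0.9}   # object-level beliefs: (effect, cause) -> strength

def revise(rules, key, evidence):
    """Fixed meta-level procedure for belief revision (simple averaging)."""
    old = rules.get(key, 0.5)
    rules[key] = (old + evidence) / 2
    return rules[key]

# The system modifies its own beliefs...
revise(rules, ("wet", "rain"), 1.0)
# ...but the revise() code itself stays outside its reach.
```

Everything above the `revise` line is fair game for self-modification; `revise` itself plays the role of the inviolate level.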

Pei

On 3/29/07, David Clark [EMAIL PROTECTED] wrote:

Some of the points you make below might be correct about self modification
but *absolutely no* modification of code or new code solutions can be had if
your AGI doesn't contain a programming language.  An AGI with the tools to
create programs (or change existing ones) would surely have more options to
create solutions than one that doesn't.

Having the ability to modify code at the lower or higher meta levels doesn't
mean that it has to.

-- David Clark


- Original Message -
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 29, 2007 6:35 AM
Subject: Re: [agi] small code small hardware


 On 3/29/07, Mark Waser [EMAIL PROTECTED] wrote:
 
  I'll go you one better . . . . I truly believe that the minimal AGI
core,
  sans KB content, is 0 lines of code . . . .
 
  Just like C compilers are written in C, the AGI should be entirely
  written in its knowledge base (eventually) to the point that it can
  understand itself, rewrite itself, and recompile itself in its
  entirety.  The problem is bootstrapping to that point.

 I have to disagree. The following is adapted from my chapter in the
 AGI collection
(http://www.springer.com/west/home/generic/order?SGWID=4-40110-22-43950079-0
):

  *. Complete self-modification is an illusion. As Hofstadter put it,
  "below every tangled hierarchy lies an inviolate level" [in GEB]. If we
  allow a system to modify its meta-level knowledge, i.e., its inference
  rules and control strategy, we need to give it (fixed) meta-meta-level
  knowledge to specify how the modification happens. As flexible as the
  human mind is, it cannot modify its own laws of thought.

  *. Though high-level self-modification will give the system more
  flexibility, it does not necessarily make the system more intelligent.
  Self-modification at the meta-level is often dangerous, and it should be
  used only when the same effect cannot be produced at the object level. To
  assume that the more radical the changes can be, the more intelligent the
  system will be is unfounded. It is easy to allow a system to modify its
  own source code, but hard to do it right.

  Even if you write a C compiler in C, or a Prolog interpreter in Prolog
  (which is much easier), it cannot be used without something else that
  understands at least a subset of the language.

 Pei








Re: [agi] AGI and Web 2.0

2007-03-29 Thread Russell Wallace

On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:



Yes, I've heard the same thing, but I'm wondering if we can do better than
that by going open sooner.

You know, very often the biggest mistakes are made at the very beginning.
If we can solicit the collective intelligence of a wider group perhaps
the basic design will be better.



I think there's at least one good practical reason to avoid doing that, or
at least to do it at arm's length in a "potential users discussing potential
features" mailing list rather than "here's our code as we write it". In the
early stages of something as bleeding-edge as this, it's normal to need
several rounds of scrapping and redoing major chunks of design; if you
don't/can't do that, if you have to go with whatever your first guess was,
it's easy to end up hamstrung later because the design doesn't really handle
the requirements and it's too late to rewrite from scratch. It's
psychologically a lot easier to do that sort of scrap-and-redo if the world
isn't looking over your shoulder.

One thing we can try is to build an extremely primitive prototype so it can

be out as soon as possible.



That I agree with, aim to get something that works but doesn't yet have all
the bells and whistles, so it can be released soon.



Re: [agi] AGI and Web 2.0

2007-03-29 Thread Bob Mottram

On 29/03/07, Russell Wallace [EMAIL PROTECTED] wrote:


I think there's at least one good practical reason to avoid doing that, or
at least to do it at arm's length in a potential users discussing potential
features mailing list rather than here's our code as we write it. In the
early stages of something as bleeding-edge as this, it's normal to need
several rounds of scrapping and redoing major chunks of design; if you
don't/can't do that, if you have to go with whatever your first guess was,
it's easy to end up hamstrung later because the design doesn't really handle
the requirements and it's too late to rewrite from scratch. It's
psychologically a lot easier to do that sort of scrap-and-redo if the world
isn't looking over your shoulder.




The process of invention inevitably involves scrapping designs when they
reach a point where it's obvious that they're not going to work.  This is
especially a problem for AI systems, where even the theoretical basis
underlying the project is subject to uncertainty, whereas if you're just
writing a web browser, the theory of what it should do is basically
known from the outset.

I've lost count of the number of times I've scrapped and re-written
some of my own projects, but by now I think I've made most of the mistakes
which it's possible to make, and as they say, when you have eliminated the
impossible, whatever remains, however improbable, must be the truth.



Re: [agi] AGI and Web 2.0

2007-03-29 Thread Russell Wallace

On 3/29/07, Bob Mottram [EMAIL PROTECTED] wrote:


I've lost count of the number of times I've scrapped and re-written
some of my own projects, but by now I think I've made most of the mistakes
which it's possible to make, and as they say, when you have eliminated the
impossible, whatever remains, however improbable, must be the truth.



Yep, that's about where I'm at too ^.^



Re: [agi] AGI and Web 2.0

2007-03-29 Thread YKY (Yan King Yin)

On 3/30/07, Russell Wallace [EMAIL PROTECTED] wrote:

I think there's at least one good practical reason to avoid doing that, or

at least to do it at arm's length in a potential users discussing potential
features mailing list rather than here's our code as we write it. In the
early stages of something as bleeding-edge as this, it's normal to need
several rounds of scrapping and redoing major chunks of design; if you
don't/can't do that, if you have to go with whatever your first guess was,
it's easy to end up hamstrung later because the design doesn't really handle
the requirements and it's too late to rewrite from scratch. It's
psychologically a lot easier to do that sort of scrap-and-redo if the world
isn't looking over your shoulder.

OK, that's reasonable ;)

YKY



[agi] knowledge representation, Cyc

2007-03-29 Thread YKY (Yan King Yin)

I just talked to some Cyc folks, and they assured me that CycL is adequate
to represent entire stories like Little Red Riding Hood.

The AGI framework has to operate on a knowledge representation language, and
building that language is not a programming task but rather an ontology
engineering task, which I'm not very familiar with.  I guess we should not
underestimate the amount of work required for the KR scheme.  If we use CycL
we may save a lot of time.

I may try to translate LRRH into CycL to see if it is too cumbersome or
what.
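
As a rough, KR-agnostic illustration (plain Python triples rather than actual CycL, whose syntax I won't attempt to reproduce; all predicate and constant names here are invented), the opening of the story might decompose into flat assertions like:

```python
# The opening of Little Red Riding Hood as flat logical assertions.
# Predicate and constant names are illustrative only, not CycL vocabulary.

story = [
    ("isa", "LRRH", "YoungGirl"),
    ("isa", "Grandmother1", "Grandmother"),
    ("relative", "LRRH", "Grandmother1"),
    ("isa", "RedHood1", "HoodedCloak"),
    ("wears", "LRRH", "RedHood1"),
    ("givenBy", "RedHood1", "Grandmother1"),
]

def query(story, pred):
    """Return all assertions with the given predicate."""
    return [t for t in story if t[0] == pred]
```

Even this toy fragment shows where the real ontology-engineering work lies: choosing the predicates, the event structure, and the temporal ordering, none of which is a programming problem.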

Ben:  Are you interested in translating LRRH into Novamente's KR, as a demo?

YKY



Re: [agi] small code small hardware

2007-03-29 Thread David Clark

- Original Message - 
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 29, 2007 10:26 AM
Subject: Re: [agi] small code small hardware


 Well, once again we need to distinguish two different levels of
 language. In my NARS, the system's knowledge/beliefs are represented
 in a language called Narsese, which has the ability to describe a
 sequence of system operations. In that sense, the system can create
 and modify its programs by which given tasks are processed.

This is as good a definition as any of what a language inside an AGI is.  How
efficient it is, and how much of your *system* that language can access, are
the only other questions I have.

 On the other level, all the Narsese sentences are treated as data by the
 system's implementation language, Java, whose code the system cannot
 modify. In theory, Narsese can be extended to include all Java
 functionality (though I don't think it will be necessary), but even
 after that, the system still doesn't/cannot/shouldn't modify its own
 source code.

If the code in Java encodes algorithms that are part of your AGI design,
then your internal language can't access all your Java functionality unless
you made it explicitly that way.  If your whole AGI was coded in your
internal language then I wouldn't have that same criticism as to its
flexibility.  If you code your AGI algorithms in Java and then call those
programs from your internal language, what happens when you want to enhance
or add to the algorithms written in Java?  How do you guarantee that all
algorithms needed to power the AGI will be present in any single copy of
your Java program?

 If what you are after is just flexibility in behavior, I think there
 are much better ways to achieve it than self-modifying source-code.

This isn't an either/or.  Solutions can be coded in programs and/or data.
We have no disagreement on that.  I'm only saying that having *both*
abilities will always be better than just being able to change the data
only.  If you can make a program instead of just using data with a program
that already exists, you will always have more flexibility than if this
option wasn't open to you at all.

If you disagree, please explain why.  It seems quite obvious to me and if I
am mistaken, I would appreciate the reasons so I can adjust my thinking.

-- David Clark




Re: [agi] AGI and Web 2.0

2007-03-29 Thread Mike Dougherty

On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

How does the new phenomenon of web-based collaboration change the way we
build an AGI?  I feel that something is amiss in a business model if we
don't make use of some form of Web 2.0.

A problem is that we cannot take universal ballots every time on every
trivial issue.  So probably we need a special administrative committee for
decision-making.


I think the primitive prototype should be about inter-node
communication methodology.  A basic API for Tell me what you know
about X, Y, Z  would allow nodes utilizing different storage or
processing methods to interact with each other.  ex:  I ask for
information about some process flow and I get back a chart.  I am not
particularly good at consuming a chart, so I store this content as
"possibly relevant but currently less than ideally consumable".
Eventually I may develop a way to get the meaning out of that media
format.  Meanwhile if someone asks ME for that same process flow, I
can communicate in my more 'native' expression of
words/paragraphs/etc. and simply pass along the chart.  That consumer
might prefer the chart.  Assuming I pass along the chart with proper
source identification, I have communicated not only my knowledge of
the subject, but a potential forward reference for further query.
(conceivably the source of that chart might have gained new
information on the subject while I was storing it)

whether nodes represent people in a social network or neurons in a
brain, I believe the interconnect protocol is what makes the whole
greater than the mere sum of the parts.
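
The "Tell me what you know about X" protocol Mike describes could be skeletonized like this (Python; the message fields, including the media type and source attribution, are my guesses at what such an API would need):

```python
# Minimal inter-node knowledge exchange: every answer carries its media
# type and source, so a node can store content it cannot yet consume
# and still forward it with attribution intact.

def ask(node, topic):
    """Query a node; returns a list of (media_type, content, source) answers."""
    answers = list(node.get("kb", {}).get(topic, []))
    # Include forwardable items the node stored but never digested itself.
    answers += node.get("undigested", {}).get(topic, [])
    return answers

chart_node = {"kb": {"flow": [("chart", "<boxes and arrows>", "chart_node")]}}
text_node = {"kb": {"flow": [("text", "step 1 then step 2", "text_node")]},
             "undigested": {"flow": ask(chart_node, "flow")}}

# text_node answers in its native format AND passes the chart along:
replies = ask(text_node, "flow")
```

The interconnect here is just a list of typed, attributed answers; any storage or processing method behind the node is invisible to the caller, which is what lets heterogeneous nodes interoperate.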



Re: [agi] knowledge representation, Cyc

2007-03-29 Thread Matt Mahoney

--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 I just talked to some Cyc folks, and they assured me that CycL is adequate
 to represent entire stories like Little Red Riding Hood.
 
 The AGI framework has to operate on a knowledge representation language, and
 building that language is not a programming task but rather an ontology
 engineering task, which I'm not very familiar with.  I guess we should not
 underestimate the amount of work required for the KR scheme.  If we use CycL
 we may save a lot of time.
 
 I may try to translate LRRH into CycL to see if it is too cumbersome or
 what.

Wouldn't it save time in the long run to build a system that could translate
English into your KR?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] knowledge representation, Cyc

2007-03-29 Thread YKY (Yan King Yin)

On 3/30/07, Matt Mahoney [EMAIL PROTECTED] wrote:

Wouldn't it save time in the long run to build a system that could

translate

English into your KR?


Yes, that's the goal.  I'm just doing a human translation of the first
paragraph or so, to get the feel of CycL.

It can also be compared with Novamente's version.  I think world-wide there
are only about 3-5 KR schemes capable of representing such stories
adequately.

YKY



Re: [agi] small code small hardware

2007-03-29 Thread David Clark

- Original Message - 
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 29, 2007 1:13 PM
Subject: Re: [agi] small code small hardware


 As I said before, I don't think it is a good idea to allow that
 flexibility. If all the desired changes can be made in the content
 language, why bother to modify the Java code?

Does that mean that 1 algorithm (or a small number of algorithms programmed
in Java) is all that it will take to make an AGI?  Will the AGI at some
point not need any modifications to its Java code to continue to get
smarter and solve new problems?  Is flexibility of all kinds bad or just
flexibility that has to do with an AGI producing code?

 I'll need to redesign the system in that case. I know it sounds less
 exciting than the system will redesign itself, but at least for the
 near future, the latter path will cause more troubles than successes.

Why do you believe this?  I am not asking you to change/redesign your system
but just explain the reasons why more choice (in problem solving) is bad.
If an AGI *can* make/change programs, why would they have to use this
facility to redesign parts of itself that might cause a problem?

 I cannot guarantee that. What I'm doing is to add in the algorithms I
 think is necessary, and see what will happen.

 Can you guarantee a self-modifying system always makes the right changes?

I can't *guarantee* that I would make the *right changes* if I was working
on your source code!  Code is rarely bug free but that doesn't mean that
some coding ability isn't useful for an AGI is it?  Changes could be
confined to areas that don't affect its goals, or in test areas.  I can see
using code to solve problems that would be difficult or impossible using
data only.  I don't think that constitutes changes to *core* areas although
it still means the AGI can change itself.

 The key is not program vs. data, but data in one level is program
 in another level. I fully agree with you that an AGI should be able
 to generate and modify algorithms, but that doesn't necessarily mean the
 source code.

This implies that you believe that some algorithms are source code worthy
and others can be made/modified by the AGI.  Is this correct?  If so, will
the efficiency of the AGI algorithms be substantially less than the ones
programmed by humans in Java?  Can you agree that any AGI must be able to
create and use a model to predict something?  This condition isn't the only
definition of an AGI, by any means, but would you say an AGI must have that
kind of modeling capability?  If yes, then how does a person create and
execute that model with many iterations if the tools available are only
data?  If your system was asked to create a model of a line given a Y
intercept and a slope, how would it take a number as input, calculate the
result and display it using data only?
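
The line example actually illustrates the "data in one level is program in another level" point nicely: the model can live entirely in data, with one small fixed evaluator (Python sketch; the representation is my own, not either correspondent's system):

```python
# The "model" is pure data: a parameter record, not code.
line_model = {"slope": 2.0, "intercept": 1.0}

def evaluate(model, x):
    """Fixed interpreter: the only piece that is actual program code."""
    return model["slope"] * x + model["intercept"]

# Iterating the model over many inputs needs no code generation at all.
outputs = [evaluate(line_model, x) for x in range(5)]  # 1, 3, 5, 7, 9
```

Of course, this only works because the evaluator anticipates the model family; David's point is precisely that code generation is what you need when the model family was not anticipated.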

If the above set of questions is walking when you are at the crawling stage,
I understand if you can't answer them.  I am really not trying to pick on
you.

My design so far does exactly what you said above. (data in one level is
program in another level)  My language system is programmed in C++ and
can't change itself at all.  No AGI code is written in C++, however.  The
AGI will be written only in the language created by the C++, so that it can
change/create its programs.  My AGI programs will be considered data from
the C++ programs' point of view.  The difference is that my whole AGI
program will be coded in a totally changeable, very high-speed language as
opposed to a high-speed, human-created one.

All errors in this internal language are totally trappable (unlike C++), so
that the AGI could actually make programming mistakes without affecting
normal data or concurrent operation.
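
The "totally trappable" property might look like this in miniature (a Python sketch, not David's actual C++-hosted language; the sandbox here is illustrative only and is not a real security boundary):

```python
# Run generated code so that any error is trapped and reported
# without disturbing the host environment or other tasks.

def run_trapped(source, env=None):
    """Execute generated code; return ('ok', env) or ('error', message)."""
    env = dict(env or {})                          # copy: host data untouched
    try:
        exec(source, {"__builtins__": {}}, env)    # no host builtins exposed
        return ("ok", env)
    except Exception as e:
        return ("error", f"{type(e).__name__}: {e}")

status, result = run_trapped("y = x * 3", env={"x": 14})    # succeeds
bad_status, msg = run_trapped("y = undefined_name + 1")     # error trapped
```

A programming mistake by the AGI then yields a structured error value it can reason about, rather than crashing the host process.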

I am sure you are very busy so don't feel you must respond.  If you have the
time, however, your answers might help me a great deal.

-- David Clark




Re: [agi] small code small hardware

2007-03-29 Thread Pei Wang

On 3/29/07, David Clark [EMAIL PROTECTED] wrote:


 As I said before, I don't think it is a good idea to allow that
 flexibility. If all the desired changes can be made in the content
 language, why bother to modify the Java code?

Does that mean that 1 algorithm (or a small number of algorithms programmed
in Java) is all that it will take to make an AGI?


Of course, 1 algorithm is not enough. Whether it is a small number
depends on what it is compared to.


Will the AGI at some
point not need any modifications to its Java code to continue to get
smarter and solve new problems?


I guess that will never happen --- an AGI will need modification for a long
time to come. I just think it is better for it to be modified by the
designer than by the system itself.


Is flexibility of all kinds bad or just
flexibility that has to do with an AGI producing code?


AGI needs flexibility, but flexibility alone is not enough for
intelligence. In particular, unlimited flexibility is not a good thing.


 I'll need to redesign the system in that case. I know it sounds less
 exciting than the system will redesign itself, but at least for the
 near future, the latter path will cause more troubles than successes.

Why do you believe this?  I am not asking you to change/redesign your system,
just to explain the reasons why more choice (in problem solving) is bad.


The flexibility in intelligence doesn't mean everything is
changeable. Instead, all changes should be adaptive, in the sense
that problem solving should be carried out according to the system's
experience. This rules out the possibilities that are not
supported by the system's experience.


If an AGI *can* make/change programs, why would they have to use this
facility to redesign parts of itself that might cause a problem?


Because there is no guarantee that such a change will actually make
things better in the long run. Intelligent systems are actually quite
conservative with respect to radical changes. Changing its beliefs about
the environment is one thing (which is relatively mild), but changing
how it changes beliefs may destroy the system's coherence.


 Can you guarantee a self-modifying system always makes the right changes?

I can't *guarantee* that I would make the *right changes* if I were working
on your source code!  Code is rarely bug-free, but that doesn't mean that
some coding ability isn't useful for an AGI, does it?


Again, some coding ability is not only useful, but also necessary
for an AGI. Our difference is not there, but in that I use two languages:
one for object-level knowledge, which is fully modifiable by the
system, and the other for meta-level knowledge, which is modifiable
only by the human designer (you or me), not by the system itself. You, on
the other hand, assume a single language for both purposes, and want
it to be fully modifiable by the system. Though your solution is
technically possible, I don't do it your way because these two
languages have very different features, and keeping them separate is more
manageable in the near future.
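A minimal sketch of this two-level separation might look like the following. This is a hypothetical illustration with invented rule and belief names, not the actual design of NARS or any other system discussed here:

```java
import java.util.*;

// Hypothetical sketch: object-level knowledge is a modifiable table of
// rules, while the meta-level step() method is fixed Java code that the
// system itself cannot edit -- only the human designer can.
public class TwoLevels {
    // Object-level: "if A then B" rules and current beliefs,
    // fully modifiable by the running system.
    private final Map<String, String> rules = new HashMap<>();
    private final Set<String> beliefs = new HashSet<>();

    void learnRule(String cond, String conc) { rules.put(cond, conc); } // the system may call this
    void observe(String fact) { beliefs.add(fact); }
    boolean believes(String fact) { return beliefs.contains(fact); }

    // Meta-level: a fixed forward-chaining step. Changing HOW inference
    // works means editing this Java source -- a designer-only operation.
    void step() {
        for (Map.Entry<String, String> r : new ArrayList<>(rules.entrySet()))
            if (beliefs.contains(r.getKey()))
                beliefs.add(r.getValue());
    }

    public static void main(String[] args) {
        TwoLevels sys = new TwoLevels();
        sys.learnRule("rain", "wet-ground"); // object-level change at run time
        sys.observe("rain");
        sys.step();                          // fixed meta-level inference
        System.out.println(sys.believes("wet-ground")); // prints true
    }
}
```

The point of the sketch is the asymmetry: the rule table can grow and change while the program runs, but the inference loop itself is outside the system's reach.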


Changes could be
confined to areas that don't affect its goals, or to test areas.


This is possible for object-level knowledge/skills, but not for
meta-level knowledge/skills, since the latter applies to all areas.


I can see
using code to solve problems that would be difficult or impossible using
data only.  I don't think that constitutes changes to *core* areas although
it still means the AGI can change itself.


Again, it is not code vs. data, but which type of code. All
object-level code can be changed by the system, as you suggested, but
it is not in the source code of the system in the usual sense.


 The key is not program vs. data, but data in one level is program
 in another level. I fully agree with you that an AGI should be able
 to generate and modify algorithms, but that doesn't necessarily mean
 modifying the source code.

This implies that you believe some algorithms belong in the source code
while others can be made/modified by the AGI.  Is this correct?  If so, will
the efficiency of the AGI-made algorithms be substantially lower than that
of the ones programmed by humans in Java?  Can you agree that any AGI must
be able to create and use a model to predict something?  This condition
isn't the only definition of an AGI, by any means, but would you say an AGI
must have that kind of modeling capability?  If yes, then how does a person
create and execute such a model, with many iterations, if the tools
available are only data?  If your system was asked to create a model of a
line given a Y intercept and a slope, how would it take a number as input,
calculate the result and display it using data only?


I hope I've answered these questions previously.


If the above set of questions is walking when you are at the crawling stage,
I understand if you can't answer them.  I am really not trying to pick on
you.

My design so far does exactly what you said above. (data in one level is
program in another level)  My language system is