Re: [agi] small code small hardware

2007-03-29 Thread YKY (Yan King Yin)

Let's take a poll?

I believe that a minimal AGI core, *sans* KB content, may be around 100K
lines of code.

What are other people's estimates?

YKY

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] small code small hardware

2007-03-29 Thread Mark Waser
I'll go you one better . . . . I truly believe that the minimal AGI core, sans 
KB content, is 0 lines of code . . . . 

Just like C compilers are written in C, the AGI should be entirely written in 
its knowledge base (eventually) to the point that it can understand itself, 
rewrite itself, and recompile itself in its entirety.  The problem is 
bootstrapping to that point.

Personally, I find all of these wild-ass guesstimates and opinion polls quite 
humorous.  Given that we can't all even agree on what an AGI is, much less how 
to do it, how can we possibly think that we can accurately estimate its 
features?

Mark


Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 08:42:53PM +0800, YKY (Yan King Yin) wrote:

I believe that a minimal AGI core, sans KB content, may be around 100K
lines of code.

I don't know what 'KB' content is. But the kLoCs are irrelevant, because
the data is where it's at, and it's huge.
 
What are other people's estimates?

10^17 sites, 10^23 OPs/s total. The transformation
complexity might very well be 100 kLoC, or even 10 kLoC.

But that code is worthless without the magic data.

-- 
Eugen* Leitl <leitl> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] small code small hardware

2007-03-29 Thread Russell Wallace

On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:



Let's take a poll?

I believe that a minimal AGI core, *sans* KB content, may be around 100K
lines of code.

What are other people's estimates?



Sounds right to me. I'd put the framework (sans content) as roughly
comparable to a web browser, IDE or CAD program, for which 100 kloc seems
about the order of magnitude of size for a first version.



Re: [agi] small code small hardware

2007-03-29 Thread Jean-Paul Van Belle
I guess (50 to 100 modules) x (500 to 2500 locs) x fudge factor x
language factor
with fudge factor = 2 to 4 and language factor = 1 for eg Python; 5 for
eg C++
i.e. minimum 50 klocs (Python), which is what I wishfully think;
realistically probably closer to 5000 klocs C++.
That's of course for the prototype, which may or may not bootstrap.
however, the devil's in the data (you're on the money there, Mark) and
more importantly the architecture and algorithms.
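Jean-Paul's back-of-envelope formula can be written out directly. The sketch below just multiplies his stated ranges; the function name and the particular low/high picks are illustrative, not part of his post:

```python
# Jean-Paul's estimate: modules x LOC/module x fudge factor x language factor.
def loc_estimate(modules, loc_per_module, fudge, language_factor):
    return modules * loc_per_module * fudge * language_factor

# Wishful lower bound: 50 modules x 500 LOC x fudge 2 x Python (factor 1).
low = loc_estimate(50, 500, 2, 1)
# Pessimistic upper bound: 100 modules x 2500 LOC x fudge 4 x C++ (factor 5).
high = loc_estimate(100, 2500, 4, 5)
print(low, high)  # 50000 5000000
```

which reproduces both ends of his range: 50 klocs of Python up to 5000 klocs of C++.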

Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21



Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 09:16:09AM -0400, Mark Waser wrote:

I'll go you one better . . . . I truly believe that the minimal AGI
core, sans KB content, is 0 lines of code . . . .

In theory, a TOE (theory of everything) can be quite small. In theory, you
could have a low-level physical simulation that happens to be an intelligent
system. In practice, however... As they say, in theory, there is no
difference between practice and theory. In practice, there is.  
 
Just like C compilers are written in C, the AGI should be entirely
written in its knowledge base (eventually) to the point that it can

What is the knowledge base between your ears written in? 

understand itself, rewrite itself, and recompile itself in its

What makes you think the system can ever understand itself, whatever
that term means exactly? Evolution doesn't understand anything, but 
as an optimization process it produced us from prebiotic ursoup, which
is nothing to sneeze at.

entirety.  The problem is bootstrapping to that point.

Since nobody here knows, how about evolution? Empirically validated
is not good enough for you?
 
Personally, I find all of these wild-ass guesstimates and opinion
polls quite humorous.  Given that we can't all even agree on what an

It's okay as long as everybody agrees they're wild-ass guesstimates.

AGI is, much less how to do it, how can we possibly think that we can

I dunno about you, but I see a general intelligence (admittedly, not much
of an intelligence) every morning in the shaving mirror. As I said, you'll
know AGI when it hits the job market and the news.

accurately estimate its features?




Re: [agi] small code small hardware

2007-03-29 Thread Pei Wang

On 3/29/07, Mark Waser [EMAIL PROTECTED] wrote:


I'll go you one better . . . . I truly believe that the minimal AGI core,
sans KB content, is 0 lines of code . . . .

Just like C compilers are written in C, the AGI should be entirely written
in its knowledge base (eventually) to the point that it can understand
itself, rewrite itself, and recompile itself in its entirety.  The problem
is bootstrapping to that point.


I have to disagree. The following is adapted from my chapter in the
AGI collection 
(http://www.springer.com/west/home/generic/order?SGWID=4-40110-22-43950079-0):

*. Complete self-modification is an illusion. As Hofstadter put it, below
every tangled hierarchy lies an inviolate level [in GEB]. If we
allow a system to modify its meta-level knowledge, i.e., its inference rules
and control strategy, we need to give it (fixed) meta-meta-level knowledge
to specify how the modification happens. As flexible as the human mind
is, it cannot modify its own laws of thought.

*. Though high-level self-modification will give the system more flexibility, it
does not necessarily make the system more intelligent. Self-modification at
the meta-level is often dangerous, and it should be used only when the
same effect cannot be produced at the object-level. To assume the more
radical the changes can be, the more intelligent the system will be is
unfounded. It is easy to allow a system to modify its own source code,
but hard to do it right.

Even if you write a C compiler in C, or a Prolog interpreter in Prolog
(which is much easier), it cannot be used without something else that
understands at least a subset of the language.

Pei



Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 09:35:57AM -0400, Pei Wang wrote:

 I have to disagree. The following is adapted from my chapter in the
 AGI collection 
 (http://www.springer.com/west/home/generic/order?SGWID=4-40110-22-43950079-0):

I have to disagree with your disagreement. Provably optimal computational
substrates and representations can be optimized by co-evolution. This
process is open-ended. 
 
 *. Complete self-modification is an illusion. As Hofstadter put it, below
 every tangled hierarchy lies an inviolate level [in GEB]. If we
 allow a system to modify its meta-level knowledge, i.e., its inference rules
 and control strategy, we need to give it (fixed) meta-meta-level knowledge
 to specify how the modification happens. As flexible as the human mind

Stochastic optimization doesn't have any blinkers. Of course, it
takes a population, because most variants are fatal.

 is, it cannot modify its own laws of thought.
 
 *. Though high-level self-modification will give the system more flexibility, it
 does not necessarily make the system more intelligent. Self-modification at

If intelligence is info-processing capability, then any process that
maximizes the ops/g and ops/J will also optimize for intelligence.

 the meta-level is often dangerous, and it should be used only when the
 same effect cannot be produced in the object-level. To assume the more
 radical the changes can be, the more intelligent the system will be is
 unfounded. It is easy to allow a system to modify its own source code,
 but hard to do it right.

Yes, it took evolution a while before it learned to evolve. ALife hasn't
reached that first milestone yet.
 
 Even if you write a C compiler in C, or a Prolog interpreter in Prolog
 (which is much easier), it cannot be used without something else that
 understands at least a subset of the language.

The whole language metaphor in AI is a crock. It makes so many smart
people go chasing wild geese up blind alleys.




Re: [agi] small code small hardware

2007-03-29 Thread Russell Wallace

On 3/29/07, Pei Wang [EMAIL PROTECTED] wrote:


*. Though high-level self-modification will give the system more flexibility, it
does not necessarily make the system more intelligent. Self-modification at
the meta-level is often dangerous, and it should be used only when the
same effect cannot be produced at the object-level. To assume the more
radical the changes can be, the more intelligent the system will be is
unfounded. It is easy to allow a system to modify its own source code,
but hard to do it right.



Yep. Supporting data: Eurisko, which had largely unrestricted
self-modification ability and did a few interesting things with it, but
would rather quickly banjax itself and require human intervention to fix it.



Re: [agi] small code small hardware

2007-03-29 Thread YKY (Yan King Yin)

On 3/29/07, Jean-Paul Van Belle [EMAIL PROTECTED] wrote:

I guess (50 to 100 modules) x (500 to 2500 locs) x fudge factor x
language factor
with fudge factor = 2 to 4 and language factor = 1 for eg Python; 5
for eg C++

50-100 modules?  Sounds like you have a very unconventional architecture.


From what you say, Python sounds like a pretty good *procedural* language --
would you say it's the easiest way to build an AGI prototype?

YKY



Re: [agi] small code small hardware

2007-03-29 Thread Jean-Paul Van Belle
Re number of modules - ask any neuroscientist how many modules there are
in the brain... and see which you think you can do without. My approach
was to list important brain modules, delete those that I thought I could
do without, and add a very few that haven't been located but seem needed.
Some modules end up being split into smaller ones as you start delving
into implementation issues.
 
Re PYTHON - hey I thought we just *had* the language debate. FWIW in a
previous life I've coded in Fortran and various flavours of Basic.
Python gives a fast learning curve, high productivity, high readability
(important if you have gaps between programming time), it *is* OO but
also procedural/functional - I like that mesh -, self-modification, the
efficient data structures which I need, and lots of community support,
e.g. MontyLingua gives you a natural language parser for free. Low
performance is an issue but one could always inline C. So Python it is
for my first prototype. I don't recommend people change their current
language tho if they're happy with it. Still early days for me. 



Re: [agi] small code small hardware

2007-03-29 Thread Russell Wallace

On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


From what you say, Python sounds like a pretty good *procedural* language
-- would you say it's the easiest way to build an AGI prototype?



Remember this is for the framework (rather than content) we're talking
about, so a procedural language is appropriate. I've done a bit of Python,
it's nice and easy to use - but for the framework, whatever one is most
productive in is probably the best choice.



Re: [agi] small code small hardware

2007-03-29 Thread kevin . osborne

 Let's take a poll?
 I believe that a minimal AGI core, sans KB content, may be around 100K lines 
of code.
 What are other people's estimates?


from: 
http://web.archive.org/web/20060306104407/www.etla.org/cpan-sloccount-report.txt
Perl CPAN:
 15,000,000.

from: http://www.dwheeler.com/sloc/
GNU/Linux:
 30,000,000.

from: http://en.wikipedia.org/wiki/Source_lines_of_code
Windows Vista:
 50,000,000.

50M LOC to code an OS to interface with an AGI (i.e. us).

Thing is, we do all the smart stuff.

An OS without a user is for all intents and purposes a 'dumb terminal'.

from: http://faculty.washington.edu/chudler/facts.html
Average number of neurons in a human brain:
 100,000,000,000.
Number of dendrites, axons and synapses this equates to:
 too bloody much :-)

In *The 21st Century Brain*, neuroscientist Steve Rose states that the
current estimate of 'degrees of separation' between neurons in the
brain is 2-3.
Say 2.5^10e9 interconnects, which is a number too big for even a
crypto BigInt calculator, if not a number too big for computronium :-)
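A rough sanity check on these magnitudes: the synapses-per-neuron range below is my assumption (textbook figures of roughly 10^3 to 10^4 per neuron), not a number from the post, and "2.5^10e9" is read as 2.5 raised to the 10^10th power:

```python
import math

# ~10^11 neurons, per the figure quoted above; synapse range is assumed.
neurons = 100_000_000_000
synapses = [neurons * s for s in (1_000, 10_000)]
print(f"{synapses[0]:.0e} to {synapses[1]:.0e} synapses")  # 1e+14 to 1e+15

# 2.5^(10^10) has about four billion decimal digits -- too big to
# materialize, as claimed, so we only compute its digit count.
digits = 10e9 * math.log10(2.5)
print(f"~{digits:.1e} decimal digits")  # ~4.0e+09
```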

Now I know it's wrong to bunch all of the brain into lines of code;
there's obviously a lot of it which is simply data points for memory
etc. and a host of other 'non-processing' functions.

But that said, with the numbers involved, even if only a small
percentage of those interconnects provide processing ability, that's
still a ridiculously large number.

You could argue that a lot of all this is the same kind of functions
just operating in 'parallel' with a lot of 'redundancy'.

I'm not sure I buy that. Evolution is a miserly mistress. If thinking
could have been achieved with less, it would have been, and any
'extra' would have no means of selection.

The (also ridiculously large) amount of years involved in mammalian
brain evolution all led towards what we bobble around with us today.

I think there is an untold host of support functions necessary to take
a Von Neumann machine to a tipping-point|critical-mass where it can
truly think for itself, to even begin to equate to the generalised
abilities of an imbecile.

Not to be discouraging though. :-)



Re: [agi] small code small hardware

2007-03-29 Thread Jean-Paul Van Belle
IMHO
IF you can provide a learning environment similar in complexity to our
world
THEN maximum code size (zipped using Matt Mahoney's algorithm) < portion
of non-redundant DNA devoted to the brain
/IMHO
 
Some random thoughts.
Any RAM location can link to any other RAM location so there are more
interconnects.
The structure of RAM can be described very succinctly.
A CPU has 800 million transistors - a much more generous instruction
set than our brain.
 
Most likely we're *all* way off the mark ;-)



Re: [agi] small code small hardware

2007-03-29 Thread BillK

On 3/29/07, kevin osborne wrote:
snip

You could argue that a lot of all this is the same kind of functions
just operating in 'parallel' with a lot of 'redundancy'.

I'm not sure I buy that. Evolution is a miserly mistress. If thinking
could have been achieved with less, it would have been, and any
'extra' would have no means of selection.

The (also ridiculously large) amount of years involved in mammalian
brain evolution all led towards what we bobble around with us today.

I think there is an untold host of support functions necessary to take
a Von Neumann machine to a tipping-point|critical-mass where it can
truly think for itself. To even begin to equate to the generalised
abilities of an imbecile.



I think you have too high an opinion of Evolution.
Evolution is kludge piled upon kludge.
This is because evolution via natural selection cannot construct
traits from scratch. New traits must be modifications of previously
existing traits. This is called historical constraint.

There are many examples available in nature of bad design.

So it is not unlikely that a lot of the human brain processing is a
redundant hangover from earlier designs.  Of course, it is not a
trivial problem to decide which functions are not required to create
AGI.   :)

BillK



Re: [agi] small code small hardware

2007-03-29 Thread Mark Waser

*. Complete self-modification is an illusion. As Hofstadter put it, below
every tangled hierarchy lies an inviolate level [in GEB]. If we
allow a system to modify its meta-level knowledge, i.e., its inference rules
and control strategy, we need to give it (fixed) meta-meta-level knowledge
to specify how the modification happens. As flexible as the human mind
is, it cannot modify its own laws of thought.


I've always disagreed with Hofstadter's argument since it pre-supposes a 
single static system (in terms of available knowledge).  Your argument is 
exactly the same.  Why does the meta-meta-level knowledge have to be fixed? 
And why can't a system spawn a subsystem that is separate enough to change 
that inviolable level?


Also, your statement about the human mind is 100% specious and irrelevant, 
not to mention the fact that I don't find the human mind particularly 
flexible.


*. Though high-level self-modification will give the system more flexibility, it
does not necessarily make the system more intelligent. Self-modification at
the meta-level is often dangerous, and it should be used only when the
same effect cannot be produced at the object-level. To assume the more
radical the changes can be, the more intelligent the system will be is
unfounded. It is easy to allow a system to modify its own source code,
but hard to do it right.


I agree with this paragraph 100%.


Even if you write a C compiler in C, or a Prolog interpreter in Prolog
(which is much easier), it cannot be used without something else that
understands at least a subset of the language.


Hunh?  The AGI needs to understand its language.  Don't you understand and 
can't you explain your *logical* thought processes?  Can't you write a 
computer program to emulate any single given one (given sufficient time, 
etc.)?  I'm not sure where you're going with this . . . .




Re: [agi] small code small hardware

2007-03-29 Thread Mark Waser
 50-100 modules?  Sounds like you have a very unconventional architecture.

Depends upon what you call a module and whether you're only counting true core 
modules (and not counting any specializations, descendants, compositions, etc., 
etc., etc.).  

How many keywords do you have in any given programming language, and what can 
you do with that language (short answers: well under fifty, and *anything*)?

As I said before, how can you make or argue an estimate without even agreeing 
upon terms and baselines?


Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 04:46:59PM +0200, Jean-Paul Van Belle wrote:

Some random thoughts.
 
Any RAM location can link to any other RAM location so there are more
interconnects.

Not so fast. Memory bandwidth is very limited (~20 GByte/s current,
GDDR3/GPUs are much better, agreed), and the access
pattern is not flat. Predictable and local accesses are preferred,
whereas worst case can be as low as 5% of advertised peak.

The gap between CPU speed growth and memory bandwidth growth
is linear on a semi-log plot, too.

However, the limited fan-out factors are not a problem with 
active media and even simple packet-switched signalling mesh.
Embedded DRAM, wide bus ALU (with in-register parallelism)
meshed up with a packet-switched signalling fabric is the bee's
knees -- but you can't buy these yet.
 
The structure of RAM can be described very succinctly.

RAM alone doesn't compute. Try hardware cellular automata (CAs). These are
pretty regular, too, and actually pack a lot of punch, especially in 3d.
(In fact, the best possible classical computational substrate is
a molecular-cell CA).
 
A CPU has 800 million transistors - a much more generous instruction
set than our brain.

I have absolutely no idea what you mean by this. I'm hazarding
that you yourself don't, either.
 



Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 08:40:02AM -0700, David Clark wrote:

I would like to know what computer executes data without code.  None
that I have used since 1976 so please educate me!

The distinction is a bit arbitrary. Machine instructions are nothing
but data to the CPU.

But the lack of distinction between code and data in biological
tissue processing is significant. Such systems are best seen as
state, and their evolution (the state space variety) as iterative
transformation on that state.

Considering the memory bottleneck, you don't get a lot of refreshes/s
on a typical 10^9 word node. With current technology you get about 10
MBytes/node if you want to match the refresh rate of neuronal circuitry,
which is not a lot of state per node, so you need an awful lot of nodes.
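The arithmetic behind that figure can be sketched as follows. The ~20 GB/s bandwidth number appears earlier in the thread; the ~2 kHz update rate is my assumption for "the refresh rate of neuronal circuitry" (sub-millisecond membrane dynamics), not a number Eugen states:

```python
# Back-of-envelope: how much state can one node sweep over per update,
# given its memory bandwidth and a target refresh rate?
bandwidth = 20e9       # bytes/s of memory bandwidth per node (from the thread)
refresh_hz = 2_000     # full-state updates per second (my assumption)
state_bytes = bandwidth / refresh_hz
print(f"~{state_bytes / 1e6:.0f} MBytes of state per node")  # ~10 MBytes
```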
 
Even though some state designs can put logic into data instead of
program code, and even though program code is stored as data, they
aren't the same.

The distinction between storage and processing, between code and
data is arbitrary. It's an earmark of a particular technology, and a
rather pitiful technology, which goes back directly to the Jacquard loom.
We're stuck in a bad optimum for the time being, but luckily people have
started running into enough limitations (the recent multicore mania is a
symptom) that they're willing to abandon the conventional approach,
because it no longer offers enough ROI, especially long-term.

To estimate given insufficient knowledge is problematic; to estimate
given NO knowledge produces useless conjectures.

Yes. This is why I stick to what biology can do in a given volume, because
it's the only working instance we can analyze.




Re: [agi] small code small hardware

2007-03-29 Thread David Clark
Some of the points you make below might be correct about self modification
but *absolutely no* modification of code or new code solutions can be had if
your AGI doesn't contain a programming language.  An AGI with the tools to
create programs (or change existing ones) would surely have more options to
create solutions than one that doesn't.

Having the ability to modify code at the lower or higher meta levels doesn't
mean that it has to.

-- David Clark




Re: [agi] small code small hardware

2007-03-29 Thread David Clark
Even if your estimate of 65 different brain modules is correct, why couldn't an 
AGI combine any of these into bigger modules, or create any number of modules 
to accomplish what one module does in a human?  My point is, I see no connection 
between the number of modules needed in an AGI and the number in a human.  On 
top of that, some of these human modules might take orders of magnitude more 
code in an AGI than another module.  (Not all 65 human modules are of equal 
complexity, or would be if coded in an AGI.)

I see no rational basis for estimating the size of the code required to 
create an AGI from what exists in our brains.

-- David Clark
  - Original Message - 
  From: Jean-Paul Van Belle 
  To: agi@v2.listbox.com 
  Sent: Thursday, March 29, 2007 8:24 AM
  Subject: Re: [agi] small code small hardware


  True - many definitions of modules  ;-)
  My definition: unique functionality - as usually reflected in a different 
type of data being manipulated (i.e. different input and/or output types). I 
cannot reduce the number of different functional modules below 65. Many modules 
embed more than one function and all inherit and/or specialize general methods.



Re: [agi] small code small hardware

2007-03-29 Thread David Clark
- Original Message - 
From: Eugen Leitl [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 29, 2007 8:55 AM
Subject: Re: [agi] small code small hardware


 On Thu, Mar 29, 2007 at 08:40:02AM -0700, David Clark wrote:

 I would like to know what computer executes data without code.  None
 that I have used since 1976 so please educate me!

 The distinction is a bit arbitrary. Machine instructions are nothing
 but data to the CPU.

I said that too.

 But the lack of distinction between code and data in biological
 tissue processing is significant. Such systems are best seen as
 state, and their evolution (the state space variety) as iterative
 transformation on that state.

Can you program a CPU made of biological material?  I program using normal
silicon based computers, the last I looked!

 Considering the memory bottleneck, you don't get a lot of refreshes/s
 on a typical 10^9 word node. With current technology 10 MBytes/node in
order
 to match the refresh rate of neuronal circuitry, which is not a lot
 of state/node, so you need an awful lot of nodes.

I think the quality of algorithms matters more than quantity.  On this we
can just agree to disagree.  I don't relate any computer cycles or memory
speed to what humans can do.  I program computers using normal machine code.
If I had a different tool to accomplish your vision, then I might see your
algorithm in a different light.

 Even though some state designs can put logic into data instead of
 program code, and even though program code is stored as data, they
 aren't the same.

 The distinction between storage and processing, between code and
 data is arbitrary. It's an earmark of a particular technology, and a
 rather pitiful technology, which goes back directly to the Jaquard loom.

The difference between data and code may be arbitrary but you have to admit
that they aren't the same on modern day computers that we are all using.
Pitiful compared to what?  My first computer had an 8080 CPU and it was
state-of-the-art and great at the time!

 We're stuck in a bad optimum for time being, but luckily people have
 started running into enough limitations (recent multicore mania is a
 symptom) so they're willing to abandon the conventional approach,
 because it no longer offers enough ROI, especially long-term.

A person can only work with the tools they have.  Better or different tools
can create an environment that makes failed algorithms from the past work.
I sadly don't possess such hardware and if you don't either then you should
create solutions based on the tools you have.

 To estimate given, insufficient knowledge is problematic, to estimate
 given NO knowledge produces useless conjectures.

 Yes. This is why I stick to what biology can do in a given volume, because
 it's the only working instance we can analyze.

Biology is the only concrete example we can study BUT our tools are not the
tools of biology.  We need to work with computer techniques if our solution
is to be had on that computer.  Other direct analogies to biology are nice
but not necessarily directly helpful.

-- David Clark




Re: [agi] small code small hardware

2007-03-29 Thread Pei Wang

Well, once again we need to distinguish two different levels of
language. In my NARS, the system's knowledge/beliefs are represented
in a language called Narsese, which has the ability to describe a
sequence of system operations. In that sense, the system can create
and modify its programs by which given tasks are processed. On the
other level, all the Narsese sentences are treated as data by the
system's implementation language, Java, whose code the system cannot
modify. In theory, Narsese can be extended to include all Java
functionality (though I don't think it will be necessary), but even
after that, the system still doesn't/cannot/shouldn't modify its own
source code.
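Pei's two-level arrangement can be illustrated with a toy sketch (Python is used here only for brevity; the class, method, and rule names are invented for illustration and are not NARS code): the inference machinery is fixed at the meta-level, while the object-level sentences it operates on are plain data the system may add, remove, or rewrite freely.

```python
# Toy illustration (not NARS): a fixed meta-level interpreter over a
# modifiable object-level knowledge base.
class FixedInterpreter:
    """Meta-level machinery; the system itself cannot modify this code."""

    def __init__(self):
        self.kb = []  # object-level sentences: plain data at this level

    def assert_sentence(self, premise, conclusion):
        # Object-level knowledge: "if premise then conclusion".
        self.kb.append((premise, conclusion))

    def derive(self, fact):
        # One fixed inference rule (a crude modus ponens closure); the
        # system can change its kb freely, but never this control strategy.
        derived = {fact}
        changed = True
        while changed:
            changed = False
            for premise, conclusion in self.kb:
                if premise in derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

system = FixedInterpreter()
system.assert_sentence("rain", "wet")
system.assert_sentence("wet", "slippery")
print(sorted(system.derive("rain")))
```

Everything the system "learns" lives in `kb`; `derive` plays the role of the inviolate level that Hofstadter's remark points at.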

If what you are after is just flexibility in behavior, I think there
are much better ways to achieve it than self-modifying source-code.

Pei

On 3/29/07, David Clark [EMAIL PROTECTED] wrote:

Some of the points you make below might be correct about self modification
but *absolutely no* modification of code or new code solutions can be had if
your AGI doesn't contain a programming language.  An AGI with the tools to
create programs (or change existing ones) would surely have more options to
create solutions than one that doesn't.

Having the ability to modify code at the lower or higher meta levels doesn't
mean that it has to.

-- David Clark


- Original Message -
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 29, 2007 6:35 AM
Subject: Re: [agi] small code small hardware


 On 3/29/07, Mark Waser [EMAIL PROTECTED] wrote:
 
  I'll go you one better . . . . I truly believe that the minimal AGI
core,
  sans KB content, is 0 lines of code . . . .
 
  Just like C compilers are written in C, the AGI should be entirely
written
  in its knowledge base (eventually) to the point that it can understand
  itself, rewrite itself, and recompile itself in its entirety.  The
problem
  is bootstrapping to that point.

 I have to disagree. The following is adapted from my chapter in the
 AGI collection
(http://www.springer.com/west/home/generic/order?SGWID=4-40110-22-43950079-0
):

 *. Complete self-modifying is an illusion. As Hofstadter put it, below
 every tangled hierarchy lies an inviolate level [in GEB]. If we
 allow a system to modify its meta-level knowledge, i.e., its inference
rules
 and control strategy, we need to give it (fixed) meta-meta-level knowledge
 to specify how the modification happens. As flexible as the human mind
 is, it cannot modify its own laws of thought.

 *. Though high-level self-modifying will give the system more flexibility,
it
 does not necessarily make the system more intelligent. Self-modifying at
 the meta-level is often dangerous, and it should be used only when the
 same effect cannot be produced in the object-level. To assume the more
 radical the changes can be, the more intelligent the system will be is
 unfounded. It is easy to allow a system to modify its own source code,
 but hard to do it right.

 Even if you write a C compiler in C, or a Prolog interpreter in Prolog
 (which is much easier), it cannot be used without something else that
 understands at least a subset of the language.

 Pei








Re: [agi] small code small hardware

2007-03-29 Thread David Clark

- Original Message - 
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 29, 2007 10:26 AM
Subject: Re: [agi] small code small hardware


 Well, once again we need to distinguish two different levels of
 language. In my NARS, the system's knowledge/beliefs are represented
 in a language called Narsese, which has the ability to describe a
 sequence of system operations. In that sense, the system can create
 and modify its programs by which given tasks are processed.

This is as good a definition as any of what a language inside an AGI is.  How
efficient it is, and how much of your *system* that language can access, are
the only other questions I have.

 On the other level, all the Narsese sentences are treated as data by the
 system's implementation language, Java, whose code the system cannot
 modify. In theory, Narsese can be extended to include all Java
 functionality (though I don't think it will be necessary), but even
 after that, the system still doesn't/cannot/shouldn't modify its own
 source code.

If the code in Java encodes algorithms that are part of your AGI design,
then your internal language can't access all your Java functionality unless
you made it explicitly that way.  If your whole AGI were coded in your
internal language, then I wouldn't have the same criticism as to its
flexibility.  If you code your AGI algorithms in Java and then call those
programs from your internal language, what happens when you want to enhance
or add to the algorithms written in Java?  How do you guarantee that all
algorithms needed to power the AGI will be present in any single copy of
your Java program?

 If what you are after is just flexibility in behavior, I think there
 are much better ways to achieve it than self-modifying source-code.

This isn't an either/or.  Solutions can be coded in programs and/or data.
We have no disagreement on that.  I'm only saying that having *both*
abilities will always be better than just being able to change the data
only.  If you can make a program instead of just using data with a program
that already exists, you will always have more flexibility than if this
option wasn't open to you at all.

If you disagree, please explain why.  It seems quite obvious to me and if I
am mistaken, I would appreciate the reasons so I can adjust my thinking.

-- David Clark




Re: [agi] small code small hardware

2007-03-29 Thread David Clark

- Original Message - 
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 29, 2007 1:13 PM
Subject: Re: [agi] small code small hardware


 As I said before, I don't think it is a good idea to allow that
 flexibility. If all the desired changed can be made in the content
 language, why bother to modify the Java code?

Does that mean that 1 algorithm (or a small number of algorithms programmed
in Java) is all that it will take to make an AGI?  Will the AGI at some
point not need any modifications to its Java code to continue to get
smarter and solve new problems?  Is flexibility of all kinds bad or just
flexibility that has to do with an AGI producing code?

 I'll need to redesign the system in that case. I know it sounds less
 exciting than the system will redesign itself, but at least for the
 near future, the latter path will cause more troubles than successes.

Why do you believe this?  I am not asking you to change/redesign your system
but just explain the reasons why more choice (in problem solving) is bad.
If an AGI *can* make/change programs, why would it have to use this
facility to redesign parts of itself that might cause a problem?

 I cannot guarantee that. What I'm doing is to add in the algorithms I
 think is necessary, and see what will happen.

 Can you guarantee a self-modifying system always makes the right changes?

I can't *guarantee* that I would make the *right changes* if I were working
on your source code!  Code is rarely bug-free, but that doesn't mean some
coding ability isn't useful for an AGI, does it?  Changes could be
confined to areas that don't affect its goals, or to test areas.  I can see
using code to solve problems that would be difficult or impossible using
data only.  I don't think that constitutes changes to *core* areas, although
it still means the AGI can change itself.

 The key is not program vs. data, but data in one level is program
 in another level. I fully agree with you that an AGI should be able
 to generate and modify algorithms, but that doesn't necessarily mean the
 source code.

This implies that you believe that some algorithms are source code worthy
and others can be made/modified by the AGI.  Is this correct?  If so, will
the efficiency of the AGI algorithms be substantially less than the ones
programmed by humans in Java?  Can you agree that any AGI must be able to
create and use a model to predict something?  This condition isn't the only
definition of an AGI, by any means, but would you say an AGI must have that
kind of modeling capability?  If yes, then how does a person create and
execute that model with many iterations if the tools available are only
data?  If your system was asked to create a model of a line given a Y
intercept and a slope, how would it take a number as input, calculate the
result and display it using data only?
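For what it's worth, the line example can be sketched either way. Below is a minimal data-only version (Python for brevity; the tuple encoding and the function name `eval_model` are invented for illustration): the model y = slope*x + intercept is held entirely as data and run by a small fixed evaluator, which is roughly the alternative Pei is suggesting.

```python
# Hypothetical sketch: a model held purely as data (nested tuples),
# evaluated by a small fixed interpreter instead of generated code.
def eval_model(model, env):
    """Recursively evaluate a model expression against variable bindings."""
    op = model[0]
    if op == "const":
        return model[1]
    if op == "var":
        return env[model[1]]
    if op == "add":
        return eval_model(model[1], env) + eval_model(model[2], env)
    if op == "mul":
        return eval_model(model[1], env) * eval_model(model[2], env)
    raise ValueError("unknown op: %r" % (op,))

# y = slope * x + intercept, with slope 2 and intercept 1, as pure data:
line = ("add", ("mul", ("const", 2.0), ("var", "x")), ("const", 1.0))
for x in range(4):
    print(x, eval_model(line, {"x": x}))
```

Whether this counts as "data only" or as a program in another notation is, of course, exactly the point under dispute.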

If the above set of questions is walking when you are at the crawling stage,
I understand if you can't answer them.  I am really not trying to pick on
you.

My design so far does exactly what you said above. (data in one level is
program in another level)  My language system is programmed in C++ and
can't change itself at all.  No AGI code is written in C++, however.  The
AGI will be written only in the language created by the C++, so that it can
change/create its programs.  My AGI programs will be considered data from
the C++ programs' point of view.  The difference is that my whole AGI
program will be coded in a totally changeable, very high-speed language as
opposed to a high-speed, human-created one.

All errors in this internal language are totally trappable (unlike in C++),
so that the AGI could actually make programming mistakes without affecting
normal data or concurrent operation.
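The error-trapping property David describes might look roughly like this (a hedged sketch in Python rather than his C++ system; `run_internal` and the callable representation are invented for illustration): every failure in an internal-language program is caught at the host level, so a buggy program yields an error value instead of crashing the host.

```python
# Hypothetical sketch of fully trappable errors: internal-language
# programs (modeled here as plain callables) run under a host wrapper
# that converts any failure into an ordinary result value.
def run_internal(program, arg):
    """Run an internal-language program; never let an error escape."""
    try:
        return ("ok", program(arg))
    except Exception as exc:  # every error is trapped at the host level
        return ("error", type(exc).__name__)

good = lambda x: x * 2
buggy = lambda x: x / 0  # the kind of mistake a self-coding AGI might make

print(run_internal(good, 21))
print(run_internal(buggy, 21))
```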

I am sure you are very busy so don't feel you must respond.  If you have the
time, however, your answers might help me a great deal.

-- David Clark




Re: [agi] small code small hardware

2007-03-29 Thread Pei Wang

On 3/29/07, David Clark [EMAIL PROTECTED] wrote:


 As I said before, I don't think it is a good idea to allow that
 flexibility. If all the desired changed can be made in the content
 language, why bother to modify the Java code?

Does that mean that 1 algorithm (or a small number of algorithms programmed
in Java) is all that it will take to make an AGI?


Of course, 1 algorithm is not enough. Whether it is a small number
depends on what it is compared to.


Will the AGI at some
point not need any modifications to its Java code to continue to get
smarter and solve new problems?


I guess that will never happen --- the AGI will need modification for a long
time to come. I just think it is better for it to be
modified by the designer than by the system itself.


Is flexibility of all kinds bad or just
flexibility that has to do with an AGI producing code?


AGI needs flexibility, but flexibility alone is not enough for
intelligence. Especially, unlimited flexibility is not a good thing.


 I'll need to redesign the system in that case. I know it sounds less
 exciting than the system will redesign itself, but at least for the
 near future, the latter path will cause more troubles than successes.

Why do you believe this?  I am not asking you to change/redesign your system
but just explain the reasons why more choice (in problem solving) is bad.


The flexibility in intelligence doesn't mean everything is
changeable. Instead, all changes should be adaptive, in the sense
that problem solving should be carried out according to the system's
experience. This will rule out the possibilities that are not
supported by the system's experience.


If an AGI *can* make/change programs, why would it have to use this
facility to redesign parts of itself that might cause a problem?


Because there is no guarantee that this change will actually make
things better in the long run. Intelligent systems are actually quite
conservative with respect to radical changes. To change its beliefs about
the environment is one thing (which is relatively mild), but to change
how it changes beliefs may destroy the system's coherence.


 Can you guarantee a self-modifying system always makes the right changes?

I can't *guarantee* that I would make the *right changes* if I was working
on your source code!  Code is rarely bug free but that doesn't mean that
some coding ability isn't useful for an AGI is it?


Again, some coding ability is not only useful, but also necessary
for an AGI. Our difference is not here, but that I use two languages,
one for object-level knowledge, which is fully modifiable by the
system, and the other for meta-level knowledge, which is modifiable by
the human designer only (you or me), but not the system itself; on the
other hand, you assume a single language for both purposes, and want
to to be fully modifiable by the system. Though your solution is
technically possible, I don't do it your way because these two
languages have very different features, and it is more manageable in
the near future.


Changes could be
confined to areas that don't affect it's goals or in test areas.


This is possible for object-level knowledge/skills, but not for
meta-level knowledge/skills, since the latter applies to all areas.


I can see
using code to solve problems that would be difficult or impossible using
data only.  I don't think that constitutes changes to *core* areas although
it still means the AGI can change itself.


Again, it is not code vs. data, but which type of code. All
object-level code can be changed by the system, as you suggested, but
it is not in the source code of the system in the usual sense.


 The key is not program vs. data, but data in one level is program
 in another level. I fully agree with you that an AGI should be able
 to generate and modify algorithms, but that doesn't necessarily mean the
 source code.

This implies that you believe that some algorithms are source code worthy
and others can be made/modified by the AGI.  Is this correct?  If so, will
the efficiency of the AGI algorithms be substantially less than the ones
programmed by humans in Java?  Can you agree that any AGI must be able to
create and use a model to predict something?  This condition isn't the only
definition of an AGI, by any means, but would you say an AGI must have that
kind of modeling capability?  If yes, then how does a person create and
execute that model with many iterations if the tools available are only
data?  If your system was asked to create a model of a line given a Y
intercept and a slope, how would it take a number as input, calculate the
result and display it using data only?


I hope I've answered these questions previously.


If the above set of questions is walking when you are at the crawling stage,
I understand if you can't answer them.  I am really not trying to pick on
you.

My design so far does exactly what you said above. (data in one level is
program in another level)  My language system is 

Re: [agi] small code small hardware

2007-03-28 Thread Jean-Paul Van Belle
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21
 
 kevin.osborne [EMAIL PROTECTED] 2007/03/28 15:57 
 as a techie: scepticism. I think the 'small code' and 'small
hardware'
 people are kidding themselves. 
Kevin, you're most probably right there. But remember that we small-code
people *have* to have this belief in order to justify ourselves
working as individuals / tiny teams, often during spare time and snatched
moments. As a small-code person I think the chance of a small-code
project achieving AGI is probably 1% (still probably an optimistic
estimate) of that of a larger, coordinated, well-funded and focussed
research group. But some of us are loners, like it that way, keep
dreaming and thinking away. Some of us have also seen how some really
innovative ideas tend to get lost in larger groups due to the
normalisation/group pressure. And we take heart in the fact that many of
the big advances in history (i.e. the big ideas) were typically produced
by single individuals or tiny teams. Not so sure about the small
hardware bit. Singularity software will require massive distributed
hardware IMHO but prototypes should run fine on tomorrow's PCs. When I
get technical enough, I'll plan my nebulous design/architecture around
~2012 hardware: i.e. a couple of 64-processor 256GB RAM machines - gives
me a realistic time horizon and something concrete.
 
as a person: nihilism  the human condition. crime, drugs,
debauchery.
self-destructive and life-endangering behaviour; rejection of social
norms. the world as I know it is a rather petty, woeful place [...]
 
hey i liked that bit ;-) most of the time i think the world is a great
place tho. But that's probably because I'm living in
paradise^H^H^H^H^H^H^H^H^HCape Town ;-)
 
Jean-Paul



Re: [agi] small code small hardware

2007-03-28 Thread A. T. Murray
Jean-Paul Van Belle responded to Kevin Osborne:
 as a techie: scepticism. I think the 'small code'
 and 'small hardware' people are kidding themselves. 
 Kevin, you're most probably right there.
 But remember that we small-code people *have* to
 have this belief in order to justify ourselves
 working as individuals / tiny teams, often during
 spare time and snatched moments. As a small-code
 person I think the chance of a small-code
 project achieving AGI is probably 1%
 (still probably an optimistic estimate) of
 that of a larger, coordinated, well-funded and
 focussed research group. But some of us are loners,
 like it that way, keep dreaming and thinking away. 

Right on, Bro! (mon frere). Here is my small code, small
hardware work of today:

Today we gear up to do our first Mind.Forth programming since 
the 18jun06C.F version that has been on the Web since 18 June 2006. 
Back then, we switched to coding the JavaScript AI Mind that had 
not been updated since two years earlier, in 2004. Initially 
we worked on the timing problems of the main JSAI aLife loop, 
and then we worked on bringing the JSAI up to par with Mind.Forth AI. 
We were especially concerned with porting the Mind.Forth dynamic 
tutorial mode into the JavaScript AI, which had previously only 
a rotating tutorial message display and now has both the static 
but rotating message display and the impressively dynamic display. 

After coding the dynamic JSAI tutorial, we set about fixing bugs 
that had long been hidden in the JavaScript AI code, and were 
probably hidden also in the Mind.Forth code. At the same time, 
we were trying hard to implement slosh-over in the JavaScript AI, 
which we finally achieved in the 20mar07A.html version of the JSAI. 
Afterwards we made plans to further improve the JSAI before moving on 
to resume coding Mind.Forth, but yesterday we realized that the time 
to update Mind.Forth is now, when the JSAI has taught us what to do. 
It would be too risky and too imprecise to try to perfect the JSAI 
in advance of upgrading the Forth AI. Something could happen that 
might long or forever prevent us from getting Mind.Forth to work right, 
and it would be hard to know precisely when to stop improving the JSAI. 
The success of slosh-over in the JSAI is precisely when to code in Forth. 
We may find that we once again get far advanced in Forth, or we may 
be able to code Mind.Forth and the JSAI simultaneously now that since 
20.MAR.2007 we finally know what we are doing in either language. 

Today we are running out of time and we have only just begun. 
First we spent precious time compiling a C:\MAR01Y07\JSAI\chglog01.txt 
file of Changelog entries of the JavaScript Mind.html AI program. 
We need such a summary of our JSAI work so that we will know what 
we need to code in Forth. We may not have to repeat the exact order 
of the JavaScript changes, since the languages are different and since 
we may be able to take short-cuts and achieve slosh-over quicker in Forth. 

Next we spent quite some time updating our C:\MAR01Y07\JSAI\mfpjtemp.html 
file today so that it will be easier to do Mind.Forth coding from now on. 
We were updating the template file and this fp070328 page simultaneously 
as we saw exactly what we needed to change to make our work easier. 
Upshot: We ran out of time for now, and we need to monitor our Web situation.

Arthur
--
http://mind.sourceforge.net/Mind.html 
P.S. Ben Goertzel runs a big team but he has to clean the
turtle tank and do other jobs in his embourgeoisement.



Re: [agi] small code small hardware

2007-03-28 Thread Russell Wallace

On 3/28/07, Jean-Paul Van Belle [EMAIL PROTECTED] wrote:


Kevin, you're most probably right there. But remember that us small code
people *have* to have this belief in order to justify ourselves working as
individuals / tiny teams often during spare time and snatched moments.



A very good point. But I think there's a way to reconcile the belief we need
with realism: small framework, big content. That is, I think if the right
framework were created by an individual or small team, it would then be
possible to get a community effort started on providing the required large
volume of content.
