Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-06 Thread Ben Goertzel
Richard wrote:

  Then, when we came back from the break, Ben Goertzel announced that the
 roundtable on symbol grounding was cancelled, to make room for some other
 discussion on a topic like the future of AGI, or some such.  I was
 outraged by this.  The subsequent discussion was a pathetic waste of time,
 during which we just listened to a bunch of people making vacuous
 speculations and jokes about artificial intelligence.

  In the end, I decided that the reason this happened was that when the
 workshop was being planned, the title was chosen in ignorance.  That, in
 fact, Ben never even intended to talk about the real issue of grounding
 symbols, but just needed a plausible-sounding theme-buzzword, and so he just
 intended the workshop to be about a meaningless concept like connecting AGI
 systems to the real world.

No, that is not the case.

What happened, as I recall, was that the conference schedule was
running late, and one of the speakers from the session on symbol
grounding had cancelled anyway, so it seemed apropos to skip from that
session to the next one -- since **something** had to be cancelled to
make the schedule fit.

That conference was a small workshop and was pretty loosely organized;
I decided to let the discussion and content flow according to the
general interests of the participants.  As it happened, the
participants as a whole were not gripped by the symbol grounding theme
and gravitated to other topics, which was OK with me.  Unfortunately
for Richard, this seems to have been his main interest.

Feedback on AGI-06 overall was overwhelmingly positive; in fact
Richard's is the only significantly negative report I've seen.

AGI-08 was more formally structured, as will be AGI-09; but these are
larger conferences, which require more structure to run at all
smoothly.

-- Ben G



Re: [agi] AGI-08 videos

2008-05-06 Thread Benjamin Johnston


Richard Loosemore said:

But instead of deep-foundation topics like these, what do we get? 
Mostly what we get is hacks.  People just want to dive right in and make 
quick assumptions about the answers to all of these issues, then they 
get hacking and build something - *anything* - to make it look as 
though they are getting somewhere.



I don't believe that this is accurate. Those who I speak to in this 
community (including the authors of papers at AGI-08 you claim have 
produced hacks) give me the clear impression that they spend every day 
considering deep-foundation issues. The systems that you see are not 
random hacks based on quick assumptions, but the by-products of people 
grappling with these deep-foundation issues.


This is certainly my experience. Every day I'm trying to grapple with 
deeper problems, but must admit that I'm unlikely to solve anything from 
my armchair. To create something that can be comprehended, critiqued and 
studied, I have to carefully reduce my ideas to a set of what may be 
almost laughable assumptions. However, once I've made such assumptions 
and implemented a system, I have a much better grasp on the problem at 
hand. From there, I can go back and explore ways of removing some of 
those assumptions, I can try to better model my ideas, and I can rethink 
the deeper issues with the knowledge I learnt from that experiment. When 
I publish work on those concrete systems, I admit that I am not directly 
discussing deeper issues. However, I believe that this method makes 
communication much more effective and clear (I've tried both ways and 
have experienced remarkably more success in conveying my ideas with 
sloppy examples than with excellent arguments) and I believe that most 
readers can look beyond the annoying but necessary assumptions and see 
the deeper ideas that I am attempting to express. As I work on the 
problem further, I'll create systems that are closer to my own ideas and 
may find ways of distilling my ideas into more formal treatments of the 
fundamental issues. I suspect that this experience is shared by most 
people here.


Ultimately, I think that any work in an area like AGI should be read 
with attention to the things left between the lines. In fact, I think 
that expecting researchers to focus only on the fundamentals first is 
counterproductive: not only will you end up with a whole lot of 
hypothesizing with no connection to reality or experience, but you'll 
also have a whole lot of talk and opinion with no understanding of each other.


-Ben



Re: [agi] reflectivity as consciousness

2008-05-06 Thread Vladimir Nesov
On Mon, May 5, 2008 at 11:15 PM, Anthony George [EMAIL PROTECTED] wrote:

 But, I want to ask the list whether or not there has been any trend
 or attempt to incorporate reflexivity into an AGI model.  By reflexivity I
 mean, basically, two computers that interact with each other but, perhaps,
 don't know that they are two separate computers.  This is just based on an
 intuitive image that I've had: that consciousness might be something like
 the tension between two viewpoints, and not either of the viewpoints
 themselves.  Two speakers oriented the same way with their soundwaves going
 out would be the two computers, where the soundwaves interact would be where
 the AGI would be.


Hi,

What problem are you trying to solve?

You seem to be talking about detection of novel situations. Let's say
a simple component becomes stable when it receives the same kind of
input again and again and no longer changes in response to that input.
It will then only be necessary (or possible) for this component to
change when it receives novel input, unlike the usual kind. The change
in the component can be viewed as adding a new fact to memory; in this
case memory only needs to be created when something changes.

When multiple components interact, that is, their responses influence
their inputs, it may at first be easy to create unusual inputs for
some of the components. But if the system gradually adapts, most of
the inputs circulating in such a system become usual for the components
that receive them, and there is less and less memory formation.
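
As a minimal sketch of this novelty gating (hypothetical names; Java
used only for illustration, not anyone's actual system): a component
keeps the inputs it has already absorbed and commits a new memory only
when an input lies far enough from all of them.

    import java.util.ArrayList;
    import java.util.List;

    /** A component that changes only on novel input (illustrative sketch). */
    class NoveltyGatedComponent {
        private final List<double[]> memories = new ArrayList<>();
        private final double threshold;  // how far input must be from anything seen

        NoveltyGatedComponent(double threshold) { this.threshold = threshold; }

        /** Returns true if the input was novel enough to form a new memory. */
        boolean perceive(double[] input) {
            for (double[] m : memories) {
                if (distance(m, input) < threshold) {
                    return false;          // usual input: component stays stable
                }
            }
            memories.add(input.clone());   // novel input: the change IS the memory
            return true;
        }

        private static double distance(double[] a, double[] b) {
            double sum = 0;
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                sum += d * d;
            }
            return Math.sqrt(sum);
        }
    }

Once such components feed one another, inputs that circulate repeatedly
fall under the threshold and memory formation tapers off, which is the
adaptation described above.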

Conscious activity is usually accompanied by new memories (at least in
the short term). If no new memories are formed, you don't know how the
activity proceeds; you can only observe the outcome (if any). You can
see this in actions you are used to: you don't remember thinking about
choosing specific movements when walking or driving. It happens
automatically, and the process of performing some quite complex
activities is closed to introspection. But whenever something unusual
happens, you notice it instantly.

A human mind is complex enough that many facts can clash
combinatorially, creating novel combinations to form memories about.
Thought-aware thinking happens when the situation is novel: when either
incompatible knowledge elements within the mind interact, or a fact
perceived in the world differs from what's in the mind.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] AGI-08 videos

2008-05-06 Thread Mark Waser

OK.  Let me give a system engineer's perspective . . . .

I believe that a lot of the current systems have done a lot of excellent, 
rigorous work both at the bottom-most and top-most levels of cognition.


The problem is, I believe, that these two levels are separated by two to 
five more levels and that no one is really even willing to acknowledge that 
these levels exist and are necessary and will require *a lot* of work and 
learning to implement.


We are not going to get to human-level intelligence with low-level 
mechanisms and a scheduler.  The low-level mechanisms are not going to 
miraculously figure out how to assemble knowledge into a usable, scalable 
foundation for more discovery and knowledge.


Most of the systems that are highly touted are actually merely generic 
discovery systems with PLANS to extend to a complete cognitive system but 
nothing more -- and most of them operate at a lower (i.e. data) level 
(rather than a knowledge level) than makes sense if you're truly trying to 
build a knowledge and cognitive system rather than a data-mining and 
discovery system.


Most of the rest (Cyc, etc.) operate at the highest conceptual level but are 
massive compendiums of bolt-ons with no internal consistency, rhyme, reason, 
or hope of being extendable by the system itself.


Almost all of the systems are starting out way too large rather than trying 
for a very small seed with rational mechanisms for growth and ways to 
cleanly add additional mechanisms.


Most of the systems have too much Not-Invented-Here syndrome and as a result 
are being leap-frogged by others who are intelligently using 
Commercial-Off-The-Shelf or Open-Source software.


Note:  Most of these complaints do *NOT* apply to Texai (except possibly the 
two-to-five-levels complaint -- though Texai is actually starting at what I 
would call one of the middle levels and looks like it has reasonable plans 
for branching out).


Richard doesn't express his arguments in easy-to-understand terms . . . . 
but his core belief -- that we need more engineering to solve deep problems 
and less hacking to quickly achieve low-hanging fruit (and then stall out 
afterwards) -- definitely needs to be given more currency.




[agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-06 Thread Richard Loosemore

Ben Goertzel wrote:


Feedback on AGI-06 overall was overwhelmingly positive; in fact
Richard's is the only significantly negative report I've seen.



Of course, if the conference was filled with low-quality presentations 
and low-quality comments from participants, then all of those people who 
gave presentations and who made comments would be BOUND to give an 
objective evaluation of the quality of the conference, wouldn't 
they?  ;-)


They wouldn't have any vested interest in saying What a success!, 
would they?


And if one person gave a poor evaluation of the conference based on 
specific points of fact, rather than just feel-good opinion (if, for 
example, that person noted a complete inability of the participants to 
talk about the main theme of the conference in a technically accurate 
way), that empirically-based observation would count for nothing, 
compared with the great feeling that everyone had about the meeting?





Richard Loosemore




Re: [agi] organising parallel processes

2008-05-06 Thread YKY (Yan King Yin)
On 5/4/08, Stephen Reed [EMAIL PROTECTED] wrote:
 As perhaps you know, I want to organize Texai as a vast multitude of
agents situated in a hierarchical control system,  grouped as possibly
redundant, load-sharing, agents within an agency sharing a specific
mission.  I have given some thought to the message content, and assuming
that my bootstrap English dialog effort actually works, then English
language as an Agent Control Language vocabulary becomes possible at the
more deliberative, higher levels of the hierarchy, when the duration of NL
parsing and generation is small compared to the overall task duration.

Let me offer my naive opinion:

The distributed agents would be owned by different people on the net, who
would want their agents to do *different* things for them, simultaneously.

We need to distinguish 2 situations:
A)  where all the agents cooperate to solve ONE problem
B)  where agents are solving their own problems

Your scheme would be useful for A.  But it seems that most AGI users would
want B.  Which problem do you intend to solve?

In case B, your scheme would add a lot of complications and whether it'd
be beneficial or not is rather unclear.

YKY



[agi] jamming with OpenCog / Novamente

2008-05-06 Thread YKY (Yan King Yin)
I'm wondering if it's possible to plug my learning algorithm into
OpenCog / Novamente?

The main incompatibilities stem from:

1.  predicate logic vs term logic
2.  graphical KB vs sentential KB

If there is a way to somehow bridge these gaps, it may be possible.

YKY



Re: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-06 Thread Stefan Pernar


Ben: I admire your patience.
Richard: congrats - you just made my ignore list - and that's a first

-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar



Re: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-06 Thread Richard Loosemore

Stefan Pernar wrote:
Ben: I admire your patience.
Richard: congrats - you just made my ignore list - and that's a first


Another person who cannot discuss the issues.

Another person who, instead, indulges in personal abuse.


This field will stand or die according to the number of people in it who 
can address issues, even when those issues are challenging and/or 
embarrassing.




Richard Loosemore



Re: [agi] AGI-08 videos

2008-05-06 Thread Lukasz Stafiniak
On Tue, May 6, 2008 at 4:07 PM, Mark Waser [EMAIL PROTECTED] wrote:

  Note:  Most of these complaints do *NOT* apply to Texai (except possibly
 the two-to-five-levels complaint -- though Texai is actually starting at
 what I would call one of the middle levels and looks like it has reasonable
 plans for branching out).

Texai has the added value of freshness, but the challenge Steve is
facing now is perhaps bigger than the ones he has conquered already:
to reflect on the system's state and to represent, learn and reason
about actions.



RE: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-06 Thread Murphy, Tommy
I didn't sign up to listen to you whine, but I certainly tried to cancel
my subscription because you whine.
Any ETA on when that'll actually go through, anyone?



Re: [agi] organising parallel processes

2008-05-06 Thread Stephen Reed
Hi YKY,

You said:

 The distributed agents would be owned by different people on the net, who
 would want their agents to do *different* things for them, simultaneously.

 We need to distinguish 2 situations:
 A)  where all the agents cooperate to solve ONE problem
 B)  where agents are solving their own problems

 Your scheme would be useful for A.  But it seems that most AGI users would
 want B.  Which problem do you intend to solve?

 In case B, your scheme would add a lot of complications, and whether it'd
 be beneficial or not is rather unclear.

I believe the opposite of what you say.  I hope that my following explanation 
will help our thinking converge.  Let me first emphasize that I plan a vast 
multitude of specialized agencies, in which each agency has a particular 
mission.  This pattern is adopted from human agencies.  For example, a human 
advertising agency has as its mission the preparation of advertising media for 
its customers.  Agents, who are governed by the agency, fulfill its mission by 
carrying out commanded tasks, responding to perceived events, reporting to 
superiors and controlling subordinates.

In the Texai organization I envisage, it would never be the case that all 
agents or agencies try to solve the same problem - not (A) above.  Rather, if a 
problem arose that is both urgent and sufficiently important, then the agency 
capable of solving that problem would commandeer the computing resources of 
other, lower priority, agencies.  This is analogous to a natural disaster 
affecting a human organization - in a flood, everyone changes roles to move 
materials above the water line.  In my plan for Texai, the agents could 
cooperate only to the extent that the problem to be solved could be 
decomposed to match the capabilities of particular agencies.  Agents lacking 
the capability to help out would be omitted from that job.
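
As a hedged illustration of that commandeering rule (all names
hypothetical, not Texai's actual code), an urgent agency might preempt
the computing resources of every lower-priority agency and release them
once the crisis passes:

    import java.util.ArrayList;
    import java.util.List;

    /** Illustrative sketch of priority-based commandeering. */
    class Agency {
        final String mission;
        final int priority;          // higher = more urgent/important mission
        Agency commandeeredBy;       // null while working its own mission

        Agency(String mission, int priority) {
            this.mission = mission;
            this.priority = priority;
        }
    }

    class AgencyGraph {
        private final List<Agency> agencies = new ArrayList<>();

        void add(Agency a) { agencies.add(a); }

        /** Redirect every lower-priority agency to the urgent mission,
         *  like a flood forcing everyone to change roles. */
        void commandeer(Agency urgent) {
            for (Agency a : agencies) {
                if (a != urgent && a.priority < urgent.priority) {
                    a.commandeeredBy = urgent;
                }
            }
        }

        /** Once the crisis passes, agencies resume their own missions. */
        void release(Agency urgent) {
            for (Agency a : agencies) {
                if (a.commandeeredBy == urgent) {
                    a.commandeeredBy = null;
                }
            }
        }
    }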

From the standpoint of AGI users, a Texai agent exists for each specialized 
role (i.e. interest, activity-type) that the user wants - (B) above.  My 
current thinking is to first provide an instant-message chatbot that 
intelligently acquires knowledge and skills.  The user would only require an 
instant messaging client (e.g. text messaging on a cellphone, Google Chat, 
Yahoo Chat, IRC, AIM, email, etc.).  The user would name one or more Texai 
chatbots and assign them roles (e.g. sports buddy, family buddy, work buddy, 
cooking buddy, financial buddy, product adviser buddy, matchmaker buddy, 
etc.).  The required servers would, at the kickoff, be supplied by me.  Users 
would be offered a higher level of benefit if they agree to download a Texai 
instance and to leave it running as much as possible, connected to the net.  
When not providing work for the owning user, it would add its agent-hosting 
server capability to the Texai cloud.

At the time that the Texai bootstrap English dialog system is available, I'll 
begin fleshing out the hundreds of agencies for which I hope to recruit human 
mentors.  Each agency I establish will have paragraphs of English text to 
describe its mission, including its relationship with commanding and 
subordinate agencies.  Mentors then will use the dialog system to teach each of 
their respective agencies the knowledge and skills it requires for its mission. 
Learned skills will be compiled into Java code for execution.  Hopefully this 
will advance into automatic programming from high level requirements, because 
programming is a skill which can be taught.  Furthermore, I plan agencies whose 
missions will accomplish recursive self-improvement (e.g.  propagate best 
practices from the discovering agency to all applicable agencies). 

While at Cycorp, I created two previous small versions of the agency graph, 
which of course were very much focused on the needs of Cycorp.  Now I am 
inspired by Wikipedia to work first on agencies that are required for basic 
infrastructure, and then to let mentors, governed by locally-applicable human 
law and by consensus, establish agencies as they see fit. 

When I get further along on these ideas, especially when I have an initial set 
of agencies and missions, I would very much like to vet that organization 
chart with the readers of this list before trying to deploy it.

Cheers.
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860




Re: [agi] AGI-08 videos

2008-05-06 Thread Stephen Reed
Hi Lukasz,

With regard to the Texai approach, I have subjected myself to these constraints:

 * to author the bootstrap portion of the system by myself
 * to write the least amount of code (e.g. not to write an ideal AI language
   first)
 * to reuse existing narrow AI solutions and infrastructure to the widest
   possible extent
 * as Turing suggested, to build a 'child' mechanism capable of being taught,
   and to subsequently train it to achieve 'adult' capability
 * to design a scalable architecture having a multitude of mentors and users
It now looks as though I am trying to solve at least the following AI-hard
problems simultaneously:

 * to communicate with humans using natural language
 * to learn by being taught
 * to achieve generally applicable commonsense behavior
 * to achieve automatic programming, e.g. programming from very high level
   specifications, using algorithmic and domain knowledge, plus real-time
   advice from human mentors

Although daunting, I believe small progress on this combination of problems
will feed into a virtuous circle of exponential improvement.


Cheers,
-Steve 

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860





Re: [agi] organising parallel processes

2008-05-06 Thread Matt Mahoney


We already have many examples where cooperation between selfish agents
results in solving problems that could not be solved individually.  AIs
would ultimately communicate faster and more effectively than humans,
resulting in a more efficient division of labor and better coordination
of efforts.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] organising parallel processes

2008-05-06 Thread YKY (Yan King Yin)
On 5/6/08, Stephen Reed [EMAIL PROTECTED] wrote:
 I believe the opposite of what you say.  I hope that my following
 explanation will help our thinking converge.  Let me first emphasize
 that I plan a vast multitude of specialized agencies, in which each
 agency has a particular mission.  This pattern is adopted from human
 agencies.  For example, a human advertising agency has as its mission
 the preparation of advertising media for its customers.  Agents, who
 are governed by the agency, fulfill its mission by carrying out
 commanded tasks, responding to perceived events, reporting to
 superiors and controlling subordinates.


If the agents have common sense, they can use their common sense to
broker capabilities among themselves, but that is begging the question
because we don't have commonsense AGI yet.

A more interesting possibility is whether we can spawn a large number
of very *weak* intelligent agents over the net, who don't have common
sense, and let commonsense emerge out of them.

It seems possible, but we'll need to design distributed algorithms
for reasoning, especially distributed deduction...
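
To make the idea concrete, here is an illustrative sketch (hypothetical
names; facts encoded as plain strings, rules as Horn clauses): each weak
agent forward-chains locally and shares newly derived facts with its
peers until nothing new is derived anywhere.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    /** A weak agent holding a few Horn rules over string-encoded facts. */
    class WeakAgent {
        /** A rule "p1, p2 => c". */
        record Rule(Set<String> premises, String conclusion) {}

        final Set<String> facts = new HashSet<>();
        final List<Rule> rules;

        WeakAgent(List<Rule> rules, Set<String> initialFacts) {
            this.rules = rules;
            this.facts.addAll(initialFacts);
        }

        /** One local forward-chaining step; returns the facts newly derived. */
        Set<String> deduce() {
            Set<String> derived = new HashSet<>();
            for (Rule r : rules) {
                if (facts.containsAll(r.premises()) && facts.add(r.conclusion())) {
                    derived.add(r.conclusion());
                }
            }
            return derived;
        }
    }

    class Network {
        /** Alternate local deduction and fact-sharing until a fixpoint. */
        static void run(List<WeakAgent> agents) {
            boolean progress = true;
            while (progress) {
                Set<String> news = new HashSet<>();
                for (WeakAgent a : agents) {
                    news.addAll(a.deduce());
                }
                progress = !news.isEmpty();
                for (WeakAgent a : agents) {
                    a.facts.addAll(news);   // a real network would gossip instead
                }
            }
        }
    }

No single agent needs all the rules or all the facts; whether anything
like common sense could emerge from such exchanges is, of course,
exactly the open question.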

YKY



Re: [agi] organising parallel processes

2008-05-06 Thread wannabe

Stephen Reed wrote:

 At the time that the Texai bootstrap English dialog system is
 available, I'll begin fleshing out the hundreds of agencies for
 which I hope to recruit human mentors.  Each agency I establish will
 have paragraphs of English text to describe its mission, including
 its relationship with commanding and subordinate agencies.  Mentors
 then will use the dialog system to teach each of their respective
 agencies the knowledge and skills it requires for its mission.
 Learned skills will be compiled into Java code for execution.
 Hopefully this will advance into automatic programming from high
 level requirements, because programming is a skill which can be
 taught.  Furthermore, I plan agencies whose missions will accomplish
 recursive self-improvement (e.g. propagate best practices from the
 discovering agency to all applicable agencies).


This jumped out at me because I just read an article where someone was  
talking about how we don't know how to program:

http://paulspontifications.blogspot.com/2008/05/under-appreciated-fact-we-dont-know-how.html

There is no real process for it.  We don't really know how it works.   
There are plenty of courses, and it seems like we teach people to do  
it, but it is a very mysterious thing.  A good portion of people just  
don't get it when they try to learn.  Anyway, it is a very hard  
problem, and I wouldn't be surprised if programming were an  
AI-complete problem, because it sure seems like programming always  
involves arbitrary amounts of domain knowledge, theories of mind, and  
who knows what all else.


But I have to admit I am a fan of compiled programs as part of a set  
of skills of an intelligent computer agent.  The idea showed up in an  
essay (OK, blog entry) I wrote recently.


Also, I want to thank Stephen for adding to this community.  He really  
stood out at AGI-08 as a level-headed, diligent creative force working  
toward AGI.


andi



Re: [agi] organising parallel processes

2008-05-06 Thread Matt Mahoney
--- [EMAIL PROTECTED] wrote:

http://paulspontifications.blogspot.com/2008/05/under-appreciated-fact-we-dont-know-how.html

Computer programming is an art, as Knuth observed.

I teach classes in C++, Java, and x86 assembler.  I can show my students
some simple drawings and show them how to hold a brush.  But I can't
make a Picasso out of them.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] organising parallel processes

2008-05-06 Thread Bob Mottram
The blog entry is amusing.  I started writing software at quite a young
age (about 10), and I always assumed that it was an art, rather like
writing a novel or a musical composition.  So when I grew older and
became employed to write programs, I was shocked early in my career to
find that some people consider programming to be an activity requiring
no creative input.



Re: [agi] organising parallel processes

2008-05-06 Thread Stephen Reed
Thanks Andi for the kind words.

My directly preceding post is about the combination of AI-hard problems that I 
am trying to solve. It hints that incrementally solving the bunch of them may 
be achievable, but that sufficiently solving one of them alone may not be.

I've given automatic programming a lot of thought over my rather long (i.e. 
40 years) experience programming computers.  While at Cycorp, I persisted 
abstract syntax trees for a Turing-complete agent control language into the 
Cyc knowledge base.  I had the idea then that Cyc could thus reason about 
programs and begin to author portions of them.  But the idea did not gain 
traction with our sponsors or with Cycorp management, so it languished.

My current thinking about automatic programming (a.k.a. program synthesis) is 
centered on the notion of program composition.  This activity is a skill that 
involves both algorithmic and domain knowledge.  I am now building an 
essential set of agent capabilities to compose Java classes, variables, 
methods, and statements.  These capabilities will be associated with 
preconditions, input/output bindings, and postconditions.  A capability 
matcher library will find candidate agent capabilities that match the given 
task preconditions and postconditions via subsumption reasoning.  I intend to 
hand-craft some of these compositions into executable Java programs to build 
use cases for the essential programming skills required.  Note that I am 
building a program composition facility rather than a task-solving 
interpreter.  What I hope to accomplish is a system that solves a problem by 
writing a program that, when compiled, performs the given task.  This will be 
much faster to execute repeatedly, because the overhead of program composition 
greatly diminishes once the tasks are well understood (e.g. task variations 
become program parameters, and thus there is no need to compose a new 
program).
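
As a hedged sketch of that matching step (hypothetical names, not the
actual Texai library; conditions approximated as sets of ground
assertions, with set containment standing in for subsumption):

    import java.util.List;
    import java.util.Set;

    /** An agent capability annotated with pre- and postconditions. */
    class Capability {
        final String name;
        final Set<String> preconditions;    // must hold before execution
        final Set<String> postconditions;   // guaranteed to hold afterwards

        Capability(String name, Set<String> pre, Set<String> post) {
            this.name = name;
            this.preconditions = pre;
            this.postconditions = post;
        }

        /** Matches when the task situation entails our preconditions and
         *  our effects cover the task goal. */
        boolean matches(Set<String> taskSituation, Set<String> taskGoal) {
            return taskSituation.containsAll(preconditions)
                && postconditions.containsAll(taskGoal);
        }
    }

    class CapabilityMatcher {
        /** Return the library capabilities applicable to the given task. */
        static List<Capability> candidates(List<Capability> library,
                                           Set<String> situation,
                                           Set<String> goal) {
            return library.stream()
                          .filter(c -> c.matches(situation, goal))
                          .toList();
        }
    }

Real subsumption reasoning over RDF-style formulas with variables would
replace the containsAll tests with unification, but the shape of the
search is the same.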

Earlier in the project I had hoped to postpone this phase until after the 
bootstrap English dialog system is completed.  But I see now that the general 
facility of skill acquisition will require precondition and postcondition 
vocabulary.  And because all skills will bottom out in the execution of a 
Java method, I made the decision to proceed now with defining these primitives 
as skills.  For representing pre- and postconditions I am using an elaborated 
version of RDF, based upon the formula-manipulating classes that I previously 
released for the Incremental Fluid Construction Grammar vocabulary.  I added 
IMPLIES, NOT, OR, and AND logical operators, plus variables.  Today I finished 
a class that performs canonicalization (i.e. transformation to conjunctive 
normal form), and hopefully tomorrow I will complete the capability 
subsumption matcher.  Interested readers can follow my source code commits 
here.
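
For illustration only (a sketch, not the actual Texai classes;
propositional case, variables omitted), a canonicalizer over those four
operators can eliminate IMPLIES at construction, push NOT down to the
atoms by De Morgan's laws, and then distribute OR over AND:

    /** Illustrative CNF canonicalizer over IMPLIES/NOT/OR/AND. */
    abstract class Formula {
        static Formula atom(String n) { return new Atom(n); }
        static Formula not(Formula f) { return new Not(f); }
        static Formula and(Formula a, Formula b) { return new And(a, b); }
        static Formula or(Formula a, Formula b) { return new Or(a, b); }
        // IMPLIES is eliminated immediately: (a => b) == (NOT a OR b)
        static Formula implies(Formula a, Formula b) { return or(not(a), b); }

        abstract Formula toNNF(boolean negated);   // push NOT down to atoms
        abstract Formula toCNF();                  // distribute OR over AND

        static class Atom extends Formula {
            final String name;
            Atom(String name) { this.name = name; }
            Formula toNNF(boolean neg) { return neg ? new Not(this) : this; }
            Formula toCNF() { return this; }
            public String toString() { return name; }
        }

        static class Not extends Formula {
            final Formula f;
            Not(Formula f) { this.f = f; }
            Formula toNNF(boolean neg) { return f.toNNF(!neg); }
            Formula toCNF() { return this; }   // in NNF, NOT wraps atoms only
            public String toString() { return "NOT " + f; }
        }

        static class And extends Formula {
            final Formula l, r;
            And(Formula l, Formula r) { this.l = l; this.r = r; }
            Formula toNNF(boolean neg) {
                return neg ? new Or(l.toNNF(true), r.toNNF(true))     // De Morgan
                           : new And(l.toNNF(false), r.toNNF(false));
            }
            Formula toCNF() { return new And(l.toCNF(), r.toCNF()); }
            public String toString() { return "(" + l + " AND " + r + ")"; }
        }

        static class Or extends Formula {
            final Formula l, r;
            Or(Formula l, Formula r) { this.l = l; this.r = r; }
            Formula toNNF(boolean neg) {
                return neg ? new And(l.toNNF(true), r.toNNF(true))    // De Morgan
                           : new Or(l.toNNF(false), r.toNNF(false));
            }
            Formula toCNF() {
                Formula a = l.toCNF(), b = r.toCNF();
                if (a instanceof And x) return new And(new Or(x.l, b).toCNF(),
                                                       new Or(x.r, b).toCNF());
                if (b instanceof And y) return new And(new Or(a, y.l).toCNF(),
                                                       new Or(a, y.r).toCNF());
                return new Or(a, b);
            }
            public String toString() { return "(" + l + " OR " + r + ")"; }
        }
    }

For example, Formula.implies(Formula.atom("p"), Formula.and(Formula.atom("q"),
Formula.atom("r"))).toNNF(false).toCNF() yields (NOT p OR q) AND (NOT p OR r).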


Having a dialog system simplifies automatic programming, because Texai will be 
able to ask its mentor for help when required and will not be expected to 
complete the job on its own.  I am resigned to the fact, however, that 
initially programming via English dialog will be much more tedious than 
directly performing the task myself.  Until the student exceeds the skill of 
the mentor, programming this way will be hard.

Cheers,
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860




Re: [agi] jamming with OpenCog / Novamente

2008-05-06 Thread Ben Goertzel
Predicate logic vs term logic won't be an issue for OpenCog, as the
AtomTable knowledge representation supports both (and many other)
formalisms.

I don't **think** the sentential KB will be a problem, because I
believe each of your sentences will be representable as an Implication
or Equivalence relationship in the AtomTable.  If you give me a
specific example of a sentence in your representation, I will tell you
how it could most straightforwardly be represented in the AtomTable
using the PLN-friendly node and link types.
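
For concreteness, a sentential rule such as "cats are animals" might
come out roughly like this in node-and-link form (a hedged illustration
only; the exact PLN-friendly node and link types are those defined by
the AtomTable itself):

    ImplicationLink
        EvaluationLink  PredicateNode cat     VariableNode $X
        EvaluationLink  PredicateNode animal  VariableNode $X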

thanks
Ben






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-06 Thread Steve Richfield
Kaj, Richard, et al,

On 5/5/08, Kaj Sotala [EMAIL PROTECTED] wrote:

   Drive 2: AIs will want to be rational
   This is basically just a special case of drive #1: rational agents
   accomplish their goals better than irrational ones, and attempts at
   self-improvement can be outright harmful if you're irrational in the
   way that you try to improve yourself. If you're trying to modify
   yourself to better achieve your goals, then you need to make clear to
   yourself what your goals are. The most effective method for this is to
   model your goals as a utility function and then modify yourself to
   better carry out the goals thus specified.
 
   Well, again, what exactly do you mean by rational?  There are many
  meanings of this term, ranging from generally sensible to strictly
  following a mathematical logic.
 
   Rational agents accomplish their goals better than irrational
 ones?  Can
  this be proved?  And with what assumptions?  Which goals are better
  accomplished  is the goal of being rational better accomplished by
  being rational?  Is the goal of generating a work of art that has
 true
  genuineness something that needs rationality?
 
   And if a system is trying to modify itself to better achieve its goals,
  what if it decides that just enjoying the subjective experience of life
 is
  good enough as a goal, and then realizes that it will not get more of
 that
  by becoming more rational?


This was somewhat hashed out in the 1950s by Herman Kahn of the RAND Corp,
who is credited with inventing MAD (Mutually Assured Destruction), built on
vengeance, etc.

Level 1: People are irrational, so a rational path may play on that
irrationality, and hence be irrational against an unemotional opponent.

Level 2: By appearing to be irrational you also appear to be
dangerous/violent, and hence there is POWER in apparent irrationality, most
especially if on a national and thermonuclear scale. Hence, a maximally
capable AGI may appear to be quite crazy to us all-too-human observers.

Story: I recently attended an SGI Buddhist meeting with a friend who was a
member there.  After listening to their discussions, I asked whether anyone
there (of ~30 people) had ever found themselves in a position of having to
kill or injure another person, as I have.  There were none, as such
experiences tend to change people's outlook on pacifism.  Then I mentioned
how Herman Kahn's MAD solution to avoiding an almost certain WW3 involved an
extremely non-Buddhist approach, gave a thumbnail account of the historical
situation, and asked if anyone there had a Buddhist-acceptable solution.  Not
only were no other solutions advanced, but they didn't even want to THINK
about such things!  These people would now be DEAD if not for Herman Kahn,
yet they weren't even willing to examine the situation that he found himself
in!

The ultimate power on earth: An angry 3-year-old with a loaded gun.

Hence, I come to quite the opposite conclusion - that AGIs will want to appear
to be IRrational, like the 3-year-old, taking bold steps that force
capitulation.

I have played tournament chess.  However, when faced with a REALLY GREAT
chess player (e.g. a national champion), as I have been on a couple of
occasions, they at first appear to play as novices, making unusual and
apparently stupid moves that I can't quite capitalize on, only to pull things
together later on and soundly beat me.  While retrospective analysis would
show them to be brilliant, that would not be my evaluation early in these
games.

Steve Richfield



Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-06 Thread Matt Mahoney
--- Steve Richfield [EMAIL PROTECTED] wrote:

 I have played tournament chess.  However, when faced with a REALLY GREAT
 chess player (e.g. a national champion), as I have been on a couple of
 occasions, they at first appear to play as novices, making unusual and
 apparently stupid moves that I can't quite capitalize on, only to pull
 things together later on and soundly beat me.  While retrospective analysis
 would show them to be brilliant, that would not be my evaluation early in
 these games.

As your example illustrates, a higher intelligence will appear to be
irrational, but you cannot conclude from this that irrationality
implies intelligence.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-06 Thread Stefan Pernar
On Wed, May 7, 2008 at 12:27 AM, Richard Loosemore [EMAIL PROTECTED]
wrote:

 Stefan Pernar wrote:

  On Tue, May 6, 2008 at 10:10 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
  DELETED
 
  Ben: I admire your patience.
  Richard: congrats - you just made my ignore list - and that's a first
 

 Another person who cannot discuss the issues.


Richard - after having spent time looking through your stuff, here is my
conclusion:

You postulate that achieving AGI requires solving a complex problem, and
that you do not see this being properly incorporated in current AGI
research.

As pointed out by others, this position puts you in the scruffies camp of
AI research (http://en.wikipedia.org/wiki/Neats_vs._scruffies).

What follows are wild speculations and grand pie-in-the-sky plans without
substance, with a letter to investors attached.  Oh, come on!

PS: obviously my ignore list sucks ;-)
-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar
