Richard wrote:
Then, when we came back from the break, Ben Goertzel announced that the
roundtable on symbol grounding was cancelled, to make room for some other
discussion on a topic like the future of AGI, or some such. I was
outraged by this. The subsequent discussion was a pathetic
Richard Loosemore said:
But instead of deep-foundation topics like these, what do we get?
Mostly what we get is hacks. People just want to dive right in and make
quick assumptions about the answers to all of these issues, then they
get hacking and build something - *anything* - to make it look
On Mon, May 5, 2008 at 11:15 PM, Anthony George [EMAIL PROTECTED] wrote:
But, I want to ask the list whether or not there has been any trend
or attempt to incorporate reflexivity into an AGI model. By reflexivity I
mean, basically, two computers that interact with each other but, perhaps,
OK. Let me give a system engineer's perspective....
I believe that a lot of the current systems have done a lot of excellent,
rigorous work both at the bottom-most and top-most levels of cognition.
The problem is, I believe, that these two levels are separated by two to
five more levels
Ben Goertzel wrote:
Feedback on AGI-06 overall was overwhelmingly positive; in fact
Richard's is the only significantly negative report I've seen.
Of course, if the conference was filled with low-quality presentations
and low-quality comments from participants, then all of those people who
On 5/4/08, Stephen Reed [EMAIL PROTECTED] wrote:
As perhaps you know, I want to organize Texai as a vast multitude of
agents situated in a hierarchical control system, grouped as possibly
redundant, load-sharing agents within an agency sharing a specific
mission. I have given some thought to
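Stephen's description — redundant, load-sharing agents grouped into agencies, each agency owning one mission — can be sketched as a toy model. Every name below (Agent, Agency, the round-robin policy) is illustrative only, not Texai's actual design:

```python
import itertools

class Agent:
    """A worker that can handle tasks for its agency's mission."""
    def __init__(self, name):
        self.name = name
        self.handled = []

    def handle(self, task):
        self.handled.append(task)
        return f"{self.name} handled {task!r}"

class Agency:
    """Groups redundant agents sharing one mission; load-shares round-robin."""
    def __init__(self, mission, agents):
        self.mission = mission
        self.agents = agents
        self._cycle = itertools.cycle(agents)

    def dispatch(self, task):
        # Round-robin load sharing; a real system might route by load or health.
        return next(self._cycle).handle(task)

# Agencies could themselves sit under a parent agency, giving the
# hierarchical control system; this shows only the within-agency grouping.
parsing = Agency("parse English input", [Agent("parser-1"), Agent("parser-2")])
print(parsing.dispatch("sentence A"))  # → parser-1 handled 'sentence A'
print(parsing.dispatch("sentence B"))  # → parser-2 handled 'sentence B'
```

Redundancy here is free: if an agent dies, dropping it from the cycle leaves the agency's mission intact.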
I'm wondering if it's possible to plug in my learning algorithm to
OpenCog / Novamente?
The main incompatibilities stem from:
1. predicate logic vs term logic
2. graphical KB vs sentential KB
If there is a way to somehow bridge these gaps, it may be possible
YKY
On Tue, May 6, 2008 at 10:10 PM, Richard Loosemore [EMAIL PROTECTED]
wrote:
Ben Goertzel wrote:
Feedback on AGI-06 overall was overwhelmingly positive; in fact
Richard's is the only significantly negative report I've seen.
Of course, if the conference was filled with low-quality
Stefan Pernar wrote:
On Tue, May 6, 2008 at 10:10 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Ben Goertzel wrote:
Feedback on AGI-06 overall was overwhelmingly positive; in fact
Richard's is the only significantly negative report I've seen.
On Tue, May 6, 2008 at 4:07 PM, Mark Waser [EMAIL PROTECTED] wrote:
Note: Most of these complaints do *NOT* apply to Texai (except possibly
the two to five level complaint -- except that Texai is actually starting at
what I would call one of the middle levels and looks like it has reasonable
I didn't sign up to listen to you whine, but I certainly tried to cancel
my subscription because you whine.
Any ETA on when that'll actually go through, anyone?
-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: May 6, 2008 12:28 PM
To: agi@v2.listbox.com
Subject:
Hi YKY,
You said:
The distributed agents would be owned by different people on the net, who
would want their agents to do different things for them. This occurs
simultaneously.
We need to distinguish 2 situations:
A) where all the agents cooperate to solve ONE problem
B) where agents
Hi Lukasz,
With regard to the Texai approach, I have subjected myself to these constraints:
* to author the bootstrap portion of the system by myself
* to write the least amount of code (e.g. not to write an ideal AI
language first)
* to reuse existing narrow AI
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
On 5/4/08, Stephen Reed [EMAIL PROTECTED] wrote:
As perhaps you know, I want to organize Texai as a vast multitude
of agents situated in a hierarchical control system, grouped as
possibly redundant, load-sharing agents within an agency
On 5/6/08, Stephen Reed [EMAIL PROTECTED] wrote:
I believe the opposite of what you say. I hope that my following
explanation will help converge our thinking. Let me first emphasize
that I plan a vast multitude of specialized agencies, in which each
agency has a particular
Stephen Reed wrote:
At the time that the Texai bootstrap English dialog system is
available, I'll begin fleshing out the hundreds of agencies for
which I hope to recruit human mentors. Each agency I establish will
have paragraphs of English text to describe its mission, including
--- [EMAIL PROTECTED] wrote:
http://paulspontifications.blogspot.com/2008/05/under-appreciated-fact-we-dont-know-how.html
Computer programming is an art, as Knuth observed.
I teach classes in C++, Java, and x86 assembler. I can show my students
some simple drawings and show them how to hold a
The blog entry is amusing. I started writing software at quite a young
age (about 10), and I always assumed that it was an art rather like
writing a novel or a musical composition. So when I grew older and
became employed to write programs I was shocked in my early career to
find that some people
Thanks Andi for the kind words.
My directly preceding post is about the combination of AI-hard problems that I
am trying to solve. It hints that incrementally solving the bunch of them may
be achievable, but that sufficiently solving one of them alone may not be.
I've given automatic
Predicate logic vs term logic won't be an issue for OpenCog, as the
AtomTable knowledge representation supports both (and many other)
formalisms.
I don't **think** the sentential KB will be a problem, because I
believe each of your sentences will be representable as an Implication
or Equivalence
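The reply above — that one atom-level representation can host both predicate-logic and term-logic sentences as typed links — can be illustrated with a toy hypergraph store. This is a simplified sketch of the general idea, not the actual AtomTable API: a sentential rule like "forall x: cat(x) -> animal(x)" becomes a single Implication link between two predicate nodes.

```python
class Node:
    """A named, typed vertex in the toy knowledge hypergraph."""
    def __init__(self, kind, name):
        self.kind, self.name = kind, name
    def __repr__(self):
        return f"{self.kind}:{self.name}"

class Link:
    """A typed relation over atoms; Implication, Equivalence, etc."""
    def __init__(self, kind, *targets):
        self.kind, self.targets = kind, targets
    def __repr__(self):
        return f"{self.kind}({', '.join(map(repr, self.targets))})"

class AtomStore:
    """Toy graphical KB: both term-logic inheritance and predicate-logic
    implication are just typed links over shared nodes, so a sentential
    rule base can be loaded without a separate formalism."""
    def __init__(self):
        self.atoms = []
    def add(self, atom):
        self.atoms.append(atom)
        return atom

kb = AtomStore()
cat = kb.add(Node("Predicate", "cat"))
animal = kb.add(Node("Predicate", "animal"))
# The sentence "forall x: cat(x) -> animal(x)" stored as one link:
rule = kb.add(Link("Implication", cat, animal))
print(rule)  # → Implication(Predicate:cat, Predicate:animal)
```

The bridging move is that the sentence's logical form survives as link structure, so either logic's inference rules can traverse the same store.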
Kaj, Richard, et al,
On 5/5/08, Kaj Sotala [EMAIL PROTECTED] wrote:
Drive 2: AIs will want to be rational
This is basically just a special case of drive #1: rational agents
accomplish their goals better than irrational ones, and attempts at
self-improvement can be outright harmful if
--- Steve Richfield [EMAIL PROTECTED] wrote:
I have played tournament chess. However, when faced with a REALLY GREAT
chess player (e.g. a national champion), as I have had the pleasure of
on a couple of occasions, they at first appear to play as novices,
making unusual and apparently stupid
On Wed, May 7, 2008 at 12:27 AM, Richard Loosemore [EMAIL PROTECTED]
wrote:
Stefan Pernar wrote:
On Tue, May 6, 2008 at 10:10 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
DELETED
Ben: I admire your patience.
Richard: congrats - you just made my ignore list