Yin wrote:
John Scanlon wrote:
[...]
Logical deduction or inference is not thought. It is mechanical symbol
manipulation that can be programmed into any scientific pocket calculator.
[...]
Hi John,
I admire your willingness to attack the core AI issues =)
One is either
Is there anyone out there who has a sense that most of the work being done in
AI is still following the same track that has failed for fifty years now? The
focus on logic as thought, or on neural nets as the bottom-up, brain-imitating
solution, just isn't getting anywhere. It's the same thing,
No GOFAI here.
On 12/12/06, John Scanlon wrote:
These rebukes to my statement that generating images is unnecessary are
right on target. I misinterpreted the quoted statement by Hinton: To
recognize shapes, first learn to generate images.
Therefore, I strongly recommend you
Sorry, I meant that someone said that links to one's published papers should
be the criterion. Not necessarily mathematical proofs.
Richard Loosemore wrote:
John Scanlon wrote:
[snip]
And bottom-up processing combined with top-down processing is also
perfectly reasonable and necessary, etc. It's the synchronisation of these two
streams which results in a percept.
On 10/12/06, John Scanlon wrote:
Recognizing shapes by an AGI and being able to talk to the AGI about them
is the first step -- a very necessary step. But I don't understand why an AI
system would have
Hank - do you have any theories or AGI designs?
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
Alright, I have to say this.
I don't believe that the singularity is near, or that it will even occur. I am
working very hard at developing real artificial general intelligence, but from
what I know, it will not come quickly. It will be slow and incremental. The
idea that very soon we can
Zvorygin wrote:
On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:
Alright, I have to say this.
I don't believe that the singularity is near, or that it will even occur. I
am working very hard at developing real artificial general intelligence, but
from what I know, it will not come quickly
that will allow it in a few years' time to comprehend its own
computer code and intelligently re-write it (especially a system as complex
as Novamente)? The artificial intelligence problem is much more difficult
than most people imagine it to be.
Ben Goertzel wrote:
John,
On 12/5/06, John Scanlon [EMAIL
is directly dependent upon one's understanding/design of AGI and
intelligence in general.
On 12/5/06, Ben Goertzel [EMAIL PROTECTED] wrote:
John,
On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:
I don't believe that the singularity is near, or that it will even occur. I
I get the impression that a lot
of people interested in AI still believe that the mental manipulation of
symbols is equivalent to thought. As many other people understand now,
symbol-manipulation is not thought. Instead, symbols can be manipulated by
thought to solve various problems that
Chris Petersen wrote:
That magical, undefined
'thought'...
On 11/11/06, John
Scanlon wrote:
I get the impression that a
lot of people interested in AI still believe that the mental
manipulation of symbols is equivalent to thought. As many other people
understand now,
that a system based on the mechanical manipulation of statements in logic,
without a foundation of primary intelligence to support it, can produce
thought?
[John Scanlon]
The problem with answering your question is that I don't really know
what you mean, exactly, by symbol manipulation. Do you just
This is very funny. I just sat down at my computer before I went to bed to
say, just to make sure I'm not misunderstood, that when I referred to a paper
tiger, I didn't mean you; I meant people like Douglas Lenat.
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To:
Eric,
Wow, I'm very impressed by the positive reviews from people with these
credentials. Now I have to read your book. Should I just order it from
Amazon, or could you find it in the goodness of your heart to send me an
electronic copy? I don't mind paying for it if that's a problem.
Richard,
I will get back to you on this. There's a lot of e-mail coming in, and I
have to digest what you've said here. This is important.
Richard Loosemore wrote:
John Scanlon wrote:
Richard, could you describe your algorithms in a general way (I'm not
asking for any proprietary
Fully decoding the human genome is almost impossible. Not only is there the
problem of protein folding, which I think even supercomputers can't fully
solve, but the purpose for the structure of each protein depends on
interaction with the incredibly complex molecular structures inside cells.
The crux of the problem is this: what should
be the fundamental elements used for knowledge representation. Should they
be statements in predicate or term logic, maybe with the addition of
probabilities and confidence? Should they be neural-net-type learned
functional mappings? Or should
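To make the first option concrete, here is a minimal sketch (my own illustration, not any list member's actual system) of what a term-logic statement annotated with a probability and a confidence might look like as a knowledge-representation element; the `Statement` class and its field names are hypothetical.

```python
from dataclasses import dataclass

# A hypothetical knowledge-representation element: a term-logic
# statement ("subject --> predicate") carrying a probability and a
# confidence, as suggested above. Both values are assumptions about
# how such annotations might be encoded.
@dataclass(frozen=True)
class Statement:
    subject: str
    predicate: str
    probability: float  # how often the relation has held (0..1)
    confidence: float   # how much evidence backs that estimate (0..1)

    def __str__(self) -> str:
        return (f"<{self.subject} --> {self.predicate}> "
                f"<{self.probability:.2f}, {self.confidence:.2f}>")

s = Statement("raven", "black", probability=0.95, confidence=0.80)
print(s)  # <raven --> black> <0.95, 0.80>
```

The neural-net alternative would instead store a learned mapping with no such discrete, inspectable elements, which is exactly the trade-off the question raises.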
Richard, could you describe your algorithms in a general way (I'm not asking
for any proprietary information), so I could see if they would fit into my
concept of a KBMS?
- Original Message -
From: John Scanlon [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, November 06, 2006
YKY,
I looked at your language, and it
makes sense. It seems that a lot of us are working on this same
problem of developing a good interface language for an AI system.
As far as a demo of how my Gnoljinn server
processes statements in Jinnteera, this is what it does right now:
it
James Ratcliff wrote:
"In some form or another we are going to HAVE to have a natural language
interface, either a translation program that can convert our English to the
machine-understandable form, or a simplified form of English that is
trivial for a person to quickly understand and
Any comments?
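The "translation program" idea above can be sketched in a few lines. This is a toy illustration of the concept only, assuming a single rigid sentence pattern; the function name and the predicate-style output format are my own inventions, not anyone's actual interface language.

```python
import re

# Toy "simplified English" translator: accept only the rigid pattern
# "the <subject> <verb>s the <object>" and emit a predicate-style form.
# Anything outside that subset is rejected rather than guessed at.
PATTERN = re.compile(r"^the (\w+) (\w+)s the (\w+)$")

def to_machine_form(sentence: str) -> str:
    m = PATTERN.match(sentence.lower().rstrip("."))
    if not m:
        raise ValueError(f"outside the simplified-English subset: {sentence!r}")
    subject, verb_stem, obj = m.groups()
    return f"{verb_stem}s({subject}, {obj})"

print(to_machine_form("The dog chases the cat."))  # chases(dog, cat)
```

A real system would of course need morphology, a lexicon, and ambiguity handling; the point is only that a restricted subset of English can be made machine-parsable with trivial machinery.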
Richard Loosemore wrote:
When you say that it provides ... a general AI shell, within which any AI
algorithms can be experimented with ..., I find myself exasperated [tho'
not to worry, I am exasperated a lot ;-) ] -- it does not provide a
language shell within which *any* of the algorithms
Sorry, I completely ignored what you said about
your language in my reply. Could you describe it some more?
YKY wrote:
I'm working on something very
similar to Jinnteera; my language is called Geniform and is half-way between
English and an augmented form of predicate logic. I'd say it
I'll keep this short, just to weigh in a vote - I
completely agree with this. AGI will be measured by what we recognize
as intelligent behavior and the usefulness of that intelligence for
tasks beyond the capabilities of ordinary software. Normal metrics
don't apply.
Russell Wallace wrote:
Matt, I totally agree with you on Cyc and LISP. To go further, I think Cyc
is a dead end because of the assumption that intelligence is dependent on a
vast store of knowledge, basically represented in a semantic net.
Intelligence should start with the learning of simple patterns in images and
One of the major obstacles to real AI is the belief
that knowledge of a natural language is necessary for
intelligence. A human-level intelligent system should be expected to
have the ability to learn a natural language, but it is not necessary. It
is better to start with a formal language,
interface languages
John Scanlon wrote:
One of the major obstacles to real AI is the belief that knowledge of a
natural language is necessary for intelligence. A human-level
intelligent system should be expected to have the ability to learn a
natural language, but it is not necessary
In the para-natural formal language I've developed, called Jinnteera, "I saw
the man with the telescope" would be expressed for each meaning in a
declarative phrase as:
1. I did see with a telescope the_man
2. I did see the man which did have a telescope
3. I saw with a telescope the_man or
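The reason a para-natural language has to split these readings apart can be shown structurally. Below is a minimal sketch of the classic prepositional-phrase ambiguity; the nested tuples stand in for parse trees and the bracketing is my own illustration, not Jinnteera's actual syntax.

```python
# Two readings of "I saw the man with the telescope", as nested
# tuples standing in for parse trees. The surface string is
# identical, but the attachment of "with the telescope" differs:
reading_1 = ("saw", "I", "the_man", ("instrument", "telescope"))  # I used the telescope
reading_2 = ("saw", "I", ("the_man", ("has", "telescope")))       # the man had it

# An unambiguous interface language assigns each meaning its own
# distinct expression, so the two structures never collide:
assert reading_1 != reading_2
print(reading_1)
print(reading_2)
```

A natural-language front end has to pick one of these structures by inference; a formal language like the one above simply never produces the ambiguous string in the first place.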
Sent: Tuesday, October 31, 2006 12:24 PM
Subject: Re: [agi] Natural versus formal AI interface languages
John --
See
lojban.org
and
http://www.goertzel.org/papers/lojbanplusplus.pdf
-- Ben G
On 10/31/06, John Scanlon [EMAIL PROTECTED] wrote:
One of the major obstacles to real AI
that if they really did epistemology they would call it knowledge
understanding.
On Monday 14 August 2006 13:06, John Scanlon wrote:
Does anyone know why the term ontology in artificial intelligence refers
to knowledge representation, while in philosophy, theories of knowledge
belong to epistemology?
Is anyone interested in discussing the use of
formal logic as the foundation for knowledge representation schemes for
AI? It's a common approach, but I think it's the wrong path.
Even if you add probability or fuzzy logic, it's still insufficient for true
intelligence.
The human brain, the
n software makes in recognizing a word or two in the input
stream.
From: John Scanlon [mailto:[EMAIL PROTECTED]]
Sent: Sunday, May 07, 2006 2:40 AM
To: agi@v2.listbox.com
Subject: [agi] Logic and Knowledge Representation
Is anyone interested in discussing the use of
fo