I have a list at
http://www.cis.temple.edu/~pwang/203-AI/Lecture/203-1126.htm, including
projects satisfying the following three standards:
a.. Each of them has the plan to eventually grow into a thinking machine
or artificial general intelligence (so it is not merely about part of AI);
b..
I studied OSCAR years ago, but haven't followed it closely. Simply speaking,
both OSCAR and NARS are logic-based approaches, and their major difference
is that OSCAR stays much closer to traditional mathematical logic (in terms
of formal language, semantics, rules, control mechanism, and so on).
SNePS is one of the longest-lasting AI projects. I met Stuart Shapiro at a
conference a few years ago, and he seems to like my work, which is unusual
among mainstream AI big names. ;-)
In my opinion, their strength is in knowledge representation and its
relation to NLP, but reasoning/learning is
From: Kevin Copple [EMAIL PROTECTED]
It seems to me that rote memorization is an aspect of human learning, so
why not include a variety of jokes, poems, trivia, images, and so on as
part of an AI knowledge base? In the EllaZ system we refer to these
chunks of data as Convuns (conversational
I have a paper
(http://www.cogsci.indiana.edu/farg/peiwang/PUBLICATION/#semantics) on this
topic, which is mostly in agreement with what Kevin said.
For an intelligent system, it is important for its concepts and beliefs to
be grounded on the system's experience, but such experience can be
On this issue, we can distinguish 4 approaches:
(1) let symbols get their meaning through interpretation (provided in
another language) --- this is the approach used in traditional symbolic AI.
(2) let symbols get their meaning by grounding on textual experience ---
this is what I and Kevin
I have a similar impression to Cliff's.
Though their descriptions are not really wrong, I don't like the
approach --- they introduce too many artificial concepts to cut the
categorization process into pieces. For example, the first sentence in
the abstract:
Conceptual integration-blending-is a
As I posted to this mailing list a few months ago, I have a list (now
including 10 projects) that:
a.. Each of them has the plan to eventually grow into a thinking machine
or artificial general intelligence (so it is not merely about part of AI);
b.. Each of them has been carried out for
Sorry, I forgot to mention that my list is at
http://www.cis.temple.edu/~pwang/203-AI/Lecture/203-1126.htm.
Happy New Year to everyone!
Pei
- Original Message -
From: Pei Wang [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, December 30, 2002 6:26 PM
Subject: Re: [agi] Early
I don't know who coined the term AGI, but since in the psychological study
of human intelligence (e.g., IQ test and so on), the so-called general
factor has been discussed for many years by many people, it is quite
natural to introduce the concept into AI.
Though I do use the term AGI in
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, January 08, 2003 6:38 PM
Subject: RE: [agi] Q: Who coined AGI?
I guess most AI researchers consider AI to be inclusive of AGI and ASI.
That's Ok with me ... ASI is interesting too, though
in solving it? What is the
computational complexity of this process?
Pei
- Original Message -
From: Shane Legg [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, January 11, 2003 5:12 PM
Subject: Re: [agi] AI and computation (was: The Next Wave)
Pei Wang wrote:
In my opinion, one
- Original Message -
From: Shane Legg [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, January 11, 2003 9:42 PM
Subject: Re: [agi] AI and computation (was: The Next Wave)
Hi Pei,
One issue that make that version of the paper controversial is the term
computation, which
I'm working on a paper to compare predicate logic and term logic. One
argument I want to make is that it is hard to make inferences about
uncountable nouns in predicate logic, such as to derive ``Rain-drop is a
kind of liquid'' from ``Water is a kind of liquid'' and ``Rain-drop is a
kind of water'', (which can
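The inference in question is, in term-logic form, a transitive inheritance deduction. The following toy closure is illustrative only; it is not code from NARS or any published system, and all names in it are mine:

```python
# A minimal sketch (not NARS itself) of term-logic deduction: the copula
# "is a kind of" (inheritance) is transitive, so from "rain-drop -> water"
# and "water -> liquid" we derive "rain-drop -> liquid".

def deduce(beliefs):
    """Close a set of inheritance statements (subject, predicate) under transitivity."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for (s, m1) in list(derived):
            for (m2, p) in list(derived):
                if m1 == m2 and s != p and (s, p) not in derived:
                    derived.add((s, p))
                    changed = True
    return derived

beliefs = {("rain-drop", "water"), ("water", "liquid")}
print(("rain-drop", "liquid") in deduce(beliefs))  # True
```

In predicate logic the same content needs quantifiers and a sortal treatment of mass terms, which is the contrast the paper draft is drawing.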
- Original Message -
From:
Daniel
Colonnese
To: [EMAIL PROTECTED]
Sent: Monday, February 03, 2003 11:57
AM
Subject: RE: [agi] KNOW
Thanks Pei. Post a link to your paper if
possible.
I'll do that when the paper is finished.
Some of the
I have two new drafts for comments:
"Non-Axiomatic Logic", at http://www.cis.temple.edu/~pwang/drafts/NAL.pdf
This is a complete description of the logic I've been working on.
"A Term Logic for Cognitive Science", at
http://www.cis.temple.edu/~pwang/drafts/TermLogic.pdf
This is a comparison
The book is "Computational Models for
Neuroscience"
( http://www.amazon.com/exec/obidos/ASIN/1852335939/qid%3D1058710388/sr%3D11-1/ref%3Dsr%5F11%5F1/002-0312061-9441635),
for which he is a co-editor, and he has a chapter in it, "A Theory of
Thalamocortex".
The claim he made in that chapter
Why A.I. Is
Brain-Dead
"There is no computer that has common sense. We're only
getting the kinds of things that are capable of making an airline reservation.
No computer can look around a room and tell you about it. But the real topic of
my talk was overpopulation. "
I wonder if we have enough people interested in organizing/participating
in an AGI workshop during AAAI-04 (or some other conference).
Pei
Call for AAAI-04 Workshop Proposals
Nineteenth National Conference on Artificial
The paper can be accessed at
http://www.enel.ucalgary.ca/People/wangyx/Publications/Papers/BM-Vol4.2-HMC.pdf
Their conclusion is based on the assumptions that there are 10^11 neurons
and that their average synapse count is 10^3. Therefore the total number of
potential relational combinations is
(10^11)! /
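The formula above is cut off in the archive, so the exact expression is unknown. Purely as an illustration of how such factorial-based counts can be sized up without computing the factorials, here is a log-gamma estimate of the binomial coefficient C(10^11, 10^3), one plausible reading of a formula built from those two assumptions:

```python
# Estimate the magnitude of C(10^11, 10^3) via lgamma, since the factorials
# themselves are astronomically large. This is an illustrative reading of
# the truncated formula, not the paper's actual expression.
import math

def log10_binomial(n, k):
    """log10 of the binomial coefficient C(n, k)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)) / math.log(10)

digits = log10_binomial(10**11, 10**3)
print(round(digits))  # a count with thousands of decimal digits
```

Whatever the exact formula, the point stands that the number of potential combinations is astronomically larger than anything enumerable.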
Ben,
Some comments to this interesting article:
*. S = space of formal synapses, each one of which is identified with a
pair (x,y), with x ∈ N and y ∈ N∪S.
Why not x ∈ N∪S?
*. outgoing: N → S* and incoming: N → S*
Don't you want them to cover higher-order synapses?
*. standard neural net
Actually, in attractor neural nets it's well-known that using random
asynchronous updating instead of deterministic synchronous updating does
NOT
change the dynamics of a neural network significantly. The attractors are
the same and the path of approach to an attractor is about the same. The
2004 AAAI Fall Symposium Series
Achieving Human-Level Intelligence through Integrated Systems and Research
October 21-24, 2004
Washington D.C.
See http://xenia.media.mit.edu/~nlc/conferences/fss04.html
---
To unsubscribe, change your address, or temporarily deactivate your subscription,
I found a PDF file of the paper with Google. The work is indeed interesting
(thanks to Ben for the message), but their conclusion, as well as the title,
is a little overgeneralized, and may be misleading.
What their work actually shows is that when trained with certain data (their
data follow a
theory of mind.
-- Ben
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Pei Wang
Sent: Saturday, January 31, 2004 9:47 AM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Bayes rule in the brain
I found a PDF file of the paper with Google
Sure, but NARS or any other uncertain inference system, when applied to
predicting the future, also falls prey to Hume's induction paradox.
There's
no way to avoid it.
Recall how Hume avoided it: he introduced the assumption of human
nature.
In modern terms, he argued that we have some
Here is an old paper of Pei's on the Wason card experiment:
http://www.cogsci.indiana.edu/farg/peiwang/PUBLICATION/wang.evidence.pdf
Ben: Thanks for replying for me.
I don't know if he wrote something similar relating to Tversky's
experiments
or not. I think I remember reading it, but I
Probability theory is not compactable with the first semantics above ...
It should be compatible. Sorry.
Pei
Since confidence is defined as a function of the amount of evidence (in past
experience), it is based on no assumption about the object world. Of course,
I cannot prevent other people from interpreting it in other ways.
I've made it clear in several places (such as
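For context, the definition being referred to is the one published in the NARS literature, where confidence is c = w/(w+k) for total evidence w and a personality constant k (default 1). A minimal sketch, with variable names of my own choosing:

```python
# Confidence as a function of the amount of evidence, NARS-style:
# c = w / (w + k), where w is the total evidence accumulated so far and
# k is a constant (typically 1). It depends only on how much evidence the
# system has, not on any assumption about the object world.

def confidence(w, k=1.0):
    """Map total evidence w >= 0 to confidence in [0, 1)."""
    return w / (w + k)

print(confidence(1))   # 0.5
print(confidence(9))   # 0.9
print(confidence(99))  # 0.99
```

Note that confidence approaches but never reaches 1, which encodes the point that no finite amount of experience yields certainty.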
: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Pei Wang
Sent: Sunday, February 01, 2004 8:26 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Bayes rule in the brain
Since confidence is defined as a function of the amount of
evidence (in past
experience), it is based
because you have built into NARS a certain inductive assumption about the
way future experience will be related to past experience.
These inductive assumptions, intuitively, represent an assertion that some
possible experiences are MORE LIKELY than others. So they are very
closely
analogous
downloaded NARS from your website and played with entering various
info. I was wondering if you have an updated version you were planning to
put on the web. I think the last version was from 1999...
--Kevin
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Pei
I took a brief look at your NARS site, but I haven't read the
papers yet. Just a quick question: Do you think your ideas of
induction and abduction etc can be formulated in a Bayesian
framework?
A quick answer: no. You may want to read
http://www.cis.temple.edu/~pwang/drafts/Bayes.pdf and
Shane,
I fully agree with what you said.
My own plan for NARS is to publish the logic it used in detail (including
the grammar of its formal language, the semantics, the inference rules with
their truth-value functions), but, at the current time, not to reveal the
technical details of the
To be perfectly honest, connectionist versus symbolic has always come
across as a strange dichotomy that seems to me would be a false
dichotomy as well in any reasonable model. I don't see why a reasonable
system couldn't be interpreted as either depending on how narrowly one
wanted to slice
"Sure, they're only machines. But the more they
interact with us humans, the more important their apparent gender
becomes."
See http://www.technologyreview.com/articles/wo_garfinkel050504.asp
Hi,
Thanks for sharing your ideas and plans.
I have some questions after reading the writing.
*. If you think your theory is compatible with AGIs developed by the
various groups on this list, what is unique in your approach that is
missing in other approaches?
*. In your 2-part architecture,
Hi,
I just put demos of NARS 4.2 (a Java version and a Prolog version) and
several recent papers at
http://www.cogsci.indiana.edu/farg/peiwang/papers.html.
Comments are welcome.
Pei
My suggestion (which applies to all AGI researchers) to assess the
merits of AGI models is to consider the following 4 points:
1) speed
2) approximation (=fault tolerance/robustness)
3) flexibility
4) adaptiveness
And it seems that speed is the limiting factor with current hardware.
Well,
aren't familiar with the long-running debates
between Pei Wang and myself, you should know that Pei and I have a lot of
respect for one another's AI approaches even though we don't agree on
everything. If I argue with Pei's ideas it's because, unlike most ideas
in
the AI field, I actually consider them
formulas, but that's another story ;-)
[A side note: For those who aren't familiar with the long-running debates
between Pei Wang and myself, you should know that Pei and I have a lot of
respect for one another's AI approaches even though we don't agree on
everything. If I argue with Pei's ideas it's
My contention is that the incremental approach will take unacceptably long
to generate the compounds needed to solve nontrivially complex practical
problems.
We don't have to start from atoms --- most compounds in our mind are
obtained through interaction with other people. We just build upon
were interesting technically -- the only other person
presenting a real approach to human-level intelligence, besides me and
Moshe, was Pei Wang. Nearly all of the work presented was from a
logic-based approach to AI. Then there were some folks who posited that
logic is a bad approach and AI
One idea proposed by Minsky at that conference is something I disagree
with
pretty radically. He says that until we understand human-level
intelligence, we should make our theories of mind as complex as possible,
rather than simplifying them -- for fear of leaving something out! This
reminds me
Billionaire Paul Allen's latest project to build electronic science
tutors falls short
http://www.spectrum.ieee.org/WEBONLY/publicfeature/jan05/0105ldigi.html
Also, the most recent issue of AI Magazine has a technical paper on it.
Pei
Ben,
Again, I'll omit the positive comments. ;-)
*. page 6 of ITSSIM, rule R: within it, P is the probability that R
holds for me --- isn't that a self-reference? What do you mean by "me"
here? Is it the system, the rule, or the current application of the rule?
*. Also, you said it defines a
in
this summer, and I'll write something after that and post it in this
list --- it is a topic too complicated to be handled by emails.
I don't know any work in the Bayesian school that addressed this issue,
but you may want to send your question to the UAI list at
uai@ENGR.ORST.EDU .
Pei Wang
To summarize some relevant points in my previous writings:
(1) There are two senses of computation: whatever a computer does
and the process defined by a TM. The former sets no limitation for
AI, but is empty; the latter is solid, but has limited the AI research
in various ways.
(2) The
I have to admit that the bad news somehow makes me feel better. ;-)
I posted a brief comment on their design to that group, and won't
cross-post it here.
Pei
On 11 Jun 2005 07:33:55 -0400, Ben Goertzel [EMAIL PROTECTED] wrote:
I'll comment more on the design later, I'm away from home for a
Shane and Ben,
Thanks for the comments.
Let me clarify some general points first.
(1) My memo is not intended to cover every system labeled as a neural
network --- that is why I use a whole section to define what I mean by
the NN model discussed in the paper. I'm fully aware of the fact that
given a
I think I prefer Daniel Amit's approach, where one views NN's as the
class of nonlinear dynamical systems composed of networks of
neuron-like elements.
Then, it becomes clear that the standard NN architectures form a very
small subclass of possible NN's
Of course, most of the
Biological cognition is based on network processing, too.
No problem here --- that is one of the NN ideas that I think is necessary.
However, it doesn't only belong to neural networks, in the technical
sense. Both Novamente and NARS do network processing, in the broad
sense.
Because you're reading
On 12/18/05, Shane Legg [EMAIL PROTECTED] wrote:
Pei,
To my mind the key thing with neural networks is that they
are based on large numbers of relatively simple units that
interact in a local way by sending fairly simple messages.
Of course that's still very broad. A CA could be
On 12/18/05, Eugen Leitl [EMAIL PROTECTED] wrote:
On Sun, Dec 18, 2005 at 03:36:59PM -0500, Pei Wang wrote:
I'm afraid the issue is not as simple as you believe. Your argument is
based on the theory that to get what we call intelligence, a
necessary condition is to get a computer
On 12/18/05, Ben Goertzel [EMAIL PROTECTED] wrote:
The way I think about it, a neural net is a dynamical system composed
of connected components that roughly model neurons. The system's
dynamics have got to take place via equations that update the
quantitative parameters of the simulated
Ben,
The following is a brief summary of my responses to the paper.
The topics where I agree with Cassimatis:
*. humans use the same or similar mechanisms for linguistic and
nonlinguistic cognition
*. there are dualities between elements of physical and grammatical structure
*. Infant
I don't think that the example he gives of whole-versus-part dominance
transferring from the physical to the linguistic domain is very
representative. I think that there are going to be plenty of
linguistic phenomena that cannot be dealt with via any simple mapping
from heuristics relevant
On 5/7/06, John Scanlon [EMAIL PROTECTED] wrote:
Is anyone interested in discussing the use of formal logic as the foundation
for knowledge representation schemes for AI? It's a common approach, but I
think it's the wrong path. Even if you add probability or fuzzy logic, it's
still
On 5/7/06, sanjay padmane [EMAIL PROTECTED] wrote:
On 5/7/06, Pei Wang [EMAIL PROTECTED] wrote:
AI
doesn't necessarily follow the same path as how human intelligence is
produced, even though it is indeed the only path that has been proved
to work so far.
IMO, if a machine achieves true
it, but not to absorb
it to the extent that its content can be applied flexibly in the
future.
While mobility and vision processing are much harder tasks for them.
I'm not sure whether that will remain the case in the future.
Pei
James Ratcliff
Pei Wang [EMAIL PROTECTED] wrote:
On 5/7/06, sanjay
Definitions like the Wikipedia one have the problem of only talking
about the good/right intuitions, while there are at least as many
intuitions that are bad/wrong. To call them by another name would
make things worse, because they are produced by the same mechanism,
therefore you couldn't get
Whether the common response to the Linda example is a fallacy or not
depends on the normative theory that is used as the standard of
correct thinking.
The traditional probabilistic interpretation is purely extensional,
in the sense that the degree of belief for ``L is a C'' is interpreted
as the
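Under that extensional reading, the "fallacy" verdict in the Linda example follows from the conjunction rule, P(A and B) <= P(A), which holds in any probability distribution. A toy check with made-up numbers:

```python
# The conjunction rule: "Linda is a feminist bank teller" can never be more
# probable than "Linda is a bank teller". The joint distribution below is
# hypothetical, purely for illustration.

# Joint probabilities over (bank_teller, feminist):
joint = {(True, True): 0.05, (True, False): 0.05,
         (False, True): 0.60, (False, False): 0.30}

p_teller = sum(p for (t, f), p in joint.items() if t)  # marginal P(teller)
p_both = joint[(True, True)]                           # P(teller and feminist)
print(p_both <= p_teller)  # True: 0.05 <= 0.10
```

Whether the common human response really violates a norm then depends on whether this extensional interpretation is the right standard, which is exactly the question being raised here.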
true, or even usually true. To
me, the human fallacy literature just shows the opposite --- these
assumptions are usually false, and as a result, the normative theory
involved is not applicable.
Pei
On 6/8/06, Peter de Blanc [EMAIL PROTECTED] wrote:
On Thu, 2006-06-08 at 07:56 +0800, Pei Wang
Soar, like other cognitive architectures (such as ACT-R), is not
designed to directly deal with domain problems. Instead, it is a
high-level platform on which a program can be built for a specific
problem.
On the contrary, Novamente, like other AGI systems (such as NARS),
is designed to directly
commonality.
-- Ben
On 7/13/06, Pei Wang [EMAIL PROTECTED] wrote:
Ben,
For example, I guess most of your ideas about how to train Novamente
cannot be applied to AIXI. ;-)
Pei
Pei,
I think you are right that the process of education and mental
development is going to be different
http://news.com.com/Getting+machines+to+think+like+us/2008-11394_3-6090207.html?tag=nefd.lede
Some interesting QA in the interview:
*. What would be the biggest achievements in the last 50 years? Or how
much of the original goals were accomplished?
McCarthy: Well, we don't have human-level
On 7/15/06, Russell Wallace [EMAIL PROTECTED] wrote:
On 7/16/06, Ricardo Barreira [EMAIL PROTECTED] wrote:
Put simply - intelligent beings like us can solve NP-hard problems, so
how could intelligence NOT be NP-hard??
Of course anything to do with intelligence is _at least_ NP-hard -
No matter how bad fuzzy logic is, it cannot be responsible for the
past failures of AI --- fuzzy logic has never been popular in the AI
community. Actually, numerical approaches have been criticized and
rejected for similar reasons from the very beginning, until the coming
of the Bayesian
On 8/3/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Thanks for the thoughtful responses, folks. I have a few replies.
Pei Wang wrote:
No matter how bad fuzzy logic is, it cannot be responsible for the
past failures of AI --- fuzzy logic has never been popular in the AI
community.
Oh
YKY:
(1) Your worry about the Bayesian approach is reasonable, but it is
not the only possible way to use numerical truth value --- even Ben
will agree with me here. ;-)
(2) Accuracy is not a big problem, but if you do some experiments on
incremental learning, you will soon see that 1-2 digits
On 8/5/06, Yan King Yin [EMAIL PROTECTED] wrote:
I think the brain is actually quite smart, perhaps due to intense selection
for intelligence over a long period of time dating back to fishes. I
suspect that the brain actually has an internal representation somewhat
similar to predicate logic.
If you just want an advanced production system, why bother to build
your own, but not to simply use Soar or ACT-R? Both of them allow you
to define your own rules.
Pei
On 8/5/06, Yan King Yin [EMAIL PROTECTED] wrote:
Indeed, the AGI model that I have in mind is basically a production-rule
On 8/7/06, Yan King Yin [EMAIL PROTECTED] wrote:
On 8/6/06, Pei Wang [EMAIL PROTECTED] wrote:
If you just want an advanced production system, why bother to build
your own, but not to simply use Soar or ACT-R? Both of them allow you
to define your own rules.
Indeed, when Allen Newell
are not universal statements, so can be
multi-valued. See http://www.cogsci.indiana.edu/pub/wang.induction.ps
for more.
Pei
On 8/8/06, Yan King Yin [EMAIL PROTECTED] wrote:
On 8/7/06, Pei Wang [EMAIL PROTECTED] wrote:
At the beginning, I also believed that first-order predicate logic
(FOPL) plus
:
On 8/9/06, Pei Wang [EMAIL PROTECTED] wrote:
There are two different issues: whether an external communication
language needs to be multi-valued, and whether an internal
representation language needs to be multi-valued. My answer to the
former is No, and to the latter is Yes. Many people
See the paper at
http://www.cogsci.rpi.edu/CSJarchive/Proceedings/2006/docs/p2059.pdf
ABSTRACT:
The Human Speechome Project is an effort to observe
and computationally model the longitudinal course of
language development of a single child at an unprecedented
scale. The idea is this: Instrument
Matt,
To summarize and generalize data and to use the summary to predict the
future is no doubt at the core of intelligence. However, I do not call
this process compressing, because the result is not faultless, that
is, there is information loss.
It is not only because the human brains are
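The distinction being drawn here (a summary supports prediction but loses information, unlike lossless compression) can be shown with a toy example, not taken from the thread:

```python
# A one-number summary (the mean) generalizes the data and supports
# prediction, but the original values cannot be recovered from it --
# the residual error measures the information lost.
data = [3, 5, 4, 8, 5]
summary = sum(data) / len(data)               # 5.0 -- one number replaces five
reconstruction = [summary] * len(data)        # best guess given only the summary
loss = sum((d - summary) ** 2 for d in data)  # nonzero: lossy, not compression
print(summary, loss > 0)  # 5.0 True
```

This is the sense in which summarizing-and-predicting is not "compressing": the mapping is many-to-one, so it is fallible by construction.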
.
477-493.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, August 12, 2006 4:03:55 PM
Subject: Re: [agi] Marcus Hutter's lossless compression of human knowledge prize
Matt,
To summarize and generalize data
, and the opposite approach (to
losslessly remember every word, even in a compressed way) is not
intelligent.
Pei
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, August 12, 2006 8:53:40 PM
Subject: Re: [agi] Marcus Hutter's
probabilistic truths).
Pei
On 8/13/06, Shane Legg [EMAIL PROTECTED] wrote:
On 8/13/06, Pei Wang [EMAIL PROTECTED] wrote:
Hutter's only assumption about AIXI is that the environment can be
simulated by a Turing machine.
That is already too strong for me. Can our environment be simulated by
a Turing
Hi,
A demo applet of the recently developed NARS 4.3.1 is at the new NARS
website http://nars.wang.googlepages.com/ . Though the work is by no
means complete, it does have some new features.
A book on NARS will be published by Springer soon (hopefully) :
Shane,
Thanks for the great job! It will be a useful resource for all of us.
In my definition, I didn't use the word "agent", but "system".
You may also want to consider the 8 definitions listed in AIMA
(http://aima.cs.berkeley.edu/), page 2.
Pei
On 9/1/06, Shane Legg [EMAIL PROTECTED] wrote:
On 9/6/06, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Well, let's change the question a bit: are there some specific features in
Hawkins's theory that you (or others) think should be included in my
architecture to make it better?
I don't have any confident comment to make on that topic.
In
of these schemes.
Agree. Halpern's work is important and influential, but to me it is
still too idealized to be used in AGI.
Pei
Cheers=)
YKY
On 8/20/06, Pei Wang [EMAIL PROTECTED] wrote:
Hi,
A demo applet of the recently developed NARS 4.3.1 is at the new NARS
website http
Good question.
Ben and I are drafting an introductory chapter for the AGIRI Workshop
Proceedings, and in it we want to list the major objections to AGI
research, then reject them one by one. Now the list includes the
following:
1. AGI is impossible --- such as the opinions from Lucas, Dreyfus,
Why in other fields of AI, or CS in general, do many people work on
other people's ideas?
I guess the AGI ideas are still not convincing and attractive enough
to other people.
Pei
On 9/13/06, Andrew Babian [EMAIL PROTECTED] wrote:
PS. http://adaptiveai.com/company/opportunities.htm
This
In my case (http://nars.wang.googlepages.com/), that scenario won't
happen --- it is impossible for the project to fail. ;-)
Seriously, if it happens, most likely it is because the control
process is too complicated to be handled properly by the designer's
mind. Or, it is possible that the
We all know that, in a sense, every computer system (hardware plus
software) can be abstractly described as a Turing machine.
Can we say the same for every robot? Why?
Reference to previous publications are also welcome.
Pei
-
This list is sponsored by AGIRI: http://www.agiri.org/email
, Sergio Navega [EMAIL PROTECTED] wrote:
From: Pei Wang [EMAIL PROTECTED]
We all know that, in a sense, every computer system (hardware plus
software) can be abstractly described as a Turing machine.
Can we say the same for every robot? Why?
Reference to previous publications are also
New Book Announcement [apologies for cross-posting]
Rigid Flexibility: The Logic of Intelligence
by Pei Wang
Springer, October 2006, ISBN: 1402050445
This book provides the blueprint of a thinking machine.
While most of the current works in Artificial Intelligence
(AI) focus on individual
Peter,
I'm afraid that your question cannot be answered as it is. AI is
highly fragmented, which not only means that few projects are aiming at
the whole field, but also that few are even covering a subfield as you
listed. Instead, each project usually aims at a special problem under
a set of
in AI in general.
I wonder if there is anyone in this list who has been actually working
in the field of robotics, and I would be very interested in learning
the causes of the recent development.
Pei
On 10/19/06, Olie Lamb [EMAIL PROTECTED] wrote:
(Excellent list there, Matt)
Although Pei Wang
Loosemore wrote
Matt Mahoney wrote:
From: Pei Wang [EMAIL PROTECTED]
On 10/20/06, Matt Mahoney [EMAIL PROTECTED] wrote:
It is not that we can't come up with the right algorithms.
It's that we don't have the
computing power to implement them.
Can you give us an example? I hope you don't
On 10/21/06, Matt Mahoney [EMAIL PROTECTED] wrote:
I read Pei Wang's paper, http://nars.wang.googlepages.com/wang.AGI-CNN.pdf
Some of the shortcomings of neural networks mentioned only apply to classical
(feedforward or symmetric) neural networks, not to asymmetric networks with
recurrent
On 10/21/06, Matt Mahoney [EMAIL PROTECTED] wrote:
- Original Message
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, October 21, 2006 5:25:13 PM
Subject: Re: [agi] SOTA
For example, the human mind and some other AI techniques handle
structured knowledge much
On 10/22/06, Matt Mahoney [EMAIL PROTECTED] wrote:
Also to Novamente, if I understand correctly. Terms are linked by a
probability and confidence. This seems to me to be an optimization of a neural
network or connectionist model, which is restricted to one number per link,
representing
Bob and Neil,
Thanks for the informative discussion!
Several questions for you and others who are familiar with robotics:
For people whose interests are mainly in the connection between
sensorimotor and high-level cognition, what kind of API can be
expected in a representative robot? Something
Let's not confuse two statements:
(1) To be able to use a natural language (so as to pass the Turing
Test) is not a necessary condition for a system to be intelligent.
(2) A true AGI should have the potential to learn any natural language
(though not necessarily to the level of a native
On 11/2/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
Pei Wang wrote:
On 11/2/06, Eric Baum [EMAIL PROTECTED] wrote:
Moreover, I argue that language is built on top of a heavy inductive
bias to develop a certain conceptual structure, which then renders the
names of concepts highly
On 11/2/06, Eric Baum [EMAIL PROTECTED] wrote:
So Pei's comments are in some sense wishes. To be charitable--
maybe I should say beliefs supported by his experience.
But they are not established facts. It remains a possibility,
supported by reasonable evidence,
that language learning may be an