One thing I don't get, YKY, is why you think you are going to take
textbook methods that have already been shown to fail, and somehow
make them work. Can't you see that many others have tried to use
FOL and ILP already, and they've run into intractable combinatorial
explosion problems?
Some may
On Tue, Jun 3, 2008 at 11:08 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
We can tell what parts of the brain tend to be involved in what sorts
of activities, from fMRI. Not much else.
Puzzling out complex neural functions often involves combining fMRI
data from humans with data from
Also, YKY, I can't help but note that your current approach seems
extremely similar to Texai (which seems quite similar to Cyc to me),
more so than to OpenCog Prime (my proposal for a Novamente-like system
built on OpenCog, not yet fully documented but I'm actively working on
the docs now).
I
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Also, YKY, I can't help but note that your current approach seems
extremely similar to Texai (which seems quite similar to Cyc to me),
more so than to OpenCog Prime (my proposal for a Novamente-like system
built on OpenCog, not yet fully
Hi Ben,
Note that I did not pick FOL as my starting point because I wanted to
go against you, or be a troublemaker. I chose it because that's what
the textbooks I read were using. There is nothing personal here.
It's just like Chinese being my first language because I was born in
China. I
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
1) representing uncertainties in a way that leads to tractable, meaningful
logical manipulations. Indefinite probabilities achieve this. I'm not saying
they're the only way to achieve this, but I'll argue that single-number,
Walley-interval,
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
One thing I don't get, YKY, is why you think you are going to take
textbook methods that have already been shown to fail, and somehow
make them work. Can't you see that many others have tried to use
FOL and ILP already, and they've run into
As we have discussed a while back on the OpenCog mail list, I would like to
see a RDF interface to some level of the OpenCog Atom Table. I think that
would suit both YKY and myself. Our discussion went so far as to consider
ways to assign URIs to appropriate atoms.
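As a minimal sketch of that idea (the base URI, the atom-naming scheme, and the ConceptNode example are my assumptions, not an agreed design), one could mint a URI per Atom Table entry and emit the result as N-Triples:

```python
# Assumed base URI; the real scheme for OpenCog atoms was never settled.
BASE = "http://opencog.example.org/atom/"

def atom_uri(atom_type, name):
    """Mint a URI for an atom from its type and name (hypothetical scheme)."""
    return f"<{BASE}{atom_type}/{name}>"

def triple(subj, pred, obj):
    """Format one N-Triples statement."""
    return f"{subj} {pred} {obj} ."

# Expose a hypothetical ConceptNode named "cat" as two RDF triples.
cat = atom_uri("ConceptNode", "cat")
lines = [
    triple(cat, "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>",
           f"<{BASE}ConceptNode>"),
    triple(cat, f"<{BASE}name>", '"cat"'),
]
print("\n".join(lines))
```

Any real interface would also have to decide how links (as opposed to nodes) and truth values map onto RDF, which this sketch leaves open.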
Yes, I still think
First of all, the *tractability* of your algorithm depends on
heuristics that you design, which are separable from the underlying
probabilistic logic calculus. In your mind, these two things may be
conflated.
Indefinite probabilities DO NOT imply faster inference.
Domain-specific heuristics
Hi Ben.
Thanks for suggesting that YKY collaborate with Texai because of our similar
approaches to knowledge representation. I believe that Cyc's lack of AGI
progress is not due to their choice of FOL but rather that Cycorp emphasizes
the hand-crafting of commonsense knowledge about things
You have done something new, but not so new as to be in a totally
different dimension.
YKY
I have some ideas more like that too but I've postponed trying to sell them
to others, for the moment ;-) ... it's hard enough to sell fairly basic stuff
like PLN ...
Look for some stuff on the
Modus ponens can be defined in a few ways.
If you take the binary logic definition:
A -> B means ~A v B
you can translate this into probabilities but the result is a mess. I
have analysed this in detail but it's complicated. In short, this
definition is incompatible with probability
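The incompatibility is easy to see with a toy joint distribution (the numbers below are invented for illustration): the material-implication probability P(~A v B) can be high even when the conditional P(B|A) is low.

```python
# Toy joint distribution over (A, B); the numbers are invented.
joint = {
    (True, True): 0.01,
    (True, False): 0.04,
    (False, True): 0.50,
    (False, False): 0.45,
}

# Material-implication reading of A -> B: P(~A v B)
p_material = sum(p for (a, b), p in joint.items() if (not a) or b)

# Conditional reading: P(B|A)
p_a = sum(p for (a, _), p in joint.items() if a)
p_b_given_a = joint[(True, True)] / p_a

print(round(p_material, 2))   # 0.96 -- the "implication" looks nearly certain
print(round(p_b_given_a, 2))  # 0.2  -- yet B given A is unlikely
```

The gap between the two numbers shows why treating A -> B as ~A v B yields a probability that says little about inferring B from A.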
Thanks. I must confess to my usual confusion/ignorance here - but perhaps I
should really have talked of solid rather than 3-D mapping.
When you sit in a familiar chair, you have, I presume, a solid mapping (or
perhaps the word should be moulding) - distributed over your body, of how
it can
I mean this form
http://en.wikipedia.org/wiki/Modus_ponens
i.e.
A implies B
A
|-
B
Probabilistically, this means you have
P(B|A)
P(A)
and want to infer from these
P(B)
under the most direct interpretation...
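Under this direct interpretation, knowing P(B|A) and P(A) pins P(B) down only to an interval, because P(B|~A) is unconstrained. A minimal sketch (the example numbers are arbitrary):

```python
def modus_ponens_bounds(p_b_given_a, p_a):
    """Interval for P(B) given P(B|A) and P(A).

    P(B) = P(B|A)P(A) + P(B|~A)(1 - P(A)), and P(B|~A) can be anywhere
    in [0, 1], so P(B) lies in [P(B|A)P(A), P(B|A)P(A) + 1 - P(A)].
    """
    lower = p_b_given_a * p_a
    return lower, lower + (1.0 - p_a)

low, high = modus_ponens_bounds(0.9, 0.8)
print(round(low, 2), round(high, 2))  # 0.72 0.92
```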
ben
On Wed, Jun 4, 2008 at 12:08 AM, YKY (Yan King Yin)
[EMAIL PROTECTED]
Ben,
If we don't work out the correspondence (even approximately) between
FOL and term logic, this conversation would not be very fruitful. I
don't even know what you're doing with PLN. I suggest we try to work
it out here step by step. If your approach really makes sense to me,
you will gain
Vladimir,
On 6/3/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Tue, Jun 3, 2008 at 6:59 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
Note that modern processors are ~3 orders of magnitude faster than a
KA10, and my 10K architecture would provide another 4 orders of
magnitude, for a
Propositions are not the only things that can have truth values...
I don't have time to carry out a detailed mathematical discussion of
this right now...
We're about to (this week) finalize the PLN book draft ... I'll send
you a pre-publication PDF early next week and then you can read it and
we
On 6/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Propositions are not the only things that can have truth values...
Terms in term logic can have truth values. But such terms
correspond to propositions in FOL. There is absolutely no confusion
here.
I don't have time to carry out a detailed
Strongly disagree. Computational neuroscience is moving as fast as any field
of science has ever moved. Computer hardware is improving as fast as any
field of technology has ever improved.
I would be EXTREMELY surprised if neuron-level simulation were necessary to
get human-level
hello ben
If I can have a PDF draft, I thank you very much.
bruno
- Original message
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 3, 2008, 6:33:02 PM
Subject: Re: [agi] OpenCog's logic compared to FOL?
Propositions are not the only things that can
On 6/3/08, Stephen Reed [EMAIL PROTECTED] wrote:
I believe that the crisp (i.e. certain or very near certain) KR for these
domains will facilitate the use of FOL inference (e.g. subsumption) when I
need it to supplement the current Texai spreading activation techniques for
word sense
On 6/3/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Do you have any insights on how this learning will be done?
That research area is known as ILP (inductive logic programming).
It's very powerful in the sense that almost anything (e.g., any Prolog
program) can be learned that way. But the problem
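A toy generate-and-test sketch of the ILP idea (the family facts, target predicate, and two-clause hypothesis space are invented for illustration; real ILP systems search far larger spaces, which is exactly where the combinatorial explosion discussed in this thread arises):

```python
# Hypothetical background knowledge and examples for learning grandparent/2.
parent = {("alice", "bob"), ("bob", "carol"), ("alice", "dan")}
positives = {("alice", "carol")}                  # grandparent(X, Z) holds
negatives = {("alice", "bob"), ("bob", "alice")}  # grandparent(X, Z) fails
people = {p for pair in parent for p in pair}

# Tiny hypothesis space: two candidate clause bodies for grandparent(X, Z).
candidates = {
    "parent(X,Z)": lambda x, z: (x, z) in parent,
    "parent(X,Y), parent(Y,Z)": lambda x, z: any(
        (x, y) in parent and (y, z) in parent for y in people),
}

# Generate-and-test: accept a body covering all positives and no negatives.
for body_name, covers in candidates.items():
    if all(covers(x, z) for x, z in positives) and \
       not any(covers(x, z) for x, z in negatives):
        print("learned: grandparent(X,Z) :-", body_name)
# prints: learned: grandparent(X,Z) :- parent(X,Y), parent(Y,Z)
```

With realistic predicate and variable counts, the space of candidate bodies grows combinatorially, which is the tractability problem raised earlier in the thread.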
JOHN ROSE
I suppose the optimal approach to AGI has to involve some degree of
connectionism -- but the aim is to find structures isomorphic to
connectionist graphs that are more efficient. Many things in nature cannot
be evolved; for example, few if any animals have wheels. Evolved structures go
YKY said:
1. Probabilistic inference cannot be grafted onto crisp logic easily. The
changes may be so great that much of the original work will be rendered useless.
Agreed. However, I hope that by the time probabilistic inference is taught to
Texai by mentors, it will be easy to supersede
From: Brad Paulsen [mailto:[EMAIL PROTECTED]
John wrote:
A rock is either conscious or not conscious.
Excluding the middle, are we?
Conscious, not conscious or null?
I don't want to put words into Ben and company's mouths, but I think what
they are trying to do with PLN is to
From: Ed Porter [mailto:[EMAIL PROTECTED]
ED PORTER
I am not an expert at computational efficiency, but I think graph
structures like semantic nets are probably close to as efficient as
possible given the type of connectionism they are representing and the
type of computing
--- On Tue, 6/3/08, John G. Rose [EMAIL PROTECTED] wrote:
Actually on further thought about this conscious rock, I
want to take that particular rock and put it through some
further tests to absolutely verify with a high degree of
confidence that there may not be some trace amount of
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Subject: Are rocks conscious? (was RE: [agi] Did this message get
completely lost?)
--- On Tue, 6/3/08, John G. Rose [EMAIL PROTECTED] wrote:
Actually on further thought about this conscious rock, I
want to take that particular rock and put it
On 6/4/08, Stephen Reed [EMAIL PROTECTED] wrote:
All of the work to date on program generation, macro processing,
application configuration via parameters, compilation, assembly, and program
optimization has used crisp knowledge representation (i.e. non-probabilistic
data structures).
John G. Rose wrote:
You see what I'm getting at: in order to be 100% sure, any failed tests of
the above would require further scientific analysis and investigation to
achieve proper non-conscious certification.
Not
YKY said:
How about these scenarios:
1. If a task is to be repeated 'many' times, use a loop. If only 'a few'
times, write it out directly. -- this requires fuzziness
2. The gain of using algorithm X on this problem is likely to be small. --
requires probability
Agreed. When Texai
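YKY's first scenario could be sketched as a fuzzy membership function (the thresholds and the 0.5 cutoff are arbitrary illustrative choices, not anything from Texai or PLN):

```python
def many_times(n, low=3, high=10):
    """Fuzzy membership of n in 'many': 0 below low, 1 above high,
    linear in between. The thresholds are illustrative assumptions."""
    if n <= low:
        return 0.0
    if n >= high:
        return 1.0
    return (n - low) / (high - low)

# Decision sketch: emit a loop when the repetition count is 'many'.
for n in (2, 5, 20):
    print(n, "loop" if many_times(n) >= 0.5 else "unroll")
# prints: 2 unroll / 5 unroll / 20 loop
```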
Josh,
On 6/3/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Strongly disagree. Computational neuroscience is moving as fast as any
field of science has ever moved.
Perhaps you are seeing something that I am not. There are ~200 different
types of neurons, but no one seems to understand what
Hi All,
An excellent 20-minute TED talk from Susan Blackmore (she's a brilliant
speaker!)
http://www.ted.com/talks/view/id/269
I considered posting to the singularity list instead, but Blackmore's
theoretical talk is much more germane to AGI than any other
singularity-related technology.
-dave