at 4:48 PM, Pei Wang [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
In particular, the result that NARS induction and abduction each
depend on **only one** of their premise truth values ...
Ben,
I'm sure you know it in your mind
, Ben Goertzel [EMAIL PROTECTED] wrote:
Sorry Pei, you are right, I sloppily mis-stated!
What I should have said was:
the result that the NARS induction and abduction *strength* formulas
each depend on **only one** of their premise truth values ...
Anyway, my point
I meant frequency, sorry
Strength is a term Pei used for frequency in some old discussions...
If I were taking an approach more like the one Ben suggests (that is, making
reasonable-sounding assumptions and then working forward rather than
assuming NARS and working backward), I would have kept the
On Fri, Oct 10, 2008 at 6:01 PM, Pei Wang [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 5:52 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
I meant frequency, sorry
Strength is a term Pei used for frequency in some old discussions...
Another correction: strength is never used in any NARS
Pei,
I finally took a moment to actually read your email...
However, the negative evidence of one conclusion is no evidence of the
other conclusion. For example, "Swallows are birds" and "Swallows are
NOT swimmers" suggests "Birds are NOT swimmers", but says nothing
about whether "Swimmers are
is "Swallows are birds" and "Swallows are
NOT swimmers", will the system assign the same lower-than-default
probability to "Birds are swimmers" and "Swimmers are birds"? Again,
I only need a qualitative answer.
Pei
On Fri, Oct 10, 2008 at 7:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Pei,
I
On Fri, Oct 10, 2008 at 8:29 PM, Pei Wang [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 8:03 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this
would
be the case...
(Of course, this kind of example is cognitively
This seems loosely related to the ideas in Section 5.10.6 of the PLN book,
"Truth Value Arithmetic" ...
ben
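A quick numeric sanity check of the Bayes point above (a minimal sketch in
Python; the priors and the joint probability are invented for illustration):

    # Under Bayes' rule, if P(bird) == P(swimmer), the two conditional
    # probabilities must coincide, whatever the joint probability is.
    p_bird = p_swimmer = 0.1       # assumed equal priors (toy numbers)
    p_bird_and_swimmer = 0.02      # arbitrary joint probability

    p_swimmer_given_bird = p_bird_and_swimmer / p_bird
    p_bird_given_swimmer = p_bird_and_swimmer / p_swimmer
    assert p_swimmer_given_bird == p_bird_given_swimmer   # both 0.2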
On Fri, Oct 10, 2008 at 9:04 PM, Abram Demski [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Given those three assumptions, plus the NARS
Abram,
Anyway, perhaps I can try to shed some light on the broader exchange?
My route has been to understand "A is B" as not P(A|B), but instead
P("A is X" | "B is X") plus the extensional equivalent... under this
light, the negative evidence presented by two statements "B is C" and
"A is not C"
I think we're at the stage where a team of a couple dozen could do it in
5-10 years
I repeat - this is outrageous. You don't have the slightest evidence of
progress - you [the collective you] haven't solved a single problem of
general intelligence - a single mode of generalising - so you
And you
can't escape flaws in your reasoning by wearing a lab coat.
Maybe not a lab coat... but how about my trusty wizard's hat??? ;-)
http://i34.tinypic.com/14lmqg0.jpg
On Mon, Oct 6, 2008 at 7:36 PM, Mike Tintner [EMAIL PROTECTED]wrote:
Matthias (cont),
Alternatively, if you'd like *the* creative (& somewhat mathematical)
problem de nos jours - how about designing a bail-out fund/mechanism for
either the US or the world, that will actually work? No
Mike,
by definition a creative/emergent problem is one where you have to bring
about a given effect by finding radically new kinds of objects that move or
relate in radically new kinds of ways - to produce that effect. By
definition, you *do not know which domain is appropriate to solving
On the contrary, it is *you* who repeatedly resort to essentially
*reference to authority* arguments - saying "read my book, my paper" etc
etc - and what basically amounts to the tired line "I have the proof, I
just don't have the time to write it in the margin"
No. I do not claim to have
Hi all,
In preparation for an upcoming (invitation-only, not-organized-by-me)
workshop on "Evaluation and Metrics for Human-Level AI Systems", I
concatenated a number of papers on the evaluation of AGI systems into a
single PDF file (in which the readings are listed alphabetically in order of
file
Maybe all we need is just a simple interface for entering facts...
YKY
I still don't understand why you think a simple interface for entering facts
is so important... Cyc has a great UI for entering facts, and used it to
enter millions of them already ... how far did it get them toward
So the key question is whether there will be enough opensource
contributors with innovative ideas and expertise in AGI...
YKY
It's a gamble ... and I don't yet know if my gamble with OpenCog will pay
off!!
A problem is that to recruit a lot of quality volunteers, you'll first need
to
banter on this forum.
With luck, it would help wring your ideas out and disarm your detractors,
and provide more than a mere writeup - a piece to help sell your concept on
a wider scale.
Steve Richfield
===
On 10/1/08, Ben Goertzel [EMAIL PROTECTED] wrote:
On Wed, Oct 1, 2008
3. I think it is extremely important that we give an AGI none of the bias
about space and time that we seem to have. Our intuitive understanding of
space and time is useful for our life on Earth, but it is completely wrong,
as we know from the theory of relativity and quantum physics.
-Matthias Heger
in
a club, not a scientific discipline. This is of great concern to me. Please
sit back and let this realisation wash over you. It's what I had to do. I
used to think in COMP terms too. And have fun! This is supposed to be fun!
cheers
Colin Hales
Ben Goertzel wrote:
The argument seems wrong
Abram,
thx for restating his argument
Your argument appears to assume computationalism. Here is a numbered
restatement:
1. We have a visual experience of the world.
2. Science says that the information from the retina is insufficient
to compute one.
I do not understand his argument for
On Sun, Oct 5, 2008 at 7:59 PM, Abram Demski [EMAIL PROTECTED] wrote:
Agreed. Colin would need to show the inadequacy of both inborn and
learned bias to show the need for extra input. But I think the more
essential objection is that extra input is still consistent with
computationalism.
cool ... if so, I'd be curious for the references... I'm not totally up on
that area...
ben
On Sun, Oct 5, 2008 at 8:20 PM, Trent Waddington [EMAIL PROTECTED]
wrote:
On Mon, Oct 6, 2008 at 10:03 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Arguably, for instance, camera+lidar gives enough data
On Sun, Oct 5, 2008 at 11:16 PM, Abram Demski [EMAIL PROTECTED] wrote:
Ben,
I think the entanglement possibility is precisely what Colin believes.
That is speculation on my part of course. But it is something like
that. Also, it is possible that quantum computers can do more than
normal
On Fri, Oct 3, 2008 at 9:57 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Fri, 10/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
You seem to misunderstand the notion of a Global Brain, see
http://pespmc1.vub.ac.be/GBRAIFAQ.html
http://en.wikipedia.org/wiki/Global_brain
You are right
On Sat, Oct 4, 2008 at 8:37 PM, Mike Tintner [EMAIL PROTECTED]wrote:
Matt:The problem you describe is to reconstruct this image given the highly
filtered and compressed signals that make it through your visual perceptual
system, like when an artist paints a scene from memory. Are you saying
Hi,
CMR (my proposal) has no centralized control (global brain). It is a
competitive market in which information has negative value. The environment
is a peer-to-peer network where peers receive messages in natural language,
cache a copy, and route them to appropriate experts based on
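The post is cut off before it says what the routing is based on, so the
sketch below assumes simple keyword overlap purely for illustration; every
name in it (route, experts, the topic sets) is invented rather than taken
from CMR:

    # Illustrative-only sketch of routing natural-language messages to
    # experts in a peer-to-peer network. The routing criterion (keyword
    # overlap) is an assumption; the original message is truncated.
    experts = {
        "vision-peer":   {"image", "retina", "camera"},
        "language-peer": {"parse", "sentence", "grammar"},
    }

    def route(message: str) -> str:
        """Pick the expert whose topic set best overlaps the message."""
        words = set(message.lower().split())
        return max(experts, key=lambda peer: len(experts[peer] & words))

    cache = []                        # each peer keeps a copy of what it sees
    msg = "how do we parse this sentence"
    cache.append(msg)
    print(route(msg))                 # -> language-peer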
On Thu, Oct 2, 2008 at 2:02 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Thu, 10/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
I hope not to sound like a broken record here ... but ... not every
narrow AI advance is actually a step toward AGI ...
It is if AGI is billions of narrow experts
I was saying that most
people don't have any idea what I mean when I talk about things like
"interrelated ideological structures in an ambiguous environment", and
that this issue was central to the contemporary problem,
Maybe the reason people don't know what you mean is that your manner
of
No, the mainstream method of extracting knowledge from text (other than
manually) is to ignore word order. In artificial languages, you have to
parse a sentence before you can understand it. In natural language, you have
to understand the sentence before you can parse it.
More exactly: in
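To see concretely what "ignore word order" means here, a bag-of-words
representation (a generic textbook technique, not any particular system
under discussion) collapses sentences that differ only in ordering:

    # A bag-of-words model cannot distinguish "dog bites man" from
    # "man bites dog": exactly the information that word-order-blind
    # extraction methods discard.
    from collections import Counter

    def bag_of_words(text: str) -> Counter:
        return Counter(text.lower().split())

    assert bag_of_words("dog bites man") == bag_of_words("man bites dog")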
On Wed, Oct 1, 2008 at 2:07 PM, Steve Richfield
[EMAIL PROTECTED]wrote:
Ben,
I have been eagerly awaiting such a document. However, the Grand Technical
Guru (i.e. you) is usually NOT the person to write such a thing. Usually, an
associate, user, author, or some such person who is on the user
On Tue, Sep 30, 2008 at 2:58 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On Tue, Sep 30, 2008 at 6:43 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
We are talking about 2 things:
1. Using an ad hoc parser to translate NL to logic
2. Using an AGI to parse NL
I'm not sure what you
Markov chains are one way of doing the math for spreading activation, but
e.g.
neural nets are another...
On Tue, Sep 30, 2008 at 1:23 AM, Linas Vepstas [EMAIL PROTECTED]wrote:
2008/9/29 Ben Goertzel [EMAIL PROTECTED]:
Stephen,
Yes, I think your spreading-activation approach makes sense
publicized yet ...
but it does already
address this particular issue...)
ben
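For concreteness, one spreading-activation step can be written as an
activation vector multiplied by a row-stochastic transition matrix, which is
the Markov-chain view Ben mentions (the three-node graph and its weights
below are invented for illustration):

    import numpy as np

    # Row-stochastic transition matrix over three concepts A, B, C.
    T = np.array([[0.0, 0.7, 0.3],
                  [0.5, 0.0, 0.5],
                  [1.0, 0.0, 0.0]])

    activation = np.array([1.0, 0.0, 0.0])   # all activation starts on A
    for _ in range(3):                        # spread for a few steps
        activation = activation @ T           # one Markov transition
    print(activation)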
On Tue, Sep 30, 2008 at 12:23 PM, Terren Suydam [EMAIL PROTECTED] wrote:
Hi Ben,
If Richard Loosemore is half-right, how is he half-wrong?
Terren
On Tue, Sep 30, 2008 at 12:45 PM, Mike Tintner [EMAIL PROTECTED]wrote:
Ben: the reason AGI is so hard has to do with Santa Fe Institute style
complexity ...
Intelligence is not fundamentally grounded in any particular mechanism but
rather in emergent structures
and dynamics that arise in
On Tue, Sep 30, 2008 at 2:08 PM, Jim Bromer [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
To give a brief answer to one of your questions: analogy is
mathematically a matter of finding mappings that match certain
constraints. The traditional AI approach to this would
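A minimal sketch of "analogy as a constraint-satisfying mapping": a
brute-force search for a map between two tiny relational domains that
preserves every relation. The solar-system/atom domains are the classic
textbook example, supplied here only for illustration:

    from itertools import permutations

    source = {("orbits", "planet", "sun"), ("smaller", "planet", "sun")}
    target = {("orbits", "electron", "nucleus"),
              ("smaller", "electron", "nucleus")}
    src_objs = ["planet", "sun"]
    tgt_objs = ["electron", "nucleus"]

    # Try every object mapping; keep those preserving all relations.
    for perm in permutations(tgt_objs):
        m = dict(zip(src_objs, perm))
        if all((rel, m[a], m[b]) in target for (rel, a, b) in source):
            print(m)   # {'planet': 'electron', 'sun': 'nucleus'}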
On Tue, Sep 30, 2008 at 2:43 PM, Lukasz Stafiniak [EMAIL PROTECTED]wrote:
On Tue, Sep 30, 2008 at 3:38 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Markov chains are one way of doing the math for spreading activation, but
e.g.
neural nets are another...
But these are related things
And if you look at your "brief answer" para, you will find that while you
talk of mappings and constraints (which are not necessarily AGI at all),
you make no mention in any form of how complexity applies to the crossing of
hitherto unconnected domains [or matrices, frames etc], which, of
On Tue, Sep 30, 2008 at 4:18 PM, Mike Tintner [EMAIL PROTECTED]wrote:
Ben,
Well, funny perhaps to some. But nothing to do with AGI - which has
nothing to do with well-defined problems.
I wonder if you are misunderstanding his use of terminology.
How about the problem of gathering as much
You have already provided one very suitable example of a general AGI
problem - how is your pet, having learnt one domain (to play fetch), to
use that knowledge to cross into another domain, to learn/discover the
game of hide-and-seek? But I have repeatedly asked you to give me your
it right now I'm more motivated
personally to spend time writing new technical stuff than writing better
expositions of stuff I already wrote down ;-)
ben g
On Mon, Sep 29, 2008 at 4:23 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On Mon, Sep 29, 2008 at 4:10 AM, Abram Demski [EMAIL PROTECTED]
wrote:
How much will you focus on natural language? It sounds like you want
that to be fairly minimal at first. My opinion is that chatbot-type
activation has quiesced.
-Steve
Stephen L. Reed
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860
On Mon, Sep 29, 2008 at 6:28 PM, Lukasz Stafiniak [EMAIL PROTECTED]wrote:
On Mon, Sep 29, 2008 at 11:33 PM, Eric Burton [EMAIL PROTECTED] wrote:
It uses something called MontyLingua. Does anyone know anything about
this? There's a site at
On Mon, Sep 29, 2008 at 6:03 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On Mon, Sep 29, 2008 at 9:18 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Parsing English sentences into sets of formal-logic relationships is not
extremely hard given current technology.
But the only feasible
I mean that a more productive approach would be to try to understand why
the problem is so hard.
IMO Richard Loosemore is half-right ... the reason AGI is so hard has to do
with Santa Fe Institute style
complexity ...
Intelligence is not fundamentally grounded in any particular mechanism
Cognitive linguistics also lacks a true developmental model of language
acquisition that goes beyond the first few years of life, and can embrace
all those several - and, I'm quite sure, absolutely necessary - stages of
mastering language and building a world picture.
Tomasello's theory of
My guess is that Schank and AI generally start from a technological POV,
conceiving of *particular* approaches to texts that they can implement,
rather than first attempting a *general* overview.
I can't speak for Schank, who was however working a long time ago when
cognitive science was
that they *are* explicitly devoting a
lot of resources to the problem ...
ben g
On Sun, Sep 28, 2008 at 9:38 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:
FYI, Cyc has a natural language front end and a lot of folks have been
working
On Sun, Sep 28, 2008 at 10:00 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Yes, the big weakness of the whole Cyc framework is learning. Their logic
engine seems to be pretty poor at incremental, experiential learning
IMO Cyc's problem is due to:
1. the lack of a well-developed probabilistic/fuzzy logic (thus
brittleness)
Cyc has local Bayes nets within their knowledge base...
2. the emphasis on ontology (plain facts) rather than production rules
While I agree that formulating knowledge in terms
I mean assumptions like symmetric treatment of intension and extension,
which are technical mathematical assumptions...
But they are still not assumptions about domain knowledge, like node
probability.
Well, in PLN the balance between intensional and extensional knowledge is
On Wed, Sep 24, 2008 at 11:43 AM, Pei Wang [EMAIL PROTECTED] wrote:
The distinction between object-level and meta-level knowledge is very
clear in NARS, though I won't push this issue any further.
yes, but some of the things you push into the meta-level knowledge in NARS,
seem more like the
I guess my previous question was not clear enough: if the only domain
knowledge PLN has is
"Ben is an author of a book on AGI" <tv1>
"This dude is an author of a book on AGI" <tv2>
and
"Ben is odd" <tv1>
"This dude is odd" <tv2>
will the system derive anything?
Yes, via making
OK, we're done with AGI, time to move on to discussion of psychic powers 8-D
On Wed, Sep 24, 2008 at 12:17 PM, Pei Wang [EMAIL PROTECTED] wrote:
Thanks for the detailed answer. Now I'm happy, and we can turn to
something else. ;-)
Pei
On Wed, Sep 24, 2008 at 12:09 PM, Ben Goertzel [EMAIL
If we have
Ben == AGI-author <s1>
Dude == AGI-author <s2>
|-
Dude == Ben <s3>
the PLN abduction rule would yield
s3 = s1*s2 + w*(1-s1)*(1-s2)
But ... before we move on to psychic powers, let me note that this PLN
abduction strength rule (simplified for the case of equal node
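The rule as quoted is easy to play with; since the message is cut off before
w is defined, the sketch below leaves it as a free parameter:

    # Simplified PLN abduction strength rule from the message above;
    # w's definition is truncated in the original, so it is a free
    # parameter here, and the inputs are toy numbers.
    def abduction_strength(s1: float, s2: float, w: float) -> float:
        return s1 * s2 + w * (1 - s1) * (1 - s2)

    print(abduction_strength(0.9, 0.8, 0.5))   # 0.72 + 0.5*0.1*0.2 = 0.73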
Guesses, systematically managed, may help on the way from definite premises
to definite conclusions...
ben g
On Tue, Sep 23, 2008 at 3:31 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On Thu, Sep 18, 2008 at 3:06 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Prolog is not fast
Yes. One of my biggest practical complaints with NARS is that the
induction
and abduction truth value formulas don't make that much sense to me.
I guess since you are trained as a mathematician, your sense has
been formalized by probability theory to some extent. ;-)
Actually, the main
PLN needs to make assumptions about node probability in this case; but
NARS
also makes assumptions, it's just that NARS's assumptions are more deeply
hidden in the formalism...
If you mean assumptions like insufficient knowledge and resources,
you are right, but that is not at the same
On Tue, Sep 23, 2008 at 9:28 PM, Pei Wang [EMAIL PROTECTED] wrote:
On Tue, Sep 23, 2008 at 7:26 PM, Abram Demski [EMAIL PROTECTED]
wrote:
Wow! I did not mean to stir up such an argument between you two!!
Abram: This argument has been going on for about 10 years, with some
"on" periods and
I think it's mathematically and conceptually clear that for a system with
unbounded resources, probability theory is the right way to reason. However,
if you look
at Cox's axioms
http://en.wikipedia.org/wiki/Cox%27s_theorem
you'll see that the third one (consistency) cannot reasonably be
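For reference (my paraphrase from memory, not part of the post): the axiom
in question assumes the plausibility of a conjunction decomposes as

    p(A and B | C) = F( p(A | C), p(B | A and C) )

and consistency (getting the same answer however a conjunction is
bracketed) forces F to be associative, F(x, F(y, z)) = F(F(x, y), z), whose
well-behaved solutions are, up to rescaling, exactly the product rule of
probability theory.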
If there were evidence for a
low par, there would be an effect in the direction you want. (It might
be way too small, though?)
--Abram
On Sun, Sep 21, 2008 at 10:46 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
On Sun, Sep 21, 2008 at 10:43 PM, Abram Demski [EMAIL PROTECTED]
wrote
Sure, but it is a consistent extension; {A}-statements have a strongly
NARS-like semantics, so we know they won't just mess everything up.
On Mon, Sep 22, 2008 at 11:31 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Of course ... but then you are not doing NARS inference
, 2008 at 1:28 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
The {A} statements are consistent with NARS, but the existing NARS
inference
rules don't use these statements...
A related train of thought has occurred to me...
In PLN we explicitly have both intensional and extensional
Hi Pei,
Assuming 4 input judgments, with the same default confidence value (0.9):
(1) {Ben} --> AGI-author <1.0;0.9>
(2) {dude-101} --> AGI-author <1.0;0.9>
(3) {Ben} --> odd-people <1.0;0.9>
(4) {dude-102} --> odd-people <1.0;0.9>
From (1) and (2), by abduction, NARS derives (5)
(5) {dude-101} -->
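Presumably (5) concludes {dude-101} --> {Ben}, though the message is cut
off. The sketch below reconstructs the NARS abduction truth function from
memory (treat it as an approximation, not an authoritative statement of
NARS); note how the derived frequency depends on only one premise frequency,
which is the point debated at the top of this thread:

    # NARS abduction, reconstructed from Pei Wang's published truth
    # functions: from P --> M <f1;c1> and S --> M <f2;c2>, derive
    # S --> P, where k is the evidential horizon (usually 1).
    def nars_abduction(f1, c1, f2, c2, k=1.0):
        w_pos = f1 * c1 * f2 * c2      # positive evidence
        w = f1 * c1 * c2               # total evidence
        return w_pos / w, w / (w + k)  # (frequency, confidence)

    # Premises (1) and (2) above, each <1.0;0.9>:
    f, c = nars_abduction(1.0, 0.9, 1.0, 0.9)
    print(f, c)   # frequency 1.0 (= f2), confidence 0.81/1.81 ~ 0.45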
See
http://goertzel.org/agiq.pdf
for an essay I just wrote on this topic...
Comments actively solicited!!
ben g
Now if you want to compare gzip, a chimpanzee, and a 2 year old child using
language prediction as your IQ test, then I would say that gzip falls in the
middle. A chimpanzee has no language model, so it is lowest. A 2 year old
child can identify word boundaries in continuous speech, can
I'm not building AGI. (That is a $1 quadrillion problem). I'm studying
algorithms for learning language. Text compression is a useful tool for
measuring progress (although not for vision).
OK, but the focus of this list is supposed to be AGI, right ... so I suppose
I should be forgiven for
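For what it's worth, the compression-as-progress-measure idea is easy to
demonstrate with off-the-shelf tools; here zlib stands in (crudely) for the
actual language models under discussion, and the text is invented:

    import zlib

    # Fewer bits per character implies a better implicit language model.
    text = ("the quick brown fox jumps over the lazy dog " * 50).encode()
    bpc = 8 * len(zlib.compress(text)) / len(text)
    print(f"{bpc:.2f} bits per character")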
yes, but your cost estimate is based on some very odd and specialized
assumptions regarding AGI architecture!!!
On Sun, Sep 21, 2008 at 8:12 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
That seems a pretty sketchy anti-AGI argument
On Sun, Sep 21, 2008 at 8:08 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Text compression is IMHO a terrible way of measuring incremental progress
toward AGI. Of course it may be very valuable for other purposes...
It is a way
... ;-)
ben g
On Sun, Sep 21, 2008 at 9:54 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
yes, but your cost estimate is based on some very odd and specialized
assumptions regarding AGI architecture!!!
As I explained, my cost estimate
On Sun, Sep 21, 2008 at 10:43 PM, Abram Demski [EMAIL PROTECTED]wrote:
The calculation in which I sum up a bunch of pairs is equivalent to
doing NARS induction + abduction with a final big revision at the end
to combine all the accumulated evidence. But, like I said, I need to
provide a more
no
proper probabilistic interpretation. But, I think I have found one
that works OK. Not perfectly. There are some differences, but the
similarity is striking (at least to me).
I imagine that what I have come up with is not too different from what
Ben Goertzel and Pei Wang have already hashed out
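The "final big revision" Abram refers to is, as I recall the published NARS
rule (hedged, reconstructed from memory), simple pooling of evidence from
independent sources:

    # NARS revision: convert (frequency, confidence) back to evidence
    # counts, add them, convert back. k is the evidential horizon.
    def to_evidence(f, c, k=1.0):
        w = k * c / (1 - c)            # total evidence from confidence
        return f * w, w                # (positive, total)

    def revision(f1, c1, f2, c2, k=1.0):
        p1, w1 = to_evidence(f1, c1, k)
        p2, w2 = to_evidence(f2, c2, k)
        w_pos, w = p1 + p2, w1 + w2
        return w_pos / w, w / (w + k)  # pooled (frequency, confidence)

    print(revision(1.0, 0.45, 0.8, 0.45))   # toy inputs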
matter, please feel free to run the idea by co-organizer James Clement
[EMAIL PROTECTED]
On Sat, Sep 20, 2008 at 4:44 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Fri, 9/19/08, Jan Klauck [EMAIL PROTECTED] wrote:
Formal logic doesn't scale up very well in humans. That's why this
kind of reasoning is so unpopular. Our capacities are that
small and we connect to other human
I haven't read the PLN book yet (though I downloaded a copy, thanks!),
but at present I don't see why term probabilities are needed... unless
inheritance relations A inh B are interpreted as conditional
probabilities A given B. I am not interpreting them that way-- I am
just treating
And the definition 3.7 that you mentioned *does* match up, perfectly,
when the {w+, w} truth-value is interpreted as a way of representing
the likelihood density function of the prob_inh. Easy! The challenge
is section 4.4 in the paper you reference: syllogisms. The way
evidence is spread
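To make the interpretation concrete (my reconstruction, hedged): reading a
{w+, w} truth value as w+ positive pieces of evidence out of w total induces
a Beta-shaped likelihood over the unknown inheritance probability p,

    L(p | w+, w)  proportional to  p^(w+) * (1 - p)^(w - w+)

which would be the "likelihood density function of the prob_inh" referred to
above.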
On Sat, Sep 20, 2008 at 6:24 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sat, 9/20/08, Ben Goertzel [EMAIL PROTECTED] wrote:
If formal reasoning were a solved problem in AI, then we would have
theorem-provers that could prove deep, complex theorems unassisted. We
don't. This indicates
Beside the problem you mentioned, there are other issues. Let me start
at the basic ones:
(1) In probability theory, an event E has a constant probability P(E)
(which can be unknown). Given the assumption of insufficient knowledge
and resources, in NARS P(A-->B) would change over time, when
To pursue an overused metaphor, to me that's sort of like trying to
understand flight by carefully studying the most effective high-jumpers.
OK, you might learn something, but you're not getting at the crux of the
problem...
A more appropriate metaphor is that text compression is the