Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
m prover; yes, absolutely, so long as the mathematical > entity is understandable by the definition I gave. Unfortunately, I > still have some work to do, because as far as I can tell that > definition does not explain how uncountable sets are meaningful... > (maybe it does and I am just

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
ble by the definition I gave. Unfortunately, I > still have some work to do, because as far as I can tell that > definition does not explain how uncountable sets are meaningful... > (maybe it does and I am just missing something...) > > --Abram > > On Wed, Oct 22, 2008 at 12:30

Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Ben Goertzel
> > >>> > Not everything that is a necessary capability of a completed human-level, > roughly human-like AGI, is a sensible "first step" toward a human-level, > roughly human-like AGI > > <<< > > This is surely true. But let's say someone wants to develop a car. Doesn't > it makes sense first to d

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
> > So, a statement is meaningful if it has procedural deductive meaning. > We *understand* a statement if we are capable of carrying out the > corresponding deductive procedure. A statement is *true* if carrying > out that deductive procedure only produces more true statements. We > *believe* a st

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
or very short phrase for > each Lojban word. :-) > > Actually, h . . . . a Lojban dictionary would probably help me focus my > efforts a bit better and highlight things that I may have missed . . . . do > you have a preferred dictionary or resource? (Google has too many for m

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
> > The problem is to gradually improve overall causal model of > environment (and its application for control), including language and > dynamics of the world. Better model allows more detailed experience, > and so through having a better inbuilt model of an aspect of > environment, such as langua

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
> > It looks like all this "disambiguation" by moving to a more formal > language is about sweeping the problem under the rug, removing the > need for uncertain reasoning from surface levels of syntax and > semantics, to remember about it 10 years later, retouch the most > annoying holes with simpl

[agi] Fun with first-order inference in OpenCog ...

2008-10-22 Thread Ben Goertzel
http://brainwave.opencog.org/ -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI [EMAIL PROTECTED] "A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
(joke) On Wed, Oct 22, 2008 at 11:11 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser <[EMAIL PROTECTED]> wrote: > >> >> I don't want to diss the personal value of logically inconsistent >> thoughts

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser <[EMAIL PROTECTED]> wrote: > >> I don't want to diss the personal value of logically inconsistent > thoughts. But I doubt their scientific and engineering value. > It doesn't seem to make sense that something would have personal value and > then not ha

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
> > Personally, rather than starting with NLP, I think that we're going to need > to start with a formal language that is a disambiguated subset of English IMHO that is an almost hopeless approach; ambiguity is too integral to English or any natural language ... e.g. preposition ambiguity. If you

Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Ben Goertzel
In brief --> You've agreed that even a stupid person is a general > intelligence. By "do science", I (originally and still) meant the > amalgamation that is probably best expressed as a combination of critical > thinking and/or the scientific method. My point was a combination of both > a) to be

Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Ben Goertzel
rchive/303/=now> > <https://www.listbox.com/member/archive/rss/303/>| > Modify<https://www.listbox.com/member/?&;>Your Subscription > > <http://www.listbox.com> > > > -- > *agi* | Archives <https://www.listbox.com/memb

[agi] A huge amount of math now in standard first-order predicate logic format!

2008-10-22 Thread Ben Goertzel
g with all this data would be a mighty test of adaptive inference control ;-O ben g -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI [EMAIL PROTECTED] "A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
> Isn't it just like thinking "This is an image that is way too detailed for > me to ever see"? > > Charles Griffiths > > --- On *Tue, 10/21/08, Ben Goertzel <[EMAIL PROTECTED]>* wrote: > > From: Ben Goertzel <[EMAIL PROTECTED]> > Subject: Re:

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
On Tue, Oct 21, 2008 at 10:11 PM, Abram Demski <[EMAIL PROTECTED]>wrote: > > It doesn't, because **I see no evidence that humans can > > understand the semantics of formal system in X in any sense that > > a digital computer program cannot** > > I agree with you there. Our disagreement is about wh

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
Abram, > To re-explain: We might construct generalizations of AIXI that use a > broader range of models. Specifically, it seems reasonable to try > models that are extensions of first-order arithmetic, such as > second-order arithmetic (analysis), ZF-set theory... (Models in > first-order logic o

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
> can, > > and that its motives are benevolent, and that it has a good > understanding > > of our desires... that should suffice. And I think we'll be able to do > > considerably better than that.

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
Mark W wrote: What were we disagreeing on again? > This conversation has drifted into interesting issues in the philosophy of science, most of which you and I seem to substantially agree on. However, the point I took issue with was your claim that a stupid person could be taught to effectively

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
Incorrect things are wrapped up with correct things in peoples' minds. However, pure slowness at learning is another part of the problem ... > > It's also particularly interesting when you compare it to information > theory where the sole cost is in erasing a bit, not in setting

Re: [agi] natural language -> algebra (was Defining AGI)

2008-10-21 Thread Ben Goertzel
ced with open arms by the whole > list. > > Terren > > --- On Tue, 10/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > OK, but I didn't think we were talking about what is "possible in > principle" but may be unrealizable in practice... > > It'

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
Mark, >> As you asked for references I will give you two: > Thank you for setting a good example by including references but the > contrast between the two is far better drawn in *For and Against > Method*(ISB

Re: [agi] Re: Value of philosophy

2008-10-21 Thread Ben Goertzel
pulled > together and organized. I would be happier because the feasibility issues > would all be together for anyone entering AGI to consider, and you would be > happier because your technical section would be undisturbed by > "philosophical" discussion, except for a few hyperlink

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
he provided > examples regarded as indisputable instances of progress and showed how the > political structures of the time fought against or suppressed them). But > his rants against structure and formalism (or, purportedly, for freedom and > humanitarianism) are simply garbage in my

Re: [agi] natural language -> algebra (was Defining AGI)

2008-10-21 Thread Ben Goertzel
> > > Here's my simple proof: algebra, or any other formal language for that > matter, is expressible in natural language, if inefficiently. > > Words like quantity, sum, multiple, equals, and so on, are capable of > conveying the same meaning that the sentence "x*3 = y" conveys. The rules > for ma
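The claim in this snippet — that formal notation like "x*3 = y" is expressible, if verbosely, in natural language — can be made concrete with a toy word-for-word translator. A minimal sketch; the phrase book below is my own illustrative assumption, not anything proposed in the thread:

```python
# Toy controlled-English phrase book (illustrative assumption, not from the
# thread): maps operator-words and number-words onto formal symbols.
OPS = {"plus": "+", "minus": "-", "times": "*", "equals": "="}
NUMBERS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4"}

def to_formula(sentence):
    """Translate a controlled-English arithmetic sentence into formal
    notation, word by word; unknown words pass through as variable names."""
    tokens = [OPS.get(w) or NUMBERS.get(w) or w for w in sentence.lower().split()]
    return " ".join(tokens)

print(to_formula("x times three equals y"))  # x * 3 = y
```

The inefficiency Terren concedes shows up immediately: the English form is longer and only works for the fixed vocabulary in the phrase book.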

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel

Re: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Ben Goertzel

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
> > > But, worse, there are mathematically well-defined entities that are > not even enumerable or co-enumerable, and in no sense seem computable. > Of course, any axiomatic theory of these objects *is* enumerable and > therefore intuitively computable (but technically only computably > enumerable)
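The enumerability distinction in this snippet can be illustrated concretely: a set is computably enumerable when a single process eventually lists each of its members, which is what dovetailing over increasing step budgets achieves. A minimal sketch, with toy "programs" modeled as Python generator factories (my own modeling assumption):

```python
def dovetail(programs, max_budget):
    """Dovetailing sketch: run each 'program' (a generator factory) with
    increasing step budgets. Every program that halts is eventually
    reported, illustrating that the halting set is computably enumerable;
    non-halting programs are simply never listed."""
    halted = []
    for budget in range(1, max_budget + 1):
        for i, make in enumerate(programs):
            if i in halted:
                continue
            gen = make()
            try:
                for _ in range(budget):
                    next(gen)
            except StopIteration:  # ran to completion within this budget
                halted.append(i)
    return halted

def finite():          # halts after yielding three times
    yield from range(3)

def forever():         # never halts
    while True:
        yield

print(dovetail([finite, forever], max_budget=10))  # [0]
```

Note the asymmetry the thread is pointing at: membership of `finite` is confirmed after finitely many rounds, but no finite number of rounds ever certifies that `forever` does not halt.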

Re: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Ben Goertzel
Porter > > > > > > -Original Message- > *From:* Ben Goertzel [mailto:[EMAIL PROTECTED] > *Sent:* Monday, October 20, 2008 10:52 PM > *To:* agi@v2.listbox.com > *Subject:* Re: [agi] Who is smart enough to answer this question? > > > > > But, supp

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel

Re: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Ben Goertzel

Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Ben Goertzel
tractor has around .5n neurons switched on. > > In a constant-weight code, I believe the numbers estimated tell you the > number of sets where the Hamming distance is greater than or equal to d. > The idea in coding is that the code strings denoting distinct messages > should not be clo

Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
On Mon, Oct 20, 2008 at 10:30 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Mon, 10/20/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > I do have a limited argument against these ideas, which has to do with > > language. My point is that, if you take a

Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
On Mon, Oct 20, 2008 at 5:29 PM, Abram Demski <[EMAIL PROTECTED]> wrote: > Ben, > > "[my statement] seems to incorporate the assumption of a "finite > period of time" because a finite set of sentences or observations must > occur during a finite period of time." > > A finite set of observations, s

Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
ng problem? Are you assuming a universe that ends in finite time, > so that the box always has only a finite number of queries? Otherwise, > it is consistent to assume that for any program P, the box is > eventually queried about its halting. Then, the universal statement > "T

Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
> > I am not sure about your statements 1 and 2. Generally responding, > I'll point out that uncomputable models may compress the data better > than computable ones. (A practical example would be fractal > compression of images. Decompression is not exactly a computation > because it never halts, w

Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Ben Goertzel
Hamming distance is d or less. >> > > > The case where the Hamming distance is d or less corresponds to a > bounded-weight code rather than a constant-weight code. > > I already forwarded you a link to a paper on bounded-weight codes, which > are also combinatorially i

Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Ben Goertzel
On Mon, Oct 20, 2008 at 4:04 PM, Eric Burton <[EMAIL PROTECTED]> wrote: > > "Ben Goertzel says that there is no true defined method > > to the scientific method (and Mark Waser is clueless for thinking that > there > > is)." > That is not what I said. M

Re: [agi] Re: Value of philosophy

2008-10-20 Thread Ben Goertzel
> And even then... > > P.S. Philosophy is always a matter of (conflicting) opinion. (Especially, > given last night's exchange, philosophy of science itself).

Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Ben Goertzel
On Mon, Oct 20, 2008 at 12:07 PM, Ed Porter <[EMAIL PROTECTED]> wrote: > As I said in my last email, since the Wikipedia article on constant > weight codes said "APART FROM SOME TRIVIAL OBSERVATIONS, IT IS GENERALLY > IMPOSSIBLE TO COMPUTE THESE NUMBERS IN A STRAIGHTFORWARD WAY." And since all >

Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Ben Goertzel
I also don't understand whether A(n,d,w) is the number of sets where the > Hamming distance is exactly d (as it would seem from the text of > http://en.wikipedia.org/wiki/Constant-weight_code ), or whether it is the > number of sets where the Hamming distance is d or less. If the former case > is t
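For what it's worth, A(n, d, w) is conventionally defined as the maximum number of binary length-n, weight-w words with pairwise Hamming distance *at least* d (and between distinct constant-weight words that distance is always even, since the supports differ by a symmetric difference of even size). For small parameters a greedy brute force gives a quick lower bound; a sketch, with illustrative naming of my own:

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two words stored as integer bitmasks."""
    return bin(a ^ b).count("1")

def greedy_constant_weight(n, d, w):
    """Greedy lower bound on A(n, d, w): scan all weight-w words of
    length n in lexicographic order of support, keeping each word whose
    Hamming distance to every kept word is at least d."""
    code = []
    for support in combinations(range(n), w):
        word = sum(1 << i for i in support)
        if all(hamming(word, c) >= d for c in code):
            code.append(word)
    return len(code)

print(greedy_constant_weight(6, 4, 3))  # 4, matching the known value A(6,4,3) = 4
```

Greedy scans give only lower bounds in general, which is consistent with the Wikipedia remark quoted elsewhere in the thread that the exact numbers are generally hard to compute.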

Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
l leave it at that > for now. > > Clearly, this argument is very "type 2" at the moment. What I *really* > would like to discuss is, as you put it, the set of sufficient > mathematical axioms for (partially-)logic-based AGI such as > OpenCogPrime. > > --Abram >

Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
PROTECTED]> wrote: > Ben, > > How so? Also, do you think it is nonsensical to put some probability > on noncomputable models of the world? > > --Abram > > On Sun, Oct 19, 2008 at 6:33 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > > But: it seems

Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Ben Goertzel
It would also be nice if this mailing list could be operate on a bit more of > a scientific basis. I get really tired of pointing to specific references > and then being told that I have no facts or that it was solely my opinion. > > This really has to do with the culture of the community on the l

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
nimals do science? They > can not.

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
> And why don't we keep this on the level of scientific debate rather than > arguing insults and vehemence and confidence? That's not particularly good > science either. > Right ... being unnecessarily nasty is not either good or bad science, it's just irritating for others to deal with ben g

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
're calling me an incompetent in my > profession now. > > It depends. Are you going to continue promoting something as inexcusable > as saying that theory should trump data (because of the source of the > theory)? I was quite clear that I was criticizing a very specific action. >

Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
a mere > inconvenience to someone's theory. Feynman's exceptional intelligence > allowed him to discover a possibility that might have been correct if the > point was an outlier, but good scientific evaluation relies on data, data, > and more data. Using that story as an exampl

Re: [agi] constructivist issues

2008-10-19 Thread Ben Goertzel

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
ery operation). > > So let me rephrase my statement -- Can a stupid person do good scientific > evaluation if taught the rules and willing to abide by them? Why or why > not? > > - Original Message - > *From:* Ben Goertzel <[EMAIL PROTECTED]> > *To:* agi@v2.l

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
o you believe is so difficult about science other than overcoming the > sub/unconscious? > > Your statement is obviously spoken by someone who has lectured as opposed > to taught. > > - Original Message - > *From:* Ben Goertzel <[EMAIL PROTECTED]> > *To:* agi@v2.

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
> >>> > *Any* human who can understand language beyond a certain point (say, that > of > > a slightly sub-average human IQ) can easily be taught to be a good > scientist > > if they are willing to play along. Science is a rote process that can be > learned and executed by anyone -- as long as thei

Re: [agi] constructivist issues

2008-10-19 Thread Ben Goertzel
to define our own intelligence. Therefore, we can't engineer > human-level AGI. I don't like this conclusion! I want a different way > out. > > I'm not sure the "guru" explanation is enough... who was the Guru for > Humankind? > > Thanks, > > --Abra

Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Ben Goertzel
translation, but it's easy to see how our > technology, as physical medium, transfers information ready for > translation. This outward appearance has little bearing on semantic > models. > > -- > Vladimir Nesov > [EMAIL PROTECTED] > http://causalityrelay.wordpress.com/ > >

Re: [agi] constructivist issues

2008-10-19 Thread Ben Goertzel

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-18 Thread Ben Goertzel
Matt wrote: > I think the source of our disagreement is the "I" in "RSI". What does it > mean to improve? From Ben's OpenCog roadmap (see > http://www.opencog.org/wiki/OpenCogPrime:Roadmap ) I think it is clear > that Ben's definition of improvement is Turing's definition of AI: "more > like a hu

Re: [agi] META: A possible re-focusing of this list

2008-10-18 Thread Ben Goertzel
easonable expectation of success. > > After consulting my assortment of reference dictionaries... > > On 10/16/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > >> >> >>> I completely agree that puzzles can be ever so much more interesting when >>>

Re: [agi] Re: Defining AGI

2008-10-18 Thread Ben Goertzel
AGI math is via first > giving that AGI embodied, linguistic experience ;-) > > See Lakoff and Nunez, "Where Mathematics Comes From", for related > arguments. > > -- Ben G

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-17 Thread Ben Goertzel
> the same approach to start with, or if you purposefully set it so that > you didn't you would all still rate certain things/approaches as very > unlikely to be any good, when they might well be what you need to do. > > Will

Re: [agi] First issue of H+ magazine ... http://hplusmagazine.com/

2008-10-17 Thread Ben Goertzel
pany will eventually succeed in > producing a mass market PC based robot.

Re: [agi] Re: Defining AGI

2008-10-17 Thread Ben Goertzel
erated by some > brute-force method? > > --Abram > > On Thu, Oct 16, 2008 at 11:26 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > > > > On Thu, Oct 16, 2008 at 11:21 PM, Abram Demski <[EMAIL PROTECTED]> > > wrote: > >> > >> On Thu

Re: [agi] Re: Defining AGI

2008-10-16 Thread Ben Goertzel
On Thu, Oct 16, 2008 at 11:21 PM, Abram Demski <[EMAIL PROTECTED]>wrote: > On Thu, Oct 16, 2008 at 10:32 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> > wrote: > > In theorem proving computers are weak too compared to performance of good > > mathematicians. > > I think Ben asserted this as well (mayb

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Ben Goertzel
key's palimpsest learning scheme for Hopfield nets, specialized for simple experiments with character arrays. -- Ben G On Thu, Oct 16, 2008 at 10:30 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > On Fri, Oct 17, 2008 at 6:26 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
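The learning scheme this snippet mentions — Storkey's rule for Hopfield networks, whose palimpsest behaviour lets newer patterns gradually displace older ones instead of causing catastrophic breakdown — can be sketched in a few lines. This assumes the standard form of the rule; the demo pattern and function names are mine:

```python
import numpy as np

def storkey_store(W, xi):
    """One learning step of the Storkey rule for a Hopfield net (a sketch,
    assuming the standard form: dw_ij = (xi_i xi_j - xi_i h_ji - h_ij xi_j)/n
    with local field h_ij = sum over k != i,j of w_ik xi_k).
    xi is a +/-1 pattern; W is the n x n weight matrix, updated in place."""
    n = len(xi)
    s = W @ xi                                     # s_i = sum_k w_ik xi_k
    H = (s - np.diag(W) * xi)[:, None] - W * xi[None, :]   # H[i,j] = h_ij
    W += (np.outer(xi, xi) - xi[:, None] * H.T - H * xi[None, :]) / n
    return W

# Store one pattern in an initially blank net, then recall it.
xi = np.array([1, -1, 1, 1, -1, -1], dtype=float)
W = storkey_store(np.zeros((6, 6)), xi)
recalled = np.sign(W @ xi)
print(np.array_equal(recalled, xi))  # True
```

For character-array experiments like the ones described, one would call `storkey_store` once per pattern and recall by iterating `xi = np.sign(W @ xi)` to a fixed point; the palimpsest property shows up only once the net is loaded past its capacity.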

Re: [agi] Re: Defining AGI

2008-10-16 Thread Ben Goertzel
> As Ben has pointed out language understanding is useful to teach AGI. But > if > we use the domain of mathematics we can teach AGI by formal expressions > more > easily and we understand these expressions as well. > > - Matthias That is not clear -- no human has learned math that way. We lear

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Ben Goertzel
2008 at 10:23 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > On Fri, Oct 17, 2008 at 6:05 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > > Right, but his problem is equivalent to bounded-weight, not > constant-weight > > codes... > > > > Why? Bou

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Ben Goertzel
Right, but his problem is equivalent to bounded-weight, not constant-weight codes... On Thu, Oct 16, 2008 at 10:04 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > On Fri, Oct 17, 2008 at 5:31 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > > I still think th

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Ben Goertzel

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-16 Thread Ben Goertzel
nt back in time together with a million of my clones, we could dramatically accelerate the progress of medieval society toward modernity, for sure -- Ben G On Thu, Oct 16, 2008 at 9:00 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Thu, 10/16/08, Ben Goertzel <[EMAIL PROTE

Re: [agi] META: A possible re-focusing of this list

2008-10-16 Thread Ben Goertzel
> > I completely agree that puzzles can be ever so much more interesting when > you can successfully ignore that they cannot possibly lead to anything > useful. Further, people who point out the reasons that they cannot succeed > are really boors and should be censored. This entire thread should be

Re: [agi] META: A possible re-focusing of this list

2008-10-16 Thread Ben Goertzel
ments. *So I am > encouraging (1) style solutions*, albeit constructed with eyes *scientifically > wide open*. A myopic (1) ish forum, for me, will represent intermittent > small dialogs between those few in the forum with a broader, > multidisciplinary approach, still interested in the (1) approach, like m

[agi] First issue of H+ magazine ... http://hplusmagazine.com/

2008-10-16 Thread Ben Goertzel
Including a brief article by me about open-source robotics, that I wrote back in April... ben -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI [EMAIL PROTECTED] "Nothing will ever be attempted if all possible objections must be first overcome " -

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Ben Goertzel
he (storage or) transmission medium. *** ;-) -- Ben On Thu, Oct 16, 2008 at 6:24 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > Ed, > > After a little more thought, it occurred to me that this problem was > already solved in coding theory ... just take the bound

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Ben Goertzel
at 6:40 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > One more addition... > > Actually the Hamming-code problem is not exactly the same as your problem > because it does not place an arbitrary limit on the size of the cell > assembly... oops > > But I'm not sure w

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Ben Goertzel
, Oct 16, 2008 at 6:43 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > They also note that according to their experiments, bounded-weight codes > don't offer much improvement over constant-weight codes, for which > analytical results *are* available... and for which lower bound

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Ben Goertzel
unds (such as just the number of possible combinations), and I was more > interested in lower bounds. > > > > -----Original Message- > *From:* Ben Goertzel [mailto:[EMAIL PROTECTED] > *Sent:* Thursday, October 16, 2008 2:45 PM > *To:* agi@v2.listbox.com > *Subject:* Re: [agi]

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Ben Goertzel
> to extract from it guidance as to how to solve the problem I posed. > > > > Ed Porter > > > > -Original Message- > *From:* Ben Goertzel [mailto:[EMAIL PROTECTED] > *Sent:* Thursday, October 16, 2008 11:32 AM > *To:* agi@v2.listbox.com > *Subject:* Re: [

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-16 Thread Ben Goertzel
Brothers spent their time building planes rather than laboriously poking holes in the intuitively-obviously-wrong supposed-impossibility-proofs of what they were doing... ben g On Thu, Oct 16, 2008 at 11:38 AM, Tim Freeman <[EMAIL PROTECTED]> wrote: > From: "Ben Goertzel" <

Re: [agi] META: A possible re-focusing of this list

2008-10-16 Thread Ben Goertzel
. That is good to see! ben g On Thu, Oct 16, 2008 at 11:22 AM, Abram Demski <[EMAIL PROTECTED]>wrote: > I'll vote for the split, but I'm concerned about exactly where the > line is drawn. > > --Abram > > On Wed, Oct 15, 2008 at 11:01 AM, Ben Goertzel <[EMA

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Ben Goertzel

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
Matt wrote, in reply to me: > > An AI twice as smart as any human could figure > > out how to use the resources at his disposal to > > help him create an AI 3 times as smart as any > > human. These AI's will not be brains in vats. > > They will have resources at their disposal. > > It depends on

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
Hi, > Also, you are right that it does not apply to many real world problems. > Here my objection (as stated in my AGI proposal, but perhaps not clearly) is > that creating an artificial scientist with slightly above human intelligence > won't launch a singularity either, but for a different reas

Re: [agi] Who is smart enough to answer this question?

2008-10-15 Thread Ben Goertzel
any thoughts on this topic. I would just like to > be able to get a rough idea to what extent the use of cell assemblies > increases or decreases the number of semantic nodes a set of neural net nodes > can represent. > > Ed Porter

[agi] mailing-list / forum software

2008-10-15 Thread Ben Goertzel
This widget seems to integrate mailing lists and forums in a desirable way... http://mail2forum.com/forums/ http://mail2forum.com/v12-stable-release/ I haven't tried it out though, just browsed the docs... -- Ben -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Res

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Ben Goertzel
ities. > > = > Rafael C.P. > =

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Ben Goertzel
n their subject lines and allow things to otherwise continue as they are. > Then, when you fail, it won't poison other AGI efforts. Perhaps Matt or > someone would like to separately monitor those postings. > > Steve Richfield > === > On 10/15/08, Ben Goertzel <[

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Ben Goertzel
he list who would look for funding... I'd want to > see you defend your ideas, especially in the absence of peer-reviewed > journals (something the JAGI hopes to remedy obv). > > Terren > > --- On *Wed, 10/15/08, Ben Goertzel <[EMAIL PROTECTED]>* wrote: > > From: Ben G

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Ben Goertzel
for anyone else on the list who would look for funding... I'd want to > see you defend your ideas, especially in the absence of peer-reviewed > journals (something the JAGI hopes to remedy obv). > > Terren > > --- On *Wed, 10/15/08, Ben Goertzel <[EMAIL PROTECTED]>*

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Ben Goertzel
> > > I don't really understand why moving to the forum presents any sort of > technical or logistical issues... just personal ones from some of the > participants here. > It's a psychological issue. I rarely allocate time to participate in forums, but if I decide to pipe a mailing list to my in

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Ben Goertzel
> > > What I am trying to debunk is the perceived risk of a fast takeoff > singularity launched by the first AI to achieve superhuman intelligence. In > this scenario, a scientist with an IQ of 180 produces an artificial > scientist with an IQ of 200, which produces an artificial scientist with an

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Ben Goertzel
So, just > setting up a forum site is not the answer... > > ben g

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Ben Goertzel

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Ben Goertzel
By the way, I'm avoiding responding to this thread till a little time has passed and a larger number of lurkers have had time to pipe up if they wish to... ben On Wed, Oct 15, 2008 at 3:07 PM, Bob Mottram <[EMAIL PROTECTED]> wrote: > 2008/10/15 Ben Goertzel <[EMAIL PROTECTE

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Ben Goertzel
nyone?), so presumably there's a good chance it would show up > here, and that is good for you and others actively involved in AGI research. > > Best, > Terren > > > --- On *Wed, 10/15/08, Ben Goertzel <[EMAIL PROTECTED]>* wrote: > > From: Ben Goertzel <

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Ben Goertzel
Richard, One of the mental practices I learned while trying to save my first marriage (an effort that ultimately failed) was: when criticized, rather than reacting emotionally, to analytically reflect on whether the criticism is valid. If it's valid, then I accept it and evaluate what I should make

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Ben Goertzel

[agi] META: A possible re-focusing of this list

2008-10-15 Thread Ben Goertzel
e" ... etc.) What are your thoughts on this? -- Ben On Wed, Oct 15, 2008 at 10:49 AM, Jim Bromer <[EMAIL PROTECTED]> wrote: > On Wed, Oct 15, 2008 at 10:14 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > > Actually, I think COMP=false is a perfectly valid subje

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Ben Goertzel

Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Ben Goertzel
ese Rooms will sign up for the new COMP=false list... > > -dave
