for someone...
ben
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org
"I intend to live forever, or die trying."
-- Groucho Marx
---
agi
Archives: https://www.listbox.com/member/ar
>
> In my opinion you are being too generous and your generosity is being
> taken advantage of.
That is quite possible; it's certainly happened before...
>
> As well as trying to be nice to Mike, you have to bear list quality in
> mind and decide whether his ramblings are of some benefit to all
had some mutual colleagues in the past who favored such a style
of discourse ;-)
ben
On Fri, Dec 19, 2008 at 1:49 PM, Pei Wang wrote:
> On Fri, Dec 19, 2008 at 1:40 PM, Ben Goertzel wrote:
> >
> > IMHO, Mike Tintner is not often rude, and is not exactly a "troll"
On Thu, Dec 18, 2008 at 6:47 PM, Mike Tintner wrote:
> Ben:I don't think there's any lack of creativity in the AGI world ... and
> I think it's pretty clear that rationality and creativity work together in
> all really good scientific work.Creativity is about coming up with new
> ideas. Rational
thx for the reply!
***
Anyway, to answer you simply - conflict is v. fruitful, if you embrace it.
(Jerry Rubin expounded this POV well in Do It! )
***
I've always been more of an Abbie Hoffman guy, but ... sure...
***
> More specifically, AGI-ers - as I have in part explained - are almost
> per
>> erm AGI & its more grandiose ambitions (which is
>> understandable/ obviously v. risky) but ALSO its simpler ambitions, i.e.
>> making even the smallest progress towards *general* as opposed to
>> *specialist/narrow* intelligence, producing a machine, say, that could
>> cross just two or three domains
dictment of the AGI field?
On Wed, Dec 17, 2008 at 6:12 PM, YKY (Yan King Yin) <
generic.intellige...@gmail.com> wrote:
> > "If...you want a non-research career, a Ph.D. is definitely not for you."
>
> I want to be either an entrepreneur or a researcher... it's hard to
> decide. What does AGI need most? Further research,
;s my current
> reasoning...
>
Well, no... actually I think this is extremely bad advice ;-)
On Wed, Dec 17, 2008 at 3:20 PM, Steve Richfield
wrote:
> Yan,
>
> Your quest incorporates some questionable presumptions, that you will
> literally be "betting your (future) life on".
>
> 1. That AGI as presently conceived won't be j
On Wed, Dec 17, 2008 at 12:29 PM, YKY (Yan King Yin) <
generic.intellige...@gmail.com> wrote:
> > I got my PhD there in 1989 in math, not AI
>
> Let me see... you were about 22 in 1989? I was still an undergrad at
> that age...
Yep...
I was already interested in working on AGI, but didn't fee
people--Ben Goertzel, Pei Wang, and now Peter de Blanc. Is
> this just a coincidence?
> Joshua
>
> On Wed, Dec 17, 2008 at 5:48 PM, Ben Goertzel wrote:
>
>>
>>
>>
>>
>>>
>>> Can I start the PhD directly without getting the MS first?
>>&
>
> You have interpreted my below post in an overly defensive manner.
>
>
>
Sorry ... I'm dealing with some other frustrating things this morning, so
maybe the frustration unintentionally rubbed off on this email exchange
...
>
>
>
> (Are you saying Novamente is not scaleable to human level w
little advantage of all the rich, complex
> hierarchical
> > and generalization knowledge contained in the hypergraph --- although it
> was
> > clear to me that there would be ways in which it could be modified to do
> > so.
> >
> >
>
>
> -----
>
> Can I start the PhD directly without getting the MS first?
>
You can start a PhD without having an MS first, but you'll still need to
take all the coursework corresponding to the MS.
I don't personally know of any university that lets you go directly from a
BS/BA to a PhD without doing a coup
such explanations for its ideas. So I'd
> be v. interested).
>
> Do you mean that examples that Hofstadter/Mitchell used in their
> papers for CopyCat did not in fact work on their codebase? I remember
> downloading a second CopyCat implementation (in Java IIRC); it seemed
> to be working. Besides, they don't claim anything grandiose for this
> model, and it
I happened to use CopyCat in a university AI class I taught years ago, so I
got some experience with it
It was **great** as a teaching tool, but I wouldn't say it shows anything
about what can or can't work for AGI, really...
ben
On Wed, Dec 17, 2008 at 10:02 AM, Ben Goertzel wrote:
On Tue, Dec 16, 2008 at 11:50 AM, Tim Freeman wrote:
> >From: "Ben Goertzel"
> >
> >I'm considering writing a paper on hypercomputation, ...
>
> If I understand right, hypercomputation is theoretical computer
> science arguments of the form "If I
I'm considering writing a paper on hypercomputation, and am wondering if
anyone on this list could suggest a good bibliography on the topic ... I
want to read up on the latest literature to be sure my thoughts are original
before writing the paper...
thx
ben
I just read an interesting (somewhat mathy) paper on transfer learning,
and put the link here
http://www.opencog.org/wiki/Transfer_Learning
ben
e brain activities and reconstructed the images of Roman letters and other
> figures, succeeding in recreating optically received images.
>
> (Dec. 11, 2008)
>
Hi,
>> > There isn't much that an MIMD machine can do better than a similar-sized
>> > SIMD machine.
>>
>> Hey, that's just not true.
>>
>> There are loads of math theorems disproving this assertion...
>
>
> Oops, I left out the presumed adjective "real-world". Of course there are
> countless diop
>
> Visual images, in particular, uniquely provide *isomorphic maps of objects.*
>
Well, no.
The congenitally blind also create internal isomorphic maps of objects.
Vision is a rich source of information, but it does not in itself
provide isomorphic maps of objects -- it provides messy, noisy d
Hi,
> There isn't much that an MIMD machine can do better than a similar-sized
> SIMD machine.
Hey, that's just not true.
There are loads of math theorems disproving this assertion...
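Not a claim from the thread, just a toy illustration of one kind of gap: on a SIMD machine all lanes execute in lockstep, so when a data-dependent branch diverges the hardware must issue both branch bodies and mask out inactive lanes, whereas an MIMD machine lets each processor take only its own path. A minimal Python sketch (all names here are made up for illustration):

```python
# Toy model of SIMD lockstep execution: count the instruction "slots"
# an if/else costs across a vector of lanes. If any lane takes a branch
# side, that side must be issued for the WHOLE vector and masked.
def simd_if_else(lanes, then_op, else_op, cond):
    mask = [cond(x) for x in lanes]
    slots = 0
    if any(mask):           # then-side issued for every lane
        slots += 1
    if not all(mask):       # else-side issued too -> divergence cost
        slots += 1
    out = [then_op(x) if m else else_op(x) for x, m in zip(lanes, mask)]
    return out, slots

neg = lambda x: -x
square = lambda x: x * x

# All lanes agree on the branch: only one side is issued.
_, uniform_cost = simd_if_else([1, 2, 3, 4], neg, square, lambda x: x < 0)
# Lanes diverge: both sides are issued and masked.
_, divergent_cost = simd_if_else([-1, 2, -3, 4], neg, square, lambda x: x < 0)
print(uniform_cost, divergent_cost)  # 1 2
```

An MIMD machine would simply run the taken side on each processor, so the divergent case costs no extra issue slots there.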
>>
>> OO and generic design patterns do buy you *something* ...
>
>
> OO is often impossible to vectorize.
The
Steve wrote:
> Bit#3: Did Ben realize that the prospective emergence of array processors
> (e.g. as I have been promoting) would obsolete much of his present
> work, because its structure isn't vectorizable, so he is in effect betting
> on continued stagnation in processor architecture, and may in
s cost money to attend: we would have liked to be
able to offer it for free, but the funds to pay the faculty have to
come from somewhere, and grant funding for AGI is generally hard to
come by...)
-- Ben
http://dsc.discovery.com/news/2008/12/08/virtual-human-empathy.html
http://robot.watch.impress.co.jp/cda/column/2008/12/08/1489.html
for other neurons.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
een any explanation of how changes in gene expression in a
> neuron's nucleus would store memories, even given the knowledge that the
> epigenome can store information.
>
>
>
> If there is such an explanation, either now or in the future, I would
> welcome hearing it
erts could rapidly
>>>> extend the Cyc KB without Cycorp ontological engineers having to intervene.
>>>> A Cycorp paper describing its KRAKEN system is here.
>>>>
>>>> I would be glad to answer questions about Cycorp and Cyc technology to
>>>> the best
On Wed, Dec 3, 2008 at 3:19 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Terry and Ben,
>
>
>
> I never implied anything that could be considered a "memory" at a conscious
> level is stored at just one synapse, but all the discussions I have heard of
> learning in various brain science books and lect
>
> I know you're just playing here but it would be easy to empirically test
> this. Does junk DNA change between birth and death? Something tells me we
> would have discovered something that significant a long time ago.
>
> Terren
well, loads of mutations occur in nuclear DNA between birth and
>>> Implication for neuroscientists proposing to build a WBE (whole brain
>>> emulation): the resolution you need may now have to include all the
>>> DNA in every neuron. Any bets on when they will have the resolution
>>> to do that?
>>
>> No bets here. But they are proposing that elements ar
Hi Hector,
>> You may say the hypothesis of neural hypercomputing valid in the sense
>> that it helps guide you to interesting, falsifiable theories. That's
>> fine. But, then you must admit that the hypothesis of souls could be
>> valid in the same sense, right? It could guide some other peop
>>> the number of subspaces that could be represented with a given number,
>>> say 100 billion, of nodes --- or that the minute changes in boundaries,
>>> or the occasional difference in tipping points t
> If two theories give identical predictions under all circumstances
> about how the real world behaves, then they are not two separate
> theories, they are merely rewordings of the same theory. And choosing
> between them is arbitrary; you may prefer one to the other because
> human minds can visu
>We cannot
> ask Feynman, but I actually asked Deutsch. He does not only think QM
> is our most basic physical reality (he thinks math and computer
> science lie in quantum mechanics), but he even takes quite seriously
> his theory of parallel universes! and he is not alone. Speaking by
> myself, I
On Sun, Nov 30, 2008 at 11:48 PM, Hector Zenil <[EMAIL PROTECTED]> wrote:
> On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>> But I don't get your point at all, because the whole idea of
>> "nondeterministic" randomness has nothi
first N bits of an uncomputable series or of a
computable one...
ben g
On Sun, Nov 30, 2008 at 10:53 PM, Hector Zenil <[EMAIL PROTECTED]> wrote:
> On Mon, Dec 1, 2008 at 4:44 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>> OTOH, there is no possible real-world test to distinguis
On Mon, Dec 1, 2008 at 3:09 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>> But quantum theory does appear to be directly related to limits of the
>>> computations of physical reality. The uncertainty theory and the
>>> quantization of quantum states are limitations o
Hi,
> "In quantum physics, the Heisenberg uncertainty principle states that the
> values of certain pairs of conjugate variables (position and momentum, for
> instance) cannot both be known with arbitrary precision. That is, the more
> precisely one variable is known, the less precisely the other
> But quantum theory does appear to be directly related to limits of the
> computations of physical reality. The uncertainty theory and the
> quantization of quantum states are limitations on what can be computed by
> physical reality.
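For reference, the standard textbook form of the principle being paraphrased (not from the thread) is the inequality

    sigma_x * sigma_p >= hbar / 2

where sigma_x and sigma_p are the standard deviations of position and momentum measurements on identically prepared systems.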
Not really. They're limitations on what measurements of phy
human brain model an infinity of
> infinitely complexity things?
>
>
>
> -----
>
>
>
> I don't understand what your paper on uncomputability has to do with my
> questions and comments about Richard's paper, other than to highlight
> pro
>
> Regarding winning a DARPA contract, I believe that teaming with an
> established contractor, e.g. SAIC, SRI, is beneficial.
>
> Cheers,
> -Steve
Yeah, I've tried that approach too ...
As it happens, I've had significantly more success getting funding from
various other government agencies ... b
Philip Hunt <[EMAIL PROTECTED]> wrote:
> 2008/11/30 Ben Goertzel <[EMAIL PROTECTED]>:
>> Hi,
>>
>>> I have proposed a problem domain called "function predictor" whose
>>> purpose is to allow an AI to learn across problem sub-domains,
&g
p://www.gnu.org/philosophy/no-word-attachments.html
om/2008/01/semantics-and-brain-more-on-atl-as-hub.html
As Richard L would likely point out, the authors' data supports plenty
of different interpretations, and the one presented is only one of the
many plausible ones...
-- ben G
On Tue, Nov 25, 2008 at 12:45 AM, Ben Goertzel <[EMAIL PROTECTED
Hi,
> I have proposed a problem domain called "function predictor" whose
> purpose is to allow an AI to learn across problem sub-domains,
> carrying its learning from one domain to another. (See
> http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )
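A toy Python sketch of what one episode of such a domain could look like, under my reading of the description above (the learner sees (x, f(x)) examples from a hidden function and must predict f on held-out inputs); every name and detail here is a hypothetical illustration, not the proposal's actual spec:

```python
import random

# One "function predictor" episode: sample training pairs from a hidden
# function f, hold out some inputs for evaluation.
def make_episode(f, n_train=5, n_test=3, lo=-10, hi=10):
    xs = random.sample(range(lo, hi + 1), n_train + n_test)
    train = [(x, f(x)) for x in xs[:n_train]]
    test = [(x, f(x)) for x in xs[n_train:]]
    return train, test

def mean_baseline(train):
    """Trivial learner: always predict the mean of the seen outputs.
    A transfer learner would instead reuse structure (e.g. 'this family
    is linear') discovered in earlier episodes."""
    ys = [y for _, y in train]
    mean = sum(ys) / len(ys)
    return lambda x: mean

random.seed(0)
train, test = make_episode(lambda x: 2 * x + 1)
predict = mean_baseline(train)
err = sum(abs(predict(x) - y) for x, y in test) / len(test)
print(f"baseline mean-abs error: {err:.1f}")
```

The interesting measurement would then be whether a learner's error on fresh episodes drops after exposure to related function families.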
>
> I also think it would be use
> Could you give me a little more detail about your thoughts on this?
> Do you think the problem of increasing uncomputableness of complicated
> complexity is the common thread found in all of the interesting,
> useful but unscalable methods of AI?
> Jim Bromer
Well, I think that dealing with comb
ave to be able to express
> and analyze its statistical assessments in terms of some kind of
> declarative methods as well.
>
> Jim Bromer
from will be the inevitable
>>> emergence of inherently illogical decision processes that will mush up
>>> an AI system long before it gets any traction.
>>>
>>> Jim Bromer
>>>
On Thu, Nov 27, 2008 at 12:42 PM, Eric Burton <[EMAIL PROTECTED]> wrote:
> All I've tried to impress is that these revelations, epiphanies,
> theophanies or what-have-you are at least as primary as the sensations
> associated with daily life.
I tend to agree ... but unless you are going to tie th
> But that in no way means your statements are correct descriptions of
> external reality, as many of your statements would appear to claim to be.
> And you have provided no evidence, other than drug induced experience within
> your own mind, that they are.
>
> Ed Porter
The notions of "correct de
gt;>
>>> You'll remember that I've been saying this for quite a while - now Kevin
>>> Kelly is saying it - and you'll be hearing a lot more of this
>>>
>>>
>>> http://www.nytimes.com/2008/11/23/magazine/23wwln-future-t.html?_r=2&sq=KEVIN%20KELLY&st=cse&scp=1&pagewa
> I could also argue that the limitations on RSI would constrain a hard-takeoff
> singularity to an explosion of computational power, not of knowledge. But I
> think that might be a stretch. Not everyone agrees that there will even be a
> singularity in the first place.
You could argue that, b
On Tue, Nov 25, 2008 at 11:48 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> yeah, it's coming back to me now .. I remember holons and holarchies
> and all that stuff ;-)
>
> However, Koestler was writing before complex dynamics and attractors
> and such were well-underst
vade Iraq" and a little later a
>>> vast army of 150,000 with all its machinery is elaborating his command.
>>>
>>> Our machines also are designed in terms of simple switches, or key
>>> mechanisms, setting off whole elaborate complexes of action.
>>>
&
>>
>> http://www2.le.ac.uk/departments/engineering/extranet/research-groups/neuroengineering-lab/
>
>
> There are always more papers that can be discussed.
OK, sure, but this is a more recent paper **by the same authors,
discussing the same data**
and more recent similar data.
>
> But that does
A semi-technical essay on the global/local (aka glocal) nature of
memory is linked to from here
http://multiverseaccordingtoben.blogspot.com/
I wrote this a long while ago but just got around to posting it now...
ben
e Sciences. 12: 87-91; 2008
***
at
http://www2.le.ac.uk/departments/engineering/extranet/research-groups/neuroengineering-lab/
-- Ben G
On Mon, Nov 24, 2008 at 1:32 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Ben Goertzel wrote:
>>
>> Hi,
>>
>> BTW, I jus
of being and computation
> upon which the much higher levels of organization that provide human
> awareness are built.
>
>
>
> If you have communicable evidence to the contrary, please enlighten me.
>
>
>
> Ed Porter
> -Original Message-
On Mon, Nov 24, 2008 at 1:30 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Since I assume Ben, as well as a lot of the rest of us, want the AGI
> movement to receive respectability in the academic and particularly in the
> funding community, it is probably best that other than brain-science- or
> AGI-
Hi,
BTW, I just read this paper
> For example, in Loosemore & Harley (in press) you can find an analysis of a
> paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the latter
> try to claim they have evidence in favor of grandmother neurons (or sparse
> collections of grandmother n
what is off topic was more narrow?
>
>
>
> That is what I assumed, and that is why, in the post you responding to
> below, I was asking if there were any describable non-entheogenic aspects of
> the ego-loss experience, other than what I had already described.
>
>
>
> Ed
r gin and tonic with lime)?
>
>> What is the respective emphasis given to each of these three parts in the
>> proper pronunciations?
>>
>> It is a word that would be deeply appreciated by many at my local Unitarian
>> Church
work would be to have
> (about) 1% of the MTL respond to one picture - a *huge* number of cells, by
> anyone's standard!
>
> This is a contradiction. Or, as we put it, an incoherent claim.
>
>
>
> All of this was in the paper.
>
> Yes, the data by itself is inte
>
> Agreed. The beliefs we have expressed are certainly beyond humanity's
> current powers of explanation, and at least certain aspects of them probably
> always will be. But as I expressed in my recent discussion of Richard's
> paper, I think science will know much more about consciousness in 50
Hi,
> I have said many times on this list that I believe there is nothing we know
> about reality that is anything other than computing, and that there is
> nothing we know about consciousness that is anything other than computing,
> other than our sense of awareness, which can be considered an at
On Fri, Nov 21, 2008 at 4:54 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Sat, Nov 22, 2008 at 12:30 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>>
>> They want some kind of mixture of "sparse" and "multiply redundant" and "not
>> distributed". The whole point of what we wrote was that
On Fri, Nov 21, 2008 at 4:44 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Ben Goertzel wrote:
>>
>> I saw the main point of Richard's paper as being that the available
>> neuroscience data drastically underdetermines the nature of neural
>> knowle
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/
u say
> that cognitive science is "running on raw data". I cannot find any way to
> understand this statement that does not lead directly to the conclusion that
> it is completely and utterly wrong. Cognitive science involves a huge
> theoretical interpretation of raw data.
> I stated a Ben's List challenge a while back that you apparently missed, so
> here it is again.
>
> You can ONLY learn how a system works by observation, to the extent that its
> operation is imperfect. Where it is perfect, it represents a solution to the
> environment in which it operates, and a
> The neuron = concept
> 'theory' is extremely broken: it is so broken, that when neuroscientists
> talk about bayesian contingencies being calculated or encoded by spike
> timing mechanisms, that claim is incoherent.
This is not always true ... in some cases there are solidly demonstrated
conne
> When I was in college and LSD was the rage, one of the main goals of the
> heavy duty heads was "ego loss" which was to achieve a sense of cosmic
> oneness with all of the universe. It was commonly stated that 1000
> micrograms was the ticket to "ego loss." I never went there. Nor have I
> eve
e harder, much more
>> important problem. And without the sense of awareness and realness ---
>> which you never convincingly explain the source of --- other than to say
>> it exists and it comes from the operation of the framework --- even the
>> major conclusion
Richard,
> The main problem is that if you interpret spike timing to be playing the
> role that you (and they) imply above, then you are committing yourself to a
> whole raft of assumptions about how knowledge is generally represented and
> processed. However, there are *huge* problems with that s
>
> So, basically, you don't disagree with his paper to much.
> You just don't like his attitude. ;)
>
> Danged AI researchers that think they know it all! ;)
>
> You don't think you could call it excessive PR where he is trying to
> dislodge an entrenched view?
The thing is, the simplis
hu, Nov 20, 2008 at 11:53 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> yay ... we all agree on something ;-p
>
> On Thu, Nov 20, 2008 at 11:46 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> On Thu, Nov 20, 2008 at 7:04 PM, Richard Loosemore <[EMAIL PROTECTED]> w
esov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/