Our paper "How long until human-level AI? Results from an expert
assessment" (based on a survey conducted at AGI-09) was finally accepted
for publication in the journal "Technological Forecasting & Social
Change" ...
See the preprint at
http://sethbaum.com/ac/fc_AI-Exper
Hi all,
I gave a talk in Teleplace yesterday, about Cosmist philosophy and future
technology. A video of the talk is here:
http://telexlr8.wordpress.com/2010/09/12/ben-goertzel-on-the-cosmist-manifesto-in-teleplace-september-12/
I also put up my "practice version" of the talk. [Teleplace is like
Second Life but simpler and more focused on presentation/collaboration...]
Thanks much to the great Giulio Prisco for setting it up ;)
Ben Goertzel on The Cosmist Manifesto in Teleplace,
September 12, 10am PST
http://telexlr8.wordpress.com/2010/09/09/reminder-ben-goertzel-on-the-cosmist-manifes
(If you have any questions, please email me off list.)
*singularity* | Archives: https://www.listbox.com/member/archive/11983/=now
RSS Feed: https://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com
--
B
On Wed, Aug 11, 2010 at 11:34 PM, Steve Richfield wrote:
> Ben,
>
> It seems COMPLETELY obvious (to me) that almost any mutation would shorten
> lifespan, so we shouldn't expect to learn much from it.
Why then do the Methuselah flies live 5x as long as normal flies? You're
conjecturing this i
> We have those fruit fly populations also, and analysis of their genetics
>> refutes your claim ;p ...
>>
>
> Where? References? The last I looked, all they had in addition to their
> long-lived groups were uncontrolled control groups, and no groups bred only
> from young flies.
>
Michael Rose's
> I should dredge up and forward past threads with them. There are some flaws
> in their chain of reasoning, so that it won't be all that simple to sort the
> few relevant from the many irrelevant mutations. There is both a huge amount
> of noise, and irrelevant adaptations to their environment and
> On Mon, Aug 9, 2010 at 1:07 PM, Ben Goertzel wrote:
>
>>
>> I'm speaking there, on Ai applied to life extension; and participating in
>> a panel discussion on narrow vs. general AI...
>>
>> Having some interest, expertise, and experience in both areas,
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO,
think they're only a moderate portion of the problem, and not the
> hardest part...
>
> Which is?
>
>
> *From:* Ben Goertzel
> *Sent:* Monday, August 09, 2010 4:57 PM
> *To:* agi
> *Subject:* Re: [agi] How To Create General AI Draft2
>
>
>
> On Mon, Aug 9,
>
> The human visual system doesn't evolve like that on the fly. This can be
> proven by the fact that we all see the same visual illusions. We all exhibit
> the same visual limitations in the same way. There is much evidence that the
> system doesn't evolve accidentally. It has a limited set of ru
processing?
>
>
>
> ------
> *From:* Ben Goertzel
> *To:* agi
> *Sent:* Sat, August 7, 2010 9:10:23 PM
> *Subject:* [agi] Help requested: Making a list of (non-robotic) AGI low
> hanging fruit apps
>
> Hi,
>
> A fellow AGI researcher sent me
Hi,
A fellow AGI researcher sent me this request, so I figured I'd throw it
out to you guys
I'm putting together an AGI pitch for investors and thinking of low
hanging fruit applications to argue for. I'm intentionally not
involving any mechanics (robots, moving parts, etc.). I'm focusin
y learning and analytical learning). In AGI 2010, virtual pets
>>have been presented by Ben Goertzel and are also another topic of this forum.
>>There are other approaches in AGI that use some digital evolutionary
>>approach for AGI. For me it is a clear clue that both are relat
-- Ben G
Oh... and, a PDF version of the book is also available for free at
http://goertzel.org/CosmistManifesto_July2010.pdf
;-) ...
ben
On Tue, Jul 20, 2010 at 11:30 PM, Ben Goertzel wrote:
> Hi all,
>
> My new futurist tract "The Cosmist Manifesto" is now available on
> A
Hi all,
My new futurist tract "The Cosmist Manifesto" is now available on
Amazon.com, courtesy of Humanity+ Press:
http://www.amazon.com/gp/product/0984609709/
Thanks to Natasha Vita-More for the beautiful cover, and David Orban
for helping make the book happen...
-- Ben
> !(x > 0.9) && !(x > 1.1); expanding (getting rid of "!" and "&&") gives
> ((x > 0.9) == ((x > 1.1) == 0)) == 1. Note "!x" can be defined in terms
> of "==" like so: !x is (x == 0).
>
> (b) is a generalisation, and expansion of the
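The rewrite rules being discussed are easy to brute-force check. A minimal sketch (my own illustration, treating booleans as 0/1 integers; note that since x > 1.1 implies x > 0.9, the conjunction collapses to just the first negation):

```python
# Sanity-checking the thread's rewrite rules, with booleans as 0/1.
# "!x" is defined via "==" as "x == 0".
def bool_not(x):
    return int(x == 0)

for x in (0, 1):
    assert bool_not(x) == int(not x)

# For !(x > 0.9) && !(x > 1.1): since x > 1.1 implies x > 0.9,
# the whole conjunction is equivalent to (x > 0.9) == 0.
for x in (0.5, 1.0, 1.5):
    lhs = (not (x > 0.9)) and (not (x > 1.1))
    rhs = ((x > 0.9) == 0)
    assert lhs == rhs
print("identities hold")
```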
http://www.wired.co.uk/news/archive/2010-07/9/singularity-university-robotics-ai
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com
> This is perfectly OK! You don't have to find a silver bullet method of
> induction or inference that works for everything!
>
> Dave
>
>
>
> On Fri, Jul 9, 2010 at 10:49 AM, Ben Goertzel wrote:
>
>>
>> To make this discussion more concrete, please lo
paper do you think is wrong?
thx
ben
On Fri, Jul 9, 2010 at 9:54 AM, Jim Bromer wrote:
> On Fri, Jul 9, 2010 at 7:56 AM, Ben Goertzel wrote:
>
> If you're going to argue against a mathematical theorem, your argument must
> be mathematical not verbal. Please explain one of
>
On Fri, Jul 9, 2010 at 8:38 AM, Matt Mahoney wrote:
> Ben Goertzel wrote:
>> > Secondly, since it cannot be computed it is useless. Third, it is not
>> the sort of thing that is useful for AGI in the first place.
>>
>
> > I agree with these two statements
On Fri, Jul 9, 2010 at 7:49 AM, Jim Bromer wrote:
> Abram,
> Solomonoff Induction would produce poor "predictions" if it could be used to
> compute them.
>
Solomonoff induction is a mathematical, not verbal, construct. Based on the
most obvious mapping from the verbal terms you've used above into
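To make "mathematical, not verbal" concrete: real Solomonoff induction is uncomputable, but a toy version over a finite hypothesis space shows its shape. In this sketch of mine, repeating patterns stand in for programs and pattern length for program length (everything here is illustrative, not the actual construct):

```python
# Toy Solomonoff-style prediction: weight each hypothesis "the data
# repeats pattern p" by 2^-len(p), keep hypotheses consistent with the
# observed bits, and predict the next bit by weighted vote.
from itertools import product

def predict_next(bits, max_len=4):
    scores = {0: 0.0, 1: 0.0}
    for n in range(1, max_len + 1):
        for p in product("01", repeat=n):
            hyp = "".join(p)
            gen = hyp * (len(bits) // n + 2)     # repeat the pattern
            if gen.startswith(bits):             # consistent with the data?
                scores[int(gen[len(bits)])] += 2.0 ** -n
    return max(scores, key=scores.get)

print(predict_next("010101"))   # -> 0: the shortest consistent pattern "01" dominates
```

Shorter patterns get exponentially more weight, so the simplest consistent hypothesis dominates the prediction, which is the Occam's-razor flavor of the real thing.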
pretty funny ;-)
-- Ben
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org
"
“When nothing seems to help,
severe problem for contemporary AGI.
>
> Jim Bromer
On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield
wrote:
> Ben,
>
> On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel wrote:
>
>> know what dimensional analysis is, but it would be great if you could
>> give an example of how it's useful for everyday commonsense reasoni
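For what it's worth, one everyday use of dimensional analysis is catching nonsense combinations of quantities automatically. A minimal sketch (the representation and names here are mine, purely illustrative):

```python
# Track unit exponents alongside values; refuse to add quantities whose
# units disagree, and combine exponents under multiplication.
class Q:
    def __init__(self, value, **units):          # e.g. Q(3.0, m=1, s=-1)
        self.value = value
        self.units = {k: v for k, v in units.items() if v}  # drop zero exponents

    def __add__(self, other):
        if self.units != other.units:
            raise ValueError("unit mismatch: %r vs %r" % (self.units, other.units))
        return Q(self.value + other.value, **self.units)

    def __mul__(self, other):
        units = dict(self.units)
        for k, v in other.units.items():
            units[k] = units.get(k, 0) + v
        return Q(self.value * other.value, **units)

speed = Q(3.0, m=1, s=-1)
time = Q(2.0, s=1)
dist = speed * time                  # s exponents cancel, leaving metres
print(dist.value, dist.units)        # 6.0 {'m': 1}

try:
    speed + time                     # adding m/s to s is caught as an error
except ValueError as e:
    print("rejected:", e)
```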
rds the centre of the court so that you will be prepared to
>> cover a ball to the extreme, near right side - or do you move more slowly?
>> If you don't move rapidly, you won't be able to cover that ball if it comes.
>> But if you do move rapidly, your opponent can play
ir own complex rules regarding
> what other types of neurons they can connect to, and how they process
> information. "Architecture" might involve deciding how many of each type to
> provide, and what types to put adjacent to what other types, rather than the
> more detailed con
.
>
> What is the real significance of the difference between the two types of
> functions here?
>
> Joshua
ily correct. Once I convey my vision,
> then let the chips fall where they may.
>
> On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel wrote:
>
>> Hutter's AIXI for instance works [very roughly speaking] by choosing the
>> most compact program that, based on historical data,
e entire problem of dealing with complicated situations is that these
>> narrow AI methods haven't really worked. That is the core issue.
>>
>> Jim Bromer
>>
>>
>
> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
> - narrow AI. Looking for the one right prediction/ explanation is narrow
> AI. Being able to generate more and more possible explanations, wh. could
> all be valid, is AGI. The former is rational, uniform thinking.
hoping
> to solve. The theory has been there a while... How to effectively implement
> it in a general way though, as far as I can tell, has never been solved.
>
> Dave
>
> On Sun, Jun 27, 2010 at 9:35 AM, Ben Goertzel wrote:
>
>>
>> Hi,
>>
>> I certain
able.
>
> I thought I'd share my progress with you all. I'll be testing the ideas on
> test cases such as the ones I mentioned in the coming days and weeks.
>
> Dave
>> f houses and to pictures of flying, would have the
>> ability to eventually draw a picture of a flying house (along with a
>> lot of other creative efforts that you have not even thought of). But
>> the thing is that I can do this without using advanced AGI
>> technique
't find. It's not that the test set
> developers weren't careful. They spent probably $1 million developing it
> (several people over 2 years). It's that you can't simulate the high
> complexity of thousands of computers and human users with anything less than
>
't...
On Tue, Jan 13, 2009 at 1:13 PM, Philip Hunt wrote:
> 2009/1/9 Ben Goertzel :
>> Hi all,
>>
>> I intend to submit the following paper to JAGI shortly, but I figured
>> I'd run it past you folks on this list first, and incorporate any
>> useful feedback in
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org
"This is no place to stop -- half way between ape and angel"
-- Benjamin Disraeli
--
Hi,
> Since I can now get to the paper, some further thoughts. Concepts that
> would seem hard to form in your world are organic growth and phase
> changes of materials. Also naive chemistry would seem to be somewhat
> important (cooking, dissolving materials, burning: these are things
> that a pre-
On Tue, Jan 13, 2009 at 5:56 AM, William Pearson wrote:
> 2009/1/9 Ben Goertzel :
>> This is an attempt to articulate a virtual world infrastructure that
>> will be adequate for the development of human-level AGI
>>
>> http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf
&g
> The last one ("John has cybersex with 1000 women") is very hard to
> think of a replacement that is equally convincing...
I'm not offended by sexual references at all ... but I have to say,
this comment of yours bespeaks a VERY highly biased imagination on
your part 8-DD
ben
-
AGI company A2I2 has released a product for automating call center
functionality, see...
http://www.smartaction.com/index.html
Based on reading the website here is my initial reaction
Certainly, automating a higher and higher percentage of call center
functionality is a worthy goal, and a place
nd supporting the first eight candidates
listed at the URL: Sonia Arrison, George Dvorsky, Patri Friedman, Ben
Goertzel (big surprise), Stephane Gounari, Todd Huffman, Jonas Lamis,
and Mike LaTorra.
Sorry for the short notice, but if you see this in time and have the
interest, I hope you'll be
>
> I outlined the basic principle in this paper:
> http://www.comirit.com/papers/commonsense07.pdf
> Since then, I've changed some of the details a bit (some were described in
> my AGI-08 paper), added convex hulls and experimented with more laws of
> physics; but the bas
> The model feels underspecified to me, but I'm OK with that, the ideas
> conveyed. It doesn't feel fair to insist there's no fluid dynamics
> modeled though ;-)
Yes, the next step would be to write out detailed equations for the
model. I didn't do that in the paper because I figured that would b
On Sat, Jan 10, 2009 at 4:27 PM, Nathan Cook wrote:
> What about vibration? We have specialized mechanoreceptors to detect
> vibration (actually vibration and pressure - presumably there's processing
> to separate the two). It's vibration that lets us feel fine texture, via the
> stick-slip fricti
It's actually mentioned there, though not emphasized... there's a
section on senses...
ben g
On Fri, Jan 9, 2009 at 8:10 PM, Eric Burton wrote:
> Goertzel this is an interesting line of investigation. What about in
> world sound perception?
>
> On 1/9/09, Ben Goer
lying virtual world infrastructure an
effective AGI preschool would minimally require.
thx
Ben G
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org
"I intend to live forever, or die trying."
-- Groucho Marx
> If it was just a matter of writing the code, then it would have been done
> 50 years ago.
if proving Fermat's Last Theorem were just a matter of doing math, it
would have been done 150 years ago ;-p
obviously, all hard problems that can be solved have already been solved...
???
--
I'm heading off on a vacation for 4-5 days [with occasional email access]
and will probably respond to this when i get back ... just wanted to let you
know I'm not ignoring the question ;-)
ben
On Tue, Dec 30, 2008 at 1:26 PM, William Pearson wrote:
> 2008/12/30 Ben Goertzel :
>
William Pearson wrote:
> 2008/12/29 Ben Goertzel :
> >
> > Hi,
> >
> > I expanded a previous blog entry of mine on hypercomputation and AGI into
> a
> > conference paper on the topic ... here is a rough draft, on which I'd
> > appreciate commentary from anyon
seem...
-- ben g
On Mon, Dec 29, 2008 at 4:18 PM, J. Andrew Rogers <
and...@ceruleansystems.com> wrote:
>
> On Dec 29, 2008, at 10:45 AM, Ben Goertzel wrote:
>
>> I expanded a previous blog entry of mine on hypercomputation and AGI into
>> a conference paper on the topic ...
such as
chance, imitation or intuition...
-- Ben G
, or sand, or throws it at another ball in mid-air, or
> (as we've partly discussed) plays with it like an infant ?]
hysical world.
http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html
-- Ben
On Sat, Dec 27, 2008 at 8:28 AM, Ben Goertzel wrote:
>
> David,
>
> Good point... I'll revise the essay to account for it...
>
> The truth is, we just don't know
Sat, Dec 27, 2008 at 6:46 AM, David Hart wrote:
> On Sat, Dec 27, 2008 at 5:25 PM, Ben Goertzel wrote:
>
>>
>> I wrote down my thoughts on this in a little more detail here (with some
>> pastings from these emails plus some new info):
>>
>>
>> http://multiver
I wrote down my thoughts on this in a little more detail here (with some
pastings from these emails plus some new info):
http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html
On Sat, Dec 27, 2008 at 12:23 AM, Ben Goertzel wrote:
>
>
>> Suppos
> Much of AI and pretty much all of AGI is built on the proposition that we
> humans must code knowledge because the stupid machines can't efficiently
> learn it on their own, in short, that UNsupervised learning is difficult.
>
No, in fact almost **no** AGI is based on this proposition.
Cyc is b
>
> Suppose I take the universal prior and condition it on some real-world
> training data. For example, if you're interested in real-world
> vision, take 1000 frames of real video, and then the proposed
> probability distribution is the portion of the universal prior that
> explains the real vide
> Most compression tests are like defining intelligence as the ability to
> catch mice. They measure the ability of compressors to compress specific
> files. This tends to lead to hacks that are tuned to the benchmarks. For the
> generic intelligence test, all you know about the source is that it h
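The benchmark-tuning worry is easy to demonstrate: against a known fixed file, a degenerate "compressor" that merely recognizes that file beats any general-purpose one. A toy sketch of mine (zlib as the stand-in general compressor; the "hack" is deliberately useless off-benchmark):

```python
# Why fixed-file compression benchmarks invite overfitting: a general
# compressor gives a real ratio, but a hack tuned to the benchmark file
# "wins" trivially while compressing nothing else.
import zlib

benchmark = b"the quick brown fox " * 500   # a 10,000-byte, highly redundant file

def general_compress(data):
    return zlib.compress(data, 9)           # works on any input

def benchmark_hack(data):
    # "Tuned" compressor: emit 1 byte if it sees the known benchmark file,
    # otherwise store the data uncompressed with a 1-byte flag.
    return b"\x00" if data == benchmark else b"\x01" + data

print("general:", len(general_compress(benchmark)), "bytes")  # far below 10,000
print("hack:   ", len(benchmark_hack(benchmark)), "bytes")    # 1 byte, but useless in general
```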
rom
> it all, and by all means let me know if you eventually come to a different
> conclusion.
>
>
>
>
> Richard Loosemore
>
>
>
>
oronic. If you can
> deliver general intelligence then you are not delivering a model of it, you
> are delivering *actual* general intelligence. To use models as a basis for
> it you need to have a scientific basis for a claim that the models that have
> been used to implement the AGI can (in the
f the
theoretical speculations one reads in the neuroscience literature... and I
can't really think of any recent neuroscience data that refutes any of his
key hypotheses...
On Tue, Dec 23, 2008 at 10:36 AM, Richard Loosemore wrote:
> Ben Goertzel wrote:
>
>>
>> Richard,
>>
usion.
>>
>>
>>
>> Richard Loosemore
>>
>>
on for it. In addition, if you attend
> events at either MIT's brain study center or its AI center, you will find
> many of the people who are there are from the other of these two centers,
> and that there is a considerable degree of cross-fertilization there that I
> have hear
On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter wrote:
> Ben,
>
>
>
> Thanks for the reply.
>
>
>
> It is a shame the brain science people aren't more interested in AGI. It
> seems to me there is a lot of potential for cross-fertilization.
>
I don't think many of these folks have a principled or
Hi,
>
>
> So if the researcher on this project have been learning some of your ideas,
> and some of the better speculative thinking and neural simulations that have
> been done in brains science --- either directly or indirectly --- it might
> be incorrect to say that "there is no 'design for a th
Richard Loosemore
>
>
>
ason. There's always 2009! You never
>
> > know
>
>
>
> You talked about building your 'chips'. Just curious what are you
>
> working on? Is it hardware-related?
>
>
>
> YKY
>
>
>
>
>
> --
>
>
> Consider an object, such as a sock or a book or a cat. These objects
> can all be recognised by young children, even though the visual input
> coming from them changes with the angle they're viewed from. More
> fundamentally, all these objects can change shape, yet humans can
> still effortlessly
s form), current robots
face a hard and odd problem...
ben
On Sat, Dec 20, 2008 at 11:42 AM, Philip Hunt wrote:
> 2008/12/20 Ben Goertzel :
> >
> > It doesn't have to be humanoid ... but apart from rolling instead of
> > walking,
> > I don't see any really sig
On Sat, Dec 20, 2008 at 10:44 AM, Philip Hunt wrote:
> 2008/12/20 Ben Goertzel :
> >
> > Well, it's completely obvious to me, based on my knowledge of virtual
> worlds
> > and robotics, that building a high quality virtual world is orders of
> > magnitude eas
>
> It's an interesting idea, but I suspect it too will rapidly break down.
> Which activities can be known about in a rich, better-than-blind-Cyc way
> *without* a knowledge of objects and object manipulation? How can an agent
> know about reading a book, for example, if it can't pick up and manip
On Sat, Dec 20, 2008 at 8:01 AM, Derek Zahn wrote:
> Ben:
>
> > Right. My intuition is that we don't need to simulate the dynamics
> > of fluids, powders and the like in our virtual world to make it adequate
> > for teaching AGIs humanlike, human-level AGI. But this could be
> > wrong.
>
> I s
Hi,
>>
>> Because some folks find that they are not subjectively sufficient to
>> explain everything they subjectively experience...
>>
> That would be more convincing if such people were to show evidence that
> they understand what algorithmic processes are and can do. I'm almost
> tempted to c
deaf, I suppose ;-)
On Fri, Dec 19, 2008 at 9:42 PM, Ben Goertzel wrote:
>
> Ahhh... ***that's*** why everyone always hates my cakes!!! I never
> realized you were supposed to **taste** the stuff ... I thought it was just
> supposed to look funky after you throw it i
Ahhh... ***that's*** why everyone always hates my cakes!!! I never realized
you were supposed to **taste** the stuff ... I thought it was just supposed
to look funky after you throw it in somebody's face ;-)
On Fri, Dec 19, 2008 at 9:31 PM, Philip Hunt wrote:
> 2008/12/20
On Fri, Dec 19, 2008 at 9:10 PM, J. Andrew Rogers <
and...@ceruleansystems.com> wrote:
>
> On Dec 19, 2008, at 5:35 PM, Ben Goertzel wrote:
>
>> The problem is that **there is no way for science to ever establish the
>> existence of a nonalgorithmic process**, beca
s of naive physics...
ben g
On Fri, Dec 19, 2008 at 8:56 PM, Philip Hunt wrote:
> 2008/12/20 Ben Goertzel :
> >
> >>
> >> 3. to provide a "toy domain" for the AI to think about and become
> >> proficient in.
> >
> > Not just to become prof
>
>
On Fri, Dec 19, 2008 at 8:42 PM, Philip Hunt wrote:
> 2008/12/20 Ben Goertzel :
> >
> > I.e., I doubt one needs serious fluid dynamics in one's simulation ... I
> > doubt one needs bodies with detailed internal musculature ... but I think
> > one does need basic N
> You, like the rest of us, are incapable of discussing anything else. Email
> cannot carry non-algorithmic ideas or concepts. Just because you do not
> consider your system "algorithmic" does not mean that it is not. Nature is
> algorithmic, your chip is algorithmic, everything is algorithmic.
> You can't deliver any evidence at all that the processes I am investigating
> are invalid.
>
True, and you can't deliver any evidence that once AGIs reach an IQ of 1000,
aliens will contact them and welcome them to the Trans-Universal Club of
Really Clever Beings.
In fact, I won't be at all sur
eniently
> doable
> I mean)?
>
> Thanks!
>
>
On Fri, Dec 19, 2008 at 7:51 PM, Ben Goertzel wrote:
>
> Well, I think you might have overreacted to his writing style for cultural
> reasons
>
> However, I also think that -- to be Americanly blunt -- you're very
> unlikely to learn anything from conversing with Mike,
O
e
> know. In that case I'll try my best to learn his way of communication,
> at least when talking to British and American people --- who knows, it
> may even improve my marketing ability. ;-)
>
> Pei
>
> On Fri, Dec 19, 2008 at 7:01 PM, Ben Goertzel wrote:
> >
> *
> d) 75 years of computer-based-AGI failure - has sent me a message that no
> amount of hubris on my part can overcome. As a scientist I must be informed
> by empirical outcomes, not dogma or wishful thinking.
>
> *
>
That argument really is a foolish one not worth paying attention to.
I me
s' the balance sheet and
> revenue statements. It panics about cash flows. Feels ecstatic when profit is
> good. No longer do we need 'rules of incorporation'. The company literally
> IS the AGI. If the company "goes bad" you take it out and shoot it. The
> process of giving bi
ben
On Fri, Dec 19, 2008 at 5:29 PM, Richard Loosemore wrote:
> Ben Goertzel wrote:
>
>>
>> yeah ... that's not a matter of the English language but rather a matter
>> of the American Way ;-p
>>
>> Through working with many non-Americans I have noted that w
t as *obviously* far beyond the scope of contemporary AGI
designs (at least according to some experts, like me), which is what makes
it more interesting in the present moment...
ben g
-- Ben G
On Fri, Dec 19, 2008 at 5:12 PM, Philip Hunt wrote:
> 2008/12/19 Ben Goertzel :
> >
> > What
Colin,
It is of course possible that human intelligence relies upon
electromagnetic-field sensing that goes beyond the traditional "five
senses."
However, this argument
> Functionally, the key behaviour I use to test my approach is "scientific
> behaviour". If you sacrifice the full EM field, an