Re: [agi] Re: "Cosmist Manifesto" available via Amazon.com

2010-07-21 Thread David Orban
That's fantastic.

The next steps I am going to take:
- set up a Kindle edition
- set up an iBooks edition
- set up a Scribd edition

D

David Orban
skype, twitter, linkedin, sl, etc: davidorban



On Wed, Jul 21, 2010 at 8:01 AM, Ben Goertzel  wrote:
> Oh... and, a PDF version of the book is also available for free at
>
> http://goertzel.org/CosmistManifesto_July2010.pdf
>
> ;-) ...
>
> ben
>
> On Tue, Jul 20, 2010 at 11:30 PM, Ben Goertzel  wrote:
>> Hi all,
>>
>> My new futurist tract "The Cosmist Manifesto" is now available on
>> Amazon.com, courtesy of Humanity+ Press:
>>
>> http://www.amazon.com/gp/product/0984609709/
>>
>> Thanks to Natasha Vita-More for the beautiful cover, and David Orban
>> for helping make the book happen...
>>
>> -- Ben
>>
>>
>> --
>> Ben Goertzel, PhD
>> CEO, Novamente LLC and Biomind LLC
>> CTO, Genescient Corp
>> Vice Chairman, Humanity+
>> Advisor, Singularity University and Singularity Institute
>> External Research Professor, Xiamen University, China
>> b...@goertzel.org
>>
>> "I admit that two times two makes four is an excellent thing, but if
>> we are to give everything its due, two times two makes five is
>> sometimes a very charming thing too." -- Fyodor Dostoevsky
>>
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> CTO, Genescient Corp
> Vice Chairman, Humanity+
> Advisor, Singularity University and Singularity Institute
> External Research Professor, Xiamen University, China
> b...@goertzel.org
>
> "I admit that two times two makes four is an excellent thing, but if
> we are to give everything its due, two times two makes five is
> sometimes a very charming thing too." -- Fyodor Dostoevsky
>
>


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] evolution-like systems

2007-10-19 Thread David Orban
> the intuitiveness (or not) of evolution-like systems

I recently gave a talk at the Life 2.0 Conference on the
"Evolution of Objects",
http://www.slideshare.net/davidorban/evolving-useful-objects-life-20-summit/
which touches on a similar subject in a different context.

> How many have a model of mind
> that explains why some people find these models intuitive while many do not?

I don't know if I can call it a 'model of the mind', but in my opinion
the difficulty for many people stems from the context switching
required: the first-order causes acting on individual fitness, for
example, manifest themselves in the second-order system of the species
as a whole changing. If this is right, and it really is a question of
context-switching difficulty, then the roots of the lack of
intuitiveness might be found in the way our perceptual systems
recognize patterns, looking for the causes of a pattern at the same
level as the pattern itself...
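
To make the two levels concrete, here is a minimal Python sketch of my
own (just an illustration I wrote for this mail, not something taken
from the blog post or from Josh): the selection rule only ever looks at
an individual's fitness, yet the quantity that visibly changes is the
mean trait of the whole population.

import random

def fitness(trait):
    # First-order cause: an individual's fitness depends only on its own trait.
    return trait

def evolve(population, generations=20, mutation=0.05):
    for gen in range(generations):
        # Fitness-proportional selection of parents: a purely individual-level rule.
        parents = random.choices(
            population, weights=[fitness(t) for t in population], k=len(population)
        )
        # Offspring inherit the parent's trait with a small random mutation.
        population = [
            min(1.0, max(0.0, p + random.gauss(0, mutation))) for p in parents
        ]
        # Second-order effect: the population's mean trait drifts upward,
        # even though no rule anywhere refers to the population as a whole.
        print("generation %2d  mean trait = %.3f"
              % (gen, sum(population) / len(population)))

if __name__ == "__main__":
    random.seed(42)
    evolve([random.random() for _ in range(100)])

The intuition gap, if I am right, is exactly the gap between the
per-individual rule inside the loop and the population-level trend in
the printout.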


David Orban
www.davidorban.com
skype davidorban
sl davidorban


On 10/19/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> There's a really nice blog at
> http://karmatics.com/docs/evolution-and-wisdom-of-crowds.html talking about
> the intuitiveness (or not) of evolution-like systems (and a nice glimpse of
> his Netflix contest entry using a Kohonen-like map builder).
>
> Most of us here understand the value of a market or evolutionary model for
> internal organization and learning in the mind. How many have a model of mind
> that explains why some people find these models intuitive while many do not?
>
> Josh
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=55356420-26023c


Re: [agi] More public awarenesss that AGI is coming fast

2007-10-19 Thread David Orban
Ben wrote:
> Having said that, I would still prefer to avoid the VC route for Novamente.

Another route that Novamente is apparently exploring is that of
open-source development, with OpenCog. It will be very interesting to
see how it pans out, what level of interest and involvement it garners
from the larger developer community, etc.

And to bring this thread back somewhat to its origin: the economic
viability of AGI projects is obviously relevant to the public, who in
capitalist societies are the investors in publicly quoted companies, as
a widely spread mass of institutional and individual shareholders.

To many, an open-source AGI project is the most dangerous path,
because knowledge in the open enables anybody to accelerate their route
to evil; to others it is the only way, because spreading the knowledge
is what ensures an equally wide understanding of the best defenses.

In this way the economic model of AGI development is intimately tied
to its likely public perception.

----
David Orban
www.davidorban.com
skype davidorban
sl davidorban


On 10/19/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
>
> >
> >
> > AGI is poorly suited for venture capital in every case I can think
> > of.  Ignoring everything else, it tends to leave the venture
> > constantly begging for capital which has serious consequences on
> > performance and reputation.  It is a Catch-22, though perhaps well-
> > deserved.
> >
> > In short, traditional venture capital is a poor finance model for
> > AGI.  Which does not suggest other finance models.
>
>
> I think that AGI for agent control in virtual worlds is not so hopeless
> in terms of appealing to VC's ... there's a real market there, and there's
> clearly a situation where more and more powerful AGI can yield more and
> more profits...
>
> Having said that, I would still prefer to avoid the VC route for Novamente.
>
> I have talked to a number of VC's in recent months -- and by and large they
> want to pigeonhole us as a company that forever will be focused on whatever
> our first product is gonna be (If your first product is for instance an
> animal
> in virtual worlds then -- bingo! -- you're a virtual animal company!!)
>
> VC's in nearly all cases don't have a long time horizon, so to find an AGI
> opportunity that synergizes with their needs requires a good bit of luck...
>
> -- Ben
>
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=55278091-615d4f


Re: [agi] More public awarenesss that AGI is coming fast

2007-10-17 Thread David Orban
> > What I find more interesting is the question of whether, if, how,
> > when, and in what way these systems might become "self-aware".

Yes, Linas, you are right, that is a very interesting and intriguing question.

Your examples are also very good. Should we then assume that, since
major industry segments and corporations are already run by software
and nobody seems to mind, it will stay that way?

I think we should still think through, together with the answer to
your question, what our position should be if there were major protests
against systems becoming progressively, though not yet radically, more
autonomous.

There are now Department of Labor predictions of 50-80% unemployment
rates due to the automation of white-collar jobs. In my opinion this is
not a small matter either.

Stem cell research in the US and genetically modified food research in
the EU have both been frozen through political intervention because of
their perceived threats. Neither decision was fully informed; both were
largely emotional.

We should analyze the means of making sure the same doesn't happen to
AGI research.

-- 

David Orban
www.davidorban.com
skype davidorban
sl davidorban


On 10/18/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 17, 2007 at 10:48:31PM +0200, David Orban wrote:
> >
> > During the Summit there was a stunning prediction, made if I am not
> > mistaken by Peter Thiel, that the leading corporations on the planet will
> > be run by their MIS and ERP systems. There is no need for a qualitative
> > change for this, and yet it will potentially have a very dramatic impact
> > on the hierarchies of enterprises and on the white-collar jobs they employ.
>
> My impression is that, to a fair degree, this is already the case
> for the airline industry, and the retail/wholesale relationship.
>
> For decades, airlines have been slaves to their pricing/scheduling
> algorithms, which figure out what to fly where and how often.
> Failure to obey the algorithm will bankrupt the airline in short
> order (witness the turmoil after 9/11, where the algos didn't quite
> understand the changed nature of the marketplace).
>
> Similarly, the movement of products through Walmart and Home Depot
> is also controlled by "narrow AI" type data-mining, sales-forecast,
> and ordering-automation software. So is the loading of trucks, and the
> routes taken by trucks. Failure to follow the output of your sales
> forecast algos will likewise cause you to lose a lot of income pretty
> rapidly.
>
> Use of datamining and optimization algos will only increase.
> Manufacturing uses algos to do "just-in-time" parts ordering.
> Robots put things together, and robots wander loose in warehouses.
> Packing slips/bills of lading are automated, and so is billing,
> and accounts receivable/payable.
>
> Remember some of the Y2K fiascos? e.g. in 1995, a paint company
> computer decided that paint cans with an expiration date of
> 1/1/00 were expired, and ordered workers to dump out fresh cans
> of paint as they rolled off the assembly line?  I heard they
> actually dumped some of it, until a supervisor put a halt to it.
>
> What I find more interesting is the question of whether, if, how,
> when, and in what way these systems might become "self-aware".
>
> --linas
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=54783433-12b862


Re: [agi] More public awarenesss that AGI is coming fast

2007-10-17 Thread David Orban
This meta-discussion, about people's opinions on the probability of AGI
being realized within a given timeframe, is actually crucial. These opinions
can shape their actions towards AGI, regardless of their correctness.

As the public becomes more and more aware of the various scenarios
surrounding AGI as a concrete possibility, rather than a science fiction or
futurologist dream, the turning tide will also bring with it the flotsam of
active resistance, towards AGI in practice, but also towards AGI theory and
research.

In this context, it is in my opinion a fundamental task of the Singularity
Institute to formulate sharp policy recommendations, and to be ready with
detailed answers to the various criticisms that will emerge. These answers
have to be pitched at both popular and technical levels, for different
audiences. The criticism, not necessarily constructive, is mainly going to
come from established interests in public service and from industrial
organizations that are likely to be disrupted by even below-human-level AGI.

During the Summit there was a stunning prediction, made if I am not
mistaken by Peter Thiel, that the leading corporations on the planet will be
run by their MIS and ERP systems. There is no need for a qualitative change
for this, and yet it will potentially have a very dramatic impact on the
hierarchies of enterprises and on the white-collar jobs they employ. (How
many middle managers are already today nothing but slow and unreliable
interfaces between computer systems that would be much more useful if
directly connected?)

The next generation of Facebook-type applications will be applied to social
systems of increasing complexity, up to entire countries, starting maybe
with technologically friendly and not necessarily democratic ones, or ones
just authoritarian enough, like Malaysia or Indonesia. Entire countries are
going to be managed and run by these systems as well: not as a planned
economy, but as a flexible, bottom-up organism that achieves a very high
level of efficiency.

When Christine Peterson, again at the Summit, referred to the need to
manage the debate process in an intelligent manner, she expressed the
feeling, based in my opinion on her experience in the nanotech field, that
we must proactively involve in the dialogue those stakeholders in society
who are not technically prepared, but who will nonetheless be crucial in
shaping the constraints on future development.

-- 
----
David Orban
www.davidorban.com
skype davidorban
sl davidorban

On 10/17/07, Bob Mottram <[EMAIL PROTECTED]> wrote:
>
> This is a very optimistic prediction, since 2015 is only seven years from
> now.  It implies a highly concerted space race type of effort towards AGI,
> with associated funding levels and a few conceptual breakthroughs along the
> way.
>
> I would be cautious about claiming that conscious machines will arrive in
> less than a decade, but it all depends upon what is meant by "conscious".
> Under some definitions of consciousness victory could already be
> proclaimed.  Since we don't yet know what the neuronal correlates of
> consciousness are (although there are a few theories) this is a fairly
> meaningless prediction.
>
> Also, it's a mistake to assume that because someone works for a
> major company their views are more valuable than those of others working
> in the same field.
>
>
> On 17/10/2007, Edward W. Porter <[EMAIL PROTECTED]> wrote:
>
> >  In today's KurzweilAI.net mailing list is a link to an article in which
> > British Telecom's futurologist is predicting conscious machines by 2015 and
> > one brighter than people by 2020.
> >
> > I think these predictions are very reasonable, and the fact that a
> > futurologist for a major company is making this statement to the public in
> > his capacity as an employee of such a major company indicates the extent to
> > which the tide is turning.  As I have said before on this list: "The race
> > has begun."
> >
> > (The article isn't really that valuable in terms of explaining things
> > those on this list have not already heard or thought of, but it is
> > evidence of the changing human collective consciousness on subjects
> > relating to the singularity.  Its link is
> > http://www.computerworld.com.au/index.php/id;1028029695;fp;;fpid;;pf;1 )
> >
> > Edward W. Porter
> > Porter & Associates
> > 24 String Bridge S12
> > Exeter, NH 03833
> >  (617) 494-1722
> > Fax (617) 494-1822
> > [EMAIL PROTECTED]

Re: [agi] META: spam? ZONEALARM CHALLENGE

2007-06-12 Thread David Orban

R. Schwall set up the filter on his incoming email so that it is triggered
by the sender. The filter could be changed to let all messages with [agi] in
the subject through, but that is not something the list owner or any of us
can do; only R. Schwall can. Luckily the challenge doesn't go to the whole
group, just to whoever sends him a message...
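
Just to illustrate the distinction (a generic sketch I am writing here, not
how ZoneAlarm is actually configured, and the address in it is made up): a
sender-based whitelist challenges everyone it does not already know, while a
subject-based rule would simply let anything tagged [agi] through.

import email
from email.utils import parseaddr

ALLOWED_SENDERS = {"someone@example.com"}  # hypothetical "allowed people" list

def allowed_by_sender(raw_message):
    # What a challenge-response filter effectively checks: the sender address.
    msg = email.message_from_bytes(raw_message)
    _, addr = parseaddr(msg.get("From", ""))
    return addr.lower() in ALLOWED_SENDERS

def allowed_by_subject(raw_message):
    # What would let list traffic pass: the [agi] tag in the subject line.
    msg = email.message_from_bytes(raw_message)
    return "[agi]" in (msg.get("Subject", "") or "").lower()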

David

--
www.davidorban.com
skype davidorban
sl davidorban

On 6/13/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:



I keep getting the following message whenever I post to [agi].
It looks like spam.  Can we get rid of it?  Or is it just me?

YKY

-- Forwarded message --
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: Jun 13, 2007 12:19 PM
Subject: Re: Re: [agi] AGI Consortium [ZONEALARM CHALLENGE]
To: [EMAIL PROTECTED]

ZoneAlarm Security Suite E-Mail Verification

Thank you for sending me your email with the subject "Re: [agi] AGI
Consortium". I really want to receive your email.

In an effort to eliminate junk email, I am using ZoneAlarm Security Suite.

ZoneAlarm Security Suite has placed your message on hold.

Please click the button below so you will be added to my Allowed people
list, I will receive your email, and we will be able to communicate freely
going forward.

Do not reply to this message.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e