Re: [singularity] future of mankind blueprint and relevance of AGI

2008-05-21 Thread Nathan Cravens
Hi Phillip,

Unless you're willing to present a great deal of evidence in relation to ET
or time machines, it's probably best you not mention them, for, I hope,
obvious reasons.

Isolate states exporting anarchy or not attempting to participate in the
 globalized workforce. Begin measuring the purchasing-parity-adjusted annual
 cost to provide a Guaranteed Annual Income (GAI) in various nations.


Anarchy technically means no rulers, though by anarchy you may mean
chaos. Socially speaking, you might want to consider it a lack of, or
unwillingness toward, collaboration or communication, rather than something
hinting at incomprehensibility. Bryan, it would seem we're kindred spirits
on this subject. ;)

A basic income would help encourage people to work in more self-motivated
ways, rather than under coercive social pressure to get a job simply to pay
the bills. Given the labor deskilling already caused by productive
development overall, the unfolding of AGI will be another can of worms to
trump all previous cans filled with worms, no doubt. A GAI, or even more
cleverly rearranged as GIA, would hint at the socialist underpinnings of
basic income proposals.
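
As a rough sketch of the measurement you propose, something like the
following (in Python) could be a starting point. Every figure in it is a
hypothetical placeholder; real inputs would be, say, World Bank PPP
conversion factors and national population counts.

    # Sketch: PPP-adjusted annual cost of a Guaranteed Annual Income.
    # All numbers below are invented placeholders for illustration.

    # population, PPP conversion factor (local currency units per
    # international dollar), and proposed annual GAI in local currency
    nations = {
        "Nation A": {"population": 50_000_000,
                     "lcu_per_intl_dollar": 3.5,
                     "gai_local": 20_000},
        "Nation B": {"population": 10_000_000,
                     "lcu_per_intl_dollar": 0.8,
                     "gai_local": 12_000},
    }

    for name, data in nations.items():
        # Convert the local-currency GAI into international dollars,
        # then scale by population for the gross annual cost.
        gai_intl = data["gai_local"] / data["lcu_per_intl_dollar"]
        total_cost = gai_intl * data["population"]
        print(f"{name}: ~{total_cost:,.0f} international dollars/year")

Comparing such totals across nations (net of whatever taxes and transfers a
GAI would replace) would give the ranking you're after.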

3) Brainstorming of industries required to maximize longevity, and to handle
 technologies and wield social systems essential for safely transitioning
 first to a medical/health, then to a leisure society.


By longevity do you mean the continuation of distributed resources? Industry
usually entails methods of profit making, none of which will occur once
abundant resources sink markets that were once proprietary in nature. I
formed an organization to tackle your #3. It's certainly a topic likely to
see more discussion as labor economies tank. For now, we're just a bunch of
kooks involved in the so-called technological unemployment debate.

8) Mature medical ethics needed. Mature medical AI safeguards needed.
 Education in all medical AI-relevant sectors. Begin measuring AI medical R&D
 advances vs. human researcher medical R&D advances.


Why the emphasis on health? Running a fever, are you? ;P

Potentially powerful occupations and consumer goods will require increased
 surveillance. Brainstorming metrics to determine the most responsible
 handlers of a #13 technology (I suggest something like the CDI Index as a
 ranking).


I think it would be better to build safeguards into produced items that
obviate the need for surveillance measures. CDI analysis will eventually
transition from monetary figures to resource allocation statistics,
measuring in the realm of physics rather than in financial figures, because
money will gradually lose face.

To maintain security for some applications it may be necessary to engineer
 entire cities from scratch. Sensors should be designed to maximize human
 privacy rights. There is a heightened risk of WWIII from this period on
 until just after the technology is developed.


The progress made in the open source community strongly suggests that open
and freely available tools, if applied to all resource domains, would
prevent potential global warfare. Dinosaur economies rely on the warfare
business when money doesn't circulate, and this lack of circulation is often
due to greed. Hoarding and scarcity incite war; abundance fosters peace.
Of course, there isn't much money in either abundance or peace, so let us
hope the ride isn't too rocky before abundant society is the norm. Perhaps
abundant thinking will catch on and spread much like the internet did,
deemed by default a worthwhile universal endeavor. Resistance to abundance,
and adherence to measures that insist on scarcity-based economies, will
only cause more bloodshed as more productive technologies develop.
Information wants to be free.

Safe AGI/AI software programs would be needed before desired humane
 applications can be used. Need mature sciences of psychology and
 psychiatry to assist the benevolent administration of this technology. Basic
 human rights, goods, and services should be administered to all where
 tyrannical regimes don't possess military parity.


Yes. AGI could help distribute resources in a way whose fairness is less
arguable, and reduce harm to a minimum. If AGI is introduced, it will
collapse scarcity economies, all of them over time, allocating resources and
spatial boundaries in an abundant or otherwise more preferred manner.

I suggest at this point invoking a Utilitarian system like Mark Walker's
 Angelic Hierarchy


Preference Utilitarianism is a good one. Social hierarchy, whether angelic
or otherwise, poses problems: it fuels one-upmanship, creates classes,
rewards or punishes, and breeds environments that compete for arbitrary
reasons. There are of course many tastes. Maybe some folks will prefer this
angelic hierarchy business; like anything, it's not for everyone. AGI could
help match different personalities and establish places for a variety of
political behaviors.

Nathan




Re: [singularity] Vista/AGI

2008-03-16 Thread Nathan Cravens
Hi Matt,

Great topic here.

Remember, the Manhattan Project didn't come about until everyone believed a
global catastrophe was afoot. That kind of mentality seems to help bring
people together to make amazing stuff, in that case explosive stuff. As
narrow AI and robotics become more ubiquitous, the pressure to form an AGI
Manhattan Project will increase. Simple technologies like narrow AI
(software) and robotics are weeding out labor, reforming the economic
playing field (however slowly) into a laborless society. The signs of this
are slight, but striking. It seems that only those in the
hypertechnology/Singularity field see where it's going economically, however
scantily. Some examples: major unemployment in management positions
secondary to industrial losses, the failure of the debt market, increased
hoarding by the rich, and price inflation that began to climb steeply in the
mid-1970s and continues to this day.

Continued automation of service and expert systems fused with robotics will
break the old economic dinosaur sooner or later. Like AGI research,
heterodox economic research isn't profitable, and it will remain so until
the glass underneath us thins and shatters. I see one of two likely pathways
approaching before Manhattan Project activity ensues: (1) a great economic
collapse, or (2) the formation of a new friendly opposition that acts to
even things out using big-stick political means. Either of these movements
will require capable AGI.

Microsoft could use a Human Waste Management department to go with the
infinitude of other departments it currently has, not to mention a Human
Waste Management department for the Human Waste Management department.
Perhaps that would be too costly?

It would be wise for the AGI collective to write an AGI Roadmap to present
to the public once working or theoretical architectures are firmly in place.
That would help promote AGI and potentially spare us from having to form an
AGI Manhattan Project.

Nathan



[singularity] Why Artificial Intelligence Will Not Take Over The World (Without Our Permission)

2008-01-26 Thread Nathan Cravens

Anyone else who sees an opposing scenario or a variation on the same theme,
I'd love to hear it. Others have argued about machine consciousness. Because
consciousness is defined in many ways, I've described a conscious machine as
one that has feelings. I don't think it possible for an intelligence to have
feelings, or at least human feelings, until it can physically model how we
have them. These topics were discussed some in previous threads. I
anticipate this topic will be a recurring one. This is my first post; thanks
for having me. On a side note, I'd love to have suggested readings from
those interested in sharing them, off thread or otherwise.


1. Pinker, Steven. A History of Violence.
http://www.edge.org/3rd_culture/pinker07/pinker07_index.html


Nor that either,

Nathan Cravens
effortlesseconomy.com

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=90233695-ca2655

Re: [singularity] Wrong focus?

2008-01-26 Thread Nathan Cravens
 The reason biowarfare has failed so far is mostly a lack of good delivery
 mechanisms: there are loads of pathogens that will kill people, but no one
 has yet figured out how to deliver them effectively ... they die in the
 sun,
 disperse in the wind, drown in the water, whatever


Biowarfare probably hasn't caught on because it's difficult to control, like
a destructive gift that keeps on giving. It's difficult to devise an
antidote for something that may mutate, with the chance of taking out both
the target and the shooter. Tit-for-tat in a war game is not an ideal
stratagem... ;)


 If advanced genetic engineering solves these problems, then what happens?
 Are we totally screwed?

 Or will we be protected by the same sociopsychological dynamics that have
 kept DC from being nuked so far: the intersection of folks with a
 terrorist
 mindset and folks with scientific chops is surprisingly teeny...


What's the point, really? It's not like DC runs things... they just make
sure the checks get cashed. Once an AGI helps us root out the cause of
violence, my utopian sentiment suggests terrorism will be a moot point.
"Rage against peace" may be a future slogan. Until then, I hope we have
thinking posthuman software before the stodgily human-acting hardware
becomes available to do what humans do: stay 'right' by default by
eliminating the other 'right', or otherwise.

- Nathan Cravens
effortlesseconomy.com

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=90258841-e9a14e