[agi] IBM building a brain.

2005-06-28 Thread deering




Henry Markram: "I believe the intelligence that is going to emerge if we succeed in doing that is going to be far more than we can even imagine."


http://tinyurl.com/bawt2

http://bluebrainproject.epfl.ch/

http://tinyurl.com/8e8e8




Re: [agi] Hawkins founds AI company named Numenta

2005-03-24 Thread deering



http://www.forbes.com/technology/personaltech/2005/03/24/cz_qh_0324numenta.html 



Ben, this is good news: someone with such mainstream computer-business credentials is getting into the AI business. It can only add legitimacy to the field, and if he makes any money at it, many will rush to invest in him and in his competitors. That means you.






[agi] Unlimited intelligence.

2004-10-21 Thread deering




Computer chess programs are merely one example of many kinds of software that display human-level intelligence in a very narrow domain. The chess program on my desktop computer can beat me (but just barely); nevertheless, I consider myself more intelligent than it because I can do a lot of other things in addition to playing chess. But even if someone were to tack together a bunch of specialized programs to make a super program that did lots of stuff, I would still be more intelligent than it.

Intelligence isn't just being able to do lots of stuff; it is also having multiple levels of abstraction. The computer program has one level of abstraction: it plays chess. It doesn't know why it plays chess, the greater goal satisfied by playing chess, or the even greater goal that the chess-playing goal serves.

True intelligence must be aware of the widest possible context and derive its supergoals from direct observation of that context, and then generate subgoals for subcontexts. Anything with preprogrammed goals is a limited intelligence.
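To make the shape of that idea concrete, here is a minimal Python sketch; the Context and Goal classes and the derivation rule are invented for illustration, not a real design:

from dataclasses import dataclass, field

@dataclass
class Context:
    name: str
    subcontexts: list["Context"] = field(default_factory=list)

@dataclass
class Goal:
    description: str
    subgoals: list["Goal"] = field(default_factory=list)

def derive_goal(context: Context) -> Goal:
    # Derive a goal by observing the context (placeholder rule), then
    # recurse into the subcontexts to generate subgoals.
    return Goal(description="do well in " + context.name,
                subgoals=[derive_goal(s) for s in context.subcontexts])

world = Context("widest observable context",
                [Context("playing chess"), Context("holding a conversation")])
print(derive_goal(world))

The point of the sketch is only that the goal tree is computed from the observed context, not written in by hand.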








Re: [agi] Unlimited intelligence. --- Super Goals

2004-10-21 Thread deering



Yes, we have instincts, drives built into our systems at a hardware level, beyond the ability to reprogram through merely a software upgrade. These drives (sex, pain/pleasure, food, air, security, social status, self-actualization) are not supergoals; they are reinforcers.

Reinforcers give you positive or negative feelings 
when they are encountered. 

Supergoals are the top level of rules you use to 
determine a choice of behavior.

You can make reinforcers your supergoals, which is what animals do, because their contextual understanding and reasoning ability are so limited. People have a choice. You don't have to be a slave to the biologically programmed drives you were born with. You can perceive a broader context where you are not the center of the universe. You can even imagine redesigning your hardware and software to become something completely different, with no vestige of your human reinforcers.

Can a system choose to change its supergoal, or supergoals? Obviously not, unless some method of supergoal change is specifically written into the supergoals. People's supergoals change as they mature, but this is not a voluntary process. Systems can be designed to have some sensitivity to the external environment for supergoal modification. Certainly systems with immutable supergoals are more stable, but stability isn't always desirable or even safe.
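A minimal Python sketch of the reinforcer/supergoal distinction, with invented names and placeholder choice rules; reinforcers only score outcomes with feelings, while the supergoal is whatever top-level rule actually selects behavior:

# Reinforcers only attach positive or negative feelings to events.
REINFORCERS = {"food": +1.0, "pain": -1.0, "status": +0.5}

def feeling(events):
    # Net reinforcement produced by a list of events.
    return sum(REINFORCERS.get(e, 0.0) for e in events)

def animal_supergoal(options):
    # An animal makes its reinforcers its supergoal: pick what feels best.
    return max(options, key=lambda act: feeling(options[act]))

def reflective_supergoal(options):
    # A person can adopt a top-level rule that overrides raw feeling,
    # e.g. prefer the act that raises status even if it hurts.
    return max(options, key=lambda act: "status" in options[act])

options = {"eat": ["food"], "train": ["pain", "status"]}
print(animal_supergoal(options))      # -> eat
print(reflective_supergoal(options))  # -> train

Both agents see the same reinforcers; only the second has a supergoal that is not simply "maximize feeling."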








Re: [agi] Singularity Institute's The SIAI Voice - August 2004

2004-08-26 Thread deering



Tyler, take a look at the website Chris Phoenix and I are making: http://nano-catalog.com/. If you have any comments, send them to me and I will post them on the "Affiliates" page, http://nano-catalog.com/affiliates.html, along with your logo and a link to your website. You can never have too many links to your website.



Mike Deering, General Editor, http://nano-catalog.com/
Director, http://www.singularityawareness.com/
Email: deering9 at mchsi dot com






Re: [agi] Kinds of minds: minimal-, modest-, huge-resource

2004-08-26 Thread deering



Anyone on this list, take a look at the website Chris Phoenix and I are making: http://nano-catalog.com/. If you have any comments, send them to me and I will post them on the "Affiliates" page, http://nano-catalog.com/affiliates.html, along with your logo and a link to your website. You can never have too many links to your website.



Mike Deering, General Editor, http://nano-catalog.com/
Director, http://www.singularityawareness.com/
Email: deering9 at mchsi dot com






Re: [agi] Teaching AI's to self-modify

2004-07-05 Thread deering



Ben, I hope you are going to keep a human in the loop.

Human-in-the-loop scenario:

The alpha Novamente makes a suggestion about some change to its software.
The human implements the change on the beta Novamente, running on a separate machine, and tests it.
If it seems to be an improvement, it is incorporated into the alpha Novamente.

Human-not-in-the-loop scenario:

The Novamente looks at its code.
The Novamente makes changes to its code and reboots itself.

The Novamente looks at its code.
The Novamente makes changes to its code and reboots itself.

The Novamente looks at its code.
The Novamente makes changes to its code and reboots itself.

The humans wonder what the hell is going on.
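A runnable toy of the first loop, in Python. Everything here is a stand-in I invented (the classes, the test criterion, the approval step), not anything from the actual Novamente design:

import random

class Result:
    def __init__(self, is_improvement):
        self.is_improvement = is_improvement

class AlphaNovamente:
    # The alpha only *proposes* changes; it never applies them itself.
    def suggest_change(self):
        return {"parameter": random.random()}
    def incorporate(self, change):
        print("alpha updated with", change)

class BetaNovamente:
    # The beta runs on a separate machine and tests each proposal.
    def apply_and_test(self, change):
        return Result(is_improvement=change["parameter"] > 0.5)

def human_approves(change, result):
    # Stand-in for a person reviewing the tested change.
    return True

alpha, beta = AlphaNovamente(), BetaNovamente()
for _ in range(3):  # bounded loop for the sketch
    change = alpha.suggest_change()
    result = beta.apply_and_test(change)
    if result.is_improvement and human_approves(change, result):
        alpha.incorporate(change)

The safety property is structural: nothing reaches the alpha except through the test-and-approve gate.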


Mike Deering.






Re: [agi] AGI's and emotions

2004-02-24 Thread deering



An unexpected mental event or an unplanned mental excursion does not in itself constitute an emotion. An epileptic seizure is not an emotion. Most emotions, perhaps all, are very predictable from causes. You win the lottery, or the girl next door says "yes," and you are happy. Someone runs into your classic Beetle, and you are sad. You finish a major work of great value, and you feel joy. There is nothing mysterious about these emotions, no unpredictable mental dynamics. I don't consider "confusion" an emotion; I consider it an error in processing. I know I'm not telling you anything new. You surely understand all of this already. Therefore I must be missing some fundamental aspect of your thoughts on emotions. I have to admit, I've never been very good at emotions, and tend to ignore them. I feel like we must be talking past each other, but I can't imagine how we could be ambiguous about an experience as fundamental as emotion. We all have them. It's the ocean our thoughts swim in, waves taking us to and fro, and sometimes crashing us against the rocks.






Re: [agi] AGI's and emotions

2004-02-24 Thread deering



It is true that there is a portion of the process of emotion that is not under our conscious control. There are in fact many cognitive functions, underlying lots of different conscious thoughts, that are not subject to our introspection or direct control, though perhaps not beyond our understanding. We necessarily have limited ability to watch our own thought processes, both to leave time to think about the important stuff and to avoid an infinite regress. This limitation is "hardwired" in our design. The ability to selectively observe and control any cognitive function is a possible design option in an AI. The fact that there will not be time or resources to monitor every mental process, and that most will be automatic, does not make those processes emotions. Lack of observation and lack of control do not mean lack of understanding.

I agree that there will necessarily be automatic functions in a practical mind. I don't agree that these processes have to be characterized or shaped as emotions. I expect to see emotional AI's and non-emotional AI's. We don't know enough yet to predict which will function better.

1. Highly emotional AI (out of control).

2. Moderately emotional AI (like us, undependable).

3. Slightly emotional AI (your supposition, possibly good).

4. Non-emotional AI (my choice, including simulated emotions for human interaction).


Mike Deering.








Re: [agi] AGI's and emotions

2004-02-23 Thread deering



In your paper you take a stab at defining emotions and explaining how different kinds of emotions relate to goal achievement and the motivation of important behaviors (fight, flight, reproduction). You then go on to say that AI's will have goals, motivations, and important behaviors, so of course AI's will have emotions. I don't exactly agree.

I think AI's could have emotions if they were designed that way. I don't think this is the only way a mind can work, and I doubt it is the best way. Evolution gave feathers to birds, and feathers are certainly functional, but I don't think that is any excuse to paste them on the wings of an F-16. Emotions are evolution's solution to a motivational problem in biological minds. I don't want my computer to stop sending my email because it is depressed about the economy.

Emotions... I don't know. Maybe there are some applications, such as dealing with humans, where they might be useful. But then the emotions could be faked; humans do it all the time. I'm trying to think of a case where real emotions would be a functional advantage to a purpose-built machine. I can't think of any. Then again, it's late, and I have to get to bed. I'll sleep on it.


Mike Deering.






Re: [agi] probability theory and the philosophy of science

2004-01-31 Thread deering



Ben, I get the impression from reading this article that it is very closely related to your work on Novamente. In trying to design a mind that is intelligent and useful, you have decided that the scientist comes closest as an example, so you are trying to figure out how the best scientists think and build that into your software.

You certainly wouldn't want to build a super-human processing AI that was fascinated by astrology and tried to solve every problem using only that. How do you keep your AI from getting as messed up as some of us? Of course: make it a scientist.









Re: [agi] Real world effects on society after development of AGI????

2004-01-13 Thread deering



Brad, I completely agree with you that the computer/human crossover point is meaningless and that all the marbles are in the software engineering, not the hardware capability. I didn't emphasize this point in my argument because I considered it a side issue and was trying to keep the email from being any longer than necessary. But even when someone figures out how to write the software of the mind, you still need the machine to run it on. I believe in the creative ability of the whole AGI research ecosystem to deliver the software when the hardware is available. I believe that the human mind is capable of solving this design/engineering problem, and will solve it at the earliest opportunity presented by hardware availability.

Regarding nanotechnology development, I think we are approaching nano-assembly capability much faster than you seem to realize. Check out the nanotech news:

http://nanotech-now.com/

Regarding science: yes, turtles all the way down, probably. But atoms are so handy. Everything of any usefulness is made of atoms. Going below atoms to quarks, manipulating them, and making stuff other than the 92 currently stable atoms faces such severe theoretical obstacles that I can't imagine solving them all. Granted, I may be lacking imagination, or maybe I just know too much about quarks to ignore all the practical problems. Quarks are not particles. You can't just pull them apart and start sticking them together any way you want; quarks are quantified characteristics of the particles they make up. We have an existence proof that you can make neat stuff out of atoms. Atoms are stable. Quarks are more than unstable; they don't even have a separate existence. I realize that my whole argument has one great big gaping hole: "We don't know what we don't know." Okay, but what I do know about quarks leads me to believe that we are not going to have quark technology.

On a more general note, we have known for some time that areas of scientific research are shutting down. Mechanics is finished. Optics is finished. Chemistry is finished. Geology is basically finished; we can't predict earthquakes, but that's not because we don't know what is going on. Meteorology we understand but can't calculate, which is not science's fault. In oceanography, ecology, and biology, all that is left to figure out is the molecular biology, and then they are done. Physics goes on, and on, and on, but to no practical effect beyond QED, and that is all about electrons and photons and how they interact with atoms, roughly speaking.


I don't expect this clarification to change your 
mind. I think we are going to have to agree to disagree and wait and 
see.


See you after the Singularity.


Mike Deering.







Re: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread deering



How long the transition from the emergence of AGI to full integration into society will take is debatable. If the transition is distorted by government interference, things could get really screwed up, but I think there is at least an even chance that it will be allowed to develop according to free economic forces. For the remainder of this message I will assume natural, free development.


The first stage will be the initial training with basic knowledge, including specific information about the human environment. This training stage will not need to be repeated with each new AGI, as computers can copy their information.

The second stage will be the replacement of workers with robots. All jobs are susceptible to replacement by robots. I hope you have a fat 401K or other assets; you're going to need them. The cost of all products and services will drop precipitously. The cost of a product or service consists of the labor cost plus the cost of the machinery used in its production, amortized over the total number of products produced. The reason the price of electronic equipment has dropped is the increasing automation of the factories. When robots take all the jobs in the factories, labor costs will drop to zero, leaving the equipment cost. The equipment cost consists of the cost of the raw materials plus the labor cost to convert them into equipment. The labor cost will disappear, leaving only the raw materials cost. And the cost of the raw materials is primarily the labor cost to extract them from the ground or recycle them from the dump. If you look at the whole economy, from the mine to Wal-Mart, you find that labor makes up almost all of the cost. And as more manufacturing capacity is built, the cost of production drops. These are the basics of the 'abundance economy'; a toy calculation of the labor recursion is sketched below. Obviously our current social structure will need to make significant adjustments in the transition to the 'abundance economy'.
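Here is that toy calculation in Python. The numbers are invented to illustrate the compounding, not real cost data:

def unit_cost(base_cost, labor_share, levels_automated):
    # Each level of the supply chain that gets automated deletes its
    # labor share, leaving only the (mostly-labor) upstream residue.
    cost = base_cost
    for _ in range(levels_automated):
        cost *= (1.0 - labor_share)
    return cost

for levels in range(5):
    print(levels, round(unit_cost(100.0, 0.8, levels), 2))
# 0 100.0, 1 20.0, 2 4.0, 3 0.8, 4 0.16 -- if ~80% of cost at each
# level is labor, a few automated levels drive the price toward zero.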

The last stage is the integration of super-human 
AGI into government and decision making positions at the top of the societal 
control structures.


Mike Deering.






Re: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread deering



Well, the third stage requires a higher level of intelligence from the AGI's than the second stage. So, if the advancement of AGI intelligence from human level to super-human levels is rapid, then yes, it is possible that the integration of AGI's into top-level decision-making positions could occur before the replacement of all workers, which involves mid- and low-level decision making.

Mike Deering.






Re: [agi] Dr. Turing, I presume?

2004-01-10 Thread deering



Ben, you are absolutely correct. It was my intention to exaggerate the situation a bit without actually crossing the line. But I don't think it is much of an exaggeration to say that a 'baby' Novamente, even with limited hardware and speed, is a tremendous event in the history of life on Earth. A phase change starts with one molecule. As computers become more powerful and nanotech capabilities reach closer to the ultimate goal of molecular positional assembly, the world will cross a threshold similar to supercooled water, where one triggering event sets off a chain reaction causing a phase change to ice throughout the entire mass. Okay, I'm exaggerating again, but not much.

The money men know it is coming. But they have been burned so many times before in the A.I. category that they are not willing to touch the stove again unless someone can show them something that works. It doesn't have to be a finished product, just something that demonstrates a new capability. Your 'baby' Novamente, or Peter's proof-of-concept example, or James Rogers' who-knows-super-secret-whatsit will trigger a phase change in funding for AGI. The practical applications are unlimited. The profit potential is unlimited. That's why the money men threw away so much twenty years ago on projects that didn't have a ghost of a chance, and got burned.

I'm not saying that your 'baby' Novamente will change the whole world overnight all by itself. But any working example of AGI, no matter how limited, will trigger a complicated chain reaction in the economy and mindset of the world. The initial example, whatever it is, may turn out to be a flawed design of limited usefulness (I wouldn't want to see scaled-up jumbo 'Wright Flyers' populating airport terminals), but it will not matter. Just look at the funding that Google has attracted with some cleverly written but dumb (non-AGI) rules.






Re: [agi] Dr. Turing, I presume?

2004-01-09 Thread deering



Arthur, I am disappointed with the way that A.I. is depicted in science fiction books and movies. Unfortunately, most people get their idea of what the future will be like from movies and novels. Why don't they show A.I. and robots in a more realistic scenario? Take Star Trek, for instance. Data is the humanoid robot with a machine intelligence quotient of 1000 and a human intelligence quotient of 85. Why don't they make a lot of Data-like robots? Because they supposedly don't understand how his brain works. Nevertheless, in their holodecks they routinely generate convincing artificial characters. Why don't they take the same knowledge that allows them to create artificial intelligences in their holodecks and build character-driven robots that operate in the real human environment?

It seems obvious that real A.G.I. is just around the corner. Ben's Novamente progress report says they should have a working system in 12 to 18 months. Peter's a2i2 project report states that a proof-of-concept prototype should be operational in 12 months. Toyota just announced that they will have an industrial humanoid robot on the market in 2005 for factory work and other uses.

But the general public is not expecting humanoid robots with anything like real intelligence any time soon, because every movie they see about the future either doesn't include robots at all, shows them as the enemy, or, as in Star Wars, shows robots with only very limited smarts.

Let's take the Mars rovers as an example of current robotic expectations. NASA doesn't trust anything as squishy as real intelligence: far too unpredictable and uncontrollable. The rovers are touted as autonomous robots capable of navigating around obstacles and avoiding hazardous terrain, but they can't do anything without specific orders from home, not even roll or climb off the lander.

There is such a profound gap between the public's 
perception of the state-of-the-art of AGI and the reality of AGI research that 
society is in for a major disruption. 

Here is an open question for everyone on this email 
list: What do you think some of the real world effects on society will be 
after the development of AGI?


Mike Deering.






Re: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-24 Thread deering



Ben, you haven't given us an update on how things are going with the Novamente A.I. engine lately. Is this because progress has been slow and there is nothing much to report? Because you don't want to get people's hopes up while you are still so far from being done? Or because you want to surprise us one day with, "Hey guys, guess what? The Singularity has arrived!"?









Re: [agi] request for feedback

2003-08-14 Thread deering



Arnoud, not exactly the anthropic argument:


Conceptual necessity: there is a concept out there which is the root of all reality. It is not a person, an entity, or a mind. It is a concept, like the idea that circular logic doesn't prove your point or that mutually contradictory percepts cannot both be true in a self-consistent system. It's complicated, very. Kind of related to a corollary to entropy. Complicated dynamical systems generate cool stuff like self-replicating elements, and they generate multicellular systemic emergent behaviors. It's not chance; it's determination. Bosons, fermions, atoms, galaxies, stars, planets, DNA, cells, organisms, societies, information, computers, AGI's, the Singularity: it's all inevitable because of conceptual necessity.



Mike Deering, Director, http://www.SingularityActionGroup.com






Re: [agi] request for feedback

2003-08-14 Thread deering



Arnoud, I don't know how much help this will be, considering I am an amateur like you. Your AGI system looks good to me. I think we will find, in time, that many different approaches to AGI will work, though some may be more efficient than others, and some may be easier for us to communicate with. Your system is a low-level implementation without an initial high-level architecture, which is supposed to emerge through learning. This may be possible, but it may require a tremendous amount of computational resources. I don't know if you have read my own feeble attempt at AGI, but it is complementary to yours: http://home.mchsi.com/~deering9/sim_mod.html. Mine is a high-level architecture without a low-level implementation. If I had the time and money I might try to integrate them.

I'm in the same time crunch you are: wife, child, job. I too have gone through a Christian phase, just like Holland, and ended up with atheism, just like you. Although, I don't hold out much hope for your plan to create a super-intelligence and ask it, "What is the meaning of existence?" Many different highly intelligent people have come to completely contradictory conclusions on this question, and I don't see how merely increasing the intelligence decreases the number of theories capable of explaining the observations. It may eliminate some but also add new ones. Anyway, even though I share your lack of answers, I am not disturbed by it. I'm thinking that the reason we are here is not intelligent design or chance, but rather conceptual necessity.


Well, if there is anything the SAG can do to help 
your AGI project, email me off-list at [EMAIL PROTECTED]


Mike Deering, Director, http://www.SingularityActionGroup.com






Re: [agi] Request for invention of new word

2003-07-04 Thread Deering



AND a collective-level conscious theater?

How the heck does that work?


Does the collective mind have control over the individual minds?

If not, in what way is it different from just another of the individual minds?

Are the individual minds (Novamentes) components of an overarching cognitive architecture?



SuperNova,

MetaNova,

AlphaNova,

BetaNova,

La Costra Nova,






[agi] Doubling-time watcher - March 2003.

2003-03-24 Thread Deering
I didn't intend this to become a monthly advertisement for Dell, but if someone comes up with more bang for the buck (BFTB) from another vendor, I would be very interested.

The February 2003 most-BFTB system ran $399; this month you have to spend a little more to get the best deal.


$499 including Free shipping.
Dell Dimension 2350 Series:  Intel Celeron Processor at 1.80GHz
 Memory:   256MB DDR SDRAM
 Keyboard:  Dell Quietkey Keyboard
 Monitor:  New 17 in (16.0 in v.i.s., .27dp) E772 Monitor
 Video Card:  Integrated Intel Extreme 3D Graphics
 Hard Drive:  30GB Value Hard Drive
 Floppy Drive and Additional Storage Devices:  3.5 in Floppy Drive
 Operating System:  Microsoft Windows XP Home Edition
 Mouse:  Dell 2-button scroll mouse
 Network Interface:  Integrated 10/100 Ethernet
 Modem:  56K PCI Data/Fax Modem
 CD or DVD Drive:  48x Max CD-ROM Drive
 Sound Card:  Integrated Audio
 Speakers:  New Harman Kardon HK-206 Speakers
 Bundled Software:  WordPerfect Productivity Pack with Quicken New User
Edition
 Digital Music:  Dell Jukebox powered by MUSICMATCH
 Digital Photography:  Dell Picture Studio Image Expert Standard
 Limited Warranty, Services and Support Options:  1Yr Ltd Warr plus 1Yr
At-Home Service + 90Days Dell SecurityCenter (McAfee)
 Internet Access Services:  6 Months of EarthLink Internet Access
FREE! Lexmark X75 Inkjet Printer

After we have a few more data points we can discuss how best to graph the
power/price function as it applies specifically to the AGI application.
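For when we do graph it, here is one minimal way to turn two of these data points into a doubling-time estimate, sketched in Python; the BFTB metric and the example values are invented placeholders, not measurements:

from datetime import date
from math import log2

def doubling_time_years(d0, bftb0, d1, bftb1):
    # Years for bang-for-the-buck to double, from two observations.
    years = (d1 - d0).days / 365.25
    return years / log2(bftb1 / bftb0)

# Invented example values, not measurements:
print(doubling_time_years(date(2002, 3, 1), 1.0,
                          date(2003, 3, 1), 1.8))  # ~1.18 years

With a longer series we could fit the trend instead of comparing pairs of points.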

Mike Deering, Director
www.SingularityActionGroup.com



Re: [agi] swarm intellience

2003-02-28 Thread Mike Deering




Ants
by Daniel Hoffman

Theirs is a perfection of pure form.
Nobody but has his proper place and knows it.
Everything they do is functional.
Each foray in a zigzag line
Each prodigious lifting
Of thirty-two times their own weight
Each excavation into the earth's core
Each erection
Of a crumbly parapetted tower-
None of these feats is a private pleasure,
None of them done
For the sake of the skill alone-
They've got a going concern down there,
A full egg-hatchery
A wet-nursery of aphids
A trained troop of maintenance engineers
Sanitation experts
A corps of hunters
And butchers
An army
A queen
Each
Is nothing without the others, each being a part
Of something greater than all of them put together
A purpose which none of them knows
Since each is only
The one thing that he does.
There is
A true consistency
Toward which their actions tend.
The ants have bred and inbred to perfection.
The strains of their genes that survive survive.
Every possible contingency
Has been foreseen and written into the plan.
Nothing they do will be wrong.



[agi] doubling time watcher.

2003-02-18 Thread Mike Deering



Unless Ben thinks it would not be appropriate for this list, I would like to start a "doubling time" watcher: a monthly posting of retail computer prices, for the purpose of establishing a historical record so that questions of doubling time can be grounded in current data.

My choice of category is the "most bang for the buck" complete system from a major retailer or manufacturer. Usually this will be their lowest-priced system, as upgrades generally cost more than the differential computational value they add. Anyone who would like to post a different category is welcome; you can never have too much data.

My selection for the "most bang for the buck" category for 2/18/03 is:

Dell Dimension 2350 Series
Processor: Celeron 1.7 GHz
Memory: 128 MB
Hard Drive: 60 GB
Monitor: 15 inch
CD: 48x speed
Floppy drive: Y
Keyboard: Y
Mouse: Y
Graphics Card: Extreme 3D Graphics
OS: Windows XP (Home)
Speakers: Y
Sound card: Y
Ethernet: Y
Modem: Y
Software: WordPerfect, Quicken.

Price: $399

I might get one of these for my wife so she will stay off mine. We are a poor one-computer family.

Mike Deering.
www.SingularityActionGroup.com (new website)




Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Mike Deering



Billy, I agree that AGI is a complicated architecture of hundreds of separate software solutions. But all of these solutions have utility in other software environments, and progress is being made by tens of thousands of programmers, each working on improving some little software function for some other purpose, with no idea that it will someday be used in AGI. There is nothing truly unique about the functional building blocks of AGI, just the overall architecture.


Having gone way out on a limb here, all you AGI experts 
can now start sawing.


Mike Deering.
www.SingularityActionGroup.com (new website)


Re: [agi] doubling time revisted.

2003-02-17 Thread Mike Deering



It is obvious that no one on this list agrees with me. That does not mean that I am obviously wrong. The division is very simple.

My position: the doubling time has been shrinking and will continue to do so.

Their position: the doubling time is constant.

This is not a question of philosophy but only of the data. What does the data show? If we had a stack of COMPUTER SHOPPER magazines from the past twenty years, the question could be decided in short order. The drop in doubling time starts out very slowly; that is why it is not obvious yet. By the time it becomes obvious, it will be too late.
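If anyone digs up that stack of magazines, here is a minimal Python sketch of how the two positions could be distinguished (the data points are invented placeholders, and numpy is assumed available): under a constant doubling time, log2 of bang-for-the-buck is a straight line in time, while a shrinking doubling time bends the curve upward, which shows up as positive curvature in a quadratic fit.

import numpy as np

# Invented placeholder data: years since a start date, and log2 of a
# bang-for-the-buck index at each point.
years = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
log2_bftb = np.array([0.0, 1.1, 2.3, 3.7, 5.2, 7.0])

# Fit log2(BFTB) = a*t^2 + b*t + c. Constant doubling time => a ~ 0;
# shrinking doubling time => a > 0 (the curve bends upward).
a, b, c = np.polyfit(years, log2_bftb, 2)
print(f"curvature a = {a:+.4f}  (positive favors a shrinking doubling time)")
print(f"current doublings/year ~ {b + 2 * a * years[-1]:.2f}")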


Mike Deering.
www.SingularityActionGroup.com (new website)


Re: Games for AIs (Was: [agi] TLoZ: Link's Awakening.)

2002-12-12 Thread Mike Deering



This whole approach of successive games is interesting, but let me suggest a different route to AI teaching: borrow the biological model. Simulate simplified ecological environments. Start with a simple organism, perhaps a worm, in a simplified environment with obstacles, rewards, penalties, and other organisms (never alone). As the AI learns to master the contingencies of the environment, gradually evolve the organism and the environment into more complex forms, maintaining a contiguous logical path to the final human form. And if you are lucky and your algorithms are good, one day your AI will look up at the simulated sky and scream, "I want to talk to whoever is in charge! And I want to know what the heck is going on!"
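A bare-bones Python cartoon of that curriculum, with every name and number invented for illustration: a one-dimensional world of food and hazards that doubles in size each time the organism gets through a level.

import random

def make_world(size):
    # A 1-D strip of food (+1), hazards (-1), and empty cells (0).
    return [random.choice([+1, -1, 0]) for _ in range(size)]

def policy(cell):
    # The "worm": eat food, avoid hazards. A real learner would go here.
    return cell > 0

def run_level(world):
    # Score = reward collected from every cell the organism chooses to enter.
    return sum(cell for cell in world if policy(cell))

size = 4
for level in range(4):
    world = make_world(size)
    print(f"level {level}: world size {size:3d}, score {run_level(world)}")
    size *= 2  # level mastered -> evolve a bigger, richer environment

The interesting part, of course, is replacing the fixed policy with a learner and the doubling rule with genuine co-evolution of organism and environment.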

Mike Deering.


Re: Games for AIs (Was: [agi] TLoZ: Link's Awakening.)

2002-12-12 Thread Mike Deering



Ben,

I think there would be advantages to a single, continuously evolving environment rather than a series of disjointed game environments. Environments closely modeled on natural ones will naturally take care of the ordering of the lessons taught. Also, this type of learning strategy will mold the AI into a form that will be easier to relate to than one produced by a less biocentric approach. And your suggestion to transition the environment and the AI into the real world is a natural advantage of this approach.

Mike Deering.