[agi] Measurable Fitness Functions? ... Flow charts? Source Code? ... Computing Intelligence? How to? ... ping

2006-07-06 Thread Danny G. Goe

What are the measurable fitness functions that can be built into AI?

Dan Goe

- Original Message - 
From: William Pearson [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, July 06, 2006 5:21 AM
Subject: Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How to? . ping




On 06/07/06, Russell Wallace [EMAIL PROTECTED] wrote:

On Wed, 05 Jul 2006 15:58:28 -500, [EMAIL PROTECTED]


That just gets you a circular definition: if intelligence is the ability
to self-improve, what counts as improvement? Change in the direction of
greater intelligence? But then what's intelligence? Etc.

Basically the problem with all this is that there's no such thing as
intelligence in the sense of a mathematical property of an algorithm.


I would agree with you here.


Intelligence is an informal term for certain types of effectiveness of an
algorithm in carrying out tasks in an environment. So the first thing you
need to do is figure out what sort of environments you want your AI system
to work in, and what sort of tasks you want it to carry out.
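
To make that concrete, here is a minimal sketch of scoring an agent by
task effectiveness in an environment. The agent.attempt(task) interface
is hypothetical, purely for illustration:

import random

def fitness(agent, tasks, trials=100):
    """Fraction of task trials the agent completes in its environment.

    'agent' is assumed to expose a hypothetical attempt(task) -> bool.
    """
    successes = 0
    for _ in range(trials):
        task = random.choice(tasks)   # sample a task from the environment
        if agent.attempt(task):
            successes += 1
    return successes / trials

Everything interesting then hides in choosing the task distribution,
which is exactly the point above.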


How would you define the sorts of tasks humans are designed to carry
out? I can't see an easy way of categorising all the problems
individual humans have shown their worth at, such as keyhole surgery,
fighter piloting, cryptography and quantum physics.

I'm not saying a human is a general purpose problem solver, just that
humans seem to have the ability to mold themselves to many different
tasks that do not seem to be genetically specified.

Will Pearson



[agi] Nothing can go wrong... Two draft papers: AI and existential risk; heuristics and biases

2006-06-07 Thread Danny G. Goe
The AI system could be built more as an advisor on actions that we might
take.

The investment field has already progressed into automated program trading.
I would bet that the investment brokers have human monitors watching and
maybe even approving the trades.
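
That watch-and-approve pattern is easy to sketch. A hypothetical
human-in-the-loop gate, with propose/execute supplied by the caller
(all names illustrative):

def advisor_loop(propose, execute, data):
    """Advise-then-approve: no proposed action runs without a human's OK.

    'propose' yields candidate actions; 'execute' carries one out.
    Both are supplied by the caller; this loop only gates them.
    """
    for action in propose(data):
        answer = input(f"Proposed action: {action!r}. Approve? [y/N] ")
        if answer.strip().lower() == "y":
            execute(action)
        else:
            print("Rejected; no action taken.")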

But you have heard the story many times...

Nothing can go wrong... go wrong... go wrong

Dan Goe


- Original Message - 
From: BillK [EMAIL PROTECTED]

To: agi@v2.listbox.com
Cc: [EMAIL PROTECTED]
Sent: Wednesday, June 07, 2006 4:08 AM
Subject: Re: [agi] Two draft papers: AI and existential risk; heuristics and 
biases




On 6/7/06, Eliezer S. Yudkowsky wrote:
snip


Because this is a young field, how much mileage you get out will be
determined in large part by how much sweat you put in.  That's the
simple practical truth.  The reasons why you do X are irrelevant given
that you do X; they're screened off, in Pearl's terminology.  It
doesn't matter how good your excuse is for putting off work on Friendly
AI, or for not building emergency shutdown features, given that that's
what you actually do.  And this is the complaint of IT security
professionals the world over; that people would rather not think about
IT security, that they would rather do the minimum possible and just get
it over with and go back to their day jobs.  Who can blame them for such
human frailty?  But the result is poor IT security.




This is the real world that you have to deal with.
You cannot get the funding, or the time, to do the job properly,
because there is always pressure to be the first to market.

AGI is so tricky a problem that just getting it to work at all is
regarded as a minor miracle.  (Like the early days of computers and
the internet).
Implement first, we can always patch it afterwards.

A much more worrying consideration, of course, is that the people with
the most resources, DARPA (and the Chinese), want an AGI to help them
kill their enemies. For defensive reasons only, naturally.

When AGI is being developed as a weapon with massive government
resources, AGI ethics and being Friendly don't even get into the
specification.
Following orders from the human owners does.


BillK



[agi] Best methods of Knowledge Representation and Advantages/Disadvantages?

2006-05-31 Thread Danny G. Goe




- Original Message - 
From: "Ben Goertzel" [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, May 31, 2006 8:25 AM
Subject: Re: [agi] Types of Knowledge Representation and Advantages/Disadvantages?



> Well, the main disadvantage of not representing knowledge is that doing
> so makes you completely unintelligent ;-) [Of course, whether or not
> this is really a disadvantage is a philosophical question, I suppose.
> It has been said that "ignorance is bliss" ...]

Those that choose this path are not likely to achieve success.

> Seriously: Do you mean to suggest that some intelligent systems *don't*
> contain any (even implicit) representation of knowledge?

My question was more about the different methodologies of Knowledge
Representation (KR) and Knowledge Base (KB) designs and their performance
at retrieving facts, with respect to the computer time/computer
instructions required to retrieve facts and the storage requirements.
> I have seen this claim made by some advocates of self-organizing-systems
> approaches to building and analyzing intelligent systems, but I have
> always felt it to be a kind of "game with words"... (Feel free to argue
> otherwise, though!)

The product configuration baseline should be functionally interwoven with
the sophisticated software, and adds many different trade-off
considerations. (wordy pun)

All I am interested in is what works fast and within the limits of
resources.
> IMO, all intelligent systems represent knowledge internally in some
> sense, and the right question is what methods are best (in what senses)
> for doing so.

What methods are best concerning fast retrieval, a low number of computer
instructions, and a low memory requirement? This can also mean that the
facts are somehow zipped/compressed to reduce the memory storage
requirements.

Maybe someone knows how much compression (in percent) can be achieved to
help reduce the KB to a more manageable size, though again some computer
instructions/time are needed to use this methodology.
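
A rough way to put a number on that, as a toy sketch: compress a
plain-text dump of facts with zlib and report the percent saved. The
sample facts below are made up and highly repetitive, so a real KB would
compress less:

import zlib

# Made-up, repetitive sample facts; a real KB would compress less well.
facts = "\n".join(f"fact-{i}: the capital of country {i} is city {i}"
                  for i in range(10000)).encode("utf-8")

compressed = zlib.compress(facts, 9)  # level 9: max compression, more CPU
saved = 100 * (1 - len(compressed) / len(facts))
print(f"raw: {len(facts)} bytes, compressed: {len(compressed)} bytes, "
      f"{saved:.1f}% saved")

The compression level is the instructions/time knob mentioned above:
higher levels trade CPU time for a smaller KB.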

> For instance, in an attractor neural net, each piece of knowledge is
> stored in a wholly distributed way, interpenetrated with other pieces
> of knowledge. In a traditional semantic net, OTOH, pieces of knowledge
> are stored separately and distinctly without interpenetration. In
> Novamente's hybrid design there is both a distinct and an
> interpenetrative/holistic aspect to knowledge representation. The
> advantages and disadvantages of these different KR strategies may be
> subtle to understand...
>
> -- Ben G
>
> On 5/30/06, Danny G. Goe [EMAIL PROTECTED] wrote:
> > Can someone elaborate on the advantages and disadvantages of Knowledge
> > Representation (KR)?
> >
> > Dan Goe



[agi] Data there vs data not there, Limits to storage?

2006-05-31 Thread Danny G. Goe

What are Novamente's limits of storage?

Does Novamente look for what is there (data mining) as well as what is not
there?

How big is Novamente?
Reading/writing data can result in I/O-bound systems.

Dan Goe


- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, May 31, 2006 10:28 AM
Subject: Re: [agi] Best methods of Knowledge Representation and Advantages/Disadvantages?




My question was more about the different methodologies of Knowledge
Representation (KR) and Knowledge Base (KB) designs and their performance
at retrieving facts, with respect to the computer time/computer
instructions required to retrieve facts and the storage requirements.


Well, viewing the memory problem as retrieving facts is in itself a
serious philosophical statement ...

Storing crisp, declarative facts efficiently is not *such* a hard
problem; one can use for instance a hypergraph data structure, with
multiple indices constructed to make frequent queries rapid.  One can
even automate the construction of new indices.  The space/time
tradeoff rears its head here in that more indices means faster access
but more memory usage.
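
As a toy sketch of that indexing idea (not Novamente's actual
structures): facts stored once as triples, with extra dictionaries acting
as indices, so frequent queries skip the linear scan at the cost of
memory per index:

from collections import defaultdict

class FactStore:
    """Toy triple store with per-field indices (illustrative only)."""

    def __init__(self):
        self.facts = []                       # (subject, relation, object)
        self.by_subject = defaultdict(list)   # index: subject -> facts
        self.by_relation = defaultdict(list)  # index: relation -> facts

    def add(self, subj, rel, obj):
        fact = (subj, rel, obj)
        self.facts.append(fact)
        self.by_subject[subj].append(fact)    # each index added here costs
        self.by_relation[rel].append(fact)    # memory but speeds a query

    def about(self, subj):
        return self.by_subject[subj]          # dict lookup, no full scan

Automating the construction of new indices then amounts to watching which
query patterns recur and building a dictionary for each.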

The subtler conceptual issue, IMO, regards how to store uncertain,
context-dependent patterns of knowledge: these may be stored in the
same manner as crisp declarative facts, or in a thoroughly distributed
way as in an Attractor Neural Net, or via some combination approach...
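
For the distributed end of that spectrum, a minimal Hopfield-style sketch
(illustrative only, not Novamente's design): every stored pattern alters
the whole weight matrix, so no single entry holds any one fact:

import numpy as np

def train(patterns):
    """Store +/-1 patterns (rows of a 2-D array) in one weight matrix."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)        # each pattern spread across all weights
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, probe, steps=10):
    """Settle a noisy probe toward the nearest stored attractor."""
    s = probe.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1              # break ties consistently
    return s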

-- Ben



[agi] Types of Knowledge Representation and Advantages/Disadvantages?

2006-05-30 Thread Danny G. Goe



Can someone elaborate on the advantages and disadvantages of Knowledge
Representation (KR)?

Dan Goe




[agi] Google wants AI for search... The first step..

2006-05-22 Thread Danny G. Goe



Fellow AI ...

"Seems thatGoogle wants a searchengine that knows exactly what 
you want"... 

http://news.google.com/news?ie=utf8&oe=utf8&persist=1&hl=en&client=google&ncl=http://news.independent.co.uk/business/news/article570273.ece

I doubt that once Google gets this far they will stop there.

They have the means and the structure to do AI totally.

The question remains: who is going to get AI developed first, and what
will they use it for?

We live in interesting times. 
These future events will have a most profound effect upon societies around 
the world. 

Comments? 

Dan Goe




Re: [agi] Cell-DG

2005-02-10 Thread Danny G. Goe
I think this raises the question of how you factor in the learning cost.
CPU-time?
Resources?
Memory?
Total Instruction counts?
If you can arrive at the same answer with fewer instructions executed
and/or fewer resources, isn't that a better model?

Weighing the cost will be based on the availability of those resources.
What resources give the highest rate of learning? CPU time? Memory?
Is an Intelligence Quotient a good way to find the learning curve,
or is there some other method to find the learning rate?
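
One hypothetical way to fold those costs into a single score (all weights
and numbers illustrative): divide what a configuration learned by a
resource-weighted cost, with the weights reflecting which resources are
scarce:

def cost_weighted_score(learned, cpu_seconds, mem_mb, instructions,
                        w_cpu=1.0, w_mem=0.1, w_instr=1e-9):
    """Higher is better: the same answer for less resource use wins."""
    cost = w_cpu * cpu_seconds + w_mem * mem_mb + w_instr * instructions
    return learned / cost

# Two configurations reaching the same answer (learned = 1.0):
a = cost_weighted_score(1.0, cpu_seconds=10, mem_mb=500, instructions=4e10)
b = cost_weighted_score(1.0, cpu_seconds=25, mem_mb=200, instructions=9e10)
print("prefer A" if a > b else "prefer B")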
I am sure that this will be processed on clusters.
Some neural nets might run in the background while different mutations are
run. Previously arrived-at computational states may either continue or
become fixed at some point in time.

If any configuration creates a learning system in which the next
generation of mutations generates a value greater than 1 relative to the
previous generation, you can then start to determine the evolution rates.

When you start your process you will have to run a large number of
test-generated methods and determine whether any show promise at learning;
while some might work better early on, others will mutate into higher
learning curves as the evolution continues. You will have to run a large
number of permutations of all the learning methods to find the optimal mix
for a high learning curve. If you decide to add any other learning method,
the new method will have to be tested with all the others.


First-time runs will generate high learning rates, but these will level
off as the known knowledge gets absorbed by any given configuration.
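
As a toy sketch of that generation-over-generation test (the scoring
function is made up, and positive scores are assumed): mutate a
population, keep the best, and watch the improvement ratio; above 1.0
means the configuration is still learning:

import random

def evolve(score, genome, generations=50, pop=20, sigma=0.1):
    """Toy evolutionary loop reporting generation-over-generation gains."""
    best, prev = genome, score(genome)      # assumes score(...) > 0
    for gen in range(generations):
        mutants = [[g + random.gauss(0, sigma) for g in best]
                   for _ in range(pop)]
        top = max(mutants, key=score)
        ratio = score(top) / prev           # > 1.0: still improving
        if score(top) > prev:
            best, prev = top, score(top)
        print(f"gen {gen}: improvement ratio {ratio:.3f}")
    return best

# e.g. evolve(score=lambda g: 100 - sum(x * x for x in g), genome=[5.0, -3.0])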


Comments?

Dan Goe



- Original Message - 
From: Brad Wyble [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, February 10, 2005 10:15 AM
Subject: Re: [agi] Cell


I'd like to start off by saying that I have officially made the transition 
into old crank.  It's a shame it's happened so early in my life, but it 
had to happen sometime.  So take my comments in that context.  If I've 
ever had a defined role on this list, it's in trying to keep the pies from 
flying into the sky.


> Evolution is limited by mutation rates and generation times.  Mammals
> need from 1 to 15 years before they reach reproductive age.

That time is not useless or wasted.  Their brains are acquiring
information, molding themselves.  I don't think you can just skip it.

> Generation times are long and evolution is slow.  A computer could
> eventually simulate 10^9 (or 10^20, or whatever) generations per second,
> and multiple mutation rates (to find optimal evolutionary
> methodologies).  It can already do as many operations per second, it
> just needs to be able to do them for billions of agents.

10^9 generations per second?  This rate depends (inversely) on the
complexity of your organism.

And while fitness functions for simple ant AI's are (relatively) simple to
write and evaluate, when you start talking about human-level AI, you need
a very thorough competition, involving much social interaction.  This
takes *time*; whether simulated time or real time, it will add up.

A simple model of interaction between AI's will give you simple AI's.  We 
didn't start getting really smart until we could exchange meaningful 
ideas.


> > But yes it's true, there are stupidly insane amounts of CPU power that
> > would give us AI instantly (although it would be so alien to us that
> > we'd have no idea how to communicate with it).  However nothing that
> > we'll get in the next 100 years will be so vast.  You'd need a
> > computer many times the size of the earth to generate AI through
> > evolution in a reasonable time frame.
>
> That's not a question that I'm equipped to answer, but my educated
> opinion is that when we can do 10^20 flops, it'll happen.  Of course,
> rationally designed AI could happen under far, far less computing
> power, if we know how to do it.

I'd be careful throwing around guesses like that.  You're dealing with so
many layers of unknown.

Before the accusation comes, I'm not saying these problems are unsolvable.
I'm just saying that (barring planetoid computers) sufficient hardware is
a tiny fraction of the problem.  But I'm hearing a disconcerting level of
optimism here that if we just wait long enough, it'll happen on all of our
desktops with off-the-shelf AI building kits.

Let me defuse another criticism of my perspective: I'm not saying we need
to copy the brain.  However, the brain is an excellent lesson in how Hard
this problem is and should certainly be embraced as such.

-Brad