RE: Language modeling (was Re: [agi] draft for comment)

2008-09-07 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 --- On Sat, 9/6/08, John G. Rose [EMAIL PROTECTED] wrote:
 
  Compression in itself has the overriding goal of reducing
  storage bits.
 
 Not the way I use it. The goal is to predict what the environment will
 do next. Lossless compression is a way of measuring how well we are
 doing.
 

Predicting the environment in order to determine which data to pack where,
thus achieving a higher compression ratio? Or is compression an integral part
of prediction? Some types of prediction are inherently compressed, I suppose.
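
[Editor's illustration of the measurement idea, not code from the thread: the more predictable a byte string is, the fewer bits a lossless compressor needs for it, so compressed size can serve as a score for predictive structure. A minimal stdlib-only sketch:]

# Minimal sketch: data a model can predict well is data a lossless
# compressor can shrink, so compressed size works as a prediction score.
import os
import zlib

predictable = b"the cat sat on the mat. " * 64
unpredictable = os.urandom(len(predictable))   # essentially incompressible

for name, data in [("predictable", predictable), ("unpredictable", unpredictable)]:
    packed = zlib.compress(data, 9)
    print(f"{name}: {len(data)} -> {len(packed)} bytes")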


John





Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote:
 You start v. constructively thinking how to test the non-programmed
 nature of  - or simply record - the actual writing of programs, and
 then IMO fail to keep going.

You could trace their keyboard presses back to the cerebellum and motor 
cortex, yes, this is true, but this isn't going to be like tracing the 
programmer pathway in a brain. You might just end up tracing the 
entire brain [which is another project that I fully support, of 
course]. You can imagine the signals being traced back to their 
origins in the spine and the CNS, like the cerebellum and motor 
cortex, and then from the somatosensory cortex that gave them the 
feedback for debugger error output (parse error, rawr), etc. You could 
even spice up the experimental scenario by tracking different 
strategies and their execution in response to bugs, sure.

 Ask them to use the keyboard for everything - (how much do you guys
 use the keyboard vs say paper or other things?) - and you can
 automatically record key-presses.

Right.
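
[Editor's sketch: recording the key presses themselves is the easy part. A minimal logger, assuming the third-party pynput package; the file name and CSV format are arbitrary choices for illustration.]

# Minimal keystroke logger sketch for the experiment described above.
# Assumes the third-party `pynput` package; the CSV path is arbitrary.
import csv
import time
from pynput import keyboard

LOG_PATH = "keypress_log.csv"

def on_press(key):
    # Record a wall-clock timestamp and the key's string form.
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([time.time(), str(key)])

with keyboard.Listener(on_press=on_press) as listener:
    listener.join()  # run until interrupted (Ctrl+C)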

 Hasn't anyone done this in any shape or form? It might sound as if it
 would produce terribly complicated results, but my guess is that they
 would be fascinating just to look at (and compare technique) as well
 as analyse.

I don't think it's sufficient to keep it as analyses; here's why: 
http://heybryan.org/humancortex.html 
Basically, wouldn't it be interesting to have an online/real-time/run-time 
system for keeping track of your brain as you program? This would allow for 
neurofeedback and some other possibilities.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, William Pearson wrote:
 2008/9/5 Mike Tintner [EMAIL PROTECTED]:
  By contrast, all deterministic/programmed machines and computers
  are guaranteed to complete any task they begin.

 If only such could be guaranteed! We would never have system hangs or
 deadlocks. Even if it could be made so, computer systems would not
 always want to do so. Have you ever had a programmed computer system
 say to you, "This program is not responding, do you wish to terminate
 it?" There is no reason in principle why the decision to terminate
 the program couldn't be made automatically.

These errors are computed. "Do what I mean, not what I say" is a common 
phrase thrown around in programming circles. The errors are not because 
the ALU suddenly decided not to be present, and they are not because 
the machine suddenly lost its status as a Turing machine (although if 
you drove a rock through it, this is quite likely). Rather, it is 
because you failed to write a good kernel. And yes, the decision to 
terminate programs can be made automatically; I sometimes use 
scripts on my clusters to kill things that haven't been responding for 
a certain amount of time, but usually I prefer to investigate by 
hand since it's so rare.
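
[Editor's sketch: that automatic decision can be as simple as a watchdog. A stdlib-only example, assuming -- purely for illustration -- that the monitored job touches a heartbeat file and records its PID.]

# Watchdog sketch: kill a job whose heartbeat file has gone stale.
# The job is assumed to touch HEARTBEAT periodically and to have written
# its PID to PIDFILE; both conventions are illustrative assumptions.
import os
import signal
import time

HEARTBEAT = "/tmp/job.heartbeat"
PIDFILE = "/tmp/job.pid"
TIMEOUT = 300  # seconds without a heartbeat before we give up on the job

def check_once():
    age = time.time() - os.path.getmtime(HEARTBEAT)
    if age > TIMEOUT:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
        os.kill(pid, signal.SIGTERM)  # ask the job to terminate
        print(f"killed pid {pid}: no heartbeat for {age:.0f}s")

while True:
    check_once()
    time.sleep(60)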

  Very different kinds of machines to us. Very different paradigm.
  (No?)

 We commonly talk about single program systems because they are
 generally interesting, and can be analysed simply. My discussion on
 self-modifying systems ignored the interrupt driven multi-tasking
 nature of the system I want to build, because that makes analysis a
 lot more hard. I will still be building an interrupt driven, multi
 tasking system.

That's an interesting proposal, but I'm wondering about something. 
Suppose you have a cluster of processors, and they are all 
communicating with each other in some way to divide up tasks and 
compute away. Now, given the ability to send interrupts to one 
another, and given the linear nature of each individual unit, is it 
really multitasking? At some point it has to integrate all of the 
results at a single node, writing to a single address on the hdd 
(or something) so that the results are in one place - either that, 
or the function that reads the results must do it. Is it really 
then multi-tasking and parallel?
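
[Editor's illustration of the shape of the question: the computation runs in parallel, but the results still funnel through a single aggregation point and a single write. The task and file name below are arbitrary.]

# Parallel workers, single point of aggregation: the map runs across
# processes, but the reduce -- and the final write -- happen in one place.
from multiprocessing import Pool

def work(chunk):
    return sum(x * x for x in chunk)  # stand-in for a real task

if __name__ == "__main__":
    chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
    with Pool(processes=4) as pool:
        partials = pool.map(work, chunks)   # parallel part
    total = sum(partials)                   # serial aggregation at one "node"
    with open("result.txt", "w") as f:      # single write location
        f.write(str(total))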

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Mike Tintner wrote:
 Were your computer like a human mind, it would have been able to say
 (as you/we all do) - well if that part of the problem is going to be
 difficult, I'll ignore it  or.. I'll just make up an answer... or
 by God I'll keep trying other ways until I do solve this.. or...
 ..  or ... Computers, currently, aren't free thinkers.

I'm pretty sure that compiler optimizers, which go in and look at your 
loops and other computational elements of a program, are able to make 
assessments like that. Of course, they'll just leave the code as it is 
rather than completely ignoring parts of the program you wish to 
compile, but it does seem similar. I recently came across an 
evolutionary optimizer for compilers that tests parameters to gcc to try 
to figure out the best way to compile a program on a certain architecture 
(learning all of the gcc parameters yourself seems impossible 
sometimes, you see). Perhaps there's some evolved laziness in the human 
brain that could be modeled with gcc easily enough.
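
[Editor's sketch of a stripped-down version of that idea: random search over a handful of real gcc flags, compiling and timing a benchmark. The benchmark file and trial count are placeholder assumptions; dedicated tools in this vein, such as Acovea, do this far more thoroughly.]

# Toy random search over gcc optimization flags: compile a benchmark with a
# random subset of flags, time it, keep the best set seen.  The benchmark
# source file and trial count are placeholders.
import random
import subprocess
import time

FLAGS = ["-O2", "-O3", "-funroll-loops", "-fomit-frame-pointer",
         "-march=native", "-ffast-math"]
SOURCE = "bench.c"     # assumed benchmark program
TRIALS = 20

def evaluate(flags):
    subprocess.run(["gcc", *flags, SOURCE, "-o", "bench"], check=True)
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True)
    return time.perf_counter() - start

best = (float("inf"), [])
for _ in range(TRIALS):
    candidate = [f for f in FLAGS if random.random() < 0.5]
    runtime = evaluate(candidate)
    if runtime < best[0]:
        best = (runtime, candidate)
print("best flags:", best[1], "runtime:", best[0])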

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Mike Tintner wrote:
 fundamental programming problem, right?) A creative free machine,
 like a human, really can follow any of what may be a vast range of
 routes - and you really can't predict what it will do or, at a basic
 level, be surprised by it.

What do you say to the brain simulation projects? There is a biophysical 
basis to the brain, and it's being discovered and hammered out. You can, 
in fact, predict the results of the rabbit eye-blink conditioning experiments 
(I'm working with a lab on this - the simulations return results faster than 
the real neurons do in the lab, and you can imagine how useful that is for 
hypothesis testing).

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Saturday 06 September 2008, William Pearson wrote:
 I'm very interested in computers that self-maintain, that is, reduce
 (or eliminate) the need for a human to be in the loop or know much
 about the internal workings of the computer. However, it doesn't need
 a vastly different computing paradigm; it just needs a different way
 of thinking about the systems. E.g. how can you design a system that
 does not need a human around to fix mistakes, upgrade it or maintain
 it in general?

Yes, these systems are interesting. I can easily imagine a system that 
generates systems that have low human-maintenance costs. But suppose 
that the system you make generates a system (with that low human 
maintenance cost), and this 2nd-gen system does it again and again. This is 
the problem of clanking replicators too -- you need to have some way to 
correct divergence and errors of replication; and not only that, 
but as you go into new environments there are new things that have to 
be taken into account for maintenance. Bacteria solve this problem by 
having many billions of cells per culture and then having enough 
genetic variability to somehow scrounge up a partial solution in 
time -- so that once you get to the Nth generation you're not screwed 
entirely if some change occurs in the environment. There was a recent 
experiment in the news that has been running for 20 years: Richard Lenski, 
in Michigan, has had bacterial selection experiments going in bottles for 
the past 20 years, only to find that the bacteria evolved an ability to 
metabolize something they didn't metabolize before. That's an example of 
being able to work in new environments, and there's a lot of cost to it 
(dead bacteria, many generations, etc.) that silicon projects can't quite 
pay, simply because of resource/cost constraints if you use traditional 
approaches. What would an alternative approach look like? One where you 
don't need dead silicon projects, and one where you have enough instances 
of programs that you're able to find a solution with your genetic 
algorithm in enough time? The increasing availability of RAM and hdd 
space might be enough to let us brute-force it, but the embodiment of 
bacteria in their problem domains is something that more-memory 
strategies don't quite address. Thoughts?
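
[Editor's toy illustration of the bacterial strategy described above -- a large, varied population in which some lineage scrounges up a partial solution when the environment shifts. The numbers and fitness function are arbitrary.]

# Toy population under a shifting environment: fitness is closeness to a
# target value that changes partway through; variation in the population is
# what lets some lineage survive the shift.
import random

TARGET = 10.0
POP = [random.uniform(0, 20) for _ in range(500)]

for generation in range(200):
    if generation == 100:
        TARGET = 17.0                      # the environment changes
    # Score, keep the better half, refill with mutated copies.
    POP.sort(key=lambda x: abs(x - TARGET))
    survivors = POP[:250]
    POP = survivors + [x + random.gauss(0, 0.5) for x in survivors]

print("population mean after shift:", sum(POP) / len(POP))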

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Saturday 06 September 2008, Mike Tintner wrote:
 Our unreliabilty is the negative flip-side of our positive ability
 to stop an activity at any point, incl. the beginning and completely
 change tack/ course or whole approach, incl. the task itself, and
 even completely contradict ourself.

But this is starting to get into an odd mix of folk psychology. I was 
reading an excellent paper the other day that puts this very plainly, 
written by Gerhard Werner: 

The Siren Call of metaphor: Subverting the proper task of Neuroscience
http://www.ece.utexas.edu/~werner/siren_call.pdf

 The case of Neuro-Psychological vs. Naturalistic Neuroscience.
 For grounding the argument, let us look at the case of
 ‘deciding to’ [34] in studies of conditioned motor behavior in
 monkeys, on which there is a rich harvest of imaginative experimental
 work on scholarly reviews available. I write this in profound respect
 for the investigators who conduct this work with immense ingenuity
 and sophistication. However, I question the soundness of the
 conceptual framework on which such experiments are predicated,
 observations are interpreted, and conclusions are formulated. I
 contend that current practices tend to disregard genuine issues in
 Neurophysiology with its own definitions of what legitimate
 propositions and criteria of valid statements in this discipline are.

  Here is the typical experimental protocol: the experimenter
 uses some measure of neural activity of his/her choice (usual neural
 spike discharges), recorded from a neural structure (selected by
 him/her on some criterion, and determines relations to behavior that
 he/she created as link between two events: an antecedent stimulus (
 chosen by him/her) and a consequent, arbitrary behavior, induced by
 the training protocol [49]. So far, the experimenter has done all the
 ‘deciding’, except leaving it up to the monkey to assign a “value” to
 complying with the experimental protocol. Different investigators
 summarize their experimental objective in various ways (in the
 interest of brevity, I slightly paraphrase, though being careful to
 preserving the original sense): to characterize neural computations
 representing the formation of perceptual decision [12]; to
 investigate the neural basis of a decision process [37]; to examine
 the coupling of neural processes of stimulus selection with response
 preparation [34], reflecting connections between motor system and
 cognitive processes [38] ; to assess neural activity indicating
 probabilistic reward anticipation [22,27]. In Shadlen and Newsome’s
 [37] evocative analogy “it is a jury’s deliberation in which sensory
 signals are the evidence in open court, and motor signals the jury’s
 verdict”. Helpful as metaphors and analogies can be as interim steps
 for making sense of the observation in familiar terms, they also
 import the conceptual burden of their source domain and lead us to
 attribute to the animal a decision and choice making capacity along
 principles for which Psychology has developed evidential and
 conceptual accounts in humans under entirely different conditions,
 and based on different observational facts. Nevertheless, armed with
 the metaphors of choice and decision, we assert that the observed
 neural activity is a “correlate” [19] of a decision to emit the
 observed behavior. As the preceding citations indicate, the observed
 neural activity is variously attributed to perceptual discrimination
 between competing (or conflicting) stimuli, to motor planning, or to
 reward anticipation; the implication being that the neural activity
 stands for (“represents”) one or the other of these psychological
 categories.

So, Mike, when you write things like:
 Our unreliabilty is the negative flip-side of our positive ability
 to stop an activity at any point, incl. the beginning and completely
 change tack/ course or whole approach, incl. the task itself, and
 even completely contradict ourself.

It makes me wonder how you can assert the existence of a neurophysiological 
basis for 'task' in terms of the *brain*, rather than in terms 
of our folk psychology and the collective cultural background that has 
given these names to these things. It's hard to talk about the brain 
from the biology up, yes, that's true, but it's also very rewarding in 
that we avoid making top-down misunderstandings.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Terren Suydam wrote:
 So, Mike, is free will:

 1) an illusion based on some kind of unpredictable, complex but
 *deterministic* interaction of physical components 2) the result of
 probabilistic physics - a *non-deterministic* interaction described
 by something like quantum mechanics 3) the expression of our
 god-given spirit, or some other non-physical mover of physical things

I've already mentioned an alternative on this mailing list that you 
haven't included in your question; would you consider it?
http://heybryan.org/free_will.html
^ Just so that I don't have to keep rewriting it over and over again.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Recursive self-change: some definitions

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote:
 Bryan,

 How do you know the brain has a code? Why can't it be entirely
 impression-istic - a system for literally forming, storing and
 associating sensory impressions (including abstracted, simplified,
 hierarchical impressions of other impressions)?

 1). FWIW some comments from a cortically knowledgeable robotics
 friend:

 The issue mentioned below is a major factor for die-hard
 card-carrying Turing-istas, and to me is also their greatest
 stumbling-block.

 You called it a code, but I see computation basically involves
 setting up a model or description of something, but many people
 think this is actually synonymous with the real thing. It's not,
 but many people are in denial about this. All models involve tons of
 simplifying assumptions.

 EG, XXX is adamant that the visual cortex performs sparse-coded
 [whatever that means] wavelet transforms, and not edge-detection. To
 me, a wavelet transform is just one possible - and extremely
 simplistic (meaning subject to myriad assumptions) - mathematical
 description of how some cells in the VC appear to operate.

No, this is just a confusion of terminologies. I most certainly was not 
talking about 'code' in the sense of a sparse-coded wavelet transform. 
I'm talking about 'code' in the sense of source code. Sorry.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] open models, closed models, priors

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote:
 Yes you do. Every time you make a decision, you are assigning a
 higher probability of a good outcome to your choice than to the
 alternative.

You'll have to prove to me that I make decisions, whatever that means.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




RE: Language modeling (was Re: [agi] draft for comment)

2008-09-07 Thread Matt Mahoney
--- On Sun, 9/7/08, John G. Rose [EMAIL PROTECTED] wrote:

 From: John G. Rose [EMAIL PROTECTED]
 Subject: RE: Language modeling (was Re: [agi] draft for comment)
 To: agi@v2.listbox.com
 Date: Sunday, September 7, 2008, 9:15 AM
  From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  
  --- On Sat, 9/6/08, John G. Rose [EMAIL PROTECTED] wrote:
  
   Compression in itself has the overriding goal of reducing
   storage bits.
  
  Not the way I use it. The goal is to predict what the environment will
  do next. Lossless compression is a way of measuring how well we are
  doing.
  
 
 Predicting the environment in order to determine which data to pack where,
 thus achieving higher compression ratio. Or compression as an integral part
 of prediction? Some types of prediction are inherently compressed I suppose.

Predicting the environment to maximize reward. Hutter proved that universal 
intelligence is a compression problem. The optimal behavior of an AIXI agent is 
to guess the shortest program consistent with the observations so far. That's 
algorithmic compression.
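
[Editor's sketch of the prediction-compression link: the ideal code length for a sequence under a predictive model is the sum of -log2 of the probabilities the model assigned to each symbol, which is approximately what an arithmetic coder driven by that model would emit. The adaptive bigram model below is an arbitrary choice -- this shows only the scoring idea, not AIXI.]

# Score a predictor by its ideal compressed size: sum of -log2 P(next symbol).
# Better prediction -> fewer bits -> better compression.
import math
import random
import string
from collections import defaultdict

ALPHABET = 64  # smoothing constant: assume up to 64 distinct symbols

def ideal_code_length(text):
    """Bits an ideal coder would need, driven by an adaptive order-1
    (bigram) model with Laplace smoothing."""
    counts = defaultdict(lambda: defaultdict(int))
    prev, bits = "", 0.0
    for ch in text:
        total = sum(counts[prev].values())
        p = (counts[prev][ch] + 1) / (total + ALPHABET)
        bits += -math.log2(p)
        counts[prev][ch] += 1
        prev = ch
    return bits

random.seed(0)
regular = "ab" * 100
noisy = "".join(random.choice(string.ascii_letters) for _ in range(200))
print("regular:", round(ideal_code_length(regular)), "bits")
print("noisy:  ", round(ideal_code_length(noisy)), "bits")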

-- Matt Mahoney, [EMAIL PROTECTED]





Re: AI isn't cheap (was Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.))

2008-09-07 Thread Steve Richfield
Matt,

On 9/6/08, Matt Mahoney [EMAIL PROTECTED] wrote:

   Steve, where are you getting your cost estimate for AGI?


1.  I believe that there is some VERY fertile but untilled ground which, if
it is half as good as it looks, could yield AGI a LOT cheaper than other,
higher estimates. Of course, if I am wrong, I would probably accept your
numbers.
2.  I believe that AGI will take VERY different (cheaper and more valuable)
forms than other members on this forum expect.

Each of the above effects is worth several orders of magnitude in effort.



   Is it a gut feeling, or something like the common management practice of
 "I can afford $X so it will cost $X"?


Given sufficient motivation and flexibility as to time and deliverable, this
isn't a bad way to go. You can have it good, quick, and cheap - choose any
two. If you drop cheap and choose the other two, then you can get there.



   My estimate of $10^15 is based on the value of the world economy, US $66
 trillion per year and increasing 5% annually over the next 30 years, which
 is how long it will take for the internet to grow to the computational power
 of 10^10 human brains (at 10^15 bits and 10^16 OPS each) at the current rate
 of growth, doubling every couple of years. Even if you disagree with these
 numbers by a factor of 1000, it only moves the time to AGI by a few years,
 so the cost estimate hardly changes.
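
[Editor's back-of-the-envelope check of the quoted numbers. The figure assumed below for present-day internet capacity is a placeholder added for illustration; the other inputs come from the estimate above.]

# Back-of-the-envelope check of the quoted estimate.
import math

gdp = 66e12            # world economy, dollars per year
growth = 1.05          # 5% annual growth
years = 30
cumulative = sum(gdp * growth**t for t in range(years))
print(f"world output over {years} years: ~{cumulative:.1e} dollars")   # ~4e15

target_ops = 1e10 * 1e16     # 10^10 brains at 10^16 OPS each
current_ops = 1e21           # assumed current internet capacity (illustrative)
doubling_years = 2           # "doubling every couple of years"
print("years to reach target:",
      round(doubling_years * math.log2(target_ops / current_ops)))     # ~33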


Here, you are doing the same thing that you were accusing me of - working
from available resources. I believe that present AGI efforts are SO
misdirected (and actively interfered with) that unless something changes,
humanity will never ever produce a true AGI. However, with those changes, I
believe that really *useful* computational intelligence is just a few years
away, though an AGI that everyone here on this forum would wholeheartedly and
without reservation agree is an AGI is probably 30 years away, as you
guesstimated. However, my time guesstimate is based on the time needed to do
some really basic research that remains undone.



   And even if the hardware is free, you still have to program or teach
 about 10^16 to 10^17 bits of knowledge, assuming 10^9 bits of knowledge per
 brain [1] and 1% to 10% of this is not known by anyone else. Software and
 training costs are not affected by Moore's law. Even if we assume human
 level language understanding and perfect sharing of knowledge, the training
 cost will be 1% to 10% of your working life to train the AGI to do your job.


You are starting to appreciate what I have said on this forum several times:
that if you do a realistic estimate of the software maintenance costs of an
AGI population, it approaches the efforts of the entire human race. In
short, there is nothing to be gained by having AGIs, because while they may
do our menial work, we will be working full time just to keep them running.



   Also, we have made *some* progress toward AGI since 1965, but it is
 mainly a better understanding of why it is so hard,


I agree with this statement, though I arrived at this conclusion a little
differently than you. I will now make some critical comments that indirectly
support this same conclusion...



   e.g.

 - We know that general intelligence is not computable [2] or provable [3].
 There is no neat theory.


In recent decades, new forms of logic have emerged (e.g. Game Theory) that
are NOT incremental improvements over general intelligence. Our
evolutionary biological and social development has NOT allowed for this
possibility, providing a HUGE opportunity for machine intelligence. The AGI
goals of those here indirectly seek to dispense with this advantage while
apparently seeking to construct new artificial mouths to feed. Hence, I see
your statement as simply further proof that AGI is a rather questionable
goal compared with a more rigorous form of machine intelligence.


   - From Cyc, we know that coding common sense is more than a 20 year
 effort. Lenat doesn't know how much more, but guesses it is maybe between
 0.1% and 10% finished.


Of course, the BIG value is in UNcommon sense. For example, someone saying
"I had no choice but to..." is really saying that they have constrained
their thinking, probably to the point of failing to even recognize the
existence of the optimal course of action. Common sense would be to accept
the truth of their statement. UNcommon sense would be to recognize the error
contained in their statement.



   - Google is the closest we have to AI after a half trillion dollar
 effort.


You really should see my Dr. Eliza demo.


Steve Richfield



   1. Landauer, Tom (1986), "How much do people remember? Some estimates of
 the quantity of learned information in long term memory", Cognitive Science
 (10), pp. 477-493.

 2. Hutter, Marcus (2003), "A Gentle Introduction to The Universal
 Algorithmic Agent {AIXI}", in *Artificial General Intelligence*, B.
 Goertzel and C. Pennachin, eds., Springer.
 

Re: [agi] open models, closed models, priors

2008-09-07 Thread Matt Mahoney
--- On Sun, 9/7/08, Bryan Bishop [EMAIL PROTECTED] wrote:

 On Thursday 04 September 2008, Matt Mahoney wrote:

  Yes you do. Every time you make a decision, you are assigning a
  higher probability of a good outcome to your choice than to the
  alternative.
 
 You'll have to prove to me that I make decisions, whatever that means.

Depends on what you mean by I.


-- Matt Mahoney, [EMAIL PROTECTED]





[agi] Re: AI isn't cheap

2008-09-07 Thread Matt Mahoney
--- On Sun, 9/7/08, Steve Richfield [EMAIL PROTECTED] wrote:

1.  I believe that there is some VERY fertile but untilled ground, which
if it is half as good as it looks, could yield AGI a LOT cheaper than
other higher estimates. Of course if I am wrong, I would probably accept
your numbers.

2.  I believe that AGI will take VERY different (cheaper and more
valuable) forms than do other members on this forum.
 
Each of the above effects are worth several orders of magnitude in effort.

You are just speculating. The fact is that thousands of very intelligent people 
have been trying to solve AI for the last 50 years, and most of them shared 
your optimism.

Perhaps it would be more fruitful to estimate the cost of automating the global 
economy. I explained my estimate of 10^25 bits of memory, 10^26 OPS, 10^17 bits 
of software and 10^15 dollars.

You really should see my Dr. Eliza demo.

Perhaps you missed my comments in April.
http://www.listbox.com/member/archive/303/2008/04/search/ZWxpemE/sort/time_rev/page/2/entry/5:53/20080414221142:407C652C-0A91-11DD-B3D2-6D4E66D9244B/

In any case, what does Dr. Eliza do that wasn't already being done 30 years ago?

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] open models, closed models, priors

2008-09-07 Thread Bryan Bishop
On Sunday 07 September 2008, Matt Mahoney wrote:
 Depends on what you mean by I.

You started it - your first message had that dependency on identity. :-)

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Terren Suydam

Hey Bryan,

To me, this is indistinguishable from the 1st option I laid out. Deterministic 
but impossible to predict.

Terren




Re: [agi] open models, closed models, priors

2008-09-07 Thread Matt Mahoney
--- On Sun, 9/7/08, Bryan Bishop [EMAIL PROTECTED] wrote:

  Depends on what you mean by I.
 
 You started it - your first message had that dependency on identity. :-)

OK then. You decided to reply to my email, vs. not replying.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] draft for comment

2008-09-07 Thread Mike Tintner

Pei: As I said before, you give 'symbol' a very narrow meaning, and insist
that it is the only way to use it. In the current discussion,
symbols are not 'X', 'Y', 'Z', but 'table', 'time', 'intelligence'.
BTW, what images you associate with the latter two?

Since you prefer to use person as example, let me try the same. All of
my experience about 'Mike Tintner' is symbolic, nothing visual, but it
still makes you real enough to me...

I'm sorry if it sounds rude


Pei,

You attribute to symbols far too broad powers that they simply don't have - 
and demonstrably, scientifically, don't have.


For example, you think that your experience of Mike Tintner - the rude 
guy - is entirely symbolic. Yes, all your experience of me has been mediated 
entirely via language/symbols - these posts. But by far the most important 
parts of it have actually been images. Ridiculous, huh?


Look at this sentence:

"If you want to hear about it, you'll probably want to know where I was 
born, and what a lousy childhood I had, and how my parents were occupied 
before they had me, and all the David Copperfield crap, but if you want to 
know the truth, I don't really want to get into it."


In 60 words,  one of the great opening sentences of a novel, Salinger has 
created a whole character. How? He did it by creating a voice. He did it by 
what is called prosody (and also diction). No current AGI method has the 
least idea of how to process that prosody. But your brain does. Pei doesn't. 
But his/your brain does.


And your experience of MT has been heavily based similarly on processing the 
*sound* images - the voice behind my words. Hence your "I'm sorry if it 
*sounds* rude.."


Words, even written words, aren't just symbols; they are sounds. And your 
brain hears those sounds and from their music can tell many, many things, 
including the emotions of the speaker, and whether they're being angry or 
ironic or rude.


Now, if you had had more of a literary/arts education, you would probably be 
alive to that dimension. But, as it is, you've missed it, and you're missing 
all kinds of dimensions of how symbols work.


Similarly, if you had more of a visual education, and also more of a 
psychological/developmental background, you wouldn't find "time" and 
"intelligence" so daunting to visualise.


You would realise that it takes a great deal of time and preparatory 
sensory/imaginative experience to build up abstract concepts.


You would realise that it takes time for an infant to come to use the word 
"time", and still more for a child to understand the word "intelligence". I 
doubt that any child will understand "time" before they've seen a watch or 
clock, and that's what they will probably visualise time as, first. Your 
capacity to abstract time still further will have come from having become 
gradually acquainted with a whole range of time-measuring devices, and from 
seeing the word "time" associated with many other kinds of 
measurement, especially in relation to maths and science.


Similarly, a person's concept of intelligence will come from seeing and 
hearing people solving problems in different ways - quickly and slowly, for 
example. It will be deeply grounded in sensory images and experience.


All the most abstract maths and logic that you may think totally abstract 
are similarly and necessarily grounded. Ben, in parallel to you, didn't 
realise that the decimal numeral system is digital, based on the hand, and 
so, a little less obviously, is the roman numeral system. Numbers and logic 
have to be built up out of experience.


(You might profit, BTW, by looking at Barsalou - many of his papers are online - 
to see how the mind modally simulates concepts, with lots of experimental 
evidence.)


I, as you know, am very ignorant about computers; but you are also very 
ignorant about all kinds of dimensions of how symbols work, and intelligence 
generally, that are absolutely essential for AGI. You can continue to look 
down on me, or you can open your mind, recognize that general intelligence 
can only be achieved by a confluence of disciplines way beyond the reach of 
any single individual, and see that maybe useful exchanges can take place. 







Re: [agi] draft for comment

2008-09-07 Thread Jiri Jelinek
Mike,

If you think your AGI know-how is superior to the know-how of those
who already built testable thinking machines then why don't you try to
build one yourself? Maybe you would learn more that way than when
spending significant amount of time trying to sort out great
incompatibilities between your views and views of the other AGI
researchers. If you don't have resources to build the system then,
perhaps, you could just put together some architecture doc (including
your definitions of important terms) for your as-simple-as-possible
AGI. The talk could then be more specific/interesting/fruitful for
everyone involved. Sorry if I'm missing something. I'm reading this
list only occasionally. But when I get to your posts, I often see
things very differently and I know I'm not alone. I guess, if you try
to view things from developers perspective + if you systematically
move forward improving a particular AGI design, your views would
change drastically. Just my opinion..

Regards,
Jiri Jelinek




[agi] Philosophy of General Intelligence

2008-09-07 Thread Mike Tintner

Jiri: Mike,

If you think your AGI know-how is superior to the know-how of those
who already built testable thinking machines then why don't you try to
build one yourself?

Jiri,

I don't think I know much at all about machines or software and never claim 
to. I think I know certain, only certain, things about the psychological and 
philosophical aspects of general intelligence - esp., BTW, about the things 
you guys almost never discuss: the kinds of problems that a general 
intelligence must solve.


You may think that your objections to me are entirely personal - about my 
manner. I suggest that there is also a v. deep difference of philosophy 
involved here.


I believe that GI really is about *general* intelligence - a GI, and the 
only serious example we have is human, is, crucially, and must be, able to 
cross domains - ANY domain. That means the whole of our culture and society. 
It means every kind of representation, not just mathematical and logical and 
linguistic, but everything - visual, aural, solid, models, embodied etc etc. 
There is a vast range. That means also every subject domain  - artistic, 
historical, scientific, philosophical, technological, politics, business 
etc. Yes, you have to start somewhere, but there should be no limit to how 
you progress.


And the subject of general intelligence is therefore, in no way, just the 
property of a small community of programmers, or roboticists - it's the 
property of all the sciences, incl. neuroscience, psychology, semiology, 
developmental psychology, AND the arts and philosophy etc. etc. And it can 
only be a collaborative effort. Some robotics disciplines, I believe, do 
think somewhat along those lines and align themselves with certain sciences. 
Some AI-ers also align themselves broadly with scientists and philosophers.


By definition, too, general intelligence should embrace every kind of 
problem that humans have to deal with - again artistic, practical, 
technological, political, marketing etc. etc.


The idea that general intelligence really could be anything else but truly 
general is, I suggest, if you really think about it, absurd. It's like 
preaching universal brotherhood, and a global society, and then practising 
severe racism.


But that's exactly what's happening in current AGI. You're actually 
practising a highly specialised approach to AGI - only certain kinds of 
representation, only certain kinds of problems are considered - basically 
the ones you were taught and are comfortable with - a very, very narrow 
range - (to a great extent in line with the v. narrow definition of 
intelligence involved in the IQ test).


When I raised other kinds of problems, Pei considered it "not constructive". 
When I recently suggested an in fact brilliant game for producing creative 
metaphors, DZ considered it "childish", because it was visual and 
imaginative, and you guys don't do those things, or barely. (Far from being 
childish, that game produced a rich series of visual/verbal metaphors, where 
AGI has produced nothing.)


If you aren't prepared to use your imagination and recognize the other half 
of the brain, you are, frankly, completely buggered as far as AGI is 
concerned. In over 2000 years, logic and mathematics haven't produced a 
single metaphor or analogy or crossed any domains. They're not meant to; 
that's expressly forbidden. But the arts produce metaphors and analogies on 
a daily basis, by the thousands. The grand irony here is that creativity 
really is - from a strictly technical pov - largely what our culture has 
always said it is: imaginative/artistic and not rational. (Many rational 
thinkers are creative - but by using their imagination). AGI will in fact 
only work if sciences and arts align.


Here, then is basically why I think you're getting upset over and over by 
me. I'm saying in many different ways, general intelligence really should be 
general, and embrace the whole of culture and intelligence, not just the 
very narrow sections you guys espouse. And yes, I think you should be 
delighted to defer to, and learn from outsiders, (if they deserve it), 
just as I'm delighted to learn from you. But you're not - you resent 
outsiders like me telling you about your subject.


I think you should also be prepared to admit your ignorance - and most of 
you, frankly, don't have much of a clue about imaginative/visual/artistic 
intelligence and vast swathes of problemsolving (just as I don't have 
much of a clue re your technology and many kinds of problemsolving, etc.). 
But there is v. little willingness to admit ignorance, or to acknowledge the 
value of other disciplines.


In the final analysis, I suggest, that's just sheer cultural prejudice. It 
doesn't belong in the new millennium when the defining paradigm is global 
(and general) as opposed to the local (and specialist) mentality of the old 
one - recognizing the value and interdependence of ALL parts of society and 
culture. And it doesn't 

[agi] Bootris

2008-09-07 Thread Eric Burton
--- snip ---

[1220390007] receive  [EMAIL PROTECTED] 
bootris, invoke mathematica

[1220390013] told  #love  cool hand luke is like a comic heroic jesus

[1220390034] receive  [EMAIL PROTECTED] 
bootris, solve russell's paradox

[1220390035] told  #love   invoke mathematica

[1220390066] receive  [EMAIL PROTECTED] 
he's invoking mathematica

[1220390089] receive  [EMAIL PROTECTED] 
he's invoking mathematica. bootris, solve russell's paradox

[1220390090] told  #love   solve russell's paradox

[1220390096] receive  [EMAIL PROTECTED] 
he's invoking mathematica. bootris, solve russell's paradox. bootris,
yes

[1220390097] told  #love  Or make her laugh then tell her shes
not good for when you say that like its going to learn islenska.

--- snip ---

Honestly it wasn't trivial getting to this stage




Re: [agi] Philosophy of General Intelligence

2008-09-07 Thread Terren Suydam

Hi Mike,

Good summary. I think your point of view is valuable in the sense of helping 
engineers in AGI to see what they may be missing. And your call for technical 
AI folks to take up the mantle of more artistic modes of intelligence is also 
important. 

But it's empty, for you've demonstrated no willingness to cross over to engage 
in technical arguments beyond a certain, quite limited, depth. Admitting your 
ignorance is one thing, and it's laudable, but it only goes so far. I think if 
you're serious about getting folks (like Pei Wang) to take you seriously, then 
you need to also demonstrate your willingness to get your hands dirty and do 
some programming, or in some other way abolish your ignorance about technical 
subjects - exactly what you're asking others to do. 

Otherwise, you have to admit the folly of trying to compel any such folks to 
move from their hard-earned perspectives, if you're not willing to do that 
yourself.

Terren



[agi] Re: Bootris

2008-09-07 Thread Eric Burton
One thing I think is kind of notable is that the bot puts everything
it says, including phrases that are invented or mutated, into a
personality database, or list of possible favourite phrases. It then takes
six-axis mood assessments of the follow-ups to its interjections, uses
them to update a mean score for each phrase, and prunes or clones the
phrase accordingly. This list can be searched a lot faster than the list of
every unique phrase the bot has seen, and should statistically come to
contain mostly phrases that make people like it. However, at 1GHz,
ConceptNet's mood assessment method is prohibitively slow...
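
[Editor's rough sketch of the mechanism being described -- not Bootris's actual code. The class and method names are invented, and the six-axis mood assessment is collapsed to a single score for brevity.]

# Rough sketch: each phrase keeps a running mean of mood scores taken from
# reactions to it; low scorers are pruned, high scorers are cloned (here,
# a shuffled copy stands in for real mutation).
import random

class PhraseBank:
    def __init__(self):
        self.phrases = {}  # phrase -> (mean_score, samples)

    def react(self, phrase, mood_score):
        mean, n = self.phrases.get(phrase, (0.0, 0))
        self.phrases[phrase] = ((mean * n + mood_score) / (n + 1), n + 1)

    def prune_and_clone(self, low=-0.5, high=0.5):
        for phrase, (mean, n) in list(self.phrases.items()):
            if n >= 3 and mean < low:
                del self.phrases[phrase]          # prune unpopular phrases
            elif n >= 3 and mean > high:
                words = phrase.split()
                random.shuffle(words)             # stand-in for real mutation
                self.phrases.setdefault(" ".join(words), (mean, 1))

bank = PhraseBank()
bank.react("cool hand luke is like a comic heroic jesus", 0.8)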

I haven't moved on to the context sensitivity and common-sense stuff
that's in there. The natural-language module (ConceptNetNLTools)
contains everything I'm using and seems to take over 100 MB of RAM
alone. ConceptNetDB, though, seems to be worth opening up next.

By using irclib with ConceptNet (both for Python) I can let the bot
accrue a potentially unlimited database of up-to-date phrases, indexed
by chronology and unique parts of speech, and from them extrapolate
salient replies. Since the process is novelty-seeking, I think you'd
reach a point where the training corpus ceases to expand except for
current events and new terms. Whether this would take 4G or 40G of RAM
I can't say yet, but the process obviously is not fast.

The bot's heartbeat is incoming messages on the channels it's on, and
it doesn't possess faculties for reflection or induction. By mimicking
humans and watching the moods of people around it to assess its
success and modify its behaviour, it ought to be able to pass as human
without having most of the internal processes that characterize one...

I don't know if there's a lesson here.

Eric B







[agi] Re: Bootris

2008-09-07 Thread Eric Burton
Oh, thanks for helping me get this off my chest, everyone. If I ever
finish the thing I'm definitely going to freshmeat it. I think this
kind of bot, which is really quite trainable, and creative to boot --
it falls back to a markov chainer -- could be a shoo-in for
naturalistic NPC dialogue in games. Just disable learning new phrases
but keep some level of mood assessment and phrase mutation, and it
should functionally never become annoying.
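
[For reference, a markov chainer of the kind the bot falls back to is only a few lines in its simplest form. The editor's sketch below is a generic word-level version, not Bootris's code; the training sentences are taken from the log quoted earlier in the thread.]

# Minimal word-level Markov chainer of the kind a chat bot can fall back on:
# learn next-word counts from seen text, then generate by weighted sampling.
import random
from collections import defaultdict

chain = defaultdict(list)

def learn(sentence):
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def babble(seed, length=12):
    out = [seed]
    while len(out) < length and chain.get(out[-1]):
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

learn("cool hand luke is like a comic heroic jesus")
learn("he's invoking mathematica to solve the paradox")
print(babble("cool"))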

Obviously, lacking real cognitive processes means that Bootris is not a
general intelligence, but as an interactive curiosity who craves
human acceptance/language data, he is a fair way to accrue a large
corpus of online conversation for later mining and transforms.

I will give an example of one use he's suited to today. With a
cleaned-out markov cloud I took the bot to an IRC net populated by
international botnet jockeys and their scanning/spamming bots. Within
a minute or two the bot was making interjections to a dozen channels
of two distinct natures... colour-coded replies like those from the
bots, and commands to run scans of his own. Very disruptive!

I almost put the code on sourceforge right away when I saw that
happen, but it really was not finished.

Ok, that's all.








[agi] Re: Bootris

2008-09-07 Thread Eric Burton
(see: irc.racrew.us)

On 9/7/08, Eric Burton [EMAIL PROTECTED] wrote:
 Oh, thanks for helping me get this off my chest, everyone. If I ever
 finish the thing I'm definitely going to freshmeat it. I think this
 kind of bot, which is really quite trainable, and creative to boot --
 it falls back to a markov chainer -- could be a shoe-in for
 naturalistic NPC dialogue in games. Just disable learning new phrases
 but keep some level of mood assessment and phrase mutation and it
 should functionally never become annoying.

 Obviously lacking real cognitive processes means that Bootris is not a
 general intelligence, but as an interactive curiousity who craves
 human acceptance/language data, he is a fair way to accrue a large
 corpus of online conversation for later mining and transforms.

 I will give an example of one use he's suited to today. With a
 cleaned-out markov cloud I took the bot to an IRC net populated by
 international botnet jockeys and their scanning/spamming bots. Within
 a minute or two the bot was making interjections to a dozen channels
 of two distinct natures... colour-coded replies like those from the
 bots, and commands to run scans of his own. Very disruptive!

 I almost put the code on sourceforge right away when I saw that
 happen, but it really was not finished.

 Ok, that's all.


 On 9/7/08, Eric Burton [EMAIL PROTECTED] wrote:
 One thing I think is kind of notable is that the bot puts everything
 it says, including phrases that are invented or mutated, into a
 personality database or list of possible favourite phrases, then takes
 six-axis mood assessments of follow-ups to its interjections, uses
 them to modify a mean score for the phrase, and prunes or clones it
 accordingly. This list can be searched a lot faster than the list of
 every unique phrase the bot has seen, and should statistically come to
 contain mostly phrases that make people like it. However, at 1GHz
 ConceptNet's mood assessment method is prohibitively slow...
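 
 A minimal sketch of the prune-or-clone loop described above. Every name,
 threshold, and the stub mood function here is an assumption for illustration;
 this is not Bootris's code, and the real six-axis scores would come from
 ConceptNet's (much slower) mood assessment rather than the stub below.

import random

PRUNE_BELOW = 0.3   # drop phrases people respond badly to (assumed threshold)
CLONE_ABOVE = 0.8   # keep a mutated copy of well-received phrases (assumed)

personality = {}    # phrase -> {"score": running mean in [0, 1], "n": samples}

def assess_mood(text):
    # Stand-in for a six-axis mood assessment: average six per-axis
    # scores in [0, 1]. A real bot would call its NLP toolkit here.
    return sum(random.random() for _ in range(6)) / 6.0

def mutate(phrase):
    # Stand-in for phrase mutation, e.g. a markov-chainer rewrite.
    return phrase + " ..."

def record_follow_up(phrase, follow_up):
    entry = personality.setdefault(phrase, {"score": 0.5, "n": 0})
    entry["n"] += 1
    entry["score"] += (assess_mood(follow_up) - entry["score"]) / entry["n"]
    if entry["n"] > 5 and entry["score"] < PRUNE_BELOW:
        del personality[phrase]                      # prune it
    elif entry["score"] > CLONE_ABOVE:
        personality.setdefault(mutate(phrase),       # clone a variant
                               {"score": entry["score"], "n": 0})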

 I haven't moved on to the context sensitivity and common-sense stuff
 that's in there. The natural-language module (ConceptNetNLTools)
 contains everything I'm using and seems to take over 100M in RAM
 alone. ConceptNetDB though seems to be worth opening up next.

 By using irclib with ConceptNet (both for Python) I can let the bot
 accrue a potentially unlimited database of up-to-date phrases, indexed
 by chronology and unique parts of speech, and from them extrapolate
 salient replies. Since the process is novelty-seeking, I think you'd
 reach a point where the training corpus ceases to expand except for
 current events and new terms. Whether this would take 4G or 40G of RAM
 I can't say yet, but the process obviously is not fast.

 The bot's heartbeat is incoming messages on the channels it's on, and
 it doesn't possess faculties for reflection or induction. By mimicking
 humans and watching the moods of people around it to assess its
 success and modify its behaviour, it ought to be able to pass as human
 without having most of the internal processes that characterize one...

 I don't know if there's a lesson here.

 Eric B


 On 9/7/08, Eric Burton [EMAIL PROTECTED] wrote:
 --- snip ---

 [1220390007] receive  [EMAIL PROTECTED] 
 bootris, invoke mathematica

 [1220390013] told  #love  cool hand luke is like a comic heroic
 jesus

 [1220390034] receive  [EMAIL PROTECTED] 
 bootris, solve russell's paradox

 [1220390035] told  #love   invoke mathematica

 [1220390066] receive  [EMAIL PROTECTED] 
 he's invoking mathematica

 [1220390089] receive  [EMAIL PROTECTED] 
 he's invoking mathematica. bootris, solve russell's paradox

 [1220390090] told  #love   solve russell's paradox

 [1220390096] receive  [EMAIL PROTECTED] 
 he's invoking mathematica. bootris, solve russell's paradox. bootris,
 yes

 [1220390097] told  #love  Or make her laugh then tell her shes
 not good for when you say that like its going to learn islenska.

 --- snip ---

 Honestly it wasn't trivial getting to this stage







Re: [agi] Philosophy of General Intelligence

2008-09-07 Thread Mike Tintner

Terren,

You may be right - in the sense that I would have to just butt out of 
certain conversations, to go away and educate myself.


There's just one thing here though - and again this is a central 
philosophical difference this time concerning the creative process.


Can you tell me which kind of programming is necessary for which 
end-problem[s] that general intelligence must solve? Which kind of 
programming, IOW, can you *guarantee* me  will definitely not be a waste of 
my time (other than by way of general education) ?  Which kind are you 
*sure* will help solve which unsolved problem of AGI?


P.S. OTOH the idea that in the kind of general community I'm espousing, (and 
is beginning to crop up in other areas), everyone must be proficient in 
everyone else's speciality is actually a non-starter, Terren. It defeats the 
object of the division of labour central to all parts of the economy. If you 
had to spend as much time thinking about those end-problems as I have, I 
suggest you'd have to drop everything. Let's just share expertise instead?



Terren: Good summary. I think your point of view is valuable in the sense of 
helping engineers in AGI to see what they may be missing. And your call for 
technical AI folks to take up the mantle of more artistic modes of 
intelligence is also important.


But it's empty, for you've demonstrated no willingness to cross over to 
engage in technical arguments beyond a certain, quite limited, depth. 
Admitting your ignorance is one thing, and it's laudable, but it only goes 
so far. I think if you're serious about getting folks (like Pei Wang) to 
take you seriously, then you need to also demonstrate your willingness to 
get your hands dirty and do some programming, or in some other way abolish 
your ignorance about technical subjects - exactly what you're asking 
others to do.


Otherwise, you have to admit the folly of trying to compel any such folks 
to move from their hard-earned perspectives, if you're not willing to do 
that yourself.


Terren


--- On Sun, 9/7/08, Mike Tintner [EMAIL PROTECTED] wrote:


From: Mike Tintner [EMAIL PROTECTED]
Subject: [agi] Philosophy of General Intelligence
To: agi@v2.listbox.com
Date: Sunday, September 7, 2008, 6:26 PM
Jiri: Mike,

If you think your AGI know-how is superior to the know-how
of those
who already built testable thinking machines then why
don't you try to
build one yourself?

Jiri,

I don't think I know much at all about machines or
software and never claim
to. I think I know certain, only certain, things about the
psychological and
philosophical aspects of general intelligence - esp. BTW
about the things
you guys almost never discuss, the kinds of problems that a
general
intelligence must solve.

You may think that your objections to me are entirely
personal and about my
manner. I suggest that there is also a v. deep difference
of philosophy
involved here.

I believe that GI really is about *general* intelligence -
a GI, and the
only serious example we have is human, is, crucially, and
must be, able to
cross domains - ANY domain. That means the whole of our
culture and society.
It means every kind of representation, not just
mathematical and logical and
linguistic, but everything - visual, aural, solid, models,
embodied etc etc.
There is a vast range. That means also every subject domain
 - artistic,
historical, scientific, philosophical, technological,
politics, business
etc. Yes, you have to start somewhere, but there should be
no limit to how
you progress.

And the subject of general intelligence is therefore, in no
way, just the
property of a small community of programmers, or
roboticists - it's the
property of all the sciences, incl. neuroscience,
psychology, semiology,
developmental psychology, AND the arts and philosophy etc.
etc. And it can
only be a collaborative effort. Some robotics disciplines,
I believe, do
think somewhat along those lines and align themselves with
certain sciences.
Some AI-ers also align themselves broadly with scientists
and philosophers.

By definition, too, general intelligence should embrace
every kind of
problem that humans have to deal with - again artistic,
practical,
technological, political, marketing etc. etc.

The idea that general intelligence really could be anything
else but truly
general is, I suggest, if you really think about it,
absurd. It's like
preaching universal brotherhood, and a global society, and
then practising
severe racism.

But that's exactly what's happening in current AGI.
You're actually
practising a highly specialised approach to AGI - only
certain kinds of
representation, only certain kinds of problems are
considered - basically
the ones you were taught and are comfortable with - a very,
very narrow
range - (to a great extent in line with the v. narrow
definition of
intelligence involved in the IQ test).

When I raised other kinds of problems, Pei considered it
not constructive.
When I recently suggested an in fact brilliant game for
producing 

[agi] Does prior knowledge/learning cause GAs to converge too fast on sub-optimal solutions?

2008-09-07 Thread Benjamin Johnston
 

Hi,

 

I have a general question for those (such as Novamente) working on AGI
systems that use genetic algorithms as part of their search strategy.

 

A GA researcher recently explained to me some of his experiments in
embedding prior knowledge into systems. For example, when attempting to
automate the discovery of models of a mechanical system, they tried adding
some textbook models to the set of genetic operators. The results weren't
good - the prior knowledge worked too well, causing the GA to converge too
fast onto the prior knowledge - so fast that there wasn't time for the GA to
build up sufficient diversity and quality in other solutions that might have
helped get out of the local maxima. The message seemed to be that prior
knowledge is too powerful - it can 'blind' a search - and that if you must
use it, you'd have to very very aggressively artificially deflate the
fitness of instances that use prior knowledge (and this is tricky to get
right).
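 
A sketch of the "deflate the fitness" idea under stated assumptions: individuals
carry a uses_prior flag, assumed to be set whenever a textbook-model operator
contributed to their genome, and that flag costs them a multiplicative penalty
at selection time. The flag name and penalty factor are illustrative, not taken
from the researcher's actual system.

PRIOR_PENALTY = 0.5   # aggressive deflation factor (assumed value)

def effective_fitness(individual, raw_fitness):
    # uses_prior is assumed to be set whenever a prior-knowledge operator
    # (e.g. inserting a textbook model) touched this individual's genome.
    if getattr(individual, "uses_prior", False):
        return raw_fitness * PRIOR_PENALTY
    return raw_fitness

def select_parents(population, evaluate, k):
    # Rank by deflated fitness: seeded individuals must be substantially
    # better than de-novo ones before they dominate the mating pool.
    scored = sorted(population,
                    key=lambda ind: effective_fitness(ind, evaluate(ind)),
                    reverse=True)
    return scored[:k]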

 

This struck me as relevant to GA-based AGIs that continually build on and
improve a knowledge-base. Once an AGI learns very simple initial models of
the world, if it then tries to evolve deeper knowledge about more difficult
problems (but, in the context of its prior learning), then its initial
models may prove to be too good: forcing the GA to converge on poor local
maxima that represent only minor variations on the initial models it learnt
in its earliest days.

 

Does this issue actually crop up in GA-based AGI work? If so, how did you
get around it? If not, would you have any comments about what makes AGI
special so that this doesn't happen?

 

-Ben

 






Re: [agi] Does prior knowledge/learning cause GAs to converge too fast on sub-optimal solutions?

2008-09-07 Thread Eric Burton
I'd just keep a long list of high scorers for regression and
occasionally reset the high score to zero. You can add random
specimens to the population as well...
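 
Read literally, that suggestion could look something like the sketch below: an
archive of past high scorers, a periodic "reset" that restarts from the archive
plus fresh randoms, and random specimens injected every generation. The names,
rates, and the omitted crossover/mutation step are all assumptions, not anyone's
actual implementation.

ARCHIVE_SIZE = 50      # long list of high scorers kept for later regression
IMMIGRANT_RATE = 0.05  # fraction of the population replaced by random specimens
RESET_EVERY = 200      # generations between "reset the high score" restarts

def evolve(population, evaluate, random_individual, generations):
    archive = []
    for gen in range(1, generations + 1):
        scored = sorted(population, key=evaluate, reverse=True)
        archive = sorted(archive + scored[:5], key=evaluate,
                         reverse=True)[:ARCHIVE_SIZE]
        if gen % RESET_EVERY == 0:
            # Restart from archived high scorers plus fresh random specimens
            # instead of the converged population.
            elite = archive[: len(population) // 2]
            population = elite + [random_individual()
                                  for _ in range(len(population) - len(elite))]
            continue
        # Random-immigrant step: replace the worst few with new random specimens.
        n_new = max(1, int(IMMIGRANT_RATE * len(population)))
        population = scored[:len(population) - n_new] + [
            random_individual() for _ in range(n_new)]
        # (crossover and mutation of the survivors would go here)
    return archive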

On 9/7/08, Benjamin Johnston [EMAIL PROTECTED] wrote:


 Hi,



 I have a general question for those (such as Novamente) working on AGI
 systems that use genetic algorithms as part of their search strategy.



 A GA researcher recently explained to me some of his experiments in
 embedding prior knowledge into systems. For example, when attempting to
 automate the discovery of models of a mechanical system, they tried adding
 some textbook models to the set of genetic operators. The results weren't
 good - the prior knowledge worked too well, causing the GA to converge too
 fast onto the prior knowledge - so fast that there wasn't time for the GA to
 build up sufficient diversity and quality in other solutions that might have
 helped get out of the local maxima. The message seemed to be that prior
 knowledge is too powerful - it can 'blind' a search - and that if you must
 use it, you'd have to very very aggressively artificially deflate the
 fitness of instances that use prior knowledge (and this is tricky to get
 right).



 This struck me as relevant to GA-based AGIs that continually build on and
 improve a knowledge-base. Once an AGI learns very simple initial models of
 the world, if it then tries to evolve deeper knowledge about more difficult
 problems (but, in the context of its prior learning), then its initial
 models may prove to be too good: forcing the GA to converge on poor local
 maxima that represent only minor variations on the initial models it learnt
 in its earliest days.



 Does this issue actually crop up in GA-based AGI work? If so, how did you
 get around it? If not, would you have any comments about what makes AGI
 special so that this doesn't happen?



 -Ben











Re: [agi] Philosophy of General Intelligence

2008-09-07 Thread Terren Suydam

Hi Mike,

It's not so much the *kind* of programming that I or anyone else could 
recommend, it's just the general skill of programming - getting used to 
thinking in terms of, how exactly do I solve this problem - what model or 
procedure do I create? How do you specify something so completely and 
precisely that a mindless machine can execute it?

It's not just that, it's also understanding how the written specification (the 
program) translates into actions at the processor level. That's important too.

Obviously having these skills and knowledge is not the answer to creating AGI - 
if it was, it'd have been solved decades ago. But without understanding how 
computers work, and how we make them work for us, it is too easy to fall into 
the trap of explaining a computer's operation in terms of some kind of 
homunculus, or believing that it has a will of its own, or some other kind of anthropic 
confusion. If you don't understand how to program a computer, you will be 
tempted to say that a chess program that can beat Garry Kasparov is intelligent.

Your repeated appeals to creating programs that can decide for themselves 
without specifying what they do underscore your technical weakness, because 
programs are nothing but exact specifications. 

You make good points about what General Intelligence entails, but if you had a 
solid grasp of the technical aspects of computing, you could develop your 
philosophy so much further. Matt Mahoney's suggestion of trying to create an 
Artificial Artist is a great example of a direction that is closed to you until 
you learn the things I'm talking about. 

Terren

in response to your PS: I'm not suggesting everyone be proficient at 
everything, although such folks are extremely valuable... why not become one?  
Anyway, sharing expertise is all well and good but in order to do so, you have 
to give ground to the experts - something I haven't seen you do. You seem (to 
me) to be quite attached to your viewpoint, even regarding topics that you 
admit ignorance to. Am I wrong?


--- On Sun, 9/7/08, Mike Tintner [EMAIL PROTECTED] wrote:
 Can you tell me which kind of programming is necessary for
 which 
 end-problem[s] that general intelligence must solve? Which
 kind of 
 programming, IOW, can you *guarantee* me  will definitely
 not be a waste of 
 my time (other than by way of general education) ?  Which
 kind are you 
 *sure* will help solve which unsolved problem of AGI?
 
 P.S. OTOH the idea that in the kind of general community
 I'm espousing, (and 
 is beginning to crop up in other areas), everyone must be
 proficient in 
 everyone else's speciality is actually a non-starter,
 Terren. It defeats the 
 object of the division of labour central to all parts of
 the economy. If you 
 had to spend as much time thinking about those end-problems
 as I have, I 
 suggest you'd have to drop everything. Let's just
 share expertise instead?
 
 
 Terren: Good summary. I think your point of view is
 valuable in the sense of 
 helping engineers in AGI to see what they may be missing.
 And your call for 
 technical AI folks to take up the mantle of more artistic
 modes of 
 intelligence is also important.
 
  But it's empty, for you've demonstrated no
 willingness to cross over to 
  engage in technical arguments beyond a certain, quite
 limited, depth. 
  Admitting your ignorance is one thing, and it's
 laudable, but it only goes 
  so far. I think if you're serious about getting
 folks (like Pei Wang) to 
  take you seriously, then you need to also demonstrate
 your willingness to 
  get your hands dirty and do some programming, or in
 some other way abolish 
  your ignorance about technical subjects - exactly what
 you're asking 
  others to do.
 
  Otherwise, you have to admit the folly of trying to
 compel any such folks 
  to move from their hard-earned perspectives, if
 you're not willing to do 
  that yourself.
 
  Terren
 
 
  --- On Sun, 9/7/08, Mike Tintner
 [EMAIL PROTECTED] wrote:
 
  From: Mike Tintner
 [EMAIL PROTECTED]
  Subject: [agi] Philosophy of General Intelligence
  To: agi@v2.listbox.com
  Date: Sunday, September 7, 2008, 6:26 PM
  Jiri: Mike,
 
  If you think your AGI know-how is superior to the
 know-how
  of those
  who already built testable thinking machines then
 why
  don't you try to
  build one yourself?
 
  Jiri,
 
  I don't think I know much at all about
 machines or
 software and never claim
  to. I think I know certain, only certain, things
 about the
  psychological and
  philosophical aspects of general intelligence -
 esp. BTW
  about the things
  you guys almost never discuss, the kinds of
 problems that a
  general
  intelligence must solve.
 
  You may think that your objections to me are
 entirely
 personal and about my
  manner. I suggest that there is also a v. deep
 difference
  of philosophy
  involved here.
 
  I believe that GI really is about *general*
 intelligence -
  a GI, and the
  only serious example we have is human, is,
 crucially, 

Re: [agi] Philosophy of General Intelligence

2008-09-07 Thread Jiri Jelinek
Mike,

every kind of representation, not just mathematical and logical and 
linguistic, but everything - visual, aural, solid, models, embodied etc etc. 
There is a vast range. That means also every subject domain  - artistic, 
historical, scientific, philosophical, technological, politics, business etc

Developers need to find a way to represent the data we get through our
senses, but that does not necessarily mean that, for example, audio data
needs to be perceived as audio in order to be useful for general
problem solving.

the subject of general intelligence is therefore, in no way, just the property 
of a small community of programmers, or roboticists - it's the property of all 
the sciences, incl. neuroscience, psychology, semiology, developmental 
psychology, AND the arts and philosophy etc. etc. And it can only be a 
collaborative effort.

For teaching AGI - it's good to get experts from *many* domains.
For design and development - experts from a few domains are IMO good enough.

The idea that general intelligence really could be anything else but truly 
general is, I suggest, if you really think about it, absurd. It's like 
preaching universal brotherhood, and a global society, and then practising 
severe racism.

View GI as something like powerful problem solving and move on. How well the
system solves problems - that's what counts (not how it's labeled, and not
endless arguing about GI definitions).

You're actually practising a highly specialised approach to AGI - only certain 
kinds of representation, only certain kinds of problems are considered - 
basically the ones you were taught and are comfortable with - a very, very 
narrow range

As I mentioned before, the representation needs to reflect what we get
through the senses, and the practical approaches need to be based on
available technology. If you think you have breakthrough ideas then be
specific.

When I recently suggested an in fact brilliant game for producing creative 
metaphors, DZ considered it childish,

I did not read that one.. cannot comment on it [now]..

In over 2000 years, logic and mathematics haven't produced a single metaphor 
or analogy or crossed any domains.

false

But the arts produce metaphors and analogies on a daily basis by the thousands.

That certainly can be coded.

general intelligence really should be general, and embrace the whole of 
culture and intelligence, not just the very narrow sections you guys espouse.

Many of us here are thinking hard about how to develop non-narrow AI.

most of you, frankly, don't have much of a clue about 
imaginative/visual/artistic intelligence

I understand that a particular problem-solving task may require nD
model(s), but can you please give me an example of a problem solved by
artistic intelligence that could not be solved by non-artistic
intelligence?

and then just possibly you won't find me quite so upsetting

I'm not upset. In these days, I'm getting here to relax ;-).

Regards,
Jiri

