Re: [agi] DARPA funds using memristors to model synapses in neuromorphic computing

2008-11-27 Thread Philip Hunt
in the cortex. 10^8 seconds is 3 years! I think that number's wrong. -- Philip Hunt, [EMAIL PROTECTED] Please avoid sending me Word or PowerPoint attachments. See http://www.gnu.org/philosophy/no-word-attachments.html --- agi Archives: https://www.listbox.com

Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Philip Hunt
the Loebner prize is silly.

[agi] AIXI (was: Mushed Up Decision Processes)

2008-11-30 Thread Philip Hunt
is there something to AIXI or is it something I can safely ignore?

Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Philip Hunt
/Function_predictor ) I also think it would be useful if there were a regular (maybe annual) competition in the function predictor domain (or some similar domain). A bit like the Loebner Prize, except that it would be more useful to the advancement of AI, since the Loebner prize is silly.

Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-01 Thread Philip Hunt
theories, they are merely rewordings of the same theory. And choosing between them is arbitrary; you may prefer one to the other because human minds can visualise it more easily, or it's easier to calculate, or you have an aesthetic preference for it.

Re: [agi] AIXI

2008-12-01 Thread Philip Hunt
That was helpful. Thanks. 2008/12/1 Matt Mahoney [EMAIL PROTECTED]: --- On Sun, 11/30/08, Philip Hunt [EMAIL PROTECTED] wrote: Can someone explain AIXI to me? AIXI models an intelligent agent interacting with an environment as a pair of interacting Turing machines. At each step, the agent
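For readers who want the formal statement behind the "pair of interacting Turing machines" description: Hutter defines AIXI's action choice as an expectimax over all programs consistent with the interaction history, weighted by program length. A sketch of the standard formulation (m is the horizon, U a universal Turing machine, ℓ(q) the length of program q, and the o_i r_i are observation/reward pairs):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left( r_k + \cdots + r_m \right)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum is the Solomonoff-style prior: shorter environment programs that reproduce the history get exponentially more weight.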

Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Philip Hunt
in the mammalian immune system does change as the immune system evolves to cope with infectious agents; but these changes aren't passed along to the next generation.) * if there are any molecular biologists reading, feel free to correct me.

Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Philip Hunt
IIRC that was the rough order of magnitude assumed in the proposal I reviewed here recently. It might well be. It is anyway apparent that there are different mechanisms in the brain for laying down long-term memories and for short-term thinking over the order of a few seconds.

Re: [agi] Religious attitudes to NBIC technologies

2008-12-08 Thread Philip Hunt
that nanotechnology or AI are specifically prohibited by any of the major religions. And if one society forgoes science, they'll just get outcompeted by their neighbours.

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
to train it in the real world (at least some of the time). If you don't care whether your AGI can use a screwdriver, why have one in the virtual world?

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
to be anything like a virtual world, it could for example be a software modality that can see/understand source code as easily and fluently as humans interpret visual input.) AIUI you're mostly thinking in terms of 2 or 3. Fair comment?

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
even if they look the same. An animal's intuitive physics is a complex system. I expect that in humans a lot of this machinery is re-used to create intelligence. (It may be true, and IMO probably is true, that it's not necessary to re-create this machinery to make an AGI).

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
that).

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
help too.

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
On the other hand, making a virtual world such as I envision is more than a spare-time project, but not more than the project of making a single high-quality video game. GTA IV cost $5 million, so we're not talking about peanuts here.

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
comes in. Actually, $$ aside, we don't even **know how** to make a decent humanoid robot. Or, a decently functional mobile robot **of any kind**. Is that because of hardware or software issues?

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
it. Until an AI can do this, there's no point in trying to get it to play at making cakes, etc.

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
it interacts with our internal model of the world, than vision is. Is the reason just that AI researchers spend all day staring at screens and ignoring their physical bodies and surroundings?? ;-) :-)

Re: [agi] Relevance of SE in AGI

2008-12-21 Thread Philip Hunt
in this field? I've never used formal proofs of correctness of software, so can't comment. I use software testing (unit tests) on pretty much all non-trivial software that I write -- I find doing so makes things much easier.
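The unit-testing habit described above can be as lightweight as a few assertions per function. A minimal sketch using Python's stdlib unittest (the function under test, word_count, is a hypothetical stand-in, not anything from the thread):

```python
import unittest

def word_count(text):
    """Count whitespace-separated words (hypothetical function under test)."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    def test_empty(self):
        self.assertEqual(word_count(""), 0)

    def test_simple(self):
        self.assertEqual(word_count("predict the next word"), 4)

if __name__ == "__main__":
    unittest.main(argv=["wordcount_test"], exit=False, verbosity=0)
```

Tests like these double as executable documentation: each one records a behaviour the author considered worth preserving.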

Re: [agi] Levels of Self-Awareness?

2008-12-24 Thread Philip Hunt
computer to do tasks better than they can (e.g. play chess) and I see no reason why it shouldn't be possible for self awareness. Indeed it would be rather trivial to give an AGI access to its source code.

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
intelligence this way. Care to enlighten me?

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
be a more useful one. While you're at it you may want to change the size of the chunks in each item of prediction, from characters to either strings or s-expressions. Though doing so doesn't fundamentally alter the problem.

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
code.

Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/27 Matt Mahoney matmaho...@yahoo.com: --- On Fri, 12/26/08, Philip Hunt cabala...@googlemail.com wrote: Humans are very good at predicting sequences of symbols, e.g. the next word in a text stream. Why not have that as your problem domain, instead of text compression? That's

Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/28 Philip Hunt cabala...@googlemail.com: Now, consider if I build a program that can predict how some sequences will continue. For example, given ABACADAEA it'll predict the next letter is F, or given: 1 2 4 8 16 32 it'll predict the next number is 64. (Whether the program
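The numeric half of the example above (1 2 4 8 16 32 → 64) can be handled by a toy extrapolator. A sketch, not the program Philip describes; predict_next is a hypothetical helper that only knows arithmetic and geometric progressions:

```python
def predict_next(seq):
    """Toy sequence predictor: extrapolate a constant difference
    (arithmetic progression) or a constant integer ratio (geometric
    progression); return None if neither pattern fits."""
    if len(seq) < 3:
        return None
    d = seq[1] - seq[0]
    if all(b - a == d for a, b in zip(seq, seq[1:])):
        return seq[-1] + d
    if seq[0] != 0 and seq[1] % seq[0] == 0:
        r = seq[1] // seq[0]
        if all(b == a * r for a, b in zip(seq, seq[1:])):
            return seq[-1] * r
    return None

print(predict_next([1, 2, 4, 8, 16, 32]))  # 64
```

The letter example (ABACADAEA → F) needs a richer hypothesis space -- interleaved subsequences -- which is exactly where such hand-coded pattern libraries stop scaling and general induction is wanted.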

Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
at prediction. Whereas all programs that're good at prediction are guaranteed to be good at prediction.

Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/29 Philip Hunt cabala...@googlemail.com: 2008/12/29 Matt Mahoney matmaho...@yahoo.com: Please remember that I am not proposing compression as a solution to the AGI problem. I am proposing it as a measure of progress in an important component (prediction). [...] Turning

Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Philip Hunt
2008/12/29 Matt Mahoney matmaho...@yahoo.com: --- On Mon, 12/29/08, Philip Hunt cabala...@googlemail.com wrote: Incidently, reading Matt's posts got me interested in writing a compression program using Markov-chain prediction. The prediction bit was a piece of piss to write; the compression
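The "easy" prediction half that Philip mentions can indeed be written in a few lines. A minimal sketch of an order-k Markov character predictor (my own illustration, assuming nothing about his actual code):

```python
from collections import Counter, defaultdict

def train(text, order=2):
    """For each length-`order` context, count how often each
    character follows it."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context][text[i + order]] += 1
    return model

def predict(model, context):
    """Return the most frequent character seen after `context`,
    or None for an unseen context."""
    counts = model.get(context)
    return counts.most_common(1)[0][0] if counts else None

model = train("the cat sat on the mat, the cat sat")
print(predict(model, "th"))  # 'e'
```

The harder compression half would feed these same context counts, as probabilities, into an arithmetic coder -- the predictor supplies the model, the coder turns good predictions into short output.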

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Philip Hunt
processing power you need: if processing is very expensive, it makes less sense to re-run an extensive test suite whenever you make a change.

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Philip Hunt
to say so and make your assumptions concrete.