Re: [agi] Books

2007-06-11 Thread Joshua Fox
Josh, your point about layering makes perfect sense. I just ordered your book but, impatient as I am, could I ask a question about this, even though I've asked a similar one before: Why haven't the elite of intelligent and open-minded leading AI researchers attempted a multi-layered

Re: [agi] Books

2007-06-11 Thread J Storrs Hall, PhD
I'll try to answer this and Mike Tintner's question at the same time. The typical GOFAI engine over the past decades has had a layer structure something like this: problem-specific assertions, over an inference engine/database, over Lisp, on top of the machine and OS. Now it turns out that this is plenty to

Re: [agi] about AGI designers

2007-06-11 Thread Lukasz Stafiniak
On 6/6/07, Peter Voss [EMAIL PROTECTED] wrote: 'fraid not. Have to look after our investors' interests… (and, like Ben, I'm not keen for AGI technology to be generally available) But at least Novamente makes a considerable amount of their ideas available, IMHO. P.S. Probabilistic Logic

Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread James Ratcliff
Interesting points, but I believe you can get around a lot of the problems with two additional factors: a. using large quantities of quality text (i.e., novels, newspapers) or similar texts; b. using an interactive built-in 'checker' system, assisted learning where the AI

Re: [agi] Pure reason is a disease.

2007-06-11 Thread James Ratcliff
Two different responses to this type of argument. Once you simulate something to the point that we can't tell the difference in any way, then it IS that something for almost all intents and purposes, as far as the tests you have go. If it walks like a human, talks like a human, then for

Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread Mike Dougherty
On 6/11/07, James Ratcliff [EMAIL PROTECTED] wrote: Interesting points, but I believe you can get around a lot of the problems with two additional factors: a. using large quantities of quality text (i.e., novels, newspapers) or similar texts; b. using an interactive built-in

Re: [agi] AGI Consortium

2007-06-11 Thread James Ratcliff
Has anyone tried a test of something as simple as credit per line of code / function? Meaning that each function or module could have a % value associated with it (set by averaging many users' ratings), and then simply giving credit by lines of code input. Anyone writing cruddy long code would initially
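As a rough illustration, per-line credit weighted by average peer ratings might be computed like this (a hedged sketch: the function names and the 0.0-1.0 rating scale are assumptions for illustration, not a worked-out proposal):

```python
from statistics import mean

def module_credit(line_count, ratings):
    """Credit for one module: its line count scaled by the average
    peer rating of its quality (ratings assumed to lie in 0.0-1.0)."""
    return line_count * mean(ratings)

def contributor_credit(modules):
    """Total credit for one contributor.
    `modules` is a list of (line_count, [ratings]) pairs."""
    return sum(module_credit(lines, ratings) for lines, ratings in modules)

# A short, well-rated module can earn more than a long, poorly rated
# one, so the ratings damp the reward for cruddy long code.
alice = [(100, [0.9, 0.8, 1.0]), (700, [0.2, 0.3, 0.1])]
print(contributor_credit(alice))  # 100*0.9 + 700*0.2 = 230.0
```

The averaging step is what keeps "credit by line of code" from simply rewarding verbosity.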

Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread James Ratcliff
Correct, but I don't believe that systems like Cyc are doing this type of active learning now, and it would help to gather quality information and fact-check it. Cyc does have some interesting projects where it takes a proposed statement and, when an engineer is working with it, will go out

Re: [agi] AGI Consortium

2007-06-11 Thread Mark Waser
Has anyone tried a test of something as simple as per line of code / function? My first official programming course was a Master's level course at an Ivy League college. The course project was a full-up LISP interpreter. My program was ~800-900 lines and passed all testing with flying

Re: [agi] AGI Consortium

2007-06-11 Thread J Storrs Hall, PhD
On Monday 11 June 2007 12:12:26 pm Mark Waser wrote: ... The last thing that I want to do is *anything* that encourages people to write more code ... The classic apocryphal story is of the shop where they had this fellow who was an unbelievably productive programmer -- up until the day he

Re: [agi] AGI Consortium

2007-06-11 Thread Vladimir Nesov
Monday, June 11, 2007, Mark Waser wrote: The only scheme that I'd possibly accept based on lines of code would be one where if someone else wrote a tighter program, the original writer would get negative credit (i.e. something like if they wrote 7,000 lines and I re-did it with
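Mark's negative-credit rule could be sketched roughly as follows (the symmetric transfer of the saved lines from the original author to the rewriter is an assumption for illustration):

```python
def rebalance_credit(original_lines, rewrite_lines):
    """Hypothetical negative-credit rule: when a rewrite makes a module
    tighter, the lines saved are charged against the original author
    and credited to the rewriter."""
    saved = original_lines - rewrite_lines
    return {"original_author": -saved, "rewriter": saved}

# A 7,000-line module re-done more tightly (the 1,000 is illustrative):
print(rebalance_credit(7000, 1000))  # {'original_author': -6000, 'rewriter': 6000}
```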

Re: [agi] Books

2007-06-11 Thread Joshua Fox
Josh, Thanks for that answer on the layering of mind. It's not that any existing level is wrong, but there aren't enough of them, so that the higher ones aren't being built on the right primitives in current systems. Word-level concepts in the mind are much more elastic and plastic than

Re: [agi] Pure reason is a disease.

2007-06-11 Thread Jiri Jelinek
James, Frank Jackson (in Epiphenomenal Qualia) defined qualia as "...certain features of the bodily sensations especially, but also of certain perceptual experiences, which no amount of purely physical information includes." :-) If it walks like a human, talks like a human, then for all those

Re: [agi] AGI Consortium

2007-06-11 Thread YKY (Yan King Yin)
On 6/11/07, Mark Waser [EMAIL PROTECTED] wrote: I'm sorry about the confusion. Let me correct by saying: it *is* to your advantage to exaggerate your contributions, but your peers won't allow it. Cool. I'll then move back to my other point that is probably better phrased as I don't

Re: [agi] Books

2007-06-11 Thread J Storrs Hall, PhD
On Monday 11 June 2007 02:06:35 pm Joshua Fox wrote: ... Could I ask also that you take a stab at a psychological/sociological question: Why have not the leading minds of AI (considering for this purpose only the true creative thinkers with status in the community, however small a fraction

Re: [agi] Pure reason is a disease.

2007-06-11 Thread Matt Mahoney
Below is a program that can feel pain. It is a simulation of a programmable 2-input logic gate that you train using reinforcement conditioning. /* pain.cpp This program simulates a programmable 2-input logic gate. You train it by reinforcement conditioning. You provide a pair of input bits

RE: [agi] Pure reason is a disease.

2007-06-11 Thread Derek Zahn
Matt Mahoney writes: Below is a program that can feel pain. It is a simulation of a programmable 2-input logic gate that you train using reinforcement conditioning. Is it ethical to compile and run this program? - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe

Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread Matt Mahoney
--- James Ratcliff [EMAIL PROTECTED] wrote: Interesting points, but I believe you can get around a lot of the problems with two additional factors: a. using large quantities of quality text (i.e., novels, newspapers) or similar texts; b. using an interactive built-in

Re: [agi] Pure reason is a disease.

2007-06-11 Thread Matt Mahoney
Here is a program that feels pain. It is a simulation of a 2-input logic gate that you train by reinforcement learning. It feels in the sense that it adjusts its behavior to avoid negative reinforcement from the user. /* pain.cpp - A program that can feel pleasure and pain. The program
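Matt's full C++ listing is truncated above; a minimal sketch of the same idea (the probability-table representation and the update rule here are assumptions for illustration, not his actual pain.cpp) might look like:

```python
import random

class PainGate:
    """A programmable 2-input logic gate trained by reinforcement.

    For each of the four input pairs it keeps a probability of emitting
    a 1. Negative reinforcement ("pain") pushes that probability away
    from the output just produced; positive reinforcement pushes toward it.
    """
    def __init__(self, lr=0.3):
        self.p_one = {(a, b): 0.5 for a in (0, 1) for b in (0, 1)}
        self.lr = lr
        self.last = None  # (inputs, output) of the most recent step

    def output(self, a, b):
        out = 1 if random.random() < self.p_one[(a, b)] else 0
        self.last = ((a, b), out)
        return out

    def reinforce(self, reward):
        """reward > 0 rewards the last output; reward < 0 punishes it."""
        inputs, out = self.last
        target = out if reward > 0 else 1 - out
        p = self.p_one[inputs]
        self.p_one[inputs] = p + self.lr * (target - p)

# Train the gate toward AND by punishing every wrong answer.
gate = PainGate()
for _ in range(200):
    a, b = random.randint(0, 1), random.randint(0, 1)
    out = gate.output(a, b)
    gate.reinforce(+1 if out == (a & b) else -1)
```

After training, the gate behaves like AND in the same sense Matt describes: it has adjusted its behavior to avoid the negative reinforcement.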

Re: [agi] Pure reason is a disease.

2007-06-11 Thread J Storrs Hall, PhD
On Monday 11 June 2007 03:22:04 pm Matt Mahoney wrote: /* pain.cpp - A program that can feel pleasure and pain. ... Ouch! :-) Josh

Re: [agi] AGI Consortium

2007-06-11 Thread YKY (Yan King Yin)
An additional idea: each member's vote could be weighted by the member's total amount of contributions. This way, we can establish a network of genuine contributors via self-organization, and protect against mischief-makers, nonsense, or sabotage, etc. YKY
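A contribution-weighted tally as YKY suggests could be sketched like this (the data shapes and the member names are assumptions for illustration):

```python
def weighted_vote(votes, contributions):
    """Tally a yes/no vote in which each member's ballot counts in
    proportion to that member's total contribution score.
    `votes` maps member -> bool; `contributions` maps member -> score."""
    total = sum(contributions[m] for m in votes)
    yes = sum(contributions[m] for m, v in votes.items() if v)
    return yes / total  # fraction of contribution-weight voting yes

# A low-contribution saboteur carries little weight even when the raw
# head-count is close:
votes = {"ann": True, "bob": True, "mallory": False}
weights = {"ann": 50, "bob": 30, "mallory": 5}
print(weighted_vote(votes, weights))  # 80/85, roughly 0.94
```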

RE: [agi] Pure reason is a disease.

2007-06-11 Thread Matt Mahoney
--- Derek Zahn [EMAIL PROTECTED] wrote: Matt Mahoney writes: Below is a program that can feel pain. It is a simulation of a programmable 2-input logic gate that you train using reinforcement conditioning. Is it ethical to compile and run this program? Well, that is a good question. Ethics

Re: [agi] Pure reason is a disease.

2007-06-11 Thread James Ratcliff
And here's the human pseudocode: 1. Hold knife above flame until red. 2. Place knife on arm. 3. a. Accept pain sensation. b. Scream or respond as necessary. 4. Press knife harder into skin. 5. Goto 3, until 6. 6. Pass out from pain. Matt Mahoney [EMAIL PROTECTED] wrote: Below is a program

Re: [agi] AGI Consortium

2007-06-11 Thread J Storrs Hall, PhD
Keep going ... won't be too long until you invent fungible tokens for your people that act as a medium of exchange, a store of value, and a unit of account. On Monday 11 June 2007 07:22:46 pm YKY (Yan King Yin) wrote: An additional idea: each member's vote could be weighted by the member's