Re: [agi] Building a machine that can learn from experience

2008-12-20 Thread Charles Hixson
Ben Goertzel wrote: Hi, Because some folks find that they are not subjectively sufficient to explain everything they subjectively experience... That would be more convincing if such people were to show evidence that they understand what algorithmic processes are and

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Charles Hixson
Ben Goertzel wrote: On Fri, Dec 19, 2008 at 9:10 PM, J. Andrew Rogers and...@ceruleansystems.com wrote: On Dec 19, 2008, at 5:35 PM, Ben Goertzel wrote: I suppose it would be more accurate to state that every process we can

Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Charles Hixson
Hector Zenil wrote: On Mon, Dec 1, 2008 at 6:20 AM, Ben Goertzel [EMAIL PROTECTED] wrote: On Sun, Nov 30, 2008 at 11:48 PM, Hector Zenil [EMAIL PROTECTED] wrote: On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote: But I don't get your point at all, because the

Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Charles Hixson
A response to: "I wondered why anyone would deface the expression of his own thoughts with an emotional and hostile message." My theory is that thoughts are generated internally and forced into words via a babble generator. Then the thoughts are filtered through a screen to remove any that

Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-29 Thread Charles Hixson
A general approach to this that frequently works is to examine the definitions that you are using for ambiguity, and then to look for operational tests. If the only clear meanings lack operational tests, then it's probably worthless to waste computing resources on the problem until those

Re: [agi] If aliens are monitoring us, our development of AGI might concern them

2008-11-29 Thread Charles Hixson
Well. The speed of light limitation seems rather secure. So I would propose that we have been visited by roboticized probes, rather than by naturally evolved creatures. And the energetic constraints make it seem likely that they were extremely small and infrequent...though I suppose that

Re: [agi] Re: JAGI submission

2008-11-29 Thread Charles Hixson
Matt Mahoney wrote: --- On Tue, 11/25/08, Eliezer Yudkowsky [EMAIL PROTECTED] wrote: Shane Legg, I don't mean to be harsh, but your attempt to link Kolmogorov complexity to intelligence is causing brain damage among impressionable youths. ( Link debunked here:

Re: [agi] Cog Sci Experiment

2008-11-22 Thread Charles Hixson
Acilio Mendes wrote: My question is: how do they know your vegetable association? ... Try this experiment: repeat the same procedure from the video, but instead of asking for a vegetable, ask for 'an animal that lives in the jungle'. Most people will answer 'Lion' even though lions don't

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-15 Thread Charles Hixson
Robert Swaine wrote: Consciousness is akin to the phlogiston theory in chemistry. It is likely a shadow concept, similar to how the bodily reactions make us feel that the heart is the seat of emotions. Gladly, cardiologists and heart surgeons do not look for a spirit, a soul, or kindness in

Re: [agi] constructivist issues

2008-11-03 Thread Charles Hixson
, then in my opinion we've made important headway. I think I found the logics you're referring to? Looks *very* interesting. http://en.wikipedia.org/wiki/Self-verifying_theories --Abram On Fri, Oct 31, 2008 at 2:26 AM, Charles Hixson [EMAIL PROTECTED] wrote: It all depends on what definition

Re: [agi] constructivist issues

2008-10-31 Thread Charles Hixson
is that it is important to examine the simplifications and abstractions, and discover how they work, so that we can ease computation in our implementations. --Abram On Thu, Oct 30, 2008 at 7:58 PM, Charles Hixson [EMAIL PROTECTED] wrote: If you were talking about something actual, then you

Re: [agi] constructivist issues

2008-10-30 Thread Charles Hixson
Abram Demski wrote: Charles, Interesting point-- but, all of these theories would be weaker than the standard axioms, and so there would be *even more* about numbers left undefined in them. --Abram On Tue, Oct 28, 2008 at 10:46 PM, Charles Hixson [EMAIL PROTECTED] wrote: Excuse me, but I

Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Charles Hixson
If not verify, what about falsify? To me, Occam's Razor has always been a tool for selecting the first hypothesis to attempt to falsify. If you can't, or haven't, falsified it, then it's usually the best assumption to go on (presuming that the costs of failing are evenly distributed).
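
A minimal sketch of this reading of the razor, assuming hypotheses can be ranked by some complexity proxy and checked against observations; all names here are illustrative, not from the thread:

```python
def simplest_surviving_hypothesis(hypotheses, observations, complexity):
    """Occam's Razor as a test-ordering policy: attempt to falsify the
    simplest hypothesis first; return the first one that survives."""
    for h in sorted(hypotheses, key=complexity):
        if all(h(obs) for obs in observations):  # not falsified by the data
            return h
    return None  # every candidate was falsified

# Toy usage: hypotheses are predicates over observed (x, y) pairs, and
# complexity is crudely approximated by compiled code size.
hypotheses = [
    lambda p: p[1] == p[0],        # y = x
    lambda p: p[1] == p[0] ** 2,   # y = x^2
]
data = [(0, 0), (1, 1), (2, 4)]
best = simplest_surviving_hypothesis(
    hypotheses, data, complexity=lambda h: len(h.__code__.co_code))
```

Under this reading the razor is a search-ordering policy rather than a truth claim: the simplest hypothesis is merely the cheapest one to try to knock down first.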

Re: [agi] constructivist issues

2008-10-28 Thread Charles Hixson
Excuse me, but I thought there were subsets of number theory which were strong enough to contain all the integers, and perhaps all the rationals, but which weren't strong enough for Gödel's incompleteness theorem to apply. I seem to remember, though, that you can't get more than a finite number
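
For reference, the theories half-remembered here do exist; the precise statements (textbook facts, not claims from the thread) are:

```latex
% Presburger arithmetic, Th(\mathbb{N}, +, \le), is consistent, complete,
% and decidable: Gödel's incompleteness theorem does not apply to it.
%
% Tarski: the theory of real closed fields, Th(\mathbb{R}, +, \times, \le),
% is likewise complete and decidable; the integers are not definable in it.
%
% By contrast, any consistent, recursively axiomatizable theory T that
% interprets Robinson arithmetic Q (addition and multiplication together)
% is incomplete:
\exists\, G_T:\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T.
```

Dropping multiplication (Presburger) keeps all the integers while escaping incompleteness, which matches the recollection.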

Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-26 Thread Charles Hixson
Matt Mahoney wrote: --- On Sun, 10/26/08, Mike Tintner [EMAIL PROTECTED] wrote: So what's the connection according to you between viruses and illness/disease, heating water and boiling, force applied to object and acceleration of object? Observing illness causes me to believe a virus

Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Charles Hixson
Dr. Matthias Heger wrote: The goal of chess is well defined: avoid being checkmated and try to checkmate your opponent. What checkmate means can be specified formally. Humans mainly learn chess from playing chess. Obviously their knowledge about other domains is not sufficient for most

Re: [agi] constructivist issues

2008-10-21 Thread Charles Hixson
Abram Demski wrote: Ben, ... One reasonable way of avoiding the "humans are magic" explanation of this (or "humans use quantum gravity computing", etc.) is to say that, OK, humans really are an approximation of an ideal intelligence obeying those assumptions. Therefore, we cannot understand the math

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Charles Hixson
identical processing power and storage space, then the winner will be the one that was able to assimilate and model each problem space the most efficiently, on average. Which ultimately means the one which used the *least* amount of overall computation. Terren --- On Tue, 10/14/08, Charles Hixson

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Charles Hixson
If you want to argue this way (reasonable), then you need a specific definition of intelligence. One that allows it to be accurately measured (and not just in principle). IQ definitely won't serve. Neither will g. Neither will GPA (if you're discussing a student). Because of this, while I

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Charles Hixson
Ben Goertzel wrote: Jim, I really don't have time for a long debate on the historical psychology of scientists... To give some random examples though: Newton, Leibniz and Gauss were certainly obnoxious, egomaniacal pains in the ass though ... Edward Teller ... Goethe, whose stubbornness

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread Charles Hixson
Dr. Matthias Heger wrote: *Ben G wrote:* Well, for the purpose of creating the first human-level AGI, it seems important *to* wire in humanlike bias about space and time ... this will greatly ease the task of teaching the system to use our language and communicate with us effectively...

Re: [agi] COMP = false

2008-10-06 Thread Charles Hixson
Ben Goertzel wrote: On Sun, Oct 5, 2008 at 7:41 PM, Abram Demski [EMAIL PROTECTED] wrote: Ben, I have heard the argument for point 2 before, in the book by Pinker, How the Mind Works. It is the inverse-optics problem: physics can predict what image

Re: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-06 Thread Charles Hixson
Mike Tintner wrote: Ben: I didn't read that book but I've read dozens of his papers ... it's cool stuff but does not convince me that engineering AGI is impossible ... however when I debated this with Stu F2F I'd say neither of us convinced each other ;-) ... Ben, His argument (like mine),

Re: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-06 Thread Charles Hixson
Abram Demski wrote: Charles, Again as someone who knows a thing or two about this particular realm... Math clearly states that to derive all the possible truths from a numeric system as strong as number theory requires an infinite number of axioms. Yep. I.e., choices. This is

Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Charles Hixson
I would go further. Humans have demonstrated that they cannot be trusted in the long term even with the capabilities that we already possess. We are too likely to have ego-centric rulers who make decisions not only for their own short-term benefit, but with an explicit "After me, the deluge"

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Charles Hixson
Dawkins tends to see a truth, and then overstate it. What he says isn't usually exactly wrong, so much as one-sided. This may be an exception. Some meanings of group selection don't appear to map onto reality. Others map very weakly. Some have reasonable explanatory power. If you don't

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Charles Hixson
Matt Mahoney wrote: An AGI will not design its goals. It is up to humans to define the goals of an AGI, so that it will do what we want it to do. Are you certain that this is the optimal approach? To me it seems more promising to design the motives, and to allow the AGI to design its own

Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Charles Hixson
to pass the Turing test (or win either of the two, one-time-only, Loebner Prizes) is a waste of precious time and intellectual resources. Thought experiments? No problem. Discussing ideas? No problem. Human-like AGI? Big problem. Cheers, Brad Charles Hixson wrote: Play is a form

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Charles Hixson
Play is a form of strategy testing in an environment that doesn't severely penalize failures. As such, every AGI will necessarily spend a lot of time playing. If you have some other particular definition, then perhaps I could understand your response if you were to define the term. OTOH, if
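
A toy sketch of this reading (an interpretation of the post, not something it specifies): "play" is sampling strategies where failure carries no real penalty, after which the best-scoring strategy is used where failure matters. The strategies and payoffs below are invented.

```python
import random

def play(strategies, trials=200):
    """'Play': test strategies in a sandbox where failure is cheap,
    keeping a running average payoff for each. Purely illustrative."""
    totals = {s: [0.0, 0] for s in strategies}
    for _ in range(trials):
        s = random.choice(strategies)  # explore freely; mistakes are free
        totals[s][0] += s()            # noisy payoff, no real-world cost
        totals[s][1] += 1
    # 'Performance' would then exploit the strategy with the best average.
    return max(strategies, key=lambda s: totals[s][0] / max(totals[s][1], 1))

# Toy strategies with noisy payoffs.
strategies = [lambda: random.gauss(0.2, 1.0), lambda: random.gauss(0.5, 1.0)]
best = play(strategies)
```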

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Charles Hixson
Jonathan El-Bizri wrote: On Mon, Aug 25, 2008 at 2:26 PM, Terren Suydam [EMAIL PROTECTED] wrote: If an AGI played because it recognized that it would improve its skills in some domain, then I wouldn't call that play, I'd call it practice. Those are

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-18 Thread Charles Hixson
This is probably quibbling over a definition, but: Jim Bromer wrote: On Sat, Aug 9, 2008 at 5:35 PM, Charles Hixson [EMAIL PROTECTED] wrote: Jim Bromer wrote: As far as I can tell, the idea of making statistical calculations about what we don't know is only relevant for three conditions

Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Charles Hixson
Brad Paulsen wrote: ... Sigh. Your point of view is heavily biased by the unspoken assumption that AGI must be Turing-indistinguishable from humans. That it must be AGHI. This is not necessarily a bad idea, it's just the wrong idea given our (lack of) understanding of general intelligence.

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-09 Thread Charles Hixson
Jim Bromer wrote: In most situations this is further limited because one CAN'T know all of the consequences. So one makes probability calculations weighting things not only by probability of occurrence, but also by importance. So different individuals disagree not only on the definition of
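
Writing out the weighting being described in standard expected-value form (notation mine, not the thread's), with outcomes $o_i$, probabilities $p$, and an importance term $u$:

```latex
W(a) \;=\; \sum_i p(o_i \mid a)\, u(o_i)
```

Disagreement between individuals can then enter through either factor: different probability estimates over what isn't known, or different importance weights, even when the definitions are shared.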

Re: [agi] Groundless (AND fuzzy) reasoning - in one

2008-08-09 Thread Charles Hixson
Brad Paulsen wrote: Mike Tintner wrote: That illusion is partly the price of using language, which fragments into pieces what is actually a continuous common sense, integrated response to the world. Mike, Excellent observation. I've said it many times before: language is analog human

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-08 Thread Charles Hixson
Jim Bromer wrote: On Thu, Aug 7, 2008 at 3:53 PM, Charles Hixson [EMAIL PROTECTED] wrote: At this point I think it relevant to bring in an assertion from Larry Niven (Protector): Paraphrase: When you understand all the consequences of an act, then you don't have free will. You must choose

FWIW: Re: [agi] Groundless reasoning

2008-08-07 Thread Charles Hixson
Brad Paulsen wrote: ... Nope. Wrong again. At least you're consistent. That line actually comes from a Cheech and Chong skit (or a movie -- can't remember which at the moment) where the guys are trying to get information by posing as cops. At least I think that's the setup. When the

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Charles Hixson
Jim Bromer wrote: ... I mostly agree with your point of view, and I am not actually saying that your technical statements are wrong. I am trying to explain that there is something more to be learned. The apparent paradox can be reduced to the never ending deterministic vs free will argument.

Re: [agi] META: do we need a stronger politeness code on this list?

2008-08-03 Thread Charles Hixson
Vladimir Nesov wrote: On Sun, Aug 3, 2008 at 7:47 AM, Ben Goertzel [EMAIL PROTECTED] wrote: I think Ed's email was a bit harsh, but not as harsh as many of Richard's (which are frequently full of language like "fools", "rubbish" and so forth ...). Some of your emails have been pretty harsh in

P.S.; Re: [agi] META: do we need a stronger politeness code on this list?

2008-08-03 Thread Charles Hixson
Charles Hixson wrote: Vladimir Nesov wrote: On Sun, Aug 3, 2008 at 7:47 AM, Ben Goertzel [EMAIL PROTECTED] wrote: I think Ed's email was a bit harsh, but not as harsh as many of Richard's (which are frequently full of language like "fools", "rubbish" and so forth ...). Some of your emails

Re: [agi] How do we know we don't know?

2008-07-29 Thread Charles Hixson
On Tuesday 29 July 2008 03:08:55 am Valentina Poletti wrote: lol.. well said richard. the stimulus simply invokes no significant response and thus our brain concludes that we 'don't know'. that's why it takes no effort to realize it. agi algorithms should be built in a similar way, rather than

Re: [agi] How do we know we don't know?

2008-07-29 Thread Charles Hixson
On Tuesday 29 July 2008 04:12:27 pm Brad Paulsen wrote: Richard Loosemore wrote: Brad Paulsen wrote: All, Here's a question for you: What does "fomlepung" mean? If your immediate (mental) response was "I don't know," it means you're not a slang-slinging Norwegian. But, how did

Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Charles Hixson
On Monday 28 July 2008 07:04:01 am YKY (Yan King Yin) wrote: Here is an example of a problematic inference: 1. Mary has cybersex with many different partners 2. Cybersex is a kind of sex 3. Therefore, Mary has many sex partners 4. Having many sex partners -> high chance of getting STDs
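
A minimal sketch of why the chain misfires under naive fuzzy chaining, assuming strengths multiply along the chain; every number below is invented for illustration. The error is treating the "is a kind of" link in step 2 as strength 1.0 for the purposes of step 4:

```python
def chain_strength(*link_strengths):
    """Naive fuzzy chaining: the conclusion can be no stronger than the
    product of the links it passes through."""
    result = 1.0
    for s in link_strengths:
        result *= s
    return result

# Invented strengths for the thread's example. Cybersex is a kind of sex
# in general, but the link is weak for the purpose of STD risk.
mary_has_many_cyber_partners = 0.95
cybersex_is_sex_for_std_purposes = 0.05  # the link a naive system treats as 1.0
many_partners_implies_std_risk = 0.8

risk = chain_strength(mary_has_many_cyber_partners,
                      cybersex_is_sex_for_std_purposes,
                      many_partners_implies_std_risk)
# risk ~= 0.038: the conclusion collapses once the 'kind of' link is
# qualified by context, instead of carrying full strength down the chain.
```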

Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Charles Hixson
On Monday 28 July 2008 09:30:08 am YKY (Yan King Yin) wrote: On 7/29/08, Charles Hixson [EMAIL PROTECTED] wrote: There's nothing wrong with the logical argument. What's wrong is that you are presuming a purely declarative logic approach can work...which it can in extremely simple

Re: [agi] Re: Theoretic estimation of reliability vs experimental

2008-07-03 Thread Charles Hixson
On Thursday 03 July 2008 11:14:15 am Vladimir Nesov wrote: On Thu, Jul 3, 2008 at 9:36 PM, William Pearson [EMAIL PROTECTED] wrote:... I know this doesn't have the properties you would look for in a friendly AI set to dominate the world. But I think it is similar to the way humans work,

Re: [agi] A point of philosophy, rather than engineering

2002-11-12 Thread Charles Hixson
and valuations. I have a project which I am aiming at that area, but it is barely getting started. -- Ben -- Charles Hixson Gnu software that is free

Re: [agi] A point of philosophy, rather than engineering

2002-11-11 Thread Charles Hixson
The problem with a truly general intelligence is that the search spaces are too large. So one uses specializing heuristics to cut down the amount of search space. This does, however, inevitably remove a piece of the generality. The benefit is that you can answer more complicated questions
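
A minimal illustration of that tradeoff, under assumed successors and score functions (both hypothetical): beam search keeps only the k best partial solutions at each level, which tames the search space but permanently discards the pruned branches, and the pruned branches are exactly the lost generality.

```python
import heapq

def beam_search(start, successors, score, k=3, depth=5):
    """Specializing heuristic: keep only the k highest-scoring candidates
    at each level. Tractable, but any solution outside the beam is lost."""
    beam = [start]
    for _ in range(depth):
        candidates = [c for state in beam for c in successors(state)]
        if not candidates:
            break
        beam = heapq.nlargest(k, candidates, key=score)
    return max(beam, key=score)
```

Raising k buys back generality at a linear cost per level; letting k grow without bound recovers exhaustive breadth-first search.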