Re: [agi] Google aims for AGI (purportedly).. Performance is what counts..

2006-01-14 Thread Charles D Hixson
It probably wouldn't do very well at such a test...no better than Delphi, say. OTOH, how well do human experts in the field do on such tests? The question might be how it could be developed from a sophisticated Delphi+filter system. The answer to that isn't obvious, and probably isn't singular.

Re: [agi] Performance is what counts...Financial Economics

2006-01-18 Thread Charles D Hixson
On Sunday 15 January 2006 04:41 am, [EMAIL PROTECTED] wrote: Searching is a part of AI... But it is not deep logic like Chess... Is IBM Deep Blue just a lookup machine, or really perceiving and reasoning logically, with an output of action: the next move? Deep Blue, the Chess Expert, was purely an

Re: [agi] Vision and Physical World Model

2006-02-14 Thread Charles D Hixson
A model-based approach is necessary, but not sufficient. Equally important will be using parallax to divide the visual field into objects which move together. This can be done with one camera, by oscillating its position, though two cameras add significantly. Three allow for better
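
A minimal sketch of the grouping step described above, assuming a per-pixel disparity map has already been estimated from the two camera positions (the array values and the binning threshold here are illustrative, synthetic assumptions):

import numpy as np

# Synthetic per-pixel disparity between two camera positions.
h, w = 60, 80
disparity = np.zeros((h, w))      # background: negligible parallax
disparity[10:30, 15:35] = 4.0     # near object: large parallax
disparity[35:55, 45:70] = 1.5     # farther object: smaller parallax

# Pixels with similar disparity "move together" and are grouped
# into candidate objects by quantizing into coarse bins.
bins = np.round(disparity / 0.5).astype(int)
groups = sorted(set(bins.ravel()))
segments = np.searchsorted(groups, bins)   # integer object label per pixel

print("distinct motion groups:", len(groups))   # -> 3
print("labels present:", np.unique(segments))   # -> [0 1 2]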

Re: [agi] Timing of Human-Level AGI [was: Joint Stewardship of Earth]

2006-05-06 Thread Charles D Hixson
Ben Goertzel wrote: Hmmm The inimitable Mentifex wrote: http://www.blogcharm.com/Singularity/25603/Timetable.html 2006 -- True AI 2007 -- AI Landrush 2009 -- Human-Level AI 2011 -- Cybernetic Economy 2012 -- Superintelligent AI 2012 -- Joint Stewardship of Earth 2012 --

Re: [agi] Logic and Knowledge Representation

2006-05-07 Thread Charles D Hixson
John Scanlon wrote: Is anyone interested in discussing the use of formal logic as the foundation for knowledge representation schemes for AI? It's a common approach, but I think it's the wrong path. Even if you add probability or fuzzy logic, it's still insufficient for true intelligence.

Re: [agi] procedural vs declarative knowledge

2006-06-02 Thread Charles D Hixson
Mike Dougherty wrote: On 6/2/06, Charles D Hixson [EMAIL PROTECTED] wrote: Rule of thumb: First get it working, doing what you want. Then optimize. When optimizing, first check your algorithms, then check to see where time is actually spent
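
The rule of thumb above, sketched with Python's standard-library profiler; the functions are illustrative stand-ins for real hot spots:

import cProfile

def slow_square_sum(n):
    return sum(i * i for i in range(n))

def main():
    for _ in range(100):
        slow_square_sum(10_000)

# Measure first; optimize only the functions that dominate the report.
cProfile.run("main()", sort="cumulative")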

Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

2006-06-07 Thread Charles D Hixson
Mark Waser wrote: What was your operational definition of friendliness, again? My personal operational definition of friendliness is simply what my current self would be willing to see implemented as the highest level goal of an AGI. Obviously, that includes being robust enough that it

Re: [agi] Four axioms (Was Two draft papers: AI and existential risk; heuristics and biases)

2006-06-08 Thread Charles D Hixson
Mark Waser wrote: .. The first thing that is necessary is to define your goals. It is my contention that there is no good and no bad (or evil) except in the context of a goal and that those who believe that there is some absolute morality out there have been fooled by the unconscious

Re: [agi] Friendly AI in an unfriendly world... AI to the future societies.... Four axioms (WAS Two draft papers . . . .)

2006-06-10 Thread Charles D Hixson
[EMAIL PROTECTED] wrote: If your AI was operating on the web it might find itself at a severe disadvantage with all of those con artists... Your AI might lose badly... Friendly does not equal trusting. It does not equal stupid. It does not equal not being willing to learn from the

Re: [agi] Processing speed for core intelligence in human brain

2006-07-14 Thread Charles D Hixson
Try calculating instead the incoming bits/second stored...now calculate the required storage space. When you do that the computer starts looking much less competitive...today. Calculate the space required to store, without definitions or attached meanings, all the words in the English language.
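
A back-of-envelope version of the calculation being suggested; the input rate, stored fraction, and vocabulary size are illustrative assumptions, not measurements:

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# Assume roughly 10 Mbit/s of raw sensory input, 1% of it stored.
incoming_bits_per_sec = 10_000_000
stored_fraction = 0.01
stored_bytes_per_year = (incoming_bits_per_sec * stored_fraction
                         * SECONDS_PER_YEAR / 8)
print(f"stored per year: {stored_bytes_per_year / 1e12:.2f} TB")  # ~0.39 TB

# A bare English word list, by contrast, is tiny.
words, avg_chars = 500_000, 8
print(f"word list alone: {words * avg_chars / 1e6:.0f} MB")       # 4 MB

Even at these modest assumed rates the store grows by hundreds of gigabytes a year, while the bare word list, without definitions or attached meanings, fits in a few megabytes.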

Re: [agi] fuzzy logic necessary?

2006-08-05 Thread Charles D Hixson
Yan King Yin wrote: ... 2. If you think your method is better, the mechanism underlying your rule might be more complex than predicate logic. That's kind of strange. YKY Not strange at all. The brain had a long evolutionary history before language was ever created. Languages are attempts

Re: [agi] Re: strong and weakly self improving processes

2006-08-07 Thread Charles D Hixson
Eric Baum wrote: Eric Baum wrote: even if there would be some way to keep modifying the top level to make it better, one could presumably achieve just as powerful an ultimate intelligence by keeping it fixed and adding more powerful lower levels (or maybe better yet, middle levels) or more

Re: [agi] Re: Applicability of NP-hard concept

2006-08-07 Thread Charles D Hixson
Eric Baum wrote: My apologies for the delay in responding. I was busy... but I think there is a lot of confusion on the list about NP-hardness still, so here goes another attempt. I'm taking a portion from a different thread and changing the subject, Eliezer, when I get time I'll try to respond a bit more

Re: [agi] Re: Applicability of NP-hard concept

2006-08-07 Thread Charles D Hixson
Charles D Hixson wrote: ...I think the mistake here is presuming that intelligence is some particular set of tools that can solve everything. It is my belief that OTOH intelligence is a framework into which can be slotted a (perhaps) almost infinite set of tools. Most of them will be special

Re: [agi] confirmation paradox

2006-08-10 Thread Charles D Hixson
Yan King Yin wrote: ... To avoid confusion we can fix it that the probability/NTV associated with a sentence is always interpreted as the (subjective) probability of that sentence being true. So p( all ravens are black ) will become 0 whenever a single nonblack raven is found. If, from
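
A sketch of the update being described, under a deliberately simple Bayesian model; the prior and the 0.9 likelihood are illustrative assumptions:

def update(p_all_black, raven_is_black):
    if not raven_is_black:
        return 0.0   # one non-black raven falsifies the universal claim
    # A black raven is certain if the hypothesis holds, and assumed 90%
    # likely otherwise, so each confirming instance raises p slightly.
    p_evidence = p_all_black * 1.0 + (1 - p_all_black) * 0.9
    return p_all_black / p_evidence

p = 0.5
for black in [True, True, True, False]:
    p = update(p, black)
    print(f"after a {'black' if black else 'non-black'} raven: p = {p:.3f}")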

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-13 Thread Charles D Hixson
Mark Waser wrote: Hi all, I think that a few important points have been lost or misconstrued in most of this discussion. First off, there is a HUGE difference between the compression of knowledge and the compression of strings. The strings Ben is human., Ben is a member of the

Re: [agi] Lossy ** lossless compression

2006-08-27 Thread Charles D Hixson
Matt Mahoney wrote: Mark, I didn't get your attachment, the program that tells me if an arbitrary text string is in canonical form or not. Actually, if it will make it any easier, I really only need to know if a string is a canonical representation of Wikipedia. Oh, wait... there can only

Re: [agi] AGI open source license

2006-08-28 Thread Charles D Hixson
Stephen Reed wrote: I would appreciate comments regarding additional constraints, if any, that should be applied to a traditional open source license to achieve a free but safe widespread distribution of software that may lead to AGI. ... My personal opinion is that the best license is the

Re: [agi] AGI open source license

2006-08-30 Thread Charles D Hixson
Philip Goetz wrote: On 8/28/06, Stephen Reed [EMAIL PROTECTED] wrote: An assumption that some may challenge is that AGI s... source license retain these benefits yet be safe? I would rather see a license which made the software free for non-commercial use, but (unlike the GNU licenses)

Re: [agi] AGI open source license

2006-09-01 Thread Charles D Hixson
Stephen Reed wrote: ... Rather than cash payments I have in mind a scheme similar to the pre-world wide web bulletin board system in which FTP sites had upload and download ratios. If you wished to benefit from the site by downloading, you had to maintain a certain level of contributions via
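
A sketch of the contribution-ratio gate being described; the threshold is an illustrative assumption:

REQUIRED_RATIO = 0.2   # at least 1 unit uploaded per 5 downloaded

def may_download(uploaded_mb, downloaded_mb):
    if downloaded_mb == 0:
        return True    # new users start with no obligation
    return uploaded_mb / downloaded_mb >= REQUIRED_RATIO

print(may_download(uploaded_mb=10, downloaded_mb=40))    # True
print(may_download(uploaded_mb=10, downloaded_mb=100))   # False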

Re: [agi] AGI open source license

2006-09-04 Thread Charles D Hixson
Philip Goetz wrote: On 8/30/06, Charles D Hixson [EMAIL PROTECTED] wrote: ... some snipping ... - Phil The idea with the GPL is that if you want to also sell the program commercially, you should additionally make it available under an alternate license. Some companies have been successful

Re: [agi] AGI open source license

2006-09-05 Thread Charles D Hixson
Philip Goetz wrote: ... Those companies don't make money off the software. They sell products and services. The GPL is not successful at enabling people to make money directly off software. This is critical, because it takes a large company and a large capital investment to make money selling

Re: [agi] Why so few AGI projects?

2006-09-13 Thread Charles D Hixson
Joshua Fox wrote: I'd like to raise a FAQ: Why is so little AGI research and development being done? ... Thanks, Joshua What proportion of the work that is being done do you believe you are aware of? On what basis? My suspicion is that most people on the track of something new tend to be

Re: [agi] Is a robot a Turing Machine?

2006-10-02 Thread Charles D Hixson
Pei Wang wrote: We all know that, in a sense, every computer system (hardware plus software) can be abstractly described as a Turing machine. Can we say the same for every robot? Why? Reference to previous publications are also welcome. Pei The controller for the robot might be a Turing

Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread Charles D Hixson
John Scanlon wrote: Ben, I did read your stuff on Lojban++, and it's the sort of language I'm talking about. This kind of language lets the computer and the user meet halfway. The computer can parse the language like any other computer language, but the terms and constructions are

Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread Charles D Hixson
BillK wrote: On 11/1/06, Charles D Hixson wrote: So. Lojban++ might be a good language for humans to communicate to an AI with, but it would be a lousy language in which to implement that same AI. But even for this purpose the language needs a verifier to ensure that the correct forms

Re: [agi] Natural versus formal AI interface languages

2006-11-05 Thread Charles D Hixson
Richard Loosemore wrote: ... This is a question directed at this whole thread, about simplifying language to communicate with an AI system, so we can at least get something working, and then go from there This rationale is the very same rationale that drove researchers into Blocks World

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Charles D Hixson
Ben Goertzel wrote: ... On the other hand, the notions of intelligence and understanding and so forth being bandied about on this list obviously ARE intended to capture essential aspects of the commonsense notions that share the same word with them. ... Ben Given that purpose, I propose the

Re: [agi] A question on the symbol-system hypothesis

2006-11-18 Thread Charles D Hixson
a knowledge test. That's not what I mean. Maybe we could extract simple facts from wiki, and start creating a test there, then add in more complicated things. James Charles D Hixson [EMAIL PROTECTED] wrote: Ben Goertzel wrote: ... On the other hand, the notions of intelligence

Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Charles D Hixson
learns what a normal state of being is, and detect deviations. On 21/11/06, Charles D Hixson [EMAIL PROTECTED] wrote: Bob Mottram wrote: On 17/11/06, Charles D Hixson [EMAIL PROTECTED]

Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Charles D Hixson
Mark Waser wrote: Hi Bill, ... If storage and access are the concern, your own argument says that a sufficiently enhanced human can understand anything and I am at a loss as to why an above-average human with a computer and computer skills can't be considered nearly indefinitely enhanced.

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Charles D Hixson
Mark Waser wrote: ... For me, yes, all of those things are good since they are on my list of goals *unless* the method of accomplishing them steps on a higher goal OR a collection of goals with greater total weight OR violates one of my limitations (restrictions). ... If you put every good

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Charles D Hixson
James Ratcliff wrote: There is a needed distinction that must be made here about hunger as a goal-stack motivator. We CANNOT change the hunger sensation (short of physical manipulations, or mind-control stuff), as it is a given sensation that comes directly from the physical body. What

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Charles D Hixson
you think we should leave it up to a single Controller to interpret the signals coming from the body and form the goals. In humans it looks to be the one way, but with AGIs it appears it would/could be another. James Charles D Hixson [EMAIL PROTECTED] wrote: J... Goals

Re: [agi] The Singularity

2006-12-05 Thread Charles D Hixson
Ben Goertzel wrote: ... According to my understanding of the Novamente design and artificial developmental psychology, the breakthrough from slow to fast incremental progress will occur when the AGI system reaches Piaget's formal stage of development:

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Charles D Hixson
BillK wrote: ... Every time someone (subconsciously) decides to do something, their brain presents a list of reasons to go ahead. The reasons against are ignored, or weighted down to be less preferred. This applies to everything from deciding to get a new job to deciding to sleep with your best

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Charles D Hixson
BillK wrote: On 12/5/06, Charles D Hixson wrote: BillK wrote: ... No time inversion intended. What I intended to say was that most (all?) decisions are made subconsciously before the conscious mind starts its reason / excuse generation process. The conscious mind pretending to weigh

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-13 Thread Charles D Hixson
Philip Goetz wrote: ... The disagreement here is a side-effect of postmodern thought. Matt is using evolution as the opposite of devolution, whereas Eric seems to be using it as meaning change, of any kind, via natural selection. We have difficulty because people with political agendas -

Re: [agi] Project proposal: MindPixel 2

2007-01-17 Thread Charles D Hixson
Joel Pitt wrote: ... Some comments/suggestions: * I think such a project should make the data public domain. Ignore silly ideas like giving out shares in the knowledge or whatever. It just complicates things. If the project is really strapped for cash later, then either use ad revenue or look

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Charles D Hixson
YKY (Yan King Yin) wrote: ... I think a project like this one requires substantial efforts, so people would need to be paid to do some of the work (programming, interface design, etc), especially if we want to build a high quality knowledgebase. If we make it free then a likely outcome is

Re: [agi] Project proposal: MindPixel 2

2007-01-20 Thread Charles D Hixson
Benjamin Goertzel wrote: And, importance levels need to be context-dependent, so that assigning them requires sophisticated inference in itself... The problem may not be so serious. Common sense reasoning may require only *shallow* inference chains, eg 5 applications of rules. So I'm
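
A sketch of what a shallow inference chain looks like: forward chaining over simple if-then rules, cut off after five applications. The facts and rules are illustrative:

facts = {"kermit is a frog"}
rules = [
    ("kermit is a frog", "kermit is an amphibian"),
    ("kermit is an amphibian", "kermit is an animal"),
    ("kermit is an animal", "kermit is alive"),
]

MAX_DEPTH = 5   # the shallow-chain bound discussed above
for _ in range(MAX_DEPTH):
    new = {concl for prem, concl in rules if prem in facts} - facts
    if not new:
        break        # quiescence reached before hitting the bound
    facts |= new

print(sorted(facts))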

Re: [agi] Project proposal: MindPixel 2

2007-01-20 Thread Charles D Hixson
Benjamin Goertzel wrote: Hi, Possibly this could be approached by partitioning the rule-set into small chunks of rules that work together, so that one didn't end up trying everything against everything else. These chunks of rules might well be context dependent, so that one would use

Re: [agi] Project proposal: MindPixel 2

2007-01-26 Thread Charles D Hixson
Philip Goetz wrote: On 1/17/07, Charles D Hixson [EMAIL PROTECTED] wrote: It's fine to talk about making the data public domain, but that's not a good idea. Why not? Because public domain offers NO protection. If you want something close to what public domain used to provide, then the MIT

Re: [agi] foundations of probability theory

2007-01-28 Thread Charles D Hixson
gts wrote: Hi Ben, On Extropy-chat, you and I and others were discussing the foundations of probability theory, in particular the philosophical controversy surrounding the so-called Principle of Indifference. Probability theory is of course relevant to AGI because of its bearing on decision

Re: [agi] Relevance of Probability

2007-02-04 Thread Charles D Hixson
Richard Loosemore wrote: ... [ASIDE. An example of this. The system is trying to answer the question "Are all ravens black?", but it does not just look to its collected data about ravens (partly represented by the vector of numbers inside the raven concept, which are vaguely related to the

Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Charles D Hixson
Chuck Esterbrook wrote: On 2/18/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: Mark Waser wrote: ... I find C++ overly complex while simultaneously lacking well known productivity boosters including: * garbage collection * language level bounds checking * contracts * reflection /

Re: [agi] general weak ai

2007-03-09 Thread Charles D Hixson
Russell Wallace wrote: On 3/9/07, Charles D Hixson [EMAIL PROTECTED] wrote: Russell Wallace wrote: To test whether a program understands a story, start by having it generate an animated movie of the story

Re: [agi] My proposal for an AGI agenda

2007-03-18 Thread Charles D Hixson
Russell Wallace wrote: On 3/13/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: But the bottom line problem for using FOPC (or whatever) to represent the world is not that it's computationally incapable of it -- it's Turing complete, after all -- but

Re: [agi] My proposal for an AGI agenda

2007-03-18 Thread Charles D Hixson
Russell Wallace wrote: On 3/18/07, Charles D Hixson [EMAIL PROTECTED] wrote: Perhaps it would be best to have, say, four different formats for different classes of problems (with the understanding that most problems are mixed). E.g., some classes

Re: [agi] My proposal for an AGI agenda

2007-03-20 Thread Charles D Hixson
rooftop8000 wrote: ... I think we should somehow allow people to use all the program languages they want. That "somehow" is the big problem. Most approaches to dealing with it are...lamentable. ... You can use closed modules if you have meta-information on how to use them and what they do.

Re: [agi] My proposal for an AGI agenda

2007-03-22 Thread Charles D Hixson
Chuck Esterbrook wrote: On 3/20/07, Charles D Hixson [EMAIL PROTECTED] wrote: rooftop8000 wrote: ... I think we should somehow allow people to use all the program languages they want. That "somehow" is the big problem. Most approaches to dealing with it are...lamentable. ... You can use

Re: [agi] My proposal for an AGI agenda

2007-03-22 Thread Charles D Hixson
Chuck Esterbrook wrote: On 3/22/07, Charles D Hixson [EMAIL PROTECTED] wrote: Unfortunately, MS is claiming undefined things as being proprietary. As such, I intend to stay totally clear of implementations of its protocols. Including Mono. I am considering the JVM, however, as Sun has now freed

Re: SV: [agi] mouse uploading

2007-04-29 Thread Charles D Hixson
I think someone at UCLA did something similar for lobsters. This was used as material for an SF story (Lobsters, by Charles Stross). Jan Mattsson wrote: Has this approach been successful for any lesser animals? E.g., has anyone simulated an insect brain system connected to a simulated

Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-30 Thread Charles D Hixson
Stripping away a lot of your point here, I just want to point out how many jokes are memorized fragments. A large part of what is going on here is using a large database. I'm not disparaging your point about pattern matching being necessary, but one normally pattern matches and returns a

Re: [agi] rule-based NL system

2007-05-02 Thread Charles D Hixson
Mark Waser wrote: What is meaning to a computer? Some people would say that no machine can know the meaning of text because only humans can understand language. Nope. I am *NOT* willing to do the Searle thing. Machines will know the meaning of text (i.e. understand it) when they have a

Re: [agi] What would motivate you to put work into an AGI project?

2007-05-04 Thread Charles D Hixson
What would motivate you to put work into an AGI project? 1) A reasonable point of entry into the project 2) The project would need to be FOSS, or at least communally owned. (FOSS for preference.) I've had a few bad experiences where the project leader ended up taking everything, and don't

Re: [agi] rule-based NL system

2007-05-04 Thread Charles D Hixson
J. Storrs Hall, PhD. wrote: On Wednesday 02 May 2007 15:08, Charles D Hixson wrote: Mark Waser wrote: ... Machines will know the meaning of text (i.e. understand it) when they have a coherent world model that they ground their usage of text in. ... But note that in this case

Re: [agi] Books

2007-06-09 Thread Charles D Hixson
Mark Waser wrote: The problem of logical reasoning in natural language is a pattern recognition problem (like natural language recognition in general). For example: - Frogs are green. Kermit is a frog. Therefore Kermit is green. - Cities have tall buildings. New York is a city.
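
A sketch of treating such syllogisms as pattern recognition over strings rather than formal deduction; the regular expression and the sentences it matches are illustrative:

import re

# "Xs are Y. Z is a X." -> "Therefore Z is Y."
PATTERN = re.compile(r"(\w+)s are (\w+)\. (\w+) is a \1\.", re.IGNORECASE)

def conclude(text):
    m = PATTERN.match(text)
    if m is None:
        return None
    kind, prop, name = m.groups()
    return f"Therefore {name} is {prop}."

print(conclude("Frogs are green. Kermit is a frog."))
# -> Therefore Kermit is green.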

Re: [agi] Pure reason is a disease.

2007-06-17 Thread Charles D Hixson
Eric Baum wrote: Josh On Saturday 16 June 2007 07:20:27 pm Matt Mahoney wrote: --- Bo Morgan [EMAIL PROTECTED] wrote: ... ... I claim that it is the very fact that you are making decisions about whether to suppress pain for higher goals that is the reason you are conscious of pain. Your

Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Charles D Hixson
Matt Mahoney wrote: --- J Storrs Hall, PhD [EMAIL PROTECTED] wrote: ... So you are arguing that RSI is a hard problem? That is my question. Understanding software to the point where a program could make intelligent changes to itself seems to require human level intelligence. But could

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-07 Thread Charles D Hixson
Edward W. Porter wrote: So is the following understanding correct? If you have two statements, "Fred is a human" and "Fred is an animal", and assuming you know nothing more about any of the three terms in both these
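
A structural sketch of the induction at issue, deliberately ignoring the NARS truth-value arithmetic: two inheritance statements sharing a subject yield hypothesis statements between their predicates, in both directions. The representation below is an illustrative assumption:

# "Fred is a human", "Fred is an animal" as (subject, predicate) pairs.
statements = [("Fred", "human"), ("Fred", "animal")]

hypotheses = set()
for s1, p1 in statements:
    for s2, p2 in statements:
        if s1 == s2 and p1 != p2:
            hypotheses.add((p1, p2))   # weak evidence that p1 inherits from p2

for child, parent in sorted(hypotheses):
    print(f"hypothesis: '{child}' is a kind of '{parent}'")

With nothing else known, "human is a kind of animal" and "animal is a kind of human" come out as equally (and weakly) supported, which is why the truth-value machinery matters.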

Re: [agi] Religion-free technical content

2007-10-08 Thread Charles D Hixson
Derek Zahn wrote: Richard Loosemore: a... I often see it assumed that the step between first AGI is built (which I interpret as a functioning model showing some degree of generally-intelligent behavior) and god-like powers dominating the planet is a short one. Is that really likely? Nobody

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-08 Thread Charles D Hixson
a wrote: Linas Vepstas wrote: ... The issue is that there's no safety net protecting against avalanches of unbounded size. The other issue is that its not grains of sand, its people. My bank-account and my brains can insulate me from small shocks. I'd like to have protection against the

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Charles D Hixson
to the related issues. Pei On 10/8/07, Charles D Hixson [EMAIL PROTECTED] wrote: Pei Wang wrote: Charles, What you said is correct for most formal logics formulating binary deduction, using model-theoretic semantics. However, Edward was talking about the categorical logic of NARS, though he

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Charles D Hixson
Mike Tintner wrote: Vladimir: In experience-based learning there are two main problems relating to knowledge acquisition: you have to come up with hypotheses and you have to assess their plausibility. ...you create them based on various heuristics. How is this different from narrow AI? It

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson
Mike Tintner wrote: Charles H: as I understand it, this still wouldn't be an AGI, but merely a categorizer. That's my understanding too. There does seem to be a general problem in the field of AGI, distinguishing AGI from narrow AI - philosophically. In fact, I don't think I've seen any

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson
Linas Vepstas wrote: On Sun, Oct 07, 2007 at 12:36:10PM -0700, Charles D Hixson wrote: Edward W. Porter wrote: Fred is a human Fred is an animal You REALLY can't do good reasoning using formal logic in natural language...at least

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson
Mark Waser wrote: Thus, as I understand it, one can view all inheritance statements as indicating the evidence that one instance or category belongs to, and thus is “a child of”, another category, which includes, and thus can be viewed as “a parent” of, the other. Yes, that is inheritance as Pei

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson
Mike Tintner wrote: Charles, I don't see - no doubt being too stupid - how what you are saying is going to make a categorizer into more than that - into a system that can, say, go on to learn various logics, or how to build a house or other structures or tell a story - that can be a

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson
Generally, yes, you know more. In this particular instance we were told the example was all that was known. Linas Vepstas wrote: On Wed, Oct 10, 2007 at 01:06:35PM -0700, Charles D Hixson wrote: For me the sticking point was that we were informed that we didn't know anything about anything

Re: [agi] Do the inference rules.. P.S.

2007-10-11 Thread Charles D Hixson
Consider, however, the case of someone who was not only blind, but also deaf and incapable of taste, smell, tactile, or goniometric perception. I would be dubious about the claim that such a person understood English. I might be dubious about any claim that such a person was actually

Re: [agi] The Grounding of Maths

2007-10-12 Thread Charles D Hixson
But what you're reporting is the dredging up of a memory. What would be the symbolism if, in response to "4", came the question "How do you know that?" For me it's visual (and leads directly into the definition of + as an amalgamation of two disjoint groupings). Edward W. Porter wrote: (second
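
A worked instance of that definition of "+": the sum is the count of the amalgamation (union) of two disjoint groupings. The sets below are illustrative:

a = {"apple1", "apple2"}
b = {"pear1", "pear2"}
assert a.isdisjoint(b)                  # the groupings must not overlap
print(len(a | b) == len(a) + len(b))    # True: 2 + 2 = 4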

Re: [agi] The Grounding of Maths

2007-10-12 Thread Charles D Hixson
They may be doing it with the tongue now. A few decades ago it was done with an electrode mesh on the back. It worked, but the resolution was pretty low. (IIRC, you don't need to be blind to learn to use this kind of mapping device.) Mike Tintner wrote: All v. interesting. Fascinating

Re: [agi] The Grounding of Maths

2007-10-13 Thread Charles D Hixson
conscious thought, or am I (a) out of touch with my own conscious processes, and/or (b) weird? Edward W. Porter -Original Message- From: Charles D Hixson [mailto:[EMAIL PROTECTED

Re: [agi] The Grounding of Maths

2007-10-13 Thread Charles D Hixson
Grounding requires sensoria of some sort. Not necessarily vision. Spatial grounding requires sensoria that connect spatially coherent signals. Vision is one form of spatial grounding, but I believe that goniometric sensation is even more important...though it definitely needs additional

Re: [agi] The Grounding of Maths

2007-10-14 Thread Charles D Hixson
a wrote: Are you trying to make an intelligent program, or do you want to launch a singularity? I think you are trying to do the former, not the latter. I think you do not have a plan and are thinking out loud. Chatting in this list is equivalent to thinking out loud. Think it all out first, before

Re: Images aren't best WAS Re: [agi] Human memory and number of synapses

2007-10-20 Thread Charles D Hixson
Let me take issue with one point (most of the rest I'm uninformed about): Relational databases aren't particularly compact. What they are is generalizable...and even there... The most general compact database is a directed graph. Unfortunately, writing queries for retrieval requires domain
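
A minimal sketch of the directed-graph store being contrasted with relational tables; the triple layout and relation names are illustrative assumptions:

from collections import defaultdict

graph = defaultdict(list)   # node -> list of (relation, node) edges

def add_edge(subj, rel, obj):
    graph[subj].append((rel, obj))

add_edge("Kermit", "isa", "frog")
add_edge("frog", "isa", "amphibian")
add_edge("frog", "color", "green")

def follow(node, rel):
    # Retrieval is a labeled graph walk rather than a fixed-schema
    # query, which is where the domain knowledge comes in.
    return [obj for r, obj in graph[node] if r == rel]

print(follow("Kermit", "isa"))                        # ['frog']
print(follow(follow("Kermit", "isa")[0], "color"))    # ['green']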

Re: [agi] Human memory and number of synapses.. P.S.

2007-10-20 Thread Charles D Hixson
FWIW: A few years (decades?) ago some researchers took PET scans of people who were imagining a rectangle rotating (in 3-space, as I remember). They naturally didn't get much detail, but what they got was consistent with people applying a rotation algorithm within the visual cortex. This

Re: [agi] Can humans keep superintelligences under control

2007-11-04 Thread Charles D Hixson
Richard Loosemore wrote: Edward W. Porter wrote: Richard in your November 02, 2007 11:15 AM post you stated: ... I think you should read some stories from the 1930's by John W. Campbell, Jr. Specifically the three stories collectively called The Story of the Machine. You can find them in

Re: [agi] Can humans keep superintelligences under control

2007-11-05 Thread Charles D Hixson
Richard Loosemore wrote: Charles D Hixson wrote: Richard Loosemore wrote: Edward W. Porter wrote: Richard in your November 02, 2007 11:15 AM post you stated: ... I think you should read some stories from the 1930's by John W. Campbell, Jr. Specifically the three stories collectively

Re: [agi] Can humans keep superintelligences under control

2007-11-05 Thread Charles D Hixson
Richard Loosemore wrote: Charles D Hixson wrote: Richard Loosemore wrote: Charles D Hixson wrote: Richard Loosemore wrote: Edward W. Porter wrote: Richard in your November 02, 2007 11:15 AM post you stated: ... In parents, sure, those motives exist. But in an AGI there is no earthly

Re: [agi] NLP + reasoning?

2007-11-05 Thread Charles D Hixson
Matt Mahoney wrote: --- Linas Vepstas [EMAIL PROTECTED] wrote: ... It still has a few bugs. ... (S (NP I) (VP ate pizza (PP with (NP Bob))) .) My name is Hannibal Lector. ... -- Matt Mahoney, [EMAIL PROTECTED] (Hannibal Lector was a movie cannibal)

Re: [agi] Connecting Compatible Mindsets

2007-11-10 Thread Charles D Hixson
Benjamin Goertzel wrote: Hi, *** Maybe listing all the projects that have NOT achieved AGI might give us some insight. *** That information is available in numerous published histories, and is well known to all professional researchers in the field. ... -- Ben

Re: [agi] question about algorithmic search

2007-11-11 Thread Charles D Hixson
YKY (Yan King Yin) wrote: I have the intuition that Levin search may not be the most efficient way to search programs, because it operates very differently from human programming. I guess better ways to generate programs can be achieved by imitating human programming -- using techniques

Re: [agi] Connecting Compatible Mindsets

2007-11-11 Thread Charles D Hixson
Bryan Bishop wrote: On Saturday 10 November 2007 14:10, Charles D Hixson wrote: Bryan Bishop wrote: On Saturday 10 November 2007 13:40, Charles D Hixson wrote: OTOH, to make a go of this would require several people willing to dedicate a lot of time consistently over a long

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread Charles D Hixson
Ed Porter wrote: Richard, Since hacking is a fairly big, organized crime supported, business in eastern Europe and Russia, since the potential rewards for it relative to most jobs in those countries can be huge, and since Russia has a tradition of excellence in math and science, I would be very

Re: [agi] Funding AGI research

2007-11-29 Thread Charles D Hixson
Benjamin Goertzel wrote: Nearly any AGI component can be used within a narrow AI, That proves my point [that an AGI project can be successfully split into smaller narrow-AI subprojects], right? Yes, but it's a largely irrelevant point. Because building a narrow-AI system in an

Re: [agi] Funding AGI research

2007-11-29 Thread Charles D Hixson
I think you're making a mistake. I *do* feel that lots of special purpose AIs are needed as components of an AGI, but those components don't summate to an AGI. The AGI also needs a specialized connection structure to regulate interfaces to the various special purpose AIs (which probably don't

Re: [agi] Self-building AGI

2007-12-01 Thread Charles D Hixson
Well... Have you ever tried to understand the code created by a decompiler? Especially if the original language that was compiled isn't the one that you are decompiling into... I'm not certain that just because we can look at the code of a working AGI, that we can therefore understand it.

Re: [agi] AGI and Deity

2007-12-10 Thread Charles D Hixson
Gary Miller wrote: ... supercomputer might be v. powerful - for argument's sake, controlling the internet or the world's power supplies. But it's still quite a leap from that to a supercomputer being God. And yet it is clearly a leap that a large number here have no problem making. So

Re: [agi] AGI and Deity

2007-12-10 Thread Charles D Hixson
John G. Rose wrote: If you took an AGI, before it went singulatarinistic[sic?] and tortured it…. a lot, ripping into it in every conceivable hellish way, do you think at some point it would start praying somehow? I’m not talking about a forced conversion medieval style, I’m just talking

Re: [agi] AGI and Deity

2007-12-10 Thread Charles D Hixson
Mark Waser wrote: Then again, a completely rational AI may believe in Pascal's wager... Pascal's wager starts with the false assumption that belief in a deity has no cost. Pascal's wager starts with a multitude of logical fallacies. So many that only someone pre-conditioned to believe in the

Re: [agi] AGI and Deity

2007-12-10 Thread Charles D Hixson
I find Dawkins less offensive than most theologians. He commits many fewer logical fallacies. His main one is premature certainty. The evidence in favor of an external god of any traditional form is, frankly, a bit worse than unimpressive. It's lots worse. This doesn't mean that gods don't

Re: [agi] AGI and Deity

2007-12-11 Thread Charles D Hixson
John G. Rose wrote: From: Charles D Hixson [mailto:[EMAIL PROTECTED] The evidence in favor of an external god of any traditional form is, frankly, a bit worse than unimpressive. It's lots worse. This doesn't mean that gods don't exist, merely that they (probably) don't exist in the hardware

Re: Re : [agi] List of Java AI tools libraries

2007-12-20 Thread Charles D Hixson
Bruno Frandemiche wrote: Psyclone AIOS™ (http://www.cmlabs.com/psyclone/) is a powerful platform for building complex automation and autonomous systems. I couldn't seem to find what license that was released under. (The library was LGPL, which is very nice.) But without knowing the license,

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-23 Thread Charles D Hixson
Richard Loosemore wrote: Matt Mahoney wrote: ... Matt, ... As for your larger point, I continue to vehemently disagree with your assertion that a singularity will end the human race. As far as I can see, the most likely outcome of a singularity would be exactly the opposite. Rather than

Re: [agi] Wozniak's defn of intelligence

2008-02-09 Thread Charles D Hixson
Richard Loosemore wrote: J Storrs Hall, PhD wrote: On Friday 08 February 2008 10:16:43 am, Richard Loosemore wrote: J Storrs Hall, PhD wrote: Any system builders here care to give a guess as to how long it will be before a robot, with your system as its controller, can walk into the

Re: [agi] would anyone want to use a commonsense KB?

2008-02-29 Thread Charles D Hixson
Ben Goertzel wrote: yet I still feel you dismiss the text-mining approach too glibly... No, but text mining requires a language model that learns while mining. You can't mine the text first. Agreed ... and this gets into subtle points. Which aspects of the language model need to be

Re: [agi] This is homunculus fallacy, no? [WAS Re: Common Sense Consciousness...]

2008-02-29 Thread Charles D Hixson
Richard Loosemore wrote: Mike Tintner wrote: Eh? Move your hand across the desk. You see that as a series of snapshots? Move a noisy object across. You don't see a continuous picture with a continuous soundtrack? Let me give you an example of how impressive I think the brain's powers here

Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Charles D Hixson
Mark Waser wrote: ... The motivation that is in the system is I want to achieve *my* goals. The goals that are in the system I deem to be entirely irrelevant UNLESS they are deliberately and directly contrary to Friendliness. I am contending that, unless the initial goals are deliberately
