Re: [agi] The Test

2008-02-05 Thread wannabe
Benjamin Johnston wrote, among other things: I like to think about Deep Blue a lot. Prior to Deep Blue, I'm sure that there were people who, like you, complained that nobody had offered a crux idea that could make a truly intelligent computer chess system. In the end, Deep Blue appeared to win

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread wannabe
One thing I would expect from an AGI is that at least it would be able to Google for something that might talk about how to do whatever it needs and to have available library references on the subject. Being able to follow and interpret written instructions takes a lot of intelligence in

Re: [agi] organising parallel processes

2008-05-06 Thread wannabe
Stephen Reed wrote: At the time that the Texai bootstrap English dialog system is available, I'll begin fleshing out the hundreds of agencies for which I hope to recruit human mentors. Each agency I establish will have paragraphs of English text to describe its mission, including

Re: [agi] organising parallel processes

2008-05-12 Thread wannabe
And I'd also like to thank Brad for pointing out Skype's API, as I've also been wanting to use a VOIP platform for speech processing and communication. I don't know if Steve is going to end up using it, but it's nice to hear about a useful platform like this. andi Quoting Stephen Reed

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread wannabe
I was sitting in the room when they were talking about it and I didn't feel like speaking up at the time (why break my streak?) but I felt he was just wrong. It seemed like you could boil the claim down to this: If you are sufficiently advanced, and you have a goal and some ability to

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread wannabe
There was one little line in this post that struck me, and I wanted to comment: Quoting Ed Porter [EMAIL PROTECTED]: With regard to performance, such systems are not even close to human brain level but they should allow some interesting proofs of concept Mentioning some huge system. My

Re: [agi] Approximations of Knowledge

2008-06-28 Thread wannabe
Richard wrote: Interestingly enough, Melanie Mitchell has a book due out in 2009 called The Core Ideas of the Sciences of Complexity. Interesting title, given my thoughts in the last post. Thanks for the tip, Richard! I like her book on CopyCat, and I'd heard she had been doing complexity

Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread wannabe
On Monday 28 July 2008 07:04:01 am YKY (Yan King Yin) wrote: Here is an example of a problematic inference: 1. Mary has cybersex with many different partners 2. Cybersex is a kind of sex 3. Therefore, Mary has many sex partners 4. Having many sex partners -> high chance of getting STDs
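
A toy sketch of why that chain is problematic (my illustration, with a hypothetical rule engine, not YKY's actual formalism): a naive forward-chainer lifts "cybersex" to "sex" via the is-a link and then fires the STD rule, even though that rule only holds for the physical kind of sex.

    # Hypothetical toy rule engine -- not YKY's system.
    facts = {("Mary", "has_cybersex_with_many_partners")}
    rules = [
        # Steps 2+3: cybersex is a kind of sex, so the predicate is lifted.
        ("has_cybersex_with_many_partners", "has_many_sex_partners"),
        # Step 4: many sex partners -> high STD risk (true only of physical sex).
        ("has_many_sex_partners", "high_std_risk"),
    ]

    def forward_chain(facts, rules):
        # Apply every rule until no new facts appear.
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                for subject, predicate in list(facts):
                    if predicate == premise and (subject, conclusion) not in facts:
                        facts.add((subject, conclusion))
                        changed = True
        return facts

    print(forward_chain(facts, rules))
    # Includes ('Mary', 'high_std_risk'): the chain never checks that the
    # STD rule applies only to the physical subclass of sex.

The repair, presumably, has to block the lift at step 2, e.g. by attaching the transmission property only to the physical subclass of sex.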

Re: [agi] Search vs Recall.. P.S.

2008-08-01 Thread wannabe
I haven't really followed this very closely. I kind of get the feeling that Mike is proposing some kind of intelligence special sauce that involves some type of figurative thinking. It sounded like it was about images or something. I'm sorry, but people are collections of hacks. There just

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread wannabe
Argh! Are you all making the mistake I think you are making? Searle is using a technical term in philosophy--intentionality. It is different from the common use of intending, as in aiming to do something, or intention as a goal. (Here's a wiki: http://en.wikipedia.org/wiki/Intentionality). The

Re: [agi] The Necessity of Embodiment

2008-08-10 Thread wannabe
Interesting conversation. I wanted to suggest something about how an AGI might be qualitatively different from a human. One possible difference could be an overriding thoroughness. People generally don't put in the effort to consider all the possibilities in the decisions they make, but computers
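
A minimal sketch of that contrast (my toy example, not from the thread; the scoring function is hypothetical): a machine can afford to score every option, where people tend to stop at the first option that looks good enough.

    # Toy contrast between machine thoroughness and human satisficing.
    def exhaustive_choice(options, score):
        # Consider every possibility and keep the best one.
        return max(options, key=score)

    def satisficing_choice(options, score, good_enough):
        # Stop at the first option that clears the bar.
        for option in options:
            if score(option) >= good_enough:
                return option
        return options[0]  # fall back if nothing clears the bar

    options = range(100)
    score = lambda x: -(x - 73) ** 2   # best option is 73
    print(exhaustive_choice(options, score))         # 73
    print(satisficing_choice(options, score, -100))  # 63, the first "good enough" option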

Re: [agi] The Necessity of Embodiment

2008-08-10 Thread wannabe
me: And I've said it before, but it bears repeating in this context. Real intelligence requires that mistakes be made. And that's at odds with regular programming, because you are trying to write programs that don't make mistakes, so I have to wonder how serious people really would be about

Re: [agi] The Necessity of Embodiment

2008-08-24 Thread wannabe
Valentina wrote: Sorry if I'm commenting a little late to this: just read the thread. Here is a question. I assume we all agree that intelligence can be defined as the ability to achieve goals. My question concerns the establishment of those goals. As human beings we move in a world of limitations

Re: [agi] The Necessity of Embodiment

2008-08-28 Thread wannabe
Interesting discussion. And we brought up wireheading. It's kind of the ultimate example that shows that pursuing pleasure is different from pursuing the good. It really is an area for the philosophers. What is the good, anyway? But what I wanted to comment on was my understanding of the

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread wannabe
Colin appears to have clarified his position. It seems to be that computers cannot be intelligent, and we need some other kind of device for AGI, which he is working on. That is a perfectly possible assertion and approach. Unfortunately, what Ben tries to say as A is kind of an assumption for the

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread wannabe
And I remember the good old Usenet comp.ai.philosophy, though it's been a long time. I remember Dr. Minsky once taking time out of his day to post that I was wrong about something or other. That kind of thing can be a bunch of silliness, it's true. But I'm not sure that the re-focusing is really

RE: [agi] Re: Defining AGI

2008-10-17 Thread wannabe
On Lakoff and Nunez, Where Mathematics Comes From. Dittos. Great book. I have had to buy multiple copies because I keep loaning it and not getting it back. Lakoff's embodiment theme is a primary concept for me. andi

Re: [agi] Re: Defining AGI

2008-10-18 Thread wannabe
I do appreciate the support of embodiment frameworks. And I really get the feeling that Matthias is wrong about embodiment because when it comes down to it, embodiment is an assumption made by people when judging if something is intelligent. But that's just me. And what's up with language as

Re: AW: [agi] Re: Defining AGI

2008-10-18 Thread wannabe
Matthias wrote: There is no big depth in the language. There is only depth in the information (i.e. patterns) which is transferred using the language. This is a claim with which I obviously disagree. I imagine linguists would have trouble with it, as well. And goes on to conclude: Therefore

Re: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread wannabe
This really seems more like arguing that there is no such thing as AI-complete at all. That is certainly a possibility. It could be that there are only different competences. This would also seem to mean that there isn't really anything that is truly general about intelligence, which is again

RE: [agi] Cloud Intelligence

2008-10-30 Thread wannabe
It sure seems to me that the availability of cloud computing is valuable to the AGI project. There are some claims that maybe intelligent programs are still waiting on sufficient computer power, but with something like this, anybody who really thinks that and has some real software in mind has no

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread wannabe
When people discuss the ethics of the treatment of artificial intelligent agents, it's almost always with the presumption that the key issue is the subjective level of suffering of the agent. This isn't the only possible consideration. One other consideration is our stance relative to that

Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-08 Thread wannabe
I don't know if it's low-hanging fruit, but it certainly seems like it would require AGI to have a system that could, given some picture or video input, say what some object is. And along those lines, accept verbal instruction as to what it is if it's wrong in what it thinks. I bring that up
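
A bare-bones sketch of that interaction loop (everything here is hypothetical, including the toy nearest-example labeler): the system names what it sees, and a verbal correction simply becomes new training data.

    # Hypothetical memory-based labeler that accepts corrections.
    class CorrectableLabeler:
        def __init__(self):
            self.examples = []  # (feature_vector, label) pairs

        def predict(self, features):
            if not self.examples:
                return "unknown"
            # Guess the label of the nearest stored example.
            nearest = min(self.examples,
                          key=lambda ex: sum((a - b) ** 2
                                             for a, b in zip(ex[0], features)))
            return nearest[1]

        def correct(self, features, label):
            # A verbal correction is just another example.
            self.examples.append((features, label))

    labeler = CorrectableLabeler()
    print(labeler.predict([0.9, 0.1]))    # "unknown" at first
    labeler.correct([0.9, 0.1], "cat")    # the human corrects it
    print(labeler.predict([0.85, 0.15]))  # now "cat"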