Benjamin Johnston wrote, among other things:
I like to think about Deep Blue a lot. Prior to Deep Blue, I'm sure
that there were people who, like you, complained that nobody had
offered a crux idea that could make a truly intelligent computer chess
system. In the end, Deep Blue appeared and won
One thing I would expect from an AGI is that at least it would be able
to Google for something that might talk about how to do whatever it
needs and to have available library references on the subject. Being
able to follow and interpret written instructions takes a lot of
intelligence in
Stephen Reed wrote:
At the time that the Texai bootstrap English dialog system is
available, I'll begin fleshing out the hundreds of agencies for
which I hope to recruit human mentors. Each agency I establish will
have paragraphs of English text to describe its mission, including
And I'd also like to thank Brad for pointing out Skype's API, as I've
also been wanting to use a VOIP platform for speech processing and
communication. I don't know if Steve is going to end up using it, but
it's nice to hear about a useful platform like this.
andi
Quoting Stephen Reed
I was sitting in the room when they were talking about it and I didn't
feel like speaking up at the time (why break my streak?) but I felt he
was just wrong. It seemed like you could boil the claim down to this:
If you are sufficiently advanced, and you have a goal and some
ability to
There was one little line in this post that struck me, and I wanted to
comment:
Quoting Ed Porter [EMAIL PROTECTED]:
With regard to performance, such systems are not even close to human-brain
level, but they should allow some interesting proofs of concept
Mentioning some huge system. My
Richard wrote:
Interestingly enough, Melanie Mitchell has a book due out in 2009
called The Core Ideas of the Sciences of Complexity. Interesting
title, given my thoughts in the last post.
Thanks for the tip, Richard! I like her book on CopyCat, and I'd
heard she had been doing complexity
On Monday 28 July 2008 07:04:01 am YKY (Yan King Yin) wrote:
Here is an example of a problematic inference:
1. Mary has cybersex with many different partners
2. Cybersex is a kind of sex
3. Therefore, Mary has many sex partners
4. Having many sex partners -> high chance of getting STDs
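The flaw in that chain can be made concrete with a tiny forward-chaining sketch. This is purely illustrative (the rule strings and the chaining loop are my own hypothetical construction, not anything from YKY's system): the "is-a" step silently licenses substituting the superclass into every context, so a property that only holds for typical instances of the superclass (STD risk) gets attached to the subclass, where it doesn't apply.

```python
# Naive forward chaining over the four statements above. The bug the
# example illustrates: treating "cybersex is a kind of sex" as if it
# preserved every property of "sex" in every context.
# All rule strings here are hypothetical, for illustration only.

rules = [
    # premises 1 + 2 => conclusion 3: substitute the superclass
    ("Mary has cybersex with many partners",
     "Mary has sex with many partners"),
    # premise 4 applied blindly to the derived fact
    ("Mary has sex with many partners",
     "Mary has a high chance of getting STDs"),
]

derived = {"Mary has cybersex with many partners"}
changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent in derived and consequent not in derived:
            derived.add(consequent)
            changed = True

for fact in sorted(derived):
    print(fact)
```

The chain dutifully reaches the STD conclusion even though cybersex carries no such risk: the taxonomy edge is not property-preserving, which is exactly what makes the inference problematic.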
I haven't really followed this very closely. I kind of get the feeling
that Mike is proposing some kind of intelligence special sauce that
involves some type of figurative thinking. It sounded like it was about
images or something. I'm sorry, but people are collections of hacks.
There just
Argh! Are you all making the mistake I think you are making? Searle is
using a technical term in philosophy--intentionality. It is different
from the common use of intending as in aiming to do something or intention
as a goal. (Here's a wiki: http://en.wikipedia.org/wiki/Intentionality).
The
Interesting conversation. I wanted to suggest something about how an AGI
might be qualitatively different from human. One possible difference
could be an overriding thoroughness. People generally don't put in the
effort to consider all the possibilities in the decisions they make, but
computers
me:
And I've said it before, but it bears repeating in this context. Real
intelligence requires that mistakes be made. And that's at odds with
regular programming, because you are trying to write programs that don't
make mistakes, so I have to wonder how serious people really would be
about
Valentina wrote:
Sorry if I'm commenting a little late to this: just read the thread. Here
is a question. I assume we all agree that intelligence can be defined as
the ability to achieve goals. My question concerns the establishment of
those goals. As human beings we move in a world of limitations
Interesting discussion. And we brought up wireheading. It's kind of the
ultimate example that shows that pursuing pleasure is different from
pursuing the good. It really is an area for the philosophers. What is
the good, anyway?
But what I wanted to comment on was my understanding of the
Colin appears to have clarified his position. It seems to be that
computers cannot be intelligent, and we need some other kind of device for
AGI, which he is working on.
That is a perfectly possible assertion and approach. Unfortunately, what
Ben tries to say as A is kind of an assumption for the
And I remember the good old Usenet comp.ai.philosophy, though it's been a
long time. I remember Dr. Minsky once taking time out of his day to post
that I was wrong about something or other. That kind of thing can be a
bunch of silliness, it's true.
But I'm not sure that the re-focusing is really
On Lakoff and Nunez, Where Mathematics Comes From. Dittos. Great book.
I have had to buy multiple copies because I keep loaning it and not
getting it back. Lakoff's embodiment theme is a primary concept for me.
andi
I do appreciate the support of embodiment frameworks. And I really get
the feeling that Matthias is wrong about embodiment because when it comes
down to it, embodiment is an assumption made by people when judging if
something is intelligent. But that's just me.
And what's up with language as
Matthias wrote:
There is no big depth in the language. There is only depth in the
information (i.e. patterns) which is transferred using the language.
This is a claim with which I obviously disagree. I imagine linguists
would have trouble with it, as well.
And goes on to conclude:
Therefore
This really seems more like arguing that there is no such thing as
AI-complete at all. That is certainly a possibility. It could be that
there are only different competences. This would also seem to mean that
there isn't really anything that is truly general about intelligence,
which is again
It sure seems to me that the availability of cloud computing is valuable
to the AGI project. There are some claims that maybe intelligent programs
are still waiting on sufficient computing power, but with something like
this, anybody who really thinks that and has some real software in mind
has no
When people discuss the ethics of the treatment of artificial intelligent
agents, it's almost always with the presumption that the key issue is the
subjective level of suffering of the agent. This isn't the only possible
consideration.
One other consideration is our stance relative to that
I don't know if it's low-hanging fruit, but it certainly seems like it
would require AGI to have a system that could, given some picture or video
input, say what some object is, and, along those lines, accept verbal
instruction as to what the object is if its guess is wrong. I bring
that up