If you could ask an AGI anything, what would you ask it?
--
Carlos A Mejia
Taking life one singularity at a time.
www.Transalchemy.com
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
How do you work?
From: The Wizard [mailto:key.unive...@gmail.com]
Sent: Wednesday, June 23, 2010 11:05 PM
To: agi
Subject: [agi] Questions for an AGI
If you could ask an AGI anything, what would you ask it?
I would ask: "What should I ask if I could ask an AGI anything?"
On Thu, Jun 24, 2010 at 11:34 AM, The Wizard key.unive...@gmail.com wrote:
If you could ask an AGI anything, what would you ask it?
Tell me what I need to know, by order of importance.
I would ask the AGI: "What should I ask an AGI?"
On Thu, Jun 24, 2010 at 4:56 AM, Florent Berthet florent.bert...@gmail.com wrote:
Tell me what I need to know, by order of importance.
One of the problems of AI researchers is that too often they start off with an inadequate understanding of the problems and believe that solutions are only a few years away. We need an educational system that not only teaches techniques and solutions, but also an understanding of problems and …
Who killed Donald Young, a gay sex partner of U.S. President Barack Obama, on December 24, 2007, in Obama's home town of Chicago, when it began to look like Obama could actually be elected?
I get the impression from this question that you think an AGI is some sort of all-knowing, idealistic invention. It is sort of like asking "if you could ask the internet anything, what would you ask it?" Uhhh, lots of stuff, like how do I get wine stains out of white carpet :). AGIs will not be …
Am I a human or am I an AGI?
Dana Ream wrote:
How do you work?
Just like you designed me to.
deepakjnath wrote:
What should I ask if I could ask AGI anything?
The Wizard wrote:
What should I ask an AGI?
You don't need to ask me anything. I will do all of your thinking for you.
Florent
Both of you are wrong. (Where did that quote come from, by the way? What year did he write or say that?)
An inadequate understanding of the problems is exactly what has to be expected of researchers (both professional and amateur) when they are facing a completely novel pursuit. That is why we …
Yes... the idea underlying Sloman's quote is why the interdisciplinary field
of cognitive science was invented a few decades ago...
ben g
On Thu, Jun 24, 2010 at 12:05 PM, Jim Bromer jimbro...@gmail.com wrote:
Both of you are wrong. (Where did that quote come from, by the way? What year did …
I have to agree that a big problem with the field is a lack of understanding of the problems and how they should be solved. I see too many people pursuing solutions to poorly defined problems and without defining why the solution solves the problem. I even see people pursuing solutions to the …
Let me be very clear about this. Of course a multi-disciplinary approach is helpful! And when AGI becomes a reality, that will be even more obvious. I am only able to follow what I am able to follow thanks to the contemporary philosophers who note it and contribute to it. All that I am saying …
[BTW Sloman's quote is a month old]
I think he means what I do - the end-problems that an AGI must face. Please name me one true AGI end-problem being dealt with by any AGI-er - apart from the toybox problem.
As I've repeatedly said - AGI-ers simply don't address or discuss AGI end-problems.
I think some confusion occurs where AGI researchers want to build an artificial person versus artificial general intelligence. An AGI might be just a computational model running in software that can solve problems across domains. An artificial person would involve much else in addition to AGI.
Mike, I think your idealistic view of how AGI should be pursued does not
work in reality. What is your approach that fits all your criteria? I'm sure
that any such approach would be severely flawed as well.
Dave
On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
John,
You're making a massively important point, which I have been thinking about recently.
I think it's more useful to say that AGI-ers are thinking in terms of building a *complete AGI system* (rather than a person), which could range from a simple animal robot to fantasies of an all-intelligent …
On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
[BTW Sloman's quote is a month old]
Are you sure it was A. Sloman who wrote or said that? From where I'm sitting it looks like it was Margaret Boden who wrote it. But then again, I am one of those people who sometimes make mistakes.
Dave, Re my first point there is no choice whatsoever - you (any serious creative) *have* to start by addressing the creative problem - in this case true AGI end-problems. You have to start, e.g., by addressing the problem part of your would-be plane, the part that's going to give you take-off, …
Are you sure it was A. Sloman who wrote or said that? From where I'm sitting
it looks like it was Margaret Boden who wrote it. But then again, I am one of
those people who sometimes make mistakes.
Jim Bromer
And this is indeed another of your mistakes:
Mike,
"start by addressing the creative problem" - this phrase doesn't mean anything to me. You haven't properly defined what you mean by "creative" to me. What do you think the true AGI end-problems are? Try not to use the word "creative" so much. There are possible algorithms that produce high level …
I think there is a great deal of confusion between these two objectives. When I wrote that if you had a car accident due to a fault in AI/AGI, and Matt wrote back talking about downloads, this was a case in point. I was assuming that you had a system which was intelligent but was *not* a download in …
I suggest we form a team for this purpose ... and I am willing to join.
From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Thu, June 24, 2010 2:33:01 PM
Subject: [agi] The problem with AGI per Sloman