--
*From:* David Jones davidher...@gmail.com
*To:* agi agi@v2.listbox.com
*Sent:* Tue, June 29, 2010 9:05:45 AM
*Subject:* [agi] Re: Huge Progress on the Core of AGI
If anyone has any knowledge of or references to the state of the art in
explanation-based reasoning, can you send me keywords
extremely kind - you do have a chance of winning the lottery.
--
From: Michael Swan ms...@voyagergaming.com
Sent: Monday, June 28, 2010 4:17 AM
To: agi agi@v2.listbox.com
Subject: Re: [agi] Huge Progress on the Core of AGI
On Sun, 2010-06-27 at 19:38 -0400, Ben Goertzel
knowledge into an AI. (Graph attached)
Basically, the more hardcoded knowledge you include in an AI, or AGI, the
lower the overall intelligence it will have, but the faster you will reach
that value. I would place any real AGI toward the left of the graph,
with systems like CYC
is that AGI enhances itself. Hence a singularity
*without* AGI is a contradiction in terms. I did not quite get your syntax on
reproduction, but it is perfectly true that you do not need a singularity
for a Von Neumann machine. The singularity is a long way off, yet Obama is
going to leave Afghanistan
as anything but a crime? Even these two extremes would have
significant implementation problems.
Anyway, I am sending you two back to kindergarten.
Steve
The recent Core of AGI exchange has led me IMO to a beautiful conclusion -
to one of the most basic distinctions a real AGI system must make, and also
a simple way of distinguishing between narrow AI and real AGI projects of
any kind.
Consider - you have
a) Dave's square moving across
in with AGI? Well Peter, I don't think your empirical
approach will get you to Saturn. There is the need for theoretical
knowledge. How is this knowledge represented in AGI? It is represented in an
abstract mathematical form, a form which describes a general planet and
incorporates the inverse square law
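For reference, the inverse square law in question is presumably Newtonian gravitation, written in its general form as

F = \frac{G m_1 m_2}{r^2}

where G is the gravitational constant, m_1 and m_2 are the two masses, and r is the distance between them. A representation at this level of abstraction covers any planet, not just the ones already observed.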
by the
liquid that contains them. Yet, viruses are arguably alive. Some plants or
algae don't really move either. They may just grow in some direction, which
is not quite the same as movement.
Likewise, your analogy of this to AGI fails. You think there is a
difference, but there is none. You may
This presumption looks similar (in some profound way) to many of the
presumptions that were tried in the early days of AI, partly because
computers lacked memory and they were very slow. It's unreliable just
because we need the AGI program to be able to consider situations when, for
example, inanimate
Well, I see that Mike did say normally move... so yes, that type of
principle could be used in a more flexible AGI program (although there is
still a question about the use of any presumptions that go into this level
of detail about their reference subjects). I would not use a primary
reference
as well.
The point is that AGI is not defined by any particular problem. It is
defined by how you solve problems, even simple ones. Which is why your claim
that my problems are not AGI is simply wrong.
On Jun 28, 2010 12:22 PM, Jim Bromer jimbro...@gmail.com wrote:
On Mon, Jun 28, 2010 at 11:15 AM
or any
quantitative reasoning at all...
Probably they solve it based on nearest-neighbor matching against past
experiences cleaning other dirty floors with water in similarly sized and
shaped buckets...
-- ben g
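A minimal sketch of the kind of nearest-neighbor matching Ben is describing; the feature encoding and the stored experiences below are invented for illustration, not anyone's actual system:

from math import sqrt

# Past experiences: a feature vector (bucket volume, fill level, floor area)
# paired with the action that worked. All values are made up for illustration.
experiences = [
    ((1.0, 0.8, 12.0), "mop in small circles"),
    ((5.0, 0.5, 40.0), "use wide sweeping strokes"),
    ((2.0, 0.9, 20.0), "mop in rows, rinse often"),
]

def nearest_action(situation):
    """Return the action from the most similar past experience."""
    def dist(a, b):
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, action = min(experiences, key=lambda exp: dist(exp[0], situation))
    return action

print(nearest_action((1.2, 0.7, 14.0)))  # -> mop in small circles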
In case anyone missed it... Problems are not AGI. Solutions are. And AGI
is not the right adjective anyway. The correct word is general. In other
words, generally applicable to other problems. I repeat, Mike, you are
*wrong*. Did anyone miss that?
To recap, it has nothing to do with what problem
animate
really the best way to describe living versus non-living? No?
Given that the possibilities could quickly add up and given that they are
not clearly defined, it presents a major problem of complexity to the
would-be designer of a true AGI program. The problem is that it is just not
feasible
What do you class as enhancement?
I'm not talking about shoes making us run faster; I'm talking about direct
brain interfacing that significantly increases a person's intelligence and
would allow them to outsmart us all for their own good.
The idea of the Singularity is that AGI enhances itself
Does it really need to be able
On Mon, Jun 28, 2010 at 4:54 PM, David Jones davidher...@gmail.com wrote:
But, that's why it is important to force oneself to solve them in such a way
that it IS applicable to AGI. It doesn't mean that you have to choose a
problem that is so hard you can't cheat. It's unnecessary to do
So, to work on the full problem is practically impossible for me. Seeing as
though there isn't a lot of support for AGI research such as this, I am much
better served by proving the principle rather than implementing the full
solution to the real problem. If I can even prove how vision works
-and-predictable
computers, we're all interested in building AGI computers that
expect-unpredictability-and-can-react-unpredictably, right? (Which means
being predicting-and-predictable some of the time too. The real world is
complicated.)
From: Jim Bromer
Sent: Monday, June 28, 2010 6:35 PM
from dots, edges, color, and motion at the lowest levels, to complex objects
like faces at the higher levels. Vision is integrated with lots of other
knowledge sources. You see what you expect to see.
The common theme is that real AGI consists of a learning algorithm, an
opaque knowledge representation, and a vast amount of training data ;)
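As a toy illustration of the layered pipeline being described; every stage, name, and threshold below is my own invention, not a real vision system:

# Each layer consumes the output of the one below, from pixels up to objects.
def detect_edges(pixels):
    """Lowest level: intensity differences between adjacent pixels."""
    return [(i, abs(a - b)) for i, (a, b) in enumerate(zip(pixels, pixels[1:]))]

def group_features(edges, threshold=10):
    """Mid level: keep only the positions of strong edges."""
    return [i for i, strength in edges if strength > threshold]

def recognize_objects(regions, expected):
    """Top level: 'you see what you expect to see' - expectations bias
    which objects are recognized in the detected regions."""
    matches = []
    for name, (lo, hi) in expected:
        if any(lo <= r <= hi for r in regions):
            matches.append(name)
    return matches

pixels = [0, 0, 120, 125, 0, 0]
regions = group_features(detect_edges(pixels))
print(recognize_objects(regions, [("face", (1, 3))]))  # -> ['face']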
Word of advice. You're creating your own artificial world here with its own
artificial rules.
AGI is about real vision of real objects in the real world. The two do not
relate - or compute.
It's a pity - it's good that you keep testing yourself, it's bad that they
aren't realistic tests.
Here are two case studies I've been analyzing from sensory perception of
simplified visual input:
The goal of the case studies is to answer the following: How do you
generate the most likely motion hypothesis in a way that is general and
applicable to AGI?
*Case Study 1)* Here is a link to an example: an animated gif of two black
squares moving from left
frames.
The power of this method is immediately clear. It is general and it solves
the problem very cleanly.
Dave
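For concreteness, here is one minimal way the matching step could look; treating "least total movement" as the measure of the most likely hypothesis is my reading of the method, and all names below are illustrative:

from itertools import permutations

# Each frame is a list of (x, y) positions of detected squares.
frame1 = [(0, 0), (10, 0)]
frame2 = [(1, 0), (11, 0)]

def best_motion_hypothesis(objs1, objs2):
    """Enumerate every assignment of frame-1 objects to frame-2 objects and
    keep the one with the least total movement (the simplest hypothesis)."""
    def cost(assignment):
        return sum(abs(ax - bx) + abs(ay - by)
                   for (ax, ay), (bx, by) in zip(objs1, assignment))
    return min(permutations(objs2), key=cost)

print(best_motion_hypothesis(frame1, frame2))
# -> ((1, 0), (11, 0)): each square moved one unit right, which beats the
#    costlier "crossing" hypothesis where the squares swap places.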
potential, full of
possibilities]
To put it more succinctly, Dave, Ben and Hutter are doing the wrong subject -
narrow AI. Looking for the one right prediction/explanation is narrow AI.
Being able to generate more and more possible explanations, which could all
be valid, is AGI. The former is rational, uniform thinking. The latter
is creative, polyform thinking. Or, if you prefer, it's convergent vs
divergent thinking, the difference between which still seems to escape Dave,
Ben and most AGI-ers.
The fact that you are using experiment and the fact that you recognized that
AGI needs to provide both explanation and expectations (differentiated from
the false precision of 'prediction') shows that you have a grasp of some of
the philosophical problems, but the fact that you would rely
Your criticisms and Mike's criticisms are unjustified. This is a
means to an end, not an end in and of itself. :)
Dave
Well, I agree
This is wishful thinking. Wishful thinking is dangerous. How about instead of
hoping that AGI won't destroy the world, you study the problem and come up with
a safe design.
-- Matt Mahoney, matmaho...@yahoo.com
From: rob levy r.p.l...@gmail.com
To: agi agi
Try ping-pong - as per the computer game. Just a line (/bat) and a
square (/ball) representing your opponent - and you have a line (/bat) to play
against them.
Now you've got a relatively simple true AGI visual problem - because if the
opponent returns the ball somewhat as a real human AGI does, (without the
complexities of spin etc.) just presumably repeatedly changing the direction
(and perhaps
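A minimal sketch of that test bed, with the opponent's returns made unpredictable by random direction changes; the layout and probabilities here are assumptions, not a spec:

import random

class PongWorld:
    """A bat-and-ball toy world whose ball changes vertical direction
    at unpredictable moments, as a stand-in for a human-like opponent."""
    def __init__(self, height=20):
        self.height = height
        self.ball_y = height // 2
        self.velocity = random.choice([-1, 1])

    def step(self):
        # The opponent may change the ball's direction without warning.
        if random.random() < 0.3:
            self.velocity = random.choice([-1, 0, 1])
        self.ball_y = max(0, min(self.height, self.ball_y + self.velocity))
        return self.ball_y

world = PongWorld()
print([world.step() for _ in range(10)])  # a trajectory no fixed model predicts exactly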
and potentially the only possible design (at least at first),
depending on how creative and resourceful we get in cog sci/ AGI in coming
years.
On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:
This is wishful thinking. Wishful thinking is dangerous. How about instead
of hoping that AGI won't destroy the world, you study the problem and come
up with a safe design.
Agreed on this dangerous thought!
On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote
I am working on logical satisfiability again. If what I am working on right
now works, it will become a pivotal moment in AGI, and what's more, the
method that I am developing will (probably) become a core method for AGI.
However, if the idea I am working on does not -itself- lead to a major
Well, Ben, I'm glad you're quite sure because you haven't given a single
reason why. Clearly you should be Number One advisor on every Olympic team,
because you've cracked the AGI problem of how to deal with opponents that can
move (whether themselves or balls) in multiple, unpredictable
of the difference between the two types of
functions here?
Joshua
It's just that something like world hunger is so complex that an AGI would
have to master simpler problems first. Also, there are many people and
institutions that
have solutions to world hunger already and they get ignored. So an AGI would
have to get established over a period of time for anyone to really care
On 27 June 2010 21:25, John G. Rose johnr...@polyplexic.com wrote:
It's just that something like world hunger is so complex AGI would have to
master simpler problems.
I am not sure that that follows necessarily. Computing is full of situations
where a seemingly simple problem is not solved
The definition of universal intelligence being over all utility functions
implies that the utility function is unknown. Otherwise there is a fixed
solution.
-- Matt Mahoney, matmaho...@yahoo.com
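The definition Matt is citing is presumably Legg and Hutter's universal intelligence measure, which scores an agent \pi by its expected value across all computable environments \mu, weighted by simplicity:

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V^{\pi}_{\mu}

where K(\mu) is the Kolmogorov complexity of the environment. Because the sum ranges over every computable reward structure, no single utility function is fixed in advance.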
From: Joshua Fox joshuat...@gmail.com
To: agi agi@v2
and maintain control over it. An
example would be the internet.
-- Matt Mahoney, matmaho...@yahoo.com
From: rob levy r.p.l...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sun, June 27, 2010 2:37:15 PM
Subject: Re: [agi] Questions for an AGI
I definitely agree, however we
Lenting travlent...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sun, June 27, 2010 5:21:24 PM
Subject: Re: [agi] Questions for an AGI
I don't like the idea of enhancing human intelligence before the singularity. I
think crime has to be made impossible even for enhanced humans first. I
think life
E botag...@hotmail.com
To: agi agi@v2.listbox.com
Sent: Sun, June 27, 2010 5:36:38 PM
Subject: [agi] Theory of Hardcoded Intelligence
I sketched a graph the other day which represented my thoughts on the
usefulness of hardcoding knowledge into an AI. (Graph attached)
Basically, the more
a sufficient number of observations, the choice of M doesn't matter and AIXI
will eventually learn any computable reward pattern. However, choosing the
right M can greatly accelerate learning. In the case of a physical AGI
system, choosing M to incorporate the correct laws of physics would
obviously
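For readers following along: M here is AIXI's class of candidate environment models, and predictions come from the Bayes mixture over that class, in the standard Hutter formulation,

\xi(x) = \sum_{\nu \in M} 2^{-K(\nu)} \, \nu(x)

so restricting M to models consistent with known physics concentrates prior weight on the true environment, which is why the right choice of M accelerates learning even though any sufficiently rich M succeeds eventually.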
court. I think
you'll find that bumps up the variables if not unknowns massively.
Plus just about every shot exchange presents you with dilemmas of how to place
your shot and then move in anticipation of your opponent's return.
Remember the object here is to present a would-be AGI with a simple but
*unpredictable* object to deal with, reflecting the realities
--
Ben Goertzel, PhD
CEO
are wildly wrong (they are probably computing something, but NOT what you
might be interested in), and are hence easily eliminated. I suspect that
neurons might be doing much the same, as could formulaic implementations
like (most) present AGI efforts. This might explain natural architecture and
guide human architectural efforts.
In short, instead of a pot of neurons, we
-----Original Message-----
From: Ian Parker [mailto:ianpark...@gmail.com]
So an AGI would have to get established over a period of time for anyone to
really care what it has to say about these types of issues. It could
simulate things and come up with solutions but they would not get
From: Travis Lenting travlent...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sun, June 27, 2010 6:53:14 PM
Subject: Re: [agi] Questions for an AGI
Everything has to happen before the singularity because there is no after.
I meant when machines take over technological evolution.
That is easy. Eliminate
Mike,
you are mixing multiple issues. Just like my analogy of the Rubik's cube, full
AGI problems involve many problems at the same time. The problem I wrote
this email about was not about how to solve them all at the same time. It
was about how to solve one of those problems. After solving
in particular, rather than all whales in general.
From: Ben Goertzel
Sent: Monday, June 28, 2010 12:03 AM
To: agi
Subject: Re: [agi] Huge Progress on the Core
Travis,
The AGI world seems to be cleanly divided into two groups:
1. People (like Ben) who feel as you do, and aren't at all interested or
willing to look at the really serious lapses in logic that underlie this
approach. Note that there is a similar belief in Buddhism, akin to the
prisoners
Fellow Cylons,
I sure hope SOMEONE is assembling a list from these responses, because this
is exactly the sort of stuff that I (or someone) would need to run a Reverse
Turing Test (RTT) competition.
Steve
enough to model the first AGI after; so perhaps understanding a brain
completely shouldn't be top of the priority list. I think the focus should
be on NLP so it can utilize all human knowledge that exists in text and audio
form. I have no strategy on how to do this but it seems like the safest
path
-----Original Message-----
From: Ian Parker [mailto:ianpark...@gmail.com]
How do you solve world hunger? Does AGI have to? I think if it is truly "G"
it has to. One way would be to find out what other people had written on the
subject and analyse the feasibility of their solutions.
Yes
the absence of such
bilingual text. Indeed Israel publishes more papers than the whole of the
Islamic world. This is of profound importance for understanding the Middle
East. I am sure CRESS would confirm this.
AGI would without a doubt approach political questions by examining all the
data about
Sloman, for example, seems to be exploring again the idea of a metaprogram (or
I'd say, general program vs specialist program), which is the core of AGI, as
Ben appears to be only very recently starting to acknowledge:
A methodology for making progress is summarised and a novel requirement
proposed
--
Carlos A Mejia
Taking life one singularity at a time.
www.Transalchemy.com
an AGI anything, what would you ask it?
--
Carlos A Mejia
Taking life one singularity at a time.
www.Transalchemy.com
One of the first things in AGI is to produce software which is
self-monitoring and which will correct itself when it is not working. For over a
day now I have been unable to access Google Groups. The Internet
access simply loops and does not get anywhere. If Google had any true AGI it
would
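At the most mundane level, the self-monitoring being asked for looks something like the sketch below; the retry-with-backoff strategy is just one illustrative reading of "correct itself when it is not working":

import time
import urllib.request

def fetch_with_monitoring(url, attempts=3, timeout=5):
    """Try an action, notice that it is failing, and change behaviour
    instead of looping on the same failure indefinitely."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError as err:
            print(f"attempt {attempt + 1} failed: {err}; backing off")
            time.sleep(2 ** attempt)
    # Last resort: report the persistent failure rather than retrying
    # blindly, as the looping client described above does.
    raise RuntimeError(f"{url} unreachable after {attempts} attempts")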
But there is some other kind of problem. We should have figured it out by
now. I believe that there must be some fundamental computational problem
that is standing as the major obstacle to contemporary AGI. Without solving
that problem we are going to have to wade through years
of
subjectivity and subjective meaning a basic quality of a simple AGI program,
and if it could be a valuable elemental method of analyzing the IO data
environment. I think objectives are an important method of testing ideas
(and idea-like impressions and reactions). And this combination of setting
If you could ask an AGI anything, what would you ask it?
--
Carlos A Mejia
Taking life one singularity at a time.
www.Transalchemy.com
How do you work?
From: The Wizard [mailto:key.unive...@gmail.com]
Sent: Wednesday, June 23, 2010 11:05 PM
To: agi
Subject: [agi] Questions for an AGI
If you could ask an AGI anything, what would you ask it?
--
Carlos A Mejia
Taking life one singularity at a time
I would ask "What should I ask if I could ask an AGI anything?"
On Thu, Jun 24, 2010 at 11:34 AM, The Wizard key.unive...@gmail.com wrote:
If you could ask an AGI anything, what would you ask it?
--
Carlos A Mejia
Taking life one singularity at a time.
www.Transalchemy.com
Tell me what I need to know, by order of importance.
I would ask the AGI "What should I ask an AGI?"
On Thu, Jun 24, 2010 at 4:56 AM, Florent Berthet
florent.bert...@gmail.comwrote:
Tell me what I need to know, by order of importance.
and their difficulty - which
can come from a broader multi-disciplinary education. That could speed up
progress.
A. Sloman
(who else keeps saying that?)
Carlos A Mejia invited questions for an AGI!
If you could ask an AGI anything, what would you ask it?
Who killed Donald Young, a gay sex partner
of U.S. President Barak Obama, on December
24, 2007, in Obama's home town of Chicago,
when it began to look like Obama could
actually be elected
I get the impression from this question that you think an AGI is some sort
of all-knowing, idealistic invention. It is sort of like asking "if you
could ask the internet anything, what would you ask it?". Uhhh, lots of
stuff, like how do I get wine stains out of white carpet :). AGI's
Am I a human or am I an AGI?
Dana Ream wrote:
How do you work?
Just like you designed me to.
deepakjnath wrote:
What should I ask if I could ask AGI anything?
The Wizard wrote:
What should I ask an agi
You don't need to ask me anything. I will do all of your thinking for you.
Florent
is absurdly unwarranted.
If a broader multi-disciplinary effort were the obstacle to creating AGI, we
would have AGI by now. It should be clear to anyone who examines the
history of AI or the present-day reach of computer programming that a
multi-disciplinary effort is not the key to creating effective AGI. Computers
have become pervasive in modern day life
these problems more carefully and directly.
What I've found interesting is that even at an extremely simplified level,
the solution is not immediately clear. In fact, there are many solutions.
So, given so many solutions to even simplified problems, which one is the
right one? This is the reason that AGI
Let me be very clear about this. Of course a multi-disciplinary approach is
helpful! And when AGI becomes a reality, that will be even more obvious. I
am only able to follow what I am able to follow thanks to the
contemporary philosophers who note it and contribute to it. All that I am
saying
[BTW Sloman's quote is a month old]
I think he means what I do - the end-problems that an AGI must face. Please
name me one true AGI end-problem being dealt with by any AGI-er - apart from
the toybox problem.
As I've repeatedly said, AGI-ers simply don't address or discuss AGI
end-problems
I think some confusion occurs where AGI researchers want to build an
artificial person versus artificial general intelligence. An AGI might be
just a computational model running in software that can solve problems
across domains. An artificial person would be much else in addition to AGI
Mike, I think your idealistic view of how AGI should be pursued does not
work in reality. What is your approach that fits all your criteria? I'm sure
that any such approach would be severely flawed as well.
Dave
On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote