Re: [agi] Re: Huge Progress on the Core of AGI

2010-06-29 Thread Abram Demski
-- From: David Jones davidher...@gmail.com To: agi agi@v2.listbox.com Sent: Tue, June 29, 2010 9:05:45 AM Subject: [agi] Re: Huge Progress on the Core of AGI If anyone has any knowledge of or references to the state of the art in explanation-based reasoning, can you send me keywords

Re: [agi] Huge Progress on the Core of AGI

2010-06-28 Thread Mike Tintner
extremely kind - you do have a chance of winning the lottery. -- From: Michael Swan ms...@voyagergaming.com Sent: Monday, June 28, 2010 4:17 AM To: agi agi@v2.listbox.com Subject: Re: [agi] Huge Progress on the Core of AGI On Sun, 2010-06-27 at 19

Re: [agi] Theory of Hardcoded Intelligence

2010-06-28 Thread Ian Parker
knowledge into an AI. (Graph attached) Basically, the more hardcoded knowledge you include in an AI, or AGI, the lower the overall intelligence it will have, but the faster you will reach that value. I would expect any real AGI to be toward the left of the graph, with systems like CYC

Re: [agi] Questions for an AGI

2010-06-28 Thread Ian Parker
is that AGI enhances itself. Hence a singularity *without* AGI is a contradiction in terms. I did not quite get your syntax on reproduction, but it is perfectly true that you do not need a singularity for a Von Neumann machine. The singularity is a long way off yet; Obama is going to leave Afghanistan

Re: [agi] Questions for an AGI

2010-06-28 Thread Steve Richfield
as anything but a crime? Even these two extremes would have significant implementation problems. Anyway, I am sending you two back to kindergarten. Steve

[agi] A Primary Distinction for an AGI

2010-06-28 Thread Mike Tintner
The recent Core of AGI exchange has led me IMO to a beautiful conclusion - to one of the most basic distinctions a real AGI system must make, and also a simple way of distinguishing between narrow AI and real AGI projects of any kind. Consider - you have a) Dave's square moving across

Re: [agi] Questions for an AGI

2010-06-28 Thread Erdal Bektaş
as anything but a crime? Even these two extremes would have significant implementation problems. Anyway, I am sending you two back to kindergarten. Steve

Re: [agi] The problem with AGI per Sloman

2010-06-28 Thread Ian Parker
in with AGI? Well Peter, I don't think your empirical approach will get you to Saturn. There is the need for theoretical knowledge. How is this knowledge represented in AGI? It is represented in an abstract mathematical form, a form which describes a general planet and incorporates the inverse square law

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
by the liquid that contains them. Yet, viruses are arguably alive. Some plants or algae don't really move either. They may just grow in some direction, which is not quite the same as movement. Likewise, your analogy of this to AGI fails. You think there is a difference, but there is none. You may

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Jim Bromer
This presumption looks similar (in some profound way) to many of the presumptions that were tried in the early days of AI, partly because computers lacked memory and they were very slow. It's unreliable just because we need the AGI program to be able to consider situations when, for example, inanimate

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Jim Bromer
Well, I see that Mike did say normally move... so yes that type of principle could be used in a more flexible AGI program (although there is still a question about the use of any presumptions that go into this level of detail about their reference subjects. I would not use a primary reference

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
as well. The point is that AGI is not defined by any particular problem. It is defined by how you solve problems, even simple ones. Which is why your claim that my problems are not AGI is simply wrong. On Jun 28, 2010 12:22 PM, Jim Bromer jimbro...@gmail.com wrote: On Mon, Jun 28, 2010 at 11:15 AM

Re: [agi] Hutter - A fundamental misdirection?

2010-06-28 Thread rob levy
or any quantitative reasoning at all... Probably they solve it based on nearest-neighbor matching against past experiences cleaning other dirty floors with water in similarly sized and shaped buckets... -- ben g
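Ben's aside names a concrete mechanism: nearest-neighbor matching against remembered episodes. A minimal sketch of that idea in Python (the episode features and stored actions are invented for illustration, not taken from the thread):

from math import dist

# remembered episodes: (bucket_volume_liters, floor_area_m2) -> action that worked
past_episodes = {
    (10.0, 12.0): "use one bucket, mop in strips",
    (5.0, 30.0): "refill twice, rinse often",
}

def recall(situation):
    """Return the action from the most similar remembered episode."""
    nearest = min(past_episodes, key=lambda ep: dist(ep, situation))
    return past_episodes[nearest]

print(recall((9.0, 14.0)))  # -> use one bucket, mop in strips

No quantitative model of the task is ever built; the agent just reuses whatever episode is closest, which is exactly the contrast Ben is drawing with explicit reasoning.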

[agi] The true AGI Distinction

2010-06-28 Thread David Jones
In case anyone missed it... Problems are not AGI. Solutions are. And AGI is not the right adjective anyway. The correct word is general. In other words, generally applicable to other problems. I repeat, Mike, you are *wrong*. Did anyone miss that? To recap, it has nothing to do with what problem

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Jim Bromer
animate really the best way to describe living versus non-living? No? Given that the possibilities could quickly add up and given that they are not clearly defined, it presents a major problem of complexity to the would-be designer of a true AGI program. The problem is that it is just not feasible

Re: [agi] Questions for an AGI

2010-06-28 Thread Travis Lenting
What do you class as enhancement? I'm not talking about shoes making us run faster; I'm talking about direct brain interfacing that significantly increases a person's intelligence and would allow them to outsmart us all for their own good. The idea of the Singularity is that AGI enhances itself

Re: [agi] Questions for an AGI

2010-06-28 Thread Travis Lenting
but a crime? Even these two extremes would have significant implementation problems. Anyway, I am sending you two back to kindergarten. Steve

Re: [agi] Questions for an AGI

2010-06-28 Thread The Wizard
about direct brain interfacing that significantly increases a person's intelligence and would allow them to outsmart us all for their own good. The idea of the Singularity is that AGI enhances itself. Hence a singularity *without* AGI is a contradiction in terms. Does it really need to be able

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Russell Wallace
On Mon, Jun 28, 2010 at 4:54 PM, David Jones davidher...@gmail.com wrote: But, that's why it is important to force oneself to solve them in such a way that it IS applicable to AGI. It doesn't mean that you have to choose a problem that is so hard you can't cheat. It's unnecessary to do

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
On Mon, Jun 28, 2010 at 3:41 PM, Russell Wallace russell.wall...@gmail.com wrote: On Mon, Jun 28, 2010 at 4:54 PM, David Jones davidher...@gmail.com wrote: But, that's why it is important to force oneself to solve them in such a way that it IS applicable to AGI. It doesn't mean that you have

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
. So, to work on the full problem is practically impossible for me. Seeing as though there isn't a lot of support for AGI research such as this, I am much better served by proving the principle rather than implementing the full solution to the real problem. If I can even prove how vision works

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Russell Wallace
on the full problem is practically impossible for me. Seeing as though there isn't a lot of support for AGI research such as this, I am much better served by proving the principle rather than implementing the full solution to the real problem. If I can even prove how vision works on simple

Re: [agi] Hutter - A fundamental misdirection?

2010-06-28 Thread Steve Richfield
reasoning at all... Probably they solve it based on nearest-neighbor matching against past experiences cleaning other dirty floors with water in similarly sized and shaped buckets... -- ben g

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Mike Tintner
-and-predictable computers, we're all interested in building AGI computers that expect-unpredictability-and-can-react-unpredictably, right? (Which means being predicting-and-predictable some of the time too. The real world is complicated.) From: Jim Bromer Sent: Monday, June 28, 2010 6:35 PM

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Matt Mahoney
from dots, edges, color, and motion at the lowest levels, to complex objects like faces at the higher levels. Vision is integrated with lots of other knowledge sources. You see what you expect to see. The common theme is that real AGI consists of a learning algorithm, an opaque knowledge

Re: [agi] Huge Progress on the Core of AGI

2010-06-28 Thread Michael Swan
a chance of winning the lottery. -- From: Michael Swan ms...@voyagergaming.com Sent: Monday, June 28, 2010 4:17 AM To: agi agi@v2.listbox.com Subject: Re: [agi] Huge Progress on the Core of AGI On Sun, 2010-06-27 at 19:38 -0400, Ben Goertzel

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Ben Goertzel
interested in building AGI computers that expect-unpredictability-and-can-react-unpredictably, right? (Which means being predicting-and-predictable some of the time too. The real world is complicated.) From: Jim Bromer jimbro...@gmail.com Sent: Monday, June 28, 2010 6:35 PM To: agi agi@v2

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
levels, to complex objects like faces at the higher levels. Vision is integrated with lots of other knowledge sources. You see what you expect to see. The common theme is that real AGI consists of a learning algorithm, an opaque knowledge representation, and a vast amount of training data

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Michael Swan
;)

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Word of advice. You're creating your own artificial world here with its own artificial rules. AGI is about real vision of real objects in the real world. The two do not relate - or compute. It's a pity - it's good that you keep testing yourself, it's bad that they aren't realistic tests

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
Here are two case studies I've been analyzing from sensory perception of simplified visual input: The goal of the case studies is to answer the following: How do you generate the most likely motion hypothesis in a way that is general and applicable to AGI? Case Study 1) Here is a link

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
perception of simplified visual input: The goal of the case studies is to answer the following: How do you generate the most likely motion hypothesis in a way that is general and applicable to AGI? Case Study 1) Here is a link to an example: animated gif of two black squares move from left

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
perception of simplified visual input: The goal of the case studies is to answer the following: How do you generate the most likely motion hypothesis in a way that is general and applicable to AGI? Case Study 1) Here is a link to an example: animated gif of two black squares move from left

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
frames. The power of this method is immediately clear. It is general and it solves the problem very cleanly. Dave

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
potential, full of possibilities] To put it more succinctly, Dave, Ben, and Hutter are doing the wrong subject - narrow AI. Looking for the one right prediction/explanation is narrow AI. Being able to generate more and more possible explanations, which could all be valid, is AGI. The former

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
To put it more succinctly, Dave, Ben, and Hutter are doing the wrong subject - narrow AI. Looking for the one right prediction/explanation is narrow AI. Being able to generate more and more possible explanations, which could all be valid, is AGI. The former is rational, uniform thinking

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
AI. Being able to generate more and more possible explanations, which could all be valid, is AGI. The former is rational, uniform thinking. The latter is creative, polyform thinking. Or, if you prefer, it's convergent vs divergent thinking, the difference between which still seems to escape Dave

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
The fact that you are using experiment and the fact that you recognized that AGI needs to provide both explanation and expectations (differentiated from the false precision of 'prediction') shows that you have a grasp of some of the philosophical problems, but the fact that you would rely

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Your criticisms and Mike's criticisms are unjustified. This is a means to an end. Not an end in and of itself. *** :) Dave On Sun, Jun 27, 2010 at 12:12 PM, Jim Bromer jimbro...@gmail.com wrote: The fact that you are using experiment and the fact that you recognized that AGI needs

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
and more possible explanations, which could all be valid, is AGI. The former is rational, uniform thinking. The latter is creative, polyform thinking. Or, if you prefer, it's convergent vs divergent thinking, the difference between which still seems to escape Dave, Ben, and most AGI-ers. Well, I agree

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
This is wishful thinking. Wishful thinking is dangerous. How about instead of hoping that AGI won't destroy the world, you study the problem and come up with a safe design. -- Matt Mahoney, matmaho...@yahoo.com From: rob levy r.p.l...@gmail.com To: agi agi

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Try ping-pong - as per the computer game. Just a line (/bat) and a square (/ball) representing your opponent - and you have a line (/bat) to play against them. Now you've got a relatively simple true AGI visual problem - because if the opponent returns the ball somewhat as a real human AGI does

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
opponent - and you have a line (/bat) to play against them. Now you've got a relatively simple true AGI visual problem - because if the opponent returns the ball somewhat as a real human AGI does, (without the complexities of spin etc just presumably repeatedly changing the direction (and perhaps

Re: [agi] Questions for an AGI

2010-06-27 Thread rob levy
and potentially the only possible design (at least at first), depending on how creative and resourceful we get in cog sci/AGI in coming years. On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote: This is wishful thinking. Wishful thinking is dangerous. How about instead

Re: [agi] Questions for an AGI

2010-06-27 Thread The Wizard
This is wishful thinking. Wishful thinking is dangerous. How about instead of hoping that AGI won't destroy the world, you study the problem and come up with a safe design. Agreed on this dangerous thought! On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
I am working on logical satisfiability again. If what I am working on right now works, it will become a pivotal moment in AGI, and what's more, the method that I am developing will (probably) become a core method for AGI. However, if the idea I am working on does not -itself- lead to a major
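For readers new to the term: Boolean satisfiability (SAT) asks whether some truth assignment makes a propositional formula true, and it is NP-complete - which is why a generally fast method would indeed be pivotal. A toy brute-force checker for clause-form formulas (purely illustrative; Jim's actual approach is not described in the thread):

from itertools import product

def satisfiable(clauses, n_vars):
    # clauses: list of clauses; each literal is an int, +i for var i, -i for NOT var i
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign
    return None

# (x1 OR NOT x2) AND (x2 OR x3)
print(satisfiable([[1, -2], [2, 3]], 3))  # -> (False, False, True)

The brute-force loop is exponential in n_vars; the open question Jim alludes to is whether some structure lets one do radically better in general.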

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Well, Ben, I'm glad you're quite sure because you haven't given a single reason why. Clearly you should be Number One advisor on every Olympic team, because you've cracked the AGI problem of how to deal with opponents that can move (whether themselves or balls) in multiple, unpredictable

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
at 3:43 PM, Mike Tintner tint...@blueyonder.co.uk wrote: Well, Ben, I'm glad you're quite sure because you haven't given a single reason why. Clearly you should be Number One advisor on every Olympic team, because you've cracked the AGI problem of how to deal with opponents that can move

[agi] Reward function vs utility

2010-06-27 Thread Joshua Fox
between the two types of functions here? Joshua

RE: [agi] The problem with AGI per Sloman

2010-06-27 Thread John G. Rose
It's just that something like world hunger is so complex that AGI would have to master simpler problems. Also, there are many people and institutions that have solutions to world hunger already and they get ignored. So an AGI would have to get established over a period of time for anyone to really care

Re: [agi] The problem with AGI per Sloman

2010-06-27 Thread Ian Parker
On 27 June 2010 21:25, John G. Rose johnr...@polyplexic.com wrote: It's just that something like world hunger is so complex that AGI would have to master simpler problems. I am not sure that that follows necessarily. Computing is full of situations where a seemingly simple problem is not solved

Re: [agi] Reward function vs utility

2010-06-27 Thread Matt Mahoney
The definition of universal intelligence being over all utility functions implies that the utility function is unknown. Otherwise there is a fixed solution. -- Matt Mahoney, matmaho...@yahoo.com From: Joshua Fox joshuat...@gmail.com To: agi agi@v2
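For reference, the measure Matt is invoking is Legg and Hutter's universal intelligence, which averages a policy's value over all computable environments, weighted by simplicity (a standard statement of the definition, quoted for context rather than taken from this thread):

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

Here E is the set of computable reward environments, K(\mu) is the Kolmogorov complexity of \mu, and V_\mu^\pi is the expected cumulative reward of policy \pi in \mu. Because the sum ranges over every environment, no single utility function is fixed in advance - if it were, maximizing it would reduce to a fixed optimization problem, which is Matt's point.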

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
and maintain control over it. An example would be the internet. -- Matt Mahoney, matmaho...@yahoo.com From: rob levy r.p.l...@gmail.com To: agi agi@v2.listbox.com Sent: Sun, June 27, 2010 2:37:15 PM Subject: Re: [agi] Questions for an AGI I definitely agree, however we

Re: [agi] Questions for an AGI

2010-06-27 Thread Travis Lenting
On Sun, Jun 27, 2010 at 11:39 AM, The Wizard key.unive...@gmail.com wrote: This is wishful thinking. Wishful thinking is dangerous. How about instead of hoping that AGI won't destroy the world, you study the problem and come up with a safe design. Agreed on this dangerous thought! On Sun, Jun

[agi] Theory of Hardcoded Intelligence

2010-06-27 Thread M E
I sketched a graph the other day which represented my thoughts on the usefulness of hardcoding knowledge into an AI. (Graph attached) Basically, the more hardcoded knowledge you include in an AI, or AGI, the lower the overall intelligence it will have, but the faster you will reach

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
Lenting travlent...@gmail.com To: agi agi@v2.listbox.com Sent: Sun, June 27, 2010 5:21:24 PM Subject: Re: [agi] Questions for an AGI I don't like the idea of enhancing human intelligence before the singularity. I think crime has to be made impossible even for enhanced humans first. I think life

Re: [agi] Theory of Hardcoded Intelligence

2010-06-27 Thread Matt Mahoney
E botag...@hotmail.com To: agi agi@v2.listbox.com Sent: Sun, June 27, 2010 5:36:38 PM Subject: [agi] Theory of Hardcoded Intelligence I sketched a graph the other day which represented my thoughts on the usefulness of hardcoding knowledge into an AI. (Graph attached) Basically, the more

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
a sufficient number of observations, the choice of M doesn't matter and AIXI will eventually learn any computable reward pattern. However, choosing the right M can greatly accelerate learning. In the case of a physical AGI system, choosing M to incorporate the correct laws of physics would obviously
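For context, the AIXI agent under discussion picks actions by expectimax over all environment programs consistent with the interaction history, weighted by program length (Hutter's standard formulation, reproduced here for reference):

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m} [r_k + \ldots + r_m] \sum_{q\,:\,U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is the reference universal Turing machine, q ranges over environment programs, and \ell(q) is the length of q. Choosing the model class M amounts to choosing U: a machine biased toward physics-like programs leaves the asymptotic optimality argument intact but can shrink the constants, i.e. how much experience is needed before the right environment model dominates the mixture.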

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
court. I think you'll find that bumps up the variables if not unknowns massively. Plus just about every shot exchange presents you with dilemmas of how to place your shot and then move in anticipation of your opponent's return. Remember the object here is to present a would-be AGI with a simple

Re: [agi] Reward function vs utility

2010-06-27 Thread Ben Goertzel
of the difference between the two types of functions here? Joshua -- Ben Goertzel, PhD CEO

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Steve Richfield
might be interested in), and are hence easily eliminated. I suspect that neurons might be doing much the same, as could formulaic implementations like (most) present AGI efforts. This might explain natural architecture and guide human architectural efforts. In short, instead of a pot of neurons, we

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
computing something, but NOT what you might be interested in), and are hence easily eliminated. I suspect that neurons might be doing much the same, as could formulaic implementations like (most) present AGI efforts. This might explain natural architecture and guide human architectural efforts

Re: [agi] Questions for an AGI

2010-06-27 Thread Travis Lenting
Mahoney, matmaho...@yahoo.com -- From: Travis Lenting travlent...@gmail.com To: agi agi@v2.listbox.com Sent: Sun, June 27, 2010 5:21:24 PM Subject: Re: [agi] Questions for an AGI I don't like the idea of enhancing human intelligence before the singularity

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
massively. Plus just about every shot exchange presents you with dilemmas of how to place your shot and then move in anticipation of your opponent's return. Remember the object here is to present a would-be AGI with a simple but *unpredictable* object to deal with, reflecting the realities

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Steve Richfield
are wildly wrong (they are probably computing something, but NOT what you might be interested in), and are hence easily eliminated. I suspect that neurons might be doing much the same, as could formulaic implementations like (most) present AGI efforts. This might explain natural architecture

RE: [agi] The problem with AGI per Sloman

2010-06-27 Thread John G. Rose
-Original Message- From: Ian Parker [mailto:ianpark...@gmail.com] So an AGI would have to get established over a period of time for anyone to really care what it has to say about these types of issues. It could simulate things and come up with solutions but they would not get

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
Travis Lenting travlent...@gmail.com To: agi agi@v2.listbox.com Sent: Sun, June 27, 2010 6:53:14 PM Subject: Re: [agi] Questions for an AGI Everything has to happen before the singularity because there is no after. I meant when machines take over technological evolution. That is easy. Eliminate

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Mike, you are mixing multiple issues. Just like my analogy of the Rubik's Cube, full AGI problems involve many problems at the same time. The problem I wrote this email about was not about how to solve them all at the same time. It was about how to solve one of those problems. After solving

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Michael Swan
in particular, rather than all whales in general. From: Ben Goertzel Sent: Monday, June 28, 2010 12:03 AM To: agi Subject: Re: [agi] Huge Progress on the Core

Re: [agi] Questions for an AGI

2010-06-26 Thread Steve Richfield
Travis, The AGI world seems to be cleanly divided into two groups: 1. People (like Ben) who feel as you do, and aren't at all interested or willing to look at the really serious lapses in logic that underlie this approach. Note that there is a similar belief in Buddhism, akin to the prisoners

Re: [agi] Questions for an AGI

2010-06-26 Thread Steve Richfield
Fellow Cylons, I sure hope SOMEONE is assembling a list from these responses, because this is exactly the sort of stuff that I (or someone) would need to run a Reverse Turing Test (RTT) competition. Steve

Re: [agi] Questions for an AGI

2010-06-26 Thread Travis Lenting
enough to model the first AGI after; so perhaps understanding a brain completely shouldn't be top of the priority list. I think the focus should be on NLP so it can utilize all human knowledge that exists in text and audio form. I have no strategy on how to do this but it seems like the safest path

RE: [agi] The problem with AGI per Sloman

2010-06-26 Thread John G. Rose
-Original Message- From: Ian Parker [mailto:ianpark...@gmail.com] How do you solve World Hunger? Does AGI have to? I think if it is truly G it has to. One way would be to find out what other people had written on the subject and analyse the feasibility of their solutions. Yes

Re: [agi] Questions for an AGI

2010-06-26 Thread rob levy

Re: [agi] The problem with AGI per Sloman

2010-06-26 Thread Ian Parker
the absence of such bilingual text. Indeed Israel publishes more papers than the whole of the Islamic world. This is of profound importance for understanding the Middle East. I am sure CRESS would confirm this. AGI would without a doubt approach political questions by examining all the data about

[agi] Huge Progress on the Core of AGI

2010-06-26 Thread David Jones
are two case studies I've been analyzing from sensory perception of simplified visual input: The goal of the case studies is to answer the following: How do you generate the most likely motion hypothesis in a way that is general and applicable to AGI? Case Study 1) Here is a link to an example
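David's two-squares case study reduces to choosing among candidate correspondences between the objects in frame 1 and frame 2. A minimal sketch of one common way to rank such hypotheses: enumerate every one-to-one matching and prefer the least total motion (a smallest-motion prior). The scoring rule and representation here are illustrative assumptions, not David's actual method:

from itertools import permutations

def motion_hypotheses(frame1, frame2):
    # frame1, frame2: lists of (x, y) object positions, same length
    # yields (matching, score); lower score = less total squared motion
    for perm in permutations(range(len(frame2))):
        moves = [(frame2[j][0] - frame1[i][0], frame2[j][1] - frame1[i][1])
                 for i, j in enumerate(perm)]
        score = sum(dx * dx + dy * dy for dx, dy in moves)
        yield list(enumerate(perm)), score

# two black squares both shifting right by 10 pixels
f1 = [(0, 0), (50, 0)]
f2 = [(10, 0), (60, 0)]
print(min(motion_hypotheses(f1, f2), key=lambda h: h[1]))
# -> ([(0, 0), (1, 1)], 200): each square matched to its own shifted copy

The hypothesis that each square moved 10 pixels beats the one where the squares swapped places, which matches the intuition the case study is probing.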

Re: [agi] The problem with AGI per Sloman

2010-06-25 Thread Mike Tintner
Sloman, for example, seems to be exploring again the idea of a metaprogram (or I'd say, general program vs specialist program), which is the core of AGI, as Ben appears to be only very recently starting to acknowledge: A methodology for making progress is summarised and a novel requirement proposed

[agi] AGI Alert: DARPA wants quintillion-speed computers

2010-06-25 Thread The Wizard
-- Carlos A Mejia Taking life one singularity at a time. www.Transalchemy.com

[agi] Re: AGI Alert: DARPA wants quintillion-speed computers

2010-06-25 Thread The Wizard
one singularity at a time. www.Transalchemy.com

Re: [agi] Questions for an AGI

2010-06-25 Thread Travis Lenting
an AGI anything, what would you ask it? -- Carlos A Mejia Taking life one singularity at a time. www.Transalchemy.com

Re: [agi] Questions for an AGI

2010-06-25 Thread Ian Parker
One of the first things in AGI is to produce software which is self monitoring and which will correct itself when it is not working. For over a day now I have been unable to access Google Groups. The Internet access simply loops and does not get anywhere. If Google had any true AGI it would

Re: [agi] The problem with AGI per Sloman

2010-06-25 Thread rob levy
But there is some other kind of problem. We should have figured it out by now. I believe that there must be some fundamental computational problem that is standing as the major obstacle to contemporary AGI. Without solving that problem we are going to have to wade through years

Re: [agi] The problem with AGI per Sloman

2010-06-25 Thread Jim Bromer
of subjectivity and subjective meaning a basic quality of a simple AGI program, and if it could be a valuable elemental method of analyzing the IO data environment. I think objectives are an important method of testing ideas (and idea-like impressions and reactions). And this combination of setting

[agi] Questions for an AGI

2010-06-24 Thread The Wizard
If you could ask an AGI anything, what would you ask it? -- Carlos A Mejia Taking life one singularity at a time. www.Transalchemy.com

RE: [agi] Questions for an AGI

2010-06-24 Thread Dana Ream
How do you work? From: The Wizard [mailto:key.unive...@gmail.com] Sent: Wednesday, June 23, 2010 11:05 PM To: agi Subject: [agi] Questions for an AGI If you could ask an AGI anything, what would you ask it? -- Carlos A Mejia Taking life one singularity at a time

Re: [agi] Questions for an AGI

2010-06-24 Thread deepakjnath
I would ask "What should I ask if I could ask AGI anything?" On Thu, Jun 24, 2010 at 11:34 AM, The Wizard key.unive...@gmail.com wrote: If you could ask an AGI anything, what would you ask it? -- Carlos A Mejia Taking life one singularity at a time. www.Transalchemy.com

Re: [agi] Questions for an AGI

2010-06-24 Thread Florent Berthet
Tell me what I need to know, by order of importance.

Re: [agi] Questions for an AGI

2010-06-24 Thread The Wizard
I would ask the AGI "What should I ask an AGI?" On Thu, Jun 24, 2010 at 4:56 AM, Florent Berthet florent.bert...@gmail.com wrote: Tell me what I need to know, by order of importance.

[agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
and their difficulty - which can come from a broader multi-disciplinary education. That could speed up progress. A. Sloman (who else keeps saying that?)

Re: [agi] Questions for an AGI

2010-06-24 Thread A. T. Murray
Carlos A Mejia invited questions for an AGI! If you could ask an AGI anything, what would you ask it? Who killed Donald Young, a gay sex partner of U.S. President Barack Obama, on December 24, 2007, in Obama's home town of Chicago, when it began to look like Obama could actually be elected

Re: [agi] Questions for an AGI

2010-06-24 Thread David Jones
I get the impression from this question that you think an AGI is some sort of all-knowing, idealistic invention. It is sort of like asking "If you could ask the internet anything, what would you ask it?" Uhhh, lots of stuff, like how do I get wine stains out of white carpet :). AGI's

Re: [agi] Questions for an AGI

2010-06-24 Thread Matt Mahoney
Am I a human or am I an AGI? Dana Ream wrote: How do you work? Just like you designed me to. deepakjnath wrote: What should I ask if I could ask AGI anything? The Wizard wrote: What should I ask an agi You don't need to ask me anything. I will do all of your thinking for you. Florent

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Jim Bromer
is absurdly unwarranted. If a broader multi-disciplinary effort was the obstacle to creating AGI, we would have AGI by now. It should be clear to anyone who examines the history of AI or the present day reach of computer programming that a multi-discipline effort is not the key to creating effective AGI

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Ben Goertzel
to creating AGI, we would have AGI by now. It should be clear to anyone who examines the history of AI or the present day reach of computer programming that a multi-discipline effort is not the key to creating effective AGI. Computers have become pervasive in modern day life

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread David Jones
these problems more carefully and directly. What I've found interesting is that even at an extremely simplified level, the solution is not immediately clear. In fact, there are many solutions. So, given so many solutions to even simplified problems, which one is the right one? This is the reason that AGI

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Jim Bromer
Let me be very clear about this. Of course a multi-disciplinary approach is helpful! And when AGI becomes a reality, that will be even more obvious. I am only able to follow what I am able to follow thanks to the contemporary philosophers who note it and contribute to it. All that I am saying

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
[BTW Sloman's quote is a month old] I think he means what I do - the end-problems that an AGI must face. Please name me one true AGI end-problem being dealt with by any AGI-er - apart from the toybox problem. As I've repeatedly said - AGI-ers simply don't address or discuss AGI end-problems

RE: [agi] The problem with AGI per Sloman

2010-06-24 Thread John G. Rose
I think some confusion occurs where AGI researchers want to build an artificial person versus artificial general intelligence. An AGI might be just a computational model running in software that can solve problems across domains. An artificial person would be much else in addition to AGI

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread David Jones
Mike, I think your idealistic view of how AGI should be pursued does not work in reality. What is your approach that fits all your criteria? I'm sure that any such approach would be severely flawed as well. Dave On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote
