Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Tim Freeman
From: Derek Zahn [EMAIL PROTECTED] Date: Sun, 30 Sep 2007 08:57:53 -0600 ... One thing that could improve safety is to reject the notion that AGI projects should be focused on, or even capable of, recursive self improvement in the sense of reprogramming its core implementation. ... Let's take

RE: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Derek Zahn
Tim Freeman writes: Let's take Novamente as an example. ... It cannot improve itself until the following things happen: 1) It acquires the knowledge and skills to become a competent programmer, a task that takes a human many years of directed training and practical experience. 2) It is

RE: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Derek Zahn
Tim Freeman: No value is added by introducing considerations about self-reference into conversations about the consequences of AI engineering. Junior geeks do find it impressive, though. The point of that conversation was to illustrate that if people are worried about Seed AI exploding, then

Re: [META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-12 Thread Linas Vepstas
On Wed, Oct 10, 2007 at 01:22:26PM -0400, Richard Loosemore wrote: Am I the only one, or does anyone else agree that politics/political theorising is not appropriate on the AGI list? Yes, and I'm sorry I triggered the thread. I particularly object to libertarianism being shoved down our

Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Vladimir Nesov
Derek, Tim, There is no oversight: self-improvement doesn't necessarily refer to the actual instance of the self that is to be improved, but to the AGI's design. The next thing must be better than the previous one for runaway progress to happen, and one way of doing it is for the next thing to be a refinement of

Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Linas Vepstas
Let's take Novamente as an example. ... It cannot improve itself until the following things happen: 1) It acquires the knowledge and skills to become a competent programmer, a task that takes a human many years of directed training and practical experience. Wrong. This was hashed to

Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Tim Freeman
From: Derek Zahn [EMAIL PROTECTED] You seem to think that self-reference buys you nothing at all since it is a simple matter for the first AGI projects to reinvent their own equivalent from scratch, but I'm not sure that's true. The from scratch part is a straw-man argument. The AGI project will

RE: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Derek Zahn
Linas Vepstas: Let's take Novamente as an example. ... It cannot improve itself until the following things happen: 1) It acquires the knowledge and skills to become a competent programmer, a task that takes a human many years of directed training and practical experience. Wrong. This

Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Eliezer S. Yudkowsky
Tim Freeman wrote: My point is that if one is worried about a self-improving Seed AI exploding, one should also be worried about any AI that competently writes software exploding. There *is* a slight gap between competently writing software and competently writing minds. Large by human

Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Lukasz Stafiniak
On 10/12/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: some of us are much impressed by it. Anyone with even a surface grasp of the basic concept on a math level will realize that there's no difference between self-modifying and writing an outside copy of yourself, but *either one*
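
The equivalence pointed at here can be sketched in a few lines of Python (a toy of my own, not code from the thread): a program that never patches its own running code can still "self-improve" by emitting and instantiating a successor built from its own source template, ending in the same state an in-place modification would reach.

```python
# Toy sketch: "writing an outside copy of yourself" reaches the same
# end state as in-place self-modification. SOURCE plays the role of
# the program's own design; STEP is the parameter being "improved".
SOURCE = 'def answer():\n    return STEP\n'

def build(step):
    """Instantiate the next generation of the program, with an
    improved parameter baked into its generated source."""
    namespace = {}
    exec(SOURCE.replace("STEP", repr(step)), namespace)
    return namespace["answer"]

gen1 = build(1)
gen2 = build(gen1() + 1)  # the running program writes its successor
```

The first generation is left untouched; the "improvement" lives entirely in the outside copy, yet the lineage behaves exactly as a self-modifying program would.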

Re: [META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-11 Thread Bob Mottram
On 10/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: Am I the only one, or does anyone else agree that politics/political theorising is not appropriate on the AGI list? Agreed. There are many other forums where political ideology can be debated. - This list is sponsored by AGIRI:

Re: [META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-11 Thread JW Johnston
-to-market effect [WAS Re: [agi] Religion-free technical content] On 10/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: Am I the only one, or does anyone else agree that politics/political theorising is not appropriate on the AGI list? Agreed. There are many other forums where political

Re: [META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-11 Thread a
- From: Bob Mottram [EMAIL PROTECTED] Sent: Oct 11, 2007 11:12 AM To: agi@v2.listbox.com Subject: Re: [META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content] On 10/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: Am I

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-10 Thread Robert Wensman
The only solution to this problem I ever see suggested is to intentionally create a Really Big Fish called the government that can effortlessly eat every fish in the pond but promises not to -- to prevent the creation of Really Big Fish. That is quite the Faustian bargain to protect

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-10 Thread Eric Baum
BillK On 10/6/07, a wrote: I am skeptical that economies follow the self-organized criticality behavior. There aren't any examples. Some would cite the Great Depression, but it was caused by the malinvestment created by Central Banks. e.g. The Federal Reserve System. See the Austrian

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-10 Thread J. Andrew Rogers
On Oct 10, 2007, at 2:26 AM, Robert Wensman wrote: Yes, of course, the Really Big Fish that is democracy. No, you got this quite wrong. The Really Big Fish is the institution responsible for governance (usually the government); democracy is merely a fuzzy category of rule set used in

[META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-10 Thread Richard Loosemore
Am I the only one, or does anyone else agree that politics/political theorising is not appropriate on the AGI list? I particularly object to libertarianism being shoved down our throats, not so much because I disagree with it, but because so much of the singularity / extropian / futurist

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-09 Thread Robert Wensman
(off topic, but there is something relevant for AGI) My fears about economic libertarianism could be illustrated with a fish pond analogy. If there is a small pond with a large number of small fish of some predatory species, after an amount of time they will cannibalize and eat each other

Re: [agi] Religion-free technical content breaking the small hardware mindset

2007-10-09 Thread a
With googling, I found that older people have lower IQ http://www.sciencedaily.com/releases/2006/05/060504082306.htm IMO, the brain is like a muscle, not an organ. IQ is said to be highly genetic, and the heritability increases with age. Perhaps older people do not have much mental

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-09 Thread J. Andrew Rogers
On Oct 9, 2007, at 4:27 AM, Robert Wensman wrote: This is of course just an illustration and by no means a proof that the same thing would occur in a laissez-faire/libertarianism economy. Libertarians commonly put blame for monopolies on government involvement, and I guess some would

RE: [agi] Religion-free technical content breaking the small hardware mindset

2007-10-09 Thread Edward W. Porter
To: agi@v2.listbox.com Subject: Re: [agi] Religion-free technical content breaking the small hardware mindset With googling, I found that older people have lower IQ http://www.sciencedaily.com/releases/2006/05/060504082306.htm IMO, the brain is like a muscle, not an organ. IQ is said to be highly

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-09 Thread Eliezer S. Yudkowsky
J. Andrew Rogers wrote: Generally though, the point that you fail to see is that an AGI can just as easily subvert *any* power structure, whether the environment is a libertarian free market or an autocratic communist state. The problem has nothing to do with the governance of the

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-08 Thread Bob Mottram
Economic libertarianism would be nice if it were to occur. However, in practice companies and governments put in place all sorts of anti-competitive structures to lock people into certain modes of economic activity. I think economic activity in general is heavily influenced by cognitive biases

Re: [agi] Religion-free technical content

2007-10-08 Thread Charles D Hixson
Derek Zahn wrote: Richard Loosemore: a... I often see it assumed that the step between first AGI is built (which I interpret as a functioning model showing some degree of generally-intelligent behavior) and god-like powers dominating the planet is a short one. Is that really likely? Nobody

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-08 Thread Charles D Hixson
a wrote: Linas Vepstas wrote: ... The issue is that there's no safety net protecting against avalanches of unbounded size. The other issue is that it's not grains of sand, it's people. My bank-account and my brains can insulate me from small shocks. I'd like to have protection against the

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-08 Thread a
Bob Mottram wrote: Economic libertarianism would be nice if it were to occur. However, in practice companies and governments put in place all sorts of anti-competitive structures to lock people into certain modes of economic activity. I think economic activity in general is heavily influenced

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-08 Thread Linas Vepstas
On Sat, Oct 06, 2007 at 10:05:28AM -0400, a wrote: I am skeptical that economies follow the self-organized criticality behavior. Oh. Well, I thought this was a basic principle, commonly cited in microeconomics textbooks: when there's a demand, producers rush to fill the demand. When there's

RE: [agi] Religion-free technical content breaking the small hardware mindset

2007-10-08 Thread Edward W. Porter
) 494-1722 Fax (617) 494-1822 [EMAIL PROTECTED] -Original Message- From: a [mailto:[EMAIL PROTECTED] Sent: Saturday, October 06, 2007 10:00 AM To: agi@v2.listbox.com Subject: Re: [agi] Religion-free technical content breaking the small hardware mindset Edward W. Porter wrote: It's also

Re: [agi] Religion-free technical content breaking the small hardware mindset

2007-10-06 Thread a
Edward W. Porter wrote: It's also because the average person loses 10 points in IQ between the mid-twenties and mid-forties and another ten points between the mid-forties and sixty. (Help! I'm 59.) But this is just the average. Some people hang on to their marbles as they age better than

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-06 Thread a
Linas Vepstas wrote: My objection to economic libertarianism is its lack of discussion of self-organized criticality. A common example of self-organized criticality is a sand-pile at the critical point. Adding one grain of sand can trigger an avalanche, which can be small, or maybe
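
The sand-pile described here is the classic Bak-Tang-Wiesenfeld model of self-organized criticality. A minimal simulation (my own sketch, assuming a small grid with open boundaries, not code from the thread) shows the signature behaviour: most added grains do nothing, while a few trigger avalanches of wildly varying size.

```python
import random

def drop_grain(grid, n, rng):
    """Add one grain at a random site, then relax: any cell holding
    4+ grains sheds one to each of its 4 neighbours (grains fall off
    the open edges). Returns the avalanche size in topplings."""
    i, j = rng.randrange(n), rng.randrange(n)
    grid[i][j] += 1
    size = 0
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4
        size += 1
        stack.append((x, y))  # may still hold 4+ grains
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n:
                grid[nx][ny] += 1
                stack.append((nx, ny))
    return size

n, rng = 15, random.Random(0)
grid = [[0] * n for _ in range(n)]
sizes = [drop_grain(grid, n, rng) for _ in range(10000)]
```

Once the pile reaches the critical state, the avalanche-size distribution is heavy-tailed: many zero-size events punctuated by occasional cascades spanning much of the grid, with no safety net bounding their size.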

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-06 Thread BillK
On 10/6/07, a wrote: I am skeptical that economies follow the self-organized criticality behavior. There aren't any examples. Some would cite the Great Depression, but it was caused by the malinvestment created by Central Banks. e.g. The Federal Reserve System. See the Austrian Business Cycle

Re: [agi] Religion-free technical content

2007-10-05 Thread Mark Waser
. Simple. Unambiguous. Impossible to implement. (And not my proposal) - Original Message - From: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, October 04, 2007 7:26 PM Subject: **SPAM** Re: [agi] Religion-free technical content --- Mark Waser [EMAIL PROTECTED

Re: [agi] Religion-free technical content

2007-10-05 Thread Matt Mahoney
: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, October 04, 2007 7:26 PM Subject: **SPAM** Re: [agi] Religion-free technical content --- Mark Waser [EMAIL PROTECTED] wrote: I'll repeat again since you don't seem to be paying attention to what I'm saying

Re: [agi] Religion-free technical content

2007-10-05 Thread Mike Dougherty
On 10/5/07, Mark Waser [EMAIL PROTECTED] wrote: Then I guess we are in perfect agreement. Friendliness is what the average person would do. Which one of the words in And not my proposal wasn't clear? As far as I am concerned, friendliness is emphatically not what the average person

Re: [agi] Religion-free technical content

2007-10-05 Thread Mark Waser
Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Friday, October 05, 2007 10:40 AM Subject: **SPAM** Re: [agi] Religion-free technical content --- Mark Waser [EMAIL PROTECTED] wrote: Then state the base principles or the algorithm that generates them, without ambiguity and without

Re: [agi] Religion-free technical content

2007-10-05 Thread Matt Mahoney
--- Mike Dougherty [EMAIL PROTECTED] wrote: On 10/5/07, Mark Waser [EMAIL PROTECTED] wrote: Then I guess we are in perfect agreement. Friendliness is what the average person would do. Which one of the words in And not my proposal wasn't clear? As far as I am concerned,

Re: [agi] Religion-free technical content

2007-10-05 Thread Linas Vepstas
On Tue, Oct 02, 2007 at 03:03:35PM -0400, Mark Waser wrote: Do you really think you can show an example of a true moral universal? Thou shalt not destroy the universe. Thou shalt not kill every living and/or sentient being including yourself. Thou shalt not kill every living and/or sentient

Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-05 Thread Linas Vepstas
On Thu, Oct 04, 2007 at 07:49:20AM -0400, Richard Loosemore wrote: As to exactly how, I don't know, but since the AGI is, by assumption, peaceful, friendly and non-violent, it will do it in a peaceful, friendly and non-violent manner. I like to think of myself as peaceful and non-violent,

Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-05 Thread a
Linas Vepstas wrote: On Thu, Oct 04, 2007 at 07:49:20AM -0400, Richard Loosemore wrote: As to exactly how, I don't know, but since the AGI is, by assumption, peaceful, friendly and non-violent, it will do it in a peaceful, friendly and non-violent manner. I like to think of myself as

Re: [agi] Religion-free technical content breaking the small hardware mindset

2007-10-05 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 08:39:18PM -0400, Edward W. Porter wrote: the IQ bell curve is not going down. The evidence is it's going up. So that's why us old folks 'r gettin' stupider as compared to them's young'uns. --linas

Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-05 Thread Linas Vepstas
OK, this is very off-topic. Sorry. On Fri, Oct 05, 2007 at 06:36:34PM -0400, a wrote: Linas Vepstas wrote: For the most part, modern western culture espouses and hews to physical non-violence. However, modern right-leaning pure capitalism advocates not only social Darwinism, but also the

RE: [agi] Religion-free technical content breaking the small hardware mindset

2007-10-05 Thread Edward W. Porter
.) Edward W. Porter Porter Associates 24 String Bridge S12 Exeter, NH 03833 (617) 494-1722 Fax (617) 494-1822 [EMAIL PROTECTED] -Original Message- From: Linas Vepstas [mailto:[EMAIL PROTECTED] Sent: Friday, October 05, 2007 7:31 PM To: agi@v2.listbox.com Subject: Re: [agi] Religion

The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Richard Loosemore
Linas Vepstas wrote: On Tue, Oct 02, 2007 at 01:20:54PM -0400, Richard Loosemore wrote: When the first AGI is built, its first actions will be to make sure that nobody is trying to build a dangerous, unfriendly AGI. Yes, OK, granted, self-preservation is a reasonable character trait. After

Small amounts of Complexity [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Richard Loosemore
Linas Vepstas wrote: On Wed, Oct 03, 2007 at 12:20:10PM -0400, Richard Loosemore wrote: Second, You mention the 3-body problem in Newtonian mechanics. Although I did not use it as such in the paper, this is my poster child of a partial complex system. I often cite the case of planetary

Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Bob Mottram
On 04/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: Linas Vepstas wrote: Um, why, exactly, are you assuming that the first one will be friendly? The desire for self-preservation, by e.g. rooting out and exterminating all (potentially unfriendly) competing AGI, would not be what I'd

Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Richard Loosemore
Bob Mottram wrote: On 04/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: Linas Vepstas wrote: Um, why, exactly, are you assuming that the first one will be friendly? The desire for self-preservation, by e.g. rooting out and exterminating all (potentially unfriendly) competing AGI, would

Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Bob Mottram
On 04/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: As to exactly how, I don't know, but since the AGI is, by assumption, peaceful, friendly and non-violent, it will do it in a peaceful, friendly and non-violent manner. This seems very vague. I would suggest that if there is no clear

Re: [agi] Religion-free technical content

2007-10-04 Thread Mark Waser
I mean that ethics or friendliness is an algorithmically complex function, like our legal system. It can't be simplified. The determination of whether a given action is friendly or ethical or not is certainly complicated but the base principles are actually pretty darn simple. However, I

Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Richard Loosemore
Bob Mottram wrote: On 04/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: As to exactly how, I don't know, but since the AGI is, by assumption, peaceful, friendly and non-violent, it will do it in a peaceful, friendly and non-violent manner. This seems very vague. I would suggest that if

Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Bob Mottram
To me this seems like elevating that status of nanotech to magic. Even given RSI and the ability of the AGI to manufacture new computing resources it doesn't seem clear to me how this would enable it to prevent other AGIs from also reaching RSI capability. Presumably lesser techniques means black

Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread BillK
On 10/4/07, Bob Mottram [EMAIL PROTECTED] wrote: To me this seems like elevating that status of nanotech to magic. Even given RSI and the ability of the AGI to manufacture new computing resources it doesn't seem clear to me how this would enable it to prevent other AGIs from also reaching RSI

Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread J Storrs Hall, PhD
On Thursday 04 October 2007 11:50:21 am, Bob Mottram wrote: To me this seems like elevating that status of nanotech to magic. Even given RSI and the ability of the AGI to manufacture new computing resources it doesn't seem clear to me how this would enable it to prevent other AGIs from also

Re: [agi] Religion-free technical content

2007-10-04 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: I mean that ethics or friendliness is an algorithmically complex function, like our legal system. It can't be simplified. The determination of whether a given action is friendly or ethical or not is certainly complicated but the base principles

Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Vladimir Nesov
On 10/4/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: We can't build a system that learns as fast as a 1-year-old just now. Which is our most likely next step: (a) A system that does learn like a 1-year-old, or (b) a system that can learn 1000 times as fast as an adult? Following Moore's

Re: [agi] Religion-free technical content

2007-10-04 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: I'll repeat again since you don't seem to be paying attention to what I'm saying -- The determination of whether a given action is friendly or ethical or not is certainly complicated but the base principles are actually pretty darn simple. Then state

Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
On Tuesday 02 October 2007 08:46:43 pm, Richard Loosemore wrote: J Storrs Hall, PhD wrote: I find your argument quotidian and lacking in depth. ... What you said above was pure, unalloyed bullshit: an exquisite cocktail of complete technical ignorance, patronizing insults and breathtaking

Re: [agi] Religion-free technical content

2007-10-03 Thread Mark Waser
So do you claim that there are universal moral truths that can be applied unambiguously in every situation? What a stupid question. *Anything* can be ambiguous if you're clueless. The moral truth of Thou shalt not destroy the universe is universal. The ability to interpret it and apply it

Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
On Tuesday 02 October 2007 05:50:57 pm, Edward W. Porter wrote: The below is a good post: Thank you! I have one major question for Josh. You said “PRESENT-DAY TECHNIQUES CAN DO MOST OF THE THINGS THAT AN AI NEEDS TO DO, WITH THE EXCEPTION OF COMING UP WITH NEW REPRESENTATIONS AND

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Mon, Oct 01, 2007 at 10:40:53AM -0400, Edward W. Porter wrote: [...] RSI (Recursive Self Improvement) [...] I didn't know exactly what the term covers. So could you, or someone, please define exactly what its meaning is? Is it any system capable of learning how to improve its current

Re: [agi] Religion-free technical content

2007-10-03 Thread Richard Loosemore
I criticised your original remarks because they demonstrated a complete lack of understanding of what complex systems actually are. You said things about complex systems that were, quite frankly, ridiculous: Turing-machine equivalence, for example, has nothing to do with this. In your more

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
03833 (617) 494-1722 Fax (617) 494-1822 [EMAIL PROTECTED] -Original Message- From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED] Sent: Wednesday, October 03, 2007 10:14 AM To: agi@v2.listbox.com Subject: Re: [agi] Religion-free technical content On Tuesday 02 October 2007 05:50:57 pm

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
: Linas Vepstas [mailto:[EMAIL PROTECTED] Sent: Wednesday, October 03, 2007 12:19 PM To: agi@v2.listbox.com Subject: Re: [agi] Religion-free technical content On Mon, Oct 01, 2007 at 10:40:53AM -0400, Edward W. Porter wrote: [...] RSI (Recursive Self Improvement) [...] I didn't know exactly what

Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
Thanks! It's worthwhile being specific about levels of interpretation in the discussion of self-modification. I can write self-modifying assembly code that yet does not change the physical processor, or even its microcode if it's one of those old architectures. I can write a self-modifying
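
The point about levels of interpretation has a direct high-level analogue (a toy of my own, not code from the thread): rebinding a function in its own module "modifies the program" at the interpreter level, while every level below it, the source file, the interpreter binary, the CPU, stays exactly as it was.

```python
def greet():
    return "generation 1"

def upgrade():
    """Self-modification at one level of interpretation: swap the
    module-level binding of greet() for a new version. Nothing below
    this level -- the source file, the interpreter, the hardware --
    is touched."""
    def greet_v2():
        return "generation 2"
    globals()["greet"] = greet_v2

before = greet()
upgrade()
after = greet()
```

The same program name now dispatches to new behaviour, which is all "self-modification" means at this level; the analogy to self-modifying assembly on a fixed processor is deliberate.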

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
Associates 24 String Bridge S12 Exeter, NH 03833 (617) 494-1722 Fax (617) 494-1822 [EMAIL PROTECTED] -Original Message- From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED] Sent: Wednesday, October 03, 2007 3:21 PM To: agi@v2.listbox.com Subject: Re: [agi] Religion-free technical content

Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Dougherty
On 10/3/07, Edward W. Porter [EMAIL PROTECTED] wrote: In fact, if the average AI post-grad of today had such hardware to play with, things would really start jumping. Within ten years the equivents of such machines could easily be sold for somewhere between $10k and $100k, and lots of

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 02:00:03PM -0400, Edward W. Porter wrote: From what you say below it would appear human-level AGI would not require recursive self improvement, [...] A lot of people on this list seem to hang a lot on RSI, as they use it, implying it is necessary for human-level AGI.

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
. Porter Porter Associates 24 String Bridge S12 Exeter, NH 03833 (617) 494-1722 Fax (617) 494-1822 [EMAIL PROTECTED] -Original Message- From: Mike Dougherty [mailto:[EMAIL PROTECTED] Sent: Wednesday, October 03, 2007 5:20 PM To: agi@v2.listbox.com Subject: Re: [agi] Religion-free

Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Tintner
RE: [agi] Religion-free technical content Edward Porter: I don't know about you, but I think there are actually a lot of very bright people in the interrelated fields of AGI, AI, Cognitive Science, and Brain science. There are also a lot of very good ideas floating around. Yes there are bright

Re: [agi] Religion-free technical content

2007-10-03 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: So do you claim that there are universal moral truths that can be applied unambiguously in every situation? What a stupid question. *Anything* can be ambiguous if you're clueless. The moral truth of Thou shalt not destroy the universe is

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
, 2007 5:51 PM To: agi@v2.listbox.com Subject: Re: [agi] Religion-free technical content On Wed, Oct 03, 2007 at 02:00:03PM -0400, Edward W. Porter wrote: From what you say below it would appear human-level AGI would not require recursive self improvement, [...] A lot of people on this list seem

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 06:31:35PM -0400, Edward W. Porter wrote: One of them once told me that in Japan it was common for high school boys who were interested in math, science, or business to go to abacus classes after school or on weekends. He said once they fully mastered using physical

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Tue, Oct 02, 2007 at 01:20:54PM -0400, Richard Loosemore wrote: When the first AGI is built, its first actions will be to make sure that nobody is trying to build a dangerous, unfriendly AGI. Yes, OK, granted, self-preservation is a reasonable character trait. After that point, the

Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
On Wednesday 03 October 2007 06:21:46 pm, Mike Tintner wrote: Yes there are bright people in AGI. But there's no one remotely close to the level, say, of von Neumann or Turing, right? And do you really think a revolution such as AGI is going to come about without that kind of revolutionary,

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 12:20:10PM -0400, Richard Loosemore wrote: Second, You mention the 3-body problem in Newtonian mechanics. Although I did not use it as such in the paper, this is my poster child of a partial complex system. I often cite the case of planetary system dynamics as an

RE: [agi] Religion-free technical content breaking the small hardware mindset

2007-10-03 Thread Edward W. Porter
- From: Mike Tintner [mailto:[EMAIL PROTECTED] Sent: Wednesday, October 03, 2007 6:22 PM To: agi@v2.listbox.com Subject: Re: [agi] Religion-free technical content Edward Porter: I don't know about you, but I think there are actually a lot of very bright people in the interrelated fields of AGI

Re: [agi] Religion-free technical content breaking the small hardware mindset

2007-10-03 Thread Russell Wallace
On 10/4/07, Edward W. Porter [EMAIL PROTECTED] wrote: The biggest brick wall is the small-hardware mindset that has been absolutely necessary for decades to get anything actually accomplished on the hardware of the day. But it has caused people to close their minds to the vast power of brain

Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Dougherty
On 10/3/07, Edward W. Porter [EMAIL PROTECTED] wrote: I think your notion that post-grads with powerful machines would only operate in the space of ideas that don't work is unfair. Yeah, I can agree - it was harsh. My real intention was to suggest that NOT having a bigger computer is not

Re: [agi] Religion-free technical content

2007-10-02 Thread Mark Waser
: **SPAM** Re: [agi] Religion-free technical content So this hackability is a technical question about possibility of closed-source deployment that would provide functional copies of the system but would prevent users from modifying its goal system. Is it really important? Source/technology

Distributed Semantics [WAS Re: [agi] Religion-free technical content]

2007-10-02 Thread Richard Loosemore
Mark Waser wrote: Interesting. I believe that we have a fundamental disagreement. I would argue that the semantics *don't* have to be distributed. My argument/proof would be that I believe that *anything* can be described in words -- and that I believe that previous narrow AI are brittle

Re: [agi] Religion-free technical content

2007-10-02 Thread Vladimir Nesov
But yet robustness of goal system itself is less important than intelligence that allows system to recognize influence on its goal system and preserve it. Intelligence also allows more robust interpretation of goal system. Which is why the way particular goal system is implemented is not very

Re: [agi] Religion-free technical content

2007-10-02 Thread Richard Loosemore
PROTECTED] To: agi@v2.listbox.com Sent: Monday, October 01, 2007 8:36 PM Subject: **SPAM** Re: [agi] Religion-free technical content Mark Waser wrote: And apart from the global differences between the two types of AGI, it would be no good to try to guarantee friendliness using the kind

RE: Distributed Semantics [WAS Re: [agi] Religion-free technical content]

2007-10-02 Thread Mark Waser
PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, October 02, 2007 9:49 AM Subject: **SPAM** Distributed Semantics [WAS Re: [agi] Religion-free technical content] Mark Waser wrote: Interesting. I believe that we have a fundamental disagreement. I would argue that the semantics *don't* have

Re: [agi] Religion-free technical content

2007-10-02 Thread Mark Waser
Sent: Tuesday, October 02, 2007 9:49 AM Subject: **SPAM** Re: [agi] Religion-free technical content But robustness of the goal system itself is less important than the intelligence that allows the system to recognize influence on its goal system and preserve it. Intelligence also allows more robust

Re: [agi] Religion-free technical content

2007-10-02 Thread J Storrs Hall, PhD
On Tuesday 02 October 2007 10:17:42 am, Richard Loosemore wrote: ... Since the AGIs are all built to be friendly, ... The probability that this will happen is approximately the same as the probability that the Sun could suddenly quantum-tunnel itself to a new position inside the perfume

Re: Distributed Semantics [WAS Re: [agi] Religion-free technical content]

2007-10-02 Thread Richard Loosemore
Okay, I'm going to wave the white flag and say that what we should do is all get together a few days early for the conference next March, in Memphis, and discuss all these issues in high-bandwidth mode! But one last positive thought. A response to your remark: So let's look at the mappings

Re: [agi] Religion-free technical content

2007-10-02 Thread Richard Loosemore
J Storrs Hall, PhD wrote: On Tuesday 02 October 2007 10:17:42 am, Richard Loosemore wrote: ... Since the AGIs are all built to be friendly, ... The probability that this will happen is approximately the same as the probability that the Sun could suddenly quantum-tunnel itself to a new

Re: [agi] Religion-free technical content

2007-10-02 Thread BillK
On 10/2/07, Mark Waser wrote: A quick question for Richard and others -- Should adults be allowed to drink, do drugs, wirehead themselves to death? This is part of what I was pointing at in an earlier post. Richard's proposal was that humans would be asked in advance by the AGI what level of

Re: [agi] Religion-free technical content

2007-10-02 Thread J Storrs Hall, PhD
Beyond AI pp 253-256, 339. I've written a few thousand words on the subject, myself. a) the most likely sources of AI are corporate or military labs, and not just US ones. No friendly AI here, but profit-making and mission-performing AI. b) the only people in the field who even claim to be

Re: [agi] Religion-free technical content

2007-10-02 Thread Vladimir Nesov
-- Should adults be allowed to drink, do drugs, wirehead themselves to death? - Original Message - From: Vladimir Nesov [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, October 02, 2007 9:49 AM Subject: **SPAM** Re: [agi] Religion-free technical content But robustness of the goal

Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote: A quick question for Richard and others -- Should adults be allowed to drink, do drugs, wirehead themselves to death? A correct response is That depends. Any should question involves consideration of the pragmatics of the system, while

Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
: **SPAM** Re: [agi] Religion-free technical content But robustness of the goal system itself is less important than the intelligence that allows the system to recognize influence on its goal system and preserve it. Intelligence also allows a more robust interpretation of the goal system. Which

Re: [agi] Religion-free technical content

2007-10-02 Thread Mark Waser
. So how do I get to be an assessor and decide? - Original Message - From: Jef Allbright [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, October 02, 2007 12:55 PM Subject: **SPAM** Re: [agi] Religion-free technical content On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote

Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote: Effective deciding of these should questions has two major elements: (1) understanding of the evaluation-function of the assessors with respect to these specified ends, and (2) understanding of principles (of nature) supporting increasingly

Re: [agi] Religion-free technical content

2007-10-02 Thread Vladimir Nesov
On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote: Argh! Goal system and Friendliness are roughly the same sort of confusion. They are each modelable only within a ***specified***, encompassing context. In more coherent, modelable terms, we express our evolving nature, rather than strive

RE: [agi] Religion-free technical content

2007-10-02 Thread Derek Zahn
Richard Loosemore: a) the most likely sources of AI are corporate or military labs, and not just US ones. No friendly AI here, but profit-making and mission-performing AI. Main assumption built into this statement: that it is possible to build an AI capable of doing anything except dribble

Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Vladimir Nesov [EMAIL PROTECTED] wrote: On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote: Argh! Goal system and Friendliness are roughly the same sort of confusion. They are each modelable only within a ***specified***, encompassing context. In more coherent, modelable

Re: [agi] Religion-free technical content

2007-10-02 Thread Mark Waser
] Religion-free technical content On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote: Effective deciding of these should questions has two major elements: (1) understanding of the evaluation-function of the assessors with respect to these specified ends, and (2) understanding of principles

Re: [agi] Religion-free technical content

2007-10-02 Thread Vladimir Nesov
On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote: On 10/2/07, Vladimir Nesov [EMAIL PROTECTED] wrote: On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote: Argh! Goal system and Friendliness are roughly the same sort of confusion. They are each modelable only within a ***specified***,

Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote: Wrong. There *are* some absolute answers. There are some obvious universal Thou shalt nots that are necessary unless you're rabidly anti-community (which is not conducive to anyone's survival -- and if you want to argue that community survival

Re: [agi] Religion-free technical content

2007-10-02 Thread Mark Waser
Allbright [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, October 02, 2007 2:53 PM Subject: **SPAM** Re: [agi] Religion-free technical content On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote: Wrong. There *are* some absolute answers. There are some obvious universal Thou shalt nots
