RE: [agi] An idea for promoting AI development.

2002-12-02 Thread Bill Hibbard
Hi Ben, I think that true machine intelligence will be computationally demanding and will initially appear on expensive hardware available only to wealthy institutions like the government or corporations. Even when it is possible on commodity hardware, expensive hardware will still support much

RE: [agi] AGI morality

2003-02-10 Thread Bill Hibbard
Hi Philip, On Tue, 11 Feb 2003, Philip Sutton wrote: Ben, If in the Novamente configuration the dedicated Ethics Unit is focussed on GoalNode refinement, it might be worth using another term to describe the whole ethical architecture/machinery which would involve aspects of most/all (??)

RE: [agi] AGI morality

2003-02-10 Thread Bill Hibbard
situation it observes... i.e. it's a 'valuation' ;-) Interesting. Are these values used for reinforcing behaviors in a learning system? Or are they used in a continuous-valued reasoning system? Cheers, Bill

RE: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Bill Hibbard
On Wed, 12 Feb 2003, Philip Sutton wrote: Ben/Bill, My feeling is that goals and ethics are not identical concepts. And I would think that goals would only make an intentional ethical contribution if they related to the empathetic consideration of others. . . . Absolutely goals (I prefer

RE: [agi] unFriendly AIXI

2003-02-11 Thread Bill Hibbard
On Tue, 11 Feb 2003, Ben Goertzel wrote: Eliezer wrote: * a paper by Marcus Hutter giving a Solomonoff induction based theory of general intelligence Interesting you should mention that. I recently read through Marcus Hutter's AIXI paper, and while Marcus Hutter has done valuable

RE: [agi] unFriendly AIXI

2003-02-11 Thread Bill Hibbard
Ben, On Tue, 11 Feb 2003, Ben Goertzel wrote: The formality of Hutter's definitions can give the impression that they cannot evolve. But they are open to interactions with the external environment, and can be influenced by it (including evolving in response to it). If the reinforcement

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Bill Hibbard
Hi Arthur, On Wed, 12 Feb 2003, Arthur T. Murray wrote: . . . Since the George and Barbara Bushes of this world are constantly releasing their little monsters onto the planet, why should we creators of Strong AI have to take any more precautions with our Moravecian Mind Children than human

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Bill Hibbard
On Wed, 12 Feb 2003, Arthur T. Murray wrote: The quest is as hopeless as it is with human children. Although Bill Hibbard singles out the power of super-intelligence as the reason why we ought to try to instill morality and friendliness in our AI offspring, such offspring are made in our own

Re: [agi] Breaking AIXI-tl

2003-02-12 Thread Bill Hibbard
is that an AIXI's optimality is only as valid as its assumption about the probability distribution of universal Turing machine programs. Cheers, Bill
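A minimal sketch of the point being made, assuming only the standard Solomonoff-style length prior (nothing here is from the thread itself): AIXI weights each program p by 2^-l(p), so which reference machine measures program length determines the predictions. All names below are illustrative.

  # Toy illustration (not AIXI itself): the length prior weights each
  # program p by 2^-len(p), so predictions hinge on the reference
  # machine that decides how long programs are.
  def prior_weight(program: str) -> float:
      return 2.0 ** (-len(program))

  def mixture_prediction(programs, predicts_one) -> float:
      """Probability the next bit is 1, by length-weighted vote.
      predicts_one(p) says whether program p predicts a 1."""
      total = sum(prior_weight(p) for p in programs)
      ones = sum(prior_weight(p) for p in programs if predicts_one(p))
      return ones / total

Re-encoding the same programs for a different reference machine changes every length, and with it the mixture's predictions, which is the assumption at issue.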

[agi] request for reference for Ben's new book

2003-02-13 Thread Bill Hibbard
Hi Ben, I'd like to reference your soon-to-be-published book in a paper. Could you please send me the proper form of reference. I am sending this to the AGI list as others may want this information. Thanks, Bill

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Bill Hibbard
that the stakes are high, but think the safer approach is to build ethics into the fundamental driver of super-intelligent machines, which will be their reinforcement values. Cheers, Bill
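A minimal sketch, assuming a toy tabular Q-learner, of what building ethics into reinforcement values could mean: the designer's ethical choices enter only through the value() reward function. All names here are hypothetical, not from the book or the thread.

  import random
  from collections import defaultdict

  # Toy Q-learner whose only designer-supplied judgment is the value
  # (reward) function -- the "fundamental driver" in this sense.
  def q_learn(states, actions, step, value, episodes=1000,
              alpha=0.1, gamma=0.9, epsilon=0.1):
      q = defaultdict(float)
      for _ in range(episodes):
          s = random.choice(states)
          for _ in range(50):
              a = (random.choice(actions) if random.random() < epsilon
                   else max(actions, key=lambda a: q[(s, a)]))
              s2 = step(s, a)      # environment dynamics
              r = value(s2)        # ethics lives here, as reward
              best = max(q[(s2, a2)] for a2 in actions)
              q[(s, a)] += alpha * (r + gamma * best - q[(s, a)])
              s = s2
      return q

Everything else in the learner is generic machinery; only value() encodes what the machine is ultimately for.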

Re: [agi] unFriendly Hubbard SIs

2003-02-14 Thread Bill Hibbard
Hey Eliezer, my name is Hibbard, not Hubbard. On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote: Bill Hibbard wrote: I never said perfection, and in my book make it clear that the task of a super-intelligent machine learning behaviors to promote human happiness will be very messy. That's

[agi] who is this Bill Hubbard I keep reading about?

2003-02-14 Thread Bill Hibbard
Strange that there would be someone on this list with a name so similar to mine. Cheers, Bill

Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Bill Hibbard
Eliezer S. Yudkowsky wrote: Bill Hibbard wrote: On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote: It *could* do this but it *doesn't* do this. Its control process is such that it follows an iterative trajectory through chaos which is forbidden to arrive at a truthful solution, though

Re: [agi] unFriendly Hibbard SIs

2003-02-15 Thread Bill Hibbard
Eliezer S. Yudkowsky wrote: . . . Yes. Laws (logical constraints) are inevitably ambiguous. Does that include the logical constraints governing the reinforcement process itself? There is a logic of the reinforcement process, but it is a behavior rather than a constraint on a behavior. By

Re: [agi] who is this Bill Hubbard I keep reading about?

2003-02-15 Thread Bill Hibbard
Ed, I agree that it was very decent of Philip to admit to starting the misspelling of my name. My general complaint about the misspelling was sent hours before I even read Eliezer's message, but due to the vagaries of email was delivered hours after my reply to Eliezer, giving the impression

RE: [agi] unFriendly Hibbard SIs

2003-02-15 Thread Bill Hibbard
Ben, As Moshe pointed out to me, Marcus Hutter and his students tried to replicate Baum's work, with mixed results: go to http://www.idsia.ch/~marcus/ click on Artificial Intelligence and scroll down to Market-Based Reinforcement Learning in Partially Observable Worlds (with I. Kwee

RE: [agi] unFriendly Hibbard SIs

2003-02-15 Thread Bill Hibbard
On Sat, 15 Feb 2003, Ben Goertzel wrote: In my book I say that consciousness is part of the way the brain implements reinforcement learning, and I think something like that is necessary for a really robust solution. That's why I think it will take 100 years. I would say, rather, that

Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Bill Hibbard
On Tue, 18 Feb 2003, Brad Wyble wrote: . . . Incorrect. The cortex has genetically pre-programmed systems. It cannot be said that it is a matrix loaded with software from subcortical structures. . . . Yes, but there is a very interesting experiment with rewiring brains of young ferrets so

Re: [agi] swarm intellience

2003-02-26 Thread Bill Hibbard
On Wed, 26 Feb 2003, Brad Wyble wrote: The limitation in multi-agent systems is usually the degree of interaction they can have. The bandwidth between ants, for example, is fairly low even when they are in direct contact, let alone 1 inch apart. This limitation keeps their behavior

[agi] interesting story about prosthetic hippocampus

2003-03-12 Thread Bill Hibbard
http://www.newscientist.com/news/news.jsp?id=ns3488

Re: [agi] Discovering the Capacity of Human Memory

2003-09-16 Thread Bill Hibbard
On Mon, 15 Sep 2003, Amara D. Angelica wrote: Any comments on this paper? http://www.kluweronline.com/issn/1389-1987/current Anders Sandberg's PhD thesis (thanks to Cole Kitchen for originally posting this to the AGI list) at: http://akira.nada.kth.se/~asa/Thesis/thesis.pdf entitled

Re: [agi] Complexity of environment of agi agent

2003-09-18 Thread Bill Hibbard
market an agent only has to be better than competing agents to make money. And in predicting the weather, there is a real limit on how well an agent can do. Cheers, Bill

Re: [agi] What is Thought? Book announcement

2004-02-04 Thread Bill Hibbard
On Wed, 21 Jan 2004, Eric Baum wrote: New Book: What is Thought? Eric B. Baum What a great book.

Re: [agi] What is Thought? Book announcement

2004-02-04 Thread Bill Hibbard
It seems that Baum is arguing that biological minds are amazingly quick at making sense of the world because, as a result of evolution, the structure of the brain is set up with inbuilt limitations/assumptions based on likely possibilities in the real world - thus cutting out vast areas for
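A toy illustration (assumptions mine, not Baum's code) of the inbuilt-bias point: a learner restricted to conjunctive rules searches about 3^n hypotheses instead of the 2^(2^n) unrestricted truth tables over n boolean inputs, which is the sense in which built-in assumptions cut out vast areas of search.

  import itertools

  # Guessing a hidden boolean rule over n inputs. A biased learner
  # considers only conjunctions: each variable is required-true,
  # required-false, or ignored (3^n hypotheses, not 2^(2^n)).
  def conjunctions(n):
      for spec in itertools.product((1, 0, None), repeat=n):
          yield lambda x, spec=spec: all(
              s is None or x[i] == s for i, s in enumerate(spec))

  def consistent(hypotheses, examples):
      """First hypothesis matching all (input, label) examples."""
      for h in hypotheses:
          if all(h(x) == y for x, y in examples):
              return h
      return None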

RE: [agi] AGI's and emotions

2004-02-25 Thread Bill Hibbard
Ben, I think that emotions in humans are CORRELATED with value-judgments, but are certainly not identical to them. We can have emotions that are ambiguous in value, and we can have strong value judgments with very little emotion attached to them. That is reasonable. As I said in my first

p.s., RE: [agi] AGI's and emotions

2004-02-25 Thread Bill Hibbard
I said: That is reasonable. As I said in my first post on this topic, there is variation in the way people define emotion. The quotes from Edelman and Crick show some precedent for defining emotion essentially as value, but it is also common to define emotion more in terms of expression or

Re: [agi] Open AGI?

2004-03-05 Thread Bill Hibbard
years) I'd definitely see creating the first open source AGI system as a big opportunity. Cheers, Bill

Re: [agi] Ben vs. the AI academics...

2004-10-24 Thread Bill Hibbard
in the near term Anyway, in addition to catching up with Pei and Bill Hibbard, I made a couple useful new contacts at the conference -- and interestingly, both were industry scientists rather than academics. For some reason there was more broad AI vision in the industry AI researchers than

p.s., Re: [agi] Ben vs. the AI academics...

2004-10-24 Thread Bill Hibbard
My talk is available at: http://www.ssec.wisc.edu/~billh/g/FS104HibbardB.pdf There was a really interesting talk by the neuroscientist Richard Grainger with some publications available at: http://www.brainengineering.com/publications.html Cheers, Bill

Re: [agi] Google as a strong AI

2005-03-15 Thread Bill Hibbard
I agree with the posters who say that Google is not strong AI. But it is amazingly useful because it, along with the web, forms a huge content-addressable memory. That's an important part of human brains. I think of Google as my second brain. It can't think, but it is a wonderful complement to our
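A minimal sketch of the content-addressable idea, with hypothetical names: a toy inverted index retrieves documents by the words they contain rather than by an address, roughly what the web plus a search engine provides.

  from collections import defaultdict

  # Toy inverted index: recall by content (words) rather than by
  # location, the sense in which the web acts as a second memory.
  class ContentAddressableMemory:
      def __init__(self):
          self.index = defaultdict(set)
          self.docs = {}

      def store(self, doc_id, text):
          self.docs[doc_id] = text
          for word in text.lower().split():
              self.index[word].add(doc_id)

      def recall(self, query):
          hits = [self.index[w.lower()] for w in query.split()]
          ids = set.intersection(*hits) if hits else set()
          return [self.docs[i] for i in ids]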

[agi] Hawkins founds AI company named Numenta

2005-03-24 Thread Bill Hibbard
http://www.nytimes.com/2005/03/24/technology/24think.html? The name is a lot like Novamente. Interesting to see what he comes up with.

[agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Bill Hibbard
at: http://www.ssec.wisc.edu/~billh/g/Singularity_Notes.html Bill Hibbard

Re: [agi] the Singularity Summit and regulation of AI

2006-05-11 Thread Bill Hibbard
Thank you for your responses. Jeff, I have taken your suggestion and sent a couple of questions to the Summit. My concern is motivated by noticing that the Summit includes speakers who have been very clear about their opposition to regulating AI, but none, so far as I am aware, who have advocated it

[agi] Re: Two draft papers: AI and existential risk; heuristics and biases

2006-06-07 Thread Bill Hibbard
Eliezer, I don't think it inappropriate to cite a problem that is general to supervised learning and reinforcement, when your proposal is to, in general, use supervised learning and reinforcement. You can always appeal to a different algorithm or a different implementation that, in some

[agi] Re: Two draft papers: AI and existential risk; heuristics and biases

2006-06-15 Thread Bill Hibbard
Eliezer Yudkowsky wrote: Bill Hibbard wrote: Eliezer, I don't think it inappropriate to cite a problem that is general to supervised learning and reinforcement, when your proposal is to, in general, use supervised learning and reinforcement. You can always appeal to a different

Re: [agi] AGI open source license

2006-08-28 Thread Bill Hibbard
Hi Stephen, As a small operation independent of Cyc, distributing your AGI system as open source is likely to be a good strategy. As a small university PI developing visualization software, I found that distributing my systems as open source was very good for my project. Our collaborators and

Re: [agi] AGI-09 - Preliminary Call for Papers

2008-08-29 Thread Bill Hibbard
The special rate at the Crowne Plaza does not apply to the night of Monday, 9 March. If the post-conference workshops on Monday extend into the afternoon, it would be useful if the special rate were available on Monday night. Thanks, Bill