Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Tim Freeman
up if I'm being too unclear.) Do you think the result is different in an important way from the real-world probability distribution you're looking for? -- Tim Freeman http://www.fungible.com t...@fungible.com

Why? (was Re: [agi] references on hypercomputation?)

2008-12-16 Thread Tim Freeman
could complete an infinite computation, then...". Is there anything useful that can come out of this? At first glance, since you don't have an oracle for the halting problem and you won't be getting one, the answer seems to be "no". However, you aren't stu

[agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-16 Thread Tim Freeman
re at best talking about a hopefully-someday empirical result rather than something that could be proved. (I'm not following the larger argument that this is a part of, so I have no opinion about it.) -- Tim Freeman http://www.fungible.com [EMAIL PROTECTED]

Re: [agi] The Necessity of Embodiment

2008-08-30 Thread Tim Freeman
logical puzzle. The decision procedure itself is the only formal description of what I like that I have available. So what is there to prove? I wish I knew a better approach to this. -- Tim Freeman http://www.fungible.com [EMAIL PROTECTED]

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Tim Freeman
I'll read the paper if you post a URL to the finished version, and I somehow get the URL. I don't want to sort out the pieces from the stream of AGI emails, and I don't want to try to provide feedback on part of a paper. -- Tim Freeman http://www.fungible.com

Special status to Homo Sap. (was Re: [agi] Recap/Summary/Thesis Statement)

2008-03-08 Thread Tim Freeman
mple. I don't want to find out what a powerful AI would do about that. -- Tim Freeman http://www.fungible.com [EMAIL PROTECTED]

AIXItl; Wolfram's hypothesis (was Re: [agi] How valuable is Solomonoff Induction for real world AGI?)

2007-11-10 Thread Tim Freeman
ly tell whether Wolfram is saying that the actual outcomes are computable, or just the probabilities of the outcomes. -- Tim Freeman http://www.fungible.com [EMAIL PROTECTED]

[agi] Nonverbal negotiation (was Re: Self-improvement is not a special case)

2007-10-13 Thread Tim Freeman
otivate more sophisticated behavior, then so far as I can tell we would have a solution to the Friendly AI problem. Maybe someone has already done this. I have a theoretical solution that's partially written up. I'll have more details later. -- Tim Freeman http://www.fun

Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Tim Freeman
should also be worried about any AI that competently writes software exploding. Keeping its source code secret from itself doesn't help much. Hmm, I suppose an AI that does mechanical engineering could explode too, perhaps by doing nanotech, so AI's competently doing engineering is a ri

Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Tim Freeman
ramming requirement. No value is added by introducing considerations about self-reference into conversations about the consequences of AI engineering. Junior geeks do find it impressive, though. -- Tim Freeman http://www.fungible.com [EMAIL PROTECTED]