up if I'm being too unclear.)
Do you think the result is different in an important way from the
real-world probability distribution you're looking for?
--
Tim Freeman http://www.fungible.com t...@fungible.com
---
agi
Archi
could complete an
infinite computation, then...".
Is there anything useful that can come out of this?
At first glance, since you don't have an oracle for the halting
problem and you won't be getting one, the answer seems to be "no".
However, you aren't stu
re at
best talking about a hopefully-someday empirical result rather than
something that could be proved.
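(A minimal sketch of why no halting oracle is coming: the standard diagonal argument. The names `halts` and `diag` are illustrative, not from this thread.)

```python
def halts(f, x):
    """Hypothetical oracle: would return True iff f(x) halts.
    No such total procedure can exist -- that is the point."""
    raise NotImplementedError("no halting oracle")

def diag(f):
    # Do the opposite of whatever the oracle predicts about f run on itself.
    if halts(f, f):
        while True:      # oracle said f(f) halts, so loop forever
            pass
    return "halted"      # oracle said f(f) loops, so halt

# Feeding diag to itself: diag(diag) halts exactly when halts(diag, diag)
# says it doesn't, so any answer the oracle gives is wrong.
```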
(I'm not following the larger argument that this is a part of, so I
have no opinion about it.)
--
Tim Freeman http://www.fungible.com
logical puzzle. The decision procedure itself is the
only formal description of what I like that I have available. So what
is there to prove? I wish I knew a better approach to this.
--
Tim Freeman http://www.fungible.com
I'll read the paper if you post a URL to the finished version, and I
somehow get the URL. I don't want to sort out the pieces from the
stream of AGI emails, and I don't want to try to provide feedback on
part of a paper.
--
Tim Freeman http://www.fungible.com
mple. I don't want to find out what a powerful AI
would do about that.
--
Tim Freeman http://www.fungible.com
ly tell whether Wolfram is saying that the actual
outcomes are computable, or just the probabilities of the outcomes.
--
Tim Freeman http://www.fungible.com
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=63844010-625f39
otivate more sophisticated
behavior, then so far as I can tell we would have a solution to the
Friendly AI problem.
Maybe someone has already done this.
I have a theoretical solution that's partially written up. I'll have
more details later.
--
Tim Freeman http://www.fun
should also be worried about any AI that competently
writes software exploding. Keeping its source code secret from
itself doesn't help much. Hmm, I suppose an AI that does mechanical
engineering could explode too, perhaps by doing nanotech, so AIs
competently doing engineering is a ri
ramming requirement. No value is
added by introducing considerations about self-reference into
conversations about the consequences of AI engineering.
Junior geeks do find it impressive, though.
--
Tim Freeman http://www.fungible.com