Hi Jason,

Sorry for the delay!
On Thu, Dec 11, 2014 at 5:53 AM, Jason Resch <[email protected]> wrote:
>
> Telmo,
>
> Very creative solution! I think you may have been the first to out-smart
> the super-intelligence. Although would you risk $1,000,000 to gain the
> extra $1,000 on the belief that the super-intelligence hasn't figured out
> a way to predict or account for collapse? QM could always be wrong, of
> course, or maybe the super-intelligence knows we're in a simulation and
> has reverse-engineered the state of the pseudorandom number generator
> used to give the appearance of collapse/splitting. :-)

Realistically, I would be a boring one-boxer. Why risk one million for the
extra one thousand? If I were convinced that the AI was that good, then I
might risk it, more out of curiosity than a desire to beat the AI. In the
worst case I would end up feeling like the K Foundation:

http://en.wikipedia.org/wiki/K_Foundation_Burn_a_Million_Quid

Telmo.

> Jason
>
> On Wed, Dec 10, 2014 at 10:59 AM, Telmo Menezes <[email protected]> wrote:
>>
>> On Wed, Dec 10, 2014 at 9:55 AM, Jason Resch <[email protected]> wrote:
>>>
>>> I started quite a lively debate at work recently by bringing up
>>> Newcomb's Paradox. We debated topics ranging from the prisoner's
>>> dilemma to the halting problem, from free will to retrocausality, from
>>> first-person indeterminacy to Gödel's incompleteness.
>>>
>>> My colleagues were about evenly split between one-boxing and
>>> two-boxing, and I was curious whether there would be any more
>>> consensus among the members of this list. If you're unfamiliar with
>>> the problem, there are descriptions here:
>>>
>>> http://www.philosophyexperiments.com/newcomb/
>>> http://en.wikipedia.org/wiki/Newcomb%27s_paradox
>>>
>>> If you reach a decision, please reply with whether your strategy would
>>> be to take one box or two, what assumptions you make, and why you
>>> think your strategy is best.
>>> I don't want to bias the results, so I'll provide my answer in a
>>> follow-up post.
>>
>> Employ a quantum noise source to generate a random decision. With it,
>> generate a very slightly unbalanced coin flip. Use it to decide on one
>> box vs. two boxes. Give "one box" a very slight advantage. The only
>> rational choice for the oracle is to bet on "one box". You get 1
>> million with a probability of 0.50001, or the full 1.001 million with a
>> probability of 0.49999.
>>
>> Telmo.
>>
>>> Jason
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Everything List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to [email protected].
>>> To post to this group, send email to [email protected].
>>> Visit this group at http://groups.google.com/group/everything-list.
>>> For more options, visit https://groups.google.com/d/optout.
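P.S. A quick sanity check of the quoted biased-coin strategy, as a Python sketch. The 0.00001 bias is just an illustrative choice for "very slightly unbalanced", and it assumes the predictor's best response to a genuinely unpredictable coin is to bet on the more likely outcome and fill box B:

```python
# Expected payoff of the biased quantum-coin strategy (illustrative).
# Assumption: the oracle cannot predict the coin, so it bets on the
# more likely choice ("one box") and puts the $1,000,000 in box B.

EPSILON = 0.00001          # slight bias toward "one box"
P_ONE_BOX = 0.5 + EPSILON  # 0.50001
P_TWO_BOX = 0.5 - EPSILON  # 0.49999

BOX_B = 1_000_000  # filled, since the oracle predicted "one box"
BOX_A = 1_000      # the visible box, always present

payout_one_box = BOX_B          # take only box B: 1,000,000
payout_two_box = BOX_A + BOX_B  # take both boxes: 1,001,000

expected = P_ONE_BOX * payout_one_box + P_TWO_BOX * payout_two_box
print(f"{expected:.2f}")  # prints 1000499.99
```

Under these assumptions the strategy nets an expected $1,000,499.99, roughly $500 more than plain one-boxing, which is the gain being weighed against the risk that the oracle sees through the coin after all.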

