> From my bystander POV I got something different out of this exchange
> of messages... it appeared to me that Eliezer's point was not about
> having more time for simulating, but rather that humans possess a
> qualitatively different "level" of reflectivity that allows them to
> "realize" the situation they're in, and therefore to come up with a
> simple strategy that probably doesn't even require much simulating of
> their clone. It is this reflectivity difference that I thought was
> more important to understand... or am I wrong?
> --
> Brian Atkins
The "qualitatively different level of reflectivity" that exists is simply that humans are able to devote a lot of their resources to simulating or analyzing programs that are around as slow as they are, and hence -- if they wish -- to simulating or analyzing large portions of themselves. Whereas AIXItl's by design are only able to devote their resources to simulating or analyzing programs that are much much faster than they are -- hence they are not able to simulate or analyze large portions of themselves. This does enable humans to have a qualitatively different type of reflectivity. For any fixed problem, defined independently of the solver, a big enough AIXItl can solve it better than a human. But a human can analyze itself better than an AIXItl can analyze itself, in some senses. But not in all senses: for instance, an AIXItl can prove theorems about itself better than a human can prove theorems about itself... -- Ben G ------- To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]
