I think we are probably just talking past each other... perhaps our
backgrounds and goals are different.
> >To reconcile with anthropic fine-tuning without white rabbits, I had
> >bought into the postulate that we were in the simplest possible universe, in the
> >absence of knowing the exact criteria for developing self-aware
> >consciousness, but just assuming that some absolute criteria exist. But
> >this begs the questions, what are those criteria and why those criteria?
> >Without knowing these criteria, we cannot tell what is the simplest
> >universe containing consciousness.
> With comp we don't need such criteria. We sum on all program executions.
> It really looks like a generalisation of the Feynman integral. In particular
> it should be an integral ...
> (previously)...What I just show is that if we are machines then the
> appearances *must* emerge from *all* computations *at once*,
> and that the physical discourses, both first person (including
> uncommunicable qualia) and third person (communicable quanta)
> must be defined with a sort of sum on all computations...
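To make the "sum on all program executions" a bit more concrete, here is a toy sketch of my own (not comp's actual formalism): a Solomonoff-style weighted enumeration, where every bit-string program contributes weight 2^-length, and we sum the weights of the programs whose execution yields a given output. The `toy_machine` below is purely an illustrative assumption standing in for a real universal machine:

```python
from itertools import product

def toy_machine(program):
    # Hypothetical stand-in for a universal machine (an assumption for
    # illustration only): it outputs the parity of the program's 1-bits.
    return str(sum(program) % 2)

def algorithmic_weight(output, max_len=12):
    # Sum 2**-len(p) over all bit-string programs p up to max_len whose
    # execution yields `output` -- a crude, truncated analogue of
    # summing over *all* program executions at once.
    total = 0.0
    for n in range(max_len + 1):
        for program in product((0, 1), repeat=n):
            if toy_machine(program) == output:
                total += 2.0 ** -n
    return total

print(algorithmic_weight("0"), algorithmic_weight("1"))  # 7.0 6.0
```

Of course this truncates at a finite program length and uses a trivial machine; the point is only that "which appearances dominate" falls out of the weighted sum itself, with no extra selection criteria supplied by hand.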
I think relying on the sum/integral over all possible programs as the FINAL
explanation would amount to avoiding the questions about the details of the
criteria. We are safe because we are included in the overall sum. True, for
the general purpose of explaining our existence, we don't need to know the
details of the criteria. But if we are deeply involved in modeling specific
fundamental phenomena, or are just extremely curious, we are led to pursue
the details of the criteria instead of staying satisfied with the top-down
result. True, we will never succeed in finding the actual program, but we
can speculate about it, and could try to approach it asymptotically (that
is what I meant by narrowing down the infinite subset of options) from the
bottom up. I feel that is what science is all about.