Hi all,

Thanks for starting this list, Ben.

I have a concern that I believe is sufficiently related to the question
of the limits of FAI theory to raise here. It is a mathematical concern,
built on the very simple notion of reflexive identity, and it is prior
to the circularity of Bayesian map/territory probability science;
therefore, default Bayesianism does not apply to it.

// Spaces

{n : n is a unique label denoting a unique space}
En(e) = "experiences"
Sn(s) = "senses"
Rn(r) = "referents"

// Definitions/Clarifications

* "stimulus structure" = "stimulostruct"
* "neural structure" = "neurostruct"
* "hypothetical augmentation" = 'a primitive, self-evident notion, and
very appropriate'
* An experience e is the sense s of a referent r
* s is an element of an extensional stratum, the 'sense' stratum of
the stimulostruct/neurostruct/sense strata
* r is an element of an intensional stratum, the
'stimulostruct/neurostruct' stratum of the
stimulostruct/neurostruct/sense strata

// Axioms

* e and s are identical
* e is the hypothetical augmentation of r
* s and r are not identical
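
// Sketch (illustrative only)

Before the theorems, here is a minimal, purely illustrative Python
sketch of the spaces, definitions, and axioms above, under my own
reading of them. The names Referent, Sense, Experience,
hypothetical_augmentation, and check_axioms are mine, not part of the
formalism, and "hypothetical augmentation" is left as an uninterpreted
primitive map from referents to experiences, exactly as the
definitions take it.

from dataclasses import dataclass


@dataclass(frozen=True)
class Referent:
    """r: an element of the intensional stratum (stimulostruct/neurostruct)."""
    label: str


@dataclass(frozen=True)
class Sense:
    """s: an element of the extensional stratum (the 'sense' stratum)."""
    of: Referent  # the referent this sense is a sense of


# An experience e is the sense s of a referent r; by the first axiom e and s
# are identical, so Experience is an alias for Sense rather than a new type.
Experience = Sense


def hypothetical_augmentation(r: Referent) -> Experience:
    """Uninterpreted primitive: e is the hypothetical augmentation of r."""
    return Experience(of=r)


def check_axioms(r: Referent) -> None:
    s = Sense(of=r)
    e = hypothetical_augmentation(r)
    assert e == s                             # axiom 1: e and s are identical
    assert e == hypothetical_augmentation(r)  # axiom 2, by construction
    assert s != r                             # axiom 3: s and r are not identical


if __name__ == "__main__":
    check_axioms(Referent(label="some stimulostruct/neurostruct element"))
    print("The three axioms hold for this toy instance.")

Nothing hangs on Python specifically; the point is only that the three
axioms can be jointly satisfied once e is typed into the sense stratum
and r into the stimulostruct/neurostruct stratum.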

// Theorems

[. . . An ontology that is not innately afraid of reality, whatever
reality is . . .]

To grasp this is to keep some Singularitarian science, FAI theoretical
science especially, from accidentally trying to be communistic over VR
space -- not that it isn't already communistically advanced over
intensional space -- a tendency already implicit in the application of
Bayesian circularity, in the profound abuse of the appeal-to-anthropo-x
fallacy, and in arbitrarily selective statistical and causal science.

A possible nice immediate side effect for everyone could be a further
enabling of mathematics, so that it can extend its notions infinitely
beyond scientific notions without tacitly presuming mathematical
realism in a presumably physical universe. Furthermore, if some of us
understand that we cannot be perfectly reductive, then we can be
informatively careful not to conduct analyses by jumping around ad hoc
on a reductive-holistic continuum under the so-far-false, communist
belief in one set of inviolable, perfect predicate functions.

The main impetus for all of this has been the concern that possible,
negatively consequential, subtle, misguided, implicit assumptions could
enter the development of FAI theory, regardless of any explicit
intention that they be "corrected" through the implementation of a
"suitable" FAI theory.

Nevertheless, I empathize with some AGI anxiety.

Nate
