Although the LessWrong guys are right to say that the "Cartesian Divide" is a problem with AIXI <https://wiki.lesswrong.com/wiki/The_Problem_with_AIXI>, they are wrong about the nature of that problem.
This is very much related to the problem of correcting "bias" that has everyone going into hysterics about "algorithmic bias" -- each party claiming its own viewpoints are "unbiased" while the decisions of machines that have recognized patterns in data are "biased". The LessWrong guys are basically saying that Solomonoff's "subject" can't model its own biases, including sensor biases. But all it has to do is maximally compress its observations and it will detect, and thence factor out, the equivalent of "information sabotage". This it can do based on the very Cartesianism that leads to Classical Physics.

This, by the way, is the real underlying reason sociologists go crazy at the mention of biology: they don't want their information sabotage to be detected. It is also my _primary_ motivation for supporting The Hutter Prize for Lossless Compression of Human Knowledge: to detect and factor out the information sabotage of sociologists in Wikipedia.

Think about it like this: when you place "scare quotes" around something, you are implicitly saying something like "So-and-so _says_ that 'This is so.'" Contrast that with the bare statement "This is so." In providing a place for the attribution of the source, you are providing a place to assign a latent identity that can also be assigned a bias, and that bias helps compress the data from that source. It's sort of like the difference between saying "It's 32 degrees and it's 0 degrees" and saying "This thermometer says '32 degrees' and that thermometer says '0 degrees'." The sensor doesn't need to be internal to the "subject" for its bias to be detected and factored out. (A toy sketch of this point appears after the quoted thread below.)

On Sat, Feb 22, 2020 at 1:25 PM James Bowery <[email protected]> wrote:

> PS: I use "singularity" in its vernacular, not formal, meaning since, as
> Matt has quite adequately pointed out, that word doesn't really belong in
> physics.
>
> On Sat, Feb 22, 2020 at 1:21 PM James Bowery <[email protected]> wrote:
>
>> On Sat, Feb 22, 2020 at 12:32 PM Stanley Nilsen <[email protected]>
>> wrote:
>>
>>> On 2/22/20 1:22 AM, WriterOfMinds wrote:
>>>
>>> ...
>>> I recommend looking up the "orthogonality thesis" and doing some
>>> reading thereon. Morality, altruism, "human values," etc. are distinct
>>> from intellectual capacity, and must be *intentionally* incorporated
>>> into AGI if you want a complete, healthy artificial personality.
>>>
>>> --------------------------------------------------------------
>>> I don't think the orthogonality thesis is relevant. It talks about any
>>> combination of goals and intelligence, which is not what the "General"
>>> part of AGI is about.
>>
>> Be careful about "generality". According to what might be thought of as
>> the Cartesian paradigm of "generality", there is a very clear division
>> between subject and object, hence AIXI is it, and the orthogonality thesis
>> is a consequence of unifying sequential decision theory with Solomonoff
>> induction -- but leaving SDT's utility function open.
>>
>> But even AIXI's author has written a paper titled "A Complete Theory of
>> Everything Will Be Subjective <https://arxiv.org/abs/0912.5434>".
>>
>> My own take on this is that the orthogonality thesis is wrong, but only
>> once we dispense with the mechanistic view of spacetime implied by the
>> Cartesian divide.
>> Dispensing with that is beyond the capacity of those of
>> the tech-singularity and, I strongly suspect, beyond any mechanistic
>> implementation of AI (i.e., AIs not incorporating _some_ interpretation of
>> QM's uncertainty -- although I'm not prepared to say which). Hell, it's
>> beyond the capacity of the vast majority of people nowadays, as the vast
>> majority are de facto of the tech-singularity.
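
PS: For concreteness, here is a toy sketch of the thermometer point above -- entirely my own illustration, with made-up sensor names, numbers, and a crude zlib-based codelength proxy standing in for a real compressor. The point it demonstrates: a compressor that is given the source attribution can assign each sensor a latent bias and spend fewer bits on the residuals than one forced to treat the readings as a single anonymous stream.

import random
import zlib

random.seed(0)

# Two "thermometers" observe the same underlying temperature, but sensor B
# is miscalibrated by a constant offset (its "bias").
true_temps = [random.gauss(20.0, 2.0) for _ in range(2000)]
readings = []
for t in true_temps:
    source = random.choice("AB")
    bias = 0.0 if source == "A" else 18.0
    readings.append((source, t + bias + random.gauss(0.0, 0.5)))

def codelength(values):
    # Crude codelength proxy: round to integers, pack as bytes, deflate.
    data = bytes((int(round(v)) + 128) % 256 for v in values)
    return len(zlib.compress(data, 9))

# Model 1: no attribution -- compress the anonymous stream of readings.
anonymous = codelength(v for _, v in readings)

# Model 2: attribution as a latent identity -- estimate a per-sensor mean
# (which absorbs the bias) and compress only the residuals, charging
# 16 bytes for the two model parameters.
counts = {"A": 0, "B": 0}
sums = {"A": 0.0, "B": 0.0}
for src, v in readings:
    counts[src] += 1
    sums[src] += v
means = {s: sums[s] / counts[s] for s in ("A", "B")}
attributed = codelength(v - means[src] for src, v in readings) + 16

print("bytes without attribution:", anonymous)
print("bytes with per-sensor bias factored out:", attributed)

The attributed model should come out noticeably shorter; the exact numbers don't matter. What matters is that the latent identity gives the compressor somewhere to put the bias, so it stops paying for it on every reading -- which is all that "detecting and factoring out information sabotage" amounts to at the level of codelength.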
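
PPS: For anyone who wants the "unifying sequential decision theory with Solomonoff induction" remark above made explicit, Hutter's AIXI agent at cycle k (horizon m, universal monotone Turing machine U, programs q -- notation roughly as in Hutter's papers) picks its action by expectimax over the universal prior:

a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The alternating max/sum is the sequential-decision-theory half; the 2^{-\ell(q)} mixture over programs is the Solomonoff half; and nothing in the formula constrains what the rewards r_i encode, which is the sense in which the utility function is left open and the orthogonality thesis gets its footing.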
