On Sat, Feb 22, 2020 at 12:32 PM Stanley Nilsen <[email protected]>
wrote:

>
>
> On 2/22/20 1:22 AM, WriterOfMinds wrote:
>
> ...
>   I recommend looking up the "orthogonality thesis" and doing some reading
> thereon.  Morality, altruism, "human values," etc. are distinct from
> intellectual capacity, and must be *intentionally* incorporated into AGI
> if you want a complete, healthy artificial personality.
>
> --------------------------------------------------------------
> I don't think the orthogonality thesis is relevant.  It talks about any
> combination of goals and intelligence, which is not what the "General"
> part of AGI is about.
>

Be careful about "generality".  Under what might be thought of as the
Cartesian paradigm of "generality", there is a very clear division between
subject and object.  On that paradigm, AIXI is the canonical formulation,
and the orthogonality thesis is a consequence of unifying sequential
decision theory with Solomonoff induction -- while leaving SDT's utility
function open.
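To make that openness concrete, here is (roughly) the standard rendering of
the AIXI expectimax equation from Hutter's formulation: the agent picks the
action maximizing expected total reward over all computable environments,
weighted by Solomonoff's universal prior:

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\bigl( r_k + \cdots + r_m \bigr)
\sum_{q \,:\; U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine, ℓ(q) is the length of program q, and
m is the horizon.  Nothing in the equation constrains what the rewards r_i
encode -- the utility function is a free input -- and that is precisely the
gap the orthogonality thesis exploits.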

But even AIXI's author has written a paper titled "A Complete Theory of
Everything (will be subjective) <https://arxiv.org/abs/0912.5434>".

My own take is that the orthogonality thesis is wrong, but only once we
dispense with the mechanistic view of spacetime implied by the Cartesian
divide.  Dispensing with that is beyond the capacity of the
tech-singularity crowd and, I strongly suspect, beyond any mechanistic
implementation of AI (i.e., AIs not incorporating _some_ interpretation of
QM's uncertainty -- although I'm not prepared to say which).  Hell, it's
beyond the capacity of the vast majority of people nowadays, as the vast
majority are de facto of the tech-singularity mindset.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc67faac3048278cf-Mea799f4d13f0f8c8a4695a56
Delivery options: https://agi.topicbox.com/groups/agi/subscription
