One of the more difficult concepts that I'm trying to write about in my
paper on full immersion VR is that of synthetic personalities.

Synthetic personalities are a concept closely related to AGI. It is one
of my main areas of focus because it seems to be the ideal framework for
refining human-like AGI.

What I'm talking about is not just an AGI algorithm but a complete
functional system including an avatar (real or virtual) and a balanced,
functional motivational system that may or may not attempt to exactly
replicate human emotions.

There is surprisingly little visible effort toward using AI in virtual
avatars. I suspect the primary reason is that it is definitely cheaper,
and seems reasonably useful, to develop algorithms with much simpler
tests in a more tightly bounded environment. Computational cost is also
a major factor.

Even beyond the basic desire to have intelligent avatars to, ahem,
interact with and do mad science on, the usefulness of full agent-based
AI shouldn't be overlooked. Even simple games have led to some of the
most impressive demonstrations in AI. Now consider an environment as
rich as or richer than Minecraft, where the agent can explore, create,
discover and solve problems, play, and just contemplate.

How is it possible that this is not a major focus of AGI efforts?
Yeah, I know about Malmo, but it seems far too tightly bound to the
idea of a mission, and too weak in all respects. It's probably past
time for me to re-evaluate it, though. =\

A true synthetic personality would be very much like a mind upload in
all respects, except that it makes sense to create a synthetic
personality, whereas it makes no sense whatsoever to try to emulate a
brain scan outside of a laboratory setting...

So it's not a "run_mission" type of thing at all; it's more of a
"connect to agent server, persistent storage = [x, y, z], avatar
session id = 12345".

Here both the personality server and the environment are, as much as
possible, continuously running systems with redundancy, automatic
fail-over, and live reconfiguration...
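To make the contrast with the "run_mission" model concrete, here is a
minimal sketch of what a persistent-session client might look like.
This is purely illustrative: the AgentServer class, its method names,
and the session id are all hypothetical, not any real API.

```python
# Hypothetical persistent personality server (illustration only).
# Contrast with a mission-based API: there is no mission lifecycle;
# the agent's state persists whether or not an avatar is attached.

class AgentServer:
    """Toy stand-in for a continuously running personality server."""

    def __init__(self):
        # Maps avatar session id -> the agent's persistent state.
        self.sessions = {}

    def connect(self, session_id, persistent_storage):
        # Re-attach to existing state if the session already exists,
        # so a client crash or fail-over does not reset the mind.
        if session_id not in self.sessions:
            self.sessions[session_id] = {"storage": persistent_storage,
                                         "ticks": 0}
        return self.sessions[session_id]

    def step(self, session_id, observation):
        # One perception/action cycle. In a real system this loop
        # would run server-side at its own rate, not per client call.
        state = self.sessions[session_id]
        state["ticks"] += 1
        return {"action": "idle", "ticks": state["ticks"]}


# Usage: attach an avatar session to an already-running mind.
server = AgentServer()
server.connect(12345, persistent_storage=["x", "y", "z"])
reply = server.step(12345, observation={"pos": (0, 64, 0)})
```

The point of the sketch is just the shape of the interface: connecting
re-attaches rather than restarts, and disconnecting leaves the agent's
state intact on the server.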

It seems very possible to make a great deal of progress on this, all
the way from the environment to the more mundane parts of the avatar's
neurology. That would make it 10,000 times easier for a random moron
(thinking of myself here) to come along and finish it off.

Once you get that up and running, the next area of research is figuring
out how to manipulate the minds, specifically how you would link one to
another. The point of doing that is figuring out exactly how you would
design a brain jack to link your consciousness to whatever digital
counterpart you build for yourself. Just think of all the fun you could
have playing Mario Kart!

https://knowyourmeme.com/photos/1413805-bowsette

Why is this not immediately obvious to everyone?


-- 
Please report bounces from this address to [email protected]

Powers are not rights.


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T670cda47d7114ee7-Mf6ff03a6dc7845f5f54407d3
Delivery options: https://agi.topicbox.com/groups/agi/subscription