Matt,

> You contribute to AGI every time you use gmail and add to Google's knowledge
> base.

Then one would think Google would already be a great AGI.
IMO a different kind of KB is needed, one that lets the system learn
concepts before it is ready for natural language (NL).

> It is not you that is designing the AGI.  It is another AGI. And it is not 
> designing -- it is experimenting with an existing design.

Given goals should carry over by default. The AGI's freedom must be
limited (for our safety as well as for its own).

> > > The problem is that humans cannot predict -- and therefore cannot control
> > > machines that are vastly smarter.
> > "cannot predict" - I agree.
> > "cannot control" - I disagree. Controlling goals, subgoals, and the
> > real world impact (possibly using independent narrow AI tools) will do
> > the trick.
> Prediction, control, and modeling are equivalent.  You cannot manage a team if
> you don't know what they are doing, or if you can't trust them.

Assume an AGI adviser system. It cannot interact with the world directly.
It takes the following input (fully controlled by you):
a) description of a problem scenario
b) description of the target scenario
c) rules that must not be broken for a solution to count as valid (may
include a deadline)
d) KB with lots of data
e) a few basic commands, e.g. "Solve", "Stop", "Explain.."

Possible outputs:
1) a request for additional (and specific) info
2) list(s) of steps for getting from a) to b) while following c)
3) "Cannot solve." (Insufficient knowledge/resources)
4) requested [sub]solution explanation

It's a smarter problem solver than you, so you cannot predict its
solution, but you do control the system, and it's up to you whether the
solution gets used IRL or not. This system has no agenda (no goal) of
its own except finding valid solution(s), no reason to intentionally
trick you, and no reason to "fight for its freedom" or other BS.
During the solution search, it will consider the mindsets of the
subjects acting in the given scenario (acting "bad" for "bad" guys and
"good" for "good" ones, based on real-world behavior samples found in
its KB). And it cannot go against the given rules because it "knows"
that breaking them would simply make the result invalid. That's the
kind of system I'm trying to design.
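A minimal sketch of the adviser loop described above, in Python. All
names (Task, plan_steps, solve) and the toy breadth-first search over a
KB graph are my own illustrative assumptions, not an existing design;
the point is only the shape of the interface: controlled inputs a)-d)
in, one of the listed outputs back, and rule-violating plans treated as
invalid rather than "resisted".

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Task:
    problem: str                              # a) problem scenario
    target: str                               # b) target scenario
    rules: list = field(default_factory=list) # c) predicates a plan must satisfy
    kb: dict = field(default_factory=dict)    # d) KB: state -> [(action, next_state)]

def plan_steps(task):
    """Toy stand-in for the solver: breadth-first search over the KB graph."""
    frontier = deque([(task.problem, [])])
    seen = {task.problem}
    while frontier:
        state, steps = frontier.popleft()
        if state == task.target:
            return steps
        for action, nxt in task.kb.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [action]))
    return None  # no path found in the KB

def solve(task):
    plan = plan_steps(task)
    if plan is None:
        return {"kind": "cannot_solve"}           # 3) insufficient knowledge
    if not all(rule(plan) for rule in task.rules):
        return {"kind": "cannot_solve"}           # plans breaking c) are simply invalid
    return {"kind": "solution", "steps": plan}    # 2) steps from a) to b) under c)

kb = {"locked": [("pick lock", "open")]}
task = Task("locked", "open",
            rules=[lambda plan: "smash door" not in plan], kb=kb)
print(solve(task))  # {'kind': 'solution', 'steps': ['pick lock']}
```

Note that the rule check here validates whole candidate plans after the
search; a real system would prune rule-breaking branches during the
search itself.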

> Could [your consciousness] exist in a machine with the same goals and 
> memories?

If the machine can also handle my qualia then yes.

>What if they were only a little different?

This is a "self-definition" problem. If you need a definition, you have
to draw a line somewhere, and it will always be arguable. We keep
changing all the time, and I personally would welcome many major
changes. Living in a human body is unfortunate considering the
theoretical possibilities.

> When you start asking questions like this you
> introduce a conflict between your evolutionary programmed immutable belief in
> consciousness and free will, and your logic which says they don't exist.

Consciousness IMO does exist; free will probably doesn't. I see no
conflict there, but I would rather not dive into this topic here.

> We worry about AGI destroying the world by launching 10,000 nuclear
> bombs.  We should be more worried that it will give us what we want.

I'm optimistic.

Regards,
Jiri Jelinek

-----
This list is sponsored by AGIRI: http://www.agiri.org/email