Richard Loosemore wrote:
...
[ASIDE. An example of this. The system is trying to answer the
question "Are all ravens black?", but it does not just look to its
collected data about ravens (partly represented by the vector of
numbers inside the "raven" concept, which are vaguely related to the
relevant probability); it also matters, quite crucially, that the STM
contains a representation of the fact that the question is being asked
by a psychologist, and that whereas the usual answer would be p(all
ravens are black) = 1.0, this particular situation might be an attempt
to make the subject come up with the most bizarre possible
counterexamples (a genetic mutant; a raven that just had an accident
with a pot of white paint, etc.). In these circumstances, the
numbers encoded inside concepts seem less relevant than the fact of
there being a person of a particular type uttering the question.]
...
Just doing my usual anarchic bit to bend the world to my unreasonable
position, that's all ;-).
Richard Loosemore.
I would model things differently; the reactions would likely be the
same, but ...
One encounters the assertion "All ravens are black" (in some context).
One immediately hits memories of previously encountering this (or an
equivalent?) statement.
One then notices that one hasn't encountered any ravens that aren't black.
Then one creates a tentative acknowledgement: "Yes, all ravens are black."
One evaluates the importance of an accurately correct answer (in the
current context). If an approximate answer is "good enough", one sticks
with this acknowledgement.
If, however, it's important to be precisely accurate, one models the
world, examining what features might cause a raven to not be black. If
some are found, then one modifies the statement, thus: "All ravens are
black, except for special circumstances".
One checks to see whether this suffices. If not, then one begins
attaching a list of possible special circumstances, in the order they
are generated:
"All ravens are black, except for special circumstances, such as:
they've acquired a coat of paint (or other coloring material), there
might be a mutation that would change their color, etc."
The significant thing here is that there are many stages where the
derivation could be truncated. At each stage a check is made as to
whether it's necessary to continue: just how precise an answer is
needed? Your example of a psychologist asking the question shapes the
frame of the "quest for sufficiently precise", but that quest is always
present. Rarely does one calculate a complete answer. Usually one
either stops at "good enough", or retrieves an "appropriate" answer
from memory.
Note that I implicitly asserted that, in this case, modeling the world
was more expensive than retrieving from memory. That's because that's
how I experienced it. It is, however, not always true. Also, if the
answer to a question depends on the current context, then modeling
the world may well be the only way to derive an answer. (Memory will
still be used to set constraints and suggest approaches, because that
is faster and more efficient than calculating such things de
novo...and often more accurate.)
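
The same caveat applies to this sketch of the retrieve-versus-model
choice: the cached answers, the context_dependent flag, and the
particular constraints memory supplies are all invented illustrations
of the point, not an actual design:

MEMORY = {"are all ravens black?": "yes"}   # cached, context-free answers

def model_world(question, context, constraints):
    # Stand-in for expensive simulation, seeded by remembered constraints.
    return "modeled answer to %r given %r and %r" % (question, context, constraints)

def derive(question, context, context_dependent=False):
    cached = MEMORY.get(question)
    if cached is not None and not context_dependent:
        return cached   # retrieval: usually (not always) the cheap path
    # Context matters, so modeling is the only route; memory still sets
    # constraints and suggests approaches rather than starting de novo.
    constraints = ["ravens are birds", "paint and mutations exist"]
    return model_world(question, context, constraints)

print(derive("are all ravens black?", "casual chat"))
print(derive("are all ravens black?", "psychology test", context_dependent=True))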
This is related to the earlier discussion on "optimality". I feel that
generally minds don't even attempt optimality as normally defined, but
rather search for a least-cost method that's "good enough". Of course,
if several "good enough" methods are available, the most nearly optimal
will often be chosen. Not always, though. Exploration is a part of what
minds do. A lot depends on what the pressures are at the moment. One
could consider this exploration as the search for a "more nearly
optimal" method, but I'm not sure that's an accurate characterization.
I rather suspect that what's happening is a "getting to know the
environment".
Of course, one could always argue that in a larger
context this is more nearly optimal...because minds have been selected
to be more nearly optimal than the competition, but it's a global
optimality, not the optimality in any particular problem. And, of
course, the optimal organization of a mind historically depends upon the
body that it's inhabiting. Thus beavers, cats, and humans will approach
the problem of crossing a stream differently. Of them all, only the
beaver is likely to have a mind that is tuned to a nearly optimal
approach to that problem. (And its optimal approach would be of no use
to a human or a cat, because of the requirement that minds match their
bodies.)
Is the AGI going to be disembodied? Then it will have a very different
optimal organization than will a human. But a global optimization of
the AGI will require that it initially be able to communicate with and
understand the motivations of humans. This doesn't imply that humans
will understand its motivations. Odds are they will do so quite
poorly. They will probably easily model the AGI as if it were another
human. (I've seen people do that with cats, dogs, and cars...an AGI
would likely make this inescapable, as it could communicate
intelligibly.)
So, in this context, what does "nearly optimal" mean? (I'm avoiding the
term "almost optimal", as I don't think we could either approach it or
define it.) One thing I'm certain it will entail is being vague rather
than precise in answering questions, except in specific cases where
precision is requested.