Richard Loosemore wrote:
Brad Paulsen wrote:
Richard Loosemore wrote:
Brad Paulsen wrote:
James,
Someone ventured the *opinion* that keeping such a list of "things I
don't know" was "nonsensical," but I have yet to see any evidence or
well-reasoned argument backing that opinion. So, it's just an
opinion. One with which I, obviously, do not agree.
Please be clear about what was intended by my remarks.
I *now* have an explicit, episodic memory of confronting the question
"Who won the World Series in 1954?", and as a result of that episode
that occurred today, I have the explicit knowledge that I do not know
the answer. Having that kind of explicit knowledge of
lack-of-knowledge is not problematic at all.
The only thing that seems implausible is that IN GENERAL we try to
answer questions by first looking up explicit elements that encode
the fact that we do not know the answer. As a general strategy this
must, surely, be deeply implausible, for the reasons that I
originally gave, which centered on the fact that the sheer quantity
of unknowns would be overwhelming for any system. For almost every
one of the potentially askable questions that would elicit, in me, a
response of "I do not know", there would not be any such episode.
Similarly, it would be clearly implausible for the cognitive system
to spend its time making lists of things that it did not know. If
that is not an example of an obviously implausible mechanism, then I
do not know what would be.
Ah. Now we're getting somewhere! I do *not* (and did not) propose
that we keep a list of "all the things unknown" in memory. Nor did I
propose some "background" task that would maintain or add to such a
list. That would be "...wildly, outrageously impossible, for any
system!" Maybe, instead of assuming the worse (that I could be so
ignorant as to propose such a list), you might have asked for some
"clarification?"
The list of "things I don't know" is, by definition, a list of "things
I know I don't know." How could I *possibly* know about things I
don't know I don't know? The list I propose contains ONLY those
things we know we don't know. Such a list is, in my opinion,
completely manageable and, indeed, helpful information to have
around. When we first encounter a completely novel object or event we
will have to search (percolate, whatever) for it in memory and come up
empty (however you want to define that). It is then, and *only* then,
that we put this knowledge (or meta-knowledge) on the "things (I know)
I don't know" list.
This list can be consulted before performing a search of all memory to
determine if there's a need to do such an exhaustive search. If the
thing we're trying to remember is on the "things (I know) I don't
know" list, we can very quickly report the "feeling of not knowing."
Otherwise, we have to do the exhaustive (however you define that)
search of things we do know and come up empty. Such a list can also
be used by subconscious processes to power our desire to learn.
Presumably, we experience cognitive dissonance when we feel there's
something we know nothing about and want to resolve that feeling.
How? By learning. Once learned, the thing falls off the "things (I
know) I don't know" list. Similarly, if an item sits on the list,
unconsulted, for a long time, it will naturally "fall off" the list
(the "use it or lose it" principle). Both of these "natural" actions
will work, I believe,
to keep this list quite small.
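For concreteness, here is a rough sketch of the kind of bookkeeping I have in
mind (in Python; the class and method names, and the 30-day decay figure, are
just my own inventions for this example, not a claim about how any actual
cognitive system or AGI implementation would do it):

import time

class KnownUnknowns:
    """A small store of 'things I know I don't know' (illustrative only)."""

    def __init__(self, ttl_seconds=30 * 24 * 3600):   # arbitrary 30-day decay
        self.ttl = ttl_seconds
        self._items = {}   # query -> time of last encounter

    def feels_unknown(self, query):
        """Consulted BEFORE any exhaustive memory search: the quick
        'feeling of not knowing'."""
        self._prune()
        if query in self._items:
            self._items[query] = time.time()   # re-encountering it keeps it alive
            return True
        return False

    def record_unknown(self, query):
        """Called only after an exhaustive search has come up empty."""
        self._items[query] = time.time()

    def learned(self, query):
        """Once the thing is learned, it falls off the list."""
        self._items.pop(query, None)

    def _prune(self):
        """'Use it or lose it': drop entries not encountered within the TTL."""
        cutoff = time.time() - self.ttl
        self._items = {q: t for q, t in self._items.items() if t >= cutoff}

The point of the sketch is only that the list stays small on its own: entries
appear solely after a failed search, disappear when the answer is learned, and
evaporate if they go unused.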
These are all interesting questions, in a way, but they involve a way of
doing AI that I find ... problematic ... for other reasons. I would
have many questions about whether the maintenance and deployment of such
a list would actually be as viable as you imply, but that is very much a
practical question specific to that type of AI.
The more general issue of whether the system keeps meta-knowledge of
that sort is something that we completely agree on: whichever way it
uses it, it certainly does keep it for at least a while.
Sometimes (well, don't ask my ex) I can be a bit thick. I know you're
all surprised to hear that, but...
It just dawned on me that much of the uproar here may have been caused
by a miscommunication (gee, where have we heard of that happening
before?). I may have used the term "things we don't know" to denote
the "things we know we don't know" list. If so, please accept my
apologies. Having played with these questions for a long time, I
apparently lost sight of this *important* distinction and began to
assume it self-evident that a "things we don't know" list would have
had to come into being as the result of our encounters with those
things when they were "things we didn't know we didn't know" (and,
therefore, could not be in any list of knowledge we had -- we are
clueless about these things until we encounter them).
If that's the case, let me (finally) be clear: the "list" I am talking
about in the human or AGI agent's memory is a list of THINGS I KNOW I
DON'T KNOW. In the first (misleading) example I gave, the word
"fomlepung" would be on that list after the query containing it had
resulted in the "I don't know" answer (how that determination is made
is really a minor point for this discussion). In the second example I
gave, the query "Which team won the 1924 World Series?" would also,
after eliciting the "I don't know" response, find its way onto this list.
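Continuing the sketch I pasted earlier (again, purely illustrative -- the
query strings and the dictionary standing in for long-term memory are just
placeholders), those two examples would be handled something like this:

unknowns = KnownUnknowns()
long_term_memory = {}   # stand-in for "everything I do know"

def answer(query):
    if unknowns.feels_unknown(query):        # quick check of the list first
        return "I don't know"
    result = long_term_memory.get(query)     # otherwise, the exhaustive search
    if result is None:
        unknowns.record_unknown(query)       # the query now joins the list
        return "I don't know"
    return result

answer("What is a fomlepung?")                   # searches, fails, lands on the list
answer("Which team won the 1924 World Series?")  # same
answer("Which team won the 1924 World Series?")  # answered from the list, no new search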
Distinction understood and completely accepted. I did indeed start out
thinking that you meant "things we don't know" as opposed to "things
that we know we do not know".
I think that one of the factors that helps cause misunderstandings like
this is that there certainly are (or have been) people on this list (and
on the good old SL4 list) who really are dumb enough to push completely
ludicrous ideas that cannot possibly be implemented, and will defend
those ideas right down to the wire. With that kind of flak flying
around, it can sometimes be difficult to know whether a given idea is
just open to multiple interpretations or is the opening salvo in a
genuine piece of technicolor stir-fried baloney. :-)
(Not that I thought that your comments were ever at that level, I hasten
to add).
Richard Loosemore
Richard,
I come from a computer science background (in case you hadn't noticed :-)). I
played with neural nets back in the 1980s (McClelland and Rumelhart's "Parallel
Distributed Processing"). Sadly, I never really got a chance to work with that
approach. Practically speaking, the hardware needed to make it work on
real-world problems in real time back then just wasn't there.
As for AGI, I confess to being a functionalist. By that I mean, I believe we
can produce AGI without having to model the human brain down to the last
molecule. I believe we can build AGI without simulating the human senses (at
least a non-human form of AGI).
Humankind didn't achieve heavier-than-air, controlled flight by modeling the
bird down to the last molecule. In fact, for some applications of flight that
are today routine, no bird would suffice. I've also seen a goldfinch fly at 40
MPH and stop on a dime at the feeder. No human-made aircraft can do that. I
believe AGI could be in a similar situation vis-a-vis human intelligence before
mid-century (easy for me to say, I'll be long dead by then). Not just a
simulated human, but an intelligence that would be different from, and in some
respects much greater than, any human intelligence (biological or simulated)
could ever hope to be. Faster processing; bigger, more reliable memory. Able to scale up
as needed. Well, you get the idea...
Anyhow, I enjoy agreeing to disagree with you. I promise to be more mindful of
how I say what I say in the future.
Cheers,
Brad