> > What I wanted to emphasize was the fact that if you are happy that the
> > feeling of knowing could be explained by a search-plus-timeout mechanism
> > (or, as I would prefer to phrase it, a constraint-satisfaction-plus-
> > timeout mechanism), then your initial question was quite easy to answer,
> > and not so very mysterious.
>
> Actually, one of the benefits of having a list of things not known comes from
> being able to search it for a positive result without having to exhaustively
> (however one defines that) search the (presumably much larger) list of things
> known.  Again, I'm using "list" here generically -- I'm not concerned about
> the memory storage mechanism (not here, at least).

The problem here is that this entire "not known" list would either be stored
with the regular knowledge base, or in a structure similar to the main
knowledge base, so it wouldn't really help as far as speed goes.

What it would give you is a definite answer, instead of an 'unknown' or
'not found' answer from the database.
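As a minimal sketch of that distinction (Python; the class, the names, and the
queries are just illustrative assumptions, not a proposal for how memory is
actually organized): both structures sit in the same kind of store, so there
is no speed win, but a hit on the second one is a definite "I know that I
don't know" rather than a mere lookup failure.

    # Illustrative only: a store that can answer "known", "known unknown",
    # or plain "not found".  Both structures are ordinary dicts/sets, so a
    # lookup costs about the same either way -- the gain is the definite
    # negative answer, not speed.

    from enum import Enum

    class LookupResult(Enum):
        KNOWN = "known"
        KNOWN_UNKNOWN = "known unknown"   # explicitly recorded as not known
        NOT_FOUND = "not found"           # never seen in either structure

    class KnowledgeBase:
        def __init__(self):
            self.facts = {}              # query -> answer
            self.known_unknowns = set()  # queries marked "I don't know"

        def lookup(self, query):
            if query in self.facts:
                return LookupResult.KNOWN, self.facts[query]
            if query in self.known_unknowns:
                return LookupResult.KNOWN_UNKNOWN, None
            return LookupResult.NOT_FOUND, None

    kb = KnowledgeBase()
    kb.facts["capital of Norway"] = "Oslo"
    kb.known_unknowns.add("meaning of 'fomlepung'")
    print(kb.lookup("meaning of 'fomlepung'"))    # definite negative
    print(kb.lookup("1954 World Series winner"))  # merely absent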

_______________________________________

James Ratcliff - http://falazar.com

Looking for something...

--- On Wed, 7/30/08, Brad Paulsen <[EMAIL PROTECTED]> wrote:
From: Brad Paulsen <[EMAIL PROTECTED]>
Subject: Re: [agi] How do we know we don't know?
To: agi@v2.listbox.com
Date: Wednesday, July 30, 2008, 4:34 PM

Richard Loosemore wrote:
> Brad Paulsen wrote:
>>
>>
>> Richard Loosemore wrote:
>>> Brad Paulsen wrote:
>>>>
>>>>
>>>> Richard Loosemore wrote:
>>>>> Brad Paulsen wrote:
>>>>>> All,
>>>>>>
>>>>>> Here's a question for you:
>>>>>>
>>>>>>     What does fomlepung mean?
>>>>>>
>>>>>> If your immediate (mental) response was "I don't know." it means
>>>>>> you're not a slang-slinging Norwegian.  But, how did your brain
>>>>>> produce that "feeling of not knowing"?  And, how did it produce that
>>>>>> feeling so fast?
>>>>>>
>>>>>> Your brain may have been able to do a massively-parallel search of
>>>>>> your entire memory and come up "empty."  But, if it does this, it's
>>>>>> subconscious.  No one to whom I've presented the above question has
>>>>>> reported a conscious "feeling of searching" before having the
>>>>>> conscious feeling of not knowing.
>>>>>>
>>>>>> It could be that your brain keeps a "list of things I don't know."  I
>>>>>> tend to think this is the case, but it doesn't explain why your brain
>>>>>> can react so quickly with the feeling of not knowing when it doesn't
>>>>>> know it doesn't know (e.g., the very first time it encounters the word
>>>>>> "fomlepung").
>>>>>>
>>>>>> My intuition tells me the feeling of not knowing when presented with a
>>>>>> completely novel concept or event is a product of the "Danger, Will
>>>>>> Robinson!", reptilian part of our brain.  When we don't know we don't
>>>>>> know something we react with a feeling of not knowing as a survival
>>>>>> response.  Then, having survived, we put the thing not known at the
>>>>>> head of our list of "things I don't know."  As long as that thing is
>>>>>> in this list it explains how we can come to the feeling of not knowing
>>>>>> it so quickly.
>>>>>>
>>>>>> Of course, keeping a large list of "things I don't know" around is
>>>>>> probably not a good idea.  I suspect such a list will naturally get
>>>>>> smaller through atrophy.  You will probably never encounter the
>>>>>> fomlepung question again, so the fact that you don't know what it
>>>>>> means will become less and less important and eventually it will drop
>>>>>> off the end of the list.  And...
>>>>>>
>>>>>> Another intuition tells me that the list of "things I don't know"
>>>>>> might generate a certain amount of cognitive dissonance the resolution
>>>>>> of which can only be accomplished by seeking out new information
>>>>>> (i.e., "learning")?  If so, does this mean that such a list in an AGI
>>>>>> could be an important element of that AGI's "desire" to learn?  From a
>>>>>> functional point of view, this could be something as simple as a
>>>>>> scheduled background task that checks the "things I don't know" list
>>>>>> occasionally and, under the right circumstances, "pings" the AGI with
>>>>>> a pang of cognitive dissonance from time to time.
>>>>>>
>>>>>> So, what say ye?
>>>>>
>>>>> Isn't this a bit of a no-brainer?  Why would the human brain need to
>>>>> keep lists of things it did not know, when it can simply break the
>>>>> word down into components, then have mechanisms that watch for the
>>>>> rate at which candidate lexical items become activated .... when this
>>>>> mechanism notices that the rate of activation is well below the usual
>>>>> threshold, it is a fairly simple thing for it to announce that the
>>>>> item is not known.
>>>>>
>>>>> Keeping lists of "things not known" is wildly, outrageously
>>>>> impossible, for any system!  Would we really expect that the word
>>>>> "ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-
>>>>> owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
>>>>> hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
>>>>> dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw" is represented somewhere
>>>>> as a "word that I do not know"? :-)
>>>>>
>>>>> I note that even in the simplest word-recognition neural nets that I
>>>>> built and studied in the 1990s, activation of a nonword proceeded in a
>>>>> very different way than activation of a word:  it would have been easy
>>>>> to build something to trigger a "this is a nonword" neuron.
>>>>>
>>>>> Is there some type of AI formalism where nonword recognition would be
>>>>> problematic?
>>>>>
>>>>>
>>>>>
>>>>> Richard Loosemore
>>>>>
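The activation-rate idea described above might be sketched roughly like this
(Python; the lexicon, the trigram scoring, and the threshold are made-up
stand-ins for whatever the real recognition dynamics would be):

    # Toy sketch: score a string by how strongly it activates known lexical
    # items, and declare "not known" when activation stays below a threshold.
    # No stored list of nonwords is needed.

    LEXICON = {"know", "knowing", "word", "brain", "search", "list", "feeling"}

    def activation(candidate, lexicon=LEXICON, n=3):
        """Crude activation score: fraction of the candidate's character
        trigrams that occur inside any lexicon entry."""
        trigrams = [candidate[i:i + n] for i in range(len(candidate) - n + 1)]
        if not trigrams:
            return 0.0
        hits = sum(any(t in word for word in lexicon) for t in trigrams)
        return hits / len(trigrams)

    def recognize(candidate, threshold=0.3):
        score = activation(candidate)
        # Low activation rate -> announce "not known" immediately.
        return "known-ish" if score >= threshold else "not known"

    print(recognize("knowing"))    # activates strongly
    print(recognize("fomlepung"))  # weak activation -> "not known"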
>>>> Richard,
>>>>
>>>> You seem to have decided my request for comment was about word
>>>> (mis)recognition.  It wasn't.  Unfortunately, I included a misleading
>>>> example in my initial post.  A couple of list members called me on it
>>>> immediately (I'd expect nothing less from this group -- and this was a
>>>> valid criticism duly noted).  So far, three people have pointed out
>>>> that a query containing an un-common (foreign, slang or both) word is
>>>> one way to quickly generate the "feeling of not knowing."  But, it is
>>>> just that: only one way.  Not all "feelings of not knowing" are
>>>> produced by linguistic analysis of surface features.  In fact, I would
>>>> guess that the vast majority of them are not so generated.  Still, some
>>>> are and pointing this out was a valid contribution (perhaps that
>>>> example was fortunately bad).
>>>>
>>>> I don't think my query is a no-brainer to answer (unless you want to
>>>> make it one) and your response, since it contained only another
>>>> "flavor" of the previous two responses, gives me no reason whatsoever
>>>> to change my opinion.
>>>>
>>>> Please take a look at the revised example in this thread.  I don't
>>>> think it has the same problems (as an example) as did the initial
>>>> example.  In particular, all of the words are common (American English)
>>>> and the syntax is valid.
>>>
>>> Well, no, I did understand that your point was a general one (I just
>>> focused on the example because it was there, if you see what I mean).
>>>
>>> But the same response applies exactly to any other version of the
>>> "absence of a feeling of knowing" question.  Thus:  the system tries to
>>> assemble a set of elements (call them symbols, nodes, or whatever) that
>>> together form a consistent interpretation of the input, but the progress
>>> of the assembly operation is monitored, and it is quite easy to tell if
>>> the assembly of a consistent interpretation is going well or not.  When
>>> the monitor detects that there are no significant elements becoming
>>> activated in a strong way, it can reliably say that this input is
>>> something that the system does not know.
>>>
>>> So that would apply to a question like "Who won the 1945 World Series"
>>> as much as it would to the appearance of a simple non-word.
>>>
>>> In principle, there is nothing terribly difficult about such a 
>>> mechanism.
>>>
>>>
>>>
>>> Richard Loosemore
>>>
>>>
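The assembly-monitor idea above could be sketched like this (Python; the
element names, activation levels, and the "strong" threshold are invented
for illustration, not taken from any particular architecture):

    # Watch whether any interpretation elements become strongly activated,
    # and report "does not know" when none do.

    def monitor_assembly(element_activations, strong=0.7):
        """element_activations: dict of candidate interpretation elements
        to their current activation level (0.0 - 1.0)."""
        active = {e: a for e, a in element_activations.items() if a >= strong}
        if active:
            return "interpretation forming", active
        return "system does not know this input", {}

    # A query the system can interpret: several elements activate strongly.
    print(monitor_assembly({"baseball": 0.9, "1945": 0.8, "champion": 0.75}))

    # A query that activates nothing strongly: the monitor can reliably say
    # "not known" without consulting any list of unknowns.
    print(monitor_assembly({"fomlepung": 0.05, "slang?": 0.1}))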
>> The question was about the "feeling of not knowing" not the "absence of
>> a feeling of knowing."  These could be quite different feelings.  I
>> prefer to stick to the more positive version commonly evinced by the
>> statement "I don't know."  Characterizing this as "absence of a feeling
>> of knowing" implies there is a "normal feeling of knowing" typically
>> present that is felt as "absent" in the presence of a query such as the
>> "World Series" example query.  I would not be prepared to argue that
>> position, but I can argue there is a "feeling of not knowing" since an
>> "I don't know" response is a definitive declaration of that mental state.
>>
>> Your general description of "monitoring" during "the progress of the
>> assembly" contains implicit within it some form of search.  There is
>> nothing in the query statement, itself, that would engender the "feeling
>> of not knowing" (as would a phonological surface feature anomaly) without
>> performing a semantic (meaning) analysis.  Such an analysis will require
>> a search (e.g., of concepts).  It, therefore, falls under one of the
>> mechanisms posited in my initial post.
>>
>> If you think the analysis in the last paragraph is incorrect, I would
>> appreciate it if you could provide a concrete example of monitoring the
>> process of the assembly.
> 
> Ah, but I am trying to suggest, quite deliberately, that a "feeling of
> not knowing" something can be exactly explained by a mechanism that
> tracks the progress in a recognition episode, and does a time out.  So
> this is not the absence of knowing, this is a mechanism that knows that
> it does not know.
>
> Now, you refer to something that you label "an absence of a feeling of
> knowing", and you suggest that this might be different from a "feeling of
> not knowing" ..... but I honestly do not think that the first thing is
> well defined.  It sounds like something not happening, whereas what I am
> suggesting is that there is a monitoring mechanism that, at a specific
> time, kicks in and says "I declare that, since nothing has happened in
> this recognition episode, we KNOW positively that this piece of
> information is not in the system".  This would indeed, as you put it, be
> a definitive declaration of that mental state.
> 
No.  You brought up the "absence of a feeling of knowing."  I agree it's
not well defined, which is why I brought it up in my reply.  I understand
your point about a "time out."  That would, roughly, correspond to the
point in search (i.e., recognition) where the search mechanism feels
justified in declaring it has no knowledge about the thing it's trying to
recognize.  The criteria will probably depend on the type of search
mechanism.
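A bare-bones search-plus-timeout sketch of that point (Python; the memory
store, the retrieval loop, and the deadline are placeholders, not a claim
about how recall actually works):

    import time

    MEMORY = {"capital of Norway": "Oslo"}

    def candidate_matches(query):
        """Yield stored items one at a time, standing in for whatever
        retrieval process the memory system actually uses."""
        for key, value in MEMORY.items():
            yield key, value

    def recall(query, timeout_s=0.05):
        deadline = time.monotonic() + timeout_s
        for key, value in candidate_matches(query):
            if key == query:
                return value              # feeling of knowing
            if time.monotonic() > deadline:
                break                     # give up searching
        # Timeout or exhausted candidates: a positive "I don't know".
        return "I don't know"

    print(recall("capital of Norway"))
    print(recall("who won the 1954 World Series"))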

> (To be precise, though, I should say that the system *believes* that it 
> does not know:  there is a whole research literature regarding people 
> thinking that they do not know something, where in fact they do know it, 
> but often implicitly.  But let's not go in that direction).

Actually, I believe all knowledge is belief.  It's the distance one must
jump from one's conclusions to one's premises that makes knowledge feel
less like belief and more like "hard fact."  Either way, the result would
be more or less the same (although false "not knowing" would probably
resolve itself quicker).
> 
> As for the role of "search" in this:  yes, you could construe it as a
> form of search.  However, there are many circumstances in parallel
> constraint satisfaction (e.g. neural nets) where it is stretching a point
> to call it "search".  However, I do not really want to dispute that point
> either:  I think it is a matter of convention whether a neural net
> activation of a large number of microfeatures, for example, should be
> called search or not.
>
> What I did disagree with was the idea that there might be a search for
> (in the case of your first, lexical example) a specific nonword, which
> then was eventually *successful* because it found a lexical item
> corresponding to the target nonword, together with a label attached to it
> saying "not known".  I can make sense of a search (massively parallel or
> otherwise) that ends in a timeout, which then causes something to say
> "Because we did not find a match within the timeout period, the item or
> fact is not known", but I cannot make sense of a mechanism that must find
> the target, together with a label attached to it saying that the item is
> not known.  Searches are not a problem, only searches of that peculiar
> sort.
> 
Sorry.  That first example was not well thought out.  In my defense, I
speak Norwegian so I missed the phonological surface feature "problem"
with that example.  I still shouldn't have missed it, but I did.

> What I wanted to emphasize was the fact that if you are happy that the 
> feeling of knowing could be explained by a search-plus-timeout mechanism 
> (or, as I would prefer to phrase it, a constraint-satisfaction-plus- 
> timeout mechanism), then your initial question was quite easy to answer, 
> and not so very mysterious.
> 
Actually, one of the benefits of having a list of things not known comes
from being able to search it for a positive result without having to
exhaustively (however one defines that) search the (presumably much
larger) list of things known.  Again, I'm using "list" here generically --
I'm not concerned about the memory storage mechanism (not here, at least).

> In the case of the semantic example (trying to answer the question "Who
> won the World Series in 1954?") the mechanism would look the same as in
> the lexical example, provided that you used constraint satisfaction to
> get there.  Initially, the words would activate associated concepts
> (baseball, United States, sports, historical events, people's names,
> team names, city names ...), and it would also activate concepts that
> captured self-knowledge, such as the fact that (in my case) I know
> almost nothing of baseball, being that I am English.  All of those
> concepts would gel into a framework representing the question itself and
> also the knowledge that I have about the context of the question, and
> out of that interacting melange of concepts would emerge a stable state,
> among which would be the meta-knowledge about what happened as a result
> of allowing all this to percolate through the system.  And one part of
> all that would be the knowledge that the network of concepts had settled
> into a steady state after the question, but the "slot" for "name of team
> that won in 1954" would have nothing in it.  It is the combination of
> the steady state and the lack of a filler in the relevant slot that
> would lead the system to conclude that it did not know the answer.
> 
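The "stable state plus empty slot" idea might be sketched like this (Python;
the frame structure and the settled flag are invented for illustration):

    # Once the concept network has settled, an unfilled slot in the question
    # frame is what licenses "I don't know the answer".

    def answer_from_frame(frame, settled):
        """frame: dict of slot -> filler (None means nothing gelled into it).
        settled: whether the concept network has reached a steady state."""
        if not settled:
            return "still percolating..."
        missing = [slot for slot, filler in frame.items() if filler is None]
        if missing:
            return "I don't know (no filler for: " + ", ".join(missing) + ")"
        return frame

    question_frame = {
        "event": "World Series",
        "year": 1954,
        "winning team": None,   # the slot that never gets filled
    }

    print(answer_from_frame(question_frame, settled=True))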
That sounds like a *lot* of work.  And it sounds like it would have to be
repeated every time you ran into that query (which may be never again, but
humor me for a sec...).  If at the end of the process the first time
(i.e., when you didn't know you didn't know the answer), you put that
query (or the meta-knowledge about what happened) on the things I don't
know list, wouldn't that act to keep the subsequent percolation time to a
minimum?  And, couldn't it also act as an on-going subconscious means to
activate the desire for conscious learning?
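The caching idea in that paragraph, sketched in Python (purely illustrative;
whether anything like this list exists in a brain is exactly the point in
dispute here):

    # Memoize queries whose constraint-settling ended with an empty slot, so
    # a repeat of the same query skips the expensive percolation.

    dont_know_cache = set()   # the "things I don't know" list, kept generically

    def expensive_percolation(query):
        """Stand-in for the full constraint-satisfaction episode."""
        print("  (percolating on: " + repr(query) + ")")
        return None           # no filler emerges for this query

    def answer(query):
        if query in dont_know_cache:
            return "I don't know (cached)"    # fast definite negative
        result = expensive_percolation(query)
        if result is None:
            dont_know_cache.add(query)        # also a candidate for learning
            return "I don't know"
        return result

    print(answer("who won the 1954 World Series"))  # slow path, then cached
    print(answer("who won the 1954 World Series"))  # fast path, no percolation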

> Now, sometimes the lack of knowledge comes quickly, and sometimes it 
> comes slowly.  In the case of your question, my brain came to the 
> conclusion very quickly.  But to someone else it might be that they try 
> for a while, before realizing that they do not really know the answer. 
> The speed at which the conclusion appears is a function of how quickly 
> the concepts settle down into a stable configuration that does not 
> include a slot filler.
> 
> Again, it is not that I dispute the role of "search".  I only dispute
> that the problem is particularly difficult to imagine a solution for (at
> least, it is *not* difficult in a constraint-satisfaction formalism, but
> I would not be surprised if in some other formalism it turned out to be
> difficult.... I did ask you about that in my first message, because I am
> quite open to the possibility that some formalism might make it into
> more of a problem than I think it is), and secondly, I would dispute
> that the system keeps lists of stuff that it does not know, as a general
> policy.
> 
> 
Well, now, that was a very satisfying response!  You took at least part of
my initial post seriously and provided some very good arguments for a
paradigm in which the "feeling of not knowing" would be generated.  And,
your scenario makes as much sense to me as the one I proposed.  You have
taught me something and, for that, I thank you very much!  We continue to
disagree on the "list" business.  I should point out, once again, that I
am using that term as a conversational convenience, not literally.
Perhaps the same functionality could be provided in the form of feedback
to a constraint mechanism.

Cheers,

Brad

> 
> Richard Loosemore

