Re: [agi] How do we know we don't know?

2008-08-01 Thread Valentina Poletti
Mike:

I wrote my last email in a rush. Basically, what I was trying to explain is
precisely the basis of what you call the creative process in understanding
words. I simplified the whole thing a lot, because I did not even consider
the various layers of mappings - mappings of mappings, and so on.

What you say is correct: the word 'art-cop' will invoke various ideas, amongst
them 'art' - which in turn will evoke art-exhibit, painting, art-criticism,
and whatever else you want to add. The word 'cop' will analogously evoke a
series of concepts, and those concepts themselves will evoke more concepts,
and so on.

Now, obviously, if there were no 'measuring' system for how strongly concepts
evoke one another, this process would go nowhere. But fortunately there is such
a measuring process. Simplifying things a lot, it consists of excitatory and
inhibitory synapses, together with an overall disturbance or 'noise' which,
after a certain number of transitions, makes the signal lose its significance
(i.e. become random for practical purposes).
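
To make the above a little more concrete, here is a minimal Python sketch of
such a 'measuring' process: concepts evoke one another through excitatory
(positive) and inhibitory (negative) link weights, and a little noise is
injected at each transition, so that after a few hops weak signals become
indistinguishable from randomness. The link table, weights, and thresholds are
invented purely for illustration; they are not taken from any real model.

import random

# Hypothetical association links: weight > 0 is excitatory, < 0 is inhibitory.
LINKS = {
    "artcop": [("art", 0.8), ("cop", 0.7)],
    "art":    [("art-exhibit", 0.6), ("painting", 0.5), ("art-criticism", 0.4)],
    "cop":    [("police", 0.7), ("art-criticism", 0.3), ("painting", -0.2)],
}

def spread(seed, steps=3, noise=0.15):
    """Propagate activation from a seed concept, attenuating it with noise each step."""
    activation = {seed: 1.0}
    for _ in range(steps):
        nxt = dict(activation)
        for concept, level in activation.items():
            for target, weight in LINKS.get(concept, []):
                signal = level * weight + random.gauss(0.0, noise)
                nxt[target] = max(nxt.get(target, 0.0), signal)
        activation = nxt
    # After enough noisy transitions, weak signals are indistinguishable from noise.
    return {c: a for c, a in activation.items() if a > 2 * noise}

print(spread("artcop"))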

Hope this is not too confusing. I'm not that great at explaining my ideas
with words :)

Jim & Vlad:

That is a difficult question, because it depends a lot on your
database. Actually, Marko Rodriguez has attempted this in a program that
uses a database of related words from the University of South Florida. This
program is able to understand very simple analogies such as:
Which word of the second list best fits in the first list?
bear, cow, dog, tiger: turtle, carp, parrot, lion
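
For illustration, here is a rough sketch of how such an analogy could be
scored against a word-association database. The tiny ASSOCIATES table below is
a hand-made stand-in for the University of South Florida free-association data
(the real norms are vastly larger), and the scoring rule (overlap of
associates) is only one plausible choice, not necessarily what Marko
Rodriguez's program actually does.

# Toy stand-in for a free-association database: word -> set of associates.
ASSOCIATES = {
    "bear":   {"animal", "mammal", "fur", "large"},
    "cow":    {"animal", "mammal", "farm", "large"},
    "dog":    {"animal", "mammal", "pet", "fur"},
    "tiger":  {"animal", "mammal", "stripes", "large", "wild"},
    "turtle": {"animal", "reptile", "shell", "slow"},
    "carp":   {"animal", "fish", "water"},
    "parrot": {"animal", "bird", "feathers", "talk"},
    "lion":   {"animal", "mammal", "large", "wild"},
}

def best_fit(first_list, second_list):
    """Pick the word in second_list whose associates overlap most with the first list's."""
    profile = set.union(*(ASSOCIATES[w] for w in first_list))
    return max(second_list, key=lambda w: len(ASSOCIATES[w] & profile))

print(best_fit(["bear", "cow", "dog", "tiger"],
               ["turtle", "carp", "parrot", "lion"]))  # -> lion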

Obviously this program is very limited. If you just need to search for word
correspondences, I'd go with Vlad's suggestion. Otherwise there is a
lot to be implemented, in terms of layers, inhibitory vs. excitatory
connections, concepts from stimuli, and so on. What strikes me in AGI is that
so many researchers try to build an AGI with the presupposition that
everything should be built in already - that the machine should be able to
resolve tasks from day 1, just like in AI. That's like expecting a newborn baby
to talk about political issues! It's easy to forget that the database we have
in our brains, upon which we make decisions, selections, creations, and so
on, is incredibly large - in fact it took a lifetime to assemble.





Re: [agi] How do we know we don't know?

2008-08-01 Thread Vladimir Nesov
On Fri, Aug 1, 2008 at 12:16 PM, Valentina Poletti [EMAIL PROTECTED] wrote:

 Jim & Vlad:

 That is a difficult question, because it depends a lot on your
 database. Actually, Marko Rodriguez has attempted this in a program that
 uses a database of related words from the University of South Florida. This
 program is able to understand very simple analogies such as:
 Which word of the second list best fits in the first list?
 bear, cow, dog, tiger: turtle, carp, parrot, lion

 Obviously this program is very limited. If you just need to search for word
 correspondences, I'd go with Vlad's suggestion. Otherwise there is a
 lot to be implemented, in terms of layers, inhibitory vs. excitatory
 connections, concepts from stimuli, and so on. What strikes me in AGI is that
 so many researchers try to build an AGI with the presupposition that
 everything should be built in already - that the machine should be able to
 resolve tasks from day 1, just like in AI. That's like expecting a newborn
 baby to talk about political issues! It's easy to forget that the database we
 have in our brains, upon which we make decisions, selections, creations, and
 so on, is incredibly large - in fact it took a lifetime to assemble.


Hi Valentina,

I'm not quite sure what you mean here, but to be on the safe side: did
you internalize the warnings given in, e.g., (
http://www.overcomingbias.com/2008/07/detached-lever.html ), ( Drew
McDermott's "Artificial Intelligence Meets Natural Stupidity" ), (
http://www.overcomingbias.com/2007/11/artificial-addi.html )? I tried
to describe the physical origins of this disconnect between the
shallowness of the properties and tags we use to reason about real-world
objects and the exquisite level of detail in the objects themselves in (
http://causalityrelay.wordpress.com/2008/07/06/rules-of-thumb/ ), (
http://causalityrelay.wordpress.com/2008/07/17/flow-of-reality/ ), (
http://causalityrelay.wordpress.com/2008/07/20/precise-answers/ ).

My argument wasn't about word-matching per se, but about the
high-level characterization of the process of
reasoning/perception/recall/problem-solving. You can't paint the word
"Intelligence" in big letters and expect it to start doing intelligent
things.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] How do we know we don't know?

2008-07-31 Thread Valentina Poletti
This is how I explain it: when we perceive a stimulus, a word in this case, it
doesn't reach our brain as a single neuron firing or synapse, but as a set
of already processed neuronal groups or sets of synapses, each of which recalls
various other memories, concepts and neuronal groups. Let me clarify this. In
the example you give, the word "artcop" might reach us as a set of stimuli:
art, cop, medium-sized word, word that begins with 'a', and so on. All of these
connections activate various maps in our memory, and if something substantial is
monitored at some point (going with Richard's theory of the monitor - I don't
have other references for this, actually), we form a response.

This is more obvious in the case of sight, where an image is first broken
into various components that are separately elaborated - colours, motion,
edges, shapes, etc. - and then sent further to the upper parts of memory,
where they can be associated with higher-level concepts.
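
As a toy illustration of this decomposition, the sketch below breaks a word
stimulus into sub-stimuli (known sub-words, length, first letter), lets each
activate a memory "map", and forms a response only if the monitor sees
something substantial. The feature set, threshold, and word list are made up
for the example, not drawn from any actual model.

# Hypothetical lexicon of already-known sub-words.
KNOWN_WORDS = {"art", "cop", "cow", "stock", "thing"}

def features(word):
    """Decompose a stimulus into the sub-stimuli it arrives as."""
    feats = [f"length:{'medium' if 5 <= len(word) <= 8 else 'other'}",
             f"initial:{word[0]}"]
    feats += [f"subword:{w}" for w in KNOWN_WORDS if w in word]
    return feats

def respond(word, threshold=2):
    """Respond only if enough substantial (sub-word) maps were activated."""
    activated = [f for f in features(word) if f.startswith("subword:")]
    if len(activated) >= threshold:
        return f"form a response using {activated}"
    return "nothing substantial monitored -> 'I don't know'"

print(respond("artcop"))      # two known sub-words activate maps
print(respond("fomlepung"))   # no known sub-words -> don't know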

If any of this is not clear let me know, instead of adding me to your
kill-lists ;-P

Valentina



On 7/31/08, Mike Tintner [EMAIL PROTECTED] wrote:


 Vlad:

 I think Hofstadter's exploration of jumbles (
 http://en.wikipedia.org/wiki/Jumble ) covers this ground. You don't
 just recognize the word, you work on trying to connect it to what you
 know, and if the set of letters doesn't correspond to any word, you give
 up.


 There's still more to word recognition than this, though. How do we decide
 what is and isn't, or may or may not be, a word?  A neologism? What may or may
 not be words from:

 cogrough
 dirksilt
 thangthing
 artcop
 coggourd
 cowstock

 or fomlepaung or whatever?









-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken





Re: [agi] How do we know we don't know?

2008-07-31 Thread Richard Loosemore

Vladimir Nesov wrote:

I think Hofstadter's exploration of jumbles (
http://en.wikipedia.org/wiki/Jumble ) covers this ground. You don't
just recognize the word, you work on trying to connect it to what you
know, and if the set of letters doesn't correspond to any word, you give
up. This establishes a deep similarity between problem-solving,
perception and memory, and casts deliberative reasoning as the iterative
application of reflexive perception-steps. If you think the question
and it gives you an answer, you can act on it. If it doesn't, the
context in which you thought the question - the deliberative program
that started the request - will produce the "I don't know..." response. It's
probably as simple as that: a higher level of organization, not
fundamental to the structure of the mind, but learned behavior.



Agreed:  Hofstadter's Jumbo system was inspirational to me when I read 
it in 1986/7, and that idea of relaxation is exactly what was behind the 
descriptions that I gave, earlier in this thread, of systems that tried 
to do recognition and question answering by constraint relaxation.



Richard Loosemore
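
Vladimir's description above - deliberative reasoning as the iterative
application of reflexive perception-steps, with the deliberative caller
producing the "I don't know..." response when the reflexive steps stop
delivering - can be sketched in a few lines. Everything here (the lexicon, the
step logic, the step limit) is a stand-in for illustration, not anyone's
actual system.

LEXICON = {"art", "cop", "artcop"}   # pretend "artcop" has been coined

def reflexive_step(letters):
    """One reflexive perception-step: try to recognize a known word at the start."""
    candidates = [w for w in LEXICON if letters.startswith(w)]
    return max(candidates, key=len) if candidates else None

def deliberate(letters, max_steps=5):
    """Deliberation = repeatedly applying the reflexive step until done or stuck."""
    parsed, rest = [], letters
    for _ in range(max_steps):
        hit = reflexive_step(rest)
        if hit is None:
            return "I don't know..."   # the deliberative caller reports failure
        parsed.append(hit)
        rest = rest[len(hit):]
        if not rest:
            return parsed
    return "I don't know..."

print(deliberate("artcop"))      # -> ['artcop']
print(deliberate("fomlepung"))   # -> I don't know...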





Re: [agi] How do we know we don't know?

2008-07-31 Thread Brad Paulsen

Mike,

Valentina was referring to a remark I made (and shouldn't have -- just on 
general principles) about her making my *personal* kill-list thanks to the LOL 
she left regarding Richard Loosemore's original reply to the post that started 
this thread.  I should have taken a time out before I opened my big fingers. 
Had I done so, I would have found out (as I did through subsequent exchanges 
with Richard) that his comments were based on a misunderstanding.  He thought 
what I was calling the list of things we don't know was a list of all things 
not known.  It wasn't.  I was referring to the list of things we know we don't 
know.  I take full responsibility for creating this misunderstanding through 
sloppy writing/editing.  Anyhow, I took Richard's initial comments the wrong way 
(probably because I'm as insecure as the next person).  Valentina's message got 
read in that context.  The misunderstanding has all been worked out now, so 
there was really no reason for all the initial drama.


Valentina: if you're reading this, I apologize for overreacting.  I re-read your 
post after I'd calmed down and realized that you did add a brief comment on 
Richard's reply.  You didn't just pile on.  I look forward to hearing more 
about your views on building an AGI.


I'm happy to see this thread has generated some interesting side discussions. 
I'm here to learn and, occasionally, see what people who give a lot of time and 
thought to this subject think of my whacky ideas.


Cheers,

Brad


Mike Tintner wrote:

Er no, I don't believe in killing people :)
 
I'm not quite sure what you're getting at. I was just trying to add 
another layer of complexity to the brain's immensely multilayered 
processing.  Our processing of new words/word combinations shows that 
there is a creative aspect to this processing - it isn't just matching.  
Some of this might be done by standard verbal associations/semantic 
networks - e.g. yes, IMO "artcop" could be a word for, say, an art critic: 
cops police, and art can be seen as being policed - I may even have 
that last expression in memory.  But in other cases, the processing may 
have to be done by imaginative association/drawing - "dirksilt" could 
just conceivably be a word, if I imagine some dirk/dagger-like 
tool being used on silt (doesn't make much sense, but conceivable for my 
brain) - and I doubt that such reasoning could be purely verbal.
 
 
Valentina: This is how I explain it: when we perceive a stimulus, a word 
in this case, it doesn't reach our brain as a single neuron firing or 
synapse, but as a set of already processed neuronal groups or sets of 
synapses, each of which recalls various other memories, concepts and 
neuronal groups. Let me clarify this. In the example you give, the word 
"artcop" might reach us as a set of stimuli: art, cop, medium-sized word, 
word that begins with 'a', and so on. All of these connections activate 
various maps in our memory, and if something substantial is monitored at 
some point (going with Richard's theory of the monitor - I don't have 
other references for this, actually), we form a response.

This is more obvious in the case of sight, where an image is first
broken into various components that are separately elaborated -
colours, motion, edges, shapes, etc. - and then sent further to the
upper parts of memory, where they can be associated with higher-level
concepts.

If any of this is not clear let me know, instead of adding me to
your kill-lists ;-P
 
On 7/31/08, *Mike Tintner* [EMAIL PROTECTED] wrote:



Vlad:

I think Hofstadter's exploration of jumbles (
http://en.wikipedia.org/wiki/Jumble ) covers this ground. You don't
just recognize the word, you work on trying to connect it to what you
know, and if the set of letters doesn't correspond to any word, you
give up.


There's still more to word recognition than this, though. How do we
decide what is and isn't, or may or may not be, a word?  A neologism?
What may or may not be words from:

cogrough
dirksilt
thangthing
artcop
coggourd
cowstock

or fomlepaung or whatever?

 










Re: [agi] How do we know we don't know?

2008-07-30 Thread Mark Waser
Wow!  The civility level on this list is really bottoming out . . . . along 
with any sort of scientific grounding.


I have to agree with both Valentina and Richard . . . . since they are 
supported by scientific results while others are merely speculating without 
basis.


Experimental (imaging) evidence shows that known words will strongly 
activate some set of neurons when heard.  Unknown words with recognizable 
parts/features will also activate some other set of neurons when heard, 
possibly allowing the individual to puzzle out the meaning even if the word 
has never been heard before.  Totally unknown words will not strongly 
activate any neurons -- except, subsequently (i.e. on a delay), some set of 
"HUH?" neurons.


If you wish, you can consider this to be an analogue of a massively parallel 
search carried out by the subconscious, but it's really just an automatic 
operation.  Recognized word == activated neurons bringing its meaning 
forward through spreading activation.  Totally unrecognized word == no 
activated neurons, which is then interpreted as "I don't know this word."
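
A toy rendering of the three cases described above, with invented words,
weights, and thresholds: a known word activates strongly, a word with
recognizable parts activates moderately (so its meaning might be puzzled out),
and a totally unknown word activates almost nothing, which is itself read out
as "I don't know this word".

SEMANTIC = {"art": 1.0, "cop": 1.0, "critic": 1.0}   # words with stored meanings
PARTS = {"art", "cop", "dirk", "silt"}               # recognizable parts/features

def activation(word):
    if word in SEMANTIC:
        return SEMANTIC[word]                # strong, direct activation
    found = [p for p in PARTS if p in word]
    return 0.4 * len(found)                  # weaker activation via parts only

def classify(word, known=0.9, partial=0.3):
    a = activation(word)
    if a >= known:
        return "known word: meaning brought forward by spreading activation"
    if a >= partial:
        return "unknown word with recognizable parts: try to puzzle out meaning"
    return "no activation -> 'I don't know this word' (the delayed HUH? response)"

for w in ("cop", "artcop", "fomlepung"):
    print(w, "->", classify(w))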


Ed's response (which you praised), while a nice fanciful story that might 
work in another universe, is *not* supported by any evidence and is 
contra-indicated by a reasonable amount of experimental evidence.



- Original Message - 
From: Brad Paulsen [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, July 29, 2008 7:33 PM
Subject: Re: [agi] How do we know we don't know?



Valentina,

Well, the LOL is on you.

Richard failed to add anything new to the two previous responses that each 
posited linguistic surface feature analysis as being responsible for 
generating the feeling of not knowing with that *particular* (and, 
admittedly poorly-chosen) example query.  This mechanism will, however, 
apply to only a very tiny number of cases.


In response to those first two replies (not including Richard's), I 
apologized for the sloppy example and offered a new one.  Please read the 
entire thread and the new example.  I think you'll find Richard's and your 
explanation will fail to address how the new example might generate the 
feeling of not knowing.


Cheers,

Brad

Valentina Poletti wrote:

lol.. well said richard.
the stimulus simply invokes no significant response and thus our brain 
concludes that we 'don't know'. that's why it takes no effort to realize 
it. agi algorithms should be built in a similar way, rather than 
searching.



Isn't this a bit of a no-brainer?  Why would the human brain need to
keep lists of things it did not know, when it can simply break the
word down into components, then have mechanisms that watch for the
rate at which candidate lexical items become activated  when
 this mechanism notices that the rate of activation is well below
the usual threshold, it is a fairly simple thing for it to announce
that the item is not known.

Keeping lists of things not known is wildly, outrageously
impossible, for any system!  Would we really expect that the word
ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-
owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere
as a word that I do not know? :-)

I note that even in the simplest word-recognition neural nets that I
built and studied in the 1990s, activation of a nonword proceeded in
a very different way than activation of a word:  it would have been
easy to build something to trigger a this is a nonword neuron.

Is there some type of AI formalism where nonword recognition would
be problematic?



Richard Loosemore

 













Re: [agi] How do we know we don't know?

2008-07-30 Thread Jim Bromer
On Wed, Jul 30, 2008 at 9:50 AM, Mark Waser [EMAIL PROTECTED] wrote:
 Wow!  The civility level on this list is really bottoming out . . . . along
 with any sort of scientific grounding.

 Experimental (imaging) evidence shows that known words will strongly
 activate some set of neurons when heard.  Unknown words with recognizable
 parts/features will also activate some other set of neurons when heard,
 possibly allowing the individual to puzzle out the meaning even if the word
 has never been heard before.  Totally unknown words will not strongly
 activate any neurons -- except subsequently (i.e. on a delay) some set of
 HUH? neurons.

Well, your imaging evidence is part imaging and part imagining, since
no one knows what the imaging is actually showing.  I think it is
commonly believed that the imaging techniques show blood flow into
areas of the brain, and this is (reasonably, in my view) taken as
evidence of neural activity.  OK, but what kind of thinking is actually
going on, and how extensive are the links that don't have enough 'wow'
factor for researchers to write up as repeatable experiments or press
releases?  So if you are going to claim that your speculations are
better grounded, I would like to see some research showing that
unknown words will not strongly activate any neurons.  Take your
time - I am only asking a question, not challenging you to fantasy
combat.

Jim Bromer




Re: [agi] How do we know we don't know?

2008-07-30 Thread Richard Loosemore

Brad Paulsen wrote:



Richard Loosemore wrote:

Brad Paulsen wrote:

All,

Here's a question for you:

What does fomlepung mean?

If your immediate (mental) response was I don't know. it means 
you're not a slang-slinging Norwegian.  But, how did your brain 
produce that feeling of not knowing?  And, how did it produce that 
feeling so fast?


Your brain may have been able to do a massively-parallel search of 
your entire memory and come up empty.  But, if it does this, it's 
subconscious.  No one to whom I've presented the above question has 
reported a conscious feeling of searching before having the 
conscious feeling of not knowing.


It could be that your brain keeps a list of things I don't know.  I 
tend to think this is the case, but it doesn't explain why your brain 
can react so quickly with the feeling of not knowing when it doesn't 
know it doesn't know (e.g., the very first time it encounters the 
word fomlepung).


My intuition tells me the feeling of not knowing when presented with 
a completely novel concept or event is a product of the Danger, Will 
Robinson!, reptilian part of our brain.  When we don't know we don't 
know something we react with a feeling of not knowing as a survival 
response.  Then, having survived, we put the thing not known at the 
head of our list of things I don't know.  As long as that thing is 
in this list it explains how we can come to the feeling of not 
knowing it so quickly.


Of course, keeping a large list of things I don't know around is 
probably not a good idea.  I suspect such a list will naturally get 
smaller through atrophy.  You will probably never encounter the 
fomlepung question again, so the fact that you don't know what it 
means will become less and less important and eventually it will drop 
off the end of the list.  And...


Another intuition tells me that the list of things I don't know, 
might generate a certain amount of cognitive dissonance the 
resolution of which can only be accomplished by seeking out new 
information (i.e., learning)?  If so, does this mean that such a 
list in an AGI could be an important element of that AGI's desire 
to learn?  From a functional point of view, this could be something 
as simple as a scheduled background task that checks the things I 
don't know list occasionally and, under the right circumstances, 
pings the AGI with a pang of cognitive dissonance from time to time.


So, what say ye?


Isn't this a bit of a no-brainer?  Why would the human brain need to 
keep lists of things it did not know, when it can simply break the 
word down into components, then have mechanisms that watch for the 
rate at which candidate lexical items become activated  when  this 
mechanism notices that the rate of activation is well below the usual 
threshold, it is a fairly simple thing for it to announce that the 
item is not known.


Keeping lists of things not known is wildly, outrageously 
impossible, for any system!  Would we really expect that the word 
ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-

owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere 
as a word that I do not know? :-)


I note that even in the simplest word-recognition neural nets that I 
built and studied in the 1990s, activation of a nonword proceeded in a 
very different way than activation of a word:  it would have been easy 
to build something to trigger a this is a nonword neuron.


Is there some type of AI formalism where nonword recognition would be 
problematic?




Richard Loosemore


Richard,

You seem to have decided my request for comment was about word 
(mis)recognition.  It wasn't.  Unfortunately, I included a misleading 
example in my initial post.  A couple of list members called me on it 
immediately (I'd expect nothing less from this group -- and this was a 
valid criticism duly noted).  So far, three people have pointed out that 
a query containing an un-common (foreign, slang or both) word is one way 
to quickly generate the feeling of not knowing.  But, it is just that: 
only one way.  Not all feelings of not knowing are produced by 
linguistic analysis of surface features.  In fact, I would guess that 
the vast majority of them are not so generated.  Still, some are and 
pointing this out was a valid contribution (perhaps that example was 
fortunately bad).


I don't think my query is a no-brainer to answer (unless you want to 
make it one) and your response, since it contained only another flavor 
of the previous two responses, gives me no reason whatsoever to change 
my opinion.


Please take a look at the revised example in this thread.  I don't think 
it has the same problems (as an example) as did the initial example.  In 
particular, all of the words are common (American English) and the 
syntax is valid.


Well, no, I did understand 

Re: [agi] How do we know we don't know?

2008-07-30 Thread Richard Loosemore

Brad Paulsen wrote:

Valentina,

Well, the LOL is on you.

Richard failed to add anything new to the two previous responses that 
each posited linguistic surface feature analysis as being responsible 
for generating the feeling of not knowing with that *particular* (and, 
admittedly poorly-chosen) example query.  This mechanism will, however, 
apply to only a very tiny number of cases.


In response to those first two replies (not including Richard's), I 
apologized for the sloppy example and offered a new one.  Please read 
the entire thread and the new example.  I think you'll find Richard's 
and your explanation will fail to address how the new example might 
generate the feeling of not knowing.


Brad,

Isn't this response, as well as the previous response directed at me, 
just a little more annoyed-sounding than it needs to be?


Both Valentina and I (and now Mark Waser also) have simply focused on 
the fact that it is relatively trivial to build mechanisms that monitor 
the rate at which the system is progressing in its attempt to do a 
recognition operation, and then call it as a not known if the progress 
rate is below a certain threshold.
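
For concreteness, here is a small sketch of such a monitor: it watches how
fast the best candidate's activation is rising during a recognition attempt
and calls "not known" as soon as progress stalls below a threshold. The
activation traces and numbers are simulated, not taken from any real
recognition system.

def recognize(activation_trace, min_rate=0.05, patience=3):
    """activation_trace: best candidate activation at each time step."""
    slow_steps = 0
    for prev, cur in zip(activation_trace, activation_trace[1:]):
        if cur - prev < min_rate:
            slow_steps += 1
            if slow_steps >= patience:
                return "not known"       # progress stalled well below threshold
        else:
            slow_steps = 0
        if cur > 0.9:
            return "recognized"
    return "not known"

print(recognize([0.1, 0.3, 0.6, 0.85, 0.95]))    # word-like: rises fast -> recognized
print(recognize([0.05, 0.07, 0.08, 0.08, 0.09])) # nonword-like: stalls -> not known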


In particular, you did suggest the idea of a system keeping lists of 
things it did not know, and surely it is not inappropriate to give a 
good-naturedly humorous response to that one?


So far, I don't see any of us making a substantial misunderstanding of 
your question, nor anyone being deliberately rude to you.




Richard Loosemore











Valentina Poletti wrote:

lol.. well said richard.
the stimulus simply invokes no significant response and thus our brain 
concludes that we 'don't know'. that's why it takes no effort to 
realize it. agi algorithms should be built in a similar way, rather 
than searching.



Isn't this a bit of a no-brainer?  Why would the human brain need to
keep lists of things it did not know, when it can simply break the
word down into components, then have mechanisms that watch for the
rate at which candidate lexical items become activated  when
 this mechanism notices that the rate of activation is well below
the usual threshold, it is a fairly simple thing for it to announce
that the item is not known.

Keeping lists of things not known is wildly, outrageously
impossible, for any system!  Would we really expect that the word
ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-
owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere
as a word that I do not know? :-)

I note that even in the simplest word-recognition neural nets that I
built and studied in the 1990s, activation of a nonword proceeded in
a very different way than activation of a word:  it would have been
easy to build something to trigger a this is a nonword neuron.

Is there some type of AI formalism where nonword recognition would
be problematic?



Richard Loosemore
















Re: [agi] How do we know we don't know?

2008-07-30 Thread Richard Loosemore

Jim Bromer wrote:

On Wed, Jul 30, 2008 at 9:50 AM, Mark Waser [EMAIL PROTECTED] wrote:

Wow!  The civility level on this list is really bottoming out . . . . along
with any sort of scientific grounding.



Experimental (imaging) evidence shows that known words will strongly
activate some set of neurons when heard.  Unknown words with recognizable
parts/features will also activate some other set of neurons when heard,
possibly allowing the individual to puzzle out the meaning even if the word
has never been heard before.  Totally unknown words will not strongly
activate any neurons -- except subsequently (i.e. on a delay) some set of
HUH? neurons.


Well, your imaging evidence is part imaging and part imagining, since
no one knows what the imaging is actually showing.  I think it is
commonly believed that the imaging techniques show blood flow into
areas of the brain, and this is (reasonably, in my view) taken as
evidence of neural activity.  OK, but what kind of thinking is actually
going on, and how extensive are the links that don't have enough 'wow'
factor for researchers to write up as repeatable experiments or press
releases?  So if you are going to claim that your speculations are
better grounded, I would like to see some research showing that
unknown words will not strongly activate any neurons.  Take your
time - I am only asking a question, not challenging you to fantasy
combat.


There are many such studies.  It appears that activation is strong in 
semantic areas when words are involved, but only in the phoneme-grapheme 
mapping areas when nonwords are involved.  If something in the brain is 
monitoring the strength of activation in the semantic area, it would be 
able to extract a feeling of knowing signal.


There are also studies of feeling of knowing in episodic memory, and 
also work on being able to distinguish syntactically correct sentences 
from incorrect ones.


The common thread in all of these studies is that gross differences of 
processing can be found between recognition of known and unknown items, 
whether those items be at the word level or higher.


And the common interpretation of these results seems to be that strong 
activation occurs in the expected area when the item is known  which 
means that it would be easy for the system to conclude that the 
*absence* of such strong activation can be taken to mean that the item 
is not known.


I have copied one example of such a paper below (title and abstract 
only).  This is about the word-nonword case, but as I have said before, 
the interpretation does generalize easily to higher cases such as 
recognizing that you do not know the answer to a question.





Richard Loosemore






Neural Correlates of Lexical Access During Visual Word Recognition,
Binder, J.R., McKiernan, K.A., Parsons, M.E., Westbury, C.F., Possing, 
E.T., Kaufman, J.N., Buchanan, L.J., Cogn. Neurosci. 2003; 15: 372-393



People can discriminate real words from nonwords even when the latter 
are orthographically and phonologically word-like, presumably because 
words activate specific lexical and/or semantic information. We 
investigated the neural correlates of this identification process using 
event-related functional magnetic resonance imaging (fMRI). Participants 
performed a visual lexical decision task under conditions that 
encouraged specific word identification: Nonwords were matched to words 
on orthographic and phonologic characteristics, and accuracy was 
emphasized over speed. To identify neural responses associated with 
activation of nonsemantic lexical information, processing of words and 
nonwords with many lexical neighbors was contrasted with processing of 
items with no neighbors. The fMRI data showed robust differences in 
activation by words and word-like nonwords, with stronger word 
activation occurring in a distributed, left hemisphere network 
previously associated with semantic processing, and stronger nonword 
activation occurring in a posterior inferior frontal area previously 
associated with grapheme-to-phoneme mapping. Contrary to lexicon-based 
models of word recognition, there were no brain areas in which 
activation increased with neighborhood size. For words, activation in 
the left prefrontal, angular gyrus, and ventrolateral temporal areas was 
stronger for items without neighbors, probably because accurate 
responses to these items were more dependent on activation of semantic 
information. The results show neural correlates of access to specific 
word information. The absence of facilitatory lexical neighborhood 
effects on activation in these brain regions argues for an 
interpretation in terms of semantic access. Because subjects performed 
the same task throughout, the results are unlikely to be due to 
task-specific attentional, strategic, or expectancy effects.




Re: [agi] How do we know we don't know?

2008-07-30 Thread Brad Paulsen

Abram,

The syntactic surface-feature argument makes a good, but rather narrow, addition 
to the list of mechanisms that can engender a feeling of not knowing.  The 
interesting part is that, as someone who speaks Norwegian, I didn't have any 
phonological feature alarms set off by using that word in the example.  
Non-Norwegian speakers picked it up right off.


Your argument for semantic (i.e., meaning) features lacks concrete examples so 
it is difficult to tell exactly what you mean.  Based on your general argument, 
I would conclude that it requires a content search of some sort and, therefore, 
falls under one of the mechanisms posited in my initial post.


Cheers,

Brad

Abram Demski wrote:

I think the same sort of solution applies to the world series case;
the only difference is that it is semantic features that fail to
combine, rather than syntactic. In other words, there are either zero
associations or none with the potential to count as an answer.

--Abram

On Tue, Jul 29, 2008 at 7:51 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

Matt,

I confess, I'm not sure I understand your response.  It seems to be a
variant of the critique made by three people early-on in this thread based
on the misleading example query in my original post.  These folks noted that
an analysis of linguistic surface features (i.e., the word fomlepung would
not sound right to an English speaking query recipient) could account for
the feeling of not knowing.  And they were right.  For queries of that
type (i.e., queries that contained foreign, slang or uncommon words).

I apologized for that first example and provided an improved query (one that
has valid English syntax and uses common English words -- so it will pass
linguistic surface feature analysis).  To wit: Which team won the 1924
World Series?

Cheers,

Brad


Matt Mahoney wrote:

This is not a hard problem. A model for data compression has the task of
predicting the next bit in a string of unknown origin. If the string is an
encoding of natural language text, then modeling is an AI problem. If the
model doesn't know, then it assigns a probability of about 1/2 to each of
0 and 1. Probabilities can be easily detected from outside the model,
regardless of the intelligence level of the model.

 -- Matt Mahoney, [EMAIL PROTECTED]
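
A minimal sketch of the detection Matt describes, under the assumption that
the model exposes its next-bit probability: a trivial frequency-counting
predictor stands in for a real compression model, and "doesn't know" shows up
from outside as a predicted distribution close to 1/2-1/2, i.e. near-maximal
entropy. The class name and threshold are invented for illustration.

import math

class BitPredictor:
    def __init__(self):
        self.counts = {0: 1, 1: 1}          # Laplace-smoothed bit counts
    def p_one(self):
        return self.counts[1] / (self.counts[0] + self.counts[1])
    def update(self, bit):
        self.counts[bit] += 1

def knows(model, threshold_bits=0.9):
    """The model 'knows' when its predicted next-bit distribution has low entropy."""
    p = model.p_one()
    entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return entropy < threshold_bits

m = BitPredictor()
print(knows(m))                 # False: p is 1/2, entropy is 1 bit
for b in [1, 1, 1, 1, 1, 1, 1, 1]:
    m.update(b)
print(knows(m))                 # True: p_one is high, entropy well below 1 bit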















Re: [agi] How do we know we don't know?

2008-07-30 Thread James Ratcliff
Sure, 
search is at the root of all processing, be it human or AI.

How we each go about the search, how efficient we are at the task, what 
exactly we are searching for, and how we cope with the exponential explosion 
all differ.

But some type of search is done, whether we are consciously aware of our brains 
doing the search or not.

A bit of context information about the question should allow us to use 
some heuristics to look at a smaller area of the knowledge bases in our brains, 
or in a computer's memory.

Having a list of things we don't know is nonsensical when it comes to 
individual terms, as has been pointed out, but something like an aggregate 
estimate of knowledge known could be computed.

I myself know a little about baseball - say 10% - but baseball history and 
World Series statistics would be more like 0.1%.

James Ratcliff
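
A back-of-the-envelope sketch of the aggregate estimate James suggests:
instead of listing things we don't know, keep a rough per-topic coverage
figure and answer "I don't know" quickly for topics where coverage is tiny.
The topics, coverage numbers, and stored fact below are made up for
illustration only.

COVERAGE = {                      # fraction of a domain we think we have stored
    "baseball": 0.10,
    "baseball history": 0.001,
    "world series statistics": 0.001,
}
FACTS = {("world series statistics", "1956 winner"): "New York Yankees"}

def answer(topic, question, min_coverage=0.05):
    if (topic, question) in FACTS:
        return FACTS[(topic, question)]
    if COVERAGE.get(topic, 0.0) < min_coverage:
        return "I don't know (and I know I know little about this topic)"
    return "I don't know (but it's worth searching harder)"

print(answer("world series statistics", "1924 winner"))
print(answer("baseball", "how many players on the field"))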

--- On Tue, 7/29/08, Brad Paulsen [EMAIL PROTECTED] wrote:
James,

So, you agree that some sort of search must take place before the feeling
of not knowing presents itself?  Of course, realizing we don't
have a lot of information results from some type of a search and not a 
separate process
(at least you didn't posit any).

Thanks for your comments!
Cheers
Brad

James Ratcliff wrote:
 It is fairly simple at that point, we have enough context to have a very 
 limited domain
 world series - baseball
 1924
 answer is a team,
 so we can do a lookup in our database easily enough, or realize that we 
 really don't have a lot of information about baseball in our mindset.
 
 And for the other one, it would just be a strait term match.
 
 James Ratcliff
 
 ___
 James Ratcliff - http://falazar.com
 Looking for something...
 
 --- On *Mon, 7/28/08, Brad Paulsen /[EMAIL PROTECTED]/*
wrote:
 
 From: Brad Paulsen [EMAIL PROTECTED]
 Subject: Re: [agi] How do we know we don't know?
 To: agi@v2.listbox.com
 Date: Monday, July 28, 2008, 4:12 PM
 
 Jim Bromer wrote:
  On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
  All,
  What does fomlepung mean?

  If your immediate (mental) response was I don't know. it means you're not
  a slang-slinging Norwegian.  But, how did your brain produce that feeling
  of not knowing?  And, how did it produce that feeling so fast?

  Your brain may have been able to do a massively-parallel search of your
  entire memory and come up empty.  But, if it does this, it's subconscious.
  No one to whom I've presented the above question has reported a conscious
  feeling of searching before having the conscious feeling of not knowing.

  Brad

  My guess is that initial recognition must be based on the surface
  features of an input.  If this is true, then that could suggest that
  our initial recognition reactions are stimulated by distinct
  components (or distinct groupings of components) that are found in the
  surface input data.
  Jim Bromer


 Hmmm.  That particular query may not have been the best example since, to a
 non-Norwegian speaker, the phonological surface feature of that statement alone
 could account for the feeling of not knowing.  In other words, the word
 fomlepung just doesn't sound right.  Good point.  But, that may only explain
 how we know we don't know strange-sounding words.

 Let's try another example:

   Which team won the 1924 World Series?

 Cheers,

 Brad
 
  
 
 
 
 
 
 




  



Re: [agi] How do we know we don't know?

2008-07-30 Thread Brad Paulsen



Richard Loosemore wrote:

Brad Paulsen wrote:



Richard Loosemore wrote:

Brad Paulsen wrote:

All,

Here's a question for you:

What does fomlepung mean?

If your immediate (mental) response was I don't know. it means 
you're not a slang-slinging Norwegian.  But, how did your brain 
produce that feeling of not knowing?  And, how did it produce that 
feeling so fast?


Your brain may have been able to do a massively-parallel search of 
your entire memory and come up empty.  But, if it does this, it's 
subconscious.  No one to whom I've presented the above question has 
reported a conscious feeling of searching before having the 
conscious feeling of not knowing.


It could be that your brain keeps a list of things I don't know.  
I tend to think this is the case, but it doesn't explain why your 
brain can react so quickly with the feeling of not knowing when it 
doesn't know it doesn't know (e.g., the very first time it 
encounters the word fomlepung).


My intuition tells me the feeling of not knowing when presented with 
a completely novel concept or event is a product of the Danger, 
Will Robinson!, reptilian part of our brain.  When we don't know we 
don't know something we react with a feeling of not knowing as a 
survival response.  Then, having survived, we put the thing not 
known at the head of our list of things I don't know.  As long as 
that thing is in this list it explains how we can come to the 
feeling of not knowing it so quickly.


Of course, keeping a large list of things I don't know around is 
probably not a good idea.  I suspect such a list will naturally get 
smaller through atrophy.  You will probably never encounter the 
fomlepung question again, so the fact that you don't know what it 
means will become less and less important and eventually it will 
drop off the end of the list.  And...


Another intuition tells me that the list of things I don't know, 
might generate a certain amount of cognitive dissonance the 
resolution of which can only be accomplished by seeking out new 
information (i.e., learning)?  If so, does this mean that such a 
list in an AGI could be an important element of that AGI's desire 
to learn?  From a functional point of view, this could be something 
as simple as a scheduled background task that checks the things I 
don't know list occasionally and, under the right circumstances, 
pings the AGI with a pang of cognitive dissonance from time to time.


So, what say ye?


Isn't this a bit of a no-brainer?  Why would the human brain need to 
keep lists of things it did not know, when it can simply break the 
word down into components, then have mechanisms that watch for the 
rate at which candidate lexical items become activated  when  
this mechanism notices that the rate of activation is well below the 
usual threshold, it is a fairly simple thing for it to announce that 
the item is not known.


Keeping lists of things not known is wildly, outrageously 
impossible, for any system!  Would we really expect that the word 
ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-

owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere 
as a word that I do not know? :-)


I note that even in the simplest word-recognition neural nets that I 
built and studied in the 1990s, activation of a nonword proceeded in 
a very different way than activation of a word:  it would have been 
easy to build something to trigger a this is a nonword neuron.


Is there some type of AI formalism where nonword recognition would be 
problematic?




Richard Loosemore


Richard,

You seem to have decided my request for comment was about word 
(mis)recognition.  It wasn't.  Unfortunately, I included a misleading 
example in my initial post.  A couple of list members called me on it 
immediately (I'd expect nothing less from this group -- and this was a 
valid criticism duly noted).  So far, three people have pointed out 
that a query containing an un-common (foreign, slang or both) word is 
one way to quickly generate the feeling of not knowing.  But, it is 
just that: only one way.  Not all feelings of not knowing are 
produced by linguistic analysis of surface features.  In fact, I would 
guess that the vast majority of them are not so generated.  Still, 
some are and pointing this out was a valid contribution (perhaps that 
example was fortunately bad).


I don't think my query is a no-brainer to answer (unless you want to 
make it one) and your response, since it contained only another 
flavor of the previous two responses, gives me no reason whatsoever 
to change my opinion.


Please take a look at the revised example in this thread.  I don't 
think it has the same problems (as an example) as did the initial 
example.  In particular, all of the words are common (American 
English) and the syntax is valid.



Re: [agi] How do we know we don't know?

2008-07-30 Thread Richard Loosemore

Brad Paulsen wrote:



Richard Loosemore wrote:

Brad Paulsen wrote:



Richard Loosemore wrote:

Brad Paulsen wrote:

All,

Here's a question for you:

What does fomlepung mean?

If your immediate (mental) response was I don't know. it means 
you're not a slang-slinging Norwegian.  But, how did your brain 
produce that feeling of not knowing?  And, how did it produce 
that feeling so fast?


Your brain may have been able to do a massively-parallel search of 
your entire memory and come up empty.  But, if it does this, it's 
subconscious.  No one to whom I've presented the above question has 
reported a conscious feeling of searching before having the 
conscious feeling of not knowing.


It could be that your brain keeps a list of things I don't know.  
I tend to think this is the case, but it doesn't explain why your 
brain can react so quickly with the feeling of not knowing when it 
doesn't know it doesn't know (e.g., the very first time it 
encounters the word fomlepung).


My intuition tells me the feeling of not knowing when presented 
with a completely novel concept or event is a product of the 
Danger, Will Robinson!, reptilian part of our brain.  When we 
don't know we don't know something we react with a feeling of not 
knowing as a survival response.  Then, having survived, we put the 
thing not known at the head of our list of things I don't know.  
As long as that thing is in this list it explains how we can come 
to the feeling of not knowing it so quickly.


Of course, keeping a large list of things I don't know around is 
probably not a good idea.  I suspect such a list will naturally get 
smaller through atrophy.  You will probably never encounter the 
fomlepung question again, so the fact that you don't know what it 
means will become less and less important and eventually it will 
drop off the end of the list.  And...


Another intuition tells me that the list of things I don't know, 
might generate a certain amount of cognitive dissonance the 
resolution of which can only be accomplished by seeking out new 
information (i.e., learning)?  If so, does this mean that such a 
list in an AGI could be an important element of that AGI's desire 
to learn?  From a functional point of view, this could be something 
as simple as a scheduled background task that checks the things I 
don't know list occasionally and, under the right circumstances, 
pings the AGI with a pang of cognitive dissonance from time to time.


So, what say ye?


Isn't this a bit of a no-brainer?  Why would the human brain need to 
keep lists of things it did not know, when it can simply break the 
word down into components, then have mechanisms that watch for the 
rate at which candidate lexical items become activated  when  
this mechanism notices that the rate of activation is well below the 
usual threshold, it is a fairly simple thing for it to announce that 
the item is not known.


Keeping lists of things not known is wildly, outrageously 
impossible, for any system!  Would we really expect that the word 
ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-

owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere 
as a word that I do not know? :-)


I note that even in the simplest word-recognition neural nets that I 
built and studied in the 1990s, activation of a nonword proceeded in 
a very different way than activation of a word:  it would have been 
easy to build something to trigger a this is a nonword neuron.


Is there some type of AI formalism where nonword recognition would 
be problematic?




Richard Loosemore


Richard,

You seem to have decided my request for comment was about word 
(mis)recognition.  It wasn't.  Unfortunately, I included a misleading 
example in my initial post.  A couple of list members called me on it 
immediately (I'd expect nothing less from this group -- and this was 
a valid criticism duly noted).  So far, three people have pointed out 
that a query containing an un-common (foreign, slang or both) word is 
one way to quickly generate the feeling of not knowing.  But, it is 
just that: only one way.  Not all feelings of not knowing are 
produced by linguistic analysis of surface features.  In fact, I 
would guess that the vast majority of them are not so generated.  
Still, some are and pointing this out was a valid contribution 
(perhaps that example was fortunately bad).


I don't think my query is a no-brainer to answer (unless you want to 
make it one) and your response, since it contained only another 
flavor of the previous two responses, gives me no reason whatsoever 
to change my opinion.


Please take a look at the revised example in this thread.  I don't 
think it has the same problems (as an example) as did the initial 
example.  In particular, all of the words are common (American 
English) and 

Re: [agi] How do we know we don't know?

2008-07-30 Thread Brad Paulsen

Richard,

Someone who can throw comments like "Isn't this a bit of a no-brainer?" and 
"Keeping lists of 'things not known' is wildly, outrageously impossible, for any 
system!" at people should expect a little bit of annoyance in return.  If you 
can't take it, don't dish it out.


Your responses to my initial post so far have been devoid of any real 
substantive evidence or argument for the opinions you have expressed therein. 
Your initial reply correctly identified an additional mechanism that two other 
list members had previously reported (that surface features could raise the 
feeling of not knowing without triggering an exhaustive memory search).  As I 
pointed out in my response to them, this observation was a good catch but did 
not, in any way, show my ideas to be no-brainers or wildly, outrageously 
impossible.  In that reply, I posted a new example query that contained only 
common American English words and was syntactically valid.


If you want to present an evidence-based or well-reasoned argument why you 
believe my ideas are meritless, then let's have it.  Pejorative adjectives, ad 
hominem attacks and baseless opinions don't impress me much.


As to your cheerleader, she's just made my kill-list.  The only thing worse than 
someone who slings unsupported opinions around like they're facts, is someone 
who slings someone else's unsupported opinions around like they're facts.


Who is Mark Waser?

Cheers,

Brad

Richard Loosemore wrote:

Brad Paulsen wrote:

Valentina,

Well, the LOL is on you.

Richard failed to add anything new to the two previous responses that 
each posited linguistic surface feature analysis as being responsible 
for generating the feeling of not knowing with that *particular* (and, 
admittedly poorly-chosen) example query.  This mechanism will, 
however, apply to only a very tiny number of cases.


In response to those first two replies (not including Richard's), I 
apologized for the sloppy example and offered a new one.  Please read 
the entire thread and the new example.  I think you'll find Richard's 
and your explanation will fail to address how the new example might 
generate the feeling of not knowing.


Brad,

Isn't this response, as well as the previous response directed at me, 
just a little more annoyed-sounding than it needs to be?


Both Valentina and I (and now Mark Waser also) have simply focused on 
the fact that it is relatively trivial to build mechanisms that monitor 
the rate at which the system is progressing in its attempt to do a 
recognition operation, and then call it as a not known if the progress 
rate is below a certain threshold.


In particular, you did suggest the idea of a system keeping lists of 
things it did not know, and surely it is not inappropriate to give a 
good-naturedly humorous response to that one?


So far, I don't see any of us making a substantial misunderstanding of 
your question, nor anyone being deliberately rude to you.




Richard Loosemore











Valentina Poletti wrote:

lol.. well said richard.
the stimulus simply invokes no significant response and thus our brain 
concludes that we 'don't know'. that's why it takes no effort to 
realize it. agi algorithms should be built in a similar way, rather 
than searching.



Isn't this a bit of a no-brainer?  Why would the human brain need to
keep lists of things it did not know, when it can simply break the
word down into components, then have mechanisms that watch for the
rate at which candidate lexical items become activated  when
 this mechanism notices that the rate of activation is well below
the usual threshold, it is a fairly simple thing for it to announce
that the item is not known.

Keeping lists of things not known is wildly, outrageously
impossible, for any system!  Would we really expect that the word

ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-

owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-

hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-

dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere
as a word that I do not know? :-)

I note that even in the simplest word-recognition neural nets that I
built and studied in the 1990s, activation of a nonword proceeded in
a very different way than activation of a word:  it would have been
easy to build something to trigger a this is a nonword neuron.

Is there some type of AI formalism where nonword recognition would
be problematic?



Richard Loosemore









Re: [agi] How do we know we don't know?

2008-07-30 Thread Mark Waser
People can discriminate real words from nonwords even when the latter are 
orthographically and phonologically word-like, presumably because words 
activate specific lexical and/or semantic information.

http://cat.inist.fr/?aModele=afficheNcpsidt=14733408

Categories like noun and verb represent the basic units of grammar in 
all human languages, and the retrieval of categorical information associated 
with words is an essential step in the production of grammatical speech. 
Studies of brain-damaged patients suggest that knowledge of nouns and verbs 
can be spared or impaired selectively; however, the neuroanatomical 
correlates of this dissociation are not well understood. We used 
event-related functional MRI to identify cortical regions that were active 
when English-speaking subjects produced nouns or verbs in the context of 
short phrases

http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1360518

Neuroimaging and lesion studies suggest that processing of word classes, 
such as verbs and nouns, is associated with distinct neural mechanisms. Such 
studies also suggest that subcategories within these broad word class 
categories are differentially processed in the brain. Within the class of 
verbs, argument structure provides one linguistic dimension that 
distinguishes among verb exemplars, with some requiring more complex 
argument structure entries than others. This study examined the neural 
instantiation of verbs by argument structure complexity: one-, two-, and 
three-argument verbs.

http://portal.acm.org/citation.cfm?id=1321140.1321142coll=dl=

The neural basis for verb comprehension has proven elusive, in part because 
of the limited range of verb categories that have been assessed. In the 
present study, 16 healthy young adults were probed for the meaning 
associated with verbs of MOTION and verbs of COGNITION. We observed distinct 
patterns of activation for each verb subcategory: MOTION verbs are 
associated with recruitment of left ventral temporal-occipital cortex, 
bilateral prefrontal cortex and caudate, whereas COGNITION verbs are 
associated with left posterolateral temporal activation. These findings are 
consistent with the claim that the neural representations of verb 
subcategories are distinct

http://cat.inist.fr/?aModele=afficheNcpsidt=13451551

Neural processing of nouns and verbs: the role of inflectional morphology

http://csl.psychol.cam.ac.uk/publications/04_Tyler_Neuropsychologia.pdf


Others:

http://cercor.oxfordjournals.org/cgi/content/abstract/12/9/900
http://www3.interscience.wiley.com/journal/99520773/abstract?CRETRY=1SRETRY=0
http://www.jneurosci.org/cgi/content/abstract/22/7/2936


- Original Message - 
From: Jim Bromer [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, July 30, 2008 12:15 PM
Subject: Re: [agi] How do we know we don't know?



On Wed, Jul 30, 2008 at 9:50 AM, Mark Waser [EMAIL PROTECTED] wrote:
Wow!  The civility level on this list is really bottoming out . . . . 
along

with any sort of scientific grounding.



Experimental (imaging) evidence shows that known words will strongly
activate some set of neurons when heard.  Unknown words with recognizable
parts/features will also activate some other set of neurons when heard,
possibly allowing the individual to puzzle out the meaning even if the 
word

has never been heard before.  Totally unknown words will not strongly
activate any neurons -- except subsequently (i.e. on a delay) some set of
HUH? neurons.


Well, your imaging evidence is part imaging and part imagining since
no one knows what the imaging is actually showing.  I think it is
commonly believed that the imaging techniques show blood flow into
areas of the brain, and this is (reasonably in my view) taken as
evidence of neural activity.  OK, but what kind of thinking is actually
going on, and how extensive are the links that don't have enough wow
factor for researchers to turn into repeatable experiments and issue as
a press release?  So if you are going to claim that your speculations are
superiorly grounded, I would like to see some research that shows
that unknown words will not strongly activate any neurons.  Take your
time, I am only asking a question, not challenging you to fantasy
combat.

Jim Bromer










Re: [agi] How do we know we don't know?

2008-07-30 Thread Terren Suydam

Brad,

--- On Wed, 7/30/08, Brad Paulsen [EMAIL PROTECTED] wrote:
 As to your cheerleader, she's just made my kill-list. 
 The only thing worse than 
 someone who slings unsupported opinions around like
 they're facts, is someone 
 who slings someone else's unsupported opinions around
 like they're facts.

I have to say I think you're over-reacting here a bit. Obviously you're free to 
do whatever, but to place someone on your kill list for supporting someone who 
disagrees with you seems awfully thin-skinned to me. I thought her post was a 
valid contribution, and not merely cheerleading, because I didn't really 
understand Richard's point about neural nets until I read her post.

Terren


  




Re: [agi] How do we know we don't know?

2008-07-30 Thread Mark Waser

Brad,

   Go back and look at Richard's e-mail again.  His statement that Keeping 
lists of 'things not known' is wildly, outrageously  impossible, for any 
system *WAS* supported by a brief but very clear evidence-based *and* 
well-reasoned argument that should have made its truth *very* obvious to 
someone with sufficient background.


   Just because you don't understand why something is true doesn't change 
it from a fact to an opinion.  Richard is generally very good at clearly and 
accurately distinguishing between what is a generally-accepted fact and what 
is his guesstimate or opinion in his e-mails.



- Original Message - 
From: Brad Paulsen [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, July 30, 2008 4:14 PM
Subject: Re: [agi] How do we know we don't know?



Richard,

Someone who can throw comments like Isn't this a bit of a no-brainer? 
and Keeping lists of 'things not known' is wildly, outrageously 
impossible, for any system! at people should expect a little bit of 
annoyance in return.  If you can't take it, don't dish it out.


Your responses to my initial post so far have been devoid of any real 
substantive evidence or argument for the opinions you have expressed 
therein. Your initial reply correctly identified an additional mechanism 
that two other list members had previously reported (that surface features 
could raise the feeling of not knowing without triggering an exhaustive 
memory search).  As I pointed out in my response to them, this observation 
was a good catch but did not, in any way, show my ideas to be 
no-brainers or wildly, outrageously impossible.  In that reply, I 
posted a new example query that contained only common American English 
words and was syntactically valid.


If you want to present an evidence-based or well-reasoned argument why you 
believe my ideas are meritless, then let's have it.  Pejorative 
adjectives, ad hominem attacks and baseless opinions don't impress me 
much.


As to your cheerleader, she's just made my kill-list.  The only thing 
worse than someone who slings unsupported opinions around like they're 
facts, is someone who slings someone else's unsupported opinions around 
like they're facts.


Who is Mark Waser?

Cheers,

Brad

Richard Loosemore wrote:

Brad Paulsen wrote:

Valentina,

Well, the LOL is on you.

Richard failed to add anything new to the two previous responses that 
each posited linguistic surface feature analysis as being responsible 
for generating the feeling of not knowing with that *particular* (and, 
admittedly poorly-chosen) example query.  This mechanism will, however, 
apply to only a very tiny number of cases.


In response to those first two replies (not including Richard's), I 
apologized for the sloppy example and offered a new one.  Please read 
the entire thread and the new example.  I think you'll find Richard's 
and your explanation will fail to address how the new example might 
generate the feeling of not knowing.


Brad,

Isn't this response, as well as the previous response directed at me, 
just a little more annoyed-sounding than it needs to be?


Both Valentina and I (and now Mark Waser also) have simply focused on the 
fact that it is relatively trivial to build mechanisms that monitor the 
rate at which the system is progressing in its attempt to do a 
recognition operation, and then call it as a not known if the progress 
rate is below a certain threshold.


In particular, you did suggest the idea of a system keeping lists of 
things it did not know, and surely it is not inappropriate to give a 
good-naturedly humorous response to that one?


So far, I don't see any of us making a substantial misunderstanding of 
your question, nor anyone being deliberately rude to you.




Richard Loosemore











Valentina Poletti wrote:

lol.. well said richard.
the stimuli simply invoke no significant response and thus our brain 
concludes that we 'don't know'. that's why it takes no effort to 
realize it. agi algorithms should be built in a similar way, rather 
than searching.



Isn't this a bit of a no-brainer?  Why would the human brain need 
to

keep lists of things it did not know, when it can simply break the
word down into components, then have mechanisms that watch for the
rate at which candidate lexical items become activated  when
 this mechanism notices that the rate of activation is well below
the usual threshold, it is a fairly simple thing for it to announce
that the item is not known.

Keeping lists of things not known is wildly, outrageously
impossible, for any system!  Would we really expect that the word

ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-

owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-

hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented 
somewhere

as a word that I do not know? :-)

I

Re: [agi] How do we know we don't know?

2008-07-30 Thread Richard Loosemore


Brad,

I just wrote a long, point-by-point response to this, but on reflection 
I am not going to send it.


Instead, I would like to echo Terren Suydam's comment and say that I 
think that you have overreacted here, because in my original reply to 
you I had not the slightest intention of insulting you or your ideas. 
The opening remark, for example, was meant to suggest that the QUESTION 
you posed was a no-brainer (as in, easily answerable), not that your 
ideas were brainless.  You will note that there was a smiley in the 
post, and it started with a question, not a declaration (Isn't this a 
bit of a no-brainer?...).


Throughout, I have simply been trying to explain that there is a general 
strategy for solving your initial question - a strategy quite well known 
to many people - which applies to all versions of the question, whether 
they be at the lexical level or the semantic level.


Valentina, it seems to me, was reacting to the humorous example I gave, 
not mocking you personally.


Certainly, if you feel that I insulted you I am quite willing to 
apologize for what (from my point of view) was an accident of prose style.




Richard Loosemore






Brad Paulsen wrote:

Richard,

Someone who can throw comments like Isn't this a bit of a no-brainer? 
and Keeping lists of 'things not known' is wildly, outrageously 
impossible, for any system! at people should expect a little bit of 
annoyance in return.  If you can't take it, don't dish it out.


Your responses to my initial post so far have been devoid of any real 
substantive evidence or argument for the opinions you have expressed 
therein. Your initial reply correctly identified an additional mechanism 
that two other list members had previously reported (that surface 
features could raise the feeling of not knowing without triggering an 
exhaustive memory search).  As I pointed out in my response to them, 
this observation was a good catch but did not, in any way, show my 
ideas to be no-brainers or wildly, outrageously impossible.  In that 
reply, I posted a new example query that contained only common American 
English words and was syntactically valid.


If you want to present an evidence-based or well-reasoned argument why 
you believe my ideas are meritless, then let's have it.  Pejorative 
adjectives, ad hominem attacks and baseless opinions don't impress me much.


As to your cheerleader, she's just made my kill-list.  The only thing 
worse than someone who slings unsupported opinions around like they're 
facts, is someone who slings someone else's unsupported opinions around 
like they're facts.


Who is Mark Waser?

Cheers,

Brad

Richard Loosemore wrote:

Brad Paulsen wrote:

Valentina,

Well, the LOL is on you.

Richard failed to add anything new to the two previous responses that 
each posited linguistic surface feature analysis as being responsible 
for generating the feeling of not knowing with that *particular* 
(and, admittedly poorly-chosen) example query.  This mechanism will, 
however, apply to only a very tiny number of cases.


In response to those first two replies (not including Richard's), I 
apologized for the sloppy example and offered a new one.  Please read 
the entire thread and the new example.  I think you'll find Richard's 
and your explanation will fail to address how the new example might 
generate the feeling of not knowing.


Brad,

Isn't this response, as well as the previous response directed at me, 
just a little more annoyed-sounding than it needs to be?


Both Valentina and I (and now Mark Waser also) have simply focused on 
the fact that it is relatively trivial to build mechanisms that 
monitor the rate at which the system is progressing in its attempt to 
do a recognition operation, and then call it as a not known if the 
progress rate is below a certain threshold.


In particular, you did suggest the idea of a system keeping lists of 
things it did not know, and surely it is not inappropriate to give a 
good-naturedly humorous response to that one?


So far, I don't see any of us making a substantial misunderstanding of 
your question, nor anyone being deliberately rude to you.




Richard Loosemore











Valentina Poletti wrote:

lol.. well said richard.
the stimuli simply invoke no significant response and thus our 
brain concludes that we 'don't know'. that's why it takes no effort 
to realize it. agi algorithms should be built in a similar way, 
rather than searching.



Isn't this a bit of a no-brainer?  Why would the human brain 
need to

keep lists of things it did not know, when it can simply break the
word down into components, then have mechanisms that watch for the
rate at which candidate lexical items become activated  when
 this mechanism notices that the rate of activation is well below
the usual threshold, it is a fairly simple thing for it to announce
that the item is not known.

Keeping lists of things not known is wildly, outrageously

Re: [agi] How do we know we don't know?

2008-07-30 Thread Mike Tintner

Brad,

Ah, perhaps there has been a failure of communication - it sounded 
(rightly or wrongly) from your original post like your things I don't 
know list was being used DURING the process of perception/categorization, 
and so was key to producing the I don't know this feeling.  That was hard 
to understand and accept.  If you're just saying that AFTER our brain has 
failed to recognize something, it effectively (as you discuss) stores 
those failures on an I don't know list, that is unobjectionable.



James,

Someone ventured the *opinion* that keeping such a list of things I don't 
know was nonsensical, but I have yet to see any evidence or 
well-reasoned argument backing that opinion.  So, it's just an opinion. 
One with which I, obviously, do not agree.


There are two stages of not knowing.  The first is when the agent 
doesn't know it doesn't know something.  It's clueless.  This can be such 
a dangerous stage to be in that one can imagine the agent might be 
equipped with a knee-jerk type reaction, which evinces itself in a 
variety of ways.  One of those ways could be to promote this thing it 
didn't know it didn't know to the next stage of not knowing by storing 
it (subconsciously, most likely) in a list of things I know I don't 
know.  (I use the term list generically.  I don't argue that the human 
brain maintains knowledge in list structures or that this would 
necessarily be the way this information is stored in an AGI agent.)


I fail to see how saving this type of information in memory is any 
different from saving any other type of information.  It's a positive fact 
about the world as that world relates to the individual human (or AGI 
agent).  The first way having such a list might help is in optimizing 
memory search.  The next time the agent encounters a thing not known on 
this list, it won't have to perform an exhaustive search of things it 
knows to come to the feeling of not knowing. It's right there on the 
(comparatively short) list of things it doesn't know (which would be 
searched first, of course).  In addition, if the agent's experience in the 
world results in repeated hits on a particular item in this list, this 
could be a factor in producing the desire to learn that is such a 
characteristic behavior of our species.  Once the thing is known, it is, 
of course, removed from the not known list.  If a thing on the list is 
not encountered again for a long period of time, it might just fall off 
the list. Both of these characteristics of such a list would work, 
subconsciously, to keep the list both small and relevant.
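For concreteness, a small Python sketch of such a things (I know) I don't know 
list follows.  The class name, the hit counter, and the time-based expiry are 
illustrative assumptions rather than a claim about how a brain or any particular 
AGI would store this; it only shows the behavior described above: check the list 
before an exhaustive search, add an entry only after a search has come up empty, 
drop an entry once the thing is learned, and let stale entries fall off.

# Illustrative sketch of a "things (I know) I don't know" list: consulted
# before an exhaustive memory search, populated only after such a search has
# come up empty, emptied by learning, and trimmed by a simple expiry policy.
# None of the structure below is claimed to be how a brain actually does it.

import time

class KnownUnknowns:
    def __init__(self, max_age_seconds=3600):
        self.entries = {}             # item -> (hit count, last encounter time)
        self.max_age = max_age_seconds

    def check(self, item):
        """True means a fast 'feeling of not knowing' without a full search."""
        self._expire()
        if item in self.entries:
            hits, _ = self.entries[item]
            self.entries[item] = (hits + 1, time.time())
            return True
        return False

    def record_failure(self, item):
        """Called after an exhaustive search of known things comes up empty."""
        self.entries[item] = (1, time.time())

    def learned(self, item):
        """Once the thing is known, it drops off the list."""
        self.entries.pop(item, None)

    def curiosity_candidates(self, min_hits=3):
        """Repeated hits could feed a 'desire to learn' process."""
        return [i for i, (hits, _) in self.entries.items() if hits >= min_hits]

    def _expire(self):
        now = time.time()
        self.entries = {i: v for i, v in self.entries.items()
                        if now - v[1] <= self.max_age}

if __name__ == "__main__":
    ku = KnownUnknowns()
    print(ku.check("fomlepung"))      # False: first encounter, no shortcut yet
    ku.record_failure("fomlepung")    # the exhaustive search failed, so list it
    print(ku.check("fomlepung"))      # True: immediate feeling of not knowing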


Cheers,

Brad


James Ratcliff wrote:

Sure,
search is at the root of all processing, be it human or AI.

 How we each go about the search and how efficient we are at the task are 
 different, as are what exactly we are searching for and how we handle the 
 exponential explosion.


But some type of search is done, whether we are consciously aware of our 
brains doing the search or not.


Given a bit of context information about the question should allow us to 
use some heuristics to look at a smaller area of knowledge bases in our 
brains, or in a computer's memory.


 Having a list of things we don't know is nonsensical as has been pointed 
 out, when it comes to individual terms, but something like an aggregate 
 estimate of knowledge known could be computed.


 I myself know a little about baseball - say 10% - but baseball history 
 and world series statistics would be more like 0.1%.
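A rough sketch of that aggregate estimate idea, with made-up coverage numbers 
that just echo the baseball example; the topic paths and the fallback rule are 
assumptions for illustration only.

# Rough sketch: keep per-domain coverage estimates instead of listing unknown
# items, and consult the most specific estimate available.  The topic paths
# and numbers below are assumptions that just echo the example.

COVERAGE = {
    "baseball": 0.10,
    "baseball/history": 0.001,
    "baseball/world series statistics": 0.001,
}

def coverage(topic, default=0.0):
    # Walk from the most specific topic path up through its parents.
    while topic:
        if topic in COVERAGE:
            return COVERAGE[topic]
        topic = topic.rpartition("/")[0]
    return default

if __name__ == "__main__":
    print(coverage("baseball/world series statistics"))   # 0.001
    print(coverage("baseball/rules"))                      # falls back to 0.10
    print(coverage("norwegian slang"))                     # 0.0: clueless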


James Ratcliff

--- On *Tue, 7/29/08, Brad Paulsen /[EMAIL PROTECTED]/* wrote:

James,

So, you agree that some sort of search must take place before the 
feeling

of not knowing presents itself?  Of course, realizing we don't
have a lot of information results from some type of a search and not 
a separate process

(at least you didn't posit any).

Thanks for your comments!
Cheers
Brad

James Ratcliff wrote:
 It is fairly simple at that point, we have enough context to have a very 
 limited domain
 
 world series - baseball
 1924
 answer is a team,
 so we can do a lookup in our database easily enough, or realize that we 
 really don't have a lot of information about baseball in our mindset.
 
 And for the other one, it would just be a straight term match.
  James Ratcliff
  ___
 James Ratcliff - http://falazar.com
 Looking for something...
  --- On *Mon, 7/28/08, Brad Paulsen /[EMAIL PROTECTED]/*
wrote:
  From: Brad Paulsen [EMAIL PROTECTED]
 Subject: Re: [agi] How do we know we don't know?
 To: agi@v2.listbox.com
 Date: Monday, July 28, 2008, 4:12 PM
  Jim Bromer wrote:
  On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen
 [EMAIL PROTECTED] wrote:
  All,
 What does fomlepung mean?
 
  If your immediate (mental) response was I don't
know.
 it means you're

Re: [agi] How do we know we don't know?

2008-07-30 Thread James Ratcliff
Yes ok, this is needed, but was a bit different than what was being discussed 
earlier, thank you for the clarification.

___

James Ratcliff - http://falazar.com

Looking for something...

--- On Wed, 7/30/08, Brad Paulsen [EMAIL PROTECTED] wrote:
From: Brad Paulsen [EMAIL PROTECTED]
Subject: Re: [agi] How do we know we don't know?
To: agi@v2.listbox.com
Date: Wednesday, July 30, 2008, 3:47 PM

James,

Someone ventured the *opinion* that keeping such a list of things I don't know 
was nonsensical, but I have yet to see any evidence or well-reasoned argument 
backing that opinion.  So, it's just an opinion.  One with which I, obviously, 
do not agree.

There are two stages of not knowing.  The first is when the agent doesn't 
know it doesn't know something.  It's clueless.  This can be such a dangerous 
stage to be in that one can imagine the agent might be equipped with a 
knee-jerk type reaction, which evinces itself in a variety of ways.  One of 
those ways could be to promote this thing it didn't know it didn't know to the 
next stage of not knowing by storing it (subconsciously, most likely) in a 
list of things I know I don't know.  (I use the term list generically.  I 
don't argue that the human brain maintains knowledge in list structures or that 
this would necessarily be the way this information is stored in an AGI agent.)

I fail to see how saving this type of information in memory is any different 
from saving any other type of information.  It's a positive fact about the world 
as that world relates to the individual human (or AGI agent).  The first way 
having such a list might help is in optimizing memory search.  The next time the 
agent encounters a thing not known on this list, it won't have to perform an 
exhaustive search of things it knows to come to the feeling of not knowing. 
It's right there on the (comparatively short) list of things it doesn't know 
(which would be searched first, of course).  In addition, if the agent's 
experience in the world results in repeated hits on a particular item in this 
list, this could be a factor in producing the desire to learn that is such a 
characteristic behavior of our species.  Once the thing is known, it is, of 
course, removed from the not known list.  If a thing on the list is not 
encountered again for a long period of time, it might just fall off the list. 
Both of these characteristics of such a list would work, subconsciously, to keep 
the list both small and relevant.

Cheers,

Brad


James Ratcliff wrote:
 Sure,
 search is at the root of all processing, be it human or AI.
 
 How we each go about the search and how efficient we are at the task are 
 different, as are what exactly we are searching for and how we handle the 
 exponential explosion.
 
 But some type of search is done, whether we are consciously aware of our 
 brains doing the search or not.
 
 Given a bit of context information about the question should allow us to 
 use some heuristics to look at a smaller area of knowledge bases in our 
 brains, or in a computer's memory.
 
 Having a list of things we don't know is nonsensical as has been 
 pointed out, when it comes to individual terms, but something like an 
 aggregate estimate of knowledge known could be computed.
 
 I myself know a little about baseball - say 10% - but baseball history 
 and world series statistics would be more like 0.1%
 
 James Ratcliff
 
 --- On *Tue, 7/29/08, Brad Paulsen /[EMAIL PROTECTED]/* wrote:
 
 James,
 
 So, you agree that some sort of search must take place before the feeling 
 of not knowing presents itself?  Of course, realizing we don't have a lot 
 of information results from some type of a search and not a separate 
 process (at least you didn't posit any).
 
 Thanks for your comments!
 Cheers
 Brad
 
 James Ratcliff wrote:
  It is fairly simple at that point, we have enough context to have a very 
  limited domain
  world series - baseball
  1924
  answer is a team,
  so we can do a lookup in our database easily enough, or realize that we 
  really don't have a lot of information about baseball in our mindset.
  
  And for the other one, it would just be a straight term match.
  
  James Ratcliff
  
  ___
  James Ratcliff - http://falazar.com
  Looking for something...
  
  --- On *Mon, 7/28/08, Brad Paulsen /[EMAIL PROTECTED]/* wrote:
  
  From: Brad Paulsen [EMAIL PROTECTED]
  Subject: Re: [agi] How do we know we don't know?
  To: agi@v2.listbox.com
  Date: Monday, July 28, 2008, 4:12 PM
  
  Jim Bromer wrote:
   On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
   All,
   What does fomlepung mean?
  
   If your immediate (mental) response was I don't know. it means you're

Re: [agi] How do we know we don't know?

2008-07-30 Thread Brad Paulsen

Richard,

I just finished reading and replying to your post preceding this one (I guess). 
 Your tone and approach in that post were more like what I expected from you. 
I'm not going to get into a pissing match about what I should or should not take 
personally.  That will generate only heat, not light.


Peace, OK?

Cheers,

Brad

P.S. I will review Valentina's post to see if I misunderstood it.  When I 
originally read it, it sure looked like piling on to me.


P.S. Terren:  I reserve the right to put anyone on my personal kill-list.  I 
don't have to justify my reasons.  If I choose to not read the posts of a 
particular list member, and that person turns up on Time Magazine's cover ten 
years from now, well... my loss.  Right?



Richard Loosemore wrote:


Brad,

I just wrote a long, point-by-point response to this, but on reflection 
I am not going to send it.


Instead, I would like to echo Terren Suydam's comment and say that I 
think that you have overreacted here, because in my original reply to 
you I had not the slightest intention of insulting you or your ideas. 
The opening remark, for example, was meant to suggest that the QUESTION 
you posed was a no-brainer (as in, easily answerable), not that your 
ideas were brainless.  You will note that there was a smiley in the 
post, and it started with a question, not a declaration (Isn't this a 
bit of a no-brainer?...).


Throughout, I have simply been trying to explain that there is a general 
strategy for solving your initial question - a strategy quite well known 
to many people - which applies to all versions of the question, whether 
they be at the lexical level or the semantic level.


Valentina, it seems to me, was reacting to the humorous example I gave, 
not mocking you personally.


Certainly, if you feel that I insulted you I am quite willing to 
apologize for what (from my point of view) was an accident of prose style.




Richard Loosemore






Brad Paulsen wrote:

Richard,

Someone who can throw comments like Isn't this a bit of a 
no-brainer? and Keeping lists of 'things not known' is wildly, 
outrageously impossible, for any system! at people should expect a 
little bit of annoyance in return.  If you can't take it, don't dish 
it out.


Your responses to my initial post so far have been devoid of any real 
substantive evidence or argument for the opinions you have expressed 
therein. Your initial reply correctly identified an additional 
mechanism that two other list members had previously reported (that 
surface features could raise the feeling of not knowing without 
triggering an exhaustive memory search).  As I pointed out in my 
response to them, this observation was a good catch but did not, in 
any way, show my ideas to be no-brainers or wildly, outrageously 
impossible.  In that reply, I posted a new example query that 
contained only common American English words and was syntactically valid.


If you want to present an evidence-based or well-reasoned argument why 
you believe my ideas are meritless, then let's have it.  Pejorative 
adjectives, ad hominem attacks and baseless opinions don't impress me 
much.


As to your cheerleader, she's just made my kill-list.  The only thing 
worse than someone who slings unsupported opinions around like they're 
facts, is someone who slings someone else's unsupported opinions 
around like they're facts.


Who is Mark Waser?

Cheers,

Brad

Richard Loosemore wrote:

Brad Paulsen wrote:

Valentina,

Well, the LOL is on you.

Richard failed to add anything new to the two previous responses 
that each posited linguistic surface feature analysis as being 
responsible for generating the feeling of not knowing with that 
*particular* (and, admittedly poorly-chosen) example query.  This 
mechanism will, however, apply to only a very tiny number of cases.


In response to those first two replies (not including Richard's), I 
apologized for the sloppy example and offered a new one.  Please 
read the entire thread and the new example.  I think you'll find 
Richard's and your explanation will fail to address how the new 
example might generate the feeling of not knowing.


Brad,

Isn't this response, as well as the previous response directed at me, 
just a little more annoyed-sounding than it needs to be?


Both Valentina and I (and now Mark Waser also) have simply focused on 
the fact that it is relatively trivial to build mechanisms that 
monitor the rate at which the system is progressing in its attempt to 
do a recognition operation, and then call it as a not known if the 
progress rate is below a certain threshold.


In particular, you did suggest the idea of a system keeping lists of 
things it did not know, and surely it is not inappropriate to give a 
good-naturedly humorous response to that one?


So far, I don't see any of us making a substantial misunderstanding 
of your question, nor anyone being deliberately rude to you.




Richard Loosemore











Valentina Poletti wrote:


Re: [agi] How do we know we don't know?

2008-07-30 Thread Brad Paulsen



Richard Loosemore wrote:

Brad Paulsen wrote:

James,

Someone ventured the *opinion* that keeping such a list of things I 
don't know was nonsensical, but I have yet to see any evidence or 
well-reasoned argument backing that opinion.  So, it's just an 
opinion.  One with which I, obviously, do not agree.


Please be clear about what was intended by my remarks.

I *now* have an explicit, episodic memory of confronting the question 
Who won the world series in 1954, and as a result of that episode that 
occured today, I have the explicit knowledge that I do not know the 
answer.  Having that kind of explicit knowledge of lack-of-knowledge is 
not problematic at all.


The only thing that seems implausible is that IN GENERAL we try to 
answer questions by first looking up explicit elements that encode the 
fact that we do not know the answer.  As a general strategy this must, 
surely, be deeply implausible, for the reasons that I originally gave, 
which centered on the fact that the sheer quantity of unknowns would be 
overwhelming for any system.  For almost every one of the potentially 
askable questions that would elicit, in me, a response of I do not 
know, there would not be any such episode.  Similarly, it would be 
clearly implausible for the cognitive system to spend its time making 
lists of things that it did not know.  If that is not an example of an 
obviously implausible mechanism, then I do not know what would be.


Ah.  Now we're getting somewhere!  I do *not* (and did not) propose that we keep 
a list of all the things unknown in memory.  Nor did I propose some 
background task that would maintain or add to such a list.  That would be 
...wildly, outrageously impossible, for any system!  Maybe, instead of 
assuming the worst (that I could be so ignorant as to propose such a list), you 
might have asked for some clarification?


The list of things I don't know is, by definition, a list of things I know I 
don't know.  How could I *possibly* know about things I don't know I don't 
know?  The list I propose contains ONLY those things we know we don't know. 
Such a list is, in my opinion, completely manageable and, indeed, helpful 
information to have around.  When we first encounter a completely novel object 
or event we will have to search (percolate, whatever) for it in memory and come 
up empty (however you want to define that).  It is then, and *only* then, that 
we put this knowledge (or meta-knowledge) on the things (I know) I don't know 
list.


This list can be consulted before performing a search of all memory to determine 
if there's a need to do such an exhaustive search.  If the thing we're trying to 
remember is on the things (I know) I don't know list, we can very quickly 
report the feeling of not knowing.  Otherwise, we have to do the exhaustive 
(however you define that) search of things we do know and come up empty.  Such a 
list can also be used by subconscious processes to power our desire to learn. 
Presumably, we experience cognitive dissonance when we feel there's something we 
know nothing about and want to resolve that feeling.  How?  By learning.  Once 
learned, the thing falls off the things (I know) I don't know list. 
Similarly, if an item is on the list for a long time, it will naturally fall 
off the list (the use it or lose it principle).  Both of these natural 
actions will work, I believe, to keep this list quite small.


Sometimes (well, don't ask my ex) I can be a bit thick.  I know you're all 
surprised to hear that, but...


It just dawned on me that much of the uproar here may have been caused by a 
miscommunication (gee, where have we heard of that happening before?).  I may 
have used the term things we don't know to denote the things we know we don't 
know list.  If so, please accept my apologies.  Having played with these 
questions for a long time, this *important* distinction apparently became lost 
to me and I began to assume it self-evident that a things we don't know list 
would have had to come into being as the result of our encounters with those 
things when they were things we didn't know we didn't know (and, therefore, 
could not be in any list of knowledge we had -- we are clueless about these 
things until we encounter them).


If that's the case, let me (finally) be clear: the list I am talking about in 
the human or AGI agent's memory is a list of THINGS I KNOW I DON'T KNOW.  In the 
first (misleading) example I gave, the word fomlepung would be on that list 
after the query containing it had resulted in the I don't know answer (how 
that determination is made is really a minor point for this discussion).  In the 
second example I gave, the query Which team won the 1924 World Series? would 
also, after eliciting the I don't know response, find its way onto this list.



This was not merely an opinion, it was a reasoned argument, 
illustrated by an example of a nonword that clearly belonged to a vast 
class of nonwords.


Well, be 

Re: [agi] How do we know we don't know?

2008-07-30 Thread Richard Loosemore

Brad Paulsen wrote:



Richard Loosemore wrote:

Brad Paulsen wrote:



Richard Loosemore wrote:

Brad Paulsen wrote:



Richard Loosemore wrote:

Brad Paulsen wrote:

All,

Here's a question for you:

What does fomlepung mean?

If your immediate (mental) response was I don't know. it means 
you're not a slang-slinging Norwegian.  But, how did your brain 
produce that feeling of not knowing?  And, how did it produce 
that feeling so fast?


Your brain may have been able to do a massively-parallel search 
of your entire memory and come up empty.  But, if it does this, 
it's subconscious.  No one to whom I've presented the above 
question has reported a conscious feeling of searching before 
having the conscious feeling of not knowing.


It could be that your brain keeps a list of things I don't 
know.  I tend to think this is the case, but it doesn't explain 
why your brain can react so quickly with the feeling of not 
knowing when it doesn't know it doesn't know (e.g., the very 
first time it encounters the word fomlepung).


My intuition tells me the feeling of not knowing when presented 
with a completely novel concept or event is a product of the 
Danger, Will Robinson!, reptilian part of our brain.  When we 
don't know we don't know something we react with a feeling of not 
knowing as a survival response.  Then, having survived, we put 
the thing not known at the head of our list of things I don't 
know.  As long as that thing is in this list it explains how we 
can come to the feeling of not knowing it so quickly.


Of course, keeping a large list of things I don't know around 
is probably not a good idea.  I suspect such a list will 
naturally get smaller through atrophy.  You will probably never 
encounter the fomlepung question again, so the fact that you 
don't know what it means will become less and less important and 
eventually it will drop off the end of the list.  And...


Another intuition tells me that the list of things I don't 
know, might generate a certain amount of cognitive dissonance 
the resolution of which can only be accomplished by seeking out 
new information (i.e., learning)?  If so, does this mean that 
such a list in an AGI could be an important element of that AGI's 
desire to learn?  From a functional point of view, this could 
be something as simple as a scheduled background task that checks 
the things I don't know list occasionally and, under the right 
circumstances, pings the AGI with a pang of cognitive 
dissonance from time to time.


So, what say ye?


Isn't this a bit of a no-brainer?  Why would the human brain need 
to keep lists of things it did not know, when it can simply break 
the word down into components, then have mechanisms that watch for 
the rate at which candidate lexical items become activated  
when  this mechanism notices that the rate of activation is well 
below the usual threshold, it is a fairly simple thing for it to 
announce that the item is not known.


Keeping lists of things not known is wildly, outrageously 
impossible, for any system!  Would we really expect that the word 
ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-

owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented 
somewhere as a word that I do not know? :-)


I note that even in the simplest word-recognition neural nets that 
I built and studied in the 1990s, activation of a nonword 
proceeded in a very different way than activation of a word:  it 
would have been easy to build something to trigger a this is a 
nonword neuron.


Is there some type of AI formalism where nonword recognition would 
be problematic?




Richard Loosemore


Richard,

You seem to have decided my request for comment was about word 
(mis)recognition.  It wasn't.  Unfortunately, I included a 
misleading example in my initial post.  A couple of list members 
called me on it immediately (I'd expect nothing less from this 
group -- and this was a valid criticism duly noted).  So far, three 
people have pointed out that a query containing an un-common 
(foreign, slang or both) word is one way to quickly generate the 
feeling of not knowing.  But, it is just that: only one way.  Not 
all feelings of not knowing are produced by linguistic analysis 
of surface features.  In fact, I would guess that the vast majority 
of them are not so generated.  Still, some are and pointing this 
out was a valid contribution (perhaps that example was fortunately 
bad).


I don't think my query is a no-brainer to answer (unless you want 
to make it one) and your response, since it contained only another 
flavor of the previous two responses, gives me no reason 
whatsoever to change my opinion.


Please take a look at the revised example in this thread.  I don't 
think it has the same problems (as an example) as did the initial 
example.  In particular, 

Re: [agi] How do we know we don't know?

2008-07-30 Thread Richard Loosemore

Brad Paulsen wrote:



Richard Loosemore wrote:

Brad Paulsen wrote:

James,

Someone ventured the *opinion* that keeping such a list of things I 
don't know was nonsensical, but I have yet to see any evidence or 
well-reasoned argument backing that opinion.  So, it's just an 
opinion.  One with which I, obviously, do not agree.


Please be clear about what was intended by my remarks.

I *now* have an explicit, episodic memory of confronting the question 
Who won the world series in 1954, and as a result of that episode 
that occurred today, I have the explicit knowledge that I do not know 
the answer.  Having that kind of explicit knowledge of 
lack-of-knowledge is not problematic at all.


The only thing that seems implausible is that IN GENERAL we try to 
answer questions by first looking up explicit elements that encode the 
fact that we do not know the answer.  As a general strategy this must, 
surely, be deeply implausible, for the reasons that I originally gave, 
which centered on the fact that the sheer quantity of unknowns would 
be overwhelming for any system.  For almost every one of the 
potentially askable questions that would elicit, in me, a response of 
I do not know, there would not be any such episode.  Similarly, it 
would be clearly implausible for the cognitive system to spend its 
time making lists of things that it did not know.  If that is not an 
example of an obviously implausible mechanism, then I do not know what 
would be.


Ah.  Now we're getting somewhere!  I do *not* (and did not) propose that 
we keep a list of all the things unknown in memory.  Nor did I propose 
some background task that would maintain or add to such a list.  That 
would be ...wildly, outrageously impossible, for any system!  Maybe, 
instead of assuming the worst (that I could be so ignorant as to propose 
such a list), you might have asked for some clarification?


The list of things I don't know is, by definition, a list of things I 
know I don't know.  How could I *possibly* know about things I don't 
know I don't know?  The list I propose contains ONLY those things we 
know we don't know. Such a list is, in my opinion, completely manageable 
and, indeed, helpful information to have around.  When we first 
encounter a completely novel object or event we will have to search 
(percolate, whatever) for it in memory and come up empty (however you 
want to define that).  It is then, and *only* then, that we put this 
knowledge (or meta-knowledge) on the things (I know) I don't know list.


This list can be consulted before performing a search of all memory to 
determine if there's a need to do such an exhaustive search.  If the 
thing we're trying to remember is on the things (I know) I don't know 
list, we can very quickly report the feeling of not knowing.  
Otherwise, we have to do the exhaustive (however you define that) search 
of things we do know and come up empty.  Such a list can also be used by 
subconscious processes to power our desire to learn. Presumably, we 
experience cognitive dissonance when we feel there's something we know 
nothing about and want to resolve that feeling.  How?  By learning.  
Once learned, the thing falls off the things (I know) I don't know 
list. Similarly, if an item is on the list for a long time, it will 
naturally fall off the list (the use it or lose it principle).  Both 
of these natural actions will work, I believe, to keep this list quite 
small.


These are all interesting questions, in a way, but they involve a way of 
doing AI that I find ... problematic ... for other reasons.  I would 
have many questions about whether the maintenance and deployment of such 
a list would actually be as viable as you imply, but that is very much a 
practical question specific to that type of AI.


The more general issue of whether the system keeps meta knowledge of 
that sort is something that we completely agree on:  whichever way it 
uses it, it certainly does keep it for at least a while.



Sometimes (well, don't ask my ex) I can be a bit thick.  I know you're 
all surprised to hear that, but...


It just dawned on me that much of the uproar here may have been caused 
by a miscommunication (gee, where have we heard of that happening 
before?).  I may have used the term things we don't know to denote the 
things we know we don't know list.  If so, please accept my 
apologies.  Having played with these questions for a long time, this 
*important* distinction apparently became lost to me and I began to 
assume it self-evident that a things we don't know list would have had 
to come into being as the result of our encounters with those things 
when they were things we didn't know we didn't know (and, therefore, 
could not be in any list of knowledge we had -- we are clueless about 
these things until we encounter them).


If that's the case, let me (finally) be clear: the list I am talking 
about in the human or AGI agent's memory is a list of THINGS I KNOW I 
DON'T KNOW.  

Re: [agi] How do we know we don't know?

2008-07-29 Thread Valentina Poletti
lol.. well said richard.
the stimuli simply invoke no significant response and thus our brain
concludes that we 'don't know'. that's why it takes no effort to realize it.
agi algorithms should be built in a similar way, rather than searching.


 Isn't this a bit of a no-brainer?  Why would the human brain need to keep
 lists of things it did not know, when it can simply break the word down into
 components, then have mechanisms that watch for the rate at which candidate
 lexical items become activated  when  this mechanism notices that the
 rate of activation is well below the usual threshold, it is a fairly simple
 thing for it to announce that the item is not known.

 Keeping lists of things not known is wildly, outrageously impossible, for
 any system!  Would we really expect that the word
 ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-
 owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
 hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
 dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere as a
 word that I do not know? :-)

 I note that even in the simplest word-recognition neural nets that I built
 and studied in the 1990s, activation of a nonword proceeded in a very
 different way than activation of a word:  it would have been easy to build
 something to trigger a this is a nonword neuron.

 Is there some type of AI formalism where nonword recognition would be
 problematic?



 Richard Loosemore







RE: [agi] How do we know we don't know?

2008-07-29 Thread Ed Porter
I believe the human brain, in addition to including the controller for a
physical robot, includes the controller of a thought robot, which involves
pushing much of the brain through learned or instinctual mental behaviors.
My understanding is that much of the higher-level function of this thought
controller is largely in the prefrontal cortex, basal ganglia, thalamic
loop.

I am guessing that answering a query such as  What does word_X (in this
case fomlepung) mean? is a type of learned behavior.  The thought robot
is consciously aware of the query task and the idea that as a query, its
task is to search for recollection of the word fomlepung and its
associations.  I think the search is generated by consciously broadcasting
a pattern looking for a match for fomlepung to the appropriate areas of
the brain.  Although much of the spreading activation done in response to
this conscious activation is, itself, in the subconscious, the thought robot
task of answering a query is focusing attention on the query and any
feedback from it indicating a possible answer. This could be done by looking
for feedback from cortical activations to the thalamus that are in synchrony
with the query pattern, tuning into them, and testing them to see if any of
them are a desired match.  

When the conscious task of query answering does not get feedback indicating
an answer, the conscious pre-frontal process engaged in query is aware of
that lack of desired feedback and, thus, the human in whose mind the process
is taking place is conscious that he/she doesn't know (or at least can't
recall) the meaning of the word.
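A loose Python sketch of this broadcast-and-listen scheme: a query pattern is
sent to several knowledge areas, any feedback that returns is tested against
the query, and if nothing acceptable arrives before a deadline the system
reports that it does not know.  The areas, their contents, and the deadline are
invented for illustration and imply nothing about real brain timing.

# Loose sketch of the broadcast-and-listen idea: broadcast a query pattern to
# several knowledge "areas", test whatever feedback returns, and report not
# knowing if nothing acceptable arrives before a deadline.  The areas, their
# contents, and the deadline are invented and imply nothing about real timing.

import time

AREAS = {
    "norwegian slang": {},            # nothing stored here
    "baseball": {"world series": "championship of Major League Baseball"},
    "lexicon": {"cop": "police officer", "art": "creative work"},
}

def feedback(area, query):
    # An area answers only if it holds something matching the query.
    return AREAS[area].get(query)

def answer_query(query, deadline=0.01):
    start = time.time()
    while time.time() - start < deadline:
        for area in AREAS:
            reply = feedback(area, query)
            if reply is not None:     # a matching activation came back
                return reply
    return "I don't know"             # no useful feedback before the deadline

if __name__ == "__main__":
    print(answer_query("cop"))
    print(answer_query("fomlepung"))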

Conscious feelings of not knowing can arise in other contexts besides
answering a what does word_X mean query.  In some of them, subconscious
processes might, for various reasons, promote a failure to match a
subconscious query or task up to the consciousness.  

For example, a sub-subconscious pattern completion process, in say high
level perception or in cognition, might draw activation to itself, pushing
its activation into semi-conscious or conscious attention, both because its
activation pattern is beginning to better match emotionally weighted
patterns that direct more activation energy back to it, and because there is
a missing piece of information necessary for that valuable match to be
made.  The brain may have learned by evolution or individual experience that
such information would be more likely found if the much greater spreading
activation resources of semi-conscious or conscious attention could be
utilized for conducting the search for such missing information.  This
causes a greater search to be made for such information, and if the
information is not found quickly, could cause even more attention to be
allocated to the search, pushing the search and its failure into clear
conscious awareness.

Ed Porter

-Original Message-
From: Abram Demski [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 28, 2008 4:25 PM
To: agi@v2.listbox.com
Subject: Re: [agi] How do we know we don't know?

It seems like you have some valid points, but I cannot help but point
out a problem with your question. It seems like any system for pattern
recognition and/or prediction will have a sensible I Don't Know
state. An algorithm in a published paper might suppress this in an
attempt to give as reasonable an output as is possible in all
situations, but it seems like in most such cases it would be easy to
add. Therefore, where is the problem?
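For example, almost any predictor that outputs a probability distribution can
be given such a state by thresholding its confidence.  A generic sketch, with a
placeholder distribution and threshold rather than anything from a published
algorithm:

# Generic illustration: any predictor that outputs a probability distribution
# can be given an "I don't know" state by thresholding its confidence.  The
# input distribution and the threshold are placeholders.

def predict_with_idk(probabilities, min_confidence=0.6):
    """probabilities: dict mapping each label to its predicted probability."""
    label, p = max(probabilities.items(), key=lambda kv: kv[1])
    return label if p >= min_confidence else "I don't know"

if __name__ == "__main__":
    print(predict_with_idk({"noun": 0.9, "verb": 0.1}))   # noun
    print(predict_with_idk({"noun": 0.5, "verb": 0.5}))   # I don't know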

Yet, I follow your comments and to an extent agree... the feeling when
I don't know something could possibly be related to animal fear
(though I am not sure), and the second time I encounter the same thing
is certainly different (because I remember the previous not-knowing,
so I at least have that info for context this time).

But I think the issue might nonetheless be non-fundamental, because
algorithms typically can easily report their not knowing.

--Abram

On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED]
wrote:
 All,

 Here's a question for you:

What does fomlepung mean?

 If your immediate (mental) response was I don't know. it means you're
not
 a slang-slinging Norwegian.  But, how did your brain produce that feeling
 of not knowing?  And, how did it produce that feeling so fast?

 Your brain may have been able to do a massively-parallel search of your
 entire memory and come up empty.  But, if it does this, it's
subconscious.
  No one to whom I've presented the above question has reported a conscious
 feeling of searching before having the conscious feeling of not knowing.

 It could be that your brain keeps a list of things I don't know.  I tend
 to think this is the case, but it doesn't explain why your brain can react
 so quickly with the feeling of not knowing when it doesn't know it doesn't
 know (e.g., the very first time it encounters the word fomlepung).

 My intuition tells me the feeling of not knowing when presented with a
 completely novel concept

Re: [agi] How do we know we don't know?

2008-07-29 Thread Charles Hixson
On Tuesday 29 July 2008 03:08:55 am Valentina Poletti wrote:
 lol.. well said richard.
  the stimuli simply invoke no significant response and thus our brain
 concludes that we 'don't know'. that's why it takes no effort to realize
 it. agi algorithms should be built in a similar way, rather than searching.

Unhhh that *IS* a kind of search.  It's a shallowly truncated 
breadth-first search, but it's a search.
Compare that with the words right on the tip of my tongue phenomenon.  
In that case you get sufficient response that you become aware of a search 
going on, and you even know that the result *should* be positive.  You just 
can't find it.
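A sketch of such a shallowly truncated breadth-first search over a toy
association graph; the graph contents and the depth limit are made up.  A
completely unknown item has no node at all, so the search bottoms out
immediately, which matches the no-effort feeling of not knowing.

# Sketch of a shallowly truncated breadth-first search over a toy association
# graph.  A completely unknown start item has no node at all, so the search
# bottoms out immediately; the graph and the depth limit are made up.

from collections import deque

ASSOCIATIONS = {
    "world series": ["baseball", "1924"],
    "baseball": ["team", "world series"],
    "cop": ["police", "art-cop"],
}

def shallow_search(start, target, max_depth=2):
    if start not in ASSOCIATIONS:
        return "don't know"           # no response at all, and no effort spent
    if start == target:
        return "know"
    frontier, seen = deque([start]), {start}
    for _ in range(max_depth):        # truncate the breadth-first search
        next_frontier = deque()
        while frontier:
            node = frontier.popleft()
            for nxt in ASSOCIATIONS.get(node, []):
                if nxt == target:
                    return "know"
                if nxt not in seen:
                    seen.add(nxt)
                    next_frontier.append(nxt)
        frontier = next_frontier
    return "don't know"               # search truncated, nothing found

if __name__ == "__main__":
    print(shallow_search("world series", "team"))    # know
    print(shallow_search("fomlepung", "anything"))   # don't know, immediately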






Re: [agi] How do we know we don't know?

2008-07-29 Thread Matt Mahoney
This is not a hard problem. A model for data compression has the task of 
predicting the next bit in a string of unknown origin. If the string is an 
encoding of natural language text, then modeling is an AI problem. If the model 
doesn't know, then it assigns a probability of about 1/2 to each of 0 and 1. 
Probabilities can be easily detected from outside the model, regardless of the 
intelligence level of the model.
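As a toy illustration of reading don't know off a model from the outside, the
sketch below uses an order-0 bit-counting model as a stand-in for a real
compression model, and an arbitrary margin around 1/2; both are assumptions,
not part of the original claim.

# Toy illustration: a bit predictor's "don't know" can be detected from outside
# by checking how close its predicted probability is to 1/2.  The order-0
# counting model is a stand-in for a real compressor; the margin is arbitrary.

def order0_model(history):
    """Predict P(next bit = 1) from simple counts over the history string."""
    ones = history.count("1")
    return (ones + 1) / (len(history) + 2)     # Laplace-smoothed estimate

def knows(p, margin=0.2):
    """Judged from outside the model: knowing means p is far from 1/2."""
    return abs(p - 0.5) > margin

if __name__ == "__main__":
    print(knows(order0_model("1111111111")))   # True: confident prediction
    print(knows(order0_model("1010101010")))   # False: p is about 1/2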

 -- Matt Mahoney, [EMAIL PROTECTED]




Re: [agi] How do we know we don't know?

2008-07-29 Thread Brad Paulsen



Richard Loosemore wrote:

Brad Paulsen wrote:

All,

Here's a question for you:

What does fomlepung mean?

If your immediate (mental) response was I don't know. it means 
you're not a slang-slinging Norwegian.  But, how did your brain 
produce that feeling of not knowing?  And, how did it produce that 
feeling so fast?


Your brain may have been able to do a massively-parallel search of 
your entire memory and come up empty.  But, if it does this, it's 
subconscious.  No one to whom I've presented the above question has 
reported a conscious feeling of searching before having the 
conscious feeling of not knowing.


It could be that your brain keeps a list of things I don't know.  I 
tend to think this is the case, but it doesn't explain why your brain 
can react so quickly with the feeling of not knowing when it doesn't 
know it doesn't know (e.g., the very first time it encounters the word 
fomlepung).


My intuition tells me the feeling of not knowing when presented with a 
completely novel concept or event is a product of the Danger, Will 
Robinson!, reptilian part of our brain.  When we don't know we don't 
know something we react with a feeling of not knowing as a survival 
response.  Then, having survived, we put the thing not known at the 
head of our list of things I don't know.  As long as that thing is 
in this list it explains how we can come to the feeling of not knowing 
it so quickly.


Of course, keeping a large list of things I don't know around is 
probably not a good idea.  I suspect such a list will naturally get 
smaller through atrophy.  You will probably never encounter the 
fomlepung question again, so the fact that you don't know what it 
means will become less and less important and eventually it will drop 
off the end of the list.  And...


Another intuition tells me that the list of things I don't know, 
might generate a certain amount of cognitive dissonance the resolution 
of which can only be accomplished by seeking out new information 
(i.e., learning)?  If so, does this mean that such a list in an AGI 
could be an important element of that AGI's desire to learn?  From a 
functional point of view, this could be something as simple as a 
scheduled background task that checks the things I don't know list 
occasionally and, under the right circumstances, pings the AGI with 
a pang of cognitive dissonance from time to time.


So, what say ye?


Isn't this a bit of a no-brainer?  Why would the human brain need to 
keep lists of things it did not know, when it can simply break the word 
down into components, then have mechanisms that watch for the rate at 
which candidate lexical items become activated: when this mechanism 
notices that the rate of activation is well below the usual threshold, 
it is a fairly simple thing for it to announce that the item is not known.


Keeping lists of things not known is wildly, outrageously impossible, 
for any system!  Would we really expect that the word 
ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-

owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere as 
a word that I do not know? :-)


I note that even in the simplest word-recognition neural nets that I 
built and studied in the 1990s, activation of a nonword proceeded in a 
very different way than activation of a word:  it would have been easy 
to build something to trigger a this is a nonword neuron.


Is there some type of AI formalism where nonword recognition would be 
problematic?




Richard Loosemore


Richard,

You seem to have decided my request for comment was about word (mis)recognition. 
 It wasn't.  Unfortunately, I included a misleading example in my initial post. 
 A couple of list members called me on it immediately (I'd expect nothing less 
from this group -- and this was a valid criticism duly noted).  So far, three 
people have pointed out that a query containing an un-common (foreign, slang or 
both) word is one way to quickly generate the feeling of not knowing.  But, it 
is just that: only one way.  Not all feelings of not knowing are produced by 
linguistic analysis of surface features.  In fact, I would guess that the vast 
majority of them are not so generated.  Still, some are and pointing this out 
was a valid contribution (perhaps that example was fortunately bad).


I don't think my query is a no-brainer to answer (unless you want to make it 
one) and your response, since it contained only another flavor of the previous 
two responses, gives me no reason whatsoever to change my opinion.


Please take a look at the revised example in this thread.  I don't think it has 
the same problems (as an example) as did the initial example.  In particular, 
all of the words are common (American English) and the syntax is valid.


Cheers,

Brad




Re: [agi] How do we know we don't know?

2008-07-29 Thread Brad Paulsen

James,

So, you agree that some sort of search must take place before the feeling of 
not knowing presents itself?  Of course, realizing we don't have a lot of 
information results from some type of search rather than from a separate process (at 
least you didn't posit one).


Thanks for your comments!

Cheers

Brad

James Ratcliff wrote:
It is fairly simple at that point, we have enough context to have a very 
limited domain

world series - baseball
1924
answer is a team,
so we can do a lookup in our database easily enough, or realize that we 
 really don't have a lot of information about baseball in our mindset.


 And for the other one, it would just be a straight term match.

James Ratcliff

___
James Ratcliff - http://falazar.com
Looking for something...

--- On *Mon, 7/28/08, Brad Paulsen /[EMAIL PROTECTED]/* wrote:

From: Brad Paulsen [EMAIL PROTECTED]
Subject: Re: [agi] How do we know we don't know?
To: agi@v2.listbox.com
Date: Monday, July 28, 2008, 4:12 PM

Jim Bromer wrote:
 On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
 All,
    What does fomlepung mean?

 If your immediate (mental) response was I don't know. it means you're not
 a slang-slinging Norwegian.  But, how did your brain produce that feeling
 of not knowing?  And, how did it produce that feeling so fast?

 Your brain may have been able to do a massively-parallel search of your
 entire memory and come up empty.  But, if it does this, it's subconscious.
 No one to whom I've presented the above question has reported a conscious
 feeling of searching before having the conscious feeling of not knowing.

 Brad

 My guess is that initial recognition must be based on the surface
 features of an input.  If this is true, then that could suggest that
 our initial recognition reactions are stimulated by distinct
 components (or distinct groupings of components) that are found in the
 surface input data.
 Jim Bromer

Hmmm.  That particular query may not have been the best example since, to a
non-Norwegian speaker, the phonological surface feature of that statement alone
could account for the feeling of not knowing.  In other words, the word
fomlepung just doesn't sound right.  Good point.  But, that may only explain
how we know we don't know strange sounding words.

Let's try another example:

Which team won the 1924 World Series?

Cheers,

Brad











Re: [agi] How do we know we don't know?

2008-07-29 Thread Brad Paulsen

Valentina,

Well, the LOL is on you.

Richard failed to add anything new to the two previous responses that each 
posited linguistic surface feature analysis as being responsible for generating 
the feeling of not knowing with that *particular* (and, admittedly 
poorly-chosen) example query.  This mechanism will, however, apply to only a 
very tiny number of cases.


In response to those first two replies (not including Richard's), I apologized 
for the sloppy example and offered a new one.  Please read the entire thread and 
the new example.  I think you'll find Richard's and your explanation will fail 
to address how the new example might generate the feeling of not knowing.


Cheers,

Brad

Valentina Poletti wrote:

lol.. well said richard.
the stimuli simply invoke no significant response and thus our brain 
concludes that we 'don't know'. that's why it takes no effort to realize 
it. agi algorithms should be built in a similar way, rather than searching.



Isn't this a bit of a no-brainer?  Why would the human brain need to
keep lists of things it did not know, when it can simply break the
word down into components, then have mechanisms that watch for the
rate at which candidate lexical items become activated  when
 this mechanism notices that the rate of activation is well below
the usual threshold, it is a fairly simple thing for it to announce
that the item is not known.

Keeping lists of things not known is wildly, outrageously
impossible, for any system!  Would we really expect that the word
ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-
owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere
as a word that I do not know? :-)

I note that even in the simplest word-recognition neural nets that I
built and studied in the 1990s, activation of a nonword proceeded in
a very different way than activation of a word:  it would have been
easy to build something to trigger a this is a nonword neuron.

Is there some type of AI formalism where nonword recognition would
be problematic?



Richard Loosemore

 










Re: [agi] How do we know we don't know?

2008-07-29 Thread Brad Paulsen

Ed,

Thanks for the response.  I'm going to read it a couple more times to make sure 
I didn't miss anything.  But, on first read, looks good!


Thanks for taking the time to comment in such detail!

Cheers,

Brad

Ed Porter wrote:

I believe the human brain, in addition to including the controller for a
physical robot, includes the controller of a thought robot, which involves
pushing much of the brain through learned or instinctual mental behaviors.
My understanding is that much of the higher-level function of this thought
controller lies largely in the prefrontal cortex, basal ganglia, and thalamic
loop.

I am guessing that answering a query such as  What does word_X (in this
case fomlepung) mean? is a type of learned behavior.  The thought robot
is consciously aware of the query task and the idea that as a query, its
task is to search for recollection of the word fomlepung and its
associations.  I think the search is generated by consciously broadcasting
a pattern looking for a match for fomlepung to the appropriate areas of
the brain.  Although much of the spreading activation done in response to
this conscious activation is, itself, in the subconscious, the thought robot
task of answering a query is focusing attention on the query and any
feedback from it indicating a possible answer. This could be done by looking
for feedback from cortical activations to the thalamus that are in synchrony
with the query pattern, tuning into them, and testing them to see if any of
them are a desired match.  


When the conscious task of query answering does not get feedback indicating
an answer, the conscious pre-frontal process engaged in the query is aware of
that lack of desired feedback and, thus, the human in whose mind the process
is taking place is conscious that he/she doesn't know (or at least cannot
recall) the meaning of the word.

Conscious feelings of not knowing can arise in other contexts besides
answering a what does word_X mean query.  In some of them, subconscious
processes might, for various reasons, promote a failure to match a
subconscious query or task up to the consciousness.  


For example, a subconscious pattern completion process, in, say, high-level
perception or in cognition, might draw activation to itself, pushing
its activation into semi-conscious or conscious attention, both because its
activation pattern is beginning to better match emotionally weighted
patterns that direct more activation energy back to it, and because there is
a missing piece of information necessary for that valuable match to be
made.  The brain may have learned by evolution or individual experience that
such information would be more likely found if the much greater spreading
activation resources of semi-conscious or conscious attention could be
utilized for conducting the search for such missing information.  This
causes a greater search to be made for such information, and if the
information is not found quickly, could cause even more attention to be
allocated to the search, pushing the search and its failure into clear
conscious awareness.

Ed Porter
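A very rough sketch of that lack-of-feedback idea (the little association graph, the decay factor, the step limit, and the threshold below are invented for illustration, not a model of the thalamo-cortical loop):

def spread(graph, start, steps=3, decay=0.5):
    """Propagate activation outward from `start` through a weighted graph."""
    act = {start: 1.0}
    for _ in range(steps):
        new = dict(act)
        for node, a in act.items():
            for nbr, w in graph.get(node, {}).items():
                new[nbr] = new.get(nbr, 0.0) + a * w * decay
        act = new
    return act

def answer_query(graph, query_term, answer_nodes, threshold=0.1):
    """Report an answer only if some answer node fed activation back above
    threshold; otherwise the query task notices the lack of feedback."""
    act = spread(graph, query_term)
    hits = {n: act.get(n, 0.0) for n in answer_nodes if act.get(n, 0.0) >= threshold}
    if not hits:
        return "I don't know"
    return max(hits, key=hits.get)

if __name__ == "__main__":
    # A tiny associative memory: edge weights are association strengths.
    graph = {
        "world series": {"baseball": 0.9},
        "baseball": {"Washington Senators": 0.4, "New York Giants": 0.4},
    }
    answers = {"Washington Senators", "New York Giants"}
    print(answer_query(graph, "world series", answers))   # feedback arrives, a candidate team
    print(answer_query(graph, "fomlepung", answers))      # no feedback, so: I don't know

When the query term has no associations at all, nothing ever feeds back above threshold, and the don't-know report comes out after a bounded number of spreading steps rather than after an exhaustive search.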

-Original Message-
From: Abram Demski [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 28, 2008 4:25 PM

To: agi@v2.listbox.com
Subject: Re: [agi] How do we know we don't know?

It seems like you have some valid points, but I cannot help but point
out a problem with your question. It seems like any system for pattern
recognition and/or prediction will have a sensible I Don't Know
state. An algorithm in a published paper might suppress this in an
attempt to give as reasonable an output as is possible in all
situations, but it seems like in most such cases it would be easy to
add. Therefore, where is the problem?

Yet, I follow your comments and to an extent agree... the feeling when
I don't know something could possibly be related to animal fear
(though I am not sure), and the second time I encounter the same thing
is certainly different (because I remember the previous not-knowing,
so I at least have that info for context this time).

But I think the issue might nonetheless be non-fundamental, because
algorithms typically can easily report their not knowing.

--Abram

On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

All,

Here's a question for you:

   What does fomlepung mean?

If your immediate (mental) response was I don't know. it means you're not
a slang-slinging Norwegian.  But, how did your brain produce that feeling
of not knowing?  And, how did it produce that feeling so fast?

Your brain may have been able to do a massively-parallel search of your
entire memory and come up empty.  But, if it does this, it's subconscious.
No one to whom I've presented the above question has reported a conscious
feeling of searching before having the conscious feeling of not knowing.

It could be that your brain keeps a list of things I don't know.  I tend
to think this is the case, but it doesn't explain why your brain can react
so quickly

Re: [agi] How do we know we don't know?

2008-07-29 Thread Charles Hixson
On Tuesday 29 July 2008 04:12:27 pm Brad Paulsen wrote:
 Richard Loosemore wrote:
  Brad Paulsen wrote:
  All,
 
  Here's a question for you:
 
  What does fomlepung mean?
 
  If your immediate (mental) response was I don't know. it means
  you're not a slang-slinging Norwegian.  But, how did your brain
  produce that feeling of not knowing?  And, how did it produce that
  feeling so fast?
 
  Your brain may have been able to do a massively-parallel search of
  your entire memory and come up empty.  But, if it does this, it's
  subconscious.  No one to whom I've presented the above question has
  reported a conscious feeling of searching before having the
  conscious feeling of not knowing.
 
  It could be that your brain keeps a list of things I don't know.  I
  tend to think this is the case, but it doesn't explain why your brain
  can react so quickly with the feeling of not knowing when it doesn't
  know it doesn't know (e.g., the very first time it encounters the word
  fomlepung).
 
  My intuition tells me the feeling of not knowing when presented with a
  completely novel concept or event is a product of the Danger, Will
  Robinson!, reptilian part of our brain.  When we don't know we don't
  know something we react with a feeling of not knowing as a survival
  response.  Then, having survived, we put the thing not known at the
  head of our list of things I don't know.  As long as that thing is
  in this list it explains how we can come to the feeling of not knowing
  it so quickly.
 
  Of course, keeping a large list of things I don't know around is
  probably not a good idea.  I suspect such a list will naturally get
  smaller through atrophy.  You will probably never encounter the
  fomlepung question again, so the fact that you don't know what it
  means will become less and less important and eventually it will drop
  off the end of the list.  And...
 
  Another intuition tells me that the list of things I don't know,
  might generate a certain amount of cognitive dissonance the resolution
  of which can only be accomplished by seeking out new information
  (i.e., learning)?  If so, does this mean that such a list in an AGI
  could be an important element of that AGI's desire to learn?  From a
  functional point of view, this could be something as simple as a
  scheduled background task that checks the things I don't know list
  occasionally and, under the right circumstances, pings the AGI with
  a pang of cognitive dissonance from time to time.
 
  So, what say ye?
 
  Isn't this a bit of a no-brainer?  Why would the human brain need to
  keep lists of things it did not know, when it can simply break the word
  down into components, then have mechanisms that watch for the rate at
  which candidate lexical items become activated  when  this mechanism
  notices that the rate of activation is well below the usual threshold,
  it is a fairly simple thing for it to announce that the item is not
  known.
 
  Keeping lists of things not known is wildly, outrageously impossible,
  for any system!  Would we really expect that the word
  ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-
  owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
  hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
  dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere as
  a word that I do not know? :-)
 
  I note that even in the simplest word-recognition neural nets that I
  built and studied in the 1990s, activation of a nonword proceeded in a
  very different way than activation of a word:  it would have been easy
  to build something to trigger a this is a nonword neuron.
 
  Is there some type of AI formalism where nonword recognition would be
  problematic?
 
 
 
  Richard Loosemore

 Richard,

 You seem to have decided my request for comment was about word
 (mis)recognition. It wasn't.  Unfortunately, I included a misleading
 example in my initial post. A couple of list members called me on it
 immediately (I'd expect nothing less from this group -- and this was a
 valid criticism duly noted).  So far, three people have pointed out that a
 query containing an un-common (foreign, slang or both) word is one way to
 quickly generate the feeling of not knowing.  But, it is just that: only
 one way.  Not all feelings of not knowing are produced by linguistic
 analysis of surface features.  In fact, I would guess that the vast
 majority of them are not so generated.  Still, some are and pointing this
 out was a valid contribution (perhaps that example was fortunately bad).

 I don't think my query is a no-brainer to answer (unless you want to make
 it one) and your response, since it contained only another flavor of the
 previous two responses, gives me no reason whatsoever to change my opinion.

 Please take a look at the revised example in this thread.  I don't think it
 has the same problems (as an example) as did the initial example.  In
 particular, all of the words are common (American English) and the syntax is valid.
 

[agi] How do we know we don't know?

2008-07-28 Thread Brad Paulsen

All,

Here's a question for you:

What does fomlepung mean?

If your immediate (mental) response was I don't know. it means you're not a 
slang-slinging Norwegian.  But, how did your brain produce that feeling of not 
knowing?  And, how did it produce that feeling so fast?


Your brain may have been able to do a massively-parallel search of your entire 
memory and come up empty.  But, if it does this, it's subconscious.  No one to 
whom I've presented the above question has reported a conscious feeling of 
searching before having the conscious feeling of not knowing.


It could be that your brain keeps a list of things I don't know.  I tend to 
think this is the case, but it doesn't explain why your brain can react so 
quickly with the feeling of not knowing when it doesn't know it doesn't know 
(e.g., the very first time it encounters the word fomlepung).


My intuition tells me the feeling of not knowing when presented with a 
completely novel concept or event is a product of the Danger, Will Robinson!, 
reptilian part of our brain.  When we don't know we don't know something we 
react with a feeling of not knowing as a survival response.  Then, having 
survived, we put the thing not known at the head of our list of things I don't 
know.  As long as that thing is in this list it explains how we can come to the 
feeling of not knowing it so quickly.


Of course, keeping a large list of things I don't know around is probably not 
a good idea.  I suspect such a list will naturally get smaller through atrophy. 
 You will probably never encounter the fomlepung question again, so the fact 
that you don't know what it means will become less and less important and 
eventually it will drop off the end of the list.  And...


Another intuition tells me that the list of things I don't know, might 
generate a certain amount of cognitive dissonance the resolution of which can 
only be accomplished by seeking out new information (i.e., learning)?  If so, 
does this mean that such a list in an AGI could be an important element of that 
AGI's desire to learn?  From a functional point of view, this could be 
something as simple as a scheduled background task that checks the things I 
don't know list occasionally and, under the right circumstances, pings the 
AGI with a pang of cognitive dissonance from time to time.
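For what it's worth, a minimal sketch of such a background task (the time-to-live, the data structure, and the ping are all invented for illustration): entries on the things-I-don't-know list atrophy on their own, and whatever has not yet atrophied occasionally triggers a learning prompt.

import time

class UnknownsList:
    """A 'things I don't know' list with atrophy and periodic dissonance pings."""
    def __init__(self, ttl_seconds=3600.0):
        self.ttl = ttl_seconds
        self.items = {}                 # term -> time it was last encountered

    def record_unknown(self, term):
        self.items[term] = time.time()  # put it at the head of the list

    def background_check(self):
        """The scheduled task: drop atrophied entries, then ping about the rest."""
        now = time.time()
        self.items = {t: ts for t, ts in self.items.items() if now - ts < self.ttl}
        for term in self.items:
            self.ping(term)

    def ping(self, term):
        # Stand-in for the pang of cognitive dissonance that prompts learning.
        print("Still don't know %r; maybe go find out?" % term)

if __name__ == "__main__":
    memory = UnknownsList(ttl_seconds=5.0)
    memory.record_unknown("fomlepung")
    memory.background_check()           # pings while the entry is fresh
    time.sleep(6)
    memory.background_check()           # entry has atrophied; silence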


So, what say ye?

Cheers,

Brad




Re: [agi] How do we know we don't know?

2008-07-28 Thread Jim Bromer
On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
 All,
What does fomlepung mean?

 If your immediate (mental) response was I don't know. it means you're not
 a slang-slinging Norwegian.  But, how did your brain produce that feeling
 of not knowing?  And, how did it produce that feeling so fast?

 Your brain may have been able to do a massively-parallel search of your
 entire memory and come up empty.  But, if it does this, it's subconscious.
  No one to whom I've presented the above question has reported a conscious
 feeling of searching before having the conscious feeling of not knowing.

 Brad

My guess is that initial recognition must be based on the surface
features of an input.  If this is true, then that could suggest that
our initial recognition reactions are stimulated by distinct
components (or distinct groupings of components) that are found in the
surface input data.
Jim Bromer




Re: [agi] How do we know we don't know?

2008-07-28 Thread Abram Demski
It seems like you have some valid points, but I cannot help but point
out a problem with your question. It seems like any system for pattern
recognition and/or prediction will have a sensible I Don't Know
state. An algorithm in a published paper might suppress this in an
attempt to give as reasonable an output as is possible in all
situations, but it seems like in most such cases it would be easy to
add. Therefore, where is the problem?

Yet, I follow your comments and to an extent agree... the feeling when
I don't know something could possibly be related to animal fear
(though I am not sure), and the second time I encounter the same thing
is certainly different (because I remember the previous not-knowing,
so I at least have that info for context this time).

But I think the issue might nonetheless be non-fundamental, because
algorithms typically can easily report their not knowing.

--Abram
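For instance, a generic abstaining scorer might look like the sketch below (the classes, the raw scores, and the 0.6 threshold are invented for illustration, not tied to any published algorithm): anything that yields a probability distribution can report I don't know whenever its best class is not confident enough.

import math

def softmax(scores):
    """Turn raw class scores into a probability distribution."""
    m = max(scores.values())
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    total = sum(exps.values())
    return {c: v / total for c, v in exps.items()}

def predict_or_abstain(scores, threshold=0.6):
    """Return the top class, or 'I don't know' when no class is confident."""
    probs = softmax(scores)
    best = max(probs, key=probs.get)
    return best if probs[best] >= threshold else "I don't know"

if __name__ == "__main__":
    confident = {"cat": 4.0, "dog": 1.0, "bird": 0.5}
    unsure = {"cat": 1.1, "dog": 1.0, "bird": 0.9}
    print(predict_or_abstain(confident))  # prints: cat
    print(predict_or_abstain(unsure))     # prints: I don't know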

On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
 All,

 Here's a question for you:

What does fomlepung mean?

 If your immediate (mental) response was I don't know. it means you're not
 a slang-slinging Norwegian.  But, how did your brain produce that feeling
 of not knowing?  And, how did it produce that feeling so fast?

 Your brain may have been able to do a massively-parallel search of your
 entire memory and come up empty.  But, if it does this, it's subconscious.
  No one to whom I've presented the above question has reported a conscious
 feeling of searching before having the conscious feeling of not knowing.

 It could be that your brain keeps a list of things I don't know.  I tend
 to think this is the case, but it doesn't explain why your brain can react
 so quickly with the feeling of not knowing when it doesn't know it doesn't
 know (e.g., the very first time it encounters the word fomlepung).

 My intuition tells me the feeling of not knowing when presented with a
 completely novel concept or event is a product of the Danger, Will
 Robinson!, reptilian part of our brain.  When we don't know we don't know
 something we react with a feeling of not knowing as a survival response.
  Then, having survived, we put the thing not known at the head of our list
 of things I don't know.  As long as that thing is in this list it explains
 how we can come to the feeling of not knowing it so quickly.

 Of course, keeping a large list of things I don't know around is probably
 not a good idea.  I suspect such a list will naturally get smaller through
 atrophy.  You will probably never encounter the fomlepung question again, so
 the fact that you don't know what it means will become less and less
 important and eventually it will drop off the end of the list.  And...

 Another intuition tells me that the list of things I don't know, might
 generate a certain amount of cognitive dissonance the resolution of which
 can only be accomplished by seeking out new information (i.e., learning)?
  If so, does this mean that such a list in an AGI could be an important
 element of that AGI's desire to learn?  From a functional point of view,
 this could be something as simple as a scheduled background task that checks
 the things I don't know list occasionally and, under the right
 circumstances, pings the AGI with a pang of cognitive dissonance from time
 to time.

 So, what say ye?

 Cheers,

 Brad







Re: [agi] How do we know we don't know?

2008-07-28 Thread Eric Burton
I think I decided pretty quickly that I don't know any words starting
with foml.

I don't know if this is a clue

On 7/28/08, Abram Demski [EMAIL PROTECTED] wrote:
 It seems like you have some valid points, but I cannot help but point
 out a problem with your question. It seems like any system for pattern
 recognition and/or prediction will have a sensible I Don't Know
 state. An algorithm in a published paper might suppress this in an
 attempt to give as reasonable an output as is possible in all
 situations, but it seems like in most such cases it would be easy to
 add. Therefore, where is the problem?

 Yet, I follow your comments and to an extent agree... the feeling when
 I don't know something could possibly be related to animal fear
 (though I am not sure), and the second time I encounter the same thing
 is certainly different (because I remember the previous not-knowing,
 so I at least have that info for context this time).

 But I think the issue might nonetheless be non-fundamental, because
 algorithms typically can easily report their not knowing.

 --Abram

 On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED]
 wrote:
 All,

 Here's a question for you:

What does fomlepung mean?

 If your immediate (mental) response was I don't know. it means you're
 not
 a slang-slinging Norwegian.  But, how did your brain produce that feeling
 of not knowing?  And, how did it produce that feeling so fast?

 Your brain may have been able to do a massively-parallel search of your
 entire memory and come up empty.  But, if it does this, it's
 subconscious.
  No one to whom I've presented the above question has reported a conscious
 feeling of searching before having the conscious feeling of not knowing.

 It could be that your brain keeps a list of things I don't know.  I tend
 to think this is the case, but it doesn't explain why your brain can react
 so quickly with the feeling of not knowing when it doesn't know it doesn't
 know (e.g., the very first time it encounters the word fomlepung).

 My intuition tells me the feeling of not knowing when presented with a
 completely novel concept or event is a product of the Danger, Will
 Robinson!, reptilian part of our brain.  When we don't know we don't know
 something we react with a feeling of not knowing as a survival response.
  Then, having survived, we put the thing not known at the head of our list
 of things I don't know.  As long as that thing is in this list it
 explains
 how we can come to the feeling of not knowing it so quickly.

 Of course, keeping a large list of things I don't know around is
 probably
 not a good idea.  I suspect such a list will naturally get smaller through
 atrophy.  You will probably never encounter the fomlepung question again,
 so
 the fact that you don't know what it means will become less and less
 important and eventually it will drop off the end of the list.  And...

 Another intuition tells me that the list of things I don't know, might
 generate a certain amount of cognitive dissonance the resolution of which
 can only be accomplished by seeking out new information (i.e.,
 learning)?
  If so, does this mean that such a list in an AGI could be an important
 element of that AGI's desire to learn?  From a functional point of view,
 this could be something as simple as a scheduled background task that
 checks
 the things I don't know list occasionally and, under the right
 circumstances, pings the AGI with a pang of cognitive dissonance from
 time
 to time.

 So, what say ye?

 Cheers,

 Brad










Re: [agi] How do we know we don't know?

2008-07-28 Thread Eric Burton
In fact, had you asked me right away, I'd have said I don't know any
words starting with fom. But on some reflection I was able to think
of foment.

Certainly there's some kind of habituation/reflex involved here, where
a word falls off in familiarity as you sound it out. I don't know
about other kinds of not knowing. :|
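A toy way to picture that falling-off (the small word list and the prefix-tree walk are my own illustration): sound the word out letter by letter against a trie of known words, and the moment the prefix leaves the trie the not-knowing is already available.

def build_trie(words):
    """Prefix tree over a tiny, illustrative vocabulary."""
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True   # end-of-word marker
    return root

def familiar_prefix(trie, word):
    """Walk the word letter by letter; report where it falls off the trie."""
    node = trie
    for i, ch in enumerate(word):
        if ch not in node:
            return word[:i]      # longest prefix shared with any known word
        node = node[ch]
    return word

if __name__ == "__main__":
    trie = build_trie(["foment", "formula", "fob", "pungent"])
    probe = "fomlepung"
    prefix = familiar_prefix(trie, probe)
    print("familiarity falls off after %r; so %r is not known" % (prefix, probe))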


On 7/28/08, Eric Burton [EMAIL PROTECTED] wrote:
 I think I decided pretty quickly that I don't know any words starting
 with foml.

 I don't know if this is a clue

 On 7/28/08, Abram Demski [EMAIL PROTECTED] wrote:
 It seems like you have some valid points, but I cannot help but point
 out a problem with your question. It seems like any system for pattern
 recognition and/or prediction will have a sensible I Don't Know
 state. An algorithm in a published paper might suppress this in an
 attempt to give as reasonable an output as is possible in all
 situations, but it seems like in most such cases it would be easy to
 add. Therefore, where is the problem?

 Yet, I follow your comments and to an extent agree... the feeling when
 I don't know something could possibly be related to animal fear
 (though I am not sure), and the second time I encounter the same thing
 is certainly different (because I remember the previous not-knowing,
 so I at least have that info for context this time).

 But I think the issue might nonetheless be non-fundamental, because
 algorithms typically can easily report their not knowing.

 --Abram

 On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED]
 wrote:
 All,

 Here's a question for you:

What does fomlepung mean?

 If your immediate (mental) response was I don't know. it means you're
 not
 a slang-slinging Norwegian.  But, how did your brain produce that
 feeling
 of not knowing?  And, how did it produce that feeling so fast?

 Your brain may have been able to do a massively-parallel search of your
 entire memory and come up empty.  But, if it does this, it's
 subconscious.
  No one to whom I've presented the above question has reported a
 conscious
 feeling of searching before having the conscious feeling of not
 knowing.

 It could be that your brain keeps a list of things I don't know.  I
 tend
 to think this is the case, but it doesn't explain why your brain can
 react
 so quickly with the feeling of not knowing when it doesn't know it
 doesn't
 know (e.g., the very first time it encounters the word fomlepung).

 My intuition tells me the feeling of not knowing when presented with a
 completely novel concept or event is a product of the Danger, Will
 Robinson!, reptilian part of our brain.  When we don't know we don't
 know
 something we react with a feeling of not knowing as a survival response.
  Then, having survived, we put the thing not known at the head of our
 list
 of things I don't know.  As long as that thing is in this list it
 explains
 how we can come to the feeling of not knowing it so quickly.

 Of course, keeping a large list of things I don't know around is
 probably
 not a good idea.  I suspect such a list will naturally get smaller
 through
 atrophy.  You will probably never encounter the fomlepung question again,
 so
 the fact that you don't know what it means will become less and less
 important and eventually it will drop off the end of the list.  And...

 Another intuition tells me that the list of things I don't know, might
 generate a certain amount of cognitive dissonance the resolution of which
 can only be accomplished by seeking out new information (i.e.,
 learning)?
  If so, does this mean that such a list in an AGI could be an important
 element of that AGI's desire to learn?  From a functional point of
 view,
 this could be something as simple as a scheduled background task that
 checks
 the things I don't know list occasionally and, under the right
 circumstances, pings the AGI with a pang of cognitive dissonance from
 time
 to time.

 So, what say ye?

 Cheers,

 Brad











Re: [agi] How do we know we don't know?

2008-07-28 Thread Eric Burton
You will probably never encounter the fomlepung question again, so the fact that you
don't know what it means will become less and less important and eventually it
will drop off the end of the list.

Does it email you when this occurs?

xD

On 7/28/08, Eric Burton [EMAIL PROTECTED] wrote:
 In fact, had you asked me right away, I'd have said I don't know any
 words starting with fom. But on some reflection I was able to think
 of foment.

 Certainly there's some kind of habituation/reflex involved here, where
 a word falls off in familiarity as you sound it out. I don't know
 about other kinds of not knowing. :|


 On 7/28/08, Eric Burton [EMAIL PROTECTED] wrote:
 I think I decided pretty quickly that I don't know any words starting
 with foml.

 I don't know if this is a clue

 On 7/28/08, Abram Demski [EMAIL PROTECTED] wrote:
 It seems like you have some valid points, but I cannot help but point
 out a problem with your question. It seems like any system for pattern
 recognition and/or prediction will have a sensible I Don't Know
 state. An algorithm in a published paper might suppress this in an
 attempt to give as reasonable an output as is possible in all
 situations, but it seems like in most such cases it would be easy to
 add. Therefore, where is the problem?

 Yet, I follow your comments and to an extent agree... the feeling when
 I don't know something could possibly be related to animal fear
 (though I am not sure), and the second time I encounter the same thing
 is certainly different (because I remember the previous not-knowing,
 so I at least have that info for context this time).

 But I think the issue might nonetheless be non-fundamental, because
 algorithms typically can easily report their not knowing.

 --Abram

 On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED]
 wrote:
 All,

 Here's a question for you:

What does fomlepung mean?

 If your immediate (mental) response was I don't know. it means you're
 not
 a slang-slinging Norwegian.  But, how did your brain produce that
 feeling
 of not knowing?  And, how did it produce that feeling so fast?

 Your brain may have been able to do a massively-parallel search of your
 entire memory and come up empty.  But, if it does this, it's
 subconscious.
  No one to whom I've presented the above question has reported a
 conscious
 feeling of searching before having the conscious feeling of not
 knowing.

 It could be that your brain keeps a list of things I don't know.  I
 tend
 to think this is the case, but it doesn't explain why your brain can
 react
 so quickly with the feeling of not knowing when it doesn't know it
 doesn't
 know (e.g., the very first time it encounters the word fomlepung).

 My intuition tells me the feeling of not knowing when presented with a
 completely novel concept or event is a product of the Danger, Will
 Robinson!, reptilian part of our brain.  When we don't know we don't
 know
 something we react with a feeling of not knowing as a survival response.
  Then, having survived, we put the thing not known at the head of our
 list
 of things I don't know.  As long as that thing is in this list it
 explains
 how we can come to the feeling of not knowing it so quickly.

 Of course, keeping a large list of things I don't know around is
 probably
 not a good idea.  I suspect such a list will naturally get smaller
 through
 atrophy.  You will probably never encounter the fomlepung question
 again,
 so
 the fact that you don't know what it means will become less and less
 important and eventually it will drop off the end of the list.  And...

 Another intuition tells me that the list of things I don't know, might
 generate a certain amount of cognitive dissonance the resolution of
 which
 can only be accomplished by seeking out new information (i.e.,
 learning)?
  If so, does this mean that such a list in an AGI could be an important
 element of that AGI's desire to learn?  From a functional point of
 view,
 this could be something as simple as a scheduled background task that
 checks
 the things I don't know list occasionally and, under the right
 circumstances, pings the AGI with a pang of cognitive dissonance from
 time
 to time.

 So, what say ye?

 Cheers,

 Brad











Re: [agi] How do we know we don't know?

2008-07-28 Thread Brad Paulsen



Jim Bromer wrote:

On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

All,
   What does fomlepung mean?

If your immediate (mental) response was I don't know. it means you're not
a slang-slinging Norwegian.  But, how did your brain produce that feeling
of not knowing?  And, how did it produce that feeling so fast?

Your brain may have been able to do a massively-parallel search of your
entire memory and come up empty.  But, if it does this, it's subconscious.
 No one to whom I've presented the above question has reported a conscious
feeling of searching before having the conscious feeling of not knowing.

Brad


 My guess is that initial recognition must be based on the surface
features of an input.  If this is true, then that could suggest that
our initial recognition reactions are stimulated by distinct
components (or distinct groupings of components) that are found in the
surface input data.
Jim Bromer


Hmmm.  That particular query may not have been the best example since, to a 
non-Norwegian speaker, the phonological surface feature of that statement alone 
could account for the feeling of not knowing.  In other words, the word 
fomlepung just doesn't sound right.  Good point.  But, that may only explain 
how we know we don't know strange sounding words.


Let's try another example:

Which team won the 1924 World Series?

Cheers,

Brad








Re: [agi] How do we know we don't know?

2008-07-28 Thread Richard Loosemore

Brad Paulsen wrote:

All,

Here's a question for you:

What does fomlepung mean?

If your immediate (mental) response was I don't know. it means you're 
not a slang-slinging Norwegian.  But, how did your brain produce that 
feeling of not knowing?  And, how did it produce that feeling so fast?


Your brain may have been able to do a massively-parallel search of your 
entire memory and come up empty.  But, if it does this, it's 
subconscious.  No one to whom I've presented the above question has 
reported a conscious feeling of searching before having the conscious 
feeling of not knowing.


It could be that your brain keeps a list of things I don't know.  I 
tend to think this is the case, but it doesn't explain why your brain 
can react so quickly with the feeling of not knowing when it doesn't 
know it doesn't know (e.g., the very first time it encounters the word 
fomlepung).


My intuition tells me the feeling of not knowing when presented with a 
completely novel concept or event is a product of the Danger, Will 
Robinson!, reptilian part of our brain.  When we don't know we don't 
know something we react with a feeling of not knowing as a survival 
response.  Then, having survived, we put the thing not known at the head 
of our list of things I don't know.  As long as that thing is in this 
list it explains how we can come to the feeling of not knowing it so 
quickly.


Of course, keeping a large list of things I don't know around is 
probably not a good idea.  I suspect such a list will naturally get 
smaller through atrophy.  You will probably never encounter the 
fomlepung question again, so the fact that you don't know what it means 
will become less and less important and eventually it will drop off the 
end of the list.  And...


Another intuition tells me that the list of things I don't know, might 
generate a certain amount of cognitive dissonance the resolution of 
which can only be accomplished by seeking out new information (i.e., 
learning)?  If so, does this mean that such a list in an AGI could be 
an important element of that AGI's desire to learn?  From a functional 
point of view, this could be something as simple as a scheduled 
background task that checks the things I don't know list occasionally 
and, under the right circumstances, pings the AGI with a pang of 
cognitive dissonance from time to time.


So, what say ye?


Isn't this a bit of a no-brainer?  Why would the human brain need to 
keep lists of things it did not know, when it can simply break the word 
down into components, then have mechanisms that watch for the rate at 
which candidate lexical items become activated: when this mechanism 
notices that the rate of activation is well below the usual threshold, 
it is a fairly simple thing for it to announce that the item is not known.


Keeping lists of things not known is wildly, outrageously impossible, 
for any system!  Would we really expect that the word 
ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-

owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere as 
a word that I do not know? :-)


I note that even in the simplest word-recognition neural nets that I 
built and studied in the 1990s, activation of a nonword proceeded in a 
very different way than activation of a word:  it would have been easy 
to build something to trigger a this is a nonword neuron.


Is there some type of AI formalism where nonword recognition would be 
problematic?




Richard Loosemore
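A toy sketch of the activation account described above (the bigram-overlap measure, the 0.5 threshold, and the tiny lexicon are all invented for illustration): stored words activate in proportion to how much sub-lexical structure they share with the input, and when no candidate reaches the usual level the system simply announces not known, with no list of unknowns stored anywhere.

def bigrams(word):
    """Letter bigrams, a crude stand-in for sub-lexical components."""
    w = "#" + word.lower() + "#"
    return {w[i:i + 2] for i in range(len(w) - 1)}

def activations(stimulus, lexicon):
    """Activate each stored word by its bigram overlap with the stimulus."""
    s = bigrams(stimulus)
    return {w: len(s & bigrams(w)) / len(s | bigrams(w)) for w in lexicon}

def recognize(stimulus, lexicon, threshold=0.5):
    """If no lexical candidate activates past threshold, report 'not known'."""
    act = activations(stimulus, lexicon)
    best = max(act, key=act.get)
    if act[best] >= threshold:
        return "recognized as %r (activation %.2f)" % (best, act[best])
    return "not known (best candidate %r only reached %.2f)" % (best, act[best])

if __name__ == "__main__":
    lexicon = ["foment", "formula", "pungent", "baseball", "series", "team"]
    for probe in ["foment", "fomlepung", "ikrwfheuigjsjboweonwjebgow"]:
        print(probe, "->", recognize(probe, lexicon))

The nonwords never push any candidate past threshold, so the not-known response is available immediately, exactly because the activation dynamics differ from those of a stored word.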





Re: [agi] How do we know we don't know?

2008-07-28 Thread James Ratcliff
It is fairly simple at that point, we have enough context to have a very 
limited domain
world series - baseball
1924
answer is a team, 
so we can do a lookup in our database easily enough, or realize that we really 
don't have a lot of information about baseball in our mindset.

And for the other one, it would just be a straight term match.
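A minimal sketch of that kind of context-narrowed lookup (the toy knowledge base and its keying scheme are invented for illustration): the context reduces the question to a (topic, year, slot) key, and a missing key is itself the fast I-don't-know signal.

# Toy knowledge base keyed by (topic, year, slot); contents are illustrative only.
KB = {
    ("baseball", 1924, "world series winner"): "Washington Senators",
}

def lookup(topic, year, slot):
    """Either we have the fact, or the failed lookup itself is the not-knowing."""
    try:
        return KB[(topic, year, slot)]
    except KeyError:
        return "I don't know; not much about %s in my mindset" % topic

if __name__ == "__main__":
    print(lookup("baseball", 1924, "world series winner"))
    print(lookup("cricket", 1924, "test series winner"))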

James Ratcliff

___

James Ratcliff - http://falazar.com

Looking for something...

--- On Mon, 7/28/08, Brad Paulsen [EMAIL PROTECTED] wrote:
From: Brad Paulsen [EMAIL PROTECTED]
Subject: Re: [agi] How do we know we don't know?
To: agi@v2.listbox.com
Date: Monday, July 28, 2008, 4:12 PM

Jim Bromer wrote:
 On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
 All,
    What does fomlepung mean?

 If your immediate (mental) response was I don't know. it means you're not
 a slang-slinging Norwegian.  But, how did your brain produce that feeling
 of not knowing?  And, how did it produce that feeling so fast?

 Your brain may have been able to do a massively-parallel search of your
 entire memory and come up empty.  But, if it does this, it's subconscious.
 No one to whom I've presented the above question has reported a conscious
 feeling of searching before having the conscious feeling of not knowing.

 Brad

 My guess is that initial recognition must be based on the surface
 features of an input.  If this is true, then that could suggest that
 our initial recognition reactions are stimulated by distinct
 components (or distinct groupings of components) that are found in the
 surface input data.
 Jim Bromer

Hmmm.  That particular query may not have been the best example since, to a
non-Norwegian speaker, the phonological surface feature of that statement alone
could account for the feeling of not knowing.  In other words, the word
fomlepung just doesn't sound right.  Good point.  But, that may only explain
how we know we don't know strange sounding words.

Let's try another example:

Which team won the 1924 World Series?

Cheers,

Brad







Re: [agi] How do we know we don't know?

2008-07-28 Thread Jim Bromer
Well, I can put together the components of who won the 1924 world series and
determine that I don't know the answer (unless it was the Yankees,
since I think Babe Ruth might have done something significant in
1924).  However, the fact that I was able to interpret the sentence
without seeming to search for the answer to what it means suggests
that there were some underlying processes at work.  Of course there
were extensive underlying processes, so it is possible that we have
various lists of things that we know we don't know along with lists of
things that we think we do know.  But for a feasible
AI I think we have to find a way to compact all that information in a
way that makes it accessible for quick lookup.
Maybe it could be done through generalizations and the like.
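One possible way to get that kind of compaction (a sketch of my own, not something proposed in the thread): keep the set of terms we do know in a compact probabilistic structure such as a Bloom filter, so a negative membership test gives an immediate, definite I don't know without storing any list of unknowns at all.

import hashlib

class BloomFilter:
    """Compact membership test: 'no' answers are definite, 'yes' answers may
    very rarely be false positives, and no item is ever stored explicitly."""
    def __init__(self, size_bits=1 << 16, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(("%d:%s" % (i, item)).encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_know(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

if __name__ == "__main__":
    known = BloomFilter()
    for word in ["baseball", "world series", "team", "1924"]:
        known.add(word)
    print(known.might_know("baseball"))    # True
    print(known.might_know("fomlepung"))   # False: a fast, definite don't-know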

The point that we know it was a baseball team is very important
because it might help us to delineate some of the processes of
thinking, with the hope of finding feasible ways this might be
done in an AI program.
Jim Bromer

On Mon, Jul 28, 2008 at 5:23 PM, James Ratcliff [EMAIL PROTECTED] wrote:
 It is fairly simple at that point, we have enough context to have a very
 limited domain
 world series - baseball
 1924
 answer is a team,
 so we can do a lookup in our database easily enough, or realize that we
 really don't have a lot of information about baseball in our mindset.

 And for the other one, it would just be a straight term match.

 James Ratcliff

 ___
 James Ratcliff - http://falazar.com
 Looking for something...


 --- On Mon, 7/28/08, Brad Paulsen [EMAIL PROTECTED] wrote:

 Hmmm.  That particular query may not have been the best example since, to a
 non-Norwegian speaker, the phonological surface feature of that statement alone
 could account for the feeling of not knowing.  In other words, the word
 fomlepung just doesn't sound right.  Good point.  But, that may only explain
 how we know we don't know strange sounding words.

 Let's try another example:

   Which team won the 1924 World Series?

 Cheers,

 Brad

