lol.. well said, Richard.
The stimuli simply invoke no significant response, and so the brain
concludes that we 'don't know'. That's why it takes no effort to realize it.
AGI algorithms should be built in a similar way, rather than by searching;
a rough sketch of the idea is below.
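To make that concrete, here is a minimal sketch in Python. Everything in it
is illustrative: the toy lexicon, the letter-trigram features, and the 0.4
threshold are assumptions standing in for whatever the real activation
mechanism would be. The point is only that "unknown" falls out of weak
activation, with no list of nonwords stored anywhere.

# A toy "activation threshold" recognizer: score how strongly a stimulus
# activates each stored lexical item, and report "don't know" whenever the
# best activation stays below threshold. No list of nonwords is kept.
# (Hypothetical lexicon, features, and threshold -- purely illustrative.)

def trigrams(word):
    """Letter trigrams, padded so short words still yield features."""
    padded = f"##{word.lower()}##"
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def activation(stimulus, entry):
    """Jaccard overlap of trigram sets, a crude stand-in for how strongly
    a stored item resonates with the stimulus."""
    a, b = trigrams(stimulus), trigrams(entry)
    return len(a & b) / len(a | b)

def recognize(stimulus, lexicon, threshold=0.4):
    """Return the best-matching known word, or None ("don't know") when
    nothing is activated above threshold."""
    best = max(lexicon, key=lambda w: activation(stimulus, w))
    return best if activation(stimulus, best) >= threshold else None

lexicon = ["recognition", "activation", "threshold", "mechanism", "lexical"]
print(recognize("activaton", lexicon))          # near-miss typo -> "activation"
print(recognize("ikrwfheuigjsjbowe", lexicon))  # gibberish -> None

The gibberish case never has to be stored or searched for; it simply fails
to light anything up, which is exactly the "no significant response"
behaviour described above.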


> Isn't this a bit of a no-brainer?  Why would the human brain need to keep
> lists of things it did not know, when it can simply break the word down into
> components, then have mechanisms that watch for the rate at which candidate
> lexical items become activated... when this mechanism notices that the
> rate of activation is well below the usual threshold, it is a fairly simple
> thing for it to announce that the item is not known.
>
> Keeping lists of "things not known" is wildly, outrageously impossible, for
> any system!  Would we really expect that the word
> "ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-
> owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
> hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
> dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw" is represented somewhere as a
> "word that I do not know"? :-)
>
> I note that even in the simplest word-recognition neural nets that I built
> and studied in the 1990s, activation of a nonword proceeded in a very
> different way than activation of a word:  it would have been easy to build
> something to trigger a "this is a nonword" neuron.
>
> Is there some type of AI formalism where nonword recognition would be
> problematic?
>
>
>
> Richard Loosemore
>
>


