On Tue, Mar 25, 2008 at 7:19 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

>
> Certainly ambiguity (= applicability to multiple contexts in different
> ways) and the presence of rich structure in presumably simple 'ideas',
> as you call them, are known issues. Even the interaction between the
> concept clouds evoked by a pair of words is a nontrivial process
> ("triangular lightbulb"). In a way, the whole operation can be modeled
> by such interactions, where sensory input/recall is taken to present a
> stream of triggers that evoke concept cloud after cloud, with
> associations and compound concepts forming at the overlaps. But of
> course it's too hand-wavy without a more restricted model of what's
> going on. Communicating something that exists solely at a high level
> is very inefficient, and most such content can turn out to be wrong.
> Back to prototyping...
>
> --
> Vladimir Nesov
> [EMAIL PROTECTED]
>
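
To make the process you describe a little more concrete, here is a minimal
toy sketch of cloud interaction (my own illustration, not your model; it
assumes a concept cloud can be reduced to a weighted feature set, and the
LEXICON, evoke, and combine names are hypothetical):

    # Toy sketch: concept clouds as weighted feature sets. This is an
    # assumption made for illustration, not the model under discussion.
    LEXICON = {
        "triangular": {"shape": 0.9, "angular": 0.8, "rigid": 0.5},
        "lightbulb": {"glass": 0.9, "light": 0.9, "round": 0.7},
    }

    def evoke(trigger):
        """Return the concept cloud evoked by a single trigger word."""
        return LEXICON.get(trigger, {})

    def combine(cloud_a, cloud_b):
        """Merge two clouds into a compound concept.

        Features present in both clouds keep the larger weight; genuinely
        conflicting features ('angular' vs. 'round') simply coexist here,
        which is exactly the kind of hand-waving at issue.
        """
        compound = dict(cloud_a)
        for feature, weight in cloud_b.items():
            compound[feature] = max(weight, compound.get(feature, 0.0))
        return compound

    # A stream of triggers evokes cloud after cloud; compound concepts
    # form at the overlaps.
    current = {}
    for trigger in ["triangular", "lightbulb"]:
        current = combine(current, evoke(trigger))
    print(current)  # the compound 'triangular lightbulb' cloud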


I agreed with you up until your conclusion.  While the problems I
described may be known issues, they are discussed almost exclusively
through intuitive models, as we have done here, or through ineffective
ones, such as network theories that never explicitly show how their
associative interrelations would deal with the intricate conceptual
detail that addressing these issues requires and that an effective
solution would produce.  I have never seen a theory designed
specifically to address the range of situations I am thinking of,
although most earlier AI models were intended to deal with similar
issues, and I have seen some exemplary models that used controlled
settings to show how some of these interrelations might be modeled.
These intuitive discussions, together with the exaggerated
effectiveness of inadequate programs, create a concept cloud of their
own, and the problem is that the knowledgeable listener feels he
understands the problem without ever having committed to exploring an
effective solution.



Although I have not detailed how the effects of applying ideas might
be modeled in an actual AI program (or in the extremely simple model I
would use to begin studying the problem), my whole point is that if
you are interested in advancing AI programming, then the issue my
theory addresses cannot be dismissed with a wave of the hand.  My next
step is to find a model strong enough to hold up to genuine extensible
learning.



If you are deciding how much time to spend thinking about this based
only on whether you have thought about similar problems before, then I
believe you have already considered some sampling of the kind of
problems my theory is meant to address.


Jim Bromer
