On Wed, Mar 26, 2008 at 4:27 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
>
> I agreed with you up until your conclusion.  While the problems I
> talked about may be known issues, they are discussed almost exclusively
> using intuitive models, like the ones we used, or by referring to
> ineffective models, like network theories that do not explicitly show
> how their associative interrelations would effectively handle the
> intricate conceptual details that addressing these issues would require
> and that an effective solution would produce.  I have never seen a
> theory designed specifically to address the range of situations I have
> in mind, although most earlier AI models were intended to deal with
> similar issues, and I have seen some exemplary models that used
> controlled settings to show how some of these interrelations might be
> modeled.  These intuitive discussions and the exaggerated effectiveness
> of inadequate programs create a concept cloud of their own, and the
> problem is that the knowledgeable listener feels he understands the
> problem without having made any kind of commitment to exploring an
> effective solution.
>
> Although I have not detailed how the effects of applying ideas might be
> modeled in an actual AI program (or in the extremely simple model I
> would use to start studying the question), my whole point is that if
> you are interested in advancing AI programming, then the issue my
> theory addresses is a problem that cannot be dismissed with a wave of
> the hand.  The next step for me is to find a model strong enough to
> hold up to genuine extensible learning.
>
> If you are deciding how much time to spend thinking about this based
> only on whether you have thought about similar problems before, I
> believe you have already considered some sampling of the kind of
> problems my theory is meant to address.
>

What you describe is essentially my own path up to this point: I
started by considering high-level capabilities and gradually worked
toward an implementation that seems able to exhibit them. At the end
of my last message I referred to a pragmatic problem. The substrate I
now experiment with is essentially a very simple recurrent network
with seemingly insignificant tweaks. Without a high-level view of how
to make it exhibit high-level capabilities, I'd never have looked at
it twice. Convincing someone else that it is that capable would take a
rather long description, and I could well turn out to be wrong (so
people have a perfectly good reason not to listen). It seems more
sensible to stick to prototyping and wait for more solid results,
either changing the theory or demonstrating its potential.

-- 
Vladimir Nesov
[EMAIL PROTECTED]

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/