Re: [agi] Patterns and Automata

2008-07-17 Thread Abram Demski
No, not especially familiar, but it sounds interesting. Personally I
am interested in learning formal grammars to describe data, and there
are well-established equivalences between grammars and automata, so
the approaches are somewhat compatible. According to Wikipedia,
semiautomata have no output, so you cannot be using them as generative
models, but they also lack accept states, so you can't be using them as
recognition models, either. How are you using them?
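
For concreteness, here is a rough Python sketch of the distinction I have in
mind (the transition table and alphabet are made up for illustration, not
taken from your system): the same transition structure only becomes a
recognizer once you add accept states.

# Rough sketch: a semiautomaton is just a state set plus a transition
# function, with no output and no accept states; adding accept states
# turns the same structure into a DFA that recognizes strings.

# Illustrative transition table over the alphabet {'a', 'b'}; states are 0, 1.
DELTA = {
    (0, 'a'): 1,
    (0, 'b'): 0,
    (1, 'a'): 1,
    (1, 'b'): 0,
}

def run_semiautomaton(start, symbols):
    """A semiautomaton only transforms state; it neither accepts nor emits."""
    state = start
    for s in symbols:
        state = DELTA[(state, s)]
    return state  # all you get back is the final state

def run_dfa(start, accept_states, symbols):
    """With accept states, the same transitions recognize a language."""
    return run_semiautomaton(start, symbols) in accept_states

# Example: accept exactly the strings ending in 'a'.
print(run_dfa(0, {1}, "abba"))  # True
print(run_dfa(0, {1}, "abb"))   # False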

-Abram

On Thu, Jul 17, 2008 at 1:05 PM, John G. Rose <[EMAIL PROTECTED]> wrote:
>> From: Abram Demski [mailto:[EMAIL PROTECTED]
>> John,
>> What kind of automata? Finite-state automata? Pushdown? Turing
>> machines? Does CA mean cellular automata?
>> --Abram
>>
>
> Hi Abram,
>
> FSM, semiautomata, groups w/o actions, semigroups with action in the
> observer, etc... CA is for cellular automata.
>
> This is mostly for spatio-temporal recognition and processing. I haven't
> tried looking much at other data yet.
>
> Why do you ask? Are you familiar with this?
>
> John




RE: [agi] Patterns and Automata

2008-07-17 Thread John G. Rose
> From: Abram Demski [mailto:[EMAIL PROTECTED]
> John,
> What kind of automata? Finite-state automata? Pushdown? Turing
> machines? Does CA mean cellular automata?
> --Abram
> 

Hi Abram,

FSM, semiautomata, groups w/o actions, semigroups with action in the
observer, etc... CA is for cellular automata.

This is mostly for spatio-temporal recognition and processing. I haven't
tried looking much at other data yet.
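
To make the CA side concrete, here is a minimal, purely illustrative Python
sketch (the rule number and grid size are arbitrary choices, not anything from
my actual setup): a one-dimensional elementary cellular automaton, which is
about the simplest spatio-temporal substrate there is, with space across the
row and time running down the output.

# Minimal elementary cellular automaton (1-D, two states, radius-1 neighborhood).
# Each run produces a small spatio-temporal pattern: cell index is the space
# dimension, iteration number is the time dimension.

def ca_step(cells, rule=110):
    """Apply one update of an elementary CA with the given Wolfram rule number."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right
        nxt.append((rule >> idx) & 1)
    return nxt

def run_ca(cells, steps, rule=110):
    history = [cells]
    for _ in range(steps):
        cells = ca_step(cells, rule)
        history.append(cells)
    return history  # list of rows: time down, space across

# Example: a single live cell in a 31-cell ring, evolved for 15 steps.
grid = [0] * 31
grid[15] = 1
for row in run_ca(grid, 15):
    print(''.join('#' if c else '.' for c in row))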

Why do you ask? Are you familiar with this?

John






Re: Location of goal/purpose was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-17 Thread Terren Suydam

Will,

--- On Tue, 7/15/08, William Pearson <[EMAIL PROTECTED]> wrote:

> And I would also say of evolved systems. My fingers' purpose could
> equally well be said to be for picking ticks out of the hair of my kin
> or for touch typing. E.g., why do I keep my fingernails short? So that
> they do not impede my typing. The purpose of gut bacteria is to help
> me digest my food. The purpose of part of my brain is to do
> differentiation of functions, because I have .

Actually, I agree with that; good point. No matter what kind of system,
designed or evolved, it has no intrinsic purpose, only a purpose we interpret.
Purpose, in other words, is a property of the observer, not the observed.
 
> If you want to think of a good analogy for how emergent I want the
> system to be: imagine someone came along to one of your life
> simulations and interfered with the simulation to give some more food
> to some of the entities that he liked the look of. This wouldn't be
> anything so crude as specifying the fitness or artificial breeding,
> but it would tilt the scales in favour of entities that he liked,
> all else being equal. Would this invalidate the whole simulation
> because he interfered and brought some of his purpose into it? If so,
> I don't see why.

No, it certainly wouldn't invalidate it. That is in fact what I would do to
nudge the simulation along: provide it with incentives for developing in
complexity, add richness to the environment, create problems to be solved.
 
> > So unless you believe that life was designed by God (in which case
> > the purpose of life would lie in the mind of God), the purpose of the
> > system is indeed intrinsic to the system itself.
>
> I think I would still say it didn't have a purpose. If I get your
> meaning right.
>
> Will

Yes, that's what I would say (now). Here's the clearest way I can put it:
purpose is a property of the observer - we interpret purpose in an observed
system, and different observers can have different interpretations. However,
we can sometimes talk about an observed system *as if* it had an objective
purpose, but only to the extent that we can relate that purpose to the
system's observed goals and behavior (which, ultimately, are also
interpreted).

Which is another way of showing that when we examine concepts like goals, 
purpose, and behavior, we ultimately come back to the fact that these are 
mental constructions. They are our maps, not the territory. 

Terren


  




Re: [agi] Patterns and Automata

2008-07-17 Thread Abram Demski
John,
What kind of automata? Finite-state automata? Pushdown? Turing
machines? Does CA mean cellular automata?
--Abram

On Wed, Jul 16, 2008 at 5:32 PM, John G. Rose <[EMAIL PROTECTED]> wrote:
>> From: Pei Wang [mailto:[EMAIL PROTECTED]
>> On Mon, Jul 7, 2008 at 12:49 AM, John G. Rose <[EMAIL PROTECTED]>
>> wrote:
>> >
>> > In pattern recognition, are some patterns not expressible with
>> > automata?
>>
>> I'd rather say "not easily/naturally expressible". Automata are not a
>> popular technique in pattern recognition, compared to, say, NN. You
>> may want to check out textbooks on PR, such as
>> http://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=pd_bbs_sr_2?ie=UTF8&s=books&qid=1215382348&sr=8-2
>>
>> > The reason I ask is that I am trying to read sensory input using
>> "automata
>> > recognition". I hear a lot of discussion on pattern recognition and am
>> > wondering if pattern recognition is the same as automata recognition.
>>
>> Currently "pattern recognition" is a much more general category than
>> "automata recognition".
>>
>
>
> I am thinking of bridging the gap somewhat with automata recognition + CA
> recognition. So automata as in automata, semiautomata, and automata w/o
> action, + CA recognition. But recognizing automata from data requires some
> techniques that pattern recognition uses. Automata are easy to work with,
> especially with visual data, as I'm trying to get to a general pattern
> recognition automata subset equivalent.
>
> I haven't heard of any profound general pattern recognition techniques, so
> I'm more comfortable attempting to derive my own functional model. I am
> suspicious of how existing pattern classification schemes work, as they are
> ultimately dependent on the mathematical systems used to describe them. And
> the space of all patterns compared to the space of all probable patterns in
> this universe...
>
> I'd be interested in books that study pattern processing across a complex
> systems layer... or, in this case, automata processing, just to get a
> perspective on any potential computational complexity advantages.
>
> John
>
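
As a purely illustrative aside on "recognizing automata from data": one
minimal sketch (Python; the helper names and the toy two-state example are
mine, not anything from John's actual approach) is to accumulate observed
(state, input, next-state) triples into a transition table and then test
whether later observations are consistent with what has been learned.

# Toy illustration: learn a deterministic transition table from observed
# triples, then check new observations against it.

def learn_transitions(observations):
    """observations: iterable of (state, symbol, next_state) triples."""
    delta = {}
    for state, symbol, nxt in observations:
        key = (state, symbol)
        if key in delta and delta[key] != nxt:
            raise ValueError(f"nondeterministic data at {key}")
        delta[key] = nxt
    return delta

def consistent(delta, observations):
    """True if every observed triple agrees with (or is absent from) the table."""
    return all(delta.get((s, sym), nxt) == nxt for s, sym, nxt in observations)

# Example: a two-state toggle driven by the symbol 't'.
train = [(0, 't', 1), (1, 't', 0), (0, 's', 0)]
delta = learn_transitions(train)
print(consistent(delta, [(1, 't', 0)]))  # True
print(consistent(delta, [(1, 't', 1)]))  # False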




Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-17 Thread Brad Paulsen

Mike,

If memory serves, this thread started out as a discussion about binding in an 
AGI context.  At some point, the terms "forward-chaining" and 
"backward-chaining" were brought up and, then, got used in a weird way (I 
thought) as the discussion turned to temporal dependencies and hierarchical 
logic constructs.  When it appeared no one else was going to clear up the 
ambiguities, I threw in my two cents.


I made a spectacularly good living in the late 1980s building expert system
engines and knowledge engineering front-ends, so I think I know a thing or two
about that "narrow AI" technology.  Funny thing, though: at that time, the trade
press was saying expert systems were no longer "real AI."  They worked so well
at what they did that the "mystery" wore off.  Ah, the price of success in AI. ;-)


What makes the algorithms used in expert system engines less than suitable for 
AGI is their static ("snapshot") nature and "crispness."  AGI really needs some 
form of dynamic programming, probabilistic (or fuzzy) rules (such as those built 
using Bayes nets or hidden Markov models), and runtime feedback.
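
To make that contrast concrete, here is a toy Python sketch (the rules,
strengths, and the way confidences are combined are invented for illustration,
not how any particular engine does it): a crisp forward-chaining step next to
a probabilistic variant whose conclusions carry confidences that runtime
feedback could later adjust.

# Toy contrast between crisp and probabilistic forward chaining.
# Crisp rules are (premises, conclusion); probabilistic rules add a strength,
# and derived facts carry a confidence.

CRISP_RULES = [({"bird"}, "flies"), ({"flies", "hungry"}, "hunts")]
PROB_RULES = [({"bird"}, "flies", 0.9), ({"flies", "hungry"}, "hunts", 0.6)]

def crisp_forward_chain(facts, rules):
    """Fire every rule whose premises are all present until nothing new appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def prob_forward_chain(facts, rules):
    """facts: dict of fact -> confidence. A conclusion's confidence is the
    product of its premises' confidences and the rule strength (one naive choice)."""
    facts = dict(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion, strength in rules:
            if all(p in facts for p in premises) and conclusion not in facts:
                conf = strength
                for p in premises:
                    conf *= facts[p]
                facts[conclusion] = conf
                changed = True
    return facts

print(crisp_forward_chain({"bird", "hungry"}, CRISP_RULES))
print(prob_forward_chain({"bird": 1.0, "hungry": 0.8}, PROB_RULES))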


Thanks for the kind words.

Cheers,

Brad

Mike Tintner wrote:
Brad: By definition, an expert system rule base contains the total sum of the
knowledge of a human expert(s) in a particular domain at a given point in
time.  When you use it, that's what you expect to get.  You don't expect the
system to modify the rule base at runtime.  If everything you need isn't in
the rule base, you need to talk to the knowledge engineer. I don't know of
any expert system that adds rules to its rule base (i.e., becomes “more
expert”) at runtime.  I'm not saying necessarily that this couldn't be done,
but I've never seen it.

In which case (thanks BTW for a v. helpful post), are we talking entirely
here about narrow AI? Sorry if I've missed this, but has anyone been
discussing how to provide a flexible, evolving set of rules for behaviour?
That's the crux of AGI, isn't it? Something at least as flexible as a
country's Constitution and Body of Laws. What ideas are on offer here?







