> > Yes, I believe we have found
> >
> > * a relatively small subspace of the space of all "algorithms", which
> > displays a very wide variety of useful behaviors (we call this subspace
> > "zig-zag trees", they're a special kind of "combinator tree")
> >
> > * an efficient algorithm for searching this subspace (an improvement of
> > Pelikan and Goldberg's Bayesian Optimization Algorithm, enhanced to make
> > use of long-term memory via invocation of probabilistic term logic)
>
> I want to ask: is your class of algorithms guaranteed to terminate
> in a *bounded* time? If there is no such guarantee then things may get
> very complicated, bordering on the undecidable.

No guarantees -- merely "probably approximately correct."

That's the way intelligence is, IMO.

> OTOH, if it is time bounded (perhaps it contains no loops, no recursion
> or specific kinds of recursion only, etc),

It's completely general, but is probabilistically biased toward simplicity,
e.g. it's capable of general recursion but is strongly biased toward some
simple forms of recursion.
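To make the idea concrete, here's a minimal sketch (not actual Novamente code; all names are hypothetical) of one way such a bias can be realized: a tree sampler whose probability of expanding further decays geometrically with depth, so it can in principle generate arbitrarily deep recursion while strongly favoring small, simple trees.

```python
import random

# Hypothetical sketch: sample expression trees over a tiny grammar.
# The chance of recursing shrinks with depth, so the sampler remains
# fully general (any depth is possible) but is strongly biased toward
# shallow, simple forms.

LEAVES = ["x", "0", "1"]
OPS = ["+", "*"]

def sample_tree(depth=0, decay=0.5, rng=random):
    # Expansion probability decays geometrically with depth;
    # the hard cap at depth 50 is just a safety net for the demo.
    if rng.random() < decay ** (depth + 1) and depth < 50:
        op = rng.choice(OPS)
        return (op,
                sample_tree(depth + 1, decay, rng),
                sample_tree(depth + 1, decay, rng))
    return rng.choice(LEAVES)

def size(tree):
    # Count nodes: internal nodes are tuples, leaves are strings.
    if isinstance(tree, tuple):
        return 1 + size(tree[1]) + size(tree[2])
    return 1
```

Sampling many trees with these settings yields an expected size of only a few nodes, even though no fixed bound on depth is imposed.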

> Since the human retina contains roughly 1 million "pixels", *VISION*
> seems to require that we do simple operations on a lot of inputs, to
> repeatedly condense information. Therefore you probably don't want to
> do a lot of operations on your data. That's why algorithmic search
> may be inappropriate for vision (it's like multiplying a large number
> with another large number).

I agree that low-level perceptual processing should be handled differently
from cognition.  To apply our framework to low-level vision processing, I'd
restrict it in some special ways, gaining efficiency at the cost of
generality.
We are not currently working on vision processing, as I don't believe
humanlike sensation is necessary to achieve human-level (or greater)
cognition.

> The problem is when we have already processed the input space, then
> what does the resulting representation look like? Does it contain a
> relatively large number of concepts (like millions), or is it highly
> structured with few concepts on each level? We don't know now, but my
> model is assuming the first case. If it is the second case then your
> methods may be better. I suspect your approach is more suitable for
> things like theorem proving or other specific-domain problems...

Well, we are explicitly trying to create a general intelligence, not a
domain-specific "narrow AI" program.

We can deal with millions or billions of concepts.  For some purposes
we embed concepts in an n-dimensional space and use n-dimensional metric
structure and topology, effectively treating the space of concepts as a
continuum.  This is useful for example if you want to "mutate" a concept
into a similar one.
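A toy illustration of that last point (not actual system code; the concepts and coordinates are made up): if each concept is a point in an n-dimensional space, "mutating" a concept is just a small perturbation of its vector, and the result can be mapped back to the nearest known concept.

```python
import math
import random

# Illustrative sketch only: concepts as points in a 3-dimensional
# space. Mutation = small Gaussian perturbation of the coordinates;
# the perturbed point lands near similar concepts.

def distance(a, b):
    # Euclidean distance between two concept vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mutate(vec, sigma=0.1, rng=random):
    # Nudge each coordinate slightly: the concept drifts to a nearby
    # point in the continuum, i.e. toward a similar concept.
    return [x + rng.gauss(0.0, sigma) for x in vec]

def nearest(vec, concepts):
    # Map an arbitrary point back to the closest known concept.
    return min(concepts, key=lambda name: distance(vec, concepts[name]))

concepts = {
    "cat": [1.0, 0.0, 0.0],
    "dog": [0.9, 0.2, 0.0],
    "car": [0.0, 0.0, 1.0],
}
```

With a small perturbation, mutating "cat" lands near "cat" or its close neighbor "dog", never near the distant "car", which is the sense in which the metric structure makes similarity-preserving mutation cheap.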

-- Ben G