This looks like it could be an interesting thread.

However, I disagree with your distinction between ad hoc and post hoc.
The programmer may see things from the high-level "maze" view, but the
program itself typically deals with the "mess". So, I don't think
there is a real distinction to be made between post-hoc AI systems and
ad-hoc ones.

When we decide on the knowledge representation, we predefine the space
of solutions that the AI can find. This cannot be avoided. The space
can be made wider by restricting the knowledge representation less
(for example, allowing the AI to create arbitrary assembly-language
programs is less of a restriction than requiring it to learn
production-rule programs that get executed by some implementation of
the Rete algorithm). But obviously we run into hardware restrictions.
The broadest space to search is the space of all possible
configurations of 1s and 0s inside the computer we're using. An AI
method called "Gödel machines" is supposed to do that. William Pearson
is also interested in this.
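
To make the point concrete, here is a toy sketch (my own
illustration; the two-bit "opcode" restriction is made up, and the
real spaces involved are astronomically larger):

from itertools import product

N_BITS = 8

# The broadest space: every configuration of 1s and 0s in an 8-bit
# "computer" (the space a Gödel machine is supposed to search).
all_configs = list(product([0, 1], repeat=N_BITS))

# A restricted representation: only configurations whose first two
# bits match a hypothetical "rule" opcode, standing in for something
# like production-rule programs run by a Rete engine.
rule_programs = [c for c in all_configs if c[:2] == (1, 0)]

print(len(all_configs))    # 256 -- the unrestricted space
print(len(rule_programs))  # 64 -- the representation cut the space to 1/4

Choosing the representation is exactly this kind of cut, made before
learning ever starts.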

Since we're doing philosophy here, I'll take a philosophical stance.
Here are my assumptions.

0. I assume that there is some proper logic.

1. I assume probability theory and utility theory, acting upon
statements in this logic, are good descriptions of the ideal
decision-making process (if we do not need to worry about
computational resources).

2. I assume that there is some reasonable Bayesian prior over the
logic, and therefore (given #1) that Bayesian updating is the ideal
learning method (again given infinite computation).

This philosophy is not exactly the one you outlined as the AI/AGI
standard: there is no searching. #2 should ideally be carried out by
computing the probability of *all* models. With finite computational
resources, this is typically approximated by searching for
high-probability models, which works well because the low-probability
models contribute little to the decision-making process in most cases.
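
To make the approximation concrete, here is a minimal sketch (my own
toy example; the "models" are just coin-bias hypotheses, nothing
specific to an AGI system):

# Exact Bayesian prediction averages over *all* models; the
# finite-resource approximation keeps only the high-posterior ones.

biases = [i / 10 for i in range(1, 10)]          # candidate models
prior = {b: 1.0 / len(biases) for b in biases}   # a reasonable prior

def posterior(data, prior):
    """Bayesian updating: reweight each model by its likelihood."""
    post = {}
    for b, p in prior.items():
        likelihood = 1.0
        for flip in data:                        # 1 = heads, 0 = tails
            likelihood *= b if flip else (1 - b)
        post[b] = p * likelihood
    z = sum(post.values())
    return {b: w / z for b, w in post.items()}

def predict_heads(post, top_k=None):
    """P(next flip is heads), averaging over all models or only the
    top_k highest-posterior ones (renormalized)."""
    items = sorted(post.items(), key=lambda kv: -kv[1])
    if top_k is not None:
        items = items[:top_k]
        z = sum(w for _, w in items)
        items = [(b, w / z) for b, w in items]
    return sum(b * w for b, w in items)

post = posterior([1, 1, 0, 1, 1, 1], prior)
print(predict_heads(post))           # exact average over all models
print(predict_heads(post, top_k=3))  # search-style approximation

With more data the posterior concentrates further and the truncated
average converges to the exact one, which is why the search-based
approximation usually works.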

Now, to do some philosophy on my assumptions :).

Consideration of #0:

This is my chief concern. We must find the proper logic. This is very
close to your concern, because the search space is determined by the
logic we choose (given the above-mentioned approximation to #2). If
you think the search space is too restricted, then essentially you are
saying we need a broader logic. My requirements for the logic are:
A. The logic should be grounded.
B. The logic should be able to say any meaningful thing a human can say.
The two requirements are not jointly satisfied by any existing logic
(using my personal definition of grounded, at least). Set theory is
the broadest logic typically considered, so it comes closest to B, but
it (like most other logics considered strong enough to serve as a
foundation of mathematics) does not pass the test of A, because the
manipulation rules do not match up to the semantics. I explain this
at some length here:

http://groups.google.com/group/opencog/browse_thread/thread/28755f668e2d4267/10245c1d4b3984ca?lnk=gst&q=abramdemski#10245c1d4b3984ca

A more worrisome problem is that B may be contradictory in and of
itself. If (1) I can as a human meaningfully explain logical system X,
and (2) logical system X can meaningfully explain anything that humans
can, then (3) system X can meaningfully explain itself. Tarski's
undefinability theorem shows that any such system (under some
seemingly reasonable assumptions) can express the concept "This
concept is false", and is therefore (again under some seemingly
reasonable assumptions) contradictory. So, if we accept those
"seemingly reasonable assumptions", no logic satisfying B exists.

But this implies that AI is impossible: a human-level AI would have to
reason in some formal system satisfying B, since it must be able to
say anything meaningful a human can say. So, some of the seemingly
reasonable assumptions need to be dismissed. (But I don't know which
ones.)

Consideration of #2:

Assumption #2 is that there exists some reasonable prior probability
distribution that we can use for learning. A now-common way of
choosing this prior is the minimum description length principle, which
tells us that shorter theories are more probable.
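
As a toy version of that prior (my own illustration, with bit-strings
standing in for theory descriptions):

# Minimum-description-length prior: weight each theory by
# 2**(-description length), then normalize.

theories = ["01", "010110", "010110100101"]   # stand-in descriptions
weights = {t: 2.0 ** -len(t) for t in theories}
z = sum(weights.values())
mdl_prior = {t: w / z for t, w in weights.items()}

for t, p in mdl_prior.items():
    print(len(t), round(p, 6))   # shorter descriptions get more mass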

The following argument was sent to me in a private email by Wei Dai,
and I think it is very revealing:

"I did suggest a prior based on set theory, but then I realized that
it doesn't really solve the entire problem. The real problem seems to
be that if we formalize induction as Bayesian sequence prediction with
a well-defined prior, we can immediately produce a sequence that an
ideal predictor should be able to predict, but this one doesn't, no
matter what the prior is. Specifically, the sequence is the "least
expected" sequence of the predictor. We generate each symbol in this
sequence by feeding the previous symbols to the predictor and then
pick the next symbol as the one that it predicts with the smallest
probability. (Pick the lexicographically first symbol if more than one
has the smallest probability.)

This least expected sequence has a simple description, and therefore
should not be the least expected, right? Why should it not be more
expected than a completely random sequence, for example?

Do you think your approach can avoid this problem?"

Again, if the argument is right, then AI is impossible (or, as Wei Dai
put it, induction cannot be formalized). So, I again conclude that
some assumption needs to be dismissed. In this case the most obvious
candidate is the assumption that the prior should be based on minimum
description length. At the time I thought that was the one to drop,
but I am not sure at the moment.
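
To see how little the construction demands, here is a minimal sketch
of it (my own; the frequency-counting predictor is just a stand-in --
any predictor that outputs next-symbol probabilities gets diagonalized
the same way):

# Wei Dai's "least expected" sequence against a stand-in predictor.

def predictor(history):
    """Stand-in predictor over the alphabet {'0', '1'}: a frequency
    counter with add-one smoothing."""
    ones = history.count('1') + 1
    zeros = history.count('0') + 1
    total = ones + zeros
    return {'0': zeros / total, '1': ones / total}

def least_expected_sequence(predict, length):
    """Feed the predictor its own history and always emit the symbol
    it predicts with the smallest probability (lexicographically
    first on ties)."""
    history = ""
    for _ in range(length):
        probs = predict(history)
        worst = min(sorted(probs), key=lambda s: probs[s])
        history += worst
    return history

print(least_expected_sequence(predictor, 20))

The generator is itself a short program, so a description-length prior
says its output should be fairly probable; yet, by construction, the
predictor assigns that output the smallest probability it can at every
step. That tension is the whole argument.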

So, any ideas how to resolve these two problems?

--Abram



PS-- I don't want to leave the quote from Wei Dai completely out of
context, so here is Wei Dai's full argument:

http://groups.google.com/group/everything-list/browse_frm/thread/c7442c13ff1396ec/804e134c70d4a203


On Wed, Aug 13, 2008 at 10:15 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> THE POINT OF PHILOSOPHY: There seemed to be some confusion re this - the
> main point of philosophy is that it makes us aware of the frameworks that
> are brought to bear on any subject, from sci to tech to business to arts -
> and therefore the limitations of those frameworks. Crudely, it says: hey
> you're looking in 2D, you could be looking in 3D or nD.
>
> Classic example: Kuhn. Hey, he said, we've thought science discovers bodies
> feature-by-feature, with a steady-accumulation-of-facts. Actually those
> studies are largely governed by paradigms [or frameworks] of bodies, which
> heavily determine what features we even look for in the first place. A
> beautiful piece of philosophical analysis.
>
> AGI: PROBLEM-SOLVING VS LEARNING.
>
> I have difficulties with AGI-ers, because my philosophical approach to AGI
> is - start with the end-problems that an AGI must solve, and how they
> differ from AI. No one, though, is interested in discussing them - to a great
> extent, perhaps, because the general discussion of such problem distinctions
> throughout AI's history (and through psychology's and philosophy's history)
> has been pretty poor.
>
> AGI-ers, it seems to me, focus on learning - on how AGIs must *learn* to
> solve problems. The attitude is: if we can just develop a good way for
> AGIs to learn here, then they can learn to solve any problem, and gradually
> their intelligence will just take off (hence superAGI). And there is a
> great deal of learning theory in AI, and detailed analysis of different
> modes of learning, that is logic- and maths-based. So AGI-ers are more
> comfortable with this approach.
>
> PHILOSOPHY OF LEARNING
>
> However there is relatively little broad-based philosophy of learning. Let's
> do some.
>
> Very broadly, the basic framework, it seems to me, that AGI imposes on
> learning to solve problems is:
>
> 1) define a *set of options* for solving a problem, and attach, if you
> can, certain probabilities to them
>
> 2) test those options, and carry the best, if any, forward
>
> 3) find a further set of options from the problem environment, and test
> those, updating your probabilities and also perhaps your basic rules for
> applying them, as you go
>
> And, basically, just keep going like that, grinding your way to a solution,
> and adapting your program.
>
> What separates AI from AGI is that in the former:
>
> * the set of options [or problem space] is well-defined [as, say, for how a
> program can play chess] and the environment is highly accessible. AGI-ers
> recognize their world is much more complicated and not so clearly defined,
> and full of *uncertainty*.
>
> But the common philosophy of both AI and AGI and programming, period, it
> seems to me, is: test a set of options.
>
> THE $1M QUESTION with both approaches is: *how do you define your set of
> options*? That's the question I'd like you to try and answer. Let's make it
> more concrete.
>
> a) Defining A Set of Actions? Take AGI agents, like Ben's, in virtual
> worlds. Such agents must learn to perform physical actions and move about
> their world. Ben's had to learn how to move to a ball and pick it up.
>
> So how do you define the set of options here - the set of
> actions/trajectories-from-A-to-B that an agent must test? For, say, moving
> to, or picking up/hitting a ball. Ben's tried a load - how were they
> defined? And by whom? The AGI programmer or the agent?
>
> b) Defining A Set of Associations? Essentially, a great deal of formal
> problem-solving comes down to working out that A is associated with B (if
> C, D, E, and however many conditions apply) - whether A "means," "causes,"
> or "contains" B, etc.
>
> So basically you go out and test a set of associations, involving A and B
> etc, to solve the problem. If you're translating or defining language, you
> go and test a whole set of statements involving the relevant words, say "He
> jumped over the limit" to know what it means.
>
> So, again, how do you define the set of options here - the set of
> associations to be tested, e.g. the set of texts to be used on Google, say,
> for reference for your translation?
>
> c) What's The Total Possible Set of Options [Actions/Associations]? How
> can you work out the *total* possible set of options to be tested (as
> opposed to the set you initially choose)? Is there one with any AGI problem?
>
> Can the set of options be definitively defined at all? Is it infinite, say,
> for that set of trajectories, or somehow limited? (Is there a definitive
> or guaranteed way to learn language?)
>
> d) How Can You Ensure the Set of Options is not arbitrary? That you won't
> entirely miss out the crucial options no matter how many more you add? Is
> defining a set of options an art not a science - the art of programming,
> pace Matt?
>
> POST HOC VS AD HOC APPROACHES TO LEARNING: It seems to me there should be a
> further condition on how you define your set of options.
>
> Basically, IMO, AGI learns to solve problems, and AI solves them, *post
> hoc* - AFTER the problem has already been solved/learned.
>
> The perspective of both on developing a program for
> problem-solving/learning is this:
>
> http://www.danradcliffe.com/12days2005/12days2005_maze_solution.jpg
>
> you work from the end, with the luxury of a grand overview, after sets of
> options and solutions have already been arrived at, and develop your
> program from there.
>
> But in real life, general intelligences such as humans and animals have to
> solve most problems and acquire most skills AD HOC, starting from an
> extremely limited view of them:
>
> http://graphics.stanford.edu/~merrie/Europe/photos/inside%20the%20maze%202.jpg
>
> where you DON'T know what you're getting into, and you can't be sure what
> kind of maze this is, or whether it's a proper maze at all. That's how YOU
> learn to do most things in your life. How can you develop a set of options
> from such a position?
>
> MAZES VS MESSES. Another way of phrasing the question "how do you define
> the set of options?" is:
>
> is the set of options along with the problem a maze [clearly definable, even
> if only in stages] or a mess:
>
> http://www.leninimports.com/jackson_pollock_gallery_12.jpg
>
> [where not a lot is definable]?
>
> Testing a set of options, it seems to me, is the essence of AI/AGI so far.
> It's worth taking time to think about. Philosophically.

