THE POINT OF PHILOSOPHY: There seemed to be some confusion about this - the main point of philosophy is that it makes us aware of the frameworks that are brought to bear on any subject, from science to technology to business to the arts - and therefore of the limitations of those frameworks. Crudely, it says: hey, you're looking in 2D, when you could be looking in 3D or nD.

Classic example: Kuhn. Hey, he said, we've thought science discovers the features of bodies one by one, through a steady accumulation of facts. Actually those studies are largely governed by paradigms [or frameworks] of bodies, which heavily determine what features we even look for in the first place. A beautiful piece of philosophical analysis.

AGI: PROBLEM-SOLVING VS LEARNING.

I have difficulties with AGI-ers, because my philosophical approach to AGI is: start with the end-problems that an AGI must solve, and how they differ from AI's. No one, though, is interested in discussing them - to a great extent, perhaps, because the general discussion of such problem distinctions throughout AI's history (and throughout psychology's and philosophy's) has been pretty poor.

AGI-ers, it seems to me, focus on learning - on how AGIs must *learn* to solve problems. The attitude is: if we can just develop a good way for AGIs to learn here, then they can learn to solve any problem, and gradually their intelligence will just take off (hence superAGI). And there is a great deal of learning theory in AI, and detailed analysis of different modes of learning, that is logic- and maths-based. So AGI-ers are more comfortable with this approach.

PHILOSOPHY OF LEARNING

However there is relatively little broad-based philosophy of learning. Let's do some.

Very broadly, the basic framework that AGI imposes on learning to solve problems, it seems to me, is:

1) define a *set of options* for solving a problem, and attach, if you can, probabilities to them

2) test those options, and carry the best, if any, forward

3) find a further set of options from the problem environment, and test those, updating your probabilities, and perhaps also your basic rules for applying them, as you go

And, basically, just keep going like that, grinding your way to a solution, and adapting your program.
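The three steps above can be sketched as a simple bandit-style learner - a minimal illustration, not anyone's actual AGI design. The option names, payoffs, trial count and exploration rate are all invented for the example:

```python
import random

def noisy_payoff(option):
    # Hypothetical environment: option "B" is secretly the best one.
    true_value = {"A": 0.2, "B": 0.8, "C": 0.5}[option]
    return true_value + random.uniform(-0.1, 0.1)

def learn(options, trials=500, epsilon=0.1, seed=0):
    random.seed(seed)
    # 1) define a set of options, and attach initial (flat) estimates
    estimates = {o: 0.0 for o in options}
    counts = {o: 0 for o in options}
    for _ in range(trials):
        # 2) test an option: mostly the current best, sometimes explore
        if random.random() < epsilon:
            choice = random.choice(options)
        else:
            choice = max(options, key=lambda o: estimates[o])
        reward = noisy_payoff(choice)
        # 3) update your estimates as you go, grinding toward a solution
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
    return max(options, key=lambda o: estimates[o])

best = learn(["A", "B", "C"])
```

Note how the whole thing only works because the set of options was handed to the learner up front - which is exactly the question raised below.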

What separates AI from AGI is that in the former:

* the set of options [or problem space] is well-defined [as, say, for how a program can play chess] and the environment is highly accessible. AGI-ers recognize their world is much more complicated, not so clearly defined, and full of *uncertainty*.

But the common philosophy of AI, AGI and programming, period, it seems to me, is: test a set of options.

THE $1M QUESTION with both approaches is: *how do you define your set of options?* That's the question I'd like you to try and answer. Let's make it more concrete.

a) Defining A Set of Actions. Take AGI agents, like Ben's, in virtual worlds. Such agents must learn to perform physical actions and move about their world. Ben's had to learn how to move to a ball and pick it up.

So how do you define the set of options here - the set of actions/trajectories-from-A-to-B that an agent must test - for, say, moving to, or picking up/hitting, a ball? Ben's tried a load - how were they defined? And by whom? The AGI programmer or the agent?
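One way to see how much the "set of trajectories" depends on whoever defines it: discretize the world as a grid (a programmer's choice, not the agent's) and count only the shortest step-by-step paths from A to B. The grid sizes below are illustrative assumptions; the counting itself is standard combinatorics:

```python
from math import comb

def monotone_paths(dx, dy):
    # Number of shortest grid paths from A to B moving only right/up:
    # choose which dx of the (dx + dy) steps are horizontal.
    return comb(dx + dy, dx)

# Even this heavily restricted option set explodes with grid size:
sizes = [monotone_paths(n, n) for n in (2, 5, 10, 20)]
```

A 2x2 grid already gives 6 paths, and a 20x20 grid over a hundred billion - and that's after someone has decided the agent moves on a grid, in fixed steps, never backtracking. Change any of those framing decisions and you get a different option set entirely.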

b) Defining A Set of Associations. Essentially, a great deal of formal problem-solving comes down to working out that A is associated with B (if C, D, E, and however many conditions apply) - whether A "means," "causes," or "contains" B, etc.

So basically you go out and test a set of associations involving A and B etc. to solve the problem. If you're translating or defining language, you go and test a whole set of statements involving the relevant words - say, "He jumped over the limit" - to work out what it means.

So, again, how do you define the set of options here - the set of associations to be tested, e.g. the set of texts to be used on Google, say, for reference for your translation?
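A toy version of "go out and test a set of associations": score candidate senses of an ambiguous phrase by counting cue words that co-occur with it in a reference corpus. The corpus, the cue lists and the phrase are all made-up assumptions here - which is the point, since the answer is only as good as whoever chose that set of texts:

```python
# A hand-picked "reference corpus" - in practice this would be the set
# of texts someone decided to pull from Google, say.
corpus = [
    "he jumped over the limit on his credit card last month",
    "the bank raised the limit after he jumped over it again",
    "she jumped over the fence at the limit of the field",
]

# Candidate senses, each with invented cue words.
senses = {
    "exceeded_allowance": {"credit", "bank", "card", "raised"},
    "physical_leap": {"fence", "field", "wall", "hedge"},
}

def best_sense(phrase, corpus, senses):
    # Test each association: count cue words co-occurring with the phrase.
    scores = {name: 0 for name in senses}
    for text in corpus:
        if phrase in text:
            words = set(text.split())
            for name, cues in senses.items():
                scores[name] += len(words & cues)
    return max(scores, key=lambda n: scores[n])

guess = best_sense("jumped over", corpus, senses)
```

Swap in a different corpus and the same procedure can return the opposite sense - the test is only ever as informative as the set of associations chosen for it.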

c) What's The Total Possible Set of Options [Actions/Associations]? How can you work out the *total* possible set of options to be tested (as opposed to the set you initially choose)? Is there one for any AGI problem?

Can the set of options be definitively defined at all? Is it infinite, say, for that set of trajectories, or somehow limited? (Is there a definitive or guaranteed way to learn a language?)

d) How Can You Ensure the Set of Options Is Not Arbitrary? That you won't entirely miss the crucial options, no matter how many more you add? Is defining a set of options an art, not a science - the art of programming, pace Matt?

POST HOC VS AD HOC APPROACHES TO LEARNING: It seems to me there should be a further condition on how you define your set of options.

Basically, IMO, AGI learns to solve problems, and AI solves them, *post hoc* - AFTER the problem has already been solved/learned.

The perspective of both on developing a program for problem-solving/learning is this:

http://www.danradcliffe.com/12days2005/12days2005_maze_solution.jpg

You work from the end, with the luxury of a grand overview, after sets of options and solutions have already been arrived at, and develop your program from there.

But in real life, general intelligences such as humans and animals have to solve most problems and acquire most skills AD HOC, starting from an extremely limited view of them:

http://graphics.stanford.edu/~merrie/Europe/photos/inside%20the%20maze%202.jpg

where you DON'T know what you're getting into, and you can't be sure what kind of maze this is, or whether it's a proper maze at all. That's how YOU learn to do most things in your life. How can you develop a set of options from such a position?

MAZES VS MESSES. Another way of phrasing the question "how do you define the set of options?" is:

is the set of options, along with the problem, a maze [clearly definable, even if only in stages] or a mess:

http://www.leninimports.com/jackson_pollock_gallery_12.jpg

[where not a lot is definable]?

Testing a set of options, it seems to me, is the essence of AI/AGI so far. It's worth taking time to think about. Philosophically.




-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/