Jim,

You are very confused here.

AGI is about solving problems – certain kinds of problems. You have to identify 
those problems and *then* see what methods of problem-solving are applicable. 
You don’t start the other way round – which is what everyone in AGI is doing. 
You don’t start with methods.

AGI is in fact always – always – about solving **creative** problems – mainly 
low-level, everyday, personal forms of creativity rather than the high-level, 
extraordinary, cultural forms.

You want to perceive the real world as living creatures do? You are always 
being confronted with NEW objects (not familiar patterns – new objects) – and 
therefore a creative problem. Google image “photo scenes” – each one is new and 
different, none are formulaically like the last. And that demands NEW actions, 
NEW thoughts in response.

You want to navigate the real world as living creatures do – walk down a 
street? You are always confronted with NEW objects – and therefore a creative 
situation, however relatively simple. You can never predict what is in the next 
street. Each real-world field is new and different from the last.

You want to have a conversation, or read a text in the real world as humans do? 
You cannot predict what your interlocutor will say, or what you yourself will 
think in reaction to them, or what the next text – the next news article – 
will contain. Every new text is unpredictable.

You want to explain something in the real world as living creatures do? Why 
did that object behind you fall down? What is Obama’s strategy on 
Israel/Iran? You are always confronted with new, unpredictable situations. Each 
financial, economic, political, psychological situation is new and different – 
none is exactly like the last. Obama is not like Bush, who is not like Clinton. 
There are similarities but no formulaic repetitions.

All real-world subjects – science, art, history, technology, commerce – are 
about dealing with the unknown and unpredictable. A great many familiar 
objects may often be there – but that’s not what throws narrow AI programs – 
it’s always the new and unpredictable.

Look at your post to me – you were confronted with a new and different 
situation – this time you decided to try and argue directly with Tintner – and 
your post is a creative affair – it isn’t formulaically like anything you’ve 
written before. It isn’t that different, but it doesn’t use a stock set of 
ideas, words, listings, etc.

There is no frame or space or set of NEW objects. You cannot produce a list of 
the objects you are going to deal with in the next visual scene, next 
field-to-navigate, next psychological situation, next financial or political 
crisis.

IF YOU CANNOT PREDICT THE FIELD OF OBJECTS YOUR PROBLEM IS ABOUT, YOU CANNOT 
PRODUCE A FRAME/SET OF THOSE OBJECTS. NOR CAN YOU PRODUCE A FRAME OF ACTIONS TO 
PERFORM UPON THEM.

NEVER

EVER

And if you don’t have a frame of objects/actions – you don’t have complexity. 
It never arises. Your brain certainly doesn’t solve these problems like a 
narrow AI program, by whizzing through some zillions of possible permutations of 
some neatly arranged set of possibilities. You will never write any such program 
to deal with unpredictable situations.

Yes, I have expertise in this matter – I know that you are not going to give 
one example of a real-world problem in any subject in science etc. as listed 
above, or anywhere, period; not one example of a real-world perceptual problem, 
not one example of a real-world navigational or manipulative problem, etc. etc. 
– where complexity applies.

Still less are you going to show how complexity applies to high-level cultural 
creativity – *produce a new approach to AGI, *produce a new kind of network for 
inventors, *produce a form of 3D printing that can deal with multiple 
materials, etc. etc.

I know that you have never thought about any real-world problem in relation to 
AGI – just logical/math/computational problems – which are all about artificial 
sets/frames etc. But I have thought about these problems.

Save yourself wasted years and just think about these two real-world problems:

1. Pack the next person’s suitcase [someone will ask you to pack their suitcase 
– but you don’t know who, or what clothes].

2. Read and synopsize the next article on the Boston Marathon – you don’t know 
which article it will be; we’re just going to give you whichever article we 
like.

Both are easy for you as a real world AGI. You can manipulate and pack almost 
any objects, read almost any article about a given subject.

Now explain to me how complexity can enter into solving those problems. How can 
you define 1) the items you will have to pack? 2) the nature of the details 
about the Boston incident, and every conceivable kind of word, comment, 
commentator etc that may be applied to them in news articles?

You can’t. So no complexity. The idea of complexity being relevant to real 
world problems is a *joke*.

You are a conventional person – you assume that the methods of narrow AI 
will work for AGI. They won’t. No frames, no complexity in AGI. (And re-read 
Deutsch.)

P.S. In the real world, there is no such thing as a set of explanations for a 
given effect (or effects). Why/how did the Boston Marathon bombing happen? There 
is a never-ending web of causes and factors in that – as there is for every 
real-world effect/action/happening. The idea that there is a limited set is an 
artificial device – useful but not real or true.


From: Jim Bromer 
Sent: Tuesday, April 16, 2013 8:42 PM
To: AGI 
Subject: [agi] Mike's Imagined Expertise. Was Summary of My Current Theory For 
an AGI Program.

Mike, I am not sure why I want to discuss this with you.
 
I have made a breakthrough in a simple computational method which is directly 
related to AGI.  However, it may not be enough to overcome the AGI complexity 
problem.  But, for just a moment, suppose that it did.  Suppose that I was able 
to create a working demo of an AGI program which showed some genuine ability to 
learn new things.  Would that mean that you were right and I was wrong?  If I 
expressed my viewpoint that I had found a way to overcome the complexity 
problem, and if my method actually worked, then wouldn't my interpretation of 
the situation have to be closer to the truth than yours?  Yes, this is just an 
imaginary situation, but it is different from your imaginary situation.
 
But the question is: if I was able to devise a system to overcome what I 
considered to be the AGI complexity problem, would that mean that:
1. There never was a complexity problem.
2. There was a complexity problem that needed to be overcome and once someone 
figured it out AGI became feasible.
3. Your point of view was just as good as mine because we were both right in a 
way.
4. Neither point of view was right.
5. None of the above.
6. All of the above.
7. Whatever I decide must be right and you can work it out from there.
 
Jim Bromer
 


--------------------------------------------------------------------------------
From: [email protected]
To: [email protected]
Subject: Re: [agi] Re: Summary of My Current Theory For an AGI Program.
Date: Tue, 16 Apr 2013 17:17:22 +0100


It’s not prejudging. And it’s not particularly directed at you.

You are simply following an intellectually mad, widespread, GOFAI notion about 
the potential productivity of pure language/text analysis – a notion that has 
already demonstrably wasted God knows how many years of would-be AGI-ers’ 
lives. Look at the idiocy (and incorrigibility) of Lenat’s enterprise.

Similarly, you are following an equally old-fashioned and mad notion that the 
complexity which has bedevilled narrow AI, has something to do with AGI – of 
which you also cannot produce a single problem example. No examples, no 
evidence = mucho waste of life.

From: Jim Bromer 
Sent: Tuesday, April 16, 2013 3:22 PM
To: AGI 
Subject: RE: [agi] Re: Summary of My Current Theory For an AGI Program.

Mike, 
I am only replying now because I want to see if the formatting of Hotmail.com 
is compatible with listbox.  I would be happy to talk to you about this after I 
finish my summary if you could avoid prejudging what I might have to say.  This 
kind of remark, "Give one example of the kind of productive text analysis you 
(or anyone else) mean[s] – and you’ll find it is impossible and save yourself 
years of life," is really a blatant example of prejudging.  I feel that 
personal remarks interfere with what is being said even though they could be 
useful if used sparingly.  Prejudging what someone is going to say is a kind of 
personal remark. 


---------- Forwarded message ----------
From: Mike Tintner <[email protected]>
Date: Tue, Apr 16, 2013 at 9:38 AM
Subject: Re: [agi] Re: Summary of My Current Theory For an AGI Program.
To: AGI <[email protected]>



So we’re talking about text analysis?  (That didn’t hurt, did it?)

Give one example of the kind of productive text analysis you (or anyone else) 
mean[s] – and you’ll find it is impossible and save yourself years of life. And 
you could at least start a productive discussion here. [Note that Steve was 
just specific about his proposed project – and that produced a useful 
discussion].

Lots of people seem to have fantasies about a supposed AGI program that is 
going to become wise and ultimately rule the world through analysing the texts 
on the net. It’s total cobblers. As I’ve pointed out, there isn’t a program 
that can productively analyse the possible combinations of two or three words, 
let alone two sentences, let alone the contents of one or two texts. 
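As a rough illustration of the combinatorial growth being invoked here (the 
vocabulary size of 10,000 is an assumed figure for the sketch, not something 
from this thread):

```python
# Sketch of the combinatorial explosion in exhaustive word-combination
# analysis. Assumes a hypothetical working vocabulary of 10,000 words;
# the number of ordered k-word sequences is V ** k.
V = 10_000

for k in (2, 3):
    print(f"{k}-word sequences: {V ** k:,}")
# 2-word sequences: 100,000,000
# 3-word sequences: 1,000,000,000,000

# Two short sentences of ~10 words each drawn from that vocabulary
# already admit V ** 20 combinations - far beyond exhaustive analysis.
print(f"20-word combinations: {V ** 20:.3e}")
```

Even at a billion sequences examined per second, the three-word case alone 
would take on the order of a thousand seconds, and whole sentences are 
astronomically out of reach.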

The fantasies are all Chinese room fantasies about how a manipulator of 
meaningless words enclosed in a black box can become supremely wise about the 
outside world, without ever venturing outside.  Fantasies of real world wisdom 
without real world experience.

That’s how science became so relatively wise about the world, right? – by 
scientists staying inside their studies and playing with words and logic? Or 
did Francis Bacon first have to smash that fantasy ?



From: Jim Bromer 
Sent: Tuesday, April 16, 2013 2:08 PM
To: AGI 
Subject: Re: [agi] Re: Summary of My Current Theory For an AGI Program.

On Mon, Apr 15, 2013 at 11:29 AM, Alan Grimes <[email protected]> wrote:

  Mike Tintner wrote: 
    What’s your O.D.? What’s the end-product of your program? Drawings? 
Buildings? Text-readings? Wtf is it going to DO? Or is that too difficult for 
you to say?

  ... I'm getting sick of these jags you go off on. Last week it was "Well your 
AI doesn't implement true creativity; prove that it does!"

  This week you are ignoring the G in general AI. The word GENERAL in AI, like 
in computer science at large, means "virtually any." So it must be capable of 
dealing with virtually any problem in virtually any domain using virtually any 
method. Therefore it must be able to learn any abstraction of less than or 
equal to some reasonable complexity metric, and it must have the computational 
capabilities to optimize and apply those abstractions. 
  ...

 

Alan,
My text-based AGI program would be a limited kind of AGI program, but it would 
be a proof-of-concept thing.  If it worked, then it would be general enough to 
convert for different kinds of IO actions.  A program that could do some 
genuine learning and derive abstractions from text would be flexible enough to 
modify for conversion to image AGI and so on.
Jim Bromer

      AGI | Archives  | Modify Your Subscription   



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com

