Whatever inventive project you engage in, you *have* to define the problem - the effect-to-be-achieved - in order to solve it. And it must be defined in a practical way - with respect to AGI, that means the kind of specific problems to be solved, as I explained.

Saying "I want to achieve vision" is an impossibly high-level, non-practical set of effects - essentially no different from saying "I want to make my machine *intelligent*."

Yes, there are an infinity of possible O.D.'s - definitions of effect - here, the kinds of AGI/creative problems you might set your machine to solve. And inevitably an inventor's definition will keep changing.

But, Alan, if you don't practically define the problem, you don't get anywhere. And you have just seen from Matt's and Tim's comments on Ben's lack of testing, goals and criteria (mirrored, actually, by Richard L. in comments) that the tendency to vagueness (and going nowhere) is rife in this field.

No O.D., and *you* are the one who suffers and wastes precious time.



-----Original Message----- From: Alan Grimes
Sent: Thursday, April 11, 2013 6:24 AM
To: AGI
Subject: Re: [agi] The Two Prerequisites to start an AGI project

=\

We're talking about AGI, not classical engineering, so there's no
defined "problem" as such, but rather an open-ended set of things we
expect AGI to be able to do and things we wish AGI might be able to do.
For example, I could say that I want AGI to be my sexy cyborg GF... but
then we are talking about a problem that can only be described in poetry
(an art which I should not even attempt).

Therefore we use the terminology adopted by neuroscience textbooks and
talk in terms of capabilities. For example, we say the AI should have
the faculty of vision and should be able to stabilize vision through the
use of accelerometer feedback (which is what the vestibular system does,
and why we are capable of getting dizzy). That's a capability. So we
do our best to list these capabilities...

Once we have capabilities we can then specify general parameters for
responses/personality, etc... My previous post was an attempt to outline
a system that should develop most of the capabilities we care about.


Mike Tintner wrote:
Alan,

Re 1) you have to state **specific problems.** Whether it's

a) 22 + 22 = ? [narrow AI]
Make a number collage from any numbers given to you [AGI]

b) [for robots] Walk through a factory according to this itinerary [narrow AI]
Walk across any simple rock/grass field presented to you, without an itinerary/route map or GPS [AGI]

etc

You should explain why these are typical AGI problems (which I can do, but won't now) - and why their solution will lead to the solution of further diverse kinds of AGI problems.

Ben *has* presented a paper with a long list of specific problems, drawn from child psychology, but did not explain why any of them, if any, were AGI problems. Peter Voss has also presented specific problems in the past - something to do with a maze, I think - but I can't remember enough to comment on their AGI-ness.

You are illustrating the point of my proposal - most AGI-ers are deeply confused about what both an operational definition and an AGI problem are.

Steve pointed out the error of a similarly vague definition very recently - and, I think, gave me the idea for "effective mechanism".

Re 2) Having presented no problems, you also, inevitably, present no effective mechanism. There is no way to see why your proposals will solve any specific problem.

However, I take your other point - yes, you can hold to a higher standard of "proof of concept": some simple model that works to a minimal extent. But I would suggest that in the first instance it is quite enough to see an "idea on the back of an envelope". By all means have a second, higher stage as well. But start simple.

P.S. Ask others for their comments on whether you have an O.D./E.M.



-----Original Message----- From: Alan Grimes
Sent: Wednesday, April 10, 2013 7:20 PM
To: AGI
Subject: Re: [agi] The Two Prerequisites to start an AGI project

Mike Tintner wrote:
Ben:
If scientists were banned from proceeding based on intuition, until
they had convinced
skeptics of their methodology and ideas, nearly all science would halt...
The two obvious prerequisites for starting - getting serious about - any inventive project are 1) an **operational definition**: you must be able to explain what your machine will do - in this case, what AGI problems it will solve (and how it will diversify into solving more AGI problems)

An AGI system is a system that can detect interrelationships in the
environment. It perceives the environment by constructing an internal
representation of it - resolving abstractions of previously detected
relationships and refining that perception using feedback from the
sensory modalities. An AGI system must also be able to construct motor
actions, based on a desired change in the state of the environment,
through the resolution of abstractions related to motor control. The
system can achieve generality by omitting all constraints on the types
of things that can be abstracted.
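The definition above reads as a perceive-abstract-act loop. A minimal toy schematic of that loop (my illustration only - the class, method names, and data structures are all hypothetical, not drawn from any actual design):

```python
class AbstractionAGI:
    """Toy sketch of the loop described above: detect relationships,
    build an internal representation from prior abstractions, refine it
    with sensory feedback, and derive actions from a desired state."""

    def __init__(self):
        self.abstractions = []    # previously detected relationships
        self.representation = {}  # internal model of the environment

    def perceive(self, sensory_input):
        # Resolve prior abstractions into a candidate representation...
        for a in self.abstractions:
            self.representation[a] = a in sensory_input
        # ...then refine with feedback: record anything not yet abstracted.
        for feature in sensory_input:
            if feature not in self.abstractions:
                self.abstractions.append(feature)

    def act(self, desired_state):
        # Construct actions by resolving abstractions against the goal:
        # act on every feature whose desired value differs from the model.
        return [f"adjust:{k}" for k, present in self.representation.items()
                if desired_state.get(k) != present]
```

Nothing here constrains what can be abstracted, which is the generality claim; everything else (feature matching, the action format) is placeholder.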

2) a **proof of concept** - you must be able to give a practical reason why your project will work - in this case, how it will solve AGI problems.

My standard of proof is quite a bit higher, requiring an actual
technological artifact that exhibits the claimed property convincingly
on at least a toy problem.

My system will work because the number of abstractions in the system
grows with the logarithm of the input size, the worst case being complex
but structured input. Assuming the search problem is algorithmically
solvable, search will also be logarithmic (so finding something in
memory would take the logarithm of the logarithm of the input size).
Furthermore, the P-time of the search problem will be constant, as it is
in the brain. A system along these lines, completely divorced from the
legacy architecture (NO UPLOADS!!), can achieve the theoretical maximum
efficiency within only a few product generations.
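The complexity claim can be made concrete with a back-of-the-envelope sketch (my numbers, not the author's; the function names and the choice of base-2 logarithms are assumptions):

```python
import math

def num_abstractions(input_size):
    """Claimed growth: abstraction count ~ log of the input size."""
    return max(1, math.ceil(math.log2(input_size)))

def lookup_steps(input_size):
    """Claimed lookup cost: a logarithmic search over those abstractions,
    i.e. roughly log(log(input_size)) steps."""
    return max(1, math.ceil(math.log2(num_abstractions(input_size))))

# e.g. input of 1,000,000 -> 20 abstractions, ~5 lookup steps
for n in (1_000, 1_000_000, 1_000_000_000):
    print(n, num_abstractions(n), lookup_steps(n))
```

Under these assumptions memory lookup stays nearly flat as input grows a thousandfold, which is the appeal of the claim; whether real abstraction counts actually grow logarithmically is exactly what a toy-problem proof of concept would need to show.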

So I need a robotics lab. I need about $35,000 to build it,
but I have no job. =(

--
NOTICE: NEW E-MAIL ADDRESS, SEE ABOVE
Powers are not rights.




-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/6952829-59a2eca5
Modify Your Subscription: https://www.listbox.com/member/?&; Powered by Listbox: http://www.listbox.com


