Alan,

There is a particular religious belief that it may be possible to build a
super-intelligent machine that will solve civilization's problems. I have
seen NOTHING to support either that it is possible to build such a machine,
or that if it existed, it could do significantly better than talented
people.

I have previously posted about how it is possible to propel the IQs of some
people into the ether. Sure, these people are REALLY intelligent, but how
can simple intelligence ever predict the operation of "broken computer"
mentalities like those a majority of the population has?

There is an irrational belief that MUCH more intelligence would somehow
punch through the present barriers. Given the limited bandwidth of
observability, I see NO reason to believe this.

AGIers would bet the future of the world on a wish with nothing to support
it, rather than tackle the problems at hand. Those SAME problems now block
significant funding of AGI.

I am not interested in building pyramids to achieve an afterlife or reach
God. Let's stick to science.

Continuing...
On Wed, Apr 10, 2013 at 10:24 PM, Alan Grimes <[email protected]> wrote:

>
> We're talking about AGI, not classical engineering.


It's not science, because AGI has rejected SO much of science, e.g. the
scientific method.
It's not engineering, so it can never be made.
It's a religion.

Steve
=================

> so there's no defined "problem" as such, but rather an open-ended set of
> things we expect AGI to be able to do and things we wish AGI might be able
> to do. Like I could say that I want AGI to be my sexy cyborg GF... But then
> we are talking about a problem that can only be described in poetry (an art
> which I should not even attempt).
>
> Therefore we use the terminology adopted by neuroscience textbooks,
> to talk in terms of capabilities. For example we say the AI should have the
> faculty of vision and should be able to stabilize vision through the use of
> accelerometer feedback. (which is what the vestibular system does and why
> we are capable of getting dizzy). -- that's a capability. So we do our best
> to list these capabilities...
>
> Once we have capabilities we can then specify general parameters for
> responses/personality, etc... My previous post was an attempt to outline a
> system that should develop most of the capabilities we care about.
>
>
>
> Mike Tintner wrote:
>
>> Alan,
>>
>> Re 1) you have to state **specific problems.** Whether it's
>>
>> a) 22 + 22 = ? [narrow AI]
>> Make a number collage from any numbers given to you [AGI]
>>
>> b) [for robots] Walk through a factory according to this itinerary
>> [narrow AI]
>> Walk across any simple rock/grass field presented to you without an
>> itinerary/route map or GPS [AGI]
>>
>> etc
>>
>> You should explain why these are typical AGI problems (which I can do, but
>> won't now) - and why their solution will lead to the solution of further
>> diverse kinds of AGI problems.
>>
>> Ben *has* presented a paper with a long list of specific problems, drawn
>> from child psychology, but did not explain why any, if any, were AGI. Peter
>> Voss has also presented specific problems in the past - something to do
>> with a maze, I think - but I can't remember enough to comment on their
>> AGI-ness.
>>
>> You are illustrating the point of my proposal - most AGI-ers are deeply
>> confused about both what an operational definition and an AGI problem are.
>>
>> Steve pointed out the error of a similarly vague definition very recently
>> - and, I think, gave me the idea for "effective mechanism".
>>
>> Re 2) Having presented no problems, you inevitably also present no
>> effective mechanism - no reason to see why your proposals will solve any
>> specific problem.
>>
>> However, I take your other point - yes, you can have higher standards of
>> "proof of concept" - some simple model that works to a minimal extent. But
>> I would suggest in the first instance, it is quite enough to see an "idea
>> on the back of an envelope". By all means have a second, higher stage as
>> well. But start simple.
>>
>> P.S. Ask others for their comments on whether you have an O.D./E.M.
>>
>>
>>
>> -----Original Message----- From: Alan Grimes
>> Sent: Wednesday, April 10, 2013 7:20 PM
>> To: AGI
>> Subject: Re: [agi] The Two Prerequisites to start an AGI project
>>
>> Mike Tintner wrote:
>>
>>> Ben:
>>> If scientists were banned from proceeding based on intuition, until
>>> they had convinced
>>> skeptics of their methodology and ideas, nearly all science would halt...
>>> The two obvious prerequisites for starting – getting serious about – any
>>> inventive project are
>>> 1) an **operational definition**: you must be able to explain what your
>>> machine will do - in this case: what AGI problems will it solve (and how
>>> will it diversify into solving more AGI problems)
>>>
>>
>> An AGI system is a system that can detect interrelationships in its
>> environment, and that perceives the environment by constructing an
>> internal representation of it - resolving abstractions of previously
>> detected relationships and refining that perception using feedback from
>> the sensory modalities. An AGI system must also be able to construct
>> motor actions that produce a desired change in the state of the
>> environment, through the resolution of abstractions related to motor
>> control. The system achieves generality by omitting all constraints on
>> the types of things that can be abstracted.
>>
>>  2) a **proof of concept** – you must be able to give a practical reason
>>> why your project will work – in this case how your project will solve AGI
>>> problems.
>>>
>>
>> My standard of proof is quite a bit higher, requiring an actual
>> technological artifact that exhibits the claimed property convincingly
>> on at least a toy problem.
>>
>> My system will work because the number of abstractions in the system
>> grows with the logarithm of the input size, the worst case being complex
>> but structured input. Assuming the search problem is algorithmically
>> solvable, search will also be logarithmic (so finding something in
>> memory would take time on the order of the logarithm of the logarithm of
>> the input size). Furthermore, the P-time of the search problem will be
>> constant, as it is in the brain. A system along these lines, completely
>> divorced from the legacy architecture (NO UPLOADS!!), can achieve the
>> theoretical maximum efficiency within only a few product generations.
>>
>> So therefore I need a robotics lab. I need about $35,000 to build it;
>> but I have no job. =(
>>
>> --
>> NOTICE: NEW E-MAIL ADDRESS, SEE ABOVE
>> Powers are not rights.
>>
>
> --
> NOTICE: NEW E-MAIL ADDRESS, SEE ABOVE
>
> Powers are not rights.
>
>
>
>



-- 
Full employment can be had with the stroke of a pen. Simply institute a
six-hour workday. That will easily create enough new jobs to bring back full
employment.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
