Alan,

This effect applies to ALL areas of human endeavor. An interesting example:

Long ago I owned a small machine shop that did prototyping work. It seemed
impossible to find people who knew BOTH mechanical engineering AND had
extensive machining experience. The big challenge for journeyman machinists
is figuring out how to machine something as quickly as possible while
keeping the setup stable. Then I discovered an obscure and forgotten ASME
research article explaining that when the chip (shaving) is wider than 15
times its thickness, the tool becomes unstable. I did some quick
trigonometry and realized that this ratio was the same as the ratio between
the radius of the tip of the tool and the feed rate. Suddenly, I could
design setups as stable as, and often more stable than, a journeyman
machinist with many years of experience could, and I could teach the
technique in about the time it just took you to read this. This collapses
several years of machinist apprenticeship into a single paragraph!!!
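
To make that concrete, here is the rule reduced to a few lines of Python.
It's a back-of-envelope sketch under my reading of the geometry (the 15:1
threshold is from the article; the function names and the 0.8 mm example
insert are just illustration):

    # Chip stability rule of thumb: the tool chatters once the chip is
    # wider than about 15 times its thickness, and that ratio works out
    # to roughly nose radius / feed. Threshold per the ASME article;
    # everything else is my own shorthand.
    MAX_RATIO = 15.0  # chip width : chip thickness

    def min_stable_feed(nose_radius_mm):
        """Smallest feed (mm/rev) that keeps width/thickness <= 15."""
        return nose_radius_mm / MAX_RATIO

    def is_stable(nose_radius_mm, feed_mm_per_rev):
        return nose_radius_mm / feed_mm_per_rev <= MAX_RATIO

    print(min_stable_feed(0.8))  # -> 0.0533... mm/rev for a 0.8 mm nose
    print(is_stable(0.8, 0.10))  # -> True  (ratio 8)
    print(is_stable(0.8, 0.02))  # -> False (ratio 40: thin wide chip, chatter)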

I suspect that some principles like these for AGI are already in the
literature. One example is my past assertion (and the article explaining
why) that internal representations MUST be in dP/dt form to support
temporal learning. This would seem to automatically doom all efforts that
do not use dP/dt form. No one has yet challenged this assertion, yet
apparently no one has acted on it either. It seems mathematically obvious
to me that AGI absolutely can NOT advance until this particular point has
been addressed.
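
For anyone who hasn't read the article, here is roughly what I mean by
dP/dt form, in a discrete-time toy (my illustration here, not the
article's machinery):

    # Represent each signal by its rate of change rather than its
    # instantaneous value, so that temporal structure (trends, leads,
    # lags) is explicit in the representation itself. The discrete
    # difference below is a stand-in for the continuous dP/dt.
    def to_dp_dt(samples, dt=1.0):
        """Convert a value sequence P(t) into its discrete dP/dt sequence."""
        return [(b - a) / dt for a, b in zip(samples, samples[1:])]

    pressure = [1.0, 1.2, 1.5, 1.5, 1.1]
    print(to_dp_dt(pressure))  # -> [0.2, 0.3, 0.0, -0.4] (float rounding aside)
    # A temporal learner fed this form can correlate CHANGES across its
    # inputs directly, which a value-only representation keeps hidden.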

Of course that isn't sufficient to make an AGI, as I suspect that a dozen
or more such "simple" principles also stand in the way. It seems obvious to
me that no one is ever going to "fiddle" their way around the dP/dt issue,
and I suspect that other issues require slightly radical departures (dP/dt
isn't very radical), so most of the principles you allude to must be found
BEFORE the first AGI can be constructed.

There is a similar but weaker argument for logarithmic representation:
weighted averaging of log-represented numbers (the sort of thing overly
simplified synapses might do) means something quite different, and more
reasonable, than averaging linearly-represented numbers.
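
A toy comparison makes the difference plain (illustrative numbers only):

    import math

    # A weighted average of logarithmically-represented numbers is a
    # weighted GEOMETRIC mean of the underlying values, which behaves
    # quite differently from the arithmetic mean that averaging linear
    # representations gives you.
    def weighted_avg(xs, ws):
        return sum(x * w for x, w in zip(xs, ws)) / sum(ws)

    values, weights = [1.0, 100.0], [0.5, 0.5]

    linear = weighted_avg(values, weights)
    log_domain = math.exp(weighted_avg([math.log(v) for v in values], weights))
    print(linear)      # -> 50.5, dragged toward the large value
    print(log_domain)  # -> 10.0, the geometric mean; arguably more "reasonable"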

There are other issues with NO solution in hand, as well as
not-yet-recognized issues that remain hidden because there is no
substantial foundation of understanding.

Hence, I see the future unfolding somewhat differently than you do. I see
AGI lying in complete stagnation until the field becomes populated and
funded by people with a clear eye on the underlying math. Then things will
develop as you theorize, and with a population of people in constant search
of the underlying principles, things will progress quickly once the first
system has been made to work.

I see a parallel in the development of atomic weapons. These require deep
understanding to make work, though of course there is some fiddling along
the way to refine that understanding. There wasn't much time between the
first atomic weapons and the first fusion-based weapons, despite their
operating on different principles. After that flurry of development, things
have pretty much stagnated, as there is only about one order of magnitude
left between fusion and antimatter. I expect this stupidity-race to finally
end with the development of a black hole weapon, where the first mistake in
the lab will stop all weapons development.

Similarly with AGI: only sophomoric and easily pierced arguments have been
advanced that a "friendly AGI" wouldn't be a direct threat to the human
race, and only sophomoric arguments have been advanced that an AGI could be
of much benefit where smart people are unable to prevail. Without taking
sides on the good/bad arguments for AGI, it seems really stupid to proceed
in such a dangerous vacuum. I'm not saying that we shouldn't proceed, but
rather that we must get past the sophomoric phase.

I have advanced some simple issues on which an AGI would obviously take the
opposite side from anyone on this list, or anyone you are likely to find.
Our present lifestyle is unsustainable for reasons that no one wants to
talk about, because at the bottom of these issues there is no real
scientific argument to be had. People like the way they now live, and will
happily continue to live this way until it kills everyone.

The flaws are in ourselves. The same "flaws" that pushed us to build our
great civilization will surely destroy it unless we find a way past these
barriers, and NO ONE has even proposed a way to do this.

Hence, I see the whole area of AGI development as being upside down. First
let's propose SOME way to keep human nature from destroying our society,
then let's propose whatever machinery is needed to pursue that path, then
let's establish companies to build that machinery. I suspect that somewhere
in there will be an AGI, but until AGI is seen as a solution rather than
just another threat, there will continue to be no hope for the future of
civilization.

Steve
==================
On Fri, Nov 30, 2012 at 9:27 AM, Alan Grimes <[email protected]> wrote:

> om
>
> While I continue to recover from the shock of the news that some poor,
> hapless student was given a PhD for a paper written about GOERTZEL's
> openCog crap (the student was ripped off), I want to talk about a general
> problem that is preventing us from getting our AGI NOW.
>
> My hypothesis is that we are currently on the climbing side of a hump
> effect in AI. That is, until we really understand the computations involved
> in AI, we will be spending more and more on ever larger and more baroque
> neural simulations. Once we are over that hump, and have an AI, we can
> start optimizing until the machine requirements for a very capable system
> are down in the $50-100k range, assuming no changes in hardware technology
> from today.
>
> What would be really awesome would be if the Great Oracle would simply
> give us the theory we need, we could then spend a couple of mega-clams on
> ASIC design and, from there, go directly into mass production...
>
> The problem, of course, is to try to optimize the process so that we don't
> have to climb all the way to the top of the hump before getting to AI. What
> pisses me off about uploaders, on this issue, is that they don't want to
> short-cut things, they want to go all the way to the top, and, presumably,
> stay there. =( That's not the way to go! You want to get to the vast
> fertile fields on the other side. That's the only place you are going to
> get any significant *QUALITATIVE* improvements in intelligence.
> Back-porting those improvements into your own being is a completely
> different and very interesting/important subject, and off-topic for the AGI
> list...
>
> The only thing the uploaders try to offer you is an increase in "clock
> speed". They assume that you will also then require a similarly overclocked
> environment... This VR environment is typically presented as being
> secondary to the clock speed improvement. But the real paydirt is the
> qualitative improvements... That's where you get your wisdom, that's where
> you get your advanced capabilities. That's where you get your real IQ
> boosts.
>
> --
> E T F
> N H E
> D E D
>
> Powers are not rights.
>



-- 
Full employment can be had with the stroke of a pen. Simply institute a six
hour workday. That will easily create enough new jobs to bring back full
employment.


