Russell,

Your thinking appears to be constrained by a box that you may not be able
to see, which seems to be the source of your concerns. I'll attempt to
describe the box:

The things that happen inside a computer are NOT wholly mathematically
sound. For example, an ADD instruction does NOT do a complete job of
adding. OK, so you don't care about integer overflow and floating-point
truncation UNTIL it messes up some address computation and blows up your
program.

Part of this comes from the giant step backwards taken when microcomputers
first came out. Their CPUs had to be SO small that they couldn't even
incorporate array address computation (and verification) logic, yet they
displaced processors that DID have these capabilities, like the Burroughs
5000 and 6000 series.

Coincident with microcomputers came C, a low-level language that was
literally built upon the design screwups of minicomputers that were carried
on into microcomputers.

The Burroughs CPUs also dealt with things like integer overflow by
converting to floating-point, just like BASIC. Numbers were all 48-bit
quantities to make this work right.

Now, "modern" computers STILL lack the necessary hardware to do reasonable
arithmetic and array computations, and we STILL have languages that are
literally built on these design screwups, leaving us to do "brain surgery
in a sewer" of crummy hardware.

However, I believe that even the Burroughs approach was way too limiting
for really complex applications (like AGI), since it could not deal with
variables in the sense of elementary algebra.

Much of present AGI thinking is built on the LACK of algebraic EXTRAPOLATE
and other basic operations that are NOT (easily) programmable on present
hardware.

Hence, I excuse many of your examples, e.g. program proving, as historical
artifacts. I suspect that the gap between present hardware and what is
needed for useful ML is just too great to contemplate as "programming".

I could probably design an environment where such things would be quite
tractable, but with prevailing historically-based "thinking", I would
probably be the only one using it.

Steve
======================
On Sun, Oct 14, 2012 at 9:03 AM, Russell Wallace
<[email protected]> wrote:

> On Fri, Oct 12, 2012 at 6:12 AM, Abram Demski <[email protected]> wrote:
>
> (good thoughts about chunking and memoization which I'm still mulling over)
>
> In any case, most of this is wild speculation based on a distinct lack of
>> experience implementing such systems...
>>
>
> Yep. One of the tricky parts of this whole business is it's hard to think
> clearly about what heuristics etc. an AI system could usefully employ
> without being able to try some out; but that would require an adequate
> framework within which to write them; but it's hard to design such a
> framework without having figured out some use cases in detail first; but
> that requires thinking clearly about heuristics etc. Chicken and egg.
>
> In practice I end up basically bouncing back and forth between thinking
> about one or the other, hopefully slowly converging. Right now I'm again
> trying to sketch an end to end use case in the domain of software
> verification on the 'assume you have an ideal language implementation in
> which to write the code' basis.
>



-- 
Full employment can be had with the stroke of a pen. Simply institute a
six-hour workday. That will easily create enough new jobs to bring back
full employment.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com