Mike,

Of course, all math is in a sense tautological...

However, all simulations are also either tautological or
information-losing.  In a reversible simulation engine, the results
after a period of time are informationally equivalent to the initial
conditions (given knowledge of the simulation framework).  In an
irreversible simulation engine, the later results contain less
information than the initial conditions; they do not create
fundamentally new knowledge.  The derivation of a future state from
an earlier state in a simulation model is basically a special case of
the derivation of a conclusion from premises in a logic engine.  If
you knew math you would see this immediately...
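To make the reversible/irreversible distinction concrete, here is a toy sketch (my own illustration, not code from any actual simulation engine): a reversible update rule is a bijection on states, so the initial state can be recovered exactly, while an irreversible rule maps distinct initial states onto the same later state, discarding information.

```python
# Toy illustration of reversible vs. irreversible simulation steps.
# States are tuples of bits; the rules below are invented for this sketch.

def reversible_step(state):
    # Rotate the bits left by one: a bijection, so no information is lost.
    return state[1:] + state[:1]

def reversible_unstep(state):
    # Exact inverse of reversible_step: rotate right by one.
    return state[-1:] + state[:-1]

def irreversible_step(state):
    # AND each bit with its left-rotated neighbor: many-to-one, so
    # distinct initial states can collapse to the same successor.
    rotated = state[1:] + state[:1]
    return tuple(a & b for a, b in zip(state, rotated))

initial = (1, 0, 1, 1)

# Reversible engine: run forward five steps, then invert back to the start.
s = initial
for _ in range(5):
    s = reversible_step(s)
for _ in range(5):
    s = reversible_unstep(s)
assert s == initial  # later state is equivalent to the initial conditions

# Irreversible engine: two different initial states merge into one successor,
# so the later state carries less information than the initial conditions.
a, b = (0, 0, 0, 0), (1, 0, 0, 0)
assert a != b
assert irreversible_step(a) == irreversible_step(b)
```

The point of the sketch is just that nothing in either run goes beyond what the initial state plus the update rule already contained.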

As for the practical achievements of logic --- obviously it's used
heavily in the semiconductor industry (to check the correctness of
circuits), as well as for controlling video game AI characters, in
medical expert systems, and in the computer algebra engines inside
Mathematica, Matlab, etc. ... and lots of other places.  None of
these are AGI.  On the other hand, nobody has achieved AGI using
robotic agents guided by internal simulation models (as you advocate)
either...  I guess the list of real-world applications of robots
guided by world-simulations is shorter than the list of logic
applications, at the moment.  Most existing practical robots are
governed by much simpler and narrower approaches.  Personally I don't
think any of these methods, on its own, is likely to yield AGI using
feasible resources; I think an integrative approach will be needed...
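The kind of "derivation of a conclusion from premises" that such logic engines perform can be sketched as a minimal forward-chainer over Horn-style rules (a toy sketch; real circuit-verification and expert-system tools are vastly more elaborate, and the rule names below are invented for illustration):

```python
# Minimal forward chaining: repeatedly fire rules whose premises all hold,
# until no new facts can be derived.  Rules are (premises, conclusion) pairs.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base, loosely in the style of a circuit checker:
rules = [
    ({"gates_ok", "wiring_ok"}, "circuit_ok"),
    ({"circuit_ok", "spec_matches"}, "design_verified"),
]
derived = forward_chain({"gates_ok", "wiring_ok", "spec_matches"}, rules)
assert "design_verified" in derived
```

Note that every derived fact here was already implicit in the premises plus the rules -- tautological in exactly the sense discussed above, yet still practically useful.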

ben

On Mon, Jun 18, 2012 at 4:43 AM, Mike Tintner <[email protected]> wrote:
> Ben P.S. -
>
> I can part understand, part forgive your, culturally extremely widespread,
> reluctance to think physically and visually.
>
> But what is truly insane is your and the general inability to see that logic
> produces utterly trivial results in relation to the real world, and is
> totally INCAPABLE OF ANY **NEW** CONCLUSIONS/IDEAS - wh. are the stuff of
> AGI.
>
> My (and everyone's) ability to produce all kinds of new ideas about how Sue
> and Jane might have met at the clinic, is  an embodied affair.
>
> In the final analysis, logic is tautological:
>
> This Will Make You Smarter (John Brockman)
> Bart Kosko:
>
> The catch is that we can really only prove tautologies. The great binary
> truths of mathematics are still logically equivalent to the tautology 1 = 1
> or Green is green. This differs from the factual statements we make about
> the real world-statements such as "Pine needles are green" or "Chlorophyll
> molecules reflect green light." These factual statements are approximations.
> They are technically vague or fuzzy. And they often come juxtaposed with
> probabilistic uncertainty: "Pine needles are green with high probability."
> Note that this last statement involves triple uncertainty. There is first
> the vagueness of green pine needles because there is no bright line between
> greenness and non-greenness-it is a matter of degree. There is second only a
> probability whether pine needles have the vague property of greenness. And
> there is last the magnitude of the probability itself. The magnitude is the
> vague or fuzzy descriptor "high," because here, too, there is no bright line
> between high probability and not-high probability.
>
> Logic doesn't even have concepts (like "green") - as, essentially, Kosko is
> indicating above.
>
> Our ability to come up with new ideas is founded on our *body* and its
> PHYSICAL MOBILITY -  we can simulate ourselves (and by extension others and
> other objects) moving along altogether new lines.  We also have SENSORY
> MOBILITY - an  ability to flex and repaint our sensory images in our
> reflective imagination, and so interpret concepts like "green" flexibly and
> change the shade, tint, colour etc in our minds.
>
> As I said, you can just look at all the examples of logic used in AI
> textbooks, like your Sue and Jane, and see that they are all toy stuff.
> There are no examples of logic being real world productive.
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/212726-11ac2389
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com



-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche

