On 2/3/2022 2:23 PM, Terren Suydam wrote:


On Thu, Feb 3, 2022 at 4:27 PM John Clark <[email protected]> wrote:

    On Thu, Feb 3, 2022 at 2:11 PM Terren Suydam
    <[email protected]> wrote:

        > /the code generated by the AI still needs to be understandable/


    Once AI starts to get really smart, that's never going to
    happen. Even today nobody knows how a neural network like
    AlphaZero works or understands the reasoning behind it making a
    particular move, but that doesn't matter: understandable or
    not, AlphaZero can still play chess better than anybody alive, and
    if humans don't understand how that can be, then that's just too
    bad for them.


With chess, it's clear what the game is, what the rules are, and how to win or lose. In real life, the game constantly changes. AlphaCode can potentially improve its code, but to what end? What problem is it trying to solve? How does it know?

Even in domains with seemingly simple goals, it's a problem. Imagine an AI tasked with making as much money in the stock market as it can. The signals for winning and losing are pretty clear (as in chess). And perhaps there are some easy wins there for an AI that can take advantage of, e.g., arbitrage (this exists already, I believe) or other patterns that are not exploitable by human brains.

But it seems to me that actual comprehension of the world of investment is key: knowing how an earnings report will affect a company's stock price relative to human expectations about that report, to take just one tiny example. You have to import a universe of knowledge of the human domain to be effective... a universe we take for granted since we've acquired it over decades of training. And I'm not talking about mere information, but models that can be simulated in what-if scenarios: true understanding. You need real AGI. I think that's true of AIs that would supplant human programmers, for the reasons I said.
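The arbitrage aside is the one part of this that really is purely mechanical, and it can be sketched without any world-model at all. A toy illustration (the function name, prices, and the flat per-leg fee are all invented for this example):

```python
# Hypothetical sketch: a toy check for a cross-exchange arbitrage
# opportunity, the kind of mechanical pattern a bot could exploit
# with no comprehension of the "world of investment" at all.

def arbitrage_profit(bid_a: float, ask_b: float,
                     qty: float, fee_rate: float) -> float:
    """Profit from buying qty on exchange B at ask_b and selling it
    on exchange A at bid_a, paying a proportional fee on each leg.
    A negative result means the spread doesn't survive the fees."""
    buy_cost = ask_b * qty * (1 + fee_rate)
    sell_proceeds = bid_a * qty * (1 - fee_rate)
    return sell_proceeds - buy_cost

# Exchange A bids 100.50, exchange B asks 100.10, 0.1% fee per leg:
profit = arbitrage_profit(bid_a=100.50, ask_b=100.10,
                          qty=10, fee_rate=0.001)
# profit is positive here, so the spread survives the fees
```

The point of the sketch is the contrast: this kind of rule needs only price feeds and arithmetic, whereas anticipating how the market will react to an earnings report requires exactly the human-domain models described above.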

        > /The hard part is understanding the problem your code is
        supposed to solve, understanding the tradeoffs between
        different approaches, and being able to negotiate with
        stakeholders about what the best approach is./


    You seem to be assuming that the "stakeholders", those that intend
    to use the code once it is completed, will always be humans, and I
    think that is an entirely unwarranted assumption. The stakeholders
    will certainly have brains, but they may be hard and dry and not
    wet and squishy.


By the time machines are the stakeholders, we're already past the singularity.


        /> It'll be a very long time before we're handing that domain
        off to an AI./


    I think you're whistling past the graveyard.


Of course, nobody can know what the future holds. But I think the problem of AGI is much harder than most assume. The fact that humans, with their stupendously parallel and efficient brains, require at least 15-20 /years/ of continuous training on average before they're able to grasp the problem domain we're talking about should be a clue.

I think "able to grasp the problem domain we're talking about" is giving us way too much credit. Every study of stock traders I've seen says that they do no better than simple rules of thumb, like index funds.

Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/16772563-e3dd-19d9-8241-095fbd3230b6%40gmail.com.
