Well, consider the example of climate.  Nobody can grasp all the factors in climate and their interactions.  But we can model all of them in a global climate simulation.  So climatologists plus simulations "grasp the domain" even though no human can alone.  Now suppose we want to extend these predictive climate models to include predictions about what humans will do in response.  We don't know how humans will behave except in general statistical terms.  We don't know whether they will build nuclear power plants or not, or whether they will go to war over immigration or not.  An AI might be able to predict that, but we certainly can't.  But if it did, would we believe it?  It couldn't explain it to us.

Brent

On 2/4/2022 8:55 AM, Terren Suydam wrote:
I think for programmers to lose their jobs to AIs, the AIs will need to grasp the problem domain, and I'm suggesting that's far too advanced for today's AI.  I think it's a long way off, because the problem domain for programmers entails knowing a lot about how humans behave: what they're good at and bad at, what they value, and so on, not to mention the domain-specific knowledge that is necessary to understand the problem in the first place.

On Thu, Feb 3, 2022 at 8:23 PM Brent Meeker <[email protected]> wrote:

    So AIs won't need to "grasp the problem domain" to be effective. 
    Which may well be true.  What we call "grasping the problem
    domain" is being able to tell simple stories about it that other
    people can grasp and understand, say by reading a book.  An AI may
    grasp the problem in some much more comprehensive way that is
    too much for a human to comprehend, and the human will say the AI
    is just calculating and doesn't understand the problem because it
    can't explain it to humans.

    That's sort of what we do when we write simulations of complex
    things.  They are too complex for us to see what will happen and
    so we use the computer to tell us what will happen.  The computer
    can't "explain the result" to us and we can't grasp the whole
    domain of the computation, but we can grasp the result.

    Brent

    On 2/3/2022 4:29 PM, Terren Suydam wrote:

    Being able to grasp the problem domain is not the same thing as
    being effective in it.

    On Thu, Feb 3, 2022 at 6:07 PM Brent Meeker
    <[email protected]> wrote:


        I think "able to grasp the problem domain we're talking
        about" is giving us way too much credit.  Every study of
        stock traders I've seen says that they do no better than
        simple rules of thumb, like index funds.

        Brent


--
You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAMy3ZA8cnqP1U4thdUQWv_AZk-zbOcrqk1u%3DeRzqzmvAHOFJMQ%40mail.gmail.com.
