On 9/10/2019 1:10 PM, Philip Thrift wrote:

    Deep nets are "algorithms" too. One can print out the gazillion
    weights of the "neural" sigmoid functions of the connections
    after it has deep-learned. That's just an algorithm that a human
    couldn't read very well, because if it were printed out, it would
    be quite big.

    And its reasoning is more like human intuition.  In general, it
    can't explain its process in a way that you could adopt.

    Brent
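The point about printing out the weights can be made concrete with a toy sketch. This is a minimal, hypothetical example (the weight values are made up, and a real deep net would have millions of parameters, not thirteen): the "algorithm" is nothing but these numbers plus fixed arithmetic, all of it printable and none of it readable in any explanatory sense.

```python
# A tiny 2-3-1 "trained" network: the whole algorithm is just these
# numbers plus sigmoid arithmetic. Values are invented for illustration.
import math

W1 = [[0.5, -1.2, 0.8],
      [1.1, 0.3, -0.7]]   # input -> hidden weights
b1 = [0.1, -0.2, 0.05]    # hidden biases
W2 = [0.9, -0.4, 1.3]     # hidden -> output weights
b2 = 0.2                  # output bias

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Fixed arithmetic over the stored weights -- that is the entire "program".
    hidden = [sigmoid(sum(x[i] * W1[i][j] for i in range(2)) + b1[j])
              for j in range(3)]
    return sigmoid(sum(h * w for h, w in zip(hidden, W2)) + b2)

# "Printing out" the algorithm: every weight is inspectable,
# yet the listing explains nothing about why the output is what it is.
print("weights:", W1, b1, W2, b2)
print("output for [1.0, 0.5]:", forward([1.0, 0.5]))
```

Scale this up by many orders of magnitude and you get the unreadable-but-printable listing described above.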



Deep-net research also addresses modularity and interpretability:

@ Google Research

https://ai.googleblog.com/2019/09/recursive-sketches-for-modular-deep.html
https://ai.googleblog.com/2018/03/the-building-blocks-of-interpretability.html

So maybe they will report "explanations" soon, perhaps better than humans can explain their own.


From what I've read about deep nets, the reporting of explanations is done by additional nets, so it has the same problem as a human telling you how to do something they do intuitively (like hitting a tennis ball): the explanation may not really align with what their brain actually does.

Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/be57689f-d533-7961-87a1-fd6677a007dc%40verizon.net.