So are you saying ReLU-based nets can be optimized by removing the need for ReLU 
activation functions altogether?

And in your other post, are you saying we can optimize each layer separately?

A trained net maps each input to an output. If we randomly sample many 
input/output pairs, maybe we can build a heterarchy from them and throw the net 
in the trash? That would let us see the relationships between inputs, between 
outputs, and between inputs and outputs.
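The sampling idea can be sketched roughly like this. This is only a toy illustration under my own assumptions: a small random-weight ReLU net stands in for a "trained" net, and a nearest-neighbour lookup over the sampled pairs stands in for whatever relational structure (the "heterarchy") would actually be built.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained net: a tiny 2-layer ReLU MLP with fixed random weights.
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def net(x):
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return W2 @ h + b2

# Step 1: randomly sample many input/output pairs from the net.
inputs = rng.uniform(-1.0, 1.0, size=(5000, 2))
outputs = np.array([net(x) for x in inputs])

# Step 2: answer queries from the sampled table alone --
# at this point the net itself is no longer consulted.
def lookup(x):
    i = np.argmin(np.linalg.norm(inputs - x, axis=1))
    return outputs[i]

q = np.array([0.3, -0.2])
print(net(q)[0], lookup(q)[0])  # lookup approximates the net near sampled points
```

With enough samples the table tracks the net closely in low dimensions, though the catch is that the number of samples needed grows quickly with input dimension.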

Better optimizations could lower cost, or keep the same cost and speed while 
using less RAM.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T894f73971549b2ee-M6a1948e1ff99a817b659930a
Delivery options: https://agi.topicbox.com/groups/agi/subscription
