When the best tool you have is a hammer, make your problems in the shape of nails.
Right or wrong, we are going down the neural-network path in software, hardware, and research spending. GPUs give us 10-100x over CPUs; Graphcore and Wave Computing give another 100-200x in compute and power efficiency. A maximum-die-size Graphcore chip at the 10nm node, with TSV-stacked memory on top, would get us a factor of 1,000,000 over CPUs.

People are learning to do reasoning with neural networks; see the work on reasoning with a semantic loss function. Ben's video from Berlin 2015, on driving a vision NN toward a more structured middle layer, and Hinton's capsules both show how added structure is improving vision.

We know how to use real-world sequential data for unsupervised training. For example, feeding books in one character at a time, with the NN predicting the next character, trains word knowledge; feeding books in one word at a time trains sentence structure. The challenge seems to be to drive a rich and structured middle layer.
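The character-at-a-time idea can be sketched very simply. Below is a minimal count-based bigram predictor in plain Python: it stands in for a real neural network, but it illustrates the same training signal, i.e. learning which character tends to follow which from raw sequential text. The corpus string and function names here are illustrative, not from any existing codebase.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which characters tend to follow it."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, ch):
    """Return the most likely next character after ch (None if unseen)."""
    if ch not in counts:
        return None
    return counts[ch].most_common(1)[0][0]

# Tiny stand-in corpus; a real run would stream in whole books.
corpus = "the quick brown fox jumps over the lazy dog. the end."
model = train_bigram(corpus)
print(predict_next(model, "t"))  # 'h' -- "th" dominates even this tiny corpus
```

Swapping the unit from characters to words (split the text on whitespace before counting) turns the same predictor into one that picks up sentence-level statistics, which is the word-at-a-time variant described above.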
