Re: [agi] Do Neural Networks Need To Think Like Humans?

2019-03-13 Thread keghnfeem
Thanks Jim. -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T4aa81fd9912dbd39-M50559774e050d8b77d141409 Delivery options: https://agi.topicbox.com/groups/agi/subscription

Re: [agi] Do Neural Networks Need To Think Like Humans?

2019-03-13 Thread Jim Bromer
Although the Activation Atlas is a little misleading (in my opinion), it has given me some vague ideas about how a symbol net might work. Jim Bromer

Re: [agi] Do Neural Networks Need To Think Like Humans?

2019-03-13 Thread Jim Bromer
Great article. Thanks again. Jim Bromer On Tue, Mar 12, 2019 at 6:28 PM wrote: > Exploring Neural Networks with Activation Atlases: > https://distill.pub/2019/activation-atlas/?utm_campaign=Data_Elixir_medium=email_source=Data_Elixir_224

Re: [agi] Do Neural Networks Need To Think Like Humans?

2019-03-13 Thread keghnfeem
That is very true, Stefan. Detector NNs have layers where simple features are generally detected in the first layers, and complex feature detectors sit in the later part. There is a bit of play in applying the back-propagation algorithm during training so that complex detection and simple
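The layered simple-to-complex detection described above can be sketched in a few lines of NumPy. This is a hypothetical toy illustration, not anyone's actual model: a hand-written convolution stands in for one detector layer, two fixed edge kernels play the role of learned "simple" first-layer features, and a second layer combines their responses into a "complex" corner-like detector.

```python
import numpy as np

def conv2d(img, kernel):
    # Valid cross-correlation: a minimal stand-in for one detector layer.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Layer 1: "simple" feature detectors (vertical and horizontal edges).
vertical_edge = np.array([[1., 0., -1.],
                          [1., 0., -1.],
                          [1., 0., -1.]])
horizontal_edge = vertical_edge.T

# Toy input: a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

v = np.maximum(conv2d(img, vertical_edge), 0)    # ReLU keeps positive responses
h = np.maximum(conv2d(img, horizontal_edge), 0)

# Layer 2: a "complex" detector built from the simple ones, firing
# where both edge types respond at once (a corner-like cue).
corner_response = conv2d(v * h, np.ones((2, 2)))
print(corner_response.shape)  # → (5, 5)
```

In a trained network the kernels at both layers would be set by back-propagation rather than fixed by hand; the point is only that later layers compose the outputs of earlier, simpler detectors.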

Re: [agi] Yours truly, the world's brokest researcher, looks for a bit of credit

2019-03-13 Thread Nanograte Knowledge Technologies
Hi David. I was paraphrasing what a senior technical representative at Facebook himself said about the incident. His view was that the chatbots developed their own language and communicated out of scope of the laid-down script. In other words, it seems the door was somehow left open for them to expand

Re: [agi] Yours truly, the world's brokest researcher, looks for a bit of credit

2019-03-13 Thread David Whitten
You have a different meaning for "volition" than I do. The Facebook chatbots had no choice but to communicate with each other. I think the aforementioned communication was a stimulus-response model. The secret language was just pattern recognition, where signals that had no significance replaced