I finally got around to reading the paper.  Not sure if this was
reported elsewhere, but there is a minor typo:

"These labels, in simple cases, appear function like an
image grammar"

On the whole, it's a good paper: very clear, and timely given the raging
(and to some extent justified) DL hype.  These are, to some extent, the
seemingly timeless NN problems of not completely knowing what is going
on in the black box of weights and connections.

Mike A

On 5/11/15, Peter Christiansen <[email protected]> wrote:
> Hey Paige
>
> Thank you very much for your prompt reply and clarification.
>
> According to my employer, I have only one more month of Medical Leave
> remaining, so I would be able to participate in Arm 11 of the alternative
> study, which I believe consists of radiation-only tx for approximately two
> months, assuming I meet the necessary criteria, the randomization places
> me in that arm, and the tx begins very soon.  Otherwise I am afraid I must,
> regretfully, decline participation in either study.
>
> Peter
>
> Sent from my iPad
>
>> On May 11, 2015, at 12:48 PM, "Greg Staskowski" <[email protected]>
>> wrote:
>>
>> Look, guys and Ben. This is all fun and cute, but I'm wondering if we
>> can't think even bigger?
>>
>> For example, is it possible to construct an irrational lambda calculus
>> that can be paired with our current lambda calculus and modal logic to
>> create agents that will evolve more towards what I call "emergent silicon
>> organisms" or ESO's?
>>
>> If I'm all wet, go ahead and say so; I don't bruise easily.
>>
>> -GJS
>>
>>> On Sun, May 10, 2015 at 5:25 PM, Boris Kazachenko <[email protected]>
>>> wrote:
>>> OK, but in general terms, that's what DL is doing anyway. You want to do
>>> the same thing without its flaws, but how?
>>> You know what I think: these flaws are produced by the coarse statistical
>>> operations of ANNs, & the solution is to start with fine-grain one-to-one
>>> cross-comparison.
>>>
>>>> On Sun, May 10, 2015 at 12:33 PM, Ben Goertzel <[email protected]>
>>>> wrote:
>>>>
>>>>
>>>>> On Sat, May 9, 2015 at 9:45 PM, Boris Kazachenko <[email protected]>
>>>>> wrote:
>>>>> Ben,
>>>>>
>>>>> a) Episodic memory requires a transition from hierarchical to sequential
>>>>> processing, which is problematic for ANNs. The conventional solution is to
>>>>> model dendrites: they do all the heavy sequential processing, while the
>>>>> neuron itself is more like a networking node. Jeff Hawkins does that.
>>>>> In my model, this dilemma does not exist: every level of the hierarchy is
>>>>> already a sequence: www.cognitivealgorithm.info
>>>>>
>>>>> b) Your solution to pathologies is basically supervision, by some image
>>>>> grammar. It's a hack; general intelligence must be able to learn
>>>>> without supervision. I think the real solution is in proper node design.
>>>>
>>>>
>>>> No, my suggested solution is NOT supervision by an image grammar.
>>>> Rather, my preferred approach involves creation of unsupervised learning
>>>> algorithms that implicitly form image-grammar-type structures within
>>>> their networks.  Sorry if my write-up was not clear on that point.  I
>>>> agree that most learning in an AGI must occur without supervision,
>>>> though supervision can also play a supporting role.
>>>>
>>>> -- Ben
>>>> AGI | Archives  | Modify Your Subscription 
>>>
>>
>
>
>
>


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
