From the NTM paper, you can tell the work is an extension of
previous work on RNNs: the idea is to combine an external
(learnable) memory (the 'tape' of a TM) with an RNN such as an
LSTM (the 'head' of the TM).
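Concretely, the "blurry" read the paper describes can be sketched as
soft attention over a memory matrix. A minimal numpy sketch (the
function names and toy values are mine, not from the paper):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, beta=5.0):
    """NTM-style content-based read.

    memory : (N, M) array, N slots of width M
    key    : (M,) query vector emitted by the controller
    beta   : sharpness of the attention
    Returns the attention weights and the blended read vector.
    """
    # Cosine similarity between the key and every memory slot.
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    w = softmax(beta * sims)   # differentiable addressing
    return w, w @ memory       # weighted sum over slots

# Toy memory: the key is closest to slot 1, so the read
# vector is dominated by that slot.
M = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])
w, r = content_read(M, np.array([0.1, 0.9, 0.0]))
```

Because the addressing is a softmax over similarities rather than a
hard index, the whole read is differentiable and the memory can be
trained end-to-end along with the controller.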

Though RNNs seem to have been mathematically proven to be
Turing-complete (see the reference in the NTM paper), the name
NTM apparently comes from its architecture, not from its
mathematical nature, and I guess its computational power as a TM
is yet to be proven.

Besides, I doubt the practicality of the Turing equivalence of
RNNs, as the infinite states are realized in a fractal structure
and would be vulnerable to noise...
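To make the noise worry concrete: the Turing-completeness
constructions encode an unbounded tape or stack in the binary
expansion of a single real-valued activation, so deeper symbols live
at exponentially finer scales. A toy illustration (pure Python, my
own example, not from any of the papers):

```python
def encode(bits):
    """Pack a bit string into one real number in [0, 1)."""
    x = 0.0
    for b in reversed(bits):
        x = (x + b) / 2.0   # bit 1 ends up as the top binary digit
    return x

def decode(x, n):
    """Read the first n bits back out by repeated doubling."""
    out = []
    for _ in range(n):
        x *= 2.0
        b = int(x)
        out.append(b)
        x -= b
    return out

bits = [1, 0, 1, 1, 0, 1, 0, 0] * 4   # a 32-symbol "stack"
x = encode(bits)

clean = decode(x, 32)          # exact arithmetic: all 32 bits survive
noisy = decode(x + 1e-6, 32)   # a tiny perturbation wrecks the deep bits
```

With noise of size 1e-6 the shallow bits (coarse scales) still decode
correctly, but the bits sitting below the noise floor are destroyed,
which is the practicality problem with these fractal encodings.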

-- Naoya Arakawa

2015/01/28 8:56, Mike Archbold via AGI <[email protected]> wrote:

> On 1/27/15, Ted Sanders via AGI <[email protected]> wrote:
>> A Neural Turing Machine, I think, is something along those lines.
>> http://arxiv.org/pdf/1410.5401v2.pdf
>> http://www.technologyreview.com/view/532156/googles-secretive-deepmind-startup-unveils-a-neural-turing-machine/
>> 
> 
> These sorts of hybrid neural + discrete-structured efforts have been
> around for a long time in my recollection.  Suddenly now it's
> "secretive deep mind" and I'm left wondering how this effort is so
> vastly different.  I'm no expert on neural networks, though, somebody
> fill me in.
> 
>> 
>> On Tue, Jan 27, 2015 at 7:12 AM, YKY (Yan King Yin, 甄景贤) via AGI <
>> [email protected]> wrote:
>> 
>>> *The idea:*  make programs continuous and then evolve programs using
>>> continuous techniques.  Valiant's recent book "Probably Approximately
>>> Correct" says something about evolving continuous parameters for
>>> strong AI.  (It may be more tractable than evolving programs with
>>> discrete elements, the kind of programs we usually write.)
>>> 
>>> *In the context of logic-based AI* (such as OpenCog, NARS, and my
>>> Genifer) the idea is to make all logic and procedural statements
>>> continuous.  The part concerning making logic continuous is via
>>> algebraization which I have been looking into, but will discuss
>>> elsewhere.
>>> The "procedural" aspect can be realized by letting the AGI control a
>>> Turing machine (TM) with one or more tapes, and by making such a
>>> Turing machine "continuous".
>>> 
>>> As a first step towards continuous TMs, we can start with *finite state
>>> machines (FSM)*.  It seems that a continuous version of FSMs corresponds
>>> to continuous dynamical systems (aka topological dynamical systems).  I
>>> have not looked into the details of this correspondence, but it looks
>>> fairly straightforward.
>>> 
>>> *To make TMs continuous* is somewhat more difficult.  One way is to
>>> turn the "tape read/write operations" into states of an FSM (in which
>>> case the number of states may become infinite).  But I'm not sure
>>> that is a good way to create continuous TMs.
>>> 
>>> Any other idea for continuous TMs?
>>> 
>>> Thanks in advance =)
>>> --
>>> *YKY*
>>> *"The ultimate goal of mathematics is to eliminate any need for
>>> intelligent thought"* -- Alfred North Whitehead
>>> 
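On YKY's first step above -- a continuous version of an FSM -- one
concrete reading is to replace the discrete state with a probability
distribution over states, and each input symbol's transition function
with a stochastic matrix; a word then drives a linear dynamical system
on the probability simplex. A toy sketch (the automaton and all names
are mine, not from the thread):

```python
import numpy as np

# A 2-state automaton over the alphabet {a, b}, softened:
# row i of each matrix says where state i flows to on that symbol.
# Rows sum to 1, so the state distribution stays on the simplex.
T = {
    "a": np.array([[0.9, 0.1],    # 'a' mostly keeps state 0 in place
                   [0.2, 0.8]]),
    "b": np.array([[0.1, 0.9],    # 'b' mostly flips state 0
                   [0.7, 0.3]]),
}

def run(word, s0=np.array([1.0, 0.0])):
    """Evolve the state distribution through a word."""
    s = s0
    for sym in word:
        s = s @ T[sym]   # one step of the dynamical system
    return s

s = run("aab")
```

Making each matrix a one-hot (0/1) stochastic matrix recovers the
original discrete FSM exactly, so this is a genuine relaxation.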


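As for the harder question of continuous TMs: the NTM paper's own
answer is to make the tape operations "blurry" -- the head holds a
weighting over cells, and a write is an erase-then-add update applied
to every cell in proportion to its weight. A minimal sketch of such a
soft write (toy dimensions, names are mine):

```python
import numpy as np

def soft_write(tape, w, erase, add):
    """NTM-style blurry write.

    tape  : (N, M) tape of N cells, each of width M
    w     : (N,) attention weights over cells, summing to 1
    erase : (M,) erase vector with entries in [0, 1]
    add   : (M,) add vector
    Every cell is updated in proportion to its weight,
    so the whole operation is differentiable.
    """
    tape = tape * (1.0 - np.outer(w, erase))  # partial erase
    return tape + np.outer(w, add)            # partial add

tape = np.zeros((4, 2))
w = np.array([0.0, 1.0, 0.0, 0.0])   # fully focused on cell 1
tape = soft_write(tape, w, erase=np.ones(2), add=np.array([3.0, 5.0]))
```

When the weighting is one-hot, as here, this reduces to an ordinary
discrete tape write; with a spread-out weighting the write is smeared
across neighbouring cells, which is what makes the tape trainable by
gradient descent.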

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
