Thanks for the code to reproduce this. I opened an issue about it so we don't lose track of it; we should work on it shortly. Here is the issue:

https://github.com/Theano/Theano/issues/5249

There is a workaround in the issue until it gets fixed.

thanks

Frédéric
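For reference, a minimal sketch of how the workaround can be applied when compiling a function: either set THEANO_FLAGS="optimizer_excluding=scanOp_pushout_output" in the environment, as described in the quoted report, or exclude the optimization from the compilation mode in code. The tiny graph below is only a placeholder, not the reporter's actual model.

    import theano
    import theano.tensor as T

    # Placeholder graph; in the report the affected function is the model's train_fn.
    x = T.matrix('x')
    y = (x ** 2).sum()

    # Compile with the problematic scan optimization excluded; this should have the
    # same effect as THEANO_FLAGS="optimizer_excluding=scanOp_pushout_output".
    mode = theano.compile.mode.get_default_mode().excluding('scanOp_pushout_output')
    f = theano.function([x], y, mode=mode)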
On Sat, Jul 2, 2016 at 6:55 PM, Daniel Johnson <[email protected]> wrote:
>
> On Saturday, July 2, 2016 at 3:54:02 PM UTC-7, Daniel Johnson wrote:
>>
>> Hello,
>>
>> Recently I have run into a problem where compiling one of my Theano functions with optimizer=fast_run gives an error (reproduced below). When compiling it with optimizer=fast_compile, this error disappears.
>>
>> After some trial and error I determined that the optimization responsible is "scanOp_pushout_output", as compiling with THEANO_FLAGS="optimizer_excluding=scanOp_pushout_output" runs without any errors as well. I am currently using this as a workaround.
>>
>> The Theano version I am using is 0.9.0dev1.dev-a668c6c5b6d055b233aa5bc50b22800d996ffce1, but this error was also occurring when run with Theano 0.8.2. I will attach a function_dump of the function responsible (along with pickled versions of some test input) in hope that this might be useful in fixing this.
>>
>> The error:
>>
>> Traceback (most recent call last):
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/compile/function_module.py", line 862, in __call__
>>     self.fn() if output_subset is None else\
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/scan_module/scan_op.py", line 951, in rval
>>     r = p(n, [x[0] for x in i], o)
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/scan_module/scan_op.py", line 940, in <lambda>
>>     self, node)
>>   File "theano/scan_module/scan_perform.pyx", line 547, in theano.scan_module.scan_perform.perform (/home/djohnson/external/.theano/compiledir_Linux-3.13--generic-x86_64-with-debian-jessie-sid-x86_64-3.5.1-64/scan_perform/mod.cpp:6224)
>> ValueError: could not broadcast input array from shape (100,390) into shape (100,480)
>>
>> During handling of the above exception, another exception occurred:
>>
>> Traceback (most recent call last):
>>   File "main.py", line 70, in <module>
>>     main(**args)
>>   File "main.py", line 52, in main
>>     babi_train.train(m, bucketed, len(eff_anslist), output_format, num_updates, outputdir, start_idx, batch_size)
>>   File "/home/djohnson/research_personal/gated-graph-memory-network/babi_train.py", line 54, in train
>>     loss = m.train_fn(*sampled_batch)
>>   File "/home/djohnson/research_personal/gated-graph-memory-network/model.py", line 200, in logfn
>>     return tfn(a,b,c)
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/compile/function_module.py", line 875, in __call__
>>     storage_map=getattr(self.fn, 'storage_map', None))
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/gof/link.py", line 325, in raise_with_op
>>     reraise(exc_type, exc_value, exc_trace)
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/six.py", line 685, in reraise
>>     raise value.with_traceback(tb)
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/compile/function_module.py", line 862, in __call__
>>     self.fn() if output_subset is None else\
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/scan_module/scan_op.py", line 951, in rval
>>     r = p(n, [x[0] for x in i], o)
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/scan_module/scan_op.py", line 940, in <lambda>
>>     self, node)
>>   File "theano/scan_module/scan_perform.pyx", line 547, in theano.scan_module.scan_perform.perform (/home/djohnson/external/.theano/compiledir_Linux-3.13--generic-x86_64-with-debian-jessie-sid-x86_64-3.5.1-64/scan_perform/mod.cpp:6224)
>> ValueError: could not broadcast input array from shape (100,390) into shape (100,480)
>> Apply node that caused the error: forall_inplace,cpu,grad_of_scan_fn}(Shape_i{1}.0, Elemwise{sub,no_inplace}.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, InplaceDimShuffle{0,1,x,x,2}.0, Alloc.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0, Subtensor{::int64}.0, Subtensor{::int64}.0, Subtensor{::int64}.0, Subtensor{::int64}.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, newnodes_proposer_update_W, newnodes_proposer_update_b, newnodes_proposer_reset_W, newnodes_proposer_reset_b, newnodes_proposer_activation_W, newnodes_proposer_activation_b, newnodes_vote_W, edgestateupdate_update_W, edgestateupdate_reset_W, edgestateupdate_strength_W, edgestateupdate_activation_W, Elemwise{add,no_inplace}.0, InplaceDimShuffle{x,0}.0, InplaceDimShuffle{1,0}.0, InplaceDimShuffle{x,0}.0, InplaceDimShuffle{1,0}.0, InplaceDimShuffle{1,0}.0, InplaceDimShuffle{x,0}.0, InplaceDimShuffle{x,0}.0, InplaceDimShuffle{1,0}.0, InplaceDimShuffle{1,0}.0, InplaceDimShuffle{x,0}.0, Shape_i{0}.0, Shape_i{1}.0, Shape_i{0}.0, Shape_i{0}.0, Shape_i{1}.0, Shape_i{0}.0, Shape_i{0}.0, Shape_i{1}.0, Shape_i{0}.0)
>> Toposort index: 715
>> Inputs types: [TensorType(int64, scalar), TensorType(int64, vector), TensorType(float32, 5D), TensorType(float32, 4D), TensorType(float32, 4D), TensorType(float32, 3D), TensorType(float32, (False, False, True, True, False)), TensorType(float32, vector), TensorType(float32, 3D), TensorType(float32, 3D), TensorType(float32, 4D), TensorType(float32, 4D), TensorType(float32, 5D), TensorType(int64, vector), TensorType(int64, vector), TensorType(float32, 3D), TensorType(float32, 4D), TensorType(float32, 4D), TensorType(float32, 5D), TensorType(float32, vector), TensorType(float32, 3D), TensorType(float32, matrix), TensorType(float32, 3D), TensorType(float32, matrix), TensorType(float32, 3D), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(float32, matrix), TensorType(float32, vector), TensorType(float32, matrix), TensorType(float32, vector), TensorType(float32, matrix), TensorType(float32, vector), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(int64, scalar), TensorType(float32, row), TensorType(float32, matrix), TensorType(float32, row), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, row), TensorType(float32, row), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, row), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar)]
>> Inputs shapes: [(), (6,), (6, 10, 19, 19, 50), (6, 10, 19, 19), (6, 10, 19, 50), (6, 10, 19), (6, 10, 1, 1, 100), (6,), (6, 10, 100), (6, 10, 19), (6, 10, 19, 50), (6, 10, 19, 19), (6, 10, 19, 19, 50), (6,), (6,), (7, 10, 19), (7, 10, 19, 50), (7, 10, 19, 19), (7, 10, 19, 19, 50), (7,), (2, 151, 51), (2, 51), (2, 151, 51), (2, 51), (2, 151, 51), (2, 51), (2, 1), (2, 51), (2, 50), (2, 1), (2, 50), (), (), (), (), (), (), (), (), (), (151, 51), (51,), (151, 51), (51,), (151, 51), (51,), (100, 1), (250, 51), (250, 50), (250, 1), (250, 50), (), (1, 1), (1, 100), (1, 50), (50, 250), (1, 250), (1, 1), (1, 51), (51, 250), (50, 250), (1, 50), (), (), (), (), (), (), (), (), ()]
>> Inputs strides: [(), (8,), (722000, 72200, 3800, 200, 4), (14440, 1444, 76, 4), (38000, 3800, 200, 4), (760, 76, 4), (-400, 2400, 400, 400, 4), (4,), (-400, 2400, 4), (-760, 76, 4), (-38000, 3800, 200, 4), (-14440, 1444, 76, 4), (-722000, 72200, 3800, 200, 4), (-8,), (-8,), (-760, 76, 4), (-38000, 3800, 200, 4), (-14440, 1444, 76, 4), (-722000, 72200, 3800, 200, 4), (4,), (30804, 204, 4), (204, 4), (30804, 204, 4), (204, 4), (30804, 204, 4), (204, 4), (4, 4), (204, 4), (200, 4), (4, 4), (200, 4), (), (), (), (), (), (), (), (), (), (204, 4), (4,), (204, 4), (4,), (204, 4), (4,), (4, 4), (204, 4), (200, 4), (4, 4), (200, 4), (), (4, 4), (400, 4), (200, 4), (4, 200), (1000, 4), (4, 4), (204, 4), (4, 204), (4, 200), (200, 4), (), (), (), (), (), (), (), (), ()]
>> Inputs values: [array(6), 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', array([[ -3.65952699e-04], [ -4.58023787e-05]], dtype=float32), 'not shown', 'not shown', array([[-0.0034694], [-0.0020045]], dtype=float32), 'not shown', array(6), array(6), array(6), array(6), array(6), array(6), array(6), array(6), array(6), 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', array(19), array([[ 0.7651118]], dtype=float32), 'not shown', 'not shown', 'not shown', 'not shown', array([[ 1.12565911]], dtype=float32), 'not shown', 'not shown', 'not shown', 'not shown', array(51), array(51), array(151), array(51), array(51), array(151), array(51), array(51), array(151)]
>> Outputs clients: [[], [], [], [], [], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.5, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.6, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.7, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.8, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.9, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.10, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.11, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.12, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.13, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.14, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.15, ScalarFromTensor.0)], [Subtensor{::int64}(forall_inplace,cpu,grad_of_scan_fn}.16, Constant{-1})], [InplaceDimShuffle{1,0,2}(forall_inplace,cpu,grad_of_scan_fn}.17)], [Reshape{2}(forall_inplace,cpu,grad_of_scan_fn}.18, MakeVector{dtype='int64'}.0), Shape_i{2}(forall_inplace,cpu,grad_of_scan_fn}.18), Shape_i{1}(forall_inplace,cpu,grad_of_scan_fn}.18)], [Shape_i{2}(forall_inplace,cpu,grad_of_scan_fn}.19), InplaceDimShuffle{1,0,2}(forall_inplace,cpu,grad_of_scan_fn}.19)], [Reshape{2}(forall_inplace,cpu,grad_of_scan_fn}.20, MakeVector{dtype='int64'}.0), Shape_i{2}(forall_inplace,cpu,grad_of_scan_fn}.20), Shape_i{1}(forall_inplace,cpu,grad_of_scan_fn}.20)], [Reshape{2}(forall_inplace,cpu,grad_of_scan_fn}.21, MakeVector{dtype='int64'}.0), Shape_i{2}(forall_inplace,cpu,grad_of_scan_fn}.21), Shape_i{1}(forall_inplace,cpu,grad_of_scan_fn}.21)], [InplaceDimShuffle{1,0,2}(forall_inplace,cpu,grad_of_scan_fn}.22)], [Reshape{2}(forall_inplace,cpu,grad_of_scan_fn}.23, MakeVector{dtype='int64'}.0), Shape_i{1}(forall_inplace,cpu,grad_of_scan_fn}.23)], [Reshape{2}(forall_inplace,cpu,grad_of_scan_fn}.24, MakeVector{dtype='int64'}.0), Shape_i{2}(forall_inplace,cpu,grad_of_scan_fn}.24), Shape_i{1}(forall_inplace,cpu,grad_of_scan_fn}.24)]]
>>
>> HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
>> HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
>> [djohnson@ubuntu gated-graph-memory-network]$ THEANO_FLAGS="device=cpu" python3 main.py ../babi_en/qa1_single-supporting-fact_train.txt category --outputdir output_qa1 --num_updates 2
>> Starting to train...
>> Traceback (most recent call last):
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/compile/function_module.py", line 862, in __call__
>>     self.fn() if output_subset is None else\
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/scan_module/scan_op.py", line 951, in rval
>>     r = p(n, [x[0] for x in i], o)
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/scan_module/scan_op.py", line 940, in <lambda>
>>     self, node)
>>   File "theano/scan_module/scan_perform.pyx", line 547, in theano.scan_module.scan_perform.perform (/home/djohnson/external/.theano/compiledir_Linux-3.13--generic-x86_64-with-debian-jessie-sid-x86_64-3.5.1-64/scan_perform/mod.cpp:6224)
>> ValueError: could not broadcast input array from shape (100,390) into shape (100,480)
>>
>> During handling of the above exception, another exception occurred:
>>
>> Traceback (most recent call last):
>>   File "main.py", line 70, in <module>
>>     main(**args)
>>   File "main.py", line 52, in main
>>     babi_train.train(m, bucketed, len(eff_anslist), output_format, num_updates, outputdir, start_idx, batch_size)
>>   File "/home/djohnson/research_personal/gated-graph-memory-network/babi_train.py", line 54, in train
>>     loss = m.train_fn(*sampled_batch)
>>   File "/home/djohnson/research_personal/gated-graph-memory-network/model.py", line 200, in logfn
>>     pickle.dump(b,open('input_b.p','wb'))
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/compile/function_module.py", line 875, in __call__
>>     storage_map=getattr(self.fn, 'storage_map', None))
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/gof/link.py", line 325, in raise_with_op
>>     reraise(exc_type, exc_value, exc_trace)
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/six.py", line 685, in reraise
>>     raise value.with_traceback(tb)
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/compile/function_module.py", line 862, in __call__
>>     self.fn() if output_subset is None else\
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/scan_module/scan_op.py", line 951, in rval
>>     r = p(n, [x[0] for x in i], o)
>>   File "/home/djohnson/anaconda3/lib/python3.5/site-packages/theano/scan_module/scan_op.py", line 940, in <lambda>
>>     self, node)
>>   File "theano/scan_module/scan_perform.pyx", line 547, in theano.scan_module.scan_perform.perform (/home/djohnson/external/.theano/compiledir_Linux-3.13--generic-x86_64-with-debian-jessie-sid-x86_64-3.5.1-64/scan_perform/mod.cpp:6224)
>> ValueError: could not broadcast input array from shape (100,390) into shape (100,480)
>> Apply node that caused the error: forall_inplace,cpu,grad_of_scan_fn}(Shape_i{1}.0, Elemwise{sub,no_inplace}.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, InplaceDimShuffle{0,1,x,x,2}.0, Alloc.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0, Subtensor{::int64}.0, Subtensor{::int64}.0, Subtensor{::int64}.0, Subtensor{::int64}.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Alloc.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, Shape_i{1}.0, newnodes_proposer_update_W, newnodes_proposer_update_b, newnodes_proposer_reset_W, newnodes_proposer_reset_b, newnodes_proposer_activation_W, newnodes_proposer_activation_b, newnodes_vote_W, edgestateupdate_update_W, edgestateupdate_reset_W, edgestateupdate_strength_W, edgestateupdate_activation_W, Elemwise{add,no_inplace}.0, InplaceDimShuffle{x,0}.0, InplaceDimShuffle{1,0}.0, InplaceDimShuffle{x,0}.0, InplaceDimShuffle{1,0}.0, InplaceDimShuffle{1,0}.0, InplaceDimShuffle{x,0}.0, InplaceDimShuffle{x,0}.0, InplaceDimShuffle{1,0}.0, InplaceDimShuffle{1,0}.0, InplaceDimShuffle{x,0}.0, Shape_i{0}.0, Shape_i{1}.0, Shape_i{0}.0, Shape_i{0}.0, Shape_i{1}.0, Shape_i{0}.0, Shape_i{0}.0, Shape_i{1}.0, Shape_i{0}.0)
>> Toposort index: 715
>> Inputs types: [TensorType(int64, scalar), TensorType(int64, vector), TensorType(float32, 5D), TensorType(float32, 4D), TensorType(float32, 4D), TensorType(float32, 3D), TensorType(float32, (False, False, True, True, False)), TensorType(float32, vector), TensorType(float32, 3D), TensorType(float32, 3D), TensorType(float32, 4D), TensorType(float32, 4D), TensorType(float32, 5D), TensorType(int64, vector), TensorType(int64, vector), TensorType(float32, 3D), TensorType(float32, 4D), TensorType(float32, 4D), TensorType(float32, 5D), TensorType(float32, vector), TensorType(float32, 3D), TensorType(float32, matrix), TensorType(float32, 3D), TensorType(float32, matrix), TensorType(float32, 3D), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(float32, matrix), TensorType(float32, vector), TensorType(float32, matrix), TensorType(float32, vector), TensorType(float32, matrix), TensorType(float32, vector), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(int64, scalar), TensorType(float32, row), TensorType(float32, matrix), TensorType(float32, row), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, row), TensorType(float32, row), TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, row), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar)]
>> Inputs shapes: [(), (6,), (6, 10, 19, 19, 50), (6, 10, 19, 19), (6, 10, 19, 50), (6, 10, 19), (6, 10, 1, 1, 100), (6,), (6, 10, 100), (6, 10, 19), (6, 10, 19, 50), (6, 10, 19, 19), (6, 10, 19, 19, 50), (6,), (6,), (7, 10, 19), (7, 10, 19, 50), (7, 10, 19, 19), (7, 10, 19, 19, 50), (7,), (2, 151, 51), (2, 51), (2, 151, 51), (2, 51), (2, 151, 51), (2, 51), (2, 1), (2, 51), (2, 50), (2, 1), (2, 50), (), (), (), (), (), (), (), (), (), (151, 51), (51,), (151, 51), (51,), (151, 51), (51,), (100, 1), (250, 51), (250, 50), (250, 1), (250, 50), (), (1, 1), (1, 100), (1, 50), (50, 250), (50, 250), (1, 50), (1, 51), (51, 250), (1, 250), (1, 1), (), (), (), (), (), (), (), (), ()]
>> Inputs strides: [(), (8,), (722000, 72200, 3800, 200, 4), (14440, 1444, 76, 4), (38000, 3800, 200, 4), (760, 76, 4), (-400, 2400, 400, 400, 4), (4,), (-400, 2400, 4), (-760, 76, 4), (-38000, 3800, 200, 4), (-14440, 1444, 76, 4), (-722000, 72200, 3800, 200, 4), (-8,), (-8,), (-760, 76, 4), (-38000, 3800, 200, 4), (-14440, 1444, 76, 4), (-722000, 72200, 3800, 200, 4), (4,), (30804, 204, 4), (204, 4), (30804, 204, 4), (204, 4), (30804, 204, 4), (204, 4), (4, 4), (204, 4), (200, 4), (4, 4), (200, 4), (), (), (), (), (), (), (), (), (), (204, 4), (4,), (204, 4), (4,), (204, 4), (4,), (4, 4), (204, 4), (200, 4), (4, 4), (200, 4), (), (4, 4), (400, 4), (200, 4), (4, 200), (4, 200), (200, 4), (204, 4), (4, 204), (1000, 4), (4, 4), (), (), (), (), (), (), (), (), ()]
>> Inputs values: [array(6), 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', array([[ 0.00047027], [ 0.00014215]], dtype=float32), 'not shown', 'not shown', array([[ 0.00089044], [ 0.00053652]], dtype=float32), 'not shown', array(6), array(6), array(6), array(6), array(6), array(6), array(6), array(6), array(6), 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', array(19), array([[ 0.83649749]], dtype=float32), 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', 'not shown', array([[ 0.87242377]], dtype=float32), array(51), array(51), array(151), array(51), array(51), array(151), array(51), array(51), array(151)]
>> Outputs clients: [[], [], [], [], [], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.5, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.6, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.7, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.8, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.9, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.10, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.11, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.12, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.13, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.14, ScalarFromTensor.0)], [Subtensor{int64}(forall_inplace,cpu,grad_of_scan_fn}.15, ScalarFromTensor.0)], [Subtensor{::int64}(forall_inplace,cpu,grad_of_scan_fn}.16, Constant{-1})], [InplaceDimShuffle{1,0,2}(forall_inplace,cpu,grad_of_scan_fn}.17)], [Reshape{2}(forall_inplace,cpu,grad_of_scan_fn}.18, MakeVector{dtype='int64'}.0), Shape_i{2}(forall_inplace,cpu,grad_of_scan_fn}.18), Shape_i{1}(forall_inplace,cpu,grad_of_scan_fn}.18)], [Shape_i{2}(forall_inplace,cpu,grad_of_scan_fn}.19), InplaceDimShuffle{1,0,2}(forall_inplace,cpu,grad_of_scan_fn}.19)], [Reshape{2}(forall_inplace,cpu,grad_of_scan_fn}.20, MakeVector{dtype='int64'}.0), Shape_i{2}(forall_inplace,cpu,grad_of_scan_fn}.20), Shape_i{1}(forall_inplace,cpu,grad_of_scan_fn}.20)], [Reshape{2}(forall_inplace,cpu,grad_of_scan_fn}.21, MakeVector{dtype='int64'}.0), Shape_i{2}(forall_inplace,cpu,grad_of_scan_fn}.21), Shape_i{1}(forall_inplace,cpu,grad_of_scan_fn}.21)], [InplaceDimShuffle{1,0,2}(forall_inplace,cpu,grad_of_scan_fn}.22)], [Reshape{2}(forall_inplace,cpu,grad_of_scan_fn}.23, MakeVector{dtype='int64'}.0), Shape_i{1}(forall_inplace,cpu,grad_of_scan_fn}.23)], [Reshape{2}(forall_inplace,cpu,grad_of_scan_fn}.24, MakeVector{dtype='int64'}.0), Shape_i{2}(forall_inplace,cpu,grad_of_scan_fn}.24), Shape_i{1}(forall_inplace,cpu,grad_of_scan_fn}.24)]]
>>
>> HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
>> HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
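For completeness, a rough sketch of how a function_dump like the one mentioned in the quoted report can be produced for the developers: theano.function_dump takes a filename followed by the same arguments as theano.function and pickles them so the compilation can be reproduced elsewhere. The inputs and output below are placeholders, not the reporter's actual graph, and the filename is arbitrary.

    import theano
    import theano.tensor as T

    # Placeholder graph standing in for the real model's inputs and cost.
    x = T.matrix('x')
    y = (x ** 2).sum()

    # Dump everything theano.function would need, so the compilation can be
    # reproduced from the pickle on another machine.
    theano.function_dump('function_dump.pkl', [x], y)

The pickled test inputs mentioned in the report would be saved separately, e.g. with pickle.dump as in the model.py frame above.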
