Dear Pascal,
Thanks for your reply. I tried to inspect self.inp_c, but evaluating it gives 
an error like this:
theano.gof.fg.MissingInputError: An input of the graph, used to compute 
Subtensor{int64::}(input_var, Constant{0}), was not provided and not given 
a value. Use the Theano flag exception_verbosity='high' for more information 
on this error.
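For reference, here is a minimal sketch of what I tried, with hypothetical 
stand-ins for the model's variables (input_var and the sample shape are 
assumptions, not the actual code): a symbolic variable cannot be evaluated 
until every graph input it depends on is given a value.

    import numpy as np
    import theano
    import theano.tensor as T

    input_var = T.tensor3('input_var')      # symbolic graph input
    inp_c = input_var.dimshuffle(1, 2, 0)   # like the node in the traceback

    # inp_c.eval() alone raises MissingInputError: input_var has no value.
    # Supplying a value for every graph input it depends on succeeds:
    sample = np.zeros((5, 3, 40), dtype=theano.config.floatX)
    print(inp_c.eval({input_var: sample}).shape)   # prints (3, 40, 5)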
Also, InplaceDimShuffle{1,2,0}.0 is supposed to permute the dimensions first, 
right? So shouldn't the first dimension end up being 40 instead of 0?
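To check my understanding, here is a quick NumPy analogue (the pre-shuffle 
shape (5, 0, 40) is only inferred from the (0, 40, 5) in the traceback, so 
it is an assumption):

    import numpy as np

    x = np.zeros((5, 0, 40), dtype='float32')  # assumed shape before shuffle
    y = x.transpose(1, 2, 0)                   # what InplaceDimShuffle{1,2,0} does
    print(y.shape)                             # (0, 40, 5): axis 0 is still empty
    # y[0] raises IndexError -- permuting axes cannot create elements.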
Am I on the wrong track? Thanks!
On Monday, November 7, 2016 at 10:57:22 PM UTC-5, Pascal Lamblin wrote:
>
> The error message is telling you that the corresponding computation in 
> the symbolic graph is at the line: 
>
> > outputs_info=T.zeros_like(self.inp_c[0][0])) 
>
> Since the corresponding node is 
>
> > Subtensor{int64}(InplaceDimShuffle{1,2,0}.0, Constant{0}) 
>
> it is likely that the problem occurred when computing "self.inp_c[0]". 
>
> The error message also gives you the shape of the indexed variable 
> (self.inp_c, probably), which is (0, 40, 5). 
>
> So the issue is that you are trying to take the first element of an 
> empty array, which is out of bounds. 
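>
> A minimal NumPy illustration of that failure mode, just as a sketch, 
> using the shapes reported in the traceback: 
>
>     import numpy as np
>
>     a = np.empty((0, 40, 5), dtype='float32')  # first axis is empty
>     a[0]  # IndexError: index 0 is out of bounds for axis 0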
>
> On Mon, Nov 07, 2016, Jiali Zhou wrote: 
> >   
> > Dear all, 
> > I am new to Theano and am trying to reproduce the YerevaNN result from 
> > dmn_qa_draft.py using the Who-did-What dataset. 
> > However, the code gives the following errors: 
> > if mode == 'train': 
> >     gradient_value = self.get_gradient_fn(inp, q, ans, ca, cb, cc, cd, 
> >                                           ce, input_mask) 
> > ''' 
> > if self.mode == 'train': 
> >     print "==> computing gradients (for debugging)" 
> >     gradient = T.grad(self.loss, self.params) 
> >     self.get_gradient_fn = theano.function( 
> >         inputs=[self.inp_var, self.q_var, self.ans_var, 
> >                 self.ca_var, self.cb_var, self.cc_var, 
> >                 self.cd_var, self.ce_var, self.input_mask_var], 
> >         outputs=gradient) 
> > ''' 
> > I searched online and found that, if I understand correctly, the error 
> > means an index is out of bounds in the outputs. But here theano.function 
> > should just return a gradient. So what is the exact problem here? 
> > Any suggestions would be appreciated. 
> > 
> > File "/Users/baymax/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py", line 873, in __call__ 
> >     self.fn() if output_subset is None else\ 
> > 
> > IndexError: index out of bounds 
> > 
> > Apply node that caused the error: 
> > Subtensor{int64}(InplaceDimShuffle{1,2,0}.0, Constant{0}) 
> > 
> > Toposort index: 1057 
> > 
> > Inputs types: [TensorType(float32, 3D), Scalar(int64)] 
> > 
> > Inputs shapes: [(0, 40, 5), ()] 
> > 
> > Inputs strides: [(160, 4, 4), ()] 
> > 
> > Inputs values: [array([], shape=(0, 40, 5), dtype=float32), 0] 
> > 
> > Inputs type_num: [11, 7] 
> > 
> > Outputs clients: [[Subtensor{int64}(Subtensor{int64}.0, Constant{0})]] 
> > 
> > 
> > 
> > 
> > Backtrace when the node is created (use Theano flag traceback.limit=N to 
> > make it longer): 
> > 
> > File "main.py", line 64, in <module> 
> >     dmn = dmn_qa.DMN_qa(**args_dict) 
> > File "/Users/baymax/Desktop/nlp/proj/dmn/dmn_qa.py", line 114, in __init__ 
> >     current_episode = self.new_episode(memory[iter - 1]) 
> > File "/Users/baymax/Desktop/nlp/proj/dmn/dmn_qa.py", line 248, in new_episode 
> >     outputs_info=T.zeros_like(self.inp_c[0][0])) 
> > 
>
>
>
> -- 
> Pascal 
>

