Thanks, that's right.
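
Here is a minimal sketch of one way the approach described below can be
wired up once the mask has an integer dtype (not the exact code from the
thread; it assumes the 1s in each column of the mask are contiguous from
the top and that every column contains at least one 1; the function call
at the end is just a usage example):

import numpy as np
import theano
from theano import tensor
from theano.tensor.extra_ops import to_one_hot

x = tensor.dmatrix('x')     # the value matrix A
mask = tensor.imatrix('m')  # integer dtype, so derived indices stay integral

# index of the last 1 in each column of the mask (sum - 1)
last_idx = mask.sum(axis=0) - 1

# one_hot[c, r] == 1 where r is the last masked row of column c;
# nb_class is the symbolic mask.shape[0], as in Pascal's example below
one_hot = to_one_hot(last_idx, mask.shape[0])

# transpose so the 1s line up with (row, column) positions in x
out = x * one_hot.T

f = theano.function([x, mask], out)
print(f(np.array([[0.1, 0.2, 0.3],
                  [0.2, 0.1, 0.1],
                  [0.1, 0.2, 0.2]]),
        np.array([[1, 1, 1],
                  [1, 0, 1],
                  [0, 0, 0]], dtype='int32')))
# [[ 0.   0.2  0. ]
#  [ 0.2  0.   0.1]
#  [ 0.   0.   0. ]]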

On Wednesday, December 7, 2016 at 3:55:29 PM UTC+8, Pascal Lamblin wrote:
>
> The error message indicates that the index variable (x_index_true) has 
> to have an integer dtype. 
>
> The issue in that case is that its dtype is float64, since mask has been 
> defined as a dmatrix(). If you define it as imatrix() or lmatrix(), then 
> it should work. 
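>
> For example, a minimal sketch of that change, keeping the variable names
> from the code below:
>
> mask = tensor.imatrix('m')  # int32 mask instead of a float64 dmatrix
>
> mask_sum, x_index and x_index_true then all end up with integer dtypes,
> so to_one_hot() accepts the index.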
>
> On Tue, Dec 06, 2016, Lijun Wu wrote: 
> > But when I try to feed in M.shape[0], it fails. My code is: 
> > 
> > x = tensor.dmatrix('x') 
> > mask = tensor.dmatrix('m') 
> > mask_sum = mask.sum(axis=0) 
> > mask_sum_gt_1 = tensor.gt(mask_sum, 1) 
> > x_index = mask_sum - 2 
> > x_index_true = x_index * mask_sum_gt_1 
> > one_hot_matrix = tensor.extra_ops.to_one_hot(x_index_true, mask.shape[0]) 
> > 
> > Then it raised this error: 
> > raise TypeError('index must be integers') 
> > 
> > Am I doing anything wrong? 
> > 
> > 
> > On Wednesday, December 7, 2016 at 6:47:34 AM UTC+8, Pascal Lamblin wrote: 
> > > 
> > > Theano definitely accepts 'nb_class' as a symbolic scalar in to_one_hot(). 
> > > 
> > > >>> a = tensor.ivector() 
> > > >>> i = tensor.iscalar() 
> > > >>> b = to_one_hot(a, i) 
> > > >>> b.eval({a: [3], i: 5}) 
> > > array([[ 0.,  0.,  0.,  1.,  0.]]) 
> > > >>> b.eval({a: [3], i: 4}) 
> > > array([[ 0.,  0.,  0.,  1.]]) 
> > > 
> > > 
> > > On Tue, Dec 06, 2016, Lijun Wu wrote: 
> > > > Hi All, 
> > > > 
> > > > I want to build a one_hot with a variable number of classes, so I 
> > > > want to feed nb_class in as a TensorVariable, but how can I do this? 
> > > > Is there any other way? 
> > > > 
> > > > What I need is the following: 
> > > > I have a matrix A, for example: 
> > > > [[0.1, 0.2, 0.3] 
> > > >  [0.2, 0.1, 0.1] 
> > > >  [0.1, 0.2, 0.2]] 
> > > > 
> > > > and a mask matrix M: 
> > > > [[1, 1, 1] 
> > > >  [1, 0, 1] 
> > > >  [0, 0, 0]] 
> > > > 
> > > > I want to find the last 1 in each column of M and keep the 
> > > > corresponding value in A at that position, e.g. here the result is 
> > > > [[0, 0.2, 0] 
> > > >  [0.2, 0, 0.1] 
> > > >  [0, 0, 0]] 
> > > > 
> > > > My solution is to first compute y = M.sum(axis=0) and then feed y into 
> > > > extra_ops.to_one_hot() to build the one_hot matrix. But since M.shape[0] 
> > > > will be different each time, I want to pass nb_class as M.shape[0], and 
> > > > I don't know how to do this: to_one_hot() cannot take 'nb_class' as a 
> > > > TensorVariable. 
> > > > 
> > > > Can anyone help me with this? Thanks very much. 
> > > > 
> > > 
> > > 
> > > -- 
> > > Pascal 
> > > 
> > 
>
>
> -- 
> Pascal 
>
