When we write z_ij, it represents the output of the j-th linear model of 
hidden unit i. So when we iterate j from 1 to k, it means each maxout unit 
takes the max over its k linear pieces; the maxoutsize in the code below 
plays the role of k, and it is independent of the dropout value.
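The column-slicing trick in the quoted loop can be checked with a small 
NumPy sketch (the shapes below are made-up examples, and NumPy stands in 
for the Theano graph; the real layer would use T.dot and T.maximum):

```python
import numpy as np

rng = np.random.default_rng(0)

batch, d, m, k = 4, 3, 5, 2       # batch size, input dim, hidden units, linear pieces
W = rng.standard_normal((d, m * k))   # flattened (d x m x k) weight tensor
b = rng.standard_normal(m * k)

x = rng.standard_normal((batch, d))
z = x @ W + b                     # shape (batch, m*k): all k pieces of every unit

# maxout: for each hidden unit, take the max over its k linear pieces
maxout_out = None
for i in range(k):
    t = z[:, i::k]                # every k-th column: piece i of each unit
    maxout_out = t if maxout_out is None else np.maximum(maxout_out, t)

# one output per hidden unit; same as reshaping and reducing over the piece axis
assert maxout_out.shape == (batch, m)
assert np.allclose(maxout_out, z.reshape(batch, m, k).max(axis=2))
```

The strided slice z[:, i::k] works because the columns of z are laid out 
piece-major per unit, so taking every k-th column collects one piece from 
each of the m units.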

On Wednesday, January 27, 2016 at 6:55:50 AM UTC-6, Rijuban Rangslang wrote:
>
> Hi All,
>
> I have a few doubts about the implementation of the max-out activation 
> function. Formally, max-out is defined as:
>
>     h_i(x) = max_{j in [1,k]} z_ij, where
>
>     z_ij = x*W_ij + b_ij, with W in R^(d x m x k) and b in R^(m x k);
>     m is the number of hidden units, d the size of the input vector, and k 
> is the number of linear models.
>
>    The theano code for max-out activation when implemented in an MLP is 
>     output = activation(T.dot(input,W) + b)
>     maxout_out = None
>     for i in xrange(maxoutsize):
>        t = output[:,i::maxoutsize]
>        if maxout_out is None:
>            maxout_out = t
>        else:
>            maxout_out = T.maximum(maxout_out, t)
>
> where maxoutsize is the number of input neurons to the maxout units.
>
> What does k, the number of linear models, correspond to? Is it defined by 
> the dropout value? Also, would maxoutsize represent the number of hidden 
> units?

-- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.