The softmax layer (softmax(Wx + b)) is a classifier that is trained on
the output of the last fully-connected layer, and it backpropagates a
gradient so that the rest of the network is trained as well.

The SVM is a different classifier, which they connected to the same
input (x, the output of the last fully-connected layer) and which they
trained separately (without backpropagation, I think).
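In practice this usually means extracting the fully-connected activations for each image and fitting an SVM on them as fixed features. A minimal sketch using scikit-learn's LinearSVC, where `features` is a hypothetical stand-in for the extracted activations:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical data: 'features' stands in for the activations of the
# last fully-connected layer, extracted for each training image.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 4096))   # 100 samples, 4096-d features
labels = rng.integers(0, 10, size=100)    # 10 classes

# The SVM is trained on these fixed features; no gradient flows back
# into the network that produced them.
svm = LinearSVC()
svm.fit(features, labels)
preds = svm.predict(features)
```

The key point is that the network's weights are frozen from the SVM's perspective: it only ever sees the feature vectors, never the gradients.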

There is sometimes confusion in the literature between the softmax
operation itself (exp(x) / exp(x).sum(), which converts unnormalized
log-probabilities into a probability vector) and the "softmax layer", or
"logistic regression layer" (softmax(Wx + b)).
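The distinction can be made concrete in a few lines of NumPy (shapes here are hypothetical, e.g. 10 classes on a 4096-d feature vector):

```python
import numpy as np

def softmax(x):
    # The softmax *operation*: converts unnormalized log-probabilities
    # into a probability vector. Subtracting the max is the usual
    # numerical-stability trick; it does not change the result.
    e = np.exp(x - x.max())
    return e / e.sum()

# The softmax *layer* is this operation applied to an affine
# transformation of the layer's input: softmax(Wx + b).
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 4096))  # hypothetical: 10 classes, 4096-d input
b = np.zeros(10)
x = rng.normal(size=4096)        # e.g. output of the last fully-connected layer

p = softmax(W @ x + b)           # a valid probability vector over the classes
```

The operation alone has no parameters; only the layer (W, b) is trainable.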

On Thu, Nov 24, 2016, Beatriz G. wrote:
> Hi everyone, I am trying to build a CNN based on ImageNet. The paper which 
> I am following says that the architecture is formed by convolutional layers 
> and fully connected layers, and that the last layer, i.e. the output layer, 
> is followed by a softmax. Then it says that after extracting the features 
> from the last fully connected layer, it uses an SVM as the classifier.
> 
> I do not know if the input of the classifier is the output of the softmax.
> 
> And I thought that the softmax was a classifier, so I must be wrong.
> 
> 
> Regards.
> 
> -- 
> 
> --- 
> You received this message because you are subscribed to the Google Groups 
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.


-- 
Pascal
