[
https://issues.apache.org/jira/browse/SINGA-329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074363#comment-16074363
]
ASF subversion and git services commented on SINGA-329:
-------------------------------------------------------
Commit b6874d4f0c368068ab1c7954a14e7590b1d5a53f in incubator-singa's branch
refs/heads/master from wangwei
[ https://git-wip-us.apache.org/repos/asf?p=incubator-singa.git;h=b6874d4 ]
SINGA-329 - Support layer freezing during training (fine-tuning)
Adding an argument 'freeze' to the forward and backward functions of
FeedForwardNet in net.py.
The backward function stops back-propagation after the 'freeze' layer.
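A rough usage sketch of the new argument (hedged: the 'freeze' keyword and
its semantics are taken from this commit message only; the surrounding calls
follow SINGA's Python API of the time, and the exact forward/backward
signatures may differ):

    from singa import layer, loss, metric
    from singa import net as ffnet

    # A small net; we fine-tune from the 'dense' layer onwards.
    net = ffnet.FeedForwardNet(loss.SoftmaxCrossEntropy(), metric.Accuracy())
    net.add(layer.Conv2D('conv1', 32, 3, 1, pad=1,
                         input_sample_shape=(3, 32, 32)))
    net.add(layer.Activation('relu1'))
    net.add(layer.Flatten('flat'))
    net.add(layer.Dense('dense', 10))

    # One fine-tuning step; tx is an image mini-batch and ty its labels,
    # prepared as in the standard SINGA training examples.
    # Layers before 'dense' are frozen: forward still runs the whole net,
    # but backward stops after computing the gradients for 'dense'.
    out = net.forward(True, tx, freeze='dense')
    lvalue = net.loss.forward(True, out, ty)
    grads = net.backward(freeze='dense')
    # Presumably 'grads' then covers only the unfrozen parameters (those
    # of 'dense' here), which are the ones to pass to the optimizer.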
> Support layer freezing during training (fine-tuning)
> ----------------------------------------------------
>
> Key: SINGA-329
> URL: https://issues.apache.org/jira/browse/SINGA-329
> Project: Singa
> Issue Type: New Feature
> Reporter: wangwei
> Assignee: wangwei
>
> During fine-tuning (e.g. fine-tuning a CNN trained over ImageNet on our own
> dataset), we may want to fix some layers (e.g. the bottom layers) and train
> the other layers (e.g. the top layers).
> This ticket adds an argument (i.e. a layer name) to the forward and backward
> functions of FeedForwardNet. Training will freeze the layers before that
> layer and compute the gradients of the parameters of that layer and the
> layers after it.
> If you want to freeze the top layers, you don't need this argument.
> Instead, simply ignore the gradients of the top layers' parameters returned
> by the backward function (see the sketch below).
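For that opposite case, a rough sketch of dropping the top layers' gradients
during the parameter update (hypothetical: the 'frozen' set and the name
matching are illustrative, and parameter-spec names are assumed to start
with their layer's name):

    from singa import optimizer

    opt = optimizer.SGD(momentum=0.9, weight_decay=1e-4)
    frozen = {'dense'}  # hypothetical: top layers to keep fixed
    epoch = 0           # current epoch, used by the optimizer for lr decay

    # 'grads' comes from net.backward() without any 'freeze' argument and
    # is aligned with net.param_specs()/net.param_values(), as in net.train().
    for s, p, g in zip(net.param_specs(), net.param_values(), grads):
        name = str(s.name)
        if any(name.startswith(f) for f in frozen):
            continue  # ignore this gradient; the parameter stays fixed
        opt.apply_with_lr(epoch, 0.01, g, p, name)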