Hi @VanDavv,

In the transfer-learning context of the original question, with Gluon you can 
simply pass only the last layer's parameters to the `Trainer` object. Normally 
you'd pass all of the network's parameters with `net.collect_params()`, but 
instead you provide just the parameters you want to learn/tune.

See https://mxnet.incubator.apache.org/tutorials/onnx/fine_tuning_gluon.html 
for an example; use `dense_layer.collect_params()` if you only want to train 
the last layer rather than fine-tune the whole network:

```python
trainer = gluon.Trainer(dense_layer.collect_params(), 'sgd',
                        {'learning_rate': LEARNING_RATE,
                         'wd': WDECAY,
                         'momentum': MOMENTUM})
```

[ Full content available at: 
https://github.com/apache/incubator-mxnet/issues/12392 ]