zhaowwenzhong opened a new issue #11815: How do I set up a learning-rate schedule?
URL: https://github.com/apache/incubator-mxnet/issues/11815
 
 
   Could anyone tell me what is wrong with the code below?
   
   import mxnet as mx
   import mxnet.optimizer as optimizer
   ...
   lr_scheduler = mx.lr_scheduler.PolyScheduler(base_lr=0.1, pwr=2,
                                                max_update=1000)
   opt = optimizer.SGD(learning_rate=0.1, momentum=0.9, wd=0.0005,
                       rescale_grad=1.0/4, lr_scheduler=lr_scheduler)
   ...
   def _batch_callback(param):
       print(param.locals['optimizer'].lr)
   
   model = mx.mod.Module(context=ctx, symbol=sym)
   model.fit(train_dataiter,
             optimizer=opt,
             begin_epoch=begin_epoch,
             num_epoch=num_epoch,
             arg_params=arg_params,
             aux_params=aux_params,
             eval_metric=eval_metrics,
             allow_missing=True,
             batch_end_callback=_batch_callback,
             epoch_end_callback=mx.callback.do_checkpoint(prefix))
   
   During training, the printed lr is 0.1 after every iteration. I can't tell whether the learning rate is actually being adjusted during training. How can I print the learning rate that is actually used at each iteration?
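   For context: `optimizer.lr` holds only the static base learning rate, so printing it in the callback will always show 0.1. When an `lr_scheduler` is attached, the effective rate is computed per update from the scheduler. Below is a minimal pure-Python sketch of the polynomial decay that `PolyScheduler(base_lr=0.1, pwr=2, max_update=1000)` is expected to apply; the formula is my reading of MXNet's scheduler, so treat the details as an assumption:

   ```python
   def poly_lr(num_update, base_lr=0.1, pwr=2, max_update=1000, final_lr=0.0):
       """Polynomial decay: lr falls from base_lr to final_lr over max_update steps."""
       if num_update >= max_update:
           return final_lr
       frac = 1.0 - num_update / max_update
       return final_lr + (base_lr - final_lr) * frac ** pwr

   # The schedule does change over iterations, even though optimizer.lr does not:
   print(poly_lr(0))     # base_lr at the first update
   print(poly_lr(500))   # decayed value halfway through
   print(poly_lr(1000))  # final_lr once max_update is reached
   ```

   In the batch callback, printing something like `param.locals['optimizer'].lr_scheduler(param.locals['optimizer'].num_update)` (attribute names assumed from the MXNet 1.x optimizer API) should show the scheduled value rather than the fixed base `lr`.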
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services