I am currently running into the following issue:
1. I have a pre-trained model, e.g. a ResNet trained on my own dataset. Assume 
it is saved with ```save_parameters()```. The keys of the saved parameters 
then have a format such as ```features.0.0.mean```.

2.1. If I use exactly the same model but initialize only the features from the 
pretrained model, the previous call does not work: 
```model.features.initialize(model_path)```. 
2.2. Starting from this pretrained model, I would like to extend its 
functionality or otherwise modify it. Say I have several 
```HybridSequential``` blocks such as ```features0, features1, ...```, which 
change the structure of the model and therefore the parameter keys.

3. In both cases, initialization fails because the corresponding keys cannot 
be found when matching the NDArray-formatted pretrained parameters against the 
modified model.

For example, the key in the pretrained NDArray file is ```features.0.0.mean```, 
but the corresponding parameter key in case 2.1 is ```0.0.mean```, and in 
case 2.2 it might be ```1.0.mean```.
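A possible workaround is to rewrite the key prefixes of the saved dict before loading it into the restructured model. Below is a minimal plain-Python sketch of that remapping; the dictionaries stand in for the parameter dicts that ```save_parameters()``` would produce, and ```remap_keys``` is a hypothetical helper, not part of the MXNet API:

```python
def remap_keys(saved, old_prefix, new_prefix):
    """Rewrite the key prefixes of a saved parameter dict so they
    match the keys expected by a restructured model."""
    remapped = {}
    for key, value in saved.items():
        if key.startswith(old_prefix):
            remapped[new_prefix + key[len(old_prefix):]] = value
        else:
            remapped[key] = value
    return remapped

# Stand-in for the dict produced by save_parameters() on the original model.
saved = {"features.0.0.mean": 0.0, "features.0.0.var": 1.0}

# Case 2.1: loading only into model.features, which expects "0.0.mean".
sub_block = remap_keys(saved, "features.", "")

# Case 2.2: a restructured model whose first block expects "1.0.mean".
restructured = remap_keys(saved, "features.0.", "1.")
```

In real code, the remapped dict would still have to be written back to disk (or fed to the loading routine) in whatever format the loader expects; this sketch only covers the key translation itself.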

However, with the deprecated ```save_params()```, which uses the layer name as 
the key, the corresponding key can be found in both cases.
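To make the contrast concrete, here is a plain-Python sketch; the dicts are hypothetical stand-ins for the two serialization schemes, not the real MXNet file format. Name-based keys survive restructuring, while position-based keys do not:

```python
# save_params() (deprecated): keys come from layer names.
by_name = {"resnet0_batchnorm0_running_mean": 0.0}

# save_parameters(): keys come from the position in the block hierarchy.
by_position = {"features.0.0.mean": 0.0}

# After wrapping the features in a new block, the layer name is unchanged,
# so a name-based lookup still succeeds:
assert "resnet0_batchnorm0_running_mean" in by_name

# But the position-based key the restructured model now expects
# ("1.0.mean") is absent, so loading fails:
assert "1.0.mean" not in by_position
```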

I hope the MXNet team can address this issue; otherwise, it makes for a poor 
user experience whenever users hit either of these cases.

[ Full content available at: 
https://github.com/apache/incubator-mxnet/issues/12334 ]