For other beginners like me: it seems the grad system is deprecated and the MXNet Module API is the new recommended approach, so I converted the whole code to be Module-based. It ended up like this and worked well for me:
import logging
import mxnet as mx

train_iter = mx.io.NDArrayIter(train_data, train_label, batch_size, shuffle=True)
val_iter = mx.io.NDArrayIter(test_data, test_label, batch_size)

mdata = mx.sym.var('data')
# The first fully-connected layer and the corresponding activation function
fc1 = mx.sym.FullyConnected(data=mdata, num_hidden=128)
act1 = mx.sym.Activation(data=fc1, act_type="relu")
# The second fully-connected layer and the corresponding activation function
fc2 = mx.sym.FullyConnected(data=act1, num_hidden=64)
act2 = mx.sym.Activation(data=fc2, act_type="relu")
# MNIST has 10 classes
fc3 = mx.sym.FullyConnected(data=act2, num_hidden=10)
# Softmax with cross-entropy loss
mlp = mx.sym.SoftmaxOutput(data=fc3, name='softmax')

logging.getLogger().setLevel(logging.DEBUG)  # log training progress to stdout
# create a trainable module on the compute context (GPU if available)
ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()
progress = mx.callback.ProgressBar(len(train_data) / batch_size, 40)
mlp_model = mx.mod.Module(symbol=mlp, context=ctx)
mlp_model.fit(train_iter,                                 # train data
              eval_data=val_iter,                         # validation data
              optimizer='sgd',                            # use SGD to train
              optimizer_params={'learning_rate': 0.01},   # use a fixed learning rate
              eval_metric='acc',                          # report accuracy during training
              # batch_end_callback=progress,              # optional per-batch progress bar
              num_epoch=100)                              # train for at most 100 dataset passes
---
[Visit Topic](https://discuss.mxnet.apache.org/t/help-with-simple-classification/6586/3)