i have written a simple program to test mxnet which i done the same for others 
like tf and torch.
    net = nn.Sequential()
    with net.name_scope():
        net.add(
            nn.Dense(100,activation='relu'),

            nn.Dense(10,activation='relu'),
            nn.Dense(10)
        )
    net.initialize(init=init.Xavier())
    softmax_crossentropy=gluon.loss.SoftmaxCrossEntropyLoss(sparse_label=False)

    trainer = gluon.Trainer(
        net.collect_params(),'sgd',{'learning_rate': 0.1}
    )

    def acc(output, label):
        # output: (batch, num_output) float32 ndarray
        # label: (batch, ) int32 ndarray
        return (output.argmax(0)== label.astype('float32')).mean().asscalar()

    for epoch in range(100):
        train_loss,train_acc,valid_acc = 0.0,0.0,0.0
        tic=time.time()
        for data,label in training:
            with autograd.record():
                output = net(data)
                loss = softmax_crossentropy(output,label)
            loss.backward()
            trainer.step(batch_size)
            train_loss += loss.mean().asscalar()
            train_acc += acc(output, label)
        for data, label in testing:
            valid_acc += acc(net(data), label)
        print("Epoch %d: loss %.3f, train acc %.3f, test acc %.3f, in %.1f sec" 
% (
                epoch, train_loss/len(train_data), train_acc/len(train_data),
                valid_acc/len(testing), time.time()-tic))
my problem is that loss get lower and lower and accuracy never goes up and also 
seems the network doesn't get better over time , just repeating without any 
optimization.
i`m very new to mxnet and most likely i made a big mistake somewhere 
data is very simple , and one hot encoded labels.





---
[Visit 
Topic](https://discuss.mxnet.apache.org/t/help-with-simple-classification/6586/1)
 or reply to this email to respond.

You are receiving this because you enabled mailing list mode.

To unsubscribe from these emails, [click 
here](https://discuss.mxnet.apache.org/email/unsubscribe/375e41f188da0bd4a0cc701459546685c3c14afb893e012fad0a716a8ed05a6a).

Reply via email to