nudles commented on a change in pull request #697:
URL: https://github.com/apache/singa/pull/697#discussion_r433590178



##########
File path: examples/mlp/module.py
##########
@@ -56,10 +56,9 @@ def forward(self, inputs):
         x = autograd.add_bias(x, self.b1)
         return x
 
-    def loss(self, out, ty):
-        return autograd.softmax_cross_entropy(out, ty)
-
-    def optim(self, loss, dist_option, spars):
+    def train_one_batch(self, x, y, dist_option, spars):
+        out = self.forward(x)
+        loss = autograd.softmax_cross_entropy(out, y)

Review comment:
   I think a better way is to merge ReLU into the Conv2D and Linear layers by providing a flag (activation='relu') in the init of the layer.
   Meanwhile, we can also provide ReLU, Dropout, etc. as Layer subclasses.
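
   To make the suggestion concrete, below is a minimal sketch of that design. The class names and implementation are hypothetical (not SINGA's actual API): the activation is fused into Linear through a constructor flag, while ReLU also remains available as a standalone Layer subclass.

import numpy as np


class Layer:
    # Hypothetical base class used only for this sketch.
    def __call__(self, x):
        return self.forward(x)

    def forward(self, x):
        raise NotImplementedError


class ReLU(Layer):
    # Standalone activation layer, usable anywhere in a model.
    def forward(self, x):
        return np.maximum(x, 0)


class Linear(Layer):
    # activation='relu' fuses the non-linearity into the layer itself.
    def __init__(self, in_features, out_features, activation=None):
        self.W = np.random.randn(in_features, out_features) * 0.1
        self.b = np.zeros(out_features)
        self.activation = ReLU() if activation == 'relu' else None

    def forward(self, x):
        y = x @ self.W + self.b
        if self.activation is not None:
            y = self.activation(y)  # apply the fused activation
        return y


# usage: one fused layer replaces a Linear followed by a separate ReLU
hidden = Linear(2, 3, activation='relu')
out = hidden(np.ones((4, 2)))  # shape (4, 3), non-negative entries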




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
