[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-08-06 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r311068821
 
 

 ##
 File path: src/api/config.i
 ##
 @@ -0,0 +1,33 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+
+
+// Pass in cmake configurations to swig
+#define USE_CUDA 1
+#define USE_CUDNN 1
+#define USE_OPENCL 0
+#define USE_PYTHON 1
+#define USE_MKLDNN 1
+#define USE_JAVA 0
+#define CUDNN_VERSION 7401
+
+// SINGA version
+#define SINGA_MAJOR_VERSION 1
 
 Review comment:
   In addition to the above, I also ran an 8 * K80 multi-GPU training and evaluation test with the CIFAR-10 dataset on ResNet-50. It reduces the training loss from 3983.8 to 345.7 in about 30 epochs, and brings the evaluation accuracy to 86.8%. However, this does not include the synchronization of the running mean and variance before the evaluation phase:
   ```
   Epoch=0: 100%|██| 195/195 [06:06<00:00,  1.91s/it]Training loss = 3983.820557, training accuracy = 0.225260
   Test accuracy = 0.347556
   Epoch=1: 100%|██| 195/195 [06:17<00:00,  1.94s/it]Training loss = 2628.622070, training accuracy = 0.379768
   Test accuracy = 0.437700
   Epoch=2: 100%|██| 195/195 [06:12<00:00,  1.89s/it]Training loss = 2347.072266, training accuracy = 0.448558
   Test accuracy = 0.459936
   Epoch=3: 100%|██| 195/195 [06:13<00:00,  1.88s/it]Training loss = 2075.987305, training accuracy = 0.517348
   Test accuracy = 0.548978
   Epoch=4: 100%|██| 195/195 [06:19<00:00,  1.97s/it]Training loss = 1890.109985, training accuracy = 0.566847
   Test accuracy = 0.594451
   Epoch=5: 100%|██| 195/195 [06:13<00:00,  1.92s/it]Training loss = 1720.395142, training accuracy = 0.606911
   Test accuracy = 0.633413
   Epoch=6: 100%|██| 195/195 [06:10<00:00,  1.92s/it]Training loss = 1555.737549, training accuracy = 0.645753
   Test accuracy = 0.659054
   Epoch=7: 100%|██| 195/195 [06:14<00:00,  1.91s/it]Training loss = 1385.688477, training accuracy = 0.687220
   Test accuracy = 0.709836
   Epoch=8: 100%|██| 195/195 [06:20<00:00,  1.97s/it]Training loss = 1269.426270, training accuracy = 0.714523
   Test accuracy = 0.735477
   Epoch=9: 100%|██| 195/195 [06:15<00:00,  1.91s/it]Training loss = 1137.953979, training accuracy = 0.746054
   Test accuracy = 0.745393
   Epoch=10: 100%|██| 195/195 [06:11<00:00,  1.88s/it]Training loss = 1031.773071, training accuracy = 0.770353
   Test accuracy = 0.750501
   Epoch=11: 100%|██| 195/195 [06:10<00:00,  1.89s/it]Training loss = 956.600037, training accuracy = 0.788261
   Test accuracy = 0.44
   Epoch=12: 100%|██| 195/195 [06:16<00:00,  1.92s/it]Training loss = 881.050171, training accuracy = 0.804167
   Test accuracy = 0.793369
   Epoch=13: 100%|██| 195/195 [06:16<00:00,  1.92s/it]Training loss = 828.298828, training accuracy = 0.818309
   Test accuracy = 0.807692
   Epoch=14: 100%|██| 195/195 [06:11<00:00,  1.90s/it]Training loss = 790.558838, training accuracy = 0.823918
   Test accuracy = 0.795373
   Epoch=15: 100%|██| 195/195 [06:13<00:00,  1.90s/it]Training loss = 740.679871, training accuracy = 0.833734
   Test accuracy = 0.816707
   Epoch=16: 100%|██| 195/195 [06:20<00:00,  1.95s/it]Training loss = 691.391479, training accuracy = 0.846855
   Test accuracy = 0.818510
   Epoch=17: 100%|██| 195/195 [06:16<00:00,  1.89s/it]Training loss = 657.708130, training accuracy = 0.853986
   Test accuracy = 0.826122
   Epoch=18: 100%|██| 195/195 [06:10<00:00,  1.88s/it]Training loss = 627.918579, training accuracy = 0.860216
   Test accuracy = 0.844752
   Epoch=19: 100%|██| 195/195 [06:13<00:00,  1.91s/it]Training loss = 592.768982, training accuracy = 0.869551
   Test accuracy = 0.845653
   Epoch=20: 100%|██| 195/195 [06:19<00:00,  1.97s/it]Training loss = 561.560608, training accuracy = 0.875060
   Test accuracy = 0.835938
   Epoch=21: 100%|██| 195/195 [06:15<00:00,  1.97s/it]Training loss = 533.083740, training accuracy = 0.881370
   Test accuracy = 0.849860
   Epoch=22: 100%|██| 195/195 [06:12<00:00,
   ```

[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-08-06 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r311074176
 
 

 ##
 File path: src/api/config.i
 ##
 @@ -0,0 +1,33 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+
+
+// Pass in cmake configurations to swig
+#define USE_CUDA 1
+#define USE_CUDNN 1
+#define USE_OPENCL 0
+#define USE_PYTHON 1
+#define USE_MKLDNN 1
+#define USE_JAVA 0
+#define CUDNN_VERSION 7401
+
+// SINGA version
+#define SINGA_MAJOR_VERSION 1
 
 Review comment:
   From the above, we can now train a simple CNN (MNIST dataset) and a ResNet (CIFAR-10 dataset). The remaining task is the synchronization of the running mean and variance.
   I tried putting the running mean and var in the return list of backward:
   
   ```
   def backward(self, dy):
       assert training is True and hasattr(
           self, "cache"
       ), "Please set training as True before do BP. "

       x, scale, mean, var = self.cache
       if isinstance(self.handle, singa.CudnnBatchNormHandle):
           dx, ds, db = singa.GpuBatchNormBackward(
               self.handle, dy, x, scale, mean, var
           )
       else:
           dx, ds, db = singa.CpuBatchNormBackward(
               self.handle, dy, x, scale, mean, var
           )

       # return dx, ds, db
       return dx, ds, db, self.running_mean, self.running_var
   ```
   and hoped to collect them with:
   ```
   # all-reduce running mean and var
   for p, g in autograd.backward(loss):
       if (p.requires_grad == False) and (p.stores_grad == False):
           all_reduce(p)
   ```
   
   However, this returns the following error:
   ```
   Traceback (most recent call last):
 File "resnet_multigpu.py", line 163, in <module>
   for p, g in autograd.backward(loss):
 File "/usr/local/lib/python3.5/dist-packages/singa/autograd.py", line 136, in backward
   % (len(op.src), len(dxs))
   AssertionError: the number of src ops (=3) and dx (=5) not match
   ```
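   One way around this mismatch would be to leave `backward` returning exactly one gradient per input, and instead synchronize the running statistics in a separate pass before evaluation. A minimal pure-Python sketch of that idea (the `all_reduce_mean` helper and the plain-list "tensors" are hypothetical stand-ins, not SINGA API):

```python
def all_reduce_mean(values):
    # Toy stand-in for an all-reduce that averages one running
    # statistic across workers and broadcasts the result back.
    avg = sum(values) / len(values)
    return [avg for _ in values]

# Hypothetical per-worker running means of one BatchNorm layer.
running_means = [0.5, 1.5, 1.0, 1.0]

# Synchronize once before the evaluation phase, outside
# autograd.backward(), so len(dxs) still matches len(op.src).
running_means = all_reduce_mean(running_means)
```

   Because the synchronization happens outside the autograd graph, the assertion on the number of gradients is never triggered.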
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-08-06 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r311056639
 
 

 ##
 File path: src/api/config.i
 ##
 @@ -0,0 +1,33 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+
+
+// Pass in cmake configurations to swig
+#define USE_CUDA 1
+#define USE_CUDNN 1
+#define USE_OPENCL 0
+#define USE_PYTHON 1
+#define USE_MKLDNN 1
+#define USE_JAVA 0
+#define CUDNN_VERSION 7401
+
+// SINGA version
+#define SINGA_MAJOR_VERSION 1
 
 Review comment:
   Updated on 6th August: I fixed a bug in commit 0616000 concerning the number of parameters in the all-reduce. Then I ran an 8 * K80 multi-GPU training and evaluation test on a simple CNN with the MNIST dataset. It reduces the training loss from 802.7 to 42.2 in about 30 epochs:
   ```
   Epoch=0: 100%|██| 117/117 [00:01<00:00, 92.86it/s]Training loss = 802.659485, training accuracy = 0.713825
   Test accuracy = 0.920025
   Epoch=1: 100%|██| 117/117 [00:01<00:00, 93.42it/s]Training loss = 246.589371, training accuracy = 0.916767
   Test accuracy = 0.956106
   Epoch=2: 100%|██| 117/117 [00:01<00:00, 94.04it/s]Training loss = 175.012894, training accuracy = 0.941106
   Test accuracy = 0.967208
   Epoch=3: 100%|██| 117/117 [00:01<00:00, 95.66it/s]Training loss = 144.684052, training accuracy = 0.951539
   Test accuracy = 0.970806
   Epoch=4: 100%|██| 117/117 [00:01<00:00, 102.59it/s]Training loss = 120.399704, training accuracy = 0.959402
   Test accuracy = 0.976049
   Epoch=5: 100%|██| 117/117 [00:01<00:00, 102.79it/s]Training loss = 107.832191, training accuracy = 0.963709
   Test accuracy = 0.975946
   Epoch=6: 100%|██| 117/117 [00:01<00:00, 102.70it/s]Training loss = 96.289490, training accuracy = 0.967014
   Test accuracy = 0.979441
   Epoch=7: 100%|██| 117/117 [00:01<00:00, 102.34it/s]Training loss = 88.031815, training accuracy = 0.970436
   Test accuracy = 0.980983
   Epoch=8: 100%|██| 117/117 [00:01<00:00, 101.81it/s]Training loss = 79.349884, training accuracy = 0.973090
   Test accuracy = 0.980058
   Epoch=9: 100%|██| 117/117 [00:01<00:00, 101.82it/s]Training loss = 77.825607, training accuracy = 0.974342
   Test accuracy = 0.977282
   Epoch=10: 100%|██| 117/117 [00:01<00:00, 101.97it/s]Training loss = 74.710297, training accuracy = 0.974576
   Test accuracy = 0.983861
   Epoch=11: 100%|██| 117/117 [00:01<00:00, 101.98it/s]Training loss = 69.400230, training accuracy = 0.976162
   Test accuracy = 0.982936
   Epoch=12: 100%|██| 117/117 [00:01<00:00, 102.03it/s]Training loss = 65.100449, training accuracy = 0.978148
   Test accuracy = 0.983553
   Epoch=13: 100%|██| 117/117 [00:01<00:00, 102.17it/s]Training loss = 65.113991, training accuracy = 0.978249
   Test accuracy = 0.986534
   Epoch=14: 100%|██| 117/117 [00:01<00:00, 101.83it/s]Training loss = 63.065636, training accuracy = 0.978566
   Test accuracy = 0.984683
   Epoch=15: 100%|██| 117/117 [00:01<00:00, 102.11it/s]Training loss = 58.334709, training accuracy = 0.980018
   Test accuracy = 0.983758
   Epoch=16: 100%|██| 117/117 [00:01<00:00, 102.16it/s]Training loss = 58.280094, training accuracy = 0.980285
   Test accuracy = 0.983655
   Epoch=17: 100%|██| 117/117 [00:01<00:00, 102.15it/s]Training loss = 53.226196, training accuracy = 0.981420
   Test accuracy = 0.985197
   Epoch=18: 100%|██| 117/117 [00:01<00:00, 102.15it/s]Training loss = 55.968140, training accuracy = 0.980786
   Test accuracy = 0.982422
   Epoch=19: 100%|██| 117/117 [00:01<00:00, 102.14it/s]Training loss = 52.761921, training accuracy = 0.982489
   Test accuracy = 0.985814
   Epoch=20: 100%|██| 117/117 [00:01<00:00, 101.86it/s]Training loss = 51.989666, training accuracy = 0.982973
   Test accuracy = 0.983758
   Epoch=21: 100%|██| 117/117 [00:01<00:00, 101.91it/s]Training loss = 52.571381, training accuracy = 0.982455
   Test accuracy = 0.987973
   Epoch=22: 100%|██| 117/117 [00:01<00:00, 101.99it/s]Training loss = 49.347313, training accuracy = 0.983140
   ```
 

[GitHub] [incubator-singa] joddiy commented on issue #498: SINGA-475 add Pow operator

2019-08-05 Thread GitBox
joddiy commented on issue #498: SINGA-475 add Pow operator
URL: https://github.com/apache/incubator-singa/pull/498#issuecomment-518464766
 
 
   ready for merge




[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-08-06 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r311068821
 
 

 ##
 File path: src/api/config.i
 ##
 @@ -0,0 +1,33 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+
+
+// Pass in cmake configurations to swig
+#define USE_CUDA 1
+#define USE_CUDNN 1
+#define USE_OPENCL 0
+#define USE_PYTHON 1
+#define USE_MKLDNN 1
+#define USE_JAVA 0
+#define CUDNN_VERSION 7401
+
+// SINGA version
+#define SINGA_MAJOR_VERSION 1
 
 Review comment:
   In addition to the above, I also ran an 8 * K80 multi-GPU training and evaluation test with the CIFAR-10 dataset on ResNet-50. It reduces the training loss from 3983.8 to 35.56 in 100 epochs, and brings the evaluation accuracy to 90.6% (maximum at epoch 90). However, this does not include the synchronization of the running mean and variance before the evaluation phase:
   ```
   Epoch=0: 100%|██| 195/195 [06:06<00:00,  1.91s/it]Training loss = 3983.820557, training accuracy = 0.225260
   Test accuracy = 0.347556
   Epoch=1: 100%|██| 195/195 [06:17<00:00,  1.94s/it]Training loss = 2628.622070, training accuracy = 0.379768
   Test accuracy = 0.437700
   Epoch=2: 100%|██| 195/195 [06:12<00:00,  1.89s/it]Training loss = 2347.072266, training accuracy = 0.448558
   Test accuracy = 0.459936
   Epoch=3: 100%|██| 195/195 [06:13<00:00,  1.88s/it]Training loss = 2075.987305, training accuracy = 0.517348
   Test accuracy = 0.548978
   Epoch=4: 100%|██| 195/195 [06:19<00:00,  1.97s/it]Training loss = 1890.109985, training accuracy = 0.566847
   Test accuracy = 0.594451
   Epoch=5: 100%|██| 195/195 [06:13<00:00,  1.92s/it]Training loss = 1720.395142, training accuracy = 0.606911
   Test accuracy = 0.633413
   Epoch=6: 100%|██| 195/195 [06:10<00:00,  1.92s/it]Training loss = 1555.737549, training accuracy = 0.645753
   Test accuracy = 0.659054
   Epoch=7: 100%|██| 195/195 [06:14<00:00,  1.91s/it]Training loss = 1385.688477, training accuracy = 0.687220
   Test accuracy = 0.709836
   Epoch=8: 100%|██| 195/195 [06:20<00:00,  1.97s/it]Training loss = 1269.426270, training accuracy = 0.714523
   Test accuracy = 0.735477
   Epoch=9: 100%|██| 195/195 [06:15<00:00,  1.91s/it]Training loss = 1137.953979, training accuracy = 0.746054
   Test accuracy = 0.745393
   Epoch=10: 100%|██| 195/195 [06:11<00:00,  1.88s/it]Training loss = 1031.773071, training accuracy = 0.770353
   Test accuracy = 0.750501
   Epoch=11: 100%|██| 195/195 [06:10<00:00,  1.89s/it]Training loss = 956.600037, training accuracy = 0.788261
   Test accuracy = 0.44
   Epoch=12: 100%|██| 195/195 [06:16<00:00,  1.92s/it]Training loss = 881.050171, training accuracy = 0.804167
   Test accuracy = 0.793369
   Epoch=13: 100%|██| 195/195 [06:16<00:00,  1.92s/it]Training loss = 828.298828, training accuracy = 0.818309
   Test accuracy = 0.807692
   Epoch=14: 100%|██| 195/195 [06:11<00:00,  1.90s/it]Training loss = 790.558838, training accuracy = 0.823918
   Test accuracy = 0.795373
   Epoch=15: 100%|██| 195/195 [06:13<00:00,  1.90s/it]Training loss = 740.679871, training accuracy = 0.833734
   Test accuracy = 0.816707
   Epoch=16: 100%|██| 195/195 [06:20<00:00,  1.95s/it]Training loss = 691.391479, training accuracy = 0.846855
   Test accuracy = 0.818510
   Epoch=17: 100%|██| 195/195 [06:16<00:00,  1.89s/it]Training loss = 657.708130, training accuracy = 0.853986
   Test accuracy = 0.826122
   Epoch=18: 100%|██| 195/195 [06:10<00:00,  1.88s/it]Training loss = 627.918579, training accuracy = 0.860216
   Test accuracy = 0.844752
   Epoch=19: 100%|██| 195/195 [06:13<00:00,  1.91s/it]Training loss = 592.768982, training accuracy = 0.869551
   Test accuracy = 0.845653
   Epoch=20: 100%|██| 195/195 [06:19<00:00,  1.97s/it]Training loss = 561.560608, training accuracy = 0.875060
   Test accuracy = 0.835938
   Epoch=21: 100%|██| 195/195 [06:15<00:00,  1.97s/it]Training loss = 533.083740, training accuracy = 0.881370
   Test accuracy = 0.849860
   Epoch=22: 100%|██| 195/195 
   ```

[GitHub] [incubator-singa] nudles merged pull request #493: SINGA-473 Autograd Trigonometry: Backward Test

2019-08-06 Thread GitBox
nudles merged pull request #493: SINGA-473 Autograd Trigonometry: Backward Test
URL: https://github.com/apache/incubator-singa/pull/493
 
 
   




[GitHub] [incubator-singa] nudles merged pull request #498: SINGA-475 add Pow operator

2019-08-09 Thread GitBox
nudles merged pull request #498: SINGA-475 add Pow operator
URL: https://github.com/apache/incubator-singa/pull/498
 
 
   




[GitHub] [incubator-singa] nudles merged pull request #494: SINGA-475 add SoftPlus operator

2019-08-09 Thread GitBox
nudles merged pull request #494: SINGA-475 add SoftPlus operator
URL: https://github.com/apache/incubator-singa/pull/494
 
 
   




[GitHub] [incubator-singa] nudles merged pull request #500: SINGA-478 (Python 3 uses __itruediv__ instead of __idiv__)

2019-08-09 Thread GitBox
nudles merged pull request #500: SINGA-478 (Python 3 uses __itruediv__ instead 
of __idiv__)
URL: https://github.com/apache/incubator-singa/pull/500
 
 
   




[GitHub] [incubator-singa] chrishkchris commented on issue #488: SINGA -475 add Sign operator to singa

2019-08-09 Thread GitBox
chrishkchris commented on issue #488: SINGA -475 add Sign operator to singa
URL: https://github.com/apache/incubator-singa/pull/488#issuecomment-519856801
 
 
   In my opinion:
   For the function y = sign(x), y = 1 when x > 0 and y = -1 when x < 0. Therefore, the derivative is 0 everywhere except at x = 0.
   The function y = sign(x) is discontinuous at x = 0, where the derivative is mathematically undefined. However, an undefined derivative would lead to a NaN, so we use a derivative of zero at x = 0 as well.
   
   As a result, dx is always 0, while the output shape (size of the array) is the same as the input shape.
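   The behaviour described above can be illustrated with a small pure-Python sketch (plain lists stand in for tensors here; in SINGA, `singa.Sign` and `singa.MultFloat(dy, 0.0)` do the equivalent on device):

```python
def sign(x):
    # Elementwise y = sign(x): 1 for x > 0, -1 for x < 0, and 0 at x = 0.
    return [1.0 if v > 0 else -1.0 if v < 0 else 0.0 for v in x]

def sign_backward(dy):
    # The derivative of sign(x) is 0 wherever it is defined; we also use
    # 0 at x = 0, since an undefined derivative would propagate NaN.
    # dx keeps the same shape (length) as dy.
    return [0.0 for _ in dy]

y = sign([3.0, -2.0, 0.0])
dx = sign_backward([0.1, 0.2, 0.3])
```

   So dx is identically zero, but its shape still matches the input, which is what the autograd engine requires.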




[GitHub] [incubator-singa] nudles commented on a change in pull request #488: SINGA -475 add Sign operator to singa

2019-08-09 Thread GitBox
nudles commented on a change in pull request #488: SINGA -475 add Sign operator 
to singa
URL: https://github.com/apache/incubator-singa/pull/488#discussion_r312397599
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -1828,3 +1828,21 @@ def backward(self, dy):
 
 def leakyrelu(x, a=0.01):
     return LeakyRelu(a)(x)[0]
+
+
+class Sign(Operation):
+    def __init__(self):
+        super(Sign, self).__init__()
+
+    def forward(self, a):
+        if training:
+            self.input = a
+        return singa.Sign(a)
+
+    def backward(self, dy):
+        dx = singa.MultFloat(dy, 0.0)
 
 Review comment:
   Is dx always 0?




[GitHub] [incubator-singa] nudles commented on issue #500: SINGA-478 (Python 3 uses __itruediv__ instead of __idiv__)

2019-08-08 Thread GitBox
nudles commented on issue #500: SINGA-478 (Python 3 uses __itruediv__ instead 
of __idiv__)
URL: https://github.com/apache/incubator-singa/pull/500#issuecomment-519736627
 
 
   Should we remove `__idiv__` if it is not used by Py3?
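   For context, on Python 3 the in-place division operator `/=` dispatches only to `__itruediv__`; `__idiv__` is never consulted. A quick sketch (the `Counter` class is a made-up example, not SINGA code):

```python
class Counter:
    def __init__(self, v):
        self.v = v

    def __itruediv__(self, other):
        # Python 3 dispatches `x /= y` here.
        self.v /= other
        return self

    def __idiv__(self, other):
        # Python 2 only; Python 3 never calls this method.
        raise AssertionError("unreachable on Python 3")

c = Counter(10.0)
c /= 4  # calls __itruediv__
```

   So under Python 3, a stray `__idiv__` is dead code.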






[GitHub] [incubator-singa] chrishkchris commented on issue #500: SINGA-478 (Python 3 uses __itruediv__ instead of __idiv__)

2019-08-08 Thread GitBox
chrishkchris commented on issue #500: SINGA-478 (Python 3 uses __itruediv__ 
instead of __idiv__)
URL: https://github.com/apache/incubator-singa/pull/500#issuecomment-519749290
 
 
   Yes, I guess we have not used py2 for a long time (e.g. the new dockerfile uses Miniconda3 py36 and py37). I have resubmitted the file, just changing `__idiv__` to `__itruediv__`.






[GitHub] [incubator-singa] nudles merged pull request #492: Make singa use multiple memory pools

2019-08-08 Thread GitBox
nudles merged pull request #492: Make singa use multiple memory pools
URL: https://github.com/apache/incubator-singa/pull/492
 
 
   




[GitHub] [incubator-singa] nudles merged pull request #485: SINGA -475 add Sub operator implementation to singa

2019-08-08 Thread GitBox
nudles merged pull request #485: SINGA -475 add Sub operator implementation to 
singa
URL: https://github.com/apache/incubator-singa/pull/485
 
 
   




[GitHub] [incubator-singa] chrishkchris edited a comment on issue #500: SINGA-478 (Python 3 uses __itruediv__ instead of __idiv__)

2019-08-08 Thread GitBox
chrishkchris edited a comment on issue #500: SINGA-478 (Python 3 uses 
__itruediv__ instead of __idiv__)
URL: https://github.com/apache/incubator-singa/pull/500#issuecomment-519749290
 
 
   Yes, I guess we have not used py2 for a long time (e.g. the new dockerfile uses Miniconda3 py36 and py37). I have resubmitted the file, just changing `__idiv__` to `__itruediv__`.
   
   (If we still consider py2, we can keep it.)




[GitHub] [incubator-singa] nudles merged pull request #486: SINGA -475 add Sqrt operator to singa

2019-08-08 Thread GitBox
nudles merged pull request #486: SINGA -475 add Sqrt operator to singa
URL: https://github.com/apache/incubator-singa/pull/486
 
 
   




[GitHub] [incubator-singa] nudles commented on issue #487: SINGA -475 add Log operator to singa

2019-08-08 Thread GitBox
nudles commented on issue #487: SINGA -475 add Log operator to singa
URL: https://github.com/apache/incubator-singa/pull/487#issuecomment-519785209
 
 
   There are conflicts. Two solutions:
   1. [Merge this branch with the latest master](https://docs.fast.ai/dev/git.html#how-to-keep-your-feature-branch-up-to-date).
   2. Checkout a new branch from the [latest master](https://github.com/apache/incubator-singa/); copy the added code into autograd.py and test_operations.py; send a new PR.




[GitHub] [incubator-singa] chrishkchris commented on issue #488: SINGA -475 add Sign operator to singa

2019-08-09 Thread GitBox
chrishkchris commented on issue #488: SINGA -475 add Sign operator to singa
URL: https://github.com/apache/incubator-singa/pull/488#issuecomment-519876368
 
 
   There are conflicts, and there are two solutions (the same as with the Log operator):
   1. Merge this branch with the latest master.
   2. Checkout a new branch from the latest master, then copy the added code into autograd.py and test_operations.py and send a new PR.
   




[GitHub] [incubator-singa] chrishkchris commented on issue #499: SINGA -475 add Div operator

2019-08-09 Thread GitBox
chrishkchris commented on issue #499: SINGA -475 add Div operator
URL: https://github.com/apache/incubator-singa/pull/499#issuecomment-519877683
 
 
   There are conflicts, and there are two solutions (the same as with the Log operator):
   1. Merge this branch with the latest master.
   2. Checkout a new branch from the latest master, then copy the added code into autograd.py and test_operations.py and send a new PR.
   




[GitHub] [incubator-singa] chrishkchris edited a comment on issue #488: SINGA -475 add Sign operator to singa

2019-08-09 Thread GitBox
chrishkchris edited a comment on issue #488: SINGA -475 add Sign operator to 
singa
URL: https://github.com/apache/incubator-singa/pull/488#issuecomment-519876368
 
 
   There are conflicts, and there are two solutions (the same as with the Log operator):
   1. Merge this branch with the latest master.
   2. Checkout a new branch from the latest master, then copy the added code into autograd.py and test_operations.py and send a new PR.
   
   (I guess this may be because different PRs added new functions at the same line, so git detected the changes as conflicting.)








[GitHub] [incubator-singa] chrishkchris edited a comment on issue #488: SINGA -475 add Sign operator to singa

2019-08-09 Thread GitBox
chrishkchris edited a comment on issue #488: SINGA -475 add Sign operator to 
singa
URL: https://github.com/apache/incubator-singa/pull/488#issuecomment-519856801
 
 
   In my opinion:
   For the function y = sign(x), y = 1 when x > 0 and y = -1 when x < 0. 
Therefore, the derivative is 0 everywhere except at x = 0.
   The function y = sign(x) is discontinuous at x = 0, where the derivative is 
mathematically undefined. However, an undefined derivative leads to NaN values, 
so we use a derivative of zero at x = 0 as well.
   
   As a result, dx is always 0, while the output shape (size of the array) is 
the same as the input shape.
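As a standalone illustration of this rule (a hypothetical numpy sketch, not SINGA's actual implementation), the backward pass just returns zeros with the input's shape:

```python
import numpy as np

def sign_backward(x, dy):
    # The derivative of sign(x) is defined as 0 everywhere
    # (including at x = 0, to avoid NaN), so dx is all zeros
    # with the same shape as the input.
    return np.zeros_like(x) * dy

x = np.array([-3.0, 0.0, 2.5], dtype=np.float32)
dx = sign_backward(x, np.ones_like(x))
print(dx)  # [0. 0. 0.]
```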




[GitHub] [incubator-singa] chrishkchris commented on issue #488: SINGA -475 add Sign operator to singa

2019-08-09 Thread GitBox
chrishkchris commented on issue #488: SINGA -475 add Sign operator to singa
URL: https://github.com/apache/incubator-singa/pull/488#issuecomment-519866518
 
 
   This is what tensorflow uses:
   ```cpp
   Status SignGrad(const Scope& scope, const Operation& op,
                   const std::vector<Output>& grad_inputs,
                   std::vector<Output>* grad_outputs) {
     auto shape = Shape(scope, op.input(0));
     auto zero = Cast(scope, Const(scope, 0.0), op.input(0).type());
     auto dx = Fill(scope, shape, zero);
     grad_outputs->push_back(dx);
     return scope.status();
   }
   REGISTER_GRADIENT_OP("Sign", SignGrad);
   ```
   It seems to fill dx with all zeros.






[GitHub] [incubator-singa] chrishkchris edited a comment on issue #488: SINGA -475 add Sign operator to singa

2019-08-09 Thread GitBox
chrishkchris edited a comment on issue #488: SINGA -475 add Sign operator to 
singa
URL: https://github.com/apache/incubator-singa/pull/488#issuecomment-519866518
 
 
   This is what tensorflow uses: 
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/gradients/math_grad.cc#L268
   ```cpp
   Status SignGrad(const Scope& scope, const Operation& op,
                   const std::vector<Output>& grad_inputs,
                   std::vector<Output>* grad_outputs) {
     auto shape = Shape(scope, op.input(0));
     auto zero = Cast(scope, Const(scope, 0.0), op.input(0).type());
     auto dx = Fill(scope, shape, zero);
     grad_outputs->push_back(dx);
     return scope.status();
   }
   REGISTER_GRADIENT_OP("Sign", SignGrad);
   ```
   It seems to fill dx with all zeros using the dy shape.




[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #488: SINGA -475 add Sign operator to singa

2019-08-09 Thread GitBox
chrishkchris commented on a change in pull request #488: SINGA -475 add Sign 
operator to singa
URL: https://github.com/apache/incubator-singa/pull/488#discussion_r311838195
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -1828,3 +1828,18 @@ def backward(self, dy):
 
 def leakyrelu(x, a=0.01):
 return LeakyRelu(a)(x)[0]
+
+
+class Sign(Operation):
+def forward(self, a):
 
 Review comment:
   Thanks for the modification. Below is the reason why this is needed.
   
   This function passes the name "Sign" to the superclass (Operation) 
initializer.
   The superclass Operation initializer is as follows:
   ```python
   class Operation(object):
       op_count = 0

       def __init__(self, name=None):
           if name is None:
               self.name = "{}#{}".format(
                   self.__class__.__name__, Operation.op_count
               )
               Operation.op_count += 1
           else:
               self.name = name
   ```
   If there is no initializer in the subclass, the superclass initializer will 
be used instead, with name=None.
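This default-naming behavior can be demonstrated with a standalone copy of the initializer (a minimal sketch for illustration only; the real class lives in singa.autograd):

```python
# Standalone copy of the naming logic, for illustration only.
class Operation(object):
    op_count = 0

    def __init__(self, name=None):
        if name is None:
            self.name = "{}#{}".format(
                self.__class__.__name__, Operation.op_count
            )
            Operation.op_count += 1
        else:
            self.name = name

class Sign(Operation):
    pass  # no __init__ here, so Operation.__init__ runs with name=None

print(Sign().name)             # Sign#0
print(Sign().name)             # Sign#1
print(Sign(name="Sign").name)  # Sign
```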






[GitHub] [incubator-singa] chrishkchris commented on issue #487: SINGA -475 add Log operator to singa

2019-08-09 Thread GitBox
chrishkchris commented on issue #487: SINGA -475 add Log operator to singa
URL: https://github.com/apache/incubator-singa/pull/487#issuecomment-519882837
 
 
   Yes, could pinpom please merge your branch with the latest master branch, or 
send a new PR after copying the new code.
   
   I guess this may be because different PRs insert new code at the same line, 
so git detected them as different ways of changing the code and hence reported 
conflicts. 










[GitHub] [incubator-singa] chrishkchris edited a comment on issue #499: SINGA -475 add Div operator

2019-08-09 Thread GitBox
chrishkchris edited a comment on issue #499: SINGA -475 add Div operator
URL: https://github.com/apache/incubator-singa/pull/499#issuecomment-519877683
 
 
   There are conflicts, and there are two solutions (the same as what happened 
with the log operator):
   1. Merge this branch with the latest master.
   2. Check out a new branch from the latest master, then copy the added code 
into autograd.py and test_operations.py and send a new PR.
   
   (I guess this may be because different PRs put the new functions at the same 
line, so git detected them as different ways of changing the code and hence 
reported conflicts.)




[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-08-01 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r309709702
 
 

 ##
 File path: src/api/config.i
 ##
 @@ -0,0 +1,33 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+
+
+// Pass in cmake configurations to swig
+#define USE_CUDA 1
+#define USE_CUDNN 1
+#define USE_OPENCL 0
+#define USE_PYTHON 1
+#define USE_MKLDNN 1
+#define USE_JAVA 0
+#define CUDNN_VERSION 7401
+
+// SINGA version
+#define SINGA_MAJOR_VERSION 1
 
 Review comment:
   Updated on 1 August 2019:
   
   Concerning the above error, I found that there is a difference between the 
implementations of `class _BatchNorm2d(Operation):` in the master branch and 
the dist_new branch.
   
   In autograd.py, both the master branch and the dist_new branch have modified 
(or debugged) the conv2d and batchnorm operators, but they modified them 
differently. Meanwhile, conv2d in both the master branch and the dist_new 
branch can train and reduce the loss of the MNIST simple CNN, so there is no 
big problem there. However, batch normalization is a much more complex case, 
because it includes non-training variables, namely the running means and 
running variances.
   
   In the master branch, the running means and running variances (non-training 
variables) are in the forward function: `def forward(self, x, scale, bias, 
running_mean, running_var):`
   
https://github.com/apache/incubator-singa/blob/master/python/singa/autograd.py#L1099
   
   When I run the code using the master branch dockerfile, the error is as 
follows:
   ```
   root@26c9db193eb0:~/incubator-singa/examples/autograd# python3 resnet.py
   Start intialization
 0%|
  | 0/200 [00:00
   for p, g in autograd.backward(loss):
 File "/root/incubator-singa/build/python/singa/autograd.py", line 135, in 
backward
   % (len(op.src), len(dxs))
   AssertionError: the number of src ops (=5) and dx (=3) not match
   ```
   I think the error is because running_mean and running_var are among the 
forward function's input arguments but are not training variables, so there 
are supposed to be 3 src ops, but 5 were found.
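The mismatch can be sketched with a toy check that mirrors the assertion message (a hypothetical illustration, not SINGA's actual engine code): backward() is expected to produce one gradient per recorded forward input, so passing the non-trainable stats to forward inflates the src count.

```python
# Toy illustration (not SINGA's engine): backward() must yield one
# gradient per forward input. If running_mean and running_var are
# passed to forward, the op records 5 src inputs, but backward only
# produces 3 gradients (for x, scale, bias), so the counts mismatch.
def check_backward(num_forward_inputs, grads):
    assert num_forward_inputs == len(grads), \
        "the number of src ops (=%d) and dx (=%d) not match" % (
            num_forward_inputs, len(grads))

check_backward(3, ["dx", "dscale", "dbias"])      # dist_new style: OK
try:
    check_backward(5, ["dx", "dscale", "dbias"])  # master style: fails
except AssertionError as e:
    print(e)  # the number of src ops (=5) and dx (=3) not match
```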
   
   Meanwhile, the dist_new branch has modified the batchnorm function (commit 
2b3a857 by user ubuntu on Apr14) by moving the input arguments running_mean and 
running_var into the initialization function:
   `def __init__(self, handle, running_mean, running_var, name=None):`
   `def forward(self, x, scale, bias):`
   
https://github.com/xuewanqi/incubator-singa/blob/dist_new/python/singa/autograd.py#L1096
   This one runs successfully, but I am not sure whether it can train and 
reduce the loss.
   
   Next, I will try training the ResNet with a real dataset to see whether it 
can reduce the loss.




[GitHub] [incubator-singa] ShichengChen opened a new pull request #496: Mean

2019-07-31 Thread GitBox
ShichengChen opened a new pull request #496: Mean
URL: https://github.com/apache/incubator-singa/pull/496
 
 
   Implement ONNX Operators following 
https://github.com/onnx/onnx/blob/master/docs/Operators.md




[GitHub] [incubator-singa] joddiy commented on a change in pull request #484: SINGA -475 add Div operator implementation to singa

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #484: SINGA -475 add Div operator 
implementation to singa
URL: https://github.com/apache/incubator-singa/pull/484#discussion_r309663843
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -322,6 +322,21 @@ def test_LeakyRelu(self):
 np.testing.assert_array_almost_equal(tensor.to_numpy(result), XT)
 self.check_shape(dx.shape(), (3, 2))
 
+def test_Div(self):
+    X = np.array([0.8, -1.2, 3.3, -3.6, -0.5, 0.5]).reshape(3, 2).astype(np.float32)
+    Y = np.array([4.4, 5.3, 3.2, 3.7, 5.4, 6.3]).reshape(3, 2).astype(np.float32)
+    x = tensor.from_numpy(X)
+    y = tensor.from_numpy(Y)
+    x.to_device(gpu_dev)
+    y.to_device(gpu_dev)
+
+    result = autograd.div(x, y)
+    dx = result.creator.backward(x.data)[0]
+
+    result_np = np.divide(X, Y)
+    np.testing.assert_array_almost_equal(tensor.to_numpy(result), result_np)
+    self.check_shape(dx.shape(), (3, 2))
 
 Review comment:
   Please add a test case for backward propagation.




[GitHub] [incubator-singa] joddiy commented on a change in pull request #484: SINGA -475 add Div operator implementation to singa

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #484: SINGA -475 add Div operator 
implementation to singa
URL: https://github.com/apache/incubator-singa/pull/484#discussion_r309663270
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -1787,6 +1787,22 @@ def abs(a):
 return Abs()(a)[0]
 
 
+class Div(Operation):
+    def __init__(self):
+        super(Div, self).__init__()
+
+    def forward(self, a, b):
+        if training:
+            self.input = (a, b)
+        return singa.__div__(a, b)
+
+    def backward(self, dy):
+        return singa.__div__(dy, self.input[0]), singa.__mul__(dy, self.input[1])
 
 Review comment:
   The gradients should be (for y = a / b):
   dy/da = b^-1
   dy/db = -a * b^-2
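These formulas can be checked numerically with a finite-difference test in numpy (illustrative only, independent of SINGA):

```python
import numpy as np

a = np.array([0.8, -1.2, 3.3])
b = np.array([4.4, 5.3, 3.2])

# Analytic gradients of y = a / b
da = 1.0 / b           # dy/da = b^-1
db = -a / (b ** 2)     # dy/db = -a * b^-2

# Central finite differences agree with the analytic formulas
eps = 1e-6
num_da = ((a + eps) / b - (a - eps) / b) / (2 * eps)
num_db = (a / (b + eps) - a / (b - eps)) / (2 * eps)
np.testing.assert_allclose(da, num_da, rtol=1e-4)
np.testing.assert_allclose(db, num_db, rtol=1e-4)
```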




[GitHub] [incubator-singa] joddiy commented on issue #491: SINGA-474 less operator

2019-08-01 Thread GitBox
joddiy commented on issue #491: SINGA-474 less operator
URL: https://github.com/apache/incubator-singa/pull/491#issuecomment-517262483
 
 
   I'm not very sure whether logical operators should have gradients. Please 
refer to 
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/gradients/math_grad.cc#L30-L41




[GitHub] [incubator-singa] joddiy commented on issue #490: SINGA-474 equal operator

2019-08-01 Thread GitBox
joddiy commented on issue #490: SINGA-474 equal operator
URL: https://github.com/apache/incubator-singa/pull/490#issuecomment-517262568
 
 
   I'm not very sure whether logical operators should have gradients. Please 
refer to 
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/gradients/math_grad.cc#L30-L41




[GitHub] [incubator-singa] joddiy commented on issue #481: SINGA-474 greater operator

2019-08-01 Thread GitBox
joddiy commented on issue #481: SINGA-474 greater operator
URL: https://github.com/apache/incubator-singa/pull/481#issuecomment-517262583
 
 
   I'm not very sure whether logical operators should have gradients. Please 
refer to 
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/gradients/math_grad.cc#L30-L41




[GitHub] [incubator-singa] joddiy commented on a change in pull request #482: SINGA-474 hardsigmoid operator

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #482: SINGA-474 hardsigmoid 
operator
URL: https://github.com/apache/incubator-singa/pull/482#discussion_r309685974
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -358,6 +358,51 @@ def relu(x):
 return ReLU()(x)[0]
 
 
+class HardSigmoid(Operation):
+    def __init__(self, alpha=0.2, gamma=0.5):
+        super(HardSigmoid, self).__init__()
+        self.alpha = alpha
+        self.gamma = gamma
+
+    def forward(self, x):
+        """Do forward propagation.
+        # y = max(0, min(1, alpha * x + gamma))
+        Args:
+            x (CTensor): matrix
+        Returns:
+            a CTensor for the result
+        """
+        if training:
+            self.input = x
+
+        x = singa.AddFloat(singa.MultFloat(x, self.alpha), self.gamma)
+        x = singa.ReLU(x)
+        mask = singa.LTFloat(x, 1.0)
+        mask2 = singa.GEFloat(x, 1.0)
+
+        ans = singa.__add__(singa.__mul__(x, mask), mask2)
 
 Review comment:
   Can we write it like this?
   ```
   x = singa.AddFloat(singa.MultFloat(x, self.alpha), self.gamma)
   x = singa.__clamp__(x, 0.0, 1.0)
   ```
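The equivalence of the mask-based version and the suggested clamp can be checked in numpy (an illustrative sketch using numpy stand-ins for the singa calls):

```python
import numpy as np

alpha, gamma = 0.2, 0.5
x = np.linspace(-5.0, 5.0, 11).astype(np.float32)

# Mask-based version, mirroring ReLU + LTFloat/GEFloat in the PR
t = np.maximum(alpha * x + gamma, 0.0)   # ReLU(alpha * x + gamma)
mask = (t < 1.0).astype(np.float32)      # LTFloat(t, 1.0)
mask2 = (t >= 1.0).astype(np.float32)    # GEFloat(t, 1.0)
masked = t * mask + mask2

# Suggested clamp-based version
clamped = np.clip(alpha * x + gamma, 0.0, 1.0)

# Both compute max(0, min(1, alpha * x + gamma))
np.testing.assert_array_almost_equal(masked, clamped)
```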




[GitHub] [incubator-singa] joddiy commented on a change in pull request #482: SINGA-474 hardsigmoid operator

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #482: SINGA-474 hardsigmoid 
operator
URL: https://github.com/apache/incubator-singa/pull/482#discussion_r309671456
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -358,6 +358,51 @@ def relu(x):
 return ReLU()(x)[0]
 
 
+class HardSigmoid(Operation):
+    def __init__(self, alpha=0.2, gamma=0.5):
+        super(HardSigmoid, self).__init__()
+        self.alpha = alpha
+        self.gamma = gamma
+
+    def forward(self, x):
+        """Do forward propagation.
+        # y = max(0, min(1, alpha * x + gamma))
+        Args:
+            x (CTensor): matrix
+        Returns:
+            a CTensor for the result
+        """
+        if training:
+            self.input = x
+
+        x = singa.AddFloat(singa.MultFloat(x, self.alpha), self.gamma)
+        x = singa.ReLU(x)
+        mask = singa.LTFloat(x, 1.0)
+        mask2 = singa.GEFloat(x, 1.0)
+
+        ans = singa.__add__(singa.__mul__(x, mask), mask2)
 
 Review comment:
   Please refer to the 
[implementation](https://github.com/tensorflow/tensorflow/blob/6e60889d0e31c7ee13004322110eb77507fb2177/tensorflow/python/keras/backend.py#L4379-L4383)
 of TensorFlow.




[GitHub] [incubator-singa] joddiy commented on a change in pull request #487: SINGA -475 add Log operator to singa

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #487: SINGA -475 add Log operator 
to singa
URL: https://github.com/apache/incubator-singa/pull/487#discussion_r309652140
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -610,6 +610,18 @@ def test_Atanh_gpu(self):
 np.testing.assert_array_almost_equal(tensor.to_numpy(result), XT, 
decimal=5)
 self.check_shape(dx.shape(), (3, 2))
 
+def test_Log(self):
+    X = np.array([1, np.e, np.e**2, np.e**4]).reshape(2, 2).astype(np.float32)
+    XT = np.log(X)
+    x = tensor.from_numpy(X)
+    x.to_device(gpu_dev)
+
+    result = autograd.log(x)
+    dx = result.creator.backward(x.data)
+
+    np.testing.assert_array_almost_equal(tensor.to_numpy(result), XT)
 
 Review comment:
   The backward of log hasn't been tested.




[GitHub] [incubator-singa] joddiy commented on a change in pull request #485: SINGA -475 add Sub operator implementation to singa

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #485: SINGA -475 add Sub operator 
implementation to singa
URL: https://github.com/apache/incubator-singa/pull/485#discussion_r309661987
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -322,6 +322,21 @@ def test_LeakyRelu(self):
 np.testing.assert_array_almost_equal(tensor.to_numpy(result), XT)
 self.check_shape(dx.shape(), (3, 2))
 
+def test_Sub(self):
+    X = np.array([0.8, -1.2, 3.3, -3.6, -0.5, 0.5]).reshape(3, 2).astype(np.float32)
+    Y = np.array([4.4, 5.3, 3.2, 3.7, 5.4, 6.3]).reshape(3, 2).astype(np.float32)
+    x = tensor.from_numpy(X)
+    y = tensor.from_numpy(Y)
+    x.to_device(gpu_dev)
+    y.to_device(gpu_dev)
+
+    result = autograd.sub(x, y)
+    dx = result.creator.backward(x.data)[0]
+
+    result_np = np.subtract(X, Y)
+    np.testing.assert_array_almost_equal(tensor.to_numpy(result), result_np)
 
 Review comment:
   Please add a test case for backward propagation.




[GitHub] [incubator-singa] joddiy commented on a change in pull request #483: SINGA-474 identity operator

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #483: SINGA-474 identity operator
URL: https://github.com/apache/incubator-singa/pull/483#discussion_r309664724
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -322,6 +322,27 @@ def test_LeakyRelu(self):
 np.testing.assert_array_almost_equal(tensor.to_numpy(result), XT)
 self.check_shape(dx.shape(), (3, 2))
 
+def test_Identity_cpu(self):
+    x = np.array([-0.9, -0.3, -0.1, 0.1, 0.5, 0.9]).reshape(3, 2).astype(np.float32)
+    y = x.copy()
+    x = tensor.from_numpy(x)
+    x.to_device(cpu_dev)
+
+    result = autograd.identity(x)
+    dx = result.creator.backward(x.data)
+
+    np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
 
 Review comment:
   Please add a test case for backward propagation.




[GitHub] [incubator-singa] joddiy commented on a change in pull request #483: SINGA-474 identity operator

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #483: SINGA-474 identity operator
URL: https://github.com/apache/incubator-singa/pull/483#discussion_r309664758
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -322,6 +322,27 @@ def test_LeakyRelu(self):
 np.testing.assert_array_almost_equal(tensor.to_numpy(result), XT)
 self.check_shape(dx.shape(), (3, 2))
 
+def test_Identity_cpu(self):
+    x = np.array([-0.9, -0.3, -0.1, 0.1, 0.5, 0.9]).reshape(3, 2).astype(np.float32)
+    y = x.copy()
+    x = tensor.from_numpy(x)
+    x.to_device(cpu_dev)
+
+    result = autograd.identity(x)
+    dx = result.creator.backward(x.data)
+
+    np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
+    self.check_shape(dx.shape(), (3, 2))
+
+def test_Identity_gpu(self):
+    x = np.array([-0.9, -0.3, -0.1, 0.1, 0.5, 0.9]).reshape(3, 2).astype(np.float32)
+    y = x.copy()
+    x = tensor.from_numpy(x)
+    x.to_device(gpu_dev)
+
+    result = autograd.identity(x)
+    dx = result.creator.backward(x.data)
+
+    np.testing.assert_array_almost_equal(tensor.to_numpy(result), y, decimal=5)
 
 Review comment:
   Please add a test case for backward propagation.




[GitHub] [incubator-singa] joddiy commented on a change in pull request #482: SINGA-474 hardsigmoid operator

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #482: SINGA-474 hardsigmoid 
operator
URL: https://github.com/apache/incubator-singa/pull/482#discussion_r309685974
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -358,6 +358,51 @@ def relu(x):
 return ReLU()(x)[0]
 
 
+class HardSigmoid(Operation):
+def __init__(self,alpha=0.2,gamma=0.5):
+super(HardSigmoid, self).__init__()
+self.alpha=alpha
+self.gamma=gamma
+
+def forward(self, x):
+"""Do forward propgation.
+#y = max(0, min(1, alpha * x + gamma))
+Args:
+x (CTensor): matrix
+Returns:
+a CTensor for the result
+"""
+if training:
+self.input = x
+
+x = singa.AddFloat(singa.MultFloat(x,self.alpha),self.gamma)
+x = singa.ReLU(x)
+mask = singa.LTFloat(x, 1.0)
+mask2 = singa.GEFloat(x, 1.0)
+
+ans = singa.__add__(singa.__mul__(x, mask),mask2)
 
 Review comment:
   Can we write it like this?
   ```
   x = singa.AddFloat(singa.MultFloat(x, self.alpha), self.gamma)
   x = singa.__clamp__(x, 0.0, 1.0)
   ```
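The suggested clamp form can be sanity-checked against a plain NumPy reference (a sketch only; `hard_sigmoid` and `hard_sigmoid_grad` are illustrative helpers, not SINGA APIs):

```python
import numpy as np

def hard_sigmoid(x, alpha=0.2, gamma=0.5):
    # y = max(0, min(1, alpha * x + gamma)), i.e. clamp(alpha * x + gamma, 0, 1)
    return np.clip(alpha * x + gamma, 0.0, 1.0)

def hard_sigmoid_grad(x, dy, alpha=0.2, gamma=0.5):
    # gradient is alpha inside the linear region, 0 in the saturated regions
    z = alpha * x + gamma
    return dy * alpha * ((z > 0.0) & (z < 1.0))

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0], dtype=np.float32)
y = hard_sigmoid(x)   # saturates to 0 and 1 at the ends, 0.5 at x = 0
g = hard_sigmoid_grad(x, np.ones_like(x))
```

Relative to the diff above, a single clamp to [0, 1] replaces the ReLU plus the two masks in one step.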




[GitHub] [incubator-singa] joddiy commented on a change in pull request #480: SINGA-474 ELU operator

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #480: SINGA-474 ELU operator
URL: https://github.com/apache/incubator-singa/pull/480#discussion_r309704143
 
 

 ##
 File path: test/python/test_operation.py
 ##
 @@ -322,6 +336,34 @@ def test_LeakyRelu(self):
 np.testing.assert_array_almost_equal(tensor.to_numpy(result), XT)
 self.check_shape(dx.shape(), (3, 2))
 
+def test_Elu_cpu(self):
+#f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0
+x = np.array([-0.9, -0.3, -0.1, 0.1, 0.5, 0.9]).reshape(3, 2).astype(np.float32)/10
+lossf = lambda x : np.sum(np.clip(x, 0, np.inf) + (np.exp(np.clip(x, -np.inf, 0)) - 1) * 1.0)
 
 Review comment:
   lambda x : alpha * (np.exp(x) -1) if x < 0 else x




[GitHub] [incubator-singa] joddiy commented on a change in pull request #480: SINGA-474 ELU operator

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #480: SINGA-474 ELU operator
URL: https://github.com/apache/incubator-singa/pull/480#discussion_r309687031
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -358,6 +358,48 @@ def relu(x):
 return ReLU()(x)[0]
 
 
+class Elu(Operation):
+def __init__(self,alpha=1):
+super(Elu, self).__init__()
+self.alpha=alpha
+
+def forward(self, x):
+"""Do forward propgation.
+Store the x if requires gradient.
+Args:
+x (CTensor): matrix
+Returns:
+a CTensor for the result
+"""
+#f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0
+if training:
+self.input = x
+x1 = singa.LTFloat(x, 0.0)
+x1 = singa.__mul__(x, x1)
+x1 = singa.SubFloat(singa.Exp(x1),self.alpha)
+x2 = singa.ReLU(x)
+x1 = singa.__add__(x1, x2)
+return x1
 
 Review comment:
   It's a little weird to call ReLU here; can we implement it directly?




[GitHub] [incubator-singa] joddiy commented on a change in pull request #480: SINGA-474 ELU operator

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #480: SINGA-474 ELU operator
URL: https://github.com/apache/incubator-singa/pull/480#discussion_r309702428
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -358,6 +358,48 @@ def relu(x):
 return ReLU()(x)[0]
 
 
+class Elu(Operation):
+def __init__(self,alpha=1):
+super(Elu, self).__init__()
+self.alpha=alpha
+
+def forward(self, x):
+"""Do forward propgation.
+Store the x if requires gradient.
+Args:
+x (CTensor): matrix
+Returns:
+a CTensor for the result
+"""
+#f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0
+if training:
+self.input = x
+x1 = singa.LTFloat(x, 0.0)
+x1 = singa.__mul__(x, x1)
+x1 = singa.SubFloat(singa.Exp(x1),self.alpha)
+x2 = singa.ReLU(x)
+x1 = singa.__add__(x1, x2)
+return x1
+
+def backward(self, dy):
+"""
+Args:
+dy (CTensor): data for the dL / dy, L is the loss
+Returns:
+a tuple for dx
+"""
+dx1mask = singa.LTFloat(self.input, 0.0)
+dx1 = singa.MultFloat(singa.Exp(self.input), self.alpha)
 
 Review comment:
   is this formula correct?





[GitHub] [incubator-singa] joddiy commented on a change in pull request #480: SINGA-474 ELU operator

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #480: SINGA-474 ELU operator
URL: https://github.com/apache/incubator-singa/pull/480#discussion_r309766405
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -358,6 +358,48 @@ def relu(x):
 return ReLU()(x)[0]
 
 
+class Elu(Operation):
+def __init__(self,alpha=1):
+super(Elu, self).__init__()
+self.alpha=alpha
+
+def forward(self, x):
+"""Do forward propgation.
+Store the x if requires gradient.
+Args:
+x (CTensor): matrix
+Returns:
+a CTensor for the result
+"""
+#f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0
+if training:
+self.input = x
+x1 = singa.LTFloat(x, 0.0)
+x1 = singa.__mul__(x, x1)
+x1 = singa.SubFloat(singa.Exp(x1),self.alpha)
+x2 = singa.ReLU(x)
+x1 = singa.__add__(x1, x2)
+return x1
+
+def backward(self, dy):
+"""
+Args:
+dy (CTensor): data for the dL / dy, L is the loss
+Returns:
+a tuple for dx
+"""
+dx1mask = singa.LTFloat(self.input, 0.0)
+dx1 = singa.MultFloat(singa.Exp(self.input), self.alpha)
+dx1 = singa.__mul__(dx1mask, dx1)
+
+dx2mask = singa.GTFloat(self.input, 0.0)
 
 Review comment:
   should be GEFloat




[GitHub] [incubator-singa] joddiy commented on a change in pull request #480: SINGA-474 ELU operator

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #480: SINGA-474 ELU operator
URL: https://github.com/apache/incubator-singa/pull/480#discussion_r309766714
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -358,6 +358,48 @@ def relu(x):
 return ReLU()(x)[0]
 
 
+class Elu(Operation):
+def __init__(self,alpha=1):
+super(Elu, self).__init__()
+self.alpha=alpha
+
+def forward(self, x):
+"""Do forward propgation.
+Store the x if requires gradient.
+Args:
+x (CTensor): matrix
+Returns:
+a CTensor for the result
+"""
+#f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0
+if training:
+self.input = x
+x1 = singa.LTFloat(x, 0.0)
+x1 = singa.__mul__(x, x1)
+x1 = singa.SubFloat(singa.Exp(x1),self.alpha)
 
 Review comment:
   I think this formula doesn't correspond to alpha * (exp(x) - 1.)
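For comparison, the ELU definition from the docstring can be written directly in NumPy (an illustrative reference, not the SINGA implementation). Note that `SubFloat(Exp(x1), self.alpha)` computes `exp(x) - alpha`, which equals `alpha * (exp(x) - 1)` only when `alpha == 1`, which is likely what this comment is pointing at:

```python
import numpy as np

def elu(x, alpha=1.0):
    # f(x) = alpha * (exp(x) - 1) for x < 0, f(x) = x for x >= 0
    return np.where(x < 0, alpha * (np.exp(x) - 1.0), x)

def elu_grad(x, dy, alpha=1.0):
    # f'(x) = alpha * exp(x) for x < 0, f'(x) = 1 for x >= 0
    return dy * np.where(x < 0, alpha * np.exp(x), 1.0)

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
y = elu(x, alpha=2.0)   # negative side scales with alpha, positive side is pass-through
```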





[GitHub] [incubator-singa] joddiy commented on a change in pull request #489: SINGA-474 selu operator

2019-08-01 Thread GitBox
joddiy commented on a change in pull request #489: SINGA-474 selu operator
URL: https://github.com/apache/incubator-singa/pull/489#discussion_r309771204
 
 

 ##
 File path: python/singa/autograd.py
 ##
 @@ -357,6 +357,52 @@ def backward(self, dy):
 def relu(x):
 return ReLU()(x)[0]
 
+class SeLU(Operation):
+def __init__(self,alpha=1.67326,gamma=1.0507):
+super(SeLU, self).__init__()
+self.alpha=alpha
+self.gamma=gamma
+
+def forward(self, x):
+"""Do forward propgation.
+Store the x if x requires gradient.
+Args:
+x (CTensor): matrix
+Returns:
+a CTensor for the result
+"""
+#y = gamma * (alpha * e^x - alpha) for x <= 0, y = gamma * x for x > 0
+if training:
+self.input = x
+x1 = singa.LTFloat(x, 0.0)
+x1 = singa.__mul__(x, x1)
+x1 = singa.MultFloat(singa.SubFloat(singa.MultFloat(singa.Exp(x1),self.alpha),self.alpha),self.gamma)
 
 Review comment:
   This can be optimized:
   ```
   singa.MultFloat(singa.SubFloat(singa.Exp(x1), 1.0), self.alpha * self.gamma)
   ```
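The two factorings are algebraically identical, which can be checked numerically (a sketch; `selu` is an illustrative NumPy reference using the default constants from the diff above):

```python
import numpy as np

alpha, gamma = 1.67326, 1.0507

def selu(x):
    # y = gamma * alpha * (exp(x) - 1) for x <= 0, y = gamma * x for x > 0
    return np.where(x > 0, gamma * x, gamma * alpha * (np.exp(x) - 1.0))

x = np.linspace(-3.0, 3.0, 13)
x_neg = np.minimum(x, 0.0)  # mirrors zeroing the positive side with the LTFloat mask

# original form: gamma * (alpha * exp(x) - alpha)
a = gamma * (alpha * np.exp(x_neg) - alpha)
# suggested form: (exp(x) - 1) * (alpha * gamma), folding the constants into one multiply
b = (np.exp(x_neg) - 1.0) * (alpha * gamma)
assert np.allclose(a, b)
```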





[GitHub] [incubator-singa] chrishkchris opened a new pull request #500: SINGA-478 (Python 3 uses __itruediv__ instead of __idiv__)

2019-08-08 Thread GitBox
chrishkchris opened a new pull request #500: SINGA-478 (Python 3 uses 
__itruediv__ instead of __idiv__)
URL: https://github.com/apache/incubator-singa/pull/500
 
 
   We need to add `__itruediv__` for Python 3 in tensor.py because the original `__idiv__` is no longer supported in Python 3.

   To understand the problem, let's study the following code first:
   ```
   from singa import tensor
   from singa import device
   import numpy as np
   
   Y = np.ones(shape=[10],dtype=np.float32) * 10.0
   y = tensor.from_numpy(Y)
   y.to_device(device.get_default_device())
   
   def divide(y):
  y /= 10
   
   divide(y)
   print(tensor.to_numpy(y))
   ```
Without adding the `__itruediv__` function, the result is as follows, which means the `/=` operation is not an in-place operation:
   `[10. 10. 10. 10. 10. 10. 10. 10. 10. 10.]`
   After adding the `__itruediv__` function, the result is as follows, which means the `/=` operation is an in-place operation:
   `[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]`
   This is because the `__idiv__` operation is for Python 2, while `__itruediv__` is for Python 3. Therefore, if we do not add the `__itruediv__` operator in tensor.py, Python falls back to a default operation that is not in place.
   
   
   Meanwhile, I am not sure if I also need to add `__truediv__` for Python 3 to act as `__div__` does in Python 2.
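The difference can be reproduced with a minimal Python class (illustrative only, independent of SINGA): when `__itruediv__` is defined, `y /= s` mutates the object in place, so the caller's reference observes the change; otherwise Python 3 falls back to `__truediv__`, which only rebinds the local name.

```python
class Box:
    """Minimal value wrapper to illustrate in-place vs out-of-place division."""

    def __init__(self, v):
        self.v = v

    def __truediv__(self, s):
        # out-of-place: returns a new object; `y /= s` would rebind y only
        return Box(self.v / s)

    def __itruediv__(self, s):
        # in-place: mutates self, so every alias sees the update
        self.v /= s
        return self

def divide(y):
    y /= 10  # Python 3 dispatches to __itruediv__ when it exists

b = Box(100.0)
divide(b)
print(b.v)  # 10.0: the caller's object was updated in place
```

Deleting `__itruediv__` from this class makes `divide` a no-op from the caller's point of view, which is exactly the behavior described above for tensor.py.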




[GitHub] [incubator-singa] dcslin opened a new pull request #477: used shared module to manage device creation to avoid singleton error

2019-07-19 Thread GitBox
dcslin opened a new pull request #477: used shared module to manage device 
creation to avoid singleton error
URL: https://github.com/apache/incubator-singa/pull/477
 
 
   




[GitHub] [incubator-singa] chrishkchris commented on a change in pull request #468: Distributted module

2019-07-19 Thread GitBox
chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r305247134
 
 

 ##
 File path: src/api/config.i
 ##
 @@ -0,0 +1,33 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+
+
+// Pass in cmake configurations to swig
+#define USE_CUDA 1
+#define USE_CUDNN 1
+#define USE_OPENCL 0
+#define USE_PYTHON 1
+#define USE_MKLDNN 1
+#define USE_JAVA 0
+#define CUDNN_VERSION 7401
+
+// SINGA version
+#define SINGA_MAJOR_VERSION 1
 
 Review comment:
   In our server (at ncrg), I created a new Anaconda Python 3.6 environment and installed singa 2.0 using "conda install -c nusdbsystem -c conda-forge singa=2.0.0=cudnn7.3.1_cuda10.0_py36"
   
   It passed the test: python -c "from singa import tensor"
   Also, it passed the old optimizer example: 
incubator-singa/example/cifar10/train.py can run and train successfully.
   
   However, incubator-singa/examples/autograd/resnet.py cannot run; the output is:
   Start intialization
 0%|
  | 0/200 [00:00
   x = model(tx)
 File "examples/autograd/resnet.py", line 155, in __call__
   x = self.conv1(x)
 File 
"/home/dcsysh/anaconda3/envs/singa2/lib/python3.6/site-packages/singa/autograd.py",
 line 939, in __call__
   self.device_check(x, self.W, self.b)
 File 
"/home/dcsysh/anaconda3/envs/singa2/lib/python3.6/site-packages/singa/autograd.py",
 line 656, in device_check
   if var.device.id() != x_dev_id:
   AttributeError: 'NoneType' object has no attribute 'device'




[GitHub] [incubator-singa] chrishkchris opened a new pull request #478: Singa 473 (Implement ONNX Operators in Autograd: Trigonometry functions)

2019-07-25 Thread GitBox
chrishkchris opened a new pull request #478: Singa 473 (Implement ONNX 
Operators in Autograd: Trigonometry functions)
URL: https://github.com/apache/incubator-singa/pull/478
 
 
   This PR corresponds to JIRA ticket SINGA-473 (Implement ONNX Operators in 
Autograd: Trigonometry functions).
   
   The Open Neural Network Exchange (ONNX) is a format for interchanging neural 
network models between AI systems. There is a list of ONNX operators defined in 
https://github.com/onnx/onnx/blob/master/docs/Operators.md
   
   In this PR, we are going to implement more ONNX operators in our Autograd module (incubator-singa/python/singa/autograd.py). First of all, this ticket implements 11 unary trigonometric functions in the Autograd module:
   1. cos
   2. cosh
   3. sin
   4. sinh
   5. tan
   6. acos
   7. acosh
   8. asin
   9. asinh
   10. atan
   11. atanh




[GitHub] [incubator-singa] chrishkchris commented on issue #478: SINGA-473 (Implement ONNX Operators in Autograd: Trigonometry functions)

2019-07-25 Thread GitBox
chrishkchris commented on issue #478: SINGA-473 (Implement ONNX Operators in 
Autograd: Trigonometry functions)
URL: https://github.com/apache/incubator-singa/pull/478#issuecomment-515286829
 
 
   The test in my ncre docker development environment:
   
   root@26c9db193eb0:~/incubator-singa/build# python3 
/root/incubator-singa/test/python/test_operation.py
   
   --
   Ran 36 tests in 0.633s
   
   OK




[GitHub] [incubator-singa] chrishkchris edited a comment on issue #478: SINGA-473 (Implement ONNX Operators in Autograd: Trigonometry functions)

2019-07-25 Thread GitBox
chrishkchris edited a comment on issue #478: SINGA-473 (Implement ONNX 
Operators in Autograd: Trigonometry functions)
URL: https://github.com/apache/incubator-singa/pull/478#issuecomment-515286829
 
 
   The test in my ncrf docker development environment:
   
   root@26c9db193eb0:~/incubator-singa/build# python3 
/root/incubator-singa/test/python/test_operation.py
   
   --
   Ran 36 tests in 0.633s
   
   OK




[GitHub] [incubator-singa] nudles merged pull request #477: used shared module to manage device creation to avoid singleton error

2019-07-21 Thread GitBox
nudles merged pull request #477: used shared module to manage device creation 
to avoid singleton error
URL: https://github.com/apache/incubator-singa/pull/477
 
 
   




[GitHub] [incubator-singa] nudles merged pull request #533: fixed operation add assertion for convolution

2019-09-19 Thread GitBox
nudles merged pull request #533: fixed operation add assertion for convolution
URL: https://github.com/apache/incubator-singa/pull/533
 
 
   




[GitHub] [incubator-singa] nudles merged pull request #535: SINGA-490 Optimize performance of stochastic gradient descent (SGD)

2019-09-19 Thread GitBox
nudles merged pull request #535: SINGA-490 Optimize performance of stochastic 
gradient descent (SGD)
URL: https://github.com/apache/incubator-singa/pull/535
 
 
   




[GitHub] [incubator-singa] dcslin commented on issue #533: fixed operation add assertion for convolution

2019-09-19 Thread GitBox
dcslin commented on issue #533: fixed operation add assertion for convolution
URL: https://github.com/apache/incubator-singa/pull/533#issuecomment-533383280
 
 
   Hi @nudles, this is ready for merge.
   Thanks to @chrishkchris




[GitHub] [incubator-singa] nudles merged pull request #529: SINGA-484 Code analysis with LGTM

2019-09-19 Thread GitBox
nudles merged pull request #529: SINGA-484 Code analysis with LGTM
URL: https://github.com/apache/incubator-singa/pull/529
 
 
   




[GitHub] [incubator-singa] moazreyad commented on issue #536: SINGA-491 Code Cleaning with the Reference of LGTM Analysis Result

2019-09-20 Thread GitBox
moazreyad commented on issue #536: SINGA-491 Code Cleaning with the Reference 
of LGTM Analysis Result
URL: https://github.com/apache/incubator-singa/pull/536#issuecomment-533497942
 
 
   Thank you. 
   
   In the future, we should not accept the compile output alone; we should also run the unit tests and make sure they pass with no errors. This must be done on all supported platforms, not only on the developer's machine.
   
   This is why the Travis continuous integration needs to be improved to automate this task, which is very hard to do manually.




[GitHub] [incubator-singa] dcslin opened a new pull request #538: add Cudnn mode as optional param to batch norm and softmax

2019-09-26 Thread GitBox
dcslin opened a new pull request #538: add Cudnn mode as optional param to 
batch norm and softmax
URL: https://github.com/apache/incubator-singa/pull/538
 
 
   




[GitHub] [incubator-singa] nudles merged pull request #537: SINGA-488 Change the path of source code for CI

2019-09-26 Thread GitBox
nudles merged pull request #537: SINGA-488 Change the path of source code for CI
URL: https://github.com/apache/incubator-singa/pull/537
 
 
   




[GitHub] [incubator-singa] nudles merged pull request #539: SINGA-491 Python Code Cleaning

2019-10-01 Thread GitBox
nudles merged pull request #539: SINGA-491 Python Code Cleaning
URL: https://github.com/apache/incubator-singa/pull/539
 
 
   




[GitHub] [incubator-singa] dcslin opened a new pull request #540: added softmax with axis

2019-10-02 Thread GitBox
dcslin opened a new pull request #540: added softmax with axis
URL: https://github.com/apache/incubator-singa/pull/540
 
 
   Splitting https://github.com/apache/incubator-singa/pull/538 into two parts; this PR is for softmax with an axis parameter, which is compatible with the ONNX API.
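A softmax that reduces along a chosen axis, as in the ONNX operator, can be sketched in NumPy (an illustrative reference, not the SINGA implementation):

```python
import numpy as np

def softmax(x, axis=1):
    # subtract the per-axis max for numerical stability before exponentiating
    z = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)

x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
y = softmax(x, axis=1)
# probabilities along the chosen axis sum to 1
assert np.allclose(y.sum(axis=1), 1.0)
```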




[GitHub] [incubator-singa] chrishkchris opened a new pull request #536: SINGA-491 Code Cleaning with the Reference of LGTM Analysis Result

2019-09-20 Thread GitBox
chrishkchris opened a new pull request #536: SINGA-491 Code Cleaning with the 
Reference of LGTM Analysis Result
URL: https://github.com/apache/incubator-singa/pull/536
 
 
   Since LGTM has been applied for our code analysis (see SINGA-484), I have cleaned up many of the obvious issues flagged by LGTM.
   
   
   The compile result is okay as follows:
   ```
   ubuntu@ip-172-31-39-137:~/incubator-singa/build$ rm -rf *
   ubuntu@ip-172-31-39-137:~/incubator-singa/build$ cmake -D CMAKE_PREFIX_PATH="/usr/local/cuda/lib64;/usr/local/cuda/" -DENABLE_TEST=OFF -DUSE_CUDA=ON -DUSE_PYTHON3=ON -DUSE_MKLDNN=ON -DUSE_MODULES=OFF -DUSE_DIST=ON ..
   -- The C compiler identification is GNU 5.4.0
   -- The CXX compiler identification is GNU 5.4.0
   -- Check for working C compiler: /usr/bin/cc
   -- Check for working C compiler: /usr/bin/cc -- works
   -- Detecting C compiler ABI info
   -- Detecting C compiler ABI info - done
   -- Detecting C compile features
   -- Detecting C compile features - done
   -- Check for working CXX compiler: /usr/bin/c++
   -- Check for working CXX compiler: /usr/bin/c++ -- works
   -- Detecting CXX compiler ABI info
   -- Detecting CXX compiler ABI info - done
   -- Detecting CXX compile features
   -- Detecting CXX compile features - done
   -- Looking for pthread.h
   -- Looking for pthread.h - found
   -- Looking for pthread_create
   -- Looking for pthread_create - not found
   -- Looking for pthread_create in pthreads
   -- Looking for pthread_create in pthreads - not found
   -- Looking for pthread_create in pthread
   -- Looking for pthread_create in pthread - found
   -- Found Threads: TRUE
   -- Found Protobuf: /usr/local/lib/libprotobuf.so;-lpthread (found suitable version "3.0.0", minimum required is "3.0")
   -- Found CBLAS: /usr/local/include
   -- Found GLOG: /usr/include
   -- Found cuda_v10.0
   -- Found CUDNN: /usr/local/cuda/include
   -- Found Cudnn_7401 at /usr/local/cuda/include 
/usr/local/cuda/lib64/libcudnn.so
   -- Found PythonInterp: /usr/bin/python3 (found suitable version "3.5.2", minimum required is "3")
   -- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.5m.so (found suitable version "3.5.2", minimum required is "3")
   -- Found SWIG: /usr/local/bin/swig (found suitable version "3.0.12", minimum required is "3.0.10")
   -- Found MKLDNN at /usr/local/include
   -- Found MPI at /home/ubuntu/mpich-3.3/build/include
   -- Found MPI lib at /home/ubuntu/mpich-3.3/build/lib/libmpi.so
   -- Found all lib at /usr/local/lib/libprotobuf.so;/usr/local/lib/libopenblas.so;/usr/lib/x86_64-linux-gnu/libglog.so;/usr/local/cuda/lib64/libcudnn.so;/usr/local/cuda/lib64/libcudart.so;/usr/local/cuda/lib64/libcurand.so;/usr/local/cuda/lib64/libcublas.so;/home/ubuntu/incubator-singa/build/lib/libcnmem.a;/usr/local/lib/libmkldnn.so;/home/ubuntu/mpich-3.3/build/lib/libmpi.so;/home/ubuntu/mpich-3.3/build/lib/libmpicxx.so
   -- Found NCCL at /usr/local/cuda/include
   -- Found NCCL lib at /usr/local/cuda/lib/libnccl.so
   -- Configuring done
   -- Generating done
   -- Build files have been written to: /home/ubuntu/incubator-singa/build
   ubuntu@ip-172-31-39-137:~/incubator-singa/build$ make -j4
   Scanning dependencies of target cnmem
   Scanning dependencies of target copy_protobuf
   [  1%] Running C++ protocol buffer compiler on /home/ubuntu/incubator-singa/src/proto/model.proto
   [  2%] Creating directories for 'cnmem'
   [  3%] Running C++ protocol buffer compiler on /home/ubuntu/incubator-singa/src/proto/caffe.proto
   [  4%] Running C++ protocol buffer compiler on /home/ubuntu/incubator-singa/src/proto/core.proto
   [libprotobuf WARNING google/protobuf/compiler/parser.cc:547] No syntax specified for the proto file: core.proto. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
   [libprotobuf WARNING google/protobuf/compiler/parser.cc:547] No syntax specified for the proto file: model.proto. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
   [  5%] Running C++ protocol buffer compiler on /home/ubuntu/incubator-singa/src/proto/io.proto
   [libprotobuf WARNING google/protobuf/compiler/parser.cc:547] No syntax specified for the proto file: io.proto. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
   [  6%] Performing download step (git clone) for 'cnmem'
   Cloning into 'cnmem'...
   [  7%] Copying Protobuf headers
   [  7%] Built target copy_protobuf
   [  8%] Building NVCC (Device) object 

[GitHub] [incubator-singa] chrishkchris opened a new pull request #539: SINGA-491 Python Code Cleaning

2019-09-30 Thread GitBox
chrishkchris opened a new pull request #539: SINGA-491 Python Code Cleaning
URL: https://github.com/apache/incubator-singa/pull/539
 
 
   Only the Python code is cleaned at this time, which is likely to remove 
around a hundred alerts in the Python code.
   
   Since there are lots of changes in the folders examples/imagenet/inception 
and examples/imagenet/googlenet, I have rerun that code, covering four 
parts:
   
   PART (I) examples/imagenet/googlenet/serve.py
   PART (II) examples/imagenet/inception/inception_v3.py
   PART (III) examples/imagenet/inception/inception_v4.py
   PART (IV) Python Unit Test (test_operation.py)
   
   Two test photos from the ImageNet dataset are used for testing in PARTs (I), 
(II), and (III).
   image1.JPEG
   
![image1](https://user-images.githubusercontent.com/38325429/65866514-7bf1d280-e3a7-11e9-9ecf-5c8f789734db.JPEG)
   image2.JPEG
   
![image2](https://user-images.githubusercontent.com/38325429/65866516-7dbb9600-e3a7-11e9-952d-c0ac00c28d8e.JPEG)
   
   The results are as expected.
   
   PART (I) examples/imagenet/googlenet/serve.py
   
   ```
   ubuntu@ip-172-31-39-12:~/incubator-singa/examples/imagenet/googlenet$ 
python3 serve.py &
   runing with gpu
* Serving Flask app "rafiki.agent" (lazy loading)
* Environment: production
  WARNING: This is a development server. Do not use it in a production 
deployment.
  Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:/ (Press CTRL+C to quit)
   Start intialization
   ('conv1/7x7_s2', (64, 112, 112))
   ('conv1/relu_7x7', (64, 112, 112))
   ('pool1/3x3_s2/pad', (64, 113, 113))
   ('pool1/3x3_s2', (64, 56, 56))
   ('pool1/norm1', (64, 56, 56))
   ('conv2/3x3_reduce', (64, 56, 56))
   ('conv2/relue_3x3_reduce', (64, 56, 56))
   ('conv2/3x3', (192, 56, 56))
   ('conv2/relue_3x3', (192, 56, 56))
   ('conv2/norm2', (192, 56, 56))
   ('pool2/3x3_s2/pad', (192, 57, 57))
   ('pool2/3x3_s2', (192, 28, 28))
   ('inception_3a/split', [(192, 28, 28), (192, 28, 28), (192, 28, 28), (192, 
28, 28)])
   ('inception_3a/1x1', (64, 28, 28))
   ('inception_3a/relue_1x1', (64, 28, 28))
   ('inception_3a/3x3_reduce', (96, 28, 28))
   ('inception_3a/relue_3x3_reduce', (96, 28, 28))
   ('inception_3a/3x3', (128, 28, 28))
   ('inception_3a/relue_3x3', (128, 28, 28))
   ('inception_3a/5x5_reduce', (16, 28, 28))
   ('inception_3a/relue_5x5_reduce', (16, 28, 28))
   ('inception_3a/5x5', (32, 28, 28))
   ('inception_3a/relue_5x5', (32, 28, 28))
   ('inception_3a/pool', (192, 28, 28))
   ('inception_3a/pool_proj', (32, 28, 28))
   ('inception_3a/relue_pool_proj', (32, 28, 28))
   ('inception_3a/output', (256, 28, 28))
   ('inception_3b/split', [(256, 28, 28), (256, 28, 28), (256, 28, 28), (256, 
28, 28)])
   ('inception_3b/1x1', (128, 28, 28))
   ('inception_3b/relue_1x1', (128, 28, 28))
   ('inception_3b/3x3_reduce', (128, 28, 28))
   ('inception_3b/relue_3x3_reduce', (128, 28, 28))
   ('inception_3b/3x3', (192, 28, 28))
   ('inception_3b/relue_3x3', (192, 28, 28))
   ('inception_3b/5x5_reduce', (32, 28, 28))
   ('inception_3b/relue_5x5_reduce', (32, 28, 28))
   ('inception_3b/5x5', (96, 28, 28))
   ('inception_3b/relue_5x5', (96, 28, 28))
   ('inception_3b/pool', (256, 28, 28))
   ('inception_3b/pool_proj', (64, 28, 28))
   ('inception_3b/relue_pool_proj', (64, 28, 28))
   ('inception_3b/output', (480, 28, 28))
   ('pool3/3x3_s2/pad', (480, 29, 29))
   ('pool3/3x3_s2', (480, 14, 14))
   ('inception_4a/split', [(480, 14, 14), (480, 14, 14), (480, 14, 14), (480, 
14, 14)])
   ('inception_4a/1x1', (192, 14, 14))
   ('inception_4a/relue_1x1', (192, 14, 14))
   ('inception_4a/3x3_reduce', (96, 14, 14))
   ('inception_4a/relue_3x3_reduce', (96, 14, 14))
   ('inception_4a/3x3', (208, 14, 14))
   ('inception_4a/relue_3x3', (208, 14, 14))
   ('inception_4a/5x5_reduce', (16, 14, 14))
   ('inception_4a/relue_5x5_reduce', (16, 14, 14))
   ('inception_4a/5x5', (48, 14, 14))
   ('inception_4a/relue_5x5', (48, 14, 14))
   ('inception_4a/pool', (480, 14, 14))
   ('inception_4a/pool_proj', (64, 14, 14))
   ('inception_4a/relue_pool_proj', (64, 14, 14))
   ('inception_4a/output', (512, 14, 14))
   ('inception_4b/split', [(512, 14, 14), (512, 14, 14), (512, 14, 14), (512, 
14, 14)])
   ('inception_4b/1x1', (160, 14, 14))
   ('inception_4b/relue_1x1', (160, 14, 14))
   ('inception_4b/3x3_reduce', (112, 14, 14))
   ('inception_4b/relue_3x3_reduce', (112, 14, 14))
   ('inception_4b/3x3', (224, 14, 14))
   ('inception_4b/relue_3x3', (224, 14, 14))
   ('inception_4b/5x5_reduce', (24, 14, 14))
   ('inception_4b/relue_5x5_reduce', (24, 14, 14))
   ('inception_4b/5x5', (64, 14, 14))
   ('inception_4b/relue_5x5', (64, 14, 14))
   ('inception_4b/pool', (512, 14, 14))
   ('inception_4b/pool_proj', (64, 14, 14))
   ('inception_4b/relue_pool_proj', (64, 14, 14))
   ('inception_4b/output', (512, 14, 14))
   ('inception_4c/split', [(512, 14, 14), (512, 14, 14), (512, 14, 14), (512, 
14, 14)])
   ('inception_4c/1x1', 

[GitHub] [incubator-singa] dcslin opened a new pull request #471: SINGA-463

2019-06-27 Thread GitBox
dcslin opened a new pull request #471: SINGA-463
URL: https://github.com/apache/incubator-singa/pull/471
 
 
   Fixed bugs in the Python operation tests for abs, exp, and leakyrelu. Ref: SINGA-463
   
   related: SINGA-425 https://github.com/apache/incubator-singa/pull/435


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-singa] sjs253 opened a new pull request #473: SINGA-466 Centos7 Installation instructions

2019-06-30 Thread GitBox
sjs253 opened a new pull request #473: SINGA-466 Centos7 Installation 
instructions 
URL: https://github.com/apache/incubator-singa/pull/473
 
 
   Instructions for installing SINGA on CentOS 7 have been added to the 
Installation.md file.




[GitHub] [incubator-singa] dcslin opened a new pull request #476: fixed python test_loss

2019-07-04 Thread GitBox
dcslin opened a new pull request #476: fixed python test_loss
URL: https://github.com/apache/incubator-singa/pull/476
 
 
   




[GitHub] [incubator-singa] dcslin opened a new pull request #548: DNNL(upgrade of MKLDNN) integration and add softmax

2019-10-16 Thread GitBox
dcslin opened a new pull request #548: DNNL(upgrade of MKLDNN) integration and 
add softmax
URL: https://github.com/apache/incubator-singa/pull/548
 
 
   




[GitHub] [incubator-singa] chrishkchris commented on issue #536: SINGA-491 Code Cleaning with the Reference of LGTM Analysis Result

2019-09-20 Thread GitBox
chrishkchris commented on issue #536: SINGA-491 Code Cleaning with the 
Reference of LGTM Analysis Result
URL: https://github.com/apache/incubator-singa/pull/536#issuecomment-533585944
 
 
   Thanks so much for your comments. It is a great idea concerning the CI.




[GitHub] [incubator-singa] nudles merged pull request #536: SINGA-491 Code Cleaning with the Reference of LGTM Analysis Result

2019-09-24 Thread GitBox
nudles merged pull request #536: SINGA-491 Code Cleaning with the Reference of 
LGTM Analysis Result
URL: https://github.com/apache/incubator-singa/pull/536
 
 
   




[GitHub] [incubator-singa] nudles merged pull request #528: SINGA 475 - add logical operator: and, or, xor, not & negative, reciprocal

2019-09-24 Thread GitBox
nudles merged pull request #528: SINGA 475 - add logical operator: and, or, 
xor, not & negative, reciprocal
URL: https://github.com/apache/incubator-singa/pull/528
 
 
   




[GitHub] [incubator-singa] chrishkchris opened a new pull request #537: SINGA-488 Change the path of source code for CI

2019-09-24 Thread GitBox
chrishkchris opened a new pull request #537: SINGA-488 Change the path of 
source code for CI
URL: https://github.com/apache/incubator-singa/pull/537
 
 
   This will fix the issue described in "SINGA-488 Travis CI always build from 
Apache master branch".




[GitHub] [incubator-singa] dcslin commented on issue #514: SINGA-482 tc comprehension integration

2019-09-23 Thread GitBox
dcslin commented on issue #514: SINGA-482 tc comprehension integration
URL: https://github.com/apache/incubator-singa/pull/514#issuecomment-534047644
 
 
   Integration is done except for CPU support.
   CPU support development is complete; however, the behavior was not as 
expected even in a clean TC environment.
   CPU-backed TC returns a zero tensor, as shown in 
https://gist.githubusercontent.com/dcslin/f3ed411012c144163f3bedbe621257f6/raw/9f6af3f5e62b5bb52a58c087967c813b3acd8872/tensor-comprehension-cpu-errors




[GitHub] [incubator-singa] dcslin commented on issue #540: added softmax with axis

2019-10-07 Thread GitBox
dcslin commented on issue #540: added softmax with axis
URL: https://github.com/apache/incubator-singa/pull/540#issuecomment-538955169
 
 
   > still has some problems, the output of multiple dimension inputs is not 
correct.
   > please check:
   > 
   > ```
   > x_0 = np.array([[0, 1, 2, 3], [1, 10001, 10002, 
10003]]).astype(np.float32)
   > # axis is 1
   > # expected output [[0.0320586, 0.08714432, 0.23688284, 0.64391428],
   > # [0.0320586, 0.08714432, 0.23688284, 0.64391428]]
   > ```
   
   Hi @joddiy , this is updated, could you please help to review?




[GitHub] [incubator-singa] joddiy commented on issue #540: added softmax with axis

2019-10-07 Thread GitBox
joddiy commented on issue #540: added softmax with axis
URL: https://github.com/apache/incubator-singa/pull/540#issuecomment-538984488
 
 
   > > still has some problems, the output of multiple dimension inputs is not 
correct.
   > > please check:
   > > ```
   > > x_0 = np.array([[0, 1, 2, 3], [1, 10001, 10002, 
10003]]).astype(np.float32)
   > > # axis is 1
   > > # expected output [[0.0320586, 0.08714432, 0.23688284, 0.64391428],
   > > # [0.0320586, 0.08714432, 0.23688284, 0.64391428]]
   > > ```
   > 
   > Hi @joddiy , this is updated, could you please help to review?
   
   Hi shicong, the axis and output are almost correct. However, the result will 
overflow if the input values are too large. For example, for an input of [0, 1, 
2, 3], the result is correct, but for an input of [1, 10001, 10002, 10003] the 
result will be [nan, nan, nan, nan].
   
   Please use this formulation:
   ```
   def softmax(x):
       """Compute softmax values for each set of scores in x."""
       e_x = np.exp(x - np.max(x))
       return e_x / e_x.sum()
   ```
   **Do not use x directly as input; instead, subtract the max value of x from x 
to avoid the overflow.**
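   A minimal runnable sketch of the stable softmax with an `axis` argument, 
applied to the kind of test case discussed in this thread. The large-value row 
below is an assumption (the standard ONNX softmax test uses [10000, 10001, 
10002, 10003]; the input quoted earlier in the thread is wrapped in the 
archive, so the exact values there are not recoverable):
   ```python
   import numpy as np
   
   def softmax(x, axis=-1):
       # Subtract the per-slice max before exponentiating so large
       # inputs do not overflow exp() into inf and produce nan.
       e_x = np.exp(x - np.max(x, axis=axis, keepdims=True))
       return e_x / e_x.sum(axis=axis, keepdims=True)
   
   # Assumed test input: the second row is the first row plus a constant,
   # so both rows should produce the same softmax values, with no nan.
   x_0 = np.array([[0, 1, 2, 3], [10000, 10001, 10002, 10003]],
                  dtype=np.float32)
   y = softmax(x_0, axis=1)
   # Both rows come out [0.0320586, 0.08714432, 0.23688284, 0.64391428].
   ```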
   
   




[GitHub] [incubator-singa] joddiy opened a new pull request #545: SINGA-494 Singa autograd improvement

2019-10-08 Thread GitBox
joddiy opened a new pull request #545: SINGA-494 Singa autograd improvement
URL: https://github.com/apache/incubator-singa/pull/545
 
 
   ## Background: some autograd ops cannot satisfy the ONNX requirements, as 
follows:
   
   ### conv, averagepool, maxpool
   - only supports 2D input, i.e., N*C*W*H
   - does not support SAME_UPPER, SAME_LOWER, count_include_pad, or ceil_mode
   
   ### reshape
   - does not support zero_dim, zero_and_negative_dim
   
   ### concat
   - does not support 1D
   
   ### matmul
   - only supports 2D
   
   ### min, max
   - only support 2 inputs
   
   ### add
   - does not support broadcast
   
   ### and, or, xor
   - does not support broadcast
   
   ### div, pow, prelu
   - does not support broadcast
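   For reference, the broadcasting these ops would need to match is standard 
numpy/ONNX multidirectional broadcasting; a small numpy illustration (the 
shapes here are arbitrary examples, not from the PR):
   ```python
   import numpy as np
   
   # ONNX uses numpy-style multidirectional broadcasting: shapes are
   # aligned from the trailing axis, and size-1 (or missing) axes are
   # stretched to match the other operand.
   a = np.ones((2, 3, 4), dtype=np.float32)   # e.g. an activation tensor
   b = np.arange(4, dtype=np.float32)         # shape (4,) broadcasts over the last axis
   c = a + b                                  # broadcast add -> shape (2, 3, 4)
   ```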




[GitHub] [incubator-singa] dcslin commented on issue #541: added 4d test on batchnorm

2019-10-08 Thread GitBox
dcslin commented on issue #541: added 4d test on batchnorm
URL: https://github.com/apache/incubator-singa/pull/541#issuecomment-539449929
 
 
   Hi @joddiy , could you please help review this test on the batchnorm values? 
It also shows that the params/outputs required by ONNX are provided.




[GitHub] [incubator-singa-site] nudles merged pull request #1: SINGA-493 Update SINGA website

2019-10-09 Thread GitBox
nudles merged pull request #1: SINGA-493 Update SINGA website
URL: https://github.com/apache/incubator-singa-site/pull/1
 
 
   




[GitHub] [incubator-singa] nudles merged pull request #546: SINGA-491 Remaining LGTM Python Code Cleaning

2019-10-09 Thread GitBox
nudles merged pull request #546: SINGA-491 Remaining LGTM Python Code Cleaning
URL: https://github.com/apache/incubator-singa/pull/546
 
 
   




[GitHub] [incubator-singa] dcslin closed pull request #538: [WIP] add Cudnn mode as optional param to batch norm and softmax

2019-10-10 Thread GitBox
dcslin closed pull request #538: [WIP] add Cudnn mode as optional param to 
batch norm and softmax
URL: https://github.com/apache/incubator-singa/pull/538
 
 
   




[GitHub] [incubator-singa] dcslin commented on issue #538: [WIP] add Cudnn mode as optional param to batch norm and softmax

2019-10-10 Thread GitBox
dcslin commented on issue #538: [WIP] add Cudnn mode as optional param to batch 
norm and softmax
URL: https://github.com/apache/incubator-singa/pull/538#issuecomment-540511768
 
 
   split to https://github.com/apache/incubator-singa/pull/540 and 
https://github.com/apache/incubator-singa/pull/541




[GitHub] [incubator-singa] joddiy commented on issue #540: added softmax with axis

2019-10-10 Thread GitBox
joddiy commented on issue #540: added softmax with axis
URL: https://github.com/apache/incubator-singa/pull/540#issuecomment-540567614
 
 
   ready to merge




[GitHub] [incubator-singa] joddiy edited a comment on issue #541: added 4d test on batchnorm

2019-10-10 Thread GitBox
joddiy edited a comment on issue #541: added 4d test on batchnorm
URL: https://github.com/apache/incubator-singa/pull/541#issuecomment-540600410
 
 
   Hi shicong, the result still has a small error; please check with the 
following case:
   ```
   def _batchnorm_test_mode(x, s, bias, mean, var, epsilon=1e-5):  # type: ignore
       dims_x = len(x.shape)
       dim_ones = (1,) * (dims_x - 2)
       s = s.reshape(-1, *dim_ones)
       bias = bias.reshape(-1, *dim_ones)
       mean = mean.reshape(-1, *dim_ones)
       var = var.reshape(-1, *dim_ones)
       return s * (x - mean) / np.sqrt(var + epsilon) + bias
   
   # input size: (1, 2, 1, 3)
   x = np.array([[[[-1, 0, 1]], [[2, 3, 4]]]]).astype(np.float32)
   s = np.array([1.0, 1.5]).astype(np.float32)
   bias = np.array([0, 1]).astype(np.float32)
   mean = np.array([0, 3]).astype(np.float32)
   var = np.array([1, 1.5]).astype(np.float32)
   y = _batchnorm_test_mode(x, s, bias, mean, var).astype(np.float32)
   ```
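   The reference check in that snippet can be run end-to-end as a 
self-contained numpy script; the expected per-channel values in the final 
comment were worked out by hand from the same formula (rounded to 4 decimals):
   ```python
   import numpy as np
   
   def _batchnorm_test_mode(x, s, bias, mean, var, epsilon=1e-5):
       # Reshape the per-channel params to (C, 1, ...) so they broadcast
       # over the spatial axes of an NCHW input.
       dim_ones = (1,) * (len(x.shape) - 2)
       s = s.reshape(-1, *dim_ones)
       bias = bias.reshape(-1, *dim_ones)
       mean = mean.reshape(-1, *dim_ones)
       var = var.reshape(-1, *dim_ones)
       return s * (x - mean) / np.sqrt(var + epsilon) + bias
   
   # input size: (1, 2, 1, 3), per the ONNX BatchNormalization test case
   x = np.array([[[[-1, 0, 1]], [[2, 3, 4]]]], dtype=np.float32)
   s = np.array([1.0, 1.5], dtype=np.float32)
   bias = np.array([0, 1], dtype=np.float32)
   mean = np.array([0, 3], dtype=np.float32)
   var = np.array([1, 1.5], dtype=np.float32)
   y = _batchnorm_test_mode(x, s, bias, mean, var)
   # Channel 0 is approximately [-1, 0, 1];
   # channel 1 is approximately [-0.2247, 1.0, 2.2247].
   ```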






[GitHub] [incubator-singa] chrishkchris opened a new pull request #547: SINGA-493 Add an additional build path

2019-10-10 Thread GitBox
chrishkchris opened a new pull request #547: SINGA-493 Add an additional build 
path
URL: https://github.com/apache/incubator-singa/pull/547
 
 
   Add a webpage build path in build.sh




[GitHub] [incubator-singa] dcslin edited a comment on issue #541: added 4d test on batchnorm

2019-10-10 Thread GitBox
dcslin edited a comment on issue #541: added 4d test on batchnorm
URL: https://github.com/apache/incubator-singa/pull/541#issuecomment-540867039
 
 
   Hi @joddiy , could you share how you called the singa API? The test you 
mentioned is covered here: 
https://github.com/apache/incubator-singa/pull/541/files#diff-5e6248de6199ea6f777c73d3ea2643a5R62
   
   ``` python
   def _np_bn_testing(x, scale, bias, rm, rv, momentum=0.1, e=1e-5):
       ...
       return scale * (x - rm) / np.sqrt(rv + e) + bias
   ```
   
   
   For the `test_mode`, kindly use the `GpuBatchNormForwardInference` API.





