chrishkchris opened a new pull request #543: SINGA-491 Use const reference when we use CopyData and ResetLike from Tensor Input
URL: https://github.com/apache/incubator-singa/pull/543
 
 
   This is to fix the remaining C/C++ alert reported by LGTM:
   
   Several functions use ResetLike and CopyData to copy data from an input tensor that is passed by value. This PR changes the parameter type from `Tensor` to `const Tensor&`.
   
   However, src/model/layer appears to be used only by the old optimizer/model, so this change has no effect on the autograd.py-based model.
   
   If I am correct, this avoids copying the input tensor through the Tensor copy constructor; instead, the function receives a reference to the caller's tensor.
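   
   As a minimal illustration (the `Tensor` struct and function names below are hypothetical stand-ins, not the actual SINGA classes), the sketch shows why a `const Tensor&` parameter avoids the copy that a by-value `Tensor` parameter triggers:
   
   ```cpp
   #include <cstdio>
   
   // Toy stand-in for a tensor type: it only reports copy-constructor calls.
   struct Tensor {
     Tensor() = default;
     Tensor(const Tensor&) { std::puts("copy constructor called"); }
   };
   
   // Before the change: the argument is copied on every call.
   void UseByValue(Tensor input) { (void)input; }
   
   // After the change: the function reads the caller's tensor through a reference.
   void UseByConstRef(const Tensor& input) { (void)input; }
   
   int main() {
     Tensor t;
     UseByValue(t);     // prints "copy constructor called"
     UseByConstRef(t);  // silent: no copy is made
     return 0;
   }
   ```
   
   Inside the function body, read-only calls such as ResetLike(input) and CopyData(input) should not need any change; only the parameter declaration does.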
   
   As a verification: since the batchnorm layer in src/model/layer is used by the old optimizer (not the autograd one), the old resnet example is run to check that the training loss still decreases as expected.
   
   ```
   ubuntu@ip-172-31-42-250:~/incubator-singa/examples/cifar10$ python3 train.py resnet cifar-10-batches-py
   Loading data ..................
   Loading data file cifar-10-batches-py/data_batch_1
   Loading data file cifar-10-batches-py/data_batch_2
   Loading data file cifar-10-batches-py/data_batch_3
   Loading data file cifar-10-batches-py/data_batch_4
   Loading data file cifar-10-batches-py/data_batch_5
   Loading data file cifar-10-batches-py/test_batch
   ('conv1', (16, 32, 32))
   ('bn1', (16, 32, 32))
   ('relu1', (16, 32, 32))
   ('2a-split', [(16, 32, 32), (16, 32, 32)])
   ('2a-br1-conv1', (16, 32, 32))
   ('2a-br1-bn1', (16, 32, 32))
   ('2a-br1-relu', (16, 32, 32))
   ('2a-br1-conv2', (16, 32, 32))
   ('2a-br1-bn2', (16, 32, 32))
   ('2a-merge', [(16, 32, 32), (16, 32, 32)])
   ('2b-split', [(16, 32, 32), (16, 32, 32)])
   ('2b-br1-conv1', (16, 32, 32))
   ('2b-br1-bn1', (16, 32, 32))
   ('2b-br1-relu', (16, 32, 32))
   ('2b-br1-conv2', (16, 32, 32))
   ('2b-br1-bn2', (16, 32, 32))
   ('2b-merge', [(16, 32, 32), (16, 32, 32)])
   ('2c-split', [(16, 32, 32), (16, 32, 32)])
   ('2c-br1-conv1', (16, 32, 32))
   ('2c-br1-bn1', (16, 32, 32))
   ('2c-br1-relu', (16, 32, 32))
   ('2c-br1-conv2', (16, 32, 32))
   ('2c-br1-bn2', (16, 32, 32))
   ('2c-merge', [(16, 32, 32), (16, 32, 32)])
   ('3a-split', [(16, 32, 32), (16, 32, 32)])
   ('3a-br2-conv', (32, 16, 16))
   ('3a-br2-bn', (32, 16, 16))
   ('3a-br1-conv1', (32, 16, 16))
   ('3a-br1-bn1', (32, 16, 16))
   ('3a-br1-relu', (32, 16, 16))
   ('3a-br1-conv2', (32, 16, 16))
   ('3a-br1-bn2', (32, 16, 16))
   ('3a-merge', [(32, 16, 16), (32, 16, 16)])
   ('3b-split', [(32, 16, 16), (32, 16, 16)])
   ('3b-br1-conv1', (32, 16, 16))
   ('3b-br1-bn1', (32, 16, 16))
   ('3b-br1-relu', (32, 16, 16))
   ('3b-br1-conv2', (32, 16, 16))
   ('3b-br1-bn2', (32, 16, 16))
   ('3b-merge', [(32, 16, 16), (32, 16, 16)])
   ('3c-split', [(32, 16, 16), (32, 16, 16)])
   ('3c-br1-conv1', (32, 16, 16))
   ('3c-br1-bn1', (32, 16, 16))
   ('3c-br1-relu', (32, 16, 16))
   ('3c-br1-conv2', (32, 16, 16))
   ('3c-br1-bn2', (32, 16, 16))
   ('3c-merge', [(32, 16, 16), (32, 16, 16)])
   ('4a-split', [(32, 16, 16), (32, 16, 16)])
   ('4a-br2-conv', (64, 8, 8))
   ('4a-br2-bn', (64, 8, 8))
   ('4a-br1-conv1', (64, 8, 8))
   ('4a-br1-bn1', (64, 8, 8))
   ('4a-br1-relu', (64, 8, 8))
   ('4a-br1-conv2', (64, 8, 8))
   ('4a-br1-bn2', (64, 8, 8))
   ('4a-merge', [(64, 8, 8), (64, 8, 8)])
   ('4b-split', [(64, 8, 8), (64, 8, 8)])
   ('4b-br1-conv1', (64, 8, 8))
   ('4b-br1-bn1', (64, 8, 8))
   ('4b-br1-relu', (64, 8, 8))
   ('4b-br1-conv2', (64, 8, 8))
   ('4b-br1-bn2', (64, 8, 8))
   ('4b-merge', [(64, 8, 8), (64, 8, 8)])
   ('4c-split', [(64, 8, 8), (64, 8, 8)])
   ('4c-br1-conv1', (64, 8, 8))
   ('4c-br1-bn1', (64, 8, 8))
   ('4c-br1-relu', (64, 8, 8))
   ('4c-br1-conv2', (64, 8, 8))
   ('4c-br1-bn2', (64, 8, 8))
   ('4c-merge', [(64, 8, 8), (64, 8, 8)])
   ('pool4', (64, 1, 1))
   ('flat', (64,))
   ('ip5', (10,))
   Start intialization............
   Start intialization............
   Using GPU
   Epoch=0: 100%|██████████████████████████████████████████████████████████████████████| 500/500 [00:17<00:00, 28.18it/s, accuracy=0.59, loss=1.13]
   Training loss = 1.418575, training accuracy = 0.481940, lr = 0.100000
   Test loss = 1.145096, test accuracy = 0.586800
   Epoch=1: 100%|██████████████████████████████████████████████████████████████████████| 500/500 [00:17<00:00, 29.34it/s, accuracy=0.76, loss=0.784]
   Training loss = 0.996122, training accuracy = 0.645940, lr = 0.100000
   Test loss = 0.947394, test accuracy = 0.665900
   Epoch=2: 100%|██████████████████████████████████████████████████████████████████████| 500/500 [00:17<00:00, 28.81it/s, accuracy=0.81, loss=0.696]
   Training loss = 0.812576, training accuracy = 0.713660, lr = 0.100000
   Test loss = 0.830808, test accuracy = 0.713700
   Epoch=3: 100%|██████████████████████████████████████████████████████████████████████| 500/500 [00:17<00:00, 27.96it/s, accuracy=0.81, loss=0.617]
   Training loss = 0.708455, training accuracy = 0.751980, lr = 0.100000
   Test loss = 0.761715, test accuracy = 0.740200
   Epoch=4: 100%|██████████████████████████████████████████████████████████████████████| 500/500 [00:17<00:00, 28.55it/s, accuracy=0.75, loss=0.737]
   Training loss = 0.636522, training accuracy = 0.777100, lr = 0.100000
   Test loss = 0.656281, test accuracy = 0.771600
   Epoch=5: 100%|██████████████████████████████████████████████████████████████████████| 500/500 [00:18<00:00, 27.43it/s, accuracy=0.78, loss=0.628]
   Training loss = 0.576065, training accuracy = 0.798540, lr = 0.100000
   Test loss = 0.642445, test accuracy = 0.783800
   Epoch=6: 100%|██████████████████████████████████████████████████████████████████████| 500/500 [00:17<00:00, 27.83it/s, accuracy=0.76, loss=0.584]
   Training loss = 0.530112, training accuracy = 0.814760, lr = 0.100000
   Test loss = 0.646009, test accuracy = 0.779800
   Epoch=7: 100%|██████████████████████████████████████████████████████████████████████| 500/500 [00:18<00:00, 28.51it/s, accuracy=0.83, loss=0.557]
   Training loss = 0.497382, training accuracy = 0.826420, lr = 0.100000
   Test loss = 0.602355, test accuracy = 0.791800
   Epoch=8: 100%|██████████████████████████████████████████████████████████████████████| 500/500 [00:17<00:00, 27.81it/s, accuracy=0.85, loss=0.427]
   Training loss = 0.457444, training accuracy = 0.840000, lr = 0.100000
   Test loss = 0.566916, test accuracy = 0.803500
   Epoch=9: 100%|██████████████████████████████████████████████████████████████████████| 500/500 [00:18<00:00, 27.70it/s, accuracy=0.83, loss=0.561]
   Training loss = 0.432371, training accuracy = 0.848380, lr = 0.100000
   Test loss = 0.616507, test accuracy = 0.798000
   Epoch=10: 100%|█████████████████████████████████████████████████████████████████████| 500/500 [00:18<00:00, 28.38it/s, accuracy=0.89, loss=0.316]
   Training loss = 0.406027, training accuracy = 0.857000, lr = 0.100000
   Test loss = 0.615193, test accuracy = 0.798700
   Epoch=11: 100%|█████████████████████████████████████████████████████████████████████| 500/500 [00:17<00:00, 27.92it/s, accuracy=0.88, loss=0.322]
   Training loss = 0.393299, training accuracy = 0.861660, lr = 0.100000
   Test loss = 0.572890, test accuracy = 0.811100
   Epoch=12: 100%|█████████████████████████████████████████████████████████████████████| 500/500 [00:17<00:00, 28.69it/s, accuracy=0.88, loss=0.383]
   Training loss = 0.365866, training accuracy = 0.870500, lr = 0.100000
   Test loss = 0.602315, test accuracy = 0.803900
   Epoch=13: 100%|█████████████████████████████████████████████████████████████████████| 500/500 [00:17<00:00, 27.98it/s, accuracy=0.86, loss=0.442]
   Training loss = 0.352359, training accuracy = 0.874200, lr = 0.100000
   Test loss = 0.666458, test accuracy = 0.790300
   Epoch=14: 100%|█████████████████████████████████████████████████████████████████████| 500/500 [00:17<00:00, 27.80it/s, accuracy=0.88, loss=0.33]
   Training loss = 0.328302, training accuracy = 0.882260, lr = 0.100000
   Test loss = 0.636559, test accuracy = 0.801700
   Epoch=15: 100%|█████████████████████████████████████████████████████████████████████| 500/500 [00:17<00:00, 27.88it/s, accuracy=0.95, loss=0.238]
   Training loss = 0.313136, training accuracy = 0.889920, lr = 0.100000
   Test loss = 0.571674, test accuracy = 0.815300
   Epoch=16: 100%|█████████████████████████████████████████████████████████████████████| 500/500 [00:18<00:00, 27.90it/s, accuracy=0.86, loss=0.364]
   Training loss = 0.298799, training accuracy = 0.894040, lr = 0.100000
   Test loss = 0.583324, test accuracy = 0.813600
   Epoch=17: 100%|█████████████████████████████████████████████████████████████████████| 500/500 [00:18<00:00, 27.65it/s, accuracy=0.91, loss=0.37]
   Training loss = 0.282553, training accuracy = 0.899700, lr = 0.100000
   Test loss = 0.641781, test accuracy = 0.805500
   Epoch=18: 100%|█████████████████████████████████████████████████████████████████████| 500/500 [00:18<00:00, 26.91it/s, accuracy=0.91, loss=0.246]
   Training loss = 0.271136, training accuracy = 0.903680, lr = 0.100000
   Test loss = 0.619996, test accuracy = 0.809200
   Epoch=19: 100%|█████████████████████████████████████████████████████████████████████| 500/500 [00:18<00:00, 26.87it/s, accuracy=0.93, loss=0.231]
   Training loss = 0.259176, training accuracy = 0.908360, lr = 0.100000
   Test loss = 0.579376, test accuracy = 0.822300
   Epoch=20: 100%|█████████████████████████████████████████████████████████████████████| 500/500 [00:18<00:00, 27.60it/s, accuracy=0.92, loss=0.289]
   Training loss = 0.253127, training accuracy = 0.910180, lr = 0.100000
   Test loss = 0.649447, test accuracy = 0.806900
   ```
   
