seujung opened a new issue #13594: Autograd error when using custom parameters
URL: https://github.com/apache/incubator-mxnet/issues/13594
 
 
   Note: Providing complete information in the most concise form is the best 
way to get help. This issue template serves as the checklist for essential 
information to most of the technical issues and bug reports. For non-technical 
issues and feature requests, feel free to present the information in what you 
believe is the best form.
   
   For Q & A and discussion, please start a discussion thread at 
https://discuss.mxnet.io 
   
   ## Description
   I make a custom ActNorm Function. When executing the code, it occured 
set_data error when using autograd.record()
   
   ## Environment info (Required)
   mxnet = 1.3.0
   python = 3.5.2
   
   ## Error Message:
   ```
   MXNetError: [09:56:20] src/imperative/imperative.cc:193: Check failed: 
AGInfo::IsNone(*(outputs[i])) Assigning to NDArrays that are already in a 
computational graph will cause undefined behavior when evaluating gradients. 
Please call backward first to clear the graph or do this out side of a record 
section. Also note that you cannot use inplace operations like +=, *=, relu(x, 
out=x), y[idx]=x, etc inside a record section.
   
   Stack trace returned 10 entries:
   [bt] (0) /opt/venv/lib/python3.5/site-packages/mxnet/libmxnet.so(+0x382d4a) 
[0x7f76937d3d4a]
   [bt] (1) /opt/venv/lib/python3.5/site-packages/mxnet/libmxnet.so(+0x383381) 
[0x7f76937d4381]
   [bt] (2) /opt/venv/lib/python3.5/site-packages/mxnet/libmxnet.so(+0x2b8f0c2) 
[0x7f7695fe00c2]
   [bt] (3) /opt/venv/lib/python3.5/site-packages/mxnet/libmxnet.so(+0x2aa4ca6) 
[0x7f7695ef5ca6]
   [bt] (4) 
/opt/venv/lib/python3.5/site-packages/mxnet/libmxnet.so(MXImperativeInvokeEx+0x6f)
 [0x7f7695ef603f]
   [bt] (5) 
/usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(ffi_call_unix64+0x4c)
 [0x7f77798c3e20]
   [bt] (6) 
/usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(ffi_call+0x2eb)
 [0x7f77798c388b]
   [bt] (7) 
/usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(_ctypes_callproc+0x49a)
 [0x7f77798be01a]
   [bt] (8) 
/usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(+0x9fcb) 
[0x7f77798b1fcb]
   [bt] (9) /opt/venv/bin/python3.5(PyObject_Call+0x47) [0x5c20e7]
   ```
   ## Minimum reproducible example
   
   ```
   import numpy as np
   import mxnet as mx
   from mxnet import gluon, autograd, nd
   from mxnet.gluon import nn, utils
   import mxnet.ndarray as F
   from scipy import linalg as la
   from math import log, pi
   
   from mxnet.gluon.data.vision import datasets, transforms
   from mxnet.gluon.data import DataLoader
   
   ctx = mx.cpu()
   
   class ActNorm(nn.Block):
       
       def __init__(self, in_channel, logdet=True, **kwargs):
           super(ActNorm, self).__init__(**kwargs)
           with self.name_scope():
               self.loc = self.params.get('loc', 
init=mx.init.Zero(),shape=(1,in_channel,1,1))
               self.scale =self.params.get('scale', 
init=mx.init.Zero(),shape=(1,in_channel,1,1))
               
           self.initialized =  False
           self.logdet = logdet
           
               
       def init(self, x):
           self.loc.initialize()
           self.scale.initialize()
           flatten = nd.transpose(x,axes=(1,0,2,3)).reshape(x.shape[1],-1)
           with x.context:
               mean = nd.mean(flatten,1)
               mean = nd.expand_dims(mean,1)
               mean = nd.expand_dims(mean,2)
               mean = nd.expand_dims(mean,3)
               mean = nd.transpose(mean,axes=(1,0,2,3))
           
               std = np.std(flatten.asnumpy(),axis=1)
               std = nd.array(std)
               std = nd.expand_dims(std,1)
               std = nd.expand_dims(std,2)
               std = nd.expand_dims(std,3)
               std = nd.transpose(std,axes=(1,0,2,3))
   
               
               self.loc.set_data(-1*mean)
               self.scale.set_data(1/(std + 1e-6))
   
       
       def forward(self, x):
           _, _, height, width = x.shape
           if not self.initialized:
               self.init(x)
               self.initialized = True
           
           log_abs = logabs(self.scale.data())
           
           logdet = height * width * nd.sum(log_abs)
           
           if self.logdet:
               return self.scale.data() * (x + self.loc.data()), logdet
           
           else:
               return self.scale.data() * (x + self.loc.data()),
       
       def reverse(self, out):
           return out / self.scale.data() - self.loc.data()
   
   net = ActNorm(3)
   net.collect_params().initialize()
   
   with autograd.record():
       x = mx.random.normal(shape=(2,3,20,20))
       y = net(x)
   
   ```
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

Reply via email to