samhodge opened a new issue #9989: Cannot train example gluon style transfer
URL: https://github.com/apache/incubator-mxnet/issues/9989
 
 
Note: Providing complete information in the most concise form is the best way to get help. This issue template serves as a checklist of the essential information needed for most technical issues and bug reports. For non-technical issues and feature requests, feel free to present the information in whatever form you believe is best.

   For Q & A and discussion, please start a thread at https://discuss.mxnet.io
   
   ## Description
Cannot train the Gluon style-transfer example: `style_model.setTarget()` assigns to an NDArray that is already part of a computational graph, so the assignment needs to happen outside the `autograd.record()` block, or `backward()` needs to be called first to clear the graph.
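   The underlying restriction can be reproduced with a self-contained sketch (a hypothetical minimal example, not the style-transfer code itself): writing into an NDArray that already participates in an autograd graph fails while recording.

   ```python
   import mxnet as mx
   from mxnet import autograd

   x = mx.nd.ones((2, 2))
   x.attach_grad()  # x now participates in the autograd graph

   with autograd.record():
       y = (x * 2).sum()
       # In-place assignment to a graph NDArray while recording should raise
       # the same MXNetError reported in the traceback below:
       #   "Assigning to NDArrays that are already in a computational graph
       #    will cause undefined behavior when evaluating gradients."
       x[:] = 0
   ```

   In the example, `style_model.setTarget(style_image)` ends in exactly such an assignment (`Parameter.set_data()`, i.e. `arr[:] = data`) while `autograd.record()` is active; see the traceback below.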
   
   ## Environment info (Required)
----------Python Info----------
   ('Version      :', '2.7.10')
   ('Compiler     :', 'GCC 4.1.2')
   ('Build        :', ('default', 'Jun 29 2015 12:45:31'))
   ('Arch         :', ('64bit', 'ELF'))
   ------------Pip Info-----------
   No corresponding pip install for current python.
   ----------MXNet Info-----------
   /asset/common/software/thirdparty/mxnet/1.0.0-build1/python2.7/mxnet/optimizer.py:136: UserWarning: WARNING: New optimizer mxnet.optimizer.NAG is overriding existing optimizer mxnet.optimizer.NAG
     Optimizer.opt_registry[name].__name__))
   ('Version      :', '1.1.0')
   ('Directory    :', '/asset/common/software/thirdparty/mxnet/1.0.0-build1/python2.7/mxnet')
   Hashtag not found. Not installed from pre-built package.
   ----------System Info----------
   ('Platform     :', 'Linux-3.10.105-1.el6.elrepo.x86_64-x86_64-with-centos-6.2-Final')
   ('system       :', 'Linux')
   ('node         :', 'bladerunner')
   ('release      :', '3.10.105-1.el6.elrepo.x86_64')
   ('version      :', '#1 SMP Fri Feb 10 10:48:08 EST 2017')
   ----------Hardware Info----------
   ('machine      :', 'x86_64')
   ('processor    :', 'x86_64')
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                12
   On-line CPU(s) list:   0-11
   Thread(s) per core:    1
   Core(s) per socket:    6
   Socket(s):             2
   NUMA node(s):          2
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 63
   Model name:            Intel(R) Xeon(R) CPU E5-2609 v3 @ 1.90GHz
   Stepping:              2
   CPU MHz:               1900.000
   BogoMIPS:              3796.70
   Virtualization:        VT-x
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              256K
   L3 cache:              15360K
   NUMA node0 CPU(s):     0-5
   NUMA node1 CPU(s):     6-11
   ----------Network Test----------
   Setting timeout: 10
   Error open MXNet: https://github.com/apache/incubator-mxnet, <urlopen error timed out>, DNS finished in 0.0260591506958 sec.
   Error open PYPI: https://pypi.python.org/pypi/pip, <urlopen error [Errno 101] Network is unreachable>, DNS finished in 0.170429944992 sec.
   Error open FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, <urlopen error [Errno 101] Network is unreachable>, DNS finished in 0.204452037811 sec.
   Error open Conda: https://repo.continuum.io/pkgs/free/, <urlopen error [Errno 101] Network is unreachable>, DNS finished in 0.154680967331 sec.
   Error open Gluon Tutorial(en): http://gluon.mxnet.io, <urlopen error [Errno 101] Network is unreachable>, DNS finished in 0.381160974503 sec.
   Error open Gluon Tutorial(cn): https://zh.gluon.ai, <urlopen error [Errno 101] Network is unreachable>, DNS finished in 0.432467937469 sec.
   
   
   Package used (Python/R/Scala/Julia):
   Python
   
   ## Build info (Required if built from source)
   
Compiler (gcc/clang/mingw/visual studio): GCC 4.8.5 on CentOS 6.2
   
   MXNet commit hash:
   b73c57c526396d6485bdf65986e3819c54eb7bd9
   
   
   Build config:
   ```
   # Licensed to the Apache Software Foundation (ASF) under one
   # or more contributor license agreements.  See the NOTICE file
   # distributed with this work for additional information
   # regarding copyright ownership.  The ASF licenses this file
   # to you under the Apache License, Version 2.0 (the
   # "License"); you may not use this file except in compliance
   # with the License.  You may obtain a copy of the License at
   #
   #   http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing,
   # software distributed under the License is distributed on an
   # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
   # KIND, either express or implied.  See the License for the
   # specific language governing permissions and limitations
   # under the License.
   
   
#-------------------------------------------------------------------------------
   #  Template configuration for compiling mxnet
   #
   #  If you want to change the configuration, please use the following
#  steps. Assume you are on the root directory of mxnet. First copy this
   #  file so that any local changes will be ignored by git
   #
   #  $ cp make/config.mk .
   #
   #  Next modify the according entries, and then compile by
   #
   #  $ make
   #
   #  or build in parallel with 8 threads
   #
   #  $ make -j8
   
#-------------------------------------------------------------------------------
   
   #---------------------
   # choice of compiler
   #--------------------
   
   export CC = gcc
   export CXX = g++
   export NVCC = nvcc
   
   # whether compile with options for MXNet developer
   DEV = 0
   
   # whether compile with debug
   DEBUG = 0
   
   # whether compile with profiler
   USE_PROFILER =
   
   # whether to turn on segfault signal handler to log the stack trace
   USE_SIGNAL_HANDLER =
   
   # the additional link flags you want to add
ADD_LDFLAGS = -L /asset/common/software/thirdparty/cudnn/5.1-build1/cuda/lib64/ -L /asset/common/software/thirdparty/cuda/8.0.61-build1/lib64
   
   # the additional compile flags you want to add
ADD_CFLAGS = -I /asset/common/software/thirdparty/mkl/2018.0.128-build2/mkl/include/ -I /asset/common/software/thirdparty/cudnn/5.1-build1/cuda/include/
   
   #---------------------------------------------
   # matrix computation libraries for CPU/GPU
   #---------------------------------------------
   
   # whether use CUDA during compile
   USE_CUDA = 1
   
   # add the path to CUDA library to link and compile flag
   # if you have already add them to environment variable, leave it as NONE
   # USE_CUDA_PATH = /usr/local/cuda
   USE_CUDA_PATH = /asset/common/software/thirdparty/cuda/8.0.61-build1/
   
   # whether to enable CUDA runtime compilation
   ENABLE_CUDA_RTC = 1
   
   # whether use CuDNN R3 library
   USE_CUDNN = 1
   
   #whether to use NCCL library
   USE_NCCL = 0
   #add the path to NCCL library
   USE_NCCL_PATH = NONE
   
   # whether use opencv during compilation
   # you can disable it, however, you will not able to use
   # imbin iterator
   USE_OPENCV = 0
   
   #whether use libjpeg-turbo for image decode without OpenCV wrapper
   USE_LIBJPEG_TURBO = 0
   #add the path to libjpeg-turbo library
   USE_LIBJPEG_TURBO_PATH = NONE
   
   # use openmp for parallelization
   USE_OPENMP = 1
   
   # MKL ML Library for Intel CPU/Xeon Phi
   # Please refer to MKL_README.md for details
   
   # MKL ML Library folder, need to be root for /usr/local
   # Change to User Home directory for standard user
   # For USE_BLAS!=mkl only
   MKLML_ROOT=/asset/common/software/thirdparty/mkl/2018.0.128-build2/
   
   # whether use MKL2017 library
   USE_MKL2017 = 0
   
   # whether use MKL2017 experimental feature for high performance
   # Prerequisite USE_MKL2017=1
   USE_MKL2017_EXPERIMENTAL = 0
   
   # whether use NNPACK library
   USE_NNPACK = 0
   
   # choose the version of blas you want to use
   # can be: mkl, blas, atlas, openblas
   # in default use atlas for linux while apple for osx
   UNAME_S := $(shell uname -s)
   ifeq ($(UNAME_S), Darwin)
   USE_BLAS = apple
   else
   USE_BLAS = mkl
   endif
   
   # whether use lapack during compilation
   # only effective when compiled with blas versions openblas/apple/atlas/mkl
   USE_LAPACK = 1
   
   # path to lapack library in case of a non-standard installation
   USE_LAPACK_PATH =
   
# add path to intel library, you may need it for MKL, if you did not add the path
   # to environment variable
   USE_INTEL_PATH = /asset/common/software/thirdparty/mkl/2018.0.128-build2/
   
# If use MKL only for BLAS, choose static link automatically to allow python wrapper
   ifeq ($(USE_BLAS), mkl)
   USE_STATIC_MKL = 1
   else
   USE_STATIC_MKL = NONE
   endif
   
   #----------------------------
   # Settings for power and arm arch
   #----------------------------
   ARCH := $(shell uname -a)
   ifneq (,$(filter $(ARCH), armv6l armv7l powerpc64le ppc64le aarch64))
        USE_SSE=0
   else
        USE_SSE=1
   endif
   
   #----------------------------
   # distributed computing
   #----------------------------
   
   # whether or not to enable multi-machine supporting
   USE_DIST_KVSTORE = 0
   
# whether or not allow to read and write HDFS directly. If yes, then hadoop is
   # required
   USE_HDFS = 0
   
   # path to libjvm.so. required if USE_HDFS=1
   LIBJVM=$(JAVA_HOME)/jre/lib/amd64/server
   
   # whether or not allow to read and write AWS S3 directly. If yes, then
   # libcurl4-openssl-dev is required, it can be installed on Ubuntu by
   # sudo apt-get install -y libcurl4-openssl-dev
   USE_S3 = 0
   
   #----------------------------
   # performance settings
   #----------------------------
   # Use operator tuning
   USE_OPERATOR_TUNING = 1
   
   # Use gperftools if found
   USE_GPERFTOOLS = 1
   
   # Use JEMalloc if found, and not using gperftools
   USE_JEMALLOC = 1
   
   #----------------------------
   # additional operators
   #----------------------------
   
# path to folders containing project-specific operators that you don't want to put in src/operators
   EXTRA_OPERATORS =
   
   #----------------------------
   # other features
   #----------------------------
   
   # Create C++ interface package
   USE_CPP_PACKAGE = 1
   
   #----------------------------
   # plugins
   #----------------------------
   
   # whether to use caffe integration. This requires installing caffe.
   # You also need to add CAFFE_PATH/build/lib to your LD_LIBRARY_PATH
   # CAFFE_PATH = $(HOME)/caffe
   # MXNET_PLUGINS += plugin/caffe/caffe.mk
   
   # WARPCTC_PATH = $(HOME)/warp-ctc
   # MXNET_PLUGINS += plugin/warpctc/warpctc.mk
   
   # whether to use sframe integration. This requires build sframe
   # g...@github.com:dato-code/SFrame.git
   # SFRAME_PATH = $(HOME)/SFrame
   # MXNET_PLUGINS += plugin/sframe/plugin.mk
   ```
   
   
   ## Error Message:
   ```
samh@bladerunner ~/dev/mxnet/example/gluon/style_transfer/ run python with mxnet pillow/latest : main.py train --dataset ~/dev/coco/dataset/ --style-folder images/styles --save-model-dir models

   /asset/common/software/thirdparty/mxnet/1.0.0-build1/python2.7/mxnet/optimizer.py:136: UserWarning: WARNING: New optimizer mxnet.optimizer.NAG is overriding existing optimizer mxnet.optimizer.NAG
     Optimizer.opt_registry[name].__name__))
   ('len(style_loader):', 21)
   ('style_model:', Net(
     (gram): GramMatrix(
     
     )
     (model): Sequential(
       (0): Sequential(
         (0): ConvLayer(
           (pad): ReflectancePadding(
           
           )
           (conv2d): Conv2D(3 -> 64, kernel_size=(7, 7), stride=(1, 1))
         )
         (1): InstanceNorm(eps=1e-05, in_channels=64)
         (2): Activation(relu)
         (3): Bottleneck(
           (conv_block): Sequential(
             (0): InstanceNorm(eps=1e-05, in_channels=64)
             (1): Activation(relu)
             (2): Conv2D(64 -> 32, kernel_size=(1, 1), stride=(1, 1))
             (3): InstanceNorm(eps=1e-05, in_channels=32)
             (4): Activation(relu)
             (5): ConvLayer(
               (pad): ReflectancePadding(
               
               )
               (conv2d): Conv2D(32 -> 32, kernel_size=(3, 3), stride=(2, 2))
             )
             (6): InstanceNorm(eps=1e-05, in_channels=32)
             (7): Activation(relu)
             (8): Conv2D(32 -> 128, kernel_size=(1, 1), stride=(1, 1))
           )
        (residual_layer): Conv2D(64 -> 128, kernel_size=(1, 1), stride=(2, 2))
         )
         (4): Bottleneck(
           (conv_block): Sequential(
             (0): InstanceNorm(eps=1e-05, in_channels=128)
             (1): Activation(relu)
             (2): Conv2D(128 -> 128, kernel_size=(1, 1), stride=(1, 1))
             (3): InstanceNorm(eps=1e-05, in_channels=128)
             (4): Activation(relu)
             (5): ConvLayer(
               (pad): ReflectancePadding(
               
               )
               (conv2d): Conv2D(128 -> 128, kernel_size=(3, 3), stride=(2, 2))
             )
             (6): InstanceNorm(eps=1e-05, in_channels=128)
             (7): Activation(relu)
             (8): Conv2D(128 -> 512, kernel_size=(1, 1), stride=(1, 1))
           )
        (residual_layer): Conv2D(128 -> 512, kernel_size=(1, 1), stride=(2, 2))
         )
       )
       (1): Inspiration(N x 512)
       (2): Bottleneck(
         (conv_block): Sequential(
           (0): InstanceNorm(eps=1e-05, in_channels=512)
           (1): Activation(relu)
           (2): Conv2D(512 -> 128, kernel_size=(1, 1), stride=(1, 1))
           (3): InstanceNorm(eps=1e-05, in_channels=128)
           (4): Activation(relu)
           (5): ConvLayer(
             (pad): ReflectancePadding(
             
             )
             (conv2d): Conv2D(128 -> 128, kernel_size=(3, 3), stride=(1, 1))
           )
           (6): InstanceNorm(eps=1e-05, in_channels=128)
           (7): Activation(relu)
           (8): Conv2D(128 -> 512, kernel_size=(1, 1), stride=(1, 1))
         )
       )
       (3): Bottleneck(
         (conv_block): Sequential(
           (0): InstanceNorm(eps=1e-05, in_channels=512)
           (1): Activation(relu)
           (2): Conv2D(512 -> 128, kernel_size=(1, 1), stride=(1, 1))
           (3): InstanceNorm(eps=1e-05, in_channels=128)
           (4): Activation(relu)
           (5): ConvLayer(
             (pad): ReflectancePadding(
             
             )
             (conv2d): Conv2D(128 -> 128, kernel_size=(3, 3), stride=(1, 1))
           )
           (6): InstanceNorm(eps=1e-05, in_channels=128)
           (7): Activation(relu)
           (8): Conv2D(128 -> 512, kernel_size=(1, 1), stride=(1, 1))
         )
       )
       (4): Bottleneck(
         (conv_block): Sequential(
           (0): InstanceNorm(eps=1e-05, in_channels=512)
           (1): Activation(relu)
           (2): Conv2D(512 -> 128, kernel_size=(1, 1), stride=(1, 1))
           (3): InstanceNorm(eps=1e-05, in_channels=128)
           (4): Activation(relu)
           (5): ConvLayer(
             (pad): ReflectancePadding(
             
             )
             (conv2d): Conv2D(128 -> 128, kernel_size=(3, 3), stride=(1, 1))
           )
           (6): InstanceNorm(eps=1e-05, in_channels=128)
           (7): Activation(relu)
           (8): Conv2D(128 -> 512, kernel_size=(1, 1), stride=(1, 1))
         )
       )
       (5): Bottleneck(
         (conv_block): Sequential(
           (0): InstanceNorm(eps=1e-05, in_channels=512)
           (1): Activation(relu)
           (2): Conv2D(512 -> 128, kernel_size=(1, 1), stride=(1, 1))
           (3): InstanceNorm(eps=1e-05, in_channels=128)
           (4): Activation(relu)
           (5): ConvLayer(
             (pad): ReflectancePadding(
             
             )
             (conv2d): Conv2D(128 -> 128, kernel_size=(3, 3), stride=(1, 1))
           )
           (6): InstanceNorm(eps=1e-05, in_channels=128)
           (7): Activation(relu)
           (8): Conv2D(128 -> 512, kernel_size=(1, 1), stride=(1, 1))
         )
       )
       (6): Bottleneck(
         (conv_block): Sequential(
           (0): InstanceNorm(eps=1e-05, in_channels=512)
           (1): Activation(relu)
           (2): Conv2D(512 -> 128, kernel_size=(1, 1), stride=(1, 1))
           (3): InstanceNorm(eps=1e-05, in_channels=128)
           (4): Activation(relu)
           (5): ConvLayer(
             (pad): ReflectancePadding(
             
             )
             (conv2d): Conv2D(128 -> 128, kernel_size=(3, 3), stride=(1, 1))
           )
           (6): InstanceNorm(eps=1e-05, in_channels=128)
           (7): Activation(relu)
           (8): Conv2D(128 -> 512, kernel_size=(1, 1), stride=(1, 1))
         )
       )
       (7): Bottleneck(
         (conv_block): Sequential(
           (0): InstanceNorm(eps=1e-05, in_channels=512)
           (1): Activation(relu)
           (2): Conv2D(512 -> 128, kernel_size=(1, 1), stride=(1, 1))
           (3): InstanceNorm(eps=1e-05, in_channels=128)
           (4): Activation(relu)
           (5): ConvLayer(
             (pad): ReflectancePadding(
             
             )
             (conv2d): Conv2D(128 -> 128, kernel_size=(3, 3), stride=(1, 1))
           )
           (6): InstanceNorm(eps=1e-05, in_channels=128)
           (7): Activation(relu)
           (8): Conv2D(128 -> 512, kernel_size=(1, 1), stride=(1, 1))
         )
       )
       (8): UpBottleneck(
         (conv_block): Sequential(
           (0): InstanceNorm(eps=1e-05, in_channels=512)
           (1): Activation(relu)
           (2): Conv2D(512 -> 32, kernel_size=(1, 1), stride=(1, 1))
           (3): InstanceNorm(eps=1e-05, in_channels=32)
           (4): Activation(relu)
           (5): UpsampleConvLayer(
          (conv2d): Conv2D(32 -> 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
           )
           (6): InstanceNorm(eps=1e-05, in_channels=32)
           (7): Activation(relu)
           (8): Conv2D(32 -> 128, kernel_size=(1, 1), stride=(1, 1))
         )
         (residual_layer): UpsampleConvLayer(
           (conv2d): Conv2D(512 -> 128, kernel_size=(1, 1), stride=(1, 1))
         )
       )
       (9): UpBottleneck(
         (conv_block): Sequential(
           (0): InstanceNorm(eps=1e-05, in_channels=128)
           (1): Activation(relu)
           (2): Conv2D(128 -> 16, kernel_size=(1, 1), stride=(1, 1))
           (3): InstanceNorm(eps=1e-05, in_channels=16)
           (4): Activation(relu)
           (5): UpsampleConvLayer(
          (conv2d): Conv2D(16 -> 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
           )
           (6): InstanceNorm(eps=1e-05, in_channels=16)
           (7): Activation(relu)
           (8): Conv2D(16 -> 64, kernel_size=(1, 1), stride=(1, 1))
         )
         (residual_layer): UpsampleConvLayer(
           (conv2d): Conv2D(128 -> 64, kernel_size=(1, 1), stride=(1, 1))
         )
       )
       (10): InstanceNorm(eps=1e-05, in_channels=64)
       (11): Activation(relu)
       (12): ConvLayer(
         (pad): ReflectancePadding(
         
         )
         (conv2d): Conv2D(64 -> 3, kernel_size=(7, 7), stride=(1, 1))
       )
     )
     (ins): Inspiration(N x 512)
     (model1): Sequential(
       (0): ConvLayer(
         (pad): ReflectancePadding(
         
         )
         (conv2d): Conv2D(3 -> 64, kernel_size=(7, 7), stride=(1, 1))
       )
       (1): InstanceNorm(eps=1e-05, in_channels=64)
       (2): Activation(relu)
       (3): Bottleneck(
         (conv_block): Sequential(
           (0): InstanceNorm(eps=1e-05, in_channels=64)
           (1): Activation(relu)
           (2): Conv2D(64 -> 32, kernel_size=(1, 1), stride=(1, 1))
           (3): InstanceNorm(eps=1e-05, in_channels=32)
           (4): Activation(relu)
           (5): ConvLayer(
             (pad): ReflectancePadding(
             
             )
             (conv2d): Conv2D(32 -> 32, kernel_size=(3, 3), stride=(2, 2))
           )
           (6): InstanceNorm(eps=1e-05, in_channels=32)
           (7): Activation(relu)
           (8): Conv2D(32 -> 128, kernel_size=(1, 1), stride=(1, 1))
         )
         (residual_layer): Conv2D(64 -> 128, kernel_size=(1, 1), stride=(2, 2))
       )
       (4): Bottleneck(
         (conv_block): Sequential(
           (0): InstanceNorm(eps=1e-05, in_channels=128)
           (1): Activation(relu)
           (2): Conv2D(128 -> 128, kernel_size=(1, 1), stride=(1, 1))
           (3): InstanceNorm(eps=1e-05, in_channels=128)
           (4): Activation(relu)
           (5): ConvLayer(
             (pad): ReflectancePadding(
             
             )
             (conv2d): Conv2D(128 -> 128, kernel_size=(3, 3), stride=(2, 2))
           )
           (6): InstanceNorm(eps=1e-05, in_channels=128)
           (7): Activation(relu)
           (8): Conv2D(128 -> 512, kernel_size=(1, 1), stride=(1, 1))
         )
         (residual_layer): Conv2D(128 -> 512, kernel_size=(1, 1), stride=(2, 2))
       )
     )
   ))
[13:10:54] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:107: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
    Traceback (most recent call last):
      File "main.py", line 228, in <module>
        main()
      File "main.py", line 213, in main
        train(args)
      File "main.py", line 82, in train
        style_model.setTarget(style_image)
      File "/home/samh/dev/mxnet/example/gluon/style_transfer/net.py", line 228, in setTarget
        self.ins.setTarget(G)
      File "/home/samh/dev/mxnet/example/gluon/style_transfer/net.py", line 252, in setTarget
        self.gram.set_data(target)
      File "/asset/common/software/thirdparty/mxnet/1.0.0-build1/python2.7/mxnet/gluon/parameter.py", line 374, in set_data
        arr[:] = data
      File "/asset/common/software/thirdparty/mxnet/1.0.0-build1/python2.7/mxnet/ndarray/ndarray.py", line 437, in __setitem__
        self._set_nd_basic_indexing(key, value)
      File "/asset/common/software/thirdparty/mxnet/1.0.0-build1/python2.7/mxnet/ndarray/ndarray.py", line 691, in _set_nd_basic_indexing
        value.copyto(self)
      File "/asset/common/software/thirdparty/mxnet/1.0.0-build1/python2.7/mxnet/ndarray/ndarray.py", line 1884, in copyto
        return _internal._copyto(self, out=other)
      File "<string>", line 25, in _copyto
      File "/asset/common/software/thirdparty/mxnet/1.0.0-build1/python2.7/mxnet/_ctypes/ndarray.py", line 92, in _imperative_invoke
        ctypes.byref(out_stypes)))
      File "/asset/common/software/thirdparty/mxnet/1.0.0-build1/python2.7/mxnet/base.py", line 148, in check_call
        raise MXNetError(py_str(_LIB.MXGetLastError()))
    mxnet.base.MXNetError: [13:11:10] src/imperative/imperative.cc:192: Check failed: AGInfo::IsNone(*(outputs[i])) Assigning to NDArrays that are already in a computational graph will cause undefined behavior when evaluating gradients. Please call backward first to clear the graph or do this out side of a record section.

    Stack trace returned 10 entries:
    [bt] (0) /asset/common/software/thirdparty/mxnet/1.0.0-build1/lib/libmxnet.so(dmlc::StackTrace()+0x38) [0x7f937e3b46d8]
    [bt] (1) /asset/common/software/thirdparty/mxnet/1.0.0-build1/lib/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x18) [0x7f937e3b4ad8]
    [bt] (2) /asset/common/software/thirdparty/mxnet/1.0.0-build1/lib/libmxnet.so(mxnet::Imperative::RecordOp(nnvm::NodeAttrs&&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, mxnet::OpStatePtr const&, std::vector<bool, std::allocator<bool> >*, std::vector<bool, std::allocator<bool> >*)+0x10b) [0x7f938085e7cb]
    [bt] (3) /asset/common/software/thirdparty/mxnet/1.0.0-build1/lib/libmxnet.so(MXImperativeInvokeImpl(void*, int, void**, int*, void***, int, char const**, char const**)+0x756) [0x7f9380790f36]
    [bt] (4) /asset/common/software/thirdparty/mxnet/1.0.0-build1/lib/libmxnet.so(MXImperativeInvokeEx+0x63) [0x7f93807911e3]
    [bt] (5) /asset/common/software/thirdparty/python/2.7.10-build1/arch/linux-centos6/x86_64/ucs4/ndebug/lib/python2.7/lib-dynload/_ctypes.so(ffi_call_unix64+0x4c) [0x7f939064f6e4]
    [bt] (6) /asset/common/software/thirdparty/python/2.7.10-build1/arch/linux-centos6/x86_64/ucs4/ndebug/lib/python2.7/lib-dynload/_ctypes.so(ffi_call+0x1f9) [0x7f939064f4e9]
    [bt] (7) /asset/common/software/thirdparty/python/2.7.10-build1/arch/linux-centos6/x86_64/ucs4/ndebug/lib/python2.7/lib-dynload/_ctypes.so(_ctypes_callproc+0x416) [0x7f9390646fb6]
    [bt] (8) /asset/common/software/thirdparty/python/2.7.10-build1/arch/linux-centos6/x86_64/ucs4/ndebug/lib/python2.7/lib-dynload/_ctypes.so(+0x9fef) [0x7f939063efef]
    [bt] (9) /asset/common/software/thirdparty/python/2.7.10-build1/arch/linux-centos6/x86_64/ucs4/ndebug/lib/libpython2.7.so.1.0(PyObject_Call+0x67) [0x7f939780b427]
   ```
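
   The error message itself names the two remedies: call `backward()` first to clear the graph, or perform the assignment outside the record scope. In terms of the minimal example above, moving the assignment out of the `record()` block avoids the check entirely (a sketch, under the same assumptions):

   ```python
   import mxnet as mx
   from mxnet import autograd

   x = mx.nd.ones((2, 2))
   x.attach_grad()

   x[:] = 0  # assignment outside record(): not recorded, so no graph check

   with autograd.record():
       y = (x * 2).sum()
   y.backward()
   print(x.grad)  # gradients evaluate normally
   ```

   This mirrors the workaround described in the last section below: move `setTarget()` out of the `record()` block.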
   
   
   ## Minimum reproducible example
Run main.py from

   https://github.com/apache/incubator-mxnet/tree/master/example/gluon/style_transfer

   as follows, after downloading the COCO dataset and the style images:

   main.py train --dataset ~/dev/coco/dataset/ --style-folder images/styles --save-model-dir models
   
   ## Steps to reproduce
   
1. Install MXNet
   2. Get the installed version into the environment
   3. cd example/gluon/style_transfer/
   4. python main.py train --dataset ~/dev/coco/dataset/ --style-folder images/styles --save-model-dir models
   
   
   ## What have you tried to solve it?
   
1. Moved the `style_model.setTarget(style_image)` call at https://github.com/apache/incubator-mxnet/blob/master/example/gluon/style_transfer/main.py#L82 to between L79 and L80, i.e. outside the `autograd.record()` block, as sketched below
   2. The model will then train, but produces a bad result
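
   For illustration, the rearrangement looks roughly like this (a paraphrased training-loop skeleton, not the verbatim main.py; `style_model`, `style_image`, `x`, `total_loss`, `trainer`, and `batch_size` stand in for objects built earlier in the script):

   ```python
   # Original (crashes): setTarget() runs inside the record scope.
   #
   #     with autograd.record():
   #         style_model.setTarget(style_image)
   #         y = style_model(x)
   #         ...

   # Workaround (trains, but produces a bad result):
   style_model.setTarget(style_image)   # moved above the record block
   with autograd.record():
       y = style_model(x)
       total_loss.backward()
   trainer.step(batch_size)
   ```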
   
