If setting cnmem=0.9 raises an exception, it means that at least 10% of
your GPU memory, or about 800 MB, is already in use (maybe by your X server,
browser, a game...). Add to that 800 MB the 6.313 GB used by Theano and the
750 MB you are trying to allocate, and you are already at 7.863 GB. With the
workspace memory used by cuDNN and a bit of fragmentation, you may actually
go over 8 GB.
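
You can check what is already occupying the card with nvidia-smi, and then
reserve a smaller fraction for CNMeM so that those allocations still fit.
A minimal sketch, assuming the old CUDA backend where the lib.cnmem flag
applies (0.8 is just an example value, adjust it to your situation):

    import os
    # THEANO_FLAGS must be set before theano is imported; lib.cnmem gives the
    # fraction of GPU memory that CNMeM pre-allocates at startup.
    os.environ['THEANO_FLAGS'] = 'device=gpu,floatX=float32,lib.cnmem=0.8'
    import theano  # CNMeM now reserves roughly 80% of the 8 GB card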

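Frédéric's suggestion below, keeping the dataset in host memory and handing
only one mini-batch per call to the compiled function, would look roughly like
this. It is only a minimal sketch with made-up shapes and a toy model, not
your actual seq2seq code:

    import numpy as np
    import theano
    import theano.tensor as T

    # Toy model: the point is that X and Y stay on the CPU as NumPy arrays,
    # and only the slice passed to train_fn is transferred to the GPU.
    x = T.matrix('x')
    y = T.matrix('y')
    W = theano.shared(np.zeros((131, 131), dtype='float32'), name='W')
    loss = T.sum((T.dot(x, W) - y) ** 2)
    lr = np.float32(0.01)
    train_fn = theano.function([x, y], loss,
                               updates=[(W, W - lr * T.grad(loss, W))])

    X = np.random.rand(100000, 131).astype('float32')  # full dataset, on the CPU
    Y = np.random.rand(100000, 131).astype('float32')
    BATCH = 15
    for start in range(0, len(X), BATCH):
        train_fn(X[start:start + BATCH], Y[start:start + BATCH])

That way only one 15-row slice lives on the GPU at a time instead of the whole
dataset.
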
On Thu, Oct 20, 2016, Frédéric Bastien wrote:
> Theano needs memory for the intermediate computations. If your GPU has 8 GB
> and you use 6 GB for the weights, then you won't have enough space left for
> the computation.
> 
> Use a smaller model and/or a lower batch size.
> 
> If it is your dataset that takes up much of the space on the GPU, keep it on
> the CPU and transfer just one mini-batch at a time to the GPU.
> 
> On Wed, Oct 19, 2016 at 5:27 AM, GUKBEOM LEE <[email protected]> wrote:
> 
> > Hi. I have a problem.
> > When I run my Theano program, I get a "CNMEM_STATUS_OUT_OF_MEMORY" error,
> > even though I have enough GPU memory (GeForce 1070, 8 GB); only 6.3 GB is used.
> > It seems like a cnmem problem, so I changed the cnmem value:
> > cnmem = 0.85 raises the error,
> > cnmem = 0.9 raises the error,
> > cnmem = 1.0 raises the error.
> > What should I do? The full error output is below.
> >
> > ________________________________________________________________________________
> >
> > Traceback (most recent call last):
> >   File "seq2seq2.py", line 552, in <module>
> >     seq_to_1hot(seq_by_mini_batch_input, seq_by_mini_batch_target, MINIBATCH_UNIT, ended_i)
> >   File "seq2seq2.py", line 436, in seq_to_1hot
> >     print("result of train function(loss or update) :", seq2seq.train(si, st) )
> >   File "seq2seq2.py", line 349, in train
> >     return self._train(seq_input, seq_target)
> >   File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 871, in __call__
> >     storage_map=getattr(self.fn, 'storage_map', None))
> >   File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 314, in raise_with_op
> >     reraise(exc_type, exc_value, exc_trace)
> >   File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 859, in __call__
> >     outputs = self.fn()
> > MemoryError: Error allocating 750015000 bytes of device memory (CNMEM_STATUS_OUT_OF_MEMORY).
> > Apply node that caused the error: 
> > GpuAlloc{memset_0=True}(CudaNdarrayConstant{[[[
> > 0.]]]}, Elemwise{Composite{((i0 * i1 * i2) // i3)}}.0, TensorConstant{15},
> > TensorConstant{100002})
> > Toposort index: 219
> > Inputs types: [CudaNdarrayType(float32, (True, True, True)),
> > TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64,
> > scalar)]
> > Inputs shapes: [(1, 1, 1), (), (), ()]
> > Inputs strides: [(0, 0, 0), (), (), ()]
> > Inputs values: [CudaNdarray([[[ 0.]]]), array(125), array(15),
> > array(100002)]
> > Outputs clients: [[GpuIncSubtensor{Inc;:int64:}(GpuAlloc{memset_0=True}.0,
> > GpuElemwise{Composite{((i0 * i1) / i2)}}[(0, 1)].0, ScalarFromTensor.0),
> > GpuIncSubtensor{InplaceInc;int64::}(GpuAlloc{memset_0=True}.0,
> > GpuIncSubtensor{Inc;:int64:}.0, Constant{0})]]
> >
> > Debugprint of the apply node:
> > GpuAlloc{memset_0=True} [id A] <CudaNdarrayType(float32, 3D)> ''
> >  |CudaNdarrayConstant{[[[ 0.]]]} [id B] <CudaNdarrayType(float32, (True,
> > True, True))>
> >  |Elemwise{Composite{((i0 * i1 * i2) // i3)}} [id C] <TensorType(int64,
> > scalar)> ''
> >  | |Elemwise{sub,no_inplace} [id D] <TensorType(int64, scalar)> ''
> >  | | |Elemwise{add,no_inplace} [id E] <TensorType(int64, scalar)> ''
> >  | | | |TensorConstant{1} [id F] <TensorType(int64, scalar)>
> >  | | | |Shape_i{0} [id G] <TensorType(int64, scalar)> ''
> >  | | |   |<TensorType(float32, 3D)> [id H] <TensorType(float32, 3D)>
> >  | | |Elemwise{Composite{Switch(LT(i0, i1), i0, i1)}} [id I]
> > <TensorType(int64, scalar)> ''
> >  | |   |TensorConstant{1} [id F] <TensorType(int64, scalar)>
> >  | |   |Elemwise{add,no_inplace} [id E] <TensorType(int64, scalar)> ''
> >  | |Shape_i{0} [id J] <TensorType(int64, scalar)> ''
> >  | | |y_t [id K] <CudaNdarrayType(float32, matrix)>
> >  | |Shape_i{1} [id L] <TensorType(int64, scalar)> ''
> >  | | |y_t [id K] <CudaNdarrayType(float32, matrix)>
> >  | |TensorConstant{1500030} [id M] <TensorType(int64, scalar)>
> >  |TensorConstant{15} [id N] <TensorType(int64, scalar)>
> >  |TensorConstant{100002} [id O] <TensorType(int64, scalar)>
> >
> > Storage map footprint:
> >  - forall_inplace,gpu,scan_fn}.0, Shape: (126, 15, 100002), ElemSize: 4
> > Byte(s), TotalSize: 756015120 Byte(s)
> >  - GpuAlloc{memset_0=True}.0, Shape: (126, 15, 100002), ElemSize: 4
> > Byte(s), TotalSize: 756015120 Byte(s)
> >  - <TensorType(float32, 3D)>, Input, Shape: (125, 15, 100002), ElemSize: 4
> > Byte(s), TotalSize: 750015000 Byte(s)
> >  - GpuReshape{3}.0, Shape: (125, 15, 100002), ElemSize: 4 Byte(s),
> > TotalSize: 750015000 Byte(s)
> >  - GpuReshape{3}.0, Shape: (125, 15, 100002), ElemSize: 4 Byte(s),
> > TotalSize: 750015000 Byte(s)
> >  - GpuElemwise{clip,no_inplace}.0, Shape: (125, 15, 100002), ElemSize: 4
> > Byte(s), TotalSize: 750015000 Byte(s)
> >  - GpuElemwise{Composite{Cast{float32}(AND(GE(i0, i1), LE(i0,
> > i2)))},no_inplace}.0, Shape: (125, 15, 100002), ElemSize: 4 Byte(s),
> > TotalSize: 750015000 Byte(s)
> >  - Uf, Shared Input, Shape: (100002, 131), ElemSize: 4 Byte(s), TotalSize:
> > 52401048 Byte(s)
> >  - Uo, Shared Input, Shape: (100002, 131), ElemSize: 4 Byte(s), TotalSize:
> > 52401048 Byte(s)
> >  - Ui, Shared Input, Shape: (100002, 131), ElemSize: 4 Byte(s), TotalSize:
> > 52401048 Byte(s)
> >  - Ug, Shared Input, Shape: (100002, 131), ElemSize: 4 Byte(s), TotalSize:
> > 52401048 Byte(s)
> >  - Uf, Shared Input, Shape: (100002, 131), ElemSize: 4 Byte(s), TotalSize:
> > 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 100002),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 100002),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (100002, 131),
> > ElemSize: 4 Byte(s), TotalSize: 52401048 Byte(s)
> >  - Ug, Shared Input, Shape: (100002, 131), ElemSize: 4 Byte(s), TotalSize:
> > 52401048 Byte(s)
> >  - Wh, Shared Input, Shape: (131, 100002), ElemSize: 4 Byte(s), TotalSize:
> > 52401048 Byte(s)
> >  - Uo, Shared Input, Shape: (100002, 131), ElemSize: 4 Byte(s), TotalSize:
> > 52401048 Byte(s)
> >  - Ui, Shared Input, Shape: (100002, 131), ElemSize: 4 Byte(s), TotalSize:
> > 52401048 Byte(s)
> >  - GpuDimShuffle{0,x,1}.0, Shape: (125, 1, 100002), ElemSize: 4 Byte(s),
> > TotalSize: 50001000 Byte(s)
> >  - <TensorType(float32, 3D)>, Input, Shape: (2, 15, 100002), ElemSize: 4
> > Byte(s), TotalSize: 12000240 Byte(s)
> >  - GpuFromHost.0, Shape: (2, 15, 100002), ElemSize: 4 Byte(s), TotalSize:
> > 12000240 Byte(s)
> >  - y_t, Shared Input, Shape: (15, 100002), ElemSize: 4 Byte(s), TotalSize:
> > 6000120 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 100002),
> > ElemSize: 4 Byte(s), TotalSize: 6000120 Byte(s)
> >  - bh, Shared Input, Shape: (15, 100002), ElemSize: 4 Byte(s), TotalSize:
> > 6000120 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 100002),
> > ElemSize: 4 Byte(s), TotalSize: 6000120 Byte(s)
> >  - forall_inplace,gpu,scan_fn}.1, Shape: (126, 15, 131), ElemSize: 4
> > Byte(s), TotalSize: 990360 Byte(s)
> >  - forall_inplace,gpu,scan_fn}.2, Shape: (126, 15, 131), ElemSize: 4
> > Byte(s), TotalSize: 990360 Byte(s)
> >  - Wo, Shared Input, Shape: (131, 131), ElemSize: 4 Byte(s), TotalSize:
> > 68644 Byte(s)
> >  - Wi, Shared Input, Shape: (131, 131), ElemSize: 4 Byte(s), TotalSize:
> > 68644 Byte(s)
> >  - Wg, Shared Input, Shape: (131, 131), ElemSize: 4 Byte(s), TotalSize:
> > 68644 Byte(s)
> >  - Wf, Shared Input, Shape: (131, 131), ElemSize: 4 Byte(s), TotalSize:
> > 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - Wg, Shared Input, Shape: (131, 131), ElemSize: 4 Byte(s), TotalSize:
> > 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (131, 131),
> > ElemSize: 4 Byte(s), TotalSize: 68644 Byte(s)
> >  - Wo, Shared Input, Shape: (131, 131), ElemSize: 4 Byte(s), TotalSize:
> > 68644 Byte(s)
> >  - Wi, Shared Input, Shape: (131, 131), ElemSize: 4 Byte(s), TotalSize:
> > 68644 Byte(s)
> >  - Wf, Shared Input, Shape: (131, 131), ElemSize: 4 Byte(s), TotalSize:
> > 68644 Byte(s)
> >  - forall_inplace,gpu,scan_fn}.1, Shape: (3, 15, 131), ElemSize: 4
> > Byte(s), TotalSize: 23580 Byte(s)
> >  - forall_inplace,gpu,scan_fn}.0, Shape: (3, 15, 131), ElemSize: 4
> > Byte(s), TotalSize: 23580 Byte(s)
> >  - GpuAlloc{memset_0=True}.0, Shape: (3, 15, 131), ElemSize: 4 Byte(s),
> > TotalSize: 23580 Byte(s)
> >  - GpuAlloc{memset_0=True}.0, Shape: (2, 15, 131), ElemSize: 4 Byte(s),
> > TotalSize: 15720 Byte(s)
> >  - bo, Shared Input, Shape: (15, 131), ElemSize: 4 Byte(s), TotalSize:
> > 7860 Byte(s)
> >  - bi, Shared Input, Shape: (15, 131), ElemSize: 4 Byte(s), TotalSize:
> > 7860 Byte(s)
> >  - bg, Shared Input, Shape: (15, 131), ElemSize: 4 Byte(s), TotalSize:
> > 7860 Byte(s)
> >  - bf, Shared Input, Shape: (15, 131), ElemSize: 4 Byte(s), TotalSize:
> > 7860 Byte(s)
> >  - h_tm1, Shared Input, Shape: (15, 131), ElemSize: 4 Byte(s), TotalSize:
> > 7860 Byte(s)
> >  - c_tm1, Shared Input, Shape: (15, 131), ElemSize: 4 Byte(s), TotalSize:
> > 7860 Byte(s)
> >  - GpuSubtensor{int64}.0, Shape: (15, 131), ElemSize: 4 Byte(s),
> > TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - GpuSubtensor{int64}.0, Shape: (15, 131), ElemSize: 4 Byte(s),
> > TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - c_tm1, Shared Input, Shape: (15, 131), ElemSize: 4 Byte(s), TotalSize:
> > 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (15, 131),
> > ElemSize: 4 Byte(s), TotalSize: 7860 Byte(s)
> >  - h_tm1, Shared Input, Shape: (15, 131), ElemSize: 4 Byte(s), TotalSize:
> > 7860 Byte(s)
> >  - bo, Shared Input, Shape: (15, 131), ElemSize: 4 Byte(s), TotalSize:
> > 7860 Byte(s)
> >  - bi, Shared Input, Shape: (15, 131), ElemSize: 4 Byte(s), TotalSize:
> > 7860 Byte(s)
> >  - bg, Shared Input, Shape: (15, 131), ElemSize: 4 Byte(s), TotalSize:
> > 7860 Byte(s)
> >  - bf, Shared Input, Shape: (15, 131), ElemSize: 4 Byte(s), TotalSize:
> > 7860 Byte(s)
> >  - TensorConstant{[    -1   ..15 100002]}, Shape: (3,), ElemSize: 8
> > Byte(s), TotalSize: 24 Byte(s)
> >  - TensorConstant{0}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
> > Byte(s)
> >  - Elemwise{add,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(LT((i0 + i1), i2), i2, (i0 + i1))}}.0,
> > Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{add,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Shape_i{0}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Shape_i{1}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{sub,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Elemwise{Composite{maximum(maximum(((i0 - Switch(i1, (i2 + i0 + i3),
> > i2)) + i3), i4), maximum(maximum(((i0 - Switch(LT(Composite{Switch(LT(i0,
> > i1), i1, i0)}(Composite{Switch(LT(i0, i1), (i2 - i3), i0)}(Composite{((i0 -
> > (Switch(LT(i1, i2), i2, i1) - i3)) - 
> > i3)}((Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i5, i6, i0, i3, i7, i8, i9, (i9 - i3)) + i3), Composite{((((i0 -
> > Switch(GE(i1, i2), i2, i1)) - i3) // i3) + 
> > i3)}(Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i5, i6, i0, i3, i7, i8, i9, (i9 - i3)), Composite{Switch(LT(i0, i1),
> > i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i10, i11, i0, i3), i7,
> > i8), i9, i3), i7, i3), i7, (Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i5, i6, i0, i3, i7, i8, i9, (i9 - i3)) + i3), i3), i7),
> > Composite{Switch(LT(i0, i1), i1, 
> > i0)}((Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i5, i6, i0, i3, i7, i8, i9, (i9 - i3)) + i3), i7)),
> > Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), (i2 -
> > i3), i0)}(Composite{((i0 - (Switch(LT(i1, i2), i2, i1) - i3)) -
> > i3)}((Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i5, i6, i0, i3, i7, i8, i9, (i9 -
> > i3)) + i3), Composite{((((i0 - Switch(GE(i1, i2), i2, i1)) - i3) // i3) +
> > i3)}(Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i5, i6, i0, i3, i7, i8, i9, (i9 -
> > i3)), Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 +
> > i3), i1)}(i10, i11, i0, i3), i7, i8), i9, i3), i7, i3), i7,
> > (Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i5, i6, i0, i3, i7, i8, i9, (i9 -
> > i3)) + i3), i3), i7), Composite{Switch(LT(i0, i1), i1,
> > i0)}((Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i5, i6, i0, i3, i7, i8, i9, (i9 -
> > i3)) + i3), i7))) + i3), i4), maximum(((i0 - 
> > Switch(LT(Composite{Switch(LT(i0,
> > i1), i1, i0)}(Composite{Switch(LT(i0, i1), (i2 - i3), i0)}(Composite{((i0 -
> > (Switch(LT(i1, i2), i2, i1) - i3)) - 
> > i3)}((Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i12, i13, i0, i3, i7, i8, i9, (i9 - i3)) + i3), Composite{((((i0 -
> > Switch(GE(i1, i2), i2, i1)) - i3) // i3) + 
> > i3)}(Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i12, i13, i0, i3, i7, i8, i9, (i9 - i3)), Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i14, i15, i0, i3),
> > i7, i8), i9, i3), i7, i3), i7, (Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i12, i13, i0, i3, i7, i8, i9, (i9 - i3)) + i3), i3), i7),
> > Composite{Switch(LT(i0, i1), i1, 
> > i0)}((Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i12, i13, i0, i3, i7, i8, i9, (i9 - i3)) + i3), i7)),
> > Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), (i2 -
> > i3), i0)}(Composite{((i0 - (Switch(LT(i1, i2), i2, i1) - i3)) -
> > i3)}((Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i12, i13, i0, i3, i7, i8, i9, (i9
> > - i3)) + i3), Composite{((((i0 - Switch(GE(i1, i2), i2, i1)) - i3) // i3) +
> > i3)}(Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i12, i13, i0, i3, i7, i8, i9, (i9
> > - i3)), Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2
> > + i3), i1)}(i14, i15, i0, i3), i7, i8), i9, i3), i7, i3), i7,
> > (Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i12, i13, i0, i3, i7, i8, i9, (i9
> > - i3)) + i3), i3), i7), Composite{Switch(LT(i0, i1), i1,
> > i0)}((Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i12, i13, i0, i3, i7, i8, i9, (i9
> > - i3)) + i3), i7))) + i3), i4)))}}.0, Shape: (), ElemSize: 8 Byte(s),
> > TotalSize: 8.0 Byte(s)
> >  - Shape_i{0}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(i0, i1, maximum(i2, (i3 - i4)))}}.0, Shape:
> > (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{add,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Shape_i{0}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - TensorConstant{-2}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
> > Byte(s)
> >  - Elemwise{maximum,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s),
> > TotalSize: 8.0 Byte(s)
> >  - TensorConstant{15}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
> > Byte(s)
> >  - Constant{1}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{add,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(LT(i0, i1), (i0 + i2), (i0 - i2))}}.0,
> > Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{sub,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Constant{-1}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Shape_i{0}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(i0, i1, Switch(AND(LT((i2 - i3), i1), GT(i4,
> > i1)), i5, maximum((i6 + i7), (i2 - i3))))}}.0, Shape: (), ElemSize: 8
> > Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{add,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - TensorConstant{1500030}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
> > Byte(s)
> >  - Elemwise{minimum,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s),
> > TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{(i0 - Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i1, i2, i3, i4), i5, i6),
> > i7), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i1, i2, i3, i4), i5, i6)))}}.0, Shape: (), ElemSize: 8
> > Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{sub,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(LT(i0, (i0 - i1)), i0, (i0 - i1))}}.0,
> > Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(LT(i0, i1), i0, i1)}}.0, Shape: (),
> > ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(LT(i0, i1), (i0 + i2), (i0 - i2))}}.0,
> > Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{maximum(maximum(((i0 - i1) + i2), i3), maximum(((i0
> > - Switch(LT(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(LT(i0,
> > i1), (i2 - i3), i0)}(Composite{((i0 - (Switch(LT(i1, i2), i2, i1) - i3)) -
> > i3)}(i4, Composite{(((i0 - i1) // i1) + i1)}(i5, i2), i6, i2), i6, i4, i2),
> > i6), i7), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(LT(i0,
> > i1), (i2 - i3), i0)}(Composite{((i0 - (Switch(LT(i1, i2), i2, i1) - i3)) -
> > i3)}(i4, Composite{(((i0 - i1) // i1) + i1)}(i5, i2), i6, i2), i6, i4, i2),
> > i6), i7)) + i2), i3))}}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
> > Byte(s)
> >  - Elemwise{Composite{Switch(i0, Switch(LT((i1 + i2), i3), i3, (i1 + i2)),
> > Switch(LT(i1, i2), i1, i2))}}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(LT(i0, i1), (i0 + i2), (i0 - i2))}}.0,
> > Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{sub,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Elemwise{Composite{((i0 * i1 * i2) // i3)}}.0, Shape: (), ElemSize: 8
> > Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(LT(i0, i1), (i0 + i2), (i0 - i2))}}.0,
> > Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}}.0, Shape: (), ElemSize: 8
> > Byte(s), TotalSize: 8.0 Byte(s)
> >  - Constant{0}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{add,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - TensorConstant{100002}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
> > Byte(s)
> >  - Elemwise{sub,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(LT(Composite{Switch(LT(i0, i1), i1,
> > i0)}(Composite{Switch(LT(i0, i1), i2, i0)}((i0 - i1), i2, i3), i2), i1),
> > Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2,
> > i0)}((i0 - i1), i2, i3), i2), i1)}}.0, Shape: (), ElemSize: 8 Byte(s),
> > TotalSize: 8.0 Byte(s)
> >  - Elemwise{minimum,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s),
> > TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(LT(i0, i1), i1, i0)}}.0, Shape: (),
> > ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - TensorConstant{2}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
> > Byte(s)
> >  - Elemwise{Composite{Switch(LT(Composite{Switch(i0, (i1 + i2 + i3),
> > i1)}(i0, i1, i2, i3), i4), i5, Composite{Switch(i0, (i1 + i2 + i3),
> > i1)}(i0, i1, i2, i3))}}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
> > Byte(s)
> >  - Elemwise{Composite{Switch(LT((i0 + i1), i2), i2, (i0 + i1))}}.0,
> > Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(i0, Switch(LT((i1 + i2), i3), i3, (i1 + i2)),
> > Switch(LT(i1, i2), i1, i2))}}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Elemwise{add,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(LT(i0, (i0 - i1)), i0, (i0 - i1))}}.0,
> > Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - TensorConstant{1}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
> > Byte(s)
> >  - Shape_i{0}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{add,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}}.0, Shape: (), ElemSize: 8
> > Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{maximum(maximum(((i0 - Switch(i1, (i2 + i3 + i4),
> > i2)) + i4), i5), maximum(maximum(((i0 - Switch(LT(Composite{Switch(LT(i0,
> > i1), i1, i0)}(Composite{Switch(LT(i0, i1), (i2 - i3), i0)}(Composite{((i0 -
> > (Switch(LT(i1, i2), i2, i1) - i3)) - 
> > i3)}((Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i6, i7, i3, i4, i8, i9, (i3 + i4), ((i3 + i4) - i4)) + i4),
> > Composite{((((i0 - Switch(GE(i1, i2), i2, i1)) - i3) // i3) +
> > i3)}(Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i6, i7, i3, i4, i8, i9, (i3 +
> > i4), ((i3 + i4) - i4)), Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i10, i11, i3, i4), i8, i9),
> > (i3 + i4), i4), i8, i4), i8, (Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i6, i7, i3, i4, i8, i9, (i3 + i4), ((i3 + i4) - i4)) + i4), i4), i8),
> > Composite{Switch(LT(i0, i1), i1, 
> > i0)}((Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i6, i7, i3, i4, i8, i9, (i3 + i4), ((i3 + i4) - i4)) + i4), i8)),
> > Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), (i2 -
> > i3), i0)}(Composite{((i0 - (Switch(LT(i1, i2), i2, i1) - i3)) -
> > i3)}((Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i6, i7, i3, i4, i8, i9, (i3 +
> > i4), ((i3 + i4) - i4)) + i4), Composite{((((i0 - Switch(GE(i1, i2), i2,
> > i1)) - i3) // i3) + i3)}(Composite{Switch(GE(Composite{Switch(LT(i0, i1),
> > i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5), i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0,
> > (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i6, i7, i3, i4, i8, i9, (i3
> > + i4), ((i3 + i4) - i4)), Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i10, i11, i3, i4), i8, i9),
> > (i3 + i4), i4), i8, i4), i8, (Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i6, i7, i3, i4, i8, i9, (i3 + i4), ((i3 + i4) - i4)) + i4), i4), i8),
> > Composite{Switch(LT(i0, i1), i1, 
> > i0)}((Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i6, i7, i3, i4, i8, i9, (i3 + i4), ((i3 + i4) - i4)) + i4), i8))) +
> > i4), i5), maximum(((i0 - Switch(LT(Composite{Switch(LT(i0, i1), i1,
> > i0)}(Composite{Switch(LT(i0, i1), (i2 - i3), i0)}(Composite{((i0 -
> > (Switch(LT(i1, i2), i2, i1) - i3)) - 
> > i3)}((Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i12, i13, i3, i4, i8, i9, (i3 + i4), ((i3 + i4) - i4)) + i4),
> > Composite{((((i0 - Switch(GE(i1, i2), i2, i1)) - i3) // i3) +
> > i3)}(Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i12, i13, i3, i4, i8, i9, (i3 +
> > i4), ((i3 + i4) - i4)), Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i14, i15, i3, i4), i8, i9),
> > (i3 + i4), i4), i8, i4), i8, (Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i12, i13, i3, i4, i8, i9, (i3 + i4), ((i3 + i4) - i4)) + i4), i4),
> > i8), Composite{Switch(LT(i0, i1), i1, 
> > i0)}((Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i12, i13, i3, i4, i8, i9, (i3 + i4), ((i3 + i4) - i4)) + i4), i8)),
> > Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), (i2 -
> > i3), i0)}(Composite{((i0 - (Switch(LT(i1, i2), i2, i1) - i3)) -
> > i3)}((Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i12, i13, i3, i4, i8, i9, (i3 +
> > i4), ((i3 + i4) - i4)) + i4), Composite{((((i0 - Switch(GE(i1, i2), i2,
> > i1)) - i3) // i3) + i3)}(Composite{Switch(GE(Composite{Switch(LT(i0, i1),
> > i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5), i6), i7, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0,
> > (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}(i12, i13, i3, i4, i8, i9,
> > (i3 + i4), ((i3 + i4) - i4)), Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i14, i15, i3, i4), i8, i9),
> > (i3 + i4), i4), i8, i4), i8, (Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i12, i13, i3, i4, i8, i9, (i3 + i4), ((i3 + i4) - i4)) + i4), i4),
> > i8), Composite{Switch(LT(i0, i1), i1, 
> > i0)}((Composite{Switch(GE(Composite{Switch(LT(i0,
> > i1), i2, i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3),
> > i4, i5), i6), i7, Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4,
> > i5))}(i12, i13, i3, i4, i8, i9, (i3 + i4), ((i3 + i4) - i4)) + i4), i8))) +
> > i4), i5)))}}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(LT(i0, i1), (i2 + (-i3)), Switch(GE(i0, i4),
> > (i5 + i6), Switch(LE(i4, i1), (i5 + i6), i6)))}}.0, Shape: (), ElemSize: 8
> > Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{add,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Shape_i{1}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Shape_i{1}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Shape_i{1}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Shape_i{0}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{((i0 * i1 * i2) // i3)}}.0, Shape: (), ElemSize: 8
> > Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(i0, i1, Switch(AND(LT((i2 - i3), i1), GT(i4,
> > i1)), i5, maximum(i6, (i2 - i3))))}}.0, Shape: (), ElemSize: 8 Byte(s),
> > TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{maximum(maximum(((i0 -
> > Switch(LT(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(LT(i0,
> > i1), (i2 - i3), i0)}(Composite{((i0 - (Switch(LT(i1, i2), i2, i1) - i3)) -
> > i3)}((Composite{Switch(i0, (i1 - i2), i3)}(i1, i2, i3, i4) + i3),
> > Composite{((((i0 - i1) - i2) // i2) + i2)}(Composite{Switch(i0, (i1 - i2),
> > i3)}(i1, i2, i3, i4), i5, i3), i6, i3), i6, (Composite{Switch(i0, (i1 -
> > i2), i3)}(i1, i2, i3, i4) + i3), i3), i6), Composite{Switch(LT(i0, i1), i1,
> > i0)}((Composite{Switch(i0, (i1 - i2), i3)}(i1, i2, i3, i4) + i3), i6)),
> > Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), (i2 -
> > i3), i0)}(Composite{((i0 - (Switch(LT(i1, i2), i2, i1) - i3)) -
> > i3)}((Composite{Switch(i0, (i1 - i2), i3)}(i1, i2, i3, i4) + i3),
> > Composite{((((i0 - i1) - i2) // i2) + i2)}(Composite{Switch(i0, (i1 - i2),
> > i3)}(i1, i2, i3, i4), i5, i3), i6, i3), i6, (Composite{Switch(i0, (i1 -
> > i2), i3)}(i1, i2, i3, i4) + i3), i3), i6), Composite{Switch(LT(i0, i1), i1,
> > i0)}((Composite{Switch(i0, (i1 - i2), i3)}(i1, i2, i3, i4) + i3), i6))) +
> > i3), i7), maximum((i8 + i3), i7))}}.0, Shape: (), ElemSize: 8 Byte(s),
> > TotalSize: 8.0 Byte(s)
> >  - TensorConstant{-1}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
> > Byte(s)
> >  - Elemwise{add,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
> > 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(i0, i1, maximum(minimum((i2 + i3), i4),
> > i5))}}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
> >  - Elemwise{Composite{Switch(GE(Composite{Switch(LT(i0, i1), i2,
> > i0)}(Composite{Switch(i0, (i1 + i2 + i3), i1)}(i0, i1, i2, i3), i4, i5),
> > i6), i6, Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 +
> > i2 + i3), i1)}(i0, i1, i2, i3), i4, i5))}}.0, Shape: (), ElemSize: 8
> > Byte(s), TotalSize: 8.0 Byte(s)
> >  - mean, Shape: (), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
> >  - CudaNdarrayConstant{0.0}, Shape: (), ElemSize: 4 Byte(s), TotalSize:
> > 4.0 Byte(s)
> >  - CudaNdarrayConstant{[[ 0.94999999]]}, Shape: (1, 1), ElemSize: 4
> > Byte(s), TotalSize: 4 Byte(s)
> >  - CudaNdarrayConstant{[[  9.99999997e-07]]}, Shape: (1, 1), ElemSize: 4
> > Byte(s), TotalSize: 4 Byte(s)
> >  - CudaNdarrayConstant{[[[ 0.99999899]]]}, Shape: (1, 1, 1), ElemSize: 4
> > Byte(s), TotalSize: 4 Byte(s)
> >  - CudaNdarrayConstant{[[[  9.99999997e-07]]]}, Shape: (1, 1, 1),
> > ElemSize: 4 Byte(s), TotalSize: 4 Byte(s)
> >  - CudaNdarrayConstant{[[ 0.05]]}, Shape: (1, 1), ElemSize: 4 Byte(s),
> > TotalSize: 4 Byte(s)
> >  - CudaNdarrayConstant{[[[ 1.]]]}, Shape: (1, 1, 1), ElemSize: 4 Byte(s),
> > TotalSize: 4 Byte(s)
> >  - GpuSubtensor{int64}.0, Shape: (), ElemSize: 4 Byte(s), TotalSize: 4.0
> > Byte(s)
> >  - CudaNdarrayConstant{[[[ 0.]]]}, Shape: (1, 1, 1), ElemSize: 4 Byte(s),
> > TotalSize: 4 Byte(s)
> >  - GpuSubtensor{int64}.0, Shape: (), ElemSize: 4 Byte(s), TotalSize: 4.0
> > Byte(s)
> >  - CudaNdarrayConstant{[[[-1.]]]}, Shape: (1, 1, 1), ElemSize: 4 Byte(s),
> > TotalSize: 4 Byte(s)
> >  - Elemwise{lt,no_inplace}.0, Shape: (), ElemSize: 1 Byte(s), TotalSize:
> > 1.0 Byte(s)
> >  - Elemwise{lt,no_inplace}.0, Shape: (), ElemSize: 1 Byte(s), TotalSize:
> > 1.0 Byte(s)
> >  - TensorConstant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
> > Byte(s)
> >  - Elemwise{lt,no_inplace}.0, Shape: (), ElemSize: 1 Byte(s), TotalSize:
> > 1.0 Byte(s)
> >  - TensorConstant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
> > Byte(s)
> >  - Elemwise{ge,no_inplace}.0, Shape: (), ElemSize: 1 Byte(s), TotalSize:
> > 1.0 Byte(s)
> >  - Elemwise{lt,no_inplace}.0, Shape: (), ElemSize: 1 Byte(s), TotalSize:
> > 1.0 Byte(s)
> >  - Constant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)
> >  - TensorConstant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
> > Byte(s)
> >  - TensorConstant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
> > Byte(s)
> >  - TensorConstant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
> > Byte(s)
> >  - Elemwise{Composite{AND(LT(i0, i1), GT(i2, i1))}}.0, Shape: (),
> > ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)
> >  - Elemwise{le,no_inplace}.0, Shape: (), ElemSize: 1 Byte(s), TotalSize:
> > 1.0 Byte(s)
> >  - TensorConstant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
> > Byte(s)
> >  - TensorConstant{0}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
> > Byte(s)
> >  - Elemwise{lt,no_inplace}.0, Shape: (), ElemSize: 1 Byte(s), TotalSize:
> > 1.0 Byte(s)
> >  - Elemwise{lt,no_inplace}.0, Shape: (), ElemSize: 1 Byte(s), TotalSize:
> > 1.0 Byte(s)
> >  - TensorConstant{2}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
> > Byte(s)
> >  - TensorConstant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
> > Byte(s)
> >  - TensorConstant{-1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
> > Byte(s)
> >  TotalSize: 6778870848.0 Byte(s) 6.313 GB
> >  TotalSize inputs: 2202711711.0 Byte(s) 2.051 GB
> >
> > HINT: Re-running with most Theano optimization disabled could give you a
> > back-trace of when this node was created. This can be done with by setting
> > the Theano flag 'optimizer=fast_compile'. If that does not work, Theano
> > optimizations can be disabled with 'optimizer=None'.
> >
> >

-- 
Pascal
