[jira] [Updated] (SINGA-494) Singa autograd improvement

2019-10-07 Thread zhangzhaoqi (Jira)


 [ 
https://issues.apache.org/jira/browse/SINGA-494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-494:
--
Description: 
Background: some autograd ops cannot satisfy the ONNX requirements, as follows:
 # *conv, averagepool, maxpool*

 - only support 2d input, i.e., N*C*W*H
 - do not support SAME_UPPER, SAME_LOWER, count_include_pad, or ceil_mode

 # *reshape*

 - does not support zero_dim or zero_and_negative_dim

 # *concat*

 - does not support 1d input

 # *matmul*

 - only supports 2d input

 # *min, max*

 - only support 2 inputs

 # *add*

 - does not support broadcasting

 # *and, or, xor*

 - do not support broadcasting

 # *div, pow, prelu*

 - do not support broadcasting

Improvements are in progress.
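Several of the gaps above come down to missing broadcasting support. As a reference point, the shape rule that a broadcast-aware add/div/pow/prelu would have to implement (ONNX/numpy-style multidirectional broadcasting) can be sketched in plain Python; the function name is illustrative, not SINGA code:

```python
# Sketch (not SINGA code): the output-shape rule for multidirectional
# broadcasting per the ONNX spec. Shapes are aligned from the trailing
# dimension; each pair of dims must be equal, or one of them must be 1,
# in which case it broadcasts to the other.
def broadcast_shape(a, b):
    result = []
    # walk both shapes from the trailing dimension backwards
    for i in range(1, max(len(a), len(b)) + 1):
        da = a[-i] if i <= len(a) else 1
        db = b[-i] if i <= len(b) else 1
        if da != db and da != 1 and db != 1:
            raise ValueError("shapes %r and %r do not broadcast" % (a, b))
        result.append(max(da, db))
    return tuple(reversed(result))

print(broadcast_shape((3, 1, 5), (4, 5)))  # (3, 4, 5)
```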

  was:
Background: some autograd ops cannot satisfy the ONNX requirements, as follows:

# conv, averagepool, maxpool
- only support 2d input, i.e, N*C*W*H
- not support SAME_UPPER, SAME_LOWER, count_include_pad and ceil_mode

# reshape
- not support zero_dim, zero_and_negative_dim

# concat
- not support 1d

# matmul
- only support 2d

# min, max
- only support 2 inputs

# add
- not support broadcast

# and, or, xor
- not support broadcast

# div, pow, prelu
- not support broadcast

Some improvements are being done.


> Singa autograd improvement
> --
>
> Key: SINGA-494
> URL: https://issues.apache.org/jira/browse/SINGA-494
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Major
>
> Background: some autograd ops cannot satisfy the ONNX requirements, as follows:
>  # *conv, averagepool, maxpool*
>  - only support 2d input, i.e., N*C*W*H
>  - do not support SAME_UPPER, SAME_LOWER, count_include_pad, or ceil_mode
>  # *reshape*
>  - does not support zero_dim or zero_and_negative_dim
>  # *concat*
>  - does not support 1d input
>  # *matmul*
>  - only supports 2d input
>  # *min, max*
>  - only support 2 inputs
>  # *add*
>  - does not support broadcasting
>  # *and, or, xor*
>  - do not support broadcasting
>  # *div, pow, prelu*
>  - do not support broadcasting
> Improvements are in progress.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (SINGA-489) Add onnx backend test cases for the new operators

2019-09-13 Thread zhangzhaoqi (Jira)
zhangzhaoqi created SINGA-489:
-

 Summary: Add onnx backend test cases for the new operators
 Key: SINGA-489
 URL: https://issues.apache.org/jira/browse/SINGA-489
 Project: Singa
  Issue Type: New Feature
Reporter: zhangzhaoqi


Add ONNX backend test cases for the new operators from SINGA-471.
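A backend test case of this kind boils down to running one operator through the backend and comparing the result against a reference implementation within a tolerance. A minimal sketch, where `run_relu` is a hypothetical stand-in for invoking the operator through the SONNX backend:

```python
# Sketch of an ONNX-backend-style test case. `run_relu` is a hypothetical
# placeholder for the real SONNX backend call, not actual SINGA code.
def run_relu(xs):
    return [x if x > 0.0 else 0.0 for x in xs]  # placeholder backend

def reference_relu(xs):
    # reference implementation the backend output is checked against
    return [max(x, 0.0) for x in xs]

def test_relu():
    x = [-1.5, 0.0, 2.0]
    got, want = run_relu(x), reference_relu(x)
    # elementwise comparison with a small tolerance, as backend tests do
    assert all(abs(g - w) < 1e-6 for g, w in zip(got, want)), (got, want)

test_relu()
```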



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Closed] (SINGA-483) fix dummy bugs

2019-09-13 Thread zhangzhaoqi (Jira)


 [ 
https://issues.apache.org/jira/browse/SINGA-483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi closed SINGA-483.
-
Resolution: Fixed

> fix dummy bugs
> --
>
> Key: SINGA-483
> URL: https://issues.apache.org/jira/browse/SINGA-483
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There is a bug in the autograd dummy operator. It should call the __getattribute__ 
> function to get the tensor's attribute, not use the name directly. 
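The fix amounts to looking the attribute up by name instead of hard-coding it. A minimal sketch with illustrative class names (not the actual SINGA source):

```python
# Illustrative sketch of the dummy-operator fix described above; the
# class names are hypothetical, not the real SINGA implementation.
class Tensor:
    def __init__(self, data):
        self.data = data

class Dummy:
    def __init__(self, tensor, attr_name="data"):
        self.tensor = tensor
        self.attr_name = attr_name

    def output(self):
        # buggy version: return self.tensor.data   (hard-coded name)
        # fixed: look the attribute up by name via __getattribute__
        return self.tensor.__getattribute__(self.attr_name)

print(Dummy(Tensor([1, 2, 3])).output())  # [1, 2, 3]
```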



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (SINGA-481) Reconstruct SONNX

2019-09-13 Thread zhangzhaoqi (Jira)


 [ 
https://issues.apache.org/jira/browse/SINGA-481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-481:
--
Description: 
* Reconstruct the frontend and backend of SONNX, and make it support transfer learning.
 * Develop SONNX operators: conv2d, relu, avg_pool, softmax, sigmoid, add, concat, matmul
 * Add these operators' test cases.

  was:
* Reconstruct the frontend and backend of SONNX, and make it support transfer learning.
 * Develop SONNX operators: conv2d, relu, avg_pool, softmax, sigmoid, add, concat, matmul

 
 * Add these operators' test cases.


> Reconstruct SONNX
> -
>
> Key: SINGA-481
> URL: https://issues.apache.org/jira/browse/SINGA-481
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> * Reconstruct the frontend and backend of SONNX, and make it support transfer learning.
>  * Develop SONNX operators: conv2d, relu, avg_pool, softmax, sigmoid, add, concat, matmul
>  * Add these operators' test cases.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (SINGA-483) fix dummy bugs

2019-08-15 Thread zhangzhaoqi (JIRA)
zhangzhaoqi created SINGA-483:
-

 Summary: fix dummy bugs
 Key: SINGA-483
 URL: https://issues.apache.org/jira/browse/SINGA-483
 Project: Singa
  Issue Type: New Feature
Reporter: zhangzhaoqi


There is a bug in the autograd dummy operator. It should call the __getattribute__ 
function to get the tensor's attribute, not use the name directly. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (SINGA-481) Reconstruct SONNX

2019-08-12 Thread zhangzhaoqi (JIRA)
zhangzhaoqi created SINGA-481:
-

 Summary: Reconstruct SONNX
 Key: SINGA-481
 URL: https://issues.apache.org/jira/browse/SINGA-481
 Project: Singa
  Issue Type: New Feature
Reporter: zhangzhaoqi


* Reconstruct the frontend and backend of SONNX, and make it support transfer learning.
 * Develop SONNX operators: conv2d, relu, avg_pool, softmax, sigmoid, add, concat, matmul

 
 * Add these operators' test cases.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-31 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement the following three models and their components:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

Add
 BatchNormalization
 Conv
 LeakyRelu
 MaxPool
 Mul
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Acos
 Add
 BatchNormalization
 Conv
 Cos
 Dropout
 Flatten
 Gemm
 Identity
 InstanceNormalization
 LpNormalization
 Mul
 PRelu
 Reshape
 Sub
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

Abs
 Add
 Add
 ArgMax
 Cast
 Ceil
 Clip
 Compress
 Concat
 ConstantOfShape
 Conv
 Dropout
 Gather
 Hardmax
 Log
 LSTM
 MatMul
 ReduceMax
 ReduceSum
 Relu
 Shape
 Sigmoid
 Slice
 Squeeze
 Sub
 Sum
 Transpose
 Unsqueeze

 

In summary, we have already implemented 13 ops, and there are still 27 ops to be implemented:
h2. Already implemented:

-Acos-
 -BatchNormalization-
 -Cos-
 -Conv-
 -LeakyRelu-
 -LSTM-
 -Abs-
 -MaxPool-
 -Flatten-
 -Add-
 -MatMul-
 -Relu-
 -Sigmoid-
h2. To be implemented:

ArgMax
 Cast
 Ceil
 Clip
 Compress
 Concat
 ConstantOfShape
 Dropout
 Gather
 Gemm
 Hardmax
 Identity
 InstanceNormalization
 Log
 LpNormalization
 Mul
 PRelu
 ReduceMax
 ReduceSum
 Reshape
 Shape
 Slice
 Squeeze
 Sub
 Sum
 Transpose
 Unsqueeze

Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.
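For the ops that do map one-to-one, the frontend side largely reduces to translating autograd op names into the ONNX op types listed above. A minimal sketch of such a table; the entries are illustrative, not the actual SONNX mapping:

```python
# Illustrative (hypothetical) name translation from SINGA autograd ops
# to ONNX op types, as a SONNX frontend might keep it.
SINGA_TO_ONNX = {
    "conv2d": "Conv",
    "relu": "Relu",
    "softmax": "Softmax",
    "batchnorm_2d": "BatchNormalization",
    "max_pool_2d": "MaxPool",
}

def onnx_op_type(singa_op):
    try:
        return SINGA_TO_ONNX[singa_op]
    except KeyError:
        # ops missing here would need a converter built from basic ops
        raise NotImplementedError("no ONNX mapping for %r" % singa_op)

print(onnx_op_type("conv2d"))  # Conv
```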

  was:
For demo purposes, we need to implement the following three models and their components:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

Add
 BatchNormalization
 Conv
 LeakyRelu
 MaxPool
 Mul
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Acos
 Add
 BatchNormalization
 Conv
 Cos
 Dropout
 Flatten
 Gemm
 Identity
 InstanceNormalization
 LpNormalization
 Mul
 PRelu
 Reshape
 Sub
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

Abs
 Add
 Add
 ArgMax
 Cast
 Ceil
 Clip
 Compress
 Concat
 ConstantOfShape
 Conv
 Dropout
 Gather
 Hardmax
 Log
 LSTM
 MatMul
 ReduceMax
 ReduceSum
 Relu
 Shape
 Sigmoid
 Slice
 Squeeze
 Sub
 Sum
 Transpose
 Unsqueeze

 

In summary, we have already implemented 13 ops, and there are still 27 ops to be implemented:
h2. Already implemented:

-Acos-
 -BatchNormalization-
 -Cos-
 -Conv-
 -LeakyRelu-
 -LSTM-
 -Abs-
 -MaxPool-
 -Flatten-
 -Add-
 -MatMul-
 -Relu-
 -Sigmoid-
h2. To be implemented:

ArgMax
 Cast
 Ceil
 Clip
 Compress
 Concat
 ConstantOfShape
 Dropout
 Gather
 Gemm
 Hardmax
 Identity
 InstanceNormalization
 Log
 LpNormalization
 Mul
 PRelu
 ReduceMax
 ReduceSum
 Reshape
 Shape
 Slice
 Squeeze
 Sub
 Sum
 Transpose
 Unsqueeze

Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
> Attachments: arcface(based resnet100).png, bidaf.png, tiny_yolov2.png
>
>
> For demo purposes, we need to implement the following three models and their components:
> h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]
> Add
>  BatchNormalization
>  Conv
>  LeakyRelu
>  MaxPool
>  Mul
> h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]
> Acos
>  Add
>  BatchNormalization
>  Conv
>  Cos
>  Dropout
>  Flatten
>  Gemm
>  Identity
>  InstanceNormalization
>  LpNormalization
>  Mul
>  PRelu
>  Reshape
>  Sub
> h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]
> Abs
>  Add
>  Add
>  ArgMax
>  Cast
>  Ceil
>  Clip
>  Compress
>  Concat
>  ConstantOfShape
>  Conv
>  Dropout
>  Gather
>  Hardmax
>  Log
>  LSTM
>  MatMul
>  ReduceMax
>  ReduceSum
>  Relu
>  Shape
>  Sigmoid
>  Slice
>  Squeeze
>  Sub
>  Sum
>  Transpose
>  Unsqueeze
>  
> In summary, we have already implemented 13 ops, and there are still 27 ops to be implemented:
> h2. Already implemented:
> -Acos-
>  -BatchNormalization-
>  -Cos-
>  -Conv-
>  -LeakyRelu-
>  -LSTM-
>  -Abs-
>  -MaxPool-
>  -Flatten-
>  -Add-
>  -MatMul-
>  -Relu-
>  -Sigmoid-
> h2. To be implemented:
> ArgMax
>  Cast
>  Ceil
>  Clip
>  Compress
>  Concat
>  ConstantOfShape
>  Dropout
>  Gather
>  Gemm
>  Hardmax
>  Identity
>  InstanceNormalization
>  Log
>  LpNormalization
>  Mul
>  PRelu
>  ReduceMax
>  ReduceSum
>  Reshape
>  Shape
>  Slice
>  Squeeze
>  Sub
>  Sum
>  Transpose
>  Unsqueeze
> Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-31 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement the following three models and their components:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

Add
 BatchNormalization
 Conv
 LeakyRelu
 MaxPool
 Mul
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Acos
 Add
 BatchNormalization
 Conv
 Cos
 Dropout
 Flatten
 Gemm
 Identity
 InstanceNormalization
 LpNormalization
 Mul
 PRelu
 Reshape
 Sub
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

Abs
 Add
 Add
 ArgMax
 Cast
 Ceil
 Clip
 Compress
 Concat
 ConstantOfShape
 Conv
 Dropout
 Gather
 Hardmax
 Log
 LSTM
 MatMul
 ReduceMax
 ReduceSum
 Relu
 Shape
 Sigmoid
 Slice
 Squeeze
 Sub
 Sum
 Transpose
 Unsqueeze

 

In summary, we have already implemented 13 ops, and there are still 27 ops to be implemented:
h2. Already implemented:

-Acos-
 -BatchNormalization-
 -Cos-
 -Conv-
 -LeakyRelu-
 -LSTM-
 -Abs-
 -MaxPool-
 -Flatten-
 -Add-
 -MatMul-
 -Relu-
 -Sigmoid-
h2. To be implemented:

ArgMax
 Cast
 Ceil
 Clip
 Compress
 Concat
 ConstantOfShape
 Dropout
 Gather
 Gemm
 Hardmax
 Identity
 InstanceNormalization
 Log
 LpNormalization
 Mul
 PRelu
 ReduceMax
 ReduceSum
 Reshape
 Shape
 Slice
 Squeeze
 Sub
 Sum
 Transpose
 Unsqueeze

Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.

  was:
For demo purposes, we need to implement the following three models and their components:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

Add
 BatchNormalization
 Conv
 LeakyRelu
 MaxPool
 Mul
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Acos
 Add
 BatchNormalization
 Conv
 Cos
 Dropout
 Flatten
 Gemm
 Identity
 InstanceNormalization
 LpNormalization
 Mul
 PRelu
 Reshape
 Sub
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

Abs
 Add
 Add
 ArgMax
 Cast
 Ceil
 Clip
 Compress
 Concat
 ConstantOfShape
 Conv
 Dropout
 Gather
 Hardmax
 Log
 LSTM
 MatMul
 ReduceMax
 ReduceSum
 Relu
 Shape
 Sigmoid
 Slice
 Squeeze
 Sub
 Sum
 Transpose
 Unsqueeze

 

In summary, we have already implemented 13 ops, and there are still 27 ops to be implemented:
h2. Already implemented:

-Acos-
 -BatchNormalization-
 -Cos-
 -Conv-
 -LeakyRelu-
 -LSTM-
 -Abs-
 -MaxPool-
 -Flatten-
 -Add-
 -MatMul-
 -Relu-
 -Sigmoid-
h2. To be implemented:

ArgMax
 Cast
 Ceil
 Clip
 Compress
 Concat
 ConstantOfShape
 Dropout
 Gather
 Gemm
 Hardmax
 Identity
 InstanceNormalization
 Log
 LpNormalization
 Mul
 PRelu
 ReduceMax
 ReduceSum
 Reshape
 Shape
 Slice
 Squeeze
 Sub
 Sum
 Transpose
 Unsqueeze

Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
> Attachments: arcface(based resnet100).png, bidaf.png, tiny_yolov2.png
>
>
> For demo purposes, we need to implement the following three models and their components:
> h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]
> Add
>  BatchNormalization
>  Conv
>  LeakyRelu
>  MaxPool
>  Mul
> h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]
> Acos
>  Add
>  BatchNormalization
>  Conv
>  Cos
>  Dropout
>  Flatten
>  Gemm
>  Identity
>  InstanceNormalization
>  LpNormalization
>  Mul
>  PRelu
>  Reshape
>  Sub
> h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]
> Abs
>  Add
>  Add
>  ArgMax
>  Cast
>  Ceil
>  Clip
>  Compress
>  Concat
>  ConstantOfShape
>  Conv
>  Dropout
>  Gather
>  Hardmax
>  Log
>  LSTM
>  MatMul
>  ReduceMax
>  ReduceSum
>  Relu
>  Shape
>  Sigmoid
>  Slice
>  Squeeze
>  Sub
>  Sum
>  Transpose
>  Unsqueeze
>  
> In summary, we have already implemented 13 ops, and there are still 27 ops to be implemented:
> h2. Already implemented:
> -Acos-
>  -BatchNormalization-
>  -Cos-
>  -Conv-
>  -LeakyRelu-
>  -LSTM-
>  -Abs-
>  -MaxPool-
>  -Flatten-
>  -Add-
>  -MatMul-
>  -Relu-
>  -Sigmoid-
> h2. To be implemented:
> ArgMax
>  Cast
>  Ceil
>  Clip
>  Compress
>  Concat
>  ConstantOfShape
>  Dropout
>  Gather
>  Gemm
>  Hardmax
>  Identity
>  InstanceNormalization
>  Log
>  LpNormalization
>  Mul
>  PRelu
>  ReduceMax
>  ReduceSum
>  Reshape
>  Shape
>  Slice
>  Squeeze
>  Sub
>  Sum
>  Transpose
>  Unsqueeze
> Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-31 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement the following three models and their components:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

Add
 BatchNormalization
 Conv
 LeakyRelu
 MaxPool
 Mul
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Acos
 Add
 BatchNormalization
 Conv
 Cos
 Dropout
 Flatten
 Gemm
 Identity
 InstanceNormalization
 LpNormalization
 Mul
 PRelu
 Reshape
 Sub
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

Abs
 Add
 Add
 ArgMax
 Cast
 Ceil
 Clip
 Compress
 Concat
 ConstantOfShape
 Conv
 Dropout
 Gather
 Hardmax
 Log
 LSTM
 MatMul
 ReduceMax
 ReduceSum
 Relu
 Shape
 Sigmoid
 Slice
 Squeeze
 Sub
 Sum
 Transpose
 Unsqueeze

 

In summary, we have already implemented 13 ops, and there are still 27 ops to be implemented:
h2. Already implemented:

-Acos-
 -BatchNormalization-
 -Cos-
 -Conv-
 -LeakyRelu-
 -LSTM-
 -Abs-
 -MaxPool-
 -Flatten-
 -Add-
 -MatMul-
 -Relu-
 -Sigmoid-
h2. To be implemented:

ArgMax
 Cast
 Ceil
 Clip
 Compress
 Concat
 ConstantOfShape
 Dropout
 Gather
 Gemm
 Hardmax
 Identity
 InstanceNormalization
 Log
 LpNormalization
 Mul
 PRelu
 ReduceMax
 ReduceSum
 Reshape
 Shape
 Slice
 Squeeze
 Sub
 Sum
 Transpose
 Unsqueeze

Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.

  was:
For demo purposes, we need to implement the following three models and their components:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

Add
BatchNormalization
Conv
LeakyRelu
MaxPool
Mul
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Acos
Add
BatchNormalization
Conv
Cos
Dropout
Flatten
Gemm
Identity
InstanceNormalization
LpNormalization
Mul
PRelu
Reshape
Sub
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

Abs
Add
Add
ArgMax
Cast
Ceil
Clip
Compress
Concat
ConstantOfShape
Conv
Dropout
Gather
Hardmax
Log
LSTM
MatMul
ReduceMax
ReduceSum
Relu
Shape
Sigmoid
Slice
Squeeze
Sub
Sum
Transpose
Unsqueeze

 

In summary, we have already implemented 13 ops, and there are still 27 ops to be implemented:
h2. Already implemented:

-Acos-
-BatchNormalization-
-Cos-
-Conv-
-LeakyRelu-
-LSTM-
-Abs-
-MaxPool-
-Flatten-
-Add-
-MatMul-
-Relu-
-Sigmoid-
h2. To be implemented:

ArgMax
Cast
Ceil
Clip
Compress
Concat
ConstantOfShape
Dropout
Gather
Gemm
Hardmax
Identity
InstanceNormalization
Log
LpNormalization
Mul
PRelu
ReduceMax
ReduceSum
Reshape
Shape
Slice
Squeeze
Sub
Sum
Transpose
Unsqueeze

Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
> Attachments: arcface(based resnet100).png, bidaf.png, tiny_yolov2.png
>
>
> For demo purposes, we need to implement the following three models and their components:
> h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]
> Add
>  BatchNormalization
>  Conv
>  LeakyRelu
>  MaxPool
>  Mul
> h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]
> Acos
>  Add
>  BatchNormalization
>  Conv
>  Cos
>  Dropout
>  Flatten
>  Gemm
>  Identity
>  InstanceNormalization
>  LpNormalization
>  Mul
>  PRelu
>  Reshape
>  Sub
> h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]
> Abs
>  Add
>  Add
>  ArgMax
>  Cast
>  Ceil
>  Clip
>  Compress
>  Concat
>  ConstantOfShape
>  Conv
>  Dropout
>  Gather
>  Hardmax
>  Log
>  LSTM
>  MatMul
>  ReduceMax
>  ReduceSum
>  Relu
>  Shape
>  Sigmoid
>  Slice
>  Squeeze
>  Sub
>  Sum
>  Transpose
>  Unsqueeze
>  
> In summary, we have already implemented 13 ops, and there are still 27 ops to be implemented:
> h2. Already implemented:
> -Acos-
>  -BatchNormalization-
>  -Cos-
>  -Conv-
>  -LeakyRelu-
>  -LSTM-
>  -Abs-
>  -MaxPool-
>  -Flatten-
>  -Add-
>  -MatMul-
>  -Relu-
>  -Sigmoid-
> h2. To be implemented:
> ArgMax
>  Cast
>  Ceil
>  Clip
>  Compress
>  Concat
>  ConstantOfShape
>  Dropout
>  Gather
>  Gemm
>  Hardmax
>  Identity
>  InstanceNormalization
>  Log
>  LpNormalization
>  Mul
>  PRelu
>  ReduceMax
>  ReduceSum
>  Reshape
>  Shape
>  Slice
>  Squeeze
>  Sub
>  Sum
>  Transpose
>  Unsqueeze
> Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-31 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement the following three models and their components:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

Add
BatchNormalization
Conv
LeakyRelu
MaxPool
Mul
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Acos
Add
BatchNormalization
Conv
Cos
Dropout
Flatten
Gemm
Identity
InstanceNormalization
LpNormalization
Mul
PRelu
Reshape
Sub
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

Abs
Add
Add
ArgMax
Cast
Ceil
Clip
Compress
Concat
ConstantOfShape
Conv
Dropout
Gather
Hardmax
Log
LSTM
MatMul
ReduceMax
ReduceSum
Relu
Shape
Sigmoid
Slice
Squeeze
Sub
Sum
Transpose
Unsqueeze

 

In summary, we have already implemented 13 ops, and there are still 27 ops to be implemented:
h2. Already implemented:

-Acos-
-BatchNormalization-
-Cos-
-Conv-
-LeakyRelu-
-LSTM-
-Abs-
-MaxPool-
-Flatten-
-Add-
-MatMul-
-Relu-
-Sigmoid-
h2. To be implemented:

ArgMax
Cast
Ceil
Clip
Compress
Concat
ConstantOfShape
Dropout
Gather
Gemm
Hardmax
Identity
InstanceNormalization
Log
LpNormalization
Mul
PRelu
ReduceMax
ReduceSum
Reshape
Shape
Slice
Squeeze
Sub
Sum
Transpose
Unsqueeze

Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.

  was:
For demo purposes, we need to implement the following three models and their components:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and there are still 16 ops to be implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2. To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

* means this op has no corresponding op in the ONNX op set; therefore, it needs a converter function built from basic ONNX ops.

 


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
> Attachments: arcface(based resnet100).png, bidaf.png, tiny_yolov2.png
>
>
> For demo purposes, we need to implement the following three models and their components:
> h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]
> Add
> BatchNormalization
> Conv
> LeakyRelu
> MaxPool
> Mul
> h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]
> Acos
> Add
> BatchNormalization
> Conv
> Cos
> Dropout
> Flatten
> Gemm
> Identity
> InstanceNormalization
> LpNormalization
> Mul
> PRelu
> Reshape
> Sub
> h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]
> Abs
> Add
> Add
> ArgMax
> Cast
> Ceil
> Clip
> Compress
> Concat
> ConstantOfShape
> Conv
> Dropout
> Gather
> Hardmax
> Log
> LSTM
> MatMul
> ReduceMax
> ReduceSum
> Relu
> Shape
> Sigmoid
> Slice
> Squeeze
> Sub
> Sum
> Transpose
> Unsqueeze
>  
> In summary, we have already implemented 13 ops, and there are still 27 ops to be implemented:
> h2. Already implemented:
> -Acos-
> -BatchNormalization-
> -Cos-
> -Conv-
> -LeakyRelu-
> -LSTM-
> -Abs-
> -MaxPool-
> -Flatten-
> -Add-
> -MatMul-
> -Relu-
> -Sigmoid-
> h2. To be implemented:
> ArgMax
> Cast
> Ceil
> Clip
> Compress
> Concat
> ConstantOfShape
> Dropout
> Gather
> Gemm
> Hardmax
> Identity
> InstanceNormalization
> Log
> LpNormalization
> Mul
> PRelu
> ReduceMax
> ReduceSum
> Reshape
> Shape
> Slice
> Squeeze
> Sub
> Sum
> Transpose
> Unsqueeze
> Please refer to the [ONNX Operator Schemas|https://github.com/onnx/onnx/blob/master/docs/Operators.md] for more detailed information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-31 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Attachment: bidaf.png

> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
> Attachments: arcface(based resnet100).png, bidaf.png, tiny_yolov2.png
>
>
> For demo purposes, we need to implement the following three models and their components:
> h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]
> MaxPooling2D
>  Conv2D
>  BatchNormalization
>  LeakyReLU
>  Reshape
> h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]
> Conv2D
>  BatchNormalization
>  relu
>  MaxPooling2D
>  Dropout
>  Flatten
>  Dense
>  Softmax
>  l2_normalize
>  acos
>  cos
> h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]
> K.stack
>  Softmax
>  K.expand_dims
>  K.sum
>  Constant
>  Dense
>  Lambda(lambda x: 1.0 - x, output_shape=(dim,))
>  Multiply
>  Add
>  K.concatenate
>  K.shape
>  K.max
>  K.tile
>  K.squeeze
>  linear
>  TimeDistributed
>  Bidirectional(LSTM
>  
>  
> In summary, we have already implemented 12 ops, and there are still 16 ops to be implemented:
> h2. Already implemented:
> -LSTM-
>  -Multiply-
>  -Add-
>  -linear-
>  -relu-
>  -acos-
>  -cos-
>  -LeakyReLU-
>  -Softmax-
>  -MaxPooling2D-
>  -Conv2D-
>  -BatchNormalization-
> h2. To be implemented:
> Reshape
>  Flatten
>  Dropout
>  max
>  shape
>  concatenate
>  Constant
>  L2Normalization
>  Expand
>  tile
>  squeeze
>  Dense*
>  TimeDistributed*
>  Bidirectional*
>  Stack*
>  Lambda*
> * means this op has no corresponding op in the ONNX op set; therefore, it needs a converter function built from basic ONNX ops.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-31 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Attachment: arcface(based resnet100).png

> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
> Attachments: arcface(based resnet100).png, tiny_yolov2.png
>
>
> For demo purposes, we need to implement the following three models and their components:
> h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]
> MaxPooling2D
>  Conv2D
>  BatchNormalization
>  LeakyReLU
>  Reshape
> h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]
> Conv2D
>  BatchNormalization
>  relu
>  MaxPooling2D
>  Dropout
>  Flatten
>  Dense
>  Softmax
>  l2_normalize
>  acos
>  cos
> h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]
> K.stack
>  Softmax
>  K.expand_dims
>  K.sum
>  Constant
>  Dense
>  Lambda(lambda x: 1.0 - x, output_shape=(dim,))
>  Multiply
>  Add
>  K.concatenate
>  K.shape
>  K.max
>  K.tile
>  K.squeeze
>  linear
>  TimeDistributed
>  Bidirectional(LSTM
>  
>  
> In summary, we have already implemented 12 ops, and there are still 16 ops to be implemented:
> h2. Already implemented:
> -LSTM-
>  -Multiply-
>  -Add-
>  -linear-
>  -relu-
>  -acos-
>  -cos-
>  -LeakyReLU-
>  -Softmax-
>  -MaxPooling2D-
>  -Conv2D-
>  -BatchNormalization-
> h2. To be implemented:
> Reshape
>  Flatten
>  Dropout
>  max
>  shape
>  concatenate
>  Constant
>  L2Normalization
>  Expand
>  tile
>  squeeze
>  Dense*
>  TimeDistributed*
>  Bidirectional*
>  Stack*
>  Lambda*
> * means this op has no corresponding op in the ONNX op set; therefore, it needs a converter function built from basic ONNX ops.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-31 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Attachment: tiny_yolov2.png

> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
> Attachments: arcface(based resnet100).png, tiny_yolov2.png
>
>
> For demo purposes, we need to implement the following three models and their components:
> h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]
> MaxPooling2D
>  Conv2D
>  BatchNormalization
>  LeakyReLU
>  Reshape
> h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]
> Conv2D
>  BatchNormalization
>  relu
>  MaxPooling2D
>  Dropout
>  Flatten
>  Dense
>  Softmax
>  l2_normalize
>  acos
>  cos
> h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]
> K.stack
>  Softmax
>  K.expand_dims
>  K.sum
>  Constant
>  Dense
>  Lambda(lambda x: 1.0 - x, output_shape=(dim,))
>  Multiply
>  Add
>  K.concatenate
>  K.shape
>  K.max
>  K.tile
>  K.squeeze
>  linear
>  TimeDistributed
>  Bidirectional(LSTM
>  
>  
> In summary, we have already implemented 12 ops, and there are still 16 ops to be implemented:
> h2. Already implemented:
> -LSTM-
>  -Multiply-
>  -Add-
>  -linear-
>  -relu-
>  -acos-
>  -cos-
>  -LeakyReLU-
>  -Softmax-
>  -MaxPooling2D-
>  -Conv2D-
>  -BatchNormalization-
> h2. To be implemented:
> Reshape
>  Flatten
>  Dropout
>  max
>  shape
>  concatenate
>  Constant
>  L2Normalization
>  Expand
>  tile
>  squeeze
>  Dense*
>  TimeDistributed*
>  Bidirectional*
>  Stack*
>  Lambda*
> * means this op has no corresponding op in the ONNX op set; therefore, it needs a converter function built from basic ONNX ops.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (SINGA-476) Autograd operators for ONNX

2019-07-30 Thread zhangzhaoqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SINGA-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhaoqi updated SINGA-476:
--
Description: 
For demo purposes, we need to implement the following three models and their components:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and there are still 16 ops to be implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2. To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

* means this op has no corresponding op in the ONNX op set; therefore, it needs a converter function built from basic ONNX ops.
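Such a converter rewrites one high-level op into a small subgraph of basic ONNX ops. A sketch for Dense, expressed as MatMul followed by Add; the function name and the node-triple format are illustrative, not the actual SONNX code:

```python
# Illustrative converter sketch: Dense has no single ONNX op, so it is
# decomposed into MatMul + Add built from basic ONNX ops. Each emitted
# triple mimics an ONNX node: (op_type, input_names, output_names).
def convert_dense(input_name, weight_name, bias_name, output_name):
    return [
        ("MatMul", [input_name, weight_name], ["_dense_mm"]),
        ("Add", ["_dense_mm", bias_name], [output_name]),
    ]

nodes = convert_dense("x", "W", "b", "y")
print([op for op, _, _ in nodes])  # ['MatMul', 'Add']
```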

 

  was:
For demo purposes, we need to implement the following three models and their components:
h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]

MaxPooling2D
 Conv2D
 BatchNormalization
 LeakyReLU
 Reshape
h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]

Conv2D
 BatchNormalization
 relu
 MaxPooling2D
 Dropout
 Flatten
 Dense
 Softmax
 l2_normalize
 acos
 cos
h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]

K.stack
 Softmax
 K.expand_dims
 K.sum
 Constant
 Dense
 Lambda(lambda x: 1.0 - x, output_shape=(dim,))
 Multiply
 Add
 K.concatenate
 K.shape
 K.max
 K.tile
 K.squeeze
 linear
 TimeDistributed
 Bidirectional(LSTM

 

 

In summary, we have already implemented 12 ops, and 16 more remain to be 
implemented:
h2. Already implemented:

-LSTM-
 -Multiply-
 -Add-
 -linear-
 -relu-
 -acos-
 -cos-
 -LeakyReLU-
 -Softmax-
 -MaxPooling2D-
 -Conv2D-
 -BatchNormalization-
h2. To be implemented:

Reshape
 Flatten
 Dropout
 max
 shape
 concatenate
 Constant
 L2Normalization
 Expand
 tile
 squeeze
 Dense*
 TimeDistributed*
 Bidirectional*
 Stack*
 Lambda*

* means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ops.

 


> Autograd operators for ONNX
> ---
>
> Key: SINGA-476
> URL: https://issues.apache.org/jira/browse/SINGA-476
> Project: Singa
>  Issue Type: New Feature
>Reporter: zhangzhaoqi
>Priority: Critical
>
> For demo purposes, we need to implement these three models; their components 
> are:
> h2. [Tiny yolov2|https://arxiv.org/pdf/1612.08242.pdf]
> MaxPooling2D
>  Conv2D
>  BatchNormalization
>  LeakyReLU
>  Reshape
> h2. [Arcface|https://arxiv.org/pdf/1801.07698.pdf]
> Conv2D
>  BatchNormalization
>  relu
>  MaxPooling2D
>  Dropout
>  Flatten
>  Dense
>  Softmax
>  l2_normalize
>  acos
>  cos
> h2. [BIDAF|https://arxiv.org/pdf/1611.01603.pdf]
> K.stack
>  Softmax
>  K.expand_dims
>  K.sum
>  Constant
>  Dense
>  Lambda(lambda x: 1.0 - x, output_shape=(dim,))
>  Multiply
>  Add
>  K.concatenate
>  K.shape
>  K.max
>  K.tile
>  K.squeeze
>  linear
>  TimeDistributed
>  Bidirectional(LSTM
>  
>  
> In summary, we have already implemented 12 ops, and 16 more remain to be 
> implemented:
> h2. Already implemented:
> -LSTM-
>  -Multiply-
>  -Add-
>  -linear-
>  -relu-
>  -acos-
>  -cos-
>  -LeakyReLU-
>  -Softmax-
>  -MaxPooling2D-
>  -Conv2D-
>  -BatchNormalization-
> h2. To be implemented:
> Reshape
>  Flatten
>  Dropout
>  max
>  shape
>  concatenate
>  Constant
>  L2Normalization
>  Expand
>  tile
>  squeeze
>  Dense*
>  TimeDistributed*
>  Bidirectional*
>  Stack*
>  Lambda*
> * means this op has no corresponding op in the ONNX op set, so it needs a 
> converter function composed of basic ops.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (SINGA-476) Autograd operators for ONNX

2019-07-30 Thread zhangzhaoqi (JIRA)
zhangzhaoqi created SINGA-476:
-

 Summary: Autograd operators for ONNX
 Key: SINGA-476
 URL: https://issues.apache.org/jira/browse/SINGA-476
 Project: Singa
  Issue Type: New Feature
Reporter: zhangzhaoqi


Already implemented:

-LSTM-
-Multiply-
-Add-
-linear-
-relu-
-acos-
-cos-
-LeakyReLU-
-Softmax-
-MaxPooling2D-
-Conv2D-
-BatchNormalization-

 

To be implemented:

 

Reshape
Flatten
Dropout
max
shape
concatenate
Constant
L2Normalization
Expand
tile
squeeze
Dense*
TimeDistributed*
Bidirectional*
Stack*
Lambda*

* means this op has no corresponding op in the ONNX op set, so it needs a 
converter function composed of basic ops.
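As a concrete case of composing basic ops, the BIDAF component `Lambda(lambda x: 1.0 - x)` has no single ONNX op, but it can be expressed with a `Constant` node plus a `Sub` node, both of which ONNX provides. The sketch below is an illustration, not SINGA's actual converter: the node-as-dict format and the name `convert_one_minus` are assumptions.

```python
# Hypothetical sketch: expressing Lambda(lambda x: 1.0 - x) with the basic
# ONNX ops Constant and Sub. Nodes are plain dicts:
# {"op": name, "inputs": [...], "outputs": [...]}.

def convert_one_minus(node):
    """Rewrite y = 1.0 - x as a Constant(1.0) followed by Sub."""
    x = node["inputs"][0]
    y = node["outputs"][0]
    one = y + "_const_one"  # tensor name for the scalar constant 1.0
    return [
        {"op": "Constant", "inputs": [], "outputs": [one], "value": 1.0},
        {"op": "Sub", "inputs": [one, x], "outputs": [y]},
    ]
```

The same pattern covers the other starred ops: each converter emits a small subgraph of basic ONNX ops that computes the original layer's function.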



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

