This is an automated email from the ASF dual-hosted git repository.
joddiyzhang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/singa-doc.git
The following commit(s) were added to refs/heads/master by this push:
new e52afda Update onnx.md
e52afda is described below
commit e52afdaf890e42842152b03fc8e219d9e85fa516
Author: Joddiy Zhang <[email protected]>
AuthorDate: Mon Jan 11 15:52:46 2021 +0800
Update onnx.md
---
docs-site/docs/onnx.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs-site/docs/onnx.md b/docs-site/docs/onnx.md
index 877bf9a..25ffe0b 100644
--- a/docs-site/docs/onnx.md
+++ b/docs-site/docs/onnx.md
@@ -199,7 +199,7 @@ pencil, and many animals.
|
<b>[MobileNet](https://github.com/onnx/models/tree/master/vision/classification/mobilenet)</b>
| [Sandler et al.](https://arxiv.org/abs/1801.04381) | Light-weight
deep neural network best suited for mobile and embedded vision applications.
<br>Top-5 error from paper - ~10%
| [](https://colab.res
[...]
|
<b>[ResNet18](https://github.com/onnx/models/tree/master/vision/classification/resnet)</b>
| [He et al.](https://arxiv.org/abs/1512.03385) | A CNN
model (up to 152 layers). Uses shortcut connections to achieve higher accuracy
when classifying images. <br> Top-5 error from paper - ~3.6%
| [](https://colab.res
[...]
|
<b>[VGG16](https://github.com/onnx/models/tree/master/vision/classification/vgg)</b>
| [Simonyan et al.](https://arxiv.org/abs/1409.1556) |
Deep CNN model(up to 19 layers). Similar to AlexNet but uses multiple smaller
kernel-sized filters that provides more accuracy when classifying images.
<br>Top-5 error from paper - ~8%
| [](https://colab.res
[...]
-|
<b>[ShuffleNet_V2](https://github.com/onnx/models/tree/master/vision/classification/shufflenet)</b>
| [Simonyan et al.](https://arxiv.org/pdf/1707.01083.pdf) | Extremely
computation efficient CNN model that is designed specifically for mobile
devices. This network architecture design considers direct metric such as
speed, instead of indirect metric like FLOP. Top-1 error from paper - ~30.6% |
[</b>
| [Simonyan et al.](https://arxiv.org/pdf/1707.01083.pdf) | Extremely
computation efficient CNN model that is designed specifically for mobile
devices. This network architecture design considers direct metric such as
speed, instead of indirect metric like FLOP. Top-1 error from paper - ~30.6% |
[](https://colab.res
[...]
+|
<b>[ShuffleNet_V2](https://github.com/onnx/models/tree/master/vision/classification/shufflenet)</b>
| [Simonyan et al.](https://arxiv.org/pdf/1707.01083.pdf) | Extremely
computation efficient CNN model that is designed specifically for mobile
devices. This network architecture design considers direct metric such as
speed, instead of indirect metric like FLOP. Top-1 error from paper - ~30.6% |
[](https://colab.res
[...]
We also give some re-training examples by using VGG and ResNet, please check
`examples/onnx/training`.
@@ -232,7 +232,7 @@ given context paragraph.
|
-----------------------------------------------------------------------------------------------------
|
-----------------------------------------------------------------------------------------------------------------------------------
|
-----------------------------------------------------------------------------------------------------------------
|
----------------------------------------------------------------------------------------------------------------------------------------
[...]
|
<b>[BERT-Squad](https://github.com/onnx/models/tree/master/text/machine_comprehension/bert-squad)</b>
| [Devlin et al.](https://arxiv.org/pdf/1810.04805.pdf)
| This model answers
questions based on the context of the given input paragraph.
| [](https://colab.research.google.com/drive/1kud-lUPjS_u-TkDAzi
[...]
|
<b>[RoBERTa](https://github.com/onnx/models/tree/master/text/machine_comprehension/roberta)</b>
| [Devlin et al.](https://arxiv.org/pdf/1907.11692.pdf)
| A large
transformer-based model that predicts sentiment based on given input text.
| [](https://colab.research.google.com/drive/1F-c4LJSx3Cb2jW6tP7
[...]
-|
<b>[GPT-2](https://github.com/onnx/models/tree/master/text/machine_comprehension/gpt-2)</b>
| [Devlin et
al.](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
| A large transformer-based language model that given a sequence of words
within some text, predicts the next word. | [
[...]
+|
<b>[GPT-2](https://github.com/onnx/models/tree/master/text/machine_comprehension/gpt-2)</b>
| [Devlin et
al.](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
| A large transformer-based language model that given a sequence of words
within some text, predicts the next word. | [](https://colab.research.google.com/drive/1ZlXLSIMppPch6HgzKR
[...]
## Supported operators