FrozenGene commented on a change in pull request #19: URL: https://github.com/apache/tvm-rfcs/pull/19#discussion_r684928442
########## File path: rfcs/add_paddlepaddle_frontend.md ##########

@@ -0,0 +1,104 @@
+- Feature Name: add-paddlepaddle-frontend
+- Start Date: 2021-08-05
+- RFC PR: https://github.com/apache/tvm-rfcs/pull/19
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+Add a paddlepaddle frontend, enhance TVM's compatibility of deep learning frameworks, which support PaddlePaddle>=2.0
+
+# Motivation
+[motivation]: #motivation
+
+PaddlePaddle, an independent R&D deep learning platform in China, has been officially open-sourced to professional communities since 2016. It has been widely adopted by a wide range of sectors including manufacturing, agriculture, enterprise service, and so on while serving more than 2.3 million developers. With such advantages, PaddlePaddle has helped an increasing number of partners commercialize AI.
+
+Currently, PaddlePaddle has built a prosperous technological ecology, there are more than 500 models developed by official organization or outside developers, including CV/NLP/OCR/Speech, for more details we can refer to the following links,
+
+- [PaddlePaddle/models](https://github.com/PaddlePaddle/models)
+- [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection)
+- [PaddleClas](https://github.com/PaddlePaddle/PaddleClas)
+- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
+- [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)
+- [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)
+- [DeepSpeech](https://github.com/PaddlePaddle/DeepSpeech)
+
+After upgrading to 2.0, PaddlePaddle supported imperative programming similar with PyTorch, but a mechanism of `Dynamic to Static` is provided, which can export PaddlePaddle model as graph representation and more friendly for deployment, the following example code shows how to export a PaddlePaddle model,
+
+```
+import paddle
+import paddlehub
+model = hub.Module(name="resnet50_vd_imagenet_ssld")
+input_spec = paddle.static.InputSpec(
+    [1, 3, 224, 224], "float32", "image")
+paddle.jit.save(model, "model/infer", input_spec=[input_spec])
+```
+
+PaddlePaddle's deployment is supported by Paddle Inference/Paddle Lite/OpenVINO/Tengine/Adlik now. We have noticed there are lots of developers convmodel to ONNX format for TVM's supporting, but only part of models can be converted due to the lack of ONNX operators.

Review comment: `convmodel` -> converting model; `for TVM's supporting` -> supported by TVM
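For reference, the ONNX detour mentioned in the quoted paragraph usually looks roughly like the sketch below. This is a minimal, illustrative sketch only: it assumes the PaddlePaddle model has already been exported to an ONNX file (e.g. with the paddle2onnx tool), and the file name and input shape are made up for the example.

```
import onnx
from tvm import relay

# Load an ONNX file previously exported from a PaddlePaddle model
# (the path is illustrative).
onnx_model = onnx.load("model/infer.onnx")

# Import it through TVM's existing ONNX frontend; only models whose
# operators survive the Paddle->ONNX export can take this route.
shape_dict = {"image": [1, 3, 224, 224]}
mod, params = relay.frontend.from_onnx(onnx_model, shape=shape_dict)
```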
########## File path: rfcs/add_paddlepaddle_frontend.md ##########

+After upgrading to 2.0, PaddlePaddle supported imperative programming similar with PyTorch, but a mechanism of `Dynamic to Static` is provided, which can export PaddlePaddle model as graph representation and more friendly for deployment, the following example code shows how to export a PaddlePaddle model,
+
+```
+import paddle
+import paddlehub
+model = hub.Module(name="resnet50_vd_imagenet_ssld")

Review comment: Could we add another one script example showing how to load one paddlepaddle model from disk?
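As an illustration of what such a script could look like: a minimal sketch that reloads the model written by the `paddle.jit.save` call in the quoted snippet; the dummy input is only there to exercise the reloaded graph.

```
import numpy as np
import paddle

# Reload the inference model exported by the paddle.jit.save call above.
model = paddle.jit.load("model/infer")
model.eval()

# Run a forward pass on a dummy image tensor to check the reloaded graph.
x = paddle.to_tensor(np.zeros([1, 3, 224, 224], dtype="float32"))
out = model(x)
```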
########## File path: rfcs/add_paddlepaddle_frontend.md ##########

+Based on this background, we proposed this RFC addle frontend for TVM, improve usability and extend more models support for PaddlePaddle's users.
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+If you dive into the pull request code, there's 2 concepts imported from PaddlePaddle you may want to know,
+- `paddle.jit.load`: Recommended API to load exported inference model, the type of return result is `TranslatedLayer`, stores `Program`(similar with computation graph) and parameters;
+- `paddle.static.load_inference_model`: API to compatible with old version PaddlePaddle's model, the type of return result is `Program`, and all the parameters save in `Scope`, for the default situation, we can extract the parameters from the `paddle.fluid.global_scope()`.
+
+So, this RFC also will bring a new API for TVM to support PaddlePaddle model,
+```
+relay.frontend.from_paddle(program_or_layer, shape_dict=None, scope=None)
+```
+- `program_or_layer`: the return result of `paddle.static.load_inference_model` or `paddle.jit.load`
+- `shape_dict`: optional parameter, input shapes of the model
+- `scope`: optional parameter, only available if `model` is loaded by `paddle.static.load_inference_model`
+
+The following example code shows how to import a PaddlePaddle model,
+```
+import paddle
+model = paddle.jit.load('model/infer')
+
+shape_dict = {'image': [1, 3, 224, 224]}
+mod, params = relay.frontend.from_paddle(model, shape_dict=shape_dict)
+```
+
+Error may happened if there are some operators is not supported by this frontend, and the details will print out.

Review comment: `Error may happened` -> Errors may happen; `some operators is` -> some operators are
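To make the second loading path from the quoted section concrete, a rough sketch of importing an old-format model via `paddle.static.load_inference_model` together with the `relay.frontend.from_paddle` signature proposed in this RFC (not yet in TVM); the model path and input shape are illustrative.

```
import paddle
from tvm import relay

# Static-graph path for models saved in the older inference-model format.
paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())
program, feed_names, fetch_targets = paddle.static.load_inference_model(
    "model/infer", exe)

# As described above, the parameters live in the global scope after loading.
scope = paddle.fluid.global_scope()

# Proposed TVM API from this RFC (illustrative usage).
shape_dict = {"image": [1, 3, 224, 224]}
mod, params = relay.frontend.from_paddle(program, shape_dict=shape_dict, scope=scope)
```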
########## File path: rfcs/add_paddlepaddle_frontend.md ##########

+PaddlePaddle's deployment is supported by Paddle Inference/Paddle Lite/OpenVINO/Tengine/Adlik now. We have noticed there are lots of developers convmodel to ONNX format for TVM's supporting, but only part of models can be converted due to the lack of ONNX operators.
+Based on this background, we proposed this RFC addle frontend for TVM, improve usability and extend more models support for PaddlePaddle's users.

Review comment: addle -> to add paddlepaddle; improve -> to improve; extend -> to extend

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
