leleamol commented on a change in pull request #13680: [MXNET-1121] Example to 
demonstrate the inference workflow using RNN
URL: https://github.com/apache/incubator-mxnet/pull/13680#discussion_r246952302
 
 

 ##########
 File path: cpp-package/example/inference/README.md
 ##########
 @@ -39,3 +39,48 @@ Alternatively, The script 
[unit_test_inception_inference.sh](<https://github.com
 ```
 ./unit_test_inception_inference.sh
 ```
+
+### [simple_rnn.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/simple_rnn.cpp>)
+This example demonstrates the sequence prediction workflow with a pre-trained RNN model using the MXNet C++ API. It shows how a pre-trained RNN model can be loaded and used to generate an output sequence.
+The example performs the following tasks (the first step is sketched after the list):
+- Load the pre-trained RNN model.
+- Load the dictionary file that contains the word-to-index mapping.
+- Convert the input string to a vector of indices and pad it to match the input data length.
+- Run the forward pass and predict the output string.
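+
+A minimal sketch of the model loading step with the MXNet C++ API might look like the following (the file names are the ones listed further below; the actual simple_rnn.cpp may organize this differently):
+
+```cpp
+// Sketch only: load the network definition and the trained parameters.
+#include <map>
+#include <string>
+#include "mxnet-cpp/MxNetCpp.h"
+
+using namespace mxnet::cpp;
+
+int main() {
+  // Network structure saved as a JSON symbol file.
+  Symbol net = Symbol::Load("./obama-speaks-symbol.json");
+
+  // Trained weights. Checkpoint keys typically carry "arg:"/"aux:" prefixes,
+  // which are stripped here before the arrays are used for binding.
+  std::map<std::string, NDArray> args, auxs;
+  for (const auto &kv : NDArray::LoadToMap("./obama-speaks-0100.params")) {
+    if (kv.first.rfind("arg:", 0) == 0) {
+      args[kv.first.substr(4)] = kv.second;
+    } else if (kv.first.rfind("aux:", 0) == 0) {
+      auxs[kv.first.substr(4)] = kv.second;
+    }
+  }
+  return 0;
+}
+```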
+
+The example uses a pre-trained RNN model that was trained on a dataset of speeches given by Obama.
+The model consists of:
+- an Embedding layer with an embedding size of 650
+- 3 LSTM layers with a hidden dimension of 650 and a sequence length of 35
+- a FullyConnected layer
+- a SoftmaxOutput layer
+
+The model was trained for 100 epochs.
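+
+One way to verify the layers of the loaded model is to list the symbol's arguments and outputs; a minimal sketch (not part of simple_rnn.cpp) is shown below:
+
+```cpp
+// Sketch only: print the parameter and output names of the loaded symbol
+// to confirm the Embedding, LSTM, FullyConnected and SoftmaxOutput layers.
+#include <iostream>
+#include "mxnet-cpp/MxNetCpp.h"
+
+using namespace mxnet::cpp;
+
+int main() {
+  Symbol net = Symbol::Load("./obama-speaks-symbol.json");
+  for (const auto &arg : net.ListArguments()) {
+    std::cout << "argument: " << arg << std::endl;
+  }
+  for (const auto &out : net.ListOutputs()) {
+    std::cout << "output: " << out << std::endl;
+  }
+  return 0;
+}
+```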
+
+The model files can be found here:
+- [obama-speaks-symbol.json](<https://s3.amazonaws.com/mxnet-cpp/RNN_model/obama-speaks-symbol.json>)
+- [obama-speaks-0100.params](<https://s3.amazonaws.com/mxnet-cpp/RNN_model/obama-speaks-0100.params>)
+- [obama.dictionary.txt](<https://s3.amazonaws.com/mxnet-cpp/RNN_model/obama.dictionary.txt>)
+Each line of the dictionary file contains a word and a unique index for that word, separated by a space. The dictionary contains a total of 14293 words generated from the training dataset.
+The example downloads the above files at runtime.
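+
+A minimal sketch of parsing the dictionary and converting an input sentence to a fixed-length index vector is shown below. The helper names are illustrative only, and mapping unknown words to index 0, padding with zeros, and the fixed sequence length are assumptions based on the description above:
+
+```cpp
+// Sketch only: load the "word index" dictionary and convert a sentence
+// into a padded vector of indices. Indices are kept as float because the
+// model's input NDArray holds floating point values.
+#include <fstream>
+#include <sstream>
+#include <string>
+#include <unordered_map>
+#include <vector>
+
+std::unordered_map<std::string, float> LoadDictionary(const std::string &path) {
+  std::unordered_map<std::string, float> dict;
+  std::ifstream fin(path);
+  std::string word;
+  float index;
+  // Each line of the file is "<word> <index>".
+  while (fin >> word >> index) {
+    dict[word] = index;
+  }
+  return dict;
+}
+
+std::vector<float> SentenceToIndices(const std::string &sentence,
+                                     const std::unordered_map<std::string, float> &dict,
+                                     size_t sequence_length) {
+  std::vector<float> indices;
+  std::istringstream iss(sentence);
+  std::string word;
+  while (iss >> word && indices.size() < sequence_length) {
+    auto it = dict.find(word);
+    indices.push_back(it != dict.end() ? it->second : 0.0f);  // unknown word -> 0
+  }
+  indices.resize(sequence_length, 0.0f);  // pad to the fixed input length
+  return indices;
+}
+```
+
+The resulting vector can then be copied into the input NDArray that is bound to the network.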
+
+The example's command line parameters are as shown below:
+
+```
+./simple_rnn --help
+Usage:
+simple_rnn
+[--input] Input string sequence.
+[--gpu]  Specify this option if the workflow needs to be run in GPU context.
+
+./simple_rnn
+
+or
+
+./simple_rnn --input "Good morning. I appreciate the opportunity to speak here"
+```
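+
+For the --gpu option and the forward pass, a minimal sketch of the workflow is shown below. The input name "data", the 1 x 35 input shape, and the way the parameters are filled in are assumptions for illustration, not necessarily what simple_rnn.cpp does:
+
+```cpp
+// Sketch only: pick the context from a --gpu style flag, bind the network
+// and run a single forward pass for inference.
+#include <map>
+#include <string>
+#include "mxnet-cpp/MxNetCpp.h"
+
+using namespace mxnet::cpp;
+
+int main(int argc, char **argv) {
+  bool use_gpu = (argc > 1 && std::string(argv[1]) == "--gpu");
+  Context ctx = use_gpu ? Context::gpu() : Context::cpu();
+
+  Symbol net = Symbol::Load("./obama-speaks-symbol.json");
+
+  // The args map holds the input plus the loaded parameters. Only the input
+  // placeholder is shown here; a real run would also copy the "arg:" entries
+  // from the .params file into this map before binding.
+  std::map<std::string, NDArray> args_map;
+  args_map["data"] = NDArray(Shape(1, 35), ctx, false);
+
+  Executor *exe = net.SimpleBind(ctx, args_map);
+  exe->Forward(false);   // inference only, no gradient computation
+  NDArray::WaitAll();
+
+  // exe->outputs now holds the SoftmaxOutput probabilities, from which the
+  // predicted word indices are mapped back to words through the dictionary.
+  delete exe;
+  return 0;
+}
+```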
+
+The example will output the sequence of 35 words as follows:
 
 Review comment:
   Done.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
