stephenrawls commented on a change in pull request #13680: [MXNET-1121] Example 
to demonstrate the inference workflow using RNN
URL: https://github.com/apache/incubator-mxnet/pull/13680#discussion_r253663314
 
 

 ##########
 File path: cpp-package/example/inference/sentiment_analysis_rnn.cpp
 ##########
 @@ -0,0 +1,464 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+ * This example demonstrates the sentiment prediction workflow with a
+ * pre-trained RNN model using the MXNet C++ API.
+ * The example performs the following tasks:
+ * 1. Load the pre-trained RNN model.
+ * 2. Load the dictionary file that contains the word-to-index mapping.
+ * 3. Create executors for pre-determined input lengths.
+ * 4. Convert each line in the input to a vector of indices.
+ * 5. Find the right executor for each line.
+ * 6. Run the forward pass for each line and predict the sentiment score.
+ * The example uses a pre-trained RNN model that was trained on the IMDB
+ * dataset.
+ */
+
+#include <sys/stat.h>
+#include <iostream>
+#include <fstream>
+#include <cstdlib>
+#include <map>
+#include <string>
+#include <vector>
+#include <sstream>
+#include "mxnet-cpp/MxNetCpp.h"
+
+using namespace mxnet::cpp;
+
+static const int DEFAULT_BUCKET_KEYS[] = {5, 10, 15, 20, 25, 30};
+static const char DEFAULT_S3_URL[] =
+    "https://s3.amazonaws.com/mxnet-cpp/RNN_model/";
+
+/*
+ * class Predictor
+ *
+ * This class encapsulates the functionality to load the model, process the
+ * input text, and run the forward pass.
+ */
+
+class Predictor {
+ public:
+    Predictor() {}
+    Predictor(const std::string& model_json,
+              const std::string& model_params,
+              const std::string& input_dictionary,
+              const std::vector<int>& bucket_keys,
+              bool use_gpu = false);
+    float PredictSentiment(const std::string &input_review);
+    ~Predictor();
+
+ private:
+    void LoadModel(const std::string& model_json_file);
+    void LoadParameters(const std::string& model_parameters_file);
+    void LoadDictionary(const std::string &input_dictionary);
+    inline bool FileExists(const std::string& name) {
+        struct stat buffer;
+        return (stat(name.c_str(), &buffer) == 0);
+    }
+    float PredictSentimentForOneLine(const std::string &input_line);
+    int ConvertToIndexVector(const std::string& input,
+                      std::vector<float> *input_vector);
+    int GetIndexForOutputSymbolName(const std::string& output_symbol_name);
+    float GetIndexForWord(const std::string& word);
+    int GetClosestBucketKey(int num_words);
+    std::map<std::string, NDArray> args_map;
+    std::map<std::string, NDArray> aux_map;
+    std::map<std::string, int>  wordToIndex;
+    Symbol net;
+    std::map<int, Executor*> executor_buckets;
+    Context global_ctx = Context::cpu();
+};
+
+
+/*
+ * The constructor takes the following parameters as input:
+ * 1. model_json: The RNN model in a JSON-formatted file.
+ * 2. model_params: File containing the model parameters.
+ * 3. input_dictionary: File containing the words and their associated indices.
+ * 4. bucket_keys: Vector of bucket keys; each key is an input length for
+ *    which an executor will be created.
+ * 5. use_gpu: If true, run the model on the GPU context.
+ *
+ * The constructor:
+ *  1. Loads the model and parameter files.
+ *  2. Loads the dictionary file to create the word-to-index map.
+ *  3. For each bucket key in the input vector of bucket keys, invokes
+ *     SimpleBind to create an executor. The bucket key determines the length
+ *     of input data required for that executor.
+ *  4. Creates a map from bucket key to the corresponding executor.
+ *  5. The model is loaded only once; the executors share the memory for the
+ *     parameters.
+ */
+Predictor::Predictor(const std::string& model_json,
+                     const std::string& model_params,
+                     const std::string& input_dictionary,
+                     const std::vector<int>& bucket_keys,
+                     bool use_gpu) {
+  if (use_gpu) {
+    global_ctx = Context::gpu();
+  }
+
+  /*
+   * Load the dictionary file that contains each word and its index.
+   * The function creates the word-to-index map, which is used to build the
+   * index vector for an input sentence.
+   */
+  LoadDictionary(input_dictionary);
+
+  // Load the model
+  LoadModel(model_json);
+
+  // Load the model parameters.
+  LoadParameters(model_params);
+
+  // Create the executors for each bucket key. The bucket key represents the 
shape of input data.
+  for (int bucket : bucket_keys) {
+    args_map["data0"] = NDArray(Shape(bucket, 1), global_ctx, false);
+    args_map["data1"] = NDArray(Shape(1), global_ctx, false);
+    Executor* executor = net.SimpleBind(global_ctx, args_map, 
std::map<std::string, NDArray>(),
+                                std::map<std::string, OpReqType>(), aux_map);
+    executor_buckets[bucket] = executor;
+  }
 
 Review comment:
  @leleamol  -- the question I had is not whether it will affect the 
correctness or cause your program to crash. I am wondering whether the current 
order ends up allocating more memory than you need.
   
   I don't know for sure, but I suspect that if you allocate the largest 
executor first, then the rest of the executors don't have to allocate extra 
memory; they just share memory from the largest executor.
   
   However, I would assume that if the smallest executor is the "master" 
executor, then all other executors have to allocate extra memory, because the 
master executor did not allocate enough to satisfy any of the other executors.
   
   Can you re-run your test with the order changed, and look closely at the 
total GPU memory allocated?
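
   A minimal sketch of the reordering suggested above, assuming (as 
hypothesized, not verified) that executors bound later can reuse the memory 
pool of an earlier, larger one. The helper `OrderBucketsLargestFirst` is 
illustrative, not part of the example's API:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Sort the bucket keys in descending order so that the executor for the
// largest input length (and thus the largest activation memory requirement)
// is bound first. Under the memory-sharing hypothesis discussed in this
// review, the smaller executors would then reuse that allocation instead of
// growing the pool.
std::vector<int> OrderBucketsLargestFirst(std::vector<int> bucket_keys) {
  std::sort(bucket_keys.begin(), bucket_keys.end(), std::greater<int>());
  return bucket_keys;
}
```

   In the constructor, the loop would become 
`for (int bucket : OrderBucketsLargestFirst(bucket_keys))`, so the first 
SimpleBind call is for the largest bucket; comparing total GPU memory before 
and after this change would answer the question above.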

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
