mwbyeon commented on issue #9308: batch size in inference
URL: https://github.com/apache/incubator-mxnet/issues/9308#issuecomment-355717931
 
 
   @BenLag2906  Here is some code to illustrate the idea (not tested):
   
   ```cpp
   // Allocate memory for a batch of images
   const int image_size = width * height * channels;
   std::vector<mx_float> batch_data(batch_size * image_size);

   // Load each image into its slot of the batch buffer
   for (int i = 0; i < batch_size; ++i) {
       GetImageFile(batch_file[i], &batch_data[i * image_size], channels,
                    cv::Size(width, height), nd_data);
   }

   // Set input data for prediction (size is the total number of floats)
   MXPredSetInput(pred_hnd, "data", batch_data.data(), batch_size * image_size);

   // Run the forward pass
   MXPredForward(pred_hnd);

   // Allocate memory for the output: one score vector per image
   const int output_size = batch_size * num_classes;
   std::vector<mx_float> output(output_size);

   // Get output (output index 0)
   MXPredGetOutput(pred_hnd, 0, output.data(), output_size);
   ```
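
   For completeness, here is a minimal sketch (also untested) of how the predictor handle used above could be created so that the "data" input already carries the batch dimension. The helper name CreateBatchPredictor and the symbol_json / param_data / param_size arguments are placeholders for the symbol JSON string and parameter bytes loaded from disk, as in the MXNet C predict examples.

   ```cpp
   #include <mxnet/c_predict_api.h>

   // Create a predictor whose "data" input is shaped (batch_size, channels, height, width).
   // symbol_json is the contents of the -symbol.json file; param_data/param_size are the
   // raw bytes of the .params file (placeholder names, loaded elsewhere).
   PredictorHandle CreateBatchPredictor(const char* symbol_json,
                                        const void* param_data, int param_size,
                                        int batch_size, int channels,
                                        int height, int width) {
       PredictorHandle pred_hnd = nullptr;

       // Single input node named "data" with a 4-D shape; the first dimension is the batch.
       const char* input_keys[1] = {"data"};
       const mx_uint input_shape_indptr[2] = {0, 4};
       const mx_uint input_shape_data[4] = {
           static_cast<mx_uint>(batch_size),
           static_cast<mx_uint>(channels),
           static_cast<mx_uint>(height),
           static_cast<mx_uint>(width)};

       // dev_type 1 = CPU, dev_id 0; MXPredCreate returns 0 on success.
       MXPredCreate(symbol_json, param_data, param_size,
                    1 /*dev_type*/, 0 /*dev_id*/,
                    1 /*num_input_nodes*/, input_keys,
                    input_shape_indptr, input_shape_data,
                    &pred_hnd);
       return pred_hnd;
   }
   ```

   Each image's scores then occupy a contiguous slice of the flattened output, i.e. the scores for image i start at output[i * num_classes].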
