szaszm commented on a change in pull request #1096:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1096#discussion_r644725588



##########
File path: extensions/tensorflow/CMakeLists.txt
##########
@@ -26,7 +26,7 @@ find_package(TensorFlow REQUIRED)
 
 message("-- Found TensorFlow: ${TENSORFLOW_INCLUDE_DIRS}")
 
-include_directories(${TENSORFLOW_INCLUDE_DIRS})
+include_directories(SYSTEM ${TENSORFLOW_INCLUDE_DIRS})

Review comment:
       Can we change this to `target_include_directories` on the `minifi-tensorflow-extensions` target? Preferably PRIVATE.
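
   A minimal sketch of the suggested change, assuming the target defined in this CMakeLists.txt is named `minifi-tensorflow-extensions` as the comment says:

   ```cmake
   # Replace the directory-wide include_directories(...) with a target-scoped
   # one. SYSTEM suppresses warnings coming from the TensorFlow headers, and
   # PRIVATE keeps the include paths from propagating to dependent targets.
   target_include_directories(minifi-tensorflow-extensions
       SYSTEM PRIVATE ${TENSORFLOW_INCLUDE_DIRS})
   ```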

##########
File path: extensions/tensorflow/TFApplyGraph.cpp
##########
@@ -194,7 +197,7 @@ int64_t TFApplyGraph::GraphReadCallback::process(const std::shared_ptr<io::BaseS
   auto num_read = stream->read(reinterpret_cast<uint8_t *>(&graph_proto_buf[0]),
                                    static_cast<int>(stream->size()));
 
-  if (num_read != stream->size()) {
+  if (static_cast<uint64_t>(num_read) != stream->size()) {

Review comment:
       `stream->size()` returns `size_t`, not `uint64_t`
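
   A minimal sketch of the comparison the comment is asking for, using a hypothetical `FakeStream` stand-in for the stream API in the diff (`size()` returning `size_t`, `read()` returning a signed count, as in the current code):

   ```cpp
   #include <cstddef>
   #include <cstdint>
   #include <iostream>

   // Hypothetical stand-in for the stream API in the diff: size() returns
   // size_t and read() returns a signed count of bytes read.
   struct FakeStream {
     std::size_t size() const { return 10; }
     std::int64_t read(std::size_t requested) {
       return static_cast<std::int64_t>(requested);
     }
   };

   int main() {
     FakeStream stream;
     const std::int64_t num_read = stream.read(stream.size());
     // Cast the signed count to size_t -- the actual return type of size() --
     // so both sides of the comparison share a type, instead of casting to
     // uint64_t.
     if (static_cast<std::size_t>(num_read) != stream.size()) {
       std::cout << "short read\n";
       return 1;
     }
     std::cout << "ok\n";
     return 0;
   }
   ```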

##########
File path: extensions/tensorflow/TFExtractTopLabels.cpp
##########
@@ -133,8 +136,11 @@ int64_t TFExtractTopLabels::LabelsReadCallback::process(const std::shared_ptr<io
 
   while (total_read < stream->size()) {
     auto read = stream->read(reinterpret_cast<uint8_t *>(&buf[0]), static_cast<int>(buf_size));
+    if (read < 0) {

Review comment:
       This will need to be changed after my PR #1028 is merged, but it might not conflict, because this is new code. If it slipped through, `read` would never be smaller than zero, and line 143 would become a loop to the largest `size_t` value.
   
   Could you specify the type on line 138 to be `int`, using brace initialization (direct-list-initialization), so that this becomes a compile error once `stream->read()` returns a `size_t`? Or add a `static_assert` that the variable is indeed of type `int`.
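
   A minimal sketch of the `static_assert` option, with a hypothetical `read_some()` standing in for `stream->read()` (currently returning a signed count, as in the code under review):

   ```cpp
   #include <iostream>
   #include <type_traits>

   // Hypothetical stand-in for stream->read(): currently returns a signed
   // count, so `read < 0` can detect errors.
   int read_some() { return 42; }

   int main() {
     auto read = read_some();
     // The static_assert option from the comment: if a future signature
     // change makes read_some() return size_t, this line turns into a
     // compile error instead of silently making `read < 0` always false.
     static_assert(std::is_same<decltype(read), int>::value,
                   "read must be signed so that `read < 0` can detect errors");
     // The brace-initialization option would be `int read{read_some()};`,
     // which rejects a narrowing size_t -> int conversion at compile time.
     if (read < 0) {
       std::cout << "read error\n";
       return 1;
     }
     std::cout << read << "\n";
     return 0;
   }
   ```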

##########
File path: extensions/tensorflow/TFConvertImageToTensor.cpp
##########
@@ -325,7 +325,7 @@ int64_t TFConvertImageToTensor::ImageReadCallback::process(const std::shared_ptr
   auto num_read = stream->read(tensor_->flat<unsigned char>().data(),
                                    static_cast<int>(stream->size()));
 
-  if (num_read != stream->size()) {
+  if (static_cast<uint64_t>(num_read) != stream->size()) {

Review comment:
       same here: `stream->size()` returns `size_t`, so we should preferably cast `num_read` to that. Same at line 340.

##########
File path: extensions/tensorflow/TFApplyGraph.cpp
##########
@@ -221,7 +224,7 @@ int64_t TFApplyGraph::TensorWriteCallback::process(const std::shared_ptr<io::Bas
   auto num_wrote = stream->write(reinterpret_cast<uint8_t *>(&tensor_proto_buf[0]),
                                      static_cast<int>(tensor_proto_buf.size()));
 
-  if (num_wrote != tensor_proto_buf.size()) {
+  if (static_cast<uint64_t>(num_wrote) != tensor_proto_buf.size()) {

Review comment:
       same here: `tensor_proto_buf.size()` returns `size_t`, so we should preferably cast `num_wrote` to that.

##########
File path: extensions/tensorflow/TFExtractTopLabels.cpp
##########
@@ -156,7 +162,7 @@ int64_t TFExtractTopLabels::TensorReadCallback::process(const std::shared_ptr<io
   auto num_read = stream->read(reinterpret_cast<uint8_t *>(&tensor_proto_buf[0]),
                                    static_cast<int>(stream->size()));
 
-  if (num_read != stream->size()) {
+  if (static_cast<uint64_t>(num_read) != stream->size()) {

Review comment:
       same here: `stream->size()` returns `size_t`, so we should preferably cast `num_read` to that.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

