szaszm commented on a change in pull request #1096:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1096#discussion_r644980455
##########
File path: extensions/tensorflow/TFConvertImageToTensor.cpp
##########
@@ -325,7 +325,7 @@ int64_t TFConvertImageToTensor::ImageReadCallback::process(const std::shared_ptr
         auto num_read = stream->read(tensor_->flat<unsigned char>().data(), static_cast<int>(stream->size()));
- if (num_read != stream->size()) {
+ if (static_cast<uint64_t>(num_read) != stream->size()) {
Review comment:
same here: `stream->size()` returns `size_t`, so we should preferably
cast `num_read` to that. Same at line 340.
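To illustrate the point: casting the read count to the exact type of the right-hand side (`size_t`) keeps the comparison well-defined even on platforms where `size_t` is narrower than 64 bits, and any negative error return should be checked before converting to an unsigned type. The `FakeStream` type below is a hypothetical stand-in for the stream API under review, not the actual MiNiFi class:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the stream discussed above: read() returns the
// number of bytes read as a signed int64_t (negative on error), while size()
// returns size_t, so a naive comparison mixes signedness.
struct FakeStream {
  std::vector<unsigned char> data;
  int64_t read(unsigned char* out, int len) {
    const int n = len < static_cast<int>(data.size()) ? len : static_cast<int>(data.size());
    for (int i = 0; i < n; ++i) out[i] = data[i];
    return n;
  }
  size_t size() const { return data.size(); }
};

bool read_all(FakeStream& stream, std::vector<unsigned char>& buf) {
  buf.resize(stream.size());
  const int64_t num_read = stream.read(buf.data(), static_cast<int>(stream.size()));
  if (num_read < 0) return false;  // handle the error case before any unsigned conversion
  // Cast to size_t -- the type of stream.size() -- rather than uint64_t, so the
  // two operands have the same type and no further implicit conversion happens.
  return static_cast<size_t>(num_read) == stream.size();
}
```

(With C++20 available, `std::cmp_equal(num_read, stream.size())` would express the same intent without an explicit cast.)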
##########
File path: extensions/tensorflow/TFApplyGraph.cpp
##########
@@ -221,7 +224,7 @@ int64_t TFApplyGraph::TensorWriteCallback::process(const std::shared_ptr<io::Bas
     auto num_wrote = stream->write(reinterpret_cast<uint8_t *>(&tensor_proto_buf[0]), static_cast<int>(tensor_proto_buf.size()));
- if (num_wrote != tensor_proto_buf.size()) {
+ if (static_cast<uint64_t>(num_wrote) != tensor_proto_buf.size()) {
Review comment:
same here: `tensor_proto_buf.size()` returns `size_t`, so we should preferably
cast `num_wrote` to that.
##########
File path: extensions/tensorflow/TFExtractTopLabels.cpp
##########
@@ -156,7 +162,7 @@ int64_t TFExtractTopLabels::TensorReadCallback::process(const std::shared_ptr<io
     auto num_read = stream->read(reinterpret_cast<uint8_t *>(&tensor_proto_buf[0]), static_cast<int>(stream->size()));
- if (num_read != stream->size()) {
+ if (static_cast<uint64_t>(num_read) != stream->size()) {
Review comment:
same here: `stream->size()` returns `size_t`, so we should preferably
cast `num_read` to that.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]