bakaid commented on a change in pull request #596: MINIFICPP-925 - Fix TailFile hang on long lines
URL: https://github.com/apache/nifi-minifi-cpp/pull/596#discussion_r297794330
 
 

 ##########
 File path: extensions/standard-processors/processors/LogAttribute.h
 ##########
 @@ -90,26 +90,19 @@ class LogAttribute : public core::Processor {
   class ReadCallback : public InputStreamCallback {
    public:
     ReadCallback(uint64_t size)
-        : read_size_(0) {
-      buffer_size_ = size;
-      buffer_ = new uint8_t[buffer_size_];
-    }
-    ~ReadCallback() {
-      if (buffer_)
-        delete[] buffer_;
+        : buffer_(size)  {
     }
     int64_t process(std::shared_ptr<io::BaseStream> stream) {
-      int64_t ret = 0;
-      ret = stream->read(buffer_, buffer_size_);
-      if (!stream)
-        read_size_ = stream->getSize();
-      else
-        read_size_ = buffer_size_;
-      return ret;
+      if (buffer_.size() == 0U) {
+        return 0U;
+      }
+      int ret = stream->read(buffer_.data(), buffer_.size());
 
 Review comment:
  I agree with you that chunking is generally a better solution for this, if 
for no other reason than the lower memory requirement. In this case, since 
we are only dealing with streams no larger than 1 MB, this benefit is not that 
pronounced.
   
  I think adding the log you mentioned is a good idea; I don't know why I 
didn't do it, but I will now.
   
  Just to make sure I understand correctly: we expect to be able to read 
exactly as many bytes from the stream as flow_file->getSize() returns 
(whether in one piece or in chunks), and if that is not the case, it means 
there is a bug somewhere.
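  To illustrate the understanding above, here is a minimal chunked-read sketch. This is not the MiNiFi API: `read_all_chunked` and the `stream_read` callable are hypothetical names standing in for `stream->read`, and `expected_size` stands in for `flow_file->getSize()`. A total short of `expected_size` is exactly the "bug somewhere" condition worth logging.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical helper, NOT the MiNiFi API: reads `expected_size` bytes
// (what flow_file->getSize() would return) from `stream_read` (standing in
// for stream->read) in fixed-size chunks, so peak extra memory stays at
// `chunk` bytes instead of the whole stream. Returns the total bytes read;
// anything less than `expected_size` indicates the bug discussed above.
template <typename ReadFn>
int64_t read_all_chunked(ReadFn stream_read, uint64_t expected_size,
                         std::vector<uint8_t>& out, size_t chunk = 4096) {
  out.clear();
  out.reserve(expected_size);
  uint64_t total = 0;
  std::vector<uint8_t> buf(chunk);
  while (total < expected_size) {
    const size_t want =
        static_cast<size_t>(std::min<uint64_t>(chunk, expected_size - total));
    const int64_t got = stream_read(buf.data(), want);
    if (got <= 0) {
      break;  // EOF or error before expected_size bytes: a bug indicator
    }
    out.insert(out.end(), buf.begin(), buf.begin() + got);
    total += static_cast<uint64_t>(got);
  }
  return static_cast<int64_t>(total);
}
```

  The caller can then compare the returned total against the expected size and log a mismatch, which covers both the one-piece and the chunked case.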

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
