szaszm commented on a change in pull request #1101:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1101#discussion_r646582374



##########
File path: libminifi/src/utils/ProcessCpuUsageTracker.cpp
##########
@@ -126,7 +127,7 @@ double ProcessCpuUsageTracker::getProcessCpuUsageBetweenLastTwoQueries() const {
   uint64_t sys_cpu_times_diff = sys_cpu_times_ - previous_sys_cpu_times_;
   uint64_t user_cpu_times_diff = user_cpu_times_ - previous_user_cpu_times_;
   double percent = static_cast<double>(sys_cpu_times_diff + user_cpu_times_diff) / static_cast<double>(cpu_times_diff);
-  percent = percent / std::thread::hardware_concurrency();
+  percent = percent / (std::max)(static_cast<uint32_t>(1), std::thread::hardware_concurrency());

Review comment:
       `uint32_t{1}` is shorter and avoids the cast.
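       A minimal sketch of the suggestion (the helper name `effective_concurrency` is made up for illustration, not part of the PR): a braced literal `uint32_t{1}` already has the right type, so no `static_cast` is needed, and `std::max` stays parenthesized to dodge the Windows `max()` macro as in the diff.

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>

// Hypothetical helper showing the suggested form: uint32_t{1} is a plain
// braced literal, so no static_cast is needed. std::thread::hardware_concurrency()
// may return 0 when the value is not computable, hence the clamp to 1.
uint32_t effective_concurrency() {
  return (std::max)(uint32_t{1}, std::thread::hardware_concurrency());
}
```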

##########
File path: extensions/http-curl/tests/HTTPHandlers.h
##########
@@ -422,11 +423,15 @@ class HeartbeatHandler : public ServerAwareHandler {
 
   void verify(struct mg_connection *conn) {
     auto post_data = readPayload(conn);
+    if (!isServerRunning()) {
+      return;
+    }
     if (!IsNullOrEmpty(post_data)) {
       rapidjson::Document root;
-      rapidjson::ParseResult ok = root.Parse(post_data.data(), post_data.size());
-      assert(ok);
-      (void)ok;  // unused in release builds
+      rapidjson::ParseResult result = root.Parse(post_data.data(), post_data.size());
+      if (!result) {
+        throw std::runtime_error("JSON parse error: " + std::string(rapidjson::GetParseError_En(result.Code())) + "\n JSON data: " + post_data);

Review comment:
       `fmt::format` or `StringUtils::join_pack` would be nicer IMO. `fmt::format` gives you a readable format string. If you don't like that, `StringUtils::join_pack` can avoid the intermediate allocations and deallocations of the temporary strings that chained `operator+` produces.
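
A minimal sketch of the `join_pack` idea (assumed semantics only, not the actual `StringUtils::join_pack` in MiNiFi): size every piece up front, reserve once, then append, so no temporary strings are created along the way.

```cpp
#include <string>
#include <string_view>

// Sketch of a join_pack-style helper (illustrative, not the MiNiFi
// implementation): sum the lengths of all parts via a fold expression,
// reserve the result buffer once, then append each part in order.
template <typename... Parts>
std::string join_pack(const Parts&... parts) {
  std::string result;
  result.reserve((std::string_view{parts}.size() + ...));
  (result.append(parts), ...);
  return result;
}
```

Compared to chained `operator+`, each `+` in the original `throw` line may allocate a fresh temporary `std::string`; the single-reserve approach does at most one allocation.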




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
