hunyadi-dev commented on a change in pull request #845:
URL: https://github.com/apache/nifi-minifi-cpp/pull/845#discussion_r460780614
##########
File path: libminifi/src/utils/OsUtils.cpp
##########
@@ -154,6 +160,81 @@ std::string OsUtils::userIdToUsername(const std::string &uid) {
return name;
}
+uint64_t OsUtils::getMemoryUsage() {
+#ifdef __linux__
+ long resPages;
+ long sharedPages;
+ {
+ std::string ignore;
+ std::ifstream ifs("/proc/self/statm");
+ ifs >> ignore >> resPages >> sharedPages;
+ }
+
+ if (sharedPages > resPages) {
+ throw std::range_error("Shared memory page count ("
+ + std::to_string(sharedPages)
+ + ") should not be larger than resident set size ("
+ + std::to_string(resPages)
+ + "), that includes it"
+ );
+ }
+
+ const long ownPages = resPages - sharedPages;
+ const long pageSize = sysconf(_SC_PAGE_SIZE);
+ return ownPages * pageSize;
+#endif
+
+#ifdef __APPLE__
+ task_basic_info tInfo;
+ mach_msg_type_number_t tInfoCount = TASK_BASIC_INFO_COUNT;
+ if (KERN_SUCCESS != task_info(mach_task_self(), TASK_BASIC_INFO, (task_info_t)&tInfo, &tInfoCount))
+ throw std::runtime_error("Could not get memory info for current process");
+ return tInfo.resident_size;
+#endif
+
+#ifdef _WIN32
+ const auto hCurrentProcess = GetCurrentProcess();
+
+ PSAPI_WORKING_SET_INFORMATION workingSetSizeInfo;
+ QueryWorkingSet(hCurrentProcess, &workingSetSizeInfo, sizeof(workingSetSizeInfo));
+ auto pageCountLimit = workingSetSizeInfo.NumberOfEntries * 2; // twice the size for sure fit next time
Review comment:
Without preallocating a sufficiently large buffer, won't this first `QueryWorkingSet` call always fail with `ERROR_BAD_LENGTH`?

Possibly relevant: should we follow [the logic shown here](https://chromium.googlesource.com/chromium/src/base/+/a899f046da95ba96250cbe3d17dc81338a8ab800/process/process_metrics_win.cc#130)?
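For reference, the linked Chromium code boils down to a grow-and-retry loop: call `QueryWorkingSet`, and on `ERROR_BAD_LENGTH` resize the buffer based on the `NumberOfEntries` the call filled in, then retry. A minimal sketch of that pattern (the function name, initial guess, and retry bound below are my own, not taken from the PR or from Chromium):

```cpp
#include <windows.h>
#include <psapi.h>

#include <cstddef>
#include <stdexcept>
#include <vector>

// Hypothetical helper: returns the number of pages in the current process's
// working set, retrying QueryWorkingSet with a growing buffer.
size_t getWorkingSetPageCount() {
  size_t numEntries = 4096;  // initial guess; grown on ERROR_BAD_LENGTH
  std::vector<char> buffer;
  for (int attempt = 0; attempt < 5; ++attempt) {
    buffer.resize(sizeof(PSAPI_WORKING_SET_INFORMATION)
                  + numEntries * sizeof(PSAPI_WORKING_SET_BLOCK));
    auto* info = reinterpret_cast<PSAPI_WORKING_SET_INFORMATION*>(buffer.data());
    if (QueryWorkingSet(GetCurrentProcess(), info, static_cast<DWORD>(buffer.size()))) {
      return static_cast<size_t>(info->NumberOfEntries);
    }
    if (GetLastError() != ERROR_BAD_LENGTH) {
      throw std::runtime_error("QueryWorkingSet failed");
    }
    // Even on failure, NumberOfEntries holds the required entry count;
    // double it so the next call still fits if the working set grew.
    numEntries = static_cast<size_t>(info->NumberOfEntries) * 2;
  }
  throw std::runtime_error("QueryWorkingSet kept failing with ERROR_BAD_LENGTH");
}
```

The doubling mirrors the `* 2` already in the PR, just applied before the real query rather than after a call whose result was never checked.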
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]