This is an automated email from the ASF dual-hosted git repository.
alexey pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/kudu.git
The following commit(s) were added to refs/heads/master by this push:
new 9e4664d [util] fix another lock contention in MaintenanceManager
9e4664d is described below
commit 9e4664d44ca994484d79d970e7c7e929d0dba055
Author: Alexey Serbin <[email protected]>
AuthorDate: Thu Jan 7 16:40:45 2021 -0800
[util] fix another lock contention in MaintenanceManager
I had a chance to look at stack traces in tablet servers' diagnostic log files
on a Kudu cluster with a high data ingest rate. Stack snapshots were captured
periodically every 30 seconds, and the pattern below was present in every
consecutive snapshot for several hours.
tids=[4016]
0x7f64f36b05e0 <unknown>
0xa116c6
kudu::tablet::BudgetedCompactionPolicy::RunApproximation()
0xa129c9 kudu::tablet::BudgetedCompactionPolicy::PickRowSets()
0x9c8d80 kudu::tablet::Tablet::UpdateCompactionStats()
0x9ec848 kudu::tablet::CompactRowSetsOp::UpdateStats()
0x1b3de5c kudu::MaintenanceManager::FindBestOp()
0x1b3f3c5 kudu::MaintenanceManager::RunSchedulerThread()
0x1b86014 kudu::Thread::SuperviseThread()
0x7f64f36a8e25 start_thread
0x7f64f176f34d __clone
tids=[48325,48324,48323]
0x7f64f36b05e0 <unknown>
0x7f64f36af42b __lll_lock_wait
0x7f64f36aadcb _L_lock_812
0x7f64f36aac98 __GI___pthread_mutex_lock
0x1b546fd kudu::Mutex::Acquire()
0x1b42913 kudu::MaintenanceManager::LaunchOp()
0x1b929cd kudu::FunctionRunnable::Run()
0x1b8fa87 kudu::ThreadPool::DispatchThread()
0x1b86014 kudu::Thread::SuperviseThread()
0x7f64f36a8e25 start_thread
0x7f64f176f34d __clone
Thread 4016 above had acquired the MaintenanceManager::lock_ mutex and was
computing the scores for compaction candidates. The three other threads
48325, 48324, and 48323 were waiting to acquire the same mutex upon returning
from the MaintenanceManager::LaunchOp() method. These three maintenance
threads were the only threads in the 'MaintenanceMgr' thread pool, i.e. no
other threads were available to perform scheduled compaction operations.
Because they were blocked after completing their scheduled maintenance
operations, they could not pick up any new compaction tasks already scheduled
by the former thread. The more compaction candidates there were, the longer
the maintenance threads stayed blocked waiting on the compaction score
computation performed by the former thread.
To relieve the contention, I updated the code to use separate mutexes for the
op-specific condition variables and the scheduler's condition variable. Now,
each op-specific condition variable uses the
MaintenanceManager::running_instances_lock_ mutex, which is also used to guard
access to the MaintenanceManager::running_instances_ container.
This patch also fixes reporting on the duration of compaction operations.
Before this patch, the reported timings for compaction operations could be
inflated under lock contention, especially under the conditions producing
stacks like those shown above.
This patch doesn't contain a test to evaluate the performance impact of the
change; I'm planning to add one in a separate changelist.
Change-Id: I63b12dd3641ef655f8fcbbad8d8ac515d874c0fb
Reviewed-on: http://gerrit.cloudera.org:8080/16934
Tested-by: Kudu Jenkins
Reviewed-by: Andrew Wong <[email protected]>
---
src/kudu/util/maintenance_manager.cc | 249 +++++++++++++++++++----------------
src/kudu/util/maintenance_manager.h | 57 +++++---
2 files changed, 174 insertions(+), 132 deletions(-)
diff --git a/src/kudu/util/maintenance_manager.cc b/src/kudu/util/maintenance_manager.cc
index 536fedb..75ffd8b 100644
--- a/src/kudu/util/maintenance_manager.cc
+++ b/src/kudu/util/maintenance_manager.cc
@@ -138,9 +138,9 @@ void MaintenanceOpStats::Clear() {
MaintenanceOp::MaintenanceOp(string name, IOUsage io_usage)
: name_(std::move(name)),
+ io_usage_(io_usage),
running_(0),
- cancel_(false),
- io_usage_(io_usage) {
+ cancel_(false) {
}
MaintenanceOp::~MaintenanceOp() {
@@ -173,18 +173,20 @@ const MaintenanceManager::Options MaintenanceManager::kDefaultOptions = {
MaintenanceManager::MaintenanceManager(const Options& options,
string server_uuid)
- : server_uuid_(std::move(server_uuid)),
- num_threads_(options.num_threads <= 0 ?
- FLAGS_maintenance_manager_num_threads : options.num_threads),
- cond_(&lock_),
- shutdown_(false),
- polling_interval_ms_(options.polling_interval_ms <= 0 ?
- FLAGS_maintenance_manager_polling_interval_ms :
- options.polling_interval_ms),
- running_ops_(0),
- completed_ops_count_(0),
- rand_(GetRandomSeed32()),
- memory_pressure_func_(&process_memory::UnderMemoryPressure) {
+ : server_uuid_(std::move(server_uuid)),
+ num_threads_(options.num_threads > 0
+ ? options.num_threads
+ : FLAGS_maintenance_manager_num_threads),
+ polling_interval_(MonoDelta::FromMilliseconds(
+ options.polling_interval_ms > 0
+ ? options.polling_interval_ms
+ : FLAGS_maintenance_manager_polling_interval_ms)),
+ cond_(&lock_),
+ shutdown_(false),
+ running_ops_(0),
+ completed_ops_count_(0),
+ rand_(GetRandomSeed32()),
+ memory_pressure_func_(&process_memory::UnderMemoryPressure) {
CHECK_OK(ThreadPoolBuilder("MaintenanceMgr")
.set_min_threads(num_threads_)
.set_max_threads(num_threads_)
@@ -201,10 +203,9 @@ MaintenanceManager::~MaintenanceManager() {
Status MaintenanceManager::Start() {
CHECK(!monitor_thread_);
- RETURN_NOT_OK(Thread::Create("maintenance", "maintenance_scheduler",
- [this]() { this->RunSchedulerThread(); },
- &monitor_thread_));
- return Status::OK();
+ return Thread::Create("maintenance", "maintenance_scheduler",
+ [this]() { this->RunSchedulerThread(); },
+ &monitor_thread_);
}
void MaintenanceManager::Shutdown() {
@@ -238,7 +239,7 @@ void MaintenanceManager::MergePendingOpRegistrationsUnlocked() {
}
for (auto& op_and_stats : ops_to_register) {
auto* op = op_and_stats.first;
- op->cond_.reset(new ConditionVariable(&lock_));
+ op->cond_.reset(new ConditionVariable(&running_instances_lock_));
VLOG_AND_TRACE_WITH_PREFIX("maintenance", 1) << "Registered " << op->name();
}
ops_.insert(ops_to_register.begin(), ops_to_register.end());
@@ -261,22 +262,26 @@ void MaintenanceManager::RegisterOp(MaintenanceOp* op) {
}
void MaintenanceManager::UnregisterOp(MaintenanceOp* op) {
- {
- std::lock_guard<Mutex> guard(lock_);
- CHECK(op->manager_.get() == this) << "Tried to unregister " << op->name()
- << ", but it is not currently registered with this maintenance manager.";
+ CHECK(op->manager_.get() == this) << "Tried to unregister " << op->name()
+ << ", but it is not currently registered with this maintenance manager.";
+ op->CancelAndDisable();
- // While the op is running, wait for it to be finished.
+ // While the op is running, wait for it to be finished.
+ {
+ std::lock_guard<Mutex> guard(running_instances_lock_);
if (op->running_ > 0) {
VLOG_AND_TRACE_WITH_PREFIX("maintenance", 1)
<< Substitute("Waiting for op $0 to finish so we can unregister it",
op->name());
}
- op->CancelAndDisable();
while (op->running_ > 0) {
op->cond_->Wait();
}
- // Remove the op from 'ops_', and if it wasn't there, erase it from
- // 'ops_pending_registration_'.
+ }
+
+ // Remove the op from 'ops_', and if it wasn't there, erase it from
+ // 'ops_pending_registration_'.
+ {
+ std::lock_guard<Mutex> guard(lock_);
if (ops_.erase(op) == 0) {
std::lock_guard<simple_spinlock> l(registration_lock_);
const auto num_erased_ops = ops_pending_registration_.erase(op);
@@ -299,76 +304,75 @@ void MaintenanceManager::RunSchedulerThread() {
return;
}
- MonoDelta polling_interval = MonoDelta::FromMilliseconds(polling_interval_ms_);
-
- std::unique_lock<Mutex> guard(lock_);
-
// Set to true if the scheduler runs and finds that there is no work to do.
bool prev_iter_found_no_work = false;
while (true) {
- // Upon each iteration, we should have dropped and reacquired 'lock_'.
- // Register any ops that may have been buffered for registration while the
- // lock was last held.
- MergePendingOpRegistrationsUnlocked();
-
- // We'll keep sleeping if:
- // 1) there are no free threads available to perform a maintenance op.
- // or 2) we just tried to schedule an op but found nothing to run.
- // However, if it's time to shut down, we want to do so immediately.
- while (CouldNotLaunchNewOp(prev_iter_found_no_work)) {
- cond_.WaitFor(polling_interval);
- prev_iter_found_no_work = false;
- }
- if (shutdown_) {
- VLOG_AND_TRACE_WITH_PREFIX("maintenance", 1) << "Shutting down maintenance manager.";
- return;
+ MaintenanceOp* op = nullptr;
+ string op_note;
+ {
+ std::unique_lock<Mutex> guard(lock_);
+ // Upon each iteration, we should have dropped and reacquired 'lock_'.
+ // Register any ops that may have been buffered for registration while the
+ // lock was last held.
+ MergePendingOpRegistrationsUnlocked();
+
+ // We'll keep sleeping if:
+ // 1) there are no free threads available to perform a maintenance op.
+ // or 2) we just tried to schedule an op but found nothing to run.
+ // However, if it's time to shut down, we want to do so immediately.
+ while (CouldNotLaunchNewOp(prev_iter_found_no_work)) {
+ cond_.WaitFor(polling_interval_);
+ prev_iter_found_no_work = false;
+ }
+ if (shutdown_) {
+ VLOG_AND_TRACE_WITH_PREFIX("maintenance", 1) << "Shutting down maintenance manager.";
+ return;
+ }
+
+ if (PREDICT_FALSE(FLAGS_maintenance_manager_inject_latency_ms > 0)) {
+ LOG(WARNING) << "Injecting " << FLAGS_maintenance_manager_inject_latency_ms
+ << "ms of latency into maintenance thread";
+ SleepFor(MonoDelta::FromMilliseconds(FLAGS_maintenance_manager_inject_latency_ms));
+ }
+
+ // Find the best op. If we found no work to do, then we should sleep
+ // before trying again to schedule. Otherwise, we can go right into trying
+ // to find the next op.
+ {
+ auto best_op_and_why = FindBestOp();
+ op = best_op_and_why.first;
+ op_note = std::move(best_op_and_why.second);
+ }
+ if (op) {
+ std::lock_guard<Mutex> guard(running_instances_lock_);
+ IncreaseOpCount(op);
+ prev_iter_found_no_work = false;
+ } else {
+ VLOG_AND_TRACE_WITH_PREFIX("maintenance", 2)
+ << "no maintenance operations look worth doing";
+ prev_iter_found_no_work = true;
+ continue;
+ }
}
- if (PREDICT_FALSE(FLAGS_maintenance_manager_inject_latency_ms > 0)) {
- LOG(WARNING) << "Injecting " << FLAGS_maintenance_manager_inject_latency_ms
- << "ms of latency into maintenance thread";
- SleepFor(MonoDelta::FromMilliseconds(FLAGS_maintenance_manager_inject_latency_ms));
+ // Prepare the maintenance operation.
+ DCHECK(op);
+ if (!op->Prepare()) {
+ LOG_WITH_PREFIX(INFO) << "Prepare failed for " << op->name()
+ << ". Re-running scheduler.";
+ std::lock_guard<Mutex> guard(running_instances_lock_);
+ DecreaseOpCountAndNotifyWaiters(op);
+ continue;
}
- // If we found no work to do, then we should sleep before trying again to schedule.
- // Otherwise, we can go right into trying to find the next op.
- prev_iter_found_no_work = !FindAndLaunchOp(&guard);
+ LOG_AND_TRACE_WITH_PREFIX("maintenance", INFO)
+ << Substitute("Scheduling $0: $1", op->name(), op_note);
+ // Submit the maintenance operation to be run on the "MaintenanceMgr" pool.
+ CHECK_OK(thread_pool_->Submit([this, op]() { this->LaunchOp(op); }));
}
}
-bool MaintenanceManager::FindAndLaunchOp(std::unique_lock<Mutex>* guard) {
- // Find the best op.
- auto best_op_and_why = FindBestOp();
- auto* op = best_op_and_why.first;
- const auto& note = best_op_and_why.second;
-
- if (!op) {
- VLOG_AND_TRACE_WITH_PREFIX("maintenance", 2)
- << "No maintenance operations look worth doing.";
- return false;
- }
-
- // Prepare the maintenance operation.
- IncreaseOpCount(op);
- guard->unlock();
- bool ready = op->Prepare();
- guard->lock();
- if (!ready) {
- LOG_WITH_PREFIX(INFO) << "Prepare failed for " << op->name()
- << ". Re-running scheduler.";
- DecreaseOpCount(op);
- op->cond_->Signal();
- return true;
- }
-
- LOG_AND_TRACE_WITH_PREFIX("maintenance", INFO)
- << Substitute("Scheduling $0: $1", op->name(), note);
- // Run the maintenance operation.
- CHECK_OK(thread_pool_->Submit([this, op]() { this->LaunchOp(op); }));
- return true;
-}
-
// Finding the best operation goes through some filters:
// - If there's an Op that we can run quickly that frees log retention, run it
// (e.g. GCing WAL segments).
@@ -529,7 +533,7 @@ double MaintenanceManager::AdjustedPerfScore(double perf_improvement,
}
void MaintenanceManager::LaunchOp(MaintenanceOp* op) {
- int64_t thread_id = Thread::CurrentThreadId();
+ const auto thread_id = Thread::CurrentThreadId();
OpInstance op_instance;
op_instance.thread_id = thread_id;
op_instance.name = op->name();
@@ -541,21 +545,30 @@ void MaintenanceManager::LaunchOp(MaintenanceOp* op) {
}
SCOPED_CLEANUP({
- op->RunningGauge()->Decrement();
+ // To avoid timing distortions in case of lock contention, it's important
+ // to take a snapshot of 'now' right after the operation completed
+ // before acquiring any locks in the code below.
+ const auto now = MonoTime::Now();
- std::lock_guard<Mutex> l(lock_);
+ op->RunningGauge()->Decrement();
{
std::lock_guard<Mutex> lock(running_instances_lock_);
running_instances_.erase(thread_id);
+
+ op_instance.duration = now - op_instance.start_mono_time;
+ op->DurationHistogram()->Increment(op_instance.duration.ToMilliseconds());
+
+ DecreaseOpCountAndNotifyWaiters(op);
+ }
+ cond_.Signal(); // wake up the scheduler
+
+ // Add corresponding entry into the completed_ops_ container.
+ {
+ std::lock_guard<simple_spinlock> lock(completed_ops_lock_);
+ completed_ops_[completed_ops_count_ % completed_ops_.size()] =
+ std::move(op_instance);
+ ++completed_ops_count_;
}
- op_instance.duration = MonoTime::Now() - op_instance.start_mono_time;
- completed_ops_[completed_ops_count_ % completed_ops_.size()] = op_instance;
- completed_ops_count_++;
-
- op->DurationHistogram()->Increment(op_instance.duration.ToMilliseconds());
- DecreaseOpCount(op);
- op->cond_->Signal();
- cond_.Signal(); // Wake up scheduler.
});
scoped_refptr<Trace> trace(new Trace);
@@ -574,12 +587,13 @@ void MaintenanceManager::LaunchOp(MaintenanceOp* op) {
trace->MetricsAsJSON());
}
-void MaintenanceManager::GetMaintenanceManagerStatusDump(MaintenanceManagerStatusPB* out_pb) {
+void MaintenanceManager::GetMaintenanceManagerStatusDump(
+ MaintenanceManagerStatusPB* out_pb) {
DCHECK(out_pb != nullptr);
std::lock_guard<Mutex> guard(lock_);
MergePendingOpRegistrationsUnlocked();
for (const auto& val : ops_) {
- MaintenanceManagerStatusPB_MaintenanceOpPB* op_pb = out_pb->add_registered_operations();
+ auto* op_pb = out_pb->add_registered_operations();
MaintenanceOp* op(val.first);
const MaintenanceOpStats& stats(val.second);
op_pb->set_name(op->name());
@@ -607,13 +621,18 @@ void MaintenanceManager::GetMaintenanceManagerStatu
}
// The latest completed op will be dumped at first.
- for (int n = 1; n <= completed_ops_.size(); n++) {
- int64_t i = completed_ops_count_ - n;
- if (i < 0) break;
- const auto& completed_op = completed_ops_[i % completed_ops_.size()];
-
- if (!completed_op.name.empty()) {
- *out_pb->add_completed_operations() = completed_op.DumpToPB();
+ {
+ std::lock_guard<simple_spinlock> lock(completed_ops_lock_);
+ for (int n = 1; n <= completed_ops_.size(); ++n) {
+ if (completed_ops_count_ < n) {
+ break;
+ }
+ size_t i = completed_ops_count_ - n;
+ const auto& completed_op = completed_ops_[i % completed_ops_.size()];
+
+ if (!completed_op.name.empty()) {
+ *out_pb->add_completed_operations() = completed_op.DumpToPB();
+ }
}
}
}
@@ -623,21 +642,25 @@ string MaintenanceManager::LogPrefix() const {
}
bool MaintenanceManager::HasFreeThreads() {
- return num_threads_ - running_ops_ > 0;
+ return num_threads_ > running_ops_;
}
bool MaintenanceManager::CouldNotLaunchNewOp(bool prev_iter_found_no_work) {
+ lock_.AssertAcquired();
return (!HasFreeThreads() || prev_iter_found_no_work ||
disabled_for_tests()) && !shutdown_;
}
-void MaintenanceManager::IncreaseOpCount(MaintenanceOp *op) {
- op->running_++;
- running_ops_++;
+void MaintenanceManager::IncreaseOpCount(MaintenanceOp* op) {
+ running_instances_lock_.AssertAcquired();
+ ++running_ops_;
+ ++op->running_;
}
-void MaintenanceManager::DecreaseOpCount(MaintenanceOp *op) {
- op->running_--;
- running_ops_--;
+void MaintenanceManager::DecreaseOpCountAndNotifyWaiters(MaintenanceOp* op) {
+ running_instances_lock_.AssertAcquired();
+ --running_ops_;
+ --op->running_;
+ op->cond_->Signal();
}
} // namespace kudu
diff --git a/src/kudu/util/maintenance_manager.h b/src/kudu/util/maintenance_manager.h
index faf5382..760eb8e 100644
--- a/src/kudu/util/maintenance_manager.h
+++ b/src/kudu/util/maintenance_manager.h
@@ -17,6 +17,8 @@
#pragma once
+#include <atomic>
+#include <cstddef>
#include <cstdint>
#include <functional>
#include <map>
@@ -32,7 +34,6 @@
#include "kudu/gutil/macros.h"
#include "kudu/gutil/ref_counted.h"
-#include "kudu/util/atomic.h"
#include "kudu/util/condition_variable.h"
#include "kudu/util/locks.h"
#include "kudu/util/monotime.h"
@@ -229,7 +230,7 @@ class MaintenanceOp {
// Returns the gauge for this op that tracks when this op is running. Cannot be NULL.
virtual scoped_refptr<AtomicGauge<uint32_t>> RunningGauge() const = 0;
- uint32_t running() { return running_; }
+ uint32_t running() const { return running_; }
const std::string& name() const { return name_; }
@@ -237,7 +238,7 @@ class MaintenanceOp {
// Return true if the operation has been cancelled due to a pending Unregister().
bool cancelled() const {
- return cancel_.Load();
+ return cancel_;
}
// Cancel this operation, which prevents new instances of it from being scheduled
@@ -245,7 +246,7 @@ class MaintenanceOp {
// optionally poll 'cancelled()' on a periodic basis to know if they should abort a
// lengthy operation in the middle of Perform().
void CancelAndDisable() {
- cancel_.Store(true);
+ cancel_ = true;
}
protected:
@@ -259,25 +260,28 @@ class MaintenanceOp {
// The name of the operation. Op names must be unique.
const std::string name_;
- // The number of instances of this op that are currently running.
- uint32_t running_;
+ IOUsage io_usage_;
+
+ // The number of instances of this op that are currently running. The field
+ // is updated by MaintenanceManager which guards the access as needed. If the
+ // access isn't guarded by the 'MaintenanceManager::running_instances_lock_',
+ // use the 'running()' accessor to read the value lock-free.
+ std::atomic<uint32_t> running_;
// Set when we are trying to unregister the maintenance operation.
// Ongoing operations could read this boolean and cancel themselves.
// New operations will not be scheduled when this boolean is set.
- AtomicBool cancel_;
+ std::atomic<bool> cancel_;
// Condition variable which the UnregisterOp function can wait on.
//
- // Note: 'cond_' is used with the MaintenanceManager's mutex. As such,
- // it only exists when the op is registered.
+ // Note: 'cond_' is used with the MaintenanceManager::running_instances_lock_
+ // mutex. As such, it only exists when the op is registered.
std::unique_ptr<ConditionVariable> cond_;
// The MaintenanceManager with which this op is registered, or null
// if it is not registered.
std::shared_ptr<MaintenanceManager> manager_;
-
- IOUsage io_usage_;
};
struct MaintenanceOpComparator {
@@ -343,8 +347,6 @@ class MaintenanceManager : public std::enable_shared_from_this<MaintenanceManage
void RunSchedulerThread();
- bool FindAndLaunchOp(std::unique_lock<Mutex>* guard);
-
// Find the best op, or null if there is nothing we want to run.
//
// Returns the op, as well as a string explanation of why that op was chosen,
@@ -364,7 +366,7 @@ class MaintenanceManager : public std::enable_shared_from_this<MaintenanceManage
bool CouldNotLaunchNewOp(bool prev_iter_found_no_work);
void IncreaseOpCount(MaintenanceOp *op);
- void DecreaseOpCount(MaintenanceOp *op);
+ void DecreaseOpCountAndNotifyWaiters(MaintenanceOp *op);
// Adds ops in 'ops_pending_registration_' to 'ops_'. Must be called while
// 'lock_' is held.
@@ -372,6 +374,7 @@ class MaintenanceManager : public std::enable_shared_from_this<MaintenanceManage
const std::string server_uuid_;
const int32_t num_threads_;
+ const MonoDelta polling_interval_;
// Ops for which RegisterOp() has been called, but that have not yet been
// added to 'ops_'. Since adding to 'ops_' requires taking 'lock_', rather
@@ -381,18 +384,34 @@ class MaintenanceManager : public std::enable_shared_from_this<MaintenanceManage
simple_spinlock registration_lock_;
OpMapType ops_pending_registration_;
- OpMapType ops_; // Registered operations.
+ // Registered operations: the access guarded by the 'lock_' field.
+ OpMapType ops_;
+
+ // Scheduler's lock used to guard the access to the 'ops_' container and
+ // notify the scheduler via the 'cond_' condition variable.
+ // This lock should be used very sparingly, since it's taken when finding
+ // the best operation to schedule, and the latter might be time consuming
+ // especially if there are many maintenance operation candidates.
Mutex lock_;
+ // Condition variable used to wake up/notify the scheduler right after
+ // performing scheduled maintenance operation (based on 'lock_').
+ ConditionVariable cond_;
+
scoped_refptr<kudu::Thread> monitor_thread_;
std::unique_ptr<ThreadPool> thread_pool_;
- ConditionVariable cond_;
bool shutdown_;
- int32_t polling_interval_ms_;
- int32_t running_ops_;
+
+ // This field is atomic because it's written under 'running_instances_lock_'
+ // and read when the latter lock isn't held.
+ std::atomic<int32_t> running_ops_;
+
+ // Lock to guard access to 'completed_ops_' and 'completed_ops_count_'.
+ simple_spinlock completed_ops_lock_;
// Vector used as a circular buffer for recently completed ops. Elements need to be added at
// the completed_ops_count_ % the vector's size and then the count needs to be incremented.
std::vector<OpInstance> completed_ops_;
- int64_t completed_ops_count_;
+ size_t completed_ops_count_;
+
Random rand_;
// Function which should return true if the server is under global memory pressure.