zhannngchen commented on code in PR #12716:
URL: https://github.com/apache/doris/pull/12716#discussion_r976173550
##########
be/src/runtime/load_channel.h:
##########
@@ -174,20 +171,20 @@ inline std::ostream& operator<<(std::ostream& os, const LoadChannel& load_channe
}
template <typename TabletWriterAddResult>
-Status LoadChannel::handle_mem_exceed_limit(bool force, TabletWriterAddResult* response) {
- // lock so that only one thread can check mem limit
- std::lock_guard<std::mutex> l(_lock);
- if (!(force || _mem_tracker->limit_exceeded())) {
- return Status::OK();
- }
+Status LoadChannel::handle_mem_exceed_limit(TabletWriterAddResult* response) {
+ bool found = false;
+ std::shared_ptr<TabletsChannel> channel;
+ {
+ // lock so that only one thread can check mem limit
+ std::lock_guard<std::mutex> l(_lock);
- if (!force) {
-        LOG(INFO) << "reducing memory of " << *this << " because its mem consumption "
-                  << _mem_tracker->consumption() << " has exceeded limit " << _mem_tracker->limit();
+ LOG(INFO) << "reducing memory of " << *this
+ << " ,mem consumption: " << _mem_tracker->consumption();
Review Comment:
In the load channel, another log with its channel info will be printed.
##########
be/src/runtime/load_channel_mgr.h:
##########
@@ -76,10 +77,17 @@ class LoadChannelMgr {
std::mutex _lock;
// load id -> load channel
std::unordered_map<UniqueId, std::shared_ptr<LoadChannel>> _load_channels;
+ std::shared_ptr<LoadChannel> _reduce_memory_channel = nullptr;
Cache* _last_success_channel = nullptr;
// check the total load channel mem consumption of this Backend
std::shared_ptr<MemTrackerLimiter> _mem_tracker;
+ int64_t _load_process_soft_limit = -1;
+
+ // If hard limit reached, one thread will trigger load channel flush,
+ // other threads should wait on the condition variable.
+ bool _hard_limit_reached = false;
Review Comment:
`_reduce_memory_channel` is used to indicate that the soft limit has been triggered, so that multiple threads do not all start flushing due to the soft limit.
`_hard_limit_reached` indicates that all threads should wait here until some memory has been released.
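To illustrate the two-flag protocol described above, here is a minimal, self-contained sketch (class and member names are invented for illustration, not the actual Doris code): at the soft limit only one thread becomes the flusher while others proceed, and at the hard limit one thread flushes while every other thread blocks on a condition variable until memory is released.

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Hypothetical sketch of the soft/hard limit coordination pattern.
// "flush in progress" plays the role of _reduce_memory_channel;
// hard_limit_reached_ mirrors the _hard_limit_reached flag.
class MemLimiter {
public:
    // Called by a load thread before appending data.
    // Returns true if this thread should perform the flush.
    bool on_write(int64_t consumption) {
        std::unique_lock<std::mutex> l(lock_);
        if (consumption >= hard_limit_) {
            if (!hard_limit_reached_) {
                hard_limit_reached_ = true;  // this thread becomes the flusher
                return true;
            }
            // All other threads park here until the flusher releases memory.
            cv_.wait(l, [this] { return !hard_limit_reached_; });
            return false;
        }
        if (consumption >= soft_limit_ && !soft_flush_in_progress_) {
            soft_flush_in_progress_ = true;  // only one soft-limit flusher
            return true;
        }
        return false;  // soft flush already running elsewhere; keep writing
    }

    // Called by the flushing thread once memory has been released.
    void on_flush_done(bool was_hard_limit) {
        std::lock_guard<std::mutex> l(lock_);
        if (was_hard_limit) {
            hard_limit_reached_ = false;
            cv_.notify_all();  // wake every thread waiting at the hard limit
        }
        soft_flush_in_progress_ = false;
    }

private:
    std::mutex lock_;
    std::condition_variable cv_;
    bool soft_flush_in_progress_ = false;
    bool hard_limit_reached_ = false;
    int64_t soft_limit_ = 80;   // placeholder thresholds
    int64_t hard_limit_ = 100;
};
```

The key design point is that the soft limit never blocks writers (only elects a single flusher), whereas the hard limit stops all writers until the flusher signals that memory is available again.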
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]