Adar Dembo has posted comments on this change.

Change subject: rpc: LIFO service queue
......................................................................


Patch Set 7:

(22 comments)

http://gerrit.cloudera.org:8080/#/c/2938/7/build-support/tsan-suppressions.txt
File build-support/tsan-suppressions.txt:

Line 26: # local_addr_space_init(), suppressing init() suppresses both races.
Apparently this comment is no longer correct. Can you update it?


http://gerrit.cloudera.org:8080/#/c/2938/7/src/kudu/rpc/service_queue-test.cc
File src/kudu/rpc/service_queue-test.cc:

Line 34: using std::vector;
Nit: unsorted.


Line 62:     InboundCall * call = new InboundCall(nullptr);
Nit: InboundCall* call


Line 66:       LOG(INFO) << "queue, exit";
Nit: queue full, exit


Line 90:     call.reset(nullptr);
Didn't know you needed to provide an argument.


Line 96:   vector<shared_ptr<boost::thread>> producers;
Since this is a new test, could we use std::thread instead?


Line 99:   for (int i = 0; i< FLAGS_num_producers; i++) {
Nit: i < FLAGS_


Line 120:     total_queue_len = queue.estimated_queue_length();
        :     total_idle_workers = queue.estimated_idle_worker_count();
What's the point of calling these inside the loop? Each iteration just 
overwrites the values assigned by the iteration before it.


http://gerrit.cloudera.org:8080/#/c/2938/7/src/kudu/rpc/service_queue.cc
File src/kudu/rpc/service_queue.cc:

Line 36: bool LifoServiceQueue::BlockingGet(std::unique_ptr<InboundCall> *out) {
Nit: type* out


Line 39:     lock_guard<simple_spinlock> l(&lock_);
Nit: technically we only need to hold the lock for the consumers_ mutation.


Line 41:     consumers_.emplace_back(consumer);
AFAICT, consumers_ just accumulates entries until the entire queue is 
destroyed, right? What's the point? Why not just have the bare pointer to 
ConsumerState in the TLS? Would that constitute a memory leak?


Line 53:       if (shutdown_) {
Nit: maybe PREDICT_FALSE here?


Line 59:     InboundCall* call = consumer->Wait();
Maybe add a comment saying that if 'call' was null, we're shutting down the 
queue, and we'll loop back to check shutdown_.


Line 80:     // Notify condition var(and wake up consumer thread) takes time,
Really? Somewhat surprised by that.


Line 89:     DCHECK_EQ(queue_.size(), max_queue_size_);
This seems somewhat odd given the condition on L87. Defensive programming, I 
guess?


Line 91:     --it;
Should probably enforce that max_queue_size > 0. A zero max_queue_size would 
be dumb, but would lead to weirdness here.


Line 110:     cs->Post(nullptr);
Here we're not posting outside of lock_ the way we do in Put().


http://gerrit.cloudera.org:8080/#/c/2938/7/src/kudu/rpc/service_queue.h
File src/kudu/rpc/service_queue.h:

Line 51: // Each consumer thread will has its own lock and condition variable. If a
Nit: probably don't need the word "will" here.


Line 58: // - the worker who was most recently busy is the one which we be selected for
Nit: which will be


Line 76:   bool BlockingGet(std::unique_ptr<InboundCall> *out);
Nit: std::unique_ptr<InboundCall>* out


Line 105:     ANNOTATE_IGNORE_READS_BEGIN();
The goal here being to return the size (or worker count, below) without 
acquiring a lock?


Line 207:   std::vector<std::unique_ptr<ConsumerState> > consumers_;
Nit: C++11, so you can do >> to close the template.


-- 
To view, visit http://gerrit.cloudera.org:8080/2938
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: Ibd1eb677dd52f89683eb648b42918fcf51437215
Gerrit-PatchSet: 7
Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-Owner: Todd Lipcon <[email protected]>
Gerrit-Reviewer: Adar Dembo <[email protected]>
Gerrit-Reviewer: Binglin Chang <[email protected]>
Gerrit-Reviewer: Henry Robinson <[email protected]>
Gerrit-Reviewer: Kudu Jenkins
Gerrit-Reviewer: Todd Lipcon <[email protected]>
Gerrit-HasComments: Yes
