Michael Ho has posted comments on this change. ( 
http://gerrit.cloudera.org:8080/12987 )

Change subject: IMPALA-8341: Data cache for remote reads
......................................................................


Patch Set 1:

(71 comments)

This new patch changes the configuration string to use a uniform capacity quota 
for all directories.

http://gerrit.cloudera.org:8080/#/c/12987/1//COMMIT_MSG
Commit Message:

http://gerrit.cloudera.org:8080/#/c/12987/1//COMMIT_MSG@24
PS1, Line 24: is a tuple of (file's name, file's modification time, file offset)
> So this means that you can get a partial cache hit if the offsets match but
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/exec/hdfs-scan-node-base.cc
File be/src/exec/hdfs-scan-node-base.cc:

http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/exec/hdfs-scan-node-base.cc@948
PS1, Line 948:     
data_cache_miss_bytes_->Set(reader_context_->data_cache_miss_bytes());
> I think for the above counters we figured that this pattern of copying the
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache-test.cc
File be/src/runtime/io/data-cache-test.cc:

http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache-test.cc@52
PS1, Line 52:   // The callback is invoked by each thread in the multi-threaded 
tests below.
> callback reads like it's called when something is done, how about ThreadFn?
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache-test.cc@157
PS1, Line 157:   // Read the same same range inserted previously and they 
should still all in the cache.
> nit: be
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache-test.cc@196
PS1, Line 196:   // Create a buffer way larger than the cache.
> Why does it need to be larger?
Typo. Fixed.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache-test.cc@201
PS1, Line 201:   ASSERT_TRUE(cache.Store("foobar", 12345, 0, 
large_buffer.get(), cache_size));
> Use variables instead of inline constants?
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache-test.cc@220
PS1, Line 220:   const int64_t cache_size = 1 << 22;
> I find these easier to read in the form of 4 * 1024 * 1024
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache-test.cc@248
PS1, Line 248: differen
> typo
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache-test.cc@301
PS1, Line 301: ootprint) {
> Can you add a comment what this test does?
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache-test.cc@330
PS1, Line 330: int main(int argc, char **argv) {
> This is not needed anymore if you add your test to the unified backend test
I'd prefer to keep this test stand-alone to make development easier.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h
File be/src/runtime/io/data-cache.h:

http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@18
PS1, Line 18: #ifndef IMPALA_RUNTIME_IO_DATA_CACHE_H
> Could use #pragma once instead of include guards
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@53
PS1, Line 53: /// punching holes in the backing file it's stored in. As a 
backing file reaches a certain
> It's actually a TODO item to retire older files and copy what's left in the
Added a check for filesystem support for hole punching in the new patch.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@55
PS1, Line 55: /// created instead. Note that due to hole punching, the backing 
file is actually sparse.
> This might be a case where a simple ASCII diagram would illustrate the conc
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@56
PS1, Line 56: ///
> It's by design that we don't support overlapping range for the first implem
Added some explanation in the header comment.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@63
PS1, Line 63: /// happens inline currently during Store().
> It's worth documenting the PAGE_SIZE behaviour since it implies that there'
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@76
PS1, Line 76: class DataCache{
> style nit: missing space before {
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@81
PS1, Line 81:   DataCache(const std::string& config) : config_(config) { }
> totally doesn't matter here, but general best practice in C++11 is to have
Thanks for the reminder. Almost every time, I have to look up on the internet 
the advantage of pass-by-value-then-move over pass-by-reference. It's 
subtle but it makes sense. We should probably start sticking to this idiom more 
widely in the Impala code base. Maybe a clang-tidy rule would help?

Also totally irrelevant, but that's an area where I find passing by pointer in C 
is sometimes easier to use or understand than in C++.
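
For reference, a minimal sketch of the idiom under discussion, using the 
DataCache constructor from this file as the shape (not the exact patch code):

  #include <string>
  #include <utility>

  class DataCache {
   public:
    // An lvalue argument costs one copy plus one cheap move; an rvalue
    // argument costs two moves. A const-reference parameter would force a
    // full copy in both cases when initializing config_.
    explicit DataCache(std::string config) : config_(std::move(config)) {}

   private:
    std::string config_;
  };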


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@105
PS1, Line 105:
> space
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@108
PS1, Line 108:   /// Inserts a new cache entry by copying the content in 
'buffer' into the cache.
> Can you specify whether this is synchronous or not? ie should the caller ex
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@110
PS1, Line 110:   /// is installed successfully.
> worth documenting under what cases the data wouldn't be stored successfully
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@110
PS1, Line 110:   /// is installed successfully.
> Is it interesting to document reasons why the content may not be installed?
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@110
PS1, Line 110:   /// is installed successfully.
> Maybe point out explicitly that it returns false for errors and if the cont
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@127
PS1, Line 127:   class Partition : public kudu::Cache::EvictionCallback {
> This could be defined in the .cc file and just forward-declared here, right
Tried that but hit some compilation issue about unique_ptr<> needing to know 
the size of Partition. Moved the definitions of CacheFile, CacheKey and 
CacheEntry into the .cc file.
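
For context, a minimal sketch of the usual workaround when a unique_ptr member 
holds a forward-declared type (not what the patch ended up doing): the default 
deleter calls delete, which needs a complete type wherever the enclosing 
destructor is instantiated, so the destructor must be defined out of line.

  #include <memory>

  class Partition;  // forward declaration only

  class DataCache {
   public:
    DataCache();
    ~DataCache();  // defined in the .cc file, where Partition is complete
   private:
    std::unique_ptr<Partition> partition_;
  };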


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@129
PS1, Line 129:     /// Creates a partition at the given directory 'path' with 
quota 'capacity'.
> Mention that capacity is bytes? Maybe even add a suffix that indicates it's
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@157
PS1, Line 157:     void EvictedEntry(kudu::Slice key, kudu::Slice value) 
override;
> Can you add the virtual and override keywords to all virtual methods to mak
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@162
PS1, Line 162:     struct CacheFile {
> Consider making a destructor here responsible for closing fd if it's been s
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@208
PS1, Line 208:     std::vector<CacheFile> cache_files_;
> It seems like CacheEntrys point to CacheFile objects, but here the CacheFil
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@226
PS1, Line 226:     /// failure. Please note that the backing file is unlinked 
right after it's created
> How will these files look to an administrator? Will this make it harder to
Switched to removing stale files during initialization after a restart. Punted 
on sharing the cache directory between Impalads on the same host for now.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@236
PS1, Line 236:     /// 'read' is true iff this function is called from 
Lookup(). This function is
             :     /// also called from Store(), in which case, 'read' is false.
> nit: I'm not a big fan of parameters that get documented like this. From re
Switched to passing in the op's name in the new patch, if that's easier to 
follow. We do need to be able to tell the difference though.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@254
PS1, Line 254:       std::unique_ptr<kudu::faststring>* key);
> You could return the pointer by value
Function deleted in new patch. Added a struct CacheKey instead.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.h@254
PS1, Line 254:       std::unique_ptr<kudu::faststring>* key);
> why not just take a 'faststring*' here directly, since it's mutable?
Function deleted in new patch. Added a struct CacheKey instead.
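
Roughly, a hypothetical sketch of such a key struct, given that the lookup key 
is the tuple (file name, mtime, offset) from the commit message; the field and 
struct names here are illustrative, not the patch's:

  #include <cstdint>
  #include <string>

  struct CacheKey {
    CacheKey(const std::string& filename, int64_t mtime, int64_t offset) {
      encoded_.reserve(filename.size() + sizeof(mtime) + sizeof(offset));
      encoded_.append(filename);
      encoded_.append(reinterpret_cast<const char*>(&mtime), sizeof(mtime));
      encoded_.append(reinterpret_cast<const char*>(&offset), sizeof(offset));
    }
    std::string encoded_;  // flat byte string used as the cache lookup key
  };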


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc
File be/src/runtime/io/data-cache.cc:

http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@44
PS1, Line 44: using namespace impala;
            : using namespace impala::io;
> why do you need these? The class is already in impala::io so seems like the
Switched to wrapping the following code with
 namespace impala {
 namespace io {
 }
 }

We don't seem to be consistent about doing this in all files (e.g. 
disk-io-mgr.cc).


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@57
PS1, Line 57:     "(Advanced) Enable checksumming for the cached buffer.");
> do you have any sense of the overhead here? Checksums with CRC32C are almos
No idea. Please see the comments below on why CRC32 wasn't picked, but it'd be 
nice to fix the situation in a separate patch.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@62
PS1, Line 62: static const int64_t PAGE_SIZE = 1L << 12;
> any reason for this to be configurable? eg on SSD I think erase blocks are
This is not configurable, so maybe I misunderstood your comment.

I just round it up to 4KB as that seems to be the minimum block size handled by 
the page cache; it's not necessarily optimized for the underlying storage media 
at this moment. This also makes hole punching easier, as the holes punched are 
4KB aligned.
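
For illustration, the 4KB alignment boils down to something like the following 
(RoundUpToPageSize is a hypothetical helper; the patch uses BitUtil::RoundUp):

  #include <cstdint>

  static const int64_t PAGE_SIZE = 1L << 12;  // 4KB

  static int64_t RoundUpToPageSize(int64_t v) {
    // PAGE_SIZE is a power of two, so round up and mask off the low bits.
    return (v + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
  }

Keeping entry offsets and lengths rounded this way means every hole handed to 
fallocate() is 4KB aligned.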


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@66
PS1, Line 66: DRAM_CACHE
> Should we add DISK_CACHE here? Otherwise, can you add a comment to explain
This is actually an in-memory cache, so DRAM_CACHE seems to make sense to me.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@75
PS1, Line 75:   int64_t fd = open(path.c_str(), O_CREAT | O_TRUNC | O_RDWR, 
S_IRUSR | S_IWUSR);
> We should probably be retrying this on EINTR (Kudu has a RETRY_ON_EINTR mac
Done
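
For context, the retry amounts to a loop like this (a sketch in the spirit of 
Kudu's RETRY_ON_EINTR, not the exact macro; the helper name is illustrative):

  #include <cerrno>
  #include <fcntl.h>
  #include <string>
  #include <sys/stat.h>

  int OpenCacheFile(const std::string& path) {
    int fd;
    do {
      fd = open(path.c_str(), O_CREAT | O_TRUNC | O_RDWR, S_IRUSR | S_IWUSR);
    } while (fd == -1 && errno == EINTR);
    return fd;  // still -1 on a non-EINTR error
  }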


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@79
PS1, Line 79:   // Unlink the file so the disk space is recycled once the 
process exits.
> In terms of preventing the "wiping" of a directory if a previous Impala's a
The new patch set will clean up any remaining backing files from previous runs 
at partition initialization time. Please note that this assumes each Impalad 
process uses a unique caching directory on a given host. We already have a 
similar assumption for the scratch directory today.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@91
PS1, Line 91: Status DataCache::Partition::Init() {
> worth verifying that the path is absolute as well, otherwise a misplaced ch
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@101
PS1, Line 101:   }
> Would be good to borrow 'CheckHolePunch' from kuduf/s/data_dirs.cc in the K
Added something similar in filesystem-util.cc. Importing kudu/fs into the code 
base can be done later in a separate patch if necessary.
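
For reference, a rough sketch of such a probe (Linux-specific, in the spirit of 
Kudu's CheckHolePunch; the helper name and sizes here are illustrative):

  #include <fcntl.h>
  #include <sys/stat.h>

  bool SupportsHolePunching(int fd) {
    // Preallocate four 4KB pages, then punch out the second one.
    if (fallocate(fd, 0, 0, 16384) != 0) return false;
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  4096, 4096) != 0) {
      return false;  // e.g. EOPNOTSUPP on filesystems without hole punching
    }
    struct stat st;
    if (fstat(fd, &st) != 0) return false;
    // Logical size is unchanged but fewer 512-byte blocks remain allocated.
    return st.st_size == 16384 && st.st_blocks * 512 < 16384;
  }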


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@127
PS1, Line 127:   posix_fadvise(cache_file->fd, entry->offset, bytes_to_read, 
POSIX_FADV_SEQUENTIAL);
> what's the thinking behind this? This is more likely to cause it to readahe
The thinking behind this is to advise the kernel that we plan to read 
sequentially in this region of the backing file. Did you mean the following 
notes in posix_fadvise()?

         Under Linux, POSIX_FADV_NORMAL sets the readahead window to the
       default size for the backing device; POSIX_FADV_SEQUENTIAL doubles
       this size, and POSIX_FADV_RANDOM disables file readahead entirely.
       These changes affect the entire file, not just the specified region
       (but other open file handles to the same file are unaffected).

We mostly do sequential reads from the backing files, albeit at different 
offsets. Is the concern that the call may prefetch unnecessary data beyond the 
requested regions into the page cache and thus evict useful pages?


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@128
PS1, Line 128:   int64_t bytes_read = pread(cache_file->fd, buffer, 
bytes_to_read, entry->offset);
> add a TODO to add a metric for time spent reading from cache? We should pro
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@129
PS1, Line 129:   if (UNLIKELY(bytes_read != bytes_to_read)) {
> the pread docs say:
Done
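
For context, handling this amounts to a read loop along these lines (the helper 
name is illustrative; the same pattern applies to the pwrite() case further 
down):

  #include <cerrno>
  #include <cstdint>
  #include <unistd.h>

  // Returns bytes read, or -1 on a real error. pread() may legitimately
  // return fewer bytes than requested, so loop until done or EOF.
  int64_t PreadFully(int fd, uint8_t* buffer, int64_t len, int64_t offset) {
    int64_t total = 0;
    while (total < len) {
      ssize_t n = pread(fd, buffer + total, len - total, offset + total);
      if (n < 0) {
        if (errno == EINTR) continue;
        return -1;
      }
      if (n == 0) break;  // EOF: fewer bytes in the file than requested
      total += n;
    }
    return total;
  }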


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@139
PS1, Line 139:   meta_cache_->Release(handle);
> use SCOPED_CLEANUP to define this up closer to line 116, and then you can m
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@147
PS1, Line 147:   if (charge_len > capacity_) return false;
> I think the underlying cache ends up sharded once per core unless you expli
Given that we won't actually insert into the cache until after trying to write 
to the backing file, we may end up exceeding the capacity temporarily. So this 
check may still be helpful.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@158
PS1, Line 158:     // TODO: handle cases in which 'store_len' is longer than 
existing entry;
> It's probably worth documenting this behaviour in the header - it's a littl
Addressed the TODO in the new patch.

Based on the experiments done so far, we rarely if ever see a partial hit, 
which suggests this case is not common in practice. That said, it's not too 
complicated to handle this case in the code either.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@166
PS1, Line 166:     std::unique_lock<SpinLock> partition_lock(lock_);
> I see you're dropping this lock while writing to disk, but I wonder if inst
Going with a variant of the try-lock solution for now. In particular, the new 
patch will expose a knob to control the number of concurrent inserts allowed 
into the cache. Setting it to 1 will simulate the single-concurrent-insert 
behavior. This allows us to test for any adverse effect on performance with a 
single thread on fast storage, or when we are not IO bound. One concern is that 
we may just skip inserting in the unfortunate cases when the IO threads are 
scheduled to run at the same time. Probabilistically, that should be rare over 
time, but David will probably need to experiment with different concurrencies 
and see how things go.

For the async write idea, one concern I have is the amount of memory consumed 
by the queue. We can always put a cap on that and use a MemTracker to track and 
limit the consumption, but that's another bound to tune. Another concern is 
that the buffer is allocated on the IO thread and deallocated on the async 
write thread, which is an anti-pattern for TCMalloc. Potentially, we can use 
the buffer pool to work around it. Left a TODO for it.
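
To make the knob concrete, the bounded-concurrency idea is roughly the 
following (the class and names are illustrative, not the patch's):

  #include <mutex>

  // Threads that would exceed the configured limit skip the cache insert
  // instead of blocking, so IO threads never stall on the cache.
  class InsertGate {
   public:
    explicit InsertGate(int max_concurrent)
      : max_concurrent_(max_concurrent) {}

    // Returns false if the insert should be skipped.
    bool TryBegin() {
      std::lock_guard<std::mutex> l(lock_);
      if (in_flight_ >= max_concurrent_) return false;
      ++in_flight_;
      return true;
    }

    void End() {
      std::lock_guard<std::mutex> l(lock_);
      --in_flight_;
    }

   private:
    std::mutex lock_;
    int in_flight_ = 0;
    const int max_concurrent_;
  };

Setting max_concurrent to 1 gives the single-concurrent-insert behavior 
described above.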


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@173
PS1, Line 173:     DCHECK(!cache_files_.empty());
> may as well CHECK here, since this isn't that hot of a path, and it'd be be
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@185
PS1, Line 185:     if (cache_file->current_offset > 
FLAGS_data_cache_file_max_size) {
> So this seems we can overrun the configured max size by one entry. Should w
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@193
PS1, Line 193:   ssize_t bytes_written = 0;
> declare right before usage
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@208
PS1, Line 208: pwrite
> same as above, this needs to loop and handle partial writes and EINTR.
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@222
PS1, Line 222: done:
> agreed
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@222
PS1, Line 222: done:
> You could use a scoped cleanup lambda and returns instead of goto
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@233
PS1, Line 233:       BitUtil::RoundUp(entry->offset + entry->len, PAGE_SIZE) - 
entry->offset;
> isn't this equivalent to RoundUp(entry->len, PAGE_SIZE) because we know tha
Yes. Added a DCHECK to confirm entry->offset is aligned to the page size.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@234
PS1, Line 234: fallocate
> RETRY_ON_EINTR
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@245
PS1, Line 245:   return HashUtil::FastHash64(buffer, buffer_len, 0xcafebeef);
> Why not use CRC32 which can be optimized to around 8 bytes/cycle? (our impl
Yes, the current implementation in hash-util.h doesn't support lengths beyond 
32 bits and it's also not using SSE4_crc32_u64(). Changing that would require 
changing some codegen'd functions, according to the comments. So, choosing the 
easier path for now. Added a TODO here.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@262
PS1, Line 262:   for (const string& cache_config : all_cache_configs) {
> gutil has SplitStringIntoKeyValuePairs() in strings.h that could be helpful
Changed the config string format in the new patch. Thanks for the suggestion. 
There are some useful utility functions in gutil/strings/split.h.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/data-cache.cc@323
PS1, Line 323:   int idx = HashUtil::FastHash64(key->data(), key->size(), 0) % 
partitions_.size();
> maybe combine the hashing and the key construction into one method like Con
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/disk-io-mgr.h
File be/src/runtime/io/disk-io-mgr.h:

http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/disk-io-mgr.h@198
PS1, Line 198: HDFS file blocks
> not necessarily HDFS.
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/disk-io-mgr.h@445
PS1, Line 445:   std::unique_ptr<DataCache> remote_data_cache_;
> worth clarifying in the doc whether this would be null if not configured
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/disk-io-mgr.cc
File be/src/runtime/io/disk-io-mgr.cc:

http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/disk-io-mgr.cc@58
PS1, Line 58: // tuples, separated by comma. An example could be: 
/data/0:1TB,/data/1:1TB.
> This info should go into the gflags help string instead of a comment
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/disk-io-mgr.cc@240
PS1, Line 240: get
> redundant get() != nullptr -- unique_ptr implicitly casts to bool in the wa
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/disk-io-mgr.cc@295
PS1, Line 295:       LOG(ERROR) << "Failed to initialize data cache.";
> Agreed probably fatal here
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/disk-io-mgr.cc@295
PS1, Line 295:       LOG(ERROR) << "Failed to initialize data cache.";
> Should log the error message at least.
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/disk-io-mgr.cc@295
PS1, Line 295:       LOG(ERROR) << "Failed to initialize data cache.";
> Have you considered making this fatal?
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/hdfs-file-reader.h
File be/src/runtime/io/hdfs-file-reader.h:

http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/hdfs-file-reader.h@48
PS1, Line 48:   virtual void CachedFile(uint8_t** data, int64_t* length) 
override;
> Maybe rename to ReadHdfsCachedFile to make its purpose more clear at the ca
This is a common path shared with local-file-reader.h too. Not sure it makes 
sense to call it ReadHdfsCachedFile() in that context?


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/hdfs-file-reader.cc
File be/src/runtime/io/hdfs-file-reader.cc:

http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/hdfs-file-reader.cc@110
PS1, Line 110:     DataCache* remote_data_cache = io_mgr->remote_data_cache();
> I feel like this should be factored out into its own function so that the f
Done


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/hdfs-file-reader.cc@124
PS1, Line 124:         if (exclusive_hdfs_fh_ != nullptr) {
> I think a comment here would help explain why we do this
This is now removed in the new patch set; please see the changes in 
ReadFromPosInternal() there.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/hdfs-file-reader.cc@163
PS1, Line 163:         LOG(INFO) << "Reopening file " << 
scan_range_->file_string()
> VLOG(3) or something?
Done. Left it as VLOG(2) as it was helpful for debugging at some point.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/hdfs-file-reader.cc@186
PS1, Line 186:    if (borrowed_hdfs_fh != nullptr) {
             :       
io_mgr->ReleaseCachedHdfsFileHandle(scan_range_->file_string(), 
borrowed_hdfs_fh);
             :     }
> is it possible this can get skipped in some of the early return error condi
Done. It's impossible with the current code, but switching to scoped cleanup 
makes the code more robust against any future changes.


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/hdfs-file-reader.cc@193
PS1, Line 193:       remote_data_cache->Store(*scan_range_->file_string(), 
scan_range_->mtime(),
> I had the same question. I think it sort-of works if you have a cache entry
Addressed the concern with larger entries in the new patch set. To answer the 
question, partial hits work in the current patch and we only issue reads for 
the part missing from the cache. The insertion will be for the entire buffer 
though, not just the missing part.

That said, based on the crude study with Parquet + TPC-DS on the current 
patch, there is barely any partial hit in the cache (see the partial hit 
counter above), which suggests that entries at the same offset but with 
different lengths aren't an issue for the workloads we tested with.

Just curious: in what cases will the scanners today issue a scan range with 
the same offset in a file but a different length?


http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/request-context.h
File be/src/runtime/io/request-context.h:

http://gerrit.cloudera.org:8080/#/c/12987/1/be/src/runtime/io/request-context.h@161
PS1, Line 161:   int data_cache_partial_hit_count() const { return 
data_cache_partial_hit_count_.Load(); }
> line too long (91 > 90)
Done



--
To view, visit http://gerrit.cloudera.org:8080/12987
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: I734803c1c1787c858dc3ffa0a2c0e33e77b12edc
Gerrit-Change-Number: 12987
Gerrit-PatchSet: 1
Gerrit-Owner: Michael Ho <k...@cloudera.com>
Gerrit-Reviewer: David Rorke <dro...@cloudera.com>
Gerrit-Reviewer: Impala Public Jenkins <impala-public-jenk...@cloudera.com>
Gerrit-Reviewer: Joe McDonnell <joemcdonn...@cloudera.com>
Gerrit-Reviewer: Lars Volker <l...@cloudera.com>
Gerrit-Reviewer: Michael Ho <k...@cloudera.com>
Gerrit-Reviewer: Sahil Takiar <stak...@cloudera.com>
Gerrit-Reviewer: Thomas Marshall <tmarsh...@cloudera.com>
Gerrit-Reviewer: Tim Armstrong <tarmstr...@cloudera.com>
Gerrit-Reviewer: Todd Lipcon <t...@apache.org>
Gerrit-Comment-Date: Tue, 16 Apr 2019 19:41:38 +0000
Gerrit-HasComments: Yes
