pitrou commented on code in PR #33776:
URL: https://github.com/apache/arrow/pull/33776#discussion_r1092004195


##########
cpp/src/parquet/bloom_filter.cc:
##########
@@ -65,51 +69,109 @@ void BlockSplitBloomFilter::Init(const uint8_t* bitset, uint32_t num_bytes) {
   PARQUET_ASSIGN_OR_THROW(data_, ::arrow::AllocateBuffer(num_bytes_, pool_));
   memcpy(data_->mutable_data(), bitset, num_bytes_);
 
-  this->hasher_.reset(new MurmurHash3());
+  this->hasher_ = std::make_unique<XxHasher>();
 }
 
-BlockSplitBloomFilter BlockSplitBloomFilter::Deserialize(ArrowInputStream* input) {
-  uint32_t len, hash, algorithm;
-  int64_t bytes_available;
+static constexpr uint32_t kBloomFilterHeaderSizeGuess = 32;
+static constexpr uint32_t kMaxBloomFilterHeaderSize = 1024;

Review Comment:
   I think this is a pointless complication. We can try to read up to 256 bytes at once, which should be sufficient for future additions.
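
   For illustration, something of roughly this shape (the 256-byte constant, the `DeserializeBloomFilterHeader()` helper, and the trailing steps are placeholders for this discussion, not the actual patch):

   ```cpp
   // Read one fixed-size chunk that is comfortably larger than any current
   // (or reasonably foreseeable) serialized BloomFilterHeader, then parse the
   // Thrift header out of its prefix.
   constexpr int64_t kBloomFilterHeaderReadSize = 256;

   BlockSplitBloomFilter BlockSplitBloomFilter::Deserialize(ArrowInputStream* input) {
     // Single Read() call; the stream may return fewer bytes if it is shorter.
     PARQUET_ASSIGN_OR_THROW(auto header_buf, input->Read(kBloomFilterHeaderReadSize));

     // Placeholder helper: deserializes the Thrift-encoded BloomFilterHeader
     // from the buffer prefix and reports how many bytes it consumed.
     uint32_t header_size = 0;
     format::BloomFilterHeader header = DeserializeBloomFilterHeader(
         header_buf->data(), static_cast<uint32_t>(header_buf->size()), &header_size);

     // ... validate the header, then read the remaining bytes of the
     // header.numBytes-long bitset (part of it may already be in header_buf) ...
   }
   ```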


