rdblue commented on a change in pull request #398: Push down StringStartsWith in Spark IcebergSource
URL: https://github.com/apache/incubator-iceberg/pull/398#discussion_r317276702
 
 

 ##########
 File path: parquet/src/main/java/org/apache/iceberg/parquet/ParquetMetricsRowGroupFilter.java
 ##########
 @@ -339,6 +342,48 @@ public Boolean or(Boolean leftResult, Boolean rightResult) {
       return ROWS_MIGHT_MATCH;
     }
 
+    @Override
+    @SuppressWarnings("unchecked")
+    public <T> Boolean startsWith(BoundReference<T> ref, Literal<T> lit) {
+      int id = ref.fieldId();
+
+      Long valueCount = valueCounts.get(id);
+      if (valueCount == null) {
+        // the column is not present and is all nulls
+        return ROWS_CANNOT_MATCH;
+      }
+
+      Statistics<Binary> colStats = (Statistics<Binary>) stats.get(id);
 
 Review comment:
  Not yet. Parquet stops writing stats when they grow too large; I think the default limit is 4k bytes.
   
  I think I missed something earlier: I didn't catch that the length you're truncating to is the prefix length in bytes. In that case, binary comparison should work, because unsigned byte-wise comparison of UTF-8 data produces the same ordering as the default UTF-8 (code point) comparison. So as long as we're truncating to the same length in bytes and comparing UTF-8 encoded values, I think you're right about this.
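The equivalence claimed above (unsigned byte-wise order of UTF-8 data agrees with code point order, so a byte-length prefix truncation is safe to compare binarily) can be sketched as follows. `compareUtf8Unsigned` and `truncateUtf8` are hypothetical helpers for illustration, not Iceberg's or Parquet's actual implementation; note the truncation backs up so it never splits a multi-byte sequence:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Utf8PrefixDemo {

  // Compare two strings by the unsigned byte order of their UTF-8 encodings.
  // For valid UTF-8 this agrees with comparing Unicode code points.
  static int compareUtf8Unsigned(String a, String b) {
    byte[] ab = a.getBytes(StandardCharsets.UTF_8);
    byte[] bb = b.getBytes(StandardCharsets.UTF_8);
    int len = Math.min(ab.length, bb.length);
    for (int i = 0; i < len; i++) {
      int cmp = Integer.compare(ab[i] & 0xFF, bb[i] & 0xFF);
      if (cmp != 0) {
        return cmp;
      }
    }
    return Integer.compare(ab.length, bb.length);
  }

  // Truncate a string to at most maxBytes of its UTF-8 encoding without
  // cutting a multi-byte sequence in half (continuation bytes are 10xxxxxx).
  static byte[] truncateUtf8(String s, int maxBytes) {
    byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
    if (bytes.length <= maxBytes) {
      return bytes;
    }
    int end = maxBytes;
    while (end > 0 && (bytes[end] & 0xC0) == 0x80) {
      end--;  // back up to a code point boundary
    }
    return Arrays.copyOf(bytes, end);
  }

  public static void main(String[] args) {
    // 'a' (U+0061) precedes 'ü' (U+00FC) by code point; the UTF-8 byte
    // comparison (0x61 vs the lead byte 0xC3) gives the same answer.
    System.out.println(Integer.signum(compareUtf8Unsigned("apple", "über")));

    // "héllo" truncated to 2 bytes keeps only "h": the 2-byte 'é'
    // (0xC3 0xA9) would be split, so the cut backs up past it.
    System.out.println(truncateUtf8("héllo", 2).length);
  }
}
```

Because the two orderings agree, a prefix truncated to N bytes can serve as a lower/upper bound for `startsWith` filtering: if the literal's first N bytes fall outside the truncated min/max range, the row group cannot match.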

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
