arina-ielchiieva commented on a change in pull request #1298: DRILL-5796:
Filter pruning for multi rowgroup parquet file
URL: https://github.com/apache/drill/pull/1298#discussion_r202750717
##########
File path:
exec/java-exec/src/main/java/org/apache/drill/exec/expr/stat/ParquetIsPredicate.java
##########
@@ -62,50 +62,51 @@ private ParquetIsPredicate(LogicalExpression expr,
BiPredicate<Statistics<C>, Ra
return visitor.visitUnknown(this, value);
}
- @Override
- public boolean canDrop(RangeExprEvaluator<C> evaluator) {
+ /**
+  * Apply the filter condition against the metadata of the row group.
+  */
+ public RowsMatch matches(RangeExprEvaluator<C> evaluator) {
Statistics<C> exprStat = expr.accept(evaluator, null);
- if (isNullOrEmpty(exprStat)) {
- return false;
- }
+ return isNullOrEmpty(exprStat) ? RowsMatch.SOME : predicate.apply(exprStat, evaluator);
+ }
- return predicate.test(exprStat, evaluator);
+ /**
+  * After applying the filter against the statistics of the row group, if the result is RowsMatch.ALL,
+  * we still must know whether the row group contains some null values, because they can change the
+  * filter result. If it contains some null values, we downgrade RowsMatch.ALL to RowsMatch.SOME,
+  * which says that maybe some values (the null ones) should be discarded.
+  */
+ private static RowsMatch checkNull(Statistics exprStat) {
+ return hasNoNulls(exprStat) ? RowsMatch.ALL : RowsMatch.SOME;
}
/**
* IS NULL predicate.
*/
private static <C extends Comparable<C>> LogicalExpression createIsNullPredicate(LogicalExpression expr) {
return new ParquetIsPredicate<C>(expr,
- //if there are no nulls -> canDrop
- (exprStat, evaluator) -> hasNoNulls(exprStat)) {
- private final boolean isArray = isArray(expr);
-
- private boolean isArray(LogicalExpression expression) {
- if (expression instanceof TypedFieldExpr) {
- TypedFieldExpr typedFieldExpr = (TypedFieldExpr) expression;
- SchemaPath schemaPath = typedFieldExpr.getPath();
- return schemaPath.isArray();
- }
- return false;
- }
-
- @Override
- public boolean canDrop(RangeExprEvaluator<C> evaluator) {
+ (exprStat, evaluator) -> {
// for arrays we are not able to define exact number of nulls
// [1,2,3] vs [1,2] -> in the second case 3 is absent and thus it's null, but statistics show no nulls
- return !isArray && super.canDrop(evaluator);
- }
- };
+ if (expr instanceof TypedFieldExpr) {
+ TypedFieldExpr typedFieldExpr = (TypedFieldExpr) expr;
+ if (typedFieldExpr.getPath().isArray()) {
+ return RowsMatch.SOME;
+ }
+ }
+ if (hasNoNulls(exprStat)) {
Review comment:
I agree with your point, but if Drill sticks to such a rule, it will start incorrectly reading Parquet files created by previous Parquet versions, which I believe would be a serious problem for users who have files created earlier. The other option is to disable filter push-down for files created prior to 1.10, but this would also result in performance degradation. I suggest we at least try to support previously created files; the current implementation does so. If you have better ideas, please suggest them. If you want to go down the road of ignoring the statistics of previously created files, please start a discussion on the mailing list.
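To illustrate the point under discussion, here is a minimal, hypothetical sketch of the RowsMatch downgrade logic from the diff above. `RowsMatchSketch`, its `checkNull` signature, and the use of a negative null count to model unreliable statistics from older Parquet writers are all illustrative assumptions, not Drill's actual API:

```java
// Simplified stand-in for the RowsMatch handling in ParquetIsPredicate.
// This is a sketch for discussion, not Drill's real implementation.
public class RowsMatchSketch {
  enum RowsMatch { ALL, SOME, NONE }

  // A negative null count models files from older Parquet writers whose
  // statistics do not reliably report nulls; in that case we must stay
  // conservative and keep the row group (SOME) rather than claim ALL.
  static RowsMatch checkNull(long numNulls) {
    if (numNulls < 0) {
      return RowsMatch.SOME; // stats unreliable: cannot prove ALL rows match
    }
    return numNulls == 0 ? RowsMatch.ALL : RowsMatch.SOME;
  }

  public static void main(String[] args) {
    System.out.println(checkNull(0));  // no nulls: the filter result holds for every row
    System.out.println(checkNull(3));  // some nulls: they may invalidate the match for some rows
    System.out.println(checkNull(-1)); // old writer, null count unknown: keep the row group
  }
}
```

The same conservatism applies to the array case in the diff: since Parquet statistics cannot express the "missing element" nulls of `[1,2,3]` vs `[1,2]`, array columns always return `RowsMatch.SOME`.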
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services