Github user gparai commented on a diff in the pull request:
https://github.com/apache/drill/pull/729#discussion_r102314383
--- Diff:
exec/java-exec/src/main/java/org/apache/drill/exec/planner/common/DrillStatsTable.java
---
@@ -0,0 +1,347 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * <p/>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p/>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.common;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonGetter;
+import com.fasterxml.jackson.annotation.JsonSetter;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonSubTypes;
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import com.fasterxml.jackson.databind.DeserializationFeature;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Stopwatch;
+import com.google.common.collect.Maps;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.RelVisitor;
+import org.apache.calcite.rel.core.TableScan;
+import org.apache.drill.common.exceptions.DrillRuntimeException;
+import org.apache.drill.exec.ops.QueryContext;
+import org.apache.drill.exec.planner.logical.DrillTable;
+import org.apache.drill.exec.util.ImpersonationUtil;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.joda.time.DateTime;
+
+/**
+ * Wraps the stats table info including schema and tableName. Also materializes stats from storage
+ * and keeps them in memory.
+ */
+public class DrillStatsTable {
+ private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(DrillStatsTable.class);
+ private final FileSystem fs;
+ private final Path tablePath;
+
+ /**
+ * List of columns in stats table.
+ */
+ public static final String COL_COLUMN = "column";
+ public static final String COL_COMPUTED = "computed";
+ public static final String COL_STATCOUNT = "statcount";
+ public static final String COL_NDV = "ndv";
+
+ private final String schemaName;
+ private final String tableName;
+
+ private final Map<String, Long> ndv = Maps.newHashMap();
+ private double rowCount = -1;
+
+ private boolean materialized = false;
+
+ private TableStatistics statistics = null;
+
+ public DrillStatsTable(String schemaName, String tableName, Path tablePath, FileSystem fs) {
+ this.schemaName = schemaName;
+ this.tableName = tableName;
+ this.tablePath = tablePath;
+ this.fs = ImpersonationUtil.createFileSystem(ImpersonationUtil.getProcessUserName(), fs.getConf());
+ }
+
+ public String getSchemaName() {
+ return schemaName;
+ }
+
+ public String getTableName() {
+ return tableName;
+ }
+ /**
+ * Get the number of distinct values of the given column. If stats are not present for the given column,
+ * null is returned.
+ *
+ * Note: the returned data may not be accurate. Accuracy depends on whether the table data has changed
+ * after the stats were computed.
+ *
+ * @param col the column name
+ * @return the estimated number of distinct values, or null if stats are unavailable
+ */
+ public Double getNdv(String col) {
+ // Stats might not have materialized because of errors.
+ if (!materialized) {
+ return null;
+ }
+ final String upperCol = col.toUpperCase();
+ final Long ndvCol = ndv.get(upperCol);
+ // NDV estimation techniques like HLL may over-estimate, hence cap it at rowCount
+ if (ndvCol != null) {
+ return Math.min(ndvCol, rowCount);
--- End diff --
Histograms would help with data skew. Once we have histograms, the NDV would be obtained from them. Stats will be off by default (so not as risky?), and the existing defaults also suffer from the same shortcoming.

> Should we be more conservative? Set some minimum value?

How would we determine the minimum value? We would need to run experiments to determine it.

> Take a risk-based approach to deciding which side of the hash join to be the build side?

Sorry, I did not understand this. Maybe we can consider it as a follow-up.
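
For illustration, here is a toy sketch of the two ideas discussed above: capping a possibly over-estimated NDV at the row count (as the diff does with `Math.min(ndvCol, rowCount)`), and cutting sorted values into equi-depth buckets so per-bucket stats can expose skew. All class and method names here are hypothetical, not Drill code.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Toy sketch only -- hypothetical names, not Drill APIs.
public class NdvSketch {

  // Cap a possibly over-estimated NDV (e.g. from a HyperLogLog sketch) at the
  // row count, mirroring the Math.min(ndvCol, rowCount) guard in the diff.
  static double cappedNdv(long estimatedNdv, double rowCount) {
    return Math.min((double) estimatedNdv, rowCount);
  }

  // Exact NDV of a small in-memory sample; a stand-in for an HLL estimator.
  static long sampleNdv(int[] values) {
    Set<Integer> distinct = new HashSet<>();
    for (int v : values) {
      distinct.add(v);
    }
    return distinct.size();
  }

  // Equi-depth histogram boundaries: sort the values and cut them into buckets
  // holding roughly the same number of rows. A heavily skewed value then spans
  // several buckets, which per-bucket NDV/frequency stats would reveal.
  static int[] equiDepthBoundaries(int[] values, int buckets) {
    int[] sorted = values.clone();
    Arrays.sort(sorted);
    int[] bounds = new int[buckets + 1];
    for (int i = 0; i <= buckets; i++) {
      int idx = Math.min(i * sorted.length / buckets, sorted.length - 1);
      bounds[i] = sorted[idx];
    }
    return bounds;
  }

  public static void main(String[] args) {
    int[] skewed = {1, 1, 1, 1, 2, 3, 4, 5};
    System.out.println("capped NDV = " + cappedNdv(sampleNdv(skewed), skewed.length));
    System.out.println("bounds = " + Arrays.toString(equiDepthBoundaries(skewed, 2)));
  }
}
```

In this sketch the skewed value 1 fills half the rows, so it dominates the lower equi-depth bucket; that is the kind of signal a histogram-based NDV would pick up that a single table-wide NDV cannot.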
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---