[
https://issues.apache.org/jira/browse/DRILL-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16114731#comment-16114731
]
ASF GitHub Bot commented on DRILL-4735:
---------------------------------------
Github user jinfengni commented on a diff in the pull request:
https://github.com/apache/drill/pull/882#discussion_r131450166
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/direct/MetadataDirectGroupScan.java ---
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.direct;
+
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.exec.physical.base.GroupScan;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.base.ScanStats;
+import org.apache.drill.exec.store.RecordReader;
+
+import java.util.Collection;
+import java.util.List;
+
+/**
+ * Represents direct scan based on metadata information.
+ * For example, for parquet files it can be obtained from parquet footer (total row count)
+ * or from parquet metadata files (column counts).
+ * Contains reader, statistics and list of scanned files if present.
+ */
+@JsonTypeName("metadata-direct-scan")
+public class MetadataDirectGroupScan extends DirectGroupScan {
+
+ private final Collection<String> files;
+
+ public MetadataDirectGroupScan(RecordReader reader, Collection<String> files) {
+ super(reader);
+ this.files = files;
+ }
+
+ public MetadataDirectGroupScan(RecordReader reader, Collection<String> files, ScanStats stats) {
+ super(reader, stats);
+ this.files = files;
+ }
+
+ @Override
+ public PhysicalOperator getNewWithChildren(List<PhysicalOperator> children) throws ExecutionSetupException {
+ assert children == null || children.isEmpty();
+ return new MetadataDirectGroupScan(reader, files, stats);
+ }
+
+ @Override
+ public GroupScan clone(List<SchemaPath> columns) {
+ return this;
+ }
+
+ /**
+ * <p>
+ * Returns string representation of group scan data.
+ * Includes list of files if present.
+ * </p>
+ *
+ * <p>
+ * Example: [usedMetadata = true, files = [/tmp/0_0_0.parquet], numFiles = 1]
+ * </p>
+ *
+ * @return string representation of group scan data
+ */
+ @Override
+ public String getDigest() {
+ StringBuilder builder = new StringBuilder();
+ builder.append("usedMetadata = true, ");
--- End diff ---
This "useMetadata=true" seems to be redundant, since it's for
MetadataDirectGS.
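For illustration, a minimal sketch of how the digest could be built without that prefix. This assumes the files field from the diff above, that the parent DirectGroupScan digest already covers the reader/stats portion, and it follows the numFiles entry shown in the javadoc example:
{code}
  /**
   * Hypothetical simplification per the comment above: the
   * "usedMetadata = true" prefix is dropped, since this digest is only
   * ever produced by MetadataDirectGroupScan.
   */
  @Override
  public String getDigest() {
    StringBuilder builder = new StringBuilder();
    if (files != null) {
      builder.append("files = ").append(files).append(", ");
      builder.append("numFiles = ").append(files.size()).append(", ");
    }
    // Assumes DirectGroupScan's getDigest() describes the reader/stats part.
    return builder.append(super.getDigest()).toString();
  }
{code}
Since the operator is registered as "metadata-direct-scan" via @JsonTypeName, the fact that metadata was used is already visible in the plan.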
> Count(dir0) on parquet returns 0 result
> ---------------------------------------
>
> Key: DRILL-4735
> URL: https://issues.apache.org/jira/browse/DRILL-4735
> Project: Apache Drill
> Issue Type: Bug
> Components: Query Planning & Optimization, Storage - Parquet
> Affects Versions: 1.0.0, 1.4.0, 1.6.0, 1.7.0
> Reporter: Krystal
> Assignee: Arina Ielchiieva
> Priority: Critical
>
> Selecting a count of dir0, dir1, etc. against a parquet directory returns a count of 0.
> select count(dir0) from `min_max_dir`;
> +---------+
> | EXPR$0  |
> +---------+
> | 0       |
> +---------+
> select count(dir1) from `min_max_dir`;
> +---------+
> | EXPR$0  |
> +---------+
> | 0       |
> +---------+
> If I put both dir0 and dir1 in the same select, it returns the expected result:
> select count(dir0), count(dir1) from `min_max_dir`;
> +---------+---------+
> | EXPR$0  | EXPR$1  |
> +---------+---------+
> | 600     | 600     |
> +---------+---------+
> Here is the physical plan for the count(dir0) query:
> {code}
> 00-00 Screen : rowType = RecordType(BIGINT EXPR$0): rowcount = 20.0,
> cumulative cost = {22.0 rows, 22.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id
> = 1346
> 00-01 Project(EXPR$0=[$0]) : rowType = RecordType(BIGINT EXPR$0):
> rowcount = 20.0, cumulative cost = {20.0 rows, 20.0 cpu, 0.0 io, 0.0 network,
> 0.0 memory}, id = 1345
> 00-02 Project(EXPR$0=[$0]) : rowType = RecordType(BIGINT EXPR$0):
> rowcount = 20.0, cumulative cost = {20.0 rows, 20.0 cpu, 0.0 io, 0.0 network,
> 0.0 memory}, id = 1344
> 00-03
> Scan(groupscan=[org.apache.drill.exec.store.pojo.PojoRecordReader@3da85d3b[columns
> = null, isStarQuery = false, isSkipQuery = false]]) : rowType =
> RecordType(BIGINT count): rowcount = 20.0, cumulative cost = {20.0 rows, 20.0
> cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 1343
> {code}
> Here is part of the explain plan for the count(dir0) and count(dir1) in the
> same select:
> {code}
> 00-00 Screen : rowType = RecordType(BIGINT EXPR$0, BIGINT EXPR$1):
> rowcount = 60.0, cumulative cost = {1206.0 rows, 15606.0 cpu, 0.0 io, 0.0
> network, 0.0 memory}, id = 1623
> 00-01 Project(EXPR$0=[$0], EXPR$1=[$1]) : rowType = RecordType(BIGINT
> EXPR$0, BIGINT EXPR$1): rowcount = 60.0, cumulative cost = {1200.0 rows,
> 15600.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 1622
> 00-02 StreamAgg(group=[{}], EXPR$0=[COUNT($0)], EXPR$1=[COUNT($1)]) :
> rowType = RecordType(BIGINT EXPR$0, BIGINT EXPR$1): rowcount = 60.0,
> cumulative cost = {1200.0 rows, 15600.0 cpu, 0.0 io, 0.0 network, 0.0
> memory}, id = 1621
> 00-03 Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath
> [path=maprfs:/drill/testdata/min_max_dir/1999/Apr/voter20.parquet/0_0_0.parquet],
> ReadEntryWithPath
> [path=maprfs:/drill/testdata/min_max_dir/1999/MAR/voter15.parquet/0_0_0.parquet],
> ReadEntryWithPath
> [path=maprfs:/drill/testdata/min_max_dir/1985/jan/voter5.parquet/0_0_0.parquet],
> ReadEntryWithPath
> [path=maprfs:/drill/testdata/min_max_dir/1985/apr/voter60.parquet/0_0_0.parquet],...,
> ReadEntryWithPath
> [path=maprfs:/drill/testdata/min_max_dir/2014/jul/voter35.parquet/0_0_0.parquet]],
> selectionRoot=maprfs:/drill/testdata/min_max_dir, numFiles=16,
> usedMetadataFile=false, columns=[`dir0`, `dir1`]]]) : rowType =
> RecordType(ANY dir0, ANY dir1): rowcount = 600.0, cumulative cost = {600.0
> rows, 1200.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 1620
> {code}
> Notice that in the first case,
> "org.apache.drill.exec.store.pojo.PojoRecordReader" is used.