[ https://issues.apache.org/jira/browse/DRILL-3028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jinfeng Ni resolved DRILL-3028.
-------------------------------
Resolution: Won't Fix
> Exception in correlated subquery with exists when columns in subquery are not qualified
> ---------------------------------------------------------------------------------------
>
> Key: DRILL-3028
> URL: https://issues.apache.org/jira/browse/DRILL-3028
> Project: Apache Drill
> Issue Type: Bug
> Components: Query Planning & Optimization
> Affects Versions: 1.0.0
> Reporter: Victoria Markman
> Assignee: Jinfeng Ni
> Attachments: t1.parquet, t2.parquet
>
>
> {code}
> 0: jdbc:drill:schema=dfs> select a1 from t1 where exists ( select * from t2 where b1 = b2 and a1 > a2) order by a1;
> Error: SYSTEM ERROR: java.lang.NumberFormatException: zzz
> Fragment 0:0
> [Error Id: 2f13436c-048c-4a19-b99b-9d60a8d6bcf4 on atsqa4-133.qa.lab:31010]
> (state=,code=0)
> {code}
> If you qualify the columns, the query works and returns the correct result:
> {code}
> 0: jdbc:drill:schema=dfs> select a1 from t1 where exists ( select * from t2 where t1.b1 = t2.b2 and t1.a1 > t2.a2) order by a1;
> +------------+
> |     a1     |
> +------------+
> +------------+
> No rows selected (1.338 seconds)
> {code}
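> For anyone reproducing this without the attached parquet files, here is a minimal sketch. The table layouts (t1 with columns a1, b1 and t2 with columns a2, b2) are only inferred from the queries above, and dfs.tmp is assumed to be a writable workspace; the attached t1.parquet/t2.parquet may contain other columns and data.
> {code}
> -- Hypothetical CTAS setup; schemas and values are assumptions, not the attached data.
> CREATE TABLE dfs.tmp.`t1` AS SELECT 1 AS a1, 100 AS b1 FROM (VALUES(1));
> CREATE TABLE dfs.tmp.`t2` AS SELECT 2 AS a2, 100 AS b2 FROM (VALUES(1));
>
> -- Unqualified columns inside the EXISTS subquery hit the reported error:
> SELECT a1 FROM dfs.tmp.`t1`
> WHERE EXISTS (SELECT * FROM dfs.tmp.`t2` WHERE b1 = b2 AND a1 > a2)
> ORDER BY a1;
>
> -- Qualifying the correlated columns is the workaround shown above:
> SELECT a1 FROM dfs.tmp.`t1` t1
> WHERE EXISTS (SELECT * FROM dfs.tmp.`t2` t2 WHERE t1.b1 = t2.b2 AND t1.a1 > t2.a2)
> ORDER BY a1;
> {code}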
> From drillbit.log
> {code}
> 2015-05-11 22:44:57,895 [2aaecf16-2d9d-1548-1429-543fa1c79243:frag:0:0] INFO  o.a.drill.exec.work.foreman.Foreman - State change requested. RUNNING --> FAILED
> org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: java.lang.NumberFormatException: zzz
> Fragment 0:0
> [Error Id: 3fbdfc29-3fef-4968-b163-dbdefd45cdc6 on atsqa4-133.qa.lab:31010]
>     at org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:460) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.foreman.QueryManager$RootStatusReporter.statusChange(QueryManager.java:440) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.fragment.AbstractStatusReporter.fail(AbstractStatusReporter.java:90) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.fragment.AbstractStatusReporter.fail(AbstractStatusReporter.java:86) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:290) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:254) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_71]
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_71]
>     at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
> 2015-05-11 22:44:57,899 [2aaecf16-2d9d-1548-1429-543fa1c79243:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 2aaecf16-2d9d-1548-1429-543fa1c79243:0:0: State change requested from FAILED --> CANCELLATION_REQUESTED for
> 2015-05-11 22:44:57,899 [2aaecf16-2d9d-1548-1429-543fa1c79243:frag:0:0] WARN  o.a.d.e.w.fragment.FragmentExecutor - Ignoring unexpected state transition FAILED => CANCELLATION_REQUESTED.
> 2015-05-11 22:44:57,900 [2aaecf16-2d9d-1548-1429-543fa1c79243:frag:0:0] INFO  o.a.drill.exec.work.foreman.Foreman - foreman cleaning up.
> 2015-05-11 22:44:57,910 [2aaecf16-2d9d-1548-1429-543fa1c79243:frag:0:0] ERROR o.a.d.exec.work.foreman.QueryManager - Failure while storing Query Profile
> java.lang.RuntimeException: java.io.IOException: java.lang.InterruptedException
>     at org.apache.drill.exec.store.sys.local.FilePStore.put(FilePStore.java:148) ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.foreman.QueryManager.writeFinalProfile(QueryManager.java:286) ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:731) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:826) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:768) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.common.EventProcessor.sendEvent(EventProcessor.java:73) [drill-common-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.moveToState(Foreman.java:770) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:871) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.foreman.Foreman.access$2700(Foreman.java:107) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.foreman.Foreman$StateListener.moveToState(Foreman.java:1132) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:460) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.foreman.QueryManager$RootStatusReporter.statusChange(QueryManager.java:440) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.fragment.AbstractStatusReporter.fail(AbstractStatusReporter.java:90) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.fragment.AbstractStatusReporter.fail(AbstractStatusReporter.java:86) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:290) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:254) [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_71]
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_71]
>     at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
> Caused by: java.io.IOException: java.lang.InterruptedException
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:508) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.hadoop.util.Shell.run(Shell.java:418) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.hadoop.util.Shell.execCommand(Shell.java:739) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.hadoop.util.Shell.execCommand(Shell.java:722) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:676) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:942) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:923) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:820) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:809) ~[hadoop-common-2.4.1-mapr-1408.jar:na]
>     at org.apache.drill.exec.store.dfs.DrillFileSystem.create(DrillFileSystem.java:175) ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     at org.apache.drill.exec.store.sys.local.FilePStore.put(FilePStore.java:145) ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
>     ... 19 common frames omitted
> 2015-05-11 22:44:57,910 [2aaecf16-2d9d-1548-1429-543fa1c79243:frag:0:0] INFO  o.a.drill.exec.work.foreman.Foreman - State change requested. FAILED --> COMPLETED
> 2015-05-11 22:44:57,911 [2aaecf16-2d9d-1548-1429-543fa1c79243:frag:0:0] WARN  o.a.drill.exec.work.foreman.Foreman - Dropping request to move to COMPLETED state as query is already at FAILED state (which is terminal).
> {code}
> Explain plan:
> {code}
> 00-01      StreamAgg(group=[{0}])
> 00-02        Sort(sort0=[$0], dir0=[ASC])
> 00-03          Project(a1=[$0])
> 00-04            SelectionVectorRemover
> 00-05              Filter(condition=[NOT(IS NOT NULL($2))])
> 00-06                HashJoin(condition=[=($0, $1)], joinType=[left])
> 00-08                  Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:/drill/testdata/subqueries/t1]], selectionRoot=/drill/testdata/subqueries/t1, numFiles=1, columns=[`a1`]]])
> 00-07                  Project(a10=[$0], $f1=[$1])
> 00-09                    HashAgg(group=[{0}], agg#0=[MIN($1)])
> 00-10                      Project(a1=[$2], $f0=[true])
> 00-11                        HashJoin(condition=[=($0, $1)], joinType=[inner])
> 00-13                          Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:/drill/testdata/subqueries/t2]], selectionRoot=/drill/testdata/subqueries/t2, numFiles=1, columns=[`b2`]]])
> 00-12                          Project(b20=[$0], a1=[$1], $f2=[$2])
> 00-14                            HashAgg(group=[{0, 1}], agg#0=[MIN($2)])
> 00-15                              Project(b2=[$2], a1=[$3], $f0=[true])
> 00-16                                HashJoin(condition=[=($1, $3)], joinType=[inner])
> 00-18                                  HashJoin(condition=[=($0, $2)], joinType=[inner])
> 00-21                                    Project(b3=[$1], a3=[$0])
> 00-24                                      Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:/drill/testdata/subqueries/t3]], selectionRoot=/drill/testdata/subqueries/t3, numFiles=1, columns=[`b3`, `a3`]]])
> 00-20                                    HashAgg(group=[{0}])
> 00-23                                      Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:/drill/testdata/subqueries/t2]], selectionRoot=/drill/testdata/subqueries/t2, numFiles=1, columns=[`b2`]]])
> 00-17                                  StreamAgg(group=[{0}])
> 00-19                                    Sort(sort0=[$0], dir0=[ASC])
> 00-22                                      Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:/drill/testdata/subqueries/t1]], selectionRoot=/drill/testdata/subqueries/t1, numFiles=1, columns=[`a1`]]])
> {code}