[
https://issues.apache.org/jira/browse/DRILL-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16572610#comment-16572610
]
ASF GitHub Bot commented on DRILL-6453:
---------------------------------------
ilooner commented on a change in pull request #1408: DRILL-6453: Resolve
deadlock when reading from build and probe sides simultaneously in HashJoin
URL: https://github.com/apache/drill/pull/1408#discussion_r208445526
##########
File path: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/HashJoinBatch.java
##########
@@ -248,95 +257,134 @@ protected void buildSchema() throws SchemaChangeException {
}
}
- @Override
- protected boolean prefetchFirstBatchFromBothSides() {
- if (leftUpstream != IterOutcome.NONE) {
- // We can only get data if there is data available
- leftUpstream = sniffNonEmptyBatch(leftUpstream, LEFT_INDEX, left);
- }
-
- if (rightUpstream != IterOutcome.NONE) {
- // We can only get data if there is data available
- rightUpstream = sniffNonEmptyBatch(rightUpstream, RIGHT_INDEX, right);
- }
-
- buildSideIsEmpty = rightUpstream == IterOutcome.NONE;
-
- if (verifyOutcomeToSetBatchState(leftUpstream, rightUpstream)) {
- // For build side, use aggregate i.e. average row width across batches
- batchMemoryManager.update(LEFT_INDEX, 0);
- batchMemoryManager.update(RIGHT_INDEX, 0, true);
-
- logger.debug("BATCH_STATS, incoming left: {}",
batchMemoryManager.getRecordBatchSizer(LEFT_INDEX));
- logger.debug("BATCH_STATS, incoming right: {}",
batchMemoryManager.getRecordBatchSizer(RIGHT_INDEX));
-
- // Got our first batch(es)
- state = BatchState.FIRST;
- return true;
- } else {
- return false;
- }
- }
/**
* Sniffs all data necessary to construct a schema.
* @return True if all the data necessary to construct a schema has been retrieved. False otherwise.
*/
private boolean sniffNewSchemas() {
+ leftUpstream = sniffNewSchema(LEFT_INDEX,
+ left,
+ () -> probeSchema = left.getSchema());
+
+ rightUpstream = sniffNewSchema(RIGHT_INDEX,
+ right,
+ () -> {
+ // We need to have the schema of the build side even when the build side is empty
+ buildSchema = right.getSchema();
+ // position of the new "column" for keeping the hash values (after the
real columns)
+ rightHVColPosition = right.getContainer().getNumberOfColumns();
+ });
+
+ // Left and right sides must return a valid response and both sides cannot be NONE.
+ return (!leftUpstream.isError() && !rightUpstream.isError()) &&
+ (leftUpstream != IterOutcome.NONE && rightUpstream != IterOutcome.NONE);
+ }
+
+ private IterOutcome sniffNewSchema(final int index,
+ final RecordBatch batch,
+ final Runnable schemaSetter) {
Review comment:
I have removed this method.
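For context on the pattern being discussed: the new code sniffs each side once, uses a Runnable callback to record side-specific schema state, and then checks that neither side errored and neither returned NONE. A minimal, self-contained sketch of that shape, using simplified stand-in types rather than Drill's actual IterOutcome/RecordBatch classes, could look like this:
{noformat}
// Minimal sketch of the per-side schema-sniffing callback pattern.
// The types below (Outcome, Upstream) are simplified stand-ins and are NOT
// Drill's real IterOutcome/RecordBatch API.
public class SchemaSniffSketch {

  // Stand-in for Drill's IterOutcome: a new schema, no data, or an error.
  enum Outcome {
    OK_NEW_SCHEMA, NONE, STOP;

    boolean isError() {
      return this == STOP;
    }
  }

  // Stand-in for an upstream record batch.
  interface Upstream {
    Outcome next();     // advance this side
    String schema();    // schema of the current batch
  }

  private String probeSchema;  // set from the left (probe) side
  private String buildSchema;  // set from the right (build) side

  // Sniff one side; when a new schema arrives, let the caller-supplied
  // Runnable record whatever side-specific state it needs.
  private Outcome sniffNewSchema(Upstream side, Runnable schemaSetter) {
    Outcome outcome = side.next();
    if (outcome == Outcome.OK_NEW_SCHEMA) {
      schemaSetter.run();
    }
    return outcome;
  }

  // Mirror of the new sniffNewSchemas(): both sides must answer without an
  // error and neither side may be NONE before a schema can be built.
  public boolean sniffNewSchemas(Upstream left, Upstream right) {
    Outcome leftOutcome = sniffNewSchema(left, () -> probeSchema = left.schema());
    Outcome rightOutcome = sniffNewSchema(right, () -> buildSchema = right.schema());

    return !leftOutcome.isError() && !rightOutcome.isError()
        && leftOutcome != Outcome.NONE && rightOutcome != Outcome.NONE;
  }
}
{noformat}
The Runnable keeps the per-side bookkeeping (probe schema on the left; build schema and the hash-value column position on the right) out of the shared sniffing logic; that shared helper is what the review comment above says was removed in a later revision of the pull request.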
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> TPC-DS query 72 has regressed
> -----------------------------
>
> Key: DRILL-6453
> URL: https://issues.apache.org/jira/browse/DRILL-6453
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Flow
> Affects Versions: 1.14.0
> Reporter: Khurram Faraaz
> Assignee: Timothy Farkas
> Priority: Blocker
> Fix For: 1.15.0
>
> Attachments: 24f75b18-014a-fb58-21d2-baeab5c3352c.sys.drill,
> jstack_29173_June_10_2018.txt, jstack_29173_June_10_2018_b.txt,
> jstack_29173_June_10_2018_c.txt, jstack_29173_June_10_2018_d.txt,
> jstack_29173_June_10_2018_e.txt
>
>
> TPC-DS query 72 seems to have regressed; the query profile for the case where it
> was canceled after 2 hours on Drill 1.14.0 is attached here.
> {noformat}
> On Drill 1.14.0-SNAPSHOT
> commit : 931b43e (TPC-DS query 72 executed successfully on this commit, took around 55 seconds to execute)
> SF1 parquet data on 4 nodes;
> planner.memory.max_query_memory_per_node = 10737418240.
> drill.exec.hashagg.fallback.enabled = true
> TPC-DS query 72 executed successfully & took 47 seconds to complete execution.
> {noformat}
> {noformat}
> TPC-DS data in the run below has date values stored as the DATE datatype, not as VARCHAR.
> On Drill 1.14.0-SNAPSHOT
> commit : 82e1a12
> SF1 parquet data on 4 nodes;
> planner.memory.max_query_memory_per_node = 10737418240.
> drill.exec.hashagg.fallback.enabled = true
> and
> alter system set `exec.hashjoin.num_partitions` = 1;
> TPC-DS query 72 executed for 2 hrs and 11 mins and did not complete; I had to
> cancel it by stopping the Foreman drillbit.
> As a result, several minor fragments are reported to be in the
> CANCELLATION_REQUESTED state in the UI.
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)