Github user hvanhovell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7379#discussion_r34862439
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/joins/BroadcastRangeJoin.scala ---
    @@ -0,0 +1,411 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.sql.execution.joins
    +
    +import scala.annotation.tailrec
    +import scala.collection.mutable
    +import scala.concurrent._
    +import scala.concurrent.duration._
    +
    +import org.apache.spark.annotation.DeveloperApi
    +import org.apache.spark.rdd.RDD
    +import org.apache.spark.sql.Row
    +import org.apache.spark.sql.catalyst.InternalRow
    +import org.apache.spark.sql.catalyst.expressions._
    +import org.apache.spark.sql.catalyst.util.TypeUtils
    +import org.apache.spark.sql.execution.{BinaryNode, SparkPlan}
    +import org.apache.spark.util.ThreadUtils
    +
    +/**
    + * Performs an inner range join on two tables. A range join typically has the following form:
    + *
    + * SELECT A.*
    + *        ,B.*
    + * FROM   tableA A
    + *        JOIN tableB B
    + *         ON A.start <= B.end
    + *          AND A.end > B.start
    + *
    + * The implementation builds a range index from the smaller (build) side and broadcasts this
    + * index to all executors. The streaming side is then matched against the index. This reduces
    + * the number of comparisons per streamed row from O(n) to O(log(n)) (n is the number of
    + * records in the build table) compared to the typical solution (Nested Loop Join).
    + *
    + * TODO NaN values
    + * TODO NULL values
    + * TODO Outer joins? StreamSide is quite easy; BuildSide requires bookkeeping.
    + * TODO This join will maintain sort order. The build side rows will also be added in a lower
    + *      bound sorted fashion.
    + */
    +@DeveloperApi
    +case class BroadcastRangeJoin(
    +    leftKeys: Seq[Expression],
    +    rightKeys: Seq[Expression],
    +    equality: Seq[Boolean],
    +    buildSide: BuildSide,
    +    left: SparkPlan,
    +    right: SparkPlan)
    +  extends BinaryNode {
    +
    +  private[this] lazy val (buildPlan, streamedPlan) = buildSide match {
    +    case BuildLeft => (left, right)
    +    case BuildRight => (right, left)
    +  }
    +
    +  private[this] lazy val (buildKeys, streamedKeys) = buildSide match {
    +    case BuildLeft => (leftKeys, rightKeys)
    +    case BuildRight => (rightKeys, leftKeys)
    +  }
    +
    +  override def output: Seq[Attribute] = left.output ++ right.output
    +
    +  @transient
    +  private[this] lazy val buildSideKeyGenerator: Projection =
    +    newProjection(buildKeys, buildPlan.output)
    +
    +  @transient
    +  private[this] lazy val streamSideKeyGenerator: () => MutableProjection =
    +    newMutableProjection(streamedKeys, streamedPlan.output)
    +
    +  private[this] val timeout: Duration = {
    +    val timeoutValue = sqlContext.conf.broadcastTimeout
    +    if (timeoutValue < 0) {
    +      Duration.Inf
    +    } else {
    +      timeoutValue.seconds
    +    }
    +  }
    +
    +  // Construct the range index.
    +  @transient
    +  private[this] val indexBroadcastFuture = future {
    +    // Deal with equality.
    +    val Seq(allowLowEqual: Boolean, allowHighEqual: Boolean) = buildSide match {
    +      case BuildLeft => equality.reverse
    +      case BuildRight => equality
    +    }
    +
    +    // Get the ordering for the datatype.
    +    val ordering = TypeUtils.getOrdering(buildKeys.head.dataType)
    +
    +    // Note that we use .execute().collect() because we don't want to convert data to Scala types
    +    // TODO find out if the result of a sort and a collect is still sorted.
    +    val eventifier = RangeIndex.toRangeEvent(buildSideKeyGenerator, ordering)
    +    val events = buildPlan.execute().map(_.copy()).collect().flatMap(eventifier)
    +
    +    // Create the index.
    +    val index = RangeIndex.build(ordering, events, allowLowEqual, allowHighEqual)
    +
    +    // Broadcast the index.
    +    sparkContext.broadcast(index)
    +  }(BroadcastRangeJoin.broadcastRangeJoinExecutionContext)
    +
    +  override def doExecute(): RDD[InternalRow] = {
    +    // Construct the range index.
    +    val indexBC = Await.result(indexBroadcastFuture, timeout)
    +
    +    // Iterate over the streaming relation.
    +    streamedPlan.execute().mapPartitions { stream =>
    +      new Iterator[InternalRow] {
    +        private[this] val index = indexBC.value
    +        private[this] val streamSideKeys = streamSideKeyGenerator()
    +        private[this] val join = new JoinedRow2 // TODO create our own join row...
    +        private[this] var row: InternalRow = EmptyRow
    +        private[this] var iterator: Iterator[InternalRow] = Iterator.empty
    +
    +        override final def hasNext: Boolean = {
    +          var result = iterator.hasNext
    --- End diff --
    
    Multiple calls to hasNext shouldn't be a problem. Granted, the first call can have a side effect (updating the state of the iterator), but subsequent calls won't.
    
    A problem will occur when next is called without calling hasNext first. I was inspired by the HashedRelation class in the same package when writing this.
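    
    For illustration, here is a minimal, self-contained sketch of the iterator contract under discussion (a hypothetical FlatteningIterator, not the PR's actual class): hasNext is idempotent after its first, state-advancing call, while next() silently depends on hasNext having advanced the state.
    
        // Minimal sketch (hypothetical names): an iterator whose hasNext lazily
        // advances the underlying state. The first call may pull rows from
        // `stream`; once it returns true, repeated calls are side-effect free.
        class FlatteningIterator[T, R](stream: Iterator[T], matches: T => Iterator[R])
          extends Iterator[R] {
    
          private[this] var current: Iterator[R] = Iterator.empty
    
          override final def hasNext: Boolean = {
            // No-op if `current` still has elements; otherwise skip rows without
            // matches until one is found or the stream is exhausted.
            while (!current.hasNext && stream.hasNext) {
              current = matches(stream.next())
            }
            current.hasNext
          }
    
          // Calling next() without hasNext() first can throw even when elements
          // remain, because `current` is only advanced inside hasNext().
          override final def next(): R = current.next()
        }
    
        // Usage: rows 1 and 3 each produce one match, row 2 produces none.
        val it = new FlatteningIterator[Int, Int](Iterator(1, 2, 3), n => Iterator.fill(n % 2)(n))
        while (it.hasNext) println(it.next()) // prints 1, then 3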

