GitHub user jose-torres commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20828#discussion_r179816059
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousMemoryStream.scala ---
    @@ -0,0 +1,212 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.sql.execution.streaming.continuous
    +
    +import java.{util => ju}
    +import java.util.Optional
    +import java.util.concurrent.ArrayBlockingQueue
    +import javax.annotation.concurrent.GuardedBy
    +
    +import scala.collection.JavaConverters._
    +import scala.collection.mutable.ListBuffer
    +import scala.reflect.ClassTag
    +
    +import org.json4s.NoTypeHints
    +import org.json4s.jackson.Serialization
    +
    +import org.apache.spark.SparkEnv
    +import org.apache.spark.internal.Logging
    +import org.apache.spark.rpc.{RpcCallContext, RpcEndpointRef, RpcEnv, ThreadSafeRpcEndpoint}
    +import org.apache.spark.sql.{Dataset, Encoder, Row, SQLContext}
    +import org.apache.spark.sql.catalyst.encoders.encoderFor
    +import org.apache.spark.sql.catalyst.expressions.UnsafeRow
    +import org.apache.spark.sql.execution.streaming._
    +import org.apache.spark.sql.sources.v2.{ContinuousReadSupport, DataSourceOptions}
    +import org.apache.spark.sql.sources.v2.reader.{DataReader, DataReaderFactory, SupportsScanUnsafeRow}
    +import org.apache.spark.sql.sources.v2.reader.streaming.{ContinuousDataReader, ContinuousReader, Offset, PartitionOffset}
    +import org.apache.spark.sql.types.StructType
    +import org.apache.spark.util.RpcUtils
    +
    +/**
    + * The overall strategy here is:
    + *  * ContinuousMemoryStream maintains a list of records for each partition. addData() will
    + *    distribute records evenly-ish across partitions.
    + *  * ContinuousMemoryStreamRecordBuffer is set up as an endpoint for partition-level
    + *    ContinuousMemoryStreamDataReader instances to poll. It returns the record at the specified
    --- End diff --
    
    The polling comes from the "partition-level ContinuousMemoryStreamDataReader instances" mentioned here; I can say "executor-side" instead.
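    
    For reference, here is a minimal, self-contained sketch of the buffer-and-poll pattern the scaladoc describes. This is not the PR's implementation; all names (RecordBufferSketch, poll) are illustrative only. The driver keeps one record list per partition, addData() round-robins records across them, and each executor-side reader asks for the record at its (partition, index):
    
    import scala.collection.mutable.ListBuffer
    
    // Hypothetical stand-in for the driver-side record buffer; in the real
    // stream the poll would arrive over Spark's RPC layer (hence the
    // ThreadSafeRpcEndpoint import above) rather than as a direct method call.
    class RecordBufferSketch[A](numPartitions: Int) {
      // One record list per partition, as the scaladoc describes.
      private val buffers = Seq.fill(numPartitions)(new ListBuffer[A])
      private var nextPartition = 0
    
      // addData() distributes records "evenly-ish" by round-robining them.
      def addData(records: Seq[A]): Unit = synchronized {
        records.foreach { r =>
          buffers(nextPartition) += r
          nextPartition = (nextPartition + 1) % numPartitions
        }
      }
    
      // What an executor-side reader asks the driver endpoint: the record at
      // `index` within `partition`, or None if nothing has arrived there yet.
      def poll(partition: Int, index: Int): Option[A] = synchronized {
        buffers(partition).lift(index)
      }
    }
    
    object RecordBufferSketchDemo {
      def main(args: Array[String]): Unit = {
        val buffer = new RecordBufferSketch[String](numPartitions = 2)
        buffer.addData(Seq("a", "b", "c")) // a -> p0, b -> p1, c -> p0
        assert(buffer.poll(partition = 0, index = 0).contains("a"))
        assert(buffer.poll(partition = 1, index = 0).contains("b"))
        assert(buffer.poll(partition = 0, index = 1).contains("c"))
        assert(buffer.poll(partition = 1, index = 1).isEmpty) // not written yet
      }
    }
    
    Each reader would own one partition and advance its index, retrying the poll until the next record shows up, which is what makes the endpoint a natural fit for an RPC ask pattern.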

