Github user JoshRosen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7710#discussion_r35699204
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformation.scala ---
    @@ -68,9 +71,44 @@ case class ScriptTransformation(
           val errorStream = proc.getErrorStream
           val reader = new BufferedReader(new InputStreamReader(inputStream))
     
    -      val (outputSerde, outputSoi) = ioschema.initOutputSerDe(output)
    +      // TODO make the 2048 configurable?
    +      val stderrBuffer = new CircularBuffer(2048)
    +
     +      // Consume the error stream from the pipeline; otherwise the process
     +      // will block once the pipe's buffer fills up.
    +      new RedirectThread(errorStream, // input stream from the pipeline
    +        stderrBuffer,                 // output to a circular buffer
    +        "Thread-ScriptTransformation-STDERR-Consumer").start()
    +
    +      val outputProjection = new InterpretedProjection(input, child.output)
    +
     +      // This nullability is a performance optimization in order to avoid
     +      // an Option.foreach() call inside of a loop
     +      @Nullable val (inputSerde, inputSoi) = ioschema.initInputSerDe(input).getOrElse((null, null))
    +
     +      // Perform the write (output to the pipeline) on a separate thread
     +      // and keep the collector in the main thread; otherwise it will
     +      // deadlock if the data size is greater than the pipeline / buffer
     +      // capacity.
    +      val writerThread = new ScriptTransformationWriterThread(
    +        inputIterator,
    +        outputProjection,
    +        inputSerde,
    +        inputSoi,
    +        ioschema,
    +        outputStream,
    +        proc,
    +        stderrBuffer,
    +        TaskContext.get()
    +      )
    +
     +      // This nullability is a performance optimization in order to avoid
     +      // an Option.foreach() call inside of a loop
    +      @Nullable val (outputSerde, outputSoi) = {
    +        ioschema.initOutputSerDe(output).getOrElse((null, null))
    +      }
     
     -      val iterator: Iterator[InternalRow] = new Iterator[InternalRow] with HiveInspectors {
     +      val outputIterator: Iterator[InternalRow] = new Iterator[InternalRow] with HiveInspectors {
    --- End diff --
    
    I'm not sure. It's possible that the process might be leaked. I don't think that this behavior is affected by this patch, though, so let's follow up on it in a later patch (we can test this out with a SparkPlanTest test case which places a Limit around a ScriptTransformation).
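    The deadlock the diff comments guard against is the classic pipe-buffer problem: if the parent writes all of the child's stdin before reading any of its stdout (or never drains stderr), both sides block once the OS pipe buffers (typically ~64 KiB) fill. A minimal Python sketch of the same pattern — stdin written on a helper thread, stderr drained into a bounded tail mimicking the patch's 2048-byte CircularBuffer, stdout collected on the calling thread; `cat` here is a stand-in for the user's transform script and is an assumption of this sketch, not part of the patch:

    ```python
    import subprocess
    import threading

    def run_with_drained_stderr(cmd, payload: bytes) -> bytes:
        """Sketch of the deadlock-avoidance pattern: write stdin on one
        thread, drain stderr on another, collect stdout on this thread."""
        proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)

        def write_stdin():
            # Analogous to ScriptTransformationWriterThread: feed the child
            # and close its stdin so it sees EOF and can finish.
            try:
                proc.stdin.write(payload)
            finally:
                proc.stdin.close()

        stderr_tail = bytearray()

        def drain_stderr():
            # Analogous to RedirectThread -> CircularBuffer(2048): keep
            # consuming stderr, retaining only the last 2048 bytes.
            for chunk in iter(lambda: proc.stderr.read(4096), b""):
                stderr_tail.extend(chunk)
                del stderr_tail[:-2048]

        threads = [threading.Thread(target=write_stdin, name="STDIN-Writer"),
                   threading.Thread(target=drain_stderr, name="STDERR-Consumer")]
        for t in threads:
            t.start()
        # Reading stdout here, concurrently with the writer thread, is what
        # prevents the deadlock: neither pipe can fill up unread.
        out = proc.stdout.read()
        for t in threads:
            t.join()
        proc.wait()
        return out

    # 1 MiB is larger than a typical OS pipe buffer, so a naive
    # write-everything-then-read approach would deadlock here.
    echoed = run_with_drained_stderr(["cat"], b"x" * (1 << 20))
    print(len(echoed))
    ```

    If the writer instead ran on the main thread before any reads, `cat` would stall writing its full stdout, the parent would stall writing stdin, and neither could make progress — the exact scenario the patch's comment describes.
    
    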


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

Reply via email to