Github user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20097#discussion_r160251566
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala ---
    @@ -392,6 +443,21 @@ class MicroBatchExecution(
               cd.dataType, cd.timeZoneId)
         }
     
    +    val triggerLogicalPlan = sink match {
    +      case _: Sink => newAttributePlan
    +      case s: MicroBatchWriteSupport =>
    +        val writer = s.createMicroBatchWriter(
    +          s"$runId",
    +          currentBatchId,
    +          newAttributePlan.schema,
    +          outputMode,
    +          new DataSourceV2Options(extraOptions.asJava))
    +        Option(writer.orElse(null)).map(WriteToDataSourceV2(_, newAttributePlan)).getOrElse {
    --- End diff ---
    
    The writer can be empty: if the data does not need to be written, the returned writer can be None (see the docs of createWriter). So just documenting that here is fine, to avoid confusion like this.
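    
    To make that contract concrete, here is a minimal, self-contained sketch of the Optional-to-Option bridge the diff relies on. The `DataSourceWriter` and `WriteToV2` names below are hypothetical stand-ins, not the real Spark classes:
    
        import java.util.Optional
    
        // Hypothetical stand-ins for the Spark types used in the diff.
        trait DataSourceWriter
        case class WriteToV2(writer: DataSourceWriter, plan: String)
    
        // An empty Optional means nothing needs to be written this batch:
        // Option(x.orElse(null)) turns the java.util.Optional into a
        // scala.Option, so the plan falls through unwrapped instead of
        // being wrapped in a WriteToV2 node.
        def planBatch(writer: Optional[DataSourceWriter], plan: String): Any =
          Option(writer.orElse(null)).map(WriteToV2(_, plan)).getOrElse(plan)
    
    For example, `planBatch(Optional.empty(), plan)` returns the plan unchanged, while a present writer wraps it in a `WriteToV2` node.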

