aokolnychyi commented on a change in pull request #35374:
URL: https://github.com/apache/spark/pull/35374#discussion_r796124909



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala
##########
@@ -19,27 +19,31 @@ package org.apache.spark.sql.execution.streaming.sources
 
 import org.apache.spark.sql.catalyst.expressions.Attribute
 import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, UnaryNode}
-import org.apache.spark.sql.connector.metric.CustomMetric
-import org.apache.spark.sql.connector.write.streaming.StreamingWrite
-import org.apache.spark.sql.execution.datasources.v2.{DataSourceV2Relation, WriteToDataSourceV2}
+import org.apache.spark.sql.connector.catalog.SupportsWrite
+import org.apache.spark.sql.execution.datasources.v2.DataSourceV2Relation
+import org.apache.spark.sql.streaming.OutputMode
 
 /**
  * The logical plan for writing data to a micro-batch stream.
  *
 * Note that this logical plan does not have a corresponding physical plan, as it will be converted
- * to [[WriteToDataSourceV2]] with [[MicroBatchWrite]] before execution.
+ * to [[org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2 WriteToDataSourceV2]]

Review comment:
       I did not want to modify this place, but the style checker started complaining about the `WriteToDataSourceV2` import: it is now referenced only from the doc comment, since I removed the method that used it. Using the fully qualified link form lets the import be dropped.
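For context, a minimal sketch of the two Scaladoc link styles involved (package and class names here are hypothetical, not from the PR): a short `[[Name]]` link resolves through an import, so the import must stay even when no code references the class, which trips unused-import style checks; a fully qualified `[[fq.Name Label]]` link needs no import.

```scala
package com.example.docs // hypothetical package, for illustration only

/**
 * Style A (short link): `[[Converted]]` would resolve via an import such as
 * `import com.example.other.Converted`. If no code references `Converted`,
 * that import exists only for the doc, and unused-import checkers flag it.
 *
 * Style B (fully qualified link with a display label), the form used in the
 * diff above: converted to [[com.example.other.Converted Converted]] before
 * execution. No import is required, so the checker stays quiet.
 */
object LinkStyleSketch
```

The fully qualified form is slightly noisier in the source but keeps the import list limited to symbols the code actually uses.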




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


