WweiL commented on code in PR #40887:
URL: https://github.com/apache/spark/pull/40887#discussion_r1173144019


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala:
##########
@@ -217,6 +217,20 @@ final class DataStreamReader private[sql] (sparkSession: SparkSession) extends L
    */
   def parquet(path: String): DataFrame = format("parquet").load(path)
 
+  /**
+   * Define a Streaming DataFrame on a Table. The DataSource corresponding to the table should
+   * support streaming mode.
+   * @param tableName The name of the table
+   * @since 3.5.0
+   */
+  def table(tableName: String): DataFrame = {
+    require(tableName != null, "The table name can't be null")
+    sparkSession.newDataFrame { builder =>
+      builder.getReadBuilder.setIsStreaming(true).getNamedTableBuilder

Review Comment:
   On second thought, I think we do need to set options; I've updated the code. I had a hard time finding an example option for `table`, though.
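
   To illustrate the point about options, here is a hedged caller-side sketch of how reader options would combine with `table` through the public `DataStreamReader` API. The session value `spark`, the option key, and the table name are all illustrative assumptions, not part of this PR:

   ```scala
   // Sketch only: assumes a running Spark Connect session bound to `spark`.
   // `option(...)` and `table(...)` are existing DataStreamReader methods;
   // the key "maxFilesPerTrigger" and the table name are placeholder examples.
   val df = spark.readStream
     .option("maxFilesPerTrigger", "1")  // example reader option to carry through
     .table("my_db.my_table")            // hypothetical table name

   df.isStreaming  // expected to be true for a streaming table read
   ```

   The sketch shows why the proto builder needs to carry options: anything set via `option` before `table` is called must survive into the read plan.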



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

