bozhang2820 commented on a change in pull request #31590:
URL: https://github.com/apache/spark/pull/31590#discussion_r579082986
##########
File path: docs/structured-streaming-programming-guide.md
##########
@@ -712,6 +712,8 @@ csvDF <- read.stream("csv", path = "/path/to/directory",
schema = schema, sep =
These examples generate streaming DataFrames that are untyped, meaning that
the schema of the DataFrame is not checked at compile time, only checked at
runtime when the query is submitted. Some operations like `map`, `flatMap`,
etc. need the type to be known at compile time. To do those, you can convert
these untyped streaming DataFrames to typed streaming Datasets using the same
methods as static DataFrame. See the [SQL Programming
Guide](sql-programming-guide.html) for more details. Additionally, more details
on the supported streaming sources are discussed later in the document.
+Alternatively, since Spark 3.1, you can create streaming DataFrames with
`DataStreamReader.table()`. See [Streaming Table APIs](#streaming-table-apis)
for more details.
Review comment:
Makes sense. Will go with the one you suggested.
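
For reference, a minimal Scala sketch of the two approaches the quoted paragraph describes: converting an untyped streaming DataFrame to a typed Dataset, and creating a streaming DataFrame from a table via `DataStreamReader.table()` (Spark 3.1+). The `Event` case class, the schema string, and the `events` table name are illustrative assumptions, not taken from this PR.

```scala
// Minimal sketch only; "Event", the schema string, and the "events" table
// name are illustrative assumptions, not part of this PR.
import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}

case class Event(id: Long, name: String)

val spark: SparkSession = SparkSession.builder().appName("sketch").getOrCreate()
import spark.implicits._

// Untyped streaming DataFrame from a file source; the schema is only
// checked when the streaming query starts.
val csvDF: DataFrame = spark.readStream
  .schema("id LONG, name STRING")
  .option("sep", ";")
  .csv("/path/to/directory")

// Convert to a typed streaming Dataset with as[T], the same way as for
// a static DataFrame.
val csvDS: Dataset[Event] = csvDF.as[Event]

// Since Spark 3.1, a streaming DataFrame can also be created from a table.
val tableDF: DataFrame = spark.readStream.table("events")
```

The last line uses the API this doc change documents, which the new text points readers to via the Streaming Table APIs section.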