JingGe commented on a change in pull request #18660:
URL: https://github.com/apache/flink/pull/18660#discussion_r802513891



##########
File path: docs/content/docs/connectors/datastream/formats/parquet.md
##########
@@ -44,9 +44,11 @@ Thus, you can use this format in two ways:
 - Bounded read for batch mode
 - Continuous read for streaming mode: monitors a directory for new files that appear
 
-**Bounded read example**:
+## Flink RowData
 
-In this example we create a DataStream containing Parquet records as Flink Rows. We project the schema to read only certain fields ("f7", "f4" and "f99").
 
+#### Bounded read example

Review comment:
       hmm, even on that page two different names are used: "Unbounded and Bounded Data" vs. "Stream". Using "batch" would follow the vision of unified batch and stream processing.
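For readers following this thread, here is a minimal sketch of the bounded read the diff above describes: building a DataStream of Parquet records with the schema projected down to the fields "f7", "f4" and "f99". It assumes the `ParquetColumnarRowInputFormat` constructor as documented around Flink 1.14 (the parameter list differs between releases), and the input path `/tmp/parquet-input` is a placeholder.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.FileSourceSplit;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.ParquetColumnarRowInputFormat;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.DoubleType;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;

public class ParquetBoundedReadExample {

    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Project the Parquet schema down to the three fields we want to read.
        final LogicalType[] fieldTypes =
                new LogicalType[] {new DoubleType(), new IntType(), new VarCharType()};

        final ParquetColumnarRowInputFormat<FileSourceSplit> format =
                new ParquetColumnarRowInputFormat<>(
                        new Configuration(),
                        RowType.of(fieldTypes, new String[] {"f7", "f4", "f99"}),
                        500,      // batch size
                        false,    // no UTC timestamp normalization
                        true);    // case-sensitive field matching

        // Bounded read: the source reads the files present at job start and then finishes.
        // Adding .monitorContinuously(Duration.ofSeconds(5)) before build() would turn this
        // into the continuous (streaming) variant that watches the directory for new files.
        final FileSource<RowData> source =
                FileSource.forBulkFileFormat(format, new Path("/tmp/parquet-input"))
                        .build();

        final DataStream<RowData> stream =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "parquet-file-source");

        stream.print();
        env.execute("Parquet bounded read");
    }
}
```

Run in batch execution mode this corresponds to the "bounded read for batch mode" bullet; with continuous monitoring enabled it becomes the streaming variant from the second bullet.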




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
