fapaul commented on a change in pull request #17656:
URL: https://github.com/apache/flink/pull/17656#discussion_r742857348



##########
File path: flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/src/impl/FileRecordFormatAdapterTest.java
##########
@@ -37,6 +37,7 @@
 
 /** Unit and behavior tests for the {@link FileRecordFormatAdapter}. */
 @SuppressWarnings("serial")
+@Deprecated

Review comment:
       Nit: I do not think it is necessary to deprecate the test.

##########
File path: flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/src/reader/FileRecordFormat.java
##########
@@ -34,7 +34,14 @@
 import java.io.Serializable;
 
 /**
- * A reader format that reads individual records from a file.
+ * This interface is Deprecated, use {@link StreamFormat} instead. The main motivation for removing
+ * it is the inherent design flaw in the batching of FileRecordFormat: StreamFormat can guarantee
+ * that only a certain amount of memory is being used (unless a single record exceeds that already),
+ * but FileRecordFormat can only batch by the number of records. By removing FileRecordFormat, we
+ * relay the responsibility of implementing the batching to the format developer; they need to use
+ * BulkFormat and find a better way than batch by number of records.

Review comment:
       I am not sure we really need this docstring here. If someone is interested in the deprecation reason, they can open the ticket, which is usually linked in the commit message.

##########
File path: flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/src/impl/FileRecordFormatAdapter.java
##########
@@ -37,7 +37,12 @@
 import static org.apache.flink.util.Preconditions.checkArgument;
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
-/** The FormatReaderAdapter turns a {@link FileRecordFormat} into a {@link BulkFormat}. */
+/**
+ * This interface is Deprecated, use {@link StreamFormatAdapter} instead.

Review comment:
       @JingGe You can take a look at https://github.com/apache/flink/blob/1b4490654e0438332c2a4cac679c05a321d3f34c/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer.java#L100 to see how it is done in other places.
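
       For reference, the pattern there is roughly the following (a minimal, hypothetical sketch — `FileRecordFormatExample` is not real Flink code; the point is pairing a `@deprecated` Javadoc tag that names the replacement with the `@Deprecated` annotation, rather than writing a long motivation in the class docstring):

```java
/**
 * Hedged sketch, not actual Flink code: illustrates the Javadoc deprecation
 * pattern used in FlinkKafkaProducer -- a short {@code @deprecated} tag naming
 * the replacement, paired with the {@code @Deprecated} annotation.
 *
 * @deprecated Please use {@code StreamFormat} instead.
 */
@Deprecated
class FileRecordFormatExample {
    public static void main(String[] args) {
        // java.lang.Deprecated has runtime retention, so the marker is
        // visible to tools and via reflection.
        System.out.println(
                FileRecordFormatExample.class.isAnnotationPresent(Deprecated.class));
    }
}
```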




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
