PatrickRen commented on code in PR #140:
URL: https://github.com/apache/flink-connector-kafka/pull/140#discussion_r1907576100


##########
flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/KafkaContextAware.java:
##########
@@ -26,11 +26,8 @@
  *
  * <p>You only need to override the methods for the information that you need. However, {@link
  * #getTargetTopic(Object)} is required because it is used to determine the available partitions.
- *
- * @deprecated Will be turned into internal API when {@link FlinkKafkaProducer} is removed.
  */
-@PublicEvolving
-@Deprecated
+@Internal
 public interface KafkaContextAware<T> {

Review Comment:
   Actually this class is not used anymore. The only reference to it is in the javadoc of `KafkaSerializationSchema`, and `KafkaSerializationSchema` itself is only referenced by a few other javadocs.



##########
flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/KafkaSerializationSchema.java:
##########
@@ -35,10 +35,8 @@
  * which the Kafka Producer is running.
  *
  * @param <T> the type of values being serialized
- * @deprecated Will be turned into internal API when {@link FlinkKafkaProducer} is removed.
  */
-@PublicEvolving
-@Deprecated
+@Internal
 public interface KafkaSerializationSchema<T> extends Serializable {

Review Comment:
   As explained above, this can be removed.



##########
flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/KafkaDeserializationSchema.java:
##########
@@ -31,10 +31,8 @@
  * (Java/Scala objects) that are processed by Flink.
  *
  * @param <T> The type created by the keyed deserialization schema.
- * @deprecated Will be turned into internal API when {@link FlinkKafkaConsumer} is removed.
  */
-@PublicEvolving
-@Deprecated
+@Internal
 public interface KafkaDeserializationSchema<T> extends Serializable, ResultTypeQueryable<T> {

Review Comment:
   I think `KafkaDeserializationSchema` can be deleted as well. There are some references in the table connector, but they can be rewritten to use `KafkaRecordDeserializationSchema`.
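
   To make the suggested rewrite concrete, a `KafkaDeserializationSchema` implementation could be ported to the collector-based `KafkaRecordDeserializationSchema` roughly as sketched below. This is a hedged illustration, not code from this PR; `MyValueDeserializer` is a hypothetical example class.

   ```java
   import java.io.IOException;
   import java.nio.charset.StandardCharsets;

   import org.apache.flink.api.common.typeinfo.TypeInformation;
   import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
   import org.apache.flink.util.Collector;
   import org.apache.kafka.clients.consumer.ConsumerRecord;

   // Hypothetical migration target: instead of implementing the deprecated
   // KafkaDeserializationSchema<T> (which returns one element per record),
   // implement KafkaRecordDeserializationSchema<T> and push results into the
   // Collector, which is the interface KafkaSource consumes.
   public class MyValueDeserializer implements KafkaRecordDeserializationSchema<String> {

       @Override
       public void deserialize(ConsumerRecord<byte[], byte[]> record, Collector<String> out)
               throws IOException {
           // Emit only the record value, decoded as UTF-8.
           out.collect(new String(record.value(), StandardCharsets.UTF_8));
       }

       @Override
       public TypeInformation<String> getProducedType() {
           return TypeInformation.of(String.class);
       }
   }
   ```

   For the common value-only case, the static factory `KafkaRecordDeserializationSchema.valueOnly(...)` wrapping a plain `DeserializationSchema` would avoid a hand-written class entirely.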



##########
flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaShortRetentionTestBase.java:
##########
@@ -128,67 +118,6 @@ public static void shutDownServices() throws Exception {
      */
     private static boolean stopProducer = false;
 
-    public void runAutoOffsetResetTest() throws Exception {

Review Comment:
   This file is not needed anymore; nothing inherits from it.



##########
flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/internals/KafkaTopicPartitionTest.java:
##########
@@ -26,14 +27,14 @@
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.assertj.core.api.Assertions.fail;
 
-/** Tests for the {@link KafkaTopicPartition}. */
+/** Tests for the {@link TopicPartition}. */
 public class KafkaTopicPartitionTest {

Review Comment:
   This is no longer needed. We don't need to test the `TopicPartition` class provided by Kafka.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
