davidradl commented on code in PR #27536:
URL: https://github.com/apache/flink/pull/27536#discussion_r2770071979
##########
flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/AvroFormatOptions.java:
##########
@@ -83,5 +83,23 @@ public InlineElement getDescription() {
+ "you can obtain the correct mapping by
disable using this legacy mapping."
+ " Use legacy behavior by default for
compatibility consideration.");
+ public static final ConfigOption<Boolean> AVRO_FAST_READ =
+ ConfigOptions.key("avro.fast-read.enabled")
+ .booleanType()
+ .defaultValue(false)
+ .withDescription(
+ "Optional for avro fast reader. "
+ + "Avro Fastread improves Avro read speeds
by constructing a resolution chain. "
+ + "get more information about this
feature, please visit https://issues.apache.org/jira/browse/AVRO-3230");
+
+ public static final ConfigOption<String> AVRO_WRITER_SCHEMA_STRING =
Review Comment:
I can't see any test for this option.
In the Confluent Avro format, which inherits the Avro options, you can already specify a schema, and the Confluent schema registry can also supply the real schema. I think we should understand and document which option takes precedence.
I also suggest we document that this writer schema needs to be compatible with the table definition, and spell out what compatibility means here. I am thinking about compatibility between nullable and non-nullable fields; a sketch of that concern follows below.
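To make the nullability point concrete, here is a minimal, hypothetical sketch; it is not code from the PR. The record name `Order`, the column names, the filesystem path, and the connector choice are all illustrative, and since the writer-schema option key is cut off in the hunk above, only `avro.fast-read.enabled` appears here:

```java
// Hypothetical sketch, not code from the PR: contrasts an Avro writer schema
// with non-nullable fields against a Flink table whose columns are nullable
// by default, and shows how the new fast-read flag might be set on a table.
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableDescriptor;

public class AvroOptionsSketch {
    public static void main(String[] args) {
        // A writer schema whose "id" field is required, i.e. non-nullable in Avro.
        Schema writerSchema =
                SchemaBuilder.record("Order")
                        .fields()
                        .requiredString("id")   // plain string: non-nullable
                        .optionalString("note") // union(null, string): nullable
                        .endRecord();

        // In Flink SQL a column declared as plain STRING is nullable by default,
        // so `id STRING` maps to union(null, string) and does not line up with
        // the required "id" above. Whether such a writer schema is accepted,
        // rejected, or coerced is exactly what should be documented.
        System.out.println(writerSchema.toString(true));

        // How the new fast-read flag might be supplied on a table, assuming the
        // key is forwarded to the format exactly as declared in the hunk.
        // Flink's Schema is fully qualified to avoid clashing with Avro's.
        TableDescriptor table =
                TableDescriptor.forConnector("filesystem")
                        .schema(
                                org.apache.flink.table.api.Schema.newBuilder()
                                        .column("id", DataTypes.STRING().notNull())
                                        .column("note", DataTypes.STRING())
                                        .build())
                        .option("path", "/tmp/orders") // hypothetical path
                        .format("avro")
                        .option("avro.fast-read.enabled", "true")
                        .build();
        System.out.println(table);
    }
}
```

Whichever way precedence between this option and a registry-supplied schema is decided, a one-line note in the option description would make it discoverable.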