Hi Francesco,
thanks for starting this discussion. It is definitely time to clean up
more connectors and formats that were used by the old planner but are
not intended for the DataStream API.
+1 for deprecating and dropping the mentioned formats. Users can either
use the Table API or implement a custom
SerializationSchema/DeserializationSchema according to their needs. It
is actually not that complicated to add Jackson as a dependency and
configure an ObjectMapper for reading JSON/CSV.
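To illustrate, a minimal sketch of such a schema could look like the
following (the JsonEventDeserializationSchema and Event names are made
up for the example, replace them with whatever your job needs):

import java.io.IOException;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;

import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonEventDeserializationSchema
        implements DeserializationSchema<JsonEventDeserializationSchema.Event> {

    private static final long serialVersionUID = 1L;

    /** Hypothetical POJO for the example; use your own type here. */
    public static class Event {
        public String id;
        public long timestamp;
    }

    // ObjectMapper is not serializable, so create it per task in open().
    private transient ObjectMapper mapper;

    @Override
    public void open(InitializationContext context) {
        mapper = new ObjectMapper();
    }

    @Override
    public Event deserialize(byte[] message) throws IOException {
        return mapper.readValue(message, Event.class);
    }

    @Override
    public boolean isEndOfStream(Event nextElement) {
        return false;
    }

    @Override
    public TypeInformation<Event> getProducedType() {
        return TypeInformation.of(Event.class);
    }
}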
Regards,
Timo
On 18.10.21 17:42, Francesco Guardiani wrote:
Hi all,
In flink-avro, flink-csv and flink-json we have implementations of
SerializationSchema/DeserializationSchema for the org.apache.flink.types.Row
type. In particular, I'm referring to:
- org.apache.flink.formats.json.JsonRowSerializationSchema
- org.apache.flink.formats.json.JsonRowDeserializationSchema
- org.apache.flink.formats.avro.AvroRowSerializationSchema
- org.apache.flink.formats.avro.AvroRowDeserializationSchema
- org.apache.flink.formats.csv.CsvRowDeserializationSchema
- org.apache.flink.formats.csv.CsvRowSerializationSchema
These classes were used by the old table planner, but the current
planner no longer uses the Row type internally, so these classes are no
longer used by the flink-table modules.
Because these classes are exposed (some carry the @PublicEvolving
annotation), there may be users out there relying on them in the
DataStream API, for example to convert an input stream of JSON from
Kafka into Row instances.
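To make that concrete, such a pipeline typically looks like this sketch
(topic name, field names and connection properties are placeholders):

import java.util.Properties;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.formats.json.JsonRowDeserializationSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.types.Row;

public class JsonRowFromKafka {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Row schema of the incoming JSON records (placeholder fields).
        TypeInformation<Row> rowType = Types.ROW_NAMED(
                new String[] {"id", "amount"}, Types.STRING, Types.DOUBLE);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "example-group");

        DataStream<Row> rows = env.addSource(
                new FlinkKafkaConsumer<>(
                        "input-topic",
                        new JsonRowDeserializationSchema.Builder(rowType).build(),
                        props));

        rows.print();
        env.execute("json-row-from-kafka");
    }
}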
Do you have any opinions about deprecating these classes in 1.15 and
then dropping them in 1.16? Or are you using them? If so, can you
describe your use case?
Thank you,
FG