While the deserialization schema is not used by the Flink internals, I think the initial idea was to use it across different sources/sinks (like Kafka, Socket, RabbitMQ, ...).
Does it make sense to have a KafkaDeserializationSchema, and then wrap the common serialization schemata?

On Tue, Jan 19, 2016 at 12:25 PM, Robert Metzger <rmetz...@apache.org> wrote:
> I'll relocate the KeyedDeserializationSchema as part of the Kafka 0.9.0.0
> support (it's a pending pull request I'll merge soon)
>
> On Tue, Jan 19, 2016 at 12:20 PM, Tzu-Li (Gordon) Tai <tzuli...@gmail.com>
> wrote:
>
> > Hi Robert,
> >
> > +1 for a change to where the KeyedDeserializationSchema is located. I was
> > just starting to wonder how I should name the Kinesis deserialization
> > schema if I were to create another one in the same package.
> >
> > For Kinesis, the API returns String for key, byte[] for value, String for
> > streamName (similar to Kafka topic), and a String for the offset. So I
> > would definitely need to create a new deserialization schema for this.
> >
> > For https://issues.apache.org/jira/browse/FLINK-3229, I'll create the
> > required new interfaces in Kinesis-connector-specific packages. I'd be
> > happy to help with relocating the current KeyedDeserializationSchema
> > related interfaces and classes to a Kafka-specific package as a separate
> > issue, if you want to =)
> >
> > Cheers,
> > Gordon
> >
> > --
> > View this message in context:
> > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/What-is-the-topic-offset-used-for-in-KeyedDeserializationSchema-deserialize-tp9911p9917.html
> > Sent from the Apache Flink Mailing List archive mailing list archive at
> > Nabble.com.
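To make the "wrap the common schema" idea concrete, here is a minimal sketch. The interface names and shapes are assumptions for illustration only (they mirror what is described in the thread, not the exact Flink API of the time): a plain `DeserializationSchema<T>` that only sees the message bytes, a hypothetical `KinesisDeserializationSchema<T>` shaped after the key/value/streamName/offset tuple Gordon describes, and a wrapper that adapts the former to the latter by ignoring the connector-specific metadata.

```java
import java.nio.charset.StandardCharsets;

public class SchemaSketch {

    // Simplified stand-in for Flink's generic DeserializationSchema<T>
    // (the real interface also declares getProducedType(), etc.).
    interface DeserializationSchema<T> {
        T deserialize(byte[] message);
    }

    // Hypothetical connector-specific schema, shaped after the Kinesis
    // record described in the thread: String key, byte[] value,
    // String streamName (similar to a Kafka topic), String offset.
    interface KinesisDeserializationSchema<T> {
        T deserialize(String key, byte[] value, String streamName, String offset);
    }

    // The wrapping idea: adapt a plain DeserializationSchema so it
    // satisfies the richer connector-specific interface. The connector
    // metadata (key, streamName, offset) is simply dropped here.
    static <T> KinesisDeserializationSchema<T> wrap(DeserializationSchema<T> inner) {
        return (key, value, streamName, offset) -> inner.deserialize(value);
    }

    public static void main(String[] args) {
        // A common schema that just decodes UTF-8 strings...
        DeserializationSchema<String> simple =
                bytes -> new String(bytes, StandardCharsets.UTF_8);

        // ...reused unchanged through the connector-specific interface.
        KinesisDeserializationSchema<String> wrapped = wrap(simple);
        String out = wrapped.deserialize(
                "some-key",
                "hello".getBytes(StandardCharsets.UTF_8),
                "my-stream",
                "0");
        System.out.println(out); // prints "hello"
    }
}
```

The appeal of this layout is that each connector can expose its own metadata-rich schema interface in its own package, while simple use cases keep writing one format-only `DeserializationSchema` and wrapping it per connector.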