Anything with a table structure is probably not going to handle schemaless
data (i.e. JSON) very well without some extra help -- tables usually expect
schemas and JSON doesn't have a schema. As it stands today, the JDBC sink
connector will probably not handle your use case.

To send schemaless data into a schema-based system, you'd probably need to
impose/extract a schema. An upcoming feature called Single Message Transforms
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-66%3A+Single+Message+Transforms+for+Kafka+Connect)
could potentially help do this (in a generic way that doesn't depend on the
connector being used).
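
To give a feel for what that might look like, here's a rough, untested sketch
of a transform that wraps a schemaless value into a single-field struct so a
schema-based sink has something table-shaped to work with. The class name and
the "doc" field name are made up for illustration, and the exact interface may
differ from what KIP-66 ends up shipping:

package example;

import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.transforms.Transformation;

/** Sketch: wrap a schemaless JSON value into a one-field struct. */
public class WrapAsJsonColumn<R extends ConnectRecord<R>> implements Transformation<R> {

    private static final Schema VALUE_SCHEMA = SchemaBuilder.struct()
            .field("doc", Schema.STRING_SCHEMA)
            .build();

    @Override
    public R apply(R record) {
        // A real implementation would re-serialize the schemaless value
        // (typically a Map) back to JSON text; String.valueOf is a stand-in.
        Struct value = new Struct(VALUE_SCHEMA)
                .put("doc", String.valueOf(record.value()));
        return record.newRecord(record.topic(), record.kafkaPartition(),
                record.keySchema(), record.key(),
                VALUE_SCHEMA, value, record.timestamp());
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef();
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // No configuration needed for this sketch.
    }

    @Override
    public void close() {
    }
}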

The only alternative would be to update the JDBC sink to handle JSON data
directly. Some databases might handle this if the entire record were
converted to a JSON-type field (i.e. a single-column table), but I'm
guessing you are looking for output that's a bit more structured than that.
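
For comparison, if the id + json shape you describe is really all you need, the
upsert itself is easy enough to hand-roll against Postgres. A rough sketch
follows -- the table/column names and connection details are made up, and this
is not something the JDBC sink connector does for you today:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class JsonUpsertSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; requires the Postgres JDBC driver.
        String url = "jdbc:postgresql://localhost:5432/mydb";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass")) {
            // Assumes: CREATE TABLE docs (id text PRIMARY KEY, doc jsonb);
            String sql = "INSERT INTO docs (id, doc) VALUES (?, ?::jsonb) "
                       + "ON CONFLICT (id) DO UPDATE SET doc = EXCLUDED.doc";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                String topic = "events";
                int partition = 0;
                long offset = 42L;
                String json = "{\"hello\": \"world\"}";
                // id = topic+partition+offset, as in the original question.
                ps.setString(1, topic + "+" + partition + "+" + offset);
                ps.setString(2, json);
                ps.executeUpdate();
            }
        }
    }
}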

-Ewen

On Mon, Jan 9, 2017 at 4:14 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Hi,
>
> I’m wondering if the following is feasible…
> I have a json document with pretty much 0 schema. The only thing I know for
> sure is that it’s a json document.
> My goal is to pipe that json document into a postgres table that has two
> columns: id and json. The id column is basically topic+partition+offset (to
> guarantee idempotence on upserts), and the json column is basically the
> json document
>
> Is that feasible using the out of the box JDBC connector? I didn’t see any
> support for “json type” fields
>
> Thanks,
> Stephane
>
