Ewen: I think he was looking for exactly what you were guessing he
doesn't want: "My goal is to pipe that json document in a postgres table
that has two columns: id and json."

Postgres has some nice built-in functions that make this actually
useful and not as nuts as it may appear.
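For instance, a minimal sketch of what such a table and query could look like — table and column names here are just placeholders, not anything from the thread:

```sql
-- Hypothetical target table: id derived from topic+partition+offset,
-- doc holding the raw record as jsonb.
CREATE TABLE events (
  id   text PRIMARY KEY,
  doc  jsonb NOT NULL
);

-- Postgres's built-in jsonb operators let you query inside the document:
SELECT doc->>'user_id'
FROM events
WHERE doc @> '{"status": "active"}';
```

With a jsonb column you also get validation and indexing (e.g. a GIN index on `doc`) for free, which is what makes the single-JSON-column table less nuts than it sounds.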

As Ewen mentioned, our current JDBC sink doesn't do it. But it sounds
like a fairly trivial modification (especially if you use
auto-increment, so you don't even need to worry about IDs). You'll
want to use a String converter so that Connect doesn't try to parse
the JSON.
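As a sketch, the converter part of a sink config would look something like the following — connector name, topic, and connection URL are placeholders, and this assumes a sink that (unlike today's JDBC sink) passes the value through as a string:

```properties
# StringConverter hands the raw JSON text to the sink as-is,
# instead of trying to parse it against a schema.
name=postgres-json-sink
topics=my-json-topic
connection.url=jdbc:postgresql://localhost:5432/mydb
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
```

The `org.apache.kafka.connect.storage.StringConverter` class is the standard way to opt out of JSON/Avro deserialization in Connect; the rest of the config depends on the (modified) sink.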

On Tue, Jan 10, 2017 at 12:40 AM, Ewen Cheslack-Postava
<e...@confluent.io> wrote:
> Anything with a table structure is probably not going to handle schemaless
> data (i.e. JSON) very well without some extra help -- tables usually expect
> schemas and JSON doesn't have a schema. As it stands today, the JDBC sink
> connector will probably not handle your use case.
>
> To send schemaless data into a schema-based system, you'd probably need to
> impose/extract a schema. An upcoming feature called Single Message
> Transforms
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-66%3A+Single+Message+Transforms+for+Kafka+Connect
> could
> potentially help do this (in a generic way that doesn't depend on the
> Connector being used).
>
> The only alternative would be to update the JDBC sink to handle JSON data
> directly. Some databases might handle this if the entire record were
> converted to a JSON-type field (i.e. a single-column table), but I'm
> guessing you are looking for output that's a bit more structured than that.
>
> -Ewen
>
> On Mon, Jan 9, 2017 at 4:14 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
>> Hi,
>>
>> I’m wondering if the following is feasible…
>> I have a json document with pretty much 0 schema. The only thing I know for
>> sure is that it’s a json document.
>> My goal is to pipe that json document in a postgres table that has two
>> columns: id and json. The id column is basically topic+partition+offset (to
>> guarantee idempotence on upserts), and the json column is basically the
>> json document
>>
>> Is that feasible using the out of the box JDBC connector? I didn’t see any
>> support for “json type” fields
>>
>> Thanks,
>> Stephane
>>



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
