Roman,
Agreed, this is definitely a gap in the docs (both Kafka's and Confluent's)
right now. The reason it was lower priority for documentation than other
items is that we expect there will be relatively few converter
implementations, especially compared to the number of connectors.
Converters ...
Svante,
Just to clarify, the HDFS connector relies on some Avro translation code
which is in a separate repository. You need the
https://github.com/confluentinc/schema-registry repository built before the
kafka-connect-hdfs repository to get that dependency.
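In practice the build order looks roughly like this (a sketch assuming a
standard Maven checkout of both repositories; adjust branches and versions
to match your setup):

  git clone https://github.com/confluentinc/schema-registry.git
  cd schema-registry
  mvn clean install -DskipTests
  cd ../kafka-connect-hdfs
  mvn clean install -DskipTests

Installing schema-registry into the local Maven repository first is what
makes its artifacts resolvable when kafka-connect-hdfs builds.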
Confluent has now also released ...
Hi, I tried building this today and the problem seems to remain.
/svante
[INFO] Building kafka-connect-hdfs 2.0.0-SNAPSHOT
[INFO]
Downloading:
Sorry, there was an out-of-date reference in the pom.xml; the version on
master should build fine now.
-Ewen
On Sat, Nov 14, 2015 at 1:54 PM, Venkatesh Rudraraju <
venkatengineer...@gmail.com> wrote:
> I tried building copycat-hdfs but it's not able to pull dependencies from
> maven...
>
> error
I tried building copycat-hdfs but it's not able to pull dependencies from
maven...
error trace:
---
Failed to execute goal on project kafka-connect-hdfs: Could not resolve
dependencies for project
io.confluent:kafka-connect-hdfs:jar:2.0.0-SNAPSHOT: The following artifacts
could not
Yes, though it's still awaiting some updates after some renaming and API
modifications that happened in Kafka recently.
-Ewen
On Thu, Nov 12, 2015 at 9:10 AM, Venkatesh Rudraraju <
venkatengineer...@gmail.com> wrote:
> Ewen,
>
> How do I use an HDFSSinkConnector? I see the sink as part of a ...
Hi,
I am trying out the new Kafka Connect service.
version: kafka_2.11-0.9.0.0
mode: standalone
I have a conceptual question on the service.
Can I just start a sink connector which reads from Kafka and writes to,
say, HDFS?
From what I have tried, it's expecting a source connector as well ...
Hi Venkatesh,
If you're using the default settings included in the sample configs, it'll
expect JSON data in a special format to support passing schemas along with
the data. This is turned on by default because it makes it possible to work
with a *lot* more connectors and data storage systems ...
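For reference, a record in that envelope format looks roughly like this (a
minimal sketch of the schema-plus-payload JSON the default JsonConverter
expects; the values are just illustrative):

  {"schema": {"type": "string", "optional": false}, "payload": "hello world"}

The "schema" half describes the type of the "payload" half; if you don't
want that envelope, the worker configs key.converter.schemas.enable and
value.converter.schemas.enable can be set to false to send bare JSON.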
Hi Ewen,
Thanks for the explanation. With your suggested setting, I was able to
start just a sink connector like below:

  bin/connect-standalone.sh config/connect-standalone.properties \
      config/connect-file-sink.properties

But I still have a couple of issues:
1) Since I am only testing a simple ...
Venkatesh,
1. It only works with quotes because the message needs to be parsed as JSON
-- a bare string without quotes is not valid JSON. If you're just using a
file sink, you can also try the StringConverter, which only supports
strings and uses a fixed schema, but is also very easy to use since ...
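For example, switching the worker to the StringConverter is just a config
change in connect-standalone.properties (a sketch; the class name is from
the Kafka Connect runtime):

  key.converter=org.apache.kafka.connect.storage.StringConverter
  value.converter=org.apache.kafka.connect.storage.StringConverter

With that in place, records pass through as raw strings, so no JSON quoting
is needed for simple file-sink tests.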