pening
> correctly?
>
> What would be a good way to generate keys in this case, to ensure even
> partition spread?
>
> Thanks.
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
Hi Team Kafka,
I just merged PR 20 to our website - which gives it a new (and IMO
pretty snazzy) look and feel. Thanks to Derrick Or for contributing
the update.
I had to do a hard-refresh (shift-f5 on my mac) to get the new look to
load properly - so if stuff looks off, try that.
Comments and
>>>> >>> >> >>>>> used, it must have a custom backend + front-end.
>>>> >>> >> >>>>>
>>>> >>> >> >>>>> Thanks for the recommendation of Flume. Do you think this
>>>>
it doesn't give me this lag data at the
> server level.
>
> I am looking for best way to get the lag value and monitor it using kibana or
> grafana.
>
> Please suggest what is the best approach for this.
>
> Thanks and Regards
> Vikas Bhatia
>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
there.
Comments, improvements, and contributions are welcome and encouraged.
--
Gwen Shapira
ah, never mind - I just noticed you do use a schema... Maybe you are
running into this? https://issues.apache.org/jira/browse/KAFKA-3055
On Thu, Sep 15, 2016 at 4:20 PM, Gwen Shapira <g...@confluent.io> wrote:
> Most people use JSON without schema, so you should probably chan
> at org.apache.kafka.connect.json.JsonConverter.asConnectSchema(JsonConverter.java:493)
> at org.apache.kafka.connect.json.JsonConverter.jsonToConnect(JsonConverter.java:344)
> at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:334)
> at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:266)
> at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:175)
> at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.iteration(WorkerSinkTaskThread.java:90)
> at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.execute(WorkerSinkTaskThread.java:58)
> at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
>
>
> Thanks,
> Sri
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
Well deserved, Jason. Looking forward to your future contributions :)
On Tue, Sep 6, 2016 at 3:29 PM, Guozhang Wang wrote:
> Welcome, and really happy to have you onboard Jason!
>
>
> Guozhang
>
> On Tue, Sep 6, 2016 at 3:25 PM, Neha Narkhede wrote:
>
>>
erride the data transfer methods of
> kafka-connect? For example I want to put thread.sleeps in kafka-streams
> side while transferring data and see the behaviour in kafka side or in
> application side. You can think of as simulation of load.
>
> Cheers
> Jeyhun
>
>
> --
>
, Ashish Singh, Avi Flax,
> Damian Guy, Dustin Cote, Edoardo Comar, Eno Thereska, Ewen
> Cheslack-Postava, Flavio Junqueira, Florian Hussonnois, Geoff Anderson,
> Grant Henke, Greg Fodor, Guozhang Wang, Gwen Shapira, Henry Cai, Ismael
> Juma, Jason Gustafson, Jeff Klukas, Jendrik Poloczek,
rties
> /etc/kafka-connect-hdfs/quickstart-hdfs.properties
>
> And here is my quickstart-hdfs.properties:
>
> name=hdfs-sink
> connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
> tasks.max=1
> topics=hdfs
> hdfs.url=hdfs://sandbox.hortonworks.com:8020
> f
> https://jenkins.confluent.io/job/system-test-kafka-0.10.0/138/
>
> Thanks,
> Ismael
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
MirrorMaker actually doesn't have a default - it uses what you
configured in the consumer.properties file you use.
Either:
auto.offset.reset = latest (biggest in old versions)
or
auto.offset.reset = earliest (smallest in old versions)
So you can choose what happens when MirrorMaker first comes up: whether it
starts from the earliest or the latest offset.
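For example, a minimal consumer.properties for MirrorMaker could look like
this (a sketch, assuming the new consumer - broker and group names are
placeholders):

  bootstrap.servers=source-broker:9092
  group.id=mirrormaker-group
  # read each topic from the beginning the first time MirrorMaker comes up:
  auto.offset.reset=earliest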
Can you define a DNS name that round-robins to multiple IP addresses?
That way ZkClient will cache the name, and you can rotate IPs behind
the scenes with no issues.
On Wed, Aug 3, 2016 at 7:22 AM, Zuber wrote:
> Hello –
>
> We are planning to use Kafka as Event Store in
No. If you want automatic update, you need to use the same broker id.
Many deployments use EBS to store their broker data. The
auto-generated id is stored with the data, so if a broker dies they
install a new machine and connect it to the existing EBS volume and
immediately get both the old id and
Hi,
You know the famous "Powered by Kafka" page?
https://cwiki.apache.org/confluence/display/KAFKA/Powered+By
Where the cool companies are showing off their use of Kafka?
We want to do the same for KafkaConnect and KafkaStreams - showcase
the early adopters of the technology.
If you are using
s,
> Sean
>
>
>
>
> On 7/29/16, 9:35 PM, "Gwen Shapira" <g...@confluent.io> wrote:
>
>>you know, I ran into those null pointer exceptions when I accidentally
>>tested Kafka with mismatching version of zkclient.
>>
>>Can you share the ver
You need to use the old MirrorMaker (0.8.2.1) to mirror 0.8.2.1 to 0.10.0.0.
This is true in general - always use MirrorMaker from the older release,
because new Kafka brokers can talk to old clients but not the other way around.
Gwen
On Fri, Jul 29, 2016 at 12:04 AM, Yifan Ying
If anyone packages Kafka with Chocolatey, we'll be happy to add this
to our ecosystem page.
Currently Apache Kafka only publishes tarballs.
Gwen
On Thu, Jul 28, 2016 at 6:58 PM, Andrew Pennebaker
wrote:
> Could we please publish Chocolatey packages for ZooKeeper
you know, I ran into those null pointer exceptions when I accidentally
tested Kafka with mismatching version of zkclient.
Can you share the versions of both? And make sure you have only one
zkclient on your classpath?
On Tue, Jul 26, 2016 at 6:40 AM, Sean Morris (semorris)
Whoa, it looks like you have 15,000 replicas per broker?
You can go into the directory you configured for kafka's log.dir and
see how many files you have there. Depending on your segment size and
retention policy, you could have hundreds of files per partition
there...
Make sure you have at
In addition, our soon-to-be-released JDBC sink connector uses the
Connect framework to do things that are kind of annoying to do
yourself:
* Convert data types
* Create tables if needed, and add columns to tables if needed based on
the data in Kafka
* Support for both insert and upsert
* Configurable
Upgrade :)
On Tue, Jun 28, 2016 at 6:49 PM, Rohit Valsakumar wrote:
> Hi Jay,
>
> Thanks for the reply.
>
> Unfortunately in our case due to legacy reasons we are using
> WallclockTimestampExtractor in the application for all the streams and the
> existing messages in the
Charity,
1. Nothing you do seems crazy to me. Kafka should be able to work with
auto-scaling and we should be able to fix the issues you are running
into.
There are a few things you should be careful about when using the method
you described though:
1.1 Your life may be a bit simpler if you have a
That's a pretty cool feature, if anyone feels like opening a JIRA :)
On Thu, Jun 23, 2016 at 8:46 AM, Christian Posta
wrote:
> Sounds like something a traditional message broker (ie, ActiveMQ) would be
> able to do with a TTL setting and expiry. Expired messages get
More likely that we didn't think of documenting it :)
Do you want to open a JIRA? or submit a doc patch?
We should obviously document this limitation, but I'm thinking that
the REST API could also validate that connector ID doesn't collide
with the distributed worker group.
On Mon, Jun 13, 2016
Dory is pretty cool (even though it is named after a somewhat dorky
fish). Thank you for sharing :)
On Sun, Jun 12, 2016 at 1:24 AM, Dave Peterson wrote:
> Hello Kafka users,
>
> Version 1.1.0 of Dory is now available. See
> https://github.com/dspeterson/dory for details.
Actually, this is exactly what Connect is doing.
KafkaConnect uses its own "consumer" protocol called "connect" to
distribute tasks between the workers. The default group name for this
is connect-cluster, but it is possible to override it in the
connect-distributed.properties file.
SinkTasks
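For example, the override would look like this in
connect-distributed.properties (a sketch - the group name is a placeholder):

  # workers with the same group.id form one Connect cluster
  group.id=my-connect-cluster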
Unfortunately, we only documented this in the code:
/**
* For verifying the consistency among replicas.
*
* 1. start a fetcher on every broker.
* 2. each fetcher does the following
* 2.1 issues fetch request
* 2.2 puts the fetched result in a shared buffer
* 2.3 waits for
Last time I checked (maybe 10 months ago), Camel was using the old
async producer, which is not reliable (no callbacks!). Make sure they
improved this before using it in a system where reliability is
important.
On Mon, Jun 6, 2016 at 9:44 PM, Asaf Mesika wrote:
> I'd stay
wrote:
> On 6/1/16, 11:53, "Gwen Shapira" <g...@confluent.io> wrote:
>
> > Currently this is not part of the DSL and needs to be done separately
> > through KafkaConnect. Here's an example:
> > http://www.confluent.io/blog/hello-world-kafka-connect-kafka-str
Currently this is not part of the DSL and needs to be done separately
through KafkaConnect. Here's an example:
http://www.confluent.io/blog/hello-world-kafka-connect-kafka-streams
In the future we want to integrate Connect and Streams better, so you could
do something like
The intent was definitely as you described, but I think we forgot to
actually modify the code accordingly.
Do you mind opening a JIRA on the issue?
Gwen
On Wed, Jun 1, 2016 at 4:13 PM, tao xiao wrote:
> Hi,
>
> As per the description in KIP-32 the timestamp of Kafka
Well...
We added KafkaConnect and KafkaStreams - that's two fairly big features.
On Thu, May 26, 2016 at 11:58 AM, S Ahmed wrote:
> I just pulled lated on the same old 2010 MPB and the build took over 4
> minutes.
>
> Have things changed so much since 2013? :)
>
> I ran:
are working toward improving the compatibility story in the future.
On Tue, May 24, 2016 at 4:42 PM, Andy Davidson <
a...@santacruzintegration.com> wrote:
> Does anyone know if spark plans to upgrade?
>
> I think the current version is 0.8x?
>
> Kind regards
>
> Andy
>
ang, edoardo, Edward Ribeiro, Eno Thereska, Ewen
Cheslack-Postava, Flavio Junqueira, Francois Visconte, Frank Scholten,
Gabriel Zhang, gaob13, Geoff Anderson, glikson, Grant Henke, Greg
Fodor, Guozhang Wang, Gwen Shapira, Igor Stepanov, Ishita Mandhan,
Ismael Juma, Jaikiran Pai, Jakub Nowak, James
This vote passes with 9 +1 votes (4 binding) and no 0 or -1 votes.
+1 votes
PMC Members:
* Jay Kreps
* Jun Rao
* Guozhang Wang
* Joe Stein
Committers:
* Sriharsha Chintalapani
* Ewen Cheslack-Postava
Community:
* Dana Powers
* Vahid S. Hashemian
* Ashish Singh
Vote
Or you can use KafkaStreams, which is already available in Kafka :)
On Thu, May 19, 2016 at 2:33 AM, Radoslaw Gruchalski
wrote:
> Hey, you should have a look at Apache Samza. You put Samza on top of Kafka
> and you can inject content filtering rules into a Samza system.
RC is never available in the version information, because the RC we
vote on is identical to the version we release. Those are Apache rules,
not mine :)
I am not sure about the MBeans - what is the commitID in previous versions?
Gwen
On Wed, May 18, 2016 at 10:41 AM, Ramanan, Buvana (Nokia - US)
Hello Kafka users, developers and client-developers,
This is the seventh (!) candidate for release of Apache Kafka
0.10.0.0. This is a major release that includes: (1) New message
format including timestamps (2) client interceptor API (3) Kafka
Streams.
This RC was rolled out to fix an issue
se -- I noticed that one link
> didn't go anywhere.
> Real-Time Analytics Visualized w/ Kafka + Streamliner + MemSQL + ZoomData
>
> Will this be posted anytime?
>
> Thanks,
>
> Ben
>
> On Fri, May 13, 2016 at 8:05 PM, Gwen Shapira <g...@confluent.io> wrote:
>
>
It looks like you are using MirrorMaker from 0.9.0.1 while the source
broker is older.
MirrorMaker can be no newer than the oldest broker involved in the replication.
Gwen
On Mon, May 16, 2016 at 2:12 PM, Meghana Narasimhan
wrote:
> Hi,
> I came across the following
Hello Kafka users, developers and client-developers,
This is the sixth (!) candidate for release of Apache Kafka 0.10.0.0.
This is a major release that includes: (1) New message format
including timestamps (2) client interceptor API (3) Kafka Streams.
Since this is a major release, we will give
Hey Kafka Community,
It was great seeing so many of you at the Kafka summit last month.
Hope you had fun and learned a lot. I certainly did. Looking forward
to meet all of you again at the next summit :)
For those of you who missed the event, or those who attended but are
sorry they couldn't see
SASL authentication
This may help us nail down the source of the issue.
Gwen
On Thu, May 12, 2016 at 1:38 PM, Tom Crayford <tcrayf...@heroku.com> wrote:
> Yep, confirm.
>
> On Thu, May 12, 2016 at 9:37 PM, Gwen Shapira <g...@confluent.io> wrote:
>
>> Just
Just to confirm:
You tested both versions with plain text and saw no performance drop?
On Thu, May 12, 2016 at 1:26 PM, Tom Crayford wrote:
> We've started running our usual suite of performance tests against Kafka
> 0.10.0.0 RC. These tests orchestrate multiple
Hello Jayesh,
Thank you for the suggestion. I like the proposal and the new tool seems useful.
Do you already have the tool available in a github repository?
If you don't, then this would be a good place to start - there are
many Kafka utilities in github repositories (Yahoo's Kafka Manager as
kafka-run-class.sh. There is no easy work around of this and we need a new
> RC.
>
> Thanks,
> Liquan
>
> On Mon, May 9, 2016 at 6:49 PM, Gwen Shapira <g...@confluent.io> wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the first candidate f
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 0.10.0.0. This
is a major release that includes: (1) New message format including
timestamps (2) client interceptor API (3) Kafka Streams. Since this is
a major release, we will give
You are seeing a Hadoop error in the Gobblin app, so the Kafka mailing
list is probably not your best bet.
That said, Gobblin started a MapReduce job which failed. You need to
look at the job log and the task logs for MapReduce to find out what
happened.
Gwen
On Sat, May 7, 2016 at 10:51 AM, Mudit
Hello Kafka users, developers and client-developers,
This is the fourth candidate for release of Apache Kafka 0.10.0.0.
This is a major release that includes: (1) New message format
including timestamps (2) client interceptor API (3) Kafka Streams.
Since this is a major release, we will give
That's a good version :)
On Mon, May 2, 2016 at 11:04 AM, Kane Kim wrote:
> We are running Zookeeper version: 3.4.6-1569965, built on 02/20/2014 09:09
> GMT, does it have any known problems?
>
> On Fri, Apr 29, 2016 at 2:35 PM, James Brown wrote:
>
>>
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 0.10.0.0. This
is a major release that includes: (1) New message format including
timestamps (2) client interceptor API (3) Kafka Streams. (4)
Configurable SASL authentication mechanisms
Congratulations, very well deserved.
On Apr 25, 2016 10:53 PM, "Neha Narkhede" wrote:
> The PMC for Apache Kafka has invited Ismael Juma to join as a committer and
> we are pleased to announce that he has accepted!
>
> Ismael has contributed 121 commits
>
A few more things you can do:
* Increase "batch.size" - this will give you a larger queue and
usually better throughput
* More producers - very often the bottleneck is not in Kafka at all.
Maybe it's the producer? Or the network?
* Increase max.in.flight.requests.per.connection for the producer - it will allow
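Roughly, in the producer config (a sketch - the values are placeholders to
tune for your workload):

  # larger batches usually mean better throughput
  batch.size=65536
  # allow more unacknowledged requests per connection
  max.in.flight.requests.per.connection=5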
Thank you, Surendra.
I've added your connector to the Connector Hub page:
http://www.confluent.io/developers/connectors
On Fri, Apr 22, 2016 at 10:11 PM, Surendra , Manchikanti
wrote:
> Hi Jay,
>
> Thanks!! Can you please share the contact person to include this
t
> sometime requires more disk space than anticipated.
>
>
>
>
>
> On Fri, Apr 8, 2016 at 1:07 PM Gwen Shapira <g...@confluent.io> wrote:
>
> > Yes. It is whichever is shorter :)
> >
> > Another clarification:
> > A segment is deleted as a whole, base
It depends. If auto.create.topics.enable is true, a topic will be created.
If it's false, you will get some kind of "topic does not exist" exception.
Gwen
On Thu, Apr 7, 2016 at 11:49 AM, Shravan Ambati
wrote:
> Hi,
>
> I could not find answer to this in the documentation.
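For reference, the setting in question lives in the broker's
server.properties:

  auto.create.topics.enable=true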
;
> > That makes it more clear for me.
> >
> > Heath
> >
> > -Original Message-
> > From: Gwen Shapira [mailto:g...@confluent.io]
> > Sent: Tuesday, April 05, 2016 6:13 PM
> > To: users@kafka.apache.org
> > Subject: Re: Log Retention: Wh
I think you got it almost right. The missing part is that we only delete
whole partition segments, not individual messages.
As you are writing messages, every X bytes or Y milliseconds, a new file
gets created for the partition to store new messages in. Those files are
called segments.
The
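The broker settings that control this (defaults shown are approximate; a new
segment rolls on whichever limit is hit first, and retention deletes whole
segments):

  log.segment.bytes=1073741824   # roll a new segment after ~1GB
  log.roll.hours=168             # ...or after 7 days, whichever comes first
  log.retention.hours=168        # segments older than this are deleted whole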
Awesome summary, Dana. I'd like to fit this into our docs, but I'm not sure
where a step-by-step description of the protocol fits. Maybe in the "Design"
section?
Just one more thing:
8) At any time, the broker can respond to a fetch request with a
"Rebalancing" error code, at which point the
I'm obviously a bit biased, but I'm pretty sure there is zero lock-in.
1. You can use whichever components of the platform you want. If you just
need Kafka, you don't *have* to use the schema registry (although, you
should ;)
2. Schema Registry, REST Proxy and our connectors are all open source
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 0.10.0.0.
This is a major release that includes: (1) New message format including
timestamps (2) client interceptor API (3) Kafka Streams. Since this is a
major release, we will give
What we normally do is consumer.poll(0). This connects to the broker,
finds the consumer group, handles partition assignment, gets the
metadata - and then doesn't stick around to actually give you any
data.
Pretty hacky, but we use this all over the place.
Gwen
On Tue, Mar 8, 2016 at 12:59 PM,
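In code, the trick looks roughly like this (a sketch - topic and configs are
placeholders; imports are org.apache.kafka.clients.consumer.KafkaConsumer,
java.util.Properties and java.util.Collections):

  Properties props = new Properties();
  props.put("bootstrap.servers", "broker:9092");
  props.put("group.id", "my-group");
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
  KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
  consumer.subscribe(Collections.singletonList("my-topic"));
  consumer.poll(0);  // joins the group and fetches metadata; typically returns no records
  System.out.println(consumer.assignment());  // assignment is now populated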
Hi Michal,
Can you successfully connect to the SASL port without StreamSets?
For example, using the console consumer as explained here?
http://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption
(the end-to-end example is all the way at the end of the blog)
This can
Hey!
Yes! We'd love that too! Maybe you want to help us out with
https://issues.apache.org/jira/browse/KAFKA-2967 ?
Gwen
On Wed, Mar 2, 2016 at 2:39 PM, Christian Posta
wrote:
> Would love to have the docs in gitbook/markdown format so they can easily
> be viewed
See below
On Tue, Feb 23, 2016 at 11:45 AM, vivek shankar
wrote:
> Hello All,
>
> Can you please help with the below :
>
> I was reading up on Kafka 0.9 API version and came across the below :
>
> The following is a draft design that uses a high-available consumer
>
props.put("ssl.protocal", "SSL"); <- looks like a typo.
On Thu, Feb 18, 2016 at 2:49 PM, Srikrishna Alla <
srikrishna.a...@aexp.com.invalid> wrote:
> Hi,
>
> We are getting the below error when trying to use a Java new producer
> client. Please let us know the reason for this error -
>
>
Actually, for releases, committer votes are non-binding. PMC votes are the only
binding ones for releases.
On Wed, Feb 17, 2016 at 11:57 AM, Jun Rao wrote:
> Christian,
>
> Similar to other Apache projects, a vote from a committer is considered
> binding. During the voting
MAX_INT is a good value if you want to just block until the buffer has some
space (and never get an exception).
On Tue, Feb 2, 2016 at 8:08 AM, Franco Giacosa wrote:
> Thanks for the information James, the slides are really good.
>
> One question, in the new producer the
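If the setting under discussion is the producer's max.block.ms (my assumption
- the thread is truncated here), that would be:

  # block (for ~25 days) instead of throwing when the buffer is full
  max.block.ms=2147483647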
> https://issues.apache.org/jira/browse/KAFKA-3191
>
> Feel free to assign it to me (wasn't able to do that myself)
>
> On Mon, Feb 1, 2016 at 9:55 PM, Gwen Shapira <g...@confluent.io> wrote:
>
> > This is the second time I've seen this complaint, so we could probably make
> the
>
This is the second time I've seen this complaint, so we could probably make the
API docs clearer.
Adam, feel like submitting a JIRA?
On Mon, Feb 1, 2016 at 3:34 PM, Adam Kunicki wrote:
> Thanks, actually found this out per:
>
>
I have a minor preference toward modifying the API.
Because it is source-compatible and protocol-compatible, the only case that
will break is if you use client code from one version but run with a JAR
from a different version, which sounds like a pretty weird setup in general.
It's not a strong
Did you check your brokers are running?
On Wed, Jan 27, 2016 at 1:30 PM, Sandhu, Dilpreet
wrote:
> Hi all,
> I am using Kafka 0.9.0.0 with Zookeeper 3.4.6. I am not sure if I
> am missing anything :(
> When I try to create any topic I get the following error:-
>
>
Hi Eric,
1. You are correct that the way to handle custom data formats in Kafka is
to use a custom convertor.
2. You are also correct that we are currently assuming one converter per
Connect instance / cluster that all connectors share (in the hope that each
organization has one common data
Producer.send() by itself will not throw anything.
You need to either wait on the future:
producer.send(record).get()
or pass in a callback that logs the error.
On Tue, Jan 26, 2016 at 8:50 AM, Joe San wrote:
> Is this strange or weird? I had no Kafka or Zookeeper
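Both options, roughly (a sketch - topic, key and value are placeholders):

  // Option 1: block on the future; broker-side errors surface as exceptions here
  producer.send(new ProducerRecord<>("my-topic", "key", "value")).get();

  // Option 2: attach a callback that logs failures asynchronously
  producer.send(new ProducerRecord<>("my-topic", "key", "value"), new Callback() {
      public void onCompletion(RecordMetadata metadata, Exception exception) {
          if (exception != null)
              exception.printStackTrace();  // or use your logger
      }
  });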
Hi Robert!
Jason is the expert, and I hope he'll respond soon.
Meanwhile: I think that you can do what you are trying to do by:
1. call position() to get the current position you are consuming
2. call seekToEnd() and then position(), which will give you the last
position at the point in which
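Roughly, in code (a sketch - the partition is a placeholder, and the consumer
must currently have it assigned):

  TopicPartition tp = new TopicPartition("my-topic", 0);
  long current = consumer.position(tp);  // 1. remember where we are
  consumer.seekToEnd(tp);                // 2. jump to the end...
  long last = consumer.position(tp);     //    ...and read off the last position
  consumer.seek(tp, current);            // go back to where we were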
I'm wondering if the protocol docs can be auto-generated from our code to a
large extent. Or if we can enhance our protocol definition classes a bit to
make them self-documenting (the way we did for configuration).
Regarding Dana's suggestion: I think you need special wiki-edit privileges.
If you
I added what I found in the code comments to the wiki. Note that there are
some gaps. For example, if anyone can fill in the producer error codes, that
would be awesome :)
On Mon, Jan 18, 2016 at 9:17 AM, Gwen Shapira <g...@confluent.io> wrote:
> I'm wondering if the protocol docs ca
Hi,
There was a Jira to add "remove broker" option to the
partition-reassignment tool. I think it died in a long discussion trying to
solve a harder problem...
To your work-around - it is an acceptable one.
A few improvements:
1. Manually edit the resulting assignment JSON to avoid
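For reference, the assignment JSON you would be hand-editing looks like this
(topic, partitions and broker ids are placeholders):

  {"version": 1,
   "partitions": [
     {"topic": "my-topic", "partition": 0, "replicas": [1, 2]},
     {"topic": "my-topic", "partition": 1, "replicas": [2, 3]}
   ]}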
Do you happen to have broker-logs and state-change logs from the controlled
shutdown attempt?
In theory, the producer should not really see a disconnect - it should get a
NotLeaderForPartition exception (because leaders are re-assigned before the shutdown)
that will cause it to refresh the metadata. I am guessing
.
>
> 4. Yep, we rely on exactly this behavior when replacing nodes. It's very
> helpful :)
>
> Thanks!
> Luke
>
>
> On Thu, Jan 14, 2016 at 10:07 AM, Gwen Shapira <g...@confluent.io> wrote:
>
> > Hi,
> >
> > 1. If you had problems with co
ble
> step would be nice.
>
> Thanks,
> Luke
>
>
> On Thu, Jan 14, 2016 at 9:36 AM, Gwen Shapira <g...@confluent.io> wrote:
>
> > Hi,
> >
> > There was a Jira to add "remove broker" option to the
> > partition-reassignment tool
ile
> format.
>
>
> On Thu, Jan 14, 2016 at 10:42 AM, Gwen Shapira <g...@confluent.io> wrote:
>
> > Ah, got it!
> >
> > There's no easy way to transfer leadership on command, but you could use
> > the reassignment tool to change the preferred leader (
force a running kafka instance to take on a new
> broker ID?
>
> Thanks,
>
> Ben
>
> On Tue, Jan 5, 2016 at 6:19 PM, Gwen Shapira <g...@confluent.io> wrote:
>
> > Stevo pointed you at the correct document for moving topics around.
> > However, if you lost a
Stevo pointed you at the correct document for moving topics around.
However, if you lost a broker, by far the easiest way to recover is to
start a new broker and give it the same ID as the one that went down.
On Tue, Jan 5, 2016 at 8:49 AM, Stevo Slavić wrote:
> Hello Ben,
> Kafka is still obsessing about the last failed deletion attempt and
> won't move on to subsequent delete requests.
>
> I've planned downtime today so I can restart Kafka (clearing the topic
> info) and test again.
>
> On Tue, Jan 5, 2016 at 1:30 PM, Gwen Shapira <g...@
First, I think the reason 0.8.2.2 is stable and 0.9.0.0 is latest is mostly
due to oversight.
0.9.0.0 is stable. Some of the new APIs are considered unstable, but that has
no bearing on a straightforward upgrade of the brokers.
Regarding issues, you can see what we fixed for 0.9.0.1:
We don't have built-in functionality for this, but I know of a few places that
implemented the following architecture.
1. Install a small (1-3 nodes) Kafka cluster on the remote environments
(AWS, Rackspace, etc)
2. Use KafkaConnect or similar to pull logs into the local Kafka
3. Install Kafka +
Yes, this looks like a bug. Please file a JIRA :)
On Wed, Dec 23, 2015 at 1:08 AM, Enrico Olivelli - Diennea <
enrico.olive...@diennea.com> wrote:
> Hi,
> I'm running a brand new Kafka cluster (version 0.9.0.0). During my tests I
> noticed this error at Consumer.partitionsFor during a full
Hi,
Kafka *is* a data store. It writes data to files on the OS file system. One
directory per partition, and a new file every configurable interval (you
can control this with log.roll.ms). The data format is specific to Kafka.
Hope this helps,
Gwen
On Thu, Dec 17, 2015 at 3:32 PM, Heath Ivie
Correlation ID is for a request (i.e. a separate ID for a produce request and a
fetch request), not a record. So it can't be used in the way you are trying
to.
On Wed, Dec 9, 2015 at 9:30 AM, John Menke wrote:
> Can a correlationID be created from a ConsumerRecord that will
Hi,
Can you explain a bit more what you'd expect this integration to do?
Kafka is a queue, just like Oracle AQ is - so I can see how you may replace
Oracle AQ with Kafka, but I'm not sure what you are trying to achieve by
integrating them.
Gwen
On Mon, Dec 7, 2015 at 7:52 PM, CY Kuek
Sounds like you need to set the advertised.host.name configuration to the
external name / IP.
The broker will then advertise that external address (via ZooKeeper) to
producers and consumers, and they will be able to connect.
Gwen
On Tue, Dec 8, 2015 at 11:17 AM, Henrik Martin wrote:
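In server.properties that would be something like (the address is a
placeholder):

  advertised.host.name=kafka1.example.com
  advertised.port=9092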
On Wed, Dec 2, 2015 at 10:44 PM, Krzysztof Ciesielski <
krzysztof.ciesiel...@softwaremill.pl> wrote:
> Hello,
>
> I’m the main maintainer of Reactive Kafka - a wrapper library that
> provides Kafka API as Reactive Streams (
> https://github.com/softwaremill/reactive-kafka).
> I’m a bit concerned
In your scenario, you are receiving acks from 3 replicas while it is
possible to have 4 in the ISR. This means that one replica can be up to
4000 messages (by default) behind others. If a leader crashes, there is 33%
chance this replica will become the new leader, thereby losing up to 4000
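For reference: the 4000 figure is the default of the old
replica.lag.max.messages broker setting. The usual way to bound this kind of
loss (illustrative values):

  # broker (or per topic)
  min.insync.replicas=2
  # producer
  acks=all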
3.
> Correct?
>
> Thanks,
> Prabhjot
> On Nov 28, 2015 10:20 AM, "Gwen Shapira" <g...@confluent.io> wrote:
>
> > In your scenario, you are receiving acks from 3 replicas while it is
> > possible to have 4 in the ISR. This means that one replica can be u
In 0.9.0, close() has a timeout parameter that allows specifying how long
to wait for the in-flight messages to complete (definition of complete
depends on value of "acks" parameter).
On Wed, Nov 25, 2015 at 3:58 AM, Muqtafi Akhmad
wrote:
> Hello guys,
>
> I am using
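In code (a sketch; needs java.util.concurrent.TimeUnit):

  // wait up to 30 seconds for in-flight messages to complete, then give up
  producer.close(30, TimeUnit.SECONDS);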
It looks like you have a single broker (with id = 0) and that topic1 has a
single replica and the broker is alive and well.
The socket error is our bug (shouldn't be an error) and doesn't indicate
that the broker is down.
On Wed, Nov 25, 2015 at 3:26 AM, Shaikh, Mazhar A (Mazhar) <
ug's version together in the same Kafka cluster?
> Also we currently run spark streaming job (with scala 2.10) against the
> cluster. Any known issues of 0.9.0 are you aware of under this scenario?
>
> Thanks,
> Tony
>
>
> On Mon, Nov 23, 2015 at 5:41 PM, Gwen Shapira <g...@confluen