We're doing some testing on Kafka 1.1 brokers in AWS EC2. Specifically,
we're cleanly shutting brokers down for 5 mins or more and then restarting
them, while continuously producing to and consuming from the cluster. In
theory, this should be relatively seamless for both producers and consumers.
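For reference, a minimal sketch of the client-side settings we'd expect to matter for riding out a clean broker restart (these are standard Kafka 1.1 producer config names; the values and the broker-side notes are illustrative assumptions, not recommendations):

```properties
# producer.properties — let producers ride out a clean broker restart
acks=all                                   # wait for all in-sync replicas to ack
retries=2147483647                         # retry while partition leadership moves
max.in.flight.requests.per.connection=1    # preserve ordering across retries
# broker/topic side (assumptions for a 3-node cluster):
#   replication factor 3, min.insync.replicas=2,
#   unclean.leader.election.enable=false
```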
console consumer with the option *--property
print.key=false*. However, we can't figure out a way to turn off key
deserialization (if that is what is causing this) on the kafka
connect/connector side.
We're using Kafka 1.1.1, and all the packages are Confluent platform 4.1.2.
Any help would be appreciated.
Thanks,
Marcos Juarez
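Assuming the goal is simply to stop Connect from interpreting keys at all, one option is to route keys through ByteArrayConverter so they pass through as opaque bytes. A worker-config sketch (the value converter shown is an assumption; keep whatever you already use):

```properties
# connect-worker.properties — pass record keys through as raw bytes
# instead of deserializing them
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
# value conversion unchanged (JsonConverter here is an assumption):
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
```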
s.
Anyway, thought I'd post a follow-up to the original question, since it
might help somebody else in the future.
Marcos
On Mon, Oct 2, 2017 at 6:57 PM, Marcos Juarez <mjua...@gmail.com> wrote:
> I apologize for sending this to dev. Reposting to the Users mailing list.
>
> ---
to
decide the consumer offset partition, and is this something we can change
or influence?
Thanks,
Marcos Juarez
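For context on how that partition is chosen: as far as we can tell from the broker's GroupMetadataManager, each group id maps onto __consumer_offsets via abs(groupId.hashCode) % offsets.topic.num.partitions (50 by default), so the group name and the partition count of the offsets topic are the only levers. A sketch of the calculation in Python (the helper names are ours, replicating Java's String.hashCode):

```python
def java_string_hashcode(s: str) -> int:
    """Replicate Java's String.hashCode() with 32-bit signed overflow."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    # Convert the unsigned result back to a signed 32-bit int.
    return h - 0x100000000 if h >= 0x80000000 else h

def offsets_partition(group_id: str, num_partitions: int = 50) -> int:
    """__consumer_offsets partition holding this group's commits.
    Uses n & 0x7FFFFFFF for abs(), matching Kafka's Utils.abs."""
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

print(offsets_partition("foo"))  # "foo".hashCode() == 101574 -> partition 24
```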
Streams API for that, can you point me to a
starting point? I've been reading docs and javadocs for a while now, and
I'm not sure where I would add/configure this.
Thanks!
Marcos Juarez
real-time application with a hard SLA?
>
> Regards,
> Ali
>
> On Thu, Apr 13, 2017 at 10:57 AM, Marcos Juarez <mjua...@gmail.com> wrote:
>
> > Ali,
> >
> > I don't know of proper benchmarks out there, but I've done some work in
> > this area, when tryi
Ali,
I don't know of proper benchmarks out there, but I've done some work in
this area, when trying to determine what hardware to get for particular use
cases. My answers are in-line:
On Mon, Apr 10, 2017 at 7:05 PM, Ali Nazemian wrote:
> Hi all,
>
> I was wondering if
you do to failover the controller to a new broker?
If that works for you, I'd like to try it in our staging clusters.
Thanks,
Marcos Juarez
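For what it's worth, one commonly used (if blunt) way to force a controller election is deleting the ephemeral /controller znode; the surviving brokers race to re-create it, and whichever wins becomes the new controller. A sketch, assuming the zookeeper-shell wrapper shipped with Kafka (the host:port is a placeholder for your ensemble):

```shell
# Inspect which broker currently holds the controller role
bin/zookeeper-shell.sh zk1.example.com:2181 get /controller

# Delete the znode to trigger an immediate controller re-election
bin/zookeeper-shell.sh zk1.example.com:2181 delete /controller
```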
On Wed, Mar 22, 2017 at 11:55 AM, Jun MA <mj.saber1...@gmail.com> wrote:
> I have similar issue with our cluster. We don’t know the root cau
?
Thanks for your help!
Marcos
On Fri, Nov 11, 2016 at 11:56 AM, Marcos Juarez <mjua...@gmail.com> wrote:
> Thanks Becket,
>
> We should get a full thread dump the next time, so I'll send it as soon
> as that happens.
>
> Marcos
>
> On Fri, Nov 11, 2016 at 11:27 AM, Be
file left over but that should be
> fine.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
> On Fri, Nov 11, 2016 at 9:51 AM, Marcos Juarez <mjua...@gmail.com> wrote:
>
> > Becket/Jason,
> >
> > We deployed a jar with the base 0.10.1.0 release plus the KAFK
/issues we should be aware of when
downgrading the cluster like that?
Thanks,
Marcos Juarez
On Mon, Nov 7, 2016 at 5:47 PM, Marcos Juarez <mjua...@gmail.com> wrote:
> Thanks Becket.
>
> I was working on that today. I have a working jar, created from the
> 0.10.1.0 branch,
prioritize the ticket.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Mon, Nov 7, 2016 at 9:47 AM, Marcos Juarez <mjua...@gmail.com> wrote:
>
> > We ran into this issue several more times over the weekend. Basically,
> > FDs are exhausted so fast now, we can't e
segment, and concatenated them all
together in the attached text file.
Do you think this is something I should add to KAFKA-3994 ticket? Or is
the information in that ticket enough for now?
Thanks,
Marcos
On Fri, Nov 4, 2016 at 2:05 PM, Marcos Juarez <mjua...@gmail.com> wrote:
> Tha
0.10.1 branch and you can build from there if you
> need something in a hurry.
>
> Thanks,
> Jason
>
> On Fri, Nov 4, 2016 at 9:48 AM, Marcos Juarez <mjua...@gmail.com> wrote:
>
> > Jason,
> >
> > Thanks for that link. It does appear to be a very simila
e the likelihood.
> 3. Out of curiosity, what is the size of your cluster and how many
> consumers do you have in your cluster?
>
> Thanks!
> Jason
>
> On Thu, Nov 3, 2016 at 1:32 PM, Marcos Juarez <mjua...@gmail.com> wrote:
>
> > Just to expand on Lawrence's
Principal Systems Engineer, Confluent Inc.
> * h...@confluent.io (650)924-2670
> */
>
> On Thu, Nov 3, 2016 at 10:02 AM, Marcos Juarez <mjua...@gmail.com>
> wrote:
>
> > We're running into a recurrent deadlock issue in both our production
> and
> > s
deadlock problem.
That same broker is still in the deadlock scenario, we haven't restarted
it, so let me know if you'd like more info/log/stats from the system before
we restart it.
Thanks,
Marcos Juarez
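A deadlock like this usually shows up directly in a thread dump: jstack appends a "Found one Java-level deadlock" section when it detects one. A sketch of capturing a dump from the live broker (the PID lookup is an assumption about how the broker is launched):

```shell
# Capture a full thread dump, including lock ownership info (-l)
BROKER_PID=$(pgrep -f kafka.Kafka | head -n1)
jstack -l "$BROKER_PID" > "broker-threads-$(date +%s).txt"

# jstack reports detected deadlocks at the end of its output
grep -A5 "Found one Java-level deadlock" broker-threads-*.txt
```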
is adding some context to
consumers (they all live within a single app), so that we'll be able to
pause consumption if lag becomes too high, and let the other consumers
catch up.
Any thoughts/suggestions on that?
Thanks,
Marcos Juarez
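The consumer API's pause()/resume() calls fit this pattern (in the real clients they take lists of topic-partitions). A toy sketch of the thresholding logic, with a stub standing in for the real client; all names here are ours, not a real library's:

```python
class StubConsumer:
    """Minimal stand-in for a real Kafka consumer's pause/resume calls."""
    def __init__(self):
        self.paused = False

    def pause(self):
        self.paused = True

    def resume(self):
        self.paused = False

def apply_backpressure(consumer, lag, high_water, low_water):
    """Pause when lag exceeds high_water; resume once it drains below
    low_water. Two thresholds avoid flapping around a single cutoff."""
    if not consumer.paused and lag > high_water:
        consumer.pause()
    elif consumer.paused and lag < low_water:
        consumer.resume()
    return consumer.paused

c = StubConsumer()
print(apply_backpressure(c, lag=50_000, high_water=10_000, low_water=1_000))  # True: paused
print(apply_backpressure(c, lag=500, high_water=10_000, low_water=1_000))     # False: resumed
```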
On Sun, Jan 24, 2016 at 6:10 AM, Jens Rantil <jens.ran...@tink
.
Thanks,
Marcos Juarez
On Mon, May 16, 2016 at 12:28 PM, Liquan Pei <liquan...@gmail.com> wrote:
> Hi Matteo,
>
> There was a bug in 0.9.1 such that task.close() can be invoked both in
> the Worker thread and Herder thread. There can be a race condition that
> consum
from Kafka?
Thanks,
Marcos Juarez
Thank you Joel, will go thorough those docs and make sure our settings are
appropriate on these instances.
Marcos Juarez
On Tue, Aug 5, 2014 at 5:58 PM, Joel Koshy jjkosh...@gmail.com wrote:
The session expirations (in the log you pasted) lead to the broker
losing its registration from
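On session expirations specifically, the usual broker-side knobs are the ZooKeeper session and connection timeouts; giving the broker more headroom reduces expirations during long GC pauses or network blips. A sketch (the values are illustrative assumptions, not recommendations):

```properties
# server.properties — headroom before ZooKeeper expires the broker's
# session during GC pauses or transient network trouble
zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
```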
? And
does that mean that Kafka can't withstand network partitions at all, and
shouldn't be used on unreliable cloud infrastructure?
Thanks for your help.
Marcos Juarez
On Fri, Aug 1, 2014 at 4:53 PM, Joel Koshy jjkosh...@gmail.com wrote:
Leadership moves automatically for at least a few
Thanks for your response Jun.
JIRA has been filed (see link below). Please let me know if I should add
more details/context:
https://issues.apache.org/jira/browse/KAFKA-1464
Thanks,
Marcos Juarez
On Wed, May 21, 2014 at 8:40 AM, Jun Rao jun...@gmail.com wrote:
We don't have
of realtime data being sent to them.
Is there a way to throttle Kafka replication between nodes, so that instead
of going full blast, it replicates at a fixed rate in megabytes or
batches per second? Or is this planned for a future release, maybe 0.9?
Thanks,
Marcos Juarez
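For anyone finding this thread later: replication quotas did land in a later release (KIP-73, Kafka 0.10.1+). A sketch of applying a byte-rate throttle with kafka-configs.sh; the ZooKeeper address, broker id, topic name, and 10 MB/s rate are all placeholders:

```shell
# Throttle replication traffic to ~10 MB/s on broker 0 (KIP-73, 0.10.1+)
bin/kafka-configs.sh --zookeeper zk1.example.com:2181 --alter \
  --entity-type brokers --entity-name 0 \
  --add-config 'leader.replication.throttled.rate=10485760,follower.replication.throttled.rate=10485760'

# The rate only applies to replicas marked as throttled, e.g. all of them:
bin/kafka-configs.sh --zookeeper zk1.example.com:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config 'leader.replication.throttled.replicas=*,follower.replication.throttled.replicas=*'
```

Remember to remove the throttle configs afterwards; they are not cleared automatically.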
/superseded-by/etc JIRA issues, but it's
not clear if it's actually solved in the current ZK version. Any info about how
you've done Zookeeper cluster migrations in the past? We're running version
3.3.5.
Thanks,
Marcos Juarez