Hi All,
I have implemented a high-level Kafka consumer in Storm, but it looks like the
expected parallelism isn't being achieved: I have 3 partitions and 2 tasks for
the spout, yet only one of them is emitting data.
PFB the screen grab showing the number of spout tasks and the data emitted by
only one of them.
Hello,
I'm using 0.8.2-beta with 3 brokers and 3 zookeeper nodes (3.4.6). I had
some known issues with 0.8.1.1 that were supposedly fixed in 0.8.2, so I
moved to 0.8.2-beta, but now I'm getting different errors in 0.8.2-beta.
All the brokers come up fine and registered with the zookeeper. Then, I
The reassignment tool outputs the original assignment before executing the
new one. If you saved that output, you can initiate another reassignment to
restore the initial state. That is probably the safer way to fix the
reassignment.
On Wed, Dec 17, 2014 at 3:12 PM, Salman Ahmed
wrote:
>
> I had
We still have a few blockers to fix in 0.8.2. When that's done, we can
discuss whether to do another 0.8.2 beta or just do the 0.8.2 final release.
Thanks,
Jun
On Wed, Dec 17, 2014 at 5:29 PM, Shannon Lloyd wrote:
>
> Are you guys planning another beta for everyone to try out the changes
> befo
Are you guys planning another beta for everyone to try out the changes
before you cut 0.8.2 final?
Cheers,
Shannon
On 18 December 2014 at 11:24, Rajiv Kurian wrote:
>
> Has the mvn repo been updated too?
>
> On Wed, Dec 17, 2014 at 4:31 PM, Jun Rao wrote:
> >
> > Thanks everyone for the feedbac
Has the mvn repo been updated too?
On Wed, Dec 17, 2014 at 4:31 PM, Jun Rao wrote:
>
> Thanks everyone for the feedback and the discussion. The proposed changes
> have been checked into both 0.8.2 and trunk.
>
> Jun
>
> On Tue, Dec 16, 2014 at 10:43 PM, Joel Koshy wrote:
> >
> > Jun,
> >
> > Tha
Thanks everyone for the feedback and the discussion. The proposed changes
have been checked into both 0.8.2 and trunk.
Jun
On Tue, Dec 16, 2014 at 10:43 PM, Joel Koshy wrote:
>
> Jun,
>
> Thanks for summarizing this - it helps confirm for me that I did not
> misunderstand anything in this thread
Thanks, now I understand largest.
On Wed, Dec 17, 2014 at 5:42 PM, Gwen Shapira wrote:
>
> When you add a new consumer group, you can choose where it will start
> reading.
>
> With high-level consumer you can set auto.offset.reset to "largest"
> (new messages only) or "smallest" (all messages) ,
I had an issue where one Kafka node was filling up on disk space. I used
the reassignment script incorrectly, overloading a large number of
topics/partitions onto two target machines, which caused Kafka to stop on
those machines.
I would like to cancel the reassignment process, and restore it t
When you add a new consumer group, you can choose where it will start reading.
With high-level consumer you can set auto.offset.reset to "largest"
(new messages only) or "smallest" (all messages) , with simple
consumer you can pick specific offsets.
On Wed, Dec 17, 2014 at 2:33 PM, Greg Lloyd w
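A minimal sketch of what Gwen describes, as plain properties for the 0.8.x high-level consumer (the zookeeper address and group id are made-up placeholders, and the real setup needs the Kafka client jar; this only shows the config):

```java
import java.util.Properties;

public class OffsetResetConfig {
    // Sketch of 0.8.x high-level consumer settings for a brand-new group.
    // The zookeeper address and group id below are hypothetical examples.
    static Properties newGroupProps(String startFrom) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "reports-v2"); // a group with no committed offsets yet
        // "smallest" replays all retained messages; "largest" skips history
        // and reads only messages produced after the group first connects.
        props.put("auto.offset.reset", startFrom);
        return props;
    }

    public static void main(String[] args) {
        System.out.println("auto.offset.reset="
                + newGroupProps("largest").getProperty("auto.offset.reset"));
    }
}
```

With the simple consumer there is no auto.offset.reset at all: you fetch from an explicit offset per partition, so a new consumer can start anywhere in the retained log.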
Thanks for the reply,
So if I wanted to add a new group of consumers 6 months into the lifespan
of my implementation and I didn't want that new group to process all the
last six months is there a method to manage this?
On Tue, Dec 16, 2014 at 9:48 PM, Gwen Shapira wrote:
>
> " If all
> the con
You can configure the number of resends on the producer.
Thanks,
Jun
On Wed, Dec 17, 2014 at 10:34 AM, Xiaoyu Wang wrote:
>
> I have tested using "async" producer with "required.ack=-1" and got really
> good performance.
>
> We have not used async producer much previously, any potential datalos
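A sketch of the resend settings Jun refers to, assuming the 0.8.x (old) producer's property names; the broker list is a placeholder:

```java
import java.util.Properties;

public class ProducerRetryConfig {
    // Sketch of resend-related settings on the 0.8.x producer.
    // The broker list below is a made-up placeholder.
    static Properties retryProps(int maxRetries) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        // Number of times the producer retries a failed send before giving up.
        props.put("message.send.max.retries", Integer.toString(maxRetries));
        // Pause between retries, giving leader election time to complete.
        props.put("retry.backoff.ms", "100");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(retryProps(5).getProperty("message.send.max.retries"));
    }
}
```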
I have tested using "async" producer with "required.ack=-1" and got really
good performance.
We have not used the async producer much previously; is there any potential
data loss when a broker goes down? For example, when a broker goes down, does
the producer resend all the messages in a batch?
On Wed, Dec 17, 20
Thanks Jun.
We have tested our producer with the different required.ack configs. Even
with required.ack=1, the producer is > 10 times slower than with
required.ack=0. Does this match your testing?
I saw the LinkedIn Kafka SRE presentation. Wondering what configuration
you guys have
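The three ack levels being compared in this thread can be sketched as properties for the 0.8.x producer (the actual property name in that era is request.required.acks; the broker list is a placeholder):

```java
import java.util.Properties;

public class AckLevels {
    // Sketch of the three acknowledgement levels discussed above,
    // using 0.8.x-era producer property names; broker list is a placeholder.
    static Properties ackProps(int acks) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092");
        //  0 = fire-and-forget: no broker acknowledgement (fastest, least safe)
        //  1 = leader acknowledges after writing to its local log
        // -1 = leader waits for all in-sync replicas (slowest, safest)
        props.put("request.required.acks", Integer.toString(acks));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(ackProps(-1).getProperty("request.required.acks"));
    }
}
```

The latency gap reported above fits this: with acks=0 the producer never waits for a network round trip, while acks=1 blocks on at least one broker response per request.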
Hi All,
I am trying to figure out the best configuration for my Kafka brokers so that,
in case of a restart, the restarted node catches up with the leader quickly.
My test environment has 2 Kafka brokers and 1 topic with one partition.
I first ran the test (Test #1) with default settings, i.e.
num.replica.fe
Hi Team,
I have to decide whether I should go with the Kafka producer perf test
utility or build my own Java tool for my load testing.
Kindly let me know if anyone knows of any limitations with
"kafka-producer-perf-test.sh" when it comes to simulating messages under
load.
Regards,
Nit