Implement your own custom
`org.apache.kafka.clients.consumer.internals.PartitionAssignor`
and assign all the subscribed partitions to the first consumer instance in
the group.
See 'partition.assignment.strategy' config in the consumer configs [1]
[1]: http://kafka.apache.org/documentation.html#ne
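The assignment logic itself is tiny; as a rough sketch of just that logic (plain Python for illustration only — the real `PartitionAssignor` interface works with `Subscription`/`Assignment` objects, not bare dicts):

```python
def assign_all_to_first(member_ids, partitions):
    """Give every subscribed partition to the first consumer (sorted by
    member id, so the choice is deterministic); all other members get an
    empty assignment."""
    members = sorted(member_ids)
    assignment = {m: [] for m in members}
    if members:
        assignment[members[0]] = list(partitions)
    return assignment
```

In the real client this logic would live in your assignor's `assign()` method, and the class would be registered through the `partition.assignment.strategy` consumer config.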
Hi,
Can you enable authorizer debug logs and check for logs related to
denied operations?
We should also enable operations on the Cluster resource.
Thanks,
Manikumar
On Thu, Aug 4, 2016 at 1:51 AM, Bryan Baugher wrote:
> Hi everyone,
>
> I was trying out kerberos on Kafka 0.10.0.0 by creating
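For reference, the authorizer logging Manikumar mentions goes through a dedicated logger in Kafka's stock log4j.properties; a sketch of raising it to DEBUG (logger and appender names per the bundled config):

```properties
# log4j.properties fragment: surface allowed/denied authorizer decisions
log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
```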
Hi,
Yes, in fact.
And I found a solution.
It was in editing the punctuate method in the Kafka Streams processor.
- Reply message -
From: "Guozhang Wang"
To: "users@kafka.apache.org"
Subject: A specific use case
Date: Wed., Aug 3, 2016 23:38
Hello Hamza,
By saying "broker" I think yo
Hi,
Sorry, I have one more query regarding consumer.poll().
As KafkaConsumer fetches records whenever they are available irrespective of
poll(timeout), does KafkaConsumer split the poll(timeout) into multiple
intervals and check the Kafka server for any messages?
E.g., if the poll timeout is 10 min, does it split
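For what it's worth, the timeout is an upper bound, not a fixed wait: the client loops on the network until either records arrive or the deadline passes, and returns early in the first case. A simplified sketch of that deadline loop (plain Python, not the actual client internals):

```python
import time

def poll(fetch_once, timeout_s):
    """Loop until records arrive or the deadline passes, whichever comes
    first. fetch_once() models one network round trip to the broker."""
    deadline = time.monotonic() + timeout_s
    while True:
        records = fetch_once()
        if records:
            return records   # return early; the full timeout is unused
        if time.monotonic() >= deadline:
            return []        # timeout expired with no data
```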
Hi,
We have a Kafka server/broker running on a separate machine (say machine A);
for now we are planning to have one node. We have multiple topics and
all topics have only 1 partition for now.
We have our application, which includes Kafka consumers, installed on machine
B and machine C. Our applic
kafka-python by default uses the same partitioning algorithm as the Java
client. If there are bugs, please let me know. I think the issue here is
with the default nodejs partitioner.
-Dana
On Aug 3, 2016 7:03 PM, "Jack Huang" wrote:
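For context, the Java client's default partitioner hashes the key (murmur2) modulo the partition count, and round-robins when the key is null. A rough sketch of that shape (plain Python; zlib.crc32 is a stand-in for murmur2, so actual partition numbers will differ from the Java client's):

```python
import itertools
import zlib

_round_robin = itertools.count()

def choose_partition(key, num_partitions):
    """Keyed messages hash to a stable partition; unkeyed messages
    round-robin across partitions. crc32 here is only a stand-in for
    the Java client's murmur2 hash."""
    if key is None:
        return next(_round_robin) % num_partitions
    return (zlib.crc32(key) & 0x7FFFFFFF) % num_partitions
```

The practical point in this thread: two clients only agree on where a keyed message lands if they use the same hash function, which is why a different nodejs partitioner can disagree with the Java and kafka-python clients.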
I see, thanks for the clarification.
On Tue, Aug 2, 2016 at 10:07 PM, Ewen Cheslack-Postava
wrote:
> Jack,
>
> The partition is always selected by the client -- if it weren't, the brokers
> would need to forward requests since different partitions are handled by
> different brokers. The only "def
Hi,
Thanks for your reply Kamal and Oleg.
Thanks and Regards
A.SathishKumar
>Also keep in mind that unfortunately KafkaConsumer.poll(..) will deadlock
>regardless of the
>timeout if a connection to the broker cannot be established, and won't react
>to thread interrupts.
>This essentially mean
Thanks Manikumar. I filed KAFKA-4018 with the details of when we regressed
as well as a fix.
Ismael
On Wed, Aug 3, 2016 at 4:18 PM, Manikumar Reddy
wrote:
> Hi,
>
> There are two versions of slf4j-log4j jar in the build. (1.6.1, 1.7.21).
> slf4j-log4j12-1.6.1.jar is coming from streams:examples
Thank you all for your responses.
So here is what I tried and it worked since EIP is not an option for me -
1) Created an ENI with a dedicated IP
2) Associated that IP with an A-type DNS record
3) Assigned that ENI to the EC2 instance
4) Created EBS volume to keep ZK data
As EBS volume and ENI are boun
Hi Guozhang,
I believe we are using RocksDB. We are not using the Processor API, just simple
map and countByKey functions so it is using the default KeyValue Store.
Thanks!
Srinidhi
MirrorMaker actually doesn't have a default - it uses what you
configured in the consumer.properties file you use.
Either:
auto.offset.reset = latest (biggest in old versions)
or
auto.offset.reset = earliest (smallest in old versions)
So you can choose whether when MirrorMaker first comes up, if
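Concretely, that is one line in the consumer.properties file passed to MirrorMaker (fragment; `earliest` shown as the choice that avoids missing data on first start):

```properties
# consumer.properties for MirrorMaker
auto.offset.reset=earliest
```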
Hello Srinidhi,
Are you using RocksDB as well like in the WordCountDemo for your
aggregation operator?
Guozhang
On Tue, Aug 2, 2016 at 5:20 PM, Srinidhi Muppalla
wrote:
> Hey All,
>
> We are having issues successfully storing and accessing a Ktable on our
> cluster which happens to be on AWS.
Hello Phillip,
You are right that a Kafka Streams instance has one producer and one consumer
client behind the scenes, and hence you can just monitor it like you would
monitor a normal producer / consumer client.
More specifically, you can find the producer metric names as in:
http://docs.confluent.
We're using MirrorMaker to mirror data from one data center to another data
center (1 to 1). We noticed that by default MirrorMaker starts
from the latest offset. How do we change the MirrorMaker producer config to start
from the last checkpointed offset in case of a crash without losing dat
Hello Hamza,
By saying "broker" I think you are actually referring to a Kafka Streams
instance?
Guozhang
On Mon, Aug 1, 2016 at 1:01 AM, Hamza HACHANI
wrote:
> Good morning,
>
> I'm working on a specific use case. In fact I'm receiving messages from an
> operator network and trying to do stat
Hi everyone,
I was trying out kerberos on Kafka 0.10.0.0 by creating a single node
cluster. I managed to get everything setup and past all the authentication
errors but whenever I try to use the console producer I get 'Error while
fetching metadata ... LEADER_NOT_AVAILABLE'. In this case I've crea
Hi,
Can you please provide information on self-signed certificate setup in
Kafka, as the Kafka documentation only covers the CA-signed setup:
http://kafka.apache.org/documentation.html#security_ssl
This is because we need to provide the truststore and keystore parameters
during configuration.
Or to work wit
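For a purely self-signed setup (no CA), one common pattern is to generate a self-signed key pair per broker, export its certificate, and import that certificate directly into each client's truststore. A sketch with keytool (aliases, file names, and validity are placeholders):

```shell
# 1) Generate the broker's self-signed key pair in its keystore
keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA

# 2) Export the broker's certificate
keytool -keystore server.keystore.jks -alias localhost -export -file server.cer

# 3) Import that certificate into the client's truststore
keytool -keystore client.truststore.jks -alias kafka-broker -import -file server.cer
```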
Hi Gwen,
I have explored and tested this approach in the past. It does not work for
2 reasons:
A. the first one relates to the ZKClient implementation,
B. the second is the JVM behavior.
A. The ZKConnection [1] managed by ZKClient uses a legacy constructor of
org.apache.ZooKeeper [2]. The crea
In the past on classic EC2 with an autoscaling group of zookeeper
instances, I've used elastic IPs for my list. There we subscribed an SQS
queue to the autoscaling SNS topic, and when a new instance was brought
online one of the spare IPs was allocated to it. It has to try
over and over s
Hey Zuber,
Our AWS ZK deployment involves a subnet that is not used for other things,
fixed private IP addresses, and EBS volumes for ZK data. That way, if a ZK
instance fails, it can be replaced with another instance with the same IP
and data volume.
On Wed, Aug 3, 2016 at 7:22 AM, Zuber wrote:
Can you define a DNS name that round-robins to multiple IP addresses?
That way ZKClient will cache the name and you can rotate IPs behind
the scenes with no issues.
On Wed, Aug 3, 2016 at 7:22 AM, Zuber wrote:
> Hello –
>
> We are planning to use Kafka as Event Store in a system which is being
No. If you want automatic update, you need to use the same broker id.
Many deployments use EBS to store their broker data. The
auto-generated id is stored with the data, so if a broker dies they
install a new machine and connect it to the existing EBS volume and
immediately get both the old id and
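That on-disk id lives in a small meta.properties file at the root of each log directory; roughly (values illustrative):

```properties
# <log.dir>/meta.properties
version=0
broker.id=1001
```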
Hi,
There are two versions of slf4j-log4j jar in the build. (1.6.1, 1.7.21).
slf4j-log4j12-1.6.1.jar is coming from streams:examples module.
Thanks,
Manikumar
On Tue, Aug 2, 2016 at 8:31 PM, Ismael Juma wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the second candid
Hello –
We are planning to use Kafka as Event Store in a system which is being built
using event sourcing design approach.
Here is how we deployed the cluster in AWS to verify HA in the cloud (in our
POC we only had 1 topic with 1 partition and a replication factor of 3) -
1) 3 ZK servers running
Thank you Ewen for the help. I'll give that a try.
Regards,
Carlos.
On Tue, Aug 2, 2016, 10:12 PM Ewen Cheslack-Postava
wrote:
> It seems like we definitely shouldn't block indefinitely, but what is
> probably happening is that the consumer is fetching metadata, not finding
> the topic, then ge
You have to run the kafka-reassign-partitions.sh script to move partitions to
a new set of replicas.
On Wed, Aug 3, 2016 at 3:14 AM, Digumarthi, Prabhakar Venkata Surya <
prabhakarvenkatasurya.digumar...@capitalone.com> wrote:
> Hi ,
>
>
> I am right now using kafka version 0.9.1.0
>
> If I choose to enab
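A sketch of the reassignment step mentioned above (topic name, broker ids, and ZooKeeper address are placeholders; flags per the 0.9/0.10-era tool):

```shell
cat > reassign.json <<'EOF'
{"version":1,
 "partitions":[
   {"topic":"my-topic","partition":0,"replicas":[1,2]}
 ]}
EOF

bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassign.json --execute

# Check progress until the reassignment completes
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassign.json --verify
```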
This would be of value to me, as well. I'm currently not sure how to avoid
having users of ruby-kafka produce messages that exceed that limit when
using an async producer loop – I'd prefer to not allow such a message into
the buffers at all rather than having to deal with it only when there's a
bro
Hi,
I am using kafka 0.9.0.1 and the corresponding java client for my consumer.
I see the below error in my consumer logs:
o.a.k.c.c.i.ConsumerCoordinator - Error UNKNOWN_MEMBER_ID occurred while
committing offsets for group consumergroup001
Why could this error occur?
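One common trigger (stated as a general pattern, not a diagnosis of this particular log): the consumer took longer between poll() calls than its session allowed, so the coordinator evicted it from the group, and its next commit carried a member id the coordinator no longer knows. On 0.9, where heartbeats are driven by poll(), the usual mitigations are a larger session timeout and/or doing less work per poll loop:

```properties
# consumer config (0.9.x): allow more time between polls/heartbeats
session.timeout.ms=30000
```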