All:
We are deploying Kafka 1.0 as a microservice. I want to understand how
security vulnerabilities are handled in the Kafka project. How are
vulnerabilities identified, in addition to being reported by users? Are any
tools used for static and dynamic scanning? Can the scan results be shared?
Adding the users list to this email for help on the queries below. Please help us.
Regards,
Ajay Chaudhary
On Thursday 14 December 2017, 1:07:56 PM IST, ajay chaudhary wrote:
Hi Team,
This is Ajay working with Yodlee India. We are trying to set up a Kafka
cluster for
Hi,
Can anyone share some documentation and best practices to follow when
setting up a Kafka cluster in production?
We have a plan to set up 3 brokers, 3 ZooKeeper nodes, 1 Schema Registry and
1 REST Proxy.
Regards,
Bunty Ray
I found a better approach:
final List<TopicPartition> partitions = consumer.partitionsFor(TOPIC_NAME)
        .stream()
        .map(part -> new TopicPartition(TOPIC_NAME, part.partition()))
        .collect(Collectors.toList());
Hi
In my consumer app, in order to start consuming records from the beginning
of all partitions of a given Kafka topic, I first have to issue a poll to
make sure partitions are assigned to my consumer:
consumer.poll(1000);
consumer.seekToBeginning(consumer.assignment());
Is there an alternative
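One alternative, sketched against the Kafka consumer API (broker address, group ID, and topic name below are placeholders), is to register a ConsumerRebalanceListener with subscribe(), so the seek happens as soon as partitions are assigned rather than after a "blind" poll:

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekToBeginningExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "example-group");           // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("TOPIC_NAME"),
                new ConsumerRebalanceListener() {
                    @Override
                    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                        // nothing to do before the rebalance in this sketch
                    }

                    @Override
                    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                        // Invoked from within poll() once the assignment is known,
                        // so the seek is guaranteed to apply to all assigned partitions.
                        consumer.seekToBeginning(partitions);
                    }
                });

        // Subsequent polls start from the beginning of every assigned partition.
        consumer.poll(1000);
    }
}
```

If you don't need group management at all, manually calling assign() with the partitions from partitionsFor() and then seekToBeginning() also avoids the initial poll.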
Can you look at the log from the controller to see if there is some clue
w.r.t. partition 82?
Was unclean leader election enabled?
BTW, which release of Kafka are you using?
Cheers
On Thu, Dec 14, 2017 at 11:49 AM, Tarun Garg wrote:
> I checked log.dir of the all nodes and
My apologies, the append-log behavior was due to the repartition logs not
being cleaned up. Still, log compaction has no influence on the
aggregated records in the procedure mentioned above. Are there any other
tricks one could use? Exactly-once does not seem to have an effect in this
particular
Hello,
I've been experimenting with Kafka and I've run into the following issue:
- Our log segment retention period is 2 hours
- A broker in the ISR for partition A went
- The broker stayed offline for several days
- The broker was brought online
- Log segments for partition A (at
I checked log.dir on all the nodes and found the index, log and time index
files are in sync (size and date of modification).
This caused more confusion.
How can I add this ISR back?
> On Dec 14, 2017, at 1:35 AM, UMESH CHAUDHARY wrote:
>
> Do you find any messages on broker
hm, strange. It keeps appending records, even in the state store. The
number of records grows for each run.
/Artur
On Thu, Dec 14, 2017 at 8:18 PM, Artur Mrozowski wrote:
> Ok I see, what was the default value before I've changed it?
>
> On Thu, Dec 14, 2017 at 7:47 PM, Artur
Ok I see, what was the default value before I've changed it?
On Thu, Dec 14, 2017 at 7:47 PM, Artur Mrozowski wrote:
> Hi Gouzhang,
> thank you for the answer. Indeed the value is being populated now, however
> the application behaves oddly and not how it used to. I suspect
Hi Gouzhang,
thank you for the answer. Indeed the value is being populated now; however,
the application behaves oddly and not how it used to. I suspect that
disabling caching by setting CACHE_MAX_BYTES_BUFFERING_CONFIG to 0 has been
persisted somehow.
It seems as if log compaction has been disabled
StreamsConfig does accept the LONG type: it accepts the properties as a
`Map` and does the casting internally based on the specified type.
Note that in the code snippet, StreamsConfig is not actually used; it is
only using Properties, which has string constraints.
If you run the slightly
In StreamsConfig.java, CACHE_MAX_BYTES_BUFFERING_CONFIG is defined as:
.define(CACHE_MAX_BYTES_BUFFERING_CONFIG,
        Type.LONG,
        10 * 1024 * 1024L,
I think using a numeric literal should be accepted (as shown by the Demo.java
classes).
On Thu, Dec 14,
Artur,
This is because Properties#getProperty() is expecting a String value, and
hence 10 * 1024 * 1024L is not recognized; you can try "10485760".
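This can be seen with plain java.util.Properties (the key name below is just illustrative):

```java
import java.util.Properties;

public class PropertiesTypeDemo {
    public static void main(String[] args) {
        Properties props = new Properties();

        // put() accepts any Object, but getProperty() only returns String
        // values, so a Long stored this way is invisible to getProperty().
        props.put("cache.max.bytes.buffering", 10 * 1024 * 1024L);
        System.out.println(props.getProperty("cache.max.bytes.buffering")); // null

        // Storing the value as a String behaves as expected.
        props.setProperty("cache.max.bytes.buffering", "10485760");
        System.out.println(props.getProperty("cache.max.bytes.buffering")); // 10485760
    }
}
```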
Guozhang
On Wed, Dec 13, 2017 at 10:51 PM, Artur Mrozowski wrote:
> Sure.
>
> Another observation I've made is that before I
I have also seen slower replication across the cluster when partitions per
broker are abnormally high, even though the bytes/message throughput isn't
that high. Due to legacy reasons, we have a lot of partitions per broker,
with only a handful really hot and the others just barely trickling data,
Interesting.
Looks like disconnection resulted in the stack overflow.
I think the following would fix the overflow:
https://pastebin.com/Pm5g5V2L
On Thu, Dec 14, 2017 at 7:40 AM, Jörg Heinicke wrote:
>
> Hi everyone,
>
> We recently switched to Kafka 1.0 and are facing
Hi everyone,
We recently switched to Kafka 1.0 and are facing an issue which we had
not noticed with version 0.10.x before.
One of our consumer groups falls into a permanent rebalancing cycle. On
analysing the log files we noticed a StackOverflowError in the
kafka-coordinator-heartbeat-thread
Hi Subhash,
Thanks for your answer, that was indeed what I read from the documentation
as well. I was hoping for some kind of solution as you are mentioning, but
cannot find any.
Think I should assume it is not possible.
Wilko
2017-12-14 15:31 GMT+01:00 Subhash Sriram
Hi,
I am not an expert, but from looking at the ACL documentation, you can't
control read authorization at the partition level, only at the topic level. If
it were possible to control access at the partition level, maybe you could have
a dedicated partition for each customerID?
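If one went the dedicated-partition route, the producer side would need a stable customer-to-partition mapping. A hypothetical plain-Java sketch (partitionFor is an invented helper, not a Kafka API; the producer would pass its result as the explicit partition in a ProducerRecord):

```java
public class CustomerPartitioner {
    // Hypothetical helper: deterministically map a customer ID to one
    // partition so that all of a customer's messages land together.
    static int partitionFor(String customerId, int numPartitions) {
        // Mask the sign bit so the result is always a valid partition index.
        return (customerId.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println("customer1 -> partition " + partitionFor("customer1", 3));
    }
}
```

Note this only separates the data physically; since ACLs stop at the topic level, it does not by itself enforce read authorization.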
Thanks,
Subhash
I have also created a monitoring application for Kafka that uses Prometheus.
You can look at the source code here:
https://github.com/aglenis/kafka_monitoring_pandas
2017-12-13 9:53 GMT+02:00 Irtiza Ali :
> Ok thank you Michal
>
> On Tue, Dec 12, 2017 at 9:30 PM, Michal Michalski
Hi All,
I was wondering, is it possible to have one topic with data from different
customers, and to make sure a consumer can only read the messages of a
certain customer? i.e.:
Three messages:
"message1 for customer1"
"message2 for customer2"
"message3 for customer1"
Authorized as
Not recommended. You’ll have timeout issues with the size of the controller
requests. Additionally, there appear to be problems with writing some nodes
in Zookeeper at high partition counts.
-Todd
On Thu, Dec 14, 2017 at 8:58 AM, Abhimanyu Nagrath <
abhimanyunagr...@gmail.com> wrote:
> Can I
Can I have 20k partitions on a single Kafka broker?
Hi All,
This is only reproducible when I have 3 nodes in my cluster, even at the
start of the app; everything works fine on 2 nodes.
I have tried this again and faced the same error. This time I
increased MAX_PARTITION_FETCH_BYTES_CONFIG to 10MB from the default 1MB, and
am still getting the same error.
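For reference, a minimal sketch of setting that fetch size to 10MB, using plain java.util.Properties (the string key is the one behind ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG):

```java
import java.util.Properties;

public class FetchSizeConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Maximum data returned per partition per fetch: 10MB instead of
        // the 1MB default.
        props.setProperty("max.partition.fetch.bytes", String.valueOf(10 * 1024 * 1024));
        System.out.println(props.getProperty("max.partition.fetch.bytes")); // 10485760
    }
}
```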
Do you find any messages on broker 3 w.r.t. Topic: XYZ Partition 82? Also,
broker 3 is in the ISR for other partitions (at least 83, 84), so I don't see
any broker issue in this.
On Thu, 14 Dec 2017 at 01:23 Tarun Garg wrote:
> Hi,
>
> I have a Kafka cluster and it is running from
Hi Bill,
I've pinged you directly with the debug file attached. Not able to attach
it here, not sure why.
Best Regards
Artur
On Wed, Dec 13, 2017 at 4:58 PM, Bill Bejeck wrote:
> Just some DEBUG level logging from when you start up your Streams
> application would be