Hi,
I want to change my email address for my subscription. Is this the right group
to email?
Thanks,
Shibha
"Finally, Kafka leans heavily on the OS page cache for data storage. Although the
question says that Kafka writes to disk immediately, that is not completely
true. Actually, Kafka just writes to the filesystem immediately, which is really
just writing to the kernel's memory pool, which is
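The distinction being made here (a filesystem write vs. a durable disk write) can be sketched in a few lines of Python; the temp file is a throwaway stand-in, nothing Kafka-specific:

```python
import os
import tempfile

# A plain write() returns once the bytes are in the kernel's page cache;
# they may not yet be on the physical disk.
fd, path = tempfile.mkstemp()
os.write(fd, b"record-1\n")   # fast: lands in the page cache

# os.fsync() is the explicit flush-to-disk step. Kafka normally skips a
# per-message fsync and lets the OS flush the page cache in the background.
os.fsync(fd)                  # blocks until the data is durable
os.close(fd)

with open(path, "rb") as f:
    content = f.read()
os.unlink(path)
```

Kafka's durability therefore comes from replication across brokers rather than from synchronous flushes on every write.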
reads a partition at one time (a consumer
may consume multiple partitions, but no two consumers from the same group consume
the same partition).
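The rule in parentheses can be modeled with a toy assignment function. This is a hypothetical sketch — Kafka's real assignors (range, round-robin, sticky) are more elaborate — but it preserves the same invariant: every partition goes to exactly one consumer in the group, while one consumer may own several partitions.

```python
def assign_partitions(partitions, consumers):
    """Round-robin sketch of a consumer-group assignment: each
    partition is owned by exactly one consumer in the group."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

partitions = ["topic-0", "topic-1", "topic-2", "topic-3"]
consumers = ["consumer-a", "consumer-b"]
assignment = assign_partitions(partitions, consumers)
# Each consumer owns multiple partitions, but no partition is shared.
```

Parallelism in a group is therefore capped by the partition count: a third consumer added to a two-partition topic would sit idle.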
Virgil.
On 6/28/18, 7:45 PM, "Malik, Shibha (GE Renewable Energy, consultant)"
wrote:
You mean we use multiple partitions for a topic?
If your key is the customer ID, all operations done by the same
customer are ordered; etc. Just find a suitable key.
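A toy sketch of that keying idea, using a made-up deterministic hash in place of Kafka's murmur2-based default partitioner: records sharing a key always land in the same partition, so a consumer of that partition sees them in their produced order.

```python
NUM_PARTITIONS = 6

def partition_for(key: str) -> int:
    # Toy stand-in for Kafka's default partitioner, which hashes the
    # record key (murmur2) modulo the partition count.
    return sum(key.encode()) % NUM_PARTITIONS

# Hypothetical bank-account operations, keyed by customer ID.
ops = [("cust-42", "deposit"), ("cust-7", "open"), ("cust-42", "withdraw")]

by_partition = {}
for key, op in ops:
    by_partition.setdefault(partition_for(key), []).append((key, op))
# Both cust-42 operations end up in the same partition, in order;
# ordering across different customers is not guaranteed, and usually
# does not need to be.
```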
Virgil.
On 6/28/18, 4:12 AM, "Malik, Shibha (GE Renewable Energy, consultant)"
wrote:
But then restricting a consumer to use only one partition seems to be similar
to tradit
. 2018, 8:33 am Malik, Shibha (GE Renewable Energy, consultant),
wrote:
> If the order of data is not maintained in Kafka, is Kafka not suitable
> for managing state / transactional scenarios such as updating a bank
> account, etc.?
>
If the order of data is not maintained in Kafka, is Kafka not suitable for managing
state / transactional scenarios such as updating a bank account, etc.?
Hi all,
What are the use cases where technologies like Kafka, Storm, Flink, Hive,
Hadoop and Spark differentiate?
Is there good material online or a book to refer to for this?
Thanks,
Shibha
Hi,
Can multiple producers write to the same partition ?
Hi All
Can I have three independent ZooKeepers tagged to three Kafka brokers
without any clustering or quorum?
Would it be a good idea ?
Hi All,
We are seeing the following behavior; let me know if it's expected or a
configuration error.
I have Apache Kafka running on three servers over the TLS protocol. They are
clustered at the ZK level.
Behaviour:
1. Unable to run only one instance - when 2 out of 3 servers or instances
goes
Hi All,
The Kafka instance breaks down when using KStream; it frequently runs out of
memory, resulting in service unavailability.
Is it good practice to use KStream?
What other options should be tried to avoid such breakage?
If it is best practice, how do we fine-tune Kafka to withstand the load
Hi All,
Has anybody tried to parse Kafka logs using Logstash?
If yes, can you please share the patterns used to parse them.
Thanks in advance.
> From: IT Consultant <0binarybudd...@gmail.com>
> Sent: Friday, June 2, 2017 11:02 AM
> To: users@kafka.apache.org
> Subject: Kafka Over TLS Error - Failed to send SSL Close message - Broken
> Pipe
>
> Hi All,
>
> I have been seeing the below error for the past three days,
>
Hi All,
I have been seeing the below error for the past three days.
Can you please help me understand more about this?
WARN Failed to send SSL Close message
(org.apache.kafka.common.network.SslTransportLayer)
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native
Hi All,
Currently, I am running a TLS-enabled multi-node Kafka.
Version: 2.11-0.10.1.1
Scenario: Whenever the producer tries to produce around 10 records at once to
Kafka, it gets a "failed to update metadata after 5000 ms" error.
Server.properties: [image: Inline image 1]
Can
Hi All,
How can I avoid using a password for keystore creation?
We are currently passing the keystore password while accessing the TLS-enabled
Kafka instance.
I would like to use either a passwordless keystore or to avoid a password for
clients accessing Kafka.
connect to Kafka (i.e., before creating a consumer or producer)
>
> System.setProperty("zookeeper.ssl.keyStore.password", password);
>
> martin
>
>
> From: IT Consultant <0binarybudd...@gmail.com>
> Sent: April 11, 2017 2:01 PM
>
Hi All
How can I avoid using a password for keystore creation?
Our corporate policies don't allow us to hardcode passwords. We are
currently passing the keystore password while accessing the TLS-enabled Kafka
instance.
I would like to use either a passwordless keystore or to avoid a password for
client
Hi Todd
Can you please help me with notes or a document on how you achieved
encryption?
I have followed the material available on the official sites but failed, as I
am no good with TLS.
On Mar 6, 2017 19:55, "Todd Palino" wrote:
> It’s not that Kafka has to decode it, it’s that it
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
Is generating a key and certificate enough, or should I do anything
on the ZooKeeper front to make it work with the Kafka brokers?
Am I missing anything here?
On Thu, Mar 2, 2017 at 3:08 AM, IT Consultant <0binarybudd...@gmail.
; follow instructions in the doc to enable SSL.
>
> -Harsha
>
> On Mar 1, 2017, 1:08 PM -0800, IT Consultant <0binarybudd...@gmail.com>,
> wrote:
> > Hi Harsha ,
> >
> > Thanks a lot .
> >
> > Let me explain where I am stuck,
> >
> > i h
understand the question. You need to make sure zookeeper hosts
> and port are reachable from your broker nodes.
> -Harsha
>
> On Wed, Mar 1, 2017 at 12:45 PM IT Consultant <0binarybudd...@gmail.com>
> wrote:
>
> > Hi Team ,
> >
> > Can you please help me
Hi Team,
Can you please help me understand:
1. How can I secure a multi-node (3 machines), single-broker (1 broker) Apache
Kafka deployment using SSL?
I tried to follow the instructions here but found them pretty confusing.
https://www.confluent.io/blog/apache-kafka-security-authoriz