Hi Sagar,
It's not there in the release versions or trunk. Here is the JIRA
for it: https://issues.apache.org/jira/browse/KAFKA-1543.
-Harsha
On Thu, Jul 31, 2014, at 05:22 AM, Sagar Khanna wrote:
Hi,
re:
http://grokbase.com/t/kafka/users/13c2rdc2yj/changing
. For example:
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/kafka-tools.log
and export LOG_DIR as you did before.
-Harsha
On Wed, Aug 6, 2014, at 09:43 AM, Shlomi Hazan wrote:
. If there is any interest in the above projects
from the community, I am hoping to work on the documentation and
code, as there hasn't been much activity on these projects.
Thanks,
Harsha
Did you try the gradlew script in the kafka source dir?
-Harsha
On Thu, Sep 4, 2014, at 07:32 AM, Shlomi Hazan wrote:
what gradle version is used to build kafka_2.9.2-0.8.1.1?
tried with v2 and failed with:
gradle --stacktrace clean
FAILURE: Build failed with an exception.
* Where
-storm-starter/blob/develop/src/main/scala/com/miguno/kafkastorm/storm/KafkaStormDemo.scala
-Harsha
On Mon, Sep 15, 2014, at 04:37 AM, siddharth ubale wrote:
Hi all,
I am using a kafka spout for reading data from a producer into my storm
topology. On the kafkaconfig :
1. When i set
Are you running this using storm LocalCluster, and do you want to use an
external zookeeper for LocalCluster?
The latest code has those changes; you can pass params to
LocalCluster(localhost, 2181).
-Harsha
On Mon, Sep 15, 2014, at 10:30 PM, siddharth ubale wrote:
Hi harsha,
Yes, I did check
Neha,
I am interested in picking it up. Assigning the JIRA to myself.
Thanks,
Harsha
On Fri, Sep 26, 2014, at 02:05 PM, Neha Narkhede wrote:
+1 on having someone pick up the JIRA. I'm happy to mentor a contributor
through that JIRA towards a checkin.
On Fri, Sep 26, 2014 at 10:45 AM, Joel
fetch requests to the brokers. There
is no need to place a while loop over the iterator.
ConsumerIterator will take care of it for you. It uses long polling
to listen for messages on the broker and blocks those fetch requests
until there is data available.
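The "iterator blocks instead of busy-polling" idea can be sketched in a few lines of plain Python; this is an illustration only, not Kafka's actual implementation:

```python
import queue
import threading
import time

def blocking_iterator(q):
    # Like ConsumerIterator: block on the underlying queue until a
    # message is available, instead of busy-polling in the caller.
    while True:
        msg = q.get()      # blocks until a "fetch" delivers data
        if msg is None:    # sentinel: stream closed
            return
        yield msg

q = queue.Queue()

def producer():
    for i in range(3):
        time.sleep(0.01)   # messages arrive later; the consumer just waits
        q.put(f"msg-{i}")
    q.put(None)

threading.Thread(target=producer).start()
received = [m for m in blocking_iterator(q)]
print(received)  # ['msg-0', 'msg-1', 'msg-2']
```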
hope that helps.
-Harsha
, if the
number of consumed messages drops below a certain threshold.
-Harsha
On Fri, Oct 17, 2014, at 01:08 AM, Alex Objelean wrote:
@Neha Narkhede
Though monitoring the health of Kafka Zookeeper clusters directly is
useful, it might not be enough.
Consider the following scenario:
You have a client
://github.com/mozilla-metrics/bagheera . This project used
kafka 0.7, but you can see how it's implemented. Hope that helps.
-Harsha
On Mon, Oct 20, 2014, at 08:45 AM, Josh J wrote:
hi
Is it possible to have iOS and Android run the code needed for Kafka
producers? I want to have
You can use log.retention.hours or log.retention.bytes to prune the log.
More info on that config here:
https://kafka.apache.org/08/configuration.html
If you want to delete a message after the consumer has processed it,
there is no API for that.
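For reference, a server.properties sketch of those two settings; the values below are placeholders, not recommendations:

```properties
# keep log segments for 7 days...
log.retention.hours=168
# ...or cap each partition's log at ~1 GB, whichever limit is hit first
log.retention.bytes=1073741824
```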
-Harsha
On Tue, Oct 21, 2014, at 08:00 AM, Eduardo
reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused)
kill -9 kafka-broker
Restarting zookeeper and then kafka-broker leads to the error posted
by Ciprian.
Ciprian,
Can you open a JIRA for this?
Thanks,
Harsha
On Wed, Oct 22, 2014, at 10:03 AM, Neha
Rohit,
Please send a mail to users-subscr...@kafka.apache.org.
more info here http://kafka.apache.org/contact.html.
-Harsha
On Wed, Nov 12, 2014, at 09:20 PM, Rohit Pujari wrote:
Hello there:
I'd like to be added to the mailing list
Thanks,
--
Rohit Pujari
Solutions Engineer
You can configure it under /opt/kafka/config/log4j.properties; look
for kafka.logs.dir.
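A sketch of what the relevant log4j.properties line typically looks like; the file name is illustrative, so check your own log4j.properties for the exact appenders:

```properties
# kafka.logs.dir usually defaults to "logs" under the Kafka install dir;
# it can be redirected by exporting LOG_DIR before starting the broker
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
```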
On Mon, Nov 17, 2014, at 11:11 AM, Jimmy John wrote:
How do I configure the application kafka log dir?
Right now the default is /var/log/upstart/kafka.log . I want to point it
to
a different mount
Hi Haoming,
Take a look at the code here
https://github.com/stealthly/scala-kafka/blob/master/src/main/scala/KafkaProducer.scala
Your partKey should be a string, and when converting it into a
byte array you can use partKey.getBytes("UTF-8").
-Harsha
On Thu, Nov 20, 2014
also the (key: Key, value: Val, topic: Option[String]) value should be
a string converted to a byte array.
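The conversion itself is just UTF-8 encoding; a quick illustration in Python (the key value here is made up):

```python
part_key = "user-42"                  # hypothetical partition key
key_bytes = part_key.encode("utf-8")  # what partKey.getBytes("UTF-8") does on the JVM

# the encoding round-trips losslessly
assert isinstance(key_bytes, bytes)
assert key_bytes.decode("utf-8") == part_key
print(key_bytes)  # b'user-42'
```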
Can you send an example of your key and value data?
On Thu, Nov 20, 2014, at 04:53 PM, Haoming Zhang wrote:
Hi Harsha,
Thanks for suggestion!
I have checked this link before, and I
Rajiv,
Which version of kafka are you using, and do you see any errors
when the server goes down after sending a few messages?
-Harsha
On Sat, Nov 22, 2014, at 01:05 PM, Rajiv Kurian wrote:
The brokers also seem unavailable while this is going on. Each of these
log messages
It might be logs; check your kafka logs dir (server logs). Kafka can
produce a lot of logs in a short time, so make sure that's what's in play here.
-Harsha
On Sat, Nov 22, 2014, at 01:37 PM, Rajiv Kurian wrote:
Actually see a bunch of errors. One of the brokers is out of space and
this
might be causing
Have you set fetch.message.max.bytes to 10MB or more in your consumer
config?
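An illustrative consumer config fragment; the value is an example, not a recommendation:

```properties
# allow the consumer to fetch messages up to ~10 MB
fetch.message.max.bytes=10485760
```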
-Harsha
On Mon, Dec 1, 2014, at 07:27 PM, Palur Sandeep wrote:
Hi all,
Consumer doesn't receive a message if it is big: when the producer sends
256kb messages to the broker, the consumer is able to retrieve them, but when
replica.fetch.max.bytes=104858000
replica.fetch.wait.max.ms=2000
On Mon, Dec 1, 2014 at 10:03 PM, Harsha ka...@harsha.io wrote:
Have you set fetch.message.max.bytes to 10MB or more in your consumer
config?
-Harsha
On Mon, Dec 1, 2014, at 07:27 PM, Palur Sandeep wrote:
Hi all,
Consumer
I think the default port for kafka running there is 6667. Can you check
server.properties to see what the port number is?
-Harsha
On Fri, Dec 5, 2014, at 06:10 AM, Marco wrote:
Yes, it's online and version -0.8.1.2.2.0.0-1084. jps lists it also
2014-12-05 14:56 GMT+01:00 svante karlsson s
--broker-list localhost:6667 --topic test
- same error :(
I've tried also to change the port, use the hostname instead of
localhost
I'm running the stuff in VMWare with a shared IP-address from my
host... don't know if this can interfere?
2014-12-05 15:14 GMT+01:00 Harsha ka
Sumit,
You can use AdminUtils.deleteTopic(zkClient, topicName). This
will initiate the delete-topic process via a zookeeper notification
to KafkaController.deleteTopicManager. It deletes the log files
along with the zookeeper topic path, as Timothy mentioned.
-Harsha
On Fri, Jan
the topic will be recreated.
-Harsha
On Sun, Jan 25, 2015, at 11:26 PM, Jason Rosenberg wrote:
cversion did change (incremented by 2) when I issue the delete command.
From the logs on the controller broker (also the leader for the topic), it
looks like the delete proceeds, and then the topic gets
Jun,
I made an attempt at fixing that issue as part of this JIRA
https://issues.apache.org/jira/browse/KAFKA-1507 .
As Jay pointed out, there should be an admin API; if there is more info on
this API, I am interested in adding/fixing this issue.
Thanks,
Harsha
On Mon, Jan 26, 2015, at 07
Sumit,
Let's say you are deleting an older topic test1: do you have any
consumers running simultaneously for the topic test1 while the
topic deletion is going on?
-Harsha
On Tue, Feb 3, 2015, at 06:17 PM, Joel Koshy wrote:
Thanks for the logs - will take a look tomorrow unless
you try stopping the consumer first and then issue
the topic delete.
-Harsha
On Tue, Feb 3, 2015, at 08:37 PM, Sumit Rangwala wrote:
On Tue, Feb 3, 2015 at 6:48 PM, Harsha ka...@harsha.io wrote:
Sumit,
Let's say you are deleting an older topic test1: do you have any
consumers running
-142388317 which
looks to be getting deleted properly. Do you see the same issues with the
above topic, i.e. does /admin/delete_topics/LAX1-GRIFFIN-r45-142388317
still exist? If you can post the logs from 23:00 onwards, that will be
helpful.
-Harsha
On Tue, Feb 3, 2015, at 09:19 PM, Harsha wrote:
you
.
-Harsha
On Mon, Feb 2, 2015, at 07:03 PM, Xinyi Su wrote:
Hi,
-bash-4.1$ bin/kafka-topics.sh --zookeeper zkhosst:2181 --create
--topic
zerg.hydra --partitions 3 --replication-factor 2
Created topic zerg.hydra.
-bash-4.1$ ls -lrt /tmp/kafka-logs/zerg.hydra-2
total 0
-rw-r--r-- 1 users
Tousif,
I meant to say that if the kafka broker is going down often, it's better to
analyze the root cause of the crash. Using supervisord
to monitor the kafka broker is fine; sorry about the confusion.
-Harsha
On Fri, Jan 16, 2015, at 11:25 AM, Gwen Shapira wrote:
Those errors
Tousif,
Do you see any other errors in server.log?
-Harsha
On Wed, Jan 14, 2015, at 01:51 AM, Tousif wrote:
Hello,
I have configured kafka nodes to run via supervisord and see following
exceptions
and eventually brokers going out of memory. i have given enough memory
and
process 1
to see why it's going down for the
first time.
-Harsha
On Wed, Jan 14, 2015, at 10:50 PM, Tousif wrote:
Hello Chia-Chun Shih,
There are multiple issues,
First thing is i don't see out of memory error every time and OOM happens
after supervisord keep retrying to start kafka.
It goes down when
the work is in progress. I'll update the
thread with an initial prototype patch.
Thanks,
Harsha
Thanks Joe. It will be part of KafkaServer and will run on its own
thread. Since each kafka server will run with a keytab, we should make
sure they are all getting renewed.
On Wed, Feb 11, 2015, at 10:00 AM, Joe Stein wrote:
Thanks Harsha, looks good so far. How were you thinking of running
It looks like you are not adding a ZKStringSerializer to your zkClient.
ZkClient zkClient = new ZkClient(ZK_CONN_STRING);
Kafka's TopicCommand uses
new ZkClient(opts.options.valueOf(opts.zkConnectOpt), 3, 3,
ZKStringSerializer)
because of this, although your topic is getting created and
You can import ZKStringSerializer from kafka.utils or write
your own similar string serializer, like this:
https://gist.github.com/harshach/7b5447c39168eb6062e0
On Tue, Jan 13, 2015, at 04:31 PM, Harsha wrote:
It looks like you are not adding a ZkStringSerializer to your zkClient
Internally, the producer sends a TopicMetadataRequest, which creates the topic
if auto.create.topics.enable is true.
On Tue, Jan 13, 2015, at 04:49 PM, Harsha wrote:
You can import ZKStringSerializer from kafka.utils or write
your own similar string serializer like this
https
Paul,
There is ongoing work to move to the Kafka API instead of making
calls to zookeeper.
Here is the JIRA https://issues.apache.org/jira/browse/STORM-650 .
-Harsha
On Fri, Feb 13, 2015, at 01:02 PM, Paul Mackles wrote:
I noticed that the standard Kafka storm spout gets topic
Yonghui,
Which OS are you running?
-Harsha
On Wed, Jan 7, 2015, at 01:38 AM, Yonghui Zhao wrote:
Yes, and I found the reason: the rename during deletion failed.
During the rename the files were deleted, and then the exception blocked the
file from being closed in kafka.
But I don't know how the rename can
on the path
/brokers/ids/0. This probably indicates that you either have configured
a brokerid that is already in use, or else you have shutdown this broker
and restarted it faster than the zookeeper timeout so it appears to be
re-registering.
-Harsha
On Wed, Feb 18, 2015, at 03:16 PM, Deepak Dhakal wrote
Yuheng,
kafka keeps cluster metadata in zookeeper, along with topic metadata.
You can use zookeeper-shell.sh or zkCli.sh to check the zk nodes;
/brokers/topics will give you the list of topics.
--
Harsha
On March 9, 2015 at 8:20:59 AM, Yuheng Du (yuheng.du.h...@gmail.com) wrote
machine
failure, and in the case of 5 there should be at least 3 nodes up and running.
For more info on zookeeper you can look under here
http://zookeeper.apache.org/doc/r3.4.6/
http://zookeeper.apache.org/doc/r3.4.6/zookeeperAdmin.html
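The availability numbers above follow from ZooKeeper's majority rule; a small sketch of the arithmetic:

```python
def quorum(ensemble_size):
    # ZooKeeper stays available only while a strict majority of the
    # ensemble is up: floor(n/2) + 1 nodes.
    return ensemble_size // 2 + 1

# a 3-node ensemble tolerates 1 failure; a 5-node ensemble tolerates 2
print(quorum(3))  # 2 nodes must stay up
print(quorum(5))  # 3 nodes must stay up
```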
--
Harsha
On March 9, 2015 at 8:39:00 AM, Yuheng Du
instance within each
subscribing consumer group. Consumer instances can be in separate processes or
on separate machines.”
More info on this page: http://kafka.apache.org/documentation.html (look for
"consumer group").
--
Harsha
On March 31, 2015 at 6:10:59 AM, James King (jakwebin
Hi Emmanuel,
Can you post your kafka server.properties? And in your producer, are you
distributing your messages across all kafka topic partitions?
--
Harsha
On March 20, 2015 at 12:33:02 PM, Emmanuel (ele...@msn.com) wrote:
Kafka on test cluster:
2 Kafka nodes, 2GB, 2CPUs
3 Zookeeper
node gets deleted
as well.
Also make sure you don't have any producers or consumers running while the
topic deletion is going on.
--
Harsha
On March 23, 2015 at 1:29:50 AM, anthony musyoki (anthony.musy...@gmail.com)
wrote:
On deleting a topic via TopicCommand.deleteTopic()
I get Topic
You can increase num.replica.fetchers (by default it's 1), and also try
increasing replica.fetch.max.bytes.
-Harsha
On Fri, Feb 27, 2015, at 11:15 PM, tao xiao wrote:
Hi team,
I had a replica node that was shutdown improperly due to no disk space
left. I managed to clean up the disk and restarted
These docs might help
https://kafka.apache.org/08/design.html
http://research.microsoft.com/en-us/um/people/srikanth/netdb11/netdb11papers/netdb11-final12.pdf
-Harsha
On Sun, Mar 1, 2015, at 09:42 PM, Philip O'Toole wrote:
Thanks Guozhang -- no this isn't quite it. The doc I read before
stopping any of your consumers or producers, run the
delete topic command again.
-Harsha
On Wed, Mar 4, 2015, at 10:28 AM, Jeff Schroeder wrote:
So I've got 3 kafka brokers that were started with delete.topic.enable
set
to true. When they start, I can see in the logs that the property
Hi Hema, Can you attach controller.log and state-change.log? The image is
not showing up, at least for me. Can you also give us details on how big
the cluster is, the topic's partitions and replication factor, and any
steps to reproduce this. Thanks, Harsha
On Sun, Mar 1, 2015, at 12:40 PM, Hema
Hi Gene,
Looks like you might be running into this
https://issues.apache.org/jira/browse/KAFKA-1758 .
-Harsha
On Tue, Feb 24, 2015, at 07:17 AM, Gene Robichaux wrote:
About a week ago one of our brokers crashed with a hardware failure. When
the server restarted the Kafka broker
Akshat,
Producer batch_size is in bytes; if your messages' average size is
310 bytes and your current number of messages per batch is 46, you
are getting close to the max batch size of 16384. Did you try
increasing the producer batch_size?
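The arithmetic behind that estimate, using the numbers from this thread:

```python
avg_msg_bytes = 310        # average message size reported
msgs_per_batch = 46        # observed messages per batch
default_batch_size = 16384 # producer batch_size, in bytes

used = avg_msg_bytes * msgs_per_batch
print(used)                       # 14260 bytes per batch
print(used / default_batch_size)  # ~0.87: batches fill before more messages fit
```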
-Harsha
On Thu, Feb 26, 2015
Victor,
It's under kafka.tools.DumpLogSegments; you can use kafka-run-class to
execute it.
--
Harsha
On March 26, 2015 at 5:29:32 AM, Victor L (vlyamt...@gmail.com) wrote:
Where's this tool (DumpLogSegments) in Kafka distro? Is it Java class in
kafka jar, or is it third party binary
Manu,
Are you passing a ZkStringSerializer to your zkClient?
ZkClient zkClient = new ZkClient(ZK_CONN_STRING,3, 3, new
ZkStringSerializer());
AdminUtils.createTopic(zkClient, topic, 1, 1, props);
-Harsha
On Thu, Jan 22, 2015, at 09:44 PM, Manu Zhang wrote:
Hi all ,
My application
.
--
Harsha
On March 23, 2015 at 9:57:53 AM, Grant Henke (ghe...@cloudera.com) wrote:
What happens when producers or consumers are running while the topic
deleting is going on?
On Mon, Mar 23, 2015 at 10:02 AM, Harsha ka...@harsha.io wrote:
DeleteTopic makes a node in zookeeper to let
Just to be clear: one needs to stop the producers and consumers that are
writing to/reading from a topic "test" only if they are trying to delete that
specific topic "test", not all producers and consumers.
--
Harsha
On March 23, 2015 at 10:13:47 AM, Harsha (harsh...@fastmail.fm) wrote:
Currently we have
Hi Francois,
Looks like this belongs on the storm mailing lists. Can you please send this
question to the storm mailing lists?
Thanks,
Harsha
On March 23, 2015 at 11:17:47 AM, François Méthot (fmetho...@gmail.com) wrote:
Hi,
We have a storm topology that uses Kafka to read a topic with 6
log size to 200 bytes.
--
Harsha
On April 2, 2015 at 2:29:45 PM, Willy Hoang (w...@knewton.com) wrote:
Hello,
I’ve been having trouble using the retention.bytes per-topic configuration
(using Kafka version 0.8.2.1). I had the same issue that users described in
these two threads where logs
Hi Navneet,
Any reason you are looking to modify the zk nodes directly to
increase the topic partitions? If you are looking for an API to do this,
there is AdminUtils.addPartitions.
--
Harsha
On April 7, 2015 at 6:45:40 AM, Navneet Gupta (Tech - BLR)
(navneet.gu
Sandeep, You need to have multiple replicas. Having a single replica
means you have one copy of the data, and if that machine goes down there isn't
another replica that can take over and be the leader for that partition. -Harsha
_
From: Sandeep Bishnoi
potential MITM. This
api doesn't exist in Java 1.6.
Are there any users who still want 1.6 support, or can we stop supporting 1.6
from the next release onwards?
Thanks,
Harsha
Can you try using the 0.8.2.2 source here
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.2/kafka-0.8.2.2-src.tgz
https://kafka.apache.org/downloads.html
On Sun, Nov 15, 2015, at 05:35 AM, jinxing wrote:
> Hi all, I am newbie for kafka, and I try to build kafka 0.8.2 from source
> code
Mark, Can you try running /usr/hdp/current/kafka-broker/bin/kafka start
manually as user kafka? Also, do you see jars
under kafka-broker/libs/kafka_*.jar? Can you try asking the question
here as well: http://hortonworks.com/community/forums/forum/kafka/
Thanks, Harsha
On Sun, Oct 4
/keytabs/kafka1.keytab"
principal="kafka/kafka1.witzend@witzend.com"; };
and the KafkaClient section:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
serviceName="kafka";
};
On Wed, Dec 30, 2015, at 03:10 AM, prabhu v wrote:
> Hi Harsha,
kafka
0.9.0 + additional patches. We are making sure on our side not to miss
any compatibility patches like these with 3rd-party developers, and we have
tests to ensure that.
Thanks,
Harsha
On Wed, Dec 23, 2015, at 04:11 PM, Dana Powers wrote:
> Hi all,
>
> I've been helping debug an is
nnection to Zookeeper server without SASL authentication, if Zookeeper"
Did you try kinit with that keytab at the command line?
-Harsha
On Mon, Dec 28, 2015, at 04:07 AM, prabhu v wrote:
> Thanks for the input Ismael.
>
> I will try and let you know.
>
> Also need your v
Hi Pierre,
Do you see any errors in the server.log when this command
ran? Can you please open a thread here:
https://community.hortonworks.com/answers/index.html
Thanks,
Harsha
On Tue, Jun 7, 2016, at 09:22 AM, Pierre Labiausse wrote:
> Hi,
>
ported yet. Also dropping the encryption in SSL
channel is not possible yet.
Any reason for not using kerberos for this, since we support a non-encrypted
channel for kerberos?
Thanks,
harsha
On Wed, Jun 8, 2016, at 02:06 PM, Samir Shah wrote:
> Hello,
>
> Few questions on Kafka Security.
&
on the
dates.
Thanks,
Harsha
On Thu, Jun 16, 2016, at 03:08 PM, Ismael Juma wrote:
> Hi Jan,
>
> That's interesting. Do you have some references you can share on this? It
> would be good to know which Java 8 versions have been tested and whether
> it
> is something th
va 7 support
when the release is minor is, in general, not a good idea for users.
-Harsha
"Also note that Kafka bug fixes go to 0.10.0.1, not 0.10.1 and
> 0.10.0.x would still be available for users using Java 7."
On Fri, Jun 17, 2016, at 12:18 AM, Ismael Juma wrote:
> Hi Harsha,
>
>
Radu,
Please follow the instructions here
http://kafka.apache.org/documentation.html#security_ssl . At
the end of the SSL section we have an example of producer and
consumer command line tools passing in ssl configs.
Thanks,
Harsha
On Wed, Jun 22, 2016, at 07:40
You won't be able to start and register two brokers with the same id in
zookeeper.
On Thu, Feb 4, 2016, at 06:26 AM, Carlo Cabanilla wrote:
> Hello,
>
> How does a kafka cluster behave if there are two live brokers with the
> same
> broker id, particularly if that broker is a leader? Is it
Oleg,
Can you post your jaas configs? It's important that serviceName
match the principal name with which zookeeper is running.
What's the principal name the zookeeper service is running with?
-Harsha
On Tue, Feb 23, 2016, at 11:01 AM, Oleg Zhurakousky wrote:
> Hey g
ssuming Zookeeper is started as ‘zookeeper’
> and Kafka as ‘kafka’
>
> Oleg
>
> > On Feb 23, 2016, at 2:22 PM, Oleg Zhurakousky
> > <ozhurakou...@hortonworks.com> wrote:
> >
> > Harsha
> >
> > Thanks for following up. Here
principal="kafka/ubuntu.oleg@oleg.com";
};
-Harsha
On Tue, Feb 23, 2016, at 11:37 AM, Harsha wrote:
> can you try adding "serviceName=zookeeper" to KafkaServer section like
> KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
>
Kafka doesn't have security enabled for 0.8.2.2, so make sure the zookeeper
root that you're using doesn't have any ACLs set.
-Harsha
On Sun, Feb 14, 2016, at 06:51 PM, 赵景波 wrote:
> Can you help me?
>
> ___
> JingBo Zhao<h
Did you try what Adam suggested in the earlier email? Also, as a quick
check, you can try removing the keystore and key.password configs from the
client side.
-Harsha
On Thu, Feb 18, 2016, at 02:49 PM, Srikrishna Alla wrote:
> Hi,
>
> We are getting the below error when trying to use a Java new
,
Harsha
On Tue, Mar 1, 2016, at 12:56 PM, Ankit Jain wrote:
> Hi All,
>
> We would need to use the SSL feature of Kafka and that would require the
> Kafka Spout upgrade from version 0.8.x to 0.9.x as the SSL is only
> supported by new consumer API.
>
> Please share
/documentation.html#security_ssl
-Harsha
On Thu, Mar 10, 2016, at 12:41 AM, Ranjan Baral wrote:
> i getting below warning while doing produce from client side which is
> connecting to server side which contains SSL based authentication.
>
> *[2016-03-10 07:09:13,018] WARN The c
Pradeep,
How about
https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seekToBeginning%28org.apache.kafka.common.TopicPartition...%29
-Harsha
On Sat, Apr 9, 2016, at 09:48 PM, Pradeep Bhattiprolu wrote:
> Liquan , tha
If I remember correctly, it was because we wanted to allow non-secure
clients to still get to the child consumers node and create their zookeeper
nodes to keep track of offsets. If we add the ACL at the parent path, they
won't be able to write to the child nodes.
Thanks,
Harsha
On Fri, Jul 1, 2016, at 06
lot of features. Making this part of 0.11, moving the 0.10.1
features to 0.11, and giving a rough timeline for when that
would be released would be ideal.
Thanks,
Harsha
On Fri, Jun 17, 2016, at 11:13 AM, Ismael Juma wrote:
> Hi Har
factor and
see if you can produce & consume messages.
Before stepping into security, make sure your non-secure Kafka cluster works ok.
Once you have a stable & working cluster,
follow the instructions in the doc to enable SSL.
-Harsha
On Mar 1, 2017, 1:08 PM -0800, IT Consultant <0binarybudd..
+1 .
1. Ran unit tests.
2. Ran a few tests on a 3-node cluster
Thanks,
Harsha
On Fri, Jun 22nd, 2018 at 2:41 PM Jakob Homan wrote:
>
>
>
> +1 (binding)
>
> * verified sigs, NOTICE, LICENSE
> * ran unit tests
> * spot checked headers
>
> -Jakob
>
>
&
+1.
1) Ran unit tests
2) 3-node cluster, tested basic operations.
Thanks,
Harsha
On Mon, Jul 2nd, 2018 at 11:13 AM, "Vahid S Hashemian"
wrote:
>
>
>
> +1 (non-binding)
>
> Built from source and ran quickstart successfully on Ubuntu (with Java 8).
>
+1.
1) Ran unit tests
2) 3-node cluster, tested basic operations.
Thanks,
Harsha
On Mon, Jul 2nd, 2018 at 11:57 AM, Jun Rao wrote:
>
>
>
> Hi, Matthias,
>
> Thanks for the running the release. Verified quickstart on scala 2.12
> binary. +1
>
> Jun
>
&
Hi,
Which Kafka version and Java version are you using? Did you try this with
Java 9 which has 2.5x perf improvements over Java 8 for SSL? Can you try using
a slightly weaker cipher suite to improve the performance?
-Harsha
On Wed, Aug 22, 2018, at 1:11 PM, Sri Harsha Chavali wrote:
>
read/writes on the
secure topics, and it will reject any request on the PLAINTEXT port for these
topics with an AuthorizationException; the rest of the topics you can continue
to access through both ports.
-Harsha
On Tue, Jul 17, 2018, at 5:09 PM, Matt L wrote:
> Hi,
>
> I have an existing Kafk
.
Configure your custom partitioner in your producer clients.
Thanks,
Harsha
On Thu, Aug 30, 2018, at 1:45 AM, M. Manna wrote:
> Hello,
>
> I opened a very simple KIP and there exists a JIRA for it.
>
> I would be grateful if any comments are available for action.
>
> Regards,
+1.
* Ran unit tests
* Installed in a cluster and ran simple tests
Thanks,
Harsha
On Mon, Jul 9th, 2018 at 6:38 AM, Ted Yu wrote:
>
>
>
> +1
>
> Ran test suite.
>
> Checked signatures.
>
>
>
> On Sun, Jul 8, 2018 at 3:36 PM Dong Lin < lindon..
+1
* Ran unit tests
* 3-node cluster. Ran simple tests.
Thanks,
Harsha
On Sat, Jun 23rd, 2018 at 9:07 AM, Ted Yu wrote:
>
>
>
> +1
>
> Checked signatures.
>
> Ran unit test suite.
>
> On Fri, Jun 22, 2018 at 4:56 PM, Vahid S Hashemian <
> vahidhas
Hi,
Yes, this needs to be handled more elegantly. Can you please file a JIRA
here:
https://issues.apache.org/jira/projects/KAFKA/issues
Thanks,
Harsha
On Mon, Apr 1, 2019, at 1:52 AM, jorg.heym...@gmail.com wrote:
> Hi,
>
> We have our brokers secured with these standard p
a certificate to enable TLS. JKS stores are for doing it
manually. You can check out https://github.com/spiffe/java-spiffe, which talks
to the spiffe agent to get a certificate and passes it to Kafka's SSL context.
Thanks,
Harsha
On Thu, May 16, 2019, at 3:57 PM, Zhou, Thomas wrote:
> Hi,
>
&g
Thanks Vahid.
-Harsha
On Mon, Jun 3, 2019, at 9:21 AM, Jonathan Santilli wrote:
> That's fantastic! thanks a lot Vahid for managing the release.
>
> --
> Jonathan
>
>
>
>
> On Mon, Jun 3, 2019 at 5:18 PM Mickael Maison
> wrote:
>
> > Thank you Vahid
+1 (binding)
1. Ran unit tests
2. System tests
3. 3 node cluster with few manual tests.
Thanks,
Harsha
On Wed, May 22, 2019, at 8:09 PM, Vahid Hashemian wrote:
> Bumping this thread to get some more votes, especially from committers, so
> we can hopefully make a decision on this RC by t
Thanks, Everyone.
-Harsha
On Fri, Apr 19, 2019, at 2:39 AM, Satish Duggana wrote:
> Congrats Harsha!
>
> On Fri, Apr 19, 2019 at 2:58 PM Mickael Maison
> wrote:
>
> > Congratulations Harsha!
> >
> >
> > On Fri, Apr 19, 2019 at 5:49 AM Manikumar
Here is the guide
https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide
you need zookeeper 3.5 or higher for TLS.
On Mon, Jul 29, 2019, at 1:21 AM, Nayak, Soumya R. wrote:
> Hi Team,
>
> Is there any way mutual TLS communication set up can be done with
> zookeeper. If
One way to delete is to delete the topic partition directories from the disks
and delete the topic under /brokers/topics in zookeeper.
If you just shut down those brokers, the controller might try to replicate the
topic onto other brokers, and since you don't have any leaders you might see
replica fetcher errors in the logs.
Thanks,
Harsha
On Thu