[jira] [Created] (KAFKA-1666) Issue for sending more message to Kafka Broker

2014-10-03 Thread rajendram kathees (JIRA)
rajendram kathees created KAFKA-1666:


 Summary: Issue for sending more message to Kafka Broker
 Key: KAFKA-1666
 URL: https://issues.apache.org/jira/browse/KAFKA-1666
 Project: Kafka
  Issue Type: Bug
  Components: config
Affects Versions: 0.8.1.1
 Environment: Ubuntu 14
Reporter: rajendram kathees


I tried to send 5000 messages to the Kafka broker using JMeter (10 threads and 500 
messages per thread; each message is 105 bytes). After 2100 messages I get the 
following exception. I changed the buffer size (socket.request.max.bytes) 
value in the server.properties file, but I still get the same exception. When I 
send 2000 messages, all messages are sent to the Kafka broker. Can you give a 
solution?

[2014-10-03 12:31:07,051] ERROR - Utils$ fetching topic metadata for topics [Set(test1)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
kafka.common.KafkaException: fetching topic metadata for topics [Set(test1)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:67)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:78)
at kafka.utils.Utils$.swallow(Utils.scala:167)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.Utils$.swallowError(Utils.scala:46)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:78)
at kafka.producer.Producer.send(Producer.scala:76)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.wso2.carbon.connector.KafkaProduce.send(KafkaProduce.java:71)
at org.wso2.carbon.connector.KafkaProduce.connect(KafkaProduce.java:28)
at org.wso2.carbon.connector.core.AbstractConnector.mediate(AbstractConnector.java:32)
at org.apache.synapse.mediators.ext.ClassMediator.mediate(ClassMediator.java:78)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
at org.apache.synapse.mediators.template.TemplateMediator.mediate(TemplateMediator.java:77)
at org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:129)
at org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:78)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:131)
at org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:166)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:385)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:183)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Too many open files
at sun.nio.ch.Net.socket0(Native Method)
at sun.nio.ch.Net.socket(Net.java:423)
at sun.nio.ch.Net.socket(Net.java:416)
at sun.nio.ch.SocketChannelImpl.init(SocketChannelImpl.java:104)
at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:60)
at java.nio.channels.SocketChannel.open(SocketChannel.java:142)
at kafka.network.BlockingChannel.connect(BlockingChannel.scala:48)
at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
... 29 more




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1666) Issue for sending more message to Kafka Broker

2014-10-03 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157803#comment-14157803
 ] 

Manikumar Reddy commented on KAFKA-1666:


The exception shows a "Too many open files" error. You need to increase the 
open files limit on your machine.

http://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user

 Issue for sending more message to Kafka Broker
 --

 Key: KAFKA-1666
 URL: https://issues.apache.org/jira/browse/KAFKA-1666
 Project: Kafka
  Issue Type: Bug
  Components: config
Affects Versions: 0.8.1.1
 Environment: Ubuntu 14
Reporter: rajendram kathees
   Original Estimate: 24h
  Remaining Estimate: 24h

 I tried to send 5000 messages to the Kafka broker using JMeter (10 threads and 500 
 messages per thread; each message is 105 bytes). After 2100 messages I get 
 the following exception. I changed the buffer size 
 (socket.request.max.bytes) value in the server.properties file, but I still 
 get the same exception. When I send 2000 messages, all messages are sent to 
 the Kafka broker. Can you give a solution?
 [2014-10-03 12:31:07,051] ERROR - Utils$ fetching topic metadata for topics 
 [Set(test1)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
 kafka.common.KafkaException: fetching topic metadata for topics [Set(test1)] 
 from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
 Caused by: java.net.SocketException: Too many open files
 [stack trace identical to the original report above]

[jira] [Commented] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-10-03 Thread Szczepan Faber (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157865#comment-14157865
 ] 

Szczepan Faber commented on KAFKA-1490:
---

Hey

The current bootstrap workflow in Kafka is something like this. The user:
- clones the repo
- looks into the 'build.gradle' file to find out which Gradle version he needs
- downloads this version of Gradle (or maybe already has it)
- sets up this version of Gradle, ensures it is on the PATH, etc.
- invokes this version of Gradle to grab the wrapper
- can now build Kafka via ./gradlew

Steps like the above increase complexity and offer ways to break stuff (e.g., the 
wrong version of Gradle on the PATH causing unexpected behavior). The wrapper 
was invented to avoid those kinds of situations.

It's a bit of a shame that the jar file is involved - I would much prefer it if 
no jar were needed to have the 'wrapper' functionality available. 
Hopefully this will be improved in a future version of Gradle!

 remove gradlew initial setup output from source distribution
 

 Key: KAFKA-1490
 URL: https://issues.apache.org/jira/browse/KAFKA-1490
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1490-2.patch, KAFKA-1490.patch, rb25703.patch


 Our current source releases contain lots of stuff in the gradle folder that we 
 do not need



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1374) LogCleaner (compaction) does not support compressed topics

2014-10-03 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157960#comment-14157960
 ] 

Manikumar Reddy commented on KAFKA-1374:


Updated reviewboard https://reviews.apache.org/r/24214/diff/
 against branch origin/trunk

 LogCleaner (compaction) does not support compressed topics
 --

 Key: KAFKA-1374
 URL: https://issues.apache.org/jira/browse/KAFKA-1374
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy
Assignee: Manikumar Reddy
  Labels: newbie++
 Fix For: 0.8.2

 Attachments: KAFKA-1374.patch, KAFKA-1374_2014-08-09_16:18:55.patch, 
 KAFKA-1374_2014-08-12_22:23:06.patch, KAFKA-1374_2014-09-23_21:47:12.patch, 
 KAFKA-1374_2014-10-03_18:49:16.patch


 This is a known issue, but opening a ticket to track.
 If you try to compact a topic that has compressed messages you will run into
 various exceptions - typically because during iteration we advance the
 position based on the decompressed size of the message. I have a bunch of
 stack traces, but it should be straightforward to reproduce.
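
A toy sketch (hypothetical code, not the actual LogCleaner) of the failure mode described above: advancing the scan position by the decompressed payload size drifts off the real on-disk byte boundaries once compressed wrapper messages appear.

{code}
// Each entry occupies onDiskSize bytes in the segment file, but may expand
// to decompressedSize bytes once its payload is decompressed.
case class Entry(onDiskSize: Int, decompressedSize: Int)

def scan(entries: Seq[Entry]): Int = {
  var position = 0
  for (e <- entries)
    position += e.decompressedSize // buggy: should advance by e.onDiskSize
  position                         // no longer a valid file offset
}
{code}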



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 24214: Patch for KAFKA-1374

2014-10-03 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24214/
---

(Updated Oct. 3, 2014, 1:22 p.m.)


Review request for kafka.


Bugs: KAFKA-1374
https://issues.apache.org/jira/browse/KAFKA-1374


Repository: kafka


Description (updated)
---

fixed a couple of bugs and updated stress test details


Diffs (updated)
-

  core/src/main/scala/kafka/log/LogCleaner.scala 
c20de4ad4734c0bd83c5954fdb29464a27b91dff 
  core/src/main/scala/kafka/tools/TestLogCleaning.scala 
1d4ea93f2ba8d4d4d47a307cd47f54a15d3d30dd 
  core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala 
5bfa764638e92f217d0ff7108ec8f53193c22978 

Diff: https://reviews.apache.org/r/24214/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Updated] (KAFKA-1374) LogCleaner (compaction) does not support compressed topics

2014-10-03 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1374:
---
Attachment: KAFKA-1374_2014-10-03_18:49:16.patch

 LogCleaner (compaction) does not support compressed topics
 --

 Key: KAFKA-1374
 URL: https://issues.apache.org/jira/browse/KAFKA-1374
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy
Assignee: Manikumar Reddy
  Labels: newbie++
 Fix For: 0.8.2

 Attachments: KAFKA-1374.patch, KAFKA-1374_2014-08-09_16:18:55.patch, 
 KAFKA-1374_2014-08-12_22:23:06.patch, KAFKA-1374_2014-09-23_21:47:12.patch, 
 KAFKA-1374_2014-10-03_18:49:16.patch


 This is a known issue, but opening a ticket to track.
 If you try to compact a topic that has compressed messages you will run into
 various exceptions - typically because during iteration we advance the
 position based on the decompressed size of the message. I have a bunch of
 stack traces, but it should be straightforward to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 24214: Patch for KAFKA-1374

2014-10-03 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24214/
---

(Updated Oct. 3, 2014, 1:50 p.m.)


Review request for kafka.


Bugs: KAFKA-1374
https://issues.apache.org/jira/browse/KAFKA-1374


Repository: kafka


Description
---

fixed a couple of bugs and updated stress test details


Diffs (updated)
-

  core/src/main/scala/kafka/log/LogCleaner.scala 
c20de4ad4734c0bd83c5954fdb29464a27b91dff 
  core/src/main/scala/kafka/tools/TestLogCleaning.scala 
1d4ea93f2ba8d4d4d47a307cd47f54a15d3d30dd 
  core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala 
5bfa764638e92f217d0ff7108ec8f53193c22978 

Diff: https://reviews.apache.org/r/24214/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Updated] (KAFKA-1374) LogCleaner (compaction) does not support compressed topics

2014-10-03 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1374:
---
Attachment: KAFKA-1374_2014-10-03_19:17:17.patch

 LogCleaner (compaction) does not support compressed topics
 --

 Key: KAFKA-1374
 URL: https://issues.apache.org/jira/browse/KAFKA-1374
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy
Assignee: Manikumar Reddy
  Labels: newbie++
 Fix For: 0.8.2

 Attachments: KAFKA-1374.patch, KAFKA-1374_2014-08-09_16:18:55.patch, 
 KAFKA-1374_2014-08-12_22:23:06.patch, KAFKA-1374_2014-09-23_21:47:12.patch, 
 KAFKA-1374_2014-10-03_18:49:16.patch, KAFKA-1374_2014-10-03_19:17:17.patch


 This is a known issue, but opening a ticket to track.
 If you try to compact a topic that has compressed messages you will run into
 various exceptions - typically because during iteration we advance the
 position based on the decompressed size of the message. I have a bunch of
 stack traces, but it should be straightforward to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1374) LogCleaner (compaction) does not support compressed topics

2014-10-03 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157987#comment-14157987
 ] 

Manikumar Reddy commented on KAFKA-1374:


Updated reviewboard https://reviews.apache.org/r/24214/diff/
 against branch origin/trunk

 LogCleaner (compaction) does not support compressed topics
 --

 Key: KAFKA-1374
 URL: https://issues.apache.org/jira/browse/KAFKA-1374
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy
Assignee: Manikumar Reddy
  Labels: newbie++
 Fix For: 0.8.2

 Attachments: KAFKA-1374.patch, KAFKA-1374_2014-08-09_16:18:55.patch, 
 KAFKA-1374_2014-08-12_22:23:06.patch, KAFKA-1374_2014-09-23_21:47:12.patch, 
 KAFKA-1374_2014-10-03_18:49:16.patch, KAFKA-1374_2014-10-03_19:17:17.patch


 This is a known issue, but opening a ticket to track.
 If you try to compact a topic that has compressed messages you will run into
 various exceptions - typically because during iteration we advance the
 position based on the decompressed size of the message. I have a bunch of
 stack traces, but it should be straightforward to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Question regarding topic creation

2014-10-03 Thread Sreenivasulu Nallapati
Hello,

I am new to Kafka and am trying to evaluate it for one of our use cases. I have a
basic question regarding topics.

Assume we have a 10-broker cluster. If I create a topic, will it be created
on all 10 brokers?
If not, how is it managed internally when we try to send a message to that
particular topic? Assume we created a producer with all the brokers.

Thanks in advance.


Thanks
Sreeni


Re: Review Request 25886: KAFKA-1555: provide strong consistency with reasonable availability

2014-10-03 Thread Jun Rao


 On Oct. 2, 2014, 4:08 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/cluster/Partition.scala, lines 372-373
  https://reviews.apache.org/r/25886/diff/7/?file=709987#file709987line372
 
  Hmm, this code looks a bit weird. It's weird in that a topic-level 
  config only takes effect under a certain producer-side config. My feeling is 
  that if we make min.isr a topic-level config, then it should work in the 
  same way independent of the producer-side config. So, perhaps it's better 
  to just check inSyncSize < minIsr, irrespective of the acks. What do you 
  think?
 
 Gwen Shapira wrote:
 Agree it's weird. However, that was pretty much what we agreed on in the 
 JIRA discussion - if a producer specifies acks=0 or acks=1, they don't care 
 about reliability that much, and they certainly don't care about the size of 
 the ISR (since they don't expect any acks from the ISR). I'm pretty sure we 
 decided that min.isr only makes sense with acks=-1, so that's what I implemented.
 
 Since it does look out of place here, and I'm a bit worried about 
 non-producer calls to this function, I'll try to move this check higher up 
 the call stack.
 
 Does that make sense?
 
 Gwen Shapira wrote:
 Actually, we can't move it up the call stack, since we don't have access 
 to the min.isr value farther up (the config only exists on the leader replica 
 for a partition). I can't think of a better way to fulfill the requirement that 
 min.isr only applies when acks=-1, which IMO is an important requirement.

Ok. Let's leave this as it is then. Could you address the rest of the comments?


- Jun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25886/#review55187
---


On Oct. 1, 2014, 1:19 a.m., Gwen Shapira wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/25886/
 ---
 
 (Updated Oct. 1, 2014, 1:19 a.m.)
 
 
 Review request for kafka.
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1555: provide strong consistency with reasonable availability
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 f9de4af 
   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java addc906 
   
 clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasAfterAppendException.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasException.java
  PRE-CREATION 
   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java d434f42 
   core/src/main/scala/kafka/cluster/Partition.scala ff106b4 
   core/src/main/scala/kafka/common/ErrorMapping.scala 3fae791 
   
 core/src/main/scala/kafka/common/NotEnoughReplicasAfterAppendException.scala 
 PRE-CREATION 
   core/src/main/scala/kafka/common/NotEnoughReplicasException.scala 
 PRE-CREATION 
   core/src/main/scala/kafka/log/LogConfig.scala 5746ad4 
   core/src/main/scala/kafka/producer/SyncProducerConfig.scala 69b2d0c 
   core/src/main/scala/kafka/server/KafkaApis.scala c584b55 
   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
 39f777b 
   core/src/test/scala/unit/kafka/producer/ProducerTest.scala dd71d81 
   core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala 24deea0 
   core/src/test/scala/unit/kafka/utils/TestUtils.scala 2dbdd3c 
 
 Diff: https://reviews.apache.org/r/25886/diff/
 
 
 Testing
 ---
 
 With a 3-broker cluster, created 3 topics, each with 1 partition and 3 replicas, 
 with min.insync.replicas set to 1, 3, and 4.
 * min.insync.replicas=1 behaved normally (all writes succeeded as long as a 
 broker was up)
 * min.insync.replicas=3 returned NotEnoughReplicas when required.acks=-1 and 
 one broker was down
 * min.insync.replicas=4 returned NotEnoughReplicas when required.acks=-1
 
 See notes about retry behavior in the JIRA.
 
 
 Thanks,
 
 Gwen Shapira
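
For reference, a minimal sketch (hypothetical names and method shape, not the committed patch) of the check discussed above, in which min.insync.replicas is enforced only for acks=-1:

{code}
// Sketch only: NotEnoughReplicasException is one of the PRE-CREATION classes
// listed in the diff above; it is redefined here just to keep the snippet
// self-contained.
class NotEnoughReplicasException(msg: String) extends RuntimeException(msg)

def checkEnoughReplicas(requiredAcks: Int, inSyncSize: Int, minIsr: Int): Unit = {
  if (requiredAcks == -1 && inSyncSize < minIsr)
    throw new NotEnoughReplicasException(
      "In-sync replica count %d is below the configured minimum %d".format(inSyncSize, minIsr))
}
{code}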
 




[jira] [Commented] (KAFKA-1499) Broker-side compression configuration

2014-10-03 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158091#comment-14158091
 ] 

Joel Koshy commented on KAFKA-1499:
---

[~jkreps] Regarding the two approaches, both are susceptible to people 
forgetting (or misunderstanding) the configs. Providing a 
broker.compression.enabled property which defaults to false avoids affecting 
already-deployed scenarios. Producers continue to set whatever 
compression.type they already use, and that is unaffected at the broker. The 
issues with this, as you point out, are an additional config to deal with, and 
the risk of forgetting to turn it on, which would be confusing when people use 
per-topic overrides.

On the other hand, the other approach of not having the config and assuming 
that broker compression (or decompression) is always enabled is better when 
people use per-topic overrides, but is slightly dangerous/inconvenient if an 
existing deployment that needs compression enabled upgrades without setting 
a suitable compression type.

How about the following: right now all of our server configs are always present 
- either explicitly specified or defaulted. In this instance it is better to 
make broker.compression.type an optional config. I.e., if it is explicitly 
specified, use it; otherwise, assume that broker compression/decompression is 
disabled. So when appending messages to the log: if 
the broker compression type config is specified, use that; else if 
the topic compression type override is specified, use that; else use whatever 
compression type the producer sent the message with.
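
The precedence described above can be sketched as follows (names hypothetical; codecs shown as plain strings for brevity):

{code}
// Broker-level config wins if explicitly set, then the per-topic override,
// then whatever codec the producer sent the message with.
def targetCodec(brokerCompressionType: Option[String],
                topicOverride: Option[String],
                producerCodec: String): String =
  brokerCompressionType.orElse(topicOverride).getOrElse(producerCodec)

// e.g. targetCodec(None, Some("gzip"), "snappy") == "gzip"
{code}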

 Broker-side compression configuration
 -

 Key: KAFKA-1499
 URL: https://issues.apache.org/jira/browse/KAFKA-1499
 Project: Kafka
  Issue Type: New Feature
Reporter: Joel Koshy
Assignee: Manikumar Reddy
  Labels: newbie++
 Fix For: 0.8.2

 Attachments: KAFKA-1499.patch, KAFKA-1499.patch, 
 KAFKA-1499_2014-08-15_14:20:27.patch, KAFKA-1499_2014-08-21_21:44:27.patch, 
 KAFKA-1499_2014-09-21_15:57:23.patch, KAFKA-1499_2014-09-23_14:45:38.patch, 
 KAFKA-1499_2014-09-24_14:20:33.patch, KAFKA-1499_2014-09-24_14:24:54.patch, 
 KAFKA-1499_2014-09-25_11:05:57.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 A given topic can have messages in mixed compression codecs. i.e., it can
 also have a mix of uncompressed/compressed messages.
 It will be useful to support a broker-side configuration to recompress
 messages to a specific compression codec. i.e., all messages (for all
 topics) on the broker will be compressed to this codec. We could have
 per-topic overrides as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1499) Broker-side compression configuration

2014-10-03 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158097#comment-14158097
 ] 

Joel Koshy commented on KAFKA-1499:
---

[~guozhang] I do believe we can avoid the need to assign offsets for every 
message by providing more information in the message header (e.g., put the 
offset of the earliest message and also include the number of messages in the 
message-set). However, I'm not sure that is related to this. This patch may 
force a recompression if the target compression type is different from the 
original compression type. So I'm not sure I follow what you mean - can you clarify?

 Broker-side compression configuration
 -

 Key: KAFKA-1499
 URL: https://issues.apache.org/jira/browse/KAFKA-1499
 Project: Kafka
  Issue Type: New Feature
Reporter: Joel Koshy
Assignee: Manikumar Reddy
  Labels: newbie++
 Fix For: 0.8.2

 Attachments: KAFKA-1499.patch, KAFKA-1499.patch, 
 KAFKA-1499_2014-08-15_14:20:27.patch, KAFKA-1499_2014-08-21_21:44:27.patch, 
 KAFKA-1499_2014-09-21_15:57:23.patch, KAFKA-1499_2014-09-23_14:45:38.patch, 
 KAFKA-1499_2014-09-24_14:20:33.patch, KAFKA-1499_2014-09-24_14:24:54.patch, 
 KAFKA-1499_2014-09-25_11:05:57.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 A given topic can have messages in mixed compression codecs. i.e., it can
 also have a mix of uncompressed/compressed messages.
 It will be useful to support a broker-side configuration to recompress
 messages to a specific compression codec. i.e., all messages (for all
 topics) on the broker will be compressed to this codec. We could have
 per-topic overrides as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1499) Broker-side compression configuration

2014-10-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158102#comment-14158102
 ] 

Guozhang Wang commented on KAFKA-1499:
--

Hey Joel, I think we are on the same page, but I was not clear before: I was 
just saying that when using a broker-side compression codec that differs from 
the producer's, we cannot avoid de-/re-compression.

 Broker-side compression configuration
 -

 Key: KAFKA-1499
 URL: https://issues.apache.org/jira/browse/KAFKA-1499
 Project: Kafka
  Issue Type: New Feature
Reporter: Joel Koshy
Assignee: Manikumar Reddy
  Labels: newbie++
 Fix For: 0.8.2

 Attachments: KAFKA-1499.patch, KAFKA-1499.patch, 
 KAFKA-1499_2014-08-15_14:20:27.patch, KAFKA-1499_2014-08-21_21:44:27.patch, 
 KAFKA-1499_2014-09-21_15:57:23.patch, KAFKA-1499_2014-09-23_14:45:38.patch, 
 KAFKA-1499_2014-09-24_14:20:33.patch, KAFKA-1499_2014-09-24_14:24:54.patch, 
 KAFKA-1499_2014-09-25_11:05:57.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 A given topic can have messages in mixed compression codecs. i.e., it can
 also have a mix of uncompressed/compressed messages.
 It will be useful to support a broker-side configuration to recompress
 messages to a specific compression codec. i.e., all messages (for all
 topics) on the broker will be compressed to this codec. We could have
 per-topic overrides as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1499) Broker-side compression configuration

2014-10-03 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158111#comment-14158111
 ] 

Joel Koshy commented on KAFKA-1499:
---

Yes, that is correct. However, in practice I think it is still worth thinking 
through how to avoid de-/re-compression, because most companies would set a 
default compression codec in the producer and use the same setting in the 
broker. At least we can document this as a best practice.
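
As an illustration of that best practice (the broker-side key is the one proposed in this ticket, so treat it as tentative):

{code}
// Old (0.8) producer config: pick a codec explicitly...
val producerProps = new java.util.Properties()
producerProps.put("compression.codec", "snappy")
// ...and align the proposed broker-side setting in server.properties:
//   broker.compression.type=snappy
{code}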

 Broker-side compression configuration
 -

 Key: KAFKA-1499
 URL: https://issues.apache.org/jira/browse/KAFKA-1499
 Project: Kafka
  Issue Type: New Feature
Reporter: Joel Koshy
Assignee: Manikumar Reddy
  Labels: newbie++
 Fix For: 0.8.2

 Attachments: KAFKA-1499.patch, KAFKA-1499.patch, 
 KAFKA-1499_2014-08-15_14:20:27.patch, KAFKA-1499_2014-08-21_21:44:27.patch, 
 KAFKA-1499_2014-09-21_15:57:23.patch, KAFKA-1499_2014-09-23_14:45:38.patch, 
 KAFKA-1499_2014-09-24_14:20:33.patch, KAFKA-1499_2014-09-24_14:24:54.patch, 
 KAFKA-1499_2014-09-25_11:05:57.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 A given topic can have messages in mixed compression codecs. i.e., it can
 also have a mix of uncompressed/compressed messages.
 It will be useful to support a broker-side configuration to recompress
 messages to a specific compression codec. i.e., all messages (for all
 topics) on the broker will be compressed to this codec. We could have
 per-topic overrides as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1644) Inherit FetchResponse from RequestOrResponse

2014-10-03 Thread Alexey Ozeritskiy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158114#comment-14158114
 ] 

Alexey Ozeritskiy commented on KAFKA-1644:
--

This patch simplifies writing clients. Without it, we have to use the following 
ugly code:
{code:java}
if (id == ResponseKeys.FetchKey) {
  val response = FetchResponse.readFrom(contentBuffer)
  listener ! FetchAnswer(response)
} else {
  val response = ResponseKeys.deserializerForKey(id)(contentBuffer)
  listener ! Answer(response)
}
{code}
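
With {{FetchResponse}} inheriting from RequestOrResponse, the special case collapses into the generic path (a sketch reusing the names from the snippet above):

{code:java}
// No branch on ResponseKeys.FetchKey needed any more; every response goes
// through the same deserializer lookup.
val response = ResponseKeys.deserializerForKey(id)(contentBuffer)
listener ! Answer(response)
{code}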

 Inherit FetchResponse from RequestOrResponse
 

 Key: KAFKA-1644
 URL: https://issues.apache.org/jira/browse/KAFKA-1644
 Project: Kafka
  Issue Type: Bug
Reporter: Anton Karamanov
Assignee: Anton Karamanov
 Attachments: 
 0001-KAFKA-1644-Inherit-FetchResponse-from-RequestOrRespo.patch


 Unlike all other Kafka API responses {{FetchResponse}} is not a subclass of 
 RequestOrResponse, which requires handling it as a special case while 
 processing responses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1644) Inherit FetchResponse from RequestOrResponse

2014-10-03 Thread Alexey Ozeritskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Ozeritskiy updated KAFKA-1644:
-
Issue Type: Improvement  (was: Bug)

 Inherit FetchResponse from RequestOrResponse
 

 Key: KAFKA-1644
 URL: https://issues.apache.org/jira/browse/KAFKA-1644
 Project: Kafka
  Issue Type: Improvement
Reporter: Anton Karamanov
Assignee: Anton Karamanov
 Attachments: 
 0001-KAFKA-1644-Inherit-FetchResponse-from-RequestOrRespo.patch


 Unlike all other Kafka API responses {{FetchResponse}} is not a subclass of 
 RequestOrResponse, which requires handling it as a special case while 
 processing responses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1666) Issue for sending more message to Kafka Broker

2014-10-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158156#comment-14158156
 ] 

Guozhang Wang commented on KAFKA-1666:
--

Also, there might be an issue with JMeter leaking sockets if it does not close 
a socket after opening one for querying metadata, since under normal operation 
the number of open sockets should be low.

 Issue for sending more message to Kafka Broker
 --

 Key: KAFKA-1666
 URL: https://issues.apache.org/jira/browse/KAFKA-1666
 Project: Kafka
  Issue Type: Bug
  Components: config
Affects Versions: 0.8.1.1
 Environment: Ubuntu 14
Reporter: rajendram kathees
   Original Estimate: 24h
  Remaining Estimate: 24h

 I tried to send 5000 messages to the Kafka broker using JMeter (10 threads and 500 
 messages per thread; each message is 105 bytes). After 2100 messages I get 
 the following exception. I changed the buffer size 
 (socket.request.max.bytes) value in the server.properties file, but I still 
 get the same exception. When I send 2000 messages, all messages are sent to 
 the Kafka broker. Can you give a solution?
 [2014-10-03 12:31:07,051] ERROR - Utils$ fetching topic metadata for topics 
 [Set(test1)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
 kafka.common.KafkaException: fetching topic metadata for topics [Set(test1)] 
 from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
 Caused by: java.net.SocketException: Too many open files
 [stack trace identical to the original report above]

Re: Review Request 24214: Patch for KAFKA-1374

2014-10-03 Thread Manikumar Reddy O


 On Aug. 18, 2014, 5:32 p.m., Joel Koshy wrote:
  I should be able to review this later today. However, as Jun also mentioned, 
  can you please run the stress test? When I was working on the original 
  (WIP) patch it worked but eventually failed (due to various reasons such as 
  corrupt message sizes, etc.) on a stress test after several segments had 
  rolled and after several log cleaner runs - although I didn't get time to 
  look into it, your patch should have hopefully addressed these issues.
 
 Manikumar Reddy O wrote:
 I tested the patch with my own test code and it is working fine.
 
 I ran the TestLogCleaning stress test. Sometimes this test fails, 
 but I am not getting any broker-side errors/corrupt messages.
 
 I also ran TestLogCleaning on trunk (without my patch). This test is 
 failing for multiple topics.
 I am looking into the TestLogCleaning code and trying to fix any issue.
 
 I will keep you updated on the testing status.

I successfully ran the TestLogCleaning stress test, for 1, 5, and 10 
million messages.

Jun,

I removed the usage of the MemoryRecords and Compressor.putRecord classes from 
this patch. Currently Compressor.close() returns a compressed message whose 
offset is based on the number of messages in that compressed set (if I compress 
messages with offsets 10,11,12,13,14,15, then the compressed message will have 
offset 5).
Because of this behavior, we cannot use it for server-side compression. (For 
the server side, if I compress messages with offsets 10,11,12,13,14,15, then 
the compressed message should have offset 15.)


- Manikumar Reddy


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24214/#review50901
---


On Oct. 3, 2014, 1:50 p.m., Manikumar Reddy O wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/24214/
 ---
 
 (Updated Oct. 3, 2014, 1:50 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1374
 https://issues.apache.org/jira/browse/KAFKA-1374
 
 
 Repository: kafka
 
 
 Description
 ---
 
 fixed couple of bugs and updating stress test details
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/log/LogCleaner.scala 
 c20de4ad4734c0bd83c5954fdb29464a27b91dff 
   core/src/main/scala/kafka/tools/TestLogCleaning.scala 
 1d4ea93f2ba8d4d4d47a307cd47f54a15d3d30dd 
   core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala 
 5bfa764638e92f217d0ff7108ec8f53193c22978 
 
 Diff: https://reviews.apache.org/r/24214/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Manikumar Reddy O
 




[jira] [Commented] (KAFKA-1663) Controller unable to shutdown after a soft failure

2014-10-03 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158227#comment-14158227
 ] 

Sriharsha Chintalapani commented on KAFKA-1663:
---

[~nehanarkhede] I ran all the tests in KAFKA-1558; they all pass with the above 
patch, and I didn't see any issues with the soft failure case.

 Controller unable to shutdown after a soft failure
 --

 Key: KAFKA-1663
 URL: https://issues.apache.org/jira/browse/KAFKA-1663
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1663.patch


 As part of testing KAFKA-1558 I came across a case where inducing a soft 
 failure in the current controller elects a new controller, but the old 
 controller doesn't shut down properly.
 Steps to reproduce:
 1) 5-broker cluster
 2) high number of topics (I tested it with 1000 topics)
 3) on the current controller, do kill -SIGSTOP pid (the broker's process id)
 4) wait a bit over the zookeeper timeout (server.properties)
 5) kill -SIGCONT pid
 6) a new controller will be elected; check the old controller's
 log:
 [2014-09-30 15:59:53,398] INFO [SessionExpirationListener on 1], ZK expired; 
 shut down all controller components and try to re-elect 
 (kafka.controller.KafkaController$SessionExpirationListener)
 [2014-09-30 15:59:53,400] INFO [delete-topics-thread-1], Shutting down 
 (kafka.controller.TopicDeletionManager$DeleteTopicsThread)
 If it stops there and the broker logs keep printing 
 Cached zkVersion [0] not equal to that in zookeeper, skip updating ISR 
 (kafka.cluster.Partition)
 then the controller shutdown never completes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1493) Use a well-documented LZ4 compression format and remove redundant LZ4HC option

2014-10-03 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158291#comment-14158291
 ] 

Jun Rao commented on KAFKA-1493:


The easiest thing is probably to just take out LZ4 in CompressionType and 
CompressionCodec in 0.8.2. 

 Use a well-documented LZ4 compression format and remove redundant LZ4HC option
 --

 Key: KAFKA-1493
 URL: https://issues.apache.org/jira/browse/KAFKA-1493
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.2
Reporter: James Oliver
Assignee: James Oliver
Priority: Blocker
 Fix For: 0.8.2






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] 0.8.1.2 Release

2014-10-03 Thread Jun Rao
Balaji,

Thanks for your interest. Do you think you could help on KAFKA-1493?

Jun

On Thu, Oct 2, 2014 at 3:17 PM, Seshadri, Balaji balaji.sesha...@dish.com
wrote:

 I would be glad to help in any of these blockers,please let me know.

 Thanks,

 Balaji

 -Original Message-
 From: Jun Rao [mailto:jun...@gmail.com]
 Sent: Thursday, October 02, 2014 3:48 PM
 To: us...@kafka.apache.org
 Cc: dev@kafka.apache.org
 Subject: Re: [DISCUSS] 0.8.1.2 Release

 We already cut an 0.8.2 release branch. The plan is to have the remaining
 blockers resolved before releasing it. Hopefully this will just take a
 couple of weeks.


 https://issues.apache.org/jira/browse/KAFKA-1663?filter=-4&jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20priority%20%3D%20Blocker%20AND%20fixVersion%20%3D%200.8.2%20ORDER%20BY%20createdDate%20DESC

 Thanks,

 Jun

 On Thu, Oct 2, 2014 at 11:28 AM, Kane Kane kane.ist...@gmail.com wrote:

  Having the same question: what happened to 0.8.2 release, when it's
  supposed to happen?
 
  Thanks.
 
  On Tue, Sep 30, 2014 at 12:49 PM, Jonathan Weeks
  jonathanbwe...@gmail.com wrote:
   I was one asking for 0.8.1.2 a few weeks back, when 0.8.2 was at least
   6-8 weeks out.
  
   If we truly believe that 0.8.2 will go “golden” and stable in 2-3 weeks,
   I, for one, don’t need a 0.8.1.2, but it depends on the confidence in
   shipping 0.8.2 soonish.
  
   YMMV,
  
   -Jonathan
  
  
   On Sep 30, 2014, at 12:37 PM, Neha Narkhede
   neha.narkh...@gmail.com wrote:
  
   Can we discuss the need for 0.8.1.2? I'm wondering if it's related to the
   timeline of 0.8.2 in any way? For instance, if we can get 0.8.2 out in the
   next 2-3 weeks, do we still need to get 0.8.1.2 out or can people
   just upgrade to 0.8.2?
  
   On Tue, Sep 30, 2014 at 9:53 AM, Joe Stein joe.st...@stealth.ly wrote:
  
   Hi, I wanted to kick off a specific discussion on a 0.8.1.2 release.
  
   Here are the JIRAs I would like to propose to back port a patch
   (if not already done so) and apply them to the 0.8.1 branch for a
   0.8.1.2 release:
  
   https://issues.apache.org/jira/browse/KAFKA-1502 (source jar is empty)
   https://issues.apache.org/jira/browse/KAFKA-1419 (cross build for scala 2.11)
   https://issues.apache.org/jira/browse/KAFKA-1382 (Update zkVersion on
   partition state update failures)
   https://issues.apache.org/jira/browse/KAFKA-1490 (remove gradlew initial
   setup output from source distribution)
   https://issues.apache.org/jira/browse/KAFKA-1645 (some more jars in our
   src release)
  
   If the community and committers can comment on the patches proposed,
   that would be great. If I missed any, bring them up, or if you think any I
   have proposed shouldn't be in the release, bring that up too please.
  
   Once we have consensus on this thread my thought was that I would apply and
   commit the agreed-to tickets to the 0.8.1 branch. If any tickets
   don't apply, of course a back port patch has to happen through our
   standard process (not worried about that; we have some engineering
   cycles to contribute to making that happen). Once that is all done, I will
   build 0.8.1.2 release artifacts and call a VOTE for RC1.
  
   /***
   Joe Stein
   Founder, Principal Consultant
   Big Data Open Source Security LLC
   http://www.stealth.ly
   Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
   /
  
  
 



[jira] [Commented] (KAFKA-1499) Broker-side compression configuration

2014-10-03 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158331#comment-14158331
 ] 

Jay Kreps commented on KAFKA-1499:
--

Hey [~jjkoshy] I agree that you can kind of argue for broker-side compression 
and you can kind of argue for just making client-side compression work with 
compaction. But I really don't understand the case for having both, other than 
reducing friction in a single release.

 Broker-side compression configuration
 -

 Key: KAFKA-1499
 URL: https://issues.apache.org/jira/browse/KAFKA-1499
 Project: Kafka
  Issue Type: New Feature
Reporter: Joel Koshy
Assignee: Manikumar Reddy
  Labels: newbie++
 Fix For: 0.8.2

 Attachments: KAFKA-1499.patch, KAFKA-1499.patch, 
 KAFKA-1499_2014-08-15_14:20:27.patch, KAFKA-1499_2014-08-21_21:44:27.patch, 
 KAFKA-1499_2014-09-21_15:57:23.patch, KAFKA-1499_2014-09-23_14:45:38.patch, 
 KAFKA-1499_2014-09-24_14:20:33.patch, KAFKA-1499_2014-09-24_14:24:54.patch, 
 KAFKA-1499_2014-09-25_11:05:57.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 A given topic can have messages in mixed compression codecs. i.e., it can
 also have a mix of uncompressed/compressed messages.
 It will be useful to support a broker-side configuration to recompress
 messages to a specific compression codec. i.e., all messages (for all
 topics) on the broker will be compressed to this codec. We could have
 per-topic overrides as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1667) topic-level configuration not validated

2014-10-03 Thread Ryan Berdeen (JIRA)
Ryan Berdeen created KAFKA-1667:
---

 Summary:  topic-level configuration not validated
 Key: KAFKA-1667
 URL: https://issues.apache.org/jira/browse/KAFKA-1667
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Ryan Berdeen


I was able to set the configuration for a topic to these invalid values:

{code}
Topic:topic-config-test  PartitionCount:1  ReplicationFactor:2  
Configs:min.cleanable.dirty.ratio=-30.2,segment.bytes=-1,retention.ms=-12,cleanup.policy=lol
{code}

It seems that the values are saved as long as they are the correct type, but 
are not validated like the corresponding broker-level properties.
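
A hypothetical sketch of the missing validation, using the property names from the example above (the accepted ranges shown are assumptions):

{code}
def validateTopicOverrides(props: Map[String, String]): Unit = {
  props.get("min.cleanable.dirty.ratio").foreach { v =>
    val ratio = v.toDouble
    require(ratio >= 0.0 && ratio <= 1.0, "min.cleanable.dirty.ratio must be in [0, 1]")
  }
  props.get("segment.bytes").foreach { v =>
    require(v.toInt > 0, "segment.bytes must be positive")
  }
  props.get("retention.ms").foreach { v =>
    require(v.toLong > 0, "retention.ms must be positive")
  }
  props.get("cleanup.policy").foreach { v =>
    require(Set("delete", "compact")(v), "cleanup.policy must be 'delete' or 'compact'")
  }
}
{code}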



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1668) TopicCommand doesn't warn if --topic argument doesn't match any topics

2014-10-03 Thread Ryan Berdeen (JIRA)
Ryan Berdeen created KAFKA-1668:
---

 Summary: TopicCommand doesn't warn if --topic argument doesn't 
match any topics
 Key: KAFKA-1668
 URL: https://issues.apache.org/jira/browse/KAFKA-1668
 Project: Kafka
  Issue Type: Bug
Reporter: Ryan Berdeen


Running {{kafka-topics.sh --alter}} with an invalid {{--topic}} argument 
produces no output and exits with 0, indicating success.

{code}
$ bin/kafka-topics.sh --topic does-not-exist --alter --config invalid=xxx --zookeeper zkhost:2181
$ echo $?
0
{code}

An invalid topic name or a regular expression that matches 0 topics should at 
least print a warning.
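
A hypothetical sketch of the check TopicCommand could perform before altering (names assumed):

{code}
def alterTopics(matched: Seq[String], topicArg: String): Unit = {
  if (matched.isEmpty)
    System.err.println("Warning: --topic %s did not match any topics".format(topicArg))
  // ... proceed with alteration for the matched topics ...
}
{code}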



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1493) Use a well-documented LZ4 compression format and remove redundant LZ4HC option

2014-10-03 Thread Theo Hultberg (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158408#comment-14158408
 ] 

Theo Hultberg commented on KAFKA-1493:
--

If you're looking for a standard way to handle LZ4 there doesn't seem to be 
any, but Cassandra uses a 4-byte field for the uncompressed length and no 
checksum 
(https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/compress/LZ4Compressor.java).

I've seen varints used in other projects too, but in my opinion they're a pain 
to implement compared to just using an int, and for very little benefit. The 
drawbacks of an int are that small messages will use one or two bytes more, and 
that you can't handle compressed chunks over a couple of gigabytes.
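
A sketch of the Cassandra-style framing described above: a plain 4-byte uncompressed-length field in front of the compressed block, with no checksum (method name assumed):

{code}
import java.nio.ByteBuffer

def frame(uncompressedLength: Int, compressedBlock: Array[Byte]): ByteBuffer = {
  val buf = ByteBuffer.allocate(4 + compressedBlock.length)
  buf.putInt(uncompressedLength) // fixed 4-byte int rather than a varint
  buf.put(compressedBlock)
  buf.flip()
  buf
}
{code}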

Sorry for jumping into the discussion out of the blue, I just stumbled upon 
this while looking through the issues for 0.8.2. I've got very little 
experience with the Kafka codebase, but I'm the author of the Ruby driver for 
Cassandra and I recognized the issue. Hope this was helpful and I didn't 
completely miss the point.

 Use a well-documented LZ4 compression format and remove redundant LZ4HC option
 --

 Key: KAFKA-1493
 URL: https://issues.apache.org/jira/browse/KAFKA-1493
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.2
Reporter: James Oliver
Assignee: James Oliver
Priority: Blocker
 Fix For: 0.8.2






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1668) TopicCommand doesn't warn if --topic argument doesn't match any topics

2014-10-03 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1668:
-
Priority: Minor  (was: Major)

 TopicCommand doesn't warn if --topic argument doesn't match any topics
 --

 Key: KAFKA-1668
 URL: https://issues.apache.org/jira/browse/KAFKA-1668
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Ryan Berdeen
Priority: Minor
  Labels: newbie

 Running {{kafka-topics.sh --alter}} with an invalid {{--topic}} argument 
 produces no output and exits with 0, indicating success.
 {code}
 $ bin/kafka-topics.sh --topic does-not-exist --alter --config invalid=xxx --zookeeper zkhost:2181
 $ echo $?
 0
 {code}
 An invalid topic name or a regular expression that matches 0 topics should at 
 least print a warning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1669) Default rebalance retries and backoff should be higher

2014-10-03 Thread Clark Haskins (JIRA)
Clark Haskins created KAFKA-1669:


 Summary: Default rebalance retries and backoff should be higher
 Key: KAFKA-1669
 URL: https://issues.apache.org/jira/browse/KAFKA-1669
 Project: Kafka
  Issue Type: Improvement
Reporter: Clark Haskins


The default rebalance logic does not work for consumers with large numbers of 
partitions and/or topics. 




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1668) TopicCommand doesn't warn if --topic argument doesn't match any topics

2014-10-03 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1668:
-
Labels: newbie  (was: )

 TopicCommand doesn't warn if --topic argument doesn't match any topics
 --

 Key: KAFKA-1668
 URL: https://issues.apache.org/jira/browse/KAFKA-1668
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Ryan Berdeen
Priority: Minor
  Labels: newbie

 Running {{kafka-topics.sh --alter}} with an invalid {{--topic}} argument 
 produces no output and exits with 0, indicating success.
 {code}
 $ bin/kafka-topics.sh --topic does-not-exist --alter --config invalid=xxx --zookeeper zkhost:2181
 $ echo $?
 0
 {code}
 An invalid topic name or a regular expression that matches 0 topics should at 
 least print a warning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1668) TopicCommand doesn't warn if --topic argument doesn't match any topics

2014-10-03 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1668:
-
Component/s: tools

 TopicCommand doesn't warn if --topic argument doesn't match any topics
 --

 Key: KAFKA-1668
 URL: https://issues.apache.org/jira/browse/KAFKA-1668
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Ryan Berdeen
Priority: Minor
  Labels: newbie

 Running {{kafka-topics.sh --alter}} with an invalid {{--topic}} argument 
 produces no output and exits with 0, indicating success.
 {code}
 $ bin/kafka-topics.sh --topic does-not-exist --alter --config invalid=xxx --zookeeper zkhost:2181
 $ echo $?
 0
 {code}
 An invalid topic name or a regular expression that matches 0 topics should at 
 least print a warning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1669) Default rebalance retries and backoff should be higher

2014-10-03 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat reassigned KAFKA-1669:
--

Assignee: Mayuresh Gharat

 Default rebalance retries and backoff should be higher
 --

 Key: KAFKA-1669
 URL: https://issues.apache.org/jira/browse/KAFKA-1669
 Project: Kafka
  Issue Type: Improvement
Reporter: Clark Haskins
Assignee: Mayuresh Gharat
  Labels: newbie

 The default rebalance logic does not work for consumers with large numbers of 
 partitions and/or topics. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1669) Default rebalance retries and backoff should be higher

2014-10-03 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1669:
-
Labels: newbie  (was: )

 Default rebalance retries and backoff should be higher
 --

 Key: KAFKA-1669
 URL: https://issues.apache.org/jira/browse/KAFKA-1669
 Project: Kafka
  Issue Type: Improvement
Reporter: Clark Haskins
Assignee: Mayuresh Gharat
  Labels: newbie

 The default rebalance logic does not work for consumers with large numbers of 
 partitions and/or topics. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1669) Default rebalance retries and backoff should be higher

2014-10-03 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1669:
-
Component/s: consumer

 Default rebalance retries and backoff should be higher
 --

 Key: KAFKA-1669
 URL: https://issues.apache.org/jira/browse/KAFKA-1669
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Clark Haskins
Assignee: Mayuresh Gharat
  Labels: newbie

 The default rebalance logic does not work for consumers with large numbers of 
 partitions and/or topics. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1670) Corrupt log files for segment.bytes values close to Int.MaxInt

2014-10-03 Thread Ryan Berdeen (JIRA)
Ryan Berdeen created KAFKA-1670:
---

 Summary: Corrupt log files for segment.bytes values close to 
Int.MaxInt
 Key: KAFKA-1670
 URL: https://issues.apache.org/jira/browse/KAFKA-1670
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Ryan Berdeen
Priority: Blocker


The maximum value for the topic-level config {{segment.bytes}} is 
{{Int.MaxInt}} (2147483647). *Using this value causes brokers to corrupt their 
log files, leaving them unreadable.*

We set {{segment.bytes}} to {{2122317824}}, which is well below the maximum. One 
by one, the ISR of all partitions shrank to 1. Brokers would crash when 
restarted, attempting to read from a negative offset in a log file. After 
discovering that many segment files had grown to 4GB or more, we were forced to 
shut down our *entire production Kafka cluster* for several hours while we 
split all segment files into 1GB chunks.

Looking into the {{kafka.log}} code, the {{segment.bytes}} parameter is used 
inconsistently. It is treated as a *soft* maximum for the size of the segment file 
(https://github.com/apache/kafka/blob/0.8.1.1/core/src/main/scala/kafka/log/LogConfig.scala#L26), 
with logs rolled only after they exceed this value 
(https://github.com/apache/kafka/blob/0.8.1.1/core/src/main/scala/kafka/log/Log.scala#L246). 
However, much of the code that deals with log files uses *ints* to store the size 
of the file and the position in the file. Overflow of these ints leads the broker 
to append to the segments indefinitely, and to fail to read these segments for 
consuming or recovery.
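
The arithmetic is easy to see in isolation; a standalone Scala illustration of 
the wraparound (toy code, not Kafka's):

{code}
// An Int file position wraps negative once the segment passes 2 GB:
var position: Int = Int.MaxValue   // 2147483647, the configured segment.bytes
position += 105                    // append one more ~105-byte message
println(position)                  // -2147483544: read back as a negative offset
{code}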

This is trivial to reproduce:

{code}
$ bin/kafka-topics.sh --topic segment-bytes-test --create --replication-factor 
2 --partitions 1 --zookeeper zkhost:2181
$ bin/kafka-topics.sh --topic segment-bytes-test --alter --config 
segment.bytes=2147483647 --zookeeper zkhost:2181
$ yes Int.MaxValue is a ridiculous bound on file size in 2014 | 
bin/kafka-console-producer.sh --broker-list localhost:6667 zkhost:2181 --topic 
segment-bytes-test
{code}

After running for a few minutes, the log file is corrupt:

{code}
$ ls -lh data/segment-bytes-test-0/
total 9.7G
-rw-r--r-- 1 root root  10M Oct  3 19:39 .index
-rw-r--r-- 1 root root 9.7G Oct  3 19:39 .log
{code}

We recovered the data from the log files using a simple Python script: 
https://gist.github.com/also/9f823d9eb9dc0a410796



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-234) Make backoff time during consumer rebalance configurable

2014-10-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158618#comment-14158618
 ] 

Guozhang Wang commented on KAFKA-234:
-

Hi Jun,

Just wondering why we want to make zkSyncTime the default value of 
fetcherBackoffMs in the first place? These two configs do not seem related to 
me.

 Make backoff time during consumer rebalance configurable
 

 Key: KAFKA-234
 URL: https://issues.apache.org/jira/browse/KAFKA-234
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.7.1
Reporter: Jun Rao
Assignee: Jun Rao
 Fix For: 0.7.1

 Attachments: kafka-234.patch


 We need to make backoff time during consumer rebalance directly configurable, 
 instead of relying on zkSyncTime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1558) AdminUtils.deleteTopic does not work

2014-10-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158626#comment-14158626
 ] 

Guozhang Wang commented on KAFKA-1558:
--

[~harsha_ch] could we also try one more test case: deleting a topic while a 
leader rebalance is ongoing?

 AdminUtils.deleteTopic does not work
 

 Key: KAFKA-1558
 URL: https://issues.apache.org/jira/browse/KAFKA-1558
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Henning Schmiedehausen
Assignee: Sriharsha Chintalapani
Priority: Blocker
 Fix For: 0.8.2

 Attachments: kafka-thread-dump.log


 the AdminUtils.deleteTopic method is implemented as
 {code}
 def deleteTopic(zkClient: ZkClient, topic: String) {
 ZkUtils.createPersistentPath(zkClient, 
 ZkUtils.getDeleteTopicPath(topic))
 }
 {code}
 but the DeleteTopicCommand actually does
 {code}
 zkClient = new ZkClient(zkConnect, 3, 3, ZKStringSerializer)
 zkClient.deleteRecursive(ZkUtils.getTopicPath(topic))
 {code}
 so I guess that the 'createPersistentPath' above should actually be 
 {code}
 def deleteTopic(zkClient: ZkClient, topic: String) {
 ZkUtils.deletePathRecursive(zkClient, ZkUtils.getTopicPath(topic))
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1558) AdminUtils.deleteTopic does not work

2014-10-03 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158637#comment-14158637
 ] 

Sriharsha Chintalapani commented on KAFKA-1558:
---

[~guozhang] I'll work on that test and report the results. Is there any way 
to induce a leader rebalance manually?

 AdminUtils.deleteTopic does not work
 

 Key: KAFKA-1558
 URL: https://issues.apache.org/jira/browse/KAFKA-1558
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Henning Schmiedehausen
Assignee: Sriharsha Chintalapani
Priority: Blocker
 Fix For: 0.8.2

 Attachments: kafka-thread-dump.log


 the AdminUtils.deleteTopic method is implemented as
 {code}
 def deleteTopic(zkClient: ZkClient, topic: String) {
 ZkUtils.createPersistentPath(zkClient, 
 ZkUtils.getDeleteTopicPath(topic))
 }
 {code}
 but the DeleteTopicCommand actually does
 {code}
 zkClient = new ZkClient(zkConnect, 3, 3, ZKStringSerializer)
 zkClient.deleteRecursive(ZkUtils.getTopicPath(topic))
 {code}
 so I guess that the 'createPersistentPath' above should actually be 
 {code}
 def deleteTopic(zkClient: ZkClient, topic: String) {
 ZkUtils.deletePathRecursive(zkClient, ZkUtils.getTopicPath(topic))
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 26331: Patch for KAFKA-1669

2014-10-03 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26331/
---

Review request for kafka.


Bugs: KAFKA-1669
https://issues.apache.org/jira/browse/KAFKA-1669


Repository: kafka


Description
---

Updated the default settings for MaxRebalanceRetries and RebalanceBackOffMs
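
Until the new defaults land, the same knobs can be raised per consumer; an 
illustrative override (the values are examples, not the proposed defaults):

{code}
# consumer.properties
rebalance.max.retries=10
rebalance.backoff.ms=10000
{code}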


Diffs
-

  core/src/main/scala/kafka/consumer/ConsumerConfig.scala 
9ebbee6c16dc83767297c729d2d74ebbd063a993 

Diff: https://reviews.apache.org/r/26331/diff/


Testing
---


Thanks,

Mayuresh Gharat



[jira] [Updated] (KAFKA-1669) Default rebalance retries and backoff should be higher

2014-10-03 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-1669:
---
Attachment: KAFKA-1669.patch

 Default rebalance retries and backoff should be higher
 --

 Key: KAFKA-1669
 URL: https://issues.apache.org/jira/browse/KAFKA-1669
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Clark Haskins
Assignee: Mayuresh Gharat
  Labels: newbie
 Attachments: KAFKA-1669.patch


 The default rebalance logic does not work for consumers with large numbers of 
 partitions and/or topics. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1669) Default rebalance retries and backoff should be higher

2014-10-03 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158639#comment-14158639
 ] 

Mayuresh Gharat commented on KAFKA-1669:


Created reviewboard https://reviews.apache.org/r/26331/diff/
 against branch origin/trunk

 Default rebalance retries and backoff should be higher
 --

 Key: KAFKA-1669
 URL: https://issues.apache.org/jira/browse/KAFKA-1669
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Clark Haskins
Assignee: Mayuresh Gharat
  Labels: newbie
 Attachments: KAFKA-1669.patch


 The default rebalance logic does not work for consumers with large numbers of 
 partitions and/or topics. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1669) Default rebalance retries and backoff should be higher

2014-10-03 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-1669:
---
Status: Patch Available  (was: Open)

 Default rebalance retries and backoff should be higher
 --

 Key: KAFKA-1669
 URL: https://issues.apache.org/jira/browse/KAFKA-1669
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Clark Haskins
Assignee: Mayuresh Gharat
  Labels: newbie
 Attachments: KAFKA-1669.patch


 The default rebalance logic does not work for consumers with large numbers of 
 partitions and/or topics. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1441) Purgatory purge causes latency spikes

2014-10-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-1441.
--
Resolution: Fixed

 Purgatory purge causes latency spikes
 -

 Key: KAFKA-1441
 URL: https://issues.apache.org/jira/browse/KAFKA-1441
 Project: Kafka
  Issue Type: Bug
Reporter: Jay Kreps

 The request purgatory has a funky thing where it periodically loops over all 
 watches and purges them. If you have a fair number of partitions you can 
 accumulate lots of watches and purging them can take a long time. During this 
 time all expiry is halted.
 Here is an example log:
 [2014-05-08 21:07:41,950] INFO ExpiredRequestReaper-2 Expired request after 
 10ms: 5829 (kafka.server.RequestPurgatory$ExpiredRequestReaper)
 [2014-05-08 21:07:41,952] INFO ExpiredRequestReaper-2 Expired request after 
 10ms: 5882 (kafka.server.RequestPurgatory$ExpiredRequestReaper)
 [2014-05-08 21:07:41,967] INFO ExpiredRequestReaper-2 Expired request after 
 11ms: 5884 (kafka.server.RequestPurgatory$ExpiredRequestReaper)
 [2014-05-08 21:07:41,968] INFO ExpiredRequestReaper-2 Purging purgatory 
 (kafka.server.RequestPurgatory$ExpiredRequestReaper)
 [2014-05-08 21:07:41,969] INFO ExpiredRequestReaper-2 Purged 0 requests from 
 delay queue. (kafka.server.RequestPurgatory$ExpiredRequestReaper)
 [2014-05-08 21:07:42,305] INFO ExpiredRequestReaper-2 Purged 340809 (watcher) 
 requests. (kafka.server.RequestPurgatory$ExpiredRequestReaper)
 [2014-05-08 21:07:42,305] INFO ExpiredRequestReaper-2 Expired request after 
 106ms: 5847 (kafka.server.RequestPurgatory$ExpiredRequestReaper)
 [2014-05-08 21:07:42,305] INFO ExpiredRequestReaper-2 Expired request after 
 106ms: 5904 (kafka.server.RequestPurgatory$ExpiredRequestReaper)
 [2014-05-08 21:07:42,328] INFO ExpiredRequestReaper-2 Expired request after 
 10ms: 5908 (kafka.server.RequestPurgatory$ExpiredRequestReaper)
 [2014-05-08 21:07:42,329] INFO ExpiredRequestReaper-2 Expired request after 
 10ms: 5852 (kafka.server.RequestPurgatory$ExpiredRequestReaper)
 [2014-05-08 21:07:42,343] INFO ExpiredRequestReaper-2 Expired request after 
 11ms: 5854 (kafka.server.RequestPurgatory$ExpiredRequestReaper)
 Combined with our buggy purgatory request impls that can sometimes hit their 
 expiration, this can lead to huge latency spikes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1650) Mirror Maker could lose data on unclean shutdown.

2014-10-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158658#comment-14158658
 ] 

Guozhang Wang commented on KAFKA-1650:
--

Jiangjie, could you also consider fixing KAFKA-1385 in your patch?

 Mirror Maker could lose data on unclean shutdown.
 -

 Key: KAFKA-1650
 URL: https://issues.apache.org/jira/browse/KAFKA-1650
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-1650.patch


 Currently, if the mirror maker is shut down uncleanly, the data in the data 
 channel and buffer could potentially be lost. With the new producer's 
 callback, this issue could be solved.
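 The idea is roughly this (a hedged sketch, not the attached patch; 
 {{sendTracked}} and {{unackedOffsets}} are made-up names):
 {code}
 import org.apache.kafka.clients.producer.{Callback, KafkaProducer, ProducerRecord, RecordMetadata}
 import java.util.concurrent.ConcurrentSkipListSet

 // Offsets handed to the producer but not yet acked; commit only below the min.
 val unackedOffsets = new ConcurrentSkipListSet[java.lang.Long]()

 def sendTracked(producer: KafkaProducer[Array[Byte], Array[Byte]],
                 record: ProducerRecord[Array[Byte], Array[Byte]],
                 sourceOffset: Long) {
   unackedOffsets.add(sourceOffset)
   producer.send(record, new Callback {
     override def onCompletion(md: RecordMetadata, e: Exception) {
       if (e == null) unackedOffsets.remove(sourceOffset) // safe to commit past it
     }
   })
 }
 {code}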



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 26332: Patch for KAFKA-1662

2014-10-03 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26332/
---

Review request for kafka.


Bugs: KAFKA-1662
https://issues.apache.org/jira/browse/KAFKA-1662


Repository: kafka


Description
---

KAFKA-1662. gradle release issue permgen space.
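
The only file in the diff below is gradle.properties; a fix of this shape 
typically raises the build JVM's PermGen, e.g. (illustrative values, not 
necessarily the exact patch):

{code}
# gradle.properties
org.gradle.jvmargs=-XX:MaxPermSize=512m -Xmx1024m
{code}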


Diffs
-

  gradle.properties a04a6c8b98c892dd3370b3958793b2ba0d9b646c 

Diff: https://reviews.apache.org/r/26332/diff/


Testing
---


Thanks,

Sriharsha Chintalapani



[jira] [Updated] (KAFKA-1662) gradle release issue permgen space

2014-10-03 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1662:
--
Assignee: Sriharsha Chintalapani
  Status: Patch Available  (was: Open)

 gradle release issue permgen space
 --

 Key: KAFKA-1662
 URL: https://issues.apache.org/jira/browse/KAFKA-1662
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
Priority: Blocker
  Labels: newbie
 Fix For: 0.8.2

 Attachments: KAFKA-1662.patch


 Finding issues doing the kafka release with PermGen space:
 ./gradlew releaseTarGzAll
 [ant:scaladoc] java.lang.OutOfMemoryError: PermGen space
 :kafka-0.8.2-ALPHA1-src:core:scaladoc FAILED
 :releaseTarGz_2_10_1 FAILED
 FAILURE: Build failed with an exception.
 * What went wrong:
 Execution failed for task ':core:scaladoc'.
  PermGen space
 * Try:
 Run with --stacktrace option to get the stack trace. Run with --info or 
 --debug option to get more log output.
 BUILD FAILED
 Total time: 5 mins 55.53 secs
 FAILURE: Build failed with an exception.
 * What went wrong:
 PermGen space



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1662) gradle release issue permgen space

2014-10-03 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1662:
--
Attachment: KAFKA-1662.patch

 gradle release issue permgen space
 --

 Key: KAFKA-1662
 URL: https://issues.apache.org/jira/browse/KAFKA-1662
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Priority: Blocker
  Labels: newbie
 Fix For: 0.8.2

 Attachments: KAFKA-1662.patch


 Finding issues doing the kafka release with PermGen space:
 ./gradlew releaseTarGzAll
 [ant:scaladoc] java.lang.OutOfMemoryError: PermGen space
 :kafka-0.8.2-ALPHA1-src:core:scaladoc FAILED
 :releaseTarGz_2_10_1 FAILED
 FAILURE: Build failed with an exception.
 * What went wrong:
 Execution failed for task ':core:scaladoc'.
  PermGen space
 * Try:
 Run with --stacktrace option to get the stack trace. Run with --info or 
 --debug option to get more log output.
 BUILD FAILED
 Total time: 5 mins 55.53 secs
 FAILURE: Build failed with an exception.
 * What went wrong:
 PermGen space



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1662) gradle release issue permgen space

2014-10-03 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158666#comment-14158666
 ] 

Sriharsha Chintalapani commented on KAFKA-1662:
---

Created reviewboard https://reviews.apache.org/r/26332/diff/
 against branch origin/trunk

 gradle release issue permgen space
 --

 Key: KAFKA-1662
 URL: https://issues.apache.org/jira/browse/KAFKA-1662
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Priority: Blocker
  Labels: newbie
 Fix For: 0.8.2

 Attachments: KAFKA-1662.patch


 Finding issues doing the kafka release with PermGen space:
 ./gradlew releaseTarGzAll
 [ant:scaladoc] java.lang.OutOfMemoryError: PermGen space
 :kafka-0.8.2-ALPHA1-src:core:scaladoc FAILED
 :releaseTarGz_2_10_1 FAILED
 FAILURE: Build failed with an exception.
 * What went wrong:
 Execution failed for task ':core:scaladoc'.
  PermGen space
 * Try:
 Run with --stacktrace option to get the stack trace. Run with --info or 
 --debug option to get more log output.
 BUILD FAILED
 Total time: 5 mins 55.53 secs
 FAILURE: Build failed with an exception.
 * What went wrong:
 PermGen space



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 26306: Patch for KAFKA-1663

2014-10-03 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26306/#review55415
---



core/src/main/scala/kafka/controller/TopicDeletionManager.scala
https://reviews.apache.org/r/26306/#comment95770

Can we update this comment accordingly, since places other than these three 
cases also call resumeTopicDeletionThread().


- Guozhang Wang


On Oct. 3, 2014, 1:31 a.m., Sriharsha Chintalapani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/26306/
 ---
 
 (Updated Oct. 3, 2014, 1:31 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1663
 https://issues.apache.org/jira/browse/KAFKA-1663
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1663. Controller unable to shutdown after a soft failure.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/controller/TopicDeletionManager.scala 
 219c4136e905a59a745c3a596c95d59b550e7383 
 
 Diff: https://reviews.apache.org/r/26306/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Sriharsha Chintalapani
 




Re: Review Request 26332: Patch for KAFKA-1662

2014-10-03 Thread Ivan Lyutov

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26332/#review55417
---


Looks OK

- Ivan Lyutov


On Oct. 3, 2014, 11:14 p.m., Sriharsha Chintalapani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/26332/
 ---
 
 (Updated Oct. 3, 2014, 11:14 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1662
 https://issues.apache.org/jira/browse/KAFKA-1662
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1662. gradle release issue permgen space.
 
 
 Diffs
 -
 
   gradle.properties a04a6c8b98c892dd3370b3958793b2ba0d9b646c 
 
 Diff: https://reviews.apache.org/r/26332/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Sriharsha Chintalapani
 




[jira] [Comment Edited] (KAFKA-1662) gradle release issue permgen space

2014-10-03 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158783#comment-14158783
 ] 

Joe Stein edited comment on KAFKA-1662 at 10/4/14 12:25 AM:


I tried the patch and ran

{code}
./gradlew releaseTarGzAll
{code}

and got 

{code}
Unexpected exception thrown.
java.lang.OutOfMemoryError: PermGen space
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2436)
at java.lang.Class.getDeclaredMethod(Class.java:1937)
at 
java.io.ObjectStreamClass.getPrivateMethod(ObjectStreamClass.java:1377)
at java.io.ObjectStreamClass.access$1700(ObjectStreamClass.java:50)
at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:436)
at java.security.AccessController.doPrivileged(Native Method)
at java.io.ObjectStreamClass.init(ObjectStreamClass.java:411)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
at java.io.ObjectStreamClass.init(ObjectStreamClass.java:407)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
at java.io.ObjectStreamClass.init(ObjectStreamClass.java:407)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
at java.io.ObjectStreamClass.init(ObjectStreamClass.java:407)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1114)
at 
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1518)
at 
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1483)
at 
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1400)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1158)
at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1346)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1154)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:330)
at org.gradle.messaging.remote.internal.Message.send(Message.java:40)
at 
org.gradle.messaging.serialize.kryo.JavaSerializer$JavaWriter.write(JavaSerializer.java:62)
at 
org.gradle.messaging.remote.internal.hub.MethodInvocationSerializer$MethodInvocationWriter.writeArguments(MethodInvocationSerializer.java:67)
at 
org.gradle.messaging.remote.internal.hub.MethodInvocationSerializer$MethodInvocationWriter.write(MethodInvocationSerializer.java:63)
at 
org.gradle.messaging.remote.internal.hub.MethodInvocationSerializer$MethodInvocationWriter.write(MethodInvocationSerializer.java:48)
at 
org.gradle.messaging.serialize.kryo.TypeSafeSerializer$2.write(TypeSafeSerializer.java:46)
at 
org.gradle.messaging.remote.internal.hub.InterHubMessageSerializer$MessageWriter.write(InterHubMessageSerializer.java:108)
at 
org.gradle.messaging.remote.internal.hub.InterHubMessageSerializer$MessageWriter.write(InterHubMessageSerializer.java:93)
at 
org.gradle.messaging.remote.internal.inet.SocketConnection.dispatch(SocketConnection.java:112)

{code}


was (Author: joestein):
I tried that and ran

{code}
./gradlew releaseTarGzAll
{code}

and got 

{code}
Unexpected exception thrown.
java.lang.OutOfMemoryError: PermGen space
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2436)
at java.lang.Class.getDeclaredMethod(Class.java:1937)
at 
java.io.ObjectStreamClass.getPrivateMethod(ObjectStreamClass.java:1377)
at java.io.ObjectStreamClass.access$1700(ObjectStreamClass.java:50)
at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:436)
at java.security.AccessController.doPrivileged(Native Method)
at java.io.ObjectStreamClass.init(ObjectStreamClass.java:411)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
at java.io.ObjectStreamClass.init(ObjectStreamClass.java:407)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
at java.io.ObjectStreamClass.init(ObjectStreamClass.java:407)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
at java.io.ObjectStreamClass.init(ObjectStreamClass.java:407)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1114)
at 
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1518)
at 
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1483)
at 
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1400)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1158)
at 

[jira] [Updated] (KAFKA-1662) gradle release issue permgen space

2014-10-03 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1662:
-
Status: Open  (was: Patch Available)

I tried that and ran

{code}
./gradlew releaseTarGzAll
{code}

and got 

{code}
Unexpected exception thrown.
java.lang.OutOfMemoryError: PermGen space
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2436)
at java.lang.Class.getDeclaredMethod(Class.java:1937)
at 
java.io.ObjectStreamClass.getPrivateMethod(ObjectStreamClass.java:1377)
at java.io.ObjectStreamClass.access$1700(ObjectStreamClass.java:50)
at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:436)
at java.security.AccessController.doPrivileged(Native Method)
at java.io.ObjectStreamClass.init(ObjectStreamClass.java:411)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
at java.io.ObjectStreamClass.init(ObjectStreamClass.java:407)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
at java.io.ObjectStreamClass.init(ObjectStreamClass.java:407)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
at java.io.ObjectStreamClass.init(ObjectStreamClass.java:407)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1114)
at 
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1518)
at 
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1483)
at 
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1400)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1158)
at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1346)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1154)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:330)
at org.gradle.messaging.remote.internal.Message.send(Message.java:40)
at 
org.gradle.messaging.serialize.kryo.JavaSerializer$JavaWriter.write(JavaSerializer.java:62)
at 
org.gradle.messaging.remote.internal.hub.MethodInvocationSerializer$MethodInvocationWriter.writeArguments(MethodInvocationSerializer.java:67)
at 
org.gradle.messaging.remote.internal.hub.MethodInvocationSerializer$MethodInvocationWriter.write(MethodInvocationSerializer.java:63)
at 
org.gradle.messaging.remote.internal.hub.MethodInvocationSerializer$MethodInvocationWriter.write(MethodInvocationSerializer.java:48)
at 
org.gradle.messaging.serialize.kryo.TypeSafeSerializer$2.write(TypeSafeSerializer.java:46)
at 
org.gradle.messaging.remote.internal.hub.InterHubMessageSerializer$MessageWriter.write(InterHubMessageSerializer.java:108)
at 
org.gradle.messaging.remote.internal.hub.InterHubMessageSerializer$MessageWriter.write(InterHubMessageSerializer.java:93)
at 
org.gradle.messaging.remote.internal.inet.SocketConnection.dispatch(SocketConnection.java:112)

{code}

 gradle release issue permgen space
 --

 Key: KAFKA-1662
 URL: https://issues.apache.org/jira/browse/KAFKA-1662
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
Priority: Blocker
  Labels: newbie
 Fix For: 0.8.2

 Attachments: KAFKA-1662.patch


 Finding issues doing the kafka release with PermGen space:
 ./gradlew releaseTarGzAll
 [ant:scaladoc] java.lang.OutOfMemoryError: PermGen space
 :kafka-0.8.2-ALPHA1-src:core:scaladoc FAILED
 :releaseTarGz_2_10_1 FAILED
 FAILURE: Build failed with an exception.
 * What went wrong:
 Execution failed for task ':core:scaladoc'.
  PermGen space
 * Try:
 Run with --stacktrace option to get the stack trace. Run with --info or 
 --debug option to get more log output.
 BUILD FAILED
 Total time: 5 mins 55.53 secs
 FAILURE: Build failed with an exception.
 * What went wrong:
 PermGen space



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1650) Mirror Maker could lose data on unclean shutdown.

2014-10-03 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1650:
-
Reviewer: Joel Koshy

[~jjkoshy] Feel free to reassign for review.

 Mirror Maker could lose data on unclean shutdown.
 -

 Key: KAFKA-1650
 URL: https://issues.apache.org/jira/browse/KAFKA-1650
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-1650.patch


 Currently, if the mirror maker is shut down uncleanly, the data in the data 
 channel and buffer could potentially be lost. With the new producer's 
 callback, this issue could be solved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1558) AdminUtils.deleteTopic does not work

2014-10-03 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158797#comment-14158797
 ] 

Neha Narkhede commented on KAFKA-1558:
--

[~sriharsha] Yes. The preferred replica election admin tool can be used to 
balance the leaders. You would have to have imbalanced leaders before that, 
though. A broker bounce is the easiest way to end up with leader imbalance.
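
For reference, the tool in question ships with the distribution; a typical 
invocation (zkhost:2181 is a placeholder):

{code}
bin/kafka-preferred-replica-election.sh --zookeeper zkhost:2181
{code}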

 AdminUtils.deleteTopic does not work
 

 Key: KAFKA-1558
 URL: https://issues.apache.org/jira/browse/KAFKA-1558
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Henning Schmiedehausen
Assignee: Sriharsha Chintalapani
Priority: Blocker
 Fix For: 0.8.2

 Attachments: kafka-thread-dump.log


 the AdminUtils.deleteTopic method is implemented as
 {code}
 def deleteTopic(zkClient: ZkClient, topic: String) {
 ZkUtils.createPersistentPath(zkClient, 
 ZkUtils.getDeleteTopicPath(topic))
 }
 {code}
 but the DeleteTopicCommand actually does
 {code}
 zkClient = new ZkClient(zkConnect, 3, 3, ZKStringSerializer)
 zkClient.deleteRecursive(ZkUtils.getTopicPath(topic))
 {code}
 so I guess that the 'createPersistentPath' above should actually be 
 {code}
 def deleteTopic(zkClient: ZkClient, topic: String) {
 ZkUtils.deletePathRecursive(zkClient, ZkUtils.getTopicPath(topic))
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1662) gradle release issue permgen space

2014-10-03 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158804#comment-14158804
 ] 

Sriharsha Chintalapani commented on KAFKA-1662:
---

[~charmalloc] Can you share which OS you are running this on? I was able to 
reproduce the failure without this patch on OS X 10.9.4, but with this patch 
releaseTarGzAll works fine for me. Did you remove the existing gradle dir and 
rerun the gradle command before running ./gradlew? (See the sketch below.)
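
Something like this, assuming the stale directory is the gradle/ dir in the 
source tree (the paths are assumptions, adjust as needed):

{code}
rm -rf gradle               # drop the stale wrapper dir
gradle                      # re-bootstrap the wrapper
./gradlew releaseTarGzAll   # retry the release build
{code}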

 gradle release issue permgen space
 --

 Key: KAFKA-1662
 URL: https://issues.apache.org/jira/browse/KAFKA-1662
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
Priority: Blocker
  Labels: newbie
 Fix For: 0.8.2

 Attachments: KAFKA-1662.patch


 Finding issues doing the kafka release with permgen space
 ./gradlew releaseTarGzAll
 ant:scaladoc] java.lang.OutOfMemoryError: PermGen space
 :kafka-0.8.2-ALPHA1-src:core:scaladoc FAILED
 :releaseTarGz_2_10_1 FAILED
 FAILURE: Build failed with an exception.
 * What went wrong:
 Execution failed for task ':core:scaladoc'.
  PermGen space
 * Try:
 Run with --stacktrace option to get the stack trace. Run with --info or 
 --debug option to get more log output.
 BUILD FAILED
 Total time: 5 mins 55.53 secs
 FAILURE: Build failed with an exception.
 * What went wrong:
 PermGen space



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1650) Mirror Maker could lose data on unclean shutdown.

2014-10-03 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158810#comment-14158810
 ] 

Jiangjie Qin commented on KAFKA-1650:
-

[~guozhang] I checked KAFKA-1385 and it seems the Mirror Maker code has 
changed a lot since then. I checked my code and it should not have that issue.

 Mirror Maker could lose data on unclean shutdown.
 -

 Key: KAFKA-1650
 URL: https://issues.apache.org/jira/browse/KAFKA-1650
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-1650.patch


 Currently, if the mirror maker is shut down uncleanly, the data in the data 
 channel and buffer could potentially be lost. With the new producer's 
 callback, this issue could be solved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1558) AdminUtils.deleteTopic does not work

2014-10-03 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158812#comment-14158812
 ] 

Sriharsha Chintalapani commented on KAFKA-1558:
---

[~nehanarkhede] [~guozhang] I ran the preferred replica election tool in my 
tests, but I'll try bouncing the broker and then running this tool.

 AdminUtils.deleteTopic does not work
 

 Key: KAFKA-1558
 URL: https://issues.apache.org/jira/browse/KAFKA-1558
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Henning Schmiedehausen
Assignee: Sriharsha Chintalapani
Priority: Blocker
 Fix For: 0.8.2

 Attachments: kafka-thread-dump.log


 the AdminUtils.deleteTopic method is implemented as
 {code}
 def deleteTopic(zkClient: ZkClient, topic: String) {
 ZkUtils.createPersistentPath(zkClient, 
 ZkUtils.getDeleteTopicPath(topic))
 }
 {code}
 but the DeleteTopicCommand actually does
 {code}
 zkClient = new ZkClient(zkConnect, 3, 3, ZKStringSerializer)
 zkClient.deleteRecursive(ZkUtils.getTopicPath(topic))
 {code}
 so I guess that the 'createPersistentPath' above should actually be 
 {code}
 def deleteTopic(zkClient: ZkClient, topic: String) {
 ZkUtils.deletePathRecursive(zkClient, ZkUtils.getTopicPath(topic))
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1663) Controller unable to shutdown after a soft failure

2014-10-03 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158828#comment-14158828
 ] 

Neha Narkhede commented on KAFKA-1663:
--

[~sriharsha] Awesome. I'll review this tomorrow.

 Controller unable to shutdown after a soft failure
 --

 Key: KAFKA-1663
 URL: https://issues.apache.org/jira/browse/KAFKA-1663
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1663.patch


 As part of testing KAFKA-1558 I came across a case where inducing a soft 
 failure in the current controller elects a new controller, but the old 
 controller doesn't shut down properly.
 Steps to reproduce:
 1) a 5-broker cluster
 2) a high number of topics (I tested with 1000 topics)
 3) on the current controller, run kill -SIGSTOP pid (the broker's process id)
 4) wait a bit longer than the zookeeper timeout (server.properties)
 5) run kill -SIGCONT pid
 6) a new controller will be elected; check the old controller's log:
 [2014-09-30 15:59:53,398] INFO [SessionExpirationListener on 1], ZK expired; 
 shut down all controller components and try to re-elect 
 (kafka.controller.KafkaController$SessionExpirationListener)
 [2014-09-30 15:59:53,400] INFO [delete-topics-thread-1], Shutting down 
 (kafka.controller.TopicDeletionManager$DeleteTopicsThread)
 If it stops there and the broker log keeps printing 
 Cached zkVersion [0] not equal to that in zookeeper, skip updating ISR 
 (kafka.cluster.Partition)
 then the controller shutdown never completes.
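 Steps 3-5 can be scripted; a condensed version (pid and the sleep are 
 placeholders for the controller's process id and a value a bit over the 
 configured zookeeper timeout):
 {code}
 kill -SIGSTOP $pid   # freeze the current controller
 sleep 10             # wait a bit over the zookeeper session timeout
 kill -SIGCONT $pid   # resume; a new controller should have been elected
 {code}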



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1305) Controller can hang on controlled shutdown with auto leader balance enabled

2014-10-03 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158829#comment-14158829
 ] 

Neha Narkhede commented on KAFKA-1305:
--

[~sriharsha] Depends :) Basically, my perspective is that if doing this 
correctly requires delaying 0.8.2 by a month, then let's push it to 0.8.3. If 
there is a small fix for the issue, then let's include it. IIRC, [~junrao] was 
going to take a stab at figuring out whether there is a small fix or not.

 Controller can hang on controlled shutdown with auto leader balance enabled
 ---

 Key: KAFKA-1305
 URL: https://issues.apache.org/jira/browse/KAFKA-1305
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy
Assignee: Sriharsha Chintalapani
Priority: Blocker
 Fix For: 0.8.2, 0.9.0


 This is relatively easy to reproduce, especially when doing a rolling bounce.
 What happened here is as follows:
 1. The previous controller was bounced and broker 265 became the new 
 controller.
 2. I went on to do a controlled shutdown of broker 265 (the new controller).
 3. In the mean time the automatically scheduled preferred replica leader 
 election process started doing its thing and starts sending 
 LeaderAndIsrRequests/UpdateMetadataRequests to itself (and other brokers).  
 (t@113 below).
 4. While that's happening, the controlled shutdown process on 265 succeeds 
 and proceeds to deregister itself from ZooKeeper and shuts down the socket 
 server.
 5. (ReplicaStateMachine actually removes deregistered brokers from the 
 controller channel manager's list of brokers to send requests to.  However, 
 that removal cannot take place (t@18 below) because preferred replica leader 
 election task owns the controller lock.)
 6. So the request thread to broker 265 gets into infinite retries.
 7. The entire broker shutdown process is blocked on controller shutdown for 
 the same reason (it needs to acquire the controller lock).
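 The lock inversion in steps 3-7 has a simple shape; a toy Scala sketch of it 
 (schematic, not Kafka's actual classes):
 {code}
 import java.util.concurrent.locks.ReentrantLock

 val controllerLock = new ReentrantLock()

 // Like the scheduled preferred-replica task (kafka-scheduler-0): it holds the
 // lock while its requests to the deregistered broker retry forever.
 def scheduledElection() {
   controllerLock.lock()
   try { while (true) Thread.sleep(300) } finally { controllerLock.unlock() }
 }

 // Like KafkaController.shutdown (Thread-4): it blocks on the same lock, so the
 // whole broker shutdown hangs behind the spinning task above.
 def shutdown() {
   controllerLock.lock()
   try { /* tear down controller components */ } finally { controllerLock.unlock() }
 }
 {code}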
 Relevant portions from the thread-dump:
 Controller-265-to-broker-265-send-thread - Thread t@113
java.lang.Thread.State: TIMED_WAITING
   at java.lang.Thread.sleep(Native Method)
   at 
 kafka.controller.RequestSendThread$$anonfun$liftedTree1$1$1.apply$mcV$sp(ControllerChannelManager.scala:143)
   at kafka.utils.Utils$.swallow(Utils.scala:167)
   at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
   at kafka.utils.Utils$.swallowWarn(Utils.scala:46)
   at kafka.utils.Logging$class.swallow(Logging.scala:94)
   at kafka.utils.Utils$.swallow(Utils.scala:46)
   at 
 kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:143)
   at 
 kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
   - locked java.lang.Object@6dbf14a7
   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
Locked ownable synchronizers:
   - None
 ...
 Thread-4 - Thread t@17
java.lang.Thread.State: WAITING on 
 java.util.concurrent.locks.ReentrantLock$NonfairSync@4836840 owned by: 
 kafka-scheduler-0
   at sun.misc.Unsafe.park(Native Method)
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
   at 
 java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
   at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
   at kafka.utils.Utils$.inLock(Utils.scala:536)
   at kafka.controller.KafkaController.shutdown(KafkaController.scala:642)
   at 
 kafka.server.KafkaServer$$anonfun$shutdown$9.apply$mcV$sp(KafkaServer.scala:242)
   at kafka.utils.Utils$.swallow(Utils.scala:167)
   at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
   at kafka.utils.Utils$.swallowWarn(Utils.scala:46)
   at kafka.utils.Logging$class.swallow(Logging.scala:94)
   at kafka.utils.Utils$.swallow(Utils.scala:46)
   at kafka.server.KafkaServer.shutdown(KafkaServer.scala:242)
   at 
 kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:46)
   at kafka.Kafka$$anon$1.run(Kafka.scala:42)
 ...
 kafka-scheduler-0 - Thread t@117
java.lang.Thread.State: WAITING on 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1dc407fc
   at sun.misc.Unsafe.park(Native Method)
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
   at 
 

[jira] [Commented] (KAFKA-1650) Mirror Maker could lose data on unclean shutdown.

2014-10-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158842#comment-14158842
 ] 

Guozhang Wang commented on KAFKA-1650:
--

Cool, in this case could you close that ticket?

 Mirror Maker could lose data on unclean shutdown.
 -

 Key: KAFKA-1650
 URL: https://issues.apache.org/jira/browse/KAFKA-1650
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-1650.patch


 Currently, if the mirror maker is shut down uncleanly, the data in the data 
 channel and buffer could potentially be lost. With the new producer's 
 callback, this issue could be solved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1662) gradle release issue permgen space

2014-10-03 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein resolved KAFKA-1662.
--
Resolution: Fixed

yup, works

 gradle release issue permgen space
 --

 Key: KAFKA-1662
 URL: https://issues.apache.org/jira/browse/KAFKA-1662
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
Priority: Blocker
  Labels: newbie
 Fix For: 0.8.2

 Attachments: KAFKA-1662.patch


 Finding issues doing the kafka release with PermGen space:
 ./gradlew releaseTarGzAll
 [ant:scaladoc] java.lang.OutOfMemoryError: PermGen space
 :kafka-0.8.2-ALPHA1-src:core:scaladoc FAILED
 :releaseTarGz_2_10_1 FAILED
 FAILURE: Build failed with an exception.
 * What went wrong:
 Execution failed for task ':core:scaladoc'.
  PermGen space
 * Try:
 Run with --stacktrace option to get the stack trace. Run with --info or 
 --debug option to get more log output.
 BUILD FAILED
 Total time: 5 mins 55.53 secs
 FAILURE: Build failed with an exception.
 * What went wrong:
 PermGen space



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1662) gradle release issue permgen space

2014-10-03 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158872#comment-14158872
 ] 

Joe Stein commented on KAFKA-1662:
--

I committed to trunk and the 0.8.2 branch.  Thanks for the patch!!!



 gradle release issue permgen space
 --

 Key: KAFKA-1662
 URL: https://issues.apache.org/jira/browse/KAFKA-1662
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
Priority: Blocker
  Labels: newbie
 Fix For: 0.8.2

 Attachments: KAFKA-1662.patch


 Finding issues doing the kafka release with PermGen space:
 ./gradlew releaseTarGzAll
 [ant:scaladoc] java.lang.OutOfMemoryError: PermGen space
 :kafka-0.8.2-ALPHA1-src:core:scaladoc FAILED
 :releaseTarGz_2_10_1 FAILED
 FAILURE: Build failed with an exception.
 * What went wrong:
 Execution failed for task ':core:scaladoc'.
  PermGen space
 * Try:
 Run with --stacktrace option to get the stack trace. Run with --info or 
 --debug option to get more log output.
 BUILD FAILED
 Total time: 5 mins 55.53 secs
 FAILURE: Build failed with an exception.
 * What went wrong:
 PermGen space



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1671) upload archives isn't uploading 2.11

2014-10-03 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1671:
-
Fix Version/s: 0.8.2

 upload archives isn't uploading 2.11
 

 Key: KAFKA-1671
 URL: https://issues.apache.org/jira/browse/KAFKA-1671
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Priority: Blocker
 Fix For: 0.8.2


 https://repository.apache.org/content/groups/staging/org/apache/kafka/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1671) upload archives isn't uploading 2.11

2014-10-03 Thread Joe Stein (JIRA)
Joe Stein created KAFKA-1671:


 Summary: upload archives isn't uploading 2.11
 Key: KAFKA-1671
 URL: https://issues.apache.org/jira/browse/KAFKA-1671
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Priority: Blocker


https://repository.apache.org/content/groups/staging/org/apache/kafka/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[UNOFFICIAL] Apache Kafka 0.8.2-ALPHA1

2014-10-03 Thread Joe Stein
Hi, I created a build off the 0.8.2 branch.

This could be used as help for folks testing 0.8.2.0 prior to release.

Source and binary artifacts
http://people.apache.org/~joestein/kafka-0.8.2-ALPHA1-UNOFFICIAL/

Maven repository for testing:
https://repository.apache.org/content/groups/staging/

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.2-ALPHA1</version>
</dependency>

So far 1 new issue: https://issues.apache.org/jira/browse/KAFKA-1671

BLOCKERS =
https://issues.apache.org/jira/issues/?jql=project+%3D+KAFKA+AND+resolution+%3D+Unresolved+AND+priority+%3D+Blocker+ORDER+BY+key+DESC


/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/


[jira] [Updated] (KAFKA-1671) upload archives isn't uploading 2.11

2014-10-03 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1671:
-
Labels: newbie  (was: )

 upload archives isn't uploading 2.11
 

 Key: KAFKA-1671
 URL: https://issues.apache.org/jira/browse/KAFKA-1671
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Priority: Blocker
  Labels: newbie
 Fix For: 0.8.2


 https://repository.apache.org/content/groups/staging/org/apache/kafka/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1671) upload archives isn't uploading for Scala version 2.11

2014-10-03 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1671:
-
Summary: upload archives isn't uploading for Scala version 2.11  (was: 
upload archives isn't uploading 2.11)

 upload archives isn't uploading for Scala version 2.11
 --

 Key: KAFKA-1671
 URL: https://issues.apache.org/jira/browse/KAFKA-1671
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Priority: Blocker
  Labels: newbie
 Fix For: 0.8.2


 https://repository.apache.org/content/groups/staging/org/apache/kafka/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1671) uploaded archives are missing for Scala version 2.11

2014-10-03 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1671:
-
Summary: uploaded archives are missing for Scala version 2.11  (was: upload 
archives isn't uploading for Scala version 2.11)

 uploaded archives are missing for Scala version 2.11
 

 Key: KAFKA-1671
 URL: https://issues.apache.org/jira/browse/KAFKA-1671
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Priority: Blocker
  Labels: newbie
 Fix For: 0.8.2


 https://repository.apache.org/content/groups/staging/org/apache/kafka/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)