[jira] [Commented] (KAFKA-2453) enable new consumer in EndToEndLatency

2015-09-01 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14725704#comment-14725704
 ] 

Ben Stopford commented on KAFKA-2453:
-

[~junrao] changes made and system tests passing

> enable new consumer in EndToEndLatency
> --
>
> Key: KAFKA-2453
> URL: https://issues.apache.org/jira/browse/KAFKA-2453
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jun Rao
>Assignee: Ben Stopford
> Fix For: 0.8.3
>
>
> We need to add an option to enable the new consumer in EndToEndLatency.
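
For context, a minimal sketch of what enabling the new consumer might look like inside the tool. The construction below uses the new consumer's documented config keys, but the option plumbing (and the flag name) is an assumption, not the actual patch:

{code}
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class EndToEndLatencySketch {
    public static void main(String[] args) {
        // Hypothetical flag; the real option name in EndToEndLatency may differ.
        boolean useNewConsumer = args.length > 0 && "--new-consumer".equals(args[0]);
        if (useNewConsumer) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "end-to-end-latency");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
            // ... a poll()-based receive loop would replace the old consumer's iterator here
            consumer.close();
        }
    }
}
{code}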





[jira] [Created] (KAFKA-2498) need build steps/instruction while building apache kafka from source github branch 0.8.2

2015-09-01 Thread naresh gundu (JIRA)
naresh gundu created KAFKA-2498:
---

 Summary: need build steps/instruction while building apache kafka 
from source github branch 0.8.2
 Key: KAFKA-2498
 URL: https://issues.apache.org/jira/browse/KAFKA-2498
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Affects Versions: 0.8.2.0
 Environment: I am working on a rhel7.1 machine
Reporter: naresh gundu
Priority: Critical
 Fix For: 0.8.2.0


I have followed these steps from the GitHub repo https://github.com/apache/kafka:

cd source-code
gradle
./gradlew jar (success)
./gradlew srcJar (success)
./gradlew test (one test case failed)

So, please provide the correct build steps or confirm that the above steps are correct.
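
For reference, those are the standard Gradle steps for this branch. When ./gradlew test reports a failure, re-running with Gradle's --info flag and checking the HTML report it writes under each module's build/reports/tests/ directory will show which test failed and why; a single failing test is often environmental rather than a problem with the build steps themselves.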





Re: How can I start writing a Copycat connector?

2015-09-01 Thread Ewen Cheslack-Postava
I'd highly recommend keeping your connector in a separate repo and
following the process Gwen suggested. That's how I'm working on connectors
(and convertalizers, as Gwen likes to call them).

Keep in mind that copycat-hdfs and copycat-jdbc haven't been updated to the
Kafka Copycat code yet, although that'll happen in the next day or two.
However, they are still a fairly good template for a project (assuming you
like Maven as your build tool).

If you want to make sure things are structured right, you might want to
start by taking the Maven structure of one of those projects, then dump a
renamed copy of the file connector in there. That'll give you something
small that you can get to a buildable & testable state. Then start
modifying it to work with the data source you *actually* want to work with.

-Ewen

On Mon, Aug 31, 2015 at 7:27 PM, Gwen Shapira  wrote:

> Hi James,
>
> This is so exciting :)
>
> While the APIs are still marked as @unstable, we don't have immediate plans
> to modify them - so now is a good time to write a connector. Just accept
> the possibility that a few modifications may be needed before release (I'm
> trying to get Ewen to add mandatory connector versions to the API...)
>
> I typically clone trunk and then run ./gradlew install - which builds and
> places the jars in the local mvn repository.
> Then I can use them as dependencies in other projects.
>
> IMO, starting with an existing example and modifying it is a good approach.
> You can look at a few more examples here:
> https://github.com/confluentinc/copycat-hdfs
> https://github.com/confluentinc/copycat-jdbc
> (Those are still under active development... don't try them in production
> yet)
>
> Ewen will probably chime in with more advice :)
>
> Gwen
>
> On Mon, Aug 31, 2015 at 7:14 PM, James Cheng  wrote:
>
> > Hi,
> >
> > I'd like to write a Copycat connector. I saw on Github that the Copycat
> > data APIs have been committed to trunk. How can I get started? I've never
> > developed against trunk before. Do I git clone from trunk, or is there a
> > pre-built set of jars from trunk that I can use?
> >
> > I see that there is a sample connector in copycat/file. For development
> > purposes, I could just create a peer to that. Obviously I would not check
> > in my connector in that location, but would that be the best way to get
> > started?
> >
> > Thanks,
> > -James
> >
> >
>



-- 
Thanks,
Ewen
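
To make the shape of this concrete, here is a bare connector skeleton. The copycat package locations and method signatures below are assumptions drawn from the copycat/file example on trunk at the time, not a definitive API reference — check the actual interfaces in your checkout:

{code}
// Hedged sketch of a source connector skeleton; the copycat package and
// method signatures are assumptions and may differ from trunk.
import java.util.List;
import java.util.Properties;

import org.apache.kafka.copycat.connector.Task;          // assumed location
import org.apache.kafka.copycat.source.SourceConnector;  // assumed location

public class MySourceConnector extends SourceConnector {
    @Override
    public void start(Properties props) {
        // read connector-level configuration (e.g. connection details)
    }

    @Override
    public Class<? extends Task> taskClass() {
        return null; // would return the SourceTask implementation class
    }

    @Override
    public List<Properties> taskConfigs(int maxTasks) {
        return null; // would split the work into at most maxTasks task configs
    }

    @Override
    public void stop() {
        // release any resources acquired in start()
    }
}
{code}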


[jira] [Created] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2015-09-01 Thread Ben Stopford (JIRA)
Ben Stopford created KAFKA-2499:
---

 Summary: kafka-producer-perf-test should use something more 
realistic than empty byte arrays
 Key: KAFKA-2499
 URL: https://issues.apache.org/jira/browse/KAFKA-2499
 Project: Kafka
  Issue Type: Bug
Reporter: Ben Stopford


ProducerPerformance.scala (There are two of these, one used by the shell script 
and one used by the system tests. Both exhibit this problem)
creates messages from empty byte arrays. 

This is likely to provide unrealistically fast compression and hence 
unrealistically fast results. 

Suggest randomised bytes or more realistic sample messages are used. 

Thanks to Prabhjot Bharaj for reporting this. 
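
A sketch of the suggested change — fill the payload with random bytes so the compressor sees realistic entropy. This is illustrative only, not the actual ProducerPerformance code:

{code}
import java.util.Random;

// Illustrative only: generate a randomised payload instead of the current
// new byte[size], which is all zeros and compresses almost to nothing.
public class PayloadGenerator {
    private static final Random RANDOM = new Random();

    public static byte[] randomPayload(int size) {
        byte[] payload = new byte[size];
        RANDOM.nextBytes(payload);
        return payload;
    }
}
{code}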





Re: ProducerPerformance.scala compressing Array of Zeros

2015-09-01 Thread Ben Stopford
You’re absolutely right. This should be fixed. I’ve made a note of this in 
https://issues.apache.org/jira/browse/KAFKA-2499.

If you’d like to submit a pull request for this that would be awesome :) 

Otherwise I’ll try and fit it into the other performance stuff I’m looking at. 

Ben


> On 31 Aug 2015, at 12:22, Prabhjot Bharaj  wrote:
> 
> Hello Folks,
> 
> I was going through ProducerPerformance.scala.
> 
> Having a close look at line no. 247 in 'def generateProducerData'
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ProducerPerformance.scala,
> the message that the producer sends to kafka is an Array of 0s.
> 
> A basic understanding of compression algorithms suggests that compressing
> repetitive data can give the best compression.
> 
> 
> I have also observed that when compressing an array of zero bytes, the
> throughput increases significantly when I use lz4 or snappy vs
> NoCompressionCodec. But this is largely dependent on the nature of the data.
> 
> 
> Is this what we are trying to test here?
> Or, should ProducerPerformance.scala create an array of random bytes
> (instead of just zeroes)?
> 
> If this can be improved, shall I open an issue to track this?
> 
> Regards,
> Prabhjot
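
The skew is easy to demonstrate with the JDK's Deflater — a quick, self-contained comparison (not Kafka code) of how zeros and random bytes compress:

{code}
import java.util.Random;
import java.util.zip.Deflater;

public class CompressionSkewDemo {
    static int compressedSize(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] out = new byte[input.length * 2];
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(out);
        }
        deflater.end();
        return total;
    }

    public static void main(String[] args) {
        byte[] zeros = new byte[100_000];   // what ProducerPerformance sends today
        byte[] random = new byte[100_000];
        new Random().nextBytes(random);
        System.out.println("zeros  -> " + compressedSize(zeros) + " bytes");  // dramatically smaller
        System.out.println("random -> " + compressedSize(random) + " bytes"); // roughly the input size
    }
}
{code}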



[jira] [Commented] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2015-09-01 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726069#comment-14726069
 ] 

Edward Ribeiro commented on KAFKA-2499:
---

Hi [~benstopford], I have a fair bit of previous experience with synthetic data 
generation. If you are not going to work on this, I can provide some additional 
code if you assign this issue to me. Or I can provide you with some classes for 
generating those random values. Up to you. :)

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>
> ProducerPerformance.scala (There are two of these, one used by the shell 
> script and one used by the system tests. Both exhibit this problem)
> creates messages from empty byte arrays. 
> This is likely to provide unrealistically fast compression and hence 
> unrealistically fast results. 
> Suggest randomised bytes or more realistic sample messages are used. 
> Thanks to Prabhjot Bharaj for reporting this. 





[jira] [Resolved] (KAFKA-2497) Logging

2015-09-01 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-2497.
--
Resolution: Fixed
  Assignee: Ewen Cheslack-Postava

Added, thanks for the contribution [~zheolong]!

> Logging
> ---
>
> Key: KAFKA-2497
> URL: https://issues.apache.org/jira/browse/KAFKA-2497
> Project: Kafka
>  Issue Type: New Feature
>Reporter: qiaojunlong
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
>
> I have created a log collecting tool named ‘logkafka’; please add its link 
> to ecosystem/Logging.
> Github url: https://github.com/Qihoo360/logkafka/





[jira] [Commented] (KAFKA-2498) need build steps/instruction while building apache kafka from source github branch 0.8.2

2015-09-01 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14725882#comment-14725882
 ] 

Edward Ribeiro commented on KAFKA-2498:
---

Hi [~gundun], I am afraid you lost me here. If you are having trouble 
building Apache Kafka from source then I suggest that you post to 
dev@kafka.apache.org so that we can sort out your questions there. Only open a JIRA 
if you have identified a bug and, even so, only after confirming it with the 
community on the mailing list. The Kafka pages below have many resources to 
help you get started:

See the last section at: 
https://cwiki.apache.org/confluence/display/KAFKA/Index  

And this link
https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup

But all in all, don't open a JIRA to ask questions. ;) 

Cheers!



> need build steps/instruction while building apache kafka from source github 
> branch 0.8.2
> 
>
> Key: KAFKA-2498
> URL: https://issues.apache.org/jira/browse/KAFKA-2498
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 0.8.2.0
> Environment: I am working on a rhel7.1 machine
>Reporter: naresh gundu
>Priority: Critical
> Fix For: 0.8.2.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I have followed these steps from the GitHub repo https://github.com/apache/kafka:
> cd source-code
> gradle
> ./gradlew jar (success)
> ./gradlew srcJar (success)
> ./gradlew test (one test case failed)
> So, please provide the correct build steps or confirm that the above steps are correct.





[jira] [Resolved] (KAFKA-2496) New consumer from trunk doesn't work with 0.8.2.1 brokers

2015-09-01 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-2496.
--
Resolution: Invalid
  Assignee: Ewen Cheslack-Postava

[~serejja] 2 points.

1. This isn't expected to work. The necessary broker functionality wasn't in 
place in the 0.8.2 releases. https://issues.apache.org/jira/browse/KAFKA-1326 
tracks the progress on the new consumer implementation, including broker-side 
support; for example, it links to 
https://issues.apache.org/jira/browse/KAFKA-1335, which is a critical piece of 
the consumer coordinator implementation on the broker and has Fix Version 
marked as 0.8.3 (checked into trunk, but not yet released). 
2. The recommended upgrade path is to upgrade brokers first, and then clients. 
This ensures that any features that require broker support (like the consumer 
coordinator for the new consumer) can be rolled out smoothly. It also keeps 
development much simpler -- only the broker needs to handle multiple 
request/response formats.

Note that the new consumer protocol is still undergoing a few changes (e.g., 
https://issues.apache.org/jira/browse/KAFKA-2464) so even if the implementation 
had been there, I'm not sure we'd want to support older versions as it would 
complicate things significantly and none of that code was intended to be 
supported in 0.8.2 releases.

> New consumer from trunk doesn't work with 0.8.2.1 brokers
> -
>
> Key: KAFKA-2496
> URL: https://issues.apache.org/jira/browse/KAFKA-2496
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Serhey Novachenko
>Assignee: Ewen Cheslack-Postava
>
> I have a 0.8.2.1 broker running with a topic created and some messages in it.
> I also have a consumer built from trunk (commit 
> 9c936b186d390f59f1d4ad8cc2995f800036a3d6 to be precise).
> When trying to consume messages from this topic the consumer fails with a 
> following stacktrace:
> {noformat}
> Exception in thread "main" org.apache.kafka.common.KafkaException: Unexpected 
> error in join group response: The server experienced an unexpected error when 
> processing the request
>   at 
> org.apache.kafka.clients.consumer.internals.Coordinator$JoinGroupResponseHandler.handle(Coordinator.java:361)
>   at 
> org.apache.kafka.clients.consumer.internals.Coordinator$JoinGroupResponseHandler.handle(Coordinator.java:309)
>   at 
> org.apache.kafka.clients.consumer.internals.Coordinator$CoordinatorResponseHandler.onSuccess(Coordinator.java:701)
>   at 
> org.apache.kafka.clients.consumer.internals.Coordinator$CoordinatorResponseHandler.onSuccess(Coordinator.java:675)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:163)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:129)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:105)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:293)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:237)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:274)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:182)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:172)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:145)
>   at 
> org.apache.kafka.clients.consumer.internals.Coordinator.reassignPartitions(Coordinator.java:195)
>   at 
> org.apache.kafka.clients.consumer.internals.Coordinator.ensurePartitionAssignment(Coordinator.java:170)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:770)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:731)
>   at Sandbox$.main(Sandbox.scala:38)
>   at Sandbox.main(Sandbox.scala)
> {noformat}
> What actually happens is broker being unable to handle the JoinGroup request 
> from consumer:
> {noformat}
> [2015-09-01 11:48:38,820] ERROR [KafkaApi-0] error when handling request 
> Name: JoinGroup; Version: 0; CorrelationId: 141; ClientId: consumer-1; Body: 
> {group_id=mirror_maker_group,session_timeout=3,topics=[mirror],consumer_id=,partition_assignment_strategy=range}
>  (kafka.server.KafkaApis)
> kafka.common.KafkaException: Unknown api code 11
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:70)
>   at 

[jira] [Commented] (KAFKA-2486) New consumer performance

2015-09-01 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14726063#comment-14726063
 ] 

Jason Gustafson commented on KAFKA-2486:


[~jkreps] I've been running the consumer performance tests locally and the 
performance is similar to that of the old consumer. The initial rebalance seems 
to skew the results for small tests (i.e. with fewer messages), but the results 
appear to converge for larger tests. I also noticed significantly lower cpu 
utilization for the new consumer. It would be nice if others could confirm this.
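
For anyone wanting to reproduce the comparison, a bare-bones throughput measurement against the new consumer looks roughly like this (topic name and message count are placeholders):

{code}
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerThroughput {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "perf-check");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("perf-topic"));   // placeholder topic

        long bytes = 0, messages = 0;
        long start = System.currentTimeMillis();
        while (messages < 1_000_000) {                     // placeholder message count
            ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
            for (ConsumerRecord<byte[], byte[]> record : records) {
                messages++;
                bytes += record.value().length;
            }
        }
        double seconds = (System.currentTimeMillis() - start) / 1000.0;
        System.out.printf("%.2f MB/s over %d messages%n", bytes / seconds / (1024 * 1024), messages);
        consumer.close();
    }
}
{code}

Note that the elapsed time includes the initial rebalance, which is exactly the skew described above for runs with few messages.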

> New consumer performance
> 
>
> Key: KAFKA-2486
> URL: https://issues.apache.org/jira/browse/KAFKA-2486
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
> Fix For: 0.8.3
>
>
> The new consumer was previously getting good performance. However, a 
> recent report on the mailing list indicates it's dropped significantly. After 
> evaluation, even with a local broker it seems to only be reaching 2-10MB/s, 
> compared to 600+MB/s previously. Before release, we should get the 
> performance back on par.
> Some details about where the regression occurred from the mailing list 
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201508.mbox/%3CCAAdKFaE8bPSeWZf%2BF9RuA-xZazRpBrZG6vo454QLVHBAk_VOJg%40mail.gmail.com%3E
>  :
> bq. At 49026f11781181c38e9d5edb634be9d27245c961 (May 14th), we went from good 
> performance -> an error due to broker apparently not accepting the partition 
> assignment strategy. Since this commit seems to add heartbeats and the server 
> side code for partition assignment strategies, I assume we were missing 
> something on the client side and by filling in the server side, things 
> stopped working.
> bq. On either 84636272422b6379d57d4c5ef68b156edc1c67f8 or 
> a5b11886df8c7aad0548efd2c7c3dbc579232f03 (July 17th), I am able to run the 
> perf test again, but it's slow -- ~10MB/s for me vs the 2MB/s Jay was seeing, 
> but that's still far less than the 600MB/s I saw on the earlier commits.
> Ideally we would also at least have a system test in place for the new 
> consumer, even if regressions weren't automatically detected. It would at 
> least allow for manually checking for regressions. This should not be 
> difficult since there are already old consumer performance tests.





[jira] [Updated] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2000:
---
Fix Version/s: 0.8.3

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-2000.patch, KAFKA-2000_2015-05-03_10:39:11.patch
>
>






[jira] [Updated] (KAFKA-2460) Fix capitalization in SSL classes

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2460:
---
Fix Version/s: (was: 0.8.3)

> Fix capitalization in SSL classes
> -
>
> Key: KAFKA-2460
> URL: https://issues.apache.org/jira/browse/KAFKA-2460
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.3
>Reporter: Jay Kreps
>Assignee: sriharsha chintalapani
>Priority: Minor
>
> I notice that all the SSL classes are using the convention SSLChannelBuilder, 
> SSLConfigs, etc. Kafka has always used the convention SslChannelBuilder, 
> SslConfigs, etc. See e.g. KafkaApis, ApiUtils, LeaderAndIsrRequest, 
> ClientIdAndTopic, etc.
> We should fix this.





[jira] [Updated] (KAFKA-2295) Dynamically loaded classes (encoders, etc.) may not be found by Kafka Producer

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2295:
---
Fix Version/s: (was: 0.9.0)
   0.8.3

> Dynamically loaded classes (encoders, etc.) may not be found by Kafka 
> Producer 
> ---
>
> Key: KAFKA-2295
> URL: https://issues.apache.org/jira/browse/KAFKA-2295
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Reporter: Tathagata Das
>Assignee: Manikumar Reddy
> Fix For: 0.8.3
>
> Attachments: KAFKA-2295.patch, KAFKA-2295_2015-07-06_11:32:58.patch, 
> KAFKA-2295_2015-08-20_17:44:56.patch
>
>
> Kafka Producer (via CoreUtils.createObject) effectively uses Class.forName to 
> load encoder classes. Class.forName by design finds classes only in the 
> defining classloader of the enclosing class (which is often the bootstrap 
> class loader). It does not use the current thread context class loader. This 
> can lead to problems in environments where classes are dynamically loaded and 
> therefore may not be present in the bootstrap classloader.
> This leads to ClassNotFound Exceptions in environments like Spark where 
> classes are loaded dynamically using custom classloaders. Issues like this 
> have been reported. E.g. - 
> https://www.mail-archive.com/user@spark.apache.org/msg30951.html
> Other references regarding this issue with Class.forName 
> http://stackoverflow.com/questions/21749741/though-my-class-was-loaded-class-forname-throws-classnotfoundexception
> This is a problem we have faced repeatedly in Apache Spark and we solved it 
> by explicitly specifying the class loader to use. See 
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/util/Utils.scala#L178
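
The standard remedy, for reference, is to resolve the class against the thread context classloader first and fall back to the defining loader:

{code}
// The usual fix: prefer the thread context classloader when resolving
// dynamically configured class names (e.g. encoders), as Spark's Utils does.
public class ClassLoading {
    public static Class<?> loadClass(String className) throws ClassNotFoundException {
        ClassLoader contextLoader = Thread.currentThread().getContextClassLoader();
        if (contextLoader != null) {
            try {
                return Class.forName(className, true, contextLoader);
            } catch (ClassNotFoundException ignored) {
                // fall through to the defining classloader
            }
        }
        return Class.forName(className);
    }
}
{code}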





[jira] [Updated] (KAFKA-2145) An option to add topic owners.

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2145:
---
Fix Version/s: 0.8.3

> An option to add topic owners. 
> ---
>
> Key: KAFKA-2145
> URL: https://issues.apache.org/jira/browse/KAFKA-2145
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.8.3
>
>
> We need to expose a way so users can identify users/groups that share 
> ownership of topic. We discussed adding this as part of 
> https://issues.apache.org/jira/browse/KAFKA-2035 and agreed that it will be 
> simpler to add owner as a logconfig. 
> The owner field can be used for auditing and also by authorization layer to 
> grant access without having to explicitly configure acls. 





[jira] [Commented] (KAFKA-2486) New consumer performance

2015-09-01 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14726085#comment-14726085
 ] 

Ben Stopford commented on KAFKA-2486:
-

[~hachikuji] [~jkreps] I'd add that the new consumer handles smaller messages 
better than the old one, at least with default settings. Larger messages show 
pretty equivalent results. 

> New consumer performance
> 
>
> Key: KAFKA-2486
> URL: https://issues.apache.org/jira/browse/KAFKA-2486
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
> Fix For: 0.8.3
>
>
> The new consumer was previously getting good performance. However, a 
> recent report on the mailing list indicates it's dropped significantly. After 
> evaluation, even with a local broker it seems to only be reaching 2-10MB/s, 
> compared to 600+MB/s previously. Before release, we should get the 
> performance back on par.
> Some details about where the regression occurred from the mailing list 
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201508.mbox/%3CCAAdKFaE8bPSeWZf%2BF9RuA-xZazRpBrZG6vo454QLVHBAk_VOJg%40mail.gmail.com%3E
>  :
> bq. At 49026f11781181c38e9d5edb634be9d27245c961 (May 14th), we went from good 
> performance -> an error due to broker apparently not accepting the partition 
> assignment strategy. Since this commit seems to add heartbeats and the server 
> side code for partition assignment strategies, I assume we were missing 
> something on the client side and by filling in the server side, things 
> stopped working.
> bq. On either 84636272422b6379d57d4c5ef68b156edc1c67f8 or 
> a5b11886df8c7aad0548efd2c7c3dbc579232f03 (July 17th), I am able to run the 
> perf test again, but it's slow -- ~10MB/s for me vs the 2MB/s Jay was seeing, 
> but that's still far less than the 600MB/s I saw on the earlier commits.
> Ideally we would also at least have a system test in place for the new 
> consumer, even if regressions weren't automatically detected. It would at 
> least allow for manually checking for regressions. This should not be 
> difficult since there are already old consumer performance tests.





[jira] [Updated] (KAFKA-2064) Replace ConsumerMetadataRequest and Response with org.apache.kafka.common.requests objects

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2064:

Fix Version/s: (was: 0.8.3)

> Replace ConsumerMetadataRequest and Response with  
> org.apache.kafka.common.requests objects
> ---
>
> Key: KAFKA-2064
> URL: https://issues.apache.org/jira/browse/KAFKA-2064
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>
> Replace ConsumerMetadataRequest and response with  
> org.apache.kafka.common.requests objects





[jira] [Updated] (KAFKA-2066) Replace FetchRequest / FetchResponse with their org.apache.kafka.common.requests equivalents

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2066:

Fix Version/s: (was: 0.8.3)

> Replace FetchRequest / FetchResponse with their 
> org.apache.kafka.common.requests equivalents
> 
>
> Key: KAFKA-2066
> URL: https://issues.apache.org/jira/browse/KAFKA-2066
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>
> Replace FetchRequest / FetchResponse with their 
> org.apache.kafka.common.requests equivalents.
> Note that they can't be completely removed until we deprecate the 
> SimpleConsumer API (and it will require very careful patchwork for the places 
> where core modules actually use the SimpleConsumer API).
> This also requires a solution on how to stream from memory-mapped files 
> (similar to what existing code does with FileMessageSet. 





[jira] [Updated] (KAFKA-1763) validate_index_log in system tests runs remotely but uses local paths

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1763:

Fix Version/s: (was: 0.8.3)

> validate_index_log in system tests runs remotely but uses local paths
> -
>
> Key: KAFKA-1763
> URL: https://issues.apache.org/jira/browse/KAFKA-1763
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.8.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1763.patch
>
>
> validate_index_log is the only validation step in the system tests that needs 
> to execute a Kafka binary and it's currently doing so remotely, like the rest 
> of the test binaries. However, this is probably incorrect since it looks like 
> logs are synced back to the driver host and in other cases are operated on 
> locally. It looks like validate_index_log mixes up local/remote paths, 
> causing an exception in DumpLogSegments:
> {quote}
> 2014-11-10 12:09:57,665 - DEBUG - executing command [ssh vagrant@worker1 -o 
> 'HostName 127.0.0.1' -o 'Port ' -o 'UserKnownHostsFile /dev/null' -o 
> 'StrictHostKeyChecking no' -o 'PasswordAuthentication no' -o 'IdentityFile 
> /Users/ewencp/.vagrant.d/insecure_private_key' -o 'IdentitiesOnly yes' -o 
> 'LogLevel FATAL'  '/opt/kafka/bin/kafka-run-class.sh 
> kafka.tools.DumpLogSegments  --file 
> /Users/ewencp/kafka.git/system_test/replication_testsuite/testcase_0008/logs/broker-3/kafka_server_3_logs/test_1-2/1294.index
>  --verify-index-only 2>&1'] (system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - Dumping 
> /Users/ewencp/kafka.git/system_test/replication_testsuite/testcase_0008/logs/broker-3/kafka_server_3_logs/test_1-2/1294.index
>  (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - Exception in thread "main" 
> java.io.FileNotFoundException: 
> /Users/ewencp/kafka.git/system_test/replication_testsuite/testcase_0008/logs/broker-3/kafka_server_3_logs/test_1-2/1294.log
>  (No such file or directory) (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at java.io.FileInputStream.open(Native 
> Method) (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> java.io.FileInputStream.(FileInputStream.java:146) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> kafka.utils.Utils$.openChannel(Utils.scala:162) (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> kafka.log.FileMessageSet.(FileMessageSet.scala:74) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> kafka.tools.DumpLogSegments$.kafka$tools$DumpLogSegments$$dumpIndex(DumpLogSegments.scala:108)
>  (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> kafka.tools.DumpLogSegments$$anonfun$main$1.apply(DumpLogSegments.scala:80) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> kafka.tools.DumpLogSegments$$anonfun$main$1.apply(DumpLogSegments.scala:73) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>  (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> kafka.tools.DumpLogSegments$.main(DumpLogSegments.scala:73) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> kafka.tools.DumpLogSegments.main(DumpLogSegments.scala) 
> (kafka_system_test_utils)
> {quote}





[jira] [Updated] (KAFKA-2068) Replace OffsetCommit Request/Response with org.apache.kafka.common.requests equivalent

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2068:
---
Fix Version/s: (was: 0.8.3)

> Replace OffsetCommit Request/Response with  org.apache.kafka.common.requests  
> equivalent
> 
>
> Key: KAFKA-2068
> URL: https://issues.apache.org/jira/browse/KAFKA-2068
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Guozhang Wang
> Attachments: KAFKA-2068.patch
>
>
> Replace OffsetCommit Request/Response with  org.apache.kafka.common.requests  
> equivalent





[jira] [Updated] (KAFKA-2067) Add LeaderAndISR request/response to org.apache.kafka.common.requests and replace usage in core module

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2067:
---
Fix Version/s: (was: 0.8.3)

> Add LeaderAndISR request/response to org.apache.kafka.common.requests and 
> replace usage in core module
> --
>
> Key: KAFKA-2067
> URL: https://issues.apache.org/jira/browse/KAFKA-2067
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>
> Add LeaderAndISR request/response to org.apache.kafka.common.requests and 
> replace usage in core module.
> Note that this will require adding a bunch of new objects to o.a.k.common - 
> LeaderAndISR, LeaderISRAndEpoch and possibly others.
> It may be nice to have a scala implicit to translate those objects from their 
> old (core) implementation to the o.a.k.common implementation.





Re: Review Request 36858: Patch for KAFKA-2120

2015-09-01 Thread Mayuresh Gharat


> On Sept. 1, 2015, 1:02 a.m., Jun Rao wrote:
> > clients/src/main/java/org/apache/kafka/clients/NetworkClient.java, lines 
> > 362-367
> > 
> >
> > We need to think through this logic a bit more. The patch here 
> > disconnects the connection from the selector, but doesn't mark the 
> > connectionState as disconnected immediately. Only on the next 
> > networkClient.poll() does the connectionState change to disconnected. The 
> > issue with this approach is that selector.disconnect actually cancels the 
> > socket key. So, at that moment, the connection is no longer usable. 
> > However, the connectionState is still connected. A client can check the 
> > connection as ready and then makes a send call. The send will then get a 
> > CancelledKeyException, which is weird.
> > 
> > We are dealing with a similar issue in KAFKA-2411. Our current thinking 
> > is to have a networkClient.disconnect() that closes the socket key as well 
> > as removes the client from connectionState. This will make the state in 
> > networkClient consistent after each poll() call. If we have that, we can 
> > just call networkClient.disconnect() in handleTimedOutRequests() and handle 
> > those disconnected connections immediately. Then, we don't need to maintain 
> > the clientDisconnects state in Selector.
> 
> Mayuresh Gharat wrote:
> Thanks a lot Jun.
> I was thinking about something similar when I was rebasing the patch 
> yesterday with latest trunk. So the initial code that I wrote, was taking the 
> disconnected nodes and adding them to disconnected list in disconnect() in 
> Selector. But that imposes dpendency that handleTimeout() should be called 
> before handledisconnections(), because every poll clears the disconnected 
> list. I will test the approach you suggested and upload a patch for this.

Thinking more on this, I feel that we can add the explicitly disconnected node 
in the disconnect() function of Selector to the disconnected list in the selector and 
call handleTimeOut() before handleDisconnections() in NetworkClient. That 
will handle all the above requirements in the same poll(). The only issue with 
this is the javadoc for disconnect(): "*The disconnection is 
asynchronous and will not be processed until the next {@link #poll(long) 
poll()} call.*"
The reason we need to add the explicitly disconnected node to the selector's 
disconnected list is that we need all the functionality in 
handleDisconnections() (metadataUpdate, clearing the inflight requests for that 
node) and don't want to duplicate code.

The other way is to maintain a list of disconnected nodes from handleTimeOut. In 
handleDisconnections, iterate over the merged list of disconnected nodes from 
this list and the selector's disconnected list. This takes into consideration 
the approach that you mentioned above. What do you think?


- Mayuresh
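
Roughly, the networkClient.disconnect() being discussed would look like the fragment below — the selector and connection-state method names are assumptions for illustration, not the actual API:

{code}
// Hedged sketch of the proposed NetworkClient.disconnect(); method names
// on the selector and connection state are assumed.
public void disconnect(String nodeId) {
    selector.close(nodeId);                 // cancel the socket key immediately
    connectionStates.disconnected(nodeId);  // mark the state in the same step
    // In-flight requests to nodeId can then be failed and a metadata refresh
    // requested within the same poll(), so ready() never sees a half-closed node.
}
{code}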


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36858/#review97213
---


On Aug. 12, 2015, 5:59 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36858/
> ---
> 
> (Updated Aug. 12, 2015, 5:59 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2120
> https://issues.apache.org/jira/browse/KAFKA-2120
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Solved compile error
> 
> 
> Addressed Jason's comments for Kip-19
> 
> 
> Addressed Jun's comments
> 
> 
> Addressed Jason's comments about the default values for requestTimeout
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
> dc8f0f115bcda893c95d17c0a57be8d14518d034 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> 2c421f42ed3fc5d61cf9c87a7eaa7bb23e26f63b 
>   clients/src/main/java/org/apache/kafka/clients/InFlightRequests.java 
> 15d00d4e484bb5d51a9ae6857ed6e024a2cc1820 
>   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
> 7ab2503794ff3aab39df881bd9fbae6547827d3b 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> 0e51d7bd461d253f4396a5b6ca7cd391658807fa 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> d35b421a515074d964c7fccb73d260b847ea5f00 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> ed99e9bdf7c4ea7a6d4555d4488cf8ed0b80641b 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
>  9517d9d0cd480d5ba1d12f1fde7963e60528d2f8 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 03b8dd23df63a8d8a117f02eabcce4a2d48c44f7 
>   

[GitHub] kafka pull request: Fixing KAFKA-2309

2015-09-01 Thread auradkar
GitHub user auradkar opened a pull request:

https://github.com/apache/kafka/pull/185

Fixing KAFKA-2309

Currently, a LeaderAndIsrRequest does not mark the isrShrinkRate if the 
received ISR is smaller than the existing ISR. This can happen if one of the 
replicas is shut down.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auradkar/kafka K-2309

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/185.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #185






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2309) ISR shrink rate not updated on LeaderAndIsr request with shrunk ISR

2015-09-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14726546#comment-14726546
 ] 

ASF GitHub Bot commented on KAFKA-2309:
---

GitHub user auradkar opened a pull request:

https://github.com/apache/kafka/pull/185

Fixing KAFKA-2309

Currently, a LeaderAndIsrRequest does not mark the isrShrinkRate if the 
received ISR is smaller than the existing ISR. This can happen if one of the 
replicas is shut down.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auradkar/kafka K-2309

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/185.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #185






> ISR shrink rate not updated on LeaderAndIsr request with shrunk ISR
> ---
>
> Key: KAFKA-2309
> URL: https://issues.apache.org/jira/browse/KAFKA-2309
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Aditya Auradkar
>Priority: Minor
>
> If a broker receives a LeaderAndIsr request with a shrunk ISR (say, when a 
> follower shuts down) it needs to mark the isr shrink rate meter when it 
> updates its ISR.
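
The fix amounts to a one-line check when the broker applies the new leader-and-ISR state, sketched here with assumed names:

{code}
// Hedged sketch (names assumed): when applying a LeaderAndIsrRequest, mark
// the shrink meter if the incoming ISR is smaller than the one we hold.
if (receivedIsr.size() < currentIsr.size()) {
    isrShrinkRate.mark();  // the same meter marked on leader-side ISR shrink
}
{code}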





[jira] [Updated] (KAFKA-2136) Client side protocol changes to return quota delays

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2136:
---
Fix Version/s: 0.8.3

> Client side protocol changes to return quota delays
> ---
>
> Key: KAFKA-2136
> URL: https://issues.apache.org/jira/browse/KAFKA-2136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Fix For: 0.8.3
>
> Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
> KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
> KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
> KAFKA-2136_2015-06-09_10:10:25.patch, KAFKA-2136_2015-06-30_19:43:55.patch, 
> KAFKA-2136_2015-07-13_13:34:03.patch, KAFKA-2136_2015-08-18_13:19:57.patch, 
> KAFKA-2136_2015-08-18_13:24:00.patch, KAFKA-2136_2015-08-21_16:29:17.patch, 
> KAFKA-2136_2015-08-24_10:33:10.patch, KAFKA-2136_2015-08-25_11:29:52.patch
>
>
> As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
> the Fetch and the ProduceResponse objects. Add client side metrics on the new 
> producer and consumer to expose the delay time.





[jira] [Updated] (KAFKA-2311) Consumer's ensureNotClosed method not thread safe

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2311:
---
Fix Version/s: 0.8.3

> Consumer's ensureNotClosed method not thread safe
> -
>
> Key: KAFKA-2311
> URL: https://issues.apache.org/jira/browse/KAFKA-2311
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Tim Brooks
>Assignee: Tim Brooks
> Fix For: 0.8.3
>
> Attachments: KAFKA-2311.patch, KAFKA-2311.patch
>
>
> When a call to the consumer is made, the first check is to see that the 
> consumer is not closed. This variable is not volatile so there is no 
> guarantee previous stores will be visible before a read.
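
A self-contained illustration of the visibility hazard and the fix:

{code}
// Without volatile, a thread calling ensureNotClosed() may never observe
// another thread's write to closed (or the stores that preceded it).
// Declaring the flag volatile establishes the needed happens-before edge.
public class ClosableClient {
    private volatile boolean closed = false;  // the fix: volatile

    public void close() {
        closed = true;
    }

    private void ensureNotClosed() {
        if (closed) {
            throw new IllegalStateException("This consumer has already been closed.");
        }
    }

    public void poll() {
        ensureNotClosed();
        // ... do work
    }
}
{code}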





[jira] [Commented] (KAFKA-2496) New consumer from trunk doesn't work with 0.8.2.1 brokers

2015-09-01 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14726448#comment-14726448
 ] 

Jay Kreps commented on KAFKA-2496:
--

The first response is expected. The group management feature didn't exist in 
0.8.2.1.

For the second case that is a bug, the error message should say something like 
"unrecognized version for fetch request".

> New consumer from trunk doesn't work with 0.8.2.1 brokers
> -
>
> Key: KAFKA-2496
> URL: https://issues.apache.org/jira/browse/KAFKA-2496
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Serhey Novachenko
>Assignee: Ewen Cheslack-Postava
>
> I have a 0.8.2.1 broker running with a topic created and some messages in it.
> I also have a consumer built from trunk (commit 
> 9c936b186d390f59f1d4ad8cc2995f800036a3d6 to be precise).
> When trying to consume messages from this topic the consumer fails with a 
> following stacktrace:
> {noformat}
> Exception in thread "main" org.apache.kafka.common.KafkaException: Unexpected 
> error in join group response: The server experienced an unexpected error when 
> processing the request
>   at 
> org.apache.kafka.clients.consumer.internals.Coordinator$JoinGroupResponseHandler.handle(Coordinator.java:361)
>   at 
> org.apache.kafka.clients.consumer.internals.Coordinator$JoinGroupResponseHandler.handle(Coordinator.java:309)
>   at 
> org.apache.kafka.clients.consumer.internals.Coordinator$CoordinatorResponseHandler.onSuccess(Coordinator.java:701)
>   at 
> org.apache.kafka.clients.consumer.internals.Coordinator$CoordinatorResponseHandler.onSuccess(Coordinator.java:675)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:163)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:129)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:105)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:293)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:237)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:274)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:182)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:172)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:145)
>   at 
> org.apache.kafka.clients.consumer.internals.Coordinator.reassignPartitions(Coordinator.java:195)
>   at 
> org.apache.kafka.clients.consumer.internals.Coordinator.ensurePartitionAssignment(Coordinator.java:170)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:770)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:731)
>   at Sandbox$.main(Sandbox.scala:38)
>   at Sandbox.main(Sandbox.scala)
> {noformat}
> What actually happens is broker being unable to handle the JoinGroup request 
> from consumer:
> {noformat}
> [2015-09-01 11:48:38,820] ERROR [KafkaApi-0] error when handling request 
> Name: JoinGroup; Version: 0; CorrelationId: 141; ClientId: consumer-1; Body: 
> {group_id=mirror_maker_group,session_timeout=3,topics=[mirror],consumer_id=,partition_assignment_strategy=range}
>  (kafka.server.KafkaApis)
> kafka.common.KafkaException: Unknown api code 11
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:70)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The consumer code that leads to this is pretty much straightforward:
> {noformat}
> import org.apache.kafka.clients.consumer.KafkaConsumer
> import scala.collection.JavaConverters._
> object Sandbox {
>   def main(args: Array[String]) {
> val consumerProps = new Properties
> consumerProps.put("bootstrap.servers", "localhost:9092")
> consumerProps.put("group.id", "mirror_maker_group")
> consumerProps.put("enable.auto.commit", "false")
> consumerProps.put("session.timeout.ms", "3")
> consumerProps.put("key.deserializer", 
> "org.apache.kafka.common.serialization.ByteArrayDeserializer")
> consumerProps.put("value.deserializer", 
> "org.apache.kafka.common.serialization.ByteArrayDeserializer")
> val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](consumerProps)
> consumer.subscribe(List("mirror").asJava)
> val records = consumer.poll(1000)
> for (record <- records.iterator().asScala) {
>   

[jira] [Updated] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2419:
---
Fix Version/s: 0.8.3

> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.8.3
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Fix For: 0.8.3
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).
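
One plausible shape for this, purely illustrative (not the Metrics API): stamp each sensor on access and purge quiet ones from a periodic task:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: expire sensors idle longer than a configurable period.
public class ExpiringSensorRegistry {
    private static class Entry {
        volatile long lastRecordMs;
        Entry(long now) { this.lastRecordMs = now; }
    }

    private final Map<String, Entry> sensors = new ConcurrentHashMap<>();
    private final long inactivityMs;

    public ExpiringSensorRegistry(long inactivityMs) {
        this.inactivityMs = inactivityMs;
    }

    public void record(String sensorName) {
        long now = System.currentTimeMillis();
        sensors.computeIfAbsent(sensorName, n -> new Entry(now)).lastRecordMs = now;
    }

    // Run periodically, e.g. from a scheduled executor.
    public void removeExpiredSensors() {
        long now = System.currentTimeMillis();
        sensors.entrySet().removeIf(e -> now - e.getValue().lastRecordMs > inactivityMs);
    }
}
{code}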





[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2338:

Fix Version/s: (was: 0.8.3)

> Warn users if they change max.message.bytes that they also need to update 
> broker and consumer settings
> --
>
> Key: KAFKA-2338
> URL: https://issues.apache.org/jira/browse/KAFKA-2338
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Edward Ribeiro
> Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
> KAFKA-2338_2015-07-21_13:21:19.patch, KAFKA-2338_2015-08-24_14:32:38.patch
>
>
> We already have KAFKA-1756 filed to more completely address this issue, but 
> it is waiting for some other major changes to configs to completely protect 
> users from this problem.
> This JIRA should address the low hanging fruit to at least warn users of the 
> potential problems. Currently the only warning is in our documentation.
> 1. Generate a warning in the kafka-topics.sh tool when they change this 
> setting on a topic to be larger than the default. This needs to be very 
> obvious in the output.
> 2. Currently, the broker's replica fetcher isn't logging any useful error 
> messages when replication can't succeed because a message size is too large. 
> Logging an error here would allow users that get into a bad state to find out 
> why it is happening more easily. (Consumers should already be logging a 
> useful error message.)





[jira] [Updated] (KAFKA-2229) Phase 1: Requests and KafkaApis

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2229:

Fix Version/s: (was: 0.8.3)

> Phase 1: Requests and KafkaApis
> ---
>
> Key: KAFKA-2229
> URL: https://issues.apache.org/jira/browse/KAFKA-2229
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Andrii Biletskyi
>Assignee: Andrii Biletskyi
> Attachments: KAFKA-2229.patch, KAFKA-2229_2015-06-30_16:59:17.patch
>
>






[jira] [Updated] (KAFKA-2303) Fix for KAFKA-2235 LogCleaner offset map overflow causes another compaction failures

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2303:

Fix Version/s: (was: 0.8.3)

> Fix for KAFKA-2235 LogCleaner offset map overflow causes another compaction 
> failures
> 
>
> Key: KAFKA-2303
> URL: https://issues.apache.org/jira/browse/KAFKA-2303
> Project: Kafka
>  Issue Type: Bug
>  Components: core, log
>Affects Versions: 0.8.2.1
>Reporter: Alexander Demidko
>Assignee: Guozhang Wang
>
> We have rolled out the patch for KAFKA-2235 to our kafka cluster, and 
> recently instead of 
> {code}
> "kafka.log.LogCleaner - [kafka-log-cleaner-thread-0], Error due to
> java.lang.IllegalArgumentException: requirement failed: Attempt to add a new 
> entry to a full offset map." 
> {code}
> we started to see 
> {code}
> kafka.log.LogCleaner - [kafka-log-cleaner-thread-0], Error due to
> java.lang.IllegalArgumentException: requirement failed: 131390902 messages in 
> segment -cgstate-8/79840768.log but offset map can 
> fit only 80530612. You can increase log.cleaner.dedupe.buffer.size or 
> decrease log.cleaner.threads
> {code}
> So, we had to roll it back to avoid disk depletion although I'm not sure if 
> it needs to be rolled back in trunk. This patch applies more strict checks 
> than were in place before: even if there is only one unique key for a 
> segment, cleanup will fail if this segment is too big. 
> Does it make sense to eliminate a limit for the offset map slots count, for 
> example to use an offset map backed by a memory mapped file?
> The limit of 80530612 slots comes from memory / bytesPerEntry, where memory 
> is Int.MaxValue (we use only one cleaner thread) and bytesPerEntry is 8 + 
> digest hash size. Might be wrong, but it seems that if the overall number of 
> unique keys per partition is more than the 80M slots in an OffsetMap, compaction 
> will always fail and the cleaner thread will die. 
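
For concreteness, the slot arithmetic: with a 16-byte MD5 digest, bytesPerEntry = 8 + 16 = 24, and Int.MaxValue / 24 ≈ 89.5M raw slots; applying the dedupe buffer's default 0.9 load factor gives roughly the 80,530,612 usable slots in the error above (attributing the remaining gap to the 0.9 factor is my reading of the defaults, not something stated in the report).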





[jira] [Updated] (KAFKA-1933) Fine-grained locking in log append

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1933:
---
Fix Version/s: (was: 0.8.3)

> Fine-grained locking in log append
> --
>
> Key: KAFKA-1933
> URL: https://issues.apache.org/jira/browse/KAFKA-1933
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Reporter: Maxim Ivanov
>Assignee: Maxim Ivanov
>Priority: Minor
> Attachments: KAFKA-1933.patch, KAFKA-1933_2015-02-09_12:27:06.patch
>
>
> This patch adds finer-grained locking when appending to the log. It breaks the
> global append lock into 2 sequential phases and 1 parallel phase.
> The basic idea is to allow every thread to "reserve" offsets in non-
> overlapping ranges, then do compression in parallel, and then
> "commit" the write to the log in the same order the offsets were reserved.
> Results on a server with 16 CPU cores available:
> gzip: 564.0 sec -> 45.2 sec (12.4x speedup)
> LZ4: 56.7 sec -> 9.9 sec (5.7x speedup)
> Kafka was configured to run 16 IO threads; data was pushed using 32 netcat 
> instances pushing parallel batches of 200 msgs of 6.2 kb each (3264 MB in 
> total)
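
The reserve/compress-in-parallel/commit-in-order structure, as a standalone sketch (not the actual patch):

{code}
// Standalone sketch of the pattern described above: sequential offset
// reservation, parallel compression, then in-order commit. Not the patch.
public class ReserveCommitLog {
    private long nextOffset = 0;    // phase 1: sequential reservation
    private long commitCursor = 0;  // phase 3: commit strictly in reservation order

    private synchronized long reserve(int messageCount) {
        long base = nextOffset;
        nextOffset += messageCount;
        return base;
    }

    public void append(byte[] rawBatch, int messageCount) throws InterruptedException {
        long base = reserve(messageCount);       // brief, sequential
        byte[] compressed = compress(rawBatch);  // parallel: no lock held here
        synchronized (this) {                    // sequential: wait for our turn
            while (commitCursor != base) {
                wait();
            }
            writeToLog(base, compressed);
            commitCursor = base + messageCount;
            notifyAll();
        }
    }

    private byte[] compress(byte[] raw) { return raw; /* placeholder */ }
    private void writeToLog(long baseOffset, byte[] data) { /* placeholder */ }
}
{code}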





Re: ProducerPerformance.scala compressing Array of Zeros

2015-09-01 Thread Jiangjie Qin
I kind of think letting the ProducerPerformance send uncompressed bytes is
not a bad idea. The reason is that when you send compressed bytes, it
is not easy to determine how much data you are actually sending. Arguably,
sending uncompressed bytes does not factor the compression cost into the
performance benchmark, but I think that is fine because the cost of compression
is typically negligible compared with network IO, and it also depends on what
kind of raw data you are sending. So it does not make too much sense to me
to add the compression step to the ProducerPerformance.

But I absolutely agree that we should document this well so people
understand that:
1. ProducerPerformance should be used for uncompressed messages.
2. Compression is very content dependent, so it is not part of the
ProducerPerformance.

If we really want to test compressed messages as well, maybe we can take a
file path as an argument and use that as the data source.

Thanks,

Jiangjie (Becket) Qin
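
A trivial sketch of the file-as-data-source idea (illustrative only):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Illustrative: load sample message content from a user-supplied file so the
// benchmark compresses realistic data rather than synthetic bytes.
public class FilePayloadSource {
    public static byte[] loadPayload(String path) throws IOException {
        return Files.readAllBytes(Paths.get(path));
    }
}
{code}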

On Tue, Sep 1, 2015 at 12:28 PM, Ben Stopford  wrote:

> You’re absolutely right. This should be fixed. I’ve made a note of this in
> https://issues.apache.org/jira/browse/KAFKA-2499.
>
> If you’d like to submit a pull request for this that would be awesome :)
>
> Otherwise I’ll try and fit it into the other performance stuff I’m looking
> at.
>
> Ben
>
>
> > On 31 Aug 2015, at 12:22, Prabhjot Bharaj  wrote:
> >
> > Hello Folks,
> >
> > I was going through ProducerPerformance.scala.
> >
> > Having a close look at line no. 247 in 'def generateProducerData'
> >
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ProducerPerformance.scala,
> > the message that the producer sends to kafka is an Array of 0s.
> >
> > A basic understanding of compression algorithms suggests that compressing
> > repetitive data can give the best compression.
> >
> >
> > I have also observed that when compressing an array of zero bytes, the
> > throughput increases significantly when I use lz4 or snappy vs
> > NoCompressionCodec. But this is largely dependent on the nature of the data.
> >
> >
> > Is this what we are trying to test here?
> > Or, should ProducerPerformance.scala create an array of random bytes
> > (instead of just zeroes)?
> >
> > If this can be improved, shall I open an issue to track this?
> >
> > Regards,
> > Prabhjot
>
>


[jira] [Updated] (KAFKA-2470) kafka-producer-perf-test.sh can't configure all to request-num-acks

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2470:

Fix Version/s: (was: 0.8.3)

> kafka-producer-perf-test.sh can't configure all to request-num-acks
> ---
>
> Key: KAFKA-2470
> URL: https://issues.apache.org/jira/browse/KAFKA-2470
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, tools
>Affects Versions: 0.8.2.1
> Environment: Linux
>Reporter: Bo Wang
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> For the New Producer API, kafka-producer-perf-test.sh can't set 
> request-num-acks to all:
> bin]# ./kafka-producer-perf-test.sh --topic test --broker-list host:port 
> --messages 100 --message-size 200 --new-producer --sync --batch-size 1
>  --request-num-acks all
> Exception in thread "main" joptsimple.OptionArgumentConversionException: 
> Cannot convert argument 'all' of option ['request-num-acks'] to class 
> java.lang.Integer
>   at 
> joptsimple.ArgumentAcceptingOptionSpec.convert(ArgumentAcceptingOptionSpec.java:237)
>   at joptsimple.OptionSet.valuesOf(OptionSet.java:226)
>   at joptsimple.OptionSet.valueOf(OptionSet.java:170)
>   at 
> kafka.tools.ProducerPerformance$ProducerPerfConfig.(ProducerPerformance.scala:146)
>   at kafka.tools.ProducerPerformance$.main(ProducerPerformance.scala:42)
>   at kafka.tools.ProducerPerformance.main(ProducerPerformance.scala)
> Caused by: joptsimple.internal.ReflectionException: 
> java.lang.NumberFormatException: For input string: "all"
>   at 
> joptsimple.internal.Reflection.reflectionException(Reflection.java:136)
>   at joptsimple.internal.Reflection.invoke(Reflection.java:123)
>   at 
> joptsimple.internal.MethodInvokingValueConverter.convert(MethodInvokingValueConverter.java:48)
>   at 
> joptsimple.ArgumentAcceptingOptionSpec.convert(ArgumentAcceptingOptionSpec.java:234)
>   ... 5 more
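
One possible fix, as a sketch (illustrative, not an actual patch): accept the
option as a String and map "all" to -1, which the new producer treats the same
as acks=all:

{code}
import joptsimple.OptionParser

// Sketch only: parse request-num-acks as a String so "all" is accepted.
val parser = new OptionParser
val requestNumAcksOpt = parser.accepts("request-num-acks", "Acks required (-1, 0, 1 or \"all\").")
  .withRequiredArg()
  .ofType(classOf[String])
  .defaultsTo("-1")

val options = parser.parse("--request-num-acks", "all")
val acksString = options.valueOf(requestNumAcksOpt)
val requestNumAcks = if (acksString == "all") -1 else acksString.toInt  // -1
{code}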



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2146) adding partition did not find the correct startIndex

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2146:

Fix Version/s: (was: 0.8.3)

> adding partition did not find the correct startIndex 
> -
>
> Key: KAFKA-2146
> URL: https://issues.apache.org/jira/browse/KAFKA-2146
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.2.0
>Reporter: chenshangan
>Priority: Minor
> Attachments: KAFKA-2146.patch
>
>
> TopicCommand provides a tool to add partitions to existing topics. It tries 
> to find the startIndex from existing partitions. There's a minor flaw in this 
> process: it uses the first partition fetched from zookeeper as the start 
> partition, and the first replica id in that partition as the startIndex.
> First, the partition fetched first from zookeeper is not necessarily the 
> start partition. Since partition ids begin from zero, we should use the 
> partition with id zero as the start partition.
> Second, broker ids do not necessarily begin from 0, so the startIndex is not 
> necessarily the first replica id in the start partition; a sketch of the fix 
> is below.
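
A sketch of the fix described above (illustrative; the names do not match the
actual TopicCommand code):

{code}
// Anchor on partition 0's assignment and translate its first replica's broker
// id into a position in the broker list, rather than using the broker id
// itself as the startIndex.
def startIndex(existingAssignment: Map[Int, Seq[Int]], brokers: Seq[Int]): Int = {
  val firstReplicaOfPartitionZero = existingAssignment(0).head
  brokers.indexOf(firstReplicaOfPartitionZero)
}
{code}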



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1887) controller error message on shutting the last broker

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1887:
---
Fix Version/s: (was: 0.8.3)

> controller error message on shutting the last broker
> 
>
> Key: KAFKA-1887
> URL: https://issues.apache.org/jira/browse/KAFKA-1887
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Sriharsha Chintalapani
>Priority: Minor
> Attachments: KAFKA-1887.patch, KAFKA-1887_2015-02-21_01:12:25.patch
>
>
> We always see the following error in state-change log on shutting down the 
> last broker.
> [2015-01-20 13:21:04,036] ERROR Controller 0 epoch 3 initiated state change 
> for partition [test,0] from OfflinePartition to OnlinePartition failed 
> (state.change.logger)
> kafka.common.NoReplicaOnlineException: No replica for partition [test,0] is 
> alive. Live brokers are: [Set()], Assigned replicas are: [List(0)]
> at 
> kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:75)
> at 
> kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:357)
> at 
> kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:206)
> at 
> kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120)
> at 
> kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117)
> at 
> kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:446)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:373)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
> at org.I0Itec.zkclient.ZkClient$7.run(ZkClient.java:568)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34492: Patch for KAFKA-2210

2015-09-01 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/
---

(Updated Sept. 1, 2015, 10:36 p.m.)


Review request for kafka.


Bugs: KAFKA-2210
https://issues.apache.org/jira/browse/KAFKA-2210


Repository: kafka


Description (updated)
---

Addressing review comments from Jun.


Adding CREATE check for offset topic only if the topic does not exist already.


Addressing some more comments.


Removing acl.json file


Moving PermissionType to trait instead of enum. Following the convention for 
defining constants.


Adding authorizer.config.path back.


Addressing more comments from Jun.


Addressing more comments.


Now addressing Ismael's comments. Case sensitive checks.


Addressing Jun's comments.


Merge remote-tracking branch 'origin/trunk' into az

Conflicts:
core/src/main/scala/kafka/server/KafkaApis.scala
core/src/main/scala/kafka/server/KafkaServer.scala

Deleting KafkaConfigDefTest


Addressing comments from Ismael.


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az


Consolidating KafkaPrincipal.


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az

Conflicts:

clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java

clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
core/src/main/scala/kafka/server/KafkaApis.scala

Making Acl structure take only one principal, operation and host.


Diffs (updated)
-

  
clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
 35d41685dd178bbdf77b2476e03ad51fc4adcbb6 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
e17e390c507eca0eba28a2763c0e35d66077d1f2 
  
clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java 
b640ea0f4bdb694fc5524ef594aa125cc1ba4cf3 
  
clients/src/test/java/org/apache/kafka/common/security/auth/KafkaPrincipalTest.java
 PRE-CREATION 
  core/src/main/scala/kafka/api/OffsetRequest.scala 
f418868046f7c99aefdccd9956541a0cb72b1500 
  core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
  core/src/main/scala/kafka/common/ErrorMapping.scala 
c75c68589681b2c9d6eba2b440ce5e58cddf6370 
  core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaApis.scala 
a3a8df0545c3f9390e0e04b8d2fab0134f5fd019 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
d547a01cf7098f216a3775e1e1901c5794e1b24c 
  core/src/main/scala/kafka/server/KafkaServer.scala 
17db4fa3c3a146f03a35dd7671ad1b06d122bb59 
  core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/OperationTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
3da666f73227fc7ef7093e3790546344065f6825 

Diff: https://reviews.apache.org/r/34492/diff/


Testing
---


Thanks,

Parth Brahmbhatt



[jira] [Updated] (KAFKA-2454) Dead lock between delete log segment and shutting down.

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2454:
---
Fix Version/s: 0.8.3

> Dead lock between delete log segment and shutting down.
> ---
>
> Key: KAFKA-2454
> URL: https://issues.apache.org/jira/browse/KAFKA-2454
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.8.3
>
>
> When the broker shuts down, it shuts down the scheduler, which grabs the 
> scheduler lock and then waits for all the threads in the scheduler to finish.
> The deadlock happens when a scheduled task tries to delete an old log 
> segment: it schedules a log-delete task, which also needs to acquire the 
> scheduler lock. The shutdown thread then holds the scheduler lock while 
> waiting for the log-deletion thread to finish, but the log-deletion thread 
> blocks waiting for the scheduler lock.
> Related stack trace:
> {noformat}
> "Thread-1" #21 prio=5 os_prio=0 tid=0x7fe7601a7000 nid=0x1a4e waiting on 
> condition [0x7fe7cf698000]
>java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000640d53540> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
> at 
> java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1465)
> at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:94)
> - locked <0x000640b6d480> (a kafka.utils.KafkaScheduler)
> at 
> kafka.server.KafkaServer$$anonfun$shutdown$4.apply$mcV$sp(KafkaServer.scala:352)
> at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:79)
> at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
> at kafka.utils.CoreUtils$.swallowWarn(CoreUtils.scala:51)
> at kafka.utils.Logging$class.swallow(Logging.scala:94)
> at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:51)
> at kafka.server.KafkaServer.shutdown(KafkaServer.scala:352)
> at 
> kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
> at com.linkedin.kafka.KafkaServer.notifyShutdown(KafkaServer.java:99)
> at 
> com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyShutdownListener(LifeCycleMgr.java:123)
> at 
> com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyListeners(LifeCycleMgr.java:102)
> at 
> com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyStop(LifeCycleMgr.java:82)
> - locked <0x000640b77bb0> (a java.util.ArrayDeque)
> at com.linkedin.util.factory.Generator.stop(Generator.java:177)
> - locked <0x000640b77bc8> (a java.lang.Object)
> at 
> com.linkedin.offspring.servlet.OffspringServletRuntime.destroy(OffspringServletRuntime.java:82)
> at 
> com.linkedin.offspring.servlet.OffspringServletContextListener.contextDestroyed(OffspringServletContextListener.java:51)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:813)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:160)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:516)
> at com.linkedin.emweb.WebappContext.doStop(WebappContext.java:35)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
> - locked <0x0006400018b8> (a java.lang.Object)
> at 
> com.linkedin.emweb.ContextBasedHandlerImpl.doStop(ContextBasedHandlerImpl.java:112)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
> - locked <0x000640001900> (a java.lang.Object)
> at 
> com.linkedin.emweb.WebappDeployerImpl.stop(WebappDeployerImpl.java:349)
> at 
> com.linkedin.emweb.WebappDeployerImpl.doStop(WebappDeployerImpl.java:414)
> - locked <0x0006400019c0> (a 
> com.linkedin.emweb.MapBasedHandlerImpl)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
> - locked <0x0006404ee8e8> (a java.lang.Object)
> at 
> org.eclipse.jetty.util.component.AggregateLifeCycle.doStop(AggregateLifeCycle.java:107)
> at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:69)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.doStop(HandlerWrapper.java:108)
> at org.eclipse.jetty.server.Server.doStop(Server.java:338)
> at 
> 
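
The deadlock shape is easy to reproduce in miniature. A sketch (illustration
only, not the actual KafkaScheduler code):

{code}
import java.util.concurrent.{Executors, TimeUnit}

// shutdown() holds the monitor while waiting for running tasks to finish,
// and a running task needs the same monitor to schedule follow-up work.
class LockedScheduler {
  private val executor = Executors.newSingleThreadScheduledExecutor()

  def shutdown(): Unit = this.synchronized {
    executor.shutdown()
    // blocks while holding the lock until all tasks finish
    executor.awaitTermination(Long.MaxValue, TimeUnit.MILLISECONDS)
  }

  def schedule(task: Runnable): Unit = this.synchronized {
    // a running delete-segment task calls back in here and blocks on the lock
    executor.submit(task)
  }
}
{code}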

[jira] [Updated] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2015-09-01 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-2499:

Assignee: Edward Ribeiro

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Edward Ribeiro
>
> ProducerPerformance.scala (there are two of these, one used by the shell 
> script and one used by the system tests; both exhibit this problem)
> creates messages from empty byte arrays. 
> This is likely to provide unrealistically fast compression and hence 
> unrealistically fast results. 
> Suggest randomised bytes or more realistic sample messages are used. 
> Thanks to Prabhjot Bharaj for reporting this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2015-09-01 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726134#comment-14726134
 ] 

Edward Ribeiro commented on KAFKA-2499:
---

okay, thanks! 

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Edward Ribeiro
>  Labels: newbie
>
> ProducerPerformance.scala (there are two of these, one used by the shell 
> script and one used by the system tests; both exhibit this problem)
> creates messages from empty byte arrays. 
> This is likely to provide unrealistically fast compression and hence 
> unrealistically fast results. 
> Suggest randomised bytes or more realistic sample messages are used. 
> Thanks to Prabhjot Bharaj for reporting this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1977) Make logEndOffset available in the Zookeeper consumer

2015-09-01 Thread Will Funnell (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726173#comment-14726173
 ] 

Will Funnell commented on KAFKA-1977:
-

[~hachikuji] Created a ticket here: 
https://issues.apache.org/jira/browse/KAFKA-2500

Does that describe everything as you would expect?

> Make logEndOffset available in the Zookeeper consumer
> -
>
> Key: KAFKA-1977
> URL: https://issues.apache.org/jira/browse/KAFKA-1977
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Will Funnell
>Priority: Minor
> Attachments: 
> Make_logEndOffset_available_in_the_Zookeeper_consumer.patch
>
>
> The requirement is to create a snapshot from the Kafka topic but NOT do 
> continual reads after that point. For example, you might be creating a backup 
> of the data to a file.
> In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps 
> was to expose the high watermark, as maxEndOffset, from the FetchResponse 
> object through to each MessageAndMetadata object in order to be aware when 
> the consumer has reached the end of each partition.
> The submitted patch achieves this by adding the maxEndOffset to the 
> PartitionTopicInfo, which is updated when a new message arrives in the 
> ConsumerFetcherThread and then exposed in MessageAndMetadata.
> See here for discussion:
> http://search-hadoop.com/m/4TaT4TpJy71



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2070) Replace OffsetRequest/response with ListOffsetRequest/response from org.apache.kafka.common.requests

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2070:

Fix Version/s: (was: 0.8.3)

> Replace OffsetRequest/response with ListOffsetRequest/response from 
> org.apache.kafka.common.requests
> 
>
> Key: KAFKA-2070
> URL: https://issues.apache.org/jira/browse/KAFKA-2070
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>
> Replace OffsetRequest/response with ListOffsetRequest/response from 
> org.apache.kafka.common.requests



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2385) zookeeper-shell does not work

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2385:
---
Fix Version/s: (was: 0.8.3)

> zookeeper-shell does not work
> -
>
> Key: KAFKA-2385
> URL: https://issues.apache.org/jira/browse/KAFKA-2385
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Jiangjie Qin
>Assignee: Flavio Junqueira
>
> The zookeeper shell shipped with Kafka does not work because the jline jar is 
> missing.
> [jqin@jqin-ld1 bin]$ ./zookeeper-shell.sh localhost:2181
> Connecting to localhost:2181
> Welcome to ZooKeeper!
> JLine support is disabled
> WATCHER::
> WatchedEvent state:SyncConnected type:None path:null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2421) Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7

2015-09-01 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726385#comment-14726385
 ] 

Jun Rao commented on KAFKA-2421:


The patch no longer applies. Could you rebase?

> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7
> 
>
> Key: KAFKA-2421
> URL: https://issues.apache.org/jira/browse/KAFKA-2421
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: IBM Java 7
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Attachments: KAFKA-2421.patch, KAFKA-2421_2015-08-11_18:54:26.patch
>
>
> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7.
> LZ4 version 1.2 crashes with 64-bit IBM Java 7. This has been fixed in LZ4 
> version 1.3 (https://github.com/jpountz/lz4-java/blob/master/CHANGES.md, 
> https://github.com/jpountz/lz4-java/pull/46).
> The unit test org.apache.kafka.common.record.MemoryRecordsTest crashes when 
> run with 64-bit IBM Java7 with the error:
> {quote}
> 023EB900: Native Method 0263CE10 
> (net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput([BII[BII)I)
> 023EB900: Invalid JNI call of function void 
> ReleasePrimitiveArrayCritical(JNIEnv *env, jarray array, void *carray, jint 
> mode): For array FFF7EAB8 parameter carray passed FFF85998, 
> expected to be FFF7EAC0
> 14:08:42.763 0x23eb900j9mm.632*   ** ASSERTION FAILED ** at 
> StandardAccessBarrier.cpp:335: ((false))
> JVMDUMP039I Processing dump event "traceassert", detail "" at 2015/08/11 
> 15:08:42 - please wait.
> {quote}
> Stack trace from javacore:
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput(Native Method)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNICompressor.compress(LZ4JNICompressor.java:31)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.(LZ4Factory.java:163)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.instance(LZ4Factory.java:46)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.nativeInstance(LZ4Factory.java:76)
> 5XESTACKTRACE   (entered lock: 
> net/jpountz/lz4/LZ4Factory@0xE02F0BE8, entry count: 1)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.fastestInstance(LZ4Factory.java:129)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.(KafkaLZ4BlockOutputStream.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.(KafkaLZ4BlockOutputStream.java:93)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.(KafkaLZ4BlockOutputStream.java:103)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance0(Native Method)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:86)
> 4XESTACKTRACEat 
> sun/reflect/DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:58)
> 4XESTACKTRACEat 
> java/lang/reflect/Constructor.newInstance(Constructor.java:542)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.wrapForOutput(Compressor.java:222)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.(Compressor.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.(Compressor.java:76)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.(MemoryRecords.java:43)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:51)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:55)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> java -version
> java version "1.7.0"
> Java(TM) SE Runtime Environment (build pxa6470_27sr3fp1-20150605_01(SR3 FP1))
> IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 Compressed References 
> 20150407_243189 (JIT enabled, AOT enabled)
> J9VM - R27_Java727_SR3_20150407_1831_B243189
> JIT  - tr.r13.java_20150406_89182
> GC   - R27_Java727_SR3_20150407_1831_B243189_CMPRSS
> J9CL - 20150407_243189)
> JCL - 20150601_01 based on Oracle 7u79-b14



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2015-09-01 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-2499:

Labels: newbie  (was: )

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Edward Ribeiro
>  Labels: newbie
>
> ProducerPerformance.scala (there are two of these, one used by the shell 
> script and one used by the system tests; both exhibit this problem)
> creates messages from empty byte arrays. 
> This is likely to provide unrealistically fast compression and hence 
> unrealistically fast results. 
> Suggest randomised bytes or more realistic sample messages are used. 
> Thanks to Prabhjot Bharaj for reporting this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2500) Make logEndOffset available in the 0.8.3 Consumer

2015-09-01 Thread Will Funnell (JIRA)
Will Funnell created KAFKA-2500:
---

 Summary: Make logEndOffset available in the 0.8.3 Consumer
 Key: KAFKA-2500
 URL: https://issues.apache.org/jira/browse/KAFKA-2500
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 0.8.3
Reporter: Will Funnell
Assignee: Neha Narkhede
Priority: Critical
 Fix For: 0.8.3


Originally created in the old consumer here: 
https://issues.apache.org/jira/browse/KAFKA-1977

The requirement is to create a snapshot from the Kafka topic but NOT do 
continual reads after that point. For example, you might be creating a backup of 
the data to a file.

This ticket covers the addition of the functionality to the new consumer.

In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps 
was to expose the high watermark, as maxEndOffset, from the FetchResponse 
object through to each MessageAndMetadata object in order to be aware when the 
consumer has reached the end of each partition.
The submitted patch achieves this by adding the maxEndOffset to the 
PartitionTopicInfo, which is updated when a new message arrives in the 
ConsumerFetcherThread and then exposed in MessageAndMetadata.
See here for discussion:
http://search-hadoop.com/m/4TaT4TpJy71
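
For illustration, a sketch of the snapshot loop this would enable. The
end-offset accessor is hypothetical (passed in as a function here), standing in
for whatever API this ticket adds:

{code}
import scala.collection.JavaConverters._
import scala.collection.mutable
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

// Consume each partition until its end offset at snapshot time, then stop.
def snapshot(consumer: KafkaConsumer[Array[Byte], Array[Byte]],
             partitions: Set[TopicPartition],
             logEndOffset: TopicPartition => Long)  // hypothetical accessor
            (write: Array[Byte] => Unit): Unit = {
  val done = mutable.Set[TopicPartition]()
  while (done.size < partitions.size) {
    for (record <- consumer.poll(200).asScala) {
      write(record.value)
      val tp = new TopicPartition(record.topic, record.partition)
      if (record.offset >= logEndOffset(tp) - 1)  // reached the high watermark
        done += tp
    }
  }
}
{code}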




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] Client-side Assignment for New Consumer

2015-09-01 Thread Jiangjie Qin
Sorry about the format... It seems the Apache mailing list does not support tables.
Please refer to the following sheet.

https://docs.google.com/spreadsheets/d/1eKlQC1Ty74qVrIDH9vk7S6hEs_eJAyOufFUZaIU-k28/edit?usp=sharing

Thanks,

Jiangjie (Becket) Qin
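
For a rough sense of scale, a back-of-envelope sketch using the numbers quoted
below; the per-topic overhead and the all-to-all forwarding of subscriptions
are assumptions for illustration, not part of the proposal:

{code}
val topics = 547
val avgTopicNameLen = 27
val membersPerGroup = 104                     // one mirror maker instance
val bytesPerTopicEntry = avgTopicNameLen + 8  // assumed per-topic overhead
val subscriptionBytes = topics * bytesPerTopicEntry       // ~19 KB per member
// If every member's subscription were forwarded to every member per rebalance:
val perRebalanceBytes = membersPerGroup.toLong * membersPerGroup * subscriptionBytes
println(f"${perRebalanceBytes / (1024.0 * 1024.0)}%.0f MB per rebalance")  // ~197 MB
{code}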

On Tue, Sep 1, 2015 at 1:01 PM, Aditya Auradkar 
wrote:

> Hey -
>
> So what is remaining here? Becket, did you send a followup email regarding
> the data volume? For some reason, I don't receive emails sent by you.
>
> Aditya
>
> On Mon, Aug 31, 2015 at 9:52 AM, Joel Koshy  wrote:
>
>> Thanks Becket - I think your follow-up email to the thread states our
>> concerns and positions clearly. I would really like to make sure we
>> are very clear and firm on the practical aspects of the changes they
>> are pushing for. We should continue to stay on top of this and make
>> sure they don't push in changes to the consumer in a hurry for 0.8.3
>> without due diligence.
>>
>> Joel
>>
>> On Sun, Aug 30, 2015 at 12:00 AM, Jiangjie Qin  wrote:
>> > Hi Joel,
>> >
>> > I was trying to calculate the number but found it might be better to run
>> > some actual tests, for the following reasons:
>> > 1. cross-colo network bandwidth usage is not clear to me yet.
>> > 2. The coordinators of the four mirror maker consumer groups may or may
>> > not reside on the same brokers. So the impact would be different.
>> > 3. I am not sure about the network bandwidth usage of normal traffic on
>> > the broker, i.e. how much bandwidth is actually available for consumer
>> > rebalance. Currently the outbound traffic on brokers is not balanced.
>> >
>> > The above factors can impact the actual rebalance time significantly. So
>> > I just put a quick reply to the mail thread with some actual numbers we
>> > have and wanted to update the test results later. But I agree we should
>> > do it soon.
>> >
>> > Thanks,
>> >
>> >  Jiangjie (Becket) Qin
>> >
>> > On Sat, Aug 29, 2015 at 9:40 PM, Joel Koshy 
>> wrote:
>> >>
>> >> No - I thought we agreed we would calculate the exact bandwidth (bytes)
>> >> required for a rebalance, its duration and thus whether it makes sense
>> or
>> >> not. I.e that we would come up with exact numbers for the scenarios of
>> >> interest in the email that becket sent
>> >>
>> >>
>> >> On Saturday, August 29, 2015, Kartik Paramasivam
>> >>  wrote:
>> >>>
>> >>> Isn't that what Becket is also saying?
>> >>>
>> >>> On Aug 28, 2015, at 10:12 PM, Joel Koshy  wrote:
>> >>>
>> >>> I thought we were going to run the numbers ourselves and tell them if
>> we
>> >>> are okay with it or not?
>> >>>
>> >>> -- Forwarded message --
>> >>> From: Jiangjie Qin 
>> >>> Date: Friday, August 28, 2015
>> >>> Subject: [DISCUSS] Client-side Assignment for New Consumer
>> >>> To: dev@kafka.apache.org
>> >>>
>> >>>
>> >>> Hi Neha,
>> >>>
>> >>> Following are some numbers we have in the pipeline. It would be very
>> >>> helpful to see how it goes with the proposed protocol. We will try to
>> do
>> >>> some tests with the current patch as well. Please also let us know if
>> you
>> >>> want further information.
>> >>>
>> >>> 32 brokers, 1Gbps NIC
>> >>> 547 topics
>> >>> 27 chars average topic name length
>> >>> 2-3 consumers for each topic
>> >>>
>> >>> Four 26-node mirror maker instances (Four different consumer groups).
>> >>> Each
>> >>> node has 4 consumers. (Each mirror maker instance has 104 consumers)
>> >>> We are actually using selective copy, so we have a big whitelist for
>> >>> each mirror maker, copying about 100 topics (we expect it to grow to a
>> >>> couple of hundred).
>> >>> The mirror makers are co-located with the target cluster, so the
>> >>> consumer traffic goes through the WAN.
>> >>>
>> >>> We have 5 to 6 wildcard consumers consuming from all the topics.
>> >>>
>> >>> The topic creation frequency is not high now, roughly about 1 / day.
>> >>>
>> >>> The scenarios we are interested in are:
>> >>> 1. The time for one round of rebalance.
>> >>> 2. The time for a rolling bounce of mirror maker.
>> >>> 3. For wildcard topics, does metadata sync-up cause problems?
>> >>>
>> >>> Thanks,
>> >>>
>> >>> Jiangjie (Becket) Qin
>> >>>
>> >>>
>> >>> On Fri, Aug 28, 2015 at 1:24 PM, Joel Koshy 
>> wrote:
>> >>>
>> >>> > Another use-case I was thinking of was something like rack-aware
>> >>> > assignment of partitions to clients. This would require some
>> >>> > additional topic metadata to be propagated to and from the
>> coordinator
>> >>> > and you would need some way to resolve conflicts for such
>> strategies.
>> >>> > I think that could be addressed by attaching a generation id to the
>> >>> > metadata and use that (i.e., pick the highest) in order to resolve
>> >>> > conflicts without another round of join-group requests.
>> >>> >
>> >>> > Likewise, without delete/recreate, 

[jira] [Updated] (KAFKA-2375) Implement elasticsearch Copycat sink connector

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2375:

Fix Version/s: (was: 0.8.3)

> Implement elasticsearch Copycat sink connector
> --
>
> Key: KAFKA-2375
> URL: https://issues.apache.org/jira/browse/KAFKA-2375
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>
> Implement an elasticsearch sink connector for Copycat. This should send 
> records to elasticsearch with unique document IDs, given appropriate configs 
> to extract IDs from input records.
> The motivation here is to provide a good end-to-end example with built-in 
> connectors that require minimal dependencies. Because Elasticsearch has a 
> very simple REST API, an elasticsearch connector shouldn't require any extra 
> dependencies, and logs -> Elasticsearch (in combination with KAFKA-2374) 
> provides a compelling out-of-the-box Copycat use case.
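
For illustration, the usual scheme for such unique IDs, as a sketch (the
Copycat connector API itself is not shown):

{code}
// Derive the document ID from the record's coordinates so a redelivered
// record overwrites the same Elasticsearch document instead of creating a
// duplicate, making the sink idempotent.
def documentId(topic: String, partition: Int, offset: Long): String =
  s"$topic+$partition+$offset"
{code}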



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-438) Code cleanup in MessageTest

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-438:
---
Fix Version/s: (was: 0.8.3)

> Code cleanup in MessageTest
> ---
>
> Key: KAFKA-438
> URL: https://issues.apache.org/jira/browse/KAFKA-438
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.7.1
>Reporter: Jim Plush
>Priority: Trivial
> Attachments: KAFKA-438
>
>
> While exploring the unit tests, I found this class had an unused import 
> statement, some ambiguity about which HashMap implementation was being used, 
> and assignments of function return values where not required. 
> Trivial stuff



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1755) Improve error handling in log cleaner

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1755:

Fix Version/s: (was: 0.8.3)

> Improve error handling in log cleaner
> -
>
> Key: KAFKA-1755
> URL: https://issues.apache.org/jira/browse/KAFKA-1755
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Joel Koshy
>  Labels: newbie++
> Attachments: KAFKA-1755.patch, KAFKA-1755_2015-02-23_14:29:54.patch, 
> KAFKA-1755_2015-02-26_10:54:50.patch
>
>
> The log cleaner is a critical process when using compacted topics.
> However, if there is any error in any topic (notably if a key is missing) 
> then the cleaner exits and all other compacted topics will also be adversely 
> affected - i.e., compaction stops across the board.
> This can be improved by just aborting compaction for a topic on any error 
> and keeping the thread from exiting; a sketch is below.
> Another improvement would be to reject messages without keys that are sent to 
> compacted topics, although this is not enough by itself.
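
A sketch of the per-topic isolation described above (illustrative only, not
the actual LogCleaner code):

{code}
import scala.collection.mutable

// Catch failures per partition, mark it uncleanable, and keep the cleaner
// thread alive for the remaining topics.
class CleanerSketch(clean: String => Unit) {
  private val uncleanable = mutable.Set[String]()

  def cleanAll(topicPartitions: Seq[String]): Unit =
    for (tp <- topicPartitions if !uncleanable(tp)) {
      try clean(tp)
      catch {
        case e: Exception =>
          uncleanable += tp
          System.err.println(s"Aborting compaction for $tp, continuing with others: ${e.getMessage}")
      }
    }
}
{code}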



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1566) Kafka environment configuration (kafka-env.sh)

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1566:

Fix Version/s: (was: 0.8.3)

> Kafka environment configuration (kafka-env.sh)
> --
>
> Key: KAFKA-1566
> URL: https://issues.apache.org/jira/browse/KAFKA-1566
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Cosmin Lehene
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Attachments: KAFKA-1566.patch, KAFKA-1566_2015-02-21_21:57:02.patch, 
> KAFKA-1566_2015-03-17_17:01:38.patch, KAFKA-1566_2015-03-17_17:19:23.patch
>
>
> It would be useful (especially for automated deployments) to have an 
> environment configuration file that could be sourced from the launcher files 
> (e.g. kafka-run-server.sh). 
> This is how kafka-env.sh could look: 
> {code}
> export KAFKA_JVM_PERFORMANCE_OPTS="-XX:+UseCompressedOops 
> -XX:+DisableExplicitGC -Djava.awt.headless=true -XX:+UseG1GC 
> -XX:PermSize=48m -XX:MaxPermSize=48m -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35" 
> export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" 
> export KAFKA_LOG4J_OPTS="-Dkafka.logs.dir=/var/log/kafka" 
> {code} 
> kafka-server-start.sh 
> {code} 
> ... 
> source $base_dir/config/kafka-env.sh 
> ... 
> {code} 
> This approach is consistent with Hadoop and HBase. However the idea here is 
> to be able to set these values in a single place without having to edit 
> startup scripts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1927) Replace requests in kafka.api with requests in org.apache.kafka.common.requests

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1927:

Fix Version/s: (was: 0.8.3)

> Replace requests in kafka.api with requests in 
> org.apache.kafka.common.requests
> ---
>
> Key: KAFKA-1927
> URL: https://issues.apache.org/jira/browse/KAFKA-1927
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Assignee: Gwen Shapira
> Attachments: KAFKA-1927.patch
>
>
> The common package introduced a better way of defining requests using a new 
> protocol definition DSL and also includes wrapper objects for these.
> We should switch KafkaApis over to use these request definitions and consider 
> the scala classes deprecated (we probably need to retain some of them for a 
> while for the scala clients).
> This will be a big improvement because
> 1. We will have each request now defined in only one place (Protocol.java)
> 2. We will have built-in support for multi-version requests
> 3. We will have much better error messages (no more cryptic underflow errors)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2500) Make logEndOffset available in the 0.8.3 Consumer

2015-09-01 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726238#comment-14726238
 ] 

Gwen Shapira commented on KAFKA-2500:
-

[~hachikuji], do you want to take a look at this one?

> Make logEndOffset available in the 0.8.3 Consumer
> -
>
> Key: KAFKA-2500
> URL: https://issues.apache.org/jira/browse/KAFKA-2500
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.8.3
>Reporter: Will Funnell
>Assignee: Neha Narkhede
>Priority: Critical
> Fix For: 0.8.3
>
>
> Originally created in the old consumer here: 
> https://issues.apache.org/jira/browse/KAFKA-1977
> The requirement is to create a snapshot from the Kafka topic but NOT do 
> continual reads after that point. For example, you might be creating a backup 
> of the data to a file.
> This ticket covers the addition of the functionality to the new consumer.
> In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps 
> was to expose the high watermark, as maxEndOffset, from the FetchResponse 
> object through to each MessageAndMetadata object in order to be aware when 
> the consumer has reached the end of each partition.
> The submitted patch achieves this by adding the maxEndOffset to the 
> PartitionTopicInfo, which is updated when a new message arrives in the 
> ConsumerFetcherThread and then exposed in MessageAndMetadata.
> See here for discussion:
> http://search-hadoop.com/m/4TaT4TpJy71



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2472) Fix kafka ssl configs to not throw warnings

2015-09-01 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726338#comment-14726338
 ] 

Jun Rao commented on KAFKA-2472:


The original intention is to warn people of mis-spelled config names. Also, we 
are only supposed to log those unused properties as warnings, and we do that 
(config.logUnused()) at the end of the instantiation of the producer. So I am 
not sure why those warnings show up.

> Fix kafka ssl configs to not throw warnings
> ---
>
> Key: KAFKA-2472
> URL: https://issues.apache.org/jira/browse/KAFKA-2472
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
>
> This is a follow-up fix on kafka-1690.
> [2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.password = 
> striker was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2015-08-25 18:20:48,236] WARN The configuration security.protocol = SSL was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2309) ISR shrink rate not updated on LeaderAndIsr request with shrunk ISR

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2309:
---
Fix Version/s: (was: 0.8.3)

> ISR shrink rate not updated on LeaderAndIsr request with shrunk ISR
> ---
>
> Key: KAFKA-2309
> URL: https://issues.apache.org/jira/browse/KAFKA-2309
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Aditya Auradkar
>Priority: Minor
>
> If a broker receives a LeaderAndIsr request with a shrunk ISR (say, when a 
> follower shuts down) it needs to mark the isr shrink rate meter when it 
> updates its ISR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1898) compatibility testing framework

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1898:

Fix Version/s: (was: 0.8.3)

> compatibility testing framework 
> 
>
> Key: KAFKA-1898
> URL: https://issues.apache.org/jira/browse/KAFKA-1898
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
> Attachments: cctk.png
>
>
> There are a few different scenarios where you want/need to know the 
> status/state of a client library that works with Kafka. Client library 
> development is not just about supporting the wire protocol but also the 
> implementations around specific interactions of the API.  The API has 
> blossomed into a robust set of producer, consumer, broker and administrative 
> calls all of which have layers of logic above them.  A Client Library may 
> choose to deviate from the path the project sets out and that is ok. The goal 
> of this ticket is to have a system for Kafka that can help to explain what 
> the library is or isn't doing (regardless of what it claims).
> The idea behind this stems from being able to quickly/easily/succinctly analyze 
> the topic message data. Once you can analyze the topic(s)' messages, you can 
> gather lots of information about what the client library is doing, is not 
> doing and such.  There are a few components to this.
> 1) dataset-generator 
> Test Kafka dataset generation tool. Generates a random text file with given 
> params:
> --filename, -f - output file name.
> --filesize, -s - desired size of output file. The actual size will always be 
> a bit larger (with a maximum size of $filesize + $max.length - 1)
> --min.length, -l - minimum generated entry length.
> --max.length, -h - maximum generated entry length.
> Usage:
> ./gradlew build
> java -jar dataset-generator/build/libs/dataset-generator-*.jar -s 10 -l 2 
> -h 20
> 2) dataset-producer
> Test Kafka dataset producer tool. Able to produce the given dataset to Kafka 
> or Syslog server.  The idea here is you already have lots of data sets that 
> you want to test different things for. You might have different sized 
> messages, formats, etc and want a repeatable benchmark to run and re-run the 
> testing on. You could just have a day's worth of data and just choose to 
> replay it.  The CCTK idea is that you are always starting from CONSUME in 
> your state of library. If your library is only producing then you will fail a 
> bunch of tests and that might be ok for people.
> Accepts following params:
> {code}
> --filename, -f - input file name.
> --kafka, -k - Kafka broker address in host:port format. If this parameter is 
> set, --producer.config and --topic must be set too (otherwise they're 
> ignored).
> --producer.config, -p - Kafka producer properties file location.
> --topic, -t - Kafka topic to produce to.
> --syslog, -s - Syslog server address. Format: protocol://host:port 
> (tcp://0.0.0.0:5140 or udp://0.0.0.0:5141 for example)
> --loop, -l - flag to loop through file until shut off manually. False by 
> default.
> Usage:
> ./gradlew build
> java -jar dataset-producer/build/libs/dataset-producer-*.jar --filename 
> dataset --syslog tcp://0.0.0.0:5140 --loop true
> {code}
> 3) extract
> This step is good so you can save data and compare tests. It could also be 
> removed if folks are just looking for a real live test (and we could support 
> that too).  Here we are taking data out of Kafka and putting it into 
> Cassandra (but other data stores can be used too and we should come up with a 
> way to abstract this out completely so folks could implement whatever they 
> wanted).
> {code}
> package ly.stealth.shaihulud.reader
> import java.util.UUID
> import com.datastax.spark.connector._
> import com.datastax.spark.connector.cql.CassandraConnector
> import consumer.kafka.MessageAndMetadata
> import consumer.kafka.client.KafkaReceiver
> import org.apache.spark._
> import org.apache.spark.storage.StorageLevel
> import org.apache.spark.streaming._
> import org.apache.spark.streaming.dstream.DStream
> object Main extends App with Logging {
>   val parser = new scopt.OptionParser[ReaderConfiguration]("spark-reader") {
> head("Spark Reader for Kafka client applications", "1.0")
> opt[String]("testId") unbounded() optional() action { (x, c) =>
>   c.copy(testId = x)
> } text ("Source topic with initial set of data")
> opt[String]("source") unbounded() required() action { (x, c) =>
>   c.copy(sourceTopic = x)
> } text ("Source topic with initial set of data")
> opt[String]("destination") unbounded() required() action { (x, c) =>
>   c.copy(destinationTopic = x)
> } text ("Destination topic with processed set of data")
> opt[Int]("partitions") unbounded() optional() action { (x, c) =>
>   c.copy(partitions = x)
> } text ("Partitions in 

[jira] [Updated] (KAFKA-2442) QuotasTest should not fail when cpu is busy

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2442:

Fix Version/s: (was: 0.8.3)

> QuotasTest should not fail when cpu is busy
> ---
>
> Key: KAFKA-2442
> URL: https://issues.apache.org/jira/browse/KAFKA-2442
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Aditya Auradkar
>
> We observed that testThrottledProducerConsumer in QuotasTest may fail or 
> succeed randomly. It appears that the test may fail when the system is slow. 
> We can add a timer in the integration test to avoid random failures.
> See an example failure at 
> https://builds.apache.org/job/kafka-trunk-git-pr/166/console for patch 
> https://github.com/apache/kafka/pull/142.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2120) Add a request timeout to NetworkClient

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2120:

Fix Version/s: (was: 0.8.3)

> Add a request timeout to NetworkClient
> --
>
> Key: KAFKA-2120
> URL: https://issues.apache.org/jira/browse/KAFKA-2120
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jiangjie Qin
>Assignee: Mayuresh Gharat
> Attachments: KAFKA-2120.patch, KAFKA-2120_2015-07-27_15:31:19.patch, 
> KAFKA-2120_2015-07-29_15:57:02.patch, KAFKA-2120_2015-08-10_19:55:18.patch, 
> KAFKA-2120_2015-08-12_10:59:09.patch
>
>
> Currently NetworkClient does not have a timeout setting for requests. So if 
> no response is received for a request, for reasons such as the broker being 
> down, the request will never be completed.
> The request timeout will also be used as an implicit timeout for some methods 
> such as KafkaProducer.flush() and KafkaProducer.close().
> KIP-19 is created for this public interface change.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-19+-+Add+a+request+timeout+to+NetworkClient



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2073) Replace TopicMetadata request/response with o.a.k.requests.metadata

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2073:

Fix Version/s: (was: 0.8.3)

> Replace TopicMetadata request/response with o.a.k.requests.metadata
> ---
>
> Key: KAFKA-2073
> URL: https://issues.apache.org/jira/browse/KAFKA-2073
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Andrii Biletskyi
>
> Replace TopicMetadata request/response with o.a.k.requests.metadata.
> Note, this is more challenging than it appears because, while the wire 
> protocol is identical, the objects are completely different.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36858: Patch for KAFKA-2120

2015-09-01 Thread Mayuresh Gharat


> On Sept. 1, 2015, 1:02 a.m., Jun Rao wrote:
> > clients/src/main/java/org/apache/kafka/clients/NetworkClient.java, lines 
> > 362-367
> > 
> >
> > We need to think through this logic a bit more. The patch here 
> > disconnects the connection from the selector, but doesn't mark the 
> > connectionState as disconnected immediately. Only on the next 
> > networkClient.poll() does the connectionState change to disconnected. The 
> > issue with this approach is that selector.disconnect actually cancels the 
> > socket key. So, at that moment, the connection is no longer usable. 
> > However, the connectionState is still connected. A client can check the 
> > connection as ready and then make a send call. The send will then get a 
> > CancelledKeyException, which is weird.
> > 
> > We are dealing with a similar issue in KAFKA-2411. Our current thinking 
> > is to have a networkClient.disconnect() that closes the socket key as well 
> > as removes the client from connectionState. This will make the state in 
> > networkClient consistent after each poll() call. If we have that, we can 
> > just call networkClient.disconnect() in handleTimedOutRequests() and handle 
> > those disconnected connections immediately. Then, we don't need to maintain 
> > the clientDisconnects state in Selector.

Thanks a lot Jun.
I was thinking about something similar when I was rebasing the patch yesterday 
against the latest trunk. The initial code that I wrote took the disconnected 
nodes and added them to the disconnected list in disconnect() in the Selector. 
But that imposes a dependency that handleTimeout() be called before 
handleDisconnections(), because every poll clears the disconnected list. 
I will test the approach you suggested and upload a patch for this; a sketch 
of the idea is below.
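
A sketch of that approach (all names illustrative, not the actual
NetworkClient/Selector code):

{code}
trait SelectorLike { def close(id: String): Unit }
trait ConnectionStates { def disconnected(id: String): Unit }

class NetworkClientSketch(selector: SelectorLike, states: ConnectionStates) {
  // Close the channel and update the connection state in one step, so the
  // state is consistent after every poll() and a timed-out request can be
  // failed immediately instead of on the next poll().
  def disconnect(nodeId: String): Unit = {
    selector.close(nodeId)       // cancels the selection key, closes the channel
    states.disconnected(nodeId)  // no longer reported as ready
  }
}
{code}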


- Mayuresh


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36858/#review97213
---


On Aug. 12, 2015, 5:59 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36858/
> ---
> 
> (Updated Aug. 12, 2015, 5:59 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2120
> https://issues.apache.org/jira/browse/KAFKA-2120
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Solved compile error
> 
> 
> Addressed Jason's comments for Kip-19
> 
> 
> Addressed Jun's comments
> 
> 
> Addressed Jason's comments about the default values for requestTimeout
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
> dc8f0f115bcda893c95d17c0a57be8d14518d034 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> 2c421f42ed3fc5d61cf9c87a7eaa7bb23e26f63b 
>   clients/src/main/java/org/apache/kafka/clients/InFlightRequests.java 
> 15d00d4e484bb5d51a9ae6857ed6e024a2cc1820 
>   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
> 7ab2503794ff3aab39df881bd9fbae6547827d3b 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> 0e51d7bd461d253f4396a5b6ca7cd391658807fa 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> d35b421a515074d964c7fccb73d260b847ea5f00 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> ed99e9bdf7c4ea7a6d4555d4488cf8ed0b80641b 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
>  9517d9d0cd480d5ba1d12f1fde7963e60528d2f8 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 03b8dd23df63a8d8a117f02eabcce4a2d48c44f7 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> aa264202f2724907924985a5ecbe74afc4c6c04b 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/BufferPool.java
>  4cb1e50d6c4ed55241aeaef1d3af09def5274103 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  a152bd7697dca55609a9ec4cfe0a82c10595fbc3 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
>  06182db1c3a5da85648199b4c0c98b80ea7c6c0c 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> 0baf16e55046a2f49f6431e01d52c323c95eddf0 
>   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
> ce20111ac434eb8c74585e9c63757bb9d60a832f 
>   clients/src/test/java/org/apache/kafka/clients/MockClient.java 
> 9133d85342b11ba2c9888d4d2804d181831e7a8e 
>   clients/src/test/java/org/apache/kafka/clients/NetworkClientTest.java 
> 43238ceaad0322e39802b615bb805b895336a009 
>   
> 

[jira] [Updated] (KAFKA-2072) Add StopReplica request/response to o.a.k.common.requests and replace the usage in core module

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2072:
---
Fix Version/s: (was: 0.8.3)

> Add StopReplica request/response to o.a.k.common.requests and replace the 
> usage in core module
> --
>
> Key: KAFKA-2072
> URL: https://issues.apache.org/jira/browse/KAFKA-2072
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: David Jacot
>
> Add StopReplica request/response to o.a.k.common.requests and replace the 
> usage in core module



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2074) Add UpdateMetadata request/response to o.a.k.common.requests and replace its use in core module

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2074:
---
Fix Version/s: (was: 0.8.3)

Actually, KAFKA-2411 won't fix this completely since KafkaApis still uses the 
scala object. So, reopening it.

> Add UpdateMetadata request/response to o.a.k.common.requests and replace its 
> use in core module
> ---
>
> Key: KAFKA-2074
> URL: https://issues.apache.org/jira/browse/KAFKA-2074
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>
> Add UpdateMetadata request/response to o.a.k.common.requests and replace its 
> use in core module



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1763) validate_index_log in system tests runs remotely but uses local paths

2015-09-01 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726275#comment-14726275
 ] 

Ewen Cheslack-Postava commented on KAFKA-1763:
--

This is for the old system tests. I think there are at least a couple of other 
bugs that were specifically for the old system tests, which are going to be 
replaced anyway. Anybody mind if I just close those? We haven't fully replaced & 
deleted the old ones yet, so I don't want to clean these up prematurely.

> validate_index_log in system tests runs remotely but uses local paths
> -
>
> Key: KAFKA-1763
> URL: https://issues.apache.org/jira/browse/KAFKA-1763
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.8.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1763.patch
>
>
> validate_index_log is the only validation step in the system tests that needs 
> to execute a Kafka binary and it's currently doing so remotely, like the rest 
> of the test binaries. However, this is probably incorrect since it looks like 
> logs are synced back to the driver host and in other cases are operated on 
> locally. It looks like validate_index_log mixes up local/remote paths, 
> causing an exception in DumpLogSegments:
> {quote}
> 2014-11-10 12:09:57,665 - DEBUG - executing command [ssh vagrant@worker1 -o 
> 'HostName 127.0.0.1' -o 'Port ' -o 'UserKnownHostsFile /dev/null' -o 
> 'StrictHostKeyChecking no' -o 'PasswordAuthentication no' -o 'IdentityFile 
> /Users/ewencp/.vagrant.d/insecure_private_key' -o 'IdentitiesOnly yes' -o 
> 'LogLevel FATAL'  '/opt/kafka/bin/kafka-run-class.sh 
> kafka.tools.DumpLogSegments  --file 
> /Users/ewencp/kafka.git/system_test/replication_testsuite/testcase_0008/logs/broker-3/kafka_server_3_logs/test_1-2/1294.index
>  --verify-index-only 2>&1'] (system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - Dumping 
> /Users/ewencp/kafka.git/system_test/replication_testsuite/testcase_0008/logs/broker-3/kafka_server_3_logs/test_1-2/1294.index
>  (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - Exception in thread "main" 
> java.io.FileNotFoundException: 
> /Users/ewencp/kafka.git/system_test/replication_testsuite/testcase_0008/logs/broker-3/kafka_server_3_logs/test_1-2/1294.log
>  (No such file or directory) (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at java.io.FileInputStream.open(Native 
> Method) (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> java.io.FileInputStream.(FileInputStream.java:146) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> kafka.utils.Utils$.openChannel(Utils.scala:162) (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> kafka.log.FileMessageSet.(FileMessageSet.scala:74) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> kafka.tools.DumpLogSegments$.kafka$tools$DumpLogSegments$$dumpIndex(DumpLogSegments.scala:108)
>  (kafka_system_test_utils)
> 2014-11-10 12:09:58,673 - DEBUG - at 
> kafka.tools.DumpLogSegments$$anonfun$main$1.apply(DumpLogSegments.scala:80) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> kafka.tools.DumpLogSegments$$anonfun$main$1.apply(DumpLogSegments.scala:73) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>  (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> kafka.tools.DumpLogSegments$.main(DumpLogSegments.scala:73) 
> (kafka_system_test_utils)
> 2014-11-10 12:09:58,674 - DEBUG - at 
> kafka.tools.DumpLogSegments.main(DumpLogSegments.scala) 
> (kafka_system_test_utils)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2120) Add a request timeout to NetworkClient

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2120:

Fix Version/s: 0.8.3

> Add a request timeout to NetworkClient
> --
>
> Key: KAFKA-2120
> URL: https://issues.apache.org/jira/browse/KAFKA-2120
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jiangjie Qin
>Assignee: Mayuresh Gharat
> Fix For: 0.8.3
>
> Attachments: KAFKA-2120.patch, KAFKA-2120_2015-07-27_15:31:19.patch, 
> KAFKA-2120_2015-07-29_15:57:02.patch, KAFKA-2120_2015-08-10_19:55:18.patch, 
> KAFKA-2120_2015-08-12_10:59:09.patch
>
>
> Currently NetworkClient does not have a timeout setting for requests, so if 
> no response is received for a request (for example because the broker is 
> down), the request will never be completed.
> The request timeout will also be used as an implicit timeout for methods such 
> as KafkaProducer.flush() and KafkaProducer.close().
> KIP-19 was created for this public interface change:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-19+-+Add+a+request+timeout+to+NetworkClient
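
For illustration, here is how the timeout is expected to surface to clients once 
KIP-19 lands; the config name follows the KIP, and the value is just an 
illustrative placeholder:

{noformat}
# hypothetical client configuration once KIP-19 is implemented
request.timeout.ms=30000
{noformat}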



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-09-01 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2210:

Attachment: KAFKA-2210_2015-09-01_15:36:02.patch

> KafkaAuthorizer: Add all public entities, config changes and changes to 
> KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
> --
>
> Key: KAFKA-2210
> URL: https://issues.apache.org/jira/browse/KAFKA-2210
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.8.3
>
> Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
> KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
> KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
> KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
> KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch, 
> KAFKA-2210_2015-08-25_17:59:22.patch, KAFKA-2210_2015-08-26_14:29:02.patch, 
> KAFKA-2210_2015-09-01_15:36:02.patch
>
>
> This is the first subtask for Kafka-1688. As Part of this jira we intend to 
> agree on all the public entities, configs and changes to existing kafka 
> classes to allow pluggable authorizer implementation.
> Please see KIP-11 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
>  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2015-09-01 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726096#comment-14726096
 ] 

Ben Stopford commented on KAFKA-2499:
-

Hi [~eribeiro]

That's fantastic! 

There are guidelines for contributing at these links. 

http://kafka.apache.org/contributing.html
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes

Let me know if you need any help. 

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Edward Ribeiro
>
> ProducerPerformance.scala creates messages from empty byte arrays. (There are 
> two of these, one used by the shell script and one used by the system tests; 
> both exhibit this problem.)
> This is likely to provide unrealistically fast compression and hence 
> unrealistically fast results. 
> Suggest randomised bytes or more realistic sample messages are used. 
> Thanks to Prabhjot Bharaj for reporting this. 
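
To make the suggestion concrete, a minimal sketch of the fix, assuming the 
payload is simply filled with random bytes before being handed to the producer 
(the object and method names here are illustrative, not the actual 
ProducerPerformance code):

{noformat}
import java.util.Random

object PayloadGen {
  // Build a randomised payload of the configured size so that compression
  // ratios resemble real data rather than all-zero byte arrays.
  def randomPayload(size: Int, random: Random = new Random()): Array[Byte] = {
    val payload = new Array[Byte](size)
    random.nextBytes(payload)
    payload
  }
}
{noformat}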



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1967) Support more flexible serialization in Log4jAppender

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1967:

Fix Version/s: (was: 0.8.3)

> Support more flexible serialization in Log4jAppender
> 
>
> Key: KAFKA-1967
> URL: https://issues.apache.org/jira/browse/KAFKA-1967
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jesse Yates
>Priority: Minor
> Attachments: kafka-1967-trunk.patch
>
>
> It would be nice to allow subclasses of the standard KafkaLog4jAppender to 
> be able to serialize the LoggingEvent however they choose, rather than always 
> having to write out a string.
> A possible use case - the one I'm interested in - allows implementors to 
> convert the event to any sort of bytes. This means downstream consumers don't 
> lose data because of the logging format, but instead can get the entire event 
> to do with as they please.
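
As a rough sketch of the kind of hook being proposed, assuming a hypothetical 
overridable byte-serialization method on the appender (the class and method 
names below are illustrative only, not the actual patch):

{noformat}
import org.apache.log4j.AppenderSkeleton
import org.apache.log4j.spi.LoggingEvent

// Hypothetical appender with a byte-level serialization hook. Subclasses
// override toBytes() to emit any encoding they like (Avro, protobuf, ...)
// instead of the rendered string.
abstract class ByteSerializingAppender extends AppenderSkeleton {

  // Default preserves today's behaviour: format the event and encode as UTF-8.
  protected def toBytes(event: LoggingEvent): Array[Byte] =
    layout.format(event).getBytes("UTF-8")

  override def append(event: LoggingEvent): Unit = {
    val payload = toBytes(event)
    // hand payload to the Kafka producer here
  }
}
{noformat}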



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1853) Unsuccessful suffix rename of expired LogSegment can leak open files and also leave the LogSegment in an invalid state

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1853:

Fix Version/s: (was: 0.8.3)

> Unsuccessful suffix rename of expired LogSegment can leak open files and also 
> leave the LogSegment in an invalid state
> --
>
> Key: KAFKA-1853
> URL: https://issues.apache.org/jira/browse/KAFKA-1853
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: jaikiran pai
>Assignee: jaikiran pai
> Attachments: KAFKA-1853_2015-01-20_22:04:29.patch, 
> KAFKA-1853_2015-01-24_10:48:08.patch, KAFKA-1853_2015-01-24_11:21:07.patch, 
> KAFKA-1853_2015-01-26_19:41:33.patch
>
>
> As noted in this discussion in the user mailing list 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201501.mbox/%3C54AE3661.8080007%40gmail.com%3E
>  an unsuccessful attempt at renaming the underlying files of a LogSegment can 
> lead to file leaks and also leave the LogSegment in an invalid state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2074) Add UpdateMetadata request/response to o.a.k.common.requests and replace its use in core module

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2074:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Closing this jira since it's now a duplicate of KAFKA-2411.

> Add UpdateMetadata request/response to o.a.k.common.requests and replace its 
> use in core module
> ---
>
> Key: KAFKA-2074
> URL: https://issues.apache.org/jira/browse/KAFKA-2074
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
> Fix For: 0.8.3
>
>
> Add UpdateMetadata request/response to o.a.k.common.requests and replace its 
> use in core module



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2412) Documentation bug: Add information for key.serializer and value.serializer to New Producer Config sections

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2412:
---
Fix Version/s: 0.8.3

Those docs are auto-generated. Perhaps we should fix ConfigDef.toHtmlTable() to 
add a "required" column.

> Documentation bug: Add information for key.serializer and value.serializer to 
> New Producer Config sections
> --
>
> Key: KAFKA-2412
> URL: https://issues.apache.org/jira/browse/KAFKA-2412
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jeremy Fields
>Assignee: Grayson Chao
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-2412-r1.diff, KAFKA-2412.diff
>
>
> As key.serializer and value.serializer are required options when using the 
> new producer, they should be mentioned in the documentation (here and in svn: 
> http://kafka.apache.org/documentation.html#newproducerconfigs ).
> Appropriate values for these options exist in the javadoc and producer.java 
> examples; however, not everyone is reading those, as is the case for anyone 
> setting up a producer.config file for MirrorMaker.
> A sensible default should be suggested, such as
> org.apache.kafka.common.serialization.StringSerializer
> Or at least a mention of the key.serializer and value.serializer options, 
> along with a link to the javadoc.
> Thanks
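
For reference, a minimal producer.config fragment of the kind being asked for; 
the serializer class comes from the report above, while the broker address is 
just a placeholder:

{noformat}
# hypothetical producer.config for MirrorMaker with the new producer
bootstrap.servers=localhost:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
{noformat}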



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2378) Add Copycat embedded API

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2378:

Assignee: Ewen Cheslack-Postava

> Add Copycat embedded API
> 
>
> Key: KAFKA-2378
> URL: https://issues.apache.org/jira/browse/KAFKA-2378
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
>
> Much of the required Copycat API will exist from previous patches since any 
> main() method will need to do very similar operations. However, integrating 
> with any other Java code may require additional API support.
> For example, one of the use cases when integrating with any stream processing 
> application will require knowing which topics will be written to. We will 
> need to add APIs to expose the topics a registered connector is writing to so 
> they can be consumed by a stream processing task



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2254) The shell script should be optimized, even kafka-run-class.sh has a syntax error.

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2254:

Fix Version/s: (was: 0.8.3)

> The shell script should be optimized, even kafka-run-class.sh has a syntax 
> error.
> --
>
> Key: KAFKA-2254
> URL: https://issues.apache.org/jira/browse/KAFKA-2254
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2.1
> Environment: linux
>Reporter: Bo Wang
>  Labels: client-script, kafka-run-class.sh, shell-script
> Attachments: kafka-shell-script.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  kafka-run-class.sh line 128 has a syntax error (missing a space):
> 127-loggc)
> 128 if [ -z "$KAFKA_GC_LOG_OPTS"] ; then
> 129    GC_LOG_ENABLED="true"
> 130 fi
> Running the shell scripts through ShellCheck also reports some errors, 
> warnings and notes:
> https://github.com/koalaman/shellcheck/wiki/SC2068
> https://github.com/koalaman/shellcheck/wiki/Sc2046
> https://github.com/koalaman/shellcheck/wiki/Sc2086
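
For reference, a sketch of the fix, assuming the only problem on line 128 is 
the missing space before the closing bracket of the test expression:

{noformat}
# fixed: a space is required before ']' in a test expression
if [ -z "$KAFKA_GC_LOG_OPTS" ] ; then
    GC_LOG_ENABLED="true"
fi
{noformat}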



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2075) Validate that all kafka.api requests have been removed and clean up compatibility code

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2075:

Fix Version/s: (was: 0.8.3)

> Validate that all kafka.api requests have been removed and clean up 
> compatibility code
> -
>
> Key: KAFKA-2075
> URL: https://issues.apache.org/jira/browse/KAFKA-2075
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>
> Once we finish all the other subtasks, the old kafka.api requests/responses 
> shouldn't be used anywhere.
> We need to validate that the classes are indeed gone, remove the unit tests 
> for serializing/deserializing them, and clean up the compatibility code added 
> in KAFKA-2044.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2449) Update mirror maker (MirrorMaker) docs

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2449:

Assignee: Geoff Anderson

> Update mirror maker (MirrorMaker) docs
> --
>
> Key: KAFKA-2449
> URL: https://issues.apache.org/jira/browse/KAFKA-2449
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.8.3
>
>
> The Kafka docs on Mirror Maker state that it mirrors from N source clusters 
> to 1 destination, but this is no longer the case. The docs should be updated 
> to reflect that it mirrors from a single source cluster to a single target 
> cluster.
> Docs I've found where this should be updated:
> http://kafka.apache.org/documentation.html#basic_ops_mirror_maker
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+mirroring+(MirrorMaker)
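
For illustration, a typical invocation under the current single-source model 
(the property file names and topic pattern are placeholders; the flags shown 
are the standard 0.8.x MirrorMaker options):

{noformat}
bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config source-cluster-consumer.properties \
  --producer.config target-cluster-producer.properties \
  --whitelist=".*"
{noformat}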



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2417) Ducktape tests for SSL/TLS

2015-09-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2417:

Assignee: Geoff Anderson

> Ducktape tests for SSL/TLS
> --
>
> Key: KAFKA-2417
> URL: https://issues.apache.org/jira/browse/KAFKA-2417
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Geoff Anderson
> Fix For: 0.8.3
>
>
> The tests should be complementary to the unit/integration tests written as 
> part of KAFKA-1685.
> Things to consider:
> * Upgrade/downgrade to turning on/off SSL
> * Failure testing
> * Expired/revoked certificates
> * Renegotiation
> Some changes to ducktape may be required for upgrade scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


0.8.3 JIRA

2015-09-01 Thread Gwen Shapira
Hi Team,

I just went over the 0.8.3 JIRAs that had no one assigned to them and
either moved them out of the release (if they were important, someone would
be working on them) or assigned them to the guy who opened the jira (you
reported it, you fix it).

If I accidentally assigned a JIRA that you wanted to work on to someone
else, feel free to correct me.

Gwen


[GitHub] kafka pull request: KAFKA-2332; Add quota metrics to old producer ...

2015-09-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/176


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2332) Add quota metrics to old producer and consumer

2015-09-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726262#comment-14726262
 ] 

ASF GitHub Bot commented on KAFKA-2332:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/176


> Add quota metrics to old producer and consumer
> --
>
> Key: KAFKA-2332
> URL: https://issues.apache.org/jira/browse/KAFKA-2332
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Dong Lin
>  Labels: quotas
> Fix For: 0.8.3
>
> Attachments: KAFKA-2332.patch, KAFKA-2332.patch, 
> KAFKA-2332_2015-08-03_18:22:53.patch
>
>
> Quota metrics have only been added to the new producer and consumer. It may 
> be beneficial to add these to the existing consumer and old producer also for 
> clients using the older versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-2074) Add UpdateMetadata request/response to o.a.k.common.requests and replace its use in core module

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao reopened KAFKA-2074:


> Add UpdateMetadata request/response to o.a.k.common.requests and replace its 
> use in core module
> ---
>
> Key: KAFKA-2074
> URL: https://issues.apache.org/jira/browse/KAFKA-2074
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>
> Add UpdateMetadata request/response to o.a.k.common.requests and replace its 
> use in core module



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2332) Add quota metrics to old producer and consumer

2015-09-01 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-2332:
--
   Resolution: Fixed
Fix Version/s: 0.8.3
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 176
[https://github.com/apache/kafka/pull/176]

> Add quota metrics to old producer and consumer
> --
>
> Key: KAFKA-2332
> URL: https://issues.apache.org/jira/browse/KAFKA-2332
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Dong Lin
>  Labels: quotas
> Fix For: 0.8.3
>
> Attachments: KAFKA-2332.patch, KAFKA-2332.patch, 
> KAFKA-2332_2015-08-03_18:22:53.patch
>
>
> Quota metrics have only been added to the new producer and consumer. It may 
> be beneficial to add these to the existing consumer and old producer also for 
> clients using the older versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Kafka-trunk #610

2015-09-01 Thread Apache Jenkins Server
See 

Changes:

[jjkoshy] KAFKA-2332; Add quota metrics to old producer and consumer

--
[...truncated 900 lines...]
kafka.integration.TopicMetadataTest > testTopicMetadataRequest PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.api.ConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.api.ProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.common.ConfigTest > testInvalidGroupIds PASSED

kafka.common.ConfigTest > testInvalidClientIds PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.admin.AdminTest > testTopicCreationWithCollision PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.api.ConsumerTest > testSeek PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ConsumerTest > testPositionAndCommit PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.common.TopicTest > testInvalidTopicNames PASSED

kafka.common.TopicTest > testTopicHasCollision PASSED

kafka.common.TopicTest > testTopicHasCollisionChars PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.api.ConsumerTest > testUnsubscribeTopic PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.api.ConsumerTest > testListTopics PASSED

kafka.api.ProducerFailureHandlingTest > testNoResponse PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.api.ConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ConsumerTest > testGroupConsumption PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.api.ConsumerTest > 

[jira] [Updated] (KAFKA-2069) Replace OffsetFetch request/response with their org.apache.kafka.common.requests equivalent

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2069:
---
Fix Version/s: (was: 0.8.3)

> Replace OffsetFetch request/response with their 
> org.apache.kafka.common.requests equivalent
> --
>
> Key: KAFKA-2069
> URL: https://issues.apache.org/jira/browse/KAFKA-2069
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Ashish K Singh
>
> Replace OffsetFetch request/response with their 
> org.apache.kafka.common.requests equivalent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-09-01 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2338:
---
Fix Version/s: 0.8.3

> Warn users if they change max.message.bytes that they also need to update 
> broker and consumer settings
> --
>
> Key: KAFKA-2338
> URL: https://issues.apache.org/jira/browse/KAFKA-2338
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Edward Ribeiro
> Fix For: 0.8.3
>
> Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
> KAFKA-2338_2015-07-21_13:21:19.patch, KAFKA-2338_2015-08-24_14:32:38.patch
>
>
> We already have KAFKA-1756 filed to more completely address this issue, but 
> it is waiting for some other major changes to configs to completely protect 
> users from this problem.
> This JIRA should address the low-hanging fruit to at least warn users of the 
> potential problems. Currently the only warning is in our documentation.
> 1. Generate a warning in the kafka-topics.sh tool when they change this 
> setting on a topic to be larger than the default. This needs to be very 
> obvious in the output.
> 2. Currently, the broker's replica fetcher isn't logging any useful error 
> messages when replication can't succeed because a message size is too large. 
> Logging an error here would allow users that get into a bad state to find out 
> why it is happening more easily. (Consumers should already be logging a 
> useful error message.)
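
To make the hazard concrete, these are the three settings that need to stay in 
sync (the config names are the existing 0.8.x ones; the sizes are arbitrary 
examples):

{noformat}
# topic-level override
max.message.bytes=2000000
# broker: replication stalls silently if this is smaller than the topic's limit
replica.fetch.max.bytes=2000000
# consumer: consumption stalls if this is smaller than the topic's limit
fetch.message.max.bytes=2000000
{noformat}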



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2492) Upgrade zkclient dependency to 0.6

2015-09-01 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2492:
---

 Summary: Upgrade zkclient dependency to 0.6
 Key: KAFKA-2492
 URL: https://issues.apache.org/jira/browse/KAFKA-2492
 Project: Kafka
  Issue Type: Task
Affects Versions: 0.8.2.1
Reporter: Stevo Slavic
Priority: Trivial


If zkclient does not get replaced with Curator (via KAFKA-873) sooner, please 
consider upgrading the zkclient dependency to the recently released 0.6.

zkclient 0.6 includes a few important changes, such as:
- a 
[fix|https://github.com/sgroschupf/zkclient/commit/0630c9c6e67ab49a51e80bfd939e4a0d01a69dfe]
 to fail retryUntilConnected actions with a clear exception in case the client 
gets closed
- an [upgraded zookeeper dependency from 3.4.3 to 
3.4.6|https://github.com/sgroschupf/zkclient/commit/8975c1790f7f36cc5d4feea077df337fb1ddabdb]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2493) Add ability to fetch only keys in consumer

2015-09-01 Thread Ivan Balashov (JIRA)
Ivan Balashov created KAFKA-2493:


 Summary: Add ability to fetch only keys in consumer
 Key: KAFKA-2493
 URL: https://issues.apache.org/jira/browse/KAFKA-2493
 Project: Kafka
  Issue Type: Wish
  Components: consumer
Reporter: Ivan Balashov
Assignee: Neha Narkhede
Priority: Minor


Often clients need to find out which offsets contain the data they need. One 
possible solution is to iterate with a small fetch size. However, this still 
transmits unnecessary data when the keys alone already identify the data being 
searched for. The ability to fetch only keys would simplify the search for the 
necessary offset.

Of course, there can be other scenarios where a consumer needs only the keys, 
without the message body.
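
As a point of reference, the workaround mentioned above amounts to shrinking 
the fetch size in the consumer configuration (the config name is the existing 
0.8.x consumer one; the value is an arbitrary example):

{noformat}
# keep fetches small while scanning for the desired offset
fetch.message.max.bytes=4096
{noformat}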



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2493 Upgraded zkclient dependency from 0...

2015-09-01 Thread sslavic
GitHub user sslavic opened a pull request:

https://github.com/apache/kafka/pull/184

KAFKA-2493 Upgraded zkclient dependency from 0.5 to 0.6



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sslavic/kafka feature/KAFKA-2492

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/184.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #184


commit 9f7d7e304c966349c5c39bbe852d3f4220ffe467
Author: Stevo Slavic 
Date:   2015-09-01T09:34:43Z

KAFKA-2493 Upgraded zkclient dependency from 0.5 to 0.6




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2493) Add ability to fetch only keys in consumer

2015-09-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14725070#comment-14725070
 ] 

ASF GitHub Bot commented on KAFKA-2493:
---

GitHub user sslavic opened a pull request:

https://github.com/apache/kafka/pull/184

KAFKA-2493 Upgraded zkclient dependency from 0.5 to 0.6



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sslavic/kafka feature/KAFKA-2492

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/184.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #184


commit 9f7d7e304c966349c5c39bbe852d3f4220ffe467
Author: Stevo Slavic 
Date:   2015-09-01T09:34:43Z

KAFKA-2493 Upgraded zkclient dependency from 0.5 to 0.6




> Add ability to fetch only keys in consumer
> --
>
> Key: KAFKA-2493
> URL: https://issues.apache.org/jira/browse/KAFKA-2493
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer
>Reporter: Ivan Balashov
>Assignee: Neha Narkhede
>Priority: Minor
>
> Often clients need to find out which offsets contain the data they need. One 
> possible solution is to iterate with a small fetch size. However, this still 
> transmits unnecessary data when the keys alone already identify the data being 
> searched for. The ability to fetch only keys would simplify the search for the 
> necessary offset.
> Of course, there can be other scenarios where a consumer needs only the keys, 
> without the message body.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re:no Kafka KIP meeting tomorrow

2015-09-01 Thread jinxing
May I ask what a KIP meeting is?






At 2015-09-01 12:30:18, "Jun Rao"  wrote:
>Since there are no new KIP issues for discussion, there is again no KIP
>meeting tomorrow.
>
>Thanks,
>
>Jun


[jira] [Commented] (KAFKA-2453) enable new consumer in EndToEndLatency

2015-09-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14725257#comment-14725257
 ] 

ASF GitHub Bot commented on KAFKA-2453:
---

GitHub user benstopford reopened a pull request:

https://github.com/apache/kafka/pull/158

KAFKA-2453: Enable new consumer in EndToEndLatency 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-2453b

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/158.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #158


commit 71b11fac33a67ef9f0ac8ac09bcbb5305a56047f
Author: Ben Stopford 
Date:   2015-08-21T16:44:48Z

KAFKA-2453: migrated EndToEndLatencyTest to new consumer API. Added feature 
for configuring message size. Added inline assertion.

commit 43d6a0678fd37d7f382d5f89c898571ae4e3cfbb
Author: Ben Stopford 
Date:   2015-08-21T16:45:19Z

KAFKA-2453: small change which prevents the ConsoleConsumer from throwing 
an exception when the Finalizer thread tries to close it.

commit cac85029c7d802da29d68073545c118804bd41cb
Author: Ben Stopford 
Date:   2015-08-21T17:08:29Z

KAFKA-2453: Added additional arguments to call to EndToEndLatency from 
Performance tests

commit 119a6fa545bcaf586c9cb110f0e13c1cfee1f56c
Author: Ben Stopford 
Date:   2015-09-01T10:20:14Z

KAFKA-2453: Rebased to trunk

KAFKA-2453: removed whitespace

KAFKA-2453: Formatting only




> enable new consumer in EndToEndLatency
> --
>
> Key: KAFKA-2453
> URL: https://issues.apache.org/jira/browse/KAFKA-2453
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jun Rao
>Assignee: Ben Stopford
> Fix For: 0.8.3
>
>
> We need to add an option to enable the new consumer in EndToEndLatency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2494) Document ReplicaId in OffsetRequest in the protocol guide

2015-09-01 Thread Magnus Reftel (JIRA)
Magnus Reftel created KAFKA-2494:


 Summary: Document ReplicaId in OffsetRequest in the protocol guide
 Key: KAFKA-2494
 URL: https://issues.apache.org/jira/browse/KAFKA-2494
 Project: Kafka
  Issue Type: Improvement
  Components: website
Reporter: Magnus Reftel


The documentation for OffsetRequest in the protocol guide lists ReplicaId as 
one of the fields, but does not say what the field is for. It appears to be 
similar to FetchRequest, where it's set to -1 by clients. It would be nice if 
that were documented.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2495) Protocol guide only partially updated after ConsumerMetadata* addition?

2015-09-01 Thread Magnus Reftel (JIRA)
Magnus Reftel created KAFKA-2495:


 Summary: Protocol guide only partially updated after 
ConsumerMetadata* addition?
 Key: KAFKA-2495
 URL: https://issues.apache.org/jira/browse/KAFKA-2495
 Project: Kafka
  Issue Type: Improvement
  Components: website
Reporter: Magnus Reftel


It appears that the protocol guide has only been partially updated after the 
ConsumerMetadataRequest and ConsumerMetadataResponse were added. In particular, 
the response for a TopicMetadataRequest is called "MetadataRequest" instead of 
"TopicMetadataRequest", and ResponseMessage lists only MetadataResponse as an 
alternative, but should probably list "TopicMetadataResponse" and 
"ConsumerMetadataResponse" instead.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2453: Enable new consumer in EndToEndLat...

2015-09-01 Thread benstopford
GitHub user benstopford reopened a pull request:

https://github.com/apache/kafka/pull/158

KAFKA-2453: Enable new consumer in EndToEndLatency 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-2453b

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/158.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #158


commit 71b11fac33a67ef9f0ac8ac09bcbb5305a56047f
Author: Ben Stopford 
Date:   2015-08-21T16:44:48Z

KAFKA-2453: migrated EndToEndLatencyTest to new consumer API. Added feature 
for configuring message size. Added inline assertion.

commit 43d6a0678fd37d7f382d5f89c898571ae4e3cfbb
Author: Ben Stopford 
Date:   2015-08-21T16:45:19Z

KAFKA-2453: small change which prevents the ConsoleConsumer from throwing 
an exception when the Finalizer thread tries to close it.

commit cac85029c7d802da29d68073545c118804bd41cb
Author: Ben Stopford 
Date:   2015-08-21T17:08:29Z

KAFKA-2453: Added additional arguments to call to EndToEndLatency from 
Performance tests

commit 119a6fa545bcaf586c9cb110f0e13c1cfee1f56c
Author: Ben Stopford 
Date:   2015-09-01T10:20:14Z

KAFKA-2453: Rebased to trunk

KAFKA-2453: removed whitespace

KAFKA-2453: Formatting only




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2453) enable new consumer in EndToEndLatency

2015-09-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14725256#comment-14725256
 ] 

ASF GitHub Bot commented on KAFKA-2453:
---

Github user benstopford closed the pull request at:

https://github.com/apache/kafka/pull/158


> enable new consumer in EndToEndLatency
> --
>
> Key: KAFKA-2453
> URL: https://issues.apache.org/jira/browse/KAFKA-2453
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jun Rao
>Assignee: Ben Stopford
> Fix For: 0.8.3
>
>
> We need to add an option to enable the new consumer in EndToEndLatency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2453: Enable new consumer in EndToEndLat...

2015-09-01 Thread benstopford
Github user benstopford closed the pull request at:

https://github.com/apache/kafka/pull/158


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2453) enable new consumer in EndToEndLatency

2015-09-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14725316#comment-14725316
 ] 

ASF GitHub Bot commented on KAFKA-2453:
---

GitHub user benstopford reopened a pull request:

https://github.com/apache/kafka/pull/158

KAFKA-2453: Enable new consumer in EndToEndLatency 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-2453b

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/158.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #158


commit 71b11fac33a67ef9f0ac8ac09bcbb5305a56047f
Author: Ben Stopford 
Date:   2015-08-21T16:44:48Z

KAFKA-2453: migrated EndToEndLatencyTest to new consumer API. Added feature 
for configuring message size. Added inline assertion.

commit 43d6a0678fd37d7f382d5f89c898571ae4e3cfbb
Author: Ben Stopford 
Date:   2015-08-21T16:45:19Z

KAFKA-2453: small change which prevents the ConsoleConsumer from throwing 
an exception when the Finalizer thread tries to close it.

commit cac85029c7d802da29d68073545c118804bd41cb
Author: Ben Stopford 
Date:   2015-08-21T17:08:29Z

KAFKA-2453: Added additional arguments to call to EndToEndLatency from 
Performance tests

commit 119a6fa545bcaf586c9cb110f0e13c1cfee1f56c
Author: Ben Stopford 
Date:   2015-09-01T10:20:14Z

KAFKA-2453: Rebased to trunk

KAFKA-2453: removed whitespace

KAFKA-2453: Formatting only




> enable new consumer in EndToEndLatency
> --
>
> Key: KAFKA-2453
> URL: https://issues.apache.org/jira/browse/KAFKA-2453
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jun Rao
>Assignee: Ben Stopford
> Fix For: 0.8.3
>
>
> We need to add an option to enable the new consumer in EndToEndLatency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2453) enable new consumer in EndToEndLatency

2015-09-01 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-2453:

Status: Patch Available  (was: Open)

[~junrao] please can you review or assign other reviewer. Thanks. 

> enable new consumer in EndToEndLatency
> --
>
> Key: KAFKA-2453
> URL: https://issues.apache.org/jira/browse/KAFKA-2453
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jun Rao
>Assignee: Ben Stopford
> Fix For: 0.8.3
>
>
> We need to add an option to enable the new consumer in EndToEndLatency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2496) New consumer from trunk doesn't work with 0.8.2.1 brokers

2015-09-01 Thread Serhey Novachenko (JIRA)
Serhey Novachenko created KAFKA-2496:


 Summary: New consumer from trunk doesn't work with 0.8.2.1 brokers
 Key: KAFKA-2496
 URL: https://issues.apache.org/jira/browse/KAFKA-2496
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Serhey Novachenko


I have a 0.8.2.1 broker running with a topic created and some messages in it.
I also have a consumer built from trunk (commit 
9c936b186d390f59f1d4ad8cc2995f800036a3d6 to be precise).

When trying to consume messages from this topic, the consumer fails with the 
following stacktrace:

{noformat}
Exception in thread "main" org.apache.kafka.common.KafkaException: Unexpected 
error in join group response: The server experienced an unexpected error when 
processing the request
at 
org.apache.kafka.clients.consumer.internals.Coordinator$JoinGroupResponseHandler.handle(Coordinator.java:361)
at 
org.apache.kafka.clients.consumer.internals.Coordinator$JoinGroupResponseHandler.handle(Coordinator.java:309)
at 
org.apache.kafka.clients.consumer.internals.Coordinator$CoordinatorResponseHandler.onSuccess(Coordinator.java:701)
at 
org.apache.kafka.clients.consumer.internals.Coordinator$CoordinatorResponseHandler.onSuccess(Coordinator.java:675)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:163)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:129)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:105)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:293)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:237)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:274)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:182)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:172)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:145)
at 
org.apache.kafka.clients.consumer.internals.Coordinator.reassignPartitions(Coordinator.java:195)
at 
org.apache.kafka.clients.consumer.internals.Coordinator.ensurePartitionAssignment(Coordinator.java:170)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:770)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:731)
at Sandbox$.main(Sandbox.scala:38)
at Sandbox.main(Sandbox.scala)
{noformat}

What actually happens is the broker being unable to handle the JoinGroup 
request from the consumer:

{noformat}
[2015-09-01 11:48:38,820] ERROR [KafkaApi-0] error when handling request Name: 
JoinGroup; Version: 0; CorrelationId: 141; ClientId: consumer-1; Body: 
{group_id=mirror_maker_group,session_timeout=3,topics=[mirror],consumer_id=,partition_assignment_strategy=range}
 (kafka.server.KafkaApis)
kafka.common.KafkaException: Unknown api code 11
at kafka.server.KafkaApis.handle(KafkaApis.scala:70)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
at java.lang.Thread.run(Thread.java:745)
{noformat}

The consumer code that leads to this is straightforward:

{noformat}
import java.util.Properties
import org.apache.kafka.clients.consumer.KafkaConsumer
import scala.collection.JavaConverters._

object Sandbox {
  def main(args: Array[String]) {
val consumerProps = new Properties
consumerProps.put("bootstrap.servers", "localhost:9092")
consumerProps.put("group.id", "mirror_maker_group")
consumerProps.put("enable.auto.commit", "false")
consumerProps.put("session.timeout.ms", "3")
consumerProps.put("key.deserializer", 
"org.apache.kafka.common.serialization.ByteArrayDeserializer")
consumerProps.put("value.deserializer", 
"org.apache.kafka.common.serialization.ByteArrayDeserializer")
val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](consumerProps)
consumer.subscribe(List("mirror").asJava)

val records = consumer.poll(1000)
for (record <- records.iterator().asScala) {
  println(record.offset())
}
  }
}
{noformat}

I looked into the source code of the Kafka server in the 0.8.2.1 branch and it 
does not have the logic to handle the JoinGroup request. In fact it does not 
have any of the consumer coordination logic, so I wonder whether there is any 
way to make the new consumer work with 0.8.2.1 brokers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2453: Enable new consumer in EndToEndLat...

2015-09-01 Thread benstopford
GitHub user benstopford reopened a pull request:

https://github.com/apache/kafka/pull/158

KAFKA-2453: Enable new consumer in EndToEndLatency 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-2453b

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/158.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #158


commit 71b11fac33a67ef9f0ac8ac09bcbb5305a56047f
Author: Ben Stopford 
Date:   2015-08-21T16:44:48Z

KAFKA-2453: migrated EndToEndLatencyTest to new consumer API. Added feature 
for configuring message size. Added inline assertion.

commit 43d6a0678fd37d7f382d5f89c898571ae4e3cfbb
Author: Ben Stopford 
Date:   2015-08-21T16:45:19Z

KAFKA-2453: small change which prevents the ConsoleConsumer from throwing 
an exception when the Finalizer thread tries to close it.

commit cac85029c7d802da29d68073545c118804bd41cb
Author: Ben Stopford 
Date:   2015-08-21T17:08:29Z

KAFKA-2453: Added additional arguments to call to EndToEndLatency from 
Performance tests

commit 119a6fa545bcaf586c9cb110f0e13c1cfee1f56c
Author: Ben Stopford 
Date:   2015-09-01T10:20:14Z

KAFKA-2453: Rebased to trunk

KAFKA-2453: removed whitespace

KAFKA-2453: Formatting only




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2453: Enable new consumer in EndToEndLat...

2015-09-01 Thread benstopford
Github user benstopford closed the pull request at:

https://github.com/apache/kafka/pull/158


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2453) enable new consumer in EndToEndLatency

2015-09-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14725314#comment-14725314
 ] 

ASF GitHub Bot commented on KAFKA-2453:
---

Github user benstopford closed the pull request at:

https://github.com/apache/kafka/pull/158


> enable new consumer in EndToEndLatency
> --
>
> Key: KAFKA-2453
> URL: https://issues.apache.org/jira/browse/KAFKA-2453
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jun Rao
>Assignee: Ben Stopford
> Fix For: 0.8.3
>
>
> We need to add an option to enable the new consumer in EndToEndLatency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Open source Mysql -> Kafka connector

2015-09-01 Thread Aditya Auradkar
I thought this would be of interest:
https://developer.zendesk.com/blog/introducing-maxwell-a-mysql-to-kafka-binlog-processor

A Copycat connector that parses MySQL binlogs would be rather useful, I
think. Streaming connectors using JDBC are tricky to implement because they
rely on an indexed timestamp column always being present.

Thanks,
Aditya

