[jira] [Commented] (KAFKA-2340) Add additional unit tests for new consumer Fetcher

2015-08-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14660927#comment-14660927
 ] 

ASF GitHub Bot commented on KAFKA-2340:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/112


 Add additional unit tests for new consumer Fetcher
 --

 Key: KAFKA-2340
 URL: https://issues.apache.org/jira/browse/KAFKA-2340
 Project: Kafka
  Issue Type: Test
Reporter: Jason Gustafson

 There are a number of cases in Fetcher which have no corresponding unit 
 tests. To name a few:
 - list offset with partition leader unknown
 - list offset disconnect
 - fetch disconnect
 Additionally, updateFetchPosition (which was moved from KafkaConsumer) has no 
 tests.
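 A minimal JUnit 4 skeleton of these cases might look like the sketch below. The method bodies and any fixture setup are omitted; everything here is illustrative of which scenarios lack coverage, not of the actual Fetcher test harness.
 {code}
 import org.junit.Test;

 public class FetcherAdditionalTest {

     @Test
     public void testListOffsetWithUnknownPartitionLeader() {
         // hypothetical: metadata reports no leader, list offset should retry after a refresh
     }

     @Test
     public void testListOffsetDisconnect() {
         // hypothetical: broker disconnects while a list-offset request is in flight
     }

     @Test
     public void testFetchDisconnect() {
         // hypothetical: broker disconnects while a fetch request is in flight
     }

     @Test
     public void testUpdateFetchPosition() {
         // hypothetical: position restored from the committed offset or the reset policy
     }
 }
 {code}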



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2384) Override commit message title in kafka-merge-pr.py

2015-08-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654106#comment-14654106
 ] 

ASF GitHub Bot commented on KAFKA-2384:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/109


 Override commit message title in kafka-merge-pr.py
 --

 Key: KAFKA-2384
 URL: https://issues.apache.org/jira/browse/KAFKA-2384
 Project: Kafka
  Issue Type: Improvement
Reporter: Guozhang Wang
Assignee: Ismael Juma
 Fix For: 0.8.3


 It would be more convenient to allow setting the commit message title in the 
 merge script; right now the script takes the PR title as is, and contributors 
 have to change it according to the submission-review guidelines before 
 doing the merge.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2055) ConsumerBounceTest.testSeekAndCommitWithBrokerFailures transient failure

2015-08-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654096#comment-14654096
 ] 

ASF GitHub Bot commented on KAFKA-2055:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/98


 ConsumerBounceTest.testSeekAndCommitWithBrokerFailures transient failure
 

 Key: KAFKA-2055
 URL: https://issues.apache.org/jira/browse/KAFKA-2055
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
Assignee: Fangmin Lv
  Labels: newbie
 Fix For: 0.8.3

 Attachments: KAFKA-2055.patch, KAFKA-2055.patch


 {code}
 kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures FAILED
 java.lang.AssertionError: expected:<1000> but was:<976>
 at org.junit.Assert.fail(Assert.java:92)
 at org.junit.Assert.failNotEquals(Assert.java:689)
 at org.junit.Assert.assertEquals(Assert.java:127)
 at org.junit.Assert.assertEquals(Assert.java:514)
 at org.junit.Assert.assertEquals(Assert.java:498)
 at 
 kafka.api.ConsumerBounceTest.seekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:117)
 at 
 kafka.api.ConsumerBounceTest.testSeekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:98)
 kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures FAILED
 java.lang.AssertionError: expected:<1000> but was:<913>
 at org.junit.Assert.fail(Assert.java:92)
 at org.junit.Assert.failNotEquals(Assert.java:689)
 at org.junit.Assert.assertEquals(Assert.java:127)
 at org.junit.Assert.assertEquals(Assert.java:514)
 at org.junit.Assert.assertEquals(Assert.java:498)
 at 
 kafka.api.ConsumerBounceTest.seekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:117)
 at 
 kafka.api.ConsumerBounceTest.testSeekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:98)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2071) Replace Produce Request/Response with their org.apache.kafka.common.requests equivalents

2015-08-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654185#comment-14654185
 ] 

ASF GitHub Bot commented on KAFKA-2071:
---

GitHub user dajac opened a pull request:

https://github.com/apache/kafka/pull/110

KAFKA-2071: Replace Producer Request/Response with their 
org.apache.kafka.common.requests equivalents

This PR replaces all producer requests/responses with their common 
equivalents but doesn't touch the old producer at all.

Some conversions are made in KafkaApis to convert Java types/records to their 
Scala equivalents. For instance, `TopicPartition` must be converted to 
`TopicAndPartition` when it is passed to the ReplicaManager and vice versa. 
I've decided not to touch internals right now, as they are used by other parts, 
which makes updating them difficult. I'd prefer to address internals in a 
separate JIRA once all requests and responses are updated.
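As a rough illustration of the kind of conversion described above (written in Java for illustration, although KafkaApis itself is Scala; the helper class name is hypothetical):

{code}
import org.apache.kafka.common.TopicPartition;
import kafka.common.TopicAndPartition;

// Hypothetical helper: convert between the Java client type and the Scala type
// expected by ReplicaManager, in both directions.
public final class RequestConversions {

    public static TopicAndPartition toTopicAndPartition(TopicPartition tp) {
        return new TopicAndPartition(tp.topic(), tp.partition());
    }

    public static TopicPartition toTopicPartition(TopicAndPartition tap) {
        return new TopicPartition(tap.topic(), tap.partition());
    }
}
{code}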

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dajac/kafka KAFKA-2071

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/110.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #110


commit 8e1ca3c817b768a399d5ea54d6360d6180e4c39a
Author: David Jacot david.ja...@swisscom.com
Date:   2015-08-04T16:47:36Z

Replace Producer Request/Response with their 
org.apache.kafka.common.requests equivalents.




 Replace Produce Request/Response with their org.apache.kafka.common.requests 
 equivalents
 

 Key: KAFKA-2071
 URL: https://issues.apache.org/jira/browse/KAFKA-2071
 Project: Kafka
  Issue Type: Sub-task
Reporter: Gwen Shapira
Assignee: David Jacot
 Fix For: 0.8.3


 Replace Produce Request/Response with their org.apache.kafka.common.requests 
 equivalents



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2411) remove usage of BlockingChannel in the broker

2015-08-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695565#comment-14695565
 ] 

ASF GitHub Bot commented on KAFKA-2411:
---

Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/127


 remove usage of BlockingChannel in the broker
 -

 Key: KAFKA-2411
 URL: https://issues.apache.org/jira/browse/KAFKA-2411
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Ismael Juma
 Fix For: 0.8.3


 In KAFKA-1690, we are adding the SSL support at Selector. However, there are 
 still a few places where we use BlockingChannel for inter-broker 
 communication. We need to replace those usage with Selector/NetworkClient to 
 enable inter-broker communication over SSL. Specifically, BlockingChannel is 
 currently used in the following places.
 1. ControllerChannelManager: for the controller to propagate metadata to the 
 brokers.
 2. KafkaServer: for the broker to send controlled shutdown request to the 
 controller.
 3. AbstractFetcherThread: for the follower to fetch data from the leader 
 (through SimpleConsumer).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2430) Listing of PR commits in commit message should be optional

2015-08-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695623#comment-14695623
 ] 

ASF GitHub Bot commented on KAFKA-2430:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/136


 Listing of PR commits in commit message should be optional
 --

 Key: KAFKA-2430
 URL: https://issues.apache.org/jira/browse/KAFKA-2430
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 0.8.3


 Listing of PR commits is useful for curated branches, but the PRs for the 
 Kafka project are often for organic branches, and some of them have a large 
 number of commits that are basically noise. Listing is also not useful if 
 there is a single commit in the PR.
 This change to the PR script skips the commit listing when there is a single 
 commit and lets the merger decide whether listing the commits is useful 
 for other cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2406) ISR propagation should be throttled to avoid overwhelming controller.

2015-08-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696012#comment-14696012
 ] 

ASF GitHub Bot commented on KAFKA-2406:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/114


 ISR propagation should be throttled to avoid overwhelming controller.
 -

 Key: KAFKA-2406
 URL: https://issues.apache.org/jira/browse/KAFKA-2406
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
Priority: Blocker
 Fix For: 0.8.3


 This is a follow up patch for KAFKA-1367.
 We need to throttle the ISR propagation rate to avoid flooding 
 controller-to-broker traffic. Unthrottled propagation might significantly 
 increase the time of controlled shutdown or cluster startup.
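 A generic sketch of the kind of throttling described here (batch ISR changes and propagate them at most once per interval) is shown below; class and method names are illustrative and not taken from the actual patch.
 {code}
 import java.util.Collections;
 import java.util.HashSet;
 import java.util.Set;

 // Illustrative only: collect ISR changes and propagate them in batches,
 // at most once per interval, instead of once per change.
 public class IsrChangeBatcher {
     private final long intervalMs;
     private final Set<Integer> changedPartitions = new HashSet<Integer>();
     private long lastPropagationMs = 0L;

     public IsrChangeBatcher(long intervalMs) {
         this.intervalMs = intervalMs;
     }

     public synchronized void recordChange(int partitionId) {
         changedPartitions.add(partitionId);
     }

     // Called periodically; returns the batch to propagate, or an empty set
     // if the interval has not elapsed or there is nothing to report.
     public synchronized Set<Integer> maybeDrainBatch(long nowMs) {
         if (changedPartitions.isEmpty() || nowMs - lastPropagationMs < intervalMs)
             return Collections.<Integer>emptySet();
         Set<Integer> batch = new HashSet<Integer>(changedPartitions);
         changedPartitions.clear();
         lastPropagationMs = nowMs;
         return batch;
     }
 }
 {code}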



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2143) Replicas get ahead of leader and fail

2015-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14680467#comment-14680467
 ] 

ASF GitHub Bot commented on KAFKA-2143:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/129

KAFKA-2143: fix replica offset truncate to beginning during leader 
migration.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-2143

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/129.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #129


commit 71f8a4716e1f0b4fc2bd88aa30fe38aef8a9f92e
Author: Jiangjie Qin becket@gmail.com
Date:   2015-08-03T02:22:02Z

Fix for KAFKA-2134, fix replica offset truncate to beginning during leader 
migration.




 Replicas get ahead of leader and fail
 -

 Key: KAFKA-2143
 URL: https://issues.apache.org/jira/browse/KAFKA-2143
 Project: Kafka
  Issue Type: Bug
  Components: replication
Affects Versions: 0.8.2.1
Reporter: Evan Huus
Assignee: Jiangjie Qin
 Fix For: 0.8.3


 On a cluster of 6 nodes, we recently saw a case where a single 
 under-replicated partition suddenly appeared, replication lag spiked, and 
 network IO spiked. The cluster appeared to recover eventually on its own.
 Looking at the logs, the thing which failed was partition 7 of the topic 
 {{background_queue}}. It had an ISR of 1,4,3 and its leader at the time was 
 3. Here are the interesting log lines:
 On node 3 (the leader):
 {noformat}
 [2015-04-23 16:50:05,879] ERROR [Replica Manager on Broker 3]: Error when 
 processing fetch request for partition [background_queue,7] offset 3722949957 
 from follower with correlation id 148185816. Possible cause: Request for 
 offset 3722949957 but we only have log segments in the range 3648049863 to 
 3722949955. (kafka.server.ReplicaManager)
 [2015-04-23 16:50:05,879] ERROR [Replica Manager on Broker 3]: Error when 
 processing fetch request for partition [background_queue,7] offset 3722949957 
 from follower with correlation id 156007054. Possible cause: Request for 
 offset 3722949957 but we only have log segments in the range 3648049863 to 
 3722949955. (kafka.server.ReplicaManager)
 [2015-04-23 16:50:13,960] INFO Partition [background_queue,7] on broker 3: 
 Shrinking ISR for partition [background_queue,7] from 1,4,3 to 3 
 (kafka.cluster.Partition)
 {noformat}
 Note that both replicas suddenly asked for an offset *ahead* of the available 
 offsets.
 And on nodes 1 and 4 (the replicas) many occurrences of the following:
 {noformat}
 [2015-04-23 16:50:05,935] INFO Scheduling log segment 3648049863 for log 
 background_queue-7 for deletion. (kafka.log.Log) (edited)
 {noformat}
 Based on my reading, this looks like the replicas somehow got *ahead* of the 
 leader, asked for an invalid offset, got confused, and re-replicated the 
 entire topic from scratch to recover (this matches our network graphs, which 
 show 3 sending a bunch of data to 1 and 4).
 Taking a stab in the dark at the cause, there appears to be a race condition 
 where replicas can receive a new offset before the leader has committed it 
 and is ready to replicate?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2134) Producer blocked on metric publish

2015-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14680531#comment-14680531
 ] 

ASF GitHub Bot commented on KAFKA-2134:
---

Github user becketqin closed the pull request at:

https://github.com/apache/kafka/pull/104


 Producer blocked on metric publish
 --

 Key: KAFKA-2134
 URL: https://issues.apache.org/jira/browse/KAFKA-2134
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.2.1
 Environment: debian7, java8
Reporter: Vamsi Subhash Achanta
Assignee: Jun Rao
Priority: Blocker

 Hi,
 We have a REST api to publish to a topic. Yesterday, we started noticing that 
 the producer is not able to produce messages at a good rate and the 
 CLOSE_WAITs of our producer REST app are very high. All the producer REST 
 requests are hence timing out.
 When we took the thread dump and analysed it, we noticed that the threads are 
 getting blocked on JmxReporter metricChange. Here is the attached stack trace.
 dw-70 - POST /queues/queue_1/messages #70 prio=5 os_prio=0 tid=0x7f043c8bd000 nid=0x54cf waiting for monitor entry [0x7f04363c7000]
    java.lang.Thread.State: BLOCKED (on object monitor)
         at org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
         - waiting to lock <0x0005c1823860> (a java.lang.Object)
         at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:182)
         - locked <0x0007a5e526c8> (a org.apache.kafka.common.metrics.Metrics)
         at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:165)
         - locked <0x0007a5e526e8> (a org.apache.kafka.common.metrics.Sensor)
 When I looked at the code of the metricChange method, it uses a synchronized 
 block on a shared lock object, and it seems that the lock is held by another thread.
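 A toy illustration of the contention pattern described above (not the actual JmxReporter code): every metric registration synchronizes on one shared lock, so request threads queue behind whichever thread is currently registering an MBean.
 {code}
 // Toy sketch only; names and structure are illustrative.
 public class SharedLockReporter {
     private static final Object LOCK = new Object();

     public void metricChange(String metricName) {
         synchronized (LOCK) {
             // While one thread holds LOCK doing slow JMX registration work,
             // every other thread calling metricChange is BLOCKED, matching
             // the thread-dump state shown above.
             registerMBean(metricName);
         }
     }

     private void registerMBean(String metricName) {
         // placeholder for the (potentially slow) MBean registration
     }
 }
 {code}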



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1893) Allow regex subscriptions in the new consumer

2015-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14680461#comment-14680461
 ] 

ASF GitHub Bot commented on KAFKA-1893:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/128

KAFKA-1893: Allow regex subscriptions in the new consumer



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-1893

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/128.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #128


commit 7294571b853979f85685ae6a49a4711202a64c04
Author: asingh asi...@cloudera.com
Date:   2015-07-28T00:05:48Z

KAFKA-1893: Allow regex subscriptions in the new consumer




 Allow regex subscriptions in the new consumer
 -

 Key: KAFKA-1893
 URL: https://issues.apache.org/jira/browse/KAFKA-1893
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Jay Kreps
Assignee: Ashish K Singh
Priority: Critical
 Fix For: 0.8.3


 The consumer needs to handle subscribing to regular expressions. Presumably 
 this would be done as a new api,
 {code}
   void subscribe(java.util.regex.Pattern pattern);
 {code}
 Some questions/thoughts to work out:
  - It should not be possible to mix pattern subscription with partition 
 subscription.
  - Is it allowable to mix this with normal topic subscriptions? Logically 
 this is okay but a bit complex to implement.
  - We need to ensure we regularly update the metadata and recheck our regexes 
 against the metadata to update subscriptions for new topics that are created 
 or old topics that are deleted.
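 A usage sketch of the proposed pattern subscription is shown below; the overload did not exist in any released client at the time, so this is illustrative only, and the configuration values are placeholders.
 {code}
 import java.util.Properties;
 import java.util.regex.Pattern;
 import org.apache.kafka.clients.consumer.KafkaConsumer;

 public class RegexSubscriptionSketch {
     public static void main(String[] args) {
         Properties props = new Properties();
         props.put("bootstrap.servers", "localhost:9092");
         props.put("group.id", "metrics-readers");
         props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
         props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
         KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<byte[], byte[]>(props);

         // Proposed overload from the description above (illustrative):
         consumer.subscribe(Pattern.compile("metrics-.*"));

         while (true) {
             // Topics created later that match the pattern should be picked up
             // when the consumer refreshes metadata and rechecks the regex.
             consumer.poll(100);
         }
     }
 }
 {code}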



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2388) subscribe(topic)/unsubscribe(topic) should either take a callback to allow user to handle exceptions or it should be synchronous.

2015-08-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14697688#comment-14697688
 ] 

ASF GitHub Bot commented on KAFKA-2388:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/139

KAFKA-2388 [WIP]; refactor KafkaConsumer subscribe API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2388

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/139.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #139


commit dac975a77624f5a7865746736c334a76e8360182
Author: Jason Gustafson ja...@confluent.io
Date:   2015-08-14T20:16:25Z

KAFKA-2388 [WIP]; refactor KafkaConsumer subscribe API




 subscribe(topic)/unsubscribe(topic) should either take a callback to allow 
 user to handle exceptions or it should be synchronous.
 -

 Key: KAFKA-2388
 URL: https://issues.apache.org/jira/browse/KAFKA-2388
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jiangjie Qin
Assignee: Jason Gustafson

 According to the mailing list discussion on the consumer interface, we'll 
 replace:
 {code}
 public void subscribe(String... topics);
 public void subscribe(TopicPartition... partitions);
 public Set<TopicPartition> subscriptions();
 {code}
 with:
 {code}
 void subscribe(List<String> topics, RebalanceCallback callback);
 void assign(List<TopicPartition> partitions);
 List<String> subscriptions();
 List<TopicPartition> assignments();
 {code}
 We don't need the unsubscribe APIs anymore.
 The RebalanceCallback would look like:
 {code}
 interface RebalanceCallback {
   void onAssignment(List<TopicPartition> partitions);
   void onRevocation(List<TopicPartition> partitions);
   // handle non-existing topics, etc.
   void onError(Exception e);
 }
 {code}
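 An application-side sketch of the proposed callback might look like this; the interface itself is the proposal quoted above, not a released API.
 {code}
 import java.util.List;
 import org.apache.kafka.common.TopicPartition;

 public class LoggingRebalanceCallback implements RebalanceCallback {
     @Override
     public void onAssignment(List<TopicPartition> partitions) {
         System.out.println("Assigned: " + partitions);  // e.g. restore state, seek
     }

     @Override
     public void onRevocation(List<TopicPartition> partitions) {
         System.out.println("Revoked: " + partitions);   // e.g. commit offsets
     }

     @Override
     public void onError(Exception e) {
         System.err.println("Subscription error: " + e); // e.g. non-existing topic
     }
 }
 {code}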



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2221) Log the entire cause which caused a reconnect in the SimpleConsumer

2015-08-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14697401#comment-14697401
 ] 

ASF GitHub Bot commented on KAFKA-2221:
---

GitHub user jaikiran opened a pull request:

https://github.com/apache/kafka/pull/138

Log the real exception which triggered a reconnect

The commit here improves the logging in SimpleConsumer to log the real 
reason why a reconnect was attempted. Relates to 
https://issues.apache.org/jira/browse/KAFKA-2221.

The same patch was submitted a while back but wasn't merged because 
SimpleConsumer was considered deprecated and users aren't expected to use it. 
However, more and more users on the user mailing list are running into this log 
message but have no way to understand what the root cause is. So IMO, this 
change still adds value to such users who are using SimpleConsumer.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jaikiran/kafka kafka-2221

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/138.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #138


commit 4fb024931761d253ae110cfa69377508de4a1f61
Author: Jaikiran Pai jaikiran@gmail.com
Date:   2015-08-14T16:58:57Z

Log the real exception which triggered a reconnect




 Log the entire cause which caused a reconnect in the SimpleConsumer
 ---

 Key: KAFKA-2221
 URL: https://issues.apache.org/jira/browse/KAFKA-2221
 Project: Kafka
  Issue Type: Improvement
Reporter: jaikiran pai
Assignee: jaikiran pai
Priority: Minor
 Attachments: KAFKA-2221.patch


 Currently, if the SimpleConsumer goes for a reconnect, it logs only the message 
 of the exception which caused the reconnect. However, on some occasions the 
 exception message can be null, making it difficult to narrow down the cause 
 of the reconnect. An example of this can be seen in this user mailing list 
 thread 
 http://mail-archives.apache.org/mod_mbox/kafka-users/201505.mbox/%3CCABME_6T%2Bt90%2B-eQUtnu6R99NqRdMpVj3tqa95Pygg8KOQSNppw%40mail.gmail.com%3E
 {quote}
 kafka.consumer.SimpleConsumer: Reconnect due to socket error: null.
 {quote}
 It would help narrow down the problem if the entire exception stack trace 
 was logged.
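 The difference being asked for is essentially logging the throwable itself rather than only its message. A sketch follows, using SLF4J for illustration (SimpleConsumer itself uses Kafka's own Scala logging):
 {code}
 import java.io.IOException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 public class ReconnectLogging {
     private static final Logger log = LoggerFactory.getLogger(ReconnectLogging.class);

     void onSocketError(IOException e) {
         // Before: only the message is logged; for some exceptions the message is
         // null, which produces "Reconnect due to socket error: null."
         log.info("Reconnect due to socket error: " + e.getMessage());

         // After: pass the throwable so the exception class and stack trace are logged.
         log.info("Reconnect due to socket error:", e);
     }
 }
 {code}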



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2402) Broker should create zkpath /isr_change_notification if it does not exist.

2015-08-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14697520#comment-14697520
 ] 

ASF GitHub Bot commented on KAFKA-2402:
---

Github user becketqin closed the pull request at:

https://github.com/apache/kafka/pull/108


 Broker should create zkpath /isr_change_notification if it does not exist.
 --

 Key: KAFKA-2402
 URL: https://issues.apache.org/jira/browse/KAFKA-2402
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin

 This is a follow up patch for KAFKA-1367.
 When a broker updates the ISR of partitions, it should ensure the zkPath 
 /isr_change_notification exists. This does not matter when users do a clean 
 deploy of a Kafka cluster because the controller will always create the path. 
 But it matters when users are doing a rolling upgrade, since the controller 
 could still be running on an old-version broker. In that case, 
 ZkNoNodeException will be thrown and replica fetching will fail.
 We can either document the upgrade process to ask users to create the zk path 
 manually before upgrading, or preferably handle it in the code.
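 A sketch of the code-side option, using the I0Itec ZkClient library Kafka used at the time (treat the exact calls and class name as illustrative):
 {code}
 import org.I0Itec.zkclient.ZkClient;
 import org.I0Itec.zkclient.exception.ZkNodeExistsException;

 public class IsrChangeNotificationPath {
     public static final String PATH = "/isr_change_notification";

     // Idempotently ensure the path exists before writing ISR change events, so a
     // new-version broker does not depend on the controller having created it.
     public static void ensureExists(ZkClient zkClient) {
         try {
             zkClient.createPersistent(PATH, true); // true = also create parents
         } catch (ZkNodeExistsException e) {
             // already created (possibly by another broker or the controller)
         }
     }
 }
 {code}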



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1782) Junit3 Misusage

2015-08-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14697899#comment-14697899
 ] 

ASF GitHub Bot commented on KAFKA-1782:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/140

KAFKA-1782: Follow up - add missing @Test annotations.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-1782-followup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/140.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #140


commit 1dcaf39d489c26b564186fbe8d1bddb987f38e3e
Author: Ewen Cheslack-Postava m...@ewencp.org
Date:   2015-08-14T22:43:56Z

KAFKA-1782: Follow up - add missing @Test annotations.




 Junit3 Misusage
 ---

 Key: KAFKA-1782
 URL: https://issues.apache.org/jira/browse/KAFKA-1782
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Alexander Pakulov
  Labels: newbie
 Fix For: 0.8.3

 Attachments: KAFKA-1782.patch, KAFKA-1782.patch, 
 KAFKA-1782_2015-06-18_11:52:49.patch, KAFKA-1782_2015-07-15_16:57:44.patch, 
 KAFKA-1782_2015-07-16_11:50:05.patch, KAFKA-1782_2015-07-16_11:56:11.patch


 I found this while working on KAFKA-1580: in many of our cases where 
 we explicitly extend JUnit3Suite (e.g. ProducerFailureHandlingTest), we 
 are actually misusing a bunch of features that only exist in JUnit4, such as 
 @Test(expected = classOf[...]). For example, the following code
 {code}
 import org.scalatest.junit.JUnit3Suite
 import org.junit.Test
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will actually pass even though IOException was not thrown since this 
 annotation is not supported in Junit3. Whereas
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.scalatest.junit.JUnitSuite
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will fail.
 I would propose not relying on JUnit annotations other than @Test itself, but 
 using ScalaTest utilities such as intercept instead; for example:
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test
   def testSendOffset() {
     intercept[IOException] {
       //nothing
     }
   }
 }
 {code}
 will fail with a clearer stacktrace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2366) Initial patch for Copycat

2015-08-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14697920#comment-14697920
 ] 

ASF GitHub Bot commented on KAFKA-2366:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/99


 Initial patch for Copycat
 -

 Key: KAFKA-2366
 URL: https://issues.apache.org/jira/browse/KAFKA-2366
 Project: Kafka
  Issue Type: Sub-task
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.8.3


 This covers the initial patch for Copycat. The goal here is to get some 
 baseline code in place, not necessarily the finalized implementation.
 The key thing we'll want here is the connector/task API, which defines how 
 third parties write connectors.
 Beyond that the goal is to have a basically functional standalone Copycat 
 implementation -- enough that we can run and test any connector code with 
 reasonable coverage of functionality; specifically, it's important that core 
 concepts like offset commit and resuming connector tasks function properly. 
 These two things obviously interact, so development of the standalone worker 
 may affect the design of connector APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2072) Add StopReplica request/response to o.a.k.common.requests and replace the usage in core module

2015-08-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14698177#comment-14698177
 ] 

ASF GitHub Bot commented on KAFKA-2072:
---

GitHub user dajac opened a pull request:

https://github.com/apache/kafka/pull/141

KAFKA-2072 [WIP]: Add StopReplica request/response to o.a.k.common.requests 
and replace the usage in core module 

Migration is done but this PR will need to be rebased on #110. I have 
copied some code (ef669a5) for now.

I'd appreciate feedback on it mainly around how I handle things in the 
ControllerChannelManager. I have introduced a new 'sendRequest' method for 
o.a.k.common.requests and kept the old one for compatibility reasons. We'll be 
able to remove the old one in the future when migration of all requests and 
responses to o.a.k.common.requests is completed.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dajac/kafka KAFKA-2072

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/141.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #141


commit 22f2466f985cde1787a41f13a0191a538fc3a23f
Author: David Jacot david.ja...@gmail.com
Date:   2015-08-13T15:36:49Z

Add o.a.k.c.r.StopReplicaRequest and o.a.k.c.r.StopReplicaResponse.

commit 3a31ba9dc3d912e91535cf3a4e373b8f56b347b4
Author: David Jacot david.ja...@gmail.com
Date:   2015-08-13T17:15:18Z

Replace k.a.StopReplicaRequest and k.a.StopReplicaResponse in KafkaApis by 
their org.apache.kafka.common.requests equivalents.

commit ef669a5ff5fc125624d5d1ec79b92940d43ca3bb
Author: David Jacot david.ja...@gmail.com
Date:   2015-08-14T18:42:37Z

Code cherry-picked from KAFKA-2071. It can be removed when KAFKA-2071 is 
merged.

commit cbaa987385d989fad8cc3f40d50a24c2ee25ae78
Author: David Jacot david.ja...@gmail.com
Date:   2015-08-14T18:46:21Z

Replace k.a.StopReplicaRequest and k.a.StopReplicaResponse in Controller by 
their org.apache.kafka.common.requests equivalents.

commit 48a05d81c94ca30fff96df8c82587e64db4260b0
Author: David Jacot david.ja...@gmail.com
Date:   2015-08-14T18:53:32Z

Remove k.a.StopReplicaRequest and k.a.StopReplicaResponse.




 Add StopReplica request/response to o.a.k.common.requests and replace the 
 usage in core module
 --

 Key: KAFKA-2072
 URL: https://issues.apache.org/jira/browse/KAFKA-2072
 Project: Kafka
  Issue Type: Sub-task
Reporter: Gwen Shapira
Assignee: David Jacot
 Fix For: 0.8.3


 Add StopReplica request/response to o.a.k.common.requests and replace the 
 usage in core module



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1782) Junit3 Misusage

2015-08-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14698109#comment-14698109
 ] 

ASF GitHub Bot commented on KAFKA-1782:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/140


 Junit3 Misusage
 ---

 Key: KAFKA-1782
 URL: https://issues.apache.org/jira/browse/KAFKA-1782
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Alexander Pakulov
  Labels: newbie
 Fix For: 0.8.3

 Attachments: KAFKA-1782.patch, KAFKA-1782.patch, 
 KAFKA-1782_2015-06-18_11:52:49.patch, KAFKA-1782_2015-07-15_16:57:44.patch, 
 KAFKA-1782_2015-07-16_11:50:05.patch, KAFKA-1782_2015-07-16_11:56:11.patch


 I found this while working on KAFKA-1580: in many of our cases where 
 we explicitly extend JUnit3Suite (e.g. ProducerFailureHandlingTest), we 
 are actually misusing a bunch of features that only exist in JUnit4, such as 
 @Test(expected = classOf[...]). For example, the following code
 {code}
 import org.scalatest.junit.JUnit3Suite
 import org.junit.Test
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will actually pass even though IOException was not thrown since this 
 annotation is not supported in Junit3. Whereas
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.scalatest.junit.JUnitSuite
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will fail.
 I would propose not relying on JUnit annotations other than @Test itself, but 
 using ScalaTest utilities such as intercept instead; for example:
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test
   def testSendOffset() {
     intercept[IOException] {
       //nothing
     }
   }
 }
 {code}
 will fail with a clearer stacktrace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2411) remove usage of BlockingChannel in the broker

2015-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14680045#comment-14680045
 ] 

ASF GitHub Bot commented on KAFKA-2411:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/127

KAFKA-2411; [WIP] remove usage of blocking channel

This PR builds on the work from @harshach and only the last commit is 
relevant. Opening the PR for getting feedback.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2411-remove-usage-of-blocking-channel

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/127.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #127


commit 8ca558920347733ddf7a924463c93620e976a3f3
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-04-28T07:29:53Z

KAFKA-1690. new java producer needs ssl support as a client.

commit 754a121e7582f1452a9ae3a3ab72c58cf284da1d
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-05-11T06:02:01Z

KAFKA-1690. new java producer needs ssl support as a client.

commit 98a90ae9d80ea8f5ab4780569d1c4e301dd16c4e
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-05-11T06:18:13Z

KAFKA-1690. new java producer needs ssl support as a client.

commit 804da7a015be2f98a1bb867ee5d42aa8009a37dd
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-05-11T06:31:25Z

KAFKA-1690. new java producer needs ssl support as a client.

commit ee16e8e6f92ac2baf0e41d3019b7f8aef39b1506
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-05-11T23:09:01Z

KAFKA-1690. new java producer needs ssl support as a client. SSLFactory 
tests.

commit 2dd826be4a6ebe7064cb19ff21fe23950a1bafc2
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-05-12T23:09:38Z

KAFKA-1690. new java producer needs ssl support as a client. Added 
PrincipalBuilder.

commit 2cddad80f6a4a961b6932879448e532dab4e637e
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-05-15T14:17:37Z

KAFKA-1690. new java producer needs ssl support as a client. Addressing 
reviews.

commit ca0456dc01def337ee1711cabd9c4e9df4af61ee
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-05-20T21:23:29Z

KAFKA-1690. new java producer needs ssl support as a client. Addressing 
reviews.

commit 7e3a4cfc58932aab4288677111af52f94c9012b6
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-05-20T21:37:52Z

KAFKA-1690. new java producer needs ssl support as a client. Addressing 
reviews.

commit 9bdceb24f8682184f7fb39578f239a7b6dde
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-05-21T16:50:52Z

KAFKA-1690. new java producer needs ssl support as a client. Fixed minor
issues with the patch.

commit 65396b5cabeaf61579c6e6422848877fc7a896a9
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-05-21T17:27:11Z

KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
issues with the patch.

commit b37330a7b4ec3adfba4f0c6e33ab172be03406be
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-05-29T03:57:06Z

KAFKA-1690. new java producer needs ssl support as a client.

commit fe595fd4fda45ebd7c5da88ee093ab17817bb94d
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-06-04T01:43:34Z

KAFKA-1690. new java producer needs ssl support as a client.

commit 247264ce35a04d14b97c87dcb88378ad1dbe0986
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-06-08T16:07:14Z

Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1

commit 050782b9f47f4c61b22ef065ec4798ccbdb962d3
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-06-16T15:46:05Z

KAFKA-1690. Broker side ssl changes.

commit 9328ffa464711a835be8935cb09922230e0e1a58
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-06-20T17:47:01Z

KAFKA-1684. SSL for socketServer.

commit eda92cb5f9d2ae749903eac5453a6fdb49685964
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-06-21T03:01:30Z

KAFKA-1690. Added SSLProducerSendTest and fixes to get right port for SSL.

commit f10e28b2f2b10d91db9c1aba977fc578b4c4c633
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-06-21T03:47:54Z

Merge branch 'trunk' into KAFKA-1690-V1

commit f60c95273b3b814792d0da9264a75939049dcc5f
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-06-21T04:45:58Z

KAFKA-1690. Post merge fixes.

commit 8f7ba892502b09cb7cc05d75270352815fb1c42c
Author: Sriharsha Chintalapani har...@hortonworks.com
Date:   2015-06-21T22:35:52Z

KAFKA-1690. Added 

[jira] [Commented] (KAFKA-2390) Seek() should take a callback.

2015-08-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14659236#comment-14659236
 ] 

ASF GitHub Bot commented on KAFKA-2390:
---

GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/118

KAFKA-2390; Seek() should take a callback



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-2390

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/118.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #118


commit 8a9f88649b8de52b48700821f0e1bc5c51a661f3
Author: Dong Lin lindon...@gmail.com
Date:   2015-08-06T00:05:58Z

KAFKA-2390; Seek() should take a callback




 Seek() should take a callback.
 --

 Key: KAFKA-2390
 URL: https://issues.apache.org/jira/browse/KAFKA-2390
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jiangjie Qin
Assignee: Dong Lin

 Currently seek is an async call. To have the same interface as other calls 
 like commit(), seek() should take a callback. This callback will be invoked 
 if the position to seek triggers OFFSET_OUT_OF_RANGE exception from broker.
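 A purely hypothetical sketch of what the proposal could look like (no such API was released; the names below are invented for illustration):
 {code}
 import org.apache.kafka.common.TopicPartition;

 // Hypothetical callback type, mirroring the commit() callback style:
 interface SeekCallback {
     // invoked if the sought position later triggers OFFSET_OUT_OF_RANGE on the broker
     void onOffsetOutOfRange(TopicPartition partition, long requestedOffset);
 }

 // Hypothetical consumer-side signature:
 // void seek(TopicPartition partition, long offset, SeekCallback callback);
 {code}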



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2393) Correctly Handle InvalidTopicException in KafkaApis.getTopicMetadata()

2015-08-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14659134#comment-14659134
 ] 

ASF GitHub Bot commented on KAFKA-2393:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/117


 Correctly Handle InvalidTopicException in KafkaApis.getTopicMetadata()
 --

 Key: KAFKA-2393
 URL: https://issues.apache.org/jira/browse/KAFKA-2393
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Grant Henke
Assignee: Grant Henke
 Fix For: 0.8.3


 It seems that in KafkaApis.getTopicMetadata(), we need to handle 
 InvalidTopicException explicitly when calling AdminUtils.createTopic (by 
 returning the corresponding error code for that topic). Otherwise, we may not 
 be able to get the metadata for other valid topics. This seems to be an 
 existing problem, but KAFKA-2337 makes InvalidTopicException more likely to 
 happen. 
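 A generic sketch of the per-topic handling described above (in Java for illustration, although KafkaApis itself is Scala; the error-code mapping and helper names are illustrative):
 {code}
 import java.util.ArrayList;
 import java.util.List;
 import org.apache.kafka.common.errors.InvalidTopicException;

 class TopicMetadataSketch {
     static class TopicMetadataEntry {
         final String topic;
         final short errorCode; // illustrative per-topic error code
         TopicMetadataEntry(String topic, short errorCode) {
             this.topic = topic;
             this.errorCode = errorCode;
         }
     }

     List<TopicMetadataEntry> getTopicMetadata(List<String> topics) {
         List<TopicMetadataEntry> result = new ArrayList<TopicMetadataEntry>();
         for (String topic : topics) {
             try {
                 createTopicIfMissing(topic); // may throw InvalidTopicException
                 result.add(new TopicMetadataEntry(topic, (short) 0));
             } catch (InvalidTopicException e) {
                 // report the error for this topic only, instead of failing the whole request
                 result.add(new TopicMetadataEntry(topic, (short) 17)); // illustrative code
             }
         }
         return result;
     }

     void createTopicIfMissing(String topic) {
         // placeholder for the AdminUtils.createTopic call
     }
 }
 {code}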



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2401) Fix transient failure of ProducerSendTest.testCloseWithZeroTimeoutFromSenderThread()

2015-08-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654657#comment-14654657
 ] 

ASF GitHub Bot commented on KAFKA-2401:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/113

KAFKA-2401: fix transient failure in 
ProducerSendTest.testCloseWithZeroTimeoutFromSenderThread



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-2401

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/113.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #113


commit 7d4223df1325e42560a0d55a1b31f369157133e4
Author: Jiangjie Qin becket@gmail.com
Date:   2015-08-05T01:36:54Z

KAFKA-2401: fix transient failure in 
ProducerSendTest.testCloseWithZeroTimeoutFromSenderThread




 Fix transient failure of 
 ProducerSendTest.testCloseWithZeroTimeoutFromSenderThread()
 

 Key: KAFKA-2401
 URL: https://issues.apache.org/jira/browse/KAFKA-2401
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin

 The transient failure can happen because of a race condition in the callback 
 firing order for messages produced to broker 0 and broker 1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2321) Introduce CONTRIBUTING.md

2015-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14643107#comment-14643107
 ] 

ASF GitHub Bot commented on KAFKA-2321:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/97


 Introduce CONTRIBUTING.md
 -

 Key: KAFKA-2321
 URL: https://issues.apache.org/jira/browse/KAFKA-2321
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 0.8.3


 This file is displayed when people create a pull request in GitHub. It should 
 link to the relevant pages in the wiki and website.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2342) KafkaConsumer rebalance with in-flight fetch can cause invalid position

2015-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14637525#comment-14637525
 ] 

ASF GitHub Bot commented on KAFKA-2342:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/88


 KafkaConsumer rebalance with in-flight fetch can cause invalid position
 ---

 Key: KAFKA-2342
 URL: https://issues.apache.org/jira/browse/KAFKA-2342
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Affects Versions: 0.8.3
Reporter: Jun Rao
Assignee: Jason Gustafson
 Fix For: 0.9.0


 If a rebalance occurs with an in-flight fetch, the new KafkaConsumer can end 
 up updating the fetch position of a partition to an offset which is no longer 
 valid. The consequence is that we may either return messages from an 
 unexpected position to the user or fail to give back the right offset in 
 position().
 Additionally, this bug causes transient test failures in 
 ConsumerBounceTest.testConsumptionWithBrokerFailures with the following 
 exception:
 kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
 java.lang.NullPointerException
 at 
 org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:949)
 at 
 kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:86)
 at 
 kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:61)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2358) KafkaConsumer.partitionsFor should never return null

2015-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640181#comment-14640181
 ] 

ASF GitHub Bot commented on KAFKA-2358:
---

GitHub user sslavic opened a pull request:

https://github.com/apache/kafka/pull/96

KAFKA-2358 KafkaConsumer.partitionsFor returns empty list for non-existing 
topic



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sslavic/kafka feature/KAFKA-2358

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/96.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #96


commit 31f70d27c88a9e76e7f812f069c6128121eeff1f
Author: Stevo Slavic ssla...@gmail.com
Date:   2015-07-24T09:03:04Z

KAFKA-2358 KafkaConsumer.partitionsFor returns empty list for non-existing 
topic




 KafkaConsumer.partitionsFor should never return null
 

 Key: KAFKA-2358
 URL: https://issues.apache.org/jira/browse/KAFKA-2358
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.3
Reporter: Stevo Slavic
Priority: Minor

 The {{KafkaConsumer.partitionsFor}} method by its signature returns a 
 {{List<PartitionInfo>}}. The problem is that when (metadata for) the topic does 
 not exist, the current implementation returns null, which is considered bad 
 practice - instead of null it should return an empty list.
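 The suggested contract amounts to something like the following sketch (the metadata lookup is a stand-in, not the real implementation):
 {code}
 import java.util.Collections;
 import java.util.List;
 import org.apache.kafka.common.PartitionInfo;

 class PartitionsForSketch {

     public List<PartitionInfo> partitionsFor(String topic) {
         List<PartitionInfo> partitions = lookupPartitions(topic);
         return partitions == null ? Collections.<PartitionInfo>emptyList() : partitions;
     }

     private List<PartitionInfo> lookupPartitions(String topic) {
         return null; // stand-in: metadata for an unknown topic
     }
 }
 {code}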



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2356) Support retrieving partitions of ConsumerRecords

2015-07-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639594#comment-14639594
 ] 

ASF GitHub Bot commented on KAFKA-2356:
---

GitHub user sslavic opened a pull request:

https://github.com/apache/kafka/pull/95

KAFKA-2356 Added support for retrieving partitions of ConsumerRecords



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sslavic/kafka feature/KAFKA-2356

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/95.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #95


commit d58201a51dab1f32fbbde9f9cce894821ff3ad92
Author: Stevo Slavic ssla...@gmail.com
Date:   2015-07-23T22:09:29Z

KAFKA-2356 Added support for retrieving partitions of ConsumerRecords




 Support retrieving partitions of ConsumerRecords
 

 Key: KAFKA-2356
 URL: https://issues.apache.org/jira/browse/KAFKA-2356
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.3
Reporter: Stevo Slavic
Priority: Trivial
  Labels: newbie

 In the new consumer on trunk, ConsumerRecords has a method to retrieve records 
 for a given TopicPartition, but there is no method to retrieve the 
 TopicPartitions included/available in the ConsumerRecords. Please have it supported.
 The method could be something like:
 {noformat}
  /**
   * Get partitions of records returned by a {@link Consumer#poll(long)} operation
   */
  public Set<TopicPartition> partitions() {
      return Collections.unmodifiableSet(this.records.keySet());
  }
 {noformat}
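 A usage sketch combining the requested accessor with the existing records(TopicPartition) lookup mentioned above (illustrative; the poll timeout and record types are placeholders):
 {code}
 import java.util.List;
 import org.apache.kafka.clients.consumer.ConsumerRecord;
 import org.apache.kafka.clients.consumer.ConsumerRecords;
 import org.apache.kafka.clients.consumer.KafkaConsumer;
 import org.apache.kafka.common.TopicPartition;

 class PerPartitionProcessing {
     void pollOnce(KafkaConsumer<String, String> consumer) {
         ConsumerRecords<String, String> records = consumer.poll(100);
         for (TopicPartition tp : records.partitions()) {          // requested accessor
             List<ConsumerRecord<String, String>> perPartition = records.records(tp);
             // process one partition's records together, e.g. for per-partition batching
         }
     }
 }
 {code}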



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2348) Drop support for Scala 2.9

2015-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640694#comment-14640694
 ] 

ASF GitHub Bot commented on KAFKA-2348:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/87


 Drop support for Scala 2.9
 --

 Key: KAFKA-2348
 URL: https://issues.apache.org/jira/browse/KAFKA-2348
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma
Assignee: Ismael Juma

 Summary of why we should drop Scala 2.9:
 * Doubles the number of builds required from 2 to 4 (2.9.1 and 2.9.2 are not 
 binary compatible).
  * Code that doesn't build with Scala 2.9 was committed to trunk weeks ago and 
  no-one seems to have noticed or cared (well, I filed 
  https://issues.apache.org/jira/browse/KAFKA-2325). Can we really support a 
  version if we don't test it?
 * New clients library is written in Java and won't be affected. It also has 
 received a lot of work and it's much improved since the last release.
 * It was released 4 years ago, it has been unsupported for a long time and 
 most projects have dropped support for it (for example, we use a different 
 version of ScalaTest for Scala 2.9)
 * Scala 2.10 introduced Futures and a few useful features like String 
 interpolation and value classes.
 * Doesn't work with Java 8 (https://issues.apache.org/jira/browse/KAFKA-2203).
 Vote thread: http://search-hadoop.com/m/uyzND1DIE422mz94I1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2366) Initial patch for Copycat

2015-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14642362#comment-14642362
 ] 

ASF GitHub Bot commented on KAFKA-2366:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/99

KAFKA-2366 [WIP]; Copycat

This is an initial patch implementing the basics of Copycat for KIP-26.

The intent here is to start a review of the key pieces of the core API and 
get a reasonably functional, baseline, non-distributed implementation of 
Copycat in place to get things rolling. The current patch has a number of known 
issues that need to be addressed before a final version:

* Some build-related issues. Specifically, requires some locally-installed 
dependencies (see below), ignores checkstyle for the runtime data library 
because it's lifted from Avro currently and likely won't last in its current 
form, and some Gradle task dependencies aren't quite right because I haven't 
gotten rid of the dependency on `core` (which should now be an easy patch since 
new consumer groups are in a much better state).
* This patch currently depends on some Confluent trunk code because I 
prototyped with our Avro serializers w/ schema-registry support. We need to 
figure out what we want to provide as an example built-in set of serializers. 
Unlike core Kafka where we could ignore the issue, providing only ByteArray or 
String serializers, this is pretty central to how Copycat works.
* This patch uses a hacked up version of Avro as its runtime data format. 
Not sure if we want to go through the entire API discussion just to get some 
basic code committed, so I filed KAFKA-2367 to handle that separately. The core 
connector APIs and the runtime data APIs are entirely orthogonal.
* This patch needs some updates to get aligned with recent new consumer 
changes (specifically, I'm aware of the ConcurrentModificationException issue 
on exit). More generally, the new consumer is in flux but Copycat depends on 
it, so there are likely to be some negative interactions.
* The layout feels a bit awkward to me right now because I ported it from a 
Maven layout. We don't have nearly the same level of granularity in Kafka 
currently (core and clients, plus the mostly ignored examples, log4j-appender, 
and a couple of contribs). We might want to reorganize, although keeping 
data+api separate from runtime and connector plugins is useful for minimizing 
dependencies.
* There are a variety of other things (e.g., I'm not happy with the 
exception hierarchy/how they are currently handled, TopicPartition doesn't 
really need to be duplicated unless we want Copycat entirely isolated from the 
Kafka APIs, etc), but I expect we'll cover those in the review.

Before commenting on the patch, it's probably worth reviewing 
https://issues.apache.org/jira/browse/KAFKA-2365 and 
https://issues.apache.org/jira/browse/KAFKA-2366 to get an idea of what I had 
in mind for a) what we ultimately want with all the Copycat patches and b) what 
we aim to cover in this initial patch. My hope is that we can use a WIP patch 
(after the current obvious deficiencies are addressed) while recognizing that 
we want to make iterative progress with a bunch of subsequent PRs.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka copycat

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/99.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #99


commit 11981d2eaa2f61e81251104d6051acf6fd3911b3
Author: Ewen Cheslack-Postava m...@ewencp.org
Date:   2015-07-24T20:20:15Z

Add copycat-data and copycat-api

commit 0233456c297c79c8f351dc7683a12b491d5682e8
Author: Ewen Cheslack-Postava m...@ewencp.org
Date:   2015-07-24T21:59:54Z

Add copycat-avro and copycat-runtime

commit e14942cb20952263c26540fc333b7e3dc624c09c
Author: Ewen Cheslack-Postava m...@ewencp.org
Date:   2015-07-25T02:52:47Z

Add Copycat file connector.

commit 31cd1caf3c48417bcfb56b8c85dfd2419712953c
Author: Ewen Cheslack-Postava m...@ewencp.org
Date:   2015-07-26T20:48:00Z

Add CLI tools for Copycat.

commit 4a9b4f3c671bbba3b5d05a2ac6fed65b018649ee
Author: Ewen Cheslack-Postava m...@ewencp.org
Date:   2015-07-26T21:03:52Z

Add some helpful Copycat-specific build and test targets that cover all 
Copycat packages.




 Initial patch for Copycat
 -

 Key: KAFKA-2366
 URL: https://issues.apache.org/jira/browse/KAFKA-2366
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.8.3


 This covers the 

[jira] [Commented] (KAFKA-2055) ConsumerBounceTest.testSeekAndCommitWithBrokerFailures transient failure

2015-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641033#comment-14641033
 ] 

ASF GitHub Bot commented on KAFKA-2055:
---

Github user lvfangmin closed the pull request at:

https://github.com/apache/kafka/pull/60


 ConsumerBounceTest.testSeekAndCommitWithBrokerFailures transient failure
 

 Key: KAFKA-2055
 URL: https://issues.apache.org/jira/browse/KAFKA-2055
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
Assignee: Fangmin Lv
  Labels: newbie
 Attachments: KAFKA-2055.patch, KAFKA-2055.patch


 {code}
 kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures FAILED
 java.lang.AssertionError: expected:<1000> but was:<976>
 at org.junit.Assert.fail(Assert.java:92)
 at org.junit.Assert.failNotEquals(Assert.java:689)
 at org.junit.Assert.assertEquals(Assert.java:127)
 at org.junit.Assert.assertEquals(Assert.java:514)
 at org.junit.Assert.assertEquals(Assert.java:498)
 at 
 kafka.api.ConsumerBounceTest.seekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:117)
 at 
 kafka.api.ConsumerBounceTest.testSeekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:98)
 kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures FAILED
 java.lang.AssertionError: expected:<1000> but was:<913>
 at org.junit.Assert.fail(Assert.java:92)
 at org.junit.Assert.failNotEquals(Assert.java:689)
 at org.junit.Assert.assertEquals(Assert.java:127)
 at org.junit.Assert.assertEquals(Assert.java:514)
 at org.junit.Assert.assertEquals(Assert.java:498)
 at 
 kafka.api.ConsumerBounceTest.seekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:117)
 at 
 kafka.api.ConsumerBounceTest.testSeekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:98)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2055) ConsumerBounceTest.testSeekAndCommitWithBrokerFailures transient failure

2015-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641034#comment-14641034
 ] 

ASF GitHub Bot commented on KAFKA-2055:
---

GitHub user lvfangmin opened a pull request:

https://github.com/apache/kafka/pull/98

KAFKA-2055; Fix transient ConsumerBounceTest.testSeekAndCommitWithBrokerFailures failure;

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lvfangmin/kafka KAFKA-2055

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/98.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #98


commit 057a1f1539143b4d12a899267ccd8a223a021d26
Author: lvfangmin lvfang...@gmail.com
Date:   2015-07-24T19:11:31Z

KAFKA-2055; Fix transient 
ConsumerBounceTest.testSeekAndCommitWithBrokerFailures failure;




 ConsumerBounceTest.testSeekAndCommitWithBrokerFailures transient failure
 

 Key: KAFKA-2055
 URL: https://issues.apache.org/jira/browse/KAFKA-2055
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
Assignee: Fangmin Lv
  Labels: newbie
 Attachments: KAFKA-2055.patch, KAFKA-2055.patch


 {code}
 kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures FAILED
 java.lang.AssertionError: expected:<1000> but was:<976>
 at org.junit.Assert.fail(Assert.java:92)
 at org.junit.Assert.failNotEquals(Assert.java:689)
 at org.junit.Assert.assertEquals(Assert.java:127)
 at org.junit.Assert.assertEquals(Assert.java:514)
 at org.junit.Assert.assertEquals(Assert.java:498)
 at 
 kafka.api.ConsumerBounceTest.seekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:117)
 at 
 kafka.api.ConsumerBounceTest.testSeekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:98)
 kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures FAILED
 java.lang.AssertionError: expected:<1000> but was:<913>
 at org.junit.Assert.fail(Assert.java:92)
 at org.junit.Assert.failNotEquals(Assert.java:689)
 at org.junit.Assert.assertEquals(Assert.java:127)
 at org.junit.Assert.assertEquals(Assert.java:514)
 at org.junit.Assert.assertEquals(Assert.java:498)
 at 
 kafka.api.ConsumerBounceTest.seekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:117)
 at 
 kafka.api.ConsumerBounceTest.testSeekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:98)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2344) kafka-merge-pr improvements

2015-07-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14639035#comment-14639035
 ] 

ASF GitHub Bot commented on KAFKA-2344:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/90


 kafka-merge-pr improvements
 ---

 Key: KAFKA-2344
 URL: https://issues.apache.org/jira/browse/KAFKA-2344
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Ismael Juma
Priority: Minor
 Fix For: 0.8.3


 Two suggestions for the new pr-merge tool:
 * The tool doesn't allow crediting reviewers while committing. I thought the 
 review credits were a nice habit of the Kafka community and I hate losing it. 
 OTOH, I don't want to force-push to trunk just to add credits. Perhaps the 
 tool can ask about committers?
 * Looks like the tool doesn't automatically resolve the JIRA. Would be nice 
 if it did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2321) Introduce CONTRIBUTING.md

2015-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14640363#comment-14640363
 ] 

ASF GitHub Bot commented on KAFKA-2321:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/97

KAFKA-2321; Introduce CONTRIBUTING.md



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-2321

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/97.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #97


commit a4d1f9c10732eb240ab911c011f1b03c6ca32771
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-08T08:00:29Z

KAFKA-2321; Introduce CONTRIBUTING.md




 Introduce CONTRIBUTING.md
 -

 Key: KAFKA-2321
 URL: https://issues.apache.org/jira/browse/KAFKA-2321
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma

 This file is displayed when people create a pull request in GitHub. It should 
 link to the relevant pages in the wiki and website.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1595) Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount

2015-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14626050#comment-14626050
 ] 

ASF GitHub Bot commented on KAFKA-1595:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/74

KAFKA-1595; Remove deprecated and slower scala JSON parser

A thin wrapper over Jackson's Tree Model API is used as the replacement. 
This wrapper
increases safety while providing a simple, but powerful API through the 
usage of the
`DecodeJson` type class. Even though this has a maintenance cost, it makes 
the API
much more convenient from Scala. A number of tests were added to verify the
behaviour of this wrapper.

The Scala module for Jackson doesn't provide any help for our current 
usage, so we don't
depend on it.

An attempt has been made to maintain the existing behaviour regarding when 
exceptions
are thrown. There are a number of cases where `JsonMappingException` will 
be thrown
instead of `ClassCastException`, however. It is expected that users would 
not try to catch
`ClassCastException`.
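
For readers unfamiliar with Jackson's Tree Model, here is a minimal Java sketch of the
API the wrapper described above builds on (the `DecodeJson` wrapper itself is not shown,
and the sample document is made up):

{code}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class TreeModelExample {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Parse a small, made-up document into Jackson's tree model.
        JsonNode root = mapper.readTree("{\"version\":1,\"partitions\":{\"topic1\":[0,1,2]}}");
        int version = root.path("version").asInt();                    // 1
        JsonNode partitions = root.path("partitions").path("topic1");  // array node [0,1,2]
        System.out.println(version + " -> " + partitions);
    }
}
{code}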

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-1595-remove-deprecated-json-parser-jackson

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/74.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #74


commit 61f20cc04a89200c28eb77137671235516c81847
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-04-20T20:53:54Z

Introduce `testJsonParse`

Simple test that shows existing behaviour.

commit 4ca0feb37e8be2d388b60efacc19bc6788b6
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-04-21T00:15:02Z

KAFKA-1595; Remove deprecated and slower scala JSON parser from 
kafka.consumer.TopicCount

A thin wrapper over Jackson's Tree Model API is used as the replacement. 
This wrapper
increases safety while providing a simple, but powerful API through the 
usage of the
`DecodeJson` type class. Even though this has a maintenance cost, it makes 
the API
much more convenient from Scala. A number of tests were added to verify the
behaviour of this wrapper.

The Scala module for Jackson doesn't provide any help for our current 
usage, so we don't
depend on it.

An attempt has been made to maintain the existing behaviour regarding when 
exceptions
are thrown. There are a number of cases where `JsonMappingException` will 
be thrown
instead of `ClassCastException`, however. It is expected that users would 
not try to catch
`ClassCastException`.

commit f401990f13bddbd3d97e05756cf2f1abf367677e
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-04-21T00:23:39Z

Minor clean-ups in `Json.encode`




 Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount
 -

 Key: KAFKA-1595
 URL: https://issues.apache.org/jira/browse/KAFKA-1595
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.1.1
Reporter: Jagbir
Assignee: Ismael Juma
  Labels: newbie
 Fix For: 0.8.3

 Attachments: KAFKA-1595.patch


 The following issue is created as a follow up suggested by Jun Rao
 in a kafka news group message with the Subject
 Blocking Recursive parsing from 
 kafka.consumer.TopicCount$.constructTopicCount
 SUMMARY:
 An issue was detected in a typical cluster of 3 kafka instances backed
 by 3 zookeeper instances (kafka version 0.8.1.1, scala version 2.10.3,
 java version 1.7.0_65). On consumer end, when consumers get recycled,
 there is a troubling JSON parsing recursion which takes a busy lock and
 blocks consumers thread pool.
 In 0.8.1.1 scala client library ZookeeperConsumerConnector.scala:355 takes
 a global lock (0xd3a7e1d0) during the rebalance, and fires an
 expensive JSON parsing, while keeping the other consumers from shutting
 down, see, e.g,
 at 
 kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:161)
 The deep recursive JSON parsing should be deprecated in favor
 of a better JSON parser, see, e.g,
 http://engineering.ooyala.com/blog/comparing-scala-json-libraries?
 DETAILS:
 The first dump is for a recursive blocking thread holding the lock for 
 0xd3a7e1d0
 and the subsequent dump is for a waiting thread.
 (Please grep for 0xd3a7e1d0 to see the locked object.)
 -8-
 Sa863f22b1e5hjh6788991800900b34545c_profile-a-prod1-s-140789080845312-c397945e8_watcher_executor
 prio=10 tid=0x7f24dc285800 

[jira] [Commented] (KAFKA-2123) Make new consumer offset commit API use callback + future

2015-07-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14629067#comment-14629067
 ] 

ASF GitHub Bot commented on KAFKA-2123:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/79


 Make new consumer offset commit API use callback + future
 -

 Key: KAFKA-2123
 URL: https://issues.apache.org/jira/browse/KAFKA-2123
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Ewen Cheslack-Postava
Assignee: Jason Gustafson
Priority: Critical
 Fix For: 0.8.3

 Attachments: KAFKA-2123.patch, KAFKA-2123.patch, 
 KAFKA-2123_2015-04-30_11:23:05.patch, KAFKA-2123_2015-05-01_19:33:19.patch, 
 KAFKA-2123_2015-05-04_09:39:50.patch, KAFKA-2123_2015-05-04_22:51:48.patch, 
 KAFKA-2123_2015-05-29_11:11:05.patch, KAFKA-2123_2015-07-11_17:33:59.patch, 
 KAFKA-2123_2015-07-13_18:45:08.patch, KAFKA-2123_2015-07-14_13:20:25.patch, 
 KAFKA-2123_2015-07-14_18:21:38.patch


 The current version of the offset commit API in the new consumer is
 void commit(offsets, commit type)
 where the commit type is either sync or async. This means you need to use 
 sync if you ever want confirmation that the commit succeeded. Some 
 applications will want to use asynchronous offset commit, but be able to tell 
 when the commit completes.
 This is basically the same problem that had to be fixed going from old 
 consumer -> new consumer and I'd suggest the same fix using a callback + 
 future combination. The new API would be
 Future<Void> commit(Map<TopicPartition, Long> offsets, ConsumerCommitCallback 
 callback);
 where ConsumerCommitCallback contains a single method:
 public void onCompletion(Exception exception);
 We can provide shorthand variants of commit() for eliding the different 
 arguments.
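 A minimal Java sketch of the proposed shape, using only the names from the description 
 above (illustrative, not the final interface):
{code}
import java.util.Map;
import java.util.concurrent.Future;
import org.apache.kafka.common.TopicPartition;

// Callback as described above: a single completion method, exception is null on success.
interface ConsumerCommitCallback {
    void onCompletion(Exception exception);
}

// The proposed commit signature: asynchronous, with both a Future and a callback
// so callers can either block on the result or react when the commit completes.
interface OffsetCommitter {
    Future<Void> commit(Map<TopicPartition, Long> offsets, ConsumerCommitCallback callback);
}
{code}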



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2341) Need Standard Deviation Metrics in MetricsBench

2015-07-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14629454#comment-14629454
 ] 

ASF GitHub Bot commented on KAFKA-2341:
---

GitHub user sebadiaz opened a pull request:

https://github.com/apache/kafka/pull/80

KAFKA-2341 Add standard deviation as metric

KAFKA-2341 Add standard deviation as metric

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sebadiaz/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/80.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #80


commit 585c9bdfc51345f4b9a9d8367c2e67b0c6a8aa34
Author: Sebastien Diaz sebastien.d...@misys.com
Date:   2015-07-16T08:55:16Z

KAFKA-2341 Add standard deviation as metric




 Need Standard Deviation Metrics in MetricsBench
 ---

 Key: KAFKA-2341
 URL: https://issues.apache.org/jira/browse/KAFKA-2341
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.2.1
Reporter: sebastien diaz
Priority: Minor

 The standard deviation is a measure that is used to quantify the amount of 
 variation or dispersion of a set of data values.
 Very useful. Could be added to other sensors.
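 For reference, a plain Java sketch of the quantity being requested (how a metrics 
 sensor would track it incrementally is a separate question):
{code}
public class StdDevExample {
    // Two-pass population standard deviation over a finished sample.
    static double stdDev(double[] values) {
        double sum = 0.0;
        for (double v : values) sum += v;
        double mean = sum / values.length;
        double sqDiffSum = 0.0;
        for (double v : values) sqDiffSum += (v - mean) * (v - mean);
        return Math.sqrt(sqDiffSum / values.length);
    }

    public static void main(String[] args) {
        System.out.println(stdDev(new double[]{2, 4, 4, 4, 5, 5, 7, 9})); // prints 2.0
    }
}
{code}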



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2320) Configure GitHub pull request build in Jenkins

2015-07-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14629699#comment-14629699
 ] 

ASF GitHub Bot commented on KAFKA-2320:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/81

KAFKA-2320; Test commit



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-2320

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/81.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #81


commit 51ee14651b663dcb141826d9674851baf4540754
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-14T23:26:55Z

KAFKA-2320; Test commit




 Configure GitHub pull request build in Jenkins
 --

 Key: KAFKA-2320
 URL: https://issues.apache.org/jira/browse/KAFKA-2320
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma

 The details are available in the following Apache Infra post:
 https://blogs.apache.org/infra/entry/github_pull_request_builds_now
 I paste the instructions here as well for convenience:
 {quote}
 Here’s what you need to do to set it up:
 * Create a new job, probably copied from an existing job.
 * Make sure you’re not doing any “mvn deploy” or equivalent in the new job - 
 this job shouldn’t be deploying any artifacts to Nexus, etc.
 * Check the “Enable Git validated merge support” box - you can leave the 
 first few fields set to their default, since we’re not actually pushing 
 anything. This is just required to get the pull request builder to register 
 correctly.
 * Set the “GitHub project” field to the HTTP URL for your repository - 
 i.e., http://github.com/apache/kafka/ - make sure it ends with that trailing 
 slash and doesn’t include .git, etc.
 * In the Git SCM section of the job configuration, set the repository URL to 
 point to the GitHub git:// URL for your repository - i.e., 
 git://github.com/apache/kafka.git.
 * You should be able to leave the “Branches to build” field as is - this 
 won’t be relevant anyway.
 * Click the “Add” button in “Additional Behaviors” and choose “Strategy for 
 choosing what to build”. Make sure the choosing strategy is set to “Build 
 commits submitted for validated merge”.
 * Uncheck any existing build triggers - this shouldn’t be running on a 
 schedule, polling, running when SNAPSHOT dependencies are built, etc.
 * Check the “Build pull requests to the repository” option in the build 
 triggers.
 * Optionally change anything else in the job that you’d like to be different 
 for a pull request build than for a normal build - i.e., any downstream build 
 triggers should probably be removed,  you may want to change email 
 recipients, etc.
 * Save, and you’re done!
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2320) Configure GitHub pull request build in Jenkins

2015-07-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14629793#comment-14629793
 ] 

ASF GitHub Bot commented on KAFKA-2320:
---

Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/81


 Configure GitHub pull request build in Jenkins
 --

 Key: KAFKA-2320
 URL: https://issues.apache.org/jira/browse/KAFKA-2320
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma
Assignee: Jun Rao

 The details are available in the following Apache Infra post:
 https://blogs.apache.org/infra/entry/github_pull_request_builds_now
 I paste the instructions here as well for convenience:
 {quote}
 Here’s what you need to do to set it up:
 * Create a new job, probably copied from an existing job.
 * Make sure you’re not doing any “mvn deploy” or equivalent in the new job - 
 this job shouldn’t be deploying any artifacts to Nexus, etc.
 * Check the “Enable Git validated merge support” box - you can leave the 
 first few fields set to their default, since we’re not actually pushing 
 anything. This is just required to get the pull request builder to register 
 correctly.
 * Set the “GitHub project” field to the HTTP URL for your repository - 
 i.e., http://github.com/apache/kafka/ - make sure it ends with that trailing 
 slash and doesn’t include .git, etc.
 * In the Git SCM section of the job configuration, set the repository URL to 
 point to the GitHub git:// URL for your repository - i.e., 
 git://github.com/apache/kafka.git.
 * You should be able to leave the “Branches to build” field as is - this 
 won’t be relevant anyway.
 * Click the “Add” button in “Additional Behaviors” and choose “Strategy for 
 choosing what to build”. Make sure the choosing strategy is set to “Build 
 commits submitted for validated merge”.
 * Uncheck any existing build triggers - this shouldn’t be running on a 
 schedule, polling, running when SNAPSHOT dependencies are built, etc.
 * Check the “Build pull requests to the repository” option in the build 
 triggers.
 * Optionally change anything else in the job that you’d like to be different 
 for a pull request build than for a normal build - i.e., any downstream build 
 triggers should probably be removed,  you may want to change email 
 recipients, etc.
 * Save, and you’re done!
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1595) Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount

2015-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14627269#comment-14627269
 ] 

ASF GitHub Bot commented on KAFKA-1595:
---

Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/74


 Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount
 -

 Key: KAFKA-1595
 URL: https://issues.apache.org/jira/browse/KAFKA-1595
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.1.1
Reporter: Jagbir
Assignee: Ismael Juma
  Labels: newbie
 Fix For: 0.8.3

 Attachments: KAFKA-1595.patch


 The following issue is created as a follow up suggested by Jun Rao
 in a kafka news group message with the Subject
 Blocking Recursive parsing from 
 kafka.consumer.TopicCount$.constructTopicCount
 SUMMARY:
 An issue was detected in a typical cluster of 3 kafka instances backed
 by 3 zookeeper instances (kafka version 0.8.1.1, scala version 2.10.3,
 java version 1.7.0_65). On consumer end, when consumers get recycled,
 there is a troubling JSON parsing recursion which takes a busy lock and
 blocks consumers thread pool.
 In 0.8.1.1 scala client library ZookeeperConsumerConnector.scala:355 takes
 a global lock (0xd3a7e1d0) during the rebalance, and fires an
 expensive JSON parsing, while keeping the other consumers from shutting
 down, see, e.g,
 at 
 kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:161)
 The deep recursive JSON parsing should be deprecated in favor
 of a better JSON parser, see, e.g,
 http://engineering.ooyala.com/blog/comparing-scala-json-libraries?
 DETAILS:
 The first dump is for a recursive blocking thread holding the lock for 
 0xd3a7e1d0
 and the subsequent dump is for a waiting thread.
 (Please grep for 0xd3a7e1d0 to see the locked object.)
 -8-
 Sa863f22b1e5hjh6788991800900b34545c_profile-a-prod1-s-140789080845312-c397945e8_watcher_executor
 prio=10 tid=0x7f24dc285800 nid=0xda9 runnable [0x7f249e40b000]
 java.lang.Thread.State: RUNNABLE
 at 
 scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.p$7(Parsers.scala:722)
 at 
 scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.continue$1(Parsers.scala:726)
 at 
 scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.apply(Parsers.scala:737)
 at 
 scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.apply(Parsers.scala:721)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Success.flatMapWithNext(Parsers.scala:142)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
 at 

[jira] [Commented] (KAFKA-2145) An option to add topic owners.

2015-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14627359#comment-14627359
 ] 

ASF GitHub Bot commented on KAFKA-2145:
---

GitHub user Parth-Brahmbhatt opened a pull request:

https://github.com/apache/kafka/pull/77

KAFKA-2145: Add a log config so users can define topic owners.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Parth-Brahmbhatt/kafka KAFKA-2145

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/77.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #77


commit de9c4efac53b52923bb2002536b4a2a7725541e9
Author: Parth Brahmbhatt brahmbhatt.pa...@gmail.com
Date:   2015-07-15T00:54:40Z

KAFKA-2145: Add a log config so users can define topic owners.




 An option to add topic owners. 
 ---

 Key: KAFKA-2145
 URL: https://issues.apache.org/jira/browse/KAFKA-2145
 Project: Kafka
  Issue Type: Improvement
Reporter: Parth Brahmbhatt
Assignee: Parth Brahmbhatt

 We need to expose a way for users to identify the users/groups that share 
 ownership of a topic. We discussed adding this as part of 
 https://issues.apache.org/jira/browse/KAFKA-2035 and agreed that it will be 
 simpler to add owner as a logconfig. 
 The owner field can be used for auditing and also by authorization layer to 
 grant access without having to explicitly configure acls. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2335) Javadoc for Consumer says that it's thread-safe

2015-07-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628373#comment-14628373
 ] 

ASF GitHub Bot commented on KAFKA-2335:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/78

KAFKA-2335; fix comment about thread safety



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2335

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/78.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #78


commit ee63ed5b5537d31566a08bd2772db83c2fdc9d11
Author: Jason Gustafson ja...@confluent.io
Date:   2015-07-15T17:10:01Z

KAFKA-2335; fix comment about thread safety




 Javadoc for Consumer says that it's thread-safe
 ---

 Key: KAFKA-2335
 URL: https://issues.apache.org/jira/browse/KAFKA-2335
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma
Assignee: Jason Gustafson

 This looks like it was left there by mistake:
 {quote}
  * The consumer is thread safe but generally will be used only from within a 
 single thread. The consumer client has no threads of it's own, all work is 
 done in the caller's thread when calls are made on the various methods 
 exposed.
 {quote}
 A few paragraphs below it says:
 {quote}
 The Kafka consumer is NOT thread-safe. All network I/O happens in the thread 
 of the application making the call. It is the responsibility of the user to 
 ensure that multi-threaded access is properly synchronized. Un-synchronized 
 access will result in {@link ConcurrentModificationException}.
 {quote}
 This matches what the code does, so the former quoted section should probably 
 be deleted.
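 A minimal sketch of one way callers can satisfy that contract, wrapping every consumer 
 call in a single lock (illustrative only; the `poll(long)` signature is the one assumed 
 for the new consumer at this point):
{code}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SynchronizedConsumer {
    private final KafkaConsumer<byte[], byte[]> consumer;
    private final Object lock = new Object();

    public SynchronizedConsumer(Properties config) {
        this.consumer = new KafkaConsumer<>(config);
    }

    // Every call into the (non-thread-safe) consumer goes through the same lock.
    public ConsumerRecords<byte[], byte[]> poll(long timeoutMs) {
        synchronized (lock) {
            return consumer.poll(timeoutMs);
        }
    }

    public void close() {
        synchronized (lock) {
            consumer.close();
        }
    }
}
{code}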



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2335) Javadoc for Consumer says that it's thread-safe

2015-07-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628893#comment-14628893
 ] 

ASF GitHub Bot commented on KAFKA-2335:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/78


 Javadoc for Consumer says that it's thread-safe
 ---

 Key: KAFKA-2335
 URL: https://issues.apache.org/jira/browse/KAFKA-2335
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Ismael Juma
Assignee: Jason Gustafson
 Fix For: 0.8.3


 This looks like it was left there by mistake:
 {quote}
  * The consumer is thread safe but generally will be used only from within a 
 single thread. The consumer client has no threads of it's own, all work is 
 done in the caller's thread when calls are made on the various methods 
 exposed.
 {quote}
 A few paragraphs below it says:
 {quote}
 The Kafka consumer is NOT thread-safe. All network I/O happens in the thread 
 of the application making the call. It is the responsibility of the user to 
 ensure that multi-threaded access is properly synchronized. Un-synchronized 
 access will result in {@link ConcurrentModificationException}.
 {quote}
 This matches what the code does, so the former quoted section should probably 
 be deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2123) Make new consumer offset commit API use callback + future

2015-07-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628994#comment-14628994
 ] 

ASF GitHub Bot commented on KAFKA-2123:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/79

[Minor] fix new consumer heartbeat reschedule bug

This commit fixes a minor issue introduced in the patch for KAFKA-2123. The 
schedule method requires the time the task should be executed, not a delay. 
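
To make the distinction concrete, here is a small Java sketch with a hypothetical 
scheduler interface (not Kafka's actual class) whose schedule() takes an absolute 
execution time:

{code}
public class ScheduleAtTimeExample {
    // Hypothetical scheduler in the style described above: schedule() takes an absolute time.
    interface Scheduler {
        void schedule(Runnable task, long executeAtMs);
    }

    public static void main(String[] args) {
        Scheduler scheduler = (task, executeAtMs) ->
                System.out.println("task due at " + executeAtMs + " ms since the epoch");

        long now = System.currentTimeMillis();
        long heartbeatIntervalMs = 3000;
        Runnable heartbeat = () -> System.out.println("send heartbeat");

        scheduler.schedule(heartbeat, heartbeatIntervalMs);       // bug: a delay passed as a timestamp
        scheduler.schedule(heartbeat, now + heartbeatIntervalMs); // fix: absolute execution time
    }
}
{code}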

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2123-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/79.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #79


commit 6eb7ec648fdd95e9c73cf6c452c425527e6c800d
Author: Jason Gustafson ja...@confluent.io
Date:   2015-07-16T00:10:12Z

[Minor] fix new consumer heartbeat reschedule bug




 Make new consumer offset commit API use callback + future
 -

 Key: KAFKA-2123
 URL: https://issues.apache.org/jira/browse/KAFKA-2123
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Ewen Cheslack-Postava
Assignee: Jason Gustafson
Priority: Critical
 Fix For: 0.8.3

 Attachments: KAFKA-2123.patch, KAFKA-2123.patch, 
 KAFKA-2123_2015-04-30_11:23:05.patch, KAFKA-2123_2015-05-01_19:33:19.patch, 
 KAFKA-2123_2015-05-04_09:39:50.patch, KAFKA-2123_2015-05-04_22:51:48.patch, 
 KAFKA-2123_2015-05-29_11:11:05.patch, KAFKA-2123_2015-07-11_17:33:59.patch, 
 KAFKA-2123_2015-07-13_18:45:08.patch, KAFKA-2123_2015-07-14_13:20:25.patch, 
 KAFKA-2123_2015-07-14_18:21:38.patch


 The current version of the offset commit API in the new consumer is
 void commit(offsets, commit type)
 where the commit type is either sync or async. This means you need to use 
 sync if you ever want confirmation that the commit succeeded. Some 
 applications will want to use asynchronous offset commit, but be able to tell 
 when the commit completes.
 This is basically the same problem that had to be fixed going from old 
 consumer -> new consumer and I'd suggest the same fix using a callback + 
 future combination. The new API would be
 Future<Void> commit(Map<TopicPartition, Long> offsets, ConsumerCommitCallback 
 callback);
 where ConsumerCommitCallback contains a single method:
 public void onCompletion(Exception exception);
 We can provide shorthand variants of commit() for eliding the different 
 arguments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2324) Update to Scala 2.11.7

2015-07-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14630766#comment-14630766
 ] 

ASF GitHub Bot commented on KAFKA-2324:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/82


 Update to Scala 2.11.7
 --

 Key: KAFKA-2324
 URL: https://issues.apache.org/jira/browse/KAFKA-2324
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor

 There are a number of fixes and improvements in the Scala 2.11.7 release, 
 which is backwards and forwards compatible with 2.11.6:
 http://www.scala-lang.org/news/2.11.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2232) make MockProducer generic

2015-07-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14630441#comment-14630441
 ] 

ASF GitHub Bot commented on KAFKA-2232:
---

Github user apakulov closed the pull request at:

https://github.com/apache/kafka/pull/68


 make MockProducer generic
 -

 Key: KAFKA-2232
 URL: https://issues.apache.org/jira/browse/KAFKA-2232
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
Reporter: Jun Rao
Assignee: Alexander Pakulov
  Labels: newbie
 Fix For: 0.8.3

 Attachments: KAFKA-2232.patch, KAFKA-2232.patch, 
 KAFKA-2232_2015-06-12_14:30:30.patch


 Currently, MockProducer implements Producer<byte[], byte[]>. Instead, we 
 should implement MockProducer<K, V>.
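 A short Java sketch of how a test could look once MockProducer is generic (the exact 
 constructor shown here is an assumption based on this change, not a confirmed signature):
{code}
import org.apache.kafka.clients.producer.MockProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MockProducerGenericExample {
    public static void main(String[] args) {
        // autoComplete=true makes send() succeed immediately; the serializer arguments are assumptions.
        MockProducer<String, String> producer =
                new MockProducer<>(true, new StringSerializer(), new StringSerializer());

        producer.send(new ProducerRecord<>("test-topic", "key", "value"));

        // The mock records everything it was asked to send, so tests can assert on it.
        System.out.println(producer.history());
    }
}
{code}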



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2324) Update to Scala 2.11.7

2015-07-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14630567#comment-14630567
 ] 

ASF GitHub Bot commented on KAFKA-2324:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/82

KAFKA-2324; Update to Scala 2.11.7



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-2324

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/82.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #82


commit d71bf5cfc430c688ad7229ec921881296c77965b
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-08T13:17:58Z

KAFKA-2324; Update to Scala 2.11.7




 Update to Scala 2.11.7
 --

 Key: KAFKA-2324
 URL: https://issues.apache.org/jira/browse/KAFKA-2324
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor

 There are a number of fixes and improvements in the Scala 2.11.7 release, 
 which is backwards and forwards compatible with 2.11.6:
 http://www.scala-lang.org/news/2.11.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2328) merge-kafka-pr.py script should not leave user in a detached branch

2015-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14631543#comment-14631543
 ] 

ASF GitHub Bot commented on KAFKA-2328:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/84

KAFKA-2328; merge-kafka-pr.py script should not leave user in a detached 
branch

The right command to get the branch name is `git rev-parse --abbrev-ref 
HEAD` instead of `git rev-parse HEAD`. The latter gives the commit hash, which 
causes a detached branch when we check out to it. This seems like a bug we 
inherited from the Spark script.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2328-merge-script-no-detached-branch

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/84.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #84


commit ae201dd5ef934443fe11b98294f17b7ddd9d6d72
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-17T16:29:16Z

KAFKA-2328; merge-kafka-pr.py script should not leave user in a detached 
branch




 merge-kafka-pr.py script should not leave user in a detached branch
 ---

 Key: KAFKA-2328
 URL: https://issues.apache.org/jira/browse/KAFKA-2328
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor

 [~gwenshap] asked:
 If I start a merge and cancel (say, by choosing 'n' when asked if I want to 
 proceed), I'm left on a detached branch. Any chance the script can put me 
 back in the original branch? or in trunk?
 Reference 
 https://issues.apache.org/jira/browse/KAFKA-2187?focusedCommentId=14621243page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14621243



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1595) Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount

2015-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14631210#comment-14631210
 ] 

ASF GitHub Bot commented on KAFKA-1595:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/83

KAFKA-1595; Remove deprecated and slower scala JSON parser

Tested that we only use Jackson methods introduced in 2.0 in the main 
codebase by compiling it with the older version locally. We use a constructor 
introduced in 2.4 in one test, but I didn't remove it as it seemed harmless. 
The reasoning for this is explained in the mailing list thread:

http://search-hadoop.com/m/uyzND1FWbWw1qUbWe

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-1595-remove-deprecated-json-parser-jackson

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/83.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #83


commit c4af0bccc7b7bb04b4ccd3499a73d7dd1edaaa65
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-17T10:43:22Z

Update to JUnit 4.12.

It includes `assertNotEquals`, which is used in a subsequent
commit.

commit 19033ed3a974a7e35f97ddb35463d6a0a24eab71
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-17T11:03:00Z

Introduce `testJsonParse`

Simple test that shows existing behaviour.

commit 08ea63deebe88a77f4c54f01eac8cdcda8bb1b01
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-17T11:05:00Z

KAFKA-1595; Remove deprecated and slower scala JSON parser from 
kafka.consumer.TopicCount

A thin wrapper over Jackson's Tree Model API is used as the replacement. 
This wrapper
increases safety while providing a simple, but powerful API through the 
usage of the
`DecodeJson` type class. Even though this has a maintenance cost, it makes 
the API
much more convenient from Scala. A number of tests were added to verify the
behaviour of this wrapper.

The Scala module for Jackson doesn't provide any help for our current 
usage, so we don't
depend on it.

An attempt has been made to maintain the existing behaviour regarding when 
exceptions
are thrown. There are a number of cases where `JsonMappingException` will 
be thrown
instead of `ClassCastException`, however. It is expected that users would 
not try to catch
`ClassCastException`.

commit 8cfdec90a2d0c5bbb156512e92fa2cb3ff714d0d
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-17T11:06:00Z

Minor clean-ups in `Json.encode`




 Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount
 -

 Key: KAFKA-1595
 URL: https://issues.apache.org/jira/browse/KAFKA-1595
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.1.1
Reporter: Jagbir
Assignee: Ismael Juma
  Labels: newbie
 Fix For: 0.8.3

 Attachments: KAFKA-1595.patch


 The following issue is created as a follow up suggested by Jun Rao
 in a kafka news group message with the Subject
 Blocking Recursive parsing from 
 kafka.consumer.TopicCount$.constructTopicCount
 SUMMARY:
 An issue was detected in a typical cluster of 3 kafka instances backed
 by 3 zookeeper instances (kafka version 0.8.1.1, scala version 2.10.3,
 java version 1.7.0_65). On consumer end, when consumers get recycled,
 there is a troubling JSON parsing recursion which takes a busy lock and
 blocks consumers thread pool.
 In 0.8.1.1 scala client library ZookeeperConsumerConnector.scala:355 takes
 a global lock (0xd3a7e1d0) during the rebalance, and fires an
 expensive JSON parsing, while keeping the other consumers from shutting
 down, see, e.g,
 at 
 kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:161)
 The deep recursive JSON parsing should be deprecated in favor
 of a better JSON parser, see, e.g,
 http://engineering.ooyala.com/blog/comparing-scala-json-libraries?
 DETAILS:
 The first dump is for a recursive blocking thread holding the lock for 
 0xd3a7e1d0
 and the subsequent dump is for a waiting thread.
 (Please grep for 0xd3a7e1d0 to see the locked object.)
 -8-
 Sa863f22b1e5hjh6788991800900b34545c_profile-a-prod1-s-140789080845312-c397945e8_watcher_executor
 prio=10 tid=0x7f24dc285800 nid=0xda9 runnable [0x7f249e40b000]
 java.lang.Thread.State: RUNNABLE
 at 
 scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.p$7(Parsers.scala:722)
 at 
 

[jira] [Commented] (KAFKA-2348) Drop support for Scala 2.9

2015-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633397#comment-14633397
 ] 

ASF GitHub Bot commented on KAFKA-2348:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/87

KAFKA-2348; Drop support for Scala 2.9

`testAll` passed locally.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2348-drop-support-for-scala-2.9

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/87.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #87


commit 00ac57ac12ce56d06311845916cae45a9db48d5e
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-18T14:57:16Z

KAFKA-2348; Drop support for Scala 2.9




 Drop support for Scala 2.9
 --

 Key: KAFKA-2348
 URL: https://issues.apache.org/jira/browse/KAFKA-2348
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma
Assignee: Ismael Juma

 Summary of why we should drop Scala 2.9:
 * Doubles the number of builds required from 2 to 4 (2.9.1 and 2.9.2 are not 
 binary compatible).
 * Code that doesn't build with Scala 2.9 was committed to trunk weeks ago 
 and no-one seems to have noticed or cared (well, I filed 
 https://issues.apache.org/jira/browse/KAFKA-2325). Can we really support a 
 version if we don't test it?
 * New clients library is written in Java and won't be affected. It also has 
 received a lot of work and it's much improved since the last release.
 * It was released 4 years ago, has been unsupported for a long time, and 
 most projects have dropped support for it (for example, we use a different 
 version of ScalaTest for Scala 2.9)
 * Scala 2.10 introduced Futures and a few useful features like String 
 interpolation and value classes.
 * Doesn't work with Java 8 (https://issues.apache.org/jira/browse/KAFKA-2203).
 Vote thread: http://search-hadoop.com/m/uyzND1DIE422mz94I1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-294) Path length must be > 0 error during startup

2015-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633653#comment-14633653
 ] 

ASF GitHub Bot commented on KAFKA-294:
--

Github user fsaintjacques closed the pull request at:

https://github.com/apache/kafka/pull/2


 Path length must be > 0 error during startup
 --

 Key: KAFKA-294
 URL: https://issues.apache.org/jira/browse/KAFKA-294
 Project: Kafka
  Issue Type: Bug
Reporter: Thomas Dudziak
 Fix For: 0.8.2.0


 When starting Kafka 0.7.0 using zkclient-0.1.jar, I get this error:
 INFO 2012-03-06 02:39:04,072  main kafka.server.KafkaZooKeeper Registering 
 broker /brokers/ids/1
 FATAL 2012-03-06 02:39:04,111  main kafka.server.KafkaServer Fatal error 
 during startup.
 java.lang.IllegalArgumentException: Path length must be > 0
 at 
 org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:48)
 at 
 org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:35)
 at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:620)
 at org.I0Itec.zkclient.ZkConnection.create(ZkConnection.java:87)
 at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:308)
 at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:304)
 at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
 at org.I0Itec.zkclient.ZkClient.create(ZkClient.java:304)
 at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:213)
 at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
 at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
 at kafka.utils.ZkUtils$.createParentPath(ZkUtils.scala:48)
 at kafka.utils.ZkUtils$.createEphemeralPath(ZkUtils.scala:60)
 at 
 kafka.utils.ZkUtils$.createEphemeralPathExpectConflict(ZkUtils.scala:72)
 at 
 kafka.server.KafkaZooKeeper.registerBrokerInZk(KafkaZooKeeper.scala:57)
 at kafka.log.LogManager.startup(LogManager.scala:124)
 at kafka.server.KafkaServer.startup(KafkaServer.scala:80)
 at 
 kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:47)
 at kafka.Kafka$.main(Kafka.scala:60)
 at kafka.Kafka.main(Kafka.scala)
 The problem seems to be this code in ZkClient's createPersistent method:
 String parentDir = path.substring(0, path.lastIndexOf('/'));
 createPersistent(parentDir, createParents);
 createPersistent(path, createParents);
 which doesn't check whether parentDir is an empty string, which it will 
 become for /brokers/ids/1 after two recursions.
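 Walking the parent chain the same way the quoted code does makes the failure easy to 
 see (stand-alone Java sketch, not the actual ZkClient source):
{code}
public class ParentPathExample {
    public static void main(String[] args) {
        String path = "/brokers/ids/1";
        while (path.lastIndexOf('/') >= 0) {
            String parentDir = path.substring(0, path.lastIndexOf('/'));
            System.out.println("parent of '" + path + "' is '" + parentDir + "'");
            if (parentDir.isEmpty()) {
                // An empty parent is what ZooKeeper rejects with "Path length must be > 0";
                // a guard like this (or special-casing the root) is what the report says is missing.
                break;
            }
            path = parentDir;
        }
    }
}
{code}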



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633766#comment-14633766
 ] 

ASF GitHub Bot commented on KAFKA-2169:
---

Github user Parth-Brahmbhatt closed the pull request at:

https://github.com/apache/kafka/pull/61


 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Fix For: 0.8.3

 Attachments: KAFKA-2169.patch, KAFKA-2169.patch, 
 KAFKA-2169_2015-05-11_13:52:57.patch, KAFKA-2169_2015-05-15_10:18:41.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2342) transient unit test failure in testConsumptionWithBrokerFailures

2015-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634266#comment-14634266
 ] 

ASF GitHub Bot commented on KAFKA-2342:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/88

KAFKA-2342; fix transient unit test failure ConsumerBounceTest



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2342

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/88.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #88


commit 7a4b3401aca8806824cd68708918262a87a22241
Author: Jason Gustafson ja...@confluent.io
Date:   2015-07-20T23:29:46Z

KAFKA-2342; fix transient unit test failure ConsumerBounceTest




 transient unit test failure in testConsumptionWithBrokerFailures
 

 Key: KAFKA-2342
 URL: https://issues.apache.org/jira/browse/KAFKA-2342
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Affects Versions: 0.8.3
Reporter: Jun Rao
Assignee: Jason Gustafson

 Saw the following transient unit test failure.
 kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
 java.lang.NullPointerException
 at 
 org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:949)
 at 
 kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:86)
 at 
 kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:61)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2328) merge-kafka-pr.py script should not leave user in a detached branch

2015-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634279#comment-14634279
 ] 

ASF GitHub Bot commented on KAFKA-2328:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/84


 merge-kafka-pr.py script should not leave user in a detached branch
 ---

 Key: KAFKA-2328
 URL: https://issues.apache.org/jira/browse/KAFKA-2328
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor

 [~gwenshap] asked:
 If I start a merge and cancel (say, by choosing 'n' when asked if I want to 
 proceed), I'm left on a detached branch. Any chance the script can put me 
 back in the original branch? or in trunk?
 Reference 
 https://issues.apache.org/jira/browse/KAFKA-2187?focusedCommentId=14621243page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14621243



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1621) Standardize --messages option in perf scripts

2015-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634555#comment-14634555
 ] 

ASF GitHub Bot commented on KAFKA-1621:
---

Github user rekhajoshm closed the pull request at:

https://github.com/apache/kafka/pull/58


 Standardize --messages option in perf scripts
 -

 Key: KAFKA-1621
 URL: https://issues.apache.org/jira/browse/KAFKA-1621
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Jay Kreps
Assignee: Rekha Joshi
  Labels: newbie

 This option is specified in PerfConfig and is used by the producer, consumer 
 and simple consumer perf commands. The docstring on the argument does not 
 list it as required but the producer performance test requires it--others 
 don't.
 We should standardize this so that either all the commands require the option 
 and it is marked as required in the docstring or none of them list it as 
 required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2344) kafka-merge-pr improvements

2015-07-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634868#comment-14634868
 ] 

ASF GitHub Bot commented on KAFKA-2344:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/90

KAFKA-2344; kafka-merge-pr improvements

The first 4 commits are adapted from changes that have been done to the 
Spark version and the last one is the feature that @gwenshap asked for.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-2344-merge-pr-improvements

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/90.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #90


commit 76b58a0e40403071d55389119f9ad5be14e8c8f9
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-21T08:51:17Z

Fix instructions on how to install the `jira-python` library

Adapted from 
https://github.com/apache/spark/commit/a4df0f2d84ff24318b139db534521141d9d4d593

commit 392623a21bf51f9245ad7958920eba4685536edc
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-21T08:53:17Z

Check return value of doctest.testmod()

Adapted from 
https://github.com/apache/spark/commit/41afa16500e682475eaa80e31c0434b7ab66abcb

commit da2fd947303a3986bba188d7552b0849c1ac13f8
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-21T08:55:59Z

Set JIRA resolution to Fixed instead of relying on default transition

Adapted from:
* 
https://github.com/apache/spark/commit/1b9e434b6c19f23a01e9875a3c1966cd03ce8e2d
* 
https://github.com/apache/spark/commit/32e27df412706b30daf41f9d46c5572bb9a41bdb

commit 0d7cb23b6388eb313b7cf74bc5e9357a40ecd1d3
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-21T09:12:13Z

Allow primary author to be overridden during merge

This is useful when multiple people work on a feature and
the automatically chosen default is not the most appropriate
one.

Adapted from:

https://github.com/apache/spark/commit/bc24289f5d54e4ff61cd75a5941338c9d946ff73

https://github.com/apache/spark/commit/228ab65a4eeef8a42eb4713edf72b50590f63176

commit d9b1c684373315d0cdd0cf26264928e9d8974da6
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-07-21T09:28:41Z

Allow reviewers to be entered during merge




 kafka-merge-pr improvements
 ---

 Key: KAFKA-2344
 URL: https://issues.apache.org/jira/browse/KAFKA-2344
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Ismael Juma
Priority: Minor
 Fix For: 0.8.3


 Two suggestions for the new pr-merge tool:
 * The tool doesn't allow crediting reviewers while committing. I thought the 
 review credits were a nice habit of the Kafka community and I hate losing it. 
 OTOH, I don't want to force-push to trunk just to add credits. Perhaps the 
 tool can ask about committers?
 * Looks like the tool doesn't automatically resolve the JIRA. Would be nice 
 if it did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1811) Ensuring registered broker host:port is unique

2015-08-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14711498#comment-14711498
 ] 

ASF GitHub Bot commented on KAFKA-1811:
---

GitHub user eribeiro opened a pull request:

https://github.com/apache/kafka/pull/168

KAFKA-1811 Ensuring registered broker host:port is unique

Adds a ZKLock recipe implementation to guarantee that the host:port pair is 
unique among the brokers registered on ZooKeeper.
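
One common ZooKeeper pattern for this kind of uniqueness check (not necessarily what 
the ZKLock recipe in this patch does) is to create an ephemeral znode keyed on host:port 
and treat NodeExistsException as "already taken"; the path below is made up:

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class UniqueEndpointCheck {
    // Returns true if this broker won the host:port registration, false if a live broker holds it.
    static boolean tryRegister(ZooKeeper zk, String host, int port) throws Exception {
        String path = "/brokers/endpoints/" + host + ":" + port; // hypothetical path, for illustration
        try {
            zk.create(path, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            return true; // the node disappears automatically if this broker's session dies
        } catch (KeeperException.NodeExistsException e) {
            return false;
        }
    }
}
{code}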

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/eribeiro/kafka KAFKA-1811

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/168.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #168


commit a9941892833e8e25c36ee0aac8704143863db452
Author: Edward Ribeiro edward.ribe...@gmail.com
Date:   2015-08-25T15:29:12Z

KAFKA-1811 Ensuring registered broker host:port is unique




 Ensuring registered broker host:port is unique
 --

 Key: KAFKA-1811
 URL: https://issues.apache.org/jira/browse/KAFKA-1811
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Edward Ribeiro
  Labels: newbie
 Attachments: KAFKA-1811.patch, KAFKA_1811.patch


 Currently, we expect each registered broker to have a unique host:port 
 pair. However, we don't enforce that, which causes various weird problems. It 
 would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2462) allow modifying soft limit for open files in Kafka startup script

2015-08-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14710261#comment-14710261
 ] 

ASF GitHub Bot commented on KAFKA-2462:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/164

KAFKA-2462: allow modifying soft limit for open files in Kafka startup 
script



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka ulimit

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/164.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #164


commit 7a049dafddb362a15498772df64c341d80e52d9b
Author: Gwen Shapira csh...@gmail.com
Date:   2015-08-24T23:21:08Z

adding parameter for setting soft ulimit. tested on Linux




 allow modifying soft limit for open files in Kafka startup script
 -

 Key: KAFKA-2462
 URL: https://issues.apache.org/jira/browse/KAFKA-2462
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira

 In some systems the hard limit on the number of open files is set reasonably 
 high, but the default soft limit for the user running Kafka is insufficient.
 It would be nice if the Kafka startup script could set the soft limit on the 
 number of open files for the Kafka process to a user-defined value before starting 
 Kafka. 
 Something like:
 kafka-server-start --soft-file-limit 1 config/server.properties



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2367) Add Copycat runtime data API

2015-08-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14710212#comment-14710212
 ] 

ASF GitHub Bot commented on KAFKA-2367:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/163

KAFKA-2367: Add Copycat runtime data API.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2367-copycat-runtime-data-api

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/163.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #163


commit b90049e06a060f473878f6df1de9f4b6f2b38bc5
Author: Ewen Cheslack-Postava m...@ewencp.org
Date:   2015-08-21T01:06:56Z

KAFKA-2367: Add Copycat runtime data API.




 Add Copycat runtime data API
 

 Key: KAFKA-2367
 URL: https://issues.apache.org/jira/browse/KAFKA-2367
 Project: Kafka
  Issue Type: Sub-task
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.8.3


 Design the API used for runtime data in Copycat. This API is used to 
 construct schemas and records that Copycat processes. This needs to be a 
 fairly general data model (think Avro, JSON, Protobufs, Thrift) in order to 
 support complex, varied data types that may be input from/output to many data 
 systems.
 This issue should also address the serialization interfaces used 
 within Copycat, which translate the runtime data into serialized byte[] form. 
 It is important that these be considered together because the data format can 
 be used in multiple ways (records, partition IDs, partition offsets), so it 
 and the corresponding serializers must be sufficient for all these use cases.
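
To make the shape of such a runtime data API concrete, here is a deliberately 
simplified sketch in plain Java. None of these names are actual Copycat classes; 
they only illustrate the schema/record/serializer split the description calls for.

{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration of a Copycat-style runtime data model; names are invented.
enum FieldType { STRING, INT64, BOOLEAN }

class RecordSchema {
    final Map<String, FieldType> fields = new LinkedHashMap<>();
    RecordSchema field(String name, FieldType type) { fields.put(name, type); return this; }
}

class DataRecord {
    final RecordSchema schema;
    final Map<String, Object> values = new LinkedHashMap<>();
    DataRecord(RecordSchema schema) { this.schema = schema; }
    DataRecord put(String name, Object value) {
        if (!schema.fields.containsKey(name))
            throw new IllegalArgumentException("Unknown field: " + name);
        values.put(name, value);
        return this;
    }
}

// The serialization interfaces mentioned above would translate the runtime record
// (or a partition id / offset expressed in the same model) into byte[] for Kafka.
interface RecordSerializer {
    byte[] serialize(DataRecord record);
}
{code}

A source task would build a RecordSchema once and emit DataRecord instances against 
it, which is what lets a single serializer implementation cover records, partition 
IDs and offsets alike.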



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2456) Disable SSLv3 for ssl.enabledprotocols config on client & broker side

2015-10-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14966882#comment-14966882
 ] 

ASF GitHub Bot commented on KAFKA-2456:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/342

KAFKA-2456 KAFKA-2472; SSL clean-ups



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2472-fix-kafka-ssl-config-warnings

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/342.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #342


commit 6955ae23aa951ef3a2bd3a9dc88411b3f5769ffb
Author: Ismael Juma 
Date:   2015-10-20T07:57:40Z

Remove unused strings to silence compiler warning

commit 5b2a1bcc6e430eb7f0f2bf480afcc16cec6c4a81
Author: Ismael Juma 
Date:   2015-10-21T09:46:10Z

Remove `channelConfigs` and use `values` instead

There's not much value in using the former and it's error-prone.

Also include a couple of minor improvements to `KafkaConfig`.

commit 48bce07dc39023ea5b9f8dfae99b852a622430e3
Author: Ismael Juma 
Date:   2015-10-21T09:57:54Z

Add missing `define` for `SSLEndpointIdentificationAlgorithmProp` in broker

commit 1e4f1c37db26bad474efc4e85c980d1b265887fb
Author: Ismael Juma 
Date:   2015-10-21T12:45:45Z

KAFKA-2472; Fix SSL config warnings

commit a923a999338c6c6d09c40852d5dced71c8192ff2
Author: Ismael Juma 
Date:   2015-10-21T12:50:29Z

KAFKA-2456; Disable SSLv3 for ssl.enabledprotocols config on client & 
broker side




> Disable SSLv3 for ssl.enabledprotocols config on client & broker side
> -
>
> Key: KAFKA-2456
> URL: https://issues.apache.org/jira/browse/KAFKA-2456
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Sriharsha Chintalapani
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> This is a follow-up on KAFKA-1690. Currently users have the option to pass in 
> SSLv3; we should not allow this as it's deprecated.
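
For context, a minimal client configuration sketch assuming the 0.9.0-era property 
names (security.protocol, ssl.enabled.protocols, ssl.truststore.*); the broker 
address and truststore values are placeholders. SSLv3 is simply left out of the 
enabled list, and after this change it should be rejected even if a user lists it.

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class SslProducerConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093");   // placeholder SSL listener
        props.put("security.protocol", "SSL");
        // Only TLS versions; SSLv3 is intentionally absent.
        props.put("ssl.enabled.protocols", "TLSv1.2,TLSv1.1,TLSv1");
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}
{code}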



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2460) Fix capitalization in SSL classes

2015-10-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14972694#comment-14972694
 ] 

ASF GitHub Bot commented on KAFKA-2460:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/355


> Fix capitalization in SSL classes
> -
>
> Key: KAFKA-2460
> URL: https://issues.apache.org/jira/browse/KAFKA-2460
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jay Kreps
>Assignee: Ismael Juma
>Priority: Minor
>
> I notice that all the SSL classes are using the convention SSLChannelBuilder, 
> SSLConfigs, etc. Kafka has always used the convention SslChannelBuilder, 
> SslConfigs, etc. See e.g. KafkaApis, ApiUtils, LeaderAndIsrRequest, 
> ClientIdAndTopic, etc.
> We should fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2472) Fix kafka ssl configs to not throw warnings

2015-10-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14972684#comment-14972684
 ] 

ASF GitHub Bot commented on KAFKA-2472:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/355

KAFKA-2472; Fix capitalisation in SSL classes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2460-fix-capitalisation-in-ssl-classes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/355.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #355


commit 9bbf8b701f13f0b4ea8df842b46cab29002033be
Author: Ismael Juma 
Date:   2015-10-24T16:24:18Z

Fix capitalisation in SSL classes




> Fix kafka ssl configs to not throw warnings
> ---
>
> Key: KAFKA-2472
> URL: https://issues.apache.org/jira/browse/KAFKA-2472
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> This is a follow-up fix on kafka-1690.
> [2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.password = 
> striker was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2015-08-25 18:20:48,236] WARN The configuration security.protocol = SSL was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2449) Update mirror maker (MirrorMaker) docs

2015-10-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14972690#comment-14972690
 ] 

ASF GitHub Bot commented on KAFKA-2449:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/356

KAFKA-2449: Update mirror maker (MirrorMaker) docs - remove reference…

…s to multiple source clusters

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2449

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/356.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #356


commit bf8fc4d075c565e6b25bf7d63dff363d2b8b782c
Author: Gwen Shapira 
Date:   2015-10-24T16:37:38Z

KAFKA-2449: Update mirror maker (MirrorMaker) docs - remove references to 
multiple source clusters




> Update mirror maker (MirrorMaker) docs
> --
>
> Key: KAFKA-2449
> URL: https://issues.apache.org/jira/browse/KAFKA-2449
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The Kafka docs on Mirror Maker state that it mirrors from N source clusters 
> to 1 destination, but this is no longer the case. Docs should be updated to 
> reflect that it mirrors from a single source cluster to a single target cluster.
> Docs I've found where this should be updated:
> http://kafka.apache.org/documentation.html#basic_ops_mirror_maker
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+mirroring+(MirrorMaker)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2686) unsubscribe() call leaves KafkaConsumer in invalid state for manual topic-partition assignment

2015-10-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969669#comment-14969669
 ] 

ASF GitHub Bot commented on KAFKA-2686:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/352

KAFKA-2686: Reset needsPartitionAssignment in SubscriptionState.assign()



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2686

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/352.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #352


commit ecc94205c0731ecf25737307aac2ffda20fc1a14
Author: Guozhang Wang 
Date:   2015-10-22T19:05:50Z

v1




> unsubscribe() call leaves KafkaConsumer in invalid state for manual 
> topic-partition assignment
> --
>
> Key: KAFKA-2686
> URL: https://issues.apache.org/jira/browse/KAFKA-2686
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: The Data Lorax
>Assignee: Guozhang Wang
>
> The code snippet below demonstrates the problem.
> Basically, the unsubscribe() call leaves the KafkaConsumer in a state that 
> means poll() will always return empty record sets, even if new 
> topic-partitions have been assigned that have messages pending.  This is 
> because unsubscribe() sets SubscriptionState.needsPartitionAssignment to 
> true, and assign() does not clear this flag. The only thing that clears this 
> flag is when the consumer handles the response from a JoinGroup request.
> {code}
> final KafkaConsumer consumer = new KafkaConsumer<>(props);
> consumer.assign(Collections.singletonList(new TopicPartition(topicName, 1)));
> ConsumerRecords records = consumer.poll(100);// <- Works, 
> returning records
> consumer.unsubscribe();   // Puts consumer into invalid state.
> consumer.assign(Collections.singletonList(new TopicPartition(topicName, 2)));
> records = consumer.poll(100);// <- Always returns empty record set.
> {code}
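
Based only on the PR title above ("Reset needsPartitionAssignment in 
SubscriptionState.assign()"), a rough sketch of the intended behaviour; everything 
beyond the flag name and the two method names is an assumption, not the real 
SubscriptionState:

{code}
import java.util.ArrayList;
import java.util.Collection;
import org.apache.kafka.common.TopicPartition;

// Illustrative sketch only; not the actual consumer-internals class.
class SubscriptionStateSketch {
    private final Collection<TopicPartition> assignment = new ArrayList<>();
    private boolean needsPartitionAssignment = false;

    public void unsubscribe() {
        assignment.clear();
        needsPartitionAssignment = true;   // set on unsubscribe, as described in the report
    }

    public void assign(Collection<TopicPartition> partitions) {
        assignment.clear();
        assignment.addAll(partitions);
        needsPartitionAssignment = false;  // the fix: manual assignment clears the flag
    }

    public boolean partitionAssignmentNeeded() {
        return needsPartitionAssignment;   // poll() returns nothing while this is true
    }
}
{code}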



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2652) Incorporate the new consumer protocol with partition-group interface

2015-10-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969687#comment-14969687
 ] 

ASF GitHub Bot commented on KAFKA-2652:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/353

KAFKA-2652: integrate new group protocol into partition grouping

@guozhangwang 

* added ```PartitionGrouper``` (abstract class)
 * This class is responsible for grouping partitions to form tasks.
 * Users may implement this class for custom grouping.
* added ```DefaultPartitionGrouper```
 * our default implementation of ```PartitionGrouper```
* added ```KafkaStreamingPartitionAssignor```
 * We always use this as ```PartitionAssignor``` of stream consumers.
 * Actual grouping is delegated to ```PartitionGrouper```.
* ```TopologyBuilder```
 * added ```topicGroups()```
  * This returns groups of related topics according to the topology
 * added ```copartitionSources(sourceNodes...)```
  * This is used by DSL layer. It asserts the specified source nodes must 
be copartitioned.
 * added ```copartitionGroups()``` which returns groups of copartitioned 
topics
* KStream layer
 * keep track of source nodes to determine copartition sources when streams 
are joined
 * source nodes are set to null when partitioning property is not preserved 
(ex. ```map()```, ```transform()```), and this indicates the stream is no 
longer joinable
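
To make the grouping rule concrete, a standalone sketch of default-style grouping: 
partitions of co-partitioned topics that share a partition number form one task. This 
is not the PartitionGrouper/DefaultPartitionGrouper code from the PR, just an 
illustration of the idea.

{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.kafka.common.TopicPartition;

public class GroupingSketch {
    /** Returns task-number -> partitions of that task for one group of co-partitioned topics. */
    public static Map<Integer, Set<TopicPartition>> group(List<String> copartitionedTopics,
                                                          Map<String, Integer> partitionCounts) {
        int numTasks = 0;
        for (String topic : copartitionedTopics)
            numTasks = Math.max(numTasks, partitionCounts.getOrDefault(topic, 0));

        Map<Integer, Set<TopicPartition>> tasks = new HashMap<>();
        for (int partition = 0; partition < numTasks; partition++) {
            Set<TopicPartition> members = new HashSet<>();
            for (String topic : copartitionedTopics)
                if (partition < partitionCounts.getOrDefault(topic, 0))
                    members.add(new TopicPartition(topic, partition));
            tasks.put(partition, members);   // one task per shared partition number
        }
        return tasks;
    }
}
{code}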


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka grouping

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/353.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #353


commit 708718c1be23fad25fa6206f665cbb619c1b5097
Author: Yasuhiro Matsuda 
Date:   2015-10-19T19:38:06Z

partition grouping

commit d2bae046b5509022e2821a2c5eb08853d228e791
Author: Yasuhiro Matsuda 
Date:   2015-10-19T20:19:54Z

wip

commit 86fa8110b23ee1992fbd19daa08c63a4b427448e
Author: Yasuhiro Matsuda 
Date:   2015-10-20T20:01:37Z

long task id

commit 4f4f9ac642ebe0eae33a5c8464309106e9239f2e
Author: Yasuhiro Matsuda 
Date:   2015-10-20T20:03:15Z

Merge branch 'trunk' of github.com:apache/kafka into grouping

commit e4ecf39b9ab0b0f4c915a4f43cfe771b1de69f7f
Author: Yasuhiro Matsuda 
Date:   2015-10-21T19:33:05Z

joinability

commit 37d72a691173a8fe878ac3d99e8973e72f5675c6
Author: Yasuhiro Matsuda 
Date:   2015-10-21T19:33:48Z

Merge branch 'trunk' of github.com:apache/kafka into grouping

commit f68723bab83c3a3f1c15872f4f24bc932df8198f
Author: Yasuhiro Matsuda 
Date:   2015-10-22T18:21:31Z

partition assignor

commit 457cf270222139eae89750781d09abaa07120932
Author: Yasuhiro Matsuda 
Date:   2015-10-22T18:21:40Z

Merge branch 'trunk' of github.com:apache/kafka into grouping

commit 13f3ad703960581229d511287f27345c567b5d3e
Author: Yasuhiro Matsuda 
Date:   2015-10-22T18:34:52Z

complete undoing long taskid

commit 98f3bcc1896fd159ccbbd37fc65b1d9d6f568bb9
Author: Yasuhiro Matsuda 
Date:   2015-10-22T18:45:38Z

fix a test




> Incorporate the new consumer protocol with partition-group interface
> 
>
> Key: KAFKA-2652
> URL: https://issues.apache.org/jira/browse/KAFKA-2652
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.1
>
>
> After KAFKA-2464 is checked in, we need to incorporate the new protocol along 
> with a partition-group interface.
> The first step may be a couple of pre-defined partitioning schemes that can be 
> chosen by the user from some configs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2459) Connection backoff/blackout period should start when a connection is disconnected, not when the connection attempt was initiated

2015-10-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14967459#comment-14967459
 ] 

ASF GitHub Bot commented on KAFKA-2459:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/290


> Connection backoff/blackout period should start when a connection is 
> disconnected, not when the connection attempt was initiated
> 
>
> Key: KAFKA-2459
> URL: https://issues.apache.org/jira/browse/KAFKA-2459
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, producer 
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Eno Thereska
> Fix For: 0.9.0.0
>
>
> Currently the connection code for new clients marks the time when a 
> connection was initiated (NodeConnectionState.lastConnectMs) and then uses 
> this to compute blackout periods for nodes, during which connections will not 
> be attempted and the node is not considered a candidate for leastLoadedNode.
> However, in cases where the connection attempt takes longer than the 
> blackout/backoff period (default 10ms), this results in incorrect behavior. 
> If a broker is not available and, for example, the broker does not explicitly 
> reject the connection, instead waiting for a connection timeout (e.g. due to 
> firewall settings), then the backoff period will have already elapsed and the 
> node will immediately be considered ready for a new connection attempt and 
> eligible for selection by leastLoadedNode for metadata updates. I think it 
> should be easy to reproduce and verify this problem manually by using tc to 
> introduce enough latency to make connection failures take > 10ms.
> The correct behavior would use the disconnection event to mark the end of the 
> last connection attempt and then wait for the backoff period to elapse after 
> that.
> See 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201508.mbox/%3CCAJY8EofpeU4%2BAJ%3Dw91HDUx2RabjkWoU00Z%3DcQ2wHcQSrbPT4HA%40mail.gmail.com%3E
>  for the original description of the problem.
> This is related to KAFKA-1843 because leastLoadedNode currently will 
> consistently choose the same node if this blackout period is not handled 
> correctly, but is a much smaller issue.
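
A toy sketch of the bookkeeping the last paragraph proposes: the blackout window is 
measured from the disconnect event rather than from connection initiation. Names are 
illustrative; this is not the actual client connection-state code.

{code}
// Illustrative sketch; not the actual client connection-state bookkeeping.
class ConnectionBackoffSketch {
    private final long reconnectBackoffMs;
    private long lastAttemptEndedMs = -1;   // set when the attempt fails/disconnects
    private boolean connecting = false;

    ConnectionBackoffSketch(long reconnectBackoffMs) {
        this.reconnectBackoffMs = reconnectBackoffMs;
    }

    void connectionAttemptStarted(long nowMs) {
        connecting = true;                  // note: the backoff clock does NOT start here
    }

    void disconnected(long nowMs) {
        connecting = false;
        lastAttemptEndedMs = nowMs;         // backoff/blackout starts at the disconnect
    }

    /** True while the node should be skipped (including by leastLoadedNode). */
    boolean isBlackedOut(long nowMs) {
        if (connecting) return true;
        return lastAttemptEndedMs >= 0 && nowMs - lastAttemptEndedMs < reconnectBackoffMs;
    }
}
{code}

With this accounting, a 10ms backoff still has its intended effect even when the 
failed connection attempt itself takes seconds.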



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2626) Null offsets in copycat causes exception in OffsetStorageWriter

2015-10-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14967544#comment-14967544
 ] 

ASF GitHub Bot commented on KAFKA-2626:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/345

KAFKA-2626: Handle null keys and value validation properly in 
OffsetStorageWriter.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2626-offset-storage-writer-null-values

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/345.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #345


commit b89f1f9bc214169b232e592ce1126d25c4e6e9da
Author: Ewen Cheslack-Postava 
Date:   2015-10-21T17:47:14Z

KAFKA-2626: Handle null keys and value validation properly in 
OffsetStorageWriter.




> Null offsets in copycat causes exception in OffsetStorageWriter
> ---
>
> Key: KAFKA-2626
> URL: https://issues.apache.org/jira/browse/KAFKA-2626
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> {quote}
> [2015-10-07 16:20:39,052] ERROR CRITICAL: Failed to serialize offset data, 
> making it impossible to commit offsets under namespace wikipedia-irc-source. 
> This likely won't recover unless the unserializable partition or offset 
> information is overwritten. 
> (org.apache.kafka.copycat.storage.OffsetStorageWriter:152)
> [2015-10-07 16:20:39,053] ERROR Cause of serialization failure: 
> (org.apache.kafka.copycat.storage.OffsetStorageWriter:155)
> java.lang.NullPointerException
> at 
> org.apache.kafka.copycat.storage.OffsetUtils.validateFormat(OffsetUtils.java:34)
> at 
> org.apache.kafka.copycat.storage.OffsetStorageWriter.doFlush(OffsetStorageWriter.java:141)
> at 
> org.apache.kafka.copycat.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:223)
> at 
> org.apache.kafka.copycat.runtime.WorkerSourceTask.stop(WorkerSourceTask.java:100)
> at org.apache.kafka.copycat.runtime.Worker.stopTask(Worker.java:188)
> at 
> org.apache.kafka.copycat.runtime.standalone.StandaloneHerder.removeConnectorTasks(StandaloneHerder.java:210)
> at 
> org.apache.kafka.copycat.runtime.standalone.StandaloneHerder.stopConnector(StandaloneHerder.java:155)
> at 
> org.apache.kafka.copycat.runtime.standalone.StandaloneHerder.stop(StandaloneHerder.java:60)
> at org.apache.kafka.copycat.runtime.Copycat.stop(Copycat.java:66)
> at 
> org.apache.kafka.copycat.runtime.Copycat$ShutdownHook.run(Copycat.java:88)
> [2015-10-07 16:20:39,055] ERROR Failed to flush 
> org.apache.kafka.copycat.runtime.WorkerSourceTask$2@12782f6 offsets to 
> storage:  (org.apache.kafka.copycat.runtime.WorkerSourceTask:227)
> java.lang.NullPointerException
> at 
> org.apache.kafka.copycat.storage.OffsetUtils.validateFormat(OffsetUtils.java:34)
> at 
> org.apache.kafka.copycat.storage.OffsetStorageWriter.doFlush(OffsetStorageWriter.java:141)
> at 
> org.apache.kafka.copycat.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:223)
> at 
> org.apache.kafka.copycat.runtime.WorkerSourceTask.stop(WorkerSourceTask.java:100)
> at org.apache.kafka.copycat.runtime.Worker.stopTask(Worker.java:188)
> at 
> org.apache.kafka.copycat.runtime.standalone.StandaloneHerder.removeConnectorTasks(StandaloneHerder.java:210)
> at 
> org.apache.kafka.copycat.runtime.standalone.StandaloneHerder.stopConnector(StandaloneHerder.java:155)
> at 
> org.apache.kafka.copycat.runtime.standalone.StandaloneHerder.stop(StandaloneHerder.java:60)
> at org.apache.kafka.copycat.runtime.Copycat.stop(Copycat.java:66)
> at 
> org.apache.kafka.copycat.runtime.Copycat$ShutdownHook.run(Copycat.java:88)
> [2015-10-07 16:20:39,055] INFO Starting graceful shutdown of thread 
> WorkerSourceTask-wikipedia-irc-source-0 
> (org.apache.kafka.copycat.util.ShutdownableThread:119)
> [2015-10-07 16:20:39,056] INFO Herder stopped 
> (org.apache.kafka.copycat.runtime.standalone.StandaloneHerder:64)
> [2015-10-07 16:20:39,056] INFO Worker stopping 
> (org.apache.kafka.copycat.runtime.Worker:104)
> [2015-10-07 16:20:39,056] INFO Stopped FileOffsetBackingStore 
> (org.apache.kafka.copycat.storage.FileOffsetBackingStore:61)
> [2015-10-07 16:20:39,056] INFO Worker stopped 
> (org.apache.kafka.copycat.runtime.Worker:133)
> [2015-10-07 16:20:39,057] INFO Copycat stopped 
> (org.apache.kafka.copycat.runtime.Copycat:69)
> {quote}
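
The NullPointerException above is thrown while validating a null offsets map. A 
standalone sketch of the kind of null-tolerant validation the fix title describes 
(not the actual OffsetUtils patch):

{code}
import java.util.Map;

// Illustrative null-safe validation; not the actual copycat OffsetUtils code.
class OffsetFormatCheck {
    static void validateFormat(Object offsetData) {
        if (offsetData == null)
            return;                                  // no offsets committed yet is a valid state
        if (!(offsetData instanceof Map))
            throw new IllegalArgumentException("Offsets must be a Map or null");
        for (Map.Entry<?, ?> entry : ((Map<?, ?>) offsetData).entrySet()) {
            if (!(entry.getKey() instanceof String))
                throw new IllegalArgumentException("Offset keys must be Strings");
            Object value = entry.getValue();         // null values are allowed
            if (value != null && !(value instanceof Number)
                    && !(value instanceof String) && !(value instanceof Boolean))
                throw new IllegalArgumentException("Unsupported offset value type: "
                        + value.getClass());
        }
    }
}
{code}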



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2667) Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure

2015-10-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14967584#comment-14967584
 ] 

ASF GitHub Bot commented on KAFKA-2667:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/333


> Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure
> 
>
> Key: KAFKA-2667
> URL: https://issues.apache.org/jira/browse/KAFKA-2667
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Seen in recent builds:
> {code}
> org.apache.kafka.copycat.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.kafka.copycat.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:335)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2626) Null offsets in copycat causes exception in OffsetStorageWriter

2015-10-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14967543#comment-14967543
 ] 

ASF GitHub Bot commented on KAFKA-2626:
---

Github user ewencp closed the pull request at:

https://github.com/apache/kafka/pull/344


> Null offsets in copycat causes exception in OffsetStorageWriter
> ---
>
> Key: KAFKA-2626
> URL: https://issues.apache.org/jira/browse/KAFKA-2626
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> {quote}
> [2015-10-07 16:20:39,052] ERROR CRITICAL: Failed to serialize offset data, 
> making it impossible to commit offsets under namespace wikipedia-irc-source. 
> This likely won't recover unless the unserializable partition or offset 
> information is overwritten. 
> (org.apache.kafka.copycat.storage.OffsetStorageWriter:152)
> [2015-10-07 16:20:39,053] ERROR Cause of serialization failure: 
> (org.apache.kafka.copycat.storage.OffsetStorageWriter:155)
> java.lang.NullPointerException
> at 
> org.apache.kafka.copycat.storage.OffsetUtils.validateFormat(OffsetUtils.java:34)
> at 
> org.apache.kafka.copycat.storage.OffsetStorageWriter.doFlush(OffsetStorageWriter.java:141)
> at 
> org.apache.kafka.copycat.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:223)
> at 
> org.apache.kafka.copycat.runtime.WorkerSourceTask.stop(WorkerSourceTask.java:100)
> at org.apache.kafka.copycat.runtime.Worker.stopTask(Worker.java:188)
> at 
> org.apache.kafka.copycat.runtime.standalone.StandaloneHerder.removeConnectorTasks(StandaloneHerder.java:210)
> at 
> org.apache.kafka.copycat.runtime.standalone.StandaloneHerder.stopConnector(StandaloneHerder.java:155)
> at 
> org.apache.kafka.copycat.runtime.standalone.StandaloneHerder.stop(StandaloneHerder.java:60)
> at org.apache.kafka.copycat.runtime.Copycat.stop(Copycat.java:66)
> at 
> org.apache.kafka.copycat.runtime.Copycat$ShutdownHook.run(Copycat.java:88)
> [2015-10-07 16:20:39,055] ERROR Failed to flush 
> org.apache.kafka.copycat.runtime.WorkerSourceTask$2@12782f6 offsets to 
> storage:  (org.apache.kafka.copycat.runtime.WorkerSourceTask:227)
> java.lang.NullPointerException
> at 
> org.apache.kafka.copycat.storage.OffsetUtils.validateFormat(OffsetUtils.java:34)
> at 
> org.apache.kafka.copycat.storage.OffsetStorageWriter.doFlush(OffsetStorageWriter.java:141)
> at 
> org.apache.kafka.copycat.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:223)
> at 
> org.apache.kafka.copycat.runtime.WorkerSourceTask.stop(WorkerSourceTask.java:100)
> at org.apache.kafka.copycat.runtime.Worker.stopTask(Worker.java:188)
> at 
> org.apache.kafka.copycat.runtime.standalone.StandaloneHerder.removeConnectorTasks(StandaloneHerder.java:210)
> at 
> org.apache.kafka.copycat.runtime.standalone.StandaloneHerder.stopConnector(StandaloneHerder.java:155)
> at 
> org.apache.kafka.copycat.runtime.standalone.StandaloneHerder.stop(StandaloneHerder.java:60)
> at org.apache.kafka.copycat.runtime.Copycat.stop(Copycat.java:66)
> at 
> org.apache.kafka.copycat.runtime.Copycat$ShutdownHook.run(Copycat.java:88)
> [2015-10-07 16:20:39,055] INFO Starting graceful shutdown of thread 
> WorkerSourceTask-wikipedia-irc-source-0 
> (org.apache.kafka.copycat.util.ShutdownableThread:119)
> [2015-10-07 16:20:39,056] INFO Herder stopped 
> (org.apache.kafka.copycat.runtime.standalone.StandaloneHerder:64)
> [2015-10-07 16:20:39,056] INFO Worker stopping 
> (org.apache.kafka.copycat.runtime.Worker:104)
> [2015-10-07 16:20:39,056] INFO Stopped FileOffsetBackingStore 
> (org.apache.kafka.copycat.storage.FileOffsetBackingStore:61)
> [2015-10-07 16:20:39,056] INFO Worker stopped 
> (org.apache.kafka.copycat.runtime.Worker:133)
> [2015-10-07 16:20:39,057] INFO Copycat stopped 
> (org.apache.kafka.copycat.runtime.Copycat:69)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2365) Copycat checklist

2015-10-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14967537#comment-14967537
 ] 

ASF GitHub Bot commented on KAFKA-2365:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/344

KAFKA-2365: Handle null keys and value validation properly in 
OffsetStorageWriter.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2365-offset-storage-writer-null-values

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/344.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #344


commit f7241b508190d64d7b5ac48560b01bb14d89ffa9
Author: Ewen Cheslack-Postava 
Date:   2015-10-21T17:47:14Z

KAFKA-2365: Handle null keys and value validation properly in 
OffsetStorageWriter.




> Copycat checklist
> -
>
> Key: KAFKA-2365
> URL: https://issues.apache.org/jira/browse/KAFKA-2365
> Project: Kafka
>  Issue Type: New Feature
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>  Labels: feature
> Fix For: 0.9.0.0
>
>
> This covers the development plan for 
> [KIP-26|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=58851767].
>  There are a number of features that can be developed in sequence to make 
> incremental progress, and often in parallel:
> * Initial patch - connector API and core implementation
> * Runtime data API
> * Standalone CLI
> * REST API
> * Distributed copycat - CLI
> * Distributed copycat - coordinator
> * Distributed copycat - config storage
> * Distributed copycat - offset storage
> * Log/file connector (sample source/sink connector)
> * Elasticsearch sink connector (sample sink connector for full log -> Kafka 
> -> Elasticsearch sample pipeline)
> * Copycat metrics
> * System tests (including connector tests)
> * Mirrormaker connector
> * Copycat documentation
> This is an initial list, but it might need refinement to allow for more 
> incremental progress and may be missing features we find we want before the 
> initial release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2464) Client-side assignment and group generalization

2015-10-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14967682#comment-14967682
 ] 

ASF GitHub Bot commented on KAFKA-2464:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/165


> Client-side assignment and group generalization
> ---
>
> Key: KAFKA-2464
> URL: https://issues.apache.org/jira/browse/KAFKA-2464
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Add support for client-side assignment and generalization of join group 
> protocol as documented here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2454) Dead lock between delete log segment and shutting down.

2015-10-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14967809#comment-14967809
 ] 

ASF GitHub Bot commented on KAFKA-2454:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/153


> Dead lock between delete log segment and shutting down.
> ---
>
> Key: KAFKA-2454
> URL: https://issues.apache.org/jira/browse/KAFKA-2454
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.0
>
>
> When the broker shuts down, it shuts down the scheduler, which grabs the 
> scheduler lock and then waits for all the threads in the scheduler to shut down.
> The deadlock happens when a scheduled task tries to delete an old log 
> segment: it schedules a log-delete task, which also needs to acquire the 
> scheduler lock. In this case the shutdown thread holds the scheduler lock and 
> waits for the log deletion thread to finish, but the log deletion thread 
> blocks waiting for the scheduler lock.
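
In miniature, the lock cycle looks like the following generic sketch (not the actual 
KafkaScheduler code): shutdown() holds the monitor while waiting for running tasks, 
and a running task re-enters schedule(), which needs the same monitor.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Generic illustration of the deadlock described above; not the real scheduler.
class SchedulerDeadlockSketch {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    synchronized void schedule(Runnable task) {           // the log-deletion task re-enters here...
        executor.submit(task);
    }

    synchronized void shutdown() throws InterruptedException {
        executor.shutdown();                               // ...while shutdown holds the same lock
        executor.awaitTermination(1, TimeUnit.MINUTES);    // and waits for the blocked task
    }

    // Runs on the executor: deleting an old segment schedules the async file deletion.
    void deleteOldSegment() {
        schedule(() -> { /* delete segment files */ });    // blocks if shutdown() holds the lock
    }
}
{code}
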
> Related stack trace:
> {noformat}
> "Thread-1" #21 prio=5 os_prio=0 tid=0x7fe7601a7000 nid=0x1a4e waiting on 
> condition [0x7fe7cf698000]
>java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000640d53540> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
> at 
> java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1465)
> at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:94)
> - locked <0x000640b6d480> (a kafka.utils.KafkaScheduler)
> at 
> kafka.server.KafkaServer$$anonfun$shutdown$4.apply$mcV$sp(KafkaServer.scala:352)
> at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:79)
> at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
> at kafka.utils.CoreUtils$.swallowWarn(CoreUtils.scala:51)
> at kafka.utils.Logging$class.swallow(Logging.scala:94)
> at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:51)
> at kafka.server.KafkaServer.shutdown(KafkaServer.scala:352)
> at 
> kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
> at com.linkedin.kafka.KafkaServer.notifyShutdown(KafkaServer.java:99)
> at 
> com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyShutdownListener(LifeCycleMgr.java:123)
> at 
> com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyListeners(LifeCycleMgr.java:102)
> at 
> com.linkedin.util.factory.lifecycle.LifeCycleMgr.notifyStop(LifeCycleMgr.java:82)
> - locked <0x000640b77bb0> (a java.util.ArrayDeque)
> at com.linkedin.util.factory.Generator.stop(Generator.java:177)
> - locked <0x000640b77bc8> (a java.lang.Object)
> at 
> com.linkedin.offspring.servlet.OffspringServletRuntime.destroy(OffspringServletRuntime.java:82)
> at 
> com.linkedin.offspring.servlet.OffspringServletContextListener.contextDestroyed(OffspringServletContextListener.java:51)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:813)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:160)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:516)
> at com.linkedin.emweb.WebappContext.doStop(WebappContext.java:35)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
> - locked <0x0006400018b8> (a java.lang.Object)
> at 
> com.linkedin.emweb.ContextBasedHandlerImpl.doStop(ContextBasedHandlerImpl.java:112)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
> - locked <0x000640001900> (a java.lang.Object)
> at 
> com.linkedin.emweb.WebappDeployerImpl.stop(WebappDeployerImpl.java:349)
> at 
> com.linkedin.emweb.WebappDeployerImpl.doStop(WebappDeployerImpl.java:414)
> - locked <0x0006400019c0> (a 
> com.linkedin.emweb.MapBasedHandlerImpl)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
> - locked <0x0006404ee8e8> (a java.lang.Object)
> at 
> org.eclipse.jetty.util.component.AggregateLifeCycle.doStop(AggregateLifeCycle.java:107)
> at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:69)
> at 
> 

[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14973887#comment-14973887
 ] 

ASF GitHub Bot commented on KAFKA-2644:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/358

KAFKA-2644: Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

Run sanity check, replication tests and benchmarks with SASL/Kerberos using 
MiniKdc.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2644

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/358.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #358


commit 65c001a719c26da325b5a1154a61ec4e095afd70
Author: Rajini Sivaram 
Date:   2015-10-25T16:44:43Z

KAFKA-2644: Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL




> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2689) Expose select gauges and metrics programmatically (not just through JMX)

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974897#comment-14974897
 ] 

ASF GitHub Bot commented on KAFKA-2689:
---

GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/363

KAFKA-2689: Expose select gauges and metrics programmatically (not just 
through JMX)

For now just exposing the replica manager gauges.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka kafka-2689

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/363.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #363


commit 90c0085a76374fafe6fa62c18e3d24504852e687
Author: Eno Thereska 
Date:   2015-10-07T00:06:49Z

Commits to fix timing issues in three JIRAs

commit ee66491fb36d55527d156afda90c3addc3eb3175
Author: Eno Thereska 
Date:   2015-10-07T00:07:21Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 17a373733e414456475217248cbc7b0bc98fda40
Author: Eno Thereska 
Date:   2015-10-07T15:15:19Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit eb5fbf458a5b455ae8b3c8b3ebf32524f5a3ab3e
Author: Eno Thereska 
Date:   2015-10-07T16:20:45Z

Removed debug messages

commit 041baae45012cf8f99afd2c8b5d9a8099a8a928b
Author: Eno Thereska 
Date:   2015-10-07T17:35:12Z

Pick a node, but not one that is blacked out

commit 69679d7e61d36f76d2ea1dd1fcc0a1192c9b50d6
Author: Eno Thereska 
Date:   2015-10-08T17:18:02Z

Removed unneeded checks

commit 3ce5e151396575f45d1f022720f454ac36653d0d
Author: Eno Thereska 
Date:   2015-10-08T17:18:18Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 76e6a0d8ab3fe847b28edde2e0072e7fe06484ff
Author: Eno Thereska 
Date:   2015-10-08T23:35:41Z

More efficient implementation of nodesEverSeen

commit 6576f372e0cddcc54b6fcb19b9d471cff16bcd77
Author: Eno Thereska 
Date:   2015-10-10T19:04:54Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 0f9507310812740d1a8304c6350f434b5a661c63
Author: Eno Thereska 
Date:   2015-10-12T21:33:52Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit b7c4c3c1600a6e21884dbcb39588a0681d351d60
Author: Eno Thereska 
Date:   2015-10-16T08:47:35Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit f6bd8788f0e6088ad81fd2847b999e3b0d4ecd2c
Author: Eno Thereska 
Date:   2015-10-16T10:39:25Z

Fixed handling of now. Added unit test for leastLoadedNode. Fixed 
disconnected method in MockSelector

commit b5f4c1796894de5b0c4cc31b7de98eb4536c0ccf
Author: Eno Thereska 
Date:   2015-10-17T19:53:15Z

Check case when node with same broker id has different host or port

commit bee1d583fa67d944e40ec700d0212c1bac314703
Author: Eno Thereska 
Date:   2015-10-17T19:53:30Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit bdf2fcf29d5396b97b9a24bf962a7c40b6a795c6
Author: Eno Thereska 
Date:   2015-10-19T21:26:46Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit d612afe2fd7ce63b054d2406e8d82419b3b39841
Author: Eno Thereska 
Date:   2015-10-19T21:30:33Z

Removed unnecessary Map remove

commit ba5eafcfeb006c403e7047c45442eca0d9ec763a
Author: Eno Thereska 
Date:   2015-10-20T07:59:03Z

Cleaned up parts of code. Minor.

commit 65e3aee2c9491b0411672eaf568034160b331074
Author: Eno Thereska 
Date:   2015-10-20T07:59:19Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 4ab54e061a1708d086f7720dc40778cdaf0d0362
Author: Eno Thereska 
Date:   2015-10-20T10:03:14Z

More cleanup. Minor

commit 570c15ff8032248018cc8c5a7f0df75d840a898f
Author: Eno Thereska 
Date:   2015-10-21T08:35:24Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 2a1f1a6cd350d2e655e5a0b41d66fca8f0af5782
Author: Eno Thereska 
Date:   2015-10-21T20:02:38Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 285bd1c0d8830e0e89ec49716b639156d291ace6
Author: Eno Thereska 
Date:   2015-10-23T17:40:14Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit d33199d5d32e7fc2f22e4fa64b505f15427d5be0
Author: Eno Thereska 
Date:   

[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974859#comment-14974859
 ] 

ASF GitHub Bot commented on KAFKA-2644:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/361

MINOR: Add new build target for system test libs

KAFKA-2644 adds MiniKdc for system tests and hence needs a target to 
collect all MiniKdc jars. At the moment, system tests run `gradlew jar`. 
Replacing that with `gradlew systemTestLibs` will enable kafka jars and test 
dependency jars to be built and copied into appropriate locations. Submitting 
this as a separate PR so that the new target can be added to the build scripts 
that run system tests before KAFKA-2644 is committed. A separate target for 
system test artifacts will allow dependency changes to be made in future 
without breaking test runs.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka kafka-systemTestLibs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/361.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #361


commit c4925d71bcc248c566a04a6348a216870f56243a
Author: Rajini Sivaram 
Date:   2015-10-26T19:07:21Z

MINOR: Add new build target for system test libs




> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2694) Make a task id be a composite id of a task group id and a partition id

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975146#comment-14975146
 ] 

ASF GitHub Bot commented on KAFKA-2694:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/365

KAFKA-2694: Task Id

@guozhangwang 

* A task id is now a class, ```TaskId```, that has task group id and 
 partition id fields.
* ```TopologyBuilder``` assigns a task group id to a topic group. Related 
methods are changed accordingly.
* A state store uses the partition id part of the task id as the change log 
partition id.
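
A compact sketch of such a composite id in plain Java (the real ```TaskId``` may 
differ in naming, fields and serialization); note how the partition part can double 
as the change log partition:

{code}
public class TaskIdSketch {
    public final int topicGroupId;   // which group of related topics this task belongs to
    public final int partition;      // also used as the state store change-log partition

    public TaskIdSketch(int topicGroupId, int partition) {
        this.topicGroupId = topicGroupId;
        this.partition = partition;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof TaskIdSketch)) return false;
        TaskIdSketch other = (TaskIdSketch) o;
        return topicGroupId == other.topicGroupId && partition == other.partition;
    }

    @Override
    public int hashCode() {
        return 31 * topicGroupId + partition;
    }

    @Override
    public String toString() {
        return topicGroupId + "_" + partition;
    }
}
{code}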

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka task_id

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/365.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #365


commit 31375d79ad666c6a38a7566e729a9062c9a97563
Author: Yasuhiro Matsuda 
Date:   2015-10-26T21:08:56Z

TaskId class




> Make a task id be a composite id of a task group id and a partition id
> --
>
> Key: KAFKA-2694
> URL: https://issues.apache.org/jira/browse/KAFKA-2694
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2652) Incorporate the new consumer protocol with partition-group interface

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974977#comment-14974977
 ] 

ASF GitHub Bot commented on KAFKA-2652:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/353


> Incorporate the new consumer protocol with partition-group interface
> 
>
> Key: KAFKA-2652
> URL: https://issues.apache.org/jira/browse/KAFKA-2652
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>
> After KAFKA-2464 is checked in, we need to incorporate the new protocol along 
> with a partition-group interface.
> The first step may be a couple of pre-defined partitioning schemes that can be 
> chosen by the user from some configs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2689) Expose select gauges and metrics programmatically (not just through JMX)

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974903#comment-14974903
 ] 

ASF GitHub Bot commented on KAFKA-2689:
---

Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/363


> Expose select gauges and metrics programmatically (not just through JMX)
> 
>
> Key: KAFKA-2689
> URL: https://issues.apache.org/jira/browse/KAFKA-2689
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>  Labels: newbie
> Fix For: 0.9.0.0
>
>
> There are several gauges in core that are registered but cannot be accessed 
> programmatically. For example, gauges "LeaderCount", "PartitionCount", 
> "UnderReplicatedParittions" are all registered in ReplicaManager.scala but 
> there is no way to access them programmatically if one has access to the 
> kafka.server object. Other metrics,  such as isrExpandRate (also in 
> ReplicaManager.scala) can be accessed. The solution here is trivial, add a 
> var  in front of newGauge, as shown below
> var partitionCount newGauge(
> "PartitionCount",
> new Gauge[Int] {
>   def value = allPartitions.size
> }
> )



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2648) Coordinator should not allow empty groupIds

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974878#comment-14974878
 ] 

ASF GitHub Bot commented on KAFKA-2648:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/362

KAFKA-2648: group.id is required for new consumer and cannot be empty



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2648

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/362.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #362


commit 5a419664f36fcbb955e250ebfe8c531e50fff981
Author: Jason Gustafson 
Date:   2015-10-26T19:40:02Z

KAFKA-2648: group.id is required for new consumer and cannot be empty




> Coordinator should not allow empty groupIds
> ---
>
> Key: KAFKA-2648
> URL: https://issues.apache.org/jira/browse/KAFKA-2648
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The coordinator currently allows consumer groups with empty groupIds, but 
> there probably aren't any cases where this is actually a good idea and it 
> tends to mask problems where different groups have simply not configured a 
> groupId. To address this, we can add a new error code, say INVALID_GROUP_ID, 
> which the coordinator can return when it encounters an empty groupId. We 
> should also make groupId a required property in consumer configuration and 
> enforce that it is non-empty. 
> It's a little unclear whether this change would have compatibility concerns. 
> The old consumer will fail with an empty groupId (because it cannot create 
> the zookeeper paths), but other clients may allow it.
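
A minimal sketch of the client-side effect, assuming the 0.9.0 new-consumer property 
names: group.id must be present and non-empty, and an empty value should now be 
rejected (whether that surfaces at construction time or on the first group operation 
is an assumption of this sketch).

{code}
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupIdRequiredExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Required and non-empty per this issue; "" should now fail instead of
        // silently joining an anonymous group.
        props.put("group.id", "payments-consumers");   // placeholder group name
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.close();
    }
}
{code}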



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2464) Client-side assignment and group generalization

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975483#comment-14975483
 ] 

ASF GitHub Bot commented on KAFKA-2464:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/354


> Client-side assignment and group generalization
> ---
>
> Key: KAFKA-2464
> URL: https://issues.apache.org/jira/browse/KAFKA-2464
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Add support for client-side assignment and generalization of join group 
> protocol as documented here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2683) Ensure wakeup exceptions are propagated to user in new consumer

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975768#comment-14975768
 ] 

ASF GitHub Bot commented on KAFKA-2683:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/366

KAFKA-2683: ensure wakeup exceptions raised to user



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2683

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/366.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #366


commit 383d0cea274d6a7819b69cdba7b7768002822ae1
Author: Jason Gustafson 
Date:   2015-10-27T05:42:10Z

KAFKA-2683: ensure wakeup exceptions raised to user




> Ensure wakeup exceptions are propagated to user in new consumer
> ---
>
> Key: KAFKA-2683
> URL: https://issues.apache.org/jira/browse/KAFKA-2683
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> KafkaConsumer.wakeup() can be used to interrupt blocking operations (e.g. in 
> order to shut down), so wakeup exceptions must get propagated to the user. 
> Currently, there are several locations in the code where a wakeup exception 
> could be caught and silently discarded. For example, when the rebalance 
> callback is invoked, we just catch and log all exceptions. In this case, we 
> also need to be careful that wakeup exceptions do not affect rebalance 
> callback semantics. In particular, it is possible currently for a wakeup to 
> cause onPartitionsRevoked to be invoked multiple times.
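
For reference, the usage pattern this guarantee exists to support, using the public 
new-consumer API (WakeupException lives in org.apache.kafka.common.errors; the topic, 
group and serializer settings below are placeholders):

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class WakeupLoopExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("example-topic"));

        // Another thread (here a shutdown hook) calls wakeup() to break out of poll().
        Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));

        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records)
                    System.out.println(record.offset() + ": " + record.value());
            }
        } catch (WakeupException e) {
            // Expected on shutdown; the point of this issue is that it must reach the
            // user here rather than being swallowed inside the client.
        } finally {
            consumer.close();
        }
    }
}
{code}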



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2694) Make a task id be a composite id of a topic group id and a partition id

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975594#comment-14975594
 ] 

ASF GitHub Bot commented on KAFKA-2694:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/365


> Make a task id be a composite id of a topic group id and a partition id
> ---
>
> Key: KAFKA-2694
> URL: https://issues.apache.org/jira/browse/KAFKA-2694
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2675) SASL/Kerberos follow-up

2015-10-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978531#comment-14978531
 ] 

ASF GitHub Bot commented on KAFKA-2675:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/376

KAFKA-2675; SASL/Kerberos follow up



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka KAFKA-2675-sasl-kerberos-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/376.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #376


commit 2e928f4705d731aa2754686651a65b0713d840b8
Author: Ismael Juma 
Date:   2015-10-23T08:30:46Z

Fix handling of `kafka_jaas.conf` not found in `SaslTestHarness`

commit 6aed268b3ff772bebb38c8f242ad63cca6ba83b6
Author: Ismael Juma 
Date:   2015-10-23T08:35:29Z

Remove unnecessary code in `BaseProducerSendTest`

In most cases, `producer` can never be null. In two
cases, there are multiple producers and the
`var producer` doesn't make sense.

commit f0cc13190c0374cf72f76040a52a66bace950ef7
Author: Ismael Juma 
Date:   2015-10-23T09:09:05Z

Move some tests from `BaseConsumerTest` to `PlaintextConsumerTest` in order 
to reduce build times

commit 8f1aa28fda00820b14f519fd7df457ef7804c634
Author: Ismael Juma 
Date:   2015-10-23T09:10:39Z

Make `Login` thread a daemon thread

This way, it won't prevent shutdown if `close` is not called on
`Consumer` or `Producer`.

commit f9a3e4e1c918c92771403cabca089092c36c1638
Author: Ismael Juma 
Date:   2015-10-23T16:59:43Z

Rename `kafka.security.auth.to.local` to 
`sasl.kerberos.principal.to.local.rules`

Also improve wording for `SaslConfigs` docs.

commit ac58906f4c446941c43f193aaee45366dfd50950
Author: Ismael Juma 
Date:   2015-10-26T09:52:20Z

Remove unused `SASL_KAFKA_SERVER_REALM` property

commit c68554f4001979ca9283f007e20fe599c1eb85fa
Author: Ismael Juma 
Date:   2015-10-26T12:59:38Z

Remove forced reload of `Configuration` from `Login` and set JAAS property 
before starting `MiniKdc`

commit 503e2662a63bd39a1602ed73cba9b2c8fe4af55f
Author: Ismael Juma 
Date:   2015-10-27T21:22:59Z

Fix `IntegrationTestHarness` to set security configs correctly

commit 133076603671c50c4ab820f754c6ebaaedc58f15
Author: Ismael Juma 
Date:   2015-10-27T23:27:49Z

Improve logging in `ControllerChannelManager` by using `brokerNode` instead 
of `toBroker`

commit 7dd7eeff4748b28f31010196c8fbb2cb65d0da0e
Author: Ismael Juma 
Date:   2015-10-28T14:36:30Z

Introduce `LoginManager.closeAll()` and use it in `SaslTestHarness`

This is necessary to avoid authentication failures when consumers,
producers or brokers are leaked during tests.

commit 0f31db82a07b4be77cd2d95cf9d2f9eecd1343ee
Author: Ismael Juma 
Date:   2015-10-28T14:37:42Z

Improve exception handling in Sasl authenticators: avoid excessive 
exception chaining




> SASL/Kerberos follow-up
> ---
>
> Key: KAFKA-2675
> URL: https://issues.apache.org/jira/browse/KAFKA-2675
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> This is a follow-up to KAFKA-1686. 
> 1. Decide on `serviceName` configuration: do we want to keep it in two places?
> 2. auth.to.local config name is a bit opaque, is there a better one?
> 3. Implement or remove SASL_KAFKA_SERVER_REALM config
> 4. Consider making Login's thread a daemon thread
> 5. Write test that shows authentication failure due to principal in JAAS file 
> not being present in MiniKDC



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2598) Add Test with authorizer for producer and consumer

2015-10-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14980637#comment-14980637
 ] 

ASF GitHub Bot commented on KAFKA-2598:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/300


> Add Test with authorizer for producer and consumer
> --
>
> Key: KAFKA-2598
> URL: https://issues.apache.org/jira/browse/KAFKA-2598
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security, unit tests
>Affects Versions: 0.8.2.2
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Now that we have all the authorizer code merged into trunk we should add a 
> test that enables authorizer and tests that only authorized users can 
> produce/consume from topics or issue cluster actions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2705) Remove static JAAS config file for ZK auth tests

2015-10-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14980527#comment-14980527
 ] 

ASF GitHub Bot commented on KAFKA-2705:
---

GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/380

KAFKA-2705: Remove static JAAS config file for ZK auth tests

Remove static login config file.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-2705

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/380.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #380


commit 943f9e2e98e117709098255c28269d649200abba
Author: Flavio Junqueira 
Date:   2015-10-29T14:26:12Z

KAFKA-2705: Remove static JAAS config file for ZK auth tests




> Remove static JAAS config file for ZK auth tests
> 
>
> Key: KAFKA-2705
> URL: https://issues.apache.org/jira/browse/KAFKA-2705
> Project: Kafka
>  Issue Type: Test
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.0.0
>
>
> We have a static login config file in the resources folder, and it is better 
> for testing to have that file created dynamically. This issue adds this 
> functionality. 
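For context, a minimal sketch of the dynamic approach (not the code from this pull request): write the JAAS login configuration to a temporary file at test setup time and point the JVM at it through the standard system property. The section name ("Server"), principal, and keytab path are placeholder assumptions.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public final class DynamicJaasConfig {

        // Writes a JAAS config to a temporary file and points the JVM at it, so no
        // static file needs to live in the test resources folder.
        public static Path install(String principal, String keytabPath) throws IOException {
            String jaas =
                "Server {\n" +
                "  com.sun.security.auth.module.Krb5LoginModule required\n" +
                "  useKeyTab=true\n" +
                "  storeKey=true\n" +
                "  keyTab=\"" + keytabPath + "\"\n" +
                "  principal=\"" + principal + "\";\n" +
                "};\n";
            Path file = Files.createTempFile("jaas", ".conf");
            file.toFile().deleteOnExit();
            Files.write(file, jaas.getBytes(StandardCharsets.UTF_8));
            // Standard property consulted by the default JAAS Configuration implementation.
            System.setProperty("java.security.auth.login.config", file.toString());
            return file;
        }
    }

A test harness could call DynamicJaasConfig.install(...) before starting ZooKeeper or the brokers, so the generated file is picked up instead of a checked-in one.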



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14980709#comment-14980709
 ] 

ASF GitHub Bot commented on KAFKA-2502:
---

GitHub user auradkar opened a pull request:

https://github.com/apache/kafka/pull/381

KAFKA-2502 - Documentation for quotas

Followed the approach specified here: 
https://issues.apache.org/jira/browse/KAFKA-2502
I also made a minor fix to ConfigCommand to expose the right options on 
add-config.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auradkar/kafka K-2502

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/381.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #381


commit 8cafbe60108b5bfb2dd869f5e6dd138f7ae1cc60
Author: Aditya Auradkar 
Date:   2015-10-29T00:57:34Z

Adding documentation for quotas

commit 92fd19351b965fad553e3b185a639dbf4f869949
Author: Aditya Auradkar 
Date:   2015-10-29T01:20:45Z

Added tab to ConfigCommand

commit 48118e932a0ade2a89617767712a9224df2d6a66
Author: Aditya Auradkar 
Date:   2015-10-29T02:28:47Z

Added design section for quotas

commit 8f5ecb9591fa7a4f6f7311012b99a755a8862cd0
Author: Aditya Auradkar 
Date:   2015-10-29T16:11:06Z

Minor corrections




> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2449) Update mirror maker (MirrorMaker) docs

2015-10-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14980944#comment-14980944
 ] 

ASF GitHub Bot commented on KAFKA-2449:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/382

KAFKA-2449: Docs: Automatically generate documentation from config classes

…the way we always planned to

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2666

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/382.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #382


commit c16e1d21bd0b2556f97bd53ccb7d7f3598dbb2d6
Author: Gwen Shapira 
Date:   2015-10-29T18:04:54Z

Auto-generate the configuration docs from the configuration objects, the 
way we always planned to




> Update mirror maker (MirrorMaker) docs
> --
>
> Key: KAFKA-2449
> URL: https://issues.apache.org/jira/browse/KAFKA-2449
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Gwen Shapira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The Kafka docs on Mirror Maker state that it mirrors from N source clusters 
> to 1 destination, but this is no longer the case. Docs should be updated to 
> reflect that it mirrors from a single source cluster to a single target cluster.
> Docs I've found where this should be updated:
> http://kafka.apache.org/documentation.html#basic_ops_mirror_maker
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+mirroring+(MirrorMaker)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2663) Add quota-delay time to request processing time break-up

2015-10-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14981042#comment-14981042
 ] 

ASF GitHub Bot commented on KAFKA-2663:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/369


> Add quota-delay time to request processing time break-up
> 
>
> Key: KAFKA-2663
> URL: https://issues.apache.org/jira/browse/KAFKA-2663
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Aditya Auradkar
> Fix For: 0.9.0.0
>
>
> This is probably not critical for 0.9 but should be easy to fix:
> If a request is delayed due to quotas, I think the remote time will go up 
> artificially - or maybe response queue time (haven’t checked). We should add 
> a new quotaDelayTime to the request handling time break-up.
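As an illustration of the proposed break-up (field names here are illustrative, not the actual RequestChannel metrics), the throttle delay becomes its own component so it no longer inflates remote time or response-queue time:

    public final class RequestTimeBreakdownSketch {
        // Illustrative components of the request handling time break-up; the new
        // piece is throttleTimeMs, recorded when quotas delay the response.
        long requestQueueTimeMs;
        long localTimeMs;          // time spent in the request handler
        long remoteTimeMs;         // time spent waiting on other brokers / purgatory
        long throttleTimeMs;       // new: time the response was held back by quotas
        long responseQueueTimeMs;
        long responseSendTimeMs;

        long totalTimeMs() {
            return requestQueueTimeMs + localTimeMs + remoteTimeMs
                    + throttleTimeMs + responseQueueTimeMs + responseSendTimeMs;
        }
    }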



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2705) Remove static JAAS config file for ZK auth tests

2015-10-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14981059#comment-14981059
 ] 

ASF GitHub Bot commented on KAFKA-2705:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/380


> Remove static JAAS config file for ZK auth tests
> 
>
> Key: KAFKA-2705
> URL: https://issues.apache.org/jira/browse/KAFKA-2705
> Project: Kafka
>  Issue Type: Test
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.0.0
>
>
> We have a static login config file in the resources folder, and it is better 
> for testing to have that file created dynamically. This issue adds this 
> functionality. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2706) Make state stores first class citizens in the processor DAG

2015-10-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14981561#comment-14981561
 ] 

ASF GitHub Bot commented on KAFKA-2706:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/387

KAFKA-2706: make state stores first class citizens in the processor topology

* Added StateStoreSupplier
* StateStore
  * Added init(ProcessorContext context) method
* TopologyBuilder
  * Added addStateStore(StateStoreSupplier supplier, String... processNames)
  * Added connectProcessorAndStateStores(String processorName, String... 
stateStoreNames)
    * This is for the case where processors are not yet created when a store is 
added to the topology (used by KStream).
* KStream
  * add stateStoreNames to process(), transform(), transformValues().
* Refactored existing state stores to implement StateStoreSupplier

@guozhangwang 
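A rough usage sketch based only on the method signatures listed above; the builder and interfaces below are simplified stand-ins rather than the real Kafka Streams classes, and the store and processor names are made up:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Minimal stand-ins for the interfaces named above; NOT the real Kafka Streams API.
    interface StateStore { String name(); void init(Object context); }
    interface StateStoreSupplier { String name(); StateStore get(); }

    public final class StateStoreWiringSketch {

        // Toy builder showing the two wiring paths described in the pull request:
        // register a store together with its processors, or connect them later by name.
        static final class Builder {
            final Map<String, StateStoreSupplier> storesByName = new HashMap<>();
            final Map<String, Set<String>> storesByProcessor = new HashMap<>();

            Builder addStateStore(StateStoreSupplier supplier, String... processorNames) {
                storesByName.put(supplier.name(), supplier);
                for (String p : processorNames) {
                    connectProcessorAndStateStores(p, supplier.name());
                }
                return this;
            }

            Builder connectProcessorAndStateStores(String processorName, String... stateStoreNames) {
                Set<String> names = storesByProcessor.get(processorName);
                if (names == null) {
                    names = new HashSet<>();
                    storesByProcessor.put(processorName, names);
                }
                names.addAll(Arrays.asList(stateStoreNames));
                return this;
            }
        }

        public static void main(String[] args) {
            StateStoreSupplier counts = new StateStoreSupplier() {
                public String name() { return "counts"; }   // hypothetical store name
                public StateStore get() {
                    return new StateStore() {
                        public String name() { return "counts"; }
                        public void init(Object context) { /* register/restore the store here */ }
                    };
                }
            };
            Builder builder = new Builder()
                .addStateStore(counts, "counting-processor")                  // store declared with its processor
                .connectProcessorAndStateStores("late-processor", "counts"); // processor added after the store
            System.out.println(builder.storesByProcessor);
        }
    }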

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka state_store_supplier

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/387.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #387


commit 97869c26fe503818dc57b62671f7d82450b002c1
Author: Yasuhiro Matsuda 
Date:   2015-10-29T23:19:30Z

KAFKA-2706: make state stores first class citizens in the processor topology




> Make state stores first class citizens in the processor DAG
> ---
>
> Key: KAFKA-2706
> URL: https://issues.apache.org/jira/browse/KAFKA-2706
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-10-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14981552#comment-14981552
 ] 

ASF GitHub Bot commented on KAFKA-2017:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/386

KAFKA-2017: Persist Group Metadata and Assignment before Responding



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2017

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/386.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #386


commit c00cebd93262ab9e0f1a9ea164902ab7ef0c800a
Author: Guozhang Wang 
Date:   2015-10-29T22:07:17Z

move group metadata into offset manager

commit 17a83eb8529a57a97826473919764a02db1ca3b4
Author: Guozhang Wang 
Date:   2015-10-29T22:10:50Z

add back package.html

commit 1f2579d941fe25038738ec8e93f8b480ab7e7fe7
Author: Guozhang Wang 
Date:   2015-10-29T23:28:32Z

persist synced assignment and client side error handling




> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to failover to 
> a new coordinator without forcing all the consumers rejoin their groups. This 
> is possible if the coordinator persists its state so that the state can be 
> transferred during coordinator failover. This state consists of most of the 
> information in GroupRegistry and ConsumerRegistry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2447) Add capability to KafkaLog4jAppender to be able to use SSL

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14976588#comment-14976588
 ] 

ASF GitHub Bot commented on KAFKA-2447:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/175


> Add capability to KafkaLog4jAppender to be able to use SSL
> --
>
> Key: KAFKA-2447
> URL: https://issues.apache.org/jira/browse/KAFKA-2447
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> With Kafka supporting SSL, it makes sense to augment KafkaLog4jAppender to be 
> able to use SSL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2516) Rename o.a.k.client.tools to o.a.k.tools

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14976500#comment-14976500
 ] 

ASF GitHub Bot commented on KAFKA-2516:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/310


> Rename o.a.k.client.tools to o.a.k.tools
> 
>
> Key: KAFKA-2516
> URL: https://issues.apache.org/jira/browse/KAFKA-2516
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently our new performance tools are in o.a.k.client.tools but packaged in 
> kafka-tools not kafka-clients. This is a bit confusing.
> Since they deserve their own jar (you don't want our client tools packaged in 
> your app), let's give them a separate package and call it o.a.k.tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2452) enable new consumer in mirror maker

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14976520#comment-14976520
 ] 

ASF GitHub Bot commented on KAFKA-2452:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/266


> enable new consumer in mirror maker
> ---
>
> Key: KAFKA-2452
> URL: https://issues.apache.org/jira/browse/KAFKA-2452
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add an option to enable the new consumer in mirror maker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2696) New KafkaProducer documentation doesn't include all necessary config properties

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14976600#comment-14976600
 ] 

ASF GitHub Bot commented on KAFKA-2696:
---

GitHub user edwardmlyte opened a pull request:

https://github.com/apache/kafka/pull/367

KAFKA-2696: New KafkaProducer documentation doesn't include all necessary 
config properties

Added documentation for the missing properties and highlighted the minimum 
required properties.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edwardmlyte/kafka docsUpdate

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/367.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #367


commit 68938ab2470c699a2471c5eacc7451cc122b330a
Author: edwardmlyte 
Date:   2015-10-27T15:38:50Z

KAFKA-2696: New KafkaProducer documentation doesn't include all necessary 
config properties

Added in documentation.




> New KafkaProducer documentation doesn't include all necessary config 
> properties
> ---
>
> Key: KAFKA-2696
> URL: https://issues.apache.org/jira/browse/KAFKA-2696
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.8.2.2
>Reporter: Edward Maxwell-Lyte
>
> The docs are missing the definitions for key.serializer and value.serializer. It 
> would be good to highlight the properties required for the new KafkaProducer 
> to work.
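For reference, a minimal sketch of the kind of example such documentation could show; the broker address and topic are placeholders. bootstrap.servers plus the two serializer settings (or serializer instances passed to the constructor) are what the new KafkaProducer needs before it will start:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public final class MinimalProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
            // Required unless Serializer instances are passed to the constructor.
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("test-topic", "key", "value"));
            }
        }
    }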



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2696) New KafkaProducer documentation doesn't include all necessary config properties

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14976621#comment-14976621
 ] 

ASF GitHub Bot commented on KAFKA-2696:
---

Github user edwardmlyte closed the pull request at:

https://github.com/apache/kafka/pull/367


> New KafkaProducer documentation doesn't include all necessary config 
> properties
> ---
>
> Key: KAFKA-2696
> URL: https://issues.apache.org/jira/browse/KAFKA-2696
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.8.2.2
>Reporter: Edward Maxwell-Lyte
>
> The docs are missing the definitions for key.serializer and value.serializer. It 
> would be good to highlight the properties required for the new KafkaProducer 
> to work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2645) Document potentially breaking changes in the release notes for 0.9.0

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14976496#comment-14976496
 ] 

ASF GitHub Bot commented on KAFKA-2645:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/337


> Document potentially breaking changes in the release notes for 0.9.0
> 
>
> Key: KAFKA-2645
> URL: https://issues.apache.org/jira/browse/KAFKA-2645
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2663) Add quota-delay time to request processing time break-up

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14976747#comment-14976747
 ] 

ASF GitHub Bot commented on KAFKA-2663:
---

GitHub user auradkar opened a pull request:

https://github.com/apache/kafka/pull/369

KAFKA-2663, KAFKA-2664 - [Minor] Bugfixes

This has 2 fixes:
KAFKA-2664 - This patch changes the underlying map implementation of 
Metrics.java to a ConcurrentHashMap. Using a CopyOnWriteMap caused new metrics 
creation to get extremely slow when the existing corpus of metrics is large. 
Using a ConcurrentHashMap seems to speed up metric creation time significantly

KAFKA-2663 - Splitting out the throttleTime from the remote time. On 
throttled requests, the remote time went up artificially.
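To illustrate why the map swap matters, a generic sketch (not the Metrics.java change itself): a copy-on-write map clones its entire backing map on every insert, so registering the N-th metric costs O(N), whereas insertion into a ConcurrentHashMap stays roughly constant and reads remain lock-free.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    public final class MetricRegistrySketch {
        // Toy stand-in for a metrics registry; the real code keys KafkaMetric by MetricName.
        private final Map<String, AtomicLong> metrics = new ConcurrentHashMap<>();

        // Thread-safe registration without copying: a copy-on-write map would clone
        // the whole backing map for every newly created metric.
        AtomicLong getOrCreate(String name) {
            AtomicLong existing = metrics.get(name);
            if (existing != null) {
                return existing;
            }
            AtomicLong fresh = new AtomicLong();
            AtomicLong raced = metrics.putIfAbsent(name, fresh);
            return raced != null ? raced : fresh;
        }

        public static void main(String[] args) {
            MetricRegistrySketch registry = new MetricRegistrySketch();
            for (int i = 0; i < 100000; i++) {
                registry.getOrCreate("metric-" + i);
            }
            System.out.println(registry.metrics.size());   // 100000
        }
    }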

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auradkar/kafka K-2664

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/369.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #369


commit 2dc50c39bb9ea2c29d4d7663cacc145bf4bcd758
Author: Aditya Auradkar 
Date:   2015-10-27T16:29:29Z

Fix for KAFKA-2664, KAFKA-2663

commit f3abc741312a33fc2aba011fbc179519749af439
Author: Aditya Auradkar 
Date:   2015-10-27T17:06:47Z

revert gradle changes




> Add quota-delay time to request processing time break-up
> 
>
> Key: KAFKA-2663
> URL: https://issues.apache.org/jira/browse/KAFKA-2663
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Aditya Auradkar
> Fix For: 0.9.0.1
>
>
> This is probably not critical for 0.9 but should be easy to fix:
> If a request is delayed due to quotas, I think the remote time will go up 
> artificially - or maybe response queue time (haven’t checked). We should add 
> a new quotaDelayTime to the request handling time break-up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1888) Add a "rolling upgrade" system test

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14977270#comment-14977270
 ] 

ASF GitHub Bot commented on KAFKA-1888:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/229


> Add a "rolling upgrade" system test
> ---
>
> Key: KAFKA-1888
> URL: https://issues.apache.org/jira/browse/KAFKA-1888
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Gwen Shapira
>Assignee: Geoff Anderson
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1888_2015-03-23_11:54:25.patch
>
>
> To help test upgrades and compatibility between versions, it will be cool to 
> add a rolling-upgrade test to system tests:
> Given two versions (just a path to the jars?), check that you can do a
> rolling upgrade of the brokers from one version to another (using clients 
> from the old version) without losing data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2677) Coordinator disconnects not propagated to new consumer

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14977348#comment-14977348
 ] 

ASF GitHub Bot commented on KAFKA-2677:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/349


> Coordinator disconnects not propagated to new consumer
> --
>
> Key: KAFKA-2677
> URL: https://issues.apache.org/jira/browse/KAFKA-2677
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> Currently, disconnects by the coordinator are not always seen by the 
> consumer. This can result in a long delay after the old coordinator has 
> shut down or failed before the consumer knows that it needs to find the new 
> coordinator. The NetworkClient makes socket disconnects available to users in 
> two ways:
> 1. through a flag in the ClientResponse object for requests pending when the 
> disconnect occurred, and 
> 2. through the connectionFailed() method. 
> The first method clearly cannot be depended on since it only helps when a 
> request is pending, which is relatively rare for the connection with the 
> coordinator. Instead, we can probably use the second method with a little 
> rework of ConsumerNetworkClient to check for failed connections immediately 
> after returning from poll(). 
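A simplified sketch of the two signals described above, using toy stand-ins rather than the real NetworkClient/ConsumerNetworkClient classes; the node id and timeout are placeholders. The per-response disconnect flag only fires when a request happened to be in flight, so the poll loop also has to ask the client directly whether the coordinator's connection has failed.

    public final class CoordinatorDisconnectSketch {

        // Toy stand-ins for the real client classes; the method names mirror the two
        // signals described in the issue.
        interface Client {
            void poll(long timeoutMs);
            boolean connectionFailed(String nodeId);   // signal 2: direct connection check
        }
        interface Response {
            boolean wasDisconnected();                 // signal 1: flag on an in-flight request
        }                                              // (shown only for contrast; rarely pending
                                                       // for the coordinator connection)

        private String coordinatorId = "broker-0";     // placeholder coordinator node id

        void pollOnce(Client client) {
            client.poll(100);
            // Signal 1 is only observed if a request to the coordinator was in flight,
            // which is rare; checking signal 2 right after poll() also catches failures
            // on an otherwise idle coordinator connection.
            if (coordinatorId != null && client.connectionFailed(coordinatorId)) {
                coordinatorDead();
            }
        }

        private void coordinatorDead() {
            coordinatorId = null;   // forget the coordinator so the next call rediscovers it
        }

        public static void main(String[] args) {
            CoordinatorDisconnectSketch sketch = new CoordinatorDisconnectSketch();
            // Fake client whose coordinator connection has silently dropped.
            sketch.pollOnce(new Client() {
                public void poll(long timeoutMs) { /* nothing in flight, so no Response flag */ }
                public boolean connectionFailed(String nodeId) { return true; }
            });
            System.out.println("coordinator still known? " + (sketch.coordinatorId != null));
        }
    }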



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2683) Ensure wakeup exceptions are propagated to user in new consumer

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14977465#comment-14977465
 ] 

ASF GitHub Bot commented on KAFKA-2683:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/366


> Ensure wakeup exceptions are propagated to user in new consumer
> ---
>
> Key: KAFKA-2683
> URL: https://issues.apache.org/jira/browse/KAFKA-2683
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> KafkaConsumer.wakeup() can be used to interrupt blocking operations (e.g. in 
> order to shutdown), so wakeup exceptions must get propagated to the user. 
> Currently, there are several locations in the code where a wakeup exception 
> could be caught and silently discarded. For example, when the rebalance 
> callback is invoked, we just catch and log all exceptions. In this case, we 
> also need to be careful that wakeup exceptions do not affect rebalance 
> callback semantics. In particular, it is possible currently for a wakeup to 
> cause onPartitionsRevoked to be invoked multiple times.
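For context, the shutdown pattern the fix has to preserve (a minimal sketch; broker address, group id, and topic are placeholders): wakeup() is called from another thread, and the resulting WakeupException must surface in the polling thread instead of being swallowed inside a rebalance callback.

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.errors.WakeupException;

    public final class ShutdownViaWakeup {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder
            props.put("group.id", "demo-group");                // placeholder
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("demo-topic"));    // placeholder topic

            // Another thread (here a shutdown hook) interrupts the blocking poll.
            Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));

            try {
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.value());
                    }
                }
            } catch (WakeupException e) {
                // Expected on shutdown: the exception must reach this catch block,
                // not be swallowed while a rebalance callback is running.
            } finally {
                consumer.close();
            }
        }
    }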



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2516) Rename o.a.k.client.tools to o.a.k.tools

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14976477#comment-14976477
 ] 

ASF GitHub Bot commented on KAFKA-2516:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/310


> Rename o.a.k.client.tools to o.a.k.tools
> 
>
> Key: KAFKA-2516
> URL: https://issues.apache.org/jira/browse/KAFKA-2516
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently our new performance tools are in o.a.k.client.tools but packaged in 
> kafka-tools not kafka-clients. This is a bit confusing.
> Since they deserve their own jar (you don't want our client tools packaged in 
> your app), let's give them a separate package and call it o.a.k.tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2516) Rename o.a.k.client.tools to o.a.k.tools

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14976478#comment-14976478
 ] 

ASF GitHub Bot commented on KAFKA-2516:
---

GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/310

KAFKA-2516: Rename o.a.k.client.tools to o.a.k.tools



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka tools-packaging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/310.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #310


commit f1cf0a01fc4ea46a03bc0cbb37cdf763a91825e5
Author: Grant Henke 
Date:   2015-10-14T16:51:08Z

KAFKA-2516: Rename o.a.k.client.tools to o.a.k.tools




> Rename o.a.k.client.tools to o.a.k.tools
> 
>
> Key: KAFKA-2516
> URL: https://issues.apache.org/jira/browse/KAFKA-2516
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently our new performance tools are in o.a.k.client.tools but packaged in 
> kafka-tools not kafka-clients. This is a bit confusing.
> Since they deserve their own jar (you don't want our client tools packaged in 
> your app), let's give them a separate package and call it o.a.k.tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

