Jenkins build is back to normal : kafka-trunk-jdk7 #879

2015-12-08 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2903) FileMessageSet's read method maybe has problem when start is not zero

2015-12-08 Thread Pengwei (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046561#comment-15046561
 ] 

Pengwei commented on KAFKA-2903:


Yes, we could also add a comment on it. 
But anyone who uses this read API on a sliced FileMessageSet in a future 
version will get a wrong result, so modifying the code seems better. 
It is only one line of code, while a comment would need many words to explain 
the pitfall.

> FileMessageSet's read method maybe has problem when start is not zero
> -
>
> Key: KAFKA-2903
> URL: https://issues.apache.org/jira/browse/KAFKA-2903
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.2.1, 0.9.0.0
>Reporter: Pengwei
>Assignee: Jay Kreps
> Fix For: 0.9.1.0
>
>
> now the code is:
> def read(position: Int, size: Int): FileMessageSet = {
>   new FileMessageSet(file,
>                      channel,
>                      start = this.start + position,
>                      end = math.min(this.start + position + size, sizeInBytes()))
> }
> if this.start is not 0, the end is clamped to the FileMessageSet's size 
> rather than to the actual end position in the file.
> the end parameter should be:
>   end = math.min(this.start + position + size, this.start + sizeInBytes())
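The arithmetic is easy to see in a standalone sketch (a hypothetical class, not the real FileMessageSet): with a non-zero start, clamping against sizeInBytes() compares an absolute file position to a relative length, so a valid read can collapse to an empty slice.

```java
// Hypothetical, simplified model of a sliced FileMessageSet, reduced to
// the start/end arithmetic the ticket is about.
public class SliceReadSketch {
    final int start; // absolute start offset of this slice in the file
    final int end;   // absolute end offset (exclusive)

    SliceReadSketch(int start, int end) {
        this.start = start;
        this.end = end;
    }

    int sizeInBytes() {
        return end - start;
    }

    // Current code: clamps against sizeInBytes(), a length relative to
    // `start`, instead of an absolute file position.
    SliceReadSketch readBuggy(int position, int size) {
        return new SliceReadSketch(start + position,
                Math.min(start + position + size, sizeInBytes()));
    }

    // Proposed fix: clamp against start + sizeInBytes(), the absolute
    // end of this slice.
    SliceReadSketch readFixed(int position, int size) {
        return new SliceReadSketch(start + position,
                Math.min(start + position + size, start + sizeInBytes()));
    }

    public static void main(String[] args) {
        SliceReadSketch slice = new SliceReadSketch(100, 200); // start != 0
        System.out.println("buggy size: " + slice.readBuggy(0, 50).sizeInBytes());
        System.out.println("fixed size: " + slice.readFixed(0, 50).sizeInBytes());
    }
}
```

With start = 100, a 50-byte read at position 0 yields end = min(150, 100) = 100 under the current code (an empty slice), versus min(150, 200) = 150 with the fix.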



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #208

2015-12-08 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Remove unused DoublyLinkedList

[wangguoz] HOTFIX: fix ProcessorStateManager to use correct ktable partitions

--
[...truncated 1425 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest >

[jira] [Commented] (KAFKA-2953) Kafka documentation is really wide

2015-12-08 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046695#comment-15046695
 ] 

Jens Rantil commented on KAFKA-2953:


Haha, it is a wise documentation, that's for sure :) Jokes aside, I've 
corrected the title now (and will give my fat fingers a workout). Thanks for 
input no this.

> Kafka documentation is really wide
> --
>
> Key: KAFKA-2953
> URL: https://issues.apache.org/jira/browse/KAFKA-2953
> Project: Kafka
>  Issue Type: Bug
>  Components: website
> Environment: Google Chrome Version 47.0.2526.73 (64-bit)
>Reporter: Jens Rantil
>Priority: Trivial
>
> The page at http://kafka.apache.org/documentation.html is extremely wide, 
> which is mostly annoying.





[jira] [Comment Edited] (KAFKA-2953) Kafka documentation is really wide

2015-12-08 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046695#comment-15046695
 ] 

Jens Rantil edited comment on KAFKA-2953 at 12/8/15 10:23 AM:
--

Haha, it is a wise documentation, that's for sure :) Jokes aside, I've 
corrected the title now (and will give my fat fingers a workout). Thanks for 
input on this.


was (Author: ztyx):
Haha, it is a wise documentation, that's for sure :) Jokes aside, I've 
corrected the title now (and will give my fat fingers a workout). Thanks for 
input no this.

> Kafka documentation is really wide
> --
>
> Key: KAFKA-2953
> URL: https://issues.apache.org/jira/browse/KAFKA-2953
> Project: Kafka
>  Issue Type: Bug
>  Components: website
> Environment: Google Chrome Version 47.0.2526.73 (64-bit)
>Reporter: Jens Rantil
>Priority: Trivial
>
> The page at http://kafka.apache.org/documentation.html is extremely wide, 
> which is mostly annoying.





[GitHub] kafka pull request: KAFKA-2061: Offer a --version flag to print th...

2015-12-08 Thread sasakitoa
GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/639

KAFKA-2061: Offer a --version flag to print the kafka version

Add version option to command line tools to print Kafka version

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka version_option

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/639.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #639


commit 08a87f1201fd6650057a60ec45a206ab612c271e
Author: Sasaki Toru 
Date:   2015-12-08T11:35:33Z

Add version option to command line tools to print Kafka version




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-2959) Remove temporary mapping to deserialize functions in RequestChannel

2015-12-08 Thread Jakub Nowak (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakub Nowak reassigned KAFKA-2959:
--

Assignee: Jakub Nowak

> Remove temporary mapping to deserialize functions in RequestChannel 
> 
>
> Key: KAFKA-2959
> URL: https://issues.apache.org/jira/browse/KAFKA-2959
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Jakub Nowak
>
> Once the old Request & Response objects are no longer used we can delete the 
> legacy mapping maintained in RequestChannel.scala





[jira] [Commented] (KAFKA-2948) Kafka producer does not cope well with topic deletions

2015-12-08 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046724#comment-15046724
 ] 

Rajini Sivaram commented on KAFKA-2948:
---

[~becket_qin] Thank you for your feedback. The fix we are testing at the 
moment removes a topic with an `UNKNOWN_TOPIC_OR_PARTITION` error from the 
metadata set when the error is received in a response, but re-adds it when 
metadata is requested for the topic (e.g. a producer waiting for metadata to 
send a message). This ensures that the request is retried when required, but 
not when the topic is no longer in use. 

TTL sounds like a better option: it removes not just deleted topics but any 
topic that is no longer being used. My only concern is that deleted topics 
would remain in the list longer, producing a lot of warnings in the logs as 
metadata requests are retried. I could combine the current fix with TTL to 
avoid this if required, but I will first try TTL on its own with the REST 
service and see how that goes. 
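The TTL idea can be sketched as follows (hypothetical names and structure, not the actual client internals): record the last time each topic's metadata was requested, and drop entries that have been idle longer than the TTL on each metadata refresh.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Hypothetical sketch of TTL-based topic expiry for producer metadata.
public class TopicExpiryTracker {
    private final long ttlMs;
    private final Map<String, Long> lastUsedMs = new HashMap<>();

    public TopicExpiryTracker(long ttlMs) {
        this.ttlMs = ttlMs;
    }

    // Called whenever metadata for a topic is requested, e.g. on send().
    public void touch(String topic, long nowMs) {
        lastUsedMs.put(topic, nowMs);
    }

    // Called on each metadata refresh: drop topics idle longer than the TTL,
    // whether they were deleted or simply fell out of use.
    public void expire(long nowMs) {
        Iterator<Map.Entry<String, Long>> it = lastUsedMs.entrySet().iterator();
        while (it.hasNext()) {
            if (nowMs - it.next().getValue() > ttlMs) {
                it.remove();
            }
        }
    }

    public boolean contains(String topic) {
        return lastUsedMs.containsKey(topic);
    }

    public static void main(String[] args) {
        TopicExpiryTracker tracker = new TopicExpiryTracker(1000);
        tracker.touch("deleted-topic", 0);   // never used again
        tracker.touch("live-topic", 900);    // still in use
        tracker.expire(1500);
        System.out.println(tracker.contains("deleted-topic")); // idle 1500 ms > TTL
        System.out.println(tracker.contains("live-topic"));    // idle only 600 ms
    }
}
```

This also illustrates the concern above: a deleted topic stays in the set, and keeps triggering metadata retries, until the TTL elapses.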

> Kafka producer does not cope well with topic deletions
> --
>
> Key: KAFKA-2948
> URL: https://issues.apache.org/jira/browse/KAFKA-2948
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> The Kafka producer fetches metadata for topics when send is invoked and 
> thereafter tries to keep the metadata up to date without any explicit 
> requests from the client. This works well in static environments, but when 
> topics are added or deleted, the list of topics in Metadata grows but never 
> shrinks. Apart from being a memory leak, this results in constant metadata 
> requests for deleted topics.
> We are running into this issue with the Confluent REST server, where topic 
> deletions from tests fill up the logs with warnings about unknown topics. 
> Auto-create is turned off in our Kafka cluster.
> I am happy to provide a fix, but am not sure what the right fix is. Does it 
> make sense to remove topics from the metadata list when an 
> UNKNOWN_TOPIC_OR_PARTITION response is received and there are no outstanding 
> sends? It doesn't look very straightforward to do this, so any alternative 
> suggestions are welcome.





[jira] [Created] (KAFKA-2960) DelayedProduce may cause message lose during repeatly leader change

2015-12-08 Thread Xing Huang (JIRA)
Xing Huang created KAFKA-2960:
-

 Summary: DelayedProduce may cause message lose during repeatly 
leader change
 Key: KAFKA-2960
 URL: https://issues.apache.org/jira/browse/KAFKA-2960
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.9.0.0
Reporter: Xing Huang


related to KAFKA-1148
When a leader replica becomes a follower and then leader again, it may 
truncate its log while it is a follower. The second time it becomes leader, 
its ISR may shrink; if new messages are appended at that moment, the 
DelayedProduce created during its first leadership may be satisfied, and the 
client will receive a response with no error. But the messages were actually 
lost. 

We simulated this scenario, which proved that message loss can happen. And 
judging from broker logs and client logs, it seems to be the cause of a data 
loss that recently happened to us.

I think we should check the leader epoch when sending a response, or satisfy 
the DelayedProduce on leader change as described in KAFKA-1148.

And we may need a new error code to inform the producer about this error. 




[jira] [Updated] (KAFKA-2953) Kafka documentation is really wide

2015-12-08 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated KAFKA-2953:
---
Summary: Kafka documentation is really wide  (was: Kafka documentation is 
really wise)

> Kafka documentation is really wide
> --
>
> Key: KAFKA-2953
> URL: https://issues.apache.org/jira/browse/KAFKA-2953
> Project: Kafka
>  Issue Type: Bug
>  Components: website
> Environment: Google Chrome Version 47.0.2526.73 (64-bit)
>Reporter: Jens Rantil
>Priority: Trivial
>
> The page at http://kafka.apache.org/documentation.html is extremely wide, 
> which is mostly annoying.





[jira] [Commented] (KAFKA-2061) Offer a --version flag to print the kafka version

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046793#comment-15046793
 ] 

ASF GitHub Bot commented on KAFKA-2061:
---

GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/639

KAFKA-2061: Offer a --version flag to print the kafka version

Add version option to command line tools to print Kafka version

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka version_option

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/639.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #639


commit 08a87f1201fd6650057a60ec45a206ab612c271e
Author: Sasaki Toru 
Date:   2015-12-08T11:35:33Z

Add version option to command line tools to print Kafka version




> Offer a --version flag to print the kafka version
> -
>
> Key: KAFKA-2061
> URL: https://issues.apache.org/jira/browse/KAFKA-2061
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrew Pennebaker
>Priority: Minor
>
> As a newbie, I want kafka command line tools to offer a --version flag to 
> print the kafka version, so that it's easier to work with the community to 
> troubleshoot things.
> As a mitigation, users can query the package management system. But that's A) 
> Not necessarily a newbie's first instinct and B) Not always possible when 
> kafka is installed manually from tarballs.





[jira] [Work started] (KAFKA-2959) Remove temporary mapping to deserialize functions in RequestChannel

2015-12-08 Thread Jakub Nowak (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2959 started by Jakub Nowak.
--
> Remove temporary mapping to deserialize functions in RequestChannel 
> 
>
> Key: KAFKA-2959
> URL: https://issues.apache.org/jira/browse/KAFKA-2959
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Jakub Nowak
>
> Once the old Request & Response objects are no longer used we can delete the 
> legacy mapping maintained in RequestChannel.scala





[jira] [Created] (KAFKA-2961) Add a Single Topic KafkaConsumer Helper Function

2015-12-08 Thread Jesse Anderson (JIRA)
Jesse Anderson created KAFKA-2961:
-

 Summary: Add a Single Topic KafkaConsumer Helper Function
 Key: KAFKA-2961
 URL: https://issues.apache.org/jira/browse/KAFKA-2961
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.9.0.0
Reporter: Jesse Anderson
Assignee: Neha Narkhede


To subscribe to a single topic, you need to write more code than you should:
consumer.subscribe(Arrays.asList(topic));

There should be a helper function that accepts a single topic to subscribe 
to, like this:
consumer.subscribe(topic);
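As a side note, the JDK already offers a one-argument alternative to Arrays.asList for this case: Collections.singletonList produces a one-element list, which reads slightly closer to the proposed helper.

```java
import java.util.Collections;
import java.util.List;

public class SingleTopicSubscribe {
    public static void main(String[] args) {
        String topic = "my-topic";
        // One-element immutable list; equivalent to Arrays.asList(topic)
        // for subscription purposes but signals "exactly one topic".
        List<String> topics = Collections.singletonList(topic);
        // consumer.subscribe(topics); // KafkaConsumer call, omitted here
        System.out.println(topics); // prints [my-topic]
    }
}
```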





[jira] [Work started] (KAFKA-2064) Replace ConsumerMetadataRequest and Response with org.apache.kafka.common.requests objects

2015-12-08 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2064 started by Grant Henke.
--
> Replace ConsumerMetadataRequest and Response with  
> org.apache.kafka.common.requests objects
> ---
>
> Key: KAFKA-2064
> URL: https://issues.apache.org/jira/browse/KAFKA-2064
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>
> Replace ConsumerMetadataRequest and response with  
> org.apache.kafka.common.requests objects





[jira] [Resolved] (KAFKA-2064) Replace ConsumerMetadataRequest and Response with org.apache.kafka.common.requests objects

2015-12-08 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-2064.

Resolution: Fixed

Looks like this was resolved by KAFKA-2687. ConsumerMetadataRequest was removed 
and GroupCoordinatorRequest took its place.

> Replace ConsumerMetadataRequest and Response with  
> org.apache.kafka.common.requests objects
> ---
>
> Key: KAFKA-2064
> URL: https://issues.apache.org/jira/browse/KAFKA-2064
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>
> Replace ConsumerMetadataRequest and response with  
> org.apache.kafka.common.requests objects





[jira] [Assigned] (KAFKA-2064) Replace ConsumerMetadataRequest and Response with org.apache.kafka.common.requests objects

2015-12-08 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-2064:
--

Assignee: Grant Henke

> Replace ConsumerMetadataRequest and Response with  
> org.apache.kafka.common.requests objects
> ---
>
> Key: KAFKA-2064
> URL: https://issues.apache.org/jira/browse/KAFKA-2064
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>
> Replace ConsumerMetadataRequest and response with  
> org.apache.kafka.common.requests objects





[jira] [Assigned] (KAFKA-2507) Replace ControlledShutdown{Request,Response} with org.apache.kafka.common.requests equivalent

2015-12-08 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-2507:
--

Assignee: Grant Henke

> Replace ControlledShutdown{Request,Response} with 
> org.apache.kafka.common.requests equivalent
> -
>
> Key: KAFKA-2507
> URL: https://issues.apache.org/jira/browse/KAFKA-2507
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
>






[jira] [Work started] (KAFKA-2507) Replace ControlledShutdown{Request,Response} with org.apache.kafka.common.requests equivalent

2015-12-08 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2507 started by Grant Henke.
--
> Replace ControlledShutdown{Request,Response} with 
> org.apache.kafka.common.requests equivalent
> -
>
> Key: KAFKA-2507
> URL: https://issues.apache.org/jira/browse/KAFKA-2507
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
>






[jira] [Commented] (KAFKA-2961) Add a Single Topic KafkaConsumer Helper Function

2015-12-08 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047141#comment-15047141
 ] 

Jason Gustafson commented on KAFKA-2961:


[~eljefe6a] I agree it's slightly annoying when you're subscribing to a single 
topic, but the point of using a list was to suggest that subscribe() is not 
additive and each call replaces the previous subscription. It's debatable 
whether it achieves that, but I think it might be even less clear if we also 
added subscribe(String).

> Add a Single Topic KafkaConsumer Helper Function
> 
>
> Key: KAFKA-2961
> URL: https://issues.apache.org/jira/browse/KAFKA-2961
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
>Assignee: Neha Narkhede
>
> To subscribe to a single topic, you need to write more code than you should:
> consumer.subscribe(Arrays.asList(topic));
> There should be a helper function that accepts a single topic to subscribe 
> to, like this:
> consumer.subscribe(topic);





[jira] [Updated] (KAFKA-2960) DelayedProduce may cause message lose during repeatly leader change

2015-12-08 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2960:
-
Fix Version/s: 0.9.1.0

> DelayedProduce may cause message lose during repeatly leader change
> ---
>
> Key: KAFKA-2960
> URL: https://issues.apache.org/jira/browse/KAFKA-2960
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Xing Huang
> Fix For: 0.9.1.0
>
>
> related to KAFKA-1148
> When a leader replica becomes a follower and then leader again, it may 
> truncate its log while it is a follower. The second time it becomes leader, 
> its ISR may shrink; if new messages are appended at that moment, the 
> DelayedProduce created during its first leadership may be satisfied, and the 
> client will receive a response with no error. But the messages were actually 
> lost. 
> We simulated this scenario, which proved that message loss can happen. And 
> judging from broker logs and client logs, it seems to be the cause of a data 
> loss that recently happened to us.
> I think we should check the leader epoch when sending a response, or satisfy 
> the DelayedProduce on leader change as described in KAFKA-1148.
> And we may need a new error code to inform the producer about this error. 





[jira] [Commented] (KAFKA-2507) Replace ControlledShutdown{Request,Response} with org.apache.kafka.common.requests equivalent

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047189#comment-15047189
 ] 

ASF GitHub Bot commented on KAFKA-2507:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/640

KAFKA-2507: Replace ControlledShutdown{Request,Response} with o.a.k.c…

….requests equivalent

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka controlled-shutdown

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/640.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #640


commit 7474779b584815da5592554366ad910bc70ca17a
Author: Grant Henke 
Date:   2015-12-08T18:14:32Z

KAFKA-2507: Replace ControlledShutdown{Request,Response} with 
o.a.k.c.requests equivalent




> Replace ControlledShutdown{Request,Response} with 
> org.apache.kafka.common.requests equivalent
> -
>
> Key: KAFKA-2507
> URL: https://issues.apache.org/jira/browse/KAFKA-2507
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
>






[jira] [Updated] (KAFKA-2507) Replace ControlledShutdown{Request,Response} with org.apache.kafka.common.requests equivalent

2015-12-08 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2507:
---
Status: Patch Available  (was: In Progress)

> Replace ControlledShutdown{Request,Response} with 
> org.apache.kafka.common.requests equivalent
> -
>
> Key: KAFKA-2507
> URL: https://issues.apache.org/jira/browse/KAFKA-2507
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
>






[jira] [Created] (KAFKA-2962) Add Join API

2015-12-08 Thread Yasuhiro Matsuda (JIRA)
Yasuhiro Matsuda created KAFKA-2962:
---

 Summary: Add Join API
 Key: KAFKA-2962
 URL: https://issues.apache.org/jira/browse/KAFKA-2962
 Project: Kafka
  Issue Type: Sub-task
  Components: kafka streams
Affects Versions: 0.9.1.0
Reporter: Yasuhiro Matsuda
Assignee: Yasuhiro Matsuda








[jira] [Commented] (KAFKA-2961) Add a Single Topic KafkaConsumer Helper Function

2015-12-08 Thread Jesse Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047213#comment-15047213
 ] 

Jesse Anderson commented on KAFKA-2961:
---

I understand your point. Maybe changing the method name would connote that it 
isn't additive in a more effective way. Maybe "setTopic(topic)" or 
"setSubscribeTopic(topic)" would be better. Usually a "set..." is not 
considered additive.

> Add a Single Topic KafkaConsumer Helper Function
> 
>
> Key: KAFKA-2961
> URL: https://issues.apache.org/jira/browse/KAFKA-2961
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
>Assignee: Neha Narkhede
>
> To subscribe to a single topic, you need to write more code than you should:
> consumer.subscribe(Arrays.asList(topic));
> There should be a helper function that accepts a single topic to subscribe 
> to, like this:
> consumer.subscribe(topic);





[jira] [Updated] (KAFKA-2958) Remove duplicate API key mapping functionality

2015-12-08 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2958:

   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 637
[https://github.com/apache/kafka/pull/637]

> Remove duplicate API key mapping functionality
> --
>
> Key: KAFKA-2958
> URL: https://issues.apache.org/jira/browse/KAFKA-2958
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> Kafka common and core both have a class that maps request API keys and names. 
> To prevent errors and consistency issues, we should remove 
> RequestKeys.scala in core in favor of ApiKeys.java in common.





[jira] [Commented] (KAFKA-2958) Remove duplicate API key mapping functionality

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047153#comment-15047153
 ] 

ASF GitHub Bot commented on KAFKA-2958:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/637


> Remove duplicate API key mapping functionality
> --
>
> Key: KAFKA-2958
> URL: https://issues.apache.org/jira/browse/KAFKA-2958
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> Kafka common and core both have a class that maps request API keys and names. 
> To prevent errors and consistency issues, we should remove 
> RequestKeys.scala in core in favor of ApiKeys.java in common.





[jira] [Created] (KAFKA-2963) Replace server internal usage of TopicAndPartition with TopicPartition

2015-12-08 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-2963:
--

 Summary: Replace server internal usage of TopicAndPartition with 
TopicPartition
 Key: KAFKA-2963
 URL: https://issues.apache.org/jira/browse/KAFKA-2963
 Project: Kafka
  Issue Type: Sub-task
Reporter: Grant Henke








[jira] [Commented] (KAFKA-2959) Remove temporary mapping to deserialize functions in RequestChannel

2015-12-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047199#comment-15047199
 ] 

Gwen Shapira commented on KAFKA-2959:
-

[~sinus]
I don't think you can start work on this just yet. It is blocked by everything 
else in the parent jira (KAFKA-1927) getting done first.

> Remove temporary mapping to deserialize functions in RequestChannel 
> 
>
> Key: KAFKA-2959
> URL: https://issues.apache.org/jira/browse/KAFKA-2959
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Jakub Nowak
>
> Once the old Request & Response objects are no longer used we can delete the 
> legacy mapping maintained in RequestChannel.scala





[jira] [Commented] (KAFKA-2960) DelayedProduce may cause message lose during repeatly leader change

2015-12-08 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047198#comment-15047198
 ] 

Jiangjie Qin commented on KAFKA-2960:
-

[~iBuddha] In Kafka, the persistence guarantees are offered at different levels. 
Would the following settings solve the scenario you mentioned?
acks=-1
min.isr=2
replication factor > 2

This should guarantee that when the response is sent, at least two brokers in the 
ISR have persisted the messages, so there should be no message loss unless the 
entire cluster is down.
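The settings above map onto the 0.9 Java producer and topic configuration roughly as sketched below. This is a minimal illustration, not a complete client: the broker address and topic name are placeholders, and `min.isr` here stands for the topic/broker config `min.insync.replicas`.

```java
import java.util.Properties;

public class DurabilityConfig {
    // Producer side: wait for acknowledgement from all in-sync replicas
    // before considering a send successful ("all" is equivalent to acks=-1).
    static Properties producerProps() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092"); // placeholder address
        p.put("acks", "all");
        return p;
    }

    // Topic side: a produce with acks=all only succeeds once at least two
    // in-sync replicas have the message. Combined with a replication factor
    // of 3 at topic creation, one broker can fail without data loss.
    static Properties topicConfig() {
        Properties t = new Properties();
        t.put("min.insync.replicas", "2");
        return t;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("acks"));
        System.out.println(topicConfig().getProperty("min.insync.replicas"));
    }
}
```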

> DelayedProduce may cause message lose during repeatly leader change
> ---
>
> Key: KAFKA-2960
> URL: https://issues.apache.org/jira/browse/KAFKA-2960
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Xing Huang
> Fix For: 0.9.1.0
>
>
> related to #KAFKA-1148
> When a leader replica becomes a follower and then leader again, it may truncate 
> its log as a follower. But the second time it becomes leader, its ISR may 
> shrink, and if at that moment new messages are appended, the DelayedProduce 
> generated when it was leader the first time may be satisfied, and the client 
> will receive a response with no error. But actually the messages were lost. 
> We simulated this scenario, which proved the message loss can happen. And it 
> seems to be the reason for a data loss that recently happened to us, according 
> to broker logs and client logs.
> I think we should check the leader epoch when sending a response, or satisfy 
> the DelayedProduce on leader change as described in #KAFKA-1148.
> And we may need a new error code to inform the producer about this error. 
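The leader-epoch check proposed in the description above can be sketched as follows. This is a hypothetical illustration, not the actual DelayedProduce implementation: the class and method names are invented for the example.

```java
// Hypothetical sketch: a delayed produce records the leader epoch under
// which it was created, and the response may only be reported successful
// if the partition is still led under that same epoch at completion time.
public class DelayedProduceSketch {
    private final int createdEpoch;

    DelayedProduceSketch(int currentLeaderEpoch) {
        this.createdEpoch = currentLeaderEpoch;
    }

    // Called when the required acks have arrived; currentEpoch is the
    // partition's leader epoch at completion time.
    boolean completeWithoutError(int currentEpoch) {
        // If the replica lost and regained leadership in between, the epoch
        // has advanced and the appended messages may have been truncated,
        // so the produce must not be acknowledged as successful.
        return currentEpoch == createdEpoch;
    }

    public static void main(String[] args) {
        DelayedProduceSketch dp = new DelayedProduceSketch(5);
        System.out.println(dp.completeWithoutError(5)); // leadership unchanged
        System.out.println(dp.completeWithoutError(7)); // leader changed: fail the produce
    }
}
```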





[GitHub] kafka pull request: KAFKA-2958: Remove duplicate API key mapping f...

2015-12-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/637


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2507: Replace ControlledShutdown{Request...

2015-12-08 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/640

KAFKA-2507: Replace ControlledShutdown{Request,Response} with o.a.k.c…

….requests equivalent

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka controlled-shutdown

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/640.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #640


commit 7474779b584815da5592554366ad910bc70ca17a
Author: Grant Henke 
Date:   2015-12-08T18:14:32Z

KAFKA-2507: Replace ControlledShutdown{Request,Response} with 
o.a.k.c.requests equivalent






[jira] [Updated] (KAFKA-2957) Fix typos in Kafka documentation

2015-12-08 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-2957:
---
Fix Version/s: 0.9.0.1

> Fix typos in Kafka documentation
> 
>
> Key: KAFKA-2957
> URL: https://issues.apache.org/jira/browse/KAFKA-2957
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Trivial
>  Labels: documentation, easyfix
> Fix For: 0.9.0.1
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> There are some minor typos in Kafka documentation. Example:
> - Supporting these uses led use to a design with a number of unique elements, 
> more akin to a database log then a traditional messaging system.
> This should read as:
> - Supporting these uses led *us* to a design with a number of unique 
> elements, more akin to a database log *than* a traditional messaging system.





[jira] [Commented] (KAFKA-2948) Kafka producer does not cope well with topic deletions

2015-12-08 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047349#comment-15047349
 ] 

Rajini Sivaram commented on KAFKA-2948:
---

[~mgharat] [~becket_qin] The code that I am testing at the moment uses the 
current config `metadata.max.age.ms`. If no messages are sent to a topic for 
this interval, the topic is removed from the metadata set; a subsequent send 
will add it back. I am also marking the topic for deletion if a send fails 
because no metadata was available for the topic, to limit the number of 
retries for deleted topics. Will submit a PR later today for review.
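The expiry mechanism described above can be sketched roughly as follows. This is an illustrative model, not the actual producer Metadata code: the class and method names are invented, and only the last-used bookkeeping is shown.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: each topic in the metadata set remembers when it was last used by
// a send; topics idle longer than metadata.max.age.ms are dropped on the
// next expiry pass, and a later send simply re-adds them.
public class MetadataTopicExpiry {
    private final long maxAgeMs;
    private final Map<String, Long> lastUsedMs = new HashMap<>();

    MetadataTopicExpiry(long maxAgeMs) { this.maxAgeMs = maxAgeMs; }

    // Called on every send: refreshes the topic's last-used timestamp.
    void add(String topic, long nowMs) { lastUsedMs.put(topic, nowMs); }

    // Called on metadata refresh: drops topics idle longer than maxAgeMs.
    void expire(long nowMs) {
        lastUsedMs.values().removeIf(t -> nowMs - t > maxAgeMs);
    }

    boolean contains(String topic) { return lastUsedMs.containsKey(topic); }

    public static void main(String[] args) {
        MetadataTopicExpiry m = new MetadataTopicExpiry(300_000); // 5 min
        m.add("orders", 0);
        m.add("deleted-topic", 0);
        m.add("orders", 200_000);  // "orders" is still in use
        m.expire(400_000);         // "deleted-topic" has been idle too long
        System.out.println(m.contains("orders"));        // true
        System.out.println(m.contains("deleted-topic")); // false
    }
}
```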

> Kafka producer does not cope well with topic deletions
> --
>
> Key: KAFKA-2948
> URL: https://issues.apache.org/jira/browse/KAFKA-2948
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> The Kafka producer gets metadata for topics when send is invoked, and thereafter 
> it attempts to keep the metadata up-to-date without any explicit requests 
> from the client. This works well in static environments, but when topics are 
> added or deleted, the list of topics in Metadata grows but never shrinks. Apart 
> from being a memory leak, this results in constant requests for metadata for 
> deleted topics.
> We are running into this issue with the Confluent REST server, where topic 
> deletions from tests are filling up logs with warnings about unknown topics. 
> Auto-create is turned off in our Kafka cluster.
> I am happy to provide a fix, but am not sure what the right fix is. Does it 
> make sense to remove topics from the metadata list when 
> UNKNOWN_TOPIC_OR_PARTITION response is received if there are no outstanding 
> sends? It doesn't look very straightforward to do this, so any alternative 
> suggestions are welcome.





Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Onur Karaman
Congrats Ewen!

On Tue, Dec 8, 2015 at 11:39 AM, Edward Ribeiro 
wrote:

> Congratulations, Ewen! :)
>
> Cheers,
> Eddie
> Em 08/12/2015 17:37, "Neha Narkhede"  escreveu:
>
> > I am pleased to announce that the Apache Kafka PMC has voted to
> > invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
> >
> > Ewen is an active member of the community and has contributed and
> reviewed
> > numerous patches to Kafka. His most significant contribution is Kafka
> > Connect which was released few days ago as part of 0.9.
> >
> > Please join me on welcoming and congratulating Ewen.
> >
> > Ewen, we look forward to your continued contributions to the Kafka
> > community!
> >
> > --
> > Thanks,
> > Neha
> >
>


[jira] [Commented] (KAFKA-2957) Fix typos in Kafka documentation

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047357#comment-15047357
 ] 

ASF GitHub Bot commented on KAFKA-2957:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/641

KAFKA-2957: Fix typos in Kafka documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-2957

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/641.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #641


commit 3d2410946d3cd675de0ab4a45ee57bed18c3e4ca
Author: Vahid Hashemian 
Date:   2015-12-08T19:29:50Z

Fix some typos in documentation (resolves KAFKA-2957)




> Fix typos in Kafka documentation
> 
>
> Key: KAFKA-2957
> URL: https://issues.apache.org/jira/browse/KAFKA-2957
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Trivial
>  Labels: documentation, easyfix
> Fix For: 0.9.0.1
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> There are some minor typos in Kafka documentation. Example:
> - Supporting these uses led use to a design with a number of unique elements, 
> more akin to a database log then a traditional messaging system.
> This should read as:
> - Supporting these uses led *us* to a design with a number of unique 
> elements, more akin to a database log *than* a traditional messaging system.





Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Jamie Grier
Congrats, Ewen!

On Tue, Dec 8, 2015 at 11:39 AM, Edward Ribeiro 
wrote:

> Congratulations, Ewen! :)
>
> Cheers,
> Eddie
> Em 08/12/2015 17:37, "Neha Narkhede"  escreveu:
>
> > I am pleased to announce that the Apache Kafka PMC has voted to
> > invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
> >
> > Ewen is an active member of the community and has contributed and
> reviewed
> > numerous patches to Kafka. His most significant contribution is Kafka
> > Connect which was released few days ago as part of 0.9.
> >
> > Please join me on welcoming and congratulating Ewen.
> >
> > Ewen, we look forward to your continued contributions to the Kafka
> > community!
> >
> > --
> > Thanks,
> > Neha
> >
>


[jira] [Commented] (KAFKA-2948) Kafka producer does not cope well with topic deletions

2015-12-08 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047220#comment-15047220
 ] 

Mayuresh Gharat commented on KAFKA-2948:


Adding a TTL would mean another user-exposed config. Can we not use the number of 
times we got "UNKNOWN_TOPIC_OR_PARTITION" and then get rid of the topic?

> Kafka producer does not cope well with topic deletions
> --
>
> Key: KAFKA-2948
> URL: https://issues.apache.org/jira/browse/KAFKA-2948
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> The Kafka producer gets metadata for topics when send is invoked, and thereafter 
> it attempts to keep the metadata up-to-date without any explicit requests 
> from the client. This works well in static environments, but when topics are 
> added or deleted, the list of topics in Metadata grows but never shrinks. Apart 
> from being a memory leak, this results in constant requests for metadata for 
> deleted topics.
> We are running into this issue with the Confluent REST server, where topic 
> deletions from tests are filling up logs with warnings about unknown topics. 
> Auto-create is turned off in our Kafka cluster.
> I am happy to provide a fix, but am not sure what the right fix is. Does it 
> make sense to remove topics from the metadata list when 
> UNKNOWN_TOPIC_OR_PARTITION response is received if there are no outstanding 
> sends? It doesn't look very straightforward to do this, so any alternative 
> suggestions are welcome.





[jira] [Commented] (KAFKA-2948) Kafka producer does not cope well with topic deletions

2015-12-08 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047326#comment-15047326
 ] 

Jiangjie Qin commented on KAFKA-2948:
-

[~mgharat] I think the TTL should not be a config but simply an internal 
mechanism. Users should not have to care about this at all.

> Kafka producer does not cope well with topic deletions
> --
>
> Key: KAFKA-2948
> URL: https://issues.apache.org/jira/browse/KAFKA-2948
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> The Kafka producer gets metadata for topics when send is invoked, and thereafter 
> it attempts to keep the metadata up-to-date without any explicit requests 
> from the client. This works well in static environments, but when topics are 
> added or deleted, the list of topics in Metadata grows but never shrinks. Apart 
> from being a memory leak, this results in constant requests for metadata for 
> deleted topics.
> We are running into this issue with the Confluent REST server, where topic 
> deletions from tests are filling up logs with warnings about unknown topics. 
> Auto-create is turned off in our Kafka cluster.
> I am happy to provide a fix, but am not sure what the right fix is. Does it 
> make sense to remove topics from the metadata list when 
> UNKNOWN_TOPIC_OR_PARTITION response is received if there are no outstanding 
> sends? It doesn't look very straightforward to do this, so any alternative 
> suggestions are welcome.





[ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Neha Narkhede
I am pleased to announce that the Apache Kafka PMC has voted to
invite Ewen Cheslack-Postava as a committer and Ewen has accepted.

Ewen is an active member of the community and has contributed and reviewed
numerous patches to Kafka. His most significant contribution is Kafka
Connect, which was released a few days ago as part of 0.9.

Please join me in welcoming and congratulating Ewen.

Ewen, we look forward to your continued contributions to the Kafka
community!

-- 
Thanks,
Neha


Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Edward Ribeiro
Congratulations, Ewen! :)

Cheers,
Eddie
Em 08/12/2015 17:37, "Neha Narkhede"  escreveu:

> I am pleased to announce that the Apache Kafka PMC has voted to
> invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
>
> Ewen is an active member of the community and has contributed and reviewed
> numerous patches to Kafka. His most significant contribution is Kafka
> Connect which was released few days ago as part of 0.9.
>
> Please join me on welcoming and congratulating Ewen.
>
> Ewen, we look forward to your continued contributions to the Kafka
> community!
>
> --
> Thanks,
> Neha
>


Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Liquan Pei
Congrats, Ewen!

On Tue, Dec 8, 2015 at 11:37 AM, Neha Narkhede  wrote:

> I am pleased to announce that the Apache Kafka PMC has voted to
> invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
>
> Ewen is an active member of the community and has contributed and reviewed
> numerous patches to Kafka. His most significant contribution is Kafka
> Connect which was released few days ago as part of 0.9.
>
> Please join me on welcoming and congratulating Ewen.
>
> Ewen, we look forward to your continued contributions to the Kafka
> community!
>
> --
> Thanks,
> Neha
>



-- 
Liquan Pei
Department of Physics
University of Massachusetts Amherst


Build failed in Jenkins: kafka-trunk-jdk8 #209

2015-12-08 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2958: Remove duplicate API key mapping functionality

--
[...truncated 3633 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED


Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Becket Qin
Congrats! Ewen!

On Tue, Dec 8, 2015 at 11:39 AM, Edward Ribeiro 
wrote:

> Congratulations, Ewen! :)
>
> Cheers,
> Eddie
> Em 08/12/2015 17:37, "Neha Narkhede"  escreveu:
>
> > I am pleased to announce that the Apache Kafka PMC has voted to
> > invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
> >
> > Ewen is an active member of the community and has contributed and
> reviewed
> > numerous patches to Kafka. His most significant contribution is Kafka
> > Connect which was released few days ago as part of 0.9.
> >
> > Please join me on welcoming and congratulating Ewen.
> >
> > Ewen, we look forward to your continued contributions to the Kafka
> > community!
> >
> > --
> > Thanks,
> > Neha
> >
>


Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Gwen Shapira
Congrats!

Very well deserved :)
Now get cranking on the review backlog  ;)

On Tue, Dec 8, 2015 at 11:37 AM, Neha Narkhede  wrote:

> I am pleased to announce that the Apache Kafka PMC has voted to
> invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
>
> Ewen is an active member of the community and has contributed and reviewed
> numerous patches to Kafka. His most significant contribution is Kafka
> Connect which was released few days ago as part of 0.9.
>
> Please join me on welcoming and congratulating Ewen.
>
> Ewen, we look forward to your continued contributions to the Kafka
> community!
>
> --
> Thanks,
> Neha
>


Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Jason Gustafson
Congrats! Very well deserved.

On Tue, Dec 8, 2015 at 11:37 AM, Neha Narkhede  wrote:

> I am pleased to announce that the Apache Kafka PMC has voted to
> invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
>
> Ewen is an active member of the community and has contributed and reviewed
> numerous patches to Kafka. His most significant contribution is Kafka
> Connect which was released few days ago as part of 0.9.
>
> Please join me on welcoming and congratulating Ewen.
>
> Ewen, we look forward to your continued contributions to the Kafka
> community!
>
> --
> Thanks,
> Neha
>


[GitHub] kafka pull request: KAFKA-2957: Fix typos in Kafka documentation

2015-12-08 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/641

KAFKA-2957: Fix typos in Kafka documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-2957

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/641.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #641


commit 3d2410946d3cd675de0ab4a45ee57bed18c3e4ca
Author: Vahid Hashemian 
Date:   2015-12-08T19:29:50Z

Fix some typos in documentation (resolves KAFKA-2957)






[jira] [Updated] (KAFKA-2957) Fix typos in Kafka documentation

2015-12-08 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-2957:
---
Status: Patch Available  (was: In Progress)

> Fix typos in Kafka documentation
> 
>
> Key: KAFKA-2957
> URL: https://issues.apache.org/jira/browse/KAFKA-2957
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Trivial
>  Labels: documentation, easyfix
> Fix For: 0.9.0.1
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> There are some minor typos in Kafka documentation. Example:
> - Supporting these uses led use to a design with a number of unique elements, 
> more akin to a database log then a traditional messaging system.
> This should read as:
> - Supporting these uses led *us* to a design with a number of unique 
> elements, more akin to a database log *than* a traditional messaging system.





Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Joe Stein
Ewen,

Congrats!

~ Joestein

On Tue, Dec 8, 2015 at 2:51 PM, Guozhang Wang  wrote:

> Congrats Ewen! Welcome onboard.
>
> Guozhang
>
> On Tue, Dec 8, 2015 at 11:42 AM, Liquan Pei  wrote:
>
> > Congrats, Ewen!
> >
> > On Tue, Dec 8, 2015 at 11:37 AM, Neha Narkhede 
> wrote:
> >
> > > I am pleased to announce that the Apache Kafka PMC has voted to
> > > invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
> > >
> > > Ewen is an active member of the community and has contributed and
> > reviewed
> > > numerous patches to Kafka. His most significant contribution is Kafka
> > > Connect which was released few days ago as part of 0.9.
> > >
> > > Please join me on welcoming and congratulating Ewen.
> > >
> > > Ewen, we look forward to your continued contributions to the Kafka
> > > community!
> > >
> > > --
> > > Thanks,
> > > Neha
> > >
> >
> >
> >
> > --
> > Liquan Pei
> > Department of Physics
> > University of Massachusetts Amherst
> >
>
>
>
> --
> -- Guozhang
>


Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Harsha
Congrats Ewen.
-Harsha

On Tue, Dec 8, 2015, at 12:08 PM, Ashish Singh wrote:
> Congrats Ewen!
> 
> On Tuesday, December 8, 2015, Joe Stein  wrote:
> 
> > Ewen,
> >
> > Congrats!
> >
> > ~ Joestein
> >
> > On Tue, Dec 8, 2015 at 2:51 PM, Guozhang Wang  > > wrote:
> >
> > > Congrats Ewen! Welcome onboard.
> > >
> > > Guozhang
> > >
> > > On Tue, Dec 8, 2015 at 11:42 AM, Liquan Pei  > > wrote:
> > >
> > > > Congrats, Ewen!
> > > >
> > > > On Tue, Dec 8, 2015 at 11:37 AM, Neha Narkhede  > >
> > > wrote:
> > > >
> > > > > I am pleased to announce that the Apache Kafka PMC has voted to
> > > > > invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
> > > > >
> > > > > Ewen is an active member of the community and has contributed and
> > > > reviewed
> > > > > numerous patches to Kafka. His most significant contribution is Kafka
> > > > > Connect which was released few days ago as part of 0.9.
> > > > >
> > > > > Please join me on welcoming and congratulating Ewen.
> > > > >
> > > > > Ewen, we look forward to your continued contributions to the Kafka
> > > > > community!
> > > > >
> > > > > --
> > > > > Thanks,
> > > > > Neha
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Liquan Pei
> > > > Department of Physics
> > > > University of Massachusetts Amherst
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
> 
> 
> -- 
> Ashish h


Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Jarek Jarcec Cecho
Congratulations!

Jarcec

> On Dec 8, 2015, at 8:37 PM, Neha Narkhede  wrote:
> 
> I am pleased to announce that the Apache Kafka PMC has voted to
> invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
> 
> Ewen is an active member of the community and has contributed and reviewed
> numerous patches to Kafka. His most significant contribution is Kafka
> Connect which was released few days ago as part of 0.9.
> 
> Please join me on welcoming and congratulating Ewen.
> 
> Ewen, we look forward to your continued contributions to the Kafka
> community!
> 
> -- 
> Thanks,
> Neha



[GitHub] kafka pull request: KAFKA-2957: Fix typos in Kafka documentation

2015-12-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/641




Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Guozhang Wang
Congrats Ewen! Welcome onboard.

Guozhang

On Tue, Dec 8, 2015 at 11:42 AM, Liquan Pei  wrote:

> Congrats, Ewen!
>
> On Tue, Dec 8, 2015 at 11:37 AM, Neha Narkhede  wrote:
>
> > I am pleased to announce that the Apache Kafka PMC has voted to
> > invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
> >
> > Ewen is an active member of the community and has contributed and
> reviewed
> > numerous patches to Kafka. His most significant contribution is Kafka
> > Connect which was released few days ago as part of 0.9.
> >
> > Please join me on welcoming and congratulating Ewen.
> >
> > Ewen, we look forward to your continued contributions to the Kafka
> > community!
> >
> > --
> > Thanks,
> > Neha
> >
>
>
>
> --
> Liquan Pei
> Department of Physics
> University of Massachusetts Amherst
>



-- 
-- Guozhang


[jira] [Updated] (KAFKA-2959) Remove temporary mapping to deserialize functions in RequestChannel

2015-12-08 Thread Jakub Nowak (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakub Nowak updated KAFKA-2959:
---
Assignee: (was: Jakub Nowak)

> Remove temporary mapping to deserialize functions in RequestChannel 
> 
>
> Key: KAFKA-2959
> URL: https://issues.apache.org/jira/browse/KAFKA-2959
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>
> Once the old Request & Response objects are no longer used we can delete the 
> legacy mapping maintained in RequestChannel.scala





[jira] [Commented] (KAFKA-2957) Fix typos in Kafka documentation

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047447#comment-15047447
 ] 

ASF GitHub Bot commented on KAFKA-2957:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/641


> Fix typos in Kafka documentation
> 
>
> Key: KAFKA-2957
> URL: https://issues.apache.org/jira/browse/KAFKA-2957
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Trivial
>  Labels: documentation, easyfix
> Fix For: 0.9.0.1
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> There are some minor typos in Kafka documentation. Example:
> - Supporting these uses led use to a design with a number of unique elements, 
> more akin to a database log then a traditional messaging system.
> This should read as:
> - Supporting these uses led *us* to a design with a number of unique 
> elements, more akin to a database log *than* a traditional messaging system.





Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Aditya Auradkar
Congrats Ewen!

On Tue, Dec 8, 2015 at 11:51 AM, Guozhang Wang  wrote:

> Congrats Ewen! Welcome onboard.
>
> Guozhang
>
> On Tue, Dec 8, 2015 at 11:42 AM, Liquan Pei  wrote:
>
> > Congrats, Ewen!
> >
> > On Tue, Dec 8, 2015 at 11:37 AM, Neha Narkhede 
> wrote:
> >
> > > I am pleased to announce that the Apache Kafka PMC has voted to
> > > invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
> > >
> > > Ewen is an active member of the community and has contributed and
> > reviewed
> > > numerous patches to Kafka. His most significant contribution is Kafka
> > > Connect which was released few days ago as part of 0.9.
> > >
> > > Please join me on welcoming and congratulating Ewen.
> > >
> > > Ewen, we look forward to your continued contributions to the Kafka
> > > community!
> > >
> > > --
> > > Thanks,
> > > Neha
> > >
> >
> >
> >
> > --
> > Liquan Pei
> > Department of Physics
> > University of Massachusetts Amherst
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Commented] (KAFKA-2959) Remove temporary mapping to deserialize functions in RequestChannel

2015-12-08 Thread Jakub Nowak (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047393#comment-15047393
 ] 

Jakub Nowak commented on KAFKA-2959:


Yes, I didn't notice that earlier, so for now I will leave this ticket.

> Remove temporary mapping to deserialize functions in RequestChannel 
> 
>
> Key: KAFKA-2959
> URL: https://issues.apache.org/jira/browse/KAFKA-2959
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Jakub Nowak
>
> Once the old Request & Response objects are no longer used we can delete the 
> legacy mapping maintained in RequestChannel.scala





Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Ashish Singh
Congrats Ewen!

On Tuesday, December 8, 2015, Joe Stein  wrote:

> Ewen,
>
> Congrats!
>
> ~ Joestein
>
> On Tue, Dec 8, 2015 at 2:51 PM, Guozhang Wang  > wrote:
>
> > Congrats Ewen! Welcome onboard.
> >
> > Guozhang
> >
> > On Tue, Dec 8, 2015 at 11:42 AM, Liquan Pei  > wrote:
> >
> > > Congrats, Ewen!
> > >
> > > On Tue, Dec 8, 2015 at 11:37 AM, Neha Narkhede  >
> > wrote:
> > >
> > > > I am pleased to announce that the Apache Kafka PMC has voted to
> > > > invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
> > > >
> > > > Ewen is an active member of the community and has contributed and
> > > reviewed
> > > > numerous patches to Kafka. His most significant contribution is Kafka
> > > > Connect which was released few days ago as part of 0.9.
> > > >
> > > > Please join me on welcoming and congratulating Ewen.
> > > >
> > > > Ewen, we look forward to your continued contributions to the Kafka
> > > > community!
> > > >
> > > > --
> > > > Thanks,
> > > > Neha
> > > >
> > >
> > >
> > >
> > > --
> > > Liquan Pei
> > > Department of Physics
> > > University of Massachusetts Amherst
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


-- 
Ashish h


Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Jeff Holoman
Well done Ewen. Congrats.

On Tue, Dec 8, 2015 at 3:18 PM, Harsha  wrote:

> Congrats Ewen.
> -Harsha
>
> On Tue, Dec 8, 2015, at 12:08 PM, Ashish Singh wrote:
> > Congrats Ewen!
> >
> > On Tuesday, December 8, 2015, Joe Stein  wrote:
> >
> > > Ewen,
> > >
> > > Congrats!
> > >
> > > ~ Joestein
> > >
> > > On Tue, Dec 8, 2015 at 2:51 PM, Guozhang Wang  > > > wrote:
> > >
> > > > Congrats Ewen! Welcome onboard.
> > > >
> > > > Guozhang
> > > >
> > > > On Tue, Dec 8, 2015 at 11:42 AM, Liquan Pei  > > > wrote:
> > > >
> > > > > Congrats, Ewen!
> > > > >
> > > > > On Tue, Dec 8, 2015 at 11:37 AM, Neha Narkhede  > > >
> > > > wrote:
> > > > >
> > > > > > I am pleased to announce that the Apache Kafka PMC has voted to
> > > > > > invite Ewen Cheslack-Postava as a committer and Ewen has
> accepted.
> > > > > >
> > > > > > Ewen is an active member of the community and has contributed and
> > > > > reviewed
> > > > > > numerous patches to Kafka. His most significant contribution is
> Kafka
> > > > > > Connect which was released few days ago as part of 0.9.
> > > > > >
> > > > > > Please join me on welcoming and congratulating Ewen.
> > > > > >
> > > > > > Ewen, we look forward to your continued contributions to the
> Kafka
> > > > > > community!
> > > > > >
> > > > > > --
> > > > > > Thanks,
> > > > > > Neha
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Liquan Pei
> > > > > Department of Physics
> > > > > University of Massachusetts Amherst
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
> >
> > --
> > Ashish h
>



-- 
Jeff Holoman
Systems Engineer


Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Jay Kreps
Congrats Ewen!

-Jay

On Tue, Dec 8, 2015 at 11:37 AM, Neha Narkhede  wrote:
> I am pleased to announce that the Apache Kafka PMC has voted to
> invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
>
> Ewen is an active member of the community and has contributed and reviewed
> numerous patches to Kafka. His most significant contribution is Kafka
> Connect which was released few days ago as part of 0.9.
>
> Please join me on welcoming and congratulating Ewen.
>
> Ewen, we look forward to your continued contributions to the Kafka
> community!
>
> --
> Thanks,
> Neha


[jira] [Assigned] (KAFKA-2733) Distinguish metric names inside the sensor registry

2015-12-08 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-2733:


Assignee: Guozhang Wang

> Distinguish metric names inside the sensor registry
> ---
>
> Key: KAFKA-2733
> URL: https://issues.apache.org/jira/browse/KAFKA-2733
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> Since stream tasks can share the same StreamingMetrics object, and the 
> MetricName is distinguishable only by the group name (same for the same type 
> of states, and for other streaming metrics) and the tags (currently only the 
> client-ids of the StreamThread), when we have multiple tasks within a single 
> stream thread, it could lead to an IllegalStateException upon trying to 
> register the same metric from those tasks.



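The collision this ticket describes can be reproduced with a minimal sketch. This is plain Java standard library, not Kafka's actual Metrics class; the metric name, group, and tag values below are hypothetical. It models a registry keyed by (name, group, tags): two tasks on the same stream thread that reuse only the thread's client-id tag produce identical keys and collide, while adding a distinguishing per-task tag avoids the clash.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (NOT Kafka's Metrics class): a registry keyed by
// (name, group, tags), mirroring the IllegalStateException described above.
public class MetricNameCollision {
    // Record equality is component-wise, so identical (name, group, tags) collide.
    record MetricName(String name, String group, Map<String, String> tags) {}

    static final Map<MetricName, Object> registry = new HashMap<>();

    static void register(MetricName m) {
        if (registry.putIfAbsent(m, new Object()) != null)
            throw new IllegalStateException("A metric named " + m + " already exists");
    }

    public static void main(String[] args) {
        Map<String, String> threadTags = Map.of("client-id", "stream-thread-1");
        register(new MetricName("commit-latency-avg", "stream-metrics", threadTags));
        boolean collided = false;
        try {
            // Second task on the same thread: same group and same tags -> collision.
            register(new MetricName("commit-latency-avg", "stream-metrics", threadTags));
        } catch (IllegalStateException e) {
            collided = true;
        }
        // Distinguishing the task via an extra (hypothetical) tag avoids the clash.
        register(new MetricName("commit-latency-avg", "stream-metrics",
                Map.of("client-id", "stream-thread-1", "task-id", "0_1")));
        System.out.println(collided ? "collision" : "no collision");
    }
}
```

The analogous fix in Kafka Streams would be to fold task-level information into the metric names or tags so each task registers distinct MetricNames.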


[jira] [Work started] (KAFKA-2733) Distinguish metric names inside the sensor registry

2015-12-08 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2733 started by Guozhang Wang.

> Distinguish metric names inside the sensor registry
> ---
>
> Key: KAFKA-2733
> URL: https://issues.apache.org/jira/browse/KAFKA-2733
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> Since stream tasks can share the same StreamingMetrics object, and the 
> MetricName is distinguishable only by the group name (same for the same type 
> of states, and for other streaming metrics) and the tags (currently only the 
> client-ids of the StreamThread), when we have multiple tasks within a single 
> stream thread, it could lead to an IllegalStateException upon trying to 
> register the same metric from those tasks.





[jira] [Updated] (KAFKA-2733) Distinguish metric names inside the sensor registry

2015-12-08 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2733:
-
Fix Version/s: (was: 0.9.0.1)
   0.9.1.0

> Distinguish metric names inside the sensor registry
> ---
>
> Key: KAFKA-2733
> URL: https://issues.apache.org/jira/browse/KAFKA-2733
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> Since stream tasks can share the same StreamingMetrics object, and the 
> MetricName is distinguishable only by the group name (same for the same type 
> of states, and for other streaming metrics) and the tags (currently only the 
> client-ids of the StreamThread), when we have multiple tasks within a single 
> stream thread, it could lead to an IllegalStateException upon trying to 
> register the same metric from those tasks.





[GitHub] kafka pull request: KAFKA-2962: stream-table table-table joins

2015-12-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/644


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2962) Add Simple Join API

2015-12-08 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2962.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 644
[https://github.com/apache/kafka/pull/644]

> Add Simple Join API
> ---
>
> Key: KAFKA-2962
> URL: https://issues.apache.org/jira/browse/KAFKA-2962
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> Stream-Table and Table-Table joins





[jira] [Commented] (KAFKA-2962) Add Simple Join API

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048211#comment-15048211
 ] 

ASF GitHub Bot commented on KAFKA-2962:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/644


> Add Simple Join API
> ---
>
> Key: KAFKA-2962
> URL: https://issues.apache.org/jira/browse/KAFKA-2962
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> Stream-Table and Table-Table joins





Build failed in Jenkins: kafka-trunk-jdk8 #214

2015-12-08 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2924: support offsets topic in DumpLogSegments

--
[...truncated 6892 lines...]

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED
:testAll

BUILD SUCCESSFUL

Total time: 1 hrs 3 mins 7.966 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.9/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:docsJar_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScalaJava HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:394:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:301:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:302:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:74:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^
:195:
 

Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Sriram Subramanian
Congrats!

On Tue, Dec 8, 2015 at 2:45 PM, Jay Kreps  wrote:

> Congrats Ewen!
>
> -Jay
>
> On Tue, Dec 8, 2015 at 11:37 AM, Neha Narkhede  wrote:
> > I am pleased to announce that the Apache Kafka PMC has voted to
> > invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
> >
> > Ewen is an active member of the community and has contributed and
> reviewed
> > numerous patches to Kafka. His most significant contribution is Kafka
> > Connect which was released few days ago as part of 0.9.
> >
> > Please join me on welcoming and congratulating Ewen.
> >
> > Ewen, we look forward to your continued contributions to the Kafka
> > community!
> >
> > --
> > Thanks,
> > Neha
>


[jira] [Commented] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread Bo Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047899#comment-15047899
 ] 

Bo Wang commented on KAFKA-2965:


https://github.com/apache/kafka/pull/646
[~gwenshap] Please review it. Thanks.

> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Priority: Minor
>  Labels: bug
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress



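One observation worth adding: because topicsIneligibleForDeletion is just the union of the three sets and set union is commutative, swapping the two variables does not change the computed result; the bug is the misleading naming, which matters for readability and for any later code that uses the sets individually. A minimal sketch in plain Java (illustrative topic names, not the controller code itself) demonstrates this:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch (NOT KafkaController): the final union is identical
// whether the two middle arguments are swapped or not, because union is
// order-independent. Only the variable names in the original code mislead.
public class IneligibleTopicsDemo {

    // Mirrors: topicsWithReplicasOnDeadBrokers | reassignmentInProgress | preferredElectionInProgress
    static Set<String> ineligibleForDeletion(Set<String> deadBrokerTopics,
                                             Set<String> reassignmentTopics,
                                             Set<String> preferredElectionTopics) {
        Set<String> out = new HashSet<>(deadBrokerTopics);
        out.addAll(reassignmentTopics);
        out.addAll(preferredElectionTopics);
        return out;
    }

    public static void main(String[] args) {
        Set<String> election = Set.of("topicA"); // topics undergoing preferred replica election
        Set<String> reassign = Set.of("topicB"); // topics being reassigned
        Set<String> dead     = Set.of("topicC"); // topics with replicas on dead brokers

        // Same final set with the arguments swapped, as in the buggy naming.
        System.out.println(ineligibleForDeletion(dead, reassign, election)
                .equals(ineligibleForDeletion(dead, election, reassign)));
    }
}
```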


[jira] [Commented] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047898#comment-15047898
 ] 

ASF GitHub Bot commented on KAFKA-2965:
---

GitHub user boweite opened a pull request:

https://github.com/apache/kafka/pull/646

[KAFKA-2965]Two variables should be exchanged.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/boweite/kafka kafka-2965

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/646.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #646


commit ad71fb59dc5e9db1a9ceea20a9b320e0885ba146
Author: unknown 
Date:   2015-12-09T02:28:50Z

change variables




> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Priority: Minor
>  Labels: bug
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress





Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread James Cheng
Congrats!!

-James

> On Dec 8, 2015, at 11:37 AM, Neha Narkhede  wrote:
>
> I am pleased to announce that the Apache Kafka PMC has voted to
> invite Ewen Cheslack-Postava as a committer and Ewen has accepted.
>
> Ewen is an active member of the community and has contributed and reviewed
> numerous patches to Kafka. His most significant contribution is Kafka
> Connect which was released few days ago as part of 0.9.
>
> Please join me on welcoming and congratulating Ewen.
>
> Ewen, we look forward to your continued contributions to the Kafka
> community!
>
> --
> Thanks,
> Neha




This email and any attachments may contain confidential and privileged material 
for the sole use of the intended recipient. Any review, copying, or 
distribution of this email (or any attachments) by others is prohibited. If you 
are not the intended recipient, please contact the sender immediately and 
permanently delete this email and any attachments. No employee or agent of TiVo 
Inc. is authorized to conclude any binding agreement on behalf of TiVo Inc. by 
email. Binding agreements with TiVo Inc. may only be made by a signed written 
agreement.


[jira] [Created] (KAFKA-2967) Move Kafka documentation to ReStructuredText

2015-12-08 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2967:
---

 Summary: Move Kafka documentation to ReStructuredText
 Key: KAFKA-2967
 URL: https://issues.apache.org/jira/browse/KAFKA-2967
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


Storing documentation as HTML is kind of BS :)

* Formatting is a pain, and making it look good is even worse
* It's just HTML; we can't generate PDFs
* Reading and editing are painful
* Validating changes is hard because our formatting relies on all kinds of 
Apache Server features.

I suggest:
* Move to RST
* Generate HTML and PDF during build using Sphinx plugin for Gradle.

Lots of Apache projects are doing this.





[jira] [Assigned] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread Zhiqiang He (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiqiang He reassigned KAFKA-2965:
--

Assignee: (was: Zhiqiang He)

> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Priority: Minor
>  Labels: patch
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress





[jira] [Updated] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread Bo Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bo Wang updated KAFKA-2965:
---
  Labels: patch  (was: )
Reviewer:   (was: Neha Narkhede)
  Status: Patch Available  (was: Open)

> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Priority: Minor
>  Labels: patch
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress





Build failed in Jenkins: kafka-trunk-jdk8 #211

2015-12-08 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2957: Fix typos in Kafka documentation

--
[...truncated 1465 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 

[GitHub] kafka pull request: KAFKA-2733: Standardize metric name for Kafka ...

2015-12-08 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/643

KAFKA-2733: Standardize metric name for Kafka Streams



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2733

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/643.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #643


commit c437400f28c711a89fbab0c9fd179aa817f8c1fb
Author: Guozhang Wang 
Date:   2015-12-08T23:20:25Z

v1






[jira] [Commented] (KAFKA-2733) Distinguish metric names inside the sensor registry

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047676#comment-15047676
 ] 

ASF GitHub Bot commented on KAFKA-2733:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/643

KAFKA-2733: Standardize metric name for Kafka Streams



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2733

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/643.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #643


commit c437400f28c711a89fbab0c9fd179aa817f8c1fb
Author: Guozhang Wang 
Date:   2015-12-08T23:20:25Z

v1




> Distinguish metric names inside the sensor registry
> ---
>
> Key: KAFKA-2733
> URL: https://issues.apache.org/jira/browse/KAFKA-2733
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
> Fix For: 0.9.0.1
>
>
> Since stream tasks can share the same StreamingMetrics object, and the 
> MetricName is distinguishable only by the group name (same for the same type 
> of states, and for other streaming metrics) and the tags (currently only the 
> client-ids of the StreamThread), when we have multiple tasks within a single 
> stream thread, it could lead to an IllegalStateException upon trying to 
> register the same metric from those tasks.





[jira] [Updated] (KAFKA-2962) Add Simple Join API

2015-12-08 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda updated KAFKA-2962:

Description: Stream-Table and Table-Table joins

> Add Simple Join API
> ---
>
> Key: KAFKA-2962
> URL: https://issues.apache.org/jira/browse/KAFKA-2962
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>
> Stream-Table and Table-Table joins



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2962: stream-table table-table joins

2015-12-08 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/644

KAFKA-2962: stream-table table-table joins

@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka join_methods

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/644.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #644


commit 15804dc1b8a8d9cfeee685d66b64d5fb9f77989f
Author: Yasuhiro Matsuda 
Date:   2015-12-08T23:39:15Z

stream-table table-table joins




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread Bo Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bo Wang updated KAFKA-2965:
---
Description: 
Two variables should be exchanged in KafkaController.scala as follows:
val topicsForWhichPartitionReassignmentIsInProgress = 
controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
val topicsForWhichPreferredReplicaElectionIsInProgress = 
controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
topicsForWhichPartitionReassignmentIsInProgress |
  
topicsForWhichPreferredReplicaElectionIsInProgress

Should change to:
val topicsForWhichPreferredReplicaElectionIsInProgress = 
controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
val topicsForWhichPartitionReassignmentIsInProgress = 
controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
topicsForWhichPartitionReassignmentIsInProgress |
  
topicsForWhichPreferredReplicaElectionIsInProgress

  was:Two variables should be exchanged in KafkaController.scala as follows:
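It is worth noting that because `|` (set union) is commutative and associative, swapping the two variables does not change the resulting topicsIneligibleForDeletion set; the bug is purely in the misleading names. A small sketch with hypothetical topic names makes that concrete:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch: the union of the three topic sets is the same regardless of the
// order in which the two misnamed sets are combined. Topic names below are
// hypothetical placeholders.
class UnionOrder {
    static Set<String> ineligible(Set<String> deadReplicas,
                                  Set<String> reassigning,
                                  Set<String> electing) {
        Set<String> out = new HashSet<>(deadReplicas);
        out.addAll(reassigning);
        out.addAll(electing);
        return out;
    }
}
```

So the fix improves readability and maintainability rather than runtime behavior, which is presumably why the patch is marked Minor.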


> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Assignee: Neha Narkhede
>Priority: Minor
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2667: fix assertion depending on hash ma...

2015-12-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/642


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread Bo Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bo Wang updated KAFKA-2965:
---
Attachment: Kafka-2965.patch

> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Assignee: Neha Narkhede
>Priority: Minor
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2667) Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure

2015-12-08 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047600#comment-15047600
 ] 

Jason Gustafson commented on KAFKA-2667:


I think [~becket_qin] and [~apovzner] have both seen issues with this test as 
well.

> Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure
> 
>
> Key: KAFKA-2667
> URL: https://issues.apache.org/jira/browse/KAFKA-2667
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.1.0
>
>
> Seen in recent builds:
> {code}
> org.apache.kafka.copycat.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.kafka.copycat.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:335)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2960) DelayedProduce may cause message loss during repeated leader changes

2015-12-08 Thread Xing Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047753#comment-15047753
 ] 

Xing Huang commented on KAFKA-2960:
---

The Partition class checks the log end offset and the current ISR to decide 
whether there are enough in-sync replicas. But after a leader becomes a 
follower, it may truncate its log, and if it becomes the leader again very 
quickly, there is a chance that another client sends messages to it. The LEO 
will then increase and the current ISR may grow back to 2, so the 
DelayedProduce is satisfied, even with acks=-1, min.isr=2 and 
replication.factor=3.
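The epoch-based guard suggested further down in the issue can be sketched as follows; `DelayedProduceCheck` and its method names are hypothetical, not Kafka's actual DelayedProduce code:

```java
// Sketch of the proposed guard: record the leader epoch when a produce
// request is parked, and refuse to ack without error if leadership has
// bounced since then. Names here are illustrative only.
class DelayedProduceCheck {
    private final int epochAtCreation;

    DelayedProduceCheck(int currentLeaderEpoch) {
        this.epochAtCreation = currentLeaderEpoch;
    }

    // Complete without error only if enough replicas are in sync AND the
    // leader epoch is unchanged since the request was delayed.
    boolean canAckWithoutError(int isrSize, int minIsr, int currentLeaderEpoch) {
        return isrSize >= minIsr && currentLeaderEpoch == epochAtCreation;
    }
}
```

Under this check, a request parked before the leader-to-follower-to-leader bounce would fail the epoch comparison and could be answered with an error code instead of a silent false ack.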

> DelayedProduce may cause message loss during repeated leader changes
> ---
>
> Key: KAFKA-2960
> URL: https://issues.apache.org/jira/browse/KAFKA-2960
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Xing Huang
> Fix For: 0.9.1.0
>
>
> related to #KAFKA-1148
> When a leader replica becomes a follower and then the leader again, it may 
> truncate its log as a follower. But the second time it becomes the leader, 
> its ISR may shrink, and if new messages are appended at this moment, the 
> DelayedProduce generated when it was the leader the first time may be 
> satisfied, and the client will receive a response with no error. But the 
> messages were actually lost. 
> We simulated this scenario, which proved that the message loss can happen. It 
> also seems to be the reason for a data loss that recently happened to us, 
> according to broker logs and client logs.
> I think we should check the leader epoch when sending a response, or satisfy 
> the DelayedProduce on leader change as described in #KAFKA-1148.
> And we may need a new error code to inform the producer about this error. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2948) Kafka producer does not cope well with topic deletions

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047782#comment-15047782
 ] 

ASF GitHub Bot commented on KAFKA-2948:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/645

KAFKA-2948: Remove unused topics from producer metadata set

If no messages are sent to a topic during the last refresh interval or if 
UNKNOWN_TOPIC_OR_PARTITION error is received, remove the topic from the 
metadata list. Topics are added to the list on the next attempt to send a 
message to the topic.
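The expiry policy described above can be sketched with a map of per-topic last-use timestamps; `TopicExpiryTracker` and its methods are illustrative names, not the producer's actual Metadata API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the expiry idea: track when each topic was last used and drop
// topics idle longer than the refresh interval. Names are illustrative only.
class TopicExpiryTracker {
    private final long expiryMs;
    private final Map<String, Long> lastUsed = new HashMap<>();

    TopicExpiryTracker(long expiryMs) { this.expiryMs = expiryMs; }

    // Called on every send attempt; re-adds the topic if it had expired.
    void recordUse(String topic, long nowMs) { lastUsed.put(topic, nowMs); }

    // Called on metadata refresh: evicts stale topics and returns the
    // still-live set whose metadata should be fetched.
    Set<String> topicsToRefresh(long nowMs) {
        lastUsed.entrySet().removeIf(e -> nowMs - e.getValue() > expiryMs);
        return new HashSet<>(lastUsed.keySet());
    }
}
```

A deleted topic that is never sent to again simply ages out, so the producer stops requesting metadata for it; the next send to that topic re-registers it.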

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2948

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/645.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #645


commit f7e40e5ce515d700e8cc7ab02a0f16141fa14f67
Author: rsivaram 
Date:   2015-12-09T00:16:18Z

KAFKA-2948: Remove unused topics from producer metadata set




> Kafka producer does not cope well with topic deletions
> --
>
> Key: KAFKA-2948
> URL: https://issues.apache.org/jira/browse/KAFKA-2948
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Kafka producer gets metadata for topics when send is invoked and thereafter 
> it attempts to keep the metadata up-to-date without any explicit requests 
> from the client. This works well in static environments, but when topics are 
> added or deleted, list of topics in Metadata grows but never shrinks. Apart 
> from being a memory leak, this results in constant requests for metadata for 
> deleted topics.
> We are running into this issue with the Confluent REST server where topic 
> deletions from tests are filling up logs with warnings about unknown topics. 
> Auto-create is turned off in our Kafka cluster.
> I am happy to provide a fix, but am not sure what the right fix is. Does it 
> make sense to remove topics from the metadata list when 
> UNKNOWN_TOPIC_OR_PARTITION response is received if there are no outstanding 
> sends? It doesn't look very straightforward to do this, so any alternative 
> suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


subscribe

2015-12-08 Thread Ryan Leslie (BLOOMBERG/ 731 LEX)
subscribe

[jira] [Commented] (KAFKA-2667) Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047852#comment-15047852
 ] 

ASF GitHub Bot commented on KAFKA-2667:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/642


> Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure
> 
>
> Key: KAFKA-2667
> URL: https://issues.apache.org/jira/browse/KAFKA-2667
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.1.0
>
>
> Seen in recent builds:
> {code}
> org.apache.kafka.copycat.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.kafka.copycat.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:335)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2965:

Assignee: Neha Narkhede

> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Assignee: Neha Narkhede
>Priority: Minor
>  Labels: bug
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047876#comment-15047876
 ] 

Gwen Shapira commented on KAFKA-2965:
-

haha, yeah, I agree. 

[~wangbo23], the Kafka project has moved from attached patches to Git pull 
requests (https://kafka.apache.org/contributing.html).
Can you submit a pull request?

> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Priority: Minor
>  Labels: bug
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2962) Add Simple Join API

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047694#comment-15047694
 ] 

ASF GitHub Bot commented on KAFKA-2962:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/644

KAFKA-2962: stream-table table-table joins

@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka join_methods

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/644.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #644


commit 15804dc1b8a8d9cfeee685d66b64d5fb9f77989f
Author: Yasuhiro Matsuda 
Date:   2015-12-08T23:39:15Z

stream-table table-table joins




> Add Simple Join API
> ---
>
> Key: KAFKA-2962
> URL: https://issues.apache.org/jira/browse/KAFKA-2962
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>
> Stream-Table and Table-Table joins



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2948: Remove unused topics from producer...

2015-12-08 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/645

KAFKA-2948: Remove unused topics from producer metadata set

If no messages are sent to a topic during the last refresh interval or if 
UNKNOWN_TOPIC_OR_PARTITION error is received, remove the topic from the 
metadata list. Topics are added to the list on the next attempt to send a 
message to the topic.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2948

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/645.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #645


commit f7e40e5ce515d700e8cc7ab02a0f16141fa14f67
Author: rsivaram 
Date:   2015-12-09T00:16:18Z

KAFKA-2948: Remove unused topics from producer metadata set




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread Bo Wang (JIRA)
Bo Wang created KAFKA-2965:
--

 Summary: Two variables should be exchanged.
 Key: KAFKA-2965
 URL: https://issues.apache.org/jira/browse/KAFKA-2965
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.9.0.0
 Environment: NA
Reporter: Bo Wang
Assignee: Neha Narkhede
Priority: Minor


Two variables should be exchanged in KafkaController.scala as follows:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2667) Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure

2015-12-08 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-2667.
--
Resolution: Fixed
  Reviewer: Ewen Cheslack-Postava  (was: Guozhang Wang)

Issue resolved by pull request 642
https://github.com/apache/kafka/pull/642

> Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure
> 
>
> Key: KAFKA-2667
> URL: https://issues.apache.org/jira/browse/KAFKA-2667
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.1.0
>
>
> Seen in recent builds:
> {code}
> org.apache.kafka.copycat.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.kafka.copycat.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:335)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread Bo Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047855#comment-15047855
 ] 

Bo Wang commented on KAFKA-2965:


According to the code, I think the names of the two variables need to be 
exchanged.

> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Assignee: Neha Narkhede
>Priority: Minor
> Fix For: 0.9.1.0
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread Zhiqiang He (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiqiang He reassigned KAFKA-2965:
--

Assignee: Zhiqiang He  (was: Neha Narkhede)

> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Assignee: Zhiqiang He
>Priority: Minor
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #881

2015-12-08 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2957: Fix typos in Kafka documentation

--
[...truncated 1439 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] FAILED
java.lang.AssertionError: log cleaner should have processed up to offset 599
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
kafka.log.LogCleanerIntegrationTest.cleanerTest(LogCleanerIntegrationTest.scala:76)

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED


[jira] [Assigned] (KAFKA-2509) Replace LeaderAndIsr{Request,Response} with org.apache.kafka.common.network.requests equivalent

2015-12-08 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-2509:
--

Assignee: Grant Henke

> Replace LeaderAndIsr{Request,Response} with 
> org.apache.kafka.common.network.requests equivalent
> ---
>
> Key: KAFKA-2509
> URL: https://issues.apache.org/jira/browse/KAFKA-2509
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2399) Replace Stream.continually with Iterator.continually

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047833#comment-15047833
 ] 

ASF GitHub Bot commented on KAFKA-2399:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/106


> Replace Stream.continually with Iterator.continually
> 
>
> Key: KAFKA-2399
> URL: https://issues.apache.org/jira/browse/KAFKA-2399
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
> Fix For: 0.9.1.0
>
>
> There are two usages of `Stream.continually` and neither of them seems to 
> need the extra functionality it provides over `Iterator.continually` 
> (`Stream.continually` allocates `Cons` instances to save the computation 
> instead of recomputing it if needed more than once).
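A rough Java analogy of the memoization difference between the two Scala constructs (assumed class and method names, for illustration only): the "stream" variant retains every produced element, like Stream's Cons cells, while the iterator variant holds nothing after each element is consumed.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

// Java analogy of Scala's Stream.continually vs Iterator.continually.
class Continually {
    // Memoizing variant: materializes n elements into a list, paying an
    // allocation per element (akin to Stream.continually's Cons cells).
    static <T> List<T> memoized(Supplier<T> s, int n) {
        List<T> cache = new ArrayList<>();
        for (int i = 0; i < n; i++) cache.add(s.get());
        return cache;
    }

    // Non-memoizing variant: evaluates the supplier once per next() and
    // retains nothing (akin to Iterator.continually).
    static <T> Iterator<T> continually(Supplier<T> s) {
        return new Iterator<T>() {
            public boolean hasNext() { return true; }
            public T next() { return s.get(); }
        };
    }
}
```

When each element is consumed exactly once, as in the two usages mentioned, the iterator form gives the same behavior without the per-element allocations.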



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-08 Thread jeanlyn (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047848#comment-15047848
 ] 

jeanlyn commented on KAFKA-2837:


[~guozhang] It seems that the buffer is full, which causes the memory 
allocation to time out when sending messages to brokers, right?
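For context, the producer settings involved when the record accumulator fills up look roughly like this (values are illustrative, not a recommendation, and the broker address is a placeholder): buffer.memory bounds the accumulator, and max.block.ms bounds how long send() may wait for buffer space before failing with a timeout.

```java
import java.util.Properties;

// Sketch of the producer configuration relevant to a full buffer.
class ProducerBufferConfig {
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("buffer.memory", "33554432");  // 32 MB record accumulator
        props.put("max.block.ms", "60000");      // wait up to 60 s for space
        return props;
    }
}
```

If brokers are bouncing (as in ProducerBounceTest) and the accumulator cannot drain, send() can exhaust the wait budget and surface an allocation timeout.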

> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>  Labels: newbie
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> {code}
> 

[jira] [Updated] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread Bo Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bo Wang updated KAFKA-2965:
---
 Reviewer: Neha Narkhede
Fix Version/s: 0.9.1.0
   Status: Patch Available  (was: Open)

> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Assignee: Neha Narkhede
>Priority: Minor
> Fix For: 0.9.1.0
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
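It is worth noting that the swap in KAFKA-2965 is a naming fix rather than a behavior change in the union itself: assuming the two vals are only fed into `topicsIneligibleForDeletion`, set union is commutative, so the resulting set is identical either way. A minimal sketch with stand-in values:

```scala
// Sketch only: the operand order in a set union does not change the
// result, so swapping the two misnamed vals leaves the union unchanged.
object SwapSketch extends App {
  val topicsWithReplicasOnDeadBrokers = Set("t-dead")
  val reassignmentTopics              = Set("t-a", "t-b")
  val preferredElectionTopics         = Set("t-b", "t-c")

  val before = topicsWithReplicasOnDeadBrokers |
    reassignmentTopics | preferredElectionTopics
  val after = topicsWithReplicasOnDeadBrokers |
    preferredElectionTopics | reassignmentTopics

  assert(before == after)
}
```

The real harm of the misnaming is in readability and any future code or logging that consumes the two vals individually, which is why the patch is still worthwhile.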


[jira] [Updated] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread Zhiqiang He (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiqiang He updated KAFKA-2965:
---
Labels: bug  (was: patch)

> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Priority: Minor
>  Labels: bug
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

