[jira] [Resolved] (KAFKA-3017) hostnames with underscores '_' are not valid

2015-12-21 Thread Michael Martin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Martin resolved KAFKA-3017.
---
Resolution: Won't Fix

Works as designed and according to spec.
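For context: RFC 952 and RFC 1123 restrict hostname labels to letters, digits, and interior hyphens, which is why a name like `kafka_kafka_1` is rejected. A minimal sketch of such a check (illustrative Python, not Kafka's actual Scala `EndPoint` parser):

```python
import re

# One RFC-1123 label: starts and ends with a letter or digit, hyphens
# allowed inside, at most 63 characters. Underscores are deliberately
# absent from the allowed character set.
LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$")

def is_valid_hostname(host: str) -> bool:
    """Return True if every dot-separated label satisfies RFC 1123."""
    if not host or len(host) > 253:
        return False
    return all(LABEL.match(label) for label in host.split("."))

print(is_valid_hostname("kafka-1.example.com"))  # True: hyphens are fine
print(is_valid_hostname("kafka_kafka_1"))        # False: underscores rejected
```

Docker Compose generates container names with underscores (`kafka_kafka_1`), which is how this commonly surfaces; the usual workaround is to give the container an RFC-compliant hostname.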

> hostnames with underscores '_' are not valid
> 
>
> Key: KAFKA-3017
> URL: https://issues.apache.org/jira/browse/KAFKA-3017
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Michael Martin
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Affects server.properties host.name and advertised.host.name
> {code}
> kafka_1 |  (kafka.server.KafkaConfig)
> kafka_1 | [2015-12-19 04:08:53,900] FATAL  (kafka.Kafka$)
> kafka_1 | kafka.common.KafkaException: Unable to parse 
> PLAINTEXT://kafka_kafka_1:9092 to a broker endpoint
> kafka_1 | at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:49)
> kafka_1 | at 
> kafka.utils.CoreUtils$$anonfun$listenerListToEndPoints$1.apply(CoreUtils.scala:309)
> kafka_1 | at 
> kafka.utils.CoreUtils$$anonfun$listenerListToEndPoints$1.apply(CoreUtils.scala:309)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1377) transient unit test failure in LogOffsetTest

2015-12-21 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066882#comment-15066882
 ] 

Guozhang Wang commented on KAFKA-1377:
--

[~pyritschard] Which Kafka version are you running for these tests?

> transient unit test failure in LogOffsetTest
> 
>
> Key: KAFKA-1377
> URL: https://issues.apache.org/jira/browse/KAFKA-1377
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1377.patch, KAFKA-1377_2014-04-11_17:42:13.patch, 
> KAFKA-1377_2014-04-11_18:14:45.patch
>
>
> Saw the following transient unit test failure.
> kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime FAILED
> junit.framework.AssertionFailedError: expected: but 
> was:
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.failNotEquals(Assert.java:277)
> at junit.framework.Assert.assertEquals(Assert.java:64)
> at junit.framework.Assert.assertEquals(Assert.java:71)
> at 
> kafka.server.LogOffsetTest.testGetOffsetsBeforeEarliestTime(LogOffsetTest.scala:198)





[jira] [Updated] (KAFKA-3024) Remove old patch review tools

2015-12-21 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3024:
---
Status: Patch Available  (was: Open)

> Remove old patch review tools
> -
>
> Key: KAFKA-3024
> URL: https://issues.apache.org/jira/browse/KAFKA-3024
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka has been using the new GitHub PR and Jenkins build process for a while 
> now. No new patches have been added to Review Board for some time. We should 
> remove the old patch review tools, and any new functionality should be added 
> to the new PR build and merge script.





[jira] [Comment Edited] (KAFKA-1377) transient unit test failure in LogOffsetTest

2015-12-21 Thread Pierre-Yves Ritschard (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067053#comment-15067053
 ] 

Pierre-Yves Ritschard edited comment on KAFKA-1377 at 12/21/15 9:03 PM:


[~guozhang] I'm testing against trunk.
The failure to propagate results is confined to the SASL tests.


was (Author: pyritschard):
[~guozhang] I'm testing against trunk.

> transient unit test failure in LogOffsetTest
> 
>
> Key: KAFKA-1377
> URL: https://issues.apache.org/jira/browse/KAFKA-1377
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1377.patch, KAFKA-1377_2014-04-11_17:42:13.patch, 
> KAFKA-1377_2014-04-11_18:14:45.patch
>
>
> Saw the following transient unit test failure.
> kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime FAILED
> junit.framework.AssertionFailedError: expected: but 
> was:
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.failNotEquals(Assert.java:277)
> at junit.framework.Assert.assertEquals(Assert.java:64)
> at junit.framework.Assert.assertEquals(Assert.java:71)
> at 
> kafka.server.LogOffsetTest.testGetOffsetsBeforeEarliestTime(LogOffsetTest.scala:198)





[jira] [Updated] (KAFKA-1755) Improve error handling in log cleaner

2015-12-21 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1755:

   Resolution: Fixed
Fix Version/s: 0.9.0.0
   Status: Resolved  (was: Patch Available)

This was in fact committed to trunk and is in 0.9.0.0:

commit 1cd6ed9e2c07a63474ed80a8224bd431d5d4243c  Joel Koshy committed on Mar 3
https://github.com/apache/kafka/commit/1cd6ed9e2c07a63474ed80a8224bd431d5d4243c#diff-d7330411812d23e8a34889bee42fedfe


> Improve error handling in log cleaner
> -
>
> Key: KAFKA-1755
> URL: https://issues.apache.org/jira/browse/KAFKA-1755
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Joel Koshy
>  Labels: newbie++
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1755.patch, KAFKA-1755_2015-02-23_14:29:54.patch, 
> KAFKA-1755_2015-02-26_10:54:50.patch
>
>
> The log cleaner is a critical process when using compacted topics.
> However, if there is any error in any topic (notably if a key is missing) 
> then the cleaner exits and all other compacted topics will also be adversely 
> affected - i.e., compaction stops across the board.
> This can be improved by just aborting compaction for a topic on any error and 
> keeping the thread from exiting.
> Another improvement would be to reject messages without keys that are sent to 
> compacted topics, although this is not enough by itself.
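The proposed isolation can be sketched as follows; this is an illustrative outline under assumed names (`run_cleaner_pass`, `clean_partition`), not Kafka's actual LogCleaner code:

```python
# Hypothetical sketch of the proposed cleaner behavior: on an error in
# one partition, abort compaction for that partition only and keep the
# cleaner thread alive so other compacted topics keep getting cleaned.
def run_cleaner_pass(partitions, clean_partition, on_error):
    uncleanable = []
    for partition in partitions:
        try:
            clean_partition(partition)
        except Exception as err:
            on_error(partition, err)       # log and move on
            uncleanable.append(partition)  # skip this partition next pass
    return uncleanable                     # the thread itself never exits

def clean(partition):
    # Simulate the failure mode from the description: a missing key.
    if partition == "topic-1/0":
        raise ValueError("message without a key on a compacted topic")

bad = run_cleaner_pass(["topic-0/0", "topic-1/0", "topic-2/0"], clean,
                       lambda p, e: None)
print(bad)  # ['topic-1/0']
```

The key design point is that the `except` sits inside the per-partition loop, so one bad partition no longer takes down compaction across the board.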





[GitHub] kafka pull request: KAFKA-3020: Ensure CheckStyle runs on all Java...

2015-12-21 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/703

KAFKA-3020: Ensure CheckStyle runs on all Java code

- Adds CheckStyle to core and examples modules
- Fixes any existing CheckStyle issues

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka checkstyle-core

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/703.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #703


commit adc050940054947cc8a9a7396ec70a70a01f3e5f
Author: Grant Henke 
Date:   2015-11-10T22:46:38Z

KAFKA-3020: Ensure CheckStyle runs on all Java code

- Adds CheckStyle to core and examples modules
- Fixes any existing CheckStyle issues




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #250

2015-12-21 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2455: Fix failure in kafka.consumer.MetricsTest.testMetricsLeak

[wangguoz] KAFKA-3014: fix integer overflow problem in leastLoadedNode

--
[...truncated 1412 lines...]

kafka.server.OffsetCommitTest > testOffsetExpiration PASSED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit PASSED

kafka.server.PlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic PASSED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot PASSED

kafka.server.ServerStartupTest > testConflictBrokerRegistration PASSED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.LeaderElectionTest > testLeaderElectionWithStaleControllerEpoch 
PASSED

kafka.server.LeaderElectionTest > testLeaderElectionAndEpoch PASSED

kafka.server.DynamicConfigChangeTest > testProcessNotification PASSED

kafka.server.DynamicConfigChangeTest > testClientQuotaConfigChange PASSED

kafka.server.DynamicConfigChangeTest > testConfigChangeOnNonExistingTopic PASSED

kafka.server.DynamicConfigChangeTest > testConfigChange PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresSingleLogSegment PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresSingleLogSegment 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 

[jira] [Work stopped] (KAFKA-3009) Disallow star imports

2015-12-21 Thread Manasvi Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3009 stopped by Manasvi Gupta.

> Disallow star imports
> -
>
> Key: KAFKA-3009
> URL: https://issues.apache.org/jira/browse/KAFKA-3009
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
> Attachments: main.xml
>
>
> Looks like we don't want star imports in our code (java.util.*).
> So, let's add this rule to Checkstyle and fix existing violations.
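For reference, Checkstyle's stock `AvoidStarImport` check covers exactly this; a minimal illustrative fragment of a `checkstyle.xml` (not necessarily the project's actual configuration):

```xml
<module name="Checker">
  <module name="TreeWalker">
    <!-- Flag wildcard imports such as: import java.util.*; -->
    <module name="AvoidStarImport"/>
  </module>
</module>
```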





[jira] [Updated] (KAFKA-3009) Disallow star imports

2015-12-21 Thread Manasvi Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manasvi Gupta updated KAFKA-3009:
-
Status: Patch Available  (was: Open)

> Disallow star imports
> -
>
> Key: KAFKA-3009
> URL: https://issues.apache.org/jira/browse/KAFKA-3009
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
> Attachments: main.xml
>
>
> Looks like we don't want star imports in our code (java.util.*).
> So, let's add this rule to Checkstyle and fix existing violations.





[jira] [Commented] (KAFKA-3024) Remove old patch review tools

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067031#comment-15067031
 ] 

ASF GitHub Bot commented on KAFKA-3024:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/705

KAFKA-3024: Remove old patch review tools



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka review-tools-cleanup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/705.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #705


commit 4ae0d0b51dcd0bbf57cbbdedea6736480a344eca
Author: Grant Henke 
Date:   2015-12-21T20:19:55Z

KAFKA-3024: Remove old patch review tools




> Remove old patch review tools
> -
>
> Key: KAFKA-3024
> URL: https://issues.apache.org/jira/browse/KAFKA-3024
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka has been using the new GitHub PR and Jenkins build process for a while 
> now. No new patches have been added to Review Board for some time. We should 
> remove the old patch review tools, and any new functionality should be added 
> to the new PR build and merge script.





[GitHub] kafka pull request: KAFKA-2989: system tests should verify partiti...

2015-12-21 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/702

KAFKA-2989: system tests should verify partitions consumed after rebalancing



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2989

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/702.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #702


commit f6640867eecd3106c635fc08a0abefa2c2cabc8e
Author: Jason Gustafson 
Date:   2015-12-16T01:49:36Z

KAFKA-2989: system tests should verify partitions consumed after rebalancing






[jira] [Commented] (KAFKA-2989) Verify all partitions consumed after rebalancing in system tests

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066845#comment-15066845
 ] 

ASF GitHub Bot commented on KAFKA-2989:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/702

KAFKA-2989: system tests should verify partitions consumed after rebalancing



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2989

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/702.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #702


commit f6640867eecd3106c635fc08a0abefa2c2cabc8e
Author: Jason Gustafson 
Date:   2015-12-16T01:49:36Z

KAFKA-2989: system tests should verify partitions consumed after rebalancing




> Verify all partitions consumed after rebalancing in system tests
> 
>
> Key: KAFKA-2989
> URL: https://issues.apache.org/jira/browse/KAFKA-2989
> Project: Kafka
>  Issue Type: Test
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> In KAFKA-2978, we found a bug which prevented the consumer from fetching some 
> assigned partitions. Our system tests didn't catch the bug because we only 
> assert that some messages from any topic are consumed after rebalancing. We 
> should strengthen these assertions to ensure that each partition is consumed.
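The strengthened assertion can be sketched as follows; the names are assumed for illustration (this is not the actual ducktape system-test code):

```python
# Illustrative sketch: strengthen "some messages were consumed" into
# "every assigned partition was consumed at least once" after a
# rebalance, which would have caught the KAFKA-2978 bug.
def assert_all_partitions_consumed(num_partitions, consumed_records):
    """consumed_records: iterable of (partition, offset) pairs observed."""
    seen = {partition for partition, _ in consumed_records}
    missing = set(range(num_partitions)) - seen
    assert not missing, "partitions never consumed: %s" % sorted(missing)

# All three partitions were observed, so this passes silently.
assert_all_partitions_consumed(3, [(0, 10), (1, 4), (2, 7), (0, 11)])
```

Had partition 1 never appeared in `consumed_records`, the assertion would fail and name the missing partition, instead of the old check passing as long as any message arrived.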





[jira] [Created] (KAFKA-3023) Log Compaction documentation still says compressed messages are not supported

2015-12-21 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-3023:
---

 Summary: Log Compaction documentation still says compressed 
messages are not supported
 Key: KAFKA-3023
 URL: https://issues.apache.org/jira/browse/KAFKA-3023
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


Looks like we can now compact topics with compressed messages  
(https://issues.apache.org/jira/browse/KAFKA-1374) but the docs still say we 
can't:
http://kafka.apache.org/documentation.html#design_compactionlimitations





[jira] [Commented] (KAFKA-3020) Ensure Checkstyle runs on all Java code

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066984#comment-15066984
 ] 

ASF GitHub Bot commented on KAFKA-3020:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/703

KAFKA-3020: Ensure CheckStyle runs on all Java code

- Adds CheckStyle to core and examples modules
- Fixes any existing CheckStyle issues

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka checkstyle-core

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/703.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #703


commit adc050940054947cc8a9a7396ec70a70a01f3e5f
Author: Grant Henke 
Date:   2015-11-10T22:46:38Z

KAFKA-3020: Ensure CheckStyle runs on all Java code

- Adds CheckStyle to core and examples modules
- Fixes any existing CheckStyle issues




> Ensure Checkstyle runs on all Java code
> ---
>
> Key: KAFKA-3020
> URL: https://issues.apache.org/jira/browse/KAFKA-3020
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>






[jira] [Commented] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067028#comment-15067028
 ] 

ASF GitHub Bot commented on KAFKA-2000:
---

GitHub user Parth-Brahmbhatt opened a pull request:

https://github.com/apache/kafka/pull/704

KAFKA-2000: Delete topic should also delete consumer offsets.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Parth-Brahmbhatt/kafka KAFKA-2000

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/704.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #704


commit ceae0b7031d297a7db6664b435bb3cdc55228646
Author: Parth Brahmbhatt 
Date:   2015-12-18T20:35:32Z

KAFKA-2000: Delete topic should also delete consumer offsets.




> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.9.1.0
>
> Attachments: KAFKA-2000.patch, KAFKA-2000_2015-05-03_10:39:11.patch
>
>






[jira] [Created] (KAFKA-3024) Remove old patch review tools

2015-12-21 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3024:
--

 Summary: Remove old patch review tools
 Key: KAFKA-3024
 URL: https://issues.apache.org/jira/browse/KAFKA-3024
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Grant Henke
Assignee: Grant Henke


Kafka has been using the new GitHub PR and Jenkins build process for a while 
now. No new patches have been added to Review Board for some time. We should 
remove the old patch review tools, and any new functionality should be added 
to the new PR build and merge script.





[GitHub] kafka pull request: KAFKA-2000: Delete topic should also delete co...

2015-12-21 Thread Parth-Brahmbhatt
GitHub user Parth-Brahmbhatt opened a pull request:

https://github.com/apache/kafka/pull/704

KAFKA-2000: Delete topic should also delete consumer offsets.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Parth-Brahmbhatt/kafka KAFKA-2000

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/704.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #704


commit ceae0b7031d297a7db6664b435bb3cdc55228646
Author: Parth Brahmbhatt 
Date:   2015-12-18T20:35:32Z

KAFKA-2000: Delete topic should also delete consumer offsets.






[jira] [Commented] (KAFKA-1377) transient unit test failure in LogOffsetTest

2015-12-21 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067096#comment-15067096
 ] 

Ismael Juma commented on KAFKA-1377:


[~pyritschard], this JIRA is about LogOffsetTest; if you are seeing other 
failures (i.e., SASL-related), please file a separate issue (in case one hasn't 
been filed already).

> transient unit test failure in LogOffsetTest
> 
>
> Key: KAFKA-1377
> URL: https://issues.apache.org/jira/browse/KAFKA-1377
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1377.patch, KAFKA-1377_2014-04-11_17:42:13.patch, 
> KAFKA-1377_2014-04-11_18:14:45.patch
>
>
> Saw the following transient unit test failure.
> kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime FAILED
> junit.framework.AssertionFailedError: expected: but 
> was:
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.failNotEquals(Assert.java:277)
> at junit.framework.Assert.assertEquals(Assert.java:64)
> at junit.framework.Assert.assertEquals(Assert.java:71)
> at 
> kafka.server.LogOffsetTest.testGetOffsetsBeforeEarliestTime(LogOffsetTest.scala:198)





[jira] [Updated] (KAFKA-2455) Test Failure: kafka.consumer.MetricsTest > testMetricsLeak

2015-12-21 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2455:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 694
[https://github.com/apache/kafka/pull/694]

> Test Failure: kafka.consumer.MetricsTest > testMetricsLeak 
> ---
>
> Key: KAFKA-2455
> URL: https://issues.apache.org/jira/browse/KAFKA-2455
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1
>
>
> I've seen this failure in builds twice recently:
> kafka.consumer.MetricsTest > testMetricsLeak FAILED
> java.lang.AssertionError: expected:<174> but was:<176>
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.failNotEquals(Assert.java:689)
> at org.junit.Assert.assertEquals(Assert.java:127)
> at org.junit.Assert.assertEquals(Assert.java:514)
> at org.junit.Assert.assertEquals(Assert.java:498)
> at 
> kafka.consumer.MetricsTest$$anonfun$testMetricsLeak$1.apply$mcVI$sp(MetricsTest.scala:65)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
> at kafka.consumer.MetricsTest.testMetricsLeak(MetricsTest.scala:63)
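A leak test of this shape typically opens and closes a client in a loop and asserts that the metrics registry returns to its baseline size; a result like `expected:<174> but was:<176>` means two metrics survived a close(). An illustrative sketch with a hypothetical registry (not Kafka's actual MetricsTest):

```python
# Hypothetical sketch of a metrics-leak check: after each create/close
# cycle the registry should return to its baseline size; any growth
# means close() failed to deregister some metrics.
class Registry:
    def __init__(self):
        self.metrics = set()

class Consumer:
    def __init__(self, registry, cid):
        self.registry = registry
        # Register this client's metrics on creation.
        self.names = {f"consumer-{cid}-fetch-rate", f"consumer-{cid}-lag"}
        registry.metrics |= self.names

    def close(self):
        # Deregister on close; omitting this line is the leak.
        self.registry.metrics -= self.names

registry = Registry()
baseline = len(registry.metrics)
for i in range(5):
    Consumer(registry, i).close()
assert len(registry.metrics) == baseline  # no leak across cycles
```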





[jira] [Commented] (KAFKA-1377) transient unit test failure in LogOffsetTest

2015-12-21 Thread Pierre-Yves Ritschard (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067053#comment-15067053
 ] 

Pierre-Yves Ritschard commented on KAFKA-1377:
--

[~guozhang] I'm testing against trunk.

> transient unit test failure in LogOffsetTest
> 
>
> Key: KAFKA-1377
> URL: https://issues.apache.org/jira/browse/KAFKA-1377
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1377.patch, KAFKA-1377_2014-04-11_17:42:13.patch, 
> KAFKA-1377_2014-04-11_18:14:45.patch
>
>
> Saw the following transient unit test failure.
> kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime FAILED
> junit.framework.AssertionFailedError: expected: but 
> was:
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.failNotEquals(Assert.java:277)
> at junit.framework.Assert.assertEquals(Assert.java:64)
> at junit.framework.Assert.assertEquals(Assert.java:71)
> at 
> kafka.server.LogOffsetTest.testGetOffsetsBeforeEarliestTime(LogOffsetTest.scala:198)





[jira] [Updated] (KAFKA-3009) Disallow star imports

2015-12-21 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3009:
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 700
[https://github.com/apache/kafka/pull/700]

> Disallow star imports
> -
>
> Key: KAFKA-3009
> URL: https://issues.apache.org/jira/browse/KAFKA-3009
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
> Fix For: 0.9.1.0
>
> Attachments: main.xml
>
>
> Looks like we don't want star imports in our code (java.util.*).
> So, let's add this rule to Checkstyle and fix existing violations.





[jira] [Commented] (KAFKA-3009) Disallow star imports

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067095#comment-15067095
 ] 

ASF GitHub Bot commented on KAFKA-3009:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/700


> Disallow star imports
> -
>
> Key: KAFKA-3009
> URL: https://issues.apache.org/jira/browse/KAFKA-3009
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
> Fix For: 0.9.1.0
>
> Attachments: main.xml
>
>
> Looks like we don't want star imports in our code (java.util.*).
> So, let's add this rule to Checkstyle and fix existing violations.





[GitHub] kafka pull request: KAFKA-2455: Test Failure: kafka.consumer.Metri...

2015-12-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/694




[jira] [Commented] (KAFKA-2455) Test Failure: kafka.consumer.MetricsTest > testMetricsLeak

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066855#comment-15066855
 ] 

ASF GitHub Bot commented on KAFKA-2455:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/694


> Test Failure: kafka.consumer.MetricsTest > testMetricsLeak 
> ---
>
> Key: KAFKA-2455
> URL: https://issues.apache.org/jira/browse/KAFKA-2455
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1
>
>
> I've seen this failure in builds twice recently:
> kafka.consumer.MetricsTest > testMetricsLeak FAILED
> java.lang.AssertionError: expected:<174> but was:<176>
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.failNotEquals(Assert.java:689)
> at org.junit.Assert.assertEquals(Assert.java:127)
> at org.junit.Assert.assertEquals(Assert.java:514)
> at org.junit.Assert.assertEquals(Assert.java:498)
> at 
> kafka.consumer.MetricsTest$$anonfun$testMetricsLeak$1.apply$mcVI$sp(MetricsTest.scala:65)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
> at kafka.consumer.MetricsTest.testMetricsLeak(MetricsTest.scala:63)





[GitHub] kafka pull request: KAFKA-3009 : Disallow star imports

2015-12-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/700




[jira] [Resolved] (KAFKA-2835) FAILING TEST: LogCleaner

2015-12-21 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2835.
--
Resolution: Duplicate
  Assignee: jin xing

> FAILING TEST: LogCleaner
> 
>
> Key: KAFKA-2835
> URL: https://issues.apache.org/jira/browse/KAFKA-2835
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: jin xing
>
> kafka.log.LogCleanerIntegrationTest > cleanerTest[2] FAILED
> java.lang.AssertionError: log cleaner should have processed up to offset 
> 599
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at 
> kafka.log.LogCleanerIntegrationTest.cleanerTest(LogCleanerIntegrationTest.scala:76)
> https://builds.apache.org/job/kafka-trunk-jdk7/817/console





[jira] [Commented] (KAFKA-2835) FAILING TEST: LogCleaner

2015-12-21 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066865#comment-15066865
 ] 

Guozhang Wang commented on KAFKA-2835:
--

Should already be fixed in KAFKA-2977.

> FAILING TEST: LogCleaner
> 
>
> Key: KAFKA-2835
> URL: https://issues.apache.org/jira/browse/KAFKA-2835
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>
> kafka.log.LogCleanerIntegrationTest > cleanerTest[2] FAILED
> java.lang.AssertionError: log cleaner should have processed up to offset 
> 599
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at 
> kafka.log.LogCleanerIntegrationTest.cleanerTest(LogCleanerIntegrationTest.scala:76)
> https://builds.apache.org/job/kafka-trunk-jdk7/817/console





[jira] [Created] (KAFKA-3022) Deduplicate common project configurations

2015-12-21 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3022:
--

 Summary: Deduplicate common project configurations
 Key: KAFKA-3022
 URL: https://issues.apache.org/jira/browse/KAFKA-3022
 Project: Kafka
  Issue Type: Sub-task
Reporter: Grant Henke
Assignee: Grant Henke


Many of the configurations for subproject artifacts, tests, Checkstyle, etc. 
are, and should remain, exactly the same. We can reduce duplicated build code 
by moving this configuration into the shared subprojects section.





[GitHub] kafka pull request: KAFKA-3024: Remove old patch review tools

2015-12-21 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/705

KAFKA-3024: Remove old patch review tools



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka review-tools-cleanup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/705.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #705


commit 4ae0d0b51dcd0bbf57cbbdedea6736480a344eca
Author: Grant Henke 
Date:   2015-12-21T20:19:55Z

KAFKA-3024: Remove old patch review tools






[jira] [Commented] (KAFKA-3017) hostnames with underscores '_' are not valid

2015-12-21 Thread Michael Martin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066877#comment-15066877
 ] 

Michael Martin commented on KAFKA-3017:
---

Thank you for the clarification, Manikumar! We can explicitly override the 
hostname to work around this restriction.

> hostnames with underscores '_' are not valid
> 
>
> Key: KAFKA-3017
> URL: https://issues.apache.org/jira/browse/KAFKA-3017
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Michael Martin
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Affects server.properties host.name and advertised.host.name
> {code}
> kafka_1 |  (kafka.server.KafkaConfig)
> kafka_1 | [2015-12-19 04:08:53,900] FATAL  (kafka.Kafka$)
> kafka_1 | kafka.common.KafkaException: Unable to parse 
> PLAINTEXT://kafka_kafka_1:9092 to a broker endpoint
> kafka_1 | at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:49)
> kafka_1 | at 
> kafka.utils.CoreUtils$$anonfun$listenerListToEndPoints$1.apply(CoreUtils.scala:309)
> kafka_1 | at 
> kafka.utils.CoreUtils$$anonfun$listenerListToEndPoints$1.apply(CoreUtils.scala:309)
> {code}
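For context, hostnames with underscores are rejected because RFC 952/RFC 1123 
hostname labels allow only letters, digits, and hyphens, which is why names 
like kafka_kafka_1 fail to parse. A minimal, hypothetical sketch of that 
validation rule (illustrative only, not Kafka's actual endpoint parser):

```java
import java.util.regex.Pattern;

class HostnameCheck {
    // RFC 1123 label: starts/ends with a letter or digit, hyphens allowed
    // in the middle; underscores are not valid anywhere.
    private static final Pattern LABEL =
        Pattern.compile("^[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?$");

    static boolean isValidHostname(String host) {
        if (host.isEmpty() || host.length() > 253) return false;
        for (String label : host.split("\\.", -1)) {
            if (label.isEmpty() || label.length() > 63) return false;
            if (!LABEL.matcher(label).matches()) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidHostname("kafka-kafka-1")); // true
        System.out.println(isValidHostname("kafka_kafka_1")); // false: underscore
    }
}
```

Renaming the container (or overriding the advertised hostname) to use hyphens 
instead of underscores avoids the parse failure.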



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3014) Integer overflow causes incorrect node iteration in leastLoadedNode()

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066875#comment-15066875
 ] 

ASF GitHub Bot commented on KAFKA-3014:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/696


> Integer overflow causes incorrect node iteration in leastLoadedNode() 
> --
>
> Key: KAFKA-3014
> URL: https://issues.apache.org/jira/browse/KAFKA-3014
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
>
> The leastLoadedNode() implementation iterates over all the known nodes to 
> find a suitable candidate for sending metadata. The loop looks like this:
> {code}
> for (int i = 0; i < nodes.size(); i++) {
>   int idx = Utils.abs((this.nodeIndexOffset + i) % nodes.size());
>   Node node = nodes.get(idx);
>   ...
> }
> {code}
> Unfortunately, this doesn't handle integer overflow correctly, which can 
> result in some nodes in the list being passed over. For example, if the size 
> of the node list is 5 and the random offset is Integer.MAX_VALUE, then the 
> loop will iterate over the following indices: 2, 3, 2, 1, 0. 
> In pathological cases, this can prevent the client from being able to connect 
> to good nodes in order to refresh metadata.
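The index sequence described above is easy to reproduce. The sketch below 
mimics the quoted loop, assuming Utils.abs behaves like Math.abs here (a 
stand-in, not the actual Kafka code):

```java
class OverflowDemo {
    // Reproduces the iteration from the description: the sum (offset + i)
    // overflows to Integer.MIN_VALUE territory for i >= 1, so the computed
    // indices repeat and some nodes are never visited.
    static String visitedIndices(int size, int offset) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < size; i++) {
            int idx = Math.abs((offset + i) % size); // overflows for i >= 1
            if (i > 0) sb.append(',');
            sb.append(idx);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // With 5 nodes and offset Integer.MAX_VALUE, index 4 is never visited:
        System.out.println(visitedIndices(5, Integer.MAX_VALUE)); // 2,3,2,1,0
    }
}
```

Widening to long before the modulo, e.g. `(int) (((long) offset + i) % size)`, 
is one overflow-safe alternative that visits 2, 3, 4, 0, 1 for the same 
inputs; the actual patch may take a different approach.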





[GitHub] kafka pull request: KAFKA-3014: fix integer overflow problem in le...

2015-12-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/696




[jira] [Resolved] (KAFKA-3014) Integer overflow causes incorrect node iteration in leastLoadedNode()

2015-12-21 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3014.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.1

Issue resolved by pull request 696
[https://github.com/apache/kafka/pull/696]

> Integer overflow causes incorrect node iteration in leastLoadedNode() 
> --
>
> Key: KAFKA-3014
> URL: https://issues.apache.org/jira/browse/KAFKA-3014
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
>
> The leastLoadedNode() implementation iterates over all the known nodes to 
> find a suitable candidate for sending metadata. The loop looks like this:
> {code}
> for (int i = 0; i < nodes.size(); i++) {
>   int idx = Utils.abs((this.nodeIndexOffset + i) % nodes.size());
>   Node node = nodes.get(idx);
>   ...
> }
> {code}
> Unfortunately, this doesn't handle integer overflow correctly, which can 
> result in some nodes in the list being passed over. For example, if the size 
> of the node list is 5 and the random offset is Integer.MAX_VALUE, then the 
> loop will iterate over the following indices: 2, 3, 2, 1, 0. 
> In pathological cases, this can prevent the client from being able to connect 
> to good nodes in order to refresh metadata.





[jira] [Created] (KAFKA-3021) Centralize dependency version management

2015-12-21 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3021:
--

 Summary: Centralize dependency version management
 Key: KAFKA-3021
 URL: https://issues.apache.org/jira/browse/KAFKA-3021
 Project: Kafka
  Issue Type: Sub-task
Reporter: Grant Henke
Assignee: Grant Henke








[jira] [Updated] (KAFKA-3020) Ensure Checkstyle runs on all Java code

2015-12-21 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3020:
---
Status: Patch Available  (was: In Progress)

> Ensure Checkstyle runs on all Java code
> ---
>
> Key: KAFKA-3020
> URL: https://issues.apache.org/jira/browse/KAFKA-3020
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>






[jira] [Updated] (KAFKA-2988) Change default configuration of the log cleaner

2015-12-21 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2988:
---
Status: Patch Available  (was: Open)

> Change default configuration of the log cleaner
> ---
>
> Key: KAFKA-2988
> URL: https://issues.apache.org/jira/browse/KAFKA-2988
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Since 0.9.0 the internal "__consumer_offsets" topic is being used more 
> heavily. Because this is a compacted topic, "log.cleaner.enable" needs to be 
> "true" in order for it to be compacted. 
> Since this is critical for core Kafka functionality, we should change the 
> default to true and potentially consider removing the option to disable it 
> altogether.
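As described above, enabling the cleaner is a single broker setting; a minimal 
server.properties fragment (shown as an example, assuming the 0.9 defaults 
elsewhere):

```properties
# Enable the log cleaner so compacted topics, such as the internal
# __consumer_offsets topic, are actually compacted.
log.cleaner.enable=true
```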





[GitHub] kafka pull request: Update 2015-12-21

2015-12-21 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/706




[GitHub] kafka pull request: Update 2015-12-21

2015-12-21 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/706

Update 2015-12-21



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/706.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #706


commit af74e4ba84b1094c29a9882b71ea2f100af761ec
Author: vahidhashemian 
Date:   2015-12-17T20:38:45Z

Merge pull request #3 from apache/trunk

Update 2015-12-17






[jira] [Created] (KAFKA-3025) KIP-31 (part 1): Add timestamp field to message, configs, and Producer/ConsumerRecord

2015-12-21 Thread Anna Povzner (JIRA)
Anna Povzner created KAFKA-3025:
---

 Summary: KIP-31 (part 1): Add timestamp field to message, configs, 
and Producer/ConsumerRecord
 Key: KAFKA-3025
 URL: https://issues.apache.org/jira/browse/KAFKA-3025
 Project: Kafka
  Issue Type: Improvement
Reporter: Anna Povzner
Assignee: Anna Povzner


This JIRA covers the changes for KIP-32, excluding the broker checking and 
acting on the timestamp field in a message.

This JIRA includes:
1. Add a timestamp field to the message
Timestamp => int64
Timestamp is the number of milliseconds since the Unix epoch

2. Add a timestamp field to both ProducerRecord and ConsumerRecord
If a user specifies the timestamp in a ProducerRecord, the ProducerRecord is 
sent with this timestamp.
If a user does not specify the timestamp in a ProducerRecord, the producer 
stamps the ProducerRecord with the current time.
ConsumerRecord will carry the timestamp of the message as stored on the broker.

3. Add two new configurations to the broker. Configuration is per topic.
* message.timestamp.type: the type of the timestamp. Possible values: 
CreateTime, LogAppendTime. Default: CreateTime
* max.message.time.difference.ms: threshold for the acceptable time difference 
between the Timestamp in the message and the local time on the broker. 
Default: Long.MaxValue
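The producer-side stamping rule described in item 2 can be sketched as a 
one-line decision (a hypothetical helper, not the actual producer code):

```java
class TimestampStamping {
    // If the user supplied a timestamp, use it; otherwise stamp the record
    // with the producer's current time ("nowMs" passed in for testability).
    static long effectiveTimestamp(Long userTimestamp, long nowMs) {
        return userTimestamp != null ? userTimestamp : nowMs;
    }

    public static void main(String[] args) {
        System.out.println(effectiveTimestamp(1450000000000L, 9L)); // user-supplied wins
        System.out.println(effectiveTimestamp(null, 9L));           // producer stamps now
    }
}
```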





[jira] [Commented] (KAFKA-1377) transient unit test failure in LogOffsetTest

2015-12-21 Thread Pierre-Yves Ritschard (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067138#comment-15067138
 ] 

Pierre-Yves Ritschard commented on KAFKA-1377:
--

[~ijuma] Will do. It looked to me like a generalization of the previous problem.

> transient unit test failure in LogOffsetTest
> 
>
> Key: KAFKA-1377
> URL: https://issues.apache.org/jira/browse/KAFKA-1377
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1377.patch, KAFKA-1377_2014-04-11_17:42:13.patch, 
> KAFKA-1377_2014-04-11_18:14:45.patch
>
>
> Saw the following transient unit test failure.
> kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime FAILED
> junit.framework.AssertionFailedError: expected: but 
> was:
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.failNotEquals(Assert.java:277)
> at junit.framework.Assert.assertEquals(Assert.java:64)
> at junit.framework.Assert.assertEquals(Assert.java:71)
> at 
> kafka.server.LogOffsetTest.testGetOffsetsBeforeEarliestTime(LogOffsetTest.scala:198)





[jira] [Created] (KAFKA-3026) KIP-32 (part 2): Changes in broker to over-write timestamp or reject message

2015-12-21 Thread Anna Povzner (JIRA)
Anna Povzner created KAFKA-3026:
---

 Summary: KIP-32 (part 2): Changes in broker to over-write 
timestamp or reject message
 Key: KAFKA-3026
 URL: https://issues.apache.org/jira/browse/KAFKA-3026
 Project: Kafka
  Issue Type: Improvement
Reporter: Anna Povzner
Assignee: Anna Povzner


This JIRA includes:
When the broker receives a message, it checks the configs:
1. If message.timestamp.type=LogAppendTime, the server over-writes the 
timestamp with its current local time.
The message may or may not be compressed; in either case, the timestamp is 
always over-written with the broker's current time.

2. If message.timestamp.type=CreateTime, the server calculates the difference 
between the current time on the broker and the Timestamp in the message:
If the difference is within max.message.time.difference.ms, the server accepts 
the message and appends it to the log. For a compressed message, the server 
updates the timestamp in the wrapper message to -1: this means that CreateTime 
is used and the timestamp is carried in each individual inner message.
If the difference is larger than max.message.time.difference.ms, the server 
rejects the entire batch with TimestampExceededThresholdException.

(Adding the timestamp to the message and adding the configs themselves are 
covered by KAFKA-3025.)
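The broker-side decision described above reduces to a small function; the 
sketch below uses illustrative names, not the actual Kafka classes:

```java
class BrokerTimestampPolicy {
    enum TimestampType { CREATE_TIME, LOG_APPEND_TIME }

    // LogAppendTime: always over-write with the broker's local time.
    // CreateTime: accept the message's timestamp only if it is within
    // maxDiffMs of the broker's clock; otherwise reject the batch.
    static long resolveTimestamp(TimestampType type, long messageTs,
                                 long nowMs, long maxDiffMs) {
        if (type == TimestampType.LOG_APPEND_TIME)
            return nowMs;
        if (Math.abs(nowMs - messageTs) > maxDiffMs)
            throw new IllegalStateException("timestamp difference exceeds threshold");
        return messageTs;
    }

    public static void main(String[] args) {
        System.out.println(resolveTimestamp(TimestampType.LOG_APPEND_TIME, 1L, 100L, 10L)); // 100
        System.out.println(resolveTimestamp(TimestampType.CREATE_TIME, 95L, 100L, 10L));    // 95
    }
}
```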





Build failed in Jenkins: kafka-trunk-jdk8 #251

2015-12-21 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3009; Disallow star imports

--
[...truncated 3617 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > 

[GitHub] kafka pull request: MINOR: Fix typo in documentation

2015-12-21 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/707

MINOR: Fix typo in documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka 
typo04/fix_documentation_typos

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/707.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #707


commit 3f6dfb78af7b46d0d216cd605780f3b6db2d231c
Author: Vahid Hashemian 
Date:   2015-12-21T23:42:11Z

Fix a minor documentation typo






[jira] [Created] (KAFKA-3027) Kafka metrics can be stale if there is no new update

2015-12-21 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-3027:
--

 Summary: Kafka metrics can be stale if there is no new update
 Key: KAFKA-3027
 URL: https://issues.apache.org/jira/browse/KAFKA-3027
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.0
Reporter: Jun Rao


Currently, org.apache.kafka.common.metrics has the issue that the reported 
metric value can be stale if there is no new update. For example, in the 
producer, if no new data is sent to the producer instance, metrics such as 
record rate will be stale.





[jira] [Updated] (KAFKA-3025) KIP-32 (part 1): Add timestamp field to message, configs, and Producer/ConsumerRecord

2015-12-21 Thread Anna Povzner (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anna Povzner updated KAFKA-3025:

Summary: KIP-32 (part 1): Add timestamp field to message, configs, and 
Producer/ConsumerRecord  (was: KIP-31 (part 1): Add timestamp field to message, 
configs, and Producer/ConsumerRecord)

> KIP-32 (part 1): Add timestamp field to message, configs, and 
> Producer/ConsumerRecord
> -
>
> Key: KAFKA-3025
> URL: https://issues.apache.org/jira/browse/KAFKA-3025
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> This JIRA is for changes for KIP-32 excluding broker checking and acting on 
> timestamp field in a message.
> This JIRA includes:
> 1. Add time field to the message
> Timestamp => int64
> Timestamp is the number of milliseconds since Unix Epoch
> 2. Add time field to both ProducerRecord and Consumer Record
> If a user specifies the timestamp in a ProducerRecord, the ProducerRecord is 
> sent with this timestamp.
> If a user does not specify the timestamp in a ProducerRecord, the producer 
> stamps the ProducerRecord with current time.
> ConsumerRecord will have the timestamp of the message that were stored on 
> broker.
> 3. Add two new configurations to the broker. Configuration is per topic.
> * message.timestamp.type: type of a timestamp. Possible values: CreateTime, 
> LogAppendTime. Default: CreateTime
> * max.message.time.difference.ms: threshold for the acceptable time 
> difference between Timestamp in the message and local time on the broker. 
> Default: Long.MaxValue





Re: KAFKA Connect - Source Connector for Mainframe REST Services

2015-12-21 Thread saravanan tirugnanum
Thanks. Yes, this is resolved.
Please also check the mail I sent earlier, below.
Hi,
I just found that the pom.xml of kafka-connect-jdbc is missing these entries. 
Also, both of the libraries common-config and common-utils are not found in 
the Confluent Maven repo. Please upload them and update the pom.xml with the 
entry below.
https://github.com/confluentinc/kafka-connect-jdbc

    <dependency>
        <groupId>io.confluent</groupId>
        <artifactId>common-utils</artifactId>
        <version>${confluent.version}</version>
    </dependency>

Regards,
Saravanan

On Sunday, 20 December 2015 10:14 PM, Ewen Cheslack-Postava 
 wrote:
 

 I think the relevant questions here were addressed in this Github issue: 
https://github.com/confluentinc/kafka-connect-jdbc/issues/30

-Ewen


On Tue, Dec 15, 2015 at 12:51 PM, saravanan tirugnanum  
wrote:

Also, please share an example of the JDBC Source Connector running in 
distributed mode and assigning tasks across different workers.
Regards,
Saravanan


    On Tuesday, 15 December 2015 1:23 PM, saravanan tirugnanum 
 wrote:


Hi,
I am working on designing and building a SourceConnector to run in 
distributed mode to transfer data from mainframe data sources which are 
exposed as RESTful services. I am planning to spin up multiple workers, each 
handling and processing a subset of the data, coordinating with the other 
workers in distributed mode. Any recommendations or references around this 
implementation, and around how to distribute the data and handle offset 
management in a distributed environment?
The volume of inbound data will be huge, so I am looking for a scalable, 
distributed, and fault-tolerant model.
Any small input would be of great help.
Regards,
Saravanan

  



-- 
Thanks,
Ewen


  

[jira] [Commented] (KAFKA-2937) Topics marked for delete in Zookeeper may become undeletable

2015-12-21 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066499#comment-15066499
 ] 

Rajini Sivaram commented on KAFKA-2937:
---

[~mgharat] We are running kafka 0.9.0.0.

> Topics marked for delete in Zookeeper may become undeletable
> 
>
> Key: KAFKA-2937
> URL: https://issues.apache.org/jira/browse/KAFKA-2937
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Mayuresh Gharat
>
> In our clusters, we occasionally see topics marked for delete, but never 
> actually deleted. It may be due to brokers being restarted while tests were 
> running, but further restarts of Kafka dont fix the problem. The topics 
> remain marked for delete in Zookeeper.
> Topic describe shows:
> {quote}
> Topic:testtopic   PartitionCount:1ReplicationFactor:3 Configs:
>   Topic: testtopicPartition: 0Leader: noneReplicas: 3,4,0 
> Isr: 
> {quote}
> Kafka logs show:
> {quote}
> 2015-12-02 15:53:30,152] ERROR Controller 2 epoch 213 initiated state change 
> of replica 3 for partition [testtopic,0] from OnlineReplica to OfflineReplica 
> failed (state.change.logger)
> kafka.common.StateChangeFailedException: Failed to change state of replica 3 
> for partition [testtopic,0] since the leader and isr path in zookeeper is 
> empty
> at 
> kafka.controller.ReplicaStateMachine.handleStateChange(ReplicaStateMachine.scala:269)
> at 
> kafka.controller.ReplicaStateMachine$$anonfun$handleStateChanges$2.apply(ReplicaStateMachine.scala:114)
> at 
> kafka.controller.ReplicaStateMachine$$anonfun$handleStateChanges$2.apply(ReplicaStateMachine.scala:114)
> at 
> scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:322)
> at 
> scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:978)
> at 
> kafka.controller.ReplicaStateMachine.handleStateChanges(ReplicaStateMachine.scala:114)
> at 
> kafka.controller.TopicDeletionManager$$anonfun$startReplicaDeletion$2.apply(TopicDeletionManager.scala:342)
> at 
> kafka.controller.TopicDeletionManager$$anonfun$startReplicaDeletion$2.apply(TopicDeletionManager.scala:334)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:116)
> at 
> kafka.controller.TopicDeletionManager.startReplicaDeletion(TopicDeletionManager.scala:334)
> at 
> kafka.controller.TopicDeletionManager.kafka$controller$TopicDeletionManager$$onPartitionDeletion(TopicDeletionManager.scala:367)
> at 
> kafka.controller.TopicDeletionManager$$anonfun$kafka$controller$TopicDeletionManager$$onTopicDeletion$2.apply(TopicDeletionManager.scala:313)
> at 
> kafka.controller.TopicDeletionManager$$anonfun$kafka$controller$TopicDeletionManager$$onTopicDeletion$2.apply(TopicDeletionManager.scala:312)
> at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
> at 
> kafka.controller.TopicDeletionManager.kafka$controller$TopicDeletionManager$$onTopicDeletion(TopicDeletionManager.scala:312)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1$$anonfun$apply$mcV$sp$4.apply(TopicDeletionManager.scala:431)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1$$anonfun$apply$mcV$sp$4.apply(TopicDeletionManager.scala:403)
> at scala.collection.immutable.Set$Set2.foreach(Set.scala:111)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1.apply$mcV$sp(TopicDeletionManager.scala:403)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1.apply(TopicDeletionManager.scala:397)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1.apply(TopicDeletionManager.scala:397)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread.doWork(TopicDeletionManager.scala:397)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> {quote}  
>  





[jira] [Updated] (KAFKA-3027) Kafka metrics can be stale if there is no new update

2015-12-21 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3027:
---
Affects Version/s: 0.9.0.0

> Kafka metrics can be stale if there is no new update
> 
>
> Key: KAFKA-3027
> URL: https://issues.apache.org/jira/browse/KAFKA-3027
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.0, 0.9.0.0
>Reporter: Jun Rao
>
> Currently, org.apache.kafka.common.metrics has the issue that the reported 
> metric value can be stale if there is no new update. For example, in the 
> producer, if no new data is sent to the producer instance, metrics such as 
> record rate will be stale.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3006) Make collection default container type for sequences in the consumer API

2015-12-21 Thread Pierre-Yves Ritschard (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067633#comment-15067633
 ] 

Pierre-Yves Ritschard commented on KAFKA-3006:
--

[~hachikuji] [~ijuma] I went through the refactor. I had to fight Gradle and 
some unrelated tests that fail for me (on both trunk and this branch), which 
explains the many commits. As it stands, the branch builds and passes all tests.

Along the way I discovered that SinkTask uses similar signatures; I propose 
converging on the same signatures in a separate patch, to get this under way.

> Make collection default container type for sequences in the consumer API
> 
>
> Key: KAFKA-3006
> URL: https://issues.apache.org/jira/browse/KAFKA-3006
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Pierre-Yves Ritschard
>  Labels: patch
>
> The KafkaConsumer API has some annoying inconsistencies in the usage of 
> collection types. For example, subscribe() takes a list, but subscription() 
> returns a set. Similarly for assign() and assignment(). We also have pause() 
> , seekToBeginning(), seekToEnd(), and resume() which annoyingly use a 
> variable argument array, which means you have to copy the result of 
> assignment() to an array if you want to pause all assigned partitions. We can 
> solve these issues by adding the following variants:
> {code}
> void subscribe(Collection<String> topics);
> void subscribe(Collection<String> topics, ConsumerRebalanceListener);
> void assign(Collection<TopicPartition> partitions);
> void pause(Collection<TopicPartition> partitions);
> void resume(Collection<TopicPartition> partitions);
> void seekToBeginning(Collection<TopicPartition>);
> void seekToEnd(Collection<TopicPartition>);
> {code}
> This issue supersedes KAFKA-2991.
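To make the ergonomics concrete, here is a minimal sketch contrasting the varargs signature with the proposed Collection-based one. `TopicPartition` here is a hypothetical stand-in type; only the call-site ergonomics are illustrated, not real KafkaConsumer behaviour.

```java
import java.util.Collection;
import java.util.List;

public class CollectionApiSketch {
    static class TopicPartition {
        final String topic;
        final int partition;
        TopicPartition(String topic, int partition) {
            this.topic = topic;
            this.partition = partition;
        }
    }

    // Varargs style: a caller holding a Collection must first copy it to an array.
    static int pauseVarargs(TopicPartition... partitions) {
        return partitions.length;
    }

    // Proposed style: the Set returned by assignment(), a List, etc. is accepted as-is.
    static int pauseCollection(Collection<TopicPartition> partitions) {
        return partitions.size();
    }

    public static void main(String[] args) {
        List<TopicPartition> assignment = List.of(
                new TopicPartition("t", 0), new TopicPartition("t", 1));

        // Varargs forces an intermediate array copy of the assignment...
        int a = pauseVarargs(assignment.toArray(new TopicPartition[0]));
        // ...while the Collection variant takes the collection directly.
        int b = pauseCollection(assignment);

        System.out.println(a == b); // prints "true": both pause the same two partitions
    }
}
```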



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3027) Kafka metrics can be stale if there is no new update

2015-12-21 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067445#comment-15067445
 ] 

Jay Kreps commented on KAFKA-3027:
--

The window should advance either when an update comes in or when the value is 
read (e.g. SampledStat.measure()); is that not happening?
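A toy illustration of the staleness being discussed (this is not Kafka's actual SampledStat implementation): a windowed rate that only rotates its window on record() keeps reporting the last value forever once updates stop, while a variant that expires the sample on read decays to zero.

```java
public class StaleRateSketch {
    static final long WINDOW_MS = 1_000;

    long windowStart = 0;
    double count = 0;

    void record(long nowMs) {
        // Rotate the window only when an update arrives.
        if (nowMs - windowStart >= WINDOW_MS) { windowStart = nowMs; count = 0; }
        count++;
    }

    // Stale variant: never consults the clock on read.
    double measureStale() {
        return count / (WINDOW_MS / 1000.0);
    }

    // Fixed variant: expire the obsolete sample at read time, as suggested above.
    double measureFresh(long nowMs) {
        if (nowMs - windowStart >= WINDOW_MS) count = 0;
        return count / (WINDOW_MS / 1000.0);
    }

    public static void main(String[] args) {
        StaleRateSketch rate = new StaleRateSketch();
        rate.record(0); rate.record(10); rate.record(20);

        long muchLater = 60_000; // a minute with no new updates
        System.out.println(rate.measureStale());          // prints 3.0 (stale)
        System.out.println(rate.measureFresh(muchLater)); // prints 0.0 (decayed)
    }
}
```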

> Kafka metrics can be stale if there is no new update
> 
>
> Key: KAFKA-3027
> URL: https://issues.apache.org/jira/browse/KAFKA-3027
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.0, 0.9.0.0
>Reporter: Jun Rao
>
> Currently, org.apache.kafka.common.metrics has the issue that the reported 
> metric value can be stale if there is no new update. For example, in the 
> producer, if no new data is sent to the producer instance, metrics such as 
> record rate will be stale.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3020: Ensure CheckStyle runs on all Java...

2015-12-21 Thread granthenke
Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/703


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3020) Ensure Checkstyle runs on all Java code

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067541#comment-15067541
 ] 

ASF GitHub Bot commented on KAFKA-3020:
---

GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/703

KAFKA-3020: Ensure CheckStyle runs on all Java code

- Adds CheckStyle to core and examples modules
- Fixes any existing CheckStyle issues

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka checkstyle-core

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/703.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #703


commit adc050940054947cc8a9a7396ec70a70a01f3e5f
Author: Grant Henke 
Date:   2015-11-10T22:46:38Z

KAFKA-3020: Ensure CheckStyle runs on all Java code

- Adds CheckStyle to core and examples modules
- Fixes any existing CheckStyle issues




> Ensure Checkstyle runs on all Java code
> ---
>
> Key: KAFKA-3020
> URL: https://issues.apache.org/jira/browse/KAFKA-3020
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3020) Ensure Checkstyle runs on all Java code

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067540#comment-15067540
 ] 

ASF GitHub Bot commented on KAFKA-3020:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/703


> Ensure Checkstyle runs on all Java code
> ---
>
> Key: KAFKA-3020
> URL: https://issues.apache.org/jira/browse/KAFKA-3020
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2914) Kafka Connect Source connector for HBase

2015-12-21 Thread James Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067437#comment-15067437
 ] 

James Cheng commented on KAFKA-2914:


https://github.com/wushujames/copycat-connector-skeleton has now been updated 
to support 0.9.0. And it has been renamed to 
https://github.com/wushujames/kafka-connector-skeleton

> Kafka Connect Source connector for HBase 
> -
>
> Key: KAFKA-2914
> URL: https://issues.apache.org/jira/browse/KAFKA-2914
> Project: Kafka
>  Issue Type: New Feature
>  Components: copycat
>Reporter: Niels Basjes
>Assignee: Ewen Cheslack-Postava
>
> In many cases I see HBase being used to persist data.
> I would like to listen to the changes and process them in a streaming system 
> (like Apache Flink).
> Feature request: A Kafka Connect "Source" that listens to the changes in a 
> specified HBase table. These changes are then stored in a 'standardized' form 
> in Kafka so that it becomes possible to process the observed changes in 
> near-realtime. I expect this 'standard' to be very HBase specific.
> Implementation suggestion: Perhaps listening to the HBase WAL like the "HBase 
> Side Effects Processor" does?
> https://github.com/NGDATA/hbase-indexer/tree/master/hbase-sep



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3020: Ensure CheckStyle runs on all Java...

2015-12-21 Thread granthenke
GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/703

KAFKA-3020: Ensure CheckStyle runs on all Java code

- Adds CheckStyle to core and examples modules
- Fixes any existing CheckStyle issues

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka checkstyle-core

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/703.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #703


commit adc050940054947cc8a9a7396ec70a70a01f3e5f
Author: Grant Henke 
Date:   2015-11-10T22:46:38Z

KAFKA-3020: Ensure CheckStyle runs on all Java code

- Adds CheckStyle to core and examples modules
- Fixes any existing CheckStyle issues






[GitHub] kafka pull request: KAFKA-3020: Ensure CheckStyle runs on all Java...

2015-12-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/703




Build failed in Jenkins: kafka-trunk-jdk7 #920

2015-12-21 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3020; Ensure CheckStyle runs on all Java code

--
[...truncated 124 lines...]
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/coordinator/GroupMetadataManager.scala:394:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/server/KafkaApis.scala:284:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/server/KafkaServer.scala:301:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/server/KafkaServer.scala:302:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/tools/EndToEndLatency.scala:74:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/tools/MirrorMaker.scala:195:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  maybeSetDefaultProperty(producerProps, 
ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/tools/ProducerPerformance.scala:40:
 @deprecated now takes two arguments; see the scaladoc.
@deprecated
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/admin/AclCommand.scala:243:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
Console.readLine().equalsIgnoreCase("y")
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/admin/TopicCommand.scala:353:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
if (!Console.readLine().equalsIgnoreCase("y")) {
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/controller/ControllerChannelManager.scala:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/controller/ControllerChannelManager.scala:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/network/BlockingChannel.scala:129:

Jenkins build is back to normal : kafka-trunk-jdk7 #917

2015-12-21 Thread Apache Jenkins Server
See 



[jira] [Reopened] (KAFKA-1377) transient unit test failure in LogOffsetTest

2015-12-21 Thread Pierre-Yves Ritschard (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre-Yves Ritschard reopened KAFKA-1377:
--

I am getting these errors consistently.
This is against trunk on Linux, 64-bit, i7 processor, 16 GB RAM. JDK version:

java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)


> transient unit test failure in LogOffsetTest
> 
>
> Key: KAFKA-1377
> URL: https://issues.apache.org/jira/browse/KAFKA-1377
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1377.patch, KAFKA-1377_2014-04-11_17:42:13.patch, 
> KAFKA-1377_2014-04-11_18:14:45.patch
>
>
> Saw the following transient unit test failure.
> kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime FAILED
> junit.framework.AssertionFailedError: expected: but 
> was:
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.failNotEquals(Assert.java:277)
> at junit.framework.Assert.assertEquals(Assert.java:64)
> at junit.framework.Assert.assertEquals(Assert.java:71)
> at 
> kafka.server.LogOffsetTest.testGetOffsetsBeforeEarliestTime(LogOffsetTest.scala:198)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3028) "producer.request-latency-avg/max" metric can't report latency less than 1ms

2015-12-21 Thread Alexey Pirogov (JIRA)
Alexey Pirogov created KAFKA-3028:
-

 Summary: "producer.request-latency-avg/max" metric can't report 
latency less than 1ms
 Key: KAFKA-3028
 URL: https://issues.apache.org/jira/browse/KAFKA-3028
 Project: Kafka
  Issue Type: Bug
Reporter: Alexey Pirogov


The "producer.request-latency-avg/max" metrics report NaN if the latency is 
less than 1 ms.

Maybe it is possible to measure in nanoseconds, or at least use a double for 
latency measurement.

http://mail-archives.apache.org/mod_mbox/kafka-users/201512.mbox/%3CCAD5tkZbyCRJpwTW3XPOYhkx%3Dcs6a0Xo4mNVVJJGXisiSKczTCA%40mail.gmail.com%3E

"Hi Alexey,

Could you please report a bug in JIRA for the NaN result? We should handle
that better.

Thanks,
Ismael

On Mon, Dec 21, 2015 at 9:12 AM, Alexey Pirogov 
wrote:

> I'm looking for help with a question regarding measuring of producer
> request latency.
> I expected that "producer.request-latency-avg/max" will do a good job for
> me. But seems that if latencies less than 1ms in most cases, this metric
> will emit NaN(as it doesn't support float values).
> We need this metric for monitoring purpose.
>
> Is there any way to producer request latency statistic without adding
> callback or blocking of Future from KafkaProducer.send(...) method?
>
> P.S. Technically, we could treat NaN from "producer.request-latency-avg"
> metric as a special case in our monitoring tool, but it will required some
> specific configuration only for this metric.
>
> Thank you,
> Alexey"
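A simplified illustration of the underlying truncation (this is not Kafka's metrics code, and the NaN itself presumably comes from how the stat handles the resulting empty/zero samples): a 300 µs request rounds to 0 when timed in whole milliseconds, while a double-valued conversion from nanoseconds preserves it.

```java
public class SubMillisLatency {
    public static void main(String[] args) {
        long startNs = 0;
        long endNs = 300_000; // a 300 µs (0.3 ms) request

        long latencyMs = (endNs - startNs) / 1_000_000;      // integer ms: truncates to 0
        double latencyMsD = (endNs - startNs) / 1_000_000.0; // double ms: 0.3

        System.out.println(latencyMs);  // prints 0
        System.out.println(latencyMsD); // prints 0.3
    }
}
```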



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3020) Ensure Checkstyle runs on all Java code

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067649#comment-15067649
 ] 

ASF GitHub Bot commented on KAFKA-3020:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/703


> Ensure Checkstyle runs on all Java code
> ---
>
> Key: KAFKA-3020
> URL: https://issues.apache.org/jira/browse/KAFKA-3020
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #252

2015-12-21 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3020; Ensure CheckStyle runs on all Java code

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 64b746bd8b4dae17d7dd804f0e7161f304e2d8ee 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 64b746bd8b4dae17d7dd804f0e7161f304e2d8ee
 > git rev-list a0d21407cbdd3b7cb73f52c986aa2f60804618e7 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1657597870875774827.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 14.151 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson108199182503607204.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.9/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3/792d5e592f6f3f0c1a3337cd0ac84309b544f8f4/lz4-1.3.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 13.402 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


[jira] [Updated] (KAFKA-3020) Ensure Checkstyle runs on all Java code

2015-12-21 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3020:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 703
[https://github.com/apache/kafka/pull/703]

> Ensure Checkstyle runs on all Java code
> ---
>
> Key: KAFKA-3020
> URL: https://issues.apache.org/jira/browse/KAFKA-3020
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1377) transient unit test failure in LogOffsetTest

2015-12-21 Thread Pierre-Yves Ritschard (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066345#comment-15066345
 ] 

Pierre-Yves Ritschard commented on KAFKA-1377:
--

FWIW, bumping the waitTime parameter in TestUtils.scala does not change the 
behavior, so this is not timing related (waiting for 15s instead of 5s still 
exhibits the same behavior).

> transient unit test failure in LogOffsetTest
> 
>
> Key: KAFKA-1377
> URL: https://issues.apache.org/jira/browse/KAFKA-1377
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1377.patch, KAFKA-1377_2014-04-11_17:42:13.patch, 
> KAFKA-1377_2014-04-11_18:14:45.patch
>
>
> Saw the following transient unit test failure.
> kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime FAILED
> junit.framework.AssertionFailedError: expected: but 
> was:
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.failNotEquals(Assert.java:277)
> at junit.framework.Assert.assertEquals(Assert.java:64)
> at junit.framework.Assert.assertEquals(Assert.java:71)
> at 
> kafka.server.LogOffsetTest.testGetOffsetsBeforeEarliestTime(LogOffsetTest.scala:198)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3019) Add an exceptionName method to Errors

2015-12-21 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3019:
--

 Summary: Add an exceptionName method to Errors
 Key: KAFKA-3019
 URL: https://issues.apache.org/jira/browse/KAFKA-3019
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Grant Henke
Assignee: Grant Henke


The Errors class is often used to get and print the name of the exception 
associated with an error. Adding an exceptionName method and updating all 
usages would make the code clearer and less error-prone.
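A hypothetical sketch of the proposed helper (not the actual Kafka Errors enum): an Errors-style enum that exposes the associated exception's class name directly, instead of callers writing error.exception().getClass().getName() and risking an NPE when there is no exception, as for NONE.

```java
public enum ErrorsSketch {
    NONE(null),
    CORRUPT_MESSAGE(new RuntimeException("corrupt message"));

    private final Exception exception;

    ErrorsSketch(Exception exception) { this.exception = exception; }

    public Exception exception() { return exception; }

    // The proposed convenience method: a null-safe exception class name.
    public String exceptionName() {
        return exception == null ? null : exception.getClass().getName();
    }

    public static void main(String[] args) {
        System.out.println(ErrorsSketch.CORRUPT_MESSAGE.exceptionName()); // prints java.lang.RuntimeException
        System.out.println(ErrorsSketch.NONE.exceptionName());            // prints null
    }
}
```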



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3020) Ensure Checkstyle runs on all Java code

2015-12-21 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3020:
--

 Summary: Ensure Checkstyle runs on all Java code
 Key: KAFKA-3020
 URL: https://issues.apache.org/jira/browse/KAFKA-3020
 Project: Kafka
  Issue Type: Sub-task
Reporter: Grant Henke
Assignee: Grant Henke






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-3009) Disallow star imports

2015-12-21 Thread Manasvi Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3009 started by Manasvi Gupta.

> Disallow star imports
> -
>
> Key: KAFKA-3009
> URL: https://issues.apache.org/jira/browse/KAFKA-3009
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
>
> Looks like we don't want star imports in our code (java.util.*).
> So, let's add this rule to checkstyle and fix existing violations.
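For reference, Checkstyle ships a built-in check for exactly this; a minimal fragment that could go into the project's checkstyle.xml might look like the following (the placement within the existing TreeWalker configuration is assumed):

```xml
<module name="TreeWalker">
  <!-- Reject any import statement ending in ".*" -->
  <module name="AvoidStarImport"/>
</module>
```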



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3009) Disallow star imports

2015-12-21 Thread Manasvi Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manasvi Gupta updated KAFKA-3009:
-
Attachment: main.xml

Checkstyle errors on core module

> Disallow star imports
> -
>
> Key: KAFKA-3009
> URL: https://issues.apache.org/jira/browse/KAFKA-3009
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
> Attachments: main.xml
>
>
> Looks like we don't want star imports in our code (java.util.*).
> So, let's add this rule to checkstyle and fix existing violations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-3020) Ensure Checkstyle runs on all Java code

2015-12-21 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3020 started by Grant Henke.
--
> Ensure Checkstyle runs on all Java code
> ---
>
> Key: KAFKA-3020
> URL: https://issues.apache.org/jira/browse/KAFKA-3020
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)