[GitHub] kafka-site pull request #92: Redesign of Streams page - includes video & cus...

2017-10-06 Thread derrickdoo
Github user derrickdoo commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/92#discussion_r143323026
  
--- Diff: 0110/streams/index.html ---
@@ -1,275 +1,311 @@
 
-
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+   -->
 
-
 

Build failed in Jenkins: kafka-trunk-jdk7 #2869

2017-10-06 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5829; Only delete producer snapshots before the recovery point

[wangguoz] KAFKA-5362; Follow up to Streams EOS system test

[wangguoz] MINOR: KIP-161 upgrade docs change

--
[...truncated 256.05 KB...]

kafka.api.AuthorizerIntegrationTest > 
shouldSendSuccessfullyWhenIdempotentAndHasCorrectACL PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerTopicAuthorizationExceptionInSendCallback STARTED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerTopicAuthorizationExceptionInSendCallback PASSED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoWriteTransactionalIdAcl STARTED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoWriteTransactionalIdAcl PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedCreatePartitions STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedCreatePartitions PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithoutTopicDescribeAccess 
STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithoutTopicDescribeAccess 
PASSED

kafka.api.AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnEndTransaction
 STARTED

kafka.api.AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnEndTransaction
 PASSED

kafka.api.AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnSendOffsetsToTxn
 STARTED

kafka.api.AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnSendOffsetsToTxn
 PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl STARTED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe 
STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicExisting STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicExisting PASSED

kafka.api.AuthorizerIntegrationTest > 
testUnauthorizedDeleteRecordsWithoutDescribe STARTED

kafka.api.AuthorizerIntegrationTest > 
testUnauthorizedDeleteRecordsWithoutDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl STARTED

kafka.api.AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl PASSED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionMatchingInternalTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionMatchingInternalTopic PASSED

kafka.api.AuthorizerIntegrationTest > 
testSendOffsetsWithNoConsumerGroupDescribeAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSendOffsetsWithNoConsumerGroupDescribeAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId STARTED

kafka.api.AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess PASSED

kafka.api.ClientIdQuotaTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.ClientIdQuotaTest > 

[GitHub] kafka-site pull request #92: Redesign of Streams page - includes video & cus...

2017-10-06 Thread derrickdoo
Github user derrickdoo commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/92#discussion_r143322899
  
--- Diff: css/styles.css ---
@@ -1031,50 +1054,438 @@ nav .btn {
 justify-content: center;
 width: 268px;
 }
-
-.green_card{
+.green_card {
 background-color: #00b900;
 }
-
 .customer__card__label {
 color: #00;
 margin-top: 2.4rem;
 display: flex;
 }
-
 @media only screen and (max-width: 1109px) {
 .customer__cards {
 height: 35rem;
-  
-  }
+}
 .customer_cards_2 {
- height: 45rem;
-}   
-
+height: 45rem;
+}
 }
-
 @media only screen and (max-width: 1023px) {
+.customer__cards {
+height: 65rem;
+flex-direction: column;
+}
+.customer_cards_2 {
+height: 75rem;
+flex-direction: column;
+margin-top: 5.2rem;
+}
+.customer-right {
+margin-right: 2rem;
+}
+.customer_cards_2 {
+margin-top: 0rem;
+}
+}
+.customer-title {
+margin-top: 62px;
+margin-bottom: 4.2rem;
+}
+}
+/* Streams page - adding video & customer logos */
 
-.customer__cards{
- height: 65rem;
- flex-direction: column;
-   }
-   .customer_cards_2 {
- height: 75rem;
- flex-direction: column;
-  margin-top: 5.2rem;
-   }
-.customer-right{
- margin-right:2rem;
-}
-   .customer_cards_2{
- margin-top: 0rem;
+.streams-menu ul {
+list-style-type: none;
+margin: 0;
+padding: 0;
+}
+.streams-menu {
+padding-left: 0px;
+width: 90rem;
+}
+.streams-menu li .active-menu-item {
+color: #000!important;
+padding-bottom: 7px;
+}
+.streams-menu li .active-menu-item {
+width: 108px;
+height: 2px;
+border-bottom: solid 4px #00;
+}
+.streams-menu li a {
+color: #8c;
+margin-top: 16px;
+}
+.streams-menu li {
+display: inline;
+height: 28px;
+font-family: Roboto;
+font-size: 15px;
+line-height: 1.87;
+text-align: left;
+color: #00;
+margin-right: 30px;
+text-transform: uppercase;
+}
+.video__series__grid {
+width: 100%;
+display: -webkit-flex;
+/* Safari */
+
+display: flex;
+margin-bottom: 58px;
+/*flex-direction: row;*/
+}
+.video__series__grid div {
+-webkit-flex: 1;
+/* Safari 6.1+ */
+
+-ms-flex: 1;
+/* IE 10 */
+
+flex: 1;
+}
+.video-list li {
+display: list-item;
+font-family: Roboto;
+font-size: 15px;
+line-height: 2.67;
+text-align: left;
+color: #d8d8d8;
+text-transform: capitalize;
+}
+.mobile-video-list li {
+display: inline-block;
+font-family: Roboto;
+font-size: 15px;
+line-height: 2.67;
+text-align: left;
+color: #d8d8d8;
+text-transform: capitalize;
+}
+.mobile-video-list .active {
+color: #000;
+}
+.mobile-video-list .active:before {
+background-color: #000;
+border: solid 2px #000;
+}
+.video-list .active {
+color: #000;
+}
+.video-list .active:before {
+background-color: #000;
+border: solid 2px #000;
+}
+@media only screen and (min-width: 1126px) {
+.mobile-video-list {
+display: none;
+}
+}
+@media only screen and (max-width: 1125px) {
+.video__series_grid {
+flex-direction: column;
+}
+.video__block h3 {
+display: none;
 }
+.video-list {
+display: none;
+}
+.mobile-video-list {
+display: block;
+}
+}
+ul.video-list,
+ul.mobile-video-list {
+list-style-type: none;
+/* Setup the counter. */
+
+counter-reset: num;
+padding-left: 0px;
+}
+.video-list li,
+.mobile-video-list li {
+margin-bottom: 1rem;
+}
+.video-list li:before,
+.mobile-video-list li:before {
+/* Advance the number. */
+
+counter-increment: num;
+/* Use the counter number as content. */
+
+content: counter(num);
+color: #fff;
+background-color: #d8d8d8;
+width: 50px;
+border-radius: 50%;
+padding: 5px 10px;
+margin-right: .8rem;
+}
+.grid__item__customer__description {
+margin: 0 2rem 2rem;
+padding-top: 0rem;
+}
+.stream__text {

[GitHub] kafka-site pull request #92: Redesign of Streams page - includes video & cus...

2017-10-06 Thread derrickdoo
Github user derrickdoo commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/92#discussion_r143322885
  

[GitHub] kafka-site issue #92: Redesign of Streams page - includes video & customer l...

2017-10-06 Thread derrickdoo
Github user derrickdoo commented on the issue:

https://github.com/apache/kafka-site/pull/92
  
@manjuapu to fix the video clipping issue you need to switch over to 
relative sizing when your viewport gets smaller than the full-size embed 
(525px) plus the total left/right padding/margin (20px) on your content 
area, i.e. anything below 545px. Here's the CSS you need:

`
@media only screen and (max-width: 545px) {
  .yt_series {
width: 100%;
  }
}
`

There are also a few more tweaks you should make to get things synced up 
with the design comps.

1. Remove the default borders on your video embeds:

`
.yt_series {
  border: none;
}
` 

2. At 1125px wide, you're folding the video selector items down to just the 
1-4 clickable bullets below the video embed. You should center the video and 
clickable bullets to be consistent with the rest of the layout:

`
@media only screen and (max-width: 1125px) {
  .yt_series {
    margin: 0 auto;
  }
  .video__list {
    text-align: center;
  }
}
`

Last thing: just for readability, I'd consider grouping all your media 
queries together. For instance, you're using 

`
@media only screen and (max-width: 1125px)
`

in 4 places in styles.css. Just group all that stuff up; see the sketch below.
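
A grouped block might end up looking something like this. This is just a 
sketch that merges the rules already in styles.css with the centering tweaks 
above; double-check the declarations against your other queries:

`
@media only screen and (max-width: 1125px) {
  .video__series__grid {
    flex-direction: column;
  }
  .video__block h3,
  .video-list {
    display: none;
  }
  .mobile-video-list {
    display: block;
  }
  .yt_series {
    margin: 0 auto;
  }
  .video__list {
    text-align: center;
  }
}
`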


---


Jenkins build is back to normal : kafka-trunk-jdk8 #2118

2017-10-06 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #4037: KAFKA-5541: Streams should not re-throw if suspend...

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4037


---


Build failed in Jenkins: kafka-1.0-jdk7 #16

2017-10-06 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5362; Follow up to Streams EOS system test

[wangguoz] MINOR: KIP-161 upgrade docs change

--
[...truncated 369.41 KB...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
PASSED

kafka.log.ProducerStateManagerTest > testTakeSnapshot STARTED

kafka.log.ProducerStateManagerTest > testTakeSnapshot PASSED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore STARTED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore PASSED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached STARTED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction 
STARTED

kafka.log.ProducerStateManagerTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #2117

2017-10-06 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5651; Follow-up: add with method to Materialized

[wangguoz] MINOR: log4j improvements on assigned tasks and store changelog 
reader

[jason] KAFKA-5829; Only delete producer snapshots before the recovery point

--
[...truncated 1.78 MB...]
org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable PASSED


[GitHub] kafka-site issue #92: Redesign of Streams page - includes video & customer l...

2017-10-06 Thread manjuapu
Github user manjuapu commented on the issue:

https://github.com/apache/kafka-site/pull/92
  
@derrickdoo  I have made the changes as per your feedback, but I still need 
help with the video clipping issue.


---


[GitHub] kafka pull request #4037: KAFKA-5541: Streams should not re-throw if suspend...

2017-10-06 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4037

KAFKA-5541: Streams should not re-throw if suspending/closing tasks fails



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-5541-dont-rethrow-on-suspend-or-close-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4037.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4037


commit 8267f5cb928fb7f9d928400e0da15d1a56c786c2
Author: Matthias J. Sax 
Date:   2017-10-07T01:05:27Z

KAFKA-5541: Streams should not re-throw if suspending/closing tasks fails




---


Build failed in Jenkins: kafka-1.0-jdk7 #15

2017-10-06 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5829; Only delete producer snapshots before the recovery point

--
[...truncated 376.42 KB...]

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] STARTED
ERROR: Could not install GRADLE_3_4_RC_2_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:887)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:419)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:627)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:592)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:391)
at hudson.scm.SCM.poll(SCM.java:408)
at hudson.model.AbstractProject._poll(AbstractProject.java:1394)
at hudson.model.AbstractProject.poll(AbstractProject.java:1297)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:594)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:640)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 

[GitHub] kafka pull request #4036: MINOR: KIP-161 upgrade docs change

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4036


---


[GitHub] kafka pull request #3542: KAFKA-5362: Follow up to Streams EOS system test

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3542


---


Jenkins build is back to normal : kafka-trunk-jdk8 #2116

2017-10-06 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #4023: KAFKA-5829: Only delete producer snapshots before ...

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4023


---


Jenkins build is back to normal : kafka-trunk-jdk7 #2867

2017-10-06 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #4036: MINOR: KIP-161 upgrade docs change

2017-10-06 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/4036

MINOR: KIP-161 upgrade docs change



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KMinor-kip-161-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4036.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4036


commit 12712180da63faf11d74ecd7a90977519bbbedb8
Author: Guozhang Wang 
Date:   2017-10-06T23:24:57Z

kip-161 docs change




---


[GitHub] kafka pull request #4031: MINOR: log4j improvements on assigned tasks and st...

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4031


---


[GitHub] kafka pull request #4009: KAFKA-5651: [FOLLOW-UP] add with method to Materia...

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4009


---


Build failed in Jenkins: kafka-trunk-jdk8 #2115

2017-10-06 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Allow schedule and commit in MockProcessorContext

--
[...truncated 3.67 MB...]
at 
org.apache.kafka.streams.integration.RegexSourceIntegrationTest$3.conditionMet(RegexSourceIntegrationTest.java:175)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:267)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254)
at 
org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenCreated(RegexSourceIntegrationTest.java:172)

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED


[GitHub] kafka pull request #4022: KAFKA-6015: Fix NullPointerException in RecordAccu...

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4022


---


Build failed in Jenkins: kafka-trunk-jdk7 #2866

2017-10-06 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5547; Return TOPIC_AUTHORIZATION_FAILED error if no describe

--
[...truncated 371.34 KB...]

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction STARTED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction PASSED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged STARTED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] PASSED


[jira] [Created] (KAFKA-6020) Broker side filtering

2017-10-06 Thread Pavel Micka (JIRA)
Pavel Micka created KAFKA-6020:
--

 Summary: Broker side filtering
 Key: KAFKA-6020
 URL: https://issues.apache.org/jira/browse/KAFKA-6020
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Pavel Micka


Currently, it is not possible to filter messages on the broker side. 
Broker-side filtering is convenient for filters with very low selectivity 
(one message in a few thousand). In my case it means transferring several GB 
of data to the consumer, throwing it away, taking one message, and doing it 
all again...

While I understand that filtering by message body is not feasible (for 
performance reasons), I propose to filter just by message key prefix. This can 
be achieved even without any deserialization, as the prefix to be matched can 
be passed as a byte array (hence the broker would just do an array prefix 
compare).
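
A broker-side check like this is just a byte-wise comparison of the key's 
leading bytes. A minimal sketch of the idea in Java (a hypothetical helper, 
not existing Kafka code):

    // Hypothetical broker-side helper: match a record's raw key bytes
    // against a filter prefix without deserializing anything.
    static boolean keyMatchesPrefix(byte[] key, byte[] prefix) {
        if (key == null || key.length < prefix.length)
            return false;
        for (int i = 0; i < prefix.length; i++) {
            if (key[i] != prefix[i])
                return false;
        }
        return true;
    }

The broker could evaluate this per record while serving a fetch, so only 
matching messages would be transferred to the consumer.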




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk7 #2865

2017-10-06 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Allow schedule and commit in MockProcessorContext

--
[...truncated 371.59 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 

[jira] [Resolved] (KAFKA-5547) Return topic authorization failed if no topic describe access

2017-10-06 Thread Jason Gustafson (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Gustafson resolved KAFKA-5547.

    Resolution: Fixed
Fix Version/s: (was: 1.1.0)
               1.0.0

Issue resolved by pull request 3924
[https://github.com/apache/kafka/pull/3924]

> Return topic authorization failed if no topic describe access
> -
>
> Key: KAFKA-5547
> URL: https://issues.apache.org/jira/browse/KAFKA-5547
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Manikumar
>  Labels: security, usability
> Fix For: 1.0.0
>
>
> We previously made a change to several of the request APIs to return 
> UNKNOWN_TOPIC_OR_PARTITION if the principal does not have Describe access to 
> the topic. The thought was to avoid leaking information about which topics 
> exist. The problem with this is that a client which sees this error will just 
> keep retrying because it is usually treated as retriable. It seems, however, 
> that we could return TOPIC_AUTHORIZATION_FAILED instead and still avoid 
> leaking information as long as we ensure that the Describe authorization 
> check comes before the topic existence check. This would avoid the ambiguity 
> on the client.
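
A sketch of the check ordering the description calls for (illustrative names, 
not the actual broker code):

    // Authorize Describe before the existence check: an unauthorized client
    // gets TOPIC_AUTHORIZATION_FAILED whether or not the topic exists, so
    // topic existence is not leaked, and the error is not endlessly retried.
    if (!authorizer.authorize(session, Operation.DESCRIBE, topic))
        return Errors.TOPIC_AUTHORIZATION_FAILED;
    if (!metadataCache.contains(topic))
        return Errors.UNKNOWN_TOPIC_OR_PARTITION;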



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3924: KAFKA-5547: Return TOPIC_AUTHORIZATION_FAILED erro...

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3924


---


Re: [DISCUSS] URIs on Producer and Consumer

2017-10-06 Thread Clebert Suconic
> >
> >
> > Although I can't start a KIP... again (I've lost count now). Can
> > someone grant me authorization on the wiki page so I can create the
> > page and start a proper discussion there?
>
> Hmm, I don't have permissions to add you.  admins?




Perhaps I should email the list with a different subject?

I was just following the contributing page, which says I should ask for
permissions here before creating the page from the KIP template.





>
> regards,
> Colin
>
-- 
Clebert Suconic


Re: [DISCUSS] KIP-207: Offsets returned by ListOffsetsResponse should be monotonically increasing even during a partition leader change

2017-10-06 Thread Tom Bentley
Thanks Colin, it makes sense now; it was the HWM part I was missing.

Cheers,

Tom

On 6 Oct 2017 6:44 pm, "Colin McCabe"  wrote:

On Thu, Oct 5, 2017, at 12:06, Tom Bentley wrote:
> Hi Colin,
>
> Is it really true that "the period when the offset is unavailable should
> be brief"? I'm thinking about a producer with acks=1, so the old leader
> returns the ProduceResponse immediately and then is replaced before it
> can send a FetchResponse to any followers. The new leader is then waiting
> for more messages from producers in order for its high watermark to
> increase (because its log doesn't have the original messages in it, so
> its HW can't catch up with this). This wait could be arbitrarily long.

Hi Tom,

As I understand it, the sequence of events you are proposing is this:

1. Producer with acks=1 sends message batch B to node 1 (the current leader).
2. Node 1 fails before any replication can take place.
3. Node 2 becomes the leader.

In this scenario, Node 2's log end offset (LEO) does not include message
batch B.  So there is no wait (or at least, no wait that is due to batch
B).

Also, Node 1 cannot advance its high water mark (HWM) until the replicas
have caught up.  So the HWM never goes backwards.  Batch B simply
disappears without a trace -- no consumers ever were able to consume it,
and it never advanced the partition HWM.  That's life with acks=1.
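
For reference, the acknowledgement mode under discussion is just a producer 
config; a minimal sketch (broker address and serializers are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    // acks=1: the leader responds as soon as it appends locally, without
    // waiting for replication, so an acked batch can vanish on failover.
    props.put(ProducerConfig.ACKS_CONFIG, "1");
    // acks=all makes the leader wait for the in-sync replicas instead,
    // trading latency for durability.
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);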

cheers,
Colin

>
> I rather suspect this isn't a problem really and that I misunderstand the
> precise details of the protocol, but it would be beneficial to me to
> discover my misconceptions.
>
> Thanks,
>
> Tom
>
>
>
> On 5 October 2017 at 19:23, Colin McCabe  wrote:
>
> > Hi all,
> >
> > I created a KIP for discussion about fixing a corner case in
> > ListOffsetsResponse.  Check it out at:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 207%3A+Offsets+returned+by+ListOffsetsResponse+should+be+
> > monotonically+increasing+even+during+a+partition+leader+change
> >
> > cheers,
> > Colin
> >


Build failed in Jenkins: kafka-trunk-jdk8 #2114

2017-10-06 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-5953: Register all jdbc drivers available in plugin and class

--
[...truncated 1.77 MB...]

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic STARTED


Build failed in Jenkins: kafka-trunk-jdk7 #2864

2017-10-06 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-5953: Register all jdbc drivers available in plugin and class

--
[...truncated 394.79 KB...]
kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId PASSED

kafka.coordinator.transaction.ProducerIdManagerTest > testExceedProducerIdLimit 
STARTED

kafka.coordinator.transaction.ProducerIdManagerTest > testExceedProducerIdLimit 
PASSED

kafka.coordinator.transaction.ProducerIdManagerTest > testGetProducerId STARTED

kafka.coordinator.transaction.ProducerIdManagerTest > testGetProducerId PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSaveForLaterWhenLeaderUnknownButNotAvailable STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSaveForLaterWhenLeaderUnknownButNotAvailable PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateEmptyMapWhenNoRequestsOutstanding STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateEmptyMapWhenNoRequestsOutstanding PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCreateMetricsOnStarting STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCreateMetricsOnStarting PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateRequestPerPartitionPerBroker STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateRequestPerPartitionPerBroker PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRemoveMarkersForTxnPartitionWhenPartitionEmigrated STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRemoveMarkersForTxnPartitionWhenPartitionEmigrated PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSkipSendMarkersWhenLeaderNotFound STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSkipSendMarkersWhenLeaderNotFound PASSED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
STARTED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
PASSED

kafka.coordinator.transaction.TransactionLogTest > 
shouldThrowExceptionWriteInvalidTxn STARTED

kafka.coordinator.transaction.TransactionLogTest > 
shouldThrowExceptionWriteInvalidTxn PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError PASSED

[GitHub] kafka pull request #3992: MINOR: Allow schedule and commit in MockProcessorC...

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3992


---


[GitHub] kafka-site issue #92: Redesign of Streams page - includes video & customer l...

2017-10-06 Thread derrickdoo
Github user derrickdoo commented on the issue:

https://github.com/apache/kafka-site/pull/92
  
@manjuapu I'm running your branch locally and I noticed some layout/display 
issues that should be sorted out.

DESKTOP:
1. Clipping content on the right at certain browser widths:

![image](https://user-images.githubusercontent.com/271961/31293205-441f64f0-aa8b-11e7-9b5d-ba46ac23d0bb.png)

2. Unnecessary padding appears to the left of the main content area at 
certain browser widths:

![image](https://user-images.githubusercontent.com/271961/31293400-f18ad66a-aa8b-11e7-893c-51ca2d5b652c.png)

3. Horizontal nav should keep the same color scheme when you vertically 
scroll down the page. Can you just apply a white background with maybe 10px of 
padding-bottom to the nav to make it appear as if content is scrolling 
underneath the nav?

![image](https://user-images.githubusercontent.com/271961/31293485-322a7284-aa8c-11e7-8dd0-7d47fb3880aa.png)

MOBILE:
1. Content is getting clipped on mobile

![image](https://user-images.githubusercontent.com/271961/31293522-539e260e-aa8c-11e7-8d6e-e249ffd0a2ba.png)

2. Video embed is getting clipped on mobile

![image](https://user-images.githubusercontent.com/271961/31293592-7fa78c5e-aa8c-11e7-9eed-8e7f1450a51a.png)

3. "write your first app" cta should be at the top of the list

![image](https://user-images.githubusercontent.com/271961/31293652-af2fc02c-aa8c-11e7-83f0-6c30ab473e68.png)

4. (not because of your changes) do you mind reducing the text size of the 
code examples on mobile?





---


[GitHub] kafka pull request #4030: KAFKA-5953: Register all jdbc drivers available in...

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4030


---


[jira] [Resolved] (KAFKA-5953) Connect classloader isolation may be broken for JDBC drivers

2017-10-06 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5953.
--
   Resolution: Fixed
Fix Version/s: 1.0.0
   1.1.0

Issue resolved by pull request 4030
[https://github.com/apache/kafka/pull/4030]

> Connect classloader isolation may be broken for JDBC drivers
> 
>
> Key: KAFKA-5953
> URL: https://issues.apache.org/jira/browse/KAFKA-5953
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Jiri Pechanec
>Assignee: Konstantine Karantasis
>Priority: Critical
> Fix For: 1.1.0, 1.0.0
>
>
> Let's suppose there are two connectors deployed:
> # one using a JDBC driver (the Debezium MySQL connector)
> # one using the PostgreSQL JDBC driver (the JDBC sink).
> Connector 1 is started first - it executes a statement
> {code:java}
> Connection conn = DriverManager.getConnection(url, props);
> {code}
> As a result, {{DriverManager}} calls {{ServiceLoader}} and searches for all 
> JDBC drivers. The Postgres driver from connector 2) is found and associated 
> with the classloader from connector 1).
> Connector 2 is started after that - it executes a statement
> {code:java}
> connection = DriverManager.getConnection(url, username, password);
> {code}
> DriverManager finds the driver that was loaded in the step before, but because 
> the classloader is now different (we now use classloader 2) it refuses to 
> load the class and no JDBC driver is found.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
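
To make the failure mode above concrete, here is a minimal sketch (a hypothetical helper, not the fix merged in pull request 4030) of the usual workaround: resolve the driver through the connector's own classloader and call it directly, sidestepping DriverManager's caller-classloader check:

{code:java}
import java.sql.Connection;
import java.sql.Driver;
import java.util.Properties;

public class PluginJdbcConnect {
    // Resolve the driver class through the plugin's own classloader, so the
    // class that is loaded is the one this connector can actually see, then
    // call the driver directly instead of going through DriverManager.
    public static Connection connect(ClassLoader pluginLoader, String driverClass,
                                     String url, Properties props) throws Exception {
        Driver driver = (Driver) Class.forName(driverClass, true, pluginLoader)
                .getDeclaredConstructor().newInstance();
        return driver.connect(url, props);
    }
}
{code}

The merged fix takes the registration approach named in the commit message instead; the sketch only illustrates why the classloader used for resolution matters.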


[jira] [Created] (KAFKA-6019) Sentry permissions bug on CDH

2017-10-06 Thread Jorge Machado (JIRA)
Jorge Machado created KAFKA-6019:


 Summary: Sentry permissions bug on CDH
 Key: KAFKA-6019
 URL: https://issues.apache.org/jira/browse/KAFKA-6019
 Project: Kafka
  Issue Type: Bug
Reporter: Jorge Machado


Hello guys, 
I think I found a bug with Sentry + SASL + Kafka on CDH. 

Please check https://issues.apache.org/jira/browse/KAFKA-6017

Thanks



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-207: Offsets returned by ListOffsetsResponse should be monotonically increasing even during a partition leader change

2017-10-06 Thread Apurva Mehta
Thanks for the KIP Colin. That looks like a reasonable proposal.

On Thu, Oct 5, 2017 at 11:23 AM, Colin McCabe  wrote:

> Hi all,
>
> I created a KIP for discussion about fixing a corner case in
> ListOffsetsResponse.  Check it out at:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 207%3A+Offsets+returned+by+ListOffsetsResponse+should+be+
> monotonically+increasing+even+during+a+partition+leader+change
>
> cheers,
> Colin
>


Re: [DISCUSS] KIP-186: Increase offsets retention default to 7 days

2017-10-06 Thread Manikumar
looks like VOTE thread is *NOT* started for this KIP.

On Fri, Oct 6, 2017 at 11:23 PM, Manikumar 
wrote:

> looks like VOTE thread is started for this KIP.
>
>
> On Wed, Aug 16, 2017 at 5:39 PM, Stevo Slavić  wrote:
>
>> +1 for making consistent default log and offsets retention time.
>> I like Stephane's suggestion too, log retention override should override
>> offset retention too if not explicitly configured.
>>
>> Please consider additionally:
>> - introducing offsets.retention.hours config property
>> - syncing log and offsets retention.check.interval.ms, if there's no real
>> reason for the two to differ
>> -- consider making retention check interval by default (if not explicitly
>> configured) a fraction of retention time
>> - name all "offsets" configs with "offsets" prefix (now it's a mix of
>> singular/"offset" and plural/"offsets")
>>
>>
>> On Fri, Aug 11, 2017 at 2:01 AM, Guozhang Wang 
>> wrote:
>>
>> > +1 from me
>> >
>> > On Wed, Aug 9, 2017 at 9:40 AM, Jason Gustafson 
>> > wrote:
>> >
>> > > +1 on the bump to 7 days. Wanted to mention one minor point. The
>> > > OffsetCommit RPC still provides the ability to set the retention time
>> > from
>> > > the client, but we do not use it in the consumer. Should we consider
>> > adding
>> > > a consumer config to set this? Given the problems people had with the
>> old
>> > > default, such a config would probably have gotten a fair bit of use.
>> > Maybe
>> > > it's less necessary with the new default, but there may be situations
>> > where
>> > > you don't want to keep the offsets for too long. For example, the
>> console
>> > > consumer commits offsets with a generated group id. We might want to
>> set
>> > a
>> > > low retention time to keep it from filling the offset cache with
>> garbage
>> > > from such groups.
>> > >
>> >
>> > I agree with Jason here, but maybe itself deserves a separate KIP
>> > discussion.
>> >
>> >
>> > >
>> > > -Jason
>> > >
>> > > On Wed, Aug 9, 2017 at 5:24 AM, Sönke Liebau <
>> > > soenke.lie...@opencore.com.invalid> wrote:
>> > >
>> > > > Just had this create issues at a customer as well, +1
>> > > >
>> > > > On Wed, Aug 9, 2017 at 11:46 AM, Mickael Maison <
>> > > mickael.mai...@gmail.com>
>> > > > wrote:
>> > > >
>> > > > > Yes the current default is too short, +1
>> > > > >
>> > > > > On Wed, Aug 9, 2017 at 8:56 AM, Ismael Juma 
>> > wrote:
>> > > > > > Thanks for the KIP, +1 from me.
>> > > > > >
>> > > > > > Ismael
>> > > > > >
>> > > > > > On Wed, Aug 9, 2017 at 1:24 AM, Ewen Cheslack-Postava <
>> > > > e...@confluent.io
>> > > > > >
>> > > > > > wrote:
>> > > > > >
>> > > > > >> Hi all,
>> > > > > >>
>> > > > > >> I posted a simple new KIP for a problem we see with a lot of
>> > users:
>> > > > > >> KIP-186: Increase offsets retention default to 7 days
>> > > > > >>
>> > > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > > > > >> 186%3A+Increase+offsets+retention+default+to+7+days
>> > > > > >>
>> > > > > >> Note that in addition to the KIP text itself, the linked JIRA
>> > > already
>> > > > > >> existed and has a bunch of discussion on the subject.
>> > > > > >>
>> > > > > >> -Ewen
>> > > > > >>
>> > > > >
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > Sönke Liebau
>> > > > Partner
>> > > > Tel. +49 179 7940878
>> > > > OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel -
>> Germany
>> > > >
>> > >
>> >
>> >
>> >
>> > --
>> > -- Guozhang
>> >
>>
>
>
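
For reference, the value under discussion maps to the broker-side property below; a sketch assuming it is set in server.properties (7 days = 7 * 24 * 60 = 10080 minutes):

{code}
offsets.retention.minutes=10080
{code}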


Re: [DISCUSS] KIP-186: Increase offsets retention default to 7 days

2017-10-06 Thread Ted Yu
+1 on the KIP.

bq. introducing offsets.retention.hours config property

Introducing it would probably cause confusion among users, due to the
existing minutes config.

On Fri, Oct 6, 2017 at 10:53 AM, Manikumar 
wrote:

> looks like VOTE thread is started for this KIP.
>
> On Wed, Aug 16, 2017 at 5:39 PM, Stevo Slavić  wrote:
>
> > +1 for making consistent default log and offsets retention time.
> > I like Stephane's suggestion too, log retention override should override
> > offset retention too if not explicitly configured.
> >
> > Please consider additionally:
> > - introducing offsets.retention.hours config property
> > - syncing log and offsets retention.check.interval.ms, if there's no
> real
> > reason for the two to differ
> > -- consider making retention check interval by default (if not explicitly
> > configured) a fraction of retention time
> > - name all "offsets" configs with "offsets" prefix (now it's a mix of
> > singular/"offset" and plural/"offsets")
> >
> >
> > On Fri, Aug 11, 2017 at 2:01 AM, Guozhang Wang 
> wrote:
> >
> > > +1 from me
> > >
> > > On Wed, Aug 9, 2017 at 9:40 AM, Jason Gustafson 
> > > wrote:
> > >
> > > > +1 on the bump to 7 days. Wanted to mention one minor point. The
> > > > OffsetCommit RPC still provides the ability to set the retention time
> > > from
> > > > the client, but we do not use it in the consumer. Should we consider
> > > adding
> > > > a consumer config to set this? Given the problems people had with the
> > old
> > > > default, such a config would probably have gotten a fair bit of use.
> > > Maybe
> > > > it's less necessary with the new default, but there may be situations
> > > where
> > > > you don't want to keep the offsets for too long. For example, the
> > console
> > > > consumer commits offsets with a generated group id. We might want to
> > set
> > > a
> > > > low retention time to keep it from filling the offset cache with
> > garbage
> > > > from such groups.
> > > >
> > >
> > > I agree with Jason here, but maybe itself deserves a separate KIP
> > > discussion.
> > >
> > >
> > > >
> > > > -Jason
> > > >
> > > > On Wed, Aug 9, 2017 at 5:24 AM, Sönke Liebau <
> > > > soenke.lie...@opencore.com.invalid> wrote:
> > > >
> > > > > Just had this create issues at a customer as well, +1
> > > > >
> > > > > On Wed, Aug 9, 2017 at 11:46 AM, Mickael Maison <
> > > > mickael.mai...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Yes the current default is too short, +1
> > > > > >
> > > > > > On Wed, Aug 9, 2017 at 8:56 AM, Ismael Juma 
> > > wrote:
> > > > > > > Thanks for the KIP, +1 from me.
> > > > > > >
> > > > > > > Ismael
> > > > > > >
> > > > > > > On Wed, Aug 9, 2017 at 1:24 AM, Ewen Cheslack-Postava <
> > > > > e...@confluent.io
> > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > >> Hi all,
> > > > > > >>
> > > > > > >> I posted a simple new KIP for a problem we see with a lot of
> > > users:
> > > > > > >> KIP-186: Increase offsets retention default to 7 days
> > > > > > >>
> > > > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > >> 186%3A+Increase+offsets+retention+default+to+7+days
> > > > > > >>
> > > > > > >> Note that in addition to the KIP text itself, the linked JIRA
> > > > already
> > > > > > >> existed and has a bunch of discussion on the subject.
> > > > > > >>
> > > > > > >> -Ewen
> > > > > > >>
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Sönke Liebau
> > > > > Partner
> > > > > Tel. +49 179 7940878
> > > > > OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel -
> Germany
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


Re: [DISCUSS] URIs on Producer and Consumer

2017-10-06 Thread Colin McCabe
On Fri, Oct 6, 2017, at 10:12, Clebert Suconic wrote:
> > As I said before, a connection string might be a good idea.  A URI, no.
> 
> 
> Same purpose.. I'm fine with that...

Sounds good to me.  You might want to discuss this on the librdkafka
mailing list as well, to see if there are any ideas that can be shared.

> 
> 
> Although I can't start a KIP... again (lost count now). Can
> someone grant me authorization on the wiki page so I can create the
> page and start a proper discussion there?

Hmm, I don't have permissions to add you.  admins?

regards,
Colin


Re: [DISCUSS] KIP-186: Increase offsets retention default to 7 days

2017-10-06 Thread Manikumar
looks like VOTE thread is started for this KIP.

On Wed, Aug 16, 2017 at 5:39 PM, Stevo Slavić  wrote:

> +1 for making consistent default log and offsets retention time.
> I like Stephane's suggestion too, log retention override should override
> offset retention too if not explicitly configured.
>
> Please consider additionally:
> - introducing offsets.retention.hours config property
> - syncing log and offsets retention.check.interval.ms, if there's no real
> reason for the two to differ
> -- consider making retention check interval by default (if not explicitly
> configured) a fraction of retention time
> - name all "offsets" configs with "offsets" prefix (now it's a mix of
> singular/"offset" and plural/"offsets")
>
>
> On Fri, Aug 11, 2017 at 2:01 AM, Guozhang Wang  wrote:
>
> > +1 from me
> >
> > On Wed, Aug 9, 2017 at 9:40 AM, Jason Gustafson 
> > wrote:
> >
> > > +1 on the bump to 7 days. Wanted to mention one minor point. The
> > > OffsetCommit RPC still provides the ability to set the retention time
> > from
> > > the client, but we do not use it in the consumer. Should we consider
> > adding
> > > a consumer config to set this? Given the problems people had with the
> old
> > > default, such a config would probably have gotten a fair bit of use.
> > Maybe
> > > it's less necessary with the new default, but there may be situations
> > where
> > > you don't want to keep the offsets for too long. For example, the
> console
> > > consumer commits offsets with a generated group id. We might want to
> set
> > a
> > > low retention time to keep it from filling the offset cache with
> garbage
> > > from such groups.
> > >
> >
> > I agree with Jason here, but maybe itself deserves a separate KIP
> > discussion.
> >
> >
> > >
> > > -Jason
> > >
> > > On Wed, Aug 9, 2017 at 5:24 AM, Sönke Liebau <
> > > soenke.lie...@opencore.com.invalid> wrote:
> > >
> > > > Just had this create issues at a customer as well, +1
> > > >
> > > > On Wed, Aug 9, 2017 at 11:46 AM, Mickael Maison <
> > > mickael.mai...@gmail.com>
> > > > wrote:
> > > >
> > > > > Yes the current default is too short, +1
> > > > >
> > > > > On Wed, Aug 9, 2017 at 8:56 AM, Ismael Juma 
> > wrote:
> > > > > > Thanks for the KIP, +1 from me.
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Wed, Aug 9, 2017 at 1:24 AM, Ewen Cheslack-Postava <
> > > > e...@confluent.io
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > >> Hi all,
> > > > > >>
> > > > > >> I posted a simple new KIP for a problem we see with a lot of
> > users:
> > > > > >> KIP-186: Increase offsets retention default to 7 days
> > > > > >>
> > > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > >> 186%3A+Increase+offsets+retention+default+to+7+days
> > > > > >>
> > > > > >> Note that in addition to the KIP text itself, the linked JIRA
> > > already
> > > > > >> existed and has a bunch of discussion on the subject.
> > > > > >>
> > > > > >> -Ewen
> > > > > >>
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Sönke Liebau
> > > > Partner
> > > > Tel. +49 179 7940878
> > > > OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: [DISCUSS] KIP-207: Offsets returned by ListOffsetsResponse should be monotonically increasing even during a partition leader change

2017-10-06 Thread Colin McCabe
On Thu, Oct 5, 2017, at 12:06, Tom Bentley wrote:
> Hi Colin,
> 
> Is it really true that "the period when the offset is unavailable should
> be
> brief"? I'm thinking about a producer with acks=1, so the old leader
> returns the ProduceResponse immediately and then is replaced before it
> can send a FetchResponse to any followers. The new leader is then waiting for
> more messages from producers in order for its high watermark to increase
> (because its log doesn't have the original messages in it, so its HW can't
> catch up with this). This wait could be arbitrarily long.

Hi Tom,

As I understand it, the sequence of events you are proposing is this:

1. Producer with acks=1 sends message batch B to node 1 (current
leader).
2. Node 1 fails before any replication can take place
3. Node 2 becomes the leader.

In this scenario, Node 2's log end offset (LEO) does not include message
batch B.  So there is no wait (or at least, no wait that is due to batch
B).

Also, Node 1 cannot advance its high water mark (HWM) until the replicas
have caught up.  So the HWM never goes backwards.  Batch B simply
disappears without a trace -- no consumers ever were able to consume it,
and it never advanced the partition HWM.  That's life with acks=1.

cheers,
Colin

> 
> I rather suspect this isn't a problem really and that I misunderstand the
> precise details of the protocol, but it would be beneficial to me to
> discover my misconceptions.
> 
> Thanks,
> 
> Tom
> 
> 
> 
> On 5 October 2017 at 19:23, Colin McCabe  wrote:
> 
> > Hi all,
> >
> > I created a KIP for discussion about fixing a corner case in
> > ListOffsetsResponse.  Check it out at:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 207%3A+Offsets+returned+by+ListOffsetsResponse+should+be+
> > monotonically+increasing+even+during+a+partition+leader+change
> >
> > cheers,
> > Colin
> >
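
To make the acks trade-off above concrete, a minimal producer sketch (hypothetical topic and hosts); with acks=1 the leader acknowledges before any replication, so a batch can be lost on leader failover without ever advancing the high watermark, while acks=all waits for the in-sync replicas:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AcksExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "host1:9092,host2:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // acks=1: ack from the leader only; a batch can vanish on leader change
        // without ever advancing the HWM. Use "all" to wait for the ISR instead.
        props.put("acks", "1");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "key", "value"));
        }
    }
}
{code}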


Re: [DISCUSS] URIs on Producer and Consumer

2017-10-06 Thread Colin McCabe
On Thu, Oct 5, 2017, at 13:33, Michael Pearce wrote:
> To me, this is a lot more in line with many other systems connections, to
> have the ability to have a single connection string / uri, is this really
> that left field suggesting or wanting this?
> 
> If anything this bring kafka more standardised approach imo, to have a
> unified resource identifier, protocol name and a set schema for that.
> 
> e.g.
> Database connection strings like
> 
> oracle:
> jdbc:oracle:thin:@(description=(address_list=
>(address=(protocol=tcp)(port=1521)(host=prodHost)))
> (connect_data=(INSTANCE_NAME=ORCL)))

Hmm.  That isn't a URI, though, right?  So adopting URIs doesn't help us
integrate with JDBC.  In any case, since Kafka is not a database, it is
a little unclear what better integration with JDBC would look like. 
Perhaps that is worth thinking about at some point, but it seems
unrelated to this URI discussion.

> On 05/10/2017, 20:10, "Clebert Suconic" 
> wrote:
> 
> On Thu, Oct 5, 2017 at 2:20 PM, Colin McCabe 
> wrote:
> > We used URIs as file paths in Hadoop.  I think it was a mistake, for a
> > few different reasons.
> >
> > URIs are actually very complex.  You probably know about scheme, host,
> > and port, but did you know about authority, user-info, query, fragment,
> > scheme-specific-part?  Do you know what they do in Hadoop?  The mapping
> > isn't obvious (and it wouldn't be obvious in Kafka either).
> 
> URIs are just a hashmap of key=string.. just like Properties...

You really can't treat a URI as a hashmap.  For one thing, the scheme
and hostname parts are not optional.

You are probably thinking of the "query" part (the part after the
question mark).  This isn't a map either-- it's a sequence of
ampersand-separated key=value pairs.  The same key can appear multiple
times.  And you have to encode everything with RFC3986 "percent
encoding."

> The Consumer and Producer already take such a hashmap, and these
> values are easy to translate to boolean, integer, etc. We would just
> need to add such a mapping as part of this task. I don't see
> anything difficult there.

I don't object to having some kind of connection string that rolls up
all the configuration properties.  I just don't think it should be a
URI.

> >
> > When you flip back and forth between URIs and strings (and you
> > inevitably will do this, when serializing or sending things over the
> > wire), you run into tons of really hard problems.  Should you preserve
> > the "fragment" (the thing after the hash mark) for your URI, or not?  It
> > may not do anything now, but maybe it will do something later.  URIs
> > also have complex string escaping rules.  Parsing URIs is very messy,
> > especially when you start talking about non-Java programming languages.
> 
> 
> Why flip back and forth? URIs would generate the same HashMap that's
> being generated today.. I don't see any mess here.
> Besides... This would be an addition, not replacement...
> 
> And I'm talking only about the Java API now.

We have a lot of non-Java clients-- those should be part of the
discussion.

> 
> Again, All the properties on ProducerConfig and ConsumerConfig seems
> easy to be mapped as primitive types (String, numbers.. booleans).
> 
> Serialization shouldn't be a problem there. it would generate the
> same
> properties it's generated now.
> 
> >
> > URIs are designed for a world where you talk to a single host over a
> > single port.  That isn't the world distributed systems live in.  You
> > don't want your clients to fail to bootstrap because the single server
> > you specified is having a bad day, even when the other 8 servers are up.
> 
> I have seen a few projects using this style of URI; I would do the
> same here:
> 
> If you have multiple hosts:
> 
> KafkaConsumer consumer = new
> 
> KafkaConsumer("kafka:(kafka://host1:port,kafka://host2:port)?property1=value");

That's not a valid URI?

> 
> if you have a single host:
> KafkaConsumer consumer = new
> KafkaConsumer("kafka://host2:port?property1=value=value2");
> 
> 
> One example of an apache project using a similar approach is
> qpid-jms:
> 
> http://qpid.apache.org/releases/qpid-jms-0.25.0/docs/index.html#failover-configuration-options
> 
> 
> > The bottom line is that URIs are the wrong abstraction for the job.
> > They just don't express what we really want, and they introduce a lot of
> > complexity and ambiguity.
> 
> I have seen the opposite, to be honest: this has been simpler for me
> and users I know than using a HashMap. Users in my experience tend
> to write this faster.

Users tend to make mistakes when writing URIs.  For example, how do
you translate a filename with spaces and commas into a URI?  I had to
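
To make the escaping point concrete, a small sketch (hypothetical kafka:// scheme and host) of how java.net.URI actually treats such strings:

{code:java}
import java.net.URI;
import java.net.URLEncoder;

public class UriPitfalls {
    public static void main(String[] args) throws Exception {
        // The single-host form parses cleanly...
        URI ok = new URI("kafka://host2:9092?property1=value1");
        System.out.println(ok.getHost() + " query=" + ok.getQuery());

        // ...but a value containing spaces or commas must be encoded first.
        // URLEncoder applies form-style encoding (space becomes '+', comma
        // becomes %2C), one of several escaping conventions users mix up.
        String encoded = URLEncoder.encode("a value, with commas", "UTF-8");
        URI tricky = new URI("kafka://host2:9092?file=" + encoded);
        System.out.println(tricky.getRawQuery());
    }
}
{code}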

Re: [DISCUSS] URIs on Producer and Consumer

2017-10-06 Thread Clebert Suconic
> As I said before, a connection string might be a good idea.  A URI, no.


Same purpose.. I'm fine with that...


Although I can't start a KIP... again (lost count now). Can
someone grant me authorization on the wiki page so I can create the
page and start a proper discussion there?


[jira] [Resolved] (KAFKA-5916) Upgrade rocksdb dependency to 5.8

2017-10-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved KAFKA-5916.
---
Resolution: Duplicate

With KAFKA-5576

> Upgrade rocksdb dependency to 5.8
> -
>
> Key: KAFKA-5916
> URL: https://issues.apache.org/jira/browse/KAFKA-5916
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Ted Yu
>Priority: Minor
>
> Currently we use 5.3.6.
> The latest release is 5.8:
> https://github.com/facebook/rocksdb/releases
> We should upgrade to the latest rocksdb release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka-site pull request #92: Redesign of Streams page - includes video & cus...

2017-10-06 Thread manjuapu
GitHub user manjuapu opened a pull request:

https://github.com/apache/kafka-site/pull/92

Redesign of Streams page - includes video & customer logos

@derrickdoo Please review.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/manjuapu/kafka-site asf-site

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/92.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #92


commit 1e8a7f632ce9273e97432c5b32778b60065f5a96
Author: Manjula K 
Date:   2017-10-06T15:45:22Z

Adding google tracking file for youtube metrics

commit abda631d6cebe329ebbb045390489d7eb8d735f5
Author: Manjula K 
Date:   2017-10-06T16:24:39Z

Redesign of Streams page - includes video & customer logos




---


[GitHub] kafka-site issue #91: Adding google tracking file for youtube metrics

2017-10-06 Thread dguy
Github user dguy commented on the issue:

https://github.com/apache/kafka-site/pull/91
  
Merged to asf-site


---


[GitHub] kafka-site pull request #91: Adding google tracking file for youtube metrics

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/91


---


[GitHub] kafka-site pull request #91: Adding google tracking file for youtube metrics

2017-10-06 Thread manjuapu
GitHub user manjuapu opened a pull request:

https://github.com/apache/kafka-site/pull/91

Adding google tracking file for youtube metrics

@guozhangwang @dguy Please review this. Thanks!!

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/manjuapu/kafka-site asf-site

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/91.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #91


commit 1e8a7f632ce9273e97432c5b32778b60065f5a96
Author: Manjula K 
Date:   2017-10-06T15:45:22Z

Adding google tracking file for youtube metrics




---


[jira] [Reopened] (KAFKA-6017) Cannot get broker ids from Kafka using kafka-connect

2017-10-06 Thread Jorge Machado (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorge Machado reopened KAFKA-6017:
--

This seems not to be totally fixed.  Now I get:

{code}
[2017-10-06 12:13:18,509] WARN Error while fetching metadata with correlation 
id 1 : {connect-prod-offsets=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
[2017-10-06 12:13:18,612] WARN Error while fetching metadata with correlation 
id 2 : {connect-prod-offsets=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
[2017-10-06 12:13:18,717] WARN Error while fetching metadata with correlation 
id 3 : {connect-prod-offsets=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
[2017-10-06 12:13:18,820] WARN Error while fetching metadata with correlation 
id 4 : {connect-prod-offsets=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
[2017-10-06 12:13:18,924] WARN Error while fetching metadata with correlation 
id 5 : {connect-prod-offsets=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
[2017-10-06 12:13:19,029] WARN Error while fetching metadata with correlation 
id 6 : {connect-prod-offsets=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
{code}

I think this is Sentry in the background... but I'm not able to see any logs.
Ideas?

> Cannot get broker ids from Kafka using kafka-connect
> 
>
> Key: KAFKA-6017
> URL: https://issues.apache.org/jira/browse/KAFKA-6017
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
>Reporter: Jorge Machado
>
> Hi guys, 
> I'm using CDH Kafka 0.10.2.1-cp2 and Confluent 3.2.0 with Kerberos. 
> It seems that it cannot get the broker ids, and it adds them as -1, -2, etc. 
> On Debug mode I see : 
> Cluster(id = null, nodes = [host1:9092 (id: -2 rack: null), host2:9092 (id: 
> -3 rack: null), host3:9092 (id: -1 rack: null)], partitions = [])
> On Zookeeper I see: 
> ls /kafka-prod/brokers/ids
> [264, 265, 263]
> I'm using this command: 
> KAFKA_OPTS=-Djava.security.auth.login.config=/etc/kafka/connect_distributed_jaas.conf
>  connect-distributed /etc/kafka/connect-distributed.properties
> {code:java}
> [2017-10-06 08:47:52,078] DEBUG Recorded API versions for node -3: 
> (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 
> to 1 [usable: 1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 
> [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 
> [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 
> [usable: 2], OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 
> [usable: 0], JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], 
> LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], 
> DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], 
> SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], 
> CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0]) 
> (org.apache.kafka.clients.NetworkClient:558)
> {code}
> At the end I get this error: 
> {code}
> [main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started 
> o.e.j.s.ServletContextHandler@593e824f{/,null,AVAILABLE}
> [main] INFO org.eclipse.jetty.server.ServerConnector - Started 
> ServerConnector@2cab9998{HTTP/1.1}{host:8083}
> [main] INFO org.eclipse.jetty.server.Server - Started @1469ms
> [main] INFO org.apache.kafka.connect.runtime.rest.RestServer - REST server 
> listening at http://HOST:8083/, advertising URL http://HOST:8083/
> [main] INFO org.apache.kafka.connect.runtime.Connect - Kafka Connect started
> [DistributedHerder] ERROR 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder - Uncaught 
> exception in herder work thread, exiting:
> org.apache.kafka.connect.errors.ConnectException: Could not look up partition 
> metadata for offset backing store topic in allotted period. This could 
> indicate a connectivity issue, unavailable topic partitions, or if this is 
> your first use of the topic it may have taken too long to create.
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:133)
>   at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:86)
>   at org.apache.kafka.connect.runtime.Worker.start(Worker.java:121)
>   at 
> org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:95)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:193)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> 

[GitHub] kafka pull request #4034: MINOR: Remove TLS renegotiation code

2017-10-06 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4034

MINOR: Remove TLS renegotiation code

This has been disabled since the start, and since
it has been removed in TLS 1.3, there are no plans to
ever support it.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka remove-tls-renegotiation-support

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4034.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4034


commit c99acab314d699ee69a46a9dd2fd0cd135832dd8
Author: Ismael Juma 
Date:   2017-10-06T10:39:00Z

MINOR: Remove TLS renegotiation code

This has been disabled since the start, and since
it has been removed in TLS 1.3, there are no plans to
ever support it.




---


[jira] [Resolved] (KAFKA-6017) Cannot get broker ids from Kafka using kafka-connect

2017-10-06 Thread Jorge Machado (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorge Machado resolved KAFKA-6017.
--
Resolution: Won't Fix

> Cannot get broker ids from Kafka using kafka-connect
> 
>
> Key: KAFKA-6017
> URL: https://issues.apache.org/jira/browse/KAFKA-6017
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
>Reporter: Jorge Machado
>
> Hi guys, 
> I'm using CDH Kafka 0.10.2.1-cp2 and Confluent 3.2.0 with Kerberos. 
> It seems that it cannot get the broker ids, and it adds them as -1, -2, etc. 
> On Debug mode I see : 
> Cluster(id = null, nodes = [host1:9092 (id: -2 rack: null), host2:9092 (id: 
> -3 rack: null), host3:9092 (id: -1 rack: null)], partitions = [])
> On Zookeeper I see: 
> ls /kafka-prod/brokers/ids
> [264, 265, 263]
> I'm using this command: 
> KAFKA_OPTS=-Djava.security.auth.login.config=/etc/kafka/connect_distributed_jaas.conf
>  connect-distributed /etc/kafka/connect-distributed.properties
> {code:java}
> [2017-10-06 08:47:52,078] DEBUG Recorded API versions for node -3: 
> (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 
> to 1 [usable: 1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 
> [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 
> [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 
> [usable: 2], OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 
> [usable: 0], JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], 
> LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], 
> DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], 
> SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], 
> CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0]) 
> (org.apache.kafka.clients.NetworkClient:558)
> {code}
> At the end I get this error: 
> {code}
> [main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started 
> o.e.j.s.ServletContextHandler@593e824f{/,null,AVAILABLE}
> [main] INFO org.eclipse.jetty.server.ServerConnector - Started 
> ServerConnector@2cab9998{HTTP/1.1}{host:8083}
> [main] INFO org.eclipse.jetty.server.Server - Started @1469ms
> [main] INFO org.apache.kafka.connect.runtime.rest.RestServer - REST server 
> listening at http://HOST:8083/, advertising URL http://HOST:8083/
> [main] INFO org.apache.kafka.connect.runtime.Connect - Kafka Connect started
> [DistributedHerder] ERROR 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder - Uncaught 
> exception in herder work thread, exiting:
> org.apache.kafka.connect.errors.ConnectException: Could not look up partition 
> metadata for offset backing store topic in allotted period. This could 
> indicate a connectivity issue, unavailable topic partitions, or if this is 
> your first use of the topic it may have taken too long to create.
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:133)
>   at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:86)
>   at org.apache.kafka.connect.runtime.Worker.start(Worker.java:121)
>   at 
> org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:95)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:193)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> [Thread-1] INFO org.apache.kafka.connect.runtime.Connect - Kafka Connect 
> stopping
> [Thread-1] INFO org.apache.kafka.connect.runtime.rest.RestServer - Stopping 
> REST server
> [Thread-2] INFO org.eclipse.jetty.server.ServerConnector - Stopped 
> ServerConnector@2cab9998{HTTP/1.1}{HOST:8083}
> {code}
> Any ideas? I think this is a bug in SASL_PLAINTEXT



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4033: KAFKA-6018: Make KafkaFuture.Future an interface

2017-10-06 Thread steven-aerts
GitHub user steven-aerts opened a pull request:

https://github.com/apache/kafka/pull/4033

KAFKA-6018: Make KafkaFuture.Future an interface

Changing KafkaFuture.Future and KafkaFuture.BiConsumer into interfaces makes
them functional interfaces, which makes them Java 8 lambda compatible.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/steven-aerts/kafka-1 KAFKA-6018

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4033.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4033


commit 6c630aff48954558b8ad6e2611cf0c269b879287
Author: Steven Aerts 
Date:   2017-10-06T10:16:41Z

KAFKA-6018 Make KafkaFuture.Future an interface

Changing KafkaFuture.Future and KafkaFuture.BiConsumer into interfaces makes
them functional interfaces, which makes them Java 8 lambda compatible.




---


[jira] [Created] (KAFKA-6018) Make KafkaFuture.Function java 8 lambda compatible

2017-10-06 Thread Steven Aerts (JIRA)
Steven Aerts created KAFKA-6018:
---

 Summary: Make KafkaFuture.Function java 8 lambda compatible
 Key: KAFKA-6018
 URL: https://issues.apache.org/jira/browse/KAFKA-6018
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Steven Aerts


KafkaFuture.Function is currently an empty public abstract class.

This means you cannot implement it as a Java lambda, and you end up with 
constructs like:

{code:java}
new KafkaFuture.Function<Set<String>, Object>() {
    @Override
    public Object apply(Set<String> strings) {
        return foo;
    }
}
{code}

I propose to define them as interfaces.
So this code can become in java 8:

{code:java}
strings -> foo
{code}

I know this change is backwards incompatible (extends becomes implements),
but {{KafkaFuture}} is marked as {{@InterfaceStability.Evolving}}, and
KafkaFuture states in its javadoc:
{quote}This will eventually become a thin shim on top of Java 8's 
CompletableFuture.{quote}

I think this change might be worth considering.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
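
A sketch of what the proposal amounts to, using a simplified stand-in type rather than the actual KafkaFuture API: once the type is a functional interface, the anonymous class above collapses to a lambda:

{code:java}
import java.util.Collections;
import java.util.Set;

public class LambdaExample {
    // Hypothetical simplified stand-in for KafkaFuture.Function as an interface.
    @FunctionalInterface
    interface Function<A, B> {
        B apply(A a);
    }

    public static void main(String[] args) {
        // The anonymous-class construct from the issue becomes a one-liner:
        Function<Set<String>, Object> f = strings -> strings.size();
        System.out.println(f.apply(Collections.singleton("foo")));
    }
}
{code}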


[jira] [Created] (KAFKA-6017) Cannot get broker ids from Kafka using kafka-connect

2017-10-06 Thread Jorge Machado (JIRA)
Jorge Machado created KAFKA-6017:


 Summary: Cannot get broker ids from Kafka using kafka-connect
 Key: KAFKA-6017
 URL: https://issues.apache.org/jira/browse/KAFKA-6017
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.2.0
Reporter: Jorge Machado


Hi guys, 

I'm using CDH Kafka 0.10.2.1-cp2 and Confluent 3.2.0 with Kerberos. 
It seems that it cannot get the broker ids, and it adds them as -1, -2, etc. 

On Debug mode I see : 
Cluster(id = null, nodes = [host1:9092 (id: -2 rack: null), host2:9092 (id: -3 
rack: null), host3:9092 (id: -1 rack: null)], partitions = [])
On Zookeeper I see: 

ls /kafka-prod/brokers/ids
[264, 265, 263]

I'm using this command: 
KAFKA_OPTS=-Djava.security.auth.login.config=/etc/kafka/connect_distributed_jaas.conf
 connect-distributed /etc/kafka/connect-distributed.properties

{code:java}
[2017-10-06 08:47:52,078] DEBUG Recorded API versions for node -3: (Produce(0): 
0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 to 1 [usable: 
1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 [usable: 0], 
StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], 
ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], 
OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 [usable: 0], 
JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], 
LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], 
DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], 
SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], 
CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0]) 
(org.apache.kafka.clients.NetworkClient:558)
{code}

At the end I get this error: 
{code}
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started 
o.e.j.s.ServletContextHandler@593e824f{/,null,AVAILABLE}
[main] INFO org.eclipse.jetty.server.ServerConnector - Started 
ServerConnector@2cab9998{HTTP/1.1}{host:8083}
[main] INFO org.eclipse.jetty.server.Server - Started @1469ms
[main] INFO org.apache.kafka.connect.runtime.rest.RestServer - REST server 
listening at http://HOST:8083/, advertising URL http://HOST:8083/
[main] INFO org.apache.kafka.connect.runtime.Connect - Kafka Connect started
[DistributedHerder] ERROR 
org.apache.kafka.connect.runtime.distributed.DistributedHerder - Uncaught 
exception in herder work thread, exiting:
org.apache.kafka.connect.errors.ConnectException: Could not look up partition 
metadata for offset backing store topic in allotted period. This could indicate 
a connectivity issue, unavailable topic partitions, or if this is your first 
use of the topic it may have taken too long to create.
at 
org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:133)
at 
org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:86)
at org.apache.kafka.connect.runtime.Worker.start(Worker.java:121)
at 
org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:95)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:193)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[Thread-1] INFO org.apache.kafka.connect.runtime.Connect - Kafka Connect 
stopping
[Thread-1] INFO org.apache.kafka.connect.runtime.rest.RestServer - Stopping 
REST server
[Thread-2] INFO org.eclipse.jetty.server.ServerConnector - Stopped 
ServerConnector@2cab9998{HTTP/1.1}{HOST:8083}
{code}

Any ideas? I think this is a bug in SASL_PLAINTEXT



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
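
For context, a JAAS file like the one passed via -Djava.security.auth.login.config above typically looks like the following sketch for SASL/GSSAPI (the principal and keytab path are placeholders, not taken from the report):

{code}
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/connect.keytab"
    principal="connect/host1.example.com@EXAMPLE.COM";
};
{code}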


[GitHub] kafka pull request #3519: KAFKA-5576: increase the rocksDB version to 5.5.1 ...

2017-10-06 Thread yussufsh
Github user yussufsh closed the pull request at:

https://github.com/apache/kafka/pull/3519


---