Jenkins build is back to normal : kafka-trunk-jdk9 #271

2017-12-20 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk9 #270

2017-12-20 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6126: Remove unnecessary topics created check

--
[...truncated 1.45 MB...]
kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges STARTED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > 

[GitHub] kafka pull request #4348: MINOR: Fix race condition in Streams EOS system te...

2017-12-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4348


---


[GitHub] kafka pull request #4349: KAFKA-6366 [WIP]: Fix stack overflow in consumer d...

2017-12-20 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/4349

KAFKA-6366 [WIP]: Fix stack overflow in consumer due to fast offset commits 
during coordinator disconnect

When the coordinator is marked unknown, we explicitly disconnect its 
connection and cancel pending requests. Currently the disconnect happens before 
the coordinator state is set to null, which means that callbacks which inspect 
the coordinator state will see it still as active. This can lead to further 
requests being sent. In pathological cases, the disconnect itself is not able 
to return because new requests are sent to the coordinator before the 
disconnect can complete, which leads to the stack overflow error. To fix the 
problem, I have reordered the disconnect to happen after the coordinator is set 
to null.

I have added a basic test case to verify that callbacks for in-flight or 
unsent requests see the coordinator as unknown, which prevents them from 
attempting to resend. We may need additional test cases after we determine 
whether this is in fact what was happening in the reported ticket.

Note that I have also included some minor cleanups which I noticed along 
the way.
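The ordering fix described in this PR can be sketched in Python pseudocode. All names below are illustrative stand-ins (the real consumer coordinator is Java); the point is only the reordering: clear the coordinator state before disconnecting, so cancellation callbacks observe it as unknown.

```python
class Coordinator:
    """Minimal sketch of the reordering described above (hypothetical names)."""

    def __init__(self):
        self.coordinator = "node-1"   # currently known coordinator
        self.pending_callbacks = []

    def coordinator_unknown(self):
        return self.coordinator is None

    def mark_coordinator_unknown(self):
        # The fix: clear the coordinator state FIRST, so any callback fired
        # while cancelling in-flight requests sees it as unknown and does not
        # enqueue a new request (which previously recursed until a
        # StackOverflowError during fast offset commits).
        self.coordinator = None
        self.disconnect_and_cancel_pending()

    def disconnect_and_cancel_pending(self):
        # Cancelling pending requests fires their callbacks synchronously.
        for cb in self.pending_callbacks:
            cb(self)
        self.pending_callbacks = []


resent = []
c = Coordinator()
# A commit callback that retries only if the coordinator still looks alive.
c.pending_callbacks.append(
    lambda coord: None if coord.coordinator_unknown() else resent.append(1))
c.mark_coordinator_unknown()
print(len(resent))  # 0: the callback saw the coordinator as unknown
```

With the old ordering (disconnect before clearing the state) the same callback would observe an active coordinator and resend, re-entering the disconnect path.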

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-6366

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4349.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4349


commit 488de3dca5be6111fd447980c8e79477259dc99a
Author: Jason Gustafson 
Date:   2017-12-18T18:53:38Z

KAFKA-6366 [WIP]: Fix stack overflow in consumer due to fast offset commits 
during coordinator disconnect




---


[GitHub] kafka pull request #4322: KAFKA-6126: Remove unnecessary topics created chec...

2017-12-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4322


---


Re: [VOTE] KIP-243: Make ProducerConfig and ConsumerConfig constructors public

2017-12-20 Thread Matthias J. Sax
It's tailored for internal usage. I think client constructors don't
benefit from accepting those config objects. We just want to be able to
access the default values for certain parameters.

From a user point of view, it's actually boilerplate code if you pass
in a config object instead of a plain Properties object because the
config object itself is immutable.

I actually created a JIRA to remove the constructors from KafkaStreams
that accept StreamsConfig for exactly this reason:
https://issues.apache.org/jira/browse/KAFKA-6386
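The boilerplate point can be illustrated with a small sketch. The class and field names below are hypothetical stand-ins for ProducerConfig/StreamsConfig, not the actual Kafka client API:

```python
# Sketch: why an immutable config object adds boilerplate for callers,
# while a public constructor still helps internal code read resolved defaults.
# All names are illustrative, not the real Kafka classes.

DEFAULTS = {"acks": "1", "linger.ms": "0"}

class ImmutableConfig:
    def __init__(self, props):
        merged = dict(DEFAULTS)
        merged.update(props)
        self._props = merged          # frozen after construction

    def get(self, key):
        return self._props[key]

# User code: a plain dict/Properties object can be built up incrementally...
props = {}
props["acks"] = "all"

# ...but an immutable config must be rebuilt from scratch to change anything,
# which is the extra boilerplate referred to above.
cfg = ImmutableConfig(props)

# Internal-code benefit: read the resolved default without duplicating it.
print(cfg.get("linger.ms"))  # "0", supplied by the defaults table
```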


-Matthias


On 12/20/17 3:33 PM, Jason Gustafson wrote:
> Hi Matthias,
> 
> Isn't it a little weird to make these constructors public but not also
> expose the corresponding client constructors that use them?
> 
> -Jason
> 
> On Tue, Dec 19, 2017 at 9:30 AM, Bill Bejeck  wrote:
> 
>> +1
>>
>> On Tue, Dec 19, 2017 at 12:09 PM, Guozhang Wang 
>> wrote:
>>
>>> +1
>>>
>>> On Tue, Dec 19, 2017 at 1:49 AM, Tom Bentley 
>>> wrote:
>>>
 +1

 On 18 December 2017 at 23:28, Vahid S Hashemian <
>>> vahidhashem...@us.ibm.com
>
 wrote:

> +1
>
> Thanks for the KIP.
>
> --Vahid
>
>
>
> From:   Ted Yu 
> To: dev@kafka.apache.org
> Date:   12/18/2017 02:45 PM
> Subject:Re: [VOTE] KIP-243: Make ProducerConfig and
 ConsumerConfig
> constructors public
>
>
>
> +1
>
> nit: via "copy and past" an 'e' is missing at the end.
>
> On Mon, Dec 18, 2017 at 2:38 PM, Matthias J. Sax <
>>> matth...@confluent.io>
> wrote:
>
>> Hi,
>>
>> I want to propose the following KIP:
>>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.
> apache.org_confluence_display_KAFKA_KIP-2D=DwIBaQ=jf_
> iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-
> kjJc7uSVcviKUc=JToRX4-HeVsRoOekIz18ht-YLMe-T21MttZTgbxB4ag=
> 6aZjPCc9e00raokVPKvx1BxwDOHyCuKNgtBXPMeoHy4=
>
>> 243%3A+Make+ProducerConfig+and+ConsumerConfig+constructors+public
>>
>>
>> This is a rather straight forward change, thus I skip the DISCUSS
>> thread and call for a vote immediately.
>>
>>
>> -Matthias
>>
>>
>
>
>
>
>

>>>
>>>
>>>
>>> --
>>> -- Guozhang
>>>
>>
> 



signature.asc
Description: OpenPGP digital signature




[GitHub] kafka pull request #4348: MINOR: Fix race condition in Streams EOS system te...

2017-12-20 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/4348

MINOR: Fix race condition in Streams EOS system test

We should start the process only within the `with` block; otherwise the 
bytes parameter would cause a race condition that results in false alarms of 
system test failures.
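Kafka's system tests are Python, so the fix pattern can be sketched generically: acquire the resource first, then start the process inside the `with` block so its lifetime is bounded by the resource's. The helper name and command below are hypothetical, not the actual system-test code:

```python
# Sketch of the safe pattern: if the process were started before the `with`
# block that owns its output capture, reads could race with process startup
# and produce spurious ("false alarm") test failures.
import subprocess
import tempfile

def run_checked(cmd):
    # Open the capture file first, then start the process inside the
    # `with` block, so the file outlives the process it captures.
    with tempfile.TemporaryFile() as out:
        proc = subprocess.Popen(cmd, stdout=out)
        proc.wait()
        out.seek(0)
        return out.read()

data = run_checked(["echo", "hello"])
print(data.decode().strip())  # hello
```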

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KMinor-fix-eos-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4348.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4348


commit 628b345c61017abaf3e51c3a753a0c2b3418a0ce
Author: Guozhang Wang 
Date:   2017-12-21T01:56:29Z

Fix race condition




---


Jenkins build is back to normal : kafka-trunk-jdk8 #2291

2017-12-20 Thread Apache Jenkins Server
See 




[GitHub] kafka-site pull request #113: HOTFIX: broken /10 docs

2017-12-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/113


---


[GitHub] kafka-site issue #113: HOTFIX: broken /10 docs

2017-12-20 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/113
  
Merged to asf-site.


---


[GitHub] kafka-site issue #113: HOTFIX: broken /10 docs

2017-12-20 Thread derrickdoo
Github user derrickdoo commented on the issue:

https://github.com/apache/kafka-site/pull/113
  
👍 👍 LGTM


---


Jenkins build is back to normal : kafka-trunk-jdk9 #268

2017-12-20 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-243: Make ProducerConfig and ConsumerConfig constructors public

2017-12-20 Thread Jason Gustafson
Hi Matthias,

Isn't it a little weird to make these constructors public but not also
expose the corresponding client constructors that use them?

-Jason

On Tue, Dec 19, 2017 at 9:30 AM, Bill Bejeck  wrote:

> +1
>
> On Tue, Dec 19, 2017 at 12:09 PM, Guozhang Wang 
> wrote:
>
> > +1
> >
> > On Tue, Dec 19, 2017 at 1:49 AM, Tom Bentley 
> > wrote:
> >
> > > +1
> > >
> > > On 18 December 2017 at 23:28, Vahid S Hashemian <
> > vahidhashem...@us.ibm.com
> > > >
> > > wrote:
> > >
> > > > +1
> > > >
> > > > Thanks for the KIP.
> > > >
> > > > --Vahid
> > > >
> > > >
> > > >
> > > > From:   Ted Yu 
> > > > To: dev@kafka.apache.org
> > > > Date:   12/18/2017 02:45 PM
> > > > Subject:Re: [VOTE] KIP-243: Make ProducerConfig and
> > > ConsumerConfig
> > > > constructors public
> > > >
> > > >
> > > >
> > > > +1
> > > >
> > > > nit: via "copy and past" an 'e' is missing at the end.
> > > >
> > > > On Mon, Dec 18, 2017 at 2:38 PM, Matthias J. Sax <
> > matth...@confluent.io>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I want to propose the following KIP:
> > > > >
> > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.
> > > > apache.org_confluence_display_KAFKA_KIP-2D=DwIBaQ=jf_
> > > > iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-
> > > > kjJc7uSVcviKUc=JToRX4-HeVsRoOekIz18ht-YLMe-T21MttZTgbxB4ag=
> > > > 6aZjPCc9e00raokVPKvx1BxwDOHyCuKNgtBXPMeoHy4=
> > > >
> > > > > 243%3A+Make+ProducerConfig+and+ConsumerConfig+constructors+public
> > > > >
> > > > >
> > > > > This is a rather straight forward change, thus I skip the DISCUSS
> > > > > thread and call for a vote immediately.
> > > > >
> > > > >
> > > > > -Matthias
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


[GitHub] kafka pull request #4342: KAFKA-4263: fix flaky test QueryableStateIntegrati...

2017-12-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4342


---


[jira] [Resolved] (KAFKA-4263) QueryableStateIntegrationTest.concurrentAccess is failing occasionally in jenkins builds

2017-12-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4263.
--
   Resolution: Fixed
Fix Version/s: (was: 0.10.1.1)
   1.0.1
   1.1.0

Issue resolved by pull request 4342
[https://github.com/apache/kafka/pull/4342]

> QueryableStateIntegrationTest.concurrentAccess is failing occasionally in 
> jenkins builds
> 
>
> Key: KAFKA-4263
> URL: https://issues.apache.org/jira/browse/KAFKA-4263
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Matthias J. Sax
> Fix For: 1.1.0, 1.0.1
>
>
> We are seeing occasional failures of this test in Jenkins; however, it isn't 
> failing when running locally (confirmed by multiple people). Needs 
> investigating.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka-site issue #113: HOTFIX: broken /10 docs

2017-12-20 Thread joel-hamill
Github user joel-hamill commented on the issue:

https://github.com/apache/kafka-site/pull/113
  
ping @guozhangwang 


---


[GitHub] kafka-site pull request #113: HOTFIX: broken /10 docs

2017-12-20 Thread joel-hamill
GitHub user joel-hamill opened a pull request:

https://github.com/apache/kafka-site/pull/113

HOTFIX: broken /10 docs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joel-hamill/kafka-site fix-broken-links

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/113.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #113


commit 93afd4eea6817d1d33b2ece2bf970e20657e7027
Author: Joel Hamill 
Date:   2017-12-20T22:51:58Z

HOTFIX: broken /10 docs




---


Build failed in Jenkins: kafka-trunk-jdk9 #267

2017-12-20 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-5647; Use KafkaZkClient in ReassignPartitionsCommand and

--
[...truncated 1.45 MB...]

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges STARTED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #2290

2017-12-20 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-5647; Use KafkaZkClient in ReassignPartitionsCommand and

--
[...truncated 405.82 KB...]
kafka.server.DelayedOperationTest > testRequestSatisfaction STARTED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLock STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLock PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnection STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnection PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > 

[GitHub] kafka pull request #4323: KAFKA-5849: Add process stop, round trip workload,...

2017-12-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4323


---


[jira] [Resolved] (KAFKA-5849) Add process stop faults, round trip workload, partitioned produce-consume test

2017-12-20 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-5849.
---
   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4323
[https://github.com/apache/kafka/pull/4323]

> Add process stop faults, round trip workload, partitioned produce-consume test
> --
>
> Key: KAFKA-5849
> URL: https://issues.apache.org/jira/browse/KAFKA-5849
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 1.1.0
>
>
> Add partitioned produce consume test



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka-site issue #112: Migrate Streams Dev Guide content to AK

2017-12-20 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/112
  
LGTM. Merged to asf-site.


---


[GitHub] kafka-site pull request #112: Migrate Streams Dev Guide content to AK

2017-12-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/112


---


Re: [VOTE] KIP-239 Add queryableStoreName() to GlobalKTable

2017-12-20 Thread Ted Yu
Ping for more (binding) votes.

The pull request is ready.

On Fri, Dec 15, 2017 at 12:57 PM, Guozhang Wang  wrote:

> +1 (binding), thanks!
>
> On Fri, Dec 15, 2017 at 11:56 AM, Ted Yu  wrote:
>
> > Hi,
> > Here is the discussion thread:
> >
> > http://search-hadoop.com/m/Kafka/uyzND12QnH514pPO9?subj=
> > Re+DISCUSS+KIP+239+Add+queryableStoreName+to+GlobalKTable
> >
> > Please vote on this KIP.
> >
> > Thanks
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Created] (KAFKA-6394) Prevent misconfiguration of advertised listeners

2017-12-20 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6394:
--

 Summary: Prevent misconfiguration of advertised listeners
 Key: KAFKA-6394
 URL: https://issues.apache.org/jira/browse/KAFKA-6394
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


We don't really have any protection from misconfiguration of the advertised 
listeners. Sometimes users will copy the config from one host to another during 
an upgrade. They may remember to update the broker id, but forget about the 
advertised listeners. It can be surprisingly difficult to detect this unless 
you know to look for it (e.g. you might just see a lot of NotLeaderForPartition 
errors as the fetchers connect to the wrong broker). It may not be totally 
foolproof, but checking whether any existing broker has already registered 
the same advertised listener would probably be enough to catch the common 
misconfiguration case.
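The proposed sanity check could be sketched as follows. The function, data shapes, and endpoint strings are all hypothetical; the real check would run during broker registration against the live broker metadata:

```python
# Sketch: refuse to register a broker when another live broker has already
# claimed one of its advertised listener endpoints.
def check_advertised_listeners(new_broker_id, new_listeners, registered):
    """registered: dict mapping broker_id -> set of advertised endpoints."""
    for broker_id, listeners in registered.items():
        if broker_id == new_broker_id:
            continue  # re-registration of the same broker id is fine
        clash = listeners & set(new_listeners)
        if clash:
            raise ValueError(
                f"endpoints {sorted(clash)} already advertised by broker {broker_id}")

registered = {0: {"PLAINTEXT://host0:9092"}, 1: {"PLAINTEXT://host1:9092"}}

# The failure mode from the ticket: a config copied to a new host with the
# broker id updated but the advertised listener left stale.
try:
    check_advertised_listeners(2, ["PLAINTEXT://host1:9092"], registered)
    ok = True
except ValueError:
    ok = False
print(ok)  # False: the stale advertised listener is rejected
```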



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4260: KAFKA-5647: Use KafkaZkClient in ReassignPartition...

2017-12-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4260


---


[jira] [Created] (KAFKA-6393) Add tool to view active brokers

2017-12-20 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6393:
--

 Summary: Add tool to view active brokers
 Key: KAFKA-6393
 URL: https://issues.apache.org/jira/browse/KAFKA-6393
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


It would be helpful to have a tool to view the active brokers in the cluster. 
For example, it could include the following:

1. Broker id and version (maybe detected through ApiVersions request)
2. Broker listener information
3. Whether broker is online
4. Which broker is the active controller
5. Maybe some key configs (e.g. inter-broker version and message format version)





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka-site issue #112: Migrate Streams Dev Guide content to AK

2017-12-20 Thread derrickdoo
Github user derrickdoo commented on the issue:

https://github.com/apache/kafka-site/pull/112
  
👍 💯 LGTM


---


Re: [DISCUSS] KIP-236 Interruptible Partition Reassignment

2017-12-20 Thread Jun Rao
Hi, Tom,

Thanks for the reply. A few more comments below.

10. Your explanation makes sense. My remaining concern is the additional ZK
writes in the proposal. With the proposal, we will need to do the following
writes in ZK.

a. write new assignment in /admin/reassignment_requests

b. write new assignment and additional metadata in
/admin/reassignments/$topic/$partition

c. write old + new assignment  in /brokers/topics/[topic]

d. write new assignment in /brokers/topics/[topic]

e. delete /admin/reassignments/$topic/$partition

So, there are quite a few ZK writes. I am wondering if it's better to
consolidate the info in /admin/reassignments/$topic/$partition into
/brokers/topics/[topic].
For example, we can just add some new JSON fields in /brokers/topics/[topic]
to remember the new assignment and potentially the original replica count
when doing step c. Those fields will then be removed in step d. That way,
we can get rid of steps b and e, saving 2 ZK writes per partition.
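As a sketch, the consolidated /brokers/topics/[topic] znode during step c might look like the following (the field names here are purely illustrative, not a proposed format):

```json
{
  "version": 2,
  "partitions": { "0": [0, 1, 2, 3] },
  "reassignment": {
    "0": { "new_replicas": [1, 2, 3], "original_replica_count": 3 }
  }
}
```

In step d the "reassignment" field would simply be deleted along with the write of the final assignment, so no separate cleanup write is needed.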

11. What you described sounds good. We could potentially optimize the
dropped replicas a bit more. Suppose that assignment [0,1,2] is first
changed to [1,2,3] and then to [2,3,4]. When initiating the second
assignment, we may end up dropping replica 3 only to restart it again.
In this case, we should drop a replica only if it's not going to be added
back again.
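That rule can be sketched in a few lines (illustrative Java, not the controller code): a replica is dropped only when the target assignment no longer contains it.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the "don't drop what you'll re-add" rule: the replicas to
// drop are exactly those in the current assignment but absent from the target.
public class ReplicaDrop {

    static List<Integer> replicasToDrop(List<Integer> current, List<Integer> target) {
        List<Integer> drop = new ArrayList<>(current);
        drop.removeAll(target);  // keep any replica the target still needs
        return drop;
    }
}
```

For the example above, moving from [1,2,3] to [2,3,4] drops only replica 1; replica 3 is never stopped and restarted.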

13. Since this is a corner case, we can either prevent or allow overriding
with the old/new mechanisms. To me, it seems that allowing is simpler to
implement: the order in /admin/reassignment_requests determines the
ordering of the overrides, whether they are initiated the new way or the
old way.

Thanks,

Jun

On Tue, Dec 19, 2017 at 2:43 AM, Tom Bentley  wrote:

> Hi Jun,
>
> 10. Another concern of mine is on consistency with the current pattern. The
> > current pattern for change notification based on ZK is (1) we first write
> > the actual value in the entity path and then write the change
> notification
> > path, and (2)  the change notification path only includes what entity has
> > changed but not the actual changes. If we want to follow this pattern for
> > consistency, /admin/reassignment_requests/request_xxx will only have the
> > partitions whose reassignments have changed, but not the actual
> > reassignment.
> >
>
> Ah, I hadn't understood part (2). That means my concern about efficiency
> with the current pattern is misplaced. There are still some interesting
> differences in semantics, however:
>
> a) The mechanism currently proposed in KIP-236 means that the controller is
> the only writer to /admin/reassignments. This means it can include
> information in these znodes that requesters might not know, or information
> that's necessary to perform the reassignment but not necessary to describe
> the request. While this could be handled using the current pattern it would
> rely on all  writers to preserve any information added by the controller,
> which seems complicated and hence fragile.
>
> b) The current pattern for change notification doesn't cope with competing
> writers to the entity path: If two processes write to the entity path
> before the controller can read it (due to notification) then one set of
> updates will be lost.
>
> c) If a single writing process crashes after writing to the entity path,
> but before writing to the notification path then the write will be lost.
>
> I'm actually using point a) in my WIP (see below). Points b) and c) are
> obviously edge cases.
>
>
> > 11. Ok. I am not sure that I fully understand the description of that
> part.
> > Does "assigned" refer to the current assignment? Could you also describe
> > where the length of the original assignment is stored in ZK?
> >
>
> Sorry if the description is not clear. Yes, "assigned" refers to the
> currently assigned replicas (taken from the
> ControllerContext.partitionReplicaAssignment). I would store the length of
> the original assignment in the /admin/reassignments/$topic/$partition
> znode
> (this is where the point (a) above is useful -- the requester shouldn't
> know that this information is used by the controller).
>
> I've updated the KIP to make these points clearer.
>
>
> > 13. Hmm, I am not sure that the cancellation needs to be done for the
> whole
> > batch. The reason that I brought this up is for consistency. The KIP
> allows
> > override when using the new approach. It just seems that it's simpler to
> > extend this model when resolving multiple changes between the old and the
> > new approach.
>
>
> Ah, I think I've been unclear on this point too. Currently the
> ReassignPartitionsCommand enforces that you can't change reassignments, but
> this doesn't stop other ZK clients making changes to
> /admin/reassign_partitions directly and I believe some Kafka users do
> indeed change reassignments in-flight by writing to
> /admin/reassign_partitions. What I'm proposing doesn't break that at all.
> The semantic I've implemented is only that the 

[jira] [Created] (KAFKA-6392) Do not permit message down-conversion for replicas

2017-12-20 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6392:
--

 Summary: Do not permit message down-conversion for replicas
 Key: KAFKA-6392
 URL: https://issues.apache.org/jira/browse/KAFKA-6392
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


We have seen several cases where down-conversion caused replicas to diverge 
from the leader in subtle ways. Even if we addressed all of the edge cases so 
that down-conversion preserved offset consistency, it would probably still be 
a bad idea to permit it: message timestamps are lost when down-converting from 
v1 to v0, and transactional data is lost when down-converting from v2 to v1 
or v0.

With that in mind, it would be better to forbid down-conversion for replica 
fetches. Following the normal upgrade procedure, down-conversion is not needed 
anyway, but users often skip updating the inter-broker version. In these cases 
it is probably better to let the ISR shrink until the replicas have been 
updated as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk8 #2289

2017-12-20 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5746; Document new broker metrics added for health checks

--
[...truncated 3.88 MB...]

org.apache.kafka.connect.runtime.rest.entities.ConnectorTypeTest > testForValue 
PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigCast STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigCast PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigRegexRouter STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigRegexRouter PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigSetSchemaMetadata STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigSetSchemaMetadata PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigTimestampConverter STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigTimestampConverter PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigHoistField STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigHoistField PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigMaskField STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigMaskField PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigInsertField STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigInsertField PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigFlatten STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigFlatten PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigReplaceField STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigReplaceField PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigTimestampRouter STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigTimestampRouter PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigValueToKey STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigValueToKey PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigExtractField STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigExtractField PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > unconfiguredTransform 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > unconfiguredTransform 
PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > 
multipleTransformsOneDangling STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > 
multipleTransformsOneDangling PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > misconfiguredTransform 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > misconfiguredTransform 
PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > noTransforms STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > noTransforms PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > danglingTransformAlias 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > danglingTransformAlias 
PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > multipleTransforms 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > multipleTransforms PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > singleTransform STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > singleTransform PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > wrongTransformationType 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > wrongTransformationType 
PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutConnectorConfig STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutTaskConfigs STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorFailedBasicValidation STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorFailedBasicValidation PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorFailedCustomValidation STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorFailedCustomValidation PASSED


[GitHub] kafka pull request #4347: KAFKA-6391 ensure topics are created with correct ...

2017-12-20 Thread cvaliente
GitHub user cvaliente opened a pull request:

https://github.com/apache/kafka/pull/4347

KAFKA-6391 ensure topics are created with correct partitions BEFORE 
building the…

ensure topics are created with correct partitions BEFORE building the 
metadata for our stream tasks

First ensureCoPartitioning() on repartitionTopicMetadata before creating 
allRepartitionTopicPartitions

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cvaliente/kafka KAFKA-6391

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4347.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4347


commit bda1803d50d984ef4860579d508c37487df9781a
Author: Clemens Valiente 
Date:   2017-12-20T15:45:41Z

ensure topics are created with correct partitions BEFORE building the 
metadata for our stream tasks




---


Jenkins build is back to normal : kafka-trunk-jdk8 #2288

2017-12-20 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-6391) output from ensure copartitioning is not used for Cluster metadata

2017-12-20 Thread Clemens Valiente (JIRA)
Clemens Valiente created KAFKA-6391:
---

 Summary: output from ensure copartitioning is not used for Cluster 
metadata
 Key: KAFKA-6391
 URL: https://issues.apache.org/jira/browse/KAFKA-6391
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.0
Reporter: Clemens Valiente
Assignee: Clemens Valiente


https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamPartitionAssignor.java#L366


Map allRepartitionTopicPartitions is created 
from repartitionTopicMetadata
THEN we run ensureCoPartitioning() on repartitionTopicMetadata
THEN we create topics and partitions according to repartitionTopicMetadata
THEN we use allRepartitionTopicPartitions to create our Cluster fullMetadata
THEN we use fullMetadata to assign the tasks and no longer use 
repartitionTopicMetadata

As a result, any change made to repartitionTopicMetadata by 
ensureCoPartitioning() is used for creating partitions, but no tasks are ever 
created for any partition added by ensureCoPartitioning().


The fix is easy: first run ensureCoPartitioning() on repartitionTopicMetadata 
before creating allRepartitionTopicPartitions.
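The ordering bug can be reduced to a toy example (plain Java, not the actual StreamPartitionAssignor code): a snapshot taken before ensureCoPartitioning() mutates the partition counts will not see the partitions it adds.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of why the order matters: the map plays the role of
// repartitionTopicMetadata (topic -> partition count), and the snapshot plays
// the role of allRepartitionTopicPartitions.
public class OrderingSketch {

    static int buggyOrder() {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("repartition-topic", 2);
        Map<String, Integer> snapshot = new HashMap<>(counts); // snapshot taken too early
        counts.put("repartition-topic", 4);  // ensureCoPartitioning() grows the topic afterwards
        return snapshot.get("repartition-topic");  // stale: added partitions get no tasks
    }

    static int fixedOrder() {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("repartition-topic", 2);
        counts.put("repartition-topic", 4);  // ensureCoPartitioning() runs first
        Map<String, Integer> snapshot = new HashMap<>(counts); // snapshot sees the final counts
        return snapshot.get("repartition-topic");
    }
}
```

In the buggy order the snapshot still says 2 partitions while 4 get created, which is exactly the mismatch between created partitions and assigned tasks described above.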




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4026: KAFKA-5746: Document new broker metrics added for ...

2017-12-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4026


---


[jira] [Resolved] (KAFKA-6331) Transient failure in kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpointkafka.api.AdminClientIntegrationTest.testAlterReplicaLogDirs

2017-12-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6331.

   Resolution: Fixed
Fix Version/s: 1.1.0

> Transient failure in 
> kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpointkafka.api.AdminClientIntegrationTest.testAlterReplicaLogDirs
> --
>
> Key: KAFKA-6331
> URL: https://issues.apache.org/jira/browse/KAFKA-6331
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Dong Lin
> Fix For: 1.1.0
>
>
> Saw this error once on Jenkins: 
> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/3025/testReport/junit/kafka.api/AdminClientIntegrationTest/testAlterReplicaLogDirs/
> {code}
> Stacktrace
> java.lang.AssertionError: timed out waiting for message produce
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:347)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:861)
>   at 
> kafka.api.AdminClientIntegrationTest.testAlterReplicaLogDirs(AdminClientIntegrationTest.scala:357)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at java.base/java.lang.Thread.run(Thread.java:844)
> Standard Output
> [2017-12-07 19:22:56,297] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-12-07 19:22:59,447] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-12-07 19:22:59,453] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-12-07 19:23:01,335] ERROR Error while creating ephemeral at 
> /controller, node already exists and owner '99134641238966279' does not match 
> current session '99134641238966277' 
> (kafka.zk.KafkaZkClient$CheckedEphemeral:71)
> [2017-12-07 19:23:04,695] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-12-07 19:23:04,760] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-12-07 19:23:06,764] ERROR Error while creating ephemeral at 
> /controller, node already exists and owner '99134641586700293' does not match 
> current session '99134641586700295' 
> (kafka.zk.KafkaZkClient$CheckedEphemeral:71)
> [2017-12-07 19:23:09,379] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-12-07 19:23:09,387] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-12-07 19:23:11,533] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-12-07 19:23:11,539] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN 

[GitHub] kafka pull request #4346: appends Materialized#with to include the ability t...

2017-12-20 Thread kdrakon
GitHub user kdrakon opened a pull request:

https://github.com/apache/kafka/pull/4346

appends Materialized#with to include the ability to specify the store name 
as well as the serdes

`Materialized#with` doesn't allow you to specify both a store name and the 
key/value serdes. If you specify the name using `#as`, then the serdes are 
inferred to be for `Serde`, and it becomes ugly to use 
`#withKeySerde`, `#withValueSerde`, etc. This overload of `Materialized#with` 
allows both the store name *and* the specific serdes to be specified in 
one go.

I've updated `MaterializedTest` with the expected new behaviour as well as 
the old behaviour (i.e. `null` store name when not specified).
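As a rough illustration of the API shape being proposed (a self-contained toy stand-in, NOT the real Kafka Streams `Materialized` class; plain strings stand in for `Serde` instances so the sketch runs without a Kafka dependency):

```java
// Toy analog of the Materialized factory methods described in this PR.
class MaterializedSketch<K, V> {
    final String storeName;
    final String keySerde;
    final String valueSerde;

    private MaterializedSketch(String storeName, String keySerde, String valueSerde) {
        this.storeName = storeName;
        this.keySerde = keySerde;
        this.valueSerde = valueSerde;
    }

    // Existing style: name only; serdes fall back to defaults.
    static <K, V> MaterializedSketch<K, V> as(String storeName) {
        return new MaterializedSketch<>(storeName, null, null);
    }

    // Existing style: serdes only; store name stays null unless set later.
    static <K, V> MaterializedSketch<K, V> with(String keySerde, String valueSerde) {
        return new MaterializedSketch<>(null, keySerde, valueSerde);
    }

    // The proposed overload: store name and both serdes in one call.
    static <K, V> MaterializedSketch<K, V> with(String storeName, String keySerde, String valueSerde) {
        return new MaterializedSketch<>(storeName, keySerde, valueSerde);
    }
}
```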

### Committer Checklist (excluded from commit message)
- [x] Verify design and implementation 
- [x] Verify test coverage and CI build status
- [x] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kdrakon/kafka 
kdrakon/adding-storename-to-Materialized#with

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4346.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4346


commit a5394b1f92ee2cc6542d38e3e31834bc30917eca
Author: Sean Policarpio 
Date:   2017-12-20T12:12:01Z

appends Materialized#with to include the ability to specify the store name 
as well as the serdes




---


[GitHub] kafka pull request #4306: KAFKA-6331; Fix transient failure in AdminClientIn...

2017-12-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4306


---


[GitHub] kafka pull request #4345: KAFKA-6390: Update ZooKeeper to 3.4.11, Gradle and...

2017-12-20 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4345

KAFKA-6390: Update ZooKeeper to 3.4.11, Gradle and other minor updates

Updates:
- Gradle, gradle plugins and maven artifact updated
- Bug fix updates for ZooKeeper, Jackson, EasyMock and Snappy

Not updated:
- RocksDB as it often causes issues, so better done separately
- args4j as our test coverage is weak and the update was a
feature release

Release notes for ZooKeeper 3.4.11:
https://zookeeper.apache.org/doc/r3.4.11/releasenotes.html

Notable fix is improved handling of UnknownHostException:
https://issues.apache.org/jira/browse/ZOOKEEPER-2614

Manually tested that IntelliJ import and build still works.
Relying on existing test suite otherwise.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-6390-zk-3.4.11-and-other-updates

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4345.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4345


commit 5df853f9ea5425e08794507d1d104d050b56dde2
Author: Ismael Juma 
Date:   2017-12-20T10:27:57Z

KAFKA-6390: Update ZooKeeper to 3.4.11, Gradle and other minor updates

Updates:
- Gradle, gradle plugins and maven artifact updated
- Bug fix updates for ZooKeeper, Jackson, EasyMock and Snappy

Release notes for ZooKeeper 3.4.11:
https://zookeeper.apache.org/doc/r3.4.11/releasenotes.html

Notable fix is improved handling of UnknownHostException:
https://issues.apache.org/jira/browse/ZOOKEEPER-2614




---


[jira] [Created] (KAFKA-6390) Update ZooKeeper to 3.4.11 and other minor updates

2017-12-20 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-6390:
--

 Summary: Update ZooKeeper to 3.4.11 and other minor updates
 Key: KAFKA-6390
 URL: https://issues.apache.org/jira/browse/KAFKA-6390
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma
Assignee: Ismael Juma






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6389) Expose transaction metrics via JMX

2017-12-20 Thread JIRA
Florent Ramière created KAFKA-6389:
--

 Summary: Expose transaction metrics via JMX
 Key: KAFKA-6389
 URL: https://issues.apache.org/jira/browse/KAFKA-6389
 Project: Kafka
  Issue Type: Improvement
  Components: metrics
Affects Versions: 1.0.0
Reporter: Florent Ramière


Expose various metrics from 
https://cwiki.apache.org/confluence/display/KAFKA/Transactional+Messaging+in+Kafka
Especially 
* number of transactions
* number of current transactions
* timeout



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4304: KAFKA-6323: document that punctuation is called im...

2017-12-20 Thread fredfp
Github user fredfp closed the pull request at:

https://github.com/apache/kafka/pull/4304


---


Re: [DISCUSS]KIP-216: IQ should throw different exceptions for different errors

2017-12-20 Thread vito jeng
Matthias,

I'd like to clarify some concepts.

When the streams state is REBALANCING, it means the user can just plain retry.

When the streams state is ERROR, PENDING_SHUTDOWN, or NOT_RUNNING, it means
the state store has migrated to another instance, and the user needs to
rediscover the store.

Is my understanding correct?


---
Vito

On Sun, Nov 5, 2017 at 12:30 AM, Matthias J. Sax 
wrote:

> Thanks for the KIP Vito!
>
> I agree with what Guozhang said. The original idea of the Jira was, to
> give different exceptions for different "recovery" strategies to the user.
>
> For example, if a store is currently recreated, a user just needs to wait
> and can query the store later. On the other hand, if a store got migrated
> to another instance, a user needs to rediscover the store instead of a
> "plain retry".
>
> Fatal errors might be a third category.
>
> Not sure if there is something else?
>
> Anyway, the KIP should contain a section that talks about these ideas and
> the reasoning.
>
>
> -Matthias
>
>
> On 11/3/17 11:26 PM, Guozhang Wang wrote:
> > Thanks for writing up the KIP.
> >
> > Vito, Matthias: one thing that I wanted to figure out first is what
> > categories of errors we want to notify users of. If we only want to
> > distinguish fatal v.s. retriable then probably we should rename the
> > proposed StateStoreMigratedException / StateStoreClosedException classes.
> > And then from there we should list the possible internal
> > exceptions ever thrown in those APIs in the call trace, which
> > exceptions should be wrapped into which others, which ones should be
> > handled without re-throwing, and which ones should not be wrapped at all
> > but thrown directly to the user.
> >
> > Guozhang
> >
> >
> > On Wed, Nov 1, 2017 at 11:09 PM, vito jeng  wrote:
> >
> >> Hi,
> >>
> >> I'd like to start discuss KIP-216:
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> 216%3A+IQ+should+throw+different+exceptions+for+different+errors
> >>
> >> Please have a look.
> >> Thanks!
> >>
> >> ---
> >> Vito
> >>
> >
> >
> >
>
>