Build failed in Jenkins: kafka-2.2-jdk8 #102

2019-05-03 Thread Apache Jenkins Server
See 


Changes:

[bill] KAFKA-8323: Close RocksDBStore's BloomFilter (#6672)

--
[...truncated 2.70 MB...]
kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
PASSED

kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
STARTED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion STARTED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
PASSED

kafka.controller.PartitionStateMachineTest > 
testNoOfflinePartitionsChangeForTopicsBeingDeleted STARTED

kafka.controller.PartitionStateMachineTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #3600

2019-05-03 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-8323: Close RocksDBStore's BloomFilter (#6672)

--
[...truncated 4.80 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED


Build failed in Jenkins: kafka-trunk-jdk11 #484

2019-05-03 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-8323: Close RocksDBStore's BloomFilter (#6672)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H33 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision a1b1e088b98763818e933dce335b580d02916640 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a1b1e088b98763818e933dce335b580d02916640
Commit message: "KAFKA-8323: Close RocksDBStore's BloomFilter (#6672)"
 > git rev-list --no-walk a37282415e4e7f682b43abe78517ed18a8dea962 # timeout=10
Setting GRADLE_4_10_2_HOME=/home/jenkins/tools/gradle/4.10.2
Setting GRADLE_4_10_2_HOME=/home/jenkins/tools/gradle/4.10.2
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins5336852975946911251.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.10.2/bin/gradle
/tmp/jenkins5336852975946911251.sh: line 4: 
/home/jenkins/tools/gradle/4.10.2/bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
Recording test results
Setting GRADLE_4_10_2_HOME=/home/jenkins/tools/gradle/4.10.2
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_10_2_HOME=/home/jenkins/tools/gradle/4.10.2
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user wangg...@gmail.com


Re: [DISCUSS] KIP-455: Create an Administrative API for Replica Reassignment

2019-05-03 Thread Colin McCabe
On Tue, Apr 30, 2019, at 23:33, George Li wrote:
>  Hi Colin,
> 
> Thanks for KIP-455!  yes. KIP-236, etc. will depend on it.  It is the 
> good direction to go for the RP 
> 
> Regarding storing the new reassignments & original replicas at the
> topic/partition level: I have some concerns about controller failover and
> about the scalability of scanning the active reassignments from the ZK
> topic/partition-level nodes. Please see my reply to Jason in the
> KIP-236 thread.

Hi George,

The controller already has to rescan this information from ZooKeeper when 
starting up, for unrelated reasons. 
 The controller needs to know about stuff like who is in the ISR for each 
partition, what the replicas are, and so forth.  So this doesn't add any 
additional overhead.

best,
Colin

> 
> Once the decision is made on where the new reassignments and original replicas
> are stored, I will modify KIP-236 accordingly for how to cancel/rollback
> the reassignments.
> 
> Thanks,
> George 
> 
> 
> On Monday, April 15, 2019, 6:07:44 PM PDT, Colin McCabe 
>  wrote:  
>  
>  Hi all,
> 
> We've been having discussions on a few different KIPs (KIP-236, 
> KIP-435, etc.) about what the Admin Client replica reassignment API 
> should look like.  The current API is really hard to extend and 
> maintain, which is a big source of problems.  I think it makes sense to 
> have a KIP that establishes a clean API that we can use and extend 
> going forward, so I posted KIP-455.  Take a look.  :)
> 
> best,
> Colin
>


Re: [VOTE] KIP-454: Expansion of the ConnectClusterState interface

2019-05-03 Thread Randall Hauch
Nice job, Chris!

+1 (binding)

On Thu, May 2, 2019 at 8:16 PM Magesh Nandakumar 
wrote:

> Thanks a lot for the work on this KIP Chris.
>
> +1(non-binding)
>
> On Thu, May 2, 2019, 4:56 PM Chris Egerton  wrote:
>
> > Hi all,
> >
> > I'd like to start a vote for KIP-454:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-454%3A+Expansion+of+the+ConnectClusterState+interface
> >
> > The discussion thread can be found at
> > https://www.mail-archive.com/dev@kafka.apache.org/msg96911.html
> >
> > Thanks to Konstantine Karantasis and Magesh Nandakumar for their
> thoughtful
> > feedback!
> >
> > Cheers,
> >
> > Chris
> >
>


[jira] [Resolved] (KAFKA-8323) Memory leak of BloomFilter Rocks object

2019-05-03 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8323.

Resolution: Fixed

cherry-picked to 2.2 as well

> Memory leak of BloomFilter Rocks object
> ---
>
> Key: KAFKA-8323
> URL: https://issues.apache.org/jira/browse/KAFKA-8323
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
>Reporter: Sophie Blee-Goldman
>Assignee: Sophie Blee-Goldman
>Priority: Blocker
> Fix For: 2.3.0, 2.2.1
>
>
> Any RocksJava object that inherits from org.rocksdb.AbstractNativeReference 
> must be closed explicitly in order to free up the memory of the backing C++ 
> object. The BloomFilter extends RocksObject (which implements 
> AbstractNativeReference) and should also be closed in RocksDBStore#close 
> to avoid leaking memory.





Re: [VOTE] KIP-411: Make default Kafka Connect worker task client IDs distinct

2019-05-03 Thread Arjun Satish
Maybe we can say something like:

This change can have an indirect impact on resource usage by a Connector.
For example, systems that were enforcing quotas using a "consumer-[id]"
client id will now have to update their configs to enforce quota on
"connector-consumer-[id]". For systems that were not enforcing any
limitations or using default quotas, there should be no change expected.
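
For illustration only, here is a sketch of how such a quota override could be
moved to the new client id format with the standard kafka-configs.sh tool. The
ZooKeeper address, byte rate, and the connector/task ids below are placeholders,
not values from this thread:

# Old override keyed on a default consumer client id (hypothetical values):
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'consumer_byte_rate=1048576' \
  --entity-type clients --entity-name consumer-1

# Equivalent override keyed on the new Connect-style client id:
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'consumer_byte_rate=1048576' \
  --entity-type clients --entity-name connector-consumer-my-sink-0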

Best,

On Fri, May 3, 2019 at 1:38 PM Paul Davidson
 wrote:

> Thanks Arjun. I updated the KIP to mention the impact on quotas. Please let
> me know if you think I need more detail. The paragraph I added was:
>
> Since the default client.id values are changing, this will also affect any
> > user that has quotas defined against the current defaults. The current
> > default client.id values are of the form: consumer-{count}  and
> >  producer-{count}.
>
>
> Thanks,
>
> Paul
>
> On Thu, May 2, 2019 at 5:36 PM Arjun Satish 
> wrote:
>
> > Paul,
> >
> > You might want to make a note on the KIP regarding the impact on quotas.
> >
> > Thanks,
> >
> > On Thu, May 2, 2019 at 9:48 AM Paul Davidson
> >  wrote:
> >
> > > Thanks for the votes everyone! KIP-411 is now accepted with:
> > >
> > > +3 binding votes (Randall, Jason, Gwen) , and
> > > +3 non-binding votes (Ryanne, Arjun, Magesh)
> > >
> > > Regards,
> > >
> > > Paul
> > >
> > > On Wed, May 1, 2019 at 10:07 PM Arjun Satish 
> > > wrote:
> > >
> > > > Good point, Gwen. We always set a non empty value for client id:
> > > >
> > > >
> > >
> >
> https://github.com/apache/kafka/blob/2.2.0/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L668
> > > > .
> > > >
> > > > But more importantly, connect client ids (for consumers, for example)
> > > were
> > > > already of the form "consumer-[0-9]+", and from now on they will be
> > > > "connector-consumer-[connector_name]-[0-9]+". So, at least for
> connect
> > > > consumers/producers, we would have already been hitting the default
> > quota
> > > > limits and nothing changes for them. You can correct me if I'm
> missing
> > > > something, but seems like this doesn't *break* backward
> compatibility?
> > > >
> > > > I suppose this change only gives us a better way to manage that quota
> > > > limit.
> > > >
> > > > Best,
> > > >
> > > > On Wed, May 1, 2019 at 9:16 PM Gwen Shapira 
> wrote:
> > > >
> > > > > I'm confused. Surely the default quota applies on empty client IDs
> > too?
> > > > > otherwise it will be very difficult to enforce?
> > > > > So setting the client name will only change something if there's
> > > already
> > > > a
> > > > > quota for that client?
> > > > >
> > > > > On the other hand, I fully support switching to "easy-to-wildcard"
> > > > template
> > > > > for the client id.
> > > > >
> > > > > On Wed, May 1, 2019 at 8:50 PM Arjun Satish <
> arjun.sat...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > I just realized that setting the client.id on the will now
> trigger
> > > any
> > > > > > quota restrictions (
> > > > > > https://kafka.apache.org/documentation/#design_quotasconfig) on
> > the
> > > > > > broker.
> > > > > > It seems like this PR will enforce quota policies that will
> either
> > > > > require
> > > > > > admins to set limits for each task (since the chosen format is
> > > > > > connector-*-id), or fallback to some default value.
> > > > > >
> > > > > > Maybe we should mention this in the backward compatibility
> section
> > > for
> > > > > the
> > > > > > KIP. At the same time, since there is no way atm to turn off this
> > > > > feature,
> > > > > > should this feature be merged and released in the upcoming v2.3?
> > This
> > > > is
> > > > > > something the committers can comment better.
> > > > > >
> > > > > > Best,
> > > > > >
> > > > > >
> > > > > > On Wed, May 1, 2019 at 5:13 PM Gwen Shapira 
> > > wrote:
> > > > > >
> > > > > > > hell yeah!
> > > > > > > +1
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Apr 5, 2019 at 9:08 AM Paul Davidson
> > > > > > >  wrote:
> > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > Since we seem to have agreement in the discussion I would
> like
> > to
> > > > > start
> > > > > > > the
> > > > > > > > vote on KIP-411.
> > > > > > > >
> > > > > > > > See:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-411%3A+Make+default+Kafka+Connect+worker+task+client+IDs+distinct
> > > > > > > >
> > > > > > > > Also see the related PR:
> > > https://github.com/apache/kafka/pull/6097
> > > > > > > >
> > > > > > > > Thanks to everyone who contributed!
> > > > > > > >
> > > > > > > > Paul
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > *Gwen Shapira*
> > > > > > > Product Manager | Confluent
> > > > > > > 650.450.2760 | @gwenshap
> > > > > > > Follow us: Twitter  | blog
> > > > > > > 
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > 

[jira] [Resolved] (KAFKA-7946) Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup

2019-05-03 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-7946.

   Resolution: Fixed
Fix Version/s: 2.2.1

> Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup
> ---
>
> Key: KAFKA-7946
> URL: https://issues.apache.org/jira/browse/KAFKA-7946
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1, 2.2.2
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
> {quote}java.lang.NullPointerException at 
> kafka.admin.DeleteConsumerGroupsTest.testDeleteNonEmptyGroup(DeleteConsumerGroupsTest.scala:96){quote}





[jira] [Created] (KAFKA-8324) User constructed RocksObjects leak memory

2019-05-03 Thread Sophie Blee-Goldman (JIRA)
Sophie Blee-Goldman created KAFKA-8324:
--

 Summary: User constructed RocksObjects leak memory
 Key: KAFKA-8324
 URL: https://issues.apache.org/jira/browse/KAFKA-8324
 Project: Kafka
  Issue Type: Bug
Reporter: Sophie Blee-Goldman


Some of the RocksDB options a user can set when extending RocksDBConfigSetter 
take Rocks objects as parameters. Many of these, including potentially large 
objects like Cache and Filter, inherit from AbstractNativeReference and must 
be closed explicitly in order to free the memory of the backing C++ object. 
However, the user has no way of closing any objects they have created in 
RocksDBConfigSetter, and we never close them on the user's behalf.
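
For illustration, a minimal, hypothetical config setter showing the pattern
described above. The class name is made up; the point is only that the
BloomFilter constructed here is a native object that neither the user nor
Streams ever closes:

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.BloomFilter;
import org.rocksdb.Options;

public class CustomRocksConfigSetter implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName,
                          final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        // The BloomFilter allocates native (C++) memory. The user gets no
        // callback to close it, and Streams does not close it either.
        tableConfig.setFilter(new BloomFilter());
        options.setTableFormatConfig(tableConfig);
    }
}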

 





[jira] [Created] (KAFKA-8323) Memory leak of BloomFilter Rocks object

2019-05-03 Thread Sophie Blee-Goldman (JIRA)
Sophie Blee-Goldman created KAFKA-8323:
--

 Summary: Memory leak of BloomFilter Rocks object
 Key: KAFKA-8323
 URL: https://issues.apache.org/jira/browse/KAFKA-8323
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.3.0, 2.2.1
Reporter: Sophie Blee-Goldman
Assignee: Sophie Blee-Goldman


Any RocksJava object that inherits from org.rocksdb.AbstractNativeReference 
must be closed explicitly in order to free up the memory of the backing C++ 
object. The BloomFilter extends RocksObject (which implements 
AbstractNativeReference) and should also be closed in RocksDBStore#close to 
avoid leaking memory.
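
For illustration, a minimal sketch of the close pattern the fix applies; this is
not the actual RocksDBStore code, just the general RocksJava usage:

import org.rocksdb.BloomFilter;

public class BloomFilterCloseExample {
    public static void main(final String[] args) {
        // Constructing the filter allocates a native C++ object behind the Java handle.
        final BloomFilter filter = new BloomFilter();
        try {
            // ... hand the filter to a BlockBasedTableConfig, open the store, etc. ...
        } finally {
            // close() releases the backing native memory; skipping it is the leak
            // described in this ticket.
            filter.close();
        }
    }
}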





[jira] [Created] (KAFKA-8322) Flaky test: SslTransportLayerTest.testListenerConfigOverride

2019-05-03 Thread Dhruvil Shah (JIRA)
Dhruvil Shah created KAFKA-8322:
---

 Summary: Flaky test: 
SslTransportLayerTest.testListenerConfigOverride
 Key: KAFKA-8322
 URL: https://issues.apache.org/jira/browse/KAFKA-8322
 Project: Kafka
  Issue Type: Test
  Components: core, unit tests
Reporter: Dhruvil Shah


java.lang.AssertionError: expected: but 
was: at org.junit.Assert.fail(Assert.java:89) at 
org.junit.Assert.failNotEquals(Assert.java:835) at 
org.junit.Assert.assertEquals(Assert.java:120) at 
org.junit.Assert.assertEquals(Assert.java:146) at 
org.apache.kafka.common.network.NetworkTestUtils.waitForChannelClose(NetworkTestUtils.java:111)
 at 
org.apache.kafka.common.network.SslTransportLayerTest.testListenerConfigOverride(SslTransportLayerTest.java:319)

 

[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/4250/testReport/junit/org.apache.kafka.common.network/SslTransportLayerTest/testListenerConfigOverride/]





Build failed in Jenkins: kafka-trunk-jdk8 #3599

2019-05-03 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Upgrade dependencies for Kafka 2.3 (#6665)

--
[...truncated 4.80 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter 

Re: [DISCUSS] 2.2.1 Bug Fix Release

2019-05-03 Thread Vahid Hashemian
Hi Sophie,

Thanks for the heads-up. Once the fix is confirmed, could you please create
a ticket for it and assign it to 2.2.1 release?

Thanks,
--Vahid

On Fri, May 3, 2019 at 3:24 PM Sophie Blee-Goldman 
wrote:

> Hey Vahid,
>
> We also have another minor bug fix we just uncovered and are hoping to get
> in today although I don't think there's a ticket for it atm...just waiting
> for the build to pass.
>
> Thanks for volunteering!
>
> Cheers,
> Sophie
>
> On Fri, May 3, 2019 at 3:16 PM Vahid Hashemian 
> wrote:
>
> > Hi John,
> >
> > Thanks for confirming.
> > I'll wait for final bug fix PR for this issue to get merged so we can
> > safely resolve the ticket. That makes it easier with the release script.
> > Hopefully, the current build passes.
> >
> > --Vahid
> >
> > On Fri, May 3, 2019 at 3:07 PM John Roesler  wrote:
> >
> > > Hi Vahid,
> > >
> > > The fix is merged to 2.2. The ticket isn't resolved yet, because the
> > tests
> > > failed on the 2.1 merge, but I think the 2.2.1 release is unblocked
> now.
> > >
> > > Thanks,
> > > -John
> > >
> > > On Fri, May 3, 2019 at 10:41 AM Vahid Hashemian <
> > vahid.hashem...@gmail.com
> > > >
> > > wrote:
> > >
> > > > Thanks for the filter fix and the heads up John.
> > > > I'll wait for that to go through then.
> > > >
> > > > --Vahid
> > > >
> > > > On Fri, May 3, 2019 at 8:33 AM John Roesler 
> wrote:
> > > >
> > > > > Thanks for volunteering, Vahid!
> > > > >
> > > > > I noticed that the "unresolved issues" filter on the plan page was
> > > still
> > > > > set to 2.1.1 (I fixed it).
> > > > >
> > > > > There's one blocker left:
> > > > https://issues.apache.org/jira/browse/KAFKA-8289
> > > > > ,
> > > > > but it's merged to trunk and we're cherry-picking to 2.2 today.
> > > > >
> > > > > Thanks again!
> > > > > -John
> > > > >
> > > > > On Thu, May 2, 2019 at 10:38 PM Vahid Hashemian <
> > > > vahid.hashem...@gmail.com
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > If there are no objections on the proposed plan, I'll start
> > preparing
> > > > the
> > > > > > first release candidate.
> > > > > >
> > > > > > Thanks,
> > > > > > --Vahid
> > > > > >
> > > > > > On Thu, Apr 25, 2019 at 6:27 AM Ismael Juma 
> > > wrote:
> > > > > >
> > > > > > > Thanks Vahid!
> > > > > > >
> > > > > > > On Wed, Apr 24, 2019 at 10:44 PM Vahid Hashemian <
> > > > > > > vahid.hashem...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > I'd like to volunteer for the release manager of the 2.2.1
> bug
> > > fix
> > > > > > > release.
> > > > > > > > Kafka 2.2.0 was released on March 22, 2019.
> > > > > > > >
> > > > > > > > At this point, there are 29 resolved JIRA issues scheduled
> for
> > > > > > inclusion
> > > > > > > in
> > > > > > > > 2.2.1:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.1
> > > > > > > >
> > > > > > > > The release plan is documented here:
> > > > > > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > > --Vahid
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > >
> > > > > > Thanks!
> > > > > > --Vahid
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Thanks!
> > > > --Vahid
> > > >
> > >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
> >
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] 2.2.1 Bug Fix Release

2019-05-03 Thread Sophie Blee-Goldman
Hey Vahid,

We also have another minor bug fix we just uncovered and are hoping to get
in today although I don't think there's a ticket for it atm...just waiting
for the build to pass.

Thanks for volunteering!

Cheers,
Sophie

On Fri, May 3, 2019 at 3:16 PM Vahid Hashemian 
wrote:

> Hi John,
>
> Thanks for confirming.
> I'll wait for final bug fix PR for this issue to get merged so we can
> safely resolve the ticket. That makes it easier with the release script.
> Hopefully, the current build passes.
>
> --Vahid
>
> On Fri, May 3, 2019 at 3:07 PM John Roesler  wrote:
>
> > Hi Vahid,
> >
> > The fix is merged to 2.2. The ticket isn't resolved yet, because the
> tests
> > failed on the 2.1 merge, but I think the 2.2.1 release is unblocked now.
> >
> > Thanks,
> > -John
> >
> > On Fri, May 3, 2019 at 10:41 AM Vahid Hashemian <
> vahid.hashem...@gmail.com
> > >
> > wrote:
> >
> > > Thanks for the filter fix and the heads up John.
> > > I'll wait for that to go through then.
> > >
> > > --Vahid
> > >
> > > On Fri, May 3, 2019 at 8:33 AM John Roesler  wrote:
> > >
> > > > Thanks for volunteering, Vahid!
> > > >
> > > > I noticed that the "unresolved issues" filter on the plan page was
> > still
> > > > set to 2.1.1 (I fixed it).
> > > >
> > > > There's one blocker left:
> > > https://issues.apache.org/jira/browse/KAFKA-8289
> > > > ,
> > > > but it's merged to trunk and we're cherry-picking to 2.2 today.
> > > >
> > > > Thanks again!
> > > > -John
> > > >
> > > > On Thu, May 2, 2019 at 10:38 PM Vahid Hashemian <
> > > vahid.hashem...@gmail.com
> > > > >
> > > > wrote:
> > > >
> > > > > If there are no objections on the proposed plan, I'll start
> preparing
> > > the
> > > > > first release candidate.
> > > > >
> > > > > Thanks,
> > > > > --Vahid
> > > > >
> > > > > On Thu, Apr 25, 2019 at 6:27 AM Ismael Juma 
> > wrote:
> > > > >
> > > > > > Thanks Vahid!
> > > > > >
> > > > > > On Wed, Apr 24, 2019 at 10:44 PM Vahid Hashemian <
> > > > > > vahid.hashem...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I'd like to volunteer for the release manager of the 2.2.1 bug
> > fix
> > > > > > release.
> > > > > > > Kafka 2.2.0 was released on March 22, 2019.
> > > > > > >
> > > > > > > At this point, there are 29 resolved JIRA issues scheduled for
> > > > > inclusion
> > > > > > in
> > > > > > > 2.2.1:
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.1
> > > > > > >
> > > > > > > The release plan is documented here:
> > > > > > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1
> > > > > > >
> > > > > > > Thanks!
> > > > > > > --Vahid
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Thanks!
> > > > > --Vahid
> > > > >
> > > >
> > >
> > >
> > > --
> > >
> > > Thanks!
> > > --Vahid
> > >
> >
>
>
> --
>
> Thanks!
> --Vahid
>


[jira] [Created] (KAFKA-8321) Flaky Test kafka.server.DynamicConfigTest.shouldFailWhenChangingClientIdUnknownConfig

2019-05-03 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-8321:
--

 Summary: Flaky Test 
kafka.server.DynamicConfigTest.shouldFailWhenChangingClientIdUnknownConfig
 Key: KAFKA-8321
 URL: https://issues.apache.org/jira/browse/KAFKA-8321
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.3.0
Reporter: Bill Bejeck


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/4253/testReport/junit/kafka.server/DynamicConfigTest/shouldFailWhenChangingClientIdUnknownConfig/]
{noformat}
Error Message
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for 
connection while in state: CONNECTING
Stacktrace
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for 
connection while in state: CONNECTING
at 
kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:268)
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at 
kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:264)
at kafka.zookeeper.ZooKeeperClient.(ZooKeeperClient.scala:97)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1694)
at kafka.zk.ZooKeeperTestHarness.setUp(ZooKeeperTestHarness.scala:59)
at jdk.internal.reflect.GeneratedMethodAccessor138.invoke(Unknown 
Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at jdk.internal.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
at jdk.internal.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 

Re: [DISCUSS] 2.2.1 Bug Fix Release

2019-05-03 Thread Vahid Hashemian
Hi John,

Thanks for confirming.
I'll wait for final bug fix PR for this issue to get merged so we can
safely resolve the ticket. That makes it easier with the release script.
Hopefully, the current build passes.

--Vahid

On Fri, May 3, 2019 at 3:07 PM John Roesler  wrote:

> Hi Vahid,
>
> The fix is merged to 2.2. The ticket isn't resolved yet, because the tests
> failed on the 2.1 merge, but I think the 2.2.1 release is unblocked now.
>
> Thanks,
> -John
>
> On Fri, May 3, 2019 at 10:41 AM Vahid Hashemian  >
> wrote:
>
> > Thanks for the filter fix and the heads up John.
> > I'll wait for that to go through then.
> >
> > --Vahid
> >
> > On Fri, May 3, 2019 at 8:33 AM John Roesler  wrote:
> >
> > > Thanks for volunteering, Vahid!
> > >
> > > I noticed that the "unresolved issues" filter on the plan page was
> still
> > > set to 2.1.1 (I fixed it).
> > >
> > > There's one blocker left:
> > https://issues.apache.org/jira/browse/KAFKA-8289
> > > ,
> > > but it's merged to trunk and we're cherry-picking to 2.2 today.
> > >
> > > Thanks again!
> > > -John
> > >
> > > On Thu, May 2, 2019 at 10:38 PM Vahid Hashemian <
> > vahid.hashem...@gmail.com
> > > >
> > > wrote:
> > >
> > > > If there are no objections on the proposed plan, I'll start preparing
> > the
> > > > first release candidate.
> > > >
> > > > Thanks,
> > > > --Vahid
> > > >
> > > > On Thu, Apr 25, 2019 at 6:27 AM Ismael Juma 
> wrote:
> > > >
> > > > > Thanks Vahid!
> > > > >
> > > > > On Wed, Apr 24, 2019 at 10:44 PM Vahid Hashemian <
> > > > > vahid.hashem...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to volunteer for the release manager of the 2.2.1 bug
> fix
> > > > > release.
> > > > > > Kafka 2.2.0 was released on March 22, 2019.
> > > > > >
> > > > > > At this point, there are 29 resolved JIRA issues scheduled for
> > > > inclusion
> > > > > in
> > > > > > 2.2.1:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.1
> > > > > >
> > > > > > The release plan is documented here:
> > > > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1
> > > > > >
> > > > > > Thanks!
> > > > > > --Vahid
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Thanks!
> > > > --Vahid
> > > >
> > >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
> >
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] 2.2.1 Bug Fix Release

2019-05-03 Thread John Roesler
Hi Vahid,

The fix is merged to 2.2. The ticket isn't resolved yet, because the tests
failed on the 2.1 merge, but I think the 2.2.1 release is unblocked now.

Thanks,
-John

On Fri, May 3, 2019 at 10:41 AM Vahid Hashemian 
wrote:

> Thanks for the filter fix and the heads up John.
> I'll wait for that to go through then.
>
> --Vahid
>
> On Fri, May 3, 2019 at 8:33 AM John Roesler  wrote:
>
> > Thanks for volunteering, Vahid!
> >
> > I noticed that the "unresolved issues" filter on the plan page was still
> > set to 2.1.1 (I fixed it).
> >
> > There's one blocker left:
> https://issues.apache.org/jira/browse/KAFKA-8289
> > ,
> > but it's merged to trunk and we're cherry-picking to 2.2 today.
> >
> > Thanks again!
> > -John
> >
> > On Thu, May 2, 2019 at 10:38 PM Vahid Hashemian <
> vahid.hashem...@gmail.com
> > >
> > wrote:
> >
> > > If there are no objections on the proposed plan, I'll start preparing
> the
> > > first release candidate.
> > >
> > > Thanks,
> > > --Vahid
> > >
> > > On Thu, Apr 25, 2019 at 6:27 AM Ismael Juma  wrote:
> > >
> > > > Thanks Vahid!
> > > >
> > > > On Wed, Apr 24, 2019 at 10:44 PM Vahid Hashemian <
> > > > vahid.hashem...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I'd like to volunteer for the release manager of the 2.2.1 bug fix
> > > > release.
> > > > > Kafka 2.2.0 was released on March 22, 2019.
> > > > >
> > > > > At this point, there are 29 resolved JIRA issues scheduled for
> > > inclusion
> > > > in
> > > > > 2.2.1:
> > > > >
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.1
> > > > >
> > > > > The release plan is documented here:
> > > > >
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1
> > > > >
> > > > > Thanks!
> > > > > --Vahid
> > > > >
> > > >
> > >
> > >
> > > --
> > >
> > > Thanks!
> > > --Vahid
> > >
> >
>
>
> --
>
> Thanks!
> --Vahid
>


[jira] [Created] (KAFKA-8320) Connect Error handling is using the RetriableException from common package

2019-05-03 Thread Magesh kumar Nandakumar (JIRA)
Magesh kumar Nandakumar created KAFKA-8320:
--

 Summary: Connect Error handling is using the RetriableException 
from common package
 Key: KAFKA-8320
 URL: https://issues.apache.org/jira/browse/KAFKA-8320
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.0.0
Reporter: Magesh kumar Nandakumar
Assignee: Magesh kumar Nandakumar


When a SourceConnector throws 
org.apache.kafka.connect.errors.RetriableException during poll(), the Connect 
runtime is supposed to ignore the error and retry per 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-298%3A+Error+Handling+in+Connect].
When connectors throw this exception, it is not handled gracefully.

WorkerSourceTask is catching the exception from the wrong package, 
`org.apache.kafka.common.errors`.
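
For illustration, a hypothetical source task showing the class the runtime needs
to catch. The task name and the "upstream unavailable" condition are placeholders:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.errors.RetriableException;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class ExampleSourceTask extends SourceTask {
    @Override
    public String version() { return "0.0.1"; }

    @Override
    public void start(final Map<String, String> props) { }

    @Override
    public void stop() { }

    @Override
    public List<SourceRecord> poll() {
        if (upstreamTemporarilyUnavailable()) {
            // org.apache.kafka.connect.errors.RetriableException: per KIP-298 the
            // framework should swallow this and retry the poll. That only works if
            // WorkerSourceTask catches this class rather than
            // org.apache.kafka.common.errors.RetriableException.
            throw new RetriableException("upstream temporarily unavailable");
        }
        return Collections.emptyList();
    }

    private boolean upstreamTemporarilyUnavailable() {
        return false; // placeholder condition
    }
}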





Re: [DISCUSS] KIP-465: Add Consolidated Connector Endpoint to Connect REST API

2019-05-03 Thread Alex Liu
Good question,

`info` is probably the best name for it. The updated output on the wiki
looks reasonable to me.

Alex

On Fri, May 3, 2019 at 2:24 PM dan  wrote:

> thanks. i think this makes sense.
>
> i'm thinking we should just use repeated queryparams for this, so
> `?expand=status&expand=config`
>
> another thing is what do you think we should use for the `/` endpoint? was
> thinking `?expand=info`
>
> output could look like
>
> w:kafka norwood$ curl -s 'http://localhost:8083/connectors?expand=status&expand=config' | jq
>
> {
>   "blah": {
>     "config": {
>       "name": "blah",
>       "config": {
>         "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
>         "file": "/tmp/lol",
>         "tasks.max": "10",
>         "name": "blah",
>         "topic": "test-topic"
>       },
>       "tasks": [
>         {
>           "connector": "blah",
>           "task": 0
>         }
>       ],
>       "type": "source"
>     },
>     "status": {
>       "name": "blah",
>       "connector": {
>         "state": "RUNNING",
>         "worker_id": "10.200.25.241:8083"
>       },
>       "tasks": [
>         {
>           "id": 0,
>           "state": "RUNNING",
>           "worker_id": "10.200.25.241:8083"
>         }
>       ],
>       "type": "source"
>     }
>   }
> }
>
>
> will update the wiki with this info
>
> thanks
> dan
>
> On Thu, May 2, 2019 at 4:43 PM Alex Liu  wrote:
>
> > Good idea, Dan. One thing I might suggest is to have the query parameters
> > reflect the fact that there are multiple resources under each connector.
> > There is `connectors/<name>/`, `connectors/<name>/config`, and
> > `connectors/<name>/status`.
> > Each of them returns a slightly different set of information, so it would
> > be useful to allow the query parameter to be a string instead of a
> > true/false flag. In this case, `expand=status,config` would specify
> > expanding both the /status and /config subresources into the response objects.
> >
> > Other than this detail, I think this is a useful addition to the Connect
> > REST API.
> >
> > Alex
> >
>


Re: [DISCUSS] KIP-465: Add Consolidated Connector Endpoint to Connect REST API

2019-05-03 Thread dan
thanks. i think this makes sense.

i'm thinking we should just use repeated queryparams for this, so
`?expand=status&expand=config`

another thing is what do you think we should use for the `/` endpoint? was
thinking `?expand=info`

output could look like

w:kafka norwood$ curl -s 'http://localhost:8083/connectors?expand=status&expand=config' | jq

{
  "blah": {
    "config": {
      "name": "blah",
      "config": {
        "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
        "file": "/tmp/lol",
        "tasks.max": "10",
        "name": "blah",
        "topic": "test-topic"
      },
      "tasks": [
        {
          "connector": "blah",
          "task": 0
        }
      ],
      "type": "source"
    },
    "status": {
      "name": "blah",
      "connector": {
        "state": "RUNNING",
        "worker_id": "10.200.25.241:8083"
      },
      "tasks": [
        {
          "id": 0,
          "state": "RUNNING",
          "worker_id": "10.200.25.241:8083"
        }
      ],
      "type": "source"
    }
  }
}


will update the wiki with this info

thanks
dan

On Thu, May 2, 2019 at 4:43 PM Alex Liu  wrote:

> Good idea, Dan. One thing I might suggest is to have the query parameters
> reflect the fact that there are multiple resources under each connector.
> There is `connectors/<name>/`, `connectors/<name>/config`, and
> `connectors/<name>/status`.
> Each of them returns a slightly different set of information, so it would
> be useful to allow the query parameter to be a string instead of a true/false
> flag. In this case, `expand=status,config` would specify expanding both the
> /status and /config subresources into the response objects.
>
> Other than this detail, I think this is a useful addition to the Connect
> REST API.
>
> Alex
>
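
For context, this is how the same information is gathered today with the existing
per-connector endpoints (host, port, and the connector name "blah" follow the
example above and are placeholders):

curl -s http://localhost:8083/connectors               # names of all connectors
curl -s http://localhost:8083/connectors/blah          # config plus task assignments
curl -s http://localhost:8083/connectors/blah/config   # config subresource only
curl -s http://localhost:8083/connectors/blah/status   # status subresource only

The proposed `expand` parameter would fold these per-connector calls into the
single listing request.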


Re: [VOTE] KIP-411: Make default Kafka Connect worker task client IDs distinct

2019-05-03 Thread Paul Davidson
Thanks Arjun. I updated the KIP to mention the impact on quotas. Please let
me know if you think I need more detail. The paragraph I added was:

Since the default client.id values are changing, this will also affect any
> user that has quotas defined against the current defaults. The current
> default client.id values are of the form: consumer-{count}  and
>  producer-{count}.


Thanks,

Paul

On Thu, May 2, 2019 at 5:36 PM Arjun Satish  wrote:

> Paul,
>
> You might want to make a note on the KIP regarding the impact on quotas.
>
> Thanks,
>
> On Thu, May 2, 2019 at 9:48 AM Paul Davidson
>  wrote:
>
> > Thanks for the votes everyone! KIP-411 is now accepted with:
> >
> > +3 binding votes (Randall, Jason, Gwen) , and
> > +3 non-binding votes (Ryanne, Arjun, Magesh)
> >
> > Regards,
> >
> > Paul
> >
> > On Wed, May 1, 2019 at 10:07 PM Arjun Satish 
> > wrote:
> >
> > > Good point, Gwen. We always set a non empty value for client id:
> > >
> > >
> >
> https://github.com/apache/kafka/blob/2.2.0/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L668
> > > .
> > >
> > > But more importantly, connect client ids (for consumers, for example)
> > were
> > > already of the form "consumer-[0-9]+", and from now on they will be
> > > "connector-consumer-[connector_name]-[0-9]+". So, at least for connect
> > > consumers/producers, we would have already been hitting the default
> quota
> > > limits and nothing changes for them. You can correct me if I'm missing
> > > something, but seems like this doesn't *break* backward compatibility?
> > >
> > > I suppose this change only gives us a better way to manage that quota
> > > limit.
> > >
> > > Best,
> > >
> > > On Wed, May 1, 2019 at 9:16 PM Gwen Shapira  wrote:
> > >
> > > > I'm confused. Surely the default quota applies on empty client IDs
> too?
> > > > otherwise it will be very difficult to enforce?
> > > > So setting the client name will only change something if there's
> > already
> > > a
> > > > quota for that client?
> > > >
> > > > On the other hand, I fully support switching to "easy-to-wildcard"
> > > template
> > > > for the client id.
> > > >
> > > > On Wed, May 1, 2019 at 8:50 PM Arjun Satish 
> > > > wrote:
> > > >
> > > > > I just realized that setting the client.id on the will now trigger
> > any
> > > > > quota restrictions (
> > > > > https://kafka.apache.org/documentation/#design_quotasconfig) on
> the
> > > > > broker.
> > > > > It seems like this PR will enforce quota policies that will either
> > > > require
> > > > > admins to set limits for each task (since the chosen format is
> > > > > connector-*-id), or fallback to some default value.
> > > > >
> > > > > Maybe we should mention this in the backward compatibility section
> > for
> > > > the
> > > > > KIP. At the same time, since there is no way atm to turn off this
> > > > feature,
> > > > > should this feature be merged and released in the upcoming v2.3?
> This
> > > is
> > > > > something the committers can comment better.
> > > > >
> > > > > Best,
> > > > >
> > > > >
> > > > > On Wed, May 1, 2019 at 5:13 PM Gwen Shapira 
> > wrote:
> > > > >
> > > > > > hell yeah!
> > > > > > +1
> > > > > >
> > > > > >
> > > > > > On Fri, Apr 5, 2019 at 9:08 AM Paul Davidson
> > > > > >  wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > Since we seem to have agreement in the discussion I would like
> to
> > > > start
> > > > > > the
> > > > > > > vote on KIP-411.
> > > > > > >
> > > > > > > See:
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-411%3A+Make+default+Kafka+Connect+worker+task+client+IDs+distinct
> > > > > > >
> > > > > > > Also see the related PR:
> > https://github.com/apache/kafka/pull/6097
> > > > > > >
> > > > > > > Thanks to everyone who contributed!
> > > > > > >
> > > > > > > Paul
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > *Gwen Shapira*
> > > > > > Product Manager | Confluent
> > > > > > 650.450.2760 | @gwenshap
> > > > > > Follow us: Twitter  | blog
> > > > > > 
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > *Gwen Shapira*
> > > > Product Manager | Confluent
> > > > 650.450.2760 | @gwenshap
> > > > Follow us: Twitter  | blog
> > > > 
> > > >
> > >
> >
>


[jira] [Reopened] (KAFKA-8240) Source.equals() can fail with NPE

2019-05-03 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reopened KAFKA-8240:


This needs to get cherry-picked back to 2.2 and 2.1 so reopening.  I'll resolve 
once that happens.

> Source.equals() can fail with NPE
> -
>
> Key: KAFKA-8240
> URL: https://issues.apache.org/jira/browse/KAFKA-8240
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: beginner, easy-fix, newbie
>
> Reported on an PR: 
> [https://github.com/apache/kafka/pull/5284/files/1df6208f48b6b72091fea71323d94a16102ffd13#r270607795]
> InternalTopologyBuilder#Source.equals() might fail with NPE if 
> `topicPattern==null`.
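
For illustration, a minimal, hypothetical version of the null-safe comparison the
fix needs. Field names follow the ticket; this is not the actual
InternalTopologyBuilder code:

import java.util.Objects;
import java.util.regex.Pattern;

class Source {
    final String name;
    final Pattern topicPattern; // null when the source subscribes to fixed topics

    Source(final String name, final Pattern topicPattern) {
        this.name = name;
        this.topicPattern = topicPattern;
    }

    @Override
    public boolean equals(final Object o) {
        if (this == o) return true;
        if (!(o instanceof Source)) return false;
        final Source other = (Source) o;
        // Objects.equals tolerates null on either side, avoiding the NPE that
        // calling topicPattern.pattern() unconditionally would cause.
        return Objects.equals(name, other.name)
                && Objects.equals(patternString(topicPattern), patternString(other.topicPattern));
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, patternString(topicPattern));
    }

    private static String patternString(final Pattern p) {
        return p == null ? null : p.pattern();
    }
}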





Re: [DISCUSS] KIP-464 Default Replication Factor for AdminClient#createTopic

2019-05-03 Thread Randall Hauch
I personally like those extra methods, rather than relying upon the generic
properties. But I'm fine if others think they should be removed. I'm also
fine with not deprecating the Connect version of the builder.
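
For context, a hedged sketch of what "relying upon the generic properties" looks
like with the current clients API, where topic-level settings such as compaction
go through the raw config map on NewTopic. The broker address and topic name are
placeholders:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateCompactedTopic {
    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Partitions and replication factor are required up front; everything
            // else is a plain string config rather than a dedicated builder method.
            final NewTopic topic = new NewTopic("example-topic", 1, (short) 3)
                    .configs(Collections.singletonMap(
                            TopicConfig.CLEANUP_POLICY_CONFIG,
                            TopicConfig.CLEANUP_POLICY_COMPACT));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}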

On Fri, May 3, 2019 at 11:27 AM Almog Gavra  wrote:

> Ack. KIP updated :) Perhaps instead of deprecating the Connect builder,
> then, we can indeed just subclass it and move some of the less common build
> methods (e.g. uncleanLeaderElection) there.
>
> On Fri, May 3, 2019 at 11:20 AM Randall Hauch  wrote:
>
> > Thanks for updating, Almog. I have a few suggestions specific to the
> > builder:
> >
> > 1. The AK pattern for builder classes that are nested is to name the
> class
> > "Builder" and to make it publicly visible. We should follow that pattern
> > here, too.
> > 2. The builder's private constructor makes it impossible to subclass,
> > should we ever want to do that (e.g, in Connect). If we make it protected
> > or public, then subclassing is easier.
> >
> > On Thu, May 2, 2019 at 9:44 AM Almog Gavra  wrote:
> >
> > > Thanks for the input Randall. I'm happy adding it natively to NewTopic
> > > instead of introducing more verbosity - updating the KIP to reflect
> this
> > > now.
> > >
> > > On Thu, May 2, 2019 at 9:28 AM Randall Hauch  wrote:
> > >
> > > > I wrote the `NewTopicBuilder` in Connect, and it was simply a
> > convenience
> > > > to more easily set some of the frequently-used properties and the #
> of
> > > > partitions and replicas for the new topic in the same way. An example
> > is:
> > > >
> > > > NewTopic topicDescription = TopicAdmin.defineTopic(topic).
> > > > compacted().
> > > > partitions(1).
> > > > replicationFactor(3).
> > > > build();
> > > >
> > > > Arguably it should have been added to clients from the beginning. So
> > I'm
> > > > fine with that being moved to clients, as long as Connect is changed
> to
> > > use
> > > > the new clients class. However, even though Connect's
> `NewTopicBuilder`
> > > is
> > > > in the runtime and technically not part of the public API, things
> like
> > > this
> > > > still tend to get reused elsewhere. Let's keep the Connect
> > > > `NewTopicBuilder` but deprecate it and have it extend the one in
> > clients.
> > > > The `TopicAdmin` class in Connect can then refer to the new one in
> > > clients.
> > > >
> > > > The KIP now talks about having a constructor for the builder:
> > > >
> > > > NewTopic myTopic = new
> > > >
> > > >
> > >
> >
> NewTopicBuilder(name).compacted().partitions(1).replicationFactor(3).build();
> > > >
> > > > How about adding the builder to the NewTopic class itself:
> > > >
> > > > NewTopic myTopic =
> > > >
> > > >
> > >
> >
> NewTopic.build(name).compacted().partitions(1).replicationFactor(3).build();
> > > >
> > > > This is a bit shorter, a bit easier to read (no "new New..."), and
> more
> > > > discoverable since anyone looking at the NewTopic source or JavaDoc
> > will
> > > > maybe notice it.
> > > >
> > > > Randall
> > > >
> > > >
> > > > On Thu, May 2, 2019 at 8:56 AM Almog Gavra 
> wrote:
> > > >
> > > > > Sure thing, added more detail to the KIP! To clarify, the plan is
> to
> > > move
> > > > > an existing API from one package to another (NewTopicBuilder exists
> > in
> > > > the
> > > > > connect.runtime package) leaving the old in place for compatibility
> > and
> > > > > deprecating it.
> > > > >
> > > > > I'm happy to hear thoughts on whether we should (a) move it to the
> > same
> > > > > package in a new module so that we don't need to deprecate it or
> (b)
> > > take
> > > > > this opportunity to change any of the APIs.
> > > > >
> > > > > On Thu, May 2, 2019 at 8:22 AM Ismael Juma 
> > wrote:
> > > > >
> > > > > > If you are adding new API, you need to specify it all in the KIP.
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Thu, May 2, 2019, 8:04 AM Almog Gavra 
> > wrote:
> > > > > >
> > > > > > > I think that sounds reasonable - I updated the KIP and I will
> > > remove
> > > > > the
> > > > > > > constructor that takes in only partitions.
> > > > > > >
> > > > > > > On Thu, May 2, 2019 at 4:44 AM Andy Coates 
> > > > wrote:
> > > > > > >
> > > > > > > > Rather than adding overloaded constructors, which can lead to
> > API
> > > > > > bloat,
> > > > > > > > how about using a builder pattern?
> > > > > > > >
> > > > > > > > I see it’s already got some constructor overloading, but we
> > could
> > > > > add a
> > > > > > > > single new constructor that takes just the name, and support
> > > > > everything
> > > > > > > > else being set via builder methods.
> > > > > > > >
> > > > > > > > This would result in a better long term api as the number of
> > > > options
> > > > > > > > increases.
> > > > > > > >
> > > > > > > > Sent from my iPhone
> > > > > > > >
> > > > > > > > > On 30 Apr 2019, at 16:28, Almog Gavra 
> > > > wrote:
> > > > > > > > >
> > > > > > > > > Hello Everyone,
> > > > > > > > >
> > 

Re: [DISCUSS] KIP-464 Default Replication Factor for AdminClient#createTopic

2019-05-03 Thread Almog Gavra
Ack. KIP updated :) Perhaps instead of deprecating the Connect builder,
then, we can indeed just subclass it and move some of the less common build
methods (e.g. uncleanLeaderElection) there.
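
A rough sketch of that idea, assuming a clients-side builder exists as discussed in this thread; all class and method names here are placeholders, not the actual Connect or clients API:

// Stand-in for the builder proposed for the clients module.
class ClientsNewTopicBuilder {
    protected final String name;
    protected int partitions = 1;
    protected short replicationFactor = 3;

    protected ClientsNewTopicBuilder(String name) {
        this.name = name;
    }

    public ClientsNewTopicBuilder partitions(int partitions) {
        this.partitions = partitions;
        return this;
    }

    public ClientsNewTopicBuilder replicationFactor(short replicationFactor) {
        this.replicationFactor = replicationFactor;
        return this;
    }
}

// Connect-side subclass keeping the less common, Connect-specific settings.
class ConnectNewTopicBuilder extends ClientsNewTopicBuilder {
    private boolean uncleanLeaderElection;

    protected ConnectNewTopicBuilder(String name) {
        super(name);
    }

    public ConnectNewTopicBuilder uncleanLeaderElection(boolean enabled) {
        this.uncleanLeaderElection = enabled;
        return this;
    }
}

Note that chained calls inherited from the parent return the parent type, so a real implementation would need covariant overrides or a self-typed generic to keep fluent chaining in the subclass.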

On Fri, May 3, 2019 at 11:20 AM Randall Hauch  wrote:

> Thanks for updating, Almog. I have a few suggestions specific to the
> builder:
>
> 1. The AK pattern for builder classes that are nested is to name the class
> "Builder" and to make it publicly visible. We should follow that pattern
> here, too.
> 2. The builder's private constructor makes it impossible to subclass,
> should we ever want to do that (e.g, in Connect). If we make it protected
> or public, then subclassing is easier.
>
> On Thu, May 2, 2019 at 9:44 AM Almog Gavra  wrote:
>
> > Thanks for the input Randall. I'm happy adding it natively to NewTopic
> > instead of introducing more verbosity - updating the KIP to reflect this
> > now.
> >
> > On Thu, May 2, 2019 at 9:28 AM Randall Hauch  wrote:
> >
> > > I wrote the `NewTopicBuilder` in Connect, and it was simply a
> convenience
> > > to more easily set some of the frequently-used properties and the # of
> > > partitions and replicas for the new topic in the same way. An example
> is:
> > >
> > > NewTopic topicDescription = TopicAdmin.defineTopic(topic).
> > > compacted().
> > > partitions(1).
> > > replicationFactor(3).
> > > build();
> > >
> > > Arguably it should have been added to clients from the beginning. So
> I'm
> > > fine with that being moved to clients, as long as Connect is changed to
> > use
> > > the new clients class. However, even though Connect's `NewTopicBuilder`
> > is
> > > in the runtime and technically not part of the public API, things like
> > this
> > > still tend to get reused elsewhere. Let's keep the Connect
> > > `NewTopicBuilder` but deprecate it and have it extend the one in
> clients.
> > > The `TopicAdmin` class in Connect can then refer to the new one in
> > clients.
> > >
> > > The KIP now talks about having a constructor for the builder:
> > >
> > > NewTopic myTopic = new
> > >
> > >
> >
> NewTopicBuilder(name).compacted().partitions(1).replicationFactor(3).build();
> > >
> > > How about adding the builder to the NewTopic class itself:
> > >
> > > NewTopic myTopic =
> > >
> > >
> >
> NewTopic.build(name).compacted().partitions(1).replicationFactor(3).build();
> > >
> > > This is a bit shorter, a bit easier to read (no "new New..."), and more
> > > discoverable since anyone looking at the NewTopic source or JavaDoc
> will
> > > maybe notice it.
> > >
> > > Randall
> > >
> > >
> > > On Thu, May 2, 2019 at 8:56 AM Almog Gavra  wrote:
> > >
> > > > Sure thing, added more detail to the KIP! To clarify, the plan is to
> > move
> > > > an existing API from one package to another (NewTopicBuilder exists
> in
> > > the
> > > > connect.runtime package) leaving the old in place for compatibility
> and
> > > > deprecating it.
> > > >
> > > > I'm happy to hear thoughts on whether we should (a) move it to the
> same
> > > > package in a new module so that we don't need to deprecate it or (b)
> > take
> > > > this opportunity to change any of the APIs.
> > > >
> > > > On Thu, May 2, 2019 at 8:22 AM Ismael Juma 
> wrote:
> > > >
> > > > > If you are adding new API, you need to specify it all in the KIP.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Thu, May 2, 2019, 8:04 AM Almog Gavra 
> wrote:
> > > > >
> > > > > > I think that sounds reasonable - I updated the KIP and I will
> > remove
> > > > the
> > > > > > constructor that takes in only partitions.
> > > > > >
> > > > > > On Thu, May 2, 2019 at 4:44 AM Andy Coates 
> > > wrote:
> > > > > >
> > > > > > > Rather than adding overloaded constructors, which can lead to
> API
> > > > > bloat,
> > > > > > > how about using a builder pattern?
> > > > > > >
> > > > > > > I see it’s already got some constructor overloading, but we
> could
> > > > add a
> > > > > > > single new constructor that takes just the name, and support
> > > > everything
> > > > > > > else being set via builder methods.
> > > > > > >
> > > > > > > This would result in a better long term api as the number of
> > > options
> > > > > > > increases.
> > > > > > >
> > > > > > > Sent from my iPhone
> > > > > > >
> > > > > > > > On 30 Apr 2019, at 16:28, Almog Gavra 
> > > wrote:
> > > > > > > >
> > > > > > > > Hello Everyone,
> > > > > > > >
> > > > > > > > I'd like to start a discussion on KIP-464:
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-464%3A+Default+Replication+Factor+for+AdminClient%23createTopic
> > > > > > > >
> > > > > > > > It's about allowing users of the AdminClient to supply only a
> > > > > partition
> > > > > > > > count and to use the default replication factor configured by
> > the
> > > > > kafka
> > > > > > > > cluster. Happy to receive any and all feedback!
> > > > > > > >
> > > > > 

Build failed in Jenkins: kafka-trunk-jdk11 #483

2019-05-03 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Upgrade dependencies for Kafka 2.3 (#6665)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H26 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision a37282415e4e7f682b43abe78517ed18a8dea962 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a37282415e4e7f682b43abe78517ed18a8dea962
Commit message: "MINOR: Upgrade dependencies for Kafka 2.3 (#6665)"
 > git rev-list --no-walk 6d3ff132b57835fc879d678e9addc5e7c3804205 # timeout=10
Setting GRADLE_4_10_2_HOME=/home/jenkins/tools/gradle/4.10.2
Setting GRADLE_4_10_2_HOME=/home/jenkins/tools/gradle/4.10.2
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins9196970193267980014.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.10.2/bin/gradle
/tmp/jenkins9196970193267980014.sh: line 4: 
/home/jenkins/tools/gradle/4.10.2/bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
Recording test results
Setting GRADLE_4_10_2_HOME=/home/jenkins/tools/gradle/4.10.2
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_10_2_HOME=/home/jenkins/tools/gradle/4.10.2
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user wangg...@gmail.com


Re: [DISCUSS] KIP-464 Default Replication Factor for AdminClient#createTopic

2019-05-03 Thread Randall Hauch
Thanks for updating, Almog. I have a few suggestions specific to the
builder:

1. The AK pattern for builder classes that are nested is to name the class
"Builder" and to make it publicly visible. We should follow that pattern
here, too.
2. The builder's private constructor makes it impossible to subclass,
should we ever want to do that (e.g., in Connect). If we make it protected
or public, then subclassing is easier (a rough sketch follows below).
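
A minimal sketch of that shape (not the actual KIP code; the names and defaults here are assumptions):

public class NewTopicExample {
    private final String name;
    private final int partitions;
    private final short replicationFactor;

    protected NewTopicExample(String name, int partitions, short replicationFactor) {
        this.name = name;
        this.partitions = partitions;
        this.replicationFactor = replicationFactor;
    }

    // Factory in the style of the NewTopic.build(name) idea quoted below.
    public static Builder build(String name) {
        return new Builder(name);
    }

    // Nested, publicly visible, named "Builder", with a protected constructor
    // so that e.g. Connect could subclass it.
    public static class Builder {
        private final String name;
        private int partitions = 1;          // assumed defaults, for illustration only
        private short replicationFactor = 3;

        protected Builder(String name) {
            this.name = name;
        }

        public Builder partitions(int partitions) {
            this.partitions = partitions;
            return this;
        }

        public Builder replicationFactor(short replicationFactor) {
            this.replicationFactor = replicationFactor;
            return this;
        }

        public NewTopicExample build() {
            return new NewTopicExample(name, partitions, replicationFactor);
        }
    }
}

Usage would then read close to the form suggested below, e.g. NewTopicExample.build("my-topic").partitions(1).replicationFactor((short) 3).build().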

On Thu, May 2, 2019 at 9:44 AM Almog Gavra  wrote:

> Thanks for the input Randall. I'm happy adding it natively to NewTopic
> instead of introducing more verbosity - updating the KIP to reflect this
> now.
>
> On Thu, May 2, 2019 at 9:28 AM Randall Hauch  wrote:
>
> > I wrote the `NewTopicBuilder` in Connect, and it was simply a convenience
> > to more easily set some of the frequently-used properties and the # of
> > partitions and replicas for the new topic in the same way. An example is:
> >
> > NewTopic topicDescription = TopicAdmin.defineTopic(topic).
> > compacted().
> > partitions(1).
> > replicationFactor(3).
> > build();
> >
> > Arguably it should have been added to clients from the beginning. So I'm
> > fine with that being moved to clients, as long as Connect is changed to
> use
> > the new clients class. However, even though Connect's `NewTopicBuilder`
> is
> > in the runtime and technically not part of the public API, things like
> this
> > still tend to get reused elsewhere. Let's keep the Connect
> > `NewTopicBuilder` but deprecate it and have it extend the one in clients.
> > The `TopicAdmin` class in Connect can then refer to the new one in
> clients.
> >
> > The KIP now talks about having a constructor for the builder:
> >
> > NewTopic myTopic = new
> >
> >
> NewTopicBuilder(name).compacted().partitions(1).replicationFactor(3).build();
> >
> > How about adding the builder to the NewTopic class itself:
> >
> > NewTopic myTopic =
> >
> >
> NewTopic.build(name).compacted().partitions(1).replicationFactor(3).build();
> >
> > This is a bit shorter, a bit easier to read (no "new New..."), and more
> > discoverable since anyone looking at the NewTopic source or JavaDoc will
> > maybe notice it.
> >
> > Randall
> >
> >
> > On Thu, May 2, 2019 at 8:56 AM Almog Gavra  wrote:
> >
> > > Sure thing, added more detail to the KIP! To clarify, the plan is to
> move
> > > an existing API from one package to another (NewTopicBuilder exists in
> > the
> > > connect.runtime package) leaving the old in place for compatibility and
> > > deprecating it.
> > >
> > > I'm happy to hear thoughts on whether we should (a) move it to the same
> > > package in a new module so that we don't need to deprecate it or (b)
> take
> > > this opportunity to change any of the APIs.
> > >
> > > On Thu, May 2, 2019 at 8:22 AM Ismael Juma  wrote:
> > >
> > > > If you are adding new API, you need to specify it all in the KIP.
> > > >
> > > > Ismael
> > > >
> > > > On Thu, May 2, 2019, 8:04 AM Almog Gavra  wrote:
> > > >
> > > > > I think that sounds reasonable - I updated the KIP and I will
> remove
> > > the
> > > > > constructor that takes in only partitions.
> > > > >
> > > > > On Thu, May 2, 2019 at 4:44 AM Andy Coates 
> > wrote:
> > > > >
> > > > > > Rather than adding overloaded constructors, which can lead to API
> > > > bloat,
> > > > > > how about using a builder pattern?
> > > > > >
> > > > > > I see it’s already got some constructor overloading, but we could
> > > add a
> > > > > > single new constructor that takes just the name, and support
> > > everything
> > > > > > else being set via builder methods.
> > > > > >
> > > > > > This would result in a better long term api as the number of
> > options
> > > > > > increases.
> > > > > >
> > > > > > Sent from my iPhone
> > > > > >
> > > > > > > On 30 Apr 2019, at 16:28, Almog Gavra 
> > wrote:
> > > > > > >
> > > > > > > Hello Everyone,
> > > > > > >
> > > > > > > I'd like to start a discussion on KIP-464:
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-464%3A+Default+Replication+Factor+for+AdminClient%23createTopic
> > > > > > >
> > > > > > > It's about allowing users of the AdminClient to supply only a
> > > > partition
> > > > > > > count and to use the default replication factor configured by
> the
> > > > kafka
> > > > > > > cluster. Happy to receive any and all feedback!
> > > > > > >
> > > > > > > Cheers,
> > > > > > > Almog
> > > > > >
> > > > >
> > > >
> > >
> >
>
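
For readers skimming the thread: a rough sketch of the end state KIP-464 is after, creating a topic with only a partition count so the broker's default replication factor applies. The Optional-based constructor shown is an assumption about the final API shape, which was still under discussion here:

import java.util.Collections;
import java.util.Optional;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicWithClusterDefaults {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Optional.empty() for the replication factor would mean "use the
            // cluster default" under the proposal; the partition count is still given.
            NewTopic topic = new NewTopic("example-topic", Optional.of(3), Optional.empty());
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}

Until something like this lands, the existing NewTopic(String, int, short) constructor requires an explicit replication factor.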


[jira] [Resolved] (KAFKA-8308) Update jetty for security vulnerability CVE-2019-10241

2019-05-03 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-8308.

   Resolution: Fixed
 Assignee: Ismael Juma  (was: Lee Dongjin)
Fix Version/s: 2.3.0

This was fixed by https://github.com/apache/kafka/pull/6665 which was merged 
today (coincidentally).

> Update jetty for security vulnerability CVE-2019-10241
> --
>
> Key: KAFKA-8308
> URL: https://issues.apache.org/jira/browse/KAFKA-8308
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Di Shang
>Assignee: Ismael Juma
>Priority: Major
>  Labels: security
> Fix For: 2.3.0
>
>
> Kafka 2.2 uses jetty-*-9.4.14.v20181114 which is marked vulnerable
> [https://github.com/apache/kafka/blob/2.2/gradle/dependencies.gradle#L58]
>  
> [https://nvd.nist.gov/vuln/detail/CVE-2019-10241]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3598

2019-05-03 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-8240: Fix NPE in Source.equals() (#6589)

--
[...truncated 2.40 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [VOTE] KIP-460: Admin Leader Election RPC

2019-05-03 Thread Mickael Maison
+1 (non binding)
Thanks for the KIP

On Thu, May 2, 2019 at 11:02 PM Colin McCabe  wrote:
>
> +1 (binding)
>
> thanks, Jose.
>
> best,
> Colin
>
> On Wed, May 1, 2019, at 14:44, Jose Armando Garcia Sancio wrote:
> > Hi all,
> >
> > I would like to start the voting for KIP-460:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-460%3A+Admin+Leader+Election+RPC
> >
> > The thread discussion is here:
> > https://www.mail-archive.com/dev@kafka.apache.org/msg97226.html
> >
> > Thanks!
> > -Jose
> >


[jira] [Resolved] (KAFKA-8240) Source.equals() can fail with NPE

2019-05-03 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8240.

Resolution: Fixed

> Source.equals() can fail with NPE
> -
>
> Key: KAFKA-8240
> URL: https://issues.apache.org/jira/browse/KAFKA-8240
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: beginner, easy-fix, newbie
>
> Reported on a PR: 
> [https://github.com/apache/kafka/pull/5284/files/1df6208f48b6b72091fea71323d94a16102ffd13#r270607795]
> InternalTopologyBuilder#Source.equals() might fail with NPE if 
> `topicPattern==null`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-419 Safely notify Kafka Connect SourceTask is stopped

2019-05-03 Thread Andrew Schofield
Hi Vahid,
Thanks for taking a look at this KIP.

- The KIP is proposing a new interface because the existing "stop()" interface
isn't called at the end of the SourceTask's existence. It's a signal to the
task to stop, rather than a signal that it has actually stopped. Essentially, if
the task has resources to clean up, there's no clear point at which to do this 
before
this KIP. I'm trying to make it a bit easier to write a connector with this 
need.

- The "stop()" interface can be called multiple times which you can see by 
setting
breakpoints. That could be a defect in the implementation but I think it's a bit
risky changing the timing of that call because some connector somewhere might
cease working. Unlikely, but compatibility is important. Also, it's important
that the stop() signal is noticed, and a SourceTask runs on multiple threads, so
the timing is tricky.
The new method is called exactly once after everything has quiesced.

- I don't disagree that a verb sounds better but couldn't really think of a more
final alternative to "stop()". That's why I went with "stopped()". Could be 
"cleanup()"
or "release()". Suggestions are welcome.
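
To make the intent concrete, here is a rough sketch of how a connector might use the proposed callback. The method name and its exact signature are still open, so treat this as an assumption rather than the final API:

import java.util.List;
import java.util.Map;

import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class ExampleSourceTask extends SourceTask {
    // Stands in for any long-lived session to the source system.
    private AutoCloseable session;

    @Override
    public String version() {
        return "0.0.1";
    }

    @Override
    public void start(Map<String, String> props) {
        session = () -> { };   // placeholder; a real task would open its connection here
    }

    @Override
    public List<SourceRecord> poll() {
        return null;           // returning null is allowed when there is nothing to emit
    }

    @Override
    public void stop() {
        // Only a request to stop polling; poll()/commit() may still be in flight,
        // so it is not safe to tear the session down here.
    }

    // Proposed by KIP-419 (name under discussion): called exactly once, after the
    // framework has finished with poll() and commit() for this task.
    public void stopped() throws Exception {
        if (session != null) {
            session.close();
        }
    }
}

With this split, stop() stays a quick signal and the session teardown moves to the new callback.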

Thanks.
Andrew 

On 03/05/2019, 06:16, "Vahid Hashemian"  wrote:

Hi Andrew,

Thanks for the KIP. I'm not too familiar with the internals of KC so I hope
you can clarify a couple of things:

   - It seems the KIP is proposing a new interface because the existing
   "stop()" interface doesn't fully perform what it should ideally be doing.
   Is that a fair statement?
   - You mentioned the "stop()" interface can be called multiple times.
   Would the same thing be true for the proposed interface? Does it matter? 
Or
   is there a guard against that?
   - I also agree with Ryanne that using a verb sounds more intuitive for an
   interface that's supposed to trigger some action.

Regards,
--Vahid


On Thu, Jan 24, 2019 at 9:23 AM Ryanne Dolan  wrote:

> Ah, I'm sorta wrong -- in the current implementation, restartTask()
> stops the task and starts a *new* task instance with the same task ID.
> (I'm not certain that is clear from the documentation or interfaces,
> or if that may change in the future.)
>
> Ryanne
>
> On Thu, Jan 24, 2019 at 10:25 AM Ryanne Dolan 
> wrote:
> >
> > Andrew, I believe the task can be started again with start() during the
> stopping and stopped states in your diagram.
> >
> > Ryanne
> >
> > On Thu, Jan 24, 2019, 10:20 AM Andrew Schofield <
> andrew_schofi...@live.com wrote:
> >>
> >> I've now added a diagram to illustrate the states of a SourceTask. The
> KIP is essentially trying to give a clear signal to SourceTask when all
> work has stopped. In particular, if a SourceTask has a session to the
> source system that it uses in poll() and commit(), it now has a safe way 
to
> release this.
> >>
> >> Andrew Schofield
> >> IBM Event Streams
> >>
> >> On 21/01/2019, 10:13, "Andrew Schofield" 
> wrote:
> >>
> >> Ryanne,
> >> Thanks for your comments. I think my overarching point is that the
> various states of a SourceTask and the transitions between them seem a bit
> loose and that makes it difficult to figure out when the resources held by
> a SourceTask can be safely released. Your "I can't tell from the
> documentation" comment is key here :) Neither could I.
> >>
> >> The problem is that stop() is a signal to stop polling. It's
> basically a request from the framework to the task and it doesn't tell the
> task that it's actually finished. One of the purposes of the KC framework
> is to make life easy for a connector developer and a nice clean "all done
> now" method would help.
> >>
> >> I think I'll add a diagram to illustrate to the KIP.
> >>
> >> Andrew Schofield
> >> IBM Event Streams
> >>
> >> On 18/01/2019, 19:02, "Ryanne Dolan"  wrote:
> >>
> >> Andrew, do we know whether the SourceTask may be start()ed
> again? If this
> >> is the last call to a SourceTask I suggest we call it close().
> I can't tell
> >> from the documentation.
> >>
> >> Also, do we need this if a SourceTask can keep track of whether
> it was
> >> start()ed since the last stop()?
> >>
> >> Ryanne
> >>
> >>
> >> On Fri, Jan 18, 2019, 12:02 PM Andrew Schofield <
> andrew_schofi...@live.com
> >> wrote:
> >>
> >> > Hi,
> >> > I’ve created a new KIP to enhance the SourceTask interface in
> Kafka
> >> > Connect.
> >> >
> >> >
> >> >
> 

Re: [DISCUSS] 2.2.1 Bug Fix Release

2019-05-03 Thread Vahid Hashemian
Thanks for the filter fix and the heads up John.
I'll wait for that to go through then.

--Vahid

On Fri, May 3, 2019 at 8:33 AM John Roesler  wrote:

> Thanks for volunteering, Vahid!
>
> I noticed that the "unresolved issues" filter on the plan page was still
> set to 2.1.1 (I fixed it).
>
> There's one blocker left: https://issues.apache.org/jira/browse/KAFKA-8289
> ,
> but it's merged to trunk and we're cherry-picking to 2.2 today.
>
> Thanks again!
> -John
>
> On Thu, May 2, 2019 at 10:38 PM Vahid Hashemian  >
> wrote:
>
> > If there are no objections on the proposed plan, I'll start preparing the
> > first release candidate.
> >
> > Thanks,
> > --Vahid
> >
> > On Thu, Apr 25, 2019 at 6:27 AM Ismael Juma  wrote:
> >
> > > Thanks Vahid!
> > >
> > > On Wed, Apr 24, 2019 at 10:44 PM Vahid Hashemian <
> > > vahid.hashem...@gmail.com>
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to volunteer for the release manager of the 2.2.1 bug fix
> > > release.
> > > > Kafka 2.2.0 was released on March 22, 2019.
> > > >
> > > > At this point, there are 29 resolved JIRA issues scheduled for
> > inclusion
> > > in
> > > > 2.2.1:
> > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.1
> > > >
> > > > The release plan is documented here:
> > > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1
> > > >
> > > > Thanks!
> > > > --Vahid
> > > >
> > >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
> >
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] 2.2.1 Bug Fix Release

2019-05-03 Thread John Roesler
Thanks for volunteering, Vahid!

I noticed that the "unresolved issues" filter on the plan page was still
set to 2.1.1 (I fixed it).

There's one blocker left: https://issues.apache.org/jira/browse/KAFKA-8289,
but it's merged to trunk and we're cherry-picking to 2.2 today.

Thanks again!
-John

On Thu, May 2, 2019 at 10:38 PM Vahid Hashemian 
wrote:

> If there are no objections on the proposed plan, I'll start preparing the
> first release candidate.
>
> Thanks,
> --Vahid
>
> On Thu, Apr 25, 2019 at 6:27 AM Ismael Juma  wrote:
>
> > Thanks Vahid!
> >
> > On Wed, Apr 24, 2019 at 10:44 PM Vahid Hashemian <
> > vahid.hashem...@gmail.com>
> > wrote:
> >
> > > Hi all,
> > >
> > > I'd like to volunteer for the release manager of the 2.2.1 bug fix
> > release.
> > > Kafka 2.2.0 was released on March 22, 2019.
> > >
> > > At this point, there are 29 resolved JIRA issues scheduled for
> inclusion
> > in
> > > 2.2.1:
> > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.2.1
> > >
> > > The release plan is documented here:
> > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1
> > >
> > > Thanks!
> > > --Vahid
> > >
> >
>
>
> --
>
> Thanks!
> --Vahid
>


[jira] [Created] (KAFKA-8319) Flaky Test KafkaStreamsTest.statefulTopologyShouldCreateStateDirectory

2019-05-03 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-8319:
--

 Summary: Flaky Test 
KafkaStreamsTest.statefulTopologyShouldCreateStateDirectory
 Key: KAFKA-8319
 URL: https://issues.apache.org/jira/browse/KAFKA-8319
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk11 #482

2019-05-03 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-8240: Fix NPE in Source.equals() (#6589)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 6d3ff132b57835fc879d678e9addc5e7c3804205 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 6d3ff132b57835fc879d678e9addc5e7c3804205
Commit message: "KAFKA-8240: Fix NPE in Source.equals() (#6589)"
 > git rev-list --no-walk 3ba4686d4d650f0f9155b2e22dddb192a5a56a6c # timeout=10
Setting GRADLE_4_10_2_HOME=/home/jenkins/tools/gradle/4.10.2
Setting GRADLE_4_10_2_HOME=/home/jenkins/tools/gradle/4.10.2
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins2687958088024404688.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.10.2/bin/gradle
/tmp/jenkins2687958088024404688.sh: line 4: 
/home/jenkins/tools/gradle/4.10.2/bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
Recording test results
Setting GRADLE_4_10_2_HOME=/home/jenkins/tools/gradle/4.10.2
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_10_2_HOME=/home/jenkins/tools/gradle/4.10.2
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user wangg...@gmail.com


Re: [DISCUSS] KIP-398: Support reading trust store from classpath

2019-05-03 Thread Noa Resare
Thank you for your input, Jun, Colin and Rajini! It seems I'm not that quick at 
responding either :) 

I would like to maintain that the SSL trust store is conceptually different 
from the other pieces
of configuration that is mentioned such as key stores and GSSAPI keytabs. The 
trust store is
not secret information and the same bytes are shipped to a large number of 
clients, compared 
to the other types of authentication data mentioned where the information is 
typically secret
and ideally issued just for a single client. Because of this I would claim that 
it is not entirely
inconsistent to only provide this facility for the SSL use case.

Reading through KIP-421, it seems this only concerns itself with config values, 
not a way
to inject the contents of files that config values point to, so that wouldn’t 
help us. 

With regards to the performance implications of loading the trust store from 
classpath, my
reading of the code is that the SSLContext returned by 
SslFactory.createSSLContext() is a long-lived
object, and as such the trust store InputStream would only be read on the first 
connect which
means that the total performance impact even of a really slow classpath lookup 
would be small
in the grand scheme of things.
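
As an illustration of the mechanics (not the actual SslFactory code), loading a truststore resource from the classpath and wiring it into an SSLContext could look roughly like this; the resource name and password are assumptions:

import java.io.InputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class ClasspathTruststoreExample {
    public static SSLContext sslContextFromClasspath() throws Exception {
        // Hypothetical resource name; a "classpath:" prefixed config value would
        // be translated into a lookup like this.
        try (InputStream in = ClasspathTruststoreExample.class
                .getClassLoader().getResourceAsStream("client-truststore.jks")) {
            if (in == null) {
                throw new IllegalStateException("truststore not found on classpath");
            }
            KeyStore trustStore = KeyStore.getInstance("JKS");
            trustStore.load(in, "changeit".toCharArray());   // assumed password

            TrustManagerFactory tmf =
                    TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trustStore);

            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, tmf.getTrustManagers(), null);
            return ctx;
        }
    }
}

Since the resulting SSLContext is long-lived, the classpath lookup would happen once per factory rather than once per connection.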

All that said, as it seems the appetite for this change is limited, we would be 
okay with implementing
this using the KIP-383 extension point.

/noa

> On 20 Mar 2019, at 20:41, Rajini Sivaram  wrote:
> 
> Hi Noa,
> 
> My understanding was that KIP-398 is proposing to look up only SSL
> truststores from CLASSPATH, not files that contain actual credentials like
> SSL keystores. In production scenarios, lots of users tend to use
> certificates signed by trusted CAs, so you don't actually need to configure
> a truststore on the client. Since you need to create a jar to distribute
> anyway to hold the truststore, perhaps a plugin using the approaches Jun or
> Colin suggested (KIP-383 or KIP-421) may be more suitable.
> 
> Regards,
> 
> Rajini
> 
> 
> On Wed, Mar 20, 2019 at 8:14 PM Colin McCabe  wrote:
> 
>> Hi Noa,
>> 
>> I think Jun makes a good point that this would be better as a generic
>> mechanism, not specific to SSL.  With KIP-421: Support resolving
>> externalized secrets in AbstractConfig, we should be able to do something
>> like that.  We could have a ClasspathConfigProvider.  Then this could be
>> used in the same way as any external config provider.
>> 
>> I would also add a note of caution that searching through the CLASSPATH
>> can be quite slow, and can be confusing due to the use of multiple class
>> loaders, many directories in the CLASSPATH, and so forth.  I worked on the
>> Hadoop project for a while, and our experience was that most users would
>> have been better off with just a fixed path.  Debugging configuration
>> issues caused by searching through the CLASSPATH was frustrating, and the
>> time used to search the path was significant when viewed in program
>> execution traces.  But your mileage may vary, and it's reasonable to offer
>> this as an option for people using Spring, etc.
>> 
>> best,
>> Colin
>> 
>> 
>> On Tue, Mar 19, 2019, at 18:29, Jun Rao wrote:
>>> Hi, Noa,
>>> 
>>> Thanks for the KIP and sorry for the late response.
>>> 
>>> The KIP itself looks reasonable. My only concern is that it's very
>> specific
>>> to SSL. We have other configurations that also depend on files (e.g.
>> keytab
>>> in Sasl GSSAPI). It would be inconsistent that we only support the
>>> CLASSPATH: syntax in SSL. So, I was thinking that we could either (1) try
>>> to support CLASSPATH: generally where a file is being used, or (2) just
>>> rely on KIP-383, which seems more general purpose.
>>> 
>>> Jun
>>> 
>>> 
>>> 
>>> On Tue, Dec 18, 2018 at 2:03 AM Noa Resare  wrote:
>>> 
 I believe that I have addressed the concerns raised in this
>> discussion. It
 seems reasonable to start a vote in about two days. Please raise any
 concerns you may have and I will be happy to attempt to address them.
 
 /noa
 
> On 10 Dec 2018, at 10:53, Noa Resare  wrote:
> 
> Thank you for your comments, see replies inline.
> 
>> On 9 Dec 2018, at 01:33, Harsha  wrote:
>> 
>> Hi Noa,
>> Based on KIP's motivation section
>> "If we had the ability to load a trust store from the classpath as
>> well
 as from a file, the trust store could be shipped in a jar that could be
 declared as a dependency and piggyback on the distribution
>> infrastructure
 already in place."
>> 
>> It looks like you are making the assumption that distributing a jar
>> is
 better than the file. I am not sure why one is better than the other.
>> There
 are other use-cases where one can make a call to a local "daemon" over a Unix
 socket to fetch a certificate as well.
> 
> It was not my intention to convey that loading the trust store from
>> the
 classpath is inherently better in all cases. The proposed change simply
 

[jira] [Created] (KAFKA-8318) Session Window Aggregations generate an extra tombstone

2019-05-03 Thread John Roesler (JIRA)
John Roesler created KAFKA-8318:
---

 Summary: Session Window Aggregations generate an extra tombstone
 Key: KAFKA-8318
 URL: https://issues.apache.org/jira/browse/KAFKA-8318
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler


See the discussion 
https://github.com/apache/kafka/pull/6654#discussion_r280231439

The session merging logic generates a tombstone in addition to an update when 
the session window already exists. It's not a correctness issue, just a small 
performance hit, because that tombstone is immediately invalidated by the 
update.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)