Re: [VOTE] KIP-331 Add default implementation to close() and configure() for Serializer, Deserializer and Serde

2018-09-20 Thread Chia-Ping Tsai
KIP-336 [1] has been merged, so it is time to revive this thread (KIP-331 [2]).
The last discussion was about whether we should add the FunctionalInterface
annotation to Serializer and Deserializer. In the KIP-336 discussion we
mentioned that we will probably add a default implementation for the
headerless method later, so adding the FunctionalInterface annotation is not
suitable for now.

KIP-331 no longer includes the change adding the FunctionalInterface
annotation. Please take another look.

[1] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=87298242
[2] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-331+Add+default+implementation+to+close%28%29+and+configure%28%29+for+Serializer%2C+Deserializer+and+Serde

Cheers,
Chia-Ping
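
[Editor's note: an illustrative sketch of what KIP-331 proposes, i.e. no-op
default bodies for configure() and close() so simple serializers only have to
implement serialize(). The shape mirrors
org.apache.kafka.common.serialization.Serializer, but SketchSerializer is a
standalone example for this thread, not the actual Kafka interface.]

```java
import java.util.Map;

// Hypothetical stand-in for Kafka's Serializer showing KIP-331's idea:
// configure() and close() get empty default bodies.
interface SketchSerializer<T> extends AutoCloseable {

    // no-op by default; override only when configuration is needed
    default void configure(Map<String, ?> configs, boolean isKey) {
    }

    byte[] serialize(String topic, T data);

    // no-op by default; narrowed to not throw, unlike AutoCloseable.close()
    @Override
    default void close() {
    }
}
```

With the defaults in place, one abstract method remains, so the interface
happens to be usable as a lambda - which is exactly why the
FunctionalInterface annotation question came up:
`SketchSerializer<String> s = (topic, data) -> data.getBytes();`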


On 2018/07/20 10:56:59, Ismael Juma  wrote: 
> Part of the motivation for this KIP is to make these interfaces functional
> interfaces. But I think that may not be desirable due to the method that
> passes headers. So, it doesn't make sense to discuss two separate changes
> to the same interfaces in isolation; we should figure out how we want them
> to work holistically.
> 
> Ismael
> 
> On Fri, Jul 20, 2018 at 3:50 AM Chia-Ping Tsai  wrote:
> 
> > > The KIP needs 3 binding votes to pass.
> >
> > Thanks for the reminder. I will reopen the vote until we get 3 binding
> > votes.
> >
> > > I still think we should include the details of how things will look
> > > with the headers being passed to serializers/deserializers to ensure
> > > things actually make sense as a whole.
> >
> > This KIP is unrelated to both of those methods - serialize() and
> > deserialize(). We won't add default implementations for them in this KIP.
> > Please correct me if I misunderstood what you said.
> >
> > Cheers,
> > Chia-Ping
> >
> > On 2018/07/09 01:55:41, Ismael Juma  wrote:
> > > The KIP needs 3 binding votes to pass. I still think we should include
> > > the details of how things will look with the headers being passed to
> > > serializers/deserializers to ensure things actually make sense as a
> > > whole.
> > >
> > > Ismael
> > >
> > >
> > > On Sun, 8 Jul 2018, 18:31 Chia-Ping Tsai,  wrote:
> > >
> > > > All,
> > > >
> > > > The 72 hours have passed. The vote result of KIP-331 is shown below.
> > > >
> > > > 1 binding vote (Matthias J. Sax)
> > > > 4 non-binding votes (John Roesler, Richard Yu, vito jeng and Chia-Ping)
> > > >
> > > > Cheers,
> > > > Chia-Ping
> > > >
> > > > On 2018/07/05 14:45:01, Chia-Ping Tsai  wrote:
> > > > > hi all,
> > > > >
> > > > > I would like to start voting on "KIP-331 Add default implementation
> > to
> > > > close() and configure() for Serializer, Deserializer and Serde"
> > > > >
> > > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-331+Add+default+implementation+to+close%28%29+and+configure%28%29+for+Serializer%2C+Deserializer+and+Serde
> > > > >
> > > > > Cheers,
> > > > > Chia-Ping
> > > > >
> > > >
> > >
> >
> 


Build failed in Jenkins: kafka-trunk-jdk10 #510

2018-09-20 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6923; Refactor Serializer/Deserializer for KIP-336 (#5494)

--
[...truncated 1.40 MB...]

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList PASSED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
STARTED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct PASSED

> Task :streams:examples:processResources NO-SOURCE
> Task :streams:examples:processTestResources NO-SOURCE
> Task :spotlessScala
> Task :spotlessScalaCheck
> Task :streams:streams-scala:processResources NO-SOURCE
> Task :streams:streams-scala:processTestResources
> Task :streams:test-utils:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task 

Build failed in Jenkins: kafka-trunk-jdk8 #2978

2018-09-20 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6923; Refactor Serializer/Deserializer for KIP-336 (#5494)

--
[...truncated 2.33 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED


Re: [DISCUSS] KIP-370: Remove Orphan Partitions

2018-09-20 Thread xiongqi wu
Colin,

Thanks for the comment.
1)
auto.orphan.partition.removal.delay.ms refers to the timeout since the first
LeaderAndIsr request was received. The idea is that we want to wait long
enough to receive an up-to-date LeaderAndIsr request and any old or new
partition reassignment requests.

2)
Is there any logic to remove the partition folders on disk?  I can only
find references to removing older log segments, but not the folder, in the
KIP.
==> yes, the plan is to remove the partition folders as well.

I will update the KIP to make it clearer.


Xiongqi (Wesley) Wu
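
[Editor's note: a rough sketch of the one-shot cleanup described above - once
the broker has received its first LeaderAndIsr request, wait the configured
delay, then delete partition directories that no request has claimed. Only
the config name comes from the KIP discussion; the class, method names, and
callback shape below are hypothetical.]

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical broker-side helper: schedules a single delayed cleanup of
// orphan partition directories after the first LeaderAndIsr request.
class OrphanPartitionCleanup {
    static final String DELAY_CONFIG = "auto.orphan.partition.removal.delay.ms";

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private boolean scheduled = false;

    // Called on each LeaderAndIsr request; only the first call arms the
    // timer, so the cleanup runs at most once per broker start.
    synchronized void onLeaderAndIsrRequest(long delayMs, Runnable deleteOrphans) {
        if (!scheduled) {
            scheduled = true;
            scheduler.schedule(deleteOrphans, delayMs, TimeUnit.MILLISECONDS);
        }
    }
}
```

The delay exists so that late partition reassignments still have a chance to
claim their directories before anything is deleted.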


On Thu, Sep 20, 2018 at 5:02 PM Colin McCabe  wrote:

> Hi Xiongqi,
>
> Thanks for the KIP.
>
> Can you be a bit more clear what the timeout
> auto.orphan.partition.removal.delay.ms refers to?  Is the timeout
> measured since the partition was supposed to be on the broker?  Or is the
> timeout measured since the broker started up?
>
> Is there any logic to remove the partition folders on disk?  I can only
> find references to removing older log segments, but not the folder, in the
> KIP.
>
> best,
> Colin
>
> On Wed, Sep 19, 2018, at 10:53, xiongqi wu wrote:
> > Any comments?
> >
> > Xiongqi (Wesley) Wu
> >
> >
> > On Mon, Sep 10, 2018 at 3:04 PM xiongqi wu  wrote:
> >
> > > Here is the implementation for the KIP 370.
> > >
> > >
> > >
> https://github.com/xiowu0/kafka/commit/f1bd3085639f41a7af02567550a8e3018cfac3e9
> > >
> > >
> > > The purpose is to do a one-time cleanup (after a configured delay) of
> > > orphan partitions when a broker starts up.
> > >
> > >
> > > Xiongqi (Wesley) Wu
> > >
> > >
> > > On Wed, Sep 5, 2018 at 10:51 AM xiongqi wu 
> wrote:
> > >
> > >>
> > >> This KIP enables the broker to remove orphan partitions automatically.
> > >>
> > >>
> > >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-370%3A+Remove+Orphan+Partitions
> > >>
> > >>
> > >> Xiongqi (Wesley) Wu
> > >>
> > >
>


Build failed in Jenkins: kafka-trunk-jdk10 #509

2018-09-20 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
error: Could not read c93a9ff0cabaae0d16e0050690cf6594e8cf780d
error: Could not read b95a7edde0df10b1c567094f1fa3e5fbe3e6cd27
error: Could not read 7299e1836ba2ff9485c1256410c569e35379
remote: Enumerating objects: 2973, done.

Re: [DISCUSS] KIP-370: Remove Orphan Partitions

2018-09-20 Thread Colin McCabe
Hi Xiongqi,

Thanks for the KIP.

Can you be a bit more clear what the timeout 
auto.orphan.partition.removal.delay.ms refers to?  Is the timeout measured 
since the partition was supposed to be on the broker?  Or is the timeout 
measured since the broker started up?

Is there any logic to remove the partition folders on disk?  I can only find 
references to removing older log segments, but not the folder, in the KIP.

best,
Colin

On Wed, Sep 19, 2018, at 10:53, xiongqi wu wrote:
> Any comments?
> 
> Xiongqi (Wesley) Wu
> 
> 
> On Mon, Sep 10, 2018 at 3:04 PM xiongqi wu  wrote:
> 
> > Here is the implementation for the KIP 370.
> >
> >
> > https://github.com/xiowu0/kafka/commit/f1bd3085639f41a7af02567550a8e3018cfac3e9
> >
> >
> > The purpose is to do a one-time cleanup (after a configured delay) of
> > orphan partitions when a broker starts up.
> >
> >
> > Xiongqi (Wesley) Wu
> >
> >
> > On Wed, Sep 5, 2018 at 10:51 AM xiongqi wu  wrote:
> >
> >>
> >> This KIP enables the broker to remove orphan partitions automatically.
> >>
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-370%3A+Remove+Orphan+Partitions
> >>
> >>
> >> Xiongqi (Wesley) Wu
> >>
> >


[jira] [Created] (KAFKA-7428) ConnectionStressSpec: add "action", allow multiple clients

2018-09-20 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-7428:
--

 Summary: ConnectionStressSpec: add "action", allow multiple clients
 Key: KAFKA-7428
 URL: https://issues.apache.org/jira/browse/KAFKA-7428
 Project: Kafka
  Issue Type: Test
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


ConnectionStressSpec: add "action", allow multiple clients



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #508

2018-09-20 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
error: Could not read c93a9ff0cabaae0d16e0050690cf6594e8cf780d
error: Could not read b95a7edde0df10b1c567094f1fa3e5fbe3e6cd27
error: Could not read 7299e1836ba2ff9485c1256410c569e35379
remote: Enumerating objects: 2973, done.

Build failed in Jenkins: kafka-trunk-jdk8 #2977

2018-09-20 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] [KAFKA-7379] [streams] send.buffer.bytes should be allowed to set -1 
in

--
[...truncated 2.68 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

[jira] [Resolved] (KAFKA-6923) Consolidate ExtendedSerializer/Serializer and ExtendedDeserializer/Deserializer

2018-09-20 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6923.

Resolution: Fixed

> Consolidate ExtendedSerializer/Serializer and 
> ExtendedDeserializer/Deserializer
> ---
>
> Key: KAFKA-6923
> URL: https://issues.apache.org/jira/browse/KAFKA-6923
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Ismael Juma
>Assignee: Viktor Somogyi
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.1.0
>
>
> The Javadoc of ExtendedDeserializer states:
> {code}
>  * Prefer {@link Deserializer} if access to the headers is not required. Once 
> Kafka drops support for Java 7, the
>  * {@code deserialize()} method introduced by this interface will be added to 
> Deserializer with a default implementation
>  * so that backwards compatibility is maintained. This interface may be 
> deprecated once that happens.
> {code}
> Since we have dropped Java 7 support, we should figure out how to do this. 
> There are compatibility implications, so a KIP is needed.
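
[Editor's note: a sketch of the consolidation the Javadoc above describes -
the headers-aware deserialize() gets a default implementation that ignores
headers and delegates to the plain overload, so existing Deserializer
implementations keep working and ExtendedDeserializer becomes redundant.
"Headers" is stubbed as an empty marker interface here; the real type is
org.apache.kafka.common.header.Headers, and SketchDeserializer is
illustrative, not the actual interface.]

```java
// Hypothetical stand-in for Kafka's consolidated Deserializer.
interface SketchDeserializer<T> {
    // stub for org.apache.kafka.common.header.Headers
    interface Headers { }

    T deserialize(String topic, byte[] data);

    // backwards-compatible default added by the consolidation:
    // ignore headers and delegate to the existing method
    default T deserialize(String topic, Headers headers, byte[] data) {
        return deserialize(topic, data);
    }
}
```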



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-328: Ability to suppress updates for KTables

2018-09-20 Thread John Roesler
Hello all,

During review of https://github.com/apache/kafka/pull/5567 for KIP-328,
the reviewers raised many good suggestions for the API.

The basic design of the suppress operation remains the same, but the
config object is (in my opinion) far more ergonomic with their suggestions.

I have updated the KIP to reflect the new config (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-328%3A+Ability+to+suppress+updates+for+KTables#KIP-328:AbilitytosuppressupdatesforKTables-NewSuppressOperator
)

Please let me know if anyone wishes to change their vote, and we call for a
recast.

Thanks,
-John
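For readers following along, the hierarchy change described in the quoted messages below (an abstract grace getter on Windows, with setters only on the concrete window types that can close) can be sketched roughly like this. Class and method names are illustrative, not the exact KIP-328 API:

```java
public class GraceHierarchySketch {
    static abstract class Windows {
        // Every window definition must answer "when is a record too late?"...
        abstract long gracePeriodMs();
    }

    static final class TimeWindows extends Windows {
        private long graceMs = 0L;
        // ...but only windows that can close expose a setter.
        TimeWindows grace(long ms) { this.graceMs = ms; return this; }
        long gracePeriodMs() { return graceMs; }
    }

    static final class UnlimitedWindows extends Windows {
        // Unlimited windows never close, so there is no setter to retract.
        long gracePeriodMs() { return Long.MAX_VALUE; }
    }

    public static void main(String[] args) {
        System.out.println(new TimeWindows().grace(5000L).gracePeriodMs());
        System.out.println(new UnlimitedWindows().gracePeriodMs());
    }
}
```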

On Thu, Aug 23, 2018 at 12:54 PM Matthias J. Sax 
wrote:

> It seems nobody has any objections against the change.
>
> Thanks for the KIP improvement. I'll go ahead and merge the PR.
>
>
> -Matthias
>
> On 8/21/18 2:44 PM, John Roesler wrote:
> > Hello again, all,
> >
> > I belatedly had a better idea for adding grace period to the Windows
> class
> > hierarchy (TimeWindows, UnlimitedWindows, JoinWindows). Instead of
> > providing the grace-setter in the abstract class and having to retract it
> > in UnlimitedWindows, I've made the getter an abstract method in Windows and
> > only added setters to Time and Join windows.
> >
> > This should not only improve the ergonomics of grace period, but make the
> > whole class hierarchy more maintainable.
> >
> > See the PR for more details: https://github.com/apache/kafka/pull/5536
> >
> > I've updated the KIP accordingly. Here's the diff:
> >
> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=87295409=11=9
> >
> > Please let me know if this changes your vote.
> >
> > Thanks,
> > -John
> >
> > On Mon, Aug 13, 2018 at 5:20 PM John Roesler  wrote:
> >
> >> Hey all,
> >>
> >> I just wanted to let you know that a few small issues surfaced during
> >> implementation and review. I've updated the KIP. Here's the diff:
> >>
> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=87295409=9=8
> >>
> >> Basically:
> >> * the metrics named "*-event-*" are inconsistent with existing
> >> nomenclature, and will be "*-record-*" instead (late records instead of
> >> late events, for example)
> >> * the apis taking and returning Duration will use long millis instead.
> We
> >> do want to transition to Duration in the future, but we shouldn't do it
> >> piecemeal.
> >>
> >> Thanks,
> >> -John
> >>
> >> On Tue, Aug 7, 2018 at 12:07 PM John Roesler  wrote:
> >>
> >>> Thanks everyone, KIP-328 has passed with 3 binding votes (Guozhang,
> >>> Damian, and Matthias) and 3 non-binding (Ted, Bill, and me).
> >>>
> >>> Thanks for your time,
> >>> -John
> >>>
> >>> On Mon, Aug 6, 2018 at 6:35 PM Matthias J. Sax 
> >>> wrote:
> >>>
>  +1 (binding)
> 
>  Thanks for the KIP.
> 
> 
>  -Matthias
> 
>  On 8/3/18 12:52 AM, Damian Guy wrote:
> > Thanks John! +1
> >
> > On Mon, 30 Jul 2018 at 23:58 Guozhang Wang 
> wrote:
> >
> >> Yes, the addendum lgtm as well. Thanks!
> >>
> >> On Mon, Jul 30, 2018 at 3:34 PM, John Roesler 
>  wrote:
> >>
> >>> Another thing that came up after I started working on an
>  implementation
> >> is
> >>> that in addition to deprecating "retention" from the Windows
>  interface,
> >> we
> >>> also need to deprecate "segmentInterval", for the same reasons. I
>  simply
> >>> overlooked it previously. I've updated the KIP accordingly.
> >>>
> >>> Hopefully, this doesn't change anyone's vote.
> >>>
> >>> Thanks,
> >>> -John
> >>>
> >>> On Mon, Jul 30, 2018 at 5:31 PM John Roesler 
>  wrote:
> >>>
>  Thanks Guozhang,
> 
>  Thanks for that catch. to clarify, currently, events are "late"
> only
> >> when
>  they are older than the retention period. Currently, we detect
> this
>  in
> >>> the
>  processor and record it as a "skipped-record". We then do not
>  attempt
> >> to
>  store the event in the window store. If a user provided a
> >> pre-configured
>  window store with a retention period smaller than the one they
>  specify
> >>> via
>  Windows#until, the segmented store will drop the update with no
>  metric
> >>> and
>  record a debug-level log.
> 
>  With KIP-328, with the introduction of "grace period" and moving
> >>> retention
>  fully into the state store, we need to have metrics for both "late
> >>> events"
>  (new records older than the grace period) and "expired window
>  events"
> >>> (new
>  records for windows that are no longer retained in the state
>  store). I
>  already proposed metrics for the late events, and I've just
> updated
>  the
> >>> KIP
>  with metrics for the expired window events. I also updated the KIP
>  to
> >>> make
>  it clear that neither late nor expired 

Re: [DISCUSS] KIP-362: Dynamic Session Window Support

2018-09-20 Thread Matthias J. Sax
Thanks for following up. Very nice examples!

I think, that the window definition for Flink is semantically
questionable. If there is only a single record, why is the window
defined as [ts, ts+gap]? To me, this definition is not sound and seems
to be arbitrary. To define the windows as [ts-gap,ts+gap] as you mention
would be semantically more useful -- still, I think that defining the
window as [ts,ts] as we do currently in Kafka Streams is semantically
the best.

I have the impression that Flink only defines them differently because
it simplifies the implementation. (I.e., an implementation
detail leaks into the semantics, which is usually not desired.)

However, I believe that we could change the implementation accordingly.
We could store the windowed keys, as [ts-gap,ts+gap] (or [ts,ts+gap]) in
RocksDB, but at API level we return [ts,ts]. This way, we can still find
all windows we need and provide the same deterministic behavior and keep
the current window boundaries on the semantic level (there is no need to
store the window start and/or end time). With this technique, we can
also implement dynamic session gaps. I think, we would need to store the
used "gap" for each window, too. But again, this would be an
implementation detail.
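A toy sketch of the storage trick described above — index each session by [ts-gap, ts+gap] for merging, but report [min ts, max ts] as the window — might look like this. It is purely illustrative, not the Kafka Streams implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class SymmetricSessionMergeSketch {
    static final class Session {
        long lo, hi;     // stored merge range [ts - gap, ts + gap]
        long start, end; // semantic window [min ts, max ts]
        Session(long ts, long gap) { lo = ts - gap; hi = ts + gap; start = end = ts; }
        boolean overlaps(Session o) { return lo <= o.hi && o.lo <= hi; }
        void merge(Session o) {
            lo = Math.min(lo, o.lo); hi = Math.max(hi, o.hi);
            start = Math.min(start, o.start); end = Math.max(end, o.end);
        }
    }

    // Add a record (ts, gap): absorb every stored session whose merge
    // range overlaps the new record's symmetric range.
    static List<Session> add(List<Session> sessions, long ts, long gap) {
        Session s = new Session(ts, gap);
        List<Session> out = new ArrayList<>();
        for (Session existing : sessions) {
            if (s.overlaps(existing)) s.merge(existing); else out.add(existing);
        }
        out.add(s);
        return out;
    }

    public static void main(String[] args) {
        // Example from the thread: (7,2), (10,10), (19,5), (15,3)
        List<Session> ss = new ArrayList<>();
        ss = add(ss, 7, 2); ss = add(ss, 10, 10); ss = add(ss, 19, 5); ss = add(ss, 15, 3);
        for (Session s : ss)
            System.out.println("[" + s.start + "," + s.end + "] stored as [" + s.lo + "," + s.hi + "]");
    }
}
```

Note the API-level window reported is [7,19], while the RocksDB key range used for merging is [0,24] — the record-level gaps stay an implementation detail.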

Let's see what others think.

One tricky question we would need to address is, how we can be backward
compatible. I am currently working on KIP-258 that should help to
address this backward compatibility issue though.


-Matthias



On 9/19/18 5:17 PM, Lei Chen wrote:
> Thanks Matthias. That makes sense.
> 
> You're right that symmetric merge is necessary to ensure consistency. On
> the other hand, I kinda feel it defeats the purpose of dynamic gap, which
> is to update the gap from the old value to the new value. The symmetric merge
> always honors the larger gap in both directions, rather than honoring the gap
> carried by the record with the larger timestamp. I wasn't able to find any semantic
> definitions w.r.t. this particular aspect online, but spent some time
> looking into other streaming engines like Apache Flink.
> 
> Apache Flink defines the window differently, that uses (start time, start
> time + gap).
> 
> so our previous example (10, 10), (19,5),(15,3) in Flink's case will be:
> [10,20]
> [19,24] => merged to [10,24]
> [15,18] => merged to [10,24]
> 
> while example (15,3)(19,5)(10,10) will be
> [15,18]
> [19,24] => no merge
> [10,20] => merged to [10,24]
> 
> however, since it only records the gap in the future direction, not the past, a late
> record might not trigger any merge where in symmetric merge it would.
> (7,2),(10, 10), (19,5),(15,3)
> [7,9]
> [10,20]
> [19,24] => merged to [10,24]
> [15,18] => merged to [10,24]
> so at the end
> two windows [7,9][10,24] are there.
> 
> As you can see, in Flink, the gap semantics are such that a gap carried
> by one record only affects how this record merges with future
> records. E.g., a later event (T2, G2) will only be merged with (T1, G1) if
> T2 is less than T1+G1, but not when T1 is less than T2 - G2. Let's call
> this the "forward-merge" way of handling this. I just went through some source
> code, and if my understanding of Flink's implementation is incorrect,
> please correct me.
> 
> On the other hand, if we want to do symmetric merge in Kafka Streams, we
> can change the window definition to [start time - gap, start time + gap].
> This way the example (7,2),(10, 10), (19,5),(15,3) will be
> [5,9]
> [0,20] => merged to [0,20]
> [14,24] => merged to [0,24]
> [12,18] => merged to [0,24]
> 
>  (19,5),(15,3)(7,2),(10, 10) will generate same result
> [14,24]
> [12,18] => merged to [12,24]
> [5,9] => no merge
> [0,20] => merged to [0,24]
> 
> Note that symmetric-merge would require us to change the way Kafka
> Streams fetches windows now: instead of fetching the range from timestamp-gap to
> timestamp+gap, we will need to fetch all windows that are not expired yet.
> On the other hand, I'm not sure how this will impact the current logic of
> how a window is considered closed, because the window doesn't carry the end
> timestamp anymore, but the end timestamp + gap.
> 
> So do you guys think the forward-merge approach used by Flink makes more sense
> in Kafka Streams, or does symmetric-merge make more sense? Both of them, it seems
> to me, can give deterministic results.
> 
> BTW I'll add the use case into original KIP.
> 
> Lei
> 
> 
> On Tue, Sep 11, 2018 at 5:45 PM Matthias J. Sax 
> wrote:
> 
>> Thanks for explaining your understanding. And thanks for providing more
>> details about the use-case. Maybe you can add this to the KIP?
>>
>>
>> First, one general comment. I guess that my and Guozhang's understanding
>> about gap/close/gracePeriod is the same as yours -- we might not have
>> used the terms precisely in previous emails.
>>
>>
>> To your semantics of gap in detail:
>>
>>> I thought when (15,3) is received, kafka streams look up for neighbor
>>> record/window that is within the gap
>>> of [15-3, 15+3], and merge if any. Previous 

[jira] [Created] (KAFKA-7427) kafka.server mBean sometimes reports meaningless rate metrics

2018-09-20 Thread Michael Kairys (JIRA)
Michael Kairys created KAFKA-7427:
-

 Summary: kafka.server mBean sometimes reports meaningless rate 
metrics
 Key: KAFKA-7427
 URL: https://issues.apache.org/jira/browse/KAFKA-7427
 Project: Kafka
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.0.0, 1.0.0
 Environment: Linux 3.8.13 64-bit
Reporter: Michael Kairys


For example, 
kafka.server:name=ZooKeeperDisconnectsPerSec,type=SessionExpireListener:

FifteenMinuteRate = 2.439618229224669E-102
FiveMinuteRate = 3.246265432544712E-299
MeanRate = 2.057921540928146E-6
OneMinuteRate = 2.964393875E-314
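The absurdly small values above are consistent with an exponentially weighted moving rate decaying during a long idle period. A hedged sketch of that effect (assumed mechanism, not the broker's actual metrics code):

```java
public class RateDecaySketch {
    public static void main(String[] args) {
        // Assumed parameters: a 5-second tick feeding a 1-minute EWMA.
        double perTickDecay = Math.exp(-5.0 / 60.0);
        double rate = 1.0; // one event, long ago
        for (int tick = 0; tick < 8600; tick++) {
            rate *= perTickDecay; // no new events: the rate only decays
        }
        System.out.println(rate);
        // Tiny but still positive: a denormal, not a meaningful rate.
        System.out.println(rate > 0 && rate < Double.MIN_NORMAL);
    }
}
```

Under this model the rate asymptotically approaches zero but never reaches it, so an idle metric eventually reports noise like 2.96E-314 instead of a clean 0.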





Build failed in Jenkins: kafka-trunk-jdk10 #507

2018-09-20 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] [KAFKA-7379] [streams] send.buffer.bytes should be allowed to set -1 
in

--
[...truncated 2.21 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] 

Re: [VOTE] KIP-372: Naming Repartition Topics for Joins and Grouping

2018-09-20 Thread Bill Bejeck
Hi All,

KIP-372 is now accepted with:
- 3 Binding +1 (Damian, Guozhang, Matthias)
- 3 Non-binding +1 (Dongjin, John, Bill)

Thanks to everyone for the votes.

-Bill

On Thu, Sep 20, 2018 at 7:30 AM Damian Guy  wrote:

> +1 (binding)
>
> On Tue, 18 Sep 2018 at 16:33 Bill Bejeck  wrote:
>
> > All,
> >
> > In starting work on the PR for KIP-372, the Grouped interface needed some
> > method renaming to be more consistent with the other configuration
> classes
> > (Joined, Produced, etc.).  As such I've updated the Grouped code section
> of
> > the KIP.
> >
> > As these changes address a comment from Matthias on the initial draft of
> > the KIP and don't change any of the existing behavior already outlined,
> I
> > don't think a re-vote is required.
> >
> > Thanks,
> > Bill
> >
> > On Tue, Sep 18, 2018 at 10:09 AM John Roesler  wrote:
> >
> > > +1 (non-binding)
> > >
> > > Thanks!
> > >
> > > On Mon, Sep 17, 2018 at 7:29 PM Dongjin Lee 
> wrote:
> > >
> > > > Great improvements. +1. (Non-binding)
> > > >
> > > > On Tue, Sep 18, 2018 at 5:14 AM Matthias J. Sax <
> matth...@confluent.io
> > >
> > > > wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > > -Matthias
> > > > >
> > > > > On 9/17/18 1:12 PM, Guozhang Wang wrote:
> > > > > > +1 from me, thanks Bill !
> > > > > >
> > > > > > On Mon, Sep 17, 2018 at 12:43 PM, Bill Bejeck  >
> > > > wrote:
> > > > > >
> > > > > >> All,
> > > > > >>
> > > > > >> I'd like to start the voting process for KIP-372.  Here's the
> link
> > > to
> > > > > the
> > > > > >> updated proposal
> > > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > >> 372%3A+Naming+Repartition+Topics+for+Joins+and+Grouping
> > > > > >>
> > > > > >> I'll start with my own +1.
> > > > > >>
> > > > > >> Thanks,
> > > > > >> Bill
> > > > > >>
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > >
> > > > --
> > > > *Dongjin Lee*
> > > >
> > > > *A hitchhiker in the mathematical world.*
> > > >
> > > > *github:  github.com/dongjinleekr
> > > > linkedin:
> > > kr.linkedin.com/in/dongjinleekr
> > > > slideshare:
> > > > www.slideshare.net/dongjinleekr
> > > > *
> > > >
> > >
> >
>


[jira] [Resolved] (KAFKA-7379) send.buffer.bytes should be allowed to set -1 in KafkaStreams

2018-09-20 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-7379.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

> send.buffer.bytes should be allowed to set -1 in KafkaStreams
> -
>
> Key: KAFKA-7379
> URL: https://issues.apache.org/jira/browse/KAFKA-7379
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.2, 0.11.0.3, 1.0.2, 1.1.1, 2.0.0
>Reporter: Badai Aqrandista
>Assignee: Aleksei Izmalkin
>Priority: Minor
>  Labels: easyfix, newbie
> Fix For: 2.1.0
>
>
> send.buffer.bytes and receive.buffer.bytes are declared with atLeast(0) 
> constraint in StreamsConfig, whereas -1 should be also allowed to set. This 
> is like KAFKA-6891.
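The fix described amounts to relaxing the constraint from atLeast(0) to something that also admits -1. A hedged sketch of that validation (illustrative only, not the actual ConfigDef code):

```java
public class BufferSizeValidatorSketch {
    static void validateBufferSize(String name, int value) {
        // -1 conventionally means "use the OS default buffer size".
        if (value < -1) {
            throw new IllegalArgumentException(
                name + " must be >= -1 (-1 means use the OS default), got " + value);
        }
    }

    public static void main(String[] args) {
        validateBufferSize("send.buffer.bytes", -1);      // now allowed
        validateBufferSize("send.buffer.bytes", 131072);  // still allowed
        try {
            validateBufferSize("send.buffer.bytes", -2);  // still rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        System.out.println("ok");
    }
}
```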





Re: [VOTE] KIP-371: Add a configuration to build custom SSL principal name

2018-09-20 Thread Priyank Shah
+1(non-binding)

On 9/20/18, 9:18 AM, "Harsha Chintalapani"  wrote:

+1 (binding).

Thanks,
Harsha


On September 19, 2018 at 5:19:51 AM, Manikumar (manikumar.re...@gmail.com) 
wrote:

Hi All, 

I would like to start voting on KIP-371, which adds a configuration option 
for building custom SSL principal names. 

KIP: 

https://cwiki.apache.org/confluence/display/KAFKA/KIP-371%3A+Add+a+configuration+to+build+custom+SSL+principal+name
 

Discussion Thread: 

https://lists.apache.org/thread.html/e346f5e3e3dd1feb863594e40eac1ed54138613a667f319b99344710@%3Cdev.kafka.apache.org%3E
 

Thanks, 
Manikumar 




[jira] [Created] (KAFKA-7426) Kafka Compatibility Matrix is missing Kafka 2.0 broker information

2018-09-20 Thread Christophe Jolif (JIRA)
Christophe Jolif created KAFKA-7426:
---

 Summary: Kafka Compatibility Matrix is missing Kafka 2.0 broker 
information
 Key: KAFKA-7426
 URL: https://issues.apache.org/jira/browse/KAFKA-7426
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Christophe Jolif


See

https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix





[jira] [Created] (KAFKA-7425) Kafka Broker -Unable to start

2018-09-20 Thread Sathish Yanamala (JIRA)
Sathish Yanamala created KAFKA-7425:
---

 Summary: Kafka Broker -Unable to start 
 Key: KAFKA-7425
 URL: https://issues.apache.org/jira/browse/KAFKA-7425
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Sathish Yanamala


Hello Team,

We are facing the error below while starting the Kafka broker.

By disabling the property below in server.properties, we are able to bring the 
broker up, but then we lose ACL authentication with the Kafka brokers.

##

server.properties (Config File)

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer



 

Please help and suggest on the error below.

Error Log :

2018-09-20 13:50:54,121 INFO 
kafka.coordinator.transaction.TransactionCoordinator: [TransactionCoordinator 
id=4] Startup complete.
2018-09-20 13:50:54,238 FATAL kafka.server.KafkaServer: [KafkaServer id=4] 
Fatal error during KafkaServer startup. Prepare to shutdown
kafka.common.KafkaException: DelegationToken not a valid resourceType name. The 
valid names are Topic,Group,Cluster,TransactionalId
 at 
kafka.security.auth.ResourceType$.$anonfun$fromString$2(ResourceType.scala:56)
 at scala.Option.getOrElse(Option.scala:121)
 at kafka.security.auth.ResourceType$.fromString(ResourceType.scala:56)
 at 
kafka.security.auth.SimpleAclAuthorizer.$anonfun$loadCache$2(SimpleAclAuthorizer.scala:233)
 at 
kafka.security.auth.SimpleAclAuthorizer.$anonfun$loadCache$2$adapted(SimpleAclAuthorizer.scala:232)
 at scala.collection.Iterator.foreach(Iterator.scala:929)
 at scala.collection.Iterator.foreach$(Iterator.scala:929)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
 at scala.collection.IterableLike.foreach(IterableLike.scala:71)
 at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
 at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
 at 
kafka.security.auth.SimpleAclAuthorizer.$anonfun$loadCache$1(SimpleAclAuthorizer.scala:232)
 at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:217)
 at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:225)
 at 
kafka.security.auth.SimpleAclAuthorizer.loadCache(SimpleAclAuthorizer.scala:230)
 at 
kafka.security.auth.SimpleAclAuthorizer.configure(SimpleAclAuthorizer.scala:114)
 at kafka.server.KafkaServer.$anonfun$startup$4(KafkaServer.scala:254)
 at scala.Option.map(Option.scala:146)
 at kafka.server.KafkaServer.startup(KafkaServer.scala:252)
 at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
 at kafka.Kafka$.main(Kafka.scala:92)
 at kafka.Kafka.main(Kafka.scala)
2018-09-20 13:50:54,241 INFO kafka.server.KafkaServer: [KafkaServer id=4] 
shutting down
2018-09-20 13:50:54,242 INFO kafka.network.SocketServer: [SocketServer 
brokerId=4] Shutting down
2018-09-20 13:50:54,260 INFO kafka.network.SocketServer: [SocketServer 
brokerId=4] Shutdown completed
2018-09-20 13:50:54,267 INFO 
kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper: 
[ExpirationReaper-4-topic]: Shutting down
2018-09-20 13:50:54,451 INFO 
kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper: 
[ExpirationReaper-4-topic]: Shutdown completed
2018-09-20 13:50:54,451 INFO 
kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper: 
[ExpirationReaper-4-topic]: Stopped
2018-09-20 13:50:54,453 INFO 
kafka.coordinator.transaction.TransactionCoordinator: [TransactionCoordinator 
id=4] Shutting down.
2018-09-20 13:50:54,454 INFO kafka.coordinator.transaction.ProducerIdManager: 
[ProducerId Manager 4]: Shutdown complete: last producerId assigned 1559000
2018-09-20 13:50:54,455 INFO 
kafka.coordinator.transaction.TransactionStateManager: [Transaction State 
Manager 4]: Shutdown complete
2018-09-20 13:50:54,455 INFO 
kafka.coordinator.transaction.TransactionMarkerChannelManager: [Transaction 
Marker Channel Manager 4]: Shutting down
2018-09-20 13:50:54,455 INFO 
kafka.coordinator.transaction.TransactionMarkerChannelManager: [Transaction 
Marker Channel Manager 4]: Stopped
2018-09-20 13:50:54,455 INFO 
kafka.coordinator.transaction.TransactionMarkerChannelManager: [Transaction 
Marker Channel Manager 4]: Shutdown completed
2018-09-20 13:50:54,456 INFO 
kafka.coordinator.transaction.TransactionCoordinator: [TransactionCoordinator 
id=4] Shutdown complete.
2018-09-20 13:50:54,456 INFO kafka.coordinator.group.GroupCoordinator: 
[GroupCoordinator 4]: Shutting down.
2018-09-20 13:50:54,457 INFO 
kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper: 
[ExpirationReaper-4-Heartbeat]: Shutting down
2018-09-20 13:50:54,653 INFO 
kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper: 
[ExpirationReaper-4-Heartbeat]: Shutdown completed
2018-09-20 13:50:54,653 INFO 
kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper: 
[ExpirationReaper-4-Heartbeat]: Stopped
2018-09-20 13:50:54,653 INFO 
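The startup failure above matches a strict resource-type lookup: a 1.0.0 broker only knows four ACL resource types, so an ACL entry stored under a type introduced later (DelegationToken) aborts SimpleAclAuthorizer's cache load. A hedged reconstruction of that lookup (illustrative Java, not the broker's Scala code):

```java
import java.util.Arrays;
import java.util.List;

public class ResourceTypeParseSketch {
    // The only resource types a 1.0.0 broker knows about.
    static final List<String> KNOWN =
        Arrays.asList("Topic", "Group", "Cluster", "TransactionalId");

    static String fromString(String name) {
        return KNOWN.stream()
            .filter(t -> t.equalsIgnoreCase(name))
            .findFirst()
            .orElseThrow(() -> new RuntimeException(
                name + " not a valid resourceType name. The valid names are "
                + String.join(",", KNOWN)));
    }

    public static void main(String[] args) {
        System.out.println(fromString("Topic")); // parses fine
        try {
            fromString("DelegationToken");       // what the broker hit on startup
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

If that reading is right, the practical options are removing the unknown ACL node from ZooKeeper or running a broker version that knows the DelegationToken resource type.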

Re: Confluence Wiki access

2018-09-20 Thread Matthias J. Sax
Done :)

On 9/20/18 9:53 AM, Srinivas Reddy wrote:
> My bad, the username is: mrsrinivas
> 
> 
> 
> --
> Srinivas Reddy
> 
> http://mrsrinivas.com/
> 
> 
> (Sent via gmail web)
> 
> 
> On Fri, 21 Sep 2018 at 00:46, Matthias J. Sax  wrote:
> 
>> Could not find this user ID. Can you double check? Also, is the
>> email-address really your ID?
>>
>> I also tried `srinivas96alluri` -- not found either.
>>
>> Note that JIRA and Wiki are two different accounts (in case this is your
>> JIRA id).
>>
>>
>> -Matthias
>>
>> On 9/20/18 3:45 AM, Srinivas Reddy wrote:
>>> Greetings,
>>>
>>> I need to edit KIP-373, can anyone help me with an access to this wiki id
>>>
>>> srinivas96all...@gmail.com
>>>
>>> Thank you in advance.
>>>
>>>
>>> -
>>> Srinivas
>>>
>>> - Typed on tiny keys. pls ignore typos.{mobile app}
>>>
>>
>>
> 



signature.asc
Description: OpenPGP digital signature


Re: Confluence Wiki access

2018-09-20 Thread Srinivas Reddy
My bad, the username is: mrsrinivas



--
Srinivas Reddy

http://mrsrinivas.com/


(Sent via gmail web)


On Fri, 21 Sep 2018 at 00:46, Matthias J. Sax  wrote:

> Could not find this user ID. Can you double check? Also, is the
> email-address really your ID?
>
> I also tried `srinivas96alluri` -- not found either.
>
> Note that JIRA and Wiki are two different accounts (in case this is your
> JIRA id).
>
>
> -Matthias
>
> On 9/20/18 3:45 AM, Srinivas Reddy wrote:
> > Greetings,
> >
> > I need to edit KIP-373, can anyone help me with an access to this wiki id
> >
> > srinivas96all...@gmail.com
> >
> > Thank you in advance.
> >
> >
> > -
> > Srinivas
> >
> > - Typed on tiny keys. pls ignore typos.{mobile app}
> >
>
>


Re: Issue in Creating the Kafka Consumer

2018-09-20 Thread Harsha

It looks like you are trying to connect to a SASL-enabled Kafka broker. If that's
the case, make sure you follow the doc
http://kafka.apache.org/documentation.html#security_jaas_client
to pass in a JAAS config with a KafkaClient section to your
consumer. -Harsha
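As a concrete starting point, the client-side settings involved look roughly like this; the values (broker address, principal, keytab path) are placeholders, and `sasl.jaas.config` is the inline alternative to the `-Djava.security.auth.login.config` JAAS file the doc describes:

```java
import java.util.Properties;

public class SaslConsumerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9093");          // placeholder
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "GSSAPI");
        // This is what "No serviceName defined in either JAAS or
        // Kafka config" is complaining about:
        props.put("sasl.kerberos.service.name", "kafka");
        // Inline alternative to a separate JAAS file with a
        // KafkaClient section (supported since 0.10.2 clients):
        props.put("sasl.jaas.config",
            "com.sun.security.auth.module.Krb5LoginModule required "
            + "useKeyTab=true keyTab=\"/etc/security/kafka.keytab\" "
            + "principal=\"consumer@EXAMPLE.COM\";");
        // These props would then be passed to new KafkaConsumer<>(props).
        System.out.println(props.getProperty("sasl.kerberos.service.name"));
    }
}
```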
On Thu, Sep 20, 2018, at 8:31 AM, Sravanthi Gottam wrote:
> Hi Team,
>
> I am facing an issue creating the Kafka consumer. I am giving the jaas
> config file in the build path.
>
> 10:14:00,765 WARN [org.springframework.web.context.support.AnnotationConfigWebApplicationContext]
> (ServerService Thread Pool -- 84) Exception encountered during context
> initialization - cancelling refresh attempt:
> org.springframework.context.ApplicationContextException: Failed to start bean
> 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry';
> nested exception is org.apache.kafka.common.KafkaException: Failed to
> construct kafka consumer
> 10:14:00,789 ERROR [org.springframework.web.servlet.DispatcherServlet]
> (ServerService Thread Pool -- 84) Context initialization failed:
> org.springframework.context.ApplicationContextException: Failed to start bean
> 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry';
> nested exception is org.apache.kafka.common.KafkaException: Failed to
> construct kafka consumer
>     at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:176) [spring-context-4.3.3.RELEASE.jar:4.3.3.RELEASE]
>     at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:51) [spring-context-4.3.3.RELEASE.jar:4.3.3.RELEASE]
>     at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:346) [spring-context-4.3.3.RELEASE.jar:4.3.3.RELEASE]
>     at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:149) [spring-context-4.3.3.RELEASE.jar:4.3.3.RELEASE]
>     ... 24 more
> Caused by: org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
>     at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:98) [kafka-clients-0.11.0.0.jar:]
>     at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:112) [kafka-clients-0.11.0.0.jar:]
>     at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:58) [kafka-clients-0.11.0.0.jar:]
>     at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88) [kafka-clients-0.11.0.0.jar:]
>     at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:695) [kafka-clients-0.11.0.0.jar:]
>     ... 36 more
> Caused by: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
>     at org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:298) [kafka-clients-0.11.0.0.jar:]
>     at org.apache.kafka.common.security.kerberos.KerberosLogin.configure(KerberosLogin.java:87) [kafka-clients-0.11.0.0.jar:]
>     at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:49) [kafka-clients-0.11.0.0.jar:]
>     at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:73) [kafka-clients-0.11.0.0.jar:]
>     at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:90) [kafka-clients-0.11.0.0.jar:]
>     ... 40 more


>  


>  


> Thanks,


> *Sravanthi Gottam*


> Medical Management Systems


>  


> Centene logo


> 7930 Clayton Rd | St Louis, MO 63117


> Ext: 8099453


> _Sravanthi.gottam@centene.com_


>  


> CONFIDENTIALITY NOTICE: This communication contains information
> intended for the use of the individuals to whom it is addressed and
> may contain information that is privileged, confidential or exempt
> from other disclosure under applicable law. If you are not the
> intended recipient, you are notified that any disclosure, printing,
> copying, distribution or use of the contents is prohibited. If you
> have received this in error, please notify the sender immediately by
> telephone or by returning it by return mail and then permanently
> delete the communication from your system. Thank you.


Re: Confluence Wiki access

2018-09-20 Thread Patrick Williams
HI Matthias,

Can you help me unsubscribe from all these Kafka and Apache channels please?
Tried to follow previous directions but still getting loads of spam
Thanks

Best,
 
Patrick Williams
 
Head of Sales, UK & Ireland
+44 (0)7549 676279
patrick.willi...@storageos.com
 
20 Midtown
20 Proctor Street
Holborn
London WC1V 6NX
 
Twitter: @patch37
LinkedIn: linkedin.com/in/patrickwilliams4 

https://slack.storageos.com/
 
 

On 20/09/2018, 17:46, "Matthias J. Sax"  wrote:

Could not find this user ID. Can you double check? Also, is the
email-address really your ID?

I also tried `srinivas96alluri` -- not found either.

Note that JIRA and Wiki are two different accounts (in case this is your
JIRA id).


-Matthias

On 9/20/18 3:45 AM, Srinivas Reddy wrote:
> Greetings,
> 
> I need to edit KIP-373, can anyone help me with an access to this wiki id
> 
> srinivas96all...@gmail.com
> 
> Thank you in advance.
> 
> 
> -
> Srinivas
> 
> - Typed on tiny keys. pls ignore typos.{mobile app}
> 





Re: Confluence Wiki access

2018-09-20 Thread Matthias J. Sax
Could not find this user ID. Can you double check? Also, is the
email-address really your ID?

I also tried `srinivas96alluri` -- not found either.

Note that JIRA and the wiki are two different accounts (in case this is
your JIRA id).


-Matthias

On 9/20/18 3:45 AM, Srinivas Reddy wrote:
> Greetings,
> 
> I need to edit KIP-373, can anyone help me with an access to this wiki id
> 
> srinivas96all...@gmail.com
> 
> Thank you in advance.
> 
> 
> -
> Srinivas
> 
> - Typed on tiny keys. pls ignore typos.{mobile app}
> 



signature.asc
Description: OpenPGP digital signature


Issue in Creating the Kafka Consumer

2018-09-20 Thread Sravanthi Gottam
Hi Team,

I am facing an issue creating a Kafka consumer. I am supplying the JAAS
config file on the build path:


10:14:00,765 WARN  
[org.springframework.web.context.support.AnnotationConfigWebApplicationContext] 
(ServerService Thread Pool -- 84) Exception encountered during context 
initialization - cancelling refresh attempt: 
org.springframework.context.ApplicationContextException: Failed to start bean 
'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; 
nested exception is org.apache.kafka.common.KafkaException: Failed to construct 
kafka consumer
10:14:00,789 ERROR [org.springframework.web.servlet.DispatcherServlet] 
(ServerService Thread Pool -- 84) Context initialization failed: 
org.springframework.context.ApplicationContextException: Failed to start bean 
'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; 
nested exception is org.apache.kafka.common.KafkaException: Failed to construct 
kafka consumer
   at 
org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:176)
 [spring-context-4.3.3.RELEASE.jar:4.3.3.RELEASE]
   at 
org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:51)
 [spring-context-4.3.3.RELEASE.jar:4.3.3.RELEASE]
   at 
org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:346)
 [spring-context-4.3.3.RELEASE.jar:4.3.3.RELEASE]
   at 
org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:149)
 [spring-context-4.3.3.RELEASE.jar:4.3.3.RELEASE]

... 24 more
Caused by: org.apache.kafka.common.KafkaException: 
java.lang.IllegalArgumentException: No serviceName defined in either JAAS or 
Kafka config
   at 
org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:98)
 [kafka-clients-0.11.0.0.jar:]
   at 
org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:112)
 [kafka-clients-0.11.0.0.jar:]
   at 
org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:58)
 [kafka-clients-0.11.0.0.jar:]
   at 
org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88) 
[kafka-clients-0.11.0.0.jar:]
   at 
org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:695) 
[kafka-clients-0.11.0.0.jar:]
   ... 36 more
Caused by: java.lang.IllegalArgumentException: No serviceName defined in either 
JAAS or Kafka config
   at 
org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:298)
 [kafka-clients-0.11.0.0.jar:]
   at 
org.apache.kafka.common.security.kerberos.KerberosLogin.configure(KerberosLogin.java:87)
 [kafka-clients-0.11.0.0.jar:]
   at 
org.apache.kafka.common.security.authenticator.LoginManager.(LoginManager.java:49)
 [kafka-clients-0.11.0.0.jar:]
   at 
org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:73)
 [kafka-clients-0.11.0.0.jar:]
   at 
org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:90)
 [kafka-clients-0.11.0.0.jar:]
   ... 40 more

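For anyone hitting the same error: the root cause ("No serviceName defined in either JAAS or Kafka config") usually means the Kerberos service name of the brokers was never supplied. A minimal sketch of the two usual ways to supply it, assuming the brokers' Kerberos principal uses the common service name "kafka" — the keytab path and client principal below are placeholders:

```
# Option 1: in the consumer/client properties
sasl.kerberos.service.name=kafka

# Option 2: in the JAAS login context (e.g. kafka_client_jaas.conf)
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/client.keytab"
  principal="client@EXAMPLE.COM"
  serviceName="kafka";
};
```

Either location satisfies the check in KerberosLogin.getServiceName that throws in the trace above.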

Thanks,
Sravanthi Gottam
Medical Management Systems

[Centene logo]
7930 Clayton Rd | St Louis, MO 63117
Ext: 8099453
sravanthi.got...@centene.com

CONFIDENTIALITY NOTICE: This communication contains information intended for 
the use of the individuals to whom it is addressed and may contain information 
that is privileged, confidential or exempt from other disclosure under 
applicable law. If you are not the intended recipient, you are notified that 
any disclosure, printing, copying, distribution or use of the contents is 
prohibited. If you have received this in error, please notify the sender 
immediately by telephone or by returning it by return mail and then permanently 
delete the communication from your system. Thank you.


Re: [VOTE] KIP-371: Add a configuration to build custom SSL principal name

2018-09-20 Thread Harsha Chintalapani
+1 (binding).

Thanks,
Harsha


On September 19, 2018 at 5:19:51 AM, Manikumar (manikumar.re...@gmail.com) 
wrote:

Hi All, 

I would like to start voting on KIP-371, which adds a configuration option 
for building custom SSL principal names. 

KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-371%3A+Add+a+configuration+to+build+custom+SSL+principal+name
 

Discussion Thread: 
https://lists.apache.org/thread.html/e346f5e3e3dd1feb863594e40eac1ed54138613a667f319b99344710@%3Cdev.kafka.apache.org%3E
 

Thanks, 
Manikumar 
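For context, the kind of rule the KIP proposes — deriving a short principal name from the full X.500 distinguished name presented by a client certificate — can be sketched in plain Java. The class and regex below are illustrative only, not the KIP's actual rule syntax:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SslPrincipalMapper {
    // Pulls the CN attribute out of an X.500 distinguished name, falling
    // back to the full DN when no CN is present -- roughly the kind of
    // mapping a custom SSL principal rule performs.
    static String extractCn(String dn) {
        Matcher m = Pattern.compile("CN=([^,]+)").matcher(dn);
        return m.find() ? m.group(1) : dn;
    }
}
```

With such a rule, a DN like "CN=writeuser,OU=ServiceUsers,O=Example" would map to the principal "writeuser" instead of the whole DN.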


[jira] [Resolved] (KAFKA-7419) Rolling sum for high frequency stream

2018-09-20 Thread Stanislav Bausov (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanislav Bausov resolved KAFKA-7419.
-
Resolution: Fixed

> Rolling sum for high frequency stream
> 
>
> Key: KAFKA-7419
> URL: https://issues.apache.org/jira/browse/KAFKA-7419
> Project: Kafka
>  Issue Type: Wish
>  Components: streams
>Reporter: Stanislav Bausov
>Priority: Minor
>
> I have a task to count a 24h rolling market volume for a high-frequency trades
> stream, and there is no out-of-the-box solution. Windowing is not an option.
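For reference, a continuously updated time-window sum of the kind described can be maintained with a timestamped queue. A minimal sketch in plain Java — the class below is illustrative, not a Kafka Streams API, and assumes events arrive in timestamp order:

```java
import java.util.ArrayDeque;

// Rolling sum over a fixed time window, e.g. 24h of trade volume.
// Assumes events are added in non-decreasing timestamp order.
class RollingSum {
    private final long windowMs;
    private final ArrayDeque<long[]> events = new ArrayDeque<>(); // {timestampMs, value}
    private long sum = 0;

    RollingSum(long windowMs) {
        this.windowMs = windowMs;
    }

    // Adds an event and returns the sum of all events still inside the window.
    long add(long timestampMs, long value) {
        events.addLast(new long[]{timestampMs, value});
        sum += value;
        // Evict events that have fallen out of the window.
        while (!events.isEmpty() && events.peekFirst()[0] <= timestampMs - windowMs) {
            sum -= events.pollFirst()[1];
        }
        return sum;
    }
}
```

With windowMs = 24 * 60 * 60 * 1000L this keeps a running 24h volume; inside a Streams topology the same structure could live in a transformer backed by a state store, at the cost of retaining the in-window events.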



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


RE: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-20 Thread Skrzypek, Jonathan
Ok thanks.

+1 (non-binding)

The only thing I'm not too sure about is the naming around configuration 
entries for this, both for KIP-235 and KIP-302.

KIP-235 expands DNS A records for bootstrap: resolve.canonical.bootstrap.servers.only
KIP-302 expands DNS A records for advertised.listeners: use.all.dns.ips

I'm a bit concerned that those names don't easily explain what the settings do.
Documentation obviously helps, but do we have suggestions for better naming?
I'm fine if we go with those, but I think it's worth considering.

Also, we probably want a third option that enables both? That's why we
initially put ".only" in KIP-235's parameter name.
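For context, the mechanism both KIPs build on is ordinary JDK name resolution: a hostname backed by several A records resolves to several addresses, and the client can choose to try all of them rather than only the first. A minimal sketch — the hostname is a placeholder, and this is plain JDK behaviour, not a Kafka API:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveAll {
    // Returns every IP address the resolver reports for the host --
    // the set a "use all DNS IPs" style option would iterate over,
    // instead of stopping at the first entry.
    static InetAddress[] resolveAll(String host) {
        try {
            return InetAddress.getAllByName(host);
        } catch (UnknownHostException e) {
            return new InetAddress[0]; // unresolvable: nothing to try
        }
    }

    public static void main(String[] args) {
        for (InetAddress addr : resolveAll("localhost")) {
            System.out.println(addr.getHostAddress());
        }
    }
}
```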

Jonathan Skrzypek


-Original Message-
From: Edoardo Comar [mailto:eco...@uk.ibm.com]
Sent: 20 September 2018 09:55
To: dev@kafka.apache.org
Subject: RE: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved IP 
addresses

Hi Jonathan
we'll update the PR for KIP-302 soon. We do not need KIP-235 actually,
they only share the name of the configuration entry.

thanks
Edo

PS - we need votes :-)

--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   "Skrzypek, Jonathan" 
To: "dev@kafka.apache.org" 
Date:   19/09/2018 16:12
Subject:***UNCHECKED*** RE: [VOTE] KIP-302 - Enable Kafka clients
to use all  DNS resolved IP addresses



I'm assuming this needs KIP-235 to be merged.
Unfortunately I've tripped over some merge issues with git and struggled
to fix.
Hopefully this is fixed but any help appreciated :
https://github.com/apache/kafka/pull/4485


Jonathan Skrzypek



-Original Message-
From: Eno Thereska [mailto:eno.there...@gmail.com]
Sent: 19 September 2018 11:01
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved
IP addresses

+1 (non-binding).

Thanks
Eno

On Wed, Sep 19, 2018 at 10:09 AM, Rajini Sivaram 
wrote:

> Hi Edo,
>
> Thanks for the KIP!
>
> +1 (binding)
>
> On Tue, Sep 18, 2018 at 3:51 PM, Edoardo Comar 
wrote:
>
> > Hi All,
> >
> > I'd like to start the vote on KIP-302:
> >
> >
https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 302+-+Enable+Kafka+clients+to+use+all+DNS+resolved+IP+addresses
> >
> > We'd love to get this in 2.1.0
> > Kip freeze is just a few days away ... please cast your votes  :-):-)
> >
> > Thanks!!
> > Edo
> >
> > --
> >
> > Edoardo Comar
> >
> > IBM Message Hub
> >
> > IBM UK Ltd, Hursley Park, SO21 2JN
> > Unless stated otherwise above:
> > IBM United Kingdom Limited - Registered in England and Wales with
number
> > 741598.
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> 3AU
> >
>



Your Personal Data: We may collect and process information about you that
may be subject to data protection laws. For more information about how we
use and disclose your personal data, how we protect your information, our
legal basis to use your information, your rights and who you can contact,
please refer to: http://www.gs.com/privacy-notices




Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number
741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU



Your Personal Data: We may collect and process information about you that may 
be subject to data protection laws. For more information about how we use and 
disclose your personal data, how we protect your information, our legal basis 
to use your information, your rights and who you can contact, please refer to: 
www.gs.com/privacy-notices


Re: Apache Kafka Authentication

2018-09-20 Thread Vahid Hashemian
Hi Rasheed,

This article https://developer.ibm.com/code/howtos/kafka-authn-authz explains
how to enable authentication and authorization in a Kafka cluster.
Note: it does not cover encryption.

Regards.
--Vahid

On Wed, Sep 19, 2018 at 10:33 PM Rasheed Siddiqui 
wrote:

> Dear Team,
>
>
>
> I would like to find detailed documentation and discussion regarding
> Kafka authentication.
>
>
>
> We are building a consumer on the .NET platform. We are having difficulty
> communicating with the producer, as we have developed an unsecured consumer.
>
> Please suggest how to resolve this issue.
>
> Thanks in Advance!!!
>
>
>
>
>
>
>
> *Thanks & Regards,*
>
>
>
>
> *Rasheed Siddiqui *
>
> *Sr.Technical Analyst *
>
> M: 8655567060 <(865)%20556-7060>  E: rash...@ccentric.co
>
> www.ccentric.co
>
>
>
>
>
>


Re: [DISCUSS] KIP-373: Add '--help' option to all available Kafka CLI commands

2018-09-20 Thread Attila Sasvári
Hi all,

This is just to inform you that this KIP has been taken over by Srinivas Reddy.
Good luck, Srinivas!

Regards,
- Attila



On Wed, Sep 19, 2018 at 1:30 PM Attila Sasvári  wrote:

> Hi all,
>
> I have just created a KIP to add '--help' option to all available Kafka
> CLI commands:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-373%3A+Add+%27--help%27+option+to+all+available+Kafka+CLI+commands
>
> Tracking JIRA: https://issues.apache.org/jira/browse/KAFKA-7418
>
> Regards,
> - Attila
>


Re: [VOTE] KIP-372: Naming Repartition Topics for Joins and Grouping

2018-09-20 Thread Damian Guy
+1 (binding)

On Tue, 18 Sep 2018 at 16:33 Bill Bejeck  wrote:

> All,
>
> In starting work on the PR for KIP-372, the Grouped interface needed some
> method renaming to be more consistent with the other configuration classes
> (Joined, Produced, etc.).  As such I've updated the Grouped code section of
> the KIP.
>
> As these changes address a comment from Matthias on the initial draft of
> the KIP and don't change any of the existing behavior already outlined,  I
> don't think a re-vote is required.
>
> Thanks,
> Bill
>
> On Tue, Sep 18, 2018 at 10:09 AM John Roesler  wrote:
>
> > +1 (non-binding)
> >
> > Thanks!
> >
> > On Mon, Sep 17, 2018 at 7:29 PM Dongjin Lee  wrote:
> >
> > > Great improvements. +1. (Non-binding)
> > >
> > > On Tue, Sep 18, 2018 at 5:14 AM Matthias J. Sax  >
> > > wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > -Matthias
> > > >
> > > > On 9/17/18 1:12 PM, Guozhang Wang wrote:
> > > > > +1 from me, thanks Bill !
> > > > >
> > > > > On Mon, Sep 17, 2018 at 12:43 PM, Bill Bejeck 
> > > wrote:
> > > > >
> > > > >> All,
> > > > >>
> > > > >> I'd like to start the voting process for KIP-372.  Here's the link
> > to
> > > > the
> > > > >> updated proposal
> > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > >> 372%3A+Naming+Repartition+Topics+for+Joins+and+Grouping
> > > > >>
> > > > >> I'll start with my own +1.
> > > > >>
> > > > >> Thanks,
> > > > >> Bill
> > > > >>
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > >
> > > --
> > > *Dongjin Lee*
> > >
> > > *A hitchhiker in the mathematical world.*
> > >
> > > *github:  github.com/dongjinleekr
> > > linkedin:
> > kr.linkedin.com/in/dongjinleekr
> > > slideshare:
> > > www.slideshare.net/dongjinleekr
> > > *
> > >
> >
>


Confluence Wiki access

2018-09-20 Thread Srinivas Reddy
Greetings,

I need to edit KIP-373, can anyone help me with an access to this wiki id

srinivas96all...@gmail.com

Thank you in advance.


-
Srinivas

- Typed on tiny keys. pls ignore typos.{mobile app}


[jira] [Created] (KAFKA-7424) State stores restoring from changelog topic not the source topic

2018-09-20 Thread James Hay (JIRA)
James Hay created KAFKA-7424:


 Summary: State stores restoring from changelog topic not the 
source topic
 Key: KAFKA-7424
 URL: https://issues.apache.org/jira/browse/KAFKA-7424
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.1.1
Reporter: James Hay


Hi,

 

I've recently attempted to upgrade a streams application from 1.1 to 1.1.1, and
I noticed a drop in the number of messages being restored in our state stores.

It appears that there is a change in 1.1.1 which causes our state stores to be
restored from the changelog topic, as opposed to version 1.1, where the stores
are restored from the source topic. In our application this causes an issue, as
we switched from KStreamBuilder to StreamsBuilder in the middle of the
application's lifetime, so the changelog doesn't represent a full history of
the source topic.

Has this switch been introduced intentionally? Is there a way to configure our 
application to use 1.1.1 and still use the source stream to restore state 
stores? Any recommendations on getting our changelog in sync with the source?

 

Thanks

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


RE: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-20 Thread Edoardo Comar
Hi Jonathan 
we'll update the PR for KIP-302 soon. We do not need KIP-235 actually, 
they only share the name of the configuration entry. 

thanks
Edo

PS - we need votes :-) 

--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   "Skrzypek, Jonathan" 
To: "dev@kafka.apache.org" 
Date:   19/09/2018 16:12
Subject:***UNCHECKED*** RE: [VOTE] KIP-302 - Enable Kafka clients 
to use all  DNS resolved IP addresses



I'm assuming this needs KIP-235 to be merged.
Unfortunately I've tripped over some merge issues with git and struggled 
to fix.
Hopefully this is fixed but any help appreciated : 
https://github.com/apache/kafka/pull/4485


Jonathan Skrzypek



-Original Message-
From: Eno Thereska [mailto:eno.there...@gmail.com]
Sent: 19 September 2018 11:01
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved 
IP addresses

+1 (non-binding).

Thanks
Eno

On Wed, Sep 19, 2018 at 10:09 AM, Rajini Sivaram 
wrote:

> Hi Edo,
>
> Thanks for the KIP!
>
> +1 (binding)
>
> On Tue, Sep 18, 2018 at 3:51 PM, Edoardo Comar  
wrote:
>
> > Hi All,
> >
> > I'd like to start the vote on KIP-302:
> >
> > 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-

> > 302+-+Enable+Kafka+clients+to+use+all+DNS+resolved+IP+addresses
> >
> > We'd love to get this in 2.1.0
> > Kip freeze is just a few days away ... please cast your votes  :-):-)
> >
> > Thanks!!
> > Edo
> >
> > --
> >
> > Edoardo Comar
> >
> > IBM Message Hub
> >
> > IBM UK Ltd, Hursley Park, SO21 2JN
> > Unless stated otherwise above:
> > IBM United Kingdom Limited - Registered in England and Wales with 
number
> > 741598.
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> 3AU
> >
>



Your Personal Data: We may collect and process information about you that 
may be subject to data protection laws. For more information about how we 
use and disclose your personal data, how we protect your information, our 
legal basis to use your information, your rights and who you can contact, 
please refer to: www.gs.com/privacy-notices




Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


Re: [VOTE] KIP-367 Introduce close(Duration) to Producer and AdminClient instead of close(long, TimeUnit)

2018-09-20 Thread Chia-Ping Tsai
Thanks for all the votes. KIP-367 has passed!

binding votes (3) :
Matthias J. Sax
Harsha
Jason Gustafson

non-binding votes (6):
Dongjin Lee
Manikumar
Mickael Maison
vito jeng
Colin McCabe
Bill Bejeck

Cheers,
Chia-Ping

On 2018/09/08 18:27:59, Chia-Ping Tsai  wrote: 
> Hi All,
> 
> I'd like to put KIP-367 to the vote.
> 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=89070496
> 
> --
> Chia-Ping
>