Build failed in Jenkins: kafka-2.3-jdk8 #89

2019-08-15 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] MINOR: Fix bugs in handling zero-length ImplicitLinkedHashCollections

--
[...truncated 2.92 MB...]
kafka.utils.CoreUtilsTest > testTryAll PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod STARTED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions STARTED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath STARTED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.TopicFilterTest > testWhitelists STARTED

kafka.utils.TopicFilterTest > testWhitelists PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.JsonTest > testParseToWithInvalidJson STARTED

kafka.utils.JsonTest > testParseToWithInvalidJson PASSED

kafka.utils.JsonTest > testParseTo STARTED

kafka.utils.JsonTest > testParseTo PASSED

kafka.utils.JsonTest > testJsonParse STARTED

kafka.utils.JsonTest > testJsonParse PASSED

kafka.utils.JsonTest > testLegacyEncodeAsString STARTED

kafka.utils.JsonTest > testLegacyEncodeAsString PASSED

kafka.utils.JsonTest > testEncodeAsBytes STARTED

kafka.utils.JsonTest > testEncodeAsBytes PASSED

kafka.utils.JsonTest > testEncodeAsString STARTED

kafka.utils.JsonTest > testEncodeAsString PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange STARTED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecode STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecode PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED


[jira] [Created] (KAFKA-8807) Flaky Test GlobalThreadShutDownOrderTest.shouldFinishGlobalStoreOperationOnShutDown

2019-08-15 Thread Sophie Blee-Goldman (JIRA)
Sophie Blee-Goldman created KAFKA-8807:
--

 Summary: Flaky Test 
GlobalThreadShutDownOrderTest.shouldFinishGlobalStoreOperationOnShutDown
 Key: KAFKA-8807
 URL: https://issues.apache.org/jira/browse/KAFKA-8807
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Sophie Blee-Goldman


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/24229/testReport/junit/org.apache.kafka.streams.integration/GlobalThreadShutDownOrderTest/shouldFinishGlobalStoreOperationOnShutDown/]

 
h3. Error Message

java.lang.AssertionError: expected:<[1, 2, 3, 4]> but was:<[1, 2, 3, 4, 1, 2, 3, 4]>
h3. Stacktrace

java.lang.AssertionError: expected:<[1, 2, 3, 4]> but was:<[1, 2, 3, 4, 1, 2, 3, 4]>
  at org.junit.Assert.fail(Assert.java:89)
  at org.junit.Assert.failNotEquals(Assert.java:835)
  at org.junit.Assert.assertEquals(Assert.java:120)
  at org.junit.Assert.assertEquals(Assert.java:146)
  at org.apache.kafka.streams.integration.GlobalThreadShutDownOrderTest.shouldFinishGlobalStoreOperationOnShutDown(GlobalThreadShutDownOrderTest.java:138)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
  at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
  at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
  at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
  at org.junit.rules.RunRules.evaluate(RunRules.java:20)
  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
  at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
  at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
  at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
  at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
  at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
  at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
  at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
  at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
  at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
  at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
  at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
  at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
  at 

Jenkins build is back to normal : kafka-trunk-jdk11 #757

2019-08-15 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #3855

2019-08-15 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Update the javadoc of SocketServer#startup(). (#7215)

--
[...truncated 8.94 MB...]
kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testPerPartitionLeadWithMaxPollRecords STARTED

kafka.api.PlaintextConsumerTest > testPerPartitionLeadWithMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testHeaders STARTED

kafka.api.PlaintextConsumerTest > testHeaders PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInAssignment STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInAssignment PASSED

kafka.api.PlaintextConsumerTest > testHeadersSerializerDeserializer STARTED

kafka.api.PlaintextConsumerTest > testHeadersSerializerDeserializer PASSED

kafka.api.PlaintextConsumerTest > testDeprecatedPollBlocksForAssignment STARTED

kafka.api.PlaintextConsumerTest > testDeprecatedPollBlocksForAssignment PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume STARTED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > 
testQuotaMetricsNotCreatedIfNoQuotasConfigured STARTED

kafka.api.PlaintextConsumerTest > 
testQuotaMetricsNotCreatedIfNoQuotasConfigured PASSED

kafka.api.PlaintextConsumerTest > 
testPerPartitionLagMetricsCleanUpWithSubscribe STARTED

kafka.api.PlaintextConsumerTest > 
testPerPartitionLagMetricsCleanUpWithSubscribe PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime STARTED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime PASSED

kafka.api.PlaintextConsumerTest > testPerPartitionLagMetricsWhenReadCommitted 
STARTED

kafka.api.PlaintextConsumerTest > testPerPartitionLagMetricsWhenReadCommitted 
PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup PASSED

kafka.api.PlaintextConsumerTest > testMaxPollRecords STARTED

kafka.api.PlaintextConsumerTest > testMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testAutoOffsetReset STARTED

kafka.api.PlaintextConsumerTest > testAutoOffsetReset PASSED

kafka.api.PlaintextConsumerTest > testPerPartitionLagWithMaxPollRecords STARTED

kafka.api.PlaintextConsumerTest > testPerPartitionLagWithMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset STARTED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitIntercept STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitIntercept PASSED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst STARTED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst PASSED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets STARTED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.PlaintextConsumerTest > 
testPerPartitionLeadMetricsCleanUpWithSubscribe STARTED

kafka.api.PlaintextConsumerTest > 
testPerPartitionLeadMetricsCleanUpWithSubscribe PASSED

kafka.api.PlaintextConsumerTest > testCommitMetadata STARTED

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testHeadersExtendedSerializerDeserializer 
STARTED

kafka.api.PlaintextConsumerTest > testHeadersExtendedSerializerDeserializer 
PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment STARTED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription STARTED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.PlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII STARTED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII STARTED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.PlaintextEndToEndAuthorizationTest > testListenerName STARTED

kafka.api.PlaintextEndToEndAuthorizationTest > testListenerName PASSED

kafka.api.PlaintextEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.PlaintextEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.PlaintextEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls STARTED

kafka.api.PlaintextEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls PASSED


Re: [DISCUSS] KIP-401 TransformerSupplier/ProcessorSupplier enhancements

2019-08-15 Thread Paul Whalen
I updated the KIP (and PR) to relax the restriction on connecting state
stores via either means; it definitely makes sense to me at this point.
I'd love to hear if there are any other concerns or broad objections to the
KIP.

Paul
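
For reference, a minimal sketch of the two connection styles being discussed, assuming the ProcessorSupplier#stores() method proposed in KIP-401; the class names ProcessorA/ProcessorB and the store names are purely illustrative and not part of any released API.

    import java.util.Collections;
    import java.util.Set;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.processor.Processor;
    import org.apache.kafka.streams.processor.ProcessorSupplier;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.StoreBuilder;
    import org.apache.kafka.streams.state.Stores;

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> stream1 = builder.stream("input-1");
    KStream<String, String> stream2 = builder.stream("input-2");

    // Shared store S: added once and connected explicitly by name (works today).
    StoreBuilder<KeyValueStore<String, Long>> shared =
        Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore("S"),
                                    Serdes.String(), Serdes.Long());
    builder.addStateStore(shared);
    stream1.process(ProcessorA::new, "S");

    // Privately owned store S_b: declared by the supplier itself and, under the
    // proposal, added and connected implicitly; the shared store is still passed by name.
    StoreBuilder<KeyValueStore<String, Long>> privateSb =
        Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore("S_b"),
                                    Serdes.String(), Serdes.Long());
    stream2.process(new ProcessorSupplier<String, String>() {
        public Processor<String, String> get() { return new ProcessorB(); }
        // proposed by the KIP; not part of the released 2.3 interface
        public Set<StoreBuilder<?>> stores() { return Collections.singleton(privateSb); }
    }, "S");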

On Thu, Aug 8, 2019 at 10:12 PM Paul Whalen  wrote:

> Matthias,
>
> You did summarize my thinking correctly, thanks for writing it out.  I
> think the disconnect on opinion is due to a couple things influenced by my
> habits while writing streams code:
>
> 1) I don't see state stores that are "individually owned" versus "shared"
> as that much different at all, at least from the perspective of the
> business logic for the Processor. So it is actually a negative to separate
> the connecting of stores, because it appears in the topology wiring that
> fewer stores are being used by the Processor than actually are.  A reader
> might assume that the Processor doesn't need other state to do its job
> which could cause confusion.
> 2) In practice, my addProcessor() and addStateStore() (or
> builder.addStateStore() and stream.process() ) calls are very near each
> other anyway, so the shared dependency on StoreBuilder is not a burden;
> passing the same object could even bring clarity to the idea that the store
> is shared and not individually owned.
>
> Hearing your thoughts though, I think I have imposed a bit too much of my
> own style and assumptions on the API, especially with the shared dependency
> on a single StoreBuilder and thoughts about store ownership/sharing.  I'm
> going to update the KIP since the one +1 vote comes from John who is in favor
> of relaxing the restriction anyway.
>
> Paul
>
> On Wed, Aug 7, 2019 at 11:11 PM Matthias J. Sax 
> wrote:
>
> >> I am not sure if I fully understand, hence I will try to rephrase:
>>
>> > I can't think of an example that would require both ways, or would
>> > even be more readable using both ways.
>>
>> Example:
>>
> >> There are two processors A and B, and one store S that both need to
>> access and one store S_b that only B needs to access:
>>
> >> If we don't allow mixing both approaches, it would be required to write
>> the following code:
>>
>>   Topology t = new Topology();
>>   t.addProcessor("A", ...); // does not add any store
> >>   t.addProcessor("B", ...); // does not add any store
>>   t.addStateStore(..., "A", "B"); // adds S and connect it to A and B
>>   t.addStateStore(..., "B"); // adds S_b and connect it to B
>>
>> // DSL example:
>>
> >>   StreamsBuilder b = new StreamsBuilder();
>>   b.addStateStore() // adds S
>>   b.addStateStore() // adds S_b
>>   stream1.process(..., "S") // add A and connect S
>>   stream2.process(..., "S", "S_b") // add B and connect S and S_b
>>
>>
> >> If we allow mixing both approaches, the code could be simplified to:
>>
>>   Topology t = new Topology();
>>   t.addProcessor("A", ...); // does not add any store
> >>   t.addProcessor("B", ...); // adds/connects S_b implicitly
>>   t.addStateStore(..., "A", "B"); // adds S and connect it to A and B
>>
>> // DSL example
>>
> >>   StreamsBuilder b = new StreamsBuilder();
>>   b.addStateStore() // adds S
>>   stream1.process(..., "S") // add A and connect S
>>   stream2.process(..., "S") // add B and connect S; adds/connects S_b
>> implicitly
>>
>> The fact that B has a "private store" could be encapsulated and I don't
>> see why this would be bad?
>>
>> > If you can
>> > do both ways, the actual full set of state stores being connected could
>> be
>> > in wildly different places in the code, which could create confusion.
>>
>> Ie, I don't see why the second version would be confusing, or why the
>> first version would be more readable (I don't argue it's less readable
>> either though; I think both are equally readable)?
>>
>>
>>
>> Or do you argue that we should allow the following:
>>
>> > Shared stores can be passed from
>> > the outside in an anonymous ProcessorSupplier if desired, making it
>> > effectively the same as passing the stateStoreNames var args
>>
>>   Topology t = new Topology();
>>   t.addProcessor("A", ...); // adds/connects S implicitly
> >>   t.addProcessor("B", ...); // adds/connects S and S_b implicitly
>>
>> // DSL example
>>
> >>   StreamsBuilder b = new StreamsBuilder();
>>   stream1.process(...) // add A and add/connect S implicitly
>>   stream2.process(...) // add B and add/connect S and S_b implicitly
>>
> >> For this case, the second implicit adding of S would require returning
> >> the same `StoreBuilder` instance to make it idempotent, which seems hard
> >> to achieve, because both `ProcessorSuppliers` now have a cross
> >> dependency to use the same object.
>>
>> Hence, I don't think this would be a good approach.
>>
>>
> >> Also, because we require that the same `StoreBuilder` instance is always
> >> passed for a unique store name, we actually have good protection against
> >> a user bug that adds two stores with the same name but different builders.
>>
>>
>> I also do not feel super strong about it, but see some advantages to
>> allow the 

Re: [DISCUSS] KIP-507: Securing Internal Connect REST Endpoints

2019-08-15 Thread Ryanne Dolan
Thanks Chris, that makes sense.

I know you have already considered this, but I'm not convinced we should
rule out using Kafka topics for this purpose. That would enable the same
level of security without introducing any new authentication or
authorization mechanisms (your keys). And as you say, you'd need to lock
down Connect's topics and groups anyway.

Can you explain what you mean when you say using Kafka topics would require
"reworking the group coordination protocol"? I don't see how these are
related. Why would it matter if the workers sent Kafka messages vs POST
requests among themselves?

Ryanne

On Thu, Aug 15, 2019 at 3:57 PM Chris Egerton  wrote:

> Hi Ryanne,
>
> Yes, if the Connect group is left unsecured then that is a potential
> vulnerability. However, in a truly secure Connect cluster, the group would
> need to be secured anyways to prevent attackers from joining the group with
> the intent to either snoop on connector/task configurations or bring the
> cluster to a halt by spamming the group with membership requests and then
> not running the assigned connectors/tasks. Additionally, for a Connect
> cluster to be secure, access to internal topics (for configs, offsets, and
> statuses) would also need to be restricted so that attackers could not,
> e.g., write arbitrary connector/task configurations to the configs topic.
> This is all currently possible in Kafka with the use of ACLs.
>
> I think the bottom line here is that there's a number of steps that need to
> be taken to effectively lock down a Connect cluster; the point of this KIP
> is to close a loophole that exists even after all of those steps are taken,
> not to completely secure this one vulnerability even when no other security
> measures are taken.
>
> Cheers,
>
> Chris
>
> On Wed, Aug 14, 2019 at 10:56 PM Ryanne Dolan 
> wrote:
>
> > Chris, I don't understand how the rebalance protocol can be used to give
> > out session tokens in a secure way. It seems that any attacker could just
> > join the group and sign requests with the provided token. Am I missing
> > something?
> >
> > Ryanne
> >
> > On Wed, Aug 14, 2019, 2:31 PM Chris Egerton  wrote:
> >
> > > The KIP page can be found at
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-507%3A+Securing+Internal+Connect+REST+Endpoints
> > > ,
> > > by the way. Apologies for neglecting to include it in my initial email!
> > >
> > > On Wed, Aug 14, 2019 at 12:29 PM Chris Egerton 
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to start discussion on a KIP to secure the internal "POST
> > > > /connectors//tasks" endpoint for the Connect framework. The
> > > proposed
> > > > changes address a vulnerability in the framework in its current state
> > > that
> > > > allows malicious users to write arbitrary task configurations for
> > > > connectors; it is vital that this issue be addressed in order for any
> > > > Connect cluster to be secure.
> > > >
> > > > Looking forward to your thoughts,
> > > >
> > > > Chris
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-495: Dynamically Adjust Log Levels in Connect

2019-08-15 Thread Arjun Satish
Hey Konstantine,

Thanks for the feedback.

re: the use of log4j, yes, the proposed changes will only work if log4j is
available at runtime. We will not add the mBean if log4j is not available
on the classpath. If we change from log4j 1 to 2, that would involve another
KIP, and it would need to update the changes proposed in this KIP and
others (KIP-412, for instance).

re: use of Object types, I've changed it from Boolean to the primitive type
for setLogLevel. We are changing the signature of the old method this way,
but since it never returned null, this should be fine.
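
For context, a rough sketch of what such a setter can look like when pegged to the log4j 1.x API; the class and method names here are illustrative and not necessarily identical to Kafka's actual Log4jController mBean.

    import org.apache.log4j.Level;
    import org.apache.log4j.LogManager;
    import org.apache.log4j.Logger;

    public class LogLevelAdmin {
        // Primitive-friendly signature: returns true if the level string was recognized.
        public boolean setLogLevel(String loggerName, String level) {
            Level parsed = Level.toLevel(level, null);
            if (parsed == null) {
                return false;
            }
            Logger logger = loggerName == null || loggerName.isEmpty()
                ? LogManager.getRootLogger()
                : Logger.getLogger(loggerName);
            // takes effect immediately; child loggers without an explicit level inherit it
            logger.setLevel(parsed);
            return true;
        }
    }

Exposed through JMX, an operation like this can be invoked from jconsole or any other JMX client, which is what the screenshots illustrate.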

re: example usage, I've added some screenshots showing how this feature would be
used with jconsole.

Hope this works!

Thanks very much,
Arjun

On Wed, Aug 14, 2019 at 6:42 AM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> And one thing I forgot is also related to Chris's comment above. I agree
> that an example on how a user is expected to set the log level (for
> instance to DEBUG) would be nice, even if it's showing only one out of the
> many possible ways to achieve that.
>
> - Konstantine
>
> On Wed, Aug 14, 2019 at 4:38 PM Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> >
> > Thanks Arjun for tackling the need to support this very useful feature.
> >
> > One thing I noticed while reading the KIP is that I would have loved to
> > see more info regarding how this proposal depends on the underlying
> logging
> > APIs and implementations. For instance, my understanding is that slf4j
> can
> > not be leveraged and that the logging framework needs to be pegged to
> log4j
> > explicitly (or another logging implementation). Correct me if I'm wrong,
> > but if such a dependency is introduced I believe it's worth mentioning.
> >
> > Additionally, if the above is correct, there are differences in log4j's
> > APIs between version 1 and version 2. In version 2, Logger#setLevel
> method
> > has been removed from the Logger interface and in order to set the log
> > level programmatically the Configurator class needs to be used, which as
> > stated in the FAQ (
> > https://logging.apache.org/log4j/2.x/faq.html#reconfig_level_from_code)
> > it's not part of log4j2's public API. Is this a concern? I believe that
> > even if these are implementation specific details for the wrappers
> > introduced by this KIP (which to a certain extent they are), a mention in
> > the KIP text and a few references would be useful to understand the
> changes
> > and the dependencies introduced by this proposal.
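
To make that API difference concrete, the two calls being contrasted look roughly like this; both exist in the respective libraries, but whether and how Connect would wrap them is exactly what this thread is discussing.

    // log4j 1.x: the Logger itself exposes setLevel
    org.apache.log4j.Logger.getLogger("org.apache.kafka.connect")
        .setLevel(org.apache.log4j.Level.DEBUG);

    // log4j 2.x: Logger#setLevel is no longer on the public Logger interface, so the
    // level is changed via Configurator from log4j-core, which the log4j2 FAQ notes
    // is not part of the public API
    org.apache.logging.log4j.core.config.Configurator.setLevel(
        "org.apache.kafka.connect", org.apache.logging.log4j.Level.DEBUG);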
> >
> > And a few minor comments:
> > - Is there any specific reason that object types were preferred in the
> > proposed interface compared to primitive types? My understanding is that
> > `null` is not expected as a return value.
> > - Related to the above, I think it'd be nice for the javadoc to mention
> > when a parameter is not expected to be `null` with an appropriate comment
> > (e.g. foo bar etc; may not be null)
> >
> > Cheers,
> > Konstantine
> >
> > On Tue, Aug 6, 2019 at 9:34 AM Cyrus Vafadari 
> wrote:
> >
> >> This looks like a useful feature, the strategy makes sense, and the KIP
> is
> >> thorough and nicely written. Thanks!
> >>
> >> Cyrus
> >>
> >> On Thu, Aug 1, 2019, 12:40 PM Chris Egerton 
> wrote:
> >>
> >> > Thanks Arjun! Looks good to me.
> >> >
> >> > On Thu, Aug 1, 2019 at 12:33 PM Arjun Satish 
> >> > wrote:
> >> >
> >> > > Thanks for the feedback, Chris!
> >> > >
> >> > > Yes, the example is pretty much how Connect will use the new
> feature.
> >> > > Tweaked the section to make this more clear.
> >> > >
> >> > > Best,
> >> > >
> >> > > On Fri, Jul 26, 2019 at 11:52 AM Chris Egerton  >
> >> > > wrote:
> >> > >
> >> > > > Hi Arjun,
> >> > > >
> >> > > > This looks great. The changes to public interface are pretty small
> >> and
> >> > > > moving the Log4jController class into the clients package seems
> like
> >> > the
> >> > > > right way to go. One question I have--it looks like the purpose of
> >> this
> >> > > KIP
> >> > > > is to enable dynamic setting of log levels in the Connect
> framework,
> >> > but
> >> > > > it's not clear how the Connect framework will use that new
> utility.
> >> Is
> >> > > the
> >> > > > "Example Usage" section (which involves invoking the utility with
> a
> >> > > > namespace of "kafka.connect") actually meant to be part of the
> >> proposed
> >> > > > changes to public interface?
> >> > > >
> >> > > > Cheers,
> >> > > >
> >> > > > Chris
> >> > > >
> >> > > > On Mon, Jul 22, 2019 at 11:03 PM Arjun Satish <
> >> arjun.sat...@gmail.com>
> >> > > > wrote:
> >> > > >
> >> > > > > Hi everyone.
> >> > > > >
> >> > > > > I'd like to propose the following KIP to implement changing log
> >> > levels
> >> > > on
> >> > > > > the fly in Connect workers:
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-495%3A+Dynamically+Adjust+Log+Levels+in+Connect
> >> > > > >
> >> > > > > Would like to hear your thoughts on this.
> 

Re: [DISCUSS] KIP-504 - Add new Java Authorizer Interface

2019-08-15 Thread Colin McCabe
Thanks, Rajini.  It looks good to me.

best,
Colin


On Thu, Aug 15, 2019, at 11:37, Rajini Sivaram wrote:
> Hi Colin,
> 
> Thanks for the review. I have updated the KIP to move the interfaces for
> request context and server info to the authorizer package. These are now
> called AuthorizableRequestContext and AuthorizerServerInfo. Endpoint is now
> a class in org.apache.kafka.common to make it reusable since we already
> have multiple implementations of it. I have removed requestName from the
> request context interface since authorizers can distinguish follower fetch
> and consumer fetch from the operation being authorized. So 16-bit request
> type should be sufficient for audit logging.  Also replaced AuditFlag with
> two booleans as you suggested.
> 
> Can you take another look and see if the KIP is ready for voting?
> 
> Thanks for all your help!
> 
> Regards,
> 
> Rajini
> 
> On Wed, Aug 14, 2019 at 8:59 PM Colin McCabe  wrote:
> 
> > Hi Rajini,
> >
> > I think it would be good to rename KafkaRequestContext to something like
> > AuthorizableRequestContext, and put it in the
> > org.apache.kafka.server.authorizer namespace.  If we put it in the
> > org.apache.kafka.common namespace, then it's not really clear that it's
> > part of the Authorizer API.  Since this class is purely an interface, with
> > no concrete implementation of anything, there's nothing common to really
> > reuse in any case.  We definitely don't want someone to accidentally add or
> > remove methods because they think this is just another internal class used
> > for requests.
> >
> > The BrokerInfo class is a nice improvement.  It looks like it will be
> > useful for passing in information about the context we're running in.  It
> > would be nice to call this class ServerInfo rather than BrokerInfo, since
> > we will want to run the authorizer on controllers as well as on brokers,
> > and the controller may run as a separate process post KIP-500.  I also
> > think that this class should be in the org.apache.kafka.server.authorizer
> > namespace.  Again, it is an interface, not a concrete implementation, and
> > it's an interface that is very specifically for the authorizer.
> >
> > I agree that we probably don't have enough information preserved for
> > requests currently to always know what entity made them.  So we can leave
> > that out for now (except in the special case of Fetch).  Perhaps we can add
> > this later if it's needed.
> >
> > I understand the intention behind AuthorizationMode (which is now called
> > AuditFlag in the latest revision).  But it still feels complex.  What if we
> > just had two booleans in Action: logSuccesses and logFailures?  That seems
> > to cover all the cases here.  MANDATORY_AUTHORIZE = true, true.
> > OPTIONAL_AUTHORIZE = true, false.  FILTER = true, false.  LIST_AUTHORIZED =
> > false, false.  Would there be anything lost versus having the enum?
> >
> > best,
> > Colin
> >
> >
> > On Wed, Aug 14, 2019, at 06:29, Mickael Maison wrote:
> > > Hi Rajini,
> > >
> > > Thanks for the KIP!
> > > I really like that authorize() will be able to take a batch of
> > > requests, this will speed up many implementations!
> > >
> > > On Tue, Aug 13, 2019 at 5:57 PM Rajini Sivaram 
> > wrote:
> > > >
> > > > Thanks David! I have fixed the typo.
> > > >
> > > > Also made a couple of changes to make the context interfaces more
> > generic.
> > > > KafkaRequestContext now returns the 16-bit API key as Colin suggested
> > as
> > > > well as the friendly name used in metrics which are useful in audit
> > logs.
> > > > `Authorizer#start` is now provided a generic `BrokerInfo` interface
> > that
> > > > gives cluster id, broker id and endpoint information. The generic
> > interface
> > > > can potentially be used in other broker plugins in future and provides
> > > > dynamically generated configs like broker id and ports which are
> > currently
> > > > not available to plugins unless these configs are statically
> > configured.
> > > > Please let me know if there are any concerns.
> > > >
> > > > Regards,
> > > >
> > > > Rajini
> > > >
> > > > On Tue, Aug 13, 2019 at 4:30 PM David Jacot 
> > wrote:
> > > >
> > > > > Hi Rajini,
> > > > >
> > > > > Thank you for the update! It looks good to me. There is a typo in the
> > > > > `AuditFlag` enum: `MANDATORY_AUTHOEIZE` -> `MANDATORY_AUTHORIZE`.
> > > > >
> > > > > Regards,
> > > > > David
> > > > >
> > > > > On Mon, Aug 12, 2019 at 2:54 PM Rajini Sivaram <
> > rajinisiva...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi David,
> > > > > >
> > > > > > Thanks for reviewing the KIP! Since questions about `authorization
> > mode`
> > > > > > and `count` have come up multiple times, I have renamed both.
> > > > > >
> > > > > > 1) Renamed `count` to `resourceReferenceCount`. It is the number
> > of times
> > > > > > the resource being authorized is referenced within the request.
> > > > > >
> > > > > > 2) Renamed `AuthorizationMode` to `AuditFlag`. It is provided to
> > 

Re: [VOTE] KIP-496: Administrative API to delete consumer offsets

2019-08-15 Thread Colin McCabe
On Thu, Aug 15, 2019, at 11:47, Jason Gustafson wrote:
> Hey Colin, I think deleting all offsets is equivalent to deleting the
> group, which can be done with the `deleteConsumerGroups` api. I debated
> whether there should be a way to delete partitions for all unsubscribed
> topics, but I decided to start with a simple API.

That's a fair point-- deleting the group covers the main use-case for deleting 
all offsets.  So we might as well keep it simple for now.
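
For illustration, the two operations side by side, assuming the deleteConsumerGroupOffsets method shape that KIP-496 proposes next to the existing deleteConsumerGroups; group and topic names are made up, and the calls belong inside a method that handles the futures' checked exceptions.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.common.TopicPartition;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    try (Admin admin = AdminClient.create(props)) {
        // Drop offsets only for a partition of a topic the group no longer subscribes to
        admin.deleteConsumerGroupOffsets("my-group",
            Collections.singleton(new TopicPartition("old-topic", 0))).all().get();

        // Dropping *all* of a group's offsets is covered by deleting the group itself
        admin.deleteConsumerGroups(Collections.singleton("my-group")).all().get();
    }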

cheers,
Colin

> 
> I'm going to close this vote. The final result is +3 with myself, Guozhang,
> and Colin voting.
> 
> -Jason
> 
> On Tue, Aug 13, 2019 at 9:21 AM Colin McCabe  wrote:
> 
> > Hi Jason,
> >
> > Thanks for the KIP.
> >
> > Is there ever a desire to delete all the offsets for a given group?
> > Should the protocol and tools support this?
> >
> > +1 (binding)
> >
> > best,
> > Colin
> >
> >
> > On Mon, Aug 12, 2019, at 10:57, Guozhang Wang wrote:
> > > +1 (binding).
> > >
> > > Thanks Jason!
> > >
> > > On Wed, Aug 7, 2019 at 11:18 AM Jason Gustafson 
> > wrote:
> > >
> > > > Hi All,
> > > >
> > > > I'd like to start a vote on KIP-496:
> > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-496%3A+Administrative+API+to+delete+consumer+offsets
> > > > .
> > > > +1
> > > > from me of course.
> > > >
> > > > -Jason
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


Re: [DISCUSS] KIP-507: Securing Internal Connect REST Endpoints

2019-08-15 Thread Chris Egerton
Hi Ryanne,

Yes, if the Connect group is left unsecured then that is a potential
vulnerability. However, in a truly secure Connect cluster, the group would
need to be secured anyways to prevent attackers from joining the group with
the intent to either snoop on connector/task configurations or bring the
cluster to a halt by spamming the group with membership requests and then
not running the assigned connectors/tasks. Additionally, for a Connect
cluster to be secure, access to internal topics (for configs, offsets, and
statuses) would also need to be restricted so that attackers could not,
e.g., write arbitrary connector/task configurations to the configs topic.
This is all currently possible in Kafka with the use of ACLs.
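
As a rough illustration of that last point (the principal, topic, and group names below are placeholders, and an authorizer must be configured on the brokers for ACLs to have any effect), locking the config topic and the Connect group down to the worker principal with the AdminClient ACL API could look like:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.common.acl.AccessControlEntry;
    import org.apache.kafka.common.acl.AclBinding;
    import org.apache.kafka.common.acl.AclOperation;
    import org.apache.kafka.common.acl.AclPermissionType;
    import org.apache.kafka.common.resource.PatternType;
    import org.apache.kafka.common.resource.ResourcePattern;
    import org.apache.kafka.common.resource.ResourceType;

    Properties props = new Properties();
    props.put("bootstrap.servers", "broker:9093");
    try (Admin admin = AdminClient.create(props)) {
        AccessControlEntry workerOnly = new AccessControlEntry(
            "User:connect-worker", "*", AclOperation.ALL, AclPermissionType.ALLOW);
        admin.createAcls(Arrays.asList(
            new AclBinding(new ResourcePattern(ResourceType.TOPIC, "connect-configs", PatternType.LITERAL), workerOnly),
            new AclBinding(new ResourcePattern(ResourceType.GROUP, "connect-cluster", PatternType.LITERAL), workerOnly)
        )).all().get();
    }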

I think the bottom line here is that there's a number of steps that need to
be taken to effectively lock down a Connect cluster; the point of this KIP
is to close a loophole that exists even after all of those steps are taken,
not to completely secure this one vulnerability even when no other security
measures are taken.

Cheers,

Chris

On Wed, Aug 14, 2019 at 10:56 PM Ryanne Dolan  wrote:

> Chris, I don't understand how the rebalance protocol can be used to give
> out session tokens in a secure way. It seems that any attacker could just
> join the group and sign requests with the provided token. Am I missing
> something?
>
> Ryanne
>
> On Wed, Aug 14, 2019, 2:31 PM Chris Egerton  wrote:
>
> > The KIP page can be found at
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-507%3A+Securing+Internal+Connect+REST+Endpoints
> > ,
> > by the way. Apologies for neglecting to include it in my initial email!
> >
> > On Wed, Aug 14, 2019 at 12:29 PM Chris Egerton 
> > wrote:
> >
> > > Hi all,
> > >
> > > I'd like to start discussion on a KIP to secure the internal "POST
> > > /connectors//tasks" endpoint for the Connect framework. The
> > proposed
> > > changes address a vulnerability in the framework in its current state
> > that
> > > allows malicious users to write arbitrary task configurations for
> > > connectors; it is vital that this issue be addressed in order for any
> > > Connect cluster to be secure.
> > >
> > > Looking forward to your thoughts,
> > >
> > > Chris
> > >
> >
>


Re: [VOTE] KIP-396: Add Commit/List Offsets Operations to AdminClient

2019-08-15 Thread Guozhang Wang
+1 (binding).

Thanks!


Guozhang

On Wed, Aug 14, 2019 at 5:18 PM Vahid Hashemian 
wrote:

> +1 (binding)
>
> Thanks Michael for the suggestion of simplifying offset
> retrieval/alteration operations.
>
> --Vahid
>
> On Wed, Aug 14, 2019 at 4:42 PM Bill Bejeck  wrote:
>
> > Thanks for the KIP Mickael, looks very useful.
> > +1 (binding)
> >
> > -Bill
> >
> > On Wed, Aug 14, 2019 at 6:14 PM Harsha Chintalapani 
> > wrote:
> >
> > > Thanks for the KIP Mickael. LGTM +1 (binding).
> > > -Harsha
> > >
> > >
> > > On Wed, Aug 14, 2019 at 1:10 PM, Colin McCabe 
> > wrote:
> > >
> > > > Thanks, Mickael. +1 (binding)
> > > >
> > > > best,
> > > > Colin
> > > >
> > > > On Wed, Aug 14, 2019, at 12:07, Gabor Somogyi wrote:
> > > >
> > > > +1 (non-binding)
> > > > I've read it through in depth and as Jungtaek said Spark can make
> good
> > > use
> > > > of it.
> > > >
> > > > On Wed, 14 Aug 2019, 17:06 Jungtaek Lim,  wrote:
> > > >
> > > > +1 (non-binding)
> > > >
> > > > I found it very useful for Spark's case. (Discussion on KIP-505
> > described
> > > > it.)
> > > >
> > > > Thanks for driving the effort!
> > > >
> > > > On Wed, Aug 14, 2019 at 8:49 PM, Mickael Maison wrote:
> > > >
> > > > Hi Guozhang,
> > > >
> > > > Thanks for taking a look.
> > > >
> > > > 1. Right, I updated the titles of the code blocks
> > > >
> > > > 2. Yes that's a good idea. I've updated the KIP
> > > >
> > > > Thank you
> > > >
> > > > On Wed, Aug 14, 2019 at 11:05 AM Mickael Maison
> > > >  wrote:
> > > >
> > > > Hi Colin,
> > > >
> > > > Thanks for raising these 2 valid points. I've updated the KIP
> > > >
> > > > accordingly.
> > > >
> > > > On Tue, Aug 13, 2019 at 9:50 PM Guozhang Wang 
> > > >
> > > > wrote:
> > > >
> > > > Hi Mickael,
> > > >
> > > > Thanks for the KIP!
> > > >
> > > > Just some minor comments.
> > > >
> > > > 1. Java class names are stale, e.g. "CommitOffsetsOptions.java
> > > > "
> > > >
> > > > should
> > > >
> > > > be
> > > >
> > > > "AlterOffsetsOptions".
> > > >
> > > > 2. I'd suggest we change the future structure of "AlterOffsetsResult"
> > > >
> > > > to
> > > >
> > > > *KafkaFuture>>*
> > > >
> > > > This is because we will have a hierarchy of two-layers of errors
> > > >
> > > > since
> > > >
> > > > we
> > > >
> > > > need to find out the group coordinator first and then issue the
> > > >
> > > > commit
> > > >
> > > > offset request (see e.g. the ListConsumerGroupOffsetsResult which
> > > >
> > > > exclude
> > > >
> > > > partitions that have errors, or the DeleteMembersResult as part of
> > > >
> > > > KIP-345).
> > > >
> > > > If the discover-coordinator returns non-triable error, we would set
> > > >
> > > > it
> > > >
> > > > on
> > > >
> > > > the first layer of the KafkaFuture, and the per-partition error would
> > > >
> > > > be
> > > >
> > > > set on the second layer of the KafkaFuture.
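
Roughly, the two-layer shape described above would let a caller treat a coordinator-level failure separately from per-partition failures; the names below (alterOffsetsResult, values()) are only illustrative of the idea, not the final KIP-396 API.

    // Outer future fails on e.g. coordinator lookup/authorization errors;
    // each inner future fails only if that partition's offset commit failed.
    KafkaFuture<Map<TopicPartition, KafkaFuture<Void>>> outer = alterOffsetsResult.values();
    outer.whenComplete((perPartition, coordinatorError) -> {
        if (coordinatorError != null) {
            // whole request failed before any partition was committed
        } else {
            perPartition.forEach((tp, f) ->
                f.whenComplete((v, err) -> { /* err != null: this partition failed */ }));
        }
    });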
> > > >
> > > > Guozhang
> > > >
> > > > On Tue, Aug 13, 2019 at 9:36 AM Colin McCabe 
> > > >
> > > > wrote:
> > > >
> > > > Hi Mickael,
> > > >
> > > > Considering that KIP-496, which adds a way of deleting consumer
> > > >
> > > > offsets
> > > >
> > > > from AdminClient, looks like it is going to get in, this seems like
> > > > functionality we should definitely have.
> > > >
> > > > For alterConsumerGroupOffsets, is the intention to ignore
> > > >
> > > > partitions
> > > >
> > > > that
> > > >
> > > > are not specified in the map? If so, we should specify that in the
> > > >
> > > > JavaDoc.
> > > >
> > > > isolationLevel seems like it should be an enum rather than a
> > > >
> > > > string. The
> > > >
> > > > existing enum is in org.apache.kafka.common.requests, so we should
> > > >
> > > > probably
> > > >
> > > > create a new one which is public in org.apache.kafka.clients.admin.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > > On Mon, Mar 25, 2019, at 06:10, Mickael Maison wrote:
> > > >
> > > > Bumping this thread once again
> > > >
> > > > Ismael, have I answered your questions?
> > > > While this has received a few non-binding +1s, no committers have
> voted
> > > > yet. If you have concerns or questions, please let me know.
> > > >
> > > > Thanks
> > > >
> > > > On Mon, Feb 11, 2019 at 11:51 AM Mickael Maison
> > > >  wrote:
> > > >
> > > > Bumping this thread as it's been a couple of weeks.
> > > >
> > > > On Tue, Jan 22, 2019 at 2:26 PM Mickael Maison <
> > > >
> > > > mickael.mai...@gmail.com> wrote:
> > > >
> > > > Thanks Ismael for the feedback. I think your point has 2
> > > >
> > > > parts:
> > > >
> > > > - Having the reset functionality in the AdminClient: The fact we
> have a
> > > > command line tool illustrate that this
> > > >
> > > > operation
> > > >
> > > > is
> > > >
> > > > relatively common. I seems valuable to be able to perform
> > > >
> > > > this
> > > >
> > > > operation directly via a proper API in addition of the CLI
> > > >
> > > > tool.
> > > >
> > > > - Sending an OffsetCommit directly instead of relying on
> > > >
> > > > 

Build failed in Jenkins: kafka-trunk-jdk11 #756

2019-08-15 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Update the javadoc of SocketServer#startup(). (#7215)

--
[...truncated 2.59 MB...]

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testRestartTaskLeaderRedirect PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testRestartTaskOwnerRedirect STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testRestartTaskOwnerRedirect PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testExpandConnectorsWithConnectorNotFound STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testExpandConnectorsWithConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorWithHeaderAuthorization STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorWithHeaderAuthorization PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfigConnectorNotFound STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfigConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNameAllWhitespaces STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNameAllWhitespaces PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectors STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectors PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testExpandConnectorsStatus STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testExpandConnectorsStatus PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testExpandConnectorsInfo STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testExpandConnectorsInfo PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testFullExpandConnectors STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testFullExpandConnectors PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNotLeader STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorExists STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorExists PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNoName STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNoName PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotLeader STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotFound STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfig STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorWithControlSequenceInName STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorWithControlSequenceInName PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfigWithSpecialCharsInName STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfigWithSpecialCharsInName PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfigWithControlSequenceInName STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfigWithControlSequenceInName PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfigNameMismatch STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfigNameMismatch PASSED


Re: UnderReplicatedPartitions = 0 and UnderMinPartitionIsrCount > 0

2019-08-15 Thread Alexandre Dupriez
Hello James,

Many thanks for your quick response and pointing out the associated JIRA.

Happy to dig a bit on the function paths affected by a consistency check on
these parameters, as explained in the ticket, and see what could be done
(or not).

Thanks,
Alexandre

On Wed, Aug 14, 2019 at 06:11, James Cheng wrote:

> Alexandre,
>
> You are right that this is a problem. There is a JIRA on this from a while
> back.
>
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/KAFKA-4680
>
> I don’t think anyone is currently working on it right now.
>
> -James
>
> Sent from my iPhone
>
> > On Aug 13, 2019, at 1:17 AM, Alexandre Dupriez <
> alexandre.dupr...@gmail.com> wrote:
> >
> > Hello all,
> >
> > We run into a scenario where we had misconfigured the replication factor
> > and the minimum in-sync replicas count in such a way that the replication
> > factor (either default or defined at the topic level) is strictly lower
> > than the property min.insync.replicas.
> >
> > We observed broker metrics reporting UnderReplicatedPartitions = 0 and
> > UnderMinPartitionIsrCount > 0, and the topic’s partitions were
> unavailable
> > for producers (with ack=all) and consumers.
> >
> > Since it seems to be impossible in this scenario to ever reach the number
> > of in-sync replicas, making partitions permanently unavailable, it could be
> > worth preventing this misconfiguration from making its way to the broker, e.g.
> > a check could be added when a topic is created to ensure the replication
> > factor is greater than or equal to the minimum number of in-sync replicas.
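
As a concrete example of the misconfiguration (topic name and sizes invented for illustration), a topic created like this can never assemble enough in-sync replicas, so producers using acks=all will keep failing with NotEnoughReplicasException:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    try (Admin admin = AdminClient.create(props)) {
        // replication factor 2 but min.insync.replicas 3: the ISR can never reach the minimum
        NewTopic broken = new NewTopic("misconfigured-topic", 1, (short) 2)
            .configs(Collections.singletonMap("min.insync.replicas", "3"));
        admin.createTopics(Collections.singleton(broken)).all().get();
    }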
> >
> > I may have missed something though. What do you think?
> >
> > Thank you,
> > Alexandre
>


Re: [VOTE] KIP-496: Administrative API to delete consumer offsets

2019-08-15 Thread Jason Gustafson
Hey Colin, I think deleting all offsets is equivalent to deleting the
group, which can be done with the `deleteConsumerGroups` api. I debated
whether there should be a way to delete partitions for all unsubscribed
topics, but I decided to start with a simple API.

I'm going to close this vote. The final result is +3 with myself, Guozhang,
and Colin voting.

-Jason

On Tue, Aug 13, 2019 at 9:21 AM Colin McCabe  wrote:

> Hi Jason,
>
> Thanks for the KIP.
>
> Is there ever a desire to delete all the offsets for a given group?
> Should the protocol and tools support this?
>
> +1 (binding)
>
> best,
> Colin
>
>
> On Mon, Aug 12, 2019, at 10:57, Guozhang Wang wrote:
> > +1 (binding).
> >
> > Thanks Jason!
> >
> > On Wed, Aug 7, 2019 at 11:18 AM Jason Gustafson 
> wrote:
> >
> > > Hi All,
> > >
> > > I'd like to start a vote on KIP-496:
> > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-496%3A+Administrative+API+to+delete+consumer+offsets
> > > .
> > > +1
> > > from me of course.
> > >
> > > -Jason
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: [DISCUSS] KIP-504 - Add new Java Authorizer Interface

2019-08-15 Thread Rajini Sivaram
Hi Colin,

Thanks for the review. I have updated the KIP to move the interfaces for
request context and server info to the authorizer package. These are now
called AuthorizableRequestContext and AuthorizerServerInfo. Endpoint is now
a class in org.apache.kafka.common to make it reusable since we already
have multiple implementations of it. I have removed requestName from the
request context interface since authorizers can distinguish follower fetch
and consumer fetch from the operation being authorized. So 16-bit request
type should be sufficient for audit logging.  Also replaced AuditFlag with
two booleans as you suggested.
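
To sketch how those two booleans would be consumed, an authorizer implementation might drive its audit logging like this; the names follow the KIP discussion (AuthorizableRequestContext, Action, logIfAllowed/logIfDenied) but the exact signatures are whatever the final interface defines, and aclCache/log are stand-ins for an implementation's own state.

    public List<AuthorizationResult> authorize(AuthorizableRequestContext context, List<Action> actions) {
        List<AuthorizationResult> results = new ArrayList<>(actions.size());
        for (Action action : actions) {
            boolean allowed = aclCache.allows(context.principal(), action.operation(), action.resourcePattern());
            // the two flags replace the AuditFlag enum: log only when the caller asked for it
            if (allowed && action.logIfAllowed()) {
                log.info("ALLOWED {} on {} for {}", action.operation(), action.resourcePattern(), context.principal());
            } else if (!allowed && action.logIfDenied()) {
                log.info("DENIED {} on {} for {}", action.operation(), action.resourcePattern(), context.principal());
            }
            results.add(allowed ? AuthorizationResult.ALLOWED : AuthorizationResult.DENIED);
        }
        return results;
    }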

Can you take another look and see if the KIP is ready for voting?

Thanks for all your help!

Regards,

Rajini

On Wed, Aug 14, 2019 at 8:59 PM Colin McCabe  wrote:

> Hi Rajini,
>
> I think it would be good to rename KafkaRequestContext to something like
> AuthorizableRequestContext, and put it in the
> org.apache.kafka.server.authorizer namespace.  If we put it in the
> org.apache.kafka.common namespace, then it's not really clear that it's
> part of the Authorizer API.  Since this class is purely an interface, with
> no concrete implementation of anything, there's nothing common to really
> reuse in any case.  We definitely don't want someone to accidentally add or
> remove methods because they think this is just another internal class used
> for requests.
>
> The BrokerInfo class is a nice improvement.  It looks like it will be
> useful for passing in information about the context we're running in.  It
> would be nice to call this class ServerInfo rather than BrokerInfo, since
> we will want to run the authorizer on controllers as well as on brokers,
> and the controller may run as a separate process post KIP-500.  I also
> think that this class should be in the org.apache.kafka.server.authorizer
> namespace.  Again, it is an interface, not a concrete implementation, and
> it's an interface that is very specifically for the authorizer.
>
> I agree that we probably don't have enough information preserved for
> requests currently to always know what entity made them.  So we can leave
> that out for now (except in the special case of Fetch).  Perhaps we can add
> this later if it's needed.
>
> I understand the intention behind AuthorizationMode (which is now called
> AuditFlag in the latest revision).  But it still feels complex.  What if we
> just had two booleans in Action: logSuccesses and logFailures?  That seems
> to cover all the cases here.  MANDATORY_AUTHORIZE = true, true.
> OPTIONAL_AUTHORIZE = true, false.  FILTER = true, false.  LIST_AUTHORIZED =
> false, false.  Would there be anything lost versus having the enum?
>
> best,
> Colin
>
>
> On Wed, Aug 14, 2019, at 06:29, Mickael Maison wrote:
> > Hi Rajini,
> >
> > Thanks for the KIP!
> > I really like that authorize() will be able to take a batch of
> > requests, this will speed up many implementations!
> >
> > On Tue, Aug 13, 2019 at 5:57 PM Rajini Sivaram 
> wrote:
> > >
> > > Thanks David! I have fixed the typo.
> > >
> > > Also made a couple of changes to make the context interfaces more
> generic.
> > > KafkaRequestContext now returns the 16-bit API key as Colin suggested
> as
> > > well as the friendly name used in metrics which are useful in audit
> logs.
> > > `Authorizer#start` is now provided a generic `BrokerInfo` interface
> that
> > > gives cluster id, broker id and endpoint information. The generic
> interface
> > > can potentially be used in other broker plugins in future and provides
> > > dynamically generated configs like broker id and ports which are
> currently
> > > not available to plugins unless these configs are statically
> configured.
> > > Please let me know if there are any concerns.
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > On Tue, Aug 13, 2019 at 4:30 PM David Jacot 
> wrote:
> > >
> > > > Hi Rajini,
> > > >
> > > > Thank you for the update! It looks good to me. There is a typo in the
> > > > `AuditFlag` enum: `MANDATORY_AUTHOEIZE` -> `MANDATORY_AUTHORIZE`.
> > > >
> > > > Regards,
> > > > David
> > > >
> > > > On Mon, Aug 12, 2019 at 2:54 PM Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi David,
> > > > >
> > > > > Thanks for reviewing the KIP! Since questions about `authorization
> mode`
> > > > > and `count` have come up multiple times, I have renamed both.
> > > > >
> > > > > 1) Renamed `count` to `resourceReferenceCount`. It is the number
> of times
> > > > > the resource being authorized is referenced within the request.
> > > > >
> > > > > 2) Renamed `AuthorizationMode` to `AuditFlag`. It is provided to
> improve
> > > > > audit logging in the authorizer. The enum values have javadoc which
> > > > > indicate how the authorization result is used in each of the modes
> to
> > > > > enable authorizers to log audit messages at the right severity
> level.
> > > > >
> > > > > Regards,
> > > > >
> > > > > Rajini
> > > > >
> > > > > On Mon, Aug 12, 

Re: [DISCUSS] KIP-481: SerDe Improvements for Connect Decimal type in JSON

2019-08-15 Thread Konstantine Karantasis
Thanks! KIP reads even better for me now.
Just voted. +1 non-binding

Konstantine

On Wed, Aug 14, 2019 at 7:00 PM Almog Gavra  wrote:

> Thanks for the review Konstantine!
>
> I think the terminology suggestion definitely makes things clearer - I will
> update the documentation based on your suggestion (e.g. s/Consumer/Sink
> Converter/g and s/Producer/Source Converter/g).
>
> Cheers,
> Almog
>
> On Wed, Aug 14, 2019 at 8:13 AM Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> > Thanks Almog for preparing this KIP!
> > I think it will improve usability and troubleshooting with JSON data a
> lot.
> >
> > The finalized plan seems quite concrete now. I also liked that some
> > implementation specific implications (such as setting the ObjectMapper to
> > deserialize floating point as BigDecimal) are highlighted in the KIP.
> >
> > Still, as I was reading the KIP, the main obstacle I encountered was
> around
> > terminology. I couldn't get used to reading "producer" and "consumer" and
> > not thinking in terms of Kafka producers and consumers - which are not
> > relevant to what this KIP proposes. Thus, I'd suggest replacing
> > "Producer(s)" with "Source Converter(s)" and "Consumer(s)" with "Sink
> > Converter(s)" (even if "Converter used by Source Connector" or "Converter
> > used by Sink Connector" would be even more accurate - maybe this could be
> > an explanation in a footnote). Terminology around converters has been
> > tricky in the past and adding producers/consumers in the mix might add to
> > the confusion.
> >
> > Another example where I'd apply this different terminology would be to a
> > phrase such as the following:
> > "Because of this, users must take care to first ensure that all consumers
> > have upgraded to the new code before upgrading producers to make use of
> the
> > NUMERIC serialization format."
> > which I'd write
> > "Because of this, users must take care to first ensure that all sink
> > connectors have upgraded to the new converter code before upgrading
> source
> > connectors to make use of the NUMERIC serialization format in
> > JsonConverter."
> >
> > Let me know if you think this suggestion makes the KIP easier to follow.
> > Otherwise I think it's a solid proposal.
> >
> > I'm concluding with a couple of nits:
> >
> > - "Upgraded Producer with BASE64 serialization, Legacy Consumer: this
> > scenario is okay as the upgraded ~producer~ consumer will be able to read
> > binary as today" (again according to my suggestion above, it could be as
> > the upgraded source converter ...)
> >
> > - "consumers cannot consumer NUMERIC data. " -> "consumers cannot read
> > NUMERIC data"
> >
> > Best,
> > Konstantine
> >
> > On Fri, Aug 9, 2019 at 6:37 PM Almog Gavra  wrote:
> >
> > > Good catches! Fixed :)
> > >
> > > On Thu, Aug 8, 2019 at 10:36 PM Arjun Satish 
> > > wrote:
> > >
> > > > Cool!
> > > >
> > > > Couple of nits:
> > > >
> > > > - In public interfaces, typo: *json.decimal.serialization.fromat*
> > > > - In public interfaces, you use the term "HEX" instead of "BASE64".
> > > >
> > > >
> > > >
> > > > On Wed, Aug 7, 2019 at 9:51 AM Almog Gavra 
> wrote:
> > > >
> > > > > EDIT: everywhere I've been using "HEX" I meant to be using
> "BASE64".
> > I
> > > > will
> > > > > update the KIP to reflect this.
> > > > >
> > > > > On Wed, Aug 7, 2019 at 9:44 AM Almog Gavra 
> > wrote:
> > > > >
> > > > > > Thanks for the feedback Arjun! I'm happy changing the default
> > config
> > > to
> > > > > > HEX instead of BINARY, no strong feelings there.
> > > > > >
> > > > > > I'll also clarify the example in the KIP to be clearer:
> > > > > >
> > > > > > - serialize the decimal field "foo" with value "10.2345" with the
> > HEX
> > > > > > setting: {"foo": "D3J5"}
> > > > > > - serialize the decimal field "foo" with value "10.2345" with the
> > > > NUMERIC
> > > > > > setting: {"foo": 10.2345}
> > > > > >
> > > > > > With regards to the precision issue, that was my original concern
> > as
> > > > well
> > > > > > (and why I originally suggested a TEXT format). Many JSON
> > > deserializers
> > > > > > (e.g. Jackson with
> > > DeserializationFeature.USE_BIG_DECIMAL_FOR_FLOATS),
> > > > > > however, have the ability to deserialize decimals correctly so I
> > will
> > > > > > configure that as the default for Connect's JsonDeserializer.
> It's
> > > > > probably
> > > > > > a good idea to call out that using other deserializers must be
> done
> > > > with
> > > > > > care - I will add that documentation to the serialization config.
> > > > > >
> > > > > > Note that there would not be an issue on the _serialization_ side
> > of
> > > > > > things as Jackson respects BigDecimal.
> > > > > >
> > > > > > Almog
> > > > > >
> > > > > > On Tue, Aug 6, 2019 at 11:23 PM Arjun Satish <
> > arjun.sat...@gmail.com
> > > >
> > > > > > wrote:
> > > > > >
> > > > > >> hey Almog, nice work! couple of thoughts (hope I'm not late
> since
> > > you
> > > > > >> started the voting thread already):
> > > 
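[A minimal sketch, outside of Connect, of the Jackson behavior discussed above: enabling USE_BIG_DECIMAL_FOR_FLOATS so floating-point JSON values are read as BigDecimal without losing precision. The class name and literal values are only for illustration; this is not the JsonConverter implementation itself.]

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.math.BigDecimal;

public class DecimalJsonSketch {
    public static void main(String[] args) throws Exception {
        // Without this feature, floating-point JSON values are bound to Double,
        // which can silently lose precision for high-scale decimals.
        ObjectMapper mapper = new ObjectMapper()
                .enable(DeserializationFeature.USE_BIG_DECIMAL_FOR_FLOATS);

        JsonNode node = mapper.readTree("{\"foo\": 10.2345}");
        BigDecimal value = node.get("foo").decimalValue();
        System.out.println(value);                            // 10.2345, exact
        System.out.println(mapper.writeValueAsString(value)); // serialized back as 10.2345
    }
}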

Re: [VOTE] KIP-481: SerDe Improvements for Connect Decimal type in JSON

2019-08-15 Thread Konstantine Karantasis
Thanks Almog!
Nicely designed and concise KIP.

+1 non-binding

Konstantine

On Tue, Aug 6, 2019 at 11:44 PM Almog Gavra  wrote:

> Hello Everyone,
>
> After discussions on
>
> https://lists.apache.org/thread.html/fa665a6dc59f73ca294a00bcbef2eaa3ad00cc69626e91c516fa4fca@%3Cdev.kafka.apache.org%3E
> I've opened this KIP up for formal voting.
>
> KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-481%3A+SerDe+Improvements+for+Connect+Decimal+type+in+JSON
>
> Please update the DISCUSS thread with any concerns/comments.
>
> Cheers!
> Almog
>


Re: Apply to be contributor of kafka

2019-08-15 Thread Matthias J. Sax
Thanks for your interest in Kafka!

I added you to the list of contributors in Jira. You can now self-assign
tickets.

To get started check out the web-page: https://kafka.apache.org/contributing

Let us know if you have any further questions.


-Matthias

On 8/14/19 5:55 PM, xiemian wrote:
> Hello,
> 
> I am a graduate student studying in the United States, majoring in computer science. 
> I am very interested in becoming a contributor to Kafka. Before going back to school 
> I had a couple of years of working experience in the Hadoop ecosystem: I built 
> clusters to satisfy real-world requirements and also modified some source code of 
> YARN and MapReduce to make improvements. I have already read some of the Kafka 
> source code and would like to be a contributor, since contributing to the open 
> source community is my goal. Could you please tell me what I need to do?
> 
> My ASF JIRA username is: mianxie. Email: xiemian...@gmail.com
> 
> Thanks for your time and patience. I am really looking forward to your reply.
> 
> Best.
> Mian Xie
> 





Re: Request permission

2019-08-15 Thread Matthias J. Sax
Done.

On 8/14/19 6:30 PM, Mario Molina wrote:
> Hi,
> 
> I'd like to create a new page for a KIP in the Apache Kafka Confluence site.
> The implementation of this KIP is already done but changes some things in
> the API that should be approved. You can check it out here.
> 
> My ID in Confluence is "mmolimar".
> 
> Regards,
> Mario
> 





Re: [DISCUSS] KIP-373: Allow users to create delegation tokens for other users

2019-08-15 Thread Viktor Somogyi-Vass
I started to implement my proposition and thought about it a little more, and
it seems I overthought the problem: we'd actually be better off having only the
User resource type and not CreateUsers. The problem with CreateUsers is that a
resource is apparently created only in addAcls (at least in SimpleAclAuthorizer).
Therefore, before creating the token, we'd need to check that the owner user has
been created and that the token creator has the permissions, and only then add
the user resource and the token. This means we'd only use CreateUsers when
creating tokens if the token requester principal already has CreateTokens
permissions for that user (the owner), so it is essentially a duplicate.
It would work if we required the resources to be added beforehand, but that is
not the case and is a detail of the Authorizer implementation.

I'll update the KIP accordingly and apologies for the extra round :).

Thanks,
Viktor
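[To make the proposed association concrete, here is a rough sketch of granting "userA can create delegation tokens for userB" through the AdminClient. ResourceType.USER and AclOperation.CREATE_TOKENS are the proposed additions and do not exist in the current API, so the enum constants and exact names below are hypothetical.]

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;
import java.util.Collections;
import java.util.Properties;

public class CreateTokenAclSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // The token owner (userB), modeled as a User resource (proposed type).
            ResourcePattern owner =
                    new ResourcePattern(ResourceType.USER, "userB", PatternType.LITERAL);
            // userA is allowed to create delegation tokens for that owner (proposed operation).
            AccessControlEntry entry = new AccessControlEntry(
                    "User:userA", "*", AclOperation.CREATE_TOKENS, AclPermissionType.ALLOW);
            admin.createAcls(Collections.singleton(new AclBinding(owner, entry)))
                    .all().get();
        }
    }
}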

On Wed, Aug 14, 2019 at 2:40 PM Viktor Somogyi-Vass 
wrote:

> Sorry, reading my email the second time I probably wasn't clear.
> So basically the concept is that there is a user who can add other users
> as resources (such as userB and userC) prior to creating the "userA can
> create delegation token for userB and userC" association with CreateTokens.
> To limit who can add new users as resources I thought we can introduce a
> CreateUser operation. It's true though that we could also say that a Create
> operation permission on the Cluster resource would be enough to create new
> users but I think from a generic security perspective it's better if we
> don't extend already existing operations.
> So a classic flow would be that prior to creating the delegation token for
> userB, userB itself has to be added by another user who has CreateUser
> permissions. After this a CreateToken permission has to be created that
> says "userA can create delegation tokens for userB" and after this userA
> can actually create the token.
> Let me know what you think.
>
> Viktor
>
> On Wed, Aug 14, 2019 at 1:30 PM Manikumar 
> wrote:
>
>> Hi,
>>
>> Why do we need a new ACL operation "CreateUsers"?
>> I think the "CreateTokens" operation is sufficient to create the "UserA can
>> create tokens for UserB, UserC" association.
>>
>> Thanks,
>>
>> On Tue, Aug 13, 2019 at 3:37 PM Viktor Somogyi-Vass <
>> viktorsomo...@gmail.com>
>> wrote:
>>
>> > Hi Manikumar,
>> >
>> > Yea, I just brought up superuser for the sake of simplicity :).
>> > Anyway, your proposition makes sense to me, I'll modify the KIP for
>> this.
>> >
>> > The changes summarized:
>> > 1. We'll need a new ACL operation as well (say "CreateUsers") to create
>> the
>> > "UserA can create tokens for UserB, UserC" association. This can be used
>> > via the createAcls API of the AdminClient.
>> > 2. CreateToken will be a User level operation (instead of a Cluster
>> level
>> > as in previous drafts). So that means any user who wants to create a
>> > delegation token for other users will have to have an ACL set up by a
>> > higher level user to authorize this.
>> > 3. DescribeToken will also be a User level operation. In this case
>> tokenT
>> > owned by userB will be described if userA has a Describe ACL on tokenT
>> or
>> > has a DescribeToken ACL on userB. Note that in the latter case userA
>> will
>> > be able to describe all other tokens belonging to userB.
>> >
>> > Would this work for you?
>> >
>> > Viktor
>> >
>> > On Mon, Aug 12, 2019 at 5:45 PM Colin McCabe 
>> wrote:
>> >
>> > > +1 for better access control here. In general, impersonating another
>> user
>> > > seems like it’s equivalent to super user access.
>> > >
>> > > Colin
>> > >
>> > > On Mon, Aug 12, 2019, at 05:43, Manikumar wrote:
>> > > > Hi Viktor,
>> > > >
>> > > > As per the KIP, it's not only the superuser: any user with the required
>> > > > permissions (CreateTokens on the Cluster resource) can create tokens for
>> > > > other users. The currently proposed permission is defined like "UserA can
>> > > > create tokens for any user".
>> > > > I am thinking, can we change the permissions to be like "UserA can create
>> > > > tokens for UserB, UserC"?
>> > > >
>> > > > Thanks,
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > On Fri, Aug 9, 2019 at 6:39 PM Viktor Somogyi-Vass <
>> > > viktorsomo...@gmail.com>
>> > > > wrote:
>> > > >
>> > > > > Hey Manikumar,
>> > > > >
>> > > > > Thanks for the feedback.
>> > > > > I'm not sure I fully grasp the use-case. Would this be a quota?
>> Do we
>> > > say
>> > > > > something like "there can be 10 active delegation tokens at a time
>> > > that is
>> > > > > created by superuserA for other users"?
>> > > > > I think such a feature could be useful to limit the
>> responsibility of
>> > > said
>> > > > > superuser (and blast radius in case of a faulty/malicious
>> superuser)
>> > > and
>> > > > > also to limit potential programming errors. Do you have other use
>> > cases
>> > > > > too?
>> > > > >
>> > > > > Thanks,
>> > > > 

Re: [VOTE] KIP-499 - Unify connection name flag for command line tool

2019-08-15 Thread Jakub Scholz
+1 (non-binding)

Jakub

On Sat, Aug 10, 2019 at 8:34 PM Stanislav Kozlovski 
wrote:

> Awesome KIP, +1 (non-binding)
>
> Thanks,
> Stanislav
>
> On Fri, Aug 9, 2019 at 11:32 PM Colin McCabe  wrote:
>
> > +1 (binding)
> >
> > cheers,
> > Colin
> >
> > On Fri, Aug 9, 2019, at 09:56, Ron Dagostino wrote:
> > > +1 (non-binding)
> > >
> > > The simplest of KIPs, with perhaps the biggest impact.  Like removing
> > > the thorn from the soles of my feet.
> > >
> > > Thanks for doing it.
> > >
> > > > On Aug 9, 2019, at 12:50 PM, Dongjin Lee  wrote:
> > > >
> > > > +1 (non-binding)
> > > >
> > > > Two binding +1s (Gwen, Harsha) and six non-binding +1s so far.
> > > > We need one more binding +1.
> > > >
> > > > Thanks,
> > > > Dongjin
> > > >
> > > >> On Sat, 10 Aug 2019 at 12:59 AM Tom Bentley 
> > wrote:
> > > >>
> > > >> +1 (non-binding). Thanks!
> > > >>
> > > >> On Fri, Aug 9, 2019 at 12:37 PM Satish Duggana <
> > satish.dugg...@gmail.com>
> > > >> wrote:
> > > >>
> > > >>> +1 (non-binding) Thanks for the KIP, so useful.
> > > >>>
> > > >>> On Fri, Aug 9, 2019 at 4:42 PM Mickael Maison <
> > mickael.mai...@gmail.com>
> > > >>> wrote:
> > > >>>
> > >  +1 (non binding)
> > >  Thanks for the KIP!
> > > 
> > >  On Fri, Aug 9, 2019 at 9:36 AM Andrew Schofield
> > >   wrote:
> > > >
> > > > +1 (non-binding)
> > > >
> > > > On 09/08/2019, 08:39, "Sönke Liebau" <
> soenke.lie...@opencore.com>
> > >  wrote:
> > > >
> > > >+1 (non-binding)
> > > >
> > > >
> > > >
> > > >On Fri, 9 Aug 2019 at 04:45, Harsha Chintalapani <
> > > >> ka...@harsha.io>
> > >  wrote:
> > > >
> > > >> +1  (binding). much needed!!
> > > >>
> > > >>
> > > >> On Thu, Aug 08, 2019 at 6:43 PM, Gwen Shapira <
> > > >> g...@confluent.io
> > > 
> > >  wrote:
> > > >>
> > > >>> +1 (binding) THANK YOU. It would be +100 if I could.
> > > >>>
> > > >>> On Thu, Aug 8, 2019 at 6:37 PM Mitchell  > > >>>
> > >  wrote:
> > > >>>
> > > >>> Hello Dev,
> > > >>> After the discussion I would like to start the vote for
> > > >> KIP-499
> > > >>>
> > > >>> The following command line tools will have the
> > >  `--bootstrap-server`
> > > >>> command line argument added: kafka-console-producer.sh,
> > > >>> kafka-consumer-groups.sh, kafka-consumer-perf-test.sh,
> > > >>> kafka-verifiable-consumer.sh, kafka-verifiable-producer.sh
> > > >>>
> > > >>> Thanks,
> > > >>> -Mitch
> > > >>>
> > > >>> --
> > > >>> Gwen Shapira
> > > >>> Product Manager | Confluent
> > > >>> 650.450.2760 | @gwenshap
> > > >>> Follow us: Twitter | blog
> > > >
> > > >
> > > >--
> > > >Sönke Liebau
> > > >Partner
> > > >Tel. +49 179 7940878
> > > >OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel -
> > > >>> Germany
> > > > --
> > > > *Dongjin Lee*
> > > >
> > > > *A hitchhiker in the mathematical world.*
> > > > *github:  github.com/dongjinleekr
> > > > linkedin:
> > kr.linkedin.com/in/dongjinleekr
> > > > speakerdeck:
> > speakerdeck.com/dongjin
> > > > *
> > >
> >
>
>
> --
> Best,
> Stanislav
>


Build failed in Jenkins: kafka-trunk-jdk11 #755

2019-08-15 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8788: Optimize client metadata handling with a large number of

--
[...truncated 6.53 MB...]

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > 
schemalessInsertConfiguredFields STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > 
schemalessInsertConfiguredFields PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > topLevelStructRequired 
STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > topLevelStructRequired 
PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > 
copySchemaAndInsertConfiguredFields STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > 
copySchemaAndInsertConfiguredFields PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless PASSED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
STARTED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeDateRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeDateRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanTrue PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #3854

2019-08-15 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8788: Optimize client metadata handling with a large number of

--
[...truncated 6.51 MB...]
org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testMultipleSourcesInvalid PASSED

> Task :jmh-benchmarks:test NO-SOURCE

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilWindowCloses PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > session windowed 
KTable#suppress should correctly suppress results using 
Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > session windowed 
KTable#suppress should correctly suppress results using 
Suppressed.untilWindowCloses PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > non-windowed 
KTable#suppress should correctly suppress results using 
Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > non-windowed 
KTable#suppress should correctly suppress results 

Build failed in Jenkins: kafka-2.3-jdk8 #88

2019-08-15 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-8788: Optimize client metadata handling with a large number of

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H48 (ubuntu bionic) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.3^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.3^{commit} # timeout=10
Checking out Revision 4d0cc439eea0c57aba508fae257c366edfd39028 
(refs/remotes/origin/2.3)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 4d0cc439eea0c57aba508fae257c366edfd39028
Commit message: "KAFKA-8788: Optimize client metadata handling with a large 
number of partitions (#7192)"
 > git rev-list --no-walk 90386e89be4d294baf63911af9012493a3faa35a # timeout=10
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[kafka-2.3-jdk8] $ /bin/bash -xe /tmp/jenkins1136468872655857978.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.8.1/bin/gradle
/tmp/jenkins1136468872655857978.sh: line 4: 
/home/jenkins/tools/gradle/4.8.1/bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
No credentials specified
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=4d0cc439eea0c57aba508fae257c366edfd39028, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #85
Recording test results
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
Not sending mail to unregistered user b...@confluent.io
Not sending mail to unregistered user ism...@juma.me.uk


Re: [DISCUSS] KIP-486 Support for pluggable KeyStore and TrustStore

2019-08-15 Thread Maulin Vasavada
Just to update - I'm still working on it; I only get to work on it on and off :(

On Thu, Aug 8, 2019 at 4:05 PM Maulin Vasavada 
wrote:

> Hi Harsha
>
> Let me try to write samples and will let you know.
>
> Thanks
> Maulin
>
> On Thu, Aug 8, 2019 at 4:00 PM Harsha Ch  wrote:
>
>> Hi Maulin,
>> With Java security providers, things can be as custom as you would like them
>> to be. If you only want to implement a custom way of loading the keystore and
>> truststore, and not implement any protocol/encryption handling, you can leave
>> those parts empty; there is no need to implement them.
>> Have you looked into the links I pasted before?
>>
>> https://github.com/spiffe/spiffe-example/blob/master/java-spiffe/spiffe-security-provider/src/main/java/spiffe/api/provider/SpiffeKeyStore.java
>>
>> https://github.com/spiffe/spiffe-example/blob/master/java-spiffe/spiffe-security-provider/src/main/java/spiffe/api/provider/SpiffeTrustManager.java
>>
>> https://github.com/spiffe/spiffe-example/blob/master/java-spiffe/spiffe-security-provider/src/main/java/spiffe/api/provider/SpiffeProvider.java
>>
>> Can you please tell me which of the methods above are too complex or
>> unnecessary to implement? Are you changing anything in the SSL/TLS
>> implementations provided by the default provider?
>>
>> All of the implementations delegate the checks to the default
>> implementation anyway.
>> The Spire agent is an example: it is nothing but a gRPC server listening on a
>> Unix domain socket. The code above makes an RPC call to the local daemon to
>> get the certificate and keys. The mechanics are pretty much the same as what
>> you are asking for.
>>
>> Thanks,
>> Harsha
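[A minimal sketch, in the spirit of the Spiffe examples linked above, of a custom java.security.Provider that only maps a KeyStore type to a custom KeyStoreSpi and does not touch any SSL/TLS algorithm or cipher-suite policy. The class names and the "CUSTOMKMS" type are made up for illustration.]

import java.security.Provider;
import java.security.Security;

public class CustomKmsProvider extends Provider {
    public CustomKmsProvider() {
        super("CustomKms", 1.0, "KeyStore loaded from an external KMS (illustrative)");
        // Map a KeyStore type name to a hypothetical KeyStoreSpi implementation
        // that fetches keys/certs from the external source.
        put("KeyStore.CUSTOMKMS", "com.example.security.CustomKmsKeyStoreSpi");
    }

    public static void register() {
        // Register with highest priority so KeyStore.getInstance("CUSTOMKMS")
        // resolves to the implementation above.
        Security.insertProviderAt(new CustomKmsProvider(), 1);
    }
}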
>>
>> On Thu, Aug 8, 2019 at 3:47 PM Maulin Vasavada > >
>> wrote:
>>
>> > Imagine a scenario like this - we know we have a custom KMS, and as the Kafka
>> > owners we want to comply with using that KMS source to load keys/certs. As the
>> > Kafka owners we know how to integrate with the KMS, but we don't necessarily
>> > have to know anything about cipher suites, algorithms, and the SSL/TLS
>> > implementation. Going the Provider way requires us to know a lot more than we
>> > should, doesn't it? Not that we would be concerned about or shy away from
>> > knowing those details - but if we don't have to, why should we?
>> >
>> > On Thu, Aug 8, 2019 at 3:23 PM Maulin Vasavada <
>> maulin.vasav...@gmail.com>
>> > wrote:
>> >
>> > > Hi Harsha
>> > >
>> > > We don't have spire (or similar) agents and we do not have keys/certs
>> > > locally on any brokers. To elaborate more on my previous email,
>> > >
>> > > I agree that Java security Providers are used in a much broader sense - to
>> > > have a particular implementation of an algorithm, use specific cipher
>> > > suites for SSL, or, in our current team's case, have a particular way to
>> > > leverage pre-generated SSL sessions. However, the scope of our KIP (486)
>> > > is much more restricted than that. We merely intend to provide a custom
>> > > keystore/truststore for our SSL connections and not really worry about
>> > > the underlying specific SSL/TLS implementation. This makes it a lot
>> > > simpler for us to keep the concerns separate and clear.
>> > >
>> > > I feel our approach is more complementary in that it allows for using
>> > > keystores of choice while retaining the flexibility to use any
>> > > underlying/available Provider for actually making the SSL call.
>> > >
>> > > We agree with KIP-492's approach based on Providers (and Java's
>> > > recommendation), but also strongly believe that our approach can
>> > > complement it very effectively for the reasons explained above.
>> > >
>> > > Thanks
>> > > Maulin
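[A hedged sketch of how the two approaches could complement each other on the configuration side: the custom keystore type from the provider sketch above is plugged into the existing SSL configs, while the TLS handshake itself still uses whatever default provider is in place. The "CUSTOMKMS" type and provider class are hypothetical, and the exact contract expected by KIP-492's security.providers config may differ from what is shown.]

import java.util.Properties;

public class CustomKeystoreClientConfigSketch {
    public static Properties sslConfigs() {
        Properties props = new Properties();
        // Hypothetical: register the custom provider through the mechanism
        // proposed in KIP-492 (the exact expected class contract may differ).
        props.put("security.providers", "com.example.security.CustomKmsProvider");
        // Point the existing SSL machinery at the custom keystore type; the
        // underlying SSL/TLS implementation is left untouched.
        props.put("security.protocol", "SSL");
        props.put("ssl.keystore.type", "CUSTOMKMS");
        props.put("ssl.truststore.type", "CUSTOMKMS");
        return props;
    }
}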
>> > >
>> > > On Thu, Aug 8, 2019 at 3:05 PM Harsha Chintalapani 
>> > > wrote:
>> > >
>> > >> Hi Maulin,
>> > >>
>> > >> On Thu, Aug 08, 2019 at 2:04 PM, Maulin Vasavada <
>> > >> maulin.vasav...@gmail.com>
>> > >> wrote:
>> > >>
>> > >> > Hi Harsha
>> > >> >
>> > >> > The reason we rejected the SslProvider route is that we only needed a
>> > >> > custom way to load keys/certs, not to touch any policy that existing
>> > >> > Providers, like the SunJSSE Provider, govern.
>> > >> >
>> > >>
>> > >> We have exactly the same requirement to load certs and keys, through the
>> > >> Spire agent, and we used security.provider to do exactly that. I am not
>> > >> sure why you would be modifying any policies provided by the default
>> > >> SunJSSE provider. Can you give me an example of a custom provider that
>> > >> would override an existing policy in the SunJSSE provider?
>> > >>
>> > >> As pointed out earlier, this kip
>> > >>
>> > >>
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-492%3A+Add+java+security+providers+in+Kafka+Security+config
>> > >> allows
>> > >> you to  load security.provider through config.
>> > >> Take a look at the examples I gave before
>> > >>
>> > >>
>> >
>> https://github.com/spiffe/spiffe-example/blob/master/java-spiffe/spiffe-security-provider/src/main/java/spiffe/api/provider/SpiffeProvider.java
>> > >> It registers