Re: [VOTE] KIP-281: ConsumerPerformance: Increase Polling Loop Timeout and Make It Reachable by the End User

2018-04-19 Thread Alex Dunayevsky
+1


4 votes total:

  1 binding vote (Jason Gustafson)

  3 non-binding votes (Moshe Blumberg, Ted Yu, Alex Dunayevsky)


Can we consider the voting closed?


Thank you everyone!

Alex Dunayevsky


> Tue, 17 Apr 2018 23:28:35 GMT, Jason Gustafson  wrote:

> +1 (binding)
>
> On Tue, Apr 17, 2018 at 9:04 AM, Moshe Blumberg wrote:

> +1
>
>
> 
> From: Ted Yu 
> Sent: 16 April 2018 22:43
> To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-281: ConsumerPerformance: Increase Polling Loop
> Timeout and Make It Reachable by the End User
> >
> > +1
> >
> > On Mon, Apr 16, 2018 at 2:25 PM, Alex Dunayevsky 
> wrote:
> >
> > > Hello friends,
> > >
> > > Let's start the vote for KIP-281: ConsumerPerformance: Increase Polling
> > > Loop Timeout and Make It Reachable by the End User:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 281%3A+ConsumerPerformance%3A+Increase+Polling+Loop+Timeout+
> > > and+Make+It+Reachable+by+the+End+User
> > >
> > > Thank you,
> > > Alexander Dunayevsky
> >
>


Re: [DISCUSS] KIP-285: Connect Rest Extension Plugin

2018-04-19 Thread Magesh Nandakumar
Hi All,

I have updated the KIP with the following changes:

   1. Expanded the Motivation section
   2. Included details about the interface in the public interface section
   3. Modified the config name to rest.extension.classes
   4. Modified the ConnectRestExtension to include Configurable instead of
   ResourceConfig
   5. Modified the "Rest Extension Integration with Connect" in "Proposed
   Approach" to include a new Custom implementation for Configurable
   6. Provided examples for the Java Service provider mechanism
   7. Included a reference implementation in scope

Kindly let me know your thoughts on the updates.

Thanks
Magesh
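
For reference, here is a rough sketch of the Java Service Provider mechanism
mentioned in item 6. The interface name and file path below are illustrative
assumptions for this sketch, not the KIP's final API. An extension JAR would
declare its implementation in a provider-configuration file:

    # META-INF/services/org.apache.kafka.connect.rest.ConnectRestExtension
    com.example.MyRestExtension

and the worker could then discover it via java.util.ServiceLoader:

    import java.util.ServiceLoader;

    // Placeholder for the extension interface discussed in this thread (name assumed).
    interface ConnectRestExtension { }

    public class RestExtensionLoader {
        public static void main(String[] args) {
            // Iterates over every implementation named in a META-INF/services
            // file on the classpath/plugin path.
            for (ConnectRestExtension ext : ServiceLoader.load(ConnectRestExtension.class)) {
                System.out.println("Discovered REST extension: " + ext.getClass().getName());
            }
        }
    }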

On Thu, Apr 19, 2018 at 10:39 AM, Magesh Nandakumar 
wrote:

> Hi Randall,
>
> Thanks for your feedback. I also would like to go with
> `rest.extension.classes`.
>
> For exposing Configurable, my original intention was just to expose that
> to the extension because that's all one needs to register JAX-RS resources.
> The fact that we use Jersey shouldn't even be exposed in the interface.
> Hence it doesn't affect the public API by any means.
>
> I will update the KIP and let everyone know.
>
> Thanks
> Magesh
>
> On Thu, Apr 19, 2018 at 9:51 AM, Randall Hauch  wrote:
>
>> On Thu, Apr 12, 2018 at 10:59 AM, Magesh Nandakumar
>> wrote:
>>
>> > Hi Randall,
>> >
>> > Thanks a lot for your feedback.
>> >
>> > I will update the KIP to reflect your comments in (1), (2), (7) and (8).
>> >
>>
>> Looking forward to these.
>>
>>
>> >
>> > For comment # (3) , while it would be great to automatically configure
>> the
>> > Rest Extensions, I would prefer that to be specified explicitly. Let's
>> > assume a connector archive includes an implementation for the
>> RestExtension
>> > to do authentication using some header. We don't want this to be
>> > automatically included. Having said that I think that the config key
>> name
>> > should probably be changed to something like "rest.extension" or
>> > "rest.extension.class".
>> >
>>
>> That's a good point. I do like `rest.extension.class` (or `..classes`?)
>> much more than `rest.plugins`.
>>
>>
>> >
>> > For the comment regarding the resource loading into jersey, I have the
>> > following proposal
>> >
>> > Create an implementation of Configurable(
>> > https://docs.oracle.com/javaee/7/api/javax/ws/rs/core/Configurable.html
>> )
>> > that delegates to ResourceConfig. In the ConnectRestExtension, we would
>> > expose only Configurable, which is sufficient to register new
>> > resources. In the new implementation, we will check if the resource is
>> > already registered using ResourceConfig.isRegistered() method and log a
>> > warning if the resource is already registered. This will make it a
>> > deterministic behavior and avoid any potential re-registrations. Let me
>> > know your thoughts on these.
>> >
>>
>> This sounds like a good idea. Is it as flexible as the current proposal? If
>> not,
>> then I'd love to see how this affects the public APIs.
>>
>>
>> >
>> > Thanks
>> > Magesh
>> >
>> >
>> > On Wed, Apr 11, 2018 at 10:06 AM, Randall Hauch 
>> wrote:
>> >
>> > > Very nice proposal, Magesh. I like the approach and the new concepts
>> and
>> > > interfaces, but I do have a few comments/suggestions about some
>> specific
>> > > details:
>> > >
>> > >1. In the "Motivation" section, perhaps it makes sense to briefly
>> > >describe one or two somewhat concrete examples of how this is
>> useful.
>> > >2. Maybe in the "Public Interfaces" section you could briefly
>> describe
>> > >each of the interfaces, what they represent, and which are
>> implemented
>> > > by
>> > >the framework vs implemented by an extension. I think it'd help to
>> > > explain
>> > >that only the `ConnectRestPlugin` needs to be implemented, and the
>> > rest
>> > >will be provided by the framework. I know the next section goes
>> into
>> > it
>> > > a
>> > >bit, but it'd be useful in this section when first talking about
>> the
>> > new
>> > >interfaces.
>> > >3. Also in the "Public Interfaces" section: I don't think we should
>> > >introduce a "rest.plugins" configuration property. Instead, can we
>> not
>> > > just
>> > >instantiate and call all of the ConnectRestPlugins that we find on
>> the
>> > >plugin path? Besides, it seems too close to the `plugin.path`
>> > > configuration
>> > >property.
>> > >4. Why would the implementation register Connect resources *after*
>> the
>> > >plugins, if Jersey currently registers only the first one? The
>> > "Rejected
>> > >Alternatives" mentions why, but this section should be explicit
>> about
>> > > why.
>> > >For example, "The plugins would be registered in the
>> > >RestServer.start(Herder herder) method before registering the
>> default
>> > >Connect resources, which ensures that plugins cannot remove Connect
>> > >resources."
>> > >5. "Hence, it is 

[jira] [Resolved] (KAFKA-6592) NullPointerException thrown when executing ConsoleCosumer with deserializer set to `WindowedDeserializer`

2018-04-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6592.
--
Resolution: Fixed

Thanks for the reminder!

> NullPointerException thrown when executing ConsoleCosumer with deserializer 
> set to `WindowedDeserializer`
> -
>
> Key: KAFKA-6592
> URL: https://issues.apache.org/jira/browse/KAFKA-6592
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Affects Versions: 1.0.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Minor
>
> When reading a streams app's output topic with the WindowedDeserializer 
> using kafka-console-consumer.sh, a NullPointerException was thrown: the 
> inner deserializer was never initialized, since there is no place 
> in ConsoleConsumer to set this class.
> Complete stack trace is shown below:
> {code:java}
> [2018-02-26 14:56:04,736] ERROR Unknown error when running consumer:  
> (kafka.tools.ConsoleConsumer$)
> java.lang.NullPointerException
> at 
> org.apache.kafka.streams.kstream.internals.WindowedDeserializer.deserialize(WindowedDeserializer.java:89)
> at 
> org.apache.kafka.streams.kstream.internals.WindowedDeserializer.deserialize(WindowedDeserializer.java:35)
> at 
> kafka.tools.DefaultMessageFormatter.$anonfun$writeTo$2(ConsoleConsumer.scala:544)
> at scala.Option.map(Option.scala:146)
> at kafka.tools.DefaultMessageFormatter.write$1(ConsoleConsumer.scala:545)
> at kafka.tools.DefaultMessageFormatter.writeTo(ConsoleConsumer.scala:560)
> at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:147)
> at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:84)
> at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
> at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}
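
For illustration, a minimal sketch of the failure mode. It assumes (per the
description above) that the no-arg constructor used for reflective
instantiation leaves the inner deserializer null:

{code:java}
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.kstream.internals.WindowedDeserializer;

public class WindowedDeserializerNpeSketch {
    public static void main(String[] args) {
        // ConsoleConsumer instantiates deserializers reflectively through the
        // no-arg constructor, so the inner deserializer is never set:
        Deserializer<Windowed<String>> broken = new WindowedDeserializer<>();
        broken.deserialize("output-topic", new byte[16]); // NPE, as in the stack trace above

        // Constructing it with an inner deserializer avoids the NPE:
        Deserializer<Windowed<String>> ok =
                new WindowedDeserializer<>(new StringDeserializer());
    }
}
{code}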



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk7 #3357

2018-04-19 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: add window store range query in simple benchmark (#4894)

--
[...truncated 1.93 MB...]

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBetweenBeginningAndEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBeforeBeginningOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBeforeBeginningOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
shouldThrowOnInvalidDateFormat STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
shouldThrowOnInvalidDateFormat PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenAfterEndOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenAfterEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenBeforeBeginningOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenBeforeBeginningOffset PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideNonPrefixedCustomConfigsWithPrefixedConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideNonPrefixedCustomConfigsWithPrefixedConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 

Jenkins build is back to normal : kafka-trunk-jdk8 #2569

2018-04-19 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk10 #36

2018-04-19 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-6809) connections-created metric does not behave as expected

2018-04-19 Thread Anna Povzner (JIRA)
Anna Povzner created KAFKA-6809:
---

 Summary: connections-created metric does not behave as expected
 Key: KAFKA-6809
 URL: https://issues.apache.org/jira/browse/KAFKA-6809
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.1
Reporter: Anna Povzner


"connections-created" sensor is described as "new connections established". It 
currently records only connections that the broker creates, but does not count 
connections received. It seems we should also count connections received: 
either include them in this metric (and also clarify the description) or add 
a new metric (counting the two types of connections separately). I am not sure 
how useful it is to separate them, so I think we should take the first approach.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSSION] KIP-266: Add TimeoutException to KafkaConsumer#position()

2018-04-19 Thread John Roesler
Thanks for the tip, Ted!

On Thu, Apr 19, 2018 at 12:12 PM, Ted Yu  wrote:

> John:
> In case you want to pursue async poll, it seems (by looking at current API)
> that introducing PollCallback follows existing pattern(s).
>
> e.g. KafkaConsumer#commitAsync(OffsetCommitCallback)
>
> FYI
>
> On Thu, Apr 19, 2018 at 10:08 AM, John Roesler  wrote:
>
> > Hi Richard,
> >
> > Thanks for the invitation! I do think it would be safer to introduce a
> new
> > poll
> > method than to change the semantics of the old one. I've been mulling
> about
> > whether the new one could still have (slightly different) async semantics
> > with
> > a timeout of 0. If possible, I'd like to avoid introducing another new
> > "asyncPoll".
> >
> > I'm planning to run some experiments and dig into the implementation a
> bit
> > more before solidifying the proposal. I'll update the KIP as you suggest
> at
> > that point,
> > and then can call for another round of reviews and voting.
> >
> > Thanks,
> > -John
> >
> > On Tue, Apr 17, 2018 at 4:53 PM, Richard Yu 
> > wrote:
> >
> > > Hi John,
> > >
> > > Do you have a preference for fixing the poll() method (e.g. using
> > asyncPoll
> > > or just sticking with the current method but with an extra timeout
> > > parameter) ? I think your current proposition for KIP-288 is better
> than
> > > what I have on my side. If you think there is something that you want
> to
> > > add, you could go ahead and change KIP-266 to your liking. Just to note
> > > that it would be preferable that if one of us modifies this KIP, it
> would
> > > be best to mention your change on this thread to let each other know
> > (makes
> > > it easier to coordinate progress).
> > >
> > > Thanks,
> > > Richard
> > >
> > > On Tue, Apr 17, 2018 at 2:07 PM, John Roesler 
> wrote:
> > >
> > > > Ok, I'll close the discussion on KIP-288 and mark it discarded.
> > > >
> > > > We can solidify the design for poll in KIP-266, and once it's
> approved,
> > > > I'll coordinate with Qiang Zhao on the PR for the poll part of the
> > work.
> > > > Once that is merged, you'll have a clean slate for the rest of the
> > work.
> > > >
> > > > On Tue, Apr 17, 2018 at 3:39 PM, Richard Yu <
> > yohan.richard...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi John,
> > > > >
> > > > > I think that you could finish your PR that corresponds with KIP-288
> > and
> > > > > merge it. I can finish my side of the work afterwards.
> > > > >
> > > > > On another note, adding an asynchronous version of poll() would
> make
> > > > > sense, particularly since the current version of Kafka does not
> > > support
> > > > > it.
> > > > >
> > > > > Thanks
> > > > > Richard
> > > > >
> > > > > On Tue, Apr 17, 2018 at 12:30 PM, John Roesler 
> > > > wrote:
> > > > >
> > > > > > Cross-pollinating from some discussion we've had on KIP-288,
> > > > > >
> > > > > > I think there's a good reason that poll() takes a timeout when
> none
> > > of
> > > > > the
> > > > > > other methods do, and it's relevant to this discussion. The
> timeout
> > > in
> > > > > > poll() is effectively implementing a long-poll API (on the client
> > > side,
> > > > > so
> > > > > > it's not really long-poll, but the programmer-facing behavior is
> > the
> > > > > same).
> > > > > > The timeout isn't really bounding the execution time of the
> method,
> > > but
> > > > > > instead giving a max time that callers are willing to wait around
> > and
> > > > see
> > > > > > if any results show up.
> > > > > >
> > > > > > If I understand the code sufficiently, it would be perfectly
> > > reasonable
> > > > > for
> > > > > > a caller to use a timeout of 0 to implement async poll, it would
> > just
> > > > > mean
> > > > > > that KafkaConsumer would just check on each call if there's a
> > > response
> > > > > > ready and if not, fire off a new request without waiting for a
> > > > response.
> > > > > >
> > > > > > As such, it seems inappropriate to throw a ClientTimeoutException
> > > from
> > > > > > poll(), except possibly if the initial phase of ensuring an
> > > assignment
> > > > > > times out. We wouldn't want the method contract to be "returns a
> > > > > non-empty
> > > > > > collection or throws a ClientTimeoutException"
> > > > > >
> > > > > > Now, I'm wondering if we should actually consider one of my
> > rejected
> > > > > > alternatives, to treat the "operation timeout" as a separate
> > > parameter
> > > > > from
> > > > > > the "long-poll time". Or maybe adding an "asyncPoll(timeout, time
> > > > unit)"
> > > > > > that only uses the timeout to bound metadata updates and
> otherwise
> > > > > behaves
> > > > > > like the current "poll(0)".
> > > > > >
> > > > > > Thanks,
> > > > > > -John
> > > > > >
> > > > > > On Tue, Apr 17, 2018 at 2:05 PM, John Roesler  >
> > > > wrote:
> > > > > >
> > > > > > > Hey Richard,
> > > > > > >
> > > > > > > As you noticed, the 

Build failed in Jenkins: kafka-trunk-jdk10 #35

2018-04-19 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Retry setting aligned time until set (#4893)

--
[...truncated 1.48 MB...]

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe 

Build failed in Jenkins: kafka-trunk-jdk8 #2568

2018-04-19 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Retry setting aligned time until set (#4893)

--
[...truncated 415.96 KB...]

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumVerifyOptions STARTED


Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-19 Thread Matt Farmer
+1 (non-binding). TY!

On Thu, Apr 19, 2018 at 11:56 AM, tao xiao  wrote:

> +1 non-binding. thx Ismael
>
> On Thu, 19 Apr 2018 at 23:14 Vahid S Hashemian 
> wrote:
>
> > +1 (non-binding).
> >
> > Thanks Ismael.
> >
> > --Vahid
> >
> >
> >
> > From:   Jorge Esteban Quilcate Otoya 
> > To: dev@kafka.apache.org
> > Date:   04/19/2018 07:32 AM
> > Subject:Re: [VOTE] Kafka 2.0.0 in June 2018
> >
> >
> >
> > +1 (non binding), thanks Ismael!
> >
> > On Thu., 19 Apr. 2018 at 13:01, Manikumar
> > wrote:
> >
> > > +1 (non-binding).
> > >
> > > Thanks.
> > >
> > > On Thu, Apr 19, 2018 at 3:07 PM, Stephane Maarek <
> > > steph...@simplemachines.com.au> wrote:
> > >
> > > > +1 (non binding). Thanks Ismael!
> > > >
> > > > On Thu., 19 Apr. 2018, 2:47 pm Gwen Shapira, 
> > wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > > On Wed, Apr 18, 2018 at 11:35 AM, Ismael Juma 
> > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I started a discussion last year about bumping the version of the
> > > June
> > > > > 2018
> > > > > > release to 2.0.0[1]. To reiterate the reasons in the original
> > post:
> > > > > >
> > > > > > 1. Adopt KIP-118 (Drop Support for Java 7), which requires a
> major
> > > > > version
> > > > > > bump due to semantic versioning.
> > > > > >
> > > > > > 2. Take the chance to remove deprecated code that was deprecated
> > > prior
> > > > to
> > > > > > 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that
> > we
> > > can
> > > > > > move faster.
> > > > > >
> > > > > > One concern that was raised is that we still do not have a
> rolling
> > > > > upgrade
> > > > > > path for the old ZK-based consumers. Since the Scala clients
> > haven't
> > > > been
> > > > > > updated in a long time (they don't support security or the latest
> > > > message
> > > > > > format), users who need them can continue to use 1.1.0 with no
> > loss
> > > of
> > > > > > functionality.
> > > > > >
> > > > > > Since it's already mid-April and people seemed receptive during
> > the
> > > > > > discussion last year, I'm going straight to a vote, but we can
> > > discuss
> > > > > more
> > > > > > if needed (of course).
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > [1]
> > > > > >
> >
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.
> apache.org_thread.html_dd9d3e31d7e9590c1f727ef5560c93
> =DwIFaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-
> kjJc7uSVcviKUc=dA4UE_6i8-ltuLeZapDpOBc_8-XI9HTNmZdteu6wfk8=lBt342M2PM_
> 4czzbFWtAc63571qsZGc9sfB7A5DlZPo=
> >
> > > > > > 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > *Gwen Shapira*
> > > > > Product Manager | Confluent
> > > > > 650.450.2760 <(650)%20450-2760> | @gwenshap
> > > > > Follow us: Twitter <
> >
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__
> twitter.com_ConfluentInc=DwIFaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_
> xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=dA4UE_6i8-ltuLeZapDpOBc_8-
> XI9HTNmZdteu6wfk8=KcgJLWP_UEkzMrujjrbJA4QfHPDrJjcaWS95p2LGewU=
> > > | blog
> > > > > <
> >
> > https://urldefense.proofpoint.com/v2/url?u=http-3A__www.
> confluent.io_blog=DwIFaQ=jf_iaSHvJObTbx-siA1ZOg=Q_
> itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=dA4UE_6i8-ltuLeZapDpOBc_8-
> XI9HTNmZdteu6wfk8=XaV8g8yeT1koLf1dbc30NTzBdXB6GAj7FwD8J2VP7iY=
> > >
> > > > >
> > > >
> > >
> >
> >
> >
> >
> >
>


Re: [DISCUSS] KIP-285: Connect Rest Extension Plugin

2018-04-19 Thread Magesh Nandakumar
Hi Randall,

Thanks for your feedback. I also would like to go with
`rest.extension.classes`.

For exposing Configurable, my original intention was just to expose that to
the extension because that's all one needs to register JAX-RS resources.
The fact that we use Jersey shouldn't even be exposed in the interface.
Hence it doesn't affect the public API by any means.

I will update the KIP and let everyone know.

Thanks
Magesh
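
For illustration, a minimal sketch of the delegating Configurable described
above. The class name, constructor, and logging are assumptions for this
sketch, not part of the KIP; only one register(...) overload is shown, and the
remaining javax.ws.rs.core.Configurable methods would delegate the same way:

    import org.glassfish.jersey.server.ResourceConfig;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class ConnectRestConfigurable {
        private static final Logger log =
                LoggerFactory.getLogger(ConnectRestConfigurable.class);
        private final ResourceConfig resourceConfig;

        public ConnectRestConfigurable(ResourceConfig resourceConfig) {
            this.resourceConfig = resourceConfig;
        }

        public ConnectRestConfigurable register(Class<?> componentClass) {
            if (resourceConfig.isRegistered(componentClass)) {
                // Deterministic behavior: warn instead of re-registering.
                log.warn("Resource {} is already registered; ignoring re-registration",
                        componentClass);
            } else {
                resourceConfig.register(componentClass);
            }
            return this;
        }
    }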

On Thu, Apr 19, 2018 at 9:51 AM, Randall Hauch  wrote:

> On Thu, Apr 12, 2018 at 10:59 AM, Magesh Nandakumar 
> wrote:
>
> > Hi Randall,
> >
> > Thanks a lot for your feedback.
> >
> > I will update the KIP to reflect your comments in (1), (2), (7) and (8).
> >
>
> Looking forward to these.
>
>
> >
> > For comment # (3) , while it would be great to automatically configure
> the
> > Rest Extensions, I would prefer that to be specified explicitly. Let's
> > assume a connector archive includes an implementation for the
> RestExtension
> > to do authentication using some header. We don't want this to be
> > automatically included. Having said that I think that the config key name
> > should probably be changed to something like "rest.extension" or
> > "rest.extension.class".
> >
>
> That's a good point. I do like `rest.extension.class` (or `..classes`?)
> much more than `rest.plugins`.
>
>
> >
> > For the comment regarding the resource loading into jersey, I have the
> > following proposal
> >
> > Create an implementation of Configurable(
> > https://docs.oracle.com/javaee/7/api/javax/ws/rs/core/Configurable.html)
> > that delegates to ResourceConfig. In the ConnectRestExtension, we would
> > expose only Configurable, which is sufficient to register new
> > resources. In the new implementation, we will check if the resource is
> > already registered using ResourceConfig.isRegistered() method and log a
> > warning if the resource is already registered. This will make it a
> > deterministic behavior and avoid any potential re-registrations. Let me
> > know your thoughts on these.
> >
>
> This sounds like a good idea. Is it as flexible as the current proposal? If not,
> then I'd love to see how this affects the public APIs.
>
>
> >
> > Thanks
> > Magesh
> >
> >
> > On Wed, Apr 11, 2018 at 10:06 AM, Randall Hauch 
> wrote:
> >
> > > Very nice proposal, Magesh. I like the approach and the new concepts
> and
> > > interfaces, but I do have a few comments/suggestions about some
> specific
> > > details:
> > >
> > >1. In the "Motivation" section, perhaps it makes sense to briefly
> > >describe one or two somewhat concrete examples of how this is
> useful.
> > >2. Maybe in the "Public Interfaces" section you could briefly
> describe
> > >each of the interfaces, what they represent, and which are
> implemented
> > > by
> > >the framework vs implemented by an extension. I think it'd help to
> > > explain
> > >that only the `ConnectRestPlugin` needs to be implemented, and the
> > rest
> > >will be provided by the framework. I know the next section goes into
> > it
> > > a
> > >bit, but it'd be useful in this section when first talking about the
> > new
> > >interfaces.
> > >3. Also in the "Public Interfaces" section: I don't think we should
> > >introduce a "rest.plugins" configuration property. Instead, can we
> not
> > > just
> > >instantiate and call all of the ConnectRestPlugins that we find on
> the
> > >plugin path? Besides, it seems too close to the `plugin.path`
> > > configuration
> > >property.
> > >4. Why would the implementation register Connect resources *after*
> the
> > >plugins, if Jersey currently registers only the first one? The
> > "Rejected
> > >Alternatives" mentions why, but this section should be explicit
> about
> > > why.
> > >For example, "The plugins would be registered in the
> > >RestServer.start(Herder herder) method before registering the
> default
> > >Connect resources, which ensures that plugins cannot remove Connect
> > >resources."
> > >5. "Hence, it is recommended that the plugins don't re-register the
> > >default Connect Resources. This could potentially lead to unexpected
> > >errors." First, we should not say "recommended" and should just say
> > > plugins
> > >should not register any resources that conflict with the built-in
> > > Connect
> > >resources. Second, if the worker does find conflicts, can we just
> > remove
> > >them before adding the built-in Connect resources?
> > >6. Is it possible for implementations to check whether resources
> > already
> > >exist before registering their own? If so, we should recommend that
> > >implementations do this and log any problems.
> > >7. We should be explicit that the "Service Provider" is Java's
> Service
> > >Provider API. We also need to be explicit that an implementation
> must
> > >provide a 

Re: [DISCUSSION] KIP-266: Add TimeoutException to KafkaConsumer#position()

2018-04-19 Thread Ted Yu
John:
In case you want to pursue async poll, it seems (by looking at current API)
that introducing PollCallback follows existing pattern(s).

e.g. KafkaConsumer#commitAsync(OffsetCommitCallback)

FYI
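
For illustration, a sketch of what such a callback could look like, mirroring
OffsetCommitCallback. The interface and the pollAsync method named below are
hypothetical, not existing API:

    import org.apache.kafka.clients.consumer.ConsumerRecords;

    // Hypothetical, by analogy with OffsetCommitCallback#onComplete(offsets, exception):
    public interface PollCallback<K, V> {
        // Invoked when the asynchronous poll completes, with either the fetched
        // records or the exception that caused the poll to fail.
        void onComplete(ConsumerRecords<K, V> records, Exception exception);
    }

    // Hypothetical usage: consumer.pollAsync(new MyPollCallback());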

On Thu, Apr 19, 2018 at 10:08 AM, John Roesler  wrote:

> Hi Richard,
>
> Thanks for the invitation! I do think it would be safer to introduce a new
> poll
> method than to change the semantics of the old one. I've been mulling about
> whether the new one could still have (slightly different) async semantics
> with
> a timeout of 0. If possible, I'd like to avoid introducing another new
> "asyncPoll".
>
> I'm planning to run some experiments and dig into the implementation a bit
> more before solidifying the proposal. I'll update the KIP as you suggest at
> that point,
> and then can call for another round of reviews and voting.
>
> Thanks,
> -John
>
> On Tue, Apr 17, 2018 at 4:53 PM, Richard Yu 
> wrote:
>
> > Hi John,
> >
> > Do you have a preference for fixing the poll() method (e.g. using
> asyncPoll
> > or just sticking with the current method but with an extra timeout
> > parameter) ? I think your current proposition for KIP-288 is better than
> > what I have on my side. If you think there is something that you want to
> > add, you could go ahead and change KIP-266 to your liking. Just to note
> > that it would be preferable that if one of us modifies this KIP, it would
> > be best to mention your change on this thread to let each other know
> (makes
> > it easier to coordinate progress).
> >
> > Thanks,
> > Richard
> >
> > On Tue, Apr 17, 2018 at 2:07 PM, John Roesler  wrote:
> >
> > > Ok, I'll close the discussion on KIP-288 and mark it discarded.
> > >
> > > We can solidify the design for poll in KIP-266, and once it's approved,
> > > I'll coordinate with Qiang Zhao on the PR for the poll part of the
> work.
> > > Once that is merged, you'll have a clean slate for the rest of the
> work.
> > >
> > > On Tue, Apr 17, 2018 at 3:39 PM, Richard Yu <
> yohan.richard...@gmail.com>
> > > wrote:
> > >
> > > > Hi John,
> > > >
> > > > I think that you could finish your PR that corresponds with KIP-288
> and
> > > > merge it. I can finish my side of the work afterwards.
> > > >
> > > > On another note, adding an asynchronous version of poll() would make
> > > > sense, particularly since the current version of Kafka does not
> > support
> > > > it.
> > > >
> > > > Thanks
> > > > Richard
> > > >
> > > > On Tue, Apr 17, 2018 at 12:30 PM, John Roesler 
> > > wrote:
> > > >
> > > > > Cross-pollinating from some discussion we've had on KIP-288,
> > > > >
> > > > > I think there's a good reason that poll() takes a timeout when none
> > of
> > > > the
> > > > > other methods do, and it's relevant to this discussion. The timeout
> > in
> > > > > poll() is effectively implementing a long-poll API (on the client
> > side,
> > > > so
> > > > > it's not really long-poll, but the programmer-facing behavior is
> the
> > > > same).
> > > > > The timeout isn't really bounding the execution time of the method,
> > but
> > > > > instead giving a max time that callers are willing to wait around
> and
> > > see
> > > > > if any results show up.
> > > > >
> > > > > If I understand the code sufficiently, it would be perfectly
> > reasonable
> > > > for
> > > > > a caller to use a timeout of 0 to implement async poll, it would
> just
> > > > mean
> > > > > that KafkaConsumer would just check on each call if there's a
> > response
> > > > > ready and if not, fire off a new request without waiting for a
> > > response.
> > > > >
> > > > > As such, it seems inappropriate to throw a ClientTimeoutException
> > from
> > > > > poll(), except possibly if the initial phase of ensuring an
> > assignment
> > > > > times out. We wouldn't want the method contract to be "returns a
> > > > non-empty
> > > > > collection or throws a ClientTimeoutException"
> > > > >
> > > > > Now, I'm wondering if we should actually consider one of my
> rejected
> > > > > alternatives, to treat the "operation timeout" as a separate
> > parameter
> > > > from
> > > > > the "long-poll time". Or maybe adding an "asyncPoll(timeout, time
> > > unit)"
> > > > > that only uses the timeout to bound metadata updates and otherwise
> > > > behaves
> > > > > like the current "poll(0)".
> > > > >
> > > > > Thanks,
> > > > > -John
> > > > >
> > > > > On Tue, Apr 17, 2018 at 2:05 PM, John Roesler 
> > > wrote:
> > > > >
> > > > > > Hey Richard,
> > > > > >
> > > > > > As you noticed, the newly introduced KIP-288 overlaps with this
> > one.
> > > > > Sorry
> > > > > > for stepping on your toes... How would you like to proceed? I'm
> > happy
> > > > to
> > > > > > "close" KIP-288 in deference to this KIP.
> > > > > >
> > > > > > With respect to poll(), reading this discussion gave me a new
> idea
> > > for
> > > > > > providing a non-breaking update path... What if we introduce a

Re: [DISCUSSION] KIP-266: Add TimeoutException to KafkaConsumer#position()

2018-04-19 Thread John Roesler
Hi Richard,

Thanks for the invitation! I do think it would be safer to introduce a new
poll
method than to change the semantics of the old one. I've been mulling about
whether the new one could still have (slightly different) async semantics
with
a timeout of 0. If possible, I'd like to avoid introducing another new
"asyncPoll".

I'm planning to run some experiments and dig into the implementation a bit
more before solidifying the proposal. I'll update the KIP as you suggest at
that point,
and then can call for another round of reviews and voting.

Thanks,
-John
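
To make the variants concrete, here is a sketch of the caller-facing
difference. The TimeUnit overload is only the idea floated in the thread
below, not existing API:

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.clients.consumer.ConsumerRecords;

    // Assume an already-configured consumer:
    // KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

    // Existing behavior: poll(0) checks for a ready response and returns
    // immediately, firing off a new fetch if nothing is ready ("async poll").
    ConsumerRecords<String, String> immediate = consumer.poll(0);

    // Sketched variant: the timeout bounds the whole call, including blocking
    // work such as metadata updates, not just the long-poll wait.
    ConsumerRecords<String, String> bounded = consumer.poll(5, TimeUnit.SECONDS);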

On Tue, Apr 17, 2018 at 4:53 PM, Richard Yu 
wrote:

> Hi John,
>
> Do you have a preference for fixing the poll() method (e.g. using asyncPoll
> or just sticking with the current method but with an extra timeout
> parameter) ? I think your current proposition for KIP-288 is better than
> what I have on my side. If you think there is something that you want to
> add, you could go ahead and change KIP-266 to your liking. Just to note
> that it would be preferable that if one of us modifies this KIP, it would
> be best to mention your change on this thread to let each other know (makes
> it easier to coordinate progress).
>
> Thanks,
> Richard
>
> On Tue, Apr 17, 2018 at 2:07 PM, John Roesler  wrote:
>
> > Ok, I'll close the discussion on KIP-288 and mark it discarded.
> >
> > We can solidify the design for poll in KIP-266, and once it's approved,
> > I'll coordinate with Qiang Zhao on the PR for the poll part of the work.
> > Once that is merged, you'll have a clean slate for the rest of the work.
> >
> > On Tue, Apr 17, 2018 at 3:39 PM, Richard Yu 
> > wrote:
> >
> > > Hi John,
> > >
> > > I think that you could finish your PR that corresponds with KIP-288 and
> > > merge it. I can finish my side of the work afterwards.
> > >
> > > On another note, adding an asynchronous version of poll() would make
> > > sense, particularly since the current version of Kafka does not
> support
> > > it.
> > >
> > > Thanks
> > > Richard
> > >
> > > On Tue, Apr 17, 2018 at 12:30 PM, John Roesler 
> > wrote:
> > >
> > > > Cross-pollinating from some discussion we've had on KIP-288,
> > > >
> > > > I think there's a good reason that poll() takes a timeout when none
> of
> > > the
> > > > other methods do, and it's relevant to this discussion. The timeout
> in
> > > > poll() is effectively implementing a long-poll API (on the client
> side,
> > > so
> > > > it's not really long-poll, but the programmer-facing behavior is the
> > > same).
> > > > The timeout isn't really bounding the execution time of the method,
> but
> > > > instead giving a max time that callers are willing to wait around and
> > see
> > > > if any results show up.
> > > >
> > > > If I understand the code sufficiently, it would be perfectly
> reasonable
> > > for
> > > > a caller to use a timeout of 0 to implement async poll, it would just
> > > mean
> > > > that KafkaConsumer would just check on each call if there's a
> response
> > > > ready and if not, fire off a new request without waiting for a
> > response.
> > > >
> > > > As such, it seems inappropriate to throw a ClientTimeoutException
> from
> > > > poll(), except possibly if the initial phase of ensuring an
> assignment
> > > > times out. We wouldn't want the method contract to be "returns a
> > > non-empty
> > > > collection or throws a ClientTimeoutException"
> > > >
> > > > Now, I'm wondering if we should actually consider one of my rejected
> > > > alternatives, to treat the "operation timeout" as a separate
> parameter
> > > from
> > > > the "long-poll time". Or maybe adding an "asyncPoll(timeout, time
> > unit)"
> > > > that only uses the timeout to bound metadata updates and otherwise
> > > behaves
> > > > like the current "poll(0)".
> > > >
> > > > Thanks,
> > > > -John
> > > >
> > > > On Tue, Apr 17, 2018 at 2:05 PM, John Roesler 
> > wrote:
> > > >
> > > > > Hey Richard,
> > > > >
> > > > > As you noticed, the newly introduced KIP-288 overlaps with this
> one.
> > > > Sorry
> > > > > for stepping on your toes... How would you like to proceed? I'm
> happy
> > > to
> > > > > "close" KIP-288 in deference to this KIP.
> > > > >
> > > > > With respect to poll(), reading this discussion gave me a new idea
> > for
> > > > > providing a non-breaking update path... What if we introduce a new
> > > > variant
> > > > > 'poll(long timeout, TimeUnit unit)' that displays the new, desired
> > > > > behavior, and just leave the old method alone?
> > > > >
> > > > > Thanks,
> > > > > -John
> > > > >
> > > > > On Tue, Apr 17, 2018 at 12:09 PM, Richard Yu <
> > > yohan.richard...@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > >> Hi all,
> > > > >>
> > > > >> If possible, would a committer please review?
> > > > >>
> > > > >> Thanks
> > > > >>
> > > > >> On Sun, Apr 1, 2018 at 7:24 PM, Richard Yu <
> > > 

[jira] [Created] (KAFKA-6808) Creating source Kafka connectors re-using a name of a deleted connector causes the connector to never push messages to kafka

2018-04-19 Thread Igor Candido (JIRA)
Igor Candido created KAFKA-6808:
---

 Summary: Creating source Kafka connectors re-using a name of a 
deleted connector causes the connector to never push messages to kafka
 Key: KAFKA-6808
 URL: https://issues.apache.org/jira/browse/KAFKA-6808
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.1.0, 1.0.0
Reporter: Igor Candido


I tried to deploy a source Kafka connector to a Kafka Connect instance that 
previously had a connector with a similar definition deployed to it, since 
deleted. The newly deployed connector was not erroring, but it also wasn't 
pushing messages to Kafka.

 

The connector created was this salesforce 
[https://github.com/jcustenborder/kafka-connect-salesforce]

 

And the definition of the connector is:

{
  "connector.class": 
"com.github.jcustenborder.kafka.connect.salesforce.SalesforceSourceConnector",
  "salesforce.username": "XXX",
  "tasks.max": "1",
  "salesforce.consumer.key": "XXX",
  "salesforce.push.topic.name": "Leads",
  "salesforce.instance": "https://eu8.salesforce.com;,
  "salesforce.password": "XXX",
  "salesforce.password.token": "XXX",
  "salesforce.version": "36",
  "name": "salesforce-lead-source",
  "kafka.topic": "salesforce-lead",
  "salesforce.consumer.secret": "XXX",
  "salesforce.object": "Lead"
}

 

I tried to restart the Kafka Connect instance, and that didn't fix the problem.

 

As soon as I changed the name of the connector, it started working without any 
configuration or environment change.
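
For reference, a sketch of the delete/recreate cycle that triggers this, using
the standard Connect REST endpoints (the worker URL and the abbreviated config
JSON are placeholders):

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RecreateConnectorSketch {
    public static void main(String[] args) throws Exception {
        String base = "http://localhost:8083/connectors/salesforce-lead-source";

        // Step 1: delete the existing connector.
        HttpURLConnection del = (HttpURLConnection) new URL(base).openConnection();
        del.setRequestMethod("DELETE");
        System.out.println("DELETE -> " + del.getResponseCode());

        // Step 2: recreate it under the same name; PUT /connectors/{name}/config
        // takes the flat config map shown above (abbreviated here).
        String config = "{\"connector.class\": \"...\", \"kafka.topic\": \"salesforce-lead\"}";
        HttpURLConnection put = (HttpURLConnection) new URL(base + "/config").openConnection();
        put.setRequestMethod("PUT");
        put.setRequestProperty("Content-Type", "application/json");
        put.setDoOutput(true);
        try (OutputStream os = put.getOutputStream()) {
            os.write(config.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("PUT -> " + put.getResponseCode());
    }
}
{code}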

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-285: Connect Rest Extension Plugin

2018-04-19 Thread Randall Hauch
On Thu, Apr 12, 2018 at 10:59 AM, Magesh Nandakumar 
wrote:

> Hi Randall,
>
> Thanks a lot for your feedback.
>
> I will update the KIP to reflect your comments in (1), (2), (7) and (8).
>

Looking forward to these.


>
> For comment # (3) , while it would be great to automatically configure the
> Rest Extensions, I would prefer that to be specified explicitly. Let's
> > assume a connector archive includes an implementation for the RestExtension
> to do authentication using some header. We don't want this to be
> automatically included. Having said that I think that the config key name
> should probably be changed to something like "rest.extension" or
> "rest.extension.class".
>

That's a good point. I do like `rest.extension.class` (or `..classes`?)
much more than `rest.plugins`.


>
> For the comment regarding the resource loading into jersey, I have the
> following proposal
>
> Create an implementation of Configurable(
> https://docs.oracle.com/javaee/7/api/javax/ws/rs/core/Configurable.html)
> that delegates to ResourceConfig. In the ConnectRestExtension, we would
> expose only Configurable, which is sufficient to register new
> resources. In the new implementation, we will check if the resource is
> already registered using ResourceConfig.isRegistered() method and log a
> warning if the resource is already registered. This will make it a
> deterministic behavior and avoid any potential re-registrations. Let me
> know your thoughts on these.
>

This sounds like a good idea. Is it as flexible as the current proposal? If not,
then I'd love to see how this affects the public APIs.


>
> Thanks
> Magesh
>
>
> On Wed, Apr 11, 2018 at 10:06 AM, Randall Hauch  wrote:
>
> > Very nice proposal, Magesh. I like the approach and the new concepts and
> > interfaces, but I do have a few comments/suggestions about some specific
> > details:
> >
> >1. In the "Motivation" section, perhaps it makes sense to briefly
> >describe one or two somewhat concrete examples of how this is useful.
> >2. Maybe in the "Public Interfaces" section you could briefly describe
> >each of the interfaces, what they represent, and which are implemented
> > by
> >the framework vs implemented by an extension. I think it'd help to
> > explain
> >that only the `ConnectRestPlugin` needs to be implemented, and the
> rest
> >will be provided by the framework. I know the next section goes into
> it
> > a
> >bit, but it'd be useful in this section when first talking about the
> new
> >interfaces.
> >3. Also in the "Public Interfaces" section: I don't think we should
> >introduce a "rest.plugins" configuration property. Instead, can we not
> > just
> >instantiate and call all of the ConnectRestPlugins that we find on the
> >plugin path? Besides, it seems too close to the `plugin.path`
> > configuration
> >property.
> >4. Why would the implementation register Connect resources *after* the
> >plugins, if Jersey currently registers only the first one? The
> "Rejected
> >Alternatives" mentions why, but this section should be explicit about
> > why.
> >For example, "The plugins would be registered in the
> >RestServer.start(Herder herder) method before registering the default
> >Connect resources, which ensures that plugins cannot remove Connect
> >resources."
> >5. "Hence, it is recommended that the plugins don't re-register the
> >default Connect Resources. This could potentially lead to unexpected
> >errors." First, we should not say "recommended" and should just say
> > plugins
> >should not register any resources that conflict with the built-in
> > Connect
> >resources. Second, if the worker does find conflicts, can we just
> remove
> >them before adding the built-in Connect resources?
> >6. Is it possible for implementations to check whether resources
> already
> >exist before registering their own? If so, we should recommend that
> >implementations do this and log any problems.
> >7. We should be explicit that the "Service Provider" is Java's Service
> >Provider API. We also need to be explicit that an implementation must
> >provide a `META-INF/services/org.apache.kafka.connect.
> > ConnectRestPlugin`
> >file (or whatever the package name of the `ConnectRestPlugin` will be)
> > with
> >the fully-qualified name of the implementation class(es).
> >8. The example should include the META-INF file required by the
> Service
> >Provider API.
> >
> > Again, overall this is really great!
> >
> > Best regards,
> >
> > Randall
> >
> > On Wed, Apr 11, 2018 at 12:14 AM, Magesh Nandakumar <
> mage...@confluent.io>
> > wrote:
> >
> > > Hi,
> > >
> > > We have posted KIP-285: Connect Rest Extension Plugin to add the
> ability
> > to
> > > provide Rest Extensions to Connect Rest API.
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 

[jira] [Created] (KAFKA-6807) Inconsistent method name

2018-04-19 Thread KuiLIU (JIRA)
KuiLIU created KAFKA-6807:
-

 Summary: Inconsistent method name
 Key: KAFKA-6807
 URL: https://issues.apache.org/jira/browse/KAFKA-6807
 Project: Kafka
  Issue Type: Improvement
Reporter: KuiLIU


The following method is named "readTo", but it returns the lesser of the two 
values, so the name is inconsistent with the method body.
Renaming the method to "lesser" would be better.

{code:java}
private Long readTo(final long endOffset) {
return endOffset < offsetLimit ? endOffset : offsetLimit;
}
{code}
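
A sketch of the suggested rename; Math.min expresses the same computation:

{code:java}
// Same computation, with a name that matches the behavior:
private Long lesser(final long endOffset) {
    return Math.min(endOffset, offsetLimit);
}
{code}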




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-19 Thread tao xiao
+1 non-binding. thx Ismael

On Thu, 19 Apr 2018 at 23:14 Vahid S Hashemian 
wrote:

> +1 (non-binding).
>
> Thanks Ismael.
>
> --Vahid
>
>
>
> From:   Jorge Esteban Quilcate Otoya 
> To: dev@kafka.apache.org
> Date:   04/19/2018 07:32 AM
> Subject:Re: [VOTE] Kafka 2.0.0 in June 2018
>
>
>
> +1 (non binding), thanks Ismael!
>
> On Thu., 19 Apr. 2018 at 13:01, Manikumar
> wrote:
>
> > +1 (non-binding).
> >
> > Thanks.
> >
> > On Thu, Apr 19, 2018 at 3:07 PM, Stephane Maarek <
> > steph...@simplemachines.com.au> wrote:
> >
> > > +1 (non binding). Thanks Ismael!
> > >
> > > On Thu., 19 Apr. 2018, 2:47 pm Gwen Shapira, 
> wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > On Wed, Apr 18, 2018 at 11:35 AM, Ismael Juma 
> > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I started a discussion last year about bumping the version of the
> > June
> > > > 2018
> > > > > release to 2.0.0[1]. To reiterate the reasons in the original
> post:
> > > > >
> > > > > 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major
> > > > version
> > > > > bump due to semantic versioning.
> > > > >
> > > > > 2. Take the chance to remove deprecated code that was deprecated
> > prior
> > > to
> > > > > 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that
> we
> > can
> > > > > move faster.
> > > > >
> > > > > One concern that was raised is that we still do not have a rolling
> > > > upgrade
> > > > > path for the old ZK-based consumers. Since the Scala clients
> haven't
> > > been
> > > > > updated in a long time (they don't support security or the latest
> > > message
> > > > > format), users who need them can continue to use 1.1.0 with no
> loss
> > of
> > > > > functionality.
> > > > >
> > > > > Since it's already mid-April and people seemed receptive during
> the
> > > > > discussion last year, I'm going straight to a vote, but we can
> > discuss
> > > > more
> > > > > if needed (of course).
> > > > >
> > > > > Ismael
> > > > >
> > > > > [1]
> > > > >
>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.apache.org_thread.html_dd9d3e31d7e9590c1f727ef5560c93=DwIFaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=dA4UE_6i8-ltuLeZapDpOBc_8-XI9HTNmZdteu6wfk8=lBt342M2PM_4czzbFWtAc63571qsZGc9sfB7A5DlZPo=
>
> > > > > 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > *Gwen Shapira*
> > > > Product Manager | Confluent
> > > > 650.450.2760 <(650)%20450-2760> | @gwenshap
> > > > Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
> > > > <http://www.confluent.io/blog>
> > > >
> > >
> >
>
>
>
>
>


Re: [DISCUSS] KIP-255: OAuth Authentication via SASL/OAUTHBEARER

2018-04-19 Thread Ron Dagostino
Hi Rajini.  Thanks for the feedback.  I will adopt all of the suggestions.
Regarding your question about moving the refresh config values out of the
JAAS config and making them generic, yes, I think that would work, and it
does advance us down the road toward an eventual unification.  I'll post
again when the KIP and the code are updated accordingly both for this
change and for the removal of substitution as mentioned in the KIP-269
discussion.

Ron

On Wed, Apr 18, 2018 at 7:49 AM, Rajini Sivaram 
wrote:

> Hi Ron,
>
> A few more suggestions and questions:
>
>
>1. The KIP says that various callback handlers and login have to be
>configured in order to use OAuth. Could we also say that a default
>implementation is included which is not suitable for production use, but
>this would work out-of-the-box with no additional configuration required
>for callback handlers and login class? So the default callback handler
> and
>login class that we would use in SaslChannelBuilder for OAuth, if the
>config is not overridden would be the classes that you are including
> here (
>OAuthBearerUnsecuredValidatorCallbackHandler etc.)
>2. Following on from 1) above, I think the default  implementation
>classes can be moved to the internal package, since they no longer need
>to be part of the public API, if we just choose them automatically by
>default. I think the only classes that really need to part of the public
>API are OAuthBearerToken, OAuthBearerLoginModule,
> OAuthBearerLoginCallback
>and OAuthBearerValidatorCallback. It is hard to tell from the current
>package layout, but the packages that are public currently are
>org.apache.kafka.common.security.auth,
> org.apache.kafka.common.security.plain
>and org.apache.kafka.common.security.scram. Callback handlers and login
>class are not public for the other mechanisms.
>3. Can we rename OAuthBearerLoginCallback to OAuthBearerTokenCallback or
>something along those lines since it is used by SaslClient as well as
> the
>login module?
>4. We use `Ms` as the suffix for fields and methods that refer to
>milliseconds. So, perhaps in OAuthBearerToken, we could have lifetimeMs
>and startTimeMs? I thought initially that lifetime was a time interval
>rather than the wall-clock time. Something like expiryTimeMs may be less
>confusing. But that could just be me (and it is fine to use the
>terminology used in OAuth RFCs, so I will leave it up to you).
>5. I am wondering whether it would be better to define refresh
>parameters as properties in SaslConfigs rather than in the JAAS config.
>We already have similar properties defined for Kerberos, but with
> kerberos
>prefix. I wonder if defining the properties in a mechanism-independent
> way
>(e.g. sasl.login.refresh.window.factor) could work with different
>mechanisms? We could use it initially for just OAuth, but if we did
> unify
>refreshing logins in future, we could deprecate the current
>Kerberos-specific properties and have just one set that works for any
>mechanism that uses token refresh. What do you think?
>
> Thanks,
>
> Rajini
>
>
> On Thu, Mar 29, 2018 at 11:39 PM, Rajini Sivaram 
> wrote:
>
> > Hi Ron,
> >
> > Thanks for the updates. I had a quick look and it is looking good.
> >
> > I have updated KIP-86 and the associated PR to with a new config
> > sasl.login.callback.handler.class that matches what you are using in
> this
> > KIP.
> >
> > On Thu, Mar 29, 2018 at 6:27 AM, Ron Dagostino 
> wrote:
> >
> >> Hi Rajini.  I have adjusted the KIP to use callbacks and callback
> handlers
> >>
> >> throughout.  I also clarified that production implementations of the
> >> retrieval and validation callback handlers will require the use of an
> open
> >> source JWT library, and the unsecured implementations are as far as
> >> SASL/OAUTHBEARER will go out-of-the-box. Your suggestions, plus this
> >> clarification, have allowed much of the code to move into the ".internal"
> >> java package; the public-facing API now consists of just 8 Java
> classes, 1
> >> Java interface, and a set of configuration requirements.  I also added a
> >> section outlining those configuration requirements since they are
> extensive
> >> (not onerously so -- just not something one can easily remember).
> >>
> >> Ron
> >>
> >> On Tue, Mar 13, 2018 at 11:44 AM, Rajini Sivaram <
> rajinisiva...@gmail.com
> >> >
> >> wrote:
> >>
> >> > Hi Ron,
> >> >
> >> > Thanks for the response. All sound good, I think the only outstanding
> >> > question is around callbacks vs classes provided through the login
> >> context.
> >> > As you have pointed out, there are advantages of both approaches. Even
> >> > though my preference is for callbacks, it is not a blocker since the
> >> > current approach works fine too. I will make the case for callbacks
> >> anyway,
> 
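As a sketch of point 4 in Rajini's list above -- what the token interface could
look like with `Ms`-suffixed accessors; the method names here are illustrative,
not final:

{code:java}
import java.util.Set;

// Illustrative sketch only: OAuthBearerToken with the Ms naming convention
// applied. The exact interface was still under discussion in this thread.
public interface OAuthBearerToken {
    String value();          // the b64token value as defined in RFC 6750
    Set<String> scope();     // the token's scope, if any
    long lifetimeMs();       // expiry, in milliseconds since the epoch
    String principalName();  // the principal the token was issued to
    Long startTimeMs();      // when the credential became valid, if known
}
{code}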

Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-19 Thread Vahid S Hashemian
+1 (non-binding).

Thanks Ismael.

--Vahid



From:   Jorge Esteban Quilcate Otoya 
To: dev@kafka.apache.org
Date:   04/19/2018 07:32 AM
Subject:Re: [VOTE] Kafka 2.0.0 in June 2018



+1 (non binding), thanks Ismael!

On Thu, 19 Apr 2018 at 13:01, Manikumar () wrote:

> +1 (non-binding).
>
> Thanks.
>
> On Thu, Apr 19, 2018 at 3:07 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > +1 (non binding). Thanks Ismael!
> >
> > On Thu., 19 Apr. 2018, 2:47 pm Gwen Shapira, wrote:
> >
> > > +1 (binding)
> > >
> > > On Wed, Apr 18, 2018 at 11:35 AM, Ismael Juma 
> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I started a discussion last year about bumping the version of the June
> > > > 2018 release to 2.0.0[1]. To reiterate the reasons in the original post:
> > > >
> > > > 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major
> > > > version bump due to semantic versioning.
> > > >
> > > > 2. Take the chance to remove deprecated code that was deprecated prior
> > > > to 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we
> > > > can move faster.
> > > >
> > > > One concern that was raised is that we still do not have a rolling
> > > > upgrade path for the old ZK-based consumers. Since the Scala clients
> > > > haven't been updated in a long time (they don't support security or the
> > > > latest message format), users who need them can continue to use 1.1.0
> > > > with no loss of functionality.
> > > >
> > > > Since it's already mid-April and people seemed receptive during the
> > > > discussion last year, I'm going straight to a vote, but we can discuss
> > > > more if needed (of course).
> > > >
> > > > Ismael
> > > >
> > > > [1]
> > > > 
> > > > https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c93

> > > > 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
> > > >
> > >
> > >
> > >
> > > --
> > > *Gwen Shapira*
> > > Product Manager | Confluent
> > > 650.450.2760 <(650)%20450-2760> | @gwenshap
> > > Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
> > > <http://www.confluent.io/blog>
> > >
> >
>






Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-19 Thread Jorge Esteban Quilcate Otoya
+1 (non binding), thanks Ismael!

On Thu, 19 Apr 2018 at 13:01, Manikumar () wrote:

> +1 (non-binding).
>
> Thanks.
>
> On Thu, Apr 19, 2018 at 3:07 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > +1 (non binding). Thanks Ismael!
> >
> > On Thu., 19 Apr. 2018, 2:47 pm Gwen Shapira,  wrote:
> >
> > > +1 (binding)
> > >
> > > On Wed, Apr 18, 2018 at 11:35 AM, Ismael Juma 
> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I started a discussion last year about bumping the version of the
> June
> > > 2018
> > > > release to 2.0.0[1]. To reiterate the reasons in the original post:
> > > >
> > > > 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major
> > > version
> > > > bump due to semantic versioning.
> > > >
> > > > 2. Take the chance to remove deprecated code that was deprecated
> prior
> > to
> > > > 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we
> can
> > > > move faster.
> > > >
> > > > One concern that was raised is that we still do not have a rolling
> > > upgrade
> > > > path for the old ZK-based consumers. Since the Scala clients haven't
> > been
> > > > updated in a long time (they don't support security or the latest
> > message
> > > > format), users who need them can continue to use 1.1.0 with no loss
> of
> > > > functionality.
> > > >
> > > > Since it's already mid-April and people seemed receptive during the
> > > > discussion last year, I'm going straight to a vote, but we can
> discuss
> > > more
> > > > if needed (of course).
> > > >
> > > > Ismael
> > > >
> > > > [1]
> > > > https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c93
> > > > 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
> > > >
> > >
> > >
> > >
> > > --
> > > *Gwen Shapira*
> > > Product Manager | Confluent
> > > 650.450.2760 <(650)%20450-2760> | @gwenshap
> > > Follow us: Twitter  | blog
> > > 
> > >
> >
>


Re: [DISCUSS] KIP-216: IQ should throw different exceptions for different errors

2018-04-19 Thread Matthias J. Sax
Thanks for clarification! That makes sense to me.

Can you update the KIP to make those suggestions explicit?


-Matthias

On 4/18/18 2:19 PM, vito jeng wrote:
> Matthias,
> 
> Thanks for the feedback!
> 
>> It's up to you to keep the details part in the KIP or not.
> 
> Got it!
> 
>> The (incomplete) question was, if we need `StateStoreFailException` or
>> if existing `InvalidStateStoreException` could be used? Do you suggest
>> that `InvalidStateStoreException` is not thrown at all anymore, but only
>> the new sub-classes (just to get a better understanding).
> 
> Yes. I suggest that `InvalidStateStoreException` is not thrown at all
> anymore, but only the new sub-classes.
> 
>> Not sure what this sentence means:
>>> The internal exception will be wrapped as category exception finally.
>> Can you elaborate?
> 
> For example, `StreamThreadStateStoreProvider#stores()` will throw
> `StreamThreadNotRunningException` (an internal exception).
> The internal exception will then be wrapped as
> `StateStoreRetryableException` or `StateStoreFailException` during
> `KafkaStreams.store()` and thrown to the user.
> 
> 
>> Can you explain the purpose of the "internal exceptions". It's unclear
> to me atm why they are introduced.
> 
> Hmmm... the purpose of the "internal exceptions" is to distinguish between
> the different kinds of InvalidStateStoreException.
> The original idea is that we can distinguish different state store
> exceptions for different handling.
> But to be honest, I am not quite sure this is necessary. This may change
> during implementation.
> 
> Does it make sense?
> 
> 
> 
> 
> ---
> Vito
> 
> On Mon, Apr 16, 2018 at 5:59 PM, Matthias J. Sax 
> wrote:
> 
>> Thanks for the update Vito!
>>
>> It's up to you to keep the details part in the KIP or not.
>>
>>
>> The (incomplete) question was, if we need `StateStoreFailException` or
>> if existing `InvalidStateStoreException` could be used? Do you suggest
>> that `InvalidStateStoreException` is not thrown at all anymore, but only
>> the new sub-classes (just to get a better understanding).
>>
>>
>> Not sure what this sentence means:
>>
>>> The internal exception will be wrapped as category exception finally.
>>
>> Can you elaborate?
>>
>>
>> Can you explain the purpose of the "internal exceptions". It's unclear
>> to me atm why they are introduced.
>>
>>
>> -Matthias
>>
>> On 4/10/18 12:33 AM, vito jeng wrote:
>>> Matthias,
>>>
>>> Thanks for the review.
>>> I reply separately in the following sections.
>>>
>>>
>>> ---
>>> Vito
>>>
>>> On Sun, Apr 8, 2018 at 1:30 PM, Matthias J. Sax 
>>> wrote:
>>>
 Thanks for updating the KIP and sorry for the long pause...

 Seems you did a very thorough investigation of the code. It's useful to
 understand what user facing interfaces are affected.
>>>
>>> (Some parts might be even too detailed for a KIP.)

>>>
>>> I also think it is too detailed, especially the section `Changes in call
>>> trace`.
>>> Do you think it should be removed?
>>>
>>>

 To summarize my current understanding of your KIP, the main change is to
 introduce new exceptions that extend `InvalidStateStoreException`.

>>>
>>> yep. Keeping compatibility is an important part of this KIP.
>>> I think the best way is that all new exceptions extend from
>>> `InvalidStateStoreException`.
>>>
>>>

 Some questions:

  - Why do we need ```? Could `InvalidStateStoreException` be used for
 this purpose?

>>>
>>> Does this question miss some word?
>>>
>>>

 - What is the superclass of `StateStoreStreamThreadNotRunningException`?
 Should it be `InvalidStateStoreException` or `StateStoreFailException`?

  - Is `StateStoreClosed` a fatal or retryable exception ?


>>> I apologize for the poorly written parts. I have modified some code
>>> recently and updated the KIP accordingly.
>>> The modification is now complete. Please take another look.
>>>
>>>


 -Matthias


 On 2/21/18 5:15 PM, vito jeng wrote:
> Matthias,
>
> Sorry for not response these days.
> I just finished it. Please have a look. :)
>
>
>
> ---
> Vito
>
> On Wed, Feb 14, 2018 at 5:45 AM, Matthias J. Sax <
>> matth...@confluent.io>
> wrote:
>
>> Is there any update on this KIP?
>>
>> -Matthias
>>
>> On 1/3/18 12:59 AM, vito jeng wrote:
>>> Matthias,
>>>
>>> Thank you for your response.
>>>
>>> I think you are right. We need to look at the state both of
>>> KafkaStreams and StreamThread.
>>>
>>> After further understanding of KafkaStreams thread and state store,
>>> I am currently rewriting the KIP.
>>>
>>>
>>>
>>>
>>> ---
>>> Vito
>>>
>>> On Fri, Dec 29, 2017 at 4:32 AM, Matthias J. Sax <
 matth...@confluent.io>
>>> wrote:
>>>
 Vito,

 Sorry for this late reply.
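A minimal sketch of the hierarchy being discussed, using the class names from
the KIP draft (the final design may differ):

{code:java}
import org.apache.kafka.streams.errors.InvalidStateStoreException;

// Illustrative sketch: internal exceptions get wrapped into one of two
// user-facing categories, so callers can decide whether a retry may help.
class StateStoreRetryableException extends InvalidStateStoreException {
    StateStoreRetryableException(final String message, final Throwable cause) {
        super(message, cause);
    }
}

// StateStoreFailException would look the same, signalling a fatal error.
{code}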


[jira] [Created] (KAFKA-6806) Unable to validate sink connectors without "topics" component which is not required

2018-04-19 Thread JIRA
Ivan Majnarić created KAFKA-6806:


 Summary: Unable to validate sink connectors without "topics" 
component which is not required
 Key: KAFKA-6806
 URL: https://issues.apache.org/jira/browse/KAFKA-6806
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.1.0
 Environment: CP4.1., Centos7
Reporter: Ivan Majnarić


The bug happens when you try to create a new connector through, for example, 
kafka-connect-ui.

Previously, both source and sink connectors could be validated through REST 
with only "connector.class" and without the "topics" config, like this:
{code:java}
PUT http://connect-url:8083/connector-plugins/com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector/config/validate
{
    "connector.class": "com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector"
}{code}
In the new CP 4.1 version you can still validate *source connectors* but not 
*sink connectors*. If you want to validate sink connectors, you need to add the 
"topics" config to the request, like:
{code:java}
PUT http://connect-url:8083/connector-plugins/com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector/config/validate
{
    "connector.class": "com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector",
    "topics": "test-topic"
}{code}
So there is a small mismatch between the ways connectors are validated, which I 
think happened accidentally.
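For reference, the failing validation call can be reproduced with curl against 
the same endpoint:
{code}
curl -X PUT -H "Content-Type: application/json" \
  -d '{"connector.class": "com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector"}' \
  http://connect-url:8083/connector-plugins/com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector/config/validate
{code}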



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-276 Add StreamsConfig prefix for different consumers

2018-04-19 Thread Boyang Chen
Thanks guys!

Sent from my iPhone

> On Apr 19, 2018, at 8:28 PM, Matthias J. Sax  wrote:
> 
> +1 (binding)
> 
> -Matthias
> 
>> On 4/19/18 9:08 AM, Ted Yu wrote:
>> +1
>>  Original message From: Guozhang Wang  
>> Date: 4/18/18  10:18 PM  (GMT-08:00) To: dev@kafka.apache.org Subject: Re: 
>> [VOTE] KIP-276 Add StreamsConfig prefix for different consumers 
>> Thanks Boyang, +1 from me.
>> 
>>> On Wed, Apr 18, 2018 at 4:43 PM, Boyang Chen  wrote:
>>> 
>>> Hey guys,
>>> 
>>> 
>>> sorry I forgot to include a summary of this KIP.  Basically the idea is to
>>> separate the stream config for different internal consumers of Kafka
>>> Streams by supplying prefixes. We currently have:
>>> 
>>> (1) "main.consumer." for the main consumer
>>> 
>>> (2) "restore.consumer." for the restore consumer
>>> 
>>> (3) "global.consumer." for the global consumer
>>> 
>>> 
>>> Best,
>>> 
>>> Boyang
>>> 
>>> 
>>> From: Boyang Chen 
>>> Sent: Tuesday, April 17, 2018 12:18 PM
>>> To: dev@kafka.apache.org
>>> Subject: [VOTE] KIP-276 Add StreamsConfig prefix for different consumers
>>> 
>>> 
>>> Hey friends.
>>> 
>>> 
>>> I would like to start a vote on KIP 276: add StreamsConfig prefix for
>>> different consumers.
>>> 
>>> KIP: here>> 276+Add+StreamsConfig+prefix+for+different+consumers>
>>> 
>>> Pull request: here
>>> 
>>> Jira: here
>>> 
>>> 
>>> Let me know if you have questions.
>>> 
>>> 
>>> Thank you!
>>> 
>>> 
>>> >> 276+Add+StreamsConfig+prefix+for+different+consumers>
>>> 
>>> 
>> 
>> 
> 


[jira] [Created] (KAFKA-6805) Allow dynamic broker configs to be configured in ZooKeeper before starting broker

2018-04-19 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-6805:
-

 Summary: Allow dynamic broker configs to be configured in 
ZooKeeper before starting broker
 Key: KAFKA-6805
 URL: https://issues.apache.org/jira/browse/KAFKA-6805
 Project: Kafka
  Issue Type: Task
  Components: tools
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.0.0


At the moment, dynamic broker configs like SSL keystore and password can be 
configured using ConfigCommand only after a broker is started (using the new 
AdminClient). To start a broker, these configs have to be defined in 
server.properties. We want to restrict updates using ZooKeeper once the broker 
starts up, but we should allow updates using ZK prior to starting brokers. This 
is particularly useful for password configs, which are stored encrypted in ZK, 
making them difficult to set manually before starting brokers.

ConfigCommand is being updated to talk to AdminClient under KIP-248, but we 
will need to maintain the tool using ZK to enable credentials to be created in 
ZK before starting brokers. So the functionality to set broker configs can sit 
alongside that.
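The intended workflow would look roughly like this (illustrative sketch; the 
exact flags are subject to the implementation):

{code}
# Hypothetical usage: set a password config in ZK before the broker is started,
# so it is already stored encrypted when the broker first reads it.
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type brokers --entity-name 0 \
  --add-config listener.name.internal.ssl.keystore.password=broker-secret
{code}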

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-276 Add StreamsConfig prefix for different consumers

2018-04-19 Thread Matthias J. Sax
+1 (binding)

-Matthias

On 4/19/18 9:08 AM, Ted Yu wrote:
> +1
>  Original message From: Guozhang Wang  
> Date: 4/18/18  10:18 PM  (GMT-08:00) To: dev@kafka.apache.org Subject: Re: 
> [VOTE] KIP-276 Add StreamsConfig prefix for different consumers 
> Thanks Boyang, +1 from me.
> 
> On Wed, Apr 18, 2018 at 4:43 PM, Boyang Chen  wrote:
> 
>> Hey guys,
>>
>>
>> sorry I forgot to include a summary of this KIP.  Basically the idea is to
>> separate the stream config for different internal consumers of Kafka
>> Streams by supplying prefixes. We currently have:
>>
>> (1) "main.consumer." for the main consumer
>>
>> (2) "restore.consumer." for the restore consumer
>>
>> (3) "global.consumer." for the global consumer
>>
>>
>> Best,
>>
>> Boyang
>>
>> 
>> From: Boyang Chen 
>> Sent: Tuesday, April 17, 2018 12:18 PM
>> To: dev@kafka.apache.org
>> Subject: [VOTE] KIP-276 Add StreamsConfig prefix for different consumers
>>
>>
>> Hey friends.
>>
>>
>> I would like to start a vote on KIP 276: add StreamsConfig prefix for
>> different consumers.
>>
>> KIP: here> 276+Add+StreamsConfig+prefix+for+different+consumers>
>>
>> Pull request: here
>>
>> Jira: here
>>
>>
>> Let me know if you have questions.
>>
>>
>> Thank you!
>>
>>
>> > 276+Add+StreamsConfig+prefix+for+different+consumers>
>>
>>
> 
> 





Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-19 Thread Manikumar
+1 (non-binding).

Thanks.

On Thu, Apr 19, 2018 at 3:07 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> +1 (non binding). Thanks Ismael!
>
> On Thu., 19 Apr. 2018, 2:47 pm Gwen Shapira,  wrote:
>
> > +1 (binding)
> >
> > On Wed, Apr 18, 2018 at 11:35 AM, Ismael Juma  wrote:
> >
> > > Hi all,
> > >
> > > I started a discussion last year about bumping the version of the June
> > 2018
> > > release to 2.0.0[1]. To reiterate the reasons in the original post:
> > >
> > > 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major
> > version
> > > bump due to semantic versioning.
> > >
> > > 2. Take the chance to remove deprecated code that was deprecated prior
> to
> > > 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we can
> > > move faster.
> > >
> > > One concern that was raised is that we still do not have a rolling
> > upgrade
> > > path for the old ZK-based consumers. Since the Scala clients haven't
> been
> > > updated in a long time (they don't support security or the latest
> message
> > > format), users who need them can continue to use 1.1.0 with no loss of
> > > functionality.
> > >
> > > Since it's already mid-April and people seemed receptive during the
> > > discussion last year, I'm going straight to a vote, but we can discuss
> > more
> > > if needed (of course).
> > >
> > > Ismael
> > >
> > > [1]
> > > https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c93
> > > 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
> > >
> >
> >
> >
> > --
> > *Gwen Shapira*
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter  | blog
> > 
> >
>


Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-19 Thread Stephane Maarek
+1 (non binding). Thanks Ismael!

On Thu., 19 Apr. 2018, 2:47 pm Gwen Shapira,  wrote:

> +1 (binding)
>
> On Wed, Apr 18, 2018 at 11:35 AM, Ismael Juma  wrote:
>
> > Hi all,
> >
> > I started a discussion last year about bumping the version of the June
> 2018
> > release to 2.0.0[1]. To reiterate the reasons in the original post:
> >
> > 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major
> version
> > bump due to semantic versioning.
> >
> > 2. Take the chance to remove deprecated code that was deprecated prior to
> > 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we can
> > move faster.
> >
> > One concern that was raised is that we still do not have a rolling
> upgrade
> > path for the old ZK-based consumers. Since the Scala clients haven't been
> > updated in a long time (they don't support security or the latest message
> > format), users who need them can continue to use 1.1.0 with no loss of
> > functionality.
> >
> > Since it's already mid-April and people seemed receptive during the
> > discussion last year, I'm going straight to a vote, but we can discuss
> more
> > if needed (of course).
> >
> > Ismael
> >
> > [1]
> > https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c93
> > 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
> >
>
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter  | blog
> 
>


[jira] [Created] (KAFKA-6804) Event values should not be included in log messages

2018-04-19 Thread Jussi Lyytinen (JIRA)
Jussi Lyytinen created KAFKA-6804:
-

 Summary: Event values should not be included in log messages
 Key: KAFKA-6804
 URL: https://issues.apache.org/jira/browse/KAFKA-6804
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 1.0.0
Reporter: Jussi Lyytinen


In certain error situations, event values are included in log messages:
{code:java}
2018-04-19 08:00:28 
[my-component-37670563-39be-4fcb-92e1-02f115aca43c-StreamThread-2] ERROR 
o.a.k.s.p.i.AssignedTasks - stream-thread 
[my-component-37670563-39be-4fcb-92e1-02f115aca43c-StreamThread-2] Failed to 
commit stream task 1_1 due to the following error:
org.apache.kafka.streams.errors.StreamsException: task [1_1] Abort sending 
since an error caught with a previous record (key [my-key] value [my-value] ...
{code}
In some environments, this is highly undesirable behavior, since the values can 
contain sensitive information (e.g. patient or financial data). Error logs are 
usually collected into separate systems not meant for storing such information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-19 Thread Gwen Shapira
+1 (binding)

On Wed, Apr 18, 2018 at 11:35 AM, Ismael Juma  wrote:

> Hi all,
>
> I started a discussion last year about bumping the version of the June 2018
> release to 2.0.0[1]. To reiterate the reasons in the original post:
>
> 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major version
> bump due to semantic versioning.
>
> 2. Take the chance to remove deprecated code that was deprecated prior to
> 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we can
> move faster.
>
> One concern that was raised is that we still do not have a rolling upgrade
> path for the old ZK-based consumers. Since the Scala clients haven't been
> updated in a long time (they don't support security or the latest message
> format), users who need them can continue to use 1.1.0 with no loss of
> functionality.
>
> Since it's already mid-April and people seemed receptive during the
> discussion last year, I'm going straight to a vote, but we can discuss more
> if needed (of course).
>
> Ismael
>
> [1]
> https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c93
> 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
>



-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog



Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-19 Thread Mickael Maison
+1 (non binding), thanks

On Wed, Apr 18, 2018 at 11:21 PM, Rahul Singh wrote:
> +1
>
> On Apr 18, 2018, 5:12 PM -0500, Bill Bejeck , wrote:
>> +1
>>
>> Thanks,
>> Bill
>>
>> On Wed, Apr 18, 2018 at 6:07 PM, Edoardo Comar  wrote:
>>
>> > thanks Ismael
>> >
>> > +1 (non-binding)
>> >
>> > --
>> >
>> > Edoardo Comar
>> >
>> > IBM Message Hub
>> >
>> > IBM UK Ltd, Hursley Park, SO21 2JN
>> >
>> >
>> >
>> > From: Rajini Sivaram
>> > To: dev
>> > Date: 18/04/2018 22:05
>> > Subject: Re: [VOTE] Kafka 2.0.0 in June 2018
>> >
>> >
>> >
>> > Hi Ismael, Thanks for following this up.
>> >
>> > +1 (binding)
>> >
>> > Regards,
>> >
>> > Rajini
>> >
>> >
>> > On Wed, Apr 18, 2018 at 8:09 PM, Ted Yu  wrote:
>> >
>> > > +1
>> > >  Original message 
>> > > From: Ismael Juma
>> > > Date: 4/18/18 11:35 AM (GMT-08:00)
>> > > To: dev
>> > > Subject: [VOTE] Kafka 2.0.0 in June 2018
>> > > Hi all,
>> > >
>> > > I started a discussion last year about bumping the version of the June
>> > 2018
>> > > release to 2.0.0[1]. To reiterate the reasons in the original post:
>> > >
>> > > 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major
>> > version
>> > > bump due to semantic versioning.
>> > >
>> > > 2. Take the chance to remove deprecated code that was deprecated prior
>> > to
>> > > 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we can
>> > > move faster.
>> > >
>> > > One concern that was raised is that we still do not have a rolling
>> > upgrade
>> > > path for the old ZK-based consumers. Since the Scala clients haven't
>> > been
>> > > updated in a long time (they don't support security or the latest
>> > message
>> > > format), users who need them can continue to use 1.1.0 with no loss of
>> > > functionality.
>> > >
>> > > Since it's already mid-April and people seemed receptive during the
>> > > discussion last year, I'm going straight to a vote, but we can discuss
>> > more
>> > > if needed (of course).
>> > >
>> > > Ismael
>> > >
>> > > [1]
>> > >
>> > > https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c93
>> >
>> > > 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
>> > >
>> >
>> >
>> >
>> > Unless stated otherwise above:
>> > IBM United Kingdom Limited - Registered in England and Wales with number
>> > 741598.
>> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>> >


Re: [VOTE] KIP-276 Add StreamsConfig prefix for different consumers

2018-04-19 Thread Ted Yu
+1
 Original message From: Guozhang Wang  
Date: 4/18/18  10:18 PM  (GMT-08:00) To: dev@kafka.apache.org Subject: Re: 
[VOTE] KIP-276 Add StreamsConfig prefix for different consumers 
Thanks Boyang, +1 from me.

On Wed, Apr 18, 2018 at 4:43 PM, Boyang Chen  wrote:

> Hey guys,
>
>
> sorry I forgot to include a summary of this KIP.  Basically the idea is to
> separate the stream config for different internal consumers of Kafka
> Streams by supplying prefixes. We currently have:
>
> (1) "main.consumer." for the main consumer
>
> (2) "restore.consumer." for the restore consumer
>
> (3) "global.consumer." for the global consumer
>
>
> Best,
>
> Boyang
>
> 
> From: Boyang Chen 
> Sent: Tuesday, April 17, 2018 12:18 PM
> To: dev@kafka.apache.org
> Subject: [VOTE] KIP-276 Add StreamsConfig prefix for different consumers
>
>
> Hey friends.
>
>
> I would like to start a vote on KIP 276: add StreamsConfig prefix for
> different consumers.
>
> KIP: here 276+Add+StreamsConfig+prefix+for+different+consumers>
>
> Pull request: here
>
> Jira: here
>
>
> Let me know if you have questions.
>
>
> Thank you!
>
>
>  276+Add+StreamsConfig+prefix+for+different+consumers>
>
>


-- 
-- Guozhang
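For illustration, the prefixes summarized in this thread would be used roughly
like this when building the Streams configuration (a sketch; everything other
than the prefixed keys is a standard client config):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

// Sketch of KIP-276 usage: per-consumer overrides via prefixed config keys.
public class PrefixedConsumerConfigExample {
    public static Properties config() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app"); // hypothetical id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Applies to the main consumer only:
        props.put("main.consumer." + ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");
        // Applies to the restore consumer only:
        props.put("restore.consumer." + ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1000");
        return props;
    }
}
{code}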