Build failed in Jenkins: kafka-trunk-jdk8 #3570

2019-04-19 Thread Apache Jenkins Server
See 


Changes:

[cshapi] KAFKA-7965; Fix flaky test ConsumerBounceTest

--
[...truncated 4.75 MB...]
org.apache.kafka.connect.runtime.ConnectorConfigTest > emptyConnectorName PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > multipleTransforms STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > multipleTransforms PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > singleTransform STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > singleTransform PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > wrongTransformationType STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > wrongTransformationType PASSED

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should create a Produced with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should create a Produced with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with timestampExtractor and resetPolicy should create a Consumed with Serdes, timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with timestampExtractor and resetPolicy should create a Consumed with Serdes, timestampExtractor and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join correctly records STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join correctly records PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a Materialized should join correctly records and state store STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a Materialized should join correctly records and state store PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress should correctly suppress results using Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress should correctly suppress results using Suppressed.untilTimeLimit PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress should correctly suppress results using Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress should correctly suppress results using Suppressed.untilWindowCloses PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > session windowed KTable#suppress should correctly suppress results using Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > session windowed KTable#suppress should correctly suppress results using Suppressed.untilWindowCloses PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > non-windowed KTable#suppress should correctly suppress results using Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > non-windowed KTable#suppress should correctly suppress results using Suppressed.untilTimeLimit PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > Suppressed.untilWindowCloses should produce the correct suppression STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > Suppressed.untilWindowCloses should produce the correct suppression PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > Suppressed.untilTimeLimit should produce the correct suppression STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > Suppressed.untilTimeLimit should produce the correct suppression PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxRecords should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxRecords should produce the

Jenkins build is back to normal : kafka-trunk-jdk11 #449

2019-04-19 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-421: Automatically resolve external configurations.

2019-04-19 Thread Gwen Shapira
+1

On Thu, Apr 18, 2019, 3:02 PM TEJAL ADSUL  wrote:

> Hi All,
>
> As we have reached a consensus on the design, I would like to start a vote
> for KIP-421. Below are the links for this proposal:
>
> KIP Link:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100829515
> DiscussionThread:
> https://lists.apache.org/thread.html/a2f834d876e9f8fb3977db794bf161818c97f7f481edd1b10449d89f@%3Cdev.kafka.apache.org%3E
>
> Thanks,
> Tejal
>


Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-19 Thread Matthias J. Sax
Thank you all!

-Matthias


On 4/19/19 3:58 PM, Lei Chen wrote:
> Congratulations Matthias! Well deserved!
> 
> -Lei
> 
> On Fri, Apr 19, 2019 at 2:55 PM James Cheng wrote:
> 
> Congrats!!
> 
> -James
> 
> Sent from my iPhone
> 
> > On Apr 18, 2019, at 2:35 PM, Guozhang Wang wrote:
> >
> > Hello Everyone,
> >
> > I'm glad to announce that Matthias J. Sax is now a member of Kafka
> PMC.
> >
> > Matthias has been a committer since Jan. 2018, and since then he
> > continued to be active in the community and made significant
> > contributions to the project.
> >
> >
> > Congratulations to Matthias!
> >
> > -- Guozhang
> 





[jira] [Created] (KAFKA-8267) Flaky Test SaslAuthenticatorTest#testUserCredentialsUnavailableForScramMechanism

2019-04-19 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8267:
--

 Summary: Flaky Test 
SaslAuthenticatorTest#testUserCredentialsUnavailableForScramMechanism
 Key: KAFKA-8267
 URL: https://issues.apache.org/jira/browse/KAFKA-8267
 Project: Kafka
  Issue Type: Bug
  Components: core, security
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3925/testReport/junit/org.apache.kafka.common.security.authenticator/SaslAuthenticatorTest/testUserCredentialsUnavailableForScramMechanism/]
{quote}java.lang.AssertionError: Metric not updated successful-reauthentication-total expected:<0.0> but was:<1.0> expected:<0.0> but was:<1.0>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.apache.kafka.common.network.NioEchoServer.waitForMetrics(NioEchoServer.java:190)
at org.apache.kafka.common.network.NioEchoServer.verifyReauthenticationMetrics(NioEchoServer.java:157)
at org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest.testUserCredentialsUnavailableForScramMechanism(SaslAuthenticatorTest.java:501){quote}
STDOUT
{quote}[2019-04-19 22:15:35,524] ERROR Extensions provided in login context without a token (org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule:318)
java.io.IOException: Extensions provided in login context without a token
at org.apache.kafka.common.security.oauthbearer.internals.unsecured.OAuthBearerUnsecuredLoginCallbackHandler.handle(OAuthBearerUnsecuredLoginCallbackHandler.java:164)
at org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule.identifyToken(OAuthBearerLoginModule.java:316)
at org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule.login(OAuthBearerLoginModule.java:301)
at java.base/javax.security.auth.login.LoginContext.invoke(LoginContext.java:726)
at java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:665)
at java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:663)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:663)
at java.base/javax.security.auth.login.LoginContext.login(LoginContext.java:574)
at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:60)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:61)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:104)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:149)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
at org.apache.kafka.common.network.NioEchoServer.<init>(NioEchoServer.java:121)
at org.apache.kafka.common.network.NioEchoServer.<init>(NioEchoServer.java:97)
at org.apache.kafka.common.network.NetworkTestUtils.createEchoServer(NetworkTestUtils.java:49)
at org.apache.kafka.common.network.NetworkTestUtils.createEchoServer(NetworkTestUtils.java:43)
at org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest.createEchoServer(SaslAuthenticatorTest.java:1851)
at org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest.createEchoServer(SaslAuthenticatorTest.java:1847)
at org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest.testValidSaslOauthBearerMechanismWithoutServerTokens(SaslAuthenticatorTest.java:1586)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at

[jira] [Resolved] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-04-19 Thread Gwen Shapira (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-7965.
-
Resolution: Fixed

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 1.1.1, 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557)
> at kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
> at kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
> at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> at kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8266) Improve `testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup`

2019-04-19 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-8266:
--

 Summary: Improve 
`testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup`
 Key: KAFKA-8266
 URL: https://issues.apache.org/jira/browse/KAFKA-8266
 Project: Kafka
  Issue Type: Test
Reporter: Jason Gustafson


Some additional validation could be done after the member gets kicked out. The 
main thing is showing that the group can continue to consume data and commit 
offsets.
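
As an illustrative sketch only (hypothetical helper, not the actual test code), that validation might look roughly like this: after the oversized member is kicked out, poll with a surviving consumer and then commit offsets.

{code:java}
import java.time.Duration;
import org.apache.kafka.clients.consumer.Consumer;

public class GroupLivenessCheck {
    // Hypothetical sketch: verify the surviving group still consumes and commits.
    public static void assertGroupStillConsumes(final Consumer<byte[], byte[]> consumer,
                                                final int minRecords,
                                                final long timeoutMs) {
        int received = 0;
        final long deadline = System.currentTimeMillis() + timeoutMs;
        while (received < minRecords && System.currentTimeMillis() < deadline) {
            received += consumer.poll(Duration.ofMillis(100)).count();
        }
        if (received < minRecords) {
            throw new AssertionError("Received " + received + ", expected at least " + minRecords);
        }
        consumer.commitSync(); // offset commits should also still succeed
    }
}
{code}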



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-455: Create an Administrative API for Replica Reassignment

2019-04-19 Thread Colin McCabe
On Wed, Apr 17, 2019, at 17:23, Robert Barrett wrote:
> Thanks for the KIP, Colin. I have a couple questions:
> 
> 1. What's the reasoning for requiring cancellation of a reassignment before
> submitting a new one? It seems like overriding an existing reassignment
> could be done with a single update to
> /brokers/topics/[topic]/partitions/[partitionId]/state and a single
> LeaderAndIsrRequest. Maybe we could include a flag in the request so that
> the client can explicitly request to override an existing reassignment?

Hmm, good point.  That might be more convenient than having to cancel and 
remove before creating a new assignment.

> 2. I agree that supporting the old ZK API in the long term is a bad
> idea. However, while the number of tools that use the ZK API may be small,
> this would be a non-trivial change for them. Could we temporarily support
> both, with a config enabling the new behavior to prevent users from trying
> to use both mechanisms (if the config is true, the old znode is ignored; if
> the config is false, the Admin Client API returns an error indicating that
> it is not enabled)? We could then remove the ZK API in a later release, to
> give people time to update their tools.

It seems like the big change is basically just depending on adminclient versus 
a ZK client.  The code itself for converting a JSON file into an adminclient 
call shouldn't be difficult.  Maybe we could add a helper method to do this, to 
make it easier to do the conversion.  We'll already need that code for the 
command-line tool.
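
As a rough sketch of such a helper (hypothetical names; the actual Admin API is what this KIP is still defining), converting the existing JSON reassignment format into per-partition target replica lists might look like this:

// Hypothetical sketch using Jackson (already a Kafka dependency); class and
// method names are illustrative only.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.TopicPartition;

public class ReassignmentJsonHelper {
    // Parses {"version":1,"partitions":[{"topic":...,"partition":...,"replicas":[...]}]}
    public static Map<TopicPartition, List<Integer>> parse(final String json) throws IOException {
        final Map<TopicPartition, List<Integer>> targets = new HashMap<>();
        final JsonNode root = new ObjectMapper().readTree(json);
        for (final JsonNode p : root.get("partitions")) {
            final List<Integer> replicas = new ArrayList<>();
            for (final JsonNode r : p.get("replicas")) {
                replicas.add(r.asInt());
            }
            targets.put(new TopicPartition(p.get("topic").asText(), p.get("partition").asInt()),
                        replicas);
        }
        return targets; // feed into the new AdminClient reassignment call
    }
}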

best,
Colin


> 
> Thanks,
> Bob
> 
> On Mon, Apr 15, 2019 at 9:33 PM Colin McCabe  wrote:
> 
> > link:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-455%3A+Create+an+Administrative+API+for+Replica+Reassignment
> >
> > 
> >
> > On Mon, Apr 15, 2019, at 18:07, Colin McCabe wrote:
> > > Hi all,
> > >
> > > We've been having discussions on a few different KIPs (KIP-236,
> > > KIP-435, etc.) about what the Admin Client replica reassignment API
> > > should look like. The current API is really hard to extend and
> > > maintain, which is a big source of problems. I think it makes sense to
> > > have a KIP that establishes a clean API that we can use and extend
> > > going forward, so I posted KIP-455. Take a look. :)
> > >
> > > best,
> > > Colin
> > >
>


Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-19 Thread Lei Chen
Congratulations Matthias! Well deserved!

-Lei

On Fri, Apr 19, 2019 at 2:55 PM James Cheng  wrote:

> Congrats!!
>
> -James
>
> Sent from my iPhone
>
> > On Apr 18, 2019, at 2:35 PM, Guozhang Wang  wrote:
> >
> > Hello Everyone,
> >
> > I'm glad to announce that Matthias J. Sax is now a member of Kafka PMC.
> >
> > Matthias has been a committer since Jan. 2018, and since then he
> > continued to be active in the community and made significant
> > contributions to the project.
> >
> >
> > Congratulations to Matthias!
> >
> > -- Guozhang
>


Re: [VOTE] KIP-421: Automatically resolve external configurations.

2019-04-19 Thread Colin McCabe
+1.  Thanks, Tejal.

best,
Colin

On Thu, Apr 18, 2019, at 15:02, TEJAL ADSUL wrote:
> Hi All,
> 
> As we have reached a consensus on the design, I would like to start a 
> vote for KIP-421. Below are the links for this proposal:
> 
> KIP Link: 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100829515
> DiscussionThread: 
> https://lists.apache.org/thread.html/a2f834d876e9f8fb3977db794bf161818c97f7f481edd1b10449d89f@%3Cdev.kafka.apache.org%3E
> 
> Thanks,
> Tejal
>


Re: [DISCUSS] KIP-451: Make TopologyTestDriver output iterable

2019-04-19 Thread Patrik Kleindl
Hi Matthias
Seems I got a bit ahead of myself.
With option C my aim was a simple alternative that returns all output
records produced up to this point (and not yet removed by calls to
readOutput). Based on that, the user can decide how to step through or
compare the records.

If you see it as more consistent for the new methods to remove all returned
records, then this can easily be done.

But maybe the pick of Iterable was too narrow. It would probably be a better
fit to return a List or just a Collection.

Picking up John's naming suggestion, this would become:

public Collection<ProducerRecord<byte[], byte[]>> readAllOutput(final String topic) {
    final Collection<ProducerRecord<byte[], byte[]>> outputRecords = outputRecordsByTopic.get(topic);
    if (outputRecords == null) {
        return Collections.emptyList();
    }
    outputRecordsByTopic.put(topic, new LinkedList<>());
    return outputRecords;
}

With the same semantics as readOutput: everything returned is removed.

Can be changed to a List if you think it matters that a user can query
some index directly.
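
For illustration, a hypothetical test helper using the proposed method could look like this (readAllOutput is the method proposed above, not an existing TopologyTestDriver API; a second call would return an empty collection, mirroring readOutput semantics):

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.TopologyTestDriver;

public class ReadAllOutputExample {
    // Illustrative only: readAllOutput is the method proposed in this thread.
    static void printAll(final TopologyTestDriver driver, final String topic) {
        for (final ProducerRecord<byte[], byte[]> record : driver.readAllOutput(topic)) {
            System.out.println(new String(record.key()) + " -> " + new String(record.value()));
        }
    }
}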

What do you think?

best regards

Patrik



On Fri, 19 Apr 2019 at 03:04, Matthias J. Sax  wrote:

> I am not sure if (C) is the best option to pick.
>
> What is the reasoning to suggest (C) over the other options?
>
> It seems that users cannot clear buffered output using option (C). This
> might make it difficult to write tests.
>
> The original Jira tickets suggest:
>
> > which returns either an iterator or list over the records that are
> > currently available in the topic
>
> This implies that the current buffer would be cleared when getting the
> iterator.
>
> Also, from my understanding, the idea of iterating, in general, is to
> step through a finite collection of objects/elements. Hence, once
> `hasNext()` returns `false`, it will never return `true` later on.
>
> As John mentioned, Java also has support for streams, which offer
> different semantics that would align with option (C). However, I am not
> sure this would be the best API for writing tests.
>
> Thoughts?
>
> In any way: whatever semantics we pick, the KIP should explain them.
> Atm, this part is missing in the KIP.
>
>
> -Matthias
>
> On 4/18/19 12:47 PM, Patrik Kleindl wrote:
> > Hi John
> >
> > Thanks for your feedback
> > It's C; it does not consume the messages, in contrast to readOutput.
> > Is it a requirement to do so?
> > That's why I picked a different name so the difference is more
> noticeable.
> > I will add that to the JavaDoc.
> >
> > I see your point regarding future changes, that's why I linked KIP-456
> > where such a method is proposed and would maybe allow to deprecate my
> > version in favor of a bigger solution.
> >
> > Hope that answers your questions
> >
> > best regards
> > Patrik
> >
> >
> > On Thu, 18 Apr 2019 at 19:46, John Roesler  wrote:
> >
> >> Hi, Patrik,
> >>
> >> Thanks for this proposal!
> >>
> >> I have one question, which I didn't see addressed by the KIP. Currently,
> >> when you call `readOutput`, it consumes the result (removes it from the
> >> test driver's output). Does your proposed method:
> >> A: consume the whole output stream for that topic "atomically" when it
> >> returns the iterable? (i.e., two calls in a row would guarantee the
> second
> >> call is always an empty iterable?)
> >> B: consume each record when we iterate over it? (i.e., this is like a
> >> stream) If this is the case, is the returned object iterable once
> (uncached
> >> stream), or could we iterate over it repeatedly (cached stream)?
> >> C: not consume at all? (i.e., this is a view on the output topic, but we
> >> need a separate method to consume/clear the output)
> >> D: something else?
> >>
> >> Also, one suggestion: maybe name the method "readAllOutput" or
> something.
> >> Specifically naming it "iterable" makes it awkward if we do want to
> tighten
> >> the return type (e.g., to List) in the future. This is something we may
> >> actually want to do, if there's an easy way to say, "assert that the
> output
> >> equals [...some literal list...]".
> >>
> >> Thanks again!
> >> -John
> >>
> >> On Wed, Apr 17, 2019 at 4:01 PM Patrik Kleindl 
> wrote:
> >>
> >>> Hi all
> >>>
> >>> Unless someone has objections I will start a VOTE thread tomorrow.
> >>> The KIP adds two methods to the TopologyTestDriver and has no conflicts
> >> for
> >>> existing users.
> >>> PR https://github.com/apache/kafka/pull/6556 is already being
> reviewed.
> >>>
> >>> Side-note:
> >>>
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-456%3A+Helper+classes+to+make+it+simpler+to+write+test+logic+with+TopologyTestDriver
> >>> will
> >>> provide a much larger solution for the TopologyTestDriver, but is just
> >>> starting the discussion.
> >>>
> >>> best regards
> >>>
> >>> Patrik
> >>>
> >>> On Thu, 11 Apr 2019 at 22:14, Patrik Kleindl 
> wrote:
> >>>
>  Hi Matthias
> 
>  Thanks for the questions.
> 
>  Regarding the return type:
>  Iterable offers the option of being used in a foreach 

Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-19 Thread James Cheng
Congrats!!

-James

Sent from my iPhone

> On Apr 18, 2019, at 2:35 PM, Guozhang Wang  wrote:
> 
> Hello Everyone,
> 
> I'm glad to announce that Matthias J. Sax is now a member of Kafka PMC.
> 
> Matthias has been a committer since Jan. 2018, and since then he continued
> to be active in the community and made significant contributions to the
> project.
> 
> 
> Congratulations to Matthias!
> 
> -- Guozhang


[DISCUSS] KIP-458: Connector Client Config Override Policy

2019-04-19 Thread Magesh Nandakumar
Hi all,

I've posted "KIP-458: Connector Client Config Override Policy", which
allows users to override the connector client configurations based on a
policy defined by the administrator.

The KIP can be found at
https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy
.

Looking forward to the discussion on the KIP and to all of your thoughts &
feedback on this enhancement to Connect.

Thanks,
Magesh


[jira] [Created] (KAFKA-8265) Connect Client Config Override policy

2019-04-19 Thread Magesh kumar Nandakumar (JIRA)
Magesh kumar Nandakumar created KAFKA-8265:
--

 Summary: Connect Client Config Override policy
 Key: KAFKA-8265
 URL: https://issues.apache.org/jira/browse/KAFKA-8265
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect
Reporter: Magesh kumar Nandakumar


Right now, each source connector and sink connector inherits its client 
configurations from the worker properties. Within the worker properties, all 
configurations that have a prefix of "producer." or "consumer." are applied to 
all source connectors and sink connectors, respectively.

We should also provide connector-level overrides whereby connector properties 
that are prefixed with "producer." and "consumer." are used to feed into the 
producer and consumer clients embedded within source and sink connectors 
respectively. The prefixes will be removed via a String#substring() call, and 
the remainder of the connector property key will be used as the client 
configuration key. The value is fed directly to the client as the configuration 
value.
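
For illustration, a minimal sketch of the prefix handling described above (class and method names here are hypothetical):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class ClientConfigPrefixes {
    // E.g. withPrefix(props, "producer.") maps "producer.acks" -> "acks".
    public static Map<String, Object> withPrefix(final Map<String, ?> connectorProps,
                                                 final String prefix) {
        final Map<String, Object> clientProps = new HashMap<>();
        for (final Map.Entry<String, ?> entry : connectorProps.entrySet()) {
            if (entry.getKey().startsWith(prefix)) {
                clientProps.put(entry.getKey().substring(prefix.length()), entry.getValue());
            }
        }
        return clientProps;
    }
}
{code}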



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-454: Expansion of the ConnectClusterState interface

2019-04-19 Thread Chris Egerton
Hi Konstantine,

Thanks for your comments. I'll respond to them in the order you provided
them:

1. Clarity of the ConnectClusterState code block:
Good point. I've altered the code block in the KIP to remove the outer
'interface ConnectClusterState {' snippet and include only the additional
methods. Happy to make further changes if there's still a chance of
confusion here.

2. Targeted version:
I think it makes sense to target the 2.3 release for this KIP (time
permitting). The changes to the code base would consist of an additional
feature and not a bug fix, so I'm doubtful this qualifies to be backported.

3. ConnectorTaskId vs Integer:
I tried to follow the convention of the ConnectorHealth class, which
exposes task states as a map that uses task IDs (represented as Integer
objects) for keys, instead of ConnectorTaskId. There was also some discussion
on KIP-285 about relying on classes in the Connect runtime module (as
opposed to the api module) and how that should be avoided, so if we wanted
to use a more sophisticated key for the return type of the taskConfigs
method, we'd probably need to implement our own for use in the Connect api
module. I'm not opposed to this, but there didn't seem to be good enough
reason to propose it initially given the convention set by the
ConnectorHealth class.

4. Config retrieval from herder:
I've added a paragraph to the KIP that gives some details on how
configurations will be retrieved from the herder; it's basically the same
as with the current ConnectClusterStateImpl class but with a few method
calls changed here and there.
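
For reference, a rough sketch of the additional accessors under discussion in points 1 and 3 above (signatures here are illustrative only; the KIP page is authoritative):

import java.util.Map;

// Illustrative only: additions being discussed for the existing
// org.apache.kafka.connect.health.ConnectClusterState interface.
public interface ConnectClusterStateAdditions {
    // Configuration of a named connector, as currently known to the herder.
    Map<String, String> connectorConfig(String connName);

    // Task configurations keyed by task ID as an Integer, following the
    // convention of the ConnectorHealth class mentioned above.
    Map<Integer, Map<String, String>> taskConfigs(String connName);

    // ID of the Kafka cluster backing this Connect cluster.
    String kafkaClusterId();
}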

Thanks again for your thoughts!

Cheers,

Chris

On Fri, Apr 19, 2019 at 11:24 AM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Resending an email I sent this Monday but didn't make it to the mailing
> list:
>
> 
>
> Hi Chris,
>
> thanks for the KIP!
> I have the following initial comments:
>
> 1. In its current form, the code snippet gives the impression that this is
> a new interface or an interface that completely replaces the existing one.
> It's not clear that the interface is extended. Do you think we could represent
> this extension in a better way? (I'm aware that in the text you say that
> these methods are additional, but the code block gives a partial view of
> this interface).
>
> 2. In the compatibility page it'd be nice to read which version this
> feature is targeting. Now, given that KIP-285 was introduced in 2.0.0, I
> wonder if it'd make sense to have default methods for the new interface
> methods that you suggest adding.
>
> 3. Any reason why ConnectorTaskId is not used instead of Integer as key
> type in taskConfigs? This class is already part of the connect-api package
> and I'd imagine reusing it might allow for fewer transformations between
> task config maps currently used and the new ones that will be returned by
> this interface method.
>
> 4. Finally, there's a mention on the interface javadoc about how these
> configs are retrieved using the Connect herder, but it's not clear whether
> the leader of the workers' group will be queried or not for this
> information. I think a paragraph describing the assumptions with
> respect to what constitutes a "current" task configuration and how this is
> retrieved would be valuable here.
>
> Best,
> Konstantine
>
>
>
> On Thu, Apr 18, 2019 at 2:45 PM Chris Egerton  wrote:
>
> > Hi Magesh,
> >
> > Thanks for your comments. I'll address them in the order you provided
> them:
> >
> > 1 - Reason for exposing task configurations to REST extensions:
> > Yes, the motivation is a little thin for exposing task configs to REST
> > extensions. I can think of a few uses for this functionality, such as
> > attempting to infer problematic configurations by examining failed tasks
> > and comparing their configurations to the configurations of running
> tasks,
> > but like you've indicated it's dubious that the best place for anything
> > like that belongs in a REST extension.
> > I'd be interested to hear others' thoughts, but right now I'm not too
> > opposed to erring on the side of caution and leaving it out. Worst case,
> it
> > takes another KIP to add this later on down the road, but that's a small
> > price to pay to avoid adding support for a feature that nobody needs.
> >
> > 2. Usefulness of exposing Kafka cluster ID to REST extensions:
> > As the KIP states, "the Kafka cluster ID may be useful for the purpose of
> > uniquely identifying a Connect cluster from within a REST extension,
> since
> > users may be running multiple Kafka clusters and the group.id for a
> > distributed Connect cluster may not be sufficient to identify a cluster."
> > Even though there may be producer or consumer overrides for
> > bootstrap.servers present in the configuration for the worker, these will
> > not affect which Kafka cluster is used as a backing store for connector
> > configurations, offsets, and statuses, so the Kafka cluster ID for the
> > worker in conjunction with the 

Re: [DISCUSS] KIP-454: Expansion of the ConnectClusterState interface

2019-04-19 Thread Konstantine Karantasis
Resending an email I sent this Monday but didn't make it to the mailing
list:



Hi Chris,

thanks for the KIP!
I have the following initial comments:

1. In its current form, the code snippet gives the impression that this is
a new interface or an interface that completely replaces the existing one.
It's not clear that the interface is extended. Do you think we could represent
this extension in a better way? (I'm aware that in the text you say that
these methods are additional, but the code block gives a partial view of
this interface).

2. In the compatibility page it'd be nice to read which version this
feature is targeting. Now, given that KIP-285 was introduced in 2.0.0, I
wonder if it'd make sense to have default methods for the new interface
methods that you suggest adding.

3. Any reason why ConnectorTaskId is not used instead of Integer as key
type in taskConfigs? This class is already part of the connect-api package
and I'd imagine reusing it might allow for fewer transformations between
task config maps currently used and the new ones that will be returned by
this interface method.

4. Finally, there's a mention on the interface javadoc about how these
configs are retrieved using the Connect herder, but it's not clear whether
the leader of the workers' group will be queried or not for this
information. I think a paragraph describing the assumptions with
respect to what constitutes a "current" task configuration and how this is
retrieved would be valuable here.

Best,
Konstantine



On Thu, Apr 18, 2019 at 2:45 PM Chris Egerton  wrote:

> Hi Magesh,
>
> Thanks for your comments. I'll address them in the order you provided them:
>
> 1 - Reason for exposing task configurations to REST extensions:
> Yes, the motivation is a little thin for exposing task configs to REST
> extensions. I can think of a few uses for this functionality, such as
> attempting to infer problematic configurations by examining failed tasks
> and comparing their configurations to the configurations of running tasks,
> but like you've indicated it's dubious that the best place for anything
> like that belongs in a REST extension.
> I'd be interested to hear others' thoughts, but right now I'm not too
> opposed to erring on the side of caution and leaving it out. Worst case, it
> takes another KIP to add this later on down the road, but that's a small
> price to pay to avoid adding support for a feature that nobody needs.
>
> 2. Usefulness of exposing Kafka cluster ID to REST extensions:
> As the KIP states, "the Kafka cluster ID may be useful for the purpose of
> uniquely identifying a Connect cluster from within a REST extension, since
> users may be running multiple Kafka clusters and the group.id for a
> distributed Connect cluster may not be sufficient to identify a cluster."
> Even though there may be producer or consumer overrides for
> bootstrap.servers present in the configuration for the worker, these will
> not affect which Kafka cluster is used as a backing store for connector
> configurations, offsets, and statuses, so the Kafka cluster ID for the
> worker in conjunction with the Connect group ID should be sufficient to
> uniquely identify a Connect cluster.
> We can and should document that the Connect cluster with overridden
> producer.bootstrap.servers or consumer.bootstrap.servers may be writing
> to/reading from a different Kafka cluster. However, REST extensions are
> already passed the configs for their worker through their configure(...)
> method, so they'd be able to detect any such overrides and act accordingly.
>
> Thanks again for your thoughts!
>
> Cheers,
>
> Chris
>
> On Thu, Apr 18, 2019 at 11:08 AM Magesh Nandakumar 
> wrote:
>
> > Hi Chris,
> >
> > Thanks for the KIP. Overall, it looks good and straightforward to me.
> >
> > I had a few questions on the new methods
> >
> > 1. I'm not sure if an extension would ever require the task configs. An
> > extension generally should only require the health and current state of
> the
> > connector which includes the connector config. I was wondering if there
> is
> > a specific reason it would need task configs.
> > 2. Also, I'm not convinced that kafkaClusterId() belongs to the
> > ConnectClusterState
> > interface. The interface is generally to provide information about the
> > Connect cluster and its information.  Also, the kafkaClusterId could
> > potentially change based on whether there is a "producer." or "consumer."
> > prefix, right?
> >
> > Having said that, I would prefer to have connectorConfigs which I think
> is
> > a great idea and addition to the interface. Let me know what you think.
> >
> > Thanks,
> > Magesh
> >
> > On Sat, Apr 13, 2019 at 9:00 PM Chris Egerton 
> wrote:
> >
> > > Hi all,
> > >
> > > I've posted "KIP-454: Expansion of the ConnectClusterState interface",
> > > which proposes that we add provide more information about the Connect
> > > cluster to REST extensions.
> > >
> > > The KIP can be found at
> > >
> > >
> >
> 

Re: [VOTE] KIP-433: Block old clients on brokers

2019-04-19 Thread Harsha
Thanks Ying for updating the KIP. 
Hi Ismael,
 Given that min.api.version allows admins/users to specify a min version
for each request, this should address your concerns, right?

Thanks,
Harsha

On Wed, Apr 17, 2019, at 2:29 PM, Ying Zheng wrote:
> I have updated the config description in the KIP and made the example
> clearer.
> 
> The proposed change allows setting different min versions for different
> APIs, and the ApiVersionRequest change is already in the KIP.
> 
> On Fri, Apr 12, 2019 at 8:22 PM Harsha  wrote:
> 
> > Hi Ismael,
> > I meant to say blocking clients based on their API version
> > https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/api/ApiVersion.scala#L48
> > But if I understand what you are saying: since each client release can
> > support different versions for each of fetch, produce, offset commit, etc.,
> > it's harder to block based on a single min.api.version setting
> > across different clients.
> > The idea I had in mind was to do this via ApiVersionsRequest: when a
> > client makes an ApiVersions request to the broker, in the response we
> > return the min and max version supported for each API. When
> > min.api.version is enabled on the broker, it returns the max version it
> > supports for each request in that release as the min version to the
> > clients.
> >
> > Example:
> > A Kafka 1.1.1 broker with min.api.version set to
> > https://github.com/apache/kafka/blob/1.1/core/src/main/scala/kafka/api/ApiVersion.scala#L79
> > (KAFKA_1_1_IV0): when a client makes an ApiVersionsRequest, then for, e.g.,
> > the produce request
> >
> > https://github.com/apache/kafka/blob/1.1/clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java#L112
> > instead of returning all of the supported versions, the broker will return
> > PRODUCE_REQUEST_V5 as the only supported version.
> >
> > Irrespective of the above approach, I understand your point still stands:
> > Sarama might not choose to implement all the higher-version
> > protocols for the Kafka 1.1 release, they might introduce a higher version
> > of produce request in a subsequent minor release, and it will be harder for
> > users to figure out which release of the Sarama client they can use.
> >
> >
> > Ying, if you have a different approach which might address this issue
> > please add.
> >
> >
> > Thanks,
> > Harsha
> >
> > On Fri, Apr 12, 2019, at 7:23 PM, Ismael Juma wrote:
> > > Hi Harsha,
> > >
> > > There is no such thing as a 1.1 protocol. I encourage you to describe an
> > > example config that achieves what you are suggesting here. It's pretty
> > > complicated because the versions are per API and each client evolves
> > > independently.
> > >
> > > Ismael
> > >
> > > On Sat, Apr 13, 2019 at 4:09 AM Harsha  wrote:
> > >
> > > > Hi,
> > > >
> > > > "Relying on min.version seems like a pretty clunky way to achieve the
> > above
> > > > > list. The challenge is that it's pretty difficult to do it in a way
> > that
> > > > > works for clients across languages. They each add support for new
> > > > protocol
> > > > > versions independently (it could even happen in a bug fix release).
> > So,
> > > > if
> > > > > you tried to block Sarama in #2, you may block Java clients too."
> > > >
> > > > That's the intended effect, right? If you, as the admin/operator,
> > > > configure the broker to have min.api.version be 1.1,
> > > > it should block Java clients, Sarama clients, etc. that are below the
> > > > 1.1 protocol. As mentioned, this is not just related to the log.format
> > > > upgrade problem but is in general a forcing cause to get the users to
> > > > upgrade their client version in a multi-tenant environment.
> > > >
> > > > "> For #3, it seems simplest to have a config that requires clients to
> > > > support
> > > > > a given message format version (or higher). For #2, it seems like
> > you'd
> > > > > want clients to advertise their versions. That would be useful for
> > > > multiple
> > > > > reasons."
> > > > This KIP offers the ability to block clients based on the protocol they
> > > > support. This should be independent of the message format upgrade. Not
> > > > all of the features or bugs are dependent on a message format, and
> > > > having a message format dependency to block clients means we have to
> > > > upgrade message.format; we cannot just say we have 1.1 brokers with
> > > > 0.8.2 message format and now want to block all 0.8.x clients.
> > > >
> > > > min.api.version helps at the cluster level to say that all users are
> > > > required to upgrade their clients to, at minimum, speak the
> > > > min.api.version, and it is not tied to message.format, because one does
> > > > not always want to upgrade the message format just to block the old
> > > > clients.
> > > >
> > > >
> > > > To Gwen's point, I think we should also state in the error message that
> > > > the broker only supports min.api.version and above, so that users see a
> > > > clear message and upgrade to a newer version.
> 
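
For illustration, a minimal sketch of the version-clamping idea discussed above (hypothetical names and structure; not the KIP's actual implementation):

import java.util.HashMap;
import java.util.Map;

public class ApiVersionFloor {
    // For each API key, raise the advertised minimum version to a configured
    // floor; an API whose floor exceeds its max becomes unavailable to old clients.
    public static Map<Short, short[]> clamp(final Map<Short, short[]> supported,
                                            final Map<Short, Short> floorByApi) {
        final Map<Short, short[]> result = new HashMap<>();
        for (final Map.Entry<Short, short[]> e : supported.entrySet()) {
            short min = e.getValue()[0];
            final short max = e.getValue()[1];
            final Short floor = floorByApi.get(e.getKey());
            if (floor != null && floor > min) {
                min = floor;
            }
            if (min <= max) {
                result.put(e.getKey(), new short[]{min, max});
            }
        }
        return result;
    }
}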

[jira] [Created] (KAFKA-8264) Flaky Test PlaintextConsumerTest#testLowMaxFetchSizeForRequestAndPartition

2019-04-19 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8264:
--

 Summary: Flaky Test 
PlaintextConsumerTest#testLowMaxFetchSizeForRequestAndPartition
 Key: KAFKA-8264
 URL: https://issues.apache.org/jira/browse/KAFKA-8264
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.0.1
Reporter: Matthias J. Sax
 Fix For: 2.0.2


[https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/252/tests]
{quote}org.apache.kafka.common.errors.TopicExistsException: Topic 'topic3' already exists.{quote}
STDOUT
 
{quote}[2019-04-19 03:54:20,080] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:20,080] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:20,312] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:20,313] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:20,994] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:21,727] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition topicWithNewMessageFormat-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:28,696] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:28,699] ERROR [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:29,246] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:29,247] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:29,287] ERROR [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:33,408] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:33,408] ERROR [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:33,655] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 03:54:33,657] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 

[jira] [Created] (KAFKA-8263) Flaky Test MetricsIntegrationTest#testStreamMetricOfWindowStore

2019-04-19 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8263:
--

 Summary: Flaky Test 
MetricsIntegrationTest#testStreamMetricOfWindowStore
 Key: KAFKA-8263
 URL: https://issues.apache.org/jira/browse/KAFKA-8263
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3900/testReport/junit/org.apache.kafka.streams.integration/MetricsIntegrationTest/testStreamMetricOfWindowStore/]
{quote}java.lang.AssertionError: Condition not met within timeout 1. testStoreMetricWindow -> Size of metrics of type:'put-latency-avg' must be equal to:2 but it's equal to 0 expected:<2> but was:<0>
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:361){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8262) Flaky Test MetricsIntegrationTest#testStreamMetric

2019-04-19 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8262:
--

 Summary: Flaky Test MetricsIntegrationTest#testStreamMetric
 Key: KAFKA-8262
 URL: https://issues.apache.org/jira/browse/KAFKA-8262
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3900/testReport/junit/org.apache.kafka.streams.integration/MetricsIntegrationTest/testStreamMetric/]
{quote}java.lang.AssertionError: Condition not met within timeout 1. testTaskMetric -> Size of metrics of type:'commit-latency-avg' must be equal to:5 but it's equal to 0 expected:<5> but was:<0>
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:361)
at org.apache.kafka.streams.integration.MetricsIntegrationTest.testStreamMetric(MetricsIntegrationTest.java:228){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8261) snappyjava.dll not removed after consumer restart.

2019-04-19 Thread Yurii (JIRA)
Yurii created KAFKA-8261:


 Summary: snappyjava.dll not removed after consumer restart.
 Key: KAFKA-8261
 URL: https://issues.apache.org/jira/browse/KAFKA-8261
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 2.2.0
 Environment: Windows Server
Reporter: Yurii


When using snappy compression with the producer, snappyjava.dll is not being
removed from the Temp folder after the producer is closed.

 

Producer properties used:

 
{code:java}
kafkaProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
    org.apache.kafka.common.serialization.StringSerializer.class);
kafkaProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
    org.apache.kafka.common.serialization.StringSerializer.class);
kafkaProperties.put(ProducerConfig.ACKS_CONFIG, "-1");
kafkaProperties.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 6L);
kafkaProperties.put(ProducerConfig.RETRIES_CONFIG, "3");
kafkaProperties.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
kafkaProperties.put(ProducerConfig.LINGER_MS_CONFIG, "20");
kafkaProperties.put(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(32 * 1024));
{code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Kafka point in time recovery?

2019-04-19 Thread Jonathan Santilli
Hello Kumar,

If I got it correctly, you can try the offsetsForTimes method from the
KafkaConsumer API.
Maybe you can reset the offset to a particular timestamp and consume up to
a certain offset or timestamp.

https://kafka.apache.org/22/javadoc/?org/apache/kafka/clients/consumer/KafkaConsumer.html
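
A minimal sketch of that approach (illustrative names; assumes the consumer already has an assignment):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class RewindToTimestamp {
    // Seek each assigned partition to the earliest offset whose record
    // timestamp is >= targetMs (e.g. yesterday 5 PM as epoch millis).
    public static void rewind(final Consumer<?, ?> consumer, final long targetMs) {
        final Map<TopicPartition, Long> query = new HashMap<>();
        for (final TopicPartition tp : consumer.assignment()) {
            query.put(tp, targetMs);
        }
        final Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(query);
        for (final Map.Entry<TopicPartition, OffsetAndTimestamp> e : offsets.entrySet()) {
            if (e.getValue() != null) { // null if no record at/after targetMs
                consumer.seek(e.getKey(), e.getValue().offset());
            }
        }
    }
}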

Hope that helps.
--
Jonathan




On Fri, Apr 19, 2019 at 2:56 AM kumar  wrote:

> Is there any possibility of recovering Kafka topics and brokers to a point in
> time? I want to recover Kafka topics and brokers as of yesterday 5 PM. I
> don't want any data that arrived in Kafka after 5 PM yesterday. I read about
> mirroring data using Kafka Replicator, uReplicator, etc. All the mirroring
> tools do is mirror data from one Kafka cluster to another Kafka cluster in
> real-time async mode. I am looking for a feature in Kafka where I can do
> point-in-time recovery like an Oracle database or Elasticsearch.
>
> What is working for us is taking snapshots of the Kafka VMs and recovering
> from VM snapshots. This is also helping us with consumer offsets. We have
> not faced any issues as of now; not sure if this is an appropriate method.
>
> Thanks,
> AK
>


-- 
Santilli Jonathan


Re: [ANNOUNCE] New Kafka PMC member: Sriharsh Chintalapan

2019-04-19 Thread Satish Duggana
Congrats Harsha!

On Fri, Apr 19, 2019 at 2:58 PM Mickael Maison 
wrote:

> Congratulations Harsha!
>
>
> On Fri, Apr 19, 2019 at 5:49 AM Manikumar 
> wrote:
> >
> > Congrats Harsha!.
> >
> > On Fri, Apr 19, 2019 at 7:43 AM Dong Lin  wrote:
> >
> > > Congratulations Sriharsh!
> > >
> > > On Thu, Apr 18, 2019 at 11:46 AM Jun Rao  wrote:
> > >
> > > > Hi, Everyone,
> > > >
> > > > Sriharsh Chintalapan has been active in the Kafka community since he
> > > > became a Kafka committer in 2015. I am glad to announce that Harsh is
> > > > now a member of Kafka PMC.
> > > >
> > > > Congratulations, Harsh!
> > > >
> > > > Jun
> > > >
> > >
>


Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-19 Thread Satish Duggana
Congratulations Matthias!

On Fri, Apr 19, 2019 at 2:59 PM Jorge Quilcate 
wrote:

> Congrats Matthias!!
>
> On 4/19/19 11:28 AM, Mickael Maison wrote:
> > Congrats Matthias!
> >
> > On Fri, Apr 19, 2019 at 6:07 AM Vahid Hashemian
> >  wrote:
> >> Congratulations Matthias!
> >>
> >> --Vahid
> >>
> >> On Thu, Apr 18, 2019 at 9:39 PM Manikumar 
> wrote:
> >>
> >>> Congrats Matthias!. well deserved.
> >>>
> >>> On Fri, Apr 19, 2019 at 7:44 AM Dong Lin  wrote:
> >>>
>  Congratulations Matthias!
> 
>  Very well deserved!
> 
>  On Thu, Apr 18, 2019 at 2:35 PM Guozhang Wang 
> >>> wrote:
> > Hello Everyone,
> >
> > I'm glad to announce that Matthias J. Sax is now a member of Kafka
> PMC.
> >
> > Matthias has been a committer since Jan. 2018, and since then he
> > continued to be active in the community and made significant
> > contributions to the project.
> >
> >
> > Congratulations to Matthias!
> >
> > -- Guozhang
> >
> >>
> >> --
> >>
> >> Thanks!
> >> --Vahid
>


Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-19 Thread Jorge Quilcate

Congrats Matthias!!

On 4/19/19 11:28 AM, Mickael Maison wrote:

Congrats Matthias!

On Fri, Apr 19, 2019 at 6:07 AM Vahid Hashemian
 wrote:

Congratulations Matthias!

--Vahid

On Thu, Apr 18, 2019 at 9:39 PM Manikumar  wrote:


Congrats Matthias!. well deserved.

On Fri, Apr 19, 2019 at 7:44 AM Dong Lin  wrote:


Congratulations Matthias!

Very well deserved!

On Thu, Apr 18, 2019 at 2:35 PM Guozhang Wang wrote:

Hello Everyone,

I'm glad to announce that Matthias J. Sax is now a member of Kafka PMC.

Matthias has been a committer since Jan. 2018, and since then he continued
to be active in the community and made significant contributions to the
project.


Congratulations to Matthias!

-- Guozhang



--

Thanks!
--Vahid


Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-19 Thread Mickael Maison
Congrats Matthias!

On Fri, Apr 19, 2019 at 6:07 AM Vahid Hashemian
 wrote:
>
> Congratulations Matthias!
>
> --Vahid
>
> On Thu, Apr 18, 2019 at 9:39 PM Manikumar  wrote:
>
> > Congrats Matthias!. well deserved.
> >
> > On Fri, Apr 19, 2019 at 7:44 AM Dong Lin  wrote:
> >
> > > Congratulations Matthias!
> > >
> > > Very well deserved!
> > >
> > > On Thu, Apr 18, 2019 at 2:35 PM Guozhang Wang 
> > wrote:
> > >
> > > > Hello Everyone,
> > > >
> > > > I'm glad to announce that Matthias J. Sax is now a member of Kafka PMC.
> > > >
> > > > Matthias has been a committer since Jan. 2018, and since then he
> > > > continued to be active in the community and made significant
> > > > contributions to the project.
> > > >
> > > >
> > > > Congratulations to Matthias!
> > > >
> > > > -- Guozhang
> > > >
> > >
> >
>
>
> --
>
> Thanks!
> --Vahid


Re: [ANNOUNCE] New Kafka PMC member: Sriharsh Chintalapan

2019-04-19 Thread Mickael Maison
Congratulations Harsha!


On Fri, Apr 19, 2019 at 5:49 AM Manikumar  wrote:
>
> Congrats Harsha!.
>
> On Fri, Apr 19, 2019 at 7:43 AM Dong Lin  wrote:
>
> > Congratulations Sriharsh!
> >
> > On Thu, Apr 18, 2019 at 11:46 AM Jun Rao  wrote:
> >
> > > Hi, Everyone,
> > >
> > > Sriharsh Chintalapan has been active in the Kafka community since he
> > > became a Kafka committer in 2015. I am glad to announce that Harsh is
> > > now a member of Kafka PMC.
> > >
> > > Congratulations, Harsh!
> > >
> > > Jun
> > >
> >