Build failed in Jenkins: kafka-trunk-jdk8 #3629

2019-05-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8294; Batch StopReplica requests when possible and improve test

--
[...truncated 2.44 MB...]
@Test
 ^
  symbol:   class Test
  location: class TopologyTestDriverTest
:564:
 error: cannot find symbol
@Test
 ^
  symbol:   class Test
  location: class TopologyTestDriverTest
:589:
 error: cannot find symbol
@Test
 ^
  symbol:   class Test
  location: class TopologyTestDriverTest
:606:
 error: cannot find symbol
@Test
 ^
  symbol:   class Test
  location: class TopologyTestDriverTest
:624:
 error: cannot find symbol
@Test
 ^
  symbol:   class Test
  location: class TopologyTestDriverTest
100 errors

> Task :streams:test-utils:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:testClasses
> Task :streams:upgrade-system-tests-0110:checkstyleTest
> Task :streams:upgrade-system-tests-0110:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:test
> Task :streams:upgrade-system-tests-10:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-10:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-10:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:compileTestJava
> Task :streams:upgrade-system-tests-10:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:testClasses
> Task :streams:upgrade-system-tests-10:checkstyleTest
> Task :streams:test-utils:spotbugsMain
> Task :streams:upgrade-system-tests-10:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:test
> Task :streams:upgrade-system-tests-11:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-11:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-11:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-11:compileTestJava
> Task :streams:upgrade-system-tests-11:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:testClasses
> Task :streams:upgrade-system-tests-11:checkstyleTest
> Task 

[VOTE] 2.2.1 RC1

2019-05-13 Thread Vahid Hashemian
Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 2.2.1.

Compared to RC0, this release candidate also fixes the following issues:

   - [KAFKA-6789] - Add retry logic in AdminClient requests
   - [KAFKA-8348] - Document of kafkaStreams improvement
   - [KAFKA-7633] - Kafka Connect requires permission to create internal
   topics even if they exist
   - [KAFKA-8240] - Source.equals() can fail with NPE
   - [KAFKA-8335] - Log cleaner skips Transactional mark and batch record,
   causing unlimited growth of __consumer_offsets
   - [KAFKA-8352] - Connect System Tests are failing with 404

Release notes for the 2.2.1 release:
https://home.apache.org/~vahid/kafka-2.2.1-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Thursday, May 16, 9:00 pm PT.

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~vahid/kafka-2.2.1-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~vahid/kafka-2.2.1-rc1/javadoc/

* Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
https://github.com/apache/kafka/releases/tag/2.2.1-rc1

* Documentation:
https://kafka.apache.org/22/documentation.html

* Protocol:
https://kafka.apache.org/22/protocol.html

* Successful Jenkins builds for the 2.2 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/115/

Thanks!
--Vahid


Build failed in Jenkins: kafka-trunk-jdk11 #517

2019-05-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8294; Batch StopReplica requests when possible and improve test

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision e4007a6408236d9b52b355fcf3c3a80fe5f570ff 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e4007a6408236d9b52b355fcf3c3a80fe5f570ff
Commit message: "KAFKA-8294; Batch StopReplica requests when possible and 
improve test coverage (#6642)"
 > git rev-list --no-walk 63e4f67d9ba9e08bdce705b35c5acf32dcd20633 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins3523495271476563389.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins3523495271476563389.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


[jira] [Resolved] (KAFKA-8294) Batch StopReplica requests with partition deletion and add test cases

2019-05-13 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8294.

   Resolution: Fixed
Fix Version/s: 2.3.0

> Batch StopReplica requests with partition deletion and add test cases
> -
>
> Key: KAFKA-8294
> URL: https://issues.apache.org/jira/browse/KAFKA-8294
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.3.0
>
>
> One of the tricky aspects we found in KAFKA-8237 is the batching of the 
> StopReplica requests. We should have test cases covering the expected behavior 
> so that we do not introduce regressions, and we should make the batching 
> consistent whether or not `deletePartitions` is set.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8363) Config provider parsing is broken

2019-05-13 Thread Chris Egerton (JIRA)
Chris Egerton created KAFKA-8363:


 Summary: Config provider parsing is broken
 Key: KAFKA-8363
 URL: https://issues.apache.org/jira/browse/KAFKA-8363
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.1.1, 2.2.0, 2.1.0, 2.0.1, 2.0.0
Reporter: Chris Egerton
Assignee: Chris Egerton


The 
[regex|https://github.com/apache/kafka/blob/63e4f67d9ba9e08bdce705b35c5acf32dcd20633/clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java#L56]
 used by the {{ConfigTransformer}} class to parse config provider syntax (see 
[KIP-279|https://cwiki.apache.org/confluence/display/KAFKA/KIP-297%3A+Externalizing+Secrets+for+Connect+Configurations])
 is broken and fails when multiple path-less configs are specified. For 
example: {{"${provider:configOne} ${provider:configTwo}"}} would be parsed 
incorrectly as a reference with a path of {{"configOne} $\{provider"}}. 
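A minimal sketch of the failure, assuming a pattern of the same shape as the linked regex: the {{BROKEN}} pattern below is illustrative, and the {{FIXED}} variant is a hypothetical repair for demonstration, not the actual Kafka patch.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConfigRegexDemo {
    // Illustrative pattern shaped like the broken one: every group is a lazy
    // ".*?", so the optional "path" group can swallow text across the
    // boundary between two adjacent variable references.
    static final Pattern BROKEN = Pattern.compile("\\$\\{(.*?):((.*?):)?(.*?)\\}");
    // Hypothetical repair: forbid '}' and ':' inside the provider and path
    // segments, so a reference can never span past its own closing brace.
    static final Pattern FIXED = Pattern.compile("\\$\\{([^}:]*):(([^}:]*):)?([^}]*)\\}");

    public static void main(String[] args) {
        String input = "${provider:configOne} ${provider:configTwo}";

        Matcher broken = BROKEN.matcher(input);
        if (broken.find()) {
            // The whole string matches as ONE reference; group(3) is the
            // "path", misparsed as: configOne} ${provider
            System.out.println("broken path = " + broken.group(3));
        }

        Matcher fixed = FIXED.matcher(input);
        while (fixed.find()) {
            // Two separate references, each correctly parsed with no path
            System.out.println("provider=" + fixed.group(1) + " var=" + fixed.group(4));
        }
    }
}
```

Running this shows the broken pattern collapsing both references into one, with the path captured as {{configOne} $\{provider}}, while the repaired pattern yields two clean path-less references.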



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8362) LogCleaner gets stuck after partition move between log directories

2019-05-13 Thread Julio Ng (JIRA)
Julio Ng created KAFKA-8362:
---

 Summary: LogCleaner gets stuck after partition move between log 
directories
 Key: KAFKA-8362
 URL: https://issues.apache.org/jira/browse/KAFKA-8362
 Project: Kafka
  Issue Type: Bug
  Components: log cleaner
Reporter: Julio Ng


When a partition is moved from one log directory to another, its checkpoint entry 
in the cleaner-offset-checkpoint file is not removed from the source directory.

As a consequence, when we read the last firstDirtyOffset, we might get a stale 
value from the old checkpoint file.

Basically, we need to clean up the entry from the checkpoint file in the source 
directory when the move is completed.

The current issue is that the code in LogCleanerManager:

{code:scala}
def allCleanerCheckpoints: Map[TopicPartition, Long] = {
  inLock(lock) {
    checkpoints.values.flatMap(checkpoint => {
      try {
        checkpoint.read()
      } catch {
        case e: KafkaStorageException =>
          error(s"Failed to access checkpoint file ${checkpoint.file.getName} in dir ${checkpoint.file.getParentFile.getAbsolutePath}", e)
          Map.empty[TopicPartition, Long]
      }
    }).toMap
  }
}
{code}

collapses the offsets when multiple entries exist for the same TopicPartition, 
because {{toMap}} keeps only the last value read for a duplicate key.
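The collapse comes from merging duplicate keys into one map, where the value read last silently wins. A minimal Java illustration of that last-wins behavior (the partition name and offsets below are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class CheckpointCollapseDemo {
    public static void main(String[] args) {
        // After a move between log directories, both the source and the
        // destination dir's checkpoint file carry an entry for the same
        // partition. Flattening all files into a single map keeps only the
        // value read last -- which may be the stale source-dir offset.
        Map<String, Long> allCheckpoints = new HashMap<>();
        allCheckpoints.put("topic-0", 500L); // fresh offset, destination dir
        allCheckpoints.put("topic-0", 120L); // stale offset, source dir read later
        System.out.println(allCheckpoints.get("topic-0")); // prints 120
    }
}
```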



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8361) Fix ConsumerPerformanceTest#testNonDetailedHeaderMatchBody to test a real ConsumerPerformance's method

2019-05-13 Thread Kengo Seki (JIRA)
Kengo Seki created KAFKA-8361:
-

 Summary: Fix 
ConsumerPerformanceTest#testNonDetailedHeaderMatchBody to test a real 
ConsumerPerformance's method
 Key: KAFKA-8361
 URL: https://issues.apache.org/jira/browse/KAFKA-8361
 Project: Kafka
  Issue Type: Improvement
  Components: unit tests
Reporter: Kengo Seki
Assignee: Kengo Seki


{{kafka.tools.ConsumerPerformanceTest#testNonDetailedHeaderMatchBody}} doesn't 
currently work as a regression test, since it tests an anonymous function defined 
in the test method itself.
{code:java}
  @Test
  def testNonDetailedHeaderMatchBody(): Unit = {
testHeaderMatchContent(detailed = false, 2, () => 
println(s"${dateFormat.format(System.currentTimeMillis)}, " +
  s"${dateFormat.format(System.currentTimeMillis)}, 1.0, 1.0, 1, 1.0, 1, 1, 
1.1, 1.1"))
  }
{code}
It should test a real {{ConsumerPerformance}}'s method, just like 
{{testDetailedHeaderMatchBody}}.
{code:java}
  @Test
  def testDetailedHeaderMatchBody(): Unit = {
testHeaderMatchContent(detailed = true, 2,
  () => ConsumerPerformance.printConsumerProgress(1, 1024 * 1024, 0, 1, 0, 
0, 1, dateFormat, 1L))
  }
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3628

2019-05-13 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Throw ProducerFencedException directly but with a new instance

--
[...truncated 2.42 MB...]

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionNumbers PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testMaterialize STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testMaterialize PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionsSpec STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionsSpec PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithFailedExit STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithFailedExit PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessNotFound STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessNotFound PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessForceKillTimeout STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessForceKillTimeout PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessStop 
STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessStop 
PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithNormalExit STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithNormalExit PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testCreatesNotExistingTopics 
STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testCreatesNotExistingTopics 
PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreateZeroTopicsDoesNothing STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreateZeroTopicsDoesNothing PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreateNonExistingTopicsWithZeroTopicsDoesNothing STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreateNonExistingTopicsWithZeroTopicsDoesNothing PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreateTopicsFailsIfAtLeastOneTopicExists STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreateTopicsFailsIfAtLeastOneTopicExists PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreatesOneTopicVerifiesOneTopic STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreatesOneTopicVerifiesOneTopic PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCommonConfigOverwritesDefaultProps STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCommonConfigOverwritesDefaultProps PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testClientConfigOverwritesBothDefaultAndCommonConfigs STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testClientConfigOverwritesBothDefaultAndCommonConfigs PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testExistingTopicsNotCreated 
STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testExistingTopicsNotCreated 
PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testGetMatchingTopicPartitionsCorrectlyMatchesExactTopicName STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testGetMatchingTopicPartitionsCorrectlyMatchesExactTopicName PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testCreateRetriesOnTimeout 
STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testCreateRetriesOnTimeout 
PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testExistingTopicsMustHaveRequestedNumberOfPartitions STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testExistingTopicsMustHaveRequestedNumberOfPartitions PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testAddConfigsToPropertiesAddsAllConfigs STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testAddConfigsToPropertiesAddsAllConfigs PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testGetMatchingTopicPartitionsCorrectlyMatchesTopics STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testGetMatchingTopicPartitionsCorrectlyMatchesTopics PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testCreateOneTopic STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testCreateOneTopic PASSED

org.apache.kafka.trogdor.common.JsonUtilTest > 
testObjectFromCommandLineArgument STARTED

org.apache.kafka.trogdor.common.JsonUtilTest > 
testObjectFromCommandLineArgument PASSED

org.apache.kafka.trogdor.common.JsonUtilTest > testOpenBraceComesFirst STARTED

org.apache.kafka.trogdor.common.JsonUtilTest > testOpenBraceComesFirst PASSED

org.apache.kafka.trogdor.common.StringFormatterTest > testDurationString STARTED

org.apache.kafka.trogdor.common.StringFormatterTest > testDurationString PASSED

org.apache.kafka.trogdor.common.StringFormatterTest > testDateString STARTED


Re: [VOTE] KIP-411: Make default Kafka Connect worker task client IDs distinct

2019-05-13 Thread Arjun Satish
Paul,

Looks like the last note gives the impression that this change expects people
to update their quota configurations. Can we make it clear that this change
will not impact quota limits, and that these default client IDs are not a
reasonable way to configure quotas, either before or after this change?

Thanks very much and apologies for the confusion!

Best,

On Mon, May 6, 2019 at 10:33 AM Paul Davidson
 wrote:

> Thanks Arjun. I've updated the KIP using your suggestion - just a few
> slight changes.
>
> On Fri, May 3, 2019 at 4:48 PM Arjun Satish 
> wrote:
>
> > Maybe we can say something like:
> >
> > This change can have an indirect impact on resource usage by a Connector.
> > For example, systems that were enforcing quotas using a "consumer-[id]"
> > client id will now have to update their configs to enforce quota on
> > "connector-consumer-[id]". For systems that were not enforcing any
> > limitations or using default quotas, there should be no change expected.
> >
> > Best,
> >
> > On Fri, May 3, 2019 at 1:38 PM Paul Davidson
> >  wrote:
> >
> > > Thanks Arjun. I updated the KIP to mention the impact on quotas. Please
> > let
> > > me know if you think I need more detail. The paragraph I added was:
> > >
> > > Since the default client.id values are changing, this will also affect
> > any
> > > > user that has quotas defined against the current defaults. The
> current
> > > > default client.id values are of the form: consumer-{count}  and
> > > >  producer-{count}.
> > >
> > >
> > > Thanks,
> > >
> > > Paul
> > >
> > > On Thu, May 2, 2019 at 5:36 PM Arjun Satish 
> > > wrote:
> > >
> > > > Paul,
> > > >
> > > > You might want to make a note on the KIP regarding the impact on
> > quotas.
> > > >
> > > > Thanks,
> > > >
> > > > On Thu, May 2, 2019 at 9:48 AM Paul Davidson
> > > >  wrote:
> > > >
> > > > > Thanks for the votes everyone! KIP-411 is now accepted with:
> > > > >
> > > > > +3 binding votes (Randall, Jason, Gwen) , and
> > > > > +3 non-binding votes (Ryanne, Arjun, Magesh)
> > > > >
> > > > > Regards,
> > > > >
> > > > > Paul
> > > > >
> > > > > On Wed, May 1, 2019 at 10:07 PM Arjun Satish <
> arjun.sat...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > Good point, Gwen. We always set a non empty value for client id:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apache/kafka/blob/2.2.0/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L668
> > > > > > .
> > > > > >
> > > > > > But more importantly, connect client ids (for consumers, for
> > example)
> > > > > were
> > > > > > already of the form "consumer-[0-9]+", and from now on they will
> be
> > > > > > "connector-consumer-[connector_name]-[0-9]+". So, at least for
> > > connect
> > > > > > consumers/producers, we would have already been hitting the
> default
> > > > quota
> > > > > > limits and nothing changes for them. You can correct me if I'm
> > > missing
> > > > > > something, but seems like this doesn't *break* backward
> > > compatibility?
> > > > > >
> > > > > > I suppose this change only gives us a better way to manage that
> > quota
> > > > > > limit.
> > > > > >
> > > > > > Best,
> > > > > >
> > > > > > On Wed, May 1, 2019 at 9:16 PM Gwen Shapira 
> > > wrote:
> > > > > >
> > > > > > > I'm confused. Surely the default quota applies on empty client
> > IDs
> > > > too?
> > > > > > > otherwise it will be very difficult to enforce?
> > > > > > > So setting the client name will only change something if
> there's
> > > > > already
> > > > > > a
> > > > > > > quota for that client?
> > > > > > >
> > > > > > > On the other hand, I fully support switching to
> > "easy-to-wildcard"
> > > > > > template
> > > > > > > for the client id.
> > > > > > >
> > > > > > > On Wed, May 1, 2019 at 8:50 PM Arjun Satish <
> > > arjun.sat...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > I just realized that setting the client.id will now
> > > trigger
> > > > > any
> > > > > > > > quota restrictions (
> > > > > > > > https://kafka.apache.org/documentation/#design_quotasconfig)
> > on
> > > > the
> > > > > > > > broker.
> > > > > > > > It seems like this PR will enforce quota policies that will
> > > either
> > > > > > > require
> > > > > > > > admins to set limits for each task (since the chosen format
> is
> > > > > > > > connector-*-id), or fallback to some default value.
> > > > > > > >
> > > > > > > > Maybe we should mention this in the backward compatibility
> > > section
> > > > > for
> > > > > > > the
> > > > > > > > KIP. At the same time, since there is no way atm to turn off
> > this
> > > > > > > feature,
> > > > > > > > should this feature be merged and released in the upcoming
> > v2.3?
> > > > This
> > > > > > is
> > > > > > > > something the committers can comment better.
> > > > > > > >
> > > > > > > > Best,
> > > > > > > >
> > > > > > > >
> > > > > > > > On Wed, May 1, 2019 at 5:13 PM Gwen Shapira <
> g...@confluent.io
> > >
> > > > > wrote:
> > > > > > > >
> > > 

Re: [VOTE] KIP-461 Improve Replica Fetcher behavior at handling partition failure

2019-05-13 Thread Aishwarya Gune
Yeah, will keep in mind. Thank you Colin.

On Mon, May 13, 2019 at 1:45 PM Colin McCabe  wrote:

> On Mon, May 13, 2019, at 11:49, Aishwarya Gune wrote:
> > Thank you everyone for the discussion and voting for KIP-461
> > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-461+-+Improve+Replica+Fetcher+behavior+at+handling+partition+failure
> >.
> > Closing the vote on this KIP which passes with -
>
> I'm assuming that you meant to close this last week before the KIP freeze
> on Saturday.  Since all the votes were made before the KIP freeze, I'm
> going to allow this into 2.3 :)
>
> >- 5 binding (Jason, Colin, Gwen, Jun, Harsha)
> >- 1 non-binding (Dhruvil)
>
> In the future you might want to write this as +5 binding.  When I first
> glanced at this I saw -5 binding, which made me think of 5 -1 votes.
>
> Thanks again for the KIP.
>
> best,
> Colin
>
> >
> >
> >
> >
> > On Thu, May 9, 2019 at 10:53 AM Harsha Chintalapani 
> wrote:
> >
> > > Thanks for the KIP. +1(binding)
> > >
> > > Thanks,
> > > Harsha
> > > On May 9, 2019, 9:58 AM -0700, Jun Rao , wrote:
> > > > Hi, Aishwarya,
> > > >
> > > > Thanks for the KIP. +1
> > > >
> > > > Jun
> > > >
> > > > On Wed, May 8, 2019 at 4:30 PM Aishwarya Gune <
> aishwa...@confluent.io>
> > > > wrote:
> > > >
> > > > > Hi All!
> > > > >
> > > > > I would like to call for a vote on KIP-461 that would improve the
> > > behavior
> > > > > of replica fetcher in case of partition failure. The fetcher thread
> > > would
> > > > > just stop monitoring the crashed partition instead of terminating.
> > > > >
> > > > > Here's a link to the KIP -
> > > > >
> > > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-461+-+Improve+Replica+Fetcher+behavior+at+handling+partition+failure
> > > > >
> > > > > Discussion thread -
> > > > > https://www.mail-archive.com/dev@kafka.apache.org/msg97559.html
> > > > >
> > > > > --
> > > > > Thank you,
> > > > > Aishwarya
> > > > >
> > >
> >
> >
> > --
> > Thank you,
> > Aishwarya
> >
>


-- 
Thank you,
Aishwarya


Build failed in Jenkins: kafka-trunk-jdk11 #516

2019-05-13 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Throw ProducerFencedException directly but with a new instance

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 63e4f67d9ba9e08bdce705b35c5acf32dcd20633 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 63e4f67d9ba9e08bdce705b35c5acf32dcd20633
Commit message: "MINOR: Throw ProducerFencedException directly but with a new 
instance (#6717)"
 > git rev-list --no-walk 1fdc8533016e948b1d534145978252209d7612ed # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins314755528812110008.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins314755528812110008.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


Re: Kafka 2.3 release update

2019-05-13 Thread Colin McCabe
Hi all,

The KIPs freeze for the 2.3 release has now arrived.  Any KIPs we vote on after 
this will have to go in a follow-on release, not 2.3.  Thanks, everyone.

I updated the "Release features" list for 2.3 based on the KIPs that were 
accepted thus far.
Check out: 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=112820648

The next release date is the feature freeze, which is coming up this Friday.

regards,
Colin


On Fri, May 10, 2019, at 09:50, Colin McCabe wrote:
> Hi all,
> 
> Let's extend the KIP freeze deadline until tomorrow (Saturday) to give 
> the current in-progress votes time to finish.  This should be the last 
> extension.  Thanks to everyone who voted and reviewed!
> 
> Feature freeze is coming up soon.  Since we normally have it a week 
> after KIP freeze, let's extend that until next Friday (May 17).  
> Remember that if your favorite feature doesn't make it into the 2.3 
> release, there will be a new release in just a few months.
> 
> For more details about the 2.3 release, check out 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=112820648
> 
> cheers,
> Colin
>


Re: [VOTE] KIP-461 Improve Replica Fetcher behavior at handling partition failure

2019-05-13 Thread Colin McCabe
On Mon, May 13, 2019, at 11:49, Aishwarya Gune wrote:
> Thank you everyone for the discussion and voting for KIP-461
> .
> Closing the vote on this KIP which passes with -

I'm assuming that you meant to close this last week before the KIP freeze on 
Saturday.  Since all the votes were made before the KIP freeze, I'm going to 
allow this into 2.3 :)

>- 5 binding (Jason, Colin, Gwen, Jun, Harsha)
>- 1 non-binding (Dhruvil)

In the future you might want to write this as +5 binding.  When I first glanced 
at this I saw -5 binding, which made me think of 5 -1 votes.

Thanks again for the KIP.

best,
Colin

> 
> 
> 
> 
> On Thu, May 9, 2019 at 10:53 AM Harsha Chintalapani  wrote:
> 
> > Thanks for the KIP. +1(binding)
> >
> > Thanks,
> > Harsha
> > On May 9, 2019, 9:58 AM -0700, Jun Rao , wrote:
> > > Hi, Aishwarya,
> > >
> > > Thanks for the KIP. +1
> > >
> > > Jun
> > >
> > > On Wed, May 8, 2019 at 4:30 PM Aishwarya Gune 
> > > wrote:
> > >
> > > > Hi All!
> > > >
> > > > I would like to call for a vote on KIP-461 that would improve the
> > behavior
> > > > of replica fetcher in case of partition failure. The fetcher thread
> > would
> > > > just stop monitoring the crashed partition instead of terminating.
> > > >
> > > > Here's a link to the KIP -
> > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-461+-+Improve+Replica+Fetcher+behavior+at+handling+partition+failure
> > > >
> > > > Discussion thread -
> > > > https://www.mail-archive.com/dev@kafka.apache.org/msg97559.html
> > > >
> > > > --
> > > > Thank you,
> > > > Aishwarya
> > > >
> >
> 
> 
> -- 
> Thank you,
> Aishwarya
>


Jenkins build is back to normal : kafka-2.2-jdk8 #114

2019-05-13 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #3627

2019-05-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8335; Clean empty batches when sequence numbers are reused 
(#6715)

[jjkoshy] KAFKA-7321: Add a Maximum Log Compaction Lag (KIP-354) (#6009)

--
[...truncated 2.42 MB...]

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
STARTED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > 
testReloadOnStartWithNoNewRecordsPresent STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > 
testReloadOnStartWithNoNewRecordsPresent PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.converters.FloatConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.FloatConverterTest > testBytesNullToNumber 
PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.FloatConverterTest > testNullToBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > testNullToBytes PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testBytesNullToNumber STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testBytesNullToNumber PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > testNullToBytes 
STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > testNullToBytes 
PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectSchemaless STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectSchemaless PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnectNull 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnectNull 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull PASSED


Re: [VOTE] KIP-461 Improve Replica Fetcher behavior at handling partition failure

2019-05-13 Thread Aishwarya Gune
Thank you everyone for the discussion and voting for KIP-461.
Closing the vote on this KIP, which passes with:

   - 5 binding (Jason, Colin, Gwen, Jun, Harsha)
   - 1 non-binding (Dhruvil)




On Thu, May 9, 2019 at 10:53 AM Harsha Chintalapani  wrote:

> Thanks for the KIP. +1(binding)
>
> Thanks,
> Harsha
> On May 9, 2019, 9:58 AM -0700, Jun Rao , wrote:
> > Hi, Aishwarya,
> >
> > Thanks for the KIP. +1
> >
> > Jun
> >
> > On Wed, May 8, 2019 at 4:30 PM Aishwarya Gune 
> > wrote:
> >
> > > Hi All!
> > >
> > > I would like to call for a vote on KIP-461 that would improve the
> behavior
> > > of replica fetcher in case of partition failure. The fetcher thread
> would
> > > just stop monitoring the crashed partition instead of terminating.
> > >
> > > Here's a link to the KIP -
> > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-461+-+Improve+Replica+Fetcher+behavior+at+handling+partition+failure
> > >
> > > Discussion thread -
> > > https://www.mail-archive.com/dev@kafka.apache.org/msg97559.html
> > >
> > > --
> > > Thank you,
> > > Aishwarya
> > >
>


-- 
Thank you,
Aishwarya


[jira] [Created] (KAFKA-8360) Docs do not mention RequestQueueSize JMX metric

2019-05-13 Thread Charles Francis Larrieu Casias (JIRA)
Charles Francis Larrieu Casias created KAFKA-8360:
-

 Summary: Docs do not mention RequestQueueSize JMX metric
 Key: KAFKA-8360
 URL: https://issues.apache.org/jira/browse/KAFKA-8360
 Project: Kafka
  Issue Type: Improvement
  Components: documentation, metrics, network
Reporter: Charles Francis Larrieu Casias


In the [monitoring 
documentation|https://kafka.apache.org/documentation/#monitoring] there is 
no mention of the `kafka.network:type=RequestChannel,name=RequestQueueSize` JMX 
metric. This is an important metric because it can indicate that there are too 
many requests in the queue, and suggests either increasing `queued.max.requests` 
(along with, perhaps, memory) or increasing `num.io.threads`.
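As a rough illustration of the two knobs mentioned above, they live in the broker's server.properties (the values below are hypothetical, not recommendations; the metric itself can be sampled with `kafka.tools.JmxTool --object-name kafka.network:type=RequestChannel,name=RequestQueueSize`):

```properties
# Illustrative values only - tune against the observed RequestQueueSize.
# Capacity of the request queue feeding the I/O threads (default 500).
queued.max.requests=1000
# Number of request handler (I/O) threads (default 8).
num.io.threads=16
```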



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Please give me permission to create KIP,Thanks!

2019-05-13 Thread Bill Bejeck
Responding here on the dev list as well.

Thanks for your interest in contributing!

You should be all set now.

Thanks,
Bill

On Mon, May 13, 2019 at 12:34 PM hu xiaohua 
wrote:

>
> I want to create my KIP, but I don't have permission. Please give me
> permission to create a KIP. My Wiki ID is Flowermin. Thanks!
>


[jira] [Created] (KAFKA-8359) Reconsider default for leader imbalance percentage

2019-05-13 Thread Dhruvil Shah (JIRA)
Dhruvil Shah created KAFKA-8359:
---

 Summary: Reconsider default for leader imbalance percentage
 Key: KAFKA-8359
 URL: https://issues.apache.org/jira/browse/KAFKA-8359
 Project: Kafka
  Issue Type: Improvement
Reporter: Dhruvil Shah


By default, the leader imbalance ratio is 10%. This means that the controller 
won't trigger preferred leader election for a broker unless the ratio between 
the number of partitions the broker currently leads and the number of 
partitions it is the preferred leader of is off by more than 10%. The problem 
is that when a broker is catching up after a restart, the smallest topics tend 
to catch up first and the largest ones later, so the remaining difference under 
10% may not be proportional to the broker's load. To keep better balance in the 
cluster, we should consider setting `leader.imbalance.per.broker.percentage=0` 
by default so that the preferred leaders are always elected.
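As a back-of-the-envelope sketch of the per-broker check described above (the helper names are hypothetical; the controller's real logic differs in detail):

```python
def imbalance_ratio(preferred, current):
    """Fraction of a broker's preferred partitions it does not currently lead.

    preferred: set of partitions for which this broker is the preferred leader
    current:   set of partitions this broker currently leads
    (Hypothetical helper; a rough model of the controller's per-broker check.)
    """
    if not preferred:
        return 0.0
    return len(preferred - current) / len(preferred)

def should_trigger_election(preferred, current, threshold_pct=10):
    """True when the imbalance exceeds leader.imbalance.per.broker.percentage."""
    return imbalance_ratio(preferred, current) * 100 > threshold_pct

# A broker leading 91 of its 100 preferred partitions is 9% imbalanced:
# under the 10% default no election happens; with threshold 0 it always does.
preferred = {f"p{i}" for i in range(100)}
current = {f"p{i}" for i in range(91)}
print(should_trigger_election(preferred, current, 10))  # False
print(should_trigger_election(preferred, current, 0))   # True
```

This makes the ticket's point concrete: the 9% residue may all sit on the largest, still-recovering partitions, so a percentage threshold is a poor proxy for load.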



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Please give me permission to create KIP,Thanks!

2019-05-13 Thread hu xiaohua

I want to create my KIP, but I don't have permission. Please give me permission 
to create a KIP. My Wiki ID is Flowermin. Thanks!


Build failed in Jenkins: kafka-trunk-jdk11 #515

2019-05-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8335; Clean empty batches when sequence numbers are reused 
(#6715)

[jjkoshy] KAFKA-7321: Add a Maximum Log Compaction Lag (KIP-354) (#6009)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 1fdc8533016e948b1d534145978252209d7612ed 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1fdc8533016e948b1d534145978252209d7612ed
Commit message: "KAFKA-7321: Add a Maximum Log Compaction Lag (KIP-354) (#6009)"
 > git rev-list --no-walk 8a237f599afa539868a138b5a2534dbf884cb4ec # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins2328944608483414422.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins2328944608483414422.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


Re: [VOTE] 2.2.1 RC0

2019-05-13 Thread Vahid Hashemian
Hi Jason,

Thanks for quick resolution on this ticket.
I'll work on generating RC1.

Thanks!
--Vahid

On Mon, May 13, 2019 at 9:02 AM Jason Gustafson  wrote:

> Hi Vahid,
>
> I merged the patch for KAFKA-8335 into 2.2.
>
> Thanks,
> Jason
>
> On Mon, May 13, 2019 at 7:17 AM Vahid Hashemian  >
> wrote:
>
> > Thanks Patrik for the reference.
> >
> > That JIRA seems to be covering the exact same issue with multiple
> releases,
> > and is not marked as a blocker at this point.
> >
> > --Vahid
> >
> > On Mon, May 13, 2019 at 1:22 AM Patrik Kleindl 
> wrote:
> >
> > > Hi
> > > This might be related to
> > https://issues.apache.org/jira/browse/KAFKA-5998
> > > at least the behaviour is described there.
> > > Regards
> > > Patrik
> > >
> > > > Am 13.05.2019 um 00:23 schrieb Vahid Hashemian <
> > > vahid.hashem...@gmail.com>:
> > > >
> > > > Hi Jonathan,
> > > >
> > > > Thanks for reporting the issue.
> > > > Do you know if it is something that's introduced since 2.2? Did you
> > run a
> > > > similar test with 2.2?
> > > >
> > > > Thanks,
> > > > --Vahid
> > > >
> > > > On Sun, May 12, 2019 at 2:22 PM Jonathan Santilli <
> > > > jonathansanti...@gmail.com> wrote:
> > > >
> > > >> Hello Vahid,
> > > >>
> > > >>
> > > >> am testing one of our Kafka Stream Apps with the 2.2.1-rc, after few
> > > >> minutes, I see this WARN:
> > > >>
> > > >>
> > > >> 2019-05-09 13:14:37,025 WARN  [test-app-id-dc27624a-8e02-
> > > >> 4031-963b-7596a8a77097-StreamThread-1]
> > internals.ProcessorStateManager (
> > > >> ProcessorStateManager.java:349) - task [0_0] Failed to write offset
> > > >> checkpoint file to
> [/tmp/kafka-stream-app/test-app-id/0_0/.checkpoint]
> > > >>
> > > >> java.io.FileNotFoundException:
> > > >> /tmp/kafka-stream-app/test-app-id/0_0/.checkpoint.tmp (No such file
> or
> > > >> directory)
> > > >>
> > > >> at java.io.FileOutputStream.open0(Native Method) ~[?:1.8.0_191]
> > > >>
> > > >> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> > > ~[?:1.8.0_191]
> > > >>
> > > >> at java.io.FileOutputStream.(FileOutputStream.java:213)
> > > >> ~[?:1.8.0_191]
> > > >>
> > > >> at java.io.FileOutputStream.(FileOutputStream.java:162)
> > > >> ~[?:1.8.0_191]
> > > >>
> > > >> at org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(
> > > >> OffsetCheckpoint.java:79) ~[kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at
> > > >>
> > > >>
> > >
> >
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(
> > > >> ProcessorStateManager.java:347) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at org.apache.kafka.streams.processor.internals.StreamTask.commit(
> > > >> StreamTask.java:476) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at org.apache.kafka.streams.processor.internals.StreamTask.suspend(
> > > >> StreamTask.java:598) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at org.apache.kafka.streams.processor.internals.StreamTask.close(
> > > >> StreamTask.java:724) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at org.apache.kafka.streams.processor.internals.AssignedTasks.close(
> > > >> AssignedTasks.java:337) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at
> org.apache.kafka.streams.processor.internals.TaskManager.shutdown(
> > > >> TaskManager.java:267) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at
> > > >>
> > >
> >
> org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(
> > > >> StreamThread.java:1208) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> at org.apache.kafka.streams.processor.internals.StreamThread.run(
> > > >> StreamThread.java:785) [kafka-streams-2.2.1.jar:?]
> > > >>
> > > >> Checking the system, in fact, the folder does not exist, but, others
> > > were
> > > >> created:
> > > >>
> > > >> # ls /tmp/kafka-stream-app/test-app-id/
> > > >> # 1_0 1_1 1_2
> > > >>
> > > >> After restarting the App, the same WARN shows-up but in this case,
> the
> > > >> folders were created but not the .checkpoint.tmp file:
> > > >>
> > > >> # ls /tmp/kafka-stream-app/test-app-id/
> > > >> # 0_0 0_1 0_2 1_0 1_1 1_2
> > > >>
> > > >> Am just reporting this because I found it strange/suspicious.
> > > >>
> > > >>
> > > >> Cheers!
> > > >> --
> > > >> Jonathan
> > > >>
> > > >>
> > > >>
> > > >> On Wed, May 8, 2019 at 9:26 PM Vahid Hashemian <
> > > vahid.hashem...@gmail.com>
> > > >> wrote:
> > > >>
> > > >>> Hello Kafka users, developers and client-developers,
> > > >>>
> > > >>> This is the first candidate for release of Apache Kafka 2.2.1,
> which
> > > >>> includes many bug fixes for Apache Kafka 2.2.
> > > >>>
> > > >>> Release notes for the 2.2.1 release:
> > > >>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/RELEASE_NOTES.html
> > > >>>
> > > >>> *** Please download, test and vote by Monday, May 13, 6:00 pm PT.
> > > >>>
> > > >>> Kafka's KEYS file containing PGP keys we use to sign the release:
> > > >>> https://kafka.apache.org/KEYS
> > > >>>
> > > >>> * Release artifacts to be voted upon (source and binary):
> > > >>> 

Re: [VOTE] 2.2.1 RC0

2019-05-13 Thread Jason Gustafson
Hi Vahid,

I merged the patch for KAFKA-8335 into 2.2.

Thanks,
Jason

On Mon, May 13, 2019 at 7:17 AM Vahid Hashemian 
wrote:

> Thanks Patrik for the reference.
>
> That JIRA seems to be covering the exact same issue with multiple releases,
> and is not marked as a blocker at this point.
>
> --Vahid
>
> On Mon, May 13, 2019 at 1:22 AM Patrik Kleindl  wrote:
>
> > Hi
> > This might be related to
> https://issues.apache.org/jira/browse/KAFKA-5998
> > at least the behaviour is described there.
> > Regards
> > Patrik
> >
> > > Am 13.05.2019 um 00:23 schrieb Vahid Hashemian <
> > vahid.hashem...@gmail.com>:
> > >
> > > Hi Jonathan,
> > >
> > > Thanks for reporting the issue.
> > > Do you know if it is something that's introduced since 2.2? Did you
> run a
> > > similar test with 2.2?
> > >
> > > Thanks,
> > > --Vahid
> > >
> > > On Sun, May 12, 2019 at 2:22 PM Jonathan Santilli <
> > > jonathansanti...@gmail.com> wrote:
> > >
> > >> Hello Vahid,
> > >>
> > >>
> > >> am testing one of our Kafka Stream Apps with the 2.2.1-rc, after few
> > >> minutes, I see this WARN:
> > >>
> > >>
> > >> 2019-05-09 13:14:37,025 WARN  [test-app-id-dc27624a-8e02-
> > >> 4031-963b-7596a8a77097-StreamThread-1]
> internals.ProcessorStateManager (
> > >> ProcessorStateManager.java:349) - task [0_0] Failed to write offset
> > >> checkpoint file to [/tmp/kafka-stream-app/test-app-id/0_0/.checkpoint]
> > >>
> > >> java.io.FileNotFoundException:
> > >> /tmp/kafka-stream-app/test-app-id/0_0/.checkpoint.tmp (No such file or
> > >> directory)
> > >>
> > >> at java.io.FileOutputStream.open0(Native Method) ~[?:1.8.0_191]
> > >>
> > >> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> > ~[?:1.8.0_191]
> > >>
> > >> at java.io.FileOutputStream.(FileOutputStream.java:213)
> > >> ~[?:1.8.0_191]
> > >>
> > >> at java.io.FileOutputStream.(FileOutputStream.java:162)
> > >> ~[?:1.8.0_191]
> > >>
> > >> at org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(
> > >> OffsetCheckpoint.java:79) ~[kafka-streams-2.2.1.jar:?]
> > >>
> > >> at
> > >>
> > >>
> >
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(
> > >> ProcessorStateManager.java:347) [kafka-streams-2.2.1.jar:?]
> > >>
> > >> at org.apache.kafka.streams.processor.internals.StreamTask.commit(
> > >> StreamTask.java:476) [kafka-streams-2.2.1.jar:?]
> > >>
> > >> at org.apache.kafka.streams.processor.internals.StreamTask.suspend(
> > >> StreamTask.java:598) [kafka-streams-2.2.1.jar:?]
> > >>
> > >> at org.apache.kafka.streams.processor.internals.StreamTask.close(
> > >> StreamTask.java:724) [kafka-streams-2.2.1.jar:?]
> > >>
> > >> at org.apache.kafka.streams.processor.internals.AssignedTasks.close(
> > >> AssignedTasks.java:337) [kafka-streams-2.2.1.jar:?]
> > >>
> > >> at org.apache.kafka.streams.processor.internals.TaskManager.shutdown(
> > >> TaskManager.java:267) [kafka-streams-2.2.1.jar:?]
> > >>
> > >> at
> > >>
> >
> org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(
> > >> StreamThread.java:1208) [kafka-streams-2.2.1.jar:?]
> > >>
> > >> at org.apache.kafka.streams.processor.internals.StreamThread.run(
> > >> StreamThread.java:785) [kafka-streams-2.2.1.jar:?]
> > >>
> > >> Checking the system, in fact, the folder does not exist, but, others
> > were
> > >> created:
> > >>
> > >> # ls /tmp/kafka-stream-app/test-app-id/
> > >> # 1_0 1_1 1_2
> > >>
> > >> After restarting the App, the same WARN shows-up but in this case, the
> > >> folders were created but not the .checkpoint.tmp file:
> > >>
> > >> # ls /tmp/kafka-stream-app/test-app-id/
> > >> # 0_0 0_1 0_2 1_0 1_1 1_2
> > >>
> > >> Am just reporting this because I found it strange/suspicious.
> > >>
> > >>
> > >> Cheers!
> > >> --
> > >> Jonathan
> > >>
> > >>
> > >>
> > >> On Wed, May 8, 2019 at 9:26 PM Vahid Hashemian <
> > vahid.hashem...@gmail.com>
> > >> wrote:
> > >>
> > >>> Hello Kafka users, developers and client-developers,
> > >>>
> > >>> This is the first candidate for release of Apache Kafka 2.2.1, which
> > >>> includes many bug fixes for Apache Kafka 2.2.
> > >>>
> > >>> Release notes for the 2.2.1 release:
> > >>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/RELEASE_NOTES.html
> > >>>
> > >>> *** Please download, test and vote by Monday, May 13, 6:00 pm PT.
> > >>>
> > >>> Kafka's KEYS file containing PGP keys we use to sign the release:
> > >>> https://kafka.apache.org/KEYS
> > >>>
> > >>> * Release artifacts to be voted upon (source and binary):
> > >>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/
> > >>>
> > >>> * Maven artifacts to be voted upon:
> > >>>
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >>>
> > >>> * Javadoc:
> > >>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/javadoc/
> > >>>
> > >>> * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> > >>> https://github.com/apache/kafka/releases/tag/2.2.1-rc0
> > >>>
> > >>> * Documentation:
> > >>> https://kafka.apache.org/22/documentation.html

[jira] [Resolved] (KAFKA-8335) Log cleaner skips Transactional mark and batch record, causing unlimited growth of __consumer_offsets

2019-05-13 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8335.

   Resolution: Fixed
Fix Version/s: 2.2.1
   2.1.2
   2.0.2

> Log cleaner skips Transactional mark and batch record, causing unlimited 
> growth of __consumer_offsets
> -
>
> Key: KAFKA-8335
> URL: https://issues.apache.org/jira/browse/KAFKA-8335
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Boquan Tang
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.1
>
> Attachments: seg_april_25.zip, segment.zip
>
>
> My colleague Weichu already sent a mail to the Kafka user mailing list 
> regarding this issue, but we think it's worth having a ticket to track it.
> We are using Kafka Streams with exactly-once enabled on a Kafka cluster for
> a while.
> Recently we found that the size of __consumer_offsets partitions grew huge.
> Some partition went over 30G. This caused Kafka to take quite long to load
> "__consumer_offsets" topic on startup (it loads the topic in order to
> become group coordinator).
> We dumped the __consumer_offsets segments and found that while normal
> offset commits are nicely compacted, transaction records (COMMIT, etc) are
> all preserved. Looks like that since these messages don't have a key, the
> LogCleaner is keeping them all:
> --
> $ bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files
> /003484332061.log --key-decoder-class
> kafka.serializer.StringDecoder 2>/dev/null | cat -v | head
> Dumping 003484332061.log
> Starting offset: 3484332061
> offset: 3484332089 position: 549 CreateTime: 1556003706952 isvalid: true
> keysize: 4 valuesize: 6 magic: 2 compresscodec: NONE producerId: 1006
> producerEpoch: 2530 sequence: -1 isTransactional: true headerKeys: []
> endTxnMarker: COMMIT coordinatorEpoch: 81
> offset: 3484332090 position: 627 CreateTime: 1556003706952 isvalid: true
> keysize: 4 valuesize: 6 magic: 2 compresscodec: NONE producerId: 4005
> producerEpoch: 2520 sequence: -1 isTransactional: true headerKeys: []
> endTxnMarker: COMMIT coordinatorEpoch: 84
> ...
> --
> Streams is doing transaction commits per 100ms (commit.interval.ms=100 when
> exactly-once) so the __consumer_offsets is growing really fast.
> Is this (to keep all transactions) by design, or is that a bug for
> LogCleaner?  What would be the way to clean up the topic?
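A throwaway sketch for tallying the preserved markers in a DumpLogSegments dump like the one quoted above (the field name `endTxnMarker:` follows the quoted output; this is not a supported parser):

```python
import re
from collections import Counter

def txn_marker_counts(dump_text):
    """Count end-transaction markers (COMMIT/ABORT) per type in a
    DumpLogSegments text dump. A quick diagnostic sketch only."""
    counts = Counter()
    for m in re.finditer(r"endTxnMarker: (\w+)", dump_text):
        counts[m.group(1)] += 1
    return counts

sample = """\
offset: 3484332089 position: 549 isTransactional: true endTxnMarker: COMMIT coordinatorEpoch: 81
offset: 3484332090 position: 627 isTransactional: true endTxnMarker: COMMIT coordinatorEpoch: 84
"""
print(txn_marker_counts(sample))  # Counter({'COMMIT': 2})
```

On a healthy compacted partition the marker count should stay bounded; a count growing with segment age is the symptom reported here.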



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8358) KafkaConsumer.endOffsets should be able to also return end offsets while not ignoring control records

2019-05-13 Thread Natan Silnitsky (JIRA)
Natan Silnitsky created KAFKA-8358:
--

 Summary: KafkaConsumer.endOffsets should be able to also return 
end offsets while not ignoring control records
 Key: KAFKA-8358
 URL: https://issues.apache.org/jira/browse/KAFKA-8358
 Project: Kafka
  Issue Type: Improvement
Reporter: Natan Silnitsky


We have a use case where we have a wrapper on top of {{KafkaConsumer}} for 
compact logs.
In order to know that a user can get "new" values for a key in the compact log, 
on init or on rebalance, we need to block until all "old" values have been read.

We wanted to use {{KafkaConsumer.endOffsets}} to help us find out where the 
"old" values end.
Once all "old" values arrive from {{KafkaConsumer.poll}}, we can release the 
blocking on getting new values.

But it seems that [control 
records|https://github.com/apache/kafka/blob/c09e25fac2aaea61af892ae3e5273679a4bdbc7d/clients/src/main/java/org/apache/kafka/common/record/DefaultRecordBatch.java#L128]
 are not received by {{KafkaConsumer.poll}} but are taken into account for 
{{KafkaConsumer.endOffsets}}.

So the feature request is for {{KafkaConsumer.endOffsets}} to have a flag to 
ignore control records, the same way that {{KafkaConsumer.poll}} ignores them.
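A minimal sketch of the blocking described above, with the consumer simulated by a plain callable (all names here are hypothetical; a real implementation would use KafkaConsumer.endOffsets and poll):

```python
def drain_until_end(end_offsets, poll_once):
    """Poll until every partition's last seen offset reaches its end offset.

    end_offsets: {partition: end_offset} snapshot taken at init/rebalance
    poll_once:   callable returning a batch of (partition, offset, value)
    Returns the "old" values consumed before the watermark was reached.
    """
    seen = {}
    old_values = []
    def caught_up():
        return all(seen.get(p, -1) >= end - 1 for p, end in end_offsets.items())
    while not caught_up():
        for part, offset, value in poll_once():
            seen[part] = offset
            old_values.append(value)
    return old_values

# Simulated partition with 3 old records; endOffsets would report 3.
batches = iter([[("tp0", 0, "a"), ("tp0", 1, "b")], [("tp0", 2, "c")]])
print(drain_until_end({"tp0": 3}, lambda: next(batches)))  # ['a', 'b', 'c']
```

If offset 2 were a control record, poll would never deliver it and this loop would hang at offset 1, which is exactly the mismatch the ticket describes; comparing the consumer's position() against endOffsets, rather than the last delivered offset, is the usual workaround.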



(From a quick review of the code, it seems that 
{{LeaderEpochFile}}.[assign|https://github.com/apache/kafka/blob/c09e25fac2aaea61af892ae3e5273679a4bdbc7d/core/src/main/scala/kafka/server/epoch/LeaderEpochFileCache.scala#L51]
 can be given the flag isControl from 
[batch.isControlBatch|https://github.com/apache/kafka/blob/c09e25fac2aaea61af892ae3e5273679a4bdbc7d/clients/src/main/java/org/apache/kafka/common/record/RecordBatch.java#L239]

But I may be wrong in my understanding there...)

CC:
[~berman7] [~berman]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8357) OOM on HPUX

2019-05-13 Thread Shamil Sabirov (JIRA)
Shamil Sabirov created KAFKA-8357:
-

 Summary: OOM on HPUX
 Key: KAFKA-8357
 URL: https://issues.apache.org/jira/browse/KAFKA-8357
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.2.0
 Environment: HP-UX B.11.31 U ia64
Reporter: Shamil Sabirov
 Attachments: server.log.2019-05-10-11

We have troubles similar to KAFKA-5962.

That issue was resolved by updating the docs for Linux, but I have no idea how 
we can fix this for the HP-UX environment.

Any ideas?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] 2.2.1 RC0

2019-05-13 Thread Vahid Hashemian
Thanks Patrik for the reference.

That JIRA seems to be covering the exact same issue with multiple releases,
and is not marked as a blocker at this point.

--Vahid

On Mon, May 13, 2019 at 1:22 AM Patrik Kleindl  wrote:

> Hi
> This might be related to https://issues.apache.org/jira/browse/KAFKA-5998
> at least the behaviour is described there.
> Regards
> Patrik
>
> > Am 13.05.2019 um 00:23 schrieb Vahid Hashemian <
> vahid.hashem...@gmail.com>:
> >
> > Hi Jonathan,
> >
> > Thanks for reporting the issue.
> > Do you know if it is something that's introduced since 2.2? Did you run a
> > similar test with 2.2?
> >
> > Thanks,
> > --Vahid
> >
> > On Sun, May 12, 2019 at 2:22 PM Jonathan Santilli <
> > jonathansanti...@gmail.com> wrote:
> >
> >> Hello Vahid,
> >>
> >>
> >> am testing one of our Kafka Stream Apps with the 2.2.1-rc, after few
> >> minutes, I see this WARN:
> >>
> >>
> >> 2019-05-09 13:14:37,025 WARN  [test-app-id-dc27624a-8e02-
> >> 4031-963b-7596a8a77097-StreamThread-1] internals.ProcessorStateManager (
> >> ProcessorStateManager.java:349) - task [0_0] Failed to write offset
> >> checkpoint file to [/tmp/kafka-stream-app/test-app-id/0_0/.checkpoint]
> >>
> >> java.io.FileNotFoundException:
> >> /tmp/kafka-stream-app/test-app-id/0_0/.checkpoint.tmp (No such file or
> >> directory)
> >>
> >> at java.io.FileOutputStream.open0(Native Method) ~[?:1.8.0_191]
> >>
> >> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> ~[?:1.8.0_191]
> >>
> >> at java.io.FileOutputStream.(FileOutputStream.java:213)
> >> ~[?:1.8.0_191]
> >>
> >> at java.io.FileOutputStream.(FileOutputStream.java:162)
> >> ~[?:1.8.0_191]
> >>
> >> at org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(
> >> OffsetCheckpoint.java:79) ~[kafka-streams-2.2.1.jar:?]
> >>
> >> at
> >>
> >>
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(
> >> ProcessorStateManager.java:347) [kafka-streams-2.2.1.jar:?]
> >>
> >> at org.apache.kafka.streams.processor.internals.StreamTask.commit(
> >> StreamTask.java:476) [kafka-streams-2.2.1.jar:?]
> >>
> >> at org.apache.kafka.streams.processor.internals.StreamTask.suspend(
> >> StreamTask.java:598) [kafka-streams-2.2.1.jar:?]
> >>
> >> at org.apache.kafka.streams.processor.internals.StreamTask.close(
> >> StreamTask.java:724) [kafka-streams-2.2.1.jar:?]
> >>
> >> at org.apache.kafka.streams.processor.internals.AssignedTasks.close(
> >> AssignedTasks.java:337) [kafka-streams-2.2.1.jar:?]
> >>
> >> at org.apache.kafka.streams.processor.internals.TaskManager.shutdown(
> >> TaskManager.java:267) [kafka-streams-2.2.1.jar:?]
> >>
> >> at
> >>
> org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(
> >> StreamThread.java:1208) [kafka-streams-2.2.1.jar:?]
> >>
> >> at org.apache.kafka.streams.processor.internals.StreamThread.run(
> >> StreamThread.java:785) [kafka-streams-2.2.1.jar:?]
> >>
> >> Checking the system, in fact, the folder does not exist, but, others
> were
> >> created:
> >>
> >> # ls /tmp/kafka-stream-app/test-app-id/
> >> # 1_0 1_1 1_2
> >>
> >> After restarting the App, the same WARN shows-up but in this case, the
> >> folders were created but not the .checkpoint.tmp file:
> >>
> >> # ls /tmp/kafka-stream-app/test-app-id/
> >> # 0_0 0_1 0_2 1_0 1_1 1_2
> >>
> >> Am just reporting this because I found it strange/suspicious.
> >>
> >>
> >> Cheers!
> >> --
> >> Jonathan
> >>
> >>
> >>
> >> On Wed, May 8, 2019 at 9:26 PM Vahid Hashemian <
> vahid.hashem...@gmail.com>
> >> wrote:
> >>
> >>> Hello Kafka users, developers and client-developers,
> >>>
> >>> This is the first candidate for release of Apache Kafka 2.2.1, which
> >>> includes many bug fixes for Apache Kafka 2.2.
> >>>
> >>> Release notes for the 2.2.1 release:
> >>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/RELEASE_NOTES.html
> >>>
> >>> *** Please download, test and vote by Monday, May 13, 6:00 pm PT.
> >>>
> >>> Kafka's KEYS file containing PGP keys we use to sign the release:
> >>> https://kafka.apache.org/KEYS
> >>>
> >>> * Release artifacts to be voted upon (source and binary):
> >>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/
> >>>
> >>> * Maven artifacts to be voted upon:
> >>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >>>
> >>> * Javadoc:
> >>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/javadoc/
> >>>
> >>> * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> >>> https://github.com/apache/kafka/releases/tag/2.2.1-rc0
> >>>
> >>> * Documentation:
> >>> https://kafka.apache.org/22/documentation.html
> >>>
> >>> * Protocol:
> >>> https://kafka.apache.org/22/protocol.html
> >>>
> >>> * Successful Jenkins builds for the 2.2 branch:
> >>> Unit/integration tests:
> >> https://builds.apache.org/job/kafka-2.2-jdk8/106/
> >>>
> >>> Thanks,
> >>> --Vahid
> >>>
> >>
> >>
> >> --
> >> Santilli Jonathan
> >>
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
>


-- 

Thanks!
--Vahid


Re: [VOTE] 2.2.1 RC0

2019-05-13 Thread Patrik Kleindl
Hi
This might be related to https://issues.apache.org/jira/browse/KAFKA-5998
at least the behaviour is described there.
Regards
Patrik 

> Am 13.05.2019 um 00:23 schrieb Vahid Hashemian :
> 
> Hi Jonathan,
> 
> Thanks for reporting the issue.
> Do you know if it is something that's introduced since 2.2? Did you run a
> similar test with 2.2?
> 
> Thanks,
> --Vahid
> 
> On Sun, May 12, 2019 at 2:22 PM Jonathan Santilli <
> jonathansanti...@gmail.com> wrote:
> 
>> Hello Vahid,
>> 
>> 
>> am testing one of our Kafka Stream Apps with the 2.2.1-rc, after few
>> minutes, I see this WARN:
>> 
>> 
>> 2019-05-09 13:14:37,025 WARN  [test-app-id-dc27624a-8e02-
>> 4031-963b-7596a8a77097-StreamThread-1] internals.ProcessorStateManager (
>> ProcessorStateManager.java:349) - task [0_0] Failed to write offset
>> checkpoint file to [/tmp/kafka-stream-app/test-app-id/0_0/.checkpoint]
>> 
>> java.io.FileNotFoundException:
>> /tmp/kafka-stream-app/test-app-id/0_0/.checkpoint.tmp (No such file or
>> directory)
>> 
>> at java.io.FileOutputStream.open0(Native Method) ~[?:1.8.0_191]
>> 
>> at java.io.FileOutputStream.open(FileOutputStream.java:270) ~[?:1.8.0_191]
>> 
>> at java.io.FileOutputStream.(FileOutputStream.java:213)
>> ~[?:1.8.0_191]
>> 
>> at java.io.FileOutputStream.(FileOutputStream.java:162)
>> ~[?:1.8.0_191]
>> 
>> at org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(
>> OffsetCheckpoint.java:79) ~[kafka-streams-2.2.1.jar:?]
>> 
>> at
>> 
>> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(
>> ProcessorStateManager.java:347) [kafka-streams-2.2.1.jar:?]
>> 
>> at org.apache.kafka.streams.processor.internals.StreamTask.commit(
>> StreamTask.java:476) [kafka-streams-2.2.1.jar:?]
>> 
>> at org.apache.kafka.streams.processor.internals.StreamTask.suspend(
>> StreamTask.java:598) [kafka-streams-2.2.1.jar:?]
>> 
>> at org.apache.kafka.streams.processor.internals.StreamTask.close(
>> StreamTask.java:724) [kafka-streams-2.2.1.jar:?]
>> 
>> at org.apache.kafka.streams.processor.internals.AssignedTasks.close(
>> AssignedTasks.java:337) [kafka-streams-2.2.1.jar:?]
>> 
>> at org.apache.kafka.streams.processor.internals.TaskManager.shutdown(
>> TaskManager.java:267) [kafka-streams-2.2.1.jar:?]
>> 
>> at
>> org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(
>> StreamThread.java:1208) [kafka-streams-2.2.1.jar:?]
>> 
>> at org.apache.kafka.streams.processor.internals.StreamThread.run(
>> StreamThread.java:785) [kafka-streams-2.2.1.jar:?]
>> 
>> Checking the system, the folder indeed does not exist, but others were
>> created:
>> 
>> # ls /tmp/kafka-stream-app/test-app-id/
>> # 1_0 1_1 1_2
>> 
>> After restarting the app, the same WARN shows up, but in this case the
>> folders were created, though not the .checkpoint.tmp file:
>> 
>> # ls /tmp/kafka-stream-app/test-app-id/
>> # 0_0 0_1 0_2 1_0 1_1 1_2
>> 
>> I am just reporting this because I found it strange/suspicious.
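>> For context on the WARN above: OffsetCheckpoint writes a `.checkpoint.tmp`
>> file next to the target and then renames it into place, so the
>> FileNotFoundException from `new FileOutputStream(...)` is the symptom of
>> the task's state directory having vanished underneath the app (e.g. a /tmp
>> cleaner or the state-directory cleanup removing it). A minimal sketch of
>> that write-temp-then-rename pattern (class, method, and file format here
>> are illustrative, not Kafka's actual code):

```java
import java.io.*;
import java.nio.file.*;
import java.util.Map;

// Sketch of the write-temp-then-rename pattern used for checkpoint files.
// Names and the line format are illustrative only.
public class CheckpointSketch {
    public static void write(File checkpointFile, Map<String, Long> offsets)
            throws IOException {
        File tmp = new File(checkpointFile.getAbsolutePath() + ".tmp");
        // new FileOutputStream(tmp) throws FileNotFoundException when the
        // parent directory (the task state dir) has been deleted, which
        // matches the stack trace above.
        try (FileOutputStream fos = new FileOutputStream(tmp);
             BufferedWriter w = new BufferedWriter(new OutputStreamWriter(fos))) {
            for (Map.Entry<String, Long> e : offsets.entrySet()) {
                w.write(e.getKey() + " " + e.getValue());
                w.newLine();
            }
            w.flush();
            fos.getFD().sync(); // make the temp file durable before renaming
        }
        // Atomic rename: readers only ever see a complete checkpoint file.
        Files.move(tmp.toPath(), checkpointFile.toPath(),
                   StandardCopyOption.REPLACE_EXISTING,
                   StandardCopyOption.ATOMIC_MOVE);
    }
}
```

>> The rename is what keeps a crash mid-write from leaving a truncated
>> checkpoint behind; it also explains why only the `.tmp` path appears in
>> the exception.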
>> 
>> 
>> Cheers!
>> --
>> Jonathan
>> 
>> 
>> 
>> On Wed, May 8, 2019 at 9:26 PM Vahid Hashemian 
>> wrote:
>> 
>>> Hello Kafka users, developers and client-developers,
>>> 
>>> This is the first candidate for release of Apache Kafka 2.2.1, which
>>> includes many bug fixes for Apache Kafka 2.2.
>>> 
>>> Release notes for the 2.2.1 release:
>>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/RELEASE_NOTES.html
>>> 
>>> *** Please download, test and vote by Monday, May 13, 6:00 pm PT.
>>> 
>>> Kafka's KEYS file containing PGP keys we use to sign the release:
>>> https://kafka.apache.org/KEYS
>>> 
>>> * Release artifacts to be voted upon (source and binary):
>>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/
>>> 
>>> * Maven artifacts to be voted upon:
>>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>>> 
>>> * Javadoc:
>>> https://home.apache.org/~vahid/kafka-2.2.1-rc0/javadoc/
>>> 
>>> * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
>>> https://github.com/apache/kafka/releases/tag/2.2.1-rc0
>>> 
>>> * Documentation:
>>> https://kafka.apache.org/22/documentation.html
>>> 
>>> * Protocol:
>>> https://kafka.apache.org/22/protocol.html
>>> 
>>> * Successful Jenkins builds for the 2.2 branch:
>>> Unit/integration tests:
>> https://builds.apache.org/job/kafka-2.2-jdk8/106/
>>> 
>>> Thanks,
>>> --Vahid
>>> 
>> 
>> 
>> --
>> Santilli Jonathan
>> 
> 
> 
> -- 
> 
> Thanks!
> --Vahid