Build failed in Jenkins: kafka-trunk-jdk11 #1599

2020-06-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10173: Fix suppress changelog binary schema compatibility (#8905)


--
[...truncated 3.18 MB...]

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task 

Build failed in Jenkins: kafka-trunk-jdk8 #4671

2020-06-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10173: Fix suppress changelog binary schema compatibility (#8905)


--
[...truncated 3.16 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task 

[DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-06-26 Thread Navinder Brar
Hi,

KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-406%3A+GlobalStreamThread+should+honor+custom+reset+policy

I have taken over this KIP since it has been dormant for a long time. It looks 
important for use cases with large global data, where rebuilding global stores 
from scratch would be overkill in case of an InvalidOffsetException.

We want to give users control over the reset policy (as we already do in 
StreamThread) when they hit invalid offsets. I have not yet decided whether to 
restrict this option to the same reset policy used by StreamThread (via the 
auto.offset.reset config) or to add a separate reset config specifically for 
global stores, "global.auto.offset.reset", which would give users more control 
to choose separate policies for the global and stream threads.
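For illustration, a minimal sketch of the two alternatives; the dedicated 
"global.auto.offset.reset" key is only the name floated above and stays 
hypothetical until the KIP settles it:
{code:java}
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class GlobalResetPolicySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // Alternative 1: global stores honor the same policy as stream threads.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        // Alternative 2 (hypothetical key): a separate policy only for global stores.
        props.put("global.auto.offset.reset", "latest");
    }
}
{code}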

I would like to hear your opinions on the KIP.


-Navinder 

Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-06-26 Thread John Roesler
Hi Randall,

I neglected to notify this thread when I merged the fix for
https://issues.apache.org/jira/browse/KAFKA-10185
on June 19th. I'm sorry about that oversight. It is marked with
a fix version of 2.6.0.

On a side note, I have a fix for KAFKA-10173, which I'm merging
and backporting right now.

Thanks for managing the release,
-John

On Thu, Jun 25, 2020, at 10:23, Randall Hauch wrote:
> Thanks for the update, folks!
> 
> Based upon Jira [1], we currently have 4 issues that are considered
> blockers for the 2.6.0 release and production of RCs:
> 
>- https://issues.apache.org/jira/browse/KAFKA-10134 - High CPU issue
>during rebalance in Kafka consumer after upgrading to 2.5 (unassigned)
>- https://issues.apache.org/jira/browse/KAFKA-10143 - Can no longer
>change replication throttle with reassignment tool (Jason G)
>- https://issues.apache.org/jira/browse/KAFKA-10166 - Excessive
>TaskCorruptedException seen in testing (Sophie, Bruno)
>- https://issues.apache.org/jira/browse/KAFKA-10173
>- BufferUnderflowException during Kafka Streams Upgrade (John R)
> 
> and one critical issue that may be a regression that at this time will not
> block production of RCs:
> 
>- https://issues.apache.org/jira/browse/KAFKA-10017 - Flaky Test
>EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta (Matthias)
> 
> and one build/release issue we'd like to fix if possible but will not block
> RCs or the release:
> 
>- https://issues.apache.org/jira/browse/KAFKA-9381
>- kafka-streams-scala: Javadocs + Scaladocs not published on maven central
>(me)
> 
> I'm working with the assignees and reporters of these issues (via comments
> on the issues) to identify an ETA and to track progress. Anyone is welcome
> to chime in on those issues.
> 
> At this time, no other changes (other than PRs that only fix/improve tests)
> should be merged to the `2.6` branch. If you think you've identified a new
> blocker issue or believe another existing issue should be treated as a
> blocker for 2.6.0, please mark the issue's `fix version` as `2.6.0` _and_
> respond to this thread with details, and I will work with you to determine
> whether it is indeed a blocker.
> 
> As always, let me know here if you have any questions/concerns.
> 
> Best regards,
> 
> Randall
> 
> [1] https://issues.apache.org/jira/projects/KAFKA/versions/12346918
> 
> On Thu, Jun 25, 2020 at 8:27 AM Mario Molina  wrote:
> 
> > Hi Randall,
> >
> > Ticket https://issues.apache.org/jira/browse/KAFKA-9018 is not a blocker
> > so
> > it can be moved to the 2.7.0 version.
> >
> > Mario
> >
> > On Wed, 24 Jun 2020 at 20:22, Boyang Chen 
> > wrote:
> >
> > > Hey Randall,
> > >
> > > There was another spotted blocker:
> > > https://issues.apache.org/jira/browse/KAFKA-10173
> > > As of current, John is working on a fix.
> > >
> > > Boyang
> > >
> > > On Wed, Jun 24, 2020 at 4:08 PM Sophie Blee-Goldman  > >
> > > wrote:
> > >
> > > > Hey all,
> > > >
> > > > Just a heads up that we discovered a new blocker. The fix is pretty
> > > > straightforward
> > > > and there's already a PR for it so it should be resolved quickly.
> > > >
> > > > Here's the ticket: https://issues.apache.org/jira/browse/KAFKA-10198
> > > >
> > > > On Sat, May 30, 2020 at 12:52 PM Randall Hauch 
> > wrote:
> > > >
> > > > > Hi, Kowshik,
> > > > >
> > > > > Thanks for the update on KIP-584. This is listed on the "Postponed"
> > > > section
> > > > > of the AK 2.6.0 release plan (
> > > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > > > ).
> > > > >
> > > > > Best regards,
> > > > >
> > > > > Randall
> > > > >
> > > > > On Fri, May 29, 2020 at 4:51 PM Kowshik Prakasam <
> > > kpraka...@confluent.io
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Hi Randall,
> > > > > >
> > > > > > We have to remove KIP-584 from the release plan, as this item will
> > > not
> > > > be
> > > > > > completed for 2.6 release (although KIP is accepted). We plan to
> > > > include
> > > > > it
> > > > > > in a next release.
> > > > > >
> > > > > >
> > > > > > Cheers,
> > > > > > Kowshik
> > > > > >
> > > > > >
> > > > > > On Fri, May 29, 2020 at 11:43 AM Maulin Vasavada <
> > > > > > maulin.vasav...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Randall Hauch
> > > > > > >
> > > > > > > Can we add KIP-519 to 2.6? It was merged to Trunk already in
> > April
> > > -
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=128650952
> > > > > > > .
> > > > > > >
> > > > > > > Thanks
> > > > > > > Maulin
> > > > > > >
> > > > > > > On Fri, May 29, 2020 at 11:01 AM Randall Hauch  > >
> > > > > wrote:
> > > > > > >
> > > > > > > > Here's an update on the AK 2.6.0 release.
> > > > > > > >
> > > > > > > > Code freeze was Wednesday, and the release plan [1] has been
> > > > updated
> > > > > to
> > > > > > > > reflect all of the KIPs that made the release. 

[jira] [Resolved] (KAFKA-10185) Streams should log summarized restoration information at info level

2020-06-26 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-10185.
--
Resolution: Fixed

> Streams should log summarized restoration information at info level
> ---
>
> Key: KAFKA-10185
> URL: https://issues.apache.org/jira/browse/KAFKA-10185
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.6.0, 2.5.1
>
>
> Currently, restoration progress is only visible at debug level in the 
> Consumer's Fetcher logs. Users can register a restoration listener and 
> implement their own logging, but it would substantially improve operability 
> to have some logs available at INFO level.
> Logging each partition in each restore batch at info level would be too much, 
> though, so we should print summarized logs at a decreased interval, like 
> every 10 seconds.
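For reference, a minimal sketch of the listener-based workaround mentioned in the 
description; the class name and log messages are illustrative:
{code:java}
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateRestoreListener;

// Register with: streams.setGlobalStateRestoreListener(new LoggingRestoreListener());
public class LoggingRestoreListener implements StateRestoreListener {
    @Override
    public void onRestoreStart(TopicPartition partition, String store, long startOffset, long endOffset) {
        System.out.printf("Restoring %s (%s): offsets %d to %d%n", store, partition, startOffset, endOffset);
    }

    @Override
    public void onBatchRestored(TopicPartition partition, String store, long batchEndOffset, long numRestored) {
        // Called once per restored batch; a real implementation would summarize
        // or throttle here rather than log every batch.
    }

    @Override
    public void onRestoreEnd(TopicPartition partition, String store, long totalRestored) {
        System.out.printf("Restored %s (%s): %d records%n", store, partition, totalRestored);
    }
}
{code}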



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-trunk-jdk11 #1598

2020-06-26 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #4670

2020-06-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10166: always write checkpoint before closing an (initialized)


--
[...truncated 3.16 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > 

[jira] [Created] (KAFKA-10207) Untrimmed Index files cause premature log segment deletions on startup

2020-06-26 Thread Johnny Malizia (Jira)
Johnny Malizia created KAFKA-10207:
--

 Summary: Untrimmed Index files cause premature log segment 
deletions on startup
 Key: KAFKA-10207
 URL: https://issues.apache.org/jira/browse/KAFKA-10207
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 2.4.1, 2.3.1, 2.4.0
Reporter: Johnny Malizia


[KIP-263|https://cwiki.apache.org/confluence/display/KAFKA/KIP-263%3A+Allow+broker+to+skip+sanity+check+of+inactive+segments+on+broker+startup#KIP263:Allowbrokertoskipsanitycheckofinactivesegmentsonbrokerstartup-Evaluation]
 appears to have introduced a change that explicitly skips calling the 
sanityCheck method on the time or offset index files that Kafka loads at 
startup. I found a particularly nasty bug using the following configuration:
{code:java}
jvm=1.8.0_191 zfs=0.6.5.6 kernel=4.4.0-1013-aws kafka=2.4.1{code}
The bug was that the retention period for a topic, or even the broker-level 
configuration, seemed not to be respected: no matter what, when the broker 
started up it would decide that all log segments on disk were breaching the 
retention window, and the data would be purged away.

 
{code:java}
Found deletable segments with base offsets [11610665,12130396,12650133] due to 
retention time 8640ms breach {code}
{code:java}
Rolled new log segment at offset 12764291 in 1 ms. (kafka.log.Log)
Scheduling segments for deletion List(LogSegment(baseOffset=11610665, 
size=1073731621, lastModifiedTime=1592532125000, largestTime=0), 
LogSegment(baseOffset=12130396, size=1073727967, 
lastModifiedTime=1592532462000, largestTime=0), LogSegment(baseOffset=12650133, 
size=235891971, lastModifiedTime=1592532531000, largestTime=0)) {code}
Further logging showed that this issue was happening when the files were loaded, 
indicating that the final writes to trim the index had not been successful:
{code:java}
DEBUG Loaded index file 
/mnt/kafka-logs/test_topic-0/17221277.timeindex with maxEntries = 
873813, maxIndexSize = 10485760, entries = 873813, lastOffset = 
TimestampOffset(0,17221277), file position = 10485756 
(kafka.log.TimeIndex){code}
 

So, because the untrimmed index leaves the preallocated zero bytes at the tail, 
when the index is loaded again after restarting Kafka the largest timestamp reads 
back as 0, and this leads to premature TTL deletion of the log segments.
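A simplified illustration (not Kafka's actual code) of why largestTime = 0 makes 
every segment look expired under the retention check:
{code:java}
public class RetentionBreachIllustration {
    // Retention is breached when the segment's largest timestamp is older than retention.ms.
    static boolean breachesRetention(long nowMs, long segmentLargestTimestampMs, long retentionMs) {
        return nowMs - segmentLargestTimestampMs > retentionMs;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long retentionMs = 86_400_000L; // 1 day

        // Healthy segment: largest timestamp is one hour old, so it is kept.
        System.out.println(breachesRetention(now, now - 3_600_000L, retentionMs)); // false

        // Segment loaded from an untrimmed index: largestTime = 0 (the epoch),
        // so it always appears to breach retention and is scheduled for deletion.
        System.out.println(breachesRetention(now, 0L, retentionMs)); // true
    }
}
{code}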

 

I tracked the issue down to the JVM version in use, as upgrading the JVM resolved 
it. Even so, I think Kafka should never delete data by mistake like this, since 
doing a rolling restart with this bug in place would cause complete data loss 
across the cluster.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-623: Add "internal-topics" option to streams application reset tool

2020-06-26 Thread Boyang Chen
Thanks for driving the proposal, Joel. I have a minor suggestion: we should be
clearer about why we are introducing this flag, so it would be better for the
document to also state the default behavior explicitly, for example:

Comma-separated list of internal topics to be deleted. By default, the
Streams reset tool will delete all topics prefixed by the
application.id.

This flag is useful when you need to keep certain topics intact because of
a prefix conflict with another application (such as "app" vs
"app-v2").

With the internal topic names for "app" provided explicitly, the reset tool
will only delete the internal topics associated with "app", instead of those
of both "app" and "app-v2".


Other than that, +1 from me (binding).

On Wed, Jun 24, 2020 at 1:19 PM Joel Wee  wrote:

> Apologies. Changing the subject.
>
> On 24 Jun 2020, at 9:14 PM, Joel Wee  wrote:
>
> Hi all
>
> I would like to start a vote for KIP-623, which adds the option
> --internal-topics to the streams-application-reset-tool:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158862177
> .
>
> Please let me know what you think.
>
> Best
>
> Joel
>
>


Build failed in Jenkins: kafka-trunk-jdk14 #246

2020-06-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10166: always write checkpoint before closing an (initialized)


--
[...truncated 3.18 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #4669

2020-06-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update Scala to 2.13.3 (#8931)

[mkumar] MINOR: Rename SslTransportLayer.State."NOT_INITALIZED" enum value to


--
[...truncated 6.31 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED


[jira] [Resolved] (KAFKA-6453) Reconsider timestamp propagation semantics

2020-06-26 Thread Victoria Bialas (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Bialas resolved KAFKA-6453.

Resolution: Fixed

Fixed by James Galasyn in https://github.com/apache/kafka/pull/8920

> Reconsider timestamp propagation semantics
> --
>
> Key: KAFKA-6453
> URL: https://issues.apache.org/jira/browse/KAFKA-6453
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Victoria Bialas
>Priority: Major
>  Labels: needs-kip
>
> At the moment, Kafka Streams only has a defined "contract" about timestamp propagation 
> at the Processor API level: all processors within a sub-topology see the 
> timestamp of the input topic record, and this timestamp is used for all 
> result records when writing them to a topic, too.
> The DSL currently inherits this "contract".
> From a DSL point of view, it would be desirable to provide a different 
> contract to the user. To allow this, we need to do the following:
>  - extend the Processor API to allow manipulating timestamps (i.e., a Processor 
> can set a new timestamp for downstream records)
>  - define a DSL "contract" for timestamp propagation for each DSL operator
>  - document the DSL "contract"
>  - implement the DSL "contract" using the new/extended Processor API
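For the first bullet, a small sketch of what setting an output timestamp from a 
Processor looks like with the To#withTimestamp hook of the current Processor API; 
the timestamp adjustment applied here is purely illustrative:
{code:java}
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.To;

public class TimestampRewritingProcessor implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, String value) {
        // Forward the record downstream with an explicit, manipulated timestamp
        // instead of inheriting the input record's timestamp.
        long newTimestamp = context.timestamp() + 1_000L;
        context.forward(key, value, To.all().withTimestamp(newTimestamp));
    }

    @Override
    public void close() { }
}
{code}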



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10166) Excessive TaskCorruptedException seen in testing

2020-06-26 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-10166.
-
Resolution: Fixed

> Excessive TaskCorruptedException seen in testing
> 
>
> Key: KAFKA-10166
> URL: https://issues.apache.org/jira/browse/KAFKA-10166
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Bruno Cadonna
>Priority: Blocker
> Fix For: 2.6.0
>
>
> As the title indicates, long-running test applications with injected network 
> "outages" seem to hit TaskCorruptedException more than expected.
> Seen occasionally on the ALOS application (~20 times in two days in one case, 
> for example), and very frequently with EOS (many times per day)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1597

2020-06-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update Scala to 2.13.3 (#8931)

[mkumar] MINOR: Rename SslTransportLayer.State."NOT_INITALIZED" enum value to


--
[...truncated 4.18 MB...]

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Jenkins build is back to normal : kafka-trunk-jdk14 #245

2020-06-26 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #4668

2020-06-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9076: support consumer sync across clusters in MM 2.0 (#7577)


--
[...truncated 6.31 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task 

Re: [VOTE] KIP-418: A method-chaining way to branch KStream

2020-06-26 Thread John Roesler
Thanks, Ivan!

I’m +1 (binding)

-John

On Thu, May 28, 2020, at 17:24, Ivan Ponomarev wrote:
> Hello all!
> 
> I'd like to start the vote for KIP-418, which proposes deprecating the 
> current `branch` method and provides a method-chaining-based API for 
> branching.
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-418%3A+A+method-chaining+way+to+branch+KStream
> 
> Regards,
> 
> Ivan
>
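For context, a rough sketch contrasting the array-based `branch` being deprecated 
with the method-chaining style; the chained names (split, Branched) follow the API 
shape that was ultimately released and may differ from intermediate KIP drafts:
{code:java}
import java.util.Map;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Branched;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Named;

public class BranchingSketch {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("input-topic");

        // Old style: positional array, easy to mix up indices.
        KStream<String, String>[] branches = stream.branch(
            (key, value) -> value.startsWith("A"),
            (key, value) -> true);
        KStream<String, String> aRecords = branches[0];

        // Method-chaining style: branches are addressed by name.
        Map<String, KStream<String, String>> named = stream
            .split(Named.as("prefix-"))
            .branch((key, value) -> value.startsWith("A"), Branched.as("a"))
            .defaultBranch(Branched.as("rest"));
        KStream<String, String> aRecordsByName = named.get("prefix-a");
    }
}
{code}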


Build failed in Jenkins: kafka-trunk-jdk14 #244

2020-06-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9076: support consumer sync across clusters in MM 2.0 (#7577)


--
[...truncated 3.18 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED


[jira] [Created] (KAFKA-10206) Admin can transiently return incorrect results about topics

2020-06-26 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-10206:
---

 Summary: Admin can transiently return incorrect results about 
topics
 Key: KAFKA-10206
 URL: https://issues.apache.org/jira/browse/KAFKA-10206
 Project: Kafka
  Issue Type: Bug
  Components: admin, core
Reporter: Tom Bentley
Assignee: Tom Bentley


When a broker starts up it can handle metadata requests before it has 
received UPDATE_METADATA requests from the controller. 
This manifests in the admin client via:

* listTopics returning an empty list
* describeTopics and describeConfigs of topics erroneously returning 
UnknownTopicOrPartitionException

I assume this also affects the producer and consumer, though since 
`UnknownTopicOrPartitionException` is retriable those clients recover.

Testing locally suggests that the window for this happening is typically <1s.

There doesn't seem to be any way for the caller of the Admin client to detect 
this situation.
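A possible caller-side mitigation sketch (not part of the Admin API; it only papers 
over the empty-list symptom and cannot distinguish a genuinely empty cluster):
{code:java}
import java.util.Collections;
import java.util.Set;

import org.apache.kafka.clients.admin.Admin;

public class ListTopicsWithRetry {
    // Retry listTopics() a few times, since a just-started broker may briefly
    // report no topics before it receives UPDATE_METADATA from the controller.
    public static Set<String> listTopicsWithRetry(Admin admin, int attempts) throws Exception {
        Set<String> names = Collections.emptySet();
        for (int i = 0; i < attempts; i++) {
            names = admin.listTopics().names().get();
            if (!names.isEmpty()) {
                return names;
            }
            Thread.sleep(500); // the window observed locally is typically < 1s
        }
        return names;
    }
}
{code}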



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9076) MirrorMaker 2.0 automated consumer offset sync

2020-06-26 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-9076.
---
Resolution: Fixed

> MirrorMaker 2.0 automated consumer offset sync
> --
>
> Key: KAFKA-9076
> URL: https://issues.apache.org/jira/browse/KAFKA-9076
> Project: Kafka
>  Issue Type: Improvement
>  Components: mirrormaker
>Affects Versions: 2.4.0
>Reporter: Ning Zhang
>Assignee: Ning Zhang
>Priority: Major
>  Labels: mirrormaker, pull-request-available
> Fix For: 2.7.0
>
>
> To calculate the translated consumer offsets in the target cluster, 
> `MirrorClient` currently provides a function called "remoteConsumerOffsets()" 
> that is used by "RemoteClusterUtils" for one-time translation.
> In order to let consumer and Streams applications migrate from the source to 
> the target cluster transparently and conveniently, e.g. in the event of a 
> source cluster failure, a background job is proposed to periodically sync the 
> consumer offsets from the source to the target cluster, so that when those 
> applications switch to the target cluster, they resume consuming from where 
> they left off on the source cluster.
>  KIP: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-545%3A+support+automated+consumer+offset+sync+across+clusters+in+MM+2.0
> [https://github.com/apache/kafka/pull/7577]
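For reference, a sketch of the existing one-shot translation path mentioned above 
(bootstrap address and group name are placeholders); the KIP moves this work into 
a periodic background sync:
{code:java}
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.mirror.RemoteClusterUtils;

public class TranslateOffsetsOnce {
    public static void main(String[] args) throws Exception {
        // Connection properties for the target cluster, where MM2 writes its
        // checkpoints topic.
        Map<String, Object> targetClusterProps = new HashMap<>();
        targetClusterProps.put("bootstrap.servers", "target-cluster:9092");

        Map<TopicPartition, OffsetAndMetadata> translated =
            RemoteClusterUtils.translateOffsets(
                targetClusterProps, "source", "my-consumer-group", Duration.ofSeconds(30));

        translated.forEach((partition, offset) ->
            System.out.println(partition + " -> " + offset.offset()));
    }
}
{code}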



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: KAFKA-10194 and contributor list

2020-06-26 Thread Mickael Maison
Hi Mohamed,

I've added you to the contributors list.
Welcome to the Kafka community!

On Thu, Jun 25, 2020 at 11:34 PM Mohamed Chebbi  wrote:
>
> Hi Kafka Team
>
> I want to be added to the contributor list so I can work on KAFKA-10194.
>
> My JIRA username is mhmdchebbi.
>
>
> Cdt
>
> Mohamed Chebbi
>


Re: Re:Re: [ANNOUNCE] New committer: Xi Hu

2020-06-26 Thread David Jacot
Congrats!

On Thu, Jun 25, 2020 at 4:08 PM Hu Xi  wrote:

> Thank you, everyone. It is my great honor to be a part of the community.
> Will make a greater contribution in the coming days.
>
> 
> From: Roc Marshal 
> Sent: 25 June 2020 10:20
> To: us...@kafka.apache.org 
> Subject: Re:Re: [ANNOUNCE] New committer: Xi Hu
>
> Congratulations ! Xi Hu.
>
>
> Best,
> Roc Marshal.
>
> At 2020-06-25 01:30:33, "Boyang Chen"  wrote:
> >Congratulations Xi! Well deserved.
> >
> >On Wed, Jun 24, 2020 at 10:10 AM AJ Chen  wrote:
> >
> >> Congratulations, Xi.
> >> -aj
> >>
> >>
> >>
> >> On Wed, Jun 24, 2020 at 9:27 AM Guozhang Wang 
> wrote:
> >>
> >> > The PMC for Apache Kafka has invited Xi Hu as a committer and we are
> >> > pleased to announce that he has accepted!
> >> >
> >> > Xi Hu has been actively contributing to Kafka since 2016, and is well
> >> > recognized especially for his non-code contributions: he maintains a
> >> > tech blog evangelizing Kafka in the Chinese-speaking community (
> >> > https://www.cnblogs.com/huxi2b/), and is one of the most active
> >> > members answering questions in the Zhihu (Chinese Reddit / StackOverflow)
> >> > Kafka topic. He has presented at Kafka meetup events in the past and
> >> > authored a book deep-diving into Kafka architecture, design, and
> >> > operations as well (https://www.amazon.cn/dp/B07JH9G2FL). Code-wise, he
> >> > has contributed 75 patches so far.
> >> >
> >> >
> >> > Thanks for all the contributions Xi. Congratulations!
> >> >
> >> > -- Guozhang, on behalf of the Apache Kafka PMC
> >> >
> >>
>


Re: [ANNOUNCE] New committer: Boyang Chen

2020-06-26 Thread David Jacot
Congrats, Boyang!

On Thu, Jun 25, 2020 at 1:57 PM Viktor Somogyi-Vass 
wrote:

> Congrats :)
>
> On Thu, Jun 25, 2020 at 12:28 AM Liquan Pei  wrote:
>
> > Congrats!
> >
> > On Wed, Jun 24, 2020 at 9:42 AM Raymond Ng  wrote:
> >
> > > Congrats Boyang! Look forward to more awesome contributions from you in
> > the
> > > future.
> > >
> > > Regards,
> > > Ray
> > >
> > > On Wed, Jun 24, 2020 at 6:07 AM Ismael Juma  wrote:
> > >
> > > > Congratulations Boyang!
> > > >
> > > > Ismael
> > > >
> > > > On Mon, Jun 22, 2020 at 4:26 PM Guozhang Wang 
> > > wrote:
> > > >
> > > > > The PMC for Apache Kafka has invited Boyang Chen as a committer and
> > we
> > > > are
> > > > > pleased to announce that he has accepted!
> > > > >
> > > > > Boyang became active in the Kafka community more than two years ago.
> > > > > Since then he has presented his experience operating Kafka Streams at
> > > > > Pinterest, as well as several feature developments including rebalance
> > > > > improvements (KIP-345) and exactly-once scalability improvements
> > > > > (KIP-447), at various Kafka Summits and Kafka meetups. More recently
> > > > > he has also been participating in Kafka broker development, including
> > > > > the post-ZooKeeper controller design (KIP-500). Besides all the code
> > > > > contributions, Boyang has also helped review even more PRs and KIPs
> > > > > than his own.
> > > > >
> > > > > Thanks for all the contributions Boyang! And look forward to more
> > > > > collaborations with you on Apache Kafka.
> > > > >
> > > > >
> > > > > -- Guozhang, on behalf of the Apache Kafka PMC
> > > > >
> > > >
> > >
> > --
> > Liquan Pei
> > Software Engineer, Confluent Inc
> >
>


[jira] [Created] (KAFKA-10205) NullPointerException in StreamTask (Kafka Streams 2.5.0)

2020-06-26 Thread Brian Forkan (Jira)
Brian Forkan created KAFKA-10205:


 Summary: NullPointerException in StreamTask (Kafka Streams 2.5.0)
 Key: KAFKA-10205
 URL: https://issues.apache.org/jira/browse/KAFKA-10205
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.5.0
Reporter: Brian Forkan


In our Kafka Streams application we have been experiencing a 
NullPointerException when deploying a new version of our application. This does 
not happen during a normal rolling restart.

The exception is:
{code:java}
Error caught during partition assignment, will abort the current process and 
re-throw at the end of 
rebalance","stack_trace":"java.lang.NullPointerException: nullError caught 
during partition assignment, will abort the current process and re-throw at the 
end of rebalance","stack_trace":"java.lang.NullPointerException: null at 
org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:186)
 at 
org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:115)
 at 
org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:352)
 at 
org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:310)
 at 
org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:295)
 at 
org.apache.kafka.streams.processor.internals.TaskManager.addNewActiveTasks(TaskManager.java:160)
 at 
org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:120)
 at 
org.apache.kafka.streams.processor.internals.StreamsRebalanceListener.onPartitionsAssigned(StreamsRebalanceListener.java:77)
 at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsAssigned(ConsumerCoordinator.java:278)
 at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:419)
 at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:439)
 at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:358)
 at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:490)
 at 
org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1275)
 at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1243) 
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1173) 
at brave.kafka.clients.TracingConsumer.poll(TracingConsumer.java:86) at 
brave.kafka.clients.TracingConsumer.poll(TracingConsumer.java:80) at 
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:853)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:753)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:697)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:670)
{code}
And the relevant lines of code - 
[https://github.com/apache/kafka/blob/2.5/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java#L184-L196]

I suspect "topology.source(partition.topic());" is returning null.

Has anyone experienced this issue before? I suspect there is a problem with our 
topology but I can't replicate this on my machine so I can't tell.
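
As a hedged debugging suggestion (not from the reporter): dumping the built topology's 
description on startup makes it easy to compare the source topics the rebuilt topology 
actually knows about against the partitions being assigned during the rebalance; a 
mismatch there would explain topology.source(partition.topic()) returning null. Topic 
names below are hypothetical:

{code:java}
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;

public class TopologyDump {
    public static void main(String[] args) {
        // Build the same topology the application deploys (sketch only).
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");

        Topology topology = builder.build();
        // describe() lists each sub-topology with its source topics, processors and sinks.
        System.out.println(topology.describe());
    }
}
{code}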

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10204) When producing a message, Kafka Producer throws OffsetOutOfRangeException when awaiting for delivery report

2020-06-26 Thread Kaio Chiarato (Jira)
Kaio Chiarato created KAFKA-10204:
-

 Summary: When producing a message, Kafka Producer throws 
OffsetOutOfRangeException when awaiting for delivery report
 Key: KAFKA-10204
 URL: https://issues.apache.org/jira/browse/KAFKA-10204
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.0.1
Reporter: Kaio Chiarato


In our production environment, we are dealing with a situation in which a 
producer threw OffsetOutOfRangeException. I have handled this error once when 
consuming (and changed the offset reset policy), but never when producing.

 

Do you guys have any idea?

 

Producer ACK mode = *all*

 

3 Brokers deployed

min.insync.replicas=1
default.replication.factor=2
num.replica.fetchers=1
offsets.topic.replication.factor=3
replica.fetch.backoff.ms=1000
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.time.max.ms=1
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=3
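
For context, a minimal sketch of the produce path described above (send with acks=all 
and block on the delivery report, which is where a send-side exception would surface); 
the broker addresses and topic name are made up:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProduceAndWait {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // ACK mode "all" as in the report
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Blocking on the future is the "awaiting the delivery report" step;
            // any exception from the send is rethrown here.
            RecordMetadata metadata =
                producer.send(new ProducerRecord<>("my-topic", "key", "value")).get();
            System.out.printf("Delivered to %s-%d@%d%n",
                metadata.topic(), metadata.partition(), metadata.offset());
        }
    }
}
{code}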



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: New Website Layout

2020-06-26 Thread Jorge Esteban Quilcate Otoya
Looks great!!

A small comment about menus: the `Get Started` and `Docs` pages have different
UX.
While the Get Started pages use the whole page, the Docs pages have a menu on
the left side.
I'd like the Docs pages to also use most of the page. For instance, the config
and metrics tables could be more readable with more space.
Would it be possible to make the left menu expand/collapse, similar to the
current Confluence wiki menu?

Thanks,
Jorge.

On Fri, Jun 26, 2020 at 11:49 AM Ben Stopford  wrote:

> Hey folks
>
> We've made some updates to the website's look and feel. There is a staged
> version in the link below.
>
> https://ec2-13-57-18-236.us-west-1.compute.amazonaws.com/
> username: kafka
> password: streaming
>
> Comments welcomed.
>
> Ben
>


New Website Layout

2020-06-26 Thread Ben Stopford
Hey folks

We've made some updates to the website's look and feel. There is a staged
version in the link below.

https://ec2-13-57-18-236.us-west-1.compute.amazonaws.com/
username: kafka
password: streaming

Comments welcomed.

Ben


Build failed in Jenkins: kafka-trunk-jdk8 #4667

2020-06-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove Diamond and code code Alignment (#8107)


--
[...truncated 6.31 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED


Build failed in Jenkins: kafka-trunk-jdk14 #243

2020-06-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove Diamond and code code Alignment (#8107)


--
[...truncated 6.36 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = 

Jenkins build is back to normal : kafka-trunk-jdk11 #1595

2020-06-26 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-629: Use racially neutral terms in our codebase

2020-06-26 Thread Benny Lee



From: Tom Bentley 
Sent: Friday, 26 June 2020 7:45 PM
To: dev@kafka.apache.org 
Subject: Re: [VOTE] KIP-629: Use racially neutral terms in our codebase

+1 (non-binding)

On Fri, Jun 26, 2020 at 10:37 AM David Jacot  wrote:

> +1 (non-binding). Thanks, Xavier!
>
> On Fri, Jun 26, 2020 at 10:59 AM Levani Kokhreidze  >
> wrote:
>
> > +1 (non-binding)
> >
> > Thank you for this initiative.
> >
> > Levani
> >
> > > On Jun 26, 2020, at 11:53 AM, Mickael Maison  >
> > wrote:
> > >
> > > +1 (binding)
> > > Thanks for the KIP!
> > >
> > > On Fri, Jun 26, 2020 at 9:51 AM Jorge Esteban Quilcate Otoya
> > >  wrote:
> > >>
> > >> +1 (non-binding)
> > >> Thank you Xavier!
> > >>
> > >> On Fri, Jun 26, 2020 at 8:38 AM Bruno Cadonna 
> > wrote:
> > >>
> > >>> +1 (non-binding)
> > >>>
> > >>> On Fri, Jun 26, 2020 at 3:41 AM Jay Kreps 
> wrote:
> > 
> >  +1
> > 
> >  On Thu, Jun 25, 2020 at 6:39 PM Bill Bejeck 
> > wrote:
> > 
> > > Thanks for this KIP Xavier.
> > >
> > > +1(binding)
> > >
> > > -Bill
> > >
> > > On Thu, Jun 25, 2020 at 9:04 PM Gwen Shapira 
> > >>> wrote:
> > >
> > >> +1 (binding)
> > >>
> > >> Thank you Xavier!
> > >>
> > >> On Thu, Jun 25, 2020, 3:44 PM Xavier Léauté 
> > wrote:
> > >>
> > >>> Hi Everyone,
> > >>>
> > >>> I would like to initiate the voting process for KIP-629.
> > >>>
> > >>>
> > >>
> > >
> > >>>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase
> > >>>
> > >>> Thank you,
> > >>> Xavier
> > >>>
> > >>
> > >
> > >>>
> >
> >
>


Re: [VOTE] KIP-629: Use racially neutral terms in our codebase

2020-06-26 Thread Tom Bentley
+1 (non-binding)

On Fri, Jun 26, 2020 at 10:37 AM David Jacot  wrote:

> +1 (non-binding). Thanks, Xavier!
>
> On Fri, Jun 26, 2020 at 10:59 AM Levani Kokhreidze  >
> wrote:
>
> > +1 (non-binding)
> >
> > Thank you for this initiative.
> >
> > Levani
> >
> > > On Jun 26, 2020, at 11:53 AM, Mickael Maison  >
> > wrote:
> > >
> > > +1 (binding)
> > > Thanks for the KIP!
> > >
> > > On Fri, Jun 26, 2020 at 9:51 AM Jorge Esteban Quilcate Otoya
> > >  wrote:
> > >>
> > >> +1 (non-binding)
> > >> Thank you Xavier!
> > >>
> > >> On Fri, Jun 26, 2020 at 8:38 AM Bruno Cadonna 
> > wrote:
> > >>
> > >>> +1 (non-binding)
> > >>>
> > >>> On Fri, Jun 26, 2020 at 3:41 AM Jay Kreps 
> wrote:
> > 
> >  +1
> > 
> >  On Thu, Jun 25, 2020 at 6:39 PM Bill Bejeck 
> > wrote:
> > 
> > > Thanks for this KIP Xavier.
> > >
> > > +1(binding)
> > >
> > > -Bill
> > >
> > > On Thu, Jun 25, 2020 at 9:04 PM Gwen Shapira 
> > >>> wrote:
> > >
> > >> +1 (binding)
> > >>
> > >> Thank you Xavier!
> > >>
> > >> On Thu, Jun 25, 2020, 3:44 PM Xavier Léauté 
> > wrote:
> > >>
> > >>> Hi Everyone,
> > >>>
> > >>> I would like to initiate the voting process for KIP-629.
> > >>>
> > >>>
> > >>
> > >
> > >>>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase
> > >>>
> > >>> Thank you,
> > >>> Xavier
> > >>>
> > >>
> > >
> > >>>
> >
> >
>


Re: [VOTE] KIP-629: Use racially neutral terms in our codebase

2020-06-26 Thread David Jacot
+1 (non-binding). Thanks, Xavier!

On Fri, Jun 26, 2020 at 10:59 AM Levani Kokhreidze 
wrote:

> +1 (non-binding)
>
> Thank you for this initiative.
>
> Levani
>
> > On Jun 26, 2020, at 11:53 AM, Mickael Maison 
> wrote:
> >
> > +1 (binding)
> > Thanks for the KIP!
> >
> > On Fri, Jun 26, 2020 at 9:51 AM Jorge Esteban Quilcate Otoya
> >  wrote:
> >>
> >> +1 (non-binding)
> >> Thank you Xavier!
> >>
> >> On Fri, Jun 26, 2020 at 8:38 AM Bruno Cadonna 
> wrote:
> >>
> >>> +1 (non-binding)
> >>>
> >>> On Fri, Jun 26, 2020 at 3:41 AM Jay Kreps  wrote:
> 
>  +1
> 
>  On Thu, Jun 25, 2020 at 6:39 PM Bill Bejeck 
> wrote:
> 
> > Thanks for this KIP Xavier.
> >
> > +1(binding)
> >
> > -Bill
> >
> > On Thu, Jun 25, 2020 at 9:04 PM Gwen Shapira 
> >>> wrote:
> >
> >> +1 (binding)
> >>
> >> Thank you Xavier!
> >>
> >> On Thu, Jun 25, 2020, 3:44 PM Xavier Léauté 
> wrote:
> >>
> >>> Hi Everyone,
> >>>
> >>> I would like to initiate the voting process for KIP-629.
> >>>
> >>>
> >>
> >
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase
> >>>
> >>> Thank you,
> >>> Xavier
> >>>
> >>
> >
> >>>
>
>


Re: [VOTE] KIP-627: Expose Trogdor-specific JMX Metrics for Tasks and Agents

2020-06-26 Thread David Jacot
+1 (non-binding)

Thanks for the KIP!

Best,
David

On Fri, Jun 26, 2020 at 11:11 AM Manikumar 
wrote:

> +1 (binding)
>
> Thanks for the KIP.
>
> Thanks,
> Manikumar
>
> On Fri, Jun 26, 2020 at 11:46 AM Stanislav Kozlovski <
> stanis...@confluent.io>
> wrote:
>
> > +1 (non-binding).
> >
> > Thanks for the work! I am also happy to see Trogdor being improved
> >
> > Best,
> > Stanislav
> >
> > On Fri, Jun 26, 2020 at 5:34 AM Colin McCabe  wrote:
> >
> > > +1 (binding).
> > >
> > > Thanks, Sam.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Thu, Jun 25, 2020, at 18:05, Gwen Shapira wrote:
> > > > +1 (binding)
> > > >
> > > > Thank you, Sam. It is great to see Trogdor getting the care it
> > deserves.
> > > >
> > > > On Mon, Jun 22, 2020, 1:46 PM Sam Pal  wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I would like to start a vote for KIP-627, which adds metrics about
> > > active
> > > > > agents and the number of created, running, and done tasks in a
> > Trogdor
> > > > > cluster:
> > > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-627%3A+Expose+Trogdor-specific+JMX+Metrics+for+Tasks+and+Agents
> > > > >
> > > > > Looking forward to hearing from you all!
> > > > >
> > > > > Best,
> > > > > Sam
> > > > >
> > > > >
> > > >
> > >
> >
> >
> > --
> > Best,
> > Stanislav
> >
>


Re: [VOTE] KIP-627: Expose Trogdor-specific JMX Metrics for Tasks and Agents

2020-06-26 Thread Manikumar
+1 (binding)

Thanks for the KIP.

Thanks,
Manikumar

On Fri, Jun 26, 2020 at 11:46 AM Stanislav Kozlovski 
wrote:

> +1 (non-binding).
>
> Thanks for the work! I am also happy to see Trogdor being improved
>
> Best,
> Stanislav
>
> On Fri, Jun 26, 2020 at 5:34 AM Colin McCabe  wrote:
>
> > +1 (binding).
> >
> > Thanks, Sam.
> >
> > best,
> > Colin
> >
> >
> > On Thu, Jun 25, 2020, at 18:05, Gwen Shapira wrote:
> > > +1 (binding)
> > >
> > > Thank you, Sam. It is great to see Trogdor getting the care it
> deserves.
> > >
> > > On Mon, Jun 22, 2020, 1:46 PM Sam Pal  wrote:
> > >
> > > > Hi all,
> > > >
> > > > I would like to start a vote for KIP-627, which adds metrics about
> > active
> > > > agents and the number of created, running, and done tasks in a
> Trogdor
> > > > cluster:
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-627%3A+Expose+Trogdor-specific+JMX+Metrics+for+Tasks+and+Agents
> > > >
> > > > Looking forward to hearing from you all!
> > > >
> > > > Best,
> > > > Sam
> > > >
> > > >
> > >
> >
>
>
> --
> Best,
> Stanislav
>


Re: [VOTE] KIP-629: Use racially neutral terms in our codebase

2020-06-26 Thread Levani Kokhreidze
+1 (non-binding)

Thank you for this initiative.

Levani

> On Jun 26, 2020, at 11:53 AM, Mickael Maison  wrote:
> 
> +1 (binding)
> Thanks for the KIP!
> 
> On Fri, Jun 26, 2020 at 9:51 AM Jorge Esteban Quilcate Otoya
>  wrote:
>> 
>> +1 (non-binding)
>> Thank you Xavier!
>> 
>> On Fri, Jun 26, 2020 at 8:38 AM Bruno Cadonna  wrote:
>> 
>>> +1 (non-binding)
>>> 
>>> On Fri, Jun 26, 2020 at 3:41 AM Jay Kreps  wrote:
 
 +1
 
 On Thu, Jun 25, 2020 at 6:39 PM Bill Bejeck  wrote:
 
> Thanks for this KIP Xavier.
> 
> +1(binding)
> 
> -Bill
> 
> On Thu, Jun 25, 2020 at 9:04 PM Gwen Shapira 
>>> wrote:
> 
>> +1 (binding)
>> 
>> Thank you Xavier!
>> 
>> On Thu, Jun 25, 2020, 3:44 PM Xavier Léauté  wrote:
>> 
>>> Hi Everyone,
>>> 
>>> I would like to initiate the voting process for KIP-629.
>>> 
>>> 
>> 
> 
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase
>>> 
>>> Thank you,
>>> Xavier
>>> 
>> 
> 
>>> 



Re: [VOTE] KIP-629: Use racially neutral terms in our codebase

2020-06-26 Thread Mickael Maison
+1 (binding)
Thanks for the KIP!

On Fri, Jun 26, 2020 at 9:51 AM Jorge Esteban Quilcate Otoya
 wrote:
>
> +1 (non-binding)
> Thank you Xavier!
>
> On Fri, Jun 26, 2020 at 8:38 AM Bruno Cadonna  wrote:
>
> > +1 (non-binding)
> >
> > On Fri, Jun 26, 2020 at 3:41 AM Jay Kreps  wrote:
> > >
> > > +1
> > >
> > > On Thu, Jun 25, 2020 at 6:39 PM Bill Bejeck  wrote:
> > >
> > > > Thanks for this KIP Xavier.
> > > >
> > > > +1(binding)
> > > >
> > > > -Bill
> > > >
> > > > On Thu, Jun 25, 2020 at 9:04 PM Gwen Shapira 
> > wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > > Thank you Xavier!
> > > > >
> > > > > On Thu, Jun 25, 2020, 3:44 PM Xavier Léauté  wrote:
> > > > >
> > > > > > Hi Everyone,
> > > > > >
> > > > > > I would like to initiate the voting process for KIP-629.
> > > > > >
> > > > > >
> > > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase
> > > > > >
> > > > > > Thank you,
> > > > > > Xavier
> > > > > >
> > > > >
> > > >
> >


Re: [VOTE] KIP-629: Use racially neutral terms in our codebase

2020-06-26 Thread Jorge Esteban Quilcate Otoya
+1 (non-binding)
Thank you Xavier!

On Fri, Jun 26, 2020 at 8:38 AM Bruno Cadonna  wrote:

> +1 (non-binding)
>
> On Fri, Jun 26, 2020 at 3:41 AM Jay Kreps  wrote:
> >
> > +1
> >
> > On Thu, Jun 25, 2020 at 6:39 PM Bill Bejeck  wrote:
> >
> > > Thanks for this KIP Xavier.
> > >
> > > +1(binding)
> > >
> > > -Bill
> > >
> > > On Thu, Jun 25, 2020 at 9:04 PM Gwen Shapira 
> wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > Thank you Xavier!
> > > >
> > > > On Thu, Jun 25, 2020, 3:44 PM Xavier Léauté  wrote:
> > > >
> > > > > Hi Everyone,
> > > > >
> > > > > I would like to initiate the voting process for KIP-629.
> > > > >
> > > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase
> > > > >
> > > > > Thank you,
> > > > > Xavier
> > > > >
> > > >
> > >
>


Re: [VOTE] KIP-629: Use racially neutral terms in our codebase

2020-06-26 Thread Bruno Cadonna
+1 (non-binding)

On Fri, Jun 26, 2020 at 3:41 AM Jay Kreps  wrote:
>
> +1
>
> On Thu, Jun 25, 2020 at 6:39 PM Bill Bejeck  wrote:
>
> > Thanks for this KIP Xavier.
> >
> > +1(binding)
> >
> > -Bill
> >
> > On Thu, Jun 25, 2020 at 9:04 PM Gwen Shapira  wrote:
> >
> > > +1 (binding)
> > >
> > > Thank you Xavier!
> > >
> > > On Thu, Jun 25, 2020, 3:44 PM Xavier Léauté  wrote:
> > >
> > > > Hi Everyone,
> > > >
> > > > I would like to initiate the voting process for KIP-629.
> > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase
> > > >
> > > > Thank you,
> > > > Xavier
> > > >
> > >
> >


Re: [VOTE] KIP-627: Expose Trogdor-specific JMX Metrics for Tasks and Agents

2020-06-26 Thread Stanislav Kozlovski
+1 (non-binding).

Thanks for the work! I am also happy to see Trogdor being improved

Best,
Stanislav

On Fri, Jun 26, 2020 at 5:34 AM Colin McCabe  wrote:

> +1 (binding).
>
> Thanks, Sam.
>
> best,
> Colin
>
>
> On Thu, Jun 25, 2020, at 18:05, Gwen Shapira wrote:
> > +1 (binding)
> >
> > Thank you, Sam. It is great to see Trogdor getting the care it deserves.
> >
> > On Mon, Jun 22, 2020, 1:46 PM Sam Pal  wrote:
> >
> > > Hi all,
> > >
> > > I would like to start a vote for KIP-627, which adds metrics about
> active
> > > agents and the number of created, running, and done tasks in a Trogdor
> > > cluster:
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-627%3A+Expose+Trogdor-specific+JMX+Metrics+for+Tasks+and+Agents
> > >
> > > Looking forward to hearing from you all!
> > >
> > > Best,
> > > Sam
> > >
> > >
> >
>


-- 
Best,
Stanislav