Limit number of topic partitions

2020-03-06 Thread Gokul Ramanan Subramanian
Hi.

I'd like your thoughts on
https://issues.apache.org/jira/browse/KAFKA-9590?jql=text%20~%20%22limit%20topic%20partitions%22
(TL;DR: it's about limiting the number of topic partitions via a read-only
configuration).

I am going to start writing a KIP for this soon, and will share on this
thread as I make progress. But I thought it would be a good idea to get
some initial thoughts / additional requirements (if any) from the community
before I dive deep into the design.
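To make the tldr concrete, the kind of guard I have in mind would look roughly
like this (purely an illustrative sketch; the config name and enforcement point
are hypothetical and not decided yet):

    // Hypothetical read-only broker config, e.g. "max.partitions", capping the
    // cluster-wide partition count; creation / expansion requests that would
    // exceed it get rejected up front.
    static void checkPartitionLimit(int currentTotal, int requested, int maxPartitions) {
        if (currentTotal + requested > maxPartitions) {
            throw new IllegalArgumentException("Creating " + requested
                + " partitions would exceed the limit of " + maxPartitions);
        }
    }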

Thanks for your time.


Build failed in Jenkins: kafka-2.5-jdk8 #58

2020-03-06 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-9668: Iterating over KafkaStreams.getAllMetadata() results in


--
[...truncated 5.86 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest: all listed tests PASSED (log tail cut off mid-entry)

Build failed in Jenkins: kafka-trunk-jdk11 #1218

2020-03-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Check store directory empty to decide whether throw task


--
[...truncated 2.92 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest, org.apache.kafka.streams.MockTimeTest, and org.apache.kafka.streams.internals.KeyValueStoreFacadeTest: all listed tests PASSED


[jira] [Resolved] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-03-06 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-5722.
-
Resolution: Duplicate

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to ZooKeeper. The 
> ZooKeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9679) Mock consumer should behave consistently with actual consumer

2020-03-06 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9679:
--

 Summary: Mock consumer should behave consistently with actual 
consumer
 Key: KAFKA-9679
 URL: https://issues.apache.org/jira/browse/KAFKA-9679
 Project: Kafka
  Issue Type: Test
  Components: consumer, streams
Reporter: Boyang Chen


Right now, MockConsumer throws an IllegalStateException when buffered records 
cannot be matched to a currently assigned partition. This is not the case for 
KafkaConsumer, which simply does not return such data from the `poll()` call. 
This inconsistent behavior should be fixed.

Note that if we take this fix, the full unit test suite needs to be run to make 
sure no regression is introduced, as some tests potentially depend on the 
current MockConsumer behavior.
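For illustration, a minimal sketch of the inconsistency (assuming the current 
MockConsumer semantics described above; topic name and values are made up):

{code:java}
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.MockConsumer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.TopicPartition;

public class MockConsumerSketch {
    public static void main(String[] args) {
        MockConsumer<String, String> consumer =
            new MockConsumer<>(OffsetResetStrategy.EARLIEST);

        // Only partition 0 is assigned.
        TopicPartition p0 = new TopicPartition("test-topic", 0);
        consumer.assign(Collections.singletonList(p0));
        consumer.updateBeginningOffsets(Collections.singletonMap(p0, 0L));

        // Buffering a record for the unassigned partition 1: per the report
        // above, MockConsumer fails here with IllegalStateException, while the
        // real KafkaConsumer would simply not return data for an unassigned
        // partition from poll().
        consumer.addRecord(new ConsumerRecord<>("test-topic", 1, 0L, "k", "v"));
        consumer.poll(Duration.ofMillis(100));
    }
}
{code}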



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9662) Throttling system test fails when messages are produced before consumer starts up

2020-03-06 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur resolved KAFKA-9662.
-
Resolution: Fixed

> Throttling system test fails when messages are produced before consumer 
> starts up
> -
>
> Key: KAFKA-9662
> URL: https://issues.apache.org/jira/browse/KAFKA-9662
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.5.0
>
>
> The test produces large records using the producer performance tool and then 
> starts another, validating produce/consume loop for integer records. The 
> consumer starts consuming from the latest offset to avoid consuming the large 
> records produced earlier by the first producer. If the second producer starts 
> producing records before the consumer has reset its offset to the latest 
> offset, the consumer misses some of the produced records and the test fails:
> {quote}
> AssertionError: 762 acked message did not make it to the Consumer. They are: 
> 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19...plus 
> 742 more. Total Acked: 174330, Total Consumed: 173568. We validated that the 
> first 762 of these missing messages correctly made it into Kafka's data 
> files. This suggests they were lost on their way to the consumer.
> {quote}
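
For context, a rough sketch of the kind of synchronization that closes this 
race (conceptual Java only; the actual system test is implemented differently, 
and startSecondProducer() is a hypothetical placeholder):

{code:java}
// Wait until the consumer's reset-to-latest has taken effect on all assigned
// partitions before the validating producer starts.
consumer.subscribe(Collections.singletonList(topic));
consumer.poll(Duration.ofMillis(100)); // join the group and trigger the reset
for (TopicPartition tp : consumer.assignment()) {
    long end = consumer.endOffsets(Collections.singletonList(tp)).get(tp);
    while (consumer.position(tp) < end) {
        consumer.poll(Duration.ofMillis(100));
    }
}
startSecondProducer(); // only now is it safe to produce the validated records
{code}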



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9678) Introduce bounded exponential backoff in clients

2020-03-06 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-9678:


 Summary: Introduce bounded exponential backoff in clients
 Key: KAFKA-9678
 URL: https://issues.apache.org/jira/browse/KAFKA-9678
 Project: Kafka
  Issue Type: Improvement
  Components: admin, consumer, producer 
Reporter: Guozhang Wang


In all clients (consumer, producer, admin, and streams) we have retry 
mechanisms with a fixed backoff to handle transient connection issues with 
brokers. However, with a small backoff (many default to 100ms) we could send 
tens of requests per second to the broker, and if the connection issue is 
prolonged this adds up to significant overhead.

We should consider introducing upper-bounded exponential backoff universally in 
those clients to reduce the number of retry requests during periods of 
connection failure or network partition.
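
As a rough illustration of the idea (constants and names are illustrative, not 
a proposed API):

{code:java}
import java.util.concurrent.ThreadLocalRandom;

// Minimal sketch of upper-bounded exponential backoff with jitter.
public final class BoundedExponentialBackoff {
    private static final long INITIAL_BACKOFF_MS = 100;  // today's typical fixed backoff
    private static final long MAX_BACKOFF_MS = 10_000;   // the proposed upper bound
    private static final double JITTER = 0.2;            // +/-20% to avoid thundering herds

    /** Backoff to sleep before the given retry attempt (0-based). */
    public static long backoffMs(int attempt) {
        double exp = INITIAL_BACKOFF_MS * Math.pow(2, attempt);
        long bounded = (long) Math.min(exp, MAX_BACKOFF_MS);
        double jitter = 1 + ThreadLocalRandom.current().nextDouble(-JITTER, JITTER);
        return (long) (bounded * jitter);
    }
}
{code}

With these example constants, retries back off 100ms, 200ms, 400ms, ... capped 
at 10s, so a prolonged outage costs roughly one request every ten seconds per 
client instead of ten per second.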



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1217

2020-03-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9668: Iterating over KafkaStreams.getAllMetadata() results in


--
[...truncated 5.88 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest and org.apache.kafka.streams.TestTopicsTest: all listed tests PASSED; Gradle output for :streams:upgrade-system-tests-0100 cut off mid-entry

[jira] [Created] (KAFKA-9677) Low consume bandwidth quota may cause consumer not being able to fetch data

2020-03-06 Thread Anna Povzner (Jira)
Anna Povzner created KAFKA-9677:
---

 Summary: Low consume bandwidth quota may cause consumer not being 
able to fetch data
 Key: KAFKA-9677
 URL: https://issues.apache.org/jira/browse/KAFKA-9677
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.3.1, 2.4.0, 2.2.2, 2.1.1, 2.0.1
Reporter: Anna Povzner
Assignee: Anna Povzner


When we changed quota communication with KIP-219, fetch requests became 
throttled by returning an empty response with the delay set in 
`throttle_time_ms`, with the Kafka consumer retrying again after the delay.

With default configs, the maximum fetch size could be as big as 50MB (or 10MB 
per partition). The default broker config (1-second window, 10 full windows of 
tracked bandwidth/thread utilization usage) means that a consumer quota below 
5MB/s (per broker) may stop a fetch request from ever being successful.

Or the other way around: a 1MB/s consumer quota (per broker) means that any 
fetch request that gets >= 10MB of data (10 seconds * 1MB/second) in the 
response will never get through.
h3. Proposed fix

Return less data in the fetch response in this case: cap the `fetchMaxBytes` 
passed to replicaManager.fetchMessages() from KafkaApis.handleFetchRequest() to 
the consumer quota multiplied by the full quota window (window size * number of 
windows). In the example of default configs and a 1MB/s consumer bandwidth 
quota, fetchMaxBytes will be 10MB.
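
A minimal sketch of the proposed cap (illustrative only, not actual broker 
code; the values are the defaults from the description above):

{code:java}
// Cap the fetch size so a full response can be paid for within the quota's
// entire tracked window, guaranteeing the request can eventually succeed.
static long cappedFetchMaxBytes(long requestedFetchMaxBytes,
                                long quotaBytesPerSec,
                                int windowSizeSeconds,
                                int numFullWindows) {
    long maxWindowBytes = quotaBytesPerSec * windowSizeSeconds * numFullWindows;
    return Math.min(requestedFetchMaxBytes, maxWindowBytes);
}

// With the defaults above: 1 MB/s * 1 s * 10 windows = 10 MB cap.
{code}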



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-557: Add emit on change support for Kafka Streams

2020-03-06 Thread Richard Yu
Hi all,

I have decided to pass this KIP with 2 binding votes and 3 non-binding
votes (including mine).
I will update the KIP status shortly after this.

Best,
Richard

On Thu, Mar 5, 2020 at 3:45 PM Richard Yu 
wrote:

> Hi all,
>
> Just polling for some last changes on the name.
> I think that since there doesn't seem to be much objection to any major
> changes in the KIP, I will pass it this Friday.
>
> If you feel that we still need some more discussion, please let me know. :)
>
> Best,
> Richard
>
> P.S. Will start working on a PR for this one soon.
>
> On Wed, Mar 4, 2020 at 1:30 PM Guozhang Wang  wrote:
>
>> Regarding the metric name, I was actually trying to be consistent with the
>> node-level `suppression-emit`, as I feel this one's characteristics are
>> closer to that. If other folks feel it is better to align with the task-level
>> "dropped-records", I think I can be convinced too.
>>
>>
>> Guozhang
>>
>> On Wed, Mar 4, 2020 at 12:09 AM Bruno Cadonna  wrote:
>>
>> > Hi all,
>> >
>> > may I make a non-binding proposal for the metric name? I would prefer
>> > "skipped-idempotent-updates" to be consistent with the
>> > "dropped-records".
>> >
>> > Best,
>> > Bruno
>> >
>> > On Tue, Mar 3, 2020 at 11:57 PM Richard Yu 
>> > wrote:
>> > >
>> > > Hi all,
>> > >
>> > > Thanks for the discussion!
>> > >
>> > > @Guozhang, I will make the corresponding changes to the KIP (i.e.
>> > > renaming the sensor and adding some notes).
>> > > With the current state of things, we are very close. Just need that
>> > > one last binding vote.
>> > >
>> > > @Matthias J. Sax   It would be ideal if we can also
>> > > get your last two cents on this as well.
>> > > Other than that, we are good.
>> > >
>> > > Best,
>> > > Richard
>> > >
>> > >
>> > > On Tue, Mar 3, 2020 at 10:46 AM Guozhang Wang 
>> > wrote:
>> > >
>> > > > Hi Bruno, John:
>> > > >
>> > > > 1) That makes sense. If we consider them to be node-specific metrics
>> > > > that apply only to a subset of built-in processor nodes and are not
>> > > > alert-relevant (just like suppression-emit (rate | total)), they'd
>> > > > better be per-node instead of per-task, and we would not associate such
>> > > > events with warnings. With that in mind, I'd suggest we consider
>> > > > renaming the metric without the `dropped` keyword to distinguish it
>> > > > from the per-task level sensor. How about "idempotent-update-skip (rate
>> > > > | total)"?
>> > > >
>> > > > Also, a minor suggestion: we should clarify in the KIP / javadocs
>> > > > which built-in processor nodes would have this metric while others
>> > > > don't.
>> > > >
>> > > > 2) About stream time tracking, there are multiple known issues that we
>> > > > should close to improve our consistency semantics:
>> > > >
>> > > >  a. preserve stream time of active tasks across rebalances where they
>> > > > may be migrated. This is what KAFKA-9368 is meant for.
>> > > >  b. preserve stream time of standby tasks to be aligned with the
>> > > > active tasks, via the changelog topics.
>> > > >
>> > > > And what I'm more concerned about is b) here. For example: let's say
>> > > > we have a topology of `source -> A -> repartition -> B` where both A
>> > > > and B have states along with changelogs, and both of them have
>> > > > standbys. If a record is piped from the source and completely
>> > > > traversed through the topology, we need to make sure that the stream
>> > > > time inferred across:
>> > > >
>> > > > * active task A (inferred from the source record),
>> > > > * active task B (inferred from the derived record from the repartition
>> > > > topic),
>> > > > * standby task A (inferred from the changelog topic of A's store),
>> > > > * standby task B (inferred from the changelog topic of B's store)
>> > > >
>> > > > are consistent (note I'm not saying they should be "exactly the same",
>> > > > but consistent, meaning that they may have different values, but as
>> > > > long as that does not impact time-based queries, it is fine). The main
>> > > > motivation is that on IQ, where both active and standby tasks could be
>> > > > accessed, we can eventually improve our consistency guarantee to have
>> > > > 1) read-your-writes, 2) consistency across stores, etc.
>> > > >
>> > > > I agree with John's assessment in the previous email, and just to
>> > > > clarify more concretely what I'm thinking.
>> > > >
>> > > >
>> > > > Guozhang
>> > > >
>> > > >
>> > > > On Tue, Mar 3, 2020 at 9:03 AM John Roesler 
>> > wrote:
>> > > >
>> > > > > Thanks, Guozhang and Bruno!
>> > > > >
>> > > > > 2)
>> > > > > I had a similar thought to both of you about the metrics, but I
>> > > > > ultimately came out with a conclusion like Bruno's. These aren't
>> > > > > dropped invalid records; they're intentionally dropped, valid, but
>> > > > > unnecessary, updates.
>> > > > > A "warning" for this 

Build failed in Jenkins: kafka-trunk-jdk8 #4299

2020-03-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9668: Iterating over KafkaStreams.getAllMetadata() results in

[github] MINOR: Check store directory empty to decide whether throw task


--
[...truncated 2.90 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest and org.apache.kafka.streams.test.TestRecordTest: all listed tests PASSED (log tail cut off mid-entry)

Build failed in Jenkins: kafka-2.5-jdk8 #57

2020-03-06 Thread Apache Jenkins Server
See 


Changes:

[mumrah] KAFKA-9662: Wait for consumer offset reset in throttle test to avoid


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.TopologyTestDriverTest [Eos enabled = false]: all listed tests PASSED (log tail cut off mid-entry)

Build failed in Jenkins: kafka-trunk-jdk11 #1216

2020-03-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9662: Wait for consumer offset reset in throttle test to avoid


--
[...truncated 2.90 MB...]
org.apache.kafka.streams.test.TestRecordTest and org.apache.kafka.streams.test.ConsumerRecordFactoryTest: all listed tests PASSED (log tail cut off mid-entry)

Jenkins build is back to normal : kafka-trunk-jdk11 #1215

2020-03-06 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-9676) Add test coverage for new ActiveTaskCreator and StandbyTaskCreator

2020-03-06 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9676:
--

 Summary: Add test coverage for new ActiveTaskCreator and 
StandbyTaskCreator
 Key: KAFKA-9676
 URL: https://issues.apache.org/jira/browse/KAFKA-9676
 Project: Kafka
  Issue Type: Test
  Components: streams
Reporter: Boyang Chen


The newly separated ActiveTaskCreator and StandbyTaskCreator have no unit test 
coverage. We should add corresponding tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9675) RocksDB metrics reported always at zero

2020-03-06 Thread Michael Viamari (Jira)
Michael Viamari created KAFKA-9675:
--

 Summary: RocksDB metrics reported always at zero
 Key: KAFKA-9675
 URL: https://issues.apache.org/jira/browse/KAFKA-9675
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Michael Viamari


The RocksDB metrics listed under {{stream-state-metrics}} are reported as zero 
for all metrics and all RocksDB instances. The metrics are present in JMX, but 
their values are always zero.

The Streams application is configured with the metrics recording level at 
{{debug}}. I am able to see metrics with appropriate values in 
{{stream-rocksdb-window-state-metrics}}, {{stream-record-cache-metrics}}, 
{{stream-task-metrics}}, and {{stream-processor-node-metrics}}.
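
For reference, the setup corresponds to something like the following Streams 
configuration (a minimal sketch; the application id and bootstrap servers are 
placeholders):

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
// DEBUG recording level is required for the per-store RocksDB metrics.
props.put(StreamsConfig.METRICS_RECORDING_LEVEL_CONFIG, "DEBUG");
{code}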

Additionally, my DEBUG logs show the appropriate messages for recording events, 
i.e.

{{org.apache.kafka.streams.state.internals.metrics.RocksDBMetricsRecorder 
[RocksDB Metrics Recorder for agg-store] Recording metrics for store agg-store}}

It happens that all of my RocksDB instances are windowed stores, not key-value 
stores, so I haven't been able to check whether this issue is unique to 
windowed stores.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Create content permission

2020-03-06 Thread Colin McCabe
Added

best,
Colin

On Fri, Mar 6, 2020, at 09:45, Cheng Tan wrote:
> It’s d8tltanc. Thank you,
> 
> > On Mar 6, 2020, at 9:35 AM, Matthias J. Sax  wrote:
> > 
> > What is your wiki account id?
> > 
> > - -Matthias
> > 
> > On 3/5/20 2:15 PM, Cheng Tan wrote:
> >> Hello,
> >> 
> >> My name is Cheng Tan and I’m working at Confluent. Can I get the
> >> create content permission so I can create a KIP? Thank you.
> >> 
> >> Best, - Cheng Tan
> >> 
> 
>


[jira] [Resolved] (KAFKA-9635) Should ConfigProvider.subscribe be deprecated?

2020-03-06 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-9635.
-
Resolution: Duplicate

Let's continue the discussion on the mailing list.

> Should ConfigProvider.subscribe be deprecated?
> --
>
> Key: KAFKA-9635
> URL: https://issues.apache.org/jira/browse/KAFKA-9635
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> KIP 297 added the ConfigProvider interface for use with Kafka Connect.
> It seems that at that time it was anticipated that config providers should 
> have a change notification mechanism to facilitate dynamic reconfiguration. 
> This was realised by having {{subscribe()}}, {{unsubscribe()}} and 
> {{unsubscribeAll()}} methods in the ConfigProvider interface.
> KIP-421 subsequently added the ability to use config providers with other 
> configs (e.g. client, broker and Kafka Streams). KIP-421 didn't end up using 
> the change notification feature, since it was incompatible with being able to 
> update broker configs atomically.
> As things currently stand the {{subscribe()}}, {{unsubscribe()}} and 
> {{unsubscribeAll()}}  methods remain in the ConfigProvider interface but are 
> not used anywhere in the Kafka code base. Is there an intention to make use 
> of these methods, or should they be deprecated?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Create content permission

2020-03-06 Thread Cheng Tan
It’s d8tltanc. Thank you,

> On Mar 6, 2020, at 9:35 AM, Matthias J. Sax  wrote:
> 
> What is your wiki account id?
> 
> - -Matthias
> 
> On 3/5/20 2:15 PM, Cheng Tan wrote:
>> Hello,
>> 
>> My name is Cheng Tan and I’m working at Confluent. Can I get the
>> create content permission so I can create a KIP? Thank you.
>> 
>> Best, - Cheng Tan
>> 



[jira] [Created] (KAFKA-9674) Task corruption should also close the producer if necessary

2020-03-06 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9674:
--

 Summary: Task corruption should also close the producer if 
necessary
 Key: KAFKA-9674
 URL: https://issues.apache.org/jira/browse/KAFKA-9674
 Project: Kafka
  Issue Type: Bug
Reporter: Boyang Chen
Assignee: Boyang Chen


The task revive call only transitions the task to the CREATED state. It should 
handle the recreation of the task producer as well.

Sequence is like:
 # Task hits out of range exception and throws CorruptedException
 # Task producer closed along with the task
 # Task revived and rebalance triggered
 # Task was assigned back to the same thread
 # Trying to use task producer will throw as it has already been closed.

The full log:

 

[2020-03-03T21:56:29-08:00] 
(streams-soak-trunk-eos_soak_i-0eaa3f3a6a197f876_streamslog) [2020-03-04 
05:56:29,070] WARN 
[stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
stream-thread 
[stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
Encountered org.apache.kafka.clients.consumer.OffsetOutOfRangeException 
fetching records from restore consumer for partitions 
[stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-19-changelog-0], it is 
likely that the consumer's position has fallen out of the topic partition 
offset range because the topic was truncated or compacted on the broker, 
marking the corresponding tasks as corrupted and re-initializing it later. 
(org.apache.kafka.streams.processor.internals.StoreChangelogReader)

[2020-03-03T21:56:29-08:00] 
(streams-soak-trunk-eos_soak_i-0eaa3f3a6a197f876_streamslog) [2020-03-04 
05:56:29,071] WARN 
[stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
stream-thread 
[stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] Detected 
the states of tasks 
\{1_0=[stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-19-changelog-0]} 
are corrupted. Will close the task as dirty and re-create and bootstrap from 
scratch. (org.apache.kafka.streams.processor.internals.StreamThread)

 

[2020-03-03T21:56:30-08:00] 
(streams-soak-trunk-eos_soak_i-0eaa3f3a6a197f876_streamslog) [2020-03-04 
05:56:30,010] INFO 
[stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
[Producer 
clientId=stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3-1_0-producer,
 transactionalId=stream-soak-test-1_0] Closing the Kafka producer with 
timeoutMillis = 9223372036854775807 ms. 
(org.apache.kafka.clients.producer.KafkaProducer)

 

 

[2020-03-03T21:56:30-08:00] 
(streams-soak-trunk-eos_soak_i-0eaa3f3a6a197f876_streamslog) [2020-03-04 
05:56:30,017] INFO 
[stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
stream-thread 
[stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] task 
[1_0] Closed clean (org.apache.kafka.streams.processor.internals.StreamTask)

 

 

[2020-03-03T21:56:22-08:00] 
(streams-soak-trunk-eos_soak_i-0eaa3f3a6a197f876_streamslog) [2020-03-04 
05:56:22,827] INFO 
[stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
[Producer 
clientId=stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3-1_0-producer,
 transactionalId=stream-soak-test-1_0] Closing the Kafka producer with 
timeoutMillis = 9223372036854775807 ms. 
(org.apache.kafka.clients.producer.KafkaProducer)

[2020-03-03T21:56:22-08:00] 
(streams-soak-trunk-eos_soak_i-0eaa3f3a6a197f876_streamslog) [2020-03-04 
05:56:22,829] INFO 
[stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
stream-thread 
[stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] task 
[1_0] Closed dirty (org.apache.kafka.streams.processor.internals.StreamTask)

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9673) Conditionally apply SMTs

2020-03-06 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9673:
--

 Summary: Conditionally apply SMTs
 Key: KAFKA-9673
 URL: https://issues.apache.org/jira/browse/KAFKA-9673
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect
Reporter: Tom Bentley
Assignee: Tom Bentley


KAFKA-7052 ended up using IAE with a message, rather than NPE, in the case of 
an SMT being applied to a record lacking a given field. It's still not possible 
to apply an SMT conditionally, which is what things like Debezium really need 
in order to apply transformations only to non-schema-change events.

[~rhauch] suggested a mechanism to conditionally apply any SMT, but was 
concerned about the possibility of a naming collision (assuming it was 
configured by a simple config).

I'd like to propose something which would solve this problem without the 
possibility of such collisions. The idea is to have a higher-level condition, 
which applies an arbitrary transformation (or transformation chain) according 
to some predicate on the record. 

More concretely, it might be configured like this:

{noformat}
  transforms.conditionalExtract.type: Conditional
  transforms.conditionalExtract.transforms: extractInt
  transforms.conditionalExtract.transforms.extractInt.type: 
org.apache.kafka.connect.transforms.ExtractField$Key
  transforms.conditionalExtract.transforms.extractInt.field: c1
  transforms.conditionalExtract.condition: topic-matches:<pattern>
{noformat}

* The {{Conditional}} SMT is configured with its own list of transforms 
({{transforms.conditionalExtract.transforms}}) to apply. This would work just 
like the top level {{transforms}} config, so subkeys can be used to configure 
these transforms in the usual way.
* The {{condition}} config defines the predicate for when the transforms are 
applied to a record, using a {{<type>:<value>}} syntax

We could initially support three condition types:

*{{topic-matches:<pattern>}}* The transformation would be applied if the 
record's topic name matched the given regular expression pattern. For example, 
the following would apply the transformation on records being sent to any topic 
with a name beginning with "my-prefix-":
{noformat}
   transforms.conditionalExtract.condition: topic-matches:my-prefix-.*
{noformat}
   
*{{has-header:<header name>}}* The transformation would be applied if the 
record had at least one header with the given name. For example, the following 
will apply the transformation on records with at least one header with the name 
"my-header":
{noformat}
   transforms.conditionalExtract.condition: has-header:my-header
{noformat}
   
*{{not:<condition name>}}* This would negate the result of another named 
condition, configured under the condition config prefix. For example, the 
following will apply the transformation on records which lack any header with 
the name my-header:

{noformat}
  transforms.conditionalExtract.condition: not:hasMyHeader
  transforms.conditionalExtract.condition.hasMyHeader: has-header:my-header
{noformat}

I foresee one implementation concern with this approach, which is that 
currently {{Transformation}} has to return a fixed {{ConfigDef}}, and this 
proposal would require something more flexible in order to allow the config 
parameters to depend on the listed transform aliases (and similarly for the 
named condition used by the {{not:<condition name>}} predicate). I think this 
could be done by adding a {{default}} method to {{Transformation}} for getting 
the ConfigDef given the config, for example.
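
To make that last point concrete, the suggestion might look roughly like this 
(purely a sketch of the idea, not the existing Connect API):

{code:java}
import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;

// Sketch only: a default method would let a transformation derive its
// ConfigDef from the raw config, so keys that depend on user-chosen aliases
// (e.g. transforms.conditionalExtract.transforms.extractInt.*) can still be
// described and validated.
public interface DynamicallyConfigurableTransformation {
    ConfigDef config(); // today's fixed definition

    default ConfigDef config(Map<String, String> props) {
        return config(); // existing transforms keep their static ConfigDef
    }
}
{code}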

Obviously this would require a KIP, but before I spend any more time on this 
I'd be interested in your thoughts [~rhauch], [~rmoff], [~gunnar.morling].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Create content permission

2020-03-06 Thread Matthias J. Sax
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

What is your wiki account id?

- -Matthias

On 3/5/20 2:15 PM, Cheng Tan wrote:
> Hello,
>
> My name is Cheng Tan and I’m working at Confluent. Can I get the
> create content permission so I can create a KIP? Thank you.
>
> Best, - Cheng Tan
>
-BEGIN PGP SIGNATURE-

iQIzBAEBCgAdFiEEI8mthP+5zxXZZdDSO4miYXKq/OgFAl5iiecACgkQO4miYXKq
/OjBww//U6258HTqWZZ7DlWNTYmrjw+ksQE60BqeMhaNWI3gsMhiuzoihprr3iUP
sxiCwdeGGnDmP45whUDhrwzGtH0p9NaRct2E6+s6CkrwTZX0jut/HH2ytM7tqYVh
JJ8VvYNoBt1BI7gWetoaFHtIeRfbo0Y+k8KtkUHw630o+3gIMuPjs8lZvUgsTiy7
hgjaHXItSbHpvc72PtFpXHc4XKBqhp9goO6DHQ8LCjoJPvN+62QQwjC6E8fjDhcH
2wI3Je16Yuo74SDuCSxEDpfoAmzIR74916KrbhYTjAUlYDZWkmAjWvzLOLjisK2B
Bsf0F7ihpigHTT07bQitENZS0mQ+jyU6UaHpsqAnGyaEUFr4UXgqK8z87ByTgp9C
zhTBjELgnk6USCFkPvptgJx6kBqXb9lj5PkBhMEGJupoZC9LaQMSNQqXUtU0rPPm
sf35tlWaMiY/SVE3pnb/6Km7M+z5+oGRL9nxp97JRxxHwBThf3+CLhzd2+3HJozp
xCbnQKIINxgOJT0Ckj4rsMe2vYaZ79y1m054PtP2PteF24yaJTP46esXz27lba4T
O0e0FoIeiYYGat2ReBpOb10+qxx2EICzL6fvQcaZSLDbN3HJ7a3aB1ktjLh6ZqW6
OCxLZBIwAzCDPNAlmZLzYZJ9CcgAbWLiKz5FgEQdkZk7I1koIg0=
=LONO
-END PGP SIGNATURE-


Re: Subject: [VOTE] 2.4.1 RC0

2020-03-06 Thread Colin McCabe
+1 (binding)

Checked the git hash and branch, looked at the docs a bit.  Ran quickstart 
(although not the connect or streams parts).  Looks good.

best,
Colin


On Fri, Mar 6, 2020, at 07:31, David Arthur wrote:
> +1 (binding)
> 
> Downloaded kafka_2.13-2.4.1 and verified the signature, ran the quickstart,
> everything looks good.
> 
> Thanks for running this release, Bill!
> 
> -David
> 
> 
> 
> On Wed, Mar 4, 2020 at 6:06 AM Eno Thereska  wrote:
> 
> > Hi Bill,
> >
> > I built from source and ran unit and integration tests. They passed.
> > There was a large number of skipped tests, but I'm assuming that is
> > intentional.
> >
> > Cheers
> > Eno
> >
> > On Tue, Mar 3, 2020 at 8:42 PM Eric Lalonde  wrote:
> > >
> > > Hi,
> > >
> > > I ran:
> > > $ https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh
> > > 2.4.1 https://home.apache.org/~bbejeck/kafka-2.4.1-rc0
> > >
> > > All checksums and signatures are good and all unit and integration tests
> > that were executed passed successfully.
> > >
> > > - Eric
> > >
> > > > On Mar 2, 2020, at 6:39 PM, Bill Bejeck  wrote:
> > > >
> > > > Hello Kafka users, developers and client-developers,
> > > >
> > > > This is the first candidate for release of Apache Kafka 2.4.1.
> > > >
> > > > This is a bug fix release and it includes fixes and improvements from
> > 38
> > > > JIRAs, including a few critical bugs.
> > > >
> > > > Release notes for the 2.4.1 release:
> > > > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/RELEASE_NOTES.html
> > > >
> > > > *Please download, test and vote by Thursday, March 5, 9 am PT*
> > > >
> > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > https://kafka.apache.org/KEYS
> > > >
> > > > * Release artifacts to be voted upon (source and binary):
> > > > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/
> > > >
> > > > * Maven artifacts to be voted upon:
> > > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > >
> > > > * Javadoc:
> > > > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/javadoc/
> > > >
> > > > * Tag to be voted upon (off 2.4 branch) is the 2.4.1 tag:
> > > > https://github.com/apache/kafka/releases/tag/2.4.1-rc0
> > > >
> > > > * Documentation:
> > > > https://kafka.apache.org/24/documentation.html
> > > >
> > > > * Protocol:
> > > > https://kafka.apache.org/24/protocol.html
> > > >
> > > > * Successful Jenkins builds for the 2.4 branch:
> > > > Unit/integration tests: Links to successful unit/integration test
> > build to
> > > > follow
> > > > System tests:
> > > > https://jenkins.confluent.io/job/system-test-kafka/job/2.4/152/
> > > >
> > > >
> > > > Thanks,
> > > > Bill Bejeck
> > >
> >
> 
> 
> -- 
> David Arthur
>


Re: Subject: [VOTE] 2.4.1 RC0

2020-03-06 Thread David Arthur
+1 (binding)

Downloaded kafka_2.13-2.4.1 and verified signature, ran quickstart,
everything looks good.

Thanks for running this release, Bill!

-David



On Wed, Mar 4, 2020 at 6:06 AM Eno Thereska  wrote:

> Hi Bill,
>
> I built from source and ran unit and integration tests. They passed.
> There was a large number of skipped tests, but I'm assuming that is
> intentional.
>
> Cheers
> Eno
>
> On Tue, Mar 3, 2020 at 8:42 PM Eric Lalonde  wrote:
> >
> > Hi,
> >
> > I ran:
> > $ https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh
> > 2.4.1 https://home.apache.org/~bbejeck/kafka-2.4.1-rc0
> >
> > All checksums and signatures are good and all unit and integration tests
> that were executed passed successfully.
> >
> > - Eric
> >
> > > On Mar 2, 2020, at 6:39 PM, Bill Bejeck  wrote:
> > >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the first candidate for release of Apache Kafka 2.4.1.
> > >
> > > This is a bug fix release and it includes fixes and improvements from
> 38
> > > JIRAs, including a few critical bugs.
> > >
> > > Release notes for the 2.4.1 release:
> > > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/RELEASE_NOTES.html
> > >
> > > *Please download, test and vote by Thursday, March 5, 9 am PT*
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > https://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/javadoc/
> > >
> > > * Tag to be voted upon (off 2.4 branch) is the 2.4.1 tag:
> > > https://github.com/apache/kafka/releases/tag/2.4.1-rc0
> > >
> > > * Documentation:
> > > https://kafka.apache.org/24/documentation.html
> > >
> > > * Protocol:
> > > https://kafka.apache.org/24/protocol.html
> > >
> > > * Successful Jenkins builds for the 2.4 branch:
> > > Unit/integration tests: Links to successful unit/integration test
> build to
> > > follow
> > > System tests:
> > > https://jenkins.confluent.io/job/system-test-kafka/job/2.4/152/
> > >
> > >
> > > Thanks,
> > > Bill Bejeck
> >
>


-- 
David Arthur


Re: [VOTE] KIP-570: Add leader epoch in StopReplicaRequest

2020-03-06 Thread David Jacot
Hi all,

The vote has passed with +3 binding votes (Jason Gustafson, Gwen Shapira,
Jun Rao).

Thanks to everyone!

Best,
David

On Wed, Mar 4, 2020 at 9:02 AM David Jacot  wrote:

> Hi Jun,
>
> You're right. I have noticed it while implementing it. I plan to use a
> default
> value as a sentinel in the protocol (e.g. -2) to cover this case.
>
> David
>
> On Wed, Mar 4, 2020 at 3:18 AM Jun Rao  wrote:
>
>> Hi, David,
>>
>> Thanks for the KIP. +1 from me too. Just one comment below.
>>
>> 1. Regarding the sentinel leader epoch to indicate topic deletion, it
>> seems
>> that we need to use a different sentinel value to indicate that the leader
>> epoch is not present when the controller is still on the old version
>> during
>> upgrade.
>>
>> Jun
>>
>> On Mon, Mar 2, 2020 at 11:20 AM Gwen Shapira  wrote:
>>
>> > +1
>> >
>> > On Mon, Feb 24, 2020, 2:16 AM David Jacot  wrote:
>> >
>> > > Hi all,
>> > >
>> > > I would like to start a vote on KIP-570: Add leader epoch in
>> > > StopReplicaRequest
>> > >
>> > > The KIP is here:
>> > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-570%3A+Add+leader+epoch+in+StopReplicaRequest
>> > >
>> > > Thanks,
>> > > David
>> > >
>> >
>>
>


[jira] [Created] (KAFKA-9672) Dead broker in ISR causes isr-expiration to fail with exception

2020-03-06 Thread Ivan Yurchenko (Jira)
Ivan Yurchenko created KAFKA-9672:
-

 Summary: Dead broker in ISR causes isr-expiration to fail with exception
 Key: KAFKA-9672
 URL: https://issues.apache.org/jira/browse/KAFKA-9672
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.4.0, 2.4.1
Reporter: Ivan Yurchenko


We're running Kafka 2.4 and facing a pretty strange situation.
 Let's say there were three brokers in the cluster: 0, 1, and 2. Then:
 1. Broker 3 was added.
 2. Partitions were reassigned from broker 0 to broker 3.
 3. Broker 0 was shut down (not gracefully) and removed from the cluster.
 4. We see the following state in ZooKeeper:
{code:java}
ls /brokers/ids
[1, 2, 3]

get /brokers/topics/foo
{"version":2,"partitions":{"0":[2,1,3]},"adding_replicas":{},"removing_replicas":{}}

get /brokers/topics/foo/partitions/0/state
{"controller_epoch":123,"leader":1,"version":1,"leader_epoch":42,"isr":[0,2,3,1]}
{code}
This means the dead broker 0 remains in the partition's ISR. A big share of the
partitions in the cluster have this issue.

This is actually causing errors:
{code:java}
Uncaught exception in scheduled task 'isr-expiration' 
(kafka.utils.KafkaScheduler)
org.apache.kafka.common.errors.ReplicaNotAvailableException: Replica with id 12 
is not available on broker 17
{code}
This means that the {{isr-expiration}} task is effectively no longer working.

I have a suspicion that this was introduced by [this commit (line 
selected)|https://github.com/apache/kafka/commit/57baa4079d9fc14103411f790b9a025c9f2146a4#diff-5450baca03f57b9f2030f93a480e6969R856]

Unfortunately, I haven't been able to reproduce this in isolation.

Any hints about how to reproduce (so I can write a patch) or mitigate the issue 
on a running cluster are welcome.

Generally, I assume that not throwing {{ReplicaNotAvailableException}} for a
dead (i.e. non-existent) broker, and instead considering it out of sync and
removing it from the ISR, should fix the problem.
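
For illustration, a minimal sketch of that idea (this is not Kafka's actual
ISR-shrink code, just the shape of the behaviour proposed above):
{code:java}
import java.util.Set;
import java.util.stream.Collectors;

// Sketch: when evaluating the ISR, a replica id with no live broker is simply
// treated as out of sync, instead of raising ReplicaNotAvailableException and
// killing the isr-expiration task.
final class IsrExpirationSketch {
    static Set<Integer> outOfSyncReplicas(Set<Integer> isr,
                                          Set<Integer> liveBrokerIds,
                                          Set<Integer> caughtUpReplicas) {
        return isr.stream()
                .filter(id -> !liveBrokerIds.contains(id)
                        || !caughtUpReplicas.contains(id))
                .collect(Collectors.toSet());
    }
}
{code}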

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-518: Allow listing consumer groups per state

2020-03-06 Thread Mickael Maison
Thanks David and Gwen for the votes.
Colin, I believe I've answered all your questions. Can you take another look?

So far we have 1 binding and 5 non-binding votes.

On Mon, Mar 2, 2020 at 4:56 PM Gwen Shapira  wrote:
>
> +1 (binding)
>
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>
> On Mon, Mar 02, 2020 at 8:32 AM, David Jacot  wrote:
>
> > +1 (non-binding). Thanks for the KIP!
> >
> > David
> >
> > On Thu, Feb 6, 2020 at 10:45 PM Colin McCabe  wrote:
> >
> > > Hi Mickael,
> > >
> > > Thanks for the KIP. I left a comment on the DISCUSS thread as well.
> > >
> > > best,
> > > Colin
> > >
> > > On Thu, Feb 6, 2020, at 08:58, Mickael Maison wrote:
> > > >
> > > > Hi Manikumar,
> > > >
> > > > I believe I've answered David's comments in the DISCUSS thread. Thanks
> > > >
> > > > On Wed, Jan 15, 2020 at 10:15 AM Manikumar  wrote:
> > > > >
> > > > > Hi Mickael,
> > > > >
> > > > > Thanks for the KIP. Can you respond to the comments from David on
> > > > > discuss thread?
> > > > >
> > > > > Thanks,


Re: [VOTE] KIP-409: Allow creating under-replicated topics and partitions

2020-03-06 Thread Mickael Maison
Bumping this thread once again.
So far we only have 4 non-binding votes.

Please take a look at the KIP, share any feedback and consider voting.
Thanks

On Thu, Feb 27, 2020 at 12:03 AM Ryanne Dolan  wrote:
>
> Hey all, please consider voting for this KIP.  It's really a shame that
> topic creation is impossible when clusters are under-provisioned, which is
> not uncommon in a dynamic environment like Kubernetes.
>
> Ryanne
>
> On Thu, Feb 6, 2020 at 10:57 AM Mickael Maison 
> wrote:
>
> > I have not seen any new feedback nor votes.
> > Bumping this thread again
> >
> > On Mon, Jan 27, 2020 at 3:55 PM Mickael Maison 
> > wrote:
> > >
> > > Hi,
> > >
> > > We are now at 4 non-binding votes but still no binding votes.
> > > I have not seen any outstanding questions in the DISCUSS thread. If
> > > you have any feedback, please let me know.
> > >
> > > Thanks
> > >
> > >
> > > On Thu, Jan 16, 2020 at 2:03 PM M. Manna  wrote:
> > > >
> > > > Mickael,
> > > >
> > > >
> > > >
> > > > On Thu, 16 Jan 2020 at 14:01, Mickael Maison 
> > > > wrote:
> > > >
> > > > > Hi Manna,
> > > > >
> > > > > In your example, the topic 'dummy' is not under-replicated. It just
> > > > > has 1 replica. An under-replicated topic is a topic with fewer ISRs
> > > > > than replicas.
> > > > >
> > > > > Having under-replicated topics is relatively common in a Kafka
> > > > > cluster; it happens every time a broker is down. However, Kafka does
> > > > > not permit it at topic creation. Currently at creation, Kafka
> > > > > requires at least as many brokers as the replication factor. This
> > > > > KIP addresses this limitation.
> > > > >
> > > > > Regarding your 2nd point: when rack awareness is enabled, Kafka tries
> > > > > to distribute partitions across racks. When all brokers in a rack are
> > > > > down (ie: a zone is offline), you can end up with partitions not well
> > > > > distributed even with rack awareness. There is currently no easy way
> > > > > to track such partitions, so I decided not to attempt addressing this
> > > > > issue in this KIP.
> > > > >
> > > > > I hope that answers your questions.
> > > > >
> > > >
> > > >  It does, and I appreciate you taking the time to explain this.
> > > >
> > > >  +1 (binding) if I haven't already.
> > > >
> > > > >
> > > > >
> > > > >
> > > > > On Wed, Jan 15, 2020 at 4:10 PM Kamal Chandraprakash
> > > > >  wrote:
> > > > > >
> > > > > > +1 (non-binding). Thanks for the KIP!
> > > > > >
> > > > > > On Mon, Jan 13, 2020 at 1:58 PM M. Manna 
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Mickael,
> > > > > > >
> > > > > > > Apologies for the last-minute question, as I just caught up with
> > > > > > > it. Thanks for your work on the KIP.
> > > > > > >
> > > > > > > Just trying to get your thoughts on one thing (I might have
> > > > > > > misunderstood it) - currently it's possible (even though I am
> > > > > > > strongly against it) to create Kafka topics which are
> > > > > > > under-replicated, despite all brokers being online. This is the
> > > > > > > output of an intentionally under-replicated topic "dummy" with
> > > > > > > p=6 and RF=1 (with a 3-node cluster):
> > > > > > >
> > > > > > > virtualadmin@kafka-broker-machine-1:/opt/kafka/bin$ ./kafka-topics.sh
> > > > > > > --create --topic dummy --partitions 6 --replication-factor 1
> > > > > > > --bootstrap-server localhost:9092
> > > > > > > virtualadmin@kafka-broker-machine-1:/opt/kafka/bin$ ./kafka-topics.sh
> > > > > > > --describe --topic dummy --bootstrap-server localhost:9092
> > > > > > > Topic:dummy PartitionCount:6 ReplicationFactor:1
> > > > > > > Configs:compression.type=gzip,min.insync.replicas=2,cleanup.policy=delete,segment.bytes=10485760,max.message.bytes=10642642,retention.bytes=20971520
> > > > > > > Topic: dummy  Partition: 0  Leader: 3  Replicas: 3  Isr: 3
> > > > > > > Topic: dummy  Partition: 1  Leader: 1  Replicas: 1  Isr: 1
> > > > > > > Topic: dummy  Partition: 2  Leader: 2  Replicas: 2  Isr: 2
> > > > > > > Topic: dummy  Partition: 3  Leader: 3  Replicas: 3  Isr: 3
> > > > > > > Topic: dummy  Partition: 4  Leader: 1  Replicas: 1  Isr: 1
> > > > > > > Topic: dummy  Partition: 5  Leader: 2  Replicas: 2  Isr: 2
> > > > > > >
> > > > > > > This is with respect to the following statement on your KIP (i.e.
> > > > > > > under-replicated topic creation is also permitted when none is
> > > > > > > offline):
> > > > > > >
> > > > > > > > *but note that this may already happen (without this KIP) when
> > > > > > > > topics/partitions are created while all brokers in a rack are
> > > > > > > > offline (ie: an availability zone is offline). Tracking
> > > > > > > > topics/partitions not optimally spread across 

[jira] [Created] (KAFKA-9671) Support sha512sum file format

2020-03-06 Thread Jira
Aljoscha Pörtner created KAFKA-9671:
---

 Summary: Support sha512sum file format
 Key: KAFKA-9671
 URL: https://issues.apache.org/jira/browse/KAFKA-9671
 Project: Kafka
  Issue Type: Improvement
  Components: packaging
Affects Versions: 2.4.0
Reporter: Aljoscha Pörtner


The generated *.tgz.sha512 cannot be checked using {{sha512sum -c}} because of
the file format generated by GPG. The problem applies to *.tgz.md5 and
*.tgz.sha1 as well.

Current format:
{code:java}
kafka_2.12-2.4.0.tgz: 53B52F86 EA56C9FA C6204652 4F03F756 65A089EA 2DAE554A
 EFE3A3D2 694F2DA8 8B5BA872 5D8BE55F 198BA806 95443559
 ED9DE7C0 B2A2817F 7A614100 8FF79F49{code}
Expected format:
{code:java}
53b52f86ea56c9fac62046524f03f75665a089ea2dae554aefe3a3d2694f2da88b5ba8725d8be55f198ba80695443559ed9de7c0b2a2817f7a6141008ff79f49
  kafka_2.12-2.4.0.tgz
{code}
A fix would make it easier to automate the check, e.g. within Dockerfiles.
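
In the meantime, a minimal workaround sketch (not part of Kafka; it assumes the
GPG-style checksum format shown above, and the file names are illustrative)
that normalizes the checksum file and compares it against a freshly computed
SHA-512:
{code:java}
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class VerifySha512 {
    public static void main(String[] args) throws Exception {
        String archive = args[0];   // e.g. kafka_2.12-2.4.0.tgz
        String checksum = args[1];  // e.g. kafka_2.12-2.4.0.tgz.sha512

        // Strip the leading "file:" label and all whitespace, lowercase the hex.
        String expected = new String(Files.readAllBytes(Paths.get(checksum)),
                StandardCharsets.UTF_8)
                .replaceFirst("^[^:]*:", "")
                .replaceAll("\\s", "")
                .toLowerCase();

        // Compute the actual SHA-512 of the archive and render it as hex.
        byte[] digest = MessageDigest.getInstance("SHA-512")
                .digest(Files.readAllBytes(Paths.get(archive)));
        StringBuilder actual = new StringBuilder();
        for (byte b : digest) {
            actual.append(String.format("%02x", b & 0xff));
        }

        System.out.println(actual.toString().equals(expected) ? "OK" : "MISMATCH");
    }
}
{code}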

*Discussion on the same topic at the Apache Spark project: 
[http://apache-spark-developers-list.1001551.n3.nabble.com/Changing-how-we-compute-release-hashes-td23599.html]

Thank you.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)