Re: [DISCUSS] KIP-637 Include min.insync.replicas in MetadataResponse to make Producer smarter in partitioning events

2020-07-07 Thread Arvin Zheng
Thanks for the comment. Yes, it's indeed a downside that this will increase
the size of the metadata response. I was wondering whether it's
worth providing this information to the Producer conditionally, e.g. adding a
config to the Producer to let people choose whether to include this
information in the MetadataResponse or not. What do you think?
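
For illustration, a minimal sketch (in Java) of the ISR-aware partitioning this
KIP is meant to enable. PartitionInfo does not expose min.insync.replicas
today; minIsr() below is a stand-in for the hypothetical field KIP-637 would
surface through the metadata response:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ThreadLocalRandom;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;
    import org.apache.kafka.common.PartitionInfo;

    public class IsrAwarePartitioner implements Partitioner {
        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
            for (PartitionInfo p : partitions) {
                // Prefer partitions that can currently satisfy acks=all.
                if (p.inSyncReplicas().length >= minIsr(p)) {
                    return p.partition();
                }
            }
            // No partition qualifies; fall back to a random choice.
            return ThreadLocalRandom.current().nextInt(partitions.size());
        }

        // Stand-in for the hypothetical per-partition min.insync.replicas value.
        private int minIsr(PartitionInfo p) {
            return 2;
        }

        @Override
        public void close() {}

        @Override
        public void configure(Map<String, ?> configs) {}
    }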

Br,
Arvin

Colin McCabe wrote on Tue, Jul 7, 2020 at 7:40 AM:

> Hi Arvin,
>
> Thanks for the KIP.
>
> Unfortunately, I don't think this makes sense since it would increase the
> amount of data we send back in the metadata response, which is pretty bad
> for scalability.  In general we probably want to avoid the case where we
> don't have the appropriate number of in-sync replicas, not try to optimize
> for it.
>
> best,
> Colin
>
> On Mon, Jul 6, 2020, at 10:38, Arvin Zheng wrote:
> > Hi everyone,
> >
> > I would like to start the discussion for KIP-637
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-637%3A+Include+min.insync.replicas+in+MetadataResponse+to+make+Producer+smarter+in+partitioning+events
> >
> > Looking forward to your feedback.
> >
> > Thanks,
> > Arvin
> >
>


[jira] [Resolved] (KAFKA-10221) Backport fix for KAFKA-9603 to 2.5

2020-07-07 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-10221.
--
Resolution: Fixed

> Backport fix for KAFKA-9603 to 2.5 
> ---
>
> Key: KAFKA-10221
> URL: https://issues.apache.org/jira/browse/KAFKA-10221
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.5.0
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Blocker
> Fix For: 2.5.1
>
>
> The fix for [KAFKA-9603|https://issues.apache.org/jira/browse/KAFKA-9603] 
> shall be backported to 2.5. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10246) AbstractProcessorContext topic() throws NullPointerException when modifying a state store within the DSL from a punctuator

2020-07-07 Thread Peter Pringle (Jira)
Peter Pringle created KAFKA-10246:
-

 Summary: AbstractProcessorContext topic() throws 
NullPointerException when modifying a state store within the DSL from a 
punctuator
 Key: KAFKA-10246
 URL: https://issues.apache.org/jira/browse/KAFKA-10246
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.5.0
 Environment: linux, windows, java 11
Reporter: Peter Pringle


It seems valid for AbstractProcessorContext.topic() to return null; however,
the check below throws a NullPointerException before the null can be returned.
{quote}if (topic.equals(NONEXIST_TOPIC)) {
{quote}
A local fix that reverses the ordering of the check (i.e. avoids dereferencing
the null) appears to resolve the issue, and the change is sent to the state
store's changelog topic.
{quote}if (NONEXIST_TOPIC.equals(topic)) {
{quote}
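
The fix works because equals() is invoked on the known-non-null constant, which
is null-safe. A minimal illustration (the sentinel value is an assumption for
the sketch, not necessarily the real constant):

    final String NONEXIST_TOPIC = "__null_topic__"; // assumed sentinel value
    String topic = null;                            // topic() may legitimately be null

    // Original ordering: topic.equals(NONEXIST_TOPIC) throws NullPointerException.
    // Reversed ordering: evaluates to false and execution continues normally.
    if (NONEXIST_TOPIC.equals(topic)) {
        // handle the sentinel topic
    }
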
The stack trace below is seen when deleting from a previously declared,
materialized KTable state store, called from a punctuator that was added to the
topology (via either process/valueTransform) within the init method:

{{2020-07-02 07:29:46,829 
[ABC_aggregator-551a90c1-d7c3-4357-a608-3ea79951f4e8-StreamThread-5] ERROR 
[o.a.k.s.p.i.StreamThread]: stream-thread [ABC_aggregator-5}}
{{51a90c1-d7c3-4357-a608-3ea79951f4e8-StreamThread-5] Encountered the following 
error during processing:}}
{{java.lang.NullPointerException: null}}
{{ at 
org.apache.kafka.streams.processor.internals.AbstractProcessorContext.topic(AbstractProcessorContext.java:115)}}
{{ at 
org.apache.kafka.streams.state.internals.CachingKeyValueStore.putInternal(CachingKeyValueStore.java:141)}}
{{ at 
org.apache.kafka.streams.state.internals.CachingKeyValueStore.put(CachingKeyValueStore.java:123)}}
{{ at 
org.apache.kafka.streams.state.internals.CachingKeyValueStore.put(CachingKeyValueStore.java:36)}}
{{ at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore.lambda$put$3(MeteredKeyValueStore.java:144)}}
{{ at 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:806)}}
{{ at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore.put(MeteredKeyValueStore.java:144)}}
{{ at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl$KeyValueStoreReadWriteDecorator.put(ProcessorContextImpl.java:487)}}
{{ at 
org.apache.kafka.streams.kstream.internals.KTableKTableJoinMerger$KTableKTableJoinMergeProcessor.process(KTableKTableJoinMerger.java:118)}}
{{ at 
org.apache.kafka.streams.kstream.internals.KTableKTableJoinMerger$KTableKTableJoinMergeProcessor.process(KTableKTableJoinMerger.java:97)}}
{{ at 
org.apache.kafka.streams.processor.internals.ProcessorNode.lambda$process$2(ProcessorNode.java:142)}}
{{ at 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:806)}}
{{ at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:142)}}
{{ at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)}}
{{ at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)}}
{{ at 
org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoin$KTableKTableOuterJoinProcessor.process(KTableKTableOuterJoin.java:118)}}
{{ at 
org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoin$KTableKTableOuterJoinProcessor.process(KTableKTableOuterJoin.java:65)}}
{{ at 
org.apache.kafka.streams.processor.internals.ProcessorNode.lambda$process$2(ProcessorNode.java:142)}}
{{ at 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:806)}}
{{ at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:142)}}
{{ at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)}}
{{ at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)}}
{{ at 
org.apache.kafka.streams.kstream.internals.TimestampedCacheFlushListener.apply(TimestampedCacheFlushListener.java:45)}}
{{ at 
org.apache.kafka.streams.kstream.internals.TimestampedCacheFlushListener.apply(TimestampedCacheFlushListener.java:28)}}
{{ at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore.lambda$setFlushListener$1(MeteredKeyValueStore.java:119)}}
{{ at 
org.apache.kafka.streams.state.internals.CachingKeyValueStore.putAndMaybeForward(CachingKeyValueStore.java:92)}}
{{ at 
org.apache.kafka.streams.state.internals.CachingKeyValueStore.lambda$initInternal$0(CachingKeyValueStore.java:72)}}
{{ at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:151)}}
{{ at 
org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:244)}}
{{ at 

Re: [VOTE] KIP-431: Support of printing additional ConsumerRecord fields in DefaultMessageFormatter

2020-07-07 Thread John Roesler
Hi Badai,

Thanks for picking this up. I've reviewed the KIP document and
the threads you linked. I think we may want to make more 
improvements in the future to the printing of headers in particular,
but this KIP seems like a clear benefit already. I think we can
take it incrementally.

I'm +1 (binding)

Thanks,
-John

On Tue, Jul 7, 2020, at 09:57, Badai Aqrandista wrote:
> Hi all
> 
> After resurrecting the discussion thread [1] for KIP-431 and receiving no
> further feedback for 2 weeks, I would like to resurrect
> the voting thread [2] for KIP-431.
> 
> I have updated KIP-431 wiki page
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-431%3A+Support+of+printing+additional+ConsumerRecord+fields+in+DefaultMessageFormatter)
> to address Ismael's comment on that thread [3].
> 
> Does anyone else have other comments or objections about this KIP?
> 
> [1] 
> https://lists.apache.org/thread.html/raabf3268ed05931b8a048fce0d6cdf6a326aee4b0d89713d6e6998d6%40%3Cdev.kafka.apache.org%3E
> 
> [2] 
> https://lists.apache.org/thread.html/41fff34873184625370f9e76b8d9257f7a9e7892ab616afe64b4f67c%40%3Cdev.kafka.apache.org%3E
> 
> [3] 
> https://lists.apache.org/thread.html/99e9cbaad4a0a49b96db104de450c9f488d4b2b03a09b991bcbadbc7%40%3Cdev.kafka.apache.org%3E
> 
> -- 
> Thanks,
> Badai
>
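
For context, the proposed usage looks roughly like this (property names as
proposed on the KIP page; a sketch, since the feature had not shipped at the
time of this vote):

    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
      --topic my-topic \
      --property print.partition=true \
      --property print.headers=true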


Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-07-07 Thread Ron Dagostino
Hi Colin.  Thanks for the KIP.  Here is some feedback and various questions.

"*Controller processes will listen on a separate port from brokers.  This
will be true even when the broker and controller are co-located in the same
JVM*". I assume it is possible that the port numbers could be the same when
using separate JVMs (i.e. broker uses port 9192 and controller also uses
port 9192).  I think it would be clearer to state this along these
lines: "Controller
nodes will listen on a port, and the controller port must differ from any
port that a broker in the same JVM is listening on.  In other words, a
controller and a broker node, when in the same JVM, do not share ports"
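
For concreteness, the separate-listener layout could look roughly like this in
server.properties (a sketch only; the property names below are from the KRaft
configuration that eventually shipped and were not finalized at the time of
this thread):

    process.roles=broker,controller
    node.id=1
    listeners=PLAINTEXT://:9092,CONTROLLER://:9093
    controller.listener.names=CONTROLLER
    controller.quorum.voters=1@localhost:9093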

I think the sentence "*In the realm of ACLs, this translates to controllers
requiring CLUSTERACTION on CLUSTER for all operations*" is confusing.  It
feels to me that you can just delete it.  Am I missing something here?

The KIP states "*The metadata will be stored in memory on all the active
controllers.*"  Can there be multiple active controllers?  Should it
instead read "The metadata will be stored in memory on all potential
controllers." (or something like that)?

KIP-595 states "*we have assumed the name __cluster_metadata for this
topic, but this is not a formal part of this proposal*".  This KIP-631
states "*Metadata changes need to be persisted to the __metadata log before
we propagate them to the other nodes in the cluster.  This means waiting
for the metadata log's last stable offset to advance to the offset of the
change.*"  Are we here formally defining "__metadata" as the topic name,
and should these sentences refer to "__metadata topic" rather than
"__metadata log"?  What are the "other nodes in the cluster" that are
referred to?  These are not controller nodes but brokers, right?  If so,
then should we say "before we propagate them to the brokers"?  Technically
we have a controller cluster and a broker cluster -- two separate clusters,
correct?  (Even though we could potentially share JVMs and therefore
require no additional processes.)  If the statement is referring to nodes
in both clusters then maybe we should state "before we propagate them to
the other nodes in the controller cluster or to brokers."

"*The controller may have several of these uncommitted changes in flight at
any given time.  In essence, the controller's in-memory state is always a
little bit in the future compared to the current state.  This allows the
controller to continue doing things while it waits for the previous changes
to be committed to the Raft log.*"  Should the three references above be to
the active controller rather than just the controller?

"*Therefore, the controller must not make this future state "visible" to
the rest of the cluster until it has been made persistent – that is, until
it becomes current state*". Again I wonder if this should refer to "active"
controller, and indicate "anyone else" as opposed to "the rest of the
cluster" since we are talking about 2 clusters here?

"*When the active controller decides that it itself should create a
snapshot, it will first try to give up the leadership of the Raft quorum.*"
Why?  Is it necessary to state this?  It seems like it might be an
implementation detail rather than a necessary constraint/requirement that
we declare publicly and would have to abide by.

"*It will reject brokers whose metadata is too stale*". Why?  An example
might be helpful here.

"*it may lose subsequent conflicts if its broker epoch is stale*" This is
the first time a "broker epoch" is mentioned.  I am assuming it is the
controller epoch communicated to it (if any).  It would be good to
introduce it/explicitly state what it is before referring to it.

Ron

On Tue, Jul 7, 2020 at 6:48 PM Colin McCabe  wrote:

> Hi all,
>
> I posted a KIP about how the quorum-based controller envisioned in KIP-500
> will work.  Please take a look here:
> https://cwiki.apache.org/confluence/x/4RV4CQ
>
> best,
> Colin
>


Build failed in Jenkins: kafka-trunk-jdk11 #1626

2020-07-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10191 fix flaky StreamsOptimizedTest (#8913)


--
[...truncated 6.38 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: kafka-trunk-jdk14 #275

2020-07-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10191 fix flaky StreamsOptimizedTest (#8913)


--
[...truncated 6.38 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

[DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-07-07 Thread Colin McCabe
Hi all,

I posted a KIP about how the quorum-based controller envisioned in KIP-500 will 
work.  Please take a look here: https://cwiki.apache.org/confluence/x/4RV4CQ

best,
Colin


Jenkins build is back to normal : kafka-trunk-jdk8 #4700

2020-07-07 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-622 Add currentSystemTimeMs and currentStreamTimeMs to ProcessorContext

2020-07-07 Thread William Bottrell
Sure, I would appreciate help from Piotr creating an example.

On Tue, Jul 7, 2020 at 12:03 PM Boyang Chen 
wrote:

> Hey John,
>
> since ProcessorContext is a public API, I couldn't be sure that people
> won't try to extend it. Without a default implementation, user code
> compilation will break.
>
> William and Piotr, it seems that we haven't added any example usage of the
> new API, could we try to address that? It should help with the motivation
> and follow-up meta comments as John proposed.
>
> Boyang
>
> On Mon, Jul 6, 2020 at 12:04 PM Matthias J. Sax  wrote:
>
> > William,
> >
> > thanks for the KIP. LGTM. Feel free to start a vote.
> >
> >
> > -Matthias
> >
> >
> > On 7/4/20 10:14 AM, John Roesler wrote:
> > > Hi Richard,
> > >
> > > It’s good to hear from you!
> > >
> > > Thanks for bringing up the wall-clock suppression feature. IIRC,
> someone
> > actually started a KIP discussion for it already, but I don’t think it
> went
> > to a vote. I don’t recall any technical impediment, just the lack of
> > availability to finish it up. Although there is some association, it
> would
> > be good to keep the KIPs separate.
> > >
> > > Thanks,
> > > John
> > >
> > > On Sat, Jul 4, 2020, at 10:05, Richard Yu wrote:
> > >> Hi all,
> > >>
> > >> This reminds me of a previous issue I think that we were discussing.
> > >> @John Roesler  I think you should
> remember
> > this one.
> > >>
> > >> A while back, we were talking about having suppress operator emit
> > >> records by wall-clock time instead of stream time.
> > >> If we are adding this, wouldn't that make it more feasible for us to
> > >> implement that feature for suppression?
> > >>
> > >> If I recall correctly, there actually had been quite a bit of user
> > >> demand for such a feature.
> > >> Might be good to include it in this KIP (or maybe use one of the prior
> > >> KIPs for this feature).
> > >>
> > >> Best,
> > >> Richard
> > >>
> > >> On Sat, Jul 4, 2020 at 6:58 AM John Roesler 
> > wrote:
> > >>> Hi all,
> > >>>
> > >>>  1. Thanks, Boyang, it is nice to see usage examples in KIPs like
> > this. It helps during the discussion, and it’s also good documentation
> > later on.
> > >>>
> > >>>  2. Yeah, this is a subtle point. The motivation mentions being able
> > to control the time during tests, but to be able to make it work, the
> > processor implementation needs a public method on ProcessorContext to get
> > the time. Otherwise, processors would have to check the type of the
> context
> > and cast, depending on whether they’re running inside a test or not. In
> > retrospect, if we’d had a usage example, this probably would have been
> > clear.
> > >>>
> > >>>  3. I don’t think we expect people to have their own implementations
> > of ProcessorContext. Since all implementations are internal, it’s really
> an
> > implementation detail whether we use a default method, abstract methods,
> or
> > concrete methods. I can’t think of an implementation that really wants to
> > just look up the system time. In the production code path, we cache the
> > time for performance, and in testing, we use a mock time.
> > >>>
> > >>>  Thanks,
> > >>>  John
> > >>>
> > >>>
> > >>>  On Fri, Jul 3, 2020, at 06:41, Piotr Smoliński wrote:
> > >>>  > 1. Makes sense; let me propose something
> > >>>  >
> > >>>  > 2. That's not testing-only. The goal is to use the same API to
> > access
> > >>>  > the time
> > >>>  > in deployment and testing environments. The major driver is
> > >>>  > System.currentTimeMillis(),
> > >>>  > which a) cannot be used in tests b) could go in specific cases
> back
> > c)
> > >>>  > is not compatible
> > >>>  > with punctuator call. The idea is that we could access clock using
> > >>>  > uniform API.
> > >>>  > For completness we should have same API for system and stream
> time.
> > >>>  >
> > >>>  > 3. There aren't that many subclasses. Two important ones are
> > >>>  > ProcessorContextImpl and
> > >>>  > MockProcessorContext (and third one:
> > >>>  > ForwardingDisableProcessorContext). If given
> > >>>  > implementation does not support schedule() call, there is no
> reason
> > to
> > >>>  > support clock access.
> > >>>  > The default implementation should just throw
> > >>>  > UnsupportedOperationException just to prevent
> > >>>  > from compilation errors in possible subclasses.
> > >>>  >
> > >>>  > On 2020/07/01 02:24:43, Boyang Chen 
> > wrote:
> > >>>  > > Thanks Will for the KIP. A couple questions and suggestions:
> > >>>  > >
> > >>>  > > 1. I think for new APIs to make most sense, we should add a
> > minimal example
> > >>>  > > demonstrating how it could be useful to structure unit tests w/o
> > the new
> > >>>  > > APIs.
> > >>>  > > 2. If this is a testing-only feature, could we only add it
> > >>>  > > to MockProcessorContext?
> > >>>  > > 3. Regarding the API, since this will be added to the
> > ProcessorContext with
> > >>>  > > many subclasses, does it make sense to provide default
> > 

Jenkins build is back to normal : kafka-trunk-jdk14 #274

2020-07-07 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-622 Add currentSystemTimeMs and currentStreamTimeMs to ProcessorContext

2020-07-07 Thread Boyang Chen
Hey John,

since ProcessorContext is a public API, I couldn't be sure that people
won't try to extend it. Without a default implementation, user code
compilation will break.
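
A minimal sketch of a source-compatible shape for this, using the method names
from the KIP title and the UnsupportedOperationException default Piotr
suggested (the exact signatures are up to the KIP):

    public interface ProcessorContext {
        // ... existing methods elided ...

        // Defaults keep existing ProcessorContext implementations compiling.
        default long currentSystemTimeMs() {
            throw new UnsupportedOperationException("not implemented");
        }

        default long currentStreamTimeMs() {
            throw new UnsupportedOperationException("not implemented");
        }
    }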

William and Piotr, it seems that we haven't added any example usage of the
new API, could we try to address that? It should help with the motivation
and follow-up meta comments as John proposed.

Boyang

On Mon, Jul 6, 2020 at 12:04 PM Matthias J. Sax  wrote:

> William,
>
> thanks for the KIP. LGTM. Feel free to start a vote.
>
>
> -Matthias
>
>
> On 7/4/20 10:14 AM, John Roesler wrote:
> > Hi Richard,
> >
> > It’s good to hear from you!
> >
> > Thanks for bringing up the wall-clock suppression feature. IIRC, someone
> actually started a KIP discussion for it already, but I don’t think it went
> to a vote. I don’t recall any technical impediment, just the lack of
> availability to finish it up. Although there is some association, it would
> be good to keep the KIPs separate.
> >
> > Thanks,
> > John
> >
> > On Sat, Jul 4, 2020, at 10:05, Richard Yu wrote:
> >> Hi all,
> >>
> >> This reminds me of a previous issue I think that we were discussing.
> >> @John Roesler  I think you should remember
> this one.
> >>
> >> A while back, we were talking about having suppress operator emit
> >> records by wall-clock time instead of stream time.
> >> If we are adding this, wouldn't that make it more feasible for us to
> >> implement that feature for suppression?
> >>
> >> If I recall correctly, there actually had been quite a bit of user
> >> demand for such a feature.
> >> Might be good to include it in this KIP (or maybe use one of the prior
> >> KIPs for this feature).
> >>
> >> Best,
> >> Richard
> >>
> >> On Sat, Jul 4, 2020 at 6:58 AM John Roesler 
> wrote:
> >>> Hi all,
> >>>
> >>>  1. Thanks, Boyang, it is nice to see usage examples in KIPs like
> this. It helps during the discussion, and it’s also good documentation
> later on.
> >>>
> >>>  2. Yeah, this is a subtle point. The motivation mentions being able
> to control the time during tests, but to be able to make it work, the
> processor implementation needs a public method on ProcessorContext to get
> the time. Otherwise, processors would have to check the type of the context
> and cast, depending on whether they’re running inside a test or not. In
> retrospect, if we’d had a usage example, this probably would have been
> clear.
> >>>
> >>>  3. I don’t think we expect people to have their own implementations
> of ProcessorContext. Since all implementations are internal, it’s really an
> implementation detail whether we use a default method, abstract methods, or
> concrete methods. I can’t think of an implementation that really wants to
> just look up the system time. In the production code path, we cache the
> time for performance, and in testing, we use a mock time.
> >>>
> >>>  Thanks,
> >>>  John
> >>>
> >>>
> >>>  On Fri, Jul 3, 2020, at 06:41, Piotr Smoliński wrote:
> >>>  > 1. Makes sense; let me propose something
> >>>  >
> >>>  > 2. That's not testing-only. The goal is to use the same API to
> access
> >>>  > the time
> >>>  > in deployment and testing environments. The major driver is
> >>>  > System.currentTimeMillis(),
> >>>  > which a) cannot be used in tests, b) could in specific cases go
> >>>  > back, and c) is not compatible with the punctuator call. The idea
> >>>  > is that we could access the clock using a uniform API.
> >>>  > For completeness we should have the same API for system and stream
> >>>  > time.
> >>>  >
> >>>  > 3. There aren't that many subclasses. Two important ones are
> >>>  > ProcessorContextImpl and
> >>>  > MockProcessorContext (and third one:
> >>>  > ForwardingDisableProcessorContext). If given
> >>>  > implementation does not support schedule() call, there is no reason
> to
> >>>  > support clock access.
> >>>  > The default implementation should just throw
> >>>  > UnsupportedOperationException just to prevent
> >>>  > from compilation errors in possible subclasses.
> >>>  >
> >>>  > On 2020/07/01 02:24:43, Boyang Chen 
> wrote:
> >>>  > > Thanks Will for the KIP. A couple questions and suggestions:
> >>>  > >
> >>>  > > 1. I think for new APIs to make most sense, we should add a
> minimal example
> >>>  > > demonstrating how it could be useful to structure unit tests w/o
> the new
> >>>  > > APIs.
> >>>  > > 2. If this is a testing-only feature, could we only add it
> >>>  > > to MockProcessorContext?
> >>>  > > 3. Regarding the API, since this will be added to the
> ProcessorContext with
> >>>  > > many subclasses, does it make sense to provide default
> implementations as
> >>>  > > well?
> >>>  > >
> >>>  > > Boyang
> >>>  > >
> >>>  > > On Tue, Jun 30, 2020 at 6:56 PM William Bottrell <
> bottre...@gmail.com>
> >>>  > > wrote:
> >>>  > >
> >>>  > > > Thanks, John! I made the change. How much longer should I let
> there be
> >>>  > > > discussion before starting a VOTE?
> >>>  > > >
> >>>  > > > On Sat, Jun 27, 2020 at 6:50 AM 

[VOTE] KIP-622 Add currentSystemTimeMs and currentStreamTimeMs to ProcessorContext

2020-07-07 Thread William Bottrell
Hi everyone,

I'd like to start a vote for adding two new time APIs to ProcessorContext.

Add currentSystemTimeMs and currentStreamTimeMs to ProcessorContext


 Thanks everyone for the initial feedback and thanks for your time.


[jira] [Resolved] (KAFKA-10222) Incorrect methods show up in 0.10 Kafka Streams docs

2020-07-07 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-10222.
-
Resolution: Fixed

> Incorrect methods show up in 0.10 Kafka Streams docs
> 
>
> Key: KAFKA-10222
> URL: https://issues.apache.org/jira/browse/KAFKA-10222
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.10.0.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
>
> In 0.10 Kafka Streams 
> doc([http://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/streams/KafkaStreams.html]),
>  two wrong methods show up, as shown below:
> _builder.from("my-input-topic").mapValue(value -> 
> value.length().toString()).to("my-output-topic");_
>  
> There is no method named `from` or `mapValue`. They should be `stream` and 
> `mapValues` respectively.
>  
>  
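
For reference, a corrected version of the snippet (a sketch: `stream` and
`mapValues` replace the non-existent `from` and `mapValue`; the value mapping
is also adjusted so the lambda compiles for String values):

    // Corrected 0.10 DSL call chain, assuming <String, String> records.
    builder.stream("my-input-topic")
           .mapValues(value -> String.valueOf(value.length()))
           .to("my-output-topic");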



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10191) fix flaky StreamsOptimizedTest

2020-07-07 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-10191.
-
Resolution: Fixed

> fix flaky StreamsOptimizedTest
> --
>
> Key: KAFKA-10191
> URL: https://issues.apache.org/jira/browse/KAFKA-10191
> Project: Kafka
>  Issue Type: Test
>  Components: streams, unit tests
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 2.7.0
>
>
> {quote}Exception in thread 
> "StreamsOptimizedTest-53c7d3b1-12b2-4d02-90b1-15757dfd2735-StreamThread-1" 
> java.lang.IllegalStateException: Tried to lookup lag for unknown task 
> 2_0Exception in thread 
> "StreamsOptimizedTest-53c7d3b1-12b2-4d02-90b1-15757dfd2735-StreamThread-1" 
> java.lang.IllegalStateException: Tried to lookup lag for unknown task 2_0 at 
> org.apache.kafka.streams.processor.internals.assignment.ClientState.lagFor(ClientState.java:306)
>  at java.util.Comparator.lambda$comparingLong$6043328a$1(Comparator.java:511) 
> at java.util.Comparator.lambda$thenComparing$36697e65$1(Comparator.java:216) 
> at java.util.TreeMap.put(TreeMap.java:552) at 
> java.util.TreeSet.add(TreeSet.java:255) at 
> java.util.AbstractCollection.addAll(AbstractCollection.java:344) at 
> java.util.TreeSet.addAll(TreeSet.java:312) at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.getPreviousTasksByLag(StreamsPartitionAssignor.java:1250)
>  at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assignTasksToThreads(StreamsPartitionAssignor.java:1164)
>  at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.computeNewAssignment(StreamsPartitionAssignor.java:920)
>  at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assign(StreamsPartitionAssignor.java:391)
>  at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:583)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:689)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1400(AbstractCoordinator.java:111)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:602)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:575)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1132)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1107)
>  at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:206)
>  at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:169)
>  at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:129)
>  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:602)
>  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:412)
>  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297)
>  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
>  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:419)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:359)
>  at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:506)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1263)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1229) 
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1204) 
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:762)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:622)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:549)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:508)
> {quote}
>  
> this issue may be related to 
> 

[jira] [Created] (KAFKA-10245) Using vulnerable log4j version

2020-07-07 Thread Pavel Kuznetsov (Jira)
Pavel Kuznetsov created KAFKA-10245:
---

 Summary: Using vulnerable log4j version
 Key: KAFKA-10245
 URL: https://issues.apache.org/jira/browse/KAFKA-10245
 Project: Kafka
  Issue Type: Bug
  Components: core, KafkaConnect
Affects Versions: 2.5.0
Reporter: Pavel Kuznetsov


*Description*
I checked the kafka_2.12-2.5.0.tgz distribution with WhiteSource and found that 
the log4j version used in kafka-connect and the kafka broker has vulnerabilities:
 * log4j-1.2.17.jar has 
[CVE-2019-17571|https://github.com/advisories/GHSA-2qrg-x229-3v8q] and 
[CVE-2020-9488|https://github.com/advisories/GHSA-vwqq-5vrc-xw9h] 
vulnerabilities. The way to fix it is to upgrade to 
org.apache.logging.log4j:log4j-core:2.13.2

*To Reproduce*
Download kafka_2.12-2.5.0.tgz
Open libs folder in it and find log4j-1.2.17.jar.
Check [CVE-2019-17571|https://github.com/advisories/GHSA-2qrg-x229-3v8q] and 
[CVE-2020-9488|https://github.com/advisories/GHSA-vwqq-5vrc-xw9h] to see that 
log4j 1.2.17 is vulnerable.

*Expected*
 * log4j is log4j-core 2.13.2 or higher

*Actual*
 * log4j is 1.2.17
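
For projects that pull Kafka in as a library, a sketch of the dependency change
the reporter suggests (coordinates taken from this issue; the log4j-1.2-api
bridge is an added assumption for code still compiled against the log4j 1.x
API):

    // build.gradle sketch, assuming a Gradle build.
    dependencies {
        implementation 'org.apache.logging.log4j:log4j-core:2.13.2'
        // Bridge for libraries still calling the legacy log4j 1.x API.
        implementation 'org.apache.logging.log4j:log4j-1.2-api:2.13.2'
    }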



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4699

2020-07-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10243; ConcurrentModificationException while processing 
connection


--
[...truncated 3.16 MB...]

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

[GitHub] [kafka-site] mjsax commented on pull request #271: MINOR: Fix table of contents in protocol page

2020-07-07 Thread GitBox


mjsax commented on pull request #271:
URL: https://github.com/apache/kafka-site/pull/271#issuecomment-655024302


   Thanks @wkodate 







[GitHub] [kafka-site] mjsax merged pull request #271: MINOR: Fix table of contents in protocol page

2020-07-07 Thread GitBox


mjsax merged pull request #271:
URL: https://github.com/apache/kafka-site/pull/271


   







[GitHub] [kafka-site] mjsax commented on pull request #272: KAFKA-10222:Incorrect methods show up in 0.10 Kafka Streams docs

2020-07-07 Thread GitBox


mjsax commented on pull request #272:
URL: https://github.com/apache/kafka-site/pull/272#issuecomment-655021969


   We can still merge this PR, because otherwise the web-page would not be 
updated.







[GitHub] [kafka-site] mjsax merged pull request #272: KAFKA-10222:Incorrect methods show up in 0.10 Kafka Streams docs

2020-07-07 Thread GitBox


mjsax merged pull request #272:
URL: https://github.com/apache/kafka-site/pull/272


   







Re: [DISCUSSION] KIP-619: Add internal topic creation support

2020-07-07 Thread Cheng Tan
Hi Colin,


Thanks for the comments. I’ve modified the KIP accordingly.

> I think we need to understand which of these limitations we will carry 
> forward and which we will not.  We also have the option of putting 
> limitations just on consumer offsets, but not on other internal topics.


In the proposal, I added details about this. I agree that the cluster admin
should use ACLs to apply the restrictions. Concretely:
  • Internal topic creation will be allowed.
  • Internal topic deletion will be allowed except for `__consumer_offsets`
and `__transaction_state`.
  • Producing to internal topic partitions other than `__consumer_offsets`
and `__transaction_state` will be allowed.
  • Adding internal topic partitions to transactions will be allowed.
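
To make this concrete, a sketch of how an application such as Kafka Connect
might create one of its internal topics if the KIP lands (the "internal" topic
config key is the KIP's proposal, not an existing config):

    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateInternalTopic {
        public static void main(String[] args) throws Exception {
            Map<String, Object> conf = Map.of("bootstrap.servers", "localhost:9092");
            try (Admin admin = Admin.create(conf)) {
                NewTopic topic = new NewTopic("connect-offsets", 1, (short) 3)
                        .configs(Map.of("internal", "true")); // hypothetical key (KIP-619)
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }
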
> I think there are a fair number of compatibility concerns.  What's the result 
> if someone tries to create a topic with the configuration internal = true 
> right now?  Does it fail?  If not, that seems like a potential problem.

I also added this compatibility issue in the "Compatibility, Deprecation, and 
Migration Plan" section.

Please feel free to make any suggestions or comments regarding my latest 
proposal. Thanks.


Best, - Cheng Tan






> On Jun 15, 2020, at 11:18 AM, Colin McCabe  wrote:
> 
> Hi Cheng,
> 
> The link from the main KIP page is an "edit link" meaning that it drops you 
> into the editor for the wiki page.  I think the link you meant to use is a 
> "view link" that will just take you to view the page.
> 
> In general I'm not sure what I'm supposed to take away from the large UML 
> diagram in the KIP.  This is just a description of the existing code, right?  
> Seems like we should remove this.
> 
> I'm not sure why the controller classes are featured here since as far as I 
> can tell, the controller doesn't need to care if a topic is internal.
> 
>> Kafka and its upstream applications treat internal topics differently from
>> non-internal topics. For example:
>> * Kafka handles topic creation response errors differently for internal 
>> topics
>> * Internal topic partitions cannot be added to a transaction
>> * Internal topic records cannot be deleted
>> * Appending to internal topics might get rejected
> 
> I think we need to understand which of these limitations we will carry 
> forward and which we will not.  We also have the option of putting 
> limitations just on consumer offsets, but not on other internal topics.
> 
> Taking it one by one:
> 
>> * Kafka handles topic creation response errors differently for internal 
>> topics.
> 
> Hmm.  Kafka doesn't currently allow you to create internal topics, so the 
> difference here is that you always fail, right?  Or is there something else 
> more subtle here?  Like do we specifically prevent you from creating topics 
> named __consumer_offsets or something?  We need to spell this all out in the 
> KIP.
> 
>> * Internal topic partitions cannot be added to a transaction
> 
> I don't think we should carry this limitation forward, or if we do, we should 
> only do it for consumer-offsets.  Does anyone know why this limitation exists?
> 
>> * Internal topic records cannot be deleted
> 
> This seems like something that should be handled by ACLs rather than by 
> treating internal topics specially.
> 
>> * Appending to internal topics might get rejected
> 
> We clearly need to use ACLs here rather than rejecting appends.  Otherwise, 
> how will external systems like KSQL, streams, etc. use this feature?  This is 
> the kind of information we need to have in the KIP.
> 
>> Public Interfaces
>> 2. KafkaZkClient will have a new method getInternalTopics() which 
>> returns a set of internal topic name strings.
> 
> KafkaZkClient isn't a public interface, so it doesn't need to be described 
> here.
> 
>> There are no compatibility concerns in this KIP.
> 
> I think there are a fair number of compatibility concerns.  What's the result 
> if someone tries to create a topic with the configuration internal = true 
> right now?  Does it fail?  If not, that seems like a potential problem.
> 
> Are people going to be able to create or delete topics named 
> __consumer_offsets or __transaction_state using this mechanism?  If so, how 
> does the security model work for that?
> 
> best,
> Colin
> 
> On Fri, May 29, 2020, at 01:09, Cheng Tan wrote:
>> Hello developers,
>> 
>> 
>> I’m proposing KIP-619 to add internal topic creation support. 
>> 
>> Kafka and its upstream applications treat internal topics differently 
>> from non-internal topics. For example:
>> 
>>  • Kafka handles topic creation response errors differently for internal 
>> topics
>>  • Internal topic partitions cannot be added to a transaction
>>  • Internal topic records cannot be deleted
>>  • Appending to internal topics might get rejected
>>  • ……
>> 
>> Clients and upstream applications may define their own internal topics. 
>> For example, Kafka Connect defines `connect-configs`, 
>> `connect-offsets`, and `connect-statuses`. 

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-07-07 Thread Jun Rao
Hi, Ying,

Thanks for the update. It's good to see the progress on this. Please let us
know when you are done updating the KIP wiki.

Jun

On Tue, Jul 7, 2020 at 10:13 AM Ying Zheng  wrote:

> Hi Jun,
>
> Satish and I have added more design details in the KIP, including how to
> keep consistency between replicas (especially when there is leadership
> changes / log truncations) and new metrics. We also made some other minor
> changes in the doc. We will finish the KIP changes in the next couple of
> days. We will let you know when we are done. Most of the changes are
> already updated to the wiki KIP. You can take a look. But it's not the
> final version yet.
>
> As for the implementation, the code is mostly done and we already had some
> feature tests / system tests. I have added the performance test results in
> the KIP. However the recent design changes (e.g. leader epoch info
> management / log truncation / some of the new metrics) have not been
> implemented yet. It will take about 2 weeks for us to implement after you
> review and agree with those design changes.
>
>
>
> On Tue, Jul 7, 2020 at 9:23 AM Jun Rao  wrote:
>
> > Hi, Satish, Harsha,
> >
> > Any new updates on the KIP? This feature is one of the most important and
> > most requested features in Apache Kafka right now. It would be helpful if
> > we can make sustained progress on this. Could you share how far along
> > the design/implementation is right now? Is there anything that other people
> > can help to get it across the line?
> >
> > As for "transactional support" and "follower requests/replication", no
> > further comments from me as long as the producer state and leader epoch
> can
> > be restored properly from the object store when needed.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Jun 9, 2020 at 3:39 AM Satish Duggana 
> > wrote:
> >
> > > We did not want to add many implementation details in the KIP. But we
> > > decided to add them in the KIP as appendix or sub-sections(including
> > > follower fetch protocol) to describe the flow with the main cases.
> > > That will answer most of the queries. I will update on this mail
> > > thread when the respective sections are updated.
> > >
> > > Thanks,
> > > Satish.
> > >
> > > On Sat, Jun 6, 2020 at 7:49 PM Alexandre Dupriez
> > >  wrote:
> > > >
> > > > Hi Satish,
> > > >
> > > > A couple of questions specific to the section "Follower
> > > > Requests/Replication", pages 16:17 in the design document [1].
> > > >
> > > > 900. It is mentioned that followers fetch auxiliary states from the
> > > > remote storage.
> > > >
> > > > 900.a Does the consistency model of the external storage impact
> reads
> > > > of leader epochs and other auxiliary data?
> > > >
> > > > 900.b What are the benefits of using a mechanism to store and access
> > > > the leader epochs which is different from other metadata associated with
> > > > tiered segments? What are the benefits of retrieving this information
> > > > on-demand from the follower rather than relying on propagation via
> the
> > > > topic __remote_log_metadata? What are the advantages over using a
> > > > dedicated control structure (e.g. a new record type) propagated via
> > > > this topic? Since in the document, different control paths are
> > > > operating in the system, how are the metadata stored in
> > > > __remote_log_metadata [which also include the epoch of the leader
> > > > which offloaded a segment] and the remote auxiliary states, kept in
> > > > sync?
> > > >
> > > > 900.c A follower can encounter an OFFSET_MOVED_TO_TIERED_STORAGE. Is
> > > > this in response to a Fetch or OffsetForLeaderEpoch request?
> > > >
> > > > 900.d What happens if, after a follower encountered an
> > > > OFFSET_MOVED_TO_TIERED_STORAGE response, its attempts to retrieve
> > > > leader epochs fail (for instance, because the remote storage is
> > > > temporarily unavailable)? Does the follower fall back to a mode where
> > > > it ignores tiered segments, and applies truncation using only locally
> > > > available information? What happens when access to the remote storage
> > > > is restored? How is the replica lineage inferred by the remote leader
> > > > epochs reconciled with the follower's replica lineage, which has
> > > > evolved? Does the follower remember that fetching auxiliary states
> > > > failed in the past and attempt reconciliation? Is there a plan to
> > > > offer different strategies in this scenario, configurable via
> > > > configuration?
> > > >
> > > > 900.e Is the leader epoch cache offloaded with every segment? Or when
> > > > a new checkpoint is detected? If that information is not always
> > > > offloaded to avoid duplicating data, how does the remote storage
> > > > satisfy the request to retrieve it?
> > > >
> > > > 900.f Since the leader epoch cache covers the entire replica lineage,
> > > > what happens if, after a leader epoch cache file is offloaded with a
> > > > given segment, the local epoch cache is truncated [not necessarily for
> 

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-07-07 Thread Ying Zheng
Hi Jun,

Satish and I have added more design details in the KIP, including how to
keep consistency between replicas (especially when there are leadership
changes / log truncations) and new metrics. We also made some other minor
changes in the doc. We will finish the KIP changes in the next couple of
days. We will let you know when we are done. Most of the changes are
already updated to the wiki KIP. You can take a look. But it's not the
final version yet.

As for the implementation, the code is mostly done and we already have some
feature tests / system tests. I have added the performance test results in
the KIP. However, the recent design changes (e.g. leader epoch info
management / log truncation / some of the new metrics) have not been
implemented yet. It will take us about 2 weeks to implement them after you
review and agree with those design changes.



On Tue, Jul 7, 2020 at 9:23 AM Jun Rao  wrote:

> Hi, Satish, Harsha,
>
> Any new updates on the KIP? This feature is one of the most important and
> most requested features in Apache Kafka right now. It would be helpful if
> we can make sustained progress on this. Could you share how far along the
> design/implementation is right now? Is there anything that other people
> can help with to get it across the line?
>
> As for "transactional support" and "follower requests/replication", no
> further comments from me as long as the producer state and leader epoch can
> be restored properly from the object store when needed.
>
> Thanks,
>
> Jun
>
> On Tue, Jun 9, 2020 at 3:39 AM Satish Duggana 
> wrote:
>
> > We did not want to add many implementation details in the KIP. But we
> > decided to add them to the KIP as an appendix or sub-sections (including
> > follower fetch protocol) to describe the flow with the main cases.
> > That will answer most of the queries. I will update on this mail
> > thread when the respective sections are updated.
> >
> > Thanks,
> > Satish.
> >
> > On Sat, Jun 6, 2020 at 7:49 PM Alexandre Dupriez
> >  wrote:
> > >
> > > Hi Satish,
> > >
> > > A couple of questions specific to the section "Follower
> > > Requests/Replication", pages 16:17 in the design document [1].
> > >
> > > 900. It is mentioned that followers fetch auxiliary states from the
> > > remote storage.
> > >
> > > 900.a Does the consistency model of the external storage impact reads
> > > of leader epochs and other auxiliary data?
> > >
> > > 900.b What are the benefits of using a mechanism to store and access
> > > the leader epochs which is different from other metadata associated
> > > with tiered segments? What are the benefits of retrieving this
> > > information on-demand from the follower rather than relying on
> > > propagation via the topic __remote_log_metadata? What are the
> > > advantages over using a dedicated control structure (e.g. a new record
> > > type) propagated via this topic? Since different control paths operate
> > > in the system, how are the metadata stored in __remote_log_metadata
> > > [which also includes the epoch of the leader that offloaded a segment]
> > > and the remote auxiliary states kept in sync?
> > >
> > > 900.c A follower can encounter an OFFSET_MOVED_TO_TIERED_STORAGE. Is
> > > this in response to a Fetch or OffsetForLeaderEpoch request?
> > >
> > > 900.d What happens if, after a follower encountered an
> > > OFFSET_MOVED_TO_TIERED_STORAGE response, its attempts to retrieve
> > > leader epochs fail (for instance, because the remote storage is
> > > temporarily unavailable)? Does the follower fall back to a mode where
> > > it ignores tiered segments, and applies truncation using only locally
> > > available information? What happens when access to the remote storage
> > > is restored? How is the replica lineage inferred by the remote leader
> > > epochs reconciled with the follower's replica lineage, which has
> > > evolved? Does the follower remember that fetching auxiliary states
> > > failed in the past and attempt reconciliation? Is there a plan to offer
> > > different strategies in this scenario, configurable via configuration?
> > >
> > > 900.e Is the leader epoch cache offloaded with every segment? Or when
> > > a new checkpoint is detected? If that information is not always
> > > offloaded to avoid duplicating data, how does the remote storage
> > > satisfy the request to retrieve it?
> > >
> > > 900.f Since the leader epoch cache covers the entire replica lineage,
> > > what happens if, after a leader epoch cache file is offloaded with a
> > > given segment, the local epoch cache is truncated [not necessarily for
> > > a range of offset included in tiered segments]? How are remote and
> > > local leader epoch caches kept consistent?
> > >
> > > 900.g Consumers can also use leader epochs (e.g. to enable fencing to
> > > protect against stale leaders). What differences would there be
> > > between consumer and follower fetches? Especially, would consumers
> > > also fetch leader epoch information from 

Re: [VOTE] KIP-632: Add DirectoryConfigProvider

2020-07-07 Thread David Jacot
+1 (non-binding). Thanks for the KIP!

On Tue, Jul 7, 2020 at 12:54 PM Tom Bentley  wrote:

> Hi,
>
> I'd like to start a vote on KIP-632, which is about making the config
> provider mechanism more ergonomic on Kubernetes:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-632%3A+Add+DirectoryConfigProvider
>
> Please take a look if you have time.
>
> Many thanks,
>
> Tom
>


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-07-07 Thread Jun Rao
Hi, Satish, Harsha,

Any new updates on the KIP? This feature is one of the most important and
most requested features in Apache Kafka right now. It would be helpful if
we can make sustained progress on this. Could you share how far along the
design/implementation is right now? Is there anything that other people
can help with to get it across the line?

As for "transactional support" and "follower requests/replication", no
further comments from me as long as the producer state and leader epoch can
be restored properly from the object store when needed.

Thanks,

Jun

On Tue, Jun 9, 2020 at 3:39 AM Satish Duggana 
wrote:

> We did not want to add many implementation details in the KIP. But we
> decided to add them to the KIP as an appendix or sub-sections (including
> follower fetch protocol) to describe the flow with the main cases.
> That will answer most of the queries. I will update on this mail
> thread when the respective sections are updated.
>
> Thanks,
> Satish.
>
> On Sat, Jun 6, 2020 at 7:49 PM Alexandre Dupriez
>  wrote:
> >
> > Hi Satish,
> >
> > A couple of questions specific to the section "Follower
> > Requests/Replication", pages 16:17 in the design document [1].
> >
> > 900. It is mentioned that followers fetch auxiliary states from the
> > remote storage.
> >
> > 900.a Does the consistency model of the external storage impact reads
> > of leader epochs and other auxiliary data?
> >
> > 900.b What are the benefits of using a mechanism to store and access
> > the leader epochs which is different from other metadata associated
> > with tiered segments? What are the benefits of retrieving this
> > information on-demand from the follower rather than relying on
> > propagation via the topic __remote_log_metadata? What are the
> > advantages over using a dedicated control structure (e.g. a new record
> > type) propagated via this topic? Since different control paths operate
> > in the system, how are the metadata stored in __remote_log_metadata
> > [which also includes the epoch of the leader that offloaded a segment]
> > and the remote auxiliary states kept in sync?
> >
> > 900.c A follower can encounter an OFFSET_MOVED_TO_TIERED_STORAGE. Is
> > this in response to a Fetch or OffsetForLeaderEpoch request?
> >
> > 900.d What happens if, after a follower encountered an
> > OFFSET_MOVED_TO_TIERED_STORAGE response, its attempts to retrieve
> > leader epochs fail (for instance, because the remote storage is
> > temporarily unavailable)? Does the follower fall back to a mode where
> > it ignores tiered segments, and applies truncation using only locally
> > available information? What happens when access to the remote storage
> > is restored? How is the replica lineage inferred by the remote leader
> > epochs reconciled with the follower's replica lineage, which has
> > evolved? Does the follower remember that fetching auxiliary states
> > failed in the past and attempt reconciliation? Is there a plan to offer
> > different strategies in this scenario, configurable via configuration?
> >
> > 900.e Is the leader epoch cache offloaded with every segment? Or when
> > a new checkpoint is detected? If that information is not always
> > offloaded to avoid duplicating data, how does the remote storage
> > satisfy the request to retrieve it?
> >
> > 900.f Since the leader epoch cache covers the entire replica lineage,
> > what happens if, after a leader epoch cache file is offloaded with a
> > given segment, the local epoch cache is truncated [not necessarily for
> > a range of offset included in tiered segments]? How are remote and
> > local leader epoch caches kept consistent?
> >
> > 900.g Consumers can also use leader epochs (e.g. to enable fencing to
> > protect against stale leaders). What differences would there be
> > between consumer and follower fetches? Especially, would consumers
> > also fetch leader epoch information from the remote storage?
> >
> > 900.h Assume a newly elected leader of a topic-partition detects more
> > recent segments are available in the external storage, with epochs >
> > its local epoch. Does it ignore these segments and their associated
> > epoch-to-offset vectors? Or try to reconstruct its local replica
> > lineage based on the data remotely available?
> >
> > Thanks,
> > Alexandre
> >
> > [1]
> https://docs.google.com/document/d/18tnobSas3mKFZFr8oRguZoj_tkD_sGzivuLRlMloEMs/edit?usp=sharing
> >
> > On Thu, Jun 4, 2020 at 7:55 PM, Satish Duggana 
> > wrote:
> > >
> > > Hi Jun,
> > > Please let us know if you have any comments on "transactional support"
> > > and "follower requests/replication" mentioned in the wiki.
> > >
> > > Thanks,
> > > Satish.
> > >
> > > On Tue, Jun 2, 2020 at 9:25 PM Satish Duggana <
> satish.dugg...@gmail.com> wrote:
> > > >
> > > > Thanks Jun for your comments.
> > > >
> > > > >100. It would be useful to provide more details on how those apis
> are used. Otherwise, it's kind of hard to really assess whether the new
> apis are 

Re: [VOTE] KIP-620 Deprecate ConsumerConfig#addDeserializerToConfig(Properties, Deserializer, Deserializer) and ProducerConfig#addSerializerToConfig(Properties, Serializer, Serializer)

2020-07-07 Thread Boyang Chen
Ok, on second thought, keeping a function that still has a production
reference is ok. We probably should not have made it public in the first
place, but this is not a high priority either.

On Tue, Jul 7, 2020 at 9:03 AM Chia-Ping Tsai  wrote:

> > do we just suggest they no longer have any production use case?
>
> yep
>
> > KafkaProducer internal only. Do we also want to deprecate this public
> > API as well?
>
> We have to make sure users' code can keep working beyond recompilation
> when migrating to the "next" release. Hence, a deprecation cycle is
> necessary.
>
> I don't think my question was answered: why would deprecating the map-based
> `addSerializerToConfig` break users' recompilation? If you worry about
> warnings, we could refactor out the content and create a package-private
> `attachSerializersToConfig` or something similar.

On 2020/07/07 06:52:25, Boyang Chen  wrote:
> > Thanks for the KIP. One question I have is that when we refer to the two
> > methods as useless, do we just suggest they no longer have any production
> > use case? If this is the case, Producer#addSerializerToConfig(Map<String,
> > Object> configs, keySerializer, valueSerializer) is only used in
> > KafkaProducer internal only. Do we also want to deprecate this public API
> > as well?
> >
> > Boyang
> >
> >
> > On Mon, Jul 6, 2020 at 11:36 PM Manikumar 
> wrote:
> >
> > > +1 (binding)
> > >
> > > Thanks for the KIP.
> > >
> > > On Wed, Jun 10, 2020 at 11:43 PM Matthias J. Sax 
> wrote:
> > >
> > > > Yes, it does.
> > > >
> > > > I guess many people are busy wrapping up 2.6 release. Today is code
> > > freeze.
> > > >
> > > >
> > > > -Matthias
> > > >
> > > >
> > > > On 6/10/20 12:11 AM, Chia-Ping Tsai wrote:
> > > > > hi Matthias,
> > > > >
> > > > > Does this straightforward KIP still need 3 votes?
> > > > >
> > > > > On 2020/06/05 21:27:52, "Matthias J. Sax" 
> wrote:
> > > > >> +1 (binding)
> > > > >>
> > > > >> Thanks for the KIP!
> > > > >>
> > > > >>
> > > > >> -Matthias
> > > > >>
> > > > >> On 6/4/20 11:25 PM, Chia-Ping Tsai wrote:
> > > > >>> hi All,
> > > > >>>
> > > > >>> I would like to start the vote on KIP-620:
> > > > >>>
> > > > >>>
> > > >
> > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=155749118
> > > > >>>
> > > > >>> --
> > > > >>> Chia-Ping
> > > > >>>
> > > > >>
> > > > >>
> > > >
> > > >
> > >
> >
>


Re: [VOTE] KIP-620 Deprecate ConsumerConfig#addDeserializerToConfig(Properties, Deserializer, Deserializer) and ProducerConfig#addSerializerToConfig(Properties, Serializer, Serializer)

2020-07-07 Thread Chia-Ping Tsai
> do we just suggest they no longer have any production use case?

yep

> KafkaProducer internal only. Do we also want to deprecate this public API as 
> well?

We have to make sure users' code can keep working beyond recompilation when
migrating to the "next" release. Hence, a deprecation cycle is necessary.

On 2020/07/07 06:52:25, Boyang Chen  wrote: 
> Thanks for the KIP. One question I have is that when we refer to the two
> methods as useless, do we just suggest they no longer have any production
> use case? If this is the case, Producer#addSerializerToConfig(Map<String,
> Object> configs, keySerializer, valueSerializer) is only used in
> KafkaProducer internal only. Do we also want to deprecate this public API
> as well?
> 
> Boyang
> 
> 
> On Mon, Jul 6, 2020 at 11:36 PM Manikumar  wrote:
> 
> > +1 (binding)
> >
> > Thanks for the KIP.
> >
> > On Wed, Jun 10, 2020 at 11:43 PM Matthias J. Sax  wrote:
> >
> > > Yes, it does.
> > >
> > > I guess many people are busy wrapping up 2.6 release. Today is code
> > freeze.
> > >
> > >
> > > -Matthias
> > >
> > >
> > > On 6/10/20 12:11 AM, Chia-Ping Tsai wrote:
> > > > hi Matthias,
> > > >
> > > > Does this straightforward KIP still need 3 votes?
> > > >
> > > > On 2020/06/05 21:27:52, "Matthias J. Sax"  wrote:
> > > >> +1 (binding)
> > > >>
> > > >> Thanks for the KIP!
> > > >>
> > > >>
> > > >> -Matthias
> > > >>
> > > >> On 6/4/20 11:25 PM, Chia-Ping Tsai wrote:
> > > >>> hi All,
> > > >>>
> > > >>> I would like to start the vote on KIP-620:
> > > >>>
> > > >>>
> > >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=155749118
> > > >>>
> > > >>> --
> > > >>> Chia-Ping
> > > >>>
> > > >>
> > > >>
> > >
> > >
> >
> 
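For readers skimming the thread, here is a minimal sketch of the API under
discussion. The Properties-based helper named in the subject is the real
method; the surrounding class and config values are illustrative only, and
the constructor overload shown afterwards is the supported path that makes
the helper redundant in user code:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class Kip620Sketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // Helper the KIP deprecates: copies the serializer configuration into
        // a new Properties object before the producer is constructed.
        Properties enriched = ProducerConfig.addSerializerToConfig(
                props, new StringSerializer(), new StringSerializer());
        System.out.println(enriched.get(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG));

        // Supported alternative: hand serializer instances to the constructor,
        // which is why the helper no longer has a production use case.
        Producer<String, String> producer =
                new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());
        producer.close();
    }
}
```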


[DISCUSS] KIP-641 A new java interface to replace 'kafka.common.MessageReader'

2020-07-07 Thread Chia-Ping Tsai
hi all,

I would like to start the discussion for KIP-641.

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866569

Many thanks,

Chia-Ping


[jira] [Resolved] (KAFKA-10243) ConcurrentModificationException while processing connection setup timeouts

2020-07-07 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-10243.

  Reviewer: Rajini Sivaram
Resolution: Fixed

> ConcurrentModificationException while processing connection setup timeouts
> --
>
> Key: KAFKA-10243
> URL: https://issues.apache.org/jira/browse/KAFKA-10243
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Reporter: Rajini Sivaram
>Assignee: David Jacot
>Priority: Major
> Fix For: 2.7
>
>
> From [~guozhang] in [https://github.com/apache/kafka/pull/8683]:
> {quote}
> java.util.ConcurrentModificationException
>   at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
>   at java.util.HashMap$KeyIterator.next(HashMap.java:1469)
>   at 
> org.apache.kafka.clients.NetworkClient.handleTimedOutConnections(NetworkClient.java:822)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:574)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:419)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:359)
> {quote}
> While processing connection setup timeouts, we iterate through the
> connecting nodes and disconnect within the loop, removing the entry from
> the very set the loop is iterating over:
> {code}
> for (String nodeId : connectingNodes) {
> if (connectionStates.isConnectionSetupTimeout(nodeId, now)) {
> this.selector.close(nodeId);
> log.debug(
> "Disconnecting from node {} due to socket connection 
> setup timeout. " +
> "The timeout value is {} ms.",
> nodeId,
> connectionStates.connectionSetupTimeoutMs(nodeId));
> processDisconnection(responses, nodeId, now, 
> ChannelState.LOCAL_CLOSE);
> }
> }
> {code}
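For readers who want the failure mode in isolation: removing an element from
a HashSet while a for-each loop is iterating over that same set invalidates
the iterator. Below is a self-contained sketch of one safe pattern, iterating
over a snapshot; the names are illustrative and this is not necessarily the
exact shape of the committed fix.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SnapshotIterationSketch {
    public static void main(String[] args) {
        Set<String> connectingNodes = new HashSet<>(Arrays.asList("0", "1", "2"));

        // Iterating over a copy means the remove() below (standing in for
        // processDisconnection(), which mutates connectingNodes) can no longer
        // invalidate the iterator and throw ConcurrentModificationException.
        for (String nodeId : new ArrayList<>(connectingNodes)) {
            if (isTimedOut(nodeId)) {
                connectingNodes.remove(nodeId);
            }
        }
        System.out.println(connectingNodes); // the timed-out node "1" is gone
    }

    private static boolean isTimedOut(String nodeId) {
        return "1".equals(nodeId);
    }
}
{code}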



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10244) A new java interface to replace 'kafka.common.MessageReader'

2020-07-07 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-10244:
--

 Summary: A new java interface to replace 
'kafka.common.MessageReader'
 Key: KAFKA-10244
 URL: https://issues.apache.org/jira/browse/KAFKA-10244
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


inspired by 
https://github.com/apache/kafka/commit/caa806cd82fb9fa88510c81de53e69ac9846311d.

kafka.common.MessageReader is a pure Scala trait and we should offer a Java
replacement to users.
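For reference, the Scala trait exposes an init/read/close lifecycle to the
console producer. A Java counterpart could look roughly like the sketch
below; the interface and method names are illustrative guesses, not the
final API of this ticket:

{code}
import java.io.Closeable;
import java.io.InputStream;
import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerRecord;

// Hypothetical Java mirror of the kafka.common.MessageReader trait.
public interface MessageReader extends Closeable {

    // Called once with the console producer's input stream and the
    // reader-specific properties.
    void init(InputStream inputStream, Properties props);

    // Returns the next record to send, or null when input is exhausted.
    ProducerRecord<byte[], byte[]> readMessage();
}
{code}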



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-363

2020-07-07 Thread Colin McCabe
Hi Tom,

Thanks for this.  I think the tough part is probably the few messages that are 
still using manual serialization, which can't be easily converted to using 
this.  So we will probably have to upgrade them to using automatic generation, 
or accept a little inconsistency for a while until they are upgraded.

best,
Colin


On Wed, Jul 1, 2020, at 09:21, Tom Bentley wrote:
> Hi all,
> 
> Following a suggestion from Colin in the KIP-625 discussion thread, I'd
> like to start discussion on a much smaller KIP which proposes to make error
> codes and messages tagged fields in all RPCs.
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-363%3A+Make+RPC+error+codes+and+messages+tagged+fields
> 
> I'd be grateful for any feedback you may have.
> 
> Kind regards,
> 
> Tom
>


Re: [VOTE] KIP-431: Support of printing additional ConsumerRecord fields in DefaultMessageFormatter

2020-07-07 Thread Badai Aqrandista
Hi all

After resurrecting the discussion thread [1] for KIP-431 and receiving no
further feedback for 2 weeks, I would like to resurrect the voting
thread [2] for KIP-431.

I have updated KIP-431 wiki page
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-431%3A+Support+of+printing+additional+ConsumerRecord+fields+in+DefaultMessageFormatter)
to address Ismael's comment on that thread [3].

Does anyone else have other comments or objections about this KIP?

[1] 
https://lists.apache.org/thread.html/raabf3268ed05931b8a048fce0d6cdf6a326aee4b0d89713d6e6998d6%40%3Cdev.kafka.apache.org%3E

[2] 
https://lists.apache.org/thread.html/41fff34873184625370f9e76b8d9257f7a9e7892ab616afe64b4f67c%40%3Cdev.kafka.apache.org%3E

[3] 
https://lists.apache.org/thread.html/99e9cbaad4a0a49b96db104de450c9f488d4b2b03a09b991bcbadbc7%40%3Cdev.kafka.apache.org%3E

-- 
Thanks,
Badai


Re: [VOTE] KIP-621: Deprecate and replace DescribeLogDirsResult.all() and .values()

2020-07-07 Thread Colin McCabe
Thanks, Tom.

I tried to think of a better way to do this, but I think you're right that we 
probably just need different methods.

+1 (binding).

best,
Colin

On Mon, Jul 6, 2020, at 01:14, Tom Bentley wrote:
> Hi,
> 
> I'd like to start a vote on KIP-621 which is about deprecating methods in
> DescribeLogDirsResult which leak internal classes, replacing them with
> public API classes.
> 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158862109
> 
> Thanks,
> 
> Tom
>


Re: [DISCUSS] Include min.insync.replicas in MetadataResponse to make Producer smarter in partitioning events

2020-07-07 Thread Colin McCabe
Hi Arvin,

Thanks for the KIP.

Unfortunately, I don't think this makes sense since it would increase the 
amount of data we send back in the metadata response, which is pretty bad for 
scalability.  In general we probably want to avoid the case where we don't have 
the appropriate number of in-sync replicas, not try to optimize for it.

best,
Colin

On Mon, Jul 6, 2020, at 10:38, Arvin Zheng wrote:
> Hi everyone,
> 
> I would like to start the discussion for KIP-637
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-637%3A+Include+min.insync.replicas+in+MetadataResponse+to+make+Producer+smarter+in+partitioning+events
> 
> Looking forward to your feedback.
> 
> Thanks,
> Arvin
>


Re: [DISCUSS] KIP-638: Deprecate DescribeLogDirsResponse.[LogDirInfo, ReplicaInfo]

2020-07-07 Thread Mickael Maison
Hi Dongjin,

It looks like this KIP is addressing the same issue as KIP-621:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158862109

On Tue, Jul 7, 2020 at 2:29 PM Dongjin Lee  wrote:
>
> Hi devs,
>
> I hope to start the discussion of KIP-638, which aims to fix a glitch in
> the Admin#describeLogDirs method.
>
> - KIP:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866169
> - Jira: https://issues.apache.org/jira/browse/KAFKA-8794
>
> All kinds of feedback will be greatly appreciated.
>
> Best,
> Dongjin
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
>
> github: github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck: speakerdeck.com/dongjin


Re: Untrimmed Index Files resulting in data loss

2020-07-07 Thread Ismael Juma
Hi John,

Thanks for reporting the issue. Let's continue the discussion in the PR.

Ismael

On Mon, Jul 6, 2020 at 7:13 PM John Malizia  wrote:

> Hi there, about a week ago I submitted an issue report and an associated PR
>
> https://issues.apache.org/jira/browse/KAFKA-10207
> https://github.com/apache/kafka/pull/8936
>
> Is there any further information I can provide to help this along? Sorry to
> bug everyone about this, but after identifying the issue it seems like
> something that should be handled more gracefully.
>


[DISCUSS] KIP-638: Deprecate DescribeLogDirsResponse.[LogDirInfo, ReplicaInfo]

2020-07-07 Thread Dongjin Lee
Hi devs,

I hope to start the discussion of KIP-638, which aims to fix a glitch in
the Admin#describeLogDirs method.

- KIP:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866169
- Jira: https://issues.apache.org/jira/browse/KAFKA-8794

All kinds of feedback will be greatly appreciated.

Best,
Dongjin

-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*




github: github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin
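For context on the glitch both this KIP and KIP-621 target: the current
result type of Admin#describeLogDirs exposes inner classes of the internal
protocol class DescribeLogDirsResponse. A minimal sketch of the pre-KIP
usage (broker ids and bootstrap address are illustrative):

```java
import java.util.Arrays;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.requests.DescribeLogDirsResponse;

public class DescribeLogDirsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // The value type below is an inner class of the internal request
            // class DescribeLogDirsResponse, which is exactly what the KIPs
            // propose to replace with proper public API classes.
            Map<Integer, Map<String, DescribeLogDirsResponse.LogDirInfo>> logDirs =
                    admin.describeLogDirs(Arrays.asList(0, 1, 2)).all().get();
            logDirs.forEach((broker, dirs) ->
                    System.out.println("broker " + broker + ": " + dirs.keySet()));
        }
    }
}
```

Whichever KIP proceeds, code like the above would migrate to public result
classes instead of the org.apache.kafka.common.requests type.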


Re: Feedback: Print schemaId using bin/kafka-dump-log.sh

2020-07-07 Thread Adam Bellemare
Hi Mohanraj

While I see the usefulness of your suggestion, the main issue is that
you're using the Confluent schema registry's conventions and hardwiring
them into Kafka core. Given that Confluent's standards are not part of
Kafka's official standards, I do not think you will get approval to submit
this code into Kafka core.

There may be Confluent tools that are available that already do this, or
perhaps they have their own custom tools available where this may be more
suitable for submission.

Adam



On Mon, Jul 6, 2020 at 11:00 AM Mohanraj Nagasamy 
wrote:

> Does anyone have feedback on this? ☺
>
> From: Mohanraj Nagasamy 
> Date: Wednesday, July 1, 2020 at 6:29 PM
> To: "dev@kafka.apache.org" 
> Subject: Feedback: Print schemaId using bin/kafka-dump-log.sh
>
> When I try to dump Kafka logs for diagnosing or debugging a problem, it's
> handy to see whether a log message carries a schemaId or not, and if it
> does, to print the schemaId.
>
> I'm soliciting feedback as to whether it is worth making this change to
> the kafka-core codebase.
>
> I'm new to the Kafka community - forgive me if this wasn't part of the
> process.
>
> This is the change I made:
>
> ```
>  core/src/main/scala/kafka/tools/DumpLogSegments.scala | 21 +++++++++++++++++++--
>  1 file changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/core/src/main/scala/kafka/tools/DumpLogSegments.scala
> b/core/src/main/scala/kafka/tools/DumpLogSegments.scala
> index 9e9546a92..a8750ac3d 100755
> --- a/core/src/main/scala/kafka/tools/DumpLogSegments.scala
> +++ b/core/src/main/scala/kafka/tools/DumpLogSegments.scala
> @@ -35,6 +35,7 @@ object DumpLogSegments {
>
>// visible for testing
>private[tools] val RecordIndent = "|"
> +  private val MAGIC_BYTE = 0x0
>
>def main(args: Array[String]): Unit = {
>  val opts = new DumpLogSegmentsOptions(args)
> @@ -277,8 +278,24 @@ object DumpLogSegments {
>}
>  } else if (printContents) {
>val (key, payload) = parser.parse(record)
> -  key.foreach(key => print(s" key: $key"))
> -  payload.foreach(payload => print(s" payload: $payload"))
> +  key.foreach(key => {
> +val keyBuffer = record.key()
> +if (keyBuffer.get() == MAGIC_BYTE) {
> +  print(s" keySchemaId: ${keyBuffer.getInt} key: $key")
> +}
> +else {
> +  print(s" key: $key")
> +}
> +  })
> +
> +  payload.foreach(payload => {
> +val valueBuffer = record.value()
> +if (valueBuffer.get() == MAGIC_BYTE) {
> +  print(s" payloadSchemaId: ${valueBuffer.getInt}
> payload: $payload")
> +} else {
> +  print(s" payload: $payload")
> +}
> +  })
>  }
>  println()
>}
> (END)
> ```
>
> And this is how the output looks like:
>
> ```
> $ bin/kafka-dump-log.sh --files
> data/kafka/logdir/avro_topic_test-0/.log
> --print-data-log
>
> | offset: 50 CreateTime: 1593570942959 keysize: -1 valuesize: 16 sequence:
> -1 headerKeys: [] payloadSchemaId: 1 payload:
> TracRowe
> baseOffset: 51 lastOffset: 51 count: 1 baseSequence: -1 lastSequence: -1
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional:
> false isControl: false position: 2918 CreateTime: 1593570958044 size: 101
> magic: 2 compresscodec: NONE crc: 1913155179 isvalid: true
> | offset: 51 CreateTime: 1593570958044 keysize: 3 valuesize: 30 sequence:
> -1 headerKeys: [] key: ... payloadSchemaId: 2 payload:
> .iRKoMVeoVVnTmQEuqwSTHZQ
> baseOffset: 52 lastOffset: 52 count: 1 baseSequence: -1 lastSequence: -1
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional:
> false isControl: false position: 3019 CreateTime: 1593570969001 size: 84
> magic: 2 compresscodec: NONE crc: 2188466405 isvalid: true
> ```
>
> -Mohan
>
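For readers unfamiliar with the convention the patch probes for: it is the
Confluent Schema Registry wire format (a Confluent convention rather than
part of Kafka itself, which is the crux of Adam's objection), where a
serialized key or value begins with a 0x0 magic byte followed by a 4-byte
big-endian schema id. A standalone sketch of that check, with illustrative
class and method names:

```java
import java.nio.ByteBuffer;
import java.util.OptionalInt;

public class SchemaIdSniffer {

    // Returns the schema id if the buffer starts with the Confluent wire
    // format prefix (magic byte 0x0 + 4-byte schema id); uses absolute
    // reads, so the buffer's position is left untouched.
    static OptionalInt schemaId(ByteBuffer payload) {
        if (payload != null && payload.remaining() >= 5
                && payload.get(payload.position()) == 0x0) {
            return OptionalInt.of(payload.getInt(payload.position() + 1));
        }
        return OptionalInt.empty();
    }

    public static void main(String[] args) {
        ByteBuffer buf = (ByteBuffer) ByteBuffer.allocate(6)
                .put((byte) 0x0).putInt(42).put((byte) 7).flip();
        System.out.println(schemaId(buf)); // prints OptionalInt[42]
    }
}
```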


[VOTE] KIP-632: Add DirectoryConfigProvider

2020-07-07 Thread Tom Bentley
Hi,

I'd like to start a vote on KIP-632, which is about making the config
provider mechanism more ergonomic on Kubernetes:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-632%3A+Add+DirectoryConfigProvider

Please take a look if you have time.

Many thanks,

Tom
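For those who have not used the mechanism: ConfigProvider is the pluggable
interface behind config placeholders such as ${provider:path:key}. A rough
sketch of what a directory-backed provider might look like, assuming (per
the KIP's Kubernetes motivation) that every regular file in the directory
becomes a key named after the file, with the file contents as the value;
this is an illustration, not necessarily the KIP's exact semantics:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.stream.Stream;

import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.ConfigProvider;

public class DirectoryConfigProvider implements ConfigProvider {

    @Override
    public void configure(Map<String, ?> configs) { }

    // Maps each regular file under 'path' to a config key named after the
    // file, mirroring how Kubernetes mounts Secrets and ConfigMaps.
    @Override
    public ConfigData get(String path) {
        Map<String, String> data = new HashMap<>();
        try (Stream<Path> files = Files.list(Paths.get(path))) {
            files.filter(Files::isRegularFile).forEach(f -> {
                try {
                    data.put(f.getFileName().toString(),
                            new String(Files.readAllBytes(f), StandardCharsets.UTF_8));
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return new ConfigData(data);
    }

    @Override
    public ConfigData get(String path, Set<String> keys) {
        Map<String, String> data = new HashMap<>();
        get(path).data().forEach((k, v) -> {
            if (keys.contains(k)) data.put(k, v);
        });
        return new ConfigData(data);
    }

    @Override
    public void close() { }
}
```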


Build failed in Jenkins: kafka-2.5-jdk8 #163

2020-07-07 Thread Apache Jenkins Server
See 


Changes:

[boyang] KAFKA-10239: Make GroupInstanceId ignorable in DescribeGroups (#8989)


--
[...truncated 5.92 MB...]
[...repetitive Gradle test log lines omitted: org.apache.kafka.streams.TopologyTestDriverTest cases, STARTED/PASSED...]

[jira] [Created] (KAFKA-10243) ConcurrentModificationException while processing connection setup timeouts

2020-07-07 Thread Rajini Sivaram (Jira)
Rajini Sivaram created KAFKA-10243:
--

 Summary: ConcurrentModificationException while processing 
connection setup timeouts
 Key: KAFKA-10243
 URL: https://issues.apache.org/jira/browse/KAFKA-10243
 Project: Kafka
  Issue Type: Bug
  Components: network
Reporter: Rajini Sivaram
Assignee: David Jacot
 Fix For: 2.7


From [~guozhang] in [https://github.com/apache/kafka/pull/8683]:

{quote}
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
at java.util.HashMap$KeyIterator.next(HashMap.java:1469)
at 
org.apache.kafka.clients.NetworkClient.handleTimedOutConnections(NetworkClient.java:822)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:574)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:419)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:359)
{quote}

While processing connection setup timeouts, we iterate through the
connecting nodes and disconnect within the loop, removing the entry from
the very set the loop is iterating over:

{code}
for (String nodeId : connectingNodes) {
if (connectionStates.isConnectionSetupTimeout(nodeId, now)) {
this.selector.close(nodeId);
log.debug(
"Disconnecting from node {} due to socket connection setup 
timeout. " +
"The timeout value is {} ms.",
nodeId,
connectionStates.connectionSetupTimeoutMs(nodeId));
processDisconnection(responses, nodeId, now, 
ChannelState.LOCAL_CLOSE);
}
}
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


NotEnoughReplicasException: The size of the current ISR Set(2) is insufficient to satisfy the min.isr requirement of 3

2020-07-07 Thread Nag Y
I had the following setup: 3 brokers - all up and running with
min.insync.replicas=3.

I created a topic with the following configuration:

bin\windows\kafka-topics --zookeeper 127.0.0.1:2181 --topic topic-ack-all
--create --partitions 4 --replication-factor 3

I triggered the producer with "acks = all" and the producer is able to send
the message. However, the problem starts when I start the consumer:

bin\windows\kafka-console-consumer --bootstrap-server
localhost:9094,localhost:9092 --topic topic-ack-all --from-beginning

The error is

NotEnoughReplicasException: The size of the current ISR Set(2) is
insufficient to satisfy the min.isr requirement of 3
NotEnoughReplicasException: The size of the current ISR Set(3) is
insufficient to satisfy the min.isr requirement of 3 for partition __con

I see two kinds of errors here. I went through the documentation and have a
general understanding of "min.isr"; however, these error messages are not
clear.

   1. What does the "current ISR Set" mean? Is it different for each topic,
   and what does it signify?
   2. I guess min.isr is the same as min.insync.replicas. Should it have a
   value at least equal to the "replication factor"? (See the sketch below.)
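On question 2: "min.isr" in the error is min.insync.replicas, and a
broker-wide value of 3 also becomes the default for internal topics such as
__consumer_offsets (the truncated "__con" partition above), so losing a
single replica from the ISR blocks acks=all writes. One common alternative
is a per-topic override, e.g. min.insync.replicas=2 with replication
factor 3, which tolerates one replica dropping out of the ISR. A sketch of
setting that override via the Admin client (topic name from the post;
bootstrap address illustrative; requires brokers 2.3+):

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class MinIsrOverrideSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // A topic-level min.insync.replicas takes precedence over the
            // broker-wide default of 3 configured in the reported setup.
            ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "topic-ack-all");
            AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry("min.insync.replicas", "2"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(
                    Collections.singletonMap(topic, Collections.singletonList(op)))
                .all().get();
        }
    }
}
```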


Jenkins build is back to normal : kafka-trunk-jdk11 #1624

2020-07-07 Thread Apache Jenkins Server
See 




Re: KIP-560 Discuss

2020-07-07 Thread Boyang Chen
No worries! Could you address Matthias' comments in this mailing thread?
It seems we still have some gaps.

On Mon, Jul 6, 2020 at 7:14 AM Sang wn Lee  wrote:

> I'm sorry.
> I just modified the KIP!
>
> On 2020/03/07 20:09:47, "Matthias J. Sax"  wrote:
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA512
> >
> > Thanks for the KIP Sang!
> >
> > I have a couple of more comments about the wiki page:
> >
> > (1) The "Public Interface" section should only list the new stuff. This
> > KIP does not change anything with regard to the existing options
> > `--input-topic` or `--intermediate-topic` and thus it's just "noise" to
> > have them in this section. Only list the new option
> > `allInputTopicsOption`.
> >
> > (2) Don't post code, ie, the implementation of private methods. KIPs
> > should only describe public interface changes.
> >
> > (3) The KIP should describe that we intend to use
> > `describeConsumerGroups` calls to discover the topic names -- atm, it's
> > unclear from the KIP how the new feature actually works.
> >
> > (4) If the new flag is used, we will discover input and intermediate
> > topics. Hence, the name is miss leading. We could call it
> > `--all-user-topics` and explain in the description that "user topics"
> > are input and intermediate topics for this case (in general, also output
> > topics are "user topics" but there is nothing to be done for output
> > topics). Thoughts?
> >
> >
> > - -Matthias
> >
> >
> > On 1/27/20 6:35 AM, Sang wn Lee wrote:
> > > thank you John Roesler
> > >
> > > It is a good idea "--all-input-topics"
> > >
> > > I agree with you
> > >
> > > I'll update right away
> > >
> > >
> > > On 2020/01/24 14:14:17, "John Roesler" 
> > > wrote:
> > >
> > >> Hi all, thanks for the explanation. I was also not sure how the
> > >> KIP would be possible to implement.
> > >>
> > >> Now that it does seem plausible, my only feedback is that the
> > >> command line option could align better with the existing one.
> > >> That is, the existing option is called “--input-topics”, so it
> > >> seems like the new one should be called “--all-input-topics”.
> > >>
> > >> Thanks, John
> > >>
> > >> On Fri, Jan 24, 2020, at 01:42, Boyang Chen wrote:
> > >>> Thanks Sophie for the explanation! I read Sang's PR and
> > >>> basically he did exactly what you proposed (check it here
> > >>>  in case I'm
> > >>> wrong).
> > >>>
> > >>> I think Sophie's response answers Gwen's question already,
> > >>> while in the meantime for a KIP itself we are not required to
> > >>> mention all the internal details about how to make the changes
> > >>> happen (like how to actually get the external topics),
> > >>> considering the change scope is pretty small as well. But
> > >>> again, it would do no harm if we mention it inside Proposed
> > >>> Change session specifically so that people won't get confused
> > >>> about how.
> > >>>
> > >>>
> > >>> On Thu, Jan 23, 2020 at 8:26 PM Sophie Blee-Goldman
> > >>>  wrote:
> > >>>
> >  Hi all,
> > 
> >  I think what Gwen is trying to ask (correct me if I'm wrong)
> >  is how we can infer which topics are associated with Streams
> >  from the admin client's topic list. I agree that this
> >  doesn't seem possible, since as she pointed out the topics
> >  list (or even description) lacks the specific information we
> >  need.
> > 
> >  What we could do instead is use the admin client's
> >  `describeConsumerGroups` API to get the information on the
> >  Streams app's consumer group specifically -- note that the
> >  Streams application.id config is also used as the consumer
> >  group id, so each app forms a group to read from the input
> >  topics. We could compile a list of these topics just by
> >  looking at each member's assignment (and even check for a
> >  StreamsPartitionAssignor to verify that this is indeed a
> >  Streams app group, if we're being paranoid).
> > 
> >  The reset tool actually already gets the consumer group
> >  description, in order to validate there are no active
> >  consumers in the group. We may as well grab the list of
> >  topics from it while it's there. Or did you have something
> >  else in mind?
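Sophie's suggestion above maps onto the Admin API roughly as follows: describe
the group named by the application.id, then union the topics of every
member's assignment. A sketch (group id and bootstrap address are
illustrative; it assumes, as Sophie notes, that application.id doubles as the
consumer group id):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import org.apache.kafka.clients.admin.MemberDescription;
import org.apache.kafka.common.TopicPartition;

public class StreamsTopicDiscoverySketch {
    public static void main(String[] args) throws Exception {
        String applicationId = "my-streams-app"; // assumption: the app's application.id
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            ConsumerGroupDescription group = admin
                    .describeConsumerGroups(Collections.singleton(applicationId))
                    .describedGroups().get(applicationId).get();
            // Union of all topics currently assigned to members of the group.
            Set<String> topics = new HashSet<>();
            for (MemberDescription member : group.members()) {
                for (TopicPartition tp : member.assignment().topicPartitions()) {
                    topics.add(tp.topic());
                }
            }
            System.out.println(topics);
        }
    }
}
```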
> > 
> >  On Sat, Jan 18, 2020 at 6:17 PM Sang wn Lee
> >   wrote:
> > 
> > > Thank you
> > >
> > > I understand you
> > >
> > > 1. admin client has topic list 2. applicationId can only
> > > have one stream, so it won't be a problem! 3. For example,
> > > --input-topic [reg] Allowing reg solves some inconvenience
> > >
> > >
> > > On 2020/01/18 18:15:23, Gwen Shapira 
> > > wrote:
> > >> I am not sure I follow. Afaik:
> > >>
> > >> 1. Topics don't include client ID information 2. Even if
> > >> you did, the same ID could be used for topics that are
> > >> not
> > > Kafka
> > >> Streams input
> > >>
> > >> The 

Re: [VOTE] KIP-620 Deprecate ConsumerConfig#addDeserializerToConfig(Properties, Deserializer, Deserializer) and ProducerConfig#addSerializerToConfig(Properties, Serializer, Serializer)

2020-07-07 Thread Boyang Chen
Thanks for the KIP. One question I have is that when we refer to the two
methods as useless, do we just suggest they no longer have any production
use case? If this is the case, Producer#addSerializerToConfig(Map<String,
Object> configs, keySerializer, valueSerializer) is only used in
KafkaProducer internal only. Do we also want to deprecate this public API
as well?

Boyang


On Mon, Jul 6, 2020 at 11:36 PM Manikumar  wrote:

> +1 (binding)
>
> Thanks for the KIP.
>
> On Wed, Jun 10, 2020 at 11:43 PM Matthias J. Sax  wrote:
>
> > Yes, it does.
> >
> > I guess many people are busy wrapping up 2.6 release. Today is code
> freeze.
> >
> >
> > -Matthias
> >
> >
> > On 6/10/20 12:11 AM, Chia-Ping Tsai wrote:
> > > hi Matthias,
> > >
> > > Does this straightforward KIP still need 3 votes?
> > >
> > > On 2020/06/05 21:27:52, "Matthias J. Sax"  wrote:
> > >> +1 (binding)
> > >>
> > >> Thanks for the KIP!
> > >>
> > >>
> > >> -Matthias
> > >>
> > >> On 6/4/20 11:25 PM, Chia-Ping Tsai wrote:
> > >>> hi All,
> > >>>
> > >>> I would like to start the vote on KIP-620:
> > >>>
> > >>>
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=155749118
> > >>>
> > >>> --
> > >>> Chia-Ping
> > >>>
> > >>
> > >>
> >
> >
>


Re: [DISCUSS] KIP-616: Rename implicit Serdes instances in kafka-streams-scala

2020-07-07 Thread Yuriy Badalyantc
So, what's next? It's my first KIP and I'm not familiar with all the processes.

-Yuriy

On Mon, Jul 6, 2020 at 1:32 AM John Roesler  wrote:

> Hi Yuriy,
>
> Thanks for the update! It looks good to me.
>
> Thanks,
> John
>
> On Sun, Jul 5, 2020, at 03:27, Yuriy Badalyantc wrote:
> > Hi John.
> >
> > I updated the KIP. An old proposed implementation is now in the rejected
> > alternatives.
> >
> > - Yuriy
> >
> > On Sun, Jul 5, 2020 at 12:03 AM John Roesler 
> wrote:
> >
> > > Hi Yuriy,
> > >
> > > I agree, we can keep them separate. I just wanted to make you aware of
> it.
> > >
> > > Thanks for the PR, it looks the way I expected.
> > >
> > > I just read over the KIP document again. I think it needs to be
> updated to
> > > the current proposal, and then we’ll be able to start the vote.
> > >
> > > Thanks,
> > > John
> > >
> > > On Tue, Jun 30, 2020, at 04:58, Yuriy Badalyantc wrote:
> > > > Hi everybody!
> > > >
> > > > Looks like a discussion about KIP-513 could take a while. I think we
> > > should
> > > > move forward with KIP-616 without waiting for KIP-513.
> > > >
> > > > I created a new pull request for KIP-616:
> > > > https://github.com/apache/kafka/pull/8955. It contains a new
> > > > `org.apache.kafka.streams.scala.serialization.Serdes` object without
> name
> > > > clash. An old one was marked as deprecated. This change is backward
> > > > compatible and it could be merged in any further release.
> > > >
> > > > On Wed, Jun 3, 2020 at 12:41 PM Yuriy Badalyantc 
> > > wrote:
> > > >
> > > > > Hi, John
> > > > >
> > > > > Thanks for pointing that out. I expressed my thoughts about KIP-513
> > > > > and its connection to KIP-616 in the KIP-513 mailing list.
> > > > >
> > > > > - Yuriy
> > > > >
> > > > > On Sun, May 31, 2020 at 1:26 AM John Roesler 
> > > wrote:
> > > > >
> > > > >> Hi Yuriy,
> > > > >>
> > > > >> I was just looking back at KIP-513, and I’m wondering if there’s
> any
> > > > >> overlap we should consider here, or if they are just orthogonal.
> > > > >>
> > > > >> Thanks,
> > > > >> -John
> > > > >>
> > > > >> On Thu, May 28, 2020, at 21:36, Yuriy Badalyantc wrote:
> > > > >> > At the current moment, I think John's plan is better than the
> > > original
> > > > >> plan
> > > > >> > described in the KIP. I think we should create a new `Serdes` in
> > > another
> > > > >> > package. The old one will be deprecated.
> > > > >> >
> > > > >> > - Yuriy
> > > > >> >
> > > > >> > On Fri, May 29, 2020 at 8:58 AM John Roesler <
> vvcep...@apache.org>
> > > > >> wrote:
> > > > >> >
> > > > >> > > Thanks, Matthias,
> > > > >> > >
> > > > >> > > If we go with the approach Yuriy and I agreed on, to
> deprecate and
> > > > >> replace
> > > > >> > > the whole class and not just a few of the methods, then the
> > > timeline
> > > > >> is
> > > > >> > > less of a concern. Under that plan, Yuriy can just write the
> new
> > > class
> > > > >> > > exactly the way he wants and people can cleanly swap over to
> the
> > > new
> > > > >> > > pattern when they are ready.
> > > > >> > >
> > > > >> > > The timeline was more significant if we were just going to
> > > deprecate
> > > > >> some
> > > > >> > > methods and add new methods to the existing class. That plan
> > > requires
> > > > >> two
> > > > >> > > implementation phases, where we first deprecate the existing
> > > methods
> > > > >> and
> > > > >> > > later swap the implicits at the same time we remove the
> deprecated
> > > > >> members.
> > > > >> > > Aside from the complexity of that approach, it’s not a
> breakage
> > > free
> > > > >> path,
> > > > >> > > as some users would be forced to continue using the deprecated
> > > members
> > > > >> > > until a future release drops them, breaking their source
> code, and
> > > > >> only
> > > > >> > > then can they update their code.
> > > > >> > >
> > > > >> > > That wouldn’t be the end of the world, and we’ve had to do the
> > > same
> > > > >> thing
> > > > >> > > in the past with the implicit conversations, but this is a
> much
> > > wider
> > > > >> > > scope, since it’s all the serdes. I’m happy with the new plan,
> > > since
> > > > >> it’s
> > > > >> > > not only one step, but also it provides everyone a
> breakage-free
> > > path.
> > > > >> > >
> > > > >> > > We can still consider dropping the deprecated class in 3.0; I
> just
> > > > >> wanted
> > > > >> > > to clarify how the timeline issue has changed.
> > > > >> > >
> > > > >> > > Thanks,
> > > > >> > > John
> > > > >> > >
> > > > >> > > On Thu, May 28, 2020, at 20:34, Matthias J. Sax wrote:
> > > > >> > > > I am not a Scala person, so I cannot really contribute much.
> > > > >> However for
> > > > >> > > > the deprecation period, if we get the change into 2.7, it
> might
> > > be
> > > > >> ok to
> > > > >> > > > remove the deprecated classed in 3.0.
> > > > >> > > >
> > > > >> > > > It would only be one minor release in between what is a
> little
> > > bit
> > > > >> short
> > > > >> > > > (we usually prefer at least two minor released, 

[DISCUSS] KIP-637 Include min.insync.replicas in MetadataResponse to make Producer smarter in partitioning events

2020-07-07 Thread Arvin Zheng
Updated the subject to add KIP number

On Mon, Jul 6, 2020 at 10:38 AM Arvin Zheng  wrote:

> Hi everyone,
>
> I would like to start the discussion for KIP-637
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-637%3A+Include+min.insync.replicas+in+MetadataResponse+to+make+Producer+smarter+in+partitioning+events
>
> Looking forward to your feedback.
>
> Thanks,
> Arvin
>


Build failed in Jenkins: kafka-trunk-jdk14 #273

2020-07-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10166: checkpoint recycled standbys and ignore empty rocksdb base

[github] MINOR: document timestamped state stores (#8920)

[github] KAFKA-9930: Adjust ReplicaFetcherThread logging when processing

[github] KAFKA-10239: Make GroupInstanceId ignorable in DescribeGroups (#8989)


--
[...truncated 6.37 MB...]

[...repetitive Gradle test log lines omitted: org.apache.kafka.streams.TopologyTestDriverTest cases, STARTED/PASSED...]

Re: [VOTE] KIP-620 Deprecate ConsumerConfig#addDeserializerToConfig(Properties, Deserializer, Deserializer) and ProducerConfig#addSerializerToConfig(Properties, Serializer, Serializer)

2020-07-07 Thread Manikumar
+1 (binding)

Thanks for the KIP.

On Wed, Jun 10, 2020 at 11:43 PM Matthias J. Sax  wrote:

> Yes, it does.
>
> I guess many people are busy wrapping up 2.6 release. Today is code freeze.
>
>
> -Matthias
>
>
> On 6/10/20 12:11 AM, Chia-Ping Tsai wrote:
> > hi Matthias,
> >
> > Does this straightforward KIP still need 3 votes?
> >
> > On 2020/06/05 21:27:52, "Matthias J. Sax"  wrote:
> >> +1 (binding)
> >>
> >> Thanks for the KIP!
> >>
> >>
> >> -Matthias
> >>
> >> On 6/4/20 11:25 PM, Chia-Ping Tsai wrote:
> >>> hi All,
> >>>
> >>> I would like to start the vote on KIP-620:
> >>>
> >>>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=155749118
> >>>
> >>> --
> >>> Chia-Ping
> >>>
> >>
> >>
>
>


Build failed in Jenkins: kafka-trunk-jdk8 #4698

2020-07-07 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: document timestamped state stores (#8920)

[github] KAFKA-9930: Adjust ReplicaFetcherThread logging when processing

[github] KAFKA-10239: Make GroupInstanceId ignorable in DescribeGroups (#8989)


--
[...truncated 3.16 MB...]
[...repetitive Gradle test log lines omitted: kafka-streams test-utils suites (WindowStoreFacadeTest, WindowedWordCountProcessorTest, ConsumerRecordFactoryTest), STARTED/PASSED...]