Fw: Regarding email notification un-subscription

2021-03-24 Thread madan mohan mohanty
Please unsubscribe me

Sent from Yahoo Mail on Android 
 
   - Forwarded message -
 From: "Jyotinder Singh"
 To: "dev@kafka.apache.org"
 Cc:
 Sent: Thu, 25 Mar 2021 at 12:38 am
 Subject: Re: Regarding email notification un-subscription

Thanks, I wasn't aware that the unsubscribe confirmation mail was
ending up in my spam folder.

Thanks a lot!

On Wed, 24 Mar 2021, 17:55 Tom Bentley,  wrote:

> Avinash,
>
> You don't say whether you have already tried, but you need to send an email
> to dev-unsubscr...@kafka.apache.org.
>
> Jyotinder,
>
> Assuming you definitely used the right email address, have you checked your
> spam folder, since IIRC you get sent a confirmation email with a link to
> visit to confirm that you want to unsubscribe.
>
> Kind regards,
>
> Tom
>
> On Wed, Mar 24, 2021 at 12:12 PM Jyotinder Singh 
> wrote:
>
> > I’m facing the same issue. Mailing to the unsubscribe address doesn’t
> seem
> > to work.
> > Email: jyotindrsi...@gmail.com
> >
> > On Wed, 24 Mar 2021 at 5:29 PM, Avinash Srivastava <
> > asrivast...@flyanra.com>
> > wrote:
> >
> > > Hi,
> > >
> > > I am continuously getting multiple emails from various email accounts
> > > related to kafka. I would request you to please unsubscribe my email -
> > > asrivast...@flyanra.com from subscription list.
> > >
> > >
> > > Thanks
> > >
> > > Avinash
> > >
> > > --
> > Sent from my iPad
> >
>
  


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #667

2021-03-24 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12434; Admin support for `DescribeProducers` API (#10275)

[github] MINOR: Always apply the java-library gradle plugin (#10394)


--
[...truncated 3.70 MB...]
[test output elided: AuthorizerIntegrationTest and SslProducerSendTest cases, 
all STARTED/PASSED; the log is cut off mid-stream at ProducerCompressionTest]

[jira] [Resolved] (KAFKA-12540) Sub-key support to avoid unnecessary rekey operations with new key is a compound key of the original key + sub-field

2021-03-24 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman resolved KAFKA-12540.

Resolution: Duplicate

> Sub-key support to avoid unnecessary rekey operations with new key is a 
> compound key of the original key + sub-field
> 
>
> Key: KAFKA-12540
> URL: https://issues.apache.org/jira/browse/KAFKA-12540
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 2.8.0
>Reporter: Antony Stubbs
>Priority: Major
>
> If I want, for example, to aggregate by an account and by a metric, and the 
> input topic is keyed by account (and let’s say there’s a massive amount of 
> traffic), this will have to rekey on account+metric, which will cause a 
> repartition topic, then group by and aggregate.
> However, because we know that all the metrics for an account will already 
> exist on the same partition, we ideally don’t want to have to repartition - 
> causing a large unneeded overhead.
>  
> Ideally a new `#selectSubkey` sort of method could be introduced, which would 
> force a compound key with the original.
>  
> {{var subKeyStream = stream#selectSubKey((k, v) -> v.getField(“metric”))}} <— under 
> the hood this appends the returned key to the existing key, so the actual key of 
> the next stream in memory will be (account+metric)
>  
> Although this might break the key->partition strategy, the topology shouldn’t 
> be dirty at this stage, as we know we’re still co-partitioned. What can 
> happen next in the topology may need to be restricted, however. In this case 
> we would then do:
>  
> {{subKeyStream.groupByKey().aggregate()}}
>  
> Functions other than aggregate may still need a repartition, or maybe not - 
> not sure.
>  
> Similarly described quite well in this forum here: 
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Re-keying-sub-keying-a-stream-without-repartitioning-td12745.html]
>  
> I can achieve what I want with a custom processor and state store, but this 
> seems like something that might be common and useful to have supported at the 
> DSL level.
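
A minimal sketch of how the proposed DSL addition might look (hypothetical: 
`selectSubKey` and the compound-key types below do not exist in the current 
Streams API):

{code:java}
// Input keyed by account; all metrics for an account live on one partition.
KStream<String, Metric> byAccount = builder.stream("metrics-by-account");

// Hypothetical: extends the key to (account, metric) WITHOUT marking the
// stream for repartitioning, since it is still co-partitioned by account.
KStream<CompoundKey<String, String>, Metric> bySubKey =
    byAccount.selectSubKey((account, metric) -> metric.getField("metric"));

// groupByKey + aggregate can then run without a repartition topic.
KTable<CompoundKey<String, String>, Long> counts =
    bySubKey.groupByKey().count();
{code}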



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12434) Admin API for DescribeProducers

2021-03-24 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-12434.
-
Resolution: Fixed

> Admin API for DescribeProducers
> ---
>
> Key: KAFKA-12434
> URL: https://issues.apache.org/jira/browse/KAFKA-12434
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> Implement the Admin `describeProducers` API defined by KIP-664. The server 
> implementation was completed in KAFKA-12238.
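
A short usage sketch of the new API (accessor names follow KIP-664 and may 
differ slightly from the released client):

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.TopicPartition;

public class DescribeProducersExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            TopicPartition tp = new TopicPartition("payments", 0);
            DescribeProducersResult result =
                admin.describeProducers(Collections.singleton(tp));
            // Active producer state on the partition, useful for spotting
            // hanging transactions:
            for (ProducerState producer :
                    result.partitionResult(tp).get().activeProducers()) {
                System.out.printf("producerId=%d epoch=%d lastSequence=%d%n",
                    producer.producerId(), producer.producerEpoch(),
                    producer.lastSequence());
            }
        }
    }
}
{code}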



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12550) Introduce RESTORING state to the KafkaStreams FSM

2021-03-24 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12550:
--

 Summary: Introduce RESTORING state to the KafkaStreams FSM
 Key: KAFKA-12550
 URL: https://issues.apache.org/jira/browse/KAFKA-12550
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: A. Sophie Blee-Goldman
Assignee: A. Sophie Blee-Goldman
 Fix For: 3.0.0


We should consider adding a new state to the KafkaStreams FSM: RESTORING

This would cover the time between the completion of a stable rebalance and the 
completion of restoration across the client. Currently, Streams will report the 
state during this time as REBALANCING, even though in most cases it is spending 
much more time restoring than rebalancing.

There are a few motivations/benefits behind this idea:

# Observability is a big one: using the umbrella REBALANCING state to cover all 
aspects of rebalancing -> task initialization -> restoring has been a common 
source of confusion in the past. It’s also proved to be a time sink for us 
during escalations, incidents, mailing list questions, and bug reports. It 
often adds latency to escalations in particular, as we have to go through GTS 
and wait for the customer to clarify whether their “Kafka Streams is stuck 
rebalancing” ticket means that it’s literally rebalancing, or just in the 
REBALANCING state and actually stuck elsewhere in Streams.
# Prereq for global thread improvements: for example [KIP-406: 
GlobalStreamThread should honor custom reset policy 
|https://cwiki.apache.org/confluence/display/KAFKA/KIP-406%3A+GlobalStreamThread+should+honor+custom+reset+policy]
 was ultimately blocked on this as we needed to pause the Streams app while the 
global thread restored from the appropriate offset. Since there’s absolutely no 
rebalancing involved in this case, piggybacking on the REBALANCING state would 
just be shooting ourselves in the foot.
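
A minimal sketch of the proposed change (the transition is the idea; the 
existing state names are from KafkaStreams.State, abbreviated here):

{code:java}
// Today:    ... -> REBALANCING ---------------------------> RUNNING
// Proposed: ... -> REBALANCING -> RESTORING --------------> RUNNING
//                  (stable rebalance done)  (restoration done)
public enum State {
    CREATED, REBALANCING, RESTORING, RUNNING,
    PENDING_SHUTDOWN, NOT_RUNNING, ERROR // RESTORING is the new state
}
{code}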



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9846) Race condition can lead to severe lag underestimate for active tasks

2021-03-24 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman resolved KAFKA-9846.
---
Resolution: Fixed

Resolving since this is fixed in 2.6

> Race condition can lead to severe lag underestimate for active tasks
> 
>
> Key: KAFKA-9846
> URL: https://issues.apache.org/jira/browse/KAFKA-9846
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.5.0
>Reporter: A. Sophie Blee-Goldman
>Priority: Critical
> Fix For: 2.6.0
>
>
> In KIP-535 we added the ability to query still-restoring and standby tasks. 
> To give users control over how out of date the data they fetch can be, we 
> added an API to KafkaStreams that fetches the end offsets for all changelog 
> partitions and computes the lag for each local state store.
> During this lag computation, we check whether an active task is in RESTORING 
> and calculate the actual lag if so. If not, we assume it's in RUNNING and 
> return a lag of zero. However, tasks may be in other states besides running 
> and restoring; notably they first pass through the CREATED state before 
> getting to RESTORING. A CREATED task may happen to be caught-up to the end 
> offset, but in many cases it is likely to be lagging or even completely 
> uninitialized.
> This introduces a race condition where users may be led to believe that a 
> task has zero lag and is "safe" to query even with the strictest correctness 
> guarantees, while the task is actually lagging by some unknown amount. 
> During transfer of ownership of the task between different threads on the 
> same machine, tasks can actually spend a while in CREATED while the new owner 
> waits to acquire the task directory lock. So, this race condition may not be 
> particularly rare in multi-threaded Streams applications.
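
A simplified pseudocode sketch of the race described above (not the actual 
method; names are illustrative):

{code:java}
// Buggy: any task not in RESTORING is assumed to be RUNNING and caught up.
long lag;
if (task.state() == Task.State.RESTORING) {
    lag = endOffset - task.changelogPosition(); // actual lag
} else {
    lag = 0L; // WRONG for CREATED tasks, which may be arbitrarily far behind
}

// Safer: report zero only for tasks known to be RUNNING, and treat CREATED
// (and any other state) as having unknown, i.e. maximal, lag.
{code}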



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8042) Kafka Streams creates many segment stores on state restore

2021-03-24 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman resolved KAFKA-8042.
---
Fix Version/s: 2.6.0
   Resolution: Fixed

I believe this issue should be solved via 
https://issues.apache.org/jira/browse/KAFKA-10005 -- most likely the segments 
you are seeing are due to disabling compaction during the "bulk loading" phase 
of restoration. This lines up with your observation that upon completion of 
restoration, the number of files went back down, as we would issue a manual 
compaction after the bulk loading.

Since 2.6, bulk loading has been removed, so you should not see such large 
spikes in the number of files, as compaction is never disabled.

> Kafka Streams creates many segment stores on state restore
> --
>
> Key: KAFKA-8042
> URL: https://issues.apache.org/jira/browse/KAFKA-8042
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.1.1
>Reporter: Adrian McCague
>Priority: Major
> Fix For: 2.6.0
>
> Attachments: StateStoreSegments-StreamsConfig.txt
>
>
> Note that this is from the perspective of one instance of an application, 
> where there are 8 instances total, with partition count 8 for all topics and 
> of course stores. Standby replicas = 1.
> In the process there are multiple instances of {{KafkaStreams}} so the below 
> detail is from one of these.
> h2. Actual Behaviour
> During state restore of an application, many segment stores are created (I am 
> using MANIFEST files as a marker since they preallocate 4MB each). As can be 
> seen this topology has 5 joins - which is the extent of its state.
> {code:java}
> bash-4.2# pwd
> /data/fooapp/0_7
> bash-4.2# for dir in $(find . -maxdepth 1 -type d); do echo "${dir}: $(find 
> ${dir} -type f -name 'MANIFEST-*' -printf x | wc -c)"; done
> .: 8058
> ./KSTREAM-JOINOTHER-25-store: 851
> ./KSTREAM-JOINOTHER-40-store: 819
> ./KSTREAM-JOINTHIS-24-store: 851
> ./KSTREAM-JOINTHIS-29-store: 836
> ./KSTREAM-JOINOTHER-35-store: 819
> ./KSTREAM-JOINOTHER-30-store: 819
> ./KSTREAM-JOINOTHER-45-store: 745
> ./KSTREAM-JOINTHIS-39-store: 819
> ./KSTREAM-JOINTHIS-44-store: 685
> ./KSTREAM-JOINTHIS-34-store: 819
> There are many (x800 as above) of these segment files:
> ./KSTREAM-JOINOTHER-25-store.155146629
> ./KSTREAM-JOINOTHER-25-store.155155902
> ./KSTREAM-JOINOTHER-25-store.155149269
> ./KSTREAM-JOINOTHER-25-store.155154879
> ./KSTREAM-JOINOTHER-25-store.155169861
> ./KSTREAM-JOINOTHER-25-store.155153064
> ./KSTREAM-JOINOTHER-25-store.155148444
> ./KSTREAM-JOINOTHER-25-store.155155671
> ./KSTREAM-JOINOTHER-25-store.155168673
> ./KSTREAM-JOINOTHER-25-store.155159565
> ./KSTREAM-JOINOTHER-25-store.155175735
> ./KSTREAM-JOINOTHER-25-store.155168574
> ./KSTREAM-JOINOTHER-25-store.155163525
> ./KSTREAM-JOINOTHER-25-store.155165241
> ./KSTREAM-JOINOTHER-25-store.155146662
> ./KSTREAM-JOINOTHER-25-store.155178177
> ./KSTREAM-JOINOTHER-25-store.155158740
> ./KSTREAM-JOINOTHER-25-store.155168145
> ./KSTREAM-JOINOTHER-25-store.155166231
> ./KSTREAM-JOINOTHER-25-store.155172171
> ./KSTREAM-JOINOTHER-25-store.155175075
> ./KSTREAM-JOINOTHER-25-store.155163096
> ./KSTREAM-JOINOTHER-25-store.155161512
> ./KSTREAM-JOINOTHER-25-store.155179233
> ./KSTREAM-JOINOTHER-25-store.155146266
> ./KSTREAM-JOINOTHER-25-store.155153691
> ./KSTREAM-JOINOTHER-25-store.155159235
> ./KSTREAM-JOINOTHER-25-store.155152734
> ./KSTREAM-JOINOTHER-25-store.155160687
> ./KSTREAM-JOINOTHER-25-store.155174415
> ./KSTREAM-JOINOTHER-25-store.155150820
> ./KSTREAM-JOINOTHER-25-store.155148642
> ... etc
> {code}
> Once re-balancing and state restoration are complete, the redundant segment 
> files are deleted and the segment count drops to 508 total (where the 
> above-mentioned state directory is one of many).
> We have seen the number of these segment stores grow to as many as 15000 over 
> the baseline 508 which can fill smaller volumes. *This means that a state 
> volume that would normally have ~300MB total disk usage would use in excess 
> of 30GB during rebalancing*, mostly preallocated MANIFEST files.
> h2. Expected Behaviour
> For this particular application we expect 508 segment folders total to be 
> active and existing throughout rebalancing. Give or take migrated tasks that 
> are subject to the 

[jira] [Resolved] (KAFKA-7213) NullPointerException during state restoration in kafka streams

2021-03-24 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman resolved KAFKA-7213.
---
Resolution: Fixed

I think we can close this as the version is quite old now and much refactoring 
of the task management code has occurred since then, with no reports of NPEs -- 
please reopen if you encounter this issue again on more recent versions

> NullPointerException during state restoration in kafka streams
> --
>
> Key: KAFKA-7213
> URL: https://issues.apache.org/jira/browse/KAFKA-7213
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Abhishek Agarwal
>Priority: Major
>
> I had written a custom state store which has a batch restoration callback 
> registered. What I have observed is that when multiple consumer instances are 
> restarted, the application keeps failing with a NullPointerException. The stack 
> trace is:
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.kafka.streams.state.internals.RocksDBStore.putAll(RocksDBStore.java:351)
>  ~[kafka-streams-1.0.0.jar:?]
>   at 
> org.apache.kafka.streams.state.internals.RocksDBSlotKeyValueBytesStore.putAll(RocksDBSlotKeyValueBytesStore.java:100)
>  ~[streams-core-1.0.0.297.jar:?]
>   at 
> org.apache.kafka.streams.state.internals.RocksDBSlotKeyValueBytesStore$SlotKeyValueBatchRestoreCallback.restoreAll(RocksDBSlotKeyValueBytesStore.java:303)
>  ~[streams-core-1.0.0.297.jar:?]
>   at 
> org.apache.kafka.streams.processor.internals.CompositeRestoreListener.restoreAll(CompositeRestoreListener.java:89)
>  ~[kafka-streams-1.0.0.jar:?]
>   at 
> org.apache.kafka.streams.processor.internals.StateRestorer.restore(StateRestorer.java:75)
>  ~[kafka-streams-1.0.0.jar:?]
>   at 
> org.apache.kafka.streams.processor.internals.StoreChangelogReader.processNext(StoreChangelogReader.java:277)
>  ~[kafka-streams-1.0.0.jar:?]
>   at 
> org.apache.kafka.streams.processor.internals.StoreChangelogReader.restorePartition(StoreChangelogReader.java:238)
>  ~[kafka-streams-1.0.0.jar:?]
>   at 
> org.apache.kafka.streams.processor.internals.StoreChangelogReader.restore(StoreChangelogReader.java:83)
>  ~[kafka-streams-1.0.0.jar:?]
>   at 
> org.apache.kafka.streams.processor.internals.TaskManager.updateNewAndRestoringTasks(TaskManager.java:263)
>  ~[kafka-streams-1.0.0.jar:?]
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:803)
>  ~[kafka-streams-1.0.0.jar:?]
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
>  ~[kafka-streams-1.0.0.jar:?]
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
>  ~[kafka-streams-1.0.0.jar:?]
> {noformat}
> The faulty line in question is 
> {noformat}
> db.write(wOptions, batch);
> {noformat}
> in RocksDBStore.java, which would mean that the db variable is null. Probably 
> the store has been closed while restoration is still being done on it. After 
> going through the code, I think the problem occurs when the state transitions 
> from PARTITIONS_ASSIGNED to PARTITIONS_REVOKED while restoration is still in 
> progress.
> In such a state transition, while the active tasks themselves are closed, the 
> changelog reader is not reset. It tries to restore tasks that have already 
> been closed; db is null, which results in the NPE.
> I will put in a fix to see if that fixes the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-5256) Non-checkpointed state stores should be deleted before restore

2021-03-24 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman resolved KAFKA-5256.
---
Resolution: Fixed

Closing this since the described behavior is now implemented in EOS and this 
seems to cover the desired semantics

> Non-checkpointed state stores should be deleted before restore
> --
>
> Key: KAFKA-5256
> URL: https://issues.apache.org/jira/browse/KAFKA-5256
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Tommy Becker
>Priority: Major
>
> Currently, Kafka Streams will re-use an existing state store even if there is 
> no checkpoint for it. This seems both inefficient (because duplicate inserts 
> can be made on restore) and incorrect (records which have been deleted from 
> the backing topic may still exist in the store). Since the contents of a 
> store with no checkpoint are unknown, the best way to proceed would be to 
> delete the store and recreate it before restoring.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Kafka logos and branding

2021-03-24 Thread Justin Mclean
Hi,

> The logo in Apache Kafka website has been updated. The high resolution
> image is also linked from the trademark page.

Thanks for that, much appreciated.

Justin


[jira] [Created] (KAFKA-12549) Allow state stores to opt-in transactional support

2021-03-24 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-12549:
-

 Summary: Allow state stores to opt-in transactional support
 Key: KAFKA-12549
 URL: https://issues.apache.org/jira/browse/KAFKA-12549
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Reporter: Guozhang Wang


Right now Kafka Streams' EOS implementation does not make any assumptions about 
the state store's transactional support. Allowing state stores to optionally 
provide transactional support can have multiple benefits. E.g., if we add some 
APIs to the {{StateStore}} interface, like {{beginTxn}}, {{commitTxn}}, and 
{{abortTxn}}, then these APIs can be used under both ALOS and EOS such that:

* store.beginTxn
* store.put // during processing
* streams commit // either through eos protocol or not
* store.commitTxn

We can have the following benefits:
* Reduce duplicated records upon crashes for ALOS (note this is still not EOS, 
but some middle ground where uncommitted data within a state store would not be 
retained if store.commitTxn failed).
* No need to wipe the state store and re-bootstrap from scratch upon crashes 
for EOS, e.g. if a crash-failure happened between the streams commit completing 
and store.commitTxn. We can instead just roll forward the transaction by 
replaying the changelog from the second most recent streams committed offset to 
the most recent committed offset.
* Remote stores that support transactions then do not need to support wiping 
(https://issues.apache.org/jira/browse/KAFKA-12475).
* We can fix the known issues of emit-on-change 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-557%3A+Add+emit+on+change+support+for+Kafka+Streams).
* We can support "query committed data only" for interactive queries (see below 
for reasons).

As for the implementation of these APIs, there are several options:
* The state store itself has native transaction features (e.g. RocksDB).
* Use an in-memory buffer for all puts within a transaction, and upon 
`commitTxn` write the whole buffer as a batch to the underlying state store, or 
just drop the whole buffer upon aborting. Then for interactive queries, one can 
optionally query the underlying store for committed data only.
* Use a separate store as the transient persistent buffer. Upon `beginTxn` 
create a new empty transient store, and upon `commitTxn` merge the store into 
the underlying store. The same applies for interactive queries on 
committed-only data.
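
A minimal sketch of the proposed additions (hypothetical signatures; no such 
methods exist on {{StateStore}} today):

{code:java}
public interface StateStore {
    // ... existing methods ...

    // Proposed transactional hooks; default no-ops would keep existing
    // non-transactional stores working unchanged.
    default void beginTxn() {}
    default void commitTxn() {}
    default void abortTxn() {}
}

// The flow described above, under either ALOS or EOS:
//   store.beginTxn();
//   store.put(key, value);   // during processing
//   /* streams commit */     // through the eos protocol or not
//   store.commitTxn();
{code}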



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-2.8-jdk8 #79

2021-03-24 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12548) Invalid record error is not getting sent to application

2021-03-24 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12548:
---

 Summary: Invalid record error is not getting sent to application
 Key: KAFKA-12548
 URL: https://issues.apache.org/jira/browse/KAFKA-12548
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


The ProduceResponse includes a nice record error message when we return 
INVALID_RECORD_ERROR. Sadly this is getting discarded by the producer, so the 
user never gets a chance to see it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Kafka logos and branding

2021-03-24 Thread Jun Rao
Hi, Justin,

The logo in Apache Kafka website has been updated. The high resolution
image is also linked from the trademark page.

Thanks,

Jun

On Mon, Mar 22, 2021 at 6:46 PM Justin Mclean  wrote:

> Hi,
>
> > Yes, we confirmed from ASF branding that this is the correct logo (i.e.
> > without the word Apache and the registered trademark symbol).
>
> Thanks for that, just odd that I've seen it with the Apache and R symbol
> elsewhere and assumed that was the official one. It doesn't match the logo
> on the Kafka website, for instance.
>
> Thanks,
> Justin
>


[GitHub] [kafka-site] junrao merged pull request #343: MINOR: more powered by companies and updating apache logo

2021-03-24 Thread GitBox


junrao merged pull request #343:
URL: https://github.com/apache/kafka-site/pull/343


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-12547) Drop idempotent updates for repartition operations

2021-03-24 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12547:
--

 Summary: Drop idempotent updates for repartition operations
 Key: KAFKA-12547
 URL: https://issues.apache.org/jira/browse/KAFKA-12547
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: A. Sophie Blee-Goldman


Implement emit-on-change semantics for repartition operations, when we need to 
send both prior and new results.

One of the proposed changes in 
[KIP-557|https://cwiki.apache.org/confluence/display/KAFKA/KIP-557%3A+Add+emit+on+change+support+for+Kafka+Streams]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12546) Drop idempotent updates for aggregations

2021-03-24 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12546:
--

 Summary: Drop idempotent updates for aggregations
 Key: KAFKA-12546
 URL: https://issues.apache.org/jira/browse/KAFKA-12546
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: A. Sophie Blee-Goldman


For example, KGroupedStream, KGroupedTable, TimeWindowedKStream, and 
SessionWindowedKStream operations.

One of the proposed changes in 
[KIP-557|https://cwiki.apache.org/confluence/display/KAFKA/KIP-557%3A+Add+emit+on+change+support+for+Kafka+Streams]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] scott-confluent commented on a change in pull request #343: MINOR: more powered by companies and updating apache logo

2021-03-24 Thread GitBox


scott-confluent commented on a change in pull request #343:
URL: https://github.com/apache/kafka-site/pull/343#discussion_r600872282



##
File path: css/styles.css
##
@@ -444,15 +444,15 @@ tr:nth-child(odd) {
 }
 .logo-link {
 display: inline-block;
-width: 220px;
-height: 64.77px;
-background: url("/images/logo.png");

Review comment:
   removed




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] scott-confluent commented on a change in pull request #343: MINOR: more powered by companies and updating apache logo

2021-03-24 Thread GitBox


scott-confluent commented on a change in pull request #343:
URL: https://github.com/apache/kafka-site/pull/343#discussion_r600868690



##
File path: css/styles.css
##
@@ -444,15 +444,15 @@ tr:nth-child(odd) {
 }
 .logo-link {
 display: inline-block;
-width: 220px;
-height: 64.77px;
-background: url("/images/logo.png");
-background-size: 217px auto;
+width: 117px;
+height: 65px;
+background: url("/logos/kafka_logo--simple.png");
+background-size: auto 65px;
 background-repeat: no-repeat;
 /* small: 168 x 50 */
 }
 .page-dark .logo-link {
-background: url('/logos/kafka-white.svg');

Review comment:
   I've cleaned up all the "dark theme" stuff, which included any of the 
white logos that were in use.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] junrao commented on a change in pull request #343: MINOR: more powered by companies and updating apache logo

2021-03-24 Thread GitBox


junrao commented on a change in pull request #343:
URL: https://github.com/apache/kafka-site/pull/343#discussion_r600850419



##
File path: css/styles.css
##
@@ -444,15 +444,15 @@ tr:nth-child(odd) {
 }
 .logo-link {
 display: inline-block;
-width: 220px;
-height: 64.77px;
-background: url("/images/logo.png");
-background-size: 217px auto;
+width: 117px;
+height: 65px;
+background: url("/logos/kafka_logo--simple.png");
+background-size: auto 65px;
 background-repeat: no-repeat;
 /* small: 168 x 50 */
 }
 .page-dark .logo-link {
-background: url('/logos/kafka-white.svg');
+background: url('/logos/kafka_logo--white-simple.png.svg');

Review comment:
   kafka_logo--white-simple.png.svg => kafka_logo--white-simple.png ?

##
File path: css/styles.css
##
@@ -444,15 +444,15 @@ tr:nth-child(odd) {
 }
 .logo-link {
 display: inline-block;
-width: 220px;
-height: 64.77px;
-background: url("/images/logo.png");

Review comment:
   Should we remove the old logo image file?

##
File path: css/styles.css
##
@@ -444,15 +444,15 @@ tr:nth-child(odd) {
 }
 .logo-link {
 display: inline-block;
-width: 220px;
-height: 64.77px;
-background: url("/images/logo.png");
-background-size: 217px auto;
+width: 117px;
+height: 65px;
+background: url("/logos/kafka_logo--simple.png");
+background-size: auto 65px;
 background-repeat: no-repeat;
 /* small: 168 x 50 */
 }
 .page-dark .logo-link {
-background: url('/logos/kafka-white.svg');

Review comment:
   Should we remove the old logo image file?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-12545) Integrate snapshot in the shell tool

2021-03-24 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-12545:
--

 Summary: Integrate snapshot in the shell tool
 Key: KAFKA-12545
 URL: https://issues.apache.org/jira/browse/KAFKA-12545
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jose Armando Garcia Sancio


The MetadataNodeManager in the shell tool doesn't fully support snapshots. It 
needs to handle:
 # reloading snapshots
 # SnapshotReader::snapshotId representing the end offset and epoch for the 
records in the snapshot



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12544) Particular partitions lagging and consumers intermittently reading

2021-03-24 Thread kevin j staiger (Jira)
kevin j staiger created KAFKA-12544:
---

 Summary: Particular partitions lagging and consumers 
intermittently reading
 Key: KAFKA-12544
 URL: https://issues.apache.org/jira/browse/KAFKA-12544
 Project: Kafka
  Issue Type: Bug
  Components: consumer
 Environment: production
Reporter: kevin j staiger
 Attachments: Screen Shot 2021-03-17 at 4.04.50 PM.png, Screen Shot 
2021-03-23 at 8.42.13 PM.png

Hi, we are experiencing a strange issue with a Kafka topic where it seems like 
a particular consumer gets stuck in a bad state. We're running an 8-pod 
Kubernetes cluster with 2 threads and 16 partitions. Things run smoothly for a 
while and then one of the pods (with 2 consumers and 2 partitions) will become 
very intermittent in its read rate and partition lag will spike. Eventually 
all of the pods switch from reading at a steady rate to this spiky, intermittent 
rate. The CPU on the pods seems normal and the byte rate of the events seems 
fine. Any idea why certain consumers can get into this state where there seem 
to be gaps of 0 operations happening and the lag continually increases? Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #606

2021-03-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #634

2021-03-24 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12173 Migrate streams:streams-scala module to JUnit 5 (#9858)


--
[...truncated 3.69 MB...]

[test output elided: AuthorizerIntegrationTest, SslProducerSendTest, 
ProducerCompressionTest, MetricsTest, and ProducerFailureHandlingTest cases, 
all STARTED/PASSED; the log is cut off mid-stream]

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #665

2021-03-24 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12173 Migrate streams:streams-scala module to JUnit 5 (#9858)


--
[...truncated 3.69 MB...]

[test output elided: LogTest cases, all STARTED/PASSED; the log is cut off 
mid-stream]

[GitHub] [kafka-site] scott-confluent opened a new pull request #343: MINOR -- more powered by companies and updating apache logo

2021-03-24 Thread GitBox


scott-confluent opened a new pull request #343:
URL: https://github.com/apache/kafka-site/pull/343


   Each new image has been optimized as well.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: Regarding email notification un-subscription

2021-03-24 Thread Jyotinder Singh
Thanks, I wasn't aware that the unsubscribe confirmation mail was
ending up in my spam folder.

Thanks a lot!

On Wed, 24 Mar 2021, 17:55 Tom Bentley,  wrote:

> Avinash,
>
> You don't say whether you have already tried, but you need to send an email
> to dev-unsubscr...@kafka.apache.org.
>
> Jyotinder,
>
> Assuming you definitely used the right email address, have you checked your
> spam folder, since IIRC you get sent a confirmation email with a link to
> visit to confirm that you want to unsubscribe.
>
> Kind regards,
>
> Tom
>
> On Wed, Mar 24, 2021 at 12:12 PM Jyotinder Singh 
> wrote:
>
> > I’m facing the same issue. Mailing to the unsubscribe address doesn’t
> seem
> > to work.
> > Email: jyotindrsi...@gmail.com
> >
> > On Wed, 24 Mar 2021 at 5:29 PM, Avinash Srivastava <
> > asrivast...@flyanra.com>
> > wrote:
> >
> > > Hi,
> > >
> > > I am continuously getting multiple emails from various email accounts
> > > related to kafka. I would request you to please unsubscribe my email -
> > > asrivast...@flyanra.com from subscription list.
> > >
> > >
> > > Thanks
> > >
> > > Avinash
> > >
> > > --
> > Sent from my iPad
> >
>


[VOTE] KIP-716: Allow configuring the location of the offset-syncs topic with MirrorMaker2

2021-03-24 Thread Mickael Maison
Hi,

I'd like to start a vote on KIP-716: Allow configuring the location of
the offset-syncs topic with MirrorMaker2.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-716%3A+Allow+configuring+the+location+of+the+offset-syncs+topic+with+MirrorMaker2

Thanks


[jira] [Created] (KAFKA-12543) Re-design the ownership model for snapshots

2021-03-24 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-12543:
--

 Summary: Re-design the ownership model for snapshots
 Key: KAFKA-12543
 URL: https://issues.apache.org/jira/browse/KAFKA-12543
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jose Armando Garcia Sancio


With the current implementation, {{RawSnapshotReader}} instances are created 
and closed by the {{KafkaRaftClient}} as needed to satisfy {{FetchSnapshot}} 
requests. This means that {{FileRawSnapshotReader}} instances are closed before 
the network client has had a chance to send the bytes over the network.

One way to fix this is to make the {{KafkaMetadataLog}} the owner of the 
{{FileRawSnapshotReader}}. Once a {{FileRawSnapshotReader}} is created it will 
stay open until the snapshot is deleted by 
{{ReplicatedLog::deleteBeforeSnapshot}}.
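
A rough sketch of the proposed ownership model (the class and method names 
exist in the raft module, but the bodies below are assumptions, not the actual 
patch):

{code:java}
class KafkaMetadataLog implements ReplicatedLog {
    // Readers stay open until their snapshot is deleted, instead of being
    // closed by KafkaRaftClient after each FetchSnapshot request.
    private final Map<OffsetAndEpoch, FileRawSnapshotReader> openSnapshots =
        new ConcurrentHashMap<>();

    Optional<RawSnapshotReader> readSnapshot(OffsetAndEpoch snapshotId) {
        return Optional.ofNullable(openSnapshots.computeIfAbsent(
            snapshotId, id -> openFileReader(id))); // hypothetical helper
    }

    @Override
    public boolean deleteBeforeSnapshot(OffsetAndEpoch snapshotId) {
        // Close and remove readers for snapshots older than snapshotId here,
        // releasing the file handles only after the snapshot is deleted.
        // ...
    }
}
{code}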



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-12508) Emit-on-change tables may lose updates on error or restart in at_least_once

2021-03-24 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler reopened KAFKA-12508:
--
  Assignee: John Roesler  (was: Bruno Cadonna)

> Emit-on-change tables may lose updates on error or restart in at_least_once
> ---
>
> Key: KAFKA-12508
> URL: https://issues.apache.org/jira/browse/KAFKA-12508
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Nico Habermann
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.8.0, 2.7.1, 2.6.2
>
>
> [KIP-557|https://cwiki.apache.org/confluence/display/KAFKA/KIP-557%3A+Add+emit+on+change+support+for+Kafka+Streams]
>  added emit-on-change semantics to KTables that suppress updates for 
> duplicate values.
> However, this may cause data loss in at_least_once topologies when records 
> are retried from the last commit due to an error / restart / etc.
>  
> Consider the following example:
> {code:java}
> streams.table(source, materialized)
> .toStream()
> .map(mayThrow())
> .to(output){code}
>  
>  # Record A gets read
>  # Record A is stored in the table
>  # The update for record A is forwarded through the topology
>  # Map() throws (or alternatively, any restart while the forwarded update was 
> still being processed and not yet produced to the output topic)
>  # The stream is restarted and "retries" from the last commit
>  # Record A gets read again
>  # The table will discard the update for record A because
>  ## The value is the same
>  ## The timestamp is the same
>  # Eventually the stream will commit
>  # There is absolutely no output for Record A even though we're running in 
> at_least_once
>  
> This behaviour does not seem intentional. [The emit-on-change logic 
> explicitly forwards records that have the same value and an older 
> timestamp.|https://github.com/apache/kafka/blob/367eca083b44261d4e5fa8aa61b7990a8b35f8b0/streams/src/main/java/org/apache/kafka/streams/state/internals/ValueAndTimestampSerializer.java#L50]
> This logic should probably be changed to also forward updates that have an 
> older *or equal* timestamp.
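
A sketch of the suggested fix (simplified; the actual check lives in 
ValueAndTimestampSerializer and may differ in detail):

{code:java}
// Current: drop when the value is unchanged and the new timestamp is not
// strictly older (newTs >= oldTs), which swallows at_least_once retries.
// Suggested: drop only when the new timestamp is strictly newer, so a retry
// carrying the same value and the same timestamp is forwarded again.
boolean drop = newValue.equals(oldValue) && newTimestamp > oldTimestamp;
{code}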



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12542) TopologyTestDriver

2021-03-24 Thread SHWETA SINHA (Jira)
SHWETA SINHA created KAFKA-12542:


 Summary: TopologyTestDriver
 Key: KAFKA-12542
 URL: https://issues.apache.org/jira/browse/KAFKA-12542
 Project: Kafka
  Issue Type: Bug
  Components: streams-test-utils
Affects Versions: 2.6.0
Reporter: SHWETA SINHA


I am using the Kafka Streams DSL to create a topology.

StreamsBuilder streamsBuilder = new StreamsBuilder();
valueSerde.configure(getConfigForSpecificAvro(appId), false);
KStream<String, Object> stream = streamsBuilder.stream(inputTopic, 
    Consumed.with(new Serdes.StringSerde(), valueSerde));
KStream<String, Object> filtered = stream.filter((key, value) -> 
    ServiceConsumer.filter(key, value));
filtered.map((KeyValueMapper<String, Object, KeyValue<String, Object>>) 
        (key, value) -> ServiceConsumer.process(key, value))
    .to((k, v, recordContext) -> v instanceof AvroDTO ? 
        dlqTopic : outputTopic, Produced.with(new Serdes.StringSerde(), valueSerde));
Topology topology = streamsBuilder.build();
KafkaStreams kafkaStreams = new 
    KafkaStreams(topology, getKafkaStreamsConfig(appId));

 

To test the topology, I am using TopologyTestDriver.

when(getKafkaStreamsConfig(any())).thenReturn(kafkaConfig);
when(ServiceConsumer.filter(any(), any())).thenReturn(false);
// when(ServiceConsumer.process(any(), any())).thenReturn(new 
//     KeyValue<>(statusDto.getTaskId().toString(), statusDto));
topologyTestDriver = new 
    TopologyTestDriver(getTopologyAndStartKafkaStreams(), kafkaConfig);
StreamInput = topologyTestDriver.createInputTopic("INPUT_TOPIC", new 
    StringSerializer(), new KafkaAvroSerializer(schemaRegistryClient));
UpdateOutput = topologyTestDriver.createOutputTopic("OUTPUT_TOPIC", new 
    StringDeserializer(), new KafkaAvroDeserializer(schemaRegistryClient, config));

StreamInput.pipeInput("Hi");
assertThat(UpdateOutput.isEmpty()).isTrue();

 

I am checking that if there are no filtered messages, then my output topic is empty.

Getting Error while Unit Testing

java.lang.IllegalArgumentException: Unknown topic: OUTPUT_TOPIC

 

Changing when(ServiceConsumer.filter(any(), any())).thenReturn(false); to true 
doesn't give any error.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12391) Add an option to store arbitrary metadata to a SourceRecord

2021-03-24 Thread Luca Burgazzoli (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luca Burgazzoli resolved KAFKA-12391.
-
Resolution: Information Provided

> Add an option to store arbitrary metadata to a SourceRecord
> ---
>
> Key: KAFKA-12391
> URL: https://issues.apache.org/jira/browse/KAFKA-12391
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Luca Burgazzoli
>Priority: Minor
>
> When writing Source Connectors for Kafka, it may be required to perform some 
> additional housekeeping when a record has been acknowledged by the Kafka 
> broker, and as of today, it is possible to set up a hook by overriding 
> SourceTask.commitRecord(SourceRecord).
> This works fine in most cases, but to make it easy for the source 
> connector to perform its internal housekeeping, it would be nice to have an 
> option to attach some additional metadata to the SourceRecord without any 
> impact on the record sent to the Kafka broker, something like:
> {code:java}
> class SourceRecord {
>     public SourceRecord(
>             ...,
>             Map<String, Object> attributes) {
>         ...
>         this.attributes = attributes;
>     }
> 
>     Map<String, Object> attributes() {
>         return attributes;
>     }
> }
> {code}
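
A hypothetical usage sketch under the proposed API (the attribute key and 
helper below are illustrative only):

{code:java}
@Override
public void commitRecord(SourceRecord record) {
    // Read back the metadata attached when the record was created; the
    // attributes would never travel to the Kafka broker.
    Object handle = record.attributes().get("cleanup-handle");
    if (handle != null) {
        cleanUp(handle); // connector-internal housekeeping
    }
}
{code}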



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Regarding email notification un-subscription

2021-03-24 Thread Tom Bentley
Avinash,

You don't say whether you have already tried, but you need to send an email
to dev-unsubscr...@kafka.apache.org.

Jyotinder,

Assuming you definitely used the right email address, have you checked your
spam folder, since IIRC you get sent a confirmation email with a link to
visit to confirm that you want to unsubscribe.

Kind regards,

Tom

On Wed, Mar 24, 2021 at 12:12 PM Jyotinder Singh 
wrote:

> I’m facing the same issue. Mailing to the unsubscribe address doesn’t seem
> to work.
> Email: jyotindrsi...@gmail.com
>
> On Wed, 24 Mar 2021 at 5:29 PM, Avinash Srivastava <
> asrivast...@flyanra.com>
> wrote:
>
> > Hi,
> >
> > I am continuously getting multiple emails from various email accounts
> > related to kafka. I would request you to please unsubscribe my email -
> > asrivast...@flyanra.com from subscription list.
> >
> >
> > Thanks
> >
> > Avinash
> >
> > --
> Sent from my iPad
>


[GitHub] [kafka-site] Ambuj8105 opened a new pull request #342:  Please add Knoldus Inc. (https://www.knoldus.com/) to the list of the "Powered By ❤️".

2021-03-24 Thread GitBox


Ambuj8105 opened a new pull request #342:
URL: https://github.com/apache/kafka-site/pull/342


### **Context**
   
   [Knoldus](https://www.knoldus.com/home) is a niche engineering services 
company with a presence in Singapore, Canada, the USA, the Netherlands, and 
India. We use Kafka in most of our projects for building real-time analytics 
systems, and have been using Kafka Streams for async communication between 
micro-services. Hence we'd like to be part of the "Powered By" web-page.
   
   Hopefully, we'll see our organisation on your web-page.
   
   Feel free to [contact me](https://www.linkedin.com/in/ambuj-sahu-57b87/) 
with any further questions.
   
   ### **Kafka Contributions**
   
   We try to share and spread the knowledge of Kafka around us, please have a 
look at our contributions:
   
   - [Various blogs on Kafka](https://blog.knoldus.com/?s=Kafka) where we 
explained features and use-cases of Kafka 
   - [KnolX sessions 
(Meet-ups)](https://www.youtube.com/c/Knoldus/search?query=kafka) we use to 
introduce Kafka and the problems we want to solve
   - A few [webinars on Kafka](https://www.knoldus.com/learn/webinars/kafka) 
and discussing its use-cases 
   - Ready to deploy [Kafka 
Templates](https://techhub.knoldus.com/dashboard/projects/kafka)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org







[jira] [Created] (KAFKA-12541) AdminClient.listOffsets should return the offset for the record with highest timestamp

2021-03-24 Thread Tom Scott (Jira)
Tom Scott created KAFKA-12541:
-

 Summary: AdminClient.listOffsets should return the offset for the 
record with highest timestamp
 Key: KAFKA-12541
 URL: https://issues.apache.org/jira/browse/KAFKA-12541
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Reporter: Tom Scott


In Kafka 2.7 the following method was added to AdminClient to provide this 
information:
{code:java}
public ListOffsetsResult listOffsets(Map<TopicPartition, OffsetSpec> topicPartitionOffsets,
                                     ListOffsetsOptions options)
{code}

[https://kafka.apache.org/27/javadoc/org/apache/kafka/clients/admin/KafkaAdminClient.html#listOffsets-java.util.Map-org.apache.kafka.clients.admin.ListOffsetsOptions-]

where OffsetSpec can be:
 * OffsetSpec.EarliestSpec
 * OffsetSpec.LatestSpec
 * OffsetSpec.TimestampSpec

 
This ticket introduces a new spec:
{code:java}
OffsetSpec.MaxTimestampSpec // returns the offset and timestamp for the record with the highest timestamp
{code}

This indicates to the AdminClient that we want to fetch the timestamp and 
offset for the record with the largest timestamp produced to a partition.
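
For illustration, a minimal sketch of how the new spec might be used, assuming 
it is exposed through a factory method (called OffsetSpec.maxTimestamp() here, 
by analogy with the existing OffsetSpec.latest() and OffsetSpec.earliest()):
{code:java}
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

public class MaxTimestampLookup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            // OffsetSpec.maxTimestamp() is the spec proposed by this ticket,
            // not part of the 2.7 API.
            ListOffsetsResultInfo info = admin
                    .listOffsets(Map.of(tp, OffsetSpec.maxTimestamp()))
                    .partitionResult(tp)
                    .get();
            System.out.printf("offset=%d, timestamp=%d%n", info.offset(), info.timestamp());
        }
    }
}
{code}
The existing ListOffsetsResultInfo already exposes both offset() and 
timestamp(), so no new result type should be needed.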





[GitHub] [kafka-site] Ambuj8105 closed pull request #335: MINOR: Added Knoldus to powered-by page

2021-03-24 Thread GitBox


Ambuj8105 closed pull request #335:
URL: https://github.com/apache/kafka-site/pull/335


   






Re: [DISCUSS] KIP-693: Client-side Circuit Breaker for Partition Write Errors

2021-03-24 Thread Guoqiang Shu


In our current proposal it can be configured via 
producer.circuit.breaker.mute.retry.interval (defaulting to 10 minutes), but 
perhaps 'interval' is a confusing name.
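
For concreteness, a sketch of how the proposed property would be set on a
producer; the property name and the 10-minute default come from the current
proposal and may still change (today's producer merely logs and ignores an
unrecognized config, so this runs but has no effect until the KIP lands):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class CircuitBreakerConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            // Proposed KIP-693 setting: how long a partition stays muted
            // before the circuit breaker retries it; 600000 ms matches the
            // proposed 10-minute default.
            props.put("producer.circuit.breaker.mute.retry.interval", "600000");
            new KafkaProducer<String, String>(props,
                    new StringSerializer(), new StringSerializer()).close();
        }
    }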

On 2021/03/23 00:45:23, Guozhang Wang  wrote: 
> Thanks for the updated KIP! Some more comments inlined.
> 
> I'm still not sure if, in your proposal, the muting length is a
> customizable value (and if yes, through which config) or it is always hard
> coded as 10 minutes?
> 
> Guozhang



[jira] [Created] (KAFKA-12540) Sub-key support to avoid unnecessary rekey operations where the new key is a compound key of the original key + a sub-field

2021-03-24 Thread Antony Stubbs (Jira)
Antony Stubbs created KAFKA-12540:
-

 Summary: Sub-key support to avoid unnecessary rekey operations 
where the new key is a compound key of the original key + a sub-field
 Key: KAFKA-12540
 URL: https://issues.apache.org/jira/browse/KAFKA-12540
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Affects Versions: 2.8.0
Reporter: Antony Stubbs


If I want, for example, to aggregate by account and by metric, and the input 
topic is keyed by account (and let’s say there’s a massive amount of traffic), 
the topology has to rekey on account+metric, which causes a repartition topic, 
then group by and aggregate.

However because we know that all the metrics for an account will already exist 
on the same partition, we ideally don’t want to have to repartition - causing a 
large unneeded overhead.

 

Ideally a new `#selectSubKey` sort of method could be introduced, which would 
build a compound key from the original.

 

var subKeyStream = stream.selectSubKey((k, v) -> v.getField("metric")); // under 
the hood this appends the returned sub-key to the existing key

 

Although this might break the key->partition strategy, the topology shouldn’t be 
considered dirty at this stage, as we know we’re still co-partitioned. What can 
happen next in the topology may need to be restricted, however. In this case we 
would then do:

 

subKeyStream.groupByKey().aggregate()

 

Functions other than aggregate may still need a repartition - or maybe not; 
I'm not sure.
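
For contrast, a sketch of the topology forced by today's API (the topic name, 
serdes, and value layout are illustrative); the proposed selectSubKey would let 
the count below run without the repartition topic:
{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;

public class RekeyedAggregationSketch {
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> byAccount = builder.stream(
                "metrics-by-account", Consumed.with(Serdes.String(), Serdes.String()));

        // Re-keying from `account` to `account|metric` marks the stream for
        // repartitioning, so the aggregation goes through a repartition topic
        // even though all metrics of an account share a partition already.
        byAccount
                .selectKey((account, value) -> account + "|" + value /* metric field */)
                .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
                .count();

        return builder.build();
    }
}
{code}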

 

Similarly described quite well in this forum here: 
[http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Re-keying-sub-keying-a-stream-without-repartitioning-td12745.html]

 





Re: Gradle error - aggregatedJavadoc depending on "compileJava"

2021-03-24 Thread Alexandre Dupriez
Hi Chia-Ping and Ismael,

Thanks for your follow-ups and for finding the root cause: the problem
appears because the rat plugin isn't applied when the .git folder does
not exist. This is indeed what one hits with a newly cloned local
repository, while existing local repositories that had already been
built worked just fine.

Thanks,
Alexandre

Le mer. 24 mars 2021 à 06:16, Chia-Ping Tsai  a écrit :
>
> hi Alexandre,
>
> please take a look at https://github.com/apache/kafka/pull/10386. We are 
> going to fix that error. Thanks for your report.
>
> On 2021/03/10 14:07:08, Alexandre Dupriez  wrote:
> > Hi Community,
> >
> > I tried to build Kafka from trunk on my environment today (2021, March
> > 10th) and it failed with the following Gradle error at the beginning
> > of the build, while Gradle configures project from build.gradle:
> >
> >   "Could not get unknown property 'compileJava' for root project
> > '' of type org.gradle.api.Project."
> >
> > The command used is "gradle releaseTarGz". Removing "dependsOn:
> > compileJava" from the task "aggregatedJavadoc" (added on March 9th
> > [1]) made the problem disappear - I wonder if anyone else encountered
> > the same problem?
> >
> > [1] https://github.com/apache/kafka/pull/10272
> >
> > Many thanks,
> > Alexandre
> >


[jira] [Created] (KAFKA-12539) Move some logic in handleVoteRequest to EpochState

2021-03-24 Thread dengziming (Jira)
dengziming created KAFKA-12539:
--

 Summary: Move some logic in handleVoteRequest to EpochState
 Key: KAFKA-12539
 URL: https://issues.apache.org/jira/browse/KAFKA-12539
 Project: Kafka
  Issue Type: Improvement
Reporter: dengziming


Reduce the cyclomatic complexity of KafkaRaftClient; see this comment for 
details: https://github.com/apache/kafka/pull/10289#discussion_r597274570





Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #664

2021-03-24 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12524: Remove deprecated segments() (#10379)


--
[...truncated 3.70 MB...]

[Test log excerpt elided: AuthorizerIntegrationTest, SslProducerSendTest, 
ProducerCompressionTest (none/gzip/snappy/lz4/zstd), MetricsTest, and 
ProducerFailureHandlingTest entries; every test shown in the excerpt PASSED 
before it cuts off mid-entry.]

Re: Gradle error - aggregatedJavadoc depending on "compileJava"

2021-03-24 Thread Chia-Ping Tsai
hi Alexandre,

please take a look at https://github.com/apache/kafka/pull/10386. We are going 
to fix that error. Thanks for your report.

On 2021/03/10 14:07:08, Alexandre Dupriez  wrote: 
> Hi Community,
> 
> I tried to build Kafka from trunk on my environment today (2021, March
> 10th) and it failed with the following Gradle error at the beginning
> of the build, while Gradle configures project from build.gradle:
> 
>   "Could not get unknown property 'compileJava' for root project
> '' of type org.gradle.api.Project."
> 
> The command used is "gradle releaseTarGz". Removing "dependsOn:
> compileJava" from the task "aggregatedJavadoc" (added on March 9th
> [1]) made the problem disappear - I wonder if anyone else encountered
> the same problem?
> 
> [1] https://github.com/apache/kafka/pull/10272
> 
> Many thanks,
> Alexandre
> 


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #605

2021-03-24 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12524: Remove deprecated segments() (#10379)


--
[...truncated 3.68 MB...]

[Test log excerpt elided: TransactionsTest, SaslClientsWithInvalidCredentialsTest, 
UserClientIdQuotaTest, and ZooKeeperClientTest entries; every test shown in the 
excerpt PASSED before it cuts off.]