[jira] [Updated] (KAFKA-10401) GroupMetadataManager ignores current_state_timestamp field for GROUP_METADATA_VALUE_SCHEMA_V3
[ https://issues.apache.org/jira/browse/KAFKA-10401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ismael Juma updated KAFKA-10401: Priority: Critical (was: Major) > GroupMetadataManager ignores current_state_timestamp field for > GROUP_METADATA_VALUE_SCHEMA_V3 > - > > Key: KAFKA-10401 > URL: https://issues.apache.org/jira/browse/KAFKA-10401 > Project: Kafka > Issue Type: Bug > Components: offset manager >Affects Versions: 2.1.1, 2.2.2, 2.4.1, 2.6.0, 2.5.1 >Reporter: Marek >Assignee: Luke Chen >Priority: Critical > > While reading group metadata from a ByteBuffer, GroupMetadataManager > reads current_state_timestamp only for group schema version 2. For all other > versions this value is set to "None". > Piece of code responsible for the bug: > [https://github.com/apache/kafka/blob/2.6.0/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L1412] > Restarting Kafka forces the GroupMetadataManager to reload group metadata > from file, effectively making > [KIP-211|https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets] > applicable only to schema version 2. > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
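The failure mode above can be sketched with a minimal Python model (hypothetical names; the real logic is the Scala version match in GroupMetadataManager, around the line linked above): the buggy shape only extracts the timestamp when the schema version is exactly 2, so v3 metadata silently loses it.

```python
def read_current_state_timestamp(version, fields):
    """Buggy shape: only schema v2 ever yields current_state_timestamp."""
    if version == 2:
        return fields.get("current_state_timestamp")
    return None  # v3 (and any later version carrying the field) is dropped


def read_current_state_timestamp_fixed(version, fields):
    """Fixed shape: every schema version that carries the field reads it."""
    if version >= 2:
        return fields.get("current_state_timestamp")
    return None  # v0/v1 never had the field


metadata = {"current_state_timestamp": 1_597_600_000_000}
```

Under this model, a v3 record parsed by the buggy reader comes back with no timestamp, which is why offset expiration under KIP-211 effectively only worked for groups written with schema v2.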
[jira] [Updated] (KAFKA-10401) GroupMetadataManager ignores current_state_timestamp field for GROUP_METADATA_VALUE_SCHEMA_V3
[ https://issues.apache.org/jira/browse/KAFKA-10401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ismael Juma updated KAFKA-10401: Fix Version/s: 2.6.1 2.7.0 > GroupMetadataManager ignores current_state_timestamp field for > GROUP_METADATA_VALUE_SCHEMA_V3 > - > > Key: KAFKA-10401 > URL: https://issues.apache.org/jira/browse/KAFKA-10401 > Project: Kafka > Issue Type: Bug > Components: offset manager >Affects Versions: 2.1.1, 2.2.2, 2.4.1, 2.6.0, 2.5.1 >Reporter: Marek >Assignee: Luke Chen >Priority: Critical > Fix For: 2.7.0, 2.6.1
[GitHub] [kafka] kkonstantine merged pull request #9172: KAFKA-10387: Fix inclusion of transformation configs when topic creation is enabled in Connect
kkonstantine merged pull request #9172: URL: https://github.com/apache/kafka/pull/9172 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] kkonstantine edited a comment on pull request #9172: KAFKA-10387: Fix inclusion of transformation configs when topic creation is enabled in Connect
kkonstantine edited a comment on pull request #9172: URL: https://github.com/apache/kafka/pull/9172#issuecomment-674644885 Thanks @rhauch ! Tested manually with transforms from the `plugin.path` as well. Merging on `trunk` and backporting to `2.6`. (2/3 builds were green)
[GitHub] [kafka] kkonstantine commented on pull request #9172: KAFKA-10387: Fix inclusion of transformation configs when topic creation is enabled in Connect
kkonstantine commented on pull request #9172: URL: https://github.com/apache/kafka/pull/9172#issuecomment-674644885 Thanks @rhauch ! Tested manually with transforms from the `plugin.path` as well. Merging on `trunk` and backporting to `2.6`.
[GitHub] [kafka] apovzner commented on pull request #8768: KAFKA-10023: Enforce broker-wide and per-listener connection creation…
apovzner commented on pull request #8768: URL: https://github.com/apache/kafka/pull/8768#issuecomment-674639202 Hi @rajinisivaram, the test failure turned out to be a bug where I did not remove connection rate sensors on listener removal. I fixed the code and tests now pass.
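The leak described here — per-listener connection-rate sensors surviving listener removal — can be illustrated with a toy registry (illustrative Python only, not Kafka's actual quota code; all names are made up):

```python
class ListenerQuotas:
    """Toy registry of per-listener connection-rate sensors.

    If remove_listener forgets to drop the sensor, a removed listener
    leaves stale state behind, which is the kind of leak the
    testAddRemoveSaslListeners-style dynamic-listener tests catch."""

    def __init__(self):
        self._sensors = {}

    def add_listener(self, name):
        # one rate sensor per listener
        self._sensors[name] = {"connections": 0}

    def record_connection(self, name):
        self._sensors[name]["connections"] += 1

    def remove_listener(self, name):
        # the crucial cleanup: drop the sensor together with the listener
        self._sensors.pop(name, None)

    def has_sensor(self, name):
        return name in self._sensors
```

Without the `pop` in `remove_listener`, the sensor for a removed listener would linger in the registry, matching the bug apovzner describes fixing.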
[jira] [Assigned] (KAFKA-10401) GroupMetadataManager ignores current_state_timestamp field for GROUP_METADATA_VALUE_SCHEMA_V3
[ https://issues.apache.org/jira/browse/KAFKA-10401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luke Chen reassigned KAFKA-10401: - Assignee: Luke Chen > GroupMetadataManager ignores current_state_timestamp field for > GROUP_METADATA_VALUE_SCHEMA_V3
[GitHub] [kafka] huxihx opened a new pull request #9189: Kafka 10407: Have KafkaLog4jAppender support `linger.ms` and `batch.size`
huxihx opened a new pull request #9189: URL: https://github.com/apache/kafka/pull/9189 https://issues.apache.org/jira/browse/KAFKA-10407 Currently, KafkaLog4jAppender does not support `linger.ms` or `batch.size`. In some situations, these two parameters are useful for tuning performance. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
[jira] [Comment Edited] (KAFKA-8154) Buffer Overflow exceptions between brokers and with clients
[ https://issues.apache.org/jira/browse/KAFKA-8154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17178678#comment-17178678 ] Alena Messmer edited comment on KAFKA-8154 at 8/17/20, 2:00 AM: [~pwebb.itrs] [~rnataraja] I've opened [a pull request on GitHub|https://github.com/apache/kafka/pull/9188]. I no longer see the problem after applying this change. Could you apply the change and see if it resolves the problem for you as well? was (Author: alena.messmer): [~pwebb.itrs] [~rnataraja] I've opened a pull request on [GitHub|https://github.com/apache/kafka/pull/9188]. I no longer see the problem after applying this change. Could you apply the change and see if it resolves the problem for you as well? > Buffer Overflow exceptions between brokers and with clients > --- > > Key: KAFKA-8154 > URL: https://issues.apache.org/jira/browse/KAFKA-8154 > Project: Kafka > Issue Type: Bug > Components: clients >Affects Versions: 2.1.0 >Reporter: Rajesh Nataraja >Priority: Major > Attachments: server.properties.txt > > > https://github.com/apache/kafka/pull/6495 > https://github.com/apache/kafka/pull/5785
[jira] [Comment Edited] (KAFKA-8154) Buffer Overflow exceptions between brokers and with clients
[ https://issues.apache.org/jira/browse/KAFKA-8154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17178678#comment-17178678 ] Alena Messmer edited comment on KAFKA-8154 at 8/17/20, 1:59 AM: [~pwebb.itrs] [~rnataraja] I've opened a pull request on [GitHub|https://github.com/apache/kafka/pull/9188]. I no longer see the problem after applying this change. Could you apply the change and see if it resolves the problem for you as well? was (Author: alena.messmer): [~pwebb.itrs] [~rnataraja] I've opened a [pull request on GitHub|https://github.com/apache/kafka/pull/9188]. I no longer see the problem after applying this change. Could you apply the change and see if it resolves the problem for you as well? > Buffer Overflow exceptions between brokers and with clients
[jira] [Commented] (KAFKA-8154) Buffer Overflow exceptions between brokers and with clients
[ https://issues.apache.org/jira/browse/KAFKA-8154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17178678#comment-17178678 ] Alena Messmer commented on KAFKA-8154: -- [~pwebb.itrs] [~rnataraja] I've opened a [pull request on GitHub|https://github.com/apache/kafka/pull/9188]. I no longer see the problem after applying this change. Could you apply the change and see if it resolves the problem for you as well? > Buffer Overflow exceptions between brokers and with clients
[jira] [Assigned] (KAFKA-10407) add linger.ms parameter support to KafkaLog4jAppender
[ https://issues.apache.org/jira/browse/KAFKA-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huxihx reassigned KAFKA-10407: -- Assignee: huxihx > add linger.ms parameter support to KafkaLog4jAppender > - > > Key: KAFKA-10407 > URL: https://issues.apache.org/jira/browse/KAFKA-10407 > Project: Kafka > Issue Type: Improvement > Components: logging >Reporter: Yu Yang >Assignee: huxihx >Priority: Minor > > Currently KafkaLog4jAppender does not accept the `linger.ms` setting. When a > service has an outage that causes excessive error logging, the service can > send too many producer requests to the Kafka brokers and overload them. > Setting a non-zero `linger.ms` allows the Kafka producer to batch records and > reduce the number of producer requests.
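The batching rationale is easy to quantify with a back-of-the-envelope sketch (the record counts and batch size below are assumed for illustration, not measurements): with `linger.ms=0` an error storm tends toward one record per produce request, while a non-zero `linger.ms` lets many records share a request.

```python
import math

def produce_requests(num_records, records_per_batch):
    """Rough number of produce requests when records are batched."""
    return math.ceil(num_records / max(records_per_batch, 1))

unbatched = produce_requests(100_000, 1)    # linger.ms=0: ~one record per request
batched = produce_requests(100_000, 100)    # linger.ms>0: ~100 records per request
```

Under the assumed batch size this is a 100x reduction in request count, which is exactly the broker-overload relief the report is asking for.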
[GitHub] [kafka] Spatterjaaay opened a new pull request #9188: break when dst is full so that unwrap isn't called when appreadbuffer…
Spatterjaaay opened a new pull request #9188: URL: https://github.com/apache/kafka/pull/9188 … may have data There are a couple of different situations which can result in BUFFER_OVERFLOW on read with the current implementation, due to the while loop structure (such as TLS compression with identical buffer sizes, or buffer sizes that differ to optimize modes where the cipher text is larger than the plain text). The JDK documentation indicates that a buffer of getApplicationBufferSize() bytes will be enough for a single unwrap operation, but the SslTransportLayer loop may call unwrap with an application buffer which isn't empty. The current implementation will check dst for space and then move data from the application buffer. It will then continue the loop and may try to unwrap() again without verifying that there are getApplicationBufferSize() bytes free in the application buffer. If, instead, the loop moves data into dst, and then breaks the loop if dst is full, then unwrap() should never be called with data in the application buffer.
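The proposed loop ordering can be modeled in a few lines of Python (a simulation only, not the SslTransportLayer code: `unwrap` is replaced by popping pre-decrypted chunks, and all names are made up): drain the application buffer into `dst` first, break as soon as `dst` is full, and only unwrap into an empty application buffer.

```python
def drain_reads(dst_capacity, app_buf, decrypted_chunks):
    """Simulate the fixed read loop from the PR description.

    app_buf models the SSL application read buffer; decrypted_chunks
    stands in for successive unwrap() results."""
    dst = bytearray()
    while True:
        # 1. move whatever fits from the application buffer into dst
        room = dst_capacity - len(dst)
        dst += app_buf[:room]
        del app_buf[:room]
        # 2. break when dst is full, instead of unwrapping again
        if len(dst) == dst_capacity:
            break
        if not decrypted_chunks:
            break  # nothing left to unwrap
        # 3. app_buf is empty here, so a full-size unwrap result always fits
        assert len(app_buf) == 0
        app_buf += decrypted_chunks.pop(0)
    return bytes(dst), app_buf
```

Leftover plaintext stays in the application buffer for the next read call, mirroring how the reordered loop avoids calling unwrap() into a partially full buffer.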
[GitHub] [kafka] huxihx commented on pull request #7711: KAFKA-9157: Avoid generating empty segments if all records are deleted after cleaning
huxihx commented on pull request #7711: URL: https://github.com/apache/kafka/pull/7711#issuecomment-674612527 @junrao Could you take some time to review this patch? Thanks :)
[jira] [Commented] (KAFKA-10363) Broker try to connect to a new cluster when there are changes in zookeeper.connect properties
[ https://issues.apache.org/jira/browse/KAFKA-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17178634#comment-17178634 ] Rens Groothuijsen commented on KAFKA-10363: --- One possible workaround is to update the broker's existing kafka-logs/meta.properties file with the expected ID, or remove it so that it will be regenerated upon the next restart. > Broker try to connect to a new cluster when there are changes in > zookeeper.connect properties > - > > Key: KAFKA-10363 > URL: https://issues.apache.org/jira/browse/KAFKA-10363 > Project: Kafka > Issue Type: Bug >Affects Versions: 2.4.0, 2.3.1 > Environment: 3 Kafka brokers (v2.3.1, v2.4.0) with Zookeeper cluster > (3.4.10) > Ubuntu 18.04 LTS >Reporter: Alexey Kornev >Priority: Critical > > We've just successfully set up a Kafka cluster consisting of 3 brokers and > faced the following issue: when we change the order of the zookeeper servers in > the zookeeper.connect property in the server.properties files and restart a Kafka > broker, that broker tries to connect to a new Kafka cluster. As a > result, the broker throws an error and shuts down. > For example, config server.properties on the first broker: > {code:java} > broker.id=-1 > ... > zookeeper.connect=node_1:2181/kafka,node_2:2181/kafka,node_3:2181/kafka > {code} > We changed it to > {code:java} > broker.id=-1 > ... > zookeeper.connect=node_2:2181/kafka,node_3:2181/kafka,node_1:2181/kafka {code} > and restarted the Kafka broker. 
> Logs: > {code:java} > [2020-08-05 09:07:55,658] INFO [ExpirationReaper-0-Heartbeat]: Starting > (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)[2020-08-05 > 09:07:55,658] INFO [ExpirationReaper-0-Heartbeat]: Starting > (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)[2020-08-05 > 09:07:55,658] INFO [ExpirationReaper-0-topic]: Starting > (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)[2020-08-05 > 09:07:57,070] INFO Registered kafka:type=kafka.Log4jController MBean > (kafka.utils.Log4jControllerRegistration$)[2020-08-05 09:07:57,656] INFO > Registered signal handlers for TERM, INT, HUP > (org.apache.kafka.common.utils.LoggingSignalHandler)[2020-08-05 09:07:57,657] > INFO starting (kafka.server.KafkaServer)[2020-08-05 09:07:57,658] INFO > Connecting to zookeeper on > node_2:2181/kafka,node_3:2181/kafka,node_1:2181/kafka > (kafka.server.KafkaServer)[2020-08-05 09:07:57,685] INFO [ZooKeeperClient > Kafka server] Initializing a new session to node_2:2181. > (kafka.zookeeper.ZooKeeperClient)[2020-08-05 09:07:57,690] INFO Client > environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, > built on 03/06/2019 16:18 GMT (org.apache.zookeeper.ZooKeeper)[2020-08-05 > 09:07:57,693] INFO Client environment:host.name=localhost > (org.apache.zookeeper.ZooKeeper)[2020-08-05 09:07:57,693] INFO Client > environment:java.version=11.0.8 (org.apache.zookeeper.ZooKeeper)[2020-08-05 > 09:07:57,696] INFO Client environment:java.vendor=Ubuntu > (org.apache.zookeeper.ZooKeeper)[2020-08-05 09:07:57,696] INFO Client > environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 > (org.apache.zookeeper.ZooKeeper)[2020-08-05 09:07:57,696] INFO Client >
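The meta.properties workaround suggested in the comment above relies on the file's simple key=value layout; a sketch of inspecting it (illustrative Python; the real file sits under each broker's log.dirs, and the sample values below are invented):

```python
def parse_meta_properties(text):
    """Parse a Kafka meta.properties file (plain key=value lines, # comments)."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

sample = """#Wed Aug 05 09:07:55 UTC 2020
version=0
broker.id=1
"""
meta = parse_meta_properties(sample)
# If meta["broker.id"] does not match the broker's expected ID, correct it
# (or delete the file so the broker regenerates it on the next restart).
```

Comparing the parsed `broker.id` against the expected value is the check the workaround amounts to; the fix is editing or removing the file before restarting the broker.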
[jira] [Created] (KAFKA-10407) add linger.ms parameter support to KafkaLog4jAppender
Yu Yang created KAFKA-10407: --- Summary: add linger.ms parameter support to KafkaLog4jAppender Key: KAFKA-10407 URL: https://issues.apache.org/jira/browse/KAFKA-10407 Project: Kafka Issue Type: Improvement Components: logging Reporter: Yu Yang Currently KafkaLog4jAppender does not accept the `linger.ms` setting. When a service has an outage that causes excessive error logging, the service can send too many producer requests to the Kafka brokers and overload them. Setting a non-zero `linger.ms` allows the Kafka producer to batch records and reduce the number of producer requests.
[GitHub] [kafka] rajinisivaram commented on pull request #8768: KAFKA-10023: Enforce broker-wide and per-listener connection creation…
rajinisivaram commented on pull request #8768: URL: https://github.com/apache/kafka/pull/8768#issuecomment-674554549 @apovzner DynamicBrokerReconfigurationTest.testAddRemoveSaslListeners failed in all three PR builds, so probably related? ``` 15:55:45 kafka.server.DynamicBrokerReconfigurationTest > testAddRemoveSaslListeners FAILED 15:55:45 org.scalatest.exceptions.TestFailedException: Processors not shutdown for removed listener 15:55:45 at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530) 15:55:45 at org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529) 15:55:45 at org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389) 15:55:45 at org.scalatest.Assertions.fail(Assertions.scala:1091) 15:55:45 at org.scalatest.Assertions.fail$(Assertions.scala:1087) 15:55:45 at org.scalatest.Assertions$.fail(Assertions.scala:1389) 15:55:45 at kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:1178) 15:55:45 at kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSaslListeners(DynamicBrokerReconfigurationTest.scala:1057) ```
[GitHub] [kafka] lbradstreet commented on pull request #9187: MINOR: bump mockito to 3.5.0
lbradstreet commented on pull request #9187: URL: https://github.com/apache/kafka/pull/9187#issuecomment-674551252 I thought this might be useful for jmh benchmarks as reflection can often throw us off. I'm not sure we use the inline mock maker anywhere though.
[GitHub] [kafka] lbradstreet opened a new pull request #9187: MINOR: bump mockito to 3.5.0
lbradstreet opened a new pull request #9187: URL: https://github.com/apache/kafka/pull/9187 3.5.0 no longer uses any reflection and is backwards compatible. The lack of reflection could be helpful when writing jmh benchmark tests, as these mocks can often completely throw off benchmark results and require some finagling to be more representative.
[jira] [Commented] (KAFKA-10396) Overall memory of container keep on growing due to kafka stream / rocksdb and OOM killed once limit reached
[ https://issues.apache.org/jira/browse/KAFKA-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17178519#comment-17178519 ] Vagesh Mathapati commented on KAFKA-10396: -- What I mean is: if my topic is going to have 10 million records, what should the values of Cache and writeBufferManager be? And if a topic is going to have just 5k records, what should those values be then? Since my different topics will have different numbers of records, do I need to create different classes, given that I can only pass a class name: config.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class); The results are with the iterator leak fixed. Updated the file. Also, I did not need to use libjemalloc. > Overall memory of container keep on growing due to kafka stream / rocksdb and > OOM killed once limit reached > --- > > Key: KAFKA-10396 > URL: https://issues.apache.org/jira/browse/KAFKA-10396 > Project: Kafka > Issue Type: Bug > Components: streams >Affects Versions: 2.3.1, 2.5.0 >Reporter: Vagesh Mathapati >Priority: Critical > Attachments: CustomRocksDBConfig.java, MyStreamProcessor.java, > kafkaStreamConfig.java > > > We are observing that the overall memory of our container keeps growing and > never comes down. > After analysis we found that rocksdbjni.so keeps allocating 64M chunks > of off-heap memory and never releases them. This causes an OOM kill once memory > reaches the configured limit. > We use Kafka Streams and GlobalKTable for many of our Kafka topics. > Below is our environment > * Kubernetes cluster > * openjdk 11.0.7 2020-04-14 LTS > * OpenJDK Runtime Environment Zulu11.39+16-SA (build 11.0.7+10-LTS) > * OpenJDK 64-Bit Server VM Zulu11.39+16-SA (build 11.0.7+10-LTS, mixed mode) > * Springboot 2.3 > * spring-kafka-2.5.0 > * kafka-streams-2.5.0 > * kafka-streams-avro-serde-5.4.0 > * rocksdbjni-5.18.3 > We observed the same result with Kafka 2.3. 
> Below is the snippet of our analysis > from pmap output we took addresses from these 64M allocations (RSS) > Address Kbytes RSS Dirty Mode Mapping > 7f3ce800 65536 65532 65532 rw--- [ anon ] > 7f3cf400 65536 65536 65536 rw--- [ anon ] > 7f3d6400 65536 65536 65536 rw--- [ anon ] > We tried to match with memory allocation logs enabled with the help of Azul > systems team. > @ /tmp/librocksdbjni6564497922441568920.so: > _Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0x261)[0x7f3e1c65d741] > - 0x7f3ce8ff7ca0 > @ /tmp/librocksdbjni6564497922441568920.so: > _ZN7rocksdb15BlockBasedTable3GetERKNS_11ReadOptionsERKNS_5SliceEPNS_10GetContextEPKNS_14SliceTransformEb+0x894)[0x7f3e1c898fd4] > - 0x7f3ce8ff9780 > @ /tmp/librocksdbjni6564497922441568920.so: > _Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0xfa)[0x7f3e1c65d5da] > - 0x7f3ce8ff9750 > @ /tmp/librocksdbjni6564497922441568920.so: > _Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0x261)[0x7f3e1c65d741] > - 0x7f3ce8ff97c0 > @ > /tmp/librocksdbjni6564497922441568920.so:_Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0xfa)[0x7f3e1c65d5da] > - 0x7f3ce8ffccf0 > @ /tmp/librocksdbjni6564497922441568920.so: > _Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0x261)[0x7f3e1c65d741] > - 0x7f3ce8ffcd10 > @ /tmp/librocksdbjni6564497922441568920.so: > _Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0xfa)[0x7f3e1c65d5da] > - 0x7f3ce8ffccf0 > @ /tmp/librocksdbjni6564497922441568920.so: > _Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0x261)[0x7f3e1c65d741] > - 0x7f3ce8ffcd10 > We also identified that content on this 64M 
is just zeros, with no data > present in it. > I tried to tune the RocksDB configuration as mentioned, but it did not help. > [https://docs.confluent.io/current/streams/developer-guide/config-streams.html#streams-developer-guide-rocksdb-config] > > Please let me know if you need any more details
[jira] [Updated] (KAFKA-10396) Overall memory of container keep on growing due to kafka stream / rocksdb and OOM killed once limit reached
[ https://issues.apache.org/jira/browse/KAFKA-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vagesh Mathapati updated KAFKA-10396: - Attachment: (was: MyStreamProcessor.java) > Overall memory of container keep on growing due to kafka stream / rocksdb and > OOM killed once limit reached
[jira] [Updated] (KAFKA-10396) Overall memory of container keep on growing due to kafka stream / rocksdb and OOM killed once limit reached
[ https://issues.apache.org/jira/browse/KAFKA-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vagesh Mathapati updated KAFKA-10396: - Attachment: MyStreamProcessor.java > Overall memory of container keep on growing due to kafka stream / rocksdb and > OOM killed once limit reached
[GitHub] [kafka] vamossagar12 commented on pull request #6669: KAFKA-8238: Adding Number of messages/bytes read
vamossagar12 commented on pull request #6669: URL: https://github.com/apache/kafka/pull/6669#issuecomment-674534010 > Hey there @vamossagar12, are you still working on this PR? Hey @stanislavkozlovski, I think I had made some final changes but couldn't get past the final review (I should probably have followed up again). I see merge conflicts now in the file where I made the change; I can resolve them if you feel this can be reviewed. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] rajinisivaram commented on pull request #8768: KAFKA-10023: Enforce broker-wide and per-listener connection creation…
rajinisivaram commented on pull request #8768: URL: https://github.com/apache/kafka/pull/8768#issuecomment-674533627 ok to test
[jira] [Resolved] (KAFKA-10404) Flaky Test kafka.api.SaslSslConsumerTest.testCoordinatorFailover
[ https://issues.apache.org/jira/browse/KAFKA-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajini Sivaram resolved KAFKA-10404. Fix Version/s: 2.6.1 2.5.2 2.7.0 Reviewer: Manikumar Resolution: Fixed > Flaky Test kafka.api.SaslSslConsumerTest.testCoordinatorFailover > > > Key: KAFKA-10404 > URL: https://issues.apache.org/jira/browse/KAFKA-10404 > Project: Kafka > Issue Type: Test > Components: core, unit tests >Reporter: Bill Bejeck >Assignee: Rajini Sivaram >Priority: Major > Fix For: 2.7.0, 2.5.2, 2.6.1 > > > From build [https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/3829/] > > {noformat} > kafka.api.SaslSslConsumerTest > testCoordinatorFailover FAILED > 11:27:15 java.lang.AssertionError: expected: but > was: commit cannot be completed since the consumer is not part of an active group > for auto partition assignment; it is likely that the consumer was kicked out > of the group.)> > 11:27:15 at org.junit.Assert.fail(Assert.java:89) > 11:27:15 at org.junit.Assert.failNotEquals(Assert.java:835) > 11:27:15 at org.junit.Assert.assertEquals(Assert.java:120) > 11:27:15 at org.junit.Assert.assertEquals(Assert.java:146) > 11:27:15 at > kafka.api.AbstractConsumerTest.sendAndAwaitAsyncCommit(AbstractConsumerTest.scala:195) > 11:27:15 at > kafka.api.AbstractConsumerTest.ensureNoRebalance(AbstractConsumerTest.scala:302) > 11:27:15 at > kafka.api.BaseConsumerTest.testCoordinatorFailover(BaseConsumerTest.scala:76) > 11:27:15 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > 11:27:15 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 11:27:15 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 11:27:15 at java.lang.reflect.Method.invoke(Method.java:498) > 11:27:15 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > 11:27:15 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 
11:27:15 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > 11:27:15 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 11:27:15 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 11:27:15 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 11:27:15 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > 11:27:15 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > 11:27:15 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > 11:27:15 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > 11:27:15 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > 11:27:15 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > 11:27:15 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > 11:27:15 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > 11:27:15 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > 11:27:15 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > 11:27:15 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 11:27:15 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 11:27:15 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > 11:27:15 at org.junit.runners.ParentRunner.run(ParentRunner.java:413) > 11:27:15 at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110) > 11:27:15 at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58) > 11:27:15 at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38) > 11:27:15 at > 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62) > 11:27:15 at > org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51) > 11:27:15 at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown > Source) > 11:27:15 at >
[GitHub] [kafka] rajinisivaram commented on pull request #9183: KAFKA-10404; Use higher poll timeout to avoid rebalance in testCoordinatorFailover
rajinisivaram commented on pull request #9183: URL: https://github.com/apache/kafka/pull/9183#issuecomment-674528245 @omkreddy Thanks for the review, merging to trunk, 2.6 and 2.5.
[GitHub] [kafka] rajinisivaram merged pull request #9183: KAFKA-10404; Use higher poll timeout to avoid rebalance in testCoordinatorFailover
rajinisivaram merged pull request #9183: URL: https://github.com/apache/kafka/pull/9183
[GitHub] [kafka] rajinisivaram commented on a change in pull request #9142: MINOR: Fix delete_topic for system tests
rajinisivaram commented on a change in pull request #9142: URL: https://github.com/apache/kafka/pull/9142#discussion_r471094493 ## File path: tests/kafkatest/services/kafka/kafka.py ## @@ -503,7 +503,7 @@ def create_topic(self, topic_cfg, node=None, use_zk_to_create_topic=True): self.logger.info("Running topic creation command...\n%s" % cmd) node.account.ssh(cmd) -def delete_topic(self, topic, node=None): +def delete_topic(self, topic, node=None, use_zk_to_delete_topic=False): Review comment: @rondagostino are we ok with merging this to trunk? Since this is not required for existing tests which either use ZK or PLAINTEXT brokers, not planning to backport to older versions.
[GitHub] [kafka] rajinisivaram merged pull request #9143: MINOR: Fix the way total consumed is calculated for verifiable consumer
rajinisivaram merged pull request #9143: URL: https://github.com/apache/kafka/pull/9143
[GitHub] [kafka] rajinisivaram commented on pull request #9143: MINOR: Fix the way total consumed is calculated for verifiable consumer
rajinisivaram commented on pull request #9143: URL: https://github.com/apache/kafka/pull/9143#issuecomment-674507951 @skaundinya15 Thanks for running the tests, merging to trunk.
[jira] [Resolved] (KAFKA-9516) Flaky Test PlaintextProducerSendTest#testNonBlockingProducer
[ https://issues.apache.org/jira/browse/KAFKA-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajini Sivaram resolved KAFKA-9516. --- Fix Version/s: 2.6.1 2.5.2 2.7.0 Reviewer: Ismael Juma Resolution: Fixed > Flaky Test PlaintextProducerSendTest#testNonBlockingProducer > > > Key: KAFKA-9516 > URL: https://issues.apache.org/jira/browse/KAFKA-9516 > Project: Kafka > Issue Type: Bug > Components: core, producer , tools, unit tests >Reporter: Matthias J. Sax >Assignee: Rajini Sivaram >Priority: Critical > Labels: flaky-test > Fix For: 2.7.0, 2.5.2, 2.6.1 > > > [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/4521/testReport/junit/kafka.api/PlaintextProducerSendTest/testNonBlockingProducer/] > {quote}java.util.concurrent.TimeoutException: Timeout after waiting for 1 > ms. at > org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:78) > at > org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30) > at > kafka.api.PlaintextProducerSendTest.verifySendSuccess$1(PlaintextProducerSendTest.scala:148) > at > kafka.api.PlaintextProducerSendTest.testNonBlockingProducer(PlaintextProducerSendTest.scala:172){quote} > {quote} > h3. Standard Output > [2020-02-06 03:35:27,912] ERROR [ReplicaFetcher replicaId=1, leaderId=0, > fetcherId=0] Error for partition topic-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. [2020-02-06 03:35:50,812] ERROR > [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition > topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. 
[2020-02-06 03:35:51,015] ERROR > [ReplicaManager broker=0] Error processing append operation on partition > topic-0 (kafka.server.ReplicaManager:76) > org.apache.kafka.common.errors.InvalidTimestampException: One or more records > have been rejected due to invalid timestamp [2020-02-06 03:35:51,027] ERROR > [ReplicaManager broker=0] Error processing append operation on partition > topic-0 (kafka.server.ReplicaManager:76) > org.apache.kafka.common.errors.InvalidTimestampException: One or more records > have been rejected due to invalid timestamp [2020-02-06 03:35:53,127] ERROR > [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition > topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. [2020-02-06 03:35:58,617] ERROR > [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition > topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. [2020-02-06 03:36:01,843] ERROR > [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition > topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. [2020-02-06 03:36:05,111] ERROR > [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition > topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. [2020-02-06 03:36:08,383] ERROR > [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition > topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. 
[2020-02-06 03:36:08,383] ERROR > [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition > topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. [2020-02-06 03:36:12,582] ERROR > [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition > topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. [2020-02-06 03:36:12,582] ERROR > [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition > topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not
[jira] [Resolved] (KAFKA-8033) Flaky Test PlaintextConsumerTest#testFetchInvalidOffset
[ https://issues.apache.org/jira/browse/KAFKA-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajini Sivaram resolved KAFKA-8033. --- Fix Version/s: 2.5.2 Reviewer: Ismael Juma Resolution: Fixed > Flaky Test PlaintextConsumerTest#testFetchInvalidOffset > --- > > Key: KAFKA-8033 > URL: https://issues.apache.org/jira/browse/KAFKA-8033 > Project: Kafka > Issue Type: Bug > Components: core, unit tests >Affects Versions: 2.3.0 >Reporter: Matthias J. Sax >Assignee: Rajini Sivaram >Priority: Critical > Labels: flaky-test > Fix For: 2.7.0, 2.5.2, 2.6.1 > > > [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2829/testReport/junit/kafka.api/PlaintextConsumerTest/testFetchInvalidOffset/] > {quote}org.scalatest.junit.JUnitTestFailedError: Expected exception > org.apache.kafka.clients.consumer.NoOffsetForPartitionException to be thrown, > but no exception was thrown{quote} > STDOUT prints this over and over again: > {quote}[2019-03-02 04:01:25,576] ERROR [ReplicaFetcher replicaId=0, > leaderId=1, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76){quote}
[GitHub] [kafka] rajinisivaram merged pull request #9184: KAFKA-8033; Wait for NoOffsetForPartitionException in testFetchInvalidOffset
rajinisivaram merged pull request #9184: URL: https://github.com/apache/kafka/pull/9184
[GitHub] [kafka] rajinisivaram commented on pull request #9184: KAFKA-8033; Wait for NoOffsetForPartitionException in testFetchInvalidOffset
rajinisivaram commented on pull request #9184: URL: https://github.com/apache/kafka/pull/9184#issuecomment-674505814 @ijuma Thanks for the review. Yes, NoOffsetForPartitionException continues to be thrown as soon as possible, typically less than 50ms. Merging to trunk, 2.6 and 2.5.
[GitHub] [kafka] rajinisivaram merged pull request #9181: KAFKA-9516; Increase timeout in testNonBlockingProducer to make it more reliable
rajinisivaram merged pull request #9181: URL: https://github.com/apache/kafka/pull/9181
[GitHub] [kafka] rajinisivaram commented on pull request #9181: KAFKA-9516; Increase timeout in testNonBlockingProducer to make it more reliable
rajinisivaram commented on pull request #9181: URL: https://github.com/apache/kafka/pull/9181#issuecomment-674503723 @ijuma Thanks for the review, merging to trunk, 2.6 and 2.5.
[GitHub] [kafka] JoelWee commented on pull request #9186: KAFKA-10277: Allow null keys with non-null mappedKey in KStreamKGlobalTable join
JoelWee commented on pull request #9186: URL: https://github.com/apache/kafka/pull/9186#issuecomment-674500361 [KAFKA-10277](https://issues.apache.org/jira/browse/KAFKA-10277?jql=project%20%3D%20KAFKA%20AND%20labels%20%3D%20newbie%20AND%20status%20%3D%20Open%20ORDER%20BY%20updated%20DESC) Hi @mjsax, please could you have a look? :) It feels like, if implemented this way, we should have a NullPointerException test for the processor, but I'm not sure where that test should go. It fits best as a direct unit test for the processor, but it doesn't look like any tests are written at that level, and it's somewhat inconvenient to add it to the existing join tests because of the way they are set up.
[GitHub] [kafka] JoelWee opened a new pull request #9186: KAFKA-10277: Allow null keys with non-null mappedKey in KStreamKGlobalTable join
JoelWee opened a new pull request #9186: URL: https://github.com/apache/kafka/pull/9186 ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
[GitHub] [kafka] stanislavkozlovski commented on pull request #6669: KAFKA-8238: Adding Number of messages/bytes read
stanislavkozlovski commented on pull request #6669: URL: https://github.com/apache/kafka/pull/6669#issuecomment-674497333 Hey there @vamossagar12, are you still working on this PR?