[jira] [Assigned] (KAFKA-10526) Explore performance impact of leader fsync deferral

2020-10-30 Thread Sagar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao reassigned KAFKA-10526:
-

Assignee: Sagar Rao

> Explore performance impact of leader fsync deferral
> ---
>
> Key: KAFKA-10526
> URL: https://issues.apache.org/jira/browse/KAFKA-10526
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Sagar Rao
>Priority: Major
>
> To commit a write, a majority of nodes must call fsync to ensure the data 
> has been written to disk. An interesting optimization to consider is 
> letting the leader defer fsync until the high watermark is ready to be 
> advanced. This potentially allows us to reduce the number of flushes on 
> the leader.
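The deferral above can be sketched with a toy log (illustrative only, not Kafka's actual log layer; all names here are invented): the leader appends without forcing the disk, and a single `force()` when the high watermark is ready to advance covers every append since the previous flush.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Toy log whose leader-side flush is deferred until the high watermark
// advances, so many appends share a single fsync. Not Kafka code.
public class DeferredFlushLog implements AutoCloseable {
    private final FileChannel channel;
    private long appendedOffset = 0;  // last offset handed to the OS
    private long flushedOffset = 0;   // last offset known durable on disk

    public DeferredFlushLog(Path file) throws IOException {
        this.channel = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
    }

    // Hot path: write without forcing the disk.
    public long append(byte[] record) throws IOException {
        channel.write(ByteBuffer.wrap(record));
        return ++appendedOffset;
    }

    // Called when the high watermark is about to advance to 'hwm':
    // one force() covers every append since the previous flush.
    public void maybeFlushUpTo(long hwm) throws IOException {
        if (hwm > flushedOffset) {
            channel.force(true);
            flushedOffset = appendedOffset;
        }
    }

    public long flushedOffset() { return flushedOffset; }

    @Override public void close() throws IOException { channel.close(); }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("deferred-flush", ".log");
        try (DeferredFlushLog log = new DeferredFlushLog(file)) {
            for (int i = 0; i < 100; i++) log.append("record".getBytes());
            log.maybeFlushUpTo(100);  // a single fsync instead of 100
            System.out.println("flushed=" + log.flushedOffset());
        }
        Files.delete(file);
    }
}
```

The interesting measurement would be how much this batching reduces flush count and latency under a real replication workload.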



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9948) Gradle Issue

2020-10-30 Thread Murali Krishna Pinjala (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223993#comment-17223993
 ] 

Murali Krishna Pinjala commented on KAFKA-9948:
---

I'd say it's better to use the Gradle wrapper: it pulls the Gradle version 
configured in gradle-wrapper.properties instead of the local Gradle version.

./gradlew clean build
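For reference, the wrapper pins the Gradle distribution in gradle/wrapper/gradle-wrapper.properties; a typical file looks roughly like this (values are illustrative, not necessarily what the Kafka branch pins):

```properties
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-6.0.1-all.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
```

With this in place, every contributor builds with the pinned version regardless of what `gradle -v` reports locally.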

> Gradle Issue
> 
>
> Key: KAFKA-9948
> URL: https://issues.apache.org/jira/browse/KAFKA-9948
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.4.1
> Environment: gradle -v
> 
> Gradle 6.0.1
> 
> Build time:   2019-11-18 20:25:01 UTC
> Revision: fad121066a68c4701acd362daf4287a7c309a0f5
> Kotlin:   1.3.50
> Groovy:   2.5.8
> Ant:  Apache Ant(TM) version 1.10.7 compiled on September 1 2019
> JVM:  1.8.0_152 (Oracle Corporation 25.152-b16)
> OS:   Mac OS X 10.15.4 x86_64
>Reporter: Dulvin Witharane
>Priority: Blocker
>
> Can't get Gradle to build kafka.
>  
> Build file '/Users/dulvin/Documents/Work/git/kafka/build.gradle' line: 457
> A problem occurred evaluating root project 'kafka'.
> > Could not create task ':clients:spotbugsMain'.
>  > Could not create task of type 'SpotBugsTask'.
>  > Could not create an instance of type 
> com.github.spotbugs.internal.SpotBugsReportsImpl.
>  > 
> org.gradle.api.reporting.internal.TaskReportContainer.(Ljava/lang/Class;Lorg/gradle/api/Task;)V
>  
> The above error is thrown





[jira] [Updated] (KAFKA-10344) Redirect Create/Renew/ExpireDelegationToken to the controller

2020-10-30 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen updated KAFKA-10344:

Description: Original comment: 
https://github.com/apache/kafka/pull/9103#discussion_r515427912  (was: In the 
bridge release broker, Create/Renew/ExpireDelegationToken should be redirected 
to the active controller.)

> Redirect Create/Renew/ExpireDelegationToken to the controller
> -
>
> Key: KAFKA-10344
> URL: https://issues.apache.org/jira/browse/KAFKA-10344
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Priority: Major
>
> Original comment: 
> https://github.com/apache/kafka/pull/9103#discussion_r515427912





[jira] [Updated] (KAFKA-10344) Add active controller check to the controller level in KIP-500

2020-10-30 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen updated KAFKA-10344:

Summary: Add active controller check to the controller level in KIP-500  
(was: Redirect Create/Renew/ExpireDelegationToken to the controller)

> Add active controller check to the controller level in KIP-500
> --
>
> Key: KAFKA-10344
> URL: https://issues.apache.org/jira/browse/KAFKA-10344
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Priority: Major
>
> Original comment: 
> https://github.com/apache/kafka/pull/9103#discussion_r515427912





[jira] [Commented] (KAFKA-10667) Add timeout for forwarding requests

2020-10-30 Thread Jason Gustafson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223938#comment-17223938
 ] 

Jason Gustafson commented on KAFKA-10667:
-

The api timeout does not seem to help us here since the broker does not have 
it. I think it would be reasonable to retry for the duration of the 
inter-broker request timeout. 

> Add timeout for forwarding requests
> ---
>
> Key: KAFKA-10667
> URL: https://issues.apache.org/jira/browse/KAFKA-10667
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Priority: Major
>
> It makes sense to enforce a timeout on forwarded requests coming from the 
> client, instead of retrying indefinitely. We could either use the api 
> timeout, or a customized timeout hook that could be defined per request 
> type.
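The idea above can be sketched as a deadline-bounded retry loop (illustrative only; `ForwardingRetry` and its names are invented, not Kafka code):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

// Retry a forwarded request, but give up once an overall deadline
// (e.g. the inter-broker request timeout) has elapsed.
public class ForwardingRetry {

    public static <T> T retryUntilDeadline(Supplier<T> attempt,
                                           long timeoutMs) throws Exception {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        RuntimeException last = null;
        while (System.nanoTime() - deadline < 0) {
            try {
                return attempt.get();      // forward the request once
            } catch (RuntimeException e) {
                last = e;                  // transient failure: retry
                Thread.sleep(10);          // simple fixed backoff
            }
        }
        TimeoutException te = new TimeoutException(
                "forwarding did not succeed within " + timeoutMs + " ms");
        if (last != null) te.initCause(last);
        throw te;                          // surface the timeout to the client
    }

    public static void main(String[] args) throws Exception {
        // Fails twice, then succeeds: completes well within the deadline.
        int[] calls = {0};
        String r = retryUntilDeadline(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient");
            return "ok";
        }, 1_000);
        System.out.println(r + " after " + calls[0] + " attempts");
    }
}
```

A per-request-type hook would just parameterize `timeoutMs` (and possibly the backoff) by the API key of the forwarded request.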





[jira] [Updated] (KAFKA-10348) Consider consolidation of broker to controller communication

2020-10-30 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen updated KAFKA-10348:

Description: Right now forwarding and AlterISR use separate channels so that 
they do not block each other. However, the controller queue is 
single-threaded with no prioritization across request types, so separating 
the connections alone may not unblock AlterISR while a forwarded request 
takes an indefinite amount of time. In the long term, we want to see whether 
the two can be consolidated, with a systematic change on the controller side 
to ensure AlterISR always has higher priority.  (was: In the 
bridge release broker, the UpdateFeatures should be redirected to the active 
controller instead of relying on admin client discovery.)

> Consider consolidation of broker to controller communication
> 
>
> Key: KAFKA-10348
> URL: https://issues.apache.org/jira/browse/KAFKA-10348
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Priority: Major
>
> Right now forwarding and AlterISR use separate channels so that they do not 
> block each other. However, the controller queue is single-threaded with no 
> prioritization across request types, so separating the connections alone 
> may not unblock AlterISR while a forwarded request takes an indefinite 
> amount of time. In the long term, we want to see whether the two can be 
> consolidated, with a systematic change on the controller side to ensure 
> AlterISR always has higher priority.
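As a rough illustration of the prioritization idea (invented names, not the actual controller queue), a single-threaded event loop could drain a queue that orders events so AlterISR is always served before forwarded requests:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Toy controller event queue: AlterISR events are tagged HIGH so they are
// served before forwarded client requests, FIFO within each priority level.
public class ControllerEventQueue {
    enum Priority { HIGH, NORMAL }  // enum declaration order is queue order

    static final class Event {
        final String name; final Priority priority; final long seq;
        Event(String name, Priority priority, long seq) {
            this.name = name; this.priority = priority; this.seq = seq;
        }
    }

    private long seq = 0;
    private final PriorityQueue<Event> queue = new PriorityQueue<>(
            Comparator.<Event, Priority>comparing(e -> e.priority)  // HIGH first
                      .thenComparingLong(e -> e.seq));              // FIFO ties

    public void offer(String name, Priority priority) {
        queue.add(new Event(name, priority, seq++));
    }

    public String poll() {
        Event e = queue.poll();
        return e == null ? null : e.name;
    }

    public static void main(String[] args) {
        ControllerEventQueue q = new ControllerEventQueue();
        q.offer("forwarded-CreateTopics", Priority.NORMAL);
        q.offer("forwarded-AlterConfigs", Priority.NORMAL);
        q.offer("AlterISR", Priority.HIGH);
        System.out.println(q.poll());  // AlterISR is served first
    }
}
```

Note this only reorders waiting events; an event already being processed still runs to completion, which is why consolidation alone does not solve head-of-line blocking for in-flight work.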





[jira] [Updated] (KAFKA-10348) Consider consolidation of broker to controller communication

2020-10-30 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen updated KAFKA-10348:

Summary: Consider consolidation of broker to controller communication  
(was: Redirect UpdateFeatures to the controller)

> Consider consolidation of broker to controller communication
> 
>
> Key: KAFKA-10348
> URL: https://issues.apache.org/jira/browse/KAFKA-10348
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Priority: Major
>
> In the bridge release broker, the UpdateFeatures should be redirected to the 
> active controller instead of relying on admin client discovery.





[jira] [Updated] (KAFKA-10668) Avoid deserialization on second hop for request forwarding

2020-10-30 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen updated KAFKA-10668:

Description: 
Right now the forwarding broker deserializes the response and serializes it 
again to respond to the client. It should be possible to keep the response 
data sealed and send it back to the client unchanged, saving some CPU cost.

Original comment: 
https://github.com/apache/kafka/pull/9103#discussion_r515219726

  was:Right now on forwarding broker we would deserialize the response and 
serialize it again to respond to the client. It should be able to keep the 
response data sealed and send back to the client to save some CPU cost.
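A minimal sketch of the pass-through idea (invented names; real responses carry Kafka's binary protocol, not strings): forward the sealed bytes instead of parsing and rebuilding them.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch of forwarding a response without a deserialize/serialize round trip.
public class ResponsePassthrough {

    // Wasteful second hop: parse the response, then rebuild the same bytes.
    static ByteBuffer deserializeAndReserialize(ByteBuffer raw) {
        String parsed = StandardCharsets.UTF_8.decode(raw.duplicate()).toString();
        return ByteBuffer.wrap(parsed.getBytes(StandardCharsets.UTF_8));
    }

    // Pass-through: hand the sealed payload back to the client untouched.
    static ByteBuffer passThrough(ByteBuffer raw) {
        return raw.duplicate();  // zero-copy view of the same bytes
    }

    public static void main(String[] args) {
        ByteBuffer response = ByteBuffer.wrap(
                "controller-response".getBytes(StandardCharsets.UTF_8));
        // Both paths produce identical bytes; only one pays the CPU cost.
        System.out.println(passThrough(response)
                .equals(deserializeAndReserialize(response)));
    }
}
```

In the real broker the forwarding hop would still need to rewrite envelope fields such as the correlation id, but the response body itself could stay opaque.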


> Avoid deserialization on second hop for request forwarding
> --
>
> Key: KAFKA-10668
> URL: https://issues.apache.org/jira/browse/KAFKA-10668
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Priority: Major
>
> Right now the forwarding broker deserializes the response and serializes it 
> again to respond to the client. It should be possible to keep the response 
> data sealed and send it back to the client unchanged, saving some CPU cost.
> Original comment: 
> https://github.com/apache/kafka/pull/9103#discussion_r515219726





[jira] [Created] (KAFKA-10668) Avoid deserialization on second hop for request forwarding

2020-10-30 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-10668:
---

 Summary: Avoid deserialization on second hop for request forwarding
 Key: KAFKA-10668
 URL: https://issues.apache.org/jira/browse/KAFKA-10668
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen


Right now the forwarding broker deserializes the response and serializes it 
again to respond to the client. It should be possible to keep the response 
data sealed and send it back to the client unchanged, saving some CPU cost.





[jira] [Created] (KAFKA-10667) Add timeout for forwarding requests

2020-10-30 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-10667:
---

 Summary: Add timeout for forwarding requests
 Key: KAFKA-10667
 URL: https://issues.apache.org/jira/browse/KAFKA-10667
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen


It makes sense to enforce a timeout on forwarded requests coming from the 
client, instead of retrying indefinitely. We could either use the api 
timeout, or a customized timeout hook that could be defined per request type.





[jira] [Commented] (KAFKA-10645) Forwarding a record from a punctuator sometimes it results in a NullPointerException

2020-10-30 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223894#comment-17223894
 ] 

Matthias J. Sax commented on KAFKA-10645:
-

You can find the release plan in the wiki: 
[https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan] Both 
2.6.1 and 2.7.0 should be released soon (hopefully November).

> Forwarding a record from a punctuator sometimes it results in a 
> NullPointerException
> 
>
> Key: KAFKA-10645
> URL: https://issues.apache.org/jira/browse/KAFKA-10645
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.5.0
>Reporter: Filippo Machi
>Assignee: Matthias J. Sax
>Priority: Major
>
> Hello,
>  I am working on a Java Kafka Streams application (v. 2.5.0) running on a 
> Kubernetes cluster.
> It's a Spring Boot application running on Java 8.
> Since the upgrade to version 2.5.0 I have started to see 
> NullPointerExceptions in the logs, occurring when forwarding a record from 
> a punctuator. 
>  This is the stacktrace of the exception
> {code:java}
> Caused by: org.apache.kafka.streams.errors.StreamsException: task [2_2] Abort 
> sending since an error caught with a previous record (timestamp 
> 1603721062667) to topic reply-reminder-push-sender due to 
> java.lang.NullPointerException\tat 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:240)\tat
>  
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:111)\tat
>  
> org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:89)\tat
>  
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)\tat
>  
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)\tat
>  
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:133)\t...
>  24 common frames omittedCaused by: java.lang.NullPointerException: null\tat 
> org.apache.kafka.common.record.DefaultRecord.sizeOf(DefaultRecord.java:613)\tat
>  
> org.apache.kafka.common.record.DefaultRecord.recordSizeUpperBound(DefaultRecord.java:633)\tat
>  
> org.apache.kafka.common.record.DefaultRecordBatch.estimateBatchSizeUpperBound(DefaultRecordBatch.java:534)\tat
>  
> org.apache.kafka.common.record.AbstractRecords.estimateSizeInBytesUpperBound(AbstractRecords.java:135)\tat
>  
> org.apache.kafka.common.record.AbstractRecords.estimateSizeInBytesUpperBound(AbstractRecords.java:125)\tat
>  
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:914)\tat
>  
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:862)\tat
>  
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:181)\t...
>  29 common frames omitted
> {code}
> Checking the code, it looks like it happens while calculating the size of 
> the record. There is one header that is null, but I don't think I can 
> control those headers, right?
> Thanks a lot
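The failure mode can be illustrated with a toy version of header sizing (invented classes, not Kafka's actual DefaultRecord): a single null header slot is enough to trigger the NPE while computing a record's size, which is what a record shared and mutated across threads can leave behind.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of sizing a record by iterating its headers; a null slot
// throws NullPointerException, as in the stack trace above.
public class HeaderSizing {
    static final class Header {
        final String key; final byte[] value;
        Header(String key, byte[] value) { this.key = key; this.value = value; }
    }

    static int headersSize(List<Header> headers) {
        int size = 0;
        for (Header h : headers)
            size += h.key.length() + h.value.length;  // NPE if h or h.value is null
        return size;
    }

    public static void main(String[] args) {
        List<Header> headers = new ArrayList<>();
        headers.add(new Header("trace-id", new byte[]{1, 2, 3}));
        headers.add(null);  // what a race on a shared record can leave behind
        try {
            headersSize(headers);
        } catch (NullPointerException e) {
            System.out.println("NPE while sizing headers");
        }
    }
}
```

This is only an illustration of the mechanism; whether the application's headers can actually end up null depends on how the record instance is shared, per the discussion in this thread.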





[jira] [Updated] (KAFKA-10666) Kafka doesn't use keystore / key / truststore passwords for named SSL connections

2020-10-30 Thread Jason (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason updated KAFKA-10666:
--
Summary: Kafka doesn't use keystore / key / truststore passwords for named 
SSL connections  (was: Kafka doesn't used keystore / key / truststore passwords 
for named SSL connections)

> Kafka doesn't use keystore / key / truststore passwords for named SSL 
> connections
> -
>
> Key: KAFKA-10666
> URL: https://issues.apache.org/jira/browse/KAFKA-10666
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.5.0, 2.6.0
> Environment: kafka in an openjdk-11 docker container, the client java 
> application is in an alpine container. zookeeper in a separate container. 
>Reporter: Jason
>Priority: Minor
>
> When configuring named listener SSL connections with SSL key and keystore 
> passwords, including listener.name.ourname.ssl.key.password, 
> listener.name.ourname.ssl.keystore.password, and 
> listener.name.ourname.ssl.truststore.password, via the AdminClient, the 
> settings are not used, and a setting is not accepted if the default 
> ssl.key.password or ssl.keystore.password is not set.  We configure all 
> keystore and truststore values for the named listener in a single batch 
> using incrementalAlterConfigs. Additionally, when ssl.keystore.password is 
> set to the value of our keystore password, the keystore is loaded for SSL 
> communication without issue; however, if ssl.keystore.password is incorrect 
> and listener.name.ourname.ssl.keystore.password is correct, we are unable 
> to load the keystore, with bad-password errors.  It appears that only the 
> default ssl.xxx.password settings are used. The default setting is also 
> immutable: when we attempt to set it, we get an error indicating that the 
> listener.name. setting can be set instead. 





[jira] [Created] (KAFKA-10666) Kafka doesn't used keystore / key / truststore passwords for named SSL connections

2020-10-30 Thread Jason (Jira)
Jason created KAFKA-10666:
-

 Summary: Kafka doesn't used keystore / key / truststore passwords 
for named SSL connections
 Key: KAFKA-10666
 URL: https://issues.apache.org/jira/browse/KAFKA-10666
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 2.6.0, 2.5.0
 Environment: kafka in an openjdk-11 docker container, the client java 
application is in an alpine container. zookeeper in a separate container. 
Reporter: Jason


When configuring named listener SSL connections with SSL key and keystore 
passwords, including listener.name.ourname.ssl.key.password, 
listener.name.ourname.ssl.keystore.password, and 
listener.name.ourname.ssl.truststore.password, via the AdminClient, the 
settings are not used, and a setting is not accepted if the default 
ssl.key.password or ssl.keystore.password is not set.  We configure all 
keystore and truststore values for the named listener in a single batch using 
incrementalAlterConfigs. Additionally, when ssl.keystore.password is set to 
the value of our keystore password, the keystore is loaded for SSL 
communication without issue; however, if ssl.keystore.password is incorrect 
and listener.name.ourname.ssl.keystore.password is correct, we are unable to 
load the keystore, with bad-password errors.  It appears that only the 
default ssl.xxx.password settings are used. The default setting is also 
immutable: when we attempt to set it, we get an error indicating that the 
listener.name. setting can be set instead. 
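For illustration, this is the configuration shape the report describes: per-listener overrides next to the defaults the broker appears to fall back to (listener name and all password values are placeholders):

```properties
# Per-listener overrides for listener "ourname" (reported as ignored):
listener.name.ourname.ssl.key.password=listener-key-pass
listener.name.ourname.ssl.keystore.password=listener-store-pass
listener.name.ourname.ssl.truststore.password=listener-trust-pass

# Defaults (reported as the only values actually used):
ssl.key.password=default-key-pass
ssl.keystore.password=default-store-pass
ssl.truststore.password=default-trust-pass
```

The bug report boils down to: with only the first block set (via incrementalAlterConfigs), the keystore fails to load; with the second block set, its values win even when the per-listener values are correct.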





[jira] [Created] (KAFKA-10665) Flaky Test StreamTableJoinTopologyOptimizationIntegrationTest.shouldDoStreamTableJoinWithDifferentNumberOfPartitions[Optimization = all]

2020-10-30 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-10665:
--

 Summary: Flaky Test 
StreamTableJoinTopologyOptimizationIntegrationTest.shouldDoStreamTableJoinWithDifferentNumberOfPartitions[Optimization
 = all]
 Key: KAFKA-10665
 URL: https://issues.apache.org/jira/browse/KAFKA-10665
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: A. Sophie Blee-Goldman


{code:java}
java.nio.file.DirectoryNotEmptyException: 
/tmp/kafka-13241964730537515637/app-StreamTableJoinTopologyOptimizationIntegrationTestshouldDoStreamTableJoinWithDifferentNumberOfPartitions_Optimization___all_
at 
java.base/sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:246)
at 
java.base/sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:105)
at java.base/java.nio.file.Files.delete(Files.java:1146)
at 
org.apache.kafka.common.utils.Utils$2.postVisitDirectory(Utils.java:869)
at 
org.apache.kafka.common.utils.Utils$2.postVisitDirectory(Utils.java:839)
at java.base/java.nio.file.Files.walkFileTree(Files.java:2822)
at java.base/java.nio.file.Files.walkFileTree(Files.java:2876)
at org.apache.kafka.common.utils.Utils.delete(Utils.java:839)
at org.apache.kafka.common.utils.Utils.delete(Utils.java:825)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.purgeLocalStreamsState(IntegrationTestUtils.java:151)
at 
org.apache.kafka.streams.integration.StreamTableJoinTopologyOptimizationIntegrationTest.whenShuttingDown(StreamTableJoinTopologyOptimizationIntegrationTest.java:122)
{code}
https://github.com/apache/kafka/pull/9515/checks?check_run_id=1333753280
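One way such cleanup is commonly hardened (a sketch with invented names, not the actual IntegrationTestUtils fix) is to retry the recursive delete when the directory races with a writer, e.g. a state-store thread that has not finished shutting down:

```java
import java.io.IOException;
import java.nio.file.DirectoryNotEmptyException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class RetryingDelete {

    // Depth-first delete: children before parents.
    static void deleteRecursively(Path root) throws IOException {
        try (Stream<Path> walk = Files.walk(root)) {
            List<Path> paths = walk.sorted(Comparator.reverseOrder())
                                   .collect(Collectors.toList());
            for (Path p : paths) Files.deleteIfExists(p);
        }
    }

    // Retry when the directory was repopulated under us mid-delete.
    static void deleteWithRetries(Path root, int attempts)
            throws IOException, InterruptedException {
        for (int i = 1; ; i++) {
            try {
                deleteRecursively(root);
                return;
            } catch (DirectoryNotEmptyException e) {
                if (i >= attempts) throw e;  // surface the flake after N tries
                Thread.sleep(100L * i);      // give the writer time to finish
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("state-store");
        Files.createFile(dir.resolve("segment-0"));
        deleteWithRetries(dir, 3);
        System.out.println(Files.notExists(dir));  // directory is gone
    }
}
```

Retrying only papers over the symptom, of course; the real fix is to make sure all stream threads are fully stopped before `purgeLocalStreamsState` runs.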





[GitHub] [kafka] dajac merged pull request #9537: MINOR: ApiKey DESCRIBE_QUORUM missing in parseRequest

2020-10-30 Thread GitBox


dajac merged pull request #9537:
URL: https://github.com/apache/kafka/pull/9537


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] dajac commented on pull request #9537: MINOR: ApiKey DESCRIBE_QUORUM missing in parseRequest

2020-10-30 Thread GitBox


dajac commented on pull request #9537:
URL: https://github.com/apache/kafka/pull/9537#issuecomment-719476052


   All tests have passed. Merging to trunk.







[jira] [Commented] (KAFKA-10645) Forwarding a record from a punctuator sometimes it results in a NullPointerException

2020-10-30 Thread Filippo Machi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223541#comment-17223541
 ] 

Filippo Machi commented on KAFKA-10645:
---

Thanks for your reply [~mjsax], yes, it could be. My service is running with 
NUM_STREAM_THREADS_CONFIG=3, so since the record is static and shared between 
those threads, different threads could be accessing and modifying it. I can 
try setting this value to 1, which should mitigate the problem, I guess. Of 
course, I would then need to create more pods. Anyway, do you have any idea 
when the fix could be released?

> Forwarding a record from a punctuator sometimes it results in a 
> NullPointerException
> 
>
> Key: KAFKA-10645
> URL: https://issues.apache.org/jira/browse/KAFKA-10645
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.5.0
>Reporter: Filippo Machi
>Assignee: Matthias J. Sax
>Priority: Major
>
> Hello,
>  I am working on a Java Kafka Streams application (v. 2.5.0) running on a 
> Kubernetes cluster.
> It's a Spring Boot application running on Java 8.
> Since the upgrade to version 2.5.0 I have started to see 
> NullPointerExceptions in the logs, occurring when forwarding a record from 
> a punctuator. 
>  This is the stacktrace of the exception
> {code:java}
> Caused by: org.apache.kafka.streams.errors.StreamsException: task [2_2] Abort 
> sending since an error caught with a previous record (timestamp 
> 1603721062667) to topic reply-reminder-push-sender due to 
> java.lang.NullPointerException\tat 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:240)\tat
>  
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:111)\tat
>  
> org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:89)\tat
>  
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)\tat
>  
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)\tat
>  
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:133)\t...
>  24 common frames omittedCaused by: java.lang.NullPointerException: null\tat 
> org.apache.kafka.common.record.DefaultRecord.sizeOf(DefaultRecord.java:613)\tat
>  
> org.apache.kafka.common.record.DefaultRecord.recordSizeUpperBound(DefaultRecord.java:633)\tat
>  
> org.apache.kafka.common.record.DefaultRecordBatch.estimateBatchSizeUpperBound(DefaultRecordBatch.java:534)\tat
>  
> org.apache.kafka.common.record.AbstractRecords.estimateSizeInBytesUpperBound(AbstractRecords.java:135)\tat
>  
> org.apache.kafka.common.record.AbstractRecords.estimateSizeInBytesUpperBound(AbstractRecords.java:125)\tat
>  
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:914)\tat
>  
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:862)\tat
>  
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:181)\t...
>  29 common frames omitted
> {code}
> Checking the code, it looks like it happens while calculating the size of 
> the record. There is one header that is null, but I don't think I can 
> control those headers, right?
> Thanks a lot





[GitHub] [kafka] mimaison commented on pull request #9224: KAFKA-10304: refactor MM2 integration tests

2020-10-30 Thread GitBox


mimaison commented on pull request #9224:
URL: https://github.com/apache/kafka/pull/9224#issuecomment-719446426


   @ning2008wisc Thanks for the quick update. It's still on my todo list but 
unfortunately I don't have time to do reviews this week. I hope to take another 
look next week.







[GitHub] [kafka] dengziming commented on a change in pull request #9531: KAFKA-10661; Add new resigned state for graceful shutdown/initialization

2020-10-30 Thread GitBox


dengziming commented on a change in pull request #9531:
URL: https://github.com/apache/kafka/pull/9531#discussion_r514945125



##
File path: raft/src/main/java/org/apache/kafka/raft/KafkaRaftClient.java
##
@@ -1543,9 +1516,40 @@ private long maybeAppendBatches(
 return timeUnitFlush;
 }
 
+private long pollResigned(long currentTimeMs) throws IOException {
+GracefulShutdown shutdown = this.shutdown.get();
+ResignedState state = quorum.resignedStateOrThrow();
+
+long endQuorumBackoffMs = maybeSendRequests(

Review comment:
   If a node resigns from CandidateState, will it also send an 
EndQuorumEpochRequest to all voters?









[GitHub] [kafka] anatasiavela opened a new pull request #9537: MINOR: ApiKey DESCRIBE_QUORUM missing in parseRequest

2020-10-30 Thread GitBox


anatasiavela opened a new pull request #9537:
URL: https://github.com/apache/kafka/pull/9537


   The ApiKey `DESCRIBE_QUORUM` was missing in `AbstractRequest.parseRequest`.


