[jira] [Commented] (KAFKA-6981) Missing Connector Config (errors.deadletterqueue.topic.name) kills Connect Clusters

2018-06-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498841#comment-16498841
 ] 

ASF GitHub Bot commented on KAFKA-6981:
---

wicknicks opened a new pull request #5125:  KAFKA-6981: Move the error handling 
configuration properties into the ConnectorConfig and SinkConnectorConfig 
classes
URL: https://github.com/apache/kafka/pull/5125
 
 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Missing Connector Config (errors.deadletterqueue.topic.name) kills Connect 
> Clusters
> ---
>
> Key: KAFKA-6981
> URL: https://issues.apache.org/jira/browse/KAFKA-6981
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Arjun Satish
>Assignee: Arjun Satish
>Priority: Major
> Fix For: 2.0.0
>
>
> The trunk version of AK incorrectly tries to read the property 
> (errors.deadletterqueue.topic.name) when starting a sink connector. This 
> fails no matter what the contents of the connector config are. The 
> ConnectorConfig does not define this property, and any calls to getString 
> will throw a ConfigException (since only known properties are retained by 
> AbstractConfig). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6981) Missing Connector Config (errors.deadletterqueue.topic.name) kills Connect Clusters

2018-06-01 Thread Arjun Satish (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arjun Satish updated KAFKA-6981:

Description: The trunk version of AK currently tries to incorrectly read 
the property (errors.deadletterqueue.topic.name) when starting a sink 
connector. This fails no matter what the contents of the connector config are. 
The ConnectorConfig does not define this property, and any calls to getString 
will throw a ConfigException (since only known properties are retained by 
AbstractConfig).   (was: The trunk version of AK currently tries to incorrectly 
read the property (errors.deadletterqueue.topic.name) when starting a sink 
connector. This fails no matter what the contents of the connector config are. 
The ConnectorConfig does not define this property, and any calls to getString 
will throw a ConfigException. )

> Missing Connector Config (errors.deadletterqueue.topic.name) kills Connect 
> Clusters
> ---
>
> Key: KAFKA-6981
> URL: https://issues.apache.org/jira/browse/KAFKA-6981
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Arjun Satish
>Assignee: Arjun Satish
>Priority: Major
> Fix For: 2.0.0
>
>
> The trunk version of AK incorrectly tries to read the property 
> (errors.deadletterqueue.topic.name) when starting a sink connector. This 
> fails no matter what the contents of the connector config are. The 
> ConnectorConfig does not define this property, and any calls to getString 
> will throw a ConfigException (since only known properties are retained by 
> AbstractConfig). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6982) java.lang.ArithmeticException: / by zero

2018-06-01 Thread huxihx (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498824#comment-16498824
 ] 

huxihx commented on KAFKA-6982:
---

Did you resize the processor thread pool?

> java.lang.ArithmeticException: / by zero
> 
>
> Key: KAFKA-6982
> URL: https://issues.apache.org/jira/browse/KAFKA-6982
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 1.1.0
> Environment: Environment: Windows 10. 
>Reporter: wade wu
>Priority: Major
>
> Producer keeps sending messages to Kafka, Kafka is down. 
> Server.log shows: 
> ..
> [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
> dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to 
> log file 
> D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due to 
> Corrupt index found, index file 
> (D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
> non-zero size but the last offset is 0 which is no greater than the base 
> offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
>  [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
> dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to 
> log file 
> D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due to 
> Corrupt index found, index file 
> (D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
> non-zero size but the last offset is 0 which is no greater than the base 
> offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
> ..
>  
> This line of code in SocketServer.scala causing the error: 
>                   currentProcessor = currentProcessor % processors.size
>  
>  
>  
>  
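
For what it's worth, the failure mode of the quoted line is easy to reproduce in 
isolation. The following is a minimal, self-contained sketch (in Java, whereas 
SocketServer itself is Scala; the class and variable names are illustrative 
only): the modulo throws ArithmeticException: / by zero as soon as the processor 
list is empty.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class RoundRobinSketch {
    public static void main(String[] args) {
        // Stand-in for the acceptor's processor pool after it has been
        // emptied (e.g. shut down or resized to zero processors).
        List<String> processors = new ArrayList<>();

        int currentProcessor = 3;
        // Same shape as the quoted SocketServer line: modulo by the pool size.
        currentProcessor = currentProcessor % processors.size();
        // Never reached: the line above throws ArithmeticException: / by zero.
        System.out.println(currentProcessor);
    }
}
{code}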



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6983) Error while deleting segments - The process cannot access the file because it is being used by another process

2018-06-01 Thread wade wu (JIRA)
wade wu created KAFKA-6983:
--

 Summary: Error while deleting segments - The process cannot access 
the file because it is being used by another process
 Key: KAFKA-6983
 URL: https://issues.apache.org/jira/browse/KAFKA-6983
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 1.1.0
 Environment: Windows 10
Reporter: wade wu


..

[2018-06-01 17:00:07,566] ERROR Error while deleting segments for test4-1 in 
dir D:\data\Kafka\kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: 
D:\data\Kafka\kafka-logs\test4-1\.log -> 
D:\data\Kafka\kafka-logs\test4-1\.log.deleted: The process 
cannot access the file because it is being used by another process.

at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
 at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
 at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
 at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:415)
 at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:1601)
 at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:1588)
 at 
kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
 at 
kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
 at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at kafka.log.Log$$anonfun$deleteSegments$1.apply$mcI$sp(Log.scala:1170)
 at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
 at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
 at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
 at kafka.log.Log.deleteSegments(Log.scala:1161)
 at kafka.log.Log.deleteOldSegments(Log.scala:1156)
 at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1228)
 at kafka.log.Log.deleteOldSegments(Log.scala:1222)
 at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:854)
 at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:852)
 at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
 at scala.collection.immutable.List.foreach(List.scala:392)
 at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
 at kafka.log.LogManager.cleanupLogs(LogManager.scala:852)
 at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:385)
 at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
 at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Suppressed: java.nio.file.FileSystemException: 
D:\data\Kafka\kafka-logs\test4-1\.log -> 
D:\data\Kafka\kafka-logs\test4-1\.log.deleted: The process 
cannot access the file because it is being used by another process.

at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
 at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
 ... 32 more

 

..



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6982) java.lang.ArithmeticException: / by zero

2018-06-01 Thread wade wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wade wu updated KAFKA-6982:
---
Description: 
Producer keeps sending messages to Kafka, Kafka is down. 

Server.log shows: 

..

[2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log 
file D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due 
to Corrupt index found, index file 
(D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
non-zero size but the last offset is 0 which is no greater than the base offset 
0.}, recovering segment and rebuilding index files... (kafka.log.Log)
 [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log 
file D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due 
to Corrupt index found, index file 
(D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
non-zero size but the last offset is 0 which is no greater than the base offset 
0.}, recovering segment and rebuilding index files... (kafka.log.Log)
 [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
 java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
 [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
 java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
 [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
 java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)

..

 

This line of code in SocketServer.scala causing the error: 

                  currentProcessor = currentProcessor % processors.size

 

 

 

 

  was:
Producer keeps sending messages to Kafka, Kafka is down. 

Server.log shows: 

..

[2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log 
file D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due 
to Corrupt index found, index file 
(D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has 
non-zero size but the last offset is 0 which is no greater than the base offset 
0.}, recovering segment and rebuilding index files... (kafka.log.Log)
[2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log 
file D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due 
to Corrupt index found, index file 
(D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has 
non-zero size but the last offset is 0 which is no greater than the base offset 
0.}, recovering segment and rebuilding index files... (kafka.log.Log)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)

..


> java.lang.ArithmeticException: / by zero
> 
>
> Key: KAFKA-6982
> URL: https://issues.apache.org/jira/browse/KAFKA-6982
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 1.1.0
> Environment: Environment: Windows 10. 
>Reporter: wade wu
>Priority: Major
>
> Producer keeps sending messages to Kafka, Kafka is down. 
> Server.log shows: 
> ..
> [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
> dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to 
> log file 
> D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due to 
> Corrupt index found, index file 
> (D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
> non-zero size but the last offset is 0 which is no greater than the base 
> offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
>  

[jira] [Created] (KAFKA-6982) java.lang.ArithmeticException: / by zero

2018-06-01 Thread wade wu (JIRA)
wade wu created KAFKA-6982:
--

 Summary: java.lang.ArithmeticException: / by zero
 Key: KAFKA-6982
 URL: https://issues.apache.org/jira/browse/KAFKA-6982
 Project: Kafka
  Issue Type: Bug
  Components: network
Affects Versions: 1.1.0
 Environment: Environment: Windows 10. 

Reporter: wade wu


Producer keeps sending messages to Kafka, Kafka is down. 

Server.log shows: 

..

[2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log 
file D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due 
to Corrupt index found, index file 
(D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has 
non-zero size but the last offset is 0 which is no greater than the base offset 
0.}, recovering segment and rebuilding index files... (kafka.log.Log)
[2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log 
file D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due 
to Corrupt index found, index file 
(D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has 
non-zero size but the last offset is 0 which is no greater than the base offset 
0.}, recovering segment and rebuilding index files... (kafka.log.Log)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)

..



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6965) log4j:WARN log messages printed when running kafka-console-producer OOB

2018-06-01 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6965.

Resolution: Not A Bug

> log4j:WARN log messages printed when running kafka-console-producer OOB
> ---
>
> Key: KAFKA-6965
> URL: https://issues.apache.org/jira/browse/KAFKA-6965
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Yeva Byzek
>Priority: Major
>  Labels: newbie
>
> This error message is presented running `bin/kafka-console-producer` out of 
> the box.  
> {noformat}
> log4j:WARN No appenders could be found for logger 
> (kafka.utils.Log4jControllerRegistration$).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6965) log4j:WARN log messages printed when running kafka-console-producer OOB

2018-06-01 Thread Yeva Byzek (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498745#comment-16498745
 ] 

Yeva Byzek commented on KAFKA-6965:
---

Correct, please close this issue.

Details: This issue was a result of user misconfiguration, specifically I had 
manually reconfigured the parameter {{log4j.rootLogger}} in file 
{{etc/kafka/tools-log4j.properties}}.  With the default install, this works as 
expected.

> log4j:WARN log messages printed when running kafka-console-producer OOB
> ---
>
> Key: KAFKA-6965
> URL: https://issues.apache.org/jira/browse/KAFKA-6965
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Yeva Byzek
>Priority: Major
>  Labels: newbie
>
> This error message is presented running `bin/kafka-console-producer` out of 
> the box.  
> {noformat}
> log4j:WARN No appenders could be found for logger 
> (kafka.utils.Log4jControllerRegistration$).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4628) Support KTable/GlobalKTable Joins

2018-06-01 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498741#comment-16498741
 ] 

Ted Yu commented on KAFKA-4628:
---

[~abellemare]:
Would you start a KIP for this enhancement?

Thanks

> Support KTable/GlobalKTable Joins
> -
>
> Key: KAFKA-4628
> URL: https://issues.apache.org/jira/browse/KAFKA-4628
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Priority: Major
>  Labels: needs-kip
>
> In KIP-99 we have added support for GlobalKTables, however we don't currently 
> support KTable/GlobalKTable joins as they require materializing a state store 
> for the join. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6822) Kafka Consumer 0.10.2.1 can not normally read data from Kafka 1.0.0

2018-06-01 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498734#comment-16498734
 ] 

Guozhang Wang commented on KAFKA-6822:
--

cc [~mjsax]

> Kafka Consumer 0.10.2.1 can not normally read data from Kafka 1.0.0
> ---
>
> Key: KAFKA-6822
> URL: https://issues.apache.org/jira/browse/KAFKA-6822
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, streams
>Reporter: Phil Mikhailov
>Priority: Major
>
> We have microservices that use Kafka Streams and that get stuck during 
> initialization of the stream topology while filling a StateStore from Kafka 
> using KafkaConsumer. The microservices are built with Kafka Streams 
> 0.10.2.1-cp1 (Confluent 3.2.1) but the environment runs a Kafka cluster 1.0.0 
> (Confluent 4.0.0). 
> We reproduced this problem several times by restarting the microservices and 
> eventually had to reset the stream offsets to the beginning in order to 
> unblock them.
> While investigating this problem more deeply we found that the StateStore 
> (0.10.2.1) gets stuck loading data via {{ProcessorStateManager}}. It uses 
> KafkaConsumer (0.10.2.1) to fill the store and calculates offsets like this:
>  Fetcher:524
> {code:java}
> long nextOffset = partRecords.get(partRecords.size() - 1).offset() + 1;
> {code}
> It takes the latest offset from the records returned by {{poll}} plus 1, so 
> the next offset is an estimate.
> In Kafka 1.0.0, by contrast, the broker explicitly provides the next offset to fetch:
> {code:java}
> long nextOffset = partitionRecords.nextFetchOffset;
> {code}
> It returns the actual next offset rather than an estimate.
> That said, we had a situation where the StateStore (0.10.2.1) got stuck 
> loading data. The reason was in {{ProcessorStateManager.restoreActiveState:245}}, 
> which kept spinning in the consumer loop because the following condition was 
> never met:
> {code:java}
>    } else if (restoreConsumer.position(storePartition) == endOffset) {
>    break;
>    }
> {code}
>  
> We assume that the 0.10.2.1 consumer estimates the end offset incorrectly in 
> the case of compaction, or that there is an inconsistency in offset 
> calculation between 0.10.2.1 and 1.0.0 which prevents using a 0.10.2.1 
> consumer with a 1.0.0 broker.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6860) NPE when reinitializeStateStores with eos enabled

2018-06-01 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498733#comment-16498733
 ] 

Guozhang Wang commented on KAFKA-6860:
--

Thanks for the detailed explanation of the issue. I looked at the code of 1.1, 
and the `checkpoint` object is always initialized in the constructor of 
`ProcessorStateManager`. So it is still not clear to me why EOS would hit this 
issue. Could you share more of your findings?

> NPE when reinitializeStateStores with eos enabled
> -
>
> Key: KAFKA-6860
> URL: https://issues.apache.org/jira/browse/KAFKA-6860
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.1.0
> Environment: mac, kafka1.1
>Reporter: ko byoung kwon
>Priority: Major
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> *Symptom*
>  With EOS enabled, reinitializing state stores hits an NPE because the 
> checkpoint is null.
> {code:java}
> 2018-05-02 18:05:17.156 ERROR 60836 --- [-StreamThread-1] 
> o.a.k.s.p.internals.StreamThread : stream-thread 
> [kafka-stream-application-d6ec1dfb-9b7f-42dd-8b28-899ff3d1ad98-StreamThread-1]
>  Encountered the following error during processing:
> java.lang.NullPointerException: null
> at 
> org.apache.kafka.streams.processor.internals.AbstractStateManager.reinitializeStateStoresForPartitions(AbstractStateManager.java:66)
>  ~[kafka-streams-1.1.0.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.reinitializeStateStoresForPartitions(ProcessorStateManager.java:155)
>  ~[kafka-streams-1.1.0.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AbstractTask.reinitializeStateStoresForPartitions(AbstractTask.java:230)
>  ~[kafka-streams-1.1.0.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StoreChangelogReader.restore(StoreChangelogReader.java:94)
>  ~[kafka-streams-1.1.0.jar:na]
> {code}
> *How to reproduce*
> *configure as*
>  - changelog topic with a short `retention.ms` and a `delete` policy (just to 
> reproduce the symptom easily)
>  ex) retention.ms=3,cleanup.policy=delete 
>  - exactly-once semantics enabled
>  - no cleanup
> *Step*
>  - two tasks [0_0], [0_1], and two Spring Boot instances (assignment: 
> was#1:task[0_0], was#2:task[0_1])
>  - write some data to each state store (the changelog topic will soon erase 
> those messages because of the short "retention.ms")
>  - when was#2 is killed, was#1 will restore task[0_1]'s data into its own 
> rocksDB
>  - in the process, it hits a null checkpoint and an error occurs 
> (AbstractStateManager #66)
> {code:java}
> // My code
> Map<String, String> topicConfiguration = new HashMap<>();
> topicConfiguration.putIfAbsent("cleanup.policy", "delete");
> topicConfiguration.putIfAbsent("file.delete.delay.ms", "0");
> topicConfiguration.putIfAbsent("retention.ms", "3000");
> builder.stream(properties.getSourceTopic(),
>                Consumed.with(Serdes.Long(), Serdes.String()))
>        .groupByKey()
>        .count(Materialized
>                  .<Long, Long, KeyValueStore<Bytes, byte[]>>as(ORDER_STORE_NAME)
>                  .withKeySerde(Serdes.Long())
>                  .withValueSerde(Serdes.Long())
>                  .withLoggingEnabled(topicConfiguration));
> {code}
> *Suggestion*
> When EOS is enabled, the checkpoint will be null.
>  I think we need to add some code to create a checkpoint, as follows:
> {code:java}
> // # At org.apache.kafka.streams.processor.internals.AbstractStateManager #66
> // # suggestion start
> if (checkpoint == null) {
> checkpoint = new OffsetCheckpoint(new File(baseDir, 
> CHECKPOINT_FILE_NAME));
> }
> // # suggestion end
> try {
> checkpoint.write(checkpointableOffsets);
> } catch (final IOException fatalException) {
> log.error("Failed to write offset checkpoint file to {} while 
> re-initializing {}: {}", checkpoint, stateStores, fatalException);
>  throw new StreamsException("Failed to reinitialize global store.", 
> fatalException);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6981) Missing Connector Config (errors.deadletterqueue.topic.name) kills Connect Clusters

2018-06-01 Thread Arjun Satish (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arjun Satish updated KAFKA-6981:

Priority: Major  (was: Minor)

> Missing Connector Config (errors.deadletterqueue.topic.name) kills Connect 
> Clusters
> ---
>
> Key: KAFKA-6981
> URL: https://issues.apache.org/jira/browse/KAFKA-6981
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Arjun Satish
>Assignee: Arjun Satish
>Priority: Major
> Fix For: 2.0.0
>
>
> The trunk version of AK incorrectly tries to read the property 
> (errors.deadletterqueue.topic.name) when starting a sink connector. This 
> fails no matter what the contents of the connector config are. The 
> ConnectorConfig does not define this property, and any calls to getString 
> will throw a ConfigException. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6938) Add documentation for accessing Headers on Kafka Streams Processor API

2018-06-01 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498724#comment-16498724
 ] 

Guozhang Wang commented on KAFKA-6938:
--

[~jeqo] are you actively working on this JIRA? We are approaching the release 
deadline and hence need to make sure the docs are updated.

> Add documentation for accessing Headers on Kafka Streams Processor API
> --
>
> Key: KAFKA-6938
> URL: https://issues.apache.org/jira/browse/KAFKA-6938
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, streams
>Affects Versions: 2.0.0
>Reporter: Jorge Quilcate
>Assignee: Jorge Quilcate
>Priority: Major
> Fix For: 2.0.0
>
>
> Document changes implemented on KIP-244.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6977) Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 while fetching data

2018-06-01 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498723#comment-16498723
 ] 

Guozhang Wang commented on KAFKA-6977:
--

cc [~mjsax]

>  Kafka Streams - java.lang.IllegalStateException: Unexpected error code 2 
> while fetching data
> -
>
> Key: KAFKA-6977
> URL: https://issues.apache.org/jira/browse/KAFKA-6977
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Eugen Feller
>Priority: Blocker
>  Labels: streams
>
> We are running Kafka Streams 0.11.0.1 with Kafka Broker 0.10.2.1 and 
> constantly run into the following exception: 
> {code:java}
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> partition assignment took 40 ms.
> current active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 0_15, 
> 0_17, 0_20, 0_21, 0_23, 0_25, 0_28, 0_2, 0_6, 0_7, 0_8, 0_9, 0_11, 0_16, 
> 0_18, 0_19, 0_22, 0_24, 0_26, 0_27, 0_29, 0_30, 0_31]
> current standby tasks: []
> previous active tasks: [0_0, 0_1, 0_3, 0_4, 0_5, 0_10, 0_12, 0_13, 0_14, 
> 0_15, 0_17, 0_20, 0_21, 0_23, 0_25, 0_28]
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> State transition from PARTITIONS_ASSIGNED to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e] State 
> transition from REBALANCING to RUNNING.
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> ERROR org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-838cf286-be42-4a49-b3ab-3a291617233e-StreamThread-1] 
> Encountered the following error during processing:
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:886)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Shutting down
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from RUNNING to PENDING_SHUTDOWN.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.clients.producer.KafkaProducer - Closing the Kafka 
> producer with timeoutMillis = 9223372036854775807 ms.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> Stream thread shutdown complete
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.processor.internals.StreamThread - 
> stream-thread 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> State transition from PENDING_SHUTDOWN to DEAD.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> INFO org.apache.kafka.streams.KafkaStreams - stream-client 
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4] State 
> transition from RUNNING to ERROR.
> [visitstats-mongowriter-3727734b-3c2f-4775-87d2-838d72ca3bf4-StreamThread-1] 
> WARN org.apache.kafka.streams.KafkaStreams - stream-client 
> 
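
For reference, error code 2 in the Kafka protocol corresponds to CORRUPT_MESSAGE; 
the ticket itself does not say this, but it can be looked up with the client's 
Errors enum, as in this small sketch:

{code:java}
import org.apache.kafka.common.protocol.Errors;

public class ErrorCodeLookup {
    public static void main(String[] args) {
        // Resolve the numeric code reported as "Unexpected error code 2".
        Errors error = Errors.forCode((short) 2);
        System.out.println("error code 2 = " + error.name());   // CORRUPT_MESSAGE
    }
}
{code}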

[jira] [Resolved] (KAFKA-6760) responses not logged properly in controller

2018-06-01 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6760.

   Resolution: Fixed
Fix Version/s: 1.1.1
   2.0.0

> responses not logged properly in controller
> ---
>
> Key: KAFKA-6760
> URL: https://issues.apache.org/jira/browse/KAFKA-6760
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Jun Rao
>Assignee: Mickael Maison
>Priority: Major
>  Labels: newbie
> Fix For: 2.0.0, 1.1.1
>
>
> Saw the following logging in controller.log. We need to log the 
> StopReplicaResponse properly in KafkaController.
> [2018-04-05 14:38:41,878] DEBUG [Controller id=0] Delete topic callback 
> invoked for org.apache.kafka.common.requests.StopReplicaResponse@263d40c 
> (kafka.controller.KafkaController)
> It seems that the same issue exists for LeaderAndIsrResponse as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6981) Missing Connector Config (errors.deadletterqueue.topic.name) kills Connect Clusters

2018-06-01 Thread Arjun Satish (JIRA)
Arjun Satish created KAFKA-6981:
---

 Summary: Missing Connector Config 
(errors.deadletterqueue.topic.name) kills Connect Clusters
 Key: KAFKA-6981
 URL: https://issues.apache.org/jira/browse/KAFKA-6981
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Arjun Satish
Assignee: Arjun Satish
 Fix For: 2.0.0


The trunk version of AK incorrectly tries to read the property 
(errors.deadletterqueue.topic.name) when starting a sink connector. This fails 
no matter what the contents of the connector config are. The ConnectorConfig 
does not define this property, and any calls to getString will throw a 
ConfigException. 
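
The behaviour described above follows directly from how AbstractConfig works: 
parsing keeps only the keys declared in the ConfigDef, so reading an undeclared 
key fails regardless of what the user supplied. A minimal, self-contained sketch 
(the ConfigDef contents and property values here are illustrative, not the real 
ConnectorConfig definition):

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigException;

public class UnknownPropertySketch {
    // Like ConnectorConfig on trunk at the time, this ConfigDef does not
    // declare the dead letter queue property.
    private static final ConfigDef DEF = new ConfigDef()
            .define("name", ConfigDef.Type.STRING, ConfigDef.Importance.HIGH, "connector name");

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("name", "my-sink");
        // Supplying the property does not help: undeclared keys are dropped
        // during parsing and only the declared ones are retained.
        props.put("errors.deadletterqueue.topic.name", "my-dlq");

        AbstractConfig config = new AbstractConfig(DEF, props);
        try {
            config.getString("errors.deadletterqueue.topic.name");
        } catch (ConfigException e) {
            // Reading an undeclared key always ends up here.
            System.out.println("ConfigException: " + e.getMessage());
        }
    }
}
{code}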



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-6098) Delete and Re-create topic operation could result in race condition

2018-06-01 Thread Dhruvil Shah (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dhruvil Shah reassigned KAFKA-6098:
---

Assignee: Dhruvil Shah

> Delete and Re-create topic operation could result in race condition
> ---
>
> Key: KAFKA-6098
> URL: https://issues.apache.org/jira/browse/KAFKA-6098
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Dhruvil Shah
>Priority: Major
>  Labels: reliability
>
> Here is the following process to re-produce this issue:
> 1. Delete a topic using the delete topic request.
> 2. Confirm the topic is deleted using the list topics request.
> 3. Create the topic using the create topic request.
> In step 3) a race condition can occur where the response returns a 
> {{TOPIC_ALREADY_EXISTS}} error code, indicating the topic already exists.
> The root cause of the above issue is in the {{TopicDeletionManager}} class:
> {code}
> controller.partitionStateMachine.handleStateChanges(partitionsForDeletedTopic.toSeq,
>  OfflinePartition)
> controller.partitionStateMachine.handleStateChanges(partitionsForDeletedTopic.toSeq,
>  NonExistentPartition)
> topicsToBeDeleted -= topic
> partitionsToBeDeleted.retain(_.topic != topic)
> kafkaControllerZkUtils.deleteTopicZNode(topic)
> kafkaControllerZkUtils.deleteTopicConfigs(Seq(topic))
> kafkaControllerZkUtils.deleteTopicDeletions(Seq(topic))
> controllerContext.removeTopic(topic)
> {code}
> I.e. it first updates the brokers' metadata caches through the ISR and 
> metadata update requests, then deletes the topic zk path, and then deletes 
> the topic-deletion zk path. However, upon handling the create topic request, 
> the broker simply tries to write to the topic zk path directly. Hence there 
> is a race window between the brokers updating their metadata caches (so the 
> list topics request no longer returns this topic) and the topic's zk path 
> being deleted (so the create topic request succeeds).
> The reason this problem gets exposed is the current handling logic of the 
> create topic response, most of which treats {{TOPIC_ALREADY_EXISTS}} as "OK" 
> and moves on, while the zk path is deleted later, leaving the topic not 
> created at all:
> https://github.com/apache/kafka/blob/249e398bf84cdd475af6529e163e78486b43c570/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsKafkaClient.java#L221
> https://github.com/apache/kafka/blob/1a653c813c842c0b67f26fb119d7727e272cf834/connect/runtime/src/main/java/org/apache/kafka/connect/util/TopicAdmin.java#L232
> Looking at the code history, it seems this race condition has always existed, 
> but when testing on trunk / 1.0 with the above steps it is more likely to 
> happen than before. I wonder if the ZK async calls have an effect here. cc 
> [~junrao] [~onurkaraman]
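
The three reproduce steps quoted above can be sketched with the Java 
AdminClient; this is an editor's illustration rather than the original 
reporter's test, and the bootstrap address, topic name, partition count and 
replication factor are placeholders:

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

public class DeleteRecreateSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 1. Delete the topic.
            admin.deleteTopics(Collections.singleton("test-topic")).all().get();

            // 2. Confirm the topic no longer shows up in the listing.
            boolean gone = !admin.listTopics().names().get().contains("test-topic");
            System.out.println("topic gone from listing: " + gone);

            // 3. Re-create it. In the race window described above this can still
            //    fail with TOPIC_ALREADY_EXISTS because the topic znode has not
            //    been removed yet.
            try {
                admin.createTopics(Collections.singleton(
                        new NewTopic("test-topic", 1, (short) 1))).all().get();
            } catch (ExecutionException e) {
                if (e.getCause() instanceof TopicExistsException) {
                    System.out.println("hit the race: " + e.getCause().getMessage());
                }
            }
        }
    }
}
{code}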



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6760) responses not logged properly in controller

2018-06-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498714#comment-16498714
 ] 

ASF GitHub Bot commented on KAFKA-6760:
---

ijuma closed pull request #4834: KAFKA-6760: Responses not logged properly in 
controller
URL: https://github.com/apache/kafka/pull/4834
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/clients/src/main/java/org/apache/kafka/common/requests/LeaderAndIsrResponse.java
 
b/clients/src/main/java/org/apache/kafka/common/requests/LeaderAndIsrResponse.java
index c21f9a783b3..8e6324589df 100644
--- 
a/clients/src/main/java/org/apache/kafka/common/requests/LeaderAndIsrResponse.java
+++ 
b/clients/src/main/java/org/apache/kafka/common/requests/LeaderAndIsrResponse.java
@@ -119,4 +119,13 @@ protected Struct toStruct(short version) {
 
 return struct;
 }
+
+@Override
+public String toString() {
+return "LeaderAndIsrResponse(" +
+"responses=" + responses +
+", error=" + error +
+")";
+}
+
 }
diff --git 
a/clients/src/main/java/org/apache/kafka/common/requests/StopReplicaResponse.java
 
b/clients/src/main/java/org/apache/kafka/common/requests/StopReplicaResponse.java
index 777416d1758..4e8efe9547a 100644
--- 
a/clients/src/main/java/org/apache/kafka/common/requests/StopReplicaResponse.java
+++ 
b/clients/src/main/java/org/apache/kafka/common/requests/StopReplicaResponse.java
@@ -115,4 +115,13 @@ protected Struct toStruct(short version) {
 struct.set(ERROR_CODE, error.code());
 return struct;
 }
+
+@Override
+public String toString() {
+return "StopReplicaResponse(" +
+"responses=" + responses +
+", error=" + error +
+")";
+}
+
 }
diff --git 
a/clients/src/test/java/org/apache/kafka/common/requests/LeaderAndIsrResponseTest.java
 
b/clients/src/test/java/org/apache/kafka/common/requests/LeaderAndIsrResponseTest.java
index 3eda7782743..c242555a4b0 100644
--- 
a/clients/src/test/java/org/apache/kafka/common/requests/LeaderAndIsrResponseTest.java
+++ 
b/clients/src/test/java/org/apache/kafka/common/requests/LeaderAndIsrResponseTest.java
@@ -27,6 +27,7 @@
 import java.util.Map;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
 
 public class LeaderAndIsrResponseTest {
 
@@ -64,4 +65,16 @@ public void testErrorCountsNoTopLevelError() {
 assertEquals(1, 
errorCounts.get(Errors.CLUSTER_AUTHORIZATION_FAILED).intValue());
 }
 
+@Test
+public void testToString() {
+Map<TopicPartition, Errors> errors = new HashMap<>();
+errors.put(new TopicPartition("foo", 0), Errors.NONE);
+errors.put(new TopicPartition("foo", 1), 
Errors.CLUSTER_AUTHORIZATION_FAILED);
+LeaderAndIsrResponse response = new LeaderAndIsrResponse(Errors.NONE, 
errors);
+String responseStr = response.toString();
+
assertTrue(responseStr.contains(LeaderAndIsrResponse.class.getSimpleName()));
+assertTrue(responseStr.contains(errors.toString()));
+assertTrue(responseStr.contains(Errors.NONE.name()));
+}
+
 }
diff --git 
a/clients/src/test/java/org/apache/kafka/common/requests/StopReplicaResponseTest.java
 
b/clients/src/test/java/org/apache/kafka/common/requests/StopReplicaResponseTest.java
index 95cb3aa39e2..3a87fc64691 100644
--- 
a/clients/src/test/java/org/apache/kafka/common/requests/StopReplicaResponseTest.java
+++ 
b/clients/src/test/java/org/apache/kafka/common/requests/StopReplicaResponseTest.java
@@ -26,6 +26,7 @@
 import java.util.Map;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
 
 public class StopReplicaResponseTest {
 
@@ -58,4 +59,16 @@ public void testErrorCountsNoTopLevelError() {
 assertEquals(1, 
errorCounts.get(Errors.CLUSTER_AUTHORIZATION_FAILED).intValue());
 }
 
+@Test
+public void testToString() {
+Map<TopicPartition, Errors> errors = new HashMap<>();
+errors.put(new TopicPartition("foo", 0), Errors.NONE);
+errors.put(new TopicPartition("foo", 1), 
Errors.CLUSTER_AUTHORIZATION_FAILED);
+StopReplicaResponse response = new StopReplicaResponse(Errors.NONE, 
errors);
+String responseStr = response.toString();
+
assertTrue(responseStr.contains(StopReplicaResponse.class.getSimpleName()));
+assertTrue(responseStr.contains(errors.toString()));
+assertTrue(responseStr.contains(Errors.NONE.name()));
+}
+
 }


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to 

[jira] [Assigned] (KAFKA-6965) log4j:WARN log messages printed when running kafka-console-producer OOB

2018-06-01 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-6965:
--

Assignee: (was: Ismael Juma)

> log4j:WARN log messages printed when running kafka-console-producer OOB
> ---
>
> Key: KAFKA-6965
> URL: https://issues.apache.org/jira/browse/KAFKA-6965
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Yeva Byzek
>Priority: Major
>  Labels: newbie
>
> This error message is presented running `bin/kafka-console-producer` out of 
> the box.  
> {noformat}
> log4j:WARN No appenders could be found for logger 
> (kafka.utils.Log4jControllerRegistration$).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6965) log4j:WARN log messages printed when running kafka-console-producer OOB

2018-06-01 Thread Ismael Juma (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498690#comment-16498690
 ] 

Ismael Juma commented on KAFKA-6965:


I think [~yevabyzek] said that this was not an issue after all. [~yevabyzek], 
can we close this then?

> log4j:WARN log messages printed when running kafka-console-producer OOB
> ---
>
> Key: KAFKA-6965
> URL: https://issues.apache.org/jira/browse/KAFKA-6965
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Yeva Byzek
>Priority: Major
>  Labels: newbie
>
> This error message is presented running `bin/kafka-console-producer` out of 
> the box.  
> {noformat}
> log4j:WARN No appenders could be found for logger 
> (kafka.utils.Log4jControllerRegistration$).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6979) Add max.block.ms to consumer for default timeout behavior

2018-06-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498622#comment-16498622
 ] 

ASF GitHub Bot commented on KAFKA-6979:
---

dhruvilshah3 opened a new pull request #5122: KAFKA-6979: Add `max.block.ms` to 
KafkaConsumer
URL: https://github.com/apache/kafka/pull/5122
 
 
   Add `max.block.ms` to KafkaConsumer (KIP-266).


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add max.block.ms to consumer for default timeout behavior
> -
>
> Key: KAFKA-6979
> URL: https://issues.apache.org/jira/browse/KAFKA-6979
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Dhruvil Shah
>Priority: Major
> Fix For: 2.0.0
>
>
> Implement max.block.ms as described in KIP-266: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-266%3A+Fix+consumer+indefinite+blocking+behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6925) Memory leak in org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl

2018-06-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498565#comment-16498565
 ] 

ASF GitHub Bot commented on KAFKA-6925:
---

guozhangwang closed pull request #5119: KAFKA-6925: fix parentSensors memory 
leak 
URL: https://github.com/apache/kafka/pull/5119
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImpl.java
 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImpl.java
index 03bbceb25c4..03a4819aba2 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImpl.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImpl.java
@@ -222,6 +222,7 @@ public void removeSensor(Sensor sensor) {
 final Sensor parent = parentSensors.get(sensor);
 if (parent != null) {
 metrics.removeSensor(parent.name());
+parentSensors.remove(sensor);
 }
 
 }
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImplTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImplTest.java
index 7b16246da33..7666e42044d 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImplTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImplTest.java
@@ -19,11 +19,15 @@
 
 import org.apache.kafka.common.Metric;
 import org.apache.kafka.common.MetricName;
+import org.apache.kafka.common.metrics.KafkaMetric;
 import org.apache.kafka.common.metrics.Metrics;
 import org.apache.kafka.common.metrics.Sensor;
+import org.junit.Assert;
 import org.junit.Test;
 
+import java.util.Collections;
 import java.util.HashMap;
+import java.util.LinkedHashMap;
 import java.util.Map;
 
 import static org.junit.Assert.assertEquals;
@@ -53,19 +57,27 @@ public void testRemoveSensor() {
 String entity = "entity";
 String operation = "put";
 Map<String, String> tags = new HashMap<>();
-StreamsMetricsImpl streamsMetrics = new StreamsMetricsImpl(new 
Metrics(), groupName, tags);
+final Metrics metrics = new Metrics();
+final Map<MetricName, KafkaMetric> initialMetrics = 
Collections.unmodifiableMap(new LinkedHashMap<>(metrics.metrics()));
+StreamsMetricsImpl streamsMetrics = new StreamsMetricsImpl(metrics, 
groupName, tags);
 
 Sensor sensor1 = streamsMetrics.addSensor(sensorName, 
Sensor.RecordingLevel.DEBUG);
 streamsMetrics.removeSensor(sensor1);
+Assert.assertEquals(initialMetrics, metrics.metrics());
 
 Sensor sensor1a = streamsMetrics.addSensor(sensorName, 
Sensor.RecordingLevel.DEBUG, sensor1);
 streamsMetrics.removeSensor(sensor1a);
+Assert.assertEquals(initialMetrics, metrics.metrics());
 
 Sensor sensor2 = streamsMetrics.addLatencyAndThroughputSensor(scope, 
entity, operation, Sensor.RecordingLevel.DEBUG);
 streamsMetrics.removeSensor(sensor2);
+Assert.assertEquals(initialMetrics, metrics.metrics());
 
 Sensor sensor3 = streamsMetrics.addThroughputSensor(scope, entity, 
operation, Sensor.RecordingLevel.DEBUG);
 streamsMetrics.removeSensor(sensor3);
+Assert.assertEquals(initialMetrics, metrics.metrics());
+
+Assert.assertEquals(Collections.emptyMap(), 
streamsMetrics.parentSensors);
 }
 
 @Test


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Memory leak in 
> org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl
> -
>
> Key: KAFKA-6925
> URL: https://issues.apache.org/jira/browse/KAFKA-6925
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.2, 1.1.0, 1.0.1
>Reporter: Marcin Kuthan
>Assignee: John Roesler
>Priority: Major
> Fix For: 1.1.1
>
>
> *Note: this issue was fixed incidentally in 2.0, so it is only present in 
> versions 0.x and 1.x.*
>  
> The retained heap of 
> org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl
>  is surprisingly high for long running job. Over 100MB of heap for every 
> stream after a 

[jira] [Commented] (KAFKA-6925) Memory leak in org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl

2018-06-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498492#comment-16498492
 ] 

ASF GitHub Bot commented on KAFKA-6925:
---

vvcephei opened a new pull request #5119: KAFKA-6925: fix parentSensors memory 
leak 
URL: https://github.com/apache/kafka/pull/5119
 
 
   See also #5108 / 0a7462e3b6c5b73e836f53e6b4dc7fc1ff23e1b3 .
   
   Previously, we failed to remove sensors from the parentSensors map, 
effectively a memory leak.
   
   Add a test to verify that removed sensors get removed from the underlying 
registry as well as the parentSensors map.
   
   Reviewers: Bill Bejeck , Guozhang Wang 

   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Memory leak in 
> org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl
> -
>
> Key: KAFKA-6925
> URL: https://issues.apache.org/jira/browse/KAFKA-6925
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.2, 1.1.0, 1.0.1
>Reporter: Marcin Kuthan
>Assignee: John Roesler
>Priority: Major
> Fix For: 1.1.1
>
>
> *Note: this issue was fixed incidentally in 2.0, so it is only present in 
> versions 0.x and 1.x.*
>  
> The retained heap of 
> org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl
>  is surprisingly high for a long-running job: over 100 MB of heap for every 
> stream after a week of uptime, whereas for the same application the heap takes 
> 2 MB a few hours after start.
> For the problematic instance, the majority of StreamsMetricsThreadImpl memory 
> is occupied by hash map entries in parentSensors: over 8000 elements of 
> 100+ kB each. For a fresh instance there are fewer than 200 elements.
> Below you can find the retained set report generated from Eclipse MAT, but 
> I'm not fully sure about its correctness due to the complex object graph in 
> the metrics-related code. Number of objects in a single 
> StreamThread$StreamsMetricsThreadImpl instance:
>  
> {code:java}
> Class Name | Objects | Shallow Heap
> ---
> org.apache.kafka.common.metrics.KafkaMetric | 140,476 | 4,495,232
> org.apache.kafka.common.MetricName | 140,476 | 4,495,232
> org.apache.kafka.common.metrics.stats.SampledStat$Sample | 73,599 | 3,532,752
> org.apache.kafka.common.metrics.stats.Meter | 42,104 | 1,347,328
> org.apache.kafka.common.metrics.stats.Count | 42,104 | 1,347,328
> org.apache.kafka.common.metrics.stats.Rate | 42,104 | 1,010,496
> org.apache.kafka.common.metrics.stats.Total | 42,104 | 1,010,496
> org.apache.kafka.common.metrics.stats.Max | 28,134 | 900,288
> org.apache.kafka.common.metrics.stats.Avg | 28,134 | 900,288
> org.apache.kafka.common.metrics.Sensor | 3,164 | 202,496
> org.apache.kafka.common.metrics.Sensor[] | 3,164 | 71,088
> org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl|
>  1 | 56
> ---
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6870) Concurrency conflicts in SampledStat

2018-06-01 Thread Kevin Lafferty (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498412#comment-16498412
 ] 

Kevin Lafferty commented on KAFKA-6870:
---

This depends on [KAFKA-6765|https://issues.apache.org/jira/browse/KAFKA-6765] 
so that would have to be backported to 1.0 as well. [~rsivaram]

> Concurrency conflicts in SampledStat
> 
>
> Key: KAFKA-6870
> URL: https://issues.apache.org/jira/browse/KAFKA-6870
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 2.0.0, 1.1.1
>
>
> The samples stored in SampledStat are not thread-safe. However, the 
> ReplicaFetcherThreads used to replicate from specific brokers may update the 
> samples (when the samples list is empty, we add a new sample to it) and 
> iterate over them concurrently, which causes a ConcurrentModificationException.
> {code:java}
> [2018-05-03 13:50:56,087] ERROR [ReplicaFetcher replicaId=106, leaderId=100, 
> fetcherId=0] Error due to (kafka.server.ReplicaFetcherThread:76)
> java.util.ConcurrentModificationException
> at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909)
> at java.util.ArrayList$Itr.next(ArrayList.java:859)
> at 
> org.apache.kafka.common.metrics.stats.Rate$SampledTotal.combine(Rate.java:132)
> at 
> org.apache.kafka.common.metrics.stats.SampledStat.measure(SampledStat.java:78)
> at org.apache.kafka.common.metrics.stats.Rate.measure(Rate.java:66)
> at 
> org.apache.kafka.common.metrics.KafkaMetric.measurableValue(KafkaMetric.java:85)
> at org.apache.kafka.common.metrics.Sensor.checkQuotas(Sensor.java:201)
> at org.apache.kafka.common.metrics.Sensor.checkQuotas(Sensor.java:192)
> at 
> kafka.server.ReplicationQuotaManager.isQuotaExceeded(ReplicationQuotaManager.scala:104)
> at 
> kafka.server.ReplicaFetcherThread.kafka$server$ReplicaFetcherThread$$shouldFollowerThrottle(ReplicaFetcherThread.scala:384)
> at 
> kafka.server.ReplicaFetcherThread$$anonfun$buildFetchRequest$1.apply(ReplicaFetcherThread.scala:263)
> at 
> kafka.server.ReplicaFetcherThread$$anonfun$buildFetchRequest$1.apply(ReplicaFetcherThread.scala:261)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.server.ReplicaFetcherThread.buildFetchRequest(ReplicaFetcherThread.scala:261)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$2.apply(AbstractFetcherThread.scala:102)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$2.apply(AbstractFetcherThread.scala:101)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
> at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:101)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
> {code}
> Before 
> [https://github.com/apache/kafka/commit/d734f4e56d276f84b8c52b602edd67d41cbb6c35|https://github.com/apache/kafka/commit/d734f4e56d276f84b8c52b602edd67d41cbb6c35]
>  the ConcurrentModificationException did not occur, since all changes to 
> samples were plain "add" calls. Iterating with get(index) avoids the 
> ConcurrentModificationException.
> In short, we can either make samples thread-safe, or just replace the foreach 
> loop with get(index) if we have concerns about the performance of a thread-safe 
> list... (A sketch of the index-based iteration follows below.)
>  
>  
>  
>  
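For illustration, a minimal sketch of the index-based iteration suggested in the quoted description (the class below is a simplified stand-in, not the actual SampledStat internals):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the samples list: one thread may append a sample
// while another thread combines the values. Iterating by index never creates
// an Iterator, so no ConcurrentModificationException can be thrown even if an
// append races with the read. Note that this is not a full thread-safety
// guarantee; the issue also mentions simply making the list thread-safe.
class SamplesSketch {
    private final List<Double> samples = new ArrayList<>();

    synchronized void record(double value) {
        samples.add(value);
    }

    double combine() {
        double total = 0.0;
        // index-based loop instead of a for-each loop over an Iterator
        for (int i = 0; i < samples.size(); i++) {
            total += samples.get(i);
        }
        return total;
    }
}
{code}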



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5424) KafkaConsumer.listTopics() throws Exception when unauthorized topics exist in cluster

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5424.
--
Resolution: Fixed

This has been fixed via KAFKA-3396

> KafkaConsumer.listTopics() throws Exception when unauthorized topics exist in 
> cluster
> -
>
> Key: KAFKA-5424
> URL: https://issues.apache.org/jira/browse/KAFKA-5424
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Mike Fagan
>Assignee: Mickael Maison
>Priority: Major
>
> KafkaConsumer.listTopics() internally calls 
> Fetcher.getAllTopicMetadata(timeout), and this method will throw a 
> TopicAuthorizationException when there exists an unauthorized topic in the 
> cluster. 
> This behavior runs counter to the API docs and makes listTopics() unusable 
> unless the consumer is authorized for every single topic in 
> the cluster. 
> A potentially better approach is to have Fetcher implement a new method 
> getAuthorizedTopicMetadata(timeout) and have KafkaConsumer call this method 
> instead of getAllTopicMetadata(timeout) from within KafkaConsumer.listTopics(). 
> (A client-side sketch of the current behavior follows below.)
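For reference, a hedged client-side sketch of the behaviour described above (broker address, group id and deserializers are placeholder values; this illustrates the current behaviour rather than the proposed fix):

{code:java}
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.errors.TopicAuthorizationException;

public class ListTopicsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "list-topics-demo");        // placeholder group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // On clusters containing topics this principal may not describe,
            // the call can fail as described in the issue.
            Map<String, List<PartitionInfo>> topics = consumer.listTopics();
            System.out.println("Visible topics: " + topics.keySet());
        } catch (TopicAuthorizationException e) {
            // Until the behaviour changes, applications have to handle this case.
            System.err.println("Unauthorized topics: " + e.unauthorizedTopics());
        }
    }
}
{code}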



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6207) Include start of record when RecordIsTooLarge

2018-06-01 Thread Tadhg Pearson (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498297#comment-16498297
 ] 

Tadhg Pearson commented on KAFKA-6207:
--

[~ewencp] - any feedback on this?

> Include start of record when RecordIsTooLarge
> -
>
> Key: KAFKA-6207
> URL: https://issues.apache.org/jira/browse/KAFKA-6207
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.1.1
>Reporter: Tadhg Pearson
>Priority: Minor
>
> When a message is too large to be sent (at 
> org.apache.kafka.clients.producer.KafkaProducer#doSend), the 
> RecordTooLargeException should carry the start of the record (for example, 
> the first 1KB) so that the calling application can debug which message caused 
> the error. 
> For example: one common use case of Kafka is logging. The 
> RecordTooLargeException is thrown due to a large log message being sent by 
> the application. How do you know which statement in your application logged 
> this large message? If your application has thousands of logging statements, it 
> will be very tough to find which one is the cause today, but if you included 
> the start of the message, it would be a very strong hint as to the cause! 
> (A sketch of what an application can already do today follows below.)
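A rough sketch of what a calling application can already do today, assuming it still holds a reference to the value it tried to send (topic name, payload and truncation length are made up; depending on the client version the exception is reported either via the send callback or synchronously from send()):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RecordTooLargeException;

public class TooLargeLoggingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        String value = buildHugeLogLine(); // application-specific payload
        ProducerRecord<String, String> record = new ProducerRecord<>("app-logs", value);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(record, (metadata, exception) -> {
                if (exception instanceof RecordTooLargeException) {
                    // Log only the first 1KB so the offending statement can be found.
                    int len = Math.min(1024, value.length());
                    System.err.println("Record too large, starts with: "
                            + value.substring(0, len));
                }
            });
        }
    }

    private static String buildHugeLogLine() {
        // ~2MB payload, larger than the default max.request.size of 1MB
        return new String(new char[2_000_000]).replace('\0', 'x');
    }
}
{code}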



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5304) Kafka Producer throwing infinite NullPointerExceptions

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5304.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if the issue still exists.

> Kafka Producer throwing infinite NullPointerExceptions
> --
>
> Key: KAFKA-5304
> URL: https://issues.apache.org/jira/browse/KAFKA-5304
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.1
> Environment: RedHat Enterprise Linux 6.8
>Reporter: Pranay Kumar Chaudhary
>Priority: Major
>
> 2017-05-22 11:38:56,918 LL="ERROR" TR="kafka-producer-network-thread | 
> application-name.hostname.com" LN="o.a.k.c.p.i.Sender"  Uncaught error in 
> kafka producer I/O thread:
> java.lang.NullPointerException: null
> We are continuously getting this error in the logs, which is filling up the disk space. 
> We are not able to get a stack trace to pinpoint the source of the error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6925) Memory leak in org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl

2018-06-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498202#comment-16498202
 ] 

ASF GitHub Bot commented on KAFKA-6925:
---

guozhangwang closed pull request #5108: KAFKA-6925: fix parentSensors memory 
leak
URL: https://github.com/apache/kafka/pull/5108
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImpl.java
 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImpl.java
index b2ce2e7dcf8..5d0c46ecba4 100644
--- 
a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImpl.java
+++ 
b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImpl.java
@@ -222,6 +222,7 @@ public void removeSensor(Sensor sensor) {
 final Sensor parent = parentSensors.get(sensor);
 if (parent != null) {
 metrics.removeSensor(parent.name());
+parentSensors.remove(sensor);
 }
 
 }
diff --git 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImplTest.java
 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImplTest.java
index 7b16246da33..7666e42044d 100644
--- 
a/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImplTest.java
+++ 
b/streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamsMetricsImplTest.java
@@ -19,11 +19,15 @@
 
 import org.apache.kafka.common.Metric;
 import org.apache.kafka.common.MetricName;
+import org.apache.kafka.common.metrics.KafkaMetric;
 import org.apache.kafka.common.metrics.Metrics;
 import org.apache.kafka.common.metrics.Sensor;
+import org.junit.Assert;
 import org.junit.Test;
 
+import java.util.Collections;
 import java.util.HashMap;
+import java.util.LinkedHashMap;
 import java.util.Map;
 
 import static org.junit.Assert.assertEquals;
@@ -53,19 +57,27 @@ public void testRemoveSensor() {
 String entity = "entity";
 String operation = "put";
 Map tags = new HashMap<>();
-StreamsMetricsImpl streamsMetrics = new StreamsMetricsImpl(new 
Metrics(), groupName, tags);
+final Metrics metrics = new Metrics();
+final Map initialMetrics = 
Collections.unmodifiableMap(new LinkedHashMap<>(metrics.metrics()));
+StreamsMetricsImpl streamsMetrics = new StreamsMetricsImpl(metrics, 
groupName, tags);
 
 Sensor sensor1 = streamsMetrics.addSensor(sensorName, 
Sensor.RecordingLevel.DEBUG);
 streamsMetrics.removeSensor(sensor1);
+Assert.assertEquals(initialMetrics, metrics.metrics());
 
 Sensor sensor1a = streamsMetrics.addSensor(sensorName, 
Sensor.RecordingLevel.DEBUG, sensor1);
 streamsMetrics.removeSensor(sensor1a);
+Assert.assertEquals(initialMetrics, metrics.metrics());
 
 Sensor sensor2 = streamsMetrics.addLatencyAndThroughputSensor(scope, 
entity, operation, Sensor.RecordingLevel.DEBUG);
 streamsMetrics.removeSensor(sensor2);
+Assert.assertEquals(initialMetrics, metrics.metrics());
 
 Sensor sensor3 = streamsMetrics.addThroughputSensor(scope, entity, 
operation, Sensor.RecordingLevel.DEBUG);
 streamsMetrics.removeSensor(sensor3);
+Assert.assertEquals(initialMetrics, metrics.metrics());
+
+Assert.assertEquals(Collections.emptyMap(), 
streamsMetrics.parentSensors);
 }
 
 @Test


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Memory leak in 
> org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl
> -
>
> Key: KAFKA-6925
> URL: https://issues.apache.org/jira/browse/KAFKA-6925
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.2, 1.1.0, 1.0.1
>Reporter: Marcin Kuthan
>Assignee: John Roesler
>Priority: Major
> Fix For: 1.1.1
>
>
> *Note: this issue was fixed incidentally in 2.0, so it is only present in 
> versions 0.x and 1.x.*
>  
> The retained heap of 
> org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl
>  is surprisingly high for long running job. Over 100MB of heap for every 
> stream after a week 

[jira] [Updated] (KAFKA-6925) Memory leak in org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl

2018-06-01 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-6925:
-
Fix Version/s: 1.1.1

> Memory leak in 
> org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl
> -
>
> Key: KAFKA-6925
> URL: https://issues.apache.org/jira/browse/KAFKA-6925
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.2, 1.1.0, 1.0.1
>Reporter: Marcin Kuthan
>Assignee: John Roesler
>Priority: Major
> Fix For: 1.1.1
>
>
> *Note: this issue was fixed incidentally in 2.0, so it is only present in 
> versions 0.x and 1.x.*
>  
> The retained heap of 
> org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl
>  is surprisingly high for a long-running job: over 100MB of heap for every 
> stream after a week of uptime, whereas for the same application a few hours 
> after start the heap takes 2MB.
> For the problematic instance, the majority of the StreamsMetricsThreadImpl memory is 
> occupied by hash map entries in parentSensors: over 8000 elements of 100+kB 
> each. For a fresh instance there are fewer than 200 elements.
> Below you can find a retained set report generated from Eclipse MAT, but I'm 
> not fully sure about its correctness due to the complex object graph in the metrics 
> related code. Number of objects in a single 
> StreamThread$StreamsMetricsThreadImpl instance:
>  
> {code:java}
> Class Name | Objects | Shallow Heap
> ---
> org.apache.kafka.common.metrics.KafkaMetric | 140,476 | 4,495,232
> org.apache.kafka.common.MetricName | 140,476 | 4,495,232
> org.apache.kafka.common.metrics.stats.SampledStat$Sample | 73,599 | 3,532,752
> org.apache.kafka.common.metrics.stats.Meter | 42,104 | 1,347,328
> org.apache.kafka.common.metrics.stats.Count | 42,104 | 1,347,328
> org.apache.kafka.common.metrics.stats.Rate | 42,104 | 1,010,496
> org.apache.kafka.common.metrics.stats.Total | 42,104 | 1,010,496
> org.apache.kafka.common.metrics.stats.Max | 28,134 | 900,288
> org.apache.kafka.common.metrics.stats.Avg | 28,134 | 900,288
> org.apache.kafka.common.metrics.Sensor | 3,164 | 202,496
> org.apache.kafka.common.metrics.Sensor[] | 3,164 | 71,088
> org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl|
>  1 | 56
> ---
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4751) kafka-clients-0.9.0.2.4.2.11-1 issue not throwing exception.

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4751.
--
Resolution: Fixed

This is addressed by KIP-266. 

> kafka-clients-0.9.0.2.4.2.11-1 issue not throwing exception.
> 
>
> Key: KAFKA-4751
> URL: https://issues.apache.org/jira/browse/KAFKA-4751
> Project: Kafka
>  Issue Type: Bug
> Environment: kafka-clients-0.9.0.2.4.2.11-1 java based client
>Reporter: Avinash Kumar Gaur
>Priority: Major
>
> While running a consumer with kafka-clients-0.9.0.2.4.2.11-1.jar and connecting 
> directly to the broker, the Kafka consumer does not throw any exception if the broker 
> is down.
> 1) Create a client with kafka-clients-0.9.0.2.4.2.11-1.jar.
> 2) Do not start the Kafka broker.
> 3) Start the Kafka consumer with the required properties.
> Observation - The consumer does not throw any exception even if the 
> broker is down.
> Expected - It should throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4571) Consumer fails to retrieve messages if started before producer

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4571.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if the issue still exists.

> Consumer fails to retrieve messages if started before producer
> --
>
> Key: KAFKA-4571
> URL: https://issues.apache.org/jira/browse/KAFKA-4571
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.1
> Environment: Ubuntu Desktop 16.04 LTS, Oracle Java 8 1.8.0_101, Core 
> i7 4770K
>Reporter: Sergiu Hlihor
>Priority: Major
>
> In a configuration where the topic was never created before, starting the 
> consumer before the producer leads to no messages being consumed 
> (KafkaConsumer.poll() always returns an instance of ConsumerRecords with 0 
> count). 
> Starting another consumer on the same group and topic after messages were 
> produced still does not consume them. Starting another consumer with another 
> groupId appears to work.
> In the consumer logs I see: WARN  NetworkClient - Error while fetching 
> metadata with correlation id 1 : {measurements021=LEADER_NOT_AVAILABLE} 
> Both producer and consumer were launched from inside the same JVM. 
> The configuration used is the standard one found in the Kafka distribution. If 
> this is a configuration issue, please suggest any change that I should make.
> Thank you



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3822) Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while connected

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3822.
--
Resolution: Fixed

This is addressed by KIP-266. 

> Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while 
> connected
> --
>
> Key: KAFKA-3822
> URL: https://issues.apache.org/jira/browse/KAFKA-3822
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1, 0.10.0.0
> Environment: x86 Red Hat 6 (1 broker running zookeeper locally, 
> client running on a separate server)
>Reporter: Alexander Cook
>Assignee: Ashish Singh
>Priority: Major
>
> I am using the KafkaConsumer java client to consume messages. My application 
> shuts down smoothly if I am connected to a Kafka broker, or if I never 
> succeed at connecting to a Kafka broker, but if the broker is shut down while 
> my consumer is connected to it, consumer.close() hangs indefinitely. 
> Here is how I reproduce it: 
> 1. Start 0.9.0.1 Kafka Broker
> 2. Start consumer application and consume messages
> 3. Stop 0.9.0.1 Kafka Broker (ctrl-c or stop script)
> 4. Try to stop application...hangs at consumer.close() indefinitely. 
> I also see this same behavior using 0.10 broker and client. 
> This is my first bug reported to Kafka, so please let me know if I should be 
> following a different format. Thanks! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3457) KafkaConsumer.committed(...) hangs forever if port number is wrong

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3457.
--
Resolution: Fixed

This is addressed by KIP-266. 

> KafkaConsumer.committed(...) hangs forever if port number is wrong
> --
>
> Key: KAFKA-3457
> URL: https://issues.apache.org/jira/browse/KAFKA-3457
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Harald Kirsch
>Assignee: Liquan Pei
>Priority: Major
>
> Create a KafkaConsumer with default settings but with a wrong host:port 
> setting for bootstrap.servers. Have it in some consumer group, do not 
> subscribe or assign partitions.
> Then call .committed(...) for a topic/partition combination a few times. It 
> will hang forever on the second or third call. In the debug log you will see 
> that it retries connections over and over again. I waited many minutes and it 
> never came back to throw an Exception.
> The connection problems should at least show up at the WARNING log level. 
> Likely the connection problems should throw an exception eventually.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3011) Consumer.poll(0) blocks if Kafka not accessible

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3011.
--
Resolution: Fixed

This is addressed by KIP-266. 

> Consumer.poll(0) blocks if Kafka not accessible
> ---
>
> Key: KAFKA-3011
> URL: https://issues.apache.org/jira/browse/KAFKA-3011
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: all
>Reporter: Eric Bowman
>Priority: Major
>
> Because of this loop in ConsumerNetworkClient:
> {code:java}
> public void awaitMetadataUpdate() {
> int version = this.metadata.requestUpdate();
> do {
> poll(Long.MAX_VALUE);
> } while (this.metadata.version() == version);
> }
> {code}
> ...if Kafka is not reachable (perhaps not running, or other network issues, 
> unclear), then KafkaConsumer.poll(0) will block until it's available.
> I suspect that better behavior would be to throw an exception. (A client-side 
> workaround using wakeup() is sketched below.)
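Until the client offers a bounded call (KIP-266 later added timeouts), a rough client-side workaround is to interrupt a stuck poll() from another thread with KafkaConsumer.wakeup(); a sketch with placeholder broker, topic and timeout values:

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class BoundedPollExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // possibly unreachable
        props.put("group.id", "bounded-poll-demo");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("some-topic"));

        // Watchdog thread: wakeup() is safe to call from another thread and makes
        // a blocked poll() throw WakeupException.
        ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
        watchdog.schedule(consumer::wakeup, 10, TimeUnit.SECONDS);

        try {
            consumer.poll(0); // may block indefinitely if the cluster is unreachable
        } catch (WakeupException e) {
            System.err.println("Broker not reachable within 10s, giving up.");
        } finally {
            watchdog.shutdownNow();
            consumer.close();
        }
    }
}
{code}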



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3727) Consumer.poll() stuck in loop on non-existent topic manually assigned

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3727.
--
Resolution: Fixed

This is addressed by KIP-266.

> Consumer.poll() stuck in loop on non-existent topic manually assigned
> -
>
> Key: KAFKA-3727
> URL: https://issues.apache.org/jira/browse/KAFKA-3727
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>Priority: Critical
>
> The behavior of a consumer on poll() for a non-existing topic is surprisingly 
> different/inconsistent 
> between a consumer that subscribed to the topic and one that had the 
> topic-partition manually assigned.
> The "subscribed" consumer will return an empty collection
> The "assigned" consumer will *loop forever* - this feels a bug to me.
> sample snippet to reproduce:
> {quote}
> KafkaConsumer assignKc = new KafkaConsumer<>(props1);
> KafkaConsumer subsKc = new KafkaConsumer<>(props2);
> List tps = new ArrayList<>();
> tps.add(new TopicPartition("topic-not-exists", 0));
> assignKc.assign(tps);
> subsKc.subscribe(Arrays.asList("topic-not-exists"));
> System.out.println("* subscribe k consumer ");
> ConsumerRecords crs2 = subsKc.poll(1000L); 
> print("subscribeKc", crs2); // returns empty
> System.out.println("* assign k consumer ");
> ConsumerRecords crs1 = assignKc.poll(1000L); 
>// will loop forever ! 
> print("assignKc", crs1);
> {quote}
> the logs for the "assigned" consumer show:
> [2016-05-18 17:33:09,907] DEBUG Updated cluster metadata version 8 to 
> Cluster(nodes = [192.168.10.18:9093 (id: 0 rack: null)], partitions = []) 
> (org.apache.kafka.clients.Metadata)
> [2016-05-18 17:33:09,908] DEBUG Partition topic-not-exists-0 is unknown for 
> fetching offset, wait for metadata refresh 
> (org.apache.kafka.clients.consumer.internals.Fetcher)
> [2016-05-18 17:33:10,010] DEBUG Sending metadata request 
> {topics=[topic-not-exists]} to node 0 (org.apache.kafka.clients.NetworkClient)
> [2016-05-18 17:33:10,011] WARN Error while fetching metadata with correlation 
> id 9 : {topic-not-exists=UNKNOWN_TOPIC_OR_PARTITION} 
> (org.apache.kafka.clients.NetworkClient)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3899) Consumer.poll() stuck in loop if wrong credentials are supplied

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3899.
--
   Resolution: Fixed
 Assignee: (was: Edoardo Comar)
Fix Version/s: 2.0.0

This is addressed by KIP-266. 

> Consumer.poll() stuck in loop if wrong credentials are supplied
> ---
>
> Key: KAFKA-3899
> URL: https://issues.apache.org/jira/browse/KAFKA-3899
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.0, 0.10.1.0
>Reporter: Edoardo Comar
>Priority: Major
> Fix For: 2.0.0
>
>
> With the broker configured to use SASL PLAIN,
> if the client supplies wrong credentials, 
> a consumer calling poll()
> is stuck forever, and only inspection of DEBUG-level logging can tell what is 
> wrong.
> [2016-06-24 12:15:16,455] DEBUG Connection with localhost/127.0.0.1 
> disconnected (org.apache.kafka.common.network.Selector)
> java.io.EOFException
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>   at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveResponseOrToken(SaslClientAuthenticator.java:239)
>   at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:182)
>   at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:64)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:318)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:183)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:973)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3503) Throw exception on missing/non-existent partition

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3503.
--
Resolution: Duplicate

> Throw exception on missing/non-existent  partition 
> ---
>
> Key: KAFKA-3503
> URL: https://issues.apache.org/jira/browse/KAFKA-3503
> Project: Kafka
>  Issue Type: Wish
>Affects Versions: 0.9.0.1
> Environment: Java 1.8.0_60. 
> Linux  centos65vm 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC
>Reporter: Navin Markandeya
>Priority: Minor
>
> I would expect some exception to be thrown when a consumer tries to access a 
> non-existent partition. I did not see anyone reporting it. If it is already 
> known, please link and close this.
> {code}
> java version "1.8.0_60"
> Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
> Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
> {code}
> {code}
> Linux centos65vm 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 
> x86_64 x86_64 x86_64 GNU/Linux
> {code}
> {{Kafka release - kafka_2.11-0.9.0.1}}
> Created a topic with 3 partitions
> {code}
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic mytopic
> Topic:mytopic PartitionCount:3ReplicationFactor:1 Configs:
>   Topic: mytopic  Partition: 0Leader: 0   Replicas: 0 Isr: 0
>   Topic: mytopic  Partition: 1Leader: 0   Replicas: 0 Isr: 0
>   Topic: mytopic  Partition: 2Leader: 0   Replicas: 0 Isr: 0
> {code}
> The consumer application does not terminate. An exception indicating that there is no 
> such {{mytopic-3}} partition would help to terminate it gracefully. (A defensive 
> check is sketched after the debug log below.)
> {code}
> 14:08:02.885 [main] DEBUG o.a.k.c.c.i.ConsumerCoordinator - Fetching 
> committed offsets for partitions: [mytopic-3, mytopic-0, mytopic-1, mytopic-2]
> 14:08:02.887 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.bytes-sent
> 14:08:02.888 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.bytes-received
> 14:08:02.888 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.latency
> 14:08:02.888 [main] DEBUG o.apache.kafka.clients.NetworkClient - Completed 
> connection to node 2147483647
> 14:08:02.891 [main] DEBUG o.a.k.c.c.i.ConsumerCoordinator - No committed 
> offset for partition mytopic-3
> 14:08:02.891 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Resetting 
> offset for partition mytopic-3 to latest offset.
> 14:08:02.892 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> 14:08:02.965 [main] DEBUG o.apache.kafka.clients.NetworkClient - Sending 
> metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=4,client_id=consumer-2},
>  body={topics=[mytopic]}), isInitiatedByNetworkClient, 
> createdTimeMs=1459804082965, sendTimeMs=0) to node 0
> 14:08:02.968 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster 
> metadata version 3 to Cluster(nodes = [Node(0, centos65vm, 9092)], partitions 
> = [Partition(topic = mytopic, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = mytopic, partition = 1, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = mytopic, partition = 2, leader = 0, 
> replicas = [0,], isr = [0,]])
> 14:08:02.968 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> 14:08:03.071 [main] DEBUG o.apache.kafka.clients.NetworkClient - Sending 
> metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=5,client_id=consumer-2},
>  body={topics=[mytopic]}), isInitiatedByNetworkClient, 
> createdTimeMs=1459804083071, sendTimeMs=0) to node 0
> 14:08:03.073 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster 
> metadata version 4 to Cluster(nodes = [Node(0, centos65vm, 9092)], partitions 
> = [Partition(topic = mytopic, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = mytopic, partition = 1, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = mytopic, partition = 2, leader = 0, 
> replicas = [0,], isr = [0,]])
> 14:08:03.073 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> {code}
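A defensive check an application can make today before assigning, sketched below (the helper name is made up, and partitionsFor() itself may block or return null depending on the client version and broker settings):

{code:java}
import java.util.Collections;
import java.util.List;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public final class AssignGuard {
    // Fail fast instead of polling forever on a partition such as mytopic-3
    // that does not exist in the topic's metadata.
    public static void assignChecked(KafkaConsumer<?, ?> consumer,
                                     String topic, int partition) {
        List<PartitionInfo> partitions = consumer.partitionsFor(topic);
        if (partitions == null || partition >= partitions.size()) {
            throw new IllegalArgumentException(
                    "Partition " + topic + "-" + partition + " does not exist");
        }
        consumer.assign(Collections.singletonList(new TopicPartition(topic, partition)));
    }
}
{code}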



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3177) Kafka consumer can hang when position() is called on a non-existing partition.

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3177.
--
Resolution: Fixed

This is addressed by KIP-266.

> Kafka consumer can hang when position() is called on a non-existing partition.
> --
>
> Key: KAFKA-3177
> URL: https://issues.apache.org/jira/browse/KAFKA-3177
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 2.0.0
>
>
> This can be easily reproduced as following:
> {code}
> {
> ...
> consumer.assign(SomeNonExsitingTopicParition);
> consumer.position();
> ...
> }
> {code}
> It seems that when position() is called we try to do the following:
> 1. Fetch committed offsets.
> 2. If there are no committed offsets, try to reset the offset using the reset 
> strategy. In sendListOffsetRequest(), if the consumer does not know the 
> TopicPartition, it will refresh its metadata and retry. In this case, because 
> the partition does not exist, we fall into an infinite loop of refreshing 
> topic metadata.
> Another orthogonal issue is that if the topic in the above code piece does 
> not exist, the position() call will actually create the topic, because 
> topic metadata requests can currently auto-create topics. 
> This is a known, separate issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6581) ConsumerGroupCommand hangs if even one of the partition is unavailable

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6581.
--
   Resolution: Fixed
Fix Version/s: (was: 0.10.0.2)
   2.0.0

This is addressed by KIP-266.

> ConsumerGroupCommand hangs if even one of the partition is unavailable
> --
>
> Key: KAFKA-6581
> URL: https://issues.apache.org/jira/browse/KAFKA-6581
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, core, tools
>Affects Versions: 0.10.0.0
>Reporter: Sahil Aggarwal
>Priority: Minor
> Fix For: 2.0.0
>
>
> ConsumerGroupCommand.scala uses a consumer internally to get the position for 
> each partition, but if the partition is unavailable the call 
> consumer.position(topicPartition) will block indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6973) setting invalid timestamp causes Kafka broker restart to fail

2018-06-01 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6973.

   Resolution: Fixed
Fix Version/s: 2.0.0

> setting invalid timestamp causes Kafka broker restart to fail
> -
>
> Key: KAFKA-6973
> URL: https://issues.apache.org/jira/browse/KAFKA-6973
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 1.1.0
>Reporter: Paul Brebner
>Assignee: huxihx
>Priority: Critical
> Fix For: 2.0.0
>
>
> Setting the timestamp type to an invalid value causes the Kafka broker to fail upon startup. 
> E.g.
> ./kafka-topics.sh --create --zookeeper localhost --topic duck3 --partitions 1 
> --replication-factor 1 --config message.timestamp.type=boom
>  
> Also note that the docs say the parameter name is 
> log.message.timestamp.type, but this is silently ignored.
> This works with no error for the invalid timestamp value. But next time you 
> restart Kafka:
>  
> [2018-05-29 13:09:05,806] FATAL [KafkaServer id=0] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> java.util.NoSuchElementException: Invalid timestamp type boom
> at org.apache.kafka.common.record.TimestampType.forName(TimestampType.java:39)
> at kafka.log.LogConfig.<init>(LogConfig.scala:94)
> at kafka.log.LogConfig$.fromProps(LogConfig.scala:279)
> at kafka.log.LogManager$$anonfun$17.apply(LogManager.scala:786)
> at kafka.log.LogManager$$anonfun$17.apply(LogManager.scala:785)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at kafka.log.LogManager$.apply(LogManager.scala:785)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
> at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
> at kafka.Kafka$.main(Kafka.scala:92)
> at kafka.Kafka.main(Kafka.scala)
> [2018-05-29 13:09:05,811] INFO [KafkaServer id=0] shutting down 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6973) setting invalid timestamp causes Kafka broker restart to fail

2018-06-01 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-6973:
---
Priority: Major  (was: Critical)

> setting invalid timestamp causes Kafka broker restart to fail
> -
>
> Key: KAFKA-6973
> URL: https://issues.apache.org/jira/browse/KAFKA-6973
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 1.1.0
>Reporter: Paul Brebner
>Assignee: huxihx
>Priority: Major
> Fix For: 2.0.0
>
>
> Setting the timestamp type to an invalid value causes the Kafka broker to fail upon startup. 
> E.g.
> ./kafka-topics.sh --create --zookeeper localhost --topic duck3 --partitions 1 
> --replication-factor 1 --config message.timestamp.type=boom
>  
> Also note that the docs say the parameter name is 
> log.message.timestamp.type, but this is silently ignored.
> This works with no error for the invalid timestamp value. But next time you 
> restart Kafka:
>  
> [2018-05-29 13:09:05,806] FATAL [KafkaServer id=0] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> java.util.NoSuchElementException: Invalid timestamp type boom
> at org.apache.kafka.common.record.TimestampType.forName(TimestampType.java:39)
> at kafka.log.LogConfig.<init>(LogConfig.scala:94)
> at kafka.log.LogConfig$.fromProps(LogConfig.scala:279)
> at kafka.log.LogManager$$anonfun$17.apply(LogManager.scala:786)
> at kafka.log.LogManager$$anonfun$17.apply(LogManager.scala:785)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at kafka.log.LogManager$.apply(LogManager.scala:785)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
> at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
> at kafka.Kafka$.main(Kafka.scala:92)
> at kafka.Kafka.main(Kafka.scala)
> [2018-05-29 13:09:05,811] INFO [KafkaServer id=0] shutting down 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6973) setting invalid timestamp causes Kafka broker restart to fail

2018-06-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498044#comment-16498044
 ] 

ASF GitHub Bot commented on KAFKA-6973:
---

ijuma closed pull request #5106: KAFKA-6973: TopicCommand should verify 
topic-level config
URL: https://github.com/apache/kafka/pull/5106
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/core/src/main/scala/kafka/log/LogConfig.scala 
b/core/src/main/scala/kafka/log/LogConfig.scala
index 0db49e76e2e..158209a1fc0 100755
--- a/core/src/main/scala/kafka/log/LogConfig.scala
+++ b/core/src/main/scala/kafka/log/LogConfig.scala
@@ -254,7 +254,7 @@ object LogConfig {
 KafkaConfig.LogPreAllocateProp)
   .define(MessageFormatVersionProp, STRING, Defaults.MessageFormatVersion, 
MEDIUM, MessageFormatVersionDoc,
 KafkaConfig.LogMessageFormatVersionProp)
-  .define(MessageTimestampTypeProp, STRING, Defaults.MessageTimestampType, 
MEDIUM, MessageTimestampTypeDoc,
+  .define(MessageTimestampTypeProp, STRING, Defaults.MessageTimestampType, 
in("CreateTime", "LogAppendTime"), MEDIUM, MessageTimestampTypeDoc,
 KafkaConfig.LogMessageTimestampTypeProp)
   .define(MessageTimestampDifferenceMaxMsProp, LONG, 
Defaults.MessageTimestampDifferenceMaxMs,
 atLeast(0), MEDIUM, MessageTimestampDifferenceMaxMsDoc, 
KafkaConfig.LogMessageTimestampDifferenceMaxMsProp)
diff --git a/core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala 
b/core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala
index 6a276df1e7c..782fcf539f3 100644
--- a/core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala
+++ b/core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala
@@ -225,4 +225,21 @@ class TopicCommandTest extends ZooKeeperTestHarness with 
Logging with RackAwareT
 assertTrue(output.contains(topic) && 
output.contains(markedForDeletionList))
   }
 
+  @Test
+  def testInvalidTopicLevelConfig(): Unit = {
+val brokers = List(0)
+TestUtils.createBrokersInZk(zkClient, brokers)
+
+// create the topic
+try {
+  val createOpts = new TopicCommandOptions(
+Array("--partitions", "1", "--replication-factor", "1", "--topic", 
"test",
+  "--config", "message.timestamp.type=boom"))
+  TopicCommand.createTopic(zkClient, createOpts)
+  fail("Expected exception on invalid topic-level config.")
+} catch {
+  case _: Exception => // topic creation should fail due to the invalid 
config
+}
+  }
+
 }


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> setting invalid timestamp causes Kafka broker restart to fail
> -
>
> Key: KAFKA-6973
> URL: https://issues.apache.org/jira/browse/KAFKA-6973
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 1.1.0
>Reporter: Paul Brebner
>Assignee: huxihx
>Priority: Critical
>
> Setting the timestamp type to an invalid value causes the Kafka broker to fail upon startup. 
> E.g.
> ./kafka-topics.sh --create --zookeeper localhost --topic duck3 --partitions 1 
> --replication-factor 1 --config message.timestamp.type=boom
>  
> Also note that the docs say the parameter name is 
> log.message.timestamp.type, but this is silently ignored.
> This works with no error for the invalid timestamp value. But next time you 
> restart Kafka:
>  
> [2018-05-29 13:09:05,806] FATAL [KafkaServer id=0] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> java.util.NoSuchElementException: Invalid timestamp type boom
> at org.apache.kafka.common.record.TimestampType.forName(TimestampType.java:39)
> at kafka.log.LogConfig.<init>(LogConfig.scala:94)
> at kafka.log.LogConfig$.fromProps(LogConfig.scala:279)
> at kafka.log.LogManager$$anonfun$17.apply(LogManager.scala:786)
> at kafka.log.LogManager$$anonfun$17.apply(LogManager.scala:785)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
> at 

[jira] [Resolved] (KAFKA-6936) Scala API Wrapper for Streams uses default serializer for table aggregate

2018-06-01 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6936.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

> Scala API Wrapper for Streams uses default serializer for table aggregate
> -
>
> Key: KAFKA-6936
> URL: https://issues.apache.org/jira/browse/KAFKA-6936
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Daniel Heinrich
>Priority: Major
> Fix For: 2.0.0
>
>
> One of the goals of the Scala API is to not fall back on the configured 
> default serializer, but to let the compiler provide the serializers through implicits.
> The aggregate method on KGroupedStream fails to achieve this goal.
> Compared to the Java API this behavior is very surprising, because no other 
> stream operation falls back to the default serializer, and a developer assumes 
> that the compiler checks for the correct serializer type.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6309) add support for getting topic defaults from AdminClient

2018-06-01 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498033#comment-16498033
 ] 

Manikumar commented on KAFKA-6309:
--

This functionality is supported by the KafkaAdminClient.describeConfigs() API.

We can call _"describeConfigs(topicResource, new 
DescribeConfigsOptions().includeSynonyms(true))"_ to list all the configured 
values and the precedence used to obtain the currently configured value.

More details: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-226+-+Dynamic+Broker+Configuration]
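A hedged sketch of that call using the AdminClient (broker address and topic name are placeholders):

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.DescribeConfigsOptions;
import org.apache.kafka.common.config.ConfigResource;

public class DescribeTopicConfigsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topicResource =
                    new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            // includeSynonyms(true) also returns the broker/default values that
            // back each topic config, in precedence order (see KIP-226).
            Config config = admin.describeConfigs(
                        Collections.singleton(topicResource),
                        new DescribeConfigsOptions().includeSynonyms(true))
                    .all().get()
                    .get(topicResource);
            config.entries().forEach(entry ->
                    System.out.println(entry.name() + " = " + entry.value()
                            + " (source: " + entry.source() + ")"));
        }
    }
}
{code}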

> add support for getting topic defaults from AdminClient
> ---
>
> Key: KAFKA-6309
> URL: https://issues.apache.org/jira/browse/KAFKA-6309
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dan norwood
>Assignee: dan norwood
>Priority: Major
>
> kip here: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-234%3A+add+support+for+getting+topic+defaults+from+AdminClient



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-6972) Kafka ACL does not work expected with wildcard

2018-06-01 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-6972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau reassigned KAFKA-6972:
---

Assignee: Sönke Liebau

> Kafka ACL does not work expected with wildcard
> --
>
> Key: KAFKA-6972
> URL: https://issues.apache.org/jira/browse/KAFKA-6972
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
> Environment: OS : CentOS 7, 64bit.
> Confluent : 3.3, Kafka 0.11.
>Reporter: Soyee Deng
>Assignee: Sönke Liebau
>Priority: Major
>
> Just started with the Confluent 3.3 platform and Kafka 0.11, with SSL as 
> transport security and Kerberos to restrict access control based on 
> the principals held. To make life easier, wildcards are used extensively 
> in my environment. But it turned out they do not work as expected. 
> My issue is that when I run the _kafka-acls_ command in a directory that contains 
> some files, the command picks up the name of the first file as the topic 
> name or group name. E.g. in my case, abcd.txt was chosen while I was giving my 
> principal connect-consumer permission to consume messages from any 
> topic with any group id.
> [quality@data-pipeline-1 test_dir]$ 
> KAFKA_OPTS=-Djava.security.auth.login.config='/etc/security/jaas/broker-jaas.conf'
>  kafka-acls --authorizer-properties 
> zookeeper.connect=data-pipeline-1.orion.com:2181 --add --allow-principal 
> User:connect-consumer --consumer --topic * --group *
>  Adding ACLs for resource `Topic:abcd.txt`:
>  User:connect-consumer has Allow permission for operations: Describe from 
> hosts: *
>  User:connect-consumer has Allow permission for operations: Read from hosts: *
> Adding ACLs for resource `Group:abcd.txt`:
>  User:connect-consumer has Allow permission for operations: Read from hosts: *
> Current ACLs for resource `Topic:abcd.txt`:
>  User:connect-consumer has Allow permission for operations: Describe from 
> hosts: *
>  User:connect-consumer has Allow permission for operations: Read from hosts: *
>  User:connect-consumer has Allow permission for operations: Write from hosts: 
> *
> Current ACLs for resource `Group:abcd.txt`:
>  User:connect-consumer has Allow permission for operations: Read from hosts: *
>  
> My current workaround is to change to an empty 
> directory and run the above command; then it works as expected. (See the note 
> on shell quoting below.)
>  
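For context, the behaviour above is the shell expanding the unquoted {{*}} against the files in the current directory before kafka-acls ever sees the arguments, which is why running from an empty directory appears to fix it. Quoting or escaping the wildcard (for example passing {{--topic '*' --group '*'}}) should avoid the expansion regardless of the working directory.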



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6980) Recommended MaxDirectMemorySize for consumers

2018-06-01 Thread John Lu (JIRA)
John Lu created KAFKA-6980:
--

 Summary: Recommended MaxDirectMemorySize for consumers
 Key: KAFKA-6980
 URL: https://issues.apache.org/jira/browse/KAFKA-6980
 Project: Kafka
  Issue Type: Wish
  Components: consumer, documentation
Affects Versions: 0.10.2.0
 Environment: CloudFoundry
Reporter: John Lu


We are observing that when MaxDirectMemorySize is set too low, our Kafka 
consumer threads are failing and encountering the following exception:

{{java.lang.OutOfMemoryError: Direct buffer memory}}

Is there a way to estimate how much direct memory is required for optimal 
performance?  In the documentation, it is suggested that the amount of memory 
required is  [Number of Partitions * max.partition.fetch.bytes].  

When we pick a value slightly above that, we no longer encounter the error, but 
if we double or triple the number, our throughput improves drastically.  So we 
are wondering if there is another setting or parameter to consider?
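As a back-of-the-envelope illustration of that formula, a small sketch with made-up numbers (max.partition.fetch.bytes defaults to 1MiB; the partition count and the 2-3x headroom are assumptions based on the observations above):

{code:java}
public class DirectMemoryEstimate {
    public static void main(String[] args) {
        int partitions = 60;                   // assumed number of partitions consumed
        long maxPartitionFetchBytes = 1 << 20; // 1MiB, the client default
        long lowerBound = partitions * maxPartitionFetchBytes;
        // ~60MiB lower bound; with the 2-3x headroom observed above this would
        // suggest something like -XX:MaxDirectMemorySize=192m for this example.
        System.out.printf("At least %d MiB of direct memory%n", lowerBound >> 20);
    }
}
{code}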

 

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6959) Any impact we foresee if we upgrade Linux version or move to VM instead of physical Linux server

2018-06-01 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497840#comment-16497840
 ] 

Manikumar commented on KAFKA-6959:
--

Post these kind of queries to us...@kafka.apache.org mailing list 
(http://kafka.apache.org/contact) for better visibility and quicker responses.

> Any impact we foresee if we upgrade Linux version or move to VM instead of 
> physical Linux server
> 
>
> Key: KAFKA-6959
> URL: https://issues.apache.org/jira/browse/KAFKA-6959
> Project: Kafka
>  Issue Type: Task
>  Components: admin
>Affects Versions: 0.11.0.2
> Environment: Prod
>Reporter: Gene Yi
>Priority: Trivial
>  Labels: patch, performance, security
>
> As we know, there is the recent Linux Meltdown and Spectre issue: all the 
> Linux servers need to deploy the patch, and the OS version needs to be at least 6.9. 
> We want to know the impact on Kafka: is there any side effect if we directly 
> upgrade the OS to 7.0, and is there any limitation if we deploy Kafka to a VM 
> instead of physical servers?
> The Kafka version we currently use is 0.11.0.2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6335) SimpleAclAuthorizerTest#testHighConcurrencyModificationOfResourceAcls fails intermittently

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-6335:
-
Fix Version/s: (was: 2.0.0)

> SimpleAclAuthorizerTest#testHighConcurrencyModificationOfResourceAcls fails 
> intermittently
> --
>
> Key: KAFKA-6335
> URL: https://issues.apache.org/jira/browse/KAFKA-6335
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Manikumar
>Priority: Major
>
> From 
> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/3045/testReport/junit/kafka.security.auth/SimpleAclAuthorizerTest/testHighConcurrencyModificationOfResourceAcls/
>  :
> {code}
> java.lang.AssertionError: expected acls Set(User:36 has Allow permission for 
> operations: Read from hosts: *, User:7 has Allow permission for operations: 
> Read from hosts: *, User:21 has Allow permission for operations: Read from 
> hosts: *, User:39 has Allow permission for operations: Read from hosts: *, 
> User:43 has Allow permission for operations: Read from hosts: *, User:3 has 
> Allow permission for operations: Read from hosts: *, User:35 has Allow 
> permission for operations: Read from hosts: *, User:15 has Allow permission 
> for operations: Read from hosts: *, User:16 has Allow permission for 
> operations: Read from hosts: *, User:22 has Allow permission for operations: 
> Read from hosts: *, User:26 has Allow permission for operations: Read from 
> hosts: *, User:11 has Allow permission for operations: Read from hosts: *, 
> User:38 has Allow permission for operations: Read from hosts: *, User:8 has 
> Allow permission for operations: Read from hosts: *, User:28 has Allow 
> permission for operations: Read from hosts: *, User:32 has Allow permission 
> for operations: Read from hosts: *, User:25 has Allow permission for 
> operations: Read from hosts: *, User:41 has Allow permission for operations: 
> Read from hosts: *, User:44 has Allow permission for operations: Read from 
> hosts: *, User:48 has Allow permission for operations: Read from hosts: *, 
> User:2 has Allow permission for operations: Read from hosts: *, User:9 has 
> Allow permission for operations: Read from hosts: *, User:14 has Allow 
> permission for operations: Read from hosts: *, User:46 has Allow permission 
> for operations: Read from hosts: *, User:13 has Allow permission for 
> operations: Read from hosts: *, User:5 has Allow permission for operations: 
> Read from hosts: *, User:29 has Allow permission for operations: Read from 
> hosts: *, User:45 has Allow permission for operations: Read from hosts: *, 
> User:6 has Allow permission for operations: Read from hosts: *, User:37 has 
> Allow permission for operations: Read from hosts: *, User:23 has Allow 
> permission for operations: Read from hosts: *, User:19 has Allow permission 
> for operations: Read from hosts: *, User:24 has Allow permission for 
> operations: Read from hosts: *, User:17 has Allow permission for operations: 
> Read from hosts: *, User:34 has Allow permission for operations: Read from 
> hosts: *, User:12 has Allow permission for operations: Read from hosts: *, 
> User:42 has Allow permission for operations: Read from hosts: *, User:4 has 
> Allow permission for operations: Read from hosts: *, User:47 has Allow 
> permission for operations: Read from hosts: *, User:18 has Allow permission 
> for operations: Read from hosts: *, User:31 has Allow permission for 
> operations: Read from hosts: *, User:49 has Allow permission for operations: 
> Read from hosts: *, User:33 has Allow permission for operations: Read from 
> hosts: *, User:1 has Allow permission for operations: Read from hosts: *, 
> User:27 has Allow permission for operations: Read from hosts: *) but got 
> Set(User:36 has Allow permission for operations: Read from hosts: *, User:7 
> has Allow permission for operations: Read from hosts: *, User:21 has Allow 
> permission for operations: Read from hosts: *, User:39 has Allow permission 
> for operations: Read from hosts: *, User:43 has Allow permission for 
> operations: Read from hosts: *, User:3 has Allow permission for operations: 
> Read from hosts: *, User:35 has Allow permission for operations: Read from 
> hosts: *, User:15 has Allow permission for operations: Read from hosts: *, 
> User:16 has Allow permission for operations: Read from hosts: *, User:22 has 
> Allow permission for operations: Read from hosts: *, User:26 has Allow 
> permission for operations: Read from hosts: *, User:11 has Allow permission 
> for operations: Read from hosts: *, User:38 has Allow permission for 
> operations: Read from hosts: *, User:8 has Allow permission for operations: 
> Read from hosts: *, User:28 has Allow permission for operations: Read from 
> hosts: *, User:32 has Allow permission 

[jira] [Resolved] (KAFKA-3743) kafka-server-start.sh: Unhelpful error message

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3743.
--
Resolution: Duplicate

> kafka-server-start.sh: Unhelpful error message
> --
>
> Key: KAFKA-3743
> URL: https://issues.apache.org/jira/browse/KAFKA-3743
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.0.0
>Reporter: Magnus Edenhill
>Priority: Minor
>
> When trying to start Kafka from an uncompiled source tarball rather than the 
> binary, the kafka-server-start.sh command gives a mystical error message:
> ```
> $ bin/kafka-server-start.sh config/server.properties 
> Error: Could not find or load main class config.server.properties
> ```
> This could probably be improved to say something closer to the truth.
> This is on 0.10.0.0-rc6 tarball from github.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-5810) Improve authentication logging on the broker-side

2018-06-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/KAFKA-5810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497701#comment-16497701
 ] 

Gérald Quintana edited comment on KAFKA-5810 at 6/1/18 8:10 AM:


In order to be able to detect security attacks, I'd like to have an access 
log. It would contain authentication failures (e.g. wrong user/password with 
SASL PLAIN) and authorization failures (like kafka-authorizer.log).

To go further, it could also be interesting to log
 * Successful authentication/authorization (mostly for debugging purposes)
 * Dangerous operations: ACL changes, topic deletion...


was (Author: gquintana):
In order to be able to detect security attacks, I'd like to have an access 
log. It would contain authentication failures (e.g. wrong user/password with 
SASL PLAIN) and authorization failures (like kafka-authorizer.log). Successful 
authentication/authorization logs could also be interesting (mostly for 
debugging purposes).

> Improve authentication logging on the broker-side
> -
>
> Key: KAFKA-5810
> URL: https://issues.apache.org/jira/browse/KAFKA-5810
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: Rajini Sivaram
>Priority: Major
>
> From [~theduderog] in the discussion of KIP-152:
> The metrics in KIP-188 will provide counts across all users but the log
> could potentially be used to audit individual authentication events.  I
> think these would be useful at INFO level but if it's inconsistent with the
> rest of Kafka, DEBUG is ok too.  The default log4j config for Kafka
> separates authorization logs.  It seems like a good idea to treat
> authentication logs the same way whether or not we choose DEBUG or INFO.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5810) Improve authentication logging on the broker-side

2018-06-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/KAFKA-5810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497701#comment-16497701
 ] 

Gérald Quintana commented on KAFKA-5810:


In order to be able to detect security attacks, I'd like to have an access 
log. It would contain authentication failures (e.g. wrong user/password with 
SASL PLAIN) and authorization failures (like kafka-authorizer.log). Successful 
authentication/authorization logs could also be interesting (mostly for 
debugging purposes).

> Improve authentication logging on the broker-side
> -
>
> Key: KAFKA-5810
> URL: https://issues.apache.org/jira/browse/KAFKA-5810
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: Rajini Sivaram
>Priority: Major
>
> From [~theduderog] in the discussion of KIP-152:
> The metrics in KIP-188 will provide counts across all users but the log
> could potentially be used to audit individual authentication events.  I
> think these would be useful at INFO level but if it's inconsistent with the
> rest of Kafka, DEBUG is ok too.  The default log4j config for Kafka
> separates authorization logs.  It seems like a good idea to treat
> authentication logs the same way whether or not we choose DEBUG or INFO.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)