[jira] [Created] (KAFKA-5592) Connection with plain client to SSL-secured broker causes OOM

2017-07-14 Thread JIRA
Marcin Łuczyński created KAFKA-5592:
---

 Summary: Connection with plain client to SSL-secured broker causes 
OOM
 Key: KAFKA-5592
 URL: https://issues.apache.org/jira/browse/KAFKA-5592
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
 Environment: Linux x86_64 x86_64 x86_64 GNU/Linux
Reporter: Marcin Łuczyński
 Attachments: heapdump.20170713.100129.14207.0002.phd, Heap.PNG, 
javacore.20170713.100129.14207.0003.txt, Snap.20170713.100129.14207.0004.trc, 
Stack.PNG

While testing a connection from a client app that does not have a truststore configured 
against a Kafka broker secured by SSL, my JVM crashes with OutOfMemoryError. I have 
also seen it mixed with StackOverflowError. I attach dump files.

The stack trace to start with is here:

{quote}at java/nio/HeapByteBuffer.<init>(HeapByteBuffer.java:57) 
at java/nio/ByteBuffer.allocate(ByteBuffer.java:331) 
at org/apache/kafka/common/network/NetworkReceive.readFromReadableChannel(NetworkReceive.java:93) 
at org/apache/kafka/common/network/NetworkReceive.readFrom(NetworkReceive.java:71) 
at org/apache/kafka/common/network/KafkaChannel.receive(KafkaChannel.java:169) 
at org/apache/kafka/common/network/KafkaChannel.read(KafkaChannel.java:150) 
at org/apache/kafka/common/network/Selector.pollSelectionKeys(Selector.java:355) 
at org/apache/kafka/common/network/Selector.poll(Selector.java:303) 
at org/apache/kafka/clients/NetworkClient.poll(NetworkClient.java:349) 
at org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226) 
at org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.poll(ConsumerNetworkClient.java:188) 
at org/apache/kafka/clients/consumer/internals/AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:207) 
at org/apache/kafka/clients/consumer/internals/AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193) 
at org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.poll(ConsumerCoordinator.java:279) 
at org/apache/kafka/clients/consumer/KafkaConsumer.pollOnce(KafkaConsumer.java:1029) 
at org/apache/kafka/clients/consumer/KafkaConsumer.poll(KafkaConsumer.java:995) 
at com/ibm/is/cc/kafka/runtime/KafkaConsumerProcessor.process(KafkaConsumerProcessor.java:237) 
at com/ibm/is/cc/kafka/runtime/KafkaProcessor.process(KafkaProcessor.java:173) 
at com/ibm/is/cc/javastage/connector/CC_JavaAdapter.run(CC_JavaAdapter.java:443){quote}
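
Reading the trace, a plausible mechanism (the sketch below is illustrative only, not the actual NetworkReceive code) is that the plaintext client interprets the first four bytes of the broker's TLS handshake response as a Kafka size prefix and then tries to allocate a buffer of that bogus size:

{code:java}
// Illustrative sketch only (not Kafka source): a plaintext client reads a
// 4-byte size prefix from whatever the broker sends back. Against an SSL
// listener those bytes are TLS handshake data, so the "size" is effectively
// random and the allocation below can exhaust the heap.
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

public final class SizePrefixedRead {

    public static ByteBuffer readResponse(ReadableByteChannel channel) throws Exception {
        ByteBuffer sizeBuf = ByteBuffer.allocate(4);
        while (sizeBuf.hasRemaining() && channel.read(sizeBuf) >= 0) {
            // keep reading until the 4-byte size prefix is complete
        }
        sizeBuf.flip();
        int size = sizeBuf.getInt();          // bogus value when the peer speaks TLS
        return ByteBuffer.allocate(size);     // OutOfMemoryError for a huge bogus size
    }
}
{code}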



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4107) Support offset reset capability in Kafka Connect

2017-07-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080346#comment-16080346
 ] 

Sönke Liebau commented on KAFKA-4107:
-

Sounds good to me. I'll investigate the issue a little and draft a high level 
KIP as basis for further discussion.

> Support offset reset capability in Kafka Connect
> 
>
> Key: KAFKA-4107
> URL: https://issues.apache.org/jira/browse/KAFKA-4107
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>
> It would be useful in some cases to be able to reset connector offsets. For 
> example, if a topic in Kafka corresponding to a source database is 
> accidentally deleted (or deleted because of corrupt data), an administrator 
> may want to reset offsets and reproduce the log from the beginning. It may 
> also be useful to have support for overriding offsets, but that seems like a 
> less likely use case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5625) Invalid subscription data may cause streams app to throw BufferUnderflowException

2017-07-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xavier Léauté updated KAFKA-5625:
-
Affects Version/s: 0.11.0.0
  Component/s: streams

> Invalid subscription data may cause streams app to throw 
> BufferUnderflowException
> -
>
> Key: KAFKA-5625
> URL: https://issues.apache.org/jira/browse/KAFKA-5625
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Xavier Léauté
>Priority: Minor
>
> I was able to cause my streams app to crash with the following error when 
> attempting to join the same consumer group with a rogue client.
> At the very least I would expect streams to throw a 
> {{TaskAssignmentException}} to indicate invalid subscription data.
> {code}
> java.nio.BufferUnderflowException
> at java.nio.Buffer.nextGetIndex(Buffer.java:506)
> at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361)
> at 
> org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.decode(SubscriptionInfo.java:97)
> at 
> org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:302)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:365)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:512)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1100(AbstractCoordinator.java:93)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:462)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:445)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:798)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:778)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:168)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:358)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:297)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:574)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:545)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:519)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5625) Invalid subscription data may cause streams app to throw BufferUnderflowException

2017-07-21 Thread JIRA
Xavier Léauté created KAFKA-5625:


 Summary: Invalid subscription data may cause streams app to throw 
BufferUnderflowException
 Key: KAFKA-5625
 URL: https://issues.apache.org/jira/browse/KAFKA-5625
 Project: Kafka
  Issue Type: Bug
Reporter: Xavier Léauté
Priority: Minor


I was able to cause my streams app to crash with the following error when 
attempting to join the same consumer group with a rogue client.

At the very least I would expect streams to throw a {{TaskAssignmentException}} 
to indicate invalid subscription data.

{code}
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:506)
at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361)
at org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.decode(SubscriptionInfo.java:97)
at org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:302)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:365)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:512)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1100(AbstractCoordinator.java:93)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:462)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:445)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:798)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:778)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:168)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:358)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:297)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:574)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:545)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:519)
{code}
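
A minimal sketch of the guard being asked for, assuming the {{SubscriptionInfo}} and {{TaskAssignmentException}} classes from the trace above and a (message, cause) constructor on the exception (not the actual fix):

{code:java}
// Hypothetical guard (not the actual StreamPartitionAssignor change): surface
// malformed subscription bytes as a TaskAssignmentException instead of letting
// the raw BufferUnderflowException escape from the assignor.
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

import org.apache.kafka.streams.errors.TaskAssignmentException;
import org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo;

public final class SafeSubscriptionDecoder {

    public static SubscriptionInfo decode(ByteBuffer data) {
        try {
            return SubscriptionInfo.decode(data);
        } catch (BufferUnderflowException e) {
            throw new TaskAssignmentException("Unable to decode subscription data", e);
        }
    }
}
{code}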



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5563) Clarify handling of connector name in config

2017-07-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16096076#comment-16096076
 ] 

Sönke Liebau commented on KAFKA-5563:
-

Alright, I'll prepare a PR to introduce this check once 
[2755|https://github.com/apache/kafka/pull/2755] has been reviewed and merged; 
it touches exactly the same code, so basing my change on that PR should make 
merging easier.

> Clarify handling of connector name in config 
> -
>
> Key: KAFKA-5563
> URL: https://issues.apache.org/jira/browse/KAFKA-5563
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Sönke Liebau
>Priority: Minor
>
> The connector name is currently being stored in two places, once at the root 
> level of the connector and once in the config:
> {code:java}
> {
>   "name": "test",
>   "config": {
>   "connector.class": 
> "org.apache.kafka.connect.tools.MockSinkConnector",
>   "tasks.max": "3",
>   "topics": "test-topic",
>   "name": "test"
>   },
>   "tasks": [
>   {
>   "connector": "test",
>   "task": 0
>   }
>   ]
> }
> {code}
> If no name is provided in the "config" element, then the name from the root 
> level is [copied there when the connector is being 
> created|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L95].
>  If however a name is provided in the config then it is not touched, which 
> means it is possible to create a connector with a different name at the root 
> level and in the config like this:
> {code:java}
> {
>   "name": "test1",
>   "config": {
>   "connector.class": 
> "org.apache.kafka.connect.tools.MockSinkConnector",
>   "tasks.max": "3",
>   "topics": "test-topic",
>   "name": "differentname"
>   },
>   "tasks": [
>   {
>   "connector": "test1",
>   "task": 0
>   }
>   ]
> }
> {code}
> I am not aware of any issues that this currently causes, but it is at least 
> confusing and probably not intended behavior and definitely bears potential 
> for bugs, if different functions take the name from different places.
> Would it make sense to add a check to reject requests that provide different 
> names in the request and the config section?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-4827) Kafka connect: error with special characters in connector name

2017-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau reassigned KAFKA-4827:
---

Assignee: Sönke Liebau

> Kafka connect: error with special characters in connector name
> --
>
> Key: KAFKA-4827
> URL: https://issues.apache.org/jira/browse/KAFKA-4827
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Aymeric Bouvet
>Assignee: Sönke Liebau
>Priority: Minor
>
> When creating a connector, if the connector name (and possibly other 
> properties) end with a carriage return, kafka-connect will create the config 
> but report error
> {code}
> cat << EOF > file-connector.json
> {
>   "name": "file-connector\r",
>   "config": {
> "topic": "kafka-connect-logs\r",
> "tasks.max": "1",
> "file": "/var/log/ansible-confluent/connect.log",
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSourceConnector"
>   }
> }
> EOF
> curl -X POST -H "Content-Type: application/json" -H "Accept: 
> application/json" -d @file-connector.json localhost:8083/connectors 
> {code}
> returns an error 500  and log the following
> {code}
> [2017-03-01 18:25:23,895] WARN  (org.eclipse.jetty.servlet.ServletHandler)
> javax.servlet.ServletException: java.lang.IllegalArgumentException: Illegal 
> character in path at index 27: /connectors/file-connector4
> at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
> at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at 
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Illegal character in path at 
> index 27: /connectors/file-connector4
> at java.net.URI.create(URI.java:852)
> at 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.createConnector(ConnectorsResource.java:100)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvo

[jira] [Commented] (KAFKA-4827) Kafka connect: error with special characters in connector name

2017-07-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103087#comment-16103087
 ] 

Sönke Liebau commented on KAFKA-4827:
-

All true :)

I'll create a dedicated jira and kick off a discussion on the mailing list, as 
I think this jira is too narrow and should be solved shortly when the fix for 
KAFKA-4930 is merged.

> Kafka connect: error with special characters in connector name
> --
>
> Key: KAFKA-4827
> URL: https://issues.apache.org/jira/browse/KAFKA-4827
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Aymeric Bouvet
>Priority: Minor
>
> When creating a connector, if the connector name (and possibly other 
> properties) end with a carriage return, kafka-connect will create the config 
> but report error
> {code}
> cat << EOF > file-connector.json
> {
>   "name": "file-connector\r",
>   "config": {
> "topic": "kafka-connect-logs\r",
> "tasks.max": "1",
> "file": "/var/log/ansible-confluent/connect.log",
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSourceConnector"
>   }
> }
> EOF
> curl -X POST -H "Content-Type: application/json" -H "Accept: 
> application/json" -d @file-connector.json localhost:8083/connectors 
> {code}
> returns an error 500  and log the following
> {code}
> [2017-03-01 18:25:23,895] WARN  (org.eclipse.jetty.servlet.ServletHandler)
> javax.servlet.ServletException: java.lang.IllegalArgumentException: Illegal 
> character in path at index 27: /connectors/file-connector4
> at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
> at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at 
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Illegal character in path at 
> index 27: /connectors/file-connector4
> at java.net.URI.create(URI.java:852)
> at 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.createConnector(ConnectorsResource.java:100)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.j

[jira] [Assigned] (KAFKA-5637) Document compatibility and release policies

2017-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau reassigned KAFKA-5637:
---

Assignee: Sönke Liebau

> Document compatibility and release policies
> ---
>
> Key: KAFKA-5637
> URL: https://issues.apache.org/jira/browse/KAFKA-5637
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Sönke Liebau
> Fix For: 1.0.0
>
>
> We should document our compatibility and release policies in one place so 
> that people have the correct expectations. This is generally important, but 
> more so now that we are releasing 1.0.0.
> More details to come.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5563) Clarify handling of connector name in config

2017-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau reassigned KAFKA-5563:
---

Assignee: Sönke Liebau

> Clarify handling of connector name in config 
> -
>
> Key: KAFKA-5563
> URL: https://issues.apache.org/jira/browse/KAFKA-5563
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Sönke Liebau
>Assignee: Sönke Liebau
>Priority: Minor
>
> The connector name is currently being stored in two places, once at the root 
> level of the connector and once in the config:
> {code:java}
> {
>   "name": "test",
>   "config": {
>   "connector.class": 
> "org.apache.kafka.connect.tools.MockSinkConnector",
>   "tasks.max": "3",
>   "topics": "test-topic",
>   "name": "test"
>   },
>   "tasks": [
>   {
>   "connector": "test",
>   "task": 0
>   }
>   ]
> }
> {code}
> If no name is provided in the "config" element, then the name from the root 
> level is [copied there when the connector is being 
> created|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L95].
>  If however a name is provided in the config then it is not touched, which 
> means it is possible to create a connector with a different name at the root 
> level and in the config like this:
> {code:java}
> {
>   "name": "test1",
>   "config": {
>   "connector.class": 
> "org.apache.kafka.connect.tools.MockSinkConnector",
>   "tasks.max": "3",
>   "topics": "test-topic",
>   "name": "differentname"
>   },
>   "tasks": [
>   {
>   "connector": "test1",
>   "task": 0
>   }
>   ]
> }
> {code}
> I am not aware of any issues that this currently causes, but it is at least 
> confusing and probably not intended behavior and definitely bears potential 
> for bugs, if different functions take the name from different places.
> Would it make sense to add a check to reject requests that provide different 
> names in the request and the config section?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5637) Document compatibility and release policies

2017-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau updated KAFKA-5637:

Description: 
We should document our compatibility and release policies in one place so that 
people have the correct expectations. This is generally important, but more so 
now that we are releasing 1.0.0.

I extracted the following topics from the mailing list thread as the ones that 
should be documented as a minimum: 

*Code stability*
* Explanation of stability annotations and their implications
* Explanation of what public APIs are
* *Discussion point:* Do we want to keep the _unstable_ annotation or is 
_evolving_ sufficient going forward?

*Support duration*
* How long are versions supported?
* How far are bugfixes backported?
* How far are security fixes backported?
* How long are protocol versions supported by subsequent code versions?
* How long are older clients supported?
* How long are older brokers supported?

I will create an initial pull request to add a section to the documentation as 
basis for further discussion.

  was:
We should document our compatibility and release policies in one place so that 
people have the correct expectations. This is generally important, but more so 
now that we are releasing 1.0.0.

More details to come.

Component/s: documentation

> Document compatibility and release policies
> ---
>
> Key: KAFKA-5637
> URL: https://issues.apache.org/jira/browse/KAFKA-5637
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Ismael Juma
>Assignee: Sönke Liebau
> Fix For: 1.0.0
>
>
> We should document our compatibility and release policies in one place so 
> that people have the correct expectations. This is generally important, but 
> more so now that we are releasing 1.0.0.
> I extracted the following topics from the mailing list thread as the ones 
> that should be documented as a minimum: 
> *Code stability*
> * Explanation of stability annotations and their implications
> * Explanation of what public apis are
> * *Discussion point: * Do we want to keep the _unstable_ annotation or is 
> _evolving_ sufficient going forward?
> *Support duration*
> * How long are versions supported?
> * How far are bugfixes backported?
> * How far are security fixes backported?
> * How long are protocol versions supported by subsequent code versions?
> * How long are older clients supported?
> * How long are older brokers supported?
> I will create an initial pull request to add a section to the documentation 
> as basis for further discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5546) Lost data when the leader is disconnected.

2017-07-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16071252#comment-16071252
 ] 

Björn Eriksson commented on KAFKA-5546:
---

Hi [~mihbor], I've updated the test to use {{acks=all}} and no unclean leader 
election but the results are the same: the console consumer doesn't immediately 
switch to the new leader and data is lost.

You're right, we're trying to set up a resilient Kafka cluster but this seems 
difficult to achieve.
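
For reference, the settings referred to above would look roughly like this (illustrative values only, not the exact test configuration):

{code}
# producer
acks=all
# broker / topic level
unclean.leader.election.enable=false
{code}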


> Lost data when the leader is disconnected.
> --
>
> Key: KAFKA-5546
> URL: https://issues.apache.org/jira/browse/KAFKA-5546
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.1
>Reporter: Björn Eriksson
> Attachments: kafka-failure-log.txt
>
>
> We've noticed that if the leaders networking is deconfigured (with {{ifconfig 
> eth0 down}}) the producer won't notice this and doesn't immediately connect 
> to the newly elected leader.
> {{docker-compose.yml}} and test runner are at 
> https://github.com/owbear/kafka-network-failure-tests with sample test output 
> at 
> https://github.com/owbear/kafka-network-failure-tests/blob/master/README.md#sample-results
> I was expecting a transparent failover to the new leader.
> The attached log shows that while the producer produced values between 
> {{12:37:33}} and {{12:37:54}}, theres a gap between {{12:37:41}} and 
> {{12:37:50}} where no values was stored in the log after the network was 
> taken down at {{12:37:42}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5546) Lost data when the leader is disconnected.

2017-06-30 Thread JIRA
Björn Eriksson created KAFKA-5546:
-

 Summary: Lost data when the leader is disconnected.
 Key: KAFKA-5546
 URL: https://issues.apache.org/jira/browse/KAFKA-5546
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.10.2.1
Reporter: Björn Eriksson
 Attachments: kafka-failure-log.txt

We've noticed that if the leader's networking is deconfigured (with {{ifconfig 
eth0 down}}) the producer won't notice this and doesn't immediately connect to 
the newly elected leader.

{{docker-compose.yml}} and test runner are at 
https://github.com/owbear/kafka-network-failure-tests with sample test output 
at 
https://github.com/owbear/kafka-network-failure-tests/blob/master/README.md#sample-results

I was expecting a transparent failover to the new leader.

The attached log shows that while the producer produced values between 
{{12:37:33}} and {{12:37:54}}, there's a gap between {{12:37:41}} and 
{{12:37:50}} where no values were stored in the log after the network was taken 
down at {{12:37:42}}.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4107) Support offset reset capability in Kafka Connect

2017-07-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073478#comment-16073478
 ] 

Sönke Liebau commented on KAFKA-4107:
-

[~hachikuji] any updates on whether this is still needed? I can't self assign, 
but am still willing to work on this.

> Support offset reset capability in Kafka Connect
> 
>
> Key: KAFKA-4107
> URL: https://issues.apache.org/jira/browse/KAFKA-4107
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>
> It would be useful in some cases to be able to reset connector offsets. For 
> example, if a topic in Kafka corresponding to a source database is 
> accidentally deleted (or deleted because of corrupt data), an administrator 
> may want to reset offsets and reproduce the log from the beginning. It may 
> also be useful to have support for overriding offsets, but that seems like a 
> less likely use case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4827) Kafka connect: error with special characters in connector name

2017-07-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076052#comment-16076052
 ] 

Sönke Liebau commented on KAFKA-4827:
-

There are a few more special characters that cause similar behavior: %, ?, <, >, 
/, \, ...

I have blacklisted the obvious ones in the fix for KAFKA-4930 but think that a 
larger discussion around naming conventions and what is and isn't allowed is 
probably necessary.
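
To make that concrete, a hypothetical sketch of such a name check (the actual KAFKA-4930 fix may differ; the blacklist here is assumed for illustration):

{code:java}
// Hypothetical connector-name validation: reject names containing control
// characters (such as the carriage return above) or a few URL-unsafe ones.
import java.util.regex.Pattern;

public final class ConnectorNameValidator {

    // Assumed blacklist for illustration only.
    private static final Pattern ILLEGAL = Pattern.compile("[\\p{Cntrl}%?<>/\\\\]");

    public static void validate(String name) {
        if (name == null || name.trim().isEmpty()) {
            throw new IllegalArgumentException("Connector name must not be empty");
        }
        if (ILLEGAL.matcher(name).find()) {
            throw new IllegalArgumentException("Connector name contains illegal characters: " + name);
        }
    }
}
{code}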

> Kafka connect: error with special characters in connector name
> --
>
> Key: KAFKA-4827
> URL: https://issues.apache.org/jira/browse/KAFKA-4827
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Aymeric Bouvet
>Priority: Minor
>
> When creating a connector, if the connector name (and possibly other 
> properties) end with a carriage return, kafka-connect will create the config 
> but report error
> {code}
> cat << EOF > file-connector.json
> {
>   "name": "file-connector\r",
>   "config": {
> "topic": "kafka-connect-logs\r",
> "tasks.max": "1",
> "file": "/var/log/ansible-confluent/connect.log",
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSourceConnector"
>   }
> }
> EOF
> curl -X POST -H "Content-Type: application/json" -H "Accept: 
> application/json" -d @file-connector.json localhost:8083/connectors 
> {code}
> returns an error 500  and log the following
> {code}
> [2017-03-01 18:25:23,895] WARN  (org.eclipse.jetty.servlet.ServletHandler)
> javax.servlet.ServletException: java.lang.IllegalArgumentException: Illegal 
> character in path at index 27: /connectors/file-connector4
> at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
> at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at 
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Illegal character in path at 
> index 27: /connectors/file-connector4
> at java.net.URI.create(URI.java:852)
> at 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.createConnector(ConnectorsResource.java:100)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAcce

[jira] [Created] (KAFKA-5563) Clarify handling of connector name in config

2017-07-06 Thread JIRA
Sönke Liebau created KAFKA-5563:
---

 Summary: Clarify handling of connector name in config 
 Key: KAFKA-5563
 URL: https://issues.apache.org/jira/browse/KAFKA-5563
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Sönke Liebau
Priority: Minor


The connector name is currently being stored in two places, once at the root 
level of the connector and once in the config:
{code:java}
{
"name": "test",
"config": {
"connector.class": 
"org.apache.kafka.connect.tools.MockSinkConnector",
"tasks.max": "3",
"topics": "test-topic",
"name": "test"
},
"tasks": [
{
"connector": "test",
"task": 0
}
]
}
{code}

If no name is provided in the "config" element, then the name from the root 
level is [copied there when the connector is being 
created|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L95].
 If however a name is provided in the config then it is not touched, which 
means it is possible to create a connector with a different name at the root 
level and in the config like this:
{code:java}
{
"name": "test1",
"config": {
"connector.class": 
"org.apache.kafka.connect.tools.MockSinkConnector",
"tasks.max": "3",
"topics": "test-topic",
"name": "differentname"
},
"tasks": [
{
"connector": "test1",
"task": 0
}
]
}
{code}

I am not aware of any issues that this currently causes, but it is at least 
confusing and probably not intended behavior and definitely bears potential for 
bugs, if different functions take the name from different places.

Would it make sense to add a check to reject requests that provide different 
names in the request and the config section?
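
If such a check were added, it might look roughly like this sketch (names and exception handling are illustrative, not the actual ConnectorsResource code):

{code:java}
// Hypothetical sketch of the proposed check: copy the root-level name into the
// config when it is absent, and reject the request when the two names differ.
import java.util.Map;

public final class ConnectorNameCheck {

    public static void apply(String rootName, Map<String, String> config) {
        String configName = config.get("name");
        if (configName == null) {
            config.put("name", rootName);   // current behaviour: copy it down
        } else if (!configName.equals(rootName)) {
            // a real implementation would map this to an HTTP 400 response
            throw new IllegalArgumentException("Connector name '" + rootName
                    + "' does not match name in config: '" + configName + "'");
        }
    }
}
{code}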




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5546) Temporary loss of availability data when the leader is disconnected

2017-08-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115735#comment-16115735
 ] 

Björn Eriksson commented on KAFKA-5546:
---

Hi Jason,

No, {{ifdown}} means that the connection won't be shut down cleanly. We're 
building a fault tolerant system and we need to test network failures, like 
hardware failure or a disconnected network cable.

I've updated the 0.11.0.0 branch to include results for {{ifdown}} and {{kill 
-9}} (docker rm). Testing with {{kill -9}} shows better results (2 - 8 seconds) 
but we'd like guarantees much lower than that.

The {{ifdown}} test shows that after the _1003_ leader is disconnected 
(_@11:31:12_) it takes ~2.5 seconds for the producer to realise this and report 
_Disconnecting from node 1003 due to request timeout_. ZooKeeper reports the 
new leader to be _1002_ after ~6 seconds, but the producer doesn't learn of 
the new leader until 14 seconds after the network failure, despite 
continuously sending metadata requests.


> Temporary loss of availability data when the leader is disconnected
> ---
>
> Key: KAFKA-5546
> URL: https://issues.apache.org/jira/browse/KAFKA-5546
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.1, 0.11.0.0
> Environment: docker, failing-network
>Reporter: Björn Eriksson
>
> We've noticed that if the leaders networking is deconfigured (with {{ifconfig 
> eth0 down}}) the producer won't notice this and doesn't immediately connect 
> to the newly elected leader.
> {{docker-compose.yml}} and test runner are at 
> https://github.com/owbear/kafka-network-failure-tests.
> We were expecting a transparent failover to the new leader but testing shows 
> that there's a 8-15 seconds long gap where no values are stored in the log 
> after the network is taken down.
> Tests (and results) [against 
> 0.10.2.1|https://github.com/owbear/kafka-network-failure-tests/tree/kafka-network-failure-tests-0.10.2.1]
> Tests (and results) [against 
> 0.11.0.0|https://github.com/owbear/kafka-network-failure-tests/tree/kafka-network-failure-tests-0.11.0.0]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5546) Temporary loss of availability data when the leader is disconnected

2017-08-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Björn Eriksson updated KAFKA-5546:
--
Affects Version/s: 0.11.0.0
  Environment: docker, failing-network
  Description: 
We've noticed that if the leader's networking is deconfigured (with {{ifconfig 
eth0 down}}) the producer won't notice this and doesn't immediately connect to 
the newly elected leader.

{{docker-compose.yml}} and test runner are at 
https://github.com/owbear/kafka-network-failure-tests.

We were expecting a transparent failover to the new leader, but testing shows 
that there's an 8-15 second gap where no values are stored in the log 
after the network is taken down.

Tests (and results) [against 
0.10.2.1|https://github.com/owbear/kafka-network-failure-tests/tree/kafka-network-failure-tests-0.10.2.1]
Tests (and results) [against 
0.11.0.0|https://github.com/owbear/kafka-network-failure-tests/tree/kafka-network-failure-tests-0.11.0.0]



  was:
We've noticed that if the leaders networking is deconfigured (with {{ifconfig 
eth0 down}}) the producer won't notice this and doesn't immediately connect to 
the newly elected leader.

{{docker-compose.yml}} and test runner are at 
https://github.com/owbear/kafka-network-failure-tests with sample test output 
at 
https://github.com/owbear/kafka-network-failure-tests/blob/master/README.md#sample-results

I was expecting a transparent failover to the new leader.

The attached log shows that while the producer produced values between 
{{12:37:33}} and {{12:37:54}}, theres a gap between {{12:37:41}} and 
{{12:37:50}} where no values was stored in the log after the network was taken 
down at {{12:37:42}}.


  Summary: Temporary loss of availability data when the leader is 
disconnected  (was: Lost data when the leader is disconnected.)

> Temporary loss of availability data when the leader is disconnected
> ---
>
> Key: KAFKA-5546
> URL: https://issues.apache.org/jira/browse/KAFKA-5546
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.1, 0.11.0.0
> Environment: docker, failing-network
>Reporter: Björn Eriksson
>
> We've noticed that if the leaders networking is deconfigured (with {{ifconfig 
> eth0 down}}) the producer won't notice this and doesn't immediately connect 
> to the newly elected leader.
> {{docker-compose.yml}} and test runner are at 
> https://github.com/owbear/kafka-network-failure-tests.
> We were expecting a transparent failover to the new leader but testing shows 
> that there's a 8-15 seconds long gap where no values are stored in the log 
> after the network is taken down.
> Tests (and results) [against 
> 0.10.2.1|https://github.com/owbear/kafka-network-failure-tests/tree/kafka-network-failure-tests-0.10.2.1]
> Tests (and results) [against 
> 0.11.0.0|https://github.com/owbear/kafka-network-failure-tests/tree/kafka-network-failure-tests-0.11.0.0]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5546) Temporary loss of availability data when the leader is disconnected

2017-08-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Björn Eriksson updated KAFKA-5546:
--
Attachment: (was: kafka-failure-log.txt)

> Temporary loss of availability data when the leader is disconnected
> ---
>
> Key: KAFKA-5546
> URL: https://issues.apache.org/jira/browse/KAFKA-5546
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.1, 0.11.0.0
> Environment: docker, failing-network
>Reporter: Björn Eriksson
>
> We've noticed that if the leaders networking is deconfigured (with {{ifconfig 
> eth0 down}}) the producer won't notice this and doesn't immediately connect 
> to the newly elected leader.
> {{docker-compose.yml}} and test runner are at 
> https://github.com/owbear/kafka-network-failure-tests.
> We were expecting a transparent failover to the new leader but testing shows 
> that there's a 8-15 seconds long gap where no values are stored in the log 
> after the network is taken down.
> Tests (and results) [against 
> 0.10.2.1|https://github.com/owbear/kafka-network-failure-tests/tree/kafka-network-failure-tests-0.10.2.1]
> Tests (and results) [against 
> 0.11.0.0|https://github.com/owbear/kafka-network-failure-tests/tree/kafka-network-failure-tests-0.11.0.0]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5742) Support passing ZK chroot in system tests

2017-08-16 Thread JIRA
Xavier Léauté created KAFKA-5742:


 Summary: Support passing ZK chroot in system tests
 Key: KAFKA-5742
 URL: https://issues.apache.org/jira/browse/KAFKA-5742
 Project: Kafka
  Issue Type: Test
  Components: system tests
Reporter: Xavier Léauté


Currently, spinning up multiple Kafka clusters in system tests requires at 
least one ZK node per Kafka cluster, which wastes a lot of resources. We also 
don't currently test anything outside of the ZK root path.
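
For illustration, two clusters could share a single ensemble by giving each its own chroot in {{zookeeper.connect}} (hypothetical broker config values):

{code}
# brokers of cluster A
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/cluster-a
# brokers of cluster B, same ZooKeeper nodes, different chroot
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/cluster-b
{code}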



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5742) Support passing ZK chroot in system tests

2017-08-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xavier Léauté reassigned KAFKA-5742:


Assignee: Xavier Léauté

> Support passing ZK chroot in system tests
> -
>
> Key: KAFKA-5742
> URL: https://issues.apache.org/jira/browse/KAFKA-5742
> Project: Kafka
>  Issue Type: Test
>  Components: system tests
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>
> Currently spinning up multiple Kafka clusters in a system tests requires at 
> least one ZK node per Kafka cluster, which wastes a lot of resources. We 
> currently also don't test anything outside of the ZK root path.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5765) Move merge() from StreamsBuilder to KStream

2017-08-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137081#comment-16137081
 ] 

Xavier Léauté commented on KAFKA-5765:
--

I have a small request when it comes to merge(). The current varargs form 
generates a lot of compiler warnings that need to be suppressed using 
{{@SuppressWarnings("unchecked")}}.

Given that the typical merge use-case involves only a handful of streams, 
I think it would be useful to provide a couple of overloads that take a fixed 
number of arguments, similar to what Guava does in 
[ImmutableList.of(...)|https://google.github.io/guava/releases/21.0/api/docs/com/google/common/collect/ImmutableList.html#of-E-E-E-E-E-E-E-E-E-E-E-]
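
To illustrate the pattern only (using {{java.util.List}} as a stand-in for {{KStream}}), fixed-arity overloads can cover the common call sites while a {{@SafeVarargs}} form remains for the rare many-argument call:

{code:java}
// Illustration of the overload pattern, not a Streams API proposal: the two-
// and three-argument forms avoid unchecked generic-varargs warnings at call
// sites, and the varargs form handles everything else.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public final class MergeOverloads {

    public static <T> List<T> merge(List<T> a, List<T> b) {
        List<T> out = new ArrayList<>(a);
        out.addAll(b);
        return out;
    }

    public static <T> List<T> merge(List<T> a, List<T> b, List<T> c) {
        List<T> out = merge(a, b);
        out.addAll(c);
        return out;
    }

    @SafeVarargs
    public static <T> List<T> merge(List<T> first, List<T>... rest) {
        List<T> out = new ArrayList<>(first);
        for (List<T> next : rest) {
            out.addAll(next);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(merge(Arrays.asList(1, 2), Arrays.asList(3))); // [1, 2, 3]
    }
}
{code}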


> Move merge() from StreamsBuilder to KStream
> ---
>
> Key: KAFKA-5765
> URL: https://issues.apache.org/jira/browse/KAFKA-5765
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>  Labels: needs-kip
>
> Merging multiple {{KStream}} is done via {{StreamsBuilder#merge()}} (formally 
> {{KStreamBuilder#merge()}}). This is quite unnatural and should be done via 
> {{KStream#merge()}}.
> As {{StreamsBuilder}} is not released yet, this is not a backward 
> incompatible change (and KStreamBuilder is already deprecated). We still need 
> a KIP as we add a new method to a public {{KStreams}} API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5966) Support ByteBuffer serialization in Kafka Streams

2017-09-22 Thread JIRA
Xavier Léauté created KAFKA-5966:


 Summary: Support ByteBuffer serialization in Kafka Streams
 Key: KAFKA-5966
 URL: https://issues.apache.org/jira/browse/KAFKA-5966
 Project: Kafka
  Issue Type: Improvement
Reporter: Xavier Léauté


Currently Kafka Streams only supports serialization using byte arrays. This 
means we generate a lot of garbage and spend unnecessary time copying bytes, 
especially when working with windowed state stores that rely on composite keys. 
In many places in the code we have to extract parts of the composite key to 
deserialize either the timestamp or the message key from the state store 
key (e.g. the methods in WindowStoreUtils).

Having support for serde into/from ByteBuffers would allow us to reuse the 
underlying byte arrays and just pass around slices of the underlying buffers to 
avoid the unnecessary copying.
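
To illustrate the idea, a hypothetical sketch of reading a composite window-store key through ByteBuffer views (the key layout below is assumed for illustration and this is not the actual WindowStoreUtils code):

{code:java}
// Hypothetical slice-based access to a composite store key: read the embedded
// timestamp in place and expose the message-key portion as a zero-copy slice.
import java.nio.ByteBuffer;

public final class CompositeKeyView {

    // assumed layout: [message key bytes][8-byte timestamp][4-byte sequence number]
    private static final int SUFFIX_SIZE = 8 + 4;

    public static long timestamp(ByteBuffer compositeKey) {
        return compositeKey.getLong(compositeKey.limit() - SUFFIX_SIZE);
    }

    public static ByteBuffer messageKey(ByteBuffer compositeKey) {
        ByteBuffer view = compositeKey.duplicate();
        view.position(0);
        view.limit(compositeKey.limit() - SUFFIX_SIZE);
        return view.slice();   // shares the underlying bytes, no copy
    }
}
{code}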



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5959) NPE in NetworkClient

2017-09-21 Thread JIRA
Xavier Léauté created KAFKA-5959:


 Summary: NPE in NetworkClient
 Key: KAFKA-5959
 URL: https://issues.apache.org/jira/browse/KAFKA-5959
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 1.0.0
Reporter: Xavier Léauté
Assignee: Jason Gustafson


I'm experiencing the following error when running trunk clients against a 
0.11.0 cluster configured with SASL_PLAINTEXT

{code}
[2017-09-21 23:07:09,072] ERROR [kafka-producer-network-thread | xxx] [Producer clientId=xxx] Uncaught error in request completion: (org.apache.kafka.clients.NetworkClient)
java.lang.NullPointerException
at org.apache.kafka.clients.producer.internals.Sender.canRetry(Sender.java:639)
at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:522)
at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:473)
at org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:76)
at org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:693)
at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:101)
at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:453)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:241)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:166)
at java.lang.Thread.run(Thread.java:748)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4972) Kafka 0.10.0 Found a corrupted index file during Kafka broker startup

2017-09-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169681#comment-16169681
 ] 

Julius Žaromskis commented on KAFKA-4972:
-

There are a bunch of warning messages in my log file; Kafka is slow to restart.

{{[2017-09-18 06:53:19,349] WARN Found a corrupted index file due to 
requirement failed: Corrupt index found, index file 
(/var/kafka/dispatch.task-ack-6/00021796.index) has non-zero size 
but the last offset is 21796 which is no larger than the base offset 21796.}. 
deleting /var/kafka/dispatch.task-ack-6/00021796.timeindex, 
/var/kafka/dispatch.task-ack-6/00021796.index, and 
/var/kafka/dispatch.task-ack-6/00021796.txnindex and rebuilding 
index... (kafka.log.Log)
[2017-09-18 06:56:10,533] WARN Found a corrupted index file due to requirement 
failed: Corrupt index found, index file 
(/var/kafka/dispatch.task-ack-10/00027244.index) has non-zero size 
but the last offset is 27244 which is no larger than the base offset 27244.}. 
deleting /var/kafka/dispatch.task-ack-10/00027244.timeindex, 
/var/kafka/dispatch.task-ack-10/00027244.index, and 
/var/kafka/dispatch.task-ack-10/00027244.txnindex and rebuilding 
index... (kafka.log.Log)
[2017-09-18 07:09:17,710] WARN Found a corrupted index file due to requirement 
failed: Corrupt index found, index file 
(/var/kafka/dispatch.status-3/49362755.index) has non-zero size but 
the last offset is 49362755 which is no larger than the base offset 49362755.}. 
deleting /var/kafka/dispatch.status-3/49362755.timeindex, 
/var/kafka/dispatch.status-3/49362755.index, and 
/var/kafka/dispatch.status-3/49362755.txnindex and rebuilding 
index... (kafka.log.Log)}}

> Kafka 0.10.0  Found a corrupted index file during Kafka broker startup
> --
>
> Key: KAFKA-4972
> URL: https://issues.apache.org/jira/browse/KAFKA-4972
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
> Environment: JDK: HotSpot  x64  1.7.0_80
> Tag: 0.10.0
>Reporter: fangjinuo
>Priority: Critical
> Fix For: 0.11.0.2
>
> Attachments: Snap3.png
>
>
> After force-shutting down all kafka brokers one by one and restarting them one 
> by one, one broker failed to start up.
> The following WARN level log was found in the log file:
> found a corrupted index file,  .index , delete it  ...
> you can view details in the attachment.
> I looked up some code in the core module and found out:
> the non-threadsafe method LogSegment.append(offset, messages) has two callers:
> 1) Log.append(messages)  // here has a synchronized 
> lock 
> 2) LogCleaner.cleanInto(topicAndPartition, source, dest, map, retainDeletes, 
> messageFormatVersion)   // here has not 
> So I guess this may be the reason for the repeated offsets in the 0xx.log file 
> (logsegment's .log) 
> Although this is just my inference, I hope that this problem can be 
> quickly repaired



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-4972) Kafka 0.10.0 Found a corrupted index file during Kafka broker startup

2017-09-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169681#comment-16169681
 ] 

Julius Žaromskis edited comment on KAFKA-4972 at 9/18/17 7:22 AM:
--

There are a bunch of warning messages in my log file; Kafka is slow to restart. 
Upgrading from 0.10.2 to 0.11.0.1.

{{[2017-09-18 06:53:19,349] WARN Found a corrupted index file due to 
requirement failed: Corrupt index found, index file 
(/var/kafka/dispatch.task-ack-6/00021796.index) has non-zero size 
but the last offset is 21796 which is no larger than the base offset 21796.}. 
deleting /var/kafka/dispatch.task-ack-6/00021796.timeindex, 
/var/kafka/dispatch.task-ack-6/00021796.index, and 
/var/kafka/dispatch.task-ack-6/00021796.txnindex and rebuilding 
index... (kafka.log.Log)
[2017-09-18 06:56:10,533] WARN Found a corrupted index file due to requirement 
failed: Corrupt index found, index file 
(/var/kafka/dispatch.task-ack-10/00027244.index) has non-zero size 
but the last offset is 27244 which is no larger than the base offset 27244.}. 
deleting /var/kafka/dispatch.task-ack-10/00027244.timeindex, 
/var/kafka/dispatch.task-ack-10/00027244.index, and 
/var/kafka/dispatch.task-ack-10/00027244.txnindex and rebuilding 
index... (kafka.log.Log)
[2017-09-18 07:09:17,710] WARN Found a corrupted index file due to requirement 
failed: Corrupt index found, index file 
(/var/kafka/dispatch.status-3/49362755.index) has non-zero size but 
the last offset is 49362755 which is no larger than the base offset 49362755.}. 
deleting /var/kafka/dispatch.status-3/49362755.timeindex, 
/var/kafka/dispatch.status-3/49362755.index, and 
/var/kafka/dispatch.status-3/49362755.txnindex and rebuilding 
index... (kafka.log.Log)}}


was (Author: juliuszaromskis):
There's a bunch of warning msgs in my log file, kafka is slow to restart

{{[2017-09-18 06:53:19,349] WARN Found a corrupted index file due to 
requirement failed: Corrupt index found, index file 
(/var/kafka/dispatch.task-ack-6/00021796.index) has non-zero size 
but the last offset is 21796 which is no larger than the base offset 21796.}. 
deleting /var/kafka/dispatch.task-ack-6/00021796.timeindex, 
/var/kafka/dispatch.task-ack-6/00021796.index, and 
/var/kafka/dispatch.task-ack-6/00021796.txnindex and rebuilding 
index... (kafka.log.Log)
[2017-09-18 06:56:10,533] WARN Found a corrupted index file due to requirement 
failed: Corrupt index found, index file 
(/var/kafka/dispatch.task-ack-10/00027244.index) has non-zero size 
but the last offset is 27244 which is no larger than the base offset 27244.}. 
deleting /var/kafka/dispatch.task-ack-10/00027244.timeindex, 
/var/kafka/dispatch.task-ack-10/00027244.index, and 
/var/kafka/dispatch.task-ack-10/00027244.txnindex and rebuilding 
index... (kafka.log.Log)
[2017-09-18 07:09:17,710] WARN Found a corrupted index file due to requirement 
failed: Corrupt index found, index file 
(/var/kafka/dispatch.status-3/49362755.index) has non-zero size but 
the last offset is 49362755 which is no larger than the base offset 49362755.}. 
deleting /var/kafka/dispatch.status-3/49362755.timeindex, 
/var/kafka/dispatch.status-3/49362755.index, and 
/var/kafka/dispatch.status-3/49362755.txnindex and rebuilding 
index... (kafka.log.Log)}}

> Kafka 0.10.0  Found a corrupted index file during Kafka broker startup
> --
>
> Key: KAFKA-4972
> URL: https://issues.apache.org/jira/browse/KAFKA-4972
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
> Environment: JDK: HotSpot  x64  1.7.0_80
> Tag: 0.10.0
>Reporter: fangjinuo
>Priority: Critical
> Fix For: 0.11.0.2
>
> Attachments: Snap3.png
>
>
> After force-shutting down all Kafka brokers one by one and restarting them one 
> by one, one broker failed to start up.
> The following WARN-level log was found in the log file:
> found a corrupted index file, .index, delete it ...
> You can view details in the attached file.
> I looked through some code in the core module and found out:
> the non-thread-safe method LogSegment.append(offset, messages) has two callers:
> 1) Log.append(messages)  // this caller holds a synchronized 
> lock 
> 2) LogCleaner.cleanInto(topicAndPartition, source, dest, map, retainDeletes, 
> messageFormatVersion)   // this caller does not 
> So I guess this may be the reason for the repeated offsets in the 0xx.log file 
>

[jira] [Created] (KAFKA-5895) Update readme to reflect that Gradle 2 is no longer good enough

2017-09-14 Thread JIRA
Matthias Weßendorf created KAFKA-5895:
-

 Summary: Update readme to reflect that Gradle 2 is no longer good 
enough
 Key: KAFKA-5895
 URL: https://issues.apache.org/jira/browse/KAFKA-5895
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.11.0.2
Reporter: Matthias Weßendorf
Priority: Trivial


The README says:

Kafka requires Gradle 2.0 or higher.

but running with "2.13", I am getting an ERROR message, saying that 3.0+ is 
needed:

{code}
> Failed to apply plugin [class 
> 'com.github.jengelman.gradle.plugins.shadow.ShadowBasePlugin']
   > This version of Shadow supports Gradle 3.0+ only. Please upgrade.

{code}

Full log here:

{code}
➜  kafka git:(utils_improvment) ✗ gradle 
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Download https://jcenter.bintray.com/org/ajoberstar/grgit/1.9.3/grgit-1.9.3.pom
Download 
https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.15.0/gradle-versions-plugin-0.15.0.pom
Download 
https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.pom
Download 
https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.1/shadow-2.0.1.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.5.2.201704071617-r/org.eclipse.jgit-4.5.2.201704071617-r.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/4.5.2.201704071617-r/org.eclipse.jgit-parent-4.5.2.201704071617-r.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.5.2.201704071617-r/org.eclipse.jgit.ui-4.5.2.201704071617-r.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.9/jsch.agentproxy-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.9/jsch.agentproxy.usocket-jna-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.9/jsch.agentproxy.usocket-nc-0.0.9.pom
Download https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.54/jsch-0.1.54.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.9/jsch.agentproxy.core-0.0.9.pom
Download https://jcenter.bintray.com/org/ajoberstar/grgit/1.9.3/grgit-1.9.3.jar
Download 
https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.15.0/gradle-versions-plugin-0.15.0.jar
Download 
https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.jar
Download 
https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.1/shadow-2.0.1.jar
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.5.2.201704071617-r/org.eclipse.jgit-4.5.2.201704071617-r.jar
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.5.2.201704071617-r/org.eclipse.jgit.ui-4.5.2.201704071617-r.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.9/jsch.agentproxy.usocket-jna-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.9/jsch.agentproxy.usocket-nc-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.9/jsch.agentproxy.core-0.0.9.jar
Building project 'core' with Scala version 2.11.11

FAILURE: Build failed with an exception.

* Where:
Build file '/home/Matthias/Work/Apache/kafka/build.gradle' line: 978

* What went wrong:
A problem occurred evaluating root project 'kafka'.
> Failed to apply plugin [class 
> 'com.github.jengelman.gradle.plugins.shadow.ShadowBasePlugin']
   > This version of Shadow supports Gradle 3.0+ only. Please upgrade.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.637 secs
➜  kafka git:(utils_improvment) ✗ gradle --version


Gradle 2.13
{code} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5895) Gradle 3.0+ is needed on the build

2017-09-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Weßendorf updated KAFKA-5895:
--
Summary: Gradle 3.0+ is needed on the build  (was: Update readme to reflect 
that Gradle 2 is no longer good enough)

> Gradle 3.0+ is needed on the build
> --
>
> Key: KAFKA-5895
> URL: https://issues.apache.org/jira/browse/KAFKA-5895
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.11.0.2
>Reporter: Matthias Weßendorf
>Priority: Trivial
>
> The README says:
> Kafka requires Gradle 2.0 or higher.
> but running with "2.13", I am getting an ERROR message, saying that 3.0+ is 
> needed:
> {code}
> > Failed to apply plugin [class 
> > 'com.github.jengelman.gradle.plugins.shadow.ShadowBasePlugin']
>> This version of Shadow supports Gradle 3.0+ only. Please upgrade.
> {code}
> Full log here:
> {code}
> ➜  kafka git:(utils_improvment) ✗ gradle 
> To honour the JVM settings for this build a new JVM will be forked. Please 
> consider using the daemon: 
> https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
> Download 
> https://jcenter.bintray.com/org/ajoberstar/grgit/1.9.3/grgit-1.9.3.pom
> Download 
> https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.15.0/gradle-versions-plugin-0.15.0.pom
> Download 
> https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.pom
> Download 
> https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.1/shadow-2.0.1.pom
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.5.2.201704071617-r/org.eclipse.jgit-4.5.2.201704071617-r.pom
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/4.5.2.201704071617-r/org.eclipse.jgit-parent-4.5.2.201704071617-r.pom
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.5.2.201704071617-r/org.eclipse.jgit.ui-4.5.2.201704071617-r.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.9/jsch.agentproxy-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.9/jsch.agentproxy.usocket-jna-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.9/jsch.agentproxy.usocket-nc-0.0.9.pom
> Download https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.54/jsch-0.1.54.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.9/jsch.agentproxy.core-0.0.9.pom
> Download 
> https://jcenter.bintray.com/org/ajoberstar/grgit/1.9.3/grgit-1.9.3.jar
> Download 
> https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.15.0/gradle-versions-plugin-0.15.0.jar
> Download 
> https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.jar
> Download 
> https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.1/shadow-2.0.1.jar
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.5.2.201704071617-r/org.eclipse.jgit-4.5.2.201704071617-r.jar
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.5.2.201704071617-r/org.eclipse.jgit.ui-4.5.2.201704071617-r.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.9/jsch.agentproxy.usocket-jna-0.0.9.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.9/jsch.agentproxy.usocket-nc-0.0.9.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.9/jsch.agentproxy.core-0.0.9.jar
> Building project 'core' with Scala version 2.11.11
> FAILURE: Build failed with an exception.
> * Where:
> Build file '/home/Matthias/Work/Apache/kafka/build.gradle' line: 978
> * What went wrong:
&

[jira] [Commented] (KAFKA-4950) ConcurrentModificationException when iterating over Kafka Metrics

2017-09-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16172134#comment-16172134
 ] 

Sébastien Launay commented on KAFKA-4950:
-

I am also getting such an error sporadically when using a custom version of 
[kafka-graphite|https://github.com/apakulov/kafka-graphite] when periodically 
pushing consumer metrics to a Graphite server.

You can find a unit test in the PR above to reproduce the 
{{ConcurrentModificationException}} (fails consistently on my laptop).

Let me know what you think of the bugfix.

> ConcurrentModificationException when iterating over Kafka Metrics
> -
>
> Key: KAFKA-4950
> URL: https://issues.apache.org/jira/browse/KAFKA-4950
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
>Reporter: Dumitru Postoronca
>Assignee: Vahid Hashemian
>Priority: Minor
> Fix For: 0.11.0.2
>
>
> It looks like the when calling {{PartitionStates.partitionSet()}}, while the 
> resulting Hashmap is being built, the internal state of the allocations can 
> change, which leads to ConcurrentModificationException during the copy 
> operation.
> {code}
> java.util.ConcurrentModificationException
> at 
> java.util.LinkedHashMap$LinkedHashIterator.nextNode(LinkedHashMap.java:719)
> at 
> java.util.LinkedHashMap$LinkedKeyIterator.next(LinkedHashMap.java:742)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.(HashSet.java:119)
> at 
> org.apache.kafka.common.internals.PartitionStates.partitionSet(PartitionStates.java:66)
> at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedPartitions(SubscriptionState.java:291)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$ConsumerCoordinatorMetrics$1.measure(ConsumerCoordinator.java:783)
> at 
> org.apache.kafka.common.metrics.KafkaMetric.value(KafkaMetric.java:61)
> at 
> org.apache.kafka.common.metrics.KafkaMetric.value(KafkaMetric.java:52)
> {code}
> {code}
> // client code:
> import java.util.Collections;
> import java.util.HashMap;
> import java.util.Map;
> import com.codahale.metrics.Gauge;
> import com.codahale.metrics.Metric;
> import com.codahale.metrics.MetricSet;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.MetricName;
> import static com.codahale.metrics.MetricRegistry.name;
> public class KafkaMetricSet implements MetricSet {
> private final KafkaConsumer client;
> public KafkaMetricSet(KafkaConsumer client) {
> this.client = client;
> }
> @Override
> public Map<String, Metric> getMetrics() {
> final Map<String, Metric> gauges = new HashMap<String, Metric>();
> Map<MetricName, org.apache.kafka.common.Metric> m = client.metrics();
> for (Map.Entry<MetricName, org.apache.kafka.common.Metric> e : 
> m.entrySet()) {
> gauges.put(name(e.getKey().group(), e.getKey().name(), "count"), 
> new Gauge() {
> @Override
> public Double getValue() {
>     return e.getValue().value(); // exception thrown here 
> }
> });
> }
> return Collections.unmodifiableMap(gauges);
> }
> }
> {code}
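> 
> A client-side workaround (an editorial sketch, not the proposed bugfix itself) is to treat the metric read as something that may sporadically fail and fall back to the last known value, since the {{ConcurrentModificationException}} is thrown while the consumer copies its internal partition set:
> {code:java}
> import java.util.ConcurrentModificationException;
> 
> // Editorial sketch of a client-side workaround (not the proposed bugfix):
> // wrap the metric read so a sporadic ConcurrentModificationException does not
> // propagate into the metrics reporter; the last known value is returned instead.
> public class SafeGaugeSketch {
>     public interface ValueSupplier {
>         double value();
>     }
> 
>     private final ValueSupplier delegate;
>     private volatile double lastKnown = Double.NaN;
> 
>     public SafeGaugeSketch(ValueSupplier delegate) {
>         this.delegate = delegate;
>     }
> 
>     public double getValue() {
>         try {
>             lastKnown = delegate.value();   // e.g. the kafkaMetric.value() call above
>         } catch (ConcurrentModificationException e) {
>             // Swallow the sporadic race inside the consumer's metric computation
>             // and report the previous reading instead of failing the whole report.
>         }
>         return lastKnown;
>     }
> }
> {code}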



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4477) Node reduces its ISR to itself, and doesn't recover. Other nodes do not take leadership, cluster remains sick until node is restarted.

2017-10-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195250#comment-16195250
 ] 

Håkon Åmdal commented on KAFKA-4477:


[~mlesyk] It seems like we're also experiencing the same issues on Kafka 
0.11.0.1. We're first seeing {{Shrinking ISR from 2,6,1 to 2 
(kafka.cluster.Partition)}} on a broker, and then a series of  {{Connection to 
2 was disconnected before the response was read}} on brokers trying to stay in 
sync with the other broker. This effectively brings down our entire cluster, 
and a manual restart of the broker is needed.

> Node reduces its ISR to itself, and doesn't recover. Other nodes do not take 
> leadership, cluster remains sick until node is restarted.
> --
>
> Key: KAFKA-4477
> URL: https://issues.apache.org/jira/browse/KAFKA-4477
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.0
> Environment: RHEL7
> java version "1.8.0_66"
> Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
> Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
>Reporter: Michael Andre Pearce
>Assignee: Apurva Mehta
>Priority: Critical
>  Labels: reliability
> Fix For: 0.10.1.1
>
> Attachments: 2016_12_15.zip, 72_Server_Thread_Dump.txt, 
> 73_Server_Thread_Dump.txt, 74_Server_Thread_Dump, issue_node_1001_ext.log, 
> issue_node_1001.log, issue_node_1002_ext.log, issue_node_1002.log, 
> issue_node_1003_ext.log, issue_node_1003.log, kafka.jstack, 
> server_1_72server.log, server_2_73_server.log, server_3_74Server.log, 
> state_change_controller.tar.gz
>
>
> We have encountered a critical issue that has re-occurred in different 
> physical environments. We haven't worked out what is going on. We do, though, 
> have a nasty workaround to keep the service alive. 
> We have not had this issue on clusters still running 0.9.01.
> We have noticed a node randomly shrinking the ISRs for the partitions it owns 
> down to itself; moments later we see other nodes having disconnects, followed 
> finally by app issues where producing to these partitions is blocked.
> It seems that only restarting the Kafka instance's Java process resolves the 
> issue.
> We have had this occur multiple times, and from all network and machine 
> monitoring the machine never left the network or had any other glitches.
> Below are seen logs from the issue.
> Node 7:
> [2016-12-01 07:01:28,112] INFO Partition 
> [com_ig_trade_v1_position_event--demo--compacted,10] on broker 7: Shrinking 
> ISR for partition [com_ig_trade_v1_position_event--demo--compacted,10] from 
> 1,2,7 to 7 (kafka.cluster.Partition)
> All other nodes:
> [2016-12-01 07:01:38,172] WARN [ReplicaFetcherThread-0-7], Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@5aae6d42 
> (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to 7 was disconnected before the response was 
> read
> All clients:
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> After this occurs, we then suddenly see an increasing number of CLOSE_WAIT 
> connections and open file descriptors on the sick machine.
> As a workaround to keep the service alive we are currently putting in an 
> automated process that tails the logs and matches the regex below; where 
> new_partitions is just the node itself, we restart the node (see the sketch 
> after the regex). 
> "\[(?P.+)\] INFO Partition \[.*\] on broker .* Shrinking ISR for 
> partition \[.*\] from (?P.+) to (?P.+) 
> \(kafka.cluster.Partition\)"
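> 
> For what it's worth, here is a minimal sketch of the matching logic for that workaround, assuming the "Shrinking ISR" log line format shown above (this is an illustration, not the actual script we run):
> {code:java}
> import java.util.regex.Matcher;
> import java.util.regex.Pattern;
> 
> // Minimal sketch of the workaround's matching logic: detect a "Shrinking ISR"
> // line where the new ISR contains only the broker itself (assumed log format).
> public class ShrinkingIsrWatchSketch {
>     private static final Pattern SHRINK = Pattern.compile(
>             "Shrinking ISR for partition \\[(.+)\\] from (.+) to (.+) \\(kafka\\.cluster\\.Partition\\)");
> 
>     static boolean shrunkToSelf(String logLine, String brokerId) {
>         Matcher m = SHRINK.matcher(logLine);
>         if (!m.find()) {
>             return false;
>         }
>         String newIsr = m.group(3).trim();
>         // If the new ISR is just this broker, the workaround restarts the node.
>         return newIsr.equals(brokerId);
>     }
> 
>     public static void main(String[] args) {
>         String line = "[2016-12-01 07:01:28,112] INFO Partition "
>                 + "[com_ig_trade_v1_position_event--demo--compacted,10] on broker 7: Shrinking "
>                 + "ISR for partition [com_ig_trade_v1_position_event--demo--compacted,10] from "
>                 + "1,2,7 to 7 (kafka.cluster.Partition)";
>         System.out.println(shrunkToSelf(line, "7")); // true
>     }
> }
> {code}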



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5637) Document compatibility and release policies

2017-10-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192779#comment-16192779
 ] 

Sönke Liebau commented on KAFKA-5637:
-

Apologies for letting this slide for so long; I had a medical emergency in the 
family and became a father somewhat earlier than anticipated, so I am still 
trying to get back into the groove, so to speak. I will pick this up in the next 
two weeks and submit an initial proposal, so it can hopefully go into the first 
bugfix release.

> Document compatibility and release policies
> ---
>
> Key: KAFKA-5637
> URL: https://issues.apache.org/jira/browse/KAFKA-5637
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Ismael Juma
>Assignee: Sönke Liebau
> Fix For: 1.1.0
>
>
> We should document our compatibility and release policies in one place so 
> that people have the correct expectations. This is generally important, but 
> more so now that we are releasing 1.0.0.
> I extracted the following topics from the mailing list thread as the ones 
> that should be documented as a minimum: 
> *Code stability*
> * Explanation of stability annotations and their implications
> * Explanation of what public apis are
> * *Discussion point: * Do we want to keep the _unstable_ annotation or is 
> _evolving_ sufficient going forward?
> *Support duration*
> * How long are versions supported?
> * How far are bugfixes backported?
> * How far are security fixes backported?
> * How long are protocol versions supported by subsequent code versions?
> * How long are older clients supported?
> * How long are older brokers supported?
> I will create an initial pull request to add a section to the documentation 
> as basis for further discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5708) Update Jackson dependencies (from 2.8.5 to 2.9.x)

2017-09-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dejan Stojadinović updated KAFKA-5708:
--
Description: 
-In addition to update: remove deprecated version forcing for 
'jackson-annotations'-

*_Notes:_*
* wait until [Jackson 
2.9.1|https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.9.1] is 
released (expected in September 2017)
* inspired by pull request: https://github.com/apache/kafka/pull/3631






  was:
In addition to update: remove deprecated version forcing for 
'jackson-annotations'

*_Notes:_*
* wait until [Jackson 
2.9.1|https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.9.1] is 
released (expected in September 2017)
* inspired by pull request: https://github.com/apache/kafka/pull/3631







> Update Jackson dependencies (from 2.8.5 to 2.9.x)
> -
>
> Key: KAFKA-5708
> URL: https://issues.apache.org/jira/browse/KAFKA-5708
> Project: Kafka
>  Issue Type: Task
>  Components: build
>Reporter: Dejan Stojadinović
>Assignee: Dejan Stojadinović
>Priority: Blocker
> Fix For: 1.0.0
>
>
> -In addition to update: remove deprecated version forcing for 
> 'jackson-annotations'-
> *_Notes:_*
> * wait until [Jackson 
> 2.9.1|https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.9.1] is 
> released (expected in September 2017)
> * inspired by pull request: https://github.com/apache/kafka/pull/3631



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1324) Debian packaging

2017-08-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-1324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139931#comment-16139931
 ] 

Martin Schröder commented on KAFKA-1324:


bq. I think we can leave packaging out of Kafka itself since this is done in a 
bunch of places now.

Excuse me - where? Googling brings up a bunch of scripts for Kafka 0.8 and not 
newer. And I see no repo for the latest release (0.11) anywhere.

> Debian packaging
> 
>
> Key: KAFKA-1324
> URL: https://issues.apache.org/jira/browse/KAFKA-1324
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
> Environment: linux
>Reporter: David Stendardi
>Priority: Minor
>  Labels: deb, debian, fpm, newbie, packaging
> Attachments: packaging.patch
>
>
> The following patch adds a task releaseDeb to the Gradle build:
> ./gradlew releaseDeb
> This task should create a Debian package in core/build/distributions using 
> fpm:
> https://github.com/jordansissel/fpm.
> We decided to use fpm so other package types would be easy to provide in 
> further iterations (e.g. rpm).
> *Some implementation details*:
> - We split the releaseTarGz task into two tasks: distDir and releaseTarGz.
> - We tried to use Gradle built-in variables (project.name etc.)
> - By default the service will not start automatically, so the user is free to 
> set up the service with a custom configuration.
> Notes : 
>  * FPM is required and should be in the path.
>  * FPM does not allow yet to declare /etc/default/kafka as a conffiles (see : 
> https://github.com/jordansissel/fpm/issues/668)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4741) Memory leak in RecordAccumulator.append

2017-08-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136549#comment-16136549
 ] 

Julius Žaromskis commented on KAFKA-4741:
-

I'm having this problem: 
https://stackoverflow.com/questions/45813477/kafka-off-heap-memory-leak

My question is this: would it cause the leak on the producer or on the server?

> Memory leak in RecordAccumulator.append
> ---
>
> Key: KAFKA-4741
> URL: https://issues.apache.org/jira/browse/KAFKA-4741
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Satish Duggana
>Assignee: Satish Duggana
> Fix For: 0.11.0.0
>
>
> RecordAccumulator allocates a `ByteBuffer` from the free memory pool. This 
> buffer should be deallocated whenever the invocation encounters or throws an 
> exception. 
> I added todo comments in the code below for the cases where that buffer should 
> be deallocated.
> {code:title=RecordProducer.java|borderStyle=solid}
> ByteBuffer buffer = free.allocate(size, maxTimeToBlock);
> synchronized (dq) {
> // Need to check if producer is closed again after grabbing 
> the dequeue lock.
> if (closed)
>// todo buffer should be cleared.
> throw new IllegalStateException("Cannot send after the 
> producer is closed.");
> // todo buffer should be cleared up when tryAppend throws an 
> Exception
> RecordAppendResult appendResult = tryAppend(timestamp, key, 
> value, callback, dq);
> if (appendResult != null) {
> // Somebody else found us a batch, return the one we 
> waited for! Hopefully this doesn't happen often...
> free.deallocate(buffer);
> return appendResult;
> }
> {code}
> I will raise PR for the same soon.
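> 
> For illustration, here is a self-contained sketch of the deallocation pattern the todo comments describe (not the actual patch; the pool and queue below are simplified stand-ins): return the pooled buffer on every path that does not hand it over to a batch, including early returns and exceptions.
> {code:java}
> import java.nio.ByteBuffer;
> import java.util.ArrayDeque;
> import java.util.Deque;
> 
> // Illustrative-only sketch of the deallocation pattern (not the actual Kafka
> // patch): the pooled buffer is released on every path that does not hand it
> // over, which covers both the early return and any thrown exception.
> public class BufferDeallocationSketch {
>     // Stand-in for the producer's BufferPool.
>     static class Pool {
>         ByteBuffer allocate(int size) { return ByteBuffer.allocate(size); }
>         void deallocate(ByteBuffer buffer) { /* return buffer to the pool */ }
>     }
> 
>     private final Pool free = new Pool();
>     private final Deque<ByteBuffer> dq = new ArrayDeque<>();
>     private volatile boolean closed = false;
> 
>     ByteBuffer append(int size) {
>         ByteBuffer buffer = free.allocate(size);
>         boolean handedOver = false;
>         try {
>             synchronized (dq) {
>                 if (closed)
>                     throw new IllegalStateException("Cannot send after the producer is closed.");
>                 ByteBuffer existing = dq.peekLast();
>                 if (existing != null && existing.remaining() >= size) {
>                     // Somebody else's batch has room; our buffer is unused.
>                     return existing;
>                 }
>                 dq.addLast(buffer);      // buffer is handed over to the queue here
>                 handedOver = true;
>                 return buffer;
>             }
>         } finally {
>             if (!handedOver)
>                 free.deallocate(buffer); // early return and exception paths end up here
>         }
>     }
> }
> {code}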



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5649) Producer is being closed generating ssl exception

2017-08-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142275#comment-16142275
 ] 

Tomasz Sosiński edited comment on KAFKA-5649 at 8/25/17 9:48 PM:
-

[~ppanero]
Do you use Kafka or Spark on docker?


was (Author: esthom):
Do you use Kafka or Spark on docker?

> Producer is being closed generating ssl exception
> -
>
> Key: KAFKA-5649
> URL: https://issues.apache.org/jira/browse/KAFKA-5649
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.1
> Environment: Spark 2.2.0 and kafka 0.10.2.0
>Reporter: Pablo Panero
>
> On a streaming job using the built-in Kafka source and sink (over SSL), I am 
> getting the following exception:
> Config of the source:
> {code:java}
> val df = spark.readStream
>   .format("kafka")
>   .option("kafka.bootstrap.servers", config.bootstrapServers)
>   .option("failOnDataLoss", value = false)
>   .option("kafka.connections.max.idle.ms", 360)
>   //SSL: this only applies to communication between Spark and Kafka 
> brokers; you are still responsible for separately securing Spark inter-node 
> communication.
>   .option("kafka.security.protocol", "SASL_SSL")
>   .option("kafka.sasl.mechanism", "GSSAPI")
>   .option("kafka.sasl.kerberos.service.name", "kafka")
>   .option("kafka.ssl.truststore.location", "/etc/pki/java/cacerts")
>   .option("kafka.ssl.truststore.password", "changeit")
>   .option("subscribe", config.topicConfigList.keys.mkString(","))
>   .load()
> {code}
> Config of the sink:
> {code:java}
> .writeStream
> .option("checkpointLocation", 
> s"${config.checkpointDir}/${topicConfig._1}/")
> .format("kafka")
> .option("kafka.bootstrap.servers", config.bootstrapServers)
> .option("kafka.connections.max.idle.ms", 360)
> //SSL: this only applies to communication between Spark and Kafka 
> brokers; you are still responsible for separately securing Spark inter-node 
> communication.
> .option("kafka.security.protocol", "SASL_SSL")
> .option("kafka.sasl.mechanism", "GSSAPI")
> .option("kafka.sasl.kerberos.service.name", "kafka")
> .option("kafka.ssl.truststore.location", "/etc/pki/java/cacerts")
> .option("kafka.ssl.truststore.password", "changeit")
> .start()
> {code}
> And in some cases it throws the exception, making the Spark job get stuck in 
> that step. The exception stack trace is the following:
> {code:java}
> 17/07/18 10:11:58 WARN SslTransportLayer: Failed to send SSL Close message 
> java.io.IOException: Broken pipe
>   at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>   at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>   at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:195)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:163)
>   at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:731)
>   at 
> org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:54)
>   at org.apache.kafka.common.network.Selector.doClose(Selector.java:540)
>   at org.apache.kafka.common.network.Selector.close(Selector.java:531)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:378)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1047)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> org.apache.spark.sql.kafka010.CachedKafkaConsumer.poll(CachedKafkaC

[jira] [Commented] (KAFKA-5649) Producer is being closed generating ssl exception

2017-08-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142275#comment-16142275
 ] 

Tomasz Sosiński commented on KAFKA-5649:


Do you use Kafka or Spark on docker?

> Producer is being closed generating ssl exception
> -
>
> Key: KAFKA-5649
> URL: https://issues.apache.org/jira/browse/KAFKA-5649
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.1
> Environment: Spark 2.2.0 and kafka 0.10.2.0
>Reporter: Pablo Panero
>
> On a streaming job using the built-in Kafka source and sink (over SSL), I am 
> getting the following exception:
> Config of the source:
> {code:java}
> val df = spark.readStream
>   .format("kafka")
>   .option("kafka.bootstrap.servers", config.bootstrapServers)
>   .option("failOnDataLoss", value = false)
>   .option("kafka.connections.max.idle.ms", 360)
>   //SSL: this only applies to communication between Spark and Kafka 
> brokers; you are still responsible for separately securing Spark inter-node 
> communication.
>   .option("kafka.security.protocol", "SASL_SSL")
>   .option("kafka.sasl.mechanism", "GSSAPI")
>   .option("kafka.sasl.kerberos.service.name", "kafka")
>   .option("kafka.ssl.truststore.location", "/etc/pki/java/cacerts")
>   .option("kafka.ssl.truststore.password", "changeit")
>   .option("subscribe", config.topicConfigList.keys.mkString(","))
>   .load()
> {code}
> Config of the sink:
> {code:java}
> .writeStream
> .option("checkpointLocation", 
> s"${config.checkpointDir}/${topicConfig._1}/")
> .format("kafka")
> .option("kafka.bootstrap.servers", config.bootstrapServers)
> .option("kafka.connections.max.idle.ms", 360)
> //SSL: this only applies to communication between Spark and Kafka 
> brokers; you are still responsible for separately securing Spark inter-node 
> communication.
> .option("kafka.security.protocol", "SASL_SSL")
> .option("kafka.sasl.mechanism", "GSSAPI")
> .option("kafka.sasl.kerberos.service.name", "kafka")
> .option("kafka.ssl.truststore.location", "/etc/pki/java/cacerts")
> .option("kafka.ssl.truststore.password", "changeit")
> .start()
> {code}
> And in some cases it throws the exception, making the Spark job get stuck in 
> that step. The exception stack trace is the following:
> {code:java}
> 17/07/18 10:11:58 WARN SslTransportLayer: Failed to send SSL Close message 
> java.io.IOException: Broken pipe
>   at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>   at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>   at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:195)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:163)
>   at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:731)
>   at 
> org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:54)
>   at org.apache.kafka.common.network.Selector.doClose(Selector.java:540)
>   at org.apache.kafka.common.network.Selector.close(Selector.java:531)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:378)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1047)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> org.apache.spark.sql.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:298)
>   at 
> org.apache.spark.sql.kafka010.CachedKafkaConsumer.org$apache$spark$sql$kafka010$Cach

[jira] [Commented] (KAFKA-5772) Improve Util classes

2017-08-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138460#comment-16138460
 ] 

Matthias Weßendorf commented on KAFKA-5772:
---

As requested by [~rhauch] for this PR: 
https://github.com/apache/kafka/pull/3370, I've created this JIRA 

> Improve Util classes
> 
>
> Key: KAFKA-5772
> URL: https://issues.apache.org/jira/browse/KAFKA-5772
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Matthias Weßendorf
>
> Utils with static methods should not be instantiated, hence we should improve 
> them by marking the classes final and adding a private constructor as well.
> In addition this is consistent w/ a few of the existing Util classes, such as 
> ByteUtils, see:
> https://github.com/apache/kafka/blob/d345d53e4e5e4f74707e2521aa635b93ba3f1e7b/clients/src/main/java/org/apache/kafka/common/utils/ByteUtils.java#L29-L31



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5772) Improve Util classes

2017-08-23 Thread JIRA
Matthias Weßendorf created KAFKA-5772:
-

 Summary: Improve Util classes
 Key: KAFKA-5772
 URL: https://issues.apache.org/jira/browse/KAFKA-5772
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Matthias Weßendorf


Utility classes with only static methods should not be instantiated; hence we should improve 
them by marking the classes final and adding a private constructor as well.

In addition this is consistent w/ a few of the existing Util classes, such as 
ByteUtils, see:
https://github.com/apache/kafka/blob/d345d53e4e5e4f74707e2521aa635b93ba3f1e7b/clients/src/main/java/org/apache/kafka/common/utils/ByteUtils.java#L29-L31
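
A minimal sketch of the suggested shape (illustrative only; the class name is made up):

{code:java}
// Minimal sketch of the suggested shape: a final utility class with only static
// methods and a private constructor so it cannot be instantiated (name is made up).
public final class ExampleUtils {

    private ExampleUtils() {
        // prevent instantiation
    }

    public static boolean isNullOrEmpty(String value) {
        return value == null || value.isEmpty();
    }
}
{code}

With the private constructor in place, an accidental `new ExampleUtils()` fails at compile time.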



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5836) Kafka Streams - API for specifying internal stream name on join

2017-09-05 Thread JIRA
Lovro Pandžić created KAFKA-5836:


 Summary: Kafka Streams - API for specifying internal stream name 
on join
 Key: KAFKA-5836
 URL: https://issues.apache.org/jira/browse/KAFKA-5836
 Project: Kafka
  Issue Type: New Feature
Reporter: Lovro Pandžić


Automatically generated topic names can be problematic in case of stream 
operation changes/migrations.
I'd like to be able to specify the name of an internal topic so I can avoid the 
creation of a new stream and data "loss" when changing how the stream is built.
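
As a purely hypothetical sketch of what such an API could look like (none of the names below exist in Kafka Streams; the types and methods are made up only to illustrate the request):

{code:java}
// Purely hypothetical sketch of the requested API; "joinNamed" and "JoinName"
// do NOT exist in Kafka Streams. They only illustrate the idea of pinning the
// internal repartition/changelog topic name so a topology change does not
// silently switch to a freshly generated topic. Order, Customer and
// EnrichedOrder are placeholder application types.
KStream<String, Order> orders = builder.stream("orders");
KTable<String, Customer> customers = builder.table("customers");

KStream<String, EnrichedOrder> enriched = orders.joinNamed(
        customers,
        (order, customer) -> new EnrichedOrder(order, customer),
        JoinName.as("orders-customers-join"));   // fixed internal topic prefix
{code}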



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6104) Add unit tests for ClusterConnectionStates

2017-10-22 Thread JIRA
Sönke Liebau created KAFKA-6104:
---

 Summary: Add unit tests for ClusterConnectionStates
 Key: KAFKA-6104
 URL: https://issues.apache.org/jira/browse/KAFKA-6104
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Affects Versions: 0.11.0.0
Reporter: Sönke Liebau
Assignee: Sönke Liebau
Priority: Trivial


There are no unit tests for the ClusterConnectionStates object.

I've created tests to:
* Cycle through connection states and check correct properties during the 
process
* Check authentication exception is correctly stored
* Check that max reconnect backoff is not exceeded during reconnects
* Check that removed connections are correctly reset

There is currently no test that checks whether the reconnect times are 
correctly increased, as that is still being fixed in KAFKA-6101.
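
A rough outline of the first of these tests (an editorial sketch; the exact {{ClusterConnectionStates}} constructor and method signatures differ between versions, so the calls below are assumptions about the shape of the test, not the submitted code):

{code:java}
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Editorial sketch of the "cycle through connection states" test. The
// ClusterConnectionStates signatures vary between Kafka versions, so the calls
// below are assumptions, not a verbatim copy of the actual test.
public class ClusterConnectionStatesTestSketch {

    private final long reconnectBackoffMs = 100;
    private final long reconnectBackoffMaxMs = 10_000;

    @Test
    public void testCycleConnectionStates() {
        ClusterConnectionStates states =
                new ClusterConnectionStates(reconnectBackoffMs, reconnectBackoffMaxMs);
        String nodeId = "0";
        long now = 0;

        assertFalse(states.isConnected(nodeId));
        states.connecting(nodeId, now);          // -> CONNECTING
        assertTrue(states.isConnecting(nodeId));

        states.ready(nodeId);                    // -> READY
        assertTrue(states.isConnected(nodeId));

        states.disconnected(nodeId, now);        // -> DISCONNECTED, backoff starts
        assertFalse(states.isConnected(nodeId));
        assertFalse(states.canConnect(nodeId, now));
        assertTrue(states.canConnect(nodeId, now + reconnectBackoffMs * 2));
    }
}
{code}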



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4827) Kafka connect: error with special characters in connector name

2017-11-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau updated KAFKA-4827:

Fix Version/s: (was: 0.11.0.2)
   (was: 0.10.2.2)
   (was: 0.10.1.2)

> Kafka connect: error with special characters in connector name
> --
>
> Key: KAFKA-4827
> URL: https://issues.apache.org/jira/browse/KAFKA-4827
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Aymeric Bouvet
>Assignee: Sönke Liebau
>Priority: Minor
> Fix For: 1.1.0
>
>
> When creating a connector, if the connector name (and possibly other 
> properties) ends with a carriage return, Kafka Connect will create the config 
> but report an error
> {code}
> cat << EOF > file-connector.json
> {
>   "name": "file-connector\r",
>   "config": {
> "topic": "kafka-connect-logs\r",
> "tasks.max": "1",
> "file": "/var/log/ansible-confluent/connect.log",
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSourceConnector"
>   }
> }
> EOF
> curl -X POST -H "Content-Type: application/json" -H "Accept: 
> application/json" -d @file-connector.json localhost:8083/connectors 
> {code}
> returns an error 500 and logs the following
> {code}
> [2017-03-01 18:25:23,895] WARN  (org.eclipse.jetty.servlet.ServletHandler)
> javax.servlet.ServletException: java.lang.IllegalArgumentException: Illegal 
> character in path at index 27: /connectors/file-connector4
> at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
> at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at 
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Illegal character in path at 
> index 27: /connectors/file-connector4
> at java.net.URI.create(URI.java:852)
> at 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.createConnector(ConnectorsResource.java:100)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.gl

[jira] [Commented] (KAFKA-4827) Kafka connect: error with special characters in connector name

2017-11-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247275#comment-16247275
 ] 

Sönke Liebau commented on KAFKA-4827:
-

I agree that in this particular case the issue was probably a non-encoded name 
when forwarding the request. However, the same issue will also turn up if the 
request is made to the correct worker: when processing a create request, the 
connector is created, and then ConnectorsResource sends a REST request to get 
the config for that connector and returns it to the sender of the original 
request. In this request the name is not properly URL-encoded, which causes the 
connector to be created properly but Connect to return a failure as the response 
to the request.
This will be fixed by KAFKA-4930 - resending the request to the leader worker is 
quite related, but might be better fixed separately in this JIRA. I'll change 
the type of relation on 4930 to reflect this, unless you have objections?
I have set aside some time to work on 4930 next week. The fix itself is easy and 
done, but testing turns out to be a bit complex, as I need a running webserver 
and backend (since URL decoding happens there) and I have not yet found test 
cases that I can piggyback upon.
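
For illustration, here is a hedged sketch of why the raw name breaks the follow-up request and how escaping the path segment avoids the {{IllegalArgumentException}} (this is not the actual Connect patch, and {{URLEncoder}} is used here only to demonstrate the idea of escaping before the URI is built):

{code:java}
import java.io.UnsupportedEncodingException;
import java.net.URI;
import java.net.URLEncoder;

// Hedged sketch (not the actual Connect patch): building the follow-up request
// URI with the raw connector name fails when the name contains a carriage
// return, while percent-encoding the path segment first keeps URI.create() happy.
public class ConnectorNameEncodingSketch {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String connectorName = "file-connector\r";

        try {
            URI.create("/connectors/" + connectorName);
        } catch (IllegalArgumentException e) {
            // "Illegal character in path ..." as seen in the stack trace above
            System.out.println("raw name fails: " + e.getMessage());
        }

        // URLEncoder performs form-encoding; it is used here only to illustrate
        // escaping the segment before the URI is constructed.
        String encoded = URLEncoder.encode(connectorName, "UTF-8");
        URI ok = URI.create("/connectors/" + encoded);
        System.out.println("encoded URI: " + ok);
    }
}
{code}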

> Kafka connect: error with special characters in connector name
> --
>
> Key: KAFKA-4827
> URL: https://issues.apache.org/jira/browse/KAFKA-4827
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Aymeric Bouvet
>Assignee: Sönke Liebau
>Priority: Minor
> Fix For: 0.10.1.2, 0.10.2.2, 0.11.0.2, 1.1.0
>
>
> When creating a connector, if the connector name (and possibly other 
> properties) ends with a carriage return, Kafka Connect will create the config 
> but report an error
> {code}
> cat << EOF > file-connector.json
> {
>   "name": "file-connector\r",
>   "config": {
> "topic": "kafka-connect-logs\r",
> "tasks.max": "1",
> "file": "/var/log/ansible-confluent/connect.log",
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSourceConnector"
>   }
> }
> EOF
> curl -X POST -H "Content-Type: application/json" -H "Accept: 
> application/json" -d @file-connector.json localhost:8083/connectors 
> {code}
> returns an error 500 and logs the following
> {code}
> [2017-03-01 18:25:23,895] WARN  (org.eclipse.jetty.servlet.ServletHandler)
> javax.servlet.ServletException: java.lang.IllegalArgumentException: Illegal 
> character in path at index 27: /connectors/file-connector4
> at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
> at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at 
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>

[jira] [Commented] (KAFKA-6168) Connect Schema comparison is slow for large schemas

2017-11-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16252532#comment-16252532
 ] 

Loïc Monney commented on KAFKA-6168:


Hello [~rhauch],
First, thank you very much for creating this issue and helping us :)
This morning I created an issue in the schema-registry 
(https://github.com/confluentinc/schema-registry/issues/665) that illustrates 
the problems you mentioned above with the maps and the significant usage of 
`hashCode` and `equals`.

> Connect Schema comparison is slow for large schemas
> ---
>
> Key: KAFKA-6168
> URL: https://issues.apache.org/jira/browse/KAFKA-6168
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 1.0.0
>Reporter: Randall Hauch
>Assignee: Ted Yu
>Priority: Critical
> Attachments: 6168.v1.txt
>
>
> The {{ConnectSchema}} implementation computes the hash code every time it's 
> needed, and {{equals(Object)}} is a deep equality check. This extra work can 
> be expensive for large schemas, especially in code like the {{AvroConverter}} 
> (or rather {{AvroData}} in the converter) that uses instances as keys in a 
> hash map that then requires significant use of {{hashCode}} and {{equals}}.
> The {{ConnectSchema}} is an immutable object and should at a minimum 
> precompute the hash code. Also, the order that the fields are compared in 
> {{equals(...)}} should use the cheapest comparisons first (e.g., the {{name}} 
> field is one of the _last_ fields to be checked). Finally, it might be worth 
> considering having each instance precompute and cache a string or byte[] 
> representation of all fields that can be used for faster equality checking.
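> 
> A minimal sketch of the precomputed-hash-code idea for an immutable value class (illustrative only, not the actual {{ConnectSchema}} change):
> {code:java}
> import java.util.ArrayList;
> import java.util.Collections;
> import java.util.List;
> import java.util.Objects;
> 
> // Illustrative-only sketch (not the actual ConnectSchema change): an immutable
> // value class that computes its hash code once in the constructor and orders
> // equals() comparisons so the cheapest checks run first.
> public final class CachedHashSchema {
>     private final String type;
>     private final String name;
>     private final List<String> fields;   // potentially large, expensive to compare
>     private final int hash;              // precomputed once; the class is immutable
> 
>     public CachedHashSchema(String type, String name, List<String> fields) {
>         this.type = type;
>         this.name = name;
>         this.fields = Collections.unmodifiableList(new ArrayList<>(fields));
>         this.hash = Objects.hash(type, name, this.fields);
>     }
> 
>     @Override
>     public int hashCode() {
>         return hash;                                   // no recomputation per call
>     }
> 
>     @Override
>     public boolean equals(Object o) {
>         if (this == o) return true;
>         if (!(o instanceof CachedHashSchema)) return false;
>         CachedHashSchema other = (CachedHashSchema) o;
>         if (hash != other.hash) return false;          // cheapest check first
>         return Objects.equals(type, other.type)        // small fields next
>                 && Objects.equals(name, other.name)
>                 && Objects.equals(fields, other.fields); // deep comparison last
>     }
> }
> {code}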



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5563) Clarify handling of connector name in config

2017-11-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265038#comment-16265038
 ] 

Sönke Liebau commented on KAFKA-5563:
-

[~ewencp] can you take a look at the changes when you get a chance?

> Clarify handling of connector name in config 
> -
>
> Key: KAFKA-5563
> URL: https://issues.apache.org/jira/browse/KAFKA-5563
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Sönke Liebau
>Assignee: Sönke Liebau
>Priority: Minor
>
> The connector name is currently being stored in two places, once at the root 
> level of the connector and once in the config:
> {code:java}
> {
>   "name": "test",
>   "config": {
>   "connector.class": 
> "org.apache.kafka.connect.tools.MockSinkConnector",
>   "tasks.max": "3",
>   "topics": "test-topic",
>   "name": "test"
>   },
>   "tasks": [
>   {
>   "connector": "test",
>   "task": 0
>   }
>   ]
> }
> {code}
> If no name is provided in the "config" element, then the name from the root 
> level is [copied there when the connector is being 
> created|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L95].
>  If however a name is provided in the config then it is not touched, which 
> means it is possible to create a connector with a different name at the root 
> level and in the config like this:
> {code:java}
> {
>   "name": "test1",
>   "config": {
>   "connector.class": 
> "org.apache.kafka.connect.tools.MockSinkConnector",
>   "tasks.max": "3",
>   "topics": "test-topic",
>   "name": "differentname"
>   },
>   "tasks": [
>   {
>   "connector": "test1",
>   "task": 0
>   }
>   ]
> }
> {code}
> I am not aware of any issues that this currently causes, but it is at least 
> confusing and probably not intended behavior and definitely bears potential 
> for bugs, if different functions take the name from different places.
> Would it make sense to add a check to reject requests that provide different 
> names in the request and the config section?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6257) KafkaConsumer was hung when bootstrap servers was not existed

2017-11-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-6257.
-
Resolution: Duplicate

Closing this as duplicate since no contradicting information was added.

> KafkaConsumer was hung when bootstrap servers was not existed
> -
>
> Key: KAFKA-6257
> URL: https://issues.apache.org/jira/browse/KAFKA-6257
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Brian Clark
>Priority: Minor
>
> Could anyone help me on this?
> We have an issue if we enter a non-existent host:port for the bootstrap.servers 
> property on KafkaConsumer. The created KafkaConsumer hangs forever.
> the debug message:
> java.net.ConnectException: Connection timed out: no further information
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
> at 
> org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:95)
> at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:359)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:432)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:199)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:223)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:200)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
> [2017-08-28 09:20:56,400] DEBUG Node -1 disconnected. 
> (org.apache.kafka.clients.NetworkClient)
> [2017-08-28 09:20:56,400] WARN Connection to node -1 could not be 
> established. Broker may not be available. 
> (org.apache.kafka.clients.NetworkClient)
> [2017-08-28 09:20:56,400] DEBUG Give up sending metadata request since no 
> node is available (org.apache.kafka.clients.NetworkClient)
> [2017-08-28 09:20:56,450] DEBUG Initialize connection to node -1 for sending 
> metadata request (org.apache.kafka.clients.NetworkClient)
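> 
> One generic way callers can guard against a poll that never returns (a sketch of a general client-side pattern, not a fix for this ticket) is to call {{wakeup()}} from a watchdog thread, which makes the blocked {{poll()}} throw a {{WakeupException}}; the host name and timeouts below are placeholders:
> {code:java}
> import java.util.Collections;
> import java.util.Properties;
> import java.util.concurrent.Executors;
> import java.util.concurrent.ScheduledExecutorService;
> import java.util.concurrent.TimeUnit;
> import org.apache.kafka.clients.consumer.ConsumerRecords;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.errors.WakeupException;
> 
> // Sketch of a generic client-side guard (not a fix for this ticket): a watchdog
> // calls wakeup() so a poll() that is stuck on an unreachable bootstrap server
> // fails with WakeupException instead of blocking forever.
> public class PollWatchdogSketch {
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "nonexistent-host:9092");  // placeholder
>         props.put("group.id", "watchdog-demo");
>         props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
>         props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
> 
>         KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
>         consumer.subscribe(Collections.singletonList("some-topic"));
> 
>         ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
>         watchdog.schedule(consumer::wakeup, 30, TimeUnit.SECONDS);
> 
>         try {
>             ConsumerRecords<String, String> records = consumer.poll(10_000);
>             System.out.println("received " + records.count() + " records");
>         } catch (WakeupException e) {
>             System.err.println("poll did not complete within 30s, giving up");
>         } finally {
>             watchdog.shutdownNow();
>             consumer.close();
>         }
>     }
> }
> {code}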



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6389) Expose transaction metrics via JMX

2017-12-20 Thread JIRA
Florent Ramière created KAFKA-6389:
--

 Summary: Expose transaction metrics via JMX
 Key: KAFKA-6389
 URL: https://issues.apache.org/jira/browse/KAFKA-6389
 Project: Kafka
  Issue Type: Improvement
  Components: metrics
Affects Versions: 1.0.0
Reporter: Florent Ramière


Expose various metrics from 
https://cwiki.apache.org/confluence/display/KAFKA/Transactional+Messaging+in+Kafka
Especially 
* number of transactions
* number of current transactions
* timeout
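
For reference, once such metrics are registered as MBeans they can be read with plain JMX; here is a sketch of doing so (the JMX port and the object-name pattern for the transaction coordinator metrics are assumptions, not documented names):

{code:java}
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch of reading broker MBeans over JMX. The host, port and the object-name
// pattern used below are assumptions for illustration, not documented names.
public class TransactionMetricsJmxSketch {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed pattern: list what the coordinator exposes under kafka.server.
            Set<ObjectName> names = mbs.queryNames(
                    new ObjectName("kafka.server:type=transaction-coordinator-metrics,*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
        }
    }
}
{code}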



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5563) Clarify handling of connector name in config

2017-11-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256744#comment-16256744
 ] 

Sönke Liebau commented on KAFKA-5563:
-

As KAFKA-4930 is approaching a stable state, I looked at this again and found 
that a check like this is already implemented for [updating a 
connector config|https://github.com/apache/kafka/blob/e31c0c9bdbad432bc21b583bd3c084f05323f642/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L135].

I've prepared a pull request that moves this check into a small helper function 
and calls it from the createConnector method as well. 

I've also tested merging this with the PR for 4930, which works nicely (for 
now); if any problems occur I'll rebase whichever PR gets merged later.
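
The helper itself is small; here is a hedged sketch of what it might look like (the names are assumptions, not the merged code):

{code:java}
import java.util.Map;

// Hedged sketch of the helper (names are assumptions, not the merged code):
// reject a request whose root-level name disagrees with config["name"], and
// copy the root-level name into the config when none is provided.
final class ConnectorNameCheckSketch {
    private ConnectorNameCheckSketch() { }

    static void checkAndPutConnectorConfigName(String connectorName, Map<String, String> config) {
        String nameInConfig = config.get("name");
        if (nameInConfig == null) {
            // Current behaviour for create requests: copy the root-level name in.
            config.put("name", connectorName);
        } else if (!nameInConfig.equals(connectorName)) {
            throw new IllegalArgumentException(
                    "Connector name in the request (" + connectorName
                            + ") does not match the name in the connector config ("
                            + nameInConfig + ")");
        }
    }
}
{code}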

> Clarify handling of connector name in config 
> -
>
> Key: KAFKA-5563
> URL: https://issues.apache.org/jira/browse/KAFKA-5563
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Sönke Liebau
>Assignee: Sönke Liebau
>Priority: Minor
>
> The connector name is currently being stored in two places, once at the root 
> level of the connector and once in the config:
> {code:java}
> {
>   "name": "test",
>   "config": {
>   "connector.class": 
> "org.apache.kafka.connect.tools.MockSinkConnector",
>   "tasks.max": "3",
>   "topics": "test-topic",
>   "name": "test"
>   },
>   "tasks": [
>   {
>   "connector": "test",
>   "task": 0
>   }
>   ]
> }
> {code}
> If no name is provided in the "config" element, then the name from the root 
> level is [copied there when the connector is being 
> created|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L95].
>  If however a name is provided in the config then it is not touched, which 
> means it is possible to create a connector with a different name at the root 
> level and in the config like this:
> {code:java}
> {
>   "name": "test1",
>   "config": {
>   "connector.class": 
> "org.apache.kafka.connect.tools.MockSinkConnector",
>   "tasks.max": "3",
>   "topics": "test-topic",
>   "name": "differentname"
>   },
>   "tasks": [
>   {
>   "connector": "test1",
>   "task": 0
>   }
>   ]
> }
> {code}
> I am not aware of any issues that this currently causes, but it is at least 
> confusing and probably not intended behavior and definitely bears potential 
> for bugs, if different functions take the name from different places.
> Would it make sense to add a check to reject requests that provide different 
> names in the request and the config section?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-414) Evaluate mmap-based writes for Log implementation

2017-11-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255509#comment-16255509
 ] 

Sönke Liebau commented on KAFKA-414:


True, I was looking at the index file code, sorry for the mix-up!
Is this still relevant and should it be reopened, or has there been a decision 
not to go down this road at some point in time? I couldn't find anything in JIRA 
or the wiki on this topic while researching.
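
For readers without the attachment, here is an editorial sketch of the kind of comparison the attached TestLinearWritePerformance.java performs (this is not the attached program; sizes, counts and file paths are placeholders):

{code:java}
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Editorial sketch (not the attached TestLinearWritePerformance.java): a crude
// comparison of small sequential writes through FileChannel vs. a MappedByteBuffer.
public class SmallWriteComparisonSketch {
    static final int WRITE_SIZE = 64;        // placeholder message size
    static final int WRITES = 1_000_000;     // placeholder write count

    public static void main(String[] args) throws Exception {
        File channelFile = File.createTempFile("channel", ".dat");
        File mmapFile = File.createTempFile("mmap", ".dat");
        channelFile.deleteOnExit();
        mmapFile.deleteOnExit();

        byte[] payload = new byte[WRITE_SIZE];

        try (FileChannel channel = new RandomAccessFile(channelFile, "rw").getChannel()) {
            long start = System.nanoTime();
            ByteBuffer buf = ByteBuffer.wrap(payload);
            for (int i = 0; i < WRITES; i++) {
                buf.rewind();
                channel.write(buf);          // one channel write per small message
            }
            report("FileChannel", start);
        }

        try (FileChannel channel = new RandomAccessFile(mmapFile, "rw").getChannel()) {
            MappedByteBuffer mmap =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, (long) WRITE_SIZE * WRITES);
            long start = System.nanoTime();
            for (int i = 0; i < WRITES; i++) {
                mmap.put(payload);           // memory copy into the mapped region
            }
            report("MappedByteBuffer", start);
        }
    }

    static void report(String label, long startNanos) {
        double seconds = (System.nanoTime() - startNanos) / 1e9;
        double mb = (double) WRITE_SIZE * WRITES / (1024 * 1024);
        System.out.printf("%s: %.2f MB/sec%n", label, mb / seconds);
    }
}
{code}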

> Evaluate mmap-based writes for Log implementation
> -
>
> Key: KAFKA-414
> URL: https://issues.apache.org/jira/browse/KAFKA-414
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jay Kreps
>Priority: Minor
> Attachments: TestLinearWritePerformance.java, 
> linear_write_performance.txt
>
>
> Working on another project I noticed that small write performance for 
> FileChannel is really very bad. This likely affects Kafka in the case where 
> messages are produced one at a time or in small batches. I wrote a quick 
> program to evaluate the following options:
> raf = RandomAccessFile
> mmap = MappedByteBuffer
> channel = FileChannel
> For both of the latter two I tried both direct-allocated and non-direct 
> allocated buffers (direct allocation is supposed to be faster).
> Here are the results I saw:
> [jkreps@jkreps-ld valencia]$ java -XX:+UseConcMarkSweepGC -cp 
> target/test-classes -server -Xmx1G -Xms1G valencia.TestLinearWritePerformance 
> $((256*1024)) $((1*1024*1024*1024)) 2
> file_length  size (bytes)  raf (mb/sec)  channel_direct (mb/sec)  mmap_direct (mb/sec)  channel_heap (mb/sec)  mmap_heap (mb/sec)
> 100          1             0.60          0.52                     28.66                 0.55                   50.40
> 200          2             1.18          1.16                     67.84                 1.13                   74.17
> 400          4             2.33          2.26                     121.52                2.23                   122.14
> 800          8             4.72          4.51                     228.39                4.41                   175.20
> 1600         16            9.25          8.96                     393.24                8.88                   314.11
> 3200         32            18.43         17.93                    601.83                17.28                  482.25
> 6400         64            36.25         35.21                    799.98                34.39                  680.39
> 12800        128           69.80         67.52                    963.30                66.21                  870.82
> 25600        256           134.24        129.25                   1064.13               129.01                 1014.00
> 51200        512           247.38        238.24                   1124.71               235.57                 1091.81
> 102400       1024          420.42        411.43                   1170.94               406.57                 1138.80
> 1073741824   2048          671.93        658.96                   1133.63               650.39                 1151.81
> 1073741824   4096          1007.84       989.88                   1165.60               976.10                 1158.49
> 1073741824   8192          1137.12       1145.01                  1189.38               1128.30                1174.66
> 1073741824   16384         1172.63       1228.33                  1192.19               1206.58                1156.37
> 1073741824   32768         1221.13       1295.37                  1170.96               1262.28                1156.65

[jira] [Resolved] (KAFKA-381) Changes made by a request do not affect following requests in the same packet.

2017-11-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-381.

Resolution: Not A Bug

I think we can safely close this issue; the behavior was sufficiently 
investigated and explained.
The behavior would still be the same today, and it is the expected behavior.

> Changes made by a request do not affect following requests in the same packet.
> --
>
> Key: KAFKA-381
> URL: https://issues.apache.org/jira/browse/KAFKA-381
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.7
>Reporter: Samir Jindel
>Priority: Minor
>
> If a packet contains a produce request followed immediately by a fetch 
> request, the fetch request will not have access to the data produced by the 
> prior request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-517) Ensure that we escape the metric names if they include user strings

2017-11-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256991#comment-16256991
 ] 

Sönke Liebau commented on KAFKA-517:


A recent 
[commit|https://github.com/apache/kafka/commit/9be71f7bdcd147aee7a360a4ccf400acb858a056] 
introduced a jmxSanitizer to properly quote JMX strings where necessary. It is 
currently applied to tags and mbean strings, but not to the name.
This is probably not a real problem, as I couldn't find any occurrences of 
dynamic metric names, so any exception should surface during testing when new 
metrics are added; still, it is a potential exception that can be avoided by 
sanitizing names as well.

We could apply the transformation 
[here|https://github.com/apache/kafka/blob/7672e9ec3def7af6797bc0ecf254ac694efdfad5/core/src/main/scala/kafka/metrics/KafkaMetricsGroup.scala#L64]. 
As far as I can tell (and I am by no means an expert on JMX) this should not 
change anything for properly named metrics (which is currently all of them), but 
if someone ever adds a metric with an illegal name it would no longer cause an 
exception.

I am unsure whether this is a useful addition or whether we'd rather have new 
metrics fail so that the author can change the name to something valid. Maybe 
someone can comment on my musings; I'm happy to create a small pull request if 
we deem this useful. If not, I believe we can close the issue.
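
As a rough illustration of the idea (this is not the KafkaMetricsGroup code, and 
Kafka's own sanitizer may differ in the details), quoting the name via 
javax.management.ObjectName.quote is enough to keep an otherwise illegal value 
from breaking the MBean name:

{code:java}
import javax.management.ObjectName;

// Sketch only: quote a metric name if it contains characters that are not legal in an
// unquoted ObjectName value, so that building the MBean name cannot fail with
// MalformedObjectNameException for that reason.
public final class MetricNameSanitizerSketch {

    static String sanitize(String name) {
        if (name.contains(",") || name.contains("=") || name.contains(":")
                || name.contains("\"") || name.contains("*") || name.contains("?")) {
            // ObjectName.quote wraps the value in quotes and escapes problematic characters.
            return ObjectName.quote(name);
        }
        return name;
    }

    public static void main(String[] args) throws Exception {
        String metricName = "name,with,commas";   // illustrative illegal value
        ObjectName mbean = new ObjectName(
                "kafka.server:type=SomeMetricsGroup,name=" + sanitize(metricName));
        System.out.println(mbean);                // value is now safely quoted
    }
}
{code}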

> Ensure that we escape the metric names if they include user strings
> ---
>
> Key: KAFKA-517
> URL: https://issues.apache.org/jira/browse/KAFKA-517
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Jay Kreps
>  Labels: bugs
>
> JMX has limits on valid strings. We need to check validity before blindly 
> creating a metric that includes a given topic name. If we fail to do this we 
> will get an exception like this:
> javax.management.MalformedObjectNameException: Unterminated key property part
>   at javax.management.ObjectName.construct(ObjectName.java:540)
>   at javax.management.ObjectName.<init>(ObjectName.java:1403)
>   at 
> com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
>   at 
> com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
>   at 
> com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
>   at 
> com.yammer.metrics.core.MetricsRegistry.newMeter(MetricsRegistry.java:240)
>   at com.yammer.metrics.Metrics.newMeter(Metrics.java:245)
>   at 
> kafka.metrics.KafkaMetricsGroup$class.newMeter(KafkaMetricsGroup.scala:46)
>   at kafka.server.FetcherStat.newMeter(AbstractFetcherThread.scala:180)
>   at kafka.server.FetcherStat.<init>(AbstractFetcherThread.scala:182)
>   at 
> kafka.server.FetcherStat$$anonfun$2.apply(AbstractFetcherThread.scala:186)
>   at 
> kafka.server.FetcherStat$$anonfun$2.apply(AbstractFetcherThread.scala:186)
>   at kafka.utils.Pool.getAndMaybePut(Pool.scala:60)
>   at 
> kafka.server.FetcherStat$.getFetcherStat(AbstractFetcherThread.scala:190)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4827) Kafka connect: error with special characters in connector name

2017-11-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247658#comment-16247658
 ] 

Sönke Liebau commented on KAFKA-4827:
-

Sure, feel free to reassign!

> Kafka connect: error with special characters in connector name
> --
>
> Key: KAFKA-4827
> URL: https://issues.apache.org/jira/browse/KAFKA-4827
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Aymeric Bouvet
>Assignee: Sönke Liebau
>Priority: Minor
> Fix For: 0.10.1.2, 0.10.2.2, 1.1.0, 0.11.0.3
>
>
> When creating a connector, if the connector name (and possibly other 
> properties) ends with a carriage return, kafka-connect will create the config 
> but report an error
> {code}
> cat << EOF > file-connector.json
> {
>   "name": "file-connector\r",
>   "config": {
> "topic": "kafka-connect-logs\r",
> "tasks.max": "1",
> "file": "/var/log/ansible-confluent/connect.log",
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSourceConnector"
>   }
> }
> EOF
> curl -X POST -H "Content-Type: application/json" -H "Accept: 
> application/json" -d @file-connector.json localhost:8083/connectors 
> {code}
> returns an error 500 and logs the following
> {code}
> [2017-03-01 18:25:23,895] WARN  (org.eclipse.jetty.servlet.ServletHandler)
> javax.servlet.ServletException: java.lang.IllegalArgumentException: Illegal 
> character in path at index 27: /connectors/file-connector4
> at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
> at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at 
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Illegal character in path at 
> index 27: /connectors/file-connector4
> at java.net.URI.create(URI.java:852)
> at 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.createConnector(ConnectorsResource.java:100)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.gl

[jira] [Comment Edited] (KAFKA-4827) Kafka connect: error with special characters in connector name

2017-11-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247658#comment-16247658
 ] 

Sönke Liebau edited comment on KAFKA-4827 at 11/10/17 4:16 PM:
---

Sure, feel free to reassign!

If it helps, I've committed the fix code 
[here|https://github.com/soenkeliebau/kafka/blob/KAFKA-4827/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L101]. 
That was the easiest way I could find of properly URL-encoding a path element 
without relying on extra libs.
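
For reference, the approach boils down to something like the following sketch 
(plain JDK only; the exact code on the linked branch may differ):

{code:java}
import java.net.URI;
import java.net.URISyntaxException;

// Sketch: the multi-argument java.net.URI constructor percent-encodes characters that
// are illegal in a path, so a trailing carriage return in the connector name becomes
// %0D instead of producing "Illegal character in path".
public final class ConnectorPathEncodingSketch {

    static String connectorPath(String connectorName) throws URISyntaxException {
        return new URI(null, null, "/connectors/" + connectorName, null).toString();
    }

    public static void main(String[] args) throws URISyntaxException {
        System.out.println(connectorPath("file-connector\r"));  // /connectors/file-connector%0D
    }
}
{code}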


was (Author: sliebau):
Sure, feel free to reassign!

> Kafka connect: error with special characters in connector name
> --
>
> Key: KAFKA-4827
> URL: https://issues.apache.org/jira/browse/KAFKA-4827
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Aymeric Bouvet
>Assignee: Sönke Liebau
>Priority: Minor
> Fix For: 0.10.1.2, 0.10.2.2, 1.1.0, 0.11.0.3
>
>
> When creating a connector, if the connector name (and possibly other 
> properties) ends with a carriage return, kafka-connect will create the config 
> but report an error
> {code}
> cat << EOF > file-connector.json
> {
>   "name": "file-connector\r",
>   "config": {
> "topic": "kafka-connect-logs\r",
> "tasks.max": "1",
> "file": "/var/log/ansible-confluent/connect.log",
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSourceConnector"
>   }
> }
> EOF
> curl -X POST -H "Content-Type: application/json" -H "Accept: 
> application/json" -d @file-connector.json localhost:8083/connectors 
> {code}
> returns an error 500 and logs the following
> {code}
> [2017-03-01 18:25:23,895] WARN  (org.eclipse.jetty.servlet.ServletHandler)
> javax.servlet.ServletException: java.lang.IllegalArgumentException: Illegal 
> character in path at index 27: /connectors/file-connector4
> at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
> at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at 
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Illegal character in path at 
> index 27: /connectors/file-connector4
> at java.net.URI.create(URI.java:852)
> at 
> org.apache.kafka.connect.runtime.re

[jira] [Commented] (KAFKA-6177) kafka-mirror-maker.sh RecordTooLargeException

2017-11-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16249639#comment-16249639
 ] 

Rémi REY commented on KAFKA-6177:
-

Hello all,

Could anyone acknowledge this ticket, please?

Thank you.

> kafka-mirror-maker.sh RecordTooLargeException
> -
>
> Key: KAFKA-6177
> URL: https://issues.apache.org/jira/browse/KAFKA-6177
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.1.1
> Environment: centos 7
>Reporter: Rémi REY
>Priority: Minor
>  Labels: support
> Attachments: consumer.config, producer.config, server.properties
>
>
> Hi all,
> I am facing an issue with kafka-mirror-maker.sh.
> We have 2 kafka clusters with the same configuration and mirror maker 
> instances in charge of the mirroring between the clusters.
> We haven't changed the default configuration for the message size, so the 
> default limit of 1000012 bytes is expected on both clusters.
> We are facing the following error at the mirroring side:
> {code}
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] 
> ERROR Error when sending message to topic my_topic_name with key: 81 bytes, 
> value: 1000272 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
> org.apache.kafka.common.errors.RecordTooLargeException: The request included 
> a message larger than the max message size the server will accept.
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] 
> ERROR Error when sending message to topic my_topic_name with key: 81 bytes, 
> value: 13846 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
> java.lang.IllegalStateException: Producer is closed forcefully.
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:513)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:493)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:156)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> java.lang.Thread.run(Thread.java:745)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] 
> FATAL [mirrormaker-thread-0] Mirror maker thread failure due to  
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
> java.lang.IllegalStateException: Cannot send after the producer is closed.
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:185)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:474)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:657)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> scala.collection.Iterator$class.foreach(Iterator.scala:893)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:434)
> {code}
> Why am I getting this error?
> {code}
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] 
> ERROR Error when sending message to topic my_topic_name with key: 81 bytes, 
> value: 1000272 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
> org.apache.kafka.common.err

[jira] [Resolved] (KAFKA-1130) "log.dirs" is a confusing property name

2017-11-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-1130.
-
Resolution: Won't Fix

Due to the age of the last comment on this issue and the fact that there has 
not been a lot of discussion around the naming of this parameter in the recent 
past, I believe we can close this issue.

> "log.dirs" is a confusing property name
> ---
>
> Key: KAFKA-1130
> URL: https://issues.apache.org/jira/browse/KAFKA-1130
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.0
>Reporter: David Arthur
>Priority: Minor
> Attachments: KAFKA-1130.diff
>
>
> "log.dirs" is a somewhat misleading config name. The term "log" comes from an 
> internal Kafka class name, and shouldn't leak out into the public API (in 
> this case, the config).
> Something like "data.dirs" would be less confusing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6234) Transient failure in kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpoint

2017-11-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16258826#comment-16258826
 ] 

Sönke Liebau commented on KAFKA-6234:
-

Is the 1 second timeout in that call relevant to what is being tested, or could 
we just increase it a little to fix the flaky test? Especially with stopping and 
restarting all brokers before that call, there may be some internal work going 
on that simply needs longer than a second to complete.
On the other hand, the waitUntilTrue call is obviously intended to handle a 
leader-not-available exception, but it is being interrupted by the 
TimeoutException thrown in KafkaFutureImpl.get, so it could be extended to catch 
this exception as well.
Since the delete request is inside the waitUntilTrue and we probably don't want 
to retry the entire delete just because we didn't get a low watermark returned, 
I think increasing the timeout a bit is the way to go as a first step.
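
To illustrate the second option (the test itself is Scala; this plain-Java sketch 
with invented names only shows the shape of the idea), a per-attempt 
TimeoutException would be treated as "condition not yet true" and retried until 
an overall deadline, instead of aborting the wait loop:

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

// Sketch of a waitUntilTrue-style loop that swallows per-attempt timeouts and failures
// and keeps retrying until the overall deadline expires.
public final class RetryOnTimeoutSketch {

    static boolean waitForLowWatermark(Supplier<CompletableFuture<Long>> attempt,
                                       long attemptTimeoutMs,
                                       long overallTimeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + overallTimeoutMs;
        while (System.currentTimeMillis() < deadline) {
            try {
                Long lowWatermark = attempt.get().get(attemptTimeoutMs, TimeUnit.MILLISECONDS);
                if (lowWatermark != null) {
                    return true;                 // condition satisfied
                }
            } catch (TimeoutException e) {
                // the broker may still be recovering from the restart: retry instead of failing
            } catch (ExecutionException e) {
                // e.g. a wrapped LeaderNotAvailableException while metadata settles: retry as well
            }
            Thread.sleep(100);
        }
        return false;
    }
}
{code}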

> Transient failure in 
> kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpoint
> --
>
> Key: KAFKA-6234
> URL: https://issues.apache.org/jira/browse/KAFKA-6234
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>
> Saw this once: 
> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2669/testReport/junit/kafka.api/AdminClientIntegrationTest/testLogStartOffsetCheckpoint/
> {code}
> Stacktrace
> java.util.concurrent.TimeoutException
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225)
>   at 
> kafka.api.AdminClientIntegrationTest.$anonfun$testLogStartOffsetCheckpoint$3(AdminClientIntegrationTest.scala:762)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:858)
>   at 
> kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpoint(AdminClientIntegrationTest.scala:756)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at java.base/java.lang.Thread.run(Thread.java:844)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6234) Transient failure in kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpoint

2017-11-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16258851#comment-16258851
 ] 

Sönke Liebau commented on KAFKA-6234:
-

I've run a few tests and looked at the amount by which the 1s timeout was 
exceeded; on my machine this was never more than a tenth of a second, so this 
looks a lot like the timeout value is simply too short. I've created a pull 
request to increase the timeout to 5s (with some headroom to account for busy 
build servers, tests not being run in isolation, etc.).

> Transient failure in 
> kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpoint
> --
>
> Key: KAFKA-6234
> URL: https://issues.apache.org/jira/browse/KAFKA-6234
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>
> Saw this once: 
> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2669/testReport/junit/kafka.api/AdminClientIntegrationTest/testLogStartOffsetCheckpoint/
> {code}
> Stacktrace
> java.util.concurrent.TimeoutException
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225)
>   at 
> kafka.api.AdminClientIntegrationTest.$anonfun$testLogStartOffsetCheckpoint$3(AdminClientIntegrationTest.scala:762)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:858)
>   at 
> kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpoint(AdminClientIntegrationTest.scala:756)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at java.base/java.lang.Thread.run(Thread.java:844)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6238) Issues with protocol version when applying a rolling upgrade to 1.0.0

2017-11-20 Thread JIRA
Diego Louzán created KAFKA-6238:
---

 Summary: Issues with protocol version when applying a rolling 
upgrade to 1.0.0
 Key: KAFKA-6238
 URL: https://issues.apache.org/jira/browse/KAFKA-6238
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.0
Reporter: Diego Louzán


Hello,

I am trying to perform a rolling upgrade from 0.10.0.1 to 1.0.0, and according 
to the instructions in the documentation, I should only have to upgrade the 
"inter.broker.protocol.version" parameter in the first step. But after setting 
the value to "0.10.0" or "0.10.0.1" (tried both), the broker refuses to start 
with the following error:

{code}
[2017-11-20 08:28:46,620] FATAL  (kafka.Kafka$)
java.lang.IllegalArgumentException: requirement failed: 
log.message.format.version 1.0-IV0 cannot be used when 
inter.broker.protocol.version is set to 0.10.0.1
at scala.Predef$.require(Predef.scala:224)
at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1205)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1170)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:881)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:878)
at 
kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
at kafka.Kafka$.main(Kafka.scala:82)
at kafka.Kafka.main(Kafka.scala)
{code}

I checked the instructions for rolling upgrades to previous versions (namely 
0.11.0.0), and there it is stated that the "log.message.format.version" 
parameter also needs to be set in two stages. I have tried that and the upgrade 
worked. It seems this still applies to version 1.0.0, so I'm not sure whether 
this is wrong documentation or an actual issue with kafka, since it should work 
as stated in the docs.
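
For reference, the two-stage sequence that worked for me looks roughly like this 
(values shown for the 0.10.0.1 to 1.0.0 path described above; this reflects my 
reading of the upgrade docs, not an authoritative recipe):

{code}
# Step 1: before swapping in the 1.0.0 binaries, pin both properties on every broker
inter.broker.protocol.version=0.10.0
log.message.format.version=0.10.0
# ...perform the rolling restart onto the new binaries...

# Step 2: once all brokers run 1.0.0, bump the protocol version and roll again
inter.broker.protocol.version=1.0
log.message.format.version=0.10.0

# Step 3: after all clients are upgraded, raise the message format and roll once more
log.message.format.version=1.0
{code}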

Regards,
Diego Louzán



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-679) Phabricator for code review

2017-11-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257000#comment-16257000
 ] 

Sönke Liebau commented on KAFKA-679:


Is there still any activity in this direction ongoing or pending? Seeing as the 
ticket has not been updated in five years and review/coding/etc. has moved to 
GitHub, I'd assume this can be closed?

> Phabricator for code review
> ---
>
> Key: KAFKA-679
> URL: https://issues.apache.org/jira/browse/KAFKA-679
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Neha Narkhede
>Assignee: Sriram Subramanian
>
> Sriram proposed adding phabricator support for code reviews. 
> From http://phabricator.org/ : "Phabricator is a open source collection of 
> web applications which make it easier to write, review, and share source 
> code. It is currently available as an early release. Phabricator was 
> developed at Facebook."
> It's open source so pretty much anyone could host an instance of this 
> software.
> To begin with, there will be a public-facing instance located at 
> http://reviews.facebook.net (sponsored by Facebook and hosted by the OSUOSL 
> http://osuosl.org).
> We can use this JIRA to deal with adding (and ensuring) Apache-friendly 
> support that will allow us to do code reviews with Phabricator for Kafka.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (KAFKA-4) Confusing Error mesage from producer when no kafka brokers are available

2017-11-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau updated KAFKA-4:
-
Comment: was deleted

(was: Better error message introduced by fix for KAFKA-5179)

> Confusing Error mesage from producer when no kafka brokers are available
> 
>
> Key: KAFKA-4
> URL: https://issues.apache.org/jira/browse/KAFKA-4
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.6
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> If no kafka brokers are available the producer gives the following error: 
> Exception in thread "main" kafka.common.InvalidPartitionException: Invalid 
> number of partitions: 0 
> Valid values are > 0 
> at 
> kafka.producer.Producer.kafka$producer$Producer$$getPartition(Producer.scala:144)
>  
> at kafka.producer.Producer$$anonfun$3.apply(Producer.scala:112) 
> at kafka.producer.Producer$$anonfun$3.apply(Producer.scala:102) 
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>  
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>  
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>  
> at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32) 
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:206) 
> at scala.collection.mutable.WrappedArray.map(WrappedArray.scala:32) 
> at kafka.producer.Producer.send(Producer.scala:102) 
> at kafka.javaapi.producer.Producer.send(Producer.scala:101) 
> at com.linkedin.nusviewer.PublishTestMessage.main(PublishTestMessage.java:45) 
> This is confusing. The problem is that no brokers are available, we should 
> make this more clear.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4) Confusing Error mesage from producer when no kafka brokers are available

2017-11-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-4.
--
Resolution: Fixed  (was: Unresolved)

> Confusing Error mesage from producer when no kafka brokers are available
> 
>
> Key: KAFKA-4
> URL: https://issues.apache.org/jira/browse/KAFKA-4
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.6
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> If no kafka brokers are available the producer gives the following error: 
> Exception in thread "main" kafka.common.InvalidPartitionException: Invalid 
> number of partitions: 0 
> Valid values are > 0 
> at 
> kafka.producer.Producer.kafka$producer$Producer$$getPartition(Producer.scala:144)
>  
> at kafka.producer.Producer$$anonfun$3.apply(Producer.scala:112) 
> at kafka.producer.Producer$$anonfun$3.apply(Producer.scala:102) 
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>  
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>  
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>  
> at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32) 
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:206) 
> at scala.collection.mutable.WrappedArray.map(WrappedArray.scala:32) 
> at kafka.producer.Producer.send(Producer.scala:102) 
> at kafka.javaapi.producer.Producer.send(Producer.scala:101) 
> at com.linkedin.nusviewer.PublishTestMessage.main(PublishTestMessage.java:45) 
> This is confusing. The problem is that no brokers are available, we should 
> make this more clear.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-414) Evaluate mmap-based writes for Log implementation

2017-11-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255038#comment-16255038
 ] 

Sönke Liebau commented on KAFKA-414:


The switch to using MappedByteBuffers was made as part of KAFKA-506 as far as I 
can tell, so we can probably close this issue.

> Evaluate mmap-based writes for Log implementation
> -
>
> Key: KAFKA-414
> URL: https://issues.apache.org/jira/browse/KAFKA-414
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jay Kreps
>Priority: Minor
> Attachments: TestLinearWritePerformance.java, 
> linear_write_performance.txt
>
>
> Working on another project I noticed that small write performance for 
> FileChannel is really very bad. This likely effects Kafka in the case where 
> messages are produced one at a time or in small batches. I wrote a quick 
> program to evaluate the following options:
> raf = RandomAccessFile
> mmap = MappedByteBuffer
> channel = FileChannel
> For both of the later two I tried both direct-allocated and non-direct 
> allocated buffers (direct allocation is supposed to be faster).
> Here are the results I saw:
> [jkreps@jkreps-ld valencia]$ java -XX:+UseConcMarkSweepGC -cp 
> target/test-classes -server -Xmx1G -Xms1G valencia.TestLinearWritePerformance 
> $((256*1024)) $((1*1024*1024*1024)) 2
> file_length  size (bytes)  raf (mb/sec)  channel_direct (mb/sec)  mmap_direct (mb/sec)  channel_heap (mb/sec)  mmap_heap (mb/sec)
> 100          1             0.60          0.52                     28.66                 0.55                   50.40
> 200          2             1.18          1.16                     67.84                 1.13                   74.17
> 400          4             2.33          2.26                     121.52                2.23                   122.14
> 800          8             4.72          4.51                     228.39                4.41                   175.20
> 1600         16            9.25          8.96                     393.24                8.88                   314.11
> 3200         32            18.43         17.93                    601.83                17.28                  482.25
> 6400         64            36.25         35.21                    799.98                34.39                  680.39
> 12800        128           69.80         67.52                    963.30                66.21                  870.82
> 25600        256           134.24        129.25                   1064.13               129.01                 1014.00
> 51200        512           247.38        238.24                   1124.71               235.57                 1091.81
> 102400       1024          420.42        411.43                   1170.94               406.57                 1138.80
> 1073741824   2048          671.93        658.96                   1133.63               650.39                 1151.81
> 1073741824   4096          1007.84       989.88                   1165.60               976.10                 1158.49
> 1073741824   8192          1137.12       1145.01                  1189.38               1128.30                1174.66
> 1073741824   16384         1172.63       1228.33                  1192.19               1206.58                1156.37
> 1073741824   32768         1221.13       1295.37                  1170.96               1262.28                1156.65

[jira] [Commented] (KAFKA-4) Confusing Error mesage from producer when no kafka brokers are available

2017-11-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254981#comment-16254981
 ] 

Sönke Liebau commented on KAFKA-4:
--

Better error message introduced by fix for KAFKA-5179

> Confusing Error mesage from producer when no kafka brokers are available
> 
>
> Key: KAFKA-4
> URL: https://issues.apache.org/jira/browse/KAFKA-4
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.6
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> If no kafka brokers are available the producer gives the following error: 
> Exception in thread "main" kafka.common.InvalidPartitionException: Invalid 
> number of partitions: 0 
> Valid values are > 0 
> at 
> kafka.producer.Producer.kafka$producer$Producer$$getPartition(Producer.scala:144)
>  
> at kafka.producer.Producer$$anonfun$3.apply(Producer.scala:112) 
> at kafka.producer.Producer$$anonfun$3.apply(Producer.scala:102) 
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>  
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>  
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>  
> at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32) 
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:206) 
> at scala.collection.mutable.WrappedArray.map(WrappedArray.scala:32) 
> at kafka.producer.Producer.send(Producer.scala:102) 
> at kafka.javaapi.producer.Producer.send(Producer.scala:101) 
> at com.linkedin.nusviewer.PublishTestMessage.main(PublishTestMessage.java:45) 
> This is confusing. The problem is that no brokers are available, we should 
> make this more clear.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-4) Confusing Error mesage from producer when no kafka brokers are available

2017-11-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254975#comment-16254975
 ] 

Sönke Liebau edited comment on KAFKA-4 at 11/16/17 9:10 AM:


Current error message when no broker is available is:

{code}
WARN Connection to node -1 could not be established. Broker may not be 
available. (org.apache.kafka.clients.NetworkClient)
{code}

This message was introduced by KAFKA-5179, so I think it is safe to say that we 
can close this ticket as well with the same fix version. Before that there were 
other error messages that also improved upon this one, but I don't think we need 
to provide the entire history here.



was (Author: sliebau):
Current error message when no broker is available is:

{code}
WARN Connection to node -1 could not be established. Broker may not be 
available. (org.apache.kafka.clients.NetworkClient)
{code}

This message was introduced by KAFKA-5179, so I think it is safe to say that we 
can close this ticket as well with the same fix version.


> Confusing Error mesage from producer when no kafka brokers are available
> 
>
> Key: KAFKA-4
> URL: https://issues.apache.org/jira/browse/KAFKA-4
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.6
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> If no kafka brokers are available the producer gives the following error: 
> Exception in thread "main" kafka.common.InvalidPartitionException: Invalid 
> number of partitions: 0 
> Valid values are > 0 
> at 
> kafka.producer.Producer.kafka$producer$Producer$$getPartition(Producer.scala:144)
>  
> at kafka.producer.Producer$$anonfun$3.apply(Producer.scala:112) 
> at kafka.producer.Producer$$anonfun$3.apply(Producer.scala:102) 
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>  
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>  
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>  
> at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32) 
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:206) 
> at scala.collection.mutable.WrappedArray.map(WrappedArray.scala:32) 
> at kafka.producer.Producer.send(Producer.scala:102) 
> at kafka.javaapi.producer.Producer.send(Producer.scala:101) 
> at com.linkedin.nusviewer.PublishTestMessage.main(PublishTestMessage.java:45) 
> This is confusing. The problem is that no brokers are available, we should 
> make this more clear.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4) Confusing Error mesage from producer when no kafka brokers are available

2017-11-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau updated KAFKA-4:
-
Fix Version/s: 0.11.0.0

> Confusing Error mesage from producer when no kafka brokers are available
> 
>
> Key: KAFKA-4
> URL: https://issues.apache.org/jira/browse/KAFKA-4
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.6
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> If no kafka brokers are available the producer gives the following error: 
> Exception in thread "main" kafka.common.InvalidPartitionException: Invalid 
> number of partitions: 0 
> Valid values are > 0 
> at 
> kafka.producer.Producer.kafka$producer$Producer$$getPartition(Producer.scala:144)
>  
> at kafka.producer.Producer$$anonfun$3.apply(Producer.scala:112) 
> at kafka.producer.Producer$$anonfun$3.apply(Producer.scala:102) 
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>  
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>  
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>  
> at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32) 
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:206) 
> at scala.collection.mutable.WrappedArray.map(WrappedArray.scala:32) 
> at kafka.producer.Producer.send(Producer.scala:102) 
> at kafka.javaapi.producer.Producer.send(Producer.scala:101) 
> at com.linkedin.nusviewer.PublishTestMessage.main(PublishTestMessage.java:45) 
> This is confusing. The problem is that no brokers are available, we should 
> make this more clear.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6163) Broker should fail fast on startup if an error occurs while loading logs

2017-11-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16237984#comment-16237984
 ] 

Xavier Léauté commented on KAFKA-6163:
--

[~ijuma] in this case it was a different fatal error {{java.lang.InternalError: 
a fault occurred in an unsafe memory access operation}}
You probably don't want to try to recover from that.

> Broker should fail fast on startup if an error occurs while loading logs
> 
>
> Key: KAFKA-6163
> URL: https://issues.apache.org/jira/browse/KAFKA-6163
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0, 1.0.0
>Reporter: Xavier Léauté
>Priority: Normal
>
> If the broker fails to load one of the logs during startup, we currently 
> don't fail fast. The {{LogManager}} will log an error and initiate the 
> shutdown sequence, but continue loading all the remaining logs before 
> shutting down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6164) ClientQuotaManager threads prevent shutdown when encountering an error loading logs

2017-11-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-6164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16237977#comment-16237977
 ] 

Xavier Léauté commented on KAFKA-6164:
--

[~ted_yu] that's certainly an option assuming we don't care about orderly 
shutdown of the quota manager. It would probably be nicer though to properly 
manage the quota manager lifecycle.
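
The broker code is Scala, but the shape of the idea is roughly the following 
(a sketch with invented names, not the actual KafkaServer code): whoever creates 
the quota managers should also guarantee their shutdown, even when a later 
startup step fails before KafkaApis exists.

{code:java}
// Sketch only: own the quota managers' lifecycle in the startup path, so their reaper
// threads are stopped even if log loading fails before KafkaApis is ever instantiated.
public final class ServerStartupSketch {

    interface QuotaManagers {
        void shutdown();
    }

    void startup(QuotaManagers quotaManagers) {
        boolean started = false;
        try {
            loadLogs();                      // may throw, as in KAFKA-6163
            // ...instantiate KafkaApis, which then takes over responsibility for quotaManagers...
            started = true;
        } finally {
            if (!started) {
                quotaManagers.shutdown();    // non-daemon reaper threads no longer block JVM exit
            }
        }
    }

    private void loadLogs() {
        // placeholder for LogManager startup
    }
}
{code}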

> ClientQuotaManager threads prevent shutdown when encountering an error 
> loading logs
> ---
>
> Key: KAFKA-6164
> URL: https://issues.apache.org/jira/browse/KAFKA-6164
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0, 1.0.0
>Reporter: Xavier Léauté
>Priority: Major
>
> While diagnosing KAFKA-6163, we noticed that when the broker initiates a 
> shutdown sequence in response to an error loading the logs, the process never 
> exits. The JVM appears to be waiting indefinitely for several non-daemon 
> threads to terminate.
> The threads in question are {{ThrottledRequestReaper-Request}}, 
> {{ThrottledRequestReaper-Produce}}, and {{ThrottledRequestReaper-Fetch}}, so 
> it appears we don't properly shut down {{ClientQuotaManager}} in this 
> situation.
> QuotaManager shutdown is currently handled by KafkaApis; however, KafkaApis 
> will never be instantiated in the first place if we encounter an error 
> loading the logs, so quota managers are left dangling in that case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6177) kafka-mirror-maker.sh RecordTooLargeException

2017-11-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rémi REY updated KAFKA-6177:

Description: 
Hi all,

I am facing an issue with kafka-mirror-maker.sh.
We have 2 kafka clusters with the same configuration and mirror maker instances 
in charge of the mirroring between the clusters.

We haven't changed the default configuration for the message size, so the 
default limit of 1000012 bytes is expected on both clusters.

We are facing the following error at the mirroring side:

{code}
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] ERROR 
Error when sending message to topic my_topic_name with key: 81 bytes, value: 
1000272 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
org.apache.kafka.common.errors.RecordTooLargeException: The request included a 
message larger than the max message size the server will accept.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] ERROR 
Error when sending message to topic my_topic_name with key: 81 bytes, value: 
13846 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
java.lang.IllegalStateException: Producer is closed forcefully.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:513)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:493)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:156)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
java.lang.Thread.run(Thread.java:745)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] FATAL 
[mirrormaker-thread-0] Mirror maker thread failure due to  
(kafka.tools.MirrorMaker$MirrorMakerThread)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
java.lang.IllegalStateException: Cannot send after the producer is closed.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:185)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:474)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:657)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.Iterator$class.foreach(Iterator.scala:893)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:434)
{code}

Why am I getting this error?

{code}
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] ERROR 
Error when sending message to topic my_topic_name with key: 81 bytes, value: 
1000272 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
org.apache.kafka.common.errors.RecordTooLargeException: The request included a 
message larger than the max message size the server will accept.
{code}

How can mirror maker encounter a 1000272-byte message while the kafka cluster 
being mirrored has the default limit of 1000012 bytes per message?

Find the mirrormaker consumer and producer config files attached.

Thanks for your inputs.

  was:
Hi all,

I am facing an issue with kafka-mirror-maker.sh.
We have 2 kafka clusters with the same configuration and mirror maker instances 
in charge of the mirroring between the clusters.

We haven't changed the default configuration for the message size, so the 
default limit of 1000012 bytes is expected on both clusters.

We are facing the following error at the mirroring side:

{code}
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] ERROR 
Error when sending message to topic my_topic_name with key: 81 bytes, value: 
1000272 bytes with error

[jira] [Updated] (KAFKA-6177) kafka-mirror-maker.sh RecordTooLargeException

2017-11-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rémi REY updated KAFKA-6177:

Attachment: server.properties

Source kafka cluster config file sample.

> kafka-mirror-maker.sh RecordTooLargeException
> -
>
> Key: KAFKA-6177
> URL: https://issues.apache.org/jira/browse/KAFKA-6177
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.1.1
> Environment: centos 7
>Reporter: Rémi REY
>Priority: Minor
>  Labels: support
> Attachments: consumer.config, producer.config, server.properties
>
>
> Hi all,
> I am facing an issue with kafka-mirror-maker.sh.
> We have 2 kafka clusters with the same configuration and mirror maker 
> instances in charge of the mirroring between the clusters.
> We haven't changed the default configuration for the message size, so the 
> default limit of 1000012 bytes is expected on both clusters.
> We are facing the following error at the mirroring side:
> {code}
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] 
> ERROR Error when sending message to topic my_topic_name with key: 81 bytes, 
> value: 1000272 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
> org.apache.kafka.common.errors.RecordTooLargeException: The request included 
> a message larger than the max message size the server will accept.
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] 
> ERROR Error when sending message to topic my_topic_name with key: 81 bytes, 
> value: 13846 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
> java.lang.IllegalStateException: Producer is closed forcefully.
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:513)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:493)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:156)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> java.lang.Thread.run(Thread.java:745)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] 
> FATAL [mirrormaker-thread-0] Mirror maker thread failure due to  
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
> java.lang.IllegalStateException: Cannot send after the producer is closed.
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:185)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:474)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:657)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> scala.collection.Iterator$class.foreach(Iterator.scala:893)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:434)
> {code}
> Why am I getting this error?
> {code}
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] 
> ERROR Error when sending message to topic my_topic_name with key: 81 bytes, 
> value: 1000272 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
> org.apache.kafka.common.errors.RecordTooLargeException:

[jira] [Created] (KAFKA-6177) kafka-mirror-maker.sh RecordTooLargeException

2017-11-06 Thread JIRA
Rémi REY created KAFKA-6177:
---

 Summary: kafka-mirror-maker.sh RecordTooLargeException
 Key: KAFKA-6177
 URL: https://issues.apache.org/jira/browse/KAFKA-6177
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.10.1.1
 Environment: centos 7
Reporter: Rémi REY
Priority: Minor
 Attachments: consumer.config, producer.config

Hi all,

I am facing an issue with kafka-mirror-maker.sh.
We have 2 kafka clusters with the same configuration and mirror maker instances 
in charge of the mirroring between the clusters.

We haven't changed the default configuration for the message size, so the 
default limit of 1000012 bytes is expected on both clusters.

We are facing the following error at the mirroring side:

Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] ERROR 
Error when sending message to topic my_topic_name with key: 81 bytes, value: 
1000272 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
org.apache.kafka.common.errors.RecordTooLargeException: The request included a 
message larger than the max message size the server will accept.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] ERROR 
Error when sending message to topic my_topic_name with key: 81 bytes, value: 
13846 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
java.lang.IllegalStateException: Producer is closed forcefully.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:513)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:493)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:156)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
java.lang.Thread.run(Thread.java:745)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] FATAL 
[mirrormaker-thread-0] Mirror maker thread failure due to  
(kafka.tools.MirrorMaker$MirrorMakerThread)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
java.lang.IllegalStateException: Cannot send after the producer is closed.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:185)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:474)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:657)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.Iterator$class.foreach(Iterator.scala:893)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:434)

Why am I getting this error?
I expect that messages that could enter the first cluster can be replicated to 
the second cluster without raising any error about the message size.
Is there any configuration adjustment required on the MirrorMaker side to have it 
support the default message size of the brokers?

Find the MirrorMaker consumer and producer config files attached.

Thanks for your inputs.
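
A hedged illustration of the settings involved (an editorial sketch, not the attached 
configs): the error text above is the broker-side rejection governed by the target 
brokers' message.max.bytes (and the topic-level max.message.bytes), while the 
MirrorMaker producer's own ceiling is max.request.size. A minimal Java sketch with a 
placeholder bootstrap address:

{code}
// Hedged sketch, not the attached producer.config: shows the producer-side limit
// that has to pair with the target brokers' message.max.bytes.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class MirrorProducerSizing {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "target-cluster:9092");            // placeholder
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());
        // Allow single requests large enough for the biggest record the source
        // cluster accepts; the producer default for max.request.size is 1048576.
        props.put("max.request.size", "2000000");
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // producer.send(...) would go here; records are still rejected unless
            // the target brokers and topic also accept messages of this size.
        }
    }
}
{code}

For kafka-mirror-maker.sh the same producer settings would go into the attached 
producer.config; raising only the producer side does not help unless the target 
brokers and topic accept the larger records as well.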



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6177) kafka-mirror-maker.sh RecordTooLargeException

2017-11-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rémi REY updated KAFKA-6177:

Description: 
Hi all,

I am facing an issue with kafka-mirror-maker.sh.
We have 2 Kafka clusters with the same configuration and MirrorMaker instances 
in charge of the mirroring between the clusters.

We haven't changed the default configuration for the message size, so the default 
limit of 1000012 bytes is expected on both clusters.

We are facing the following error on the mirroring side:

{code}
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] ERROR 
Error when sending message to topic my_topic_name with key: 81 bytes, value: 
1000272 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
org.apache.kafka.common.errors.RecordTooLargeException: The request included a 
message larger than the max message size the server will accept.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] ERROR 
Error when sending message to topic my_topic_name with key: 81 bytes, value: 
13846 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
java.lang.IllegalStateException: Producer is closed forcefully.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:513)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:493)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:156)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
java.lang.Thread.run(Thread.java:745)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] FATAL 
[mirrormaker-thread-0] Mirror maker thread failure due to  
(kafka.tools.MirrorMaker$MirrorMakerThread)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
java.lang.IllegalStateException: Cannot send after the producer is closed.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:185)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:474)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:657)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.Iterator$class.foreach(Iterator.scala:893)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:434)
{code}

Why am I getting this error?
I expect that messages that could enter the first cluster can be replicated to 
the second cluster without raising any error about the message size.
Is there any configuration adjustment required on the MirrorMaker side to have it 
support the default message size of the brokers?

Find the MirrorMaker consumer and producer config files attached.

Thanks for your inputs.

  was:
Hi all,

I am facing an issue with kafka-mirror-maker.sh.
We have 2 Kafka clusters with the same configuration and MirrorMaker instances 
in charge of the mirroring between the clusters.

We haven't changed the default configuration for the message size, so the default 
limit of 1000012 bytes is expected on both clusters.

We are facing the following error on the mirroring side:

Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] ERROR 
Error when sending message to topic my_topic_name with key: 81 bytes, value: 
1000272 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
org.apache.kafka.common.errors.RecordTooLargeException: The request included a 
message larger than the max message size the server will accept.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] ERROR 
Error when sending message to topic

[jira] [Updated] (KAFKA-6178) Broker is listed as only ISR for all partitions it is leader of

2017-11-06 Thread AS (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AS updated KAFKA-6178:
--
Description: 
We're running a 15-broker cluster on Windows machines, and one of the brokers, 
10, is the only ISR on all partitions that it is the leader of. On partitions 
where it isn't the leader, it seems to follow the leader fine. This is an 
excerpt from 'describe':


{quote}Topic: ClientQosCombined  Partition: 458  Leader: 10  Replicas: 
10,6,7,8,9,0,1   Isr: 10
Topic: ClientQosCombined  Partition: 459  Leader: 11  Replicas: 
11,7,8,9,0,1,10 Isr: 0,10,1,9,7,11,8{quote}


The server.log files all seem to be pretty standard, and the only indication of 
this issue is the following pattern that often repeats:


2017-11-06 20:28:25,207 [INFO] kafka.cluster.Partition 
[kafka-request-handler-8:] - Partition [ClientQosCombined,398] on broker 10: 
Expanding ISR for partition [ClientQosCombined,398] from 10 to 5,10
2017-11-06 20:28:39,382 [INFO] kafka.cluster.Partition [kafka-scheduler-1:] - 
Partition [ClientQosCombined,398] on broker 10: Shrinking ISR for partition 
[ClientQosCombined,398] from 5,10 to 10


This happens for each of the partitions that 10 leads. This is the only topic that we 
currently have in our cluster. The __consumer_offsets topic seems completely 
normal in terms of ISR counts. The controller is broker 5, which is cycling 
through attempting and failing to trigger leader elections on the partitions led 
by broker 10. From the controller log on broker 5:


2017-11-06 20:45:04,857 [INFO] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Starting preferred replica leader 
election for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] kafka.controller.PartitionStateMachine 
[kafka-scheduler-0:] - [Partition state machine on Controller 5]: Invoking 
state change to OnlinePartition for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] 
kafka.controller.PreferredReplicaPartitionLeaderSelector [kafka-scheduler-0:] - 
[PreferredReplicaPartitionLeaderSelector]: Current leader 10 for partition 
[ClientQosCombined,375] is not the preferred replica. Trigerring preferred 
replica leader election
2017-11-06 20:45:04,857 [WARN] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Partition [ClientQosCombined,375] failed 
to complete preferred replica leader election. Leader is 10


I've also attached the logs and output from broker 10. Any idea what's wrong 
here? 

  was:
We're running a 15-broker cluster on Windows machines, and one of the brokers, 
10, is the only ISR on all partitions that it is the leader of. On partitions 
where it isn't the leader, it seems to follow the leader fine. This is an 
excerpt from 'describe':


Topic: ClientQosCombined  Partition: 458  Leader: 10  Replicas: 
10,6,7,8,9,0,1   Isr: 10
Topic: ClientQosCombined  Partition: 459  Leader: 11  Replicas: 
11,7,8,9,0,1,10 Isr: 0,10,1,9,7,11,8


The server.log files all seem to be pretty standard, and the only indication of 
this issue is the following pattern that often repeats:


2017-11-06 20:28:25,207 [INFO] kafka.cluster.Partition 
[kafka-request-handler-8:] - Partition [ClientQosCombined,398] on broker 10: 
Expanding ISR for partition [ClientQosCombined,398] from 10 to 5,10
2017-11-06 20:28:39,382 [INFO] kafka.cluster.Partition [kafka-scheduler-1:] - 
Partition [ClientQosCombined,398] on broker 10: Shrinking ISR for partition 
[ClientQosCombined,398] from 5,10 to 10


This happens for each of the partitions that 10 leads. This is the only topic that we 
currently have in our cluster. The __consumer_offsets topic seems completely 
normal in terms of ISR counts. The controller is broker 5, which is cycling 
through attempting and failing to trigger leader elections on the partitions led 
by broker 10. From the controller log on broker 5:


2017-11-06 20:45:04,857 [INFO] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Starting preferred replica leader 
election for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] kafka.controller.PartitionStateMachine 
[kafka-scheduler-0:] - [Partition state machine on Controller 5]: Invoking 
state change to OnlinePartition for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] 
kafka.controller.PreferredReplicaPartitionLeaderSelector [kafka-scheduler-0:] - 
[PreferredReplicaPartitionLeaderSelector]: Current leader 10 for partition 
[ClientQosCombined,375] is not the preferred replica. Trigerring preferred 
replica leader election
2017-11-06 20:45:04,857 [WARN] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Partition [ClientQosCombined,375] failed 
to complete preferred replica leader election. Leader is 10


I've also attached the logs and output from broker 10. Any idea what's wrong 
here? 


> Broker is listed as only ISR for all partitions it is lea

[jira] [Updated] (KAFKA-6178) Broker is listed as only ISR for all partitions it is leader of

2017-11-06 Thread AS (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AS updated KAFKA-6178:
--
Description: 
We're running a 15-broker cluster on Windows machines, and one of the brokers, 
10, is the only ISR on all partitions that it is the leader of. On partitions 
where it isn't the leader, it seems to follow the leader fine. This is an 
excerpt from 'describe':


Topic: ClientQosCombined  Partition: 458  Leader: 10  Replicas: 
10,6,7,8,9,0,1   Isr: 10
Topic: ClientQosCombined  Partition: 459  Leader: 11  Replicas: 
11,7,8,9,0,1,10 Isr: 0,10,1,9,7,11,8


The server.log files all seem to be pretty standard, and the only indication of 
this issue is the following pattern that often repeats:


2017-11-06 20:28:25,207 [INFO] kafka.cluster.Partition 
[kafka-request-handler-8:] - Partition [ClientQosCombined,398] on broker 10: 
Expanding ISR for partition [ClientQosCombined,398] from 10 to 5,10
2017-11-06 20:28:39,382 [INFO] kafka.cluster.Partition [kafka-scheduler-1:] - 
Partition [ClientQosCombined,398] on broker 10: Shrinking ISR for partition 
[ClientQosCombined,398] from 5,10 to 10


This happens for each of the partitions that 10 leads. This is the only topic that we 
currently have in our cluster. The __consumer_offsets topic seems completely 
normal in terms of ISR counts. The controller is broker 5, which is cycling 
through attempting and failing to trigger leader elections on the partitions led 
by broker 10. From the controller log on broker 5:


2017-11-06 20:45:04,857 [INFO] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Starting preferred replica leader 
election for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] kafka.controller.PartitionStateMachine 
[kafka-scheduler-0:] - [Partition state machine on Controller 5]: Invoking 
state change to OnlinePartition for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] 
kafka.controller.PreferredReplicaPartitionLeaderSelector [kafka-scheduler-0:] - 
[PreferredReplicaPartitionLeaderSelector]: Current leader 10 for partition 
[ClientQosCombined,375] is not the preferred replica. Trigerring preferred 
replica leader election
2017-11-06 20:45:04,857 [WARN] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Partition [ClientQosCombined,375] failed 
to complete preferred replica leader election. Leader is 10


I've also attached the logs and output from broker 10. Any idea what's wrong 
here? 

  was:
We're running a 15-broker cluster on Windows machines, and one of the brokers, 
10, is the only ISR on all partitions that it is the leader of. On partitions 
where it isn't the leader, it seems to follow the leader fine. This is an 
excerpt from 'describe':

{{
Topic: ClientQosCombined  Partition: 458  Leader: 10  Replicas: 
10,6,7,8,9,0,1   Isr: 10
Topic: ClientQosCombined  Partition: 459  Leader: 11  Replicas: 
11,7,8,9,0,1,10 Isr: 0,10,1,9,7,11,8
}}

The server.log files all seem to be pretty standard, and the only indication of 
this issue is the following pattern that often repeats:

{{2017-11-06 20:28:25,207 [INFO] kafka.cluster.Partition 
[kafka-request-handler-8:] - Partition [ClientQosCombined,398] on broker 10: 
Expanding ISR for partition [ClientQosCombined,398] from 10 to 5,10
2017-11-06 20:28:39,382 [INFO] kafka.cluster.Partition [kafka-scheduler-1:] - 
Partition [ClientQosCombined,398] on broker 10: Shrinking ISR for partition 
[ClientQosCombined,398] from 5,10 to 10}}

This happens for each of the partitions that 10 leads. This is the only topic that we 
currently have in our cluster. The __consumer_offsets topic seems completely 
normal in terms of ISR counts. The controller is broker 5, which is cycling 
through attempting and failing to trigger leader elections on the partitions led 
by broker 10. From the controller log on broker 5:

{{2017-11-06 20:45:04,857 [INFO] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Starting preferred replica leader 
election for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] kafka.controller.PartitionStateMachine 
[kafka-scheduler-0:] - [Partition state machine on Controller 5]: Invoking 
state change to OnlinePartition for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] 
kafka.controller.PreferredReplicaPartitionLeaderSelector [kafka-scheduler-0:] - 
[PreferredReplicaPartitionLeaderSelector]: Current leader 10 for partition 
[ClientQosCombined,375] is not the preferred replica. Trigerring preferred 
replica leader election
2017-11-06 20:45:04,857 [WARN] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Partition [ClientQosCombined,375] failed 
to complete preferred replica leader election. Leader is 10}}

I've also attached the logs and output from broker 10. Any idea what's wrong 
here? 


> Broker is listed as only ISR for all partitions it is lea

[jira] [Updated] (KAFKA-6178) Broker is listed as only ISR for all partitions it is leader of

2017-11-06 Thread AS (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AS updated KAFKA-6178:
--
Description: 
We're running a 15-broker cluster on Windows machines, and one of the brokers, 
10, is the only ISR on all partitions that it is the leader of. On partitions 
where it isn't the leader, it seems to follow the leader fine. This is an 
excerpt from 'describe':

{{
Topic: ClientQosCombined  Partition: 458  Leader: 10  Replicas: 
10,6,7,8,9,0,1   Isr: 10
Topic: ClientQosCombined  Partition: 459  Leader: 11  Replicas: 
11,7,8,9,0,1,10 Isr: 0,10,1,9,7,11,8
}}

The server.log files all seem to be pretty standard, and the only indication of 
this issue is the following pattern that often repeats:

{{2017-11-06 20:28:25,207 [INFO] kafka.cluster.Partition 
[kafka-request-handler-8:] - Partition [ClientQosCombined,398] on broker 10: 
Expanding ISR for partition [ClientQosCombined,398] from 10 to 5,10
2017-11-06 20:28:39,382 [INFO] kafka.cluster.Partition [kafka-scheduler-1:] - 
Partition [ClientQosCombined,398] on broker 10: Shrinking ISR for partition 
[ClientQosCombined,398] from 5,10 to 10}}

This happens for each of the partitions that 10 leads. This is the only topic that we 
currently have in our cluster. The __consumer_offsets topic seems completely 
normal in terms of ISR counts. The controller is broker 5, which is cycling 
through attempting and failing to trigger leader elections on the partitions led 
by broker 10. From the controller log on broker 5:

{{2017-11-06 20:45:04,857 [INFO] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Starting preferred replica leader 
election for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] kafka.controller.PartitionStateMachine 
[kafka-scheduler-0:] - [Partition state machine on Controller 5]: Invoking 
state change to OnlinePartition for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] 
kafka.controller.PreferredReplicaPartitionLeaderSelector [kafka-scheduler-0:] - 
[PreferredReplicaPartitionLeaderSelector]: Current leader 10 for partition 
[ClientQosCombined,375] is not the preferred replica. Trigerring preferred 
replica leader election
2017-11-06 20:45:04,857 [WARN] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Partition [ClientQosCombined,375] failed 
to complete preferred replica leader election. Leader is 10}}

I've also attached the logs and output from broker 10. Any idea what's wrong 
here? 

  was:
We're running a 15-broker cluster on Windows machines, and one of the brokers, 
10, is the only ISR on all partitions that it is the leader of. On partitions 
where it isn't the leader, it seems to follow the leader fine. This is an 
excerpt from 'describe':

{{Topic: ClientQosCombined  Partition: 458  Leader: 10  Replicas: 
10,6,7,8,9,0,1   Isr: 10
Topic: ClientQosCombined  Partition: 459  Leader: 11  Replicas: 
11,7,8,9,0,1,10 Isr: 0,10,1,9,7,11,8}}

The server.log files all seem to be pretty standard, and the only indication of 
this issue is the following pattern that often repeats:

{{2017-11-06 20:28:25,207 [INFO] kafka.cluster.Partition 
[kafka-request-handler-8:] - Partition [ClientQosCombined,398] on broker 10: 
Expanding ISR for partition [ClientQosCombined,398] from 10 to 5,10
2017-11-06 20:28:39,382 [INFO] kafka.cluster.Partition [kafka-scheduler-1:] - 
Partition [ClientQosCombined,398] on broker 10: Shrinking ISR for partition 
[ClientQosCombined,398] from 5,10 to 10}}

This happens for each of the partitions that 10 leads. This is the only topic that we 
currently have in our cluster. The __consumer_offsets topic seems completely 
normal in terms of ISR counts. The controller is broker 5, which is cycling 
through attempting and failing to trigger leader elections on the partitions led 
by broker 10. From the controller log on broker 5:

{{2017-11-06 20:45:04,857 [INFO] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Starting preferred replica leader 
election for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] kafka.controller.PartitionStateMachine 
[kafka-scheduler-0:] - [Partition state machine on Controller 5]: Invoking 
state change to OnlinePartition for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] 
kafka.controller.PreferredReplicaPartitionLeaderSelector [kafka-scheduler-0:] - 
[PreferredReplicaPartitionLeaderSelector]: Current leader 10 for partition 
[ClientQosCombined,375] is not the preferred replica. Trigerring preferred 
replica leader election
2017-11-06 20:45:04,857 [WARN] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Partition [ClientQosCombined,375] failed 
to complete preferred replica leader election. Leader is 10}}

I've also attached the logs and output from broker 10. Any idea what's wrong 
here? 


> Broker is listed as only ISR for all partitions it is lea

[jira] [Created] (KAFKA-6178) Broker is listed as only ISR for all partitions it is leader of

2017-11-06 Thread AS (JIRA)
AS created KAFKA-6178:
-

 Summary: Broker is listed as only ISR for all partitions it is 
leader of
 Key: KAFKA-6178
 URL: https://issues.apache.org/jira/browse/KAFKA-6178
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.1.0
 Environment: Windows
Reporter: AS
 Attachments: KafkaServiceOutput.txt, log-cleaner.log, server.log

We're running a 15-broker cluster on Windows machines, and one of the brokers, 
10, is the only ISR on all partitions that it is the leader of. On partitions 
where it isn't the leader, it seems to follow the leader fine. This is an 
excerpt from 'describe':

{{Topic: ClientQosCombined  Partition: 458  Leader: 10  Replicas: 
10,6,7,8,9,0,1   Isr: 10
Topic: ClientQosCombined  Partition: 459  Leader: 11  Replicas: 
11,7,8,9,0,1,10 Isr: 0,10,1,9,7,11,8}}

The server.log files all seem to be pretty standard, and the only indication of 
this issue is the following pattern that often repeats:

{{2017-11-06 20:28:25,207 [INFO] kafka.cluster.Partition 
[kafka-request-handler-8:] - Partition [ClientQosCombined,398] on broker 10: 
Expanding ISR for partition [ClientQosCombined,398] from 10 to 5,10
2017-11-06 20:28:39,382 [INFO] kafka.cluster.Partition [kafka-scheduler-1:] - 
Partition [ClientQosCombined,398] on broker 10: Shrinking ISR for partition 
[ClientQosCombined,398] from 5,10 to 10}}

This happens for each of the partitions that 10 leads. This is the only topic that we 
currently have in our cluster. The __consumer_offsets topic seems completely 
normal in terms of ISR counts. The controller is broker 5, which is cycling 
through attempting and failing to trigger leader elections on the partitions led 
by broker 10. From the controller log on broker 5:

{{2017-11-06 20:45:04,857 [INFO] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Starting preferred replica leader 
election for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] kafka.controller.PartitionStateMachine 
[kafka-scheduler-0:] - [Partition state machine on Controller 5]: Invoking 
state change to OnlinePartition for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] 
kafka.controller.PreferredReplicaPartitionLeaderSelector [kafka-scheduler-0:] - 
[PreferredReplicaPartitionLeaderSelector]: Current leader 10 for partition 
[ClientQosCombined,375] is not the preferred replica. Trigerring preferred 
replica leader election
2017-11-06 20:45:04,857 [WARN] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Partition [ClientQosCombined,375] failed 
to complete preferred replica leader election. Leader is 10}}

I've also attached the logs and output from broker 10. Any idea what's wrong 
here? 
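
For completeness, a hedged sketch of pulling the same leader/ISR view programmatically 
with the Java AdminClient (an API that only exists in client releases newer than the 
0.10.1.0 affected here; the broker address is a placeholder):

{code}
// Hedged sketch: print leader, replicas and ISR for every partition of the topic.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class IsrReport {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker5:9092");   // placeholder address
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin
                    .describeTopics(Collections.singleton("ClientQosCombined"))
                    .all().get().get("ClientQosCombined");
            for (TopicPartitionInfo p : desc.partitions()) {
                System.out.printf("partition=%d leader=%s replicas=%s isr=%s%n",
                        p.partition(), p.leader(), p.replicas(), p.isr());
            }
        }
    }
}
{code}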



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6178) Broker is listed as only ISR for all partitions it is leader of

2017-11-06 Thread AS (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AS updated KAFKA-6178:
--
Description: 
We're running a 15-broker cluster on Windows machines, and one of the brokers, 
10, is the only ISR on all partitions that it is the leader of. On partitions 
where it isn't the leader, it seems to follow the leader fine. This is an 
excerpt from 'describe':


Topic: ClientQosCombined  Partition: 458  Leader: 10  Replicas: 
10,6,7,8,9,0,1   Isr: 10
Topic: ClientQosCombined  Partition: 459  Leader: 11  Replicas: 
11,7,8,9,0,1,10 Isr: 0,10,1,9,7,11,8


The server.log files all seem to be pretty standard, and the only indication of 
this issue is the following pattern that often repeats:


2017-11-06 20:28:25,207 [INFO] kafka.cluster.Partition 
[kafka-request-handler-8:] - Partition [ClientQosCombined,398] on broker 10: 
Expanding ISR for partition [ClientQosCombined,398] from 10 to 5,10
2017-11-06 20:28:39,382 [INFO] kafka.cluster.Partition [kafka-scheduler-1:] - 
Partition [ClientQosCombined,398] on broker 10: Shrinking ISR for partition 
[ClientQosCombined,398] from 5,10 to 10


This happens for each of the partitions that 10 leads. This is the only topic that we 
currently have in our cluster. The __consumer_offsets topic seems completely 
normal in terms of ISR counts. The controller is broker 5, which is cycling 
through attempting and failing to trigger leader elections on the partitions led 
by broker 10. From the controller log on broker 5:


2017-11-06 20:45:04,857 [INFO] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Starting preferred replica leader 
election for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] kafka.controller.PartitionStateMachine 
[kafka-scheduler-0:] - [Partition state machine on Controller 5]: Invoking 
state change to OnlinePartition for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] 
kafka.controller.PreferredReplicaPartitionLeaderSelector [kafka-scheduler-0:] - 
[PreferredReplicaPartitionLeaderSelector]: Current leader 10 for partition 
[ClientQosCombined,375] is not the preferred replica. Trigerring preferred 
replica leader election
2017-11-06 20:45:04,857 [WARN] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Partition [ClientQosCombined,375] failed 
to complete preferred replica leader election. Leader is 10


I've also attached the logs and output from broker 10. Any idea what's wrong 
here? 

  was:
We're running a 15-broker cluster on Windows machines, and one of the brokers, 
10, is the only ISR on all partitions that it is the leader of. On partitions 
where it isn't the leader, it seems to follow the leader fine. This is an 
excerpt from 'describe':


bq. Topic: ClientQosCombined  Partition: 458  Leader: 10  Replicas: 
10,6,7,8,9,0,1   Isr: 10
bq. Topic: ClientQosCombined  Partition: 459  Leader: 11  Replicas: 
11,7,8,9,0,1,10 Isr: 0,10,1,9,7,11,8


The server.log files all seem to be pretty standard, and the only indication of 
this issue is the following pattern that often repeats:


2017-11-06 20:28:25,207 [INFO] kafka.cluster.Partition 
[kafka-request-handler-8:] - Partition [ClientQosCombined,398] on broker 10: 
Expanding ISR for partition [ClientQosCombined,398] from 10 to 5,10
2017-11-06 20:28:39,382 [INFO] kafka.cluster.Partition [kafka-scheduler-1:] - 
Partition [ClientQosCombined,398] on broker 10: Shrinking ISR for partition 
[ClientQosCombined,398] from 5,10 to 10


This happens for each of the partitions that 10 leads. This is the only topic that we 
currently have in our cluster. The __consumer_offsets topic seems completely 
normal in terms of ISR counts. The controller is broker 5, which is cycling 
through attempting and failing to trigger leader elections on the partitions led 
by broker 10. From the controller log on broker 5:


2017-11-06 20:45:04,857 [INFO] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Starting preferred replica leader 
election for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] kafka.controller.PartitionStateMachine 
[kafka-scheduler-0:] - [Partition state machine on Controller 5]: Invoking 
state change to OnlinePartition for partitions [ClientQosCombined,375]
2017-11-06 20:45:04,857 [INFO] 
kafka.controller.PreferredReplicaPartitionLeaderSelector [kafka-scheduler-0:] - 
[PreferredReplicaPartitionLeaderSelector]: Current leader 10 for partition 
[ClientQosCombined,375] is not the preferred replica. Trigerring preferred 
replica leader election
2017-11-06 20:45:04,857 [WARN] kafka.controller.KafkaController 
[kafka-scheduler-0:] - [Controller 5]: Partition [ClientQosCombined,375] failed 
to complete preferred replica leader election. Leader is 10


I've also attached the logs and output from broker 10. Any idea what's wrong 
here? 


> Broker is listed as only ISR for all partitions it is lea

[jira] [Updated] (KAFKA-6159) Link to upgrade docs in 1.0.0 release notes is broken

2017-11-02 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Schröder updated KAFKA-6159:
---
Summary: Link to upgrade docs in 1.0.0 release notes is broken  (was: Link 
to upgrade docs in 100 release notes is broken)

> Link to upgrade docs in 1.0.0 release notes is broken
> -
>
> Key: KAFKA-6159
> URL: https://issues.apache.org/jira/browse/KAFKA-6159
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.0
>Reporter: Martin Schröder
>
> The release notes for 1.0.0 point to 
> http://kafka.apache.org/100/documentation.html#upgrade for "upgrade 
> documentation", but that gives a 404.
> Maybe you mean http://kafka.apache.org/documentation.html#upgrade ?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6159) Link to upgrade docs in 1.0.0 release notes is broken

2017-11-02 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Schröder updated KAFKA-6159:
---
Description: 
The release notes for 1.0.0 
(https://dist.apache.org/repos/dist/release/kafka/1.0.0/RELEASE_NOTES.html) 
point to http://kafka.apache.org/100/documentation.html#upgrade for "upgrade 
documentation", but that gives a 404.

Maybe you mean http://kafka.apache.org/documentation.html#upgrade ?

  was:
The release notes for 1.0.0 point to 
http://kafka.apache.org/100/documentation.html#upgrade for "upgrade 
documentation", but that gives a 404.

Maybe you mean http://kafka.apache.org/documentation.html#upgrade ?


> Link to upgrade docs in 1.0.0 release notes is broken
> -
>
> Key: KAFKA-6159
> URL: https://issues.apache.org/jira/browse/KAFKA-6159
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.0
>Reporter: Martin Schröder
>
> The release notes for 1.0.0 
> (https://dist.apache.org/repos/dist/release/kafka/1.0.0/RELEASE_NOTES.html) 
> point to http://kafka.apache.org/100/documentation.html#upgrade for "upgrade 
> documentation", but that gives a 404.
> Maybe you mean http://kafka.apache.org/documentation.html#upgrade ?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6159) Link to upgrade docs in 100 release notes is broken

2017-11-02 Thread JIRA
Martin Schröder created KAFKA-6159:
--

 Summary: Link to upgrade docs in 100 release notes is broken
 Key: KAFKA-6159
 URL: https://issues.apache.org/jira/browse/KAFKA-6159
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.0
Reporter: Martin Schröder


The release notes for 1.0.0 point to 
http://kafka.apache.org/100/documentation.html#upgrade for "upgrade 
documentation", but that gives a 404.

Maybe you mean http://kafka.apache.org/documentation.html#upgrade ?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6159) Link to upgrade docs in 1.0.0 release notes is broken

2017-11-02 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Schröder updated KAFKA-6159:
---
Labels: release-notes  (was: )

> Link to upgrade docs in 1.0.0 release notes is broken
> -
>
> Key: KAFKA-6159
> URL: https://issues.apache.org/jira/browse/KAFKA-6159
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.0
>Reporter: Martin Schröder
>  Labels: release-notes
>
> The release notes for 1.0.0 
> (https://dist.apache.org/repos/dist/release/kafka/1.0.0/RELEASE_NOTES.html) 
> point to http://kafka.apache.org/100/documentation.html#upgrade for "upgrade 
> documentation", but that gives a 404.
> Maybe you mean http://kafka.apache.org/documentation.html#upgrade ?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6163) Broker should fail fast on startup if an error occurs while loading logs

2017-11-02 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xavier Léauté updated KAFKA-6163:
-
Affects Version/s: 1.0.0

> Broker should fail fast on startup if an error occurs while loading logs
> 
>
> Key: KAFKA-6163
> URL: https://issues.apache.org/jira/browse/KAFKA-6163
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0, 1.0.0
>Reporter: Xavier Léauté
>Priority: Normal
>
> If the broker fails to load one of the logs during startup, we currently 
> don't fail fast. The {{LogManager}} will log an error and initiate the 
> shutdown sequence, but continue loading all the remaining logs before 
> shutting down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6163) Broker should fail fast on startup if an error occurs while loading logs

2017-11-02 Thread JIRA
Xavier Léauté created KAFKA-6163:


 Summary: Broker should fail fast on startup if an error occurs 
while loading logs
 Key: KAFKA-6163
 URL: https://issues.apache.org/jira/browse/KAFKA-6163
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Xavier Léauté
Priority: Normal


If the broker fails to load one of the logs during startup, we currently don't 
fail fast. The {{LogManager}} will log an error and initiate the shutdown 
sequence, but continue loading all the remaining logs before shutting down.
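
As a purely illustrative sketch of the difference (hypothetical method names, not the 
actual {{LogManager}} code), the current behaviour catches per-log failures and keeps 
going, whereas fail-fast lets the first failure abort startup:

{code}
// Illustrative only; loadLog() and the loop are hypothetical stand-ins.
import java.util.List;

public class FailFastSketch {
    static void loadAllCurrentBehaviour(List<String> logDirs) {
        for (String dir : logDirs) {
            try {
                loadLog(dir);
            } catch (RuntimeException e) {
                // behaviour described in this issue: log the error, schedule a
                // shutdown, but keep iterating over the remaining logs
                System.err.println("error loading " + dir + ": " + e);
            }
        }
    }

    static void loadAllFailFast(List<String> logDirs) {
        for (String dir : logDirs) {
            loadLog(dir);   // first failure propagates and aborts startup immediately
        }
    }

    static void loadLog(String dir) {
        // placeholder for the real log-loading work
    }
}
{code}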



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6164) ClientQuotaManager threads prevent shutdown when encountering an error loading logs

2017-11-02 Thread JIRA
Xavier Léauté created KAFKA-6164:


 Summary: ClientQuotaManager threads prevent shutdown when 
encountering an error loading logs
 Key: KAFKA-6164
 URL: https://issues.apache.org/jira/browse/KAFKA-6164
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0, 1.0.0
Reporter: Xavier Léauté
Priority: Major


While diagnosing KAFKA-6163, we noticed that when the broker initiates a 
shutdown sequence in response to an error loading the logs, the process never 
exits. The JVM appears to be waiting indefinitely for several non-daemon 
threads to terminate.

The threads in question are {{ThrottledRequestReaper-Request}}, 
{{ThrottledRequestReaper-Produce}}, and {{ThrottledRequestReaper-Fetch}}, so it 
appears we don't properly shut down {{ClientQuotaManager}} in this situation.

QuotaManager shutdown is currently handled by KafkaApis; however, KafkaApis will 
never be instantiated in the first place if we encounter an error loading the 
logs, so the quota managers are left dangling in that case.
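
A minimal, self-contained illustration of the symptom (it assumes nothing about the 
actual reaper internals): a non-daemon thread keeps the JVM alive after the rest of 
startup has failed, unless something interrupts and joins it.

{code}
// Minimal illustration: a non-daemon thread keeps the JVM running even after
// main() returns, unless an orderly shutdown path interrupts it.
public class NonDaemonHang {
    public static void main(String[] args) {
        Thread reaper = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(1000);   // stand-in for the reaper's periodic work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "ThrottledRequestReaper-Fetch");
        reaper.setDaemon(false);          // non-daemon, like the threads named above
        reaper.start();
        // main() returns here, but the JVM does not exit until the reaper is
        // interrupted and terminates.
    }
}
{code}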



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-6163) Broker should fail fast on startup if an error occurs while loading logs

2017-11-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xavier Léauté reassigned KAFKA-6163:


Assignee: Xavier Léauté

> Broker should fail fast on startup if an error occurs while loading logs
> 
>
> Key: KAFKA-6163
> URL: https://issues.apache.org/jira/browse/KAFKA-6163
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0, 1.0.0
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>
> If the broker fails to load one of the logs during startup, we currently 
> don't fail fast. The {{LogManager}} will log an error and initiate the 
> shutdown sequence, but continue loading all the remaining sequence before 
> shutting down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6311) Expose Kafka cluster ID in Connect REST API

2017-12-05 Thread JIRA
Xavier Léauté created KAFKA-6311:


 Summary: Expose Kafka cluster ID in Connect REST API
 Key: KAFKA-6311
 URL: https://issues.apache.org/jira/browse/KAFKA-6311
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Xavier Léauté
Assignee: Ewen Cheslack-Postava


Connect currently does not expose any information about the Kafka cluster it is 
connected to.
In an environment with multiple Kafka clusters it would be useful to know which 
cluster Connect is talking to. Exposing this information enables programmatic 
discovery of Kafka cluster metadata for the purpose of configuring connectors.
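
For context, the cluster ID is already reachable from a Java client via the 
AdminClient; a hedged sketch (placeholder broker address) of the value the proposed 
REST endpoint would surface:

{code}
// Hedged sketch: fetch the Kafka cluster ID with the Java AdminClient.
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;

public class ClusterIdLookup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder address
        try (AdminClient admin = AdminClient.create(props)) {
            String clusterId = admin.describeCluster().clusterId().get();
            System.out.println("cluster id: " + clusterId);
        }
    }
}
{code}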



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6294) ZkClient is not joining its event thread during an interrupt

2017-12-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-6294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16275011#comment-16275011
 ] 

Buğra Gedik commented on KAFKA-6294:


I guess this will happen if an interrupt is received while {{close()}} is in 
progress.

> ZkClient is not joining its event thread during an interrupt
> -
>
> Key: KAFKA-6294
> URL: https://issues.apache.org/jira/browse/KAFKA-6294
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.1
>Reporter: Zeynep Arikoglu
>
> ZkClient is not joining its event thread when there is an active interrupt.
> There was a similar issue with the KafkaProducer thread, which was partially 
> solved (the thread join was fixed, but the interrupted state is still not passed on).
> KAFKA-4767



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6114) kafka Java API Consumer and producer Offset value comparison?

2017-10-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-6114.
-
Resolution: Invalid
  Assignee: Sönke Liebau

> kafka Java API Consumer and producer Offset value comparison?
> -
>
> Key: KAFKA-6114
> URL: https://issues.apache.org/jira/browse/KAFKA-6114
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer, offset manager, producer 
>Affects Versions: 0.11.0.0
> Environment: Linux 
>Reporter: veerendra nath jasthi
>Assignee: Sönke Liebau
>
> I have a requirement to match the Kafka producer offset value to the consumer offset 
> by using the Java API.
> I am new to Kafka. Could anyone suggest how to proceed with this?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6114) kafka Java API Consumer and producer Offset value comparison?

2017-10-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218651#comment-16218651
 ] 

Sönke Liebau commented on KAFKA-6114:
-

Hi [~jvnath], 

I've responded to your [question on 
stackoverflow|https://stackoverflow.com/questions/46920468/kafka-java-api-consumer-and-producer-offset-value-comparison/46929326#46929326]
 on this topic, which I believe is a better place for it than in the jira.
Please add any additional details or questions there.
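
For reference, a minimal sketch of the Java API calls usually involved in such a 
comparison (topic, group and broker address are placeholders): the producer learns the 
offset it wrote from the returned {{RecordMetadata}}, and the consumer group's 
committed offset can be read back with {{committed()}}.

{code}
// Minimal sketch with placeholder names: compare the offset a producer wrote
// against the offset a consumer group has committed for that partition.
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetComparison {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        RecordMetadata md;
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            md = producer.send(new ProducerRecord<>("demo-topic", "key", "value")).get();
        }

        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");
        c.put("group.id", "demo-group");
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            TopicPartition tp = new TopicPartition(md.topic(), md.partition());
            OffsetAndMetadata committed = consumer.committed(tp);   // null if nothing committed yet
            long committedOffset = committed == null ? -1L : committed.offset();
            System.out.printf("produced offset=%d, group committed offset=%d%n",
                    md.offset(), committedOffset);
        }
    }
}
{code}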

> kafka Java API Consumer and producer Offset value comparison?
> -
>
> Key: KAFKA-6114
> URL: https://issues.apache.org/jira/browse/KAFKA-6114
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer, offset manager, producer 
>Affects Versions: 0.11.0.0
> Environment: Linux 
>Reporter: veerendra nath jasthi
>
> I have a requirement to match the Kafka producer offset value to the consumer offset 
> by using the Java API.
> I am new to Kafka. Could anyone suggest how to proceed with this?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-4767) KafkaProducer is not joining its IO thread properly

2017-10-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217695#comment-16217695
 ] 

Buğra Gedik edited comment on KAFKA-4767 at 10/24/17 9:03 PM:
--

Any progress on restoring the interrupt status of the current thread?


was (Author: bgedik):
Any progress on restoring the interrupt state of the current thread?

> KafkaProducer is not joining its IO thread properly
> ---
>
> Key: KAFKA-4767
> URL: https://issues.apache.org/jira/browse/KAFKA-4767
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.0
>Reporter: Buğra Gedik
>Assignee: huxihx
>Priority: Minor
>
> The {{KafkaProducer}} is not properly joining the thread it creates. The code 
> is like this:
> {code}
> try {
> this.ioThread.join(timeUnit.toMillis(timeout));
> } catch (InterruptedException t) {
> firstException.compareAndSet(null, t);
> log.error("Interrupted while joining ioThread", t);
> }
> {code}
> If the code is interrupted while performing the join, it will end up leaving 
> the IO thread running. The correct way of handling this is as follows:
> {code}
> try {
>     this.ioThread.join(timeUnit.toMillis(timeout));
> } catch (InterruptedException t) {
>     // propagate the interrupt
>     this.ioThread.interrupt();
>     try {
>         this.ioThread.join();
>     } catch (InterruptedException e) {
>         firstException.compareAndSet(null, e);
>         log.error("Interrupted while joining ioThread", e);
>     } finally {
>         // make sure we maintain the interrupted status
>         Thread.currentThread().interrupt();
>     }
> }
> {code}
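
A self-contained version of the proposed pattern (the class and field names are 
hypothetical, not the actual KafkaProducer internals) may make the flow easier to 
follow:

{code}
// Sketch of the proposed close() pattern: propagate the interrupt to the IO
// thread, keep waiting for it, and restore the caller's interrupted status.
import java.util.concurrent.TimeUnit;

public class InterruptSafeClose {
    private final Thread ioThread;

    public InterruptSafeClose(Thread ioThread) {
        this.ioThread = ioThread;
    }

    public void close(long timeout, TimeUnit timeUnit) {
        try {
            ioThread.join(timeUnit.toMillis(timeout));
        } catch (InterruptedException first) {
            ioThread.interrupt();                    // propagate the interrupt
            try {
                ioThread.join();                     // still wait for the IO thread
            } catch (InterruptedException second) {
                // give up waiting; the caller sees the interrupted status below
            } finally {
                Thread.currentThread().interrupt();  // restore interrupted status
            }
        }
    }
}
{code}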



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4767) KafkaProducer is not joining its IO thread properly

2017-10-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217695#comment-16217695
 ] 

Buğra Gedik commented on KAFKA-4767:


Any progress on restoring the interrupt state of the current thread?

> KafkaProducer is not joining its IO thread properly
> ---
>
> Key: KAFKA-4767
> URL: https://issues.apache.org/jira/browse/KAFKA-4767
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.0
>Reporter: Buğra Gedik
>Assignee: huxihx
>Priority: Minor
>
> The {{KafkaProducer}} is not properly joining the thread it creates. The code 
> is like this:
> {code}
> try {
> this.ioThread.join(timeUnit.toMillis(timeout));
> } catch (InterruptedException t) {
> firstException.compareAndSet(null, t);
> log.error("Interrupted while joining ioThread", t);
> }
> {code}
> If the code is interrupted while performing the join, it will end up leaving 
> the IO thread running. The correct way of handling this is as follows:
> {code}
> try {
>     this.ioThread.join(timeUnit.toMillis(timeout));
> } catch (InterruptedException t) {
>     // propagate the interrupt
>     this.ioThread.interrupt();
>     try {
>         this.ioThread.join();
>     } catch (InterruptedException e) {
>         firstException.compareAndSet(null, e);
>         log.error("Interrupted while joining ioThread", e);
>     } finally {
>         // make sure we maintain the interrupted status
>         Thread.currentThread().interrupt();
>     }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6868) BufferUnderflowException in client when querying consumer group information

2018-05-04 Thread JIRA
Xavier Léauté created KAFKA-6868:


 Summary: BufferUnderflowException in client when querying consumer 
group information
 Key: KAFKA-6868
 URL: https://issues.apache.org/jira/browse/KAFKA-6868
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Xavier Léauté


Exceptions get thrown when describing consumer group or querying group offsets.

{code}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
at java.lang.Thread.run(Thread.java:748)
{code}

{code}
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at
[snip]
Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
reading field 'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
... 1 more
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6868) BufferUnderflowException in client when querying consumer group information

2018-05-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xavier Léauté updated KAFKA-6868:
-
Description: 
Exceptions get thrown when describing consumer group or querying group offsets 
from a 1.0 cluster

Stacktrace is a result of calling 
{{AdminClient.describeConsumerGroups(Collection 
groupIds).describedGroups().entrySet()}} followed by 
{{KafkaFuture.whenComplete()}}

{code}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
at java.lang.Thread.run(Thread.java:748)
{code}

{code}
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at
[snip]
Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
reading field 'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
... 1 more
{code}
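
A hedged sketch of the call pattern described above ({{my-group}} and the broker 
address are placeholders) that can be used to reproduce the failure against an older 
cluster:

{code}
// Hedged sketch of AdminClient.describeConsumerGroups(...).describedGroups()
// followed by KafkaFuture.whenComplete(), with placeholder names.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import org.apache.kafka.common.KafkaFuture;

public class DescribeGroupRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder address
        try (AdminClient admin = AdminClient.create(props)) {
            KafkaFuture<ConsumerGroupDescription> future = admin
                    .describeConsumerGroups(Collections.singleton("my-group"))
                    .describedGroups()
                    .get("my-group");
            future.whenComplete((description, error) -> {
                if (error != null) {
                    error.printStackTrace();   // the SchemaException surfaces here
                } else {
                    System.out.println(description);
                }
            });
        }
    }
}
{code}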

  was:
Exceptions get thrown when describing consumer group or querying group offsets 
from a 1.0~ish cluster

{code}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
at java.lang.Thread.run(Thread.java:748)
{code}

{code}
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at
[snip]
Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
reading field 'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
... 1 more
{code}


> BufferUnderflowException in client when querying consumer group information
> ---
>
> Key: KAFKA-6868
> URL: https://issues.apache.org/jira/browse/KAFKA-6868
> Project: Kafka
>  Issue Type: Bug
>   

[jira] [Updated] (KAFKA-6868) BufferUnderflowException in client when querying consumer group information

2018-05-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xavier Léauté updated KAFKA-6868:
-
Description: 
Exceptions get thrown when describing consumer group or querying group offsets 
from a 1.0~ish cluster

{code}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
at java.lang.Thread.run(Thread.java:748)
{code}

{code}
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at
[snip]
Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
reading field 'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
... 1 more
{code}

  was:
Exceptions get thrown when describing consumer group or querying group offsets.

{code}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
at java.lang.Thread.run(Thread.java:748)
{code}

{code}
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at
[snip]
Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
reading field 'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
... 1 more
{code}


> BufferUnderflowException in client when querying consumer group information
> ---
>
> Key: KAFKA-6868
> URL: https://issues.apache.org/jira/browse/KAFKA-6868
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Xavier Léauté
>Priority: Major
>
> Exceptions get thrown when describing consumer group or querying group 
>

[jira] [Updated] (KAFKA-6868) BufferUnderflowException in client when querying consumer group information

2018-05-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xavier Léauté updated KAFKA-6868:
-
Description: 
Exceptions get thrown when describing consumer group or querying group offsets 
from a 1.0 cluster

Stacktrace is a result of calling 
{{AdminClient.describeConsumerGroups(Collection 
groupIds).describedGroups().entrySet()}} followed by 
{{KafkaFuture.whenComplete()}}

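For reference, a minimal sketch of that call pattern (class name, bootstrap address and group id are placeholders, not taken from the report):

{code:java}
// Hypothetical reproduction sketch: describe a consumer group with the 2.0
// AdminClient and react to each per-group future via whenComplete(). Against a
// 1.0 broker the error reportedly surfaces as the SchemaException shown below.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DescribeGroupSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            admin.describeConsumerGroups(Collections.singleton("my-group")) // placeholder group id
                 .describedGroups()
                 .entrySet()
                 .forEach(entry -> entry.getValue().whenComplete((description, error) -> {
                     if (error != null) {
                         System.err.println(entry.getKey() + " failed: " + error);
                     } else {
                         System.out.println(entry.getKey() + " members: " + description.members());
                     }
                 }));
        }
    }
}
{code}
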
{code}
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at
[snip]
Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
reading field 'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
... 1 more
{code}

  was:
Exceptions get thrown when describing consumer group or querying group offsets 
from a 1.0 cluster

Stacktrace is a result of calling 
{{AdminClient.describeConsumerGroups(Collection 
groupIds).describedGroups().entrySet()}} followed by 
{{KafkaFuture.whenComplete()}}

{code}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
at java.lang.Thread.run(Thread.java:748)
{code}

{code}
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at
[snip]
Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
reading field 'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
... 1 more
{code}


> BufferUnderflowException in client when querying consumer group information
> ---
>
> Key: KAFKA-6868
> URL: https://issues.apache.org/jira/browse/KAFKA-6868
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Xavier Léauté
>Priority: Major
>
> Exceptions get thrown when describing consumer group or querying group 
> offsets from a 1.0 cluster
> Stacktrace is a result of calling 
> {{AdminClient.describeConsumerGroups(Collection 
> groupIds).describedGroups().entrySet()}} followed by 
> {{KafkaFuture.whenComplete()}}
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.protocol.types.SchemaException: Error r

[jira] [Commented] (KAFKA-6689) Kafka does not release .deleted files.

2018-05-07 Thread A (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465823#comment-16465823
 ] 

A commented on KAFKA-6689:
--

[~huxi_2b] Thank you for your suggestion. After I upgraded the broker to 
0.10.2.1, it looks like the broker cleans up .deleted files better than 
0.10.1.1, but I am still facing the same problem.

These are the results after monitoring for the past 3 weeks.

[Broker-01 ~]$ lsof -p 5603 | grep deleted | wc -l
58

[Broker-02 ~]$ lsof -p 15416  | grep deleted|wc -l
17100

[Broker-03 ~]$ lsof -p 13244 |grep delete | wc -l
15722

 

> Kafka does not release .deleted files.
> 
>
> Key: KAFKA-6689
> URL: https://issues.apache.org/jira/browse/KAFKA-6689
> Project: Kafka
>  Issue Type: Bug
>  Components: config, controller, log
>Affects Versions: 0.10.1.1
> Environment: 3 Kafka brokers running on CentOS 6.9 (3 VMs) 
>Reporter: A
>Priority: Critical
>  Labels: newbie
> Fix For: 0.10.1.1
>
>
> After Kafka cleans up .timeindex / .index log files based on topic 
> retention, I can still lsof a lot of .index.deleted and .timeindex.deleted files. 
> We have 3 brokers on 3 VMs; it happens on only 2 of them.
> [broker-01 ~]$ lsof -p 24324 | grep deleted | wc -l
> 28
> [broker-02 ~]$ lsof -p 12131 | grep deleted | wc -l
> 14599
> [broker-03 ~]$ lsof -p 3349 | grep deleted | wc -l
> 14523
>  
> Configuration on the 3 brokers is the same (roll log every hour, retention time 11 
> hours).
>  * INFO KafkaConfig values:
>  advertised.host.name = null
>  advertised.listeners = PLAINTEXT://Broker-02:9092
>  advertised.port = null
>  authorizer.class.name =
>  auto.create.topics.enable = true
>  auto.leader.rebalance.enable = true
>  background.threads = 10
>  broker.id = 2
>  broker.id.generation.enable = true
>  broker.rack = null
>  compression.type = producer
>  connections.max.idle.ms = 60
>  controlled.shutdown.enable = true
>  controlled.shutdown.max.retries = 3
>  controlled.shutdown.retry.backoff.ms = 5000
>  controller.socket.timeout.ms = 3
>  default.replication.factor = 3
>  delete.topic.enable = true
>  fetch.purgatory.purge.interval.requests = 1000
>  group.max.session.timeout.ms = 30
>  group.min.session.timeout.ms = 6000
>  host.name =
>  inter.broker.protocol.version = 0.10.1-IV2
>  leader.imbalance.check.interval.seconds = 300
>  leader.imbalance.per.broker.percentage = 10
>  listeners = null
>  log.cleaner.backoff.ms = 15000
>  log.cleaner.dedupe.buffer.size = 134217728
>  log.cleaner.delete.retention.ms = 8640
>  log.cleaner.enable = true
>  log.cleaner.io.buffer.load.factor = 0.9
>  log.cleaner.io.buffer.size = 524288
>  log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
>  log.cleaner.min.cleanable.ratio = 0.5
>  log.cleaner.min.compaction.lag.ms = 0
>  log.cleaner.threads = 1
>  log.cleanup.policy = [delete]
>  log.dir = /tmp/kafka-logs
>  log.dirs = /data/appdata/kafka/data
>  log.flush.interval.messages = 9223372036854775807
>  log.flush.interval.ms = null
>  log.flush.offset.checkpoint.interval.ms = 6
>  log.flush.scheduler.interval.ms = 9223372036854775807
>  log.index.interval.bytes = 4096
>  log.index.size.max.bytes = 10485760
>  log.message.format.version = 0.10.1-IV2
>  log.message.timestamp.difference.max.ms = 9223372036854775807
>  log.message.timestamp.type = CreateTime
>  log.preallocate = false
>  log.retention.bytes = -1
>  log.retention.check.interval.ms = 30
>  log.retention.hours = 11
>  log.retention.minutes = 660
>  log.retention.ms = 3960
>  log.roll.hours = 1
>  log.roll.jitter.hours = 0
>  log.roll.jitter.ms = null
>  log.roll.ms = null
>  log.segment.bytes = 1073741824
>  log.segment.delete.delay.ms = 6
>  max.connections.per.ip = 2147483647
>  max.connections.per.ip.overrides =
>  message.max.bytes = 112
>  metric.reporters = []
>  metrics.num.samples = 2
>  metrics.sample.window.ms = 3
>  min.insync.replicas = 2
>  num.io.threads = 16
>  num.network.threads = 16
>  num.partitions = 10
>  num.recovery.threads.per.data.dir = 3
>  num.replica.fetchers = 1
>  offset.metadata.max.bytes = 4096
>  offsets.commit.required.acks = -1
>  offsets.commit.timeout.ms = 5000
>  offsets.load.buffer.size = 5242880
>  offsets.retention.check.interval.ms = 60
>  offsets.retention.minutes = 1440
>  offsets.topic.compression.codec = 0
>  offsets.topic.num.partitions = 50
>  offsets.topic.replication.factor = 3
>  offsets.topic.segment.bytes = 104857600
>  port =

[jira] [Created] (KAFKA-6901) Kafka crashes when trying to delete segment when retention time is reached

2018-05-14 Thread JIRA
Grégory R. created KAFKA-6901:
-

 Summary: Kafka crashes when trying to delete segment when 
retention time is reached 
 Key: KAFKA-6901
 URL: https://issues.apache.org/jira/browse/KAFKA-6901
 Project: Kafka
  Issue Type: Bug
  Components: core, log
Affects Versions: 1.0.0
 Environment: OS: Windows Server 2012 R2
Reporter: Grégory R.


Following the parameter
{code:java}
log.retention.hours = 16{code}
Kafka tries to delete segments.

This action crashes the server with the following log:

 
{code:java}

[2018-05-11 15:17:58,036] INFO Found deletable segments with base offsets [0] 
due to retention time 60480ms breach (kafka.log.Log)
[2018-05-11 15:17:58,068] INFO Rolled new log segment for 'event-0' in 12 ms. 
(kafka.log.Log)
[2018-05-11 15:17:58,068] INFO Scheduling log segment 0 for log event-0 for 
deletion. (kafka.log.Log)
[2018-05-11 15:17:58,068] ERROR Error while deleting segments for event-0 in 
dir C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log (kafka.server.L
ogDirFailureChannel)
java.nio.file.FileSystemException: 
C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\.log -> 
C:\App\VISBridge\kafka_2.
12-1.0.0\kafka-log\event-0\.log.deleted: Le processus ne 
peut pas accéder au fichier car ce fichier est utilisé par un a
utre processus.

    at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
    at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
    at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
    at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
    at java.nio.file.Files.move(Files.java:1395)
    at 
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:682)
    at 
org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
    at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:398)
    at kafka.log.Log.asyncDeleteSegment(Log.scala:1592)
    at kafka.log.Log.deleteSegment(Log.scala:1579)
    at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1152)
    at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1152)
    at 
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
    at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1152)
    at 
scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
    at kafka.log.Log.maybeHandleIOException(Log.scala:1669)
    at kafka.log.Log.deleteSegments(Log.scala:1143)
    at kafka.log.Log.deleteOldSegments(Log.scala:1138)
    at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1211)
    at kafka.log.Log.deleteOldSegments(Log.scala:1204)
    at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:715)
    at 
kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:713)
    at 
scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
    at scala.collection.Iterator.foreach(Iterator.scala:929)
    at scala.collection.Iterator.foreach$(Iterator.scala:929)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
    at scala.collection.IterableLike.foreach(IterableLike.scala:71)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
    at kafka.log.LogManager.cleanupLogs(LogManager.scala:713)
    at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:341)
    at 
kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
    Suppressed: java.nio.file.FileSystemException: 
C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\.log -> 
C:\Ap
p\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\.log.deleted:
 Le processus n

[jira] [Commented] (KAFKA-6729) KTable should use user source topics if possible and not create changelog topic

2018-05-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474793#comment-16474793
 ] 

Xavier Léauté commented on KAFKA-6729:
--

[~vvcephei] it's not just a performance regression; this is a change in 
behavior. It now creates an additional topic, which is not backwards compatible 
for people who pre-created topics with specific topic configurations or who need 
to set ACLs beforehand.

> KTable should use user source topics if possible and not create changelog 
> topic
> ---
>
> Key: KAFKA-6729
> URL: https://issues.apache.org/jira/browse/KAFKA-6729
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>Priority: Blocker
> Fix For: 2.0.0
>
>
> With KIP-182 we reworked the Streams API substantially and introduced a regression into 
> the 1.0 code base. If a KTable is populated from a source topic, i.e., 
> StreamsBuilder.table(), the KTable now creates its own changelog topic. 
> However, in older releases (0.11 or older), we don't create a changelog topic 
> but use the user-specified source topic instead.
> We want to reintroduce this optimization to reduce the load (storage and 
> writes) on the broker side for this case.
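
For context, a minimal sketch of the construct in question (topic name and serdes are placeholders, not from the report):

{code:java}
// Sketch only: materialize a KTable directly from a source topic. Under 1.0.x
// this topology also creates an internal "<application.id>-...-changelog" topic
// for the table's state store; in 0.11 and earlier the source topic itself was
// reused for restoration, which is the optimization discussed above.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Consumed;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class TableFromSourceTopic {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // "user-profiles" stands in for a pre-created, possibly ACL-protected topic.
        KTable<String, String> table = builder.table(
                "user-profiles", Consumed.with(Serdes.String(), Serdes.String()));
        // Print the topology description (sources, processors, state stores).
        System.out.println(builder.build().describe());
    }
}
{code}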



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6901) Kafka crashes when trying to delete segment when retention time is reached

2018-05-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grégory R. updated KAFKA-6901:
--
Attachment: 20180517 - ProcessExplorer after the crash.png

> Kafka crashes when trying to delete segment when retention time is reached 
> -
>
> Key: KAFKA-6901
> URL: https://issues.apache.org/jira/browse/KAFKA-6901
> Project: Kafka
>  Issue Type: Bug
>  Components: core, log
>Affects Versions: 1.0.0
> Environment: OS: Windows Server 2012 R2
>Reporter: Grégory R.
>Priority: Critical
>  Labels: windows
> Attachments: 20180517 - ProcessExplorer after the crash.png, 20180517 
> - ProcessExplorer before the crash.png
>
>
> Following the parameter
> {code:java}
> log.retention.hours = 16{code}
> Kafka tries to delete segments.
> This action crashes the server with the following log:
>  
> {code:java}
> [2018-05-11 15:17:58,036] INFO Found deletable segments with base offsets [0] 
> due to retention time 60480ms breach (kafka.log.Log)
> [2018-05-11 15:17:58,068] INFO Rolled new log segment for 'event-0' in 12 ms. 
> (kafka.log.Log)
> [2018-05-11 15:17:58,068] INFO Scheduling log segment 0 for log event-0 for 
> deletion. (kafka.log.Log)
> [2018-05-11 15:17:58,068] ERROR Error while deleting segments for event-0 in 
> dir C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log (kafka.server.L
> ogDirFailureChannel)
> java.nio.file.FileSystemException: 
> C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\.log 
> -> C:\App\VISBridge\kafka_2.
> 12-1.0.0\kafka-log\event-0\.log.deleted: Le processus ne 
> peut pas accéder au fichier car ce fichier est utilisé par un a
> utre processus.
>     at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>     at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>     at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>     at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>     at java.nio.file.Files.move(Files.java:1395)
>     at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:682)
>     at 
> org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
>     at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:398)
>     at kafka.log.Log.asyncDeleteSegment(Log.scala:1592)
>     at kafka.log.Log.deleteSegment(Log.scala:1579)
>     at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1152)
>     at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1152)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>     at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1152)
>     at 
> scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
>     at kafka.log.Log.maybeHandleIOException(Log.scala:1669)
>     at kafka.log.Log.deleteSegments(Log.scala:1143)
>     at kafka.log.Log.deleteOldSegments(Log.scala:1138)
>     at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1211)
>     at kafka.log.Log.deleteOldSegments(Log.scala:1204)
>     at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:715)
>     at 
> kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:713)
>     at 
> scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
>     at scala.collection.Iterator.foreach(Iterator.scala:929)
>     at scala.collection.Iterator.foreach$(Iterator.scala:929)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:71)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
>     at kafka.log.LogManager.cleanupLogs(LogManager.scala:713)
>     at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:341)
>     at 
> kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)
>     at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
>     at 
> java.util.co

[jira] [Updated] (KAFKA-6901) Kafka crashes when trying to delete segment when retention time is reached

2018-05-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grégory R. updated KAFKA-6901:
--
Priority: Major  (was: Critical)

> Kafka crashes when trying to delete segment when retention time is reached 
> -
>
> Key: KAFKA-6901
> URL: https://issues.apache.org/jira/browse/KAFKA-6901
> Project: Kafka
>  Issue Type: Bug
>  Components: core, log
>Affects Versions: 1.0.0, 1.1.0
> Environment: OS: Windows Server 2012 R2
>Reporter: Grégory R.
>Priority: Major
>  Labels: windows
> Attachments: 20180517 - ProcessExplorer after the crash.png, 20180517 
> - ProcessExplorer before the crash.png
>
>
> Following the parameter
> {code:java}
> log.retention.hours = 16{code}
> Kafka tries to delete segments.
> This action crashes the server with the following log:
>  
> {code:java}
> [2018-05-11 15:17:58,036] INFO Found deletable segments with base offsets [0] 
> due to retention time 60480ms breach (kafka.log.Log)
> [2018-05-11 15:17:58,068] INFO Rolled new log segment for 'event-0' in 12 ms. 
> (kafka.log.Log)
> [2018-05-11 15:17:58,068] INFO Scheduling log segment 0 for log event-0 for 
> deletion. (kafka.log.Log)
> [2018-05-11 15:17:58,068] ERROR Error while deleting segments for event-0 in 
> dir C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log (kafka.server.L
> ogDirFailureChannel)
> java.nio.file.FileSystemException: 
> C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\.log 
> -> C:\App\VISBridge\kafka_2.
> 12-1.0.0\kafka-log\event-0\.log.deleted: Le processus ne 
> peut pas accéder au fichier car ce fichier est utilisé par un a
> utre processus.
>     at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>     at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>     at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>     at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>     at java.nio.file.Files.move(Files.java:1395)
>     at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:682)
>     at 
> org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
>     at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:398)
>     at kafka.log.Log.asyncDeleteSegment(Log.scala:1592)
>     at kafka.log.Log.deleteSegment(Log.scala:1579)
>     at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1152)
>     at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1152)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>     at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1152)
>     at 
> scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
>     at kafka.log.Log.maybeHandleIOException(Log.scala:1669)
>     at kafka.log.Log.deleteSegments(Log.scala:1143)
>     at kafka.log.Log.deleteOldSegments(Log.scala:1138)
>     at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1211)
>     at kafka.log.Log.deleteOldSegments(Log.scala:1204)
>     at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:715)
>     at 
> kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:713)
>     at 
> scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
>     at scala.collection.Iterator.foreach(Iterator.scala:929)
>     at scala.collection.Iterator.foreach$(Iterator.scala:929)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:71)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
>     at kafka.log.LogManager.cleanupLogs(LogManager.scala:713)
>     at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:341)
>     at 
> kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)
>     at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
>     at 
> java.util.co

[jira] [Updated] (KAFKA-6901) Kafka crashes when trying to delete segment when retention time is reached

2018-05-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grégory R. updated KAFKA-6901:
--
Attachment: 20180517 - ProcessExplorer before the crash.png

> Kafka crashes when trying to delete segment when retention time is reached 
> -
>
> Key: KAFKA-6901
> URL: https://issues.apache.org/jira/browse/KAFKA-6901
> Project: Kafka
>  Issue Type: Bug
>  Components: core, log
>Affects Versions: 1.0.0
> Environment: OS: Windows Server 2012 R2
>Reporter: Grégory R.
>Priority: Critical
>  Labels: windows
> Attachments: 20180517 - ProcessExplorer before the crash.png
>
>
> Following the parameter
> {code:java}
> log.retention.hours = 16{code}
> Kafka tries to delete segments.
> This action crashes the server with the following log:
>  
> {code:java}
> [2018-05-11 15:17:58,036] INFO Found deletable segments with base offsets [0] 
> due to retention time 60480ms breach (kafka.log.Log)
> [2018-05-11 15:17:58,068] INFO Rolled new log segment for 'event-0' in 12 ms. 
> (kafka.log.Log)
> [2018-05-11 15:17:58,068] INFO Scheduling log segment 0 for log event-0 for 
> deletion. (kafka.log.Log)
> [2018-05-11 15:17:58,068] ERROR Error while deleting segments for event-0 in 
> dir C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log (kafka.server.L
> ogDirFailureChannel)
> java.nio.file.FileSystemException: 
> C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\.log 
> -> C:\App\VISBridge\kafka_2.
> 12-1.0.0\kafka-log\event-0\.log.deleted: Le processus ne 
> peut pas accéder au fichier car ce fichier est utilisé par un a
> utre processus.
>     at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>     at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>     at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>     at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>     at java.nio.file.Files.move(Files.java:1395)
>     at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:682)
>     at 
> org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
>     at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:398)
>     at kafka.log.Log.asyncDeleteSegment(Log.scala:1592)
>     at kafka.log.Log.deleteSegment(Log.scala:1579)
>     at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1152)
>     at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1152)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>     at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1152)
>     at 
> scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
>     at kafka.log.Log.maybeHandleIOException(Log.scala:1669)
>     at kafka.log.Log.deleteSegments(Log.scala:1143)
>     at kafka.log.Log.deleteOldSegments(Log.scala:1138)
>     at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1211)
>     at kafka.log.Log.deleteOldSegments(Log.scala:1204)
>     at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:715)
>     at 
> kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:713)
>     at 
> scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
>     at scala.collection.Iterator.foreach(Iterator.scala:929)
>     at scala.collection.Iterator.foreach$(Iterator.scala:929)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:71)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
>     at kafka.log.LogManager.cleanupLogs(LogManager.scala:713)
>     at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:341)
>     at 
> kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)
>     at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
>     at 
> java.util.concu

[jira] [Commented] (KAFKA-1194) The kafka broker cannot delete the old log files after the configured time

2018-05-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479130#comment-16479130
 ] 

Grégory R. commented on KAFKA-1194:
---

This issue is reproducible with Kafka 1.1.0.

> The kafka broker cannot delete the old log files after the configured time
> --
>
> Key: KAFKA-1194
> URL: https://issues.apache.org/jira/browse/KAFKA-1194
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0, 0.11.0.0, 1.0.0
> Environment: window
>Reporter: Tao Qin
>Priority: Critical
>  Labels: features, patch, windows
> Attachments: KAFKA-1194.patch, Untitled.jpg, kafka-1194-v1.patch, 
> kafka-1194-v2.patch, screenshot-1.png
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We tested it in a Windows environment and set log.retention.hours to 24 
> hours.
> # The minimum age of a log file to be eligible for deletion
> log.retention.hours=24
> After several days, the Kafka broker still cannot delete the old log files, 
> and we get the following exceptions:
> [2013-12-19 01:57:38,528] ERROR Uncaught exception in scheduled task 
> 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> kafka.common.KafkaStorageException: Failed to change the log file suffix from 
>  to .deleted for log segment 1516723
>  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
>  at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:638)
>  at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:629)
>  at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>  at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>  at 
> scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
>  at scala.collection.immutable.List.foreach(List.scala:76)
>  at kafka.log.Log.deleteOldSegments(Log.scala:418)
>  at 
> kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:284)
>  at 
> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:316)
>  at 
> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:314)
>  at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:772)
>  at 
> scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
>  at 
> scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:615)
>  at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
>  at kafka.log.LogManager.cleanupLogs(LogManager.scala:314)
>  at 
> kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:143)
>  at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:724)
> I think this error happens because Kafka tries to rename the log file while it 
> is still open, so we should close the file before renaming it.
> The index file uses a special data structure, the MappedByteBuffer. The Javadoc 
> describes it as:
> A mapped byte buffer and the file mapping that it represents remain valid 
> until the buffer itself is garbage-collected.
> Fortunately, I found a forceUnmap function in the Kafka code, and perhaps it can 
> be used to free the MappedByteBuffer.
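
An illustrative sketch of the rename-while-mapped scenario described above (file names are hypothetical and this is not Kafka code; the Windows failure mode is as reported in this ticket, not verified here):

{code:java}
// Standalone sketch: keep an index-like file memory-mapped and then try the
// same kind of rename that Kafka's changeFileSuffixes() performs during cleanup.
// Per the reports above, this move fails on Windows with FileSystemException
// while the mapping (and open handle) still exists; on Linux it goes through.
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class MappedRenameSketch {
    public static void main(String[] args) throws Exception {
        Path index = Paths.get("segment.index");           // hypothetical index file
        Files.write(index, new byte[4096]);
        try (RandomAccessFile raf = new RandomAccessFile(index.toFile(), "rw");
             FileChannel channel = raf.getChannel()) {
            MappedByteBuffer mapped = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            mapped.putInt(0, 42);                           // keep the mapping in active use
            // Analogue of renaming ".index" to ".index.deleted" during retention cleanup.
            Files.move(index, Paths.get("segment.index.deleted"),
                       StandardCopyOption.ATOMIC_MOVE);
        }
    }
}
{code}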



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7070) KafkaConsumer#committed might unexpectedly shift consumer offset

2018-06-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/KAFKA-7070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516318#comment-16516318
 ] 

Jan Lukavský commented on KAFKA-7070:
-

Hi, sorry, my bad. I should have tried to create the test case prior to filing 
this issue. The problem is a little more complicated than I described. The 
following test code demonstrates the issue:
{code:java}
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import java.util.UUID;
import java.util.stream.Collectors;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class Kafka7070 {

    public static void main(String[] args) {
        String topic = args[0];
        String brokerList = args[1];
        KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(getProps(brokerList));
        // manually assign all partitions of the topic (no group rebalancing)
        consumer.assign(consumer.partitionsFor(topic)
            .stream()
            .map(p -> new TopicPartition(topic, p.partition()))
            .collect(Collectors.toList()));
        consumer.seekToBeginning(consumer.assignment());
        Map<TopicPartition, Long> endOffsets = consumer.endOffsets(consumer.assignment());
        // uncommenting the following will cause problems
        /* consumer.assignment()
            .forEach(tp -> consumer.seek(
                tp,
                Optional.ofNullable(consumer.committed(tp))
                    .map(OffsetAndMetadata::offset)
                    .orElse(0L))); */
        // poll until every partition has been read up to its end offset
        long polled = 0;
        while (!endOffsets.isEmpty()) {
            ConsumerRecords<String, byte[]> poll = consumer.poll(100);
            polled += poll.count();
            for (ConsumerRecord<String, byte[]> r : poll) {
                TopicPartition tp = new TopicPartition(topic, r.partition());
                Long end = endOffsets.get(tp);
                if (end != null && end - 1 <= r.offset()) {
                    endOffsets.remove(tp);
                }
            }
        }
        consumer.close();
        System.out.println("Total polled: " + polled);
    }

    private static Properties getProps(String brokerList) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, 
            StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, 
            ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.GROUP_ID_CONFIG, UUID.randomUUID().toString());
        return props;
    }

}
{code}
Running this code against a compacted topic leads to unexpected behavior when the 
commented code is uncommented. On the other hand, the bug in the user code is 
deeper than I expected - the problem is not caused by just querying the 
committed offset, but by trying to seek to this committed offset. The reason 
why the client code would want to do this is beyond the scope of this issue, but 
what might be expected nevertheless is that trying to seek to offset 0 in 
such a situation will actually result in a seek to the *end* of the topic. That is 
demonstrated in the log output of the code above, which states:
{code:java}
2018-06-18 22:31:06 INFO  Fetcher:925 - [Consumer clientId=consumer-1, 
groupId=4f641236-7887-4758-bdd9-39883d8b2fcb] Fetch offset 0 is out of range 
for partition xyz-0, resetting offset
2018-06-18 22:31:06 INFO  Fetcher:925 - [Consumer clientId=consumer-1, 
groupId=4f641236-7887-4758-bdd9-39883d8b2fcb] Fetch offset 0 is out of range 
for partition xyz-4, resetting offset
{code}
The proposed solution of throwing an exception makes no sense to me now, but maybe 
the offset reset could be modified a little, so that when trying to seek to an 
offset that is lower than the beginning offset, the consumer would seek to the 
beginning, not the end?
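
For completeness, the direction of that out-of-range reset is governed by the consumer's auto.offset.reset setting, which defaults to "latest"; a hypothetical one-line addition to the getProps() helper in the test above (not part of the original code) would make the consumer fall back to the beginning instead:

{code:java}
// Assumption for illustration: override auto.offset.reset inside getProps().
// With "earliest" an out-of-range fetch offset is reset to the beginning of the
// partition rather than to the end.
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
{code}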

> KafkaConsumer#committed might unexpectedly shift consumer offset
> 
>
> Key: KAFKA-7070
> URL: https://issues.apache.org/jira/browse/KAFKA-7070
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.0
>Reporter: Jan Lukavský
>Priority: Major
>
> When a client uses manual partition assignment (e.g. {{KafkaConsumer#assign}}) 
> but then accidentally calls {{KafkaConsumer#committed}} (for whatever reason, 
> most probably a bug in user code), the offset gets shifted to latest, 
> possibly skipping any unconsumed messages or producing duplicates. The 
> reason is that the call to {{KafkaConsumer#committed}} invokes the 
> AbstractCoordinator, which tries to fetch the committed offset but doesn't find 
> the {{group.id}} (which will probably even be empty). This might cause the Fetcher to 
> receive an invalid offset for the partition and reset it to the latest offset.
> Although this is primarily a bug in user code, it is very hard to track 
> down. The call to {{KafkaConsumer#committed}} could probably throw an exception 
> when called on a client without automatic partition assignment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

