[jira] [Resolved] (KAFKA-6576) Configurable Quota Management (KIP-257)

2018-04-06 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-6576.
---
   Resolution: Fixed
 Reviewer: Jun Rao
Fix Version/s: 1.2.0

> Configurable Quota Management (KIP-257)
> ---
>
> Key: KAFKA-6576
> URL: https://issues.apache.org/jira/browse/KAFKA-6576
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 1.2.0
>
>
> See 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-257+-+Configurable+Quota+Management]
>  for details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6576) Configurable Quota Management (KIP-257)

2018-04-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429058#comment-16429058
 ] 

ASF GitHub Bot commented on KAFKA-6576:
---

rajinisivaram closed pull request #4699: KAFKA-6576: Configurable Quota 
Management (KIP-257)
URL: https://github.com/apache/kafka/pull/4699
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/clients/src/main/java/org/apache/kafka/server/quota/ClientQuotaCallback.java 
b/clients/src/main/java/org/apache/kafka/server/quota/ClientQuotaCallback.java
new file mode 100644
index 000..210e9f45840
--- /dev/null
+++ 
b/clients/src/main/java/org/apache/kafka/server/quota/ClientQuotaCallback.java
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.quota;
+
+import org.apache.kafka.common.Cluster;
+import org.apache.kafka.common.Configurable;
+import org.apache.kafka.common.security.auth.KafkaPrincipal;
+
+import java.util.Map;
+
+/**
+ * Quota callback interface for brokers that enables customization of client 
quota computation.
+ */
+public interface ClientQuotaCallback extends Configurable {
+
+/**
+ * Quota callback invoked to determine the quota metric tags to be applied 
for a request.
+ * Quota limits are associated with quota metrics and all clients which 
use the same
+ * metric tags share the quota limit.
+ *
+ * @param quotaType Type of quota requested
+ * @param principal The user principal of the connection for which quota 
is requested
+ * @param clientId  The client id associated with the request
+ * @return quota metric tags that indicate which other clients share this 
quota
+ */
+Map<String, String> quotaMetricTags(ClientQuotaType quotaType, KafkaPrincipal principal, String clientId);
+
+/**
+ * Returns the quota limit associated with the provided metric tags. These 
tags were returned from
+ * a previous call to {@link #quotaMetricTags(ClientQuotaType, 
KafkaPrincipal, String)}. This method is
+ * invoked by quota managers to obtain the current quota limit applied to 
a metric when the first request
+ * using these tags is processed. It is also invoked after a quota update 
or cluster metadata change.
+ * If the tags are no longer in use after the update, (e.g. this is a 
{user, client-id} quota metric
+ * and the quota now in use is a {user} quota), null is returned.
+ *
+ * @param quotaType  Type of quota requested
+ * @param metricTags Metric tags for a quota metric of type `quotaType`
+ * @return the quota limit for the provided metric tags or null if the 
metric tags are no longer in use
+ */
+Double quotaLimit(ClientQuotaType quotaType, Map<String, String> metricTags);
+
+/**
+ * Quota configuration update callback that is invoked when quota 
configuration for an entity is
+ * updated in ZooKeeper. This is useful to track configured quotas if 
built-in quota configuration
+ * tools are used for quota management.
+ *
+ * @param quotaType   Type of quota being updated
+ * @param quotaEntity The quota entity for which quota is being updated
+ * @param newValueThe new quota value
+ */
+void updateQuota(ClientQuotaType quotaType, ClientQuotaEntity quotaEntity, double newValue);
+
+/**
+ * Quota configuration removal callback that is invoked when quota 
configuration for an entity is
+ * removed in ZooKeeper. This is useful to track configured quotas if 
built-in quota configuration
+ * tools are used for quota management.
+ *
+ * @param quotaType   Type of quota being removed
+ * @param quotaEntity The quota entity for which quota is being removed
+ */
+void removeQuota(ClientQuotaType quotaType, ClientQuotaEntity quotaEntity);
+
+/**
+ * Returns true if any of the existing quota configs may have been updated 
since the last call
+ * 
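
The diff above is truncated mid-javadoc. For orientation, here is a minimal
sketch of a callback implementation against this interface. The class name,
config key, and default limit below are made up; the methods past the
truncation point (quotaResetRequired, updateClusterMetadata, close) follow the
interface as described in KIP-257, which also adds the broker config
client.quota.callback.class for enabling a callback.

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.server.quota.ClientQuotaCallback;
import org.apache.kafka.server.quota.ClientQuotaEntity;
import org.apache.kafka.server.quota.ClientQuotaType;

/**
 * Hypothetical callback that grants every user one fixed quota per quota type,
 * ignoring client-id. Real callbacks would typically track quotas updated via
 * updateQuota/removeQuota or derive them from cluster metadata.
 */
public class FixedUserQuotaCallback implements ClientQuotaCallback {

    private final Map<ClientQuotaType, Double> defaults = new ConcurrentHashMap<>();

    @Override
    public void configure(Map<String, ?> configs) {
        // "user.quota.default" is an assumed custom key, not a broker property.
        Object limit = configs.get("user.quota.default");
        double value = limit == null ? 1024 * 1024.0 : Double.parseDouble(limit.toString());
        for (ClientQuotaType type : ClientQuotaType.values())
            defaults.put(type, value);
    }

    @Override
    public Map<String, String> quotaMetricTags(ClientQuotaType quotaType,
                                               KafkaPrincipal principal, String clientId) {
        // All connections of the same user share one quota metric.
        return Collections.singletonMap("user", principal.getName());
    }

    @Override
    public Double quotaLimit(ClientQuotaType quotaType, Map<String, String> metricTags) {
        return defaults.get(quotaType);
    }

    @Override
    public void updateQuota(ClientQuotaType quotaType, ClientQuotaEntity quotaEntity, double newValue) {
        // Built-in quota configs are ignored in this sketch.
    }

    @Override
    public void removeQuota(ClientQuotaType quotaType, ClientQuotaEntity quotaEntity) { }

    @Override
    public boolean quotaResetRequired(ClientQuotaType quotaType) {
        return false; // quotas never change in this sketch
    }

    @Override
    public boolean updateClusterMetadata(Cluster cluster) {
        return false; // no partition-based quotas, so no metric changes
    }

    @Override
    public void close() { }
}
{code}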

[jira] [Commented] (KAFKA-5938) Oracle jdbc-source-connector with kafka-connect distributed mode will result in http 500 error

2018-04-06 Thread Thomas C (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429050#comment-16429050
 ] 

Thomas C commented on KAFKA-5938:
-

[~TranceMaster86]

I am having the same issue with Kafka version 1.0.0-cp1 in Docker. I have tried 
adding the schema.pattern connection parameter, but it doesn't seem to fix the 
issue. Do you have any other suggestions for resolving this?

 

> Oracle jdbc-source-connector with kafka-connect distributed mode will result 
> in http 500 error
> --
>
> Key: KAFKA-5938
> URL: https://issues.apache.org/jira/browse/KAFKA-5938
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Wolfgang Clemens
>Priority: Major
>
> When I try to add an Oracle JDBC source connector in kafka-connect 
> distributed mode via curl, it ends up in an HTTP 500 error.
> To reproduce, feel free to clone my repo for that error: 
> https://github.com/TranceMaster86/kafka-connect-oracle-error
> In the readme I described how to reproduce the error.
> It hangs up kafka-connect completely. All other connectors seem to stop 
> working after I try to create an Oracle JDBC source connector with a '@' in 
> the connection string. I think this is the main problem here. The Oracle 
> connection string looks like 'jdbc:oracle:thin:@hostname:port/service' or 
> 'jdbc:oracle:thin:@hostname:port:SID'.
> See also: https://github.com/confluentinc/cp-docker-images/issues/338



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6649) ReplicaFetcher stopped after non fatal exception is thrown

2018-04-06 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429049#comment-16429049
 ] 

Jason Gustafson commented on KAFKA-6649:


[~srinivas.d...@gmail.com] Sorry for the slow response. Can you post the logs 
from the failure? Also, are there any details you know of that would help 
us reproduce the error?

> ReplicaFetcher stopped after non fatal exception is thrown
> --
>
> Key: KAFKA-6649
> URL: https://issues.apache.org/jira/browse/KAFKA-6649
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 1.0.0, 0.11.0.2, 1.1.0, 1.0.1
>Reporter: Julio Ng
>Priority: Major
>
> We have seen several under-replicated partitions, usually triggered by topic 
> creation. After digging in the logs, we see the below:
> {noformat}
> [2018-03-12 22:40:17,641] ERROR [ReplicaFetcher replicaId=12, leaderId=0, 
> fetcherId=1] Error due to (kafka.server.ReplicaFetcherThread)
> kafka.common.KafkaException: Error processing data for partition 
> [[TOPIC_NAME_REMOVED]]-84 offset 2098535
>  at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:204)
>  at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:169)
>  at scala.Option.foreach(Option.scala:257)
>  at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:169)
>  at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:166)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>  at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:166)
>  at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:166)
>  at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:166)
>  at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
>  at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:164)
>  at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
> Caused by: org.apache.kafka.common.errors.OffsetOutOfRangeException: Cannot 
> increment the log start offset to 2098535 of partition 
> [[TOPIC_NAME_REMOVED]]-84 since it is larger than the high watermark -1
> [2018-03-12 22:40:17,641] INFO [ReplicaFetcher replicaId=12, leaderId=0, 
> fetcherId=1] Stopped (kafka.server.ReplicaFetcherThread){noformat}
> It looks like after the ReplicaFetcherThread is stopped, the replicas 
> start to lag behind, presumably because we are no longer fetching from the 
> leader. Examining the ShutdownableThread.scala object further:
> {noformat}
> override def run(): Unit = {
>   info("Starting")
>   try {
>     while (isRunning)
>       doWork()
>   } catch {
>     case e: FatalExitError =>
>       shutdownInitiated.countDown()
>       shutdownComplete.countDown()
>       info("Stopped")
>       Exit.exit(e.statusCode())
>     case e: Throwable =>
>       if (isRunning)
>         error("Error due to", e)
>   } finally {
>     shutdownComplete.countDown()
>   }
>   info("Stopped")
> }{noformat}
> For the Throwable (non-fatal) case, it just exits the while loop and the 
> thread stops doing work. I am not sure whether this is the intended behavior 
> of the ShutdownableThread, or whether the exception should be caught so that 
> we keep calling doWork().
>  
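
For comparison, a Java sketch of the alternative behaviour suggested above,
where non-fatal exceptions are caught inside the loop so the thread keeps
calling doWork(). Illustrative only; the real class is
kafka.utils.ShutdownableThread in Scala:

{code:java}
// Hypothetical variant: a per-iteration catch means one bad fetch response
// does not permanently stop the replica fetcher loop.
public abstract class ResilientShutdownableThread extends Thread {
    private volatile boolean running = true;

    protected abstract void doWork() throws Exception;

    @Override
    public void run() {
        System.out.println("Starting");
        while (running) {
            try {
                doWork();
            } catch (Exception e) {
                // Log and continue instead of exiting the loop.
                System.err.println("Error due to " + e + ", continuing");
            }
        }
        System.out.println("Stopped");
    }

    public void initiateShutdown() {
        running = false;
    }
}
{code}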



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6761) Reduce Kafka Streams Footprint

2018-04-06 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-6761:
---
Description: 
The persistent storage footprint of a Kafka Streams application contains the 
following aspects:
 # The internal topics created on the Kafka cluster side.
 # The materialized state stores on the Kafka Streams application instances 
side.

There have been some questions about reducing these footprints, especially 
since many of them are not necessary. For example, there are redundant internal 
topics, as well as unnecessary state stores that take up space and also affect 
performance. When people push Streams to production with high traffic, 
this issue becomes more common and severe.

  was:
The persistent storage footprint of a Kafka Streams application contains the 
following aspects:
 # The internal topics created on the Kafka cluster side.
 # The materialized state stores on the Kafka Streams application instances 
side.

There have been some questions about reducing these footprints, especially 
since many of them are not necessary. For example, there are redundant internal 
topics, as well as unnecessary state stores that take up space and also affect 
performance. When people push Streams to production with high traffic, 
this issue becomes more common and severe. Reducing the footprint of Streams 
has clear benefits for Kafka Streams operations, and also for KSQL deployment 
resource utilization.


> Reduce Kafka Streams Footprint
> --
>
> Key: KAFKA-6761
> URL: https://issues.apache.org/jira/browse/KAFKA-6761
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 1.2.0
>
>
> The persistent storage footprint of a Kafka Streams application contains the 
> following aspects:
>  # The internal topics created on the Kafka cluster side.
>  # The materialized state stores on the Kafka Streams application instances 
> side.
> There have been some questions about reducing these footprints, especially 
> since many of them are not necessary. For example, there are redundant 
> internal topics, as well as unnecessary state stores that take up space and 
> also affect performance. When people push Streams to production with 
> high traffic, this issue becomes more common and severe.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6761) Reduce Kafka Streams Footprint

2018-04-06 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-6761:
--

 Summary: Reduce Kafka Streams Footprint
 Key: KAFKA-6761
 URL: https://issues.apache.org/jira/browse/KAFKA-6761
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck
 Fix For: 1.2.0


The persistent storage footprint of a Kafka Streams application contains the 
following aspects:
 # The internal topics created on the Kafka cluster side.
 # The materialized state stores on the Kafka Streams application instances 
side.

There have been some questions about reducing these footprints, especially 
since many of them are not necessary. For example, there are redundant internal 
topics, as well as unnecessary state stores that take up space and also affect 
performance. When people push Streams to production with high traffic, 
this issue becomes more common and severe. Reducing the footprint of Streams 
has clear benefits for Kafka Streams operations, and also for KSQL deployment 
resource utilization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6760) responses not logged properly in controller

2018-04-06 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-6760:
--

 Summary: responses not logged properly in controller
 Key: KAFKA-6760
 URL: https://issues.apache.org/jira/browse/KAFKA-6760
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 1.1.0
Reporter: Jun Rao


Saw the following logging in controller.log. We need to log the 
StopReplicaResponse properly in KafkaController.

[2018-04-05 14:38:41,878] DEBUG [Controller id=0] Delete topic callback invoked 
for org.apache.kafka.common.requests.StopReplicaResponse@263d40c 
(kafka.controller.KafkaController)

It seems that the same issue exists for LeaderAndIsrResponse as well.
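
The hash-style output indicates the response is being rendered with the default
Object.toString(). A self-contained Java demo of the effect (the class below is
a stand-in, not the real Kafka response class):

{code:java}
public class ToStringDemo {
    // No toString() override, as the log output above suggests.
    static class StopReplicaResponseLike { }

    public static void main(String[] args) {
        StopReplicaResponseLike r = new StopReplicaResponseLike();
        // Prints className@hexHashCode, e.g. "...StopReplicaResponseLike@263d40c",
        // which matches what shows up in controller.log.
        System.out.println("Delete topic callback invoked for " + r);
    }
}
{code}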



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6759) Include valid Fetch Response high_watermark and log_start_offset when Error Code 1 OFFSET_OUT_OF_RANGE

2018-04-06 Thread John R. Fallows (JIRA)
John R. Fallows created KAFKA-6759:
--

 Summary: Include valid Fetch Response high_watermark and 
log_start_offset when Error Code 1 OFFSET_OUT_OF_RANGE
 Key: KAFKA-6759
 URL: https://issues.apache.org/jira/browse/KAFKA-6759
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: John R. Fallows


When a FETCH version 6 response has Error Code 1 (OFFSET_OUT_OF_RANGE), is there 
any reason why the FETCH response could *not* also include valid values for 
high_watermark and log_start_offset? They are both currently -1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6474) Rewrite test to use new public TopologyTestDriver

2018-04-06 Thread John Roesler (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16428837#comment-16428837
 ] 

John Roesler commented on KAFKA-6474:
-

Thanks for the PR. I'll review it and comment as soon as I can.

-John

> Rewrite test to use new public TopologyTestDriver
> -
>
> Key: KAFKA-6474
> URL: https://issues.apache.org/jira/browse/KAFKA-6474
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, unit tests
>Affects Versions: 1.1.0
>Reporter: Matthias J. Sax
>Assignee: Filipe Agapito
>Priority: Major
>  Labels: beginner, newbie
>
> With KIP-247 we added the public TopologyTestDriver. We should rewrite our own 
> tests to use this new test driver and remove the two classes 
> ProcessorTopologyTestDriver and KStreamTestDriver.
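
For anyone picking this up, a minimal sketch of the KIP-247 driver usage as of
1.1 (the topology and topic names are made up; requires the
kafka-streams-test-utils artifact):

{code:java}
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.test.ConsumerRecordFactory;

public class UppercaseTopologyTest {
    public static void main(String[] args) {
        // Toy topology: upper-case every value from "input" into "output".
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input")
               .mapValues(v -> v.toUpperCase())
               .to("output");
        Topology topology = builder.build();

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topology-test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // never contacted
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        TopologyTestDriver driver = new TopologyTestDriver(topology, props);
        ConsumerRecordFactory<String, String> factory =
                new ConsumerRecordFactory<>("input", new StringSerializer(), new StringSerializer());
        driver.pipeInput(factory.create("input", "key", "hello"));
        // Prints a ProducerRecord whose value is "HELLO".
        System.out.println(driver.readOutput("output", new StringDeserializer(), new StringDeserializer()));
        driver.close();
    }
}
{code}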



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6758) Default "" consumer group tracks committed offsets, but is otherwise not a real group

2018-04-06 Thread David van Geest (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David van Geest updated KAFKA-6758:
---
Description: 
*To reproduce:*
 * Use the default config for `group.id` of "" (the empty string)
 * Use the default config for `enable.auto.commit` of `true`
 * Use manually assigned partitions with `assign`

*Actual (unexpected) behaviour:*

Consumer offsets are stored for the "" group. Example:

{{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
--bootstrap-server localhost:9092 --describe --group ""}}
 {{Note: This will only show information about consumers that use the Java 
consumer API (non-ZooKeeper-based consumers).}}

{{Consumer group '' has no active members.}}

{{TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID}}
 {{my_topic 54 7859593 7865082 5489 - - -}}
 {{my_topic 5 14252813 14266419 13606 - - -}}
 {{my_topic 39 19099099 19122441 23342 - - -}}
 {{my_topic 43 16434573 16449180 14607 - - -.}}



 

However, "" is not a real group. It doesn't show up with:

{{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
--bootstrap-server localhost:9092 --list}}

You also can't do dynamic partition assignment with it - if you try to 
`subscribe` when using the default "" group ID, you get:

{{AbstractCoordinator: Attempt to join group  failed due to fatal error: The 
configured groupId is invalid}}

*Better behaviours:*

(any of these would be preferable, in my opinion)
 * Don't commit offsets with the "" group, and log a warning telling the user 
that `enable.auto.commit = true` is meaningless in this situation. This is what 
I would have expected.
 * Don't have a default `group.id`. Some of my reading indicates that the new 
consumer basically needs a `group.id` to function. If so, force users to choose 
a group ID so that they're more aware of what will happen.
 * Have a default `group.id` of `default`, and make it a real consumer group. 
That is, it shows up in lists of groups, it has dynamic partitioning, etc.

As a user, when I don't set `group.id` I expect that I'm not using consumer 
groups. This is confirmed to me by listing the consumer groups on the broker 
and not seeing anything. Therefore, I expect that there will be no offset 
tracking in Kafka.

In my specific application, I wanted `auto.offset.reset` to kick in so 
that a failed consumer would start at the `latest` offset. However, it started 
at this unexpectedly stored offset instead.

 

  was:
*To reproduce:*
 * Use the default config for `group.id` of "" (the empty string)
 * Use the default config for `enable.auto.commit` of `true`
 * Use manually assigned partitions with `assign`

*Actual (unexpected) behaviour:*

Consumer offsets are stored for the "" group. Example:

{{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
--bootstrap-server localhost:9092 --describe --group ""}}
 {{Note: This will only show information about consumers that use the Java 
consumer API (non-ZooKeeper-based consumers).Consumer group '' has no active 
members.TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST 
CLIENT-ID}}
 {{my_topic 54 7859593 7865082 5489 - - -}}
 {{my_topic 5 14252813 14266419 13606 - - -}}
 {{my_topic 39 19099099 19122441 23342 - - -}}
 {{my_topic 43 16434573 16449180 14607 - - -.}}



 

However, "" is not a real group. It doesn't show up with:

{{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
--bootstrap-server localhost:9092 --list}}

You also can't do dynamic partition assignment with it - if you try to 
`subscribe` when using the default "" group ID, you get:

{{AbstractCoordinator: Attempt to join group  failed due to fatal error: The 
configured groupId is invalid}}

*Better behaviours:*

(any of these would be preferable, in my opinion)
 * Don't commit offsets with the "" group, and log a warning telling the user 
that `enable.auto.commit = true` is meaningless in this situation. This is what 
I would have expected.
 * Don't have a default `group.id`. Some of my reading indicates that the new 
consumer basically needs a `group.id` to function. If so, force users to choose 
a group ID so that they're more aware of what will happen.
 * Have a default `group.id` of `default`, and make it a real consumer group. 
That is, it shows up in lists of groups, it has dynamic partitioning, etc.

As a user, when I don't set `group.id` I expect that I'm not using consumer 
groups. This is confirmed to me by listing the consumer groups on the broker 
and not seeing anything. Therefore, I expect that there will be no offset 
tracking in Kafka.

In my specific application, I wanted `auto.offset.reset` to kick in so 
that a failed consumer would start at the `latest` offset. However, it started 
at this unexpectedly stored offset instead.

 


> Default "" consumer group tracks committed offsets, but is otherwise 

[jira] [Updated] (KAFKA-6754) Allow Kafka to be used for horizontally-scalable real-time stream visualization

2018-04-06 Thread Bill DeStein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill DeStein updated KAFKA-6754:

Description: 
I've developed a patch that allows Kafka to be used as the back-end for 
horizontally-scalable real-time stream visualization systems.

I've created a five-minute demo video here.

https://goo.gl/cERVmb

I'd like to get thoughts from the Kafka leadership on whether my patch could be 
made part of Kafka going forward.  I'll create a wiki with implementation 
details if there is interest.

I intend to open source the time series portal as a separate project because it 
will consist of lots of React and Redux code that probably doesn't belong in 
the Kafka code base.

Thanks,  Bill DeStein

  was:
I've developed a patch that allows Kafka to be used as the back-end for 
horizontally-scalable real-time stream visualization systems.

I've created a five-minute demo video here.

http://bit.ly/2uMTvJq

I'd like to get thoughts from the Kafka leadership on whether my patch could be 
made part of Kafka going forward.  I'll create a wiki with implementation 
details if there is interest.

I intend to open source the time series portal as a separate project because it 
will consist of lots of React and Redux code that probably doesn't belong in 
the Kafka code base.

Thanks,  Bill DeStein


> Allow Kafka to be used for horizontally-scalable real-time stream 
> visualization
> ---
>
> Key: KAFKA-6754
> URL: https://issues.apache.org/jira/browse/KAFKA-6754
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, core
>Reporter: Bill DeStein
>Priority: Major
>
> I've developed a patch that allows Kafka to be used as the back-end for 
> horizontally-scalable real-time stream visualization systems.
> I've created a five-minute demo video here.
> https://goo.gl/cERVmb
> I'd like to get thoughts from the Kafka leadership on whether my patch could 
> be made part of Kafka going forward.  I'll create a wiki with implementation 
> details if there is interest.
> I intend to open source the time series portal as a separate project because 
> it will consist of lots of React and Redux code that probably doesn't belong 
> in the Kafka code base.
> Thanks,  Bill DeStein



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6754) Allow Kafka to be used for horizontally-scalable real-time stream visualization

2018-04-06 Thread Bill DeStein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill DeStein updated KAFKA-6754:

Description: 
I've developed a patch that allows Kafka to be used as the back-end for 
horizontally-scalable real-time stream visualization systems.

I've created a five-minute demo video here.

http://bit.ly/2uMTvJq

I'd like to get thoughts from the Kafka leadership on whether my patch could be 
made part of Kafka going forward.  I'll create a wiki with implementation 
details if there is interest.

I intend to open source the time series portal as a separate project because it 
will consist of lots of React and Redux code that probably doesn't belong in 
the Kafka code base.

Thanks,  Bill DeStein

  was:
I've developed a patch that allows Kafka to be used as the back-end for 
horizontally-scalable real-time stream visualization systems.

I've created a five-minute demo video here.

[http://billdestein.software.s3.amazonaws.com/sitegrapher.html]

I'd like to get thoughts from the Kafka leadership on whether my patch could be 
made part of Kafka going forward.  I'll create a wiki with implementation 
details if there is interest.

I intend to open source the time series portal as a separate project because it 
will consist of lots of React and Redux code that probably doesn't belong in 
the Kafka code base.

Thanks,  Bill DeStein


> Allow Kafka to be used for horizontally-scalable real-time stream 
> visualization
> ---
>
> Key: KAFKA-6754
> URL: https://issues.apache.org/jira/browse/KAFKA-6754
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, core
>Reporter: Bill DeStein
>Priority: Major
>
> I've developed a patch that allows Kafka to be used as the back-end for 
> horizontally-scalable real-time stream visualization systems.
> I've created a five-minute demo video here.
> http://bit.ly/2uMTvJq
> I'd like to get thoughts from the Kafka leadership on whether my patch could 
> be made part of Kafka going forward.  I'll create a wiki with implementation 
> details if there is interest.
> I intend to open source the time series portal as a separate project because 
> it will consist of lots of React and Redux code that probably doesn't belong 
> in the Kafka code base.
> Thanks,  Bill DeStein



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6754) Allow Kafka to be used for horizontally-scalable real-time stream visualization

2018-04-06 Thread Bill DeStein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill DeStein updated KAFKA-6754:

Description: 
I've developed a patch that allows Kafka to be used as the back-end for 
horizontally-scalable real-time stream visualization systems.

I've created a five-minute demo video here.

[http://billdestein.software.s3.amazonaws.com/sitegrapher.html]

I'd like to get thoughts from the Kafka leadership on whether my patch could be 
made part of Kafka going forward.  I'll create a wiki with implementation 
details if there is interest.

I intend to open source the time series portal as a separate project because it 
will consist of lots of React and Redux code that probably doesn't belong in 
the Kafka code base.

Thanks,  Bill DeStein

  was:
I've developed a patch that allows Kafka to be used as the back-end for 
horizontally-scalable real-time stream visualization systems.

I've created a five-minute demo video here.

[http://billdestein.software.s3.amazonaws.com/sitegrapher.html]

I'd like to get thoughts from the Kafka leadership on whether my patch could be 
made part of Kafka going forward.  I'll create a wiki with implementation 
details if there is interest.

I intend to open source the time series portal as a separate project because it 
will be lots of React and Redux code that probably doesn't belong in the Kafka 
code base.

Thanks,  Bill DeStein


> Allow Kafka to be used for horizontally-scalable real-time stream 
> visualization
> ---
>
> Key: KAFKA-6754
> URL: https://issues.apache.org/jira/browse/KAFKA-6754
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, core
>Reporter: Bill DeStein
>Priority: Major
>
> I've developed a patch that allows Kafka to be used as the back-end for 
> horizontally-scalable real-time stream visualization systems.
> I've created a five-minute demo video here.
> [http://billdestein.software.s3.amazonaws.com/sitegrapher.html]
> I'd like to get thoughts from the Kafka leadership on whether my patch could 
> be made part of Kafka going forward.  I'll create a wiki with implementation 
> details if there is interest.
> I intend to open source the time series portal as a separate project because 
> it will consist of lots of React and Redux code that probably doesn't belong 
> in the Kafka code base.
> Thanks,  Bill DeStein



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6754) Allow Kafka to be used for horizontally-scalable real-time stream visualization

2018-04-06 Thread Bill DeStein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill DeStein updated KAFKA-6754:

Component/s: core

> Allow Kafka to be used for horizontally-scalable real-time stream 
> visualization
> ---
>
> Key: KAFKA-6754
> URL: https://issues.apache.org/jira/browse/KAFKA-6754
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, core
>Reporter: Bill DeStein
>Priority: Major
>
> I've developed a patch that allows Kafka to be used as the back-end for 
> horizontally-scalable real-time stream visualization systems.
> I've created a five-minute demo video here.
> [http://billdestein.software.s3.amazonaws.com/sitegrapher.html]
> I'd like to get thoughts from the Kafka leadership on whether my patch could 
> be made part of Kafka going forward.  I'll create a wiki with implementation 
> details if there is interest.
> I intend to open source the time series portal as a separate project because 
> it will be lots of React and Redux code that probably doesn't belong in the 
> Kafka code base.
> Thanks,  Bill DeStein



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6758) Default "" consumer group tracks committed offsets, but is otherwise not a real group

2018-04-06 Thread David van Geest (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David van Geest updated KAFKA-6758:
---
Description: 
*To reproduce:*
 * Use the default config for `group.id` of "" (the empty string)
 * Use the default config for `enable.auto.commit` of `true`
 * Use manually assigned partitions with `assign`

*Actual (unexpected) behaviour:*

Consumer offsets are stored for the "" group. Example:

{{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
--bootstrap-server localhost:9092 --describe --group ""}}
 {{Note: This will only show information about consumers that use the Java 
consumer API (non-ZooKeeper-based consumers).Consumer group '' has no active 
members.TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST 
CLIENT-ID}}
 {{my_topic 54 7859593 7865082 5489 - - -}}
 {{my_topic 5 14252813 14266419 13606 - - -}}
 {{my_topic 39 19099099 19122441 23342 - - -}}
 {{my_topic 43 16434573 16449180 14607 - - -.}}



 

However, "" is not a real group. It doesn't show up with:

{{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
--bootstrap-server localhost:9092 --list}}

You also can't do dynamic partition assignment with it - if you try to 
`subscribe` when using the default "" group ID, you get:

{{AbstractCoordinator: Attempt to join group  failed due to fatal error: The 
configured groupId is invalid}}

*Better behaviours:*

(any of these would be preferable, in my opinion)
 * Don't commit offsets with the "" group, and log a warning telling the user 
that `enable.auto.commit = true` is meaningless in this situation. This is what 
I would have expected.
 * Don't have a default `group.id`. Some of my reading indicates that the new 
consumer basically needs a `group.id` to function. If so, force users to choose 
a group ID so that they're more aware of what will happen.
 * Have a default `group.id` of `default`, and make it a real consumer group. 
That is, it shows up in lists of groups, it has dynamic partitioning, etc.

As a user, when I don't set `group.id` I expect that I'm not using consumer 
groups. This is confirmed to me by listing the consumer groups on the broker 
and not seeing anything. Therefore, I expect that there will be no offset 
tracking in Kafka.

In my specific application, I wanted `auto.offset.reset` to kick in so 
that a failed consumer would start at the `latest` offset. However, it started 
at this unexpectedly stored offset instead.

 

  was:
*To reproduce:*
 * Use the default config for `group.id` of "" (the empty string)
 * Use the default config for `enable.auto.commit` of `true`
 * Use manually assigned partitions with `assign`

*Actual (unexpected) behaviour:*

Consumer offsets are stored for the "" group. Example:

{{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
--bootstrap-server localhost:9092 --describe --group ""}}
{{Note: This will only show information about consumers that use the Java 
consumer API (non-ZooKeeper-based consumers).Consumer group '' has no active 
members.TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST 
CLIENT-ID}}
{{my_topic 54 7859593 7865082 5489 - - -}}
{{my_topic 5 14252813 14266419 13606 - - -}}
{{my_topic 39 19099099 19122441 23342 - - -}}
{{my_topic 43 16434573 16449180 14607 - - -.}}



 

However, "" is not a real group. It doesn't show up with:

{{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
--bootstrap-server localhost:9092 --list}}

You also can't do dynamic partition assignment with it - if you try to 
`subscribe` when using the default "" group ID, you get:

{{AbstractCoordinator: Attempt to join group  failed due to fatal error: The 
configured groupId is invalid}}

*Better behaviours:*

(any of these would be preferable, in my opinion)
 * Don't commit offsets with the "" group, and log a warning telling the user 
that `enable.auto.commit = true` is meaningless in this situation. This is what 
I would have expected.
 * Don't have a default `group.id`. Some of my reading indicates that the new 
consumer basically needs a `group.id` to function. If so, force users to choose 
a group ID so that they're more aware of what will happen.
 * Have a default `group.id` of `default`, and make it a real consumer group. 
That is, it shows up in lists of groups, it has dynamic partitioning, etc.

As a user, when I don't set `group.id` I expect that I'm not using consumer 
groups. Therefore, I expect that there will be no offset tracking in Kafka.

In my specific application, I wanted `auto.offset.reset` to kick in so 
that a failed consumer would start at the `latest` offset. However, it started 
at this unexpectedly stored offset instead.

 


> Default "" consumer group tracks committed offsets, but is otherwise not a 
> real group
> -
>
>  

[jira] [Created] (KAFKA-6758) Default "" consumer group tracks committed offsets, but is otherwise not a real group

2018-04-06 Thread David van Geest (JIRA)
David van Geest created KAFKA-6758:
--

 Summary: Default "" consumer group tracks committed offsets, but 
is otherwise not a real group
 Key: KAFKA-6758
 URL: https://issues.apache.org/jira/browse/KAFKA-6758
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.2
Reporter: David van Geest


*To reproduce:*
 * Use the default config for `group.id` of "" (the empty string)
 * Use the default config for `enable.auto.commit` of `true`
 * Use manually assigned partitions with `assign`

*Actual (unexpected) behaviour:*

Consumer offsets are stored for the "" group. Example:

{{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
--bootstrap-server localhost:9092 --describe --group ""}}
{{Note: This will only show information about consumers that use the Java 
consumer API (non-ZooKeeper-based consumers).Consumer group '' has no active 
members.TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST 
CLIENT-ID}}
{{my_topic 54 7859593 7865082 5489 - - -}}
{{my_topic 5 14252813 14266419 13606 - - -}}
{{my_topic 39 19099099 19122441 23342 - - -}}
{{my_topic 43 16434573 16449180 14607 - - -.}}



 

However, "" is not a real group. It doesn't show up with:

{{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
--bootstrap-server localhost:9092 --list}}

You also can't do dynamic partition assignment with it - if you try to 
`subscribe` when using the default "" group ID, you get:

{{AbstractCoordinator: Attempt to join group  failed due to fatal error: The 
configured groupId is invalid}}

*Better behaviours:*

(any of these would be preferable, in my opinion)
 * Don't commit offsets with the "" group, and log a warning telling the user 
that `enable.auto.commit = true` is meaningless in this situation. This is what 
I would have expected.
 * Don't have a default `group.id`. Some of my reading indicates that the new 
consumer basically needs a `group.id` to function. If so, force users to choose 
a group ID so that they're more aware of what will happen.
 * Have a default `group.id` of `default`, and make it a real consumer group. 
That is, it shows up in lists of groups, it has dynamic partitioning, etc.

As a user, when I don't set `group.id` I expect that I'm not using consumer 
groups. Therefore, I expect that there will be no offset tracking in Kafka.

In my specific application, I wanted `auto.offset.reset` to kick in so 
that a failed consumer would start at the `latest` offset. However, it started 
at this unexpectedly stored offset instead.
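
For anyone bitten by this: a minimal assign()-based consumer sketch that
sidesteps the surprise by setting an explicit group.id and disabling
auto-commit, so `auto.offset.reset` always applies on restart (the broker
address, group name, and topic below are placeholders):

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Explicit group.id instead of the default "", so any committed
        // offsets are tracked under a real, listable group.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-assigned-consumer");
        // No auto-commit: no offsets are stored, so auto.offset.reset kicks in.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(new TopicPartition("my_topic", 0)));
            consumer.poll(1000) // 0.11.x API: poll(long timeoutMs)
                    .forEach(r -> System.out.println(r.offset() + ": " + r.value()));
        }
    }
}
{code}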

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6754) Allow Kafka to be used for horizontally-scalable real-time stream visualization

2018-04-06 Thread Bill DeStein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill DeStein updated KAFKA-6754:

Description: 
I've developed a patch that allows Kafka to be used as the back-end for 
horizontally-scalable real-time stream visualization systems.

I've created a five-minute demo video here.

[http://billdestein.software.s3.amazonaws.com/sitegrapher.html]

I'd like to get thoughts from the Kafka leadership on whether my patch could be 
made part of Kafka going forward.  I'll create a wiki with implementation 
details if there is interest.

I intend to open source the time series portal as a separate project because it 
will be lots of React and Redux code that probably doesn't belong in the Kafka 
code base.

Thanks,  Bill DeStein

  was:
I've developed a patch that allows Kafka to be used as the back-end for 
horizontally-scalable real-time stream visualization systems.

I've created a five-minute demo video here.

[http://billdestein.software.s3.amazonaws.com/sitegrapher.html]

I'd like to get thought from the Kafka leadership on whether my patch could be 
made part of Kafka going forward.  I'll create a wiki with implementation 
details if there is interest.

I intend to open source the time series portal as a separate project because it 
will be lots of React and Redux code that probably doesn't belong in the Kafka 
code base.

Thanks,  Bill DeStein


> Allow Kafka to be used for horizontally-scalable real-time stream 
> visualization
> ---
>
> Key: KAFKA-6754
> URL: https://issues.apache.org/jira/browse/KAFKA-6754
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Bill DeStein
>Priority: Major
>
> I've developed a patch that allows Kafka to be used as the back-end for 
> horizontally-scalable real-time stream visualization systems.
> I've created a five-minute demo video here.
> [http://billdestein.software.s3.amazonaws.com/sitegrapher.html]
> I'd like to get thoughts from the Kafka leadership on whether my patch could 
> be made part of Kafka going forward.  I'll create a wiki with implementation 
> details if there is interest.
> I intend to open source the time series portal as a separate project because 
> it will be lots of React and Redux code that probably doesn't belong in the 
> Kafka code base.
> Thanks,  Bill DeStein



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-3240) Replication issues

2018-04-06 Thread Sergey AKhmatov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16428090#comment-16428090
 ] 

Sergey AKhmatov commented on KAFKA-3240:


Yes, I was running Kafka on ZFS. I had some concerns about compression and 
tried disabling it, but the problem recurred with or without compression.

I never tried running it on UFS to check whether it makes a difference.

> Replication issues
> --
>
> Key: KAFKA-3240
> URL: https://issues.apache.org/jira/browse/KAFKA-3240
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.2, 0.9.0.0, 0.9.0.1
> Environment: FreeBSD 10.2-RELEASE-p9
>Reporter: Jan Omar
>Priority: Major
>  Labels: reliability
>
> Hi,
> We are trying to replace our 3-broker cluster running on 0.6 with a new 
> cluster on 0.9.0.1 (but tried 0.8.2.2 and 0.9.0.0 as well).
> - 3 kafka nodes with one zookeeper instance on each machine
> - FreeBSD 10.2 p9
> - Nagle off (sysctl net.inet.tcp.delayed_ack=0)
> - all kafka machines write a ZFS ZIL to a dedicated SSD
> - 3 producers on 3 machines, writing to 1 topic with 3 partitions, replication 
> factor 3
> - acks all
> - 10 Gigabit Ethernet, all machines on one switch, ping 0.05 ms worst case.
> While using ProducerPerformance or rdkafka_performance, we are seeing very 
> strange replication errors. Any hint on what's going on would be highly 
> appreciated. Any suggestion on how to debug this properly would help as well.
> This is what our broker config looks like:
> {code}
> broker.id=5
> auto.create.topics.enable=false
> delete.topic.enable=true
> listeners=PLAINTEXT://:9092
> port=9092
> host.name=kafka-five.acc
> advertised.host.name=10.5.3.18
> zookeeper.connect=zookeeper-four.acc:2181,zookeeper-five.acc:2181,zookeeper-six.acc:2181
> zookeeper.connection.timeout.ms=6000
> num.replica.fetchers=1
> replica.fetch.max.bytes=1
> replica.fetch.wait.max.ms=500
> replica.high.watermark.checkpoint.interval.ms=5000
> replica.socket.timeout.ms=30
> replica.socket.receive.buffer.bytes=65536
> replica.lag.time.max.ms=1000
> min.insync.replicas=2
> controller.socket.timeout.ms=3
> controller.message.queue.size=100
> log.dirs=/var/db/kafka
> num.partitions=8
> message.max.bytes=1
> auto.create.topics.enable=false
> log.index.interval.bytes=4096
> log.index.size.max.bytes=10485760
> log.retention.hours=168
> log.flush.interval.ms=1
> log.flush.interval.messages=2
> log.flush.scheduler.interval.ms=2000
> log.roll.hours=168
> log.retention.check.interval.ms=30
> log.segment.bytes=536870912
> zookeeper.connection.timeout.ms=100
> zookeeper.sync.time.ms=5000
> num.io.threads=8
> num.network.threads=4
> socket.request.max.bytes=104857600
> socket.receive.buffer.bytes=1048576
> socket.send.buffer.bytes=1048576
> queued.max.requests=10
> fetch.purgatory.purge.interval.requests=100
> producer.purgatory.purge.interval.requests=100
> replica.lag.max.messages=1000
> {code}
> These are the errors we're seeing:
> {code:borderStyle=solid}
> ERROR [Replica Manager on Broker 5]: Error processing fetch operation on 
> partition [test,0] offset 50727 (kafka.server.ReplicaManager)
> java.lang.IllegalStateException: Invalid message size: 0
>   at kafka.log.FileMessageSet.searchFor(FileMessageSet.scala:141)
>   at kafka.log.LogSegment.translateOffset(LogSegment.scala:105)
>   at kafka.log.LogSegment.read(LogSegment.scala:126)
>   at kafka.log.Log.read(Log.scala:506)
>   at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:536)
>   at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:507)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:507)
>   at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:462)
>   at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> and 
> {code}
> ERROR Found invalid messages during fetch for partition [test,0] offset 2732 
> error Message found with corrupt size (0) in shallow iterator 
> 

[jira] [Updated] (KAFKA-6757) Log runtime exception in case of server startup errors

2018-04-06 Thread Enrico Olivelli (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enrico Olivelli updated KAFKA-6757:
---
Description: 
Sometimes while running Kafka server inside tests of an upstream application it 
can happen that the server cannot start due to a bad runtime error, like a 
missing jar on the classpath.

I would like KafkaServerStartable to log any 'Throwable' in order to catch 
these unpredictable errors

 

like this
{code:java}
java.lang.NoClassDefFoundError: com/fasterxml/jackson/annotation/JsonMerge
    at com.fasterxml.jackson.databind.introspect.JacksonAnnotationIntrospector.<clinit>(JacksonAnnotationIntrospector.java:50)
    at com.fasterxml.jackson.databind.ObjectMapper.<clinit>(ObjectMapper.java:291)
    at kafka.utils.Json$.<init>(Json.scala:29)
    at kafka.utils.Json$.<clinit>(Json.scala)
    at kafka.utils.ZkUtils$ClusterId$.toJson(ZkUtils.scala:299)
    at kafka.utils.ZkUtils.createOrGetClusterId(ZkUtils.scala:314)
    at 
kafka.server.KafkaServer$$anonfun$getOrGenerateClusterId$1.apply(KafkaServer.scala:356)
    at 
kafka.server.KafkaServer$$anonfun$getOrGenerateClusterId$1.apply(KafkaServer.scala:356)
    at scala.Option.getOrElse(Option.scala:121)
    at kafka.server.KafkaServer.getOrGenerateClusterId(KafkaServer.scala:356)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:197)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at 
magnews.datastream.server.DataStreamServer.start(DataStreamServer.java:96)
    at 
magnews.datastream.server.RealServerSinglePartitionTest.test(RealServerSinglePartitionTest.java:85)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325){code}

  was:
Sometimes while running Kafka server inside tests of an upstream application it 
can happen that the server cannot start due to a bad runtime error, like a 
missing jar on the classpath.

I would like KafkaServerStartable to log any 'Throwable' in order to catch this 
unpredictable errors

 

like this
{code:java}

java.lang.NoClassDefFoundError: com/fasterxml/jackson/annotation/JsonMerge
    at com.fasterxml.jackson.databind.introspect.JacksonAnnotationIntrospector.<clinit>(JacksonAnnotationIntrospector.java:50)
    at com.fasterxml.jackson.databind.ObjectMapper.<clinit>(ObjectMapper.java:291)
    at kafka.utils.Json$.<init>(Json.scala:29)
    at kafka.utils.Json$.<clinit>(Json.scala)
    at kafka.utils.ZkUtils$ClusterId$.toJson(ZkUtils.scala:299)
    at kafka.utils.ZkUtils.createOrGetClusterId(ZkUtils.scala:314)
    at 
kafka.server.KafkaServer$$anonfun$getOrGenerateClusterId$1.apply(KafkaServer.scala:356)
    at 
kafka.server.KafkaServer$$anonfun$getOrGenerateClusterId$1.apply(KafkaServer.scala:356)
    at scala.Option.getOrElse(Option.scala:121)
    at kafka.server.KafkaServer.getOrGenerateClusterId(KafkaServer.scala:356)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:197)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at 
magnews.datastream.server.DataStreamServer.start(DataStreamServer.java:96)
    at 
magnews.datastream.server.RealServerSinglePartitionTest.test(RealServerSinglePartitionTest.java:85)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325){code}


> Log runtime exception in case of server startup errors
> 

[jira] [Commented] (KAFKA-6757) Log runtime exception in case of server startup errors

2018-04-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16428078#comment-16428078
 ] 

ASF GitHub Bot commented on KAFKA-6757:
---

eolivelli opened a new pull request #4833: KAFKA-6757 Log runtime exception in 
case of server startup errors
URL: https://github.com/apache/kafka/pull/4833
 
 
   Sometimes, while running the Kafka server inside tests of an upstream application, 
it can happen that the server cannot start due to a bad runtime error, like a 
missing jar on the classpath; see KAFKA-6757 for examples.
   
   I would like KafkaServerStartable to log any 'Throwable' in order to catch 
these unpredictable errors.
   
   Testing strategy: no test case is needed or worth adding.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Log runtime exception in case of server startup errors
> --
>
> Key: KAFKA-6757
> URL: https://issues.apache.org/jira/browse/KAFKA-6757
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 1.0.1
>Reporter: Enrico Olivelli
>Priority: Minor
>
> Sometimes while running Kafka server inside tests of an upstream application 
> it can happen that the server cannot start due to a bad runtime error, like a 
> missing jar on the classpath.
> I would like KafkaServerStartable to log any 'Throwable' in order to catch 
> these unpredictable errors.
>  
> like this
> {code:java}
> java.lang.NoClassDefFoundError: com/fasterxml/jackson/annotation/JsonMerge
>     at com.fasterxml.jackson.databind.introspect.JacksonAnnotationIntrospector.<clinit>(JacksonAnnotationIntrospector.java:50)
>     at com.fasterxml.jackson.databind.ObjectMapper.<clinit>(ObjectMapper.java:291)
>     at kafka.utils.Json$.<init>(Json.scala:29)
>     at kafka.utils.Json$.<clinit>(Json.scala)
>     at kafka.utils.ZkUtils$ClusterId$.toJson(ZkUtils.scala:299)
>     at kafka.utils.ZkUtils.createOrGetClusterId(ZkUtils.scala:314)
>     at 
> kafka.server.KafkaServer$$anonfun$getOrGenerateClusterId$1.apply(KafkaServer.scala:356)
>     at 
> kafka.server.KafkaServer$$anonfun$getOrGenerateClusterId$1.apply(KafkaServer.scala:356)
>     at scala.Option.getOrElse(Option.scala:121)
>     at kafka.server.KafkaServer.getOrGenerateClusterId(KafkaServer.scala:356)
>     at kafka.server.KafkaServer.startup(KafkaServer.scala:197)
>     at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
>     at 
> magnews.datastream.server.DataStreamServer.start(DataStreamServer.java:96)
>     at 
> magnews.datastream.server.RealServerSinglePartitionTest.test(RealServerSinglePartitionTest.java:85)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6757) Log runtime exception in case of server startup errors

2018-04-06 Thread Enrico Olivelli (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16428054#comment-16428054
 ] 

Enrico Olivelli commented on KAFKA-6757:


I am sending a pull request for this trivial change

> Log runtime exception in case of server startup errors
> --
>
> Key: KAFKA-6757
> URL: https://issues.apache.org/jira/browse/KAFKA-6757
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 1.0.1
>Reporter: Enrico Olivelli
>Priority: Minor
>
> Sometimes while running Kafka server inside tests of an upstream application 
> it can happen that the server cannot start due to a bad runtime error, like a 
> missing jar on the classpath.
> I would like KafkaServerStartable to log any 'Throwable' in order to catch 
> these unpredictable errors.
>  
> like this
> {code:java}
> java.lang.NoClassDefFoundError: com/fasterxml/jackson/annotation/JsonMerge
>     at com.fasterxml.jackson.databind.introspect.JacksonAnnotationIntrospector.<clinit>(JacksonAnnotationIntrospector.java:50)
>     at com.fasterxml.jackson.databind.ObjectMapper.<clinit>(ObjectMapper.java:291)
>     at kafka.utils.Json$.<init>(Json.scala:29)
>     at kafka.utils.Json$.<clinit>(Json.scala)
>     at kafka.utils.ZkUtils$ClusterId$.toJson(ZkUtils.scala:299)
>     at kafka.utils.ZkUtils.createOrGetClusterId(ZkUtils.scala:314)
>     at 
> kafka.server.KafkaServer$$anonfun$getOrGenerateClusterId$1.apply(KafkaServer.scala:356)
>     at 
> kafka.server.KafkaServer$$anonfun$getOrGenerateClusterId$1.apply(KafkaServer.scala:356)
>     at scala.Option.getOrElse(Option.scala:121)
>     at kafka.server.KafkaServer.getOrGenerateClusterId(KafkaServer.scala:356)
>     at kafka.server.KafkaServer.startup(KafkaServer.scala:197)
>     at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
>     at 
> magnews.datastream.server.DataStreamServer.start(DataStreamServer.java:96)
>     at 
> magnews.datastream.server.RealServerSinglePartitionTest.test(RealServerSinglePartitionTest.java:85)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6757) Log runtime exception in case of server startup errors

2018-04-06 Thread Enrico Olivelli (JIRA)
Enrico Olivelli created KAFKA-6757:
--

 Summary: Log runtime exception in case of server startup errors
 Key: KAFKA-6757
 URL: https://issues.apache.org/jira/browse/KAFKA-6757
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 1.0.1
Reporter: Enrico Olivelli


Sometimes while running Kafka server inside tests of an upstream application it 
can happen that the server cannot start due to a bad runtime error, like a 
missing jar on the classpath.

I would like KafkaServerStartable to log any 'Throwable' in order to catch these 
unpredictable errors.

 

like this
{code:java}

java.lang.NoClassDefFoundError: com/fasterxml/jackson/annotation/JsonMerge
    at com.fasterxml.jackson.databind.introspect.JacksonAnnotationIntrospector.<clinit>(JacksonAnnotationIntrospector.java:50)
    at com.fasterxml.jackson.databind.ObjectMapper.<clinit>(ObjectMapper.java:291)
    at kafka.utils.Json$.<init>(Json.scala:29)
    at kafka.utils.Json$.<clinit>(Json.scala)
    at kafka.utils.ZkUtils$ClusterId$.toJson(ZkUtils.scala:299)
    at kafka.utils.ZkUtils.createOrGetClusterId(ZkUtils.scala:314)
    at 
kafka.server.KafkaServer$$anonfun$getOrGenerateClusterId$1.apply(KafkaServer.scala:356)
    at 
kafka.server.KafkaServer$$anonfun$getOrGenerateClusterId$1.apply(KafkaServer.scala:356)
    at scala.Option.getOrElse(Option.scala:121)
    at kafka.server.KafkaServer.getOrGenerateClusterId(KafkaServer.scala:356)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:197)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at 
magnews.datastream.server.DataStreamServer.start(DataStreamServer.java:96)
    at 
magnews.datastream.server.RealServerSinglePartitionTest.test(RealServerSinglePartitionTest.java:85)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325){code}
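
A Java sketch of the requested behaviour: catch any Throwable
(NoClassDefFoundError is an Error, not an Exception) and log it before exiting.
The wrapper class below is hypothetical; the real change would land in the
Scala class kafka.server.KafkaServerStartable:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical wrapper showing the pattern: catching Exception alone would
// miss errors like NoClassDefFoundError, so catch Throwable instead.
public class LoggingStartable {
    private static final Logger log = LoggerFactory.getLogger(LoggingStartable.class);

    private final Runnable startup;

    public LoggingStartable(Runnable startup) {
        this.startup = startup;
    }

    public void start() {
        try {
            startup.run();
        } catch (Throwable t) {
            // Log the root cause before exiting, so test failures are visible.
            log.error("Fatal error during broker startup. Prepare to shutdown", t);
            System.exit(1);
        }
    }
}
{code}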



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)