[GitHub] kafka pull request #2909: MINOR: adding global store must ensure unique name...

2017-04-24 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2909

MINOR: adding global store must ensure unique names



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka minor-fix-add-global-store

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2909.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2909


commit f45c81cb3ee936aadef62b442748282c7044bd9d
Author: Matthias J. Sax 
Date:   2017-04-25T05:56:17Z

MINOR: adding global store must ensure unique names




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4879) KafkaConsumer.position may hang forever when deleting a topic

2017-04-24 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982386#comment-15982386
 ] 

Guozhang Wang commented on KAFKA-4879:
--

[~huxi_2b] adding the timeout in these internal functions sounds good to me. One 
thing worth noting is the public API `Consumer#position`, which also calls 
`updateFetchPositions` but is considered a non-blocking call. We may want to 
change that into a blocking call as well, which could then also throw a 
timeout exception.

cc [~hachikuji]
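
For anyone hitting this before a timeout is wired through, here is a caller-side 
workaround sketch (not part of any patch; the helper name and the timeout 
constant are invented for the illustration). It relies on `Consumer#wakeup`, the 
one KafkaConsumer method that is safe to call from another thread, so that a 
hung `position()` call throws `WakeupException` instead of blocking forever:

{code}
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.WakeupException;

public class BoundedPosition {
    // Invented constant for the sketch; pick whatever bound makes sense.
    private static final long POSITION_TIMEOUT_MS = 30_000L;

    public static long positionWithTimeout(KafkaConsumer<?, ?> consumer,
                                           TopicPartition tp,
                                           ScheduledExecutorService timer) {
        // Schedule a wakeup that fires only if position() has not returned in time.
        ScheduledFuture<?> wakeup =
            timer.schedule(consumer::wakeup, POSITION_TIMEOUT_MS, TimeUnit.MILLISECONDS);
        try {
            return consumer.position(tp);
        } catch (WakeupException e) {
            throw new RuntimeException("position() did not complete within "
                + POSITION_TIMEOUT_MS + " ms for " + tp, e);
        } finally {
            // Cancel the pending wakeup. Note: if it fired just after position()
            // returned, the next blocking consumer call may see a spurious WakeupException.
            wakeup.cancel(false);
        }
    }
}
{code}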

> KafkaConsumer.position may hang forever when deleting a topic
> -
>
> Key: KAFKA-4879
> URL: https://issues.apache.org/jira/browse/KAFKA-4879
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.2.0
>Reporter: Shixiong Zhu
>Assignee: Armin Braun
>
> KafkaConsumer.position may hang forever when deleting a topic. The problem is 
> this line 
> https://github.com/apache/kafka/blob/022bf129518e33e165f9ceefc4ab9e622952d3bd/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L374
> The timeout is "Long.MAX_VALUE", and it will just retry forever for 
> UnknownTopicOrPartitionException.
> Here is a reproducer
> {code}
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.TopicPartition;
> import org.apache.kafka.common.serialization.StringDeserializer;
> import java.util.Collections;
> import java.util.Properties;
> import java.util.Set;
> public class KafkaReproducer {
>   public static void main(String[] args) {
> // Make sure "delete.topic.enable" is set to true.
> // Please create the topic test with "3" partitions manually.
> // The issue is gone when there is only one partition.
> String topic = "test";
> Properties props = new Properties();
> props.put("bootstrap.servers", "localhost:9092");
> props.put("group.id", "testgroup");
> props.put("value.deserializer", StringDeserializer.class.getName());
> props.put("key.deserializer", StringDeserializer.class.getName());
> props.put("enable.auto.commit", "false");
> KafkaConsumer<String, String> kc = new KafkaConsumer<>(props);
> kc.subscribe(Collections.singletonList(topic));
> kc.poll(0);
> Set<TopicPartition> partitions = kc.assignment();
> System.out.println("partitions: " + partitions);
> kc.pause(partitions);
> kc.seekToEnd(partitions);
> System.out.println("please delete the topic in 30 seconds");
> try {
>   // Sleep 30 seconds to give us enough time to delete the topic.
>   Thread.sleep(30000);
> } catch (InterruptedException e) {
>   e.printStackTrace();
> }
> System.out.println("sleep end");
> for (TopicPartition p : partitions) {
>   System.out.println(p + " offset: " + kc.position(p));
> }
> System.out.println("cannot reach here");
> kc.close();
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4379) Remove caching of dirty and removed keys from StoreChangeLogger

2017-04-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982362#comment-15982362
 ] 

ASF GitHub Bot commented on KAFKA-4379:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/2908

KAFKA-4379 Followup: Remove eviction listener from InMemoryLRUCache



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K4379-remove-listener

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2908.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2908


commit 44e1cea3a300b0e05e9574284260b774b852cb4e
Author: Guozhang Wang 
Date:   2017-04-25T02:07:23Z

remove eviction listener

commit 6343783ec2ffc907ad2b1d25cd764a6e52acc299
Author: Guozhang Wang 
Date:   2017-04-25T04:50:02Z

fix unit test




> Remove caching of dirty and removed keys from StoreChangeLogger
> ---
>
> Key: KAFKA-4379
> URL: https://issues.apache.org/jira/browse/KAFKA-4379
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.1.1, 0.10.2.0
>
>
> The StoreChangeLogger currently keeps a cache of dirty and removed keys and 
> will batch the changelog records such that we don't send a record for each 
> update. However, with KIP-63 this is unnecessary as the batching and 
> de-duping is done by the caching layer. Further, the StoreChangeLogger relies 
> on context.timestamp() which is likely to be incorrect when caching is enabled
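
For readers not familiar with KIP-63, a minimal sketch of the de-duping that the 
caching layer performs (assumed behavior for illustration, not the actual Streams 
cache code) shows why a separate dirty/removed-key cache in the change logger is 
redundant:

{code}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Only the latest value per key survives until the cache is flushed, so the
// downstream changelog sees at most one record per key per flush.
final class DedupCacheSketch<K, V> {
    private final LinkedHashMap<K, V> dirty = new LinkedHashMap<>();

    void put(K key, V value) {
        dirty.put(key, value);   // later writes overwrite earlier ones
    }

    void flush(BiConsumer<K, V> downstream) {
        for (Map.Entry<K, V> e : dirty.entrySet())
            downstream.accept(e.getKey(), e.getValue());
        dirty.clear();
    }
}
{code}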



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2908: KAFKA-4379 Followup: Remove eviction listener from...

2017-04-24 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/2908

KAFKA-4379 Followup: Remove eviction listener from InMemoryLRUCache



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K4379-remove-listener

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2908.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2908


commit 44e1cea3a300b0e05e9574284260b774b852cb4e
Author: Guozhang Wang 
Date:   2017-04-25T02:07:23Z

remove eviction listener

commit 6343783ec2ffc907ad2b1d25cd764a6e52acc299
Author: Guozhang Wang 
Date:   2017-04-25T04:50:02Z

fix unit test




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-5118) Improve message for Kafka failed startup with non-Kafka data in data.dirs

2017-04-24 Thread huxi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxi reassigned KAFKA-5118:
---

Assignee: huxi

> Improve message for Kafka failed startup with non-Kafka data in data.dirs
> -
>
> Key: KAFKA-5118
> URL: https://issues.apache.org/jira/browse/KAFKA-5118
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.2.0
>Reporter: Dustin Cote
>Assignee: huxi
>Priority: Minor
>
> Today, if you try to start up a broker with some non-Kafka data in the 
> data.dirs you end up with a cryptic message:
> {code}
> [2017-04-21 13:35:08,122] ERROR There was an error in one of the threads 
> during logs loading: java.lang.StringIndexOutOfBoundsException: String index 
> out of range: -1 (kafka.log.LogManager) 
> [2017-04-21 13:35:08,124] FATAL [Kafka Server 3], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) 
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1 
> {code}
> It'd be better if we could tell the user to look for non-Kafka data in the 
> data.dirs and print out the offending directory that caused the problem in 
> the first place.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2907: kafka-5118: Improve message for Kafka failed start...

2017-04-24 Thread amethystic
GitHub user amethystic opened a pull request:

https://github.com/apache/kafka/pull/2907

kafka-5118: Improve message for Kafka failed startup with non-Kafka data in 
data.dirs

Explicitly throw a clear exception when starting up a Kafka broker with 
non-Kafka data in data.dirs.
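
The actual change is in the broker's log-loading code; purely to illustrate the 
idea (the class and method names below are invented, and the exact 
directory-name check is an assumption), the fix amounts to validating the 
`<topic>-<partition>` naming convention before substring-ing and naming the 
offending path in the error:

{code}
import java.io.File;

// Illustration only, not the actual patch.
final class LogDirNameCheck {
    static void parseTopicPartition(File dir) {
        String name = dir.getName();
        int dash = name.lastIndexOf('-');
        boolean looksLikeKafka = dash > 0
            && dash < name.length() - 1
            && name.substring(dash + 1).chars().allMatch(Character::isDigit);
        if (!looksLikeKafka) {
            throw new IllegalArgumentException(
                "Found directory " + dir.getAbsolutePath()
                + " that is not in the form <topic>-<partition>; "
                + "is there non-Kafka data in log.dirs?");
        }
        String topic = name.substring(0, dash);
        int partition = Integer.parseInt(name.substring(dash + 1));
        // ... continue loading the log for (topic, partition) ...
    }
}
{code}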

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/amethystic/kafka 
kafka-5118_improve_msg__for_failed_startup_with_nonKafka_data

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2907.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2907


commit 48a60617292439e10d995f6edf2fcab18d4306cb
Author: amethystic 
Date:   2017-04-25T03:39:55Z

kafka-5118: Improve message for Kafka failed startup with non-Kafka data in 
data.dirs

Explicitly throw a clear exception when starting up a Kafka broker with 
non-Kafka data in data.dirs.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (KAFKA-5065) AbstractCoordinator.ensureCoordinatorReady() stuck in loop if absent any bootstrap servers

2017-04-24 Thread james chien (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980629#comment-15980629
 ] 

james chien edited comment on KAFKA-5065 at 4/25/17 2:40 AM:
-

Hi, I noticed that the reason for using Long.MAX_VALUE is explained in the 
comment: the loop needs to be entered in order to rebuild the connection 
repeatedly.


was (Author: james.c):
Hi, I noticed that the reason for using Long.MAX_VALUE is explained in the 
comment: the loop needs to be entered in order to rebuild the connection 
repeatedly.
So, I think we should emit a "warning message" before entering the retry block 
for the first time, to fix this issue.

> AbstractCoordinator.ensureCoordinatorReady() stuck in loop if absent any 
> bootstrap servers 
> ---
>
> Key: KAFKA-5065
> URL: https://issues.apache.org/jira/browse/KAFKA-5065
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0, 0.10.1.1, 0.10.2.0
>Reporter: Vladimir Porshkevich
>Assignee: james chien
>  Labels: newbie
>   Original Estimate: 4m
>  Remaining Estimate: 4m
>
> If Consumer started with wrong bootstrap servers or absent any valid servers, 
> and Thread call Consumer.poll(timeout) with any timeout Thread stuck in loop 
> with debug logs like
> {noformat}
> org.apache.kafka.common.network.Selector - Connection with /172.31.1.100 
> disconnected
> java.net.ConnectException: Connection timed out: no further information
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>   at 
> org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:51)
>   at 
> org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:81)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:335)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:203)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:138)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:216)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:275)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> com.example.SccSpringCloudDemoApplication.main(SccSpringCloudDemoApplication.java:46)
> {noformat}
> Problem with AbstractCoordinator.ensureCoordinatorReady() method
> It uses Long.MAX_VALUE as timeout.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (KAFKA-5065) AbstractCoordinator.ensureCoordinatorReady() stuck in loop if absent any bootstrap servers

2017-04-24 Thread james chien (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982296#comment-15982296
 ] 

james chien edited comment on KAFKA-5065 at 4/25/17 2:39 AM:
-

We should not replace Long.MAX_VALUE with a custom setting, but should instead 
return a status code to the user on the first failure. (By default an exception 
message is logged when log4j is configured, but there is silence when it is 
not.)


was (Author: james.c):
We should not replace Long.MAX_VALUE with a custom setting, but should instead 
emit a warning message to the user on the first failure.

> AbstractCoordinator.ensureCoordinatorReady() stuck in loop if absent any 
> bootstrap servers 
> ---
>
> Key: KAFKA-5065
> URL: https://issues.apache.org/jira/browse/KAFKA-5065
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0, 0.10.1.1, 0.10.2.0
>Reporter: Vladimir Porshkevich
>Assignee: james chien
>  Labels: newbie
>   Original Estimate: 4m
>  Remaining Estimate: 4m
>
> If Consumer started with wrong bootstrap servers or absent any valid servers, 
> and Thread call Consumer.poll(timeout) with any timeout Thread stuck in loop 
> with debug logs like
> {noformat}
> org.apache.kafka.common.network.Selector - Connection with /172.31.1.100 
> disconnected
> java.net.ConnectException: Connection timed out: no further information
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>   at 
> org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:51)
>   at 
> org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:81)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:335)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:203)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:138)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:216)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:275)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> com.example.SccSpringCloudDemoApplication.main(SccSpringCloudDemoApplication.java:46)
> {noformat}
> Problem with AbstractCoordinator.ensureCoordinatorReady() method
> It uses Long.MAX_VALUE as timeout.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5065) AbstractCoordinator.ensureCoordinatorReady() stuck in loop if absent any bootstrap servers

2017-04-24 Thread james chien (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

james chien reassigned KAFKA-5065:
--

Assignee: james chien

We should not replace Long.MAX_VALUE with a custom setting, but should instead 
emit a warning message to the user on the first failure.
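
As a rough sketch of that proposal (this is not the actual AbstractCoordinator 
code; the names below are invented for the illustration), the idea is to warn 
once on the first failed attempt while still retrying:

{code}
import java.util.function.BooleanSupplier;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch of "warn once, keep retrying"; findCoordinator stands in for whatever
// lookup keeps failing when no bootstrap server is reachable.
final class CoordinatorWaitSketch {
    private static final Logger log = LoggerFactory.getLogger(CoordinatorWaitSketch.class);

    void ensureCoordinatorReady(BooleanSupplier findCoordinator, long retryBackoffMs)
            throws InterruptedException {
        boolean warned = false;
        while (!findCoordinator.getAsBoolean()) {
            if (!warned) {
                log.warn("Coordinator lookup failed; no broker from bootstrap.servers is "
                        + "reachable. Will keep retrying every {} ms.", retryBackoffMs);
                warned = true;
            }
            Thread.sleep(retryBackoffMs);
        }
    }
}
{code}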

> AbstractCoordinator.ensureCoordinatorReady() stuck in loop if absent any 
> bootstrap servers 
> ---
>
> Key: KAFKA-5065
> URL: https://issues.apache.org/jira/browse/KAFKA-5065
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0, 0.10.1.1, 0.10.2.0
>Reporter: Vladimir Porshkevich
>Assignee: james chien
>  Labels: newbie
>   Original Estimate: 4m
>  Remaining Estimate: 4m
>
> If Consumer started with wrong bootstrap servers or absent any valid servers, 
> and Thread call Consumer.poll(timeout) with any timeout Thread stuck in loop 
> with debug logs like
> {noformat}
> org.apache.kafka.common.network.Selector - Connection with /172.31.1.100 
> disconnected
> java.net.ConnectException: Connection timed out: no further information
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>   at 
> org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:51)
>   at 
> org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:81)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:335)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:203)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:138)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:216)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:275)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> com.example.SccSpringCloudDemoApplication.main(SccSpringCloudDemoApplication.java:46)
> {noformat}
> Problem with AbstractCoordinator.ensureCoordinatorReady() method
> It uses Long.MAX_VALUE as timeout.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5121) Implement transaction index for KIP-98

2017-04-24 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5121:
--

 Summary: Implement transaction index for KIP-98
 Key: KAFKA-5121
 URL: https://issues.apache.org/jira/browse/KAFKA-5121
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Fix For: 0.11.0.0


As documented in the KIP-98 proposal, the broker will maintain an index 
containing all of the aborted transactions for each partition. This index is 
used to respond to fetches with READ_COMMITTED isolation. This requires the 
broker maintain the last stable offset (LSO).
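
As a rough illustration only (this is not the committed KIP-98 implementation; 
the entry layout below is an assumption), each index entry would describe one 
aborted transaction so that a READ_COMMITTED fetch can filter out its records:

{code}
// Sketch of a plausible aborted-transaction index entry; the field set is assumed.
final class AbortedTxnEntry {
    final long producerId;        // producer whose transaction was aborted
    final long firstOffset;       // first offset written by the transaction
    final long lastOffset;        // offset of the abort marker
    final long lastStableOffset;  // LSO when the abort marker was written

    AbortedTxnEntry(long producerId, long firstOffset, long lastOffset, long lastStableOffset) {
        this.producerId = producerId;
        this.firstOffset = firstOffset;
        this.lastOffset = lastOffset;
        this.lastStableOffset = lastStableOffset;
    }

    // An entry is returned with a fetch response if the aborted transaction
    // overlaps the fetched offset range.
    boolean overlaps(long fetchStart, long fetchEnd) {
        return firstOffset <= fetchEnd && lastOffset >= fetchStart;
    }
}
{code}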



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Recording - Storm & Kafka Meetup on April 20th 2017

2017-04-24 Thread Hugo Da Cruz Louro
Hi,

In the comments section of the Meetup page you can find links for all the 
slides.

https://www.meetup.com/Apache-Storm-Apache-Kafka/events/238975416/

Thank you for your interest.
Hugo

On Apr 24, 2017, at 10:19 AM, Aditya Desai 
> wrote:

Hi Harsha and others

Thanks a lot for this meetup. I have signed up and will surely attend from
June. I am really new to Apache Storm and Apache Kafka; I am still learning
them, and these meetups will definitely help people like me. If any of you
have good resources/tutorials/documents/GitHub repos for learning more about
Storm and Kafka, please do share. The video of this meetup is really
inspiring and informative.

Regards

On Mon, Apr 24, 2017 at 8:28 AM, Harsha Chintalapani 
>
wrote:

Hi Aditya,
Thanks for your interest. We are tentatively planning one for the first
week of June. If you haven't already, please register here:
https://www.meetup.com/Apache-Storm-Apache-Kafka/

I'll keep the Storm lists updated once we finalize the date & location.

Thanks,
Harsha

On Mon, Apr 24, 2017 at 7:02 AM Aditya Desai 
> wrote:

Hello Everyone

Can you please let us know when the next meetup is? It would be great if
we could have one in May.

Regards
Aditya Desai

On Mon, Apr 24, 2017 at 2:16 AM, Xin Wang 
> wrote:

How about publishing this to the Storm site?

- Xin

2017-04-22 19:27 GMT+08:00 steve tueno 
>:

great

Thanks



Best regards,

TUENO FOTSO Steve Jeffrey
Design Engineer
Computer Engineering
+237 676 57 17 28
+237 697 86 36 38
+33 6 23 71 91 52

https://jobs.jumia.cm/fr/candidats/CVTF1486563.html

__

https://play.google.com/store/apps/details?id=com.polytech.remotecomputer
https://play.google.com/store/apps/details?id=com.polytech.internetaccesschecker
http://www.traveler.cm/
http://remotecomputer.traveler.cm/
https://play.google.com/store/apps/details?id=com.polytech.androidsmssender
https://github.com/stuenofotso/notre-jargon
https://play.google.com/store/apps/details?id=com.polytech.welovecameroon
https://play.google.com/store/apps/details?id=com.polytech.welovefrance

[jira] [Comment Edited] (KAFKA-5118) Improve message for Kafka failed startup with non-Kafka data in data.dirs

2017-04-24 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982092#comment-15982092
 ] 

huxi edited comment on KAFKA-5118 at 4/25/17 1:12 AM:
--

I remember that 0.10.2.0 checks whether the directory name contains the dash 
before substring-ing it. Another possibility is that your non-Kafka directory 
ends with `-delete`. Is that right?


was (Author: huxi_2b):
I remember 0.10.2.0 checks whether the directory name contains the dash before 
substring-ing it. This is another possibility that your non-Kafka directory 
ends with `-delete`. Right?

> Improve message for Kafka failed startup with non-Kafka data in data.dirs
> -
>
> Key: KAFKA-5118
> URL: https://issues.apache.org/jira/browse/KAFKA-5118
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.2.0
>Reporter: Dustin Cote
>Priority: Minor
>
> Today, if you try to start up a broker with some non-Kafka data in the 
> data.dirs you end up with a cryptic message:
> {code}
> [2017-04-21 13:35:08,122] ERROR There was an error in one of the threads 
> during logs loading: java.lang.StringIndexOutOfBoundsException: String index 
> out of range: -1 (kafka.log.LogManager) 
> [2017-04-21 13:35:08,124] FATAL [Kafka Server 3], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) 
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1 
> {code}
> It'd be better if we could tell the user to look for non-Kafka data in the 
> data.dirs and print out the offending directory that caused the problem in 
> the first place.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5118) Improve message for Kafka failed startup with non-Kafka data in data.dirs

2017-04-24 Thread Dustin Cote (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982151#comment-15982151
 ] 

Dustin Cote commented on KAFKA-5118:


[~huxi] ^

> Improve message for Kafka failed startup with non-Kafka data in data.dirs
> -
>
> Key: KAFKA-5118
> URL: https://issues.apache.org/jira/browse/KAFKA-5118
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.2.0
>Reporter: Dustin Cote
>Priority: Minor
>
> Today, if you try to start up a broker with some non-Kafka data in the 
> data.dirs you end up with a cryptic message:
> {code}
> [2017-04-21 13:35:08,122] ERROR There was an error in one of the threads 
> during logs loading: java.lang.StringIndexOutOfBoundsException: String index 
> out of range: -1 (kafka.log.LogManager) 
> [2017-04-21 13:35:08,124] FATAL [Kafka Server 3], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) 
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1 
> {code}
> It'd be better if we could tell the user to look for non-Kafka data in the 
> data.dirs and print out the offending directory that caused the problem in 
> the first place.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5118) Improve message for Kafka failed startup with non-Kafka data in data.dirs

2017-04-24 Thread Dustin Cote (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982148#comment-15982148
 ] 

Dustin Cote commented on KAFKA-5118:


@huxi Yes, this is where this one came from: directories that ended in 
`-delete`, presumably from topics that were deleted. I haven't checked whether 
it can happen in other scenarios; I just thought the error was pretty cryptic.

> Improve message for Kafka failed startup with non-Kafka data in data.dirs
> -
>
> Key: KAFKA-5118
> URL: https://issues.apache.org/jira/browse/KAFKA-5118
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.2.0
>Reporter: Dustin Cote
>Priority: Minor
>
> Today, if you try to start up a broker with some non-Kafka data in the 
> data.dirs you end up with a cryptic message:
> {code}
> [2017-04-21 13:35:08,122] ERROR There was an error in one of the threads 
> during logs loading: java.lang.StringIndexOutOfBoundsException: String index 
> out of range: -1 (kafka.log.LogManager) 
> [2017-04-21 13:35:08,124] FATAL [Kafka Server 3], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) 
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1 
> {code}
> It'd be better if we could tell the user to look for non-Kafka data in the 
> data.dirs and print out the offending directory that caused the problem in 
> the first place.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Apurva Mehta
Congratulations Rajini!

On Mon, Apr 24, 2017 at 2:06 PM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> are pleased to announce that she has accepted!
>
> Rajini contributed 83 patches, 8 KIPs (all security and quota
> improvements) and a significant number of reviews. She is also on the
> conference committee for Kafka Summit, where she helped select content
> for our community event. Through her contributions she's shown good
> judgement, good coding skills, willingness to work with the community on
> finding the best
> solutions and very consistent follow through on her work.
>
> Thank you for your contributions, Rajini! Looking forward to many more :)
>
> Gwen, for the Apache Kafka PMC
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Dong Lin
Congratulations Rajini!

On Mon, Apr 24, 2017 at 4:52 PM, Becket Qin  wrote:

> Congratulations! Rajini! Great work!
>
> On Mon, Apr 24, 2017 at 3:33 PM, Jason Gustafson 
> wrote:
>
> > Woohoo! Great work, Rajini!
> >
> > On Mon, Apr 24, 2017 at 3:27 PM, Jun Rao  wrote:
> >
> > > Congratulations, Rajini ! Thanks for all your contributions.
> > >
> > > Jun
> > >
> > > On Mon, Apr 24, 2017 at 2:06 PM, Gwen Shapira 
> wrote:
> > >
> > > > The PMC for Apache Kafka has invited Rajini Sivaram as a committer
> and
> > we
> > > > are pleased to announce that she has accepted!
> > > >
> > > > Rajini contributed 83 patches, 8 KIPs (all security and quota
> > > > improvements) and a significant number of reviews. She is also on the
> > > > conference committee for Kafka Summit, where she helped select
> content
> > > > for our community event. Through her contributions she's shown good
> > > > judgement, good coding skills, willingness to work with the community
> > on
> > > > finding the best
> > > > solutions and very consistent follow through on her work.
> > > >
> > > > Thank you for your contributions, Rajini! Looking forward to many
> more
> > :)
> > > >
> > > > Gwen, for the Apache Kafka PMC
> > > >
> > >
> >
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Becket Qin
Congratulations! Rajini! Great work!

On Mon, Apr 24, 2017 at 3:33 PM, Jason Gustafson  wrote:

> Woohoo! Great work, Rajini!
>
> On Mon, Apr 24, 2017 at 3:27 PM, Jun Rao  wrote:
>
> > Congratulations, Rajini ! Thanks for all your contributions.
> >
> > Jun
> >
> > On Mon, Apr 24, 2017 at 2:06 PM, Gwen Shapira  wrote:
> >
> > > The PMC for Apache Kafka has invited Rajini Sivaram as a committer and
> we
> > > are pleased to announce that she has accepted!
> > >
> > > Rajini contributed 83 patches, 8 KIPs (all security and quota
> > > improvements) and a significant number of reviews. She is also on the
> > > conference committee for Kafka Summit, where she helped select content
> > > for our community event. Through her contributions she's shown good
> > > judgement, good coding skills, willingness to work with the community
> on
> > > finding the best
> > > solutions and very consistent follow through on her work.
> > >
> > > Thank you for your contributions, Rajini! Looking forward to many more
> :)
> > >
> > > Gwen, for the Apache Kafka PMC
> > >
> >
>


[jira] [Commented] (KAFKA-5120) Several controller metrics block if controller lock is held by another thread

2017-04-24 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982118#comment-15982118
 ] 

Onur Karaman commented on KAFKA-5120:
-

This will be fixed in KAFKA-5028. It chooses the latter approach you mentioned 
where the metrics would just read an atomic variable holding the precomputed 
metric values.

> Several controller metrics block if controller lock is held by another thread
> -
>
> Key: KAFKA-5120
> URL: https://issues.apache.org/jira/browse/KAFKA-5120
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, metrics
>Affects Versions: 0.10.2.0
>Reporter: Tim Carey-Smith
>Priority: Minor
>
> We have been tracking latency issues surrounding queries to Controller 
> MBeans. Upon digging into the root causes, we discovered that several metrics 
> acquire the controller lock within the gauge. 
> The affected metrics are: 
> * {{ActiveControllerCount}}
> * {{OfflinePartitionsCount}}
> * {{PreferredReplicaImbalanceCount}}
> If the controller is currently holding the lock and an MBean request is 
> received, the thread executing the request will block until the controller 
> releases the lock. 
> We discovered this in a cluster where the controller was holding the lock for 
> extended periods of time for normal operations. We have documented this issue 
> in KAFKA-5116. 
> Several possible solutions exist: 
> * Remove the lock from inside these {{Gauge}}s. 
> * Store and update the metric values in {{AtomicLong}}s. 
> Modifying the {{ActiveControllerCount}} metric seems to be straight-forward 
> while the other 2 metrics seem to be more involved. 
> We're happy to contribute a patch, but wanted to discuss potential solutions 
> and their tradeoffs before proceeding. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5120) Several controller metrics block if controller lock is held by another thread

2017-04-24 Thread Tim Carey-Smith (JIRA)
Tim Carey-Smith created KAFKA-5120:
--

 Summary: Several controller metrics block if controller lock is 
held by another thread
 Key: KAFKA-5120
 URL: https://issues.apache.org/jira/browse/KAFKA-5120
 Project: Kafka
  Issue Type: Bug
  Components: controller, metrics
Affects Versions: 0.10.2.0
Reporter: Tim Carey-Smith
Priority: Minor


We have been tracking latency issues surrounding queries to Controller MBeans. 
Upon digging into the root causes, we discovered that several metrics acquire 
the controller lock within the gauge. 

The affected metrics are: 

* {{ActiveControllerCount}}
* {{OfflinePartitionsCount}}
* {{PreferredReplicaImbalanceCount}}

If the controller is currently holding the lock and an MBean request is 
received, the thread executing the request will block until the controller 
releases the lock. 

We discovered this in a cluster where the controller was holding the lock for 
extended periods of time for normal operations. We have documented this issue 
in KAFKA-5116. 

Several possible solutions exist: 

* Remove the lock from inside these {{Gauge}}s. 
* Store and update the metric values in {{AtomicLong}}s. 

Modifying the {{ActiveControllerCount}} metric seems to be straight-forward 
while the other 2 metrics seem to be more involved. 

We're happy to contribute a patch, but wanted to discuss potential solutions 
and their tradeoffs before proceeding. 
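
To make the second option concrete, here is a minimal sketch (gauge registration 
is stood in by `Supplier`, and the class and field names are invented; this is 
not the controller's actual code): the controller thread updates atomics while 
it already holds its lock, and metrics reads never touch the lock at all:

{code}
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

final class ControllerMetricsSketch {
    private final AtomicLong offlinePartitionsCount = new AtomicLong(0);
    private final AtomicLong preferredReplicaImbalanceCount = new AtomicLong(0);

    // Called from the controller thread after each state change, with its lock held.
    void recompute(long offlinePartitions, long preferredReplicaImbalance) {
        offlinePartitionsCount.set(offlinePartitions);
        preferredReplicaImbalanceCount.set(preferredReplicaImbalance);
    }

    // Lock-free reads exposed to the metrics/MBean layer.
    Supplier<Long> offlinePartitionsGauge() {
        return offlinePartitionsCount::get;
    }

    Supplier<Long> preferredReplicaImbalanceGauge() {
        return preferredReplicaImbalanceCount::get;
    }
}
{code}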



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5118) Improve message for Kafka failed startup with non-Kafka data in data.dirs

2017-04-24 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982092#comment-15982092
 ] 

huxi commented on KAFKA-5118:
-

I remember that 0.10.2.0 checks whether the directory name contains the dash 
before substring-ing it. Another possibility is that your non-Kafka directory 
ends with `-delete`. Is that right?
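
To make the cryptic message concrete: whichever delimiter the directory-name 
parsing expects, a name that lacks it makes `lastIndexOf` return -1, and 
`substring(0, -1)` produces exactly this error text. A standalone illustration 
of the mechanics (not the broker's actual parsing code; the delimiter choice is 
an assumption):

{code}
public class SubstringCrash {
    public static void main(String[] args) {
        String dirName = "my-old-backup-delete"; // a non-Kafka dir that happens to end in -delete
        char expectedDelimiter = '.';            // assumed delimiter for the illustration
        int idx = dirName.lastIndexOf(expectedDelimiter); // -1: delimiter not present
        // On JDK 8 this throws java.lang.StringIndexOutOfBoundsException:
        // "String index out of range: -1", the same message seen at broker startup.
        String prefix = dirName.substring(0, idx);
        System.out.println(prefix);              // never reached
    }
}
{code}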

> Improve message for Kafka failed startup with non-Kafka data in data.dirs
> -
>
> Key: KAFKA-5118
> URL: https://issues.apache.org/jira/browse/KAFKA-5118
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.2.0
>Reporter: Dustin Cote
>Priority: Minor
>
> Today, if you try to start up a broker with some non-Kafka data in the 
> data.dirs you end up with a cryptic message:
> {code}
> [2017-04-21 13:35:08,122] ERROR There was an error in one of the threads 
> during logs loading: java.lang.StringIndexOutOfBoundsException: String index 
> out of range: -1 (kafka.log.LogManager) 
> [2017-04-21 13:35:08,124] FATAL [Kafka Server 3], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) 
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1 
> {code}
> It'd be better if we could tell the user to look for non-Kafka data in the 
> data.dirs and print out the offending directory that caused the problem in 
> the first place.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Jenkins build is back to normal : kafka-trunk-jdk8 #1454

2017-04-24 Thread Apache Jenkins Server
See 




Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Jason Gustafson
Woohoo! Great work, Rajini!

On Mon, Apr 24, 2017 at 3:27 PM, Jun Rao  wrote:

> Congratulations, Rajini ! Thanks for all your contributions.
>
> Jun
>
> On Mon, Apr 24, 2017 at 2:06 PM, Gwen Shapira  wrote:
>
> > The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> > are pleased to announce that she has accepted!
> >
> > Rajini contributed 83 patches, 8 KIPs (all security and quota
> > improvements) and a significant number of reviews. She is also on the
> > conference committee for Kafka Summit, where she helped select content
> > for our community event. Through her contributions she's shown good
> > judgement, good coding skills, willingness to work with the community on
> > finding the best
> > solutions and very consistent follow through on her work.
> >
> > Thank you for your contributions, Rajini! Looking forward to many more :)
> >
> > Gwen, for the Apache Kafka PMC
> >
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Jun Rao
Congratulations, Rajini ! Thanks for all your contributions.

Jun

On Mon, Apr 24, 2017 at 2:06 PM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> are pleased to announce that she has accepted!
>
> Rajini contributed 83 patches, 8 KIPs (all security and quota
> improvements) and a significant number of reviews. She is also on the
> conference committee for Kafka Summit, where she helped select content
> for our community event. Through her contributions she's shown good
> judgement, good coding skills, willingness to work with the community on
> finding the best
> solutions and very consistent follow through on her work.
>
> Thank you for your contributions, Rajini! Looking forward to many more :)
>
> Gwen, for the Apache Kafka PMC
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Michael Noll
Congratulations, Rajini!

On Mon, Apr 24, 2017 at 11:50 PM, Ismael Juma  wrote:

> Congrats Rajini! Well-deserved. :)
>
> Ismael
>
> On Mon, Apr 24, 2017 at 10:06 PM, Gwen Shapira  wrote:
>
> > The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> > are pleased to announce that she has accepted!
> >
> > Rajini contributed 83 patches, 8 KIPs (all security and quota
> > improvements) and a significant number of reviews. She is also on the
> > conference committee for Kafka Summit, where she helped select content
> > for our community event. Through her contributions she's shown good
> > judgement, good coding skills, willingness to work with the community on
> > finding the best
> > solutions and very consistent follow through on her work.
> >
> > Thank you for your contributions, Rajini! Looking forward to many more :)
> >
> > Gwen, for the Apache Kafka PMC
> >
>


Jenkins build is back to normal : kafka-0.10.2-jdk7 #150

2017-04-24 Thread Apache Jenkins Server
See 




Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Ismael Juma
Congrats Rajini! Well-deserved. :)

Ismael

On Mon, Apr 24, 2017 at 10:06 PM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> are pleased to announce that she has accepted!
>
> Rajini contributed 83 patches, 8 KIPs (all security and quota
> improvements) and a significant number of reviews. She is also on the
> conference committee for Kafka Summit, where she helped select content
> for our community event. Through her contributions she's shown good
> judgement, good coding skills, willingness to work with the community on
> finding the best
> solutions and very consistent follow through on her work.
>
> Thank you for your contributions, Rajini! Looking forward to many more :)
>
> Gwen, for the Apache Kafka PMC
>


[jira] [Commented] (KAFKA-4593) Task migration during rebalance callback process could lead the obsoleted task's IllegalStateException

2017-04-24 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15981970#comment-15981970
 ] 

Matthias J. Sax commented on KAFKA-4593:


This JIRA seems to be related to this one: 
https://issues.apache.org/jira/browse/KAFKA-3941
Any thoughts on whether we can close this as a duplicate? [~guozhang] [~damianguy]

> Task migration during rebalance callback process could lead the obsoleted 
> task's IllegalStateException
> --
>
> Key: KAFKA-4593
> URL: https://issues.apache.org/jira/browse/KAFKA-4593
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: infrastructure
>
> 1. Assume two running threads A and B, and one task t1, just for simplicity.
> 2. The first rebalance is triggered and task t1 is assigned to A (B has no 
> assigned task).
> 3. During the first rebalance callback, task t1's state store needs to be 
> restored on thread A; this happens in "restoreActiveState" of 
> "createStreamTask".
> 4. Now suppose thread A has a long GC causing it to stall. A second rebalance 
> will then be triggered and kick A out of the group; B gets task t1 and does 
> the same restoration process. Afterwards, thread B continues to process data 
> and update the state store, while at the same time writing more messages to 
> the changelog (so its log end offset has incremented).
> 5. After a while A resumes from the long GC, not knowing that it has actually 
> been kicked out of the group and that task t1 is no longer owned by it. It 
> continues the restoration process but then realizes that the log end offset 
> has advanced. When this happens, we will see the following exception on 
> thread A:
> {code}
> java.lang.IllegalStateException: task XXX Log end offset of
> YYY-table_stream-changelog-ZZ should not change while
> restoring: old end offset .., current offset ..
> at
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.restoreActiveState(ProcessorStateManager.java:248)
> at
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.register(ProcessorStateManager.java:201)
> at
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.register(ProcessorContextImpl.java:122)
> at
> org.apache.kafka.streams.state.internals.RocksDBWindowStore.init(RocksDBWindowStore.java:200)
> at
> org.apache.kafka.streams.state.internals.MeteredWindowStore.init(MeteredWindowStore.java:65)
> at
> org.apache.kafka.streams.state.internals.CachingWindowStore.init(CachingWindowStore.java:65)
> at
> org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:86)
> at
> org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:120)
> at
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:794)
> at
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1222)
> at
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1195)
> at
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:897)
> at
> org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:71)
> at
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:240)
> at
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:230)
> at
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:314)
> at
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:278)
> at
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:261)
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1039)
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1004)
> at
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:570)
> at
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:359)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-3266) Implement KIP-4 RPCs and APIs for creating, altering, and listing ACLs

2017-04-24 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated KAFKA-3266:
---
Summary: Implement KIP-4 RPCs and APIs for creating, altering, and listing 
ACLs  (was: List/Alter Acls - protocol and server side implementation)

> Implement KIP-4 RPCs and APIs for creating, altering, and listing ACLs
> --
>
> Key: KAFKA-3266
> URL: https://issues.apache.org/jira/browse/KAFKA-3266
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-3266) List/Alter Acls - protocol and server side implementation

2017-04-24 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe reassigned KAFKA-3266:
--

Assignee: Colin P. McCabe  (was: Grant Henke)

> List/Alter Acls - protocol and server side implementation
> -
>
> Key: KAFKA-3266
> URL: https://issues.apache.org/jira/browse/KAFKA-3266
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Jay Kreps
Congrats Rajini!

On Mon, Apr 24, 2017 at 2:06 PM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> are pleased to announce that she has accepted!
>
> Rajini contributed 83 patches, 8 KIPs (all security and quota
> improvements) and a significant number of reviews. She is also on the
> conference committee for Kafka Summit, where she helped select content
> for our community event. Through her contributions she's shown good
> judgement, good coding skills, willingness to work with the community on
> finding the best
> solutions and very consistent follow through on her work.
>
> Thank you for your contributions, Rajini! Looking forward to many more :)
>
> Gwen, for the Apache Kafka PMC
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Vahid S Hashemian
Great news.

Congrats Rajini!

--Vahid




From:   Gwen Shapira 
To: dev@kafka.apache.org, Users , 
priv...@kafka.apache.org
Date:   04/24/2017 02:06 PM
Subject:[ANNOUNCE] New committer: Rajini Sivaram



The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
are pleased to announce that she has accepted!

Rajini contributed 83 patches, 8 KIPs (all security and quota
improvements) and a significant number of reviews. She is also on the
conference committee for Kafka Summit, where she helped select content
for our community event. Through her contributions she's shown good
judgement, good coding skills, willingness to work with the community on
finding the best
solutions and very consistent follow through on her work.

Thank you for your contributions, Rajini! Looking forward to many more :)

Gwen, for the Apache Kafka PMC






[ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Gwen Shapira
The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
are pleased to announce that she has accepted!

Rajini contributed 83 patches, 8 KIPs (all security and quota
improvements) and a significant number of reviews. She is also on the
conference committee for Kafka Summit, where she helped select content
for our community event. Through her contributions she's shown good
judgement, good coding skills, willingness to work with the community on
finding the best
solutions and very consistent follow through on her work.

Thank you for your contributions, Rajini! Looking forward to many more :)

Gwen, for the Apache Kafka PMC


[jira] [Created] (KAFKA-5119) Transient test failure SocketServerTest.testMetricCollectionAfterShutdown

2017-04-24 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5119:
--

 Summary: Transient test failure 
SocketServerTest.testMetricCollectionAfterShutdown
 Key: KAFKA-5119
 URL: https://issues.apache.org/jira/browse/KAFKA-5119
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Reporter: Jason Gustafson


From a recent build:
{code}
20:04:15 kafka.network.SocketServerTest > testMetricCollectionAfterShutdown 
FAILED
20:04:15 java.lang.AssertionError: expected:<0.0> but 
was:<1.603886948862125>
20:04:15 at org.junit.Assert.fail(Assert.java:88)
20:04:15 at org.junit.Assert.failNotEquals(Assert.java:834)
20:04:15 at org.junit.Assert.assertEquals(Assert.java:553)
20:04:15 at org.junit.Assert.assertEquals(Assert.java:683)
20:04:15 at 
kafka.network.SocketServerTest.testMetricCollectionAfterShutdown(SocketServerTest.scala:414)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4994) Fix findbugs warning about OffsetStorageWriter#currentFlushId

2017-04-24 Thread Mariam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mariam John updated KAFKA-4994:
---
Reviewer: Colin P. McCabe
  Status: Patch Available  (was: In Progress)

> Fix findbugs warning about OffsetStorageWriter#currentFlushId
> -
>
> Key: KAFKA-4994
> URL: https://issues.apache.org/jira/browse/KAFKA-4994
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Mariam John
>  Labels: newbie
>
> We should fix the findbugs warning about 
> {{OffsetStorageWriter#currentFlushId}}
> {code}
> Multithreaded correctness Warnings
> IS2_INCONSISTENT_SYNC:   Inconsistent synchronization of 
> org.apache.kafka.connect.storage.OffsetStorageWriter.currentFlushId; locked 
> 83% of time
> IS2_INCONSISTENT_SYNC:   Inconsistent synchronization of 
> org.apache.kafka.connect.storage.OffsetStorageWriter.toFlush; locked 75% of 
> time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-4994) Fix findbugs warning about OffsetStorageWriter#currentFlushId

2017-04-24 Thread Mariam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4994 started by Mariam John.
--
> Fix findbugs warning about OffsetStorageWriter#currentFlushId
> -
>
> Key: KAFKA-4994
> URL: https://issues.apache.org/jira/browse/KAFKA-4994
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Mariam John
>  Labels: newbie
>
> We should fix the findbugs warning about 
> {{OffsetStorageWriter#currentFlushId}}
> {code}
> Multithreaded correctness Warnings
> IS2_INCONSISTENT_SYNC:   Inconsistent synchronization of 
> org.apache.kafka.connect.storage.OffsetStorageWriter.currentFlushId; locked 
> 83% of time
> IS2_INCONSISTENT_SYNC:   Inconsistent synchronization of 
> org.apache.kafka.connect.storage.OffsetStorageWriter.toFlush; locked 75% of 
> time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2906: Kafka-4994 Fix findbug warnings about OffsetStorag...

2017-04-24 Thread johnma14
GitHub user johnma14 opened a pull request:

https://github.com/apache/kafka/pull/2906

Kafka-4994 Fix findbug warnings about OffsetStorageWriter#currentFlushId

Based on the description of the class OffsetStorageWriter, it is not a 
thread-safe class and should be accessed only from a Task's processing thread. 
Many methods within this class have been explicitly synchronized in their 
function definition. The doFlush() method is a non-blocking function and hasn't 
been synchronized but modifies the variables used within the synchronized 
methods in this class. This could lead to potential inconsistent 
synchronization of some variables within this class.  We can therefore remove 
the synchronized keyword from the method signatures within the 
OffsetStorageWriter class since the WorkerSourceTask class calls the different 
methods (offset, beginFlush,cancelFlush, handleFinishWrite) within a 
synchronized block. There is no need to synchronize calls to these methods more 
than once. 
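
A condensed sketch of the locking pattern the PR relies on (these are not the 
actual Connect classes; names and fields are trimmed down for illustration): the 
task thread takes a single lock around every call into the writer, so the 
writer's own methods no longer need to be individually synchronized:

{code}
import java.util.HashMap;
import java.util.Map;

final class WriterSketch {
    private long currentFlushId = 0;
    private Map<String, String> toFlush = null;
    private final Map<String, String> data = new HashMap<>();

    void offset(String key, String value) {   // no 'synchronized' on the method...
        data.put(key, value);
    }

    boolean beginFlush() {                    // ...or here
        if (toFlush != null)
            throw new IllegalStateException("Flush already in progress");
        toFlush = new HashMap<>(data);
        currentFlushId++;
        return !toFlush.isEmpty();
    }
}

final class TaskSketch {
    private final Object writerLock = new Object();
    private final WriterSketch writer = new WriterSketch();

    void recordOffset(String key, String value) {
        synchronized (writerLock) {           // single lock taken at the call site
            writer.offset(key, value);
        }
    }

    void commitOffsets() {
        synchronized (writerLock) {
            if (writer.beginFlush()) {
                // ... hand the snapshot off to the backing offset store ...
            }
        }
    }
}
{code}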

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/johnma14/kafka kafka-4994

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2906.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2906


commit 726185526a5197a4537930cfa580503746a5468c
Author: Mariam John 
Date:   2017-04-24T19:45:32Z

Fix findbug warnings about OffsetStorageWriter

Based on the description of the class OffsetStorageWriter, it is not a 
thread-safe class and should be accessed
only from a Task's processing thread. Still many methods within this class 
have been explicitly synchronized in
their function definition. The doFlush() method is a non-blocking function 
and hasn't been synchronized but modifies
the variables used within the synchronized methods in this class. This 
could lead to potential inconsistent synchronization
of many variables within this class.  We can therefore remove the 
synchronized keyword from the method signatures within the OffsetStorageWriter 
class. The WorkerSourceTask class calls the different methods (offset, 
beginFlush,cancelFlush, handleFinishWrite) within a synchronized block. Hence 
their method definitions in OffsetStorageWriter.java do not need to contain 
the 'synchronized' keyword.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-5118) Improve message for Kafka failed startup with non-Kafka data in data.dirs

2017-04-24 Thread Dustin Cote (JIRA)
Dustin Cote created KAFKA-5118:
--

 Summary: Improve message for Kafka failed startup with non-Kafka 
data in data.dirs
 Key: KAFKA-5118
 URL: https://issues.apache.org/jira/browse/KAFKA-5118
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.10.2.0
Reporter: Dustin Cote
Priority: Minor


Today, if you try to start up a broker with some non-Kafka data in the data.dirs 
you end up with a cryptic message:
{code}
[2017-04-21 13:35:08,122] ERROR There was an error in one of the threads during 
logs loading: java.lang.StringIndexOutOfBoundsException: String index out of 
range: -1 (kafka.log.LogManager) 
[2017-04-21 13:35:08,124] FATAL [Kafka Server 3], Fatal error during 
KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) 
java.lang.StringIndexOutOfBoundsException: String index out of range: -1 
{code}

It'd be better if we could tell the user to look for non-Kafka data in the 
data.dirs and print out the offending directory that caused the problem in the 
first place.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] KIP-123: Allow per stream/table timestamp extractor

2017-04-24 Thread Thomas Becker
+1 (non-binding)

On Tue, 2017-02-28 at 08:59 +, Jeyhun Karimov wrote:
> Dear community,
>
> I'd like to start the vote for KIP-123:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=6871
> 4788
>
>
> Cheers,
> Jeyhun
--


Tommy Becker

Senior Software Engineer

O +1 919.460.4747

tivo.com




This email and any attachments may contain confidential and privileged material 
for the sole use of the intended recipient. Any review, copying, or 
distribution of this email (or any attachments) by others is prohibited. If you 
are not the intended recipient, please contact the sender immediately and 
permanently delete this email and any attachments. No employee or agent of TiVo 
Inc. is authorized to conclude any binding agreement on behalf of TiVo Inc. by 
email. Binding agreements with TiVo Inc. may only be made by a signed written 
agreement.


Re: [VOTE] KIP-123: Allow per stream/table timestamp extractor

2017-04-24 Thread Gwen Shapira
+1. Thanks for the KIP!

On Tue, Feb 28, 2017 at 12:59 AM, Jeyhun Karimov 
wrote:

> Dear community,
>
> I'd like to start the vote for KIP-123:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=68714788
>
>
> Cheers,
> Jeyhun
> --
> -Cheers
>
> Jeyhun
>



-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog



[jira] [Commented] (KAFKA-5055) Kafka Streams skipped-records-rate sensor producing nonzero values even when FailOnInvalidTimestamp is used as extractor

2017-04-24 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15981662#comment-15981662
 ] 

Guozhang Wang commented on KAFKA-5055:
--

Thanks [~nthean], looking into it now.

> Kafka Streams skipped-records-rate sensor producing nonzero values even when 
> FailOnInvalidTimestamp is used as extractor
> 
>
> Key: KAFKA-5055
> URL: https://issues.apache.org/jira/browse/KAFKA-5055
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Nikki Thean
>Assignee: Guozhang Wang
>
> According to the code and the documentation for this metric, the only reason 
> for a skipped record is an invalid timestamp, except that a) I am reading 
> from a topic that is populated solely by Kafka Connect and b) I am using 
> `FailOnInvalidTimestamp` as the timestamp extractor.
> Either I'm missing something in the documentation (i.e. another reason for 
> skipped records) or there is a bug in the code that calculates this metric.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5055) Kafka Streams skipped-records-rate sensor producing nonzero values even when FailOnInvalidTimestamp is used as extractor

2017-04-24 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-5055:


Assignee: Guozhang Wang

> Kafka Streams skipped-records-rate sensor producing nonzero values even when 
> FailOnInvalidTimestamp is used as extractor
> 
>
> Key: KAFKA-5055
> URL: https://issues.apache.org/jira/browse/KAFKA-5055
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Nikki Thean
>Assignee: Guozhang Wang
>
> According to the code and the documentation for this metric, the only reason 
> for a skipped record is an invalid timestamp, except that a) I am reading 
> from a topic that is populated solely by Kafka Connect and b) I am using 
> `FailOnInvalidTimestamp` as the timestamp extractor.
> Either I'm missing something in the documentation (i.e. another reason for 
> skipped records) or there is a bug in the code that calculates this metric.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5110) ConsumerGroupCommand error handling improvement

2017-04-24 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15981652#comment-15981652
 ] 

Vahid Hashemian commented on KAFKA-5110:


[~cotedm] Thanks for your response. Yes, hopefully the PR would help better 
pinpoint any potential issues in the future.

> ConsumerGroupCommand error handling improvement
> ---
>
> Key: KAFKA-5110
> URL: https://issues.apache.org/jira/browse/KAFKA-5110
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.1.1
>Reporter: Dustin Cote
>Assignee: Jason Gustafson
>
> The ConsumerGroupCommand isn't handling partition errors properly. It throws 
> the following:
> {code}
> kafka-consumer-groups.sh --zookeeper 10.10.10.10:2181 --group mygroup 
> --describe
> GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG OWNER
> Error while executing consumer group command empty.head
> java.lang.UnsupportedOperationException: empty.head
> at scala.collection.immutable.Vector.head(Vector.scala:193)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService$$anonfun$getLogEndOffset$1.apply(ConsumerGroupCommand.scala:197)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService$$anonfun$getLogEndOffset$1.apply(ConsumerGroupCommand.scala:194)
> at scala.Option.map(Option.scala:146)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.getLogEndOffset(ConsumerGroupCommand.scala:194)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.kafka$admin$ConsumerGroupCommand$ConsumerGroupService$$describePartition(ConsumerGroupCommand.scala:125)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$$anonfun$describeTopicPartition$2.apply(ConsumerGroupCommand.scala:107)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$$anonfun$describeTopicPartition$2.apply(ConsumerGroupCommand.scala:106)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.describeTopicPartition(ConsumerGroupCommand.scala:106)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.describeTopicPartition(ConsumerGroupCommand.scala:134)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.kafka$admin$ConsumerGroupCommand$ZkConsumerGroupService$$describeTopic(ConsumerGroupCommand.scala:181)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService$$anonfun$describeGroup$1.apply(ConsumerGroupCommand.scala:166)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService$$anonfun$describeGroup$1.apply(ConsumerGroupCommand.scala:166)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.describeGroup(ConsumerGroupCommand.scala:166)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.describe(ConsumerGroupCommand.scala:89)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.describe(ConsumerGroupCommand.scala:134)
> at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:68)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
> {code}
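
As a side note on the fix direction, the pattern implied by the stack trace is "don't 
call head on a possibly-empty collection; report a readable error instead". Below is a 
sketch of that defensive check only, in illustrative Java rather than the actual Scala 
ConsumerGroupCommand code, with invented names.

{code}
import java.util.List;

public class LogEndOffsetLookup {
    // Hypothetical helper showing the defensive check only.
    static long logEndOffset(List<Long> offsets, String topicPartition) {
        if (offsets.isEmpty()) {
            throw new IllegalStateException("No log end offset returned for " + topicPartition
                    + "; the partition may currently have no leader or its metadata may be stale.");
        }
        return offsets.get(0);
    }
}
{code}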



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-140: Add administrative RPCs for adding, deleting, and listing ACLs

2017-04-24 Thread Colin McCabe
Thanks for taking a look, Ismael.

On Mon, Apr 24, 2017, at 04:36, Ismael Juma wrote:
> Thanks Colin. A few quick comments:
> 
> 1. Is there a reason why AddAclsRequest must be sent to the controller
> broker? When this was discussed previously, Jun suggested that such a
> restriction may not be necessary for this request.

Hmm.  I guess my thinking here was that since the controller handles
AddTopics and DeleteTopics requests, it would be nice if it had the most
up-to-date ACL information.  This was also in the original KIP-4
proposal.  However, given that authorization data is stored in ZooKeeper (or,
optionally, in a pluggable system), there is no inherent reason the controller
broker has to be the only one to make the change.  What do you think?

> 
> 2. Other protocol APIs use the Delete prefix instead of Remove. Is there
> a
> reason to deviate here? If there's a good reason to do so, we have to fix
> the one mention of DeleteAcls in the proposal.

Good point.  Let's make it consistent by changing it to DeleteAcls.  I
will also change AddAclsRequest to CreateAclsRequest to match
CreateTopicsRequest.

> 
> 3. Do you mean "null" in the following sentence? "Note that an argument
> of
> "none" is different than a wildcard argument."

For the string types, NULL is considered "none"; for the INT8 types, -1
is considered "none".

> 
> 4. How will the non-zero top-level error code be computed? A bit more
> detail on this would be helpful. As a general rule, most batch protocol
> APIs don't have a top level error code because errors are usually at the
> batch element level. This has proved to be a bit of an issue in cases
> where
> we want to return a generic error code (e.g. InvalidProtocolVersion).
> Also,
> V2 of OffsetFetch has a top-level error code[1]. However, the OffsetFetch
> behaviour is that we either use the top-level error code or the partition
> level error codes, which is different than what is being suggested here.

The idea behind the top-level error code is to implement error codes
that don't have to do with elements in the batch.  For example, if we
implement backpressure, the server could send back an error code of
"slow down" telling the client to resend the request after a few
milliseconds have elapsed.

As you mention, we ought to have a generic response header that would
let us intelligently handle situations like "server doesn't understand
this request version or type."  Maybe this is something that needs to be
handled in another KIP, though, since it would require all the responses
to have this, and in the same format.  I guess I will remove this for
now.

> 
> 5. Nit: In other requests we used `error_message` instead of
> `error_string`.

OK.

> 6. Regarding the migration plan, it's worth making it clear that the CLI
> transition is not part of this KIP.

OK.

> 
> 7. Regarding the forward compatibility point, we could potentially use
> enums with UNKNOWN element? This pattern has been used elsewhere in Kafka
> and it would be good to compare it with the proposed solution. An
> important
> question is what can users do with UNKNOWN elements. If the assumption is
> that users ignore them, then it seems like the enum approach may be good
> enough.

Hmm.  It seemed straightforward to let callers see the numeric value of
the element.  It makes the command-line tools more useful when
interacting with clusters that have newer versions of the software, for
example.  I guess using UNKNOWN instead of UNKNOWN() has the
advantage of hiding the internal representation better.

best,
Colin


> 
> Thanks,
> Ismael
> 
> [1]
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-88%3A+OffsetFetch+Protocol+Update
> 
> 
> On Fri, Apr 21, 2017 at 9:27 PM, Colin McCabe  wrote:
> 
> > Hi all,
> >
> > As part of the AdminClient work, we would like to add methods for
> > adding, deleting, and listing access control lists (ACLs).  I wrote up a
> > KIP to discuss implementing requests for those operations, as well as
> > AdminClient APIs.  Take a look at:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 140%3A+Add+administrative+RPCs+for+adding%2C+deleting%2C+and+listing+ACLs
> >
> > regards,
> > Colin
> >


[jira] [Updated] (KAFKA-5117) Kafka Connect REST endpoints reveal Password typed values

2017-04-24 Thread Thomas Holmes (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Holmes updated KAFKA-5117:
-
Description: 
A Kafka Connect connector can specify ConfigDef keys as type of Password. This 
type was added to prevent logging the values (instead "[hidden]" is logged).

This change does not apply to the values returned by executing a GET on 
{{connectors/\{connector-name\}}} and {{connectors/\{connector-name\}/config}}. 
This creates an easily accessible way for an attacker who has infiltrated your 
network to gain access to potential secrets that should not be available.

I have started on a code change that addresses this issue by parsing the config 
values through the ConfigDef for the connector and returning their output 
instead (which leads to the masking of Password typed configs as [hidden]).

  was:
A Kafka Connect connector can specify ConfigDef keys as type of Password. This 
type was added to prevent logging the values (instead "[hidden]" is logged).

This change does not apply to the values returned by executing a GET on 
{{connectors/\{connector-name\}}} and {{connectors/{connector-name}/config}}. 
This creates an easily accessible way for an attacker who has infiltrated your 
network to gain access to potential secrets that should not be available.

I have started on a code change that addresses this issue by parsing the config 
values through the ConfigDef for the connector and returning their output 
instead (which leads to the masking of Password typed configs as [hidden]).


> Kafka Connect REST endpoints reveal Password typed values
> -
>
> Key: KAFKA-5117
> URL: https://issues.apache.org/jira/browse/KAFKA-5117
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Thomas Holmes
>
> A Kafka Connect connector can specify ConfigDef keys as type of Password. 
> This type was added to prevent logging the values (instead "[hidden]" is 
> logged).
> This change does not apply to the values returned by executing a GET on 
> {{connectors/\{connector-name\}}} and 
> {{connectors/\{connector-name\}/config}}. This creates an easily accessible 
> way for an attacker who has infiltrated your network to gain access to 
> potential secrets that should not be available.
> I have started on a code change that addresses this issue by parsing the 
> config values through the ConfigDef for the connector and returning their 
> output instead (which leads to the masking of Password typed configs as 
> [hidden]).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-138: Change punctuate semantics

2017-04-24 Thread Matthias J. Sax
>> Would a dashboard need perfect regularity? Wouldn't an upper bound suffice?

If you go with stream-time and don't have any input records for all
partitions, punctuate would not be called at all, and thus your
dashboard would "freeze".

>> I thought about cron-type things, but aren't they better triggered by an
>> external scheduler (they're more flexible anyway), which then feeds
>> "commands" into the topology?

I guess it depends on what kind of periodic action you want to trigger. The
"cron job" was just an analogy. Maybe it's just a heartbeat to some
other service, that signals that your Streams app is still running.


-Matthias
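
A purely hypothetical sketch of the two punctuation flavours under discussion; this is 
not the current Streams API, the names are invented, and it only illustrates why a 
stream-time-only callback can "freeze" while a wall-clock callback keeps firing.

{code}
// Hypothetical sketch, not the Kafka Streams API.
interface Punctuator {
    void punctuate(long timestamp);
}

enum PunctuationType {
    STREAM_TIME,  // advances only when new records (and their timestamps) arrive
    SYSTEM_TIME   // advances with the wall clock, independent of input
}

class PunctuationSketch {
    void schedule(long intervalMs, PunctuationType type, Punctuator callback) {
        // Wiring deliberately omitted; only the intended semantics matter here.
    }

    void init() {
        // Fires only while stream-time advances, i.e. while input is flowing.
        schedule(60_000L, PunctuationType.STREAM_TIME, ts -> { /* emit windowed output */ });
        // Fires every minute of wall-clock time, so a heartbeat or dashboard
        // refresh keeps happening even when no records arrive.
        schedule(60_000L, PunctuationType.SYSTEM_TIME, ts -> { /* refresh dashboard */ });
    }
}
{code}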


On 4/24/17 10:02 AM, Michal Borowiecki wrote:
> Thanks!
> 
> Would a dashboard need perfect regularity? Wouldn't an upper bound suffice?
> 
> Unless too frequent messages on replay could overpower it?
> 
> 
> I thought about cron-type things, but aren't they better triggered by an
> external scheduler (they're more flexible anyway), which then feeds
> "commands" into the topology?
> 
> Just my 2c.
> 
> Cheers,
> 
> Michal
> 
> 
> On 24/04/17 17:57, Matthias J. Sax wrote:
>> A simple example would be some dashboard app that needs to get
>> "current" status in regular time intervals (i.e., a real-time app).
>>
>> Or something like a "scheduler" -- think "cron job" application.
>>
>>
>> -Matthias
>>
>> On 4/24/17 2:23 AM, Michal Borowiecki wrote:
>>> Hi Matthias,
>>>
>>> I agree it's difficult to reason about the hybrid approach, I certainly
>>> found it hard and I'm totally on board with the mantra.
>>>
>>> I'd be happy to limit the scope of this KIP to add system-time
>>> punctuation semantics (in addition to existing stream-time semantics)
>>> and leave more complex schemes for users to implement on top of that.
>>>
>>> Further additional PunctuationTypes could then be added by future KIPs,
>>> possibly including the hybrid approach once it has been given more thought.
>>>
 There are real-time applications, that want to get
 callbacks in regular system-time intervals (completely independent from
 stream-time).
>>> Can you please describe what they are, so that I can put them on the
>>> wiki for later reference?
>>>
>>> Thanks,
>>>
>>> Michal
>>>
>>>
>>> On 23/04/17 21:27, Matthias J. Sax wrote:
 Hi,

 I do like Damian's API proposal about the punctuation callback function.

 I also did reread the KIP and thought about the semantics we want to
 provide.

> Given the above, I don't see a reason any more for a separate system-time 
> based punctuation.
 I disagree here. There are real-time applications, that want to get
 callbacks in regular system-time intervals (completely independent from
 stream-time). Thus we should allow this -- if we really follow the
 "hybrid" approach, this could be configured with stream-time interval
 infinite and delay whatever system-time punctuation interval you want to
 have. However, I would like to add a proper API for this and do this
 configuration under the hood (that would allow one implementation within
 all kind of branching for different cases).

 Thus, we definitely should have PunctutionType#StreamTime and
 #SystemTime -- and additionally, we _could_ have #Hybrid. Thus, I am not
 a fan of your latest API proposal.


 About the hybrid approach in general. On the one hand I like it, on the
 other hand, it seems to be rather (1) complicated (not necessarily from
 an implementation point of view, but for people to understand it) and
 (2) mixes two semantics together in a "weird" way. Thus, I disagree with:

> It may appear complicated at first but I do think these semantics will
> still be more understandable to users than having 2 separate punctuation
> schedules/callbacks with different PunctuationTypes.
 This statement only holds if you apply strong assumptions that I don't
 believe hold in general -- see (2) for details -- and I think it is
 harder than you assume to reason about the hybrid approach in general.
 IMHO, the hybrid approach is a "false friend" that seems to be easy to
 reason about...


 (1) Streams always embraced "easy to use" and we should really be
 careful to keep it this way. On the other hand, as we are talking about
 changes to PAPI, it won't affect DSL users (DSL does not use punctuation
 at all at the moment), and thus, the "easy to use" mantra might not be
 affected, while it will allow advanced users to express more complex stuff.

 I like the mantra: "make simple thing easy and complex things possible".

 (2) IMHO the major disadvantage (issue?) of the hybrid approach is the
 implicit assumption that event-time progresses at the same "speed" as
 system-time during regular processing. This implies the assumption that
 a slower progress in stream-time indicates the absence of input events
 (and 

[jira] [Created] (KAFKA-5117) Kafka Connect REST endpoints reveal Password typed values

2017-04-24 Thread Thomas Holmes (JIRA)
Thomas Holmes created KAFKA-5117:


 Summary: Kafka Connect REST endpoints reveal Password typed values
 Key: KAFKA-5117
 URL: https://issues.apache.org/jira/browse/KAFKA-5117
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.2.0
Reporter: Thomas Holmes


A Kafka Connect connector can specify ConfigDef keys as type of Password. This 
type was added to prevent logging the values (instead "[hidden]" is logged).

This change does not apply to the values returned by executing a GET on 
{{connectors/\{connector-name\}}} and {{connectors/{connector-name}/config}}. 
This creates an easily accessible way for an attacker who has infiltrated your 
network to gain access to potential secrets that should not be available.

I have started on a code change that addresses this issue by parsing the config 
values through the ConfigDef for the connector and returning their output 
instead (which leads to the masking of Password typed configs as [hidden]).
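
The masking behaviour relied on here comes from ConfigDef itself: parsing a 
PASSWORD-typed key wraps the value in a Password object whose toString() is "[hidden]". 
Below is a small, standalone sketch of that effect; the key names are invented and the 
actual Connect REST resource code is not shown.

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Type;

public class MaskPasswordsExample {
    public static void main(String[] args) {
        // Hypothetical connector config definition with one secret value.
        ConfigDef def = new ConfigDef()
                .define("db.user", Type.STRING, Importance.HIGH, "database user")
                .define("db.password", Type.PASSWORD, Importance.HIGH, "database password");

        Map<String, String> raw = new HashMap<>();
        raw.put("db.user", "connect");
        raw.put("db.password", "super-secret");

        // Parsing through the ConfigDef converts PASSWORD values into Password
        // objects, so rendering the parsed map no longer leaks the secret.
        Map<String, Object> parsed = def.parse(raw);
        System.out.println(parsed.get("db.user"));      // connect
        System.out.println(parsed.get("db.password"));  // [hidden]
    }
}
{code}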



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Recording - Storm & Kafka Meetup on April 20th 2017

2017-04-24 Thread Aditya Desai
Hi Harsha and others

Thanks a lot for this meetup. I have signed up and will surely
attend from June. I am really new to Apache Storm and Apache Kafka. I am
still learning them, and these meetups will definitely help people like me.
If any of you have some good resources/tutorials/documents/GitHub
repos for learning more about Storm and Kafka, please do share. The video of
this meetup is really inspiring and informative.

Regards

On Mon, Apr 24, 2017 at 8:28 AM, Harsha Chintalapani 
wrote:

> Hi Aditya,
>  Thanks for your interest. We are tentatively planning one for the
> first week of June. If you haven't already, please register here
> https://www.meetup.com/Apache-Storm-Apache-Kafka/
> 
> . I'll keep the Storm lists updated once we finalize the date & location.
>
> Thanks,
> Harsha
>
> On Mon, Apr 24, 2017 at 7:02 AM Aditya Desai  wrote:
>
>> Hello Everyone
>>
>> Can you please let us know when is the next meet up? It would be great if
>> we can have in May.
>>
>> Regards
>> Aditya Desai
>>
>> On Mon, Apr 24, 2017 at 2:16 AM, Xin Wang  wrote:
>>
>>> How about publishing this to the Storm site?
>>>
>>>  - Xin
>>>
>>> 2017-04-22 19:27 GMT+08:00 steve tueno :
>>>
 great

 Thanks



 Cordialement,

 TUENO FOTSO Steve Jeffrey
 Ingénieur de conception
 Génie Informatique
 +237 676 57 17 28 <+237%206%2076%2057%2017%2028>
 +237 697 86 36 38 <+237%206%2097%2086%2036%2038>

 +33 6 23 71 91 52 <+33%206%2023%2071%2091%2052>


 https://jobs.jumia.cm/fr/candidats/CVTF1486563.html
 
 
 __

 https://play.google.com/store/apps/details?id=com.polytech.
 remotecomputer
 
 https://play.google.com/store/apps/details?id=com.polytech.
 internetaccesschecker
 
 *http://www.traveler.cm/
 *
 http://remotecomputer.traveler.cm/
 
 https://play.google.com/store/apps/details?id=com.polytech.
 androidsmssender
 

 https://github.com/stuenofotso/notre-jargon
 
 https://play.google.com/store/apps/details?id=com.polytech.
 welovecameroon
 
 https://play.google.com/store/apps/details?id=com.polytech.welovefrance
 

[jira] [Commented] (KAFKA-4755) SimpleBenchmark test fails for streams

2017-04-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15981505#comment-15981505
 ] 

ASF GitHub Bot commented on KAFKA-4755:
---

Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/2867


> SimpleBenchmark test fails for streams
> --
>
> Key: KAFKA-4755
> URL: https://issues.apache.org/jira/browse/KAFKA-4755
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> This occurred Feb 10th 2017:
> kafkatest.benchmarks.streams.streams_simple_benchmark_test.StreamsSimpleBenchmarkTest.test_simple_benchmark.test=consume.scale=1
> status: FAIL
> run time:   7 minutes 36.712 seconds
> Streams Test process on ubuntu@worker2 took too long to exit
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 123, in run
> data = self.run_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 176, in run_test
> return self.test_context.function(self.test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/mark/_mark.py",
>  line 321, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/benchmarks/streams/streams_simple_benchmark_test.py",
>  line 86, in test_simple_benchmark
> self.driver[num].wait()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/services/streams.py",
>  line 102, in wait
> self.wait_node(node, timeout_sec)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/services/streams.py",
>  line 106, in wait_node
> wait_until(lambda: not node.account.alive(pid), timeout_sec=timeout_sec, 
> err_msg="Streams Test process on " + str(node.account) + " took too long to 
> exit")
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/utils/util.py",
>  line 36, in wait_until
> raise TimeoutError(err_msg)
> TimeoutError: Streams Test process on ubuntu@worker2 took too long to exit



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2867: KAFKA-4755: Cherrypick streams tests from trunk

2017-04-24 Thread enothereska
Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/2867


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP 141 - ProducerRecordBuilder Interface

2017-04-24 Thread Michael Pearce
Why not simply make a cleaner, fluent client API wrapper? The internals would still use 
and send via the current API, but it would expose a cleaner, more fluent API.

A good example here is HttpComponents, where they did this.

https://hc.apache.org/httpcomponents-client-ga/tutorial/html/fluent.html
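
A rough sketch of what such a wrapper could look like, just to make the idea tangible. 
The names (Records, Builder) are invented here and nothing below is being proposed for 
the public API; the point is that the facade ends in the existing ProducerRecord 
constructor, so nothing has to be deprecated.

{code}
import org.apache.kafka.clients.producer.ProducerRecord;

public final class Records {
    public static <K, V> Builder<K, V> to(String topic) {
        return new Builder<>(topic);
    }

    public static final class Builder<K, V> {
        private final String topic;
        private Integer partition;
        private Long timestamp;
        private K key;
        private V value;

        private Builder(String topic) {
            this.topic = topic;
        }

        public Builder<K, V> partition(int partition) { this.partition = partition; return this; }
        public Builder<K, V> timestamp(long timestamp) { this.timestamp = timestamp; return this; }
        public Builder<K, V> key(K key)               { this.key = key; return this; }
        public Builder<K, V> value(V value)           { this.value = value; return this; }

        public ProducerRecord<K, V> build() {
            // Delegates to the existing constructor; null fields keep their
            // current, documented meaning.
            return new ProducerRecord<>(topic, partition, timestamp, key, value);
        }
    }
}
{code}

Usage would then read roughly as 
producer.send(Records.<String, String>to("orders").key("id-1").value("payload").build()).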



On 24/04/2017, 17:40, "Matthias J. Sax"  wrote:

Hey Jay,

I understand your concern, and for sure, we will need to keep the
current constructors deprecated for a long time (ie, many years).

But if we don't make the move, we will not be able to improve. And I
think warnings about using deprecated APIs is an acceptable price to
pay. And the API improvements will help new people who adopt Kafka to
get started more easily.

Otherwise Kafka might end up as many other enterprise software with a
lots of old stuff that is kept forever because nobody has the guts to
improve/change it.

Of course, we can still improve the docs of the deprecated constructors,
too.

Just my two cents.


-Matthias

On 4/23/17 3:37 PM, Jay Kreps wrote:
> Hey guys,
>
> I definitely think that the constructors could have been better designed,
> but I think given that they're in heavy use I don't think this proposal
> will improve things. Deprecating constructors just leaves everyone with
> lots of warnings and crossed out things. We can't actually delete the
> methods because lots of code needs to be usable across multiple Kafka
> versions, right? So we aren't picking between the original approach (worse)
> and the new approach (better); what we are proposing is a perpetual
> mingling of the original style and the new style with a bunch of deprecated
> stuff, which I think is worst of all.
>
> I'd vote for just documenting the meaning of null in the ProducerRecord
> constructor.
>
> -Jay
>
> On Wed, Apr 19, 2017 at 3:34 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
>> Hi all,
>>
>> My first KIP, let me know your thoughts!
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP+
>> 141+-+ProducerRecordBuilder+Interface
>>
>>
>> Cheers,
>> Stephane
>>
>





Re: [DISCUSS] KIP-138: Change punctuate semantics

2017-04-24 Thread Matthias J. Sax
A simple example would be some dashboard app that needs to get
"current" status in regular time intervals (i.e., a real-time app).

Or something like a "scheduler" -- think "cron job" application.


-Matthias

On 4/24/17 2:23 AM, Michal Borowiecki wrote:
> Hi Matthias,
> 
> I agree it's difficult to reason about the hybrid approach, I certainly
> found it hard and I'm totally on board with the mantra.
> 
> I'd be happy to limit the scope of this KIP to add system-time
> punctuation semantics (in addition to existing stream-time semantics)
> and leave more complex schemes for users to implement on top of that.
> 
> Further additional PunctuationTypes could then be added by future KIPs,
> possibly including the hybrid approach once it has been given more thought.
> 
>> There are real-time applications, that want to get
>> callbacks in regular system-time intervals (completely independent from
>> stream-time).
> Can you please describe what they are, so that I can put them on the
> wiki for later reference?
> 
> Thanks,
> 
> Michal
> 
> 
> On 23/04/17 21:27, Matthias J. Sax wrote:
>> Hi,
>>
>> I do like Damian's API proposal about the punctuation callback function.
>>
>> I also did reread the KIP and thought about the semantics we want to
>> provide.
>>
>>> Given the above, I don't see a reason any more for a separate system-time 
>>> based punctuation.
>> I disagree here. There are real-time applications, that want to get
>> callbacks in regular system-time intervals (completely independent from
>> stream-time). Thus we should allow this -- if we really follow the
>> "hybrid" approach, this could be configured with stream-time interval
>> infinite and delay whatever system-time punctuation interval you want to
>> have. However, I would like to add a proper API for this and do this
>> configuration under the hood (that would allow one implementation within
>> all kind of branching for different cases).
>>
>> Thus, we definitely should have PunctutionType#StreamTime and
>> #SystemTime -- and additionally, we _could_ have #Hybrid. Thus, I am not
>> a fan of your latest API proposal.
>>
>>
>> About the hybrid approach in general. On the one hand I like it, on the
>> other hand, it seems to be rather (1) complicated (not necessarily from
>> an implementation point of view, but for people to understand it) and
>> (2) mixes two semantics together in a "weird" way. Thus, I disagree with:
>>
>>> It may appear complicated at first but I do think these semantics will
>>> still be more understandable to users than having 2 separate punctuation
>>> schedules/callbacks with different PunctuationTypes.
>> This statement only holds if you apply strong assumptions that I don't
>> believe hold in general -- see (2) for details -- and I think it is
>> harder than you assume to reason about the hybrid approach in general.
>> IMHO, the hybrid approach is a "false friend" that seems to be easy to
>> reason about...
>>
>>
>> (1) Streams always embraced "easy to use" and we should really be
>> careful to keep it this way. On the other hand, as we are talking about
>> changes to PAPI, it won't affect DSL users (DSL does not use punctuation
>> at all at the moment), and thus, the "easy to use" mantra might not be
>> affected, while it will allow advanced users to express more complex stuff.
>>
>> I like the mantra: "make simple thing easy and complex things possible".
>>
>> (2) IMHO the major disadvantage (issue?) of the hybrid approach is the
>> implicit assumption that event-time progresses at the same "speed" as
>> system-time during regular processing. This implies the assumption that
>> a slower progress in stream-time indicates the absence of input events
>> (and that later arriving input events will have a larger event-time with
>> high probability). Even if this might be true for some use cases, I
>> doubt it holds in general. Assume that you get a spike in traffic and
>> for some reason stream-time does advance slowly because you have more
>> records to process. This might trigger a system-time based punctuation
>> call even if this seems not to be intended. I strongly believe that it
>> is not easy to reason about the semantics of the hybrid approach (even
>> if the intentional semantics would be super useful -- but I doubt that
>> we get what we ask for).
>>
>> Thus, I also believe that one might need different "configuration"
>> values for the hybrid approach if you run the same code for different
>> scenarios: regular processing, re-processing, catching up scenario. And
>> as the term "configuration" implies, we might be better off to not mix
>> configuration with business logic that is expressed via code.
>>
>>
>> One more comment: I also don't think that the hybrid approach is
>> deterministic as claimed in the use-case subpage. I understand the
>> reasoning and agree, that it is deterministic if certain assumptions
>> hold -- compare above -- and if configured correctly. But strictly
>> speaking it's not because 

Re: [VOTE] KIP 130: Expose states of active tasks to KafkaStreams public API

2017-04-24 Thread Guozhang Wang
Florian, could you also add the part about deprecating `KafkaStreams.toString`
in your KIP as well?


Guozhang

On Fri, Apr 21, 2017 at 8:32 AM, Damian Guy  wrote:

> +1
>
> On Fri, 21 Apr 2017 at 09:06 Eno Thereska  wrote:
>
> > +1 (non-binding)
> >
> > Thanks
> > Eno
> >
> > > On 21 Apr 2017, at 05:58, Guozhang Wang  wrote:
> > >
> > > +1. Thanks a lot for the KIP!
> > >
> > > Guozhang
> > >
> > > On Wed, Apr 5, 2017 at 1:57 PM, Florian Hussonnois <
> > fhussonn...@gmail.com>
> > > wrote:
> > >
> > >> Hi All,
> > >>
> > >> I would like to start the vote for the KIP-130 :
> > >>
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> > >> 130%3A+Expose+states+of+active+tasks+to+KafkaStreams+public+API
> > >>
> > >> Thanks,
> > >>
> > >> --
> > >> Florian HUSSONNOIS
> > >>
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> >
> >
>



-- 
-- Guozhang


Re: [DISCUSS] KIP-138: Change punctuate semantics

2017-04-24 Thread Matthias J. Sax
I agree with this. We would need to allow processor level configuration.

And I also agree, that the global caching config is not optimal...


-Matthias

On 4/24/17 3:55 AM, Michal Borowiecki wrote:
> Further to this, on your point about configuration:
> 
>> Thus, I also believe that one might need different "configuration"
>> values for the hybrid approach if you run the same code for different
>> scenarios: regular processing, re-processing, catching up scenario. And
>> as the term "configuration" implies, we might be better off to not mix
>> configuration with business logic that is expressed via code.
> I'm not sure I understand what you are suggesting here.
> 
> Configuration is global to a KafkaStreams instance and users might want
> to have different tolerance in different parts of the topology. They
> shouldn't be locked into one value set via global config.
> 
> To illustrate this point: Lately I have discovered the cache config
> introduced in KIP-63
> 
> and found it quite annoying that it's controlled by a config item. IMO,
> I should be able to control flushing per processor, not be forced to use
> one global value defined in configs.
> 
> It's easy enough for users to source a user-defined config and provide
> it as a parameter to a given processor as needed.
> 
> In principle I agree that configuration and business logic are better
> not mixed together but then the configuration mechanism should allow
> users to target specific processors and not be global to the
> KafkaStreams instance.
> 
> Thanks,
> 
> Michal
> 
> On 24/04/17 10:23, Michal Borowiecki wrote:
>>
>> Hi Matthias,
>>
>> I agree it's difficult to reason about the hybrid approach, I
>> certainly found it hard and I'm totally on board with the mantra.
>>
>> I'd be happy to limit the scope of this KIP to add system-time
>> punctuation semantics (in addition to existing stream-time semantics)
>> and leave more complex schemes for users to implement on top of that.
>>
>> Further additional PunctuationTypes could then be added by future
>> KIPs, possibly including the hybrid approach once it has been given
>> more thought.
>>
>>> There are real-time applications, that want to get
>>> callbacks in regular system-time intervals (completely independent from
>>> stream-time).
>> Can you please describe what they are, so that I can put them on the
>> wiki for later reference?
>>
>> Thanks,
>>
>> Michal
>>
>>
>> On 23/04/17 21:27, Matthias J. Sax wrote:
>>> Hi,
>>>
>>> I do like Damian's API proposal about the punctuation callback function.
>>>
>>> I also did reread the KIP and thought about the semantics we want to
>>> provide.
>>>
 Given the above, I don't see a reason any more for a separate system-time 
 based punctuation.
>>> I disagree here. There are real-time applications, that want to get
>>> callbacks in regular system-time intervals (completely independent from
>>> stream-time). Thus we should allow this -- if we really follow the
>>> "hybrid" approach, this could be configured with stream-time interval
>>> infinite and delay whatever system-time punctuation interval you want to
>>> have. However, I would like to add a proper API for this and do this
>>> configuration under the hood (that would allow one implementation within
>>> all kind of branching for different cases).
>>>
>>> Thus, we definitely should have PunctutionType#StreamTime and
>>> #SystemTime -- and additionally, we _could_ have #Hybrid. Thus, I am not
>>> a fan of your latest API proposal.
>>>
>>>
>>> About the hybrid approach in general. On the one hand I like it, on the
>>> other hand, it seems to be rather (1) complicated (not necessarily from
>>> an implementation point of view, but for people to understand it) and
>>> (2) mixes two semantics together in a "weird" way. Thus, I disagree with:
>>>
 It may appear complicated at first but I do think these semantics will
 still be more understandable to users than having 2 separate punctuation
 schedules/callbacks with different PunctuationTypes.
>>> This statement only holds if you apply strong assumptions that I don't
>>> believe hold in general -- see (2) for details -- and I think it is
>>> harder than you assume to reason about the hybrid approach in general.
>>> IMHO, the hybrid approach is a "false friend" that seems to be easy to
>>> reason about...
>>>
>>>
>>> (1) Streams always embraced "easy to use" and we should really be
>>> careful to keep it this way. On the other hand, as we are talking about
>>> changes to PAPI, it won't affect DSL users (DSL does not use punctuation
>>> at all at the moment), and thus, the "easy to use" mantra might not be
>>> affected, while it will allow advanced users to express more complex stuff.
>>>
>>> I like the mantra: "make simple thing easy and complex things possible".
>>>
>>> (2) IMHO the major disadvantage (issue?) of the hybrid 

[jira] [Resolved] (KAFKA-3940) Log should check the return value of dir.mkdirs()

2017-04-24 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3940.

   Resolution: Fixed
Fix Version/s: 0.11.0.0

This was fixed as part of the findBugs fixes.

> Log should check the return value of dir.mkdirs()
> -
>
> Key: KAFKA-3940
> URL: https://issues.apache.org/jira/browse/KAFKA-3940
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Mickael Maison
>  Labels: newbie, reliability
> Fix For: 0.11.0.0
>
>
> In Log.loadSegments(), we call dir.mkdirs() w/o checking the return value and 
> just assume the directory will exist after the call. However, if the 
> directory can't be created (e.g. due to no space), we will hit 
> NullPointerException in the next statement, which will be confusing.
>for(file <- dir.listFiles if file.isFile) {
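
A tiny sketch of the check the ticket asks for, in the spirit of "fail with a clear 
message instead of an NPE". This is illustrative Java with invented names, not the 
actual Log.loadSegments code (which is Scala).

{code}
import java.io.File;
import java.io.IOException;

public class EnsureLogDir {
    // Hypothetical helper: verify the mkdirs() result instead of assuming success.
    static void ensureDirExists(File dir) throws IOException {
        // mkdirs() returns false when the directory could not be created (and it
        // does not already exist); silently ignoring that leads to the confusing
        // NullPointerException later when dir.listFiles() returns null.
        if (!dir.mkdirs() && !dir.isDirectory()) {
            throw new IOException("Failed to create log directory " + dir.getAbsolutePath());
        }
    }
}
{code}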



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-3940) Log should check the return value of dir.mkdirs()

2017-04-24 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-3940:
--

Assignee: Colin P. McCabe  (was: Mickael Maison)

> Log should check the return value of dir.mkdirs()
> -
>
> Key: KAFKA-3940
> URL: https://issues.apache.org/jira/browse/KAFKA-3940
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Colin P. McCabe
>  Labels: newbie, reliability
> Fix For: 0.11.0.0
>
>
> In Log.loadSegments(), we call dir.mkdirs() w/o checking the return value and 
> just assume the directory will exist after the call. However, if the 
> directory can't be created (e.g. due to no space), we will hit 
> NullPointerException in the next statement, which will be confusing.
>for(file <- dir.listFiles if file.isFile) {



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2745: MINOR: Replication system tests should cover compr...

2017-04-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2745


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-3940) Log should check the return value of dir.mkdirs()

2017-04-24 Thread Mickael Maison (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison reassigned KAFKA-3940:
-

Assignee: Mickael Maison

> Log should check the return value of dir.mkdirs()
> -
>
> Key: KAFKA-3940
> URL: https://issues.apache.org/jira/browse/KAFKA-3940
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Mickael Maison
>  Labels: newbie, reliability
>
> In Log.loadSegments(), we call dir.mkdirs() w/o checking the return value and 
> just assume the directory will exist after the call. However, if the 
> directory can't be created (e.g. due to no space), we will hit 
> NullPointerException in the next statement, which will be confusing.
>for(file <- dir.listFiles if file.isFile) {



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [jira] [Commented] (KAFKA-4295) kafka-console-consumer.sh does not delete the temporary group in zookeeper

2017-04-24 Thread Matthias J. Sax
You can't...

But you can filter them:
https://mail.google.com/mail/u/0/?tab=wm#settings/filters

Just specify from "j...@apache.org" and to "dev@kafka.apache.org" and
apply actions you want to take.


-Matthias

On 4/24/17 1:52 AM, Arisha C wrote:
> How can I unsubscribe?
> 
> Sent from my iPhone
> 
>> On 24-Apr-2017, at 1:07 PM, huxi (JIRA)  wrote:
>>
>>
>>[ 
>> https://issues.apache.org/jira/browse/KAFKA-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980835#comment-15980835
>>  ] 
>>
>> huxi commented on KAFKA-4295:
>> -
>>
>> [~ijuma]  Are we going to continue to fix this issue since KIP 109 is 
>> planned to be finished in 0.11?
>>
>>> kafka-console-consumer.sh does not delete the temporary group in zookeeper
>>> --
>>>
>>>Key: KAFKA-4295
>>>URL: https://issues.apache.org/jira/browse/KAFKA-4295
>>>Project: Kafka
>>> Issue Type: Bug
>>> Components: admin
>>>   Affects Versions: 0.9.0.1, 0.10.0.0, 0.10.0.1
>>>   Reporter: Sswater Shi
>>>   Assignee: huxi
>>>   Priority: Minor
>>>
>>> I'm not sure whether this is a bug or whether you guys designed it this way.
>>> Since 0.9.x.x, kafka-console-consumer.sh will not delete the group
>>> information in zookeeper/consumers on exit when run without "--new-consumer".
>>> There will be a lot of abandoned zookeeper/consumers/console-consumer-xxx
>>> entries if kafka-console-consumer.sh is run a lot of times.
>>> In 0.8.x.x, kafka-console-consumer.sh could be followed by a "group"
>>> argument. If not specified, kafka-console-consumer.sh would create a
>>> temporary group name like 'console-consumer-'. If the group name was
>>> specified by "group", the information in zookeeper/consumers would be
>>> kept on exit. If the group name was a temporary one, the information in
>>> zookeeper would be deleted when kafka-console-consumer.sh was quit with
>>> Ctrl+C. Why was this changed in 0.9.x.x?
>>
>>
>>
>> --
>> This message was sent by Atlassian JIRA
>> (v6.3.15#6346)



signature.asc
Description: OpenPGP digital signature


Re: [GitHub] kafka pull request #2759: MINOR: Move `Os` class to utils package and rename...

2017-04-24 Thread Matthias J. Sax
You can't... :(

But you can filter them:
https://mail.google.com/mail/u/0/?tab=wm#settings/filters

Just specify from "g...@git.apache.org" and to "dev@kafka.apache.org" and
apply actions you want to take.


-Matthias

On 4/24/17 12:29 AM, Arisha C wrote:
> How can I unsubscribe to these mails?
> 
> Sent from my iPhone
> 
>> On 23-Apr-2017, at 2:16 AM, asfgit  wrote:
>>
>> Github user asfgit closed the pull request at:
>>
>>https://github.com/apache/kafka/pull/2759
>>
>>
>> ---
>> If your project is set up for it, you can reply to this email and have your
>> reply appear on GitHub as well. If your project does not have this feature
>> enabled and wishes so, or if the feature is enabled but not working, please
>> contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
>> with INFRA.
>> ---



signature.asc
Description: OpenPGP digital signature


Re: [DISCUSS] KIP 141 - ProducerRecordBuilder Interface

2017-04-24 Thread Matthias J. Sax
Hey Jay,

I understand your concern, and for sure, we will need to keep the
current constructors deprecated for a long time (ie, many years).

But if we don't make the move, we will not be able to improve. And I
think warnings about using deprecated APIs is an acceptable price to
pay. And the API improvements will help new people who adopt Kafka to
get started more easily.

Otherwise Kafka might end up as many other enterprise software with a
lots of old stuff that is kept forever because nobody has the guts to
improve/change it.

Of course, we can still improve the docs of the deprecated constructors,
too.

Just my two cents.


-Matthias

On 4/23/17 3:37 PM, Jay Kreps wrote:
> Hey guys,
> 
> I definitely think that the constructors could have been better designed,
> but I think given that they're in heavy use I don't think this proposal
> will improve things. Deprecating constructors just leaves everyone with
> lots of warnings and crossed out things. We can't actually delete the
> methods because lots of code needs to be usable across multiple Kafka
> versions, right? So we aren't picking between the original approach (worse)
> and the new approach (better); what we are proposing is a perpetual
> mingling of the original style and the new style with a bunch of deprecated
> stuff, which I think is worst of all.
> 
> I'd vote for just documenting the meaning of null in the ProducerRecord
> constructor.
> 
> -Jay
> 
> On Wed, Apr 19, 2017 at 3:34 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
> 
>> Hi all,
>>
>> My first KIP, let me know your thoughts!
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP+
>> 141+-+ProducerRecordBuilder+Interface
>>
>>
>> Cheers,
>> Stephane
>>
> 



signature.asc
Description: OpenPGP digital signature


[jira] [Work stopped] (KAFKA-4994) Fix findbugs warning about OffsetStorageWriter#currentFlushId

2017-04-24 Thread Mariam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4994 stopped by Mariam John.
--
> Fix findbugs warning about OffsetStorageWriter#currentFlushId
> -
>
> Key: KAFKA-4994
> URL: https://issues.apache.org/jira/browse/KAFKA-4994
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Mariam John
>  Labels: newbie
>
> We should fix the findbugs warning about 
> {{OffsetStorageWriter#currentFlushId}}
> {code}
> Multithreaded correctness Warnings
> IS2_INCONSISTENT_SYNC:   Inconsistent synchronization of 
> org.apache.kafka.connect.storage.OffsetStorageWriter.currentFlushId; locked 
> 83% of time
> IS2_INCONSISTENT_SYNC:   Inconsistent synchronization of 
> org.apache.kafka.connect.storage.OffsetStorageWriter.toFlush; locked 75% of 
> time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-4994) Fix findbugs warning about OffsetStorageWriter#currentFlushId

2017-04-24 Thread Mariam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4994 started by Mariam John.
--
> Fix findbugs warning about OffsetStorageWriter#currentFlushId
> -
>
> Key: KAFKA-4994
> URL: https://issues.apache.org/jira/browse/KAFKA-4994
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Mariam John
>  Labels: newbie
>
> We should fix the findbugs warning about 
> {{OffsetStorageWriter#currentFlushId}}
> {code}
> Multithreaded correctness Warnings
> IS2_INCONSISTENT_SYNC:   Inconsistent synchronization of 
> org.apache.kafka.connect.storage.OffsetStorageWriter.currentFlushId; locked 
> 83% of time
> IS2_INCONSISTENT_SYNC:   Inconsistent synchronization of 
> org.apache.kafka.connect.storage.OffsetStorageWriter.toFlush; locked 75% of 
> time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Recording - Storm & Kafka Meetup on April 20th 2017

2017-04-24 Thread Harsha Chintalapani
Hi Aditya,
 Thanks for your interest. We are tentatively planning one for the first
week of June. If you haven't already, please register here
https://www.meetup.com/Apache-Storm-Apache-Kafka/ . I'll keep the Storm
lists updated once we finalize the date & location.

Thanks,
Harsha

On Mon, Apr 24, 2017 at 7:02 AM Aditya Desai  wrote:

> Hello Everyone
>
> Can you please let us know when is the next meet up? It would be great if
> we can have in May.
>
> Regards
> Aditya Desai
>
> On Mon, Apr 24, 2017 at 2:16 AM, Xin Wang  wrote:
>
>> How about publishing this to the Storm site?
>>
>>  - Xin
>>
>> 2017-04-22 19:27 GMT+08:00 steve tueno :
>>
>>> great
>>>
>>> Thanks
>>>
>>>
>>>
>>> Cordialement,
>>>
>>> TUENO FOTSO Steve Jeffrey
>>> Ingénieur de conception
>>> Génie Informatique
>>> +237 676 57 17 28 <+237%206%2076%2057%2017%2028>
>>> +237 697 86 36 38 <+237%206%2097%2086%2036%2038>
>>>
>>> +33 6 23 71 91 52 <+33%206%2023%2071%2091%2052>
>>>
>>>
>>> https://jobs.jumia.cm/fr/candidats/CVTF1486563.html
>>> 
>>>
>>> __
>>>
>>> https://play.google.com/store/apps/details?id=com.polytech.remotecomputer
>>> 
>>>
>>> https://play.google.com/store/apps/details?id=com.polytech.internetaccesschecker
>>> 
>>> *http://www.traveler.cm/
>>> *
>>> http://remotecomputer.traveler.cm/
>>> 
>>>
>>> https://play.google.com/store/apps/details?id=com.polytech.androidsmssender
>>> 
>>>
>>> https://github.com/stuenofotso/notre-jargon
>>> 
>>> https://play.google.com/store/apps/details?id=com.polytech.welovecameroon
>>> 
>>> https://play.google.com/store/apps/details?id=com.polytech.welovefrance
>>> 
>>>
>>>
>>>
>>> 2017-04-22 3:08 GMT+02:00 Roshan Naik :
>>>
 It was a great meetup and for the benefit of those interested but
 unable to attend it, here is a link to the recording :



 https://www.youtube.com/watch?v=kCRv6iEd7Ow
 




 List of Talks:

 -  *Introduction* –   Suresh Srinivas (Hortonworks)

 -  [4m:31sec] –  *Overview of  Storm 1.1* -  Hugo Louro
 (Hortonworks)

 -  [20m] –  *Rethinking the Storm 2.0 Worker*  - 

[jira] [Commented] (KAFKA-5116) Controller updates to ISR holds the controller lock for a very long time

2017-04-24 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15981382#comment-15981382
 ] 

Ismael Juma commented on KAFKA-5116:


cc [~onurkaraman] [~junrao]

> Controller updates to ISR holds the controller lock for a very long time
> 
>
> Key: KAFKA-5116
> URL: https://issues.apache.org/jira/browse/KAFKA-5116
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.10.1.0, 0.10.2.0
>Reporter: Justin Downing
> Fix For: 0.11.0.0
>
>
> Hello!
> Lately, we have noticed slow (or no) results when monitoring the broker's ISR 
> using JMX. Many of these requests appear to be 'hung' for a very long time 
> (eg: >2m). We've dug a bunch, and found that in our case, sometimes the 
> controllerLock can be held for multiple minutes in the IsrChangeNotifier 
> callback.
> Inside the lock, we are reading from Zookeeper for *each* partition in the 
> changeset. With a large changeset (eg: >500 partitions), [this 
> operation|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/controller/KafkaController.scala#L1347]
>  can take a long time to complete. 
> In KAFKA-2406, throttling was introduced to prevent overwhelming the 
> controller with many changesets at once. However, this does not take into 
> consideration _large_ changesets.
> We have identified two potential remediations we'd like to discuss further:
> * Move the Zookeeper request outside of the lock. This would then only lock 
> for the controller update and processing of the changeset.
> * Send limited changesets to Zookeeper when calling the 
> maybePropagateIsrChanges. When dealing with lots of partitions (eg: >1000) it 
> may be useful to batch the changesets in groups of 100 rather than send the 
> [entire 
> list|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ReplicaManager.scala#L204]
>  to Zookeeper at once.
> We're happy working on patches for either or both of these, but we are unsure 
> of the safety around these two proposals. Specifically, moving the Zookeeper 
> request out of the lock may be unsafe.
> Holding these locks for long periods of time seems problematic - it means 
> that broker failure won't be detected and acted upon quickly.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5116) Controller updates to ISR holds the controller lock for a very long time

2017-04-24 Thread Justin Downing (JIRA)
Justin Downing created KAFKA-5116:
-

 Summary: Controller updates to ISR holds the controller lock for a 
very long time
 Key: KAFKA-5116
 URL: https://issues.apache.org/jira/browse/KAFKA-5116
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.10.2.0, 0.10.1.0
Reporter: Justin Downing
 Fix For: 0.11.0.0


Hello!

Lately, we have noticed slow (or no) results when monitoring the broker's ISR 
using JMX. Many of these requests appear to be 'hung' for a very long time (eg: 
>2m). We've dug a bunch, and found that in our case, sometimes the 
controllerLock can be held for multiple minutes in the IsrChangeNotifier 
callback.

Inside the lock, we are reading from Zookeeper for *each* partition in the 
changeset. With a large changeset (eg: >500 partitions), [this 
operation|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/controller/KafkaController.scala#L1347]
 can take a long time to complete. 

In KAFKA-2406, throttling was introduced to prevent overwhelming the controller 
with many changesets at once. However, this does not take into consideration 
_large_ changesets.

We have identified two potential remediations we'd like to discuss further:

* Move the Zookeeper request outside of the lock. This would then only lock for 
the controller update and processing of the changeset.

* Send limited changesets to Zookeeper when calling the 
maybePropagateIsrChanges. When dealing with lots of partitions (eg: >1000) it 
may be useful to batch the changesets in groups of 100 rather than send the 
[entire 
list|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ReplicaManager.scala#L204]
 to Zookeeper at once.

We're happy working on patches for either or both of these, but we are unsure 
of the safety around these two proposals. Specifically, moving the Zookeeper 
request out of the lock may be unsafe.

Holding these locks for long periods of time seems problematic - it means that 
broker failure won't be detected and acted upon quickly.
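
To illustrate the second remediation only (the batching, not the locking change), here 
is a small sketch of splitting a large changeset into groups of 100 before propagating 
it. The class and method names are invented for the sketch and this is not the existing 
ReplicaManager code.

{code}
import java.util.ArrayList;
import java.util.List;

public class IsrChangeBatcher {
    // Hypothetical helper: split a large changeset into fixed-size batches so
    // that each ZooKeeper notification (and the work done under the controller
    // lock for it) stays small.
    static <T> List<List<T>> inBatchesOf(List<T> changeSet, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int start = 0; start < changeSet.size(); start += batchSize) {
            int end = Math.min(start + batchSize, changeSet.size());
            batches.add(new ArrayList<>(changeSet.subList(start, end)));
        }
        return batches;
    }
}
{code}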



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-4879) KafkaConsumer.position may hang forever when deleting a topic

2017-04-24 Thread Armin Braun (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Armin Braun reassigned KAFKA-4879:
--

Assignee: Armin Braun

> KafkaConsumer.position may hang forever when deleting a topic
> -
>
> Key: KAFKA-4879
> URL: https://issues.apache.org/jira/browse/KAFKA-4879
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.2.0
>Reporter: Shixiong Zhu
>Assignee: Armin Braun
>
> KafkaConsumer.position may hang forever when deleting a topic. The problem is 
> this line 
> https://github.com/apache/kafka/blob/022bf129518e33e165f9ceefc4ab9e622952d3bd/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L374
> The timeout is "Long.MAX_VALUE", and it will just retry forever for 
> UnknownTopicOrPartitionException.
> Here is a reproducer
> {code}
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.TopicPartition;
> import org.apache.kafka.common.serialization.StringDeserializer;
> import java.util.Collections;
> import java.util.Properties;
> import java.util.Set;
> public class KafkaReproducer {
>   public static void main(String[] args) {
> // Make sure "delete.topic.enable" is set to true.
> // Please create the topic test with "3" partitions manually.
> // The issue is gone when there is only one partition.
> String topic = "test";
> Properties props = new Properties();
> props.put("bootstrap.servers", "localhost:9092");
> props.put("group.id", "testgroup");
> props.put("value.deserializer", StringDeserializer.class.getName());
> props.put("key.deserializer", StringDeserializer.class.getName());
> props.put("enable.auto.commit", "false");
> KafkaConsumer<String, String> kc = new KafkaConsumer<>(props);
> kc.subscribe(Collections.singletonList(topic));
> kc.poll(0);
> Set<TopicPartition> partitions = kc.assignment();
> System.out.println("partitions: " + partitions);
> kc.pause(partitions);
> kc.seekToEnd(partitions);
> System.out.println("please delete the topic in 30 seconds");
> try {
>   // Sleep 30 seconds to give us enough time to delete the topic.
>   Thread.sleep(30000);
> } catch (InterruptedException e) {
>   e.printStackTrace();
> }
> System.out.println("sleep end");
> for (TopicPartition p : partitions) {
>   System.out.println(p + " offset: " + kc.position(p));
> }
> System.out.println("cannot reach here");
> kc.close();
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Recording - Storm & Kafka Meetup on April 20th 2017

2017-04-24 Thread Aditya Desai
Hello Everyone

Can you please let us know when the next meetup is? It would be great if
we could have one in May.

Regards
Aditya Desai

On Mon, Apr 24, 2017 at 2:16 AM, Xin Wang  wrote:

> How about publishing this to Storm site?
>
>  - Xin
>
> 2017-04-22 19:27 GMT+08:00 steve tueno :
>
>> great
>>
>> Thanks
>>
>>
>>
>> Best regards,
>>
>> TUENO FOTSO Steve Jeffrey
>> Design Engineer
>> Computer Engineering
>> +237 676 57 17 28
>> +237 697 86 36 38
>> +33 6 23 71 91 52
>>
>> https://jobs.jumia.cm/fr/candidats/CVTF1486563.html
>> __
>>
>> https://play.google.com/store/apps/details?id=com.polytech.remotecomputer
>> https://play.google.com/store/apps/details?id=com.polytech.internetaccesschecker
>> http://www.traveler.cm/
>> http://remotecomputer.traveler.cm/
>> https://play.google.com/store/apps/details?id=com.polytech.androidsmssender
>> https://github.com/stuenofotso/notre-jargon
>> https://play.google.com/store/apps/details?id=com.polytech.welovecameroon
>> https://play.google.com/store/apps/details?id=com.polytech.welovefrance
>>
>>
>>
>> 2017-04-22 3:08 GMT+02:00 Roshan Naik :
>>
>>> It was a great meetup and for the benefit of those interested but unable
>>> to attend it, here is a link to the recording :
>>>
>>>
>>>
>>> https://www.youtube.com/watch?v=kCRv6iEd7Ow
>>> 
>>>
>>>
>>>
>>>
>>> List of Talks:
>>>
>>> -  *Introduction* –   Suresh Srinivas (Hortonworks)
>>>
>>> -  [4m:31sec] –  *Overview of  Storm 1.1* -  Hugo Louro
>>> (Hortonworks)
>>>
>>> -  [20m] –  *Rethinking the Storm 2.0 Worker*  - Roshan Naik
>>> (Hortonworks)
>>>
>>> -  [57m] –  *Storm in Retail Context: Catalog data processing
>>> using Kafka, Storm & Microservices*   -   Karthik Deivasigamani
>>> (WalMart Labs)
>>>
>>> -  [1h: 54m:45sec] *–   Schema Registry &  Streaming Analytics
>>> Manager (aka StreamLine)   *-   Sriharsha Chintalapani (Hortonworks)
>>>
>>>
>>>
>>>
>>>
>>
>>
>


-- 
Aditya Ramachandra Desai
MS Computer Science Graduate 

how to assign JIRA issue to myself ?

2017-04-24 Thread James Chain
Hi,

I have been added to the JIRA contributor list. How can I assign a JIRA issue to
myself?
Thank you :D

Sincerely,
  James.C


the problems of partitions assigned to consumers

2017-04-24 Thread 揣立武
Hi, all. There are two problems we run into when using Kafka, both about partitions
assigned to consumers.

Problem 1: Partitions are reassigned to consumers whenever a consumer comes online
or goes offline, and then message latency becomes higher.

Problem 2: If we have two partitions, only two consumers can consume
messages. How can we let more consumers consume without expanding the number of partitions?

Thanks!


[jira] [Commented] (KAFKA-5110) ConsumerGroupCommand error handling improvement

2017-04-24 Thread Dustin Cote (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15981086#comment-15981086
 ] 

Dustin Cote commented on KAFKA-5110:


[~vahid] I'd love to provide reproduction steps, but I can't seem to reproduce 
locally. This was seen on a cluster not under my control. If we could print 
some better info here as [~hachikuji]'s PR is doing, I think it would be 
instructive in seeing what the root cause may be.

> ConsumerGroupCommand error handling improvement
> ---
>
> Key: KAFKA-5110
> URL: https://issues.apache.org/jira/browse/KAFKA-5110
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.1.1
>Reporter: Dustin Cote
>Assignee: Jason Gustafson
>
> The ConsumerGroupCommand isn't handling partition errors properly. It throws 
> the following:
> {code}
> kafka-consumer-groups.sh --zookeeper 10.10.10.10:2181 --group mygroup 
> --describe
> GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG OWNER
> Error while executing consumer group command empty.head
> java.lang.UnsupportedOperationException: empty.head
> at scala.collection.immutable.Vector.head(Vector.scala:193)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService$$anonfun$getLogEndOffset$1.apply(ConsumerGroupCommand.scala:197)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService$$anonfun$getLogEndOffset$1.apply(ConsumerGroupCommand.scala:194)
> at scala.Option.map(Option.scala:146)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.getLogEndOffset(ConsumerGroupCommand.scala:194)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.kafka$admin$ConsumerGroupCommand$ConsumerGroupService$$describePartition(ConsumerGroupCommand.scala:125)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$$anonfun$describeTopicPartition$2.apply(ConsumerGroupCommand.scala:107)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$$anonfun$describeTopicPartition$2.apply(ConsumerGroupCommand.scala:106)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.describeTopicPartition(ConsumerGroupCommand.scala:106)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.describeTopicPartition(ConsumerGroupCommand.scala:134)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.kafka$admin$ConsumerGroupCommand$ZkConsumerGroupService$$describeTopic(ConsumerGroupCommand.scala:181)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService$$anonfun$describeGroup$1.apply(ConsumerGroupCommand.scala:166)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService$$anonfun$describeGroup$1.apply(ConsumerGroupCommand.scala:166)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.describeGroup(ConsumerGroupCommand.scala:166)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.describe(ConsumerGroupCommand.scala:89)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.describe(ConsumerGroupCommand.scala:134)
> at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:68)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-140: Add administrative RPCs for adding, deleting, and listing ACLs

2017-04-24 Thread Ismael Juma
Thanks Colin. A few quick comments:

1. Is there a reason why AddAclsRequest must be sent to the controller
broker? When this was discussed previously, Jun suggested that such a
restriction may not be necessary for this request.

2. Other protocol APIs use the Delete prefix instead of Remove. Is there a
reason to deviate here? If there's a good reason to do so, we have to fix
the one mention of DeleteAcls in the proposal.

3. Do you mean "null" in the following sentence? "Note that an argument of
"none" is different than a wildcard argument."

4. How will the non-zero top-level error code be computed? A bit more
detail on this would be helpful. As a general rule, most batch protocol
APIs don't have a top level error code because errors are usually at the
batch element level. This has proved to be a bit of an issue in cases where
we want to return a generic error code (e.g. InvalidProtocolVersion). Also,
V2 of OffsetFetch has a top-level error code[1]. However, the OffsetFetch
behaviour is that we either use the top-level error code or the partition
level error codes, which is different than what is being suggested here.

5. Nit: In other requests we used `error_message` instead of `error_string`.

6. Regarding the migration plan, it's worth making it clear that the CLI
transition is not part of this KIP.

7. Regarding the forward compatibility point, we could potentially use
enums with UNKNOWN element? This pattern has been used elsewhere in Kafka
and it would be good to compare it with the proposed solution. An important
question is what can users do with UNKNOWN elements. If the assumption is
that users ignore them, then it seems like the enum approach may be good
enough.
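
For what it's worth, here is a minimal Java sketch of the UNKNOWN-element pattern
from point 7; the enum name and codes below are illustrative, not the actual
protocol definitions proposed in the KIP:

{code}
public enum AclResourceType {
    UNKNOWN((byte) 0),
    TOPIC((byte) 1),
    GROUP((byte) 2);

    private final byte code;

    AclResourceType(byte code) {
        this.code = code;
    }

    public byte code() {
        return code;
    }

    // Codes added by newer brokers map to UNKNOWN on older clients, so the
    // client can skip or surface them instead of failing to parse the response.
    public static AclResourceType fromCode(byte code) {
        for (AclResourceType type : values()) {
            if (type.code == code) {
                return type;
            }
        }
        return UNKNOWN;
    }
}
{code}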

Thanks,
Ismael

[1]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-88%3A+OffsetFetch+Protocol+Update


On Fri, Apr 21, 2017 at 9:27 PM, Colin McCabe  wrote:

> Hi all,
>
> As part of the AdminClient work, we would like to add methods for
> adding, deleting, and listing access control lists (ACLs).  I wrote up a
> KIP to discuss implementing requests for those operations, as well as
> AdminClient APIs.  Take a look at:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 140%3A+Add+administrative+RPCs+for+adding%2C+deleting%2C+and+listing+ACLs
>
> regards,
> Colin
>


Re: Recording - Storm & Kafka Meetup on April 20th 2017

2017-04-24 Thread Xin Wang
How about publishing this to Storm site?

 - Xin

2017-04-22 19:27 GMT+08:00 steve tueno :

> great
>
> Thanks
>
>
>
> Best regards,
>
> TUENO FOTSO Steve Jeffrey
> Design Engineer
> Computer Engineering
> +237 676 57 17 28
> +237 697 86 36 38
> +33 6 23 71 91 52
>
> https://jobs.jumia.cm/fr/candidats/CVTF1486563.html
> __
>
> https://play.google.com/store/apps/details?id=com.polytech.remotecomputer
> https://play.google.com/store/apps/details?id=com.polytech.internetaccesschecker
> http://www.traveler.cm/
> http://remotecomputer.traveler.cm/
> https://play.google.com/store/apps/details?id=com.polytech.androidsmssender
> https://github.com/stuenofotso/notre-jargon
> https://play.google.com/store/apps/details?id=com.polytech.welovecameroon
> https://play.google.com/store/apps/details?id=com.polytech.welovefrance
>
>
>
> 2017-04-22 3:08 GMT+02:00 Roshan Naik :
>
>> It was a great meetup and for the benefit of those interested but unable
>> to attend it, here is a link to the recording :
>>
>>
>>
>> https://www.youtube.com/watch?v=kCRv6iEd7Ow
>>
>>
>>
>> List of Talks:
>>
>> -  *Introduction* –   Suresh Srinivas (Hortonworks)
>>
>> -  [4m:31sec] –  *Overview of  Storm 1.1* -  Hugo Louro
>> (Hortonworks)
>>
>> -  [20m] –  *Rethinking the Storm 2.0 Worker*  - Roshan Naik
>> (Hortonworks)
>>
>> -  [57m] –  *Storm in Retail Context: Catalog data processing
>> using Kafka, Storm & Microservices*   -   Karthik Deivasigamani (WalMart
>> Labs)
>>
>> -  [1h: 54m:45sec] *–   Schema Registry &  Streaming Analytics
>> Manager (aka StreamLine)   *-   Sriharsha Chintalapani (Hortonworks)
>>
>>
>>
>>
>>
>
>


Re: [jira] [Commented] (KAFKA-4295) kafka-console-consumer.sh does not delete the temporary group in zookeeper

2017-04-24 Thread Arisha C
How can I unsubscribe?

Sent from my iPhone

> On 24-Apr-2017, at 1:07 PM, huxi (JIRA)  wrote:
> 
> 
>[ 
> https://issues.apache.org/jira/browse/KAFKA-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980835#comment-15980835
>  ] 
> 
> huxi commented on KAFKA-4295:
> -
> 
> [~ijuma]  Are we going to continue to fix this issue since KIP 109 is planned 
> to be finished in 0.11?
> 
>> kafka-console-consumer.sh does not delete the temporary group in zookeeper
>> --
>> 
>>Key: KAFKA-4295
>>URL: https://issues.apache.org/jira/browse/KAFKA-4295
>>Project: Kafka
>> Issue Type: Bug
>> Components: admin
>>   Affects Versions: 0.9.0.1, 0.10.0.0, 0.10.0.1
>>   Reporter: Sswater Shi
>>   Assignee: huxi
>>   Priority: Minor
>> 
>> I'm not sure whether this is a bug or it was designed this way.
>> Since 0.9.x.x, the kafka-console-consumer.sh will not delete the group 
>> information in zookeeper/consumers on exit when without "--new-consumer". 
>> There will be a lot of abandoned zookeeper/consumers/console-consumer-xxx if 
>> kafka-console-consumer.sh runs a lot of times.
>> When 0.8.x.x,  the kafka-console-consumer.sh can be followed by an argument 
>> "group". If not specified, the kafka-console-consumer.sh will create a 
>> temporary group name like 'console-consumer-'. If the group name is 
>> specified by "group", the information in the zookeeper/consumers will be 
>> kept on exit. If the group name is a temporary one, the information in the 
>> zookeeper will be deleted when kafka-console-consumer.sh is quitted by 
>> Ctrl+C. Why was this changed from 0.9.x.x?
> 
> 
> 
> --
> This message was sent by Atlassian JIRA
> (v6.3.15#6346)


Re: [DISCUSS] KIP 141 - ProducerRecordBuilder Interface

2017-04-24 Thread Jay Kreps
Hey guys,

I definitely think that the constructors could have been better designed,
but I think given that they're in heavy use I don't think this proposal
will improve things. Deprecating constructors just leaves everyone with
lots of warnings and crossed out things. We can't actually delete the
methods because lots of code needs to be usable across multiple Kafka
versions, right? So we aren't picking between the original approach (worse)
and the new approach (better); what we are proposing is a perpetual
mingling of the original style and the new style with a bunch of deprecated
stuff, which I think is worst of all.

I'd vote for just documenting the meaning of null in the ProducerRecord
constructor.
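
For reference, a small sketch of what such documentation could point at, i.e. how
null is interpreted by the existing constructors (summarized from the current
Javadoc; treat it as illustrative, not normative):

{code}
import org.apache.kafka.clients.producer.ProducerRecord;

public class NullSemanticsExample {
    public static void main(String[] args) {
        // partition = null and timestamp = null: the configured partitioner
        // picks the partition and the producer assigns the timestamp on send.
        ProducerRecord<String, String> defaults =
            new ProducerRecord<>("my-topic", null, null, "key", "value");

        // key = null: with the default partitioner the record is not hashed
        // to a fixed partition but spread across the available partitions.
        ProducerRecord<String, String> keyless =
            new ProducerRecord<>("my-topic", null, "value");

        System.out.println(defaults + " / " + keyless);
    }
}
{code}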

-Jay

On Wed, Apr 19, 2017 at 3:34 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Hi all,
>
> My first KIP, let me know your thoughts!
> https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> 141+-+ProducerRecordBuilder+Interface
>
>
> Cheers,
> Stephane
>


[jira] [Commented] (KAFKA-4295) kafka-console-consumer.sh does not delete the temporary group in zookeeper

2017-04-24 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980835#comment-15980835
 ] 

huxi commented on KAFKA-4295:
-

[~ijuma]  Are we going to continue to fix this issue since KIP 109 is planned 
to be finished in 0.11?

> kafka-console-consumer.sh does not delete the temporary group in zookeeper
> --
>
> Key: KAFKA-4295
> URL: https://issues.apache.org/jira/browse/KAFKA-4295
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.9.0.1, 0.10.0.0, 0.10.0.1
>Reporter: Sswater Shi
>Assignee: huxi
>Priority: Minor
>
> I'm not sure whether this is a bug or it was designed this way.
> Since 0.9.x.x, the kafka-console-consumer.sh will not delete the group 
> information in zookeeper/consumers on exit when without "--new-consumer". 
> There will be a lot of abandoned zookeeper/consumers/console-consumer-xxx if 
> kafka-console-consumer.sh runs a lot of times.
> When 0.8.x.x,  the kafka-console-consumer.sh can be followed by an argument 
> "group". If not specified, the kafka-console-consumer.sh will create a 
> temporary group name like 'console-consumer-'. If the group name is 
> specified by "group", the information in the zookeeper/consumers will be kept 
> on exit. If the group name is a temporary one, the information in the 
> zookeeper will be deleted when kafka-console-consumer.sh is quitted by 
> Ctrl+C. Why was this changed from 0.9.x.x?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-138: Change punctuate semantics

2017-04-24 Thread Matthias J. Sax
Hi,

I do like Damian's API proposal about the punctuation callback function.

I also did reread the KIP and thought about the semantics we want to
provide.

> Given the above, I don't see a reason any more for a separate system-time 
> based punctuation.

I disagree here. There are real-time applications, that want to get
callbacks in regular system-time intervals (completely independent from
stream-time). Thus we should allow this -- if we really follow the
"hybrid" approach, this could be configured with stream-time interval
infinite and delay whatever system-time punctuation interval you want to
have. However, I would like to add a proper API for this and do this
configuration under the hood (that would allow one implementation, without
all kinds of branching for different cases).

Thus, we definitely should have PunctutionType#StreamTime and
#SystemTime -- and additionally, we _could_ have #Hybrid. Thus, I am not
a fan of your latest API proposal.
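
To make that concrete, here is a small self-contained sketch of what registering
the two punctuation types separately could look like; the type names and the
schedule() signature below are assumed from this discussion, not an existing API:

{code}
// Hypothetical shapes taken from this thread, not current Kafka Streams API.
enum PunctuationType { STREAM_TIME, SYSTEM_TIME }

interface Punctuator {
    void punctuate(long timestamp);
}

interface PunctuationScheduler {
    void schedule(long intervalMs, PunctuationType type, Punctuator callback);
}

class PunctuationExample {
    static void register(PunctuationScheduler context) {
        // Stream-time punctuation: fires only as event time advances.
        context.schedule(60_000L, PunctuationType.STREAM_TIME,
            ts -> System.out.println("stream-time punctuate at " + ts));

        // System-time punctuation: fires on wall-clock time, independent
        // of whether any input records arrive.
        context.schedule(10_000L, PunctuationType.SYSTEM_TIME,
            ts -> System.out.println("system-time punctuate at " + ts));
    }
}
{code}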


About the hybrid approach in general. On the one hand I like it, on the
other hand, it seems to be rather (1) complicated (not necessarily from
an implementation point of view, but for people to understand it) and
(2) mixes two semantics together in a "weird" way. Thus, I disagree with:

> It may appear complicated at first but I do think these semantics will
> still be more understandable to users than having 2 separate punctuation
> schedules/callbacks with different PunctuationTypes.

This statement only holds if you apply strong assumptions that I don't
believe hold in general -- see (2) for details -- and I think it is
harder than you assume to reason about the hybrid approach in general.
IMHO, the hybrid approach is a "false friend" that seems to be easy to
reason about...


(1) Streams always embraced "easy to use" and we should really be
careful to keep it this way. On the other hand, as we are talking about
changes to PAPI, it won't affect DSL users (DSL does not use punctuation
at all at the moment), and thus, the "easy to use" mantra might not be
affected, while it will allow advanced users to express more complex stuff.

I like the mantra: "make simple things easy and complex things possible".

(2) IMHO the major disadvantage (issue?) of the hybrid approach is the
implicit assumption that event-time progresses at the same "speed" as
system-time during regular processing. This implies the assumption that
a slower progress in stream-time indicates the absence of input events
(and that later arriving input events will have a larger event-time with
high probability). Even if this might be true for some use cases, I
doubt it holds in general. Assume that you get a spike in traffic and
for some reason stream-time does advance slowly because you have more
records to process. This might trigger a system-time based punctuation
call even if this seems not to be intended. I strongly believe that it
is not easy to reason about the semantics of the hybrid approach (even
if the intentional semantics would be super useful -- but I doubt that
we get what we ask for).

Thus, I also believe that one might need different "configuration"
values for the hybrid approach if you run the same code for different
scenarios: regular processing, re-processing, catching up scenario. And
as the term "configuration" implies, we might be better off to not mix
configuration with business logic that is expressed via code.


One more comment: I also don't think that the hybrid approach is
deterministic as claimed in the use-case subpage. I understand the
reasoning and agree, that it is deterministic if certain assumptions
hold -- compare above -- and if configured correctly. But strictly
speaking it's not because there is a dependency on system-time (and
IMHO, if system-time is involved it cannot be deterministic by definition).


> I see how in theory this could be implemented on top of the 2 punctuate
> callbacks with the 2 different PunctuationTypes (one stream-time based,
> the other system-time based) but it would be a much more complicated
> scheme and I don't want to suggest that.

I agree that expressing the intended hybrid semantics is harder if we
offer only #StreamTime and #SystemTime punctuation. However, I also
believe that the hybrid approach is a "false friend" with regard to
reasoning about the semantics (it suggests that it is easier than it is
in reality). Therefore, we might be better off not offering the hybrid
approach and making it clear to developers that it is hard to mix
#StreamTime and #SystemTime in a semantically sound way.


Looking forward to your feedback. :)

-Matthias




On 4/22/17 11:43 AM, Michal Borowiecki wrote:
> Hi all,
> 
> Looking for feedback on the functional interface approach Damian
> proposed. What do people think?
> 
> Further on the semantics of triggering punctuate though:
> 
> I ran through the 2 use cases that Arun had kindly put on the wiki
> (https://cwiki.apache.org/confluence/display/KAFKA/Punctuate+Use+Cases)
> in my head and 

[jira] [Commented] (KAFKA-4592) Kafka Producer Metrics Invalid Value

2017-04-24 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980818#comment-15980818
 ] 

huxi commented on KAFKA-4592:
-

[~ijuma] What's the status of this JIRA after you said you needed a discussion 
with Jun about whether 0 is a good initial value for metrics?

> Kafka Producer Metrics Invalid Value
> 
>
> Key: KAFKA-4592
> URL: https://issues.apache.org/jira/browse/KAFKA-4592
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 0.10.1.1
>Reporter: AJ Jwair
>Assignee: huxi
>
> Producer metrics
> Metric name: record-size-max
> When no records are produced during the monitoring window, the 
> record-size-max has an invalid value of -9.223372036854776E16
> Please notice that the value is not a very small number close to zero bytes, 
> it is negative 90 quadrillion bytes
> The same behavior was observed in: records-lag-max



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [GitHub] kafka pull request #2759: MINOR: Move `Os` class to utils package and rename...

2017-04-24 Thread Arisha C
How can I unsubscribe from these mails?

Sent from my iPhone

> On 23-Apr-2017, at 2:16 AM, asfgit  wrote:
> 
> Github user asfgit closed the pull request at:
> 
>https://github.com/apache/kafka/pull/2759
> 
> 
> ---
> If your project is set up for it, you can reply to this email and have your
> reply appear on GitHub as well. If your project does not have this feature
> enabled and wishes so, or if the feature is enabled but not working, please
> contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
> with INFRA.
> ---


[GitHub] kafka pull request #2905: kafka-5104: DumpLogSegments should not open index ...

2017-04-24 Thread amethystic
GitHub user amethystic opened a pull request:

https://github.com/apache/kafka/pull/2905

kafka-5104: DumpLogSegments should not open index files with `rw`

Add a parameter 'writable' to AbstractIndex and set its default value to 
true in its child classes.
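
As a rough Java illustration of the idea (the real AbstractIndex is Scala, so the 
class and method names below are placeholders only): the index takes a writable 
flag that defaults to true for the broker, while a read-only tool such as 
DumpLogSegments can pass false and open the file with "r" instead of "rw":

{code}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class ReadOnlyAwareIndex {
    private final RandomAccessFile raf;

    public ReadOnlyAwareIndex(File file, boolean writable) throws IOException {
        // "rw" creates or extends the file; "r" never modifies it, which is
        // what an offline inspection tool wants.
        this.raf = new RandomAccessFile(file, writable ? "rw" : "r");
    }

    // Broker-side callers keep the current behaviour via the writable default.
    public ReadOnlyAwareIndex(File file) throws IOException {
        this(file, true);
    }

    public long sizeInBytes() throws IOException {
        return raf.length();
    }
}
{code}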

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/amethystic/kafka 
kafka-5104_DumpLogSegments_should_not_open_index_files_with_rw

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2905.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2905


commit f7ae90cddb57ad128c33ae12c7fd73ea7ef2dd05
Author: amethystic 
Date:   2017-04-24T07:06:27Z

kafka-5104: DumpLogSegments should not open index files with `rw`

Add a parameter 'writable' for AbstractIndex and set its default value to 
true for its children classes.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2898: kafka-5104: DumpLogSegments should not open index ...

2017-04-24 Thread amethystic
Github user amethystic closed the pull request at:

https://github.com/apache/kafka/pull/2898


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2904: kafka-5901: ReassignPartitionsCommand should prote...

2017-04-24 Thread amethystic
GitHub user amethystic opened a pull request:

https://github.com/apache/kafka/pull/2904

kafka-5901: ReassignPartitionsCommand should protect against empty re…

ReassignPartitionsCommand should protect against empty replica list 
assignment.
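
For illustration, a hedged Java sketch of the intended validation (the actual 
command is Scala, so the names below are placeholders only):

{code}
import java.util.List;
import java.util.Map;

public class ReassignmentValidator {
    // Reject any partition whose proposed replica list is empty, which would
    // otherwise leave the partition with no replicas at all.
    public static void validate(Map<String, List<Integer>> proposedAssignment) {
        for (Map.Entry<String, List<Integer>> entry : proposedAssignment.entrySet()) {
            List<Integer> replicas = entry.getValue();
            if (replicas == null || replicas.isEmpty()) {
                throw new IllegalArgumentException(
                    "Empty replica list proposed for partition " + entry.getKey());
            }
        }
    }
}
{code}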

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/amethystic/kafka 
kafka-5901_ReassignPartitionsCommand_protect_against_empty_replica_list

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2904.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2904


commit 793f2766eb38b76241512ac5aa84bb815e59d0b6
Author: amethystic 
Date:   2017-04-24T06:58:01Z

kafka-5901: ReassignPartitionsCommand should protect against empty replica 
list assignment

ReassignPartitionsCommand should protect against empty replica list 
assignment




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2903: MINOR: Fix needless GC + Result time unit in JMH

2017-04-24 Thread original-brownbear
GitHub user original-brownbear opened a pull request:

https://github.com/apache/kafka/pull/2903

MINOR: Fix needless GC + Result time unit in JMH

Fixes two issues with the JMH benchmark example:
* Trivial: The output should be in `ops/ms` for readability reasons (it's 
in the millions of operations per second)
* Important: The benchmark is not actually measuring the LRU-Cache 
performance as most of the time in each run is wasted on concatenating `key + 
counter` as well as `value + counter`. Fixed by pre-generating 10k K-V pairs 
(100x the cache capacity) and iterating over them. This brings the performance 
up by a factor of more than 5 on a standard 4-core i7 (from `~6k/ms` to 
`~35k/ms`); see the sketch below.
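
A minimal sketch of the pre-generation pattern described above, using a plain 
LinkedHashMap-based LRU stand-in rather than the cache class from the actual 
benchmark:

{code}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@OutputTimeUnit(TimeUnit.MILLISECONDS)   // report ops/ms for readability
public class LruCacheBenchmark {
    private static final int CACHE_CAPACITY = 100;
    private static final int KEY_COUNT = 10_000;   // 100x the cache capacity

    private final Map<String, String> cache =
        new LinkedHashMap<String, String>(CACHE_CAPACITY, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > CACHE_CAPACITY;
            }
        };

    private String[] keys;
    private String[] values;
    private int index;

    @Setup
    public void setup() {
        // Pre-generate keys and values so string concatenation is not part
        // of the measured benchmark body.
        keys = new String[KEY_COUNT];
        values = new String[KEY_COUNT];
        for (int i = 0; i < KEY_COUNT; i++) {
            keys[i] = "key-" + i;
            values[i] = "value-" + i;
        }
    }

    @Benchmark
    public String putAndGet() {
        if (index == KEY_COUNT) {
            index = 0;
        }
        int i = index++;
        cache.put(keys[i], values[i]);
        return cache.get(keys[i]);
    }
}
{code}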

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/original-brownbear/kafka fix-jmh-example

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2903.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2903


commit 8d30ef0f3e35f1fa03e035796f4e4d125bde4548
Author: Armin Braun 
Date:   2017-04-24T06:50:57Z

MINOR: Fix needless GC + Result time unit in JMH




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2902: kafka-5901: ReassignPartitionsCommand should prote...

2017-04-24 Thread amethystic
Github user amethystic closed the pull request at:

https://github.com/apache/kafka/pull/2902


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---