[jira] [Commented] (KAFKA-5093) Load only batch header when rebuilding producer ID map

2017-05-27 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027696#comment-16027696
 ] 

Umesh Chaudhary commented on KAFKA-5093:


No worries at all [~hachikuji] :)

> Load only batch header when rebuilding producer ID map
> --
>
> Key: KAFKA-5093
> URL: https://issues.apache.org/jira/browse/KAFKA-5093
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> When rebuilding the producer ID map for KIP-98, we unnecessarily load the 
> full record data into memory when scanning through the log. It would be 
> better to only load the batch header since it is all that is needed.
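
For a sense of what header-only scanning looks like, a minimal sketch against the Java clients' record APIs (the actual fix lives in the broker's Scala log-loading path; the file name is a placeholder). Batch-level fields such as the producer id and epoch are readable from the batch header without materializing the records:

{code}
import org.apache.kafka.common.record.FileRecords;
import org.apache.kafka.common.record.RecordBatch;

import java.io.File;
import java.io.IOException;

public class ProducerIdScanSketch {
    public static void main(String[] args) throws IOException {
        // Scan a log segment, touching only the batch headers.
        FileRecords segment = FileRecords.open(new File("00000000000000000000.log"));
        for (RecordBatch batch : segment.batches()) {
            if (batch.hasProducerId()) {
                System.out.printf("producerId=%d epoch=%d lastOffset=%d%n",
                        batch.producerId(), batch.producerEpoch(), batch.lastOffset());
            }
        }
        segment.close();
    }
}
{code}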



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5093) Load only batch header when rebuilding producer ID map

2017-05-27 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027362#comment-16027362
 ] 

Umesh Chaudhary commented on KAFKA-5093:


Tried to correlate it with the KIP but was unable to locate the intended piece of 
code to tweak. If possible, can you please point it out? 

> Load only batch header when rebuilding producer ID map
> --
>
> Key: KAFKA-5093
> URL: https://issues.apache.org/jira/browse/KAFKA-5093
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> When rebuilding the producer ID map for KIP-98, we unnecessarily load the 
> full record data into memory when scanning through the log. It would be 
> better to only load the batch header since it is all that is needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5171) TC should not accept empty string transactional id

2017-05-26 Thread Umesh Chaudhary (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Chaudhary reassigned KAFKA-5171:
--

Assignee: Umesh Chaudhary

> TC should not accept empty string transactional id
> --
>
> Key: KAFKA-5171
> URL: https://issues.apache.org/jira/browse/KAFKA-5171
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
> Fix For: 0.11.0.0
>
>
> Currently on the TC, both `null` and `empty string` will be accepted and a new 
> pid will be returned. However, on the producer client end an empty-string 
> transactional id is not allowed, and if the user explicitly sets it to an 
> empty string an RTE will be thrown.
> We can make the TC's behavior consistent with the client by also rejecting an 
> empty-string transactional id.
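
A minimal sketch of the proposed rule (names here are illustrative, not the coordinator's actual code): null stays acceptable and yields a fresh PID, while an empty string is rejected, matching the client-side validation.

{code}
public class TransactionalIdCheckSketch {
    // null: no transactional id, just allocate a new PID.
    // empty string: invalid on the client, so reject it on the TC as well.
    static boolean isValidTransactionalId(String transactionalId) {
        return transactionalId == null || !transactionalId.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isValidTransactionalId(null));    // true
        System.out.println(isValidTransactionalId("app-1")); // true
        System.out.println(isValidTransactionalId(""));      // false
    }
}
{code}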



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-4660) Improve test coverage KafkaStreams

2017-05-20 Thread Umesh Chaudhary (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Chaudhary reassigned KAFKA-4660:
--

Assignee: Umesh Chaudhary

> Improve test coverage KafkaStreams
> --
>
> Key: KAFKA-4660
> URL: https://issues.apache.org/jira/browse/KAFKA-4660
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Umesh Chaudhary
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> {{toString}} is used to print the topology, so it probably should have a unit 
> test.
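
A hedged sketch of the kind of test meant here, against the 0.10.x-era Streams API (topic names are placeholders; the topology is never started, so no broker is contacted):

{code}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

import java.util.Properties;

public class ToStringSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "tostring-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // never contacted

        KStreamBuilder builder = new KStreamBuilder();
        builder.stream("input-topic").to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder, props);
        String description = streams.toString();
        if (description == null || description.isEmpty()) {
            throw new AssertionError("expected a printable topology description");
        }
        System.out.println(description);
    }
}
{code}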



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5025) FetchRequestTest should use batches with more than one message

2017-05-15 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16010242#comment-16010242
 ] 

Umesh Chaudhary commented on KAFKA-5025:


Hi [~ijuma], I executed the above test and observed that *produceData* actually 
returns multiple records, i.e. different keys with different values. 
Is that not the expected behaviour of this method? Please correct me if my 
understanding is wrong. 

> FetchRequestTest should use batches with more than one message
> --
>
> Key: KAFKA-5025
> URL: https://issues.apache.org/jira/browse/KAFKA-5025
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>Assignee: Umesh Chaudhary
> Fix For: 0.11.0.0
>
>
> As part of the message format changes for KIP-98, 
> FetchRequestTest.produceData was changed to always use record batches 
> containing a single message. We should restructure the test so that it's more 
> realistic. 
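
To make the records-versus-batches distinction concrete: *produceData* returning records with different keys can still mean one batch per record, which is what this issue is about. A small sketch with the clients' MemoryRecords API (keys and values are placeholders) showing several messages packed into a single batch, the shape the restructured test should exercise:

{code}
import org.apache.kafka.common.record.CompressionType;
import org.apache.kafka.common.record.MemoryRecords;
import org.apache.kafka.common.record.RecordBatch;
import org.apache.kafka.common.record.SimpleRecord;

public class MultiRecordBatchSketch {
    public static void main(String[] args) {
        // Three records appended together end up in one batch.
        MemoryRecords records = MemoryRecords.withRecords(CompressionType.NONE,
                new SimpleRecord("k1".getBytes(), "v1".getBytes()),
                new SimpleRecord("k2".getBytes(), "v2".getBytes()),
                new SimpleRecord("k3".getBytes(), "v3".getBytes()));
        for (RecordBatch batch : records.batches()) {
            System.out.printf("batch offsets %d..%d%n", batch.baseOffset(), batch.lastOffset());
        }
    }
}
{code}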



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5171) TC should not accept empty string transactional id

2017-05-03 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996180#comment-15996180
 ] 

Umesh Chaudhary commented on KAFKA-5171:


[~guozhang] , Please review the PR. 

> TC should not accept empty string transactional id
> --
>
> Key: KAFKA-5171
> URL: https://issues.apache.org/jira/browse/KAFKA-5171
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Guozhang Wang
> Fix For: 0.11.0.0
>
>
> Currently on the TC, both `null` and `empty string` will be accepted and a new 
> pid will be returned. However, on the producer client end an empty-string 
> transactional id is not allowed, and if the user explicitly sets it to an 
> empty string an RTE will be thrown.
> We can make the TC's behavior consistent with the client by also rejecting an 
> empty-string transactional id.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5137) Controlled shutdown timeout message improvement

2017-04-27 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988235#comment-15988235
 ] 

Umesh Chaudhary commented on KAFKA-5137:


Indeed [~cotedm]. Even the *socketTimeoutMs* instance variable used in this 
method points to *controller.socket.timeout.ms*. Sent an initial PR; please 
review it, and I can improve it if we can find other reasons for an IOException 
during controlled shutdown. 

> Controlled shutdown timeout message improvement
> ---
>
> Key: KAFKA-5137
> URL: https://issues.apache.org/jira/browse/KAFKA-5137
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.2.0
>Reporter: Dustin Cote
>Priority: Minor
>  Labels: newbie
>
> Currently if you fail during controlled shutdown, you can get a message that 
> says socket.timeout.ms has expired. This config doesn't actually exist on 
> the broker. Instead, we should explicitly say whether we've hit 
> controller.socket.timeout.ms or request.timeout.ms, as it's confusing to 
> take action given the current message. I believe the relevant code is here:
> https://github.com/apache/kafka/blob/0.10.2/core/src/main/scala/kafka/server/KafkaServer.scala#L428-L454
> I'm also not sure if there's another timeout that could be hit here or 
> another reason why an IOException might be thrown. At the least we should 
> call out those two configs instead of the non-existent one, but if we can 
> direct users to the proper one, that would be even better.
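
A sketch of the clearer message being proposed (the real code is Scala in KafkaServer; host, port, and wording here are placeholders):

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ControlledShutdownMessageSketch {
    public static void main(String[] args) {
        int controllerSocketTimeoutMs = 30000; // controller.socket.timeout.ms
        try (Socket channel = new Socket()) {
            // Stand-in for the blocking channel used during controlled shutdown.
            channel.connect(new InetSocketAddress("controller.example.com", 9092),
                    controllerSocketTimeoutMs);
        } catch (IOException e) {
            // Name the configs that can actually expire, not socket.timeout.ms.
            System.err.printf("Controlled shutdown attempt failed (%s). This may mean "
                    + "controller.socket.timeout.ms (%d ms) or request.timeout.ms expired, "
                    + "or the controller is unreachable.%n", e, controllerSocketTimeoutMs);
        }
    }
}
{code}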



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (KAFKA-5025) FetchRequestTest should use batches with more than one message

2017-04-27 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982953#comment-15982953
 ] 

Umesh Chaudhary edited comment on KAFKA-5025 at 4/28/17 4:53 AM:
-

[~ijuma], taking this one to start my contribution to the project. May I ask 
for some guidelines to start working on this? What should be considered in this 
restructure? 


was (Author: umesh9...@gmail.com):
[~ijuma], taking this one to start my contribution to the project. May I ask 
for some guidelines to start working on this?

> FetchRequestTest should use batches with more than one message
> --
>
> Key: KAFKA-5025
> URL: https://issues.apache.org/jira/browse/KAFKA-5025
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>Assignee: Umesh Chaudhary
> Fix For: 0.11.0.0
>
>
> As part of the message format changes for KIP-98, 
> FetchRequestTest.produceData was changed to always use record batches 
> containing a single message. We should restructure the test so that it's more 
> realistic. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5132) Abort long running transactions

2017-04-27 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988198#comment-15988198
 ] 

Umesh Chaudhary commented on KAFKA-5132:


Sure [~damianguy]. Can understand :)

> Abort long running transactions
> ---
>
> Key: KAFKA-5132
> URL: https://issues.apache.org/jira/browse/KAFKA-5132
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> We need to abort any transactions that have been running longer than the txn 
> timeout.
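
A toy sketch of the scan implied here (names and data are hypothetical; the real coordinator tracks transaction state, not a plain map):

{code}
import java.util.HashMap;
import java.util.Map;

public class TxnExpirationSketch {
    public static void main(String[] args) {
        long txnTimeoutMs = 60000;
        long now = System.currentTimeMillis();
        // Transactional id -> time the transaction started (hypothetical data).
        Map<String, Long> txnStartMs = new HashMap<>();
        txnStartMs.put("txn-app-1", now - 120000); // running too long
        txnStartMs.put("txn-app-2", now - 5000);   // still fine
        txnStartMs.forEach((id, startMs) -> {
            if (now - startMs > txnTimeoutMs) {
                System.out.println("would abort expired transaction: " + id);
            }
        });
    }
}
{code}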



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5132) Abort long running transactions

2017-04-27 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15986115#comment-15986115
 ] 

Umesh Chaudhary commented on KAFKA-5132:


Hi [~damianguy], can I attempt this one? I need some pointers to do this 
though. 

> Abort long running transactions
> ---
>
> Key: KAFKA-5132
> URL: https://issues.apache.org/jira/browse/KAFKA-5132
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Damian Guy
>
> We need to abort any transactions that have been running longer than the txn 
> timeout.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5025) FetchRequestTest should use batches with more than one message

2017-04-25 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982953#comment-15982953
 ] 

Umesh Chaudhary commented on KAFKA-5025:


[~ijuma], taking this one to start my contribution to the project. May I ask 
for some guidelines to start working on this?

> FetchRequestTest should use batches with more than one message
> --
>
> Key: KAFKA-5025
> URL: https://issues.apache.org/jira/browse/KAFKA-5025
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>Assignee: Umesh Chaudhary
> Fix For: 0.11.0.0
>
>
> As part of the message format changes for KIP-98, 
> FetchRequestTest.produceData was changed to always use record batches 
> containing a single message. We should restructure the test so that it's more 
> realistic. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5025) FetchRequestTest should use batches with more than one message

2017-04-25 Thread Umesh Chaudhary (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Chaudhary reassigned KAFKA-5025:
--

Assignee: Umesh Chaudhary

> FetchRequestTest should use batches with more than one message
> --
>
> Key: KAFKA-5025
> URL: https://issues.apache.org/jira/browse/KAFKA-5025
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>Assignee: Umesh Chaudhary
> Fix For: 0.11.0.0
>
>
> As part of the message format changes for KIP-98, 
> FetchRequestTest.produceData was changed to always use record batches 
> containing a single message. We should restructure the test so that it's more 
> realistic. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5114) Clarify meaning of logs in Introduction: Topics and Logs

2017-04-23 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15980644#comment-15980644
 ] 

Umesh Chaudhary commented on KAFKA-5114:


IMHO, it correlates with the previous statement: *The partitions in the 
log serve several purposes...*. So we described the log as the topic's data, 
and the partitions of the log would be the partitions of the topic's data.

> Clarify meaning of logs in Introduction: Topics and Logs
> 
>
> Key: KAFKA-5114
> URL: https://issues.apache.org/jira/browse/KAFKA-5114
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Michael Ernest
>Priority: Minor
>
> The term log is ambiguous in this section:
> * To describe a partition as a 'structured commit log'
> * To describe a topic as a partitioned log
> Then there's this sentence under Distribution: "The partitions of the log are 
> distributed over the servers in the Kafka cluster with each server handling 
> data and requests for a share of the partitions"
> In that last sentence, replacing 'log' with 'topic' would be clearer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5098) KafkaProducer.send() blocks and generates TimeoutException if topic name has illegal char

2017-04-22 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979910#comment-15979910
 ] 

Umesh Chaudhary commented on KAFKA-5098:


I will reproduce and work on this. 

> KafkaProducer.send() blocks and generates TimeoutException if topic name has 
> illegal char
> -
>
> Key: KAFKA-5098
> URL: https://issues.apache.org/jira/browse/KAFKA-5098
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.0
> Environment: Java client running against server using 
> wurstmeister/kafka Docker image.
>Reporter: Jeff Larsen
>
> The server is running with auto create enabled. If we try to publish to a 
> topic with a forward slash in the name, the call blocks and we get a 
> TimeoutException in the Callback. I would expect it to return immediately 
> with an InvalidTopicException.
> There are other blocking issues that have been reported which may be related 
> to some degree, but this particular cause seems unrelated.
> Sample code:
> {code}
> import org.apache.kafka.clients.producer.*;
> import java.util.*;
> public class KafkaProducerUnexpectedBlockingAndTimeoutException {
>   public static void main(String[] args) {
> Properties props = new Properties();
> props.put("bootstrap.servers", "kafka.example.com:9092");
> props.put("key.serializer", 
> "org.apache.kafka.common.serialization.StringSerializer");
> props.put("value.serializer", 
> "org.apache.kafka.common.serialization.StringSerializer");
> props.put("max.block.ms", 1); // 10 seconds should illustrate our 
> point
> String separator = "/";
> //String separator = "_";
> try (Producer<String, String> producer = new KafkaProducer<>(props)) {
>   System.out.println("Calling KafkaProducer.send() at " + new Date());
>   producer.send(
>   new ProducerRecord<>("abc" + separator + 
> "someStreamName",
>   "Not expecting a TimeoutException here"),
>   new Callback() {
> @Override
> public void onCompletion(RecordMetadata metadata, Exception e) {
>   if (e != null) {
> System.out.println(e.toString());
>   }
> }
>   });
>   System.out.println("KafkaProducer.send() completed at " + new Date());
> }
>   }
> }
> {code}
> Switching to the underscore separator in the above example works as expected.
> Mea culpa: We neglected to research allowed chars in a topic name, but the 
> TimeoutException we encountered did not help point us in the right direction.
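
For reference, legal Kafka topic names are limited to ASCII alphanumerics, '.', '_' and '-', so the forward slash could be rejected client-side before any metadata wait. A standalone sketch of that check (the pattern mirrors Kafka's rule; the class is illustrative, not the client's own validator):

{code}
import java.util.regex.Pattern;

public class TopicNameCheckSketch {
    private static final Pattern LEGAL_CHARS = Pattern.compile("[a-zA-Z0-9._-]+");

    public static void main(String[] args) {
        for (String topic : new String[]{"abc/someStreamName", "abc_someStreamName"}) {
            boolean ok = LEGAL_CHARS.matcher(topic).matches();
            System.out.println(topic + " -> " + (ok ? "valid" : "invalid"));
        }
    }
}
{code}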



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5049) Chroot check should be done for each ZkUtils instance

2017-04-15 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15969873#comment-15969873
 ] 

Umesh Chaudhary commented on KAFKA-5049:


[~anukin] I tried to assign this to you but unfortunately I am not able to do 
that. Maybe [~junrao] or [~ijuma] can do that.

> Chroot check should be done for each ZkUtils instance
> -
>
> Key: KAFKA-5049
> URL: https://issues.apache.org/jira/browse/KAFKA-5049
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> In KAFKA-1994, the check for ZK chroot was moved to ZkPath. However, ZkPath 
> is a JVM singleton and we may use multiple ZkClient instances with multiple 
> ZooKeeper ensembles in the same JVM (for cluster info, authorizer and 
> pluggable code provided by users).
> The right way to do this is to make ZkPath an instance variable in ZkUtils so 
> that we do the check once per ZkUtils instance.
> cc [~gwenshap] [~junrao], who reviewed KAFKA-1994, in case I am missing 
> something.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5049) Chroot check should be done for each ZkUtils instance

2017-04-14 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15969805#comment-15969805
 ] 

Umesh Chaudhary commented on KAFKA-5049:


[~anukin] you can send a PR, and for detailed steps you can refer to [this 
page|https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes#ContributingCodeChanges-PullRequest].

> Chroot check should be done for each ZkUtils instance
> -
>
> Key: KAFKA-5049
> URL: https://issues.apache.org/jira/browse/KAFKA-5049
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> In KAFKA-1994, the check for ZK chroot was moved to ZkPath. However, ZkPath 
> is a JVM singleton and we may use multiple ZkClient instances with multiple 
> ZooKeeper ensembles in the same JVM (for cluster info, authorizer and 
> pluggable code provided by users).
> The right way to do this is to make ZkPath an instance variable in ZkUtils so 
> that we do the check once per ZkUtils instance.
> cc [~gwenshap] [~junrao], who reviewed KAFKA-1994, in case I am missing 
> something.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5049) Chroot check should be done for each ZkUtils instance

2017-04-13 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15968624#comment-15968624
 ] 

Umesh Chaudhary commented on KAFKA-5049:


Thanks [~junrao] for the pointers. Looking at the current implementation, it 
seems difficult to instantiate ZkPath trivially. Wouldn't we need another class 
definition (as a companion of the existing ZkPath object) to enable 
instantiation of ZkPath? Please correct me if I am wrong. 
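
To make the shape of the fix concrete, a small Java sketch of per-instance checking (all names are illustrative; Kafka's ZkPath is a Scala object and the real change belongs in ZkUtils):

{code}
public class ZkPathSketch {
    private final String zkConnect;
    private volatile boolean namespaceChecked = false;

    ZkPathSketch(String zkConnect) {
        this.zkConnect = zkConnect;
    }

    void ensureNamespace(String chroot) {
        // Per-instance state: each ZooKeeper ensemble gets its own check,
        // instead of one JVM-wide flag shared across ensembles.
        if (!namespaceChecked) {
            System.out.println("checking chroot " + chroot + " on " + zkConnect);
            namespaceChecked = true;
        }
    }

    public static void main(String[] args) {
        new ZkPathSketch("zk-a:2181").ensureNamespace("/kafka");
        new ZkPathSketch("zk-b:2181").ensureNamespace("/kafka"); // checked again, correctly
    }
}
{code}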

> Chroot check should be done for each ZkUtils instance
> -
>
> Key: KAFKA-5049
> URL: https://issues.apache.org/jira/browse/KAFKA-5049
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> In KAFKA-1994, the check for ZK chroot was moved to ZkPath. However, ZkPath 
> is a JVM singleton and we may use multiple ZkClient instances with multiple 
> ZooKeeper ensembles in the same JVM (for cluster info, authorizer and 
> pluggable code provided by users).
> The right way to do this is to make ZkPath an instance variable in ZkUtils so 
> that we do the check once per ZkUtils instance.
> cc [~gwenshap] [~junrao], who reviewed KAFKA-1994, in case I am missing 
> something.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5057) "Big Message Log"

2017-04-13 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15968580#comment-15968580
 ] 

Umesh Chaudhary commented on KAFKA-5057:


Understood, and yes, this is a good idea to capture the frequency of "Big 
Messages" on the broker. The new broker config would set the threshold, and the 
broker would log the details of produced messages that exceed it. 
Also, I can start preparing a KIP for this feature. 
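
A toy sketch of the check being discussed, with a hypothetical threshold config name (the real change would sit on the broker's produce path):

{code}
public class BigMessageLogSketch {
    // Hypothetical broker config, e.g. "produce.request.log.size.threshold.bytes".
    static final int THRESHOLD_BYTES = 1000000;

    static void maybeLogBigRequest(String clientId, String topic, int partition, int sizeBytes) {
        if (sizeBytes > THRESHOLD_BYTES) {
            System.out.printf("big produce request: client=%s topic=%s partition=%d size=%d%n",
                    clientId, topic, partition, sizeBytes);
        }
    }

    public static void main(String[] args) {
        maybeLogBigRequest("etl-loader", "events", 3, 5000000); // logged
        maybeLogBigRequest("web-frontend", "clicks", 0, 20000); // ignored
    }
}
{code}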

> "Big Message Log"
> -
>
> Key: KAFKA-5057
> URL: https://issues.apache.org/jira/browse/KAFKA-5057
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Really large requests can cause significant GC pauses, which can cause quite a 
> few other symptoms on a broker. It would be nice to be able to catch them.
> Let's add the option to log details (client id, topic, partition) for every 
> produce request that is larger than a configurable threshold.
> /cc [~apurva]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5057) "Big Message Log"

2017-04-12 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965615#comment-15965615
 ] 

Umesh Chaudhary commented on KAFKA-5057:


[~gwenshap], we currently have the "max.request.size" configuration for a 
producer; do you suggest printing the log message based on this config, or 
introducing a new config to define the threshold? 

> "Big Message Log"
> -
>
> Key: KAFKA-5057
> URL: https://issues.apache.org/jira/browse/KAFKA-5057
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Really large requests can cause significant GC pauses, which can cause quite a 
> few other symptoms on a broker. It would be nice to be able to catch them.
> Let's add the option to log details (client id, topic, partition) for every 
> produce request that is larger than a configurable threshold.
> /cc [~apurva]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4737) Streams apps hang if started when brokers are not available

2017-02-05 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15853424#comment-15853424
 ] 

Umesh Chaudhary commented on KAFKA-4737:


No worries. Thanks, [~gwenshap]!

> Streams apps hang if started when brokers are not available
> ---
>
> Key: KAFKA-4737
> URL: https://issues.apache.org/jira/browse/KAFKA-4737
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Start a streams example while the broker is down, and it will just hang there. 
> It will also hang on shutdown.
> I'd expect it to exit with an error message if the broker is not available.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4737) Streams apps hang if started when brokers are not available

2017-02-05 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15853422#comment-15853422
 ] 

Umesh Chaudhary commented on KAFKA-4737:


Hi [~gwenshap], is the intent similar to KAFKA-4564?

> Streams apps hang if started when brokers are not available
> ---
>
> Key: KAFKA-4737
> URL: https://issues.apache.org/jira/browse/KAFKA-4737
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Start a streams example while the broker is down, and it will just hang there. 
> It will also hang on shutdown.
> I'd expect it to exit with an error message if the broker is not available.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (KAFKA-4569) Transient failure in org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable

2017-01-10 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813892#comment-15813892
 ] 

Umesh Chaudhary edited comment on KAFKA-4569 at 1/10/17 9:18 AM:
-

Hi [~ijuma], one observation regarding this case: when I run the test 
testWakeupWithFetchDataAvailable it never fails, but when I debug it with a 
breakpoint, it always fails. Trying to find the reasons for the debug failures. 


was (Author: umesh9...@gmail.com):
Hi [~ijuma], one observation regarding this case: when I run the test 
testWakeupWithFetchDataAvailable it never fails, but when I debug it, it always 
fails. Trying to find the reasons for the debug failures. 

> Transient failure in 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable
> -
>
> Key: KAFKA-4569
> URL: https://issues.apache.org/jira/browse/KAFKA-4569
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
>  Labels: newbie
> Fix For: 0.10.2.0
>
>
> One example is:
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/370/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testWakeupWithFetchDataAvailable/
> {code}
> Stacktrace
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.fail(Assert.java:95)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable(KafkaConsumerTest.java:679)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[jira] [Commented] (KAFKA-4564) When the destination brokers are down or misconfigured in config, Streams should fail fast

2017-01-10 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15814359#comment-15814359
 ] 

Umesh Chaudhary commented on KAFKA-4564:


Hi [~guozhang], [~ewencp], I am thinking of creating a KafkaProducer (using the 
properties wrapped in the StreamsConfig) inside the constructor of the 
StreamTask class to verify whether the user has configured the bootstrap list 
correctly. For a misconfigured bootstrap list, we can throw the appropriate 
exception right there without proceeding further. 

Please suggest whether that was the expectation from this JIRA, and correct me 
if I am wrong here.
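
A sketch of the fail-fast idea under discussion (placeholder host names; note this catches unresolvable hosts, while a resolvable-but-down broker would still need a connection-level check):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class BootstrapCheckSketch {
    public static void main(String[] args) {
        String bootstrapServers = "broker1.example.com:9092,broker2.example.com:9092";
        for (String hostPort : bootstrapServers.split(",")) {
            String host = hostPort.substring(0, hostPort.lastIndexOf(':'));
            try {
                InetAddress.getByName(host); // eager DNS check before starting threads
            } catch (UnknownHostException e) {
                throw new IllegalStateException("Misconfigured bootstrap server: " + hostPort, e);
            }
        }
        System.out.println("bootstrap list looks resolvable");
    }
}
{code}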

> When the destination brokers are down or misconfigured in config, Streams 
> should fail fast
> --
>
> Key: KAFKA-4564
> URL: https://issues.apache.org/jira/browse/KAFKA-4564
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
>  Labels: newbie
>
> Today if Kafka is down or users misconfigure the bootstrap list, Streams may 
> just hang for a while without any error messages even with log4j 
> enabled, which is quite confusing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4569) Transient failure in org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable

2017-01-09 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813892#comment-15813892
 ] 

Umesh Chaudhary commented on KAFKA-4569:


Hi [~ijuma], one observation regarding this case: when I run the test 
testWakeupWithFetchDataAvailable it never fails, but when I debug it, it always 
fails. Trying to find the reasons for the debug failures. 

> Transient failure in 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable
> -
>
> Key: KAFKA-4569
> URL: https://issues.apache.org/jira/browse/KAFKA-4569
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
>  Labels: newbie
> Fix For: 0.10.2.0
>
>
> One example is:
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/370/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testWakeupWithFetchDataAvailable/
> {code}
> Stacktrace
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.fail(Assert.java:95)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable(KafkaConsumerTest.java:679)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> 

[jira] [Commented] (KAFKA-4564) When the destination brokers are down or misconfigured in config, Streams should fail fast

2017-01-02 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15793810#comment-15793810
 ] 

Umesh Chaudhary commented on KAFKA-4564:


Thank you [~ewencp].

> When the destination brokers are down or misconfigured in config, Streams 
> should fail fast
> --
>
> Key: KAFKA-4564
> URL: https://issues.apache.org/jira/browse/KAFKA-4564
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
>  Labels: newbie
>
> Today if Kafka is down or users misconfigure the bootstrap list, Streams may 
> just hang for a while without any error messages even with log4j 
> enabled, which is quite confusing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4564) When the destination brokers are down or misconfigured in config, Streams should fail fast

2017-01-01 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15792067#comment-15792067
 ] 

Umesh Chaudhary commented on KAFKA-4564:


Hi [~guozhang], please assign this to me.

> When the destination brokers are down or misconfigured in config, Streams 
> should fail fast
> --
>
> Key: KAFKA-4564
> URL: https://issues.apache.org/jira/browse/KAFKA-4564
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: newbie
>
> Today if Kafka is down or users misconfigure the bootstrap list, Streams may 
> just hang for a while without any error messages even with log4j 
> enabled, which is quite confusing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4569) Transient failure in org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable

2017-01-01 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15792055#comment-15792055
 ] 

Umesh Chaudhary commented on KAFKA-4569:


Hi [~guozhang], I can work on this. Please assign this to me or add me to the 
contributor's list. 

> Transient failure in 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable
> -
>
> Key: KAFKA-4569
> URL: https://issues.apache.org/jira/browse/KAFKA-4569
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>  Labels: newbie
>
> One example is:
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/370/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testWakeupWithFetchDataAvailable/
> {code}
> Stacktrace
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.fail(Assert.java:95)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable(KafkaConsumerTest.java:679)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 

[jira] [Commented] (KAFKA-4308) Inconsistent parameters between console producer and consumer

2016-10-17 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15582838#comment-15582838
 ] 

Umesh Chaudhary commented on KAFKA-4308:


Sure Andrew, can you please assign this to me?

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4308
> URL: https://issues.apache.org/jira/browse/KAFKA-4308
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for some consistency?
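
Concretely, the inconsistency looks like this on the command line (localhost and the topic name are placeholders):

{code}
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
{code}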



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4308) Inconsistent parameters between console producer and consumer

2016-10-17 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15582819#comment-15582819
 ] 

Umesh Chaudhary commented on KAFKA-4308:


Hi [~gwenshap], I would like to work on this.
If possible, please assign it to me.

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4308
> URL: https://issues.apache.org/jira/browse/KAFKA-4308
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for some consistency?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4122) Consumer startup swallows DNS resolution exception and infinitely retries

2016-09-29 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15531969#comment-15531969
 ] 

Umesh Chaudhary commented on KAFKA-4122:


This one looks feasible. 
[~junrao] [~ewencp] Please advise. I can work on this. 

> Consumer startup swallows DNS resolution exception and infinitely retries
> -
>
> Key: KAFKA-4122
> URL: https://issues.apache.org/jira/browse/KAFKA-4122
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, network
>Affects Versions: 0.9.0.1
> Environment: Run from Docker image with following Dockerfile:
> {code}
> FROM java:openjdk-8-jre
> ENV DEBIAN_FRONTEND noninteractive
> ENV SCALA_VERSION 2.11
> ENV KAFKA_VERSION 0.9.0.1
> ENV KAFKA_HOME /opt/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION"
> # Install Kafka, Zookeeper and other needed things
> RUN apt-get update && \
> apt-get install -y zookeeper wget supervisor dnsutils && \
> rm -rf /var/lib/apt/lists/* && \
> apt-get clean && \
> wget -q 
> http://apache.mirrors.spacedump.net/kafka/"$KAFKA_VERSION"/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz
>  -O /tmp/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz && \
> tar xfz /tmp/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz -C /opt && \
> rm /tmp/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz
> {code}
>Reporter: Shane Hender
>
> When a consumer encounters nodes whose IP it can't resolve, I'd expect 
> it to print an ERROR-level message and bubble up an exception, especially if 
> there are no other nodes available.
> Following is the stack trace that was hidden under the DEBUG trace level:
> {code}
> 18:30:47.070 [Filters-akka.kafka.default-dispatcher-7] DEBUG 
> o.apache.kafka.clients.NetworkClient - Initialize connection to node 0 for 
> sending metadata request
> 18:30:47.070 [Filters-akka.kafka.default-dispatcher-7] DEBUG 
> o.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at 
> kafka.docker:9092.
> 18:30:47.071 [Filters-akka.kafka.default-dispatcher-7] DEBUG 
> o.apache.kafka.clients.NetworkClient - Error connecting to node 0 at 
> kafka.docker:9092:
> java.io.IOException: Can't resolve address: kafka.docker:9092
>   at 
> org.apache.kafka.common.network.Selector.connect(Selector.java:156)
>   at 
> org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:489)
>   at 
> org.apache.kafka.clients.NetworkClient.access$400(NetworkClient.java:47)
>   at 
> org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:624)
>   at 
> org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:543)
>   at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:254)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:184)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:886)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
>   at 
> akka.kafka.internal.ConsumerStageLogic.poll(ConsumerStage.scala:410)
>   at 
> akka.kafka.internal.CommittableConsumerStage$$anon$1.poll(ConsumerStage.scala:166)
>   at 
> akka.kafka.internal.ConsumerStageLogic$$anon$5.onPull(ConsumerStage.scala:360)
>   at 
> akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:608)
>   at 
> akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:542)
>   at 
> akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:471)
>   at 
> akka.stream.impl.fusing.GraphInterpreterShell.receive(ActorGraphInterpreter.scala:414)
>   at 
> akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:603)
>   at 
> akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:618)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:484)
>   at 
> 

[jira] [Commented] (KAFKA-4142) Log files in /data dir date modified keeps being updated?

2016-09-29 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15531931#comment-15531931
 ] 

Umesh Chaudhary commented on KAFKA-4142:


[~cmhillerman], did you compare the sizes of the two files that have the same 
name but whose date modified changed?

From the doc I see:
the log is cleaned by recopying each log segment but omitting any key that 
appears in the offset map with a higher offset than what is found in the 
segment (i.e. messages with a key that appears in the dirty section of the log).

I believe that when you compare the sizes of the two files, you will find that 
some messages were cleaned based on the retention policy.

> Log files in /data dir date modified keeps being updated?
> -
>
> Key: KAFKA-4142
> URL: https://issues.apache.org/jira/browse/KAFKA-4142
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
> Environment: CentOS release 6.8 (Final)
> uname -a
> Linux 2.6.32-642.1.1.el6.x86_64 #1 SMP Tue May 31 21:57:07 UTC 2016 x86_64 
> x86_64 x86_64 GNU/Linux
>Reporter: Clint Hillerman
>Priority: Minor
>
> The date modified of the Kafka logs (the main ones specified by log.dirs in 
> the config) keeps getting updated and set to the exact same time.
> For example:
> Say I had two log and index files (date modified - file name):
> 20160901:10:00:01 - 0001.log
> 20160901:10:00:01 - 0001.index
> 20160902:10:00:01 - 0002.log
> 20160902:10:00:01 - 0002.index
> Later I notice the logs are getting way too old for the retention time. I then 
> go look at the log dir and I see this:
> 20160903:10:00:01 - 0001.log
> 20160903:10:00:01 - 0001.index
> 20160903:10:00:01 - 0002.log
> 20160903:10:00:01 - 0002.index
> 20160903:10:00:01 - 0003.log
> 20160903:10:00:01 - 0003.index
> 20160904:10:00:01 - 0004.log
> 20160904:10:00:01 - 0004.index
> The first two log files had their date modified moved forward for some 
> reason. They were updated from 0901 and 0902 to 0903. 
> It seems to happen periodically. The new logs that Kafka writes out have the 
> correct time stamp. 
> This causes the logs to not be deleted. Right now I just touch the log files 
> to an older date and they are deleted right away. 
> Any help would be appreciated. Also, I'll explain the problem better if this 
> doesn't make sense.
> Thanks,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4189) Consumer poll hangs forever if kafka is disabled

2016-09-21 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15509565#comment-15509565
 ] 

Umesh Chaudhary commented on KAFKA-4189:


Yep, [Guozhang's 
comment|https://issues.apache.org/jira/browse/KAFKA-1894?focusedCommentId=15036256&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15036256]
 suggests the same.

> Consumer poll hangs forever if kafka is disabled
> 
>
> Key: KAFKA-4189
> URL: https://issues.apache.org/jira/browse/KAFKA-4189
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Tomas Benc
>Priority: Critical
>
> We develop a web application where a client sends a REST request and our 
> application downloads messages from Kafka and sends those messages back to 
> the client. In our web application we use the "New Consumer API" (not the 
> High Level nor the Simple Consumer API).
> The problem occurs when Kafka is disabled while the web application keeps 
> running. The application receives a request and tries to poll messages from 
> Kafka. Processing blocks on that line until Kafka is enabled again.
> ConsumerRecords<String, String> records = consumer.poll(1000);
> The timeout parameter of the poll method has no influence in such a case. I 
> would expect the poll method to throw some Exception describing the 
> connection issue.
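
One caller-side guard for the behavior described above, sketched against the new consumer API (broker address, group id, and topic are placeholders): arrange for consumer.wakeup() to fire on a timer, so a poll() that is stuck on an unreachable cluster surfaces as a WakeupException instead of blocking the request thread indefinitely.

{code}
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

import java.util.Collections;
import java.util.Properties;
import java.util.Timer;
import java.util.TimerTask;

public class PollWithDeadlineSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9092");
        props.put("group.id", "web-app");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("requests"));

        // Hard deadline that fires regardless of broker availability.
        Timer timer = new Timer(true);
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                consumer.wakeup();
            }
        }, 5000);

        try {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            System.out.println("received " + records.count() + " records");
        } catch (WakeupException e) {
            System.err.println("Kafka was unreachable within the deadline");
        } finally {
            timer.cancel();
            consumer.close();
        }
    }
}
{code}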



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4189) Consumer poll hangs forever if kafka is disabled

2016-09-19 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15502860#comment-15502860
 ] 

Umesh Chaudhary commented on KAFKA-4189:


Is it similar to [KAFKA-1894|https://issues.apache.org/jira/browse/KAFKA-1894]?

> Consumer poll hangs forever if kafka is disabled
> 
>
> Key: KAFKA-4189
> URL: https://issues.apache.org/jira/browse/KAFKA-4189
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Tomas Benc
>Priority: Critical
>
> We develop a web application where a client sends a REST request and our 
> application downloads messages from Kafka and sends those messages back to 
> the client. In our web application we use the "New Consumer API" (not the 
> High Level nor the Simple Consumer API).
> The problem occurs when Kafka is disabled while the web application keeps 
> running. The application receives a request and tries to poll messages from 
> Kafka. Processing blocks on that line until Kafka is enabled again.
> ConsumerRecords<String, String> records = consumer.poll(1000);
> The timeout parameter of the poll method has no influence in such a case. I 
> would expect the poll method to throw some Exception describing the 
> connection issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4053) Refactor TopicCommand to remove redundant if/else statements

2016-08-17 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424224#comment-15424224
 ] 

Umesh Chaudhary commented on KAFKA-4053:


I will do that. Please assign it to me.

> Refactor TopicCommand to remove redundant if/else statements
> 
>
> Key: KAFKA-4053
> URL: https://issues.apache.org/jira/browse/KAFKA-4053
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 0.10.0.1
>Reporter: Shuai Zhang
>Priority: Minor
> Fix For: 0.10.0.2
>
>
> In TopicCommand, there are a lot of redundant if/else statements, such as
> ```val ifNotExists = if (opts.options.has(opts.ifNotExistsOpt)) true else 
> false```
> We can refactor it as the following statement:
> ```val ifNotExists = opts.options.has(opts.ifNotExistsOpt)```



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)