[jira] [Created] (KAFKA-13186) Proposal for commented code

2021-08-10 Thread Chao (Jira)
Chao created KAFKA-13186:


 Summary: Proposal for commented code
 Key: KAFKA-13186
 URL: https://issues.apache.org/jira/browse/KAFKA-13186
 Project: Kafka
  Issue Type: Wish
Reporter: Chao


Hello! I saw in your [coding 
guidelines|https://kafka.apache.org/coding-guide.html] that ??Don't check in 
commented out code.??

However, I still see commented-out code in some files, such as:

/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/ConnectorConfigTest.java

/streams/src/test/java/org/apache/kafka/streams/state/internals/TimeOrderedKeyValueBufferTest.java

/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractStickyAssignor.java

 

Would you like these commented-out blocks removed?

If so, I can help by opening a pull request.

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9873) Request handler threads hang when DNS fails

2020-04-15 Thread zhang chao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhang chao resolved KAFKA-9873.
---
Resolution: Duplicate

> Request handler threads hang when DNS fails
> -
>
> Key: KAFKA-9873
> URL: https://issues.apache.org/jira/browse/KAFKA-9873
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.4.1
>Reporter: zhang chao
>Priority: Major
>  Labels: DNS, acl, dns
> Attachments: kast.log
>
>
> As shown in the attached log, after security authentication is enabled, ACL authorization fails, causing all request handler threads to stop working.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9873) Request handler threads hang when DNS fails

2020-04-15 Thread zhang chao (Jira)
zhang chao created KAFKA-9873:
-

 Summary: Request handler threads hang when DNS fails
 Key: KAFKA-9873
 URL: https://issues.apache.org/jira/browse/KAFKA-9873
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.4.1
Reporter: zhang chao
 Attachments: kast.log

As shown in the attached log, after security authentication is enabled, ACL authorization fails, causing all request handler threads to stop working.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-5452) Aggressive log compaction ratio appears to have no negative effect on log-compacted topics

2017-08-25 Thread Jeff Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Chao resolved KAFKA-5452.
--
Resolution: Resolved

Following up after a long while. After talking offline with [~wushujames], the 
original thought was to choose a sensible default in relation to disk I/O. I 
think it's best to leave the default as is and avoid making assumptions about the 
underlying infrastructure. That way, operators are free to tune it to their 
expectations. Closing this.

> Aggressive log compaction ratio appears to have no negative effect on 
> log-compacted topics
> --
>
> Key: KAFKA-5452
> URL: https://issues.apache.org/jira/browse/KAFKA-5452
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, core, log
>Affects Versions: 0.10.2.0, 0.10.2.1
> Environment: Ubuntu Trusty (14.04.5), Oracle JDK 8
>Reporter: Jeff Chao
>  Labels: performance
> Attachments: 200mbs-dirty0-dirty-1-dirty05.png, 
> flame-graph-200mbs-dirty0.png, flame-graph-200mbs-dirty0.svg
>
>
> Some of our users are seeing unintuitive/unexpected behavior with 
> log-compacted topics where they receive multiple records for the same key 
> when consuming. This is a result of low throughput on log-compacted topics 
> such that conditions ({{min.cleanable.dirty.ratio = 0.5}}, default) aren't 
> met for compaction to kick in.
> This prompted us to test and tune {{min.cleanable.dirty.ratio}} in our 
> clusters. It appears that having more aggressive log compaction ratios don't 
> have negative effects on CPU and memory utilization. If this is truly the 
> case, we should consider changing the default from {{0.5}} to something more 
> aggressive.
> Setup:
> # 8 brokers
> # 5 zk nodes
> # 32 partitions on a topic
> # replication factor 3
> # log roll 3 hours
> # log segment bytes 1 GB
> # log retention 24 hours
> # all messages to a single key
> # all messages to a unique key
> # all messages to a bounded key range [0, 999]
> # {{min.cleanable.dirty.ratio}} per topic = {{0}}, {{0.5}}, and {{1}}
> # 200 MB/s sustained, produce and consume traffic
> Observations:
> We were able to verify log cleaner threads were performing work by checking 
> the logs and verifying the {{cleaner-offset-checkpoint}} file for all topics. 
> We also observed the log cleaner's {{time-since-last-run-ms}} metric was 
> normal, never going above the default of 15 seconds.
> Under-replicated partitions stayed steady, same for replication lag.
> Here's an example test run where we try out {{min.cleanable.dirty.ratio = 
> 0}}, {{min.cleanable.dirty.ratio = 1}}, and {{min.cleanable.dirty.ratio = 
> 0.5}}. Troughs in between the peaks represent zero traffic and reconfiguring 
> of topics.
> (200mbs-dirty-0-dirty1-dirty05.png attached)
> !200mbs-dirty0-dirty-1-dirty05.png|thumbnail!
> Memory utilization is fine, but more interestingly, CPU doesn't appear to 
> have much difference.
> To get more detail, here is a flame graph (raw svg attached) of the run for 
> {{min.cleanable.dirty.ratio = 0}}. The conservative and default ratio flame 
> graphs are equivalent.
> (flame-graph-200mbs-dirty0.png attached)
> !flame-graph-200mbs-dirty0.png|thumbnail!
> Notice that the majority of CPU is coming from:
> # SSL operations (on reads/writes)
> # KafkaApis::handleFetchRequest (ReplicaManager::fetchMessages)
> # KafkaApis::handleOffsetFetchRequest
> We also have examples from small scale test runs which show similar behavior 
> but with scaled down CPU usage.
> It seems counterintuitive that there's no apparent difference in CPU whether 
> it be aggressive or conservative compaction ratios, so we'd like to get some 
> thoughts from the community.
> We're looking for feedback on whether or not anyone else has experienced this 
> behavior before as well or, if CPU isn't affected, has anyone seen something 
> related instead.
> If this is true, then we'd be happy to discuss further and provide a patch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5463) Controller incorrectly logs rack information when new brokers are added

2017-06-16 Thread Jeff Chao (JIRA)
Jeff Chao created KAFKA-5463:


 Summary: Controller incorrectly logs rack information when new 
brokers are added
 Key: KAFKA-5463
 URL: https://issues.apache.org/jira/browse/KAFKA-5463
 Project: Kafka
  Issue Type: Bug
  Components: config, controller
Affects Versions: 0.10.2.1, 0.10.2.0, 0.11.0.0
 Environment: Ubuntu Trusty (14.04.5), Oracle JDK 8
Reporter: Jeff Chao
Priority: Minor


When a new broker is added, on an {{UpdateMetadata}} request, rack information 
won't be present in the state-change log even if it is configured.

Example:

{{pri=TRACE t=Controller-1-to-broker-0-send-thread at=logger Controller 1 epoch 
1 received response {error_code=0} for a request sent to broker : 
(id: 0 rack: null)}}

This happens because {{ControllerChannelManager}} always instantiates a 
{{Node}} using the same constructor whether or not rack-aware is configured. 
We're happy to contribute a patch since this causes some confusion when running 
with rack-aware replica placement.
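
For illustration, a minimal sketch of the two {{Node}} constructors involved (the broker id, host, and rack values below are made up, not taken from the report):

{code:java}
import org.apache.kafka.common.Node;

public class NodeRackExample {
    public static void main(String[] args) {
        // Constructor without a rack argument: rack() stays null, which is what
        // surfaces as "rack: null" in the state-change log above.
        Node withoutRack = new Node(0, "broker-0.example.com", 9092);

        // Constructor that carries the configured broker.rack value.
        Node withRack = new Node(0, "broker-0.example.com", 9092, "us-east-1a");

        System.out.println(withoutRack + " hasRack=" + withoutRack.hasRack());
        System.out.println(withRack + " hasRack=" + withRack.hasRack());
    }
}
{code}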



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5452) Aggressive log compaction ratio appears to have no negative effect on log-compacted topics

2017-06-14 Thread Jeff Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Chao updated KAFKA-5452:
-
Description: 
Some of our users are seeing unintuitive/unexpected behavior with log-compacted 
topics where they receive multiple records for the same key when consuming. 
This is a result of low throughput on log-compacted topics such that conditions 
({{min.cleanable.dirty.ratio = 0.5}}, default) aren't met for compaction to 
kick in.

This prompted us to test and tune {{min.cleanable.dirty.ratio}} in our 
clusters. It appears that more aggressive log compaction ratios don't have 
negative effects on CPU and memory utilization. If this is truly the case, 
we should consider changing the default from {{0.5}} to something more 
aggressive.

Setup:

# 8 brokers
# 5 zk nodes
# 32 partitions on a topic
# replication factor 3
# log roll 3 hours
# log segment bytes 1 GB
# log retention 24 hours
# all messages to a single key
# all messages to a unique key
# all messages to a bounded key range [0, 999]
# {{min.cleanable.dirty.ratio}} per topic = {{0}}, {{0.5}}, and {{1}}
# 200 MB/s sustained, produce and consume traffic

Observations:

We were able to verify log cleaner threads were performing work by checking the 
logs and verifying the {{cleaner-offset-checkpoint}} file for all topics. We 
also observed the log cleaner's {{time-since-last-run-ms}} metric was normal, 
never going above the default of 15 seconds.

Under-replicated partitions stayed steady, same for replication lag.

Here's an example test run where we try out {{min.cleanable.dirty.ratio = 0}}, 
{{min.cleanable.dirty.ratio = 1}}, and {{min.cleanable.dirty.ratio = 0.5}}. 
Troughs in between the peaks represent zero traffic and reconfiguring of topics.

(200mbs-dirty-0-dirty1-dirty05.png attached)
!200mbs-dirty0-dirty-1-dirty05.png|thumbnail!

Memory utilization is fine and, more interestingly, CPU usage doesn't appear to 
differ much across the ratios.

To get more detail, here is a flame graph (raw svg attached) of the run for 
{{min.cleanable.dirty.ratio = 0}}. The conservative and default ratio flame 
graphs are equivalent.

(flame-graph-200mbs-dirty0.png attached)
!flame-graph-200mbs-dirty0.png|thumbnail!

Notice that the majority of CPU is coming from:

# SSL operations (on reads/writes)
# KafkaApis::handleFetchRequest (ReplicaManager::fetchMessages)
# KafkaApis::handleOffsetFetchRequest

We also have examples from small scale test runs which show similar behavior 
but with scaled down CPU usage.

It seems counterintuitive that there's no apparent difference in CPU between 
aggressive and conservative compaction ratios, so we'd like to get some 
thoughts from the community.

We're looking for feedback on whether anyone else has experienced this 
behavior or, if CPU isn't affected, whether anyone has seen something 
related instead.

If this is true, then we'd be happy to discuss further and provide a patch.

  was:
Some of our users are seeing unintuitive/unexpected behavior with log-compacted 
topics where they receive multiple records for the same key when consuming. 
This is a result of low throughput on log-compacted topics such that conditions 
({{min.cleanable.dirty.ratio = 0.5}}, default) aren't met for compaction to 
kick in.

This prompted us to test and tune {{min.cleanable.dirty.ratio}} in our 
clusters. It appears that having more aggressive log compaction ratios don't 
have negative effects on CPU and memory utilization. If this is truly the case, 
we should consider changing the default from {{0.5}} to something more 
aggressive.

Setup:

# 1. 8 brokers
# 2. 5 zk nodes
# 3. 32 partitions on a topic
# 4. replication factor 3
# 5. log roll 3 hours
# 6. log segment bytes 1 GB
# 7. log retention 24 hours
# 8. all messages to a single key
# 9. all messages to a unique key
# 10. all messages to a bounded key range [0, 999]
# 11. {{min.cleanable.dirty.ratio}} per topic = {{0}}, {{0.5}}, and {{1}}
# 12. 200 MB/s sustained, produce and consume traffic

Observations:

We were able to verify log cleaner threads were performing work by checking the 
logs and verifying the {{cleaner-offset-checkpoint}} file for all topics. We 
also observed the log cleaner's {{time-since-last-run-ms}} metric was normal, 
never going above the default of 15 seconds.

Under-replicated partitions stayed steady, same for replication lag.

Here's an example test run where we try out {{min.cleanable.dirty.ratio = 0}}, 
{{min.cleanable.dirty.ratio = 1}}, and {{min.cleanable.dirty.ratio = 0.5}}. 
Troughs in between the peaks represent zero traffic and reconfiguring of topics.

!200mbs-dirty0-dirty-1-dirty05.png|thumbnail!

Memory utilization is fine, but more interestingly, CPU doesn't appear to have 
much difference.

To get more detail, here is a flame graph (raw svg attached) of the run for 

[jira] [Created] (KAFKA-5452) Aggressive log compaction ratio appears to have no negative effect on log-compacted topics

2017-06-14 Thread Jeff Chao (JIRA)
Jeff Chao created KAFKA-5452:


 Summary: Aggressive log compaction ratio appears to have no 
negative effect on log-compacted topics
 Key: KAFKA-5452
 URL: https://issues.apache.org/jira/browse/KAFKA-5452
 Project: Kafka
  Issue Type: Improvement
  Components: config, core, log
Affects Versions: 0.10.2.1, 0.10.2.0
 Environment: Ubuntu Trusty (14.04.5), Oracle JDK 8
Reporter: Jeff Chao
 Attachments: 200mbs-dirty0-dirty-1-dirty05.png, 
flame-graph-200mbs-dirty0.png, flame-graph-200mbs-dirty0.svg

Some of our users are seeing unintuitive/unexpected behavior with log-compacted 
topics where they receive multiple records for the same key when consuming. 
This is a result of low throughput on log-compacted topics such that conditions 
({{min.cleanable.dirty.ratio = 0.5}}, default) aren't met for compaction to 
kick in.

This prompted us to test and tune {{min.cleanable.dirty.ratio}} in our 
clusters. It appears that more aggressive log compaction ratios don't have 
negative effects on CPU and memory utilization. If this is truly the case, 
we should consider changing the default from {{0.5}} to something more 
aggressive.

Setup:

# 8 brokers
# 5 zk nodes
# 32 partitions on a topic
# replication factor 3
# log roll 3 hours
# log segment bytes 1 GB
# log retention 24 hours
# all messages to a single key
# all messages to a unique key
# all messages to a bounded key range [0, 999]
# {{min.cleanable.dirty.ratio}} per topic = {{0}}, {{0.5}}, and {{1}}
# 200 MB/s sustained, produce and consume traffic

Observations:

We were able to verify log cleaner threads were performing work by checking the 
logs and verifying the {{cleaner-offset-checkpoint}} file for all topics. We 
also observed the log cleaner's {{time-since-last-run-ms}} metric was normal, 
never going above the default of 15 seconds.

Under-replicated partitions stayed steady, same for replication lag.

Here's an example test run where we try out {{min.cleanable.dirty.ratio = 0}}, 
{{min.cleanable.dirty.ratio = 1}}, and {{min.cleanable.dirty.ratio = 0.5}}. 
Troughs in between the peaks represent zero traffic and reconfiguring of topics.

!200mbs-dirty0-dirty-1-dirty05.png|thumbnail!

Memory utilization is fine and, more interestingly, CPU usage doesn't appear to 
differ much across the ratios.

To get more detail, here is a flame graph (raw svg attached) of the run for 
{{min.cleanable.dirty.ratio = 0}}. The conservative and default ratio flame 
graphs are equivalent.

!flame-graph-200mbs-dirty0.png|thumbnail!

Notice that the majority of CPU is coming from:

# SSL operations (on reads/writes)
# KafkaApis::handleFetchRequest (ReplicaManager::fetchMessages)
# KafkaApis::handleOffsetFetchRequest

We also have examples from small scale test runs which show similar behavior 
but with scaled down CPU usage.

It seems counterintuitive that there's no apparent difference in CPU between 
aggressive and conservative compaction ratios, so we'd like to get some 
thoughts from the community.

We're looking for feedback on whether anyone else has experienced this 
behavior or, if CPU isn't affected, whether anyone has seen something 
related instead.

If this is true, then we'd be happy to discuss further and provide a patch.
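
For reference, a hedged sketch of how a per-topic {{min.cleanable.dirty.ratio}} override like the ones tested above could be applied programmatically; it assumes a much newer Java {{Admin}} client than the 0.10.x versions discussed here, and the topic name is made up:

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DirtyRatioOverride {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Topic-level override: a ratio of 0 makes the log cleaner eligible to
            // run as soon as any dirty bytes exist, the most aggressive setting tested.
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "compaction-test");
            AlterConfigOp setRatio = new AlterConfigOp(
                    new ConfigEntry("min.cleanable.dirty.ratio", "0"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(
                    Collections.singletonMap(topic, Collections.singletonList(setRatio)))
                 .all().get();
        }
    }
}
{code}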



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4844) kafka is holding open file descriptors

2017-03-15 Thread chao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15927273#comment-15927273
 ] 

chao commented on KAFKA-4844:
-

We want to delete index and log files under TOPIC_PARTITION_X/... and 
__consumer_offsets-/... that are older than 7 days.
How can we configure that? What exactly do log.cleaner.enable and 
offsets.retention.minutes mean?
For example:
offsets.retention.minutes=10080

> kafka is holding open file descriptors
> --
>
> Key: KAFKA-4844
> URL: https://issues.apache.org/jira/browse/KAFKA-4844
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: chao
>Priority: Critical
>
> We found strange issue on Kafka 0.9.0.1 , kafka is holding opne file 
> descriptors , and not allowing disk space to be reclaimed
> my question:
> 1. what does file (nfsX) mean ??? 
> 2. why kafka is holding file ?? 
> $ sudo lsof /nas/kafka_logs/kafka/Order-6/.nfs04550ffcbd61
> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
> java 97465 kafka mem REG 0,25 10485760 72683516 
> /nas/kafka_logs/kafka/Order-6/.nfs04550ffcbd61



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4844) kafka is holding open file descriptors

2017-03-06 Thread chao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898793#comment-15898793
 ] 

chao commented on KAFKA-4844:
-

In fact, it is a very critical issue, because Kafka holds very big files that we 
cannot delete unless we stop the Kafka process.

This issue happened on CentOS 5.8 and CentOS 6.6. The NFS version appears to be 3:

 /nas/kafka_logs nfs 
rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=XX.XX.XX.XX,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=XX.XX.XX.XX
 0 0

$ cat /etc/redhat-release
CentOS release 6.6 (Final)

> kafka is holding open file descriptors
> --
>
> Key: KAFKA-4844
> URL: https://issues.apache.org/jira/browse/KAFKA-4844
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: chao
>Priority: Critical
>
> We found strange issue on Kafka 0.9.0.1 , kafka is holding opne file 
> descriptors , and not allowing disk space to be reclaimed
> my question:
> 1. what does file (nfsX) mean ??? 
> 2. why kafka is holding file ?? 
> $ sudo lsof /nas/kafka_logs/kafka/Order-6/.nfs04550ffcbd61
> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
> java 97465 kafka mem REG 0,25 10485760 72683516 
> /nas/kafka_logs/kafka/Order-6/.nfs04550ffcbd61



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4844) kafka is holding open file descriptors

2017-03-05 Thread chao (JIRA)
chao created KAFKA-4844:
---

 Summary: kafka is holding open file descriptors
 Key: KAFKA-4844
 URL: https://issues.apache.org/jira/browse/KAFKA-4844
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.1
Reporter: chao
Priority: Critical


We found a strange issue on Kafka 0.9.0.1: Kafka is holding open file 
descriptors and not allowing disk space to be reclaimed.

My questions:
1. What does the file (nfsX) mean?
2. Why is Kafka holding the file open?

$ sudo lsof /nas/kafka_logs/kafka/Order-6/.nfs04550ffcbd61

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 97465 kafka mem REG 0,25 10485760 72683516 
/nas/kafka_logs/kafka/Order-6/.nfs04550ffcbd61



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4725) Kafka broker fails due to OOM when producer exceeds throttling quota for extended periods of time

2017-02-03 Thread Jeff Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851898#comment-15851898
 ] 

Jeff Chao commented on KAFKA-4725:
--

Ok, we'll base it off trunk and open up a PR. Thanks.

> Kafka broker fails due to OOM when producer exceeds throttling quota for 
> extended periods of time
> -
>
> Key: KAFKA-4725
> URL: https://issues.apache.org/jira/browse/KAFKA-4725
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 0.10.1.1
> Environment: Ubuntu Trusty (14.04.5), Oracle JDK 8
>Reporter: Jeff Chao
>Priority: Critical
>  Labels: reliability
> Fix For: 0.10.3.0, 0.10.2.1
>
> Attachments: oom-references.png
>
>
> Steps to Reproduce:
> 1. Create a non-compacted topic with 1 partition
> 2. Set a produce quota of 512 KB/s
> 3. Send messages at 20 MB/s
> 4. Observe heap memory growth as time progresses
> Investigation:
> While running performance tests with a user configured with a produce quota, 
> we found that the lead broker serving the requests would exhaust heap memory 
> if the producer sustained a inbound request throughput greater than the 
> produce quota. 
> Upon further investigation, we took a heap dump from that broker process and 
> discovered the ThrottledResponse object has a indirect reference to the 
> byte[] holding the messages associated with the ProduceRequest. 
> We're happy contributing a patch but in the meantime wanted to first raise 
> the issue and get feedback from the community.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4725) Kafka broker fails due to OOM when producer exceeds throttling quota for extended periods of time

2017-02-02 Thread Jeff Chao (JIRA)
Jeff Chao created KAFKA-4725:


 Summary: Kafka broker fails due to OOM when producer exceeds 
throttling quota for extended periods of time
 Key: KAFKA-4725
 URL: https://issues.apache.org/jira/browse/KAFKA-4725
 Project: Kafka
  Issue Type: Bug
  Components: core, producer 
Affects Versions: 0.10.1.1
 Environment: Ubuntu Trusty (14.04.5), Oracle JDK 8
Reporter: Jeff Chao
 Attachments: oom-references.png

Steps to Reproduce:

1. Create a non-compacted topic with 1 partition
2. Set a produce quota of 512 KB/s
3. Send messages at 20 MB/s
4. Observe heap memory growth as time progresses

Investigation:

While running performance tests with a user configured with a produce quota, we 
found that the lead broker serving the requests would exhaust heap memory if 
the producer sustained an inbound request throughput greater than the produce 
quota.

Upon further investigation, we took a heap dump from that broker process and 
discovered the ThrottledResponse object holds an indirect reference to the byte[] 
holding the messages associated with the ProduceRequest.

We're happy to contribute a patch, but in the meantime we wanted to first raise 
the issue and get feedback from the community.
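
Purely as an illustration of the reproduction steps above (the topic name, bootstrap address, and message size are made up), a minimal producer loop that sustains traffic well above a small produce quota might look like this, assuming the standard Java producer:

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class QuotaOverloadRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        byte[] payload = new byte[100 * 1024]; // 100 KB per message

        // With a small produce quota (e.g. 512 KB/s) configured on the broker for
        // this client, this loop sends far more than the quota allows, so throttled
        // responses (and the request payloads they reference) pile up on the broker.
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            while (true) {
                producer.send(new ProducerRecord<>("quota-test", payload));
            }
        }
    }
}
{code}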



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4078) VIP for Kafka doesn't work

2016-08-23 Thread chao (JIRA)
chao created KAFKA-4078:
---

 Summary: VIP for Kafka doesn't work
 Key: KAFKA-4078
 URL: https://issues.apache.org/jira/browse/KAFKA-4078
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.9.0.1
Reporter: chao
Priority: Blocker


We created a VIP in front of chao007kfk001.chao007.com:9092, 
chao007kfk002.chao007.com:9092, and chao007kfk003.chao007.com:9092.

But we found an issue with the Kafka client API: a metadata update returns the 
three brokers, so the client creates three direct connections to 001, 002, 
and 003.

When we change the VIP to chao008kfk001.chao008.com:9092, 
chao008kfk002.chao008.com:9092, and chao008kfk003.chao008.com:9092,

it still produces data to the 007 brokers.


The following is the log output:


sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [kfk.chao.com:9092]
client.id = 

2016-08-23 07:00:48,451:DEBUG kafka-producer-network-thread | producer-1 
(NetworkClient.java:623) - Initialize connection to node -1 for sending 
metadata request
2016-08-23 07:00:48,452:DEBUG kafka-producer-network-thread | producer-1 
(NetworkClient.java:487) - Initiating connection to node -1 at 
kfk.chao.com:9092.
2016-08-23 07:00:48,463:DEBUG kafka-producer-network-thread | producer-1 
(Metrics.java:201) - Added sensor with name node--1.bytes-sent


2016-08-23 07:00:48,489:DEBUG kafka-producer-network-thread | producer-1 
(NetworkClient.java:619) - Sending metadata request 
ClientRequest(expectResponse=true, callback=null, 
request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1},
 body={topics=[chao_vip]}), isInitiatedByNetworkClient, 
createdTimeMs=1471935648465, sendTimeMs=0) to node -1
2016-08-23 07:00:48,512:DEBUG kafka-producer-network-thread | producer-1 
(Metadata.java:172) - Updated cluster metadata version 2 to Cluster(nodes = 
[Node(1, chao007kfk002.chao007.com, 9092), Node(2, chao007kfk003.chao007.com, 
9092), Node(0, chao007kfk001.chao007.com, 9092)], partitions = [Partition(topic 
= chao_vip, partition = 0, leader = 0, replicas = [0,], isr = [0,], 
Partition(topic = chao_vip, partition = 3, leader = 0, replicas = [0,], isr = 
[0,], Partition(topic = chao_vip, partition = 2, leader = 2, replicas = [2,], 
isr = [2,], Partition(topic = chao_vip, partition = 1, leader = 1, replicas = 
[1,], isr = [1,], Partition(topic = chao_vip, partition = 4, leader = 1, 
replicas = [1,], isr = [1,]])
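
For context, a hedged sketch (standard Java producer; the VIP host name and topic are taken from the log above) of the client behavior in question: {{bootstrap.servers}} is only used for the initial metadata request, after which the client connects directly to the broker host names advertised in that metadata rather than to the VIP.

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class VipBootstrapExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The VIP address is only contacted for the bootstrap metadata request.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kfk.chao.com:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Subsequent produce requests go to the partition leaders returned in the
            // metadata (chao007kfk001/002/003 above), not through the VIP, which is
            // why repointing the VIP alone does not move traffic to the new cluster.
            producer.send(new ProducerRecord<>("chao_vip", "key", "value"));
        }
    }
}
{code}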






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3466) consumer was stuck, and offset can not be moved

2016-03-25 Thread chao (JIRA)
chao created KAFKA-3466:
---

 Summary: consumer was stuck, and offset can not be moved
 Key: KAFKA-3466
 URL: https://issues.apache.org/jira/browse/KAFKA-3466
 Project: Kafka
  Issue Type: Bug
  Components: kafka streams
Affects Versions: 0.9.0.1
Reporter: chao
Priority: Blocker



We found a strange issue after we upgraded to 0.9.0.1. Some consumers were stuck 
and could not continue to move their offsets.

If we manually commit an offset, it throws the exception shown below.


1. How does this happen?

2. If it happens, how can we consume messages, move the offset to the latest, and 
continue consuming with this group?
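
Not the original application code, just a hedged sketch against a newer Java consumer of one common way to deal with the {{CommitFailedException}} in the log below: let the next {{poll()}} rejoin the group instead of retrying the failed commit (topic name and settings are illustrative):

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RejoinOnCommitFailure {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "XX_integration");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                // ... process records ...
                try {
                    consumer.commitSync();
                } catch (CommitFailedException e) {
                    // The group rebalanced (e.g. UNKNOWN_MEMBER_ID); the next poll()
                    // rejoins the group and resumes from the last committed offsets
                    // rather than retrying the commit here.
                }
            }
        }
    }
}
{code}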


2016-03-25 14:37:57 DEBUG Metrics:201 - Added sensor with name 
node-2147483646.bytes-sent
2016-03-25 14:37:57 DEBUG Metrics:201 - Added sensor with name 
node-2147483646.bytes-received
2016-03-25 14:37:57 DEBUG Metrics:201 - Added sensor with name 
node-2147483646.latency
2016-03-25 14:37:57 DEBUG NetworkClient:492 - Completed connection to node 
2147483646
2016-03-25 14:37:57 ERROR ConsumerCoordinator:544 - Error UNKNOWN_MEMBER_ID 
occurred while committing offsets for group XX_integration
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
connections-closed:client-id-consumer-1
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
connections-created:client-id-consumer-1
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
bytes-sent-received:client-id-consumer-1
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
bytes-received:client-id-consumer-1
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
bytes-sent:client-id-consumer-1
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
select-time:client-id-consumer-1
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
io-time:client-id-consumer-1
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
node--3.bytes-sent
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
node--3.bytes-received
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name node--3.latency
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
node-0.bytes-sent
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
node-0.bytes-received
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name node-0.latency
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
node-2147483646.bytes-sent
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
node-2147483646.bytes-received
2016-03-25 14:38:02 DEBUG Metrics:220 - Removed sensor with name 
node-2147483646.latency
2016-03-25 14:38:02 DEBUG KafkaConsumer:1241 - The Kafka consumer has closed.
Exception in thread "main" 
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
completed due to group rebalance
 at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:546)
 at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:487)
 at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:681)
 at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:654)
 at 
org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
 at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
 at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
 at 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2412) Documentation bug: Add information for key.serializer and value.serializer to New Producer Config sections

2015-08-24 Thread Grayson Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grayson Chao updated KAFKA-2412:

Attachment: KAFKA-2412-r1.diff

 Documentation bug: Add information for key.serializer and value.serializer to 
 New Producer Config sections
 --

 Key: KAFKA-2412
 URL: https://issues.apache.org/jira/browse/KAFKA-2412
 Project: Kafka
  Issue Type: Bug
Reporter: Jeremy Fields
Assignee: Grayson Chao
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-2412-r1.diff, KAFKA-2412.diff


 As key.serializer and value.serializer are required options when using the 
 new producer, they should be mentioned in the documentation ( here and svn 
 http://kafka.apache.org/documentation.html#newproducerconfigs )
 Appropriate values for these options exist in javadoc and producer.java 
 examples; however, not everyone is reading those, as is the case for anyone 
 setting up a producer.config file for mirrormaker.
 A sensible default should be suggested, such as
 org.apache.kafka.common.serialization.StringSerializer
 Or at least a mention of the key.serializer and value.serializer options 
 along with a link to javadoc
 Thanks
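
As a purely illustrative sketch of the configuration the description asks to document (the broker address and topic name are made up), the two required serializer settings with the suggested StringSerializer:

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SerializerConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // The two required settings the documentation should call out explicitly.
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "key", "value"));
        }
    }
}
{code}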



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2412) Documentation bug: Add information for key.serializer and value.serializer to New Producer Config sections

2015-08-24 Thread Grayson Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14709975#comment-14709975
 ] 

Grayson Chao commented on KAFKA-2412:
-

Thanks [~wushujames]!

 Documentation bug: Add information for key.serializer and value.serializer to 
 New Producer Config sections
 --

 Key: KAFKA-2412
 URL: https://issues.apache.org/jira/browse/KAFKA-2412
 Project: Kafka
  Issue Type: Bug
Reporter: Jeremy Fields
Assignee: Grayson Chao
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-2412-r1.diff, KAFKA-2412.diff


 As key.serializer and value.serializer are required options when using the 
 new producer, they should be mentioned in the documentation ( here and svn 
 http://kafka.apache.org/documentation.html#newproducerconfigs )
 Appropriate values for these options exist in javadoc and producer.java 
 examples; however, not everyone is reading those, as is the case for anyone 
 setting up a producer.config file for mirrormaker.
 A sensible default should be suggested, such as
 org.apache.kafka.common.serialization.StringSerializer
 Or at least a mention of the key.serializer and value.serializer options 
 along with a link to javadoc
 Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2412) Documentation bug: Add information for key.serializer and value.serializer to New Producer Config sections

2015-08-21 Thread Grayson Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707159#comment-14707159
 ] 

Grayson Chao edited comment on KAFKA-2412 at 8/21/15 6:01 PM:
--

I based the documentation for max.in.flight.requests.per.connection on Jay's 
answer at http://grokbase.com/t/kafka/users/14cj8np158/in-flight-requests and 
the documentation for  key/value.serializer on my own reading of the code + 
Manikumar's answer at 
https://groups.google.com/forum/#!topic/kafka-clients/Psh1tmVbktY.


was (Author: gchao):
I based the documentation for max.in.flight.requests.per.connection on Jay's 
answer at http://grokbase.com/t/kafka/users/14cj8np158/in-flight-requests and 
the documentation for  key/value.serializer on my own reading of the code + 
this answer https://groups.google.com/forum/#!topic/kafka-clients/Psh1tmVbktY.

 Documentation bug: Add information for key.serializer and value.serializer to 
 New Producer Config sections
 --

 Key: KAFKA-2412
 URL: https://issues.apache.org/jira/browse/KAFKA-2412
 Project: Kafka
  Issue Type: Bug
Reporter: Jeremy Fields
Assignee: Grayson Chao
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-2412.diff


 As key.serializer and value.serializer are required options when using the 
 new producer, they should be mentioned in the documentation ( here and svn 
 http://kafka.apache.org/documentation.html#newproducerconfigs )
 Appropriate values for these options exist in javadoc and producer.java 
 examples; however, not everyone is reading those, as is the case for anyone 
 setting up a producer.config file for mirrormaker.
 A sensible default should be suggested, such as
 org.apache.kafka.common.serialization.StringSerializer
 Or at least a mention of the key.serializer and value.serializer options 
 along with a link to javadoc
 Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2412) Documentation bug: Add information for key.serializer and value.serializer to New Producer Config sections

2015-08-21 Thread Grayson Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707159#comment-14707159
 ] 

Grayson Chao edited comment on KAFKA-2412 at 8/21/15 6:00 PM:
--

I based the documentation for max.in.flight.requests.per.connection on Jay's 
answer at http://grokbase.com/t/kafka/users/14cj8np158/in-flight-requests and 
the documentation for  key/value.serializer on my own reading of the code + 
this answer https://groups.google.com/forum/#!topic/kafka-clients/Psh1tmVbktY.


was (Author: gchao):
I based the documentation for max.in.flight.requests.per.connection on Jay's 
answer at http://grokbase.com/t/kafka/users/14cj8np158/in-flight-requests and 
the documentation for {key,value}.serializer on my own reading of the code + 
this answer https://groups.google.com/forum/#!topic/kafka-clients/Psh1tmVbktY.

 Documentation bug: Add information for key.serializer and value.serializer to 
 New Producer Config sections
 --

 Key: KAFKA-2412
 URL: https://issues.apache.org/jira/browse/KAFKA-2412
 Project: Kafka
  Issue Type: Bug
Reporter: Jeremy Fields
Assignee: Grayson Chao
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-2412.diff


 As key.serializer and value.serializer are required options when using the 
 new producer, they should be mentioned in the documentation ( here and svn 
 http://kafka.apache.org/documentation.html#newproducerconfigs )
 Appropriate values for these options exist in javadoc and producer.java 
 examples; however, not everyone is reading those, as is the case for anyone 
 setting up a producer.config file for mirrormaker.
 A sensible default should be suggested, such as
 org.apache.kafka.common.serialization.StringSerializer
 Or at least a mention of the key.serializer and value.serializer options 
 along with a link to javadoc
 Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2412) Documentation bug: Add information for key.serializer and value.serializer to New Producer Config sections

2015-08-21 Thread Grayson Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grayson Chao updated KAFKA-2412:

Attachment: KAFKA-2412.diff

 Documentation bug: Add information for key.serializer and value.serializer to 
 New Producer Config sections
 --

 Key: KAFKA-2412
 URL: https://issues.apache.org/jira/browse/KAFKA-2412
 Project: Kafka
  Issue Type: Bug
Reporter: Jeremy Fields
Assignee: Grayson Chao
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-2412.diff


 As key.serializer and value.serializer are required options when using the 
 new producer, they should be mentioned in the documentation ( here and svn 
 http://kafka.apache.org/documentation.html#newproducerconfigs )
 Appropriate values for these options exist in javadoc and producer.java 
 examples; however, not everyone is reading those, as is the case for anyone 
 setting up a producer.config file for mirrormaker.
 A sensible default should be suggested, such as
 org.apache.kafka.common.serialization.StringSerializer
 Or at least a mention of the key.serializer and value.serializer options 
 along with a link to javadoc
 Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2412) Documentation bug: Add information for key.serializer and value.serializer to New Producer Config sections

2015-08-21 Thread Grayson Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grayson Chao updated KAFKA-2412:

Attachment: (was: KAFKA-2412.diff)

 Documentation bug: Add information for key.serializer and value.serializer to 
 New Producer Config sections
 --

 Key: KAFKA-2412
 URL: https://issues.apache.org/jira/browse/KAFKA-2412
 Project: Kafka
  Issue Type: Bug
Reporter: Jeremy Fields
Assignee: Grayson Chao
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-2412.diff


 As key.serializer and value.serializer are required options when using the 
 new producer, they should be mentioned in the documentation ( here and svn 
 http://kafka.apache.org/documentation.html#newproducerconfigs )
 Appropriate values for these options exist in javadoc and producer.java 
 examples; however, not everyone is reading those, as is the case for anyone 
 setting up a producer.config file for mirrormaker.
 A sensible default should be suggested, such as
 org.apache.kafka.common.serialization.StringSerializer
 Or at least a mention of the key.serializer and value.serializer options 
 along with a link to javadoc
 Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2412) Documentation bug: Add information for key.serializer and value.serializer to New Producer Config sections

2015-08-21 Thread Grayson Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grayson Chao updated KAFKA-2412:

Attachment: (was: KAFKA-2412.diff)

 Documentation bug: Add information for key.serializer and value.serializer to 
 New Producer Config sections
 --

 Key: KAFKA-2412
 URL: https://issues.apache.org/jira/browse/KAFKA-2412
 Project: Kafka
  Issue Type: Bug
Reporter: Jeremy Fields
Assignee: Grayson Chao
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-2412.diff


 As key.serializer and value.serializer are required options when using the 
 new producer, they should be mentioned in the documentation ( here and svn 
 http://kafka.apache.org/documentation.html#newproducerconfigs )
 Appropriate values for these options exist in javadoc and producer.java 
 examples; however, not everyone is reading those, as is the case for anyone 
 setting up a producer.config file for mirrormaker.
 A sensible default should be suggested, such as
 org.apache.kafka.common.serialization.StringSerializer
 Or at least a mention of the key.serializer and value.serializer options 
 along with a link to javadoc
 Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2412) Documentation bug: Add information for key.serializer and value.serializer to New Producer Config sections

2015-08-21 Thread Grayson Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grayson Chao updated KAFKA-2412:

Assignee: Grayson Chao
  Status: Patch Available  (was: Open)

 Documentation bug: Add information for key.serializer and value.serializer to 
 New Producer Config sections
 --

 Key: KAFKA-2412
 URL: https://issues.apache.org/jira/browse/KAFKA-2412
 Project: Kafka
  Issue Type: Bug
Reporter: Jeremy Fields
Assignee: Grayson Chao
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-2412.diff


 As key.serializer and value.serializer are required options when using the 
 new producer, they should be mentioned in the documentation ( here and svn 
 http://kafka.apache.org/documentation.html#newproducerconfigs )
 Appropriate values for these options exist in javadoc and producer.java 
 examples; however, not everyone is reading those, as is the case for anyone 
 setting up a producer.config file for mirrormaker.
 A sensible default should be suggested, such as
 org.apache.kafka.common.serialization.StringSerializer
 Or at least a mention of the key.serializer and value.serializer options 
 along with a link to javadoc
 Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2412) Documentation bug: Add information for key.serializer and value.serializer to New Producer Config sections

2015-08-21 Thread Grayson Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707174#comment-14707174
 ] 

Grayson Chao commented on KAFKA-2412:
-

Also, this info 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool 
doesn't seem to be written to work with the documentation/site which are stored 
in SVN and not Git. I can work on submitting an RB for this patch if that's 
preferred for documentation updates.

 Documentation bug: Add information for key.serializer and value.serializer to 
 New Producer Config sections
 --

 Key: KAFKA-2412
 URL: https://issues.apache.org/jira/browse/KAFKA-2412
 Project: Kafka
  Issue Type: Bug
Reporter: Jeremy Fields
Assignee: Grayson Chao
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-2412.diff


 As key.serializer and value.serializer are required options when using the 
 new producer, they should be mentioned in the documentation ( here and svn 
 http://kafka.apache.org/documentation.html#newproducerconfigs )
 Appropriate values for these options exist in javadoc and producer.java 
 examples; however, not everyone is reading those, as is the case for anyone 
 setting up a producer.config file for mirrormaker.
 A sensible default should be suggested, such as
 org.apache.kafka.common.serialization.StringSerializer
 Or at least a mention of the key.serializer and value.serializer options 
 along with a link to javadoc
 Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2412) Documentation bug: Add information for key.serializer and value.serializer to New Producer Config sections

2015-08-21 Thread Grayson Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grayson Chao updated KAFKA-2412:

Attachment: KAFKA-2412.diff

I based the documentation for max.in.flight.requests.per.connection on Jay's 
answer at http://grokbase.com/t/kafka/users/14cj8np158/in-flight-requests and 
the documentation for {key,value}.serializer on my own reading of the code + 
this answer https://groups.google.com/forum/#!topic/kafka-clients/Psh1tmVbktY.

 Documentation bug: Add information for key.serializer and value.serializer to 
 New Producer Config sections
 --

 Key: KAFKA-2412
 URL: https://issues.apache.org/jira/browse/KAFKA-2412
 Project: Kafka
  Issue Type: Bug
Reporter: Jeremy Fields
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-2412.diff


 As key.serializer and value.serializer are required options when using the 
 new producer, they should be mentioned in the documentation ( here and svn 
 http://kafka.apache.org/documentation.html#newproducerconfigs )
 Appropriate values for these options exist in javadoc and producer.java 
 examples; however, not everyone is reading those, as is the case for anyone 
 setting up a producer.config file for mirrormaker.
 A sensible default should be suggested, such as
 org.apache.kafka.common.serialization.StringSerializer
 Or at least a mention of the key.serializer and value.serializer options 
 along with a link to javadoc
 Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1983) TestEndToEndLatency can be unreliable after hard kill

2015-03-20 Thread Grayson Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371981#comment-14371981
 ] 

Grayson Chao commented on KAFKA-1983:
-

I'm having some trouble reproducing this issue. Hard killing the process during 
the test does not seem to cause a significant change in any of the latency 
readings. Is there a more detailed set of repro steps somewhere I could use to 
test my fix?

 TestEndToEndLatency can be unreliable after hard kill
 -

 Key: KAFKA-1983
 URL: https://issues.apache.org/jira/browse/KAFKA-1983
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Grayson Chao
  Labels: newbie

 If you hard kill TestEndToEndLatency, the committed offset remains the last 
 checkpointed one. However, more messages are now appended after the last 
 checkpointed offset. When restarting TestEndToEndLatency, the consumer 
 resumes from the last checkpointed offset and will report really low latency 
 since it doesn't need to wait for a new message to be produced to read the 
 next message.
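
One possible mitigation (not necessarily the eventual fix, and sketched against a newer Java consumer; the topic and group names are made up) is to seek to the end of the partition at startup, so a backlog left behind by a hard kill never reaches the latency measurement:

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class LatencyConsumerStartup {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "end-to-end-latency");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("latency-test", 0);
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));
            // Jump past any messages appended after the last checkpointed offset by a
            // previous hard-killed run, so each latency sample has to wait for a
            // freshly produced message.
            consumer.seekToEnd(Collections.singletonList(tp));
            long startPosition = consumer.position(tp); // forces the seek to resolve
            System.out.println("Starting latency run at offset " + startPosition);
        }
    }
}
{code}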



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1983) TestEndToEndLatency can be unreliable after hard kill

2015-03-09 Thread Grayson Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14353237#comment-14353237
 ] 

Grayson Chao commented on KAFKA-1983:
-

Mind if I work on this? I'm brand new to the codebase and this seems as good a 
place to start as any.

 TestEndToEndLatency can be unreliable after hard kill
 -

 Key: KAFKA-1983
 URL: https://issues.apache.org/jira/browse/KAFKA-1983
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
  Labels: newbie

 If you hard kill TestEndToEndLatency, the committed offset remains the last 
 checkpointed one. However, more messages are now appended after the last 
 checkpointed offset. When restarting TestEndToEndLatency, the consumer 
 resumes from the last checkpointed offset and will report really low latency 
 since it doesn't need to wait for a new message to be produced to read the 
 next message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1039) kafka return acks but not logging while sending messages with compression

2013-09-04 Thread Xiang chao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13758717#comment-13758717
 ] 

Xiang chao commented on KAFKA-1039:
---

It seems they are the same problem.

 kafka return acks but not logging while sending messages with compression
 -

 Key: KAFKA-1039
 URL: https://issues.apache.org/jira/browse/KAFKA-1039
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8
 Environment: ubuntu 64bit
Reporter: Xiang chao
Assignee: Jay Kreps

 When sending messages with compression, the broker returns acks but doesn't write 
 the messages to disk, so I can't get the messages using consumers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1039) kafka return acks but not logging while sending messages with compression

2013-09-03 Thread Xiang chao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757449#comment-13757449
 ] 

Xiang chao commented on KAFKA-1039:
---

OK, you can see the issue details here:
https://github.com/Shopify/sarama/issues/32#issuecomment-23730281

 kafka return acks but not logging while sending messages with compression
 -

 Key: KAFKA-1039
 URL: https://issues.apache.org/jira/browse/KAFKA-1039
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8
 Environment: ubuntu 64bit
Reporter: Xiang chao
Assignee: Jay Kreps

 When sending messages with compression, the broker returns acks but doesn't write 
 the messages to disk, so I can't get the messages using consumers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira