[jira] [Commented] (KAFKA-1733) Producer.send will block indeterminately when broker is unavailable.

2016-10-17 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15582565#comment-15582565
 ] 

Dru Panchal commented on KAFKA-1733:


As the following files show, the fix was made for the 0.9.0 release and was 
also back-ported to 0.8.2:
https://github.com/apache/kafka/blob/0.9.0/core/src/main/scala/kafka/network/BlockingChannel.scala
https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/network/BlockingChannel.scala

As the following file shows, the last release where this issue still exists is 
0.8.1:
https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/network/BlockingChannel.scala


> Producer.send will block indeterminately when broker is unavailable.
> 
>
> Key: KAFKA-1733
> URL: https://issues.apache.org/jira/browse/KAFKA-1733
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 0.8.1.1
>Reporter: Marc Chung
>Assignee: Marc Chung
> Fix For: 0.8.2.0, 0.9.0.0
>
> Attachments: kafka-1733-add-connectTimeoutMs.patch
>
>
> This is a follow up to the conversation here:
> https://mail-archives.apache.org/mod_mbox/kafka-dev/201409.mbox/%3ccaog_4qymoejhkbo0n31+a-ujx0z5unsisd5wbrmn-xtx7gi...@mail.gmail.com%3E
> During ClientUtils.fetchTopicMetadata, if the broker is unavailable, 
> socket.connect will block indeterminately. Any retry policy 
> (message.send.max.retries) further increases the time spent waiting for the 
> socket to connect.
> The root fix is to add a connection timeout value to the BlockingChannel's 
> socket configuration, like so:
> {noformat}
> -channel.socket.connect(new InetSocketAddress(host, port))
> +channel.socket.connect(new InetSocketAddress(host, port), connectTimeoutMs)
> {noformat}
> The simplest thing to do here would be to have a constant, default value that 
> would be applied to every socket configuration. 
> Is that acceptable? 
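
The one-line change above bounds how long the metadata fetch can wait on a dead 
broker: the two-argument java.net.Socket#connect overload takes a timeout, while 
the one-argument form can block indefinitely. A minimal standalone Java sketch 
of the idea (the class and method names here are illustrative, not the actual 
BlockingChannel code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class ConnectTimeoutSketch {
    // Connect with an upper bound on how long we will wait, instead of
    // blocking indefinitely like the no-timeout connect overload can.
    static Socket connectWithTimeout(String host, int port, int connectTimeoutMs)
            throws IOException {
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress(host, port), connectTimeoutMs);
        return socket;
    }

    public static void main(String[] args) throws IOException {
        // Listen on an ephemeral local port so the example is self-contained.
        try (ServerSocket server = new ServerSocket(0)) {
            try (Socket s = connectWithTimeout("127.0.0.1", server.getLocalPort(), 5000)) {
                System.out.println("connected=" + s.isConnected());
            }
        }
    }
}
```

If the host is unreachable, connect throws java.net.SocketTimeoutException once 
the timeout elapses, so retries fail fast instead of hanging.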



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1733) Producer.send will block indeterminately when broker is unavailable.

2016-10-17 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal updated KAFKA-1733:
---
Fix Version/s: 0.8.2.0






[jira] [Updated] (KAFKA-1733) Producer.send will block indeterminately when broker is unavailable.

2016-10-17 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal updated KAFKA-1733:
---
Affects Version/s: 0.8.1.1

> Producer.send will block indeterminately when broker is unavailable.
> 
>
> Key: KAFKA-1733
> URL: https://issues.apache.org/jira/browse/KAFKA-1733
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 0.8.1.1
>Reporter: Marc Chung
>Assignee: Marc Chung
> Fix For: 0.9.0.0
>
> Attachments: kafka-1733-add-connectTimeoutMs.patch
>
>
> This is a follow up to the conversation here:
> https://mail-archives.apache.org/mod_mbox/kafka-dev/201409.mbox/%3ccaog_4qymoejhkbo0n31+a-ujx0z5unsisd5wbrmn-xtx7gi...@mail.gmail.com%3E
> During ClientUtils.fetchTopicMetadata, if the broker is unavailable, 
> socket.connect will block indeterminately. Any retry policy 
> (message.send.max.retries) further increases the time spent waiting for the 
> socket to connect.
> The root fix is to add a connection timeout value to the BlockingChannel's 
> socket configuration, like so:
> {noformat}
> -channel.socket.connect(new InetSocketAddress(host, port))
> +channel.socket.connect(new InetSocketAddress(host, port), connectTimeoutMs)
> {noformat}
> The simplest thing to do here would be to have a constant, default value that 
> would be applied to every socket configuration. 
> Is that acceptable? 





[jira] [Resolved] (KAFKA-73) SyncProducer sends messages to invalid partitions without complaint

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-73?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal resolved KAFKA-73.
--
Resolution: Duplicate

> SyncProducer sends messages to invalid partitions without complaint
> ---
>
> Key: KAFKA-73
> URL: https://issues.apache.org/jira/browse/KAFKA-73
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.6
> Environment: Mac OSX 10.6.7
>Reporter: Jonathan Herman
>Assignee: Dru Panchal
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The SyncProducer class will send messages to invalid partitions without 
> throwing an exception or otherwise alerting the user.
> Reproduction:
> Run the kafka-simple-consumer-shell.sh script with an invalid partition 
> number. An exception will be thrown and displayed. Run the 
> kafka-producer-shell.sh with the same partition number. You will be able to 
> send messages without any errors.
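
A guard of the kind this issue asks for can be sketched as follows. This is a 
hypothetical Java illustration of validating the partition id before a send 
(the class and method names are invented, not the actual SyncProducer code):

```java
public class PartitionValidationSketch {
    // Reject a partition id outside the topic's known partition range
    // instead of silently accepting it.
    static int validatePartition(int partition, int numPartitions) {
        if (partition < 0 || partition >= numPartitions) {
            throw new IllegalArgumentException(
                "Invalid partition " + partition + "; topic has "
                + numPartitions + " partitions");
        }
        return partition;
    }

    public static void main(String[] args) {
        System.out.println(validatePartition(2, 4)); // valid partition: prints 2
        try {
            validatePartition(7, 4);                 // out of range: throws
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```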





[jira] [Commented] (KAFKA-73) SyncProducer sends messages to invalid partitions without complaint

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-73?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15440101#comment-15440101
 ] 

Dru Panchal commented on KAFKA-73:
--

Marking this JIRA resolved as a duplicate, since the bug was fixed by KAFKA-49.






[jira] [Assigned] (KAFKA-73) SyncProducer sends messages to invalid partitions without complaint

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-73?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal reassigned KAFKA-73:


Assignee: Dru Panchal






[jira] [Resolved] (KAFKA-156) Messages should not be dropped when brokers are unavailable

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal resolved KAFKA-156.
---
   Resolution: Duplicate
Fix Version/s: 0.10.1.0

> Messages should not be dropped when brokers are unavailable
> ---
>
> Key: KAFKA-156
> URL: https://issues.apache.org/jira/browse/KAFKA-156
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sharad Agarwal
>Assignee: Dru Panchal
> Fix For: 0.10.1.0
>
>
> When none of the brokers are available, the producer should spool the messages 
> to disk and keep retrying until the brokers come back.
> This will also enable broker upgrades/maintenance without message loss.
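
The spool-to-disk idea can be sketched with plain JDK file I/O. This is a 
hypothetical illustration of the approach only (all names invented, and the 
broker check is faked with a flag); it is not the mechanism KAFKA-789 actually 
implemented:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class SpoolingSenderSketch {
    private final Path spoolFile;

    SpoolingSenderSketch(Path spoolFile) { this.spoolFile = spoolFile; }

    // Pretend-send: 'brokerUp' stands in for real broker availability.
    void send(String message, boolean brokerUp) throws IOException {
        if (brokerUp) {
            // a real producer send would go here
        } else {
            // No broker available: append to a local spool instead of dropping.
            Files.write(spoolFile, List.of(message),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }

    // On recovery, drain the spool so each pending message can be retried.
    List<String> drainSpool() throws IOException {
        if (!Files.exists(spoolFile)) return List.of();
        List<String> pending = Files.readAllLines(spoolFile);
        Files.delete(spoolFile);
        return pending;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("spool").resolve("pending.log");
        SpoolingSenderSketch sender = new SpoolingSenderSketch(tmp);
        sender.send("m1", false);  // broker down: spooled to disk
        sender.send("m2", false);  // broker down: spooled to disk
        System.out.println(sender.drainSpool()); // [m1, m2]
    }
}
```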





[jira] [Commented] (KAFKA-156) Messages should not be dropped when brokers are unavailable

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15440079#comment-15440079
 ] 

Dru Panchal commented on KAFKA-156:
---

This JIRA is duplicated by KAFKA-789, which provided the requested solution in 
Kafka 0.10.1.0. Marking this JIRA resolved.

> Messages should not be dropped when brokers are unavailable
> ---
>
> Key: KAFKA-156
> URL: https://issues.apache.org/jira/browse/KAFKA-156
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sharad Agarwal
>
> When none of the brokers are available, the producer should spool the messages 
> to disk and keep retrying until the brokers come back.
> This will also enable broker upgrades/maintenance without message loss.





[jira] [Assigned] (KAFKA-156) Messages should not be dropped when brokers are unavailable

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal reassigned KAFKA-156:
-

Assignee: Dru Panchal

> Messages should not be dropped when brokers are unavailable
> ---
>
> Key: KAFKA-156
> URL: https://issues.apache.org/jira/browse/KAFKA-156
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sharad Agarwal
>Assignee: Dru Panchal
>
> When none of the brokers are available, the producer should spool the messages 
> to disk and keep retrying until the brokers come back.
> This will also enable broker upgrades/maintenance without message loss.





[jira] [Resolved] (KAFKA-2) a restful producer API

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal resolved KAFKA-2.
-
Resolution: Information Provided  (was: Unresolved)

> a restful producer API
> --
>
> Key: KAFKA-2
> URL: https://issues.apache.org/jira/browse/KAFKA-2
> Project: Kafka
>  Issue Type: Improvement
>Assignee: Dru Panchal
>Priority: Minor
>
> If the Kafka server supports a RESTful producer API, we can use Kafka from any 
> programming language without implementing the wire protocol in each language.





[jira] [Assigned] (KAFKA-2) a restful producer API

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal reassigned KAFKA-2:
---

Assignee: Dru Panchal






[jira] [Commented] (KAFKA-2) a restful producer API

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15440061#comment-15440061
 ] 

Dru Panchal commented on KAFKA-2:
-

[~anandriyer] I would like to resolve this JIRA because the Kafka REST Proxy 
covers your desired use case.

http://docs.confluent.io/2.0.0/kafka-rest/docs/index.html


> a restful producer API
> --
>
> Key: KAFKA-2
> URL: https://issues.apache.org/jira/browse/KAFKA-2
> Project: Kafka
>  Issue Type: Improvement
>Priority: Minor
>
> If the Kafka server supports a RESTful producer API, we can use Kafka from any 
> programming language without implementing the wire protocol in each language.





[jira] [Commented] (KAFKA-3094) Kafka process 100% CPU when no message in topic

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15440050#comment-15440050
 ] 

Dru Panchal commented on KAFKA-3094:


[~oazabir] Do you still experience this issue, or have you been able to resolve 
it? If so, kindly share the solution, as it may help others running into 
similar problems.

> Kafka process 100% CPU when no message in topic
> ---
>
> Key: KAFKA-3094
> URL: https://issues.apache.org/jira/browse/KAFKA-3094
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Omar AL Zabir
>
> When there's no message in a Kafka topic and it is not getting any traffic 
> for some time, all the Kafka nodes go to 100% CPU. 
> As soon as I post a message, the CPU comes back to normal. 





[jira] [Commented] (KAFKA-4020) Kafka consumer stop taking messages from kafka server

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15440016#comment-15440016
 ] 

Dru Panchal commented on KAFKA-4020:


[~shawnhe] What stopped working? Please provide more details.
Did the consumer process crash or shut down, or did it simply stop receiving 
messages?

If it's the latter, can you provide a thread dump of the consumer process for 
further analysis?
See this link for how to create a thread dump: 
https://visualvm.java.net/threads.html


> Kafka consumer stop taking messages from kafka server
> -
>
> Key: KAFKA-4020
> URL: https://issues.apache.org/jira/browse/KAFKA-4020
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Shawn He
>
> It feels similar to the issue in KAFKA-2978, though I haven't verified that 
> it is caused by the same events. How do I check on that? 
> I have a client that works fine using kafka 0.8.2.1 and can run for months 
> without any issue. However, after I upgraded to kafka 0.10.0.0, it is very 
> repeatable that the client works for the first 4 hours and then stops 
> working. The producer side has no issue, as the data still comes into the 
> kafka server. 
> I was using the Java library kafka.consumer.Consumer.createJavaConsumerConnector 
> and the kafka.consumer.KafkaStream class for access to the kafka server.
> Any help is appreciated.
> Thanks.





[jira] [Commented] (KAFKA-4026) consumer block

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15439982#comment-15439982
 ] 

Dru Panchal commented on KAFKA-4026:


[~imperio] 
So far I've tried setting up a 0.8.1.1 environment and used the high-level API 
to create a consumer. Using the console producer, I published a few messages, 
and as expected they were immediately picked up by my consumer, so I am unable 
to reproduce the behavior you described.

Please provide sample code that reproduces your problem along with any 
broker/client setting overrides you use.

> consumer block
> --
>
> Key: KAFKA-4026
> URL: https://issues.apache.org/jira/browse/KAFKA-4026
> Project: Kafka
>  Issue Type: Test
>  Components: consumer
>Affects Versions: 0.8.1.1
> Environment: ubuntu 14.04
>Reporter: wxmimperio
> Fix For: 0.8.1.2
>
>
> When I use the high level API to create a consumer, it is a blocking 
> consumer. How can I know how long it has been blocked? I put messages into a 
> buffer; when the buffer has not reached its full length, the consumer blocks 
> and the buffer cannot be handled. How can I deal with this problem, so that 
> the buffer can be handled even when it has not reached its full length?
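
A generic way to avoid a consumer thread waiting forever on a partially filled 
buffer is to drain into the batch with a timed poll and flush whatever has 
accumulated once a deadline passes. This JDK-only sketch (all names invented) 
stands in for the actual Kafka consumer stream:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimedBatchSketch {
    // Collect up to batchSize items, but wait at most maxWaitMs overall, so a
    // partially filled batch is still handed off instead of blocking forever.
    static List<String> nextBatch(BlockingQueue<String> queue, int batchSize, long maxWaitMs)
            throws InterruptedException {
        List<String> batch = new ArrayList<>();
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(maxWaitMs);
        while (batch.size() < batchSize) {
            long remaining = deadline - System.nanoTime();
            if (remaining <= 0) break;                      // deadline hit: flush partial batch
            String msg = queue.poll(remaining, TimeUnit.NANOSECONDS);
            if (msg == null) break;                         // timed out waiting for a message
            batch.add(msg);
        }
        return batch;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        queue.put("a");
        queue.put("b");
        // Batch size 5 is never reached, yet we still get [a, b] after ~100 ms.
        System.out.println(nextBatch(queue, 5, 100));
    }
}
```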





[jira] [Updated] (KAFKA-4065) Missing Property in ProducerConfig.java - KafkaProducer API 0.9.0.0

2016-08-26 Thread Dru Panchal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dru Panchal updated KAFKA-4065:
---
Summary: Missing Property in ProducerConfig.java - KafkaProducer API 
0.9.0.0  (was: Property missing in ProcuderConfig.java - KafkaProducer API 
0.9.0.0)

> Missing Property in ProducerConfig.java - KafkaProducer API 0.9.0.0
> ---
>
> Key: KAFKA-4065
> URL: https://issues.apache.org/jira/browse/KAFKA-4065
> Project: Kafka
>  Issue Type: Bug
>Reporter: manzar
>
> 1) The "compressed.topics" property is missing in ProducerConfig.java in the 
> KafkaProducer API 0.9.0.0. Because of that, we can't enable compression for 
> specific topics.
> 2) The "compression.type" property in ProducerConfig.java was expected to be 
> "compression.codec" according to the official documentation.





[jira] [Comment Edited] (KAFKA-4076) Kafka broker shuts down due to irrecoverable IO error

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15439798#comment-15439798
 ] 

Dru Panchal edited comment on KAFKA-4076 at 8/26/16 8:43 PM:
-

[~anyun] I can confirm [~omkreddy]'s analysis, having experienced this 
issue myself. 
Please modify your broker config in {{server.properties}} and specify a 
permanent location for the setting {{log.dirs}}.

Example: {{log.dirs=/var/opt/kafka-logs}}


was (Author: drupad.p):
[~anyun] I can confirm [~omkreddy]'s analysis on this having experienced this 
issue myself. 
Please modify your broker config in {{server.properties}} and specify a 
permanent location for Kafka the setting {{log.dirs}}.

Example: {{log.dirs=/var/opt/kafka-logs}}

> Kafka broker shuts down due to irrecoverable IO error
> -
>
> Key: KAFKA-4076
> URL: https://issues.apache.org/jira/browse/KAFKA-4076
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.9.0.0
>Reporter: Anyun 
>
> kafka.common.KafkaStorageException: I/O exception in append to log 
> '__consumer_offsets-48'
> at kafka.log.Log.append(Log.scala:318)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
> at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
> at 
> kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:227)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$doSyncGroup$3.apply(GroupCoordinator.scala:312)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$doSyncGroup$3.apply(GroupCoordinator.scala:312)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.coordinator.GroupCoordinator.doSyncGroup(GroupCoordinator.scala:312)
> at 
> kafka.coordinator.GroupCoordinator.handleSyncGroup(GroupCoordinator.scala:247)
> at kafka.server.KafkaApis.handleSyncGroupRequest(KafkaApis.scala:819)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:82)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:724)
> Caused by: java.io.FileNotFoundException: 
> /tmp/kafka-logs-new/__consumer_offsets-48/.index (No such 
> file or directory)
> at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.(RandomAccessFile.java:241)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)
> at kafka.log.Log.roll(Log.scala:627)
> at kafka.log.Log.maybeRoll(Log.scala:602)
> at kafka.log.Log.append(Log.scala:357)
> ... 24 more





[jira] [Commented] (KAFKA-4076) Kafka broker shuts down due to irrecoverable IO error

2016-08-26 Thread Dru Panchal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15439798#comment-15439798
 ] 

Dru Panchal commented on KAFKA-4076:


[~anyun] I can confirm [~omkreddy]'s analysis, having experienced this 
issue myself. 
Please modify your broker config in {{server.properties}} and specify a 
permanent location for the setting {{log.dirs}}.

Example: {{log.dirs=/var/opt/kafka-logs}}



