[jira] [Updated] (KAFKA-5583) Provide an "OS independent" file rename and delete mechanism

2017-07-11 Thread M. Manna (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

M. Manna updated KAFKA-5583:

Description: 
This is related to 
[KAFKA-1194|https://issues.apache.org/jira/browse/KAFKA-1194] and specific to 
Windows. I thought it would be better to address this as a KIP item since it's 
OS platform specific. Also, any moderators please feel free to move it to Cwiki 
as KIP proposal if necessary.

Some of the unit tests fail on the Windows platform after a Gradle build for 
this reason. Could you please consider whether a platform-independent way of 
handling file renaming and deletion is feasible? 
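The request above can be sketched with Java NIO, which already abstracts most
platform differences. Below is a minimal illustration (not Kafka's actual
code) of an atomic rename with a portability fallback and a race-free delete;
class and method names are hypothetical:

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class PortableFileOps {

    // Prefer an atomic rename; fall back to a plain replacing move where
    // the filesystem does not support ATOMIC_MOVE. On Windows the target
    // may be briefly locked by another open handle, which is the failure
    // mode the issue describes.
    public static void rename(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    // deleteIfExists avoids NoSuchFileException races on all platforms.
    public static void delete(Path file) throws IOException {
        Files.deleteIfExists(file);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("portable-ops");
        Path a = Files.writeString(dir.resolve("a.txt"), "hello");
        Path b = dir.resolve("b.txt");
        rename(a, b);
        System.out.println(Files.exists(b) && !Files.exists(a)); // true
        delete(b);
        System.out.println(Files.exists(b)); // false
    }
}
```

This does not by itself solve the Windows case where a mapped or open file
cannot be renamed at all; a retry loop or deferred deletion would still be
needed on top of it.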




  was:
This is related to 
[KAFKA-1194|https://issues.apache.org/jira/browse/KAFKA-1194] and specific to 
Windows. I thought it would be better to address this as a KIP item since it's 
OS platform specific.

Quite a lot of unit tests run on Windows platform fails after gradle build 
because of the same. Could I please request you to put some thought whether 
considering a platform-independent way of handling file renaming and deletion? 





> Provide an "OS independent" file rename and delete mechanism
> ------------------------------------------------------------
>
> Key: KAFKA-5583
> URL: https://issues.apache.org/jira/browse/KAFKA-5583
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, core
>Affects Versions: 0.10.2.1
> Environment: Windows
>Reporter: M. Manna
>
> This is related to 
> [KAFKA-1194|https://issues.apache.org/jira/browse/KAFKA-1194] and specific to 
> Windows. I thought it would be better to address this as a KIP item since 
> it's OS platform specific. Also, any moderators please feel free to move it 
> to Cwiki as KIP proposal if necessary.
> Some of the unit tests fail on the Windows platform after a Gradle build 
> for this reason. Could you please consider whether a platform-independent 
> way of handling file renaming and deletion is feasible? 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5583) Provide an "OS independent" file rename and delete mechanism

2017-07-11 Thread M. Manna (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

M. Manna updated KAFKA-5583:

Attachment: att2.jpg
att3.jpg

> Provide an "OS independent" file rename and delete mechanism
> ------------------------------------------------------------
>
> Key: KAFKA-5583
> URL: https://issues.apache.org/jira/browse/KAFKA-5583
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, core
>Affects Versions: 0.10.2.1
> Environment: Windows
>Reporter: M. Manna
> Attachments: att1.jpg, att2.jpg, att3.jpg
>
>
> This is related to 
> [KAFKA-1194|https://issues.apache.org/jira/browse/KAFKA-1194] and specific to 
> Windows. I thought it would be better to address this as a KIP item since 
> it's OS platform specific. Also, any moderators please feel free to move it 
> to Cwiki as KIP proposal if necessary.
> Some of the unit tests fail on the Windows platform after a Gradle build 
> for this reason. Could you please consider whether a platform-independent 
> way of handling file renaming and deletion is feasible? 





[jira] [Created] (KAFKA-5583) Provide a "OS independent" file rename and delete mechanism

2017-07-11 Thread M. Manna (JIRA)
M. Manna created KAFKA-5583:
---

 Summary: Provide a "OS independent" file rename and delete 
mechanism
 Key: KAFKA-5583
 URL: https://issues.apache.org/jira/browse/KAFKA-5583
 Project: Kafka
  Issue Type: Improvement
  Components: clients, core
Affects Versions: 0.10.2.1
 Environment: Windows
Reporter: M. Manna


This is related to 
[KAFKA-1194|https://issues.apache.org/jira/browse/KAFKA-1194] and specific to 
Windows. I thought it would be better to address this as a KIP item since it's 
OS platform specific.

Quite a lot of unit tests run on Windows platform fails after gradle build 
because of the same. Could I please request you to put some thought whether 
considering a platform-independent way of handling file renaming and deletion? 








[jira] [Updated] (KAFKA-5583) Provide an "OS independent" file rename and delete mechanism

2017-07-11 Thread M. Manna (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

M. Manna updated KAFKA-5583:

Attachment: att1.jpg

> Provide an "OS independent" file rename and delete mechanism
> ------------------------------------------------------------
>
> Key: KAFKA-5583
> URL: https://issues.apache.org/jira/browse/KAFKA-5583
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, core
>Affects Versions: 0.10.2.1
> Environment: Windows
>Reporter: M. Manna
> Attachments: att1.jpg
>
>
> This is related to 
> [KAFKA-1194|https://issues.apache.org/jira/browse/KAFKA-1194] and specific to 
> Windows. I thought it would be better to address this as a KIP item since 
> it's OS platform specific. Also, any moderators please feel free to move it 
> to Cwiki as KIP proposal if necessary.
> Some of the unit tests fail on the Windows platform after a Gradle build 
> for this reason. Could you please consider whether a platform-independent 
> way of handling file renaming and deletion is feasible? 





[jira] [Updated] (KAFKA-5583) Provide an "OS independent" file rename and delete mechanism

2017-07-11 Thread M. Manna (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

M. Manna updated KAFKA-5583:

Summary: Provide an "OS independent" file rename and delete mechanism  
(was: Provide an "OS independent" file rename and delete mechanismn)

> Provide an "OS independent" file rename and delete mechanism
> ------------------------------------------------------------
>
> Key: KAFKA-5583
> URL: https://issues.apache.org/jira/browse/KAFKA-5583
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, core
>Affects Versions: 0.10.2.1
> Environment: Windows
>Reporter: M. Manna
>
> This is related to 
> [KAFKA-1194|https://issues.apache.org/jira/browse/KAFKA-1194] and specific to 
> Windows. I thought it would be better to address this as a KIP item since 
> it's OS platform specific.
> Quite a lot of unit tests fail on the Windows platform after a Gradle 
> build for this reason. Could you please consider whether a 
> platform-independent way of handling file renaming and deletion is 
> feasible? 





[jira] [Updated] (KAFKA-5583) Provide an "OS independent" file rename and delete mechanismn

2017-07-11 Thread M. Manna (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

M. Manna updated KAFKA-5583:

Summary: Provide an "OS independent" file rename and delete mechanismn  
(was: Provide a "OS independent" file rename and delete mechanism)

> Provide an "OS independent" file rename and delete mechanismn
> ------------------------------------------------------------
>
> Key: KAFKA-5583
> URL: https://issues.apache.org/jira/browse/KAFKA-5583
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, core
>Affects Versions: 0.10.2.1
> Environment: Windows
>Reporter: M. Manna
>
> This is related to 
> [KAFKA-1194|https://issues.apache.org/jira/browse/KAFKA-1194] and specific to 
> Windows. I thought it would be better to address this as a KIP item since 
> it's OS platform specific.
> Quite a lot of unit tests fail on the Windows platform after a Gradle 
> build for this reason. Could you please consider whether a 
> platform-independent way of handling file renaming and deletion is 
> feasible? 





[jira] [Created] (KAFKA-5582) Log compaction with preallocation enabled does not trim segments

2017-07-11 Thread Jason Aliyetti (JIRA)
Jason Aliyetti created KAFKA-5582:
-

 Summary: Log compaction with preallocation enabled does not trim 
segments
 Key: KAFKA-5582
 URL: https://issues.apache.org/jira/browse/KAFKA-5582
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.1.1
 Environment: Linux, Windows
Reporter: Jason Aliyetti


Unexpected behavior occurs when a topic is configured to preallocate files and 
has a retention policy of compact.

When log compaction runs, the cleaner attempts to gather groups of segments 
to consolidate based on the max segment size. When preallocation is enabled, 
all segments are that size, and thus each individual segment is considered 
for compaction.

When compaction does occur, the resulting cleaned file is sized based on that 
same configuration. This means you can have very large files on disk that 
contain little or no data, which partly defeats the point of compacting. 

The log cleaner should trim these segments to the size of the data they 
actually contain. That would free up disk space and allow the segments to be 
further compacted on subsequent runs.
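The proposed trimming step amounts to truncating a preallocated segment file
to the bytes it actually contains. A minimal sketch (not Kafka's cleaner;
`validBytes` would come from the cleaner's knowledge of the last valid offset
and is simply a parameter here):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SegmentTrimmer {

    // Shrink a preallocated file down to the bytes actually written.
    // FileChannel.truncate() only shrinks; it never grows the file.
    public static void trim(Path segment, long validBytes) throws IOException {
        try (FileChannel ch = FileChannel.open(segment, StandardOpenOption.WRITE)) {
            if (ch.size() > validBytes) {
                ch.truncate(validBytes);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("segment", ".log");
        // Simulate preallocation: a 1 MiB file holding only 100 bytes of data.
        try (FileChannel ch = FileChannel.open(f, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.allocate(100), 0);
            ch.write(ByteBuffer.allocate(1), 1024 * 1024 - 1);
        }
        trim(f, 100);
        System.out.println(Files.size(f)); // prints 100
        Files.delete(f);
    }
}
```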





[jira] [Updated] (KAFKA-5167) streams task gets stuck after re-balance due to LockException

2017-07-11 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5167:
---
Fix Version/s: 0.10.2.2

> streams task gets stuck after re-balance due to LockException
> -
>
> Key: KAFKA-5167
> URL: https://issues.apache.org/jira/browse/KAFKA-5167
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.11.0.0
>Reporter: Narendra Kumar
>Assignee: Matthias J. Sax
> Fix For: 0.10.2.2, 0.11.0.1, 0.11.1.0
>
> Attachments: BugTest.java, DebugTransformer.java, logs.txt
>
>
> During rebalance, a processor node's close() method gets called twice: once 
> from StreamThread.suspendTasksAndState() and once from 
> StreamThread.closeNonAssignedSuspendedTasks(). I have an instance field 
> which I close in the processor's close() method. This instance's close 
> method throws an exception if close is called more than once. Because of 
> this exception, Kafka Streams does not attempt to close the state manager, 
> i.e. task.closeStateManager(true) is never called. When a task moves from 
> one thread to another within the same machine, the task blocks trying to 
> get a lock on the state directory, which is still held by the unclosed 
> state manager, and keeps throwing the warning message below:
> 2017-04-30 12:34:17 WARN  StreamThread:1214 - Could not create task 0_1. Will 
> retry.
> org.apache.kafka.streams.errors.LockException: task [0_1] Failed to lock the 
> state directory for task 0_1
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:100)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:73)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:108)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:864)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1237)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:967)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$600(StreamThread.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:234)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:259)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:352)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:290)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:592)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:361)
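One way to avoid the double-close failure described above is to make the
user's close() idempotent, so the second invocation during rebalance is a
no-op instead of throwing. A minimal sketch (the class and the counter are
illustrative, not Kafka Streams API):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class IdempotentProcessor {
    private final AtomicBoolean closed = new AtomicBoolean(false);
    private int closeCalls = 0; // for illustration only

    // close() may be invoked from both suspendTasksAndState() and
    // closeNonAssignedSuspendedTasks(); compareAndSet guarantees the
    // underlying resource is released exactly once.
    public void close() {
        if (closed.compareAndSet(false, true)) {
            closeCalls++;
            // release the underlying resource here
        }
    }

    public int timesActuallyClosed() {
        return closeCalls;
    }

    public static void main(String[] args) {
        IdempotentProcessor p = new IdempotentProcessor();
        p.close();
        p.close(); // second call is a no-op instead of throwing
        System.out.println(p.timesActuallyClosed()); // prints 1
    }
}
```

With such a guard the exception is never thrown, so task.closeStateManager(true)
runs and the state-directory lock is released before the task is reassigned.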





[jira] [Commented] (KAFKA-5561) Rewrite TopicCommand using the new Admin client

2017-07-11 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082395#comment-16082395
 ] 

Paolo Patierno commented on KAFKA-5561:
---

[~ijuma] referring to the process to move from Scala to Java tools 
(https://issues.apache.org/jira/browse/KAFKA-5536) I made a revert on using 
argparse4j instead of joptsimple and updated the related PR. Can you take a 
look at it please ? Does it seem to be reasonable for you ? Are we on the same 
page ?

> Rewrite TopicCommand using the new Admin client
> ---
>
> Key: KAFKA-5561
> URL: https://issues.apache.org/jira/browse/KAFKA-5561
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>
> Hi, 
> as suggested in the https://issues.apache.org/jira/browse/KAFKA-3331, it 
> could be great to have the TopicCommand using the new Admin client instead of 
> the way it works today.
> As pushed by [~gwenshap] in the above JIRA, I'm going to work on it.
> Thanks,
> Paolo





[jira] [Commented] (KAFKA-5545) Kafka Stream not able to successfully restart over new broker ip

2017-07-11 Thread Yogesh BG (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081860#comment-16081860
 ] 

Yogesh BG commented on KAFKA-5545:
--

It's not the timeout in close; that time is always fixed at one minute.

I meant that if the IP changes, the close and restart happen within the 
session timeout.




> Kafka Stream not able to successfully restart over new broker ip
> ------------------------------------------------------------
>
> Key: KAFKA-5545
> URL: https://issues.apache.org/jira/browse/KAFKA-5545
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Yogesh BG
>Priority: Critical
> Attachments: kafkastreams.log
>
>
> Hi
> I have one Kafka broker and one Kafka Streams application.
> Initially the Kafka stream connects and starts processing data. Then I 
> restart the broker. When the broker restarts, a new IP is assigned.
> In the Kafka Streams app I have a 5-minute interval thread which checks 
> whether the broker IP has changed; if it has, we clean up the stream, 
> rebuild the topology (also tried reusing the topology) and start the 
> stream again. I end up with the following exceptions.
> 11:04:08.032 [StreamThread-38] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-38] Creating active task 0_5 with assigned 
> partitions [PR-5]
> 11:04:08.033 [StreamThread-41] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-41] Creating active task 0_1 with assigned 
> partitions [PR-1]
> 11:04:08.036 [StreamThread-34] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-34] Creating active task 0_7 with assigned 
> partitions [PR-7]
> 11:04:08.036 [StreamThread-37] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-37] Creating active task 0_3 with assigned 
> partitions [PR-3]
> 11:04:08.036 [StreamThread-45] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-45] Creating active task 0_0 with assigned 
> partitions [PR-0]
> 11:04:08.037 [StreamThread-36] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-36] Creating active task 0_4 with assigned 
> partitions [PR-4]
> 11:04:08.037 [StreamThread-43] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-43] Creating active task 0_6 with assigned 
> partitions [PR-6]
> 11:04:08.038 [StreamThread-48] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-48] Creating active task 0_2 with assigned 
> partitions [PR-2]
> 11:04:09.034 [StreamThread-38] WARN  o.a.k.s.p.internals.StreamThread - Could 
> not create task 0_5. Will retry.
> org.apache.kafka.streams.errors.LockException: task [0_5] Failed to lock the 
> state directory for task 0_5
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:100)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:108)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:864)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1237)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:967)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$600(StreamThread.java:69)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:234)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:259)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:352)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
>  

[jira] [Commented] (KAFKA-4931) stop script fails due 4096 ps output limit

2017-07-11 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081855#comment-16081855
 ] 

Tom Bentley commented on KAFKA-4931:


The total size of your {{ps ax}} output will be for all processes that it 
lists. The issue is about the maximum length it can output for a single 
process. To show the command line of a particular process, Linux {{ps}} will 
read the {{/proc/$pid/cmdline}} virtual file. Apparently, 
{{/proc/$pid/cmdline}} is limited to a single page of memory 
(https://stackoverflow.com/questions/199130/how-do-i-increase-the-proc-pid-cmdline-4096-byte-limit).
 That page size depends on what options your kernel was compiled with. 
Clearly there is _a_ limit, as this bug has been reported three times. 

Of course, this is all for linux. I don't know how the limits might be 
different on other unixes. 

For completeness: Extra options to {{ps}} (e.g. {{ps axww}}) can be used to get 
longer output, but I believe that {{ps}} itself only shortens the output when 
it's outputting to a tty, which is not the case in the stop script, so such 
options won't help us.



> stop script fails due 4096 ps output limit
> --
>
> Key: KAFKA-4931
> URL: https://issues.apache.org/jira/browse/KAFKA-4931
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Amit Jain
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> When run, the script bin/zookeeper-server-stop.sh fails to stop the 
> ZooKeeper server process if the ps output exceeds the 4096-character limit 
> of Linux. I think that instead of ps we could use ${JAVA_HOME}/bin/jps -vl 
> | grep QuorumPeerMain, which would correctly stop the ZooKeeper process. 
> Currently we use: PIDS=$(ps ax | grep java | grep -i QuorumPeerMain | grep 
> -v grep | awk '{print $1}')
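The jps-based approach suggested above can be sketched as follows. This is an
illustrative fragment, not the actual stop script; jps prints lines of the
form "<pid> <main class>", which sidesteps the /proc/$pid/cmdline length limit
that ps hits:

```shell
#!/usr/bin/env sh
# Find the ZooKeeper PID via jps instead of ps. The grep/awk pipeline
# extracts the PID column for the QuorumPeerMain process, if any.
PIDS=$("${JAVA_HOME:-/usr}/bin/jps" -l | grep QuorumPeerMain | awk '{print $1}')
if [ -z "$PIDS" ]; then
  echo "No ZooKeeper process found to stop"
else
  kill -s TERM $PIDS
fi
```

Note that jps only lists Java processes owned by the invoking user and
requires a JDK (not just a JRE), which may matter for some deployments.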





[jira] [Commented] (KAFKA-5545) Kafka Stream not able to successfully restart over new broker ip

2017-07-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081698#comment-16081698
 ] 

Guozhang Wang commented on KAFKA-5545:
--

Thanks for the update. So when you mentioned

{code}
One more thing is if I do close with in connction timeout all goes well.
But if I issue close after connection timeout the threads are stuck
{code}

Did you mean that when you call `close(timeout)` you set the timeout 
parameter to the session timeout value, or that the broker IP change event 
happens within the session timeout right after the streams app was started, 
which triggers the close / restart of the streams app?

> Kafka Stream not able to successfully restart over new broker ip
> ------------------------------------------------------------
>
> Key: KAFKA-5545
> URL: https://issues.apache.org/jira/browse/KAFKA-5545
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Yogesh BG
>Priority: Critical
> Attachments: kafkastreams.log
>
>
> Hi
> I have one Kafka broker and one Kafka Streams application.
> Initially the Kafka stream connects and starts processing data. Then I 
> restart the broker. When the broker restarts, a new IP is assigned.
> In the Kafka Streams app I have a 5-minute interval thread which checks 
> whether the broker IP has changed; if it has, we clean up the stream, 
> rebuild the topology (also tried reusing the topology) and start the 
> stream again. I end up with the following exceptions.
> 11:04:08.032 [StreamThread-38] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-38] Creating active task 0_5 with assigned 
> partitions [PR-5]
> 11:04:08.033 [StreamThread-41] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-41] Creating active task 0_1 with assigned 
> partitions [PR-1]
> 11:04:08.036 [StreamThread-34] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-34] Creating active task 0_7 with assigned 
> partitions [PR-7]
> 11:04:08.036 [StreamThread-37] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-37] Creating active task 0_3 with assigned 
> partitions [PR-3]
> 11:04:08.036 [StreamThread-45] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-45] Creating active task 0_0 with assigned 
> partitions [PR-0]
> 11:04:08.037 [StreamThread-36] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-36] Creating active task 0_4 with assigned 
> partitions [PR-4]
> 11:04:08.037 [StreamThread-43] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-43] Creating active task 0_6 with assigned 
> partitions [PR-6]
> 11:04:08.038 [StreamThread-48] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-48] Creating active task 0_2 with assigned 
> partitions [PR-2]
> 11:04:09.034 [StreamThread-38] WARN  o.a.k.s.p.internals.StreamThread - Could 
> not create task 0_5. Will retry.
> org.apache.kafka.streams.errors.LockException: task [0_5] Failed to lock the 
> state directory for task 0_5
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:100)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:108)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:864)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1237)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:967)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$600(StreamThread.java:69)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:234)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:259)
>