[GitHub] kafka pull request #1746: KAFKA-4049: Fix transient failure in RegexSourceIn...

2016-08-20 Thread guozhangwang
GitHub user guozhangwang reopened a pull request:

https://github.com/apache/kafka/pull/1746

KAFKA-4049: Fix transient failure in RegexSourceIntegrationTest



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K4049-RegexSourceIntegrationTest-failure

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1746.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1746


commit e045b977cc5607e72402e64547df773d9bda7a61
Author: Guozhang Wang 
Date:   2016-08-16T20:08:27Z

fix transient failure

commit 0f21348e9e48f1e6789060b70b01816cd3baac62
Author: Guozhang Wang 
Date:   2016-08-17T22:41:41Z

github comments

commit 33e8e2cd62adb38a0cef266f533110b2f42dafe2
Author: Guozhang Wang 
Date:   2016-08-20T17:51:19Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into K4049-RegexSourceIntegrationTest-failure

commit 75206faee4306184cff8c136556444d60de1358e
Author: Guozhang Wang 
Date:   2016-08-21T00:36:33Z

make atomic updates on assignedTopicPartitions




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4049) Transient failure in RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted

2016-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15429594#comment-15429594
 ] 

ASF GitHub Bot commented on KAFKA-4049:
---

Github user guozhangwang closed the pull request at:

https://github.com/apache/kafka/pull/1746


> Transient failure in 
> RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted
> --
>
> Key: KAFKA-4049
> URL: https://issues.apache.org/jira/browse/KAFKA-4049
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: test
>
> There is a hidden assumption in this test case that the created 
> {{TEST-TOPIC-A}} and {{TEST-TOPIC-B}} are propagated to the streams client at 
> the same time and stored as {{assignedTopicPartitions[0]}}. However, this is 
> not always true, since the two topics may reach the client in two 
> consecutive metadata refreshes.
> The proposed fix includes the following:
> 1. In {{waitForCondition}}, do not invoke the {{conditionMet}} function again 
> after the while loop; instead, remember the value returned by the last call. 
> This is safer: a condition that changes after the while loop exits will no 
> longer be taken into account.
> 2. Do not remember a map of all previously assigned partitions, only the most 
> recent assignment. Also get rid of the final check after the streams client 
> is closed, by using {{equals}} in the condition to make sure the assignment 
> is exactly equal to the expected one.
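
A minimal sketch of item 1, modeled on (not copied from) Kafka's TestUtils 
helper; names and signature here are illustrative:

{code}
public final class WaitUtil {

    // Functional interface in the style of Kafka's TestCondition.
    public interface TestCondition {
        boolean conditionMet();
    }

    public static void waitForCondition(TestCondition condition,
                                        long maxWaitMs,
                                        String details) throws InterruptedException {
        final long deadline = System.currentTimeMillis() + maxWaitMs;
        boolean conditionMet = condition.conditionMet();
        while (!conditionMet && System.currentTimeMillis() < deadline) {
            Thread.sleep(Math.min(maxWaitMs, 100L));
            conditionMet = condition.conditionMet();
        }
        // Deliberately no extra condition.conditionMet() call here: a
        // condition that flips after the loop exits must not change the
        // outcome (item 1 above).
        if (!conditionMet)
            throw new AssertionError("Condition not met within timeout: " + details);
    }
}
{code}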



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1746: KAFKA-4049: Fix transient failure in RegexSourceIn...

2016-08-20 Thread guozhangwang
Github user guozhangwang closed the pull request at:

https://github.com/apache/kafka/pull/1746




[jira] [Commented] (KAFKA-4049) Transient failure in RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted

2016-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15429595#comment-15429595
 ] 

ASF GitHub Bot commented on KAFKA-4049:
---

GitHub user guozhangwang reopened a pull request:

https://github.com/apache/kafka/pull/1746

KAFKA-4049: Fix transient failure in RegexSourceIntegrationTest



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K4049-RegexSourceIntegrationTest-failure

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1746.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1746


commit e045b977cc5607e72402e64547df773d9bda7a61
Author: Guozhang Wang 
Date:   2016-08-16T20:08:27Z

fix transient failure

commit 0f21348e9e48f1e6789060b70b01816cd3baac62
Author: Guozhang Wang 
Date:   2016-08-17T22:41:41Z

github comments

commit 33e8e2cd62adb38a0cef266f533110b2f42dafe2
Author: Guozhang Wang 
Date:   2016-08-20T17:51:19Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into K4049-RegexSourceIntegrationTest-failure

commit 75206faee4306184cff8c136556444d60de1358e
Author: Guozhang Wang 
Date:   2016-08-21T00:36:33Z

make atomic updates on assignedTopicPartitions




> Transient failure in 
> RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted
> --
>
> Key: KAFKA-4049
> URL: https://issues.apache.org/jira/browse/KAFKA-4049
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: test
>
> There is a hidden assumption in this test case that the created 
> {{TEST-TOPIC-A}} and {{TEST-TOPIC-B}} are propagated to the streams client at 
> the same time and stored as {{assignedTopicPartitions[0]}}. However, this is 
> not always true, since the two topics may reach the client in two 
> consecutive metadata refreshes.
> The proposed fix includes the following:
> 1. In {{waitForCondition}}, do not invoke the {{conditionMet}} function again 
> after the while loop; instead, remember the value returned by the last call. 
> This is safer: a condition that changes after the while loop exits will no 
> longer be taken into account.
> 2. Do not remember a map of all previously assigned partitions, only the most 
> recent assignment. Also get rid of the final check after the streams client 
> is closed, by using {{equals}} in the condition to make sure the assignment 
> is exactly equal to the expected one.





[jira] [Commented] (KAFKA-3478) Finer Stream Flow Control

2016-08-20 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15429559#comment-15429559
 ] 

Bill Bejeck commented on KAFKA-3478:


[~mjsax], thanks for the clarification. I'll take a look at what it would take 
to support different configurations, but I'll hold off on doing anything 
concrete until some of the details for this task (or its subtasks) are fleshed 
out.

> Finer Stream Flow Control
> -
>
> Key: KAFKA-3478
> URL: https://issues.apache.org/jira/browse/KAFKA-3478
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Today we have an event-time-based flow control mechanism to synchronize 
> multiple input streams in a best-effort manner:
> http://docs.confluent.io/3.0.0/streams/architecture.html#flow-control-with-timestamps
> However, there are some use cases where users would like to have finer 
> control of the input streams, for example, with two input streams, one of 
> them always reading from offset 0 upon (re)starting, and the other reading 
> from the log end offset.
> Today we only have one consumer config, "auto.offset.reset", to control that 
> behavior, which means all streams are read either from "earliest" or "latest".
> We should consider how to improve this setting to allow users finer control 
> over their input streams.
> =
> A finer flow control could also be used to allow populating a {{KTable}} 
> (with an "initial" state) before starting the actual processing (this feature 
> was asked for on the mailing list multiple times already). Even if it is 
> quite hard to define *when* the initial populating phase should end, this 
> might still be useful. There would be the following possibilities:
>  1) an initial fixed time period for populating
> (it might be hard for a user to estimate the correct value)
>  2) an "idle" period, i.e., if no update to a KTable is made for a certain
> time, we consider it populated
>  3) a timestamp cut-off point, i.e., all records with an older timestamp
> belong to the initial populating phase
>  4) a throughput threshold, i.e., if the populating frequency falls below
> the threshold, the KTable is considered "finished"
>  5) maybe something else?
> The API might look something like this:
> {noformat}
> KTable table = builder.table("topic", 1000); // populate the table without 
> reading any other topics until seeing one record with timestamp 1000.
> {noformat}
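
As a purely hypothetical sketch of the per-source reset idea (no such API 
exists at the time of writing; the {{AutoOffsetReset}} enum below is invented 
for illustration):

{code}
// Invented API sketch: per-source offset reset policies, one of the ideas
// floated above. Nothing like this exists in Kafka Streams today.
KStreamBuilder builder = new KStreamBuilder();

// One stream always re-reads from the beginning upon (re)start...
KStream<String, String> bootstrap =
    builder.stream(AutoOffsetReset.EARLIEST, "bootstrap-topic");

// ...while the other only picks up records appended after startup.
KStream<String, String> live =
    builder.stream(AutoOffsetReset.LATEST, "live-topic");
{code}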





[jira] [Resolved] (KAFKA-4068) FileSinkTask - use JsonConverter to serialize

2016-08-20 Thread Shikhar Bhushan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shikhar Bhushan resolved KAFKA-4068.

Resolution: Not A Problem

I was thinking JSON since it would be easy to serialize to a human-readable 
format with that. But if we want to implement a more useful 
{{Struct.toString()}} in any case for debugging purposes, we should probably do 
that instead. Fair point about keeping the file sink connector as simple as 
possible. Closing this in favor of KAFKA-4070.

> FileSinkTask - use JsonConverter to serialize
> -
>
> Key: KAFKA-4068
> URL: https://issues.apache.org/jira/browse/KAFKA-4068
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
>Priority: Minor
>
> People new to Connect often try hooking up e.g. a Kafka topic with Avro 
> data to the file sink connector, only to find the file contains values like:
> {noformat}
> org.apache.kafka.connect.data.Struct@ca1bf85a
> org.apache.kafka.connect.data.Struct@c298db6a
> org.apache.kafka.connect.data.Struct@44108fbd
> {noformat}
> This is because the {{FileSinkConnector}} is currently meant as a toy example 
> that expects the schema to be {{Schema.STRING_SCHEMA}}, though it just 
> {{toString()}}s the value without verifying that.
> It would probably be a better experience if we used 
> {{JsonConverter.fromConnectData()}} for serializing to the file.
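
A rough sketch of how a sink task could apply the proposal (the class below is 
an illustration, not the actual FileStreamSinkTask; the output stream stands 
in for the task's file writer):

{code}
import java.io.PrintStream;
import java.nio.charset.StandardCharsets;
import java.util.Collection;
import java.util.Collections;
import org.apache.kafka.connect.json.JsonConverter;
import org.apache.kafka.connect.sink.SinkRecord;

public class JsonFileSinkSketch {
    private final JsonConverter converter = new JsonConverter();
    private final PrintStream outputStream = System.out; // stand-in for the file

    public JsonFileSinkSketch() {
        // schemas.enable=false keeps the output human-readable JSON.
        converter.configure(Collections.singletonMap("schemas.enable", "false"), false);
    }

    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            // Serialize the Connect value (e.g. a Struct) instead of toString()'ing it.
            byte[] json = converter.fromConnectData(
                record.topic(), record.valueSchema(), record.value());
            outputStream.println(new String(json, StandardCharsets.UTF_8));
        }
    }
}
{code}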





[jira] [Updated] (KAFKA-4070) Implement a useful Struct.toString()

2016-08-20 Thread Shikhar Bhushan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shikhar Bhushan updated KAFKA-4070:
---
Description: Logging of {{Struct}}'s does not currently provide any useful 
output, and users also find it unhelpful e.g. when hooking up a Kafka topic 
with Avro data with the {{FileSinkConnector}} which simply {{toString()}}'s the 
values to the file.  (was: Logging of {{Struct}}'s does not currently provide 
any useful output, and users also find it unhelpful e.g. when hooking up a 
Kafka topic with Avro data with the {{FileSinkConnector}} which simply 
{{toString()}}s the values to the file.)

> Implement a useful Struct.toString()
> 
>
> Key: KAFKA-4070
> URL: https://issues.apache.org/jira/browse/KAFKA-4070
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Priority: Minor
>
> Logging of {{Struct}}'s does not currently provide any useful output, and 
> users also find it unhelpful e.g. when hooking up a Kafka topic with Avro 
> data with the {{FileSinkConnector}} which simply {{toString()}}'s the values 
> to the file.





[jira] [Updated] (KAFKA-4070) Implement a useful Struct.toString()

2016-08-20 Thread Shikhar Bhushan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shikhar Bhushan updated KAFKA-4070:
---
Description: Logging of {{Struct}}'s does not currently provide any useful 
output, and users also find it unhelpful e.g. when hooking up a Kafka topic 
with Avro data with the {{FileSinkConnector}} which simply {{toString()}}s the 
values to the file.  (was: Logging of {{Struct}}s does not currently provide 
any useful output, and users also find it unhelpful e.g. when hooking up a 
Kafka topic with Avro data with the {{FileSinkConnector}} which simply 
{{toString()}}s the values to the file.)

> Implement a useful Struct.toString()
> 
>
> Key: KAFKA-4070
> URL: https://issues.apache.org/jira/browse/KAFKA-4070
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Priority: Minor
>
> Logging of {{Struct}}'s does not currently provide any useful output, and 
> users also find it unhelpful e.g. when hooking up a Kafka topic with Avro 
> data with the {{FileSinkConnector}} which simply {{toString()}}s the values 
> to the file.





[jira] [Created] (KAFKA-4070) Implement a useful Struct.toString()

2016-08-20 Thread Shikhar Bhushan (JIRA)
Shikhar Bhushan created KAFKA-4070:
--

 Summary: Implement a useful Struct.toString()
 Key: KAFKA-4070
 URL: https://issues.apache.org/jira/browse/KAFKA-4070
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Shikhar Bhushan
Priority: Minor


Logging of {{Struct}}s does not currently provide any useful output, and users 
also find it unhelpful e.g. when hooking up a Kafka topic with Avro data with 
the {{FileSinkConnector}} which simply {{toString()}}s the values to the file.
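
One possible shape for such a {{toString()}}, iterating the schema's fields; a 
sketch only, not the implementation that eventually landed:

{code}
// Inside org.apache.kafka.connect.data.Struct -- render field names and
// values instead of the default Object hash. Sketch only.
@Override
public String toString() {
    final StringBuilder sb = new StringBuilder("Struct{");
    boolean first = true;
    for (Field field : schema().fields()) {
        final Object value = get(field);
        if (value != null) {
            if (!first)
                sb.append(",");
            first = false;
            sb.append(field.name()).append("=").append(value);
        }
    }
    return sb.append("}").toString();
}
{code}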





[jira] [Commented] (KAFKA-4068) FileSinkTask - use JsonConverter to serialize

2016-08-20 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15429511#comment-15429511
 ] 

Ewen Cheslack-Postava commented on KAFKA-4068:
--

Or we could override the Struct class's toString() to be more useful? I think 
[~jcustenborder] has previously suggested doing something like this too, since 
logging of structs doesn't provide any helpful debugging output at the moment. 
The choice of JSON also seems arbitrary -- why not CSV, or some other 
text-based serialization format?

Part of the problem is that the file connector potentially serves two 
purposes. I don't think anybody uses it in practice, but it serves both for 
user demo purposes and as an example for connector developers. The more we add 
to make things clearer for the user demo, the more we obfuscate the basic 
structure of a connector and make it harder to use as a template for other 
connectors.

> FileSinkTask - use JsonConverter to serialize
> -
>
> Key: KAFKA-4068
> URL: https://issues.apache.org/jira/browse/KAFKA-4068
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
>Priority: Minor
>
> People new to Connect often try hooking up e.g. a Kafka topic with Avro 
> data to the file sink connector, only to find the file contains values like:
> {noformat}
> org.apache.kafka.connect.data.Struct@ca1bf85a
> org.apache.kafka.connect.data.Struct@c298db6a
> org.apache.kafka.connect.data.Struct@44108fbd
> {noformat}
> This is because the {{FileSinkConnector}} is currently meant as a toy example 
> that expects the schema to be {{Schema.STRING_SCHEMA}}, though it just 
> {{toString()}}s the value without verifying that.
> It would probably be a better experience if we used 
> {{JsonConverter.fromConnectData()}} for serializing to the file.





Jenkins build is back to normal : kafka-0.10.0-jdk7 #192

2016-08-20 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request #1765: MINOR: improve Streams application reset tool to m...

2016-08-20 Thread mjsax
Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/1765




[GitHub] kafka pull request #1767: KAFKA-4058: Failure in org.apache.kafka.streams.in...

2016-08-20 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/1767

KAFKA-4058: Failure in 
org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka kafka-4058-trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1767.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1767


commit 671bd4cfef5e123acca4ee47b129ed165d1107c4
Author: Matthias J. Sax 
Date:   2016-08-19T15:28:58Z

use AdminTool to check for active consumer group instead of sleep

commit faf4e0413563e268b7b239a9d2149b2f5f34c21c
Author: Matthias J. Sax 
Date:   2016-08-19T16:07:16Z

use AdminTool to check for active consumer group instead of sleep

commit 8df88eda5667c60a3b8329130c63cd12484112f4
Author: Matthias J. Sax 
Date:   2016-08-20T09:18:51Z

using TestUtils.waitForCondition
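
The gist of these commits, as a hedged sketch (the group-liveness helper below 
is a hypothetical stand-in for the AdminClient check in the PR):

{code}
// Replace a fixed sleep with an active wait on the consumer group state.
TestUtils.waitForCondition(new TestCondition() {
    @Override
    public boolean conditionMet() {
        return isEmptyConsumerGroup(APP_ID); // hypothetical helper
    }
}, 30000L, "Consumer group " + APP_ID + " did not become empty in time.");
{code}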






[jira] [Commented] (KAFKA-4058) Failure in org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

2016-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15429488#comment-15429488
 ] 

ASF GitHub Bot commented on KAFKA-4058:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/1767

KAFKA-4058: Failure in 
org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka kafka-4058-trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1767.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1767


commit 671bd4cfef5e123acca4ee47b129ed165d1107c4
Author: Matthias J. Sax 
Date:   2016-08-19T15:28:58Z

use AdminTool to check for active consumer group instead of sleep

commit faf4e0413563e268b7b239a9d2149b2f5f34c21c
Author: Matthias J. Sax 
Date:   2016-08-19T16:07:16Z

use AdminTool to check for active consumer group instead of sleep

commit 8df88eda5667c60a3b8329130c63cd12484112f4
Author: Matthias J. Sax 
Date:   2016-08-20T09:18:51Z

using TestUtils.waitForCondition




> Failure in 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset
> --
>
> Key: KAFKA-4058
> URL: https://issues.apache.org/jira/browse/KAFKA-4058
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: test
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:225)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset(ResetIntegrationTest.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
> 

[jira] [Commented] (KAFKA-4058) Failure in org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

2016-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15429487#comment-15429487
 ] 

ASF GitHub Bot commented on KAFKA-4058:
---

Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/1766


> Failure in 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset
> --
>
> Key: KAFKA-4058
> URL: https://issues.apache.org/jira/browse/KAFKA-4058
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: test
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:225)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset(ResetIntegrationTest.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> 

[GitHub] kafka pull request #1766: KAFKA-4058: Failure in org.apache.kafka.streams.in...

2016-08-20 Thread mjsax
Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/1766




[jira] [Commented] (KAFKA-3478) Finer Stream Flow Control

2016-08-20 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15429400#comment-15429400
 ] 

Matthias J. Sax commented on KAFKA-3478:


Hey [~bbejeck]. It is still available. However, it is not clearly defined what 
should be done. The description is more or less a collection of ideas rather 
than a defined list of things to do. We first need to make some design 
decisions, and we might also want to define sub-tasks for individual items. 
One thing to start with would be to allow different configurations for 
different sources, IMHO. [~guozhang] [~miguno] [~enothereska] [~damianguy] 
[~hjafarpour] ?

> Finer Stream Flow Control
> -
>
> Key: KAFKA-3478
> URL: https://issues.apache.org/jira/browse/KAFKA-3478
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Today we have an event-time-based flow control mechanism to synchronize 
> multiple input streams in a best-effort manner:
> http://docs.confluent.io/3.0.0/streams/architecture.html#flow-control-with-timestamps
> However, there are some use cases where users would like to have finer 
> control of the input streams, for example, with two input streams, one of 
> them always reading from offset 0 upon (re)starting, and the other reading 
> from the log end offset.
> Today we only have one consumer config, "auto.offset.reset", to control that 
> behavior, which means all streams are read either from "earliest" or "latest".
> We should consider how to improve this setting to allow users finer control 
> over their input streams.
> =
> A finer flow control could also be used to allow populating a {{KTable}} 
> (with an "initial" state) before starting the actual processing (this feature 
> was asked for on the mailing list multiple times already). Even if it is 
> quite hard to define *when* the initial populating phase should end, this 
> might still be useful. There would be the following possibilities:
>  1) an initial fixed time period for populating
> (it might be hard for a user to estimate the correct value)
>  2) an "idle" period, i.e., if no update to a KTable is made for a certain
> time, we consider it populated
>  3) a timestamp cut-off point, i.e., all records with an older timestamp
> belong to the initial populating phase
>  4) a throughput threshold, i.e., if the populating frequency falls below
> the threshold, the KTable is considered "finished"
>  5) maybe something else?
> The API might look something like this:
> {noformat}
> KTable table = builder.table("topic", 1000); // populate the table without 
> reading any other topics until seeing one record with timestamp 1000.
> {noformat}





[jira] [Work started] (KAFKA-4023) Add thread id as prefix in Kafka Streams thread logging

2016-08-20 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4023 started by Bill Bejeck.
--
> Add thread id as prefix in Kafka Streams thread logging
> ---
>
> Key: KAFKA-4023
> URL: https://issues.apache.org/jira/browse/KAFKA-4023
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Bill Bejeck
>  Labels: newbie++
>
> A single Kafka Streams instance can include multiple stream threads, and 
> hence, without a logging prefix, it is difficult to determine which thread is 
> producing which log entries.
> We should:
> 1) add the thread id as a log prefix in the StreamThread logger, as well as 
> in its contained StreamPartitionAssignor;
> 2) add the task id as a log prefix in StreamTask / StandbyTask, as well as in 
> their contained RecordCollector and ProcessorStateManager.
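
A minimal sketch of item 1, with illustrative names (not the exact 
StreamThread fields):

{code}
// Build the prefix once from the thread name and prepend it to every log
// line; field and message names here are illustrative.
private final String logPrefix = String.format("stream-thread [%s]", threadName);

log.info("{} Committing all active tasks", logPrefix);
log.info("{} New partitions assigned: {}", logPrefix, assignedPartitions);
{code}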





Build failed in Jenkins: kafka-trunk-jdk7 #1488

2016-08-20 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3949: Fix race condition when metadata update arrives during

--
[...truncated 12056 lines...]
org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
shouldDecodePreviousVersion PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite STARTED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
STARTED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullApplicationIdOnBuild STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullApplicationIdOnBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 

[jira] [Work started] (KAFKA-3780) Add new config cache.max.bytes.buffering to the streams configuration

2016-08-20 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3780 started by Eno Thereska.
---
> Add new config cache.max.bytes.buffering to the streams configuration
> -
>
> Key: KAFKA-3780
> URL: https://issues.apache.org/jira/browse/KAFKA-3780
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> Add a new configuration cache.max.bytes.buffering to the streams 
> configuration options as described in KIP-63 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-63%3A+Unify+store+and+downstream+caching+in+streams
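
For context, registering such an option on the streams {{ConfigDef}} might 
look roughly like this (default value, bounds, and doc string below are 
placeholders, not the values specified in KIP-63):

{code}
private static final ConfigDef CONFIG = new ConfigDef()
    .define("cache.max.bytes.buffering",
            ConfigDef.Type.LONG,
            10 * 1024 * 1024L,              // placeholder default: 10 MB
            ConfigDef.Range.atLeast(0),
            ConfigDef.Importance.LOW,
            "Maximum number of memory bytes to be used for record caches "
                + "across all threads.");
{code}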





Build failed in Jenkins: kafka-trunk-jdk8 #828

2016-08-20 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3949: Fix race condition when metadata update arrives during

--
[...truncated 12070 lines...]
org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldCantHaveNullPredicate PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullActionOnForEach STARTED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullActionOnForEach PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullValueMapperOnTableJoin STARTED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullValueMapperOnTableJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullPredicateOnFilterNot STARTED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullPredicateOnFilterNot PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldHaveAtLeastOnPredicateWhenBranching STARTED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldHaveAtLeastOnPredicateWhenBranching PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullFilePathOnWriteAsText STARTED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
shouldNotAllowNullFilePathOnWriteAsText PASSED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueWithProvidedSerde STARTED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueWithProvidedSerde PASSED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValuesWithName STARTED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValuesWithName PASSED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueDefaultSerde STARTED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueDefaultSerde PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableLeftJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamKTableLeftJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testNotSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSkipNullOnMaterialization STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSkipNullOnMaterialization PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testKTable STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
STARTED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues STARTED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldNotAllowNullSubtractorOnAggregate STARTED


[jira] [Assigned] (KAFKA-3777) Extract the existing LRU cache out of RocksDBStore

2016-08-20 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska reassigned KAFKA-3777:
---

Assignee: Eno Thereska

> Extract the existing LRU cache out of RocksDBStore
> --
>
> Key: KAFKA-3777
> URL: https://issues.apache.org/jira/browse/KAFKA-3777
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> The LRU cache currently lives inside the RocksDbStore class. As part of 
> KAFKA-3776 it needs to move out of RocksDbStore and become a separate 
> component used in:
> 1. KGroupedStream.aggregate() / reduce(), 
> 2. KStream.aggregateByKey() / reduceByKey(),
> 3. KTable.to() (this will be done in KAFKA-3779).
> All of the above operators can have a cache on top to deduplicate writes to 
> the materialized state store in RocksDB.
> The scope of this JIRA is to extract the cache out of RocksDBStore and use it 
> for items 1) and 2) above; it should be done together with / after 
> KAFKA-3780.
> Note it is NOT in the scope of this JIRA to re-write the cache, so this will 
> basically stay the same record-based cache we currently have.
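
A minimal record-based LRU cache as a standalone component, in the spirit of 
what this ticket extracts; a sketch, not the actual class that was pulled out:

{code}
import java.util.LinkedHashMap;
import java.util.Map;

public class MemoryLRUCache<K, V> {
    private final Map<K, V> map;

    public MemoryLRUCache(final int maxCacheSize) {
        // accessOrder=true makes iteration order least-recently-accessed
        // first, so the eldest entry is the LRU victim.
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
                return size() > maxCacheSize;
            }
        };
    }

    public V get(final K key) { return map.get(key); }

    public void put(final K key, final V value) { map.put(key, value); }
}
{code}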





[jira] [Created] (KAFKA-4069) Forward records in context of cache flushing/eviction

2016-08-20 Thread Eno Thereska (JIRA)
Eno Thereska created KAFKA-4069:
---

 Summary: Forward records in context of cache flushing/eviction
 Key: KAFKA-4069
 URL: https://issues.apache.org/jira/browse/KAFKA-4069
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 0.10.1.0
Reporter: Eno Thereska
Assignee: Damian Guy
 Fix For: 0.10.1.0


When the cache is in place, records should be forwarded downstream when they 
are evicted or flushed from the cache.

This is a major structural change to the internals of the code, moving from 
having a single record outstanding inside a task to potentially having several 
records outstanding. 
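
A hypothetical shape of the change (interface and names are illustrative, not 
the eventual implementation): instead of forwarding a record as soon as it is 
processed, the cache notifies a listener on flush/eviction and records are 
forwarded then.

{code}
// Illustrative listener-based forwarding on cache flush/eviction.
interface DirtyEntryFlushListener<K, V> {
    void apply(K key, V newValue, V oldValue);
}

cache.setFlushListener(new DirtyEntryFlushListener<String, Long>() {
    @Override
    public void apply(String key, Long newValue, Long oldValue) {
        // Forward downstream only when the entry leaves the cache.
        context.forward(key, newValue);
    }
});
{code}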





[jira] [Assigned] (KAFKA-3780) Add new config cache.max.bytes.buffering to the streams configuration

2016-08-20 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska reassigned KAFKA-3780:
---

Assignee: Eno Thereska

> Add new config cache.max.bytes.buffering to the streams configuration
> -
>
> Key: KAFKA-3780
> URL: https://issues.apache.org/jira/browse/KAFKA-3780
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> Add a new configuration cache.max.bytes.buffering to the streams 
> configuration options as described in KIP-63 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-63%3A+Unify+store+and+downstream+caching+in+streams





[jira] [Work started] (KAFKA-3974) LRU cache should store bytes/object and not records

2016-08-20 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3974 started by Eno Thereska.
---
> LRU cache should store bytes/object and not records
> ---
>
> Key: KAFKA-3974
> URL: https://issues.apache.org/jira/browse/KAFKA-3974
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> After the investigation in KAFKA-3973, if the outcome is either bytes or 
> objects, the actual LRU cache needs to be modified to store bytes or objects 
> (instead of records). The cache will have a total byte size as an input 
> parameter.
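
A sketch of the byte-accounting variant, building on the LRU sketch under 
KAFKA-3777 but with an explicit eviction loop instead of the automatic 
{{removeEldestEntry}} hook (illustrative only; assumes no overwrites of 
existing keys):

{code}
private long currentSizeBytes = 0L;

public void put(final Bytes key, final byte[] value) {
    currentSizeBytes += key.get().length + value.length;
    map.put(key, value);
    // Evict least-recently-used entries until back under the byte budget.
    while (currentSizeBytes > maxCacheSizeBytes && !map.isEmpty()) {
        final Map.Entry<Bytes, byte[]> eldest = map.entrySet().iterator().next();
        currentSizeBytes -= eldest.getKey().get().length + eldest.getValue().length;
        map.remove(eldest.getKey());
    }
}
{code}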


