Jenkins build is back to normal : kafka-trunk-jdk7 #876

2015-12-04 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2888) Allow ConsoleConsumer to print a status update on some interval

2015-12-04 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15042503#comment-15042503
 ] 

Nick Dimiduk commented on KAFKA-2888:
-

bump.

> Allow ConsoleConsumer to print a status update on some interval
> ---
>
> Key: KAFKA-2888
> URL: https://issues.apache.org/jira/browse/KAFKA-2888
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Nick Dimiduk
>Priority: Minor
> Attachments: 2888.patch
>
>
> When retrieving a large number of messages, ConsoleConsumer should be able to 
> provide some indication to the user of how it's progressing. Something like 
> printing to stderr every N lines would be sufficient.
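A minimal sketch of the interval-based status update described above, assuming
a generic consume loop (REPORT_INTERVAL and the stand-in message iterator are
invented for illustration and are not taken from the attached 2888.patch):

    import java.util.Iterator;
    import java.util.stream.IntStream;

    public class ProgressDemo {
        // Hypothetical reporting interval; a real patch might make it configurable.
        static final long REPORT_INTERVAL = 10_000;

        public static void main(String[] args) {
            // Stand-in for the consumer's message iterator.
            Iterator<Integer> messages = IntStream.range(0, 50_000).iterator();
            long consumed = 0;
            while (messages.hasNext()) {
                System.out.println(messages.next());  // normal output -> stdout
                consumed++;
                if (consumed % REPORT_INTERVAL == 0) {
                    // Status update -> stderr, so it doesn't pollute piped output.
                    System.err.println("Consumed " + consumed + " messages");
                }
            }
        }
    }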



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2015-12-04 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15042495#comment-15042495
 ] 

Nick Dimiduk commented on KAFKA-1980:
-

So long as it's fixed somewhere, I'm good :)

> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?
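For reference, the eager-versus-lazy distinction behind that hypothesis can be
sketched in plain Java (this illustrates the failure mode only; the code in
question is Scala, and the class name here is invented):

    import java.util.stream.LongStream;

    public class LazyVsEagerDemo {
        public static void main(String[] args) {
            // Lazy: limit() on a stream materializes only the first 5 elements,
            // no matter how large the underlying range is.
            LongStream.range(0, Long.MAX_VALUE)
                      .limit(5)
                      .forEach(System.out::println);
            // An eager equivalent would try to buffer the whole range in memory
            // first, which is the OutOfMemoryError failure mode reported above.
        }
    }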



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2015-12-04 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15042480#comment-15042480
 ] 

Jakob Homan commented on KAFKA-1980:


This code's been removed in 0.9.0 and trunk (although similar code exists in 
ReplayLogProducer and is likely vulnerable to this defect as well). Right now 
there's no plan to make any further releases on the 0.8.x line. I don't have a 
problem committing the fix to 0.8.2, but it's unlikely to ever see a release.

> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2015-12-04 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reopened KAFKA-1980:


> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2015-12-04 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15042463#comment-15042463
 ] 

Nick Dimiduk commented on KAFKA-1980:
-

Bump.

cc [~miguno]

> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #205

2015-12-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2856: Add KTable non-stateful APIs along with standby task

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 39c3512eceedebcb6e50f8c6c4ef66601ff7dbc4 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 39c3512eceedebcb6e50f8c6c4ef66601ff7dbc4
 > git rev-list cd54fc8816964f5a56469075c75c567e777b9656 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5528427998887821449.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 16.844 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson3464088927009470681.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.9/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not read entry ':clients:compileJava' from cache taskArtifacts.bin 
(/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/.gradle/2.9/taskArtifacts/taskArtifacts.bin).
> java.io.UTFDataFormatException (no error message)

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 10.417 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


[GitHub] kafka pull request: MINOR: backport fix to partition assignor orde...

2015-12-04 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/630

MINOR: backport fix to partition assignor order



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2931-0.9

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/630.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #630


commit a7ec8be73ff9df4b300892486b4123864a69f2f4
Author: Jason Gustafson 
Date:   2015-12-03T19:42:09Z

use List for AbstractCoordinator.metadata() return type and add test case




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2856) add KTable

2015-12-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2856.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 604
[https://github.com/apache/kafka/pull/604]

> add KTable
> --
>
> Key: KAFKA-2856
> URL: https://issues.apache.org/jira/browse/KAFKA-2856
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> KTable is a special type of stream that represents a changelog of a 
> database table (or a key-value store).
> A changelog has to meet the following requirements.
> * Key-value mapping is surjective in the database table (the key must be the 
> primary key).
> * All insert/update/delete events are delivered in order for the same key
> * An update event has the whole data (not just delta).
> * A delete event is represented by the null value.
> A KTable is not necessarily materialized as a local store. It may be 
> materialized when necessary (see below).
> KTable supports look-up by key. KTable is materialized implicitly when 
> look-up is necessary.
> * KTable may be created from a topic. (Base KTable)
> * KTable may be created from another KTable by filter(), filterOut(), 
> mapValues(). (Derived KTable)
> * A call to the user supplied function is skipped when the value is null 
> since such an event represents a deletion. 
> * Instead of dropping, events filtered out by filter() or filterOut() are 
> converted to delete events. (Can we avoid this?)
> * map(), flatMap() and flatMapValues() are not supported since they may 
> violate the changelog requirements.
> A derived KTable may be persisted to a topic by to() or through(). through() 
> creates another base KTable. 
> KTable can be converted to KStream by the toStream() method.
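The API surface described above can be sketched as follows (a hedged
illustration: KStreamBuilder, table(), filter(), toStream() and to() are
written in the form the Kafka Streams DSL later took, the topic names are
invented, and imports/serde configuration are omitted):

    // Sketch only: the exact API in this patch may differ.
    KStreamBuilder builder = new KStreamBuilder();

    // Base KTable: a changelog view of the "users" topic (key = primary key).
    KTable<String, String> users = builder.table("users");

    // Derived KTable: filter() preserves changelog semantics; records failing
    // the predicate are forwarded as delete (null-value) events, as described.
    KTable<String, String> active =
        users.filter((key, value) -> value != null && value.contains("active"));

    // Convert the changelog to a plain KStream of events and persist it.
    active.toStream().to("active-users");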



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2856) add KTable

2015-12-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15042375#comment-15042375
 ] 

ASF GitHub Bot commented on KAFKA-2856:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/604


> add KTable
> --
>
> Key: KAFKA-2856
> URL: https://issues.apache.org/jira/browse/KAFKA-2856
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> KTable is a special type of stream that represents a changelog of a 
> database table (or a key-value store).
> A changelog has to meet the following requirements.
> * Key-value mapping is surjective in the database table (the key must be the 
> primary key).
> * All insert/update/delete events are delivered in order for the same key
> * An update event has the whole data (not just delta).
> * A delete event is represented by the null value.
> A KTable is not necessarily materialized as a local store. It may be 
> materialized when necessary (see below).
> KTable supports look-up by key. KTable is materialized implicitly when 
> look-up is necessary.
> * KTable may be created from a topic. (Base KTable)
> * KTable may be created from another KTable by filter(), filterOut(), 
> mapValues(). (Derived KTable)
> * A call to the user supplied function is skipped when the value is null 
> since such an event represents a deletion. 
> * Instead of dropping, events filtered out by filter() or filterOut() are 
> converted to delete events. (Can we avoid this?)
> * map(), flatMap() and flatMapValues() are not supported since they may 
> violate the changelog requirements.
> A derived KTable may be persisted to a topic by to() or through(). through() 
> creates another base KTable. 
> KTable can be converted to KStream by the toStream() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2856: add ktable

2015-12-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/604


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: TEST: remove checkMaybeGetRemainingTime in Kaf...

2015-12-04 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/629

TEST: remove checkMaybeGetRemainingTime in KafkaProducer

@granders For testing in EC2: 3 brokers, remote producer, message size = 
100 bytes

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KProdTest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/629.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #629


commit ec7668fe9d7561a64067c0edf23aa376bc8d0ec4
Author: Guozhang Wang 
Date:   2015-12-04T22:48:05Z

KProdTest




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2945) CreateTopic - protocol and server side implementation

2015-12-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15042309#comment-15042309
 ] 

ASF GitHub Bot commented on KAFKA-2945:
---

GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/626

KAFKA-2945: CreateTopic - protocol and server side implementation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka create-wire

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/626.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #626


commit d1fe53ecb9ccdc94457efcb61332cd54ca7b8095
Author: Grant Henke 
Date:   2015-12-02T03:23:45Z

KAFKA-2945: CreateTopic - protocol and server side implementation

commit 5dce80683ebd2fe6a30c8c0aa8dfe8b2233602a8
Author: Grant Henke 
Date:   2015-12-04T21:57:57Z

Address reviews: possible codes, comments, invalid config exception




> CreateTopic - protocol and server side implementation
> -
>
> Key: KAFKA-2945
> URL: https://issues.apache.org/jira/browse/KAFKA-2945
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2945) CreateTopic - protocol and server side implementation

2015-12-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15042308#comment-15042308
 ] 

ASF GitHub Bot commented on KAFKA-2945:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/626


> CreateTopic - protocol and server side implementation
> -
>
> Key: KAFKA-2945
> URL: https://issues.apache.org/jira/browse/KAFKA-2945
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2945: CreateTopic - protocol and server ...

2015-12-04 Thread granthenke
Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/626


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2945: CreateTopic - protocol and server ...

2015-12-04 Thread granthenke
GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/626

KAFKA-2945: CreateTopic - protocol and server side implementation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka create-wire

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/626.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #626


commit d1fe53ecb9ccdc94457efcb61332cd54ca7b8095
Author: Grant Henke 
Date:   2015-12-02T03:23:45Z

KAFKA-2945: CreateTopic - protocol and server side implementation

commit 5dce80683ebd2fe6a30c8c0aa8dfe8b2233602a8
Author: Grant Henke 
Date:   2015-12-04T21:57:57Z

Address reviews: possible codes, comments, invalid config exception




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-2893) Add Negative Partition Seek Check

2015-12-04 Thread jin xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing reassigned KAFKA-2893:
---

Assignee: jin xing

> Add Negative Partition Seek Check
> -
>
> Key: KAFKA-2893
> URL: https://issues.apache.org/jira/browse/KAFKA-2893
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
>Assignee: jin xing
>
> When calling seek with an offset that is a negative number, there isn't a 
> check. When you do give a negative number, you get the following output:
> {{2015-11-25 13:54:16 INFO  Fetcher:567 - Fetch offset null is out of range, 
> resetting offset}}
> Code to replicate:
> KafkaConsumer<String, String> consumer = new KafkaConsumer<String, 
> String>(props);
> TopicPartition partition = new TopicPartition(topic, 0);
> consumer.assign(Arrays.asList(partition));
> consumer.seek(partition, -1);
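A guard of roughly the following shape would fail fast instead of producing
the confusing log line above (a sketch only; the message text and its
placement inside KafkaConsumer.seek() are assumptions about the eventual fix):

    public void seek(TopicPartition partition, long offset) {
        if (offset < 0)
            throw new IllegalArgumentException(
                "seek offset must not be a negative number: " + offset);
        // ... existing logic to update the fetch position ...
    }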



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2948) Kafka producer does not cope well with topic deletions

2015-12-04 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-2948:
-

 Summary: Kafka producer does not cope well with topic deletions
 Key: KAFKA-2948
 URL: https://issues.apache.org/jira/browse/KAFKA-2948
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.9.0.0
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram


The Kafka producer gets metadata for topics when send is invoked and thereafter 
attempts to keep the metadata up-to-date without any explicit requests from the 
client. This works well in static environments, but when topics are added or 
deleted, the list of topics in Metadata grows but never shrinks. Apart from 
being a memory leak, this results in constant metadata requests for deleted 
topics.

We are running into this issue with the Confluent REST server, where topic 
deletions in tests are filling up logs with warnings about unknown topics. 
Auto-create is turned off in our Kafka cluster.

I am happy to provide a fix, but am not sure what the right fix is. Does it 
make sense to remove topics from the metadata list when an 
UNKNOWN_TOPIC_OR_PARTITION response is received and there are no outstanding 
sends? It doesn't look very straightforward to do this, so any alternative 
suggestions are welcome.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2912) Add error code 4 (InvalidFetchSize) to Errors.java

2015-12-04 Thread jin xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing reassigned KAFKA-2912:
---

Assignee: jin xing

> Add error code 4 (InvalidFetchSize) to Errors.java
> --
>
> Key: KAFKA-2912
> URL: https://issues.apache.org/jira/browse/KAFKA-2912
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: jin xing
>
> org.apache.kafka.common.protocol.Errors.java has:
> {quote}
> // TODO: errorCode 4 for InvalidFetchSize
> {quote}
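The missing entry would presumably follow the pattern of the neighbouring
constants in org.apache.kafka.common.protocol.Errors, along these lines (the
exception type and message text are assumptions, not the committed code):

    INVALID_FETCH_SIZE(4,
            new ApiException("The requested fetch size is invalid."))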



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2893) Add Negative Partition Seek Check

2015-12-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15041588#comment-15041588
 ] 

ASF GitHub Bot commented on KAFKA-2893:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/628

KAFKA-2893: Add a simple non-negative partition seek check



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2893

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/628.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #628


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit 253769e9441bc0634fd27d00375c5381daf03202
Author: jinxing 
Date:   2015-12-04T14:01:42Z

KAFKA-2893: Add Negative Partition Seek Check




> Add Negative Partition Seek Check
> -
>
> Key: KAFKA-2893
> URL: https://issues.apache.org/jira/browse/KAFKA-2893
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
>
> When calling seek with an offset that is a negative number, there isn't a 
> check. When you do give a negative number, you get the following output:
> {{2015-11-25 13:54:16 INFO  Fetcher:567 - Fetch offset null is out of range, 
> resetting offset}}
> Code to replicate:
> KafkaConsumer<String, String> consumer = new KafkaConsumer<String, 
> String>(props);
> TopicPartition partition = new TopicPartition(topic, 0);
> consumer.assign(Arrays.asList(partition));
> consumer.seek(partition, -1);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2893: Add a simple non-negative partitio...

2015-12-04 Thread ZoneMayor
GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/628

KAFKA-2893: Add a simple non-negative partition seek check



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2893

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/628.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #628


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit 253769e9441bc0634fd27d00375c5381daf03202
Author: jinxing 
Date:   2015-12-04T14:01:42Z

KAFKA-2893: Add Negative Partition Seek Check




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-2940) Make available to use any Java options at startup scripts

2015-12-04 Thread Sasaki Toru (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sasaki Toru reassigned KAFKA-2940:
--

Assignee: Sasaki Toru

> Make available to use any Java options at startup scripts
> -
>
> Key: KAFKA-2940
> URL: https://issues.apache.org/jira/browse/KAFKA-2940
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Sasaki Toru
>Assignee: Sasaki Toru
>Priority: Minor
> Fix For: 0.9.0.1
>
>
> We cannot specify arbitrary Java options (e.g. an option for remote 
> debugging) in startup scripts such as kafka-server-start.sh.
> This ticket makes it possible to specify them via the "JAVA_OPTS" 
> environment variable.
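Once the scripts honour the variable, remote debugging could then be enabled
along these lines (how the final patch wires JAVA_OPTS into the launch command
is an assumption; the jdwp agent flag is standard JDK syntax):

$ JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005" \
    bin/kafka-server-start.sh config/server.properties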



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)