Build failed in Jenkins: kafka_0.9.0_jdk7 #59

2015-12-07 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Remove unused DoublyLinkedList

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress https://git-wip-us.apache.org/repos/asf/kafka.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.9.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.9.0^{commit} # timeout=10
Checking out Revision 168b759e6c7ae72e60459ef6499d0330e617f84c (refs/remotes/origin/0.9.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 168b759e6c7ae72e60459ef6499d0330e617f84c
 > git rev-list 8b65ec9caca82f98e715a0acb2ebabee3ae4fef1 # timeout=10
Setting GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson7015939905330744391.sh
+ /home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 24.091 secs
Setting GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson5517636930442753428.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 --stacktrace clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:tools:clean
:connect:api:clean
:connect:file:clean
:connect:json:clean
:connect:runtime:clean
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka_0.9.0_jdk7:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during up-to-date check. See stacktrace for details.
> Could not add entry '/x1/jenkins/jenkins-slave/workspace/kafka_0.9.0_jdk7/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java' to cache fileHashes.bin (/x1/jenkins/jenkins-slave/workspace/kafka_0.9.0_jdk7/.gradle/2.8/taskArtifacts/fileHashes.bin).

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Failed to capture snapshot of input files for task 'compileJava' during up-to-date check. See stacktrace for details.
    at org.gradle.api.internal.changedetection.rules.TaskUpToDateState.<init>(TaskUpToDateState.java:59)
    at org.gradle.api.internal.changedetection.changes.DefaultTaskArtifactStateRepository$TaskArtifactStateImpl.getStates(DefaultTaskArtifactStateRepository.java:126)
    at org.gradle.api.internal.changedetection.changes.DefaultTaskArtifactStateRepository$TaskArtifactStateImpl.isUpToDate(DefaultTaskArtifactStateRepository.java:69)
    at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:52)
    at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
    at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:52)
    at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
    at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:53)
    at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
    at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:203)
    at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.exec

[jira] [Assigned] (KAFKA-2957) Fix typos in Kafka documentation

2015-12-07 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-2957:
--

Assignee: Vahid Hashemian

> Fix typos in Kafka documentation
> 
>
> Key: KAFKA-2957
> URL: https://issues.apache.org/jira/browse/KAFKA-2957
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Trivial
>  Labels: documentation, easyfix
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> There are some minor typos in Kafka documentation. Example:
> - Supporting these uses led use to a design with a number of unique elements, 
> more akin to a database log then a traditional messaging system.
> This should read as:
> - Supporting these uses led *us* to a design with a number of unique 
> elements, more akin to a database log *than* a traditional messaging system.





[jira] [Commented] (KAFKA-1997) Refactor Mirror Maker

2015-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046499#comment-15046499
 ] 

ASF GitHub Bot commented on KAFKA-1997:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/638


> Refactor Mirror Maker
> -
>
> Key: KAFKA-1997
> URL: https://issues.apache.org/jira/browse/KAFKA-1997
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1997.patch, KAFKA-1997.patch, 
> KAFKA-1997_2015-03-03_16:28:46.patch, KAFKA-1997_2015-03-04_15:07:46.patch, 
> KAFKA-1997_2015-03-04_15:42:45.patch, KAFKA-1997_2015-03-05_20:14:58.patch, 
> KAFKA-1997_2015-03-09_18:55:54.patch, KAFKA-1997_2015-03-10_18:31:34.patch, 
> KAFKA-1997_2015-03-11_15:20:18.patch, KAFKA-1997_2015-03-11_19:10:53.patch, 
> KAFKA-1997_2015-03-13_14:43:34.patch, KAFKA-1997_2015-03-17_13:47:01.patch, 
> KAFKA-1997_2015-03-18_12:47:32.patch
>
>
> Refactor mirror maker based on KIP-3





[GitHub] kafka pull request: HOTFIX: fix ProcessorStateManager to use corre...

2015-12-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/635




[GitHub] kafka pull request: MINOR: Remove unused DoublyLinkedList

2015-12-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/638




[jira] [Commented] (KAFKA-2903) FileMessageSet's read method maybe has problem when start is not zero

2015-12-07 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046379#comment-15046379
 ] 

Jun Rao commented on KAFKA-2903:


Yes, the current logic is a bit confusing, but it is correct. We create a 
FileMessageSet in two cases. The first case is when we create a LogSegment; in 
this case, FileMessageSet.start is always 0. The second case is when we want to 
generate a response to a fetch request; in this case, we take a slice of the 
FileMessageSet created in case one, which always has FileMessageSet.start as 0. 
That's why the code works: we have never needed to create a slice from a 
FileMessageSet created in case (2). To make the code easier to understand, 
perhaps we can just get rid of this.start altogether when calculating end. It 
would also be good to add a comment above FileMessageSet to make this clear. Do 
you want to submit a patch?
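
For reference, a minimal Scala sketch of the corrected method (a FileMessageSet 
fragment assuming the fix proposed in this ticket, not the final patch; `file`, 
`channel`, and `sizeInBytes()` come from the enclosing class):

    // Sketch of the proposed fix: bound `end` relative to this.start so the
    // slice is correct even when this.start is non-zero.
    def read(position: Int, size: Int): FileMessageSet =
      new FileMessageSet(file,
                         channel,
                         start = this.start + position,
                         end = math.min(this.start + position + size,
                                        this.start + sizeInBytes()))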

> FileMessageSet's read method maybe has problem when start is not zero
> -
>
> Key: KAFKA-2903
> URL: https://issues.apache.org/jira/browse/KAFKA-2903
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.2.1, 0.9.0.0
>Reporter: Pengwei
>Assignee: Jay Kreps
> Fix For: 0.9.1.0
>
>
> now the code is:
> def read(position: Int, size: Int): FileMessageSet = {
>   ...
>   new FileMessageSet(file,
>                      channel,
>                      start = this.start + position,
>                      end = math.min(this.start + position + size, sizeInBytes()))
> }
> if this.start is not 0, then end is only the FileMessageSet's size, not the
> actual end position.
> the end parameter should be:
>   end = math.min(this.start + position + size, this.start + sizeInBytes())





[jira] [Commented] (KAFKA-2929) Remove duplicate error mapping functionality

2015-12-07 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046303#comment-15046303
 ] 

Grant Henke commented on KAFKA-2929:


Thanks for the concrete example of why this is important [~peoplebike].

> Remove duplicate error mapping functionality
> 
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and issues with consistency, we should remove 
> ErrorMapping.scala in core in favor of Errors.java in common. Any duplicated 
> exceptions in core should be removed as well to ensure the mapping is correct.





[jira] [Commented] (KAFKA-1997) Refactor Mirror Maker

2015-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046296#comment-15046296
 ] 

ASF GitHub Bot commented on KAFKA-1997:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/638

MINOR: Remove unused DoublyLinkedList

It used to be used by MirrorMaker but its usage was removed in KAFKA-1997.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka remove-dll

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/638.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #638


commit 7fa9407e5971cdb8bd2e4902f94b200f8b6088a6
Author: Grant Henke 
Date:   2015-12-08T03:47:52Z

MINOR: Remove unused DoublyLinkedList

It used to be used by MirrorMaker but its usage was removed in KAFKA-1997.




> Refactor Mirror Maker
> -
>
> Key: KAFKA-1997
> URL: https://issues.apache.org/jira/browse/KAFKA-1997
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1997.patch, KAFKA-1997.patch, 
> KAFKA-1997_2015-03-03_16:28:46.patch, KAFKA-1997_2015-03-04_15:07:46.patch, 
> KAFKA-1997_2015-03-04_15:42:45.patch, KAFKA-1997_2015-03-05_20:14:58.patch, 
> KAFKA-1997_2015-03-09_18:55:54.patch, KAFKA-1997_2015-03-10_18:31:34.patch, 
> KAFKA-1997_2015-03-11_15:20:18.patch, KAFKA-1997_2015-03-11_19:10:53.patch, 
> KAFKA-1997_2015-03-13_14:43:34.patch, KAFKA-1997_2015-03-17_13:47:01.patch, 
> KAFKA-1997_2015-03-18_12:47:32.patch
>
>
> Refactor mirror maker based on KIP-3





[jira] [Comment Edited] (KAFKA-2929) Remove duplicate error mapping functionality

2015-12-07 Thread Xing Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046294#comment-15046294
 ] 

Xing Huang edited comment on KAFKA-2929 at 12/8/15 3:48 AM:


Two days ago, I made a patch to deal with a DelayedProduce problem when the 
leader changes, and I sent ErrorMapping.StaleLeaderEpochCode to the client. But 
the client recognized it as Errors.NETWORK_EXCEPTION and threw an exception: 
"The server disconnected before a response was received." This exception 
confused me a lot; then I found a mismatch between the two mappings: error code 
13 means 'stale leader epoch' in ErrorMapping, but 'network exception' in 
Errors. So it is necessary to have a consistent error mapping.


was (Author: peoplebike):
Two days ago, I made a patch to deal with DelayedProduce problem when leader 
change, and I sent Errors.StaleLeaderEpochCode to client. But the client 
recognized  it as Errors.NETWORK_EXCEPTION, and threw an exception - "The 
server disconnected before a response was received.". This exception confused 
me a lot, then I find a mismatch of the two mappings:  error code 13 means 
'stale leader epoch' in ErrorMapping, but 'net work exception' in Errors. So, 
It is necessary to have a consistent error mapping.

> Remove duplicate error mapping functionality
> 
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and issues with consistency, we should remove 
> ErrorMapping.scala in core in favor of Errors.java in common. Any duplicated 
> exceptions in core should be removed as well to ensure the mapping is correct.





[GitHub] kafka pull request: MINOR: Remove unused DoublyLinkedList

2015-12-07 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/638

MINOR: Remove unused DoublyLinkedList

It used to be used by MirrorMaker but its usage was removed in KAFKA-1997.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka remove-dll

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/638.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #638


commit 7fa9407e5971cdb8bd2e4902f94b200f8b6088a6
Author: Grant Henke 
Date:   2015-12-08T03:47:52Z

MINOR: Remove unused DoublyLinkedList

It used to be used by MirrorMaker but its usage was removed in KAFKA-1997.






[jira] [Comment Edited] (KAFKA-2929) Remove duplicate error mapping functionality

2015-12-07 Thread Xing Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046294#comment-15046294
 ] 

Xing Huang edited comment on KAFKA-2929 at 12/8/15 3:45 AM:


Two days ago, I made a patch to deal with a DelayedProduce problem when the 
leader changes, and I sent Errors.StaleLeaderEpochCode to the client. But the 
client recognized it as Errors.NETWORK_EXCEPTION and threw an exception: "The 
server disconnected before a response was received." This exception confused me 
a lot; then I found a mismatch between the two mappings: error code 13 means 
'stale leader epoch' in ErrorMapping, but 'network exception' in Errors. So it 
is necessary to have a consistent error mapping.


was (Author: peoplebike):
Two days ago, I made a patch to deal with DelayedProduce problem when leader 
change, and I sent Errors.StaleLeaderEpochCode to client. But the client 
recognized  it as Errors.NETWORK_EXCEPTION, and threw a exception - "The server 
disconnected before a response was received.". This exception confused me a 
lot, then I find a mismatch of the two mappings:  error code 13 means 'stale 
leader epoch' in ErrorMapping, but 'net work exception' in Errors. So, It is 
necessary to have a consistent error mapping.

> Remove duplicate error mapping functionality
> 
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and issues with consistency, we should remove 
> ErrorMapping.scala in core in favor of Errors.java in common. Any duplicated 
> exceptions in core should be removed as well to ensure the mapping is correct.





[jira] [Commented] (KAFKA-2929) Remove duplicate error mapping functionality

2015-12-07 Thread Xing Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046294#comment-15046294
 ] 

Xing Huang commented on KAFKA-2929:
---

Two days ago, I made a patch to deal with a DelayedProduce problem when the 
leader changes, and I sent Errors.StaleLeaderEpochCode to the client. But the 
client recognized it as Errors.NETWORK_EXCEPTION and threw an exception: "The 
server disconnected before a response was received." This exception confused me 
a lot; then I found a mismatch between the two mappings: error code 13 means 
'stale leader epoch' in ErrorMapping, but 'network exception' in Errors. So it 
is necessary to have a consistent error mapping.
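
To illustrate the hazard (a hypothetical Scala sketch, not Kafka code):

    // Two independent code-to-error tables that disagree on code 13, as
    // described above. The string values are illustrative only.
    val coreErrorMapping = Map(13 -> "StaleLeaderEpochCode")  // core's ErrorMapping.scala
    val commonErrors     = Map(13 -> "NETWORK_EXCEPTION")     // common's Errors.java
    // A client decoding with the common table misreads the broker's intent:
    assert(coreErrorMapping(13) == commonErrors(13))          // fails: the mappings diverge

Keeping a single table on both sides removes this whole class of bug.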

> Remove duplicate error mapping functionality
> 
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and issues with consistency, we should remove 
> ErrorMapping.scala in core in favor of Errors.java in common. Any duplicated 
> exceptions in core should be removed as well to ensure the mapping is correct.





[jira] [Created] (KAFKA-2959) Remove temporary mapping to deserialize functions in RequestChannel

2015-12-07 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-2959:
--

 Summary: Remove temporary mapping to deserialize functions in 
RequestChannel 
 Key: KAFKA-2959
 URL: https://issues.apache.org/jira/browse/KAFKA-2959
 Project: Kafka
  Issue Type: Sub-task
Reporter: Grant Henke


Once the old Request & Response objects are no longer used, we can delete the 
legacy mapping maintained in RequestChannel.scala.





[jira] [Updated] (KAFKA-2903) FileMessageSet's read method maybe has problem when start is not zero

2015-12-07 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2903:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> FileMessageSet's read method maybe has problem when start is not zero
> -
>
> Key: KAFKA-2903
> URL: https://issues.apache.org/jira/browse/KAFKA-2903
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.2.1, 0.9.0.0
>Reporter: Pengwei
>Assignee: Jay Kreps
> Fix For: 0.9.1.0
>
>
> now the code is:
> def read(position: Int, size: Int): FileMessageSet = {
>   ...
>   new FileMessageSet(file,
>                      channel,
>                      start = this.start + position,
>                      end = math.min(this.start + position + size, sizeInBytes()))
> }
> if this.start is not 0, then end is only the FileMessageSet's size, not the
> actual end position.
> the end parameter should be:
>   end = math.min(this.start + position + size, this.start + sizeInBytes())





[jira] [Updated] (KAFKA-2958) Remove duplicate API key mapping functionality

2015-12-07 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2958:
---
Status: Patch Available  (was: In Progress)

> Remove duplicate API key mapping functionality
> --
>
> Key: KAFKA-2958
> URL: https://issues.apache.org/jira/browse/KAFKA-2958
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps request API keys and names. 
> To prevent errors and issues with consistency, we should remove 
> RequestKeys.scala in core in favor of ApiKeys.java in common.





[GitHub] kafka pull request: KAFKA-2958: Remove duplicate API key mapping f...

2015-12-07 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/637

KAFKA-2958: Remove duplicate API key mapping functionality



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka api-keys

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/637.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #637


commit a6a6c3c449ab84cee9178f0997c2c16356d4b391
Author: Grant Henke 
Date:   2015-12-08T01:57:29Z

KAFKA-2958: Remove duplicate API key mapping functionality






[jira] [Commented] (KAFKA-2958) Remove duplicate API key mapping functionality

2015-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046186#comment-15046186
 ] 

ASF GitHub Bot commented on KAFKA-2958:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/637

KAFKA-2958: Remove duplicate API key mapping functionality



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka api-keys

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/637.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #637


commit a6a6c3c449ab84cee9178f0997c2c16356d4b391
Author: Grant Henke 
Date:   2015-12-08T01:57:29Z

KAFKA-2958: Remove duplicate API key mapping functionality




> Remove duplicate API key mapping functionality
> --
>
> Key: KAFKA-2958
> URL: https://issues.apache.org/jira/browse/KAFKA-2958
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps request API keys and names. 
> To prevent errors and issues with consistency, we should remove 
> RequestKeys.scala in core in favor of ApiKeys.java in common.





[jira] [Work started] (KAFKA-2958) Remove duplicate API key mapping functionality

2015-12-07 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2958 started by Grant Henke.
--
> Remove duplicate API key mapping functionality
> --
>
> Key: KAFKA-2958
> URL: https://issues.apache.org/jira/browse/KAFKA-2958
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps request API keys and names. 
> To prevent errors and issues with consistency, we should remove 
> RequestKeys.scala in core in favor of ApiKeys.java in common.





[jira] [Created] (KAFKA-2958) Remove duplicate API key mapping functionality

2015-12-07 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-2958:
--

 Summary: Remove duplicate API key mapping functionality
 Key: KAFKA-2958
 URL: https://issues.apache.org/jira/browse/KAFKA-2958
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Grant Henke
Assignee: Grant Henke


Kafka common and core both have a class that maps request API keys and names. 
To prevent errors and issues with consistency, we should remove 
RequestKeys.scala in core in favor of ApiKeys.java in common.





Re: [DISCUSS] KIP-32 - Add CreateTime and LogAppendTime to Kafka message

2015-12-07 Thread Becket Qin
It looks like the format of the previous email was messed up, so I am sending it again.

Just to recap, the last proposal Jay made (with some implementation
details added) was:

1. Allow the user to stamp the message at produce time.

2. When the broker receives a message, it takes a look at the difference between
its local time and the timestamp in the message.
  a. If the time difference is within a configurable
max.message.time.difference.ms, the server will accept it and append it to
the log.
  b. If the time difference is beyond the configured
max.message.time.difference.ms, the server will override the timestamp with
its current local time and append the message to the log.
  c. The default value of max.message.time.difference.ms would be set to
Long.MaxValue.

3. The configurable time difference threshold max.message.time.difference.ms
will be a per-topic configuration.

4. The index will be built so that it has the following guarantees.
  a. If the user searches by timestamp:
  - all the messages after that timestamp will be consumed.
  - the user might see earlier messages.
  b. The log retention will look at the last entry in the time index file,
because the last entry will be the latest timestamp in the entire log segment.
If that entry expires, the log segment will be deleted.
  c. The log rolling has to depend on the earliest timestamp. In this case
we may need to keep an in-memory timestamp only for the current active log.
On recovery, we will need to read the active log segment to get the timestamp
of the earliest messages.

5. The downsides of this proposal are:
  a. The timestamp might not be monotonically increasing.
  b. The log retention might become non-deterministic, i.e. when a message
will be deleted now depends on the timestamps of the other messages in the
same log segment, and those timestamps are provided by the user within a
range depending on the time difference threshold configuration.
  c. The semantic meaning of the timestamp in the messages could be a little
vague, because some of them come from the producer and some of them are
overwritten by brokers.

6. Although the proposal has some downsides, it gives users the flexibility
to use the timestamp.
  a. If the threshold is set to Long.MaxValue, the timestamp in the message is
equivalent to CreateTime.
  b. If the threshold is set to 0, the timestamp in the message is equivalent
to LogAppendTime.

This proposal actually allows the user to use either CreateTime or LogAppendTime
without introducing two timestamp concepts at the same time. I have updated
the wiki for KIP-32 and KIP-33 with this proposal.
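
As a sketch of the rule in point 2 (a hypothetical Scala fragment, not code
from the KIP or a patch):

    // Broker-side timestamp resolution per point 2 above.
    def resolveTimestamp(producerTimestampMs: Long,
                         brokerNowMs: Long,
                         maxDifferenceMs: Long): Long =
      if (math.abs(brokerNowMs - producerTimestampMs) <= maxDifferenceMs)
        producerTimestampMs  // within the threshold: keep the producer's CreateTime
      else
        brokerNowMs          // beyond the threshold: override with LogAppendTime

    // maxDifferenceMs = Long.MaxValue  => always CreateTime    (point 6a)
    // maxDifferenceMs = 0              => always LogAppendTime (point 6b)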

One thing I am thinking is that, instead of having a time difference threshold,
should we simply have a TimestampType configuration? Because in most cases,
people will either set the threshold to 0 or Long.MaxValue. Setting anything in
between will make the timestamp in the message meaningless to the user, because
the user doesn't know whether the timestamp has been overwritten by the brokers.

Any thoughts?

Thanks,
Jiangjie (Becket) Qin

On Mon, Dec 7, 2015 at 10:33 AM, Jiangjie Qin wrote:

> Bump up this thread.
>
> Just to recap, the last proposal Jay made (with some implementation details
> added) was:
>
> 1. Allow the user to stamp the message at produce time.
> 2. When the broker receives a message, it takes a look at the difference
>    between its local time and the timestamp in the message.
>    - If the time difference is within a configurable
>      max.message.time.difference.ms, the server will accept it and append
>      it to the log.
>    - If the time difference is beyond the configured
>      max.message.time.difference.ms, the server will override the timestamp
>      with its current local time and append the message to the log.
>    - The default value of max.message.time.difference.ms would be set to
>      Long.MaxValue.
> 3. The configurable time difference threshold max.message.time.difference.ms
>    will be a per-topic configuration.
> 4. The index will be built so that it has the following guarantees.
>    - If the user searches by timestamp:
>      - all the messages after that timestamp will be consumed.
>      - the user might see earlier messages.
>    - The log retention will look at the last entry in the time index file,
>      because the last entry will be the latest timestamp in the entire log
>      segment. If that entry expires, the log segment will be deleted.
>    - The log rolling has to depend on the earliest timestamp. In this case
>      we may need to keep an in-memory timestamp only for the current active
>      log. On recovery, we will need to read the active log segment to get
>      the timestamp of the earliest messages.
> 5. The downsides of this proposal are:
>    - The timestamp might not be monotonically increasing.
>    - The log retention might become non-deterministic, i.e. when a
>      message will be deleted now depends on the timestamps of the

Build failed in Jenkins: kafka-trunk-jdk7 #878

2015-12-07 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2804: manage changelog topics through ZK in PartitionAssignor

--
[...truncated 1397 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorRespon

Build failed in Jenkins: kafka-trunk-jdk8 #207

2015-12-07 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2804: manage changelog topics through ZK in PartitionAssignor

--
[...truncated 1430 lines...]

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest >

[jira] [Resolved] (KAFKA-2804) Create / Update changelog topics upon state store initialization

2015-12-07 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2804.
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 579
[https://github.com/apache/kafka/pull/579]

> Create / Update changelog topics upon state store initialization
> 
>
> Key: KAFKA-2804
> URL: https://issues.apache.org/jira/browse/KAFKA-2804
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> When state store instances that are logging-backed are initialized, we need 
> to check whether the corresponding changelog topics have been created with the 
> right number of partitions:
> 1) If the topic does not exist, create it.
> 2) If the expected #partitions < the actual #partitions, delete and re-create the topic.
> 3) If the expected #partitions > the actual #partitions, add partitions.
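
A minimal Scala sketch of that reconciliation rule (the admin helpers below are
hypothetical placeholders, not Kafka's actual API):

    // Hypothetical stand-ins for the real topic-admin operations.
    def createTopic(topic: String, partitions: Int): Unit = ???
    def deleteTopic(topic: String): Unit = ???
    def addPartitions(topic: String, count: Int): Unit = ???

    // The three cases from the issue description, in order.
    def ensureChangelogTopic(topic: String, expected: Int, actual: Option[Int]): Unit =
      actual match {
        case None =>
          createTopic(topic, expected)        // 1) topic missing: create it
        case Some(a) if expected < a =>
          deleteTopic(topic)                  // 2) too many partitions:
          createTopic(topic, expected)        //    delete and re-create
        case Some(a) if expected > a =>
          addPartitions(topic, expected - a)  // 3) too few partitions: add more
        case _ =>
          ()                                  // partition count already matches
      }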





[jira] [Commented] (KAFKA-2804) Create / Update changelog topics upon state store initialization

2015-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15045965#comment-15045965
 ] 

ASF GitHub Bot commented on KAFKA-2804:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/579


> Create / Update changelog topics upon state store initialization
> 
>
> Key: KAFKA-2804
> URL: https://issues.apache.org/jira/browse/KAFKA-2804
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> When state store instances that are logging-backed are initialized, we need 
> to check whether the corresponding changelog topics have been created with the 
> right number of partitions:
> 1) If the topic does not exist, create it.
> 2) If the expected #partitions < the actual #partitions, delete and re-create the topic.
> 3) If the expected #partitions > the actual #partitions, add partitions.





[GitHub] kafka pull request: KAFKA-2804: manage changelog topics through ZK...

2015-12-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/579




[jira] [Work started] (KAFKA-2946) DeleteTopic - protocol and server side implementation

2015-12-07 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2946 started by Grant Henke.
--
> DeleteTopic - protocol and server side implementation
> -
>
> Key: KAFKA-2946
> URL: https://issues.apache.org/jira/browse/KAFKA-2946
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
>






[jira] [Created] (KAFKA-2957) Fix typos in Kafka documentation

2015-12-07 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-2957:
--

 Summary: Fix typos in Kafka documentation
 Key: KAFKA-2957
 URL: https://issues.apache.org/jira/browse/KAFKA-2957
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Vahid Hashemian
Priority: Trivial


There are some minor typos in Kafka documentation. Example:

- Supporting these uses led use to a design with a number of unique elements, 
more akin to a database log then a traditional messaging system.

This should read as:

- Supporting these uses led *us* to a design with a number of unique elements, 
more akin to a database log *than* a traditional messaging system.





Re: Issue with Gradle "Build Model"

2015-12-07 Thread Grant Henke
The Hadoop consumer and producer contrib modules were outdated and broken; as a
result, they were removed in the 0.9.0 release and trunk via
https://issues.apache.org/jira/browse/KAFKA-2783

Thanks,
Grant

On Mon, Dec 7, 2015 at 4:01 PM, Vahid S Hashemian  wrote:

> Grant,
>
> Thank you for the quick reply and the pointer.
>
> I had forgotten to mention that I was trying to build the 0.9.0 branch.
> I was able to cherry-pick the commit you mentioned and do "Build Model"
> without an issue.
> However, I do not see "contrib" (hadoop-consumer, hadoop-producer) in the
> output structure.
> I'm just starting with Kafka, so I'm not sure whether I should see that as
> an issue with the build.
>
> Thanks again.
> --Vahid
>
>
>
>
> From:   Grant Henke 
> To: dev@kafka.apache.org
> Date:   12/07/2015 01:05 PM
> Subject:Re: Issue with Gradle "Build Model"
>
>
>
> You are likely running into the issue found and solved in this pull
> request: https://github.com/apache/kafka/pull/509
>
> Are you trying to build the 0.9.0 branch? We may need to cherry pick that
> commit into that branch.
>
> Thanks,
> grant
>
> On Mon, Dec 7, 2015 at 2:50 PM, Vahid S Hashemian
>  > wrote:
>
> > Hi,
> >
> > I am following the instructions provided in
> >
>
> https://cwiki.apache.org/confluence/display/KAFKA/Eclipse-Scala-Gradle-Git+Developement+Environment+Setup
> to
> > set up my dev environment for Kafka development / contribution and am
> > running into an issue in step 4 (Let the project show up).
> >
> > When I follow the sub-steps there and click on Build Model, I get this
> > error message:
> >
> > Error in runnable 'Creating Gradle model'
> >
> > docs/producer_config.html (No such file or directory)
> > See error log for details
> >
> >
> >
> > Of course, when I look inside my kafka project folder that specified file
> > does not exist.
> >
> > Any idea what is going wrong and how to fix it?
> >
> > Thanks.
> > --Vahid
> >
>
>
>
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>
>
>
>
>


-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


[jira] [Commented] (KAFKA-2022) simpleconsumer.fetch(req) throws a java.nio.channels.ClosedChannelException: null exception when the original leader fails instead of being trapped in the fetchResponse

2015-12-07 Thread Jinder Aujla (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15045890#comment-15045890
 ] 

Jinder Aujla commented on KAFKA-2022:
-

Hi, just wondering if there was an update on this?

Thanks

> simpleconsumer.fetch(req) throws a java.nio.channels.ClosedChannelException: 
> null exception when the original leader fails instead of being trapped in the 
> fetchResponse api while consuming messages
> -
>
> Key: KAFKA-2022
> URL: https://issues.apache.org/jira/browse/KAFKA-2022
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.2.1
> Environment: 3 Linux nodes, with both ZooKeeper & brokers running under 
> respective users on each.
>Reporter: Muqeet Mohammed Ali
>Assignee: Neha Narkhede
>
> simpleconsumer.fetch(req) throws a java.nio.channels.ClosedChannelException: 
> null exception when the original leader fails, instead of being trapped in 
> the fetchResponse API while consuming messages. My understanding was that any 
> fetch failure can be found via the fetchResponse.hasError() call and then be 
> handled, e.g. by fetching the new leader in this case. Below is the relevant 
> code snippet from the simple consumer, with a comment marking the line causing 
> the exception. Can you please comment on this?
> if (simpleconsumer == null) {
>     simpleconsumer = new SimpleConsumer(leaderAddress.getHostName(),
>                                         leaderAddress.getPort(),
>                                         consumerTimeout,
>                                         consumerBufferSize,
>                                         consumerId);
> }
> FetchRequest req = new FetchRequestBuilder().clientId(getConsumerId())
>     .addFetch(topic, partition, offsetManager.getTempOffset(), consumerBufferSize)
>     // Note: the fetchSize might need to be increased
>     // if large batches are written to Kafka
>     .build();
> // the exception is thrown at the line below
> FetchResponse fetchResponse = simpleconsumer.fetch(req);
> if (fetchResponse.hasError()) {
>     numErrors++;
> etc...
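
A sketch of the handling this behavior forces (hypothetical Scala, continuing
the fragment above): because the ClosedChannelException escapes fetch() itself,
leader failure has to be caught around the call rather than detected through
fetchResponse.hasError().

    try {
      val fetchResponse = simpleconsumer.fetch(req)
      if (fetchResponse.hasError()) {
        numErrors += 1   // broker-reported fetch errors land here
      }
    } catch {
      case _: java.nio.channels.ClosedChannelException =>
        // The old leader is gone: close this consumer, re-discover the
        // partition leader, and rebuild the SimpleConsumer before retrying.
        simpleconsumer.close()
    }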





[jira] [Commented] (KAFKA-1911) Log deletion on stopping replicas should be async

2015-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15045855#comment-15045855
 ] 

ASF GitHub Bot commented on KAFKA-1911:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/636

KAFKA-1911

Made delete topic on brokers async

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-1911

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/636.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #636


commit 86a432c21eb2b206ffe120a4a4172a087fb109d4
Author: Mayuresh Gharat 
Date:   2015-12-07T22:01:22Z

Made Delete topic on the brokers Async




> Log deletion on stopping replicas should be async
> -
>
> Key: KAFKA-1911
> URL: https://issues.apache.org/jira/browse/KAFKA-1911
> Project: Kafka
>  Issue Type: Bug
>  Components: log, replication
>Reporter: Joel Koshy
>Assignee: Mayuresh Gharat
>  Labels: newbie++, newbiee
>
> If a StopReplicaRequest sets delete=true then we do a file.delete on the file 
> message sets. I was under the impression that this is fast but it does not 
> seem to be the case.
> On a partition reassignment in our cluster the local time for stop replica 
> took nearly 30 seconds.
> {noformat}
> Completed request:Name: StopReplicaRequest; Version: 0; CorrelationId: 467; 
> ClientId: ;DeletePartitions: true; ControllerId: 1212; ControllerEpoch: 
> 53 from 
> client/...:45964;totalTime:29191,requestQueueTime:1,localTime:29190,remoteTime:0,responseQueueTime:0,sendTime:0
> {noformat}
> This ties up one API thread for the duration of the request.
> Specifically in our case, the queue times for other requests also went up and 
> producers to the partition that was just deleted on the old leader took a 
> while to refresh their metadata (see KAFKA-1303) and eventually ran out of 
> retries on some messages leading to data loss.
> I think the log deletion in this case should be fully asynchronous although 
> we need to handle the case when a broker may respond immediately to the 
> stop-replica-request but then go down after deleting only some of the log 
> segments.
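
A minimal Scala sketch of the asynchronous deletion being proposed (an assumed
shape, not the actual patch):

    import java.util.concurrent.Executors

    // Acknowledge the StopReplicaRequest promptly and push the slow file
    // deletion onto a background thread, so an API handler is not tied up
    // for the duration of the delete.
    val logDeletionExecutor = Executors.newSingleThreadExecutor()

    def deleteLogAsync(segmentFiles: Seq[java.io.File]): Unit =
      logDeletionExecutor.submit(new Runnable {
        override def run(): Unit =
          segmentFiles.foreach(_.delete())  // slow disk I/O off the API thread
      })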





[GitHub] kafka pull request: KAFKA-1911

2015-12-07 Thread MayureshGharat
GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/636

KAFKA-1911

Made delete topic on brokers async

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-1911

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/636.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #636


commit 86a432c21eb2b206ffe120a4a4172a087fb109d4
Author: Mayuresh Gharat 
Date:   2015-12-07T22:01:22Z

Made Delete topic on the brokers Async






Re: Issue with Gradle "Build Model"

2015-12-07 Thread Vahid S Hashemian
Grant,

Thank you for the quick reply and the pointer.

I had forgotten to mention that I was trying to build the 0.9.0 branch.
I was able to cherry-pick the commit you mentioned and do "Build Model" 
without an issue.
However, I do not see "contrib" (hadoop-consumer, hadoop-producer) in the 
output structure.
I'm just starting with Kafka, so I'm not sure whether I should see that as 
an issue with the build.

Thanks again.
--Vahid




From:   Grant Henke 
To: dev@kafka.apache.org
Date:   12/07/2015 01:05 PM
Subject:Re: Issue with Gradle "Build Model"



You are likely running into the issue found and solved in this pull
request: https://github.com/apache/kafka/pull/509

Are you trying to build the 0.9.0 branch? We may need to cherry pick that
commit into that branch.

Thanks,
grant

On Mon, Dec 7, 2015 at 2:50 PM, Vahid S Hashemian 
 wrote:

> Hi,
>
> I am following the instructions provided in
> 
> https://cwiki.apache.org/confluence/display/KAFKA/Eclipse-Scala-Gradle-Git+Developement+Environment+Setup
> to
> set up my dev environment for Kafka development / contribution and am
> running into an issue in step 4 (Let the project show up).
>
> When I follow the sub-steps there and click on Build Model, I get this
> error message:
>
> Error in runnable 'Creating Gradle model'
>
> docs/producer_config.html (No such file or directory)
> See error log for details
>
>
>
> Of course, when I look inside my kafka project folder that specified file
> does not exist.
>
> Any idea what is going wrong and how to fix it?
>
> Thanks.
> --Vahid
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke






Re: Issue with Gradle "Build Model"

2015-12-07 Thread Grant Henke
You are likely running into the issue found and solved in this pull
request: https://github.com/apache/kafka/pull/509

Are you trying to build the 0.9.0 branch? We may need to cherry pick that
commit into that branch.

Thanks,
grant

On Mon, Dec 7, 2015 at 2:50 PM, Vahid S Hashemian  wrote:

> Hi,
>
> I am following the instructions provided in
> https://cwiki.apache.org/confluence/display/KAFKA/Eclipse-Scala-Gradle-Git+Developement+Environment+Setup
> to
> set up my dev environment for Kafka development / contribution and am
> running into an issue in step 4 (Let the project show up).
>
> When I follow the sub-steps there and click on Build Model, I get this
> error message:
>
> Error in runnable 'Creating Gradle model'
>
> docs/producer_config.html (No such file or directory)
> See error log for details
>
>
>
> Of course, when I look inside my kafka project folder that specified file
> does not exist.
>
> Any idea what is going wrong and how to fix it?
>
> Thanks.
> --Vahid
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


Issue with Gradle "Build Model"

2015-12-07 Thread Vahid S Hashemian
Hi,

I am following the instructions provided in 
https://cwiki.apache.org/confluence/display/KAFKA/Eclipse-Scala-Gradle-Git+Developement+Environment+Setup
 
to set up my dev environment for Kafka development / contribution and am 
running into an issue in step 4 (Let the project show up).

When I follow the sub-steps there and click on Build Model, I get this 
error message:

Error in runnable 'Creating Gradle model'

docs/producer_config.html (No such file or directory)
See error log for details



Of course, when I look inside my kafka project folder, the specified file 
does not exist.

Any idea what is going wrong and how to fix it?

Thanks.
--Vahid



Re: [DISCUSS] KIP-32 - Add CreateTime and LogAppendTime to Kafka message

2015-12-07 Thread Jiangjie Qin
Bump up this thread.

Just to recap, the last proposal Jay made (with some implementation details
added) was:

   1. Allow the user to stamp the message at produce time.
   2. When the broker receives a message, it looks at the difference
   between its local time and the timestamp in the message (a minimal
   sketch of this check follows the list).
   - If the time difference is within the configurable
   max.message.time.difference.ms, the server will accept the message and
   append it to the log.
   - If the time difference is beyond the configured
   max.message.time.difference.ms, the server will override the timestamp
   with its current local time and append the message to the log.
   - The default value of max.message.time.difference.ms would be
   Long.MaxValue.
   3. The configurable time difference threshold
   max.message.time.difference.ms will be a per-topic configuration.
   4. The index will be built so that it has the following guarantees:
   - If the user searches by timestamp, all the messages after that
   timestamp will be consumed, though the user might also see earlier
   messages.
   - Log retention will look at the last entry in the time index file,
   because that entry holds the latest timestamp in the entire log segment.
   If that entry expires, the log segment will be deleted.
   - Log rolling has to depend on the earliest timestamp. In this case we
   may need to keep an in-memory timestamp only for the current active log
   segment. On recovery, we will need to read the active log segment to get
   the timestamp of its earliest message.
   5. The downsides of this proposal are:
   - The timestamp might not be monotonically increasing.
   - Log retention might become non-deterministic, i.e. when a message
   will be deleted now depends on the timestamps of the other messages in
   the same log segment, and those timestamps are provided by the user
   within a range determined by the time difference threshold
   configuration.
   - The semantic meaning of the timestamp in the messages could be a
   little vague, because some of them come from the producer and some are
   overwritten by brokers.
   6. Although the proposal has some downsides, it gives the user the
   flexibility to use the timestamp:
   - If the threshold is set to Long.MaxValue, the timestamp in the
   message is equivalent to CreateTime.
   - If the threshold is set to 0, the timestamp in the message is
   equivalent to LogAppendTime.
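
To make point 2 concrete, here is a minimal sketch in Java of the broker-side
check; the class, method, and parameter names are placeholders, not actual
Kafka code:

public class TimestampPolicySketch {
    static long resolveTimestamp(long messageTimestampMs, long maxMessageTimeDifferenceMs) {
        long nowMs = System.currentTimeMillis();
        // Within the threshold: keep the producer-supplied timestamp (CreateTime).
        if (Math.abs(nowMs - messageTimestampMs) <= maxMessageTimeDifferenceMs)
            return messageTimestampMs;
        // Beyond the threshold: override with the broker's local time (LogAppendTime).
        return nowMs;
    }

    public static void main(String[] args) {
        long produced = System.currentTimeMillis() - 60000L; // stamped a minute ago
        System.out.println(resolveTimestamp(produced, Long.MAX_VALUE)); // keeps CreateTime
        System.out.println(resolveTimestamp(produced, 0L));             // becomes LogAppendTime
    }
}

Note how the two threshold extremes described in point 6 fall out of the same
check.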

This proposal actually allows the user to use either CreateTime or
LogAppendTime without introducing two timestamp concepts at the same time. I
have updated the wiki for KIP-32 and KIP-33 with this proposal.

One thing I am wondering is whether, instead of having a time difference
threshold, we should simply have a TimestampType configuration, because in
most cases people will either set the threshold to 0 or Long.MaxValue.
Setting anything in between makes the timestamp in the message meaningless
to the user, since the user doesn't know whether the timestamp has been
overwritten by the brokers.

Any thoughts?

Thanks,
Jiangjie (Becket) Qin

On Mon, Oct 26, 2015 at 1:23 PM, Jiangjie Qin  wrote:

> Hi Jay,
>
> Thanks for such a detailed explanation. I think we both are trying to make
> CreateTime work for us if possible. To me, "work" means clear guarantees
> on:
> 1. Log retention time enforcement.
> 2. Log rolling time enforcement (this might be less of a concern, as you
> pointed out).
> 3. Applications searching for messages by time.
>
> WRT (1), I agree the expectation for log retention might be different
> depending on who we ask. But my concern is about the level of guarantee we
> give to the user. My observation is that a clear guarantee to the user is
> critical regardless of the mechanism we choose, and this is the subtle but
> important difference between using LogAppendTime and CreateTime.
>
> Let's say user asks this question: How long will my message stay in Kafka?
>
> If we use LogAppendTime for log retention, the answer is that the message
> will stay in Kafka for the retention time after it is produced (to be more
> precise, upper bounded by log.roll.ms + log.retention.ms). The user has a
> clear guarantee, and they can decide whether or not to put the message into
> Kafka, or how to adjust the retention time according to their requirements.
> If we use create time for log retention, the answer would be "it depends".
> The best answer we can give is "at least retention.ms", because there is no
> guarantee when the messages will be deleted after that. If a message sits
> somewhere behind a larger create time, the message might stay longer than
> expected, but we don't know how much longer, because it depends on the
> create time. In this case, it is hard for the user to decide what to do.
>
> I am worried about this because a blurry guarantee has bitten us before,
> e.g. topic creation. We have received many questions like "why is my topic
> not there after I created

[jira] [Comment Edited] (KAFKA-2953) Kafka documentation is really wise

2015-12-07 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15045398#comment-15045398
 ] 

Jay Kreps edited comment on KAFKA-2953 at 12/7/15 6:30 PM:
---

I like that this bug's title says "wise" instead of "wide". Favorite typo 
ever. :-)

Good call. I think the issue is the default field, which is getting blown out 
by the massive class names. Not sure what the fix for that is, but I agree 
it's pretty unusable right now. Those class names are also getting displayed 
wrong, as what is printed is getClass() rather than getClass().getName(), and 
hence we are adding brackets, which is actually incorrect (i.e. if you put 
that value in, it would fail).
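
As a standalone illustration of the bracket problem (plain Java, not Kafka
code):

{code}
import java.util.Arrays;
import java.util.List;

public class ClassNamePrinting {
    public static void main(String[] args) {
        List<Class<?>> defaults = Arrays.<Class<?>>asList(String.class, Long.class);
        // Printing the Class objects themselves yields bracketed, unusable values:
        System.out.println(defaults);               // [class java.lang.String, class java.lang.Long]
        // getName() yields a value that could be pasted back into a config:
        System.out.println(String.class.getName()); // java.lang.String
    }
}
{code}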


was (Author: jkreps):
I like that this bug's title says "wise" instead of "wide. Favorite typo ever. 
:-)

Good call. I think the issue is the default field which is getting blown out by 
the massive class names. Not sure what the fix for that is, but I agree it's 
pretty unusable right now. Those class names are also getting displayed name as 
what is getting printed is getClass() not getClass().getName() and hence we are 
adding brackets which is actually incorrect (i.e. if you put that value in it 
would fail).

> Kafka documentation is really wise
> --
>
> Key: KAFKA-2953
> URL: https://issues.apache.org/jira/browse/KAFKA-2953
> Project: Kafka
>  Issue Type: Bug
>  Components: website
> Environment: Google Chrome Version 47.0.2526.73 (64-bit)
>Reporter: Jens Rantil
>Priority: Trivial
>
> The page at http://kafka.apache.org/documentation.html is extremely wide, 
> which is mostly annoying.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2953) Kafka documentation is really wise

2015-12-07 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15045398#comment-15045398
 ] 

Jay Kreps commented on KAFKA-2953:
--

I like that this bug's title says "wise" instead of "wide". Favorite typo 
ever. :-)

Good call. I think the issue is the default field, which is getting blown out 
by the massive class names. Not sure what the fix for that is, but I agree 
it's pretty unusable right now. Those class names are also getting displayed 
wrong, as what is printed is getClass() rather than getClass().getName(), and 
hence we are adding brackets, which is actually incorrect (i.e. if you put 
that value in, it would fail).

> Kafka documentation is really wise
> --
>
> Key: KAFKA-2953
> URL: https://issues.apache.org/jira/browse/KAFKA-2953
> Project: Kafka
>  Issue Type: Bug
>  Components: website
> Environment: Google Chrome Version 47.0.2526.73 (64-bit)
>Reporter: Jens Rantil
>Priority: Trivial
>
> The page at http://kafka.apache.org/documentation.html is extremely wide, 
> which is mostly annoying.





[GitHub] kafka pull request: TEST: remove checkMaybeGetRemainingTime in Kaf...

2015-12-07 Thread guozhangwang
Github user guozhangwang closed the pull request at:

https://github.com/apache/kafka/pull/629




[jira] [Commented] (KAFKA-2948) Kafka producer does not cope well with topic deletions

2015-12-07 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15045372#comment-15045372
 ] 

Jiangjie Qin commented on KAFKA-2948:
-

[~rsivaram] As you pointed out, we never remove topics from the topic set in 
producer metadata. I am not sure removing the topic from the set when we see 
the UNKNOWN_TOPIC_OR_PARTITION error code is the right way to fix this, 
because UNKNOWN_TOPIC_OR_PARTITION can also occur in other cases, such as 
partition reassignment, where the producer is supposed to retry.

Maybe a TTL is a better solution here, e.g. if the producer hasn't sent data 
to a particular topic since the last metadata refresh, we can remove the 
topic from the metadata topic set on the next refresh. A sketch of the idea 
follows.
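
The names here are hypothetical; the real change would live in the producer's 
metadata bookkeeping:

{code}
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

class TopicTtlTracker {
    private final Map<String, Long> topicLastUsedMs = new HashMap<String, Long>();

    // Called whenever the producer sends (or tries to send) to a topic.
    synchronized void recordTopicUse(String topic, long nowMs) {
        topicLastUsedMs.put(topic, nowMs);
    }

    // Called on each metadata refresh: drop topics that have not been used
    // since the previous refresh, so deleted topics stop being re-requested.
    synchronized void expireUnusedTopics(long previousRefreshMs) {
        Iterator<Map.Entry<String, Long>> it = topicLastUsedMs.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue() < previousRefreshMs)
                it.remove();
        }
    }
}
{code}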

> Kafka producer does not cope well with topic deletions
> --
>
> Key: KAFKA-2948
> URL: https://issues.apache.org/jira/browse/KAFKA-2948
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Kafka producer gets metadata for topics when send is invoked, and thereafter 
> it attempts to keep the metadata up-to-date without any explicit requests 
> from the client. This works well in static environments, but when topics are 
> added or deleted, the list of topics in Metadata grows but never shrinks. 
> Apart from being a memory leak, this results in constant requests for 
> metadata for deleted topics.
> We are running into this issue with the Confluent REST server, where topic 
> deletions from tests are filling up the logs with warnings about unknown 
> topics. Auto-create is turned off in our Kafka cluster.
> I am happy to provide a fix, but am not sure what the right fix is. Does it 
> make sense to remove topics from the metadata list when an 
> UNKNOWN_TOPIC_OR_PARTITION response is received if there are no outstanding 
> sends? It doesn't look very straightforward to do this, so any alternative 
> suggestions are welcome.





[jira] [Created] (KAFKA-2956) Upgrade section of the docs is still a bit anemic

2015-12-07 Thread Jay Kreps (JIRA)
Jay Kreps created KAFKA-2956:


 Summary: Upgrade section of the docs is still a bit anemic
 Key: KAFKA-2956
 URL: https://issues.apache.org/jira/browse/KAFKA-2956
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Jay Kreps


Upgrades are pretty touchy since one wrong move can really mess up your 
cluster. We want to give people all the info about how to do this safely so 
that they actually do it (we've historically had people lagging behind on 
versions for a very long time, mostly out of fear).

Here is some of the obvious missing stuff:
1. Break out the two cases of "rolling upgrade" vs. "downtime upgrade"; most 
people will be doing both. We have both, but they should each have their own 
section.
2. What about clients? Do they need to go first? Second? What about non-Java 
clients?
3. What about mirroring? Can old clusters mirror to new clusters? Can new 
clusters mirror to old clusters? Do the servers need to upgrade first, or the 
mirror makers?

Basically, it would be good to really walk people through this step by step 
so that they get in the habit of doing it.





[jira] [Created] (KAFKA-2955) Add Prompt to kafka-console-producer

2015-12-07 Thread Jesse Anderson (JIRA)
Jesse Anderson created KAFKA-2955:
-

 Summary: Add Prompt to kafka-console-producer
 Key: KAFKA-2955
 URL: https://issues.apache.org/jira/browse/KAFKA-2955
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Affects Versions: 0.9.0.0
Reporter: Jesse Anderson
Assignee: Jun Rao


A common source of confusion for people using the kafka-console-producer is 
the lack of a prompt. People think that kafka-console-producer is still 
starting up or connecting. Adding a ">" prompt to show that the 
kafka-console-producer is ready will fix that. A sketch of the behavior 
follows.
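
This is an illustration only, not the actual console producer code, assuming 
a plain stdin loop:

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class PromptedReader {
    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        System.out.print("> ");  // show the user we are ready for input
        String line;
        while ((line = in.readLine()) != null) {
            // a real console producer would send 'line' to the topic here
            System.out.print("> ");
        }
    }
}
{code}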





[GitHub] kafka pull request: HOTFIX: fix ProcessorStateManager to use corre...

2015-12-07 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/635

HOTFIX: fix ProcessorStateManager to use correct ktable partitions

@guozhangwang 

* fix ProcessorStateManager to use correct ktable partitions
* more ktable tests

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka more_ktable_test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/635.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #635


commit 6b5b287a88dccdd095be574e4e96c4ae3543a879
Author: Yasuhiro Matsuda 
Date:   2015-12-04T23:16:36Z

add more ktable test

commit e7ba48810c594424b56f0b1cbec922acdd974d01
Author: Yasuhiro Matsuda 
Date:   2015-12-07T17:28:18Z

Merge branch 'trunk' of github.com:apache/kafka into more_ktable_test






[jira] [Created] (KAFKA-2954) reserved.broker.max.id is not fully documented

2015-12-07 Thread Jens Rantil (JIRA)
Jens Rantil created KAFKA-2954:
--

 Summary: reserved.broker.max.id is not fully documented
 Key: KAFKA-2954
 URL: https://issues.apache.org/jira/browse/KAFKA-2954
 Project: Kafka
  Issue Type: Bug
  Components: website
Reporter: Jens Rantil
Priority: Minor


reserved.broker.max.id doesn't have a description on 
http://kafka.apache.org/documentation.html#configuration.
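
For what it's worth, my understanding of the behavior a description should 
cover (worth verifying against the broker code) is roughly:

{noformat}
# server.properties (illustrative)
broker.id=-1                  # -1 (the default) asks the broker to auto-generate an id
reserved.broker.max.id=1000   # user-assigned broker.id values must not exceed this;
                              # auto-generated ids are assigned above it
{noformat}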





[jira] [Created] (KAFKA-2953) Kafka documentation is really wise

2015-12-07 Thread Jens Rantil (JIRA)
Jens Rantil created KAFKA-2953:
--

 Summary: Kafka documentation is really wise
 Key: KAFKA-2953
 URL: https://issues.apache.org/jira/browse/KAFKA-2953
 Project: Kafka
  Issue Type: Bug
  Components: website
 Environment: Google Chrome Version 47.0.2526.73 (64-bit)
Reporter: Jens Rantil
Priority: Trivial


The page at http://kafka.apache.org/documentation.html is extremely wide, 
which is mostly annoying.





[jira] [Commented] (KAFKA-1148) Delayed fetch/producer requests should be satisfied on a leader change

2015-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15044862#comment-15044862
 ] 

ASF GitHub Bot commented on KAFKA-1148:
---

Github user iBuddha closed the pull request at:

https://github.com/apache/kafka/pull/633


> Delayed fetch/producer requests should be satisfied on a leader change
> --
>
> Key: KAFKA-1148
> URL: https://issues.apache.org/jira/browse/KAFKA-1148
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>
> Somewhat related to KAFKA-1016.
> This would be an issue only if max.wait is set to a very high value. When a 
> leader change occurs we should remove the delayed request from the purgatory 
> - either satisfy with error/expire - whichever makes more sense.





[GitHub] kafka pull request: KAFKA-1148 check leader epoch for DelayedProdu...

2015-12-07 Thread iBuddha
Github user iBuddha closed the pull request at:

https://github.com/apache/kafka/pull/633




[jira] [Commented] (KAFKA-2951) Additional authorization test cases

2015-12-07 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15044830#comment-15044830
 ] 

Flavio Junqueira commented on KAFKA-2951:
-

[~ijuma] Thanks for the feedback; I have updated the description to give more 
context.

> Additional authorization test cases
> ---
>
> Key: KAFKA-2951
> URL: https://issues.apache.org/jira/browse/KAFKA-2951
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>
> There are a few test cases that are worth adding. I've run them manually, but 
> it sounds like a good idea to have them in:
> # Test incorrect topic name (authorization failure)
> # Test topic wildcard
> The first one is covered by checking access to a topic with no authorization, 
> which could happen, for example, if the user has a typo in the topic name. This 
> case is somewhat covered by the test case testProduceWithNoTopicAccess in 
> AuthorizerIntegrationTest, but not in EndToEndAuthorizationTest. The second 
> case consists of testing that using the topic wildcard works. This wildcard 
> might end up being commonly used, and it is worth checking the functionality. 
> At the moment, I believe neither AuthorizerIntegrationTest nor 
> EndToEndAuthorizationTest covers it.





[jira] [Updated] (KAFKA-2951) Additional authorization test cases

2015-12-07 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-2951:

Description: 
There are a few test cases that are worth adding. I've run them manually, but 
it sounds like a good idea to have them in:

# Test incorrect topic name (authorization failure)
# Test topic wildcard

The first one is covered by checking access to a topic with no authorization, 
which could happen, for example, if the user has a typo in the topic name. This 
case is somewhat covered by the test case testProduceWithNoTopicAccess in 
AuthorizerIntegrationTest, but not in EndToEndAuthorizationTest. The second 
case consists of testing that using the topic wildcard works. This wildcard 
might end up being commonly used, and it is worth checking the functionality. 
At the moment, I believe neither AuthorizerIntegrationTest nor 
EndToEndAuthorizationTest covers it.

  was:
There are a few test cases that are worth adding. I've run them manually, but 
it sounds like a good idea to have them in:

# Test incorrect topic name (authorization failure)
# Test topic wildcard


> Additional authorization test cases
> ---
>
> Key: KAFKA-2951
> URL: https://issues.apache.org/jira/browse/KAFKA-2951
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>
> There are a few test cases that are worth adding. I've run them manually, but 
> it sounds like a good idea to have them in:
> # Test incorrect topic name (authorization failure)
> # Test topic wildcard
> The first one is covered by checking access to a topic with no authorization, 
> which could happen, for example, if the user has a typo in the topic name. This 
> case is somewhat covered by the test case testProduceWithNoTopicAccess in 
> AuthorizerIntegrationTest, but not in EndToEndAuthorizationTest. The second 
> case consists of testing that using the topic wildcard works. This wildcard 
> might end up being commonly used, and it is worth checking the functionality. 
> At the moment, I believe neither AuthorizerIntegrationTest nor 
> EndToEndAuthorizationTest covers it.





[jira] [Updated] (KAFKA-2951) Additional authorization test cases

2015-12-07 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-2951:

Summary: Additional authorization test cases  (was: Add test cases to 
EndToEndAuthorizationTest)

> Additional authorization test cases
> ---
>
> Key: KAFKA-2951
> URL: https://issues.apache.org/jira/browse/KAFKA-2951
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>
> There are a few test cases that are worth adding. I've run them manually, but 
> it sounds like a good idea to have them in:
> # Test incorrect topic name (authorization failure)
> # Test topic wildcard





[jira] [Commented] (KAFKA-2910) Failure in kafka.api.SslEndToEndAuthorizationTest.testNoGroupAcl

2015-12-07 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15044814#comment-15044814
 ] 

Flavio Junqueira commented on KAFKA-2910:
-

Based on this WARN:

{noformat}
[2015-11-30 01:35:03,481] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration
section named 'Client' was found in specified JAAS configuration file: 
'/tmp/jaas6536686531650477656.conf'. Will continue 
connection to Zookeeper server without SASL authentication, if Zookeeper server 
allows it. 
(org.apache.zookeeper.ClientCnxn:957)
{noformat}

It looks like another instance of the configuration reset problem.
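
For reference, a 'Client' section of the kind the warning says is missing 
would look roughly like this (illustrative login module and credentials):

{noformat}
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zkclient"
    password="zkclient-secret";
};
{noformat}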

> Failure in kafka.api.SslEndToEndAuthorizationTest.testNoGroupAcl
> 
>
> Key: KAFKA-2910
> URL: https://issues.apache.org/jira/browse/KAFKA-2910
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> {code}
> java.lang.SecurityException: zkEnableSecureAcls is true, but the verification 
> of the JAAS login file failed.
>   at kafka.server.KafkaServer.initZk(KafkaServer.scala:265)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:143)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:66)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:66)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:66)
>   at 
> kafka.api.SslEndToEndAuthorizationTest.kafka$api$IntegrationTestHarness$$super$setUp(SslEndToEndAuthorizationTest.scala:24)
>   at 
> kafka.api.IntegrationTestHarness$class.setUp(IntegrationTestHarness.scala:58)
>   at 
> kafka.api.SslEndToEndAuthorizationTest.kafka$api$EndToEndAuthorizationTest$$super$setUp(SslEndToEndAuthorizationTest.scala:24)
>   at 
> kafka.api.EndToEndAuthorizationTest$class.setUp(EndToEndAuthorizationTest.scala:141)
>   at 
> kafka.api.SslEndToEndAuthorizationTest.setUp(SslEndToEndAuthorizationTest.scala:24)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>

[jira] [Updated] (KAFKA-2952) Add ducktape test for secure->unsecure ZK migration

2015-12-07 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-2952:

Description: We have test cases for the unsecure -> secure path, but not 
the other way around. We should add it.

> Add ducktape test for secure->unsecure ZK migration 
> 
>
> Key: KAFKA-2952
> URL: https://issues.apache.org/jira/browse/KAFKA-2952
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>
> We have test cases for the unsecure -> secure path, but not the other way 
> around. We should add it.





[jira] [Commented] (KAFKA-2951) Add test cases to EndToEndAuthorizationTest

2015-12-07 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15044736#comment-15044736
 ] 

Ismael Juma commented on KAFKA-2951:


[~fpj], have you checked if `AuthorizerIntegrationTest` already covers these?

> Add test cases to EndToEndAuthorizationTest
> ---
>
> Key: KAFKA-2951
> URL: https://issues.apache.org/jira/browse/KAFKA-2951
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>
> There are a few test cases that are worth adding. I've run them manually, but 
> it sounds like a good idea to have them in:
> # Test incorrect topic name (authorization failure)
> # Test topic wildcard



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Minor - Add description of -daemon option to z...

2015-12-07 Thread sasakitoa
GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/634

Minor - Add description of -daemon option to zookeeper-server-start.sh



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka zookeeper_usage

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/634.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #634


commit ec82c58623125fba991292facf6d46764758a21c
Author: Sasaki Toru 
Date:   2015-12-07T05:09:51Z

Add description of -daemon option to zookeeper-server-start.sh






[jira] [Created] (KAFKA-2952) Add ducktape test for secure->unsecure ZK migration

2015-12-07 Thread Flavio Junqueira (JIRA)
Flavio Junqueira created KAFKA-2952:
---

 Summary: Add ducktape test for secure->unsecure ZK migration 
 Key: KAFKA-2952
 URL: https://issues.apache.org/jira/browse/KAFKA-2952
 Project: Kafka
  Issue Type: Test
Reporter: Flavio Junqueira








[jira] [Updated] (KAFKA-2952) Add ducktape test for secure->unsecure ZK migration

2015-12-07 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-2952:

Fix Version/s: 0.9.1.0

> Add ducktape test for secure->unsecure ZK migration 
> 
>
> Key: KAFKA-2952
> URL: https://issues.apache.org/jira/browse/KAFKA-2952
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>






[jira] [Updated] (KAFKA-2952) Add ducktape test for secure->unsecure ZK migration

2015-12-07 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-2952:

Affects Version/s: 0.9.0.0

> Add ducktape test for secure->unsecure ZK migration 
> 
>
> Key: KAFKA-2952
> URL: https://issues.apache.org/jira/browse/KAFKA-2952
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>






[jira] [Assigned] (KAFKA-2952) Add ducktape test for secure->unsecure ZK migration

2015-12-07 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira reassigned KAFKA-2952:
---

Assignee: Flavio Junqueira

> Add ducktape test for secure->unsecure ZK migration 
> 
>
> Key: KAFKA-2952
> URL: https://issues.apache.org/jira/browse/KAFKA-2952
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>






[jira] [Created] (KAFKA-2951) Add test cases to EndToEndAuthorizationTest

2015-12-07 Thread Flavio Junqueira (JIRA)
Flavio Junqueira created KAFKA-2951:
---

 Summary: Add test cases to EndToEndAuthorizationTest
 Key: KAFKA-2951
 URL: https://issues.apache.org/jira/browse/KAFKA-2951
 Project: Kafka
  Issue Type: Test
Affects Versions: 0.9.0.0
Reporter: Flavio Junqueira
Assignee: Flavio Junqueira
 Fix For: 0.9.1.0


There are a few test cases that are worth adding. I've run them manually, but 
it sounds like a good idea to have them in:

# Test incorrect topic name (authorization failure)
# Test topic wildcard


