[jira] [Commented] (KAFKA-2049) Add thread that detects JVM pauses

2015-12-16 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060626#comment-15060626
 ] 

Ted Malaska commented on KAFKA-2049:


So should we close it out then?

> Add thread that detects JVM pauses
> --
>
> Key: KAFKA-2049
> URL: https://issues.apache.org/jira/browse/KAFKA-2049
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Long JVM pauses can cause Kafka malfunctions (especially when interacting 
> with ZK) that can be challenging to debug.
> I propose implementing HADOOP-9618 in Kafka:
> Add a simple thread which loops on 1-second sleeps, and if the sleep ever 
> takes significantly longer than 1 second, log a WARN. This will make GC 
> pauses (and other pauses) obvious in logs.
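
For illustration, a minimal sketch of the pause-detecting loop described above (modeled loosely on Hadoop's JvmPauseMonitor from HADOOP-9618; the class name and threshold below are assumptions, not an actual Kafka implementation):

{code}
// Sketch only: detect long JVM pauses by measuring how much a 1-second sleep overruns.
public class JvmPauseMonitorSketch extends Thread {
    private static final long SLEEP_MS = 1000;           // loop on 1-second sleeps
    private static final long WARN_THRESHOLD_MS = 1000;  // assumed meaning of "significantly longer"

    public JvmPauseMonitorSketch() {
        setDaemon(true);
        setName("jvm-pause-monitor");
    }

    @Override
    public void run() {
        while (!isInterrupted()) {
            long start = System.nanoTime();
            try {
                Thread.sleep(SLEEP_MS);
            } catch (InterruptedException e) {
                return;
            }
            long extraMs = (System.nanoTime() - start) / 1_000_000 - SLEEP_MS;
            if (extraMs > WARN_THRESHOLD_MS) {
                // A real implementation would use the broker's logger at WARN level.
                System.err.println("WARN Detected a JVM pause (likely GC) of ~" + extraMs + " ms");
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new JvmPauseMonitorSketch().start();
        Thread.sleep(10_000);  // keep the demo alive for a bit
    }
}
{code}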



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-36 - Rack aware replica assignment

2015-12-16 Thread Allen Wang
Hi Jun,

The reason that TopicMetadataResponse is not included in the KIP is that it
is currently not version aware. So we would need to introduce a version to it
in order to ensure backward compatibility, which seems to me like a big change.
Do we want to couple it with this KIP? Do we need to further discuss what
information to include in the new version besides rack? For example, should
we include the broker security protocol in TopicMetadataResponse?

The other option is to make TopicMetadataResponse version aware and decide what
to include in a separate KIP, and keep this KIP focused on the rack aware
algorithm, admin tools and related changes to the inter-broker protocol.

Thanks,
Allen




On Mon, Dec 14, 2015 at 8:30 AM, Jun Rao  wrote:

> Allen,
>
> Thanks for the proposal. A few comments.
>
> 1. Since this KIP changes the inter broker communication protocol
> (UpdateMetadataRequest), we will need to document the upgrade path (similar
> to what's described in
> http://kafka.apache.org/090/documentation.html#upgrade).
>
> 2. It might be useful to include the rack info of the broker in
> TopicMetadataResponse. This can be useful for administrative tasks, as well
> as read affinity in the future.
>
> Jun
>
>
>
> On Thu, Dec 10, 2015 at 9:38 AM, Allen Wang  wrote:
>
> > If there are no more comments I would like to call for a vote.
> >
> >
> > On Sun, Nov 15, 2015 at 10:08 PM, Allen Wang 
> wrote:
> >
> > > KIP is updated with more details and how to handle the situation where
> > > rack information is incomplete.
> > >
> > > In the situation where rack information is incomplete, but we want to
> > > continue with the assignment, I have suggested to ignore all rack
> > > information and fallback to original algorithm. The reason is explained
> > > below:
> > >
> > > The other options are to assume that the broker without the rack belong
> > to
> > > its own unique rack, or they belong to one "default" rack. Either way
> we
> > > choose, it is highly likely to result in uneven number of brokers in
> > racks,
> > > and it is quite possible that the "made up" racks will have much fewer
> > > number of brokers. As I explained in the KIP, uneven number of brokers
> in
> > > racks will lead to uneven distribution of replicas among brokers (even
> > > though the leader distribution is still even). The brokers in the rack
> > that
> > > has fewer number of brokers will get more replicas per broker than
> > brokers
> > > in other racks.
> > >
> > > Given this fact and the replica assignment produced will be incorrect
> > > anyway from rack aware point of view, ignoring all rack information and
> > > fallback to the original algorithm is not a bad choice since it will at
> > > least have a better guarantee of replica distribution.
> > >
> > > Also for command line tools it gives user a choice if for any reason
> they
> > > want to ignore rack information and fallback to the original algorithm.
> > >
> > >
> > > On Tue, Nov 10, 2015 at 9:04 AM, Allen Wang 
> > wrote:
> > >
> > >> I am busy with some time pressing issues for the last few days. I will
> > >> think about how the incomplete rack information will affect the
> balance
> > and
> > >> update the KIP by early next week.
> > >>
> > >> Thanks,
> > >> Allen
> > >>
> > >>
> > >> On Tue, Nov 3, 2015 at 9:03 AM, Neha Narkhede 
> > wrote:
> > >>
> > >>> Few suggestions on improving the KIP
> > >>>
> > >>> *If some brokers have rack, and some do not, the algorithm will
> thrown
> > an
> > >>> > exception. This is to prevent incorrect assignment caused by user
> > >>> error.*
> > >>>
> > >>>
> > >>> In the KIP, can you clearly state the user-facing behavior when some
> > >>> brokers have rack information and some don't. Which actions and
> > requests
> > >>> will error out and how?
> > >>>
> > >>> *Even distribution of partition leadership among brokers*
> > >>>
> > >>>
> > >>> There is some information about arranging the sorted broker list
> > >>> interlaced
> > >>> with rack ids. Can you describe the changes to the current algorithm
> > in a
> > >>> little more detail? How does this interlacing work if only a subset
> of
> > >>> brokers have the rack id configured? Does this still work if uneven #
> > of
> > >>> brokers are assigned to each rack? It might work, I'm looking for
> more
> > >>> details on the changes, since it will affect the behavior seen by the
> > >>> user
> > >>> - imbalance on either the leaders or data or both.
> > >>>
> > >>> On Mon, Nov 2, 2015 at 6:39 PM, Aditya Auradkar <
> > aaurad...@linkedin.com>
> > >>> wrote:
> > >>>
> > >>> > I think this sounds reasonable. Anyone else have comments?
> > >>> >
> > >>> > Aditya
> > >>> >
> > >>> > On Tue, Oct 27, 2015 at 5:23 PM, Allen Wang 
> > >>> wrote:
> > >>> >
> > >>> > > During the discussion in the hangout, it was mentioned that it
> > would
> > >>> be
> > >>> > > desirable 
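
For readers following the algorithm discussion quoted above, a rough sketch of the "interlaced" broker ordering mentioned in this thread (an illustration under assumed semantics, not the KIP's actual implementation):

{code}
import java.util.*;

// Sketch: group brokers by rack, then take one broker from each rack in turn, so
// adjacent positions in the ordered broker list alternate racks where possible.
public class RackInterlacingSketch {
    static List<Integer> interlaceByRack(Map<Integer, String> brokerToRack) {
        Map<String, Deque<Integer>> byRack = new TreeMap<>();
        for (Map.Entry<Integer, String> e : new TreeMap<>(brokerToRack).entrySet())
            byRack.computeIfAbsent(e.getValue(), r -> new ArrayDeque<>()).add(e.getKey());

        List<Integer> ordered = new ArrayList<>();
        while (ordered.size() < brokerToRack.size())
            for (Deque<Integer> rackBrokers : byRack.values())
                if (!rackBrokers.isEmpty())
                    ordered.add(rackBrokers.pollFirst());
        return ordered;
    }

    public static void main(String[] args) {
        Map<Integer, String> brokerToRack = new HashMap<>();
        brokerToRack.put(0, "rack-a"); brokerToRack.put(1, "rack-a");
        brokerToRack.put(2, "rack-b"); brokerToRack.put(3, "rack-b");
        brokerToRack.put(4, "rack-c");
        // Prints [0, 2, 4, 1, 3]; with uneven racks the tail can still repeat a rack,
        // which is the imbalance concern raised in the thread.
        System.out.println(interlaceByRack(brokerToRack));
    }
}
{code}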

[jira] [Commented] (KAFKA-2049) Add thread that detects JVM pauses

2015-12-16 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060729#comment-15060729
 ] 

Ismael Juma commented on KAFKA-2049:


I was wondering the same thing. The only thing I can think of is if we wanted 
to log it in the Kafka logs instead of a separate GC log.

> Add thread that detects JVM pauses
> --
>
> Key: KAFKA-2049
> URL: https://issues.apache.org/jira/browse/KAFKA-2049
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Long JVM pauses can cause Kafka malfunctions (especially when interacting 
> with ZK) that can be challenging to debug.
> I propose implementing HADOOP-9618 in Kafka:
> Add a simple thread which loops on 1-second sleeps, and if the sleep ever 
> takes significantly longer than 1 second, log a WARN. This will make GC 
> pauses (and other pauses) obvious in logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2929: Migrate duplicate error mapping fu...

2015-12-16 Thread granthenke
Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/616


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2542) Reuse Throttler code

2015-12-16 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060655#comment-15060655
 ] 

Ted Malaska commented on KAFKA-2542:


Hey Geoff,

Are you still working on this one? I would be happy to take it off your 
shoulders.

Let me know,
Ted Malaska

> Reuse Throttler code
> 
>
> Key: KAFKA-2542
> URL: https://issues.apache.org/jira/browse/KAFKA-2542
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Minor
>
> ThroughputThrottler.java and Throttler.scala are quite similar. It would be 
> better to remove ThroughputThrottler, and place Throttler in 
> clients/o.a.k.common so that it can be reused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2712) Reassignment Partition Tool Issue

2015-12-16 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2712.
-
Resolution: Won't Fix

Not enough details to fix the issue.

> Reassignment Partition Tool Issue
> -
>
> Key: KAFKA-2712
> URL: https://issues.apache.org/jira/browse/KAFKA-2712
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.2.1
>Reporter: Kiran
>Priority: Minor
>
> Hi,
> I have 4 brokers with ids 1, 2, 3 and 4, and one topic on brokers 1 and 2. Now I 
> want to move this topic to brokers 3 and 4. For this I am using the partition 
> reassignment tool.
> Scenario 1:
> When I run the verify command:
> kafka-reassign-partitions.sh --zookeeper localhost:2181 
> --reassignment-json-file reassignment-json-file.json --verify
> Status of partition reassignment:
> ERROR: Assigned replicas (2,3) don't match the list of replicas for 
> reassignment (4,3) for partition [test03,1]
> ERROR: Assigned replicas (1,2) don't match the list of replicas for 
> reassignment (3,4) for partition [test03,0]
> Reassignment of partition [test03,1] failed
> Reassignment of partition [test03,0] failed
> It gives me the above error.
> But when I run the execute command, the data is migrated successfully. To verify 
> this I run the describe and verify commands again and everything looks good.
> Scenario 2: 
> I get a similar error on the first verification, but when I run the execute 
> command it does not work.
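
For reference, a reassignment JSON file for the scenario above would look roughly like this (reconstructed from the target replica lists in the error output; the actual file was not attached to the issue):

{code}
{
  "version": 1,
  "partitions": [
    { "topic": "test03", "partition": 0, "replicas": [3, 4] },
    { "topic": "test03", "partition": 1, "replicas": [4, 3] }
  ]
}
{code}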



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2049) Add thread that detects JVM pauses

2015-12-16 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060623#comment-15060623
 ] 

Gwen Shapira commented on KAFKA-2049:
-

It is embarrassing, since I created this ticket, but I can't remember why I 
thought we needed it.

Java has a GC log that is enabled for Kafka by default...

> Add thread that detects JVM pauses
> --
>
> Key: KAFKA-2049
> URL: https://issues.apache.org/jira/browse/KAFKA-2049
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Long JVM pauses can cause Kafka malfunctions (especially when interacting 
> with ZK) that can be challenging to debug.
> I propose implementing HADOOP-9618 in Kafka:
> Add a simple thread which loops on 1-second sleeps, and if the sleep ever 
> takes significantly longer than 1 second, log a WARN. This will make GC 
> pauses (and other pauses) obvious in logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2049) Add thread that detects JVM pauses

2015-12-16 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060632#comment-15060632
 ] 

Gwen Shapira commented on KAFKA-2049:
-

If you don't see a need for this either, yes, let's close it.

I was hoping you had something in mind :)

> Add thread that detects JVM pauses
> --
>
> Key: KAFKA-2049
> URL: https://issues.apache.org/jira/browse/KAFKA-2049
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Long JVM pauses can cause Kafka malfunctions (especially when interacting 
> with ZK) that can be challenging to debug.
> I propose implementing HADOOP-9618 in Kafka:
> Add a simple thread which loops on 1-second sleeps, and if the sleep ever 
> takes significantly longer than 1 second, log a WARN. This will make GC 
> pauses (and other pauses) obvious in logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2049) Add thread that detects JVM pauses

2015-12-16 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2049.
-
Resolution: Won't Fix

We have a GC log

> Add thread that detects JVM pauses
> --
>
> Key: KAFKA-2049
> URL: https://issues.apache.org/jira/browse/KAFKA-2049
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Long JVM pauses can cause Kafka malfunctions (especially when interacting 
> with ZK) that can be challenging to debug.
> I propose implementing HADOOP-9618 in Kafka:
> Add a simple thread which loops on 1-second sleeps, and if the sleep ever 
> takes significantly longer than 1 second, log a WARN. This will make GC 
> pauses (and other pauses) obvious in logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2998) New Consumer should not retry indefinitely if no broker is available

2015-12-16 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060360#comment-15060360
 ] 

Ismael Juma commented on KAFKA-2998:


Thanks for the report. Is this a duplicate of KAFKA-1894?

> New Consumer should not retry indefinitely if no broker is available
> 
>
> Key: KAFKA-2998
> URL: https://issues.apache.org/jira/browse/KAFKA-2998
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Florian Hussonnois
>Priority: Minor
>
> If no broker from bootstrap.servers is available, the consumer retries 
> indefinitely with debug log messages:
>  
> DEBUG 17:16:13 Give up sending metadata request since no node is available
> DEBUG 17:16:13 Initialize connection to node -1 for sending metadata request
> DEBUG 17:16:13 Initiating connection to node -1 at localhost:9091.
> At least, an ERROR message should be logged after a number of retries.
> In addition, maybe the consumer should fail in such a case? This behavior 
> could be controlled by a configuration property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2929) Migrate server side error mapping functionality

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060416#comment-15060416
 ] 

ASF GitHub Bot commented on KAFKA-2929:
---

GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/616

KAFKA-2929: Migrate duplicate error mapping functionality

Deprecates ErrorMapping.scala in core in favor of Errors.java in common. 
Duplicated exceptions in core are deprecated as well, to ensure the mapping 
is correct.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka error-mapping

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/616.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #616


commit 631f38af04f2f944d9a31506ad6290603cc4641e
Author: Grant Henke 
Date:   2015-12-16T17:55:33Z

KAFKA-2929: Migrate duplicate error mapping functionality




> Migrate server side error mapping functionality
> ---
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and issues with consistency, we should migrate from 
> ErrorMapping.scala in core to Errors.java in common.
> When the old clients are removed ErrorMapping.scala and the old exceptions 
> should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2929) Migrate server side error mapping functionality

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060415#comment-15060415
 ] 

ASF GitHub Bot commented on KAFKA-2929:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/616


> Migrate server side error mapping functionality
> ---
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and issues with consistency, we should migrate from 
> ErrorMapping.scala in core to Errors.java in common.
> When the old clients are removed ErrorMapping.scala and the old exceptions 
> should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2929: Migrate duplicate error mapping fu...

2015-12-16 Thread granthenke
GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/616

KAFKA-2929: Migrate duplicate error mapping functionality

Deprecates ErrorMapping.scala in core in favor of Errors.java in common. 
Duplicated exceptions in core are deprecated as well, to ensure the mapping 
is correct.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka error-mapping

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/616.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #616


commit 631f38af04f2f944d9a31506ad6290603cc4641e
Author: Grant Henke 
Date:   2015-12-16T17:55:33Z

KAFKA-2929: Migrate duplicate error mapping functionality




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2998) New Consumer should not retry indefinitely if no broker is available

2015-12-16 Thread Florian Hussonnois (JIRA)
Florian Hussonnois created KAFKA-2998:
-

 Summary: New Consumer should not retry indefinitely if no broker 
is available
 Key: KAFKA-2998
 URL: https://issues.apache.org/jira/browse/KAFKA-2998
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.9.0.0
Reporter: Florian Hussonnois
Priority: Minor


If no broker from bootstrap.servers is available, the consumer retries indefinitely 
with debug log messages:
 
DEBUG 17:16:13 Give up sending metadata request since no node is available
DEBUG 17:16:13 Initialize connection to node -1 for sending metadata request
DEBUG 17:16:13 Initiating connection to node -1 at localhost:9091.

At least, an ERROR message should be logged after a number of retries.
In addition, maybe the consumer should fail in such a case? This behavior 
could be controlled by a configuration property.
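
A minimal sketch of the escalation suggested above (illustrative only; the retry bound and fail-fast switch are hypothetical, not existing consumer configuration):

{code}
// Sketch: keep DEBUG for individual failed metadata attempts, but escalate to ERROR
// (and optionally fail) once a number of consecutive attempts have found no broker.
public class MetadataRetryPolicySketch {
    private static final int MAX_SILENT_ATTEMPTS = 30;  // hypothetical bound
    private int failedAttempts = 0;

    void onMetadataAttemptFailed(boolean failFast) {
        failedAttempts++;
        if (failedAttempts <= MAX_SILENT_ATTEMPTS) {
            System.out.println("DEBUG Give up sending metadata request since no node is available");
        } else {
            System.err.println("ERROR No broker from bootstrap.servers is reachable after "
                    + failedAttempts + " attempts");
            if (failFast)
                throw new IllegalStateException("No bootstrap broker available");
        }
    }

    void onMetadataAttemptSucceeded() {
        failedAttempts = 0;  // reset once any broker responds
    }
}
{code}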



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2999) Errors enum should be a 1 to 1 mapping of error codes and exceptions

2015-12-16 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-2999:
--

 Summary: Errors enum should be a 1 to 1 mapping of error codes and 
exceptions
 Key: KAFKA-2999
 URL: https://issues.apache.org/jira/browse/KAFKA-2999
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Grant Henke
Assignee: Grant Henke


Errors has functionality to map from code to exception and from exception to 
code. This requires the mapping to be 1 to 1 or else unexpected behavior may 
occur.

In the current code (below), a generic ApiException will result in an 
INVALID_COMMIT_OFFSET_SIZE error, because that is the last occurrence in the 
Enum.

{code:title=Error.java|borderStyle=solid}
...
for (Errors error : Errors.values()) {
    codeToError.put(error.code(), error);
    if (error.exception != null)
        classToError.put(error.exception.getClass(), error);
}
...
{code}


This should be fixed and some tests should be written to validate it's not 
broken. 
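
One way to catch such duplicates, sketched as a standalone check rather than the eventual fix (it assumes the public Errors.exception() accessor; adjust if the accessor differs):

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.protocol.Errors;

// Sketch: rebuild the exception-class -> error map and fail loudly on duplicates
// instead of silently keeping the last occurrence in the enum.
public class ErrorsMappingCheck {
    public static void main(String[] args) {
        Map<Class<?>, Errors> classToError = new HashMap<>();
        for (Errors error : Errors.values()) {
            if (error.exception() == null)
                continue;
            Errors previous = classToError.put(error.exception().getClass(), error);
            if (previous != null)
                throw new IllegalStateException("Exception class "
                        + error.exception().getClass().getName()
                        + " is mapped by both " + previous + " and " + error);
        }
        System.out.println("Mapping is 1 to 1 across " + classToError.size() + " exception classes");
    }
}
{code}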



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2049) Add thread that detects JVM pauses

2015-12-16 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060440#comment-15060440
 ] 

Ted Malaska commented on KAFKA-2049:


Hey Gwen,

Do you mind if I try this one?

Thanks

> Add thread that detects JVM pauses
> --
>
> Key: KAFKA-2049
> URL: https://issues.apache.org/jira/browse/KAFKA-2049
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Long JVM pauses can cause Kafka malfunctions (especially when interacting 
> with ZK) that can be challenging to debug.
> I propose implementing HADOOP-9618 in Kafka:
> Add a simple thread which loops on 1-second sleeps, and if the sleep ever 
> takes significantly longer than 1 second, log a WARN. This will make GC 
> pauses (and other pauses) obvious in logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2984: ktable sends old values when requi...

2015-12-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/672


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA 2578 Client Metadata internal state shou...

2015-12-16 Thread eribeiro
Github user eribeiro closed the pull request at:

https://github.com/apache/kafka/pull/263


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3000) __consumer_offsets topic grows indefinitely

2015-12-16 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061154#comment-15061154
 ] 

Grant Henke commented on KAFKA-3000:


[~horkhe] Glad to hear that was it. I will resolve as not a problem.

> __consumer_offsets topic grows indefinitely
> ---
>
> Key: KAFKA-3000
> URL: https://issues.apache.org/jira/browse/KAFKA-3000
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
> Environment: Ubuntu 14.04, 5 node kafka cluster + 5 node ZooKeeper 
> cluster. ZooKeeper and Kafka coexist on 5 cloud boxes.
>Reporter: Maxim Vladimirskiy
>
> Old segments of the __consumer_offsets topic seem to be never deleted. As of 
> Dec 16, 2015, there are segments dating back to when we started use Kafka - 
> Oct 26, 2015. However the idx for the respective oldest segment file is just 
> about an hour fresh. All offset related settings are default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3000) __consumer_offsets topic grows indefinitely

2015-12-16 Thread Maxim Vladimirskiy (JIRA)
Maxim Vladimirskiy created KAFKA-3000:
-

 Summary: __consumer_offsets topic grows indefinitely
 Key: KAFKA-3000
 URL: https://issues.apache.org/jira/browse/KAFKA-3000
 Project: Kafka
  Issue Type: Bug
  Components: offset manager
Affects Versions: 0.8.2.1
 Environment: Ubuntu 14.04, 5 node kafka cluster + 5 node ZooKeeper 
cluster. ZooKeeper and Kafka coexist on 5 cloud boxes.
Reporter: Maxim Vladimirskiy


Old segments of the __consumer_offsets topic never seem to be deleted. As of 
Dec 16, 2015, there are segments dating back to when we started using Kafka on Oct 
26, 2015. However, the index file for the oldest segment is only about an hour 
old. All offset-related settings are at their defaults.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3000) __consumer_offsets topic grows indefinitely

2015-12-16 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061105#comment-15061105
 ] 

Grant Henke commented on KAFKA-3000:


Do you have log.cleaner.enable=true configured on your brokers? The default is 
false.

Since __consumer_offsets is a compacted topic, old segments aren't supposed to 
be deleted. Instead the log cleaner, if enabled, should compact the topic.

Discussion about changing the default to true, and more related handling, can be 
found in KAFKA-2988 and this email thread on the developer list: 
http://search-hadoop.com/m/uyzND1XRTEVv2ToT1=Consumer+Offsets+Compaction

> __consumer_offsets topic grows indefinitely
> ---
>
> Key: KAFKA-3000
> URL: https://issues.apache.org/jira/browse/KAFKA-3000
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
> Environment: Ubuntu 14.04, 5 node kafka cluster + 5 node ZooKeeper 
> cluster. ZooKeeper and Kafka coexist on 5 cloud boxes.
>Reporter: Maxim Vladimirskiy
>
> Old segments of the __consumer_offsets topic seem to be never deleted. As of 
> Dec 16, 2015, there are segments dating back to when we started use Kafka - 
> Oct 26, 2015. However the idx for the respective oldest segment file is just 
> about an hour fresh. All offset related settings are default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3001) Transient Failure in kafka.api.SaslPlaintextConsumerTest

2015-12-16 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3001:


 Summary: Transient Failure in kafka.api.SaslPlaintextConsumerTest
 Key: KAFKA-3001
 URL: https://issues.apache.org/jira/browse/KAFKA-3001
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


{code}
org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: 
Could not find a 'KafkaServer' entry in `/tmp/jaas3207762703726735156.conf`.
at 
org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:73)
at 
org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:60)
at kafka.network.Processor.(SocketServer.scala:379)
at 
kafka.network.SocketServer$$anonfun$startup$1$$anonfun$apply$1.apply$mcVI$sp(SocketServer.scala:96)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at 
kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:95)
at 
kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:91)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at 
scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
at kafka.network.SocketServer.startup(SocketServer.scala:91)
at kafka.server.KafkaServer.startup(KafkaServer.scala:188)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:143)
at 
kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:66)
at 
kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:66)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:66)
at 
kafka.api.BaseConsumerTest.kafka$api$IntegrationTestHarness$$super$setUp(BaseConsumerTest.scala:36)
at 
kafka.api.IntegrationTestHarness$class.setUp(IntegrationTestHarness.scala:58)
at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:62)
at 
kafka.api.SaslPlaintextConsumerTest.kafka$api$SaslTestHarness$$super$setUp(SaslPlaintextConsumerTest.scala:17)
at kafka.api.SaslTestHarness$class.setUp(SaslTestHarness.scala:27)
at 
kafka.api.SaslPlaintextConsumerTest.setUp(SaslPlaintextConsumerTest.scala:17)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
at 

[jira] [Commented] (KAFKA-3000) __consumer_offsets topic grows indefinitely

2015-12-16 Thread Maxim Vladimirskiy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061133#comment-15061133
 ] 

Maxim Vladimirskiy commented on KAFKA-3000:
---

It was set to the default value `false`; I am going to change it to `true`. The 
largest __consumer_offsets partition has grown to 64G, so I guess I have one 
hell of a compaction ahead.

> __consumer_offsets topic grows indefinitely
> ---
>
> Key: KAFKA-3000
> URL: https://issues.apache.org/jira/browse/KAFKA-3000
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
> Environment: Ubuntu 14.04, 5 node kafka cluster + 5 node ZooKeeper 
> cluster. ZooKeeper and Kafka coexist on 5 cloud boxes.
>Reporter: Maxim Vladimirskiy
>
> Old segments of the __consumer_offsets topic seem to be never deleted. As of 
> Dec 16, 2015, there are segments dating back to when we started use Kafka - 
> Oct 26, 2015. However the idx for the respective oldest segment file is just 
> about an hour fresh. All offset related settings are default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3000) __consumer_offsets topic grows indefinitely

2015-12-16 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-3000.

Resolution: Not A Problem
  Assignee: Grant Henke

> __consumer_offsets topic grows indefinitely
> ---
>
> Key: KAFKA-3000
> URL: https://issues.apache.org/jira/browse/KAFKA-3000
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
> Environment: Ubuntu 14.04, 5 node kafka cluster + 5 node ZooKeeper 
> cluster. ZooKeeper and Kafka coexist on 5 cloud boxes.
>Reporter: Maxim Vladimirskiy
>Assignee: Grant Henke
>
> Old segments of the __consumer_offsets topic seem to be never deleted. As of 
> Dec 16, 2015, there are segments dating back to when we started use Kafka - 
> Oct 26, 2015. However the idx for the respective oldest segment file is just 
> about an hour fresh. All offset related settings are default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2984) KTable should send old values along with new values to downstreams

2015-12-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2984.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 672
[https://github.com/apache/kafka/pull/672]

> KTable should send old values along with new values to downstreams
> --
>
> Key: KAFKA-2984
> URL: https://issues.apache.org/jira/browse/KAFKA-2984
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.1
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> Old values are necessary for implementing aggregate functions. KTable should 
> augment an event with its old value. Basically KTable stream is a stream of 
> (key, (new value, old value)) internally. The old value may be omitted when 
> it is not used in the topology.
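
For readers unfamiliar with the idea, a rough sketch of the "(new value, old value)" pair carried per key (an illustration of the shape only, not the internal class Kafka Streams actually uses):

{code}
// Sketch: the value side of an internal KTable record, pairing the new value with
// the old one; the old value can be left null when no downstream operator needs it.
public class ChangeSketch<V> {
    public final V newValue;
    public final V oldValue;

    public ChangeSketch(V newValue, V oldValue) {
        this.newValue = newValue;
        this.oldValue = oldValue;
    }

    @Override
    public String toString() {
        return "(" + newValue + ", " + oldValue + ")";
    }
}
{code}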



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #913

2015-12-16 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2984: KTable should send old values when required

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 587a2f4efd7994d4d3af82ed91304f939514294a 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 587a2f4efd7994d4d3af82ed91304f939514294a
 > git rev-list 841d2d1a26af94ec95c480dbf2453f9c7d28c2f7 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson7067738365703651987.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 14.969 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson7817149559357902456.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.9/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileScala' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 25.326 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Commented] (KAFKA-3000) __consumer_offsets topic grows indefinitely

2015-12-16 Thread Maxim Vladimirskiy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061168#comment-15061168
 ] 

Maxim Vladimirskiy commented on KAFKA-3000:
---

After I set log.cleaner.enable=true, the size of __consumer_offsets started 
to go down rapidly. Thanks a lot, Grant. And you should definitely remove this 
option altogether; it does not make sense to have the cleaner disabled. If a 
user neither has compacted topics nor uses offset management, it should not be 
a huge performance hit to have an idle cleaner sticking around, unless you 
screwed up implementing it big time, just joking :). Keep up the good work. 
Kafka is great!

> __consumer_offsets topic grows indefinitely
> ---
>
> Key: KAFKA-3000
> URL: https://issues.apache.org/jira/browse/KAFKA-3000
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
> Environment: Ubuntu 14.04, 5 node kafka cluster + 5 node ZooKeeper 
> cluster. ZooKeeper and Kafka coexist on 5 cloud boxes.
>Reporter: Maxim Vladimirskiy
>Assignee: Grant Henke
>
> Old segments of the __consumer_offsets topic seem to be never deleted. As of 
> Dec 16, 2015, there are segments dating back to when we started use Kafka - 
> Oct 26, 2015. However the idx for the respective oldest segment file is just 
> about an hour fresh. All offset related settings are default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Consumer Offsets Compaction

2015-12-16 Thread Grant Henke
I am considering changing these defaults in KAFKA-2988:

log.cleaner.enable=true (was false)
log.cleaner.dedupe.buffer.size=128MiB (was 500MiB)
log.cleaner.delete.retention.ms=7 Days (was 1 day)

Thoughts on those values?

Should I add logic to make sure we scale down the buffer size instead of
causing an OutOfMemoryError? Would anyone have an instance small enough to
cause a problem?
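
As a rough aid for picking a value, a back-of-the-envelope capacity calculation based on the 24-bytes-per-key figure discussed later in this thread (a sketch, not the broker's exact sizing formula):

{code}
// Sketch: estimate how many unique keys a dedupe buffer of a given size can track.
public class DedupeBufferSizing {
    public static void main(String[] args) {
        long bufferBytes = 500L * 1024 * 1024;  // current default: 500 MiB
        int bytesPerEntry = 16 + 8;             // MD5 hash + offset = 24 bytes per key
        long uniqueKeys = bufferBytes / bytesPerEntry;
        // ~21,845,333 keys, before the load-factor adjustment and per-cleaner-thread
        // split mentioned below; a 128 MiB buffer would track roughly a quarter of that.
        System.out.println(uniqueKeys);
    }
}
{code}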

On Tue, Dec 15, 2015 at 4:35 PM, Gwen Shapira  wrote:

> I'm thinking that anyone who actually uses compaction has non-standard
> configuration (at the very least, they had to enable the cleaner, and
> probably few other configurations too... Compaction is a bit fiddly from
> what I've seen).
>
> So, I'm in favor of minimal default buffer just for offsets and copycat
> configuration. If you need compaction elsewhere, feel free to resize.
>
> Gwen
>
> On Tue, Dec 15, 2015 at 2:29 PM, Grant Henke  wrote:
>
> > Following up based on some digging. There are some upper and lower bounds
> > on the buffer size:
> >
> > log.cleaner.dedupe.buffer.size has a:
> >
> >- Minimum of 1 MiB per cleaner thread
> >   -
> >
> >
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala#L950
> >   - Maximum of 2 GiB per cleaner thread
> >-
> >
> >
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/LogCleaner.scala#L183
> >
> > Then entry size is 24 bytes (the size of an MD5 hash (16 Bytes) + the
> size
> > of an offset (8 bytes)).
> > Note: The hash algorithm is technically interchangeable, but not exposed
> > via configuration.
> >
> > I would like to enable the log cleaner by default given that the new
> > consumer depends on it. The main concern I have with enabling the log
> > cleaner by default, and not changing the default size of the dedupe
> buffer
> > is the impact it would have on small POC and test deployments that have
> > small heaps. When moving to the new version, many would just fail with an
> > OufOfMemoryError.
> >
> > We could scale the size of the dedupe buffer down to some percentage of
> the
> > maximum memory available not exceeding the configured
> > log.cleaner.dedupe.buffer.size, and warn if it is less than the
> configured
> > value. But I am not sure if that is the best way to handle that either.
> >
> >
> > On Tue, Dec 15, 2015 at 11:23 AM, Jay Kreps  wrote:
> >
> > > The buffer determines the maximum number of unique keys in the new
> > > writes that can be processed in one cleaning. Each key requires 24
> > > bytes of space iirc, so 500 MB = ~21,845,333 unique keys (this is
> > > actually adjusted for some load factor and divided by the number of
> > > cleaner threads). If it is too small, you will need multiple cleanings
> > > to compact new data, so sizing too small will tend to lead to lots of
> > > additional I/O. The tradeoff is that if we size it for just handling
> > > the offset topic it could be super small (proportional to the number
> > > of active group-partition combinations), but then people who use log
> > > compaction will see poor performance. If we size it larger than we
> > > waste memory.
> > >
> > > -Jay
> > >
> > > -Jay
> > >
> > > On Tue, Dec 15, 2015 at 8:19 AM, Grant Henke 
> > wrote:
> > > > Thanks for the background context Jay.
> > > >
> > > > Do we have any context on what size is small (but still effect for
> > small
> > > > deployments) for the compaction buffer? and what is large? what
> factors
> > > > help you choose the correct (or a safe) size?
> > > >
> > > > Currently the default "log.cleaner.dedupe.buffer.size" is 500 MiB. If
> > we
> > > > are enabling the log cleaner by default, should we adjust that
> default
> > > size
> > > > to be smaller?
> > > >
> > > > On a similar note, log.cleaner.delete.retention.ms is currently
> > > defaulted
> > > > to 1 day. I am not sure the background here either, but would it make
> > > sense
> > > > to default this setting to 7 days to match the default log retention
> > and
> > > > ensure no delete messages are missed by consumers?
> > > >
> > > > Thanks,
> > > > Grant
> > > >
> > > > On Mon, Dec 14, 2015 at 2:19 PM, Jay Kreps  wrote:
> > > >
> > > >> The reason for disabling it by default was (1) general paranoia
> about
> > > >> log compaction when we released it, (2) avoid allocating the
> > > >> compaction buffer. The first concern is now definitely obsolete, but
> > > >> the second concern is maybe valid. Basically that compaction buffer
> is
> > > >> a preallocated chunk of memory used in compaction and is closely
> tied
> > > >> to the efficiency of the compaction process (so you want it to be
> > > >> big). But if you're not using compaction then it is just wasting
> > > >> memory. I guess since the new consumer requires native offsets
> > > >> (right?) and native offsets require log compaction, maybe we should
> > > >> just default it to on...
> > > >>
> > 

[GitHub] kafka pull request: KAFKA-2422: Allow copycat connector plugins to...

2015-12-16 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/687

KAFKA-2422: Allow copycat connector plugins to be aliased to simpler …

…names

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2422

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/687.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #687


commit b00939902c58b98cbcb187c754e2bd1dc6463c14
Author: Gwen Shapira 
Date:   2015-12-17T04:32:09Z

KAFKA-2422: Allow copycat connector plugins to be aliased to simpler names




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2422) Allow copycat connector plugins to be aliased to simpler names

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061450#comment-15061450
 ] 

ASF GitHub Bot commented on KAFKA-2422:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/687

KAFKA-2422: Allow copycat connector plugins to be aliased to simpler …

…names

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2422

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/687.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #687


commit b00939902c58b98cbcb187c754e2bd1dc6463c14
Author: Gwen Shapira 
Date:   2015-12-17T04:32:09Z

KAFKA-2422: Allow copycat connector plugins to be aliased to simpler names




> Allow copycat connector plugins to be aliased to simpler names
> --
>
> Key: KAFKA-2422
> URL: https://issues.apache.org/jira/browse/KAFKA-2422
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
>
> Configurations of connectors can get quite verbose when you have to specify 
> the full class name, e.g. 
> connector.class=org.apache.kafka.copycat.file.FileStreamSinkConnector
> It would be nice to allow connector classes to provide shorter aliases, e.g. 
> something like "file-sink", to make this config less verbose. Flume does 
> this, so we can use it as an example.
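
A minimal sketch of the alias idea (the alias table and resolver below are illustrative, not the mechanism that was eventually implemented):

{code}
import java.util.HashMap;
import java.util.Map;

// Sketch: map short aliases to fully-qualified connector class names, falling back
// to treating the configured value as a class name when no alias matches.
public class ConnectorAliasSketch {
    private static final Map<String, String> ALIASES = new HashMap<>();
    static {
        ALIASES.put("file-sink", "org.apache.kafka.copycat.file.FileStreamSinkConnector");
    }

    public static String resolveConnectorClass(String configuredValue) {
        return ALIASES.getOrDefault(configuredValue, configuredValue);
    }

    public static void main(String[] args) {
        System.out.println(resolveConnectorClass("file-sink"));             // resolves the alias
        System.out.println(resolveConnectorClass("org.example.Connector")); // passes through unchanged
    }
}
{code}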



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2375) Implement elasticsearch Copycat sink connector

2015-12-16 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei reassigned KAFKA-2375:
-

Assignee: Liquan Pei

> Implement elasticsearch Copycat sink connector
> --
>
> Key: KAFKA-2375
> URL: https://issues.apache.org/jira/browse/KAFKA-2375
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>
> Implement an elasticsearch sink connector for Copycat. This should send 
> records to elasticsearch with unique document IDs, given appropriate configs 
> to extract IDs from input records.
> The motivation here is to provide a good end-to-end example with built-in 
> connectors that require minimal dependencies. Because Elasticsearch has a 
> very simple REST API, an elasticsearch connector shouldn't require any extra 
> dependencies and logs -> Elasticsearch (in combination with KAFKA-2374) 
> provides a compelling out-of-the-box Copycat use case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3003) The fetch.wait.max.ms is not honored when new log segment rolled for low volume topics.

2015-12-16 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3003:

Component/s: core

> The fetch.wait.max.ms is not honored when new log segment rolled for low 
> volume topics.
> ---
>
> Key: KAFKA-3003
> URL: https://issues.apache.org/jira/browse/KAFKA-3003
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The problem we saw can be explained by the example below:
> 1. Message offset 100 is appended to partition p0, log segment .log. 
> at time T. After that no message is appended. 
> 2. This message is replicated, leader replica update its 
> highWatermark.messageOffset=100, highWatermark.segmentBaseOffset=0.
> 3. At time T + retention.ms, because no message has been appended to current 
> active log segment for retention.ms, the last modified time of the current 
> log segment reaches retention time. 
> 4. Broker rolls out a new log segment 0001.log, and deletes the old log 
> segment .log. The new log segment in this case is empty because there 
> is no message appended. 
> 5. In Log, the nextOffsetMetadata.segmentBaseOffset will be updated to the 
> new log segment's base offset, but nextOffsetMetadata.messageOffset does not 
> change. so nextOffsetMetadata.messageOffset=1, 
> nextOffsetMetadata.segmentBaseOffset=1.
> 6. Now a FetchRequest comes and try to fetch from offset 1, 
> fetch.wait.max.ms=1000.
> 7. In ReplicaManager, because there is no data to return, the fetch request 
> will be put into purgatory. When delayedFetchPurgatory.tryCompleteElseWatch() 
> is called, the DelayedFetch.tryComplete() compares replica.highWatermark and 
> the fetchOffset returned by log.read(), it will see the 
> replica.highWatermark.segmentBaseOffset=0 and 
> fetchOffset.segmentBaseOffset=1. So it will assume the fetch occurs on a 
> later segment and complete the delayed fetch immediately.
> In this case, the replica.highWatermark was not updated because the 
> LogOffsetMetadata.preceds() only checks the messageOffset but ignored 
> segmentBaseOffset. The fix is to let LogOffsetMetadata first check the 
> messageOffset then check the segmentBaseOffset. So replica.highWatermark will 
> get updated after the follower fetches from the leader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3003) The fetch.wait.max.ms is not honored when new log segment rolled for low volume topics.

2015-12-16 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-3003:
---

 Summary: The fetch.wait.max.ms is not honored when new log segment 
rolled for low volume topics.
 Key: KAFKA-3003
 URL: https://issues.apache.org/jira/browse/KAFKA-3003
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


The problem we saw can be explained by the example below:

1. Message offset 100 is appended to partition p0, log segment .log, at 
time T. After that no message is appended. 
2. This message is replicated, and the leader replica updates its 
highWatermark.messageOffset=100, highWatermark.segmentBaseOffset=0.
3. At time T + retention.ms, because no message has been appended to the current 
active log segment for retention.ms, the last modified time of the current log 
segment reaches the retention time. 
4. The broker rolls a new log segment 0001.log and deletes the old log 
segment .log. The new log segment in this case is empty because no message 
has been appended. 
5. In Log, the nextOffsetMetadata.segmentBaseOffset will be updated to the new 
log segment's base offset, but nextOffsetMetadata.messageOffset does not 
change, so nextOffsetMetadata.messageOffset=1, 
nextOffsetMetadata.segmentBaseOffset=1.
6. Now a FetchRequest comes and tries to fetch from offset 1 with 
fetch.wait.max.ms=1000.
7. In ReplicaManager, because there is no data to return, the fetch request 
will be put into purgatory. When delayedFetchPurgatory.tryCompleteElseWatch() 
is called, DelayedFetch.tryComplete() compares replica.highWatermark and 
the fetchOffset returned by log.read(); it will see 
replica.highWatermark.segmentBaseOffset=0 and fetchOffset.segmentBaseOffset=1. 
So it will assume the fetch occurs on a later segment and complete the delayed 
fetch immediately.

In this case, the replica.highWatermark was not updated because 
LogOffsetMetadata.precedes() only checks the messageOffset and ignores the 
segmentBaseOffset. The fix is to let LogOffsetMetadata first check the 
messageOffset and then check the segmentBaseOffset, so that replica.highWatermark 
gets updated after the follower fetches from the leader.
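
To make the described fix concrete, a Java-flavored sketch of the comparison order (the real class is core's LogOffsetMetadata.scala; this only illustrates the logic described above, not the actual code):

{code}
// Sketch: compare message offsets first, and fall back to the segment base offset
// only when the message offsets are equal, so a high watermark stuck on an older
// (now deleted) segment still "precedes" the fetch offset on the new empty segment.
public class OffsetMetadataSketch {
    final long messageOffset;
    final long segmentBaseOffset;

    OffsetMetadataSketch(long messageOffset, long segmentBaseOffset) {
        this.messageOffset = messageOffset;
        this.segmentBaseOffset = segmentBaseOffset;
    }

    // Old behavior per the description: only the message offset is compared,
    // so the high watermark never advances onto the newly rolled segment.
    boolean precedesOld(OffsetMetadataSketch that) {
        return this.messageOffset < that.messageOffset;
    }

    // Proposed behavior: an equal message offset on an older segment still precedes,
    // which lets the follower fetch move the high watermark forward.
    boolean precedesFixed(OffsetMetadataSketch that) {
        return this.messageOffset < that.messageOffset
                || (this.messageOffset == that.messageOffset
                        && this.segmentBaseOffset < that.segmentBaseOffset);
    }
}
{code}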




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3003) The fetch.wait.max.ms is not honored when new log segment rolled for low volume topics.

2015-12-16 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3003:

Fix Version/s: 0.9.0.1

> The fetch.wait.max.ms is not honored when new log segment rolled for low 
> volume topics.
> ---
>
> Key: KAFKA-3003
> URL: https://issues.apache.org/jira/browse/KAFKA-3003
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The problem we saw can be explained by the example below:
> 1. Message offset 100 is appended to partition p0, log segment .log. 
> at time T. After that no message is appended. 
> 2. This message is replicated, leader replica update its 
> highWatermark.messageOffset=100, highWatermark.segmentBaseOffset=0.
> 3. At time T + retention.ms, because no message has been appended to current 
> active log segment for retention.ms, the last modified time of the current 
> log segment reaches retention time. 
> 4. Broker rolls out a new log segment 0001.log, and deletes the old log 
> segment .log. The new log segment in this case is empty because there 
> is no message appended. 
> 5. In Log, the nextOffsetMetadata.segmentBaseOffset will be updated to the 
> new log segment's base offset, but nextOffsetMetadata.messageOffset does not 
> change. so nextOffsetMetadata.messageOffset=1, 
> nextOffsetMetadata.segmentBaseOffset=1.
> 6. Now a FetchRequest comes and try to fetch from offset 1, 
> fetch.wait.max.ms=1000.
> 7. In ReplicaManager, because there is no data to return, the fetch request 
> will be put into purgatory. When delayedFetchPurgatory.tryCompleteElseWatch() 
> is called, the DelayedFetch.tryComplete() compares replica.highWatermark and 
> the fetchOffset returned by log.read(), it will see the 
> replica.highWatermark.segmentBaseOffset=0 and 
> fetchOffset.segmentBaseOffset=1. So it will assume the fetch occurs on a 
> later segment and complete the delayed fetch immediately.
> In this case, the replica.highWatermark was not updated because the 
> LogOffsetMetadata.preceds() only checks the messageOffset but ignored 
> segmentBaseOffset. The fix is to let LogOffsetMetadata first check the 
> messageOffset then check the segmentBaseOffset. So replica.highWatermark will 
> get updated after the follower fetches from the leader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3003) The fetch.wait.max.ms is not honored when new log segment rolled for low volume topics.

2015-12-16 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3003:

Affects Version/s: 0.9.0.0

> The fetch.wait.max.ms is not honored when new log segment rolled for low 
> volume topics.
> ---
>
> Key: KAFKA-3003
> URL: https://issues.apache.org/jira/browse/KAFKA-3003
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The problem we saw can be explained by the example below:
> 1. Message offset 100 is appended to partition p0, log segment 
> 00000000000000000000.log, at time T. After that no message is appended. 
> 2. This message is replicated, and the leader replica updates its 
> highWatermark.messageOffset=100, highWatermark.segmentBaseOffset=0.
> 3. At time T + retention.ms, because no message has been appended to the current 
> active log segment for retention.ms, the last modified time of the current 
> log segment reaches the retention time. 
> 4. The broker rolls a new log segment 00000000000000000101.log and deletes the 
> old log segment 00000000000000000000.log. The new log segment in this case is 
> empty because no message has been appended. 
> 5. In Log, nextOffsetMetadata.segmentBaseOffset is updated to the new log 
> segment's base offset, but nextOffsetMetadata.messageOffset does not change, so 
> nextOffsetMetadata.messageOffset=101 and nextOffsetMetadata.segmentBaseOffset=101.
> 6. Now a FetchRequest comes and tries to fetch from offset 101 with 
> fetch.wait.max.ms=1000.
> 7. In ReplicaManager, because there is no data to return, the fetch request 
> is put into purgatory. When delayedFetchPurgatory.tryCompleteElseWatch() is 
> called, DelayedFetch.tryComplete() compares replica.highWatermark with the 
> fetchOffset returned by log.read(); it sees 
> replica.highWatermark.segmentBaseOffset=0 and fetchOffset.segmentBaseOffset=101, 
> so it assumes the fetch occurs on a later segment and completes the delayed 
> fetch immediately.
> In this case, replica.highWatermark was not updated because 
> LogOffsetMetadata.precedes() only checks the messageOffset and ignores the 
> segmentBaseOffset. The fix is to let LogOffsetMetadata first check the 
> messageOffset and then the segmentBaseOffset, so replica.highWatermark gets 
> updated after the follower fetches from the leader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3003) The fetch.wait.max.ms is not honored when new log segment rolled for low volume topics.

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061611#comment-15061611
 ] 

ASF GitHub Bot commented on KAFKA-3003:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/688

KAFKA-3003 Update the replica.highWatermark correctly



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3003

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/688.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #688


commit d3f9edf89ac32f44413edc3d58e227fa2d859ca2
Author: Jiangjie Qin 
Date:   2015-12-17T06:52:11Z

KAFKA-3003 Update the replica.highWatermark correctly




> The fetch.wait.max.ms is not honored when new log segment rolled for low 
> volume topics.
> ---
>
> Key: KAFKA-3003
> URL: https://issues.apache.org/jira/browse/KAFKA-3003
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The problem we saw can be explained by the example below:
> 1. Message offset 100 is appended to partition p0, log segment 
> 00000000000000000000.log, at time T. After that no message is appended. 
> 2. This message is replicated, and the leader replica updates its 
> highWatermark.messageOffset=100, highWatermark.segmentBaseOffset=0.
> 3. At time T + retention.ms, because no message has been appended to the current 
> active log segment for retention.ms, the last modified time of the current 
> log segment reaches the retention time. 
> 4. The broker rolls a new log segment 00000000000000000101.log and deletes the 
> old log segment 00000000000000000000.log. The new log segment in this case is 
> empty because no message has been appended. 
> 5. In Log, nextOffsetMetadata.segmentBaseOffset is updated to the new log 
> segment's base offset, but nextOffsetMetadata.messageOffset does not change, so 
> nextOffsetMetadata.messageOffset=101 and nextOffsetMetadata.segmentBaseOffset=101.
> 6. Now a FetchRequest comes and tries to fetch from offset 101 with 
> fetch.wait.max.ms=1000.
> 7. In ReplicaManager, because there is no data to return, the fetch request 
> is put into purgatory. When delayedFetchPurgatory.tryCompleteElseWatch() is 
> called, DelayedFetch.tryComplete() compares replica.highWatermark with the 
> fetchOffset returned by log.read(); it sees 
> replica.highWatermark.segmentBaseOffset=0 and fetchOffset.segmentBaseOffset=101, 
> so it assumes the fetch occurs on a later segment and completes the delayed 
> fetch immediately.
> In this case, replica.highWatermark was not updated because 
> LogOffsetMetadata.precedes() only checks the messageOffset and ignores the 
> segmentBaseOffset. The fix is to let LogOffsetMetadata first check the 
> messageOffset and then the segmentBaseOffset, so replica.highWatermark gets 
> updated after the follower fetches from the leader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3003 Update the replica.highWatermark co...

2015-12-16 Thread becketqin
GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/688

KAFKA-3003 Update the replica.highWatermark correctly



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3003

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/688.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #688


commit d3f9edf89ac32f44413edc3d58e227fa2d859ca2
Author: Jiangjie Qin 
Date:   2015-12-17T06:52:11Z

KAFKA-3003 Update the replica.highWatermark correctly




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-2422) Allow copycat connector plugins to be aliased to simpler names

2015-12-16 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira reassigned KAFKA-2422:
---

Assignee: Gwen Shapira  (was: Ewen Cheslack-Postava)

> Allow copycat connector plugins to be aliased to simpler names
> --
>
> Key: KAFKA-2422
> URL: https://issues.apache.org/jira/browse/KAFKA-2422
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Gwen Shapira
>Priority: Minor
>
> Configurations of connectors can get quite verbose when you have to specify 
> the full class name, e.g. 
> connector.class=org.apache.kafka.copycat.file.FileStreamSinkConnector
> It would be nice to allow connector classes to provide shorter aliases, e.g. 
> something like "file-sink", to make this config less verbose. Flume does 
> this, so we can use it as an example.
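
A quick illustration of the aliasing idea, using hypothetical names rather than 
any actual Copycat API (only FileStreamSinkConnector comes from the ticket text; 
the object and its contents are made up for illustration):

{code}
// Hypothetical sketch: map short aliases onto fully-qualified connector classes
// and fall back to treating the configured value as a class name.
object ConnectorAliases {
  private val aliases = Map(
    "file-sink" -> "org.apache.kafka.copycat.file.FileStreamSinkConnector"
  )

  def resolve(connectorClass: String): String =
    aliases.getOrElse(connectorClass, connectorClass)
}

// connector.class=file-sink would then behave like the verbose form:
// ConnectorAliases.resolve("file-sink") == "org.apache.kafka.copycat.file.FileStreamSinkConnector"
{code}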



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2422) Allow copycat connector plugins to be aliased to simpler names

2015-12-16 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2422:

Reviewer: Ewen Cheslack-Postava

> Allow copycat connector plugins to be aliased to simpler names
> --
>
> Key: KAFKA-2422
> URL: https://issues.apache.org/jira/browse/KAFKA-2422
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Gwen Shapira
>Priority: Minor
>
> Configurations of connectors can get quite verbose when you have to specify 
> the full class name, e.g. 
> connector.class=org.apache.kafka.copycat.file.FileStreamSinkConnector
> It would be nice to allow connector classes to provide shorter aliases, e.g. 
> something like "file-sink", to make this config less verbose. Flume does 
> this, so we can use it as an example.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3003) The fetch.wait.max.ms is not honored when new log segment rolled for low volume topics.

2015-12-16 Thread Dong Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061582#comment-15061582
 ] 

Dong Lin commented on KAFKA-3003:
-

Great catch!

> The fetch.wait.max.ms is not honored when new log segment rolled for low 
> volume topics.
> ---
>
> Key: KAFKA-3003
> URL: https://issues.apache.org/jira/browse/KAFKA-3003
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The problem we saw can be explained by the example below:
> 1. Message offset 100 is appended to partition p0, log segment 
> 00000000000000000000.log, at time T. After that no message is appended. 
> 2. This message is replicated, and the leader replica updates its 
> highWatermark.messageOffset=100, highWatermark.segmentBaseOffset=0.
> 3. At time T + retention.ms, because no message has been appended to the current 
> active log segment for retention.ms, the last modified time of the current 
> log segment reaches the retention time. 
> 4. The broker rolls a new log segment 00000000000000000101.log and deletes the 
> old log segment 00000000000000000000.log. The new log segment in this case is 
> empty because no message has been appended. 
> 5. In Log, nextOffsetMetadata.segmentBaseOffset is updated to the new log 
> segment's base offset, but nextOffsetMetadata.messageOffset does not change, so 
> nextOffsetMetadata.messageOffset=101 and nextOffsetMetadata.segmentBaseOffset=101.
> 6. Now a FetchRequest comes and tries to fetch from offset 101 with 
> fetch.wait.max.ms=1000.
> 7. In ReplicaManager, because there is no data to return, the fetch request 
> is put into purgatory. When delayedFetchPurgatory.tryCompleteElseWatch() is 
> called, DelayedFetch.tryComplete() compares replica.highWatermark with the 
> fetchOffset returned by log.read(); it sees 
> replica.highWatermark.segmentBaseOffset=0 and fetchOffset.segmentBaseOffset=101, 
> so it assumes the fetch occurs on a later segment and completes the delayed 
> fetch immediately.
> In this case, replica.highWatermark was not updated because 
> LogOffsetMetadata.precedes() only checks the messageOffset and ignores the 
> segmentBaseOffset. The fix is to let LogOffsetMetadata first check the 
> messageOffset and then the segmentBaseOffset, so replica.highWatermark gets 
> updated after the follower fetches from the leader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3002: Make available to specify hostname...

2015-12-16 Thread sasakitoa
GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/685

KAFKA-3002: Make available to specify hostname with Uppercase at broker list

Make available to specify hostname with Uppercase at broker list

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka hostname_uppercase

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/685.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #685


commit 337b75eeb8450daf994cd055b1ba0b2f79bbe676
Author: Sasaki Toru 
Date:   2015-12-16T15:55:45Z

make available to specify hostname with Uppercase letter




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3002) Make available to specify hostname with Uppercase at broker list

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061247#comment-15061247
 ] 

ASF GitHub Bot commented on KAFKA-3002:
---

GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/685

KAFKA-3002: Make available to specify hostname with Uppercase at broker list

Make available to specify hostname with Uppercase at broker list

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka hostname_uppercase

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/685.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #685


commit 337b75eeb8450daf994cd055b1ba0b2f79bbe676
Author: Sasaki Toru 
Date:   2015-12-16T15:55:45Z

make available to specify hostname with Uppercase letter




> Make available to specify hostname with Uppercase at broker list
> 
>
> Key: KAFKA-3002
> URL: https://issues.apache.org/jira/browse/KAFKA-3002
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Sasaki Toru
>Priority: Minor
> Fix For: 0.9.0.1
>
>
> Currently we cannot specify a hostname containing uppercase letters in the 
> broker list (e.g. the --broker-list option for kafka-console-producer.sh):
> OK: kafka-console-producer.sh --broker-list kafkaserver:9092 --topic test
> NG: kafka-console-producer.sh --broker-list KafkaServer:9092 --topic test
> (an exception occurs because DNS resolution fails)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #243

2015-12-16 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2984: KTable should send old values when required

--
[...truncated 476 lines...]
kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.admin.ConfigCommandTest > testArgumentParse PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout PASSED

kafka.producer.ProducerTest > testSendNullMessage PASSED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo PASSED

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.common.ZkNodeChangeNotificationListenerTest > testProcessNotification 
PASSED

kafka.common.TopicTest > testInvalidTopicNames PASSED

kafka.common.TopicTest > testTopicHasCollision PASSED

kafka.common.TopicTest > testTopicHasCollisionChars PASSED

kafka.common.ConfigTest > testInvalidGroupIds PASSED

kafka.common.ConfigTest > testInvalidClientIds PASSED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.PlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps PASSED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
PASSED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps PASSED

kafka.server.SslReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition PASSED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest PASSED

kafka.server.ThrottledResponseExpirationTest > testExpire PASSED

kafka.server.ReplicaManagerTest > testHighWaterMarkDirectoryMapping PASSED

kafka.server.ReplicaManagerTest > testIllegalRequiredAcks PASSED

kafka.server.ReplicaManagerTest > testHighwaterMarkRelativeDirectoryMapping 
PASSED

kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup PASSED

kafka.server.ServerShutdownTest > testConsecutiveShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdownWithDeleteTopicEnabled PASSED

kafka.server.LeaderElectionTest > testLeaderElectionWithStaleControllerEpoch 
PASSED

kafka.server.LeaderElectionTest > testLeaderElectionAndEpoch PASSED

kafka.server.OffsetCommitTest 

[GitHub] kafka pull request: KAFKA-2988: Change default configuration of th...

2015-12-16 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/686

KAFKA-2988: Change default configuration of the log cleaner



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka compaction

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/686.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #686


commit 9d5a56ec164d3113762ffda5e83dd9c278a9d29a
Author: Grant Henke 
Date:   2015-12-17T03:44:42Z

KAFKA-2988: Change default value of log.cleaner.enable




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2988) Change default configuration of the log cleaner

2015-12-16 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2988:
---
Summary: Change default configuration of the log cleaner  (was: Change 
default value of log.cleaner.enable )

> Change default configuration of the log cleaner
> ---
>
> Key: KAFKA-2988
> URL: https://issues.apache.org/jira/browse/KAFKA-2988
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Since 0.9.0 the internal "__consumer_offsets" topic is being used more 
> heavily. Because this is a compacted topic "log.cleaner.enable" needs to be 
> "true" in order for it to be compacted. 
> Since this is critical for core Kafka functionality, we should change the 
> default to true and potentially consider removing the option to disable it 
> altogether. 
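
For reference, a minimal sketch of the override operators need today (the object 
name is made up; only the config key comes from the ticket):

{code}
import java.util.Properties

object LogCleanerDefaultSketch {
  // Required today for compacted topics such as the internal __consumer_offsets
  // topic to actually be compacted; the proposal is to make this the default.
  def brokerOverrides(): Properties = {
    val props = new Properties()
    props.put("log.cleaner.enable", "true")
    props
  }
}
{code}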



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2988) Change default value of log.cleaner.enable

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061408#comment-15061408
 ] 

ASF GitHub Bot commented on KAFKA-2988:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/686

KAFKA-2988: Change default configuration of the log cleaner



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka compaction

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/686.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #686


commit 9d5a56ec164d3113762ffda5e83dd9c278a9d29a
Author: Grant Henke 
Date:   2015-12-17T03:44:42Z

KAFKA-2988: Change default value of log.cleaner.enable




> Change default value of log.cleaner.enable 
> ---
>
> Key: KAFKA-2988
> URL: https://issues.apache.org/jira/browse/KAFKA-2988
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Since 0.9.0 the internal "__consumer_offsets" topic is being used more 
> heavily. Because this is a compacted topic "log.cleaner.enable" needs to be 
> "true" in order for it to be compacted. 
> Since this is critical for core Kafka functionality, we should change the 
> default to true and potentially consider removing the option to disable it 
> altogether. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3002) Make available to specify hostname with Uppercase at broker list

2015-12-16 Thread Sasaki Toru (JIRA)
Sasaki Toru created KAFKA-3002:
--

 Summary: Make available to specify hostname with Uppercase at 
broker list
 Key: KAFKA-3002
 URL: https://issues.apache.org/jira/browse/KAFKA-3002
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.9.0.1
Reporter: Sasaki Toru
Priority: Minor
 Fix For: 0.9.0.1


Currently we cannot specify a hostname containing uppercase letters in the broker 
list (e.g. the --broker-list option for kafka-console-producer.sh):

OK: kafka-console-producer.sh --broker-list kafkaserver:9092 --topic test

NG: kafka-console-producer.sh --broker-list KafkaServer:9092 --topic test
(an exception occurs because DNS resolution fails)
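
One way to see the symptom in isolation (an assumed illustration only, not the 
code touched by PR #685): a host:port pattern restricted to lowercase letters 
rejects an otherwise valid uppercase hostname, even though DNS itself is 
case-insensitive.

{code}
import java.util.regex.Pattern

// Illustration only: this pattern and object are made up to reproduce the symptom.
object HostCaseSketch {
  private val lowerOnly       = Pattern.compile("([0-9a-z\\-.]*):([0-9]+)")
  private val caseInsensitive = Pattern.compile("(?i)([0-9a-z\\-.]*):([0-9]+)")

  def host(endpoint: String, pattern: Pattern): Option[String] = {
    val m = pattern.matcher(endpoint)
    if (m.matches()) Some(m.group(1)) else None
  }

  def main(args: Array[String]): Unit = {
    println(host("KafkaServer:9092", lowerOnly))        // None -> endpoint rejected
    println(host("KafkaServer:9092", caseInsensitive))  // Some(KafkaServer)
  }
}
{code}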




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2979) Enable authorizer and ACLs in ducktape tests

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1506#comment-1506
 ] 

ASF GitHub Bot commented on KAFKA-2979:
---

GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/683

KAFKA-2979: Enable authorizer and ACLs in ducktape tests

Patch by @fpj and @benstopford.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-2979

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/683.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #683


commit 5586e3950442060e5c5dc19b89381e86c6d4a04f
Author: flavio junqueira 
Date:   2015-11-27T20:14:35Z

First cut of the ducktape test.

commit 15c23475ae440c0b77048510b12c69f4941950e1
Author: flavio junqueira 
Date:   2015-11-28T00:00:05Z

Fixes to references in zookeeper.py.

commit b0ff7f97fee552c6266cc3f5ce09f7e99db97d23
Author: flavio junqueira 
Date:   2015-11-28T04:00:45Z

Test case passes.

commit 885b42a7e1a8b42b00ed1c8cbb76ba2dc8930757
Author: flavio junqueira 
Date:   2015-11-28T12:35:44Z

KAFKA-2905: Make zookeeper replicated.

commit ff4e8f75845259d755cc0a0a11008115f5aff7e3
Author: flavio junqueira 
Date:   2015-11-30T15:24:58Z

KAFKA-2905: Clean up - moved config file, removed warns, moved jaas 
generation.

commit d78656e6faed82f2c8616f00c1ed1cfed97d2f3f
Author: flavio junqueira 
Date:   2015-12-01T16:29:11Z

KAFKA-2905: jaas reference and generation improvements.

commit 2628db2223818143c1509397aa6c384484525ff4
Author: flavio junqueira 
Date:   2015-12-01T16:38:11Z

KAFKA-2905: Changes to kafka.properties.

commit e78c9b4f3a5bd30bc8cd501076618f0642c5972a
Author: flavio junqueira 
Date:   2015-12-01T16:39:18Z

KAFKA-2905: Increased timeout for producer to get it to pass in my local 
machine.

commit abb09c007aaf3144853060efdb65cab74a0bd790
Author: flavio junqueira 
Date:   2015-12-01T16:41:50Z

Merge remote-tracking branch 'upstream/trunk' into KAFKA-2905

commit b9d3be240743c0541aaa9369d381562f5dd2969c
Author: flavio junqueira 
Date:   2015-12-01T17:01:43Z

KAFKA-2905: Adding plain_jaas.conf.

commit 85aa0713d86fb6783cbd29709834d2013aa61822
Author: flavio junqueira 
Date:   2015-12-01T23:14:55Z

KAFKA-2905: Removing commented code.

commit 70a21a4c10e474ae5f7996ee3badcfc448494917
Author: flavio junqueira 
Date:   2015-12-02T00:14:43Z

KAFKA-2905: Removed unnecessary sleep.

commit 21fb8ec5ce6711704dfe2217c47040fae7bad323
Author: flavio junqueira 
Date:   2015-12-02T00:41:26Z

KAFKA-2905: Removing PLAIN.

commit dcf76bf3d49680bbd2a07d102d7855d2b08ee6d1
Author: flavio junqueira 
Date:   2015-12-02T01:45:52Z

KAFKA-2905: Removed missing instance of PLAIN.

commit d66dae448a61bc6c12a7c61f9ae9bdf6b75057c2
Author: flavio junqueira 
Date:   2015-12-02T09:14:15Z

KAFKA-2905: Corrected the min isr configuration.

commit de068a2bbfe863ee4be3799fffdcfadff00ba67e
Author: flavio junqueira 
Date:   2015-12-02T13:28:50Z

KAFKA-2905: Changed to Kerberos auth.

commit 95bc8a938af8078fa907c64f1c1983402f19ad48
Author: flavio junqueira 
Date:   2015-12-02T18:14:08Z

KAFKA-2905: Moving system properties to zookeeper.py.

commit fc6ff2eb0578767ea278742a12f26c675b6cfc28
Author: flavio junqueira 
Date:   2015-12-02T18:22:05Z

KAFKA-2905: Remove changes to timeouts in produce_consume_validate.py.

commit 755959504cae1441046c131e5c921cc18c5d5b4b
Author: flavio junqueira 
Date:   2015-12-03T00:06:24Z

KAFKA-2905: Removed change in minikdc.py.

commit 548043593a2ff5711a631452f9e0732420a22dd6
Author: flavio junqueira 
Date:   2015-12-03T00:09:04Z

KAFKA-2905: Missing colon in zookeeper.py.

commit 0d200b7700924e059083e9d8d70d0f9ad7339bd1
Author: flavio junqueira 
Date:   2015-12-03T00:13:46Z

Merge remote-tracking branch 'upstream/trunk' into KAFKA-2905

commit a2b710c97d8bb50d4f7e4336bf81296b7a08
Author: flavio junqueira 
Date:   2015-12-03T01:19:12Z

KAFKA-2905: Fixed bug in the generation of the principals string.

commit e820d0cd6ff34e7bbf481bcab4fe371f44110828
Author: flavio junqueira 
Date:   2015-12-03T11:41:13Z

KAFKA-2905: Increased zk connect time out and made zk jaas config 
conditional.

commit 6b279ba4202ad339bc9575c7e5c1a3f9e6085c8f
Author: flavio junqueira 
Date:   2015-12-03T18:25:49Z


[GitHub] kafka pull request: KAFKA-2979: Enable authorizer and ACLs in duck...

2015-12-16 Thread fpj
GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/683

KAFKA-2979: Enable authorizer and ACLs in ducktape tests

Patch by @fpj and @benstopford.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-2979

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/683.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #683


commit 5586e3950442060e5c5dc19b89381e86c6d4a04f
Author: flavio junqueira 
Date:   2015-11-27T20:14:35Z

First cut of the ducktape test.

commit 15c23475ae440c0b77048510b12c69f4941950e1
Author: flavio junqueira 
Date:   2015-11-28T00:00:05Z

Fixes to references in zookeeper.py.

commit b0ff7f97fee552c6266cc3f5ce09f7e99db97d23
Author: flavio junqueira 
Date:   2015-11-28T04:00:45Z

Test case passes.

commit 885b42a7e1a8b42b00ed1c8cbb76ba2dc8930757
Author: flavio junqueira 
Date:   2015-11-28T12:35:44Z

KAFKA-2905: Make zookeeper replicated.

commit ff4e8f75845259d755cc0a0a11008115f5aff7e3
Author: flavio junqueira 
Date:   2015-11-30T15:24:58Z

KAFKA-2905: Clean up - moved config file, removed warns, moved jaas 
generation.

commit d78656e6faed82f2c8616f00c1ed1cfed97d2f3f
Author: flavio junqueira 
Date:   2015-12-01T16:29:11Z

KAFKA-2905: jaas reference and generation improvements.

commit 2628db2223818143c1509397aa6c384484525ff4
Author: flavio junqueira 
Date:   2015-12-01T16:38:11Z

KAFKA-2905: Changes to kafka.properties.

commit e78c9b4f3a5bd30bc8cd501076618f0642c5972a
Author: flavio junqueira 
Date:   2015-12-01T16:39:18Z

KAFKA-2905: Increased timeout for producer to get it to pass in my local 
machine.

commit abb09c007aaf3144853060efdb65cab74a0bd790
Author: flavio junqueira 
Date:   2015-12-01T16:41:50Z

Merge remote-tracking branch 'upstream/trunk' into KAFKA-2905

commit b9d3be240743c0541aaa9369d381562f5dd2969c
Author: flavio junqueira 
Date:   2015-12-01T17:01:43Z

KAFKA-2905: Adding plain_jaas.conf.

commit 85aa0713d86fb6783cbd29709834d2013aa61822
Author: flavio junqueira 
Date:   2015-12-01T23:14:55Z

KAFKA-2905: Removing commented code.

commit 70a21a4c10e474ae5f7996ee3badcfc448494917
Author: flavio junqueira 
Date:   2015-12-02T00:14:43Z

KAFKA-2905: Removed unnecessary sleep.

commit 21fb8ec5ce6711704dfe2217c47040fae7bad323
Author: flavio junqueira 
Date:   2015-12-02T00:41:26Z

KAFKA-2905: Removing PLAIN.

commit dcf76bf3d49680bbd2a07d102d7855d2b08ee6d1
Author: flavio junqueira 
Date:   2015-12-02T01:45:52Z

KAFKA-2905: Removed missing instance of PLAIN.

commit d66dae448a61bc6c12a7c61f9ae9bdf6b75057c2
Author: flavio junqueira 
Date:   2015-12-02T09:14:15Z

KAFKA-2905: Corrected the min isr configuration.

commit de068a2bbfe863ee4be3799fffdcfadff00ba67e
Author: flavio junqueira 
Date:   2015-12-02T13:28:50Z

KAFKA-2905: Changed to Kerberos auth.

commit 95bc8a938af8078fa907c64f1c1983402f19ad48
Author: flavio junqueira 
Date:   2015-12-02T18:14:08Z

KAFKA-2905: Moving system properties to zookeeper.py.

commit fc6ff2eb0578767ea278742a12f26c675b6cfc28
Author: flavio junqueira 
Date:   2015-12-02T18:22:05Z

KAFKA-2905: Remove changes to timeouts in produce_consume_validate.py.

commit 755959504cae1441046c131e5c921cc18c5d5b4b
Author: flavio junqueira 
Date:   2015-12-03T00:06:24Z

KAFKA-2905: Removed change in minikdc.py.

commit 548043593a2ff5711a631452f9e0732420a22dd6
Author: flavio junqueira 
Date:   2015-12-03T00:09:04Z

KAFKA-2905: Missing colon in zookeeper.py.

commit 0d200b7700924e059083e9d8d70d0f9ad7339bd1
Author: flavio junqueira 
Date:   2015-12-03T00:13:46Z

Merge remote-tracking branch 'upstream/trunk' into KAFKA-2905

commit a2b710c97d8bb50d4f7e4336bf81296b7a08
Author: flavio junqueira 
Date:   2015-12-03T01:19:12Z

KAFKA-2905: Fixed bug in the generation of the principals string.

commit e820d0cd6ff34e7bbf481bcab4fe371f44110828
Author: flavio junqueira 
Date:   2015-12-03T11:41:13Z

KAFKA-2905: Increased zk connect time out and made zk jaas config 
conditional.

commit 6b279ba4202ad339bc9575c7e5c1a3f9e6085c8f
Author: flavio junqueira 
Date:   2015-12-03T18:25:49Z

KAFKA-2905: Adding test cases with different security protocols.

commit f70b6b2867f3e07b51e87821972d617a4c4395e6
Author: flavio junqueira 
Date:   2015-12-03T18:52:49Z

Merge remote-tracking branch 'upstream/trunk' into 

[GitHub] kafka pull request: MINOR: Change return type of `Schema.read` to ...

2015-12-16 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/684

MINOR: Change return type of `Schema.read` to be `Struct` instead of 
`Object`

We always return a `Struct` from `Schema.read` and this means that
we can remove a large number of casts.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka schema-read-should-return-struct

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/684.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #684


commit 147f9625f1d0006ddeabffa342fa7e6c3fdbe2be
Author: Ismael Juma 
Date:   2015-12-16T15:28:44Z

Change return type of `Schema.read` to be `Struct` instead of `Object`

We always return a `Struct` from `Schema.read` and this means that
we can remove a large number of casts.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2992) Trace log statements in the replica fetcher inner loop create large amounts of garbage

2015-12-16 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059891#comment-15059891
 ] 

Ismael Juma commented on KAFKA-2992:


I had a look at the code in question and I'm surprised by the findings. The 
code for `processPartitionData` follows:

{code}
  val TopicAndPartition(topic, partitionId) = topicAndPartition
  val replica = replicaMgr.getReplica(topic, partitionId).get
  val messageSet = partitionData.toByteBufferMessageSet
  warnIfMessageOversized(messageSet)

  if (fetchOffset != replica.logEndOffset.messageOffset)
    throw new RuntimeException("Offset mismatch: fetched offset = %d, log end offset = %d.".format(fetchOffset, replica.logEndOffset.messageOffset))
  trace("Follower %d has replica log end offset %d for partition %s. Received %d messages and leader hw %d"
    .format(replica.brokerId, replica.logEndOffset.messageOffset, topicAndPartition, messageSet.sizeInBytes, partitionData.highWatermark))
  replica.log.get.append(messageSet, assignOffsets = false)
  trace("Follower %d has replica log end offset %d after appending %d bytes of messages for partition %s"
    .format(replica.brokerId, replica.logEndOffset.messageOffset, messageSet.sizeInBytes, topicAndPartition))
  val followerHighWatermark = replica.logEndOffset.messageOffset.min(partitionData.highWatermark)
  // for the follower replica, we do not need to keep
  // its segment base offset the physical position,
  // these values will be computed upon making the leader
  replica.highWatermark = new LogOffsetMetadata(followerHighWatermark)
  trace("Follower %d set replica high watermark for partition [%s,%d] to %s"
    .format(replica.brokerId, topic, partitionId, followerHighWatermark))
{code}

There are a number of allocations there, so I don't see why the thunk 
allocations would be responsible for 98% of the allocations by object count (as 
per the original description). If we actually want to solve the issue at hand, 
I think more investigation would be needed and we could do more to reduce 
allocations. As it is, though, it is unclear whether there is a real issue or it 
was just a profiler artifact (personally, I never trust profiler data in 
isolation; it needs to be verified via other means too).
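
For context on the thunk question, a small self-contained sketch (assuming log4j 
and a by-name trace(msg: => String) helper in the style of kafka.utils.Logging, 
not that class itself): the by-name argument compiles to a Function0 that is 
allocated on every call even when TRACE is disabled, while an explicit 
isTraceEnabled guard at the call site avoids both the thunk and the format() work.

{code}
import org.apache.log4j.Logger

// Sketch only; the helper mirrors the shape of kafka.utils.Logging.
object TraceThunkSketch {
  private val logger = Logger.getLogger(getClass)

  // The by-name parameter becomes a Function0 thunk allocated at every call site,
  // even though the body is never evaluated when TRACE is off.
  def trace(msg: => String): Unit =
    if (logger.isTraceEnabled) logger.trace(msg)

  def hotLoop(brokerId: Int, logEndOffset: Long, bytes: Int): Unit = {
    // Guarding at the call site skips the thunk and the string formatting entirely
    // when TRACE is disabled, which is what the linked pull request proposes.
    if (logger.isTraceEnabled)
      trace("Follower %d has log end offset %d after appending %d bytes".format(brokerId, logEndOffset, bytes))
  }
}
{code}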

> Trace log statements in the replica fetcher inner loop create large amounts 
> of garbage
> --
>
> Key: KAFKA-2992
> URL: https://issues.apache.org/jira/browse/KAFKA-2992
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.2.1, 0.9.0.0
> Environment: Centos 6, Java 1.8.0_20
>Reporter: Cory Kolbeck
>Priority: Minor
>  Labels: garbage, logging, trace
> Fix For: 0.9.1.0
>
>
> We're seeing some GC pause issues in production, and during our investigation 
> found that the thunks created during invocation of three trace statements 
> guarded in the attached PR were responsible for ~98% of all allocations by 
> object count and ~90% by size. While I'm not sure that this was actually the 
> cause of our issue, it seems prudent to avoid useless allocations in a tight 
> loop.
> I realize that the trace() call does its own guarding internally, however 
> it's insufficient to prevent allocation of the thunk. I can work on getting 
> profiling results to attach here, but I used YourKit and the license has 
> since expired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2996) Introduction around queue-like consumer group behaviour is misleading

2015-12-16 Thread Jakub Korab (JIRA)
Jakub Korab created KAFKA-2996:
--

 Summary: Introduction around queue-like consumer group behaviour 
is misleading
 Key: KAFKA-2996
 URL: https://issues.apache.org/jira/browse/KAFKA-2996
 Project: Kafka
  Issue Type: Improvement
  Components: website
Affects Versions: 0.9.0.0
Reporter: Jakub Korab
Priority: Minor


The documentation reads "If all the consumer instances have the same consumer 
group, then this works just like a traditional queue balancing load over the 
consumers. "

This is then followed two paragraphs later by an explanation of how multiple 
consumers on the same partition work like a queue with exclusive consumers, and 
that partitioning is required for parallelism. This section needs some 
refactoring to make it read better - as it stands it is confusing/misleading.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2997) Synchronous write to disk

2015-12-16 Thread Arkadiusz Firus (JIRA)
Arkadiusz Firus created KAFKA-2997:
--

 Summary: Synchronous write to disk
 Key: KAFKA-2997
 URL: https://issues.apache.org/jira/browse/KAFKA-2997
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.9.0.0
Reporter: Arkadiusz Firus
Priority: Minor


Hi All,

I am currently working on a mechanism that allows efficient synchronous writes to 
the file system. My idea is to gather a few write requests for one partition and 
then call fsync.

As I read the code I found that the best place to do this is to modify the
kafka.log.Log.append
method. Currently, at the end of the method (line 368), there is a check whether 
the number of unflushed messages is greater than the flush interval (a 
configuration parameter).

I am thinking of extending this condition. I want to add an additional boolean 
configuration parameter (sync write or something like it). If this parameter is 
set to true, the thread should block on a lock at the end of this method. In 
addition, there will be a timer thread (one per partition) invoked every 10ms 
(a configuration parameter). On each invocation this thread will call the flush 
method and then release all blocked threads.

I am writing here because I would like to know your opinion about this approach. 
Do you think it is good, or does someone have a better (more permanent) one? I 
would also like to know whether such an approach fits the general Kafka 
architecture.
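
A minimal sketch of the mechanism described above, with hypothetical names (this 
is not Kafka code): the appending thread blocks after its write, and a 
per-partition timer thread periodically swaps in a fresh latch, fsyncs the log, 
and releases every thread that wrote before the swap.

{code}
import java.util.concurrent.{CountDownLatch, Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicReference

// Sketch only: flush() stands for the partition log's flush/fsync call.
class SyncWriteBarrier(flush: () => Unit, syncIntervalMs: Long = 10L) {
  private val current = new AtomicReference(new CountDownLatch(1))
  private val timer = Executors.newSingleThreadScheduledExecutor()

  timer.scheduleAtFixedRate(new Runnable {
    def run(): Unit = {
      val toRelease = current.getAndSet(new CountDownLatch(1)) // writers from the previous window
      flush()                                                  // fsync the partition's log
      toRelease.countDown()                                    // their data is now on disk
    }
  }, syncIntervalMs, syncIntervalMs, TimeUnit.MILLISECONDS)

  /** Called at the end of Log.append when the sync-write flag is enabled. */
  def awaitSync(): Unit = current.get().await()

  def shutdown(): Unit = timer.shutdown()
}
{code}

Writers call awaitSync() only after their append, so anyone released by 
countDown() wrote before the flush that precedes it; a real implementation would 
also need error handling and clean shutdown.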



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2997) Synchronous write to disk

2015-12-16 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060256#comment-15060256
 ] 

Grant Henke commented on KAFKA-2997:


I am curious about the motivation for synchronous writing to the file system. 
What benefit/functionality are you trying to achieve?

> Synchronous write to disk
> -
>
> Key: KAFKA-2997
> URL: https://issues.apache.org/jira/browse/KAFKA-2997
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Arkadiusz Firus
>Priority: Minor
>  Labels: features, patch
>
> Hi All,
> I am currently working on a mechanism that allows efficient synchronous writes 
> to the file system. My idea is to gather a few write requests for one partition 
> and then call fsync.
> As I read the code I found that the best place to do this is to modify the
> kafka.log.Log.append
> method. Currently, at the end of the method (line 368), there is a check whether 
> the number of unflushed messages is greater than the flush interval (a 
> configuration parameter).
> I am thinking of extending this condition. I want to add an additional boolean 
> configuration parameter (sync write or something like it). If this parameter is 
> set to true, the thread should block on a lock at the end of this method. In 
> addition, there will be a timer thread (one per partition) invoked every 10ms 
> (a configuration parameter). On each invocation this thread will call the flush 
> method and then release all blocked threads.
> I am writing here because I would like to know your opinion about this approach. 
> Do you think it is good, or does someone have a better (more permanent) one? I 
> would also like to know whether such an approach fits the general Kafka 
> architecture.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2997) Synchronous write to disk

2015-12-16 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060260#comment-15060260
 ] 

Flavio Junqueira commented on KAFKA-2997:
-

[~afirus] I'm not very sure of what you're trying to achieve here. It sounds 
like you don't want to count unflushed messages and instead you want to have a 
timer thread to trigger the flush. I'm wondering if you really need to have 
that thread to achieve your goal or if you can simply run a thread in a tight 
loop that will flush, accumulate while flushing, flush, accumulate while 
flushing, and so on. What I'm proposing is essentially what we do in the 
SyncRequestProcessor of ZooKeeper:

https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/server/SyncRequestProcessor.java
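
A rough sketch of the loop described here, in the spirit of ZooKeeper's 
SyncRequestProcessor (all names are made up): a single thread takes whatever 
accumulated while the previous fsync was in flight, flushes once, and 
acknowledges the whole batch.

{code}
import java.util.concurrent.LinkedBlockingQueue
import scala.collection.JavaConverters._

// Sketch only: R is whatever represents a pending write, flush() is the fsync,
// and ack() completes the callers covered by that fsync.
class GroupCommitThread[R](flush: () => Unit, ack: Seq[R] => Unit) extends Thread {
  private val queue = new LinkedBlockingQueue[R]()

  def submit(request: R): Unit = queue.put(request)

  override def run(): Unit =
    while (!Thread.currentThread().isInterrupted) {
      val batch = new java.util.ArrayList[R]()
      batch.add(queue.take())   // block until at least one write is pending
      queue.drainTo(batch)      // plus everything that accumulated during the last flush
      flush()                   // one fsync covers the whole batch
      ack(batch.asScala)
    }
}
{code}

On interrupt the take() call throws and ends the loop; a real implementation 
would handle shutdown and I/O errors more carefully.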
  

> Synchronous write to disk
> -
>
> Key: KAFKA-2997
> URL: https://issues.apache.org/jira/browse/KAFKA-2997
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Arkadiusz Firus
>Priority: Minor
>  Labels: features, patch
>
> Hi All,
> I am currently working on a mechanism that allows efficient synchronous writes 
> to the file system. My idea is to gather a few write requests for one partition 
> and then call fsync.
> As I read the code I found that the best place to do this is to modify the
> kafka.log.Log.append
> method. Currently, at the end of the method (line 368), there is a check whether 
> the number of unflushed messages is greater than the flush interval (a 
> configuration parameter).
> I am thinking of extending this condition. I want to add an additional boolean 
> configuration parameter (sync write or something like it). If this parameter is 
> set to true, the thread should block on a lock at the end of this method. In 
> addition, there will be a timer thread (one per partition) invoked every 10ms 
> (a configuration parameter). On each invocation this thread will call the flush 
> method and then release all blocked threads.
> I am writing here because I would like to know your opinion about this approach. 
> Do you think it is good, or does someone have a better (more permanent) one? I 
> would also like to know whether such an approach fits the general Kafka 
> architecture.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2929) Migrate server side error mapping functionality

2015-12-16 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2929:
---
Summary: Migrate server side error mapping functionality  (was: Deprecate 
duplicate error mapping functionality)

> Migrate server side error mapping functionality
> ---
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and issues with consistency, we should deprecate 
> ErrorMapping.scala in core in favor of Errors.java in common. Any duplicated 
> exceptions in core should be deprecated as well to ensure the mapping is 
> correct. 
> When the old clients are removed ErrorMapping.scala and the deprecated 
> exceptions can be removed as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2929) Migrate server side error mapping functionality

2015-12-16 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2929:
---
Description: 
Kafka common and core both have a class that maps error codes and exceptions. 
To prevent errors and issues with consistency, we should migrate from 
ErrorMapping.scala in core in favor of Errors.java in common.

When the old clients are removed ErrorMapping.scala and the old exceptions 
should be removed.



  was:
Kafka common and core both have a class that maps error codes and exceptions. 
To prevent errors and issues with consistency we should deprecate 
ErrorMapping.scala in core in favor of Errors.java in common. Any duplicated 
exceptions in core should be deprecated as well to ensure the mapping is 
correct. 

When the old clients are removed ErrorMapping.scala and the deprecated 
exceptions can be removed as well.




> Migrate server side error mapping functionality
> ---
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and issues with consistency we should migrate from 
> ErrorMapping.scala in core in favor of Errors.java in common.
> When the old clients are removed ErrorMapping.scala and the old exceptions 
> should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2995) in 0.9.0.0 Old Consumer's commitOffsets with specify partition can submit not exists topic and partition to zk

2015-12-16 Thread Pengwei (JIRA)
Pengwei created KAFKA-2995:
--

 Summary: in 0.9.0.0 Old Consumer's commitOffsets with specify 
partition can submit not exists topic and partition to zk
 Key: KAFKA-2995
 URL: https://issues.apache.org/jira/browse/KAFKA-2995
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.9.0.0
Reporter: Pengwei
Assignee: Neha Narkhede
 Fix For: 0.9.1.0


In 0.9.0.0, the old consumer's commitOffsets interface is as follows:

def commitOffsets(offsetsToCommit: immutable.Map[TopicAndPartition, OffsetAndMetadata], isAutoCommit: Boolean) {
  trace("OffsetMap: %s".format(offsetsToCommit))
  var retriesRemaining = 1 + (if (isAutoCommit) 0 else config.offsetsCommitMaxRetries) // no retries for commits from auto-commit
  var done = false
  while (!done) {
    val committed = offsetsChannelLock synchronized {
      // committed when we receive either no error codes or only MetadataTooLarge errors
      if (offsetsToCommit.size > 0) {
        if (config.offsetsStorage == "zookeeper") {
          offsetsToCommit.foreach { case (topicAndPartition, offsetAndMetadata) =>
            commitOffsetToZooKeeper(topicAndPartition, offsetAndMetadata.offset)
          }

This interface does not validate the offsetsToCommit parameter: if offsetsToCommit 
contains a topic or partition that does not exist in Kafka, it will still create 
an entry under the /consumers/[group]/offsets/[non-existent topic] directory.

Should we check that the topics and partitions in offsetsToCommit actually exist, 
or just check that they are contained in topicRegistry or checkpointedZkOffsets?
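
A hedged sketch of the suggested check: filter the commit map against the 
partitions the consumer actually knows about before anything is written to 
ZooKeeper. The object, helper name, and type parameters are placeholders, not 
the real connector code; the owned set would come from something like 
topicRegistry or checkpointedZkOffsets.

{code}
// Sketch only: TP stands for TopicAndPartition, OM for OffsetAndMetadata.
object CommitValidationSketch {
  def validatedOffsets[TP, OM](offsetsToCommit: Map[TP, OM], owned: Set[TP]): Map[TP, OM] = {
    val (known, unknown) = offsetsToCommit.partition { case (tp, _) => owned.contains(tp) }
    if (unknown.nonEmpty)
      println("Ignoring offsets for unknown topic-partitions: " + unknown.keys.mkString(", ")) // real code would use warn(...)
    known
  }
}
{code}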



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #912

2015-12-16 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: StreamThread performance optimization

--
[...truncated 1391 lines...]
kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIPOverrides PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #242

2015-12-16 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: StreamThread performance optimization

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 841d2d1a26af94ec95c480dbf2453f9c7d28c2f7 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 841d2d1a26af94ec95c480dbf2453f9c7d28c2f7
 > git rev-list 4ad165c0783bc49e5750292cd0e8d277e3186846 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson941197913050641230.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 8.669 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson3530585835486804547.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.9/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3/792d5e592f6f3f0c1a3337cd0ac84309b544f8f4/lz4-1.3.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 8.99 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2