[jira] [Created] (KAFKA-3384) bin scripts may not be portable/POSIX compliant

2016-03-10 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-3384:


 Summary: bin scripts may not be portable/POSIX compliant
 Key: KAFKA-3384
 URL: https://issues.apache.org/jira/browse/KAFKA-3384
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.1
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.10.0.0


We may be using some important tools in a non-POSIX compliant and non-portable 
way. In particular, we've discovered that we can sometimes trigger this error:

/usr/bin/kafka-server-stop: line 22: kill: SIGTERM: invalid signal specification

which looks like it is caused by invoking a command like {{kill -SIGTERM 
}}. (This is a lightly modified version of {{kafka-server-stop.sh}}, but 
nothing of relevance has been affected.)

Googling seems to suggest that passing the signal in that way is not compliant 
-- it's a shell extension. We're using {{/bin/sh}}, but that may be aliased to 
other, more liberal shells on some platforms. To be honest, I'm not sure 
exactly what the requirements are for triggering this, since running the 
command directly on the same host via an interactive shell still works, but we 
are definitely limiting portability with the current approach.

There are a couple of possible solutions:

1. Standardize on bash. This lets us be more permissive wrt the shell features 
we use. We're already using /bin/bash in the majority of scripts anyway. It 
might also help us avoid a bunch of assumptions people make when bash is 
aliased to sh: https://wiki.ubuntu.com/DashAsBinSh
2. Try to clean up scripts as we discover incompatibilities. The immediate fix 
for this issue seems to be to use {{kill -s TERM}} instead of {{kill -SIGTERM}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-3383) Producer should not remove an in flight request before successfully parsing the response.

2016-03-10 Thread chen zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3383 started by chen zhu.
---
> Producer should not remove an in flight request before successfully parsing 
> the response.
> -
>
> Key: KAFKA-3383
> URL: https://issues.apache.org/jira/browse/KAFKA-3383
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: chen zhu
> Fix For: 0.10.0.0
>
>
> In the NetworkClient, we remove the in flight request before we successfully 
> parse the response. If response parsing fails, the request will not be 
> fulfilled but simply lost. For a producer request, that means the callbacks 
> of the messages will never be fired.
> We should only remove the in flight request after response parsing succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3383) Producer should not remove an in flight request before successfully parsing the response.

2016-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190441#comment-15190441
 ] 

ASF GitHub Bot commented on KAFKA-3383:
---

GitHub user zhuchen1018 opened a pull request:

https://github.com/apache/kafka/pull/1050

KAFKA-3383: remove in flight request only after response parsing succeeds

@becketqin, could you take a look at the patch?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhuchen1018/kafka KAFKA-3383

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1050.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1050


commit 84372bbe06bd121e758348b54d2d79d5fb4fd095
Author: Chen Zhu 
Date:   2016-03-11T03:43:52Z

KAFKA-3383: remove in flight request only after response parsing succeeds




> Producer should not remove an in flight request before successfully parsing 
> the response.
> -
>
> Key: KAFKA-3383
> URL: https://issues.apache.org/jira/browse/KAFKA-3383
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: chen zhu
> Fix For: 0.10.0.0
>
>
> In the NetworkClient, we remove the in flight request before we successfully 
> parse the response. If response parsing fails, the request will not be 
> fulfilled but simply lost. For a producer request, that means the callbacks 
> of the messages will never be fired.
> We should only remove the in flight request after response parsing succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3383: remove in flight request only afte...

2016-03-10 Thread zhuchen1018
GitHub user zhuchen1018 opened a pull request:

https://github.com/apache/kafka/pull/1050

KAFKA-3383: remove in flight request only after response parsing succeeds

@becketqin, could you take a look at the patch?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhuchen1018/kafka KAFKA-3383

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1050.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1050


commit 84372bbe06bd121e758348b54d2d79d5fb4fd095
Author: Chen Zhu 
Date:   2016-03-11T03:43:52Z

KAFKA-3383: remove in flight request only after response parsing succeeds




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3383) Producer should not remove an in flight request before successfully parsing the response.

2016-03-10 Thread chen zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chen zhu updated KAFKA-3383:

Summary: Producer should not remove an in flight request before 
successfully parsing the response.  (was: Producer should not remove an in 
flight requests before successfully parsing the response.)

> Producer should not remove an in flight request before successfully parsing 
> the response.
> -
>
> Key: KAFKA-3383
> URL: https://issues.apache.org/jira/browse/KAFKA-3383
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: chen zhu
> Fix For: 0.10.0.0
>
>
> In the NetworkClient, we remove the in flight request before we successfully 
> parse the response. If response parsing fails, the request will not be 
> fulfilled but simply lost. For a producer request, that means the callbacks 
> of the messages will never be fired.
> We should only remove the in flight request after response parsing succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3383) Producer should not remove an in flight requests before successfully parsing the response.

2016-03-10 Thread chen zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chen zhu reassigned KAFKA-3383:
---

Assignee: chen zhu  (was: Jiangjie Qin)

> Producer should not remove an in flight requests before successfully parsing 
> the response.
> --
>
> Key: KAFKA-3383
> URL: https://issues.apache.org/jira/browse/KAFKA-3383
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: chen zhu
> Fix For: 0.10.0.0
>
>
> In the NetworkClient, we remove the in flight request before we successfully 
> parse the response. If response parsing fails, the request will not be 
> fulfilled but simply lost. For a producer request, that means the callbacks 
> of the messages will never be fired.
> We should only remove the in flight request after response parsing succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3367) Delete topic dont delete the complete log from kafka

2016-03-10 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190206#comment-15190206
 ] 

Mayuresh Gharat commented on KAFKA-3367:


The clients that are producing to the Kafka cluster should be stopped before 
you delete the topic. This is because when the topic gets deleted, the producer 
is going to issue a TopicMetadata request, and since automatic topic creation 
is turned ON, it will recreate the topic.
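
For illustration, a minimal producer that would trigger the re-creation (the 
topic name and broker address are placeholders; the broker is assumed to have 
auto.create.topics.enable=true):

{code}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TopicRecreationExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // While a producer like this is still running, a send() to the deleted
        // topic makes the client issue a TopicMetadata request for it; with
        // auto.create.topics.enable=true on the broker, that request quietly
        // recreates the topic.
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}
{code}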

> Delete topic dont delete the complete log from kafka
> 
>
> Key: KAFKA-3367
> URL: https://issues.apache.org/jira/browse/KAFKA-3367
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Akshath Patkar
>
> Deleting a topic just marks the topic as deleted, but the data still remains 
> in the logs.
> How can we delete the topic completely without manually deleting the logs 
> from Kafka and ZooKeeper?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3367) Delete topic dont delete the complete log from kafka

2016-03-10 Thread Akshath Patkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190185#comment-15190185
 ] 

Akshath Patkar commented on KAFKA-3367:
---

Yes, automatic topic creation is turned on.

There are no consumers consuming the data. And yes, the Kafka connectors which 
produce this data were stopped after the deletion of the topic.



> Delete topic dont delete the complete log from kafka
> 
>
> Key: KAFKA-3367
> URL: https://issues.apache.org/jira/browse/KAFKA-3367
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Akshath Patkar
>
> Deleting a topic just marks the topic as deleted, but the data still remains 
> in the logs.
> How can we delete the topic completely without manually deleting the logs 
> from Kafka and ZooKeeper?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3383) Producer should not remove an in flight requests before successfully parsing the response.

2016-03-10 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-3383:
---

 Summary: Producer should not remove an in flight requests before 
successfully parsing the response.
 Key: KAFKA-3383
 URL: https://issues.apache.org/jira/browse/KAFKA-3383
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Fix For: 0.10.0.0


In the NetworkClient, we remove the in flight request before we successfully 
parse the response. If response parsing fails, the request will not be 
fulfilled but simply lost. For a producer request, that means the callbacks of 
the messages will never be fired.

We should only remove the in flight request after response parsing succeeds.
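
To illustrate the intended ordering, a minimal sketch (InFlightRequest, 
Response and Callback are hypothetical stand-ins, not the actual NetworkClient 
types):

{code}
import java.util.ArrayDeque;
import java.util.Deque;

final class InFlightResponseHandling {
    private final Deque<InFlightRequest> inFlight = new ArrayDeque<>();

    void handleResponse(byte[] payload) {
        // Look at the oldest in-flight request without removing it yet.
        InFlightRequest request = inFlight.peekLast();
        if (request == null) {
            return; // nothing in flight to correlate with this response
        }
        Response response = request.parse(payload); // may throw on a bad payload
        // Remove the request only once parsing has succeeded. If parse()
        // throws, the request stays in flight and its callback can still be
        // completed (e.g. exceptionally) instead of being silently lost.
        inFlight.pollLast();
        request.callback().onComplete(response);
    }

    interface Response {}

    interface Callback {
        void onComplete(Response response);
    }

    interface InFlightRequest {
        Response parse(byte[] payload);
        Callback callback();
    }
}
{code}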



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3382) Add system test for ReplicationVerificationTool

2016-03-10 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-3382:
-

 Summary: Add system test for ReplicationVerificationTool
 Key: KAFKA-3382
 URL: https://issues.apache.org/jira/browse/KAFKA-3382
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ashish K Singh






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3382) Add system test for ReplicationVerificationTool

2016-03-10 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh reassigned KAFKA-3382:
-

Assignee: Ashish K Singh

> Add system test for ReplicationVerificationTool
> ---
>
> Key: KAFKA-3382
> URL: https://issues.apache.org/jira/browse/KAFKA-3382
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3381) Add system test for SimpleConsumerShell

2016-03-10 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh reassigned KAFKA-3381:
-

Assignee: Ashish K Singh

> Add system test for SimpleConsumerShell
> ---
>
> Key: KAFKA-3381
> URL: https://issues.apache.org/jira/browse/KAFKA-3381
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3381) Add system test for SimpleConsumerShell

2016-03-10 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-3381:
-

 Summary: Add system test for SimpleConsumerShell
 Key: KAFKA-3381
 URL: https://issues.apache.org/jira/browse/KAFKA-3381
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ashish K Singh






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Add unit test for internal topics

2016-03-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1047


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-03-10 Thread Magnus Edenhill
Hey Jay,

some thoughts in-line:


2016-03-04 18:52 GMT+01:00 Jay Kreps :

> Yeah here is my summary of my take:
>
> 1. Negotiating a per-connection protocol actually does add a lot of
> complexity to clients (many more failure states to get right).
>

I'm not sure about this.
A client that needs to support multiple versions still has this complexity.
Having the broker provide an API to query the broker/protocol version at least 
makes that part easier. Not having such an API and needing to probe is a lot 
more complex and error-prone (with possible side effects).



>
> 2. Having the client configure the protocol version manually is doable now
> but probably a worse state. I suspect this will lead to more not less
> confusion.
>

Agreed, this should exist only as a rare manual fallback.



>
> 3. I don't think the current state is actually that bad. Integrators pick a
> conservative version and build against that. There is a tradeoff between
> getting the new features and being compatible with old Kafka versions. But
> a large part of this tradeoff is essential since new features aren't going
> to magically appear on old servers, so even if you upgrade your client you
> likely aren't going to get the new stuff (since we will end up dynamically
> turning it off). Having client features that are there but don't work
> because you're on an old cluster may actually be a worse experience if not
> handled very carefully.
>

Upgrading brokers and clients in lock-step is a luxury of the official client; 
3rd party clients have no version or distribution coupling with the broker and 
thus need to support a wider range of broker versions than the Java client.
From a user perspective, I want to install the latest version of client XYZ 
and connect to my Kafka cluster, whatever version it has, and things should 
just work.
No digging up old client versions to support a specific broker, no feature 
compat matrices, no excessive manual configuration.
Things should just work, and they can, with a little help from the broker.
I'm sure most 3rd party client developers are willing to make the effort on 
their side.


> 4. The problems Dana brought up are totally orthogonal to the problem of
> having per-api versions or overall versions. The problem was that we
> changed behavior subtly without changing the version. This will be an issue
> regardless of whether the version is global or not.
>

This is actually a good point for using the broker version rather than an 
explicit protocol version: mistakenly changing an API without bumping the 
request or protocol version makes it impossible for a client to provide a 
workaround for the new unversioned functionality, but a new release will 
always have a new broker version which a client may use.



> 5. Using the broker release as the version is strictly worse than using a
> global protocol version (0, 1, 2, ...) that increments any time any api
> changes but doesn't increment just because non-protocol code is changed.
> The problem with using the broker release version is we want to be able to
> keep Kafka releasable from any commit which means there isn't as clear a
> sequencing of releases as you would think.
>

I'm not really sure I follow.
Are you saying that 0.9.1.1 may have less protocol support than 0.9.1.0?
I would imagine that any broker version V2 would support at least V1's 
protocol requests and versions (until things are deprecated/removed, but that 
is another story).


>
> 6. We need to consider the case of mixed version clusters during the time
> period when you are upgrading Kafka.
>
> So overall I think this is not a critical thing to do right now, but if we
> are going to do it we should do it in a way that actually improves things.
>

I disagree; the sooner we get this in, the smaller the future headaches will 
be. Otherwise this discussion will blossom up again for each new protocol 
request or version.


/Magnus


> On Thu, Mar 3, 2016 at 9:38 PM, Jason Gustafson 
> wrote:
>
> > I talked with Jay about this KIP briefly this morning, so let me try to
> > summarize the discussion (I'm sure he'll jump in if I get anything
> wrong).
> > Apologies in advance for the length.
> >
> > I think we both share some skepticism that a request with all the supported
> > versions of all the request APIs is going to be a useful primitive to try
> > and build client compatibility around. In practice I think people would end
> > up checking for particular request versions in order to determine if the
> > broker is 0.8 or 0.9 or whatever, and then change behavior accordingly. I'm
> > wondering if there's a reasonable way to handle the version responses that
> > doesn't amount to that. Maybe you could try to capture feature
> > compatibility by checking the versions for a subset of request types? For
> > example, to ensure that you can use the new consumer API, you check that
> > the group coordinator request is present, the offset commit request
> 

[jira] [Updated] (KAFKA-3373) Add `log` prefix to KIP-31/32 configs

2016-03-10 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3373:

Reviewer: Jun Rao

> Add `log` prefix to KIP-31/32 configs
> -
>
> Key: KAFKA-3373
> URL: https://issues.apache.org/jira/browse/KAFKA-3373
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> [~jjkoshy] suggested that we should prefix the configs introduced as part of 
> KIP-31/32 with a `log` prefix:
> message.format.version
> message.timestamp.type
> message.timestamp.difference.max.ms
> If we do it, we must update the KIP.
> Marking it as a blocker because we should decide either way before 0.10.0.0.
> Discussion here:
> https://github.com/apache/kafka/pull/907#issuecomment-193950768



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3373) Add `log` prefix to KIP-31/32 configs

2016-03-10 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3373:

Status: Patch Available  (was: Open)

> Add `log` prefix to KIP-31/32 configs
> -
>
> Key: KAFKA-3373
> URL: https://issues.apache.org/jira/browse/KAFKA-3373
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> [~jjkoshy] suggested that we should prefix the configs introduced as part of 
> KIP-31/32 with a `log` prefix:
> message.format.version
> message.timestamp.type
> message.timestamp.difference.max.ms
> If we do it, we must update the KIP.
> Marking it as a blocker because we should decide either way before 0.10.0.0.
> Discussion here:
> https://github.com/apache/kafka/pull/907#issuecomment-193950768



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3373) Add `log` prefix to KIP-31/32 configs

2016-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190031#comment-15190031
 ] 

ASF GitHub Bot commented on KAFKA-3373:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/1049

KAFKA-3373 add 'log' prefix to configurations in KIP-31/32



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3373

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1049.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1049


commit ff4a509902a888563a1dc39fd129eba7f0c53bb9
Author: Jiangjie Qin 
Date:   2016-03-10T21:57:46Z

KAFKA-3373 add 'log' prefix to configurations in KIP-31/32




> Add `log` prefix to KIP-31/32 configs
> -
>
> Key: KAFKA-3373
> URL: https://issues.apache.org/jira/browse/KAFKA-3373
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> [~jjkoshy] suggested that we should prefix the configs introduced as part of 
> KIP-31/32 with a `log` prefix:
> message.format.version
> message.timestamp.type
> message.timestamp.difference.max.ms
> If we do it, we must update the KIP.
> Marking it as a blocker because we should decide either way before 0.10.0.0.
> Discussion here:
> https://github.com/apache/kafka/pull/907#issuecomment-193950768



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3373 add 'log' prefix to configurations ...

2016-03-10 Thread becketqin
GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/1049

KAFKA-3373 add 'log' prefix to configurations in KIP-31/32



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3373

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1049.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1049


commit ff4a509902a888563a1dc39fd129eba7f0c53bb9
Author: Jiangjie Qin 
Date:   2016-03-10T21:57:46Z

KAFKA-3373 add 'log' prefix to configurations in KIP-31/32




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-03-10 Thread Magnus Edenhill
Hi all,

sorry for joining late in the game, the Caribbean got in the way.

My thoughts:

There is no way around the chicken-and-egg problem, so the sooner we can add 
protocol versioning functionality the better, and we'll add heuristics in 
clients to handle the migration period (e.g., what Dana has done in 
kafka-python).
The focus at this point should be to mitigate the core issue (allowing clients 
to know what is supported) in the least intrusive way. Hopefully we can 
redesign the protocol in the future to add proper response headers, etc.

I'm with Dana that reusing the broker version as a protocol version will work 
just fine and saves us from administrating another version.
From a client's perspective an explicit protocol version doesn't really add 
any value.
I'd rather maintain a mapping of actual broker versions to supported protocol 
requests than some independent protocol version that still needs to be 
translated to a broker version for proper code maintainability / error 
messages / etc.


Thus my suggestion is in line with some of the previous speakers, that is, to 
keep things simple and bump the MetadataRequest version to 1 by adding a 
VersionString ("0.9.1.0") and VersionInt (0x00090100) field to the response.
These fields return version information for the current connection's broker 
only, not for other brokers in the cluster.
Providing version information for other brokers doesn't really serve any 
purpose:
 a) the information is cached by the responding broker so it might be outdated 
(= can't be trusted)
 b) by the time the client connects to a given broker it might have upgraded

This means that a client (that is interested in protocol versioning) will 
need to query each connection's version anyway. Since MetadataRequests are 
typically already sent on connection setup, this seems to be the proper place 
to put it.

The MetadataRequest semantics should also be extended to allow asking only 
for cluster and version information, but not the topic list, since the topic 
list might have a negative performance impact on large clusters with many 
topics. One way to achieve this would be to provide one single Null topic in 
the request (length=-1).

Sending a new Metadata V1 request to an old broker will cause the connection 
to be closed, and the client will need to use this as a heuristic to downgrade 
its protocol ambitions to an older version (either by some default value or by 
user configuration).
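
To illustrate, the downgrade loop on the client side could look roughly like
this (all names hypothetical, just a sketch):

    // Try the newest Metadata version first; an old broker closes the
    // connection on an unknown version, which we treat as "unsupported".
    final class MetadataVersionProbe {
        interface Connection {
            byte[] sendMetadataRequest(int version) throws ConnectionClosedException;
            Connection reconnect();
        }

        static final class ConnectionClosedException extends Exception {}

        static byte[] probe(Connection conn, int maxVersion)
                throws ConnectionClosedException {
            for (int version = maxVersion; version >= 0; version--) {
                try {
                    return conn.sendMetadataRequest(version);
                } catch (ConnectionClosedException e) {
                    conn = conn.reconnect(); // downgrade and retry
                }
            }
            throw new ConnectionClosedException(); // no version in common
        }
    }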


/Magnus


2016-03-10 20:04 GMT+01:00 Ashish Singh :

> @Magnus,
>
> Does the latest suggestion sound OK to you? I am planning to update the PR
> based on the latest suggestion.
>
> On Mon, Mar 7, 2016 at 10:58 AM, Ashish Singh  wrote:
>
> >
> >
> > On Fri, Mar 4, 2016 at 5:46 PM, Jay Kreps  wrote:
> >
> >> Hey Ashish,
> >>
> >> Both good points.
> >>
> >> I think the issue with the general metadata request is the same as the
> >> issue with a version-specific metadata request from the other
> >> proposal--basically it's a chicken-and-egg problem: to find out anything
> >> about the cluster you have to be able to communicate something in a format
> >> the server can understand without knowing a priori what version it's on. I
> >> guess the question is how you can continue to evolve the metadata request
> >> (whether it is the existing metadata or a protocol-version-specific
> >> metadata request); given that you need this information to bootstrap, you
> >> have to be more careful in how that request evolves.
> >>
> > You are correct. It's just that the protocol version request would be very
> > specific to retrieving the protocol versions. Changes to the protocol
> > version request itself should be very rare, if at all. However, the general
> > metadata request carries a lot more information and its format is more
> > likely to evolve. This boils down to a higher probability of change vs. a
> > definite network round-trip for each re/connect. It does sound like it is
> > better to avoid a definite penalty than to avoid a probable rare issue.
> >
> >>
> >> I think deprecation/removal may be okay. Ultimately clients will always
> >> use the highest possible version of the protocol the server supports, so
> >> if we've already deprecated and removed your highest version then you are
> >> screwed and you're going to get an error no matter what, right? Basically
> >> there is nothing dynamic you can do in that case.
> >>
> > Sure, this should be expected. Just wanted to make sure deprecation is
> > still on the table.
> >
> >>
> >> -Jay
> >>
> >> On Fri, Mar 4, 2016 at 4:05 PM, Ashish Singh 
> wrote:
> >>
> >> > Hello Jay,
> >> >
> >> > The overall approach sounds good. I do realize that this discussion has
> >> > gotten too lengthy and is starting to shoot off on tangents. Maybe a KIP
> >> > call will help us get to a decision faster. I do have a few questions
> >> > though.
> >> >
> >> > On Fri, Mar 4, 2016 at 9:52 AM, Jay Kreps  

[jira] [Commented] (KAFKA-3380) Add system test for GetOffsetShell tool

2016-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190015#comment-15190015
 ] 

ASF GitHub Bot commented on KAFKA-3380:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/1048

KAFKA-3380: Add system test for GetOffsetShell tool



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3380

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1048.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1048


commit fac6e9eada3c2b4b240939e6aba92e021e1a44fd
Author: Ashish Singh 
Date:   2016-03-10T21:52:12Z

KAFKA-3380: Add system test for GetOffsetShell tool




> Add system test for GetOffsetShell tool
> ---
>
> Key: KAFKA-3380
> URL: https://issues.apache.org/jira/browse/KAFKA-3380
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3380: Add system test for GetOffsetShell...

2016-03-10 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/1048

KAFKA-3380: Add system test for GetOffsetShell tool



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3380

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1048.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1048


commit fac6e9eada3c2b4b240939e6aba92e021e1a44fd
Author: Ashish Singh 
Date:   2016-03-10T21:52:12Z

KAFKA-3380: Add system test for GetOffsetShell tool




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3380) Add system test for GetOffsetShell tool

2016-03-10 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-3380:
-

 Summary: Add system test for GetOffsetShell tool
 Key: KAFKA-3380
 URL: https://issues.apache.org/jira/browse/KAFKA-3380
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ashish K Singh
Assignee: Ashish K Singh






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3379) Update tools relying on old producer to use new producer.

2016-03-10 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-3379:
--
Description: 
The following tools are using the old producer.

* ReplicationVerificationTool
* SimpleConsumerShell
* GetOffsetShell

The old producer is being marked as deprecated in 0.10. These tools should be 
updated to use the new producer. To make sure that this update does not break 
existing behavior, the action plan is below.

For each tool that uses the old producer:
* Add a ducktape test to establish current behavior.
* Once the tests are committed and run fine, add a patch for modification of 
these tools. The ducktape tests added in the previous step should confirm that 
existing behavior is still intact.

> Update tools relying on old producer to use new producer.
> -
>
> Key: KAFKA-3379
> URL: https://issues.apache.org/jira/browse/KAFKA-3379
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> The following tools are using the old producer.
> * ReplicationVerificationTool
> * SimpleConsumerShell
> * GetOffsetShell
> The old producer is being marked as deprecated in 0.10. These tools should be 
> updated to use the new producer. To make sure that this update does not break 
> existing behavior, the action plan is below.
> For each tool that uses the old producer:
> * Add a ducktape test to establish current behavior.
> * Once the tests are committed and run fine, add a patch for modification of 
> these tools. The ducktape tests added in the previous step should confirm 
> that existing behavior is still intact.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3379) Update tools relying on old producer to use new producer.

2016-03-10 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-3379:
-

 Summary: Update tools relying on old producer to use new producer.
 Key: KAFKA-3379
 URL: https://issues.apache.org/jira/browse/KAFKA-3379
 Project: Kafka
  Issue Type: Improvement
Reporter: Ashish K Singh
Assignee: Ashish K Singh






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3202) Add system test for KIP-31 and KIP-32 - Change message format version on the fly

2016-03-10 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1519#comment-1519
 ] 

Eno Thereska commented on KAFKA-3202:
-

[~apovzner] Moved the test description to the Compatibility test, thanks. 
[~becket_qin] what exactly did you have in mind for this particular test 
(KAFKA-3202)? Thanks.

> Add system test for KIP-31 and KIP-32 - Change message format version on the 
> fly
> 
>
> Key: KAFKA-3202
> URL: https://issues.apache.org/jira/browse/KAFKA-3202
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The system test should cover the case where message format changes are made 
> when clients are producing/consuming. The message format change should not 
> cause client-side issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3188) Add system test for KIP-31 and KIP-32 - Compatibility Test

2016-03-10 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-3188:

Description: 
The integration test should test the compatibility of the 0.10.0 broker with 
clients on older versions. The client versions should include 0.9.0 and 0.8.x.

We already cover 0.10 brokers with old producers/consumers in upgrade tests. 
So, the main thing to test is a mix of 0.9 and 0.10 producers and consumers. 
E.g., test1: 0.9 producer/0.10 consumer and then test2: 0.10 producer/0.9 
consumer. And then, each of them: compression/no compression (like in the 
upgrade test). And we could probably add another dimension: topic configured 
with CreateTime (default) and LogAppendTime. So, 2x2x2 combinations in total 
(but maybe we can reduce that; e.g. do LogAppendTime with compression only).

  was:The integration test should test the compatibility between 0.10.0 broker 
with clients on older versions. The clients version should include 0.9.0 and 
0.8.x.


> Add system test for KIP-31 and KIP-32 - Compatibility Test
> --
>
> Key: KAFKA-3188
> URL: https://issues.apache.org/jira/browse/KAFKA-3188
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Anna Povzner
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The integration test should test the compatibility of the 0.10.0 broker with 
> clients on older versions. The client versions should include 0.9.0 and 0.8.x.
> We already cover 0.10 brokers with old producers/consumers in upgrade tests. 
> So, the main thing to test is a mix of 0.9 and 0.10 producers and consumers. 
> E.g., test1: 0.9 producer/0.10 consumer and then test2: 0.10 producer/0.9 
> consumer. And then, each of them: compression/no compression (like in the 
> upgrade test). And we could probably add another dimension: topic configured 
> with CreateTime (default) and LogAppendTime. So, 2x2x2 combinations in total 
> (but maybe we can reduce that; e.g. do LogAppendTime with compression only).
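
For reference, a trivial sketch that enumerates the proposed 2x2x2 matrix (the 
labels are illustrative only; per the note above, the LogAppendTime rows could 
be kept only for the compressed case):

{code}
public class CompatMatrix {
    public static void main(String[] args) {
        String[][] clientPairs = {
            {"0.9 producer", "0.10 consumer"},
            {"0.10 producer", "0.9 consumer"}
        };
        boolean[] compressionModes = {false, true};
        String[] timestampTypes = {"CreateTime", "LogAppendTime"};
        // 2 client pairings x 2 compression modes x 2 timestamp types = 8 tests
        for (String[] pair : clientPairs)
            for (boolean compressed : compressionModes)
                for (String tsType : timestampTypes)
                    System.out.printf("%s + %s, compression=%s, %s%n",
                            pair[0], pair[1], compressed, tsType);
    }
}
{code}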



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3202) Add system test for KIP-31 and KIP-32 - Change message format version on the fly

2016-03-10 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-3202:

Description: 
The system test should cover the case where message format changes are made 
when clients are producing/consuming. The message format change should not 
cause client-side issues.



  was:
The system test should cover the case that message format changes are made when 
clients are producing/consuming. The message format change should not cause 
client side issue.

We already cover 0.10 brokers with old producers/consumers in upgrade tests. 
So, the main thing to test is a mix of 0.9 and 0.10 producers and consumers. 
E.g., test1: 0.9 producer/0.10 consumer and then test2: 0.10 producer/0.9 
consumer. And then, each of them: compression/no compression (like in upgrade 
test). And we could probably add another dimension : topic configured with 
CreateTime (default) and LogAppendTime. So, total 2x2x2 combinations (but maybe 
can reduce that — eg. do LogAppendTime with compression only).


> Add system test for KIP-31 and KIP-32 - Change message format version on the 
> fly
> 
>
> Key: KAFKA-3202
> URL: https://issues.apache.org/jira/browse/KAFKA-3202
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The system test should cover the case where message format changes are made 
> when clients are producing/consuming. The message format change should not 
> cause client-side issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk7 #1108

2016-03-10 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka_0.9.0_jdk7 #128

2016-03-10 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Add header and footer to protocol docs

--
[...truncated 3018 lines...]
kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.message.ByteBufferMessageSetTest > testOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIPOverrides PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathExists PASSED

kafka.zk.ZKPathTest > testCreatePersistentPath PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExists PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[0] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[0] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[0] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[0] PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[1] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[1] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[1] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[1] PASSED

kafka.common.ConfigTest > testInvalidGroupIds PASSED

kafka.common.ConfigTest > testInvalidClientIds PASSED

kafka.common.TopicTest > testInvalidTopicNames PASSED

kafka.common.TopicTest > testTopicHasCollision PASSED

kafka.common.TopicTest > testTopicHasCollisionChars PASSED

kafka.common.ZkNodeChangeNotificationListenerTest > testProcessNotification 
PASSED

650 tests completed, 1 failed
:kafka_0.9.0_jdk7:core:test FAILED
:test_core_2_11_7 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:test'.
> There were failing tests. See the report at: 
> file:///x1/jenkins/jenkins-slave/workspace/kafka_0.9.0_jdk7/core/build/reports/tests/index.html

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
':core:test'.
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69)
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46)
at 
org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:35)
at 
org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:64)
at 
org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at 
org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:52)
at 
org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
at 
org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:53)
at 

[jira] [Commented] (KAFKA-3367) Delete topic dont delete the complete log from kafka

2016-03-10 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189933#comment-15189933
 ] 

Mayuresh Gharat commented on KAFKA-3367:


Is automatic topic creation turned ON in your Kafka cluster?
If yes, did you stop the clients that are consuming from / producing to this 
Kafka cluster before you marked the topic for deletion in ZooKeeper?

> Delete topic dont delete the complete log from kafka
> 
>
> Key: KAFKA-3367
> URL: https://issues.apache.org/jira/browse/KAFKA-3367
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Akshath Patkar
>
> Deleting a topic just marks the topic as deleted, but the data still remains 
> in the logs.
> How can we delete the topic completely without manually deleting the logs 
> from Kafka and ZooKeeper?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3202) Add system test for KIP-31 and KIP-32 - Change message format version on the fly

2016-03-10 Thread Anna Povzner (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189917#comment-15189917
 ] 

Anna Povzner commented on KAFKA-3202:
-

[~enothereska] I think you meant to post the test description in KAFKA-3188 
(Compatibility test). Not sure if you meant to pick this JIRA or KAFKA-3188.

> Add system test for KIP-31 and KIP-32 - Change message format version on the 
> fly
> 
>
> Key: KAFKA-3202
> URL: https://issues.apache.org/jira/browse/KAFKA-3202
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The system test should cover the case where message format changes are made 
> when clients are producing/consuming. The message format change should not 
> cause client-side issues.
> We already cover 0.10 brokers with old producers/consumers in upgrade tests. 
> So, the main thing to test is a mix of 0.9 and 0.10 producers and consumers. 
> E.g., test1: 0.9 producer/0.10 consumer and then test2: 0.10 producer/0.9 
> consumer. And then, each of them: compression/no compression (like in the 
> upgrade test). And we could probably add another dimension: topic configured 
> with CreateTime (default) and LogAppendTime. So, 2x2x2 combinations in total 
> (but maybe we can reduce that; e.g. do LogAppendTime with compression only).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Add unit test for internal topics

2016-03-10 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1047

MINOR: Add unit test for internal topics



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KInternal

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1047.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1047


commit c9dae2ee0beec5066d782b6680dbd8eb5d0f8273
Author: Guozhang Wang 
Date:   2016-03-10T20:17:26Z

add unit test for internal topics




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Increased default EC2 instance size

2016-03-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1046


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Increased default EC2 instance size

2016-03-10 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/1046

MINOR: Increased default EC2 instance size

AWS instance size increased to m3.xlarge to allow all system tests to pass. 
@ijuma @ewencp, please have a look.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka minor-aws

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1046.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1046


commit 8b0921d62c42389550cc9a55f1f509f1278cae2a
Author: Eno Thereska 
Date:   2016-03-10T20:09:29Z

Increased default EC2 instance size




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Increase AWS instance size

2016-03-10 Thread enothereska
Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/1045


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinately

2016-03-10 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189851#comment-15189851
 ] 

Jason Gustafson commented on KAFKA-3296:


[~thecoop1984] Sorry, haven't had any time to look at this. When I looked 
earlier, it seemed like it could be caused by KAFKA-3215. I'll try to look 
again next week.

> All consumer reads hang indefinately
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
> Attachments: controller.zip, kafkalogs.zip
>
>
> We've got several integration tests that bring up systems on VMs for testing. 
> We've recently upgraded to 0.9, and very occasionally we see an issue where 
> every consumer that tries to read from the broker hangs, spamming the 
> following in their logs:
> {code}2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.NetworkClient 
> [pool-10-thread-1] | Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21905,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537856, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10954 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,857 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537857, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28edb273,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21906,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537856, sendTimeMs=1456489537856), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21907,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537956, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10955 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,957 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537957, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@40cee8cc,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21908,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537956, sendTimeMs=1456489537956), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21909,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489538056, sendTimeMs=0) to node 1
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10956 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:38,057 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489538057, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@439e25fb,
>  
> 

[GitHub] kafka pull request: MINOR: Increase AWS instance size

2016-03-10 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/1045

MINOR: Increase AWS instance size

AWS instance size increased to m3.xlarge to allow all system tests to pass.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka aws

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1045.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1045


commit 2e8162a88d55e7a7934dd3b9a7188846f94ec383
Author: Eno Thereska 
Date:   2016-02-29T20:09:34Z

Missing streams jar in release

commit 960b136ecf7d65d23a0abd5421b193bd0003c22f
Author: Eno Thereska 
Date:   2016-03-02T09:46:21Z

Merged

commit 6222928290323f9469b1ad21d1db554fccb336f8
Author: Eno Thereska 
Date:   2016-03-02T09:47:35Z

Fix to build.gradle

commit 2520bc33623f7eba23404ed1d15ada86370e719d
Author: Eno Thereska 
Date:   2016-03-02T22:53:38Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit f73c90cccf67a74c801534ec8417174eaa0f4da8
Author: Eno Thereska 
Date:   2016-03-03T12:29:37Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit c108ddf5c98f274d05f833da64560864a17af075
Author: Eno Thereska 
Date:   2016-03-03T19:50:18Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit c765cfacedd4d17b84a5ddcea10f886576b136d4
Author: Eno Thereska 
Date:   2016-03-08T15:47:02Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 6b99cf0f3751c9657889c0274d44c8de2178db4e
Author: Eno Thereska 
Date:   2016-03-10T11:39:58Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 61a6aedde80c5340716576c312faaf3665af262e
Author: Eno Thereska 
Date:   2016-03-10T19:43:01Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 11cf187d0e05c4fce2e78cfbd8f71054261feed7
Author: Eno Thereska 
Date:   2016-03-10T19:43:28Z

Increased size of ec2 instance type




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3378) Client blocks forever if SocketChannel connects instantly

2016-03-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3378:
---
Priority: Critical  (was: Major)

> Client blocks forever if SocketChannel connects instantly
> -
>
> Key: KAFKA-3378
> URL: https://issues.apache.org/jira/browse/KAFKA-3378
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Larkin Lowrey
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> Observed that some consumers were blocked in Fetcher.listOffset() when 
> starting many dozens of consumer threads at the same time.
> Selector.connect(...) calls SocketChannel.connect() in non-blocking mode and 
> assumes that false is always returned and that the channel will be in the 
> Selector's readyKeys once the connection is ready for connect completion due 
> to the OP_CONNECT interest op.
> When connect() returns true the channel is fully connected and will 
> not be included in readyKeys since only OP_CONNECT is set.
> I implemented a fix which handles the case when connect(...) returns true and 
> verified that I no longer see stuck consumers. A git pull request will be 
> forthcoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3378) Client blocks forever if SocketChannel connects instantly

2016-03-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3378:
---
Fix Version/s: 0.10.0.0

> Client blocks forever if SocketChannel connects instantly
> -
>
> Key: KAFKA-3378
> URL: https://issues.apache.org/jira/browse/KAFKA-3378
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Larkin Lowrey
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> Observed that some consumers were blocked in Fetcher.listOffset() when 
> starting many dozens of consumer threads at the same time.
> Selector.connect(...) calls SocketChannel.connect() in non-blocking mode and 
> assumes that false is always returned and that the channel will be in the 
> Selector's readyKeys once the connection is ready for connect completion due 
> to the OP_CONNECT interest op.
> When connect() returns true the channel is fully connected and will 
> not be included in readyKeys since only OP_CONNECT is set.
> I implemented a fix which handles the case when connect(...) returns true and 
> verified that I no longer see stuck consumers. A git pull request will be 
> forthcoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3378) Client blocks forever if SocketChannel connects instantly

2016-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189815#comment-15189815
 ] 

ASF GitHub Bot commented on KAFKA-3378:
---

GitHub user llowrey opened a pull request:

https://github.com/apache/kafka/pull/1044

KAFKA-3378 Fix for instantly connecting SocketChannels.

Added OP_WRITE interestOp when channel connects instantly 
(socketChannel.connect(address) returns true... even in non-blocking mode). 
This allows the SocketChannel to be ready on the next call to select(). The 
poll method was modified to detect this case (OP_CONNECT && OP_WRITE while not 
key.isConnectable()) to complete the connection setup as if the channel had not 
connected instantly.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/llowrey/kafka 0.9.0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1044.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1044


commit 79f96114e0cc8a2fdadfa6738cf4c666a8c42e96
Author: Larkin Lowrey 
Date:   2016-03-10T18:16:06Z

Fix for instantly connecting SocketChannels.




> Client blocks forever if SocketChannel connects instantly
> -
>
> Key: KAFKA-3378
> URL: https://issues.apache.org/jira/browse/KAFKA-3378
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Larkin Lowrey
>
> Observed that some consumers were blocked in Fetcher.listOffset() when 
> starting many dozens of consumer threads at the same time.
> Selector.connect(...) calls SocketChannel.connect() in non-blocking mode and 
> assumes that false is always returned and that the channel will be in the 
> Selector's readyKeys once the connection is ready for connect completion due 
> to the OP_CONNECT interest op.
> When connect() returns true the channel is fully connected and will 
> not be included in readyKeys since only OP_CONNECT is set.
> I implemented a fix which handles the case when connect(...) returns true and 
> verified that I no longer see stuck consumers. A git pull request will be 
> forthcoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
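
A rough, self-contained sketch of the approach described above (not the actual patch in PR #1044); the host/port and the handling inside the poll loop are illustrative:

{code}
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class InstantConnectSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);

        // In non-blocking mode connect() usually returns false, but it may
        // return true (e.g. over loopback): the "instant connect" case.
        boolean connectedInstantly =
                channel.connect(new InetSocketAddress("localhost", 9092));

        // Registering only OP_CONNECT would leave an instantly connected
        // channel invisible to the selector; OP_WRITE makes it ready at once.
        int ops = connectedInstantly ? SelectionKey.OP_WRITE : SelectionKey.OP_CONNECT;
        channel.register(selector, ops);

        if (selector.select(1000) > 0) {
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isConnectable()) {
                    ((SocketChannel) key.channel()).finishConnect(); // delayed connect
                } else if (key.isWritable()) {
                    // Instant-connect path: complete the connection setup as if
                    // the channel had gone through OP_CONNECT.
                }
            }
        }
        channel.close();
        selector.close();
    }
}
{code}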


[GitHub] kafka pull request: KAFKA-3378 Fix for instantly connecting Socket...

2016-03-10 Thread llowrey
GitHub user llowrey opened a pull request:

https://github.com/apache/kafka/pull/1044

KAFKA-3378 Fix for instantly connecting SocketChannels.

Added OP_WRITE interestOp when channel connects instantly 
(socketChannel.connect(address) returns true... even in non-blocking mode). 
This allows the SocketChannel to be ready on the next call to select(). The 
poll method was modified to detect this case (OP_CONNECT && OP_WRITE while not 
key.isConnectable()) to complete the connection setup as if the channel had not 
connected instantly.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/llowrey/kafka 0.9.0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1044.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1044


commit 79f96114e0cc8a2fdadfa6738cf4c666a8c42e96
Author: Larkin Lowrey 
Date:   2016-03-10T18:16:06Z

Fix for instantly connecting SocketChannels.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-3318) Improve consumer rebalance error messaging

2016-03-10 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-3318.
-
   Resolution: Fixed
Fix Version/s: 0.10.0.0

Issue resolved by pull request 1036
[https://github.com/apache/kafka/pull/1036]

> Improve consumer rebalance error messaging
> --
>
> Key: KAFKA-3318
> URL: https://issues.apache.org/jira/browse/KAFKA-3318
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> A common problem with the new consumer is to have message processing take 
> longer than the session timeout, causing an unexpected rebalance. 
> Unfortunately, when this happens, the error messages are often cryptic (e.g. 
> something about illegal generation) and contain no clear advice on what to do 
> (e.g. increase session timeout). We should do a pass on error messages to 
> ensure that users receive clear guidance on the problem and possible 
> solutions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
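
For context, the main knob the description alludes to is the consumer's session.timeout.ms. A minimal sketch of raising it; the group id and value are illustrative, and the broker-side group.max.session.timeout.ms bounds how high it may go:

{code}
import java.util.Properties;

public class ConsumerTimeoutConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        props.put("group.id", "my-group");                // illustrative
        // Raise the session timeout if message processing regularly takes
        // longer than the default; the broker caps the allowed range.
        props.put("session.timeout.ms", "60000");
        System.out.println(props);
    }
}
{code}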


[jira] [Commented] (KAFKA-3318) Improve consumer rebalance error messaging

2016-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189810#comment-15189810
 ] 

ASF GitHub Bot commented on KAFKA-3318:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1036


> Improve consumer rebalance error messaging
> --
>
> Key: KAFKA-3318
> URL: https://issues.apache.org/jira/browse/KAFKA-3318
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> A common problem with the new consumer is to have message processing take 
> longer than the session timeout, causing an unexpected rebalance. 
> Unfortunately, when this happens, the error messages are often cryptic (e.g. 
> something about illegal generation) and contain no clear advice on what to do 
> (e.g. increase session timeout). We should do a pass on error messages to 
> ensure that users receive clear guidance on the problem and possible 
> solutions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3318: clean up consumer logging and erro...

2016-03-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1036


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3378) Client blocks forever if SocketChannel connects instantly

2016-03-10 Thread Larkin Lowrey (JIRA)
Larkin Lowrey created KAFKA-3378:


 Summary: Client blocks forever if SocketChannel connects instantly
 Key: KAFKA-3378
 URL: https://issues.apache.org/jira/browse/KAFKA-3378
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.9.0.1
Reporter: Larkin Lowrey


Observed that some consumers were blocked in Fetcher.listOffset() when starting 
many dozens of consumer threads at the same time.

Selector.connect(...) calls SocketChannel.connect() in non-blocking mode and 
assumes that false is always returned and that the channel will be in the 
Selector's readyKeys once the connection is ready for connect completion due to 
the OP_CONNECT interest op.

When connect() returns true the channel is fully connected and will 
not be included in readyKeys since only OP_CONNECT is set.

I implemented a fix which handles the case when connect(...) returns true and 
verified that I no longer see stuck consumers. A git pull request will be 
forthcoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Minor: Fix system test broken by change of con...

2016-03-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1039


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka-site pull request: Add protocol guide

2016-03-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/9


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3345) ProducerResponse could gracefully handle no throttle time provided

2016-03-10 Thread Bryan Baugher (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189757#comment-15189757
 ] 

Bryan Baugher commented on KAFKA-3345:
--

Looks like this is no longer as easy with the addition of KAFKA-3025. 

> ProducerResponse could gracefully handle no throttle time provided
> --
>
> Key: KAFKA-3345
> URL: https://issues.apache.org/jira/browse/KAFKA-3345
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Bryan Baugher
>Priority: Minor
>
> When doing some compatibility testing between kafka 0.8 and 0.9 I found that 
> the old producer using 0.9 libraries could write to a cluster running 0.8 if 
> 'request.required.acks' was set to 0. If it was set to anything else it would 
> fail with,
> {code}
> java.nio.BufferUnderflowException
>   at java.nio.Buffer.nextGetIndex(Buffer.java:506) 
>   at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361) 
>   at kafka.api.ProducerResponse$.readFrom(ProducerResponse.scala:41) 
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:109) 
> {code}
> In 0.9 there was a one line change to the response here[1] to look for a 
> throttle time value in the response. It seems if the 0.9 code gracefully 
> handled throttle time not being provided this would work. Would you be open 
> to this change?
> [1] - 
> https://github.com/apache/kafka/blob/0.9.0.1/core/src/main/scala/kafka/api/ProducerResponse.scala#L41



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
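
A minimal sketch of the graceful-degradation idea, in plain Java rather than the Scala of ProducerResponse.readFrom, with illustrative names: read the throttle-time field only when the broker actually sent it.

{code}
import java.nio.ByteBuffer;

public class ThrottleTimeSketch {
    // Hypothetical tail-of-response parser: an 0.8 broker omits the
    // throttle-time field, so default to 0 instead of underflowing.
    static int readThrottleTimeMs(ByteBuffer buffer) {
        return buffer.remaining() >= 4 ? buffer.getInt() : 0;
    }

    public static void main(String[] args) {
        ByteBuffer with = ByteBuffer.allocate(4);
        with.putInt(100);
        with.flip();
        System.out.println(readThrottleTimeMs(with));                   // 100
        System.out.println(readThrottleTimeMs(ByteBuffer.allocate(0))); // 0
    }
}
{code}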


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-03-10 Thread Ashish Singh
@Magnus,

Does the latest suggestion sound OK to you. I am planning to update PR
based on latest suggestion.

On Mon, Mar 7, 2016 at 10:58 AM, Ashish Singh  wrote:

>
>
> On Fri, Mar 4, 2016 at 5:46 PM, Jay Kreps  wrote:
>
>> Hey Ashish,
>>
>> Both good points.
>>
>> I think the issue with the general metadata request is the same as the
>> issue with a version-specific metadata request from the other
>> proposal--basically it's a chicken-and-egg problem: to find out anything
>> about the cluster you have to be able to communicate something in a format
>> the server can understand without knowing a priori what version it's on. I
>> guess the question is how can you continue to evolve the metadata request
>> (whether it is the existing metadata or a protocol-version specific
>> metadata request) given that you need this information to bootstrap you
>> have to be more careful in how that request evolves.
>>
> You are correct. It's just that the protocol version request would be very
> specific to retrieving the protocol versions. Changes to the protocol version
> request itself should be very rare, if they happen at all. However, the general
> metadata request carries a lot more information and its format is more
> probable to evolve. This boils down to higher probability of change vs a
> definite network round-trip for each re/connect. It does sound like, it is
> better to avoid a definite penalty than to avoid a probable rare issue.
>
>>
>> I think deprecation/removal may be okay. Ultimately clients will always
>> use
>> the highest possible version of the protocol the server supports so if
>> we've already deprecated and removed your highest version then you are
>> screwed and you're going to get an error no matter what, right? Basically
>> there is nothing dynamic you can do in that case.
>>
> Sure, this should be expected. Just wanted to make sure deprecation is
> still on the table.
>
>>
>> -Jay
>>
>> On Fri, Mar 4, 2016 at 4:05 PM, Ashish Singh  wrote:
>>
>> > Hello Jay,
>> >
>> > The overall approach sounds good. I do realize that this discussion has
>> > gotten too lengthy and is starting to shoot tangents. Maybe a KIP call
>> will
>> > help us getting to a decision faster. I do have a few questions though.
>> >
>> > On Fri, Mar 4, 2016 at 9:52 AM, Jay Kreps  wrote:
>> >
>> > > Yeah here is my summary of my take:
>> > >
>> > > 1. Negotiating a per-connection protocol actually does add a lot of
>> > > complexity to clients (many more failure states to get right).
>> > >
>> > > 2. Having the client configure the protocol version manually is doable
>> > now
>> > > but probably a worse state. I suspect this will lead to more not less
>> > > confusion.
>> > >
>> > > 3. I don't think the current state is actually that bad. Integrators
>> > pick a
>> > > conservative version and build against that. There is a tradeoff
>> between
>> > > getting the new features and being compatible with old Kafka versions.
>> > But
>> > > a large part of this tradeoff is essential since new features aren't
>> > going
>> > > to magically appear on old servers, so even if you upgrade your client
>> > you
>> > > likely aren't going to get the new stuff (since we will end up
>> > dynamically
>> > > turning it off). Having client features that are there but don't work
>> > > because you're on an old cluster may actually be a worse experience if
>> > not
>> > > handled very carefully..
>> > >
>> > > 4. The problems Dana brought up are totally orthogonal to the problem
>> of
>> > > having per-api versions or overall versions. The problem was that we
>> > > changed behavior subtly without changing the version. This will be an
>> > issue
>> > > regardless of whether the version is global or not.
>> > >
>> > > 5. Using the broker release as the version is strictly worse than
>> using a
>> > > global protocol version (0, 1, 2, ...) that increments any time any
>> api
>> > > changes but doesn't increment just because non-protocol code is
>> changed.
>> > > The problem with using the broker release version is we want to be
>> able
>> > to
>> > > keep Kafka releasable from any commit which means there isn't as
>> clear a
>> > > sequencing of releases as you would think.
>> > >
>> > > 6. We need to consider the case of mixed version clusters during the
>> time
>> > > period when you are upgrading Kafka.
>> > >
>> > > So overall I think this is not a critical thing to do right now, but
>> if
>> > we
>> > > are going to do it we should do it in a way that actually improves
>> > things.
>> > >
>> > > Here would be one proposal for that:
>> > > a. Add a global protocol version that increments with any api version
>> > > update. Move the documentation so that the docs are by version. This
>> is
>> > > basically just a short-hand for a complete set of supported api
>> versions.
>> > > b. Include a field in the metadata response for each broker that adds
>> the
>> > > protocol version.
>> 
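
For illustration only, a minimal sketch of how a client might consume the proposed global protocol version; both the client maximum and the per-broker field are hypothetical, since the proposal was still under discussion:

{code}
public class VersionNegotiationSketch {
    // Hypothetical: highest global protocol version this client implements.
    static final int CLIENT_MAX_PROTOCOL_VERSION = 2;

    // brokerProtocolVersion stands in for the proposed per-broker field
    // in the metadata response.
    static int negotiate(int brokerProtocolVersion) {
        return Math.min(CLIENT_MAX_PROTOCOL_VERSION, brokerProtocolVersion);
    }

    public static void main(String[] args) {
        System.out.println(negotiate(1)); // talk version 1 to an older broker
    }
}
{code}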

[jira] [Commented] (KAFKA-3361) Initial protocol documentation page and generation

2016-03-10 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189720#comment-15189720
 ] 

Gwen Shapira commented on KAFKA-3361:
-

Merged. Sorry for accidentally breaking the build :(

> Initial protocol documentation page and generation
> --
>
> Key: KAFKA-3361
> URL: https://issues.apache.org/jira/browse/KAFKA-3361
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> Add an initial rough draft page to the official documentation. The output 
> will be mostly generated from code, ensuring the docs are accurate and up to 
> date.  This is likely to be a separate page due to the size of the content.
> The idea here is that something is better than nothing. Other jiras will 
> track needed improvements. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Add header and footer to protocol docs

2016-03-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1043


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3202) Add system test for KIP-31 and KIP-32 - Change message format version on the fly

2016-03-10 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189692#comment-15189692
 ] 

Eno Thereska commented on KAFKA-3202:
-

[~becket_qin]: I've updated the JIRA description after consulting with 
[~apovzner]. Does this description look good to you? Thanks.

> Add system test for KIP-31 and KIP-32 - Change message format version on the 
> fly
> 
>
> Key: KAFKA-3202
> URL: https://issues.apache.org/jira/browse/KAFKA-3202
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The system test should cover the case that message format changes are made 
> when clients are producing/consuming. The message format change should not 
> cause client side issue.
> We already cover 0.10 brokers with old producers/consumers in upgrade tests. 
> So, the main thing to test is a mix of 0.9 and 0.10 producers and consumers. 
> E.g., test1: 0.9 producer/0.10 consumer and then test2: 0.10 producer/0.9 
> consumer. And then, each of them: compression/no compression (like in upgrade 
> test). And we could probably add another dimension: topic configured with 
> CreateTime (default) and LogAppendTime. So, total 2x2x2 combinations (but 
> maybe we can reduce that — e.g. do LogAppendTime with compression only).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
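
The 2x2x2 matrix described above, enumerated explicitly; a sketch of the test plan in plain Java, not ducktape test code:

{code}
public class FormatChangeMatrix {
    public static void main(String[] args) {
        String[][] clientPairs = {{"0.9", "0.10"}, {"0.10", "0.9"}}; // producer, consumer
        String[] compression = {"none", "compressed"};
        String[] timestampType = {"CreateTime", "LogAppendTime"};
        for (String[] pair : clientPairs)
            for (String c : compression)
                for (String t : timestampType)
                    System.out.printf("producer=%s consumer=%s compression=%s timestamp=%s%n",
                            pair[0], pair[1], c, t);
    }
}
{code}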


Re: [GitHub] kafka pull request: Parallel log-recovery of un-flushed segments o...

2016-03-10 Thread Achanta Vamsi Subhash
Hi,
I would like to get this into 0.10.0.0, so could someone look into this and
review?

On Wed, Mar 9, 2016 at 10:29 PM, Achanta Vamsi Subhash <
achanta.va...@flipkart.com> wrote:

> Hi all,
>
> https://github.com/apache/kafka/pull/1035
> This pull request will make the log-segment load parallel with two
> configurable properties "log.recovery.threads" and "
> log.recovery.max.interval.ms".
>
> On startup, currently the log segments within a logDir are loaded
> sequentially when there is a un-clean shutdown. This will take a lot of
> time for the segments to be loaded as the logSegment.recover(..) is called
> for every segment and for brokers which have many partitions, the time
> taken will be very high (we have noticed ~40mins for 2k partitions).
>
> Logic:
> 1. Have a threadpool defined of fixed length (log.recovery.threads)
> 2. Submit the logSegment recovery as a job to the threadpool and add the
> future returned to a job list
> 3. Wait till all the jobs are done within req. time (
> log.recovery.max.interval.ms - default set to Long.Max).
> 4. If they are done and the futures are all null (meaning that the jobs
> are successfully completed), it is considered done.
> 5. If any of the recovery jobs failed, then it is logged and
> LogRecoveryFailedException is thrown
> 6. If the timeout is reached, LogRecoveryFailedException is thrown.
> The logic is backward compatible with the current sequential
> implementation as the default thread count is set to 1.
>
> JIRA link is here:
> https://issues.apache.org/jira/browse/KAFKA-3359
>
> Please review and give me suggestions. Will make them and contribute.
> Thanks.
>
>
> On Wed, Mar 9, 2016 at 7:57 PM, vamsi-subhash  wrote:
>
>> GitHub user vamsi-subhash opened a pull request:
>>
>> https://github.com/apache/kafka/pull/1035
>>
>> Parallel log-recovery of un-flushed segments on startup
>>
>> Did not find any tests for the method. Will be adding them
>>
>> You can merge this pull request into a Git repository by running:
>>
>> $ git pull https://github.com/vamsi-subhash/kafka trunk
>>
>> Alternatively you can review and apply these changes as the patch at:
>>
>> https://github.com/apache/kafka/pull/1035.patch
>>
>> To close this pull request, make a commit to your master/trunk branch
>> with (at least) the following in the commit message:
>>
>> This closes #1035
>>
>> 
>> commit ecab815203a2b6396703660d5a2f9d9bb00efcf3
>> Author: Vamsi Subhash Achanta 
>> Date:   2016-03-09T14:24:37Z
>>
>> Made log-recovery parallel
>>
>> 
>>
>>
>> ---
>> If your project is set up for it, you can reply to this email and have
>> your
>> reply appear on GitHub as well. If your project does not have this feature
>> enabled and wishes so, or if the feature is enabled but not working,
>> please
>> contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
>> with INFRA.
>> ---
>>
>
>
>
> --
> Regards
> Vamsi Subhash
>



-- 
Regards
Vamsi Subhash
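
A minimal Java sketch of the recovery scheme described in this thread (the actual PR is in Scala); LogSegment, recoveryThreads, and maxIntervalMs are illustrative stand-ins:

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ParallelRecoverySketch {
    interface LogSegment { void recover(); } // stand-in for kafka.log.LogSegment

    static void recoverAll(List<LogSegment> segments, int recoveryThreads,
                           long maxIntervalMs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(recoveryThreads);
        try {
            List<Future<?>> jobs = new ArrayList<>();
            for (final LogSegment segment : segments)
                jobs.add(pool.submit(new Runnable() {
                    public void run() { segment.recover(); }
                }));
            long deadline = System.currentTimeMillis() + maxIntervalMs;
            for (Future<?> job : jobs)
                // get() rethrows any recovery failure; the timeout plays the
                // role of the proposed LogRecoveryFailedException.
                job.get(Math.max(1, deadline - System.currentTimeMillis()),
                        TimeUnit.MILLISECONDS);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        LogSegment noop = new LogSegment() { public void recover() { } };
        recoverAll(Arrays.asList(noop, noop), 2, 10000L); // trivial demo
    }
}
{code}

With recoveryThreads set to 1, this degenerates to the existing sequential behavior, which is the backward-compatibility point made above.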


[jira] [Updated] (KAFKA-3202) Add system test for KIP-31 and KIP-32 - Change message format version on the fly

2016-03-10 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-3202:

Description: 
The system test should cover the case that message format changes are made when 
clients are producing/consuming. The message format change should not cause 
client side issue.

We already cover 0.10 brokers with old producers/consumers in upgrade tests. 
So, the main thing to test is a mix of 0.9 and 0.10 producers and consumers. 
E.g., test1: 0.9 producer/0.10 consumer and then test2: 0.10 producer/0.9 
consumer. And then, each of them: compression/no compression (like in upgrade 
test). And we could probably add another dimension: topic configured with 
CreateTime (default) and LogAppendTime. So, total 2x2x2 combinations (but maybe 
we can reduce that — e.g. do LogAppendTime with compression only).

  was:The system test should cover the case that message format changes are 
made when clients are producing/consuming. The message format change should not 
cause client side issue.


> Add system test for KIP-31 and KIP-32 - Change message format version on the 
> fly
> 
>
> Key: KAFKA-3202
> URL: https://issues.apache.org/jira/browse/KAFKA-3202
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> The system test should cover the case that message format changes are made 
> when clients are producing/consuming. The message format change should not 
> cause client side issue.
> We already cover 0.10 brokers with old producers/consumers in upgrade tests. 
> So, the main thing to test is a mix of 0.9 and 0.10 producers and consumers. 
> E.g., test1: 0.9 producer/0.10 consumer and then test2: 0.10 producer/0.9 
> consumer. And then, each of them: compression/no compression (like in upgrade 
> test). And we could probably add another dimension: topic configured with 
> CreateTime (default) and LogAppendTime. So, total 2x2x2 combinations (but 
> maybe we can reduce that — e.g. do LogAppendTime with compression only).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3367) Delete topic doesn't delete the complete log from Kafka

2016-03-10 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189676#comment-15189676
 ] 

Mayuresh Gharat commented on KAFKA-3367:


It takes time to delete the logs, depending on the amount of data the topic 
has. Also when the topic is marked for deletion, the controller has a listener 
that fires and starts deleting the topic. We have tested this in our 
environment at LinkedIn and it does delete the logs. 
What version of Kafka are you running?

> Delete topic doesn't delete the complete log from Kafka
> 
>
> Key: KAFKA-3367
> URL: https://issues.apache.org/jira/browse/KAFKA-3367
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Akshath Patkar
>
> Deleting a topic just marks it as deleted, but the data still remains in the logs.
> How can we delete the topic completely without manually deleting the logs 
> from Kafka and ZooKeeper?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3361) Initial protocol documentation page and generation

2016-03-10 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189661#comment-15189661
 ] 

Grant Henke commented on KAFKA-3361:


[~becket_qin] I have that included in this PR: 
https://github.com/apache/kafka/pull/1043

It also includes the needed headers since this is going to be its own page on 
the site. The site PR is here: https://github.com/apache/kafka-site/pull/9

There is an issue where the build doesn't run check/rat every time. Somehow it's 
not detecting changes and so it's skipping the test. Something I would like to 
resolve someday. 

> Initial protocol documentation page and generation
> --
>
> Key: KAFKA-3361
> URL: https://issues.apache.org/jira/browse/KAFKA-3361
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> Add an initial rough draft page to the official documentation. The output 
> will be mostly generated from code, ensuring the docs are accurate and up to 
> date.  This is likely to be a separate page due to the size of the content.
> The idea here is that something is better than nothing. Other jiras will 
> track needed improvements. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka-site pull request: Add protocol guide

2016-03-10 Thread granthenke
Github user granthenke commented on the pull request:

https://github.com/apache/kafka-site/pull/9#issuecomment-194980209
  
This was briefly tested locally with an Apache web server. It includes 
https://github.com/apache/kafka/pull/1043


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka-site pull request: Add protocol guide

2016-03-10 Thread granthenke
Github user granthenke commented on the pull request:

https://github.com/apache/kafka-site/pull/9#issuecomment-194980267
  
cc @gwenshap 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3361) Initial protocol documentation page and generation

2016-03-10 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189646#comment-15189646
 ] 

Jiangjie Qin commented on KAFKA-3361:
-

[~gwenshap] [~granthenke] It seems docs/protocol.html is missing the license 
header and rat complained when I ran {{./gradlew test}}. Did I miss something?

> Initial protocol documentation page and generation
> --
>
> Key: KAFKA-3361
> URL: https://issues.apache.org/jira/browse/KAFKA-3361
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> Add an initial rough draft page to the official documentation. The output 
> will be mostly generated from code, ensuring the docs are accurate and up to 
> date.  This is likely to be a separate page due to the size of the content.
> The idea here is that something is better than nothing. Other jiras will 
> track needed improvements. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka-site pull request: Add protocol guide

2016-03-10 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka-site/pull/9

Add protocol guide



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka-site protocol

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/9.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #9


commit 586aeea8bd22a253e6301f8def889ff66beed982
Author: Grant Henke 
Date:   2016-03-10T18:02:47Z

Add protocol guide




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Add header and footer to protocol docs

2016-03-10 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/1043

MINOR: Add header and footer to protocol docs

Because protocol.html is going to be on its own page, it needs the header 
and footer included.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka protocol-docs-style

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1043.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1043


commit 46cf95a2ad3170fd0694458b7d68b83a11f62dd7
Author: Grant Henke 
Date:   2016-03-10T17:57:21Z

MINOR: Add header and footer to protocol docs

Because protocol.html is going to be on its own page, it needs the header 
and footer included.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-3118) Fail test at: ProducerBounceTest. testBrokerFailure

2016-03-10 Thread aarti gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aarti gupta resolved KAFKA-3118.

Resolution: Cannot Reproduce

> Fail test at: ProducerBounceTest. testBrokerFailure 
> 
>
> Key: KAFKA-3118
> URL: https://issues.apache.org/jira/browse/KAFKA-3118
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
> Environment: oracle java 7
> ubuntu 13.1
>Reporter: edwardt
>Assignee: aarti gupta
>  Labels: newbie, test
>
> java.lang.AssertionError: Should have fetched 2000 unique messages 
> expected:<2000> but was:<1334>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:132)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release plan - Kafka 0.10.0

2016-03-10 Thread Gwen Shapira
Thanks to all the Kafka contributors for voting!

The vote passes with 6 binding +1, 5 non-binding +1 and no -1.

We are now 11 days before the first planned release candidate. We have
12 open blockers, 17 open critical issues and a grand total of 108
issues for the release.

I added a widget showing the current issues for the release to the
release tracking page
(https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.0)

Please take a look at the list. We need to be focused on getting all
the high priority items resolved by March 21. You can contribute by
fixing issues, by reviewing patches or by moving JIRAs to the next
release (if you are the owner). Remember that community reviews are
incredibly useful feedback to the developers, so don't hesitate to
review even if you lack a commit bit.

Gwen




On Tue, Mar 8, 2016 at 8:35 AM, Grant Henke  wrote:
> +1 (non-binding).
>
> I will review and update any jiras I think should be tracked today.
>
> Gwen the release tracking page is awesome!
> +1 (non-binding)
>
> One clarification: there are currently 11 issues marked as blockers, is
> that an accurate list?
>
> http://bit.ly/21YCthZ
>
> -Flavio
>
>> On 08 Mar 2016, at 06:12, Harsha  wrote:
>>
>>
>> +1
>>
>> Thanks,
>> Harsha
>> On Mon, Mar 7, 2016, at 09:49 PM, Jun Rao wrote:
>>> +1
>>>
>>> Thanks,
>>>
>>> Jun
>>>
>>> On Mon, Mar 7, 2016 at 9:27 AM, Gwen Shapira  wrote:
>>>
 Greetings Kafka Developer Community,

 As you all know, we have a few big features that are almost complete
 (Timestamps! Interceptors! Streams!). It is time to start planning our
 next release.

 I suggest the following:
 * Cut branches on March 21st
 * Publish the first release candidate the next day
 * Start testing, finding important issues, fixing them, rolling out new
 releases
 * And eventually get a release candidate that we all agree is awesome
 enough to release. Hopefully this won't take too many iterations :)

 Note that this is a 2-week heads-up on branch cutting. After we cut
 branches, we will try to minimize cherrypicks to just critical bugs
 (because last major release was a bit insane).
 Therefore,  if you have a feature that you really want to see in
 0.10.0 - you'll need to have it committed by March 21st. As a courtesy
 to the release manager, if you have features that you are not planning
 on getting in for 0.10.0, please change the "fix version" field in
 JIRA accordingly.

 I will send a heads-up a few days before cutting branches, to give
 everyone a chance to get stragglers in.

 The vote will be open for 72 hours.
 All in favor, please reply with +1.

 Gwen Shapira



[jira] [Assigned] (KAFKA-3373) Add `log` prefix to KIP-31/32 configs

2016-03-10 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned KAFKA-3373:
---

Assignee: Jiangjie Qin

> Add `log` prefix to KIP-31/32 configs
> -
>
> Key: KAFKA-3373
> URL: https://issues.apache.org/jira/browse/KAFKA-3373
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> [~jjkoshy] suggested that we should prefix the configs introduced as part of 
> KIP-31/32 to include a `log` prefix:
> message.format.version
> message.timestamp.type
> message.timestamp.difference.max.ms
> If we do it, we must update the KIP.
> Marking it as blocker because we should decide either way before 0.10.0.0.
> Discussion here:
> https://github.com/apache/kafka/pull/907#issuecomment-193950768



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [jira] [Created] (KAFKA-3377) add REST interface to JMX

2016-03-10 Thread Gerard Klijs
I would like to know why you want/need it to be integrated into Kafka.
For our current project we tried out Zabbix,
https://www.zabbix.com/documentation/3.0/manual/config/items/itemtypes/jmx_monitoring,
which takes some configuration, but then you can fetch all the JMX metrics you
want and put them into graphs.

On Thu, Mar 10, 2016 at 2:52 PM Christian Posta (JIRA) 
wrote:

> Christian Posta created KAFKA-3377:
> --
>
>  Summary: add REST interface to JMX
>  Key: KAFKA-3377
>  URL: https://issues.apache.org/jira/browse/KAFKA-3377
>  Project: Kafka
>   Issue Type: Improvement
>   Components: core
> Reporter: Christian Posta
>
>
> Would be awesome if we could get JMX metrics w/out having to use the JMX
> APIs.. would there be any interest in adding something like
> https://jolokia.org to Kafka? I'll happily volunteer :)
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>
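
For comparison, reading a single broker metric over plain JMX today looks roughly like the sketch below; a REST bridge such as Jolokia would replace this boilerplate with an HTTP GET. The service URL assumes the broker was started with remote JMX on port 9999, and the MBean name is illustrative:

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxReadSketch {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName mbean = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
            System.out.println(mbs.getAttribute(mbean, "OneMinuteRate"));
        } finally {
            connector.close();
        }
    }
}
{code}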


[jira] [Updated] (KAFKA-3376) have sensible defaults for the command-line tools to facilitate local development

2016-03-10 Thread Christian Posta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Posta updated KAFKA-3376:
---
Component/s: packaging
 admin

> have sensible defaults for the command-line tools to facilitate local 
> development
> -
>
> Key: KAFKA-3376
> URL: https://issues.apache.org/jira/browse/KAFKA-3376
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, packaging
>Affects Versions: 0.9.0.1
>Reporter: Christian Posta
>
> For the command-line tools, it's great that we can pass in params to connect to 
> brokers/zk clusters, etc. 
> It would be great if some of those params came with sensible defaults so 
> we don't have to add them for each command we run. For example, for 
> --zookeeper we could have it default to "localhost:2181" or default to 
> checking an environment variable "KAFKA_TOOLS_ZOOKEEPER" or something.
> Or, even better, maybe we could do something like we had in Apache 
> ActiveMQ Apollo, where we had a command shell that implements all of these 
> commands and can hold "context" information about what brokers and zk 
> clusters exist 
> https://github.com/christian-posta/activemq-apollo/blob/trunk/apollo-cli/src/main/scala/org/apache/activemq/apollo/cli/Apollo.scala
>  so we don't have 10+ shell scripts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
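
A minimal sketch of the env-var fallback idea from the description; KAFKA_TOOLS_ZOOKEEPER is the hypothetical variable proposed there:

{code}
public class DefaultZkSketch {
    // Resolution order: explicit --zookeeper arg, then env var, then localhost.
    static String zookeeperConnect(String cliArg) {
        if (cliArg != null) return cliArg;
        String env = System.getenv("KAFKA_TOOLS_ZOOKEEPER");
        return env != null ? env : "localhost:2181";
    }

    public static void main(String[] args) {
        System.out.println(zookeeperConnect(args.length > 0 ? args[0] : null));
    }
}
{code}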


[jira] [Updated] (KAFKA-3376) have sensible defaults for the command-line tools to facilitate local development

2016-03-10 Thread Christian Posta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Posta updated KAFKA-3376:
---
Affects Version/s: 0.9.0.1

> have sensible defaults for the command-line tools to facilitate local 
> development
> -
>
> Key: KAFKA-3376
> URL: https://issues.apache.org/jira/browse/KAFKA-3376
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, packaging
>Affects Versions: 0.9.0.1
>Reporter: Christian Posta
>
> For the command-line tools, it's great that we can pass in params to connect to 
> brokers/zk clusters, etc. 
> It would be great if some of those params came with sensible defaults so 
> we don't have to add them for each command we run. For example, for 
> --zookeeper we could have it default to "localhost:2181" or default to 
> checking an environment variable "KAFKA_TOOLS_ZOOKEEPER" or something.
> Or, even better, maybe we could do something like we had in Apache 
> ActiveMQ Apollo, where we had a command shell that implements all of these 
> commands and can hold "context" information about what brokers and zk 
> clusters exist 
> https://github.com/christian-posta/activemq-apollo/blob/trunk/apollo-cli/src/main/scala/org/apache/activemq/apollo/cli/Apollo.scala
>  so we don't have 10+ shell scripts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3359) Parallel log-recovery of un-flushed segments on startup

2016-03-10 Thread Vamsi Subhash Achanta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vamsi Subhash Achanta updated KAFKA-3359:
-
 Reviewer: Grant Henke
Fix Version/s: 0.10.0.0
   Status: Patch Available  (was: Open)

https://github.com/apache/kafka/pull/1035

> Parallel log-recovery of un-flushed segments on startup
> ---
>
> Key: KAFKA-3359
> URL: https://issues.apache.org/jira/browse/KAFKA-3359
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.9.0.1, 0.8.2.2
>Reporter: Vamsi Subhash Achanta
>Assignee: Jay Kreps
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> On startup, currently the log segments within a logDir are loaded 
> sequentially when there is a un-clean shutdown. This will take a lot of time 
> for the segments to be loaded as the logSegment.recover(..) is called for 
> every segment and for brokers which have many partitions, the time taken will 
> be very high (we have noticed ~40mins for 2k partitions).
> https://github.com/apache/kafka/pull/1035
> This pull request will make the log-segment load parallel with two 
> configurable properties "log.recovery.threads" and 
> "log.recovery.max.interval.ms".
> Logic:
> 1. Have a threadpool defined of fixed length (log.recovery.threads)
> 2. Submit the logSegment recovery as a job to the threadpool and add the 
> future returned to a job list
> 3. Wait till all the jobs are done within req. time 
> (log.recovery.max.interval.ms - default set to Long.Max).
> 4. If they are done and the futures are all null (meaning that the jobs are 
> successfully completed), it is considered done.
> 5. If any of the recovery jobs failed, then it is logged and 
> LogRecoveryFailedException is thrown
> 6. If the timeout is reached, LogRecoveryFailedException is thrown.
> The logic is backward compatible with the current sequential implementation 
> as the default thread count is set to 1.
> PS: I am new to Scala and the code might look Java-ish, but I will be happy to 
> address code-review feedback.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3375) Suppress and fix compiler warnings where reasonable and tweak compiler settings

2016-03-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3375:
---
Summary: Suppress and fix compiler warnings where reasonable and tweak 
compiler settings  (was: Suppress deprecated warnings where reasonable and 
tweak compiler settings)

> Suppress and fix compiler warnings where reasonable and tweak compiler 
> settings
> ---
>
> Key: KAFKA-3375
> URL: https://issues.apache.org/jira/browse/KAFKA-3375
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> This will make it easier to do KAFKA-2982.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3375) Suppress deprecated warnings where reasonable and tweak compiler settings

2016-03-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3375:
---
Status: Patch Available  (was: Open)

> Suppress deprecated warnings where reasonable and tweak compiler settings
> -
>
> Key: KAFKA-3375
> URL: https://issues.apache.org/jira/browse/KAFKA-3375
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> This will make it easier to do KAFKA-2982.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3375) Suppress deprecated warnings where reasonable and tweak compiler settings

2016-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189199#comment-15189199
 ] 

ASF GitHub Bot commented on KAFKA-3375:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1042

KAFKA-3375; Suppress deprecated warnings where reasonable and tweak 
compiler settings

* Fix and suppress a number of unchecked warnings (except for Kafka Streams)
* Add `@SafeVarargs` annotation to fix warnings
* Suppress unfixable deprecation warnings
* Replace deprecated by non-deprecated usage where possible
* Avoid reflective calls via structural types in Scala
* Tweak compiler settings for scalac and javac

Once we drop Java 7 and Scala 2.10, we can tweak the compiler settings 
further so that they warn us about more things.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3375-suppress-depreccated-tweak-compiler

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1042.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1042


commit bdb6bf5b6c1b95c2166126de10704801abde03f4
Author: Ismael Juma 
Date:   2016-03-10T11:56:47Z

Fix and suppress number of unchecked warnings

Ignored Kafka Streams on this iteration.

commit 3973c010111b7c0d69c558311548ebbc33992a47
Author: Ismael Juma 
Date:   2016-03-10T11:57:18Z

Add `@SafeVarargs` annotation to fix warning

commit afb96c7b3fb423cc34d86d69a3e906d65a753ff9
Author: Ismael Juma 
Date:   2016-03-10T11:58:02Z

Suppress unfixable deprecation warnings

commit 1db45ac43c88beb8949849b4ae2e04b21c698b7f
Author: Ismael Juma 
Date:   2016-03-10T11:58:25Z

Replace deprecated by non-deprecated usage

commit 08542a0c007b5a12dcc8a59a34f81215e8a6c1bd
Author: Ismael Juma 
Date:   2016-03-10T11:58:57Z

Avoid reflective calls via structural types in Scala

commit 5667ff91d80c8c94a5d30a520c088b211b5bfe0b
Author: Ismael Juma 
Date:   2016-03-10T11:59:57Z

Tweak compiler settings for scalac and javac




> Suppress deprecated warnings where reasonable and tweak compiler settings
> -
>
> Key: KAFKA-3375
> URL: https://issues.apache.org/jira/browse/KAFKA-3375
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> This will make it easier to do KAFKA-2982.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
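
One of the fixes listed above, illustrated with a made-up example (not code from the PR): without @SafeVarargs, javac warns about possible heap pollution for a generic varargs method.

{code}
import java.util.Arrays;
import java.util.List;

public class SafeVarargsSketch {
    // Safe because the varargs array is only read, never exposed or written.
    @SafeVarargs
    static <T> List<T> listOf(T... items) {
        return Arrays.asList(items);
    }

    public static void main(String[] args) {
        System.out.println(listOf("a", "b", "c"));
    }
}
{code}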


[GitHub] kafka pull request: KAFKA-3375; Suppress deprecated warnings where...

2016-03-10 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1042

KAFKA-3375; Suppress deprecated warnings where reasonable and tweak 
compiler settings

* Fix and suppress a number of unchecked warnings (except for Kafka Streams)
* Add `@SafeVarargs` annotation to fix warnings
* Suppress unfixable deprecation warnings
* Replace deprecated by non-deprecated usage where possible
* Avoid reflective calls via structural types in Scala
* Tweak compiler settings for scalac and javac

Once we drop Java 7 and Scala 2.10, we can tweak the compiler settings 
further so that they warn us about more things.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3375-suppress-depreccated-tweak-compiler

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1042.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1042


commit bdb6bf5b6c1b95c2166126de10704801abde03f4
Author: Ismael Juma 
Date:   2016-03-10T11:56:47Z

Fix and suppress number of unchecked warnings

Ignored Kafka Streams on this iteration.

commit 3973c010111b7c0d69c558311548ebbc33992a47
Author: Ismael Juma 
Date:   2016-03-10T11:57:18Z

Add `@SafeVarargs` annotation to fix warning

commit afb96c7b3fb423cc34d86d69a3e906d65a753ff9
Author: Ismael Juma 
Date:   2016-03-10T11:58:02Z

Suppress unfixable deprecation warnings

commit 1db45ac43c88beb8949849b4ae2e04b21c698b7f
Author: Ismael Juma 
Date:   2016-03-10T11:58:25Z

Replace deprecated by non-deprecated usage

commit 08542a0c007b5a12dcc8a59a34f81215e8a6c1bd
Author: Ismael Juma 
Date:   2016-03-10T11:58:57Z

Avoid reflective calls via structural types in Scala

commit 5667ff91d80c8c94a5d30a520c088b211b5bfe0b
Author: Ismael Juma 
Date:   2016-03-10T11:59:57Z

Tweak compiler settings for scalac and javac




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3375) Suppress deprecated warnings where reasonable and tweak compiler settings

2016-03-10 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-3375:
--

 Summary: Suppress deprecated warnings where reasonable and tweak 
compiler settings
 Key: KAFKA-3375
 URL: https://issues.apache.org/jira/browse/KAFKA-3375
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 0.10.0.0


This will make it easier to do KAFKA-2982.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk8 #434

2016-03-10 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request: Kafka 3173

2016-03-10 Thread fpj
GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/1041

Kafka 3173



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-3173

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1041.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1041


commit 9467a8fca443c6ff5016d9da1a6e12b2397c7e09
Author: Flavio Junqueira 
Date:   2016-03-10T10:36:11Z

KAFKA-3173: Error while moving some partitions to OnlinePartition state

commit c594d4582be98e8ae36a1cd7cf20bf279edea8ee
Author: Flavio Junqueira 
Date:   2016-03-10T10:38:23Z

KAFKA-3173: Removed unnecessary import.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3173) Error while moving some partitions to OnlinePartition state

2016-03-10 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-3173:

Attachment: KAFKA-3173-race-repro.patch

> Error while moving some partitions to OnlinePartition state 
> 
>
> Key: KAFKA-3173
> URL: https://issues.apache.org/jira/browse/KAFKA-3173
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Critical
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-3173-race-repro.patch
>
>
> We observed another instance of the problem reported in KAFKA-2300, but this 
> time the error appeared in the partition state machine. In KAFKA-2300, we 
> haven't cleaned up the state in {{PartitionStateMachine}} and 
> {{ReplicaStateMachine}} as we do in {{KafkaController}}.
> Here is the stack trace:
> {noformat}
> [2016-01-29 15:26:51,393] ERROR [Partition state machine on Controller 0]: 
> Error while moving some partitions to OnlinePartition state 
> (kafka.controller.PartitionStateMachine)
> java.lang.IllegalStateException: Controller to broker state change requests 
> batch is not empty while creating a new one. Some LeaderAndIsr state changes 
> Map(0 -> Map(foo-0 -> (LeaderAndIsrInfo:
> (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:0))) 
> might be lost
> at kafka.controller.ControllerBrokerRequestBatch.newBatch(ControllerChannelManager.scala:254)
> at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:144)
> at kafka.controller.KafkaController.onNewPartitionCreation(KafkaController.scala:517)
> at kafka.controller.KafkaController.onNewTopicCreation(KafkaController.scala:504)
> at kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(PartitionStateMachine.scala:437)
> at kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply(PartitionStateMachine.scala:419)
> at kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply(PartitionStateMachine.scala:419)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.controller.PartitionStateMachine$TopicChangeListener.handleChildChange(PartitionStateMachine.scala:418)
> at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3173) Error while moving some partitions to OnlinePartition state

2016-03-10 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-3173:

Attachment: (was: KAFKA-3173-race-repo.patch)

> Error while moving some partitions to OnlinePartition state 
> 
>
> Key: KAFKA-3173
> URL: https://issues.apache.org/jira/browse/KAFKA-3173
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> We observed another instance of the problem reported in KAFKA-2300, but this 
> time the error appeared in the partition state machine. In KAFKA-2300, we 
> haven't cleaned up the state in {{PartitionStateMachine}} and 
> {{ReplicaStateMachine}} as we do in {{KafkaController}}.
> Here is the stack trace:
> {noformat}
> [2016-01-29 15:26:51,393] ERROR [Partition state machine on Controller 0]: 
> Error while moving some partitions to OnlinePartition state 
> (kafka.controller.PartitionStateMachine)
> java.lang.IllegalStateException: Controller to broker state change requests 
> batch is not empty while creating a new one. Some LeaderAndIsr state changes 
> Map(0 -> Map(foo-0 -> (LeaderAndIsrInfo:
> (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:0))) 
> might be lost
> at kafka.controller.ControllerBrokerRequestBatch.newBatch(ControllerChannelManager.scala:254)
> at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:144)
> at kafka.controller.KafkaController.onNewPartitionCreation(KafkaController.scala:517)
> at kafka.controller.KafkaController.onNewTopicCreation(KafkaController.scala:504)
> at kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(PartitionStateMachine.scala:437)
> at kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply(PartitionStateMachine.scala:419)
> at kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply(PartitionStateMachine.scala:419)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.controller.PartitionStateMachine$TopicChangeListener.handleChildChange(PartitionStateMachine.scala:418)
> at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3173) Error while moving some partitions to OnlinePartition state

2016-03-10 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-3173:

Attachment: KAFKA-3173-race-repo.patch

> Error while moving some partitions to OnlinePartition state 
> 
>
> Key: KAFKA-3173
> URL: https://issues.apache.org/jira/browse/KAFKA-3173
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> We observed another instance of the problem reported in KAFKA-2300, but this 
> time the error appeared in the partition state machine. In KAFKA-2300, we 
> haven't cleaned up the state in {{PartitionStateMachine}} and 
> {{ReplicaStateMachine}} as we do in {{KafkaController}}.
> Here is the stack trace:
> {noformat}
> [2016-01-29 15:26:51,393] ERROR [Partition state machine on Controller 0]: 
> Error while moving some partitions to OnlinePartition state 
> (kafka.controller.PartitionStateMachine)
> java.lang.IllegalStateException: Controller to broker state change requests 
> batch is not empty while creating a new one. Some LeaderAndIsr state changes 
> Map(0 -> Map(foo-0 -> (LeaderAndIsrInfo:
> (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:0))) 
> might be lost
> at kafka.controller.ControllerBrokerRequestBatch.newBatch(ControllerChannelManager.scala:254)
> at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:144)
> at kafka.controller.KafkaController.onNewPartitionCreation(KafkaController.scala:517)
> at kafka.controller.KafkaController.onNewTopicCreation(KafkaController.scala:504)
> at kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(PartitionStateMachine.scala:437)
> at kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply(PartitionStateMachine.scala:419)
> at kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply(PartitionStateMachine.scala:419)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.controller.PartitionStateMachine$TopicChangeListener.handleChildChange(PartitionStateMachine.scala:418)
> at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3173) Error while moving some partitions to OnlinePartition state

2016-03-10 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189024#comment-15189024
 ] 

Flavio Junqueira commented on KAFKA-3173:
-

I have investigated the two races further. The first one is there, but it turns 
out to be harmless because we check {{hasStarted}} before adding any message to 
the batch; consequently, the batch is not left dirty. We should still fix it to 
avoid the ugly exception, but it is less critical.

The second race is a real problem. I have been able to reproduce it, and it can 
cause either the startup to fail or the zk listener event to be skipped. Here 
is the output from the repro:

{noformat}
[2016-03-10 09:58:21,257] ERROR [Partition state machine on Controller 0]:  
(kafka.controller.PartitionStateMachine:100)
java.lang.Exception
at 
kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$3.apply(PartitionStateMachine.scala:158)
at 
kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$3.apply(PartitionStateMachine.scala:158)
at kafka.utils.Logging$class.error(Logging.scala:100)
at 
kafka.controller.PartitionStateMachine.error(PartitionStateMachine.scala:44)
at 
kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:158)
at 
kafka.controller.KafkaController.onNewPartitionCreation(KafkaController.scala:518)
at 
kafka.controller.KafkaController.onNewTopicCreation(KafkaController.scala:505)
at 
kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(PartitionStateMachine.scala:455)
at 
kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply(PartitionStateMachine.scala:437)
at 
kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply(PartitionStateMachine.scala:437)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:255)
at 
kafka.controller.PartitionStateMachine$TopicChangeListener.handleChildChange(PartitionStateMachine.scala:436)
at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
[2016-03-10 09:58:21,447] ERROR [Partition state machine on Controller 0]: 
Error while moving some partitions to the online state 
(kafka.controller.PartitionStateMachine:103)
java.lang.IllegalStateException: Controller to broker state change requests 
batch is not empty while creating a new one. Some LeaderAndIsr state changes 
Map(1 -> Map(topic1-0 -> 
(LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1)))
 might be lost 
at 
kafka.controller.ControllerBrokerRequestBatch.newBatch(ControllerChannelManager.scala:254)
at 
kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:126)
at 
kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:71)
at 
kafka.controller.ControllerFailoverTest.testStartupRace(ControllerFailoverTest.scala:119)
{noformat}

The first exception is induced, just to show which call is concurrent with the 
call to startup. The second exception is due to the batch being dirty when I 
call startup on {{partitionStateMachine}}. It can happen the other way around 
too, in which case the topic update fails. Wrapping the call to 
{{triggerOnlinePartitionStateChange}} with the controller lock solves the issue.
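
To illustrate, here is a rough sketch of that fix, assuming the {{startup}} 
body and the lock names that appear in the stack traces above; this is a 
sketch, not the actual patch:

{noformat}
// Sketch only: take the controller lock around startup so it cannot
// interleave with the zk listener's handleChildChange.
def startup() {
  inLock(controllerContext.controllerLock) {
    initializePartitionState()
    hasStarted.set(true)
    triggerOnlinePartitionStateChange()
  }
}
{noformat}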

Unfortunately, I had to instrument the code to trigger the race. It is hard to 
test these cases without being invasive, so I'm inclined not to add test cases 
for this. I'll post the changes I used to repro the two issues I've mentioned. 
Note that they are test cases, but they don't actually fail, because the 
current code catches the illegal state exception.

> Error while moving some partitions to OnlinePartition state 
> 
>
> Key: KAFKA-3173
> URL: https://issues.apache.org/jira/browse/KAFKA-3173
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> We observed another instance of the problem reported in KAFKA-2300, but this 
> time the error appeared in the partition state machine. In KAFKA-2300, we 
> haven't cleaned up the state in {{PartitionStateMachine}} and 
> {{ReplicaStateMachine}} as we do in {{KafkaController}}.
> Here is the stack trace:
> {noformat}
> [2016-01-29 15:26:51,393] ERROR [Partition state machine on Controller 0]: 
> Error while moving some partitions to OnlinePartition state 
> (kafka.controller.PartitionStateMachine)
> java.lang.IllegalStateException: Controller to broker state change 

[jira] [Created] (KAFKA-3374) Failure in security rolling upgrade phase 2 system test

2016-03-10 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-3374:
--

 Summary: Failure in security rolling upgrade phase 2 system test
 Key: KAFKA-3374
 URL: https://issues.apache.org/jira/browse/KAFKA-3374
 Project: Kafka
  Issue Type: Test
  Components: system tests
Reporter: Ismael Juma
Assignee: Ben Stopford
Priority: Critical
 Fix For: 0.10.0.0


[~geoffra] reported the following a few days ago.

Seeing fairly consistent failures in
"Module: kafkatest.tests.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_two
Arguments:
{
  "broker_protocol": "SASL_PLAINTEXT",
  "client_protocol": "SASL_SSL"
}
Last successful run (git hash): 2a58ba9
First failure: f7887bd
(note: failures are not 100% consistent, so there's a non-zero chance that the 
commit that introduced the failure is prior to 2a58ba9)

See for example:
http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-03-08--001.1457454171--apache--trunk--f6e35de/report.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3373) Add `log` prefix to KIP-31/32 configs

2016-03-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3373:
---
Issue Type: Bug  (was: Test)

> Add `log` prefix to KIP-31/32 configs
> -
>
> Key: KAFKA-3373
> URL: https://issues.apache.org/jira/browse/KAFKA-3373
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
> Fix For: 0.10.0.0
>
>
> [~jjkoshy] suggested that we should prefix the configs introduced as part of 
> KIP-31/32 to include a `log` prefix:
> message.format.version
> message.timestamp.type
> message.timestamp.difference.max.ms
> If we do it, we must update the KIP.
> Marking it as blocker because we should decide either way before 0.10.0.0.
> Discussion here:
> https://github.com/apache/kafka/pull/907#issuecomment-193950768



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3373) Add `log` prefix to KIP-31/32 configs

2016-03-10 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188982#comment-15188982
 ] 

Ismael Juma commented on KAFKA-3373:


cc [~becket_qin] [~junrao]

> Add `log` prefix to KIP-31/32 configs
> -
>
> Key: KAFKA-3373
> URL: https://issues.apache.org/jira/browse/KAFKA-3373
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
> Fix For: 0.10.0.0
>
>
> [~jjkoshy] suggested that we should prefix the configs introduced as part of 
> KIP-31/32 to include a `log` prefix:
> message.format.version
> message.timestamp.type
> message.timestamp.difference.max.ms
> If we do it, we must update the KIP.
> Marking it as blocker because we should decide either way before 0.10.0.0.
> Discussion here:
> https://github.com/apache/kafka/pull/907#issuecomment-193950768



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3373) Add `log` prefix to KIP-31/32 configs

2016-03-10 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-3373:
--

 Summary: Add `log` prefix to KIP-31/32 configs
 Key: KAFKA-3373
 URL: https://issues.apache.org/jira/browse/KAFKA-3373
 Project: Kafka
  Issue Type: Test
Reporter: Ismael Juma
 Fix For: 0.10.0.0


[~jjkoshy] suggested that we should prefix the configs introduced as part of 
KIP-31/32 to include a `log` prefix:

message.format.version
message.timestamp.type
message.timestamp.difference.max.ms

If we do it, we must update the KIP.

Marking it as blocker because we should decide either way before 0.10.0.0.

Discussion here:
https://github.com/apache/kafka/pull/907#issuecomment-193950768
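
For concreteness, this is how the rename would look in a broker properties 
file, assuming the prefix is adopted as proposed (values are illustrative):

{noformat}
# current names
message.format.version=0.10.0
message.timestamp.type=CreateTime
message.timestamp.difference.max.ms=9223372036854775807

# with the proposed `log` prefix
log.message.format.version=0.10.0
log.message.timestamp.type=CreateTime
log.message.timestamp.difference.max.ms=9223372036854775807
{noformat}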



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3373) Add `log` prefix to KIP-31/32 configs

2016-03-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3373:
---
Priority: Blocker  (was: Major)

> Add `log` prefix to KIP-31/32 configs
> -
>
> Key: KAFKA-3373
> URL: https://issues.apache.org/jira/browse/KAFKA-3373
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> [~jjkoshy] suggested that we should prefix the configs introduced as part of 
> KIP-31/32 to include a `log` prefix:
> message.format.version
> message.timestamp.type
> message.timestamp.difference.max.ms
> If we do it, we must update the KIP.
> Marking it as blocker because we should decide either way before 0.10.0.0.
> Discussion here:
> https://github.com/apache/kafka/pull/907#issuecomment-193950768



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3248) AdminClient Blocks Forever in send Method

2016-03-10 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188979#comment-15188979
 ] 

Ismael Juma commented on KAFKA-3248:


I bumped the priority as blocking forever is not good and the fix is 
straightforward.
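
For reference, a rough sketch of the suggested change in {{AdminClient.send()}}; 
the timeout name and error handling are illustrative, not the final API:

{noformat}
// Sketch only: bound the wait instead of blocking indefinitely.
private def send(target: Node, api: ApiKeys, request: AbstractRequest): Struct = {
  val future = client.send(target, api, request)
  client.poll(future, requestTimeoutMs)  // was: client.poll(future)
  if (future.succeeded())
    future.value().responseBody()
  else if (future.isDone)
    throw future.exception()
  else
    throw new TimeoutException("request not completed within " + requestTimeoutMs + " ms")
}
{noformat}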

> AdminClient Blocks Forever in send Method
> -
>
> Key: KAFKA-3248
> URL: https://issues.apache.org/jira/browse/KAFKA-3248
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.9.0.0
>Reporter: John Tylwalk
>Assignee: Warren Green
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> AdminClient will block forever when performing operations involving the 
> {{send()}} method, due to usage of 
> {{ConsumerNetworkClient.poll(RequestFuture)}} - which blocks indefinitely.
> Suggested fix is to use {{ConsumerNetworkClient.poll(RequestFuture, long 
> timeout)}} in {{AdminClient.send()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3248) AdminClient Blocks Forever in send Method

2016-03-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3248:
---
Priority: Critical  (was: Minor)

> AdminClient Blocks Forever in send Method
> -
>
> Key: KAFKA-3248
> URL: https://issues.apache.org/jira/browse/KAFKA-3248
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.9.0.0
>Reporter: John Tylwalk
>Assignee: Warren Green
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> AdminClient will block forever when performing operations involving the 
> {{send()}} method, due to usage of 
> {{ConsumerNetworkClient.poll(RequestFuture)}} - which blocks indefinitely.
> Suggested fix is to use {{ConsumerNetworkClient.poll(RequestFuture, long 
> timeout)}} in {{AdminClient.send()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3248) AdminClient Blocks Forever in send Method

2016-03-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3248:
---
Fix Version/s: 0.10.0.0

> AdminClient Blocks Forever in send Method
> -
>
> Key: KAFKA-3248
> URL: https://issues.apache.org/jira/browse/KAFKA-3248
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.9.0.0
>Reporter: John Tylwalk
>Assignee: Warren Green
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> AdminClient will block forever when performing operations involving the 
> {{send()}} method, due to usage of 
> {{ConsumerNetworkClient.poll(RequestFuture)}} - which blocks indefinitely.
> Suggested fix is to use {{ConsumerNetworkClient.poll(RequestFuture, long 
> timeout)}} in {{AdminClient.send()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1107

2016-03-10 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3344: Remove previous system test's leftover test-log4j.properties

--
[...truncated 110 lines...]
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
Download 
https://repo1.maven.org/maven2/org/scala-lang/scala-library/2.11.8/scala-library-2.11.8.pom
Download 
https://repo1.maven.org/maven2/org/scala-lang/scala-library/2.11.8/scala-library-2.11.8.jar
Download 
https://repo1.maven.org/maven2/org/scala-lang/scala-compiler/2.11.8/scala-compiler-2.11.8.pom
Download 
https://repo1.maven.org/maven2/org/scala-lang/scala-reflect/2.11.8/scala-reflect-2.11.8.pom
Download 
https://repo1.maven.org/maven2/org/scala-lang/scala-compiler/2.11.8/scala-compiler-2.11.8.jar
Download 
https://repo1.maven.org/maven2/org/scala-lang/scala-reflect/2.11.8/scala-reflect-2.11.8.jar
ERROR: Could not install GRADLE_2_4_RC_2_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:947)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1453)
at hudson.model.AbstractProject.poll(AbstractProject.java:1356)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:526)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:555)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR: Could not install JDK_1_7U51_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:947)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1453)
at hudson.model.AbstractProject.poll(AbstractProject.java:1356)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:526)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:555)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
:79: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
:36: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,
:37: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

[jira] [Created] (KAFKA-3372) Trailing space in Kafka ConsumerConfig

2016-03-10 Thread Kundan (JIRA)
Kundan created KAFKA-3372:
-

 Summary: Trailing space in Kafka ConsumerConfig
 Key: KAFKA-3372
 URL: https://issues.apache.org/jira/browse/KAFKA-3372
 Project: Kafka
  Issue Type: Bug
  Components: consumer, kafka streams
Affects Versions: 0.8.2.1
 Environment: Local
Reporter: Kundan
Assignee: Neha Narkhede


By luck, I had left a value in the properties file with a trailing space, and 
it threw the error below.
Example:
group.id=MyGroupID

When I read this group.id from the properties file and passed it to 
ConsumerConfig, the error appeared as in the stack trace below.



Exception in thread "Thread-1003" kafka.common.InvalidConfigException: 
client.id MyUserDataReaderGroup  is illegal, contains a character other than 
ASCII alphanumerics, '.', '_' and '-'
at kafka.common.Config$class.validateChars(Config.scala:32)
at kafka.consumer.ConsumerConfig$.validateChars(ConsumerConfig.scala:25)
at 
kafka.consumer.ConsumerConfig$.validateClientId(ConsumerConfig.scala:64)
at kafka.consumer.ConsumerConfig$.validate(ConsumerConfig.scala:57)
at kafka.consumer.ConsumerConfig.<init>(ConsumerConfig.scala:184)
at kafka.consumer.ConsumerConfig.<init>(ConsumerConfig.scala:94)
at 
my.package.group.services.kafka.MyUserDataConsumer.setKafkaConfig(MyUserDataConsumer.java:96)
at 
my.package.group.services.kafka.MyUserDataConsumer.run(MyUserDataConsumer.java:112)
2016-03-10 13:34:41.280:INFO:oejsh.ContextHandler:main: Started 
o.e.j.w.WebAppContext@69a90966{/km,file:/tmp/jetty-0.0.0.0-8080-km.war-_km-any-7539601194543292160.dir/webapp/,AVAILABLE}{/km.war}
2016-03-10 13:34:47.128:INFO:ProProject:main: Spring WebApplicationInitializers 
detected on classpath: 
[my.package.group.ProProject.services.web.ApplicationInitializer@3474c3b6]
2016-03-10 13:34:47.259:INFO:ProProject:main: Initializing Spring root 
WebApplicationContext
2016-03-10 13:34:55.972:INFO:ProProject:main: Initializing Spring 
FrameworkServlet 'dispatcher'
2016-03-10 13:34:56.782:INFO:oejsh.ContextHandler:main: Started 
o.e.j.w.WebAppContext@554b8728{/ProProject,file:/tmp/jetty-0.0.0.0-8080-ProProject.war-_ProProject-any-2165600182871766069.dir/webapp/,AVAILABLE}{/ProProject.war}
2016-03-10 13:34:56.792:INFO:oejs.ServerConnector:main: Started 
ServerConnector@65269268{HTTP/1.1}{0.0.0.0:8080}
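
As a workaround until this is addressed, values read from the properties file 
can be trimmed before constructing the config; a minimal sketch (the file name 
is illustrative):

{noformat}
import java.io.FileInputStream
import java.util.Properties
import scala.collection.JavaConverters._
import kafka.consumer.ConsumerConfig

val props = new Properties()
val in = new FileInputStream("consumer.properties")
try props.load(in) finally in.close()

// Strip accidental leading/trailing whitespace from every value.
for (name <- props.stringPropertyNames.asScala)
  props.setProperty(name, props.getProperty(name).trim)

val config = new ConsumerConfig(props)
{noformat}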





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3371) ClientCompatibilityTest system test failing since KIP-31/KIP-32 was merged

2016-03-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3371:
---
Summary: ClientCompatibilityTest system test failing since KIP-31/KIP-32 
was merged  (was: ClientCompatibilityTest system test failing since 
KIP-31/KIP-32)

> ClientCompatibilityTest system test failing since KIP-31/KIP-32 was merged
> --
>
> Key: KAFKA-3371
> URL: https://issues.apache.org/jira/browse/KAFKA-3371
> Project: Kafka
>  Issue Type: Test
>Reporter: Ismael Juma
>Priority: Blocker
>
> ClientCompatibilityTest system test has been failing since we merged 
> KIP-31/32. We need to fix this for 0.10.0.0. Latest failure below:
> test_id:
> 2016-03-09--001.kafkatest.tests.compatibility_test.ClientCompatibilityTest.test_producer_back_compatibility
> status: FAIL
> run time:   1 minute 4.864 seconds
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-03-09--001.1457539618--apache--trunk--324b0c8/report.html
> cc [~becket_qin]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3371) ClientCompatibilityTest system test failing since KIP-31/KIP-32

2016-03-10 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-3371:
--

 Summary: ClientCompatibilityTest system test failing since 
KIP-31/KIP-32
 Key: KAFKA-3371
 URL: https://issues.apache.org/jira/browse/KAFKA-3371
 Project: Kafka
  Issue Type: Test
Reporter: Ismael Juma
Priority: Blocker


ClientCompatibilityTest system test has been failing since we merged KIP-31/32. 
We need to fix this for 0.10.0.0. Latest failure below:


test_id:
2016-03-09--001.kafkatest.tests.compatibility_test.ClientCompatibilityTest.test_producer_back_compatibility
status: FAIL
run time:   1 minute 4.864 seconds
http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-03-09--001.1457539618--apache--trunk--324b0c8/report.html

cc [~becket_qin]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1106

2016-03-10 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: KAFKA-3361 follow up

--
[...truncated 1510 lines...]
kafka.api.AdminClientTest > testListGroups PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordTooLarge PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup PASSED

kafka.api.PlaintextConsumerTest > testMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testAutoOffsetReset PASSED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitIntercept PASSED

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance FAILED
java.lang.AssertionError: Expected partitions [topic-0, topic-1, topic2-0, 
topic2-1] but actually got [topic-0, topic-1]

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.SaslPlaintextConsumerTest > testPauseStateNotPreservedByRebalance 
PASSED

kafka.api.SaslPlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslPlaintextConsumerTest > testListTopics PASSED

kafka.api.SaslPlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslPlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SaslPlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.SslConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.SslConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SslConsumerTest > testListTopics PASSED

kafka.api.SslConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.SslConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SslConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > 

[GitHub] kafka pull request: MINOR: update compression design doc to includ...

2016-03-10 Thread omkreddy
GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/1040

MINOR: update compression design doc to include lz4 protocol



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka MINOR-DOC

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1040.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1040


commit e4e880587856dd42cccba4501987e66185746e62
Author: Manikumar reddy O 
Date:   2016-03-10T08:57:27Z

MINOR: update compression design doc to include lz4 protocol




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-03-10 Thread Rajini Sivaram
Gwen,

Just to be clear, the alternative would be:

*jaas.conf:*

GssapiKafkaServer {

com.ibm.security.auth.module.Krb5LoginModule required
credsType=both
useKeytab="file:/kafka/key.tab"
principal="kafka/localh...@example.com ";

};

SmartcardKafkaServer {

  example.SmartcardLoginModule required

  cardNumber=123;

};


*KafkaConfig*



   - login.context.map={"GSSAPI"="GssapiKafkaServer",
  "SMARTCARD"="SmartcardKafkaServer"}
   - login.class.map={"GSSAPI"=GssapiLogin.class,
  "SMARTCARD"=SmartcardLogin.class}
   - callback.handler.class.map={"GSSAPI"=GssapiCallbackHandler.class,
  "SMARTCARD"=SmartcardCallbackHandler.class}

*Client Config *
Same as the server, but with only one entry allowed in each map and
jaas.conf



This is a different model from the Java standard for supporting multiple
logins. As a developer, I am inclined to stick with approaches that are
widely in use, like JSSE. But this alternative can be made to work if the
Kafka community feels it is more appropriate for Kafka. If you know of
other systems that use this approach, that would be helpful.
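
For the record, the per-mechanism code discussed below (the
example.SmartcardLoginModule) only needs to implement the standard JAAS
interface; a hypothetical sketch, with illustrative names rather than an
actual vendor API:

import java.util.{Map => JMap}
import javax.security.auth.Subject
import javax.security.auth.callback.CallbackHandler
import javax.security.auth.spi.LoginModule

class SmartcardLoginModule extends LoginModule {
  private var cardNumber: String = _

  override def initialize(subject: Subject, callbackHandler: CallbackHandler,
                          sharedState: JMap[String, _], options: JMap[String, _]): Unit =
    cardNumber = String.valueOf(options.get("cardNumber"))

  // Authenticate against the card reader here; throw LoginException on failure.
  override def login(): Boolean = true

  override def commit(): Boolean = true
  override def abort(): Boolean = false
  override def logout(): Boolean = true
}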



On Thu, Mar 10, 2016 at 2:07 AM, Gwen Shapira  wrote:

> What I'm hearing is that:
>
> 1. In order to support authentication mechanisms that were not written
> specifically with Kafka in mind, someone will need to write the
> integration between the mechanism and Kafka. This may include Login
> and CallbackHandler classes. This can be the mechanism vendor, the
> user or a 3rd party vendor.
> 2. If someone wrote the code to support a mechanism in Kafka, and a
> user will want to use more than one mechanism, they will still need to
> write a wrapper.
> 3. In reality, #2 will not be necessary ("edge-case") because Kafka
> will actually already provide the callback needed (and presumably also
> the code to load the LoginModule provided by Example.com)?
>
> Tradeoff #1 sounds reasonable.
> #2 and #3 do not sound reasonable considering one of the goals of the
> patch is to support multiple mechanisms. I don't think we should force
> our users to write code just to avoid writing it ourselves.
> Configuring security is complex enough as is.
> Furthermore, if we believe that "Smartcard is likely to use standard
> NameCallback and PasswordCallback already implemented in Kafka" - why
> do we even provide configuration for Login and CallbackHandler
> classes? Either we support multiple mechanisms written by different
> vendors, or we don't.
>
> Gwen
>
>
> On Wed, Mar 9, 2016 at 12:32 AM, Rajini Sivaram
>  wrote:
> > I am not saying that the developer at Example Inc. would develop a Login
> > implementation that combines Smartcard and Kerberos because Retailer uses
> > both. I am saying that Example Inc develops the LoginModule (similar to
> JVM
> > security providers developing Kerberos modules). But there is no standard
> > interface for Login to allow ticket refresh. So, it is very unlikely that
> > Example Inc would develop a Login implementation that works with an
> Apache
> > Kafka defined interface ( Kafka developers wrote this code for Kerberos).
> > For a custom integration, the user (i.e. Retailer) would be expected to
> > develop this code if required.
> >
> > You could imagine that Smartcard is a commonly used mechanism and a 3rd
> > party develops code for integrating Smartcard with Kafka and makes the
> > integration code (Login and CallbackHandler implementation) widely
> > available, If Retailer wants to use clients or a broker with just
> Smartcard
> > enabled in their broker, they configure Kafka to use the 3rd party code,
> > with no additional code development. But to combine Smartcard and
> Kerberos,
> > Retailer needs to write a few lines of code to incorporate both Smartcard
> > and Kerberos. I believe this is an edge case.
> >
> > Smartcard is likely to use standard NameCallback and PasswordCallback
> > already implemented in Kafka and Kerberos support exists in Kafka. So it
> is
> > very likely that Retailer doesn't need to override Login or
> CallbackHandler
> > in this case. And it would just be a matter of configuring the
> mechanisms.
> >
> > On Wed, Mar 9, 2016 at 12:48 AM, Gwen Shapira  wrote:
> >
> >> "Since smart card logins are not built into Kafka (or the JDK), you
> need a
> >> developer to build the login module. So the developer implements
> >> example.SmartcardLoginModule. In addition, the developer may also
> implement
> >> callback handlers for the SASL client or server  and a login class to
> keep
> >> this login refreshed. The callback handlers and login implementation
> >> support all the mechanisms that the organisation supports - in this case
> >> Kerberos and smart card."
> >>
> >> In this case, the developer works for Example Inc (which develops
> >> SmartCard authentication modules), while I work for Retailer and need
> >> to use his module.
> >> You assume that developer from Example Inc knows about