Re: [Newbie][IntelliJ Setup] How to run Kafka in IntelliJ

2019-10-14 Thread tao xiao
One more thing you need to do is include slf4j-log4j in build.gradle, as it is
not included as a compile-scope dependency by default.
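
For reference, a minimal sketch of what that looks like (the exact artifact
coordinates and version here are assumptions; match them to the branch you are
building):

// build.gradle (hypothetical snippet)
dependencies {
  // slf4j-log4j12 is the SLF4J binding that routes logging to log4j
  compile 'org.slf4j:slf4j-log4j12:1.7.25'
}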

On Tue, Oct 15, 2019 at 12:21 AM chandresh pancholi <
chandreshpancholi...@gmail.com> wrote:

> Wondering if the exception below is related in any way:
>
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
> details.
> SLF4J: Failed to load class "org.slf4j.impl.StaticMDCBinder".
> SLF4J: Defaulting to no-operation MDCAdapter implementation.
> SLF4J: See http://www.slf4j.org/codes.html#no_static_mdc_binder for
> further details.
>
> On Mon, Oct 14, 2019 at 8:45 PM chandresh pancholi <
> chandreshpancholi...@gmail.com> wrote:
>
>> I have given the full path in the VM options:
>>
>> *-Dlog4j.configuration=file:/Users/Chandresh/opensource/kafka/config/log4j.properties*
>>
>> Still no logs in the console.
>>
>> [image: Screenshot 2019-10-14 at 8.44.34 PM.png]
>>
>> On Mon, Oct 14, 2019 at 8:16 PM tao xiao  wrote:
>>
>>> Yes, you need to add the log4j configuration to the JVM options:
>>>
>>> -Dlog4j.configuration=file:/path/to/log4j.properties
>>>
>>> On Mon, Oct 14, 2019 at 10:33 PM chandresh pancholi <
>>> chandreshpancholi...@gmail.com> wrote:
>>>
>>> > Let me rephrase it.
>>> >
>>> > I figured out that Kafka is running successfully, but I can't see logs in
>>> > the IntelliJ console. To see the logs, do I have to change any
>>> > properties?
>>> >
>>> > On Mon, Oct 14, 2019 at 5:53 PM chandresh pancholi <
>>> > chandreshpancholi...@gmail.com> wrote:
>>> >
>> Sadly, no luck!
>>> >>
>>> >> [image: Screenshot 2019-10-14 at 5.52.55 PM.png]
>>> >>
>>> >> On Mon, Oct 14, 2019 at 5:22 PM Manasvi Gupta 
>>> >> wrote:
>>> >>
>>> >>> Can you try copying config/log4j.properties to
>>> >>> core/src/main/resources/log4j.properties and see if it helps?
>>> >>>
>>> >>> On Mon 14 Oct, 2019, 5:01 PM chandresh pancholi, <
>>> >>> chandreshpancholi...@gmail.com> wrote:
>>> >>>
>>> >>> > Manasvi,
>>> >>> >
>>> >>> > I have followed your blog but I am getting no logs in the IDE. I am
>>> >>> > seeing Kafka.main running indefinitely without logging anything in
>>> >>> > the IDE console.
>>> >>> >
>>> >>> > I have attached an image for your reference.
>>> >>> >
>>> >>> > On Mon, Oct 14, 2019 at 4:56 PM Manasvi Gupta >> >
>>> >>> wrote:
>>> >>> >
>>> >>> >>
>>> >>> >>
>>> >>>
>>> https://medium.com/@manasvi/getting-started-with-contributing-to-apache-kafka-part-1-build-and-run-kafka-from-source-code-f900452fdc06
>>> >>> >>
>>> >>> >> I wrote this blog post sometime ago for folks to get started with
>>> >>> >> contributing to Apache Kafka.
>>> >>> >>
>>> >>> >> I am behind on part 2 of the post; hopefully it will be done in a
>>> >>> >> few days.
>>> >>> >>
>>> >>> >> Please share feedback.
>>> >>> >>
>>> >>> >> On Mon 14 Oct, 2019, 1:27 PM chandresh pancholi, <
>>> >>> >> chandreshpancholi...@gmail.com> wrote:
>>> >>> >>
>>> >>> >> > Hello Dev Community,
>>> >>> >> >
>>> >>> >> >
>>> >>> >> > I have been trying to set up Kafka in IntelliJ. When I try to run
>>> >>> >> > the Kafka class in the core package, it takes forever; it keeps
>>> >>> >> > loading.
>>> >>> >> >
>>> >>> >> > I want to pick up some newbie Kafka issues and work on them, but
>>> >>> >> > this setup is not working for me.
>>> >>> >> >
>>> >>> >> > If anyone can help me set up the Kafka project in IntelliJ, that
>>> >>> >> > would be a great help.
>>> >>> >> >
>>> >>> >> > --
>>> >>> >> > Chandresh Pancholi
>>> >>> >> >
>>> >>> >>
>>> >>> >
>>> >>> >
>>> >>> > --
>>> >>> > Chandresh Pancholi
>>> >>> > India Head
>>> >>> > https://oneconcern.com <http://oneconcern.com>
>>> >>> > Email-id:chandr...@oneconcern.com
>>> >>> > 
>>> >>> > Contact:08951803660
>>> >>> >
>>> >>>
>>> >>
>>> >>
>>> >> --
>>> >> Chandresh Pancholi
>>> >> +91-8951803660
>>> >>
>>> >
>>> >
>>> > --
>>> > Chandresh Pancholi
>>> > +91-8951803660
>>> >
>>>
>>>
>>> --
>>> Regards,
>>> Tao
>>>
>>
>>
>> --
>> Chandresh Pancholi
>> +91-8951803660
>>
>
>
> --
> Chandresh Pancholi
> +91-8951803660
>


-- 
Regards,
Tao


Re: [Newbie][IntelliJ Setup] How to run Kafka in IntelliJ

2019-10-14 Thread tao xiao
Yes, you need to add the log4j configuration to the JVM options:

-Dlog4j.configuration=file:/path/to/log4j.properties

On Mon, Oct 14, 2019 at 10:33 PM chandresh pancholi <
chandreshpancholi...@gmail.com> wrote:

> Let me rephrase it.
>
> I figured out that Kafka is running successfully, but I can't see logs in
> the IntelliJ console. To see the logs, do I have to change any properties?
>
> On Mon, Oct 14, 2019 at 5:53 PM chandresh pancholi <
> chandreshpancholi...@gmail.com> wrote:
>
>> Sadly, no luck!
>>
>> [image: Screenshot 2019-10-14 at 5.52.55 PM.png]
>>
>> On Mon, Oct 14, 2019 at 5:22 PM Manasvi Gupta 
>> wrote:
>>
>>> Can you try copying config/log4j.properties to
>>> core/src/main/resources/log4j.properties and see if it helps?
>>>
>>> On Mon 14 Oct, 2019, 5:01 PM chandresh pancholi, <
>>> chandreshpancholi...@gmail.com> wrote:
>>>
>>> > Manasvi,
>>> >
>>> > I have followed your blog but I am getting no logs in the IDE. I am
>>> > seeing Kafka.main running indefinitely without logging anything in the
>>> > IDE console.
>>> >
>>> > I have attached an image for your reference.
>>> >
>>> > On Mon, Oct 14, 2019 at 4:56 PM Manasvi Gupta 
>>> wrote:
>>> >
>>> >>
>>> >>
>>> https://medium.com/@manasvi/getting-started-with-contributing-to-apache-kafka-part-1-build-and-run-kafka-from-source-code-f900452fdc06
>>> >>
>>> >> I wrote this blog post sometime ago for folks to get started with
>>> >> contributing to Apache Kafka.
>>> >>
>>> >> I am behind on part 2 of the post; hopefully it will be done in a few
>>> >> days.
>>> >>
>>> >> Please share feedback.
>>> >>
>>> >> On Mon 14 Oct, 2019, 1:27 PM chandresh pancholi, <
>>> >> chandreshpancholi...@gmail.com> wrote:
>>> >>
>>> >> > Hello Dev Community,
>>> >> >
>>> >> >
>>> >> > I have been trying to set up Kafka in IntelliJ. When I try to run the
>>> >> > Kafka class in the core package, it takes forever; it keeps loading.
>>> >> >
>>> >> > I want to pick up some newbie Kafka issues and work on them, but this
>>> >> > setup is not working for me.
>>> >> >
>>> >> > If anyone can help me set up the Kafka project in IntelliJ, that
>>> >> > would be a great help.
>>> >> >
>>> >> > --
>>> >> > Chandresh Pancholi
>>> >> >
>>> >>
>>> >
>>> >
>>> > --
>>> > Chandresh Pancholi
>>> > India Head
>>> > https://oneconcern.com 
>>> > Email-id:chandr...@oneconcern.com
>>> > 
>>> > Contact:08951803660
>>> >
>>>
>>
>>
>> --
>> Chandresh Pancholi
>> +91-8951803660
>>
>
>
> --
> Chandresh Pancholi
> +91-8951803660
>


-- 
Regards,
Tao


[jira] [Created] (KAFKA-7274) Incorrect subject credential used in inter-broker communication

2018-08-09 Thread TAO XIAO (JIRA)
TAO XIAO created KAFKA-7274:
---

 Summary: Incorrect subject credential used in inter-broker 
communication
 Key: KAFKA-7274
 URL: https://issues.apache.org/jira/browse/KAFKA-7274
 Project: Kafka
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.1.1, 1.1.0, 1.0.2, 1.0.1, 1.0.0
Reporter: TAO XIAO


We configured a broker to enable multiple SASL mechanisms using a JAAS config 
file, but the broker failed to start.

 

Here is the security section of server.properties:

 

{noformat}
listeners=SASL_PLAINTEXT://:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=PLAIN{noformat}

 

JAAS file

 
{noformat}
sasl_plaintext.KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret"
  user_alice="alice-secret";

  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin1"
  password="admin-secret";
};{noformat}
 

Exception we got

 
{noformat}
[2018-08-10 12:12:13,070] ERROR [Controller id=0, targetBrokerId=0] Connection 
to node 0 failed authentication due to: Authentication failed: Invalid username 
or password (org.apache.kafka.clients.NetworkClient){noformat}
 

If we switch to the broker configuration properties instead, the broker starts 
successfully:

 
{noformat}
listeners=SASL_PLAINTEXT://:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=PLAIN
listener.name.sasl_plaintext.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
 required username="admin" password="admin-secret" user_admin="admin-secret" 
user_alice="alice-secret";
listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
 required username="admin1" password="admin-secret";{noformat}
 

I believe this issue is caused by Kafka assigning all login modules to each 
defined mechanism when using a JAAS file, which results in the Login class 
adding the usernames defined in every login module to the same Subject:

[https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/JaasContext.java#L101]

 

[https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L63]
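
To make the failure mode concrete, here is a minimal sketch (a hypothetical 
demo class, not from the Kafka codebase) showing that the JVM returns both 
login modules for the single JAAS context, which is why one Subject ends up 
holding both sets of credentials:

{code}
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

public class JaasEntriesDemo {
    public static void main(String[] args) {
        // run with -Djava.security.auth.login.config=/path/to/kafka_jaas.conf
        AppConfigurationEntry[] entries = Configuration.getConfiguration()
                .getAppConfigurationEntry("sasl_plaintext.KafkaServer");
        // with the JAAS file above this prints BOTH PlainLoginModule and
        // ScramLoginModule: there is no per-mechanism separation, so the
        // credentials from both modules land on the same Subject
        for (AppConfigurationEntry entry : entries)
            System.out.println(entry.getLoginModuleName());
    }
}
{code}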

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-19 Thread tao xiao
+1 non-binding. thx Ismael

On Thu, 19 Apr 2018 at 23:14 Vahid S Hashemian 
wrote:

> +1 (non-binding).
>
> Thanks Ismael.
>
> --Vahid
>
>
>
> From:   Jorge Esteban Quilcate Otoya 
> To: dev@kafka.apache.org
> Date:   04/19/2018 07:32 AM
> Subject:Re: [VOTE] Kafka 2.0.0 in June 2018
>
>
>
> +1 (non binding), thanks Ismael!
>
> El jue., 19 abr. 2018 a las 13:01, Manikumar ()
> escribió:
>
> > +1 (non-binding).
> >
> > Thanks.
> >
> > On Thu, Apr 19, 2018 at 3:07 PM, Stephane Maarek <
> > steph...@simplemachines.com.au> wrote:
> >
> > > +1 (non binding). Thanks Ismael!
> > >
> > > On Thu., 19 Apr. 2018, 2:47 pm Gwen Shapira, 
> wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > On Wed, Apr 18, 2018 at 11:35 AM, Ismael Juma 
> > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I started a discussion last year about bumping the version of the
> > June
> > > > 2018
> > > > > release to 2.0.0[1]. To reiterate the reasons in the original
> post:
> > > > >
> > > > > 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major
> > > > version
> > > > > bump due to semantic versioning.
> > > > >
> > > > > 2. Take the chance to remove deprecated code that was deprecated
> > prior
> > > to
> > > > > 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that
> we
> > can
> > > > > move faster.
> > > > >
> > > > > One concern that was raised is that we still do not have a rolling
> > > > upgrade
> > > > > path for the old ZK-based consumers. Since the Scala clients
> haven't
> > > been
> > > > > updated in a long time (they don't support security or the latest
> > > message
> > > > > format), users who need them can continue to use 1.1.0 with no
> loss
> > of
> > > > > functionality.
> > > > >
> > > > > Since it's already mid-April and people seemed receptive during
> the
> > > > > discussion last year, I'm going straight to a vote, but we can
> > discuss
> > > > more
> > > > > if needed (of course).
> > > > >
> > > > > Ismael
> > > > >
> > > > > [1]
> > > > >
>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.apache.org_thread.html_dd9d3e31d7e9590c1f727ef5560c93=DwIFaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=dA4UE_6i8-ltuLeZapDpOBc_8-XI9HTNmZdteu6wfk8=lBt342M2PM_4czzbFWtAc63571qsZGc9sfB7A5DlZPo=
>
> > > > > 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > *Gwen Shapira*
> > > > Product Manager | Confluent
> > > > 650.450.2760 <(650)%20450-2760> | @gwenshap
> > > > Follow us: Twitter <
>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__twitter.com_ConfluentInc=DwIFaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=dA4UE_6i8-ltuLeZapDpOBc_8-XI9HTNmZdteu6wfk8=KcgJLWP_UEkzMrujjrbJA4QfHPDrJjcaWS95p2LGewU=
> > | blog
> > > > <
>
> https://urldefense.proofpoint.com/v2/url?u=http-3A__www.confluent.io_blog=DwIFaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=dA4UE_6i8-ltuLeZapDpOBc_8-XI9HTNmZdteu6wfk8=XaV8g8yeT1koLf1dbc30NTzBdXB6GAj7FwD8J2VP7iY=
> >
> > > >
> > >
> >
>
>
>
>
>


Re: [DISCUSS] KIP-250 Add Support for Quorum-based Producer Acknowledgment

2018-02-03 Thread tao xiao
Hi Guozhang,

Are you proposing changing the semantics of ack=all to acknowledge a message
only after all replicas (not all ISRs, which is what Kafka currently does)
have committed the message? This is equivalent to setting min.isr to the
number of replicas, which makes ack=all much stricter than what Kafka has
right now. I think this may surprise users too, as the producer will not
succeed in producing a message to Kafka when one of the followers is down.
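
For reference, a sketch of the settings in question today (acks is a producer
config; min.insync.replicas is a broker/topic config):

acks=all                  # leader waits for the full current ISR
min.insync.replicas=2     # the ack fails if the ISR shrinks below this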

On Sat, 3 Feb 2018 at 15:26 Guozhang Wang  wrote:

> Hi Dong,
>
> Could you elaborate a bit more how controller could affect leaders to
> switch between all and quorum?
>
>
> Guozhang
>
>
> On Fri, Feb 2, 2018 at 10:12 PM, Dong Lin  wrote:
>
> > Hey Guozhang,
> >
> > Got it. Thanks for the detailed explanation. I guess my point is that we
> > can probably achieve the best of both worlds, i.e. maintain the existing
> > behavior of ack="all" while improving the tail latency.
> >
> > Thanks,
> > Dong
> >
> >
> >
> > On Fri, Feb 2, 2018 at 8:43 PM, Guozhang Wang 
> wrote:
> >
> >> Hi Dong,
> >>
> >> Yes, in terms of fault tolerance "quorum" does not do better than "all",
> >> as I said, with {min.isr} set to X+1, Kafka is able to tolerate only X
> >> failures.
> >> So if A and B are partitioned off at the same time, then there are two
> >> concurrent failures and we do not guarantee all acked messages will be
> >> retained.
> >>
> >> The goal of my approach is to maintain the behavior of ack="all", which
> >> happens to do better than what Kafka actually guarantees: when both A and
> >> B are partitioned off, produced records will not be acked, since "all"
> >> requires all replicas (not only ISRs; my previous email used an incorrect
> >> term). This is doing better than tolerating X failures, which I was
> >> proposing to keep, so that we would not introduce any regression
> >> "surprises" to users who are already using "all". In other words,
> >> "quorum" trades a bit of failure tolerance, strictly defined by min.isr,
> >> for better tail latency.
> >>
> >>
> >> Guozhang
> >>
> >>
> >> On Fri, Feb 2, 2018 at 6:25 PM, Dong Lin  wrote:
> >>
> >>> Hey Guozhang,
> >>>
> >>> According to the new proposal, with 3 replicas, min.isr=2 and
> >>> acks="quorum", it seems that acknowledged messages can still be
> truncated
> >>> in the network partition scenario you mentioned, right? So I guess the
> goal
> >>> is for some user to achieve better tail latency at the cost of
> potential
> >>> message loss?
> >>>
> >>> If this is the case, then I think it may be better to adopt an approach
> >>> where controller dynamically turn on/off this optimization. This
> provides
> >>> user with peace of mind (i.e. no message loss) while still reducing
> tail
> >>> latency. What do you think?
> >>>
> >>> Thanks,
> >>> Dong
> >>>
> >>>
> >>> On Fri, Feb 2, 2018 at 11:11 AM, Guozhang Wang 
> >>> wrote:
> >>>
>  Hello Litao,
> 
>  Just double checking on the leader election details, do you have time
>  to complete the proposal on that part?
> 
>  Also, Jun mentioned one caveat related to KIP-250 on the KIP-232
>  discussion thread that Dong is working on; I figured it is worth pointing
>  out here with a tentative solution:
> 
> 
>  ```
>  Currently, if the producer uses acks=-1, a write will only succeed if
>  the write is received by all in-sync replicas (i.e., committed). This
>  is true even when min.isr is set since we first wait for a message to
>  be committed and then check the min.isr requirement. KIP-250 may
>  change that, but we can discuss the implication there.
>  ```
> 
>  The caveat is that, if we change the acking semantics in KIP-250 so that
>  we only require {min.isr} replicas to acknowledge a produce, then the
>  above scenario has a problem: imagine you have {A, B, C} replicas of a
>  partition with A as the leader, all in the ISR list, and min.isr is 2.
> 
>  1. Say there is a network partition and both A and B are fenced off. C
>  is elected as the new leader and shrinks its ISR list to only {C}; from
>  A's point of view, it does not know it has become a "ghost" and is no
>  longer the leader; all it does is shrink its ISR list to {A, B}.
> 
>  2. At this time, any new writes with ack=-1 to C will not be acked,
>  since from C's pov there is only one replica. This is correct.
> 
>  3. However, any writes that are sent to A (NOTE: this is totally
>  possible, since producers only refresh metadata periodically;
>  additionally, if they happen to ask A or B they will get the stale
>  metadata that A is still the leader) can still be acked: since A thinks
>  the ISR list is {A, B}, as long as B has replicated the message, A can
>  ack the produce.
> 
>  This is not correct behavior, 

Re: [VOTE] KIP-86: Configurable SASL callback handlers

2018-01-18 Thread tao xiao
+1 (non-binding)

On Fri, 19 Jan 2018 at 00:47 Rajini Sivaram  wrote:

> Hi all,
>
> I would like to restart the vote for KIP-86:
>https://cwiki.apache.org/confluence/display/KAFKA/KIP-86
> %3A+Configurable+SASL+callback+handlers
>
> The KIP makes callback handlers for SASL configurable, to make it simpler to
> integrate with a custom authentication database or custom authentication
> servers. This is particularly useful for SASL/PLAIN, where the
> implementation in Kafka based on credentials stored in jaas.conf is not
> suitable for production use. It is also useful for SCRAM in environments
> where ZooKeeper is not secure. The KIP has also been updated to simplify
> addition of new SASL mechanisms by making the Login class configurable.
>
> The PR for the KIP has been rebased and updated (
> https://github.com/apache/kafka/pull/2022)
>
> Thank you,
>
> Rajini
>
>
>
> On Mon, Dec 11, 2017 at 2:22 PM, Ted Yu  wrote:
>
> > +1
> >  Original message From: Tom Bentley <
> t.j.bent...@gmail.com>
> > Date: 12/11/17  6:06 AM  (GMT-08:00) To: dev@kafka.apache.org Subject:
> > Re: [VOTE] KIP-86: Configurable SASL callback handlers
> > +1 (non-binding)
> >
> > On 5 May 2017 at 11:57, Mickael Maison  wrote:
> >
> > > Thanks for the KIP Rajini, this will significantly simplify providing
> > > custom credential providers
> > > +1 (non binding)
> > >
> > > On Wed, May 3, 2017 at 8:25 AM, Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > wrote:
> > > > Can we have some more reviews or votes for this KIP to include in
> > > 0.11.0.0?
> > > > It is not a breaking change and the code is ready for integration, so
> > it
> > > > will be good to get it in if possible.
> > > >
> > > > Ismael/Jun, since you had reviewed the KIP earlier, can you let me
> know
> > > if
> > > > I can do anything more to get your votes?
> > > >
> > > >
> > > > Thank you,
> > > >
> > > > Rajini
> > > >
> > > >
> > > > On Mon, Apr 10, 2017 at 12:18 PM, Edoardo Comar 
> > > wrote:
> > > >
> > > >> +1 (non binding)
> > > >> many thanks Rajini !
> > > >>
> > > >> --
> > > >> Edoardo Comar
> > > >> IBM MessageHub
> > > >> eco...@uk.ibm.com
> > > >> IBM UK Ltd, Hursley Park, SO21 2JN
> > > >>
> > > >> IBM United Kingdom Limited Registered in England and Wales with
> number
> > > >> 741598 Registered office: PO Box 41, North Harbour, Portsmouth,
> Hants.
> > > PO6
> > > >> 3AU
> > > >>
> > > >>
> > > >>
> > > >> From:   Rajini Sivaram 
> > > >> To: dev@kafka.apache.org
> > > >> Date:   06/04/2017 10:53
> > > >> Subject:[VOTE] KIP-86: Configurable SASL callback handlers
> > > >>
> > > >>
> > > >>
> > > >> Hi all,
> > > >>
> > > >> I would like to start the voting process for KIP-86:
> > > >>
> > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > >> 86%3A+Configurable+SASL+callback+handlers
> > > >>
> > > >>
> > > >> The KIP makes callback handlers for SASL configurable, to make it
> > > >> simpler to integrate with a custom authentication database or custom
> > > >> authentication servers. This is particularly useful for SASL/PLAIN,
> > > >> where the implementation in Kafka based on credentials stored in
> > > >> jaas.conf is not suitable for production use. It is also useful for
> > > >> SCRAM in environments where ZooKeeper is not secure.
> > > >>
> > > >> Thank you...
> > > >>
> > > >> Regards,
> > > >>
> > > >> Rajini
> > > >>
> > > >>
> > > >>
> > > >> Unless stated otherwise above:
> > > >> IBM United Kingdom Limited - Registered in England and Wales with
> > number
> > > >> 741598.
> > > >> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire
> PO6
> > > 3AU
> > > >>
> > >
> >
>
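
For context, a minimal sketch of a pluggable server callback handler against
the AuthenticateCallbackHandler interface as it eventually shipped with this
KIP (the class name and in-memory credential map are hypothetical; a real
handler would query an authentication database or server):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.login.AppConfigurationEntry;
import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
import org.apache.kafka.common.security.plain.PlainAuthenticateCallback;

public class MapBackedPlainCallbackHandler implements AuthenticateCallbackHandler {
    // stand-in for a custom authentication database or server
    private final Map<String, String> credentials =
            Collections.singletonMap("alice", "alice-secret");

    @Override
    public void configure(Map<String, ?> configs, String saslMechanism,
                          List<AppConfigurationEntry> jaasConfigEntries) {
        // a real handler would wire up its external credential store here
    }

    @Override
    public void handle(Callback[] callbacks) {
        String username = null;
        for (Callback cb : callbacks) {
            if (cb instanceof NameCallback) {
                username = ((NameCallback) cb).getDefaultName();
            } else if (cb instanceof PlainAuthenticateCallback) {
                PlainAuthenticateCallback plain = (PlainAuthenticateCallback) cb;
                String expected = credentials.get(username);
                plain.authenticated(expected != null
                        && expected.equals(new String(plain.password())));
            }
        }
    }

    @Override
    public void close() {}
}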


[jira] [Updated] (KAFKA-4129) Processor throw exception when getting channel remote address after closing the channel

2016-09-06 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-4129:

Status: Patch Available  (was: Open)

> Processor throw exception when getting channel remote address after closing 
> the channel
> ---
>
> Key: KAFKA-4129
> URL: https://issues.apache.org/jira/browse/KAFKA-4129
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.1
>Reporter: TAO XIAO
>Assignee: TAO XIAO
>
> In the Processor {{configureNewConnections()}} catch block, the code 
> explicitly closes {{channel}} before calling {{channel.getRemoteAddress}}, 
> which results in {{ClosedChannelException}} being thrown. This is because of 
> the Java implementation: no remote address can be returned after the channel 
> is closed.
> {code}
> case NonFatal(e) =>
>  // need to close the channel here to avoid a socket leak.
>  close(channel)
>  error(s"Processor $id closed connection from 
> ${channel.getRemoteAddress}", e)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4129) Processor throw exception when getting channel remote address after closing the channel

2016-09-06 Thread TAO XIAO (JIRA)
TAO XIAO created KAFKA-4129:
---

 Summary: Processor throw exception when getting channel remote 
address after closing the channel
 Key: KAFKA-4129
 URL: https://issues.apache.org/jira/browse/KAFKA-4129
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.10.0.1
Reporter: TAO XIAO
Assignee: TAO XIAO


In the Processor {{configureNewConnections()}} catch block, the code explicitly 
closes {{channel}} before calling {{channel.getRemoteAddress}}, which results in 
{{ClosedChannelException}} being thrown. This is because of the Java 
implementation: no remote address can be returned after the channel is closed.

{code}
case NonFatal(e) =>
 // need to close the channel here to avoid a socket leak.
 close(channel)
 error(s"Processor $id closed connection from 
${channel.getRemoteAddress}", e)
{code}
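
One possible fix, sketched here as an illustration (not a committed patch): 
read the remote address before closing the channel, since it is unavailable 
afterwards.

{code}
case NonFatal(e) =>
  // capture the address while the channel is still open;
  // getRemoteAddress throws ClosedChannelException once it is closed
  val remoteAddress = channel.getRemoteAddress
  // need to close the channel here to avoid a socket leak.
  close(channel)
  error(s"Processor $id closed connection from $remoteAddress", e)
{code}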



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3950) kafka mirror maker tool is not respecting whitelist option

2016-07-12 Thread TAO XIAO (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372982#comment-15372982
 ] 

TAO XIAO commented on KAFKA-3950:
-

As only the topics that the user has permission to read are returned, I think 
we can consider this an authorization success. And we can throw an exception if 
none of the topics matched.

> kafka mirror maker tool is not respecting whitelist option
> --
>
> Key: KAFKA-3950
> URL: https://issues.apache.org/jira/browse/KAFKA-3950
> Project: Kafka
>  Issue Type: Bug
>Reporter: Raghav Kumar Gautam
>Assignee: Manikumar Reddy
>Priority: Critical
>
> A mirror maker launched like this:
> {code}
> /usr/bin/kinit -k -t /home/kfktest/hadoopqa/keytabs/kfktest.headless.keytab 
> kfkt...@example.com
> JAVA_HOME=/usr/jdk64/jdk1.8.0_77 JMX_PORT=9112 
> /usr/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_consumer_12.properties
>  --producer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_producer_12.properties
>  --new.consumer --whitelist="test.*" >>  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/mirror_maker_12.log
>  2>&1 & echo pid:$! >  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/entity_12_pid
> {code}
> Lead to TopicAuthorizationException:
> {code}
> WARN Error while fetching metadata with correlation id 44 : 
> {__consumer_offsets=TOPIC_AUTHORIZATION_FAILED} 
> (org.apache.kafka.clients.NetworkClient)
> [2016-06-20 13:24:49,983] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [__consumer_offsets]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3950) kafka mirror maker tool is not respecting whitelist option

2016-07-12 Thread TAO XIAO (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372705#comment-15372705
 ] 

TAO XIAO commented on KAFKA-3950:
-

How about we still keep the filtering on the client side and fix the broken piece?

Here is the troublemaker, 
[ConsumerCoordinator.java|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.java#L150],
 which fails the validation. Can we remove the check below? 

{code}
if (!cluster.unauthorizedTopics().isEmpty())
throw new TopicAuthorizationException(new 
HashSet<>(cluster.unauthorizedTopics()));
{code}

As the authorization check has already been done on the server side when 
fetching metadata, all topics stored in {{cluster.topics()}} should be the ones 
the consumer has permission to read. We can simply return the ones that match 
the pattern to the end user.
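
If we do that, the client-side filtering could then be a simple pattern match
over the authorized topics; a hedged sketch (hypothetical helper, not the
actual consumer code):

{code}
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicPatternFilter {
    // unauthorized topics were already excluded by the broker, so we only
    // keep the authorized topics that the subscription pattern matches
    public static List<String> matching(List<String> authorizedTopics, Pattern pattern) {
        return authorizedTopics.stream()
                .filter(t -> pattern.matcher(t).matches())
                .collect(Collectors.toList());
    }
}
{code}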

> kafka mirror maker tool is not respecting whitelist option
> --
>
> Key: KAFKA-3950
> URL: https://issues.apache.org/jira/browse/KAFKA-3950
> Project: Kafka
>  Issue Type: Bug
>Reporter: Raghav Kumar Gautam
>Assignee: Manikumar Reddy
>Priority: Critical
>
> A mirror maker launched like this:
> {code}
> /usr/bin/kinit -k -t /home/kfktest/hadoopqa/keytabs/kfktest.headless.keytab 
> kfkt...@example.com
> JAVA_HOME=/usr/jdk64/jdk1.8.0_77 JMX_PORT=9112 
> /usr/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_consumer_12.properties
>  --producer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_producer_12.properties
>  --new.consumer --whitelist="test.*" >>  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/mirror_maker_12.log
>  2>&1 & echo pid:$! >  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/entity_12_pid
> {code}
> Lead to TopicAuthorizationException:
> {code}
> WARN Error while fetching metadata with correlation id 44 : 
> {__consumer_offsets=TOPIC_AUTHORIZATION_FAILED} 
> (org.apache.kafka.clients.NetworkClient)
> [2016-06-20 13:24:49,983] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [__consumer_offsets]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-3950) kafka mirror maker tool is not respecting whitelist option

2016-07-12 Thread TAO XIAO (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372705#comment-15372705
 ] 

TAO XIAO edited comment on KAFKA-3950 at 7/12/16 11:18 AM:
---

How about we still keep the filtering on the client side and fix the broken piece?

Here is the troublemaker, 
[ConsumerCoordinator.java|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.java#L150],
 which fails the validation. Can we remove the check below? 

{code}
if (!cluster.unauthorizedTopics().isEmpty())
throw new TopicAuthorizationException(new 
HashSet<>(cluster.unauthorizedTopics()));
{code}

As the authorization check has already been done on the server side when 
fetching metadata, all topics stored in {{cluster.topics()}} should be the ones 
the consumer has permission to read. We can simply return the ones that match 
the pattern to the end user.


was (Author: xiaotao183):
How about we still keep the filtering on client side and fix the broken piece?

Here is the trouble maker 
[ConsumerCoordinator.java|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.java#L150]
 that fails the validation. Can we remove below check? 

{code}
if (!cluster.unauthorizedTopics().isEmpty())
throw new TopicAuthorizationException(new 
HashSet<>(cluster.unauthorizedTopics()));
{code}

As the authorization check has been done on server-side already when fetching 
metadata all topics stored in {code}cluster.topics(){code} should the ones the 
consumer has permission to read. We can simply return them that matches the 
pattern to end user

> kafka mirror maker tool is not respecting whitelist option
> --
>
> Key: KAFKA-3950
> URL: https://issues.apache.org/jira/browse/KAFKA-3950
> Project: Kafka
>  Issue Type: Bug
>Reporter: Raghav Kumar Gautam
>Assignee: Manikumar Reddy
>Priority: Critical
>
> A mirror maker launched like this:
> {code}
> /usr/bin/kinit -k -t /home/kfktest/hadoopqa/keytabs/kfktest.headless.keytab 
> kfkt...@example.com
> JAVA_HOME=/usr/jdk64/jdk1.8.0_77 JMX_PORT=9112 
> /usr/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_consumer_12.properties
>  --producer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_producer_12.properties
>  --new.consumer --whitelist="test.*" >>  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/mirror_maker_12.log
>  2>&1 & echo pid:$! >  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/entity_12_pid
> {code}
> Lead to TopicAuthorizationException:
> {code}
> WARN Error while fetching metadata with correlation id 44 : 
> {__consumer_offsets=TOPIC_AUTHORIZATION_FAILED} 
> (org.apache.kafka.clients.NetworkClient)
> [2016-06-20 13:24:49,983] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [__consumer_offsets]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-62: Allow consumer to send heartbeats from a background thread

2016-06-16 Thread tao xiao
+1

On Fri, 17 Jun 2016 at 09:03 Harsha  wrote:

> +1 (binding)
> Thanks,
> Harsha
>
> On Thu, Jun 16, 2016, at 05:46 PM, Henry Cai wrote:
> > +1
> >
> > On Thu, Jun 16, 2016 at 3:46 PM, Ismael Juma  wrote:
> >
> > > +1 (binding)
> > >
> > > On Fri, Jun 17, 2016 at 12:44 AM, Guozhang Wang 
> > > wrote:
> > >
> > > > +1.
> > > >
> > > > On Thu, Jun 16, 2016 at 11:44 AM, Jason Gustafson <
> ja...@confluent.io>
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I'd like to open the vote for KIP-62. This proposal attempts to
> address
> > > > one
> > > > > of the recurring usability problems that users of the new consumer
> have
> > > > > faced with as little impact as possible. You can read the full
> details
> > > > > here:
> > > > >
> > > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread
> > > > > .
> > > > >
> > > > > After some discussion on this list, I think we were in agreement
> that
> > > > this
> > > > > change addresses a major part of the problem and we've left the
> door
> > > open
> > > > > for further improvements, such as adding a heartbeat() API or a
> > > > separately
> > > > > configured rebalance timeout. Thanks in advance to everyone who
> helped
> > > > > review the proposal.
> > > > >
> > > > > -Jason
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
>
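
For reference, a sketch of the consumer settings this KIP decoupled, as they
eventually shipped in 0.10.1 (values here are illustrative, not
recommendations):

# consumer configuration (sketch)
session.timeout.ms=10000        # liveness, enforced via background heartbeats
max.poll.interval.ms=300000     # max allowed gap between poll() calls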


[jira] [Updated] (KAFKA-3787) Preserve message timestamp in mirror maker

2016-06-03 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-3787:

Status: Patch Available  (was: Open)

> Preserve message timestamp in mirror maker
> --
>
> Key: KAFKA-3787
> URL: https://issues.apache.org/jira/browse/KAFKA-3787
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.0.0
>Reporter: TAO XIAO
>Assignee: TAO XIAO
>
> The timestamp of messages consumed by mirror maker is not preserved when 
> sending to the target cluster. The correct behavior is to keep the create 
> timestamp the same in both source and target clusters.
> Here is the KIP-32 that illustrates the correct behavior
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+timestamps+to+Kafka+message#KIP-32-AddtimestampstoKafkamessage-Mirrormakercaseindetail



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3787) Preserve message timestamp in mirror maker

2016-06-03 Thread TAO XIAO (JIRA)
TAO XIAO created KAFKA-3787:
---

 Summary: Preserve message timestamp in mirror maker
 Key: KAFKA-3787
 URL: https://issues.apache.org/jira/browse/KAFKA-3787
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 0.10.0.0
Reporter: TAO XIAO
Assignee: TAO XIAO


The timestamp of messages consumed by mirror maker is not preserved when 
sending to the target cluster. The correct behavior is to keep the create 
timestamp the same in both source and target clusters.

Here is the KIP-32 that illustrates the correct behavior
https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+timestamps+to+Kafka+message#KIP-32-AddtimestampstoKafkamessage-Mirrormakercaseindetail
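
A minimal sketch of the intended behavior (illustrative only, not the actual
MirrorMaker patch): copy the consumed record's timestamp onto the outgoing
record.

{code}
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TimestampPreservingMirror {
    // forward the source CreateTime; partition is left null so the target
    // cluster's partitioner decides placement
    public static ProducerRecord<byte[], byte[]> mirrored(ConsumerRecord<byte[], byte[]> in) {
        return new ProducerRecord<>(in.topic(), null, in.timestamp(), in.key(), in.value());
    }
}
{code}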



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3163) KIP-33 - Add a time based log index to Kafka

2016-05-12 Thread TAO XIAO (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15282310#comment-15282310
 ] 

TAO XIAO commented on KAFKA-3163:
-

[~becket_qin] Thanks. Does the KIP doc in the wiki reflect the latest changes? 
In the use-case discussion table it compares a CreateTime index with a 
LogAppendTime index. But as far as I know, KIP-32 supports only one timestamp 
configuration: either CreateTime or LogAppendTime. If CreateTime is chosen, do 
we still have two different time index files?

> KIP-33 - Add a time based log index to Kafka
> 
>
> Key: KAFKA-3163
> URL: https://issues.apache.org/jira/browse/KAFKA-3163
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
>
> This ticket is associated with KIP-33.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3163) KIP-33 - Add a time based log index to Kafka

2016-05-12 Thread TAO XIAO (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15282271#comment-15282271
 ] 

TAO XIAO commented on KAFKA-3163:
-

Hi,

Do we have this KIP in the upcoming 0.10.0.0 release?

> KIP-33 - Add a time based log index to Kafka
> 
>
> Key: KAFKA-3163
> URL: https://issues.apache.org/jira/browse/KAFKA-3163
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
>
> This ticket is associated with KIP-33.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: reading the consumer offsets topic

2016-05-09 Thread tao xiao
You need to enable internal topics in consumer.properties:

exclude.internal.topics=false

On Mon, 9 May 2016 at 12:42 Cliff Rhyne <crh...@signal.co> wrote:

> Thanks Todd and Tao.  I've tried those tricks but no luck.
>
> Just to add more info, this is the __consumer_offsets specific config that
> is shown by the topic describe command:
>
>
> Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=uncompressed
>
> On Mon, May 9, 2016 at 1:16 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > You can try this
> >
> > bin/kafka-console-consumer.sh --consumer.config
> > config/consumer.properties --from-beginning
> > --topic __consumer_offsets --zookeeper localhost:2181 --formatter
> > "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter"
> >
> > On Mon, 9 May 2016 at 09:40 Todd Palino <tpal...@gmail.com> wrote:
> >
> > > The GroupMetadataManager one should be working for you with 0.9. I
> don’t
> > > have a 0.9 KCC set up at the moment, so I’m using an 0.8 version where
> > it’s
> > > different (it’s the other class for me). The only thing I can offer now
> > is
> > > did you put quotes around the arg to --formatter so you don’t get weird
> > > shell interference?
> > >
> > > -Todd
> > >
> > >
> > > On Mon, May 9, 2016 at 8:18 AM, Cliff Rhyne <crh...@signal.co> wrote:
> > >
> > > > Thanks, Todd.  It's still not working unfortunately.
> > > >
> > > > This results in nothing getting printed to the console and requires
> > kill
> > > -9
> > > > in another window to stop (ctrl-c doesn't work):
> > > >
> > > > /kafka-console-consumer.sh --bootstrap-server localhost:9092
> > --zookeeper
> > > >  --topic __consumer_offsets --formatter
> > > > kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter
> > > >
> > > > This results in a stack trace because it can't find the class:
> > > >
> > > > ./kafka-console-consumer.sh --bootstrap-server localhost:9092
> > --zookeeper
> > > >  --topic __consumer_offsets --formatter
> > > > kafka.server.OffsetManager\$OffsetsMessageFormatter
> > > >
> > > > Exception in thread "main" java.lang.ClassNotFoundException:
> > > > kafka.server.OffsetManager$OffsetsMessageFormatter
> > > >
> > > >
> > > > I'm on 0.9.0.1. "broker-list" is invalid and zookeeper is required
> > > > regardless of the bootstrap-server parameter.
> > > >
> > > >
> > > > Thanks,
> > > >
> > > > Cliff
> > > >
> > > > On Sun, May 8, 2016 at 7:35 PM, Todd Palino <tpal...@gmail.com>
> wrote:
> > > >
> > > > > It looks like you’re just missing the proper message formatter. Of
> > > > course,
> > > > > that largely depends on your version of the broker. Try:
> > > > >
> > > > > ./kafka-console-consumer.sh --broker-list localhost:9092 --topic
> > > > > __consumer_offsets
> > > > > --formatter
> > > > kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter
> > > > >
> > > > >
> > > > > If for some reason that doesn’t work, you can try
> > > > > "kafka.server.OffsetManager\$OffsetsMessageFormatter” instead.
> > > > >
> > > > > -Todd
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Sun, May 8, 2016 at 1:26 PM, Cliff Rhyne <crh...@signal.co>
> > wrote:
> > > > >
> > > > > > I'm having difficulty reading the consumer offsets topic from the
> > > > command
> > > > > > line.  I try the following but it doesn't seem to work (along
> with
> > a
> > > > few
> > > > > > related variants including specifying the zookeeper hosts):
> > > > > >
> > > > > > ./kafka-console-consumer.sh --broker-list localhost:9092 --topic
> > > > > > __consumer_offsets
> > > > > >
> > > > > > Is there a special way to read the consumer offsets topic?
> > > > > >
> > > > > > Thanks,
> > > > > > Cliff
> > > > > >
> > > > > > --
> > > > > > Cliff Rhyne
> > > > > &

Re: reading the consumer offsets topic

2016-05-09 Thread tao xiao
You can try this

bin/kafka-console-consumer.sh --consumer.config
config/consumer.properties --from-beginning
--topic __consumer_offsets --zookeeper localhost:2181 --formatter
"kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter"

On Mon, 9 May 2016 at 09:40 Todd Palino  wrote:

> The GroupMetadataManager one should be working for you with 0.9. I don’t
> have a 0.9 KCC set up at the moment, so I’m using an 0.8 version where it’s
> different (it’s the other class for me). The only thing I can offer now is
> did you put quotes around the arg to --formatter so you don’t get weird
> shell interference?
>
> -Todd
>
>
> On Mon, May 9, 2016 at 8:18 AM, Cliff Rhyne  wrote:
>
> > Thanks, Todd.  It's still not working unfortunately.
> >
> > This results in nothing getting printed to the console and requires kill
> -9
> > in another window to stop (ctrl-c doesn't work):
> >
> > /kafka-console-consumer.sh --bootstrap-server localhost:9092 --zookeeper
> >  --topic __consumer_offsets --formatter
> > kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter
> >
> > This results in a stack trace because it can't find the class:
> >
> > ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --zookeeper
> >  --topic __consumer_offsets --formatter
> > kafka.server.OffsetManager\$OffsetsMessageFormatter
> >
> > Exception in thread "main" java.lang.ClassNotFoundException:
> > kafka.server.OffsetManager$OffsetsMessageFormatter
> >
> >
> > I'm on 0.9.0.1. "broker-list" is invalid and zookeeper is required
> > regardless of the bootstrap-server parameter.
> >
> >
> > Thanks,
> >
> > Cliff
> >
> > On Sun, May 8, 2016 at 7:35 PM, Todd Palino  wrote:
> >
> > > It looks like you’re just missing the proper message formatter. Of
> > course,
> > > that largely depends on your version of the broker. Try:
> > >
> > > ./kafka-console-consumer.sh --broker-list localhost:9092 --topic
> > > __consumer_offsets
> > > --formatter
> > kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter
> > >
> > >
> > > If for some reason that doesn’t work, you can try
> > > "kafka.server.OffsetManager\$OffsetsMessageFormatter” instead.
> > >
> > > -Todd
> > >
> > >
> > >
> > >
> > > On Sun, May 8, 2016 at 1:26 PM, Cliff Rhyne  wrote:
> > >
> > > > I'm having difficulty reading the consumer offsets topic from the
> > command
> > > > line.  I try the following but it doesn't seem to work (along with a
> > few
> > > > related variants including specifying the zookeeper hosts):
> > > >
> > > > ./kafka-console-consumer.sh --broker-list localhost:9092 --topic
> > > > __consumer_offsets
> > > >
> > > > Is there a special way to read the consumer offsets topic?
> > > >
> > > > Thanks,
> > > > Cliff
> > > >
> > > > --
> > > > Cliff Rhyne
> > > > Software Engineering Manager
> > > > e: crh...@signal.co
> > > > signal.co
> > > > 
> > > >
> > > > Cut Through the Noise
> > > >
> > > > This e-mail and any files transmitted with it are for the sole use of
> > the
> > > > intended recipient(s) and may contain confidential and privileged
> > > > information. Any unauthorized use of this email is strictly
> prohibited.
> > > > ©2016 Signal. All rights reserved.
> > > >
> > >
> > >
> > >
> > > --
> > > *—-*
> > > *Todd Palino*
> > > Staff Site Reliability Engineer
> > > Data Infrastructure Streaming
> > >
> > >
> > >
> > > linkedin.com/in/toddpalino
> > >
> >
> >
> >
> > --
> > Cliff Rhyne
> > Software Engineering Manager
> > e: crh...@signal.co
> > signal.co
> > 
> >
> > Cut Through the Noise
> >
> > This e-mail and any files transmitted with it are for the sole use of the
> > intended recipient(s) and may contain confidential and privileged
> > information. Any unauthorized use of this email is strictly prohibited.
> > ©2016 Signal. All rights reserved.
> >
>
>
>
> --
> *—-*
> *Todd Palino*
> Staff Site Reliability Engineer
> Data Infrastructure Streaming
>
>
>
> linkedin.com/in/toddpalino
>


Re: KAFKA-3112

2016-05-06 Thread tao xiao
KAFKA-2657 is unresolved so you can safely assume it hasn't been fixed yet.

On Fri, 6 May 2016 at 07:38 Raj Tanneru <rtann...@fanatics.com> wrote:

> Yeah, it is a duplicate of KAFKA-2657. The question is how to check/know
> if it was merged into the 0.9.0.1 release. What options do I have if I
> need this fix? How can I get a patch for this on 0.8.2.1?
>
> Sent from my iPhone
>
> > On May 6, 2016, at 12:06 AM, tao xiao <xiaotao...@gmail.com> wrote:
> >
> > It says this is a duplicate: KAFKA-3112 duplicates
> > https://issues.apache.org/jira/browse/KAFKA-2657.
> >
> >> On Thu, 5 May 2016 at 22:13 Raj Tanneru <rtann...@fanatics.com> wrote:
> >>
> >>
> >> Hi All,
> >> Does anyone know if KAFKA-3112 is merged to 0.9.0.1? Is there a place to
> >> check which version has this fix? Jira doesn’t show fix versions.
> >>
> >> https://issues.apache.org/jira/browse/KAFKA-3112
> >>
> >>
> >> Thanks,
> >> Raj Tanneru
> >> Information contained in this e-mail message is confidential. This
> e-mail
> >> message is intended only for the personal use of the recipient(s) named
> >> above. If you are not an intended recipient, do not read, distribute or
> >> reproduce this transmission (including any attachments). If you have
> >> received this email in error, please immediately notify the sender by
> email
> >> reply and delete the original message.
> >>
> Information contained in this e-mail message is confidential. This e-mail
> message is intended only for the personal use of the recipient(s) named
> above. If you are not an intended recipient, do not read, distribute or
> reproduce this transmission (including any attachments). If you have
> received this email in error, please immediately notify the sender by email
> reply and delete the original message.
>


Re: KAFKA-3112

2016-05-06 Thread tao xiao
It says this is a duplicate: KAFKA-3112 duplicates
https://issues.apache.org/jira/browse/KAFKA-2657.

On Thu, 5 May 2016 at 22:13 Raj Tanneru  wrote:

>
> Hi All,
> Does anyone know if KAFKA-3112 is merged to 0.9.0.1? Is there a place to
> check which version has this fix? Jira doesn’t show fix versions.
>
> https://issues.apache.org/jira/browse/KAFKA-3112
>
>
> Thanks,
> Raj Tanneru
> Information contained in this e-mail message is confidential. This e-mail
> message is intended only for the personal use of the recipient(s) named
> above. If you are not an intended recipient, do not read, distribute or
> reproduce this transmission (including any attachments). If you have
> received this email in error, please immediately notify the sender by email
> reply and delete the original message.
>


[jira] [Commented] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-19 Thread TAO XIAO (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198591#comment-15198591
 ] 

TAO XIAO commented on KAFKA-3409:
-

This can also happen when the producer is slow and its buffer memory is full, 
which blocks the consumer from polling, so the session times out. The next time 
the consumer calls commit or poll, it has already been kicked out of the group. 
In this case it has no chance to call commit in onPartitionsRevoked, and the 
consumed offsets are lost, which subsequently introduces message duplication.
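
A minimal sketch of the suggested recovery (illustrative; not what mirror
maker does today):

{code}
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SafeCommit {
    // swallow CommitFailedException so the next poll() rejoins the group
    // instead of the thread dying; duplicates are still possible because the
    // offsets for this generation were never committed
    public static void commitOrRejoin(KafkaConsumer<byte[], byte[]> consumer) {
        try {
            consumer.commitSync();
        } catch (CommitFailedException e) {
            System.err.println("Commit failed due to rebalance; will rejoin on next poll: " + e);
        }
    }
}
{code}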

> Mirror maker hangs indefinitely due to commit 
> --
>
> Key: KAFKA-3409
> URL: https://issues.apache.org/jira/browse/KAFKA-3409
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.1
> Environment: Kafka 0.9.0.1
>Reporter: TAO XIAO
>
> Mirror maker hangs indefinitely upon receiving CommitFailedException. I 
> believe this is because CommitFailedException is not caught by mirror maker, 
> so mirror maker has no way to recover from it.
> A better approach would be to catch the exception and rejoin the group. Here 
> is the stack trace:
> [2016-03-15 09:34:36,463] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2016-03-15 09:34:36,463] FATAL [mirrormaker-thread-3] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed due to group rebalance
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:358)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:968)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:548)
> at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:340)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.maybeFlushAndCommitOffsets(MirrorMaker.scala:438)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:399)
> [2016-03-15 09:34:36,463] INFO [mirrormaker-thread-3] Flushing producer. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,464] INFO [mirrormaker-thread-3] Committing consumer 
> offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,477] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-15 Thread TAO XIAO (JIRA)
TAO XIAO created KAFKA-3409:
---

 Summary: Mirror maker hangs indefinitely due to commit 
 Key: KAFKA-3409
 URL: https://issues.apache.org/jira/browse/KAFKA-3409
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 0.9.0.1
 Environment: Kafka 0.9.0.1
Reporter: TAO XIAO


Mirror maker hangs indefinitely upon receiving CommitFailedException. I believe 
this is because CommitFailedException is not caught by mirror maker, so mirror 
maker has no way to recover from it.

A better approach would be to catch the exception and rejoin the group. Here is 
the stack trace:

[2016-03-15 09:34:36,463] ERROR Error UNKNOWN_MEMBER_ID occurred while 
committing offsets for group x 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2016-03-15 09:34:36,463] FATAL [mirrormaker-thread-3] Mirror maker thread 
failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
completed due to group rebalance
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:358)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:968)
at 
kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:548)
at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:340)
at 
kafka.tools.MirrorMaker$MirrorMakerThread.maybeFlushAndCommitOffsets(MirrorMaker.scala:438)
at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:399)
[2016-03-15 09:34:36,463] INFO [mirrormaker-thread-3] Flushing producer. 
(kafka.tools.MirrorMaker$MirrorMakerThread)
[2016-03-15 09:34:36,464] INFO [mirrormaker-thread-3] Committing consumer 
offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
[2016-03-15 09:34:36,477] ERROR Error UNKNOWN_MEMBER_ID occurred while 
committing offsets for group x 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3355) GetOffsetShell command doesn't work with SASL enabled Kafka

2016-03-08 Thread TAO XIAO (JIRA)
TAO XIAO created KAFKA-3355:
---

 Summary: GetOffsetShell command doesn't work with SASL enabled 
Kafka
 Key: KAFKA-3355
 URL: https://issues.apache.org/jira/browse/KAFKA-3355
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 0.9.0.1
 Environment: Kafka 0.9.0.1
Reporter: TAO XIAO


I found that GetOffsetShell doesn't work with SASL-enabled Kafka. I believe 
this is due to the old producer being used in GetOffsetShell.

Kafka version 0.9.0.1

Exception

% bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 
localhost:9092 --topic test --time -1
[2016-03-04 21:43:56,597] INFO Verifying properties 
(kafka.utils.VerifiableProperties)
[2016-03-04 21:43:56,613] INFO Property client.id is overridden to 
GetOffsetShell (kafka.utils.VerifiableProperties)
[2016-03-04 21:43:56,613] INFO Property metadata.broker.list is overridden to 
localhost:9092 (kafka.utils.VerifiableProperties)
[2016-03-04 21:43:56,613] INFO Property request.timeout.ms is overridden to 
1000 (kafka.utils.VerifiableProperties)
[2016-03-04 21:43:56,674] INFO Fetching metadata from broker 
BrokerEndPoint(0,localhost,9092) with correlation id 0 for 1 topic(s) Set(test) 
(kafka.client.ClientUtils$)
[2016-03-04 21:43:56,689] INFO Connected to localhost:9092 for producing 
(kafka.producer.SyncProducer)
[2016-03-04 21:43:56,705] WARN Fetching topic metadata with correlation id 0 
for topics [Set(test)] from broker [BrokerEndPoint(0,localhost,9092)] failed 
(kafka.client.ClientUtils$)
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:498)
at java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:304)
at kafka.api.ApiUtils$.readShortString(ApiUtils.scala:36)
at kafka.cluster.BrokerEndPoint$.readFrom(BrokerEndPoint.scala:52)
at kafka.api.TopicMetadataResponse$$anonfun$1.apply(TopicMetadataResponse.scala:28)
at kafka.api.TopicMetadataResponse$$anonfun$1.apply(TopicMetadataResponse.scala:28)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.immutable.Range.foreach(Range.scala:166)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.api.TopicMetadataResponse$.readFrom(TopicMetadataResponse.scala:28)
at kafka.producer.SyncProducer.send(SyncProducer.scala:120)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
at kafka.tools.GetOffsetShell$.main(GetOffsetShell.scala:78)
at kafka.tools.GetOffsetShell.main(GetOffsetShell.scala)





Re: [VOTE] KIP-43: Kafka SASL enhancements

2016-03-08 Thread tao xiao
+1 (non-binding)

On Tue, 8 Mar 2016 at 05:33 Andrew Schofield <
andrew_schofield_j...@outlook.com> wrote:

> +1 (non-binding)
>
> 
> > From: ism...@juma.me.uk
> > Date: Mon, 7 Mar 2016 19:52:11 +
> > Subject: Re: [VOTE] KIP-43: Kafka SASL enhancements
> > To: dev@kafka.apache.org
> >
> > +1 (non-binding)
> >
> > On Thu, Mar 3, 2016 at 10:37 AM, Rajini Sivaram <
> > rajinisiva...@googlemail.com> wrote:
> >
> >> I would like to start the voting process for *KIP-43: Kafka SASL
> >> enhancements*. This KIP extends the SASL implementation in Kafka to
> support
> >> new SASL mechanisms to enable Kafka to be integrated with different
> >> authentication servers.
> >>
> >> The KIP is available here for reference:
> >>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-43:+Kafka+SASL+enhancements
> >>
> >> And here is a link to the discussion on the mailing list:
> >>
> >>
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201601.mbox/%3CCAOJcB39b9Vy7%3DZEM3tLw2zarCS4A_s-%2BU%2BC%3DuEcWs0712UaYrQ%40mail.gmail.com%3E
> >>
> >>
> >> Thank you...
> >>
> >> Regards,
> >>
> >> Rajini
> >>
>


[jira] [Commented] (KAFKA-3206) No reconciliation after partitioning

2016-02-04 Thread TAO XIAO (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133580#comment-15133580
 ] 

TAO XIAO commented on KAFKA-3206:
-

This is most likely due to unclean.leader.election.enable being set to true. It 
basically means any out-of-sync node can take over leadership and receive 
traffic. If you have unclean.leader.election.enable turned off, the node you 
brought up in step 5 will not become the leader, and therefore no more messages 
can be sent to this partition.

> No reconciliation after partitioning
> 
>
> Key: KAFKA-3206
> URL: https://issues.apache.org/jira/browse/KAFKA-3206
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Arkadiusz Firus
>
> Kafka topic partition could be in an inconsistent state where two nodes see 
> different partition state.
> Steps to reproduce the problem:
> 1. Create topic with one partition and replication factor 2
> 2. Stop one of the Kafka nodes where that partition resides
> 3. Send following messages to that topic:
> ABC
> BCD
> CDE
> 4. Stop the other Kafka node
> At this point neither node is running
> 5. Start the first Kafka node - this node has no records
> 6. Send the following records to the topic:
> 123
> 234
> 345
> 7. Start the other Kafka node
> The reconciliation should happen here but it does not.
> 8. When you read the topic you will see
> 123
> 234
> 345
> 9. When you stop the first node and read the topic you will see
> ABC
> BCD
> CDE 
> This means that the partition is in inconsistent state.
> If you have any questions please feel free to e-mail me.





Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-02-01 Thread tao xiao
I am the author of KIP-44. I hope my use case will add some value to this
discussion. The reason I raised KIP-44 is that I want to be able to
implement a custom security protocol that can fulfill the needs of my
company. As Ismael pointed out, KIP-43 now supports a pluggable way to
inject a custom security provider into SASL; I think that is enough to
cover my use case and to address the concerns raised in KIP-44.

Support for multiple security protocols simultaneously is not needed in my
use case and I don't foresee needing it, but that is my use case only;
there may be other use cases that do need it. If we want to support it in
the future, I would prefer to get it right in the first place, given that
the security protocol is an enum: if we stick to that implementation it
will be very hard to extend later, when we decide multiple security
protocols are needed.

Protocol negotiation is a very common pattern in the security domain. As
suggested in the Java SASL doc
http://docs.oracle.com/javase/7/docs/technotes/guides/security/sasl/sasl-refguide.html
the client first sends a packet to the server and the server responds with
the list of mechanisms it supports. This is very similar to SSL/TLS
negotiation.
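
To make that concrete, here is a minimal, runnable sketch of mechanism selection with the standard javax.security.sasl API (the advertised mechanism list, service name, host, and credentials are all illustrative assumptions, not Kafka's actual wire exchange):

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;

public class SaslNegotiationSketch {
    public static void main(String[] args) throws Exception {
        // Pretend the server's first response advertised these mechanisms.
        String[] advertised = {"PLAIN", "DIGEST-MD5"};

        CallbackHandler handler = (Callback[] callbacks) -> {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName("client");
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword("secret".toCharArray());
                }
            }
        };

        // createSaslClient returns a client for the first mechanism it can satisfy.
        SaslClient client = Sasl.createSaslClient(
                advertised, null, "kafka", "broker1.example.com", null, handler);
        System.out.println("Selected mechanism: " + client.getMechanismName());
    }
}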

On Tue, 2 Feb 2016 at 06:39 Ismael Juma  wrote:

> On Mon, Feb 1, 2016 at 7:04 PM, Gwen Shapira  wrote:
>
> > Looking at "existing solutions", it looks like Zookeeper allows plugging
> in
> > any SASL mechanism, but the server will only support one mechanism at a
> > time.
> >
>
> This was the original proposal from Rajini as that is enough for their
> needs.
>
>
> > If this is good enough for our use-case (do we actually need to support
> > multiple mechanisms at once?), it will simplify life a lot for us (
> > https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL
> )
> >
>
> The current thinking is that it would be useful to support multiple SASL
> mechanisms simultaneously. In the KIP meeting, Jun mentioned that companies
> sometimes support additional authentication mechanisms for partners, for
> example. It does make things more complex, as you say, so we need to be
> sure the complexity is worth it.
>
> Two more points:
>
> 1. It has been suggested that custom security protocol support is needed by
> some (KIP-44). Rajini enhanced KIP-43 so that a SASL mechanism with a
> custom provider can be used for this purpose instead. Given this, it seems
> a bit inconsistent and restrictive not to allow multiple SASL mechanisms
> simultaneously (we do allow SSL and SASL authentication simultaneously,
> after all).
>
> 2. The other option would be to support a single SASL mechanism
> simultaneously to start with and then extend this to multiple mechanisms
> simultaneously later (if and when needed). It seems like it would be harder
> to support the latter in the future if we go down this route, but maybe
> there are ways around this.
>
> Thoughts?
>
> Ismael
>


Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-29 Thread tao xiao
Sorry, just ignore my previous email. I saw the newly defined callback
interface in the KIP, which already takes this into account.

On Fri, 29 Jan 2016 at 18:08 tao xiao <xiaotao...@gmail.com> wrote:

> Hi Rajini,
>
> Do you consider exposing Subject to AuthCallback as well? It is useful for
> users building their own SASL mechanism so that we have control  where to
> put logon data in subject and how to manipulate in SASL callback
>
>
> On Fri, 29 Jan 2016 at 18:04 Rajini Sivaram <rajinisiva...@googlemail.com>
> wrote:
>
>> Following on from the KIP meeting on Tuesday, I have updated the KIP with
>> a
>> flow for negotiation of mechanisms to support multiple SASL mechanisms
>> within a broker. I have also added a configurable Login interface to
>> support custom mechanisms which require ticket refresh - requested by Tao
>> Xiao.
>>
>> I will work on updating the PR in KAFKA-3149 over the next few days since
>> it will be useful for review.
>>
>> All comments and suggestions are welcome.
>>
>>
>> On Thu, Jan 28, 2016 at 2:35 PM, tao xiao <xiaotao...@gmail.com> wrote:
>>
>> > Sounds like a good approach to add provider in login module. Would love
>> to
>> > see updates in the PR to reflect the changes in Login and
>> > AuthCallbackHandler.
>> >
>> > On Thu, 28 Jan 2016 at 19:31 Rajini Sivaram <
>> rajinisiva...@googlemail.com>
>> > wrote:
>> >
>> > > Tao,
>> > >
>> > > We currently add the security provider in a static initializer in our
>> > login
>> > > module. This ensures that the security provider is always installed
>> > before
>> > > Kafka creates SaslServer/SaslClient. As you say, it is also possible
>> to
>> > > insert code into your application to add security provider before
>> Kafka
>> > > clients are created. Since you can also configure the JDK to add new
>> > > security providers, I am not sure if there is value in adding more
>> > > configuration in Kafka to add security providers.
>> > >
>> > > > On Thu, Jan 28, 2016 at 2:25 AM, tao xiao <xiaotao...@gmail.com>
>> wrote:
>> > >
>> > > > The callback works for me as long as it has access to Subject and
>> > mechs.
>> > > > The other thing is how we can inject the customized security
>> provider
>> > via
>> > > > Security.addProvider()? If I want to implement my own SASL mech I
>> need
>> > to
>> > > > call the addProvider() before SASL.create so that my own
>> implementation
>> > > of
>> > > > SASLClient/Sever can be returned. Any thoughts on this? we can
>> either
>> > let
>> > > > users inject the provider in their logic code before creating a
>> > > > producer/consumer or Kafka does it for users
>> > > >
>> > > > On Thu, 28 Jan 2016 at 03:36 Rajini Sivaram <
>> > > rajinisiva...@googlemail.com>
>> > > > wrote:
>> > > >
>> > > > > Hi Tao,
>> > > > >
>> > > > > javax.security.auth.callback.CallbackHandler is the standard way in
>> > > > > which SASL clients and server obtain additional mechanism specific
>> > > > > input. AuthCallbackHandler simply extends this interface to propagate
>> > > > > configuration properties. I was going to provide SASL mechanism and
>> > > > > Subject to the callback handlers as well since the default handlers
>> > > > > use these.
>> > > > >
>> > > > > Your SaslServer/SaslClient implementation can obtain the Subject using
>> > > > > Subject.getSubject(AccessController.getContext()). But it will be good
>> > > > > to know if callback handlers would work for you - apart from standard
>> > > > > callbacks like PasswordCallback, you can define your own callbacks too
>> > > > > if you require.
>> > > > >
>> > > > >
>> > > > >
>> > > > > On Wed, Jan 27, 2016 at 3:59 PM, tao xiao <xiaotao...@gmail.com>
>> > > wrote:
>> > > > >
>> > > > > > Thanks Rajini. The other thing in my mind is that we should
>> find a

Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-29 Thread tao xiao
Hi Rajini,

Do you consider exposing Subject to AuthCallback as well? It is useful for
users building their own SASL mechanism so that we have control  where to
put logon data in subject and how to manipulate in SASL callback


On Fri, 29 Jan 2016 at 18:04 Rajini Sivaram <rajinisiva...@googlemail.com>
wrote:

> Following on from the KIP meeting on Tuesday, I have updated the KIP with a
> flow for negotiation of mechanisms to support multiple SASL mechanisms
> within a broker. I have also added a configurable Login interface to
> support custom mechanisms which require ticket refresh - requested by Tao
> Xiao.
>
> I will work on updating the PR in KAFKA-3149 over the next few days since
> it will be useful for review.
>
> All comments and suggestions are welcome.
>
>
> On Thu, Jan 28, 2016 at 2:35 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > Sounds like a good approach to add provider in login module. Would love
> to
> > see updates in the PR to reflect the changes in Login and
> > AuthCallbackHandler.
> >
> > On Thu, 28 Jan 2016 at 19:31 Rajini Sivaram <
> rajinisiva...@googlemail.com>
> > wrote:
> >
> > > Tao,
> > >
> > > We currently add the security provider in a static initializer in our
> > login
> > > module. This ensures that the security provider is always installed
> > before
> > > Kafka creates SaslServer/SaslClient. As you say, it is also possible to
> > > insert code into your application to add security provider before Kafka
> > > clients are created. Since you can also configure the JDK to add new
> > > security providers, I am not sure if there is value in adding more
> > > configuration in Kafka to add security providers.
> > >
> > > On Thu, Jan 28, 2016 at 2:25 AM, tao xiao <xiaotao...@gmail.com>
> wrote:
> > >
> > > > The callback works for me as long as it has access to Subject and
> > mechs.
> > > > The other thing is how we can inject the customized security provider
> > via
> > > > Security.addProvider()? If I want to implement my own SASL mech I
> need
> > to
> > > > call the addProvider() before SASL.create so that my own
> implementation
> > > of
> > > > SASLClient/Sever can be returned. Any thoughts on this? we can either
> > let
> > > > users inject the provider in their logic code before creating a
> > > > producer/consumer or Kafka does it for users
> > > >
> > > > On Thu, 28 Jan 2016 at 03:36 Rajini Sivaram <
> > > rajinisiva...@googlemail.com>
> > > > wrote:
> > > >
> > > > > Hi Tao,
> > > > >
> > > > > javax.security.auth.callback.CallbackHandler is the standard way in
> > > > > which SASL clients and server obtain additional mechanism specific
> > > > > input. AuthCallbackHandler simply extends this interface to propagate
> > > > > configuration properties. I was going to provide SASL mechanism and
> > > > > Subject to the callback handlers as well since the default handlers
> > > > > use these.
> > > > >
> > > > > Your SaslServer/SaslClient implementation can obtain the Subject using
> > > > > Subject.getSubject(AccessController.getContext()). But it will be good
> > > > > to know if callback handlers would work for you - apart from standard
> > > > > callbacks like PasswordCallback, you can define your own callbacks too
> > > > > if you require.
> > > > >
> > > > >
> > > > >
> > > > > On Wed, Jan 27, 2016 at 3:59 PM, tao xiao <xiaotao...@gmail.com>
> > > wrote:
> > > > >
> > > > > > Thanks Rajini. The other thing in my mind is that we should find
> a
> > > way
> > > > to
> > > > > > expose subject to SASL so that other mechanisms are able to use
> the
> > > > > > principal and credentials stored in subject to do authentication.
> > > > > >
> > > > > > I am thinking to have below interface that can be extended by
> users
> > > to
> > > > > > build the SASL client/server instead of having an AuthCallback.
> > With
> > > > this
> > > > > > interface users are able to add their own security provider
> before
> > > > > > client/server

Re: kafka-consumer-groups.sh doesn't work when consumers are off

2016-01-28 Thread tao xiao
Guozhang,

The old ConsumerOffsetChecker works for the new consumer too, with offsets stored
in Kafka. I tested it with mirror maker with the new consumer enabled: it is
able to show offsets while mirror maker is running and after its shutdown.

On Fri, 29 Jan 2016 at 06:34 Guozhang Wang  wrote:

> Once the offset is written to the log it is persistent and hence should
> survive broker failures. And its retention policy is configurable.
>
> It may be a bit misleading in saying "in-memory cache" in my previous
> email: the brokers just keep the in-memory map of [group, partition] ->
> latest_offset, while the offset commit history is kept in the log. When we
> delete the group, we remove the corresponding entry from memory map and put
> a tombstone into log as well so that the old offsets will be compacted
> eventually according to compaction policy.
>
> The old ConsumerOffsetChecker only works for old consumer that stores
> offset in ZK.
>
> Guozhang
>
> On Thu, Jan 28, 2016 at 1:43 PM, Cliff Rhyne  wrote:
>
> > Hi Guozhang,
> >
> > That looks like it might help but feels like there might be some gaps.
> > Would it be able to survive restarts of the kafka broker?  How long would
> > it stay in the cache (and is that configurable)?  If it expires from the
> > cache, what's the cache-miss operation look like?  (yes, a lot of this
> > depends on the data still being in the logs to recover)
> >
> > In the mean time, can I rely on the deprecated ConsumerOffsetChecker
> (which
> > looks at zookeeper) even though I'm using the new KafkaConsumer?
> >
> > Thanks,
> > Cliff
> >
> > On Thu, Jan 28, 2016 at 3:30 PM, Guozhang Wang 
> wrote:
> >
> > > Hi Cliff,
> > >
> > > Short answer to your question is it is just the current implementation.
> > >
> > > The coordinator stores the offsets as messages in an internal topic and
> > > also keeps the latest offset values in in-memory. It answers
> > > ConsumerGroupRequest using its cached offset, and upon the consumer
> group
> > > being removed since no member is alive already, it removed it from its
> > > in-memory cache and add a "tombstone" to the offset log as well. But
> the
> > > offsets are still persistent as messages in the log, which will only be
> > > compacted after a while (this is depend on the log compaction policy).
> > >
> > > There is a ticket open for improving on this scenario (
> > > https://issues.apache.org/jira/browse/KAFKA-2720) which lets the
> > > coordinator to only "purge" dead groups periodically instead of
> > > immediately, and that may partially resolve your case.
> > >
> > > Guozhang
> > >
> > >
> > > On Thu, Jan 28, 2016 at 12:13 PM, Cliff Rhyne 
> wrote:
> > >
> > > > Just following up on this concern.  Is there a constraint that
> prevents
> > > > ConsumerGroupCommand from reporting offsets on a group if no members
> > are
> > > > connected, or is this just the current implementation?
> > > >
> > > > Thanks,
> > > > Cliff
> > > >
> > > > On Mon, Jan 25, 2016 at 3:50 PM, Cliff Rhyne 
> wrote:
> > > >
> > > > > I'm running into a few challenges trying to evaluate offsets and
> lag
> > > > > (pending message count) in the new Java KafkaConsumer.  The old
> > > > > ConsumerOffsetChecker doesn't work anymore since the offsets aren't
> > > > stored
> > > > > in zookeeper after switching from the old consumer.  This would be
> > > fine,
> > > > > but the kafka-consumer-groups.sh command doesn't work if the
> > consumers
> > > > are
> > > > > shut off.  This seems like an unnecessary limitation and is
> > problematic
> > > > for
> > > > > troubleshooting / monitoring when the application is turned off (or
> > > while
> > > > > my application is running due to our stopping/starting consumers).
> > > > >
> > > > > Is there a constraint that I'm not aware of or is this something
> that
> > > > > could be changed?
> > > > >
> > > > > Thanks,
> > > > > Cliff
> > > > >
> > > > > --
> > > > > Cliff Rhyne
> > > > > Software Engineering Lead
> > > > > e: crh...@signal.co
> > > > > signal.co
> > > > > 
> > > > >
> > > > > Cut Through the Noise
> > > > >
> > > > > This e-mail and any files transmitted with it are for the sole use
> of
> > > the
> > > > > intended recipient(s) and may contain confidential and privileged
> > > > > information. Any unauthorized use of this email is strictly
> > prohibited.
> > > > > ©2015 Signal. All rights reserved.
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Cliff Rhyne
> > > > Software Engineering Lead
> > > > e: crh...@signal.co
> > > > signal.co
> > > > 
> > > >
> > > > Cut Through the Noise
> > > >
> > > > This e-mail and any files transmitted with it are for the sole use of
> > the
> > > > intended recipient(s) and may contain confidential and privileged
> > > > information. Any unauthorized use of this email is strictly
> prohibited.
> > > > ©2015 Signal. All rights reserved.
> > 

Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-28 Thread tao xiao
Sounds like a good approach to add the provider in the login module. Would love
to see updates in the PR reflecting the changes in Login and
AuthCallbackHandler.

On Thu, 28 Jan 2016 at 19:31 Rajini Sivaram <rajinisiva...@googlemail.com>
wrote:

> Tao,
>
> We currently add the security provider in a static initializer in our login
> module. This ensures that the security provider is always installed before
> Kafka creates SaslServer/SaslClient. As you say, it is also possible to
> insert code into your application to add security provider before Kafka
> clients are created. Since you can also configure the JDK to add new
> security providers, I am not sure if there is value in adding more
> configuration in Kafka to add security providers.
>
> On Thu, Jan 28, 2016 at 2:25 AM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > The callback works for me as long as it has access to Subject and mechs.
> > The other thing is how we can inject the customized security provider via
> > Security.addProvider()? If I want to implement my own SASL mech I need to
> > call the addProvider() before SASL.create so that my own implementation
> of
> > SASLClient/Sever can be returned. Any thoughts on this? we can either let
> > users inject the provider in their logic code before creating a
> > producer/consumer or Kafka does it for users
> >
> > On Thu, 28 Jan 2016 at 03:36 Rajini Sivaram <
> rajinisiva...@googlemail.com>
> > wrote:
> >
> > > Hi Tao,
> > >
> > > javax.security.auth.callback.CallbackHandler is the standard way in
> > > which SASL clients and server obtain additional mechanism specific
> > > input. AuthCallbackHandler simply extends this interface to propagate
> > > configuration properties. I was going to provide SASL mechanism and
> > > Subject to the callback handlers as well since the default handlers
> > > use these.
> > >
> > > Your SaslServer/SaslClient implementation can obtain the Subject using
> > > Subject.getSubject(AccessController.getContext()). But it will be good
> > > to know if callback handlers would work for you - apart from standard
> > > callbacks like PasswordCallback, you can define your own callbacks too
> > > if you require.
> > >
> > >
> > >
> > > > On Wed, Jan 27, 2016 at 3:59 PM, tao xiao <xiaotao...@gmail.com>
> wrote:
> > >
> > > > Thanks Rajini. The other thing in my mind is that we should find a
> way
> > to
> > > > expose subject to SASL so that other mechanisms are able to use the
> > > > principal and credentials stored in subject to do authentication.
> > > >
> > > > I am thinking to have below interface that can be extended by users
> to
> > > > build the SASL client/server instead of having an AuthCallback. With
> > this
> > > > interface users are able to add their own security provider before
> > > > client/server is returned by SASL. Any thoughts?
> > > >
> > > > Interface SaslClientBuilder {
> > > >
> > > > SaslClient build(mechs, subject, host, otherparams)
> > > > }
> > > >
> > > > Interface SaslServerBuilder {
> > > >     SaslServer build(mechs, subject, host, otherparams)
> > > > }
> > > >
> > > > On Wed, 27 Jan 2016 at 18:54 Rajini Sivaram <
> > > rajinisiva...@googlemail.com>
> > > > wrote:
> > > >
> > > > > Tao,
> > > > >
> > > > > Thank you for the explanation. I couldn't find a standard Java
> > > interface
> > > > > that would be suitable, so will define one based on your
> requirement
> > > and
> > > > > update the KIP.
> > > > >
> > > > > Regards,
> > > > >
> > > > > Rajini
> > > > >
> > > > > > On Wed, Jan 27, 2016 at 2:12 AM, tao xiao <xiaotao...@gmail.com>
> > > wrote:
> > > > >
> > > > > > Hi Rajini,
> > > > > >
> > > > > > One requirement I have is to refresh the login token every X
> hours.
> > > > Like
> > > > > > what the Kerberos login does I need to have a background thread
> > that
> > > > > > refreshes the token periodically.
> > > > > >
> > > > > > I understand most of the login logic would be simple but it is
> good
> > > > that
> > > > > we
> >

Re: kafka-consumer-groups.sh doesn't work when consumers are off

2016-01-28 Thread tao xiao
It first issues an offset fetch request to the broker and checks whether the
offset is stored in the broker; if not, it queries ZooKeeper:

https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala#L171
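
In other words, the logic at that line boils down to the following sketch (fetchOffsetFromBroker and readOffsetFromZk are made-up stand-ins for the OffsetFetchRequest and ZooKeeper reads the Scala tool actually performs):

public class OffsetCheckerFallbackSketch {
    // Stand-in for the OffsetFetchRequest sent to the broker; a negative
    // value models "no offset committed to Kafka for this group/partition".
    static long fetchOffsetFromBroker(String group, String topic, int partition) {
        return -1L;
    }

    // Stand-in for reading /consumers/<group>/offsets/<topic>/<partition> in ZooKeeper.
    static long readOffsetFromZk(String group, String topic, int partition) {
        return 42L;
    }

    static long committedOffset(String group, String topic, int partition) {
        long offset = fetchOffsetFromBroker(group, topic, partition);
        return offset >= 0 ? offset : readOffsetFromZk(group, topic, partition);
    }

    public static void main(String[] args) {
        System.out.println(committedOffset("my-group", "test", 0));
    }
}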

On Fri, 29 Jan 2016 at 13:11 Guozhang Wang <wangg...@gmail.com> wrote:

> Tao,
>
> Hmm that is a bit wired since ConsumerOffsetChecker itself does not talk to
> brokers at all, but only through ZK.
>
> Guozhang
>
> On Thu, Jan 28, 2016 at 6:07 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > Guozhang,
> >
> > The old ConsumerOffsetChecker works for new consumer too with offset
> stored
> > in Kafka. I tested it with mirror maker with new consumer enabled. it is
> > able to show offset during mirror maker running and after its shutdown.
> >
> > On Fri, 29 Jan 2016 at 06:34 Guozhang Wang <wangg...@gmail.com> wrote:
> >
> > > Once the offset is written to the log it is persistent and hence should
> > > survive broker failures. And its retention policy is configurable.
> > >
> > > It may be a bit misleading in saying "in-memory cache" in my previous
> > > email: the brokers just keep the in-memory map of [group, partition] ->
> > > latest_offset, while the offset commit history is kept in the log. When
> > we
> > > delete the group, we remove the corresponding entry from memory map and
> > put
> > > a tombstone into log as well so that the old offsets will be compacted
> > > eventually according to compaction policy.
> > >
> > > The old ConsumerOffsetChecker only works for old consumer that stores
> > > offset in ZK.
> > >
> > > Guozhang
> > >
> > > On Thu, Jan 28, 2016 at 1:43 PM, Cliff Rhyne <crh...@signal.co> wrote:
> > >
> > > > Hi Guozhang,
> > > >
> > > > That looks like it might help but feels like there might be some
> gaps.
> > > > Would it be able to survive restarts of the kafka broker?  How long
> > would
> > > > it stay in the cache (and is that configurable)?  If it expires from
> > the
> > > > cache, what's the cache-miss operation look like?  (yes, a lot of
> this
> > > > depends on the data still being in the logs to recover)
> > > >
> > > > In the mean time, can I rely on the deprecated ConsumerOffsetChecker
> > > (which
> > > > looks at zookeeper) even though I'm using the new KafkaConsumer?
> > > >
> > > > Thanks,
> > > > Cliff
> > > >
> > > > On Thu, Jan 28, 2016 at 3:30 PM, Guozhang Wang <wangg...@gmail.com>
> > > wrote:
> > > >
> > > > > Hi Cliff,
> > > > >
> > > > > Short answer to your question is it is just the current
> > implementation.
> > > > >
> > > > > The coordinator stores the offsets as messages in an internal topic
> > and
> > > > > also keeps the latest offset values in in-memory. It answers
> > > > > ConsumerGroupRequest using its cached offset, and upon the consumer
> > > group
> > > > > being removed since no member is alive already, it removed it from
> > its
> > > > > in-memory cache and add a "tombstone" to the offset log as well.
> But
> > > the
> > > > > offsets are still persistent as messages in the log, which will
> only
> > be
> > > > > compacted after a while (this is depend on the log compaction
> > policy).
> > > > >
> > > > > There is a ticket open for improving on this scenario (
> > > > > https://issues.apache.org/jira/browse/KAFKA-2720) which lets the
> > > > > coordinator to only "purge" dead groups periodically instead of
> > > > > immediately, and that may partially resolve your case.
> > > > >
> > > > > Guozhang
> > > > >
> > > > >
> > > > > On Thu, Jan 28, 2016 at 12:13 PM, Cliff Rhyne <crh...@signal.co>
> > > wrote:
> > > > >
> > > > > > Just following up on this concern.  Is there a constraint that
> > > prevents
> > > > > > ConsumerGroupCommand from reporting offsets on a group if no
> > members
> > > > are
> > > > > > connected, or is this just the current implementation?
> > > > > >
> > > > > > Thanks,
> > > > > > Cliff
> > > >

Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-27 Thread tao xiao
The callback works for me as long as it has access to the Subject and mechs.
The other question is how we can inject the customized security provider via
Security.addProvider(). If I want to implement my own SASL mechanism I need to
call addProvider() before Sasl.createSaslClient/createSaslServer so that my own
implementation of SaslClient/SaslServer is returned. Any thoughts on this? We
can either let users inject the provider in their application code before
creating a producer/consumer, or Kafka can do it for users.
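
For illustration, a minimal sketch of registering a custom mechanism through the JCA provider framework; the mechanism name MYMECH and the com.example factory classes are hypothetical:

import java.security.Provider;
import java.security.Security;

public class MySaslProvider extends Provider {
    public MySaslProvider() {
        super("MySaslProvider", 1.0, "SASL factories for the hypothetical MYMECH mechanism");
        // Map the SASL factory service names to (hypothetical) implementations.
        put("SaslClientFactory.MYMECH", "com.example.MySaslClientFactory");
        put("SaslServerFactory.MYMECH", "com.example.MySaslServerFactory");
    }

    // Must run before Sasl.createSaslClient/createSaslServer is first called,
    // e.g. from a login module's static initializer.
    public static void install() {
        Security.addProvider(new MySaslProvider());
    }
}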

On Thu, 28 Jan 2016 at 03:36 Rajini Sivaram <rajinisiva...@googlemail.com>
wrote:

> Hi Tao,
>
> javax.security.auth.callback.CallbackHandler is the standard way in
> which SASL clients and server obtain additional mechanism specific
> input. AuthCallbackHandler simply extends this interface to propagate
> configuration properties. I was going to provide SASL mechanism and
> Subject to the callback handlers as well since the default handlers use these.
>
> Your SaslServer/SaslClient implementation can obtain the Subject using
> Subject.getSubject(AccessController.getContext()). But it will be good
> to know if callback handlers would work for you - apart from standard
> callbacks like PasswordCallback, you can define your own callbacks too if
> you require.
>
>
>
> On Wed, Jan 27, 2016 at 3:59 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > Thanks Rajini. The other thing in my mind is that we should find a way to
> > expose subject to SASL so that other mechanisms are able to use the
> > principal and credentials stored in subject to do authentication.
> >
> > I am thinking to have below interface that can be extended by users to
> > build the SASL client/server instead of having an AuthCallback. With this
> > interface users are able to add their own security provider before
> > client/server is returned by SASL. Any thoughts?
> >
> > Interface SaslClientBuilder {
> >
> > SaslClient build(mechs, subject, host, otherparams)
> > }
> >
> > Interface SaslServerBuilder {
> > SaslServer build(mechs, subject, host, otherparams)
> > }
> >
> > On Wed, 27 Jan 2016 at 18:54 Rajini Sivaram <
> rajinisiva...@googlemail.com>
> > wrote:
> >
> > > Tao,
> > >
> > > Thank you for the explanation. I couldn't find a standard Java
> interface
> > > that would be suitable, so will define one based on your requirement
> and
> > > update the KIP.
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > > On Wed, Jan 27, 2016 at 2:12 AM, tao xiao <xiaotao...@gmail.com>
> wrote:
> > >
> > > > Hi Rajini,
> > > >
> > > > One requirement I have is to refresh the login token every X hours.
> > Like
> > > > what the Kerberos login does I need to have a background thread that
> > > > refreshes the token periodically.
> > > >
> > > > I understand most of the login logic would be simple but it is good
> > that
> > > we
> > > > can expose the logic login to users and let them decide what they
> want
> > to
> > > > do. And we can have a fallback login component that is used if users
> > dont
> > > > specify it.
> > > >
> > > > On Tue, 26 Jan 2016 at 20:07 Rajini Sivaram <
> > > rajinisiva...@googlemail.com>
> > > > wrote:
> > > >
> > > > > Hi Tao,
> > > > >
> > > > > Thank you for the review. The changes I had in mind are in the PR
> > > > > https://github.com/apache/kafka/pull/812. Login for non-Kerberos
> > > > protocols
> > > > > contains very little logic. I was expecting that combined with a
> > custom
> > > > > login module specified in JAAS configuration, this would give
> > > sufficient
> > > > > flexibility. Is there a specific usecase you have in mind where you
> > > need
> > > > to
> > > > > customize the Login code?
> > > > >
> > > > > Regards,
> > > > >
> > > > > Rajini
> > > > >
> > > > > > On Tue, Jan 26, 2016 at 11:15 AM, tao xiao <xiaotao...@gmail.com>
> > > wrote:
> > > > >
> > > > > > Hi Rajini,
> > > > > >
> > > > > > I think it makes sense to change LoginManager or Login to an
> > > interface
> > > > > > which users can extend to provide their own logic of login
> > otherwise
> > > it
> > > > > is
> 

Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-27 Thread tao xiao
Thanks Rajini. The other thing in my mind is that we should find a way to
expose subject to SASL so that other mechanisms are able to use the
principal and credentials stored in subject to do authentication.

I am thinking to have below interface that can be extended by users to
build the SASL client/server instead of having an AuthCallback. With this
interface users are able to add their own security provider before
client/server is returned by SASL. Any thoughts?

import java.util.Map;
import javax.security.auth.Subject;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslServer;

// Parameter types below are indicative only.
interface SaslClientBuilder {
    SaslClient build(String[] mechs, Subject subject, String host, Map<String, ?> otherParams);
}

interface SaslServerBuilder {
    SaslServer build(String[] mechs, Subject subject, String host, Map<String, ?> otherParams);
}
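
As background for the Subject parameter above, a runnable sketch of the standard JAAS pattern (the one Rajini mentions below) for recovering the current Subject from the access-control context; nothing here is Kafka-specific:

import java.security.AccessController;
import java.security.PrivilegedAction;
import javax.security.auth.Subject;

public class SubjectLookupSketch {
    public static void main(String[] args) {
        Subject subject = new Subject(); // would carry principals/credentials after login

        // Code run inside doAs can recover the Subject from the access-control
        // context - which is how a SASL client/server builder could reach the
        // login credentials.
        Subject.doAs(subject, (PrivilegedAction<Void>) () -> {
            Subject current = Subject.getSubject(AccessController.getContext());
            System.out.println("same subject: " + (current == subject));
            return null;
        });
    }
}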

On Wed, 27 Jan 2016 at 18:54 Rajini Sivaram <rajinisiva...@googlemail.com>
wrote:

> Tao,
>
> Thank you for the explanation. I couldn't find a standard Java interface
> that would be suitable, so will define one based on your requirement and
> update the KIP.
>
> Regards,
>
> Rajini
>
> On Wed, Jan 27, 2016 at 2:12 AM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > Hi Rajini,
> >
> > One requirement I have is to refresh the login token every X hours. Like
> > what the Kerberos login does I need to have a background thread that
> > refreshes the token periodically.
> >
> > I understand most of the login logic would be simple but it is good that
> we
> > can expose the logic login to users and let them decide what they want to
> > do. And we can have a fallback login component that is used if users dont
> > specify it.
> >
> > On Tue, 26 Jan 2016 at 20:07 Rajini Sivaram <
> rajinisiva...@googlemail.com>
> > wrote:
> >
> > > Hi Tao,
> > >
> > > Thank you for the review. The changes I had in mind are in the PR
> > > https://github.com/apache/kafka/pull/812. Login for non-Kerberos
> > protocols
> > > contains very little logic. I was expecting that combined with a custom
> > > login module specified in JAAS configuration, this would give
> sufficient
> > > flexibility. Is there a specific usecase you have in mind where you
> need
> > to
> > > customize the Login code?
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > On Tue, Jan 26, 2016 at 11:15 AM, tao xiao <xiaotao...@gmail.com>
> wrote:
> > >
> > > > Hi Rajini,
> > > >
> > > > I think it makes sense to change LoginManager or Login to an
> interface
> > > > which users can extend to provide their own logic of login otherwise
> it
> > > is
> > > > hard for users to implement a custom SASL mechanism but have no
> control
> > > > over login
> > > >
> > > > On Tue, 26 Jan 2016 at 18:45 Ismael Juma <ism...@juma.me.uk> wrote:
> > > >
> > > > > Hi Rajini,
> > > > >
> > > > > Thanks for the KIP. As stated in the KIP, it does not address
> > "Support
> > > > for
> > > > > multiple SASL mechanisms within a broker". Maybe we should also
> > mention
> > > > > this in the "Rejected Alternatives" section with the reasoning. I
> > think
> > > > > it's particularly relevant to understand if it's not being proposed
> > > > because
> > > > > we don't think it's useful or due to the additional implementation
> > > > > complexity (it's probably a combination). If we think this could be
> > > > useful
> > > > > in the future, it would also be worth thinking about how it is
> > affected
> > > > if
> > > > > we do KIP-43 first (ie will it be easier, harder, etc.)
> > > > >
> > > > > Thanks,
> > > > > Ismael
> > > > >
> > > > > On Mon, Jan 25, 2016 at 9:55 PM, Rajini Sivaram <
> > > > > rajinisiva...@googlemail.com> wrote:
> > > > >
> > > > > > I have just created KIP-43 to extend the SASL implementation in
> > Kafka
> > > > to
> > > > > > support new SASL mechanisms.
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-43%3A+Kafka+SASL+enhancements
> > > > > >
> > > > > >
> > > > > > Comments and suggestions are appreciated.
> > > > > >
> > > > > >
> > > > > > Thank you...
> > > > > >
> > > > > > Regards,
> > > > > >
> > > > > > Rajini
> > > > > >
> > > > >
> > > >
> > >
> >
>


[jira] [Updated] (KAFKA-3157) Mirror maker doesn't commit offset with new consumer when there is no more messages

2016-01-27 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-3157:

Status: Patch Available  (was: Open)

> Mirror maker doesn't commit offset with new consumer when there is no more 
> messages
> ---
>
> Key: KAFKA-3157
> URL: https://issues.apache.org/jira/browse/KAFKA-3157
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: TAO XIAO
>Assignee: TAO XIAO
>
> Mirror maker doesn't commit messages with new consumer enabled when messages 
> are sent to source within the time range that is smaller than commit interval 
> and no more messages are sent afterwards.
> The steps to reproduce:
> 1. Start mirror maker.
> 2. The default commit interval is 1 min. Send message for less than 1 min for 
> example 10 seconds.
> 3. Don't send more messages.
> 4. Check committed offset with group command. The lag remains unchanged for 
> ever even though the messages have been sent to destination.





[jira] [Created] (KAFKA-3157) Mirror maker doesn't commit offset with new consumer when there is no more messages

2016-01-27 Thread TAO XIAO (JIRA)
TAO XIAO created KAFKA-3157:
---

 Summary: Mirror maker doesn't commit offset with new consumer when 
there is no more messages
 Key: KAFKA-3157
 URL: https://issues.apache.org/jira/browse/KAFKA-3157
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: TAO XIAO
Assignee: TAO XIAO


Mirror maker doesn't commit messages with new consumer enabled when messages 
are sent to source within the time range that is smaller than commit interval 
and no more messages are sent beyond.

The steps to reproduce:

1. Start mirror maker.
2. The default commit interval is 1 min. Send message for less than 1 min for 
example 10 seconds.
3. Don't send more messages.
4. Check committed offset with group command. The lag remains unchanged for 
ever even though the messages have been sent to destination.
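
For step 1, the invocation would look roughly like this (a hedged example: the option names match the 0.9 tool as I recall them; the config file names and whitelist are illustrative):

bin/kafka-mirror-maker.sh --new.consumer \
  --consumer.config source-consumer.properties \
  --producer.config dest-producer.properties \
  --whitelist 'test.*' \
  --offset.commit.interval.ms 60000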





[jira] [Updated] (KAFKA-3157) Mirror maker doesn't commit offset with new consumer when there is no more messages

2016-01-27 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-3157:

Description: 
Mirror maker doesn't commit messages with new consumer enabled when messages 
are sent to source within the time range that is smaller than commit interval 
and no more messages are sent afterwards.

The steps to reproduce:

1. Start mirror maker.
2. The default commit interval is 1 min. Send message for less than 1 min for 
example 10 seconds.
3. Don't send more messages.
4. Check committed offset with group command. The lag remains unchanged for 
ever even though the messages have been sent to destination.

  was:
Mirror maker doesn't commit messages with new consumer enabled when messages 
are sent to source within the time range that is smaller than commit interval 
and no more messages are sent beyond.

The steps to reproduce:

1. Start mirror maker.
2. The default commit interval is 1 min. Send message for less than 1 min for 
example 10 seconds.
3. Don't send more messages.
4. Check committed offset with group command. The lag remains unchanged for 
ever even though the messages have been sent to destination.


> Mirror maker doesn't commit offset with new consumer when there is no more 
> messages
> ---
>
> Key: KAFKA-3157
> URL: https://issues.apache.org/jira/browse/KAFKA-3157
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: TAO XIAO
>Assignee: TAO XIAO
>
> Mirror maker doesn't commit messages with new consumer enabled when messages 
> are sent to source within the time range that is smaller than commit interval 
> and no more messages are sent afterwards.
> The steps to reproduce:
> 1. Start mirror maker.
> 2. The default commit interval is 1 min. Send message for less than 1 min for 
> example 10 seconds.
> 3. Don't send more messages.
> 4. Check committed offset with group command. The lag remains unchanged for 
> ever even though the messages have been sent to destination.





Re: [DISCUSS] KIP-44 - Allow Kafka to have a customized security protocol

2016-01-26 Thread tao xiao
Hi Rajini,

I think I need to rephrase some of the wording in the KIP. I meant to
provide a customized security protocol which may or may not include SSL
underneath. With a CUSTOMIZED security protocol, users have the ability to
plug in both the authentication and the secure-communication components.


On Tue, 26 Jan 2016 at 17:45 Rajini Sivaram <rajinisiva...@googlemail.com>
wrote:

> Hi Tao,
>
> I have a couple of questions:
>
>1. Is there a reason why you wouldn't want to implement a custom SASL
>mechanism to use your authentication mechanism? SASL itself aims to
> provide
>pluggable authentication mechanisms.
>2. The KIP suggests that you are interested in plugging in a custom
>authenticator, but not a custom transport layer. If that is the case,
> maybe
>you need CUSTOM_PLAINTEXT and CUSTOM_SSL for consistency with the other
>security protocols (which are a combination of transport layer protocol
> and
>authentication protocol)?
>
>
> Regards,
>
> Rajini
>
> On Tue, Jan 26, 2016 at 6:58 AM, tao xiao <xiaotao...@gmail.com> wrote:
>
>
> > HI Kafka developers,
> >
> > I raised a KIP-44, allow a customized security protocol, for discussion.
> > The goal of this KIP to enable a customized security protocol where users
> > can plugin their own implementation.
> >
> > Feedback is welcomed
> >
> >
>


Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-26 Thread tao xiao
Hi Rajini,

I think it makes sense to change LoginManager or Login to an interface
which users can extend to provide their own login logic; otherwise it is
hard for users to implement a custom SASL mechanism while having no control
over login.

On Tue, 26 Jan 2016 at 18:45 Ismael Juma  wrote:

> Hi Rajini,
>
> Thanks for the KIP. As stated in the KIP, it does not address "Support for
> multiple SASL mechanisms within a broker". Maybe we should also mention
> this in the "Rejected Alternatives" section with the reasoning. I think
> it's particularly relevant to understand if it's not being proposed because
> we don't think it's useful or due to the additional implementation
> complexity (it's probably a combination). If we think this could be useful
> in the future, it would also be worth thinking about how it is affected if
> we do KIP-43 first (ie will it be easier, harder, etc.)
>
> Thanks,
> Ismael
>
> On Mon, Jan 25, 2016 at 9:55 PM, Rajini Sivaram <
> rajinisiva...@googlemail.com> wrote:
>
> > I have just created KIP-43 to extend the SASL implementation in Kafka to
> > support new SASL mechanisms.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-43%3A+Kafka+SASL+enhancements
> >
> >
> > Comments and suggestions are appreciated.
> >
> >
> > Thank you...
> >
> > Regards,
> >
> > Rajini
> >
>


Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-26 Thread tao xiao
Hi Rajini,

One requirement I have is to refresh the login token every X hours. Like
what the Kerberos login does, I need a background thread that refreshes the
token periodically.

I understand most of the login logic would be simple, but it is good that we
can expose the login logic to users and let them decide what they want to
do. And we can have a fallback login component that is used if users don't
specify one.
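
To illustrate, the background refresh I have in mind is no more than this sketch; the interval and the refreshToken Runnable are placeholders for whatever the mechanism actually needs:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TokenRefresher {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "sasl-token-refresh");
                t.setDaemon(true); // don't keep the JVM alive just for refreshes
                return t;
            });

    // refreshToken stands in for whatever re-authentication the mechanism needs.
    public void start(Runnable refreshToken, long everyHours) {
        scheduler.scheduleAtFixedRate(refreshToken, everyHours, everyHours, TimeUnit.HOURS);
    }
}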

On Tue, 26 Jan 2016 at 20:07 Rajini Sivaram <rajinisiva...@googlemail.com>
wrote:

> Hi Tao,
>
> Thank you for the review. The changes I had in mind are in the PR
> https://github.com/apache/kafka/pull/812. Login for non-Kerberos protocols
> contains very little logic. I was expecting that combined with a custom
> login module specified in JAAS configuration, this would give sufficient
> flexibility. Is there a specific usecase you have in mind where you need to
> customize the Login code?
>
> Regards,
>
> Rajini
>
> On Tue, Jan 26, 2016 at 11:15 AM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > Hi Rajini,
> >
> > I think it makes sense to change LoginManager or Login to an interface
> > which users can extend to provide their own logic of login otherwise it
> is
> > hard for users to implement a custom SASL mechanism but have no control
> > over login
> >
> > On Tue, 26 Jan 2016 at 18:45 Ismael Juma <ism...@juma.me.uk> wrote:
> >
> > > Hi Rajini,
> > >
> > > Thanks for the KIP. As stated in the KIP, it does not address "Support
> > for
> > > multiple SASL mechanisms within a broker". Maybe we should also mention
> > > this in the "Rejected Alternatives" section with the reasoning. I think
> > > it's particularly relevant to understand if it's not being proposed
> > because
> > > we don't think it's useful or due to the additional implementation
> > > complexity (it's probably a combination). If we think this could be
> > useful
> > > in the future, it would also be worth thinking about how it is affected
> > if
> > > we do KIP-43 first (ie will it be easier, harder, etc.)
> > >
> > > Thanks,
> > > Ismael
> > >
> > > On Mon, Jan 25, 2016 at 9:55 PM, Rajini Sivaram <
> > > rajinisiva...@googlemail.com> wrote:
> > >
> > > > I have just created KIP-43 to extend the SASL implementation in Kafka
> > to
> > > > support new SASL mechanisms.
> > > >
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-43%3A+Kafka+SASL+enhancements
> > > >
> > > >
> > > > Comments and suggestions are appreciated.
> > > >
> > > >
> > > > Thank you...
> > > >
> > > > Regards,
> > > >
> > > > Rajini
> > > >
> > >
> >
>


[DISCUSS] KIP-44 - Allow Kafka to have a customized security protocol

2016-01-25 Thread tao xiao
HI Kafka developers,

I raised KIP-44, allow a customized security protocol, for discussion.
The goal of this KIP is to enable a customized security protocol where users
can plug in their own implementation.

Feedback is welcome.


[jira] [Updated] (KAFKA-2993) compression-rate-avg always returns 0 even with compression.type being set

2016-01-17 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2993:

Component/s: producer 

> compression-rate-avg always returns 0 even with compression.type being set
> --
>
> Key: KAFKA-2993
> URL: https://issues.apache.org/jira/browse/KAFKA-2993
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: TAO XIAO
>Assignee: TAO XIAO
>Priority: Minor
>
> The producer metric compression-rate-avg always returns 0 even with 
> compression.type set to snappy. I drilled down the code and discovered that 
> the position of bytebuffer in org.apache.kafka.common.record.Compressor is 
> reset to 0 by RecordAccumulator.drain() before calling metric updates in 
> sender thread. 





[jira] [Updated] (KAFKA-2993) compression-rate-avg always returns 0 even with compression.type being set

2015-12-15 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2993:

Summary: compression-rate-avg always returns 0 even with compression.type 
being set  (was: compression-rate-avg always returns 0 event with 
compression.type being set)

> compression-rate-avg always returns 0 even with compression.type being set
> --
>
> Key: KAFKA-2993
> URL: https://issues.apache.org/jira/browse/KAFKA-2993
> Project: Kafka
>  Issue Type: Bug
>    Reporter: TAO XIAO
>    Assignee: TAO XIAO
>Priority: Minor
>
> The producer metric compression-rate-avg always returns 0 even with 
> compression.type set to snappy. I drilled down the code and discovered that 
> the position of bytebuffer in org.apache.kafka.common.record.Compressor is 
> reset to 0 by RecordAccumulator.drain() before calling metric updates in 
> sender thread. 





[jira] [Created] (KAFKA-2993) compression-rate-avg always returns 0 event with compression.type being set

2015-12-15 Thread TAO XIAO (JIRA)
TAO XIAO created KAFKA-2993:
---

 Summary: compression-rate-avg always returns 0 event with 
compression.type being set
 Key: KAFKA-2993
 URL: https://issues.apache.org/jira/browse/KAFKA-2993
 Project: Kafka
  Issue Type: Bug
Reporter: TAO XIAO
Assignee: TAO XIAO
Priority: Minor


The producer metric compression-rate-avg always returns 0 even with 
compression.type set to snappy. I drilled down the code and discovered that the 
position of bytebuffer in org.apache.kafka.common.record.Compressor is reset to 
0 by RecordAccumulator.drain() before calling metric updates in sender thread. 
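
A quick way to observe the symptom is to read the metric straight off the producer; a sketch against the new producer API (broker address, topic, and payload are illustrative). With this bug the printed value stays 0 even though snappy is enabled:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class CompressionRateCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("compression.type", "snappy");
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", new byte[1024]));
            producer.flush();
            for (Map.Entry<MetricName, ? extends Metric> e : producer.metrics().entrySet()) {
                if (e.getKey().name().equals("compression-rate-avg")) {
                    System.out.println(e.getKey().group() + "/compression-rate-avg = " + e.getValue().value());
                }
            }
        }
    }
}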






[jira] [Updated] (KAFKA-2281) org.apache.kafka.clients.producer.internals.ErrorLoggingCallback holds unnecessary byte[] value

2015-06-28 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2281:

Description: 
org.apache.kafka.clients.producer.internals.ErrorLoggingCallback is constructed 
with byte[] value as one of the input. It holds the reference to the value 
until it finishes its lifecycle. The value is not used except for logging its 
size. This behavior causes unnecessary memory consumption.

The fix is to keep reference to the value size instead of value itself when 
logAsString is false

  was:
org.apache.kafka.clients.producer.internals.ErrorLoggingCallback is constructed 
with byte[] value as one of the input. It holds the reference to the value 
until it finishes its lifecycle. The value is not used except for logging its 
size. This behavior causes unnecessary memory consumption.

The fix is to keep reference to the value size instead of value itself


 org.apache.kafka.clients.producer.internals.ErrorLoggingCallback holds 
 unnecessary byte[] value
 ---

 Key: KAFKA-2281
 URL: https://issues.apache.org/jira/browse/KAFKA-2281
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.2.1
Reporter: TAO XIAO
Assignee: TAO XIAO
 Attachments: KAFKA-2281_2015-06-25.patch


 org.apache.kafka.clients.producer.internals.ErrorLoggingCallback is 
 constructed with byte[] value as one of the input. It holds the reference to 
 the value until it finishes its lifecycle. The value is not used except for 
 logging its size. This behavior causes unnecessary memory consumption.
 The fix is to keep reference to the value size instead of value itself when 
 logAsString is false
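
The gist of the described fix, as a standalone sketch rather than the actual patch:

public class ErrorLoggingCallbackSketch {
    private final byte[] value;     // retained only when it must be logged as a string
    private final int valueLength;  // enough for a "value has N bytes" log line
    private final boolean logAsString;

    public ErrorLoggingCallbackSketch(byte[] value, boolean logAsString) {
        this.logAsString = logAsString;
        this.value = logAsString ? value : null;            // drop the large array reference
        this.valueLength = value == null ? -1 : value.length;
    }

    public String valueDescription() {
        return (logAsString && value != null) ? new String(value) : valueLength + " bytes";
    }
}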





[jira] [Updated] (KAFKA-2281) org.apache.kafka.clients.producer.internals.ErrorLoggingCallback holds unnecessary byte[] value

2015-06-25 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2281:

Attachment: (was: KAFKA-2281.patch.1)

 org.apache.kafka.clients.producer.internals.ErrorLoggingCallback holds 
 unnecessary byte[] value
 ---

 Key: KAFKA-2281
 URL: https://issues.apache.org/jira/browse/KAFKA-2281
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.2.1
Reporter: TAO XIAO
Assignee: Jun Rao
 Attachments: KAFKA-2281.patch


 org.apache.kafka.clients.producer.internals.ErrorLoggingCallback is 
 constructed with byte[] value as one of the input. It holds the reference to 
 the value until it finishes its lifecycle. The value is not used except for 
 logging its size. This behavior causes unnecessary memory consumption.
 The fix is to keep reference to the value size instead of value itself





[jira] [Updated] (KAFKA-2281) org.apache.kafka.clients.producer.internals.ErrorLoggingCallback holds unnecessary byte[] value

2015-06-25 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2281:

Attachment: KAFKA-2281_2015-06-25.patch

 org.apache.kafka.clients.producer.internals.ErrorLoggingCallback holds 
 unnecessary byte[] value
 ---

 Key: KAFKA-2281
 URL: https://issues.apache.org/jira/browse/KAFKA-2281
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.2.1
Reporter: TAO XIAO
Assignee: Jun Rao
 Attachments: KAFKA-2281.patch, KAFKA-2281_2015-06-25.patch


 org.apache.kafka.clients.producer.internals.ErrorLoggingCallback is 
 constructed with byte[] value as one of the input. It holds the reference to 
 the value until it finishes its lifecycle. The value is not used except for 
 logging its size. This behavior causes unnecessary memory consumption.
 The fix is to keep reference to the value size instead of value itself





[jira] [Updated] (KAFKA-2281) org.apache.kafka.clients.producer.internals.ErrorLoggingCallback holds unnecessary byte[] value

2015-06-25 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2281:

Attachment: (was: KAFKA-2281.patch)

 org.apache.kafka.clients.producer.internals.ErrorLoggingCallback holds 
 unnecessary byte[] value
 ---

 Key: KAFKA-2281
 URL: https://issues.apache.org/jira/browse/KAFKA-2281
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.2.1
Reporter: TAO XIAO
Assignee: Jun Rao
 Attachments: KAFKA-2281_2015-06-25.patch


 org.apache.kafka.clients.producer.internals.ErrorLoggingCallback is 
 constructed with byte[] value as one of the input. It holds the reference to 
 the value until it finishes its lifecycle. The value is not used except for 
 logging its size. This behavior causes unnecessary memory consumption.
 The fix is to keep reference to the value size instead of value itself





[jira] [Updated] (KAFKA-2048) java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when handling error returned from simpleConsumer

2015-03-24 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2048:

Attachment: KAFKA-2048.patch

 java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when 
 handling error returned from simpleConsumer
 ---

 Key: KAFKA-2048
 URL: https://issues.apache.org/jira/browse/KAFKA-2048
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.2.1
Reporter: TAO XIAO
Assignee: Neha Narkhede
 Attachments: KAFKA-2048.patch


 AbstractFetcherThread will throw java.lang.IllegalMonitorStateException in 
 the catch block of processFetchRequest method. This is because 
 partitionMapLock is not acquired before calling partitionMapCond.await()
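
For reference, Condition.await() throws IllegalMonitorStateException unless the calling thread holds the associated lock; a minimal sketch of the correct pattern (field names mirror the description):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class PartitionBackoffSketch {
    private final ReentrantLock partitionMapLock = new ReentrantLock();
    private final Condition partitionMapCond = partitionMapLock.newCondition();

    // Back off briefly after a fetch error; the lock must be held around await().
    public void backoff(long ms) throws InterruptedException {
        partitionMapLock.lock();
        try {
            partitionMapCond.await(ms, TimeUnit.MILLISECONDS);
        } finally {
            partitionMapLock.unlock();
        }
    }
}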





[jira] [Updated] (KAFKA-2048) java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when handling error returned from simpleConsumer

2015-03-24 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2048:

Status: Patch Available  (was: Open)

 java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when 
 handling error returned from simpleConsumer
 ---

 Key: KAFKA-2048
 URL: https://issues.apache.org/jira/browse/KAFKA-2048
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.2.1
Reporter: TAO XIAO
Assignee: Neha Narkhede
 Attachments: KAFKA-2048.patch


 AbstractFetcherThread will throw java.lang.IllegalMonitorStateException in 
 the catch block of processFetchRequest method. This is because 
 partitionMapLock is not acquired before calling partitionMapCond.await()





[jira] [Updated] (KAFKA-2048) java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when handling error returned from simpleConsumer

2015-03-24 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2048:

Attachment: KAFKA-2048.patch

 java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when 
 handling error returned from simpleConsumer
 ---

 Key: KAFKA-2048
 URL: https://issues.apache.org/jira/browse/KAFKA-2048
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.2.1
Reporter: TAO XIAO
Assignee: Neha Narkhede
 Attachments: KAFKA-2048.patch


 AbstractFetcherThread throws a java.lang.IllegalMonitorStateException in the 
 catch block of its processFetchRequest method because partitionMapLock is not 
 acquired before calling partitionMapCond.await().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2048) java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when handling error returned from simpleConsumer

2015-03-24 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2048:

Status: Patch Available  (was: Open)

 java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when 
 handling error returned from simpleConsumer
 ---

 Key: KAFKA-2048
 URL: https://issues.apache.org/jira/browse/KAFKA-2048
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.2.1
Reporter: TAO XIAO
Assignee: Neha Narkhede
 Attachments: KAFKA-2048.patch


 AbstractFetcherThread throws a java.lang.IllegalMonitorStateException in the 
 catch block of its processFetchRequest method because partitionMapLock is not 
 acquired before calling partitionMapCond.await().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2048) java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when handling error returned from simpleConsumer

2015-03-24 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2048:

Attachment: (was: KAFKA-2048.patch)

 java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when 
 handling error returned from simpleConsumer
 ---

 Key: KAFKA-2048
 URL: https://issues.apache.org/jira/browse/KAFKA-2048
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.2.1
Reporter: TAO XIAO
Assignee: Neha Narkhede
 Attachments: KAFKA-2048.patch


 AbstractFetcherThread throws a java.lang.IllegalMonitorStateException in the 
 catch block of its processFetchRequest method because partitionMapLock is not 
 acquired before calling partitionMapCond.await().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2048) java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when handling error returned from simpleConsumer

2015-03-24 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2048:

Status: Open  (was: Patch Available)

 java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when 
 handling error returned from simpleConsumer
 ---

 Key: KAFKA-2048
 URL: https://issues.apache.org/jira/browse/KAFKA-2048
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.2.1
Reporter: TAO XIAO
Assignee: Neha Narkhede

 AbstractFetcherThread throws a java.lang.IllegalMonitorStateException in the 
 catch block of its processFetchRequest method because partitionMapLock is not 
 acquired before calling partitionMapCond.await().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2048) java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when handling error returned from simpleConsumer

2015-03-24 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2048:

Attachment: (was: KAFKA-2048.patch)

 java.lang.IllegalMonitorStateException thrown in AbstractFetcherThread when 
 handling error returned from simpleConsumer
 ---

 Key: KAFKA-2048
 URL: https://issues.apache.org/jira/browse/KAFKA-2048
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.2.1
Reporter: TAO XIAO
Assignee: Neha Narkhede

 AbstractFetcherThread throws a java.lang.IllegalMonitorStateException in the 
 catch block of its processFetchRequest method because partitionMapLock is not 
 acquired before calling partitionMapCond.await().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1997) Refactor Mirror Maker

2015-03-17 Thread TAO XIAO (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366543#comment-14366543
 ] 

TAO XIAO edited comment on KAFKA-1997 at 3/18/15 2:52 AM:
--

[~becket_qin] I think I found a bug in MirrorMaker.scala. As shown in the code 
block below, rebalanceListenerArgs is passed into the MirrorMakerMessageHandler 
constructor instead of messageHandlerArgs. The code is quoted from line 256 of 
MirrorMaker.scala on trunk.

{code:title=MirrorMaker.scala|borderStyle=solid}
messageHandler = {
  if (customMessageHandlerClass != null) {
    if (messageHandlerArgs != null)
      Utils.createObject[MirrorMakerMessageHandler](customMessageHandlerClass, rebalanceListenerArgs)
    else
      Utils.createObject[MirrorMakerMessageHandler](customMessageHandlerClass)
  } else {
    defaultMirrorMakerMessageHandler
  }
}
{code}


was (Author: xiaotao183):
[~becket_qin] I think I found a bug in MirrorMaker.scala. As shown in the code 
block below, rebalanceListenerArgs is passed into the MirrorMakerMessageHandler 
constructor instead of messageHandlerArgs.

{code:title=MirrorMaker.scala|borderStyle=solid}
messageHandler = {
  if (customMessageHandlerClass != null) {
    if (messageHandlerArgs != null)
      Utils.createObject[MirrorMakerMessageHandler](customMessageHandlerClass, rebalanceListenerArgs)
    else
      Utils.createObject[MirrorMakerMessageHandler](customMessageHandlerClass)
  } else {
    defaultMirrorMakerMessageHandler
  }
}
{code}

 Refactor Mirror Maker
 -

 Key: KAFKA-1997
 URL: https://issues.apache.org/jira/browse/KAFKA-1997
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-1997.patch, KAFKA-1997_2015-03-03_16:28:46.patch, 
 KAFKA-1997_2015-03-04_15:07:46.patch, KAFKA-1997_2015-03-04_15:42:45.patch, 
 KAFKA-1997_2015-03-05_20:14:58.patch, KAFKA-1997_2015-03-09_18:55:54.patch, 
 KAFKA-1997_2015-03-10_18:31:34.patch, KAFKA-1997_2015-03-11_15:20:18.patch, 
 KAFKA-1997_2015-03-11_19:10:53.patch, KAFKA-1997_2015-03-13_14:43:34.patch, 
 KAFKA-1997_2015-03-17_13:47:01.patch


 Refactor mirror maker based on KIP-3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1997) Refactor Mirror Maker

2015-03-17 Thread TAO XIAO (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366543#comment-14366543
 ] 

TAO XIAO commented on KAFKA-1997:
-

[~becket_qin] I think I found a bug in MirrorMaker.scala. As shown in the code 
block below, rebalanceListenerArgs is passed into the MirrorMakerMessageHandler 
constructor instead of messageHandlerArgs.

{code:title=MirrorMaker.scala|borderStyle=solid}
messageHandler = {
  if (customMessageHandlerClass != null) {
    if (messageHandlerArgs != null)
      Utils.createObject[MirrorMakerMessageHandler](customMessageHandlerClass, rebalanceListenerArgs)
    else
      Utils.createObject[MirrorMakerMessageHandler](customMessageHandlerClass)
  } else {
    defaultMirrorMakerMessageHandler
  }
}
{code}
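
A one-line fix would pass the handler's own args instead; a sketch against the 
quoted code:

{code:title=Suggested fix (sketch)|borderStyle=solid}
if (messageHandlerArgs != null)
  // Pass the message handler's args, not the rebalance listener's:
  Utils.createObject[MirrorMakerMessageHandler](customMessageHandlerClass, messageHandlerArgs)
else
  Utils.createObject[MirrorMakerMessageHandler](customMessageHandlerClass)
{code}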

 Refactor Mirror Maker
 -

 Key: KAFKA-1997
 URL: https://issues.apache.org/jira/browse/KAFKA-1997
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-1997.patch, KAFKA-1997_2015-03-03_16:28:46.patch, 
 KAFKA-1997_2015-03-04_15:07:46.patch, KAFKA-1997_2015-03-04_15:42:45.patch, 
 KAFKA-1997_2015-03-05_20:14:58.patch, KAFKA-1997_2015-03-09_18:55:54.patch, 
 KAFKA-1997_2015-03-10_18:31:34.patch, KAFKA-1997_2015-03-11_15:20:18.patch, 
 KAFKA-1997_2015-03-11_19:10:53.patch, KAFKA-1997_2015-03-13_14:43:34.patch, 
 KAFKA-1997_2015-03-17_13:47:01.patch


 Refactor mirror maker based on KIP-3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2008) Update num.consumer.fetchers description in Kafka documentation

2015-03-07 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2008:

Description: 
The description of num.consumer.fetchers currently shown in the consumer config 
section of the Kafka documentation is not accurate.

num.consumer.fetchers actually controls the maximum number of fetcher threads per 
broker. The actual number of fetcher threads is determined by the combination of 
topic, partition, and num.consumer.fetchers.

Reference source code in AbstractFetcherManager:

private def getFetcherId(topic: String, partitionId: Int): Int = {
  Utils.abs(31 * topic.hashCode() + partitionId) % numFetchers
}

  was:
The description of num.consumer.fetchers currently shown in the consumer config 
section of the Kafka documentation is not accurate.

num.consumer.fetchers actually controls the maximum number of fetcher threads that 
can be created in the consumer. The actual number of fetcher threads is determined 
by the combination of topic, partition, and num.consumer.fetchers.

Reference source code in AbstractFetcherManager:

private def getFetcherId(topic: String, partitionId: Int): Int = {
  Utils.abs(31 * topic.hashCode() + partitionId) % numFetchers
}


 Update num.consumer.fetchers description in Kafka documentation
 ---

 Key: KAFKA-2008
 URL: https://issues.apache.org/jira/browse/KAFKA-2008
 Project: Kafka
  Issue Type: Improvement
  Components: website
Affects Versions: 0.8.2.0
Reporter: TAO XIAO

 The description of num.consumer.fetchers currently shown in the consumer config 
 section of the Kafka documentation is not accurate.
 num.consumer.fetchers actually controls the maximum number of fetcher threads per 
 broker. The actual number of fetcher threads is determined by the combination of 
 topic, partition, and num.consumer.fetchers.
 Reference source code in AbstractFetcherManager:
 private def getFetcherId(topic: String, partitionId: Int): Int = {
   Utils.abs(31 * topic.hashCode() + partitionId) % numFetchers
 }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2008) Update num.consumer.fetchers description in Kafka documentation

2015-03-07 Thread TAO XIAO (JIRA)
TAO XIAO created KAFKA-2008:
---

 Summary: Update num.consumer.fetchers description in Kafka 
documentation
 Key: KAFKA-2008
 URL: https://issues.apache.org/jira/browse/KAFKA-2008
 Project: Kafka
  Issue Type: Improvement
  Components: website
Reporter: TAO XIAO


The description of num.consumer.fetchers currently shown in the consumer config 
section of the Kafka documentation is not accurate.

num.consumer.fetchers actually controls the maximum number of fetcher threads that 
can be created in the consumer. The actual number of fetcher threads is determined 
by the combination of topic, partition, and num.consumer.fetchers.

Reference source code in AbstractFetcherManager:

private def getFetcherId(topic: String, partitionId: Int): Int = {
  Utils.abs(31 * topic.hashCode() + partitionId) % numFetchers
}
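
To make the mapping concrete, a small self-contained sketch (the topic name and 
partition count are assumed values, and math.abs stands in for Kafka's Utils.abs):

{code:title=Fetcher id mapping example (illustrative)|borderStyle=solid}
object FetcherIdExample extends App {
  val numFetchers = 4 // num.consumer.fetchers

  def getFetcherId(topic: String, partitionId: Int): Int =
    math.abs(31 * topic.hashCode() + partitionId) % numFetchers

  // With 8 partitions and numFetchers = 4, several partitions share a
  // fetcher id; at most numFetchers distinct ids are ever used per broker.
  for (p <- 0 until 8)
    println(s"partition $p -> fetcher ${getFetcherId("my-topic", p)}")
}
{code}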



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2008) Update num.consumer.fetchers description in Kafka documentation

2015-03-07 Thread TAO XIAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TAO XIAO updated KAFKA-2008:

Affects Version/s: 0.8.2.0

 Update num.consumer.fetchers description in Kafka documentation
 ---

 Key: KAFKA-2008
 URL: https://issues.apache.org/jira/browse/KAFKA-2008
 Project: Kafka
  Issue Type: Improvement
  Components: website
Affects Versions: 0.8.2.0
Reporter: TAO XIAO

 The description of num.consumer.fetchers currently shown in the consumer config 
 section of the Kafka documentation is not accurate.
 num.consumer.fetchers actually controls the maximum number of fetcher threads 
 that can be created in the consumer. The actual number of fetcher threads is 
 determined by the combination of topic, partition, and num.consumer.fetchers.
 Reference source code in AbstractFetcherManager:
 private def getFetcherId(topic: String, partitionId: Int): Int = {
   Utils.abs(31 * topic.hashCode() + partitionId) % numFetchers
 }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)