[jira] [Created] (KAFKA-5590) Delete Kafka Topic Fails to Complete After Enabling Ranger Kafka Plugin

2017-07-13 Thread Chaofeng Zhao (JIRA)
Chaofeng Zhao created KAFKA-5590:


 Summary: Delete Kafka Topic Fails to Complete After Enabling Ranger Kafka Plugin
 Key: KAFKA-5590
 URL: https://issues.apache.org/jira/browse/KAFKA-5590
 Project: Kafka
  Issue Type: Bug
  Components: security
Affects Versions: 0.10.0.0
 Environment: kafka and ranger under ambari
Reporter: Chaofeng Zhao


Hi:
Recently I have been developing some applications for Kafka under Ranger. But when 
I enable the Ranger Kafka plugin I cannot delete a Kafka topic completely, even 
though 'delete.topic.enable=true' is set. And I find that when the Ranger Kafka 
plugin is enabled the operation must be authorized. How can I delete a Kafka topic 
completely under Ranger? Thank you.
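
For reference, a sketch of the usual deletion flow once the brokers allow it (host,
port and topic name below are placeholders, not taken from the report; whether Ranger
then authorizes the operation depends on the configured policies):

    # 1. Every broker needs deletion enabled in server.properties:
    #      delete.topic.enable=true
    # 2. Mark the topic for deletion with the stock tool (in 0.10.x it talks to ZooKeeper):
    $ bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic my-topic
    # 3. Check the result; the topic may stay "marked for deletion" if the brokers
    #    cannot complete the delete, e.g. because the plugin denies the operation.
    $ bin/kafka-topics.sh --zookeeper localhost:2181 --list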




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [VOTE] KIP-168: Add TotalTopicCount metric per cluster

2017-07-13 Thread Joel Koshy
+1

On Thu, Jul 13, 2017 at 12:24 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> +1 (non-binding)
>
>
>
>
> From:   Dong Lin 
> To: dev@kafka.apache.org
> Date:   07/12/2017 10:43 AM
> Subject:Re: [VOTE] KIP-168: Add TotalTopicCount metric per cluster
>
>
>
> +1 (non-binding)
>
> On Wed, Jul 12, 2017 at 10:04 AM, Abhishek Mendhekar <
> abhishek.mendhe...@gmail.com> wrote:
>
> > Hello Kafka Dev,
> >
> > I would like to get votes on KIP-168. Here is the updated proposal based
> on
> > the discussion so far.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 168%3A+Add+GlobalTopicCount+metric+per+cluster
> >
> > Thanks,
> > Abhishek
> >
> > Email Thread - http://mail-archives.apache.org/mod_mbox/kafka-dev/201706
> .
> > mbox/%3CCAMcwe-ugep-UiSn9TkKEMwwTM%3DAzGC4jPro9LnyYRezyZg_NKA%
> > 40mail.gmail.com%3E
> >
> > On Fri, Jun 23, 2017 at 5:16 AM, Mickael Maison
> 
> > wrote:
> >
> > > +1 (non-binding)
> > > Thanks
> > >
> > > On Thu, Jun 22, 2017 at 6:07 PM, Onur Karaman
> > >  wrote:
> > > > +1
> > > >
> > > > On Thu, Jun 22, 2017 at 10:05 AM, Dong Lin 
> > wrote:
> > > >
> > > >> Thanks for the KIP. +1 (non-binding)
> > > >>
> > > >> On Wed, Jun 21, 2017 at 1:17 PM, Abhishek Mendhekar <
> > > >> abhishek.mendhe...@gmail.com> wrote:
> > > >>
> > > >> > Hi Kafka Dev,
> > > >> >
> > > >> > I did like to start the voting on -
> > > >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > >> > 168%3A+Add+TotalTopicCount+metric+per+cluster
> > > >> >
> > > >> > Discussions will continue on -
> > > >> > http://mail-archives.apache.org/mod_mbox/kafka-dev/201706.
> > > >> > mbox/%3CCAMcwe-ugep-UiSn9TkKEMwwTM%3DAzGC4jPro9LnyYRezyZg_NKA%
> > > >> > 40mail.gmail.com%3E
> > > >> >
> > > >> > Thanks,
> > > >> > Abhishek
> > > >> >
> > > >>
> > >
> >
> >
> >
> > --
> > Abhishek Mendhekar
> > abhishek.mendhe...@gmail.com | 818.263.7030
> >
>
>
>
>
>


Re: [ANNOUNCE] New Kafka PMC member Ismael Juma

2017-07-13 Thread James Cheng
Congrats Ismael!

-James

> On Jul 5, 2017, at 1:55 PM, Jun Rao  wrote:
> 
> Hi, Everyone,
> 
> Ismael Juma has been active in the Kafka community since he became
> a Kafka committer about a year ago. I am glad to announce that Ismael is
> now a member of Kafka PMC.
> 
> Congratulations, Ismael!
> 
> Jun



[DISCUSS] KIP-177 Consumer perf tool should count rebalance time

2017-07-13 Thread Hu Xi
Hi all, I opened up a new KIP (KIP-177) concerning the consumer perf tool counting 
and showing rebalance time in its output. Feel free to leave your comments here. 
Thanks in advance.
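
For context, a sketch of the tool in question (broker address, topic and message
count are placeholders); the KIP proposes adding rebalance time to this tool's output:

    $ bin/kafka-consumer-perf-test.sh --broker-list localhost:9092 \
        --topic my-topic --messages 1000000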


Re: Clarification on KafkaConsumer manual partition assignment

2017-07-13 Thread venkata sastry akella
Let's say a consumer is not part of a group: is assigning partitions manually in
that consumer going to affect the other consumers that are part of a consumer
group?

On Thu, Jul 13, 2017 at 10:36 AM, Paolo Patierno  wrote:

> Assigning partitions manually has no relation with consumer groups. I mean
> ... a consumer doesn't need to be part of a consumer group (so specifying
> group.id) for having a partition assigned manually.
> 
> From: venkata sastry akella 
> Sent: Thursday, July 13, 2017 7:09:33 PM
> To: dev@kafka.apache.org
> Subject: Clarification on KafkaConsumer manual partition assignment
>
> KafkaConsumer API doc has the following statement.
>
> "Note that it isn't possible to mix manual partition assignment (i.e. using
> assign
>  KafkaConsumer.html#assign(java.util.Collection)>)
> with dynamic partition assignment through topic subscription (i.e. using
> subscribe
>  KafkaConsumer.html#subscribe(java.util.Collection)>
> )."
>
> Question:  Does this statement applies to only one consumer group  or
> multiple consumer groups ?
> Meaning, can one consumer group has manual assignment and other consumer
> group has automatic assignment  ?   OR if atleast one consumer group has
> manual assignment, then automatic assignment doesnt work for any other
> consumer group also ?
>
> Thanks for clarifying this.
>


Re: [DISCUSS] KIP-169 Lag-Aware Partition Assignment Strategy

2017-07-13 Thread Vahid S Hashemian
Hi Grant,

Thank you for the KIP. Very well written and easy to understand.

One question I have after reading the KIP: what are we targeting by using 
a lag-aware assignor?

Is the goal to speed up consuming all messages from a topic?
If that is the case, it sounds to me that assigning partitions based on 
only lag information would not be enough.
There are other factors, like network latency, how fast a consumer is 
processing data, and consumer configuration (such as fetch.max.bytes, 
max.partition.fetch.bytes, ...) that impact how fast a consumer is able to 
consume messages.

For example, let's say we have a topic with 4 partitions, and the lags are 
1000, 100, 10, 1 for partitions 0 to 3.
If we have two consumers c1 and c2 in the group, the Lag Aware assignment 
will be
- c1: p0, p3 (total lag of 1001)
- c2: p1, p2 (total lag of 110)
Now, if c1 consumes at 10% of the speed at which c2 consumes, then the 
opposite assignment (c1: p1, p2; c2: p0, p3) would be more 
reasonable.
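
For concreteness, here is a rough sketch (an illustration only, not the KIP-169
implementation) of one assignment rule that reproduces the numbers above: walk the
partitions in descending lag order and give each one to the consumer with the fewest
partitions so far, breaking ties by the smallest accumulated lag.

    import java.util.*;

    public class LagAwareAssignmentSketch {

        // Returns consumer -> assigned partition ids.
        static Map<String, List<Integer>> assign(List<String> consumers, long[] lags) {
            Map<String, List<Integer>> assignment = new LinkedHashMap<>();
            Map<String, Long> assignedLag = new LinkedHashMap<>();
            for (String c : consumers) {
                assignment.put(c, new ArrayList<>());
                assignedLag.put(c, 0L);
            }

            // Partition ids sorted by lag, largest first.
            List<Integer> partitions = new ArrayList<>();
            for (int p = 0; p < lags.length; p++) partitions.add(p);
            partitions.sort((a, b) -> Long.compare(lags[b], lags[a]));

            for (int p : partitions) {
                // Fewest partitions first, then smallest accumulated lag.
                String target = consumers.stream()
                        .min(Comparator.comparingInt((String c) -> assignment.get(c).size())
                                .thenComparingLong(c -> assignedLag.get(c)))
                        .get();
                assignment.get(target).add(p);
                assignedLag.put(target, assignedLag.get(target) + lags[p]);
            }
            return assignment;
        }

        public static void main(String[] args) {
            // Lags 1000, 100, 10, 1 for partitions p0..p3, consumers c1 and c2.
            System.out.println(assign(Arrays.asList("c1", "c2"), new long[] {1000, 100, 10, 1}));
            // -> {c1=[0, 3], c2=[1, 2]}: c1 carries lag 1001, c2 carries lag 110.
        }
    }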

I hope I'm not missing something in the KIP, and sorry if I misunderstood 
the purpose.

Thanks.
--Vahid




From:   Grant Neale 
To: "dev@kafka.apache.org" 
Date:   06/18/2017 11:04 AM
Subject:[DISCUSS] KIP-169 Lag-Aware Partition Assignment Strategy



Hi all,

I have raised a new KIP at 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-169+-+Lag-Aware+Partition+Assignment+Strategy


The corresponding JIRA is at 
https://issues.apache.org/jira/browse/KAFKA-5337

I look forward to your feedback.

Regards,
Grant Neale






Re: [VOTE] KIP-168: Add TotalTopicCount metric per cluster

2017-07-13 Thread Vahid S Hashemian
+1 (non-binding)




From:   Dong Lin 
To: dev@kafka.apache.org
Date:   07/12/2017 10:43 AM
Subject:Re: [VOTE] KIP-168: Add TotalTopicCount metric per cluster



+1 (non-binding)

On Wed, Jul 12, 2017 at 10:04 AM, Abhishek Mendhekar <
abhishek.mendhe...@gmail.com> wrote:

> Hello Kafka Dev,
>
> I would like to get votes on KIP-168. Here is the updated proposal based 
on
> the discussion so far.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 168%3A+Add+GlobalTopicCount+metric+per+cluster
>
> Thanks,
> Abhishek
>
> Email Thread - http://mail-archives.apache.org/mod_mbox/kafka-dev/201706
.
> mbox/%3CCAMcwe-ugep-UiSn9TkKEMwwTM%3DAzGC4jPro9LnyYRezyZg_NKA%
> 40mail.gmail.com%3E
>
> On Fri, Jun 23, 2017 at 5:16 AM, Mickael Maison 

> wrote:
>
> > +1 (non-binding)
> > Thanks
> >
> > On Thu, Jun 22, 2017 at 6:07 PM, Onur Karaman
> >  wrote:
> > > +1
> > >
> > > On Thu, Jun 22, 2017 at 10:05 AM, Dong Lin 
> wrote:
> > >
> > >> Thanks for the KIP. +1 (non-binding)
> > >>
> > >> On Wed, Jun 21, 2017 at 1:17 PM, Abhishek Mendhekar <
> > >> abhishek.mendhe...@gmail.com> wrote:
> > >>
> > >> > Hi Kafka Dev,
> > >> >
> > >> > I did like to start the voting on -
> > >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> > 168%3A+Add+TotalTopicCount+metric+per+cluster
> > >> >
> > >> > Discussions will continue on -
> > >> > http://mail-archives.apache.org/mod_mbox/kafka-dev/201706.
> > >> > mbox/%3CCAMcwe-ugep-UiSn9TkKEMwwTM%3DAzGC4jPro9LnyYRezyZg_NKA%
> > >> > 40mail.gmail.com%3E
> > >> >
> > >> > Thanks,
> > >> > Abhishek
> > >> >
> > >>
> >
>
>
>
> --
> Abhishek Mendhekar
> abhishek.mendhe...@gmail.com | 818.263.7030
>






Re: Clarification on KafkaConsumer manual partition assignment

2017-07-13 Thread venkata sastry akella
Here is the link:

https://kafka.apache.org/0100/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#manualassignment

On Thu, Jul 13, 2017 at 11:28 AM, M. Manna  wrote:

> "A consumer doesn't need to be part of a consumer group"
> - where do you see this documented please ?
>
> On 13 Jul 2017 6:36 pm, "Paolo Patierno"  wrote:
>
> Assigning partitions manually has no relation with consumer groups. I mean
> ... a consumer doesn't need to be part of a consumer group (so specifying
> group.id) for having a partition assigned manually.
> 
> From: venkata sastry akella 
> Sent: Thursday, July 13, 2017 7:09:33 PM
> To: dev@kafka.apache.org
> Subject: Clarification on KafkaConsumer manual partition assignment
>
> KafkaConsumer API doc has the following statement.
>
> "Note that it isn't possible to mix manual partition assignment (i.e. using
> assign
>  KafkaConsumer.html#assign(java.util.Collection)>)
> with dynamic partition assignment through topic subscription (i.e. using
> subscribe
>  KafkaConsumer.html#subscribe(java.util.Collection)>
> )."
>
> Question:  Does this statement applies to only one consumer group  or
> multiple consumer groups ?
> Meaning, can one consumer group has manual assignment and other consumer
> group has automatic assignment  ?   OR if atleast one consumer group has
> manual assignment, then automatic assignment doesnt work for any other
> consumer group also ?
>
> Thanks for clarifying this.
>


Re: Clarification on KafkaConsumer manual partition assignment

2017-07-13 Thread M. Manna
"A consumer doesn't need to be part of a consumer group"
- where do you see this documented please ?

On 13 Jul 2017 6:36 pm, "Paolo Patierno"  wrote:

Assigning partitions manually has no relation with consumer groups. I mean
... a consumer doesn't need to be part of a consumer group (so specifying
group.id) for having a partition assigned manually.

From: venkata sastry akella 
Sent: Thursday, July 13, 2017 7:09:33 PM
To: dev@kafka.apache.org
Subject: Clarification on KafkaConsumer manual partition assignment

KafkaConsumer API doc has the following statement.

"Note that it isn't possible to mix manual partition assignment (i.e. using
assign
)
with dynamic partition assignment through topic subscription (i.e. using
subscribe

)."

Question:  Does this statement applies to only one consumer group  or
multiple consumer groups ?
Meaning, can one consumer group has manual assignment and other consumer
group has automatic assignment  ?   OR if atleast one consumer group has
manual assignment, then automatic assignment doesnt work for any other
consumer group also ?

Thanks for clarifying this.


Re: Clarification on KafkaConsumer manual partition assignment

2017-07-13 Thread Paolo Patierno
Assigning partitions manually has no relation to consumer groups. I mean, 
a consumer doesn't need to be part of a consumer group (i.e. it doesn't need to 
specify group.id) to have a partition assigned manually.
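
A minimal sketch of that in code (broker address, topic and partition are
placeholders; note that no group.id is configured):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class ManualAssignmentSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // no group.id set
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Manual assignment: take partition 0 of "my-topic" directly,
                // without going through the group coordinator.
                consumer.assign(Collections.singletonList(new TopicPartition("my-topic", 0)));

                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }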

From: venkata sastry akella 
Sent: Thursday, July 13, 2017 7:09:33 PM
To: dev@kafka.apache.org
Subject: Clarification on KafkaConsumer manual partition assignment

KafkaConsumer API doc has the following statement.

"Note that it isn't possible to mix manual partition assignment (i.e. using
assign
)
with dynamic partition assignment through topic subscription (i.e. using
subscribe

)."

Question:  Does this statement applies to only one consumer group  or
multiple consumer groups ?
Meaning, can one consumer group has manual assignment and other consumer
group has automatic assignment  ?   OR if atleast one consumer group has
manual assignment, then automatic assignment doesnt work for any other
consumer group also ?

Thanks for clarifying this.


Clarification on KafkaConsumer manual partition assignment

2017-07-13 Thread venkata sastry akella
KafkaConsumer API doc has the following statement.

"Note that it isn't possible to mix manual partition assignment (i.e. using
assign
)
with dynamic partition assignment through topic subscription (i.e. using
subscribe

)."

Question:  Does this statement applies to only one consumer group  or
multiple consumer groups ?
Meaning, can one consumer group has manual assignment and other consumer
group has automatic assignment  ?   OR if atleast one consumer group has
manual assignment, then automatic assignment doesnt work for any other
consumer group also ?

Thanks for clarifying this.


New AdminClient: no rack awareness support

2017-07-13 Thread Paolo Patierno
Hi devs,


can you confirm that executing a "rack aware" replica assignment is not 
possible with the new AdminClient?

It seems that the current Scala TopicCommand tool retrieves information about the 
brokers and their racks and then checks the rack-aware mode requested by the user.

In any case it passes to AdminUtils only the brokers which satisfy the rack 
awareness (which could be all of them or only a subset).

With the new AdminClient, the CreateTopicsRequest flows to the AdminManager in the 
broker, which seems to use all the available brokers.


Is my understanding right?
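
For illustration, a sketch of the only client-side alternative I can think of: compute
the rack-aware placement yourself and pass an explicit replica assignment through the
new AdminClient (broker ids, the rack layout implied by them, and the topic name are
invented); whether this is an acceptable substitute for broker-side rack awareness is
exactly the question above:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    public class ManualReplicaAssignmentSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");

            // Hypothetical partition -> replica broker ids, chosen by the caller so
            // that the replicas of each partition land on different racks.
            Map<Integer, List<Integer>> replicaAssignment = new HashMap<>();
            replicaAssignment.put(0, Arrays.asList(1, 3));
            replicaAssignment.put(1, Arrays.asList(2, 4));

            try (AdminClient admin = AdminClient.create(props)) {
                admin.createTopics(Arrays.asList(new NewTopic("my-topic", replicaAssignment)))
                     .all()
                     .get();
            }
        }
    }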


Thanks,


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Windows Embedded & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


Re: Apply for access for Kafka KIP creation permission

2017-07-13 Thread Guozhang Wang
Hi Xi,

Have granted you the permissions.

Guozhang

On Wed, Jul 12, 2017 at 7:58 PM, Hu Xi  wrote:

> Apply for access for KIP creation permission.
>
>
> WikiID: huxi_2b
>
> Email:   huxi...@hotmail.com
>



-- 
-- Guozhang


Re: [ANNOUNCE] New Kafka PMC member Ismael Juma

2017-07-13 Thread Becket Qin
Congrats, Ismael!

On Mon, Jul 10, 2017 at 6:37 AM, Neha Narkhede  wrote:

> Very well deserved. Congratulations Ismael!
> On Mon, Jul 10, 2017 at 6:33 AM Viktor Somogyi 
> wrote:
>
> > Congrats Ismael :)
> >
> > On Fri, Jul 7, 2017 at 6:59 PM, Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> >
> > > Congratulations Ismael!
> > >
> > >
> > > On Fri, Jul 7, 2017 at 8:26 AM Eno Thereska 
> > > wrote:
> > >
> > > > Congrats!
> > > >
> > > > Eno
> > > > > On 7 Jul 2017, at 16:13, Kamal C 
> > > wrote:
> > > > >
> > > > > Congratulations Ismael !
> > > > >
> > > > > On 06-Jul-2017 14:11, "Ismael Juma"  wrote:
> > > > >
> > > > >> Thanks everyone!
> > > > >>
> > > > >> Ismael
> > > > >>
> > > > >> On Wed, Jul 5, 2017 at 9:55 PM, Jun Rao  wrote:
> > > > >>
> > > > >>> Hi, Everyone,
> > > > >>>
> > > > >>> Ismael Juma has been active in the Kafka community since he
> became
> > > > >>> a Kafka committer about a year ago. I am glad to announce that
> > Ismael
> > > > is
> > > > >>> now a member of Kafka PMC.
> > > > >>>
> > > > >>> Congratulations, Ismael!
> > > > >>>
> > > > >>> Jun
> > > > >>>
> > > > >>
> > > >
> > > >
> > >
> >
> --
> Thanks,
> Neha
>


Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-07-13 Thread Becket Qin
I am a little hesitant to add the configuration to the client. It would be
more flexible, but this doesn't seem like something users should have to worry about
(I imagine many people would simply set the backoff to 0 just to get a fast rebalance).
I am wondering if the following variant of the current solution will
address the problem.

1. broker will start to rebalance immediately when the first member joins
the group at T0.

2. If another member joins the group at T1 which is between T0 and T0 +
delta (configurable), the broker will wait until T1 + delta then do the
rebalance. Any additional member joining before the rebalance kicks off
would result in the delay of the rebalance with the same extension logic as
we have now. We can also try some exponential back off if needed.

This should help address the console consumer problem. Not sure if there
are other cases that need to be considered, though.
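
For reference, the delay being discussed is the broker-side setting this KIP introduced;
a minimal sketch of tuning it (assuming the config name group.initial.rebalance.delay.ms
and the 3 second default mentioned later in the thread):

    # config/server.properties (sketch; values are illustrative)
    # How long the coordinator waits for more members to join before the first
    # rebalance of an empty group.
    group.initial.rebalance.delay.ms=3000
    # Quickstart-style override discussed below: start rebalancing immediately.
    # group.initial.rebalance.delay.ms=0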

Thanks,

Jiangjie (Becket) Qin

On Mon, Jul 10, 2017 at 5:28 PM, Greg Fodor  wrote:

> Found this thread after posting an alternative idea after we starting
> hitting this issue ourselves for a job that has a lot of state stores and
> topic partitions. My suggestion was to have consumer groups have a
> configurable minimum member count before consumption begins, but that has
> its own trade offs and benefits (maybe a different KIP.)
>
> One suggestion I had is maybe there is some relatively fool-proof heuristic
> that can cause Kafka Streams to emit an INFO/WARN to the log to inform the
> user of the configuration if it detects a rapid rebalance on startup due to
> new nodes joining? For example, if streams detects a rebalance, before
> processors are initialized, that only add new nodes, if the configuration
> has not been overridden, write to the log?
>
>
>
> On Thu, Jun 8, 2017 at 2:56 PM, Guozhang Wang  wrote:
>
> > Just recapping on client-side v.s. broker-side config: we did discuss
> about
> > adding this as a client-side config and bump up join-group request (I
> think
> > both Ismael and Ewen questioned about it) to include this configured
> value
> > to the broker. I cannot remember if there is any strong motivations
> against
> > going to the client-side config, except that we felt a default non-zero
> > value will benefit most users assuming they start with more than one
> member
> > in their group but only advanced users would really realize this config
> > existing and tune it themselves.
> >
> > I agree that we could re-consider it for the next release if we observe
> > that it is actually affecting more users than benefiting them.
> >
> > Guozhang
> >
> > On Wed, Jun 7, 2017 at 2:26 AM, Damian Guy  wrote:
> >
> > > Hi Jun/Ismael,
> > >
> > > Sounds good to me.
> > >
> > > Thanks,
> > > Damian
> > >
> > > On Tue, 6 Jun 2017 at 23:08 Ismael Juma  wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > The console consumer issue also came up in a conversation I was
> having
> > > > recently. Seems like the config/server.properties change is a
> > reasonable
> > > > compromise given that we have other defaults that are for
> development.
> > > >
> > > > Ismael
> > > >
> > > > On Tue, Jun 6, 2017 at 10:59 PM, Jun Rao  wrote:
> > > >
> > > > > Hi, Everyone,
> > > > >
> > > > > Sorry for being late on this thread. I just came across this
> thread.
> > I
> > > > have
> > > > > a couple of concerns on this. (1) It seems the amount of delay will
> > be
> > > > > application specific. So, it seems that it's better for the delay
> to
> > > be a
> > > > > client side config instead of a server side one? (2) When running
> > > console
> > > > > consumer in quickstart, a minimum of 3 sec delay seems to be a bad
> > > > > experience for our users.
> > > > >
> > > > > Since we are getting late into the release cycle, it may be a bit
> too
> > > > late
> > > > > to make big changes in the 0.11 release. Perhaps we should at least
> > > > > consider overriding the delay in config/server.properties to 0 to
> > > improve
> > > > > the quickstart experience?
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > >
> > > > > On Tue, Apr 11, 2017 at 12:19 AM, Damian Guy  >
> > > > wrote:
> > > > >
> > > > > > Hi Onur,
> > > > > >
> > > > > > It was in my previous email. But here it is again.
> > > > > >
> > > > > > 
> > > > > >
> > > > > > 1. Better rebalance timing. We will try to rebalance only when
> all
> > > the
> > > > > > consumers in a group have joined. The challenge would be someone
> > has
> > > to
> > > > > > define what does ALL consumers mean, it could either be a time or
> > > > number
> > > > > of
> > > > > > consumers, etc.
> > > > > >
> > > > > > 2. Avoid frequent rebalance. For example, if there are 100
> > consumers
> > > > in a
> > > > > > group, today, in the worst case, we may end up with 100
> rebalances
> > > even
> > > > > if
> 

Re: [VOTE] KIP-167: Add interface for the state store restoration process

2017-07-13 Thread Eno Thereska
+1 (non-binding).

Thanks Bill.

Eno
> On 12 Jul 2017, at 09:12, Bill Bejeck  wrote:
> 
> All,
> 
> Now that we've concluded a second round of discussion on KIP-167, I'd like
> to start a vote.
> 
> 
> Thanks,
> Bill
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-167%3A+Add+interface+for+the+state+store+restoration+process



Re: [ANNOUNCE] New Kafka PMC member Jason Gustafson

2017-07-13 Thread Becket Qin
Congratulations, Jason!

On Wed, Jul 12, 2017 at 7:05 PM, Kamal C 
wrote:

> Congrats Jason!
>
> On 13-Jul-2017 00:32, "Jun Rao"  wrote:
>
> > Congratulations, Jason!
> >
> > Jun
> >
> > On Tue, Jul 11, 2017 at 10:32 PM, Guozhang Wang 
> > wrote:
> >
> > > Hi Everyone,
> > >
> > > Jason Gustafson has been very active in contributing to the Kafka
> > community
> > > since he became a Kafka committer last September and has done lots of
> > > significant work including the most recent exactly-once project. In
> > > addition, Jason has initiated or participated in the design discussion
> of
> > > more than 30 KIPs in which he has consistently brought in great
> judgement
> > > and insights throughout his communication. I am glad to announce that
> > Jason
> > > has now become a PMC member of the project.
> > >
> > > Congratulations, Jason!
> > >
> > > -- Guozhang
> > >
> >
>


Comments on JIRAs

2017-07-13 Thread Tom Bentley
The project recently switched from sending all JIRA events to the dev
mailing list to sending just issue creations. This seems like a good thing
because the dev mailing list was very noisy before, and if you want to see
all the JIRA comments etc. you can subscribe to the JIRA list. If you don't
subscribe to the JIRA list you need to take the time to become a watcher on
each issue that interests you.

However, the flip-side of this is that when you comment on a JIRA you have
no idea who's going to get notified (apart from the watchers). In
particular, commenters don't know whether any of the committers will see
their comment, unless they mention them individually by name. But for an
issue in which no committer has thus far taken an interest, who is the
commenter to @mention? There is no @kafka_committers handle that you can use to
bring the comment to the attention of the whole group of committers.

There is also the fact that there are an awful lot of historical issues
which interested people won't be watching because they assumed at the time
that they'd get notified via the dev list.

I can well imagine that people who aren't working a lot on Kafka won't
realise that there's a good chance their comments on JIRAs won't reach
the relevant people.

I'm mentioning this mainly to highlight to people that this is what's
happening, because it wasn't obvious to me that commenting on a JIRA might
not reach (all of) the committers/interested parties.

Cheers,

Tom


[jira] [Resolved] (KAFKA-5589) Bump dependency of Kafka 0.10.x to the latest one

2017-07-13 Thread Piotr Nowojski (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski resolved KAFKA-5589.
---
Resolution: Not A Problem

Sorry, created an issue in wrong project.

> Bump dependency of Kafka 0.10.x to the latest one
> -
>
> Key: KAFKA-5589
> URL: https://issues.apache.org/jira/browse/KAFKA-5589
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Piotr Nowojski
>
> We are using pretty old Kafka version for 0.10. Besides any bug fixes and 
> improvements that were made between 0.10.0.1 and 0.10.2.1, it 0.10.2.1 
> version is more similar to 0.11.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5589) Bump dependency of Kafka 0.10.x to the latest one

2017-07-13 Thread Piotr Nowojski (JIRA)
Piotr Nowojski created KAFKA-5589:
-

 Summary: Bump dependency of Kafka 0.10.x to the latest one
 Key: KAFKA-5589
 URL: https://issues.apache.org/jira/browse/KAFKA-5589
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Piotr Nowojski


We are using a pretty old Kafka version for 0.10. Besides the bug fixes and 
improvements that were made between 0.10.0.1 and 0.10.2.1, the 0.10.2.1 version 
is more similar to 0.11.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Kafka doc : official repo or web site repo ?

2017-07-13 Thread Paolo Patierno
Hi guys,

I have a big doubt about where the documentation lives and what to do to update it.

I see the docs folder in the Kafka repo (where I have had a PR on Kafka Connect open 
for a month, not yet merged), but then I see the Kafka web site repo which contains 
the same documentation.

Reading the new Kafka Streams doc I noticed that it seems to live only in the Kafka 
web site repo and not in the Kafka repo.


Can you clarify where to submit PRs for the documentation? In which repo?


Thanks,


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Windows Embedded & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


[GitHub] kafka pull request #3525: KAFKA-5431: cleanSegments should not set length fo...

2017-07-13 Thread huxihx
GitHub user huxihx opened a pull request:

https://github.com/apache/kafka/pull/3525

KAFKA-5431: cleanSegments should not set length for cleanable segment files

For a compacted topic with preallocate enabled, during log cleaning, 
LogCleaner.cleanSegments does not have to pre-allocate the underlying file size 
since we only want to store the cleaned data in the file.

It's believed that this fix should also solve KAFKA-5582.
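
For context, a sketch of the kind of topic that exercises this code path (topic name,
partition count and connection details are placeholders): a compacted topic with file
preallocation enabled.

    $ bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic compacted-topic \
        --partitions 1 --replication-factor 1 \
        --config cleanup.policy=compact --config preallocate=true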

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/huxihx/kafka log_compact_test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3525.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3525


commit e14436a2abb25c5b324efba5e431e5e1afb6e05a
Author: huxihx 
Date:   2017-07-13T08:28:50Z

KAFKA-5431: LogCleaner stopped due to 
org.apache.kafka.common.errors.CorruptRecordException

For a compacted topic with preallocate enabled, during log cleaning, 
LogCleaner.cleanSegments does not have to pre-allocate the underlying file size 
since we only want to store the cleaned data in the file.

It's believed that this fix should also solve KAFKA-5582.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3524: KAFKA-5588: useless --new-consumer option

2017-07-13 Thread ppatierno
GitHub user ppatierno opened a pull request:

https://github.com/apache/kafka/pull/3524

KAFKA-5588: useless --new-consumer option

Get rid of the --new-consumer option for the ConsoleConsumer

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ppatierno/kafka kafka-5588

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3524.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3524


commit f9309497551c7058696466c06755defddad6238c
Author: ppatierno 
Date:   2017-07-13T07:53:16Z

Get rid of the --new-consumer option for the ConsoleConsumer




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-5588) ConsoleConsumer: useless --new-consumer option

2017-07-13 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5588:
-

 Summary: ConsoleConsumer: useless --new-consumer option
 Key: KAFKA-5588
 URL: https://issues.apache.org/jira/browse/KAFKA-5588
 Project: Kafka
  Issue Type: Bug
Reporter: Paolo Patierno
Assignee: Paolo Patierno
Priority: Minor


Hi,
it seems to me that the --new-consumer option on the ConsoleConsumer is useless.
The useOldConsumer var is tied to specifying --zookeeper on the command line, in which 
case the --bootstrap-server option (or --new-consumer) can't be used.
If you use the --bootstrap-server option then the new consumer is used automatically, 
so there is no need for --new-consumer.
It turns out that choosing the old or the new consumer is determined solely by using the 
--zookeeper or the --bootstrap-server option (which can't be used together, so I 
can't use the new consumer while connecting to ZooKeeper).
I'm going to remove the --new-consumer option from the tool.
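
A quick sketch of the two invocation styles involved (host, port and topic are
placeholders):

    # Old consumer: selected implicitly by --zookeeper.
    $ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic my-topic
    # New consumer: selected implicitly by --bootstrap-server, which is why a
    # separate --new-consumer flag adds nothing.
    $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic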

Thanks,
Paolo.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5587) Processor got uncaught exception: NullPointerException

2017-07-13 Thread Dan (JIRA)
Dan created KAFKA-5587:
--

 Summary: Processor got uncaught exception: NullPointerException
 Key: KAFKA-5587
 URL: https://issues.apache.org/jira/browse/KAFKA-5587
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.10.1.1
Reporter: Dan


[2017-07-12 21:56:39,964] ERROR Processor got uncaught exception. 
(kafka.network.Processor)
java.lang.NullPointerException
at 
kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:490)
at 
kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:487)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
kafka.network.Processor.processCompletedReceives(SocketServer.scala:487)
at kafka.network.Processor.run(SocketServer.scala:417)
at java.lang.Thread.run(Thread.java:745)

Does anyone know the cause of this exception? What is its effect?
When this exception occurred, the log also showed that the broker was frequently 
shrinking the ISR to itself. Are these two things related?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)