Re: Question how to drop an unused group id?

2018-08-14 Thread Vahid S Hashemian
These monitors are not consuming / committing offsets as part of the 
group, are they?
Normally that should not be the case (e.g. the consumer group command tool 
retrieves the current position and the log end offset but it doesn't 
affect how long the group lives).

Only consumers that commit offsets (automatically or manually) will extend 
the lifecycle of a group.
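For illustration, here is a minimal read-only lag check in Java, in the spirit of what such monitors do. It is a sketch: broker, topic, and group names are placeholders, and because enable.auto.commit is false and nothing ever calls commitSync/commitAsync, it reads the group's position without extending the group's lifetime.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class LagMonitor {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker:9092");  // placeholder
            props.put("group.id", "my-group");              // the group being monitored
            props.put("enable.auto.commit", "false");       // read-only: never commit
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder
                OffsetAndMetadata committed = consumer.committed(tp);  // group's last commit
                long end = consumer.endOffsets(Collections.singletonList(tp)).get(tp);
                System.out.printf("lag=%d%n",
                    committed == null ? end : end - committed.offset());
            }
        }
    }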

--Vahid



From:   "彭鹏 (Peyton)"
To:     Vahid S Hashemian
Cc:     "彭鹏", users@kafka.apache.org
Date:   08/13/2018 07:11 PM
Subject: Re: Question how to drop an unused group id?



Yes, clearly there is no process committing offsets for those group ids.
But since our monitors (Kafka Manager and a self-coded monitor) always try
to detect the latest offsets, does this affect the removal?

Regards,
Peyton

湖南福米信息科技有限责任公司 Hunan Fumi Financial Technology Co., Ltd.
===
地址 / Add : 湖南省长沙市高新区岳麓西大道588号芯城科技园3栋5F (41)
             5/F, Building 3, Xincheng Science & Technology Park
             No.588 Yuelu West Ave, Changsha 41, Hunan, P.R.China
手机 / Cell : (86) 150 2113 9776
邮箱 / Mail : p...@webull.com
===

On Aug 14, 2018, at 12:25 AM, Vahid S Hashemian wrote:

It should be automatically removed if there is no offset commit in the 
group for 24 hours.
Are you sure there is no process committing offset for the group?

If kafka-manager lists them they're probably still there.
You can also use the consumer group command to check this:
> bin/kafka-consumer-groups.sh --bootstrap-server [broker]:9092 --list
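To check a specific group's committed offsets and lag, a sketch (the group name is a placeholder, [broker] as above):
> bin/kafka-consumer-groups.sh --bootstrap-server [broker]:9092 --describe --group my-group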

--Vahid



From:   "彭鹏 (Peyton)"
To:     Vahid S Hashemian
Cc:     users@kafka.apache.org, "彭鹏"
Date:   08/13/2018 09:18 AM
Subject: Re: Question how to drop an unused group id?



Thanks! Hmm, one last question: then by default, the unused group ids
should have already been removed by this setting?

We use kafka-manager to manage the Kafka cluster, and we can still see the
expired group ids. As we discussed previously, they don't actually exist
anymore, right?

Best Regards,
Peyton


On Aug 14, 2018, at 12:05 AM, Vahid S Hashemian wrote:

If you did not set the config, the default will be used.
For your version of Kafka (0.10.2.1) the default is 1 day: 
https://github.com/apache/kafka/blob/0.10.2.1/core/src/main/scala/kafka/server/KafkaConfig.scala#L152
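For reference, this is the broker setting involved, shown as a server.properties sketch. Note the actual property name is offsets.retention.minutes (plural "offsets"), and 10080 (7 days) is just an example value:

    # server.properties (broker restart required for the change to take effect)
    offsets.retention.minutes=10080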


--Vahid



From:   "彭鹏 (Peyton)"
To:     Vahid S Hashemian
Cc:     users@kafka.apache.org, "彭鹏"
Date:   08/13/2018 08:56 AM
Subject: Re: Question how to drop an unused group id?



OK, thanks again for the help. Another question: we did not set the config
item, so does it take effect with the default value?

I just searched the code for the configuration key
"offset.retention.minutes", but I can't find any reference. Has it been
changed?

Best Regards,
Peyton

On Aug 13, 2018, at 11:52 PM, Vahid S Hashemian wrote:

Hi Peyton,

Yes, if you'd like to change the value a broker restart is required for 
the change to take effect.

--Vahid



From:   "彭鹏 (Peyton)"
To:     Vahid S Hashemian
Cc:     users@kafka.apache.org, "彭鹏"
Date:   08/13/2018 08:49 AM
Subject: Re: Question how to drop an unused group id?



Hi Vahid,

Thank you very much for the reply. We use Kafka version 2.11-0.10.2.1.

So when I apply "offset.retention.minutes" to the system, does that
require a restart to take effect?

Best Regards,
Peyton


On Aug 13, 2018, at 11:43 PM, Vahid S Hashemian wrote:

Hi Peyton,

What version of Kafka are you using?

Re: Question how to drop an unused group id?

2018-08-13 Thread Vahid S Hashemian
Hi Peyton,

What version of Kafka are you using?
Starting from version 1.1.0 there is a DELETE_GROUP API: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-229%3A+DeleteGroups+API
You can also use the consumer group command with `--delete` to delete a 
group.

For prior versions, the safest option would be to wait for
`offsets.retention.minutes` until all group offsets have expired.
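As a sketch of both options (the group name is a placeholder, and the Java AdminClient's deleteConsumerGroups is assumed to be available in your client version, as the counterpart of the DeleteGroups API):
> bin/kafka-consumer-groups.sh --bootstrap-server [broker]:9092 --delete --group my-group

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;

    public class DeleteGroup {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker:9092"); // placeholder
            try (AdminClient admin = AdminClient.create(props)) {
                // Fails if the group still has live members or does not exist.
                admin.deleteConsumerGroups(Collections.singletonList("my-group"))
                     .all().get();
            }
        }
    }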

--Vahid




From:   "彭鹏 (Peyton)"
To:     users@kafka.apache.org
Cc:     "彭鹏"
Date:   08/13/2018 08:12 AM
Subject: Question how to drop an unused group id?



Hi Kafka team,

I have an issue with dropping an unused group id that was created with the
new consumer API, but I have found no way to do it. Can you please give me
a hint on how to do it?

Thank you very much.

Regards,
Peyton







Re: ConsumerGroupCommand tool improvement?

2018-08-06 Thread Vahid S Hashemian
Hi Colin,

Thanks for the feedback!
I understand your concerns (I was thinking about making this improvement
in a way that's fully backward compatible, with regex syntax similar to
that in some of the existing tools).
In any case, if there is not enough interest in the user community I'll
rest my case.

Thanks!
--Vahid




From:   Colin McCabe 
To: users@kafka.apache.org
Date:   08/06/2018 12:44 PM
Subject:Re: ConsumerGroupCommand tool improvement?



On Mon, Aug 6, 2018, at 12:04, Vahid S Hashemian wrote:
> Hi Colin,
> 
> Thanks for considering the idea and sharing your feedback.
> 
> The improvements I proposed can be achieved, to some extent, using the
> AdminClient API and the Consumer Group CLI tool. But they won't fully
> support the proposal.
> 
> For example:
> - Regular expressions are not supported on the groups
> - Topic / client filtering is not supported across all groups
> 
> So the reason for proposing the idea was to see if other Kafka users are
> also interested in some of these features, so we can remove the burden of
> them writing custom code around existing consumer group features, and make
> those features built into the Kafka Consumer Group Command and AdminClient
> API.

Hmm.  If you're writing Java code that calls the APIs, though, this is 
easy, right?  You can filter the groups in the cluster with a regular 
expression with just a single line of code in Java 8.

> adminClient.listConsumerGroups().all().get().stream().
>    filter(listing -> listing.groupId().matches(myRegex)).
>    collect(Collectors.toList());

An option to filter ListConsumerGroups by a regular expression might 
actually be less easy-to-use for most users than this simple filter, since 
users would have to read the JavaDoc for our APIs.

Maybe there are some use-cases where it makes sense to add regex support 
to the command-line tools, though.

I guess the reason why I am pushing back on this is that regular
expressions add a lot of complexity to the API and make it harder for us
to meet our backwards-compatibility guarantees. For example, Java regular
expressions changed slightly between Java 7 and Java 8.

best,
Colin
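For reference, a self-contained version of that one-liner. It is a sketch: the broker address and pattern are placeholders, and note the extra get() needed to resolve the KafkaFuture before streaming.

    import java.util.List;
    import java.util.Properties;
    import java.util.stream.Collectors;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.ConsumerGroupListing;

    public class FilterGroups {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            String myRegex = "payments-.*";                   // example pattern
            try (AdminClient adminClient = AdminClient.create(props)) {
                List<String> matching =
                    adminClient.listConsumerGroups().all().get().stream()
                        .filter(listing -> listing.groupId().matches(myRegex))
                        .map(ConsumerGroupListing::groupId)
                        .collect(Collectors.toList());
                System.out.println(matching); // group ids matching the regex
            }
        }
    }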


> 
> Thanks again!
> --Vahid
> 
> 
> 
> From:   Colin McCabe 
> To: users@kafka.apache.org
> Date:   08/03/2018 04:16 PM
> Subject:Re: ConsumerGroupCommand tool improvement?
> 
> 
> 
> Hi Vahid,
> 
> Interesting idea.
> 
> It seems like if you're using the AdminClient APIs programmatically, you
> can just do the filtering yourself in a more flexible way than what we
> could provide.
> 
> On the other hand, if you're using the ./bin/consumer-groups.sh
> command-line tool, why not use grep or a similar tool to filter the
> output? Maybe there is some additional functionality in supporting
> regexes in the command-line tool, but it also seems like it might be kind
> of complex as well. Do you have some examples where having regex support
> in the tool would be much easier than the traditional way of piping the
> output to grep, awk, and sed?
> 
> best,
> Colin
> 
> 
> On Thu, Aug 2, 2018, at 14:23, Vahid S Hashemian wrote:
> > Hi all,
> > 
> > A requirement has been raised by a colleague and I wanted to see if there
> > is any interest in the community in adding the functionality to Apache
> > Kafka.
> > 
> > ConsumerGroupCommand tool in describe ('--describe' or '--describe
> > --offsets') mode currently lists all topics the group has consumed from
> > and all consumers with assigned partitions for a single group.
> > The idea is to allow filtering of topics, consumers (client ids), and even
> > groups using regular expressions. This will allow the tool to handle use
> > cases such as:
> > - What's the status of a particular consumer (or consumers) in all the
> >   groups they are consuming from? (for example, to check if they are
> >   lagging behind in all groups)
> > - What consumer groups are consuming from a topic (or topics) and what's
> >   the lag for each group?
> > - Limit the existing result to the topics/consumers of interest (for
> >   groups with several topics/consumers)
> > - ...
> > 
> > This would potentially lead to enhancing the AdminClient API as well.
> > 
> > If the community also sees a value in this, I could start drafting a KIP.
> > 
> > Thanks for your feedback.
> > --Vahid
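As a concrete instance of the grep-style filtering discussed above, a sketch (the pattern is an example):
> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list | grep -E '^payments-'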
> 
> 
> 
> 
> 







Re: [ANNOUNCE] Apache Kafka 2.0.0 Released

2018-07-30 Thread Vahid S Hashemian
Such good news on a Monday morning ...

Thank you Rajini for driving the release!

--Vahid




From:   Mickael Maison 
To: Users 
Cc: dev , annou...@apache.org, kafka-clients 

Date:   07/30/2018 04:37 AM
Subject:Re: [ANNOUNCE] Apache Kafka 2.0.0 Released



Great news! Thanks for running the release

On Mon, Jul 30, 2018 at 12:20 PM, Manikumar  
wrote:
> Thanks for driving the release!
>
>
>
> On Mon, Jul 30, 2018 at 3:55 PM Rajini Sivaram  
wrote:
>
>> The Apache Kafka community is pleased to announce the release for
>>
>> Apache Kafka 2.0.0.
>>
>>
>>
>>
>>
>> This is a major release and includes significant new features from
>>
>> 40 KIPs. It contains fixes and improvements from 246 JIRAs, including
>>
>> a few critical bugs. Here is a summary of some notable changes:
>>
>> ** KIP-290 adds support for prefixed ACLs, simplifying access control
>> management in large secure deployments. Bulk access to topics,
>> consumer groups or transactional ids with a prefix can now be granted
>> using a single rule. Access control for topic creation has also been
>> improved to enable access to be granted to create specific topics or
>> topics with a prefix.
>>
>> ** KIP-255 adds a framework for authenticating to Kafka brokers using
>> OAuth2 bearer tokens. The SASL/OAUTHBEARER implementation is
>> customizable using callbacks for token retrieval and validation.
>>
>> ** Host name verification is now enabled by default for SSL connections
>> to ensure that the default SSL configuration is not susceptible to
>> man-in-the middle attacks. You can disable this verification for
>> deployments where validation is performed using other mechanisms.
>>
>> ** You can now dynamically update SSL trust stores without broker 
restart.
>> You can also configure security for broker listeners in ZooKeeper 
before
>> starting brokers, including SSL key store and trust store passwords and
>> JAAS configuration for SASL. With this new feature, you can store 
sensitive
>> password configs in encrypted form in ZooKeeper rather than in 
cleartext
>> in the broker properties file.
>>
>> ** The replication protocol has been improved to avoid log divergence
>> between leader and follower during fast leader failover. We have also
>> improved resilience of brokers by reducing the memory footprint of
>> message down-conversions. By using message chunking, both memory
>> usage and memory reference time have been reduced to avoid
>> OutOfMemory errors in brokers.
>>
>> ** Kafka clients are now notified of throttling before any throttling 
is
>> applied
>> when quotas are enabled. This enables clients to distinguish between
>> network errors and large throttle times when quotas are exceeded.
>>
>> ** We have added a configuration option for Kafka consumer to avoid
>> indefinite blocking in the consumer.
>>
>> ** We have dropped support for Java 7 and removed the previously
>> deprecated Scala producer and consumer.
>>
>> ** Kafka Connect includes a number of improvements and features.
>> KIP-298 enables you to control how errors in connectors, 
transformations
>> and converters are handled by enabling automatic retries and 
controlling
>> the
>> number of errors that are tolerated before the connector is stopped. 
More
>> contextual information can be included in the logs to help diagnose
>> problems
>> and problematic messages consumed by sink connectors can be sent to a
>> dead letter queue rather than forcing the connector to stop.
>>
>> ** KIP-297 adds a new extension point to move secrets out of connector
>> configurations and integrate with any external key management system.
>> The placeholders in connector configurations are only resolved before
>> sending the configuration to the connector, ensuring that secrets are
>> stored
>> and managed securely in your preferred key management system and
>> not exposed over the REST APIs or in log files.
>>
>> ** We have added a thin Scala wrapper API for our Kafka Streams DSL,
>> which provides better type inference and better type safety during 
compile
>> time. Scala users can have less boilerplate in their code, notably
>> regarding
>> Serdes with new implicit Serdes.
>>
>> ** Message headers are now supported in the Kafka Streams Processor 
API,
>> allowing users to add and manipulate headers read from the source 
topics
>> and propagate them to the sink topics.
>>
>> ** Windowed aggregations performance in Kafka Streams has been largely
>> improved (sometimes by an order of magnitude) thanks to the new
>> single-key-fetch API.
>>
>> ** We have further improved unit testibility of Kafka Streams with the
>> kafka-streams-testutil artifact.
>>
>>
>>
>>
>>
>> All of the changes in this release can be found in the release notes:
>>
>> https://www.apache.org/dist/kafka/2.0.0/RELEASE_NOTES.html
>>
>>
>>
>>
>>
>> You can download the source and binary release (Scala 2.11 and Scala 2.12)
>> from:
>>
>> https://kafka.apache.org/downloads#2.0.0

Re: [VOTE] 2.0.0 RC3

2018-07-24 Thread Vahid S Hashemian
+1 (non-binding)

Built from source and ran quickstart successfully with both Java 8 and 
Java 9 on Ubuntu.
Thanks Rajini!

--Vahid




From:   Rajini Sivaram 
To: dev , Users , 
kafka-clients 
Date:   07/24/2018 08:33 AM
Subject:[VOTE] 2.0.0 RC3



Hello Kafka users, developers and client-developers,


This is the fourth candidate for release of Apache Kafka 2.0.0.


This is a major version release of Apache Kafka. It includes 40 new  KIPs
and

several critical bug fixes. Please see the 2.0.0 release plan for more
details:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820



A few notable highlights:

   - Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
   (KIP-277)
   - SASL/OAUTHBEARER implementation (KIP-255)
   - Improved quota communication and customization of quotas (KIP-219,
   KIP-257)
   - Efficient memory usage for down conversion (KIP-283)
   - Fix log divergence between leader and follower during fast leader
   failover (KIP-279)
   - Drop support for Java 7 and remove deprecated code including old 
scala
   clients
   - Connect REST extension plugin, support for externalizing secrets and
   improved error handling (KIP-285, KIP-297, KIP-298 etc.)
   - Scala API for Kafka Streams and other Streams API improvements
   (KIP-270, KIP-150, KIP-245, KIP-251 etc.)


Release notes for the 2.0.0 release:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc3/RELEASE_NOTES.html



*** Please download, test and vote by Friday July 27, 4pm PT.


Kafka's KEYS file containing PGP keys we use to sign the release:

http://kafka.apache.org/KEYS



* Release artifacts to be voted upon (source and binary):

http://home.apache.org/~rsivaram/kafka-2.0.0-rc3/



* Maven artifacts to be voted upon:

https://repository.apache.org/content/groups/staging/



* Javadoc:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc3/javadoc/



* Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:

https://github.com/apache/kafka/releases/tag/2.0.0-rc3


* Documentation:

http://kafka.apache.org/20/documentation.html



* Protocol:

http://kafka.apache.org/20/protocol.html



* Successful Jenkins builds for the 2.0 branch:

Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/90/

System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.0/41/



/**


Thanks,



Rajini






Re: [VOTE] 2.0.0 RC2

2018-07-11 Thread Vahid S Hashemian
+1 (non-binding)

Built executables from source and ran quickstart (Ubuntu / Java 8)

Thanks!
--Vahid




From:   Brett Rann 
To: d...@kafka.apache.org
Cc: Users , kafka-clients 

Date:   07/10/2018 09:53 PM
Subject:Re: [VOTE] 2.0.0 RC2



+1 (non-binding)
Rolling upgrade of a tiny shared staging multitenancy (200+ consumer groups)
cluster from 1.1 to 2.0.0-rc1 to 2.0.0-rc2. The cluster looks healthy after
the upgrade. Lack of Burrow lag suggests consumers are still happy, and
incoming message volume remains the same. Will monitor.

On Wed, Jul 11, 2018 at 3:17 AM Rajini Sivaram 
wrote:

> Hello Kafka users, developers and client-developers,
>
>
> This is the third candidate for release of Apache Kafka 2.0.0.
>
>
> This is a major version release of Apache Kafka. It includes 40 new KIPs
> and
>
> several critical bug fixes. Please see the 2.0.0 release plan for more
> details:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
>
>
> A few notable highlights:
>
> - Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
> (KIP-277)
> - SASL/OAUTHBEARER implementation (KIP-255)
> - Improved quota communication and customization of quotas (KIP-219,
> KIP-257)
> - Efficient memory usage for down conversion (KIP-283)
> - Fix log divergence between leader and follower during fast leader
> failover (KIP-279)
> - Drop support for Java 7 and remove deprecated code including old scala
> clients
> - Connect REST extension plugin, support for externalizing secrets and
> improved error handling (KIP-285, KIP-297, KIP-298 etc.)
> - Scala API for Kafka Streams and other Streams API improvements
> (KIP-270, KIP-150, KIP-245, KIP-251 etc.)
>
>
> Release notes for the 2.0.0 release:
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/RELEASE_NOTES.html
>
>
> *** Please download, test and vote by Friday, July 13, 4pm PT
>
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
>
> http://kafka.apache.org/KEYS
>
>
> * Release artifacts to be voted upon (source and binary):
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/
>
>
> * Maven artifacts to be voted upon:
>
> https://repository.apache.org/content/groups/staging/
>
>
> * Javadoc:
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/javadoc/
>
>
> * Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:
>
> https://github.com/apache/kafka/tree/2.0.0-rc2
>
>
>
> * Documentation:
>
> http://kafka.apache.org/20/documentation.html
>
>
> * Protocol:
>
> http://kafka.apache.org/20/protocol.html
>
>
> * Successful Jenkins builds for the 2.0 branch:
>
> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/72/
>
> System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.0/27/
>
>
> /**
>
>
> Thanks,
>
>
> Rajini
>


-- 

Brett Rann

Senior DevOps Engineer


Zendesk International Ltd

395 Collins Street, Melbourne VIC 3000 Australia

Mobile: +61 (0) 418 826 017






Re: Apache Kafka QuickStart

2018-07-11 Thread Vahid S Hashemian
Hi Nicholas,

The quickstart is meant to be run in terminals. The two commands in Step 2
should be run in different terminals unless you're sending the ZooKeeper
process to the background.
If you are facing particular errors, please share them so we can better
assist you.
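For reference, the two Step 2 commands, assuming the default config files shipped with the distribution; run each in its own terminal (or append ' &' to send ZooKeeper to the background):

    # terminal 1: start ZooKeeper
    bin/zookeeper-server-start.sh config/zookeeper.properties
    # terminal 2: start the Kafka broker
    bin/kafka-server-start.sh config/server.properties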

Thanks.
--Vahid




From:   Nicholas Chang 
To: "users@kafka.apache.org" 
Date:   07/11/2018 05:33 AM
Subject:Apache Kafka QuickStart



Hi,
I am new to Apache Kafka and am trying to work through the QuickStart, but
I ran into a problem in Step 2. After executing the first command to start
ZooKeeper, do I have to open a new terminal to run the Kafka server? I even
tried "How To Install Apache Kafka on Ubuntu 14.04" on DigitalOcean and
also cannot get past step 6. I am using Ubuntu 16.04 LTS. I look forward to
receiving your reply soon.


[Link preview: "How To Install Apache Kafka on Ubuntu 14.04 | DigitalOcean":
Apache Kafka is a popular distributed message broker designed to handle
large volumes of real-time data efficien...]


Regards,
Nicholas Chang







Re: [VOTE] 2.0.0 RC1

2018-07-02 Thread Vahid S Hashemian
+1 (non-binding)

Built from source and ran quickstart successfully on Ubuntu (with Java 8).

Minor: It seems this doc update PR is not included in the RC: 
https://github.com/apache/kafka/pull/5280
Guozhang seems to have wanted to cherry-pick it to 2.0.

Thanks Rajini!
--Vahid




From:   Rajini Sivaram 
To: dev , Users , 
kafka-clients 
Date:   06/29/2018 11:36 AM
Subject:[VOTE] 2.0.0 RC1



Hello Kafka users, developers and client-developers,


This is the second candidate for release of Apache Kafka 2.0.0.


This is a major version release of Apache Kafka. It includes 40 new  KIPs
and

several critical bug fixes. Please see the 2.0.0 release plan for more
details:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820



A few notable highlights:

   - Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
   (KIP-277)
   - SASL/OAUTHBEARER implementation (KIP-255)
   - Improved quota communication and customization of quotas (KIP-219,
   KIP-257)
   - Efficient memory usage for down conversion (KIP-283)
   - Fix log divergence between leader and follower during fast leader
   failover (KIP-279)
   - Drop support for Java 7 and remove deprecated code including old 
scala
   clients
   - Connect REST extension plugin, support for externalizing secrets and
   improved error handling (KIP-285, KIP-297, KIP-298 etc.)
   - Scala API for Kafka Streams and other Streams API improvements
   (KIP-270, KIP-150, KIP-245, KIP-251 etc.)

Release notes for the 2.0.0 release:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/RELEASE_NOTES.html




*** Please download, test and vote by Tuesday, July 3rd, 4pm PT


Kafka's KEYS file containing PGP keys we use to sign the release:

http://kafka.apache.org/KEYS



* Release artifacts to be voted upon (source and binary):

http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/



* Maven artifacts to be voted upon:

https://repository.apache.org/content/groups/staging/



* Javadoc:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/javadoc/



* Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:

https://github.com/apache/kafka/tree/2.0.0-rc1



* Documentation:

http://kafka.apache.org/20/documentation.html



* Protocol:

http://kafka.apache.org/20/protocol.html



* Successful Jenkins builds for the 2.0 branch:

Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/66/

System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.0/15/




Please test and verify the release artifacts and submit a vote for this RC
or report any issues so that we can fix them and roll out a new RC ASAP!

Although this release vote requires PMC votes to pass, testing, votes, and
bug
reports are valuable and appreciated from everyone.


Thanks,


Rajini






Re: [VOTE] 2.0.0 RC0

2018-06-22 Thread Vahid S Hashemian
+1 (non-binding)

Built from source and ran quickstart successfully on Ubuntu (with Java 8 
and Java 9).

Thanks Rajini!
--Vahid



Re: [VOTE] 1.1.1 RC1

2018-06-22 Thread Vahid S Hashemian
+1 (non-binding)

Built from source and ran quickstart successfully on Ubuntu (with Java 8).

Thanks Dong!
--Vahid



From:   Dong Lin 
To: d...@kafka.apache.org, users@kafka.apache.org, 
kafka-clie...@googlegroups.com
Date:   06/22/2018 10:10 AM
Subject:[VOTE] 1.1.1 RC1



Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 1.1.1.

Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
released with 1.1.0 about 3 months ago. We have fixed about 25 issues 
since
that release. A few of the more significant fixes include:

KAFKA-6925 <https://issues.apache.org/jira/browse/KAFKA-6925> - Fix memory leak in StreamsMetricsThreadImpl
KAFKA-6937 <https://issues.apache.org/jira/browse/KAFKA-6937> - In-sync replica delayed during fetch if replica throttle is exceeded
KAFKA-6917 <https://issues.apache.org/jira/browse/KAFKA-6917> - Process txn completion asynchronously to avoid deadlock
KAFKA-6893 <https://issues.apache.org/jira/browse/KAFKA-6893> - Create processors before starting acceptor to avoid ArithmeticException
KAFKA-6870 <https://issues.apache.org/jira/browse/KAFKA-6870> - Fix ConcurrentModificationException in SampledStat
KAFKA-6878 <https://issues.apache.org/jira/browse/KAFKA-6878> - Fix NullPointerException when querying global state store
KAFKA-6879 <https://issues.apache.org/jira/browse/KAFKA-6879> - Invoke session init callbacks outside lock to avoid Controller deadlock
KAFKA-6857 <https://issues.apache.org/jira/browse/KAFKA-6857> - Prevent follower from truncating to the wrong offset if undefined leader epoch is requested
KAFKA-6854 <https://issues.apache.org/jira/browse/KAFKA-6854> - Log cleaner fails with transaction markers that are deleted during clean
KAFKA-6747 <https://issues.apache.org/jira/browse/KAFKA-6747> - Check whether there is in-flight transaction before aborting transaction
KAFKA-6748 <https://issues.apache.org/jira/browse/KAFKA-6748> - Double check before scheduling a new task after the punctuate call
KAFKA-6739 <https://issues.apache.org/jira/browse/KAFKA-6739> - Fix IllegalArgumentException when down-converting from V2 to V0/V1
KAFKA-6728 <https://issues.apache.org/jira/browse/KAFKA-6728> - Fix NullPointerException when instantiating the HeaderConverter

Kafka 1.1.1 release plan:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1


Release notes for the 1.1.1 release:
http://home.apache.org/~lindong/kafka-1.1.1-rc1/RELEASE_NOTES.html


*** Please download, test and vote by Thursday, Jun 22, 12pm PT ***

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~lindong/kafka-1.1.1-rc1/


* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/


* Javadoc:
http://home.apache.org/~lindong/kafka-1.1.1-rc1/javadoc/


* Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc1 tag:
https://github.com/apache/kafka/tree/1.1.1-rc1


* Documentation:
http://kafka.apache.org/11/documentation.html


* Protocol:
http://kafka.apache.org/11/protocol.html


* Successful Jenkins builds for the 1.1 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/152/
System tests: https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1817


Please test and verify the release artifacts and submit a vote for this 
RC,
or report any issues so we can fix them and get a new RC out ASAP. 
Although
this release vote requires PMC votes to pass, testing, votes, and bug
reports are valuable and appreciated from everyone.

Cheers,
Dong






Re: [VOTE] 1.0.2 RC0

2018-06-22 Thread Vahid S Hashemian
+1 (non-binding)

Built from source and ran quickstart successfully on Ubuntu (with Java 8).

Thanks for running the release Matthias!
--Vahid




From:   "Matthias J. Sax" 
To: d...@kafka.apache.org, users@kafka.apache.org, 
kafka-clie...@googlegroups.com
Date:   06/22/2018 10:42 AM
Subject:[VOTE] 1.0.2 RC0



Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 1.0.2.

This is a bug fix release closing 26 tickets:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.0.2

Release notes for the 1.0.2 release:
http://home.apache.org/~mjsax/kafka-1.0.2-rc0/RELEASE_NOTES.html

*** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
can close the vote on Wednesday.

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~mjsax/kafka-1.0.2-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~mjsax/kafka-1.0.2-rc0/javadoc/

* Tag to be voted upon (off 1.0 branch) is the 1.0.2 tag:
https://github.com/apache/kafka/releases/tag/1.0.2-rc0

* Documentation:
http://kafka.apache.org/10/documentation.html

* Protocol:
http://kafka.apache.org/10/protocol.html

* Successful Jenkins builds for the 1.0 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/211/
System tests:
https://jenkins.confluent.io/job/system-test-kafka/job/1.0/217/

/**

Thanks,
  -Matthias






Re: Multiple consumers subscribing to a topic

2018-06-18 Thread Vahid S Hashemian
Hi Nitin,

1) A Kafka Consumer uses a poll loop to pull messages from the topics it 
is subscribed to. You can see examples of how this can be implemented in 
Java here: 
https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/

2,3) Consumers normally consume messages as part of consumer groups. The 
last consumed offset is preserved for each consumer group. Therefore, if a 
consumer group stops consuming for a period of time, the next time a 
consumer starts consuming as part of that group it will resume fetching 
messages from those preserved offsets (assuming the messages have not 
expired yet). In your example, the consumer will get message #6 as its 
next fetched message. You can read more about consumers and consumer 
groups here: https://docs.confluent.io/current/clients/consumer.html
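To make (1) concrete, here is a minimal poll loop in the spirit of the tutorial linked above. It is a sketch: broker, topic, and group names are placeholders, and auto-commit is left at its default (true) so the group's position is preserved, which is what makes the resume behavior in (2) and (3) work.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class SimpleConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker:9092");  // placeholder
            props.put("group.id", "my-consumer-group");     // placeholder
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("events")); // placeholder topic
                while (true) {
                    // the consumer pulls; messages are not pushed to it
                    // (newer clients use poll(Duration) instead of poll(long))
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records)
                        System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }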

--Vahid




From:   Nitin Gupta 
To: users@kafka.apache.org
Date:   06/18/2018 04:53 AM
Subject:Multiple consumers subscribing to a topic



Hi

I am looking for the below setup in one of the Applications I am working 
on:


Application generates events and publishes the events to a Kafka topic.
The topic is subscribed by multiple consumers.

Assuming there are 10 events triggered in the application and published to
the topic.
How do we handle each of the below scenarios:

1) Once the message is published to the topic, should consumers pull the
message from the topic, or, as subscribers to the topic, should the message
be pushed to them?

2) In case one of the consumers is down, will Kafka push all the 10
messages once it is up? If yes, how?

3) In case one of the consumers goes down after 5 messages have been
pushed/consumed by it, what happens to the remaining messages? Once the
consumer is up, should it notify Kafka to send messages from 6-10, or is
Kafka intelligent enough to only trigger the messages from 6-10?

Thanks.

Regards
Nitin






Re: [kafka-clients] Re: [VOTE] 1.1.0 RC4

2018-03-27 Thread Vahid S Hashemian
Hi Rajini,

+1 (non-binding)

Built from source on Linux and Windows (Java 8), and tested quickstart on 
both platforms.
Connect quickstart on Windows is not working as per my note on RC3: 
https://www.mail-archive.com/dev@kafka.apache.org/msg86138.html

No other issues detected.

Thanks!
--Vahid



From:   Jeff Chao 
To: "d...@kafka.apache.org" 
Cc: Users , kafka-clients 

Date:   03/27/2018 10:41 AM
Subject:Re: [kafka-clients] Re: [VOTE] 1.1.0 RC4



Hello, +1 (non-binding). Ran through our regression and performance suite.
Looks good, thanks.


Jeff Chao
Heroku

On Tue, Mar 27, 2018 at 8:44 AM, Jason Gustafson  
wrote:

> +1 Went through the quickstart, checked upgrade documentation. Thanks
> Rajini!
>
> On Tue, Mar 27, 2018 at 6:28 AM, Manikumar 
> wrote:
>
> > +1 (non-binding)
> >
> > - Verified src, binary artifacts and basic quick start
> > - Verified delegation token operations and docs
> > - Verified dynamic broker configuration and docs.
> >
> >
> > On Tue, Mar 27, 2018 at 6:52 PM, Rajini Sivaram 
 >
> > wrote:
> >
> > > Can we get some more votes for this RC so that the release can be
> rolled
> > > out soon?
> > >
> > > Many thanks,
> > >
> > > Rajini
> > >
> > > On Sat, Mar 24, 2018 at 6:54 PM, Ted Yu  wrote:
> > >
> > >> I wasn't able to reproduce the test failure when it is run alone.
> > >>
> > >> This seems to be flaky test.
> > >>
> > >> +1 from me.
> > >>
> > >> On Sat, Mar 24, 2018 at 11:49 AM, Rajini Sivaram <
> > rajinisiva...@gmail.com
> > >> >
> > >> wrote:
> > >>
> > >> > Hi Ted,
> > >> >
> > >> > Thank you for testing the RC. I haven't been able to recreate 
that
> > >> failure
> > >> > after running the test a 100 times. Was it a one-off transient
> failure
> > >> or
> > >> > does it fail consistently for you?
> > >> >
> > >> >
> > >> > On Sat, Mar 24, 2018 at 2:51 AM, Ted Yu 
> wrote:
> > >> >
> > >> > > When I ran test suite, I got one failure:
> > >> > >
> > >> > > kafka.api.PlaintextConsumerTest > testAsyncCommit FAILED
> > >> > > java.lang.AssertionError: expected:<5> but was:<1>
> > >> > > at org.junit.Assert.fail(Assert.java:88)
> > >> > > at org.junit.Assert.failNotEquals(Assert.java:834)
> > >> > > at org.junit.Assert.assertEquals(Assert.java:645)
> > >> > > at org.junit.Assert.assertEquals(Assert.java:631)
> > >> > > at
> > >> > > kafka.api.BaseConsumerTest.awaitCommitCallback(
> > >> > BaseConsumerTest.scala:214)
> > >> > > at
> > >> > > kafka.api.PlaintextConsumerTest.testAsyncCommit(
> > >> > > PlaintextConsumerTest.scala:513)
> > >> > >
> > >> > > Not sure if anyone else saw similar error.
> > >> > >
> > >> > > Cheers
> > >> > >
> > >> > > On Fri, Mar 23, 2018 at 4:37 PM, Rajini Sivaram <
> > >> rajinisiva...@gmail.com
> > >> > >
> > >> > > wrote:
> > >> > >
> > >> > > > Hello Kafka users, developers and client-developers,
> > >> > > >
> > >> > > > This is the fifth candidate for release of Apache Kafka 
1.1.0.
> > >> > > >
> > >> > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546
> > >> > > >
> > >> > > > A few highlights:
> > >> > > >
> > >> > > > * Significant Controller improvements (much faster and 
session
> > >> > expiration
> > >> > > > edge
> > >> > > > cases fixed)
> > >> > > > * Data balancing across log directories (JBOD)
> > >> > > > * More efficient replication when the number of partitions is
> > large
> > >> > > > * Dynamic Broker Configs
> > >> > > > * Delegation tokens (KIP-48)
> > >> > > > * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 /
> 239)
> > >> > > >
> > >> > > > Release notes for the 1.1.0 release:
> > >> > > >
> > >> > > > http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/RELEASE_NOTES.html
> > >> > > >
> > >> > > >
> > >> > > > *** Please download, test and vote by Tuesday March 27th 4pm 
PT.
> > >> > > >
> > >> > > >
> > >> > > > Kafka's KEYS file containing PGP keys we use to sign the
> release:
> > >> > > >
> > >> > > > http://kafka.apache.org/KEYS

> > >> > > >
> > >> > > >
> > >> > > > * Release artifacts to be voted upon (source and binary):
> > >> > > >
> > >> > > > 

RE: Kafka consumer issue

2018-03-20 Thread Vahid S Hashemian
So the documentation says "advertised.host.name" and "advertised.port" are 
used only when "listeners" or "advertised.listeners" are not set.
Sorry, I missed that you had already set them. They are deprecated, though,
and it is recommended to use "advertised.listeners" instead.

I just tested this successfully for simple consumption, setting only
"listeners" (starting with the default server.properties), as mentioned
earlier.
If it's possible, I'd suggest you try that too, to hopefully narrow down 
the cause of the issue.
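Concretely, a server.properties sketch of that suggestion, replacing the deprecated advertised.host.name/advertised.port pair (the hostname is taken from your config):

    listeners=PLAINTEXT://kfk03.mp.com:9092
    advertised.listeners=PLAINTEXT://kfk03.mp.com:9092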

--Vahid




From:   "Anand, Uttam" <uttam.an...@bnsf.com>
To: "users@kafka.apache.org" <users@kafka.apache.org>
Date:   03/20/2018 12:15 PM
Subject:RE: Kafka consumer issue



Getting below repeated warnings and messages are not consumed -:

[2018-03-20 14:09:56,787] WARN [Consumer clientId=consumer-1, 
groupId=console-consumer-35712] Connection to node -1 could not be 
established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient)

-Original Message-
From: Vahid S Hashemian [mailto:vahidhashem...@us.ibm.com] 
Sent: Tuesday, March 20, 2018 1:55 PM
To: users@kafka.apache.org
Subject: RE: Kafka consumer issue

Yes, without the spaces: listeners=PLAINTEXT://kfk03.mp.com:9092




From:   "Anand, Uttam" <uttam.an...@bnsf.com>
To: "users@kafka.apache.org" <users@kafka.apache.org>
Date:   03/20/2018 11:50 AM
Subject:    RE: Kafka consumer issue



You meant 

 listeners=PLAINTEXT:// kfk03.mp.com:9092


-Original Message-
From: Vahid S Hashemian [mailto:vahidhashem...@us.ibm.com]
Sent: Tuesday, March 20, 2018 1:45 PM
To: users@kafka.apache.org
Subject: RE: Kafka consumer issue

Thanks.

Have you tried setting the listeners property to the actual domain name of 
the broker? That might be the issue.

--Vahid



From:   "Anand, Uttam" <uttam.an...@bnsf.com>
To: "users@kafka.apache.org" <users@kafka.apache.org>
Date:   03/20/2018 11:34 AM
Subject:RE: Kafka consumer issue



Server.properties

broker.id=0
listeners=PLAINTEXT://:9092
port=9092
host.name=kfk03.mp.com
advertised.host.name=kfk03.mp.com
advertised.port=9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/bnsf/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=1440
log.segment.bytes=1073741824
log.retention.check.interval.ms=30
zookeeper.connect=zkp02.mp.com:2181
zookeeper.connection.timeout.ms=6

-Original Message-
From: Vahid S Hashemian [mailto:vahidhashem...@us.ibm.com]
Sent: Tuesday, March 20, 2018 1:31 PM
To: users@kafka.apache.org
Subject: RE: Kafka consumer issue

Could you paste the server.properties content?
It wasn't attached to the original note.

Thanks.
--Vahid




From:   "Anand, Uttam" <uttam.an...@bnsf.com>
To: "users@kafka.apache.org" <users@kafka.apache.org>
Date:   03/20/2018 11:10 AM
Subject: RE: Kafka consumer issue



Yes checked everything. No errors. Replication factor is 1 only.



-Original Message-
From: Manikumar [mailto:manikumar.re...@gmail.com]
Sent: Tuesday, March 20, 2018 12:59 PM
To: users@kafka.apache.org
Subject: Re: Kafka consumer issue

Check broker logs for any errors. Also enable consumer debug logs.
Check the health of the __consumer_offsets topic. Make sure to set
offsets.topic.replication.factor=1 for a single-node cluster.

On Tue, Mar 20, 2018 at 11:21 PM, Anand, Uttam <uttam.an...@bnsf.com> wrote:

> I don't want to use --new-consumer, as it is the default; this
> option is deprecated and will be removed in a future release.
>
> -Original Message-
> From: Anand, Uttam
> Sent: Tuesday, March 20, 2018 12:43 PM
> To: 'users@kafka.apache.org' <users@kafka.apache.org>
> Subject: RE: Kafka consumer issue
>
> You mean by executing the below command?
>
> /kafka/bin/kafka-console-consumer.sh --new-consumer --bootstrap-server
> kfk03.mp.com:2181 --topic test --from-beginning
>
> -Original Message-
> From: Zakee [mailto:kzak...@netzero.net]
> Sent: Tuesday, March 20, 2018 12:35 PM
> To: users@kafka.apache.org
> Subject: Re: Kafka consumer issue
>
> Did you try with --new-consumer ?
>
> -Zakee
>
>

> > On Mar 20, 2018, at 10:26 AM, Anand, Uttam <uttam.an...@bnsf.com> wrote:
> >
> > I am facing an issue while consuming messages using the bootstrap-server,
> > i.e. the Kafka server. Any idea why it is not able to consume messages
> > without ZooKeeper?
> >
> > Kafka Version -: kafka_2.11-1.0.0
> > Zookeeper Version -: kafka_2.11-1.0.0
> > Server.properties -: Attached
> > Zookeeper Host and port -: zkp02.mp.com:2181
> > Kafka Host and port -: kfk03.mp.com:9092
RE: Kafka consumer issue

2018-03-20 Thread Vahid S Hashemian
Yes, without the spaces: listeners=PLAINTEXT://kfk03.mp.com:9092




From:   "Anand, Uttam" <uttam.an...@bnsf.com>
To: "users@kafka.apache.org" <users@kafka.apache.org>
Date:   03/20/2018 11:50 AM
Subject:RE: Kafka consumer issue



You meant 

 listeners=PLAINTEXT:// kfk03.mp.com:9092


-Original Message-
From: Vahid S Hashemian [mailto:vahidhashem...@us.ibm.com] 
Sent: Tuesday, March 20, 2018 1:45 PM
To: users@kafka.apache.org
Subject: RE: Kafka consumer issue

EXTERNAL 
EMAIL Thanks.

Have you tried setting the listeners property to the actual domain name of 
the broker? That might be the issue.

--Vahid



From:   "Anand, Uttam" <uttam.an...@bnsf.com>
To: "users@kafka.apache.org" <users@kafka.apache.org>
Date:   03/20/2018 11:34 AM
Subject:RE: Kafka consumer issue



Server.properties

broker.id=0
listeners=PLAINTEXT://:9092
port=9092
host.name=kfk03.mp.com
advertised.host.name=kfk03.mp.com
advertised.port=9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/bnsf/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=1440
log.segment.bytes=1073741824
log.retention.check.interval.ms=30
zookeeper.connect=zkp02.mp.com:2181
zookeeper.connection.timeout.ms=6

-Original Message-
From: Vahid S Hashemian [mailto:vahidhashem...@us.ibm.com]
Sent: Tuesday, March 20, 2018 1:31 PM
To: users@kafka.apache.org
Subject: RE: Kafka consumer issue

EXTERNAL 
EMAIL Could you paste the server.properties content?
It wasn't attached to the original note.

Thanks.
--Vahid




From:   "Anand, Uttam" <uttam.an...@bnsf.com>
To: "users@kafka.apache.org" <users@kafka.apache.org>
Date:   03/20/2018 11:10 AM
Subject:RE: Kakfa consumer issue



Yes checked everything. No errors. Replication factor is 1 only.

-Original Message-
From: Manikumar [mailto:manikumar.re...@gmail.com]
Sent: Tuesday, March 20, 2018 12:59 PM
To: users@kafka.apache.org
Subject: Re: Kakfa consumer issue

EXTERNAL EMAIL

Check broker logs for any errors. Also enable consumer debug logs.
Check the health of the __consumer_offsets topic. Make sure to set
offsets.topic.replication.factor=1 for a single-node cluster.

On Tue, Mar 20, 2018 at 11:21 PM, Anand, Uttam <uttam.an...@bnsf.com>
wrote:

> I don’t want to use --new-consumer as it is the default; the option
> is deprecated and will be removed in a future release.
>
> -Original Message-
> From: Anand, Uttam
> Sent: Tuesday, March 20, 2018 12:43 PM
> To: 'users@kafka.apache.org' <users@kafka.apache.org>
> Subject: RE: Kakfa consumer issue
>
> You mean by executing the below command?
>
> /kafka/bin/kafka-console-consumer.sh --new-consumer --bootstrap-server
> kfk03.mp.com:2181 --topic test --from-beginning
>
> -Original Message-
> From: Zakee [mailto:kzak...@netzero.net]
> Sent: Tuesday, March 20, 2018 12:35 PM
> To: users@kafka.apache.org
> Subject: Re: Kakfa consumer issue
>
> EXTERNAL EMAIL
>
> Did you try with --new-consumer?
>
> -Zakee
>
> > On Mar 20, 2018, at 10:26 AM, Anand, Uttam <uttam.an...@bnsf.com> wrote:
> >
> > I am facing an issue while consuming messages using the
> > bootstrap-server, i.e. the Kafka server. Any idea why it is not able
> > to consume messages without ZooKeeper?
> >
> > Kafka Version -: kafka_2.11-1.0.0
> > Zookeeper Version -: kafka_2.11-1.0.0
> > Server.properties -: Attached
> > Zookeeper Host and port -: zkp02.mp.com:2181
> > Kafka Host and port -: kfk03.mp.com:9092
> >
> > Producing some message -:
> >
> > [kfk03.mp.com ~]$
> > /bnsf/kafka/b
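
Note that in the command quoted above, --bootstrap-server points at the
ZooKeeper port (2181); it should point at a Kafka broker listener instead.
A corrected sketch:

/kafka/bin/kafka-console-consumer.sh --bootstrap-server kfk03.mp.com:9092 --topic test --from-beginning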

RE: Kafka consumer issue

2018-03-20 Thread Vahid S Hashemian
Thanks.

Have you tried setting the listeners property to the actual domain name of 
the broker? That might be the issue.

--Vahid



From:   "Anand, Uttam" <uttam.an...@bnsf.com>
To: "users@kafka.apache.org" <users@kafka.apache.org>
Date:   03/20/2018 11:34 AM
Subject:RE: Kafka consumer issue



Server.properties

broker.id=0
listeners=PLAINTEXT://:9092
port=9092
host.name=kfk03.mp.com
advertised.host.name=kfk03.mp.com
advertised.port=9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/bnsf/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=1440
log.segment.bytes=1073741824
log.retention.check.interval.ms=30
zookeeper.connect=zkp02.mp.com:2181
zookeeper.connection.timeout.ms=6

-Original Message-
From: Vahid S Hashemian [mailto:vahidhashem...@us.ibm.com] 
Sent: Tuesday, March 20, 2018 1:31 PM
To: users@kafka.apache.org
Subject: RE: Kafka consumer issue

EXTERNAL EMAIL

Could you paste the server.properties content?
It wasn't attached to the original note.

Thanks.
--Vahid




From:   "Anand, Uttam" <uttam.an...@bnsf.com>
To: "users@kafka.apache.org" <users@kafka.apache.org>
Date:   03/20/2018 11:10 AM
Subject:RE: Kakfa consumer issue



Yes checked everything. No errors. Replication factor is 1 only.

-Original Message-
From: Manikumar [mailto:manikumar.re...@gmail.com]
Sent: Tuesday, March 20, 2018 12:59 PM
To: users@kafka.apache.org
Subject: Re: Kakfa consumer issue

EXTERNAL EMAIL

Check broker logs for any errors. Also enable consumer debug logs.
Check the health of the __consumer_offsets topic. Make sure to set
offsets.topic.replication.factor=1 for a single-node cluster.

On Tue, Mar 20, 2018 at 11:21 PM, Anand, Uttam <uttam.an...@bnsf.com>
wrote:

> I don’t want to use --new-consumer as it is the default; the option
> is deprecated and will be removed in a future release.
>
> -Original Message-
> From: Anand, Uttam
> Sent: Tuesday, March 20, 2018 12:43 PM
> To: 'users@kafka.apache.org' <users@kafka.apache.org>
> Subject: RE: Kakfa consumer issue
>
> You mean by executing the below command?
>
> /kafka/bin/kafka-console-consumer.sh --new-consumer --bootstrap-server
> kfk03.mp.com:2181 --topic test --from-beginning
>
> -Original Message-
> From: Zakee [mailto:kzak...@netzero.net]
> Sent: Tuesday, March 20, 2018 12:35 PM
> To: users@kafka.apache.org
> Subject: Re: Kakfa consumer issue
>
> EXTERNAL EMAIL
>
> Did you try with --new-consumer?
>
> -Zakee
>
> > On Mar 20, 2018, at 10:26 AM, Anand, Uttam <uttam.an...@bnsf.com> wrote:
> >
> > I am facing an issue while consuming messages using the
> > bootstrap-server, i.e. the Kafka server. Any idea why it is not able
> > to consume messages without ZooKeeper?
> >
> > Kafka Version -: kafka_2.11-1.0.0
> > Zookeeper Version -: kafka_2.11-1.0.0
> > Server.properties -: Attached
> > Zookeeper Host and port -: zkp02.mp.com:2181
> > Kafka Host and port -: kfk03.mp.com:9092
> >
> > Producing some message -:
> >
> > [kfk03.mp.com ~]$
> > /bnsf/kafka/bin/kafka-console-producer.sh --broker-list kfk03.mp.com:9092

RE: Kafka consumer issue

2018-03-20 Thread Vahid S Hashemian
Could you paste the server.properties content?
It wasn't attached to the original note.

Thanks.
--Vahid




From:   "Anand, Uttam" 
To: "users@kafka.apache.org" 
Date:   03/20/2018 11:10 AM
Subject:RE: Kakfa consumer issue



Yes checked everything. No errors. Replication factor is 1 only.

-Original Message-
From: Manikumar [mailto:manikumar.re...@gmail.com]
Sent: Tuesday, March 20, 2018 12:59 PM
To: users@kafka.apache.org
Subject: Re: Kakfa consumer issue

EXTERNAL EMAIL

Check broker logs for any errors. Also enable consumer debug logs.
Check the health of the __consumer_offsets topic. Make sure to set
offsets.topic.replication.factor=1 for a single-node cluster.

On Tue, Mar 20, 2018 at 11:21 PM, Anand, Uttam <uttam.an...@bnsf.com>
wrote:

> I don’t want to use --new-consumer as it is the default; the option
> is deprecated and will be removed in a future release.
>
> -Original Message-
> From: Anand, Uttam
> Sent: Tuesday, March 20, 2018 12:43 PM
> To: 'users@kafka.apache.org' <users@kafka.apache.org>
> Subject: RE: Kakfa consumer issue
>
> You mean by executing the below command?
>
> /kafka/bin/kafka-console-consumer.sh --new-consumer --bootstrap-server
> kfk03.mp.com:2181 --topic test --from-beginning
>
> -Original Message-
> From: Zakee [mailto:kzak...@netzero.net]
> Sent: Tuesday, March 20, 2018 12:35 PM
> To: users@kafka.apache.org
> Subject: Re: Kakfa consumer issue
>
> EXTERNAL EMAIL
>
> Did you try with --new-consumer?
>
> -Zakee
>
> > On Mar 20, 2018, at 10:26 AM, Anand, Uttam <uttam.an...@bnsf.com> wrote:
> >
> > I am facing an issue while consuming messages using the
> > bootstrap-server, i.e. the Kafka server. Any idea why it is not able
> > to consume messages without ZooKeeper?
> >
> > Kafka Version -: kafka_2.11-1.0.0
> > Zookeeper Version -: kafka_2.11-1.0.0
> > Server.properties -: Attached
> > Zookeeper Host and port -: zkp02.mp.com:2181
> > Kafka Host and port -: kfk03.mp.com:9092
> >
> > Producing some message -:
> >
> > [kfk03.mp.com ~]$
> > /bnsf/kafka/bin/kafka-console-producer.sh --broker-list kfk03.mp.com:9092 --topic test
> > >hi
> > >hi
> >
> > Consumer not able to consume messages if I give --bootstrap-server -:
> >
> > [kfk03.mp.com ~]$
> > /bnsf/kafka/bin/kafka-console-consumer.sh --bootstrap-server
> > kfk03.mp.com:9092 --topic test --from-beginning
> >
> > Consumer able to consume messages when the zookeeper server is given
> > instead of bootstrap-server -:
> >
> > [kfk03.mp.com
Re: Subject: [VOTE] 1.1.0 RC3

2018-03-16 Thread Vahid S Hashemian
Hi Damian,

Thanks for running the release.

I tried building from source and running the quick start on Linux & 
Windows with both Java 8 & 9.
Here's the result:

+-----------------+---------+---------+
|                 |  Linux  | Windows |
+                 +----+----+----+----+
|                 | J8 | J9 | J8 | J9 |
+-----------------+----+----+----+----+
|  Build          |  + |  + |  + |  + |
+-----------------+----+----+----+----+
|  Single broker  |  + |  + |  + |  + |
| produce/consume |    |    |    |    |
+-----------------+----+----+----+----+
| Connect         |  + |  ? |  - |  - |
+-----------------+----+----+----+----+
| Streams         |  + |  + |  + |  + |
+-----------------+----+----+----+----+

?: Connect quickstart on Linux with Java 9 runs but the connect tool 
throws a bunch of exceptions (https://www.codepile.net/pile/yVg8XJB8)
-: Connect quickstart on Windows fails (Java 8: 
https://www.codepile.net/pile/xJGra6BP, Java 9: 
https://www.codepile.net/pile/oREYeORK)

Given that Windows is not an officially supported platform, and the 
exceptions with Linux/Java 9 are not breaking the functionality, my vote 
is a +1 (non-binding).

Thanks.
--Vahid




From:   Damian Guy 
To: d...@kafka.apache.org, users@kafka.apache.org, 
kafka-clie...@googlegroups.com
Date:   03/15/2018 07:55 AM
Subject:Subject: [VOTE] 1.1.0 RC3



Hello Kafka users, developers and client-developers,

This is the fourth candidate for release of Apache Kafka 1.1.0.

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546


A few highlights:

* Significant Controller improvements (much faster and session expiration
edge cases fixed)
* Data balancing across log directories (JBOD)
* More efficient replication when the number of partitions is large
* Dynamic Broker Configs
* Delegation tokens (KIP-48)
* Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)


Release notes for the 1.1.0 release:
http://home.apache.org/~damianguy/kafka-1.1.0-rc3/RELEASE_NOTES.html


*** Please download, test and vote by Monday, March 19, 9am PDT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~damianguy/kafka-1.1.0-rc3/


* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/


* Javadoc:
http://home.apache.org/~damianguy/kafka-1.1.0-rc3/javadoc/


* Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
https://github.com/apache/kafka/tree/1.1.0-rc3



* Documentation:
http://kafka.apache.org/11/documentation.html

* Protocol:
http://kafka.apache.org/11/protocol.html

Re: [VOTE] 1.1.0 RC0

2018-02-26 Thread Vahid S Hashemian
+1 (non-binding)

Built the source and ran quickstart (including streams) successfully on 
Ubuntu (with both Java 8 and Java 9).

I understand the Windows platform is not officially supported, but I ran 
the same on Windows 10, and except for Step 7 (Connect) everything else 
worked fine.

There are a number of warnings and errors (including
java.lang.ClassNotFoundException). Here's the final error message:

> bin\windows\connect-standalone.bat config\connect-standalone.properties 
config\connect-file-source.properties config\connect-file-sink.properties
...
[2018-02-26 14:55:56,529] ERROR Stopping after connector error 
(org.apache.kafka.connect.cli.ConnectStandalone)
java.lang.NoClassDefFoundError: 
org/apache/kafka/connect/transforms/util/RegexValidator
at 
org.apache.kafka.connect.runtime.SinkConnectorConfig.<init>(SinkConnectorConfig.java:46)
at 
org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:263)
at 
org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:164)
at 
org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
Caused by: java.lang.ClassNotFoundException: 
org.apache.kafka.connect.transforms.util.RegexValidator
at 
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:582)
at 
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:185)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)
... 4 more

Thanks for running the release.
--Vahid




From:   Damian Guy 
To: d...@kafka.apache.org, users@kafka.apache.org, 
kafka-clie...@googlegroups.com
Date:   02/24/2018 08:16 AM
Subject:[VOTE] 1.1.0 RC0



Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 1.1.0.

This is a minor version release of Apache Kafka. It includes 29 new KIPs.
Please see the release plan for more details:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913


A few highlights:

* Significant Controller improvements (much faster and session expiration
edge cases fixed)
* Data balancing across log directories (JBOD)
* More efficient replication when the number of partitions is large
* Dynamic Broker Configs
* Delegation tokens (KIP-48)
* Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)

Release notes for the 1.1.0 release:
http://home.apache.org/~damianguy/kafka-1.1.0-rc0/RELEASE_NOTES.html


*** Please download, test and vote by Wednesday, February 28th, 5pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~damianguy/kafka-1.1.0-rc0/


* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/


* Javadoc:
http://home.apache.org/~damianguy/kafka-1.1.0-rc0/javadoc/


* Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
https://github.com/apache/kafka/tree/1.1.0-rc0



* Documentation:
http://kafka.apache.org/11/documentation.html


* Protocol:

Bay Area Apache Kafka Meetup - Morning of Feb 20

2018-02-07 Thread Vahid S Hashemian
Kafka users and developers,

The next *Bay Area Apache Kafka Meetup* is on the *morning of Feb 20* and 
is hosted by *Index Developer Conference* at Moscone West in San Francisco
.
Meetup Info: https://www.meetup.com/KafkaBayArea/events/247433783/
Registration Link: https://ibm.co/2n742Jn (required)

Promo code for free meetup registration: CD1KAFKA
Promo code for free meetup registration + free full Index pass (Feb 
20-22): IND18FULL (expires Feb 12, 11:59 PST)

Detailed instructions and agenda can be found at the meetup link above.

Hope to see you there.
--Vahid



Re: [VOTE] 1.0.1 RC0

2018-02-07 Thread Vahid S Hashemian
Hi Ewen,

+1

Building from source and running the quickstart were successful on Ubuntu 
and Windows 10.

Thanks for running the release.
--Vahid



From:   Ewen Cheslack-Postava 
To: d...@kafka.apache.org, users@kafka.apache.org, 
kafka-clie...@googlegroups.com
Date:   02/05/2018 07:49 PM
Subject:[VOTE] 1.0.1 RC0



Hello Kafka users, developers and client-developers,

Sorry for a bit of delay, but I've now prepared the first candidate for
release of Apache Kafka 1.0.1.

This is a bugfix release for the 1.0 branch that was first released with
1.0.0 about 3 months ago. We've fixed 46 significant issues since that
release. Most of these are non-critical, but in aggregate these fixes will
have significant impact. A few of the more significant fixes include:

* KAFKA-6277: Make loadClass thread-safe for class loaders of Connect
plugins
* KAFKA-6185: Selector memory leak with high likelihood of OOM in case of
down conversion
* KAFKA-6269: KTable state restore fails after rebalance
* KAFKA-6190: GlobalKTable never finishes restoring when consuming
transactional messages

Release notes for the 1.0.1 release:
http://home.apache.org/~ewencp/kafka-1.0.1-rc0/RELEASE_NOTES.html


*** Please download, test and vote by Thursday, Feb 8, 12pm PT ***

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~ewencp/kafka-1.0.1-rc0/


* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/


* Javadoc:
http://home.apache.org/~ewencp/kafka-1.0.1-rc0/javadoc/


* Tag to be voted upon (off 1.0 branch) is the 1.0.1 tag:
https://github.com/apache/kafka/tree/1.0.1-rc0



* Documentation:
http://kafka.apache.org/10/documentation.html


* Protocol:
http://kafka.apache.org/10/protocol.html



Please test and verify the release artifacts and submit a vote for this 
RC,
or report any issues so we can fix them and get a new RC out ASAP! 
Although
this release vote requires PMC votes to pass, testing, votes, and bug
reports are valuable and appreciated from everyone.

Thanks,
Ewen






Re: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram

2018-01-17 Thread Vahid S Hashemian
Great news! Congratulations Rajini!

--Vahid



From:   Gwen Shapira 
To: "d...@kafka.apache.org" , Users 

Date:   01/17/2018 10:49 AM
Subject:[ANNOUNCE] New Kafka PMC Member: Rajini Sivaram



Dear Kafka Developers, Users and Fans,

Rajini Sivaram became a committer in April 2017.  Since then, she remained
active in the community and contributed major patches, reviews and KIP
discussions. I am glad to announce that Rajini is now a member of the
Apache Kafka PMC.

Congratulations, Rajini and looking forward to your future contributions.

Gwen, on behalf of Apache Kafka PMC






Re: [ANNOUNCE] New committer: Matthias J. Sax

2018-01-12 Thread Vahid S Hashemian
Congrats Matthias! Well deserved.

--Vahid



From:   Ted Yu 
To: d...@kafka.apache.org
Cc: users@kafka.apache.org
Date:   01/12/2018 03:00 PM
Subject:Re: [ANNOUNCE] New committer: Matthias J. Sax



Congratulations, Matthias.

On Fri, Jan 12, 2018 at 2:59 PM, Guozhang Wang  wrote:

> Hello everyone,
>
> The PMC of Apache Kafka is pleased to announce Matthias J. Sax as our
> newest Kafka committer.
>
> Matthias has made tremendous contributions to Kafka Streams API since
> early 2016. His footprint has been all over the place in Streams: in the past
> two years he has been the main driver on improving the join semantics
> inside Streams DSL, summarizing all their shortcomings and bridging the
> gaps; he has also been largely working on the exactly-once semantics of
> Streams by leveraging on the transaction messaging feature in 0.11.0. In
> addition, Matthias has been very active in community activity that goes
> beyond the mailing list: he has close to 1000 up votes and 100 helpful
> flags on SO for answering almost all questions about Kafka Streams.
>
> Thank you for your contribution and welcome to Apache Kafka, Matthias!
>
>
>
> Guozhang, on behalf of the Apache Kafka PMC
>






Re: Questions about kafka-consumer-groups output

2017-11-11 Thread Vahid S Hashemian
Hi Michael,

Java based Kafka consumers can consume messages in two ways:
- simple: by manually assigning partitions to a consumer to consume from
- consumer group: a group of consumers consume messages in a coordinated 
fashion (by subscribing to topics and having partitions automatically 
assigned to them)

You are running a simple consumer for which the consumer group management 
does not apply.
If you switch to automatic partition assignment you'll see those
columns populated.
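
To make the two modes concrete, here is a minimal sketch (the bootstrap
address is hypothetical; the topic, group, and deserializers mirror the
question below):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9093");  // hypothetical broker address
props.put("group.id", "foo.test.consumers");
props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);

// Consumer group mode: the group coordinator assigns partitions, and
// kafka-consumer-groups.sh --describe lists this member with its
// CONSUMER-ID, HOST, and CLIENT-ID.
consumer.subscribe(Collections.singleton("foo"));

// Simple mode (what the code below uses): no group membership is created,
// so --describe shows "-" in those columns even though offsets can still
// be committed under the group.id.
// consumer.assign(Collections.singleton(new TopicPartition("foo", 0)));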

Please take a look at this article for additional info: 
https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/

--Vahid



From:   Michael Scofield 
To: users@kafka.apache.org
Date:   11/09/2017 10:43 PM
Subject:Questions about kafka-consumer-groups output



Hello all:

I’m using Kafka version 0.11.0.1, with the new Java consumer API (same 
version), and commit offsets to Kafka.


I want to get the consumer lags, so I use the following operation command:

$ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9093 
--describe --group foo.test.consumers
Note: This will only show information about consumers that use the Java 
consumer API (non-ZooKeeper-based consumers).

Consumer group ‘foo.test.consumers' has no active members.

TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
foo    0          109929174       109929190       16   -            -     -
foo    2          109929222       109929240       18   -            -     -
foo    1          109929004       109929023       19   -            -     -
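
(For reference, LAG here is just LOG-END-OFFSET minus CURRENT-OFFSET; e.g.
for partition 0 above, 109929190 - 109929174 = 16.)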


I have 2 questions regarding the output above:

1. What does the statement “Consumer group ‘foo.test.consumers' has no 
active members.” mean? My consumers are working correctly and using the 
sole group “foo.test.consumers”. It doesn’t make sense from the 
statement's literal meaning.

I googled it and the few results are useless.

2. Why are “CONSUMER-ID”, “HOST”, and “CLIENT-ID” all “-”? I didn’t
find any information about “CONSUMER-ID” or “HOST” in the Kafka
documents or the Consumer’s JavaDoc. Though I did find out how to set
a client id in the KafkaConsumer, even when I explicitly set it, the
“CLIENT-ID” in the command output is still “-”.


Here's my code snippet about creating a KafkaConsumer:

Properties props = new Properties();
props.put("enable.auto.commit", "false");
props.put("key.deserializer", 
"org.apache.kafka.common.serialization.ByteArrayDeserializer");
props.put("value.deserializer", 
"org.apache.kafka.common.serialization.ByteArrayDeserializer");
props.put("group.id", "foo.test.consumers");
props.put("client.id", "1234");
KafkaConsumer<byte[], byte[]> kafkaConsumer = new KafkaConsumer<>(props);

I’m manually assigning a topic-partition to the KafkaConsumer:

kafkaConsumer.assign(Collections.singleton(new TopicPartition("foo", 0)));

And manually committing offsets to Kafka:

String metadata = localhost + "@" + System.currentTimeMillis();
OffsetAndMetadata offsetAndMetadata = new 
OffsetAndMetadata(uncommittedOffset + 1, metadata);
kafkaConsumer.commitSync(Collections.singletonMap(new 
TopicPartition("foo", 0), offsetAndMetadata));

Thanks!







Re: [ANNOUNCE] New committer: Onur Karaman

2017-11-06 Thread Vahid S Hashemian
Congrats Onur!

--Vahid



From:   Ismael Juma 
To: d...@kafka.apache.org
Cc: "users@kafka.apache.org" 
Date:   11/06/2017 10:13 AM
Subject:Re: [ANNOUNCE] New committer: Onur Karaman
Sent by:isma...@gmail.com



Congratulations Onur!

Ismael

On Mon, Nov 6, 2017 at 5:24 PM, Jun Rao  wrote:

> Hi, everyone,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer
> Onur Karaman.
>
> Onur's most significant work is the improvement of Kafka controller, 
which
> is the brain of a Kafka cluster. Over time, we have accumulated quite a
> few correctness and performance issues in the controller. There have been
> attempts to fix controller issues in isolation, which would make the
> code base more complicated without a clear path of solving all problems.
> Onur is the one who took a holistic approach, by first documenting all known
> issues, writing down a new design, coming up with a plan to deliver the
> changes in phases and executing on it. At this point, Onur has completed
> the two most important phases: making the controller single threaded and
> changing the controller to use the async ZK api. The former fixed
> multiple deadlocks and race conditions. The latter significantly improved the
> performance when there are many partitions. Experimental results show
> that Onur's work reduced the controlled shutdown time by a factor of 100
> and the controller failover time by a factor of 3.
>
> Congratulations, Onur!
>
> Thanks,
>
> Jun (on behalf of the Apache Kafka PMC)
>






Re: [VOTE] 1.0.0 RC3

2017-10-25 Thread Vahid S Hashemian
Hi Guozhang,

+1 for Ubuntu and Mac: successfully built jars and tested quickstarts with 
this RC (using Java 8).

-1 for Windows: because of KAFKA-6075 and KAFKA-6100. To me these two 
issues (which have simple workarounds) sound like "blockers" - unless 
Kafka does not officially support Windows.

Thanks.
--Vahid



From:   Guozhang Wang 
To: "d...@kafka.apache.org" , 
"users@kafka.apache.org" , kafka-clients 

Date:   10/23/2017 06:01 PM
Subject:[VOTE] 1.0.0 RC3



Hello Kafka users, developers and client-developers,

This is the third candidate for release of Apache Kafka 1.0.0. The main 
PRs
that gets merged in after RC1 are the following:

https://github.com/apache/kafka/commit/dc6bfa553e73ffccd1e604963e076c78d8ddcd69

It's worth noting that starting in this version we are using a different
version protocol with three digits: *major.minor.bug-fix*

Any and all testing is welcome, but the following areas are worth
highlighting:

1. Client developers should verify that their clients can produce/consume
to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
2. Performance and stress testing. Heroku and LinkedIn have helped with
this in the past (and issues have been found and fixed).
3. End users can verify that their apps work correctly with the new 
release.

This is a major version release of Apache Kafka. It includes 29 new KIPs.
See the release notes and release plan
(https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913)
for more details. A few feature highlights:

* Java 9 support with significantly faster TLS and CRC32C implementations
* JBOD improvements: disk failure only disables failed disk but not the
broker (KIP-112/KIP-113 part I)
* Controller improvements: reduced logging change to greatly accelerate
admin request handling.
* Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
KIP-188, KIP-196)
* Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
and drop compatibility "Evolving" annotations

Release notes for the 1.0.0 release:
http://home.apache.org/~guozhang/kafka-1.0.0-rc3/RELEASE_NOTES.html



*** Please download, test and vote by Friday, October 20, 8pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~guozhang/kafka-1.0.0-rc3/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/


* Javadoc:

Re: [VOTE] 1.0.0 RC1

2017-10-17 Thread Vahid S Hashemian
Thanks Ismael for the tip.
I missed it in the Readme page (
https://github.com/apache/kafka#running-a-task-on-a-particular-version-of-scala-either-211x-or-212x
)
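
For anyone else building with a non-default Scala version, the working
combination is (a sketch; versions as discussed in this thread):

./gradlew -PscalaVersion=2.12 jar
SCALA_VERSION=2.12 bin/zookeeper-server-start.sh config/zookeeper.properties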

--Vahid



From:   Ismael Juma <isma...@gmail.com>
To: d...@kafka.apache.org
Cc: Kafka Users <users@kafka.apache.org>
Date:   10/16/2017 06:50 PM
Subject:Re: [VOTE] 1.0.0 RC1



If you don't use the default Scala version, you have to set the
SCALA_VERSION environment variable for the bin scripts to work.

Ismael

On 17 Oct 2017 1:30 am, "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
wrote:

Hi Guozhang,

I'm not sure if this should be covered by "Java 9 support" in the RC note,
but when I try to build jars from source using Java 9 (./gradlew
-PscalaVersion=2.12 jar) even though the build reports as succeeded, it
doesn't seem to have been successful:

$ bin/zookeeper-server-start.sh config/zookeeper.properties
Error: Could not find or load main class
org.apache.zookeeper.server.quorum.QuorumPeerMain
Caused by: java.lang.ClassNotFoundException:
org.apache.zookeeper.server.quorum.QuorumPeerMain

Please advise if I'm missing something.

Thanks.
--Vahid




From:   Guozhang Wang <wangg...@gmail.com>
To: "d...@kafka.apache.org" <d...@kafka.apache.org>,
"users@kafka.apache.org" <users@kafka.apache.org>, kafka-clients
<kafka-clie...@googlegroups.com>
Date:   10/13/2017 01:12 PM
Subject:[VOTE] 1.0.0 RC1



Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 1.0.0.

It's worth noting that starting in this version we are using a different
version protocol with three digits: *major.minor.bug-fix*

Any and all testing is welcome, but the following areas are worth
highlighting:

1. Client developers should verify that their clients can produce/consume
to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
2. Performance and stress testing. Heroku and LinkedIn have helped with
this in the past (and issues have been found and fixed).
3. End users can verify that their apps work correctly with the new
release.

This is a major version release of Apache Kafka. It includes 29 new KIPs.
See the release notes and release plan
(https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913)
for more details. A few feature highlights:

* Java 9 support with significantly faster TLS and CRC32C implementations
(KIP)
* JBOD improvements: disk failure only disables failed disk but not the
broker (KIP-112/KIP-113)
* Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
KIP-188, KIP-196)
* Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
and drop compatibility "Evolving" annotations

Release notes for the 1.0.0 release:
http://home.apache.org/~guozhang/kafka-1.0.0-rc1/RELEASE_NOTES.html



*** Please download, test and vote by Tuesday, October 13, 8pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~guozhang/kafka-1.0.0-rc1/

* Maven artifacts to be voted upon:
https://urldef

Re: [VOTE] 1.0.0 RC1

2017-10-16 Thread Vahid S Hashemian
Hi Guozhang,

I'm not sure if this should be covered by "Java 9 support" in the RC note, 
but when I try to build jars from source using Java 9 (./gradlew 
-PscalaVersion=2.12 jar) even though the build reports as succeeded, it 
doesn't seem to have been successful:

$ bin/zookeeper-server-start.sh config/zookeeper.properties
Error: Could not find or load main class 
org.apache.zookeeper.server.quorum.QuorumPeerMain
Caused by: java.lang.ClassNotFoundException: 
org.apache.zookeeper.server.quorum.QuorumPeerMain

Please advise if I'm missing something.

Thanks.
--Vahid




From:   Guozhang Wang 
To: "d...@kafka.apache.org" , 
"users@kafka.apache.org" , kafka-clients 

Date:   10/13/2017 01:12 PM
Subject:[VOTE] 1.0.0 RC1



Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 1.0.0.

It's worth noting that starting in this version we are using a different
version protocol with three digits: *major.minor.bug-fix*

Any and all testing is welcome, but the following areas are worth
highlighting:

1. Client developers should verify that their clients can produce/consume
to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
2. Performance and stress testing. Heroku and LinkedIn have helped with
this in the past (and issues have been found and fixed).
3. End users can verify that their apps work correctly with the new 
release.

This is a major version release of Apache Kafka. It includes 29 new KIPs.
See the release notes and release plan
(https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913)
for more details. A few feature highlights:

* Java 9 support with significantly faster TLS and CRC32C implementations
(KIP)
* JBOD improvements: disk failure only disables failed disk but not the
broker (KIP-112/KIP-113)
* Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
KIP-188, KIP-196)
* Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
and drop compatibility "Evolving" annotations

Release notes for the 1.0.0 release:
http://home.apache.org/~guozhang/kafka-1.0.0-rc1/RELEASE_NOTES.html



*** Please download, test and vote by Tuesday, October 13, 8pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~guozhang/kafka-1.0.0-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/


* Javadoc:
http://home.apache.org/~guozhang/kafka-1.0.0-rc1/javadoc/

Re: [VOTE] 1.0.0 RC0

2017-10-11 Thread Vahid S Hashemian
Hi Guozhang,

Thanks for running the release.

I tested building from source and the quickstarts on Linux, Mac, and 
Windows 64 (with Java 8 and Gradle 4.2.1).

Everything worked well on Linux and Mac, but I ran into some issues on my 
Windows 64 VM:

I reported one issue in KAFKA-6055, but it's an easy one to fix (a PR is 
already submitted).

With that fix in place I continued my testing but ran into another issue 
after build. When trying to start a broker 
(bin\windows\kafka-server-start.bat config\server.properties) I get this 
error:

[2017-10-11 21:45:11,642] FATAL  (kafka.Kafka$)
java.lang.IllegalArgumentException: Unknown signal: HUP
at sun.misc.Signal.<init>(Unknown Source)
at kafka.Kafka$.registerHandler$1(Kafka.scala:67)
at kafka.Kafka$.registerLoggingSignalHandler(Kafka.scala:73)
at kafka.Kafka$.main(Kafka.scala:82)
at kafka.Kafka.main(Kafka.scala)

This seems to have been introduced by a recent commit (
https://github.com/apache/kafka/commit/8256f882c92daa1470382502ab94cbe2c16028f1#diff-ef81cee39236d0121040043e4d69d330
) and for some reason that fix does not work on Windows.

Thanks.
--Vahid





From:   Guozhang Wang 
To: "d...@kafka.apache.org" , 
"users@kafka.apache.org" , kafka-clients 

Date:   10/10/2017 06:34 PM
Subject:[VOTE] 1.0.0 RC0



Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 1.0.0.

It's worth noting that starting in this version we are using a different
version protocol with three digits: *major.minor.bug-fix*

Any and all testing is welcome, but the following areas are worth
highlighting:

1. Client developers should verify that their clients can produce/consume
to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
2. Performance and stress testing. Heroku and LinkedIn have helped with
this in the past (and issues have been found and fixed).
3. End users can verify that their apps work correctly with the new 
release.

This is a major version release of Apache Kafka. It includes 29 new KIPs.
See the release notes and release plan
(https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913)
for more details. A few feature highlights:

* Java 9 support with significantly faster TLS and CRC32C implementations
(KIP)
* JBOD improvements: disk failure only disables failed disk but not the
broker (KIP-112/KIP-113)
* Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
KIP-188, KIP-196)
* Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
and drop compatibility "Evolving" annotations

Release notes for the 1.0.0 release:
http://home.apache.org/~guozhang/kafka-1.0.0-rc0/RELEASE_NOTES.html



*** Please download, test and vote by Friday, October 13, 8pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~guozhang/kafka-1.0.0-rc0/

* Maven artifacts to be voted upon:

Re: kafka-consumer-groups tool with SASL_PLAINTEXT

2017-07-28 Thread Vahid S Hashemian
Hi Gabriel,

I have yet to experiment with enabling SSL for Kafka.
However, there are some good documents out there that seem to cover it. 
Examples:
* 
https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/
* 
http://coheigea.blogspot.com/2016/09/securing-apache-kafka-broker-part-i.html
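
For reference, the pattern those guides converge on (a sketch; file paths,
password, and broker address are hypothetical) is to put the client-side
SSL settings in a properties file and pass it to the tool via
--command-config:

security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit

bin/kafka-consumer-groups.sh --bootstrap-server broker:9093 --list --command-config client-ssl.properties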

Is there anything specific about the SSL and consumer groups that you are 
having issues with?

Thanks.
--Vahid




From:   Gabriel Machado <gmachado@gmail.com>
To: users@kafka.apache.org
Date:   07/28/2017 08:40 AM
Subject:Re: kafka-consumer-groups tool with SASL_PLAINTEXT



Hi Vahid,

Do you know how to use consumer-group tool with ssl only (without sasl) ?

Gabriel.


Le 24 juil. 2017 11:15 PM, "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
a écrit :

Hi Meghana,

I did some experiments with SASL_PLAINTEXT and documented the results
here:
https://developer.ibm.com/opentech/2017/05/31/kafka-acls-in-practice/
I think it covers what you'd like to achieve. If not, please advise.

Thanks.
--Vahid




From:   Meghana Narasimhan <mnarasim...@bandwidth.com>
To: users@kafka.apache.org
Date:   07/24/2017 01:56 PM
Subject:kafka-consumer-groups tool with SASL_PLAINTEXT



Hi,
What is the correct way to use the kafka-consumer-groups tool with
SASL_PLAINTEXT security enabled ?

The tool seems to work fine with PLAINTEXT port but not with
SASL_PLAINTEXT. Can it be configured to work with SASL_PLAINTEXT ? If so
what permissions have to enabled for it ?

Thanks,
Meghana






Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-25 Thread Vahid S Hashemian
Hi,

If there is no further feedback on this KIP, I'll start the vote tomorrow.

Thanks.
--Vahid



From:   Vahid S Hashemian/Silicon Valley/IBM
To: dev <d...@kafka.apache.org>, "Kafka User" <users@kafka.apache.org>
Date:   07/03/2017 04:06 PM
Subject:[DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand


Hi,

I created KIP-175 to make some improvements to the ConsumerGroupCommand 
tool.
The KIP can be found here: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand

Your review and feedback is welcome!

Thanks.
--Vahid






Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-07-24 Thread Vahid S Hashemian
Hi Ewen,

Thanks for reviewing the KIP.

Your comment about the "food for thought" section makes sense. It seems 
like a bug to me, not sure how you and others feel about it. I'll remove 
it for now, and open a separate JIRA for it, so we have a record of it.
The read vs. write discussion and fixing the confusion seems to be an even 
bigger task, and will be addressed in its own KIP, if necessary.

The KIP will be updated shortly.

Thanks again.
--Vahid




From:   Ewen Cheslack-Postava <e...@confluent.io>
To: d...@kafka.apache.org
Cc: Kafka User <users@kafka.apache.org>
Date:   07/24/2017 10:36 AM
Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL 
Permission of OffsetFetch



Vahid,

Thanks for the KIP. I think we're mostly in violent agreement that the 
lack
of any Write permissions on consumer groups is confusing. Unfortunately
it's a pretty annoying issue to fix since it would require an increase in
permissions. More generally, I think it's unfortunate because by squeezing
all permissions into the lowest two levels, we have no room for 
refinement,
e.g. if we realize some permission needs to have a lower level of access
but higher than Describe, without adding new levels.

I'm +1 on the KIP. I don't think it's ideal given the discussion of Read 
vs
Write since I think Read is the correct permission in theory, but given
where we are now it makes sense.
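
For concreteness, the practical effect is that a principal that only
fetches committed offsets could then be granted just Describe on the
group, rather than Read (which also permits joining it). A sketch with
hypothetical names:

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --operation Describe --group my-group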

Regarding the extra food for thought, I think such a change would require
some plan for how to migrate people over to it. The main proposal in the
KIP works without any migration plan because it is reducing the required
permissions, but changing the requirement for listing a group to Describe
(Group) would be adding/changing the requirements, which would be 
backwards
incompatible. I'd be open to doing it, but it'd require some thought about
how it would impact users and how we'd migrate them to the updated rule 
(or
just agree that it is a bug and that including upgrade notes would be
sufficient).

-Ewen

On Mon, Jul 10, 2017 at 1:12 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> I'm bumping this up again to get some feedback, especially from some of
> the committers, on the KIP and on the note below.
>
> Thanks.
> --Vahid
>
>
>
>
> From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
> To: d...@kafka.apache.org
> Cc: "Kafka User" <users@kafka.apache.org>
> Date:   06/21/2017 12:49 PM
> Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
> Permission of OffsetFetch
>
>
>
> I appreciate everyone's feedback so far on this KIP.
>
> Before starting a vote, I'd like to also ask for feedback on the
> "Additional Food for Thought" section in the KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch#KIP-163:LowertheMinimumRequiredACLPermissionofOffsetFetch-AdditionalFoodforThought
>
> I just added some more details in that section, which I hope further
> clarifies the suggestion there.
>
> Thanks.
> --Vahid
>
>
>
>
>
>
>
>
>
>
>






Re: kafka-consumer-groups tool with SASL_PLAINTEXT

2017-07-24 Thread Vahid S Hashemian
Hi Meghana,

I did some experiments with SASL_PLAINTEXT and documented the results 
here:
https://developer.ibm.com/opentech/2017/05/31/kafka-acls-in-practice/
I think it covers what you'd like to achieve. If not, please advise.
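
In short: point the tool at the SASL-enabled port and pass the client
security settings via --command-config. A minimal sketch, assuming the
PLAIN mechanism (the port, username, and password below are invented
placeholders):

  # client-sasl.properties
  security.protocol=SASL_PLAINTEXT
  sasl.mechanism=PLAIN
  sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="alice-secret";

  bin/kafka-consumer-groups.sh --bootstrap-server broker:9093 \
    --command-config client-sasl.properties --list

On the authorization side, the principal also needs appropriate ACLs, e.g.
Describe on the group for --describe; the post above walks through them.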

Thanks.
--Vahid




From:   Meghana Narasimhan 
To: users@kafka.apache.org
Date:   07/24/2017 01:56 PM
Subject:kafka-consumer-groups tool with SASL_PLAINTEXT



Hi,
What is the correct way to use the kafka-consumer-groups tool with
SASL_PLAINTEXT security enabled?

The tool seems to work fine with the PLAINTEXT port but not with
SASL_PLAINTEXT. Can it be configured to work with SASL_PLAINTEXT? If so,
what permissions have to be enabled for it?

Thanks,
Meghana






Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-21 Thread Vahid S Hashemian
Hi Jason,

Yes, I meant as a separate KIP.
I can start a KIP for that sometime soon.

Thanks.
--Vahid




From:   Jason Gustafson <ja...@confluent.io>
To: d...@kafka.apache.org
Cc: Kafka Users <users@kafka.apache.org>
Date:   07/21/2017 11:37 AM
Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



>
> Regarding your comment about the current limitation on the information
> returned for a consumer group, do you think it's worth expanding the API
> to return some additional info (e.g. generation id, group leader, ...)?


Seems outside the scope of this KIP. Up to you, but I'd probably leave it
for future work.

-Jason

On Thu, Jul 20, 2017 at 4:21 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Jason,
>
> Regarding your comment about the current limitation on the information
> returned for a consumer group, do you think it's worth expanding the API
> to return some additional info (e.g. generation id, group leader, ...)?
>
> Thanks.
> --Vahid
>
>
>
>
> From:   Jason Gustafson <ja...@confluent.io>
> To: Kafka Users <users@kafka.apache.org>
> Cc: d...@kafka.apache.org
> Date:   07/19/2017 01:46 PM
> Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> ConsumerGroupCommand
>
>
>
> Hey Vahid,
>
> Thanks for the updates. Looks pretty good. A couple comments:
>
> 1. For the --state option, should we use the same column-oriented format
> as
> we use for the other options? I realize there would only be one row, but
> the inconsistency is a little vexing. Also, since this tool is working
> only
> with consumer groups, perhaps we can leave out "protocol type" and use
> "assignment strategy" in place of "protocol"? It would be nice to also
> include the group generation, but it seems we didn't add that to the
> DescribeGroup response. Perhaps we could also include a count of the
> number
> of members?
> 2. It's a little annoying that --subscription and --members share so 
much
> in common. Maybe we could drop --subscription and use a --verbose flag 
to
> control whether or not to include the subscription and perhaps the
> assignment as well? Not sure if that's more annoying or less, but maybe 
a
> generic --verbose will be useful in other contexts.
>
> As for your question on whether we need the --offsets option at all, I
> don't have a strong opinion, but it seems to make the command semantics 
a
> little more consistent.
>
> -Jason
>
> On Tue, Jul 18, 2017 at 12:56 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi Jason,
> >
> > I updated the KIP based on your earlier suggestions:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand
> > The only thing I am wondering at this point is whether it's worth having
> > a `--describe --offsets` option that behaves exactly like `--describe`.
> >
> > Thanks.
> > --Vahid
> >
> >
> >
> > From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
> > To: d...@kafka.apache.org
> > Cc: Kafka Users <users@kafka.apache.org>
> > Date:   07/17/2017 03:24 PM
> > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views 
for
> > ConsumerGroupCommand
> >
> >
> >
> > Hi Jason,
> >
> > Thanks for your quick feedback. Your suggestions seem reasonable.
> > I'll start updating the KIP accordingly and will send out another note
> > when it's ready.
> >
> > Regards.
> > --Vahid
> >
> >
> >
> >
> > From:   Jason Gustafson <ja...@confluent.io>
> > To: d...@kafka.apache.org
> > Cc: Kafka Users <users@kafka.apache.org>
> > Date:   07/17/2017 02:11 PM
> > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views 
for
> > ConsumerGroupCommand
> >
> >
> >
> > Hey Vahid,
> >
> > Hmm... If possible, it would be nice to avoid cluttering the default
> > option
> > too much, especially if it is information which is going to be the 
same
> > for
> > all members (such as the generation). My preference would be to use 
the
> > --state option that you've suggested for that info so that we can
> > represent
> > it more concisely.
> >
> > The reason I prefer the current output is that it is clear every entry
> > corresponds to a partition for which we have a committed offset. Entries
> > like
> > this look strange:
> >
> > TOPIC  PA

Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-20 Thread Vahid S Hashemian
Hi Jason,

Regarding your comment about the current limitation on the information 
returned for a consumer group, do you think it's worth expanding the API 
to return some additional info (e.g. generation id, group leader, ...)?

Thanks.
--Vahid




From:   Jason Gustafson <ja...@confluent.io>
To: Kafka Users <users@kafka.apache.org>
Cc: d...@kafka.apache.org
Date:   07/19/2017 01:46 PM
Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



Hey Vahid,

Thanks for the updates. Looks pretty good. A couple comments:

1. For the --state option, should we use the same column-oriented format 
as
we use for the other options? I realize there would only be one row, but
the inconsistency is a little vexing. Also, since this tool is working 
only
with consumer groups, perhaps we can leave out "protocol type" and use
"assignment strategy" in place of "protocol"? It would be nice to also
include the group generation, but it seems we didn't add that to the
DescribeGroup response. Perhaps we could also include a count of the 
number
of members?
2. It's a little annoying that --subscription and --members share so much
in common. Maybe we could drop --subscription and use a --verbose flag to
control whether or not to include the subscription and perhaps the
assignment as well? Not sure if that's more annoying or less, but maybe a
generic --verbose will be useful in other contexts.

As for your question on whether we need the --offsets option at all, I
don't have a strong opinion, but it seems to make the command semantics a
little more consistent.

-Jason

On Tue, Jul 18, 2017 at 12:56 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Jason,
>
> I updated the KIP based on your earlier suggestions:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand
> The only thing I am wondering at this point is whether it's worth having
> a `--describe --offsets` option that behaves exactly like `--describe`.
>
> Thanks.
> --Vahid
>
>
>
> From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
> To: d...@kafka.apache.org
> Cc: Kafka Users <users@kafka.apache.org>
> Date:   07/17/2017 03:24 PM
> Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> ConsumerGroupCommand
>
>
>
> Hi Jason,
>
> Thanks for your quick feedback. Your suggestions seem reasonable.
> I'll start updating the KIP accordingly and will send out another note
> when it's ready.
>
> Regards.
> --Vahid
>
>
>
>
> From:   Jason Gustafson <ja...@confluent.io>
> To: d...@kafka.apache.org
> Cc: Kafka Users <users@kafka.apache.org>
> Date:   07/17/2017 02:11 PM
> Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> ConsumerGroupCommand
>
>
>
> Hey Vahid,
>
> Hmm... If possible, it would be nice to avoid cluttering the default
> option
> too much, especially if it is information which is going to be the same
> for
> all members (such as the generation). My preference would be to use the
> --state option that you've suggested for that info so that we can
> represent
> it more concisely.
>
> The reason I prefer the current output is that it is clear every entry
> corresponds to a partition for which we have a committed offset. Entries
> like this look strange:
>
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID                                     HOST        CLIENT-ID
> -      -          -               -               -    consumer4-e173f09d-c761-4f4e-95c7-6fb73bb8fbff  /127.0.0.1  consumer4
> -      -          -               -               -    consumer5-7b80e428-f8ff-43f3-8360-afd1c8ba43ea  /127.0.0.1  consumer5
>
> It makes me think that the consumers have committed offsets for an 
unknown
> partition. The --members option seems like a clearer way to communicate
> the
> fact that there are some members with no assigned partitions.
>
> A few additional suggestions:
>
> 1. Maybe we can rename --partitions to --offsets or --committed-offsets
> and
> the output could match the default output (in other words, --offsets is
> treated as the default switch). Seems no harm including the assignment
> information if we have it.
> 2. Along the lines of Onur's comment, it would be nice if the --members
> option included the list of assignment strategies that the consumer 
joined
> with (round-robin, range, etc). This list should always be small.
> 3. Thinking a little more, I'm not sure how necessary a --topics option
> is.
> The --partitions (or --offsets) option already sh

Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-19 Thread Vahid S Hashemian
It makes sense. Thanks for clarifying.
The KIP is updated based on your feedback.

Thanks again.
--Vahid




From:   Jason Gustafson <ja...@confluent.io>
To: d...@kafka.apache.org
Cc: Kafka Users <users@kafka.apache.org>
Date:   07/19/2017 05:06 PM
Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



I was just thinking that the subscription/assignment information can be
quite large (especially in MM use cases), so it would be nice to keep the
default output concise. I'm also not thrilled about adding more options,
but --verbose is sort of a standard one. What do you think?

-Jason

On Wed, Jul 19, 2017 at 4:39 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Jason,
>
> Thanks for sharing your feedback on the updated KIP.
> Your suggestions look good to me.
> Do you see a problem with having the `--members` provide member
> subscription and assignment too, so we can avoid an additional 
`--verbose`
> option?
>
> Thanks.
> --Vahid
>
>
>
>
> From:   Jason Gustafson <ja...@confluent.io>
> To: Kafka Users <users@kafka.apache.org>
> Cc: d...@kafka.apache.org
> Date:   07/19/2017 01:46 PM
> Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> ConsumerGroupCommand
>
>
>
> Hey Vahid,
>
> Thanks for the updates. Looks pretty good. A couple comments:
>
> 1. For the --state option, should we use the same column-oriented format
> as
> we use for the other options? I realize there would only be one row, but
> the inconsistency is a little vexing. Also, since this tool is working
> only
> with consumer groups, perhaps we can leave out "protocol type" and use
> "assignment strategy" in place of "protocol"? It would be nice to also
> include the group generation, but it seems we didn't add that to the
> DescribeGroup response. Perhaps we could also include a count of the
> number
> of members?
> 2. It's a little annoying that --subscription and --members share so 
much
> in common. Maybe we could drop --subscription and use a --verbose flag 
to
> control whether or not to include the subscription and perhaps the
> assignment as well? Not sure if that's more annoying or less, but maybe 
a
> generic --verbose will be useful in other contexts.
>
> As for your question on whether we need the --offsets option at all, I
> don't have a strong opinion, but it seems to make the command semantics 
a
> little more consistent.
>
> -Jason
>
> On Tue, Jul 18, 2017 at 12:56 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi Jason,
> >
> > I updated the KIP based on your earlier suggestions:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand
> > The only thing I am wondering at this point is whether it's worth having
> > a `--describe --offsets` option that behaves exactly like `--describe`.
> >
> > Thanks.
> > --Vahid
> >
> >
> >
> > From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
> > To: d...@kafka.apache.org
> > Cc: Kafka Users <users@kafka.apache.org>
> > Date:   07/17/2017 03:24 PM
> > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views 
for
> > ConsumerGroupCommand
> >
> >
> >
> > Hi Jason,
> >
> > Thanks for your quick feedback. Your suggestions seem reasonable.
> > I'll start updating the KIP accordingly and will send out another note
> > when it's ready.
> >
> > Regards.
> > --Vahid
> >
> >
> >
> >
> > From:   Jason Gustafson <ja...@confluent.io>
> > To: d...@kafka.apache.org
> > Cc: Kafka Users <users@kafka.apache.org>
> > Date:   07/17/2017 02:11 PM
> > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views 
for
> > ConsumerGroupCommand
> >
> >
> >
> > Hey Vahid,
> >
> > Hmm... If possible, it would be nice to avoid cluttering the default
> > option
> > too much, especially if it is information which is going to be the 
same
> > for
> > all members (such as the generation). My preference would be to use 
the
> > --state option that you've suggested for that info so that we can
> > represent
> > it more concisely.
> >
> > The reason I prefer the current output is that it is clear every entry
> > corresponds to a partition for which we have a committed offset. Entries
> > like
> > this look strange:
> >
> > TOPIC  PARTITION  CURRENT-OFFSET

Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-19 Thread Vahid S Hashemian
Hi Jason,

Thanks for sharing your feedback on the updated KIP.
Your suggestions look good to me.
Do you see a problem with having the `--members` provide member 
subscription and assignment too, so we can avoid an additional `--verbose` 
option?

Thanks.
--Vahid




From:   Jason Gustafson <ja...@confluent.io>
To: Kafka Users <users@kafka.apache.org>
Cc: d...@kafka.apache.org
Date:   07/19/2017 01:46 PM
Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



Hey Vahid,

Thanks for the updates. Looks pretty good. A couple comments:

1. For the --state option, should we use the same column-oriented format 
as
we use for the other options? I realize there would only be one row, but
the inconsistency is a little vexing. Also, since this tool is working 
only
with consumer groups, perhaps we can leave out "protocol type" and use
"assignment strategy" in place of "protocol"? It would be nice to also
include the group generation, but it seems we didn't add that to the
DescribeGroup response. Perhaps we could also include a count of the 
number
of members?
2. It's a little annoying that --subscription and --members share so much
in common. Maybe we could drop --subscription and use a --verbose flag to
control whether or not to include the subscription and perhaps the
assignment as well? Not sure if that's more annoying or less, but maybe a
generic --verbose will be useful in other contexts.

As for your question on whether we need the --offsets option at all, I
don't have a strong opinion, but it seems to make the command semantics a
little more consistent.

-Jason

On Tue, Jul 18, 2017 at 12:56 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Jason,
>
> I updated the KIP based on your earlier suggestions:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand
> The only thing I am wondering at this point is whether it's worth having
> a `--describe --offsets` option that behaves exactly like `--describe`.
>
> Thanks.
> --Vahid
>
>
>
> From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
> To: d...@kafka.apache.org
> Cc: Kafka Users <users@kafka.apache.org>
> Date:   07/17/2017 03:24 PM
> Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> ConsumerGroupCommand
>
>
>
> Hi Jason,
>
> Thanks for your quick feedback. Your suggestions seem reasonable.
> I'll start updating the KIP accordingly and will send out another note
> when it's ready.
>
> Regards.
> --Vahid
>
>
>
>
> From:   Jason Gustafson <ja...@confluent.io>
> To: d...@kafka.apache.org
> Cc: Kafka Users <users@kafka.apache.org>
> Date:   07/17/2017 02:11 PM
> Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> ConsumerGroupCommand
>
>
>
> Hey Vahid,
>
> Hmm... If possible, it would be nice to avoid cluttering the default
> option
> too much, especially if it is information which is going to be the same
> for
> all members (such as the generation). My preference would be to use the
> --state option that you've suggested for that info so that we can
> represent
> it more concisely.
>
> The reason I prefer the current output is that it is clear every entry
> corresponds to a partition for which we have a committed offset. Entries
> like this look strange:
>
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID                                     HOST        CLIENT-ID
> -      -          -               -               -    consumer4-e173f09d-c761-4f4e-95c7-6fb73bb8fbff  /127.0.0.1  consumer4
> -      -          -               -               -    consumer5-7b80e428-f8ff-43f3-8360-afd1c8ba43ea  /127.0.0.1  consumer5
>
> It makes me think that the consumers have committed offsets for an 
unknown
> partition. The --members option seems like a clearer way to communicate
> the
> fact that there are some members with no assigned partitions.
>
> A few additional suggestions:
>
> 1. Maybe we can rename --partitions to --offsets or --committed-offsets
> and
> the output could match the default output (in other words, --offsets is
> treated as the default switch). Seems no harm including the assignment
> information if we have it.
> 2. Along the lines of Onur's comment, it would be nice if the --members
> option included the list of assignment strategies that the consumer 
joined
> with (round-robin, range, etc). This list should always be small.
> 3. Thinking a little more, I'm not sure how necessary a --topics option
> is.
> The --partitions (or --of

Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-18 Thread Vahid S Hashemian
Hi Jason,

I updated the KIP based on your earlier suggestions: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand
The only thing I am wondering at this point is whether it's worth having
a `--describe --offsets` option that behaves exactly like `--describe`.

Thanks.
--Vahid



From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
To: d...@kafka.apache.org
Cc: Kafka Users <users@kafka.apache.org>
Date:   07/17/2017 03:24 PM
Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



Hi Jason,

Thanks for your quick feedback. Your suggestions seem reasonable.
I'll start updating the KIP accordingly and will send out another note 
when it's ready.

Regards.
--Vahid




From:   Jason Gustafson <ja...@confluent.io>
To: d...@kafka.apache.org
Cc: Kafka Users <users@kafka.apache.org>
Date:   07/17/2017 02:11 PM
Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



Hey Vahid,

Hmm... If possible, it would be nice to avoid cluttering the default 
option
too much, especially if it is information which is going to be the same 
for
all members (such as the generation). My preference would be to use the
--state option that you've suggested for that info so that we can 
represent
it more concisely.

The reason I prefer the current output is that it is clear every entry
corresponds to a partition for which we have a committed offset. Entries
like this look strange:

TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID                                     HOST        CLIENT-ID
-      -          -               -               -    consumer4-e173f09d-c761-4f4e-95c7-6fb73bb8fbff  /127.0.0.1  consumer4
-      -          -               -               -    consumer5-7b80e428-f8ff-43f3-8360-afd1c8ba43ea  /127.0.0.1  consumer5

It makes me think that the consumers have committed offsets for an unknown
partition. The --members option seems like a clearer way to communicate 
the
fact that there are some members with no assigned partitions.

A few additional suggestions:

1. Maybe we can rename --partitions to --offsets or --committed-offsets 
and
the output could match the default output (in other words, --offsets is
treated as the default switch). Seems no harm including the assignment
information if we have it.
2. Along the lines of Onur's comment, it would be nice if the --members
option included the list of assignment strategies that the consumer joined
with (round-robin, range, etc). This list should always be small.
3. Thinking a little more, I'm not sure how necessary a --topics option 
is.
The --partitions (or --offsets) option already shows the current
assignment. Maybe --topics could be --subscription and just list the 
topics
that the members subscribed to?

Thanks,
Jason

On Mon, Jul 17, 2017 at 11:04 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Jason, Onur, thank you for reviewing the KIP.
>
> Regarding the default `--describe` option, so far there have been a few
> suggestions that conflict a bit. Here are the suggestions:
> - Keep the current behavior exactly as is (Edo, Jeff)
> - Remove members with no assignments from the current result set (Jason)
> - Add additional status info to the result set (Onur) -- I assume the
> additional status (which are group related info, rather than group 
member
> related) will appear in the result separate from the member table (e.g.,
> before the table)
>
> One thing we could do to remain as close as possible to these 
suggestions
> is trim the resulting rows as per Jason's suggestion, and add the
> additional details that Onur suggested. Would this work for everyone? 
Edo,
> Jeff, what do you think?
> If so, I'll update the KIP accordingly.
>
> Some of the other updates based on the feedback received:
> * "--describe --members" will not include a topic(partitions) column.
> Instead there will be a #Partitions (number of partitions assigned to 
this
> member) column
> * "--describe --topics" will be added to list topic partitions in the
> group and the relevant info
> * "--describe --state" will be added to report group related info, such 
as
> state, protocol, ...
>
> Thanks.
> --Vahid
>












Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-17 Thread Vahid S Hashemian
Hi Jason,

Thanks for your quick feedback. Your suggestions seem reasonable.
I'll start updating the KIP accordingly and will send out another note 
when it's ready.

Regards.
--Vahid




From:   Jason Gustafson <ja...@confluent.io>
To: d...@kafka.apache.org
Cc: Kafka Users <users@kafka.apache.org>
Date:   07/17/2017 02:11 PM
Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



Hey Vahid,

Hmm... If possible, it would be nice to avoid cluttering the default 
option
too much, especially if it is information which is going to be the same 
for
all members (such as the generation). My preference would be to use the
--state option that you've suggested for that info so that we can 
represent
it more concisely.

The reason I prefer the current output is that it is clear every entry
corresponds to a partition for which we have a committed offset. Entries
like this look strange:

TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID                                     HOST        CLIENT-ID
-      -          -               -               -    consumer4-e173f09d-c761-4f4e-95c7-6fb73bb8fbff  /127.0.0.1  consumer4
-      -          -               -               -    consumer5-7b80e428-f8ff-43f3-8360-afd1c8ba43ea  /127.0.0.1  consumer5

It makes me think that the consumers have committed offsets for an unknown
partition. The --members option seems like a clearer way to communicate 
the
fact that there are some members with no assigned partitions.

A few additional suggestions:

1. Maybe we can rename --partitions to --offsets or --committed-offsets 
and
the output could match the default output (in other words, --offsets is
treated as the default switch). Seems no harm including the assignment
information if we have it.
2. Along the lines of Onur's comment, it would be nice if the --members
option included the list of assignment strategies that the consumer joined
with (round-robin, range, etc). This list should always be small.
3. Thinking a little more, I'm not sure how necessary a --topics option 
is.
The --partitions (or --offsets) option already shows the current
assignment. Maybe --topics could be --subscription and just list the 
topics
that the members subscribed to?

Thanks,
Jason

On Mon, Jul 17, 2017 at 11:04 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Jason, Onur, thank you for reviewing the KIP.
>
> Regarding the default `--describe` option, so far there have been a few
> suggestions that conflict a bit. Here are the suggestions:
> - Keep the current behavior exactly as is (Edo, Jeff)
> - Remove members with no assignments from the current result set (Jason)
> - Add additional status info to the result set (Onur) -- I assume the
> additional status (which are group related info, rather than group 
member
> related) will appear in the result separate from the member table (e.g.,
> before the table)
>
> One thing we could do to remain as close as possible to these 
suggestions
> is trim the resulting rows as per Jason's suggestion, and add the
> additional details that Onur suggested. Would this work for everyone? 
Edo,
> Jeff, what do you think?
> If so, I'll update the KIP accordingly.
>
> Some of the other updates based on the feedback received:
> * "--describe --members" will not include a topic(partitions) column.
> Instead there will be a #Partitions (number of partitions assigned to 
this
> member) column
> * "--describe --topics" will be added to list topic partitions in the
> group and the relevant info
> * "--describe --state" will be added to report group related info, such 
as
> state, protocol, ...
>
> Thanks.
> --Vahid
>








Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-17 Thread Vahid S Hashemian
Jason, Onur, thank you for reviewing the KIP.

Regarding the default `--describe` option, so far there have been a few 
suggestions that conflict a bit. Here are the suggestions:
- Keep the current behavior exactly as is (Edo, Jeff)
- Remove members with no assignments from the current result set (Jason)
- Add additional status info to the result set (Onur) -- I assume the 
additional status (which are group related info, rather than group member 
related) will appear in the result separate from the member table (e.g., 
before the table)

One thing we could do to remain as close as possible to these suggestions 
is trim the resulting rows as per Jason's suggestion, and add the 
additional details that Onur suggested. Would this work for everyone? Edo, 
Jeff, what do you think?
If so, I'll update the KIP accordingly.

Some of the other updates based on the feedback received:
* "--describe --members" will not include a topic(partitions) column. 
Instead there will be a #Partitions (number of partitions assigned to this 
member) column
* "--describe --topics" will be added to list topic partitions in the 
group and the relevant info
* "--describe --state" will be added to report group related info, such as 
state, protocol, ...
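
As a rough sketch, the proposed switches would be invoked along these lines
(group name invented, output omitted):

  bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --describe --group my-group --members
  bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --describe --group my-group --state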

Thanks.
--Vahid



From:   Onur Karaman <onurkaraman.apa...@gmail.com>
To: d...@kafka.apache.org
Cc: Kafka Users <users@kafka.apache.org>
Date:   07/14/2017 11:40 AM
Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



In other words, I think the default should be the exact behavior we have
today plus the remaining group information from DescribeGroupResponse.

On Fri, Jul 14, 2017 at 11:36 AM, Onur Karaman 
<onurkaraman.apa...@gmail.com
> wrote:

> I think if we had the opportunity to start from scratch, --describe 
would
> have been the following:
> --describe --offsets: shows all offsets committed for the group as well 
as
> lag
> --describe --state (or maybe --members): shows the full
> DescribeGroupResponse output (including things like generation id, 
state,
> protocol type, etc)
> --describe: shows the merged version of the above two.
>
> On Fri, Jul 14, 2017 at 10:56 AM, Jason Gustafson <ja...@confluent.io>
> wrote:
>
>> Hey Vahid,
>>
>> Thanks for the KIP. Looks like a nice improvement. One minor 
suggestion:
>> Since consumers can be subscribed to a large number of topics, I'm
>> wondering if it might be better to leave out the topic list from the
>> "describe members" option so that the output remains concise? Perhaps 
we
>> could list only the number of assigned partitions so that users have an
>> easy way to check the overall balance and we can add a separate 
"describe
>> topics" switch to see the topic breakdown?
>>
>> As for the default --describe, it seems safest to keep its current
>> behavior. In other words, we should list all partitions which have
>> committed offsets for the group even if the partition is not currently
>> assigned. However, I don't think we need to try and fit members without
>> any
>> assigned partitions into that view.
>>
>> Thanks,
>> Jason
>>
>> On Fri, Jul 7, 2017 at 10:49 AM, Vahid S Hashemian <
>> vahidhashem...@us.ibm.com> wrote:
>>
>> > Thanks Jeff for your feedback on the usefulness of the current tool.
>> >
>> > --Vahid
>> >
>> >
>> >
>> >
>> > From:   Jeff Widman <j...@netskope.com>
>> > To: d...@kafka.apache.org
>> > Cc: Kafka User <users@kafka.apache.org>
>> > Date:   07/06/2017 02:25 PM
>> > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views 
for
>> > ConsumerGroupCommand
>> >
>> >
>> >
>> > Thanks for the KIP Vahid. I think it'd be useful to have these 
filters.
>> >
>> > That said, I also agree with Edo.
>> >
>> > We don't currently rely on the output, but there's been more than one
>> time
>> > when debugging an issue that I notice something amiss when I see all 
the
>> > data at once but if it wasn't present in the default view I probably
>> would
>> > have missed it as I wouldn't have thought to look at that particular
>> > filter.
>> >
>> > This would also be more consistent with the API of the 
kafka-topics.sh
>> > where "--describe" gives everything and then can be filtered down.
>> >
>> >
>> >
>> > On Tue, Jul 4, 2017 at 10:42 AM, Edoardo Comar <eco...@uk.ibm.com>
>> wrote:
>> >
>> > > Hi Vahid,
>> > > no we are not relyi

Re: Kafka authorizer ACLs question

2017-07-11 Thread Vahid S Hashemian
Hi SK,

Could you please take a look at this document (
https://developer.ibm.com/opentech/2017/05/31/kafka-acls-in-practice/) and 
confirm you performed the steps in Broker Setup on all brokers?
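
For reference, the Broker Setup section there amounts to roughly the
following in server.properties on every broker, followed by a restart (the
super user principal is an invented example):

  authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
  allow.everyone.if.no.acl.found=false
  super.users=User:admin

Note that once allow.everyone.if.no.acl.found=false is set, the brokers
themselves must also be authorized (e.g. via super.users), or inter-broker
traffic gets denied as well.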

Thanks.
--Vahid



From:   Sruthi Kumar Annamneedu 
To: users@kafka.apache.org
Date:   07/11/2017 07:29 PM
Subject:Kafka authorizer ACLs question



Hi,

I am hoping someone from the community can help me clarify the Kafka
authorizer feature.

*Question:* Do I have to set up any property other than
'authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer' in the
server.properties file to activate ACLs using the Kafka authorizer?

*Background:* We have a 3-node Kafka cluster (Cloudera environment): N1, N2,
and N3. On all 3 nodes, I have updated the server properties file with the
authorizer.class.name property and also with
'allow.everyone.if.no.acl.found=false'. The expectation is that no one is
allowed to produce/consume messages on a test topic, as I have not set up
ACLs on the test topic yet.

*Actual result:* I am able to produce/consume messages just as before
setting up these two properties. Not exactly sure what I am missing.

*Expected result:* An error message complaining that ACLs are blocking
producing/consuming messages.

Thank you in advance for your time.

Best,
SK






Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-07-10 Thread Vahid S Hashemian
I'm bumping this up again to get some feedback, especially from some of 
the committers, on the KIP and on the note below.

Thanks.
--Vahid




From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
To: d...@kafka.apache.org
Cc: "Kafka User" <users@kafka.apache.org>
Date:   06/21/2017 12:49 PM
Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL 
Permission of OffsetFetch



I appreciate everyone's feedback so far on this KIP.

Before starting a vote, I'd like to also ask for feedback on the 
"Additional Food for Thought" section in the KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch#KIP-163:LowertheMinimumRequiredACLPermissionofOffsetFetch-AdditionalFoodforThought

I just added some more details in that section, which I hope further 
clarifies the suggestion there.

Thanks.
--Vahid












Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-07 Thread Vahid S Hashemian
Thanks Jeff for your feedback on the usefulness of the current tool.

--Vahid




From:   Jeff Widman <j...@netskope.com>
To: d...@kafka.apache.org
Cc: Kafka User <users@kafka.apache.org>
Date:   07/06/2017 02:25 PM
Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



Thanks for the KIP Vahid. I think it'd be useful to have these filters.

That said, I also agree with Edo.

We don't currently rely on the output, but more than once while debugging
an issue I've noticed something amiss only because I saw all the data at
once; if it hadn't been present in the default view I probably would have
missed it, as I wouldn't have thought to look at that particular filter.

This would also be more consistent with the API of kafka-topics.sh, where
"--describe" gives everything and then can be filtered down.



On Tue, Jul 4, 2017 at 10:42 AM, Edoardo Comar <eco...@uk.ibm.com> wrote:

> Hi Vahid,
> no we are not relying on parsing the current output.
>
> I just thought that keeping the full output isn't necessarily that bad 
as
> it shows some sort of history of how a group was used.
>
> ciao
> Edo
> --
>
> Edoardo Comar
>
> IBM Message Hub
>
> IBM UK Ltd, Hursley Park, SO21 2JN
>
> "Vahid S Hashemian" <vahidhashem...@us.ibm.com> wrote on 04/07/2017
> 17:11:43:
>
> > From: "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
> > To: d...@kafka.apache.org
> > Cc: "Kafka User" <users@kafka.apache.org>
> > Date: 04/07/2017 17:12
> > Subject: Re: [DISCUSS] KIP-175: Additional '--describe' views for
> > ConsumerGroupCommand
> >
> > Hi Edo,
> >
> > Thanks for reviewing the KIP.
> >
> > Modifying the default behavior of `--describe` was suggested in the
> > related JIRA.
> > We could poll the community to see whether they go for that option, 
or,
> as
> > you suggested, introducing a new `--only-xxx` ( can't also think of a
> > proper name right now :) ) option instead.
> >
> > Are you making use of the current `--describe` output and relying on 
the
>
> > full data set?
> >
> > Thanks.
> > --Vahid
> >
> >
> >
> >
> > From:   Edoardo Comar <eco...@uk.ibm.com>
> > To: d...@kafka.apache.org
> > Cc: "Kafka User" <users@kafka.apache.org>
> > Date:   07/04/2017 03:17 AM
> > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views 
for
>
> > ConsumerGroupCommand
> >
> >
> >
> > Thanks Vahid, I like the KIP.
> >
> > One question - could we keep the current "--describe" behavior 
unchanged
>
> > and introduce "--only-xxx" options to filter down the full output as 
you
>
> > proposed ?
> >
> > ciao,
> > Edo
> > --
> >
> > Edoardo Comar
> >
> > IBM Message Hub
> >
> > IBM UK Ltd, Hursley Park, SO21 2JN
> >
> >
> >
> > From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
> > To: dev <d...@kafka.apache.org>, "Kafka User"
> <users@kafka.apache.org>
> > Date:   04/07/2017 00:06
> > Subject:[DISCUSS] KIP-175: Additional '--describe' views for
> > ConsumerGroupCommand
> >
> >
> >
> > Hi,
> >
> > I created KIP-175 to make some improvements to the 
ConsumerGroupCommand
> > tool.
> > The KIP can be found here:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand
> >
> >
> >
> > Your review and feedback is welcome!
> >
> > Thanks.
> > --Vahid
> >
> >
> >
> >
> >
> > Unless stated otherwise above:
> > IBM United Kingdom Limited - Registered in England and Wales with 
number
>
> > 741598.
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> 3AU
> >
> >
> >
> >
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 
3AU
>






Re: Mirroring multiple clusters into one

2017-07-07 Thread Vahid S Hashemian
Thanks a lot for your input James.

Regards,
--Vahid



From:   James Cheng <wushuja...@gmail.com>
To: d...@kafka.apache.org
Cc: users@kafka.apache.org
Date:   07/06/2017 10:26 PM
Subject:Re: Mirroring multiple clusters into one



Answers inline below.

-James

Sent from my iPhone

> On Jul 7, 2017, at 1:18 AM, Vahid S Hashemian 
<vahidhashem...@us.ibm.com> wrote:
> 
> James,
> 
> Thanks for sharing your thoughts and experience.
> Could you please also confirm whether
> - you do any encryption for the mirrored data?
Not at the Kafka level. The data goes over a VPN.

> - you have a many-to-one mirroring similar to what I described?
> 

Yes, we mirror multiple source clusters to a single target cluster. We 
have a topic naming convention where our topics are prefixed with their 
cluster name, so as long as we follow that convention, each source topic 
gets mirrored to a unique target topic. That is, we try not to have 
multiple mirrormakers writing to a single target topic. 

Our topic names in the target cluster get prefixed with the string 
"mirror." And then we never mirror topics that start with "mirror." This 
prevents us from creating mirroring loops.
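
As an illustration, that loop guard can be expressed directly in the
MirrorMaker whitelist, which is a Java regex; a sketch with invented config
file names:

  bin/kafka-mirror-maker.sh --consumer.config source-a.properties \
    --producer.config aggregate.properties --whitelist '^(?!mirror\.).*'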

> Thanks.
> --Vahid
> 
> 
> 
> From:   James Cheng <wushuja...@gmail.com>
> To: users@kafka.apache.org
> Cc: dev <d...@kafka.apache.org>
> Date:   07/06/2017 12:37 PM
> Subject:Re: Mirroring multiple clusters into one
> 
> 
> 
> I'm not sure what the "official" recommendation is. At TiVo, we *do* run 

> all our mirrormakers near the target cluster. It works fine for us, but 
> we're still fairly inexperienced, so I'm not sure how strong of a data 
> point we should be.
> 
> I think the thought process is, if you are mirroring from a source 
cluster 
> to a target cluster where there is a WAN between the two, then whichever 

> request goes across the WAN has a higher chance of intermittent failure 
> than the one over the LAN. That means that if mirrormaker is near the 
> source cluster, the produce request over the WAN to the target cluster 
may 
> fail. If the mirrormaker is near the target cluster, then the fetch 
> request over the WAN to the source cluster may fail.
> 
> Failed fetch requests don't have much impact on data replication, it 
just 
> delays it. Whereas a failure during a produce request may introduce 
> duplicates.
> 
> Becket Qin from LinkedIn did a presentation on tuning producer 
performance 
> at a meetup last year, and I remember he specifically talked about 
> producing over a WAN as one of the cases where you have to tune 
settings. 
> Maybe that presentation will give more ideas about what to look at. 
> 
https://www.slideshare.net/mobile/JiangjieQin/producer-performance-tuning-for-apache-kafka-63147600

> 
> 
> -James
> 
> Sent from my iPhone
> 
>> On Jul 6, 2017, at 1:00 AM, Vahid S Hashemian 
> <vahidhashem...@us.ibm.com> wrote:
>> 
>> The literature suggests running the MM on the target cluster when 
> possible 
>> (with the exception of when encryption is required for transferred 
> data).
>> I am wondering if this is still the recommended approach when mirroring 

>> from multiple clusters to a single cluster (i.e. multiple MM 
instances).
>> Is there anything in particular (metric, specification, etc.) to 
> consider 
>> before making a decision?
>> 
>> Thanks.
>> --Vahid
>> 
>> 
> 
> 
> 
> 







Re: Mirroring multiple clusters into one

2017-07-06 Thread Vahid S Hashemian
James,

Thanks for sharing your thoughts and experience.
Could you please also confirm whether
- you do any encryption for the mirrored data?
- you have a many-to-one mirroring similar to what I described?

Thanks.
--Vahid



From:   James Cheng <wushuja...@gmail.com>
To: users@kafka.apache.org
Cc: dev <d...@kafka.apache.org>
Date:   07/06/2017 12:37 PM
Subject:Re: Mirroring multiple clusters into one



I'm not sure what the "official" recommendation is. At TiVo, we *do* run 
all our mirrormakers near the target cluster. It works fine for us, but 
we're still fairly inexperienced, so I'm not sure how strong of a data 
point we should be.

I think the thought process is, if you are mirroring from a source cluster 
to a target cluster where there is a WAN between the two, then whichever 
request goes across the WAN has a higher chance of intermittent failure 
than the one over the LAN. That means that if mirrormaker is near the 
source cluster, the produce request over the WAN to the target cluster may 
fail. If the mirrormaker is near the target cluster, then the fetch 
request over the WAN to the source cluster may fail.

Failed fetch requests don't have much impact on data replication, it just 
delays it. Whereas a failure during a produce request may introduce 
duplicates.
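
For what it's worth, the producer knobs usually tuned for a WAN hop are
along these lines (values purely illustrative, not recommendations):

  acks=all
  retries=10
  max.in.flight.requests.per.connection=1
  request.timeout.ms=60000
  linger.ms=100
  batch.size=262144

With a single in-flight request plus retries, intermittent produce failures
are retried without reordering, at some cost to throughput.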

Becket Qin from LinkedIn did a presentation on tuning producer performance 
at a meetup last year, and I remember he specifically talked about 
producing over a WAN as one of the cases where you have to tune settings. 
Maybe that presentation will give more ideas about what to look at. 
https://www.slideshare.net/mobile/JiangjieQin/producer-performance-tuning-for-apache-kafka-63147600


-James

Sent from my iPhone

> On Jul 6, 2017, at 1:00 AM, Vahid S Hashemian 
<vahidhashem...@us.ibm.com> wrote:
> 
> The literature suggests running the MM on the target cluster when 
possible 
> (with the exception of when encryption is required for transferred 
data).
> I am wondering if this is still the recommended approach when mirroring 
> from multiple clusters to a single cluster (i.e. multiple MM instances).
> Is there anything in particular (metric, specification, etc.) to 
consider 
> before making a decision?
> 
> Thanks.
> --Vahid
> 
> 






Mirroring multiple clusters into one

2017-07-05 Thread Vahid S Hashemian
The literature suggests running the MM on the target cluster when possible
(except when encryption is required for the transferred data).
I am wondering if this is still the recommended approach when mirroring 
from multiple clusters to a single cluster (i.e. multiple MM instances).
Is there anything in particular (metric, specification, etc.) to consider 
before making a decision?

Thanks.
--Vahid




Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-04 Thread Vahid S Hashemian
Hi Edo,

Thanks for reviewing the KIP.

Modifying the default behavior of `--describe` was suggested in the 
related JIRA.
We could poll the community to see whether they go for that option or, as
you suggested, introduce a new `--only-xxx` (can't think of a proper name
right now :)) option instead.

Are you making use of the current `--describe` output and relying on the 
full data set?

Thanks.
--Vahid




From:   Edoardo Comar <eco...@uk.ibm.com>
To: d...@kafka.apache.org
Cc: "Kafka User" <users@kafka.apache.org>
Date:   07/04/2017 03:17 AM
Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



Thanks Vahid, I like the KIP.

One question - could we keep the current "--describe" behavior unchanged 
and introduce "--only-xxx" options to filter down the full output as you 
proposed ?

ciao,
Edo
--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
To: dev <d...@kafka.apache.org>, "Kafka User" <users@kafka.apache.org>
Date:   04/07/2017 00:06
Subject:[DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



Hi,

I created KIP-175 to make some improvements to the ConsumerGroupCommand 
tool.
The KIP can be found here: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand



Your review and feedback are welcome!

Thanks.
--Vahid





Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU






[DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-03 Thread Vahid S Hashemian
Hi,

I created KIP-175 to make some improvements to the ConsumerGroupCommand 
tool.
The KIP can be found here: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand

Your review and feedback are welcome!

Thanks.
--Vahid




Re: [ANNOUNCE] Apache Kafka 0.11.0.0 Released

2017-06-29 Thread Vahid S Hashemian
+1.

Thank you Ismael for your hard work on this release.

--Vahid




From:   Guozhang Wang 
To: "d...@kafka.apache.org" 
Cc: "users@kafka.apache.org" , 
annou...@apache.org, kafka-clients 
Date:   06/28/2017 07:22 PM
Subject:Re: [ANNOUNCE] Apache Kafka 0.11.0.0 Released



Ismael,

Thanks for running this release!

Guozhang

On Wed, Jun 28, 2017 at 5:57 PM, Jun Rao  wrote:

> Hi, Ismael,
>
> Thanks a lot for running this release!
>
> Jun
>
> On Wed, Jun 28, 2017 at 5:52 PM, Ismael Juma  wrote:
>
> > The Apache Kafka community is pleased to announce the release for 
Apache
> > Kafka 0.11.0.0. This is a feature release which includes the 
completion
> > of 32 KIPs, over 400 bug fixes and improvements, and more than 700 
pull
> > requests merged.
> >
> > All of the changes in this release can be found in the release notes:
> > https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.0/RELEASE_NOTES.html
> > (this is a link to a mirror due to temporary issues affecting
> > archive.apache.org)
> >
> > Apache Kafka is a distributed streaming platform with five core APIs:
> >
> > ** The Producer API allows an application to publish a stream of records to
to
> > one or more Kafka topics.
> >
> > ** The Consumer API allows an application to subscribe to one or more
> > topics and process the stream of records produced to them.
> >
> > ** The Streams API allows an application to act as a stream processor,
> > consuming an input stream from one or more topics and producing an
> > output stream to one or more output topics, effectively transforming 
the
> > input
> > streams to output streams.
> >
> > ** The Connector API allows building and running reusable producers or
> > consumers that connect Kafka topics to existing applications or data
> > systems. For example, a connector to a relational database might 
capture
> > every change to a table.
> >
> > ** The AdminClient API allows managing and inspecting topics, brokers,
> acls
> > and other Kafka objects.
> >
> > With these APIs, Kafka can be used for two broad classes of 
application:
> >
> > ** Building real-time streaming data pipelines that reliably get data
> > between systems or applications.
> >
> > ** Building real-time streaming applications that transform or react 
to
> > the streams of data.
> >
> > You can download the source release from
> > https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.0/kafka-0.11.0.0-src.tgz
> >
> > and binary releases from
> > *https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.0/kafka_2.11-0.11.0.0.tgz*
> >
> > *https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.0/kafka_2.12-0.11.0.0.tgz*
> > Thanks to the 118 contributors on this release!
> >
> > Aaron Coburn, Adrian McCague, Aegeaner, Akash Sethi, Akhilesh Naidu, 
Alex
> > Loddengaard, Allen Xiang, amethystic, Amit Daga, Andrew Olson, Andrey
> > Dyachkov, anukin, Apurva Mehta, Armin Braun, Balint Molnar, Ben 
Stopford,
> > Bernard Leach, Bharat Viswanadham, Bill Bejeck, Bruce Szalwinski, 
Chris
> > Egerton, Christopher L. Shannon, Clemens Valiente, Colin P. Mccabe, 
Dale
> > Peakall, Damian Guy, dan norwood, Dana Powers, Davor Poldrugo, 
dejan2609,
> > Dhwani Katagade, Dong Lin, Dustin Cote, Edoardo Comar, Eno Thereska, 
Ewen
> > Cheslack-Postava, gosubpl, Grant Henke, Guozhang Wang, Gwen Shapira,
> > Hamidreza Afzali, Hao Chen, hejiefang, Hojjat Jafarpour, huxi, Ismael
> Juma,
> > Ivan A. Melnikov, Jaikiran Pai, James Cheng, James Chien, Jan 
Lukavsky,
> > Jason Gustafson, Jean-Philippe Daigle, Jeff Chao, Jeff Widman, Jeyhun
> > Karimov, Jiangjie Qin, Jon Freedman, Jonathan Monette, Jorge Quilcate,
> > jozi-k, Jun Rao, Kamal C, Kelvin Rutt, Kevin Sweeney, Konstantine
> > Karantasis, Kyle Winkelman, Lihua Xin, Magnus Edenhill, Magnus Reftel,
> > Manikumar Reddy O, Marco Ebert, Mario Molina, Matthias J. Sax, Maysam
> > Yabandeh, Michael Andre Pearce, Michael G. Noll, Michal Borowiecki,
> Mickael
> > Maison, Nick Pillitteri, Nikki Thean, Onur Karaman, Paolo Patierno,
> > pengwei-li, Prabhat Kashyap, Qihuang Zheng, radai-rosenblatt, Raghav
> Kumar
> > Gautam, Rajini Sivaram, Randall Hauch, Ryan P, Sachin Mittal, Sandesh 
K,
> > Satish Duggana, Sean McCauliff, sharad-develop, Shikhar Bhushan, 
shuguo
> > zheng, Shun Takebayashi, simplesteph, Steven Schlansker, Stevo Slavic,
> > sunnykrgupta, Sönke Liebau, Tim Carey-Smith, Tom Bentley, Tommy 
Becker,
> > Umesh Chaudhary, Vahid Hashemian, Vitaly Pushkar, Vogeti, Will Droste,
> Will
> > Marshall, Wim Van Leuven, Xavier Léauté, Xi Hu, xinlihua, Yuto 
Kawamura
> >
> > We welcome your help and feedback. For more information on how to
> > report problems, and to 

Re: [VOTE] 0.11.0.0 RC2

2017-06-26 Thread Vahid S Hashemian
From the error message, it sounds like one of the prior tests does not do 
a proper clean-up?!
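
One way to find the first offender is to run suites individually and check
the HTML report under core/build/reports/tests; for example (a sketch, the
exact Gradle syntax may differ by branch):

  ./gradlew :core:test --tests kafka.server.ReplicaFetchTest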

Thanks.
--Vahid
 



From:   Ismael Juma <ism...@juma.me.uk>
To: d...@kafka.apache.org
Cc: kafka-clients <kafka-clie...@googlegroups.com>, Kafka Users 
<users@kafka.apache.org>
Date:   06/26/2017 01:54 PM
Subject:Re: [VOTE] 0.11.0.0 RC2
Sent by:isma...@gmail.com



Hi Vahid,

Can you please check which test fails first? The errors you mentioned can
happen if a test fails and doesn't clean-up properly.

Ismael

On Mon, Jun 26, 2017 at 8:41 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Ismael,
>
> To answer your questions:
>
> 1. Yes, the issue exists in trunk too.
>
> 2. I haven't checked with Cygwin, but I can give it a try.
>
> And thanks for addressing this issue. I can confirm with your PR I no
> longer see it.
> But now that the tests progress I see quite a few errors like this in
> core:
>
> kafka.server.ReplicaFetchTest > classMethod FAILED
> java.lang.AssertionError: Found unexpected threads,
> allThreads=Set(ZkClient-EventThread-268-127.0.0.1:56565,
> ProcessThread(sid:0 cport:56565):, metrics-meter-tick-thread-2,
> SessionTracker, Signal Dispatcher, main, Reference Handler,
> ForkJoinPool-1-worker-1, Attach Listener, ProcessThread(sid:0 cport:59720):,
> ZkClient-EventThread-1347-127.0.0.1:59720,
> kafka-producer-network-thread | producer-1,
> Test worker-SendThread(127.0.0.1:56565),
> /127.0.0.1:54942 to /127.0.0.1:54926 workers Thread 2, Test worker,
> SyncThread:0, NIOServerCxn.Factory:/127.0.0.1:0, Test worker-EventThread,
> Test worker-SendThread(127.0.0.1:59720),
> /127.0.0.1:54942 to /127.0.0.1:54926 workers Thread 3,
> ZkClient-EventThread-22-127.0.0.1:54976, ProcessThread(sid:0 cport:54976):,
> Test worker-SendThread(127.0.0.1:54976), Finalizer,
> metrics-meter-tick-thread-1)
>
> I tested on a VM and a physical machine, and both give me a lot of 
errors
> like this.
>
> Thanks.
> --Vahid
>
>
>
>
> From:   Ismael Juma <isma...@gmail.com>
> To: Vahid S Hashemian <vahidhashem...@us.ibm.com>
> Cc: d...@kafka.apache.org, kafka-clients
> <kafka-clie...@googlegroups.com>, Kafka Users <users@kafka.apache.org>
> Date:   06/26/2017 03:53 AM
> Subject:Re: [VOTE] 0.11.0.0 RC2
>
>
>
> Hi Vahid,
>
> Sorry for not replying to the previous email, I had missed it. A couple 
of
> questions:
>
> 1. Is this also happening in trunk? Seems like it should be the case for
> months and seemingly no-one reported it until the RC stage.
> 2. Is it correct that this only happens when compiling on Windows 
without
> Cygwin?
>
> I believe the following PR should fix it, please verify:
>
> https://github.com/apache/kafka/pull/3431
>
> Ismael
>
> On Fri, Jun 23, 2017 at 8:25 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi Ismael,
> >
> > Not sure if my response on RC1 was lost or this issue is not a
> > show-stopper:
> >
> > I checked again and with RC2, tests still fail in my Windown 64 bit
> > environment.
> >
> > :clients:checkstyleMain
> > [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-
> >
> 0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\
> protocol\Errors.java:89:1:
> > Class Data Abstraction Coupling is 57 (max allowed is 20) classes
> > [ApiExceptionBuilder, BrokerNotAvailableException,
> > ClusterAuthorizationException, ConcurrentTransactionsException,
> > ControllerMovedException, CoordinatorLoadInProgressException,
> > CoordinatorNotAvailableException, CorruptRecordException,
> > DuplicateSequenceNumberException, GroupAuthorizationException,
> > IllegalGenerationException, IllegalSaslStateException,
> > InconsistentGroupProtocolException, InvalidCommitOffsetSizeException,
> > InvalidConfigurationException, InvalidFetchSizeException,
> > InvalidGroupIdException, InvalidPartitionsException,
> > InvalidPidMappingException, InvalidReplicaAssignmentException,
> > InvalidReplicationFactorException, InvalidRequestException,
> > InvalidRequiredAcksException, InvalidSessionTimeoutException,
> > InvalidTimestampException, InvalidTopicException,
> InvalidTxnStateException,
> > InvalidTxnTimeoutException, LeaderNotAvailableException,
> NetworkException,
> > NotControllerException, NotCoordinatorException,
> > NotEnoughReplicasAfterAppendException, NotEnoughReplicasException,
> > NotLeaderForPartitionException, OffsetMetadataTooLarge,
> > OffsetOutOfRangeException, OperationNotAttemptedException,
> > OutOfOrderSequenceException, PolicyViolationException,
&

Re: [VOTE] 0.11.0.0 RC2

2017-06-26 Thread Vahid S Hashemian
Hi Ismael,

To answer your questions:

1. Yes, the issue exists in trunk too.

2. I haven't checked with Cygwin, but I can give it a try.

And thanks for addressing this issue. I can confirm that with your PR I no
longer see it.
But now that the tests progress I see quite a few errors like this in 
core:

kafka.server.ReplicaFetchTest > classMethod FAILED
java.lang.AssertionError: Found unexpected threads,
allThreads=Set(ZkClient-EventThread-268-127.0.0.1:56565,
ProcessThread(sid:0 cport:56565):, metrics-meter-tick-thread-2,
SessionTracker, Signal Dispatcher, main, Reference Handler,
ForkJoinPool-1-worker-1, Attach Listener, ProcessThread(sid:0 cport:59720):,
ZkClient-EventThread-1347-127.0.0.1:59720,
kafka-producer-network-thread | producer-1,
Test worker-SendThread(127.0.0.1:56565),
/127.0.0.1:54942 to /127.0.0.1:54926 workers Thread 2, Test worker,
SyncThread:0, NIOServerCxn.Factory:/127.0.0.1:0, Test worker-EventThread,
Test worker-SendThread(127.0.0.1:59720),
/127.0.0.1:54942 to /127.0.0.1:54926 workers Thread 3,
ZkClient-EventThread-22-127.0.0.1:54976, ProcessThread(sid:0 cport:54976):,
Test worker-SendThread(127.0.0.1:54976), Finalizer,
metrics-meter-tick-thread-1)

I tested on a VM and a physical machine, and both give me a lot of errors 
like this.

Thanks.
--Vahid




From:   Ismael Juma <isma...@gmail.com>
To:     Vahid S Hashemian <vahidhashem...@us.ibm.com>
Cc: d...@kafka.apache.org, kafka-clients 
<kafka-clie...@googlegroups.com>, Kafka Users <users@kafka.apache.org>
Date:   06/26/2017 03:53 AM
Subject:Re: [VOTE] 0.11.0.0 RC2



Hi Vahid,

Sorry for not replying to the previous email, I had missed it. A couple of
questions:

1. Is this also happening in trunk? Seems like it should be the case for
months and seemingly no-one reported it until the RC stage.
2. Is it correct that this only happens when compiling on Windows without
Cygwin?

I believe the following PR should fix it, please verify:

https://github.com/apache/kafka/pull/3431

Ismael

On Fri, Jun 23, 2017 at 8:25 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Ismael,
>
> Not sure if my response on RC1 was lost or this issue is not a
> show-stopper:
>
> I checked again and with RC2, tests still fail in my Windows 64-bit
> environment.
>
> :clients:checkstyleMain
> [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-
> 
0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1:
> Class Data Abstraction Coupling is 57 (max allowed is 20) classes
> [ApiExceptionBuilder, BrokerNotAvailableException,
> ClusterAuthorizationException, ConcurrentTransactionsException,
> ControllerMovedException, CoordinatorLoadInProgressException,
> CoordinatorNotAvailableException, CorruptRecordException,
> DuplicateSequenceNumberException, GroupAuthorizationException,
> IllegalGenerationException, IllegalSaslStateException,
> InconsistentGroupProtocolException, InvalidCommitOffsetSizeException,
> InvalidConfigurationException, InvalidFetchSizeException,
> InvalidGroupIdException, InvalidPartitionsException,
> InvalidPidMappingException, InvalidReplicaAssignmentException,
> InvalidReplicationFactorException, InvalidRequestException,
> InvalidRequiredAcksException, InvalidSessionTimeoutException,
> InvalidTimestampException, InvalidTopicException, 
InvalidTxnStateException,
> InvalidTxnTimeoutException, LeaderNotAvailableException, 
NetworkException,
> NotControllerException, NotCoordinatorException,
> NotEnoughReplicasAfterAppendException, NotEnoughReplicasException,
> NotLeaderForPartitionException, OffsetMetadataTooLarge,
> OffsetOutOfRangeException, OperationNotAttemptedException,
> OutOfOrderSequenceException, PolicyViolationException,
> ProducerFencedException, RebalanceInProgressException,
> RecordBatchTooLargeException, RecordTooLargeException,
> ReplicaNotAvailableException, SecurityDisabledException, 
TimeoutException,
> TopicAuthorizationException, TopicExistsException,
> TransactionCoordinatorFencedException, 
TransactionalIdAuthorizationException,
> UnknownMemberIdException, UnknownServerException,
> UnknownTopicOrPartitionException, UnsupportedForMessageFormatException,
> UnsupportedSaslMechanismException, UnsupportedVersionException].
> [ClassDataAbstractionCoupling]
> [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-
> 
0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1:
> Class Fan-Out Complexity is 60 (max allowed is 40). 
[ClassFanOutComplexity]
> [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-
> 0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\
> requests\AbstractRequest.java:26:1: Class Fan-Out Complexity is 43 (max
> allowed is 40). [ClassFanOutComplexity]
> [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-

Re: [VOTE] 0.11.0.0 RC2

2017-06-23 Thread Vahid S Hashemian
Hi Ismael,

Not sure if my response on RC1 was lost or this issue is not a 
show-stopper:

I checked again and with RC2, tests still fail in my Windows 64-bit 
environment.

:clients:checkstyleMain
[ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1: Class Data Abstraction Coupling is 57 (max allowed is 20) classes 
[ApiExceptionBuilder, BrokerNotAvailableException, 
ClusterAuthorizationException, ConcurrentTransactionsException, 
ControllerMovedException, CoordinatorLoadInProgressException, 
CoordinatorNotAvailableException, CorruptRecordException, 
DuplicateSequenceNumberException, GroupAuthorizationException, 
IllegalGenerationException, IllegalSaslStateException, 
InconsistentGroupProtocolException, InvalidCommitOffsetSizeException, 
InvalidConfigurationException, InvalidFetchSizeException, 
InvalidGroupIdException, InvalidPartitionsException, 
InvalidPidMappingException, InvalidReplicaAssignmentException, 
InvalidReplicationFactorException, InvalidRequestException, 
InvalidRequiredAcksException, InvalidSessionTimeoutException, 
InvalidTimestampException, InvalidTopicException, 
InvalidTxnStateException, InvalidTxnTimeoutException, 
LeaderNotAvailableException, NetworkException, NotControllerException, 
NotCoordinatorException, NotEnoughReplicasAfterAppendException, 
NotEnoughReplicasException, NotLeaderForPartitionException, 
OffsetMetadataTooLarge, OffsetOutOfRangeException, 
OperationNotAttemptedException, OutOfOrderSequenceException, 
PolicyViolationException, ProducerFencedException, 
RebalanceInProgressException, RecordBatchTooLargeException, 
RecordTooLargeException, ReplicaNotAvailableException, 
SecurityDisabledException, TimeoutException, TopicAuthorizationException, 
TopicExistsException, TransactionCoordinatorFencedException, 
TransactionalIdAuthorizationException, UnknownMemberIdException, 
UnknownServerException, UnknownTopicOrPartitionException, 
UnsupportedForMessageFormatException, UnsupportedSaslMechanismException, 
UnsupportedVersionException]. [ClassDataAbstractionCoupling]
[ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1: Class Fan-Out Complexity is 60 (max allowed is 40). [ClassFanOutComplexity]
[ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\requests\AbstractRequest.java:26:1: Class Fan-Out Complexity is 43 (max allowed is 40). [ClassFanOutComplexity]
[ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\requests\AbstractResponse.java:26:1: Class Fan-Out Complexity is 42 (max allowed is 40). [ClassFanOutComplexity]
:clients:checkstyleMain FAILED

FAILURE: Build failed with an exception.

Thanks.
--Vahid



From:   Ismael Juma 
To: d...@kafka.apache.org, Kafka Users , 
kafka-clients 
Date:   06/22/2017 06:16 PM
Subject:[VOTE] 0.11.0.0 RC2
Sent by:isma...@gmail.com



Hello Kafka users, developers and client-developers,

This is the third candidate for release of Apache Kafka 0.11.0.0.

This is a major version release of Apache Kafka. It includes 32 new KIPs.
See the release notes and release plan (
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.0)
for more details. A few feature highlights:

* Exactly-once delivery and transactional messaging
* Streams exactly-once semantics
* Admin client with support for topic, ACLs and config management
* Record headers
* Request rate quotas
* Improved resiliency: replication protocol improvement and 
single-threaded
controller
* Richer and more efficient message format

Release notes for the 0.11.0.0 release:
http://home.apache.org/~ijuma/kafka-0.11.0.0-rc2/RELEASE_NOTES.html

*** Please download, test and vote by Tuesday, June 27, 6pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~ijuma/kafka-0.11.0.0-rc2/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
http://home.apache.org/~ijuma/kafka-0.11.0.0-rc2/javadoc/

* Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.0 tag:
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=8698fa1f41102f1664b05baa4d6953fc9564d91e


* Documentation:
http://kafka.apache.org/0110/documentation.html

* Protocol:
http://kafka.apache.org/0110/protocol.html

* Successful Jenkins builds for the 0.11.0 branch:
Unit/integration tests: 
https://builds.apache.org/job/kafka-0.11.0-jdk7/187/
System tests: pending (will send an update tomorrow)

/**

Thanks,
Ismael






Re: Deleting/Purging data from Kafka topics (Kafka 0.10)

2017-06-23 Thread Vahid S Hashemian
Hi Karan,

I think what you are seeing with `--time -1` and `--time -2` confirms that 
the messages are deleted from the log.
The offset returned in both cases is the same, which means the start offset 
and the end offset of the log are equal (i.e. the log is empty).
When messages are removed from the log, the offsets are not reset to 0. 
Offsets keep increasing monotonically; instead, the start offset of the log 
moves forward as log retention occurs.

So, in order to find the number of messages in a partition, you can just 
get the difference of the offsets returned from `--time -1` and `--time 
-2`.
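
For example, something like this quick shell sketch (untested; it reuses the 
broker list and topic from your commands, and just parses the third field of 
the tool's output) computes the count for partition 0:

EARLIEST=$($KAFKA10_HOME/bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:6092 --topic topicPurge --time -2 --partitions 0 | cut -d: -f3)
LATEST=$($KAFKA10_HOME/bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:6092 --topic topicPurge --time -1 --partitions 0 | cut -d: -f3)
# log-end offset minus earliest offset = messages currently in the partition
echo "partition 0 holds $((LATEST - EARLIEST)) messages"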
I hope this answers your question.

Thanks.
--Vahid




From:   karan alang <karan.al...@gmail.com>
To: users@kafka.apache.org
Date:   06/22/2017 11:14 PM
Subject:Re: Deleting/Purging data from Kafka topics (Kafka 0.10)



Hi Vahid,
here is the output of the GetOffsetShell commands (with --time -1 & -2)

$KAFKA10_HOME/bin/kafka-run-class.sh kafka.tools.GetOffsetShell
--broker-list localhost:6092,localhost:6093,localhost:6094,localhost:6095
--topic topicPurge --time -2 --partitions 0,1,2

topicPurge:0:67

topicPurge:1:67

topicPurge:2:66

Karans-MacBook-Pro-3:config karanalang$
$KAFKA10_HOME/bin/kafka-run-class.sh kafka.tools.GetOffsetShell
--broker-list localhost:6092,localhost:6093,localhost:6094,localhost:6095
--topic topicPurge --time -1 --partitions 0,1,2

topicPurge:0:67

topicPurge:1:67

topicPurge:2:66


So, how do I interpret the above? I was expecting ZooKeeper to be purged
too, and the offsets shown as 0; however, that is not the case. (The
observation seems to tally with what you put in your email, I think.)

Also, the consumer is not able to read any data, so I guess the data is
actually purged?

However, that also brings up additional questions.

I was using the GetOffsetShell command to get the count, but it seems that
is not necessarily the right way.

What command should be used to get the count?

On Thu, Jun 22, 2017 at 8:34 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Karan,
>
> Just to clarify, with `--time -1` you are getting back the latest offset
> of the partition.
> If you do `--time -2` you'll get the earliest valid offset.
>
> So, let's say the latest offset of partition 0 of topic 'test' is 100.
> When you publish 5 messages to the partition, and before retention 
policy
> kicks in,
> - with `--time -1` you should get test:0:105
> - with `--time -2` you should get test:0:100
>
> But after retention policy kicks in and old messages are removed,
> - with `--time -1` you should get test:0:105
> - with `--time -2` you should get test:0:105
>
> Could you please advise whether you're seeing a different behavior?
>
> Thanks.
> --Vahid
>
>
>
>
> From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
> To: users@kafka.apache.org
> Date:   06/22/2017 06:43 PM
> Subject:Re: Deleting/Purging data from Kafka topics (Kafka 0.10)
>
>
>
> Hi Karan,
>
> I think the issue is in the verification step, because the start and end
> offsets are not going to be reset when messages are deleted.
> Have you checked whether a consumer would see the messages that are
> supposed to be deleted? Thanks.
>
> --Vahid
>
>
>
> From:   karan alang <karan.al...@gmail.com>
> To: users@kafka.apache.org
> Date:   06/22/2017 06:09 PM
> Subject:Re: Deleting/Purging data from Kafka topics (Kafka 0.10)
>
>
>
> Hi Vahid,
>
> somehow, the changes suggested don't seem to be taking effect, and i 
dont
> see the data being purged from the topic.
>
> Here are the steps i followed -
>
> 1) topic is set with param -- retention.ms=1000
>
> $KAFKA10_HOME/bin/kafka-topics.sh --describe --topic topicPurge
> --zookeeper
> localhost:2161
>
> Topic:topicPurge PartitionCount:3 ReplicationFactor:3 
Configs:retention.ms
> =1000
>
> Topic: topicPurge Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
>
> Topic: topicPurge Partition: 1 Leader: 0 Replicas: 0,2,3 Isr: 0,2,3
>
> Topic: topicPurge Partition: 2 Leader: 1 Replicas: 1,3,0 Isr: 1,3,0
>
>
> 2) There are 4 brokers, and in the server.properties (for each of the
> brokers), i've modified the following property
>
> log.retention.check.interval.ms=30000
>
> I am expecting the data to be purged every 30 secs based on property -
> log.retention.check.interval.ms, however, that does not seem to be
> happening.
>
> 3) Here is the command to check the offsets
>
> $KAFKA10_HOME/bin/kafka-run-class.sh kafka.tools.GetOffsetShell
> --broker-list 
localhost:6092,localhost:6093,localhost:6094,localhost:6095
> --topic topicPurge --time -1 --partitions 0,1,2
>
> topicPurge:0:67
>
> topicPurge:1:67
>
> topicPurge:2:66
>
>
> Any id

Re: Deleting/Purging data from Kafka topics (Kafka 0.10)

2017-06-22 Thread Vahid S Hashemian
Hi Karan,

Just to clarify, with `--time -1` you are getting back the latest offset 
of the partition.
If you do `--time -2` you'll get the earliest valid offset.

So, let's say the latest offset of partition 0 of topic 'test' is 100.
When you publish 5 messages to the partition, and before retention policy 
kicks in,
- with `--time -1` you should get test:0:105
- with `--time -2` you should get test:0:100

But after retention policy kicks in and old messages are removed,
- with `--time -1` you should get test:0:105
- with `--time -2` you should get test:0:105

Could you please advise whether you're seeing a different behavior?

Thanks.
--Vahid




From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
To: users@kafka.apache.org
Date:   06/22/2017 06:43 PM
Subject:Re: Deleting/Purging data from Kafka topics (Kafka 0.10)



Hi Karan,

I think the issue is in the verification step, because the start and end 
offsets are not going to be reset when messages are deleted.
Have you checked whether a consumer would see the messages that are 
supposed to be deleted? Thanks.

--Vahid



From:   karan alang <karan.al...@gmail.com>
To: users@kafka.apache.org
Date:   06/22/2017 06:09 PM
Subject:Re: Deleting/Purging data from Kafka topics (Kafka 0.10)



Hi Vahid,

somehow, the changes suggested don't seem to be taking effect, and i dont
see the data being purged from the topic.

Here are the steps i followed -

1) topic is set with param -- retention.ms=1000

$KAFKA10_HOME/bin/kafka-topics.sh --describe --topic topicPurge 
--zookeeper
localhost:2161

Topic:topicPurge PartitionCount:3 ReplicationFactor:3 Configs:retention.ms
=1000

Topic: topicPurge Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2

Topic: topicPurge Partition: 1 Leader: 0 Replicas: 0,2,3 Isr: 0,2,3

Topic: topicPurge Partition: 2 Leader: 1 Replicas: 1,3,0 Isr: 1,3,0


2) There are 4 brokers, and in the server.properties (for each of the
brokers), i've modified the following property

log.retention.check.interval.ms=30000

I am expecting the data to be purged every 30 secs based on property -
log.retention.check.interval.ms, however, that does not seem to be
happening.

3) Here is the command to check the offsets

$KAFKA10_HOME/bin/kafka-run-class.sh kafka.tools.GetOffsetShell
--broker-list localhost:6092,localhost:6093,localhost:6094,localhost:6095
--topic topicPurge --time -1 --partitions 0,1,2

topicPurge:0:67

topicPurge:1:67

topicPurge:2:66


Any ideas on what the issue might be ?







On Thu, Jun 22, 2017 at 1:31 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Karan,
>
> The other broker config that plays a role here is
> "log.retention.check.interval.ms".
> For a low log retention time like in your example, if this broker config
> value is much higher, then the broker doesn't delete old logs regularly
> enough.
>
> --Vahid
>
>
>
> From:   karan alang <karan.al...@gmail.com>
> To: users@kafka.apache.org
> Date:   06/22/2017 12:27 PM
> Subject:Deleting/Purging data from Kafka topics (Kafka 0.10)
>
>
>
> Hi All -
> How do i go about deleting data from Kafka Topics ? I've Kafka 0.10
> installed.
>
> I tried setting the parameter of the topic as shown below ->
>
> $KAFKA10_HOME/bin/kafka-topics.sh --zookeeper localhost:2161 --alter
> --topic mmtopic6 --config retention.ms=1000
>  I was expecting to have the data purged in about a minute or so; however,
> I don't see that happening.
> Any ideas on what needs to be done?
>
>
>
>
>










Re: Deleting/Purging data from Kafka topics (Kafka 0.10)

2017-06-22 Thread Vahid S Hashemian
Hi Karan,

I think the issue is in the verification step, because the start and end 
offsets are not going to be reset when messages are deleted.
Have you checked whether a consumer would see the messages that are 
supposed to be deleted? Thanks.

--Vahid



From:   karan alang <karan.al...@gmail.com>
To: users@kafka.apache.org
Date:   06/22/2017 06:09 PM
Subject:Re: Deleting/Purging data from Kafka topics (Kafka 0.10)



Hi Vahid,

somehow, the changes suggested don't seem to be taking effect, and i dont
see the data being purged from the topic.

Here are the steps i followed -

1) topic is set with param -- retention.ms=1000

$KAFKA10_HOME/bin/kafka-topics.sh --describe --topic topicPurge 
--zookeeper
localhost:2161

Topic:topicPurge PartitionCount:3 ReplicationFactor:3 Configs:retention.ms
=1000

Topic: topicPurge Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2

Topic: topicPurge Partition: 1 Leader: 0 Replicas: 0,2,3 Isr: 0,2,3

Topic: topicPurge Partition: 2 Leader: 1 Replicas: 1,3,0 Isr: 1,3,0


2) There are 4 brokers, and in the server.properties (for each of the
brokers), i've modified the following property

log.retention.check.interval.ms=30000

I am expecting the data to be purged every 30 secs based on property -
log.retention.check.interval.ms, however, that does not seem to be
happening.

3) Here is the command to check the offsets

$KAFKA10_HOME/bin/kafka-run-class.sh kafka.tools.GetOffsetShell
--broker-list localhost:6092,localhost:6093,localhost:6094,localhost:6095
--topic topicPurge --time -1 --partitions 0,1,2

topicPurge:0:67

topicPurge:1:67

topicPurge:2:66


Any ideas on what the issue might be ?







On Thu, Jun 22, 2017 at 1:31 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Karan,
>
> The other broker config that plays a role here is
> "log.retention.check.interval.ms".
> For a low log retention time like in your example, if this broker config
> value is much higher, then the broker doesn't delete old logs regularly
> enough.
>
> --Vahid
>
>
>
> From:   karan alang <karan.al...@gmail.com>
> To: users@kafka.apache.org
> Date:   06/22/2017 12:27 PM
> Subject:Deleting/Purging data from Kafka topics (Kafka 0.10)
>
>
>
> Hi All -
> How do i go about deleting data from Kafka Topics ? I've Kafka 0.10
> installed.
>
> I tried setting the parameter of the topic as shown below ->
>
> $KAFKA10_HOME/bin/kafka-topics.sh --zookeeper localhost:2161 --alter
> --topic mmtopic6 --config retention.ms=1000
>  I was expecting to have the data purged in about a minute or so; however,
> I don't see that happening.
> Any ideas on what needs to be done?
>
>
>
>
>






Re: Deleting/Purging data from Kafka topics (Kafka 0.10)

2017-06-22 Thread Vahid S Hashemian
Hi Karan,

The other broker config that plays a role here is 
"log.retention.check.interval.ms".
For a low log retention time like in your example, if this broker config 
value is much higher, then the broker doesn't delete old logs regularly 
enough.
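
For example (the values here are just for illustration, not recommendations), 
setting this in each broker's server.properties (note it is a static broker 
config, so a restart is needed for it to take effect)

log.retention.check.interval.ms=30000

makes the broker check for deletable log segments every 30 seconds, and the 
topic-level retention can then be set with something like:

$KAFKA10_HOME/bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name mmtopic6 --add-config retention.ms=1000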

--Vahid



From:   karan alang 
To: users@kafka.apache.org
Date:   06/22/2017 12:27 PM
Subject:Deleting/Purging data from Kafka topics (Kafka 0.10)



Hi All -
How do i go about deleting data from Kafka Topics ? I've Kafka 0.10
installed.

I tried setting the parameter of the topic as shown below ->

$KAFKA10_HOME/bin/kafka-topics.sh --zookeeper localhost:2161 --alter
--topic mmtopic6 --config retention.ms=1000
 I was expecting to have the data purged in about a minute or so; however, I 
don't see that happening.
Any ideas on what needs to be done?






Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-06-21 Thread Vahid S Hashemian
I appreciate everyone's feedback so far on this KIP.

Before starting a vote, I'd like to also ask for feedback on the 
"Additional Food for Thought" section in the KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch#KIP-163:LowertheMinimumRequiredACLPermissionofOffsetFetch-AdditionalFoodforThought
I just added some more details in that section, which I hope further 
clarifies the suggestion there.

Thanks.
--Vahid



From:   Vahid S Hashemian/Silicon Valley/IBM
To: d...@kafka.apache.org
Cc: "Kafka User" <users@kafka.apache.org>
Date:   06/08/2017 11:29 AM
Subject:[DISCUSS] KIP-163: Lower the Minimum Required ACL 
Permission of OffsetFetch


Hi all,

I'm resending my earlier note hoping it would spark some conversation this 
time around :)

Thanks.
--Vahid





From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
To: dev <d...@kafka.apache.org>, "Kafka User" <users@kafka.apache.org>
Date:   05/30/2017 08:33 AM
Subject:KIP-163: Lower the Minimum Required ACL Permission of 
OffsetFetch



Hi,

I started a new KIP to improve the minimum required ACL permissions of 
some of the APIs: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch

The KIP is to address KAFKA-4585.

Feedback and suggestions are welcome!

Thanks.
--Vahid








Re: [VOTE] 0.11.0.0 RC1

2017-06-20 Thread Vahid S Hashemian
Hi Ismael,

Thanks for running the release.

Running tests ('gradlew.bat test') on my Windows 64-bit VM results in 
these checkstyle errors:

:clients:checkstyleMain
[ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1: Class Data Abstraction Coupling is 57 (max allowed is 20) classes 
[ApiExceptionBuilder, BrokerNotAvailableException, 
ClusterAuthorizationException, ConcurrentTransactionsException, 
ControllerMovedException, CoordinatorLoadInProgressException, 
CoordinatorNotAvailableException, CorruptRecordException, 
DuplicateSequenceNumberException, GroupAuthorizationException, 
IllegalGenerationException, IllegalSaslStateException, 
InconsistentGroupProtocolException, InvalidCommitOffsetSizeException, 
InvalidConfigurationException, InvalidFetchSizeException, 
InvalidGroupIdException, InvalidPartitionsException, 
InvalidPidMappingException, InvalidReplicaAssignmentException, 
InvalidReplicationFactorException, InvalidRequestException, 
InvalidRequiredAcksException, InvalidSessionTimeoutException, 
InvalidTimestampException, InvalidTopicException, 
InvalidTxnStateException, InvalidTxnTimeoutException, 
LeaderNotAvailableException, NetworkException, NotControllerException, 
NotCoordinatorException, NotEnoughReplicasAfterAppendException, 
NotEnoughReplicasException, NotLeaderForPartitionException, 
OffsetMetadataTooLarge, OffsetOutOfRangeException, 
OperationNotAttemptedException, OutOfOrderSequenceException, 
PolicyViolationException, ProducerFencedException, 
RebalanceInProgressException, RecordBatchTooLargeException, 
RecordTooLargeException, ReplicaNotAvailableException, 
SecurityDisabledException, TimeoutException, TopicAuthorizationException, 
TopicExistsException, TransactionCoordinatorFencedException, 
TransactionalIdAuthorizationException, UnknownMemberIdException, 
UnknownServerException, UnknownTopicOrPartitionException, 
UnsupportedForMessageFormatException, UnsupportedSaslMechanismException, 
UnsupportedVersionException]. [ClassDataAbstractionCoupling]
[ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1: Class Fan-Out Complexity is 60 (max allowed is 40). [ClassFanOutComplexity]
[ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\requests\AbstractRequest.java:26:1: Class Fan-Out Complexity is 43 (max allowed is 40). [ClassFanOutComplexity]
[ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\requests\AbstractResponse.java:26:1: Class Fan-Out Complexity is 42 (max allowed is 40). [ClassFanOutComplexity]
:clients:checkstyleMain FAILED

I wonder if there is an issue with my VM since I don't get similar errors 
on Ubuntu or Mac.

--Vahid




From:   Ismael Juma 
To: d...@kafka.apache.org, Kafka Users , 
kafka-clients 
Date:   06/18/2017 03:32 PM
Subject:[VOTE] 0.11.0.0 RC1
Sent by:isma...@gmail.com



Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 0.11.0.0.

This is a major version release of Apache Kafka. It includes 32 new KIPs. 
See
the release notes and release plan (https://cwiki.apache.org/conf
luence/display/KAFKA/Release+Plan+0.11.0.0) for more details. A few 
feature
highlights:

* Exactly-once delivery and transactional messaging
* Streams exactly-once semantics
* Admin client with support for topic, ACLs and config management
* Record headers
* Request rate quotas
* Improved resiliency: replication protocol improvement and 
single-threaded
controller
* Richer and more efficient message format

A number of issues have been resolved since RC0 and there are no known
blockers remaining.

Release notes for the 0.11.0.0 release:
http://home.apache.org/~ijuma/kafka-0.11.0.0-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Thursday, June 22, 9am PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~ijuma/kafka-0.11.0.0-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
http://home.apache.org/~ijuma/kafka-0.11.0.0-rc1/javadoc/

* Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.0 tag:
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=4818d4e1cbef1a8e9c027100fef317077fb3fb99


* Documentation:
http://kafka.apache.org/0110/documentation.html

* Protocol:
http://kafka.apache.org/0110/protocol.html

* Successful Jenkins builds for the 0.11.0 branch:
Unit/integration tests: 
https://builds.apache.org/job/kafka-0.11.0-jdk7/167/
System tests: 
https://jenkins.confluent.io/job/system-test-kafka-0.11.0/16/
(all 

Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-06-19 Thread Vahid S Hashemian
Thanks everyone. Great discussion.

Because these Read or Write actions are interpreted in conjunction with 
particular resources (Topic, Group, ...), it would also make more sense to 
me that for committing offsets the ACL should be (Group, Write).
So, a consumer would be required to have (Topic, Read) and (Group, Write) 
ACLs in order to function.
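
As a purely hypothetical sketch (the principal, topic, and group names below 
are made up), granting that pair of ACLs would then look something like:

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --operation Read --topic my-topic

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --operation Write --group my-group

(Today the second grant would be Read on the group; the suggestion above 
would change it to Write.)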

--Vahid




From:   Colin McCabe <cmcc...@apache.org>
To: users@kafka.apache.org
Date:   06/19/2017 11:01 AM
Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL 
Permission of OffsetFetch



Thanks for the explanation.  I still think it would be better to have
the mutation operations require write ACLs, though.  It might not be
100% intuitive for novice users, but the current split between Describe
and Read is not intuitive for either novice or experienced users.

In any case, I am +1 on the incremental improvement discussed in
KIP-163.

cheers,
Colin


On Sat, Jun 17, 2017, at 11:11, Hans Jespersen wrote:
> 
> Offset commit is something that is done in the act of consuming (or
> reading) Kafka messages. 
> Yes technically it is a write to the Kafka consumer offset topic, but it's
> much easier for administrators to think of ACLs in terms of whether the
> user is allowed to write (Produce) or read (Consume) messages, and not the
> lower-level semantics, namely that consuming is actually reading AND
> writing (albeit only to the offset topic).
> 
> -hans
> 
> 
> 
> 
> > On Jun 17, 2017, at 10:59 AM, Viktor Somogyi 
<viktor.somo...@cloudera.com> wrote:
> > 
> > Hi Vahid,
> > 
> > +1 for OffsetFetch from me too.
> > 
> > I also wanted to ask about the strangeness of the permissions, like why
> > OffsetCommit is a Read operation instead of Write, which would intuitively
> > make more sense to me. Perhaps an expert could shed some light on this? :)
> > 
> > Viktor
> > 
> > On Tue, Jun 13, 2017 at 2:38 PM, Vahid S Hashemian <
> > vahidhashem...@us.ibm.com <mailto:vahidhashem...@us.ibm.com>> wrote:
> > 
> >> Hi Michal,
> >> 
> >> Thanks a lot for your feedback.
> >> 
> >> Your statement about Heartbeat is fair and makes sense. I'll update 
the
> >> KIP accordingly.
> >> 
> >> --Vahid
> >> 
> >> 
> >> 
> >> 
> >> From:Michal Borowiecki <michal.borowie...@openbet.com>
> >> To:users@kafka.apache.org, Vahid S Hashemian <
> >> vahidhashem...@us.ibm.com>, d...@kafka.apache.org
> >> Date:06/13/2017 01:35 AM
> >> Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
> >> Permission of OffsetFetch
> >> --
> >> 
> >> 
> >> 
> >> Hi Vahid,
> >> 
> >> +1 wrt OffsetFetch.
> >> 
> >> The "Additional Food for Thought" mentions Heartbeat as a 
non-mutating
> >> action. I don't think that's true as the GroupCoordinator updates the
> >> latestHeartbeat field for the member and adds a new object to the
> >> heartbeatPurgatory, see completeAndScheduleNextHeartbeatExpiration()
> >> called from handleHeartbeat()
> >> 
> >> NB added dev mailing list back into CC as it seems to have been lost 
along
> >> the way.
> >> 
> >> Cheers,
> >> 
> >> Michał
> >> 
> >> 
> >> On 12/06/17 18:47, Vahid S Hashemian wrote:
> >> Hi Colin,
> >> 
> >> Thanks for the feedback.
> >> 
> >> To be honest, I'm not sure either why Read was selected instead of 
Write
> >> for mutating APIs in the initial design (I asked Ewen on the 
corresponding
> >> JIRA and he seemed unsure too).
> >> Perhaps someone who was involved in the design can clarify.
> >> 
> >> Thanks.
> >> --Vahid
> >> 
> >> 
> >> 
> >> 
> >> From:   Colin McCabe *<cmcc...@apache.org <mailto:cmcc...@apache.org
>>* <cmcc...@apache.org <mailto:cmcc...@apache.org>>
> >> To: *users@kafka.apache.org <mailto:users@kafka.apache.org>* 
<users@kafka.apache.org <mailto:users@kafka.apache.org>>
> >> Date:   06/12/2017 10:11 AM
> >> Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
> >> Permission of OffsetFetch
> >> 
> >> 
> >> 
> >> Hi Vahid,
> >> 
> >> I think you make a valid point that the ACLs controlling group
> >> operations are not very intuitive.
> >> 
> >> This is probably a dumb question,

Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL

2017-06-16 Thread Vahid S Hashemian
Hi Arunkumar,

I'm glad you were able to fix the issue. Also glad that the article was 
helpful.

Regarding Kafka SSL configuration, I'm sending some links:
- Kafka documentation: 
http://kafka.apache.org/documentation.html#security_ssl
- Apache Kafka Security 101: 
https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/
- Configuring Kafka Security: 
https://www.cloudera.com/documentation/kafka/latest/topics/kafka_security.html

I hope they help you get started with SSL configuration.
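
As a rough starting point, the broker-side properties usually look something 
like this (the listener port, file paths, and passwords below are 
placeholders; the links above cover generating the keystore and truststore):

listeners=SASL_SSL://your.host.name:9098
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit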

--Vahid




From:   Arunkumar <pm_arunku...@yahoo.com.INVALID>
To: <users@kafka.apache.org>
Date:   06/16/2017 03:47 PM
Subject:Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL



Hi Vahid

I deleted the dataDir and dataLogDir and restarted ZooKeeper, brokers, 
producers, and the consumer. Now it works.

All the messages produced are consumed from the producer.

Thanks for all the help. The link you shared helped a lot.

I am planning to set up SASL_SSL and would appreciate your advice on it.

Thanks
Arunkumar Pichaimuthu, PMP


On Fri, 6/16/17, Arunkumar <pm_arunku...@yahoo.com.INVALID> wrote:

 Subject: Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL
 To: users@kafka.apache.org
 Date: Friday, June 16, 2017, 4:15 PM
 
 Hi Vahid
 
 I am working on the same use case ): .
 As per the document I was trying to set ACL's for topic
 which worked and now I am able to start my producer without
 error.
 
 Then I set ACL for the consumer and
 when I start my consumer it starts without issue. and also
 able to set second ACL for committing offsets to group
 arun-group which is also good. 
 
 Current ACLs for resource
 `Group:arun-group`:
 User:admin
 has Allow permission for operations: All from hosts: *
 User:arun
 has Allow permission for operations: All from hosts: *
 User:arun
 has Allow permission for operations: Read from hosts: *
 User:admin
 has Allow permission for operations: Read from hosts: *
 
 Then when I try to get the proper
 listing of offsets in the group I get the following error
 
  bin/kafka-consumer-groups
 --bootstrap-server producerhost:9097 --group arun-group
 --describe --command-config etc/kafka/producer.properties
 
 Note: This will only show information
 about consumers that use the Java consumer API
 (non-ZooKeeper-based consumers).
 
 [2017-06-16 16:05:42,535] INFO
 Successfully logged in.
 (org.apache.kafka.common.security.authenticator.AbstractLogin)
 Error: Executing consumer group command
 failed due to The group coordinator is not available.
 
 Also I don't see the message producer
 posted on consumer.
 
 
 I am using the confluent opensource
 kafka which is bundled with 
 Zookeeper - 3.4.6
 Kafka - 10.2.0
 
 Please let me know if you need any more
 info. I appreciate your time.
 
 Thanks
 Arunkumar Pichaimuthu, PMP
 
 
 On Fri, 6/16/17, Vahid S Hashemian
 <vahidhashem...@us.ibm.com>
 wrote:
 
  Subject: Re:
 UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL
  To: users@kafka.apache.org
  Date: Friday, June 16, 2017, 1:56 PM
 
  Hi Arunkumar,
 
  Were you trying the same steps in the
  document when you got this error? Or 
  you are working on a different use
  case?
  Also, I might have missed it in
  previous emails. What version of Kafka
 are 
  you using?
 
  Thanks.
  --Vahid
 
 
 
  From:   Arunkumar <pm_arunku...@yahoo.com.INVALID>
  To: <users@kafka.apache.org>
  Date:   06/16/2017 10:22 AM
  Subject:Re:
  UNKNOWN_TOPIC_OR_PARTITION with
 SASL_PLAINTEXT ACL
 
 
 
   Hi Vahid
 
  Thank you for sharing link to set it
  up. It is really a very useful 
  document. When I ran describe command
  for group I see this error
 
   bin/kafka-consumer-groups
  --bootstrap-server host:9097
 --describe --group 
  arun-group --command-config
  etc/kafka/producer.properties
  Note: This will only show information
  about consumers that use the Java 
  consumer API (non-ZooKeeper-based
  consumers).
 
  [2017-06-16 11:32:23,790] INFO
  Successfully logged in. 
 
 (org.apache.kafka.common.security.authenticator.AbstractLogin)
  Error: Executing consumer group
 command
  failed due to The group 
  coordinator is not available.
 
 
  I googled to figure out the issue and
  many say that it may be because of 
  the port which I am not convinced.
 Any
  help is highly appreciated.
 
  Thanks
  Arunkumar Pichaimuthu, PMP
 
 
 ----
  On Thu, 6/15/17, Vahid S Hashemian
  <vahidhashem...@us.ibm.com>
  wrote:
 
   Subject: Re:
  UNKNOWN_TOPIC_OR_PARTITION with
 SASL_PLAINTEXT ACL
   To: users@kafka.apache.org
   Date: Thursday, June 15, 2017,
 6:49
  PM
   
   Hi Arunkumar,
   
   Could you please take a look at
 this
  article:
   
   https://developer.ibm.com/opentech/2017/05/31/kafka-acls-in-practice/
   The error message you posted
 earli

Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL

2017-06-16 Thread Vahid S Hashemian
Hi Arunkumar,

Were you trying the same steps in the document when you got this error? Or 
are you working on a different use case?
Also, I might have missed it in previous emails. What version of Kafka are 
you using?

Thanks.
--Vahid



From:   Arunkumar <pm_arunku...@yahoo.com.INVALID>
To: <users@kafka.apache.org>
Date:   06/16/2017 10:22 AM
Subject:Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL



 Hi Vahid

Thank you for sharing the link to set it up. It is really a very useful 
document. When I ran the describe command for the group, I saw this error:

 bin/kafka-consumer-groups --bootstrap-server host:9097 --describe --group 
arun-group --command-config etc/kafka/producer.properties
Note: This will only show information about consumers that use the Java 
consumer API (non-ZooKeeper-based consumers).

[2017-06-16 11:32:23,790] INFO Successfully logged in. 
(org.apache.kafka.common.security.authenticator.AbstractLogin)
Error: Executing consumer group command failed due to The group 
coordinator is not available.


I googled to figure out the issue, and many say that it may be because of 
the port, which I am not convinced of. Any help is highly appreciated.

Thanks
Arunkumar Pichaimuthu, PMP


On Thu, 6/15/17, Vahid S Hashemian <vahidhashem...@us.ibm.com> wrote:

 Subject: Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL
 To: users@kafka.apache.org
 Date: Thursday, June 15, 2017, 6:49 PM
 
 Hi Arunkumar,
 
 Could you please take a look at this article:
 
 https://developer.ibm.com/opentech/2017/05/31/kafka-acls-in-practice/
 The error message you posted earlier suggests
 there is some missing ACL 
 (as indicated in
 the article).
 
 Let me know
 if that doesn't resolve the issue. Thanks.
 --Vahid
 
 
 
 
 From:  
 Arunkumar <pm_arunku...@yahoo.com.INVALID>
 To: <users@kafka.apache.org>
 Date:   06/15/2017 04:37 PM
 Subject:Re:
 UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL
 
 
 
 Hi Vahid
 
 Thank you for quick
 response.
 
 I set the ACL for
 topic and also created jaas permission as per the 
 document for both producer and consumer. I have
 set what I have posted 
 below. Do I need to
 set ACL like we set for Topics --  bin/kafka-acls 
 --topic * --add -allow-host host:9097
 --allow-principal User:arun 
 --operation
 Write --authorizer-properties zookeeper.connect=host:2182 ?
 
 Please let me know. If you need all
 configuration for zookeeper, Broker, 
 producer and consumer. I can share it as well.
 Thanks in advance
 
 
 KafkaServer {
   
 org.apache.kafka.common.security.plain.PlainLoginModule
 required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_arun="Arun123";
 };
 
 Client {

 org.apache.kafka.common.security.plain.PlainLoginModule
 required

 username="arun"

 password="Arun123";
 };
 
 KafkaClient {

 org.apache.kafka.common.security.plain.PlainLoginModule
 required

 username="arun"

 password="Arun123";
 };
 
 Thanks
 Arunkumar Pichaimuthu, PMP
 
 
 On Thu, 6/15/17, Vahid S Hashemian <vahidhashem...@us.ibm.com>
 wrote:
 
  Subject: Re:
 UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL
  To: users@kafka.apache.org
  Date: Thursday, June 15, 2017, 6:16 PM
 
  Hi Arunkumar,
 
  Have you given your Kafka
 consumer/producer
  necessary permissions to
 
  consume/produce
 
 messages?
 
  --Vahid
 
 
 
  From:   Arunkumar <pm_arunku...@yahoo.com.INVALID>
  To: <users@kafka.apache.org>
  Date:   06/15/2017 04:07 PM
 
 Subject:UNKNOWN_TOPIC_OR_PARTITION
  with SASL_PLAINTEXT ACL
 
 
 
  Hi 
 
  I am setting up ACL with
 SALS_PLAINTEXT. My
  zookeeper and broker
 starts 
  without error.
  But
 when I try to start my consumer or if I send message 
  through a producer it throws an exception
 (Both
  producer and consumer are 
  kafka CLI)
  Stack trace for my
 consumer below. Any insight
  is highly
 appreciated. 
  Thanks in advance
 
 
 bin/kafka-console-consumer
  --topic sample1
 --from-beginning 
 
 --consumer.config=etc/kafka/consumer.properties 
  --bootstrap-server 
 
 hostname:9097
  [2017-06-15 17:21:45,286]
 INFO ConsumerConfig
  values:
  auto.commit.interval.ms
  = 5000
 
 auto.offset.reset =
  earliest
  bootstrap.servers =
  [hostname:9097]
 
 check.crcs =
  true
  
client.id =
 
 connections.max.idle.ms = 54

  enable.auto.commit = true
 
 exclude.internal.topics = true
 
 fetch.max.bytes = 52428800
 
 fetch.max.wait.ms = 500
 
 fetch.min.bytes = 1
  group.id =
 test-consumer-group
 
 heartbeat.interval.ms = 1000
 
 interceptor.classes = null
 
 key.deserializer = class 
 
 org.apache.kafka.commo

Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL

2017-06-15 Thread Vahid S Hashemian
Hi Arunkumar,

Could you please take a look at this article: 
https://developer.ibm.com/opentech/2017/05/31/kafka-acls-in-practice/
The error message you posted earlier suggests there is some missing ACL 
(as indicated in the article).
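
For instance (just a sketch; substitute your own principal and ZooKeeper 
address, and note the exact missing resource may differ in your case), a 
console consumer in group test-consumer-group reading topic sample1 would 
typically need both of these:

bin/kafka-acls --authorizer-properties zookeeper.connect=host:2182 --add --allow-principal User:arun --operation Read --topic sample1

bin/kafka-acls --authorizer-properties zookeeper.connect=host:2182 --add --allow-principal User:arun --operation Read --group test-consumer-group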

Let me know if that doesn't resolve the issue. Thanks.
--Vahid




From:   Arunkumar <pm_arunku...@yahoo.com.INVALID>
To: <users@kafka.apache.org>
Date:   06/15/2017 04:37 PM
Subject:Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL



Hi Vahid

Thank you for the quick response.

I set the ACL for the topic and also created the JAAS permissions as per the 
document for both producer and consumer. I have set what I have posted 
below. Do I need to set an ACL like we set for topics --  bin/kafka-acls 
--topic * --add --allow-host host:9097 --allow-principal User:arun 
--operation Write --authorizer-properties zookeeper.connect=host:2182 ? 
Please let me know. If you need the full configuration for ZooKeeper, 
broker, producer, and consumer, I can share it as well. Thanks in advance.


KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret"
   user_arun="Arun123";
};

Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="arun"
password="Arun123";
};

KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="arun"
password="Arun123";
};

Thanks
Arunkumar Pichaimuthu, PMP


On Thu, 6/15/17, Vahid S Hashemian <vahidhashem...@us.ibm.com> wrote:

 Subject: Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL
 To: users@kafka.apache.org
 Date: Thursday, June 15, 2017, 6:16 PM
 
 Hi Arunkumar,
 
 Have you given your Kafka consumer/producer
 necessary permissions to 
 consume/produce
 messages?
 
 --Vahid
 
 
 
 From:   Arunkumar <pm_arunku...@yahoo.com.INVALID>
 To: <users@kafka.apache.org>
 Date:   06/15/2017 04:07 PM
 Subject:UNKNOWN_TOPIC_OR_PARTITION
 with SASL_PLAINTEXT ACL
 
 
 
 Hi 
 
  I am setting up ACL with SASL_PLAINTEXT. My
 zookeeper and broker starts 
 without error.
 But when I try to start my consumer or if I send message 
 through a producer it throws an exception (Both
 producer and consumer are 
 kafka CLI)
 Stack trace for my consumer below. Any insight
 is highly appreciated. 
 Thanks in advance
 
 bin/kafka-console-consumer
 --topic sample1 --from-beginning 
 --consumer.config=etc/kafka/consumer.properties 
 --bootstrap-server 
 hostname:9097
 [2017-06-15 17:21:45,286] INFO ConsumerConfig
 values:
 auto.commit.interval.ms
 = 5000
 auto.offset.reset =
 earliest
 bootstrap.servers =
 [hostname:9097]
 check.crcs =
 true
 client.id =
 connections.max.idle.ms = 54
 enable.auto.commit = true
 exclude.internal.topics = true
 fetch.max.bytes = 52428800
 fetch.max.wait.ms = 500
 fetch.min.bytes = 1
 group.id = test-consumer-group
 heartbeat.interval.ms = 1000
 interceptor.classes = null
 key.deserializer = class 
 org.apache.kafka.common.serialization.ByteArrayDeserializer
 max.partition.fetch.bytes =
 1048576
 max.poll.interval.ms =
 30
 max.poll.records = 500
 metadata.max.age.ms = 30
 metric.reporters = []
 metrics.num.samples = 2
 metrics.recording.level = INFO
 metrics.sample.window.ms = 3
 partition.assignment.strategy =
 [class 
 org.apache.kafka.clients.consumer.RangeAssignor]
 receive.buffer.bytes = 65536
 reconnect.backoff.ms = 50
 request.timeout.ms = 305000
 retry.backoff.ms = 100
 sasl.jaas.config = null
 sasl.kerberos.kinit.cmd =
 /usr/bin/kinit

 sasl.kerberos.min.time.before.relogin = 6
 sasl.kerberos.service.name =
 null

 sasl.kerberos.ticket.renew.jitter = 0.05
  
   sasl.kerberos.ticket.renew.window.factor = 0.8
 sasl.mechanism = PLAIN
 security.protocol =
 SASL_PLAINTEXT
 send.buffer.bytes
 = 131072
 session.timeout.ms =
 1
 ssl.cipher.suites =
 null
 ssl.enabled.protocols =
 [TLSv1.2, TLSv1.1, TLSv1]

 ssl.endpoint.identification.algorithm = null
 ssl.key.password = null
 ssl.keymanager.algorithm =
 SunX509
 ssl.keystore.location =
 null
 ssl.keystore.password =
 null
 ssl.keystore.type = JKS
 ssl.protocol = TLS
 ssl.provider = null
 ssl.secure.random.implementation =
 null
 ssl.trustmanager.algorithm
 = PKIX
 ssl.truststore.location =
 path.truststore

 ssl.truststore.password = [hidden]
  
   ssl.truststore

Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL

2017-06-15 Thread Vahid S Hashemian
Hi Arunkumar,

Have you given your Kafka consumer/producer necessary permissions to 
consume/produce messages?

--Vahid



From:   Arunkumar 
To: 
Date:   06/15/2017 04:07 PM
Subject:UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL



Hi 

I am setting up ACLs with SASL_PLAINTEXT. My ZooKeeper and broker start 
without error. But when I try to start my consumer, or if I send a message 
through a producer, it throws an exception (both producer and consumer are 
the Kafka CLI).
The stack trace for my consumer is below. Any insight is highly appreciated. 
Thanks in advance

bin/kafka-console-consumer --topic sample1 --from-beginning 
--consumer.config=etc/kafka/consumer.properties  --bootstrap-server 
hostname:9097
[2017-06-15 17:21:45,286] INFO ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [hostname:9097]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = test-consumer-group
heartbeat.interval.ms = 1000
interceptor.classes = null
key.deserializer = class 
org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = PLAIN
security.protocol = SASL_PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = path.truststore
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
value.deserializer = class 
org.apache.kafka.common.serialization.ByteArrayDeserializer
 (org.apache.kafka.clients.consumer.ConsumerConfig)
[2017-06-15 17:21:45,438] INFO Successfully logged in. 
(org.apache.kafka.common.security.authenticator.AbstractLogin)
[2017-06-15 17:21:45,522] INFO Kafka version : 0.10.2.1-cp1 
(org.apache.kafka.common.utils.AppInfoParser)
[2017-06-15 17:21:45,523] INFO Kafka commitId : 078e7dc02a100018 
(org.apache.kafka.common.utils.AppInfoParser)
[2017-06-15 17:21:45,781] WARN Error while fetching metadata with 
correlation id 2 : {sample1=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2017-06-15 17:21:45,878] WARN Error while fetching metadata with 
correlation id 3 : {sample1=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2017-06-15 17:21:45,980] WARN Error while fetching metadata with 
correlation id 4 : {sample1=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2017-06-15 17:21:46,084] WARN Error while fetching metadata with 
correlation id 5 : {sample1=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2017-06-15 17:21:46,185] WARN Error while fetching metadata with 
correlation id 6 : {sample1=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2017-06-15 17:21:46,289] WARN Error while fetching metadata with 
correlation id 7 : {sample1=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2017-06-15 17:21:46,392] WARN Error while fetching metadata with 
correlation id 8 : {sample1=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2017-06-15 17:21:46,495] WARN Error while fetching metadata with 
correlation id 9 : {sample1=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2017-06-15 17:21:46,598] WARN Error while fetching metadata with 
correlation id 10 : {sample1=UNKNOWN_TOPIC_OR_PARTITION} 
(org.apache.kafka.clients.NetworkClient)
[2017-06-15 17:21:46,702] WARN Error while fetching metadata with 

Re: kafka-consumer-groups.sh vs ConsumerOffsetChecker

2017-06-15 Thread Vahid S Hashemian
Is it possible that the consumer group or the corresponding topic got 
deleted?

If you don't have consumers running in the group but there are offsets 
associated with (valid) topics in the group, the group's offsets should not 
get removed from ZK.

--Vahid



From:   karan alang <karan.al...@gmail.com>
To: users@kafka.apache.org
Date:   06/15/2017 02:22 PM
Subject:Re: kafka-consumer-groups.sh vs ConsumerOffsetChecker



Hi Vahid,

In ZK, i dont see any reference to myGroup
(is this because the consumer is no longer running or is there some other
reason ?)


*ls /consumers[replay-log-producer]*

Also, when i run the ConsumerOffsetChecker w/o the topic, i get error 
shown
below ->

*$KAFKA_HOME/bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
--zookeeper localhost:2181 --group myGroup*

[2017-06-15 14:12:35,121] WARN WARNING: ConsumerOffsetChecker is 
deprecated
and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand
instead. (kafka.tools.ConsumerOffsetChecker$)
Exiting due to: org.apache.zookeeper.KeeperException$NoNodeException:
KeeperErrorCode = NoNode for /consumers/myGroup/owners.

Btw, even with the --topic, i'm getting this error now.
(which i was not getting earlier)

So, How do i interpret this ?
is this a caching issue ?





On Thu, Jun 15, 2017 at 1:01 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Karan,
>
> The message "No topic available for consumer group provided" appears 
when
> there is no topic under the consumer group in ZooKeeper (under
> /consumers/myGroup/owners/).
> Can you check whether the topic 'newBroker1' exists under this ZK path?
> Also, do you still get the rows below if you run the 
ConsumerOffsetChecker
> without providing any topic?
>
> Thanks.
> --Vahid
>
>
>
>
> From:   karan alang <karan.al...@gmail.com>
> To: users@kafka.apache.org
> Date:   06/15/2017 12:10 PM
> Subject:Re: kafka-consumer-groups.sh vs ConsumerOffsetChecker
>
>
>
> Pls note ->
> Even when i run the command as shown below (not as new-consumer), i 
don't
> get the required result.
>
> *$KAFKA_HOME/bin/kafka-consumer-groups.sh --describe  --zookeeper
> localhost:2181  --group myGroup*
>
> No topic available for consumer group provided
>
> GROUP, TOPIC, PARTITION, CURRENT OFFSET, LOG END OFFSET, LAG, OWNER
>
> any ideas on what needs to be done?
>
> On Thu, Jun 15, 2017 at 11:55 AM, karan alang <karan.al...@gmail.com>
> wrote:
>
> > Hi All -
> > I've Kafka 0.9 (going forward will be migrating to Kafka 0.10) & 
trying
> to
> > use the ConsumerOffsetChecker &  bin/kafka-consumer-groups.sh to check
> for
> > offsets.
> >
> > I'm seeing different behavior.
> >
> > Here is what i did ->
> >
> > a) When i use ConsumerOffsetChecker
> >
> > $KAFKA_HOME/bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
> > --zookeeper localhost:2181 --group myGroup --topic newBroker1
> >
> > [2017-06-15 11:50:23,729] WARN WARNING: ConsumerOffsetChecker is
> > deprecated and will be dropped in releases following 0.9.0. Use
> > ConsumerGroupCommand instead. (kafka.tools.ConsumerOffsetChecker$)
> >
> > Group   Topic  Pid Offset logSize
> > Lag Owner
> >
> > myGroup newBroker1 0   224216 507732
> > 283516  none
> >
> > myGroup newBroker1 1   224888 508978
> > 284090  none
> >
> > myGroup newBroker1 2   141104 424038
> > 282934  none
> >
> > myGroup newBroker1 3   11829 295110
> > 283281  none
> >
> > myGroup newBroker1 4   11580 294510
> > 282930  none
> >
> > b) When i run the kafka-consumer-groups.sh (since the WARNING message
> > above says - ConsumerOffsetChecker is deprecated & we need to use
> > ConsumerGroupCommand instead)
> >
> > Option 1 :
> >
> > $KAFKA_HOME/bin/kafka-consumer-groups.sh --describe --bootstrap-server
> > localhost:9092,localhost:9093,localhost:9094,localhost:9095
> > --new-consumer --group myGroup
> >
> > Consumer group `myGroup` does not exist or is rebalancing.
> >
> >
> > The question - When i use $KAFKA_HOME/bin/kafka-consumer-groups.sh - 
why
> > is the script not giving me Offset details of the group - myGroup &
> Topic -
> > newBroker1
> >
> > What needs to be done for this (using script
> $KAFKA_HOME/bin/kafka-consumer-groups.sh)
> > ?
> >
> >
> >
> >
>
>
>
>
>






Re: kafka-consumer-groups.sh vs ConsumerOffsetChecker

2017-06-15 Thread Vahid S Hashemian
Hi Karan,

The message "No topic available for consumer group provided" appears when 
there is no topic under the consumer group in ZooKeeper (under 
/consumers/myGroup/owners/).
Can you check whether the topic 'newBroker1' exists under this ZK path?
Also, do you still get the rows below if you run the ConsumerOffsetChecker 
without providing any topic?
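
Something like this should work for checking the path (using the ZooKeeper 
address from your earlier commands):

bin/zookeeper-shell.sh localhost:2181 ls /consumers/myGroup/owners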

Thanks.
--Vahid




From:   karan alang 
To: users@kafka.apache.org
Date:   06/15/2017 12:10 PM
Subject:Re: kafka-consumer-groups.sh vs ConsumerOffsetChecker



Pls note ->
Even when i run the command as shown below (not as new-consumer), i don't
get the required result.

*$KAFKA_HOME/bin/kafka-consumer-groups.sh --describe  --zookeeper
localhost:2181  --group myGroup*

No topic available for consumer group provided

GROUP, TOPIC, PARTITION, CURRENT OFFSET, LOG END OFFSET, LAG, OWNER

any ideas on what needs to be done?

On Thu, Jun 15, 2017 at 11:55 AM, karan alang  
wrote:

> Hi All -
> I've Kafka 0.9 (going forward will be migrating to Kafka 0.10) & trying 
to
> use the ConsumerOffsetChecker &  bin/kafka-consumer-groups.sh to check 
for
> offsets.
>
> I'm seeing different behavior.
>
> Here is what i did ->
>
> a) When i use ConsumerOffsetChecker
>
> $KAFKA_HOME/bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
> --zookeeper localhost:2181 --group myGroup --topic newBroker1
>
> [2017-06-15 11:50:23,729] WARN WARNING: ConsumerOffsetChecker is
> deprecated and will be dropped in releases following 0.9.0. Use
> ConsumerGroupCommand instead. (kafka.tools.ConsumerOffsetChecker$)
>
> Group   Topic  Pid Offset logSize
> Lag Owner
>
> myGroup newBroker1 0   224216 507732
> 283516  none
>
> myGroup newBroker1 1   224888 508978
> 284090  none
>
> myGroup newBroker1 2   141104 424038
> 282934  none
>
> myGroup newBroker1 3   11829 295110
> 283281  none
>
> myGroup newBroker1 4   11580 294510
> 282930  none
>
> b) When i run the kafka-consumer-groups.sh (since the WARNING message
> above says - ConsumerOffsetChecker is deprecated & we need to use
> ConsumerGroupCommand instead)
>
> Option 1 :
>
> $KAFKA_HOME/bin/kafka-consumer-groups.sh --describe  --bootstrap-server
> localhost:9092,localhost:9093,localhost:9094,localhost:9095
> --new-consumer --group myGroup
>
> Consumer group `myGroup` does not exist or is rebalancing.
>
>
> The question - When i use $KAFKA_HOME/bin/kafka-consumer-groups.sh - why
> is the script not giving me Offset details of the group - myGroup & 
Topic -
> newBroker1
>
> What needs to be done for this (using script 
$KAFKA_HOME/bin/kafka-consumer-groups.sh)
> ?
>
>
>
>






Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-06-13 Thread Vahid S Hashemian
Hi Michal,

Thanks a lot for your feedback.

Your statement about Heartbeat is fair and makes sense. I'll update the 
KIP accordingly.

--Vahid




From:   Michal Borowiecki <michal.borowie...@openbet.com>
To: users@kafka.apache.org, Vahid S Hashemian 
<vahidhashem...@us.ibm.com>, d...@kafka.apache.org
Date:   06/13/2017 01:35 AM
Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL 
Permission of OffsetFetch



Hi Vahid,
+1 wrt OffsetFetch.
The "Additional Food for Thought" mentions Heartbeat as a non-mutating 
action. I don't think that's true as the GroupCoordinator updates the 
latestHeartbeat field for the member and adds a new object to the 
heartbeatPurgatory, see completeAndScheduleNextHeartbeatExpiration() 
called from handleHeartbeat()

NB added dev mailing list back into CC as it seems to have been lost along 
the way.
Cheers,
Michał

On 12/06/17 18:47, Vahid S Hashemian wrote:
Hi Colin,

Thanks for the feedback.

To be honest, I'm not sure either why Read was selected instead of Write
for mutating APIs in the initial design (I asked Ewen on the corresponding
JIRA and he seemed unsure too).
Perhaps someone who was involved in the design can clarify.

Thanks.
--Vahid




From:   Colin McCabe <cmcc...@apache.org>
To: users@kafka.apache.org
Date:   06/12/2017 10:11 AM
Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL 
Permission of OffsetFetch



Hi Vahid,

I think you make a valid point that the ACLs controlling group
operations are not very intuitive.

This is probably a dumb question, but why are we using Read for mutating
APIs?  Shouldn't that be Write?

The distinction between Describe and Read makes a lot of sense for
Topics.  A group isn't really something that you "read" from in the same
way as a topic, so it always felt kind of weird there.

best,
Colin


On Thu, Jun 8, 2017, at 11:29, Vahid S Hashemian wrote:

Hi all,

I'm resending my earlier note hoping it would spark some conversation
this 
time around :)

Thanks.
--Vahid




From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
To: dev <d...@kafka.apache.org>, "Kafka User" 

<users@kafka.apache.org>

Date:   05/30/2017 08:33 AM
Subject:KIP-163: Lower the Minimum Required ACL Permission of 
OffsetFetch



Hi,

I started a new KIP to improve the minimum required ACL permissions of 
some of the APIs: 


https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch




The KIP is to address KAFKA-4585.

Feedback and suggestions are welcome!

Thanks.
--Vahid














-- 
Michal Borowiecki
Senior Software Engineer L4
T: +44 208 742 1600 / +44 203 249 8448
E: michal.borowie...@openbet.com
W: www.openbet.com

OpenBet Ltd
Chiswick Park Building 9
566 Chiswick High Rd
London
W4 5XT
UK










Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-06-12 Thread Vahid S Hashemian
Hi Colin,

Thanks for the feedback.

To be honest, I'm not sure either why Read was selected instead of Write 
for mutating APIs in the initial design (I asked Ewen on the corresponding 
JIRA and he seemed unsure too).
Perhaps someone who was involved in the design can clarify.

Thanks.
--Vahid




From:   Colin McCabe <cmcc...@apache.org>
To: users@kafka.apache.org
Date:   06/12/2017 10:11 AM
Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL 
Permission of OffsetFetch



Hi Vahid,

I think you make a valid point that the ACLs controlling group
operations are not very intuitive.

This is probably a dumb question, but why are we using Read for mutating
APIs?  Shouldn't that be Write?

The distinction between Describe and Read makes a lot of sense for
Topics.  A group isn't really something that you "read" from in the same
way as a topic, so it always felt kind of weird there.

best,
Colin
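(For context, the group permission under discussion is granted like this today - a sketch, assuming the default ZooKeeper-based authorizer, with User:alice and the group name as placeholders:

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --operation Read --group my-group

Under the current scheme this single Read ACL gates both fetching and committing offsets for the group; the KIP proposes that OffsetFetch require only Describe.)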


On Thu, Jun 8, 2017, at 11:29, Vahid S Hashemian wrote:
> Hi all,
> 
> I'm resending my earlier note hoping it would spark some conversation
> this 
> time around :)
> 
> Thanks.
> --Vahid
> 
> 
> 
> 
> From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
> To: dev <d...@kafka.apache.org>, "Kafka User" 
<users@kafka.apache.org>
> Date:   05/30/2017 08:33 AM
> Subject:KIP-163: Lower the Minimum Required ACL Permission of 
> OffsetFetch
> 
> 
> 
> Hi,
> 
> I started a new KIP to improve the minimum required ACL permissions of 
> some of the APIs: 
> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch

> 
> The KIP is to address KAFKA-4585.
> 
> Feedback and suggestions are welcome!
> 
> Thanks.
> --Vahid
> 
> 
> 
> 
> 







Re: [ANNOUNCE] New committer: Damian Guy

2017-06-09 Thread Vahid S Hashemian
Great news.

Congrats Damian!

--Vahid



From:   Guozhang Wang 
To: "d...@kafka.apache.org" , 
"users@kafka.apache.org" , 
"priv...@kafka.apache.org" 
Date:   06/09/2017 01:34 PM
Subject:[ANNOUNCE] New committer: Damian Guy



Hello all,


The PMC of Apache Kafka is pleased to announce that we have invited Damian
Guy as a committer to the project.

Damian has made tremendous contributions to Kafka. He has not only
contributed a lot to the Streams API, but has also been involved in many
other areas like the producer and consumer clients, and the broker-side
coordinators (group coordinator and the ongoing transaction coordinator).
He has contributed more than 100 patches so far, and has been driving 6
KIP contributions.

More importantly, Damian has been a very prolific reviewer on open PRs and
has been actively participating in community activities such as email
lists and Stack Overflow questions. Through his code contributions and
reviews, Damian has demonstrated good judgement on system design and code
quality, especially thorough unit test coverage. We believe he will make a
great addition to the committers of the community.


Thank you for your contributions, Damian!


-- Guozhang, on behalf of the Apache Kafka PMC






[DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-06-08 Thread Vahid S Hashemian
Hi all,

I'm resending my earlier note hoping it would spark some conversation this 
time around :)

Thanks.
--Vahid




From:   "Vahid S Hashemian" <vahidhashem...@us.ibm.com>
To: dev <d...@kafka.apache.org>, "Kafka User" <users@kafka.apache.org>
Date:   05/30/2017 08:33 AM
Subject:KIP-163: Lower the Minimum Required ACL Permission of 
OffsetFetch



Hi,

I started a new KIP to improve the minimum required ACL permissions of 
some of the APIs: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch

The KIP is to address KAFKA-4585.

Feedback and suggestions are welcome!

Thanks.
--Vahid







Re: [VOTE] KIP-162: Enable topic deletion by default

2017-06-05 Thread Vahid S Hashemian
+1 (non-binding)

Thanks.
--Vahid



From:   Gwen Shapira 
To: "d...@kafka.apache.org" , Users 

Date:   06/05/2017 09:38 PM
Subject:[VOTE] KIP-162: Enable topic deletion by default



Hi,

The discussion has been quite positive, so I posted a JIRA, a PR and
updated the KIP with the latest decisions.

Lets officially vote on the KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-162+-+Enable+topic+deletion+by+default


JIRA is here: https://issues.apache.org/jira/browse/KAFKA-5384

Gwen






Re: Trouble with querying offsets when using new consumer groups API

2017-05-30 Thread Vahid S Hashemian
Hi Jerry,

The behavior you are expecting is implemented in 0.10.2 through KIP-88 (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-88%3A+OffsetFetch+Protocol+Update
) and KAFKA-3853 (https://issues.apache.org/jira/browse/KAFKA-3853).
Starting from this release when you query a consumer group (new consumer 
only) you'll see the stored offset corresponding to a topic partition even 
if there is no active consumer consuming from it.
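For example, after upgrading, a describe against the group should show the stored offsets and lag even with no members connected (a sketch; the broker address is a placeholder):

bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server kafka:9092 --describe --group group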

I hope this helps.
--Vahid




From:   Jerry George 
To: users@kafka.apache.org
Date:   05/26/2017 06:55 AM
Subject:Trouble with querying offsets when using new consumer 
groups API



Hi

I had question about the new consumer APIs.

I am having trouble retrieving the offsets once the consumers are
*disconnected* when using new consumer v2 API. Following is what I am
trying to do,

*bin/kafka-consumer-groups.sh -new-consumer --bootstrap-server kafka:9092
--group group --describe*

If I query this when the consumers are connected, there is no problem.
However, once the consumers are disconnected it says there is no such
group, though the offsets are retained in __consumer_offsets.

The offset retention policy is default; i.e. 1440 minutes, I believe.

Once the consumers are reconnected, I am able to query the offsets once
again.

Could anyone here please help me understand why this is?

Kafka: 0.10.1
Consumer Library: sarama golang library

Regards,
Jerry






KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-05-30 Thread Vahid S Hashemian
Hi,

I started a new KIP to improve the minimum required ACL permissions of 
some of the APIs: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch
The KIP is to address KAFKA-4585.

Feedback and suggestions are welcome!

Thanks.
--Vahid



Re: KIP-162: Enable topic deletion by default

2017-05-27 Thread Vahid S Hashemian
Sure, that sounds good.

I suggested that to keep command line behavior consistent.
Plus, removal of ACL access is something that can be easily undone, but 
topic deletion is not reversible.
So, perhaps a new follow-up JIRA to this KIP to add the confirmation for 
topic deletion.

Thanks.
--Vahid



From:   Gwen Shapira <g...@confluent.io>
To: d...@kafka.apache.org, users@kafka.apache.org
Date:   05/27/2017 11:04 AM
Subject:Re: KIP-162: Enable topic deletion by default



Thanks Vahid,

Do you mind if we leave the command-line out of scope for this?

I can see why adding confirmations, options to bypass confirmations, etc
would be an improvement. However, I've seen no complaints about the 
current
behavior of the command-line and the KIP doesn't change it at all. So I'd
rather address things separately.

Gwen

On Fri, May 26, 2017 at 8:10 PM Vahid S Hashemian 
<vahidhashem...@us.ibm.com>
wrote:

> Gwen, thanks for the KIP.
> It looks good to me.
>
> Just a minor suggestion: It would be great if the command asks for a
> confirmation (y/n) before deleting the topic (similar to how removing 
ACLs
> works).
>
> Thanks.
> --Vahid
>
>
>
> From:   Gwen Shapira <g...@confluent.io>
> To: "d...@kafka.apache.org" <d...@kafka.apache.org>, Users
> <users@kafka.apache.org>
> Date:   05/26/2017 07:04 AM
> Subject:KIP-162: Enable topic deletion by default
>
>
>
> Hi Kafka developers, users and friends,
>
> I've added a KIP to improve our out-of-the-box usability a bit:
> KIP-162: Enable topic deletion by default:
>
> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-162+-+Enable+topic+deletion+by+default

>
>
> Pretty simple :) Discussion and feedback are welcome.
>
> Gwen
>
>
>
>
>






Re: KIP-162: Enable topic deletion by default

2017-05-26 Thread Vahid S Hashemian
Gwen, thanks for the KIP.
It looks good to me.

Just a minor suggestion: It would be great if the command asks for a 
confirmation (y/n) before deleting the topic (similar to how removing ACLs 
works).

Thanks.
--Vahid



From:   Gwen Shapira 
To: "d...@kafka.apache.org" , Users 

Date:   05/26/2017 07:04 AM
Subject:KIP-162: Enable topic deletion by default



Hi Kafka developers, users and friends,

I've added a KIP to improve our out-of-the-box usability a bit:
KIP-162: Enable topic deletion by default:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-162+-+Enable+topic+deletion+by+default


Pretty simple :) Discussion and feedback are welcome.

Gwen
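(For reference, a sketch of what the KIP changes and the command it unblocks, assuming a local ZooKeeper:

# server.properties: currently false by default; KIP-162 makes true the default
delete.topic.enable=true

bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic my-topic

With the setting off, the --delete call only marks the topic for deletion and nothing is removed.)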






Re: Kafka 0.10.1.0 - Question about kafka-consumer-groups.sh

2017-05-05 Thread Vahid S Hashemian
Hi Subhash,

Yes, it should be listed again in the output, but it should get fresh 
offsets (with `--describe` for example), since the old offsets were 
removed once it became inactive.

--Vahid




From:   Subhash Sriram <subhash.sri...@gmail.com>
To: users@kafka.apache.org
Date:   05/05/2017 02:38 PM
Subject:Re: Kafka 0.10.1.0 - Question about 
kafka-consumer-groups.sh



Hi Vahid,

Thank you very much for your reply! I appreciate the clarification.
Unfortunately, I didn't really try the command until today. That being
said, I created a couple of new groups and consumed from a test topic
today, and they did show up in the list. Maybe I can see what happens with
them after offsets.retention.minutes has elapsed.

What if a group went inactive, and then became active again with the same
group name? I would assume that once it turns back on, it would show up in
the list.

Thanks again,
Subhash


On Fri, May 5, 2017 at 5:30 PM, Vahid S Hashemian 
<vahidhashem...@us.ibm.com
> wrote:

> Hi Subhash,
>
> The broker config that affects group offset retention is
> "offsets.retention.minutes" (which defaults to 1 day).
> If the group is inactive (i.e. has no consumer consuming from it) for this
> long, its offsets will be removed from the internal offsets topic and it
> will not be listed in the consumer group command output.
>
> But the consumer group in your case should be alive, since it did not
> become inactive.
>
> Did the command use to list the group in the output before?
>
> --Vahid
>
>
>
>
> From:   Subhash Sriram <subhash.sri...@gmail.com>
> To: users@kafka.apache.org
> Date:   05/05/2017 01:43 PM
> Subject:Kafka 0.10.1.0 - Question about kafka-consumer-groups.sh
>
>
>
> Hey everyone,
>
> I am a little bit confused about how the kafka-consumer-groups.sh/
> ConsumerGroupCommand works, and was hoping someone could shed some light
> on
> this for me.
>
> We are running Kafka 0.10.1.0, and using the new Consumer API with the
> Confluent.Kafka C# library (v0.9.5) that uses librdkafka. Today, I was
> trying to get some details on what consumers were running, and their
> position within a couple of topics, but when I ran the following, I did
> not
> see the group in the list.
>
> ./bin/kafka-consumer-groups.sh --bootstrap-server [our servers]
> --new-consumer --list
>
> I see a few groups listed, but none of the ones I was expecting to see. 
I
> saw this in the documentation:
>
> *"When using the new consumer API
> <http://kafka.apache.org/documentation.html#newconsumerapi> (where the
> broker handles coordination of partition handling and rebalance), the
> group
> is deleted when the last committed offset for that group expires"*
>
> Is that related to log retention time? If so, is it saying that the 
group
> will be deleted from the list once the highest committed offset of its
> group is past its configured log retention time?
>
> The issue I am facing is that we have a consumer group that is actively
> consuming. At one point, I am sure the messages in the topic it is
> consuming expired, but since then, more messages have been added, and it
> has been consuming & committing higher offsets. Shouldn't that group 
have
> come back on the list?
>
> Any ideas would be very helpful.
>
> Thanks,
> Subhash
>
>
>
>
>






Re: Kafka 0.10.1.0 - Question about kafka-consumer-groups.sh

2017-05-05 Thread Vahid S Hashemian
Hi Subhash,

The broker config that affects group offset retention is 
"offsets.retention.minutes" (which defaults to 1 day).
If the group is inactive (i.e. has no consumer consuming from it) for this
long, its offsets will be removed from the internal offsets topic and it
will not be listed in the consumer group command output.
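For reference, the setting in server.properties (a sketch; 1440 is the default):

# committed offsets of an inactive group expire after 24 hours
offsets.retention.minutes=1440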

But the consumer group in your case should be alive, since it did not 
become inactive.

Did the command use to list the group in the output before?

--Vahid




From:   Subhash Sriram 
To: users@kafka.apache.org
Date:   05/05/2017 01:43 PM
Subject:Kafka 0.10.1.0 - Question about kafka-consumer-groups.sh



Hey everyone,

I am a little bit confused about how the kafka-consumer-groups.sh/
ConsumerGroupCommand works, and was hoping someone could shed some light 
on
this for me.

We are running Kafka 0.10.1.0, and using the new Consumer API with the
Confluent.Kafka C# library (v0.9.5) that uses librdkafka. Today, I was
trying to get some details on what consumers were running, and their
position within a couple of topics, but when I ran the following, I did 
not
see the group in the list.

./bin/kafka-consumer-groups.sh --bootstrap-server [our servers]
--new-consumer --list

I see a few groups listed, but none of the ones I was expecting to see. I
saw this in the documentation:

*"When using the new consumer API
 (where the
broker handles coordination of partition handling and rebalance), the 
group
is deleted when the last committed offset for that group expires"*

Is that related to log retention time? If so, is it saying that the group
will be deleted from the list once the highest committed offset of its
group is past its configured log retention time?

The issue I am facing is that we have a consumer group that is actively
consuming. At one point, I am sure the messages in the topic it is
consuming expired, but since then, more messages have been added, and it
has been consuming & committing higher offsets. Shouldn't that group have
come back on the list?

Any ideas would be very helpful.

Thanks,
Subhash






Re: why is it called kafka?

2017-05-02 Thread Vahid S Hashemian
This might help: https://www.quora.com/How-did-Kafka-get-its-name

--Vahid



From:   The Real Plato 
To: users@kafka.apache.org
Date:   05/02/2017 07:20 PM
Subject:why is it called kafka?



searching google and the docs have not revealed the answer
-- 
-plato






Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Vahid S Hashemian
Great news.

Congrats Rajini!

--Vahid




From:   Gwen Shapira 
To: d...@kafka.apache.org, Users , 
priv...@kafka.apache.org
Date:   04/24/2017 02:06 PM
Subject:[ANNOUNCE] New committer: Rajini Sivaram



The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
are pleased to announce that she has accepted!

Rajini contributed 83 patches, 8 KIPs (all security and quota
improvements) and a significant number of reviews. She is also on the
conference committee for Kafka Summit, where she helped select content
for our community event. Through her contributions she's shown good
judgement, good coding skills, willingness to work with the community on
finding the best
solutions and very consistent follow through on her work.

Thank you for your contributions, Rajini! Looking forward to many more :)

Gwen, for the Apache Kafka PMC






Re: [VOTE] 0.10.2.1 RC1

2017-04-14 Thread Vahid S Hashemian
+1 (non-binding)

Built from the source and ran the quickstart successfully on Ubuntu, Mac, 
Windows (64 bit).

Thank you Gwen for running the release.

--Vahid



From:   Gwen Shapira 
To: d...@kafka.apache.org, Users 
Cc: Alexander Ayars 
Date:   04/12/2017 05:25 PM
Subject:[VOTE] 0.10.2.1 RC1



Hello Kafka users, developers, client-developers, friends, romans,
citizens, etc,

This is the second candidate for release of Apache Kafka 0.10.2.1.

This is a bug fix release and it includes fixes and improvements from 24 
JIRAs
(including a few critical bugs).

Release notes for the 0.10.2.1 release:
http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Monday, April 17, 5:30 pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

Your help in validating this bugfix release is super valuable, so
please take the time to test and vote!

Suggested tests:
 * Grab the source archive and make sure it compiles
 * Grab one of the binary distros and run the quickstarts against them
 * Extract and verify one of the site docs jars
 * Build a sample against jars in the staging repo
 * Validate GPG signatures on at least one file
 * Validate the javadocs look ok
 * The 0.10.2 documentation was updated for this bugfix release
(especially upgrade, streams and connect portions) - please make sure
it looks ok: http://kafka.apache.org/documentation.html

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc1/javadoc/

* Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.1 tag:
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=e133f2ca57670e77f8114cc72dbc2f91a48e3a3b


* Documentation:
http://kafka.apache.org/0102/documentation.html

* Protocol:
http://kafka.apache.org/0102/protocol.html

/**

Thanks,

Gwen Shapira







Re: Problems with Kafka Quickstart (Windows)

2017-03-04 Thread Vahid S Hashemian
Hi Rajat,

It would be great if you could file a JIRA for this in Kafka issue 
tracking site (https://issues.apache.org/jira/issues) so it can be easily 
tracked and addressed.

Thanks.
--Vahid




From:   "Rajat Yadav (Trainee, HO-IT)" 
To: users@kafka.apache.org
Date:   03/04/2017 08:26 AM
Subject:Problems with Kafka Quickstart (Windows)



Dear Sir/Madam,

I just started exploring Kafka on Windows(10) but there are some of the
things that i found missing, not sure of all as i am still not completed
the exploration.

1.) Windows support should be provided in a better way...

Just writing "Windows platforms use bin\windows\ instead of bin/, and change
the script extension to .bat" is not enough. :(

2.) On Windows, the consumer, when reading from a file, gives this result:
{"schema":{"type":"string","optional":false},"payload":"-e \"foo\\nbar\" "}
instead of what is shown in your Quickstart, i.e.
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}

3.) There should be some live examples providing an exact working scenario.
I want to use Kafka with Node.js and I have MS SQL as the database. Still
can't figure out how :(

4.) Problem with *Step 8: Use Kafka Streams to process data* - the WordCountDemo
link
<https://github.com/apache/kafka/blob/%7BdotVersion%7D/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountDemo.java>
is not working.

If other problem arise then i will mail again.
Please solve my problem asap


Thanks
Rajat Yadav
Web Developer (Darcl Logistics Limited)






Re: Simple data-driven app design using Kafka

2017-02-22 Thread Vahid S Hashemian
Pete,

I think this excellent post covers what you are looking for:
https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/

--Vahid
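For a rough idea, here is a minimal sketch using the new Java consumer. The topic names come from your list; TopicDispatcher and the doFoo/doBar/doBaz handlers are hypothetical stand-ins for your topic-specific tasks:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TopicDispatcher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "topic-dispatcher");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("foo", "bar", "baz"));
            while (true) {
                // poll() blocks up to the timeout and returns only newly arrived
                // records, so the "otherwise do nothing" case falls out naturally
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    switch (record.topic()) {
                        case "foo": doFoo(record.value()); break;
                        case "bar": doBar(record.value()); break;
                        case "baz": doBaz(record.value()); break;
                    }
                }
            }
        }
    }

    // hypothetical topic-specific tasks
    static void doFoo(String value) { /* ... */ }
    static void doBar(String value) { /* ... */ }
    static void doBaz(String value) { /* ... */ }
}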




From:   Peter Figliozzi 
To: users@kafka.apache.org
Date:   02/22/2017 07:29 PM
Subject:Simple data-driven app design using Kafka



Hello Kafka Users,

I started using Kafka a couple of weeks ago an am very impressed!  I've
gotten the hang of producing, and now it's time for consuming.  My
applications (Scala) don't work quite like the examples, but I think it's 
a
pretty basic architecture:


   - Suppose you have a several topics: foo, bar, and baz
   - When a new data element arrives in a particular topic, perform the
   topic-specific task with the new data i.e. DoFoo(newfoo)
   - Otherwise, do nothing


Can anyone point to an example or even sketch it out here?

Thanks much,

Pete






Re: KIP-122: Add a tool to Reset Consumer Group Offsets

2017-02-16 Thread Vahid S Hashemian
Thanks for this useful and the nice KIP write-up.

Just a minor suggestion. Would it make sense to avoid repeating the term 
"reset" in the arguments?
We already use the argument "reset-offset", so we may not need to repeat 
the term in the follow-on argument.

For example, instead of

kafka-consumer-groups.sh --reset-offset --group cg1 --reset-to-datetime 
2017-01-01T00:00:00.000

we could use

kafka-consumer-groups.sh --reset-offset --group cg1 --to-datetime 
2017-01-01T00:00:00.000


Similarly we could replace the other suggested arguments (or any other 
that is eventually approved) like this:

--reset-to-period -> --to-period
--reset-to-earliest -> --to-earliest
--reset-to-latest -> --to-latest
--reset-minus -> --to-minus
--reset-plus -> --to-plus
--reset-to -> --to

Thanks.
--Vahid



From:   Jorge Esteban Quilcate Otoya 
To: d...@kafka.apache.org, Users 
Date:   02/08/2017 02:23 PM
Subject:Re: KIP-122: Add a tool to Reset Consumer Group Offsets



Great. I think I got the idea. What about this options:

Scenarios:

1. Current status

´kafka-consumer-groups.sh --reset-offset --group cg1´

2. To Datetime

´kafka-consumer-groups.sh --reset-offset --group cg1 --reset-to-datetime
2017-01-01T00:00:00.000´

3. To Period

´kafka-consumer-groups.sh --reset-offset --group cg1 --reset-to-period 
P2D´

4. To Earliest

´kafka-consumer-groups.sh --reset-offset --group cg1 --reset-to-earliest´

5. To Latest

´kafka-consumer-groups.sh --reset-offset --group cg1 --reset-to-latest´

6. Minus 'n' offsets

´kafka-consumer-groups.sh --reset-offset --group cg1 --reset-minus n´

7. Plus 'n' offsets

´kafka-consumer-groups.sh --reset-offset --group cg1 --reset-plus n´

8. To specific offset

´kafka-consumer-groups.sh --reset-offset --group cg1 --reset-to x´

Scopes:

a. All topics used by Consumer Group

Don't specify --topics

b. Specific List of Topics

Add list of values in --topics t1,t2,tn

c. One Topic, all Partitions

Add one topic and no partitions values: --topic t1

d. One Topic, List of Partitions

Add one topic and partitions values: --topic t1 --partitions 0,1,2

About Reset Plan (JSON file):

I think is still valid to have the option to persist reset configuration 
as
a file, but I agree to give the option to run the tool without going down
to the JSON file.

Execution options:

1. Without execution argument (No args):

Print out results (reset plan)

2. With --execute argument:

Run reset process

3. With --output argument:

Save result in a JSON format.

4. Only with --execute option and --reset-file (path to JSON)

Reset based on file

5. Only with --verify option and --reset-file (path to JSON)

Verify file values with current offsets

I think we can remove --generate-and-execute because it is a bit clumsy.

With these options we will be able to execute with a manual JSON
configuration.


El mié., 8 feb. 2017 a las 22:43, Ben Stopford ()
escribió:

> Yes - using a tool like this to skip a set of consumer groups over a
> corrupt/bad message is definitely appealing.
>
> B
>
> On Wed, Feb 8, 2017 at 9:37 PM Gwen Shapira  wrote:
>
> > I like the --reset-to-earliest and --reset-to-latest. In general,
> > since the JSON route is the most challenging for users, we want to
> > provide a lot of ways to do useful things without going there.
> >
> > Two things that can help:
> >
> > 1. A lot of times, users want to skip few messages that cause issues
> > and continue. maybe just specifying the topic, partition and delta
> > will be better than having to find the offset and write a JSON and
> > validate the JSON etc.
> >
> > 2. Thinking if there are other common use-cases that we can make easy
> > rather than just one generic but not very usable method.
> >
> > Gwen
> >
> > On Wed, Feb 8, 2017 at 3:25 AM, Jorge Esteban Quilcate Otoya
> >  wrote:
> > > Thanks for the feedback!
> > >
> > > @Onur, @Gwen:
> > >
> > > Agree. Actually at the first draft I considered to have it inside
> > > ´kafka-consumer-groups.sh´, but I decide to propose it as a 
standalone
> > tool
> > > to describe it clearly and focus it on reset functionality.
> > >
> > > But now that you mentioned, it does make sense to have it in
> > > ´kafka-consumer-groups.sh´. How would be a consistent way to 
introduce
> > it?
> > >
> > > Maybe something like this:
> > >
> > > ´kafka-consumer-groups.sh --reset-offset --generate --group cg1
> --topics
> > t1
> > > --reset-from 2017-01-01T00:00:00.000 --output plan.json´
> > >
> > > ´kafka-consumer-groups.sh --reset-offset --verify --reset-json-file
> > > plan.json´
> > >
> > > ´kafka-consumer-groups.sh --reset-offset --execute --reset-json-file
> > > plan.json´
> > >
> > > ´kafka-consumer-groups.sh --reset-offset --generate-and-execute 
--group
> > cg1
> > > --topics t1 --reset-from 2017-01-01T00:00:00.000´
> > >
> > > @Gwen:
> > >
> > >> It looks exactly like the replica assignment tool
> > 

Re: possible bug or inconsistency in kafka-clients

2017-01-28 Thread Vahid S Hashemian
Could this be the same issue as the one reported here?
https://issues.apache.org/jira/browse/KAFKA-4547

--Vahid
 




From:   Koert Kuipers 
To: users@kafka.apache.org
Date:   01/27/2017 09:34 PM
Subject:possible bug or inconsistency in kafka-clients



hello all,

i just wanted to point out a potential issue in kafka-clients 0.10.1.1

i was using spark-sql-kafka-0-10, which is spark structured streaming
integration for kafka. it depends on kafka-clients 0.10.0.1 but since my
kafka servers are 0.10.1.1 i decided to upgrade kafka-clients to 0.10.1.1
also. i was not able to read from kafka in spark reliably. the issue 
seemed
to be that the kafka consumer got the latest offsets wrong. after
downgrading kafka-clients back to 0.10.0.1 it all worked correctly again.

did the behavior of KafkaConsumer.seekToEnd change between 0.10.0.1 and
0.10.1.1?

for the original discussion see here:
https://www.mail-archive.com/user@spark.apache.org/msg61290.html


i think the relevant code in spark is here:
https://github.com/apache/spark/blob/master/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaSource.scala#L399


best,
koert






Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Vahid S Hashemian
Congrats Grant!

--Vahid



From:   Sriram Subramanian 
To: users@kafka.apache.org
Cc: d...@kafka.apache.org, priv...@kafka.apache.org
Date:   01/11/2017 11:58 AM
Subject:Re: [ANNOUNCE] New committer: Grant Henke



Congratulations Grant!

On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>






Re: [VOTE] 0.10.1.1 RC1

2016-12-19 Thread Vahid S Hashemian
Hi Guozhang,

I also verified the quickstart on Ubuntu and Mac. +1 on those.

On Windows OS there are a couple of issues for which the following PRs 
exist:
- https://github.com/apache/kafka/pull/2146 (already merged to trunk)
- https://github.com/apache/kafka/pull/2238 (open)

These issues are not specific to this RC. So they can be included in a 
future release.

Thanks again for running the release.

Regards.
--Vahid




From:   Jun Rao 
To: "users@kafka.apache.org" , 
"d...@kafka.apache.org" 
Date:   12/19/2016 02:47 PM
Subject:Re: [VOTE] 0.10.1.1 RC1



Hi, Guozhang,

Thanks for preparing the release. Verified quickstart. +1

Jun

On Thu, Dec 15, 2016 at 1:29 PM, Guozhang Wang  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second, and hopefully the last candidate for the release of
> Apache Kafka 0.10.1.1 before the break. This is a bug fix release and it
> includes fixes and improvements from 30 JIRAs. See the release notes for
> more details:
>
> http://home.apache.org/~guozhang/kafka-0.10.1.1-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, 20 December, 8pm PT ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~guozhang/kafka-0.10.1.1-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> NOTE the artifacts include the ones built from Scala 2.12.1 and Java8,
> which are treated a pre-alpha artifacts for the Scala community to try 
and
> test it out:
>
> https://repository.apache.org/content/groups/staging/org/
> apache/kafka/kafka_2.12/0.10.1.1/
>
> We will formally add the scala 2.12 support in future minor releases.
>
>
> * Javadoc:
> http://home.apache.org/~guozhang/kafka-0.10.1.1-rc1/javadoc/
>
> * Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc0 tag:
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> c3638376708ee6c02dfe4e57747acae0126fa6e7
>
>
> Thanks,
> Guozhang
>
> --
> -- Guozhang
>






Re: Deleting a topic without delete.topic.enable=true?

2016-12-09 Thread Vahid S Hashemian
Any chance the consumer process that consumes from that topic is still 
running while you are doing all this?

--Vahid
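For reference, the two broker settings usually involved in this scenario (a sketch of server.properties):

# without this, delete requests are only "marked for deletion" and never executed
delete.topic.enable=true

# with the default (true), any metadata request from a running client can recreate the topic
auto.create.topics.enable=false

With auto-creation left on, a still-running consumer or producer can quietly bring the topic back, which is why the question above matters.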



From:   Tim Visher 
To: users@kafka.apache.org
Date:   12/09/2016 08:26 AM
Subject:Re: Deleting a topic without delete.topic.enable=true?



I did all of that because setting delete.topic.enable=true wasn't
effective. We set that across every broker, restarted them, and then
deleted the topic, and it was still stuck in existence.

On Fri, Dec 9, 2016 at 11:11 AM, Ali Akhtar  wrote:

> You need to also delete / restart zookeeper, its probably storing the
> topics there. (Or yeah, just enable it and then delete the topic)
>
> On Fri, Dec 9, 2016 at 9:09 PM, Rodrigo Sandoval <
> rodrigo.madfe...@gmail.com
> > wrote:
>
> > Why did you do all those things instead of just setting
> > delete.topic.enable=true?
> >
> > On Dec 9, 2016 13:40, "Tim Visher"  wrote:
> >
> > > Hi Everyone,
> > >
> > > I'm really confused at the moment. We created a topic with brokers 
set
> to
> > > delete.topic.enable=false.
> > >
> > > We now need to delete that topic. To do that we shut down all the
> > brokers,
> > > deleted everything under log.dirs and logs.dir on all the kafka
> brokers,
> > > `rmr`ed the entire chroot that kafka was storing things under in
> > zookeeper,
> > > and then brought kafka back up.
> > >
> > > After doing all that, the topic comes back, every time.
> > >
> > > What can we do to delete that topic?
> > >
> > > --
> > >
> > > In Christ,
> > >
> > > Timmy V.
> > >
> > > http://blog.twonegatives.com/
> > > http://five.sentenc.es/ -- Spend less time on mail
> > >
> >
>






Re: [VOTE] 0.10.1.1 RC0

2016-12-08 Thread Vahid S Hashemian
+1

Build and quickstart worked fine on Ubuntu, Mac, Windows 32 and 64 bit.

Thanks for running the release.

Regards,
--Vahid 




From:   Guozhang Wang 
To: "users@kafka.apache.org" , 
"d...@kafka.apache.org" , 
kafka-clie...@googlegroups.com
Date:   12/07/2016 02:47 PM
Subject:[VOTE] 0.10.1.1 RC0



Hello Kafka users, developers and client-developers,

This is the first candidate for the release of Apache Kafka 0.10.1.1. This 
is
a bug fix release and it includes fixes and improvements from 27 JIRAs. 
See
the release notes for more details:

http://home.apache.org/~guozhang/kafka-0.10.1.1-rc0/RELEASE_NOTES.html

*** Please download, test and vote by Monday, 13 December, 8am PT ***

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~guozhang/kafka-0.10.1.1-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
http://home.apache.org/~guozhang/kafka-0.10.1.1-rc0/javadoc/

* Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc0 tag:
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=8b77507083fdd427ce81021228e7e346da0d814c



Thanks,
Guozhang






Re: Q about doc of consumer

2016-12-08 Thread Vahid S Hashemian
Ryan,

The correct consumer command in the latest doc (
http://kafka.apache.org/quickstart#quickstart_consume) is

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic 
test --from-beginning

You used the "--zookeeper" parameter which implies using the old consumer, 
in which case the correct port is 2181, as you figured out.
Using "--bootstrap-server" and port 9092 implies using the new (Java) 
consumer.
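Side by side, the two invocations (a sketch, assuming the quickstart's local setup):

# new consumer - group coordination handled by the brokers
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

# old consumer - group coordination handled by ZooKeeper
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning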

Regards,
--Vahid




From:   "paradixrain" 
To: "users" 
Date:   12/07/2016 09:10 PM
Subject:Q about doc of consumer



Dear kafka,
I think there is an error in the document, is that right?


Here's what I did:
Step 1:
 open a producer
./kafka-console-producer.sh --broker-list localhost:9092 --topic test

Step 2:
open a consumer
./kafka-console-consumer.sh --zookeeper localhost:9092 --topic test 
--from-beginning

Step 3:
I input something in producer

but got errors in the consumer (the error screenshot is not preserved in the archive).

Step 4:
I changed the port in Step 2 from 9092 to 2181 and restarted the consumer;
after that I got what I wanted.


--
YOUR FRIEND,
Ryan
 





Re: Writing a consumer offset checker

2016-12-02 Thread Vahid S Hashemian
There is a JIRA open that should address this: 
https://issues.apache.org/jira/browse/KAFKA-3853
Since it requires a change in the protocol, it's awaiting a KIP vote 
that's happening next week (
https://cwiki.apache.org/pages/viewpage.action?pageId=66849788).
Once the vote is passed the code should go in fairly quickly.

--Vahid
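In the meantime, one workaround for known topic partitions is to instantiate a consumer with the target group.id and read its committed offsets directly, without subscribing (a sketch; needs a 0.10.1+ client for endOffsets(), and the group/topic names are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class LagCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");  // the group whose offsets we want
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            // committed() issues an OffsetFetch for this group; null if nothing stored
            OffsetAndMetadata committed = consumer.committed(tp);
            long end = consumer.endOffsets(Collections.singletonList(tp)).get(tp);
            long lag = (committed == null) ? end : end - committed.offset();
            System.out.println(tp + " committed=" + committed + " end=" + end + " lag=" + lag);
        }
    }
}

This only covers partitions you name explicitly and offsets that are still retained; discovering everything a group has committed is exactly what the pending change is for.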



From:   Jon Yeargers 
To: users@kafka.apache.org
Date:   12/02/2016 04:40 AM
Subject:Writing a consumer offset checker



I want to write my own offset monitor so I can integrate it with our
alerting system. I've tried Java and Java + Scala but have run into the
same problem both times. (details here:
http://stackoverflow.com/questions/40808678/kafka-api-offsetrequest-unable-to-retrieve-results

)

If anyone has a working Java solution for 'new-consumer' groups I'd love 
to
hear about it.






Re: --group flag for console consumer

2016-11-16 Thread Vahid S Hashemian
I'll open a JIRA.

Andrew, let me know if you want to take over the implementation. 
Otherwise, I'd be happy to work on it.

Thanks.
--Vahid
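For reference, the current workaround looks like this (a sketch; the file name is arbitrary):

echo "group.id=my-group" > console.properties
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test \
  --consumer.config console.properties

A direct --group flag would collapse those two steps into one.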




From:   Gwen Shapira 
To: Users 
Date:   11/16/2016 01:23 PM
Subject:Re: --group flag for console consumer



Makes sense to me. Do you want to contribute a pull request?

On Wed, Nov 16, 2016 at 11:33 AM, Andrew Pennebaker
 wrote:
> Could the kafka-console-consumer shell script please get a --group
> flag?
>
> Loading configs from properties files is helpful, but a direct --group 
flag
> would be a simpler user interface for this common use case.
>
>
> --
> Cheers,
> Andrew



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog







Re: [ANNOUNCE] New committer: Jiangjie (Becket) Qin

2016-10-31 Thread Vahid S Hashemian
Congrats Becket!

--Vahid



From:   Jason Gustafson 
To: Kafka Users 
Cc: d...@kafka.apache.org
Date:   10/31/2016 10:56 AM
Subject:Re: [ANNOUNCE] New committer: Jiangjie (Becket) Qin



Great work, Becket!

On Mon, Oct 31, 2016 at 10:54 AM, Onur Karaman <
okara...@linkedin.com.invalid> wrote:

> Congrats Becket!
>
> On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy  
wrote:
>
> > The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as 
a
> > committer and we are pleased to announce that he has accepted!
> >
> > Becket has made significant contributions to Kafka over the last two
> years.
> > He has been deeply involved in a broad range of KIP discussions and 
has
> > contributed several major features to the project. He recently 
completed
> > the implementation of a series of improvements (KIP-31, KIP-32, 
KIP-33)
> to
> > Kafka’s message format that address a number of long-standing issues 
such
> > as avoiding server-side re-compression, better accuracy for time-based
> log
> > retention, log roll and time-based indexing of messages.
> >
> > Congratulations Becket! Thank you for your many contributions. We are
> > excited to have you on board as a committer and look forward to your
> > continued participation!
> >
> > Joel
> >
>






Re: [VOTE] 0.10.1.0 RC0

2016-10-07 Thread Vahid S Hashemian
Jason,

Sure, I'll submit a patch for the trivial changes in the quick start.
Do you recommend adding Windows version of commands along with the current 
commands?

I'll also open a JIRA for the new consumer issue.

--Vahid



From:   Jason Gustafson <ja...@confluent.io>
To: d...@kafka.apache.org
Cc: Kafka Users <users@kafka.apache.org>
Date:   10/07/2016 08:57 AM
Subject:Re: [VOTE] 0.10.1.0 RC0



@Vahid Thanks, do you want to submit a patch for the quickstart fixes? We
won't need another RC if it's just doc changes. The exception is a little
more troubling. Perhaps open a JIRA and we can begin investigation? It's
especially strange that you say it's specific to the new consumer.

@Henry Actually that issue was resolved as "won't fix" since it pointed to
an old version of the group coordinator design. But maybe it's misleading
that we include JIRAs resolved as "won't fix" in the first place. At least
they ought to be listed in a separate section?

-Jason

On Thu, Oct 6, 2016 at 5:27 PM, Henry Cai <h...@pinterest.com.invalid>
wrote:

> Why is this feature in the release note?
>
>
>- [KAFKA-264 <https://issues.apache.org/jira/browse/KAFKA-264>] -
> Change
>the consumer side load balancing and distributed co-ordination to use 
a
>consumer co-ordinator
>
> I thought this was already done in 2015.
>
> On Thu, Oct 6, 2016 at 4:55 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com
> > wrote:
>
> > Jason,
> >
> > Thanks a lot for managing this release.
> >
> > I ran the quick start (Steps 2-8) with this release candidate on 
Ubuntu,
> > Windows, and Mac and they mostly look great.
> > These are some, hopefully, minor items and gaps I noticed with respect 
to
> > the existing quick start documentation (and the updated quick start 
that
> > leverages the new consumer).
> > They may very well be carryovers from previous releases, or perhaps
> > specific to my local environments.
> > Hopefully others can confirm.
> >
> >
> > Windows
> >
> > Since there are separate scripts on Windows platform, it probably 
would
> > help if that is clarified in the quick start section. E.g. "On Windows
> > platform replace `bin/` with `bin\windows\`". Or even have a separate
> > quick start for Windows since a number of commands will be different 
on
> > Windows.
> > There is no `connect-standalone.sh` equivalent for Windows under
> > bin\windows folder (Step 7).
> > Step 8 is also not tailored for Windows terminals. I skipped this 
step.
> > When I try to consume message using the new consumer (Step 5) I get an
> > exception on the broker side. The old consumer works fine.
> >
> > java.io.IOException: Map failed
> >         at sun.nio.ch.FileChannelImpl.map(Unknown Source)
> >         at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:61)
> >         at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:51)
> >         at kafka.log.LogSegment.<init>(LogSegment.scala:67)
> >         at kafka.log.Log.loadSegments(Log.scala:255)
> >         at kafka.log.Log.<init>(Log.scala:108)
> >         at kafka.log.LogManager.createLog(LogManager.scala:362)
> >         at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:94)
> >         at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
> >         at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
> >         at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> >         at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:174)
> >         at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:168)
> >         at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
> >         at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:242)
> >         at kafka.cluster.Partition.makeLeader(Partition.scala:168)
> >         at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:740)
> >         at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:739)
> >         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> >         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> >         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> >         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> >         at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> >         at
> >

Re: [VOTE] 0.10.1.0 RC0

2016-10-06 Thread Vahid S Hashemian
Jason,

Thanks a lot for managing this release.

I ran the quick start (Steps 2-8) with this release candidate on Ubuntu, 
Windows, and Mac and they mostly look great.
These are some, hopefully, minor items and gaps I noticed with respect to 
the existing quick start documentation (and the updated quick start that 
leverages the new consumer).
They may very well be carryovers from previous releases, or perhaps 
specific to my local environments.
Hopefully others can confirm.


Windows

Since there are separate scripts on Windows platform, it probably would 
help if that is clarified in the quick start section. E.g. "On Windows 
platform replace `bin/` with `bin\windows\`". Or even have a separate 
quick start for Windows since a number of commands will be different on 
Windows.
There is no `connect-standalone.sh` equivalent for Windows under 
bin\windows folder (Step 7).
Step 8 is also not tailored for Windows terminals. I skipped this step.
When I try to consume message using the new consumer (Step 5) I get an 
exception on the broker side. The old consumer works fine.

java.io.IOException: Map failed
        at sun.nio.ch.FileChannelImpl.map(Unknown Source)
        at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:61)
        at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:51)
        at kafka.log.LogSegment.<init>(LogSegment.scala:67)
        at kafka.log.Log.loadSegments(Log.scala:255)
        at kafka.log.Log.<init>(Log.scala:108)
        at kafka.log.LogManager.createLog(LogManager.scala:362)
        at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:94)
        at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
        at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
        at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
        at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:174)
        at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:168)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
        at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:242)
        at kafka.cluster.Partition.makeLeader(Partition.scala:168)
        at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:740)
        at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:739)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
        at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:739)
        at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:685)
        at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:148)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:82)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
        at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.OutOfMemoryError: Map failed
        at sun.nio.ch.FileChannelImpl.map0(Native Method)
        ... 29 more

This issue seems to break the broker and I have to clear out the logs so I 
can bring the broker back up again.


Ubuntu / Mac

At Step 8, the output I'm seeing after going through the instructions in 
sequence is this (with unique words)

all     1
lead    1
to      1
hello   1
streams 2
join    1
kafka   3
summit  1

which is different from what I see in the documentation (with repeating words).


--Vahid




From:   Jason Gustafson 
To: users@kafka.apache.org, d...@kafka.apache.org, kafka-clients 

Date:   10/04/2016 04:13 PM
Subject:Re: [VOTE] 0.10.1.0 RC0



One clarification: this is a minor release, not a major one.

-Jason

On Tue, Oct 4, 2016 at 4:01 PM, Jason Gustafson  
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 0.10.1.0. This 
is
> a major release that includes great new features including throttled
> replication, secure quotas, time-based log searching, and queryable 
state
> for Kafka Streams. A full list of the content can be found here:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.1. 
Since
> this is a major release, we will give people more time to try it out and
> give feedback.
>
> Release notes for the 0.10.1.0 release:
> http://home.apache.org/~jgus/kafka-0.10.1.0-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Monday, Oct 10, 9am PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> 

Re: Snazzy new look to our website

2016-10-04 Thread Vahid S Hashemian
+1

Thank you for the much needed new design.
At first glance, it looks great, and more professional.

--Vahid 



From:   Gwen Shapira 
To: d...@kafka.apache.org, Users 
Cc: Derrick Or 
Date:   10/04/2016 04:13 PM
Subject:Snazzy new look to our website



Hi Team Kafka,

I just merged PR 20 to our website - which gives it a new (and IMO
pretty snazzy) look and feel. Thanks to Derrick Or for contributing
the update.

I had to do a hard-refresh (shift-f5 on my mac) to get the new look to
load properly - so if stuff looks off, try it.

Comments and contributions to the site are welcome.

Gwen







Re: Delete Consumer Group Information

2016-10-04 Thread Vahid S Hashemian
Please take a look at this answer to a similar recent question:
http://mail-archives.apache.org/mod_mbox/kafka-users/201608.mbox/%3cCAHwHRrUV=M_2T_XjGwkwgZ=3ba+adogro1_ckvvbytarhag...@mail.gmail.com%3e

The config parameter mentioned in the post for expiration of committed 
offsets is "offsets.retention.minutes". (see also here: 
http://stackoverflow.com/questions/39131465/how-does-an-offset-expire-for-an-apache-kafka-consumer-group
)

Regards,
--Vahid
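For the Kafka-stored (new-consumer) groups, point the command at the brokers instead of ZooKeeper (a sketch; the broker address is a placeholder):

bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 --list

Groups that commit to Kafka never show up under the ZooKeeper /consumers path, which is why that node looks empty.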



From:   "Cuneo, Nicholas" 
To: "users@kafka.apache.org" 
Date:   10/04/2016 12:04 PM
Subject:Delete Consumer Group Information



We are using Kafka 0.9 and are playing around with the consumer group 
feature.  We have a lot of junk and stale consumer group information in 
the consumer groups and want to get rid of it.  What’s the best way to do 
that?
 
Using Kafka Tool, I see that all the consumer groups are stored in ‘Kafka’ 
and not ‘Zookeeper’.  When I go to zookeeper the consumers node is empty.
 
I tried using the ConsumerGroupCommand to delete it but since zookeeper 
shows no consumers this does nothing.  If the consumer groups show only 
being stored in Kafka, are they considered in memory and completely 
restarting all Kafka nodes remove them?  If so – why aren’t they being 
replicated to zookeeper for persistence?


 
Thanks,
 
Nick Cuneo  /  Software Engineer, Cloud  /  Enterprise Software
Tel: +1 949 517 4802  /  Mobile: +1 949 243 4952
3 Ada  /  Irvine, CA 92618  /  USA
ncu...@tycoint.com  /  www.tyco.com







