Re: Delete Kafka Topic After Enable Ranger Plugin

2017-07-14 Thread M. Manna
Hi,

Do you have any logs for this? Also, which platform are you running this on?

KR,

On Fri, 14 Jul 2017 at 11:26 pm, amazon  wrote:

> Hi:
>
>  Recently I have been developing some applications that use Kafka under
> Ranger. But when I enable the Ranger Kafka plugin I cannot delete a Kafka
> topic completely, even though 'delete.topic.enable=true' is set. I found
> that when the Ranger Kafka plugin is enabled, the operation must be
> authorized. How can I delete a Kafka topic completely under Ranger? Thank you.
>
> --
> 格律诗
> Wishing you success in your work
>
>
>


Build failed in Jenkins: kafka-trunk-jdk8 #1810

2017-07-14 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Change log level for group member failure from debug to info

[ismael] MINOR: Correct the ConsumerPerformance print format

--
[...truncated 2.46 MB...]

org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KTableForeachTest > testForeach 
STARTED

org.apache.kafka.streams.kstream.internals.KTableForeachTest > testForeach 
PASSED

org.apache.kafka.streams.kstream.internals.KTableForeachTest > testTypeVariance 
STARTED

org.apache.kafka.streams.kstream.internals.KTableForeachTest > testTypeVariance 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamPrintTest > 
testPrintKeyValueWithName STARTED

org.apache.kafka.streams.kstream.internals.KStreamPrintTest > 
testPrintKeyValueWithName PASSED

org.apache.kafka.streams.kstream.internals.GlobalKTableJoinsTest > 
shouldInnerJoinWithStream STARTED

org.apache.kafka.streams.kstream.internals.GlobalKTableJoinsTest > 
shouldInnerJoinWithStream PASSED

org.apache.kafka.streams.kstream.internals.GlobalKTableJoinsTest > 
shouldLeftJoinWithStream STARTED

org.apache.kafka.streams.kstream.internals.GlobalKTableJoinsTest > 
shouldLeftJoinWithStream PASSED

org.apache.kafka.streams.kstream.internals.AbstractStreamTest > 
testShouldBeExtensible STARTED

org.apache.kafka.streams.kstream.internals.AbstractStreamTest > 
testShouldBeExtensible PASSED

org.apache.kafka.streams.kstream.internals.KStreamMapValuesTest > 
testFlatMapValues STARTED

org.apache.kafka.streams.kstream.internals.KStreamMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
PASSED

org.apache.kafka.streams.kstream.internals.WindowedStreamPartitionerTest > 
testCopartitioning STARTED

org.apache.kafka.streams.kstream.internals.WindowedStreamPartitionerTest > 
testCopartitioning PASSED

org.apache.kafka.streams.kstream.internals.WindowedStreamPartitionerTest > 
testWindowedSerializerNoArgConstructors STARTED

org.apache.kafka.streams.kstream.internals.WindowedStreamPartitionerTest > 
testWindowedSerializerNoArgConstructors PASSED

org.apache.kafka.streams.kstream.internals.WindowedStreamPartitionerTest > 
testWindowedDeserializerNoArgConstructors STARTED

org.apache.kafka.streams.kstream.internals.WindowedStreamPartitionerTest > 
testWindowedDeserializerNoArgConstructors PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableLeftJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamKTableLeftJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.UnlimitedWindowTest > 
shouldAlwaysOverlap STARTED

org.apache.kafka.streams.kstream.internals.UnlimitedWindowTest > 
shouldAlwaysOverlap PASSED

org.apache.kafka.streams.kstream.internals.UnlimitedWindowTest > 
cannotCompareUnlimitedWindowWithDifferentWindowType STARTED

org.apache.kafka.streams.kstream.internals.UnlimitedWindowTest > 
cannotCompareUnlimitedWindowWithDifferentWindowType PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullMapperOnMapValues STARTED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullMapperOnMapValues PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullPredicateOnFilter STARTED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullPredicateOnFilter PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullOtherTableOnJoin STARTED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullOtherTableOnJoin PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullSelectorOnGroupBy STARTED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullSelectorOnGroupBy PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullJoinerOnLeftJoin STARTED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullJoinerOnLeftJoin PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testStateStore 
STARTED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testStateStore 
PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowEmptyFilePathOnWriteAsText STARTED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowEmptyFilePathOnWriteAsText PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullOtherTableOnLeftJoin STARTED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullOtherTableOnLeftJoin PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullSelectorOnToStream 

Kafka Offset Management

2017-07-14 Thread Syed Rizwan Hashmi
Hi ,

I have written a Kafka consumer in which I manage the Kafka offsets in our 
own storage. The reason we do this is that, in case of a failure, we need 
to be able to restart from the same point.

I have disabled the auto-commit flag for offsets.

Writes to the DB, including the offset writes, happen on an interval. That 
means the consumer keeps polling records, but a record might not yet be 
committed to the DB. My question is:

If my application crashes and I resume from a fairly old offset (which is 
still within the retention window), will Kafka start from there?

Or does Kafka still commit on every next poll, even though auto-commit is 
off?

 kafka_2.11-0.10.1.1

-rIZ. 
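For what it's worth, the bookkeeping behind this pattern can be modeled without a live broker. The sketch below is plain Java with the KafkaConsumer wiring elided and a Map standing in for the external DB (the class and method names are illustrative, not from any Kafka API); it assumes the usual convention that the stored offset is the last record processed, so on restart you seek to that offset plus one:

```java
import java.util.HashMap;
import java.util.Map;

// Models offset bookkeeping for a consumer that commits offsets to its own
// storage instead of Kafka (enable.auto.commit=false). The Map stands in
// for the external DB; the KafkaConsumer itself is elided.
public class OffsetStoreSketch {
    // partition -> offset of the last record written to the DB
    private final Map<Integer, Long> store = new HashMap<>();

    // Called inside the same DB transaction that persists the records,
    // so the data and the stored offset can never get out of sync.
    public void recordProcessed(int partition, long offset) {
        store.put(partition, offset);
    }

    // On restart: the position to pass to consumer.seek() is one past the
    // last offset that made it into the DB; with nothing stored, start at 0.
    public long resumePosition(int partition) {
        Long last = store.get(partition);
        return last == null ? 0L : last + 1;
    }

    public static void main(String[] args) {
        OffsetStoreSketch sketch = new OffsetStoreSketch();
        sketch.recordProcessed(0, 41L);                // last record in the DB
        System.out.println(sketch.resumePosition(0));  // prints 42
        System.out.println(sketch.resumePosition(1));  // no state yet: prints 0
    }
}
```

On restart the consumer would `assign()` its partitions and call `seek(partition, resumePosition(p))` before the first `poll()`; with auto-commit disabled, Kafka reads from exactly that position and does not commit anything on the consumer's behalf during subsequent polls.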



Delete Kafka Topic After Enable Ranger Plugin

2017-07-14 Thread amazon

Hi:

Recently I have been developing some applications that use Kafka under 
Ranger. But when I enable the Ranger Kafka plugin I cannot delete a Kafka 
topic completely, even though 'delete.topic.enable=true' is set. I found 
that when the Ranger Kafka plugin is enabled, the operation must be 
authorized. How can I delete a Kafka topic completely under Ranger? Thank you.


--
格律诗
Wishing you success in your work
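[Editor's note: a common checklist for this situation, sketched below. This is illustrative only, not a definitive recipe; host names and policy details are placeholders, and both conditions must hold before deletion completes.]

```shell
# 1. On every broker, topic deletion must be enabled (server.properties),
#    and the brokers restarted after the change:
#      delete.topic.enable=true

# 2. In the Ranger Admin UI, edit the Kafka service policy covering the
#    topic (or topic pattern) and grant the "Delete" permission to the
#    user or group performing the deletion.

# 3. Then delete the topic as that principal (0.10.x-era syntax):
kafka-topics.sh --zookeeper zk-host:2181 --delete --topic my-topic
```

If the acting principal lacks the Delete permission in the Ranger policy, the topic may stay marked for deletion without ever being fully removed, which matches the symptom described above.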




[GitHub] kafka pull request #3417: MINOR: Correct the ConsumerPerformance print forma...

2017-07-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3417


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3523: MINOR: log4j for member failure from debug to info

2017-07-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3523




Jenkins build is back to normal : kafka-trunk-jdk7 #2518

2017-07-14 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3529: MINOR: Use correct connectionId in error message

2017-07-14 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/3529

MINOR: Use correct connectionId in error message



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka MINOR-log-connection-id

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3529.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3529


commit 14b4c7d25161097319926da17748dd09c4145624
Author: Rajini Sivaram 
Date:   2017-07-14T19:35:49Z

MINOR: Use correct connectionId in error message






Re: Comments on JIRAs

2017-07-14 Thread Guozhang Wang
Hello Tom,

All committers should have subscribed to the newly created jira@kafka
mailing list, so they still get all the notifications.

If you want to ping someone who's not a committer and not yet watching the
ticket, then you do need to ping her id in your comment (once she has
replied to your comment she will automatically start watching the ticket);
this has been the case and will not change.


Guozhang


On Thu, Jul 13, 2017 at 6:46 AM, Tom Bentley  wrote:

> The project recently switched from all JIRA events being sent to the dev
> mailing list, to just issue creations. This seems like a good thing
> because the dev mailing list was very noisy before, and if you want to see
> all the JIRA comments etc you can subscribe to the JIRA list. If you don't
> subscribe to the JIRA list you need to take the time to become a watcher on
> each issue that interests you.
>
> However, the flip-side of this is that when you comment on a JIRA you have
> no idea who's going to get notified (apart from the watchers). In
> particular, commenters don't know whether any of the committers will see
> their comment, unless they mention them individually by name. But for an
> issue in which no committer has thus far taken an interest, who is the
> commenter to @mention? There is no @kafka_committers that you can use to
> bring the comment to the attention of the whole group of committers.
>
> There is also the fact that there are an awful lot of historical issues
> which interested people won't be watching because they assumed at the time
> that they'd get notified via the dev list.
>
> I can well imagine that people who aren't working a lot on Kafka won't
> realise that there's a good chance that their comments on JIRAs won't reach
> relevant people.
>
> I'm mentioning this mainly to highlight to people that this is what's
> happening, because it wasn't obvious to me that commenting on a JIRA might
> not reach (all of) the committers/interested parties.
>
> Cheers,
>
> Tom
>



-- 
-- Guozhang


[GitHub] kafka pull request #3528: Kafka-4763 (Used for triggering test only)

2017-07-14 Thread lindong28
GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/3528

Kafka-4763 (Used for triggering test only)

This patch is used only for triggering tests. No review needed.




You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-4763-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3528.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3528


commit a7915c2d7b7b1a96ba78a4d814503356da5840e6
Author: Dong Lin 
Date:   2017-04-03T00:46:34Z

KAFKA-4763; Handle disk failure for JBOD (KIP-112)

commit 946ece0f2067c1a890d00714ac24f298d9dcc8b5
Author: Dong Lin 
Date:   2017-06-12T04:05:12Z

Address comments

commit f1e250b61a21ea74d895ff1c2d289a359c70f117
Author: Dong Lin 
Date:   2017-06-12T09:13:28Z

StopReplicaResponse should specify error if replica-to-be-deleted is not 
found and there is offline directory

commit 8f02b2b94a9acd016c7a88cb196f12abd3890960
Author: Dong Lin 
Date:   2017-06-13T20:15:58Z

Address comments

commit dc1d6f3c89595e6192833df3f780f9053bf9327e
Author: Dong Lin 
Date:   2017-06-17T02:35:18Z

Address comments

commit 7c531a273deab40fe5952377a85baf6aae286760
Author: Dong Lin 
Date:   2017-06-20T23:17:14Z

Address comments

commit 8bc60ab337ac65bf787c1fd47c4a3b5a65599d2c
Author: Dong Lin 
Date:   2017-06-21T20:27:09Z

Close file handler of all files in the offline log directory so that the 
disk can be umounted

commit 1f399a22c9af01e34c8cfa74aac413400f018ff0
Author: Dong Lin 
Date:   2017-06-23T00:41:21Z

Address comments

commit 633d4843fbcd6421d1bd488557f47c7edb8086d3
Author: Dong Lin 
Date:   2017-06-24T02:09:54Z

Catch and handle more IOExceptions

commit 5908e16c37846edc8870f3e0dc2e14634d5d0d24
Author: Dong Lin 
Date:   2017-06-28T00:31:34Z

Address comments

commit fa2edcac7217105706340d8594f409be1341c7b0
Author: Dong Lin 
Date:   2017-06-30T21:55:30Z

Address comments

commit d4a54dffd675fff9b7763ae4b4ffcc908c54f408
Author: Dong Lin 
Date:   2017-07-06T03:50:35Z

Address comments

commit b25a029970a8802d7fe50a69f23fc365ae146b7e
Author: Dong Lin 
Date:   2017-07-06T19:04:03Z

Replace _liveLogDirs.asScala.toSeq with _liveLogDirs.asScala.toArray to 
possibly reduce memory footprint

commit 522eff8ef2ae1f4235338335a7ef8657efa0e877
Author: Dong Lin 
Date:   2017-07-07T02:45:14Z

- Use LogDirFailureChannel to trigger handleLogDirFailure()
- Remove the newly-added metrics on shutdown()

commit 5bbb72ed3ca042aa731d756c03bd2e86a2ed5a42
Author: Dong Lin 
Date:   2017-07-07T22:06:47Z

- Add UNKNOWN_RETRIABLE.
- Convert KafkaStorageException to the error code of 
NotLeaderForPartitionException for existing versions of ProduceResponse and 
FetchResponse

commit e4404382f94754fb49bea4912e684d7e98e2e20c
Author: Dong Lin 
Date:   2017-07-13T18:21:29Z

Always return KafkaStorageException for replicas in offline log directories






[GitHub] kafka pull request #3464: MINOR: Enable a number of xlint scalac warnings

2017-07-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3464




Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-14 Thread Onur Karaman
In other words, I think the default should be the exact behavior we have
today plus the remaining group information from DescribeGroupResponse.

On Fri, Jul 14, 2017 at 11:36 AM, Onur Karaman  wrote:

> I think if we had the opportunity to start from scratch, --describe would
> have been the following:
> --describe --offsets: shows all offsets committed for the group as well as
> lag
> --describe --state (or maybe --members): shows the full
> DescribeGroupResponse output (including things like generation id, state,
> protocol type, etc)
> --describe: shows the merged version of the above two.
>
> On Fri, Jul 14, 2017 at 10:56 AM, Jason Gustafson 
> wrote:
>
>> Hey Vahid,
>>
>> Thanks for the KIP. Looks like a nice improvement. One minor suggestion:
>> Since consumers can be subscribed to a large number of topics, I'm
>> wondering if it might be better to leave out the topic list from the
>> "describe members" option so that the output remains concise? Perhaps we
>> could list only the number of assigned partitions so that users have an
>> easy way to check the overall balance and we can add a separate "describe
>> topics" switch to see the topic breakdown?
>>
>> As for the default --describe, it seems safest to keep its current
>> behavior. In other words, we should list all partitions which have
>> committed offsets for the group even if the partition is not currently
>> assigned. However, I don't think we need to try and fit members without
>> any
>> assigned partitions into that view.
>>
>> Thanks,
>> Jason
>>
>> On Fri, Jul 7, 2017 at 10:49 AM, Vahid S Hashemian <
>> vahidhashem...@us.ibm.com> wrote:
>>
>> > Thanks Jeff for your feedback on the usefulness of the current tool.
>> >
>> > --Vahid
>> >
>> >
>> >
>> >
>> > From:   Jeff Widman 
>> > To: dev@kafka.apache.org
>> > Cc: Kafka User 
>> > Date:   07/06/2017 02:25 PM
>> > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
>> > ConsumerGroupCommand
>> >
>> >
>> >
>> > Thanks for the KIP Vahid. I think it'd be useful to have these filters.
>> >
>> > That said, I also agree with Edo.
>> >
>> > We don't currently rely on the output, but there's been more than one
>> time
>> > when debugging an issue that I notice something amiss when I see all the
>> > data at once but if it wasn't present in the default view I probably
>> would
>> > have missed it as I wouldn't have thought to look at that particular
>> > filter.
>> >
>> > This would also be more consistent with the API of the kafka-topics.sh
>> > where "--describe" gives everything and then can be filtered down.
>> >
>> >
>> >
>> > On Tue, Jul 4, 2017 at 10:42 AM, Edoardo Comar 
>> wrote:
>> >
>> > > Hi Vahid,
>> > > no we are not relying on parsing the current output.
>> > >
>> > > I just thought that keeping the full output isn't necessarily that bad
>> > as
>> > > it shows some sort of history of how a group was used.
>> > >
>> > > ciao
>> > > Edo
>> > > --
>> > >
>> > > Edoardo Comar
>> > >
>> > > IBM Message Hub
>> > >
>> > > IBM UK Ltd, Hursley Park, SO21 2JN
>> > >
>> > > "Vahid S Hashemian"  wrote on 04/07/2017
>> > > 17:11:43:
>> > >
>> > > > From: "Vahid S Hashemian" 
>> > > > To: dev@kafka.apache.org
>> > > > Cc: "Kafka User" 
>> > > > Date: 04/07/2017 17:12
>> > > > Subject: Re: [DISCUSS] KIP-175: Additional '--describe' views for
>> > > > ConsumerGroupCommand
>> > > >
>> > > > Hi Edo,
>> > > >
>> > > > Thanks for reviewing the KIP.
>> > > >
>> > > > Modifying the default behavior of `--describe` was suggested in the
>> > > > related JIRA.
>> > > > We could poll the community to see whether they go for that option,
>> > or,
>> > > as
>> > > > you suggested, introducing a new `--only-xxx` ( can't also think of
>> a
>> > > > proper name right now :) ) option instead.
>> > > >
>> > > > Are you making use of the current `--describe` output and relying on
>> > the
>> > >
>> > > > full data set?
>> > > >
>> > > > Thanks.
>> > > > --Vahid
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > From:   Edoardo Comar 
>> > > > To: dev@kafka.apache.org
>> > > > Cc: "Kafka User" 
>> > > > Date:   07/04/2017 03:17 AM
>> > > > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views
>> > for
>> > >
>> > > > ConsumerGroupCommand
>> > > >
>> > > >
>> > > >
>> > > > Thanks Vahid, I like the KIP.
>> > > >
>> > > > One question - could we keep the current "--describe" behavior
>> > unchanged
>> > >
>> > > > and introduce "--only-xxx" options to filter down the full output as
>> > you
>> > >
>> > > > proposed ?
>> > > >
>> > > > ciao,
>> > > > Edo
>> > > > --
>> > > >
>> > > > Edoardo Comar
>> > > >
>> > > > IBM Message Hub
>> > > >
>> > > > 

Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-14 Thread Onur Karaman
I think if we had the opportunity to start from scratch, --describe would
have been the following:
--describe --offsets: shows all offsets committed for the group as well as
lag
--describe --state (or maybe --members): shows the full
DescribeGroupResponse output (including things like generation id, state,
protocol type, etc)
--describe: shows the merged version of the above two.
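[Editor's note: under this proposal, usage would look roughly as follows. These switches are the ones proposed in this thread, not yet implemented, and the broker and group names are placeholders.]

```shell
# proposed: offsets committed for the group, plus lag, only
kafka-consumer-groups.sh --bootstrap-server broker:9092 --describe --offsets --group my-group

# proposed: member/state view (generation id, state, protocol type, ...)
kafka-consumer-groups.sh --bootstrap-server broker:9092 --describe --members --group my-group

# proposed default: merged view of the two
kafka-consumer-groups.sh --bootstrap-server broker:9092 --describe --group my-group
```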

On Fri, Jul 14, 2017 at 10:56 AM, Jason Gustafson 
wrote:

> Hey Vahid,
>
> Thanks for the KIP. Looks like a nice improvement. One minor suggestion:
> Since consumers can be subscribed to a large number of topics, I'm
> wondering if it might be better to leave out the topic list from the
> "describe members" option so that the output remains concise? Perhaps we
> could list only the number of assigned partitions so that users have an
> easy way to check the overall balance and we can add a separate "describe
> topics" switch to see the topic breakdown?
>
> As for the default --describe, it seems safest to keep its current
> behavior. In other words, we should list all partitions which have
> committed offsets for the group even if the partition is not currently
> assigned. However, I don't think we need to try and fit members without any
> assigned partitions into that view.
>
> Thanks,
> Jason
>
> On Fri, Jul 7, 2017 at 10:49 AM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Thanks Jeff for your feedback on the usefulness of the current tool.
> >
> > --Vahid
> >
> >
> >
> >
> > From:   Jeff Widman 
> > To: dev@kafka.apache.org
> > Cc: Kafka User 
> > Date:   07/06/2017 02:25 PM
> > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> > ConsumerGroupCommand
> >
> >
> >
> > Thanks for the KIP Vahid. I think it'd be useful to have these filters.
> >
> > That said, I also agree with Edo.
> >
> > We don't currently rely on the output, but there's been more than one
> time
> > when debugging an issue that I notice something amiss when I see all the
> > data at once but if it wasn't present in the default view I probably
> would
> > have missed it as I wouldn't have thought to look at that particular
> > filter.
> >
> > This would also be more consistent with the API of the kafka-topics.sh
> > where "--describe" gives everything and then can be filtered down.
> >
> >
> >
> > On Tue, Jul 4, 2017 at 10:42 AM, Edoardo Comar 
> wrote:
> >
> > > Hi Vahid,
> > > no we are not relying on parsing the current output.
> > >
> > > I just thought that keeping the full output isn't necessarily that bad
> > as
> > > it shows some sort of history of how a group was used.
> > >
> > > ciao
> > > Edo
> > > --
> > >
> > > Edoardo Comar
> > >
> > > IBM Message Hub
> > >
> > > IBM UK Ltd, Hursley Park, SO21 2JN
> > >
> > > "Vahid S Hashemian"  wrote on 04/07/2017
> > > 17:11:43:
> > >
> > > > From: "Vahid S Hashemian" 
> > > > To: dev@kafka.apache.org
> > > > Cc: "Kafka User" 
> > > > Date: 04/07/2017 17:12
> > > > Subject: Re: [DISCUSS] KIP-175: Additional '--describe' views for
> > > > ConsumerGroupCommand
> > > >
> > > > Hi Edo,
> > > >
> > > > Thanks for reviewing the KIP.
> > > >
> > > > Modifying the default behavior of `--describe` was suggested in the
> > > > related JIRA.
> > > > We could poll the community to see whether they go for that option,
> > or,
> > > as
> > > > you suggested, introducing a new `--only-xxx` ( can't also think of a
> > > > proper name right now :) ) option instead.
> > > >
> > > > Are you making use of the current `--describe` output and relying on
> > the
> > >
> > > > full data set?
> > > >
> > > > Thanks.
> > > > --Vahid
> > > >
> > > >
> > > >
> > > >
> > > > From:   Edoardo Comar 
> > > > To: dev@kafka.apache.org
> > > > Cc: "Kafka User" 
> > > > Date:   07/04/2017 03:17 AM
> > > > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views
> > for
> > >
> > > > ConsumerGroupCommand
> > > >
> > > >
> > > >
> > > > Thanks Vahid, I like the KIP.
> > > >
> > > > One question - could we keep the current "--describe" behavior
> > unchanged
> > >
> > > > and introduce "--only-xxx" options to filter down the full output as
> > you
> > >
> > > > proposed ?
> > > >
> > > > ciao,
> > > > Edo
> > > > --
> > > >
> > > > Edoardo Comar
> > > >
> > > > IBM Message Hub
> > > >
> > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > >
> > > >
> > > >
> > > > From:   "Vahid S Hashemian" 
> > > > To: dev , "Kafka User"
> > > 
> > > > Date:   04/07/2017 00:06
> > > > Subject:[DISCUSS] KIP-175: Additional '--describe' views for
> > > > ConsumerGroupCommand
> > > >
> > > >
> > > >
> > > > Hi,
> > > >
> > > 

[jira] [Created] (KAFKA-5595) Illegal state in SocketServer; attempt to send with another send in progress

2017-07-14 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5595:
--

 Summary: Illegal state in SocketServer; attempt to send with 
another send in progress
 Key: KAFKA-5595
 URL: https://issues.apache.org/jira/browse/KAFKA-5595
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


I have seen this a couple of times, but I'm not sure what conditions are 
associated with it. 

{code}
java.lang.IllegalStateException: Attempt to begin a send operation with prior 
send operation still in progress.
at 
org.apache.kafka.common.network.KafkaChannel.setSend(KafkaChannel.java:138)
at org.apache.kafka.common.network.Selector.send(Selector.java:248)
at kafka.network.Processor.sendResponse(SocketServer.scala:488)
at kafka.network.Processor.processNewResponses(SocketServer.scala:466)
at kafka.network.Processor.run(SocketServer.scala:431)
at java.lang.Thread.run(Thread.java:748)
{code}

Prior to this event, I see a lot of this message in the logs (always for the 
same connection id):
{code}
Attempting to send response via channel for which there is no open connection, 
connection id 7
{code}
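The invariant being violated is easy to state: a channel may hold at most one in-flight send, and staging a second send before the first completes throws the IllegalStateException above. A minimal sketch of that guard in plain Java follows (the class name is invented for illustration; this is not Kafka's actual KafkaChannel implementation):

```java
// Minimal sketch of the invariant enforced at KafkaChannel.setSend:
// at most one send may be in progress on a channel at a time.
public class SingleSendChannel {
    private Object inFlight;  // the send currently in progress, if any

    public void setSend(Object send) {
        if (inFlight != null)
            throw new IllegalStateException(
                "Attempt to begin a send operation with prior send operation still in progress.");
        inFlight = send;
    }

    // Must run once the send completes, before another send can be staged.
    public void completeSend() { inFlight = null; }

    public static void main(String[] args) {
        SingleSendChannel ch = new SingleSendChannel();
        ch.setSend("response-1");
        boolean rejected = false;
        try {
            ch.setSend("response-2");  // second send before the first completed
        } catch (IllegalStateException e) {
            rejected = true;
        }
        System.out.println("second send rejected: " + rejected);  // prints true
        ch.completeSend();
        ch.setSend("response-2");      // fine once the first send has completed
    }
}
```

The bug report therefore amounts to the Processor attempting to stage a response on a channel whose previous response was never completed or cleared, consistent with the earlier "no open connection" messages for the same connection id.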



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk7 #2517

2017-07-14 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5127; Replace pattern matching with foreach where the case None is

--
[...truncated 2.62 MB...]

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumed PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testUnknownConnectorPaused STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testUnknownConnectorPaused PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPausedRunningTaskOnly STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPausedRunningTaskOnly PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumedRunningTaskOnly STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumedRunningTaskOnly PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownTask STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRequestProcessingOrder STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRequestProcessingOrder PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalance STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalance PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalanceFailedConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalanceFailedConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorNameConflictsWithWorkerGroupId STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorNameConflictsWithWorkerGroupId PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToLeader STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToOwner STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedCustomValidation STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedCustomValidation PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTask STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedBasicValidation STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedBasicValidation PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector STARTED


Re: Comments on JIRAs

2017-07-14 Thread Ismael Juma
Hi Tom,

All active committers should be subscribed to jira@, so I don't think this
changes much from that perspective. If you want to get wider input on a
particular topic, then you should send an email to the mailing list,
however. This was the case even before the creation of jira@, as mailing
list filters are commonly used for JIRA notifications.

Ismael

On Thu, Jul 13, 2017 at 6:46 AM, Tom Bentley  wrote:

> The project recently switched from all JIRA events being sent to the dev
> mailing list, to just issue creations. This seems like a good thing
> because the dev mailing list was very noisy before, and if you want to see
> all the JIRA comments etc you can subscribe to the JIRA list. If you don't
> subscribe to the JIRA list you need to take the time to become a watcher on
> each issue that interests you.
>
> However, the flip-side of this is that when you comment on a JIRA you have
> no idea who's going to get notified (apart from the watchers). In
> particular, commenters don't know whether any of the committers will see
> their comment, unless they mention them individually by name. But for an
> issue in which no committer has thus far taken an interest, who is the
> commenter to @mention? There is no @kafka_committers that you can use to
> bring the comment to the attention of the whole group of committers.
>
> There is also the fact that there are an awful lot of historical issues
> which interested people won't be watching because they assumed at the time
> that they'd get notified via the dev list.
>
> I can well imagine that people who aren't working a lot on Kafka won't
> realise that there's a good chance that their comments on JIRAs won't reach
> relevant people.
>
> I'm mentioning this mainly to highlight to people that this is what's
> happening, because it wasn't obvious to me that commenting on a JIRA might
> not reach (all of) the committers/interested parties.
>
> Cheers,
>
> Tom
>


Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-14 Thread Jason Gustafson
Hey Vahid,

Thanks for the KIP. Looks like a nice improvement. One minor suggestion:
Since consumers can be subscribed to a large number of topics, I'm
wondering if it might be better to leave out the topic list from the
"describe members" option so that the output remains concise? Perhaps we
could list only the number of assigned partitions so that users have an
easy way to check the overall balance and we can add a separate "describe
topics" switch to see the topic breakdown?

As for the default --describe, it seems safest to keep its current
behavior. In other words, we should list all partitions which have
committed offsets for the group even if the partition is not currently
assigned. However, I don't think we need to try and fit members without any
assigned partitions into that view.

Thanks,
Jason

On Fri, Jul 7, 2017 at 10:49 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Thanks Jeff for your feedback on the usefulness of the current tool.
>
> --Vahid
>
>
>
>
> From:   Jeff Widman 
> To: dev@kafka.apache.org
> Cc: Kafka User 
> Date:   07/06/2017 02:25 PM
> Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> ConsumerGroupCommand
>
>
>
> Thanks for the KIP Vahid. I think it'd be useful to have these filters.
>
> That said, I also agree with Edo.
>
> We don't currently rely on the output, but there's been more than one time
> when debugging an issue that I notice something amiss when I see all the
> data at once but if it wasn't present in the default view I probably would
> have missed it as I wouldn't have thought to look at that particular
> filter.
>
> This would also be more consistent with the API of the kafka-topics.sh
> where "--describe" gives everything and then can be filtered down.
>
>
>
> On Tue, Jul 4, 2017 at 10:42 AM, Edoardo Comar  wrote:
>
> > Hi Vahid,
> > no we are not relying on parsing the current output.
> >
> > I just thought that keeping the full output isn't necessarily that bad
> as
> > it shows some sort of history of how a group was used.
> >
> > ciao
> > Edo
> > --
> >
> > Edoardo Comar
> >
> > IBM Message Hub
> >
> > IBM UK Ltd, Hursley Park, SO21 2JN
> >
> > "Vahid S Hashemian"  wrote on 04/07/2017
> > 17:11:43:
> >
> > > From: "Vahid S Hashemian" 
> > > To: dev@kafka.apache.org
> > > Cc: "Kafka User" 
> > > Date: 04/07/2017 17:12
> > > Subject: Re: [DISCUSS] KIP-175: Additional '--describe' views for
> > > ConsumerGroupCommand
> > >
> > > Hi Edo,
> > >
> > > Thanks for reviewing the KIP.
> > >
> > > Modifying the default behavior of `--describe` was suggested in the
> > > related JIRA.
> > > We could poll the community to see whether they go for that option, or,
> > > as you suggested, introduce a new `--only-xxx` (I also can't think of a
> > > proper name right now :) ) option instead.
> > >
> > > Are you making use of the current `--describe` output and relying on
> > > the full data set?
> > >
> > > Thanks.
> > > --Vahid
> > >
> > >
> > >
> > >
> > > From:   Edoardo Comar 
> > > To: dev@kafka.apache.org
> > > Cc: "Kafka User" 
> > > Date:   07/04/2017 03:17 AM
> > > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views
> for
> >
> > > ConsumerGroupCommand
> > >
> > >
> > >
> > > Thanks Vahid, I like the KIP.
> > >
> > > One question - could we keep the current "--describe" behavior unchanged
> > > and introduce "--only-xxx" options to filter down the full output as you
> > > proposed?
> > >
> > > ciao,
> > > Edo
> > > --
> > >
> > > Edoardo Comar
> > >
> > > IBM Message Hub
> > >
> > > IBM UK Ltd, Hursley Park, SO21 2JN
> > >
> > >
> > >
> > > From:   "Vahid S Hashemian" 
> > > To: dev , "Kafka User"
> > 
> > > Date:   04/07/2017 00:06
> > > Subject:[DISCUSS] KIP-175: Additional '--describe' views for
> > > ConsumerGroupCommand
> > >
> > >
> > >
> > > Hi,
> > >
> > > I created KIP-175 to make some improvements to the
> ConsumerGroupCommand
> > > tool.
> > > The KIP can be found here:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A
> > > +Additional+%27--describe%27+views+for+ConsumerGroupCommand
> > >
> > >
> > >
> > > Your review and feedback is welcome!
> > >
> > > Thanks.
> > > --Vahid
> > >
> > >
> > >
> > >
> > >
> > > Unless stated otherwise above:
> > > IBM United Kingdom Limited - Registered in England and Wales with number
> > > 741598.
> > > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
> > >
> > >
> > >
> > >
> >
> > 
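As a point of reference for the discussion above, the full `--describe` output that Edo and Jeff want to preserve comes from an invocation like the following; the bootstrap address and group name are placeholders:

```shell
# Current behavior under discussion: one row per partition with committed
# offsets, showing current offset, log-end offset, lag, and the owning member.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-group
```

Any `--only-xxx` style filters mentioned in the thread are proposals only and do not exist in the tool yet.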

Re: Comments on JIRAs

2017-07-14 Thread Jeff Widman
Thanks for bringing this up. I don't know what the solution is, but I agree
it would be good to have something.

On Jul 13, 2017 6:46 AM, "Tom Bentley"  wrote:

> The project recently switched from all JIRA events being sent to the dev
> mailing list, to just issue creations. This seems like a good thing
> because the dev mailing list was very noisy before, and if you want to see
> all the JIRA comments etc. you can subscribe to the JIRA list. If you don't
> subscribe to the JIRA list you need to take the time to become a watcher on
> each issue that interests you.
>
> However, the flip-side of this is that when you comment on a JIRA you have
> no idea who's going to get notified (apart from the watchers). In
> particular, commenters don't know whether any of the committers will see
> their comment, unless they mention them individually by name. But for an
> issue in which no committer has thus far taken an interest, who is the
> commenter to @mention? There is no @kafka_committers that you can use to
> bring the comment to the attention of the whole group of committers.
>
> There is also the fact that there are an awful lot of historical issues
> which interested people won't be watching because they assumed at the time
> that they'd get notified via the dev list.
>
> I can well imagine that people who aren't working a lot on Kafka won't
> realise that there's a good chance that their comments on JIRAs won't reach
> relevant people.
>
> I'm mentioning this mainly to highlight to people that this is what's
> happening, because it wasn't obvious to me that commenting on a JIRA might
> not reach (all of) the committers/interested parties.
>
> Cheers,
>
> Tom
>


[GitHub] kafka pull request #3527: MINOR: Mention systemTestLibs in docker system tes...

2017-07-14 Thread ijuma
Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/3527


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2919: KAFKA-5127 Replace pattern matching with foreach w...

2017-07-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2919




[GitHub] kafka pull request #3527: MINOR: Mention systemTestLibs in docker system tes...

2017-07-14 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3527

MINOR: Mention systemTestLibs in docker system test instructions

Otherwise NoClassFoundExceptions are thrown for tests that attempt
to start MiniKdc.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka systemTestLibs-in-docker-readme

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3527.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3527


commit 613a91334e90078aa872bbf61192dc52545add6e
Author: Ismael Juma 
Date:   2017-07-14T14:15:17Z

MINOR: Mention systemTestLibs in docker system test instructions






Re: [DISCUSS] KIP-171: Extend Consumer Group Reset Offset for Stream Application

2017-07-14 Thread Jorge Esteban Quilcate Otoya
Hi,

The KIP is updated.
Changes:
1. As discussed, the approach keeps both tools (the streams application reset
tool and the consumer group offset reset tool).
2. Options have been aligned between both tools.
3. The Zookeeper option will be removed from the streams application reset
tool and replaced internally with the Kafka AdminClient.

Looking forward to your feedback.

On Thu, Jun 29, 2017 at 15:04, Matthias J. Sax ()
wrote:

> Damian,
>
> > resets everything and clears up
> >> the state stores.
>
> That's not correct. The reset tool does not touch local stores. For this,
> we have `KafkaStreams#cleanUp` -- otherwise, you would need to run the
> tool on every machine where you run an application instance.
>
> With regard to state, the tool only deletes the underlying changelog
> topics.
>
> Just to clarify: the idea of allowing a reset with a different offset is
> to clear all state but use a different start offset (instead of
> zero). This is for use cases where your topic has a larger retention time
> than the amount of data you want to reprocess. But we always need to
> start with an empty state. (Resetting with consistent state is something
> we might do at some point, but it's much harder and not part of this KIP.)
>
> > @matthias, could we remove the ZK dependency from the streams reset tool
> > now?
>
> I think so. The new AdminClient provides the features we need AFAIK. I
> guess we can piggyback this onto the KIP (we would need a KIP anyway
> because we deprecate `--zookeeper`, which is a public API change).
>
>
> I just want to point out, that even without ZK dependency, I prefer to
> wrap the consumer offset tool and keep two tools.
>
>
> -Matthias
>
>
> On 6/29/17 9:14 AM, Damian Guy wrote:
> > Hi,
> >
> > Thanks for the KIP. What is not clear is how this is going to handle state
> > stores. Right now the streams reset tool resets everything and clears up
> > the state stores. What are we going to do if we reset to a particular
> > offset? If we clear the state then we've lost any previously aggregated
> > values (which may or may not be what is expected). If we don't clear them,
> > then we will end up with incorrect aggregates.
> >
> > @matthias, could we remove the ZK dependency from the streams reset tool
> > now?
> >
> > Thanks,
> > Damian
> >
> > On Thu, 29 Jun 2017 at 15:22 Jorge Esteban Quilcate Otoya <
> > quilcate.jo...@gmail.com> wrote:
> >
> >> You're right, I was not considering the Zookeeper dependency.
> >>
> >> I'm starting to like the idea to wrap `reset-offset` from
> >> `streams-application-reset`.
> >>
> >> I will wait a bit for more feedback and work on a draft with these
> >> changes.
> >>
> >>
> >> On Wed, Jun 28, 2017 at 0:20, Matthias J. Sax (<
> >> matth...@confluent.io>)
> >> wrote:
> >>
> >>> I agree that we should not duplicate functionality.
> >>>
> >>> However, I am worried that a non-streams user using the offset reset
> >>> tool might delete topics unintentionally (even if the changes are
> >>> pretty small). Also, currently the streams reset tool requires Zookeeper
> >>> while the offset reset tool does not -- I don't think we should add this
> >>> dependency to the offset reset tool.
> >>>
> >>> Thus, I think it might be better to keep both tools, but internally
> >>> rewrite the streams reset entry class to reuse as much as possible
> >>> from the offset reset tool. I.e., the streams tool would be a wrapper
> >>> around the offset tool and add some Streams-specific functionality it
> >>> needs.
> >>>
> >>> I also think that keeping separate tools for consumers and streams is
> >>> a good thing. We might want to add new features that don't apply to
> >>> plain consumers -- note, a Streams application is more than just a client.
> >>>
> >>> WDYT?
> >>>
> >>> Would be good to get some feedback from others, too.
> >>>
> >>>
> >>> -Matthias
> >>>
> >>>
> >>> On 6/27/17 9:05 AM, Jorge Esteban Quilcate Otoya wrote:
>  Thanks for the feedback Matthias!
> 
>  The main idea is to use only one tool to reset offsets and not
>  replicate functionality between tools.
>  The Reset Offset (KIP-122) tool not only executes the reset but also
>  supports exporting to and importing from files, etc.
>  If we extend the current tool (kafka-streams-application-reset.sh) we
>  will have to duplicate all this functionality as well.
>  Maybe another option is to move the current implementation into
>  `kafka-consumer-group` and add a new command `--reset-offset-streams`
>  with the current implementation's functionality, and add `--reset-offset`
>  options for input topics. Does this make sense?
> 
> 
>  On Mon, Jun 26, 2017 at 23:32, Matthias J. Sax (<
>  matth...@confluent.io>)
>  wrote:
> 
> > Jorge,
> >
> > thanks a lot for this KIP. Allowing streams applications to be reset
> > with an arbitrary start offset is something we have received multiple
> > requests for already.
> >
> 
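For readers comparing the two tools discussed in this thread, a rough sketch of how each is invoked today; application/group/topic names and addresses are placeholders, and the `--reset-offsets` options come from KIP-122:

```shell
# Consumer group offset reset (KIP-122); runs as a dry run unless --execute
# is given, and supports --export / --from-file for file-based workflows.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --reset-offsets --to-earliest \
  --topic my-input-topic --execute

# Streams application reset; note the Zookeeper dependency that the KIP
# proposes to replace with the AdminClient.
bin/kafka-streams-application-reset.sh --application-id my-app \
  --bootstrap-servers localhost:9092 --zookeeper localhost:2181 \
  --input-topics my-input-topic
```

This illustrates the overlap the thread wants to resolve by making the streams tool a wrapper around the offset reset logic.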

[jira] [Created] (KAFKA-5594) Tag messages with authenticated user who produced it

2017-07-14 Thread Jack Andrews (JIRA)
Jack Andrews created KAFKA-5594:
---

 Summary: Tag messages with authenticated user who produced it
 Key: KAFKA-5594
 URL: https://issues.apache.org/jira/browse/KAFKA-5594
 Project: Kafka
  Issue Type: Wish
Reporter: Jack Andrews


I see that Kafka now supports custom headers through KIP-82. Would it be
possible to hook this up to authorization such that the authenticated user is
added as a header?

Thank you.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5593) Kafka streams not re-balancing when 3 consumer streams are there

2017-07-14 Thread Yogesh BG (JIRA)
Yogesh BG created KAFKA-5593:


 Summary: Kafka streams not re-balancing when 3 consumer streams 
are there
 Key: KAFKA-5593
 URL: https://issues.apache.org/jira/browse/KAFKA-5593
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Yogesh BG
Priority: Critical
 Attachments: log1.txt, log2.txt, log3.txt

I have 3 broker nodes and 3 Kafka Streams instances.

I observe that all 3 consumer streams are part of the group named
rtp-kafkastreams, but I see the data is processed by only one node.

DEBUG n.a.a.k.a.AccessLogMetricEnrichmentProcessor - 
AccessLogMetricEnrichmentProcessor.process

When I check the partition information shared by each of them, I see the first
node has all 8 partitions, but on the other streams the folder is empty.


[root@ip-172-31-11-139 ~]# ls /data/kstreams/rtp-kafkastreams
0_0  0_1  0_2  0_3  0_4  0_5  0_6  0_7

 and  this folder is empty

I tried restarting the other two consumer streams; still they won't become
part of the group and rebalance.

I have attached the logs.

Configurations are inside the log file.






[jira] [Created] (KAFKA-5592) Connection with plain client to SSL-secured broker causes OOM

2017-07-14 Thread JIRA
Marcin Łuczyński created KAFKA-5592:
---

 Summary: Connection with plain client to SSL-secured broker causes 
OOM
 Key: KAFKA-5592
 URL: https://issues.apache.org/jira/browse/KAFKA-5592
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
 Environment: Linux x86_64 x86_64 x86_64 GNU/Linux
Reporter: Marcin Łuczyński
 Attachments: heapdump.20170713.100129.14207.0002.phd, Heap.PNG, 
javacore.20170713.100129.14207.0003.txt, Snap.20170713.100129.14207.0004.trc, 
Stack.PNG

While testing a connection to a Kafka broker secured by SSL from a client app
that does not have a configured truststore, my JVM crashes with an
OutOfMemoryError. I saw it mixed with a StackOverflowError. I attach dump files.

The stack trace to start with is here:

{quote}at java/nio/HeapByteBuffer. (HeapByteBuffer.java:57) 
at java/nio/ByteBuffer.allocate(ByteBuffer.java:331) 
at 
org/apache/kafka/common/network/NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
 
at 
org/apache/kafka/common/network/NetworkReceive.readFrom(NetworkReceive.java:71) 
at org/apache/kafka/common/network/KafkaChannel.receive(KafkaChannel.java:169) 
at org/apache/kafka/common/network/KafkaChannel.read(KafkaChannel.java:150) 
at 
org/apache/kafka/common/network/Selector.pollSelectionKeys(Selector.java:355) 
at org/apache/kafka/common/network/Selector.poll(Selector.java:303) 
at org/apache/kafka/clients/NetworkClient.poll(NetworkClient.java:349) 
at 
org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
 
at 
org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.poll(ConsumerNetworkClient.java:188)
 
at 
org/apache/kafka/clients/consumer/internals/AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:207)
 
at 
org/apache/kafka/clients/consumer/internals/AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
 
at 
org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.poll(ConsumerCoordinator.java:279)
 
at 
org/apache/kafka/clients/consumer/KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
 
at org/apache/kafka/clients/consumer/KafkaConsumer.poll(KafkaConsumer.java:995) 
at 
com/ibm/is/cc/kafka/runtime/KafkaConsumerProcessor.process(KafkaConsumerProcessor.java:237)
 
at com/ibm/is/cc/kafka/runtime/KafkaProcessor.process(KafkaProcessor.java:173) 
at 
com/ibm/is/cc/javastage/connector/CC_JavaAdapter.run(CC_JavaAdapter.java:443){quote}
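
A likely explanation for the OOM (not confirmed in the report) is that the plaintext client interprets the leading bytes of the broker's TLS response as a very large message size and tries to allocate a buffer of that size in `NetworkReceive`. Configuring the client for SSL avoids the mismatch; a minimal sketch of the client properties, with a placeholder path and password:

```
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=changeit
```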





[GitHub] kafka pull request #3526: KAFKA-5587: Remove channel only after staged recei...

2017-07-14 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/3526

KAFKA-5587: Remove channel only after staged receives are delivered

When idle connections are closed, ensure that channels with staged receives 
are retained in `closingChannels` until all staged receives are completed. Also 
ensure that only one staged receive is completed in each poll, even when 
channels are closed.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-5587

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3526.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3526


commit d6848a3b1dcf9cc73caf06b68bdc89570c396c81
Author: Rajini Sivaram 
Date:   2017-07-14T08:36:22Z

KAFKA-5587: Remove channel only after staged receives are delivered






[jira] [Created] (KAFKA-5591) Infinite loop during failed Handshake

2017-07-14 Thread JIRA
Marcin Łuczyński created KAFKA-5591:
---

 Summary: Infinite loop during failed Handshake
 Key: KAFKA-5591
 URL: https://issues.apache.org/jira/browse/KAFKA-5591
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
 Environment: Linux x86_64 x86_64 x86_64 GNU/Linux
Reporter: Marcin Łuczyński
 Attachments: client.truststore.jks

To test a connection from my client app to my SSL-secured Kafka broker, I
followed the preparation procedure described in this section
[http://kafka.apache.org/090/documentation.html#security_ssl]. There is a flaw
there in the description of certificate generation. I was able to find a proper
sequence for generating certs and keys on Confluent.io
[https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/],
but before that, when I used the first trust store I generated, it caused a
handshake exception as shown below:

{quote}[2017-07-14 05:24:48,958] DEBUG Accepted connection from 
/10.20.40.20:55609 on /10.20.40.12:9093 and assigned it to processor 3, 
sendBufferSize [actual|requested]: [102400|102400] recvBufferSize 
[actual|requested]: [102400|102400] (kafka.network.Acceptor)
[2017-07-14 05:24:48,959] DEBUG Processor 3 listening to new connection from 
/10.20.40.20:55609 (kafka.network.Processor)
[2017-07-14 05:24:48,971] DEBUG SSLEngine.closeInBound() raised an exception. 
(org.apache.kafka.common.network.SslTransportLayer)
javax.net.ssl.SSLException: Inbound closed before receiving peer's 
close_notify: possible truncation attack?
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1666)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1634)
at sun.security.ssl.SSLEngineImpl.closeInbound(SSLEngineImpl.java:1561)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeFailure(SslTransportLayer.java:730)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:313)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:74)
at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:374)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at kafka.network.Processor.poll(SocketServer.scala:499)
at kafka.network.Processor.run(SocketServer.scala:435)
at java.lang.Thread.run(Thread.java:748)
[2017-07-14 05:24:48,971] DEBUG Connection with /10.20.40.20 disconnected 
(org.apache.kafka.common.network.Selector)
javax.net.ssl.SSLProtocolException: Handshake message sequence violation, state 
= 1, type = 1
at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1487)
at 
sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535)
at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:813)
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:411)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:74)
at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:374)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at kafka.network.Processor.poll(SocketServer.scala:499)
at kafka.network.Processor.run(SocketServer.scala:435)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.net.ssl.SSLProtocolException: Handshake message sequence 
violation, state = 1, type = 1
at 
sun.security.ssl.ServerHandshaker.processMessage(ServerHandshaker.java:213)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1026)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:966)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:963)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1416)
at 
org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:335)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:416)
... 7 more
{quote}

Which is obviously OK for the broken trust store case. However, my client app
did not receive any exception or error message back. It did not stop either.
Instead it fell into an infinite loop of retries, generating a huge log with
exceptions as shown above. I tried to check if there is any client app property
that controls the number of
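
For reference, the certificate generation sequence described in the Kafka security docs and the Confluent guide the reporter links to looks roughly like the following; aliases, validity, and file names are placeholders:

```shell
# 1. Generate a CA key and certificate
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365

# 2. Create the broker keystore with a key pair
keytool -keystore server.keystore.jks -alias localhost \
  -validity 365 -genkey -keyalg RSA

# 3. Export the broker certificate, sign it with the CA, and import
#    both the CA cert and the signed cert back into the keystore
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed \
  -days 365 -CAcreateserial
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed

# 4. Import the CA cert into the client truststore
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
```

Skipping or reordering the signing steps (the "flaw" mentioned above) yields a truststore that triggers handshake failures like the one in the log.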

[DISCUSS] KIP-174 - Deprecate and remove internal converter configs in WorkerConfig

2017-07-14 Thread UMESH CHAUDHARY
Hi there,
Resending, as the earlier message probably missed your attention.

Regards,
Umesh

-- Forwarded message -
From: UMESH CHAUDHARY 
Date: Mon, 3 Jul 2017 at 11:04
Subject: [DISCUSS] KIP-174 - Deprecate and remove internal converter
configs in WorkerConfig
To: dev@kafka.apache.org 


Hello All,
I have recently added a KIP to deprecate and remove the internal converter
configs in the WorkerConfig.java class, because these have ultimately caused
a lot more trouble and confusion than they are worth.
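
For context, the configs in question are the internal converter settings in the Connect worker properties; a typical fragment that the KIP would make unnecessary looks like this (the values shown are the common JsonConverter setup, given as an example rather than a recommendation):

```
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
```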

Please find the KIP here and the related JIRA here.

Appreciate your review and comments.

Regards,
Umesh