[jira] [Created] (KAFKA-7311) Sender should reset next batch expiry time between poll loops

2018-08-18 Thread Rohan Desai (JIRA)
Rohan Desai created KAFKA-7311:
--

 Summary: Sender should reset next batch expiry time between poll 
loops
 Key: KAFKA-7311
 URL: https://issues.apache.org/jira/browse/KAFKA-7311
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.1.0
Reporter: Rohan Desai
 Fix For: 2.1.0


Sender does not reset the next batch expiry time between poll loops. As a result, 
once the expiry time of the first batch has passed, the Sender starts spinning on 
epoll with a timeout of 0, which consumes a lot of CPU. We observed this while 
running KSQL, when investigating why throughput would drop after about 10 
minutes (the default delivery timeout).
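
A minimal sketch of the fix idea, with hypothetical names (this is not the actual
Sender code): the tracked expiry deadline has to be re-derived at the start of every
poll loop, otherwise a deadline that has already passed keeps the computed poll
timeout pinned at 0.

import java.util.List;

// Hypothetical sketch (not the actual Sender code): the per-loop expiry tracker
// must be recomputed from the current batches on every iteration. If the minimum
// from a previous iteration is kept after it has already passed, the computed
// poll timeout stays at 0 and the loop spins on epoll.
class PollTimeoutSketch {
    private long nextBatchExpiryMs = Long.MAX_VALUE;

    long pollTimeoutMs(List<Long> batchDeadlinesMs, long nowMs, long defaultTimeoutMs) {
        nextBatchExpiryMs = Long.MAX_VALUE;                    // the missing reset
        for (long deadlineMs : batchDeadlinesMs)
            nextBatchExpiryMs = Math.min(nextBatchExpiryMs, deadlineMs);
        // an already-expired batch means "poll now once", not "spin forever"
        return Math.max(0, Math.min(defaultTimeoutMs, nextBatchExpiryMs - nowMs));
    }
}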



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Adding to contributor list

2018-08-18 Thread Matthias J. Sax
Done.

On 8/18/18 11:16 AM, Oleksandr Bondarenko wrote:
> Hi,
> 
> Could you please add me to the contributor list? My jira username:
> bond_as
> 
> Thanks
> 





[DISCUSS] KIP-359: Verify leader epoch in produce requests

2018-08-18 Thread Jason Gustafson
Hi All,

I've added a short KIP to add leader epoch validation to the produce API.
This is a follow-up to KIP-320, which added similar protection to the
consumer APIs. Take a look and let me know what you think.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-359%3A+Verify+leader+epoch+in+produce+requests

Thanks,
Jason
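
For readers who want the shape of the change, a rough sketch of the validation
being proposed (hypothetical names; the actual request fields and error codes are
spelled out in the KIP and in KIP-320): the producer sends the leader epoch it used
when routing the batch, and the broker refuses the write if that epoch does not
match its own.

// Rough sketch with hypothetical names, not the final protocol code.
enum EpochCheck { OK, FENCED_LEADER_EPOCH, UNKNOWN_LEADER_EPOCH }

final class ProduceEpochValidator {
    static EpochCheck validate(int requestLeaderEpoch, int brokerLeaderEpoch) {
        if (requestLeaderEpoch < brokerLeaderEpoch)
            return EpochCheck.FENCED_LEADER_EPOCH;   // producer metadata is stale
        if (requestLeaderEpoch > brokerLeaderEpoch)
            return EpochCheck.UNKNOWN_LEADER_EPOCH;  // this broker is behind
        return EpochCheck.OK;                        // epochs agree, accept the write
    }
}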


Re: [VOTE] KIP-349 Priorities for Source Topics

2018-08-18 Thread nick


I have only seen one vote on KIP-349 so far; just checking whether anyone else 
would like to vote before closing this out.  
--
  Nick


> On Aug 13, 2018, at 9:19 PM, n...@afshartous.com wrote:
> 
> 
> Hi All,
> 
> Calling for a vote on KIP-349
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-349%3A+Priorities+for+Source+Topics
> 
> --
>  Nick
> 
> 
> 


Re: [DISCUSS] KIP-110: Add Codec for ZStandard Compression (Updated)

2018-08-18 Thread Ismael Juma
Sounds reasonable to me.

Ismael
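
As a concrete, hypothetical illustration of the checks summarized in the quoted
thread below: the broker can cheaply refuse an old fetcher only when the topic is
explicitly configured with compression.type=zstd; with compression.type=producer
the on-disk codec depends on whichever producer wrote each batch, so the client
has to handle the failure itself. Names and the version number here are
assumptions for illustration, not final KIP values.

// Hypothetical sketch, not broker code.
final class ZstdFetchCheck {
    static final short FIRST_FETCH_VERSION_WITH_ZSTD = 10; // assumption for illustration

    // true when the broker can fail the fetch up front, without reading any batches
    static boolean rejectWithoutInspectingRecords(short fetchVersion, String topicCompressionType) {
        boolean clientTooOld = fetchVersion < FIRST_FETCH_VERSION_WITH_ZSTD;
        return clientTooOld && "zstd".equals(topicCompressionType);
    }
}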

On Sat, 18 Aug 2018, 12:20 Jason Gustafson,  wrote:

> Hey Ismael,
>
> Your summary looks good to me. I think it might also be a good idea to add
> a new UNSUPPORTED_COMPRESSION_TYPE error code to go along with the version
> bumps. We won't be able to use it for old api versions since the clients
> will not understand it, but we can use it going forward so that we're not
> stuck in a similar situation with a new message format and a new codec to
> support. Another option is to use UNSUPPORTED_FOR_MESSAGE_FORMAT, but it is
> not as explicit.
>
> -Jason
>
> On Fri, Aug 17, 2018 at 5:19 PM, Ismael Juma  wrote:
>
> > Hi Dongjin and Jason,
> >
> > I would agree. My summary:
> >
> > 1. Support zstd with message format 2 only.
> > 2. Bump produce and fetch request versions.
> > 3. Provide broker errors whenever possible based on the request version and
> > rely on clients for the cases where the broker can't validate efficiently
> > (example message format 2 consumer that supports the latest fetch version
> > but doesn't support zstd).
> >
> > If there's general agreement on this, I suggest we update the KIP to state
> > the proposal and to move the rejected options to its own section. And then
> > start a vote!
> >
> > Ismael
> >
> > On Fri, Aug 17, 2018 at 4:00 PM Jason Gustafson 
> > wrote:
> >
> > > Hi Dongjin,
> > >
> > > Yes, that's a good summary. For clients which support v2, the client can
> > > parse the message format and hopefully raise a useful error message
> > > indicating the unsupported compression type. For older clients, our options
> > > are probably (1) to down-convert to the old format using no compression
> > > type, or (2) to return an error code. I'm leaning toward the latter as the
> > > simpler solution, but the challenge is finding a good error code. Two
> > > possibilities might be INVALID_REQUEST or CORRUPT_MESSAGE. The downside is
> > > that old clients probably won't get a helpful message. However, at least
> > > the behavior will be consistent in the sense that all clients will fail if
> > > they do not support zstandard.
> > >
> > > What do you think?
> > >
> > > Thanks,
> > > Jason
> > >
> > > On Fri, Aug 17, 2018 at 8:08 AM, Dongjin Lee 
> wrote:
> > >
> > > > Thanks Jason, I reviewed the down-converting logic following your
> > > > explanation.[^1] You mean the following routines, right?
> > > >
> > > > - https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaApis.scala#L534
> > > > - https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/LazyDownConversionRecords.java#L165
> > > > - https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java#L40
> > > >
> > > > It seems like your stance is as follows:
> > > >
> > > > 1. In principle, Kafka does not change the compression codec when
> > > > down-converting, since that requires inspecting the fetched data, which
> > > > is expensive.
> > > > 2. However, there are some cases where the fetched data is inspected
> > > > anyway. In those cases, we can convert the compression from ZStandard to
> > > > one of the classical codecs[^2].
> > > >
> > > > And from what I understand, the situations where a client without
> > > > ZStandard support receives ZStandard-compressed records fall into two
> > > > cases:
> > > >
> > > > a. The 'compression.type' configuration of the given topic is 'producer'
> > > > and the producer compressed the records with ZStandard (that is, using
> > > > ZStandard implicitly).
> > > > b. The 'compression.type' configuration of the given topic is 'zstd';
> > > > that is, using ZStandard explicitly.
> > > >
> > > > As you stated, we don't have to handle case b specially. So it seems like
> > > > we can narrow the focus of the problem by joining case 1 and case b as
> > > > follows:
> > > >
> > > > > Given a topic with 'producer' as its 'compression.type' configuration,
> > > > > ZStandard-compressed records, and an old client without ZStandard
> > > > > support, is there any case where we need to inspect the records and can
> > > > > change the compression type? If so, can we provide compression-type
> > > > > conversion?
> > > >
> > > > Do I understand correctly?
> > > >
> > > > Best,
> > > > Dongjin
> > > >
> > > > [^1]: I'm sorry, I found that I had slightly misunderstood how API
> > > > versioning works, after reviewing the down-convert logic & the protocol
> > > > documentation.
> > > > [^2]: None, Gzip, Snappy, Lz4.
> > > >
> > > > On Tue, Aug 14, 2018 at 2:16 AM Jason Gustafson 
> > > > wrote:
> > > >
> > > > > >
> > > > > > But in my opinion, since the client will fail with the API version, so we
> > > > > > don't need to down-convert the messages anyway. Isn't it? So, I 

Build failed in Jenkins: kafka-trunk-jdk8 #2907

2018-08-18 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7308: Fix rat and checkstyle config for Java 11 support (#5529)

--
[...truncated 871.67 KB...]
kafka.admin.ResetConsumerGroupOffsetTest > 
testResetWithUnrecognizedNewConsumerOption PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDurationToEarliest 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDurationToEarliest 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnOneTopic 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnOneTopic 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsNotExistingGroup 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsNotExistingGroup 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToZonedDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToZonedDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 

Re: [DISCUSS] KIP-110: Add Codec for ZStandard Compression (Updated)

2018-08-18 Thread Jason Gustafson
Hey Ismael,

Your summary looks good to me. I think it might also be a good idea to add
a new UNSUPPORTED_COMPRESSION_TYPE error code to go along with the version
bumps. We won't be able to use it for old api versions since the clients
will not understand it, but we can use it going forward so that we're not
stuck in a similar situation with a new message format and a new codec to
support. Another option is to use UNSUPPORTED_FOR_MESSAGE_FORMAT, but it is
not as explicit.

-Jason
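
A hypothetical sketch of the produce-path check being suggested here (assumed
names and version number, not broker code): zstd-compressed data is rejected when
the request version predates zstd support, and the dedicated error code is used
going forward.

// Hypothetical sketch, not broker code.
final class ZstdProduceCheck {
    static final short FIRST_PRODUCE_VERSION_WITH_ZSTD = 7; // assumption for illustration

    static String errorFor(short produceVersion, String compressionType) {
        if (!"zstd".equals(compressionType) || produceVersion >= FIRST_PRODUCE_VERSION_WITH_ZSTD)
            return "NONE";
        // New-enough clients can be sent the dedicated code; clients on very old
        // versions would not understand it, so an existing code (for example
        // UNSUPPORTED_FOR_MESSAGE_FORMAT or INVALID_REQUEST) may be returned instead.
        return "UNSUPPORTED_COMPRESSION_TYPE";
    }
}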

On Fri, Aug 17, 2018 at 5:19 PM, Ismael Juma  wrote:

> Hi Dongjin and Jason,
>
> I would agree. My summary:
>
> 1. Support zstd with message format 2 only.
> 2. Bump produce and fetch request versions.
> 3. Provide broker errors whenever possible based on the request version and
> rely on clients for the cases where the broker can't validate efficiently
> (example message format 2 consumer that supports the latest fetch version
> but doesn't support zstd).
>
> If there's general agreement on this, I suggest we update the KIP to state
> the proposal and to move the rejected options to its own section. And then
> start a vote!
>
> Ismael
>
> On Fri, Aug 17, 2018 at 4:00 PM Jason Gustafson 
> wrote:
>
> > Hi Dongjin,
> >
> > Yes, that's a good summary. For clients which support v2, the client can
> > parse the message format and hopefully raise a useful error message
> > indicating the unsupported compression type. For older clients, our options
> > are probably (1) to down-convert to the old format using no compression
> > type, or (2) to return an error code. I'm leaning toward the latter as the
> > simpler solution, but the challenge is finding a good error code. Two
> > possibilities might be INVALID_REQUEST or CORRUPT_MESSAGE. The downside is
> > that old clients probably won't get a helpful message. However, at least
> > the behavior will be consistent in the sense that all clients will fail if
> > they do not support zstandard.
> >
> > What do you think?
> >
> > Thanks,
> > Jason
> >
> > On Fri, Aug 17, 2018 at 8:08 AM, Dongjin Lee  wrote:
> >
> > > Thanks Jason, I reviewed the down-converting logic following your
> > > explanation.[^1] You mean the following routines, right?
> > >
> > > - https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaApis.scala#L534
> > > - https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/LazyDownConversionRecords.java#L165
> > > - https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java#L40
> > >
> > > It seems like your stance is as follows:
> > >
> > > 1. In principle, Kafka does not change the compression codec when
> > > down-converting, since that requires inspecting the fetched data, which
> > > is expensive.
> > > 2. However, there are some cases where the fetched data is inspected
> > > anyway. In those cases, we can convert the compression from ZStandard to
> > > one of the classical codecs[^2].
> > >
> > > And from what I understand, the situations where a client without
> > > ZStandard support receives ZStandard-compressed records fall into two
> > > cases:
> > >
> > > a. The 'compression.type' configuration of the given topic is 'producer'
> > > and the producer compressed the records with ZStandard (that is, using
> > > ZStandard implicitly).
> > > b. The 'compression.type' configuration of the given topic is 'zstd';
> > > that is, using ZStandard explicitly.
> > >
> > > As you stated, we don't have to handle case b specially. So it seems like
> > > we can narrow the focus of the problem by joining case 1 and case b as
> > > follows:
> > >
> > > > Given a topic with 'producer' as its 'compression.type' configuration,
> > > > ZStandard-compressed records, and an old client without ZStandard
> > > > support, is there any case where we need to inspect the records and can
> > > > change the compression type? If so, can we provide compression-type
> > > > conversion?
> > >
> > > Do I understand correctly?
> > >
> > > Best,
> > > Dongjin
> > >
> > > [^1]: I'm sorry, I found that I had slightly misunderstood how API
> > > versioning works, after reviewing the down-convert logic & the protocol
> > > documentation.
> > > [^2]: None, Gzip, Snappy, Lz4.
> > >
> > > On Tue, Aug 14, 2018 at 2:16 AM Jason Gustafson 
> > > wrote:
> > >
> > > > >
> > > > > But in my opinion, since the client will fail with the API version, so we
> > > > > don't need to down-convert the messages anyway. Isn't it? So, I think we
> > > > > don't care about this case. (I'm sorry, I am not familiar with down-convert
> > > > > logic.)
> > > >
> > > >
> > > > Currently the broker down-converts automatically when it receives an old
> > > > version of the fetch request (a version which is known to predate the
> > > > message format in use). Typically when down-converting 

Re: subscribe

2018-08-18 Thread Ted Yu
Please see instructions here:

http://kafka.apache.org/contact

On Sat, Aug 18, 2018 at 8:18 AM Aegeaner 
wrote:



Adding to contributor list

2018-08-18 Thread Oleksandr Bondarenko
Hi,

Could you please add me to the contributor list? My jira username:
bond_as

Thanks


Re: [DISCUSS] KIP-346 - Limit blast radius of log compaction failure

2018-08-18 Thread Ismael Juma
On Thu, Aug 2, 2018 at 9:55 AM Colin McCabe  wrote:

> On Wed, Aug 1, 2018, at 11:35, James Cheng wrote:
> > I’m a little confused about something. Is this KIP focused on log
> > cleaner exceptions in general, or focused on log cleaner exceptions due
> > to disk failures?
> >
> > Will max.uncleanable.partitions apply to all exceptions (including log
> > cleaner logic errors) or will it apply to only disk I/o exceptions?
>
> There is no difference between "log cleaner exceptions in general" and
> "log cleaner exceptions due to disk failures."
>
> For example, if the data on disk is corrupted we might read a 4-byte size
> as -1 instead of 100.  Then we would get a BufferUnderflowException later
> on.  This is a subclass of RuntimeException rather than IOException, of
> course, but it does result from a disk problem.  Or we might get exceptions
> while validating checksums, which may or may not be IOE (I haven't looked).
>
> Of course, the log cleaner itself may have a bug, which results in it
> throwing an exception even if the disk does not have a problem.  We clearly
> want to fix these bugs.  But there's no way for the program itself to know
> that it has a bug and act differently.  If an exception occurs, we must
> assume there is a disk problem.


Hey Colin,

This is inconsistent with how we deal with disk failures outside of the log
cleaner. We should follow the same approach across the board so that we can
reason about how the system works. If we think the approach of using
specific exception types for disk related errors doesn't work, we should do
a KIP for that. For this KIP, I suggest we use the same approach we use to
mark disks as offline.

Ismael
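
To make Colin's corrupted-size example above concrete, here is a minimal,
hypothetical illustration (not Kafka code) of how a damaged size field surfaces
as a RuntimeException rather than an IOException, even though the root cause is
a disk or data problem:

import java.nio.ByteBuffer;

// A size prefix that promises more bytes than the buffer actually holds makes
// the subsequent read fail with java.nio.BufferUnderflowException.
public class CorruptSizePrefixDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.putInt(100);         // size prefix claims a 100-byte record follows
        buf.putInt(0xCAFEBABE);  // ...but only 4 more bytes are present
        buf.flip();

        int size = buf.getInt();        // reads the corrupted size, 100
        byte[] record = new byte[size];
        buf.get(record);                // throws java.nio.BufferUnderflowException
    }
}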


[jira] [Resolved] (KAFKA-7308) Fix rat and checkstyle plugins configuration for Java 11 support

2018-08-18 Thread Ismael Juma (JIRA)


 [ https://issues.apache.org/jira/browse/KAFKA-7308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ismael Juma resolved KAFKA-7308.

   Resolution: Fixed
Fix Version/s: 2.1.0

> Fix rat and checkstyle plugins configuration for Java 11 support
> 
>
> Key: KAFKA-7308
> URL: https://issues.apache.org/jira/browse/KAFKA-7308
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
> Fix For: 2.1.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


subscribe

2018-08-18 Thread Aegeaner