Build failed in Jenkins: kafka-trunk-jdk11 #772

2019-08-23 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8753; Expose controller topic deletion metrics (KIP-503) (#7156)

[github] MINOR: Move the resetting from revoked to the thread loop (#7243)

--
[...truncated 2.60 MB...]
org.apache.kafka.connect.json.JsonConverterTest > timestampToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
timestampToConnectWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
timestampToConnectWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectOptional STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectOptional PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnectWithDefaultValue 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnectWithDefaultValue 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullValueToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullValueToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringHeaderToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringHeaderToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > longToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > 

Re: [ DISCUSS ] KIP-512:Adding headers to RecordMetaData

2019-08-23 Thread Gwen Shapira
I am afraid I don't understand the proposal. The RecordMetadata is
information returned from the broker regarding the record. The
producer already has the record (including the headers), so why would
the broker need to send the headers back as part of the metadata?
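(For illustration, a minimal sketch of that point; the topic name, header key, and serializer settings below are made up for the example. The send callback closes over the original ProducerRecord, so the headers are already in hand right next to the RecordMetadata the broker returns, without any protocol change.)

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;

public class HeaderCallbackExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", "key", "value");
            record.headers().add("trace-id", "abc-123".getBytes());

            // The callback closes over the original record, so its headers are
            // already available alongside the RecordMetadata returned by the broker.
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                    return;
                }
                for (Header header : record.headers()) {
                    System.out.printf("partition=%d offset=%d header %s=%s%n",
                        metadata.partition(), metadata.offset(),
                        header.key(), new String(header.value()));
                }
            });
        }
    }
}
```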

On Fri, Aug 23, 2019 at 4:22 PM Renuka M  wrote:
>
> Hi All,
>
> I am starting this thread to discuss
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-512%3AAdding+headers+to+RecordMetaData
> .
>
> Please provide the feedback.
>
> Thanks
> Renuka M



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Re: [DISCUSS] KIP-495: Dynamically Adjust Log Levels in Connect

2019-08-23 Thread Arjun Satish
Jason,

Thanks for your comments!

I understand the usability issues with JMX that you mention. But it was
chosen for the following reasons:

1. Cross-cutting functionality across different components (Kafka brokers,
Connect workers, and even Streams jobs). If we go down the REST route, then
brokers don't get this feature.
2. Adding this to the existing REST servers creates an all-or-nothing
problem. It's hard to disable an endpoint if the functionality is not
desired or needs to be protected from users (Connect doesn't have ACLs,
which makes this even harder to manage). Adding endpoints to different
listeners makes configuring Connect harder (and it's already a hard problem
as it is). A lot of the existing functionality there is driven around the
connector data model (connectors, plugins, their statuses and so on).
Adding an '/admin' endpoint may be a way to go, but that has tremendous
implications (we would effectively be adding an administration endpoint
similar to the admin one in brokers), and it probably requires a KIP of its
own, with discussions catered around just that.
3. JMX is currently AK's default way to report metrics and perform other
operations. Changing log levels is typically a system-level/admin
operation, so it fits better there than in the REST APIs (which are more
user facing).

Having said that, I'm happy to consider alternatives. JMX seemed to be the
lowest hanging fruit. But if there are better ideas, we can consider them.
At the end of the day, when we download and run Kafka, there should be one
way to achieve the same functionality among its components.
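To make that concrete, here is a rough sketch of the kind of log4j 1.x-backed
MBean this would amount to. The interface name, operations, and ObjectName
below are illustrative assumptions, not the exact API proposed in the KIP.

```java
import java.lang.management.ManagementFactory;
import javax.management.ObjectName;
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;

// Illustrative MBean interface; the actual names and operations are defined by the KIP.
public interface LogLevelAdminMBean {
    String getLogLevel(String loggerName);
    boolean setLogLevel(String loggerName, String level);
}

class LogLevelAdmin implements LogLevelAdminMBean {
    @Override
    public String getLogLevel(String loggerName) {
        return LogManager.getLogger(loggerName).getEffectiveLevel().toString();
    }

    @Override
    public boolean setLogLevel(String loggerName, String level) {
        // log4j 1.x Level.toLevel() falls back to DEBUG for unrecognized strings.
        LogManager.getLogger(loggerName).setLevel(Level.toLevel(level));
        return true;
    }

    // Register the MBean with the platform MBeanServer so tools like jconsole can reach it.
    static void register() throws Exception {
        ManagementFactory.getPlatformMBeanServer().registerMBean(
            new LogLevelAdmin(),
            new ObjectName("kafka.connect:type=LogLevelAdmin"));
    }
}
```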

Finally, I hope I didn't convey that we are reverting or changing what was
done in KIP-412. The proposed changes would be an addition to it: brokers
would get multiple ways of changing log levels, and there would still be a
consistent way of achieving the cross-component goals of the KIP.

Best,


On Fri, Aug 23, 2019 at 4:12 PM Jason Gustafson  wrote:

> Let me elaborate a little bit. We made the decision early on for Connect to
> use HTTP instead of Kafka's custom RPC protocol. In exchange for losing
> some hygienic consistency with Kafka, we took easier integration with
> management tools. The scope of the connect REST APIs is really managing the
> connect cluster. It has endpoints for creating connectors, changing
> configs, seeing their health, etc. Doesn't debugging fit in with that? I am
> not sure I see why we would treat this as an exceptional case.
>
> I personally see JMX as a necessary evil in Kafka because most metrics
> agents have native support. But it is particularly painful when it comes to
> use as an RPC mechanism. This was the central motivation behind KIP-412,
> which makes it very odd to see a new proposal which suggests standardizing
> on JMX for log level adjustment. I actually see this as something we'd want
> to eventually turn off in Kafka. Now that we have a proper API with support
> in the AdminClient, we can deprecate and eventually remove the JMX
> endpoint.
>
> Thanks,
> Jason
>
> On Fri, Aug 23, 2019 at 10:49 AM Jason Gustafson 
> wrote:
>
> > Hi Arjun,
> >
> > Thanks for the KIP. Do we really need a JMX-based API? Is there literally
> > anyone in the world that wants to use JMX if they don't have to? I
> thought
> > one of the major motivations of KIP-412 was how much of a pain JMX is.
> >
> > Thanks,
> > Jason
> >
> > On Mon, Aug 19, 2019 at 5:28 PM Arjun Satish 
> > wrote:
> >
> >> Thanks, Konstantine.
> >>
> >> Updated the KIP with the restrictions around log4j and added references
> to
> >> similar KIPs.
> >>
> >> Best,
> >>
> >> On Mon, Aug 19, 2019 at 3:20 PM Konstantine Karantasis <
> >> konstant...@confluent.io> wrote:
> >>
> >> > Thanks Arjun, the example is useful!
> >> >
> >> > My point when I mentioned the restrictions around log4j is that this
> >> > information is significant and IMO needs to be included in the KIP.
> >> >
> >> > Speaking of its relevance to KIP-412, I think a reference would be
> nice
> >> > too.
> >> >
> >> > Konstantine
> >> >
> >> >
> >> >
> >> > On Thu, Aug 15, 2019 at 4:00 PM Arjun Satish 
> >> > wrote:
> >> >
> >> > > Hey Konstantine,
> >> > >
> >> > > Thanks for the feedback.
> >> > >
> >> > > re: the use of log4j, yes, the proposed changes will only work if
> >> log4j
> >> > is
> >> > > available in runtime. We will not add the mBean if log4j is not
> >> available
> >> > > in classpath. If we change from log4j 1 to 2, that would involve
> >> another
> >> > > KIP, and it would need to update the changes proposed in this KIP
> and
> >> > > others (KIP-412, for instance).
> >> > >
> >> > > re: use of Object types, I've changed it from Boolean to the
> primitive
> >> > type
> >> > > for setLogLevel. We are changing the signature of the old method
> this
> >> > way,
> >> > > but since it never returned null, this should be fine.
> >> > >
> >> > > re: example usage, I've added some screenshot on how this feature
> >> would
> >> > be
> >> > > used with jconsole.
> >> > >
> >> 

[DISCUSS] KIP-512:Adding headers to RecordMetaData

2019-08-23 Thread Renuka M
Hi All,

I am starting this thread to discuss
https://cwiki.apache.org/confluence/display/KAFKA/KIP-512%3AAdding+headers+to+RecordMetaData
.

Please provide the feedback.

Thanks
Renuka M

>


[ DISCUSS ] KIP-512:Adding headers to RecordMetaData

2019-08-23 Thread Renuka M
Hi All,

I am starting this thread to discuss
https://cwiki.apache.org/confluence/display/KAFKA/KIP-512%3AAdding+headers+to+RecordMetaData
.

Please provide the feedback.

Thanks
Renuka M


[jira] [Created] (KAFKA-8831) Joining a new instance sometimes does not cause rebalancing

2019-08-23 Thread Chris Pettitt (Jira)
Chris Pettitt created KAFKA-8831:


 Summary: Joining a new instance sometimes does not cause 
rebalancing
 Key: KAFKA-8831
 URL: https://issues.apache.org/jira/browse/KAFKA-8831
 Project: Kafka
  Issue Type: Bug
Reporter: Chris Pettitt
Assignee: Chris Pettitt
 Attachments: StandbyTaskTest.java

See log below. The second instance joins a bit after the first instance 
(~250ms). The group coordinator says it is going to rebalance but nothing 
happens. The first instance gets all partitions (2).

 

```

[2019-08-23 17:12:05,756] INFO [Consumer clientId=consumer-1, groupId=consumerApp] Subscribed to topic(s): output-topic (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-08-23 17:12:05,756] INFO [Consumer clientId=consumer-1, groupId=consumerApp] Subscribed to topic(s): output-topic (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-08-23 17:12:05,757] INFO [Consumer clientId=streamsApp-581aeca8-9139-4575-8b05-a72a128e2645-StreamThread-1-consumer, groupId=streamsApp] Discovered group coordinator localhost:57756 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-08-23 17:12:05,760] INFO [Consumer clientId=consumer-1, groupId=consumerApp] Discovered group coordinator localhost:57756 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-08-23 17:12:05,760] INFO [Consumer clientId=streamsApp-581aeca8-9139-4575-8b05-a72a128e2645-StreamThread-1-consumer, groupId=streamsApp] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-08-23 17:12:05,761] INFO [Consumer clientId=consumer-1, groupId=consumerApp] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-08-23 17:12:05,781] INFO [Consumer clientId=streamsApp-581aeca8-9139-4575-8b05-a72a128e2645-StreamThread-1-consumer, groupId=streamsApp] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-08-23 17:12:05,781] INFO [Consumer clientId=consumer-1, groupId=consumerApp] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-08-23 17:12:05,788] INFO [GroupCoordinator 0]: Preparing to rebalance group streamsApp in state PreparingRebalance with old generation 0 (__consumer_offsets-6) (reason: Adding new member streamsApp-581aeca8-9139-4575-8b05-a72a128e2645-StreamThread-1-consumer-35501476-e96b-48b9-90d2-e98716e7be56 with group instanceid None) (kafka.coordinator.group.GroupCoordinator)
[2019-08-23 17:12:05,788] INFO [GroupCoordinator 0]: Preparing to rebalance group consumerApp in state PreparingRebalance with old generation 0 (__consumer_offsets-5) (reason: Adding new member consumer-1-afda303e-7b9b-43e3-97a2-e689c10b7fad with group instanceid None) (kafka.coordinator.group.GroupCoordinator)
[2019-08-23 17:12:05,793] INFO [GroupCoordinator 0]: Stabilized group streamsApp generation 1 (__consumer_offsets-6) (kafka.coordinator.group.GroupCoordinator)
[2019-08-23 17:12:05,795] INFO [GroupCoordinator 0]: Stabilized group consumerApp generation 1 (__consumer_offsets-5) (kafka.coordinator.group.GroupCoordinator)
[2019-08-23 17:12:05,798] WARN Unable to assign 1 of 1 standby tasks for task [0_0]. There is not enough available capacity. You should increase the number of threads and/or application instances to maintain the requested number of standby replicas. (org.apache.kafka.streams.processor.internals.assignment.StickyTaskAssignor)
[2019-08-23 17:12:05,798] WARN Unable to assign 1 of 1 standby tasks for task [0_1]. There is not enough available capacity. You should increase the number of threads and/or application instances to maintain the requested number of standby replicas. (org.apache.kafka.streams.processor.internals.assignment.StickyTaskAssignor)
[2019-08-23 17:12:05,798] INFO stream-thread [streamsApp-581aeca8-9139-4575-8b05-a72a128e2645-StreamThread-1-consumer] Assigned tasks to clients as {581aeca8-9139-4575-8b05-a72a128e2645=[activeTasks: ([0_0, 0_1]) standbyTasks: ([]) assignedTasks: ([0_0, 0_1]) prevActiveTasks: ([]) prevStandbyTasks: ([]) prevAssignedTasks: ([]) capacity: 1]}. (org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor)
[2019-08-23 17:12:05,799] INFO [GroupCoordinator 0]: Assignment received from leader for group consumerApp for generation 1 (kafka.coordinator.group.GroupCoordinator)
[2019-08-23 17:12:05,800] INFO [GroupCoordinator 0]: Assignment received from leader for group streamsApp for generation 1 (kafka.coordinator.group.GroupCoordinator)
[2019-08-23 17:12:05,815] INFO [Consumer clientId=streamsApp-1e4ee8e2-e5fb-4571-a25a-0084b3c0a4ca-StreamThread-1-consumer, groupId=streamsApp] Discovered group coordinator localhost:57756 (id: 2147483647 rack: null) 

Re: [DISCUSS] KIP-495: Dynamically Adjust Log Levels in Connect

2019-08-23 Thread Jason Gustafson
Let me elaborate a little bit. We made the decision early on for Connect to
use HTTP instead of Kafka's custom RPC protocol. In exchange for losing
some hygienic consistency with Kafka, we took easier integration with
management tools. The scope of the connect REST APIs is really managing the
connect cluster. It has endpoints for creating connectors, changing
configs, seeing their health, etc. Doesn't debugging fit in with that? I am
not sure I see why we would treat this as an exceptional case.

I personally see JMX as a necessary evil in Kafka because most metrics
agents have native support. But it is particularly painful when it comes to
use as an RPC mechanism. This was the central motivation behind KIP-412,
which makes it very odd to see a new proposal which suggests standardizing
on JMX for log level adjustment. I actually see this as something we'd want
to eventually turn off in Kafka. Now that we have a proper API with support
in the AdminClient, we can deprecate and eventually remove the JMX endpoint.
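For comparison, here is a rough sketch of the KIP-412-style path through the
AdminClient. The broker id "0" and the kafka.server.ReplicaManager logger name
are just example values; BROKER_LOGGER config resources are what KIP-412 added.

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class BrokerLoggerLevelExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // BROKER_LOGGER resources are addressed by broker id; "0" is an example.
            ConfigResource loggers = new ConfigResource(ConfigResource.Type.BROKER_LOGGER, "0");
            // Set a single logger to DEBUG via an incremental config update.
            AlterConfigOp setDebug = new AlterConfigOp(
                new ConfigEntry("kafka.server.ReplicaManager", "DEBUG"),
                AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> updates =
                Collections.singletonMap(loggers, Collections.singletonList(setDebug));
            admin.incrementalAlterConfigs(updates).all().get();
        }
    }
}
```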

Thanks,
Jason

On Fri, Aug 23, 2019 at 10:49 AM Jason Gustafson  wrote:

> Hi Arjun,
>
> Thanks for the KIP. Do we really need a JMX-based API? Is there literally
> anyone in the world that wants to use JMX if they don't have to? I thought
> one of the major motivations of KIP-412 was how much of a pain JMX is.
>
> Thanks,
> Jason
>
> On Mon, Aug 19, 2019 at 5:28 PM Arjun Satish 
> wrote:
>
>> Thanks, Konstantine.
>>
>> Updated the KIP with the restrictions around log4j and added references to
>> similar KIPs.
>>
>> Best,
>>
>> On Mon, Aug 19, 2019 at 3:20 PM Konstantine Karantasis <
>> konstant...@confluent.io> wrote:
>>
>> > Thanks Arjun, the example is useful!
>> >
> >> > My point when I mentioned the restrictions around log4j is that this
> >> > information is significant and IMO needs to be included in the KIP.
>> >
>> > Speaking of its relevance to KIP-412, I think a reference would be nice
>> > too.
>> >
>> > Konstantine
>> >
>> >
>> >
>> > On Thu, Aug 15, 2019 at 4:00 PM Arjun Satish 
>> > wrote:
>> >
>> > > Hey Konstantine,
>> > >
>> > > Thanks for the feedback.
>> > >
>> > > re: the use of log4j, yes, the proposed changes will only work if
>> log4j
>> > is
>> > > available in runtime. We will not add the mBean if log4j is not
>> available
>> > > in classpath. If we change from log4j 1 to 2, that would involve
>> another
>> > > KIP, and it would need to update the changes proposed in this KIP and
>> > > others (KIP-412, for instance).
>> > >
>> > > re: use of Object types, I've changed it from Boolean to the primitive
>> > type
>> > > for setLogLevel. We are changing the signature of the old method this
>> > way,
>> > > but since it never returned null, this should be fine.
>> > >
>> > > re: example usage, I've added some screenshot on how this feature
>> would
>> > be
>> > > used with jconsole.
>> > >
>> > > Hope this works!
>> > >
>> > > Thanks very much,
>> > > Arjun
>> > >
>> > > On Wed, Aug 14, 2019 at 6:42 AM Konstantine Karantasis <
>> > > konstant...@confluent.io> wrote:
>> > >
>> > > > And one thing I forgot is also related to Chris's comment above. I
>> > agree
>> > > > that an example on how a user is expected to set the log level (for
>> > > > instance to DEBUG) would be nice, even if it's showing only one out
>> of
>> > > the
>> > > > many possible ways to achieve that.
>> > > >
>> > > > - Konstantine
>> > > >
>> > > > On Wed, Aug 14, 2019 at 4:38 PM Konstantine Karantasis <
>> > > > konstant...@confluent.io> wrote:
>> > > >
>> > > > >
>> > > > > Thanks Arjun for tackling the need to support this very useful
>> > feature.
>> > > > >
>> > > > > One thing I noticed while reading the KIP is that I would have
>> loved
>> > to
>> > > > > see more info regarding how this proposal depends on the
>> underlying
>> > > > logging
>> > > > > APIs and implementations. For instance, my understanding is that
>> > slf4j
>> > > > can
>> > > > > not be leveraged and that the logging framework needs to be
>> pegged to
>> > > > log4j
>> > > > > explicitly (or another logging implementation). Correct me if I'm
>> > > wrong,
>> > > > > but if such a dependency is introduced I believe it's worth
>> > mentioning.
>> > > > >
>> > > > > Additionally, if the above is correct, there are differences in
>> > log4j's
>> > > > > APIs between version 1 and version 2. In version 2,
>> Logger#setLevel
>> > > > method
>> > > > > has been removed from the Logger interface and in order to set the
>> > log
> >> > > level programmatically the Configurator class needs to be used,
>> which as
>> > > > > stated in the FAQ (
>> > > > >
>> > https://logging.apache.org/log4j/2.x/faq.html#reconfig_level_from_code
>> > > )
>> > > > > it's not part of log4j2's public API. Is this a concern? I believe
>> > that
>> > > > > even if these are implementation specific details for the wrappers
>> > > > > introduced by this KIP (which to a certain extent they are), a
>> > mention
>> > > in
>> > > > > the KIP text and a few references 

[jira] [Resolved] (KAFKA-8753) Add JMX for number of topics marked for deletion

2019-08-23 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8753.

Fix Version/s: 2.4.0
   Resolution: Fixed

> Add JMX for number of topics marked for deletion
> 
>
> Key: KAFKA-8753
> URL: https://issues.apache.org/jira/browse/KAFKA-8753
> Project: Kafka
>  Issue Type: Improvement
>  Components: metrics
>Reporter: David Arthur
>Assignee: David Arthur
>Priority: Minor
> Fix For: 2.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: [DISCUSS] KIP-500: Replace ZooKeeper with a Self-Managed Metadata Quorum

2019-08-23 Thread Ryanne Dolan
Thanks Colin, sgtm. Please make this clear in the KIP -- otherwise it is
hard to nail down what we are voting for.

Ryanne


On Fri, Aug 23, 2019, 12:58 PM Colin McCabe  wrote:

> On Fri, Aug 23, 2019, at 06:24, Ryanne Dolan wrote:
> > Colin, can you outline what specifically would be in scope for this KIP
> vs
> > deferred to the follow-on KIPs you've mentioned? Maybe a Future Work
> > section? Is the idea to get to the bridge release with this KIP, and then
> > go from there?
> >
> > Ryanne
> >
>
> Hi Ryanne,
>
> The goal for KIP-500 is to set out an overall vision for how we will
> remove ZooKeeper and transition to managing metadata via a controller
> quorum.
>
> We will create follow-on KIPs that will lay out the specific details of
> each step.
>
> * A KIP for allowing kafka-configs.sh to change topic configurations
> without using ZooKeeper.  (It can already change broker configurations
> without ZK)
>
> * A KIP for adding APIs to replace direct ZK access by the brokers.
>
> * A KIP to describe Raft replication in Kafka, including the overall
> protocol, details of each RPC, etc.
>
> * A KIP describing the controller changes, how metadata is stored, etc.
>
> There may be other KIPs that we need (for example, if we find another tool
> that still has a hard ZK dependency), but that's the general idea.  KIP-500
> is about the overall design-- the follow on KIPs are about the specific
> details.
>
> best,
> Colin
>
>
> >
> > On Thu, Aug 22, 2019, 11:58 AM Colin McCabe  wrote:
> >
> > > On Wed, Aug 21, 2019, at 19:48, Ron Dagostino wrote:
> > > > Thanks, Colin.  The changes you made to the KIP related to the bridge
> > > > release help make it clearer.  I still have some confusion about the
> > > phrase
> > > > "The rolling upgrade from the bridge release will take several
> steps."
> > > > This made me think you are talking about moving from the bridge
> release
> > > to
> > > > some other, newer, release that comes after the bridge release.  But
> I
> > > > think what you are getting at is that the bridge release can be run
> with
> > > or
> > > > without Zookeeper -- when first upgrading to it Zookeeper remains in
> use,
> > > > but then there is a transition that can be made to engage the warp
> > > drive...
> > > > I mean the Controller Quorum.  So maybe the phrase should be "The
> rolling
> > > > upgrade through the bridge release -- starting with Zookeeper being
> in
> > > use
> > > > and ending with Zookeeper having been replaced by the Controller
> Quorum
> > > --
> > > > will take several steps."
> > >
> > > Hi Ron,
> > >
> > > To clarify, the bridge release will require ZooKeeper.  It will also
> not
> > > support the controller quorum.  It's a bridge in the sense that you
> must
> > > upgrade to a bridge release prior to upgrading to a ZK-less release.  I
> > > added some more descriptive text to the bridge release paragraph--
> > > hopefully this makes it clearer.
> > >
> > > best,
> > > Colin
> > >
> > > >
> > > > Do I understand it correctly, and might some change in phrasing or
> > > > additional clarification help others avoid the same confusion I had?
> > > >
> > > > Ron
> > > >
> > > > On Wed, Aug 21, 2019 at 2:31 PM Colin McCabe 
> wrote:
> > > >
> > > > > On Wed, Aug 21, 2019, at 04:22, Ron Dagostino wrote:
> > > > > > Hi Colin.  I like the concept of a "bridge release" for migrating
> > > off of
> > > > > > Zookeeper, but I worry that it may become a bottleneck if people
> > > hesitate
> > > > > > to replace Zookeeper -- they would be unable to adopt newer
> versions
> > > of
> > > > > > Kafka until taking (what feels to them like) a giant leap.  As an
> > > > > example,
> > > > > > assuming version 4.0.x of Kafka is the supported bridge release,
> I
> > > would
> > > > > > not be surprised if uptake of the 4.x release and the time-based
> > > releases
> > > > > > that follow it end up being much slower due to the perceived
> barrier.
> > > > > >
> > > > > > Any perceived barrier could be lowered if the 4.0.x release could
> > > > > > optionally continue to use Zookeeper -- then the cutover would
> be two
> > > > > > incremental steps (move to 4.0.x, then replace Zookeeper while
> > > staying on
> > > > > > 4.0.x) as opposed to a single big-bang (upgrade to 4.0.x and
> replace
> > > > > > Zookeeper in one fell swoop).
> > > > >
> > > > > Hi Ron,
> > > > >
> > > > > Just to clarify, the "bridge release" will continue to use
> ZooKeeper.
> > > It
> > > > > will not support running without ZooKeeper.  It is the releases
> that
> > > follow
> > > > > the bridge release that will remove ZooKeeper.
> > > > >
> > > > > Also, it's a bit unclear whether the bridge release would be 3.x or
> > > 4.x,
> > > > > or something to follow.  We do know that the bridge release can't
> be a
> > > 2.x
> > > > > release, since it requires at least one incompatible change,
> removing
> > > > > --zookeeper options from all the shell scripts.  (Since we're doing
> > > > > semantic versioning, any time we 

Build failed in Jenkins: kafka-trunk-jdk8 #3867

2019-08-23 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAFKA-8698: Fix typo in ListOffsetResponse v0 protocol field name

--
[...truncated 5.91 MB...]

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > testStringHeaderToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > testStringHeaderToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structSchemaIdentical STARTED

org.apache.kafka.connect.json.JsonConverterTest > structSchemaIdentical PASSED

org.apache.kafka.connect.json.JsonConverterTest > 

[jira] [Created] (KAFKA-8830) KIP-512: Adding headers to RecordMetaData

2019-08-23 Thread Renuka Metukuru (Jira)
Renuka Metukuru created KAFKA-8830:
--

 Summary: KIP-512: Adding headers to RecordMetaData
 Key: KAFKA-8830
 URL: https://issues.apache.org/jira/browse/KAFKA-8830
 Project: Kafka
  Issue Type: New Feature
  Components: clients
Reporter: Renuka Metukuru


[https://cwiki.apache.org/confluence/display/KAFKA/KIP-512%3A+Adding+headers+to+RecordMetaData]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: [VOTE] KIP-352: Distinguish URPs caused by reassignment

2019-08-23 Thread Jason Gustafson
Thanks Stan, good catch. I have updated the KIP. I will plan to close the
vote Monday if there are no objections.

-Jason

On Fri, Aug 23, 2019 at 11:14 AM Colin McCabe  wrote:

> On Fri, Aug 23, 2019, at 11:08, Stanislav Kozlovski wrote:
> > Thanks for the KIP, this is very helpful
> >
> > I had an offline discussion with Jason and we discussed the semantics of
> > the underMinIsr/atMinIsr metrics. The current proposal would expose a gap
> > where we could report URP but no MinIsr.
> > A brief example:
> > original replica set = [0,1,2]
> > new replica set = [3,4,5]
> > isr = [0, 3, 4]
> > config.minIsr = 3
> >
> > As the KIP said
> > > In other words, we will subtract the AddingReplica from both the total
> > replicas and the current ISR when determining URP satisfaction.
> > We would report URP=2 (1 and 2 are not in ISR) but not underMinIsr, as we
> > have an ISR of 3.
> > Technically, any produce requests with acks=all would succeed, so it
> would
> > be false to report `underMinIsr`. We thought it'd be good to keep both
> > metrics consistent, so a new proposal is to use the following algorithm:
> > ```
> > isUrp == size(original replicas) - size(isr) > 0
> > ```
>
> Hi Stan,
>
> That's a good point.  Basically we should regard the size of the original
> replica set as the desired replication factor, and calculate the URPs based
> on that.  +1 for this.  (I assume Jason will update the KIP...)
>
> best,
> Colin
>
>
> >
> > Taking that into account, +1 from me! (non-binding)
> >
> > On Fri, Aug 23, 2019 at 7:00 PM Colin McCabe  wrote:
> >
> > > +1 (binding).
> > >
> > > cheers,
> > > Colin
> > >
> > > On Tue, Aug 20, 2019, at 10:55, Jason Gustafson wrote:
> > > > Hi All,
> > > >
> > > > I'd like to start a vote on KIP-352, which is a follow-up to KIP-455
> to
> > > > fix
> > > > a long-known shortcoming of URP reporting and to improve reassignment
> > > > monitoring:
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-352%3A+Distinguish+URPs+caused+by+reassignment
> > > > .
> > > >
> > > > Note that I have added one new metric following the discussion. It
> seemed
> > > > useful to have a lag metric for reassigning partitions.
> > > >
> > > > Thanks,
> > > > Jason
> > > >
> > >
> >
> >
> > --
> > Best,
> > Stanislav
> >
>


Re: [VOTE] KIP-352: Distinguish URPs caused by reassignment

2019-08-23 Thread Colin McCabe
On Fri, Aug 23, 2019, at 11:08, Stanislav Kozlovski wrote:
> Thanks for the KIP, this is very helpful
> 
> I had an offline discussion with Jason and we discussed the semantics of
> the underMinIsr/atMinIsr metrics. The current proposal would expose a gap
> where we could report URP but no MinIsr.
> A brief example:
> original replica set = [0,1,2]
> new replica set = [3,4,5]
> isr = [0, 3, 4]
> config.minIsr = 3
> 
> As the KIP said
> > In other words, we will subtract the AddingReplica from both the total
> replicas and the current ISR when determining URP satisfaction.
> We would report URP=2 (1 and 2 are not in ISR) but not underMinIsr, as we
> have an ISR of 3.
> Technically, any produce requests with acks=all would succeed, so it would
> be false to report `underMinIsr`. We thought it'd be good to keep both
> metrics consistent, so a new proposal is to use the following algorithm:
> ```
> isUrp == size(original replicas) - size(isr) > 0
> ```

Hi Stan,

That's a good point.  Basically we should regard the size of the original 
replica set as the desired replication factor, and calculate the URPs based on 
that.  +1 for this.  (I assume Jason will update the KIP...)

best,
Colin


> 
> Taking that into account, +1 from me! (non-binding)
> 
> On Fri, Aug 23, 2019 at 7:00 PM Colin McCabe  wrote:
> 
> > +1 (binding).
> >
> > cheers,
> > Colin
> >
> > On Tue, Aug 20, 2019, at 10:55, Jason Gustafson wrote:
> > > Hi All,
> > >
> > > I'd like to start a vote on KIP-352, which is a follow-up to KIP-455 to
> > > fix
> > > a long-known shortcoming of URP reporting and to improve reassignment
> > > monitoring:
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-352%3A+Distinguish+URPs+caused+by+reassignment
> > > .
> > >
> > > Note that I have added one new metric following the discussion. It seemed
> > > useful to have a lag metric for reassigning partitions.
> > >
> > > Thanks,
> > > Jason
> > >
> >
> 
> 
> -- 
> Best,
> Stanislav
>


Re: [VOTE] KIP-352: Distinguish URPs caused by reassignment

2019-08-23 Thread Stanislav Kozlovski
Thanks for the KIP, this is very helpful

I had an offline discussion with Jason and we discussed the semantics of
the underMinIsr/atMinIsr metrics. The current proposal would expose a gap
where we could report URP but no MinIsr.
A brief example:
original replica set = [0,1,2]
new replica set = [3,4,5]
isr = [0, 3, 4]
config.minIsr = 3

As the KIP said
> In other words, we will subtract the AddingReplica from both the total
replicas and the current ISR when determining URP satisfaction.
We would report URP=2 (1 and 2 are not in ISR) but not underMinIsr, as we
have an ISR of 3.
Technically, any produce requests with acks=all would succeed, so it would
be false to report `underMinIsr`. We thought it'd be good to keep both
metrics consistent, so a new proposal is to use the following algorithm:
```
isUrp == size(original replicas) - size(isr) > 0
```
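A tiny sketch (plain Java, names made up) of the proposed check applied to the
example above: the original replica set size acts as the desired replication
factor, so with an ISR of size 3 the partition is not counted as under-replicated.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class UrpExample {
    public static void main(String[] args) {
        // Example from the thread: reassignment from [0,1,2] to [3,4,5], current ISR [0,3,4].
        Set<Integer> originalReplicas = new HashSet<>(Arrays.asList(0, 1, 2));
        Set<Integer> isr = new HashSet<>(Arrays.asList(0, 3, 4));

        // Proposed rule: compare the original replica set size (the desired
        // replication factor) against the current ISR size.
        boolean isUrp = originalReplicas.size() - isr.size() > 0;

        System.out.println("under-replicated = " + isUrp); // false: ISR size (3) >= desired RF (3)
    }
}
```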

Taking that into account, +1 from me! (non-binding)

On Fri, Aug 23, 2019 at 7:00 PM Colin McCabe  wrote:

> +1 (binding).
>
> cheers,
> Colin
>
> On Tue, Aug 20, 2019, at 10:55, Jason Gustafson wrote:
> > Hi All,
> >
> > I'd like to start a vote on KIP-352, which is a follow-up to KIP-455 to
> > fix
> > a long-known shortcoming of URP reporting and to improve reassignment
> > monitoring:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-352%3A+Distinguish+URPs+caused+by+reassignment
> > .
> >
> > Note that I have added one new metric following the discussion. It seemed
> > useful to have a lag metric for reassigning partitions.
> >
> > Thanks,
> > Jason
> >
>


-- 
Best,
Stanislav


Re: [VOTE] KIP-352: Distinguish URPs caused by reassignment

2019-08-23 Thread Colin McCabe
+1 (binding).

cheers,
Colin

On Tue, Aug 20, 2019, at 10:55, Jason Gustafson wrote:
> Hi All,
> 
> I'd like to start a vote on KIP-352, which is a follow-up to KIP-455 to 
> fix
> a long-known shortcoming of URP reporting and to improve reassignment
> monitoring:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-352%3A+Distinguish+URPs+caused+by+reassignment
> .
> 
> Note that I have added one new metric following the discussion. It seemed
> useful to have a lag metric for reassigning partitions.
> 
> Thanks,
> Jason
>


Re: [DISCUSS] KIP-500: Replace ZooKeeper with a Self-Managed Metadata Quorum

2019-08-23 Thread Colin McCabe
On Fri, Aug 23, 2019, at 06:24, Ryanne Dolan wrote:
> Colin, can you outline what specifically would be in scope for this KIP vs
> deferred to the follow-on KIPs you've mentioned? Maybe a Future Work
> section? Is the idea to get to the bridge release with this KIP, and then
> go from there?
> 
> Ryanne
>

Hi Ryanne,

The goal for KIP-500 is to set out an overall vision for how we will remove 
ZooKeeper and transition to managing metadata via a controller quorum.

We will create follow-on KIPs that will lay out the specific details of each 
step.  

* A KIP for allowing kafka-configs.sh to change topic configurations without 
using ZooKeeper.  (It can already change broker configurations without ZK)

* A KIP for adding APIs to replace direct ZK access by the brokers.

* A KIP to describe Raft replication in Kafka, including the overall protocol, 
details of each RPC, etc.

* A KIP describing the controller changes, how metadata is stored, etc.

There may be other KIPs that we need (for example, if we find another tool that 
still has a hard ZK dependency), but that's the general idea.  KIP-500 is about 
the overall design-- the follow on KIPs are about the specific details.

best,
Colin


> 
> On Thu, Aug 22, 2019, 11:58 AM Colin McCabe  wrote:
> 
> > On Wed, Aug 21, 2019, at 19:48, Ron Dagostino wrote:
> > > Thanks, Colin.  The changes you made to the KIP related to the bridge
> > > release help make it clearer.  I still have some confusion about the
> > phrase
> > > "The rolling upgrade from the bridge release will take several steps."
> > > This made me think you are talking about moving from the bridge release
> > to
> > > some other, newer, release that comes after the bridge release.  But I
> > > think what you are getting at is that the bridge release can be run with
> > or
> > > without Zookeeper -- when first upgrading to it Zookeeper remains in use,
> > > but then there is a transition that can be made to engage the warp
> > drive...
> > > I mean the Controller Quorum.  So maybe the phrase should be "The rolling
> > > upgrade through the bridge release -- starting with Zookeeper being in
> > use
> > > and ending with Zookeeper having been replaced by the Controller Quorum
> > --
> > > will take several steps."
> >
> > Hi Ron,
> >
> > To clarify, the bridge release will require ZooKeeper.  It will also not
> > support the controller quorum.  It's a bridge in the sense that you must
> > upgrade to a bridge release prior to upgrading to a ZK-less release.  I
> > added some more descriptive text to the bridge release paragraph--
> > hopefully this makes it clearer.
> >
> > best,
> > Colin
> >
> > >
> > > Do I understand it correctly, and might some change in phrasing or
> > > additional clarification help others avoid the same confusion I had?
> > >
> > > Ron
> > >
> > > On Wed, Aug 21, 2019 at 2:31 PM Colin McCabe  wrote:
> > >
> > > > On Wed, Aug 21, 2019, at 04:22, Ron Dagostino wrote:
> > > > > Hi Colin.  I like the concept of a "bridge release" for migrating
> > off of
> > > > > Zookeeper, but I worry that it may become a bottleneck if people
> > hesitate
> > > > > to replace Zookeeper -- they would be unable to adopt newer versions
> > of
> > > > > Kafka until taking (what feels to them like) a giant leap.  As an
> > > > example,
> > > > > assuming version 4.0.x of Kafka is the supported bridge release, I
> > would
> > > > > not be surprised if uptake of the 4.x release and the time-based
> > releases
> > > > > that follow it end up being much slower due to the perceived barrier.
> > > > >
> > > > > Any perceived barrier could be lowered if the 4.0.x release could
> > > > > optionally continue to use Zookeeper -- then the cutover would be two
> > > > > incremental steps (move to 4.0.x, then replace Zookeeper while
> > staying on
> > > > > 4.0.x) as opposed to a single big-bang (upgrade to 4.0.x and replace
> > > > > Zookeeper in one fell swoop).
> > > >
> > > > Hi Ron,
> > > >
> > > > Just to clarify, the "bridge release" will continue to use ZooKeeper.
> > It
> > > > will not support running without ZooKeeper.  It is the releases that
> > follow
> > > > the bridge release that will remove ZooKeeper.
> > > >
> > > > Also, it's a bit unclear whether the bridge release would be 3.x or
> > 4.x,
> > > > or something to follow.  We do know that the bridge release can't be a
> > 2.x
> > > > release, since it requires at least one incompatible change, removing
> > > > --zookeeper options from all the shell scripts.  (Since we're doing
> > > > semantic versioning, any time we make an incompatible change, we bump
> > the
> > > > major version number.)
> > > >
> > > > In general, using two sources of metadata is a lot more complex and
> > > > error-prone than one.  A lot of the bugs and corner cases we have are
> > the
> > > > result of divergences between the controller and the state in
> > ZooKeeper.
> > > > Eliminating this divergence, and the split-brain scenarios it creates,
> > is a
> > > > major 

Re: [DISCUSS] KIP-495: Dynamically Adjust Log Levels in Connect

2019-08-23 Thread Jason Gustafson
Hi Arjun,

Thanks for the KIP. Do we really need a JMX-based API? Is there literally
anyone in the world that wants to use JMX if they don't have to? I thought
one of the major motivations of KIP-412 was how much of a pain JMX is.

Thanks,
Jason

On Mon, Aug 19, 2019 at 5:28 PM Arjun Satish  wrote:

> Thanks, Konstantine.
>
> Updated the KIP with the restrictions around log4j and added references to
> similar KIPs.
>
> Best,
>
> On Mon, Aug 19, 2019 at 3:20 PM Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> > Thanks Arjun, the example is useful!
> >
> > My point when I mentioned the restrictions around log4j is that this
> > information is significant and IMO needs to be included in the KIP.
> >
> > Speaking of its relevance to KIP-412, I think a reference would be nice
> > too.
> >
> > Konstantine
> >
> >
> >
> > On Thu, Aug 15, 2019 at 4:00 PM Arjun Satish 
> > wrote:
> >
> > > Hey Konstantine,
> > >
> > > Thanks for the feedback.
> > >
> > > re: the use of log4j, yes, the proposed changes will only work if log4j
> > is
> > > available in runtime. We will not add the mBean if log4j is not
> available
> > > in classpath. If we change from log4j 1 to 2, that would involve
> another
> > > KIP, and it would need to update the changes proposed in this KIP and
> > > others (KIP-412, for instance).
> > >
> > > re: use of Object types, I've changed it from Boolean to the primitive
> > type
> > > for setLogLevel. We are changing the signature of the old method this
> > way,
> > > but since it never returned null, this should be fine.
> > >
> > > re: example usage, I've added some screenshot on how this feature would
> > be
> > > used with jconsole.
> > >
> > > Hope this works!
> > >
> > > Thanks very much,
> > > Arjun
> > >
> > > On Wed, Aug 14, 2019 at 6:42 AM Konstantine Karantasis <
> > > konstant...@confluent.io> wrote:
> > >
> > > > And one thing I forgot is also related to Chris's comment above. I
> > agree
> > > > that an example on how a user is expected to set the log level (for
> > > > instance to DEBUG) would be nice, even if it's showing only one out
> of
> > > the
> > > > many possible ways to achieve that.
> > > >
> > > > - Konstantine
> > > >
> > > > On Wed, Aug 14, 2019 at 4:38 PM Konstantine Karantasis <
> > > > konstant...@confluent.io> wrote:
> > > >
> > > > >
> > > > > Thanks Arjun for tackling the need to support this very useful
> > feature.
> > > > >
> > > > > One thing I noticed while reading the KIP is that I would have
> loved
> > to
> > > > > see more info regarding how this proposal depends on the underlying
> > > > logging
> > > > > APIs and implementations. For instance, my understanding is that
> > slf4j
> > > > can
> > > > > not be leveraged and that the logging framework needs to be pegged
> to
> > > > log4j
> > > > > explicitly (or another logging implementation). Correct me if I'm
> > > wrong,
> > > > > but if such a dependency is introduced I believe it's worth
> > mentioning.
> > > > >
> > > > > Additionally, if the above is correct, there are differences in
> > log4j's
> > > > > APIs between version 1 and version 2. In version 2, Logger#setLevel
> > > > method
> > > > > has been removed from the Logger interface and in order to set the
> > log
> > > > > level programmatically the Configurator class needs to be used, which
> as
> > > > > stated in the FAQ (
> > > > >
> > https://logging.apache.org/log4j/2.x/faq.html#reconfig_level_from_code
> > > )
> > > > > it's not part of log4j2's public API. Is this a concern? I believe
> > that
> > > > > even if these are implementation specific details for the wrappers
> > > > > introduced by this KIP (which to a certain extent they are), a
> > mention
> > > in
> > > > > the KIP text and a few references would be useful to understand the
> > > > changes
> > > > > and the dependencies introduced by this proposal.
> > > > >
> > > > > And a few minor comments:
> > > > > - Is there any specific reason that object types were preferred in
> > the
> > > > > proposed interface compared to primitive types? My understanding is
> > > that
> > > > > `null` is not expected as a return value.
> > > > > - Related to the above, I think it'd be nice for the javadoc to
> > mention
> > > > > when a parameter is not expected to be `null` with an appropriate
> > > comment
> > > > > (e.g. foo bar etc; may not be null)
> > > > >
> > > > > Cheers,
> > > > > Konstantine
> > > > >
> > > > > On Tue, Aug 6, 2019 at 9:34 AM Cyrus Vafadari 
> > > > wrote:
> > > > >
> > > > >> This looks like a useful feature, the strategy makes sense, and
> the
> > > KIP
> > > > is
> > > > >> thorough and nicely written. Thanks!
> > > > >>
> > > > >> Cyrus
> > > > >>
> > > > >> On Thu, Aug 1, 2019, 12:40 PM Chris Egerton 
> > > > wrote:
> > > > >>
> > > > >> > Thanks Arjun! Looks good to me.
> > > > >> >
> > > > >> > On Thu, Aug 1, 2019 at 12:33 PM Arjun Satish <
> > > arjun.sat...@gmail.com>
> > > > >> > wrote:
> > > > >> >
> > > > >> > > Thanks for the feedback, 

Build failed in Jenkins: kafka-trunk-jdk11 #771

2019-08-23 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAFKA-8698: Fix typo in ListOffsetResponse v0 protocol field name

--
[...truncated 2.59 MB...]
org.apache.kafka.trogdor.common.WorkerUtilsTest > testCreatesNotExistingTopics 
PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreateZeroTopicsDoesNothing STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreateZeroTopicsDoesNothing PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreateNonExistingTopicsWithZeroTopicsDoesNothing STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreateNonExistingTopicsWithZeroTopicsDoesNothing PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreateTopicsFailsIfAtLeastOneTopicExists STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreateTopicsFailsIfAtLeastOneTopicExists PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreatesOneTopicVerifiesOneTopic STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCreatesOneTopicVerifiesOneTopic PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCommonConfigOverwritesDefaultProps STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testCommonConfigOverwritesDefaultProps PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testClientConfigOverwritesBothDefaultAndCommonConfigs STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testClientConfigOverwritesBothDefaultAndCommonConfigs PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testVerifyTopics STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testVerifyTopics PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testExistingTopicsNotCreated 
STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testExistingTopicsNotCreated 
PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testGetMatchingTopicPartitionsCorrectlyMatchesExactTopicName STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testGetMatchingTopicPartitionsCorrectlyMatchesExactTopicName PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testCreateRetriesOnTimeout 
STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testCreateRetriesOnTimeout 
PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testExistingTopicsMustHaveRequestedNumberOfPartitions STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testExistingTopicsMustHaveRequestedNumberOfPartitions PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testAddConfigsToPropertiesAddsAllConfigs STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testAddConfigsToPropertiesAddsAllConfigs PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testGetMatchingTopicPartitionsCorrectlyMatchesTopics STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 
testGetMatchingTopicPartitionsCorrectlyMatchesTopics PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testCreateOneTopic STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testCreateOneTopic PASSED

org.apache.kafka.trogdor.common.TopologyTest > testAgentNodeNames STARTED

org.apache.kafka.trogdor.common.TopologyTest > testAgentNodeNames PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorClientTest > 
testPrettyPrintTaskInfo STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorClientTest > 
testPrettyPrintTaskInfo PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTaskRequest STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTaskRequest PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTaskDestruction 
STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTaskDestruction 
PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTasksRequestMatches 
STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTasksRequestMatches 
PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testWorkersExitingAtDifferentTimes STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testWorkersExitingAtDifferentTimes PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testAgentFailureAndTaskExpiry STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testAgentFailureAndTaskExpiry PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTaskDistribution 
STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTaskDistribution 
PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testTaskRequestWithOldStartMsGetsUpdated STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testTaskRequestWithOldStartMsGetsUpdated PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testNetworkPartitionFault STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testNetworkPartitionFault PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 

[jira] [Created] (KAFKA-8829) Provide factories for creating Window instances

2019-08-23 Thread Marcos Passos (Jira)
Marcos Passos created KAFKA-8829:


 Summary: Provide factories for creating Window instances
 Key: KAFKA-8829
 URL: https://issues.apache.org/jira/browse/KAFKA-8829
 Project: Kafka
  Issue Type: New Feature
Affects Versions: 2.3.0
Reporter: Marcos Passos


The API provides no way to create {{Window}} instances without relying on 
internal classes.

The issue becomes more evident when using session stores, as both the {{put}} and 
{{remove}} methods expect a windowed key, but the API does not expose any way 
to create a proper session-windowed key in userland, leaving the developer 
with two choices: a) implement a window that duplicates the logic of 
{{SessionWindow}}, or b) rely on {{SessionWindow}}, which is an internal 
class.
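A minimal illustration of the problem, assuming a session store obtained elsewhere
(for example from the processor context); note that the only way to build the
windowed key here is to import SessionWindow from the internal package, which is
exactly the issue being reported.

```java
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.kstream.internals.SessionWindow; // internal package
import org.apache.kafka.streams.state.SessionStore;

public class SessionStorePutExample {
    // Writing to a session store requires a Windowed key, but the only
    // available Window implementation for sessions lives in internals.
    static void putSession(SessionStore<String, Long> store, String key,
                           long start, long end, Long aggregate) {
        Windowed<String> sessionKey = new Windowed<>(key, new SessionWindow(start, end));
        store.put(sessionKey, aggregate);
    }
}
```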



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KAFKA-8828) [BC Break] Global store returns a TimestampedKeyValueStore in 2.3

2019-08-23 Thread Marcos Passos (Jira)
Marcos Passos created KAFKA-8828:


 Summary: [BC Break] Global store returns a 
TimestampedKeyValueStore in 2.3
 Key: KAFKA-8828
 URL: https://issues.apache.org/jira/browse/KAFKA-8828
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.3.0
Reporter: Marcos Passos


Since 2.3, ProcessorContext returns a TimestampedKeyValueStore for global 
stores, which is backward incompatible. This change makes the upgrade path a 
lot more painful and involves creating a non-trivial adapter to hide the 
timestamp-related functionality in cases where it is not needed.
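A sketch of the unwrapping a caller now has to do in 2.3, assuming a processor
that previously read the global store as a plain KeyValueStore (store name and
types are illustrative).

```java
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.TimestampedKeyValueStore;
import org.apache.kafka.streams.state.ValueAndTimestamp;

public class GlobalStoreLookup {
    // In 2.3 the global store handed back by the context is timestamped, so
    // callers that only want the value must unwrap ValueAndTimestamp themselves.
    @SuppressWarnings("unchecked")
    static String lookup(ProcessorContext context, String storeName, String key) {
        TimestampedKeyValueStore<String, String> store =
            (TimestampedKeyValueStore<String, String>) context.getStateStore(storeName);
        ValueAndTimestamp<String> valueAndTimestamp = store.get(key);
        return valueAndTimestamp == null ? null : valueAndTimestamp.value();
    }
}
```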



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KAFKA-8827) kafka followers intermittent traffic

2019-08-23 Thread frid (Jira)
frid created KAFKA-8827:
---

 Summary: kafka followers intermittent traffic
 Key: KAFKA-8827
 URL: https://issues.apache.org/jira/browse/KAFKA-8827
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.2.0
Reporter: frid
 Attachments: image-2019-08-23-15-35-17-558.png

I have a Kafka cluster running version 0.10.2 with 3 brokers and 3 ZooKeeper 
nodes. I try to generate 25 MB/sec to my topic, which has 400 partitions, and the 
cluster behavior is very strange.

The followers stop receiving messages every 5 minutes, but the leader still 
receives messages, which generates lag and therefore under-replicated partitions, 
and the cluster becomes unstable. CPU reaches 80% on the follower brokers but 
stays around 20% on the leader.

Could someone explain this phenomenon? Thanks in advance; some metrics are attached below.

!image-2019-08-23-15-35-17-558.png!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: [DISCUSS] KIP-500: Replace ZooKeeper with a Self-Managed Metadata Quorum

2019-08-23 Thread Ryanne Dolan
Colin, can you outline what specifically would be in scope for this KIP vs
deferred to the follow-on KIPs you've mentioned? Maybe a Future Work
section? Is the idea to get to the bridge release with this KIP, and then
go from there?

Ryanne

On Thu, Aug 22, 2019, 11:58 AM Colin McCabe  wrote:

> On Wed, Aug 21, 2019, at 19:48, Ron Dagostino wrote:
> > Thanks, Colin.  The changes you made to the KIP related to the bridge
> > release help make it clearer.  I still have some confusion about the
> phrase
> > "The rolling upgrade from the bridge release will take several steps."
> > This made me think you are talking about moving from the bridge release
> to
> > some other, newer, release that comes after the bridge release.  But I
> > think what you are getting at is that the bridge release can be run with
> or
> > without Zookeeper -- when first upgrading to it Zookeeper remains in use,
> > but then there is a transition that can be made to engage the warp
> drive...
> > I mean the Controller Quorum.  So maybe the phrase should be "The rolling
> > upgrade through the bridge release -- starting with Zookeeper being in
> use
> > and ending with Zookeeper having been replaced by the Controller Quorum
> --
> > will take several steps."
>
> Hi Ron,
>
> To clarify, the bridge release will require ZooKeeper.  It will also not
> support the controller quorum.  It's a bridge in the sense that you must
> upgrade to a bridge release prior to upgrading to a ZK-less release.  I
> added some more descriptive text to the bridge release paragraph--
> hopefully this makes it clearer.
>
> best,
> Colin
>
> >
> > Do I understand it correctly, and might some change in phrasing or
> > additional clarification help others avoid the same confusion I had?
> >
> > Ron
> >
> > On Wed, Aug 21, 2019 at 2:31 PM Colin McCabe  wrote:
> >
> > > On Wed, Aug 21, 2019, at 04:22, Ron Dagostino wrote:
> > > > Hi Colin.  I like the concept of a "bridge release" for migrating
> off of
> > > > Zookeeper, but I worry that it may become a bottleneck if people
> hesitate
> > > > to replace Zookeeper -- they would be unable to adopt newer versions
> of
> > > > Kafka until taking (what feels to them like) a giant leap.  As an
> > > example,
> > > > assuming version 4.0.x of Kafka is the supported bridge release, I
> would
> > > > not be surprised if uptake of the 4.x release and the time-based
> releases
> > > > that follow it end up being much slower due to the perceived barrier.
> > > >
> > > > Any perceived barrier could be lowered if the 4.0.x release could
> > > > optionally continue to use Zookeeper -- then the cutover would be two
> > > > incremental steps (move to 4.0.x, then replace Zookeeper while
> staying on
> > > > 4.0.x) as opposed to a single big-bang (upgrade to 4.0.x and replace
> > > > Zookeeper in one fell swoop).
> > >
> > > Hi Ron,
> > >
> > > Just to clarify, the "bridge release" will continue to use ZooKeeper.
> It
> > > will not support running without ZooKeeper.  It is the releases that
> follow
> > > the bridge release that will remove ZooKeeper.
> > >
> > > Also, it's a bit unclear whether the bridge release would be 3.x or
> 4.x,
> > > or something to follow.  We do know that the bridge release can't be a
> 2.x
> > > release, since it requires at least one incompatible change, removing
> > > --zookeeper options from all the shell scripts.  (Since we're doing
> > > semantic versioning, any time we make an incompatible change, we bump
> the
> > > major version number.)
> > >
> > > In general, using two sources of metadata is a lot more complex and
> > > error-prone than one.  A lot of the bugs and corner cases we have are
> the
> > > result of divergences between the controller and the state in
> ZooKeeper.
> > > Eliminating this divergence, and the split-brain scenarios it creates,
> is a
> > > major goal of this work.
> > >
> > > >
> > > > Regardless of whether what I wrote above has merit or not, I think
> the
> > > KIP
> > > > should be more explicit about what the upgrade constraints actually
> are.
> > > > Can the bridge release be adopted with Zookeeper remaining in place
> and
> > > > then cutting over as a second, follow-on step, or must the Controller
> > > > Quorum nodes be started first and the bridge release cannot be used
> with
> > > > Zookeeper at all?
> > >
> > > As I mentioned above, the bridge release supports (indeed, requires)
> > > ZooKeeper.  I have added a little more text about this to KIP-500 which
> > > hopefully makes it clearer.
> > >
> > > best,
> > > Colin
> > >
> > > >  If the bridge release cannot be used with Zookeeper at
> > > > all, then no version at or beyond the bridge release is available
> > > > unless/until abandoning Zookeeper; if the bridge release can be used
> with
> > > > Zookeeper, then is it the only version that can be used with
> Zookeeper,
> > > or
> > > > can Zookeeper be kept for additional releases if desired?
> > > >
> > > > Ron
> > > >
> > > > On Tue, Aug 20, 2019 at 

[jira] [Resolved] (KAFKA-8698) ListOffsets Response protocol documentation

2019-08-23 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-8698.
--
Fix Version/s: 2.4.0
   Resolution: Fixed

> ListOffsets Response protocol documentation
> ---
>
> Key: KAFKA-8698
> URL: https://issues.apache.org/jira/browse/KAFKA-8698
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Fábio Silva
>Assignee: Asutosh Pandya
>Priority: Minor
>  Labels: documentation, protocol-documentation
> Fix For: 2.4.0
>
>
> The documentation of ListOffsets Response (Version: 0) appears to have a 
> typo in the offsets field name, which is suffixed with `'`.
> {code:java}
> [offsets']{code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: [DISCUSS] KIP-511: Collect and Expose Client's Name and Version in the Brokers

2019-08-23 Thread Magnus Edenhill
Great proposal, this feature is well overdue!

1)
From an operator's perspective I don't think the kafka client
implementation name and version are sufficient; I also believe the
application name and version are of interest. You could have all
applications in your cluster run the same kafka client and version, but
have only one type or version of an application misbehaving and needing
to be tracked down.

While the application and client name and version could be combined in
the ClientName/ClientVersion fields by the user (similar to HTTP's
User-Agent), that would not be in a generalized format and would be hard
for generic monitoring tools to parse correctly.

So I'd suggest keeping ClientName and ClientVersion as the client
implementation name ("java" or "org.apache.kafka...") and version,
which can't be changed by the user/app developer, and providing two
optional fields for the application counterpart:
ApplicationName and ApplicationVersion, which are backed by corresponding
configuration properties (application.name, application.version).
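
For illustration, assuming the optional ApplicationName/ApplicationVersion
fields and the application.name / application.version configuration keys
proposed above (neither exists in the client today, and the values are made
up), an application might configure them like this:

import java.util.Properties;

public class ApplicationInfoConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Hypothetical keys proposed in this thread, not existing Kafka configs.
        props.put("application.name", "orders-service");
        props.put("application.version", "4.2.1");
        // A client built from these properties would then report both the
        // client implementation identity and the application identity.
        System.out.println(props);
    }
}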

2)
Do ..Name and ..Version need to be two separate fields, seeing how the two
fields are ambiguous when separated?
If we're looking to identify unique versions, combining the two fields
would be sufficient (e.g., "java-2.3.1", "librdkafka/1.2.0", "sarama@1.2.3")
and perhaps easier to work with.
The actual format or content of the name-version string is irrelevant as
long as it identifies a unique name+version.


3)
As for allowed characters, will the broker fail the ApiVersionResponse if
any of these fields contain invalid characters,
or will the broker sanitize the strings?
For future backwards compatibility (when the broker constraints change but
clients are not updated) I suggest the latter.

4)
And while we're at it, can we add the broker name and version to the
ApiVersionResponse?
While an application must not use this information to detect features (Hi
Jay!), it is good for troubleshooting
and providing more meaningful logs to the client user in case a feature
(based on the actual api versions) is not available.

/Magnus


On Thu, 22 Aug 2019 at 10:09, David Jacot  wrote:

> Hi Satish,
>
> Thank you for your feedback!
>
> Please find my answers below.
>
> >> Did you consider taking version property by loading
> “kafka/kafka-version.properties” as a resource while java client is
> initialized?  “kafka/kafka-version.properties” is shipped with
> kafka-clients jar.
>
> I wasn't aware of the property file. It is exactly what I need. Thanks for
> pointing that out!
>
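For illustration, a minimal sketch of loading that resource, assuming the file
exposes a "version" key (the class name is illustrative):

import java.io.InputStream;
import java.util.Properties;

public class KafkaClientVersionLoader {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // kafka/kafka-version.properties is shipped inside the kafka-clients jar.
        try (InputStream in = KafkaClientVersionLoader.class
                .getResourceAsStream("/kafka/kafka-version.properties")) {
            if (in != null) {
                props.load(in);
            }
        }
        System.out.println("kafka-clients version: "
                + props.getProperty("version", "unknown"));
    }
}
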
> >> I assume this metric value will be the total no of clients connected
> to a broker irrespective of whether name and version follow the
> expected pattern ([-.\w]+) or not.
>
> That is correct.
>
> >> It seems client name and client version are treated as tags for
> `ConnectedClients` metric. If so, you may implement this metric
> similar to `BrokerTopicMetrics` with topic tag as mentioned here[1].
> When is the metric removed for a specific client-name and
> client-version?
>
> That is correct. Client name and version are treated as tags like in
> BrokerTopicMetrics. My plan is to remove the metric when it goes
> back to zero - when all clients with a given name & version are
> disconnected.
>
> Best,
> David
>
> On Wed, Aug 21, 2019 at 6:52 PM Satish Duggana 
> wrote:
>
> > Hi David,
> > Thanks for the KIP. I have a couple of questions.
> >
> > >> For the Java client, the idea is to define two constants in the code
> to
> > store its name and its version. If possible, the version will be set
> > automatically based on metadata coming from gradle (or the repo itself)
> to
> > avoid having to do manual changes.
> >
> > Did you consider taking version property by loading
> > “kafka/kafka-version.properties” as a resource while java client is
> > initialized?  “kafka/kafka-version.properties” is shipped with
> > kafka-clients jar.
> >
> > >> kafka.server:type=ClientMetrics,name=ConnectedClients
> > I assume this metric value will be the total no of clients connected
> > to a broker irrespective of whether name and version follow the
> > expected pattern ([-.\w]+) or not.
> >
> > >>
> >
> kafka.server:type=ClientMetrics,name=ConnectedClients,clientname=([-.\w]+),clientversion=([-.\w]+)
> > It seems client name and client version are treated as tags for
> > `ConnectedClients` metric. If so, you may implement this metric
> > similar to `BrokerTopicMetrics` with topic tag as mentioned here[1].
> > When is the metric removed for a specific client-name and
> > client-version?
> >
> > 1.
> >
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaRequestHandler.scala#L231
> >
> > Thanks,
> > Satish.
> >
> >
> >
> >
> > On Wed, Aug 21, 2019 at 5:33 PM David Jacot  wrote:
> > >
> > > Hi all,
> > >
> > > I would like to start a discussion for KIP-511:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers
> > >
> > > Let me