Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #107

2021-05-06 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #106

2021-05-06 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12760) Delete My Account

2021-05-06 Thread name (Jira)
name created KAFKA-12760:


 Summary: Delete My Account
 Key: KAFKA-12760
 URL: https://issues.apache.org/jira/browse/KAFKA-12760
 Project: Kafka
  Issue Type: Wish
Reporter: name


I wish to have my account deleted. There doesn't seem to be a way to do it from 
within my own account, but it should be possible for an admin to do it.

Many thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10847) Avoid spurious left/outer join results in stream-stream join

2021-05-06 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-10847.
-
Fix Version/s: 3.0.0
   Resolution: Fixed

> Avoid spurious left/outer join results in stream-stream join 
> -
>
> Key: KAFKA-10847
> URL: https://issues.apache.org/jira/browse/KAFKA-10847
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Sergio Peña
>Priority: Major
> Fix For: 3.0.0
>
>
> KafkaStreams follows an eager execution model, i.e., it never buffers input 
> records but processes them right away. For a left/outer stream-stream join, 
> this implies that left/outer join results might be emitted before the window 
> end (or window close) time is reached. Thus, a record that will eventually be 
> part of an inner-join result might first produce an eager (and spurious) 
> left/outer join result.
> We should change the implementation of the join to not emit eager left/outer 
> join results, but instead delay their emission until after the window grace 
> period has passed.
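
For readers less familiar with the Streams DSL, here is a minimal sketch of the 
kind of windowed left join this ticket is about. The topic names, serdes, and 
window sizes below are illustrative assumptions and not part of the ticket; 
only the grace-period semantics matter.

{code:java}
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.StreamJoined;

public class LeftJoinSketch {
  public static void main(String[] args) {
    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> left =
        builder.stream("left-topic", Consumed.with(Serdes.String(), Serdes.String()));
    KStream<String, String> right =
        builder.stream("right-topic", Consumed.with(Serdes.String(), Serdes.String()));

    // 10-minute join window with a 5-minute grace period. Before the fix, an
    // unmatched left record could eagerly emit a (leftValue, null) result even
    // though a matching right record might still arrive within the window; the
    // fix delays such left/outer-only results until the window has closed
    // (window end plus grace period).
    left.leftJoin(
            right,
            (leftValue, rightValue) -> leftValue + "/" + rightValue,
            JoinWindows.of(Duration.ofMinutes(10)).grace(Duration.ofMinutes(5)),
            StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()))
        .to("join-output", Produced.with(Serdes.String(), Serdes.String()));
  }
}
{code}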



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #105

2021-05-06 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12759) Kafka consumers with static group membership won't consume from newly subscribed topics

2021-05-06 Thread Andrey Polyakov (Jira)
Andrey Polyakov created KAFKA-12759:
---

 Summary: Kafka consumers with static group membership won't 
consume from newly subscribed topics
 Key: KAFKA-12759
 URL: https://issues.apache.org/jira/browse/KAFKA-12759
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Andrey Polyakov


We've recently started using static group membership and noticed that when 
adding a new topic to the subscription, it's not consumed from, regardless of 
how long the consumer is left to run. A workaround we have is shutting down all 
consumers in the group for longer than session.timeout.ms, then starting them 
back up. Is this expected behaviour or a bug?

Sample application:
{code:java}
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class Test {
  static volatile boolean shutdown = false;
  static final Object shutdownLock = new Object();

  public static void main(String[] args) {
Runtime.getRuntime()
.addShutdownHook(
new Thread(
() -> {
  shutdown = true;
  synchronized (shutdownLock) {
try {
  shutdownLock.wait();
} catch (InterruptedException e) {
  e.printStackTrace();
}
  }
}));

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(
ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
ByteArrayDeserializer.class.getCanonicalName());
props.put(
ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
ByteArrayDeserializer.class.getCanonicalName());

props.put(ConsumerConfig.GROUP_ID_CONFIG, "myGroupID");
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "300000"); // 5 min
props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "instance1");

KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);

consumer.subscribe(Arrays.asList("topic1"));
// consumer.subscribe(Arrays.asList("topic1", "topic2"));

while (!shutdown) {
  ConsumerRecords<byte[], byte[]> records = 
consumer.poll(Duration.ofSeconds(5));
  System.out.println("poll() returned " + records.count() + " records");
}

System.out.println("Closing consumer");
consumer.close();
synchronized (shutdownLock) {
  shutdownLock.notifyAll();
  System.out.println("Done closing consumer");
}
  }
}
{code}
Steps to reproduce:
 0. update bootstrap server config in example code
 1. run above application, which consumes from topic1
 2. send SIGTERM to process, cleanly closing the consumer
 3. modify code to consume from topic1 AND topic2
 4. run application again, and see that both topics appear in the logs as being 
part of the subscription, but they're never assigned, regardless of how long 
you let the consumer run.

Logs from first run (1 topic subscription):
{code:java}
ConsumerConfig values: 
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-myGroupID-instance1
client.rack = 
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = myGroupID
group.instance.id = instance1
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class 
org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #104

2021-05-06 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-12252) Distributed herder tick thread loops rapidly when worker loses leadership

2021-05-06 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12252.
---
Fix Version/s: 2.8.1
   3.0.0
   Resolution: Fixed

I'm still working on backporting this to the 2.7 and 2.6 branches. When I'm 
able to do that, I'll update the fix versions on this issue.

> Distributed herder tick thread loops rapidly when worker loses leadership
> -
>
> Key: KAFKA-12252
> URL: https://issues.apache.org/jira/browse/KAFKA-12252
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 3.0.0, 2.8.1
>
>
> When a new session key is read from the config topic, if the worker is the 
> leader, it [schedules a new key 
> rotation|https://github.com/apache/kafka/blob/5cf9cfcaba67cffa2435b07ade58365449c60bd9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L1579-L1581].
>  The time between key rotations is configurable but defaults to an hour.
> The herder then continues its tick loop, which usually ends with a long poll 
> for rebalance activity. However, when a key rotation is scheduled, it will 
> [limit the time spent 
> polling|https://github.com/apache/kafka/blob/5cf9cfcaba67cffa2435b07ade58365449c60bd9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L384-L388]
>  at the end of the tick loop in order to be able to perform the rotation.
> Once woken up, the worker checks to see if a key rotation is necessary and, 
> if so, [sets the expected key rotation time to 
> Long.MAX_VALUE|https://github.com/apache/kafka/blob/bf4afae8f53471ab6403cbbfcd2c4e427bdd4568/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L344],
>  then [writes a new session key to the config 
> topic|https://github.com/apache/kafka/blob/bf4afae8f53471ab6403cbbfcd2c4e427bdd4568/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L345-L348].
>  The problem is, [the worker only ever decides a key rotation is necessary if 
> it is still the 
> leader|https://github.com/apache/kafka/blob/5cf9cfcaba67cffa2435b07ade58365449c60bd9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L456-L469].
>  If the worker is no longer the leader at the time of the key rotation 
> (likely due to falling out of the cluster after losing contact with the group 
> coordinator), its key expiration time won’t be reset, and the long poll for 
> rebalance activity at the end of the tick loop will be given a timeout of 0 
> ms and result in the tick loop being immediately restarted. Even if the 
> worker reads a new session key from the config topic, it’ll continue looping 
> like this since its scheduled key rotation won’t be updated. At this point, 
> the only thing that would help the worker get back into a healthy state would 
> be if it were made the leader of the cluster again.
> One possible fix could be to add a conditional check in the tick thread to 
> only limit the time spent on rebalance polling if the worker is currently the 
> leader.
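
A hypothetical sketch of that conditional check is below. It is not the actual 
DistributedHerder code; the class, field, and method names are assumptions used 
only to illustrate the idea that a non-leader should never let a stale 
key-rotation deadline shorten its rebalance poll.

{code:java}
// Illustrative sketch only -- not the real DistributedHerder implementation.
final class TickLoopTimeoutSketch {
  private volatile boolean leader = false;                   // is this worker currently the leader?
  private volatile long keyRotationTimeMs = Long.MAX_VALUE;  // next scheduled session key rotation, if any

  /**
   * How long the tick loop may block while polling for rebalance activity.
   * Only the leader performs key rotations, so only the leader lets a pending
   * rotation shorten the poll; on a former leader a stale rotation deadline
   * would otherwise force a 0 ms timeout and make the tick loop spin.
   */
  long rebalancePollTimeoutMs(long nowMs) {
    if (leader && keyRotationTimeMs != Long.MAX_VALUE) {
      return Math.max(0L, keyRotationTimeMs - nowMs);
    }
    return Long.MAX_VALUE; // no deadline: block until there is rebalance activity
  }
}
{code}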



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-730: Producer ID generation in KRaft mode

2021-05-06 Thread Colin McCabe
Sorry, I meant to write "AllocateProducerIdsRecord" in the previous message.  
-C.

On Thu, May 6, 2021, at 12:58, Colin McCabe wrote:
> Hi David,
> 
> Thanks for the KIP -- it looks good.
> 
> It seems like we should be clear that the new RPC should be used for 
> both the ZK and KRaft cases.  I think that is implied, but it would be 
> good to spell it out just to be clear.  As the KIP explains, this is 
> needed for the bridge release.
> 
> I think AllocateProducersIdRecord would be a nicer name than 
> ProducerIdRecord -- what do you think?
> 
> In the snapshot, does it make sense to store the latest producer ID 
> allocation record for every broker?  This might be useful for debugging 
> purposes, and it's unlikely to be that many records... On the other 
> hand, as you mention, we only really need the highest one for 
> correctness.
> 
> best,
> Colin
> 
> 
> On Thu, May 6, 2021, at 11:53, Tom Bentley wrote:
> > Hi David,
> > 
> > Thanks for the KIP, +1 binding.
> > 
> > Tom
> > 
> > On Thu, May 6, 2021 at 7:16 PM Guozhang Wang  wrote:
> > 
> > > LGTM! Thanks David.
> > >
> > > On Thu, May 6, 2021 at 10:03 AM Ron Dagostino  wrote:
> > >
> > > > Thanks again for the KIP, David.  +1 (non-binding) from me.
> > > >
> > > > Ron
> > > >
> > > > On Tue, May 4, 2021 at 11:21 AM David Arthur  wrote:
> > > >
> > > > > Hello everyone, I'd like to start the vote on KIP-730 which adds a new
> > > > RPC
> > > > > for producer ID generation in KRaft mode.
> > > > >
> > > > >
> > > > >
> > > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-730%3A+Producer+ID+generation+in+KRaft+mode
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > David Arthur
> > > > >
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> > 
> 


Re: [VOTE] KIP-730: Producer ID generation in KRaft mode

2021-05-06 Thread Colin McCabe
Hi David,

Thanks for the KIP -- it looks good.

It seems like we should be clear that the new RPC should be used for both the 
ZK and KRaft cases.  I think that is implied, but it would be good to spell it 
out just to be clear.  As the KIP explains, this is needed for the bridge 
release.

I think AllocateProducersIdRecord would be a nicer name than ProducerIdRecord 
-- what do you think?

In the snapshot, does it make sense to store the latest producer ID allocation 
record for every broker?  This might be useful for debugging purposes, and it's 
unlikely to be that many records... On the other hand, as you mention, we 
only really need the highest one for correctness.

best,
Colin


On Thu, May 6, 2021, at 11:53, Tom Bentley wrote:
> Hi David,
> 
> Thanks for the KIP, +1 binding.
> 
> Tom
> 
> On Thu, May 6, 2021 at 7:16 PM Guozhang Wang  wrote:
> 
> > LGTM! Thanks David.
> >
> > On Thu, May 6, 2021 at 10:03 AM Ron Dagostino  wrote:
> >
> > > Thanks again for the KIP, David.  +1 (non-binding) from me.
> > >
> > > Ron
> > >
> > > On Tue, May 4, 2021 at 11:21 AM David Arthur  wrote:
> > >
> > > > Hello everyone, I'd like to start the vote on KIP-730 which adds a new
> > > RPC
> > > > for producer ID generation in KRaft mode.
> > > >
> > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-730%3A+Producer+ID+generation+in+KRaft+mode
> > > >
> > > >
> > > >
> > > > --
> > > > David Arthur
> > > >
> > >
> >
> >
> > --
> > -- Guozhang
> >
> 


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #103

2021-05-06 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-730: Producer ID generation in KRaft mode

2021-05-06 Thread Tom Bentley
Hi David,

Thanks for the KIP, +1 binding.

Tom

On Thu, May 6, 2021 at 7:16 PM Guozhang Wang  wrote:

> LGTM! Thanks David.
>
> On Thu, May 6, 2021 at 10:03 AM Ron Dagostino  wrote:
>
> > Thanks again for the KIP, David.  +1 (non-binding) from me.
> >
> > Ron
> >
> > On Tue, May 4, 2021 at 11:21 AM David Arthur  wrote:
> >
> > > Hello everyone, I'd like to start the vote on KIP-730 which adds a new
> > RPC
> > > for producer ID generation in KRaft mode.
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-730%3A+Producer+ID+generation+in+KRaft+mode
> > >
> > >
> > >
> > > --
> > > David Arthur
> > >
> >
>
>
> --
> -- Guozhang
>


Re: [VOTE] 2.7.1 RC2

2021-05-06 Thread Randall Hauch
+1 (binding)

- Performed quickstart for broker and Connect
- Verified signatures and checksums
- Verified the tag
- Compared the release notes to JIRA
- Manually spotchecked the Javadocs

Thanks!

Randall

On Mon, Apr 26, 2021 at 7:54 PM Satish Duggana 
wrote:

> +1 (non-binding)
>
> - Ran releaseTarGzAll successfully with no failures.
> - Ran subset of tests.
> - Ran through quickstart on builds generated from the tag.
> - Ran a few internal apps targeting topics on a 5 node cluster.
>
> Thanks,
> Satish.
>
> On Tue, 27 Apr 2021 at 02:57, Israel Ekpo  wrote:
> >
> > +1 for the release candidate (non-binding)
> >
> > Environment: Ubuntu 20.04 LTS, JDK 11.0.10, Scala 2.13.3, Gradle 6.6.1
> >
> > I have validated the GPG signatures and spot checked the javadocs, kafka
> > documentation and licenses used in the project.
> >
> > I learned about a couple of very interesting licenses as well today.
> >
> > Thanks for running this release, Mickael.
> >
> >
> >
> > On Wed, Apr 21, 2021 at 6:19 PM Israel Ekpo 
> wrote:
> >
> > > I will build from source tomorrow morning, run validation checks and
> share
> > > my findings and vote then.
> > >
> > > Thanks for the reminder.
> > >
> > > On Wed, Apr 21, 2021 at 12:50 PM Mickael Maison <
> mickael.mai...@gmail.com>
> > > wrote:
> > >
> > >> Bumping this thread
> > >>
> > >> Can we get a few more votes?
> > >>
> > >> Thanks
> > >>
> > >>
> > >> On Thu, Apr 15, 2021 at 5:37 PM Bill Bejeck 
> wrote:
> > >> >
> > >> > Mickael,
> > >> >
> > >> > Thanks for running the release.
> > >> >
> > >> > I validated the checksums and signatures, built the project from
> src,
> > >> and
> > >> > ran the unit tests.
> > >> > I'm +1(binding).
> > >> >
> > >> > -Bill
> > >> >
> > >> > On Wed, Apr 14, 2021 at 7:31 PM John Roesler 
> > >> wrote:
> > >> >
> > >> > > Hi Mickael,
> > >> > >
> > >> > > I verified the signatures and checksums, ran the tests, and
> > >> > > spot-checked the license.
> > >> > >
> > >> > > I'm +1 (binding).
> > >> > >
> > >> > > Thanks for driving this release!
> > >> > > -John
> > >> > >
> > >> > > On Fri, 2021-04-09 at 11:07 +0100, Mickael Maison wrote:
> > >> > > > Hi,
> > >> > > >
> > >> > > > Here is a successful build of 2.7.1 RC2:
> > >> > > > https://ci-builds.apache.org/job/Kafka/job/kafka-2.7-jdk8/144/
> > >> > > >
> > >> > > > Thanks
> > >> > > >
> > >> > > > On Thu, Apr 8, 2021 at 6:27 PM Mickael Maison <
> mimai...@apache.org>
> > >> > > wrote:
> > >> > > > >
> > >> > > > > Hello Kafka users, developers and client-developers,
> > >> > > > >
> > >> > > > > This is the third candidate for release of Apache Kafka 2.7.1.
> > >> > > > >
> > >> > > > > Since 2.7.1 RC1, the following JIRAs have been fixed:
> KAFKA-12593,
> > >> > > > > KAFKA-12474, KAFKA-12602.
> > >> > > > >
> > >> > > > > Release notes for the 2.7.1 release:
> > >> > > > >
> > >> https://home.apache.org/~mimaison/kafka-2.7.1-rc2/RELEASE_NOTES.html
> > >> > > > >
> > >> > > > > *** Please download, test and vote by Friday, April 16, 5pm
> BST
> > >> > > > >
> > >> > > > > Kafka's KEYS file containing PGP keys we use to sign the
> release:
> > >> > > > > https://kafka.apache.org/KEYS
> > >> > > > >
> > >> > > > > * Release artifacts to be voted upon (source and binary):
> > >> > > > > https://home.apache.org/~mimaison/kafka-2.7.1-rc2/
> > >> > > > >
> > >> > > > > * Maven artifacts to be voted upon:
> > >> > > > >
> > >>
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >> > > > >
> > >> > > > > * Javadoc:
> > >> > > > > https://home.apache.org/~mimaison/kafka-2.7.1-rc2/javadoc/
> > >> > > > >
> > >> > > > > * Tag to be voted upon (off 2.7 branch) is the 2.7.1 tag:
> > >> > > > > https://github.com/apache/kafka/releases/tag/2.7.1-rc2
> > >> > > > >
> > >> > > > > * Documentation:
> > >> > > > > https://kafka.apache.org/27/documentation.html
> > >> > > > >
> > >> > > > > * Protocol:
> > >> > > > > https://kafka.apache.org/27/protocol.html
> > >> > > > >
> > >> > > > > * Successful Jenkins builds for the 2.7 branch:
> > >> > > > > The build is still running, I'll update the thread once it's
> > >> complete
> > >> > > > >
> > >> > > > > /**
> > >> > > > >
> > >> > > > > Thanks,
> > >> > > > > Mickael
> > >> > >
> > >> > >
> > >> > >
> > >>
> > >
>


Re: [VOTE] KIP-730: Producer ID generation in KRaft mode

2021-05-06 Thread Guozhang Wang
LGTM! Thanks David.

On Thu, May 6, 2021 at 10:03 AM Ron Dagostino  wrote:

> Thanks again for the KIP, David.  +1 (non-binding) from me.
>
> Ron
>
> On Tue, May 4, 2021 at 11:21 AM David Arthur  wrote:
>
> > Hello everyone, I'd like to start the vote on KIP-730 which adds a new
> RPC
> > for producer ID generation in KRaft mode.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-730%3A+Producer+ID+generation+in+KRaft+mode
> >
> >
> >
> > --
> > David Arthur
> >
>


-- 
-- Guozhang


Re: [VOTE] KIP-735: Increase default consumer session timeout

2021-05-06 Thread Jason Gustafson
Calling the vote here. Thanks everyone! The total is +8 with no -1 votes.
The binding total is +7 with votes from myself, Sophie, Gwen, Guozhang,
Bill, David, and Bruno.

-Jason


On Thu, Apr 29, 2021 at 10:05 AM Jason Gustafson  wrote:

> Hey All,
>
> Thanks everyone for the votes.
>
> I had some offline discussion with Magnus Edenhill and he brought up a
> potential problem with the change in behavior for group.(min|max).
> session.timeout.ms. Unlike the java consumer, the librdkafka consumer
> proactively revokes partitions after the session timeout expires locally
> with no response from the coordinator. That means that the automatic
> adjustment of the session timeout would not be respected by the client.
> What is worse, after the consumer expires its local session timeout and
> revokes partitions, it rejoins as a new member. However, the rebalance
> would not be able to complete until the old member had expired, so the net
> effect would be that rebalances get unexpectedly delayed by the adjusted
> session timeout.
>
> I think this means we need to give this change some more thought, so I am
> removing it from the KIP. Probably we will have to bump the JoinGroup API
> so that the coordinator can make the client aware of the adjusted session
> timeout (and probably the heartbeat interval as well). I will look into
> doing this change in a separate proposal.
>
> Thanks,
> Jason
>
> On Thu, Apr 29, 2021 at 1:43 AM Bruno Cadonna  wrote:
>
>> Thank you for the KIP, Jason!
>>
>> +1 (binding)
>>
>> Best,
>> Bruno
>>
>> On 29.04.21 10:10, Luke Chen wrote:
>> > Hi Jason,
>> > +1 (non-binding)
>> >
>> > Really need this KIP to save poor jenkins flaky tests. :)
>> >
>> > Luke
>> >
>> > On Thu, Apr 29, 2021 at 4:01 PM David Jacot > >
>> > wrote:
>> >
>> >> +1 (binding)
>> >>
>> >> Thanks for the KIP.
>> >>
>> >> On Thu, Apr 29, 2021 at 2:27 AM Bill Bejeck  wrote:
>> >>
>> >>> Thanks for the KIP Jason, +1(binding)
>> >>>
>> >>> -Bill
>> >>>
>> >>> On Wed, Apr 28, 2021 at 7:47 PM Guozhang Wang 
>> >> wrote:
>> >>>
>>  +1. Thanks Jason!
>> 
>>  On Wed, Apr 28, 2021 at 12:50 PM Gwen Shapira
>> >> > 
>>  wrote:
>> 
>> > I love this improvement.
>> >
>> > +1 (binding)
>> >
>> > On Wed, Apr 28, 2021 at 10:46 AM Jason Gustafson
>> > 
>> > wrote:
>> >
>> >> Hi All,
>> >>
>> >> I'd like to start a vote on KIP-735:
>> >>
>> >>
>> >
>> 
>> >>>
>> >>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-735%3A+Increase+default+consumer+session+timeout
>> >> .
>> >> +1
>> >> from myself obviously
>> >>
>> >> -Jason
>> >>
>> >
>> >
>> > --
>> > Gwen Shapira
>> > Engineering Manager | Confluent
>> > 650.450.2760 | @gwenshap
>> > Follow us: Twitter | blog
>> >
>> 
>> 
>>  --
>>  -- Guozhang
>> 
>> >>>
>> >>
>> >
>>
>


Request to create a KIP

2021-05-06 Thread Parthasarathy, Mohan
Hi,

I would like to create a proposal for enhancing the Streams API to support 
additional aggregation functions. Requesting permission. My ID: mparthas99

Thanks
Mohan


Re: [VOTE] KIP-730: Producer ID generation in KRaft mode

2021-05-06 Thread Ron Dagostino
Thanks again for the KIP, David.  +1 (non-binding) from me.

Ron

On Tue, May 4, 2021 at 11:21 AM David Arthur  wrote:

> Hello everyone, I'd like to start the vote on KIP-730 which adds a new RPC
> for producer ID generation in KRaft mode.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-730%3A+Producer+ID+generation+in+KRaft+mode
>
>
>
> --
> David Arthur
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #24

2021-05-06 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-618: Atomic commit of source connector records and offsets

2021-05-06 Thread Tom Bentley
Hi Chris,

Thanks for this KIP. I've taken an initial look and have a few questions.

1. The doc for exactly.once.source.enabled says "Note that this must be
enabled on every worker in a cluster in order for exactly-once delivery to
be guaranteed, and that some source connectors may still not be able to
provide exactly-once delivery guarantees even with this support enabled."
a) Could we detect when only some workers in the cluster had support
enabled, and make it apparent that exactly-once wasn't guaranteed?
b) I'm wondering how users will be able to reason about when a connector is
really giving them exactly-once. I think this is defined in the limitations
section much later on, right? It seems to require somewhat detailed
knowledge of how the connector is implemented. And right now there's no
standard way for a connector author to advertise their connector as being
compatible. I wonder if we could tackle this in a compatible way using an
annotation on the Connector class. That shouldn't cause problems for older
versions of Connect from running connectors with the annotation (it's not
an error for an annotation type to not be present at runtime), but would
allow the support for exactly-once to be apparent and perhaps even exposed
through the connector status REST endpoint. It completely relies on the
connector author honouring the contract, of course, but it doesn't have the
compatibility problems of using a marker interface, for example. (A sketch of
what such an annotation could look like follows after question 4 below.)

2. About the post-transaction offset commit: " This will be handled on a
separate thread from the task’s work and offset commit threads, and should
not block or interfere with the task at all." If there's no blocking then
how can we be sure that the write to the global offsets topic ever actually
happens? If it never happens then presumably in case of a hard downgrade we
could see arbitrarily many duplicates? I don't necessarily see this as a
show-stopper, more I'm trying to understand what's possible with this
design.

3. What authorization is needed for Admin.fenceProducers()?

4. Maybe I missed it, but when a per-connector offsets storage topic is
created implicitly what will it be called?
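
Regarding 1b, here is a hypothetical sketch of what such an annotation could 
look like. This is not part of the Connect API; the annotation name and 
attribute are purely illustrative.

{code:java}
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Hypothetical marker for connector classes whose implementation honours the
 * exactly-once contract. Runtimes that don't know the annotation type simply
 * ignore it when inspecting the class, so adding it stays compatible with
 * older Connect versions, unlike introducing a marker interface.
 */
@Documented
@Retention(RetentionPolicy.RUNTIME) // visible at runtime so the worker/REST API could surface it
@Target(ElementType.TYPE)
public @interface ExactlyOnceSupported {
  /** Free-form note, e.g. configuration the guarantee depends on. */
  String value() default "";
}

// Illustrative usage on a (hypothetical) source connector class:
//
// @ExactlyOnceSupported("requires some.connector.specific.setting=true")
// public class MySourceConnector extends SourceConnector { ... }
{code}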

Cheers,

Tom


On Tue, May 4, 2021 at 10:26 PM Chris Egerton 
wrote:

> Hi all,
>
> Good news everyone! I've reworked the design one more time and hopefully
> some of the improvements here should make the proposal more palatable.
> TL;DR:
>
> - Rolling upgrades are now possible, in at most two phases; workers will
> first have to be given a binary upgrade to 3.0 (the targeted version for
> this feature) which can be a rolling upgrade, and then a rolling upgrade to
> enable exactly-once source support in a cluster should be possible with no
> anticipated downtime for source connectors or their tasks
> - Offset topic migration is completely removed in favor of fallback to the
> global offsets topic (much simpler!)
> - One backwards-incompatible change is introduced: the leader will be
> required to use a transactional producer for writes to the config topic
> regardless of whether exactly-once support is enabled on the cluster.
> Technically we could gate this behind a config property but since the
> benefits of a transactional producer actually extend beyond exactly-once
> source support (we can now ensure that there's only one writer to the
> config topic at any given time, which isn't guaranteed with the current
> model) and the cost to accommodate it is fairly low (a handful of
> well-defined and limited-scope ACLs), I erred on the side of keeping things
> simple
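
To illustrate the "single writer" property mentioned in the last bullet, here 
is a minimal sketch of how a transactional producer fences older instances that 
share the same transactional.id. The bootstrap address, transactional id, topic 
name, and record contents are assumptions for illustration, not the actual 
Connect internals.

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SingleWriterSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    // Every would-be writer uses the same transactional.id; calling
    // initTransactions() on the newest instance fences all earlier ones, whose
    // next transactional operation fails with ProducerFencedException.
    props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "connect-cluster-config-writer"); // illustrative id

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      producer.initTransactions();
      producer.beginTransaction();
      producer.send(new ProducerRecord<>("connect-configs", "connector-foo", "{\"example\": true}"));
      producer.commitTransaction();
    }
  }
}
{code}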
>
> Looking forward to the next round of review and really hoping we can get
> the ball rolling in time for this to land with 3.0!
>
> Cheers,
>
> Chris
>
> On Mon, Apr 12, 2021 at 7:51 AM Chris Egerton  wrote:
>
> > Hi Randall,
> >
> > After thinking things over carefully, I've done some reworking of the
> > design. Instead of performing zombie fencing during rebalance, the leader
> > will expose an internal REST endpoint that will allow workers to request
> a
> > round of zombie fencing on demand, at any time. Workers will then hit
> this
> > endpoint after starting connectors and after task config updates for
> > connectors are detected; the precise details of this are outlined in the
> > KIP. If a round of fencing should fail for any reason, the worker will be
> > able to mark its Connector failed and, if the user wants to retry, they
> can
> > simply restart the Connector via the REST API (specifically, the POST
> > /connectors/{connector}/restart endpoint).
> >
> > The idea I'd been playing with to allow workers to directly write to the
> > config topic seemed promising at first, but it allowed things to get
> pretty
> > hairy for users if any kind of rebalancing bug took place and two workers
> > believed they owned the same Connector object.
> >
> > I hope this answers any outstanding questions and look forward to your
> > thoughts.
> >
> > Cheers,
> >
> > Chris
> >
> > On Mon, Mar 22, 2021 at 4:38 PM Chris 

[jira] [Resolved] (KAFKA-12752) CVE-2021-28168 upgrade jersey to 2.34 or 3.02

2021-05-06 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-12752.
---
Fix Version/s: 2.8.1
   2.7.2
   3.0.0
 Reviewer: Manikumar
   Resolution: Fixed

> CVE-2021-28168 upgrade jersey to 2.34 or 3.02
> -
>
> Key: KAFKA-12752
> URL: https://issues.apache.org/jira/browse/KAFKA-12752
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: John Stacy
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: CVE, security
> Fix For: 3.0.0, 2.7.2, 2.8.1
>
>
> [https://nvd.nist.gov/vuln/detail/CVE-2021-28168]
> CVE-2021-28168 affects jersey versions <=2.33, <=3.0.1. Upgrading to 2.34 or 
> 3.0.2 should resolve the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-722: Enable connector client overrides by default

2021-05-06 Thread Ryanne Dolan
+1 (non-binding) Thanks!

Ryanne

On Wed, May 5, 2021, 4:04 PM Randall Hauch  wrote:

> I'd like to start a vote on KIP-722:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-722%3A+Enable+connector+client+overrides+by+default
>
> +1 (binding) from myself.
>
> Thanks, and best regards!
>
> Randall
>


[jira] [Created] (KAFKA-12758) Create a new `server-common` module and move ApiMessageAndVersion, RecordSerde, AbstractApiMessageSerde, and BytesApiMessageSerde to that module.

2021-05-06 Thread Satish Duggana (Jira)
Satish Duggana created KAFKA-12758:
--

 Summary: Create a new `server-common` module and move 
ApiMessageAndVersion, RecordSerde, AbstractApiMessageSerde, and 
BytesApiMessageSerde to that module.
 Key: KAFKA-12758
 URL: https://issues.apache.org/jira/browse/KAFKA-12758
 Project: Kafka
  Issue Type: Sub-task
Reporter: Satish Duggana
Assignee: Satish Duggana






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12757) Move server related common classes into a separate `server-common` module.

2021-05-06 Thread Satish Duggana (Jira)
Satish Duggana created KAFKA-12757:
--

 Summary: Move server related common classes into a separate 
`server-common` module.
 Key: KAFKA-12757
 URL: https://issues.apache.org/jira/browse/KAFKA-12757
 Project: Kafka
  Issue Type: Improvement
Reporter: Satish Duggana


Move server related common classes into a separate `server-common` module.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10767) Add Unit Test cases for missing methods in ThreadCacheTest

2021-05-06 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna resolved KAFKA-10767.
---
Resolution: Fixed

> Add Unit Test cases for missing methods in ThreadCacheTest
> --
>
> Key: KAFKA-10767
> URL: https://issues.apache.org/jira/browse/KAFKA-10767
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, unit tests
>Reporter: Sagar Rao
>Assignee: Sagar Rao
>Priority: Major
>  Labels: newbie
> Fix For: 3.0.0
>
>
> During the code review for KIP-614, it was noticed that some methods in 
> ThreadCache don't have unit tests. Need to identify them and add unit test 
> cases for them.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #101

2021-05-06 Thread Apache Jenkins Server
See