Re: request to contribute to kafka

2022-11-09 Thread Luke Chen
Hi Dan,

Done.
Thanks for the interest in Apache Kafka.

Luke

On Thu, Nov 10, 2022 at 3:41 PM Dan S  wrote:

> Hello,
>
> I would like to contribute to kafka,
>
> my wiki id, jira id, and github username are all "scanteianu"
>
> Thanks,
>
> Dan
>


Review request - PR#12753

2022-11-09 Thread Dan S
Hello,

I would really appreciate another review on
https://github.com/apache/kafka/pull/12753/files as I think it would be
great to add a bit more documentation on the behaviour of seek, as well as
some tests around invalid offsets (I found this very confusing when
developing for it).

Thanks,

Dan


request to contribute to kafka

2022-11-09 Thread Dan S
Hello,

I would like to contribute to kafka,

my wiki id, jira id, and github username are all "scanteianu"

Thanks,

Dan


Re: [DISCUSS] KIP-852 Optimize calculation of size for log in remote tier

2022-11-09 Thread Luke Chen
Hi Divij,

One more question about the metric:
I think the metric will be updated:
(1) each time we run the log retention check (that is, every
log.retention.check.interval.ms), and
(2) when a user explicitly calls getRemoteLogSize.

Is that correct?
Maybe we should add a note to the metric description; otherwise a user who
sees, say, a RemoteLogSizeBytes value of 0 will be surprised.

Otherwise, LGTM

Thank you for the KIP
Luke
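
For context, the cost being discussed is the full metadata scan that size-based retention implies today. A minimal sketch of that scan is below; the class name and the per-partition scope are illustrative, and it assumes the existing listRemoteLogSegments(TopicIdPartition, leaderEpoch) iterator and segmentSizeInBytes() accessor. The KIP's getRemoteLogSize API would let an RLMM implementation answer the same question without this scan.

```java
import java.util.Iterator;

import org.apache.kafka.common.TopicIdPartition;
import org.apache.kafka.server.log.remote.storage.RemoteLogMetadataManager;
import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata;
import org.apache.kafka.server.log.remote.storage.RemoteStorageException;

// Sketch only: the per-partition metadata scan that size-based retention
// currently requires on every check.
public final class RemoteLogSizeScan {

    // Sums segmentSizeInBytes() over every segment returned for the given
    // partition and leader epoch -- I/O proportional to the number of segments.
    public static long totalSizeBytes(RemoteLogMetadataManager rlmm,
                                      TopicIdPartition partition,
                                      int leaderEpoch) throws RemoteStorageException {
        long total = 0L;
        Iterator<RemoteLogSegmentMetadata> segments =
                rlmm.listRemoteLogSegments(partition, leaderEpoch);
        while (segments.hasNext()) {
            total += segments.next().segmentSizeInBytes();
        }
        return total;
    }
}
```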

On Thu, Nov 10, 2022 at 2:55 AM Jun Rao  wrote:

> Hi, Divij,
>
> Thanks for the explanation.
>
> 1. Hmm, the default implementation of RLMM does local caching, right?
> Currently, we also cache all segment metadata in the brokers without
> KIP-405. Do you see a need to change that?
>
> 2,3,4: Yes, your explanation makes sense. However,
> currently, RemoteLogMetadataManager.listRemoteLogSegments() doesn't specify
> a particular order of the iterator. Do you intend to change that?
>
> Thanks,
>
> Jun
>
> On Tue, Nov 8, 2022 at 3:31 AM Divij Vaidya 
> wrote:
>
> > Hey Jun
> >
> > Thank you for your comments.
> >
> > *1. "RLMM implementor could ensure that listRemoteLogSegments() is fast"*
> > This would be ideal, but pragmatically it is difficult to ensure that
> > listRemoteLogSegments() is fast. The possibility of a very large number of
> > segments (much larger than what Kafka handles with local storage today)
> > makes it infeasible to adopt strategies such as local caching to improve
> > the performance of listRemoteLogSegments. Apart from caching (which won't
> > work due to size limitations), I can't think of other strategies that
> > eliminate the need for IO operations proportional to the total number of
> > segments. Please advise if you have something in mind.
> >
> > 2.  "*If the size exceeds the retention size, we need to determine the
> > subset of segments to delete to bring the size within the retention
> limit.
> > Do we need to call RemoteLogMetadataManager.listRemoteLogSegments() to
> > determine that?"*
> > Yes, we need to call listRemoteLogSegments() to determine which segments
> > should be deleted. But there is a difference with the use case we are
> > trying to optimize with this KIP. To determine the subset of segments
> which
> > would be deleted, we only read metadata for segments which would be
> deleted
> > via the listRemoteLogSegments(). But to determine the totalLogSize, which
> > is required every time retention logic based on size executes, we read
> > metadata of *all* the segments in remote storage. Hence, the number of
> > results returned by *RemoteLogMetadataManager.listRemoteLogSegments() *is
> > different when we are calculating totalLogSize vs. when we are
> determining
> > the subset of segments to delete.
> >
> > 3.
> > *"Also, what about time-based retention? To make that efficient, do we
> need
> > to make some additional interface changes?"*
> >
> > No. Note that time complexity
> > to determine the segments for retention is different for time based vs.
> > size based. For time based, the time complexity is a function of the
> number
> > of segments which are "eligible for deletion" (since we only read
> metadata
> > for segments which would be deleted) whereas in size based retention, the
> > time complexity is a function of "all segments" available in remote
> storage
> > (metadata of all segments needs to be read to calculate the total size).
> As
> > you may observe, this KIP will bring the time complexity for both time
> > based retention & size based retention to the same function.
> >
> > 4. Also, please note that this new API introduced in this KIP also
> enables
> > us to provide a metric for total size of data stored in remote storage.
> > Without the API, calculation of this metric will become very expensive
> with
> > *listRemoteLogSegments().*
> > I understand that your motivation here is to avoid polluting the
> interface
> > with optimization specific APIs and I will agree with that goal. But I
> > believe that this new API proposed in the KIP brings in significant
> > improvement and there is no other work around available to achieve the
> same
> > performance.
> >
> > Regards,
> > Divij Vaidya
> >
> >
> >
> > On Tue, Nov 8, 2022 at 12:12 AM Jun Rao 
> wrote:
> >
> > > Hi, Divij,
> > >
> > > Thanks for the KIP. Sorry for the late reply.
> > >
> > > The motivation of the KIP is to improve the efficiency of size based
> > > retention. I am not sure the proposed changes are enough. For example,
> if
> > > the size exceeds the retention size, we need to determine the subset of
> > > segments to delete to bring the size within the retention limit. Do we
> > need
> > > to call RemoteLogMetadataManager.listRemoteLogSegments() to determine
> > that?
> > > Also, what about time-based retention? To make that efficient, do we
> need
> > > to make some additional interface changes?
> > >
> > > An alternative approach is for the RLMM implementor to make sure
> > > that RemoteLogMetadataManager.listRemoteLogSegments() 

[jira] [Resolved] (KAFKA-13964) kafka-configs.sh end with UnsupportedVersionException when describing TLS user with quotas

2022-11-09 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-13964.
-
Resolution: Duplicate

Thanks for reporting the issue. This will be resolved by 
https://issues.apache.org/jira/browse/KAFKA-14084.

> kafka-configs.sh end with UnsupportedVersionException when describing TLS 
> user with quotas 
> ---
>
> Key: KAFKA-13964
> URL: https://issues.apache.org/jira/browse/KAFKA-13964
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, kraft
>Affects Versions: 3.2.0
> Environment: Kafka 3.2.0 running on OpenShift 4.10 in KRaft mode 
> managed by Strimzi
>Reporter: Jakub Stejskal
>Priority: Minor
>
> Usage of kafka-configs.sh ends with
> org.apache.kafka.common.errors.UnsupportedVersionException:
> The broker does not support DESCRIBE_USER_SCRAM_CREDENTIALS when describing a
> TLS user with quotas enabled.
>  
> {code:java}
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --user 
> CN=encrypted-arnost` got status code 1 and stderr: -- Error while 
> executing config command with args '--bootstrap-server localhost:9092 
> --describe --user CN=encrypted-arnost' 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnsupportedVersionException: The broker does 
> not support DESCRIBE_USER_SCRAM_CREDENTIALS{code}
> STDOUT contains all necessary data, but the script itself ends with return 
> code 1 and the error above. Scram-sha has not been configured anywhere in 
> that case (not supported by KRaft). This might be fixed by adding support for 
> scram-sha in the next version (not reproducible without KRaft enabled).
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14373) provide builders for producer/consumer

2022-11-09 Thread Daniel Scanteianu (Jira)
Daniel Scanteianu created KAFKA-14373:
-

 Summary: provide builders for producer/consumer
 Key: KAFKA-14373
 URL: https://issues.apache.org/jira/browse/KAFKA-14373
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Daniel Scanteianu


Creating a producer/consumer involves looking up the constants for Kafka 
configuration, and usually doing a lot of googling for the possible values of 
each configuration. Providing a builder with named methods to set the various 
configurations, with meaningful parameters (i.e. enums for configurations that 
can only take a few values) and good Javadoc, would make it much easier for 
developers working from IDEs (such as IntelliJ, Eclipse, NetBeans, etc.) to 
discover the various configurations and navigate to the documentation for the 
various options when creating producers/consumers.

Examples (pseudocode):

Producer:

ProducerBuilder(String bootstrapUrl) {
    ...
}

ProducerBuilder withTransactionsEnabled(String transactionalId) {
    ...
    // internally sets a flag so that transactions are initialized when the producer is built
}

Producer build() {
    ...
}

Salient consumer example:

/**
 * @param strategy if the strategy is set to NONE, an error is thrown if an
 *                 invalid offset is passed to seek (etc.); if it is set to
 *                 EARLIEST, the consumer will seek to the beginning (etc.)
 */
ConsumerBuilder withAutoOffsetResetStrategy(OffsetResetStrategy strategy) {
    ...
}

where OffsetResetStrategy is an enum with three values (EARLIEST, LATEST, NONE)
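
A minimal sketch of how such a builder could be implemented today is below. The ConsumerBuilder class itself is hypothetical (it is not part of the Kafka clients API); it only wraps the existing ConsumerConfig string constants and the real OffsetResetStrategy enum.

```java
import java.util.Locale;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.serialization.StringDeserializer;

// Hypothetical builder hiding string-constant config keys behind named, typed methods.
public final class ConsumerBuilder {

    private final Properties props = new Properties();

    public ConsumerBuilder(String bootstrapServers) {
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    }

    public ConsumerBuilder withGroupId(String groupId) {
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        return this;
    }

    // OffsetResetStrategy is a real enum (EARLIEST, LATEST, NONE) in the consumer API.
    public ConsumerBuilder withAutoOffsetResetStrategy(OffsetResetStrategy strategy) {
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, strategy.name().toLowerCase(Locale.ROOT));
        return this;
    }

    public KafkaConsumer<String, String> build() {
        return new KafkaConsumer<>(props);
    }
}
```

Callers would then write something like `new ConsumerBuilder("localhost:9092").withGroupId("example").withAutoOffsetResetStrategy(OffsetResetStrategy.EARLIEST).build()` instead of assembling a Properties object from string constants.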



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-866 ZooKeeper to KRaft Migration

2022-11-09 Thread Colin McCabe
Hi David,

Thanks for the response. Replies inline.

On Wed, Nov 9, 2022, at 08:17, David Arthur wrote:
> Colin
>
>>  Maybe zk.metadata.migration.enable ?
>
> Done. I went with "zookeeper.metadata.migration.enable" since our
> other ZK configs start with "zookeeper.*"
>
>> SImilarly, for MigrationRecord: can we rename this to 
>> ZkMigrationStateRecord? Then change MigrationState -> ZkMigrationState.
>
> Sure
>
>> With ZkMigrationStateRecord, one thing to keep in mind here is that we will 
>> eventually compact all the metadata logs into a snapshot. That snapshot will 
>> then have to keep alive the memory of the old migration. So it is not really 
>> a matter of replaying the old metadata logs (probably) but a matter of 
>> checking to see what the ZkMigrationState is, which I suppose could be 
>> Optional. If it's not Optional.empty, we already migrated 
>> / are migrating.
>
> Yea, makes sense.
>
>> For the /migration ZNode, is "last_update_time_ms" necessary? I thought ZK 
>> already tracked this information in the mzxid of the znode?
>
> Yes, Jun pointed this out previously, I missed this update in the KIP.
> Fixed now.
>
>> It is true that technically it is only needed in UMR, but I would still 
>> suggest including KRaftControllerId in LeaderAndIsrRequest because it will 
>> make debugging much easier.
>>
>> I would suggest not implementing the topic deletion state machine, but just 
>> deleting topics eagerly when in migration mode. We can implement this 
>> behavior change by keying off of whether KRaftControllerId is present in 
>> LeaderAndIsrRequest. On broker startup, we'll be sent a full 
>> LeaderAndIsrRequest and can delete stray partitions whose IDs are not as 
>> expected (again, this behavior change would only be for migration mode)
>
> Sounds good to me. Since this is somewhat of an implementation detail,
> do you think we need this included in the KIP?

Yeah, maybe we don't need to go into the delete behavior here. But I think the 
KIP should specify that we have KRaftControllerId in both UpdateMetadataRequest 
and LeaderAndIsrRequest. That will allow us to implement this behavior 
conditionally on ZK-based brokers when in dual-write mode.

>
>> For existing KRaft controllers, will 
>> kafka.controller:type=KafkaController,name=MigrationState show up as 4 
>> (MigrationFinalized)? I assume this is true, but it would be good to spell 
>> it out. Sorry if this is answered somewhere else.
>
> We discussed using 0 (None) as the value to report for original,
> un-migrated KRaft clusters. 4 (MigrationFinalized) would be for
> clusters which underwent a migration. I have some description of this
> in the table under "Migration Overview"
>

I don't feel that strongly about this, but wouldn't it be a good idea for 
MigrationState to have a different value for ZK-based clusters and KRaft-based 
clusters? If you have a bunch of clusters and you take an aggregate of this 
metric, it would be good to get a report of three numbers:
1. unupgraded ZK
2. in progress upgrades
3. kraft

I guess we could get that from examining some other metrics too, though. Not 
sure, what do you think?

>> As you point out, the ZK brokers being upgraded will need to contact the 
>> KRaft quorum in order to forward requests to there, once we are in migration 
>> mode. This raises a question: rather than changing the broker registration, 
>> can we have those brokers send an RPC to the kraft controller quorum 
>> instead? This would serve to confirm that they can reach the quorum. Then 
>> the quorum could wait for all of the brokers to check in this way before 
>> starting the migration (It would know all the brokers by looking at /brokers)
>
> One of the motivations I had for putting the migration details in the
> broker registration is that it removes ordering constraints between
> the brokers and controllers when starting the migration. If we set the
> brokers in migration mode before the KRaft quorum is available, they
> will just carry on until the KRaft controller takes over the
> controller leadership and starts sending UMR/LISR.
>

I agree that any scheme that requires complicated startup ordering is not a 
good idea. I was more thinking about having the zk-based brokers periodically 
send some kind of "I'm here" upgrade heartbeat to the controller quorum.

> I wonder if we could infer reachability of the new KRaft controller by
> the brokers through ApiVersions? Once taking leadership, the active
> KRaft controller could monitor its NodeApiVersions to see that each of
> the ZK brokers had reached it. Would this be enough to verify
> reachability?
>

If the new KRaft controller has already taken leadership, it's too late to do 
anything about brokers that can't reach it. Unless we want to implement 
automatic rollback to the pre-upgrade state, which sounds quite complex and 
error prone.

So, to sum up: my proposal was that the ZK based brokers periodically (like 
maybe every 10 seconds) try to contact the quorum with an RPC containing 

Re: [DISCUSS] KIP-883: Add delete callback method to Connector API

2022-11-09 Thread Greg Harris
Hi Hector,

Thanks for the KIP!

This is certainly missing functionality from the native Connect framework,
and we should try to make it possible to inform connectors about this part
of their lifecycle.
However, as with most functionality that was left out of the initial
implementation of the framework, the details are more challenging to work
out.

1. What happens when the destroy call throws an error, how does the
framework respond?

This is unspecified in the KIP, and it appears that your proposed changes
could cause the herder to fail.
From the perspective of operators & connector developers, what is a
reasonable expectation to have for failure of a destroy?
I could see operators wanting both a graceful-delete to make use of this
new feature, and a force-delete for when the graceful-delete fails.
A connector developer could choose to swallow all errors encountered, or
fail-fast to indicate to the operator that there is an issue with the
graceful-delete flow.
If the alternative is crashing the herder, connector developers may choose
to hide serious errors, which is undesirable.

2. What happens when the destroy() call takes a long time to complete, or
is interrupted?

It appears that your implementation serially destroy()s each appropriate
connector, and may prevent the herder thread from making progress while the
operation is ongoing.
We have previously had to patch Connect to perform all connector and task
operations on a background thread, because some connector method
implementations can stall indefinitely.
Connect also has the notion of "cancelling" a connector/task if a graceful
shutdown timeout operation takes too long. Perhaps some of that design or
machinery may be useful to protect this method call as well.

More specific to the destroy() call itself, what happens when a connector
completes part of a destroy operation and then cannot complete the
remainder, either due to timing out or a worker crashing?
What is the contract with the connector developer about this method? Is the
destroy() only started exactly once during the lifetime of the connector,
or may it be retried?

3. What should be considered a reasonable custom implementation of the
destroy() call? What resources should it clean up by default?

I think we can broadly categorize the state a connector mutates among the
following
* Framework-managed state (e.g. source offsets, consumer offsets)
* Implementation detail state (e.g. debezium db history topic, audit
tables, temporary accounts)
* Third party system data (e.g. the actual data being written by a sink
connector)
* Third party system metadata (e.g. tables in a database, delivery
receipts, permissions)

I think it's apparent that the framework-managed state cannot/should not be
interacted with by the destroy() call. However, the framework could be
changed to clean up these resources at the same time that destroy() is
called. Is that out-of-scope of this proposal, and better handled by manual
intervention?
From the text of the KIP, I think it explicitly includes the Implementation
detail state, which should not be depended on externally and should be safe
to clean up during a destroy(). I think this is completely reasonable.
Are the third-party data and metadata out-of-scope for this proposal? Can
we officially recommend against it, or should we accommodate users and
connector developers that wish to clean up data/metadata during destroy()?

4. How should connector implementations of destroy handle backwards
compatibility?

In terms of backward-compatibility for the framework vs connector versions,
I think the default-noop method is very reasonable.
However, what happens when someone upgrades from a version of a connector
without a destroy() implementation to one with an implementation, and
maintain backwards compatibility?
To replicate the same behavior, the connector might include something like
an `enable.cleanup` config which allows users to opt-in to the new
behavior. This could mean the proliferation of many different
configurations to handle this behavior.
Maybe we can provide some recommendations to developers, or some mechanism
to standardize this opt-in behavior.

I'm interested to hear if you have any experience with the above, if you've
experimented with this feature in your fork.

Thanks,
Greg


On Thu, Nov 3, 2022 at 11:55 AM Hector Geraldino (BLOOMBERG/ 919 3RD A) <
hgerald...@bloomberg.net> wrote:

> Hi everyone,
>
> I've submitted KIP-883, which introduces a callback to the public
> Connector API called when deleting a connector:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-883%3A+Add+delete+callback+method+to+Connector+API
>
> It adds a new `deleted()` method (open to better naming suggestions) to
> the org.apache.kafka.connect.connector.Connector abstract class, which will
> be invoked by connect Workers when a connector is being deleted.
>
> Feedback and comments are welcome.
>
> Thank you!
> Hector
>
>
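
For readers who have not opened the KIP, the shape of the proposed change is roughly the following default no-op hook on the Connector base class. The Javadoc wording here is illustrative, and the open questions above (error handling, timeouts, retries) are not settled by it.

```java
// Abridged sketch of the KIP's proposed change to
// org.apache.kafka.connect.connector.Connector; the existing lifecycle
// methods are omitted here.
public abstract class Connector {

    // ... existing methods: initialize(...), start(...), taskClass(),
    // taskConfigs(...), stop(), config(), version() ...

    /**
     * Invoked by the worker when the connector is deleted, giving the
     * implementation a chance to clean up connector-owned resources.
     * Defaults to a no-op so existing connectors are unaffected.
     */
    public void deleted() {
        // no-op by default
    }
}
```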


[jira] [Created] (KAFKA-14372) RackAwareReplicaSelector should choose a replica from the isr

2022-11-09 Thread Jeff Kim (Jira)
Jeff Kim created KAFKA-14372:


 Summary: RackAwareReplicaSelector should choose a replica from the 
isr
 Key: KAFKA-14372
 URL: https://issues.apache.org/jira/browse/KAFKA-14372
 Project: Kafka
  Issue Type: Bug
Reporter: Jeff Kim
Assignee: Jeff Kim


The default replica selector chooses a replica based solely on whether the 
broker.rack matches the client.rack in the fetch request. Even if the broker 
matching the rack is unavailable, the consumer will go to that broker. The 
selector should instead choose a broker from the ISR.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-852 Optimize calculation of size for log in remote tier

2022-11-09 Thread Jun Rao
Hi, Divij,

Thanks for the explanation.

1. Hmm, the default implementation of RLMM does local caching, right?
Currently, we also cache all segment metadata in the brokers without
KIP-405. Do you see a need to change that?

2,3,4: Yes, your explanation makes sense. However,
currently, RemoteLogMetadataManager.listRemoteLogSegments() doesn't specify
a particular order of the iterator. Do you intend to change that?

Thanks,

Jun

On Tue, Nov 8, 2022 at 3:31 AM Divij Vaidya  wrote:

> Hey Jun
>
> Thank you for your comments.
>
> *1. "RLMM implementor could ensure that listRemoteLogSegments() is fast"*
> This would be ideal, but pragmatically it is difficult to ensure that
> listRemoteLogSegments() is fast. The possibility of a very large number of
> segments (much larger than what Kafka handles with local storage today)
> makes it infeasible to adopt strategies such as local caching to improve
> the performance of listRemoteLogSegments. Apart from caching (which won't
> work due to size limitations), I can't think of other strategies that
> eliminate the need for IO operations proportional to the total number of
> segments. Please advise if you have something in mind.
>
> 2.  "*If the size exceeds the retention size, we need to determine the
> subset of segments to delete to bring the size within the retention limit.
> Do we need to call RemoteLogMetadataManager.listRemoteLogSegments() to
> determine that?"*
> Yes, we need to call listRemoteLogSegments() to determine which segments
> should be deleted. But there is a difference with the use case we are
> trying to optimize with this KIP. To determine the subset of segments which
> would be deleted, we only read metadata for segments which would be deleted
> via the listRemoteLogSegments(). But to determine the totalLogSize, which
> is required every time retention logic based on size executes, we read
> metadata of *all* the segments in remote storage. Hence, the number of
> results returned by *RemoteLogMetadataManager.listRemoteLogSegments() *is
> different when we are calculating totalLogSize vs. when we are determining
> the subset of segments to delete.
>
> 3.
> *"Also, what about time-based retention? To make that efficient, do we need
> to make some additional interface changes?"*
>
> No. Note that time complexity
> to determine the segments for retention is different for time based vs.
> size based. For time based, the time complexity is a function of the number
> of segments which are "eligible for deletion" (since we only read metadata
> for segments which would be deleted) whereas in size based retention, the
> time complexity is a function of "all segments" available in remote storage
> (metadata of all segments needs to be read to calculate the total size). As
> you may observe, this KIP will bring the time complexity for both time
> based retention & size based retention to the same function.
>
> 4. Also, please note that this new API introduced in this KIP also enables
> us to provide a metric for total size of data stored in remote storage.
> Without the API, calculation of this metric will become very expensive with
> *listRemoteLogSegments().*
> I understand that your motivation here is to avoid polluting the interface
> with optimization specific APIs and I will agree with that goal. But I
> believe that this new API proposed in the KIP brings in significant
> improvement and there is no other work around available to achieve the same
> performance.
>
> Regards,
> Divij Vaidya
>
>
>
> On Tue, Nov 8, 2022 at 12:12 AM Jun Rao  wrote:
>
> > Hi, Divij,
> >
> > Thanks for the KIP. Sorry for the late reply.
> >
> > The motivation of the KIP is to improve the efficiency of size based
> > retention. I am not sure the proposed changes are enough. For example, if
> > the size exceeds the retention size, we need to determine the subset of
> > segments to delete to bring the size within the retention limit. Do we
> need
> > to call RemoteLogMetadataManager.listRemoteLogSegments() to determine
> that?
> > Also, what about time-based retention? To make that efficient, do we need
> > to make some additional interface changes?
> >
> > An alternative approach is for the RLMM implementor to make sure
> > that RemoteLogMetadataManager.listRemoteLogSegments() is fast (e.g., with
> > local caching). This way, we could keep the interface simple. Have we
> > considered that?
> >
> > Thanks,
> >
> > Jun
> >
> > On Wed, Sep 28, 2022 at 6:28 AM Divij Vaidya 
> > wrote:
> >
> > > Hey folks
> > >
> > > Does anyone else have any thoughts on this before I propose this for a
> > > vote?
> > >
> > > --
> > > Divij Vaidya
> > >
> > >
> > >
> > > On Mon, Sep 5, 2022 at 12:57 PM Satish Duggana <
> satish.dugg...@gmail.com
> > >
> > > wrote:
> > >
> > > > Thanks for the KIP Divij!
> > > >
> > > > This is a nice improvement to avoid recalculation of size. Customized
> > > RLMMs
> > > > can implement the best possible approach by caching or 

Re: [DISCUSS] KIP-866 ZooKeeper to KRaft Migration

2022-11-09 Thread Jun Rao
Hi, David,

Thanks for the reply.

Now that we have added KRaftControllerId to both UpdateMetadataRequest and
LeaderAndIsrRequest, should we add it to StopReplicaRequest to make it more
consistent?

Thanks,

Jun

On Wed, Nov 9, 2022 at 8:17 AM David Arthur  wrote:

> Jun, I've updated the doc with clarification of combined mode under
> the ApiVersionsResponse section. I also added your suggestion to
> rename the config to something more explicit. Colin had some feedback
> in the VOTE thread which I've also incorporated.
>
> Thanks!
> David
>
> On Tue, Nov 8, 2022 at 1:18 PM Jun Rao  wrote:
> >
> > Hi, David,
> >
> > Thanks for the reply.
> >
> > 20/21. Thanks for the explanation. The suggestion sounds good. Could you
> > include that in the doc?
> >
> > 40. metadata.migration.enable: We may do future metadata migrations
> within
> > KRaft. Could we make the name more specific to ZK migration like
> > zookeeper.metadata.migration.enable?
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Nov 8, 2022 at 9:47 AM David Arthur  wrote:
> >
> > > Ah, sorry about that, you're right. Since we won't support ZK
> > > migrations in combined mode, is this issue avoided?
> > >
> > > Essentially, we would only set ZkMigrationReady in ApiVersionsResponse
> if
> > >
> > > * process.roles=controller
> > > * kafka.metadata.migration.enabled=true
> > >
> > > Also, the following would be an invalid config that should prevent
> startup:
> > >
> > > * process.roles=broker,controller
> > > * kafka.metadata.migration.enabled=true
> > >
> > > Does this seem reasonable?
> > >
> > > Thanks!
> > > David
> > >
> > > On Tue, Nov 8, 2022 at 11:12 AM Jun Rao 
> wrote:
> > > >
> > > > Hi, David,
> > > >
> > > > I am not sure that we are fully settled on the following.
> > > >
> > > > 20/21. Since separate listeners are optional, it seems that the
> broker
> > > > can't distinguish between ApiVersion requests coming from the client
> or
> > > > other brokers. This means the clients will get ZkMigrationReady in
> the
> > > > ApiVersion response, which is weird.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Tue, Nov 8, 2022 at 7:18 AM David Arthur 
> wrote:
> > > >
> > > > > Thanks for the discussion everyone, I'm going to move ahead with
> the
> > > > > vote for this KIP.
> > > > >
> > > > > -David
> > > > >
> > > > > On Thu, Nov 3, 2022 at 1:20 PM Jun Rao 
> > > wrote:
> > > > > >
> > > > > > Hi, David,
> > > > > >
> > > > > > Thanks for the reply.
> > > > > >
> > > > > > 20/21 Yes, but separate listeners are optional. It's possible
> for the
> > > > > nodes
> > > > > > to use a single port for both client and server side
> communications.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Jun
> > > > > >
> > > > > > On Thu, Nov 3, 2022 at 9:59 AM David Arthur
> > > > > >  wrote:
> > > > > >
> > > > > > > 20/21, in combined mode we still have a separate listener for
> the
> > > > > > > controller APIs, e.g.,
> > > > > > >
> > > > > > > listeners=PLAINTEXT://:9092,CONTROLLER://:9093
> > > > > > >
> > > > > > > inter.broker.listener.name=PLAINTEXT
> > > > > > >
> > > > > > > controller.listener.names=CONTROLLER
> > > > > > >
> > > > > > > advertised.listeners=PLAINTEXT://localhost:9092
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Clients still talk to the broker through the advertised
> listener,
> > > and
> > > > > only
> > > > > > > brokers and other controllers will communicate over the
> controller
> > > > > > > listener.
> > > > > > >
> > > > > > > 40. Sounds good, I updated the KIP
> > > > > > >
> > > > > > > Thanks!
> > > > > > > David
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Nov 3, 2022 at 12:14 PM Jun Rao
> 
> > > > > wrote:
> > > > > > >
> > > > > > > > Hi, David,
> > > > > > > >
> > > > > > > > Thanks for the reply.
> > > > > > > >
> > > > > > > > 20/21. When KRaft runs in the combined mode, does a
> controller
> > > know
> > > > > > > whether
> > > > > > > > an ApiRequest is from a client or another broker?
> > > > > > > >
> > > > > > > > 40. Adding a "None" state sounds reasonable.
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > >
> > > > > > > > Jun
> > > > > > > >
> > > > > > > > On Thu, Nov 3, 2022 at 8:39 AM David Arthur
> > > > > > > >  wrote:
> > > > > > > >
> > > > > > > > > Jun,
> > > > > > > > >
> > > > > > > > > 20/21 If we use a tagged field, then I don't think clients
> > > need to
> > > > > be
> > > > > > > > > concerned with it, right?. In ApiVersionsResponse sent by
> > > brokers
> > > > > to
> > > > > > > > > clients, this field would be omitted. Clients can't connect
> > > > > directly to
> > > > > > > > the
> > > > > > > > > KRaft controller nodes. Also, we have a precedent of
> sending
> > > > > controller
> > > > > > > > > node state between controllers through ApiVersions
> > > > > ("metadata.version"
> > > > > > > > > feature), so I think it fits well.
> > > > > > > > >
> > > > > > > > > 40. For new KRaft clusters, we could add another state to
> > > 

[jira] [Created] (KAFKA-14371) quorum-state file contains empty/unused clusterId field

2022-11-09 Thread Ron Dagostino (Jira)
Ron Dagostino created KAFKA-14371:
-

 Summary: quorum-state file contains empty/unused clusterId field
 Key: KAFKA-14371
 URL: https://issues.apache.org/jira/browse/KAFKA-14371
 Project: Kafka
  Issue Type: Improvement
Reporter: Ron Dagostino


The KRaft controller's quorum-state file 
`$LOG_DIR/__cluster_metadata-0/quorum-state` contains an empty clusterId value. 
 This value is never non-empty, and it is never used after it is written and 
then subsequently read.  This is a cosmetic issue; it would be best if this 
value did not exist there.  The cluster ID already exists in the 
`$LOG_DIR/meta.properties` file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-866 ZooKeeper to KRaft Migration

2022-11-09 Thread David Arthur
Colin

>  Maybe zk.metadata.migration.enable ?

Done. I went with "zookeeper.metadata.migration.enable" since our
other ZK configs start with "zookeeper.*"

> SImilarly, for MigrationRecord: can we rename this to ZkMigrationStateRecord? 
> Then change MigrationState -> ZkMigrationState.

Sure

> With ZkMigrationStateRecord, one thing to keep in mind here is that we will 
> eventually compact all the metadata logs into a snapshot. That snapshot will 
> then have to keep alive the memory of the old migration. So it is not really 
> a matter of replaying the old metadata logs (probably) but a matter of 
> checking to see what the ZkMigrationState is, which I suppose could be 
> Optional. If it's not Optional.empty, we already migrated / 
> are migrating.

Yea, makes sense.

> For the /migration ZNode, is "last_update_time_ms" necessary? I thought ZK 
> already tracked this information in the mzxid of the znode?

Yes, Jun pointed this out previously, I missed this update in the KIP.
Fixed now.

> It is true that technically it is only needed in UMR, but I would still 
> suggest including KRaftControllerId in LeaderAndIsrRequest because it will 
> make debugging much easier.
>
> I would suggest not implementing the topic deletion state machine, but just 
> deleting topics eagerly when in migration mode. We can implement this 
> behavior change by keying off of whether KRaftControllerId is present in 
> LeaderAndIsrRequest. On broker startup, we'll be sent a full 
> LeaderAndIsrRequest and can delete stray partitions whose IDs are not as 
> expected (again, this behavior change would only be for migration mode)

Sounds good to me. Since this is somewhat of an implementation detail,
do you think we need this included in the KIP?

> For existing KRaft controllers, will 
> kafka.controller:type=KafkaController,name=MigrationState show up as 4 
> (MigrationFinalized)? I assume this is true, but it would be good to spell it 
> out. Sorry if this is answered somewhere else.

We discussed using 0 (None) as the value to report for original,
un-migrated KRaft clusters. 4 (MigrationFinalized) would be for
clusters which underwent a migration. I have some description of this
in the table under "Migration Overview"

> As you point out, the ZK brokers being upgraded will need to contact the 
> KRaft quorum in order to forward requests to there, once we are in migration 
> mode. This raises a question: rather than changing the broker registration, 
> can we have those brokers send an RPC to the kraft controller quorum instead? 
> This would serve to confirm that they can reach the quorum. Then the quorum 
> could wait for all of the brokers to check in this way before starting the 
> migration (It would know all the brokers by looking at /brokers)

One of the motivations I had for putting the migration details in the
broker registration is that it removes ordering constraints between
the brokers and controllers when starting the migration. If we set the
brokers in migration mode before the KRaft quorum is available, they
will just carry on until the KRaft controller takes over the
controller leadership and starts sending UMR/LISR.

I wonder if we could infer reachability of the new KRaft controller by
the brokers through ApiVersions? Once taking leadership, the active
KRaft controller could monitor its NodeApiVersions to see that each of
the ZK brokers had reached it. Would this be enough to verify
reachability?

-David


On Tue, Nov 8, 2022 at 6:15 PM Colin McCabe  wrote:
>
> Hi David,
>
> Looks great. Some questions:
>
> I agree with Jun that it would be good to rename metadata.migration.enable to 
> something more zk-specific. Maybe zk.metadata.migration.enable ?
>
> SImilarly, for MigrationRecord: can we rename this to ZkMigrationStateRecord? 
> Then change MigrationState -> ZkMigrationState.
>
> With ZkMigrationStateRecord, one thing to keep in mind here is that we will 
> eventually compact all the metadata logs into a snapshot. That snapshot will 
> then have to keep alive the memory of the old migration. So it is not really 
> a matter of replaying the old metadata logs (probably) but a matter of 
> checking to see what the ZkMigrationState is, which I suppose could be 
> Optional. If it's not Optional.empty, we already migrated / 
> are migrating.
>
> For the /migration ZNode, is "last_update_time_ms" necessary? I thought ZK 
> already tracked this information in the mzxid of the znode?
>
> It is true that technically it is only needed in UMR, but I would still 
> suggest including KRaftControllerId in LeaderAndIsrRequest because it will 
> make debugging much easier.
>
> I would suggest not implementing the topic deletion state machine, but just 
> deleting topics eagerly when in migration mode. We can implement this 
> behavior change by keying off of whether KRaftControllerId is present in 
> LeaderAndIsrRequest. On broker startup, we'll be sent a full 
> LeaderAndIsrRequest and can delete stray partitions whose 

Re: [DISCUSS] KIP-866 ZooKeeper to KRaft Migration

2022-11-09 Thread David Arthur
Jun, I've updated the doc with clarification of combined mode under
the ApiVersionsResponse section. I also added your suggestion to
rename the config to something more explicit. Colin had some feedback
in the VOTE thread which I've also incorporated.

Thanks!
David

On Tue, Nov 8, 2022 at 1:18 PM Jun Rao  wrote:
>
> Hi, David,
>
> Thanks for the reply.
>
> 20/21. Thanks for the explanation. The suggestion sounds good. Could you
> include that in the doc?
>
> 40. metadata.migration.enable: We may do future metadata migrations within
> KRaft. Could we make the name more specific to ZK migration like
> zookeeper.metadata.migration.enable?
>
> Thanks,
>
> Jun
>
> On Tue, Nov 8, 2022 at 9:47 AM David Arthur  wrote:
>
> > Ah, sorry about that, you're right. Since we won't support ZK
> > migrations in combined mode, is this issue avoided?
> >
> > Essentially, we would only set ZkMigrationReady in ApiVersionsResponse if
> >
> > * process.roles=controller
> > * kafka.metadata.migration.enabled=true
> >
> > Also, the following would be an invalid config that should prevent startup:
> >
> > * process.roles=broker,controller
> > * kafka.metadata.migration.enabled=true
> >
> > Does this seem reasonable?
> >
> > Thanks!
> > David
> >
> > On Tue, Nov 8, 2022 at 11:12 AM Jun Rao  wrote:
> > >
> > > Hi, David,
> > >
> > > I am not sure that we are fully settled on the following.
> > >
> > > 20/21. Since separate listeners are optional, it seems that the broker
> > > can't distinguish between ApiVersion requests coming from the client or
> > > other brokers. This means the clients will get ZkMigrationReady in the
> > > ApiVersion response, which is weird.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Tue, Nov 8, 2022 at 7:18 AM David Arthur  wrote:
> > >
> > > > Thanks for the discussion everyone, I'm going to move ahead with the
> > > > vote for this KIP.
> > > >
> > > > -David
> > > >
> > > > On Thu, Nov 3, 2022 at 1:20 PM Jun Rao 
> > wrote:
> > > > >
> > > > > Hi, David,
> > > > >
> > > > > Thanks for the reply.
> > > > >
> > > > > 20/21 Yes, but separate listeners are optional. It's possible for the
> > > > nodes
> > > > > to use a single port for both client and server side communications.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > > On Thu, Nov 3, 2022 at 9:59 AM David Arthur
> > > > >  wrote:
> > > > >
> > > > > > 20/21, in combined mode we still have a separate listener for the
> > > > > > controller APIs, e.g.,
> > > > > >
> > > > > > listeners=PLAINTEXT://:9092,CONTROLLER://:9093
> > > > > >
> > > > > > inter.broker.listener.name=PLAINTEXT
> > > > > >
> > > > > > controller.listener.names=CONTROLLER
> > > > > >
> > > > > > advertised.listeners=PLAINTEXT://localhost:9092
> > > > > >
> > > > > >
> > > > > >
> > > > > > Clients still talk to the broker through the advertised listener,
> > and
> > > > only
> > > > > > brokers and other controllers will communicate over the controller
> > > > > > listener.
> > > > > >
> > > > > > 40. Sounds good, I updated the KIP
> > > > > >
> > > > > > Thanks!
> > > > > > David
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Thu, Nov 3, 2022 at 12:14 PM Jun Rao 
> > > > wrote:
> > > > > >
> > > > > > > Hi, David,
> > > > > > >
> > > > > > > Thanks for the reply.
> > > > > > >
> > > > > > > 20/21. When KRaft runs in the combined mode, does a controller
> > know
> > > > > > whether
> > > > > > > an ApiRequest is from a client or another broker?
> > > > > > >
> > > > > > > 40. Adding a "None" state sounds reasonable.
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Jun
> > > > > > >
> > > > > > > On Thu, Nov 3, 2022 at 8:39 AM David Arthur
> > > > > > >  wrote:
> > > > > > >
> > > > > > > > Jun,
> > > > > > > >
> > > > > > > > 20/21 If we use a tagged field, then I don't think clients
> > need to
> > > > be
> > > > > > > > concerned with it, right?. In ApiVersionsResponse sent by
> > brokers
> > > > to
> > > > > > > > clients, this field would be omitted. Clients can't connect
> > > > directly to
> > > > > > > the
> > > > > > > > KRaft controller nodes. Also, we have a precedent of sending
> > > > controller
> > > > > > > > node state between controllers through ApiVersions
> > > > ("metadata.version"
> > > > > > > > feature), so I think it fits well.
> > > > > > > >
> > > > > > > > 40. For new KRaft clusters, we could add another state to
> > indicate
> > > > it
> > > > > > was
> > > > > > > > not migrated from ZK, but created as a KRaft cluster. Maybe
> > > > > > > > "kafka.controller:type=KafkaController,name=MigrationState" =>
> > > > "None" ?
> > > > > > > We
> > > > > > > > could also omit that metric for unmigrated clusters, but I'm
> > not a
> > > > fan
> > > > > > of
> > > > > > > > using the absence of a value to signal something.
> > > > > > > >
> > > > > > > > -
> > > > > > > >
> > > > > > > > Akhilesh, thanks for reviewing the KIP!
> > > > > > > >
> > > > > > > > 1. MigrationState and MetadataType are mostly the same 

Re: [DISCUSS] KIP-864: Add End-To-End Latency Metrics to Connectors

2022-11-09 Thread Mickael Maison
Hi Jorge,

Thanks for the KIP, it is a nice improvement.

1) The per transformation metrics still have a question mark next to
them in the KIP. Do you want to include them? If so we'll want to tag
them, we should be able to include the aliases in TransformationChain
and use them.

2) I see no references to predicates. If we don't want to measure
their latency, can we say it explicitly?

3) Should we have sink-record-batch-latency-avg-ms? All other metrics
have both the maximum and average values.

Thanks,
Mickael

On Thu, Oct 20, 2022 at 9:58 PM Jorge Esteban Quilcate Otoya
 wrote:
>
> Thanks, Chris! Great feedback! Please, find my comments below:
>
> On Thu, 13 Oct 2022 at 18:52, Chris Egerton  wrote:
>
> > Hi Jorge,
> >
> > Thanks for the KIP. I agree with the overall direction and think this would
> > be a nice improvement to Kafka Connect. Here are my initial thoughts on the
> > details:
> >
> > 1. The motivation section outlines the gaps in Kafka Connect's task metrics
> > nicely. I think it'd be useful to include more concrete details on why
> > these gaps need to be filled in, and in which cases additional metrics
> > would be helpful. One goal could be to provide enhanced monitoring of
> > production deployments that allows for cluster administrators to set up
> > automatic alerts for latency spikes and, if triggered, quickly identify the
> > root cause of those alerts, reducing the time to remediation. Another goal
> > could be to provide more insight to developers or cluster administrators
> > who want to do performance testing on connectors in non-production
> > environments. It may help guide our decision making process to have a
> > clearer picture of the goals we're trying to achieve.
> >
>
> Agree. The Motivation section has been updated.
> Thanks for the examples, I see both of them being covered by the KIP.
> I see how these could give us a good distinction on whether to position
> some metrics at INFO or DEBUG level.
>
>
> > 2. If we're trying to address the alert-and-diagnose use case, it'd be
> > useful to have as much information as possible at INFO level, rather than
> > forcing cluster administrators to possibly reconfigure a connector to emit
> > DEBUG or TRACE level metrics in order to diagnose a potential
> > production-impacting performance bottleneck. I can see the rationale for
> > emitting per-record metrics that track an average value at DEBUG level, but
> > for per-record metrics that track a maximum value, is there any reason not
> > to provide this information at INFO level?
> >
>
> Agreed. Though since the Max and Avg metrics are part of the same sensor
> (where the metric level is defined), both metrics get the same level.
>
>
> > 3. I'm also curious about the performance testing suggested by Yash to
> > gauge the potential impact of this change. Have you been able to do any
> > testing with your draft implementation yet?
> >
>
> No, not so far.
> I think it would be valuable to discuss the scope of this testing and maybe
> tackle it
> in a separate issue as Sensors and Metrics are used all over the place.
> My initial understanding is that these tests should be placed in the
> jmh-benchmarks[1].
> Then, we could target testing Sensors and Metrics, and validate how much
> overhead
> is added by having only Max vs Max,Avg(,Min), etc.
> On the other hand, we could extend this to Transformers or other Connect
> layers.
>
> Here are some pointers to the Sensors and Metrics implementations that
> could be considered:
> Path to metric recording:
> -
> https://github.com/apache/kafka/blob/5cab11cf525f6c06fcf9eb43f7f95ef33fe1cdbb/clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java#L195-L199
> -
> https://github.com/apache/kafka/blob/5cab11cf525f6c06fcf9eb43f7f95ef33fe1cdbb/clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java#L230-L244
>
> ```
> // increment all the stats
> for (StatAndConfig statAndConfig : this.stats) {
>statAndConfig.stat.record(statAndConfig.config(), value, timeMs);
> }
> ```
>
> SampledStats:
> - Avg:
> https://github.com/apache/kafka/blob/068ab9cefae301f3187ea885d645c425955e77d2/clients/src/main/java/org/apache/kafka/common/metrics/stats/Avg.java
> - Max:
> https://github.com/apache/kafka/blob/068ab9cefae301f3187ea885d645c425955e77d2/clients/src/main/java/org/apache/kafka/common/metrics/stats/Max.java
> - Min:
> https://github.com/apache/kafka/blob/068ab9cefae301f3187ea885d645c425955e77d2/clients/src/main/java/org/apache/kafka/common/metrics/stats/Min.java
>
> `stat#record()` is ultimately implemented by the `update` method of each SampledStat:
>
> ```Max.java
> @Override
> protected void update(Sample sample, MetricConfig config, double value,
> long now) {
> sample.value = Math.max(sample.value, value);
> }
> ```
>
> ```Avg.java
> @Override
> protected void update(Sample sample, MetricConfig config, double value,
> long now) {
> sample.value += value;
> }
> ```
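
As a rough illustration of what such a JMH benchmark would exercise, the overhead in question comes from recording into every stat registered on a sensor. A minimal standalone sketch follows; the metric names and group are placeholders, not the KIP's names.

```java
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Avg;
import org.apache.kafka.common.metrics.stats.Max;

// Sketch only: one sensor carrying both a Max and an Avg stat, so each
// record() call updates both.
public final class SensorOverheadSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        Sensor latency = metrics.sensor("sink-record-latency");
        latency.add(metrics.metricName("latency-max-ms", "example-group"), new Max());
        latency.add(metrics.metricName("latency-avg-ms", "example-group"), new Avg());

        // Each record() walks the sensor's registered stats (the loop quoted
        // above), which is the per-record overhead a benchmark would measure.
        for (int i = 0; i < 1_000; i++) {
            latency.record(i % 50);
        }
        metrics.close();
    }
}
```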
>
> As far as I understand, most of the 

Kafka adopt two measurement systems

2022-11-09 Thread 张占昌
Hello!
Why does Kafka adopt two measurement systems? One is Yammer, and the other is 
its own Metrics library.

Re: [DISCUSS] KIP-875: First-class offsets support in Kafka Connect

2022-11-09 Thread Mickael Maison
Hi Chris,

Thanks for the KIP, you're picking up something that has been on my todo
list for a while ;)

It looks good overall, I just have a couple of questions:
1) I consider both features listed in Future Work pretty important. In
both cases you mention the reason for not addressing them now is
because of the implementation. If the design is simple and if we have
volunteers to implement them, I wonder if we could include them in
this KIP. So you would not have to implement everything but we would
have a single KIP and vote.

2) Regarding the backward compatibility for the stopped state. The
"state.v2" field is a bit unfortunate but I can't think of a better
solution. The other alternative would be to not do anything but I
think the graceful degradation you propose is a bit better.

Thanks,
Mickael





On Tue, Nov 8, 2022 at 5:58 PM Chris Egerton  wrote:
>
> Hi Yash,
>
> Good question! This is actually a subtle source of asymmetry in the current
> proposal. Requests to delete a consumer group with active members will
> fail, so if there are zombie sink tasks that are still communicating with
> Kafka, offset reset requests for that connector will also fail. It is
> possible to use an admin client to remove all active members from the group
> and then delete the group. However, this solution isn't as complete as the
> zombie fencing that we can perform for exactly-once source tasks, since
> removing consumers from a group doesn't prevent them from immediately
> rejoining the group, which would either cause the group deletion request to
> fail (if they rejoin before the group is deleted), or recreate the group
> (if they rejoin after the group is deleted).
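
To make the manual workaround above concrete, a rough sketch with the public Admin API is below. The group id and bootstrap address are placeholders, and it assumes the no-argument RemoveMembersFromConsumerGroupOptions variant that removes every member; as noted, nothing stops rejoining consumers from recreating the group.

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

// Sketch only: manually empty and then delete a sink connector's consumer group.
public final class ResetSinkGroup {
    public static void main(String[] args) throws Exception {
        String groupId = "connect-my-sink"; // placeholder: connect-<connector name>
        Properties conf = new Properties();
        conf.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(conf)) {
            // Remove all active members (no-arg options = remove every member)...
            admin.removeMembersFromConsumerGroup(groupId, new RemoveMembersFromConsumerGroupOptions())
                 .all().get();
            // ...then delete the now-empty group. A zombie task that rejoins
            // afterwards will simply recreate the group, as discussed above.
            admin.deleteConsumerGroups(Collections.singletonList(groupId)).all().get();
        }
    }
}
```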
>
> For ease of implementation, I'd prefer to leave the asymmetry in the API
> for now and fail fast and clearly if there are still consumers active in
> the sink connector's group. We can try to detect this case and provide a
> helpful error message to the user explaining why the offset reset request
> has failed and some steps they can take to try to resolve things (wait for
> slow task shutdown to complete, restart zombie workers and/or workers with
> blocked tasks on them). In the future we can possibly even revisit KIP-611
> [1] or something like it to provide better insight into zombie tasks on a
> worker so that it's easier to find which tasks have been abandoned but are
> still running.
>
> Let me know what you think; this is an important point to call out and if
> we can reach some consensus on how to handle sink connector offset resets
> w/r/t zombie tasks, I'll update the KIP with the details.
>
> [1] -
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-611%3A+Improved+Handling+of+Abandoned+Connectors+and+Tasks
>
> Cheers,
>
> Chris
>
> On Tue, Nov 8, 2022 at 8:00 AM Yash Mayya  wrote:
>
> > Hi Chris,
> >
> > Thanks for the response and the explanations, I think you've answered
> > pretty much all the questions I had meticulously!
> >
> >
> > > if something goes wrong while resetting offsets, there's no
> > > immediate impact--the connector will still be in the STOPPED
> > >  state. The REST response for requests to reset the offsets
> > > will clearly call out that the operation has failed, and if necessary,
> > > we can probably also add a scary-looking warning message
> > > stating that we can't guarantee which offsets have been successfully
> > >  wiped and which haven't. Users can query the exact offsets of
> > > the connector at this point to determine what will happen if/what they
> > > resume it. And they can repeat attempts to reset the offsets as many
> > >  times as they'd like until they get back a 2XX response, indicating
> > > that it's finally safe to resume the connector. Thoughts?
> >
> > Yeah, I agree, the case that I mentioned earlier where a user would try to
> > resume a stopped connector after a failed offset reset attempt without
> > knowing that the offset reset attempt didn't fail cleanly is probably just
> > an extreme edge case. I think as long as the response is verbose enough and
> > self explanatory, we should be fine.
> >
> > Another question that I had was behavior w.r.t sink connector offset resets
> > when there are zombie tasks/workers in the Connect cluster - the KIP
> > mentions that for sink connectors offset resets will be done by deleting
> > the consumer group. However, if there are zombie tasks which are still able
> > to communicate with the Kafka cluster that the sink connector is consuming
> > from, I think the consumer group will automatically get re-created and the
> > zombie task may be able to commit offsets for the partitions that it is
> > consuming from?
> >
> > Thanks,
> > Yash
> >
> >
> > On Fri, Nov 4, 2022 at 10:27 PM Chris Egerton 
> > wrote:
> >
> > > Hi Yash,
> > >
> > > Thanks again for your thoughts! Responses to ongoing discussions inline
> > > (easier to track context than referencing comment numbers):
> > >
> > > > However, this then leads me to wonder if we can make that 

[jira] [Resolved] (KAFKA-14298) Getting null pointer exception

2022-11-09 Thread Ramakrishna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramakrishna resolved KAFKA-14298.
-
Resolution: Not A Problem

> Getting null pointer exception
> --
>
> Key: KAFKA-14298
> URL: https://issues.apache.org/jira/browse/KAFKA-14298
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Ramakrishna
>Priority: Major
>
> Getting null pointer exception.
>  
> {noformat}
> java.lang.NullPointerException 
> at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:995)
>  
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:925) 
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:801) 
> at 
> com.wellsfargo.dci.opensource.kafka.KafkaUtil$$anonfun$10$$anonfun$11.apply(KafkaUtil.scala:363)
>  
> at 
> com.wellsfargo.dci.opensource.kafka.KafkaUtil$$anonfun$10$$anonfun$11.apply(KafkaUtil.scala:361){noformat}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1347

2022-11-09 Thread Apache Jenkins Server
See