Hello Paolo,

I'm looking at your PR for KIP-204 now. Will reply on the discussion thread
/ PR diff file directly if I find anything.


Guozhang

On Tue, Oct 24, 2017 at 5:45 AM, Paolo Patierno <ppatie...@live.com> wrote:

> Hi Guozhang,
>
> thanks for replying!
>
>
> I see your point about the Metadata class not needing to expose errors
> because they are transient.
>
>
> Regarding KIP-204, the delete records operation in the "legacy" client
> doesn't have any retry logic: it just returns the error to the user, who
> has to retry (on the topics where the operation failed).
>
> If I added retry logic in the "new" admin client, then for a delete
> records operation on several topic partitions at the same time I would
> have to retry whenever at least one of the partitions comes back with
> LEADER_NOT_AVAILABLE (after the metadata request), without proceeding on
> the other topic partitions which do have leaders.
>
> Maybe it's better to go ahead with the operation on the partitions that
> have a leader and return LEADER_NOT_AVAILABLE to the user for the others
> (which is the current behaviour of the "legacy" admin client).
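>
> To make it concrete, a rough sketch of what I have in mind (just a
> sketch: the class and variable names are illustrative and Long is only a
> stand-in for the offset type, this is not the actual implementation):
>
> import java.util.HashMap;
> import java.util.Map;
>
> import org.apache.kafka.common.Cluster;
> import org.apache.kafka.common.Node;
> import org.apache.kafka.common.TopicPartition;
> import org.apache.kafka.common.errors.LeaderNotAvailableException;
> import org.apache.kafka.common.internals.KafkaFutureImpl;
>
> public class LeaderSplitSketch {
>
>     // Keep the partitions that have a known leader; fail the others right
>     // away with LEADER_NOT_AVAILABLE, as the "legacy" client does.
>     static Map<TopicPartition, Long> partitionsWithLeader(
>             Cluster cluster,
>             Map<TopicPartition, Long> requested,
>             Map<TopicPartition, KafkaFutureImpl<Long>> futures) {
>         Map<TopicPartition, Long> toSend = new HashMap<>();
>         for (Map.Entry<TopicPartition, Long> entry : requested.entrySet()) {
>             Node leader = cluster.leaderFor(entry.getKey());
>             if (leader == null) {
>                 futures.get(entry.getKey()).completeExceptionally(
>                         new LeaderNotAvailableException(
>                                 "No leader for " + entry.getKey()));
>             } else {
>                 toSend.put(entry.getKey(), entry.getValue());
>             }
>         }
>         return toSend;
>     }
> }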
>
>
> For now, the implementation I have (I'll push a PR soon) uses the Call
> class to send a MetadataRequest and then, in its handleResponse, uses
> another Call instance to send the DeleteRecordsRequest.
>
>
> Thanks
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno<http://twitter.com/ppatierno>
> Linkedin : paolopatierno<http://it.linkedin.com/in/paolopatierno>
> Blog : DevExperience<http://paolopatierno.wordpress.com/>
>
>
> ________________________________
> From: Guozhang Wang <wangg...@gmail.com>
> Sent: Tuesday, October 24, 2017 12:52 AM
> To: dev@kafka.apache.org
> Subject: Re: Metadata class doesn't "expose" topics with errors
>
> Hello Paolo,
>
> The reason we filter out the topics with errors in the generated Cluster
> is that Metadata, and the Cluster returned by its "fetch()", are common
> classes used by all clients (producer, consumer, connect, streams, admin)
> and are treated as a high-level snapshot of the topic information hosted
> by the cluster; hence we intentionally exclude any transient errors from
> that representation to abstract such issues away from its users.
>
> As for your implementation of KIP-204, I think it's fine to just wait and
> retry until the updated metadata.fetch() Cluster contains the leader
> information for the topic: if a LEADER_NOT_AVAILABLE is returned you'd
> need to back off and retry anyway, right?
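>
> Something along these lines (just a sketch, assuming the internal
> org.apache.kafka.clients.Metadata class with requestUpdate() /
> awaitUpdate() / fetch(), i.e. the same pattern the producer uses while
> waiting for metadata; the timeout handling is only illustrative):
>
> import org.apache.kafka.clients.Metadata;
> import org.apache.kafka.common.Node;
> import org.apache.kafka.common.TopicPartition;
> import org.apache.kafka.common.errors.TimeoutException;
>
> public class WaitForLeaderSketch {
>
>     static Node awaitLeader(Metadata metadata, TopicPartition tp,
>                             long maxWaitMs) throws InterruptedException {
>         long deadline = System.currentTimeMillis() + maxWaitMs;
>         while (true) {
>             // Check the current cluster snapshot first.
>             Node leader = metadata.fetch().leaderFor(tp);
>             if (leader != null)
>                 return leader;
>             long remaining = deadline - System.currentTimeMillis();
>             if (remaining <= 0)
>                 throw new TimeoutException("No leader for " + tp);
>             // Ask for a refresh and block until a newer metadata version
>             // arrives (or the remaining time elapses), then look again.
>             int version = metadata.requestUpdate();
>             metadata.awaitUpdate(version, remaining);
>         }
>     }
> }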
>
>
> Guozhang
>
>
>
> On Mon, Oct 23, 2017 at 2:36 AM, Paolo Patierno <ppatie...@live.com>
> wrote:
>
> > Finally, another plan could be to nest the runnable calls.
> >
> > The first one asks for metadata (using the MetadataRequest, which gives
> > us all the errors), and then the delete records requests are sent from
> > the handleResponse() of that metadata request.
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno<http://twitter.com/ppatierno>
> > Linkedin : paolopatierno<http://it.linkedin.com/in/paolopatierno>
> > Blog : DevExperience<http://paolopatierno.wordpress.com/>
> >
> >
> > ________________________________
> > From: Paolo Patierno <ppatie...@live.com>
> > Sent: Monday, October 23, 2017 9:06 AM
> > To: dev@kafka.apache.org
> > Subject: Metadata class doesn't "expose" topics with errors
> >
> > Hi devs,
> >
> > while developing KIP-204 (adding the delete records operation to the
> > "new" Admin Client) I'm facing the following doubt (or maybe a lack of
> > information) ...
> >
> >
> > As described by KIP-107 (which implements this feature at the protocol
> > level and in the "legacy" Admin Client), the request needs to be sent to
> > the partition leader.
> >
> >
> > For both KIPs, the operation takes a Map<TopicPartition, offset> (the
> > offset is a long in the "legacy" API but it's becoming a class in the
> > "new" API), and in order to reduce the number of requests to different
> > leaders, my code groups the partitions by leader, ending up with a
> > Map<Node, Map<TopicPartition, offset>>.
> >
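> > Something like this (just a sketch, using Long as a stand-in for the
> > offset type since the "new" API class isn't defined yet):
> >
> > import java.util.HashMap;
> > import java.util.Map;
> >
> > import org.apache.kafka.common.Cluster;
> > import org.apache.kafka.common.Node;
> > import org.apache.kafka.common.TopicPartition;
> >
> > public class GroupByLeaderSketch {
> >
> >     // Group the requested partitions by their current leader so that a
> >     // single request per broker can be sent.
> >     static Map<Node, Map<TopicPartition, Long>> groupByLeader(
> >             Cluster cluster, Map<TopicPartition, Long> offsets) {
> >         Map<Node, Map<TopicPartition, Long>> byLeader = new HashMap<>();
> >         for (Map.Entry<TopicPartition, Long> entry : offsets.entrySet()) {
> >             Node leader = cluster.leaderFor(entry.getKey());
> >             if (leader == null)
> >                 continue; // no leader known: handled separately
> >             byLeader.computeIfAbsent(leader, node -> new HashMap<>())
> >                     .put(entry.getKey(), entry.getValue());
> >         }
> >         return byLeader;
> >     }
> > }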
> >
> > In order to know the leaders I need to request metadata, and there are
> > two ways to do that:
> >
> >
> >   *   using something like the producer does with the Metadata class:
> > adding the topics, requesting an update and waiting for it
> >   *   using the low level MetadataRequest and handling the related
> > response (which is what the "legacy" API does today; see the sketch
> > below)
> >
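> > For the second option, a rough sketch (assuming the internal
> > request/response classes in org.apache.kafka.common.requests; the exact
> > method names may need to be double checked): the MetadataResponse still
> > carries the per-topic errors that the Cluster view drops.
> >
> > import java.util.Map;
> >
> > import org.apache.kafka.common.Cluster;
> > import org.apache.kafka.common.protocol.Errors;
> > import org.apache.kafka.common.requests.MetadataResponse;
> >
> > public class LowLevelMetadataSketch {
> >
> >     static void inspect(MetadataResponse response) {
> >         // Per-topic errors (e.g. LEADER_NOT_AVAILABLE) are still visible
> >         // here ...
> >         Map<String, Errors> errors = response.errors();
> >         errors.forEach((topic, error) ->
> >                 System.out.println("topic " + topic + " -> " + error));
> >
> >         // ... while the Cluster view only keeps the topics without
> >         // errors.
> >         Cluster cluster = response.cluster();
> >         System.out.println("topics in the Cluster view: " + cluster.topics());
> >     }
> > }
> >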
> > I noticed that when the Cluster object is built from the
> > MetadataResponse, the topics with errors are skipped, which means that in
> > the final "high level" Metadata class (fetching the Cluster object) there
> > is no information about them. So with the first solution we have no info
> > about topics with errors (maybe the only error I'm able to handle is
> > LEADER_NOT_AVAILABLE, when leaderFor() on the Cluster returns a null
> > Node).
> >
> > Is there any specific reason why "topics with errors" are not exposed in
> > the Metadata instance?
> > Is using the low level protocol classes the preferred pattern in such a
> > case?
> >
> > Thanks
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno<http://twitter.com/ppatierno>
> > Linkedin : paolopatierno<http://it.linkedin.com/in/paolopatierno>
> > Blog : DevExperience<http://paolopatierno.wordpress.com/>
> >
>
>
>
> --
> -- Guozhang
>



-- 
-- Guozhang
