Vladimir, could you please share the new IEP link here, and also link it to
the ticket?

On Thu, Oct 26, 2023 at 1:30 PM Vladimir Steshin <vlads...@gmail.com> wrote:

>      Roman, hi.
>
> Done. Thanks!
>
> On 26.10.2023 13:25, Roman Puchkovskiy wrote:
> > Hi Vladimir. Sorry to intervene, but we have a clash of IEP numbers:
> > there is already an IEP-110 in Ignite 3, created on August 1:
> >
> > https://cwiki.apache.org/confluence/display/IGNITE/IEP-110%3A+Schema+synchronization%3A+basic+schema+changes
> >
> > Is it possible to pick another number while your IEP is fresh?
> >
> > On Thu, Oct 26, 2023 at 2:05 PM Vladimir Steshin <vlads...@gmail.com> wrote:
> >>       All right. Pavel, thank you.
> >>
> >> IEP:
> >>
> >> https://cwiki.apache.org/confluence/display/IGNITE/IEP-110+Thin+Client+Service+Awareness
> >>
> >> Ticket: https://issues.apache.org/jira/browse/IGNITE-20656
> >>
> >> On 25.10.2023 11:04, Pavel Tupitsyn wrote:
> >>> Looks good to me
> >>>
> >>> On Tue, Oct 24, 2023 at 1:50 PM Vladimir Steshin <vlads...@gmail.com> wrote:
> >>>
> >>>>        We've discussed this privately with Mikhail Petrov and Alexey
> >>>> Plekhanov. To us, #2 seems OK, with the exception that a dedicated
> >>>> request would be better for transferring the service topology, and
> >>>> that it should be processed by the client instead of by every
> >>>> service proxy.
> >>>>
> >>>> So, the suggested solution is:
> >>>> 1) Bring a new feature to the thin client protocol.
> >>>> 2) Require the partition awareness flag to be enabled.
> >>>> 3) Obtain the service topology with a dedicated request made by the
> >>>> client, and provide it to the service proxies.
> >>>> 4) Initiate a topology update on: the first service invocation, a
> >>>> cluster topology change, or a timeout (only if the service is invoked).
> >>>>
> >>>> Cons:
> >>>>    - Some delay in obtaining the topology. Invocation redirects are
> >>>> still possible when a service migrates.
> >>>>    - No sign of service cancellation/deployment on the client side,
> >>>> so we also have to update on a timeout.
> >>>>    - The topology is probably kept by the client for as long as the
> >>>> proxy exists, even if it is no longer in use.
> >>>>
> >>>> If the suggestion looks reasonable, I'm ready to implement it and
> >>>> create an IEP. A rough sketch of the client-side logic is below.
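> >>>>
> >>>> (A minimal sketch of that logic; the class, field and constant names
> >>>> are hypothetical illustrations, not actual Ignite APIs.)
> >>>>
> >>>>     import java.util.List;
> >>>>     import java.util.UUID;
> >>>>
> >>>>     // Caches the service topology for a service proxy and decides
> >>>>     // when it must be refreshed with the dedicated request.
> >>>>     class ServiceTopologyCache {
> >>>>         private static final long TTL_MS = 30_000; // assumed timeout
> >>>>
> >>>>         private volatile List<UUID> nodes = List.of();
> >>>>         private volatile long clusterTopVer;
> >>>>         private volatile long lastUpdateMs;
> >>>>
> >>>>         // True on the first invocation, on a cluster topology
> >>>>         // change, or when the cached topology is older than TTL_MS.
> >>>>         boolean needsUpdate(long curClusterTopVer) {
> >>>>             return nodes.isEmpty()
> >>>>                 || curClusterTopVer != clusterTopVer
> >>>>                 || System.currentTimeMillis() - lastUpdateMs > TTL_MS;
> >>>>         }
> >>>>
> >>>>         void update(List<UUID> newNodes, long topVer) {
> >>>>             nodes = newNodes;
> >>>>             clusterTopVer = topVer;
> >>>>             lastUpdateMs = System.currentTimeMillis();
> >>>>         }
> >>>>     }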
> >>>>
> >>>> On 17.10.2023 18:28, Vladimir Steshin wrote:
> >>>>>       Indeed, they can hardly guarantee that. If the client misses
> >>>>> the service instance node, the request is just redirected. But I'm
> >>>>> talking about the most reliable way to keep the service topology up
> >>>>> to date. If we watch the cluster topology change event, we have to
> >>>>> take into account cases like:
> >>>>>
> >>>>> - The client requests the service and gets its topology.
> >>>>>
> >>>>> - The service is canceled and redeployed to other nodes. There is no
> >>>>> cluster topology change and no sign of it on the client side.
> >>>>>
> >>>>> - The client continues calling the service and misses the instance
> >>>>> nodes forever, or often.
> >>>>>
> >>>>> If we send, for example, a version or hash of the client's known
> >>>>> service topology in every service call request, we always get the
> >>>>> actual service topology just by comparing it on the server side,
> >>>>> regardless of why and when the service is redeployed. Isn't that
> >>>>> simple and safe?
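> >>>>>
> >>>>> (A sketch of that comparison; topologyHash() is an illustrative
> >>>>> helper, not an existing Ignite function.)
> >>>>>
> >>>>>     import java.util.Collection;
> >>>>>     import java.util.UUID;
> >>>>>
> >>>>>     final class ServiceTopologyHash {
> >>>>>         // A hash of the service node IDs that does not depend on
> >>>>>         // their order: the client sends it with every call, the
> >>>>>         // server recomputes it and attaches the actual topology to
> >>>>>         // the response only on a mismatch.
> >>>>>         static int topologyHash(Collection<UUID> nodeIds) {
> >>>>>             return nodeIds.stream().sorted().toList().hashCode();
> >>>>>         }
> >>>>>     }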
> >>>>>
> >>>>> On 17.10.2023 15:52, Pavel Tupitsyn wrote:
> >>>>>> None of the described approaches provides a 100% guarantee of
> >>>>>> hitting the primary node in all conditions.
> >>>>>> And it is fine to miss a few requests. I don't see a reason to
> >>>>>> increase complexity trying to optimize a rare use case.
> >>>>>>
> >>>>>> On Tue, Oct 17, 2023 at 2:49 PM <vlads...@gmail.com> wrote:
> >>>>>>
> >>>>>>> What if the topology change event precedes the service
> >>>>>>> redeployment and the service mapping change? There is a
> >>>>>>> possibility that the client saves the new topology version before
> >>>>>>> the services are actually redeployed. If we rely on the actual
> >>>>>>> change of the service mapping (redeployment), there is no such
> >>>>>>> problem.
> >>>>>>>
> >>>>>>>> On 17.10.2023 13:44, Pavel Tupitsyn <ptupit...@apache.org> wrote:
> >>>>>>>>> I think if it's good enough for cache partition awareness, then
> >>>>>>>>> it's good enough for services. Topology changes are not that
> >>>>>>>>> frequent.
> >>>>>>>>
> >>>>>>>> On Tue, Oct 17, 2023 at 12:22 PM <vlads...@gmail.com> wrote:
> >>>>>>>>
> >>>>>>>>> Hi, Pavel.
> >>>>>>>>>
> >>>>>>>>> 1. Correct.
> >>>>>>>>> 2. Yes, the client watches the
> >>>>>>>>> ClientFlag.AFFINITY_TOPOLOGY_CHANGED flag and sends an
> >>>>>>>>> additional ClientOperation.CLUSTER_GROUP_GET_NODE_ENDPOINTS
> >>>>>>>>> request to get the new cluster topology. Thus, the topology is
> >>>>>>>>> updated with some delay. We could watch this event somehow in
> >>>>>>>>> the service proxy. But a direct service topology version in the
> >>>>>>>>> call responses should work faster while the service is being
> >>>>>>>>> requested. Or do you think this is not significant?
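> >>>>>>>>>
> >>>>>>>>> (An illustration of that existing flow; onResponse() and
> >>>>>>>>> refresh() are simplified stand-ins for the real client
> >>>>>>>>> internals.)
> >>>>>>>>>
> >>>>>>>>>     // On every server response the client inspects the flags:
> >>>>>>>>>     void onResponse(short flags) {
> >>>>>>>>>         if ((flags & ClientFlag.AFFINITY_TOPOLOGY_CHANGED) != 0) {
> >>>>>>>>>             // Fetch the new cluster topology; a service proxy
> >>>>>>>>>             // could hook in here to invalidate its cached
> >>>>>>>>>             // service topology as well.
> >>>>>>>>>             refresh(ClientOperation.CLUSTER_GROUP_GET_NODE_ENDPOINTS);
> >>>>>>>>>         }
> >>>>>>>>>     }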
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>> On 17.10.2023 11:13, Pavel Tupitsyn <ptupit...@apache.org> wrote:
> >>>>>>>>>> Hi Vladimir,
> >>>>>>>>>>
> >>>>>>>>>> 1. The topology of a deployed service can change only when the
> >>>>>>>>>> cluster topology changes.
> >>>>>>>>>> 2. We already have a topology change flag in every server
> >>>>>>>>>> response.
> >>>>>>>>>>
> >>>>>>>>>> Therefore, the client can request the topology once per
> >>>>>>>>>> service, and refresh it when the cluster topology changes,
> >>>>>>>>>> right?
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> On Mon, Oct 16, 2023 at 8:17 PM Vladimir Steshin <vlads...@gmail.com> wrote:
> >>>>>>>>>>> Hi Igniters! I propose adding the /service awareness feature
> >>>>>>>>>>> to the thin client/. I remember a couple of users asked for
> >>>>>>>>>>> it. It looks nice to have and simple to implement, similar to
> >>>>>>>>>>> the partition awareness.
> >>>>>>>>>>> Reason:
> >>>>>>>>>>> A service can be deployed on only one or a few nodes.
> >>>>>>>>>>> Currently, the thin client chooses one node, or a random one,
> >>>>>>>>>>> to invoke a service. The service call can then be always or
> >>>>>>>>>>> often redirected to another server node. I think we would need:
> >>>>>>>>>>> - Bring a new feature to the thin client protocol (no protocol
> >>>>>>>>>>> version change).
> >>>>>>>>>>> - Require the partition awareness flag to be enabled (it
> >>>>>>>>>>> creates the required connections to the cluster).
> >>>>>>>>>>> - Transfer the service topology in the service call response
> >>>>>>>>>>> (the server node /already holds/ the needed service topology).
> >>>>>>>>>>> - Keep the service topology in the client service proxy.
> >>>>>>>>>>> If that is OK, my question is /how to update the service
> >>>>>>>>>>> topology on the client/?
> >>>>>>>>>>> I see the options:
> >>>>>>>>>>> 1) Add a version to the service topology on the server node
> >>>>>>>>>>> and in the client service proxy. Add the actual service
> >>>>>>>>>>> topology to the service call response if actual > client.
> >>>>>>>>>>> /Pros/: Always the most actual service topology version.
> >>>>>>>>>>> /Cons/: Requires holding and syncing the topology version on
> >>>>>>>>>>> the server nodes only for the thin clients.
> >>>>>>>>>>> 2) Add the actual service topology to the service call
> >>>>>>>>>>> response only if the service is not deployed on the current
> >>>>>>>>>>> node. The client invalidates the received service topology
> >>>>>>>>>>> every N invocations and/or every N seconds (/code constants/;
> >>>>>>>>>>> a small sketch follows at the end of this message).
> >>>>>>>>>>> /Pros/: Simple.
> >>>>>>>>>>> /Cons/: The actual topology is delayed. Not the best load
> >>>>>>>>>>> balancing.
> >>>>>>>>>>> 3) Send from the client a hash of the known service node UUIDs
> >>>>>>>>>>> in every service call request. Add the actual service topology
> >>>>>>>>>>> to the service call response if the server's hash differs.
> >>>>>>>>>>> /Pros/: Simple. Always the most actual service topology.
> >>>>>>>>>>> /Cons/: Costs some CPU sometimes.
> >>>>>>>>>>> WDYT?
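> >>>>>>>>>>>
> >>>>>>>>>>> (To make option 2 concrete, a tiny sketch of the client-side
> >>>>>>>>>>> invalidation; the class and both constants are hypothetical.)
> >>>>>>>>>>>
> >>>>>>>>>>>     import java.util.concurrent.atomic.AtomicLong;
> >>>>>>>>>>>
> >>>>>>>>>>>     class TopologyInvalidator {
> >>>>>>>>>>>         static final long MAX_CALLS = 1_000;   // N invocations
> >>>>>>>>>>>         static final long MAX_AGE_MS = 30_000; // N seconds
> >>>>>>>>>>>
> >>>>>>>>>>>         private final AtomicLong calls = new AtomicLong();
> >>>>>>>>>>>         private volatile long updatedMs = System.currentTimeMillis();
> >>>>>>>>>>>
> >>>>>>>>>>>         // Counts an invocation and reports whether the cached
> >>>>>>>>>>>         // service topology should be dropped and re-requested.
> >>>>>>>>>>>         boolean expired() {
> >>>>>>>>>>>             return calls.incrementAndGet() > MAX_CALLS
> >>>>>>>>>>>                 || System.currentTimeMillis() - updatedMs > MAX_AGE_MS;
> >>>>>>>>>>>         }
> >>>>>>>>>>>     }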
> >>>>>>>>>>>
>
