same thing. If you want to go multiple hops with ZeroMQ you need forwarding
already. And if you go one hop it really doesn't matter, it's just FOSE
(flooding over something else ;-)
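
(For illustration only - a minimal sketch of that forwarding piece, assuming
pyzmq; the port numbers are arbitrary. One XSUB/XPUB proxy like this per hop
is roughly what "forwarding" means here:)

  import zmq

  # Minimal forwarder: publishers connect to the XSUB side, subscribers to
  # the XPUB side, and zmq.proxy() shuffles messages (and subscriptions)
  # between them.
  ctx = zmq.Context()

  frontend = ctx.socket(zmq.XSUB)
  frontend.bind("tcp://*:5550")      # publishers connect here

  backend = ctx.socket(zmq.XPUB)
  backend.bind("tcp://*:5551")       # subscribers connect here

  zmq.proxy(frontend, backend)       # blocks; chain more of these per hop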

-- tony

On Wed, Mar 10, 2021 at 12:52 PM Robert Raszuk <rob...@raszuk.net> wrote:

> >  You think Kafka here?
>
> Nope ... I meant the ZeroMQ message bus as the underlying pub-sub transport
> for service-related info.
>
> Thx,
> R.,
>
>
> On Wed, Mar 10, 2021 at 11:41 AM Tony Przygienda <tonysi...@gmail.com>
> wrote:
>
>> ? Last time I looked @ it (and it's been a while) Open-R had nothing of
>> that sort, it was basically a KV store playing LSDB (innovative and clever
>> in itself). You think Kafka here? Which in turn needs an underlying IGP
>> anyway, and is nothing but BGP problems in new clothes, having looked @
>> their internal architecture and where it's going a while ago.
>>
>> -- tony
>>
>> On Wed, Mar 10, 2021 at 11:29 AM Robert Raszuk <rob...@raszuk.net> wrote:
>>
>>> Peter,
>>>
>>> > But suddenly the DOWN event distribution is considered
>>> > problematic. Not sure I follow.
>>>
>>> In routing and IP reachability we use p2mp distribution and flooding, as
>>> it is required to provide any-to-any connectivity.
>>>
>>> Such a spray model no longer fits services, where not every endpoint
>>> participates in every service.
>>>
>>> So my point is that just because we have the transport ready, we should not
>>> continue to announce either good or bad news in spray fashion for
>>> services.
>>>
>>> Sure it works, but it is hardly good design or sound architecture.
>>>
>>> It happened to BGP: the convenience of already having TCP sessions
>>> between nodes was so great that we piled lots of stuff on top of basic
>>> routing reachability.
>>>
>>> And now it seems the time has come to do the same with IGPs :).
>>>
>>> I think unless we stop and define a real pub-sub messaging protocol
>>> (like FB does with Open-R) we will keep doing this.
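>>>
>>> (Just to make the model concrete - a minimal sketch of the subscriber
>>> side, assuming pyzmq; the bus address and the topic string
>>> "svc/vpn-blue/" are made up for illustration:)
>>>
>>>   import zmq
>>>
>>>   ctx = zmq.Context()
>>>   sub = ctx.socket(zmq.SUB)
>>>   sub.connect("tcp://bus.example:5551")   # XPUB side of the bus (made up)
>>>
>>>   # Subscribe only to the services this endpoint participates in;
>>>   # events for unrelated services never reach us at all.
>>>   sub.setsockopt_string(zmq.SUBSCRIBE, "svc/vpn-blue/")
>>>
>>>   while True:
>>>       msg = sub.recv_string()             # e.g. "svc/vpn-blue/pe-down PE1"
>>>       print("service event:", msg)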
>>>
>>> And to me it is like building a tower of cards ... the higher you go, the
>>> more likely the entire tower is to collapse.
>>>
>>> Cheers,
>>> R.
>>>
>>> PS.
>>>
>>> > with MPLS, the loopback address of all PEs is advertised everywhere.
>>>
>>> Is this a feature, or a day-one design bug later fixed by RFC5283?
>>>
>>>
>>>
>>>
>>> On Wed, Mar 10, 2021 at 9:10 AM Peter Psenak <ppse...@cisco.com> wrote:
>>>
>>>> Robert,
>>>>
>>>>
>>>> On 09/03/2021 19:30, Robert Raszuk wrote:
>>>> > Hi Peter,
>>>> >
>>>> >      > Example 1:
>>>> >      >
>>>> >      > If the session to PE1 goes down, withdraw all RDs received from
>>>> >      > that PE.
>>>> >
>>>> >     still dependent on RDs and BGP specific.
>>>> >
>>>> >
>>>> > To me this does sound like a feature ... to you I think it was rather
>>>> > pejorative.
>>>>
>>>> not sure I understand your point with "pejorative"...
>>>>
>>>> There are other ways to provide services outside of BGP - think GRE,
>>>> IPsec, etc. The solution should cover them all.
>>>>
>>>> >
>>>> >     We want an app-independent way of
>>>> >     signaling the reachability loss. At the end, that's what IGPs do
>>>> >     in the absence of summarization.
>>>> >
>>>> >
>>>> > Here you go. I suppose you just drafted the first use case for OSPF
>>>> > Transport Instance.
>>>>
>>>> you said it, not me.
>>>>
>>>>
>>>> >
>>>> > I suppose you just run a new ISIS or OSPF instance and flood info about
>>>> > PE-down events to all other instance nodes (hopefully just PEs and no
>>>> > Ps, as such a plane would be an OTT one). Still, you will be flooding
>>>> > this to 100s of PEs which may never need this information at all, which
>>>> > I think is the main issue here. Such bad news IMHO should be distributed
>>>> > on a pub/sub basis only. First you subscribe, then you get updates ...
>>>> > not get everything and then keep junk till it gets removed or expires.
>>>>
>>>> with MPLS, the loopback address of all PEs is advertised everywhere. So
>>>> you keep the state while the remote PE loopback is up, and you get a
>>>> state withdrawal when the remote PE loopback goes down.
>>>>
>>>> In SRv6, with summarization we can reduce the amount of UP state to a
>>>> minimum. But suddenly the DOWN event distribution is considered
>>>> problematic. Not sure I follow.
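>>>>
>>>> (A toy model of that state machine, nothing more - the function and
>>>> variable names are invented for illustration:)
>>>>
>>>>   # Reachability state a PE keeps for remote PE loopbacks (toy model).
>>>>   pe_loopback_up = set()
>>>>
>>>>   def on_advertisement(loopback):
>>>>       # UP state: remote PE loopback advertised (or covered by a summary).
>>>>       pe_loopback_up.add(loopback)
>>>>
>>>>   def on_withdrawal(loopback):
>>>>       # DOWN event: state withdrawn, tear down whatever depends on it.
>>>>       pe_loopback_up.discard(loopback)
>>>>       print("PE", loopback, "unreachable - flush dependent service routes")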
>>>>
>>>> thanks,
>>>> Peter
>>>>
>>>> >
>>>> > Many thx,
>>>> > Robert
>>>> >
>>>>
>>>
>>
_______________________________________________
Lsr mailing list
Lsr@ietf.org
https://www.ietf.org/mailman/listinfo/lsr
