Re: [tipc-discussion] [net] tipc: fix incorrect setting window for bcast link

2020-10-12 Thread Hoang Huu Le
> -----Original Message-----
> From: Jon Maloy 
> Sent: Tuesday, October 13, 2020 6:39 AM
> To: Hoang Huu Le ; 
> tipc-discussion@lists.sourceforge.net; ma...@donjonn.com;
> ying@windriver.com
> Subject: Re: [net] tipc: fix incorrect setting window for bcast link
> 
> Hi Hoang,
> I apologize for not paying enough attention to this problem until now.
> See below.
> 
> 
> On 9/30/20 9:43 PM, Hoang Huu Le wrote:
> > In commit 16ad3f4022bb
> > ("tipc: introduce variable window congestion control"), we applied
> > the Reno algorithm to select the window size, from the minimum window
> > up to the configured maximum window, for the unicast link.
> >
> > However, when setting the window variable we did not make it specific
> > to the unicast link. As a result the broadcast link window ends up with
> > an unexpected value (i.e. the window size is constantly 32).
> >
> > We fix this by updating the broadcast link window according to its
> > configuration.
> >
> > Signed-off-by: Hoang Huu Le 
> > ---
> >   net/tipc/bcast.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
> > index 940d176e0e87..abac9443b4d9 100644
> > --- a/net/tipc/bcast.c
> > +++ b/net/tipc/bcast.c
> > @@ -585,7 +585,7 @@ static int tipc_bc_link_set_queue_limits(struct net *net, u32 max_win)
> > if (max_win > TIPC_MAX_LINK_WIN)
> > return -EINVAL;
> > tipc_bcast_lock(net);
> > -   tipc_link_set_queue_limits(l, BCLINK_WIN_MIN, max_win);
> > +   tipc_link_set_queue_limits(l, max_win, max_win);
> I think this is dangerous. The broadcast link puts a much higher stress
> on the switch, and the risk of massive packet loss with the corresponding
> retransmissions is very high.
> A safer bet is to stay with a fixed window of 50 for now, to solve the
> problem you have at your sites, and then you can possibly experiment with
> a variable window later.
So, should we go back to returning 'not supported' when the user changes the
window for the broadcast link, or just silently ignore the setting?
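
For concreteness, the "silently ignore" option could look roughly like the
sketch below. This is only a sketch, not a tested patch: it reuses the
identifiers visible in the quoted diff, and it assumes that tipc_bc_sndlink()
and a BCLINK_WIN_DEFAULT of 50 are usable here.

/* Sketch only.
 * Option A: reject the request, i.e. report "not supported".
 * Option B: accept the request but keep the broadcast window fixed at 50,
 *           ignoring the value the user passed in.
 */
static int tipc_bc_link_set_queue_limits(struct net *net, u32 max_win)
{
	struct tipc_link *l = tipc_bc_sndlink(net);	/* assumed accessor */

	if (!l)
		return -ENOPROTOOPT;

	/* Option A: */
	/* return -EOPNOTSUPP; */

	/* Option B: validate, then pin min == max == BCLINK_WIN_DEFAULT (50) */
	if (max_win > TIPC_MAX_LINK_WIN)
		return -EINVAL;
	tipc_bcast_lock(net);
	tipc_link_set_queue_limits(l, BCLINK_WIN_DEFAULT, BCLINK_WIN_DEFAULT);
	tipc_bcast_unlock(net);
	return 0;
}

Either way the user could no longer drive the broadcast window down to 32.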

> If it gives significantly higher throughput it might be worthwhile trying,
> but I am pretty sure that e.g. the base for calculating ssthresh (300)
> is too big.
> 
> ///jon
> > tipc_bcast_unlock(net);
> > return 0;
> >   }




Re: [tipc-discussion] [net] tipc: fix incorrect setting window for bcast link

2020-10-12 Thread Jon Maloy

Hi Hoang,
I apologize for not paying enough attention to this problem until now.
See below.


On 9/30/20 9:43 PM, Hoang Huu Le wrote:

> In commit 16ad3f4022bb
> ("tipc: introduce variable window congestion control"), we applied
> the Reno algorithm to select the window size, from the minimum window
> up to the configured maximum window, for the unicast link.
>
> However, when setting the window variable we did not make it specific
> to the unicast link. As a result the broadcast link window ends up with
> an unexpected value (i.e. the window size is constantly 32).
>
> We fix this by updating the broadcast link window according to its
> configuration.
>
> Signed-off-by: Hoang Huu Le 
> ---
>   net/tipc/bcast.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
> index 940d176e0e87..abac9443b4d9 100644
> --- a/net/tipc/bcast.c
> +++ b/net/tipc/bcast.c
> @@ -585,7 +585,7 @@ static int tipc_bc_link_set_queue_limits(struct net *net, u32 max_win)
> if (max_win > TIPC_MAX_LINK_WIN)
> return -EINVAL;
> tipc_bcast_lock(net);
> -   tipc_link_set_queue_limits(l, BCLINK_WIN_MIN, max_win);
> +   tipc_link_set_queue_limits(l, max_win, max_win);
I think this is dangerous. The broadcast link puts a much higher stress
on the switch, and the risk of massive packet loss with the corresponding
retransmissions is very high.
A safer bet is to stay with a fixed window of 50 for now, to solve the
problem you have at your sites, and then you can possibly experiment with
a variable window later.
If it gives significantly higher throughput it might be worthwhile trying,
but I am pretty sure that e.g. the base for calculating ssthresh (300)
is too big.

///jon

> tipc_bcast_unlock(net);
> return 0;
>   }






Re: [tipc-discussion] [net] tipc: fix incorrect setting window for bcast link

2020-10-01 Thread Hoang Huu Le
Hi Jon,

See my inline comment.

Regards,
Hoang
> -----Original Message-----
> From: Jon Maloy 
> Sent: Friday, October 2, 2020 4:40 AM
> To: Hoang Huu Le ; 
> tipc-discussion@lists.sourceforge.net; ma...@donjonn.com;
> ying@windriver.com; Xin Long 
> Subject: Re: [net] tipc: fix incorrect setting window for bcast link
> 
> 
> 
> On 9/30/20 9:43 PM, Hoang Huu Le wrote:
> > In commit 16ad3f4022bb
> > ("tipc: introduce variable window congestion control"), we applied
> > the Reno algorithm to select the window size, from the minimum window
> > up to the configured maximum window, for the unicast link.
> >
> > However, when setting the window variable we did not make it specific
> > to the unicast link. As a result the broadcast link window ends up with
> > an unexpected value (i.e. the window size is constantly 32).
> This was intentional, although I thought the value was 50, not 32.
> In my experience we cannot afford a generous variable window
> in the broadcast link the same way we do with the unicast link.
> There will be too many losses and retransmissions because of the
> way switches work.

[Hoang]
When the link is created, the value is 50. However, if we use the tipc command,
i.e. $ tipc link set win 50 link broadcast,
the window is set to 32, and it stays at that value from then on.
Does that count as a bug?
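
To illustrate what we are seeing, here is a small stand-alone user-space
model of the setter. This is not the kernel code; it only assumes that the
setter stores the minimum and maximum and starts the operating window at
the minimum, and that no window growth takes effect on the broadcast link.

#include <stdio.h>

/* Toy model of the queue-limit setter, NOT the TIPC kernel code. */
struct toy_link {
	unsigned int min_win;	/* lower bound of the variable window */
	unsigned int max_win;	/* upper bound (the configured value) */
	unsigned int window;	/* current operating window */
};

static void toy_set_queue_limits(struct toy_link *l, unsigned int min_win,
				 unsigned int max_win)
{
	l->min_win = min_win;
	l->max_win = max_win;
	l->window = min_win;	/* the window starts at the minimum */
}

int main(void)
{
	struct toy_link bcl = { 0 };

	/* Pre-patch call: min pinned to BCLINK_WIN_MIN (32) even though the
	 * user asked for 50, so the broadcast window sits at 32. */
	toy_set_queue_limits(&bcl, 32, 50);
	printf("pre-patch:  window = %u\n", bcl.window);

	/* Call from the proposed patch: min = max = configured value. */
	toy_set_queue_limits(&bcl, 50, 50);
	printf("post-patch: window = %u\n", bcl.window);
	return 0;
}

This prints 32 for the pre-patch call and 50 for the patched one, which
matches what we observe with "win 50" on the broadcast link.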

> 
> >
> > We fix this by updating the broadcast link window according to its configuration.
> >
> > Signed-off-by: Hoang Huu Le 
> > ---
> >   net/tipc/bcast.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
> > index 940d176e0e87..abac9443b4d9 100644
> > --- a/net/tipc/bcast.c
> > +++ b/net/tipc/bcast.c
> > @@ -585,7 +585,7 @@ static int tipc_bc_link_set_queue_limits(struct net *net, u32 max_win)
> > if (max_win > TIPC_MAX_LINK_WIN)
> > return -EINVAL;
> > tipc_bcast_lock(net);
> > -   tipc_link_set_queue_limits(l, BCLINK_WIN_MIN, max_win);
> > +   tipc_link_set_queue_limits(l, max_win, max_win);
> > tipc_bcast_unlock(net);
> > return 0;
> >   }
> What is the effect of this change? Do we still have a fixed window?
[Hoang]
No, the window can now be configured as the user intends.
If you'd prefer to keep the fixed window of 50, I will update the code change.

> What happens when we have a buffer overflow? The broadcast
> send link can never be reset, if I remember correctly.
> Did you test this with high load, e.g. using the multicast_blast test
> program?
[Hoang] I tried publishing hundreds of services that use SYSTEM_IMPORTANCE -
the link was not reset when it overflowed.
For the other importance levels the overflow issue will not happen, because
oversubscription is allowed. I will give the test a try.
> 
> Regards
> ///jon




Re: [tipc-discussion] [net] tipc: fix incorrect setting window for bcast link

2020-10-01 Thread Jon Maloy




On 9/30/20 9:43 PM, Hoang Huu Le wrote:

> In commit 16ad3f4022bb
> ("tipc: introduce variable window congestion control"), we applied
> the Reno algorithm to select the window size, from the minimum window
> up to the configured maximum window, for the unicast link.
>
> However, when setting the window variable we did not make it specific
> to the unicast link. As a result the broadcast link window ends up with
> an unexpected value (i.e. the window size is constantly 32).
This was intentional, although I thought the value was 50, not 32.
In my experience we cannot afford a generous variable window
in the broadcast link the same way we do with the unicast link.
There will be too many losses and retransmissions because of the
way switches work.

> We fix this by updating the broadcast link window according to its
> configuration.
>
> Signed-off-by: Hoang Huu Le 
> ---
>   net/tipc/bcast.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
> index 940d176e0e87..abac9443b4d9 100644
> --- a/net/tipc/bcast.c
> +++ b/net/tipc/bcast.c
> @@ -585,7 +585,7 @@ static int tipc_bc_link_set_queue_limits(struct net *net, u32 max_win)
> if (max_win > TIPC_MAX_LINK_WIN)
> return -EINVAL;
> tipc_bcast_lock(net);
> -   tipc_link_set_queue_limits(l, BCLINK_WIN_MIN, max_win);
> +   tipc_link_set_queue_limits(l, max_win, max_win);
> tipc_bcast_unlock(net);
> return 0;
>   }

What is the effect of this change? Do we still have a fixed window?
What happens when we have a buffer overflow? The broadcast
send link can never be reset, if I remember correctly.
Did you test this with high load, e.g. using the multicast_blast test
program?

Regards
///jon





[tipc-discussion] [net] tipc: fix incorrect setting window for bcast link

2020-09-30 Thread Hoang Huu Le
In commit 16ad3f4022bb
("tipc: introduce variable window congestion control"), we applied
the Reno algorithm to select the window size, from the minimum window
up to the configured maximum window, for the unicast link.

However, when setting the window variable we did not make it specific
to the unicast link. As a result the broadcast link window ends up with
an unexpected value (i.e. the window size is constantly 32).

We fix this by updating the broadcast link window according to its
configuration.

Signed-off-by: Hoang Huu Le 
---
 net/tipc/bcast.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
index 940d176e0e87..abac9443b4d9 100644
--- a/net/tipc/bcast.c
+++ b/net/tipc/bcast.c
@@ -585,7 +585,7 @@ static int tipc_bc_link_set_queue_limits(struct net *net, u32 max_win)
if (max_win > TIPC_MAX_LINK_WIN)
return -EINVAL;
tipc_bcast_lock(net);
-   tipc_link_set_queue_limits(l, BCLINK_WIN_MIN, max_win);
+   tipc_link_set_queue_limits(l, max_win, max_win);
tipc_bcast_unlock(net);
return 0;
 }
-- 
2.25.1



___
tipc-discussion mailing list
tipc-discussion@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/tipc-discussion