> -----Original Message-----
> From: Paolo Abeni <pab...@redhat.com> 
> Sent: Tuesday, May 13, 2025 3:55 PM
> To: Chia-Yu Chang (Nokia) <chia-yu.ch...@nokia-bell-labs.com>; 
> ho...@kernel.org; dsah...@kernel.org; kun...@amazon.com; 
> b...@vger.kernel.org; net...@vger.kernel.org; dave.t...@gmail.com; 
> j...@mojatatu.com; k...@kernel.org; step...@networkplumber.org; 
> xiyou.wangc...@gmail.com; j...@resnulli.us; da...@davemloft.net; 
> eduma...@google.com; andrew+net...@lunn.ch; donald.hun...@gmail.com; 
> a...@fiberby.net; liuhang...@gmail.com; sh...@kernel.org; 
> linux-kselftest@vger.kernel.org; i...@kernel.org; ncardw...@google.com; Koen 
> De Schepper (Nokia) <koen.de_schep...@nokia-bell-labs.com>; g.white 
> <g.wh...@cablelabs.com>; ingemar.s.johans...@ericsson.com; 
> mirja.kuehlew...@ericsson.com; chesh...@apple.com; rs.i...@gmx.at; 
> jason_living...@comcast.com; vidhi_goel <vidhi_g...@apple.com>
> Cc: Olivier Tilmans (Nokia) <olivier.tilm...@nokia.com>
> Subject: Re: [PATCH v6 net-next 04/15] tcp: AccECN core
> 
> On 5/9/25 11:18 PM, chia-yu.ch...@nokia-bell-labs.com wrote:
> > @@ -5098,7 +5100,8 @@ static void __init tcp_struct_check(void)
> >       /* 32bit arches with 8byte alignment on u64 fields might need padding
> >        * before tcp_clock_cache.
> >        */
> > -     CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 92 + 4);
> > +     CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 96 + 4);
> 
> This looks inconsistent with the pahole output in the commit message (the 
> group looks 95 bytes wide, including the holes).

Hi Paolo,

Thanks for the feedback.
Indeed, the group sizes should be updated to match the pahole results before 
the AccECN changes are added on top.
I will fix this in patches #3, #4, and #6 for tcp_sock_write_txrx, in #3 for 
tcp_sock_write_tx, and in #10 for tcp_sock_write_rx.
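
For reference, here is a minimal standalone sketch (not kernel code; the
struct, field names, and sizes are made up for illustration) of how a
group-size assert in the spirit of CACHELINE_ASSERT_GROUP_SIZE() has to
track the pahole output, holes included, so the constant must be bumped
whenever a member grows the group:

#include <assert.h>
#include <stddef.h>

/* Toy stand-in for a struct tcp_sock hot-path group; everything here is
 * made up for illustration, only the sizes/offsets matter.
 */
struct demo_sock {
	char			group_begin[0];	/* group marker */
	unsigned long long	clock_cache;	/* 8 bytes */
	unsigned int		snd_nxt;	/* 4 bytes */
	char			group_end[0];	/* group marker */
};

/* The asserted bound must follow what pahole reports for the group: if a
 * new member grows the group past it, the build breaks until the constant
 * is updated, which is the same idea as the checks in tcp_struct_check().
 */
static_assert(offsetof(struct demo_sock, group_end) -
	      offsetof(struct demo_sock, group_begin) <= 8 + 4,
	      "demo_sock group grew, re-run pahole and update the bound");

int main(void) { return 0; }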
 
> [...]
> > @@ -382,11 +393,17 @@ static void tcp_ecn_send(struct sock *sk, struct sk_buff *skb,
> >  {
> >       struct tcp_sock *tp = tcp_sk(sk);
> >
> > -     if (tcp_ecn_mode_rfc3168(tp)) {
> > +     if (!tcp_ecn_mode_any(tp))
> > +             return;
> > +
> > +     INET_ECN_xmit(sk);
> > +     if (tcp_ecn_mode_accecn(tp)) {
> > +             tcp_accecn_set_ace(th, tp);
> > +             skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ACCECN;
> > +     } else {
> >               /* Not-retransmitted data segment: set ECT and inject CWR. */
> >               if (skb->len != tcp_header_len &&
> >                   !before(TCP_SKB_CB(skb)->seq, tp->snd_nxt)) {
> > -                     INET_ECN_xmit(sk);
> 
> The above chunk apparently changes the current behaviour for 
> !tcp_ecn_mode_accecn(), unconditionally setting ECN, while before ECN was set 
> only for non-retransmitted segments.
> 
> /P

This will be fixed in the next version.
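
In case it helps the discussion, one possible shape of the fix (only a
sketch of the direction, not the actual next-version change; the unchanged
RFC 3168 details are elided below) is to keep INET_ECN_xmit() unconditional
only on the AccECN path:

static void tcp_ecn_send(struct sock *sk, struct sk_buff *skb,
			 struct tcphdr *th, int tcp_header_len)
{
	struct tcp_sock *tp = tcp_sk(sk);

	if (!tcp_ecn_mode_any(tp))
		return;

	if (tcp_ecn_mode_accecn(tp)) {
		/* AccECN: always set ECT and fill the ACE field. */
		INET_ECN_xmit(sk);
		tcp_accecn_set_ace(th, tp);
		skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ACCECN;
	} else {
		/* Not-retransmitted data segment: set ECT and inject CWR. */
		if (skb->len != tcp_header_len &&
		    !before(TCP_SKB_CB(skb)->seq, tp->snd_nxt)) {
			INET_ECN_xmit(sk);
			/* ... existing RFC 3168 CWR handling unchanged ... */
		}
		/* ... rest of the existing RFC 3168 path unchanged ... */
	}
}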

Chia-Yu
