[plasmashell] [Bug 465755] folder view empty

2023-02-16 Thread Valerio Galdo
https://bugs.kde.org/show_bug.cgi?id=465755

--- Comment #6 from Valerio Galdo  ---
Ok,
I've unplugged the second screen cable and everything works: now I can see the
icons in Folder View. But it's a problem for me, because I need the second
screen.


[plasmashell] [Bug 465755] folder view empty

2023-02-15 Thread Valerio Galdo
https://bugs.kde.org/show_bug.cgi?id=465755

--- Comment #5 from Valerio Galdo  ---

And I noticed one more thing: if I switch the activity from Desktop view to
Folder View, they overlap.


[plasmashell] [Bug 465755] folder view empty

2023-02-15 Thread Valerio Galdo
https://bugs.kde.org/show_bug.cgi?id=465755

--- Comment #4 from Valerio Galdo  ---
Sorry for my English, I'm Italian...
The versions I use are:
KDE neon 5.27
KDE Frameworks 5.102.0
Qt 5.15.8
kernel 5.15.0.60
graphics session: X11


[plasmashell] [Bug 465755] folder view empty

2023-02-15 Thread Valerio Galdo
https://bugs.kde.org/show_bug.cgi?id=465755

--- Comment #3 from Valerio Galdo  ---
I use KDE neon and this happened after the last update. I use multiple screens,
but I never had this problem before.

On Wed 15 Feb 2023, 15:29 Nate Graham  wrote:

> https://bugs.kde.org/show_bug.cgi?id=465755
>
> Nate Graham  changed:
>
>What|Removed |Added
>
> 
>Assignee|unassigned-b...@kde.org |plasma-b...@kde.org
>  Status|REPORTED|NEEDSINFO
>  Resolution|--- |WAITINGFORINFO
>  CC||h...@kde.org,
> n...@kde.org
> Version|unspecified |5.27.0
> Product|kde |plasmashell
>   Component|general |Folder
>Target Milestone|--- |1.0
>
> --- Comment #2 from Nate Graham  ---
> What version of Plasma are you using?
>
> Did this happen after you upgraded or took some other action, or was it
> always
> like this?
>
> Do you have multiple screens in use? Have you ever had multiple screens in
> the
> past?
>
> --
> You are receiving this mail because:
> You reported the bug.


[kde] [Bug 465755] folder view empty

2023-02-15 Thread Valerio Galdo
https://bugs.kde.org/show_bug.cgi?id=465755

Valerio Galdo  changed:

   What|Removed |Added

   Platform|Other   |Neon


[kde] [Bug 465755] folder view empty

2023-02-15 Thread Valerio Galdo
https://bugs.kde.org/show_bug.cgi?id=465755

--- Comment #1 from Valerio Galdo  ---
Created attachment 156264
  --> https://bugs.kde.org/attachment.cgi?id=156264&action=edit
This is my choice for my desktop, but I can't see anything.


[kde] [Bug 465755] New: folder view empty

2023-02-15 Thread Valerio Galdo
https://bugs.kde.org/show_bug.cgi?id=465755

Bug ID: 465755
   Summary: folder view empty
Classification: I don't know
   Product: kde
   Version: unspecified
  Platform: Other
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: unassigned-b...@kde.org
  Reporter: valerio.ga...@gmail.com
  Target Milestone: ---

SUMMARY
***
NOTE: If you are reporting a crash, please try to attach a backtrace with debug
symbols.
See
https://community.kde.org/Guidelines_and_HOWTOs/Debugging/How_to_create_useful_crash_reports
***
I can't see any icons in Folder View.

STEPS TO REPRODUCE
1. 
2. 
3. 

OBSERVED RESULT


EXPECTED RESULT


SOFTWARE/OS VERSIONS
Windows: 
macOS: 
Linux/KDE Plasma: 
(available in About System)
KDE Plasma Version: 
KDE Frameworks Version: 
Qt Version: 

ADDITIONAL INFORMATION


Re: [ovs-dev] [PATCH v6] conntrack: Properly unNAT inner header of related traffic

2023-02-09 Thread Paolo Valerio
Hi Ales,

I just have two small nits, but other than that the patch LGTM.

Acked-by: Paolo Valerio 

Ales Musil  writes:

> The inner header was not handled properly.
> Simplify the code which allows proper handling
> of the inner headers.
>
> Reported-at: https://bugzilla.redhat.com/2137754
> Signed-off-by: Ales Musil 
> ---
> v6: Rebase on top of current master.
> Address comments from Paolo:
> - Add test case for ICMP related in reply direction.
> - Fix a mistake when the inner header was using
> wrong nat_action.
> v5: Rebase on top of current master.
> Address comments from Dumitru:
> - Use explicit struct sizes for inner_l3 pointer.
> - Use copied conn_key for reverse operation instead
> of double reverse of the original one.
> - Update the test case to use separate zone instead
> of default one.
> v4: Rebase on top of current master.
> Use output of ovs-pcap in tests rather than tcpdump.
> v3: Rebase on top of current master.
> Update the BZ reference.
> Update the test case.
> ---
>  lib/conntrack.c | 254 ++--
>  tests/system-traffic.at | 107 +
>  2 files changed, 198 insertions(+), 163 deletions(-)
>
> diff --git a/lib/conntrack.c b/lib/conntrack.c
> index 550b2be9b..3162924ca 100644
> --- a/lib/conntrack.c
> +++ b/lib/conntrack.c
> @@ -764,109 +764,61 @@ handle_alg_ctl(struct conntrack *ct, const struct 
> conn_lookup_ctx *ctx,
>  }
>  
>  static void
> -pat_packet(struct dp_packet *pkt, const struct conn *conn)
> +pat_packet(struct dp_packet *pkt, const struct conn_key *key)
>  {
> -if (conn->nat_action & NAT_ACTION_SRC) {
> -if (conn->key.nw_proto == IPPROTO_TCP) {
> -struct tcp_header *th = dp_packet_l4(pkt);
> -packet_set_tcp_port(pkt, conn->rev_key.dst.port, th->tcp_dst);
> -} else if (conn->key.nw_proto == IPPROTO_UDP) {
> -struct udp_header *uh = dp_packet_l4(pkt);
> -packet_set_udp_port(pkt, conn->rev_key.dst.port, uh->udp_dst);
> -}
> -} else if (conn->nat_action & NAT_ACTION_DST) {
> -if (conn->key.nw_proto == IPPROTO_TCP) {
> -packet_set_tcp_port(pkt, conn->rev_key.dst.port,
> -conn->rev_key.src.port);
> -} else if (conn->key.nw_proto == IPPROTO_UDP) {
> -packet_set_udp_port(pkt, conn->rev_key.dst.port,
> -conn->rev_key.src.port);
> -}
> +if (key->nw_proto == IPPROTO_TCP) {
> +packet_set_tcp_port(pkt, key->dst.port, key->src.port);
> +} else if (key->nw_proto == IPPROTO_UDP) {
> +packet_set_udp_port(pkt, key->dst.port, key->src.port);
>  }
>  }
>  
> -static void
> -nat_packet(struct dp_packet *pkt, const struct conn *conn, bool related)
> +static uint16_t
> +nat_action_reverse(uint16_t nat_action)
>  {
> -if (conn->nat_action & NAT_ACTION_SRC) {
> -pkt->md.ct_state |= CS_SRC_NAT;
> -if (conn->key.dl_type == htons(ETH_TYPE_IP)) {
> -struct ip_header *nh = dp_packet_l3(pkt);
> -packet_set_ipv4_addr(pkt, &nh->ip_src,
> - conn->rev_key.dst.addr.ipv4);
> -} else {
> -struct ovs_16aligned_ip6_hdr *nh6 = dp_packet_l3(pkt);
> -packet_set_ipv6_addr(pkt, conn->key.nw_proto,
> - nh6->ip6_src.be32,
> - &conn->rev_key.dst.addr.ipv6, true);
> -}
> -if (!related) {
> -pat_packet(pkt, conn);
> -}
> -} else if (conn->nat_action & NAT_ACTION_DST) {
> -pkt->md.ct_state |= CS_DST_NAT;
> -if (conn->key.dl_type == htons(ETH_TYPE_IP)) {
> -struct ip_header *nh = dp_packet_l3(pkt);
> -packet_set_ipv4_addr(pkt, &nh->ip_dst,
> - conn->rev_key.src.addr.ipv4);
> -} else {
> -struct ovs_16aligned_ip6_hdr *nh6 = dp_packet_l3(pkt);
> -packet_set_ipv6_addr(pkt, conn->key.nw_proto,
> - nh6->ip6_dst.be32,
> - &conn->rev_key.src.addr.ipv6, true);
> -}
> -if (!related) {
> -pat_packet(pkt, conn);
> -}
> +if (nat_action & NAT_ACTION_SRC) {
> +VLOG_INFO("original SRC");

Not sure this is useful. I'd remove it including the one below.

> +nat_action ^= NAT_ACTION_SRC;
> +nat_action |= NAT_ACTION_DST;
&

Re: firmware-iwlwifi disappeared from sid

2023-02-06 Thread valerio




On 06/02/23 15:43, peterpunk wrote:

On Mon, 6 Feb 2023 14:34:27 +0100
Enrico Rossi  wrote:


Hi Valerio,


are you sure?
I tried it but I get an error...

I just fixed it by adding non-free-firmware as Enrico
said. I didn't do anything else.
I'm on unstable.

well, for now it's probably only for sid...


the example I wrote for bookworm is the current testing, which I
also use.


I confirm that non-free-firmware works on testing ;)


hi everyone,
I probably made some typing mistake...

I tried again and now it worked.

valerio

p.s.
please don't also reply to me privately



Re: firmware-iwlwifi disappeared from sid

2023-02-06 Thread valerio




On 06/02/23 14:32, Luca Costantino wrote:

I assume you ran apt-get update?



no, apt update ..

valerio



Re: firmware-iwlwifi disappeared from sid

2023-02-06 Thread valerio




On 06/02/23 14:19, dot...@gmail.com wrote:

On Mon, Feb 6, 2023 at 1:30 PM valerio 
wrote:


well, for now it's probably only for sid...



Could be, but it seems to contradict the instructions at
https://wiki.debian.org/Firmware.
Or rather, that page talks about possibly out-of-date information.

What is the error?



hi,
I can't retrieve it right now, but it said that non-free-firmware
did not exist in the repositories


valerio



a.





Re: firmware-iwlwifi disappeared from sid

2023-02-06 Thread valerio




On 06/02/23 13:07, dot...@gmail.com wrote:

On Mon, Feb 6, 2023 at 12:59 PM valerio 
wrote:


are you sure?
I tried it but I get an error...



I just fixed it by adding non-free-firmware as Enrico said.
I didn't do anything else.
I'm on unstable.

What error do you get?

a.


hi,
well, for now it's probably only for sid...

valerio



Re: [ovs-dev] [PATCH v5] conntrack: Properly unNAT inner header of related traffic

2023-02-06 Thread Paolo Valerio
Ales Musil  writes:

> On Sun, Feb 5, 2023 at 7:17 PM Paolo Valerio  wrote:
>
> Ales Musil  writes:
>
> > The inner header was not handled properly.
> > Simplify the code which allows proper handling
> > of the inner headers.
> >
> > Reported-at: https://bugzilla.redhat.com/2137754
> > Signed-off-by: Ales Musil 
> > ---
> > v5: Rebase on top of current master.
> >     Address comments from Dumitru:
> >     - Use explicit struct sizes for inner_l3 pointer.
> >     - Use copied conn_key for reverse operation instead
> >     of double reverse of the original one.
> >     - Update the test case to use separate zone instead
> >     of default one.
> > v4: Rebase on top of current master.
> >     Use output of ovs-pcap in tests rather than tcpdump.
> > v3: Rebase on top of current master.
> >     Update the BZ reference.
> >     Update the test case.
> > ---
>
> Hello Ales,
>
>
> Hi Paolo,
>
> thank you for the review.
>  
>
>
> thanks for the patch.
> One noticeable thing is that the patch doesn't enforce the commit flag
> as it happens for the kernel datapath. This seems what you want
> considering the flows in the test.
>
>
> It wasn't doing it even before, this seems to be out of scope of this patch
> as this tries to fix the problem with inner header translation. However I 
> agree
> that userspace and kernel should behave the same way, if you don't mind it
> could
> be a follow up patch.
>  

in general, this doesn't happen for the kernel datapath either for
the reply direction, see "conntrack - ICMP related with NAT". That was
the point I wanted to make when asking if you happened to test it in the reply
dir without committing.

Keeping it out of this patch sounds good to me.

>
>
> E.g. with this diff on top of your patch:
>
> diff --git a/tests/system-traffic.at b/tests/system-traffic.at
> index 798343877..b309635b9 100644
> --- a/tests/system-traffic.at
> +++ b/tests/system-traffic.at
> @@ -7147,7 +7147,6 @@ dnl Send traffic from client to CT, do DNAT if the
> traffic is new otherwise send
>  AT_DATA([flows.txt], [dnl
>  table=0,ip,actions=ct(table=1,zone=42,nat)
>  table=1,in_port=ovs-client,ip,ct_state=+trk+new,actions=ct(commit,table=
> 2,zone=42,nat(dst(192.168.10.20))
> 
> -table=1,in_port=ovs-client,icmp,ct_state=+trk+rel,actions=ct(commit,table=
> 2,zone=42,nat)
>  table=1,ip,actions=resubmit(,2)
>  table=2,in_port=ovs-client,ip,ct_state=+trk+new,actions=output:ovs-server
>  table=2,in_port=ovs-client,icmp,ct_state=+trk+rel,actions=
> output:ovs-server
> @@ -7176,8 +7175,7 @@ AT_CHECK([ovs-appctl revalidator/purge], [0])
>  AT_CHECK([ovs-ofctl -O OpenFlow15 dump-flows br0 | ofctl_strip | sort ],
> [0], [dnl
>   n_packets=3, n_bytes=154, reset_counts ip 
> actions=ct(table=1,zone=42,nat)
>   table=1, n_packets=1, n_bytes=42, reset_counts ct_state=
> +new+trk,ip,in_port=1 actions=ct(commit,table=2,zone=42,nat(dst=
> 192.168.10.20))
> - table=1, n_packets=1, n_bytes=42, reset_counts ip actions=resubmit(,2)
> - table=1, n_packets=1, n_bytes=70, reset_counts ct_state=
> +rel+trk,icmp,in_port=1 actions=ct(commit,table=2,zone=42,nat)
> + table=1, n_packets=2, n_bytes=112, reset_counts ip actions=resubmit(,2)
>   table=2, n_packets=1, n_bytes=42, reset_counts ct_state=
> +new+trk,ip,in_port=1 actions=output:2
>   table=2, n_packets=1, n_bytes=42, reset_counts ct_state=
> +rpl+trk,ip,in_port=2 actions=output:1
>   table=2, n_packets=1, n_bytes=70, reset_counts ct_state=
> +rel+trk,icmp,in_port=1 actions=output:2
>
> the test passes for the userspace datapath, but fails for the kernel.
>
> I have a question, though, did you happen to test for both datapaths
> what happens if a middlebox sends the icmp error from the reply
> direction instead without your patch?
> I assume things worked (without commit for both datapaths) in that case.
>
>
> Another thing that IMO could be nice to add is a test case for the same
> scenario, but in the reply direction. At least, both directions will be
> covered and verified.
>
>
> I've added a test case for the reply direction. It actually caught small
> mistake I made
> which should be both in v6.
>  
>
>
> Paolo
>
> >  lib/conntrack.c         | 252 ++--
> >  tests/system-traffic.at |  66 +++
> >  2 files changed, 155 inse

Re: firmware-iwlwifi disappeared from sid

2023-02-06 Thread valerio




On 05/02/23 16:05, Enrico Rossi wrote:

Hi,

you need to add non-free-firmware to the components.

https://wiki.debian.org/Firmware

e.g. /etc/apt/sources.list

deb http://deb.debian.org/debian/ bookworm main non-free contrib 
non-free-firmware
deb-src http://deb.debian.org/debian/ bookworm main non-free contrib 
non-free-firmware

deb http://deb.debian.org/debian-security/ bookworm-security main non-free 
contrib non-free-firmware
deb-src http://deb.debian.org/debian-security/ bookworm-security main non-free 
contrib non-free-firmware

E.




hi,
are you sure?
I tried it but I get an error...

I'm on testing

valerio



Re: [ovs-dev] [PATCH v5] conntrack: Properly unNAT inner header of related traffic

2023-02-05 Thread Paolo Valerio
Ales Musil  writes:

> The inner header was not handled properly.
> Simplify the code which allows proper handling
> of the inner headers.
>
> Reported-at: https://bugzilla.redhat.com/2137754
> Signed-off-by: Ales Musil 
> ---
> v5: Rebase on top of current master.
> Address comments from Dumitru:
> - Use explicit struct sizes for inner_l3 pointer.
> - Use copied conn_key for reverse operation instead
> of double reverse of the original one.
> - Update the test case to use separate zone instead
> of default one.
> v4: Rebase on top of current master.
> Use output of ovs-pcap in tests rather than tcpdump.
> v3: Rebase on top of current master.
> Update the BZ reference.
> Update the test case.
> ---

Hello Ales,

thanks for the patch.
One noticeable thing is that the patch doesn't enforce the commit flag
as it happens for the kernel datapath. This seems what you want
considering the flows in the test.

E.g. with this diff on top of your patch:

diff --git a/tests/system-traffic.at b/tests/system-traffic.at
index 798343877..b309635b9 100644
--- a/tests/system-traffic.at
+++ b/tests/system-traffic.at
@@ -7147,7 +7147,6 @@ dnl Send traffic from client to CT, do DNAT if the 
traffic is new otherwise send
 AT_DATA([flows.txt], [dnl
 table=0,ip,actions=ct(table=1,zone=42,nat)
 
table=1,in_port=ovs-client,ip,ct_state=+trk+new,actions=ct(commit,table=2,zone=42,nat(dst(192.168.10.20))
-table=1,in_port=ovs-client,icmp,ct_state=+trk+rel,actions=ct(commit,table=2,zone=42,nat)
 table=1,ip,actions=resubmit(,2)
 table=2,in_port=ovs-client,ip,ct_state=+trk+new,actions=output:ovs-server
 table=2,in_port=ovs-client,icmp,ct_state=+trk+rel,actions=output:ovs-server
@@ -7176,8 +7175,7 @@ AT_CHECK([ovs-appctl revalidator/purge], [0])
 AT_CHECK([ovs-ofctl -O OpenFlow15 dump-flows br0 | ofctl_strip | sort ], [0], 
[dnl
  n_packets=3, n_bytes=154, reset_counts ip actions=ct(table=1,zone=42,nat)
  table=1, n_packets=1, n_bytes=42, reset_counts ct_state=+new+trk,ip,in_port=1 
actions=ct(commit,table=2,zone=42,nat(dst=192.168.10.20))
- table=1, n_packets=1, n_bytes=42, reset_counts ip actions=resubmit(,2)
- table=1, n_packets=1, n_bytes=70, reset_counts 
ct_state=+rel+trk,icmp,in_port=1 actions=ct(commit,table=2,zone=42,nat)
+ table=1, n_packets=2, n_bytes=112, reset_counts ip actions=resubmit(,2)
  table=2, n_packets=1, n_bytes=42, reset_counts ct_state=+new+trk,ip,in_port=1 
actions=output:2
  table=2, n_packets=1, n_bytes=42, reset_counts ct_state=+rpl+trk,ip,in_port=2 
actions=output:1
  table=2, n_packets=1, n_bytes=70, reset_counts 
ct_state=+rel+trk,icmp,in_port=1 actions=output:2

the test passes for the userspace datapath, but fails for the kernel.

I have a question, though, did you happen to test for both datapaths
what happens if a middlebox sends the icmp error from the reply
direction instead without your patch?
I assume things worked (without commit for both datapaths) in that case.

Another thing that IMO could be nice to add is a test case for the same
scenario, but in the reply direction. At least, both directions will be
covered and verified.

Paolo

>  lib/conntrack.c | 252 ++--
>  tests/system-traffic.at |  66 +++
>  2 files changed, 155 insertions(+), 163 deletions(-)
>
> diff --git a/lib/conntrack.c b/lib/conntrack.c
> index 550b2be9b..b207f379d 100644
> --- a/lib/conntrack.c
> +++ b/lib/conntrack.c
> @@ -764,109 +764,59 @@ handle_alg_ctl(struct conntrack *ct, const struct 
> conn_lookup_ctx *ctx,
>  }
>  
>  static void
> -pat_packet(struct dp_packet *pkt, const struct conn *conn)
> +pat_packet(struct dp_packet *pkt, const struct conn_key *key)
>  {
> -if (conn->nat_action & NAT_ACTION_SRC) {
> -if (conn->key.nw_proto == IPPROTO_TCP) {
> -struct tcp_header *th = dp_packet_l4(pkt);
> -packet_set_tcp_port(pkt, conn->rev_key.dst.port, th->tcp_dst);
> -} else if (conn->key.nw_proto == IPPROTO_UDP) {
> -struct udp_header *uh = dp_packet_l4(pkt);
> -packet_set_udp_port(pkt, conn->rev_key.dst.port, uh->udp_dst);
> -}
> -} else if (conn->nat_action & NAT_ACTION_DST) {
> -if (conn->key.nw_proto == IPPROTO_TCP) {
> -packet_set_tcp_port(pkt, conn->rev_key.dst.port,
> -conn->rev_key.src.port);
> -} else if (conn->key.nw_proto == IPPROTO_UDP) {
> -packet_set_udp_port(pkt, conn->rev_key.dst.port,
> -conn->rev_key.src.port);
> -}
> +if (key->nw_proto == IPPROTO_TCP) {
> +packet_set_tcp_port(pkt, key->dst.port, key->src.port);
> +} else if (key->nw_proto == IPPROTO_UDP) {
> +packet_set_udp_port(pkt, key->dst.port, key->src.port);
>  }
>  }
>  
> -static void
> -nat_packet(struct dp_packet *pkt, const struct conn *conn, bool related)
> +static uint16_t
> +nat_action_reverse(uint16_t nat_action)
>  {
> -if (conn->nat_actio

Re: [Python] [python] exclude blank or commented lines

2023-02-05 Thread Valerio Pachera
On Wed, 1 Feb 2023 at 12:43, Marco Giusti 
wrote:

> for line in open(filename):
>  if line.strip() and not line.startswith("#"):
>  clean.append(line)
>
>
I think it needs to be changed like this, otherwise lines that start with
whitespace followed by a hash get added to the list:

clean = []
for line in open(filename):
    line = line.strip()
    if line and not line.startswith("#"):
        clean.append(line)


Re: [Python] [python] exclude blank or commented lines

2023-02-01 Thread Valerio Pachera
On Wed, 1 Feb 2023 at 10:25, Valerio Pachera 
wrote:

> Hi everyone, I'd like to obtain the same result as
>
> grep -Ev '(^[[:blank:]]*$|^#)'
>

For now I've used this approach:

with open('file.txt') as f:
    p = re.compile('^\s*$|^#.*')
    clean = [line for line in f.readlines() if not re.match(p, line)]

clean is a list with the "good" lines of the file.
Note that they keep the \n at the end.
So to print the contents I just need to join on an empty string.

  print(''.join(clean))

In my specific case I then want to put everything on a single line.
In that case, I have to remove the line ending (using strip) and do the join
using a space.

with open('template.txt') as template_content:
    p = re.compile('^\s*$|^#.*')
    clean = [line.strip() for line in template_content.readlines() if not
             re.match(p, line)]
print(' '.join(clean))

I'm still curious to see other implementations :-)
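A minimal regex-free sketch of the same filtering (matching the original grep: drop
lines that are blank or whose first character is '#', then join the rest with
spaces), just as another possible implementation:

with open('template.txt') as f:
    # keep only lines that are non-blank and do not start with '#' in column 0
    clean = [line.strip() for line in f
             if line.strip() and not line.startswith('#')]
print(' '.join(clean))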


[Python] [python] exclude blank or commented lines

2023-02-01 Thread Valerio Pachera
Hi everyone, I'd like to obtain the same result as

grep -Ev '(^[[:blank:]]*$|^#)'

I thought of this:

p = re.compile('^\s*$|^#.*', re.MULTILINE)

\s
Matches any whitespace character; this is equivalent to the class [
\t\n\r\f\v].

If I try to replace the matching lines with an empty string, though, the
newline is not removed.

re.sub(p, '', mystring)
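A minimal sketch of one way around this (assuming the text is already in a
string): match the newline as part of the pattern instead of relying on $, so
that re.sub drops it together with the line:

import re

mystring = "alpha\n\n   \n# comment\nbeta\n"   # hypothetical sample input

# $ in MULTILINE mode matches *before* the newline, which is why the newline
# survived; matching the newline explicitly makes re.sub remove it as well.
p = re.compile(r'^[ \t]*\n|^#.*\n?', re.MULTILINE)

print(re.sub(p, '', mystring))   # -> "alpha\nbeta\n"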


[Corpora-List] CD-MAKE 2023 special session on “Multi-Perspectivist Data and Learning”

2023-01-25 Thread Valerio Basile via Corpora
Dear colleagues,

We are happy to announce the call for papers for our one-day special
session on “Multi-Perspectivist Data and Learning 2023” at the upcoming CD
MAKE 2023 conference:


https://cd-make.net/special-sessions/multi-perspectivist-data-and-learning/

***
Description, scope and aims

Many Artificial Intelligence applications are based on supervised machine
learning (ML), which ultimately grounds on manually annotated data. The
annotation process (i.e., ground-truthing) is often performed in terms of a
majority vote and this has been proved to be often problematic, as
highlighted by recent studies on the evaluation of ML models. Recently, a
different paradigm for ground-truthing has started to emerge, called data
perspectivism, which moves away from traditional majority aggregated
datasets, towards the adoption of methods that integrate different opinions
and perspectives within the knowledge representation, training, and
evaluation steps of ML processes, by adopting a non-aggregation policy.
This alternative paradigm obviously implies a radical change in how we
develop and evaluate ML systems: such ML systems have to take into account
multiple, uncertain, and potentially mutually conflicting views. This
obviously brings both opportunities and difficulties: novel models or
training techniques may need to be designed, and the validation phase may
become more complex. Nonetheless, initial works have shown that data
perspectivism can lead to better performances, and could also have
important implications in terms of human-in-the-loop and interpretable AI,
as well as in regard to the ethical issues or concerns related to the use
of AI systems.

The scope of this special session is to attract contributions related to
the management of subjective, crowd-sourced, multi-perspective, or
otherwise non-aggregated data in ground-truthing, machine learning, and
more generally artificial intelligence systems.

Invited contributions: full research papers and research in progress papers.

***

Topics of interest:

- Subjective, uncertain, or conflicting information in annotation and
crowdsourcing processes;

- Limits and problems with standard data annotation and aggregation
processes;

- Theoretical studies on the problem of learning from multi-rater and
non-aggregated data;

- Participation mechanisms/incentives/gamification for rater engagement and
crowdsourcing;

- Ethical and legal concerns related to annotation and aggregation
processes in ground-truthing;

- Creation and documentation of multi-rater and non-aggregated datasets and
benchmarks;

- Development of ML algorithms for multi-rater and non-aggregated data;

- Techniques for the evaluation of ML systems based on multi-rater and
non-aggregated data;

- Applications of data perspectivism and non-aggregated data to eXplainable
AI, human-in-the-loop AI and algorithmic fairness;

- Experimental and application studies of ML/AI systems on multi-rater and
non-aggregated data, in possibly different application domains (e.g. NLP,
medicine, legal studies, etc.)

***

Important dates:

Submission Deadline March 27, 2023 (AoE)

Author Notification June 01, 2023

Proceedings Version June 22, 2023 (AoE)

Conference August 29 – September 01, 2023

***

Special Session Chairs:

Federico Cabitza (University of Milano-Bicocca, Italy)

Andrea Campagner (University of Milano-Bicocca, Italy)

Valerio Basile (University of Turin, Italy)

Program Committee (provisional):

Nahuel Costa Cortez, University of Oviedo

Elisa Leonardelli, Fondazione Bruno Kessler (FBK)

Julian Lienen, Paderborn University

Gavin Abercrombie, Heriot-Watt University

Simona Frenda, University of Turin

Marília Barandas, Fraunhofer Portugal AICOS

Duarte Folgado, Fraunhofer Portugal AICOS

Barbara Plank, Ludwig Maximilian University of Munich

Tommaso Caselli, Rijksuniversiteit Groningen

***

Related readings

[1] Cabitza, F., Campagner, A., Basile, V. (2023)

Toward a Perspectivist Turn in Ground Truthing for Predictive Computing

Proceedings of the AAAI Conference on Artificial Intelligence

(extended preprint at: https://arxiv.org/pdf/2109.04270.pdf)

[2] V. Basile (2020)

It’s the End of the Gold Standard as we Know it. On the Impact of
Pre-aggregation on the Evaluation of Highly Subjective Tasks

Proceedings of the AIxIA 2020 Discussion Papers Workshop

[3] F. Cabitza, A. Campagner, L. M. Sconfienza (2020)

As if sand were stone. New concepts and metrics to probe the ground on
which to build trustable AI

BMC Medical Informatics and Decision Making

[4] Plank, B. (2022).

The 'Problem' of Human Label Variation: On Ground Truth in Data, Modeling
and Evaluation.

arXiv preprint arXiv:

[MediaWiki-l] Extra Namespaces | Conflict Management

2023-01-15 Thread Valerio
Hi,
I made a big mistake.
I defined a custom namespace in conflict with SocialProfile 
<https://www.mediawiki.org/wiki/Extension:SocialProfile>  extension:

# # # # # # # # # # #
#
#   EXTRA NAMESPACES 
# 
# # # # # # # # # # #

##   BIOGRAPHY  ##
define("NS_BIOGRAPHY", 202);
define("NS_BIOGRAPHY_TALK", 203);
 
$wgExtraNamespaces[NS_BIOGRAPHY] = "Biography";
$wgExtraNamespaces[NS_BIOGRAPHY_TALK] = "Biography_talk";


202 User_profile:   NS_USER_PROFILE
203 User_profile_talk:  NS_USER_PROFILE_TALK
(since r93317 <https://www.mediawiki.org/wiki/Special:Code/MediaWiki/93317>)
What can I do to correct this conflict, other than uninstalling the SocialProfile extension?
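
One possible fix, sketched below on the assumption that no Biography pages have
been created yet and that IDs 3000/3001 are unused on this wiki, is to move the
custom namespace out of SocialProfile's 202/203 range in LocalSettings.php:

##   BIOGRAPHY (moved to an unused ID range; 3000/3001 assumed free)  ##
define("NS_BIOGRAPHY", 3000);
define("NS_BIOGRAPHY_TALK", 3001);

$wgExtraNamespaces[NS_BIOGRAPHY] = "Biography";
$wgExtraNamespaces[NS_BIOGRAPHY_TALK] = "Biography_talk";

If Biography pages had already been saved under 202/203, they would presumably
need to be fixed up afterwards (the namespaceDupes.php maintenance script is the
usual tool for pages stranded by a namespace change).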

Thanks so much.

Valerio


The Traditional Tune Archive
   The Semantic Index of North American, British and
   Irish traditional instrumental music with annotation.
   ---
   Web: http://tunearch.org <http://tunearch.org/>
   Mail: v...@tunearch.org <mailto:v...@tunearch.org>
___
My Special:Version

Installed software
Product Version
MediaWiki <https://www.mediawiki.org/>  1.35.7 
<https://www.mediawiki.org/wiki/MediaWiki_1.35> (2912741) 
<https://gerrit.wikimedia.org/g/mediawiki/core.git/+/2912741f0bd6dcec405d9d3bc78534a22012d1e0>
00:00, July 13, 2022
PHP <https://php.net/>  7.3.29-1+ubuntu18.04.1+deb.sury.org+1 (apache2handler)
MariaDB <https://mariadb.org/>  10.1.48-MariaDB-0ubuntu0.18.04.1
ICU <http://site.icu-project.org/>  65.1
Lua <http://www.lua.org/>   5.1.5
LilyPond <http://lilypond.org/> 2.18.2
Elasticsearch <https://www.elastic.co/products/elasticsearch>   6.8.18
SocialProfile <https://www.mediawiki.org/wiki/Extension:SocialProfile>  1.14 
(0d678ab) 
<https://gerrit.wikimedia.org/g/mediawiki/extensions/SocialProfile/+/0d678abb40890b6e48a1abffde2df999f51b82c2>03:03,
 November 22, 2020




Bug#1027721: gdm3: fill disk with same errors

2023-01-02 Thread Valerio
Package: gdm3
Version: 3.38.2.1-1
Severity: important

Dear Maintainer,

   * What led up to the situation?
 Playing SuperTuxKart 1.4
   * What exactly did you do (or not do) that was effective (or
 ineffective)?
 sorry, it happens on a random track in a random situation, but each time during a race
   * What was the outcome of this action?
 GDM fills the disk with millions of identical error lines.
 The error is this:
 /usr/libexec/gdm-x-session[n]: amdgpu: amdgpu_cs_query_fence_status failed
 and is written at the end of the following three files:
 /var/log/syslog
 /var/log/messages
 /var/log/user.log
 I keep the root partition with 100 GB of free space; when the error is
 triggered, the three files grow to 33 GB each. When the root partition is
 full, STK freezes, GDM crashes, and CTRL+ALT+Fx doesn't work; I had to
 connect remotely via SSH and remove user.log to free a minimum of space,
 then manually remove those repeated error lines.
   * What outcome did you expect instead?
 GDM should write the error only once, to avoid saturating the root partition

 Other notes:
 CPU / Graphics card:
 MB: Asus TUF Gaming A520M-Plus
 CPU: AMD Ryzen 5 5600G box (/proc/cpuinfo: AMD Ryzen 5 5600G with Radeon
Graphics)
 RAM: DDR4 Kingston Fury Beast 3200MHz 2x8GB
 SSD: NVME ADATA GAMMIX S11P 512GB Gen3
 Supply: ADJ 700W
 Monitor Yashi Pioneers YZ2407 @1920x1080

 Further details at: https://gitlab.gnome.org/GNOME/gdm/-/issues/829
 In Synaptic, using "force version", 3.38 is the only version available.
 I have no idea how to force a newer supported version such as 3.41.

 For now I have temporarily switched to LightDM, as the system was unusable.


-- System Information:
Debian Release: 11.6
  APT prefers stable-updates
  APT policy: (500, 'stable-updates'), (500, 'stable-security'), (500,
'stable')
Architecture: amd64 (x86_64)

Kernel: Linux 5.10.0-20-amd64 (SMP w/12 CPU threads)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE,
TAINT_UNSIGNED_MODULE
Locale: LANG=it_IT.UTF-8, LC_CTYPE=it_IT.UTF-8 (charmap=UTF-8), LANGUAGE not
set
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages gdm3 depends on:
ii  accountsservice   0.6.55-3
ii  adduser   3.118
ii  dbus  1.12.24-0+deb11u1
ii  dconf-cli 0.38.0-2
ii  dconf-gsettings-backend   0.38.0-2
ii  debconf [debconf-2.0] 1.5.77
ii  gir1.2-gdm-1.03.38.2.1-1
ii  gnome-session [x-session-manager] 3.38.0-4
ii  gnome-session-bin 3.38.0-4
ii  gnome-session-common  3.38.0-4
ii  gnome-settings-daemon 3.38.2-1
ii  gnome-shell   3.38.6-1~deb11u1
ii  gnome-terminal [x-terminal-emulator]  3.38.3-1
ii  gsettings-desktop-schemas 3.38.0-2
ii  konsole [x-terminal-emulator] 4:20.12.3-1
ii  kwin-x11 [x-window-manager]   4:5.20.5-1
ii  libaccountsservice0   0.6.55-3
ii  libaudit1 1:3.0-2
ii  libc6 2.31-13+deb11u5
ii  libcanberra-gtk3-00.30-7
ii  libcanberra0  0.30-7
ii  libgdk-pixbuf-2.0-0   2.42.2+dfsg-1+deb11u1
ii  libgdm1   3.38.2.1-1
ii  libglib2.0-0  2.66.8-1
ii  libglib2.0-bin2.66.8-1
ii  libgtk-3-03.24.24-4+deb11u2
ii  libkeyutils1  1.6.1-2
ii  libpam-modules1.4.0-9+deb11u1
ii  libpam-runtime1.4.0-9+deb11u1
ii  libpam-systemd247.3-7+deb11u1
ii  libpam0g  1.4.0-9+deb11u1
ii  librsvg2-common   2.50.3+dfsg-1
ii  libselinux1   3.1-3
ii  libsystemd0   247.3-7+deb11u1
ii  libx11-6  2:1.7.2-1
ii  libxau6   1:1.0.9-1
ii  libxcb1   1.14-3
ii  libxdmcp6 1:1.1.2-3
ii  lsb-base  11.1.0
ii  marco [x-window-manager]  1.24.1-3
ii  mate-session-manager [x-session-manager]  1.24.1-2
ii  mate-terminal [x-terminal-emulator]   1.24.1-1
ii  mutter [x-window-manager] 3.38.6-2~deb11u2
ii  plasma-workspace [x-session-manager]  4:5.20.5-6
ii  policykit-1   0.105-31+deb11u1
ii  procps2:3.3.17-5
ii  ucf   3.0043
ii  x11-common1:7.7+22
ii  x11-xserver-utils  

Re: mariadb-server

2022-12-21 Thread valerio




On 21/12/22 18:17, Davide Prina wrote:

valerio wrote:


when upgrading I get an error on mariadb-server:

Configurazione di mariadb-server-10.6 (1:10.6.11-1)...
dpkg: errore nell'elaborare il pacchetto mariadb-server-10.6 (--configure):
   il sottoprocesso installato pacchetto mariadb-server-10.6 script
post-installation ha restituito lo stato di errore 1
dpkg: problemi con le dipendenze impediscono la configurazione di
default-mysql-server:
   default-mysql-server dipende da mariadb-server-10.6; tuttavia:
Il pacchetto mariadb-server-10.6 non è ancora configurato.




hi, and thanks,


in theory this shouldn't happen (and a bug report should be opened),
but given that:

$ apt show mariadb-server-10.6 | grep "default-mysql-server"
$

you should be able to fix it by doing this:
# apt install mariadb-server-10.6
# apt upgrade
# apt dist-upgrade


no, that doesn't fix it



if you still get the error, then do this:
# apt remove default-mysql-server
# apt install mariadb-server-10.6
# apt install default-mysql-server
# apt upgrade
# apt dist-upgrade

that doesn't either



you shouldn't have any problem removing the package, unless you have one
of the following three installed:

$ apt rdepends default-mysql-server | grep -v "Raccomanda\|Consiglia"

Reverse Depends:
  |Dipende: python3-testing.mysqld (>= 1.0.2)
  |Dipende: zoph
  |Dipende: openstack-cloud-services



I've tried various things, including removing both mariadb and the mysql
server and then reinstalling them; it's a bit of a nuisance, but at least
if the packages get updated, maybe they'll start working again...


My suspicion is that it may depend on some setting of mine, because I can't
find any reports about it, not even on the dedicated mailing list. The only
thing I found is a problem with the previous version that even removed
mariadb-server during the upgrade, but that's not my case.


I was thinking of reinstalling everything (apache, mysql and php), or at
least checking the settings.


reading around, the advice was to remove the current version (in case of
an upgrade) and then install the updated version.


now I'll check all the configurations, then decide whether to open a bug...

thanks

valerio






Ciao
Davide

--
My privacy is none of your business
https://noyb.eu/it





Re: [ovs-dev] [PATCH v5 2/2] openflow: Add extension to flush CT by generic match

2022-12-16 Thread Paolo Valerio
Ales Musil  writes:

> Add extension that allows to flush connections from CT
> by specifying fields that the connections should be
> matched against. This allows to match only some fields
> of the connection e.g. source address for orig direrction.
>
> Reported-at: https://bugzilla.redhat.com/2120546
> Signed-off-by: Ales Musil 
> ---
> v5: Add missing usage and man for ovs-ofctl command.
> v4: Allow ovs-ofctl flush/conntrack without any zone/tuple.
> v3: Rebase on top of master.
> v2: Rebase on top of master.
> Use suggestion from Ilya.
> ---

Thanks Ales.

LGTM,

Acked-by: Paolo Valerio 



Re: [ovs-dev] [PATCH v4 2/2] openflow: Add extension to flush CT by generic match

2022-12-16 Thread Paolo Valerio
Ales Musil  writes:

> Add extension that allows to flush connections from CT
> by specifying fields that the connections should be
> matched against. This allows to match only some fields
> of the connection e.g. source address for orig direrction.
>
> Reported-at: https://bugzilla.redhat.com/2120546
> Signed-off-by: Ales Musil 
> ---
> v4: Allow ovs-ofctl flush/conntrack without any zone/tuple.
> v3: Rebase on top of master.
> v2: Rebase on top of master.
> Use suggestion from Ilya.
> ---
>  NEWS   |   3 +
>  include/openflow/nicira-ext.h  |  30 +++
>  include/openvswitch/ofp-msgs.h |   4 +
>  include/openvswitch/ofp-util.h |   4 +
>  lib/ofp-bundle.c   |   1 +
>  lib/ofp-ct-util.c  | 146 +
>  lib/ofp-ct-util.h  |   9 ++
>  lib/ofp-print.c|  20 +
>  lib/ofp-util.c |  25 ++
>  lib/rconn.c|   1 +
>  ofproto/ofproto-dpif.c |   8 +-
>  ofproto/ofproto-provider.h |   7 +-
>  ofproto/ofproto.c  |  30 ++-
>  tests/ofp-print.at |  93 +
>  tests/ovs-ofctl.at |  26 ++
>  tests/system-traffic.at| 116 ++
>  utilities/ovs-ofctl.c  |  38 +
>  17 files changed, 503 insertions(+), 58 deletions(-)
>
> diff --git a/NEWS b/NEWS
> index ff8904b02..46b8faa41 100644
> --- a/NEWS
> +++ b/NEWS
> @@ -16,6 +16,9 @@ Post-v3.0.0
>   by specifying 'max-rate' or '[r]stp-path-cost' accordingly.
> - ovs-dpctl and related ovs-appctl commands:
>   * "flush-conntrack" is capable of handling partial 5-tuple.
> +   - OpenFlow:
> +  * New OpenFlow extension NXT_CT_FLUSH to flush connections matching
> +the specified fields.
>

I guess we miss an entry for ovs-ofctl flush-conntrack

>  
>  v3.0.0 - 15 Aug 2022
> diff --git a/include/openflow/nicira-ext.h b/include/openflow/nicira-ext.h
> index b68804991..32ce56d31 100644
> --- a/include/openflow/nicira-ext.h
> +++ b/include/openflow/nicira-ext.h
> @@ -1064,4 +1064,34 @@ struct nx_zone_id {
>  };
>  OFP_ASSERT(sizeof(struct nx_zone_id) == 8);
>  
> +/* CT flush available TLVs. */
> +enum nx_ct_flush_tlv_type {
> +/* Outer types. */
> +NXT_CT_ORIG_DIRECTION,/* CT orig direction outer type. */
> +NXT_CT_REPLY_DIRECTION,   /* CT reply direction outer type. */
> +
> +/* Nested types. */
> +NXT_CT_SRC,   /* CT source IPv6 or mapped IPv4 address. */
> +NXT_CT_DST,   /* CT destination IPv6 or mapped IPv4 address. 
> */
> +NXT_CT_SRC_PORT,  /* CT source port. */
> +NXT_CT_DST_PORT,  /* CT destination port. */
> +NXT_CT_ICMP_ID,   /* CT ICMP id. */
> +NXT_CT_ICMP_TYPE, /* CT ICMP type. */
> +NXT_CT_ICMP_CODE, /* CT ICMP code. */
> +
> +/* Primitive types. */
> +NXT_CT_ZONE_ID,   /* CT zone id. */
> +};
> +
> +/* NXT_CT_FLUSH.
> + *
> + * Flushes the connection tracking specified by 5-tuple.
> + * The struct should be followed by TLVs specifying the matching parameters. 
> */
> +struct nx_ct_flush {
> +uint8_t ip_proto;  /* IP protocol. */
> +uint8_t family;/* L3 address family. */
> +uint8_t zero[6];   /* Must be zero. */
> +};
> +OFP_ASSERT(sizeof(struct nx_ct_flush) == 8);
> +
>  #endif /* openflow/nicira-ext.h */
> diff --git a/include/openvswitch/ofp-msgs.h b/include/openvswitch/ofp-msgs.h
> index 921a937e5..659b0a3e7 100644
> --- a/include/openvswitch/ofp-msgs.h
> +++ b/include/openvswitch/ofp-msgs.h
> @@ -526,6 +526,9 @@ enum ofpraw {
>  
>  /* NXST 1.0+ (4): struct nx_ipfix_stats_reply[]. */
>  OFPRAW_NXST_IPFIX_FLOW_REPLY,
> +
> +/* NXT 1.0+ (32): struct nx_ct_flush, uint8_t[8][]. */
> +OFPRAW_NXT_CT_FLUSH,
>  };
>  
>  /* Decoding messages into OFPRAW_* values. */
> @@ -772,6 +775,7 @@ enum ofptype {
>  OFPTYPE_IPFIX_FLOW_STATS_REQUEST, /* OFPRAW_NXST_IPFIX_FLOW_REQUEST */
>  OFPTYPE_IPFIX_FLOW_STATS_REPLY,   /* OFPRAW_NXST_IPFIX_FLOW_REPLY */
>  OFPTYPE_CT_FLUSH_ZONE,/* OFPRAW_NXT_CT_FLUSH_ZONE. */
> +OFPTYPE_CT_FLUSH,   /* OFPRAW_NXT_CT_FLUSH. */
>  
>  /* Flow monitor extension. */
>  OFPTYPE_FLOW_MONITOR_CANCEL,  /* OFPRAW_NXT_FLOW_MONITOR_CANCEL.
> diff --git a/include/openvswitch/ofp-util.h b/include/openvswitch/ofp-util.h
> index 84937ae26..e10d90b9f 100644
> --- a/include/openvswitch/ofp-util.h
> +++ b/include/openvswitch/ofp-util.h
> @@ -65,6 +65,10 @@ struct ofpbuf *ofputil_encode_echo_reply(const struct 
> ofp_header *);
>  
>  struct ofpbuf *ofputil_encode_barrier_request(enum ofp_version);
>  
> +struct ofpbuf *ofputil_ct_match_encode(const struct ofputil_ct_match *match,
> +   uint16_t *zone_id,
> +   enum ofp_version version);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/

Re: [ovs-dev] [PATCH v4 1/2] ofp, dpif: Allow CT flush based on partial match

2022-12-16 Thread Paolo Valerio
Ales Musil  writes:

> Currently, the CT can be flushed by dpctl only be specifying
> the whole 5-tuple. This is not very convenient when there are
> only some fields known to the user of CT flush. Add new struct
> ofputil_ct_match which represents the generic filtering that can
> be done for CT flush. The match is done only on fields that are
> non-zero with exception to the icmp fields.
>
> This allows the filtering just within dpctl, however
> it is a preparation for OpenFlow extension.
>
> Reported-at: https://bugzilla.redhat.com/2120546
> Signed-off-by: Ales Musil 
> ---
> v4: Fix a flush all scenario.
> v3: Rebase on top of master.
> Address the C99 comment and missing dpif_close call.
> v2: Rebase on top of master.
> Address comments from Paolo.
> ---

I stressed a few more corner cases, mostly related to ICMP, and after a
quick offline discussion things LGTM.

Acked-by: Paolo Valerio 



Re: TFTP boot using DNS

2022-12-15 Thread Valerio Nappi

Hi Harald,

Thank you for your reply.

As a workaround right now I'm using a patched version that comments out the 
check for the serverip handler, and it is working.

Nevertheless, let's see if we can understand how this should be handled.

Best,
Valerio

On 12/15/22 13:55, Harald Seiler wrote:

Hi Valerio,

On Thu, 2022-12-15 at 10:27 +0100, Valerio Nappi wrote:

Hello everyone,

Context:
I'm new to the u-boot project. I am trying to boot a Zynq SoC from an
internal TFTP server, but I have no guarantee that the IP of the
server is going to stay the same. A DNS service is offered, and kept
updated.

When I try to update the serverip variable using the "dns <hostname>
serverip" command, the variable seems updated (in the sense that I can
print the new value) but it is not. Apparently this is ignored on
purpose at this line of the source [1]. Commit fd3056337, which
introduced this change, mentions "Don't update the variables when the
source was programmatic, since the variables were the source of the
new value." That is not true in this case; in fact, the source of the
new value is the DNS service.

Is there a particular reason why this use case was not foreseen?

Unrelated to your point about proper support for this usecase, but maybe
you can use the following as a workaround in the meantime?

tftp ${loadaddr} ${serverip}:path/to/file

Also adding Joe on CC as he authored the commit you mentioned.
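
A minimal sketch of how that workaround could be combined with the DNS lookup
(hypothetical hostname and variable name; it assumes the dns command can store
its result in an arbitrary environment variable, per its "dns hostname [envvar]"
usage):

# resolve the server name into a scratch variable instead of serverip
dns tftp-server.example.com tftpsrv
# pass the resolved address explicitly, as in the workaround above
tftp ${loadaddr} ${tftpsrv}:path/to/file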



Re: [ovs-dev] [PATCH v3 2/2] openflow: Add extension to flush CT by generic match

2022-12-15 Thread Paolo Valerio
Ales Musil  writes:

> Add extension that allows to flush connections from CT
> by specifying fields that the connections should be
> matched against. This allows to match only some fields
> of the connection e.g. source address for orig direrction.
>
> Reported-at: https://bugzilla.redhat.com/2120546
> Signed-off-by: Ales Musil 
> ---
> v3: Rebase on top of master.
> v2: Rebase on top of master.
> Use suggestion from Ilya.
> ---

Although a second opinion would be nice to have here,
the patch LGTM and the tests succeeded.

Acked-by: Paolo Valerio 

>  NEWS   |   3 +
>  include/openflow/nicira-ext.h  |  30 +++
>  include/openvswitch/ofp-msgs.h |   4 +
>  include/openvswitch/ofp-util.h |   4 +
>  lib/ofp-bundle.c   |   1 +
>  lib/ofp-ct-util.c  | 146 +
>  lib/ofp-ct-util.h  |   9 ++
>  lib/ofp-print.c|  20 +
>  lib/ofp-util.c |  25 ++
>  lib/rconn.c|   1 +
>  ofproto/ofproto-dpif.c |   8 +-
>  ofproto/ofproto-provider.h |   7 +-
>  ofproto/ofproto.c  |  30 ++-
>  tests/ofp-print.at |  93 +
>  tests/ovs-ofctl.at |  12 +++
>  tests/system-traffic.at| 116 ++
>  utilities/ovs-ofctl.c  |  38 +
>  17 files changed, 489 insertions(+), 58 deletions(-)
>
> diff --git a/NEWS b/NEWS
> index ff8904b02..46b8faa41 100644
> --- a/NEWS
> +++ b/NEWS
> @@ -16,6 +16,9 @@ Post-v3.0.0
>   by specifying 'max-rate' or '[r]stp-path-cost' accordingly.
> - ovs-dpctl and related ovs-appctl commands:
>   * "flush-conntrack" is capable of handling partial 5-tuple.
> +   - OpenFlow:
> +  * New OpenFlow extension NXT_CT_FLUSH to flush connections matching
> +the specified fields.
>  
>  
>  v3.0.0 - 15 Aug 2022
> diff --git a/include/openflow/nicira-ext.h b/include/openflow/nicira-ext.h
> index b68804991..32ce56d31 100644
> --- a/include/openflow/nicira-ext.h
> +++ b/include/openflow/nicira-ext.h
> @@ -1064,4 +1064,34 @@ struct nx_zone_id {
>  };
>  OFP_ASSERT(sizeof(struct nx_zone_id) == 8);
>  
> +/* CT flush available TLVs. */
> +enum nx_ct_flush_tlv_type {
> +/* Outer types. */
> +NXT_CT_ORIG_DIRECTION,/* CT orig direction outer type. */
> +NXT_CT_REPLY_DIRECTION,   /* CT reply direction outer type. */
> +
> +/* Nested types. */
> +NXT_CT_SRC,   /* CT source IPv6 or mapped IPv4 address. */
> +NXT_CT_DST,   /* CT destination IPv6 or mapped IPv4 address. 
> */
> +NXT_CT_SRC_PORT,  /* CT source port. */
> +NXT_CT_DST_PORT,  /* CT destination port. */
> +NXT_CT_ICMP_ID,   /* CT ICMP id. */
> +NXT_CT_ICMP_TYPE, /* CT ICMP type. */
> +NXT_CT_ICMP_CODE, /* CT ICMP code. */
> +
> +/* Primitive types. */
> +NXT_CT_ZONE_ID,   /* CT zone id. */
> +};
> +
> +/* NXT_CT_FLUSH.
> + *
> + * Flushes the connection tracking specified by 5-tuple.
> + * The struct should be followed by TLVs specifying the matching parameters. 
> */
> +struct nx_ct_flush {
> +uint8_t ip_proto;  /* IP protocol. */
> +uint8_t family;/* L3 address family. */
> +uint8_t zero[6];   /* Must be zero. */
> +};
> +OFP_ASSERT(sizeof(struct nx_ct_flush) == 8);
> +
>  #endif /* openflow/nicira-ext.h */
> diff --git a/include/openvswitch/ofp-msgs.h b/include/openvswitch/ofp-msgs.h
> index 921a937e5..659b0a3e7 100644
> --- a/include/openvswitch/ofp-msgs.h
> +++ b/include/openvswitch/ofp-msgs.h
> @@ -526,6 +526,9 @@ enum ofpraw {
>  
>  /* NXST 1.0+ (4): struct nx_ipfix_stats_reply[]. */
>  OFPRAW_NXST_IPFIX_FLOW_REPLY,
> +
> +/* NXT 1.0+ (32): struct nx_ct_flush, uint8_t[8][]. */
> +OFPRAW_NXT_CT_FLUSH,
>  };
>  
>  /* Decoding messages into OFPRAW_* values. */
> @@ -772,6 +775,7 @@ enum ofptype {
>  OFPTYPE_IPFIX_FLOW_STATS_REQUEST, /* OFPRAW_NXST_IPFIX_FLOW_REQUEST */
>  OFPTYPE_IPFIX_FLOW_STATS_REPLY,   /* OFPRAW_NXST_IPFIX_FLOW_REPLY */
>  OFPTYPE_CT_FLUSH_ZONE,/* OFPRAW_NXT_CT_FLUSH_ZONE. */
> +OFPTYPE_CT_FLUSH,   /* OFPRAW_NXT_CT_FLUSH. */
>  
>  /* Flow monitor extension. */
>  OFPTYPE_FLOW_MONITOR_CANCEL,  /* OFPRAW_NXT_FLOW_MONITOR_CANCEL.
> diff --git a/include/openvswitch/ofp-util.h b/include/openvswitch/ofp-util.h
> index 84937ae26..e10d90b9f 100644
> --- a/include/openvswitch/ofp-util.h
> +++ b/include/openvswitch/ofp-util.h
> @@ -65,6 +65,10 @@ struct ofpbuf *ofputil_encode_echo_reply(c

Re: [ovs-dev] [PATCH v3 1/2] ofp, dpif: Allow CT flush based on partial match

2022-12-15 Thread Paolo Valerio
Ales Musil  writes:

> On Thu, Dec 15, 2022 at 4:28 PM Paolo Valerio  wrote:
>
> Ales Musil  writes:
>
> > Currently, the CT can be flushed by dpctl only be specifying
> > the whole 5-tuple. This is not very convenient when there are
> > only some fields known to the user of CT flush. Add new struct
> > ofputil_ct_match which represents the generic filtering that can
> > be done for CT flush. The match is done only on fields that are
> > non-zero with exception to the icmp fields.
> >
> > This allows the filtering just within dpctl, however
> > it is a preparation for OpenFlow extension.
> >
> > Reported-at: https://bugzilla.redhat.com/2120546
> > Signed-off-by: Ales Musil 
> > ---
> > v3: Rebase on top of master.
> >     Address the C99 comment and missing dpif_close call.
> > v2: Rebase on top of master.
> >     Address comments from Paolo.
> > ---
> >  NEWS                           |   2 +
> >  include/openvswitch/ofp-util.h |  28 +++
> >  lib/automake.mk                |   2 +
> >  lib/ct-dpif.c                  | 201 +
> >  lib/ct-dpif.h                  |   4 +-
> >  lib/dpctl.c                    |  45 +++--
> >  lib/dpctl.man                  |   3 +-
> >  lib/ofp-ct-util.c              | 311 +
> >  lib/ofp-ct-util.h              |  34 
> >  tests/system-traffic.at        |  82 -
> >  10 files changed, 568 insertions(+), 144 deletions(-)
> >  create mode 100644 lib/ofp-ct-util.c
> >  create mode 100644 lib/ofp-ct-util.h
> >
> > diff --git a/NEWS b/NEWS
> > index 265375e1c..ff8904b02 100644
> > --- a/NEWS
> > +++ b/NEWS
> > @@ -14,6 +14,8 @@ Post-v3.0.0
> >       10 Gbps link speed by default in case the actual link speed cannot
> be
> >       determined.  Previously it was 10 Mbps.  Values can still be
> overridden
> >       by specifying 'max-rate' or '[r]stp-path-cost' accordingly.
> > +   - ovs-dpctl and related ovs-appctl commands:
> > +     * "flush-conntrack" is capable of handling partial 5-tuple.
> > 
> > 
> >  v3.0.0 - 15 Aug 2022
> > diff --git a/include/openvswitch/ofp-util.h b/include/openvswitch/
> ofp-util.h
> > index 091a09cad..84937ae26 100644
> > --- a/include/openvswitch/ofp-util.h
> > +++ b/include/openvswitch/ofp-util.h
> > @@ -19,6 +19,9 @@
> > 
> >  #include 
> >  #include 
> > +#include 
> > +#include 
> > +
> >  #include "openvswitch/ofp-protocol.h"
> > 
> >  struct ofp_header;
> > @@ -27,6 +30,31 @@ struct ofp_header;
> >  extern "C" {
> >  #endif
> > 
> > +struct ofputil_ct_tuple {
> > +    struct in6_addr src;
> > +    struct in6_addr dst;
> > +
> > +    union {
> > +        ovs_be16 src_port;
> > +        ovs_be16 icmp_id;
> > +    };
> > +    union {
> > +        ovs_be16 dst_port;
> > +        struct {
> > +            uint8_t icmp_code;
> > +            uint8_t icmp_type;
> > +        };
> > +    };
> > +};
> > +
> > +struct ofputil_ct_match {
> > +    uint8_t ip_proto;
> > +    uint16_t l3_type;
> > +
> > +    struct ofputil_ct_tuple tuple_orig;
> > +    struct ofputil_ct_tuple tuple_reply;
> > +};
> > +
> >  bool ofputil_decode_hello(const struct ofp_header *,
> >                            uint32_t *allowed_versions);
> >  struct ofpbuf *ofputil_encode_hello(uint32_t version_bitmap);
> > diff --git a/lib/automake.mk b/lib/automake.mk
> > index a0fabe38f..37135f118 100644
> > --- a/lib/automake.mk
> > +++ b/lib/automake.mk
> > @@ -227,6 +227,8 @@ lib_libopenvswitch_la_SOURCES = \
> >       lib/ofp-actions.c \
> >       lib/ofp-bundle.c \
> >       lib/ofp-connection.c \
> > +     lib/ofp-ct-util.c \
> > +     lib/ofp-ct-util.h \
> >       lib/ofp-ed-props.c \
> >       lib/ofp-errors.c \
> >       lib/ofp-flow.c \
> > diff --git a/lib/ct-dpif.c b/lib/ct-dpif.c
> > index 6f17a26b5..906e827c1 100644
> > --- a/lib/ct-dpif.c
> > +++ b/lib/

Re: [ovs-dev] [PATCH v3 1/2] ofp, dpif: Allow CT flush based on partial match

2022-12-15 Thread Paolo Valerio
ruct ofputil_ct_match *);
>  int ct_dpif_set_maxconns(struct dpif *dpif, uint32_t maxconns);
>  int ct_dpif_get_maxconns(struct dpif *dpif, uint32_t *maxconns);
>  int ct_dpif_get_nconns(struct dpif *dpif, uint32_t *nconns);
> @@ -311,7 +312,6 @@ void ct_dpif_format_ipproto(struct ds *ds, uint16_t 
> ipproto);
>  void ct_dpif_format_tuple(struct ds *, const struct ct_dpif_tuple *);
>  uint8_t ct_dpif_coalesce_tcp_state(uint8_t state);
>  void ct_dpif_format_tcp_stat(struct ds *, int, int);
> -bool ct_dpif_parse_tuple(struct ct_dpif_tuple *, const char *s, struct ds *);
>  void ct_dpif_push_zone_limit(struct ovs_list *, uint16_t zone, uint32_t 
> limit,
>   uint32_t count);
>  void ct_dpif_free_zone_limits(struct ovs_list *);
> diff --git a/lib/dpctl.c b/lib/dpctl.c
> index 29041fa3e..3cdedbe97 100644
> --- a/lib/dpctl.c
> +++ b/lib/dpctl.c
> @@ -40,6 +40,7 @@
>  #include "netdev.h"
>  #include "netlink.h"
>  #include "odp-util.h"
> +#include "ofp-ct-util.h"
>  #include "openvswitch/ofpbuf.h"
>  #include "packets.h"
>  #include "openvswitch/shash.h"
> @@ -1707,47 +1708,41 @@ dpctl_flush_conntrack(int argc, const char *argv[],
>struct dpctl_params *dpctl_p)
>  {
>  struct dpif *dpif = NULL;
> -struct ct_dpif_tuple tuple, *ptuple = NULL;
> -struct ds ds = DS_EMPTY_INITIALIZER;
> -uint16_t zone, *pzone = NULL;
> -int error;
> +struct ofputil_ct_match match = {0};
> +uint16_t zone;
> +bool with_zone = false;
>  int args = argc - 1;
>  
>  /* Parse ct tuple */
> -if (args && ct_dpif_parse_tuple(&tuple, argv[args], &ds)) {
> -ptuple = &tuple;
> +if (args) {
> +struct ds ds = DS_EMPTY_INITIALIZER;
> +if (!ofputil_ct_match_parse(&match, argv[args], &ds)) {
> +dpctl_error(dpctl_p, EINVAL, "%s", ds_cstr(&ds));
> +ds_destroy(&ds);
> +return EINVAL;
> +}
>  args--;
>  }
>  
>  /* Parse zone */
>  if (args && ovs_scan(argv[args], "zone=%"SCNu16, &zone)) {
> -pzone = &zone;
> +with_zone = true;
>  args--;
>  }
>  
> -/* Report error if there are more than one unparsed argument. */
> -if (args > 1) {
> -ds_put_cstr(&ds, "invalid arguments");
> -error = EINVAL;
> -goto error;
> -}
> -
> -error = opt_dpif_open(argc, argv, dpctl_p, 4, &dpif);
> +int error = opt_dpif_open(argc, argv, dpctl_p, 4, &dpif);
>  if (error) {
> -return error;
> +dpctl_error(dpctl_p, error, "Cannot open dpif");
> +goto end;

just returning error here is fine, right?

>  }
>  
> -error = ct_dpif_flush(dpif, pzone, ptuple);
> -if (!error) {
> -dpif_close(dpif);
> -return 0;
> -} else {
> -ds_put_cstr(&ds, "failed to flush conntrack");
> +error = ct_dpif_flush(dpif, with_zone ? &zone : NULL, &match);
> +if (error) {
> +dpctl_error(dpctl_p, error, "Failed to flush conntrack");
> +goto end;

Given the above, the label could be removed, and so the goto here

Other than that, the patch LGTM:

Acked-by: Paolo Valerio 

>  }
>  
> -error:
> -dpctl_error(dpctl_p, error, "%s", ds_cstr(&ds));
> -ds_destroy(&ds);
> +end:
>  dpif_close(dpif);
>  return error;
>  }
> diff --git a/lib/dpctl.man b/lib/dpctl.man
> index 87ea8087b..b0cabe05d 100644
> --- a/lib/dpctl.man
> +++ b/lib/dpctl.man
> @@ -312,7 +312,8 @@ If \fBzone\fR=\fIzone\fR is specified, only flushes the 
> connections in
>  If \fIct-tuple\fR is provided, flushes the connection entry specified by
>  \fIct-tuple\fR in \fIzone\fR. The zone defaults to 0 if it is not provided.
>  The userspace connection tracker requires flushing with the original 
> pre-NATed
> -tuple and a warning log will be otherwise generated.
> +tuple and a warning log will be otherwise generated. The tuple can be partial
> +and will remove all connections that are matching on the specified fields.
>  An example of an IPv4 ICMP \fIct-tuple\fR:
>  .IP
>  
> "ct_nw_src=10.1.1.1,ct_nw_dst=10.1.1.2,ct_nw_proto=1,icmp_type=8,icmp_code=0,icmp_id=10"
> diff --git a/lib/ofp-ct-util.c b/lib/ofp-ct-util.c
> new file mode 100644
> index 0..9112305cc
> --- /dev/null
> +++ b/lib/ofp-ct-util.c
> @@ -0,0 +1,311 @@
> +
> +/* Copyright (c) 2022, Red Hat, Inc.
> + *
> + * Licensed under the Apache Li

TFTP boot using DNS

2022-12-15 Thread Valerio Nappi

Hello everyone,

Context:
I'm new to the u-boot project. I am trying to boot a Zynq SoC from an internal
TFTP server, but I have no guarantee that the IP of the server is going to stay
the same. A DNS service is offered, and kept updated.

When I try to update the serverip variable using the "dns <hostname> serverip"
command, the variable seems updated (in the sense that I can print the new
value) but it is not. Apparently this is ignored on purpose at this line of the
source [1]. Commit fd3056337, which introduced this change, mentions "Don't
update the variables when the source was programmatic, since the variables were
the source of the new value." That is not true in this case; in fact, the
source of the new value is the DNS service.

Is there a particular reason why this use case was not foreseen?

Best,
Valerio

[1]: 
https://github.com/u-boot/u-boot/blob/c917865c7fd14420d25388bb3c8c24cb03911caf/net/net.c#L251



Re: OT: duplicating one window on two separate monitors

2022-12-14 Thread valerio




On 14/12/22 15:56, Stefano Simonucci wrote:
Actually I can make the two monitors show the same screen, but I can't
make the two screens be different in general with one of the windows
duplicated on both screens. In practice, I want to work on one window that
is also seen by the audience (projector) and on other windows that are not
seen by the audience. All the windows must, however, be visible on the
laptop I work on, without looking at the projector screen.


Stefano



a bit complicated...
maybe with another monitor attached?

or by duplicating the window on two monitors, but a bit smaller on one of
them and with other framings...


valerio



Re: OT: duplicating one window on two separate monitors

2022-12-14 Thread valerio



hi,

On 14/12/22 13:16, Stefano Simonucci wrote:

Greetings to the whole list.

Sometimes I need to work with the laptop hooked up to a projector. In
practice, I would need to duplicate one window (i.e. one application) on the
two screens (laptop and projector), without duplicating the whole screen,
since I would like some applications to be on the laptop only. The two
windows should be a "mirror" of each other, since from the laptop I can't
see the projection screen well. I tried searching the net, but I only found
one method that seemed complicated and used VNC.


Qualcuno ha avuto un problema di questo tipo.

I don't know whether it is the same thing (I have never worked with projectors),
but I normally work with two monitors.
I use the monitor properties dialog: it sees the two monitors and I choose
whether to mirror one onto the other or to have two different screens.

I use xfce; I don't know about other desktops, but I don't think it is very different.


Thanks everyone. Bye

Stefano



valerio



mariadb-server

2022-12-12 Thread valerio

good evening everyone,
I've had a problem for a few days now and I cannot figure out how to solve it
(or whether it is possible at all).


when upgrading, I get an error on mariadb-server:

Setting up mariadb-server-10.6 (1:10.6.11-1) ...
dpkg: error processing package mariadb-server-10.6 (--configure):
 installed mariadb-server-10.6 package post-installation script
subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of
default-mysql-server:

 default-mysql-server depends on mariadb-server-10.6; however:
  Package mariadb-server-10.6 is not configured yet.

dpkg: error processing package default-mysql-server (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 mariadb-server-10.6
 default-mysql-server
E: Sub-process /usr/bin/dpkg returned an error code (1)


It gives this message after various attempts at uninstalling and then
reinstalling mariadb-server and mysql.


I found that some time ago mariadb-server-10.5 used to get removed, but I
never noticed, and I have not found any other bug report. I think I could go
back to an earlier version, but first I would like to know whether there is
another way to fix this.


this blocks a small address-book program of mine running on LAMP, and the use of adminer

does anyone know this problem and how it can be solved?
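
A couple of hedged starting points for narrowing this down, assuming a
systemd-based setup (diagnostic steps only, not a guaranteed fix):

    sudo journalctl -xeu mariadb.service   # see why mysqld fails during configuration
    sudo dpkg --configure -a               # retry the configuration once the cause is fixed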

thanks and have a good evening

valerio



Re: [QGIS-it-user] Discussione budget QGIS

2022-12-05 Thread Valerio Pinna via QGIS-it-user
Hello everyone. Thanks Matteo for sharing your message and for inviting us to the discussion. I was only able to take part in a portion of the chat the other evening, but I kept thinking about how I could contribute to the debate.

For what it's worth, I fully share your point of view and your concerns. Personally, I am not convinced by the proposal to pay someone a salary for the documentation. I understand that there is certainly a (continuous) need to rework the documentation, but I have the impression that this proposal goes in exactly the opposite direction to the sense of community that has been QGIS's strength for so long. The very fact of wanting/having to pay someone is in itself a sign that something is wrong in the community (lack of participation, little availability, etc.). Frankly, I would have expected at least a more substantial attempt to reawaken the various members and, first of all, to reactivate the energy of the volunteers. I don't think there has been any recent public campaign in that direction.

I don't have enough insight to judge what happens at the code level, but certainly, in my small way, I notice that the whole code side is increasingly dominated by a few individuals and that contributing has been made harder (with the C++ "barrier"). Nor does the proposal to reduce the grants seem to me able to help involve new developers in any way. Then again, I certainly lack the means to fully understand what is going on inside QGIS, and it would be interesting to dig deeper into some of the aspects Paolo C. mentioned the other evening. Still, as an active user who tries in his small way to contribute as a volunteer in various ways, I think it is important to give my opinion.

Thanks, Valerio
___
QGIS-it-user mailing list
QGIS-it-user@lists.osgeo.org
https://lists.osgeo.org/mailman/listinfo/qgis-it-user


[Corpora-List] 2-year postdoc on perspectivism at University of Turin

2022-12-05 Thread Valerio Basile via Corpora
We are offering a 2-year postdoc position in the Computer Science
Department of the University of Turin, Italy, in the context of the project
POPULUS, funded by Amazon Alexa and Compagnia di San Paolo. The project
concerns the development of perspectivist NLP methods for the analysis of
pragmatic phenomena in language, such as irony, sarcasm, offensiveness or
hate speech.

References:
https://nlperspectives.di.unito.it/
https://pdai.info/
https://arxiv.org/pdf/2109.04270.pdf
https://ojs.aaai.org/index.php/HCOMP/article/view/7473/7260

The project is highly interdisciplinary, with a team of five University
researchers led by Valerio Basile in collaboration with a research team
from Amazon Alexa. A PhD in Computer Science, Computational Linguistics, or
related areas is highly recommended. Knowledge of Italian is not mandatory.
The deadline for application is *December 22nd* 2022.
Starting date: February or March 2023

The gross yearly salary is 23,097 EUR, corresponding to a net pay of 1,700
EUR/month.
The official call documentation, in Italian, including other postdoc
positions at the University of Turin, is here:


https://webapps.unito.it/albo_ateneo/?area=Albo&action=Read_Download&id_attach=51655

The application must be submitted through the PICA system:

  https://pica.cineca.it/en/unito/assegni-di-ricerca-unito-2022-v/

(click on "manage your applications")

The Content-centered Computing group (
https://cs.unito.it/do/gruppi.pl/Show?_id=453y) is a research group at the
Department of Computer Science focused on the study and the development of
"content items". The group is highly inter- and trans-disciplinary, with
interests in Computational Linguistics, Multimedia, Cognitive Science,
Semantic Web, and more, as well as the creation of language resources and
the organization of evaluation campaigns.

The University of Turin (UniTo) is one of the largest Italian universities,
with about 80,000 students, 3,900 employees (academic, administrative and
technical staff), and 1,800 post-graduate and post-doctoral research fellows.
Research and training are performed in 27 Departments, encompassing all
scientific disciplines. According to the ARWU international ranking, in 2018
UniTo ranked among the top 300 universities out of 1,200 and as the second
university in Italy.

Please write to Valerio Basile  for any further
information.
___
Corpora mailing list -- corpora@list.elra.info
https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
To unsubscribe send an email to corpora-le...@list.elra.info


Re: [ovs-dev] [PATCH 1/2] ofp, dpif: Allow CT flush based on partial match

2022-11-28 Thread Paolo Valerio
Hi Ales,

the patch lgtm, and works as expected.
There are a few nits/remarks below, but other than that, I'm ok with the
change.

Ales Musil  writes:

> Currently, the CT can be flushed by dpctl only be specifying
> the whole 5-tuple. This is not very convenient when there are
> only some fields known to the user of CT flush. Add new struct
> ofputil_ct_match which represents the generic filtering that can
> be done for CT flush. The match is done only on fields that are
> non-zero with exception to the icmp fields.
>
> This allows the filtering just within dpctl, however
> it is a preparation for OpenFlow extension.
>
> Reported-at: https://bugzilla.redhat.com/2120546
> Signed-off-by: Ales Musil 
> ---
>  NEWS   |   2 +
>  include/openvswitch/ofp-util.h |  28 +++
>  lib/automake.mk|   2 +
>  lib/ct-dpif.c  | 201 +
>  lib/ct-dpif.h  |   4 +-
>  lib/dpctl.c|  14 +-
>  lib/dpctl.man  |   3 +-
>  lib/ofp-ct-util.c  | 311 +
>  lib/ofp-ct-util.h  |  34 
>  tests/system-traffic.at|  80 +
>  10 files changed, 557 insertions(+), 122 deletions(-)
>  create mode 100644 lib/ofp-ct-util.c
>  create mode 100644 lib/ofp-ct-util.h
>
> diff --git a/NEWS b/NEWS
> index ff77ee404..81909812e 100644
> --- a/NEWS
> +++ b/NEWS
> @@ -23,6 +23,8 @@ Post-v3.0.0
> bug and CVE fixes addressed since its release.
> If a user wishes to benefit from these fixes it is recommended to use
> DPDK 21.11.2.
> +   - ovs-dpctl and related ovs-appctl commands:
> + * "flush-conntrack" is capable of handling partial 5-tuple.
>  
>  
>  v3.0.0 - 15 Aug 2022
> diff --git a/include/openvswitch/ofp-util.h b/include/openvswitch/ofp-util.h
> index 091a09cad..84937ae26 100644
> --- a/include/openvswitch/ofp-util.h
> +++ b/include/openvswitch/ofp-util.h
> @@ -19,6 +19,9 @@
>  
>  #include 
>  #include 
> +#include 
> +#include 
> +
>  #include "openvswitch/ofp-protocol.h"
>  
>  struct ofp_header;
> @@ -27,6 +30,31 @@ struct ofp_header;
>  extern "C" {
>  #endif
>  
> +struct ofputil_ct_tuple {
> +struct in6_addr src;
> +struct in6_addr dst;
> +
> +union {
> +ovs_be16 src_port;
> +ovs_be16 icmp_id;
> +};
> +union {
> +ovs_be16 dst_port;
> +struct {
> +uint8_t icmp_code;
> +uint8_t icmp_type;
> +};
> +};
> +};
> +
> +struct ofputil_ct_match {
> +uint8_t ip_proto;
> +uint16_t l3_type;
> +
> +struct ofputil_ct_tuple tuple_orig;
> +struct ofputil_ct_tuple tuple_reply;
> +};
> +
>  bool ofputil_decode_hello(const struct ofp_header *,
>uint32_t *allowed_versions);
>  struct ofpbuf *ofputil_encode_hello(uint32_t version_bitmap);
> diff --git a/lib/automake.mk b/lib/automake.mk
> index a0fabe38f..37135f118 100644
> --- a/lib/automake.mk
> +++ b/lib/automake.mk
> @@ -227,6 +227,8 @@ lib_libopenvswitch_la_SOURCES = \
>   lib/ofp-actions.c \
>   lib/ofp-bundle.c \
>   lib/ofp-connection.c \
> + lib/ofp-ct-util.c \
> + lib/ofp-ct-util.h \
>   lib/ofp-ed-props.c \
>   lib/ofp-errors.c \
>   lib/ofp-flow.c \
> diff --git a/lib/ct-dpif.c b/lib/ct-dpif.c
> index cfc2315e3..7fbf2bea6 100644
> --- a/lib/ct-dpif.c
> +++ b/lib/ct-dpif.c
> @@ -20,6 +20,7 @@
>  #include 
>  
>  #include "ct-dpif.h"
> +#include "ofp-ct-util.h"
>  #include "openvswitch/ofp-parse.h"
>  #include "openvswitch/vlog.h"
>  
> @@ -80,6 +81,31 @@ ct_dpif_dump_start(struct dpif *dpif, struct 
> ct_dpif_dump_state **dump,
>  return err;
>  }
>  
> +static void
> +ct_dpif_tuple_from_ofputil_ct_tuple(const struct ofputil_ct_tuple *ofp_tuple,
> +struct ct_dpif_tuple *tuple,
> +uint16_t l3_type, uint8_t ip_proto)
> +{
> +if (l3_type == AF_INET) {
> +tuple->src.ip = in6_addr_get_mapped_ipv4(&ofp_tuple->src);
> +tuple->dst.ip = in6_addr_get_mapped_ipv4(&ofp_tuple->dst);
> +} else {
> +tuple->src.in6 = ofp_tuple->src;
> +tuple->dst.in6 = ofp_tuple->dst;
> +}
> +
> +tuple->l3_type = l3_type;
> +tuple->ip_proto = ip_proto;
> +tuple->src_port = ofp_tuple->src_port;
> +
> +if (ip_proto == IPPROTO_ICMP || ip_proto == IPPROTO_ICMPV6) {
> +tuple->icmp_code = ofp_tuple->icmp_code;
> +tuple->icmp_type = ofp_tuple->icmp_type;
> +} else {
> +tuple->dst_port = ofp_tuple->dst_port;
> +}
> +}
> +
>  /* Dump one connection from a tracker, and put it in 'entry'.
>   *
>   * 'dump' should have been initialized by ct_dpif_dump_start().
> @@ -109,7 +135,62 @@ ct_dpif_dump_done(struct ct_dpif_dump_state *dump)
>  ? dpif->dpif_class->ct_dump_done(dpif, dump)
>  : EOPNOTSUPP);
>  }
> -
> +
> +static int
> +ct_dpif_flush_tuple(struct dpif *dpif, const uint1

Re: [it-users] a capo

2022-11-10 Thread Valerio Messina

On 11/10/22 7:27 PM, Attilio Tempestini wrote:
In Calc, if the text in a cell is longer than the displayed space, what
happens in a table is that the text gets truncated and a little arrow lets
you, by clicking on it, view the remaining part
it happens when the cell to the right of the one with the arrow contains
text rather than being empty, maybe even just a space


--
Valerio

--
Come cancellarsi: E-mail users+unsubscr...@it.libreoffice.org
Problemi? https://it.libreoffice.org/supporto/mailing-lists/come-cancellarsi/
Linee guida per postare + altro: 
https://wiki.documentfoundation.org/Local_Mailing_Lists/it
Archivio della lista: https://listarchives.libreoffice.org/it/users/
Privacy Policy: https://www.documentfoundation.org/privacy


Re: [sigrok-devel] Can we finally merge link-mso19 hardware support [PR-144].

2022-11-08 Thread Valerio Messina via sigrok-devel

On 11/8/22 7:38 AM, Gerhard Sittig wrote:

I don't believe that doing a review requires commit access to a repo


Gerhard, I certainly don't want to take control of the project; I'm only
sorry if this excellent project dies.
From the list comments it is evident that what is missing in this
project are the commits, not the reviews of pull requests for git master.

If you want to keep commit and release exclusivity, the project is yours,
you are completely free to do so.

--
Valerio


___
sigrok-devel mailing list
sigrok-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/sigrok-devel


Re: [sigrok-devel] Can we finally merge link-mso19 hardware support [PR-144].

2022-11-01 Thread Valerio Messina via sigrok-devel

On 11/1/22 9:57 AM, Jorge Solla wrote:

Lots of commits pending :(
Also mine has been hanging there for months.

Unfortunately the project seems to be dead?


The situation has been like this for months; the current maintainers
evidently have no time and/or desire to continue development.
They should appoint at least two new reviewers with commit rights, or
this good project will die or be forked


--
Valerio


___
sigrok-devel mailing list
sigrok-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/sigrok-devel


[ovs-dev] [PATCH] conntrack: Show parent key if present.

2022-10-31 Thread Paolo Valerio
Similarly to what happens when CTA_TUPLE_MASTER is present in a ct
netlink dump, add the ability to print out the parent key to the
userspace implementation as well.

Signed-off-by: Paolo Valerio 
---
 lib/conntrack.c |4 
 1 file changed, 4 insertions(+)

diff --git a/lib/conntrack.c b/lib/conntrack.c
index 13c5ab628..550b2be9b 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -2647,6 +2647,10 @@ conn_to_ct_dpif_entry(const struct conn *conn, struct 
ct_dpif_entry *entry,
 conn_key_to_tuple(&conn->key, &entry->tuple_orig);
 conn_key_to_tuple(&conn->rev_key, &entry->tuple_reply);
 
+if (conn->alg_related) {
+conn_key_to_tuple(&conn->parent_key, &entry->tuple_parent);
+}
+
 entry->zone = conn->key.zone;
 
 ovs_mutex_lock(&conn->lock);

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH] conntrack: Refactor nat handling functions

2022-10-27 Thread Paolo Valerio
Ales Musil  writes:

> On Thu, Oct 27, 2022 at 11:14 AM Ales Musil  wrote:
>
> In order to support NAT of inner packet
> for ICMP related traffic refactor the nat
> functions. This fixes the issue that the
> NAT was not performed on inner header in orig
> direction and avoids some code duplication.
>
> Reported-at: https://bugzilla.redhat.com/2120546
> Signed-off-by: Ales Musil 
> ---
>  lib/conntrack.c         | 250 ++--
>  tests/system-traffic.at |  67 +++
>  2 files changed, 155 insertions(+), 162 deletions(-)
>
> diff --git a/lib/conntrack.c b/lib/conntrack.c
> index 13c5ab628..b8b9f9c49 100644
> --- a/lib/conntrack.c
> +++ b/lib/conntrack.c
> @@ -764,109 +764,59 @@ handle_alg_ctl(struct conntrack *ct, const struct
> conn_lookup_ctx *ctx,
>  }
>
>  static void
> -pat_packet(struct dp_packet *pkt, const struct conn *conn)
> +pat_packet(struct dp_packet *pkt, const struct conn_key *key)
>  {
> -    if (conn->nat_action & NAT_ACTION_SRC) {
> -        if (conn->key.nw_proto == IPPROTO_TCP) {
> -            struct tcp_header *th = dp_packet_l4(pkt);
> -            packet_set_tcp_port(pkt, conn->rev_key.dst.port, 
> th->tcp_dst);
> -        } else if (conn->key.nw_proto == IPPROTO_UDP) {
> -            struct udp_header *uh = dp_packet_l4(pkt);
> -            packet_set_udp_port(pkt, conn->rev_key.dst.port, 
> uh->udp_dst);
> -        }
> -    } else if (conn->nat_action & NAT_ACTION_DST) {
> -        if (conn->key.nw_proto == IPPROTO_TCP) {
> -            packet_set_tcp_port(pkt, conn->rev_key.dst.port,
> -                                conn->rev_key.src.port);
> -        } else if (conn->key.nw_proto == IPPROTO_UDP) {
> -            packet_set_udp_port(pkt, conn->rev_key.dst.port,
> -                                conn->rev_key.src.port);
> -        }
> +    if (key->nw_proto == IPPROTO_TCP) {
> +        packet_set_tcp_port(pkt, key->dst.port, key->src.port);
> +    } else if (key->nw_proto == IPPROTO_UDP) {
> +        packet_set_udp_port(pkt, key->dst.port, key->src.port);
>      }
>  }
>
> -static void
> -nat_packet(struct dp_packet *pkt, const struct conn *conn, bool related)
> +static uint16_t
> +nat_action_reverse(uint16_t nat_action)
>  {
> -    if (conn->nat_action & NAT_ACTION_SRC) {
> -        pkt->md.ct_state |= CS_SRC_NAT;
> -        if (conn->key.dl_type == htons(ETH_TYPE_IP)) {
> -            struct ip_header *nh = dp_packet_l3(pkt);
> -            packet_set_ipv4_addr(pkt, &nh->ip_src,
> -                                 conn->rev_key.dst.addr.ipv4);
> -        } else {
> -            struct ovs_16aligned_ip6_hdr *nh6 = dp_packet_l3(pkt);
> -            packet_set_ipv6_addr(pkt, conn->key.nw_proto,
> -                                 nh6->ip6_src.be32,
> -                                 &conn->rev_key.dst.addr.ipv6, true);
> -        }
> -        if (!related) {
> -            pat_packet(pkt, conn);
> -        }
> -    } else if (conn->nat_action & NAT_ACTION_DST) {
> -        pkt->md.ct_state |= CS_DST_NAT;
> -        if (conn->key.dl_type == htons(ETH_TYPE_IP)) {
> -            struct ip_header *nh = dp_packet_l3(pkt);
> -            packet_set_ipv4_addr(pkt, &nh->ip_dst,
> -                                 conn->rev_key.src.addr.ipv4);
> -        } else {
> -            struct ovs_16aligned_ip6_hdr *nh6 = dp_packet_l3(pkt);
> -            packet_set_ipv6_addr(pkt, conn->key.nw_proto,
> -                                 nh6->ip6_dst.be32,
> -                                 &conn->rev_key.src.addr.ipv6, true);
> -        }
> -        if (!related) {
> -            pat_packet(pkt, conn);
> -        }
> +    if (nat_action & NAT_ACTION_SRC) {
> +        nat_action ^= NAT_ACTION_SRC;
> +        nat_action |= NAT_ACTION_DST;
> +    } else if (nat_action & NAT_ACTION_DST) {
> +        nat_action ^= NAT_ACTION_DST;
> +        nat_action |= NAT_ACTION_SRC;
>      }
> +    return nat_action;
>  }
>
>  static void
> -un_pat_packet(struct dp_packet *pkt, const struct conn *conn)
> +nat_packet_ipv4(struct dp_packet *pkt, const struct conn_key *key,
> +                uint16_t nat_action)
>  {
> -    if (conn->nat_action & NAT_ACTION_SRC) {
> -        if (conn->key.nw_proto == IPPROTO_TCP) {
> -            struct tcp_header *th = dp_packet_l4(pkt);
> -            packet_set_tcp_port(pkt, th->tcp_src, conn->key.src.port);
> -        } else if (conn->key.nw_proto == IPPROTO_UDP) {
> -            struct udp_header *uh = dp_packet_l4(pkt);
> -            packet_set_udp_port(pkt, uh->udp_src, conn->key.src.port);
> -        }
> -    } else if (conn->nat_action 

[Nav-users] R: R: Nav - LLDP Missing on some FS model

2022-10-26 Thread Marco Valerio Bifolco
Hi,
Again, thanks for your time. Yes, it is an MLAG configuration, but the strange
thing is that it's not the only one, yet it is the only one having this problem.

So, I guess again my only chance is a feature to choose the right neighbor.

Any advice will be appreciated.

Thanks
Best Regards


    


Marco V. Bifolco
System Engineer
Office (+39) 0694320122 | Mobile (+39) 3519666252
marco.bifo...@sferanet.net
assiste...@sferanet.net
  sferanet.net



-Original Message-
From: Morten Brekkevold  
Sent: Friday, 21 October 2022 13:29
To: Marco Valerio Bifolco 
Cc: nav-users@lister.sikt.no; Isaia Mammano 
Subject: Re: R: [Nav-users] Nav - LLDP Missing on some FS model

On Wed, 5 Oct 2022 14:29:45 + Marco Valerio Bifolco 
 wrote:

> But now I got an other issue, s3900 model that I'm sure is working due
> 6 of them with same firmware are working fine, seems to be isolated, 
> but looking on candidate it has 2 lldp neighbour that it is not using


The exact reason NAV isn't using it is because there are *2* instead of *1*. 
NAV's topology model will only allow a single neighbor on any port
- and this port reports *2* neighbors.  How can `Trunk1` have *2* neighbors? Is
it a virtual MLAG port?

--
Sincerely,
Morten Brekkevold

Sikt – Norwegian Agency for Shared Services in Education and Research
___
Nav-users mailing list -- nav-users@lister.sikt.no
To unsubscribe send an email to nav-users-le...@lister.sikt.no


Re: [ovs-dev] [PATCH] odp-util: Add missing comma in format_odp_conntrack_action()

2022-10-26 Thread Paolo Valerio
Ilya Maximets  writes:

> On 10/21/22 15:22, Paolo Valerio wrote:
>> If OVS_CT_ATTR_TIMEOUT is included, the resulting output is
>> the following:
>> 
>> actions:ct(commit,timeout=1nat(src=10.1.1.240))
>> 
>> Fix it by trivially adding a trailing ',' to timeout as well.
>> 
>> Signed-off-by: Paolo Valerio 
>> ---
>>  lib/odp-util.c |2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/lib/odp-util.c b/lib/odp-util.c
>> index ba5be4bb3..72e076e1c 100644
>> --- a/lib/odp-util.c
>> +++ b/lib/odp-util.c
>> @@ -1004,7 +1004,7 @@ format_odp_conntrack_action(struct ds *ds, const 
>> struct nlattr *attr)
>>  ds_put_format(ds, "helper=%s,", helper);
>>  }
>>  if (timeout) {
>> -ds_put_format(ds, "timeout=%s", timeout);
>> +ds_put_format(ds, "timeout=%s,", timeout);
>>  }
>>  if (nat) {
>>  format_odp_ct_nat(ds, nat);
>> 
>
> Hi.  Thanks for the patch!
> Could you also, please, add a test case to tests/odp.at for this?

Sure, thanks for pointing that out.
Sent v2:

https://patchwork.ozlabs.org/project/openvswitch/patch/166677384931.806968.5359905777279608036.st...@fed.void/

>
> Best regards, Ilya Maximets.

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] [PATCH v2] odp-util: Add missing separator in format_odp_conntrack_action()

2022-10-26 Thread Paolo Valerio
If OVS_CT_ATTR_TIMEOUT is included, the resulting output is
the following:

actions:ct(commit,timeout=1nat(src=10.1.1.240))

Fix it by trivially adding a trailing ',' to timeout as well.

Signed-off-by: Paolo Valerio 
---
v2: added test case in odp.at
---
 lib/odp-util.c |2 +-
 tests/odp.at   |2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/lib/odp-util.c b/lib/odp-util.c
index ba5be4bb3..72e076e1c 100644
--- a/lib/odp-util.c
+++ b/lib/odp-util.c
@@ -1004,7 +1004,7 @@ format_odp_conntrack_action(struct ds *ds, const struct 
nlattr *attr)
 ds_put_format(ds, "helper=%s,", helper);
 }
 if (timeout) {
-ds_put_format(ds, "timeout=%s", timeout);
+ds_put_format(ds, "timeout=%s,", timeout);
 }
 if (nat) {
 format_odp_ct_nat(ds, nat);
diff --git a/tests/odp.at b/tests/odp.at
index 7a1cf3b2c..88b7cfd91 100644
--- a/tests/odp.at
+++ b/tests/odp.at
@@ -348,7 +348,9 @@ ct(commit,helper=tftp)
 ct(commit,timeout=ovs_tp_1_tcp4)
 ct(nat)
 ct(commit,nat(src))
+ct(commit,timeout=ovs_tp_1_tcp4,nat(src))
 ct(commit,nat(dst))
+ct(commit,timeout=ovs_tp_1_tcp4,nat(dst))
 ct(commit,nat(src=10.0.0.240,random))
 ct(commit,nat(src=10.0.0.240:32768-65535,random))
 ct(commit,nat(dst=10.0.0.128-10.0.0.254,hash))

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[kdenlive] [Bug 460928] Archive does not export all Effects

2022-10-25 Thread Valerio Bozzolan
https://bugs.kde.org/show_bug.cgi?id=460928

--- Comment #4 from Valerio Bozzolan  ---
With this information, maybe an easy way to do this could be:

- when saving an effect, also save it in the project folder
- when the project is opened: import the effect if missing or, if already
present, just ask Y/n «Do you want to import the effect "%s" from the project?
This will update your already existing system effect.»

-- 
You are receiving this mail because:
You are watching all bug changes.

[kdenlive] [Bug 460928] Archive does not export all Effects

2022-10-24 Thread Valerio Bozzolan
https://bugs.kde.org/show_bug.cgi?id=460928

--- Comment #2 from Valerio Bozzolan  ---
You may be interested in the related error message:

> Missing effect: EXAMPLEEFFECT will be removed from project.

-- 
You are receiving this mail because:
You are watching all bug changes.

[kdenlive] [Bug 460928] Archive does not export all Effects

2022-10-24 Thread Valerio Bozzolan
https://bugs.kde.org/show_bug.cgi?id=460928

Valerio Bozzolan  changed:

   What|Removed |Added

 CC||b...@reyboz.it
Version|unspecified |19.12.3

--- Comment #1 from Valerio Bozzolan  ---
Honestly I tried this in 19.12.3, but other people were able to reproduce it on
Telegram with the latest build from master.

-- 
You are receiving this mail because:
You are watching all bug changes.

[kdenlive] [Bug 460928] New: Archive does not export all Effects

2022-10-24 Thread Valerio Bozzolan
https://bugs.kde.org/show_bug.cgi?id=460928

Bug ID: 460928
   Summary: Archive does not export all Effects
Classification: Applications
   Product: kdenlive
   Version: unspecified
  Platform: unspecified
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: Video Display & Export
  Assignee: j...@kdenlive.org
  Reporter: b...@reyboz.it
  Target Milestone: ---

Archiving a project should create a reproducible project, but some effects are
not exported.

STEPS TO REPRODUCE
1. create a simple project (e.g. an image for some seconds)
2. apply a random effect (example: transform)
3. save effect (example name: "My Transform")
4. save
5. Project > Archive
6. Try to import that again, on a fresh installation.

OBSERVED RESULT

When re-importing on a fresh installation, any custom effect is lost ("My
Transform" effect is lost).

EXPECTED RESULT

When exporting from Archive, any custom effect should be exported as well.
When importing from Archive, any missing custom effect should be imported as
well.

SOFTWARE/OS VERSIONS
Linux/KDE Plasma:  19.12.3

ADDITIONAL INFORMATION

https://t.me/kdenlive/58281

-- 
You are receiving this mail because:
You are watching all bug changes.

[ovs-dev] [PATCH] odp-util: Add missing comma in format_odp_conntrack_action()

2022-10-21 Thread Paolo Valerio
If OVS_CT_ATTR_TIMEOUT is included, the resulting output is
the following:

actions:ct(commit,timeout=1nat(src=10.1.1.240))

Fix it by trivially adding a trailing ',' to timeout as well.

Signed-off-by: Paolo Valerio 
---
 lib/odp-util.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/odp-util.c b/lib/odp-util.c
index ba5be4bb3..72e076e1c 100644
--- a/lib/odp-util.c
+++ b/lib/odp-util.c
@@ -1004,7 +1004,7 @@ format_odp_conntrack_action(struct ds *ds, const struct 
nlattr *attr)
 ds_put_format(ds, "helper=%s,", helper);
 }
 if (timeout) {
-ds_put_format(ds, "timeout=%s", timeout);
+ds_put_format(ds, "timeout=%s,", timeout);
 }
 if (nat) {
 format_odp_ct_nat(ds, nat);

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [RFC PATCH 1/2] dpif: Add support for CT flush with partial tuple

2022-10-17 Thread Paolo Valerio
Hello Ales,

overall the approach is ok. The only concern is that, unless I'm missing
something, in case of many connections the exact-match deletion could
potentially take a while, whereas in the previous case the cost
was basically a lookup (constant time) plus, of course, the remaining
deletion operation.

It would be nice to avoid the extra cost when the whole 5-tuple is
specified. WDYT?
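
For illustration, a rough sketch of how the constant-time path could be kept
when a complete 5-tuple is given; ct_dpif_tuple_is_full() is a hypothetical
helper, while the ct_flush call shape follows the quoted patch below:

    if (tuple && ct_dpif_tuple_is_full(tuple)) {
        /* Full 5-tuple: flush the single entry directly, as before. */
        return dpif->dpif_class->ct_flush(dpif, zone, tuple);
    }
    /* Otherwise fall back to dumping the entries and comparing them
     * with ct_dpif_tuple_cmp_partial(). */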

Ales Musil  writes:

> Curreently in order to flush conntrack you would need to
> specify full 5-tuple. Add support for partial match
> it still has some limitations however it is capable of flushing
> all that match specified field e.g. source ip address.
>
> Reported-at: https://bugzilla.redhat.com/2120546
> Signed-off-by: Ales Musil 
> ---
>  NEWS|   2 +
>  lib/ct-dpif.c   | 178 +++-
>  lib/dpctl.man   |   3 +-
>  tests/system-traffic.at |  84 ++-
>  4 files changed, 226 insertions(+), 41 deletions(-)
>
> diff --git a/NEWS b/NEWS
> index ff77ee404..81909812e 100644
> --- a/NEWS
> +++ b/NEWS
> @@ -23,6 +23,8 @@ Post-v3.0.0
> bug and CVE fixes addressed since its release.
> If a user wishes to benefit from these fixes it is recommended to use
> DPDK 21.11.2.
> +   - ovs-dpctl and related ovs-appctl commands:
> + * "flush-conntrack" is capable of handling partial 5-tuple.
>  
>  
>  v3.0.0 - 15 Aug 2022
> diff --git a/lib/ct-dpif.c b/lib/ct-dpif.c
> index cfc2315e3..57995f5e5 100644
> --- a/lib/ct-dpif.c
> +++ b/lib/ct-dpif.c
> @@ -18,6 +18,8 @@
>  #include "dpif-provider.h"
>  
>  #include 
> +#include 
> +#include 
>  
>  #include "ct-dpif.h"
>  #include "openvswitch/ofp-parse.h"
> @@ -109,7 +111,113 @@ ct_dpif_dump_done(struct ct_dpif_dump_state *dump)
>  ? dpif->dpif_class->ct_dump_done(dpif, dump)
>  : EOPNOTSUPP);
>  }
> -

was this intentional?
Just checking

> +
> +static inline bool
> +ct_dpif_inet_addr_cmp_partial(const union ct_dpif_inet_addr *partial,
> +  const union ct_dpif_inet_addr *addr)
> +{
> +if (!ipv6_is_zero(&partial->in6) &&
> +!ipv6_addr_equals(&partial->in6, &addr->in6)) {
> +return false;
> +}
> +return true;
> +}
> +
> +/* Compares the non-zero members if they match. This is usefull for clearing
> + * up all conntracks specified by a partial tuple. */
> +static inline bool
> +ct_dpif_tuple_cmp_partial(const struct ct_dpif_tuple *partial,
> +  const struct ct_dpif_tuple *tuple)
> +{
> +/* There is no point in continuing if both do not use the same eth type. 
> */
> +if (partial->l3_type != tuple->l3_type) {
> +return false;
> +}
> +
> +if (partial->ip_proto && partial->ip_proto != tuple->ip_proto) {
> +return false;
> +}
> +
> +if (!ct_dpif_inet_addr_cmp_partial(&partial->src, &tuple->src)) {
> +return false;
> +}
> +
> +if (!ct_dpif_inet_addr_cmp_partial(&partial->dst, &tuple->dst)) {
> +return false;
> +}
> +
> +if (partial->ip_proto == IPPROTO_TCP || partial->ip_proto == 
> IPPROTO_UDP) {
> +
> +if (partial->src_port && partial->src_port != tuple->src_port) {
> +return false;
> +}
> +
> +if (partial->dst_port && partial->dst_port != tuple->dst_port) {
> +return false;
> +}
> +} else if (partial->ip_proto == IPPROTO_ICMP ||
> +   partial->ip_proto == IPPROTO_ICMPV6) {
> +
> +if (partial->icmp_id != tuple->icmp_id) {
> +return false;
> +}
> +
> +if (partial->icmp_type != tuple->icmp_type) {
> +return false;
> +}
> +
> +if (partial->icmp_code != tuple->icmp_code) {
> +return false;
> +}
> +}
> +
> +return true;
> +}
> +
> +static int
> +ct_dpif_flush_tuple(struct dpif *dpif, const uint16_t *zone,
> +const struct ct_dpif_tuple *tuple) {
> +struct ct_dpif_dump_state *dump;
> +struct ct_dpif_entry cte;
> +int error;
> +int tot_bkts;
> +
> +if (!dpif->dpif_class->ct_flush) {
> +return EOPNOTSUPP;
> +}
> +
> +if (VLOG_IS_DBG_ENABLED()) {
> +struct ds ds = DS_EMPTY_INITIALIZER;
> +ct_dpif_format_tuple(&ds, tuple);
> +VLOG_DBG("%s: ct_flush: %s in zone %d", dpif_name(dpif), 
> ds_cstr(&ds),
> +  zone ? *zone : 0);
> +ds_destroy(&ds);
> +}
> +
> +error = ct_dpif_dump_start(dpif, &dump, zone, &tot_bkts);
> +if (error) {
> +return error;
> +}
> +
> +while (!(error = ct_dpif_dump_next(dump, &cte))) {
> +if (zone && *zone != cte.zone) {
> +continue;
> +}
> +
> +if (ct_dpif_tuple_cmp_partial(tuple, &cte.tuple_orig)) {
> +error = dpif->dpif_class->ct_flush(dpif, &cte.zone,
> +   &cte.tuple_orig);
> +if (error) {
> + 

[ovs-dev] [PATCH v2] ct-dpif: Replace ct_dpif_format_flags() with format_flags_masked().

2022-10-12 Thread Paolo Valerio
This patch removes ct_dpif_format_flags() in favor of the existing
format_flags_masked().
This has the extra bonus of showing keys with empty values as "key=0",
instead of showing "key=".

E.g., the following:

NEW tcp,orig=([...]),reply=([...]),id=1800618864,
status=CONFIRMED|SRC_NAT_DONE|DST_NAT_DONE,timeout=120,
protoinfo=(state_orig=SYN_SENT,state_reply=SYN_SENT,wscale_orig=7,
   wscale_reply=0,flags_orig=WINDOW_SCALE|SACK_PERM,flags_reply=)

becomes:

NEW tcp,orig=([...]),reply=([...]),id=1800618864,
status=CONFIRMED|SRC_NAT_DONE|DST_NAT_DONE,timeout=120,
protoinfo=(state_orig=SYN_SENT,state_reply=SYN_SENT,wscale_orig=7,
   wscale_reply=0,flags_orig=WINDOW_SCALE|SACK_PERM,flags_reply=0)

Signed-off-by: Paolo Valerio 
---
v2:
 - updated commit message (was "ct-dpif: Do not show flag key if empty.")
 - instead of hiding the key, ct_dpif_format_flags() got replaced by
   format_flags_masked() which will show "key=0" in case of empty flags
---
 lib/ct-dpif.c |   76 +
 lib/ct-dpif.h |4 +++
 2 files changed, 43 insertions(+), 37 deletions(-)

diff --git a/lib/ct-dpif.c b/lib/ct-dpif.c
index cfc2315e3..6f17a26b5 100644
--- a/lib/ct-dpif.c
+++ b/lib/ct-dpif.c
@@ -35,20 +35,11 @@ static void ct_dpif_format_counters(struct ds *,
 const struct ct_dpif_counters *);
 static void ct_dpif_format_timestamp(struct ds *,
  const struct ct_dpif_timestamp *);
-static void ct_dpif_format_flags(struct ds *, const char *title,
- uint32_t flags, const struct flags *);
 static void ct_dpif_format_protoinfo(struct ds *, const char *title,
  const struct ct_dpif_protoinfo *,
  bool verbose);
 static void ct_dpif_format_helper(struct ds *, const char *title,
   const struct ct_dpif_helper *);
-
-static const struct flags ct_dpif_status_flags[] = {
-#define CT_DPIF_STATUS_FLAG(FLAG) { CT_DPIF_STATUS_##FLAG, #FLAG },
-CT_DPIF_STATUS_FLAGS
-#undef CT_DPIF_STATUS_FLAG
-{ 0, NULL } /* End marker. */
-};
 
 /* Dumping */
 
@@ -275,6 +266,20 @@ ct_dpif_entry_uninit(struct ct_dpif_entry *entry)
 }
 }
 
+static const char *
+ct_dpif_status_flags(uint32_t flags)
+{
+switch (flags) {
+#define CT_DPIF_STATUS_FLAG(FLAG) \
+case CT_DPIF_STATUS_##FLAG: \
+return #FLAG;
+CT_DPIF_STATUS_FLAGS
+#undef CT_DPIF_TCP_FLAG
+default:
+return NULL;
+}
+}
+
 void
 ct_dpif_format_entry(const struct ct_dpif_entry *entry, struct ds *ds,
  bool verbose, bool print_stats)
@@ -305,8 +310,9 @@ ct_dpif_format_entry(const struct ct_dpif_entry *entry, 
struct ds *ds,
 ds_put_format(ds, ",zone=%"PRIu16, entry->zone);
 }
 if (verbose) {
-ct_dpif_format_flags(ds, ",status=", entry->status,
- ct_dpif_status_flags);
+format_flags_masked(ds, ",status", ct_dpif_status_flags,
+entry->status, CT_DPIF_STATUS_MASK,
+CT_DPIF_STATUS_MASK);
 }
 if (print_stats) {
 ds_put_format(ds, ",timeout=%"PRIu32, entry->timeout);
@@ -415,28 +421,6 @@ ct_dpif_format_tuple(struct ds *ds, const struct 
ct_dpif_tuple *tuple)
 }
 }
 
-static void
-ct_dpif_format_flags(struct ds *ds, const char *title, uint32_t flags,
- const struct flags *table)
-{
-if (title) {
-ds_put_cstr(ds, title);
-}
-for (; table->name; table++) {
-if (flags & table->flag) {
-ds_put_format(ds, "%s|", table->name);
-}
-}
-ds_chomp(ds, '|');
-}
-
-static const struct flags tcp_flags[] = {
-#define CT_DPIF_TCP_FLAG(FLAG)  { CT_DPIF_TCPF_##FLAG, #FLAG },
-CT_DPIF_TCP_FLAGS
-#undef CT_DPIF_TCP_FLAG
-{ 0, NULL } /* End marker. */
-};
-
 const char *ct_dpif_tcp_state_string[] = {
 #define CT_DPIF_TCP_STATE(STATE) [CT_DPIF_TCPS_##STATE] = #STATE,
 CT_DPIF_TCP_STATES
@@ -498,6 +482,20 @@ ct_dpif_format_protoinfo_tcp(struct ds *ds,
 ct_dpif_format_enum(ds, "state=", tcp_state, ct_dpif_tcp_state_string);
 }
 
+static const char *
+ct_dpif_tcp_flags(uint32_t flags)
+{
+switch (flags) {
+#define CT_DPIF_TCP_FLAG(FLAG) \
+case CT_DPIF_TCPF_##FLAG: \
+return #FLAG;
+CT_DPIF_TCP_FLAGS
+#undef CT_DPIF_TCP_FLAG
+default:
+return NULL;
+}
+}
+
 static void
 ct_dpif_format_protoinfo_tcp_verbose(struct ds *ds,
  const struct ct_dpif_protoinfo *protoinfo)
@@ -512,10 +510,14 @@ ct_dpif_format_protoinfo_tcp_verbose(struct ds *ds,
   protoinfo->tcp.wscale_orig,
   protoinfo->tcp.wscale_reply);
 }
-ct_dpif_

[Github-comments] Re: [geany/geany] Move sidebar's tabs to message window (Issue #3308)

2022-10-07 Thread Valerio Setti
Too bad :(
Thanks for your reply! 

-- 
Reply to this email directly or view it on GitHub:
https://github.com/geany/geany/issues/3308#issuecomment-1271233380
You are receiving this because you are subscribed to this thread.

Message ID: 

[Github-comments] Re: [geany/geany] Move sidebar's tabs to message window (Issue #3308)

2022-10-07 Thread Valerio Setti
Closed #3308 as completed.

-- 
Reply to this email directly or view it on GitHub:
https://github.com/geany/geany/issues/3308#event-7541249827
You are receiving this because you are subscribed to this thread.

Message ID: 

Re: [R] Fixed effect model: different estimation approaches with R return different results

2022-10-06 Thread Valerio Leone Sciabolazza
Thank you Bert, I see your point.
The problem is that I am not sure whether this is a statistical issue -
i.e., there is something wrong with my procedure - or, in a manner of
speaking, a software issue: e.g., lm is not expected to return the same
estimates for the different approaches because of the way the
function computes the solution, and it should not be used the way I
do. Maybe I should have stressed this better.
Valerio


On Thu, Oct 6, 2022 at 4:31 PM Bert Gunter  wrote:
>
> You could get lucky here, but strictly speaking, this list is about R 
> programming and statistical issues are typically off topic Someone might 
> respond privately, though.
>
> Cheers,
> Bert
>
> On Thu, Oct 6, 2022 at 4:24 AM Valerio Leone Sciabolazza 
>  wrote:
>>
>> Good morning,
>> I am trying to use R to estimate a fixed effects model (i.e., a panel
>> regression model controlling for unobserved time-invariant
>> heterogeneities across agents) using different estimation approaches
>> (e.g. replicating xtreg from Stata, see e.g.
>> https://www.stata.com/support/faqs/statistics/intercept-in-fixed-effects-model/).
>> I have already asked this question on different stacks exchange forums
>> and contacted package creators who dealt with this issue before, but I
>> wasn't able to obtain an answer to my doubts.
>> I hope to have better luck on this list.
>>
>> Let me introduce the problem, and note that I am using an unbalanced panel.
>>
>> The easiest way to estimate my fixed effect model is using the function lm.
>>
>> Example:
>>
>> # load packages
>> library(dplyr)
>> # set seed for replication purposes
>> set.seed(123)
>> # create toy dataset
>> x <- rnorm(4000)
>> x2 <- rnorm(length(x))
>> id <- factor(sample(500,length(x),replace=TRUE))
>> firm <- data.frame(id = id) %>%
>> group_by(id) %>%
>> mutate(firm = 1:n()) %>%
>> pull(firm)
>> id.eff <- rlnorm(nlevels(id))
>> firm.eff <- rexp(length(unique(firm)))
>> y <- x + 0.25*x2 + id.eff[id] + firm.eff[firm] + rnorm(length(x))
>> db = data.frame(y = y, x = x, id = id, firm = firm)
>> rm <- db %>% group_by(id) %>% summarise(firm = max(firm)) %>%
>> filter(firm == 1) %>% pull(id)
>> db = db[-which(db$id %in% rm), ]
>> # Run regression
>> test <- lm(y ~ x + id, data = db)
>>
>> Another approach is demeaning the variables included into the model
>> specification.
>> In this way, one can exclude the fixed effects from the model. Of
>> course, point estimates will be correct, while standard errors will be
>> not (because we are not accounting for the degrees of freedom used in
>> the demeaning).
>>
>> # demean data
>> dbm <- as_tibble(db) %>%
>> group_by(id) %>%
>> mutate(y = y - mean(y),
>>x = x - mean(x)) %>%
>> ungroup()
>> # run regression
>> test2 <- lm(y ~ x, data = dbm)
>> # compare results
>> summary(test)$coefficients[2,1]
>> > 0.9753364
>> summary(test2)$coefficients[2,1]
>> > 0.9753364
>>
>> Another way to do this is to demean the variables and add their grand
>> average (I believe that this is what xtreg from Stata does)
>>
>> # create data
>> n = length(unique(db$id))
>> dbh <- dbm %>%
>> mutate(yh = y + (sum(db$y)/n),
>>xh = x + (sum(db$x)/n))
>> # run regression
>> test3 <- lm(yh ~ xh, dbh)
>> # compare results
>> summary(test)$coefficients[2,1]
>> > 0.9753364
>> summary(test2)$coefficients[2,1]
>> > 0.9753364
>> summary(test3)$coefficients[2,1]
>> > 0.9753364
>>
>> As one can see, the three approaches report the same point estimates
>> (again, standard errors will be different instead).
>>
>> When I include an additional set of fixed effects in the model
>> specification, the three approaches no longer return the same point
>> estimate. However, differences seem to be negligible and they could be
>> due to rounding.
>>
>> db$firm <- as.factor(db$firm)
>> dbm$firm <- as.factor(dbm$firm)
>> dbh$firm <- as.factor(dbh$firm)
>> testB <- lm(y ~ x + id + firm, data = db)
>> testB2 <- lm(y ~ x + firm, data = dbm)
>> testB3 <- lm(yh ~ xh + firm, data = dbh)
>> summary(testB)$coefficients[2,1]
>> > 0.9834414
>> summary(testB2)$coefficients[2,1]
>> > 0.984
>> summary(testB3)$coefficients[2,1]
>> > 0.984
>>
>> A similar behavior occurs if I use a dummy variable rather than a
>> continous one. For the onl

[R] Fixed effect model: different estimation approaches with R return different results

2022-10-06 Thread Valerio Leone Sciabolazza
3.8794 -0.7497  0.0010  0.7442  3.8486

Coefficients: (1 not defined because of singularities)
   Estimate Std. Error t value Pr(>|t|)
x3   1.57916    0.03779  41.788  < 2e-16 ***
x4   NA NA  NA   NA
... redacted
summary(testD3)$coefficients[1:2]
> 3.254654 1.675495

As you can see, the second approach is not able to estimate the impact
of x4 on y. At the same time, the first and the third approach return
very different point estimates.

Is anyone able to explain to me why I cannot obtain the same point
estimates for this last exercise?

Is there anything wrong in the way I include the second set of fixed effects?
Is there anything wrong in the way I include the variables x3 and x4?
Or is this simply a problem due to some internal functions in R?
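
As an aside, a hedged way to check whether this last exercise runs into rank
deficiency (x4 collinear with the firm dummies) rather than into lm() itself;
model.matrix(), qr() and alias() are base/stats functions, and testD3 can be
swapped for whichever of the three fits you want to inspect:

X <- model.matrix(testD3)
qr(X)$rank     # numerical rank actually used in the fit
ncol(X)        # columns requested; a difference means aliased/dropped terms
alias(testD3)  # shows which terms are linear combinations of the others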

Any hint would be much appreciated.

Best,
Valerio

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[kde] [Bug 459848] New: System settings crashing when trying to connect display over usb-c

2022-09-30 Thread Valerio Formato
https://bugs.kde.org/show_bug.cgi?id=459848

Bug ID: 459848
   Summary: System settings crashing when trying to connect
display over usb-c
Classification: I don't know
   Product: kde
   Version: unspecified
  Platform: unspecified
OS: Linux
Status: REPORTED
  Keywords: drkonqi
  Severity: crash
  Priority: NOR
 Component: general
  Assignee: unassigned-b...@kde.org
  Reporter: rustico.ba...@gmail.com
  Target Milestone: ---

Application: systemsettings (5.25.5)

Qt Version: 5.15.6
Frameworks Version: 5.98.0
Operating System: Linux 5.19.12-arch1-1 x86_64
Windowing System: X11
Distribution: Arch Linux
DrKonqi: 5.25.5 [KCrashBackend]

-- Information about the crash:
I am trying to connect an external display via DisplayPort over USB-C and the
display keeps disconnecting every two seconds. While I was trying to set this up,
the System Settings app also crashed.

The reporter is unsure if this crash is reproducible.

-- Backtrace:
Application: System Settings (systemsettings), signal: Segmentation fault

[KCrash Handler]
#4  0x7fa0340290e1 in KScreen::Config::clone() const () from
/usr/lib/libKF5Screen.so.7
#5  0x7fa037f52d39 in ?? () from /usr/lib/qt/plugins/kcms/kcm_kscreen.so
#6  0x7fa037f539e3 in ?? () from /usr/lib/qt/plugins/kcms/kcm_kscreen.so
#7  0x7fa042e76784 in ?? () from /usr/lib/libQt5Qml.so.5
#8  0x7fa042d7a1b7 in ?? () from /usr/lib/libQt5Qml.so.5
#9  0x7fa042d7c1b2 in QV4::QObjectMethod::callInternal(QV4::Value const*,
QV4::Value const*, int) const () from /usr/lib/libQt5Qml.so.5
#10 0x7fa042d8f103 in ?? () from /usr/lib/libQt5Qml.so.5
#11 0x7fa042d955e4 in ?? () from /usr/lib/libQt5Qml.so.5
#12 0x7fa042d3bd26 in QV4::Function::call(QV4::Value const*, QV4::Value
const*, int, QV4::ExecutionContext const*) () from /usr/lib/libQt5Qml.so.5
#13 0x7fa042e973fd in QQmlJavaScriptExpression::evaluate(QV4::CallData*,
bool*) () from /usr/lib/libQt5Qml.so.5
#14 0x7fa042e53a31 in QQmlBoundSignalExpression::evaluate(void**) () from
/usr/lib/libQt5Qml.so.5
#15 0x7fa042e549ac in ?? () from /usr/lib/libQt5Qml.so.5
#16 0x7fa042e766bf in QQmlNotifier::emitNotify(QQmlNotifierEndpoint*,
void**) () from /usr/lib/libQt5Qml.so.5
#17 0x7fa0450bd050 in ?? () from /usr/lib/libQt5Core.so.5
#18 0x7fa03e25edf7 in QQuickAction::triggered(QObject*) () from
/usr/lib/libQt5QuickTemplates2.so.5
#19 0x7fa03e261730 in ?? () from /usr/lib/libQt5QuickTemplates2.so.5
#20 0x7fa03e261a60 in QQuickAbstractButtonPrivate::trigger() () from
/usr/lib/libQt5QuickTemplates2.so.5
#21 0x7fa03e264a1a in QQuickAbstractButtonPrivate::handleRelease(QPointF
const&) () from /usr/lib/libQt5QuickTemplates2.so.5
#22 0x7fa03e27fc29 in QQuickControl::mouseReleaseEvent(QMouseEvent*) ()
from /usr/lib/libQt5QuickTemplates2.so.5
#23 0x7fa04343f979 in QQuickItem::event(QEvent*) () from
/usr/lib/libQt5Quick.so.5
#24 0x7fa045d78b1c in QApplicationPrivate::notify_helper(QObject*, QEvent*)
() from /usr/lib/libQt5Widgets.so.5
#25 0x7fa04508cb88 in QCoreApplication::notifyInternal2(QObject*, QEvent*)
() from /usr/lib/libQt5Core.so.5
#26 0x7fa04344da85 in
QQuickWindowPrivate::deliverMouseEvent(QQuickPointerMouseEvent*) () from
/usr/lib/libQt5Quick.so.5
#27 0x7fa04344e4d2 in
QQuickWindowPrivate::deliverPointerEvent(QQuickPointerEvent*) () from
/usr/lib/libQt5Quick.so.5
#28 0x7fa04c45 in QWindow::event(QEvent*) () from
/usr/lib/libQt5Gui.so.5
#29 0x7fa045d78b1c in QApplicationPrivate::notify_helper(QObject*, QEvent*)
() from /usr/lib/libQt5Widgets.so.5
#30 0x7fa04508cb88 in QCoreApplication::notifyInternal2(QObject*, QEvent*)
() from /usr/lib/libQt5Core.so.5
#31 0x7fa043f8bcbe in QQuickWidget::mouseReleaseEvent(QMouseEvent*) () from
/usr/lib/libQt5QuickWidgets.so.5
#32 0x7fa045daf6e7 in QWidget::event(QEvent*) () from
/usr/lib/libQt5Widgets.so.5
#33 0x7fa045d78b1c in QApplicationPrivate::notify_helper(QObject*, QEvent*)
() from /usr/lib/libQt5Widgets.so.5
#34 0x7fa045d7e339 in QApplication::notify(QObject*, QEvent*) () from
/usr/lib/libQt5Widgets.so.5
#35 0x7fa04508cb88 in QCoreApplication::notifyInternal2(QObject*, QEvent*)
() from /usr/lib/libQt5Core.so.5
#36 0x7fa045d7c337 in QApplicationPrivate::sendMouseEvent(QWidget*,
QMouseEvent*, QWidget*, QWidget*, QWidget**, QPointer&, bool, bool) ()
from /usr/lib/libQt5Widgets.so.5
#37 0x7fa045dcd3b5 in ?? () from /usr/lib/libQt5Widgets.so.5
#38 0x7fa045dcf15e in ?? () from /usr/lib/libQt5Widgets.so.5
#39 0x7fa045d78b1c in QApplicationPrivate::notify_helper(QObject*, QEvent*)
() from /usr/lib/libQt5Widgets.so.5
#40 0x7fa04508cb88 in QCoreApplication::notifyInternal2(QObject*, QEvent*)
() from /usr/lib/libQt5Core.so.5
#41 0x7fa04553f13c in
QGuiApplicationPrivate::processMouseEvent(QWindowSystemInterfacePrivate::MouseEvent*)
() from /usr/lib/libQ

[Nav-users] Nav - LLDP Missing on some FS model

2022-09-28 Thread Marco Valerio Bifolco
Hello,
First of all, thanks for your work; we really appreciate the NAV solution.
We noticed that some of our FS.COM switches (we have cisco, hp, dlink, fs) do
not work as expected.
NAV doesn't get LLDP information from them, and because of this it populates the
net map incorrectly.
The FS models working properly are s3900 and s5850; we were not so lucky with
s2800, s1400 and s3400.

Is it possible to force NAV to choose a neighbor from the CAM candidates? Or to
set it manually?
Or is it possible to make these switches work with NAV, maybe by passing a MIB
somehow?

Many Thanks in advance
Best Regards


Marco V. Bifolco
System Engineer
Office (+39) 0694320122 | Mobile (+39) 3519666252
marco.bifo...@sferanet.net
assiste...@sferanet.net
  sferanet.net



___
Nav-users mailing list -- nav-users@lister.sikt.no
To unsubscribe send an email to nav-users-le...@lister.sikt.no


Re: [ovs-dev] [PATCH v3] ofproto-dpif-xlate: Update tunnel neighbor when receive gratuitous arp.

2022-09-21 Thread Paolo Valerio
Hello Han,

"Han Ding"  writes:

> Commit ba07cf222a add the feature "Handle gratuitous ARP requests and
> replies in tnl_arp_snoop()". But commit 83c2757bd1 just allow the ARP whitch
> the destination address of the ARP is matched against the known xbridge 
> addresses.
> So the modification of commit ba07cf222a is not effective. When ovs receive 
> the
> gratuitous ARP from underlay gateway which the source address and destination
> address are all gateway IP, tunnel neighbor will not be updated.
>

I think it would be clearer formatting the commits like below:

$ git -P show -s --format="%h (\"%s\")" --abbrev=12 ba07cf222a
ba07cf222a0c ("Handle gratuitous ARP requests and replies in tnl_arp_snoop()")

$ git -P show -s --format="%h (\"%s\")" --abbrev=12 83c2757bd1
83c2757bd16e ("xlate: Move tnl_neigh_snoop() to terminate_native_tunnel()")

I guess that the last commit deserves a Fixes tag as well.

> Signed-off-by: Han Ding 
> ---
>
> Notes:
> v3
> Correct the spell mistake.
>
> v2
> Change author name.  
>
>  ofproto/ofproto-dpif-xlate.c | 10 +++---
>  tests/tunnel-push-pop.at | 20 
>  2 files changed, 27 insertions(+), 3 deletions(-)
>
> diff --git a/ofproto/ofproto-dpif-xlate.c b/ofproto/ofproto-dpif-xlate.c
> index 8e5d030ac..6c69f981b 100644
> --- a/ofproto/ofproto-dpif-xlate.c
> +++ b/ofproto/ofproto-dpif-xlate.c
> @@ -4126,6 +4126,11 @@ xport_has_ip(const struct xport *xport)
>  return n_in6 ? true : false;
>  }
>
> +#define IS_VALID_NEIGHBOR_REPLY(flow, ctx) \
> +((flow->dl_type == htons(ETH_TYPE_ARP) || \
> +  flow->nw_proto == IPPROTO_ICMPV6) && \
> + is_neighbor_reply_correct(ctx, flow))
> +

Although terminate_native_tunnel() would be the only user, I guess a
static function could be ok here, instead.
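For illustration, a sketch of that alternative, mirroring the macro below and
not part of the patch (parameter types copied from the surrounding function):

    static bool
    is_valid_neighbor_reply(struct xlate_ctx *ctx, struct flow *flow)
    {
        return (flow->dl_type == htons(ETH_TYPE_ARP)
                || flow->nw_proto == IPPROTO_ICMPV6)
               && is_neighbor_reply_correct(ctx, flow);
    }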

>  static bool
>  terminate_native_tunnel(struct xlate_ctx *ctx, const struct xport *xport,
>  struct flow *flow, struct flow_wildcards *wc,
> @@ -4146,9 +4151,8 @@ terminate_native_tunnel(struct xlate_ctx *ctx, const 
> struct xport *xport,
>  /* If no tunnel port was found and it's about an ARP or ICMPv6 
> packet,
>   * do tunnel neighbor snooping. */
>  if (*tnl_port == ODPP_NONE &&
> -(flow->dl_type == htons(ETH_TYPE_ARP) ||
> - flow->nw_proto == IPPROTO_ICMPV6) &&
> - is_neighbor_reply_correct(ctx, flow)) {
> +(IS_VALID_NEIGHBOR_REPLY(flow, ctx) ||
> + is_garp(flow, wc))) {

AFAICT, this seems ok to me and the tests related to tunnel_push_pop
succeed. There's probably some room for improvement in the code down to
tnl_arp_snoop(), but I guess it's a bit out of scope of this patch.

>  tnl_neigh_snoop(flow, wc, ctx->xbridge->name,
>  ctx->xin->allow_side_effects);
>  } else if (*tnl_port != ODPP_NONE &&
> diff --git a/tests/tunnel-push-pop.at b/tests/tunnel-push-pop.at
> index c63344196..0bac362f4 100644
> --- a/tests/tunnel-push-pop.at
> +++ b/tests/tunnel-push-pop.at
> @@ -369,6 +369,26 @@ AT_CHECK([ovs-appctl tnl/neigh/show | grep br | sort], 
> [0], [dnl
>  1.1.2.92  f8:bc:12:44:34:b6   br0
>  ])
>
> +dnl Receiving Gratuitous ARP request with correct VLAN id should alter 
> tunnel neighbor cache
> +AT_CHECK([ovs-appctl netdev-dummy/receive p0 
> 'recirc_id(0),in_port(1),eth(src=f8:bc:12:44:34:c8,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=10,pcp=7),encap(eth_type(0x0806),arp(sip=1.1.2.92,tip=1.1.2.92,op=1,sha=f8:bc:12:44:34:c8,tha=00:00:00:00:00:00))'])
> +
> +ovs-appctl time/warp 1000
> +ovs-appctl time/warp 1000
> +
> +AT_CHECK([ovs-appctl tnl/neigh/show | grep br | sort], [0], [dnl
> +1.1.2.92  f8:bc:12:44:34:c8   br0
> +])
> +
> +dnl Receiving Gratuitous ARP reply with correct VLAN id should alter tunnel 
> neighbor cache
> +AT_CHECK([ovs-appctl netdev-dummy/receive p0 
> 'recirc_id(0),in_port(1),eth(src=f8:bc:12:44:34:b2,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=10,pcp=7),encap(eth_type(0x0806),arp(sip=1.1.2.92,tip=1.1.2.92,op=2,sha=f8:bc:12:44:34:b2,tha=f8:bc:12:44:34:b2))'])
> +
> +ovs-appctl time/warp 1000
> +ovs-appctl time/warp 1000
> +
> +AT_CHECK([ovs-appctl tnl/neigh/show | grep br | sort], [0], [dnl
> +1.1.2.92  f8:bc:12:44:34:b2   br0
> +])
> +
>  dnl Receive ARP reply without VLAN header
>  AT_CHECK([ovs-vsctl set port br0 tag=0])
>  AT_CHECK([ovs-appctl tnl/neigh/flush], [0], [OK
> --
> 2.27.0
>
>
>
>
> ___
> dev mailing list
> d...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [Python] Parsing di un file ldif invalido

2022-09-15 Thread Valerio Pachera
On Wed, 14 Sep 2022 at 15:36, Marco Giusti <
marco.giu...@posteo.de> wrote:

>
>
> #!/usr/bin/env python3
>
> import sys
> from ldif3 import LDIFParser
>
> ldif_path = sys.argv[1]
>
> with open(ldif_path, 'rb') as ldif_file:
>  parser = LDIFParser(ldif_file)
>
>  for dn, entry in parser.parse():
>  try:
>  print(dn, entry['cn'])
>  except ValueError:
>  continue
>
>
Hi, thanks for the reply, but the error happens at the moment the
dn variable is populated, before the try.
I don't think it is possible to handle the exception at that stage :-(.
It should be handled by the ldif3 module.
While writing this I went digging through the library's documentation
and among the parameters I found:
"strict (boolean) – If set to False, recoverable parse errors will produce
log warnings rather than exceptions."

I modified the script as follows:

---
#!/usr/bin/env python3

import sys
from pprint import pprint
from ldif3 import LDIFParser

ldif_path = sys.argv[1]

with open(ldif_path, 'rb') as ldif_file:
    parser = LDIFParser(ldif_file, strict=False)

    for dn, entry in parser.parse():
        pprint(dn)
        pprint(entry)
---

I ran the script:

---
./test.py bad_sample.ldif 2> error.log

'cn=Mario, Rossi,mail=mario.ro...@domain.com'
OrderedDict([('objectclass',
  ['top',
   'person',
   'organizationalPerson',
   'inetOrgPerson',
   'mozillaAbPersonAlpha']),
 ('givenName', ['Mario Rossi']),
 ('cn', ['Mario, Rossi']),
 ('mail', ['mario.ro...@domain.com']),
 ('modifytimestamp', ['1632815299'])])

cat error.log
No valid string-representation of distinguished name cn=Mario, Rossi,mail=
mario.ro...@domain.com.
---

This way the script does not stop, and I still have a record of the problematic
contacts to fix at the source!
___
Python mailing list
Python@lists.python.it
https://lists.python.it/mailman/listinfo/python


[Python] Parsing di un file ldif invalido

2022-09-14 Thread Valerio Pachera
Good morning everyone, I have the following need: to parse an ldif file produced
by exporting a Thunderbird address book.

Let's take this ldif as an example:
---
dn: cn=Mario Rossi,mail=mario.ro...@domain.com
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
objectclass: mozillaAbPersonAlpha
givenName: Mario Rossi
cn: Mario Rossi
mail: mario.ro...@domain.com
modifytimestamp: 1632815299
---

And the code needed to do the parsing:
---
#!/usr/bin/env python3

import sys
from ldif3 import LDIFParser

ldif_path = sys.argv[1]

with open(ldif_path, 'rb') as ldif_file:
    parser = LDIFParser(ldif_file)

    for dn, entry in parser.parse():
        print(dn, entry['cn'])
---

It works perfectly until it hits a contact that has a comma in the
CN.
Example of a problematic ldif:
---
dn: cn=Mario, Rossi,mail=mario.ro...@domain.com
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
objectclass: mozillaAbPersonAlpha
givenName: Mario Rossi
cn: Mario, Rossi
mail: mario.ro...@domain.com
modifytimestamp: 1632815299
---

The error you get:
---
Traceback (most recent call last):
  File "./simple.py", line 11, in 
for dn, entry in parser.parse():
  File "/usr/local/lib/python3.8/dist-packages/ldif3.py", line 384, in parse
yield self._parse_entry_record(block)
  File "/usr/local/lib/python3.8/dist-packages/ldif3.py", line 360, in
_parse_entry_record
self._check_dn(dn, attr_value)
  File "/usr/local/lib/python3.8/dist-packages/ldif3.py", line 339, in
_check_dn
self._error('No valid string-representation of '
  File "/usr/local/lib/python3.8/dist-packages/ldif3.py", line 330, in
_error
raise ValueError(msg)
ValueError: No valid string-representation of distinguished name cn=Mario,
Rossi,mail=mario.ro...@domain.com.
---

Note: the CN is sometimes encoded as base64, and it may contain
a comma that breaks the syntax.

Is it possible to "ignore" the bad values with a try? How do I apply it to the
loop?
https://stackoverflow.com/questions/39889811/python-ldif3-parser-and-exception-in-for-loop

Any suggestion is welcome :-)
___
Python mailing list
Python@lists.python.it
https://lists.python.it/mailman/listinfo/python


[Kernel-packages] [Bug 1989458] [NEW] package linux-firmware 20220329.git681281e4-0ubuntu3.5 failed to install/upgrade: il sottoprocesso installato pacchetto linux-firmware script post-installation ha

2022-09-13 Thread valerio cogrossi
Public bug reported:

I don't know how to describe the problem

ProblemType: Package
DistroRelease: Ubuntu 22.04
Package: linux-firmware 20220329.git681281e4-0ubuntu3.5
ProcVersionSignature: Ubuntu 5.15.0-47.51-generic 5.15.46
Uname: Linux 5.15.0-47-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.11-0ubuntu82.1
Architecture: amd64
AudioDevicesInUse:
 USERPID ACCESS COMMAND
 /dev/snd/controlC0:  valerio3593 F pulseaudio
CRDA: N/A
CasperMD5CheckResult: unknown
Date: Tue Sep 13 14:43:27 2022
Dependencies: firmware-sof-signed 2.0-1ubuntu3
ErrorMessage: il sottoprocesso installato pacchetto linux-firmware script 
post-installation ha restituito lo stato di errore 1
InstallationDate: Installed on 2022-09-13 (0 days ago)
InstallationMedia: Lubuntu 22.04 LTS "Jammy Jellyfish" - Release amd64 
(20220419)
MachineType: To Be Filled By O.E.M. To Be Filled By O.E.M.
PackageArchitecture: all
ProcFB: 0 i915drmfb
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-47-generic 
root=UUID=b3300f1c-ce9f-47e9-962d-37ee5300ba29 ro quiet splash
PulseList: Error: command ['pacmd', 'list'] failed with exit code 1: No 
PulseAudio daemon running, or not running as session daemon.
Python3Details: /usr/bin/python3.10, Python 3.10.4, python3-minimal, 
3.10.4-0ubuntu2
PythonDetails: N/A
RelatedPackageVersions: grub-pc N/A
RfKill:
 
SourcePackage: linux-firmware
Title: package linux-firmware 20220329.git681281e4-0ubuntu3.5 failed to 
install/upgrade: il sottoprocesso installato pacchetto linux-firmware script 
post-installation ha restituito lo stato di errore 1
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 05/31/2010
dmi.bios.release: 8.15
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: P1.20
dmi.board.name: G41M-VS3
dmi.board.vendor: ASRock
dmi.chassis.asset.tag: To Be Filled By O.E.M.
dmi.chassis.type: 3
dmi.chassis.vendor: To Be Filled By O.E.M.
dmi.chassis.version: To Be Filled By O.E.M.
dmi.modalias: 
dmi:bvnAmericanMegatrendsInc.:bvrP1.20:bd05/31/2010:br8.15:svnToBeFilledByO.E.M.:pnToBeFilledByO.E.M.:pvrToBeFilledByO.E.M.:rvnASRock:rnG41M-VS3:rvr:cvnToBeFilledByO.E.M.:ct3:cvrToBeFilledByO.E.M.:skuToBeFilledByO.E.M.:
dmi.product.family: To Be Filled By O.E.M.
dmi.product.name: To Be Filled By O.E.M.
dmi.product.sku: To Be Filled By O.E.M.
dmi.product.version: To Be Filled By O.E.M.
dmi.sys.vendor: To Be Filled By O.E.M.

** Affects: linux-firmware (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-package jammy

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-firmware in Ubuntu.
https://bugs.launchpad.net/bugs/1989458

Title:
  package linux-firmware 20220329.git681281e4-0ubuntu3.5 failed to
  install/upgrade: il sottoprocesso installato pacchetto linux-firmware
  script post-installation ha restituito lo stato di errore 1

Status in linux-firmware package in Ubuntu:
  New


Re: problema Thunderbird 102.2.1-1 - testing

2022-09-12 Thread valerio




On 13/09/22 08:29, Luca Sighinolfi wrote:

Hi Angelo,

On 2022-09-13 05:51, root wrote:

Something similar happened to a friend of mine ten years ago, and the
problem was a spam email that messed everything up.
I had fixed it by renaming the .thunderbird folder and setting up the
mail accounts again.


I tried renaming the .thunderbird folder so that the program would
start from scratch.


I successfully reconnected one of the mail accounts.


hi,
I don't know how complex the account is, but since you renamed the folder
with the mail, instead of making Thunderbird start over, couldn't you do a
reinstallation?
personally I have a few addresses, but back then, to recover old mail, I
simply put the mail files (with made-up names) into existing mail
directories.


obviously with due caution...
valerio



Re: problema Thunderbird 102.2.1-1 - testing

2022-09-12 Thread valerio




On 12/09/22 09:20, Luca Sighinolfi wrote:

Hi Piviul

On 2022-09-12 06:30, Piviul wrote:

On 11/09/22 13:08, Luca Sighinolfi wrote:

[...]

ATTENTION: default value of option mesa_glthread overridden by
environment.


I don't think that's related, I get it too... if you open thunderbird from
the console, when you change an option in the View menu, doesn't it print
an error on the console, apart from the one at startup you already reported?


On the console I have no additional information...

This problem makes the program unusable and I'm stuck...

Still, the information is there, because if I run a search it returns
results, and if I ask it to download mail it gives no error and seems
to download correctly.


Except that I see neither folders nor mail...


hi,
can you describe what you do see, then?
have you checked, in the .thunderbird directory, the permissions of the
mail files, and what they contain?
for the record (though I think it's unrelated), a while ago apparmor was
blocking some operations.

I wouldn't know what else to suggest...

valerio




Piviul


Thanks




Re: problema Thunderbird 102.2.1-1 - testing

2022-09-11 Thread valerio

hi,

On 11/09/22 15:14, Luca Sighinolfi wrote:

Hi Valerio

On Sun, 11 Sep 2022 14:26:56 +0200
valerio  wrote:


hi.

On 11/09/22 13:08, Luca Sighinolfi wrote:

Good morning everyone,

I have a problem with Thunderbird on my wife's laptop.

The system is Debian Testing, XFCE, Thunderbird updated as of today:

ii  thunderbird 1:102.2.1-1amd64
ii  thunderbird-l10n-it 1:102.2.1-1all

kernel: 5.18.0-4-amd64 #1 SMP PREEMPT_DYNAMIC Debian 5.18.16-1
(2022-08-10) x86_64 GNU/Linux

When I start the program, I see neither the list of folders on the
left nor the list of messages.
   


have you looked in the View menu?
in case something is excluded?


probably other menu entries too, like layout, or something else
me too: even though I've used Thunderbird for several years, I've never dug into it



Yes, I looked and tried a few options, but nothing helped.
In the View --> Folders menu I notice that no option is checked;
whatever selection I make has no effect and is forgotten when the
program restarts. I don't have enough experience with Thunderbird to
know whether that's how it should be...




This happened after a rather large update last month
(unfortunately I update when I can...).

At first I thought it had lost the profile, but looking in the
accounts menu, they are all there.

I still tried to create a new profile with the -P option, starting
from the console. But even in this case it stores the account, it
even shows me the folders and the emails, but when I quit Thunderbird
and restart I'm back to square one: I see neither the folders nor
the messages.

If I start from the console, the only thing I see is the following
message repeated 7 times:

ATTENTION: default value of option mesa_glthread overridden by
environment.
   


this one, on the other hand, is a bug:

https://bugzilla.mozilla.org/show_bug.cgi?id=1744389

but I don't understand what it could depend on; maybe other programs that
zoom in? and that change the display settings.

let's see if someone knows more ...


Thanks


valerio



Re: problema Thunderbird 102.2.1-1 - testing

2022-09-11 Thread valerio

hi.

On 11/09/22 13:08, Luca Sighinolfi wrote:

Good morning everyone,

I have a problem with Thunderbird on my wife's laptop.

The system is Debian Testing, XFCE, Thunderbird updated as of today:

ii  thunderbird 1:102.2.1-1amd64
ii  thunderbird-l10n-it 1:102.2.1-1all

kernel: 5.18.0-4-amd64 #1 SMP PREEMPT_DYNAMIC Debian 5.18.16-1
(2022-08-10) x86_64 GNU/Linux

When I start the program, I see neither the list of folders on the
left nor the list of messages.



have you looked in the View menu?
in case something is excluded?


This happened after a rather large update last month
(unfortunately I update when I can...).

At first I thought it had lost the profile, but looking in the
accounts menu, they are all there.

I still tried to create a new profile with the -P option, starting
from the console. But even in this case it stores the account, it
even shows me the folders and the emails, but when I quit Thunderbird
and restart I'm back to square one: I see neither the folders nor
the messages.

If I start from the console, the only thing I see is the following
message repeated 7 times:

ATTENTION: default value of option mesa_glthread overridden by
environment.

it looks like "something" is overriding "something else": what does the
"env" command say?




I also looked through the reported bugs but found nothing similar to
my problem, and searching the web did not give great results.

I installed Thunderbird from Unstable, but that didn't fix it.

Do you have any suggestions?


it doesn't look like a bug




Thanks a lot
Bye



valerio



Re: [ovs-dev] [PATCH 2/2] ct-dpif: Do not show flag key if empty.

2022-09-09 Thread Paolo Valerio
Ilya Maximets  writes:

> On 8/4/22 18:07, Paolo Valerio wrote:
>> This patch avoids showing the flags_orig/flags_reply key if they have no value.
>> E.g., the following:
>> 
>> NEW tcp,orig=([...]),reply=([...]),id=1800618864,
>> status=CONFIRMED|SRC_NAT_DONE|DST_NAT_DONE,timeout=120,
>> protoinfo=(state_orig=SYN_SENT,state_reply=SYN_SENT,wscale_orig=7,
>>wscale_reply=0,flags_orig=WINDOW_SCALE|SACK_PERM,flags_reply=)
>> 
>> becomes:
>> 
>> NEW tcp,orig=([...]),reply=([...]),id=1800618864,
>> status=CONFIRMED|SRC_NAT_DONE|DST_NAT_DONE,timeout=120,
>> protoinfo=(state_orig=SYN_SENT,state_reply=SYN_SENT,wscale_orig=7,
>>        wscale_reply=0,flags_orig=WINDOW_SCALE|SACK_PERM)
>> 
>> Signed-off-by: Paolo Valerio 
>> ---
>>  lib/ct-dpif.c |   14 ++
>>  1 file changed, 10 insertions(+), 4 deletions(-)
>> 
>> diff --git a/lib/ct-dpif.c b/lib/ct-dpif.c
>> index cfc2315e3..f1a375523 100644
>> --- a/lib/ct-dpif.c
>> +++ b/lib/ct-dpif.c
>> @@ -512,10 +512,16 @@ ct_dpif_format_protoinfo_tcp_verbose(struct ds *ds,
>>protoinfo->tcp.wscale_orig,
>>protoinfo->tcp.wscale_reply);
>>  }
>> -ct_dpif_format_flags(ds, ",flags_orig=", protoinfo->tcp.flags_orig,
>> - tcp_flags);
>> -ct_dpif_format_flags(ds, ",flags_reply=", protoinfo->tcp.flags_reply,
>> - tcp_flags);
>> +
>> +if (protoinfo->tcp.flags_orig) {
>> +ct_dpif_format_flags(ds, ",flags_orig=", protoinfo->tcp.flags_orig,
>> + tcp_flags);
>> +}
>> +
>> +if (protoinfo->tcp.flags_reply) {
>> +ct_dpif_format_flags(ds, ",flags_reply=", 
>> protoinfo->tcp.flags_reply,
>> + tcp_flags);
>> +}
>
> Hmm.  I'm trying to understand why ct_dpif_format_flags() exists at all.
> Shouldn't this be just:
>
>   format_flags_masked(ds, "flags_orig", packet_tcp_flag_to_string,
>   protoinfo->tcp.flags_orig, TCP_FLAGS(OVS_BE16_MAX),
>   TCP_FLAGS(OVS_BE16_MAX));
>
> ?
>
> This will change the appearance of the flags, so maybe tcp_flags[] array
> should be replaced with a simple conversion function.
>

Uhm, I guess you're right. It seems redundant and could be removed.
What about something like this?

diff --git a/lib/ct-dpif.c b/lib/ct-dpif.c
index cfc2315e3..6f17a26b5 100644
--- a/lib/ct-dpif.c
+++ b/lib/ct-dpif.c
@@ -35,20 +35,11 @@ static void ct_dpif_format_counters(struct ds *,
 const struct ct_dpif_counters *);
 static void ct_dpif_format_timestamp(struct ds *,
  const struct ct_dpif_timestamp *);
-static void ct_dpif_format_flags(struct ds *, const char *title,
- uint32_t flags, const struct flags *);
 static void ct_dpif_format_protoinfo(struct ds *, const char *title,
  const struct ct_dpif_protoinfo *,
  bool verbose);
 static void ct_dpif_format_helper(struct ds *, const char *title,
   const struct ct_dpif_helper *);
-
-static const struct flags ct_dpif_status_flags[] = {
-#define CT_DPIF_STATUS_FLAG(FLAG) { CT_DPIF_STATUS_##FLAG, #FLAG },
-CT_DPIF_STATUS_FLAGS
-#undef CT_DPIF_STATUS_FLAG
-{ 0, NULL } /* End marker. */
-};
 
 /* Dumping */
 
@@ -275,6 +266,20 @@ ct_dpif_entry_uninit(struct ct_dpif_entry *entry)
 }
 }
 
+static const char *
+ct_dpif_status_flags(uint32_t flags)
+{
+switch (flags) {
+#define CT_DPIF_STATUS_FLAG(FLAG) \
+case CT_DPIF_STATUS_##FLAG: \
+return #FLAG;
+CT_DPIF_STATUS_FLAGS
+#undef CT_DPIF_TCP_FLAG
+default:
+return NULL;
+}
+}
+
 void
 ct_dpif_format_entry(const struct ct_dpif_entry *entry, struct ds *ds,
  bool verbose, bool print_stats)
@@ -305,8 +310,9 @@ ct_dpif_format_entry(const struct ct_dpif_entry *entry, 
struct ds *ds,
 ds_put_format(ds, ",zone=%"PRIu16, entry->zone);
 }
 if (verbose) {
-ct_dpif_format_flags(ds, ",status=", entry->status,
- ct_dpif_status_flags);
+format_flags_masked(ds, ",status", ct_dpif_status_flags,
+entry->status, CT_DPIF_STATUS_MASK,
+CT_DPIF_STATUS_MASK);
 }
 if (print_stats) {
 ds_put_format(ds, ",timeout=%"PRIu32, entry->timeout);
@@ -415,28 +421,6 @@ ct_dpif_format_tuple(struct ds *ds, const struct 
ct_dpif

Re: Subpixel offset for glyphs rendering

2022-09-03 Thread Valerio De Benedetto
> Sorry for the late reply.
No problem.

In the meanwhile, I managed to make it work. I'm using
FT_Outline_Translate(), though I'm still not sure if I'm supposed to
pass the translation vector as pixels or font units. For the moment I'm
using pixels, and it looks like it's working fine. (To whoever will read
this mail, remember to take into account bitmap_left when translating
outlines, I've lost 3 days wondering why my subpixel positioning was
worse than snapping glyphs to whole pixels!)

Since we are here, I would like to ask you some advice, if you don't mind :)

This is the final result:
https://i.postimg.cc/J7vmztrG/Schermata-del-2022-09-03-19-57-38.png
Environment/settings:
- 96 dpi
- Noto Sans
- FT_LOAD_TARGET_LIGHT | FT_LOAD_FORCE_AUTOHINT
- FT_RENDER_MODE_NORMAL
- 1.4 (faux) gamma correction
- No round()/floor()/ceil() anywhere, except for floor()ing the final x
position of glyphs quads in the vertex shader in order to apply subpixel
positioning

Personally, I think the text looks very good, especially considering I'm
planning to avoid LCD subpixel rendering.
My target is to reach a completely linear layout. I'm very close, some
letters are still jumping around when there is more than 1 emoji on the
same line.

The reason why I added "faux" to gamma correction is that I'm elevating
the final pixel alpha value to (1.0f / 1.4f) in the OpenGL fragment
shader, which is before the pixel is automatically blended with the
background color. Nonetheless, I think it is working well enough, at
least the shapes are not as light and frail as when this correction is
absent.
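
To make the effect concrete, here is a tiny illustrative sketch (just the
same math done on the CPU in Python, not my actual shader) of how raising
the coverage to 1/1.4 before blending darkens a 50%-covered pixel of black
text on white:

def blend(fg, bg, coverage, gamma=1.4):
    """Blend a text color over a background using the 'faux' gamma trick:
    raise the coverage to 1/gamma before doing a plain linear blend."""
    a = coverage ** (1.0 / gamma)
    return tuple(f * a + b * (1 - a) for f, b in zip(fg, bg))

# 50% coverage of black on white, with and without the correction:
print(blend((0, 0, 0), (255, 255, 255), 0.5, gamma=1.0))  # (127.5, 127.5, 127.5)
print(blend((0, 0, 0), (255, 255, 255), 0.5))             # roughly (100, 100, 100)

The corrected pixel comes out noticeably darker, which is why the shapes stop
looking light and frail.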

What do you think of my approach so far? Do you see any room for
improvement?

Thanks again for your time.

Valerio





Re: [Oiio-dev] Oiio-dev Digest, Vol 168, Issue 2

2022-09-01 Thread Valerio Viperino
11/2.4.0)
> >>   - Fixes to FindOpenColorIO.cmake module, now it prefers an OCIO
> exported
> >> cmake config (for OCIO 2.1+) unless OPENCOLORIO_NO_CONFIG=ON is set.
> >> #3278 (2.4.0.1/2.3.12)
> >>   - Fix problems with FindOpenEXR build script for Windows. #3281
> >> (2.4.0.1/2.3.12)
> >>   - New CMake cache variable `DOWNSTREAM_CXX_STANDARD` specifies which
> C++
> >> standard is the minimum for downstream projects (c

Subpixel offset for glyphs rendering

2022-08-28 Thread Valerio De Benedetto
Hi, I'm in the process of implementing subpixel positioning in my
application. Since I'm using a glyph cache, I want to generate 4
different versions of the same glyph shifted by 1/4 pixel on the X axis,
and then use the appropriate version at rendering time, depending on the
final X position of the glyph (this explains better what I'm trying to
do:
https://freddie.witherden.org/pages/font-rasterisation/#sub-pixel-positioning).
Is there a way to make freetype rasterize a glyph offset by a certain
amount of pixels? Should I use the FT_Set_Transform() API, in particular
the delta parameter, before calling FT_Render_Glyph()?
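
For context, the selection step I have in mind is only this kind of
bookkeeping (an illustrative Python sketch, independent of FreeType,
assuming variant i of a glyph is rasterized shifted right by i/4 of a
pixel):

import math

def pick_variant(pen_x, num_variants=4):
    """Map a fractional pen position to (whole pixel x, cache variant index),
    assuming variant i was rasterized shifted by i / num_variants pixel."""
    base = math.floor(pen_x)
    r = round((pen_x - base) * num_variants)
    if r == num_variants:   # rounded up past the last variant:
        return base + 1, 0  # snap to the next whole pixel, unshifted copy
    return base, r

print(pick_variant(12.80))  # -> (12, 3): draw the 3/4-pixel copy at x = 12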

Thanks in advance for your response.





RE: [DISCUSS] LiveSync

2022-08-18 Thread Valerio Crescia

Hi all,

I am really interested in implementing this new feature in syncope and 
would like to write my thesis on this work.



Best regards,

Valerio Crescia



On 2022/08/17 07:22:38 Francesco Chicchiriccò wrote:
> Hi all,
> I have put some considerations about LiveSync in [1]: feel free to 
comment and / or amend / complete.

>
> Anyone stepping in for implementation?
>
> Regards.
>
> [1] 
https://cwiki.apache.org/confluence/display/SYNCOPE/%5BDISCUSS%5D+LiveSync

>
> --
> Francesco Chicchiriccò
>
> Tirasa - Open Source Excellence
> http://www.tirasa.net/
>
> Member at The Apache Software Foundation
> Syncope, Cocoon, Olingo, CXF, OpenJPA, PonyMail
> http://home.apache.org/~ilgrosso/
>
>

Re: [ovs-dev] [PATCH] system-traffic: Fix IPv4 fragmentation test sequence for check-kernel.

2022-08-09 Thread Paolo Valerio
Ilya Maximets  writes:

> On 8/5/22 23:49, Paolo Valerio wrote:
>> Ilya Maximets  writes:
>> 
>>> On 8/5/22 17:08, Paolo Valerio wrote:
>>>> The following test sequence:
>>>>
>>>> conntrack - IPv4 fragmentation incomplete reassembled packet
>>>> conntrack - IPv4 fragmentation with fragments specified
>>>>
>>>> leads to a systematic failure of the latter test on the kernel
>>>> datapath (linux).  Multiple executions of the former may also lead to
>>>> multiple failures.
>>>> This is due to the fact that fragments not yet reassembled are kept in
>>>> a queue for /proc/sys/net/ipv4/ipfrag_time seconds, and if the
>>>> kernel receives a fragment already present in the queue, it returns
>>>> -EINVAL.
>>>
>>> Thanks for the patch!  I've been looking at the issue earlier
>>> this week.  One thing I don't understand is that we're reloading
>>> all the netfilter modules between tests, shouldn't this clear
>>> all the pending queues?  Or this re-assembly is happening outside
>>> of the conntrack?
>>>
>> 
>> That's a fair point.
>> AFAICT, queues and the pending fragments sit in a per netns fragment
>> queue directory. In the case of the kernel dp ovs_dp_get_net(dp). If my
>> reading is correct, IPv4 pending fragments should be removed when the
>> netns is destroyed.
>
> Hmm, ok.  Thanks for the explanation.  I tried to prototype some
> change to run all tests in a separate namespace that gets removed
> after each test, but the integration with autotest doesn't work
> well this way.  I guess, we either need a way to put current shell
> (not the forked one) into a new namespace, for which I didn't find
> any supported APIs, or we'll have to heavily modify all the tests
> and macros, which doesn't sound like a lot of fun.
>

In general, the idea seems a good one to me aside from this specific
issue.
Yes, no APIs spotted, and I agree that all those modifications don't
sound particularly fun :)

> For now, I confirmed that the fix is working on my setup.
> Applied and backported down to 2.13.
>

Thank you Ilya!

> Best regards, Ilya Maximets.
>
>> 
>>>>
>>>> Below the related log message:
>>>> |00058|dpif|WARN|system@ovs-system: execute ct(commit) failed (Invalid 
>>>> argument)
>>>>   on packet 
>>>> udp,vlan_tci=0x,dl_src=50:54:00:00:00:09,dl_dst=50:54:00:00:00:0a,
>>>>   
>>>> nw_src=10.1.1.1,nw_dst=10.1.1.2,nw_tos=0,nw_ecn=0,nw_ttl=0,nw_frag=first,tp_src=1,
>>>>   tp_dst=2 udp_csum:0
>>>>
>>>> Fix the sequence by sending the second fragment in "conntrack - IPv4
>>>> fragmentation incomplete reassembled packet", once the checks are
>>>> done.
>>>>
>>>> IPv6 tests are not affected as the defrag kernel code path pretends to
>>>> add the duplicate fragment to the queue returning -EINPROGRESS, when a
>>>> duplicate is detected.
>>>>
>>>> Signed-off-by: Paolo Valerio 
>>>> ---
>>>>  tests/system-traffic.at |5 +
>>>>  1 file changed, 5 insertions(+)
>>>>
>>>> diff --git a/tests/system-traffic.at b/tests/system-traffic.at
>>>> index 1a864057c..8497b4d9e 100644
>>>> --- a/tests/system-traffic.at
>>>> +++ b/tests/system-traffic.at
>>>> @@ -3452,6 +3452,11 @@ AT_CHECK([ovs-ofctl bundle br0 bundle.txt])
>>>>  AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(10.1.1.2)], [0], 
>>>> [dnl
>>>>  ])
>>>>  
>>>> +dnl Send the second fragment in order to avoid keeping the first fragment
>>>> +dnl in the queue until the expiration occurs. Fragments already queued, 
>>>> if resent,
>>>> +dnl may lead to failures on the kernel datapath.
>>>> +AT_CHECK([ovs-ofctl -O OpenFlow13 packet-out br0 "in_port=1, 
>>>> packet=5054000a505400090800453100320011a4860a0101010a010102000100020008001020304050607080910203040506070809,
>>>>  actions=ct(commit)"])
>>>> +
>>>>  OVS_TRAFFIC_VSWITCHD_STOP
>>>>  AT_CLEANUP
>>>>  
>>>>
>> 

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH] system-traffic: Fix IPv4 fragmentation test sequence for check-kernel.

2022-08-05 Thread Paolo Valerio
Ilya Maximets  writes:

> On 8/5/22 17:08, Paolo Valerio wrote:
>> The following test sequence:
>> 
>> conntrack - IPv4 fragmentation incomplete reassembled packet
>> conntrack - IPv4 fragmentation with fragments specified
>> 
>> leads to a systematic failure of the latter test on the kernel
>> datapath (linux).  Multiple executions of the former may also lead to
>> multiple failures.
>> This is due to the fact that fragments not yet reassembled are kept in
>> a queue for /proc/sys/net/ipv4/ipfrag_time seconds, and if the
>> kernel receives a fragment already present in the queue, it returns
>> -EINVAL.
>
> Thanks for the patch!  I've been looking at the issue earlier
> this week.  One thing I don't understand is that we're reloading
> all the netfilter modules between tests, shouldn't this clear
> all the pending queues?  Or this re-assembly is happening outside
> of the conntrack?
>

That's a fair point.
AFAICT, queues and the pending fragments sit in a per netns fragment
queue directory. In the case of the kernel dp ovs_dp_get_net(dp). If my
reading is correct, IPv4 pending fragments should be removed when the
netns is destroyed.

>> 
>> Below the related log message:
>> |00058|dpif|WARN|system@ovs-system: execute ct(commit) failed (Invalid 
>> argument)
>>   on packet 
>> udp,vlan_tci=0x,dl_src=50:54:00:00:00:09,dl_dst=50:54:00:00:00:0a,
>>   
>> nw_src=10.1.1.1,nw_dst=10.1.1.2,nw_tos=0,nw_ecn=0,nw_ttl=0,nw_frag=first,tp_src=1,
>>   tp_dst=2 udp_csum:0
>> 
>> Fix the sequence by sending the second fragment in "conntrack - IPv4
>> fragmentation incomplete reassembled packet", once the checks are
>> done.
>> 
>> IPv6 tests are not affected as the defrag kernel code path pretends to
>> add the duplicate fragment to the queue returning -EINPROGRESS, when a
>> duplicate is detected.
>> 
>> Signed-off-by: Paolo Valerio 
>> ---
>>  tests/system-traffic.at |5 +
>>  1 file changed, 5 insertions(+)
>> 
>> diff --git a/tests/system-traffic.at b/tests/system-traffic.at
>> index 1a864057c..8497b4d9e 100644
>> --- a/tests/system-traffic.at
>> +++ b/tests/system-traffic.at
>> @@ -3452,6 +3452,11 @@ AT_CHECK([ovs-ofctl bundle br0 bundle.txt])
>>  AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(10.1.1.2)], [0], [dnl
>>  ])
>>  
>> +dnl Send the second fragment in order to avoid keeping the first fragment
>> +dnl in the queue until the expiration occurs. Fragments already queued, if 
>> resent,
>> +dnl may lead to failures on the kernel datapath.
>> +AT_CHECK([ovs-ofctl -O OpenFlow13 packet-out br0 "in_port=1, 
>> packet=5054000a505400090800453100320011a4860a0101010a010102000100020008001020304050607080910203040506070809,
>>  actions=ct(commit)"])
>> +
>>  OVS_TRAFFIC_VSWITCHD_STOP
>>  AT_CLEANUP
>>  
>> 

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] [PATCH] system-traffic: Fix IPv4 fragmentation test sequence for check-kernel.

2022-08-05 Thread Paolo Valerio
The following test sequence:

conntrack - IPv4 fragmentation incomplete reassembled packet
conntrack - IPv4 fragmentation with fragments specified

leads to a systematic failure of the latter test on the kernel
datapath (linux).  Multiple executions of the former may also lead to
multiple failures.
This is due to the fact that fragments not yet reassembled are kept in
a queue for /proc/sys/net/ipv4/ipfrag_time seconds, and if the
kernel receives a fragment already present in the queue, it returns
-EINVAL.

Below the related log message:
|00058|dpif|WARN|system@ovs-system: execute ct(commit) failed (Invalid argument)
  on packet 
udp,vlan_tci=0x,dl_src=50:54:00:00:00:09,dl_dst=50:54:00:00:00:0a,
  
nw_src=10.1.1.1,nw_dst=10.1.1.2,nw_tos=0,nw_ecn=0,nw_ttl=0,nw_frag=first,tp_src=1,
  tp_dst=2 udp_csum:0

Fix the sequence by sending the second fragment in "conntrack - IPv4
fragmentation incomplete reassembled packet", once the checks are
done.

IPv6 tests are not affected as the defrag kernel code path pretends to
add the duplicate fragment to the queue returning -EINPROGRESS, when a
duplicate is detected.

Signed-off-by: Paolo Valerio 
---
 tests/system-traffic.at |5 +
 1 file changed, 5 insertions(+)

diff --git a/tests/system-traffic.at b/tests/system-traffic.at
index 1a864057c..8497b4d9e 100644
--- a/tests/system-traffic.at
+++ b/tests/system-traffic.at
@@ -3452,6 +3452,11 @@ AT_CHECK([ovs-ofctl bundle br0 bundle.txt])
 AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(10.1.1.2)], [0], [dnl
 ])
 
+dnl Send the second fragment in order to avoid keeping the first fragment
+dnl in the queue until the expiration occurs. Fragments already queued, if 
resent,
+dnl may lead to failures on the kernel datapath.
+AT_CHECK([ovs-ofctl -O OpenFlow13 packet-out br0 "in_port=1, 
packet=5054000a505400090800453100320011a4860a0101010a010102000100020008001020304050607080910203040506070809,
 actions=ct(commit)"])
+
 OVS_TRAFFIC_VSWITCHD_STOP
 AT_CLEANUP
 

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: dis-informazione su linux

2022-08-05 Thread valerio




On 24/07/22 11:59, raffaele angelo di paola wrote:

link to the article?



hi,
I'm not sure, since I don't know French, but I think it's this one:

https://www.revue-ballast.fr/la-part-anarchiste-des-communs/

valerio



[ovs-dev] [PATCH 1/2] netlink-conntrack: Do not fail to parse if optional TCP protocol attributes are not found.

2022-08-04 Thread Paolo Valerio
Some of the CTA_PROTOINFO_TCP nested attributes are not always
included in the received message, but the parsing logic considers them
as required, failing in case they are not found.

This was observed while monitoring some connections by reading the
events sent by conntrack:

./ovstest test-netlink-conntrack monitor
[...]
2022-08-04T09:39:02Z|7|netlink_conntrack|ERR|Could not parse nested TCP 
protoinfo
  options. Possibly incompatible Linux kernel version.
2022-08-04T09:39:02Z|8|netlink_notifier|WARN|unexpected netlink message 
contents
[...]

All the TCP DELETE/DESTROY events fail to parse with the message
above.

Fix it by turning the relevant attributes to optional.

Signed-off-by: Paolo Valerio 
---
- [1] is the related piece of code that skips flags and wscale for the
  destroy evts.

[1] 
https://github.com/torvalds/linux/blob/master/net/netfilter/nf_conntrack_proto_tcp.c#L1202
---
 lib/netlink-conntrack.c |   45 +++--
 1 file changed, 27 insertions(+), 18 deletions(-)

diff --git a/lib/netlink-conntrack.c b/lib/netlink-conntrack.c
index 78f1bf60b..4fcde9ba1 100644
--- a/lib/netlink-conntrack.c
+++ b/lib/netlink-conntrack.c
@@ -672,13 +672,13 @@ nl_ct_parse_protoinfo_tcp(struct nlattr *nla,
 static const struct nl_policy policy[] = {
 [CTA_PROTOINFO_TCP_STATE] = { .type = NL_A_U8, .optional = false },
 [CTA_PROTOINFO_TCP_WSCALE_ORIGINAL] = { .type = NL_A_U8,
-.optional = false },
+.optional = true },
 [CTA_PROTOINFO_TCP_WSCALE_REPLY] = { .type = NL_A_U8,
- .optional = false },
+ .optional = true },
 [CTA_PROTOINFO_TCP_FLAGS_ORIGINAL] = { .type = NL_A_U16,
-   .optional = false },
+   .optional = true },
 [CTA_PROTOINFO_TCP_FLAGS_REPLY] = { .type = NL_A_U16,
-.optional = false },
+.optional = true },
 };
 struct nlattr *attrs[ARRAY_SIZE(policy)];
 bool parsed;
@@ -695,20 +695,29 @@ nl_ct_parse_protoinfo_tcp(struct nlattr *nla,
  * connection, but our structures store a separate state for
  * each endpoint.  Here we duplicate the state. */
 protoinfo->tcp.state_orig = protoinfo->tcp.state_reply = state;
-protoinfo->tcp.wscale_orig = nl_attr_get_u8(
-attrs[CTA_PROTOINFO_TCP_WSCALE_ORIGINAL]);
-protoinfo->tcp.wscale_reply = nl_attr_get_u8(
-attrs[CTA_PROTOINFO_TCP_WSCALE_REPLY]);
-flags_orig =
-nl_attr_get_unspec(attrs[CTA_PROTOINFO_TCP_FLAGS_ORIGINAL],
-   sizeof *flags_orig);
-protoinfo->tcp.flags_orig =
-ip_ct_tcp_flags_to_dpif(flags_orig->flags);
-flags_reply =
-nl_attr_get_unspec(attrs[CTA_PROTOINFO_TCP_FLAGS_REPLY],
-   sizeof *flags_reply);
-protoinfo->tcp.flags_reply =
-ip_ct_tcp_flags_to_dpif(flags_reply->flags);
+
+if (attrs[CTA_PROTOINFO_TCP_WSCALE_ORIGINAL]) {
+protoinfo->tcp.wscale_orig =
+nl_attr_get_u8(attrs[CTA_PROTOINFO_TCP_WSCALE_ORIGINAL]);
+}
+if (attrs[CTA_PROTOINFO_TCP_WSCALE_REPLY]) {
+protoinfo->tcp.wscale_reply =
+nl_attr_get_u8(attrs[CTA_PROTOINFO_TCP_WSCALE_REPLY]);
+}
+if (attrs[CTA_PROTOINFO_TCP_FLAGS_ORIGINAL]) {
+flags_orig =
+nl_attr_get_unspec(attrs[CTA_PROTOINFO_TCP_FLAGS_ORIGINAL],
+   sizeof *flags_orig);
+protoinfo->tcp.flags_orig =
+ip_ct_tcp_flags_to_dpif(flags_orig->flags);
+}
+if (attrs[CTA_PROTOINFO_TCP_FLAGS_REPLY]) {
+flags_reply =
+nl_attr_get_unspec(attrs[CTA_PROTOINFO_TCP_FLAGS_REPLY],
+   sizeof *flags_reply);
+protoinfo->tcp.flags_reply =
+ip_ct_tcp_flags_to_dpif(flags_reply->flags);
+}
 } else {
 VLOG_ERR_RL(&rl, "Could not parse nested TCP protoinfo options. "
 "Possibly incompatible Linux kernel version.");

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] [PATCH 2/2] ct-dpif: Do not show flag key if empty.

2022-08-04 Thread Paolo Valerio
This patch avoids showing the flags_orig/flags_reply key if they have no value.
E.g., the following:

NEW tcp,orig=([...]),reply=([...]),id=1800618864,
status=CONFIRMED|SRC_NAT_DONE|DST_NAT_DONE,timeout=120,
protoinfo=(state_orig=SYN_SENT,state_reply=SYN_SENT,wscale_orig=7,
   wscale_reply=0,flags_orig=WINDOW_SCALE|SACK_PERM,flags_reply=)

becomes:

NEW tcp,orig=([...]),reply=([...]),id=1800618864,
status=CONFIRMED|SRC_NAT_DONE|DST_NAT_DONE,timeout=120,
protoinfo=(state_orig=SYN_SENT,state_reply=SYN_SENT,wscale_orig=7,
   wscale_reply=0,flags_orig=WINDOW_SCALE|SACK_PERM)

Signed-off-by: Paolo Valerio 
---
 lib/ct-dpif.c |   14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/lib/ct-dpif.c b/lib/ct-dpif.c
index cfc2315e3..f1a375523 100644
--- a/lib/ct-dpif.c
+++ b/lib/ct-dpif.c
@@ -512,10 +512,16 @@ ct_dpif_format_protoinfo_tcp_verbose(struct ds *ds,
   protoinfo->tcp.wscale_orig,
   protoinfo->tcp.wscale_reply);
 }
-ct_dpif_format_flags(ds, ",flags_orig=", protoinfo->tcp.flags_orig,
- tcp_flags);
-ct_dpif_format_flags(ds, ",flags_reply=", protoinfo->tcp.flags_reply,
- tcp_flags);
+
+if (protoinfo->tcp.flags_orig) {
+ct_dpif_format_flags(ds, ",flags_orig=", protoinfo->tcp.flags_orig,
+ tcp_flags);
+}
+
+if (protoinfo->tcp.flags_reply) {
+ct_dpif_format_flags(ds, ",flags_reply=", protoinfo->tcp.flags_reply,
+ tcp_flags);
+}
 }
 
 static void

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: test carta d'identità

2022-08-02 Thread valerio




On 02/08/22 08:57, valerio wrote:



On 02/08/22 06:45, Piviul wrote:

On 01/08/22 22:34, Davide Prina wrote:

Piviul wrote:


and what's more, using my daughter's CIE and the last 4 digits of her
PIN everything works correctly. I'm puzzled, but I can only conclude
that my CIE got damaged... bah.

it could be that you made three wrong attempts and disabled
the card's PIN...


hi,
do you have the CIE application? The one with the CIE ID icon and the
profile of a face?




or the .CIEPKI directory in your home?



valerio





Re: test carta d'identità

2022-08-01 Thread valerio




On 02/08/22 06:45, Piviul wrote:

On 01/08/22 22:34, Davide Prina wrote:

Piviul wrote:


and what's more, using my daughter's CIE and the last 4 digits of her
PIN everything works correctly. I'm puzzled, but I can only conclude
that my CIE got damaged... bah.

it could be that you made three wrong attempts and disabled
the card's PIN...


hi,
do you have the CIE application? The one with the CIE ID icon and the
profile of a face?


valerio



Re: test carta d'identità

2022-07-31 Thread valerio




On 31/07/22 15:35, Piviul wrote:
Hi everyone, I left the CIE on the reader for a while; a few days ago I
tried to authenticate and couldn't. I picked up the CIE and it was hot,
and since then it hasn't worked anymore. I'm afraid it burned out and so
no longer authenticates, but how can I check that? Aren't there any logs?


hi,
have you tried, with the browser you use, to see whether it gives you any
message when you insert it?


usually under security devices: sometimes I have to activate it from there

valerio



Thanks

Piviul





Re: dis-informazione su linux

2022-07-27 Thread valerio




On 26/07/22 11:02, Davide Prina wrote:

valerio wrote:


an article on the "commons" by Edouard Jourdain appeared in a new
magazine, stating:


I agree that whoever wrote that piece did not understand the subject
they were trying to cover.


my reply (very simple and far from exhaustive, written off the cuff with
a bit of anger):

"Linux is an operating system described as "open source".


but Linux is only the kernel part, or rather one of the usable kernels.
It would be better to call it GNU/Linux, to make clear that it is the GNU
system with the Linux kernel.
Keep in mind that for Debian there are also other, more or less usable,
kernels (e.g. GNU/BSD, GNU/HURD). Depending on the system you are using,
some packages may not be available on the others.
For example, I had found a nice little game that I wanted to install for
my nephews, but the Debian package wasn't there for GNU/Linux; if I
remember correctly, it was only available for GNU/BSD.

Moreover, I would rather talk about free software and not open source
software; even though some use them as synonyms, they don't actually
overlap 100%. There is open source software that is not free software.
For example, a NASA license is not considered free software[¹], but it
is considered open source software[²]


an operating system is a set of programs that lets a human being
interact with a machine; the difference between open source and
proprietary fundamentally lies in the possibility of seeing what the
system does and, if needed, modifying it.


as mentioned, I prefer not to talk about open source software but about
free software, since open source is more oriented to describing how the
software is "developed", while free software is more oriented to the
rights of those who use it.

Free software gives the user certain freedoms:
* the freedom to run the software as you wish and for any purpose
* the freedom to study the source and modify it as you like
* the freedom to redistribute the modified source/executable

Note that by granting these freedoms to third parties, the author can
actually secure for himself the possibility of obtaining and using any
third-party modifications as well. Of course, to get this in return you
have to use the right license for the case. Normally, in my opinion, the
best one is the AGPL.


Linux is free of charge


no, actually Linux, or GNU/Linux, is not free of charge.
A free software license allows you to sell the software, even software
that isn't yours, at whatever price the buyer is willing to pay.
In the past, companies made money precisely by redistributing free
software, especially back when many people had no internet access or
very slow connections.
Even m$, at the time it was calling the GPL viral and to be avoided,
was selling what it called the "GNU tools" to its customers... which
were nothing but part of the GNU system.


Proprietary systems do not
allow any modification or any understanding of what happens,


in general that's not necessarily true; an operating system (or, more
easily, a piece of software) might be distributed with its sources while
modification and redistribution are forbidden.
If I'm not mistaken, at least until a few years ago, m$ gave public
administrations access to the sources of its so-called operating system
(or at least to a good part of them)...


Edouard Jourdain says that Linux has owners, but as far as I know it doesn't,


actually, that is correct.
Free software, unless it is released into the public domain, must have
owners.
A software license is not valid if the software does not have a valid
copyright.
In practice, the software must have "attached" both the authors'
copyright and the applied license. The license has no validity without
the former.
The license states the terms of distribution/use of the software, and
the copyright states who holds the rights to that software and therefore
has the right to apply the chosen license.
The user can accept the license and use the software, or not accept it,
in which case the software cannot be used.

The difference is that by attaching the GPL license to Linux (I'm
talking about the kernel), Torvalds and everyone who contributed have
allowed anyone else to enjoy the freedoms offered by free software.

Suppose a piece of software has the copyright of a single person (which
is the simplest case, also from an international perspective). This
person decides to distribute the software under the AGPL license with
their copyright attached. They distribute all versions up to 5.0 under
the AGPL. Then, from 6.0, they change the license and switch to a
non-free license without distributing the sources. They can perfectly
well do that, and several have gone down this road... unfortunately.
However, that person can no longer prevent the software up to version
5.0 from being taken by others, modified and redistributed.

Re: [ovs-dev] [PATCH] conntrack: Fix conntrack multiple new state

2022-07-25 Thread Paolo Valerio
Hello Eli,

Eli Britstein via dev  writes:

> A connection is established if we see packets from both directions.
> The cited commit [1] fixed the issue of sending twice in one direction,
> but still an issue if more than that.
> Fix it.
>

The patch LGTM.
Just a very minor nit: I guess "[1]" could be removed from the
description. "The cited commit" seems enough.

In any case,

Acked-by: Paolo Valerio 

> Fixes: a867c010ee91 ("conntrack: Fix conntrack new state")
> Signed-off-by: Eli Britstein 
> ---
>  lib/conntrack-other.c   | 7 ---
>  tests/system-traffic.at | 9 +
>  2 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/lib/conntrack-other.c b/lib/conntrack-other.c
> index d3b4601858..7f3e63c384 100644
> --- a/lib/conntrack-other.c
> +++ b/lib/conntrack-other.c
> @@ -48,18 +48,19 @@ other_conn_update(struct conntrack *ct, struct conn 
> *conn_,
>struct dp_packet *pkt OVS_UNUSED, bool reply, long long 
> now)
>  {
>  struct conn_other *conn = conn_other_cast(conn_);
> -enum ct_update_res ret = CT_UPDATE_VALID;
>  
>  if (reply && conn->state != OTHERS_BIDIR) {
>  conn->state = OTHERS_BIDIR;
>  } else if (conn->state == OTHERS_FIRST) {
>  conn->state = OTHERS_MULTIPLE;
> -ret = CT_UPDATE_VALID_NEW;
>  }
>  
>  conn_update_expiration(ct, &conn->up, other_timeouts[conn->state], now);
>  
> -return ret;
> +if (conn->state == OTHERS_BIDIR) {
> +return CT_UPDATE_VALID;
> +}
> +return CT_UPDATE_VALID_NEW;
>  }
>  
>  static bool
> diff --git a/tests/system-traffic.at b/tests/system-traffic.at
> index 89107ab624..182a78847e 100644
> --- a/tests/system-traffic.at
> +++ b/tests/system-traffic.at
> @@ -3078,6 +3078,15 @@ NXST_FLOW reply:
>   table=1, priority=100,ct_state=+est+trk,in_port=1 actions=output:2
>  ])
>  
> +dnl Send a 3rd UDP packet on port 1
> +AT_CHECK([ovs-ofctl -O OpenFlow13 packet-out br0 "in_port=1 
> packet=5054000a505400090800451c0011a4cd0a0101010a010102000100020008
>  actions=resubmit(,0)"])
> +
> +dnl There still should not be any packet that matches the established 
> ct_state.
> +AT_CHECK([ovs-ofctl dump-flows br0 "table=1 in_port=1,ct_state=+trk+est" | 
> ofctl_strip], [0], [dnl
> +NXST_FLOW reply:
> + table=1, priority=100,ct_state=+est+trk,in_port=1 actions=output:2
> +])
> +
>  OVS_TRAFFIC_VSWITCHD_STOP
>  AT_CLEANUP
>  
> -- 
> 2.26.2.1730.g385c171
>
> ___
> dev mailing list
> d...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: dis-informazione su linux

2022-07-24 Thread valerio




On 24/07/22 17:18, Luca Alzetta wrote:

OK, I've more or less absorbed these concepts; what I find strange about
open source, and not entirely innocent, is the race (crazy, from my point
of view) toward network security standards in the way it has developed
over the last fifteen years. Let me try to explain myself better: in my
opinion, encryption algorithms, https protocols, digital signatures,
encrypted authentication and so on are all technologies almost
exclusively at the service of financial transactions


hi,
I'll try to give you a brief answer; then you'd better look for more
precise information on the net.


the difference between an office suite that meets the characteristics of
open source (and is therefore free and transparent) and a proprietary one
is exactly there: the latter is not transparent...


as for security, it does not only concern the financial part. Normally,
when a hacker carries out an attack, they don't do it from their own PC;
they take over a series of (unprotected) PCs and carry out the attacks
with those...


valerio



Re: dis-informazione su linux

2022-07-24 Thread valerio




On 24/07/22 15:32, Lorenzo Breda wrote:

On Sun, 24 Jul 2022 at 11:15 valerio <
bardo_ml_deb...@fastwebnet.it> wrote:



does anyone have any comments?



Usually, those who make criticisms of this kind (typically less
superficial than this one) point to the fact that many large companies
(Google and Microsoft massively) contribute to the development of Linux,
influencing it in some way.


hi,
the point is that it doesn't come across as criticism (which I would
accept) but as false information: it doesn't seem to me that Linux was
born the way the article says, and beyond the (perhaps forced)
collaborations, it talks about ownership...




Honestly, I find it a criticism that is hard to accept. I don't think it
makes much sense that an Open Source product must necessarily remain
"amateur" and that large private companies shouldn't contribute (also
because they use it, and it's good that they pay AT LEAST by putting a
few developers on it). Moreover, there are historic pieces of software,
such as CUPS, for which I find the effects of long-standing private
collaboration to have been very good.

At the same time, though, it is quite a legitimate criticism from several
points of view: the fact that the kernel contains a lot of stuff useful
to Android and little else is far from false.



valerio



Re: dis-informazione su linux

2022-07-24 Thread valerio




On 24/07/22 12:05, Luca Alzetta wrote:
Hello, could you send us your reply? The philosophical details of open
source have never been clear to me and, technologically speaking, I'm
clueless. Still, by gut feeling I like the LINUX world; what I detest is
mass computerization.


the article is not online...

my reply (very simple and far from exhaustive, written off the cuff with
a bit of anger):


"Linux is an operating system described as "open source".
An operating system is a set of programs that lets a human being
interact with a machine; the difference between open source and
proprietary fundamentally lies in the possibility of seeing what the
system does and, if needed, modifying it. Linux is free of charge: you
don't have to pay anyone any fee to use it, and you can change it
completely as you like. Proprietary systems do not allow any
modification or any understanding of what happens; many have hidden
commands (backdoors), like service doors accessible even online,
through which the owners can see and change what happens on your
machine (smartphone, desktop or laptop).
Edouard Jourdain says that Linux has owners, but as far as I know it
doesn't: nobody has to pay fees to use the operating system. I think
he doesn't know what an open source operating system is, which to me
looks like a very good example of a common good. There are indeed
Communities that take care of handling the possible defects of the
system and see to fixing them. The users usually contribute to
improving it, each according to their ability.


...

valerio



On 24/07/22 11:15, valerio wrote:

good morning everyone,
an article on the "commons" by Edouard Jourdain appeared in a new
magazine, stating:



"Platform cooperativism thus aims to give the management of the
platform back to its users, and it does so along a line of
self-management dear to the libertarian tradition. One of the
challenges of these platforms is to maintain this kind of model in a
hostile economic environment, as shown by the experience of Linux (an
operating system born from the meeting of the Hacker way of working
and the principles of free software) which by now has become 90%
owned by capitalist companies."


I wrote to one of the people in charge of the magazine a summary of
open source and the differences with proprietary systems, ending by
stating that the author of the article, in my opinion, knows nothing
about what open source is.


does anyone have any comments?

valerio







dis-informazione su linux

2022-07-24 Thread valerio

good morning everyone,
an article on the "commons" by Edouard Jourdain appeared in a new
magazine, stating:



"Platform cooperativism thus aims to give the management of the
platform back to its users, and it does so along a line of
self-management dear to the libertarian tradition. One of the
challenges of these platforms is to maintain this kind of model in a
hostile economic environment, as shown by the experience of Linux (an
operating system born from the meeting of the Hacker way of working
and the principles of free software) which by now has become 90%
owned by capitalist companies."


I wrote to one of the people in charge of the magazine a summary of open
source and the differences with proprietary systems, ending by stating
that the author of the article, in my opinion, knows nothing about what
open source is.


does anyone have any comments?

valerio



[ovs-dev] [PATCH v7 5/5] conntrack: Check for expiration before comparing the keys during the lookup

2022-07-11 Thread Paolo Valerio
From: Ilya Maximets 

This could save some costly key comparison misses, especially when
there are many expired connections waiting for the sweeper to
evict them.

Signed-off-by: Ilya Maximets 
Signed-off-by: Paolo Valerio 
---
 lib/conntrack.c |7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/lib/conntrack.c b/lib/conntrack.c
index 468450a89..13c5ab628 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -586,14 +586,17 @@ conn_key_lookup(struct conntrack *ct, const struct 
conn_key *key,
 bool found = false;
 
 CMAP_FOR_EACH_WITH_HASH (conn, cm_node, hash, &ct->conns) {
-if (!conn_key_cmp(&conn->key, key) && !conn_expired(conn, now)) {
+if (conn_expired(conn, now)) {
+continue;
+}
+if (!conn_key_cmp(&conn->key, key)) {
 found = true;
 if (reply) {
 *reply = false;
 }
 break;
 }
-if (!conn_key_cmp(&conn->rev_key, key) && !conn_expired(conn, now)) {
+if (!conn_key_cmp(&conn->rev_key, key)) {
 found = true;
 if (reply) {
 *reply = true;

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] [PATCH v7 4/5] conntrack: Use an atomic conn expiration value

2022-07-11 Thread Paolo Valerio
From: Gaetan Rivet 

A lock is taken during conn_lookup() to check whether a connection is
expired before returning it. This lock can have some contention.

Even though this lock ensures a consistent sequence of writes, it does
not imply a specific order. A ct_clean thread taking the lock first
could read a value that would be updated immediately after by a PMD
waiting on the same lock, just as well as the inverse order.

As such, the expiration time can be stale anytime it is read. In this
context, using an atomic will ensure the same guarantees for either
writes or reads, i.e. writes are consistent and reads are not undefined
behaviour. Reading an atomic is however less costly than taking and
releasing a lock.

Signed-off-by: Gaetan Rivet 
Signed-off-by: Paolo Valerio 
---
v6:
- A couple of hunks slipped away from the stg refresh before sending
  v5.
---
 lib/conntrack-private.h |2 +-
 lib/conntrack-tp.c  |2 +-
 lib/conntrack.c |   27 +++
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/lib/conntrack-private.h b/lib/conntrack-private.h
index 67882840f..fae8b3a9b 100644
--- a/lib/conntrack-private.h
+++ b/lib/conntrack-private.h
@@ -136,7 +136,7 @@ struct conn {
 /* Mutable data. */
 struct ovs_mutex lock; /* Guards all mutable fields. */
 ovs_u128 label;
-long long expiration;
+atomic_llong expiration;
 uint32_t mark;
 int seq_skew;
 
diff --git a/lib/conntrack-tp.c b/lib/conntrack-tp.c
index 7b8f9007b..89cb2704a 100644
--- a/lib/conntrack-tp.c
+++ b/lib/conntrack-tp.c
@@ -255,7 +255,7 @@ conn_update_expiration(struct conntrack *ct, struct conn 
*conn,
 "val=%u sec.",
 ct_timeout_str[tm], conn->key.zone, conn->tp_id, val);
 
-conn->expiration = now + val * 1000;
+atomic_store_relaxed(&conn->expiration, now + val * 1000);
 }
 
 void
diff --git a/lib/conntrack.c b/lib/conntrack.c
index 0b329bacb..468450a89 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -100,6 +100,7 @@ static enum ct_update_res conn_update(struct conntrack *ct, 
struct conn *conn,
   struct dp_packet *pkt,
   struct conn_lookup_ctx *ctx,
   long long now);
+static long long int conn_expiration(const struct conn *);
 static bool conn_expired(struct conn *, long long now);
 static void conn_expire_push_front(struct conntrack *ct, struct conn *conn);
 static void set_mark(struct dp_packet *, struct conn *,
@@ -515,9 +516,7 @@ conn_clean(struct conntrack *ct, struct conn *conn)
 static void
 conn_force_expire(struct conn *conn)
 {
-ovs_mutex_lock(&conn->lock);
-conn->expiration = 0;
-ovs_mutex_unlock(&conn->lock);
+atomic_store_relaxed(&conn->expiration, 0);
 }
 
 /* Destroys the connection tracker 'ct' and frees all the allocated memory.
@@ -972,13 +971,10 @@ un_nat_packet(struct dp_packet *pkt, const struct conn 
*conn,
 static void
 conn_seq_skew_set(struct conntrack *ct, const struct conn *conn_in,
   long long now, int seq_skew, bool seq_skew_dir)
-OVS_NO_THREAD_SAFETY_ANALYSIS
 {
 struct conn *conn;
-ovs_mutex_unlock(&conn_in->lock);
-conn_lookup(ct, &conn_in->key, now, &conn, NULL);
-ovs_mutex_lock(&conn_in->lock);
 
+conn_lookup(ct, &conn_in->key, now, &conn, NULL);
 if (conn && seq_skew) {
 conn->seq_skew = seq_skew;
 conn->seq_skew_dir = seq_skew_dir;
@@ -2507,14 +2503,21 @@ conn_expire_push_front(struct conntrack *ct, struct 
conn *conn)
 rculist_push_front(&ct->exp_lists[curr], &conn->node);
 }
 
+static long long int
+conn_expiration(const struct conn *conn)
+{
+long long int expiration;
+
+atomic_read_relaxed(&CONST_CAST(struct conn *, conn)->expiration,
+&expiration);
+return expiration;
+}
+
 static bool
 conn_expired(struct conn *conn, long long now)
 {
 if (conn->conn_type == CT_CONN_TYPE_DEFAULT) {
-ovs_mutex_lock(&conn->lock);
-bool expired = now >= conn->expiration ? true : false;
-ovs_mutex_unlock(&conn->lock);
-return expired;
+return now >= conn_expiration(conn);
 }
 return false;
 }
@@ -2647,7 +2650,7 @@ conn_to_ct_dpif_entry(const struct conn *conn, struct 
ct_dpif_entry *entry,
 entry->mark = conn->mark;
 memcpy(&entry->labels, &conn->label, sizeof entry->labels);
 
-long long expiration = conn->expiration - now;
+long long expiration = conn_expiration(conn) - now;
 
 struct ct_l4_proto *class = l4_protos[conn->key.nw_proto];
 if (class->conn_get_protoinfo) {

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] [PATCH v7 3/5] conntrack: Replace timeout based expiration lists with rculists.

2022-07-11 Thread Paolo Valerio
From: Gaetan Rivet 

This patch aims to replace the expiration lists as, due to the way
they are used, besides being a source of contention, they have a known
issue when used with non-default policies for different zones that
could lead to retaining expired connections potentially for a long
time.

This patch replaces them with an array of rculist used to distribute
all the newly created connections in order to, during the sweeping
phase, scan them without locking, and evict the expired connections
only locking during the actual removal.  This allows to reduce the
contention introduced by the pushback performed at every packet
update, also solving the issue related to zones and timeout policies.

Signed-off-by: Gaetan Rivet 
Co-authored-by: Paolo Valerio 
Signed-off-by: Paolo Valerio 
---
v7:
- renamed EXP_LISTS -> N_EXP_LISTS; ct_next_list -> next_list (turned
  to non atomic as it is always under ct_lock)
- folded ct_next_list() into conn_expire_push_front() and moved in
  conntrack.c
- changed return ct_sweep() return type to size_t
- removed/changed some comments that no longer apply
- removed redundant MAX() as it always returned next_wake as it was
  supposed to be
- moved zone_limit_lookup() out of ct_lock (wenxu's suggestion)

v6:
- minor function renaming
- removed conn->lock in conn_clean() as this was unneeded.
- minor commit message rephrase
---
 lib/conntrack-private.h |   67 ---
 lib/conntrack-tp.c  |   44 --
 lib/conntrack.c |  216 ++-
 3 files changed, 144 insertions(+), 183 deletions(-)

diff --git a/lib/conntrack-private.h b/lib/conntrack-private.h
index 34c688821..67882840f 100644
--- a/lib/conntrack-private.h
+++ b/lib/conntrack-private.h
@@ -29,6 +29,7 @@
 #include "openvswitch/list.h"
 #include "openvswitch/types.h"
 #include "packets.h"
+#include "rculist.h"
 #include "unaligned.h"
 #include "dp-packet.h"
 
@@ -86,6 +87,31 @@ struct alg_exp_node {
 bool nat_rpl_dst;
 };
 
+/* Timeouts: all the possible timeout states passed to update_expiration()
+ * are listed here. The name will be prefix by CT_TM_ and the value is in
+ * milliseconds */
+#define CT_TIMEOUTS \
+CT_TIMEOUT(TCP_FIRST_PACKET) \
+CT_TIMEOUT(TCP_OPENING) \
+CT_TIMEOUT(TCP_ESTABLISHED) \
+CT_TIMEOUT(TCP_CLOSING) \
+CT_TIMEOUT(TCP_FIN_WAIT) \
+CT_TIMEOUT(TCP_CLOSED) \
+CT_TIMEOUT(OTHER_FIRST) \
+CT_TIMEOUT(OTHER_MULTIPLE) \
+CT_TIMEOUT(OTHER_BIDIR) \
+CT_TIMEOUT(ICMP_FIRST) \
+CT_TIMEOUT(ICMP_REPLY)
+
+enum ct_timeout {
+#define CT_TIMEOUT(NAME) CT_TM_##NAME,
+CT_TIMEOUTS
+#undef CT_TIMEOUT
+N_CT_TM
+};
+
+#define N_EXP_LISTS 100
+
 enum OVS_PACKED_ENUM ct_conn_type {
 CT_CONN_TYPE_DEFAULT,
 CT_CONN_TYPE_UN_NAT,
@@ -96,11 +122,16 @@ struct conn {
 struct conn_key key;
 struct conn_key rev_key;
 struct conn_key parent_key; /* Only used for orig_tuple support. */
-struct ovs_list exp_node;
 struct cmap_node cm_node;
 uint16_t nat_action;
 char *alg;
 struct conn *nat_conn; /* The NAT 'conn' context, if there is one. */
+atomic_flag reclaimed; /* False during the lifetime of the connection,
+* True as soon as a thread has started freeing
+* its memory. */
+
+/* Inserted once by a PMD, then managed by the 'ct_clean' thread. */
+struct rculist node;
 
 /* Mutable data. */
 struct ovs_mutex lock; /* Guards all mutable fields. */
@@ -116,7 +147,6 @@ struct conn {
 /* Mutable data. */
 bool seq_skew_dir; /* TCP sequence skew direction due to NATTing of FTP
 * control messages; true if reply direction. */
-bool cleaned; /* True if cleaned from expiry lists. */
 
 /* Immutable data. */
 bool alg_related; /* True if alg data connection. */
@@ -132,22 +162,6 @@ enum ct_update_res {
 CT_UPDATE_VALID_NEW,
 };
 
-/* Timeouts: all the possible timeout states passed to update_expiration()
- * are listed here. The name will be prefix by CT_TM_ and the value is in
- * milliseconds */
-#define CT_TIMEOUTS \
-CT_TIMEOUT(TCP_FIRST_PACKET) \
-CT_TIMEOUT(TCP_OPENING) \
-CT_TIMEOUT(TCP_ESTABLISHED) \
-CT_TIMEOUT(TCP_CLOSING) \
-CT_TIMEOUT(TCP_FIN_WAIT) \
-CT_TIMEOUT(TCP_CLOSED) \
-CT_TIMEOUT(OTHER_FIRST) \
-CT_TIMEOUT(OTHER_MULTIPLE) \
-CT_TIMEOUT(OTHER_BIDIR) \
-CT_TIMEOUT(ICMP_FIRST) \
-CT_TIMEOUT(ICMP_REPLY)
-
 #define NAT_ACTION_SNAT_ALL (NAT_ACTION_SRC | NAT_ACTION_SRC_PORT)
 #define NAT_ACTION_DNAT_ALL (NAT_ACTION_DST | NAT_ACTION_DST_PORT)
 
@@ -181,22 +195,19 @@ enum ct_ephemeral_range {
 #define FOR_EACH_PORT_IN_RANGE(curr, min, max) \
 FOR_EACH_PORT_IN_RANGE__(curr, min, max, OVS_JOIN(idx, __COUNTER__))
 
-enum ct_timeout {
-#define CT_TIMEOUT(NAME) CT_TM_##NAME,
-CT_TIMEOUTS
-#undef CT_TIMEOUT
-N

[ovs-dev] [PATCH v7 2/5] conntrack-tp: Use a cmap to store timeout policies

2022-07-11 Thread Paolo Valerio
From: Gaetan Rivet 

Multiple lookups are done on the stored timeout policies, each time
taking the global 'ct_lock'. This is usually not necessary, and it
should be acceptable for policy updates to be slightly delayed (by one
RCU sync at most). Using a CMAP avoids repeatedly taking and releasing
the lock in the connection insertion path.
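
A small, self-contained sketch of why the slight staleness is acceptable
(illustrative names, not the OVS API; a C11 atomic pointer stands in for
the RCU-protected cmap entry): readers load the published policy without
any lock and may briefly see the previous version after an update, but
never a torn or freed value as long as reclamation is deferred.

#include <stdatomic.h>
#include <stdint.h>

struct toy_policy {
    uint32_t id;
    unsigned int tcp_est_timeout_s;
};

static _Atomic(struct toy_policy *) current_policy;

/* Datapath side: no lock, just an atomic pointer load. */
const struct toy_policy *
toy_policy_get(void)
{
    return atomic_load_explicit(&current_policy, memory_order_acquire);
}

/* Control side: build the new policy, then publish it.  The returned old
 * pointer must not be freed until readers are done with it (OVS defers the
 * free with ovsrcu_postpone()). */
struct toy_policy *
toy_policy_set(struct toy_policy *new_policy)
{
    return atomic_exchange_explicit(&current_policy, new_policy,
                                    memory_order_acq_rel);
}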

Signed-off-by: Gaetan Rivet 
Reviewed-by: Eli Britstein 
Acked-by: William Tu 
Signed-off-by: Paolo Valerio 
---
 lib/conntrack-private.h |2 +-
 lib/conntrack-tp.c  |   54 ++-
 lib/conntrack.c |9 +---
 lib/conntrack.h |2 +-
 4 files changed, 38 insertions(+), 29 deletions(-)

diff --git a/lib/conntrack-private.h b/lib/conntrack-private.h
index d9461b811..34c688821 100644
--- a/lib/conntrack-private.h
+++ b/lib/conntrack-private.h
@@ -193,7 +193,7 @@ struct conntrack {
 struct cmap conns OVS_GUARDED;
 struct ovs_list exp_lists[N_CT_TM] OVS_GUARDED;
 struct cmap zone_limits OVS_GUARDED;
-struct hmap timeout_policies OVS_GUARDED;
+struct cmap timeout_policies OVS_GUARDED;
 uint32_t hash_basis; /* Salt for hashing a connection key. */
 pthread_t clean_thread; /* Periodically cleans up connection tracker. */
 struct latch clean_thread_exit; /* To destroy the 'clean_thread'. */
diff --git a/lib/conntrack-tp.c b/lib/conntrack-tp.c
index a586d3a8d..c2245038b 100644
--- a/lib/conntrack-tp.c
+++ b/lib/conntrack-tp.c
@@ -47,14 +47,15 @@ static unsigned int ct_dpif_netdev_tp_def[] = {
 };
 
 static struct timeout_policy *
-timeout_policy_lookup(struct conntrack *ct, int32_t tp_id)
+timeout_policy_lookup_protected(struct conntrack *ct, int32_t tp_id)
 OVS_REQUIRES(ct->ct_lock)
 {
 struct timeout_policy *tp;
 uint32_t hash;
 
 hash = hash_int(tp_id, ct->hash_basis);
-HMAP_FOR_EACH_IN_BUCKET (tp, node, hash, &ct->timeout_policies) {
+CMAP_FOR_EACH_WITH_HASH_PROTECTED (tp, node, hash,
+   &ct->timeout_policies) {
 if (tp->policy.id == tp_id) {
 return tp;
 }
@@ -62,20 +63,25 @@ timeout_policy_lookup(struct conntrack *ct, int32_t tp_id)
 return NULL;
 }
 
-struct timeout_policy *
-timeout_policy_get(struct conntrack *ct, int32_t tp_id)
+static struct timeout_policy *
+timeout_policy_lookup(struct conntrack *ct, int32_t tp_id)
 {
 struct timeout_policy *tp;
+uint32_t hash;
 
-ovs_mutex_lock(&ct->ct_lock);
-tp = timeout_policy_lookup(ct, tp_id);
-if (!tp) {
-ovs_mutex_unlock(&ct->ct_lock);
-return NULL;
+hash = hash_int(tp_id, ct->hash_basis);
+CMAP_FOR_EACH_WITH_HASH (tp, node, hash, &ct->timeout_policies) {
+if (tp->policy.id == tp_id) {
+return tp;
+}
 }
+return NULL;
+}
 
-ovs_mutex_unlock(&ct->ct_lock);
-return tp;
+struct timeout_policy *
+timeout_policy_get(struct conntrack *ct, int32_t tp_id)
+{
+return timeout_policy_lookup(ct, tp_id);
 }
 
 static void
@@ -125,27 +131,30 @@ timeout_policy_create(struct conntrack *ct,
 init_default_tp(tp, tp_id);
 update_existing_tp(tp, new_tp);
 hash = hash_int(tp_id, ct->hash_basis);
-hmap_insert(&ct->timeout_policies, &tp->node, hash);
+cmap_insert(&ct->timeout_policies, &tp->node, hash);
 }
 
 static void
 timeout_policy_clean(struct conntrack *ct, struct timeout_policy *tp)
 OVS_REQUIRES(ct->ct_lock)
 {
-hmap_remove(&ct->timeout_policies, &tp->node);
-free(tp);
+uint32_t hash = hash_int(tp->policy.id, ct->hash_basis);
+cmap_remove(&ct->timeout_policies, &tp->node, hash);
+ovsrcu_postpone(free, tp);
 }
 
 static int
-timeout_policy_delete__(struct conntrack *ct, uint32_t tp_id)
+timeout_policy_delete__(struct conntrack *ct, uint32_t tp_id,
+bool warn_on_error)
 OVS_REQUIRES(ct->ct_lock)
 {
+struct timeout_policy *tp;
 int err = 0;
-struct timeout_policy *tp = timeout_policy_lookup(ct, tp_id);
 
+tp = timeout_policy_lookup_protected(ct, tp_id);
 if (tp) {
 timeout_policy_clean(ct, tp);
-} else {
+} else if (warn_on_error) {
 VLOG_WARN_RL(&rl, "Failed to delete a non-existent timeout "
  "policy: id=%d", tp_id);
 err = ENOENT;
@@ -159,7 +168,7 @@ timeout_policy_delete(struct conntrack *ct, uint32_t tp_id)
 int err;
 
 ovs_mutex_lock(&ct->ct_lock);
-err = timeout_policy_delete__(ct, tp_id);
+err = timeout_policy_delete__(ct, tp_id, true);
 ovs_mutex_unlock(&ct->ct_lock);
 return err;
 }
@@ -170,7 +179,7 @@ timeout_policy_init(struct conntrack *ct)
 {
 struct timeout_policy tp;
 
-hmap_init(&ct->timeout_policies);
+cmap_init(&ct->timeout_policies);
 
 /* Create default timeout policy. 

[ovs-dev] [PATCH v7 1/5] conntrack: Use a cmap to store zone limits

2022-07-11 Thread Paolo Valerio
From: Gaetan Rivet 

Change the data structure from hmap to cmap for zone limits.
As they are shared amongst multiple conntrack users, multiple
readers want to check the current zone limit state before progressing in
their processing. Using a CMAP allows doing lookups without taking the
global 'ct_lock', thus reducing contention.
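
A small, self-contained sketch of the reader side (names are illustrative,
not the OVS API; C11 atomics stand in for OVS's atomic_count): the per-zone
count is an atomic, so datapath threads can check it against the limit
without taking any global lock, while structural changes to the map stay
serialized on the control path.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct toy_zone_limit {
    int32_t zone;
    uint32_t limit;            /* 0 means no limit configured. */
    atomic_uint count;         /* Connections currently in this zone. */
};

/* Datapath side: no global lock, just an atomic read of the counter.  In
 * the real code the entry itself is found through the RCU-friendly cmap. */
bool
toy_zone_limit_would_exceed(struct toy_zone_limit *zl)
{
    if (!zl || !zl->limit) {
        return false;
    }
    return atomic_load_explicit(&zl->count, memory_order_relaxed) >= zl->limit;
}

/* Accounting when a connection is committed or cleaned up. */
void
toy_zone_limit_inc(struct toy_zone_limit *zl)
{
    atomic_fetch_add_explicit(&zl->count, 1, memory_order_relaxed);
}

void
toy_zone_limit_dec(struct toy_zone_limit *zl)
{
    atomic_fetch_sub_explicit(&zl->count, 1, memory_order_relaxed);
}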

Signed-off-by: Gaetan Rivet 
Reviewed-by: Eli Britstein 
Signed-off-by: Paolo Valerio 
---
 lib/conntrack-private.h |2 +
 lib/conntrack.c |   70 ---
 lib/conntrack.h |2 +
 lib/dpif-netdev.c   |5 ++-
 4 files changed, 53 insertions(+), 26 deletions(-)

diff --git a/lib/conntrack-private.h b/lib/conntrack-private.h
index dfdf4e676..d9461b811 100644
--- a/lib/conntrack-private.h
+++ b/lib/conntrack-private.h
@@ -192,7 +192,7 @@ struct conntrack {
 struct ovs_mutex ct_lock; /* Protects 2 following fields. */
 struct cmap conns OVS_GUARDED;
 struct ovs_list exp_lists[N_CT_TM] OVS_GUARDED;
-struct hmap zone_limits OVS_GUARDED;
+struct cmap zone_limits OVS_GUARDED;
 struct hmap timeout_policies OVS_GUARDED;
 uint32_t hash_basis; /* Salt for hashing a connection key. */
 pthread_t clean_thread; /* Periodically cleans up connection tracker. */
diff --git a/lib/conntrack.c b/lib/conntrack.c
index faa2d6ab7..6df1142b9 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -81,7 +81,7 @@ enum ct_alg_ctl_type {
 };
 
 struct zone_limit {
-struct hmap_node node;
+struct cmap_node node;
 struct conntrack_zone_limit czl;
 };
 
@@ -311,7 +311,7 @@ conntrack_init(void)
 for (unsigned i = 0; i < ARRAY_SIZE(ct->exp_lists); i++) {
 ovs_list_init(&ct->exp_lists[i]);
 }
-hmap_init(&ct->zone_limits);
+cmap_init(&ct->zone_limits);
 ct->zone_limit_seq = 0;
 timeout_policy_init(ct);
 ovs_mutex_unlock(&ct->ct_lock);
@@ -346,12 +346,25 @@ zone_key_hash(int32_t zone, uint32_t basis)
 }
 
 static struct zone_limit *
-zone_limit_lookup(struct conntrack *ct, int32_t zone)
+zone_limit_lookup_protected(struct conntrack *ct, int32_t zone)
 OVS_REQUIRES(ct->ct_lock)
 {
 uint32_t hash = zone_key_hash(zone, ct->hash_basis);
 struct zone_limit *zl;
-HMAP_FOR_EACH_IN_BUCKET (zl, node, hash, &ct->zone_limits) {
+CMAP_FOR_EACH_WITH_HASH_PROTECTED (zl, node, hash, &ct->zone_limits) {
+if (zl->czl.zone == zone) {
+return zl;
+}
+}
+return NULL;
+}
+
+static struct zone_limit *
+zone_limit_lookup(struct conntrack *ct, int32_t zone)
+{
+uint32_t hash = zone_key_hash(zone, ct->hash_basis);
+struct zone_limit *zl;
+CMAP_FOR_EACH_WITH_HASH (zl, node, hash, &ct->zone_limits) {
 if (zl->czl.zone == zone) {
 return zl;
 }
@@ -361,7 +374,6 @@ zone_limit_lookup(struct conntrack *ct, int32_t zone)
 
 static struct zone_limit *
 zone_limit_lookup_or_default(struct conntrack *ct, int32_t zone)
-OVS_REQUIRES(ct->ct_lock)
 {
 struct zone_limit *zl = zone_limit_lookup(ct, zone);
 return zl ? zl : zone_limit_lookup(ct, DEFAULT_ZONE);
@@ -370,13 +382,16 @@ zone_limit_lookup_or_default(struct conntrack *ct, 
int32_t zone)
 struct conntrack_zone_limit
 zone_limit_get(struct conntrack *ct, int32_t zone)
 {
-ovs_mutex_lock(&ct->ct_lock);
-struct conntrack_zone_limit czl = {DEFAULT_ZONE, 0, 0, 0};
+struct conntrack_zone_limit czl = {
+.zone = DEFAULT_ZONE,
+.limit = 0,
+.count = ATOMIC_COUNT_INIT(0),
+.zone_limit_seq = 0,
+};
 struct zone_limit *zl = zone_limit_lookup_or_default(ct, zone);
 if (zl) {
 czl = zl->czl;
 }
-ovs_mutex_unlock(&ct->ct_lock);
 return czl;
 }
 
@@ -384,13 +399,19 @@ static int
 zone_limit_create(struct conntrack *ct, int32_t zone, uint32_t limit)
 OVS_REQUIRES(ct->ct_lock)
 {
+struct zone_limit *zl = zone_limit_lookup_protected(ct, zone);
+
+if (zl) {
+return 0;
+}
+
 if (zone >= DEFAULT_ZONE && zone <= MAX_ZONE) {
-struct zone_limit *zl = xzalloc(sizeof *zl);
+zl = xzalloc(sizeof *zl);
 zl->czl.limit = limit;
 zl->czl.zone = zone;
 zl->czl.zone_limit_seq = ct->zone_limit_seq++;
 uint32_t hash = zone_key_hash(zone, ct->hash_basis);
-hmap_insert(&ct->zone_limits, &zl->node, hash);
+cmap_insert(&ct->zone_limits, &zl->node, hash);
 return 0;
 } else {
 return EINVAL;
@@ -401,13 +422,14 @@ int
 zone_limit_update(struct conntrack *ct, int32_t zone, uint32_t limit)
 {
 int err = 0;
-ovs_mutex_lock(&ct->ct_lock);
 struct zone_limit *zl = zone_limit_lookup(ct, zone);
 if (zl) {
 zl->czl.limit = limit;
 VLOG_INFO("Changed zone limit of %u for zone %d", limit, zone);
 

[ovs-dev] [PATCH v7 0/5] conntrack: Improve multithread scalability.

2022-07-11 Thread Paolo Valerio
This series aims to address the issues present in the previous versions.
The end result is a different approach, using different data structures,
that solves the multiple issues observed in v4 as well as the problems
that affected the baseline.

The tests (similar to the ones previously performed by Robin [0])
show performance comparable to v4, both in terms of cps
and throughput.

[0] https://mail.openvswitch.org/pipermail/ovs-dev/2022-June/394711.html

v7:

Patch 3:
- some minor cleanups (comments, variable renames) and code moved
  around to improve readability a bit.
- next_list made non-atomic, as keeping it atomic was not needed
  (it is always accessed under ct_lock)
- moved zone_limit_lookup() out of ct_lock (as per wenxu's feedback)

v6:

- removed lock on a connection (unneeded)
- minor rename and added back two hunks that had slipped away from
  the refresh

Gaetan Rivet (4):
  conntrack: Use a cmap to store zone limits
  conntrack-tp: Use a cmap to store timeout policies
  conntrack: Replace timeout based expiration lists with rculists.
  conntrack: Use an atomic conn expiration value

Ilya Maximets (1):
  conntrack: Check for expiration before comparing the keys during the 
lookup


 lib/conntrack-private.h |  73 +
 lib/conntrack-tp.c  |  98 
 lib/conntrack.c | 319 ++--
 lib/conntrack.h |   4 +-
 lib/dpif-netdev.c   |   5 +-
 5 files changed, 251 insertions(+), 248 deletions(-)



Re: [ovs-dev] [PATCH v6 3/5] conntrack: Replace timeout based expiration lists with rculists.

2022-07-10 Thread Paolo Valerio
Paolo Valerio  writes:

> From: Gaetan Rivet 
>
> This patch aims to replace the expiration lists as, due to the way
> they are used, besides being a source of contention, they have a known
> issue when used with non-default policies for different zones that
> could lead to retaining expired connections potentially for a long
> time.
>
> This patch replaces them with an array of rculist used to distribute
> all the newly created connections in order to, during the sweeping
> phase, scan them without locking, and evict the expired connections
> only locking during the actual removal.  This allows to reduce the
> contention introduced by the pushback performed at every packet
> update, also solving the issue related to zones and timeout policies.
>
> Signed-off-by: Gaetan Rivet 
> Co-authored-by: Paolo Valerio 
> Signed-off-by: Paolo Valerio 
> ---
> v6:
> - minor function renaming
> - removed conn->lock in conn_clean() as this was unneeded.
> - minor commit message rephrase
> ---

I have a small incremental diff that changes minor things:

- changes/removes some comments
- next_wake changed from 30 to 20
- removes MAX() as next_wake is always used

I'm planning to send it by the end of Monday so that any feedback
that comes in can be folded into a single re-spin.

diff --git a/lib/conntrack.c b/lib/conntrack.c
index 819c356c1..66a44da2e 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -1552,11 +1552,7 @@ set_label(struct dp_packet *pkt, struct conn *conn,
 }
 
 
-/* Delete the expired connections from 'ctb', up to 'limit'. Returns the
- * earliest expiration time among the remaining connections in 'ctb'.  Returns
- * LLONG_MAX if 'ctb' is empty.  The return value might be smaller than 'now',
- * if 'limit' is reached */
-static long long
+static size_t
 ct_sweep(struct conntrack *ct, struct rculist *list, long long now)
 OVS_NO_THREAD_SAFETY_ANALYSIS
 {
@@ -1574,16 +1570,15 @@ ct_sweep(struct conntrack *ct, struct rculist *list, 
long long now)
 return count;
 }
 
-/* Cleans up old connection entries from 'ct'.  Returns the time when the
- * next expiration might happen.  The return value might be smaller than
- * 'now', meaning that an internal limit has been reached, and some expired
- * connections have not been deleted. */
+/* Cleans up old connection entries from 'ct'.  Returns the time
+ * when the next wake will happen. The return value might be zero,
+ * meaning that an internal limit has been reached. */
 static long long
 conntrack_clean(struct conntrack *ct, long long now)
 {
-long long next_wakeup = now + 30 * 1000;
-unsigned int n_conn_limit, i, count = 0;
-size_t clean_end;
+long long next_wakeup = now + 20 * 1000;
+unsigned int n_conn_limit, i;
+size_t clean_end, count = 0;
 
 atomic_read_relaxed(&ct->n_conn_limit, &n_conn_limit);
 clean_end = n_conn_limit / 64;
@@ -1599,7 +1594,7 @@ conntrack_clean(struct conntrack *ct, long long now)
 
 ct->next_sweep = (i < EXP_LISTS) ? i : 0;
 
-VLOG_DBG("conntrack cleanup %"PRIu32" entries in %lld msec", count,
+VLOG_DBG("conntrack cleanup %"PRIuSIZE" entries in %lld msec", count,
  time_msec() - now);
 
 return next_wakeup;
@@ -1608,24 +1603,8 @@ conntrack_clean(struct conntrack *ct, long long now)
 /* Cleanup:
  *
  * We must call conntrack_clean() periodically.  conntrack_clean() return
- * value gives an hint on when the next cleanup must be done (either because
- * there is an actual connection that expires, or because a new connection
- * might be created with the minimum timeout).
- *
- * The logic below has two goals:
- *
- * - We want to reduce the number of wakeups and batch connection cleanup
- *   when the load is not very high.  CT_CLEAN_INTERVAL ensures that if we
- *   are coping with the current cleanup tasks, then we wait at least
- *   5 seconds to do further cleanup.
- *
- * - We don't want to keep the map locked too long, as we might prevent
- *   traffic from flowing.  CT_CLEAN_MIN_INTERVAL ensures that if cleanup is
- *   behind, there is at least some 200ms blocks of time when the map will be
- *   left alone, so the datapath can operate unhindered.
- */
-#define CT_CLEAN_INTERVAL 5000 /* 5 seconds */
-#define CT_CLEAN_MIN_INTERVAL 200  /* 0.2 seconds */
+ * value gives an hint on when the next cleanup must be done. */
+#define CT_CLEAN_MIN_INTERVAL_MS 200
 
 static void *
 clean_thread_main(void *f_)
@@ -1639,9 +1618,9 @@ clean_thread_main(void *f_)
 next_wake = conntrack_clean(ct, now);
 
 if (next_wake < now) {
-poll_timer_wait_until(now + CT_CLEAN_MIN_INTERVAL);
+poll_timer_wait_until(now + CT_CLEAN_MIN_INTERVAL_MS);
 } else {
-poll_timer_wait_until(MAX(next_wake, now 

Re: [ovs-dev] User space connection tracking benchmarks

2022-07-08 Thread Paolo Valerio
Aaron Conole  writes:

> Paolo Valerio  writes:
>
>> Paolo Valerio  writes:
>>
>>> Ilya Maximets  writes:
>>>
>>>> On 6/20/22 23:57, Paolo Valerio wrote:
>>>>> Ilya Maximets  writes:
>>>>> 
>>>>>> On 6/7/22 11:39, Robin Jarry wrote:
>>>>>>> Paolo Valerio, Jun 05, 2022 at 19:37:
>>>>>>>> Just a note that may be useful.
>>>>>>>> After some tests, I noticed that establishing e.g. two TCP connections,
>>>>>>>> and leaving the first one idle after 3whs, once the second connection
>>>>>>>> expires (after moving to TIME_WAIT as a result of termination), the
>>>>>>>> second doesn't get evicted until any event gets scheduled for the 
>>>>>>>> first.
>>>>>>>>
>>>>>>>> ovs-appctl dpctl/dump-conntrack -s
>>>>>>>> tcp,orig=(src=10.1.1.1,dst=10.1.1.2,sport=9090,dport=8080),reply=(src=10.1.1.2,dst=10.1.1.1,sport=8080,dport=9090),zone=1,timeout=84576,protoinfo=(state=ESTABLISHED)
>>>>>>>> tcp,orig=(src=10.1.1.1,dst=10.1.1.2,sport=9091,dport=8080),reply=(src=10.1.1.2,dst=10.1.1.1,sport=8080,dport=9091),zone=1,timeout=0,protoinfo=(state=TIME_WAIT)
>>>>>>>>
>>>>>>>> This may be somewhat related to your results as during the
>>>>>>>> test, the number of connections may reach the limit so apparently 
>>>>>>>> reducing
>>>>>>>> the performances.
>>>>>>>
>>>>>>> Indeed, there was an issue in my test procedure. Due to the way T-Rex
>>>>>>> generates connections, it is easy to fill the conntrack table after
>>>>>>> a few iterations, making the test results inconsistent.
>>>>>>>
>>>>>>> Also, the flows which I had configured were not correct. There was an
>>>>>>> extraneous action=NORMAL flow at the end. When the conntrack table is
>>>>>>> full and a new packet cannot be tracked, it is marked as +trk+inv and
>>>>>>> not dropped. This behaviour is specific to the userspace datapath. The
>>>>>>> linux kernel datapath seems to drop the packet when it cannot be added
>>>>>>> to connection tracking.
>>>>>>>
>>>>>>> Gaëtan's series (v4) seems less resilient to the conntrack table being
>>>>>>> full, especially when there is more than one PMD core.
>>>>>>>
>>>>>>> I have changed the t-rex script to allow running arbitrary commands in
>>>>>>> between traffic iterations. This is leveraged to flush the conntrack
>>>>>>> table and run each iteration in the same conditions.
>>>>>>>
>>>>>>> https://github.com/cisco-system-traffic-generator/trex-core/blob/v2.98/scripts/cps_ndr.py
>>>>>>>
>>>>>>> To avoid filling the conntrack table, the max size was increased to 50M.
>>>>>>> The DUT configuration can be summarized as the following:
>>>>>>>
>>>>>>> ovs-vsctl set open_vswitch . other_config:dpdk-init=true
>>>>>>> ovs-vsctl set open_vswitch . other_config:pmd-cpu-mask="0x15554"
>>>>>>> ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
>>>>>>> ovs-vsctl add-port br0 pf0 -- set Interface pf0 type=dpdk \
>>>>>>> options:dpdk-devargs=:3b:00.0 options:n_rxq=4 
>>>>>>> options:n_rxq_desc=4096
>>>>>>> ovs-vsctl add-port br0 pf1 -- set Interface pf1 type=dpdk \
>>>>>>> options:dpdk-devargs=:3b:00.1 options:n_rxq=4 
>>>>>>> options:n_rxq_desc=4096
>>>>>>> ovs-appctl dpctl/ct-set-maxconns 5000
>>>>>>> ovs-ofctl add-flow br0 
>>>>>>> "table=0,priority=10,ip,ct_state=-trk,actions=ct(table=0)"
>>>>>>> ovs-ofctl add-flow br0 
>>>>>>> "table=0,priority=10,ip,ct_state=+trk+new,actions=ct(commit),NORMAL"
>>>>>>> ovs-ofctl add-flow br0 
>>>>>>> "table=0,priority=10,ip,ct_state=+trk+est,actions=NORMAL"
>>>>>>> ovs-ofctl add-flow br0 "table=0,priority=0,actions=drop"
>>>>>>>
>>>>>>> Short Lived Connections
>>>>

Re: [ovs-dev] [PATCH v6 3/5] conntrack: Replace timeout based expiration lists with rculists.

2022-07-04 Thread Paolo Valerio
wenxu   writes:

> At 2022-07-04 16:43:20, "Paolo Valerio"  wrote:
>>Hello wenxu,
>>
>>thanks for having a look at it.
>>
>>wenxu   writes:
>>
>>> Hi Paolo,
>>>
>>> There are two small question.
>>> First the ct_lock lock/unlock as below maybe also can be dropped with this
>>> patch ?
>>>
>>>ovs_mutex_lock(&ct->ct_lock);
>>> if (!conn_lookup(ct, &ctx->key, now, NULL, NULL)) {
>>> conn = conn_not_found(ct, pkt, ctx, commit, now, 
>>> nat_action_info,
>>>   helper, alg_exp, ct_alg_ctl, tp_id);
>>> }
>>> ovs_mutex_unlock(&ct->ct_lock);
>>>
>>
>>The locked lookup/insertion should be kept, as it could lead e.g. to a
>>double insertion in the case we lookup without locking.
> Yes, What I mean is narrow the region of the lock. Only the insertion
> need this lock.
>

Just to clarify what I was referring to in my previous email [1].

[1] 
http://patchwork.ozlabs.org/project/openvswitch/patch/9ae8ad243da85be4853b90eccc958600dace7726.1623786081.git.gr...@u256.net/#2728678

>>But you're right, in general, there should be room for improvement
>>because we could probably narrow the region we lock.
>>IMO, we should keep this out of this series, and maybe follow up
>>later, to avoid introducing too many changes at once.
>>



[ovs-dev] [PATCH v2] meta-flow: Document nw_proto limitation for IPv6 later frags.

2022-07-04 Thread Paolo Valerio
Signed-off-by: Paolo Valerio 
---
 lib/meta-flow.xml |9 +
 1 file changed, 9 insertions(+)

diff --git a/lib/meta-flow.xml b/lib/meta-flow.xml
index 28865f88c..a1a20366d 100644
--- a/lib/meta-flow.xml
+++ b/lib/meta-flow.xml
@@ -4101,6 +4101,15 @@ r r c c c.
 opcodes greater than 255 are treated as 0; this works adequately
 because in practice ARP and RARP only use opcodes 1 through 4.
   
+
+  
+In the case of fragmented traffic, a difference exists in the way
+the field acts for IPv4 and IPv6 later fragments. For IPv6 fragments
+with nonzero offset, nw_proto is set to the IPv6 protocol
+type for fragments (44).
+Conversely, for IPv4 later fragments, the field is set based on the
+protocol type present in the header.
+  
 
 
 



Re: [ovs-dev] [PATCH v6 3/5] conntrack: Replace timeout based expiration lists with rculists.

2022-07-04 Thread Paolo Valerio
Hello wenxu,

thanks for having a look at it.

wenxu   writes:

> Hi Paolo,
>
> There are two small question.
> First the ct_lock lock/unlock as below maybe also can be dropped with this
> patch ?
>
>ovs_mutex_lock(&ct->ct_lock);
> if (!conn_lookup(ct, &ctx->key, now, NULL, NULL)) {
> conn = conn_not_found(ct, pkt, ctx, commit, now, nat_action_info,
>   helper, alg_exp, ct_alg_ctl, tp_id);
> }
> ovs_mutex_unlock(&ct->ct_lock);
>

The locked lookup/insertion should be kept: if we looked up without
locking, it could lead, e.g., to a double insertion.
But you're right, in general there should be room for improvement,
because we could probably narrow the region we lock.
IMO, we should keep this out of this series and maybe follow up
later, to avoid introducing too many changes at once.
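
A self-contained toy version of that invariant (names are illustrative, not
the OVS ones): the lookup and the insertion form a single critical section,
which is what prevents two threads racing on the same key from both missing
the lookup and both inserting.

#include <pthread.h>
#include <stdbool.h>
#include <string.h>

#define TOY_MAX 64

struct toy_table {
    pthread_mutex_t lock;
    const char *keys[TOY_MAX];
    size_t n;
};

static bool
toy_lookup(const struct toy_table *t, const char *key)
{
    for (size_t i = 0; i < t->n; i++) {
        if (!strcmp(t->keys[i], key)) {
            return true;
        }
    }
    return false;
}

/* Same shape as the conn_lookup()/conn_not_found() sequence quoted above:
 * check-then-insert as one critical section.  Doing toy_lookup() before
 * taking the lock would reopen the double-insertion race.  Returns true if
 * the key was inserted. */
bool
toy_commit(struct toy_table *t, const char *key)
{
    bool inserted = false;

    pthread_mutex_lock(&t->lock);
    if (!toy_lookup(t, key) && t->n < TOY_MAX) {
        t->keys[t->n++] = key;
        inserted = true;
    }
    pthread_mutex_unlock(&t->lock);
    return inserted;
}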

>
>
> Second one  the zone_limit_lookup can be outof the conn_clean__ as below?
>
> static void
> conn_clean__(struct conntrack *ct, struct conn *conn)
> OVS_REQUIRES(ct->ct_lock)
> {
> if (conn->alg) {
> expectation_clean(ct, &conn->key);
> }
>
> uint32_t hash = conn_key_hash(&conn->key, ct->hash_basis);
> cmap_remove(&ct->conns, &conn->cm_node, hash);
>
> struct zone_limit *zl = zone_limit_lookup(ct, conn->admit_zone);
> if (zl && zl->czl.zone_limit_seq == conn->zone_limit_seq) {
> atomic_count_dec(&zl->czl.count);
> }
> }
>

Do you mean it doesn't need to be under the ct_lock?
Although it is not a problem, it does look like keeping it under the
ct_lock could be avoided.
I guess we could easily follow up on this. I'm not sure whether it's
worth a respin now (given the code freeze) or, if this series makes it
in, a follow-up afterward.
I'd say let's give Ilya and the others a chance to have a look at this
and hear what they think.
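
For reference, v7 of the series does move zone_limit_lookup() out of
ct_lock (see its changelog).  A rough, condensed sketch of the shape such a
change can take, based only on the conn_clean__() snippet quoted above;
other cleanup steps (e.g. the nat_conn handling) are omitted and the actual
v7 code may differ:

static void
conn_clean__(struct conntrack *ct, struct conn *conn)
    OVS_REQUIRES(ct->ct_lock)
{
    if (conn->alg) {
        expectation_clean(ct, &conn->key);
    }

    uint32_t hash = conn_key_hash(&conn->key, ct->hash_basis);
    cmap_remove(&ct->conns, &conn->cm_node, hash);
}

static void
conn_clean(struct conntrack *ct, struct conn *conn)
{
    ovs_mutex_lock(&ct->ct_lock);
    conn_clean__(ct, conn);
    ovs_mutex_unlock(&ct->ct_lock);

    /* Zone limit bookkeeping no longer needs the global lock: the lookup
     * is lock-free (patch 1/5) and the count is an atomic. */
    struct zone_limit *zl = zone_limit_lookup(ct, conn->admit_zone);
    if (zl && zl->czl.zone_limit_seq == conn->zone_limit_seq) {
        atomic_count_dec(&zl->czl.count);
    }
}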

>
>
> BR
> wenxu
>
>
>
>
>
> At 2022-07-02 02:14:12, "Paolo Valerio"  wrote:
>>From: Gaetan Rivet 
>>
>>This patch aims to replace the expiration lists as, due to the way
>>they are used, besides being a source of contention, they have a known
>>issue when used with non-default policies for different zones that
>>could lead to retaining expired connections potentially for a long
>>time.
>>
>>This patch replaces them with an array of rculist used to distribute
>>all the newly created connections in order to, during the sweeping
>>phase, scan them without locking, and evict the expired connections
>>only locking during the actual removal.  This allows to reduce the
>>contention introduced by the pushback performed at every packet
>>update, also solving the issue related to zones and timeout policies.
>>
>>Signed-off-by: Gaetan Rivet 
>>Co-authored-by: Paolo Valerio 
>>Signed-off-by: Paolo Valerio 
>>---
>>v6:
>>- minor function renaming
>>- removed conn->lock in conn_clean() as this was unneeded.
>>- minor commit message rephrase
>>---
>> lib/conntrack-private.h |   84 +-
>> lib/conntrack-tp.c  |   44 +-
>> lib/conntrack.c |  152 
>> +++
>> 3 files changed, 133 insertions(+), 147 deletions(-)
>>
>>diff --git a/lib/conntrack-private.h b/lib/conntrack-private.h
>>index 34c688821..676f58d83 100644
>>--- a/lib/conntrack-private.h
>>+++ b/lib/conntrack-private.h
>>@@ -29,6 +29,7 @@
>> #include "openvswitch/list.h"
>> #include "openvswitch/types.h"
>> #include "packets.h"
>>+#include "rculist.h"
>> #include "unaligned.h"
>> #include "dp-packet.h"
>>
>>@@ -86,6 +87,31 @@ struct alg_exp_node {
>> bool nat_rpl_dst;
>> };
>>
>>+/* Timeouts: all the possible timeout states passed to update_expiration()
>>+ * are listed here. The name will be prefix by CT_TM_ and the value is in
>>+ * milliseconds */
>>+#define CT_TIMEOUTS \
>>+CT_TIMEOUT(TCP_FIRST_PACKET) \
>>+CT_TIMEOUT(TCP_OPENING) \
>>+CT_TIMEOUT(TCP_ESTABLISHED) \
>>+CT_TIMEOUT(TCP_CLOSING) \
>>+CT_TIMEOUT(TCP_FIN_WAIT) \
>>+CT_TIMEOUT(TCP_CLOSED) \
>>+CT_TIMEOUT(OTHER_FIRST) \
>>+CT_TIMEOUT(OTHER_MULTIPLE) \
>>+CT_TIMEOUT(OTHER_BIDIR) \
>>+CT_TIMEOUT(ICMP_FIRST) \
>>+CT_TIMEOUT(ICMP_REPLY)
>>+
>>+enum ct_timeout {
>>+#define CT_TIMEOUT(NAME) CT_TM_##NAME,
>>+CT_TIMEOUTS
>>+#undef CT_TIMEOUT
>>+N_CT_TM
>>

Re: [ovs-dev] [PATCH] meta-flow: Document nw_proto limitation for IPv6 later frags.

2022-07-01 Thread Paolo Valerio
Ilya Maximets  writes:

> On 4/14/22 17:34, Paolo Valerio wrote:
>> Signed-off-by: Paolo Valerio 
>> ---
>>  lib/meta-flow.xml |9 +
>>  1 file changed, 9 insertions(+)
>> 
>
> Hi, Paolo.   Thanks for the patch!
> See some comments inline.
>
>> diff --git a/lib/meta-flow.xml b/lib/meta-flow.xml
>> index 28865f88c..3445246f4 100644
>> --- a/lib/meta-flow.xml
>> +++ b/lib/meta-flow.xml
>> @@ -4101,6 +4101,15 @@ r r c c c.
>>  opcodes greater than 255 are treated as 0; this works adequately
>>  because in practice ARP and RARP only use opcodes 1 through 4.
>>
>> +
>> +  
>> +In the case of fragmented traffic, a difference exists in the way
>> +the field acts for IPv4 and IPv6 later fragments.
>
>>  Because of the
>> +headers structure, the protocol type is not processed for IPv6
>> +fragments with nonzero offset, meaning that matches based on this
>> +field are not effective for those packets.
>
> This part reads as nw_proto makes no meaningful value in case of
> IPv6 later fragments, but that is not correct.  It has a value 44
> and users can match on it, IIUC.  Could you clarify this in the
> text?  You may also look at how this is described in the design
> doc: Documentation/topics/design.rst.

You're right. Re-reading it, I realize it's badly phrased: the
intention was to state that upper-layer protocols cannot be matched
because - and this is the missing piece you spotted - nw_proto for
later IPv6 fragments is set to IPPROTO_FRAGMENT regardless.
I will fix the description so that it only specifies the different
behavior between IPv4 and IPv6 (dropping the header structure part as
well).
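
A tiny, self-contained illustration of that difference (not the OVS parser;
function and parameter names are made up for the example): an IPv4 later
fragment still carries the upper-layer protocol number in its header, so
that is what nw_proto reports, while an IPv6 later fragment is reported as
the Fragment header itself, IPPROTO_FRAGMENT (44).

#include <netinet/in.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint8_t
toy_nw_proto_ipv4(uint8_t ip_proto, bool later_frag)
{
    (void) later_frag;        /* The offset does not change what we report. */
    return ip_proto;          /* e.g. IPPROTO_TCP even for a later fragment. */
}

static uint8_t
toy_nw_proto_ipv6(uint8_t payload_next_header, bool later_frag)
{
    if (later_frag) {
        /* The upper-layer header only exists in the first fragment, so a
         * later fragment can only be classified as the Fragment header. */
        return IPPROTO_FRAGMENT;    /* 44 */
    }
    return payload_next_header;
}

int
main(void)
{
    printf("IPv4 later fragment of TCP -> nw_proto=%d\n",
           toy_nw_proto_ipv4(IPPROTO_TCP, true));
    printf("IPv6 later fragment of TCP -> nw_proto=%d\n",
           toy_nw_proto_ipv6(IPPROTO_TCP, true));
    return 0;
}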

Thanks,
Paolo



[ovs-dev] [PATCH v6 5/5] conntrack: Check for expiration before comparing the keys during the lookup

2022-07-01 Thread Paolo Valerio
From: Ilya Maximets 

This can save some costly key comparison misses, especially when
there are many expired connections waiting for the sweeper to
evict them.

Signed-off-by: Ilya Maximets 
Signed-off-by: Paolo Valerio 
---
 lib/conntrack.c |7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/lib/conntrack.c b/lib/conntrack.c
index 9d7891d6a..4a0166ecb 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -583,14 +583,17 @@ conn_key_lookup(struct conntrack *ct, const struct 
conn_key *key,
 bool found = false;
 
 CMAP_FOR_EACH_WITH_HASH (conn, cm_node, hash, &ct->conns) {
-if (!conn_key_cmp(&conn->key, key) && !conn_expired(conn, now)) {
+if (conn_expired(conn, now)) {
+continue;
+}
+if (!conn_key_cmp(&conn->key, key)) {
 found = true;
 if (reply) {
 *reply = false;
 }
 break;
 }
-if (!conn_key_cmp(&conn->rev_key, key) && !conn_expired(conn, now)) {
+if (!conn_key_cmp(&conn->rev_key, key)) {
 found = true;
 if (reply) {
 *reply = true;



[ovs-dev] [PATCH v6 4/5] conntrack: Use an atomic conn expiration value

2022-07-01 Thread Paolo Valerio
From: Gaetan Rivet 

A lock is taken during conn_lookup() to check whether a connection is
expired before returning it. This lock can have some contention.

Even though this lock ensures a consistent sequence of writes, it does
not imply a specific order. A ct_clean thread taking the lock first
could read a value that would be updated immediately after by a PMD
waiting on the same lock, just as well as the inverse order.

As such, the expiration time can be stale anytime it is read. In this
context, using an atomic will ensure the same guarantees for either
writes or reads, i.e. writes are consistent and reads are not undefined
behaviour. Reading an atomic is however less costly than taking and
releasing a lock.
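
A self-contained illustration of the read-side difference, with C11
stdatomic and a pthread mutex standing in for the OVS atomic wrappers and
the per-connection lock (the patch itself uses atomic_llong together with
atomic_store_relaxed()/atomic_read_relaxed()):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

struct toy_conn {
    pthread_mutex_t lock;
    long long expiration_locked;      /* "Before": guarded by 'lock'. */
    _Atomic long long expiration;     /* "After": read/written atomically. */
};

/* Before: every expiration check pays for a lock/unlock pair. */
bool
toy_expired_locked(struct toy_conn *conn, long long now)
{
    pthread_mutex_lock(&conn->lock);
    bool expired = now >= conn->expiration_locked;
    pthread_mutex_unlock(&conn->lock);
    return expired;
}

/* After: a relaxed atomic load; the value may be slightly stale, exactly as
 * it could be with the lock, but the read itself is much cheaper. */
bool
toy_expired_atomic(struct toy_conn *conn, long long now)
{
    return now >= atomic_load_explicit(&conn->expiration,
                                       memory_order_relaxed);
}

void
toy_update_expiration(struct toy_conn *conn, long long now,
                      long long timeout_ms)
{
    atomic_store_explicit(&conn->expiration, now + timeout_ms,
                          memory_order_relaxed);
}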

Signed-off-by: Gaetan Rivet 
Signed-off-by: Paolo Valerio 
---
v6:
- A couple of hunks slipped away from the stg refresh before sending
  v5.
---
 lib/conntrack-private.h |2 +-
 lib/conntrack-tp.c  |2 +-
 lib/conntrack.c |   27 +++
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/lib/conntrack-private.h b/lib/conntrack-private.h
index 676f58d83..d36845c2d 100644
--- a/lib/conntrack-private.h
+++ b/lib/conntrack-private.h
@@ -136,7 +136,7 @@ struct conn {
 /* Mutable data. */
 struct ovs_mutex lock; /* Guards all mutable fields. */
 ovs_u128 label;
-long long expiration;
+atomic_llong expiration;
 uint32_t mark;
 int seq_skew;
 
diff --git a/lib/conntrack-tp.c b/lib/conntrack-tp.c
index 7b8f9007b..89cb2704a 100644
--- a/lib/conntrack-tp.c
+++ b/lib/conntrack-tp.c
@@ -255,7 +255,7 @@ conn_update_expiration(struct conntrack *ct, struct conn 
*conn,
 "val=%u sec.",
 ct_timeout_str[tm], conn->key.zone, conn->tp_id, val);
 
-conn->expiration = now + val * 1000;
+atomic_store_relaxed(&conn->expiration, now + val * 1000);
 }
 
 void
diff --git a/lib/conntrack.c b/lib/conntrack.c
index 819c356c1..9d7891d6a 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -100,6 +100,7 @@ static enum ct_update_res conn_update(struct conntrack *ct, 
struct conn *conn,
   struct dp_packet *pkt,
   struct conn_lookup_ctx *ctx,
   long long now);
+static long long int conn_expiration(const struct conn *);
 static bool conn_expired(struct conn *, long long now);
 static void set_mark(struct dp_packet *, struct conn *,
  uint32_t val, uint32_t mask);
@@ -512,9 +513,7 @@ conn_clean(struct conntrack *ct, struct conn *conn)
 static void
 conn_force_expire(struct conn *conn)
 {
-ovs_mutex_lock(&conn->lock);
-conn->expiration = 0;
-ovs_mutex_unlock(&conn->lock);
+atomic_store_relaxed(&conn->expiration, 0);
 }
 
 /* Destroys the connection tracker 'ct' and frees all the allocated memory.
@@ -969,13 +968,10 @@ un_nat_packet(struct dp_packet *pkt, const struct conn 
*conn,
 static void
 conn_seq_skew_set(struct conntrack *ct, const struct conn *conn_in,
   long long now, int seq_skew, bool seq_skew_dir)
-OVS_NO_THREAD_SAFETY_ANALYSIS
 {
 struct conn *conn;
-ovs_mutex_unlock(&conn_in->lock);
-conn_lookup(ct, &conn_in->key, now, &conn, NULL);
-ovs_mutex_lock(&conn_in->lock);
 
+conn_lookup(ct, &conn_in->key, now, &conn, NULL);
 if (conn && seq_skew) {
 conn->seq_skew = seq_skew;
 conn->seq_skew_dir = seq_skew_dir;
@@ -2515,14 +2511,21 @@ conn_update(struct conntrack *ct, struct conn *conn, 
struct dp_packet *pkt,
 return update_res;
 }
 
+static long long int
+conn_expiration(const struct conn *conn)
+{
+long long int expiration;
+
+atomic_read_relaxed(&CONST_CAST(struct conn *, conn)->expiration,
+&expiration);
+return expiration;
+}
+
 static bool
 conn_expired(struct conn *conn, long long now)
 {
 if (conn->conn_type == CT_CONN_TYPE_DEFAULT) {
-ovs_mutex_lock(&conn->lock);
-bool expired = now >= conn->expiration ? true : false;
-ovs_mutex_unlock(&conn->lock);
-return expired;
+return now >= conn_expiration(conn);
 }
 return false;
 }
@@ -2655,7 +2658,7 @@ conn_to_ct_dpif_entry(const struct conn *conn, struct 
ct_dpif_entry *entry,
 entry->mark = conn->mark;
 memcpy(&entry->labels, &conn->label, sizeof entry->labels);
 
-long long expiration = conn->expiration - now;
+long long expiration = conn_expiration(conn) - now;
 
 struct ct_l4_proto *class = l4_protos[conn->key.nw_proto];
 if (class->conn_get_protoinfo) {



[ovs-dev] [PATCH v6 3/5] conntrack: Replace timeout based expiration lists with rculists.

2022-07-01 Thread Paolo Valerio
From: Gaetan Rivet 

This patch aims to replace the expiration lists: besides being a
source of contention due to the way they are used, they have a known
issue when used with non-default policies for different zones, which
can lead to expired connections being retained for a long time.

This patch replaces them with an array of rculists used to distribute
all newly created connections so that, during the sweeping phase, they
can be scanned without locking, evicting the expired connections and
taking the lock only for the actual removal.  This reduces the
contention introduced by the pushback performed at every packet
update, and also solves the issue related to zones and timeout
policies.

Signed-off-by: Gaetan Rivet 
Co-authored-by: Paolo Valerio 
Signed-off-by: Paolo Valerio 
---
v6:
- minor function renaming
- removed conn->lock in conn_clean() as this was unneeded.
- minor commit message rephrase
---
 lib/conntrack-private.h |   84 +-
 lib/conntrack-tp.c  |   44 +-
 lib/conntrack.c |  152 +++
 3 files changed, 133 insertions(+), 147 deletions(-)

diff --git a/lib/conntrack-private.h b/lib/conntrack-private.h
index 34c688821..676f58d83 100644
--- a/lib/conntrack-private.h
+++ b/lib/conntrack-private.h
@@ -29,6 +29,7 @@
 #include "openvswitch/list.h"
 #include "openvswitch/types.h"
 #include "packets.h"
+#include "rculist.h"
 #include "unaligned.h"
 #include "dp-packet.h"
 
@@ -86,6 +87,31 @@ struct alg_exp_node {
 bool nat_rpl_dst;
 };
 
+/* Timeouts: all the possible timeout states passed to update_expiration()
+ * are listed here. The name will be prefix by CT_TM_ and the value is in
+ * milliseconds */
+#define CT_TIMEOUTS \
+CT_TIMEOUT(TCP_FIRST_PACKET) \
+CT_TIMEOUT(TCP_OPENING) \
+CT_TIMEOUT(TCP_ESTABLISHED) \
+CT_TIMEOUT(TCP_CLOSING) \
+CT_TIMEOUT(TCP_FIN_WAIT) \
+CT_TIMEOUT(TCP_CLOSED) \
+CT_TIMEOUT(OTHER_FIRST) \
+CT_TIMEOUT(OTHER_MULTIPLE) \
+CT_TIMEOUT(OTHER_BIDIR) \
+CT_TIMEOUT(ICMP_FIRST) \
+CT_TIMEOUT(ICMP_REPLY)
+
+enum ct_timeout {
+#define CT_TIMEOUT(NAME) CT_TM_##NAME,
+CT_TIMEOUTS
+#undef CT_TIMEOUT
+N_CT_TM
+};
+
+#define EXP_LISTS 100
+
 enum OVS_PACKED_ENUM ct_conn_type {
 CT_CONN_TYPE_DEFAULT,
 CT_CONN_TYPE_UN_NAT,
@@ -96,11 +122,16 @@ struct conn {
 struct conn_key key;
 struct conn_key rev_key;
 struct conn_key parent_key; /* Only used for orig_tuple support. */
-struct ovs_list exp_node;
 struct cmap_node cm_node;
 uint16_t nat_action;
 char *alg;
 struct conn *nat_conn; /* The NAT 'conn' context, if there is one. */
+atomic_flag reclaimed; /* False during the lifetime of the connection,
+* True as soon as a thread has started freeing
+* its memory. */
+
+/* Inserted once by a PMD, then managed by the 'ct_clean' thread. */
+struct rculist node;
 
 /* Mutable data. */
 struct ovs_mutex lock; /* Guards all mutable fields. */
@@ -116,7 +147,6 @@ struct conn {
 /* Mutable data. */
 bool seq_skew_dir; /* TCP sequence skew direction due to NATTing of FTP
 * control messages; true if reply direction. */
-bool cleaned; /* True if cleaned from expiry lists. */
 
 /* Immutable data. */
 bool alg_related; /* True if alg data connection. */
@@ -132,22 +162,6 @@ enum ct_update_res {
 CT_UPDATE_VALID_NEW,
 };
 
-/* Timeouts: all the possible timeout states passed to update_expiration()
- * are listed here. The name will be prefix by CT_TM_ and the value is in
- * milliseconds */
-#define CT_TIMEOUTS \
-CT_TIMEOUT(TCP_FIRST_PACKET) \
-CT_TIMEOUT(TCP_OPENING) \
-CT_TIMEOUT(TCP_ESTABLISHED) \
-CT_TIMEOUT(TCP_CLOSING) \
-CT_TIMEOUT(TCP_FIN_WAIT) \
-CT_TIMEOUT(TCP_CLOSED) \
-CT_TIMEOUT(OTHER_FIRST) \
-CT_TIMEOUT(OTHER_MULTIPLE) \
-CT_TIMEOUT(OTHER_BIDIR) \
-CT_TIMEOUT(ICMP_FIRST) \
-CT_TIMEOUT(ICMP_REPLY)
-
 #define NAT_ACTION_SNAT_ALL (NAT_ACTION_SRC | NAT_ACTION_SRC_PORT)
 #define NAT_ACTION_DNAT_ALL (NAT_ACTION_DST | NAT_ACTION_DST_PORT)
 
@@ -181,22 +195,17 @@ enum ct_ephemeral_range {
 #define FOR_EACH_PORT_IN_RANGE(curr, min, max) \
 FOR_EACH_PORT_IN_RANGE__(curr, min, max, OVS_JOIN(idx, __COUNTER__))
 
-enum ct_timeout {
-#define CT_TIMEOUT(NAME) CT_TM_##NAME,
-CT_TIMEOUTS
-#undef CT_TIMEOUT
-N_CT_TM
-};
-
 struct conntrack {
 struct ovs_mutex ct_lock; /* Protects 2 following fields. */
 struct cmap conns OVS_GUARDED;
-struct ovs_list exp_lists[N_CT_TM] OVS_GUARDED;
+struct rculist exp_lists[EXP_LISTS];
 struct cmap zone_limits OVS_GUARDED;
 struct cmap timeout_policies OVS_GUARDED;
 uint32_t hash_basis; /* Salt for hashing a connection key. */
 pthread_t clean_thread; /* Periodically cleans up connecti
