Re: Add kernel stack trace saving for riscv64

2022-03-08 Thread Jeremie Courreges-Anglas
On Tue, Mar 08 2022, Visa Hankala  wrote:
> This patch adds kernel stack trace saving for riscv64, for the benefit
> of dt(4) and witness(4).

Nice!

> The unwinder is slow because of the symbol
> lookup, but this can be tweaked later.

A dumb approach that appears to work: add
cpu_exception_handler_supervisor_end and cpu_exception_handler_user_end
symbols, and perform a range check.
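
A sketch of the idea (untested; the *_end symbols would have to be added in
locore.S right after the existing handlers):

extern char cpu_exception_handler_supervisor[];
extern char cpu_exception_handler_supervisor_end[];
extern char cpu_exception_handler_user[];
extern char cpu_exception_handler_user_end[];

/* Would replace the db_search_symbol() lookup in stacktrace_save_at(). */
static inline int
ra_in_exception_handler(vaddr_t ra)
{
	return ((ra >= (vaddr_t)cpu_exception_handler_supervisor &&
	    ra < (vaddr_t)cpu_exception_handler_supervisor_end) ||
	    (ra >= (vaddr_t)cpu_exception_handler_user &&
	    ra < (vaddr_t)cpu_exception_handler_user_end));
}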

> The limit variable prevents the unwinder from using user-controllable
> register values. The limit has to reflect the kernel stack setup in
> cpu_fork(). To ensure consistency, the stack start address is stored
> in a variable in struct pcb.
>
> OK?

Works for me on the Unmatched with dt(4), thanks.  ok jca@ fwiw

> Index: arch/riscv64/include/pcb.h
> ===
> RCS file: src/sys/arch/riscv64/include/pcb.h,v
> retrieving revision 1.3
> diff -u -p -r1.3 pcb.h
> --- arch/riscv64/include/pcb.h30 Jun 2021 22:20:56 -  1.3
> +++ arch/riscv64/include/pcb.h8 Mar 2022 16:54:58 -
> @@ -39,5 +39,6 @@ struct pcb {
>  
>   caddr_t pcb_onfault;// On fault handler
>   struct fpreg pcb_fpstate;   // Floating Point state */
> + register_t  pcb_kstack; /* kernel stack address */
>  };
>  #endif   /* _MACHINE_PCB_H_ */
> Index: arch/riscv64/riscv64/db_trace.c
> ===
> RCS file: src/sys/arch/riscv64/riscv64/db_trace.c,v
> retrieving revision 1.5
> diff -u -p -r1.5 db_trace.c
> --- arch/riscv64/riscv64/db_trace.c   22 Feb 2022 07:46:04 -  1.5
> +++ arch/riscv64/riscv64/db_trace.c   8 Mar 2022 16:54:58 -
> @@ -141,3 +141,56 @@ db_stack_trace_print(db_expr_t addr, int
>   }
>   (*pr)("end trace frame: 0x%lx, count: %d\n", frame, count);
>  }
> +
> +void
> +stacktrace_save_at(struct stacktrace *st, unsigned int skip)
> +{
> + struct callframe *frame, *lastframe, *limit;
> + struct pcb *pcb = curpcb;
> + Elf_Sym *sym;
> + db_expr_t diff;
> + vaddr_t ra, subr;
> +
> + st->st_count = 0;
> +
> + if (pcb == NULL)
> + return;
> +
> + ra = (vaddr_t)__builtin_return_address(0);
> + frame = (struct callframe *)__builtin_frame_address(0);
> + KASSERT(INKERNEL(frame));
> + limit = (struct callframe *)((struct trapframe *)pcb->pcb_kstack - 1);
> +
> + while (st->st_count < STACKTRACE_MAX) {
> + if (skip == 0)
> + st->st_pc[st->st_count++] = ra;
> + else
> + skip--;
> +
> + sym = db_search_symbol(ra, DB_STGY_PROC, &diff);
> + if (sym == NULL)
> + break;
> + subr = ra - (vaddr_t)diff;
> +
> + lastframe = frame;
> + if (subr == (vaddr_t)cpu_exception_handler_supervisor ||
> + subr == (vaddr_t)cpu_exception_handler_user) {
> + struct trapframe *tf = (struct trapframe *)frame;
> +
> + frame = (struct callframe *)tf->tf_s[0];
> + ra = tf->tf_ra;
> + } else {
> + frame = frame[-1].f_frame;
> + if (frame == NULL)
> + break;
> + ra = frame[-1].f_ra;
> + }
> +
> + if (frame <= lastframe)
> + break;
> + if (frame >= limit)
> + break;
> + if (!INKERNEL(ra))
> + break;
> + }
> +}
> Index: arch/riscv64/riscv64/vm_machdep.c
> ===
> RCS file: src/sys/arch/riscv64/riscv64/vm_machdep.c,v
> retrieving revision 1.10
> diff -u -p -r1.10 vm_machdep.c
> --- arch/riscv64/riscv64/vm_machdep.c 24 Feb 2022 14:19:10 -  1.10
> +++ arch/riscv64/riscv64/vm_machdep.c 8 Mar 2022 16:54:58 -
> @@ -75,13 +75,12 @@ cpu_fork(struct proc *p1, struct proc *p
>  
>   pmap_activate(p2);
>  
> - tf = (struct trapframe *)((u_long)p2->p_addr
> + pcb->pcb_kstack = STACKALIGN((u_long)p2->p_addr
>   + USPACE
> - - sizeof(struct trapframe)
>   - sizeof(register_t)/* for holding curcpu */
>   - 0x10);
>  
> - tf = (struct trapframe *)STACKALIGN(tf);
> + tf = (struct trapframe *)pcb->pcb_kstack - 1;
>   pcb->pcb_tf = tf;
>   *tf = *p1->p_addr->u_pcb.pcb_tf;
>  
>

-- 
jca | PGP : 0x1524E7EE / 5135 92C1 AD36 5293 2BDF  DDCC 0DFA 74AE 1524 E7EE



wg(4): 'Address already in use' when wgrtable is changed

2022-03-08 Thread Yuichiro NAITO

I see an 'Address already in use' message
when I change wgrtable for a running wg interface.
It doesn't make sense to me.

It can be reproduced by the following command sequence.

```
# route -T1 add default `cat /etc/mygate`
# ifconfig wg0 create wgport 7111 wgkey `cat /etc/mykey.wg0`
# ifconfig wg0 up
# ifconfig wg0 wgrtable 1
ifconfig: SIOCSWG: Address already in use
```

When I bring the wg0 interface down before changing wgrtable,
it succeeds and no message is shown.

I investigated the reason why 'Address already in use' is shown.

If wgrtable is specified as an ifconfig argument,
the `wg_ioctl_set` function in `sys/net/if_wg.c` is called.

If the wg interface is running, the `wg_bind` function is called.
`wg_bind` creates new sockets (IPv4 and IPv6) and replaces the old ones with them.

If only wgrtable is changed, `wg_bind` binds to the same port as the existing
sockets, so 'Address already in use' is returned.

Here is a simple patch that closes the existing sockets before `wg_bind`.
It works for me, but I'm not 100% sure it is the right fix.

Any other ideas?

```
diff --git a/sys/net/if_wg.c b/sys/net/if_wg.c
index 4dae3e3c976..0159664fb34 100644
--- a/sys/net/if_wg.c
+++ b/sys/net/if_wg.c
@@ -2253,11 +2253,14 @@ wg_ioctl_set(struct wg_softc *sc, struct wg_data_io *data)
if (port != sc->sc_udp_port || rtable != sc->sc_udp_rtable) {
TAILQ_FOREACH(peer, &sc->sc_peer_seq, p_seq_entry)
wg_peer_clear_src(peer);

-   if (sc->sc_if.if_flags & IFF_RUNNING)
+   if (sc->sc_if.if_flags & IFF_RUNNING) {
+   if (port == sc->sc_udp_port)
+   wg_unbind(sc);
if ((ret = wg_bind(sc, &port, &rtable)) != 0)
goto error;
+   }

sc->sc_udp_port = port;
sc->sc_udp_rtable = rtable;
}
```

--
Yuichiro NAITO (naito.yuich...@gmail.com)



Re: ieee80211_stats userland vs. kernel

2022-03-08 Thread Theo de Raadt
Stefan Sperling  wrote:

> On Tue, Mar 08, 2022 at 12:58:27PM -0700, Theo de Raadt wrote:
> > Claudio Jeker  wrote:
> > 
> > > Honestly I think this is overkill. There is no stat struct where we do
> > > this dance. It is accepted that netstat needs to keep in sync for these
> > > structs to work. Why is it necessary to disconnect the kernel and userland
> > > for this?
> > 
> > Actually there is a major one: it is how ps works.
> > 
> > I think the problem is when this struct is changed, ifconfig becomes 
> > unusable?
> 
> In this case it is not ifconfig, but netstat -W iwm0.
> Which is a debugging tool, like netstat -s.

We don't care when netstat breaks

But we really care when ifconfig breaks, in particular in the area of the
"media" command related things.



Re: ieee80211_stats userland vs. kernel

2022-03-08 Thread Stefan Sperling
On Tue, Mar 08, 2022 at 12:58:27PM -0700, Theo de Raadt wrote:
> Claudio Jeker  wrote:
> 
> > Honestly I think this is overkill. There is no stat struct where we do
> > this dance. It is accepted that netstat needs to keep in sync for these
> > structs to work. Why is it necessary to disconnect the kernel and userland
> > for this?
> 
> Actually there is a major one: it is how ps works.
> 
> I think the problem is when this struct is changed, ifconfig becomes unusable?

In this case it is not ifconfig, but netstat -W iwm0.
Which is a debugging tool, like netstat -s.



Re: ipsec policy refcount

2022-03-08 Thread Tobias Heider
On Tue, Mar 08, 2022 at 08:17:13PM +0100, Alexander Bluhm wrote:
> Hi,
> 
> In IPsec policy replace integer refcount with atomic refcount.
> 
> It is a bit strange that ipo_refcnt is never taken, but let's go
> towards MP safety in small steps.
> 
> ok?
> 
> bluhm

ok tobhe@

> 
> Index: net/pfkeyv2.c
> ===
> RCS file: /data/mirror/openbsd/cvs/src/sys/net/pfkeyv2.c,v
> retrieving revision 1.231
> diff -u -p -r1.231 pfkeyv2.c
> --- net/pfkeyv2.c 25 Feb 2022 23:51:03 -  1.231
> +++ net/pfkeyv2.c 8 Mar 2022 18:44:28 -
> @@ -1996,7 +1996,7 @@ pfkeyv2_send(struct socket *so, void *me
>  
>   TAILQ_INIT(&ipo->ipo_acquires);
>   ipo->ipo_rdomain = rdomain;
> - ipo->ipo_ref_count = 1;
> + refcnt_init(&ipo->ipo_refcnt);
>  
>   /* Add SPD entry */
>   if ((rnh = spd_table_get(rdomain)) == NULL ||
> Index: netinet/ip_ipsp.h
> ===
> RCS file: /data/mirror/openbsd/cvs/src/sys/netinet/ip_ipsp.h,v
> retrieving revision 1.235
> diff -u -p -r1.235 ip_ipsp.h
> --- netinet/ip_ipsp.h 2 Mar 2022 20:16:43 -   1.235
> +++ netinet/ip_ipsp.h 8 Mar 2022 18:43:38 -
> @@ -281,7 +281,7 @@ struct ipsec_policy {
>   u_int8_t ipo_sproto;    /* ESP/AH; if zero, use system dflts */
>   u_int   ipo_rdomain;
>  
> - int ipo_ref_count;
> + struct refcnt   ipo_refcnt;
>  
>   struct tdb  *ipo_tdb;   /* [p] Cached TDB entry */
>  
> Index: netinet/ip_spd.c
> ===
> RCS file: /data/mirror/openbsd/cvs/src/sys/netinet/ip_spd.c,v
> retrieving revision 1.113
> diff -u -p -r1.113 ip_spd.c
> --- netinet/ip_spd.c  6 Mar 2022 15:24:50 -   1.113
> +++ netinet/ip_spd.c  8 Mar 2022 18:44:32 -
> @@ -666,11 +666,10 @@ ipsec_delete_policy(struct ipsec_policy 
>   struct ipsec_acquire *ipa;
>   struct radix_node_head *rnh;
>   struct radix_node *rn = (struct radix_node *)ipo;
> - int err = 0;
>  
>   NET_ASSERT_LOCKED();
>  
> - if (--ipo->ipo_ref_count > 0)
> + if (refcnt_rele(&ipo->ipo_refcnt) == 0)
>   return 0;
>  
>   /* Delete from SPD. */
> @@ -699,7 +698,7 @@ ipsec_delete_policy(struct ipsec_policy 
>  
>   pool_put(&ipsec_policy_pool, ipo);
>  
> - return err;
> + return 0;
>  }
>  
>  void
> 



Re: ieee80211_stats userland vs. kernel

2022-03-08 Thread Theo de Raadt
Claudio Jeker  wrote:

> Honestly I think this is overkill. There is no stat struct where we do
> this dance. It is accepted that netstat needs to keep in sync for these
> structs to work. Why is it necessary to disconnect the kernel and userland
> for this?

Actually there is a major one: it is how ps works.

I think the problem is when this struct is changed, ifconfig becomes unusable?

Making a different userland & kernel struct might not solve it, unless some
games are played.  For instance, append extra pads for future extension, so
that the ioctl can still get at the data when fields are used.  Discourage
changing the types/sizes of fields.  When a field is removed, change it into
a pad rather than shuffling the struct.  Eventually we'll hit a flag day.
But with this current design, I think we'll trip into kernel:userland ABI
breakage too easily, and it just needs some clever policy around the struct
to ensure it doesn't cause that breakage.
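
A hypothetical sketch of such a policy (the struct and field names here are
made up for illustration, not a proposal for the actual ieee80211 structs):

struct foo_statsreq {
	uint32_t	fsr_version;		/* bumped when field semantics change */
	uint32_t	fsr_pad0;
	uint64_t	fsr_rx_tooshort;
	uint64_t	fsr_pad_was_rx_old;	/* removed field turned into a pad */
	uint64_t	fsr_tx_nonode;
	uint64_t	fsr_spare[16];		/* room for future counters */
};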



Re: ieee80211_stats userland vs. kernel

2022-03-08 Thread Claudio Jeker
On Tue, Mar 08, 2022 at 07:17:33PM +0100, Stefan Sperling wrote:
> On Tue, Mar 08, 2022 at 03:55:48PM +0100, Stefan Sperling wrote:
> > On Mon, Mar 07, 2022 at 03:04:06PM -0700, Theo de Raadt wrote:
> > > > For now, the structs are identical so the code copying data out is
> > > > kept simple.
> > > 
> > > I think this is unwise, and you should write the field-by-field copying
> > > function at the same time, otherwise this is just asking for trouble.
> > > You really cannot wait until an intentional change.
> > 
> > Sure, here it is.
> 
> On second thought, avoiding the malloc/free dance is better.
> The struct is still small enough to fit on the stack.
> 
> diff refs/heads/master refs/heads/statsreq
> blob - 85d795d745eb21fd218056c2f3faf7fbc2c7fe49
> blob + 62938001ed22fc133a0c98e27ef5690c978e21f3
> --- sys/net80211/ieee80211_ioctl.c
> +++ sys/net80211/ieee80211_ioctl.c
> @@ -55,6 +55,8 @@ void ieee80211_node2req(struct ieee80211com *,
>   const struct ieee80211_node *, struct ieee80211_nodereq *);
>  void  ieee80211_req2node(struct ieee80211com *,
>   const struct ieee80211_nodereq *, struct ieee80211_node *);
> +void ieee80211_stats2req(struct ieee80211_statsreq *,
> + struct ieee80211_stats *);
>  
>  void
>  ieee80211_node2req(struct ieee80211com *ic, const struct ieee80211_node *ni,
> @@ -180,6 +182,89 @@ ieee80211_req2node(struct ieee80211com *ic, const stru
>  }
>  
>  void
> +ieee80211_stats2req(struct ieee80211_statsreq *req,
> +struct ieee80211_stats *stats)
> +{
> + memset(req, 0, sizeof(*req));
> +
> + req->is_rx_badversion = stats->is_rx_badversion;
> + req->is_rx_tooshort = stats->is_rx_tooshort;
> + req->is_rx_wrongbss = stats->is_rx_wrongbss;
> + req->is_rx_dup = stats->is_rx_dup;
> + req->is_rx_wrongdir = stats->is_rx_wrongdir;
> + req->is_rx_mcastecho = stats->is_rx_mcastecho;
> + req->is_rx_notassoc = stats->is_rx_notassoc;
> + req->is_rx_nowep = stats->is_rx_nowep;
> + req->is_rx_unencrypted = stats->is_rx_unencrypted;
> + req->is_rx_wepfail = stats->is_rx_wepfail;
> + req->is_rx_decap = stats->is_rx_decap;
> + req->is_rx_mgtdiscard = stats->is_rx_mgtdiscard;
> + req->is_rx_ctl = stats->is_rx_ctl;
> + req->is_rx_rstoobig = stats->is_rx_rstoobig;
> + req->is_rx_elem_missing = stats->is_rx_elem_missing;
> + req->is_rx_elem_toobig = stats->is_rx_elem_toobig;
> + req->is_rx_elem_toosmall = stats->is_rx_elem_toosmall;
> + req->is_rx_badchan = stats->is_rx_badchan;
> + req->is_rx_chanmismatch = stats->is_rx_chanmismatch;
> + req->is_rx_nodealloc = stats->is_rx_nodealloc;
> + req->is_rx_ssidmismatch = stats->is_rx_ssidmismatch;
> + req->is_rx_auth_unsupported = stats->is_rx_auth_unsupported;
> + req->is_rx_auth_fail = stats->is_rx_auth_fail;
> + req->is_rx_assoc_bss = stats->is_rx_assoc_bss;
> + req->is_rx_assoc_notauth = stats->is_rx_assoc_notauth;
> + req->is_rx_assoc_capmismatch = stats->is_rx_assoc_capmismatch;
> + req->is_rx_assoc_norate = stats->is_rx_assoc_norate;
> + req->is_rx_deauth = stats->is_rx_deauth;
> + req->is_rx_disassoc = stats->is_rx_disassoc;
> + req->is_rx_badsubtype = stats->is_rx_badsubtype;
> + req->is_rx_nombuf = stats->is_rx_nombuf;
> + req->is_rx_decryptcrc = stats->is_rx_decryptcrc;
> + req->is_rx_ahdemo_mgt = stats->is_rx_ahdemo_mgt;
> + req->is_rx_bad_auth = stats->is_rx_bad_auth;
> + req->is_tx_nombuf = stats->is_tx_nombuf;
> + req->is_tx_nonode = stats->is_tx_nonode;
> + req->is_tx_unknownmgt = stats->is_tx_unknownmgt;
> + req->is_scan_active = stats->is_scan_active;
> + req->is_scan_passive = stats->is_scan_passive;
> + req->is_node_timeout = stats->is_node_timeout;
> + req->is_crypto_nomem = stats->is_crypto_nomem;
> + req->is_rx_assoc_badrsnie = stats->is_rx_assoc_badrsnie;
> + req->is_rx_unauth = stats->is_rx_unauth;
> + req->is_tx_noauth = stats->is_tx_noauth;
> + req->is_rx_eapol_key = stats->is_rx_eapol_key;
> + req->is_rx_eapol_replay = stats->is_rx_eapol_replay;
> + req->is_rx_eapol_badmic = stats->is_rx_eapol_badmic;
> + req->is_rx_remmicfail = stats->is_rx_remmicfail;
> + req->is_rx_locmicfail = stats->is_rx_locmicfail;
> + req->is_tkip_replays = stats->is_tkip_replays;
> + req->is_tkip_icv_errs = stats->is_tkip_icv_errs;
> + req->is_ccmp_replays = stats->is_ccmp_replays;
> + req->is_ccmp_dec_errs = stats->is_ccmp_dec_errs;
> + req->is_cmac_replays = stats->is_cmac_replays;
> + req->is_cmac_icv_errs = stats->is_cmac_icv_errs;
> + req->is_pbac_errs = stats->is_pbac_errs;
> + req->is_ht_nego_no_mandatory_mcs = stats->is_ht_nego_no_mandatory_mcs;
> + req->is_ht_nego_no_basic_mcs = stats->is_ht_nego_no_basic_mcs;
> + req->is_ht_nego_bad_crypto = stats->is_ht_nego_bad_crypto;
> + req->is_ht_prot_change = stats->is_ht_prot_change;
> + req->is_ht_rx_ba_agreements = 

Re: simplify mutexes for pf state table in pfsync

2022-03-08 Thread Alexander Bluhm
On Tue, Mar 08, 2022 at 05:49:22PM +0100, Alexandr Nedvedicky wrote:
> Hello,
> 
> my original idea was to have one mutex per pfsync queue.  However, it makes
> things more complicated than necessary for the queues which keep pf(4) state.
> hrvoje@ hit some panics in this area recently. bluhm@ and I are still looking
> at those issues. However, here is a first change to make the code simpler:
> the diff below merges the array of ->sc_mtx[] into a single mutex.
> 
> OK ?

I have thrown it on my performance test machine with pfsync enabled.
No issues seen.
http://bluhm.genua.de/perform/results/2022-03-06T22%3A40%3A13Z/perform.html

OK bluhm@

> 8<---8<---8<--8<
> 
> diff --git a/sys/net/if_pfsync.c b/sys/net/if_pfsync.c
> index edeb2209161..9a92fc4d21f 100644
> --- a/sys/net/if_pfsync.c
> +++ b/sys/net/if_pfsync.c
> @@ -212,7 +212,7 @@ struct pfsync_softc {
>   struct ip sc_template;
>  
>   struct pf_state_queue sc_qs[PFSYNC_S_COUNT];
> - struct mutex sc_mtx[PFSYNC_S_COUNT];
> + struct mutex sc_st_mtx;
>   size_t   sc_len;
>  
>   struct pfsync_upd_reqs   sc_upd_req_list;
> @@ -324,13 +324,6 @@ pfsync_clone_create(struct if_clone *ifc, int unit)
>   struct pfsync_softc *sc;
>   struct ifnet *ifp;
>   int q;
> - static const char *mtx_names[] = {
> - "iack_mtx",
> - "upd_c_mtx",
> - "del_mtx",
> - "ins_mtx",
> - "upd_mtx",
> - "" };
>  
>   if (unit != 0)
>   return (EINVAL);
> @@ -338,10 +331,9 @@ pfsync_clone_create(struct if_clone *ifc, int unit)
>   pfsync_sync_ok = 1;
>  
>   sc = malloc(sizeof(*pfsyncif), M_DEVBUF, M_WAITOK|M_ZERO);
> - for (q = 0; q < PFSYNC_S_COUNT; q++) {
> + for (q = 0; q < PFSYNC_S_COUNT; q++)
>   TAILQ_INIT(&sc->sc_qs[q]);
> - mtx_init_flags(&sc->sc_mtx[q], IPL_SOFTNET, mtx_names[q], 0);
> - }
> + mtx_init_flags(&sc->sc_st_mtx, IPL_SOFTNET, "st_mtx", 0);
>  
>   pool_init(&sc->sc_pool, PFSYNC_PLSIZE, 0, IPL_SOFTNET, 0, "pfsync",
>   NULL);
> @@ -1585,9 +1577,7 @@ pfsync_grab_snapshot(struct pfsync_snapshot *sn, struct 
> pfsync_softc *sc)
>  
>   sn->sn_sc = sc;
>  
> - for (q = 0; q < PFSYNC_S_COUNT; q++)
> - mtx_enter(&sc->sc_mtx[q]);
> -
> + mtx_enter(&sc->sc_st_mtx);
>   mtx_enter(&sc->sc_upd_req_mtx);
>   mtx_enter(&sc->sc_tdb_mtx);
>  
> @@ -1612,9 +1602,7 @@ pfsync_grab_snapshot(struct pfsync_snapshot *sn, struct 
> pfsync_softc *sc)
>  
>   mtx_leave(&sc->sc_tdb_mtx);
>   mtx_leave(&sc->sc_upd_req_mtx);
> -
> - for (q = (PFSYNC_S_COUNT - 1); q >= 0; q--)
> - mtx_leave(&sc->sc_mtx[q]);
> + mtx_leave(&sc->sc_st_mtx);
>  }
>  
>  void
> @@ -2420,14 +2408,14 @@ pfsync_q_ins(struct pf_state *st, int q)
>   panic("pfsync pkt len is too low %zd", sc->sc_len);
>  #endif
>   do {
> - mtx_enter(&sc->sc_mtx[q]);
> + mtx_enter(&sc->sc_st_mtx);
>  
>   /*
>* If two threads are competing to insert the same state, then
>* there must be just single winner.
>*/
>   if (st->sync_state != PFSYNC_S_NONE) {
> - mtx_leave(&sc->sc_mtx[q]);
> + mtx_leave(&sc->sc_st_mtx);
>   break;
>   }
>  
> @@ -2439,7 +2427,7 @@ pfsync_q_ins(struct pf_state *st, int q)
>   sc_len = atomic_add_long_nv(&sc->sc_len, nlen);
>   if (sc_len > sc->sc_if.if_mtu) {
>   atomic_sub_long(&sc->sc_len, nlen);
> - mtx_leave(&sc->sc_mtx[q]);
> + mtx_leave(&sc->sc_st_mtx);
>   pfsync_sendout();
>   continue;
>   }
> @@ -2448,7 +2436,7 @@ pfsync_q_ins(struct pf_state *st, int q)
>  
>   TAILQ_INSERT_TAIL(&sc->sc_qs[q], st, sync_list);
>   st->sync_state = q;
> - mtx_leave(&sc->sc_mtx[q]);
> + mtx_leave(&sc->sc_st_mtx);
>   } while (0);
>  }
>  
> @@ -2456,18 +2444,19 @@ void
>  pfsync_q_del(struct pf_state *st)
>  {
>   struct pfsync_softc *sc = pfsyncif;
> - int q = st->sync_state;
> + int q;
>  
>   KASSERT(st->sync_state != PFSYNC_S_NONE);
>  
> - mtx_enter(&sc->sc_mtx[q]);
> + mtx_enter(&sc->sc_st_mtx);
> + q = st->sync_state;
>   atomic_sub_long(&sc->sc_len, pfsync_qs[q].len);
>   TAILQ_REMOVE(&sc->sc_qs[q], st, sync_list);
>   if (TAILQ_EMPTY(&sc->sc_qs[q]))
>   atomic_sub_long(&sc->sc_len, sizeof (struct pfsync_subheader));
> - mtx_leave(&sc->sc_mtx[q]);
> -
>   st->sync_state = PFSYNC_S_NONE;
> + mtx_leave(&sc->sc_st_mtx);
> +
>   pf_state_unref(st);
>  }
>  



ipsec policy refcount

2022-03-08 Thread Alexander Bluhm
Hi,

In IPsec policy replace integer refcount with atomic refcount.

It is a bit strange that ipo_refcnt is never taken, but let's go
towards MP safety in small steps.

ok?

bluhm

Index: net/pfkeyv2.c
===
RCS file: /data/mirror/openbsd/cvs/src/sys/net/pfkeyv2.c,v
retrieving revision 1.231
diff -u -p -r1.231 pfkeyv2.c
--- net/pfkeyv2.c   25 Feb 2022 23:51:03 -  1.231
+++ net/pfkeyv2.c   8 Mar 2022 18:44:28 -
@@ -1996,7 +1996,7 @@ pfkeyv2_send(struct socket *so, void *me
 
TAILQ_INIT(&ipo->ipo_acquires);
ipo->ipo_rdomain = rdomain;
-   ipo->ipo_ref_count = 1;
+   refcnt_init(&ipo->ipo_refcnt);
 
/* Add SPD entry */
if ((rnh = spd_table_get(rdomain)) == NULL ||
Index: netinet/ip_ipsp.h
===
RCS file: /data/mirror/openbsd/cvs/src/sys/netinet/ip_ipsp.h,v
retrieving revision 1.235
diff -u -p -r1.235 ip_ipsp.h
--- netinet/ip_ipsp.h   2 Mar 2022 20:16:43 -   1.235
+++ netinet/ip_ipsp.h   8 Mar 2022 18:43:38 -
@@ -281,7 +281,7 @@ struct ipsec_policy {
u_int8_t ipo_sproto;    /* ESP/AH; if zero, use system dflts */
u_int   ipo_rdomain;
 
-   int ipo_ref_count;
+   struct refcnt   ipo_refcnt;
 
struct tdb  *ipo_tdb;   /* [p] Cached TDB entry */
 
Index: netinet/ip_spd.c
===
RCS file: /data/mirror/openbsd/cvs/src/sys/netinet/ip_spd.c,v
retrieving revision 1.113
diff -u -p -r1.113 ip_spd.c
--- netinet/ip_spd.c6 Mar 2022 15:24:50 -   1.113
+++ netinet/ip_spd.c8 Mar 2022 18:44:32 -
@@ -666,11 +666,10 @@ ipsec_delete_policy(struct ipsec_policy 
struct ipsec_acquire *ipa;
struct radix_node_head *rnh;
struct radix_node *rn = (struct radix_node *)ipo;
-   int err = 0;
 
NET_ASSERT_LOCKED();
 
-   if (--ipo->ipo_ref_count > 0)
+   if (refcnt_rele(&ipo->ipo_refcnt) == 0)
return 0;
 
/* Delete from SPD. */
@@ -699,7 +698,7 @@ ipsec_delete_policy(struct ipsec_policy 
 
pool_put(&ipsec_policy_pool, ipo);
 
-   return err;
+   return 0;
 }
 
 void



Re: ieee80211_stats userland vs. kernel

2022-03-08 Thread Stefan Sperling
On Tue, Mar 08, 2022 at 03:55:48PM +0100, Stefan Sperling wrote:
> On Mon, Mar 07, 2022 at 03:04:06PM -0700, Theo de Raadt wrote:
> > > For now, the structs are identical so the code copying data out is
> > > kept simple.
> > 
> > I think this is unwise, and you should write the field-by-field copying
> > function at the same time, otherwise this is just asking for trouble.
> > You really cannot wait until an intentional change.
> 
> Sure, here it is.

On second thought, avoiding the malloc/free dance is better.
The struct is still small enough to fit on the stack.

diff refs/heads/master refs/heads/statsreq
blob - 85d795d745eb21fd218056c2f3faf7fbc2c7fe49
blob + 62938001ed22fc133a0c98e27ef5690c978e21f3
--- sys/net80211/ieee80211_ioctl.c
+++ sys/net80211/ieee80211_ioctl.c
@@ -55,6 +55,8 @@ void   ieee80211_node2req(struct ieee80211com *,
const struct ieee80211_node *, struct ieee80211_nodereq *);
void   ieee80211_req2node(struct ieee80211com *,
const struct ieee80211_nodereq *, struct ieee80211_node *);
+void ieee80211_stats2req(struct ieee80211_statsreq *,
+   struct ieee80211_stats *);
 
 void
 ieee80211_node2req(struct ieee80211com *ic, const struct ieee80211_node *ni,
@@ -180,6 +182,89 @@ ieee80211_req2node(struct ieee80211com *ic, const stru
 }
 
 void
+ieee80211_stats2req(struct ieee80211_statsreq *req,
+struct ieee80211_stats *stats)
+{
+   memset(req, 0, sizeof(*req));
+
+   req->is_rx_badversion = stats->is_rx_badversion;
+   req->is_rx_tooshort = stats->is_rx_tooshort;
+   req->is_rx_wrongbss = stats->is_rx_wrongbss;
+   req->is_rx_dup = stats->is_rx_dup;
+   req->is_rx_wrongdir = stats->is_rx_wrongdir;
+   req->is_rx_mcastecho = stats->is_rx_mcastecho;
+   req->is_rx_notassoc = stats->is_rx_notassoc;
+   req->is_rx_nowep = stats->is_rx_nowep;
+   req->is_rx_unencrypted = stats->is_rx_unencrypted;
+   req->is_rx_wepfail = stats->is_rx_wepfail;
+   req->is_rx_decap = stats->is_rx_decap;
+   req->is_rx_mgtdiscard = stats->is_rx_mgtdiscard;
+   req->is_rx_ctl = stats->is_rx_ctl;
+   req->is_rx_rstoobig = stats->is_rx_rstoobig;
+   req->is_rx_elem_missing = stats->is_rx_elem_missing;
+   req->is_rx_elem_toobig = stats->is_rx_elem_toobig;
+   req->is_rx_elem_toosmall = stats->is_rx_elem_toosmall;
+   req->is_rx_badchan = stats->is_rx_badchan;
+   req->is_rx_chanmismatch = stats->is_rx_chanmismatch;
+   req->is_rx_nodealloc = stats->is_rx_nodealloc;
+   req->is_rx_ssidmismatch = stats->is_rx_ssidmismatch;
+   req->is_rx_auth_unsupported = stats->is_rx_auth_unsupported;
+   req->is_rx_auth_fail = stats->is_rx_auth_fail;
+   req->is_rx_assoc_bss = stats->is_rx_assoc_bss;
+   req->is_rx_assoc_notauth = stats->is_rx_assoc_notauth;
+   req->is_rx_assoc_capmismatch = stats->is_rx_assoc_capmismatch;
+   req->is_rx_assoc_norate = stats->is_rx_assoc_norate;
+   req->is_rx_deauth = stats->is_rx_deauth;
+   req->is_rx_disassoc = stats->is_rx_disassoc;
+   req->is_rx_badsubtype = stats->is_rx_badsubtype;
+   req->is_rx_nombuf = stats->is_rx_nombuf;
+   req->is_rx_decryptcrc = stats->is_rx_decryptcrc;
+   req->is_rx_ahdemo_mgt = stats->is_rx_ahdemo_mgt;
+   req->is_rx_bad_auth = stats->is_rx_bad_auth;
+   req->is_tx_nombuf = stats->is_tx_nombuf;
+   req->is_tx_nonode = stats->is_tx_nonode;
+   req->is_tx_unknownmgt = stats->is_tx_unknownmgt;
+   req->is_scan_active = stats->is_scan_active;
+   req->is_scan_passive = stats->is_scan_passive;
+   req->is_node_timeout = stats->is_node_timeout;
+   req->is_crypto_nomem = stats->is_crypto_nomem;
+   req->is_rx_assoc_badrsnie = stats->is_rx_assoc_badrsnie;
+   req->is_rx_unauth = stats->is_rx_unauth;
+   req->is_tx_noauth = stats->is_tx_noauth;
+   req->is_rx_eapol_key = stats->is_rx_eapol_key;
+   req->is_rx_eapol_replay = stats->is_rx_eapol_replay;
+   req->is_rx_eapol_badmic = stats->is_rx_eapol_badmic;
+   req->is_rx_remmicfail = stats->is_rx_remmicfail;
+   req->is_rx_locmicfail = stats->is_rx_locmicfail;
+   req->is_tkip_replays = stats->is_tkip_replays;
+   req->is_tkip_icv_errs = stats->is_tkip_icv_errs;
+   req->is_ccmp_replays = stats->is_ccmp_replays;
+   req->is_ccmp_dec_errs = stats->is_ccmp_dec_errs;
+   req->is_cmac_replays = stats->is_cmac_replays;
+   req->is_cmac_icv_errs = stats->is_cmac_icv_errs;
+   req->is_pbac_errs = stats->is_pbac_errs;
+   req->is_ht_nego_no_mandatory_mcs = stats->is_ht_nego_no_mandatory_mcs;
+   req->is_ht_nego_no_basic_mcs = stats->is_ht_nego_no_basic_mcs;
+   req->is_ht_nego_bad_crypto = stats->is_ht_nego_bad_crypto;
+   req->is_ht_prot_change = stats->is_ht_prot_change;
+   req->is_ht_rx_ba_agreements = stats->is_ht_rx_ba_agreements;
+   req->is_ht_tx_ba_agreements = stats->is_ht_tx_ba_agreements;
+   req->is_ht_rx_frame_below_ba_winstart 

Add kernel stack trace saving for riscv64

2022-03-08 Thread Visa Hankala
This patch adds kernel stack trace saving for riscv64, for the benefit
of dt(4) and witness(4). The unwinder is slow because of the symbol
lookup, but this can be tweaked later.

The limit variable prevents the unwinder from using user-controllable
register values. The limit has to reflect the kernel stack setup in
cpu_fork(). To ensure consistency, the stack start address is stored
in a variable in struct pcb.
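
For orientation, a rough sketch of the kernel stack layout that pcb_kstack
records (as set up in cpu_fork() below; spacing of the offsets is illustrative):

/*
 *  p2->p_addr + USPACE                  top of the kernel stack area
 *    - sizeof(register_t)               slot holding curcpu
 *    - 0x10, STACKALIGN()ed         ->  pcb_kstack
 *  pcb_kstack - sizeof(struct trapframe)
 *                                   ->  pcb_tf, the unwinder's limit
 *  ordinary call frames grow down from here
 */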

OK?

Index: arch/riscv64/include/pcb.h
===
RCS file: src/sys/arch/riscv64/include/pcb.h,v
retrieving revision 1.3
diff -u -p -r1.3 pcb.h
--- arch/riscv64/include/pcb.h  30 Jun 2021 22:20:56 -  1.3
+++ arch/riscv64/include/pcb.h  8 Mar 2022 16:54:58 -
@@ -39,5 +39,6 @@ struct pcb {
 
caddr_t pcb_onfault;// On fault handler
struct fpreg pcb_fpstate;   // Floating Point state */
+   register_t  pcb_kstack; /* kernel stack address */
 };
 #endif /* _MACHINE_PCB_H_ */
Index: arch/riscv64/riscv64/db_trace.c
===
RCS file: src/sys/arch/riscv64/riscv64/db_trace.c,v
retrieving revision 1.5
diff -u -p -r1.5 db_trace.c
--- arch/riscv64/riscv64/db_trace.c 22 Feb 2022 07:46:04 -  1.5
+++ arch/riscv64/riscv64/db_trace.c 8 Mar 2022 16:54:58 -
@@ -141,3 +141,56 @@ db_stack_trace_print(db_expr_t addr, int
}
(*pr)("end trace frame: 0x%lx, count: %d\n", frame, count);
 }
+
+void
+stacktrace_save_at(struct stacktrace *st, unsigned int skip)
+{
+   struct callframe *frame, *lastframe, *limit;
+   struct pcb *pcb = curpcb;
+   Elf_Sym *sym;
+   db_expr_t diff;
+   vaddr_t ra, subr;
+
+   st->st_count = 0;
+
+   if (pcb == NULL)
+   return;
+
+   ra = (vaddr_t)__builtin_return_address(0);
+   frame = (struct callframe *)__builtin_frame_address(0);
+   KASSERT(INKERNEL(frame));
+   limit = (struct callframe *)((struct trapframe *)pcb->pcb_kstack - 1);
+
+   while (st->st_count < STACKTRACE_MAX) {
+   if (skip == 0)
+   st->st_pc[st->st_count++] = ra;
+   else
+   skip--;
+
+   sym = db_search_symbol(ra, DB_STGY_PROC, &diff);
+   if (sym == NULL)
+   break;
+   subr = ra - (vaddr_t)diff;
+
+   lastframe = frame;
+   if (subr == (vaddr_t)cpu_exception_handler_supervisor ||
+   subr == (vaddr_t)cpu_exception_handler_user) {
+   struct trapframe *tf = (struct trapframe *)frame;
+
+   frame = (struct callframe *)tf->tf_s[0];
+   ra = tf->tf_ra;
+   } else {
+   frame = frame[-1].f_frame;
+   if (frame == NULL)
+   break;
+   ra = frame[-1].f_ra;
+   }
+
+   if (frame <= lastframe)
+   break;
+   if (frame >= limit)
+   break;
+   if (!INKERNEL(ra))
+   break;
+   }
+}
Index: arch/riscv64/riscv64/vm_machdep.c
===
RCS file: src/sys/arch/riscv64/riscv64/vm_machdep.c,v
retrieving revision 1.10
diff -u -p -r1.10 vm_machdep.c
--- arch/riscv64/riscv64/vm_machdep.c   24 Feb 2022 14:19:10 -  1.10
+++ arch/riscv64/riscv64/vm_machdep.c   8 Mar 2022 16:54:58 -
@@ -75,13 +75,12 @@ cpu_fork(struct proc *p1, struct proc *p
 
pmap_activate(p2);
 
-   tf = (struct trapframe *)((u_long)p2->p_addr
+   pcb->pcb_kstack = STACKALIGN((u_long)p2->p_addr
+ USPACE
-   - sizeof(struct trapframe)
- sizeof(register_t)/* for holding curcpu */
- 0x10);
 
-   tf = (struct trapframe *)STACKALIGN(tf);
+   tf = (struct trapframe *)pcb->pcb_kstack - 1;
pcb->pcb_tf = tf;
*tf = *p1->p_addr->u_pcb.pcb_tf;
 



simplify mutexes for pf state table in pfsync

2022-03-08 Thread Alexandr Nedvedicky
Hello,

my original idea was to have one mutex per pfsync queue.  However, it makes
things more complicated than necessary for the queues which keep pf(4) state.
hrvoje@ hit some panics in this area recently. bluhm@ and I are still looking
at those issues. However, here is a first change to make the code simpler:
the diff below merges the array of ->sc_mtx[] into a single mutex.


OK ?

thanks and
regards
sashan

8<---8<---8<--8<

diff --git a/sys/net/if_pfsync.c b/sys/net/if_pfsync.c
index edeb2209161..9a92fc4d21f 100644
--- a/sys/net/if_pfsync.c
+++ b/sys/net/if_pfsync.c
@@ -212,7 +212,7 @@ struct pfsync_softc {
struct ip sc_template;
 
struct pf_state_queue sc_qs[PFSYNC_S_COUNT];
-   struct mutex sc_mtx[PFSYNC_S_COUNT];
+   struct mutex sc_st_mtx;
size_t   sc_len;
 
struct pfsync_upd_reqs   sc_upd_req_list;
@@ -324,13 +324,6 @@ pfsync_clone_create(struct if_clone *ifc, int unit)
struct pfsync_softc *sc;
struct ifnet *ifp;
int q;
-   static const char *mtx_names[] = {
-   "iack_mtx",
-   "upd_c_mtx",
-   "del_mtx",
-   "ins_mtx",
-   "upd_mtx",
-   "" };
 
if (unit != 0)
return (EINVAL);
@@ -338,10 +331,9 @@ pfsync_clone_create(struct if_clone *ifc, int unit)
pfsync_sync_ok = 1;
 
sc = malloc(sizeof(*pfsyncif), M_DEVBUF, M_WAITOK|M_ZERO);
-   for (q = 0; q < PFSYNC_S_COUNT; q++) {
+   for (q = 0; q < PFSYNC_S_COUNT; q++)
TAILQ_INIT(&sc->sc_qs[q]);
-   mtx_init_flags(&sc->sc_mtx[q], IPL_SOFTNET, mtx_names[q], 0);
-   }
+   mtx_init_flags(&sc->sc_st_mtx, IPL_SOFTNET, "st_mtx", 0);
 
pool_init(&sc->sc_pool, PFSYNC_PLSIZE, 0, IPL_SOFTNET, 0, "pfsync",
NULL);
@@ -1585,9 +1577,7 @@ pfsync_grab_snapshot(struct pfsync_snapshot *sn, struct 
pfsync_softc *sc)
 
sn->sn_sc = sc;
 
-   for (q = 0; q < PFSYNC_S_COUNT; q++)
-   mtx_enter(&sc->sc_mtx[q]);
-
+   mtx_enter(&sc->sc_st_mtx);
mtx_enter(&sc->sc_upd_req_mtx);
mtx_enter(&sc->sc_tdb_mtx);
 
@@ -1612,9 +1602,7 @@ pfsync_grab_snapshot(struct pfsync_snapshot *sn, struct 
pfsync_softc *sc)
 
mtx_leave(&sc->sc_tdb_mtx);
mtx_leave(&sc->sc_upd_req_mtx);
-
-   for (q = (PFSYNC_S_COUNT - 1); q >= 0; q--)
-   mtx_leave(&sc->sc_mtx[q]);
+   mtx_leave(&sc->sc_st_mtx);
 }
 
 void
@@ -2420,14 +2408,14 @@ pfsync_q_ins(struct pf_state *st, int q)
panic("pfsync pkt len is too low %zd", sc->sc_len);
 #endif
do {
-   mtx_enter(&sc->sc_mtx[q]);
+   mtx_enter(&sc->sc_st_mtx);
 
/*
 * If two threads are competing to insert the same state, then
 * there must be just single winner.
 */
if (st->sync_state != PFSYNC_S_NONE) {
-   mtx_leave(&sc->sc_mtx[q]);
+   mtx_leave(&sc->sc_st_mtx);
break;
}
 
@@ -2439,7 +2427,7 @@ pfsync_q_ins(struct pf_state *st, int q)
sc_len = atomic_add_long_nv(&sc->sc_len, nlen);
if (sc_len > sc->sc_if.if_mtu) {
atomic_sub_long(&sc->sc_len, nlen);
-   mtx_leave(&sc->sc_mtx[q]);
+   mtx_leave(&sc->sc_st_mtx);
pfsync_sendout();
continue;
}
@@ -2448,7 +2436,7 @@ pfsync_q_ins(struct pf_state *st, int q)
 
TAILQ_INSERT_TAIL(&sc->sc_qs[q], st, sync_list);
st->sync_state = q;
-   mtx_leave(&sc->sc_mtx[q]);
+   mtx_leave(&sc->sc_st_mtx);
} while (0);
 }
 
@@ -2456,18 +2444,19 @@ void
 pfsync_q_del(struct pf_state *st)
 {
struct pfsync_softc *sc = pfsyncif;
-   int q = st->sync_state;
+   int q;
 
KASSERT(st->sync_state != PFSYNC_S_NONE);
 
-   mtx_enter(&sc->sc_mtx[q]);
+   mtx_enter(&sc->sc_st_mtx);
+   q = st->sync_state;
atomic_sub_long(&sc->sc_len, pfsync_qs[q].len);
TAILQ_REMOVE(&sc->sc_qs[q], st, sync_list);
if (TAILQ_EMPTY(&sc->sc_qs[q]))
atomic_sub_long(&sc->sc_len, sizeof (struct pfsync_subheader));
-   mtx_leave(&sc->sc_mtx[q]);
-
st->sync_state = PFSYNC_S_NONE;
+   mtx_leave(&sc->sc_st_mtx);
+
pf_state_unref(st);
 }
 



atomic read write

2022-03-08 Thread Alexander Bluhm
Hi,

Some time ago we had the discussion about where we need the READ_ONCE()
macro.  As modern C compilers have much freedom in how they access memory,
I came to the conclusion that it would be wise to use READ_ONCE() and
WRITE_ONCE() everywhere we use variables that are also accessed with
atomic operations.  Using atomic operations on one side and letting the
compiler do whatever it thinks on the other side of the variable feels wrong.

The rule "use READ_ONCE, WRITE_ONCE, atomic_inc, atomic_dec consistently"
would be easy to follow.  Thinking about where the compiler might
reorder things and break MP code is much more complicated.
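
As a small illustration of the failure mode the rule avoids (illustrative
only, not code from the tree): without READ_ONCE() the compiler is free to
hoist the load out of the loop and spin on a stale value.

static unsigned int flag;		/* set to non-zero by another CPU */

void
wait_bad(void)
{
	while (flag == 0)		/* plain load, may be hoisted/cached */
		continue;
}

void
wait_better(void)
{
	while (READ_ONCE(flag) == 0)	/* re-read from memory on every pass */
		continue;
}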

Do we want to go this direction?

If yes, here is the change for struct refcnt and cond.  While there,
rename the field to r_refs, which is easier to grep.

ok?

bluhm

Index: dev/pci/if_iwm.c
===
RCS file: /data/mirror/openbsd/cvs/src/sys/dev/pci/if_iwm.c,v
retrieving revision 1.391
diff -u -p -r1.391 if_iwm.c
--- dev/pci/if_iwm.c8 Feb 2022 14:24:36 -   1.391
+++ dev/pci/if_iwm.c8 Mar 2022 14:57:26 -
@@ -9975,7 +9975,7 @@ iwm_init(struct ifnet *ifp)
 
generation = ++sc->sc_generation;
 
-   KASSERT(sc->task_refs.refs == 0);
+   KASSERT(READ_ONCE(sc->task_refs.r_refs) == 0);
+   refcnt_init(&sc->task_refs);
 
err = iwm_preinit(sc);
@@ -10116,7 +10116,7 @@ iwm_stop(struct ifnet *ifp)
iwm_del_task(sc, systq, &sc->mac_ctxt_task);
iwm_del_task(sc, systq, &sc->phy_ctxt_task);
iwm_del_task(sc, systq, &sc->bgscan_done_task);
-   KASSERT(sc->task_refs.refs >= 1);
+   KASSERT(READ_ONCE(sc->task_refs.r_refs) >= 1);
refcnt_finalize(&sc->task_refs, "iwmstop");
 
iwm_stop_device(sc);
Index: dev/pci/if_iwx.c
===
RCS file: /data/mirror/openbsd/cvs/src/sys/dev/pci/if_iwx.c,v
retrieving revision 1.134
diff -u -p -r1.134 if_iwx.c
--- dev/pci/if_iwx.c21 Jan 2022 15:51:02 -  1.134
+++ dev/pci/if_iwx.c8 Mar 2022 14:57:26 -
@@ -8017,7 +8017,7 @@ iwx_init(struct ifnet *ifp)
if (sc->sc_nvm.sku_cap_11n_enable)
iwx_setup_ht_rates(sc);
 
-   KASSERT(sc->task_refs.refs == 0);
+   KASSERT(READ_ONCE(sc->task_refs.r_refs) == 0);
refcnt_init(&sc->task_refs);
ifq_clr_oactive(&ifp->if_snd);
ifp->if_flags |= IFF_RUNNING;
@@ -8139,7 +8139,7 @@ iwx_stop(struct ifnet *ifp)
iwx_del_task(sc, systq, &sc->mac_ctxt_task);
iwx_del_task(sc, systq, &sc->phy_ctxt_task);
iwx_del_task(sc, systq, &sc->bgscan_done_task);
-   KASSERT(sc->task_refs.refs >= 1);
+   KASSERT(READ_ONCE(sc->task_refs.r_refs) >= 1);
refcnt_finalize(&sc->task_refs, "iwxstop");
 
iwx_stop_device(sc);
Index: kern/kern_synch.c
===
RCS file: /data/mirror/openbsd/cvs/src/sys/kern/kern_synch.c,v
retrieving revision 1.182
diff -u -p -r1.182 kern_synch.c
--- kern/kern_synch.c   19 Feb 2022 23:56:18 -  1.182
+++ kern/kern_synch.c   8 Mar 2022 14:57:26 -
@@ -804,7 +804,7 @@ sys___thrwakeup(struct proc *p, void *v,
 void
 refcnt_init(struct refcnt *r)
 {
-   r->refs = 1;
+   WRITE_ONCE(r->r_refs, 1);
 }
 
 void
@@ -813,10 +813,10 @@ refcnt_take(struct refcnt *r)
 #ifdef DIAGNOSTIC
u_int refcnt;
 
-   refcnt = atomic_inc_int_nv(&r->refs);
+   refcnt = atomic_inc_int_nv(&r->r_refs);
KASSERT(refcnt != 0);
#else
-   atomic_inc_int(&r->refs);
+   atomic_inc_int(&r->r_refs);
 #endif
 }
 
@@ -825,7 +825,7 @@ refcnt_rele(struct refcnt *r)
 {
u_int refcnt;
 
-   refcnt = atomic_dec_int_nv(&r->refs);
+   refcnt = atomic_dec_int_nv(&r->r_refs);
KASSERT(refcnt != ~0);
 
return (refcnt == 0);
@@ -844,10 +844,10 @@ refcnt_finalize(struct refcnt *r, const 
struct sleep_state sls;
u_int refcnt;
 
-   refcnt = atomic_dec_int_nv(&r->refs);
+   refcnt = atomic_dec_int_nv(&r->r_refs);
while (refcnt) {
sleep_setup(&sls, r, PWAIT, wmesg, 0);
-   refcnt = r->refs;
+   refcnt = READ_ONCE(r->r_refs);
sleep_finish(&sls, refcnt);
}
 }
@@ -855,13 +855,13 @@ refcnt_finalize(struct refcnt *r, const 
 void
 cond_init(struct cond *c)
 {
-   c->c_wait = 1;
+   WRITE_ONCE(c->c_wait, 1);
 }
 
 void
 cond_signal(struct cond *c)
 {
-   c->c_wait = 0;
+   WRITE_ONCE(c->c_wait, 0);
 
wakeup_one(c);
 }
@@ -872,10 +872,10 @@ cond_wait(struct cond *c, const char *wm
struct sleep_state sls;
int wait;
 
-   wait = c->c_wait;
+   wait = READ_ONCE(c->c_wait);
while (wait) {
sleep_setup(&sls, c, PWAIT, wmesg, 0);
-   wait = c->c_wait;
+   wait = READ_ONCE(c->c_wait);
sleep_finish(&sls, wait);
}
 }
Index: netinet/ip_ipsp.c
===
RCS file: /data/mirror/openbsd/cvs/src/sys/netinet/ip_ipsp.c,v

Re: bgpd expand macros in strings

2022-03-08 Thread Theo Buehler
On Tue, Mar 08, 2022 at 02:36:00PM +0100, Claudio Jeker wrote:
> bgpd's parse.y uses a lot of STRING that is then further bisected in the
> actual rule. One good example are all communities. Now if someone wants to
> use macros in such arguments they do not work in all cases. e.g.
> large-community $someas:1:2 works but large-community 1:$someas:2 does
> not.
> 
> Right now macro expansion only happens at the start of a token but not
> inside a string token. The following diff changes this. It will also
> expand:
> large-community $someas:$otheras:42
> 
> This only works if the macro name ends on a not-allowed-in-macro-name
> character ([^a-zA-Z0-9_]). So while 'descr v4_$name' or 'descr $name-v4'
> works, 'descr $name_v4' will not. Also no expansion happens inside quoted
> strings like 'descr "$name-v4"'.

The diff reads ok to me and makes sense, but I'm not super familiar with
yacc, so I'd prefer if someone with better yacc experience could look
over this, especially in view of what should be done in all the other
parse.y.



Re: ieee80211_stats userland vs. kernel

2022-03-08 Thread Stefan Sperling
On Mon, Mar 07, 2022 at 03:04:06PM -0700, Theo de Raadt wrote:
> > For now, the structs are identical so the code copying data out is
> > kept simple.
> 
> I think this is unwise, and you should write the field-by-field copying
> function at the same time, otherwise this is just asking for trouble.
> You really cannot wait until an intentional change.

Sure, here it is.

diff 0ed925b6612724f216b84360a04117aad1c6df9b 
6615def0a9e782a7fd930d75fda1ae4ec0f90dd2
blob - 85d795d745eb21fd218056c2f3faf7fbc2c7fe49
blob + d4f6b536b2eb2165f85f9df7c08a1cee36a428a4
--- sys/net80211/ieee80211_ioctl.c
+++ sys/net80211/ieee80211_ioctl.c
@@ -55,6 +55,7 @@ void   ieee80211_node2req(struct ieee80211com *,
const struct ieee80211_node *, struct ieee80211_nodereq *);
void   ieee80211_req2node(struct ieee80211com *,
const struct ieee80211_nodereq *, struct ieee80211_node *);
+struct ieee80211_statsreq *ieee80211_stats2req(struct ieee80211_stats *);
 
 void
 ieee80211_node2req(struct ieee80211com *ic, const struct ieee80211_node *ni,
@@ -179,6 +180,94 @@ ieee80211_req2node(struct ieee80211com *ic, const stru
ni->ni_state = nr->nr_state;
 }
 
+struct ieee80211_statsreq *
+ieee80211_stats2req(struct ieee80211_stats *stats)
+{
+   struct ieee80211_statsreq *req;
+
+   req = malloc(sizeof(*req), M_DEVBUF, M_ZERO | M_WAITOK | M_CANFAIL);
+   if (req == NULL)
+   return NULL;
+
+   req->is_rx_badversion = stats->is_rx_badversion;
+   req->is_rx_tooshort = stats->is_rx_tooshort;
+   req->is_rx_wrongbss = stats->is_rx_wrongbss;
+   req->is_rx_dup = stats->is_rx_dup;
+   req->is_rx_wrongdir = stats->is_rx_wrongdir;
+   req->is_rx_mcastecho = stats->is_rx_mcastecho;
+   req->is_rx_notassoc = stats->is_rx_notassoc;
+   req->is_rx_nowep = stats->is_rx_nowep;
+   req->is_rx_unencrypted = stats->is_rx_unencrypted;
+   req->is_rx_wepfail = stats->is_rx_wepfail;
+   req->is_rx_decap = stats->is_rx_decap;
+   req->is_rx_mgtdiscard = stats->is_rx_mgtdiscard;
+   req->is_rx_ctl = stats->is_rx_ctl;
+   req->is_rx_rstoobig = stats->is_rx_rstoobig;
+   req->is_rx_elem_missing = stats->is_rx_elem_missing;
+   req->is_rx_elem_toobig = stats->is_rx_elem_toobig;
+   req->is_rx_elem_toosmall = stats->is_rx_elem_toosmall;
+   req->is_rx_badchan = stats->is_rx_badchan;
+   req->is_rx_chanmismatch = stats->is_rx_chanmismatch;
+   req->is_rx_nodealloc = stats->is_rx_nodealloc;
+   req->is_rx_ssidmismatch = stats->is_rx_ssidmismatch;
+   req->is_rx_auth_unsupported = stats->is_rx_auth_unsupported;
+   req->is_rx_auth_fail = stats->is_rx_auth_fail;
+   req->is_rx_assoc_bss = stats->is_rx_assoc_bss;
+   req->is_rx_assoc_notauth = stats->is_rx_assoc_notauth;
+   req->is_rx_assoc_capmismatch = stats->is_rx_assoc_capmismatch;
+   req->is_rx_assoc_norate = stats->is_rx_assoc_norate;
+   req->is_rx_deauth = stats->is_rx_deauth;
+   req->is_rx_disassoc = stats->is_rx_disassoc;
+   req->is_rx_badsubtype = stats->is_rx_badsubtype;
+   req->is_rx_nombuf = stats->is_rx_nombuf;
+   req->is_rx_decryptcrc = stats->is_rx_decryptcrc;
+   req->is_rx_ahdemo_mgt = stats->is_rx_ahdemo_mgt;
+   req->is_rx_bad_auth = stats->is_rx_bad_auth;
+   req->is_tx_nombuf = stats->is_tx_nombuf;
+   req->is_tx_nonode = stats->is_tx_nonode;
+   req->is_tx_unknownmgt = stats->is_tx_unknownmgt;
+   req->is_scan_active = stats->is_scan_active;
+   req->is_scan_passive = stats->is_scan_passive;
+   req->is_node_timeout = stats->is_node_timeout;
+   req->is_crypto_nomem = stats->is_crypto_nomem;
+   req->is_rx_assoc_badrsnie = stats->is_rx_assoc_badrsnie;
+   req->is_rx_unauth = stats->is_rx_unauth;
+   req->is_tx_noauth = stats->is_tx_noauth;
+   req->is_rx_eapol_key = stats->is_rx_eapol_key;
+   req->is_rx_eapol_replay = stats->is_rx_eapol_replay;
+   req->is_rx_eapol_badmic = stats->is_rx_eapol_badmic;
+   req->is_rx_remmicfail = stats->is_rx_remmicfail;
+   req->is_rx_locmicfail = stats->is_rx_locmicfail;
+   req->is_tkip_replays = stats->is_tkip_replays;
+   req->is_tkip_icv_errs = stats->is_tkip_icv_errs;
+   req->is_ccmp_replays = stats->is_ccmp_replays;
+   req->is_ccmp_dec_errs = stats->is_ccmp_dec_errs;
+   req->is_cmac_replays = stats->is_cmac_replays;
+   req->is_cmac_icv_errs = stats->is_cmac_icv_errs;
+   req->is_pbac_errs = stats->is_pbac_errs;
+   req->is_ht_nego_no_mandatory_mcs = stats->is_ht_nego_no_mandatory_mcs;
+   req->is_ht_nego_no_basic_mcs = stats->is_ht_nego_no_basic_mcs;
+   req->is_ht_nego_bad_crypto = stats->is_ht_nego_bad_crypto;
+   req->is_ht_prot_change = stats->is_ht_prot_change;
+   req->is_ht_rx_ba_agreements = stats->is_ht_rx_ba_agreements;
+   req->is_ht_tx_ba_agreements = stats->is_ht_tx_ba_agreements;
+   req->is_ht_rx_frame_below_ba_winstart =
+   

Re: pluart(4): fifo support

2022-03-08 Thread Visa Hankala
On Tue, Mar 08, 2022 at 08:04:36AM +0100, Anton Lindqvist wrote:
> On Mon, Mar 07, 2022 at 07:36:35AM +, Visa Hankala wrote:
> > I still think that checking TXFF and using the same code for both
> > SBSA and true PL011 UARTs would be the best choice. This would avoid
> > fragmenting the code and improve robustness by relying on functionality
> > that is common to the different controller variants.
> 
> Fair enough, new diff.

Maybe the comments should omit the FIFO space description and just
mention the lack of the level control register in the SBSA UART
register interface.

OK visa@

> diff --git sys/dev/acpi/pluart_acpi.c sys/dev/acpi/pluart_acpi.c
> index dc8ea5e9922..08ebe13ffbc 100644
> --- sys/dev/acpi/pluart_acpi.c
> +++ sys/dev/acpi/pluart_acpi.c
> @@ -91,6 +91,8 @@ pluart_acpi_attach(struct device *parent, struct device 
> *self, void *aux)
>   return;
>   }
>  
> + sc->sc.sc_hwflags |= COM_HW_SBSA;
> +
> + pluart_attach_common(&sc->sc, pluart_acpi_is_console(sc));
>  }
>  
> diff --git sys/dev/fdt/pluart_fdt.c sys/dev/fdt/pluart_fdt.c
> index 7f17365f1d6..798250593bf 100644
> --- sys/dev/fdt/pluart_fdt.c
> +++ sys/dev/fdt/pluart_fdt.c
> @@ -69,6 +69,9 @@ pluart_fdt_attach(struct device *parent, struct device 
> *self, void *aux)
>   return;
>   }
>  
> + if (OF_is_compatible(faa->fa_node, "arm,sbsa-uart"))
> + sc->sc_hwflags |= COM_HW_SBSA;
> +
>   sc->sc_irq = fdt_intr_establish(faa->fa_node, IPL_TTY, pluart_intr,
>   sc, sc->sc_dev.dv_xname);
>  
> diff --git sys/dev/ic/pluart.c sys/dev/ic/pluart.c
> index eaa11b6c44b..457c88d8cad 100644
> --- sys/dev/ic/pluart.c
> +++ sys/dev/ic/pluart.c
> @@ -99,6 +99,13 @@
>  #define UART_CR_CTSE (1 << 14)   /* CTS hardware flow control enable */
>  #define UART_CR_RTSE (1 << 15)   /* RTS hardware flow control enable */
>  #define UART_IFLS    0x34        /* Interrupt FIFO level select register */
> +#define UART_IFLS_RX_SHIFT   3   /* RX level in bits [5:3] */
> +#define UART_IFLS_TX_SHIFT   0   /* TX level in bits [2:0] */
> +#define UART_IFLS_1_8    0       /* FIFO 1/8 full */
> +#define UART_IFLS_1_4    1       /* FIFO 1/4 full */
> +#define UART_IFLS_1_2    2       /* FIFO 1/2 full */
> +#define UART_IFLS_3_4    3       /* FIFO 3/4 full */
> +#define UART_IFLS_7_8    4       /* FIFO 7/8 full */
>  #define UART_IMSC    0x38        /* Interrupt mask set/clear register */
>  #define UART_IMSC_RIMIM  (1 << 0)
>  #define UART_IMSC_CTSMIM (1 << 1)
> @@ -115,8 +122,16 @@
>  #define UART_MIS     0x40        /* Masked interrupt status register */
>  #define UART_ICR     0x44        /* Interrupt clear register */
>  #define UART_DMACR   0x48        /* DMA control register */
> +#define UART_PID0    0xfe0       /* Peripheral identification register 0 */
> +#define UART_PID1    0xfe4       /* Peripheral identification register 1 */
> +#define UART_PID2    0xfe8       /* Peripheral identification register 2 */
> +#define UART_PID2_REV(x) (((x) & 0xf0) >> 4)
> +#define UART_PID3    0xfec       /* Peripheral identification register 3 */
>  #define UART_SPACE   0x100
>  
> +#define UART_FIFO_SIZE   16
> +#define UART_FIFO_SIZE_R3    32
> +
>  void pluartcnprobe(struct consdev *cp);
>  void pluartcninit(struct consdev *cp);
>  int pluartcngetc(dev_t dev);
> @@ -150,7 +165,31 @@ struct cdevsw pluartdev =
>  void
>  pluart_attach_common(struct pluart_softc *sc, int console)
>  {
> - int maj;
> + int fifolen, maj;
> + int lcr;
> +
> + if ((sc->sc_hwflags & COM_HW_SBSA) == 0) {
> + int rev;
> +
> + rev = UART_PID2_REV(bus_space_read_4(sc->sc_iot, sc->sc_ioh,
> + UART_PID2));
> + if (rev < 3)
> + fifolen = UART_FIFO_SIZE;
> + else
> + fifolen = UART_FIFO_SIZE_R3;
> + printf(": rev %d, %d byte fifo\n", rev, fifolen);
> + } else {
> + /*
> +  * The SBSA UART is PL011 r1p5 compliant which implies revision
> +  * 3 with a 32 byte FIFO. However, we cannot expect to configure
> +  * RX/TX interrupt levels using the UARTIFLS register making it
> +  * impossible to make assumptions about the number of available
> +  * bytes in the FIFO. Therefore disable FIFO support for such
> +  * devices.
> +  */
> + fifolen = 0;
> + printf("\n");
> + }
>  
>   if (console) {
>   /* Locate the major number. */
> @@ -159,7 +198,7 @@ pluart_attach_common(struct pluart_softc *sc, int console)
>   break;
> 

bgpd expand macros in strings

2022-03-08 Thread Claudio Jeker
bgpd's parse.y uses a lot of STRING that is then further bisected in the
actual rule. One good example are all communities. Now if someone wants to
use macros in such arguments they do not work in all cases. e.g.
large-community $someas:1:2 works but large-community 1:$someas:2 does
not.

Right now macro expansion only happens at the start of a token but not
inside a string token. The following diff changes this. It will also
expand:
large-community $someas:$otheras:42

This only works if the macro name ends on a not-allowed-in-macro-name
character ([^a-zA-Z0-9_]). So while 'descr v4_$name' or 'descr $name-v4'
works, 'descr $name_v4' will not. Also no expansion happens inside quoted
strings like 'descr "$name-v4"'.

-- 
:wq Claudio

Index: parse.y
===
RCS file: /cvs/src/usr.sbin/bgpd/parse.y,v
retrieving revision 1.422
diff -u -p -r1.422 parse.y
--- parse.y 23 Feb 2022 11:20:35 -  1.422
+++ parse.y 8 Mar 2022 09:25:55 -
@@ -48,6 +48,8 @@
 #include "rde.h"
 #include "log.h"
 
+#define MACRO_NAME_LEN 128
+
 TAILQ_HEAD(files, file) files = TAILQ_HEAD_INITIALIZER(files);
 static struct file {
TAILQ_ENTRY(file) entry;
@@ -74,6 +76,7 @@ int igetc(void);
int lgetc(int);
void lungetc(int);
int findeol(void);
+int expand_macro(void);
 
TAILQ_HEAD(symhead, sym) symhead = TAILQ_HEAD_INITIALIZER(symhead);
 struct sym {
@@ -380,17 +383,25 @@ yesno :  STRING   {
 
 varset : STRING '=' string {
char *s = $1;
+   if (strlen($1) >= MACRO_NAME_LEN) {
+   yyerror("macro name to long, max %d characters",
+   MACRO_NAME_LEN - 1);
+   free($1);
+   free($3);
+   YYERROR;
+   }
+   do {
+   if (isalnum((unsigned char)*s) || *s == '_')
+   continue;
+   yyerror("macro name can only contain "
+   "alphanumerics and '_'");
+   free($1);
+   free($3);
+   YYERROR;
+   } while (*++s);
+
if (cmd_opts & BGPD_OPT_VERBOSE)
printf("%s = \"%s\"\n", $1, $3);
-   while (*s++) {
-   if (isspace((unsigned char)*s)) {
-   yyerror("macro name cannot contain "
-   "whitespace");
-   free($1);
-   free($3);
-   YYERROR;
-   }
-   }
if (symset($1, $3, 0) == -1)
fatal("cannot store variable");
free($1);
@@ -3169,10 +3180,46 @@ findeol(void)
 }
 
 int
+expand_macro(void)
+{
+   char buf[MACRO_NAME_LEN];
+   char*p, *val;
+   int  c;
+
+   p = buf;
+   while (1) {
+   if ((c = lgetc('$')) == EOF)
+   return (ERROR);
+   if (p + 1 >= buf + sizeof(buf) - 1) {
+   yyerror("macro name too long");
+   return (ERROR);
+   }
+   if (isalnum(c) || c == '_') {
+   *p++ = c;
+   continue;
+   }
+   *p = '\0';
+   lungetc(c);
+   break;
+   }
+   val = symget(buf);
+   if (val == NULL)
+   yyerror("macro '%s' not defined", buf);
+   p = val + strlen(val) - 1;
+   lungetc(DONE_EXPAND);
+   while (p >= val) {
+   lungetc((unsigned char)*p);
+   p--;
+   }
+   lungetc(START_EXPAND);
+   return (0);
+}
+
+int
 yylex(void)
 {
char buf[8096];
-   char*p, *val;
+   char*p;
int  quotec, next, c;
int  token;
 
@@ -3186,34 +3233,9 @@ top:
while ((c = lgetc(0)) != '\n' && c != EOF)
; /* nothing */
if (c == '$' && !expanding) {
-   while (1) {
-   if ((c = lgetc(0)) == EOF)
-   return (0);
-
-   if (p + 1 >= buf + sizeof(buf) - 1) {
-   yyerror("string too long");
-   return (findeol());
-   }
-   if (isalnum(c) || c == '_') {
-   *p++ = c;
-   

Re: [patch] ksh: backspace in search-history buffer with UTF8 chars

2022-03-08 Thread Mikhail
On Tue, Mar 08, 2022 at 03:15:55PM +0300, Mikhail wrote:
> The inlined diff helps ksh's search-history function (ctrl-r) handle
> backspace for UTF-8 characters properly. Without the patch, if I have
> UTF-8 characters in my search buffer, I need to press backspace twice to
> move the cursor to the left.
> 
> The search itself is not perfect for UTF-8 either: sometimes I get
> patterns which are not handled properly, or the input prompt becomes
> mangled, but the issue with backspace is what currently bothers me the
> most.
> 
> Comments, objections?

Better version, behavior is not changed:


diff --git bin/ksh/emacs.c bin/ksh/emacs.c
index 5370e814f70..a5ed4095ae5 100644
--- bin/ksh/emacs.c
+++ bin/ksh/emacs.c
@@ -907,8 +907,11 @@ x_search_hist(int c)
offset = -1;
break;
}
-   if (p > pat)
-   *--p = '\0';
+   if (p > pat) {
+   while (isu8cont(*--p))
+   ;
+   *p ='\0';
+   }
if (p == pat)
offset = -1;
else



Re: bgpd: plug leaks in rtr_parse_ipv{4,6}_prefix()

2022-03-08 Thread Claudio Jeker
On Tue, Mar 08, 2022 at 01:33:01PM +0100, Theo Buehler wrote:
> If the length checks trigger, roa is leaked.  It makes more sense to me
> to copy the data into ip4 and ip6, check lengths and then calloc rather
> than the current order, so I moved the calloc down a bit. Alternatively,
> we could just add a free(roa) before the return -1 in the length checks.
> 
> Index: rtr_proto.c
> ===
> RCS file: /cvs/src/usr.sbin/bgpd/rtr_proto.c,v
> retrieving revision 1.5
> diff -u -p -U4 -r1.5 rtr_proto.c
> --- rtr_proto.c   6 Feb 2022 09:51:19 -   1.5
> +++ rtr_proto.c   8 Mar 2022 12:26:29 -
> @@ -441,23 +441,23 @@ rtr_parse_ipv4_prefix(struct rtr_session
>   return -1;
>   }
>  
>   memcpy(&ip4, buf + sizeof(struct rtr_header), sizeof(ip4));
> -
> - if ((roa = calloc(1, sizeof(*roa))) == NULL) {
> - log_warn("rtr %s: received %s",
> - log_rtr(rs), log_rtr_type(IPV4_PREFIX));
> - rtr_send_error(rs, INTERNAL_ERROR, "out of memory", NULL, 0);
> - return -1;
> - }
>   if (ip4.prefixlen > 32 || ip4.maxlen > 32 ||
>   ip4.prefixlen > ip4.maxlen) {
>   log_warnx("rtr: %s: received %s: bad prefixlen / maxlen",
>   log_rtr(rs), log_rtr_type(IPV4_PREFIX));
>   rtr_send_error(rs, CORRUPT_DATA, "bad prefixlen / maxlen",
>   buf, len);
>   return -1;
>   }
> +
> + if ((roa = calloc(1, sizeof(*roa))) == NULL) {
> + log_warn("rtr %s: received %s",
> + log_rtr(rs), log_rtr_type(IPV4_PREFIX));
> + rtr_send_error(rs, INTERNAL_ERROR, "out of memory", NULL, 0);
> + return -1;
> + }
>   roa->aid = AID_INET;
>   roa->prefixlen = ip4.prefixlen;
>   roa->maxlen = ip4.maxlen;
>   roa->asnum = ntohl(ip4.asnum);
> @@ -510,21 +510,21 @@ rtr_parse_ipv6_prefix(struct rtr_session
>   return -1;
>   }
>  
>   memcpy(&ip6, buf + sizeof(struct rtr_header), sizeof(ip6));
> -
> - if ((roa = calloc(1, sizeof(*roa))) == NULL) {
> - log_warn("rtr %s: received %s",
> - log_rtr(rs), log_rtr_type(IPV6_PREFIX));
> - rtr_send_error(rs, INTERNAL_ERROR, "out of memory", NULL, 0);
> - return -1;
> - }
>   if (ip6.prefixlen > 128 || ip6.maxlen > 128 ||
>   ip6.prefixlen > ip6.maxlen) {
>   log_warnx("rtr: %s: received %s: bad prefixlen / maxlen",
>   log_rtr(rs), log_rtr_type(IPV6_PREFIX));
>   rtr_send_error(rs, CORRUPT_DATA, "bad prefixlen / maxlen",
>   buf, len);
> + return -1;
> + }
> +
> + if ((roa = calloc(1, sizeof(*roa))) == NULL) {
> + log_warn("rtr %s: received %s",
> + log_rtr(rs), log_rtr_type(IPV6_PREFIX));
> + rtr_send_error(rs, INTERNAL_ERROR, "out of memory", NULL, 0);
>   return -1;
>   }
>   roa->aid = AID_INET6;
>   roa->prefixlen = ip6.prefixlen;
> 

OK claudio@

-- 
:wq Claudio



bgpd: plug leaks in rtr_parse_ipv{4,6}_prefix()

2022-03-08 Thread Theo Buehler
If the length checks trigger, roa is leaked.  It makes more sense to me
to copy the data into ip4 and ip6, check lengths and then calloc rather
than the current order, so I moved the calloc down a bit. Alternatively,
we could just add a free(roa) before the return -1 in the length checks.

Index: rtr_proto.c
===
RCS file: /cvs/src/usr.sbin/bgpd/rtr_proto.c,v
retrieving revision 1.5
diff -u -p -U4 -r1.5 rtr_proto.c
--- rtr_proto.c 6 Feb 2022 09:51:19 -   1.5
+++ rtr_proto.c 8 Mar 2022 12:26:29 -
@@ -441,23 +441,23 @@ rtr_parse_ipv4_prefix(struct rtr_session
return -1;
}
 
memcpy(&ip4, buf + sizeof(struct rtr_header), sizeof(ip4));
-
-   if ((roa = calloc(1, sizeof(*roa))) == NULL) {
-   log_warn("rtr %s: received %s",
-   log_rtr(rs), log_rtr_type(IPV4_PREFIX));
-   rtr_send_error(rs, INTERNAL_ERROR, "out of memory", NULL, 0);
-   return -1;
-   }
if (ip4.prefixlen > 32 || ip4.maxlen > 32 ||
ip4.prefixlen > ip4.maxlen) {
log_warnx("rtr: %s: received %s: bad prefixlen / maxlen",
log_rtr(rs), log_rtr_type(IPV4_PREFIX));
rtr_send_error(rs, CORRUPT_DATA, "bad prefixlen / maxlen",
buf, len);
return -1;
}
+
+   if ((roa = calloc(1, sizeof(*roa))) == NULL) {
+   log_warn("rtr %s: received %s",
+   log_rtr(rs), log_rtr_type(IPV4_PREFIX));
+   rtr_send_error(rs, INTERNAL_ERROR, "out of memory", NULL, 0);
+   return -1;
+   }
roa->aid = AID_INET;
roa->prefixlen = ip4.prefixlen;
roa->maxlen = ip4.maxlen;
roa->asnum = ntohl(ip4.asnum);
@@ -510,21 +510,21 @@ rtr_parse_ipv6_prefix(struct rtr_session
return -1;
}
 
memcpy(&ip6, buf + sizeof(struct rtr_header), sizeof(ip6));
-
-   if ((roa = calloc(1, sizeof(*roa))) == NULL) {
-   log_warn("rtr %s: received %s",
-   log_rtr(rs), log_rtr_type(IPV6_PREFIX));
-   rtr_send_error(rs, INTERNAL_ERROR, "out of memory", NULL, 0);
-   return -1;
-   }
if (ip6.prefixlen > 128 || ip6.maxlen > 128 ||
ip6.prefixlen > ip6.maxlen) {
log_warnx("rtr: %s: received %s: bad prefixlen / maxlen",
log_rtr(rs), log_rtr_type(IPV6_PREFIX));
rtr_send_error(rs, CORRUPT_DATA, "bad prefixlen / maxlen",
buf, len);
+   return -1;
+   }
+
+   if ((roa = calloc(1, sizeof(*roa))) == NULL) {
+   log_warn("rtr %s: received %s",
+   log_rtr(rs), log_rtr_type(IPV6_PREFIX));
+   rtr_send_error(rs, INTERNAL_ERROR, "out of memory", NULL, 0);
return -1;
}
roa->aid = AID_INET6;
roa->prefixlen = ip6.prefixlen;



[patch] ksh: backspace in search-history buffer with UTF8 chars

2022-03-08 Thread Mikhail
The inlined diff helps ksh's search-history function (ctrl-r) handle
backspace for UTF-8 characters properly. Without the patch, if I have
UTF-8 characters in my search buffer, I need to press backspace twice to
move the cursor to the left.

The search itself is not perfect for UTF-8 either: sometimes I get
patterns which are not handled properly, or the input prompt becomes
mangled, but the issue with backspace is what currently bothers me the
most.

Comments, objections?
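
For reference, here is a small standalone sketch of the byte-level idea the
diff below relies on (isu8cont mirrors the helper already in emacs.c):

#include <stdio.h>

/* UTF-8 continuation bytes look like 10xxxxxx. */
static int
isu8cont(unsigned char c)
{
	return (c & 0xc0) == 0x80;
}

int
main(void)
{
	char buf[] = "caf\xc3\xa9";		/* "café"; the é is two bytes */
	char *p = buf + sizeof(buf) - 1;	/* points at the terminating NUL */

	while (p > buf && isu8cont((unsigned char)*--p))
		;				/* step back over continuation bytes */
	*p = '\0';				/* erases one character, not one byte */
	printf("%s\n", buf);			/* prints "caf" */
	return 0;
}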

diff --git bin/ksh/emacs.c bin/ksh/emacs.c
index 5370e814f70..0058bad9adb 100644
--- bin/ksh/emacs.c
+++ bin/ksh/emacs.c
@@ -908,7 +908,9 @@ x_search_hist(int c)
break;
}
if (p > pat)
-   *--p = '\0';
+   while (isu8cont(*--p))
+   ;
+   *p ='\0';
if (p == pat)
offset = -1;
else