Re: acpihpet(4): acpihpet_delay: only use lower 32 bits of counter

2022-09-11 Thread Jonathan Gray
On Fri, Sep 09, 2022 at 07:32:58AM -0500, Scott Cheloha wrote:
> On Fri, Sep 09, 2022 at 03:59:01PM +1000, Jonathan Gray wrote:
> > On Thu, Sep 08, 2022 at 08:31:21PM -0500, Scott Cheloha wrote:
> > > On Sat, Aug 27, 2022 at 09:28:06PM -0500, Scott Cheloha wrote:
> > > > Whoops, forgot about the split read problem.  My mistake.
> > > > 
> > > > Because 32-bit platforms cannot do bus_space_read_8 atomically, and
> > > > i386 can use acpihpet(4), we can only safely use the lower 32 bits of
> > > > the counter in acpihpet_delay() (unless we want two versions of
> > > > acpihpet_delay()... which I don't).
> > > > 
> > > > Switch from acpihpet_r() to bus_space_read_4(9) and accumulate cycles
> > > > as we do in acpitimer_delay().  Unlike acpitimer(4), the HPET is a
> > > > 64-bit counter so we don't need to mask the difference between val1
> > > > and val2.
> > > > 
> > > > [...]
> > > 
> > > 12 day ping.
> > > 
> > > This needs fixing before it causes problems.
> > 
> > the hpet spec says to set a bit to force a 32-bit counter on
> > 32-bit platforms
> > 
> > see 2.4.7 Issues related to 64-bit Timers with 32-bit CPUs, in
> > https://www.intel.com/content/dam/www/public/us/en/documents/technical-specifications/software-developers-hpet-spec-1-0a.pdf
> 
> I don't follow your meaning.  Putting the HPET in 32-bit mode doesn't
> help us here, it would just break acpihpet_delay() in a different way.
> 
> The problem is that acpihpet_delay() is written as a 64-bit delay(9)
> and there's no way to do that safely on i386 without introducing extra
> overhead.
> 
> The easiest and cheapest fix is to rewrite acpihpet_delay() as a
> 32-bit delay(9), i.e. we count cycles until we pass a threshold.
> acpitimer_delay() in acpi/acpitimer.c is a 32-bit delay(9) and it
> works great, let's just do the same thing again here.
> 
> ok?

Seems to handle wrapping.

ok jsg@

> 
> Index: acpihpet.c
> ===
> RCS file: /cvs/src/sys/dev/acpi/acpihpet.c,v
> retrieving revision 1.28
> diff -u -p -r1.28 acpihpet.c
> --- acpihpet.c25 Aug 2022 18:01:54 -  1.28
> +++ acpihpet.c9 Sep 2022 12:29:41 -
> @@ -281,13 +281,19 @@ acpihpet_attach(struct device *parent, s
>  void
>  acpihpet_delay(int usecs)
>  {
> - uint64_t c, s;
> + uint64_t count = 0, cycles;
>   struct acpihpet_softc *sc = hpet_timecounter.tc_priv;
> + uint32_t val1, val2;
>  
> - s = acpihpet_r(sc->sc_iot, sc->sc_ioh, HPET_MAIN_COUNTER);
> - c = usecs * hpet_timecounter.tc_frequency / 1000000;
> - while (acpihpet_r(sc->sc_iot, sc->sc_ioh, HPET_MAIN_COUNTER) - s < c)
> + val2 = bus_space_read_4(sc->sc_iot, sc->sc_ioh, HPET_MAIN_COUNTER);
> + cycles = usecs * hpet_timecounter.tc_frequency / 1000000;
> + while (count < cycles) {
>   CPU_BUSY_CYCLE();
> + val1 = val2;
> + val2 = bus_space_read_4(sc->sc_iot, sc->sc_ioh,
> + HPET_MAIN_COUNTER);
> + count += val2 - val1;
> + }
>  }
>  
>  u_int
> 
> 



Re: tetris(6) "Random Generator" and advanced controls

2022-09-11 Thread Tom MTT.
ping?



rpki-client 8.0 released

2022-09-11 Thread Sebastian Benoit
rpki-client 8.0 has just been released and will be available in the
rpki-client directory of any OpenBSD mirror soon.

rpki-client is a FREE, easy-to-use implementation of the Resource
Public Key Infrastructure (RPKI) for Relying Parties (RP) to
facilitate validation of BGP announcements. The program queries the
global RPKI repository system and validates untrusted network inputs.
The program outputs validated ROA payloads, BGPsec Router keys, and
ASPA payloads in configuration formats suitable for OpenBGPD and BIRD,
and supports emitting CSV and JSON for consumption by other routing
stacks.

See RFC 6480 and RFC 6811 for a description of how RPKI and BGP Prefix
Origin Validation help secure the global Internet routing system.

rpki-client was primarily developed by Kristaps Dzonsons, Claudio
Jeker, Job Snijders, Theo Buehler, Theo de Raadt and Sebastian Benoit
as part of the OpenBSD Project.

This release includes the following changes to the previous release:

* Add support for validating Autonomous System Provider Authorization
  (ASPA) objects conforming to draft-ietf-sidrops-aspa-profile-10.
  Validated ASPA payloads are visible in JSON and filemode (-f) output.
* Set rsync connection I/O idle timeout to 15 seconds.
* Unify the maximum idle I/O and connect timeouts for RSYNC & HTTPS.
* Rpki-client now performs stricter EE certificate validation:
- Disallow AS Resources extensions in ROA EE certificates.
- Disallow Subject Information Access (SIA) extensions in RPKI
  Signed Checklist (RSC) EE certs.
- Check the resources in ROAs and RSCs against EE certs.
* Improve readability of, and add to, the various information printed in
  verbose mode.
* Extend filemode (-f) output and print X.509 certificates in PEM
  format when increased verbosity (-vv) is specified.
* Shorten the RRDP I/O idle timeout.
* Introduce a deadline timer that aborts all repository synchronization
  after seven eighths of the timeout (-s). With this rpki-client has
  improved chances to complete and produce output even when a CA is
  excessively slow.
* Abort a currently running RRDP request process when the per-repository
  timeout is reached.
* Permit multiple AccessDescription entries in SIA X.509 extensions. While
  fetching from secondary locations is not yet supported, rpki-client will
  not treat their occurrence as a fatal error.
* Resolve a potential race condition in non-atomic RRDP deltas.
* Fix some memory leaks.
* Improve compliance with the HTTP protocol specification.

rpki-client works on all operating systems with a libcrypto library
based on OpenSSL 1.1 or LibreSSL 3.5, and a libtls library compatible
with LibreSSL 3.5 or later.

rpki-client is known to compile and run on at least the following
operating systems: Alpine, CentOS, Debian, Fedora, FreeBSD, Red Hat,
Rocky, Ubuntu, macOS, and of course OpenBSD!

It is our hope that packagers take interest and help adapt
rpki-client-portable to more distributions.

The mirrors where rpki-client can be found are on
https://www.rpki-client.org/portable.html

Reporting Bugs:
===

General bugs may be reported to tech@openbsd.org

Portable bugs may be filed at
https://github.com/rpki-client/rpki-client-portable

We welcome feedback and improvements from the broader community.
Thanks to all of the contributors who helped make this release
possible.

Assistance to coordinate security issues is available via
secur...@openbsd.org.



sparc64: 32-bit compatibility cleanup

2022-09-11 Thread Scott Cheloha
kettenis@ suggested in a different thread that we ought to clean up
the 32-bit compatibility cruft in the sparc64 machine headers before
it would be safe to move the clockframe definition into frame.h:

https://marc.info/?l=openbsd-tech&m=166179164008301&w=2

> We really should be getting rid of the xxx32 stuff and rename the
> xxx64 ones to xxx.  And move trapframe (and possibly rwindow) to
> frame.h.

miod@ came forward in private and offered the attached patch to do so.

I don't have a sparc64 machine so I can't test it.  But if this
cleanup is indeed a necessary step to consolidating the clockframe
definitions I guess I can just ask:

Does this patch work for everyone?  Can we go ahead with this?

Index: dev/creator.c
===
RCS file: /OpenBSD/src/sys/arch/sparc64/dev/creator.c,v
retrieving revision 1.55
diff -u -p -r1.55 creator.c
--- dev/creator.c   15 Jul 2022 17:57:26 -  1.55
+++ dev/creator.c   30 Aug 2022 18:33:27 -
@@ -33,8 +33,9 @@
 #include 
 #include 
 
-#include 
 #include 
+#include 
+#include 
 #include 
 
 #include 
Index: fpu/fpu.c
===
RCS file: /OpenBSD/src/sys/arch/sparc64/fpu/fpu.c,v
retrieving revision 1.21
diff -u -p -r1.21 fpu.c
--- fpu/fpu.c   19 Aug 2020 10:10:58 -  1.21
+++ fpu/fpu.c   30 Aug 2022 18:33:27 -
@@ -81,22 +81,22 @@
 #include 
 
 int fpu_regoffset(int, int);
-int fpu_insn_fmov(struct fpstate64 *, struct fpemu *, union instr);
-int fpu_insn_fabs(struct fpstate64 *, struct fpemu *, union instr);
-int fpu_insn_fneg(struct fpstate64 *, struct fpemu *, union instr);
+int fpu_insn_fmov(struct fpstate *, struct fpemu *, union instr);
+int fpu_insn_fabs(struct fpstate *, struct fpemu *, union instr);
+int fpu_insn_fneg(struct fpstate *, struct fpemu *, union instr);
 int fpu_insn_itof(struct fpemu *, union instr, int, int *,
 int *, u_int *);
 int fpu_insn_ftoi(struct fpemu *, union instr, int *, int, u_int *);
 int fpu_insn_ftof(struct fpemu *, union instr, int *, int *, u_int *);
 int fpu_insn_fsqrt(struct fpemu *, union instr, int *, int *, u_int *);
-int fpu_insn_fcmp(struct fpstate64 *, struct fpemu *, union instr, int);
+int fpu_insn_fcmp(struct fpstate *, struct fpemu *, union instr, int);
 int fpu_insn_fmul(struct fpemu *, union instr, int *, int *, u_int *);
 int fpu_insn_fmulx(struct fpemu *, union instr, int *, int *, u_int *);
 int fpu_insn_fdiv(struct fpemu *, union instr, int *, int *, u_int *);
 int fpu_insn_fadd(struct fpemu *, union instr, int *, int *, u_int *);
 int fpu_insn_fsub(struct fpemu *, union instr, int *, int *, u_int *);
-int fpu_insn_fmovcc(struct proc *, struct fpstate64 *, union instr);
-int fpu_insn_fmovr(struct proc *, struct fpstate64 *, union instr);
+int fpu_insn_fmovcc(struct proc *, struct fpstate *, union instr);
+int fpu_insn_fmovr(struct proc *, struct fpstate *, union instr);
 void fpu_fcopy(u_int *, u_int *, int);
 
 #ifdef DEBUG
@@ -115,7 +115,7 @@ fpu_dumpfpn(struct fpn *fp)
fp->fp_mant[2], fp->fp_mant[3], fp->fp_exp);
 }
 void
-fpu_dumpstate(struct fpstate64 *fs)
+fpu_dumpstate(struct fpstate *fs)
 {
int i;
 
@@ -189,7 +189,7 @@ fpu_fcopy(src, dst, type)
 void
 fpu_cleanup(p, fs)
register struct proc *p;
-   register struct fpstate64 *fs;
+   register struct fpstate *fs;
 {
register int i, fsr = fs->fs_fsr, error;
union instr instr;
@@ -455,7 +455,7 @@ fpu_execute(p, fe, instr)
  */
 int
 fpu_insn_fmov(fs, fe, instr)
-   struct fpstate64 *fs;
+   struct fpstate *fs;
struct fpemu *fe;
union instr instr;
 {
@@ -478,7 +478,7 @@ fpu_insn_fmov(fs, fe, instr)
  */
 int
 fpu_insn_fabs(fs, fe, instr)
-   struct fpstate64 *fs;
+   struct fpstate *fs;
struct fpemu *fe;
union instr instr;
 {
@@ -502,7 +502,7 @@ fpu_insn_fabs(fs, fe, instr)
  */
 int
 fpu_insn_fneg(fs, fe, instr)
-   struct fpstate64 *fs;
+   struct fpstate *fs;
struct fpemu *fe;
union instr instr;
 {
@@ -644,7 +644,7 @@ fpu_insn_fsqrt(fe, instr, rdp, rdtypep, 
  */
 int
 fpu_insn_fcmp(fs, fe, instr, cmpe)
-   struct fpstate64 *fs;
+   struct fpstate *fs;
struct fpemu *fe;
union instr instr;
int cmpe;
@@ -848,7 +848,7 @@ fpu_insn_fsub(fe, instr, rdp, rdtypep, s
 int
 fpu_insn_fmovcc(p, fs, instr)
struct proc *p;
-   struct fpstate64 *fs;
+   struct fpstate *fs;
union instr instr;
 {
int rtype, rd, rs, cond;
@@ -900,7 +900,7 @@ fpu_insn_fmovcc(p, fs, instr)
 int
 fpu_insn_fmovr(p, fs, instr)
struct proc *p;
-   struct fpstate64 *fs;
+   struct fpstate *fs;
union instr instr;
 {
int rtype, rd, rs2, rs1;
Index: fpu/fpu_emu.h
===
RCS file: /OpenBSD/src/sys/arch/sparc64/fpu/fpu_emu.h,v
retrieving revision 1.5
diff -u -p -r1.5 fpu_emu.h
--- f

Towards unlocking mmap(2) & munmap(2)

2022-09-11 Thread Martin Pieuchot
Diff below adds a minimalist set of assertions to ensure proper locks
are held in uvm_mapanon() and uvm_unmap_remove() which are the guts of
mmap(2) for anons and munmap(2).

Please test it with WITNESS enabled and report back.

Index: uvm/uvm_addr.c
===
RCS file: /cvs/src/sys/uvm/uvm_addr.c,v
retrieving revision 1.31
diff -u -p -r1.31 uvm_addr.c
--- uvm/uvm_addr.c  21 Feb 2022 10:26:20 -  1.31
+++ uvm/uvm_addr.c  11 Sep 2022 09:08:10 -
@@ -416,6 +416,8 @@ uvm_addr_invoke(struct vm_map *map, stru
!(hint >= uaddr->uaddr_minaddr && hint < uaddr->uaddr_maxaddr))
return ENOMEM;
 
+   vm_map_assert_anylock(map);
+
error = (*uaddr->uaddr_functions->uaddr_select)(map, uaddr,
entry_out, addr_out, sz, align, offset, prot, hint);
 
Index: uvm/uvm_fault.c
===
RCS file: /cvs/src/sys/uvm/uvm_fault.c,v
retrieving revision 1.132
diff -u -p -r1.132 uvm_fault.c
--- uvm/uvm_fault.c 31 Aug 2022 01:27:04 -  1.132
+++ uvm/uvm_fault.c 11 Sep 2022 08:57:35 -
@@ -1626,6 +1626,7 @@ uvm_fault_unwire_locked(vm_map_t map, va
struct vm_page *pg;
 
KASSERT((map->flags & VM_MAP_INTRSAFE) == 0);
+   vm_map_assert_anylock(map);
 
/*
 * we assume that the area we are unwiring has actually been wired
Index: uvm/uvm_map.c
===
RCS file: /cvs/src/sys/uvm/uvm_map.c,v
retrieving revision 1.294
diff -u -p -r1.294 uvm_map.c
--- uvm/uvm_map.c   15 Aug 2022 15:53:45 -  1.294
+++ uvm/uvm_map.c   11 Sep 2022 09:37:44 -
@@ -162,6 +162,8 @@ int  uvm_map_inentry_recheck(u_long, v
 struct p_inentry *);
 boolean_t   uvm_map_inentry_fix(struct proc *, struct p_inentry *,
 vaddr_t, int (*)(vm_map_entry_t), u_long);
+boolean_t   uvm_map_is_stack_remappable(struct vm_map *,
+vaddr_t, vsize_t);
 /*
  * Tree management functions.
  */
@@ -491,6 +493,8 @@ uvmspace_dused(struct vm_map *map, vaddr
vaddr_t stack_begin, stack_end; /* Position of stack. */
 
KASSERT(map->flags & VM_MAP_ISVMSPACE);
+   vm_map_assert_anylock(map);
+
vm = (struct vmspace *)map;
stack_begin = MIN((vaddr_t)vm->vm_maxsaddr, (vaddr_t)vm->vm_minsaddr);
stack_end = MAX((vaddr_t)vm->vm_maxsaddr, (vaddr_t)vm->vm_minsaddr);
@@ -570,6 +574,8 @@ uvm_map_isavail(struct vm_map *map, stru
if (addr + sz < addr)
return 0;
 
+   vm_map_assert_anylock(map);
+
/*
 * Kernel memory above uvm_maxkaddr is considered unavailable.
 */
@@ -1446,6 +1452,8 @@ uvm_map_mkentry(struct vm_map *map, stru
entry->guard = 0;
entry->fspace = 0;
 
+   vm_map_assert_wrlock(map);
+
/* Reset free space in first. */
free = uvm_map_uaddr_e(map, first);
uvm_mapent_free_remove(map, free, first);
@@ -1573,6 +1581,8 @@ boolean_t
 uvm_map_lookup_entry(struct vm_map *map, vaddr_t address,
 struct vm_map_entry **entry)
 {
+   vm_map_assert_anylock(map);
+
*entry = uvm_map_entrybyaddr(&map->addr, address);
return *entry != NULL && !UVM_ET_ISHOLE(*entry) &&
(*entry)->start <= address && (*entry)->end > address;
@@ -1692,6 +1702,8 @@ uvm_map_is_stack_remappable(struct vm_ma
vaddr_t end = addr + sz;
struct vm_map_entry *first, *iter, *prev = NULL;
 
+   vm_map_assert_anylock(map);
+
if (!uvm_map_lookup_entry(map, addr, &first)) {
printf("map stack 0x%lx-0x%lx of map %p failed: no mapping\n",
addr, end, map);
@@ -1843,6 +1855,8 @@ uvm_mapent_mkfree(struct vm_map *map, st
vaddr_t  addr;  /* Start of freed range. */
vaddr_t  end;   /* End of freed range. */
 
+   UVM_MAP_REQ_WRITE(map);
+
prev = *prev_ptr;
if (prev == entry)
*prev_ptr = prev = NULL;
@@ -1971,10 +1985,7 @@ uvm_unmap_remove(struct vm_map *map, vad
if (start >= end)
return;
 
-   if ((map->flags & VM_MAP_INTRSAFE) == 0)
-   splassert(IPL_NONE);
-   else
-   splassert(IPL_VM);
+   vm_map_assert_wrlock(map);
 
/* Find first affected entry. */
entry = uvm_map_entrybyaddr(&map->addr, start);
@@ -4027,6 +4038,8 @@ uvm_map_checkprot(struct vm_map *map, va
 {
struct vm_map_entry *entry;
 
+   vm_map_assert_anylock(map);
+
if (start < map->min_offset || end > map->max_offset || start > end)
return FALSE;
if (start == end)
@@ -4886,6 +4899,7 @@ uvm_map_freelist_update(struct vm_map *m
 vaddr_t b_start, vaddr_t b_end, vaddr_t s_start, vaddr_t s_end, int flags)
 {
KDASSERT(b_end >= b_star