Re: hppa MI mutex

2018-04-03 Thread Mark Kettenis
> Date: Tue, 3 Apr 2018 12:36:18 +0200
> From: Martin Pieuchot <m...@openbsd.org>
> 
> On 03/04/18(Tue) 12:06, Mark Kettenis wrote:
> > > Date: Tue, 3 Apr 2018 11:16:45 +0200
> > > From: Martin Pieuchot <m...@openbsd.org>
> > > 
> > > Here's a diff to switch hppa to the MI mutex implementation.  I'm
> > > looking for testers, as I don't own such hardware.
> > > 
> > > Note that our MI mutex implementation relies on atomic_cas_ptr(9).  On
> > > hppa all the calls to this function are serialized on a single lock. I
> > > don't believe it will introduce contention at this point since the
> > > KERNEL_LOCK() is still the problem.  However if it becomes a bottleneck
> > > we could use multiple locks in a small hash table.
> > 
> > I don't have such hardware either, but I'm not in favour of doing
> > this.  PA-RISC is "challenged" but at least the locking primitive it
> > supplies is suitable for implementing spin locks.  Emulating CAS with
> > spin locks to implement a spin lock feels so wrong.  There are also
> > some concerns about guarantees of forward progress in such an
> > implementation.
> 
> Well I would prefer if we could solve these concerns because we are already
> using atomic operations.
> 
> So what are the options?
> 
> 1. Keep hppa mutex MD and let it diverge from the MI mutex
> 
> 2. Add another MI layer of abstraction on top of the atomic*(9) API
>because of hppa
> 
> 3. Emulate the atomic*(9) API as well as we can on hppa
> 
> I picked 3 because I don't want to add more complexity for an arch
> that is not widely used.  Who cares if it feels wrong to emulate a CAS
> with a hw spinlock?  I believe that what we need are locking primitives
> that we can debug and analyse.  If hppa can benefit from that and help us
> find bugs, that's even better :)

Option 1 doesn't rule this out.  Having most (but not all)
architectures use the MI implementation is still a win.  The hppa
mutex code is already written in C, so it would be fairly easy to
adjust it to provide the equivalent diagnostics.  I can give that a go
if you want me to.



Re: hppa MI mutex

2018-04-03 Thread Martin Pieuchot
On 03/04/18(Tue) 12:06, Mark Kettenis wrote:
> > Date: Tue, 3 Apr 2018 11:16:45 +0200
> > From: Martin Pieuchot <m...@openbsd.org>
> > 
> > Here's a diff to switch hppa to the MI mutex implementation.  I'm
> > looking for testers, as I don't own such hardware.
> > 
> > Note that our MI mutex implementation relies on atomic_cas_ptr(9).  On
> > hppa all the calls to this function are serialized on a single lock. I
> > don't believe it will introduce contention at this point since the
> > KERNEL_LOCK() is still the problem.  However if it becomes a bottleneck
> > we could use multiple locks in a small hash table.
> 
> I don't have such hardware either, but I'm not in favour of doing
> this.  PA-RISC is "challenged" but at least the locking primitive it
> supplies is suitable for implementing spin locks.  Emulating CAS with
> spin locks to implement a spin lock feels so wrong.  There are also
> some concerns about guarantees of forward progress in such an
> implementation.

Well I would prefer if we could solve these concerns because we are already
using atomic operations.

So what are the options?

1. Keep hppa mutex MD and let it diverge from the MI mutex

2. Add another MI layer of abstraction on top of the atomic*(9) API
   because of hppa

3. Emulate the atomic*(9) API as well as we can on hppa

I picked 3 because I don't want to add more complexity for an arch
that is not widely used.  Who cares if it feels wrong to emulate a CAS
with a hw spinlock?  I believe that what we need are locking primitives
that we can debug and analyse.  If hppa can benefit from that and help us
find bugs, that's even better :)

Do you see another alternative?



Re: hppa MI mutex

2018-04-03 Thread Mark Kettenis
> Date: Tue, 3 Apr 2018 11:16:45 +0200
> From: Martin Pieuchot <m...@openbsd.org>
> 
> Here's a diff to switch hppa to the MI mutex implementation.  I'm
> looking for testers, as I don't own such hardware.
> 
> Note that our MI mutex implementation relies on atomic_cas_ptr(9).  On
> hppa all the calls to this function are serialized on a single lock. I
> don't believe it will introduce contention at this point since the
> KERNEL_LOCK() is still the problem.  However if it becomes a bottleneck
> we could use multiple locks in a small hash table.

I don't have such hardware either, but I'm not in favour of doing
this.  PA-RISC is "challenged" but at least the locking primitive it
supplies is suitable for implementing spin locks.  Emulating CAS with
spin locks to implement a spin lock feels so wrong.  There are also
some concerns about guarantees of forward progress in such an
implementation.

> Index: arch/hppa/conf/files.hppa
> ===
> RCS file: /cvs/src/sys/arch/hppa/conf/files.hppa,v
> retrieving revision 1.97
> diff -u -p -r1.97 files.hppa
> --- arch/hppa/conf/files.hppa 5 Jun 2017 18:59:07 -   1.97
> +++ arch/hppa/conf/files.hppa 12 Feb 2018 10:08:57 -
> @@ -298,7 +298,6 @@ file  arch/hppa/hppa/fpu.c
>  file arch/hppa/hppa/ipi.cmultiprocessor
>  file arch/hppa/hppa/lock_machdep.c   multiprocessor
>  file arch/hppa/hppa/machdep.c
> -file arch/hppa/hppa/mutex.c
>  file arch/hppa/hppa/pmap.c
>  file arch/hppa/hppa/process_machdep.c
>  file arch/hppa/hppa/sys_machdep.c
> Index: arch/hppa/hppa/genassym.cf
> ===
> RCS file: /cvs/src/sys/arch/hppa/hppa/genassym.cf,v
> retrieving revision 1.47
> diff -u -p -r1.47 genassym.cf
> --- arch/hppa/hppa/genassym.cf9 Feb 2015 08:20:13 -   1.47
> +++ arch/hppa/hppa/genassym.cf12 Feb 2018 10:15:56 -
> @@ -45,7 +45,7 @@ include 
>  include 
>  include 
>  include 
> -include 
> +include 
>  include 
>  include 
>  include 
> Index: arch/hppa/hppa/ipi.c
> ===
> RCS file: /cvs/src/sys/arch/hppa/hppa/ipi.c,v
> retrieving revision 1.5
> diff -u -p -r1.5 ipi.c
> --- arch/hppa/hppa/ipi.c  8 Sep 2017 05:36:51 -   1.5
> +++ arch/hppa/hppa/ipi.c  12 Feb 2018 10:16:01 -
> @@ -25,7 +25,7 @@
>  #include 
>  #include 
>  #include 
> -#include 
> +#include 
>  #include 
>  
>  void hppa_ipi_nop(void);
> Index: arch/hppa/hppa/mutex.c
> ===
> RCS file: arch/hppa/hppa/mutex.c
> diff -N arch/hppa/hppa/mutex.c
> --- arch/hppa/hppa/mutex.c20 Apr 2017 13:57:29 -  1.16
> +++ /dev/null 1 Jan 1970 00:00:00 -
> @@ -1,156 +0,0 @@
> -/*   $OpenBSD: mutex.c,v 1.16 2017/04/20 13:57:29 visa Exp $ */
> -
> -/*
> - * Copyright (c) 2004 Artur Grabowski <a...@openbsd.org>
> - * All rights reserved. 
> - *
> - * Redistribution and use in source and binary forms, with or without 
> - * modification, are permitted provided that the following conditions 
> - * are met: 
> - *
> - * 1. Redistributions of source code must retain the above copyright 
> - *notice, this list of conditions and the following disclaimer. 
> - * 2. The name of the author may not be used to endorse or promote products
> - *derived from this software without specific prior written permission. 
> - *
> - * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES,
> - * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
> - * AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
> - * THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
> - * EXEMPLARY, OR CONSEQUENTIAL  DAMAGES (INCLUDING, BUT NOT LIMITED TO,
> - * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR 
> PROFITS;
> - * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
> - * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
> - * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
> - * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
> - */
> -
> -#include 
> -#include 
> -#include 
> -#include 
> -
> -#include 
> -
> -#include 
> -
> -int __mtx_enter_try(struct mutex *);
> -
> -#ifdef MULTIPROCESSOR
> -/* Note: lock must be 16-byte aligned. */
> -#define __mtx_lock(mtx) ((int *)(((vaddr_t)mtx->mtx_lock + 0xf) & ~0xf))
> -#endif
> -
> -void
> -__mtx_init(struct mutex *mtx, int wantipl)
> -{
> -#ifdef MULTIPROCESSOR

hppa MI mutex

2018-04-03 Thread Martin Pieuchot
Here's a diff to switch hppa to the MI mutex implementation.  I'm
looking for testers, as I don't own such hardware.

Note that our MI mutex implementation relies on atomic_cas_ptr(9).  On
hppa all the calls to this function are serialized on a single lock. I
don't believe it will introduce contention at this point since the
KERNEL_LOCK() is still the problem.  However if it becomes a bottleneck
we could use multiple locks in a small hash table.


Index: arch/hppa/conf/files.hppa
===
RCS file: /cvs/src/sys/arch/hppa/conf/files.hppa,v
retrieving revision 1.97
diff -u -p -r1.97 files.hppa
--- arch/hppa/conf/files.hppa   5 Jun 2017 18:59:07 -   1.97
+++ arch/hppa/conf/files.hppa   12 Feb 2018 10:08:57 -
@@ -298,7 +298,6 @@ filearch/hppa/hppa/fpu.c
 file   arch/hppa/hppa/ipi.cmultiprocessor
 file   arch/hppa/hppa/lock_machdep.c   multiprocessor
 file   arch/hppa/hppa/machdep.c
-file   arch/hppa/hppa/mutex.c
 file   arch/hppa/hppa/pmap.c
 file   arch/hppa/hppa/process_machdep.c
 file   arch/hppa/hppa/sys_machdep.c
Index: arch/hppa/hppa/genassym.cf
===
RCS file: /cvs/src/sys/arch/hppa/hppa/genassym.cf,v
retrieving revision 1.47
diff -u -p -r1.47 genassym.cf
--- arch/hppa/hppa/genassym.cf  9 Feb 2015 08:20:13 -   1.47
+++ arch/hppa/hppa/genassym.cf  12 Feb 2018 10:15:56 -
@@ -45,7 +45,7 @@ include 
 include 
 include 
 include 
-include 
+include 
 include 
 include 
 include 
Index: arch/hppa/hppa/ipi.c
===
RCS file: /cvs/src/sys/arch/hppa/hppa/ipi.c,v
retrieving revision 1.5
diff -u -p -r1.5 ipi.c
--- arch/hppa/hppa/ipi.c8 Sep 2017 05:36:51 -   1.5
+++ arch/hppa/hppa/ipi.c12 Feb 2018 10:16:01 -
@@ -25,7 +25,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 #include 
 
 void hppa_ipi_nop(void);
Index: arch/hppa/hppa/mutex.c
===
RCS file: arch/hppa/hppa/mutex.c
diff -N arch/hppa/hppa/mutex.c
--- arch/hppa/hppa/mutex.c  20 Apr 2017 13:57:29 -  1.16
+++ /dev/null   1 Jan 1970 00:00:00 -
@@ -1,156 +0,0 @@
-/* $OpenBSD: mutex.c,v 1.16 2017/04/20 13:57:29 visa Exp $ */
-
-/*
- * Copyright (c) 2004 Artur Grabowski <a...@openbsd.org>
- * All rights reserved. 
- *
- * Redistribution and use in source and binary forms, with or without 
- * modification, are permitted provided that the following conditions 
- * are met: 
- *
- * 1. Redistributions of source code must retain the above copyright 
- *notice, this list of conditions and the following disclaimer. 
- * 2. The name of the author may not be used to endorse or promote products
- *derived from this software without specific prior written permission. 
- *
- * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES,
- * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
- * AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
- * THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL  DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
- * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
- * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
- * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
- * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#include 
-#include 
-#include 
-#include 
-
-#include 
-
-#include 
-
-int __mtx_enter_try(struct mutex *);
-
-#ifdef MULTIPROCESSOR
-/* Note: lock must be 16-byte aligned. */
-#define __mtx_lock(mtx) ((int *)(((vaddr_t)mtx->mtx_lock + 0xf) & ~0xf))
-#endif
-
-void
-__mtx_init(struct mutex *mtx, int wantipl)
-{
-#ifdef MULTIPROCESSOR
-   mtx->mtx_lock[0] = 1;
-   mtx->mtx_lock[1] = 1;
-   mtx->mtx_lock[2] = 1;
-   mtx->mtx_lock[3] = 1;
-#endif
-   mtx->mtx_wantipl = wantipl;
-   mtx->mtx_oldipl = IPL_NONE;
-   mtx->mtx_owner = NULL;
-}
-
-#ifdef MULTIPROCESSOR
-void
-__mtx_enter(struct mutex *mtx)
-{
-   while (__mtx_enter_try(mtx) == 0)
-   ;
-}
-
-int
-__mtx_enter_try(struct mutex *mtx)
-{
-   struct cpu_info *ci = curcpu();
-   volatile int *lock = __mtx_lock(mtx);
-   int ret;
-   int s;
-
-   if (mtx->mtx_wantipl != IPL_NONE)
-   s = splraise(mtx->mtx_wantipl);
-
-#ifdef DIAGNOSTIC
-   if (__predict_false(mtx->mtx_owner == ci))
-   panic("mtx %p: locking against myself", mtx);
-#endif
-
-   asm volatile (
-   "ldcws  0(%2), %0"
> -   : "=&r" (ret), "+m" (lock)
-   : "r" (lock)
- 

sparc64: use MI mutex

2018-02-10 Thread Martin Pieuchot
Diff below switches sparc64 to the MI mutex implementation.  I've been
running this on my 16CPUs guest on a T5220 without issues.

I'm not removing the assembly code yet in case we spot an issue.

More tests and oks welcome :)

Index: arch/sparc64/conf/files.sparc64
===
RCS file: /cvs/src/sys/arch/sparc64/conf/files.sparc64,v
retrieving revision 1.149
diff -u -p -r1.149 files.sparc64
--- arch/sparc64/conf/files.sparc64 17 Oct 2017 14:25:35 -  1.149
+++ arch/sparc64/conf/files.sparc64 10 Feb 2018 10:49:56 -
@@ -329,7 +329,6 @@ filearch/sparc64/sparc64/kgdb_machdep.c
 file   arch/sparc64/sparc64/machdep.c
 file   arch/sparc64/sparc64/mdesc.csun4v
 file   arch/sparc64/sparc64/mem.c
-file   arch/sparc64/sparc64/mutex.S
 file   arch/sparc64/sparc64/openprom.c
 file   arch/sparc64/sparc64/openfirm.c
 file   arch/sparc64/sparc64/ofw_machdep.c
Index: arch/sparc64/include/mutex.h
===
RCS file: /cvs/src/sys/arch/sparc64/include/mutex.h,v
retrieving revision 1.7
diff -u -p -r1.7 mutex.h
--- arch/sparc64/include/mutex.h13 Jan 2018 15:18:11 -  1.7
+++ arch/sparc64/include/mutex.h10 Feb 2018 10:49:36 -
@@ -1,85 +1,3 @@
 /* $OpenBSD: mutex.h,v 1.7 2018/01/13 15:18:11 mpi Exp $   */
 
-/*
- * Copyright (c) 2004 Artur Grabowski <a...@openbsd.org>
- * All rights reserved. 
- *
- * Redistribution and use in source and binary forms, with or without 
- * modification, are permitted provided that the following conditions 
- * are met: 
- *
- * 1. Redistributions of source code must retain the above copyright 
- *notice, this list of conditions and the following disclaimer. 
- * 2. The name of the author may not be used to endorse or promote products
- *derived from this software without specific prior written permission. 
- *
- * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES,
- * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
- * AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
- * THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL  DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
- * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
- * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
- * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
- * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef _MACHINE_MUTEX_H_
-#define _MACHINE_MUTEX_H_
-
-#include 
-
-struct mutex {
-   volatile void *mtx_owner; /* mutex.S relies upon this being first */
-   int mtx_wantipl;
-   int mtx_oldipl;
-#ifdef WITNESS
-   struct lock_object mtx_lock_obj;
-#endif
-};
-
-/*
- * To prevent lock ordering problems with the kernel lock, we need to
- * make sure we block all interrupts that can grab the kernel lock.
- * The simplest way to achieve this is to make sure mutexes always
- * raise the interrupt priority level to the highest level that has
- * interrupts that grab the kernel lock.
- */
-#ifdef MULTIPROCESSOR
-#define __MUTEX_IPL(ipl) \
-(((ipl) > IPL_NONE && (ipl) < IPL_MPFLOOR) ? IPL_MPFLOOR : (ipl))
-#else
-#define __MUTEX_IPL(ipl) (ipl)
-#endif
-
-#ifdef WITNESS
-#define MUTEX_INITIALIZER_FLAGS(ipl, name, flags) \
-   { NULL, __MUTEX_IPL((ipl)), IPL_NONE, MTX_LO_INITIALIZER(name, flags) }
-#else
-#define MUTEX_INITIALIZER_FLAGS(ipl, name, flags) \
-   { NULL, __MUTEX_IPL((ipl)), IPL_NONE }
-#endif
-
-void __mtx_init(struct mutex *, int);
-#define _mtx_init(mtx, ipl) __mtx_init((mtx), __MUTEX_IPL((ipl)))
-
-#ifdef DIAGNOSTIC
-#define MUTEX_ASSERT_LOCKED(mtx) do {  \
-   if ((mtx)->mtx_owner != curcpu())   \
-   panic("mutex %p not held in %s", (mtx), __func__);  \
-} while (0)
-
-#define MUTEX_ASSERT_UNLOCKED(mtx) do {
\
-   if ((mtx)->mtx_owner == curcpu())   \
-   panic("mutex %p held in %s", (mtx), __func__);  \
-} while (0)
-#else
-#define MUTEX_ASSERT_LOCKED(mtx) do { } while (0)
-#define MUTEX_ASSERT_UNLOCKED(mtx) do { } while (0)
-#endif
-
-#define MUTEX_LOCK_OBJECT(mtx) (&(mtx)->mtx_lock_obj)
-#define MUTEX_OLDIPL(mtx)  (mtx)->mtx_oldipl
-
-#endif /* _MACHINE_MUTEX_H_ */
+#define __USE_MI_MUTEX



Re: MI mutex

2018-01-23 Thread Mark Kettenis
> Date: Tue, 23 Jan 2018 18:14:31 +0100
> From: Martin Pieuchot 
> 
> On 23/01/18(Tue) 18:10, Mark Kettenis wrote:
> > > Date: Tue, 23 Jan 2018 14:06:08 +
> > > From: Visa Hankala 
> > > 
> > > On Mon, Jan 22, 2018 at 11:12:13AM +0100, Martin Pieuchot wrote:
> > > > Diff below moves the common mutex implementation to kern/ and enable it 
> > > > for
> > > > alpha, amd64, arm64, i386, mips64, powerpc.
> > > 
> > > Your diff seems to miss the necessary bits in .
> > > In addition, you should put the common mutex code into kern_mutex.c.
> > > Let's keep kern_lock.c for the mplock only.
> > 
> > Right.  I believe kern_lock.c is only included in MULTIPROCESSOR kernels.
> 
> It is not, however kern_mutex.c is only included in WITNESS kernels.

Must have confused myself.  It is listed (unconditionally) in sys/conf/files.



Re: MI mutex

2018-01-23 Thread Artturi Alm
On Tue, Jan 23, 2018 at 06:14:31PM +0100, Martin Pieuchot wrote:
> On 23/01/18(Tue) 18:10, Mark Kettenis wrote:
> > > Date: Tue, 23 Jan 2018 14:06:08 +
> > > From: Visa Hankala 
> > > 
> > > On Mon, Jan 22, 2018 at 11:12:13AM +0100, Martin Pieuchot wrote:
> > > > Diff below moves the common mutex implementation to kern/ and enable it 
> > > > for
> > > > alpha, amd64, arm64, i386, mips64, powerpc.
> > > 
> > > Your diff seems to miss the necessary bits in .
> > > In addition, you should put the common mutex code into kern_mutex.c.
> > > Let's keep kern_lock.c for the mplock only.

got me thinking "Why?", but I'm not sure where I got the impression
that aiming towards GENERIC == GENERIC.MP is/was a desirable direction.

> > 
> > Right.  I believe kern_lock.c is only included in MULTIPROCESSOR kernels.
> 
> It is not, however kern_mutex.c is only included in WITNESS kernels.
> 

just noise:)
-Artturi



Re: MI mutex

2018-01-23 Thread Martin Pieuchot
On 23/01/18(Tue) 18:10, Mark Kettenis wrote:
> > Date: Tue, 23 Jan 2018 14:06:08 +
> > From: Visa Hankala 
> > 
> > On Mon, Jan 22, 2018 at 11:12:13AM +0100, Martin Pieuchot wrote:
> > > Diff below moves the common mutex implementation to kern/ and enable it 
> > > for
> > > alpha, amd64, arm64, i386, mips64, powerpc.
> > 
> > Your diff seems to miss the necessary bits in .
> > In addition, you should put the common mutex code into kern_mutex.c.
> > Let's keep kern_lock.c for the mplock only.
> 
> Right.  I believe kern_lock.c is only included in MULTIPROCESSOR kernels.

It is not, however kern_mutex.c is only included in WITNESS kernels.



Re: MI mutex

2018-01-23 Thread Mark Kettenis
> Date: Tue, 23 Jan 2018 14:06:08 +
> From: Visa Hankala 
> 
> On Mon, Jan 22, 2018 at 11:12:13AM +0100, Martin Pieuchot wrote:
> > Diff below moves the common mutex implementation to kern/ and enable it for
> > alpha, amd64, arm64, i386, mips64, powerpc.
> 
> Your diff seems to miss the necessary bits in .
> In addition, you should put the common mutex code into kern_mutex.c.
> Let's keep kern_lock.c for the mplock only.

Right.  I believe kern_lock.c is only included in MULTIPROCESSOR kernels.



Re: MI mutex

2018-01-23 Thread Visa Hankala
On Tue, Jan 23, 2018 at 03:23:24PM +0100, Martin Pieuchot wrote:
> On 23/01/18(Tue) 14:06, Visa Hankala wrote:
> > In addition, you should put the common mutex code into kern_mutex.c.
> > Let's keep kern_lock.c for the mplock only.
> 
> I'm more in favor of putting everything into kern_lock.c.  We're talking
> about less than 400 lines of code.

To me this seems a bit odd at the moment, but fine.

> +#ifdef MP_LOCKDEBUG
> +#ifndef DDB
> +#error "MP_LOCKDEBUG requires DDB"
> +#endif
> +
> +/* CPU-dependent timing, needs this to be settable from ddb. */
> +extern int __mp_lock_spinout;
> +#endif

The above should be left out. There already is a similar block
at the start of the file. Otherwise OK visa@.



Re: MI mutex

2018-01-23 Thread Martin Pieuchot
On 23/01/18(Tue) 14:06, Visa Hankala wrote:
> On Mon, Jan 22, 2018 at 11:12:13AM +0100, Martin Pieuchot wrote:
> > Diff below moves the common mutex implementation to kern/ and enable it for
> > alpha, amd64, arm64, i386, mips64, powerpc.
> 
> Your diff seems to miss the necessary bits in .

Indeed, updated diff below.

> In addition, you should put the common mutex code into kern_mutex.c.
> Let's keep kern_lock.c for the mplock only.

I'm more in favor of putting everything into kern_lock.c.  We're talking
about less than 400 lines of code.  And I hope we can simplify the
#ifdef and abstraction by having more MI code.


Index: kern/kern_lock.c
===
RCS file: /cvs/src/sys/kern/kern_lock.c,v
retrieving revision 1.52
diff -u -p -r1.52 kern_lock.c
--- kern/kern_lock.c4 Dec 2017 09:51:03 -   1.52
+++ kern/kern_lock.c22 Jan 2018 09:24:06 -
@@ -1,6 +1,30 @@
 /* $OpenBSD: kern_lock.c,v 1.52 2017/12/04 09:51:03 mpi Exp $  */
 
-/* 
+/*
+ * Copyright (c) 2004 Artur Grabowski 
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *notice, this list of conditions and the following disclaimer.
+ * 2. The name of the author may not be used to endorse or promote products
+ *derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES,
+ * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
+ * AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL  DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/*
  * Copyright (c) 1995
  * The Regents of the University of California.  All rights reserved.
  *
@@ -273,3 +297,122 @@ __mp_lock_held(struct __mp_lock *mpl, st
 #endif /* __USE_MI_MPLOCK */
 
 #endif /* MULTIPROCESSOR */
+
+
+#ifdef __USE_MI_MUTEX
+void
+__mtx_init(struct mutex *mtx, int wantipl)
+{
+   mtx->mtx_owner = NULL;
+   mtx->mtx_wantipl = wantipl;
+   mtx->mtx_oldipl = IPL_NONE;
+}
+
+#ifdef MULTIPROCESSOR
+#ifdef MP_LOCKDEBUG
+#ifndef DDB
+#error "MP_LOCKDEBUG requires DDB"
+#endif
+
+/* CPU-dependent timing, needs this to be settable from ddb. */
+extern int __mp_lock_spinout;
+#endif
+
+void
+__mtx_enter(struct mutex *mtx)
+{
+#ifdef MP_LOCKDEBUG
+   int nticks = __mp_lock_spinout;
+#endif
+
+   while (__mtx_enter_try(mtx) == 0) {
+   CPU_BUSY_CYCLE();
+
+#ifdef MP_LOCKDEBUG
+   if (--nticks == 0) {
+   db_printf("%s: %p lock spun out", __func__, mtx);
+   db_enter();
+   nticks = __mp_lock_spinout;
+   }
+#endif
+   }
+}
+
+int
+__mtx_enter_try(struct mutex *mtx)
+{
+   struct cpu_info *owner, *ci = curcpu();
+   int s;
+
+   if (mtx->mtx_wantipl != IPL_NONE)
+   s = splraise(mtx->mtx_wantipl);
+
+   owner = atomic_cas_ptr(&mtx->mtx_owner, NULL, ci);
+#ifdef DIAGNOSTIC
+   if (__predict_false(owner == ci))
+   panic("mtx %p: locking against myself", mtx);
+#endif
+   if (owner == NULL) {
+   membar_enter_after_atomic();
+   if (mtx->mtx_wantipl != IPL_NONE)
+   mtx->mtx_oldipl = s;
+#ifdef DIAGNOSTIC
+   ci->ci_mutex_level++;
+#endif
+   return (1);
+   }
+
+   if (mtx->mtx_wantipl != IPL_NONE)
+   splx(s);
+
+   return (0);
+}
+#else
+void
+__mtx_enter(struct mutex *mtx)
+{
+   struct cpu_info *ci = curcpu();
+
+#ifdef DIAGNOSTIC
+   if (__predict_false(mtx->mtx_owner == ci))
+   panic("mtx %p: locking against myself", mtx);
+#endif
+
+   if (mtx->mtx_wantipl != IPL_NONE)
+   mtx->mtx_oldipl = splraise(mtx->mtx_wantipl);
+
+   mtx->mtx_owner = ci;
+
+#ifdef DIAGNOSTIC
+   ci->ci_mutex_level++;
+#endif
+}
+
+int
+__mtx_enter_try(struct mutex *mtx)
+{
+   __mtx_enter(mtx);
+   return (1);
+}
+#endif
+
+void
+__mtx_leave(struct mutex *mtx)
+{
+   int s;
+
+   MUTEX_ASSERT_LOCKED(mtx);
+
+#ifdef DIAGNOSTIC
+   curcpu()->ci_mutex_level--;
+#endif
+
+   s = mtx->mtx_oldipl;
+#ifdef MULTIPROCESSOR
+   membar_exit_before_atomic();
+#endif
+   mtx->mtx_owner = NULL;
+   if (mtx->mtx_wantipl != IPL_NONE)
+   splx(s);
+}
+#endif /* __USE_MI_MUTEX */

Re: MI mutex

2018-01-23 Thread Visa Hankala
On Mon, Jan 22, 2018 at 11:12:13AM +0100, Martin Pieuchot wrote:
> Diff below moves the common mutex implementation to kern/ and enable it for
> alpha, amd64, arm64, i386, mips64, powerpc.

Your diff seems to miss the necessary bits in .
In addition, you should put the common mutex code into kern_mutex.c.
Let's keep kern_lock.c for the mplock only.



MI mutex

2018-01-23 Thread Martin Pieuchot
Diff below moves the common mutex implementation to kern/ and enable it for
alpha, amd64, arm64, i386, mips64, powerpc.

ok?

Index: kern/kern_lock.c
===
RCS file: /cvs/src/sys/kern/kern_lock.c,v
retrieving revision 1.52
diff -u -p -r1.52 kern_lock.c
--- kern/kern_lock.c4 Dec 2017 09:51:03 -   1.52
+++ kern/kern_lock.c22 Jan 2018 09:24:06 -
@@ -1,6 +1,30 @@
 /* $OpenBSD: kern_lock.c,v 1.52 2017/12/04 09:51:03 mpi Exp $  */
 
-/* 
+/*
+ * Copyright (c) 2004 Artur Grabowski 
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *notice, this list of conditions and the following disclaimer.
+ * 2. The name of the author may not be used to endorse or promote products
+ *derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES,
+ * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
+ * AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL  DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/*
  * Copyright (c) 1995
  * The Regents of the University of California.  All rights reserved.
  *
@@ -273,3 +297,122 @@ __mp_lock_held(struct __mp_lock *mpl, st
 #endif /* __USE_MI_MPLOCK */
 
 #endif /* MULTIPROCESSOR */
+
+
+#ifdef __USE_MI_MUTEX
+void
+__mtx_init(struct mutex *mtx, int wantipl)
+{
+   mtx->mtx_owner = NULL;
+   mtx->mtx_wantipl = wantipl;
+   mtx->mtx_oldipl = IPL_NONE;
+}
+
+#ifdef MULTIPROCESSOR
+#ifdef MP_LOCKDEBUG
+#ifndef DDB
+#error "MP_LOCKDEBUG requires DDB"
+#endif
+
+/* CPU-dependent timing, needs this to be settable from ddb. */
+extern int __mp_lock_spinout;
+#endif
+
+void
+__mtx_enter(struct mutex *mtx)
+{
+#ifdef MP_LOCKDEBUG
+   int nticks = __mp_lock_spinout;
+#endif
+
+   while (__mtx_enter_try(mtx) == 0) {
+   CPU_BUSY_CYCLE();
+
+#ifdef MP_LOCKDEBUG
+   if (--nticks == 0) {
+   db_printf("%s: %p lock spun out", __func__, mtx);
+   db_enter();
+   nticks = __mp_lock_spinout;
+   }
+#endif
+   }
+}
+
+int
+__mtx_enter_try(struct mutex *mtx)
+{
+   struct cpu_info *owner, *ci = curcpu();
+   int s;
+
+   if (mtx->mtx_wantipl != IPL_NONE)
+   s = splraise(mtx->mtx_wantipl);
+
+   owner = atomic_cas_ptr(&mtx->mtx_owner, NULL, ci);
+#ifdef DIAGNOSTIC
+   if (__predict_false(owner == ci))
+   panic("mtx %p: locking against myself", mtx);
+#endif
+   if (owner == NULL) {
+   membar_enter_after_atomic();
+   if (mtx->mtx_wantipl != IPL_NONE)
+   mtx->mtx_oldipl = s;
+#ifdef DIAGNOSTIC
+   ci->ci_mutex_level++;
+#endif
+   return (1);
+   }
+
+   if (mtx->mtx_wantipl != IPL_NONE)
+   splx(s);
+
+   return (0);
+}
+#else
+void
+__mtx_enter(struct mutex *mtx)
+{
+   struct cpu_info *ci = curcpu();
+
+#ifdef DIAGNOSTIC
+   if (__predict_false(mtx->mtx_owner == ci))
+   panic("mtx %p: locking against myself", mtx);
+#endif
+
+   if (mtx->mtx_wantipl != IPL_NONE)
+   mtx->mtx_oldipl = splraise(mtx->mtx_wantipl);
+
+   mtx->mtx_owner = ci;
+
+#ifdef DIAGNOSTIC
+   ci->ci_mutex_level++;
+#endif
+}
+
+int
+__mtx_enter_try(struct mutex *mtx)
+{
+   __mtx_enter(mtx);
+   return (1);
+}
+#endif
+
+void
+__mtx_leave(struct mutex *mtx)
+{
+   int s;
+
+   MUTEX_ASSERT_LOCKED(mtx);
+
+#ifdef DIAGNOSTIC
+   curcpu()->ci_mutex_level--;
+#endif
+
+   s = mtx->mtx_oldipl;
+#ifdef MULTIPROCESSOR
+   membar_exit_before_atomic();
+#endif
+   mtx->mtx_owner = NULL;
+   if (mtx->mtx_wantipl != IPL_NONE)
+   splx(s);
+}
+#endif /* __USE_MI_MUTEX */
Index: arch/alpha/conf/files.alpha
===
RCS file: /cvs/src/sys/arch/alpha/conf/files.alpha,v
retrieving revision 1.105
diff -u -p -r1.105 files.alpha
--- arch/alpha/conf/files.alpha 2 Nov 2017 14:04:24 -   1.105
+++ arch/alpha/conf/files.alpha 22 Jan 2018 09:15:42 -
@@ -293,7 +293,6 @@ filearch/alpha/alpha/fp_complete.c  !no
 file