Re: [PATCH] Port GDB to Hurd x86_64.

2024-02-22 Thread John Baldwin

On 2/12/24 8:31 PM, Flavio Cruz wrote:

This port extends the existing i686 port to support x86_64 by trying to
reuse existing code whenever it makes sense.

* gdb/amd64-gnu-tdep.c: Add logic for handling signal frames and the
   positions of the amd64 registers in the different Hurd structs,
   including i386_thread_state. The signal code is very similar to
   i686's, except that the trampoline code is adapted.
* gdb/amd64-gnu-tdep.h: Export register offsets for x86-gnu-nat.c.
* gdb/config/i386/nm-i386gnu.h: Rename to gdb/config/i386/nm-x86-gnu.h
   and adapt it for x86_64.
* gdb/config/i386/i386gnu.mn: Rename to gdb/config/i386/x86-gnu.mn
   and reuse it for x86_64.
* gdb/configure.host: Recognize gnu64 as a host.
* gdb/configure.nat: Recognize the gnu64 host and update the existing
   i386gnu entry to reuse the new shared files.
* gdb/configure.tgt: Recognize the x86_64-*-gnu* triplet and use
   amd64-gnu-tdep.c.
* gdb/i386-gnu-tdep.c: Add i386_gnu_thread_state_reg_offset, copied
   from i386-gnu-nat.c. This makes it similar to amd64.
* gdb/i386-gnu-tdep.h: Export register offsets and the number of registers.
* gdb/i386-gnu-nat.c: Rename to x86-gnu-nat.c, since it is now reused
   for both i386 and amd64. Update REG_ADDR to use one of the
   structures. Add VALID_REGISTER to make sure it's a register we can
   provide at this time (not all of them are available on amd64).
   FLAGS_REGISTER is either rfl or efl depending on the arch. Rename
   functions and classes from i386 to x86 wherever they can be reused.

Tested on Hurd x86_64 and i686.
---

For Hurd x86_64 to work, "[PATCH] Hurd port: update interface to match
upstream and fix warnings" needs to be applied too.

  gdb/amd64-gnu-tdep.c  | 256 ++
  gdb/amd64-gnu-tdep.h  |  29 ++
  .../i386/{nm-i386gnu.h => nm-x86-gnu.h}   |   7 +
  gdb/config/i386/{i386gnu.mn => x86-gnu.mn}|   0
  gdb/configure.host|   1 +
  gdb/configure.nat |  27 +-
  gdb/configure.tgt |   4 +
  gdb/i386-gnu-tdep.c   |  37 ++-
  gdb/i386-gnu-tdep.h   |  29 ++
  gdb/{i386-gnu-nat.c => x86-gnu-nat.c} | 128 +
  10 files changed, 457 insertions(+), 61 deletions(-)
  create mode 100644 gdb/amd64-gnu-tdep.c
  create mode 100644 gdb/amd64-gnu-tdep.h
  rename gdb/config/i386/{nm-i386gnu.h => nm-x86-gnu.h} (83%)
  rename gdb/config/i386/{i386gnu.mn => x86-gnu.mn} (100%)
  create mode 100644 gdb/i386-gnu-tdep.h
  rename gdb/{i386-gnu-nat.c => x86-gnu-nat.c} (75%)

diff --git a/gdb/amd64-gnu-tdep.c b/gdb/amd64-gnu-tdep.c
new file mode 100644
index 000..57aeccea8b9
--- /dev/null
+++ b/gdb/amd64-gnu-tdep.c
@@ -0,0 +1,256 @@
+/* Mapping between the general-purpose registers in `struct
+   sigcontext' format (starting at sc_i386_thread_state)
+   and GDB's register cache layout.  */
+
+/* From .  */
+static int amd64_gnu_sc_reg_offset[] =
+{
+  15 * 8,  /* %rax */
+  12 * 8,  /* %rbx */
+  14 * 8,  /* %rcx */
+  13 * 8,  /* %rdx */
+  10 * 8,  /* %rsi */
+  9 * 8,   /* %rdi */
+  10 * 8,  /* %rbp */
+  11 * 8,  /* %rsp */
+  0 * 8,   /* %r8 ...  */
+  8 * 8,
+  7 * 8,
+  6 * 8,
+  3 * 8,
+  2 * 8,
+  1 * 8,
+  0 * 8,   /* ... %r15 */
+  16 * 8,  /* %rip */
+  18 * 8,  /* %eflags */
+  17 * 8,  /* %cs */
+};
+
+/* From .  */
+static int amd64_gnu_gregset_reg_offset[] =
+{
+  10 * 8,  /* %rax */
+  5 * 8,   /* %rbx */
+  11 * 8,  /* %rcx */
+  12 * 8,  /* %rdx */
+  13 * 8,  /* %rsi */
+  14 * 8,  /* %rdi */
+  4 * 8,   /* %rbp */
+  19 * 8,  /* %rsp */
+  9 * 8,   /* %r8 ...  */
+  8 * 8,
+  7 * 8,
+  6 * 8,
+  3 * 8,
+  2 * 8,
+  1 * 8,
+  0 * 8,   /* ... %r15 */
+  16 * 8,  /* %rip */
+  18 * 8,  /* %eflags */
+  17 * 8,  /* %cs */
+  -1,			/* %ss */
+  -1,			/* %ds */
+  -1,			/* %es */
+  -1,			/* %fs */
+  -1,			/* %gs */
+};
+
+/* Offset to the thread_state_t location where REG is stored.  */
+#define REG_OFFSET(reg) offsetof (struct i386_thread_state, reg)


You can't use a reference to this OS-specific type in a tdep.c file,
only in a nat.c file.  tdep.c files should be buildable on other
platforms to permit cross-debugging of core dumps, remote targets, etc.


+/* At REG_OFFSET[N] is the offset to the thread_state_t location where
+   the GDB register N is stored.  */

Re: Are you doing this too for GNU/Hurd?

2024-02-22 Thread Samuel Thibault
Svante Signell, on Thu, 22 Feb 2024 18:13:50 +0100, wrote:
> On Thu, 2024-02-22 at 17:43 +0100, Samuel Thibault wrote:
> > Svante Signell, on Thu, 22 Feb 2024 17:36:15 +0100, wrote:
> > > Changes:
> > > ...
> > >  gcc-13 (13.2.0-14) experimental; urgency=medium
> > > * Pass -D_TIME_BITS=64 together with -D_FILE_OFFSET_BITS=64 by
> > > default
> > >  on the 32bit architectures armel, armhf, hppa, m68k, mips,
> > > mipsel,
> > >  powerpc (-m32 only) and sh4.
> > > ...
> > 
> > No, just like it's not planned for i386.
> 
> And why is it not planned for i386?

I won't rehash the whole complex rationale that was discussed. What I
essentially remember from it is that since this changes the ABI, it
breaks compatibility with pre-built binaries downloadable from the
Internet, so it is better to leave them with the y2038 problem until
then rather than breaking them now.

Concerning the Hurd, where we don't really have that problem, it's
still a lot of work, while in the long run we should just migrate to
amd64.
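
To illustrate the ABI concern, consider a minimal sketch (assuming a
32-bit glibc target; struct record is hypothetical):

#include <stdio.h>
#include <time.h>

/* Any struct embedding a time_t changes layout when time_t grows from
   32 to 64 bits, so binaries built against the old layout break.  */
struct record
{
  time_t stamp;
  int value;
};

int
main (void)
{
  /* Built with -D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64, this prints
     different sizes than a default 32-bit build does.  */
  printf ("sizeof (time_t) = %zu\n", sizeof (time_t));
  printf ("sizeof (struct record) = %zu\n", sizeof (struct record));
  return 0;
}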

Samuel



Re: Are you doing this too for GNU/Hurd?

2024-02-22 Thread Jessica Clarke
On 22 Feb 2024, at 17:13, Svante Signell  wrote:
> On Thu, 2024-02-22 at 17:43 +0100, Samuel Thibault wrote:
>> Svante Signell, on Thu, 22 Feb 2024 17:36:15 +0100, wrote:
>>> Changes:
>>> ...
>>>  gcc-13 (13.2.0-14) experimental; urgency=medium
>>> * Pass -D_TIME_BITS=64 together with -D_FILE_OFFSET_BITS=64 by
>>> default
>>>  on the 32bit architectures armel, armhf, hppa, m68k, mips,
>>> mipsel,
>>>  powerpc (-m32 only) and sh4.
>>> ...
>> 
>> No, just like it's not planned for i386.
> 
> And why is it not planned for i386?

See https://wiki.debian.org/ReleaseGoals/64bit-time.

Jess




Re: Are you doing this too for GNU/Hurd?

2024-02-22 Thread Svante Signell
On Thu, 2024-02-22 at 17:43 +0100, Samuel Thibault wrote:
> Svante Signell, on Thu, 22 Feb 2024 17:36:15 +0100, wrote:
> > Changes:
> > ...
> >  gcc-13 (13.2.0-14) experimental; urgency=medium
> > * Pass -D_TIME_BITS=64 together with -D_FILE_OFFSET_BITS=64 by
> > default
> >  on the 32bit architectures armel, armhf, hppa, m68k, mips,
> > mipsel,
> >  powerpc (-m32 only) and sh4.
> > ...
> 
> No, just like it's not planned for i386.

And why is it not planned for i386?





Re: Are you doing this too for GNU/Hurd?

2024-02-22 Thread Samuel Thibault
Svante Signell, on Thu, 22 Feb 2024 17:36:15 +0100, wrote:
> Changes:
> ...
>  gcc-13 (13.2.0-14) experimental; urgency=medium
> * Pass -D_TIME_BITS=64 together with -D_FILE_OFFSET_BITS=64 by default
>  on the 32bit architectures armel, armhf, hppa, m68k, mips, mipsel,
>  powerpc (-m32 only) and sh4.
> ...

No, just like it's not planned for i386.

Samuel



Are you doing this too for GNU/Hurd?

2024-02-22 Thread Svante Signell
From the changelog:

Changes:
...
 gcc-13 (13.2.0-14) experimental; urgency=medium
* Pass -D_TIME_BITS=64 together with -D_FILE_OFFSET_BITS=64 by default
 on the 32bit architectures armel, armhf, hppa, m68k, mips, mipsel,
 powerpc (-m32 only) and sh4.
...



Re: [PATCH v2 3/3 gnumach] kern/gsync: Use vm_map_lookup with keep_map_locked

2024-02-22 Thread Samuel Thibault
Looks much nicer indeed, applied, thanks!

Damien Zammit, on Thu, 22 Feb 2024 08:24:39 +, wrote:
> This prevents a deadlock on SMP where a read lock on the map
> is taken in gsync and then the map is locked again inside
> vm_map_lookup() while another thread is already waiting for a write
> lock, so the second read lock blocks.
> 
> This is fixed by removing the initial gsync read lock on the map
> but keeping the read lock held upon returning from vm_map_lookup().
> 
> Co-Authored-By: Sergey Bugaev 
> ---
>  kern/gsync.c | 19 ++-
>  1 file changed, 6 insertions(+), 13 deletions(-)
> 
> diff --git a/kern/gsync.c b/kern/gsync.c
> index 19190349..31b564ca 100644
> --- a/kern/gsync.c
> +++ b/kern/gsync.c
> @@ -134,11 +134,12 @@ probe_address (vm_map_t map, vm_offset_t addr,
>vm_prot_t rprot;
>boolean_t wired_p;
>  
> -  if (vm_map_lookup (&map, addr, prot, FALSE, &ver,
> +  if (vm_map_lookup (&map, addr, prot, TRUE, &ver,
>  		     &vap->obj, &vap->off, &rprot, &wired_p) != KERN_SUCCESS)
>  return (-1);
>else if ((rprot & prot) != prot)
>  {
> +  vm_map_unlock_read (map);
>vm_object_unlock (vap->obj);
>return (-1);
>  }
> @@ -227,18 +228,13 @@ kern_return_t gsync_wait (task_t task, vm_offset_t addr,
>else if (addr % sizeof (int) != 0)
>  return (KERN_INVALID_ADDRESS);
>  
> -  vm_map_lock_read (task->map);
> -
>struct gsync_waiter w;
>struct vm_args va;
>boolean_t remote = task != current_task ();
>   int bucket = gsync_prepare_key (task, addr, flags, &w.key, &va);
>  
>if (bucket < 0)
> -{
> -  vm_map_unlock_read (task->map);
> -  return (KERN_INVALID_ADDRESS);
> -}
> +return (KERN_INVALID_ADDRESS);
>else if (remote)
>  /* The VM object is returned locked. However, we are about to acquire
>   * a sleeping lock for a bucket, so we must not hold any simple
> @@ -354,17 +350,12 @@ kern_return_t gsync_wake (task_t task,
>else if (addr % sizeof (int) != 0)
>  return (KERN_INVALID_ADDRESS);
>  
> -  vm_map_lock_read (task->map);
> -
>union gsync_key key;
>struct vm_args va;
>   int bucket = gsync_prepare_key (task, addr, flags, &key, &va);
>  
>if (bucket < 0)
> -{
> -  vm_map_unlock_read (task->map);
> -  return (KERN_INVALID_ADDRESS);
> -}
> +return (KERN_INVALID_ADDRESS);
>else if (current_task () != task && (flags & GSYNC_MUTATE) != 0)
>  /* See above on why we do this. */
>  vm_object_reference_locked (va.obj);
> @@ -437,6 +428,7 @@ kern_return_t gsync_requeue (task_t task, vm_offset_t src,
>   int src_bkt = gsync_prepare_key (task, src, flags, &src_k, &va);
>if (src_bkt < 0)
>  return (KERN_INVALID_ADDRESS);
> +  vm_map_unlock_read (task->map);
>  
>/* Unlock the VM object before the second lookup. */
>vm_object_unlock (va.obj);
> @@ -444,6 +436,7 @@ kern_return_t gsync_requeue (task_t task, vm_offset_t src,
>   int dst_bkt = gsync_prepare_key (task, dst, flags, &dst_k, &va);
>if (dst_bkt < 0)
>  return (KERN_INVALID_ADDRESS);
> +  vm_map_unlock_read (task->map);
>  
>/* We never create any temporary mappings in 'requeue', so we
> * can unlock the VM object right now. */
> -- 
> 2.43.0
> 
> 
> 

-- 
Samuel
---
For an independent, transparent, and rigorous evaluation!
I support the Commission d'Évaluation de l'Inria.



Re: [PATCH v2 1/3 gnumach] vm_map: Add comment and assert for vm_map_delete

2024-02-22 Thread Samuel Thibault
Hello,

Damien Zammit, on Thu, 22 Feb 2024 08:24:25 +, wrote:
> + /*
> +  *  Must be called with map lock taken unless refcount is zero
> +  */
> + assert((map->ref_count > 0 && map->lock.want_write) || (map->ref_count == 0));

Please rather use have_write_lock().

(want_write is not enough, by the way: want_upgrade also means we have
a write lock.)
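
I.e. something like this (a sketch, assuming have_write_lock() takes
the lock by address and covers both want_write and want_upgrade):

  assert((map->ref_count > 0 && have_write_lock(&map->lock))
	 || (map->ref_count == 0));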

Samuel



[PATCH v2 3/3 gnumach] kern/gsync: Use vm_map_lookup with keep_map_locked

2024-02-22 Thread Damien Zammit
This prevents a deadlock on SMP where a read lock on the map
is taken in gsync and then the map is locked again inside
vm_map_lookup() while another thread is already waiting for a write
lock, so the second read lock blocks.

This is fixed by removing the initial gsync read lock on the map
but keeping the read lock held upon returning from vm_map_lookup().
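
For the record, the failure mode can be reproduced outside the kernel.
Here is a self-contained analogue using POSIX rwlocks (illustrative
only: it uses the glibc-specific writer-preference attribute, and a
timed lock so the demo reports the blockage instead of hanging):

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_rwlock_t map_lock;

static void *
writer (void *arg)
{
  (void) arg;
  /* Queues behind the reader, like a writer on the vm_map lock.  */
  pthread_rwlock_wrlock (&map_lock);
  pthread_rwlock_unlock (&map_lock);
  return NULL;
}

int
main (void)
{
  pthread_rwlockattr_t attr;
  pthread_t w;
  struct timespec deadline;

  pthread_rwlockattr_init (&attr);
  /* Writer-preferring behaviour, as the Mach map lock effectively has.  */
  pthread_rwlockattr_setkind_np
    (&attr, PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
  pthread_rwlock_init (&map_lock, &attr);

  pthread_rwlock_rdlock (&map_lock);	/* gsync's outer read lock */
  pthread_create (&w, NULL, writer, NULL);
  sleep (1);				/* let the writer queue up */

  clock_gettime (CLOCK_REALTIME, &deadline);
  deadline.tv_sec += 2;
  /* vm_map_lookup() taking the map lock again: */
  if (pthread_rwlock_timedrdlock (&map_lock, &deadline) != 0)
    puts ("second read lock blocked behind the queued writer");
  else
    pthread_rwlock_unlock (&map_lock);

  pthread_rwlock_unlock (&map_lock);
  pthread_join (w, NULL);
  return 0;
}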

Co-Authored-By: Sergey Bugaev 
---
 kern/gsync.c | 19 ++-
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/kern/gsync.c b/kern/gsync.c
index 19190349..31b564ca 100644
--- a/kern/gsync.c
+++ b/kern/gsync.c
@@ -134,11 +134,12 @@ probe_address (vm_map_t map, vm_offset_t addr,
   vm_prot_t rprot;
   boolean_t wired_p;
 
-  if (vm_map_lookup (&map, addr, prot, FALSE, &ver,
+  if (vm_map_lookup (&map, addr, prot, TRUE, &ver,
 		     &vap->obj, &vap->off, &rprot, &wired_p) != KERN_SUCCESS)
 return (-1);
   else if ((rprot & prot) != prot)
 {
+  vm_map_unlock_read (map);
   vm_object_unlock (vap->obj);
   return (-1);
 }
@@ -227,18 +228,13 @@ kern_return_t gsync_wait (task_t task, vm_offset_t addr,
   else if (addr % sizeof (int) != 0)
 return (KERN_INVALID_ADDRESS);
 
-  vm_map_lock_read (task->map);
-
   struct gsync_waiter w;
   struct vm_args va;
   boolean_t remote = task != current_task ();
   int bucket = gsync_prepare_key (task, addr, flags, &w.key, &va);
 
   if (bucket < 0)
-{
-  vm_map_unlock_read (task->map);
-  return (KERN_INVALID_ADDRESS);
-}
+return (KERN_INVALID_ADDRESS);
   else if (remote)
 /* The VM object is returned locked. However, we are about to acquire
  * a sleeping lock for a bucket, so we must not hold any simple
@@ -354,17 +350,12 @@ kern_return_t gsync_wake (task_t task,
   else if (addr % sizeof (int) != 0)
 return (KERN_INVALID_ADDRESS);
 
-  vm_map_lock_read (task->map);
-
   union gsync_key key;
   struct vm_args va;
   int bucket = gsync_prepare_key (task, addr, flags, &key, &va);
 
   if (bucket < 0)
-{
-  vm_map_unlock_read (task->map);
-  return (KERN_INVALID_ADDRESS);
-}
+return (KERN_INVALID_ADDRESS);
   else if (current_task () != task && (flags & GSYNC_MUTATE) != 0)
 /* See above on why we do this. */
 vm_object_reference_locked (va.obj);
@@ -437,6 +428,7 @@ kern_return_t gsync_requeue (task_t task, vm_offset_t src,
   int src_bkt = gsync_prepare_key (task, src, flags, &src_k, &va);
   if (src_bkt < 0)
 return (KERN_INVALID_ADDRESS);
+  vm_map_unlock_read (task->map);
 
   /* Unlock the VM object before the second lookup. */
   vm_object_unlock (va.obj);
@@ -444,6 +436,7 @@ kern_return_t gsync_requeue (task_t task, vm_offset_t src,
   int dst_bkt = gsync_prepare_key (task, dst, flags, &dst_k, &va);
   if (dst_bkt < 0)
 return (KERN_INVALID_ADDRESS);
+  vm_map_unlock_read (task->map);
 
   /* We never create any temporary mappings in 'requeue', so we
* can unlock the VM object right now. */
-- 
2.43.0





[PATCH v2 1/3 gnumach] vm_map: Add comment and assert for vm_map_delete

2024-02-22 Thread Damien Zammit
This will prevent calling vm_map_delete without the map locked
unless ref_count is zero.
---
 vm/vm_map.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/vm/vm_map.c b/vm/vm_map.c
index e454bb2a..f221c532 100644
--- a/vm/vm_map.c
+++ b/vm/vm_map.c
@@ -551,10 +551,12 @@ void vm_map_deallocate(vm_map_t map)
c = --map->ref_count;
	simple_unlock(&map->ref_lock);
 
+   /* Check the refcount */
if (c > 0) {
return;
}
 
+   /* If no more references, call vm_map_delete without locking the map */
projected_buffer_collect(map);
(void) vm_map_delete(map, map->min_offset, map->max_offset);
 
@@ -2028,6 +2030,11 @@ kern_return_t vm_map_delete(
	if (map->pmap == kernel_pmap && (start < kernel_virtual_start || end > kernel_virtual_end))
		panic("vm_map_delete(%lx-%lx) falls in physical memory area!\n", (unsigned long) start, (unsigned long) end);
 
+	/*
+	 *	Must be called with map lock taken unless refcount is zero
+	 */
+	assert((map->ref_count > 0 && map->lock.want_write) || (map->ref_count == 0));
+
/*
 *  Find the start of the region, and clip it
 */
-- 
2.43.0





[PATCH v2 2/3 gnumach] vm_map_lookup: Add parameter for keeping map locked

2024-02-22 Thread Damien Zammit
This adds a parameter called keep_map_locked to vm_map_lookup()
that allows the function to return with the map locked.

This is to prepare for fixing a bug with gsync where the map
is locked twice by mistake.
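
The resulting caller contract looks like this (a fragment only, using
gnumach names; error paths elided):

  /* With keep_map_locked == TRUE, a successful lookup now returns with
     the map still read-locked, so the caller owns the unlock.  */
  if (vm_map_lookup (&map, addr, prot, TRUE, &ver,
		     &vap->obj, &vap->off, &rprot, &wired_p) == KERN_SUCCESS)
    {
      /* ... use the object and offset under the read lock ...  */
      vm_map_unlock_read (map);
    }
  /* On failure, or with keep_map_locked == FALSE, the map comes back
     unlocked, exactly as before.  */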

Co-Authored-By: Sergey Bugaev 
---
 i386/intel/read_fault.c | 4 ++--
 kern/gsync.c| 2 +-
 vm/vm_fault.c   | 4 ++--
 vm/vm_map.c | 9 ++---
 vm/vm_map.h | 2 +-
 5 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/i386/intel/read_fault.c b/i386/intel/read_fault.c
index 0b79e3d8..356145e1 100644
--- a/i386/intel/read_fault.c
+++ b/i386/intel/read_fault.c
@@ -61,7 +61,7 @@ intel_read_fault(
 *  Find the backing store object and offset into it
 *  to begin search.
 */
-	result = vm_map_lookup(&map, vaddr, VM_PROT_READ, &version,
+	result = vm_map_lookup(&map, vaddr, VM_PROT_READ, FALSE, &version,
 				&object, &offset, &prot, &wired);
if (result != KERN_SUCCESS)
return (result);
@@ -133,7 +133,7 @@ intel_read_fault(
vm_offset_t retry_offset;
vm_prot_t   retry_prot;
 
-		result = vm_map_lookup(&map, vaddr, VM_PROT_READ, &version,
+		result = vm_map_lookup(&map, vaddr, VM_PROT_READ, FALSE, &version,
 				&retry_object, &retry_offset, &retry_prot,
 				&wired);
if (result != KERN_SUCCESS) {
diff --git a/kern/gsync.c b/kern/gsync.c
index e73a6cf0..19190349 100644
--- a/kern/gsync.c
+++ b/kern/gsync.c
@@ -134,7 +134,7 @@ probe_address (vm_map_t map, vm_offset_t addr,
   vm_prot_t rprot;
   boolean_t wired_p;
 
-  if (vm_map_lookup (&map, addr, prot, &ver,
+  if (vm_map_lookup (&map, addr, prot, FALSE, &ver,
 		     &vap->obj, &vap->off, &rprot, &wired_p) != KERN_SUCCESS)
 return (-1);
   else if ((rprot & prot) != prot)
diff --git a/vm/vm_fault.c b/vm/vm_fault.c
index c6e28004..d99425a3 100644
--- a/vm/vm_fault.c
+++ b/vm/vm_fault.c
@@ -1213,7 +1213,7 @@ kern_return_t vm_fault(
 *  it to begin the search.
 */
 
-	if ((kr = vm_map_lookup(&map, vaddr, fault_type, &version,
+	if ((kr = vm_map_lookup(&map, vaddr, fault_type, FALSE, &version,
 				&object, &offset,
 				&prot, &wired)) != KERN_SUCCESS) {
goto done;
@@ -1375,7 +1375,7 @@ kern_return_t vm_fault(
 *  take another fault.
 */
 	kr = vm_map_lookup(&map, vaddr,
-			   fault_type & ~VM_PROT_WRITE, &version,
+			   fault_type & ~VM_PROT_WRITE, FALSE, &version,
 			   &retry_object, &retry_offset, &retry_prot,
 			   &wired);
 
diff --git a/vm/vm_map.c b/vm/vm_map.c
index f221c532..47abea1b 100644
--- a/vm/vm_map.c
+++ b/vm/vm_map.c
@@ -4614,8 +4614,9 @@ vm_map_t vm_map_fork(vm_map_t old_map)
  * In order to later verify this lookup, a "version"
  * is returned.
  *
- * The map should not be locked; it will not be
- * locked on exit.  In order to guarantee the
+ * The map should not be locked; it will be
+ * unlocked on exit unless keep_map_locked is set and
+ * the lookup succeeds.  In order to guarantee the
  * existence of the returned object, it is returned
  * locked.
  *
@@ -4628,6 +4629,7 @@ kern_return_t vm_map_lookup(
vm_map_t*var_map,   /* IN/OUT */
vm_offset_t vaddr,
vm_prot_t   fault_type,
+   boolean_t   keep_map_locked,
 
vm_map_version_t*out_version,   /* OUT */
vm_object_t *object,/* OUT */
@@ -4649,7 +4651,8 @@ kern_return_t vm_map_lookup(
 
#define	RETURN(why) \
{ \
-   vm_map_unlock_read(map); \
+   if (!(keep_map_locked && (why == KERN_SUCCESS))) \
+ vm_map_unlock_read(map); \
return(why); \
}
 
diff --git a/vm/vm_map.h b/vm/vm_map.h
index a4949e4e..7e25d9f4 100644
--- a/vm/vm_map.h
+++ b/vm/vm_map.h
@@ -412,7 +412,7 @@ extern kern_return_t	vm_map_inherit(vm_map_t, vm_offset_t, vm_offset_t,
 				       vm_inherit_t);
 
 /* Look up an address */
-extern kern_return_t   vm_map_lookup(vm_map_t *, vm_offset_t, vm_prot_t,
+extern kern_return_t   vm_map_lookup(vm_map_t *, vm_offset_t, vm_prot_t, boolean_t,
  vm_map_version_t *, vm_object_t *,
  vm_offset_t *, vm_prot_t *, boolean_t *);
 /* Find a map entry */
-- 
2.43.0





[PATCH v2 0/3 gnumach] vm_map and gsync changes

2024-02-22 Thread Damien Zammit
Hi,

I addressed the points from the first round of review.
I tested this patchset with Sergey's patch applied on top, but
dropped it from this set so Sergey can mail in his version instead.

It still compiles gnumach with 7 cores using slave_pset.

Thanks,
Damien