[Xenomai-core] [pull request] DEBUG_SYNCH_RELAX fix and some cleanups

2009-11-11 Thread Jan Kiszka
The following changes since commit 3b0152276752561805bf113eaa7b699d93c473c5:
  Bernhard Walle (1):
hal: check CPU frequency

are available in the git repository at:

  git://git.xenomai.org/xenomai-jki.git for-upstream

The last patch in particular is important for us: all hell broke loose here
once we enabled DEBUG_SYNCH_RELAX for our customer. The first one is a resend;
the other two are a cleanup and a minor instrumentation fix.

Jan Kiszka (4):
  hal: Ensure atomicity of rthal_local_irq_disabled
  nucleus: Cosmetic cleanup of lostage_handler
  nucleus: Improve lostage_work instrumentation
  nucleus: Fix endless loop of DEBUG_SYNCH_RELAX

 include/asm-generic/hal.h |   19 ++-
 include/nucleus/thread.h  |1 +
 ksrc/nucleus/shadow.c |   28 +---
 ksrc/nucleus/synch.c  |   11 ---
 4 files changed, 44 insertions(+), 15 deletions(-)

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] [PATCH] scripts: use new 'head -n' syntax.

2009-11-11 Thread Fabian Godehardt
Executing configure in a chroot whose tools follow a modern _POSIX_VERSION
breaks in scripts/prepare-kernel.sh:

  patching file mm/vmalloc.c
  head: `-1' option is obsolete; use `-n 1'
  Try `head --help' for more information.

This patch converts prepare-kernel.sh to the new 'head -n' syntax.

Signed-off-by: Fabian Godehardt f...@emlix.com
---
 scripts/prepare-kernel.sh |4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/prepare-kernel.sh b/scripts/prepare-kernel.sh
index d499563..b052ec3 100755
--- a/scripts/prepare-kernel.sh
+++ b/scripts/prepare-kernel.sh
@@ -408,10 +408,10 @@ if test -r $linux_tree/arch/$linux_arch/include/asm/ipipe.h; then
     asm_ipipe_h=$linux_tree/$linux_include_asm/ipipe.h
 else
     linux_include_asm=include/asm-$linux_arch
-    asm_ipipe_h=`ls $linux_tree/include/asm-{$linux_arch,$xenomai_arch}/ipipe.h 2>/dev/null|head -1`
+    asm_ipipe_h=`ls $linux_tree/include/asm-{$linux_arch,$xenomai_arch}/ipipe.h 2>/dev/null|head -n1`
 fi
 
-adeos_version=`grep '^#define.*IPIPE_ARCH_STRING.*"' $asm_ipipe_h 2>/dev/null|head -1|sed -e 's,.*"\(.*\)"$,\1,'`
+adeos_version=`grep '^#define.*IPIPE_ARCH_STRING.*"' $asm_ipipe_h 2>/dev/null|head -n1|sed -e 's,.*"\(.*\)"$,\1,'`
 
 if test \! x$adeos_version = x; then
if test x$verbose = x1; then
-- 
1.5.3.7


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] rtdm_iomap_to_user with phys addr > 4GB

2009-11-11 Thread Herrera-Bendezu, Luis
 

-Original Message-
From: jan.kis...@web.de [mailto:jan.kis...@web.de] 
Sent: Tuesday, November 10, 2009 4:22 PM
To: Herrera-Bendezu, Luis
Cc: xenomai-core@gna.org
Subject: Re: rtdm_iomap_to_user with phys addr > 4GB

Herrera-Bendezu, Luis wrote:
  
 
 -Original Message-
 From: Jan Kiszka [mailto:jan.kis...@siemens.com] 
 Sent: Tuesday, November 10, 2009 1:55 PM
 To: Herrera-Bendezu, Luis
 Cc: xenomai-core@gna.org
 Subject: Re: rtdm_iomap_to_user with phys addr > 4GB


 Herrera-Bendezu, Luis wrote:
  

 -Original Message-
 From: Jan Kiszka [mailto:jan.kis...@siemens.com] 
 Sent: Tuesday, November 10, 2009 1:13 PM
 To: Herrera-Bendezu, Luis
 Cc: xenomai-core@gna.org
 Subject: Re: rtdm_iomap_to_user with phys addr > 4GB


 Herrera-Bendezu, Luis wrote:
  

 -Original Message-
 From: Jan Kiszka [mailto:jan.kis...@siemens.com] 
 Sent: Tuesday, November 10, 2009 12:03 PM
 To: Herrera-Bendezu, Luis
 Cc: xenomai-core@gna.org
 Subject: Re: rtdm_iomap_to_user with phys addr > 4GB


 Herrera-Bendezu, Luis wrote:
 Hello,

 I am writing an RTDM driver to replace one that uses UIO. 
 The device
 resides in a physical address > 4 GB on a PPC440EPx. The 
 UIO could
 not handle this address so I made a proposal to address it, 
 details at:
 
http://lists.ozlabs.org/pipermail/linuxppc-dev/2009-April/070097.html
 Function rtdm_iomap_to_user() has same issue with the 
 physical I/O
 address
unsigned long src_addr

 I am new to Xenomai and would like to get some ideas on 
 how to solve
 this
 issue.
 I think UIO as well as RTDM suffers from the same problem 
 here: The
 kernel service used to remap the physical memory 
(remap_pfn_range)
 accepts unsigned long, not phys_addr_t. How is this 
 supposed to work?
 Jan

 -- 
 Siemens AG, Corporate Technology, CT SE 2
 Corporate Competence Center Embedded Linux

 Actually, remap_pfn_range() gets passed the physical address left
 shifted by PAGE_SIZE in both UIO and RTDM 
 (xnarch_remap_io_page_range,
 wrap_remap_io_page_range).
 No, the target address is expressed in pages, the source in bytes.

 That is true for rtdm_mmap_to_user but not for 
 rtdm_iomap_to_user. See
 how
 mmap_data struct is set in both functions.
struct rtdm_mmap_data mmap_data =
{ NULL, src_addr, vm_ops, vm_private_data };

 with src_addr = physical I/O address to be mapped, setting
 mmap_data.src_paddr -- are you looking at different code?

 No, that is the code.

But there is nothing shifted, the shifting takes place in 
Xenomai's wrapper.

 Besides this, the key is how remap_pfn_range interprets the source
 address argument.

 I had used UIO with success (as described in link above). 
The equivalent
 code in UIO is (uio.c):
 static int uio_mmap_physical(struct vm_area_struct *vma)
 {
 	struct uio_device *idev = vma->vm_private_data;
 	int mi = uio_find_mem_index(vma);
 	if (mi < 0)
 		return -EINVAL;
 
 	vma->vm_flags |= VM_IO | VM_RESERVED;
 
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
 	return remap_pfn_range(vma,
 			       vma->vm_start,
 			       idev->info->mem[mi].addr >> PAGE_SHIFT,
 			       vma->vm_end - vma->vm_start,
 			       vma->vm_page_prot);
 }
 
 where idev->info->mem[mi].addr, mem[] is the list of 
mappable regions.
 Note that for UIO, the user application needs to mmap these 
regions to
 user space. This is a step that is not needed on RTDM, right?

OK, now I got my mistake: Confused by the wrong argument names of our
wrap_remap_io_page_range (and probably others) I thought that the
destination is given as page number, not the source.

But before adding some fancy new service for this use case, I'd like to
understand how common it actually is (crazy embedded designs 
tend to pop
up and deprecate faster than such APIs...).
I do not think this is a new service but a limitation in the design.
The kernel supports it (application can mmap the device using /dev/mem)
and the PPC (440EPx in particular) has PCI and internal peripherals
located at addresses above 4 GB (I2C, SPI, etc.).

And what was the final conclusion on LKML? As far as I understood the
UIO maintainer, the proposal was rejected. Any different follow-ups on
this that I missed? Of course, if you have a special design you can
always patch your kernel and Xenomai to fit these special purposes. But
for upstream support, kernel or Xenomai, it requires a clean and
portable model.
There were no follow-ups and the reply concerning the required changes
was:
I guess you'd have to look at the whole memory management stuff of each
 architecture to find out which kind of memory can be mapped with addresses
 above unsigned long. Hardware often needs more than one contigous pages.
 It might well be possible that a certain arch could have RAM for user
 virtual addresses above 4GB, but no hardware. I don't know PPC well enough
 to say anything about its behaviour.
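
For reference, here is a minimal sketch of the pfn-based mapping pattern under
discussion (hypothetical code, not the UIO or RTDM API): remap_pfn_range()
takes the source as a page frame number, so a physical address above 4 GB
stays expressible on a 32-bit kernel as long as the right shift is applied to
a 64-bit phys_addr_t before anything is narrowed to unsigned long.

#include <linux/mm.h>

/* Hypothetical mmap helper; 'base' is the device's physical address. */
static int demo_mmap_phys(struct vm_area_struct *vma, phys_addr_t base)
{
	/* Shift on the 64-bit type; the resulting pfn fits an unsigned long. */
	unsigned long pfn = base >> PAGE_SHIFT;

	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

	return remap_pfn_range(vma, vma->vm_start, pfn,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}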

In the mean time, 

Re: [Xenomai-core] [PATCH v3 9/9] nucleus: Include all heaps in statistics

2009-11-11 Thread Jan Kiszka
Jan Kiszka wrote:
 Philippe Gerum wrote:
 On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
 This extends /proc/xenomai/heap with statistics about all currently used
 heaps. It takes care to flush nklock while iterating over this potentially
 long list.

 Signed-off-by: Jan Kiszka jan.kis...@siemens.com
 ---

  include/nucleus/heap.h|   12 +++-
  ksrc/drivers/ipc/iddp.c   |3 +
  ksrc/drivers/ipc/xddp.c   |6 ++
  ksrc/nucleus/heap.c   |  131 
 -
  ksrc/nucleus/module.c |2 -
  ksrc/nucleus/pod.c|5 +-
  ksrc/nucleus/shadow.c |5 +-
  ksrc/skins/native/heap.c  |6 +-
  ksrc/skins/native/pipe.c  |4 +
  ksrc/skins/native/queue.c |6 +-
  ksrc/skins/posix/shm.c|4 +
  ksrc/skins/psos+/rn.c |6 +-
  ksrc/skins/rtai/shm.c |7 ++
  ksrc/skins/vrtx/heap.c|6 +-
  ksrc/skins/vrtx/syscall.c |3 +
  15 files changed, 169 insertions(+), 37 deletions(-)

 diff --git a/include/nucleus/heap.h b/include/nucleus/heap.h
 index 44db738..f653cd7 100644
 --- a/include/nucleus/heap.h
 +++ b/include/nucleus/heap.h
 @@ -115,6 +115,10 @@ typedef struct xnheap {

   XNARCH_DECL_DISPLAY_CONTEXT();

 + xnholder_t stat_link;   /* Link in heapq */
 +
 + char name[48];
 s,48,XNOBJECT_NAME_LEN
 
 OK, but XNOBJECT_NAME_LEN+16 (due to class prefixes and additional
 information like the minor ID).
 
 +
  } xnheap_t;

  extern xnheap_t kheap;
 @@ -202,7 +206,8 @@ void xnheap_cleanup_proc(void);

  int xnheap_init_mapped(xnheap_t *heap,
  u_long heapsize,
 -int memflags);
 +int memflags,
 +const char *name, ...);

 The va_list is handy, but this breaks the common pattern used throughout
 the rest of the nucleus, based on passing pre-formatted labels. So
 either we make all creation calls use va_lists (but xnthread would need
 more work), or we make xnheap_init_mapped use the not-so-handy current
 form.

 Actually, providing xnheap_set_name() and a name parameter/va_list to
 xnheap_init* is one too many, this clutters an inner interface
 uselessly. The latter should go away, assuming that anon heaps may still
 exist.
 
 If we want to print all heaps, we should at least set a name indicating
 their class. And therefore we need to pass a descriptive name along with
 /every/ heap initialization. Forcing the majority of xnheap_init users
 to additionally issue xnheap_set_name is the clutter I wanted to
 avoid. Only a minority needs this split-up, and therefore you see both
 interfaces in my patch.
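
For context, a minimal sketch of the kind of helper being discussed
(illustrative only, assuming the name[] field added above; not necessarily
the exact code of the patch):

#include <stdarg.h>

void xnheap_set_name(xnheap_t *heap, const char *name, ...)
{
	va_list args;

	va_start(args, name);
	/* vsnprintf is available in kernel context (linux/kernel.h). */
	vsnprintf(heap->name, sizeof(heap->name), name, args);
	va_end(args);
}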
 
  void xnheap_destroy_mapped(xnheap_t *heap,
  void (*release)(struct xnheap *heap),
 @@ -224,7 +229,10 @@ void xnheap_destroy_mapped(xnheap_t *heap,
  int xnheap_init(xnheap_t *heap,
   void *heapaddr,
   u_long heapsize,
 - u_long pagesize);
 + u_long pagesize,
 + const char *name, ...);
 +
 +void xnheap_set_name(xnheap_t *heap, const char *name, ...);

  void xnheap_destroy(xnheap_t *heap,
   void (*flushfn)(xnheap_t *heap,
 diff --git a/ksrc/drivers/ipc/iddp.c b/ksrc/drivers/ipc/iddp.c
 index a407946..b6382f1 100644
 --- a/ksrc/drivers/ipc/iddp.c
 +++ b/ksrc/drivers/ipc/iddp.c
 @@ -559,7 +559,8 @@ static int __iddp_bind_socket(struct rtipc_private 
 *priv,
   }

   ret = xnheap_init(sk->privpool,
 -   poolmem, poolsz, XNHEAP_PAGE_SIZE);
 +   poolmem, poolsz, XNHEAP_PAGE_SIZE,
 +   "ippd: %d", port);
   if (ret) {
   xnarch_free_host_mem(poolmem, poolsz);
   goto fail;
 diff --git a/ksrc/drivers/ipc/xddp.c b/ksrc/drivers/ipc/xddp.c
 index f62147a..a5dafef 100644
 --- a/ksrc/drivers/ipc/xddp.c
 +++ b/ksrc/drivers/ipc/xddp.c
 @@ -703,7 +703,7 @@ static int __xddp_bind_socket(struct rtipc_private 
 *priv,
   }

   ret = xnheap_init(sk->privpool,
 -   poolmem, poolsz, XNHEAP_PAGE_SIZE);
 +   poolmem, poolsz, XNHEAP_PAGE_SIZE, "");
   if (ret) {
   xnarch_free_host_mem(poolmem, poolsz);
   goto fail;
 @@ -746,6 +746,10 @@ static int __xddp_bind_socket(struct rtipc_private 
 *priv,
   sk->minor = ret;
   sa->sipc_port = ret;
   sk->name = *sa;
 +
 + if (poolsz > 0)
 + 	xnheap_set_name(sk->bufpool, "xddp: %d", sa->sipc_port);
 +
   /* Set default destination if unset at binding time. */
   if (sk->peer.sipc_port < 0)
   	sk->peer = *sa;
 diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
 index 96c46f8..793d1c5 100644
 --- a/ksrc/nucleus/heap.c
 +++ b/ksrc/nucleus/heap.c
 @@ -76,6 +76,9 @@ EXPORT_SYMBOL_GPL(kheap);
  xnheap_t kstacks;/* Private stack pool */
  #endif

 +static DEFINE_XNQUEUE(heapq);/* Heap 

Re: [Xenomai-core] rtdm_iomap_to_user with phys addr > 4GB

2009-11-11 Thread Jan Kiszka
Herrera-Bendezu, Luis wrote:
  
 
 -Original Message-
 From: jan.kis...@web.de [mailto:jan.kis...@web.de] 
 Sent: Tuesday, November 10, 2009 4:22 PM
 To: Herrera-Bendezu, Luis
 Cc: xenomai-core@gna.org
 Subject: Re: rtdm_iomap_to_user with phys addr > 4GB

 Herrera-Bendezu, Luis wrote:
  

 -Original Message-
 From: Jan Kiszka [mailto:jan.kis...@siemens.com] 
 Sent: Tuesday, November 10, 2009 1:55 PM
 To: Herrera-Bendezu, Luis
 Cc: xenomai-core@gna.org
 Subject: Re: rtdm_iomap_to_user with phys addr > 4GB


 Herrera-Bendezu, Luis wrote:
  

 -Original Message-
 From: Jan Kiszka [mailto:jan.kis...@siemens.com] 
 Sent: Tuesday, November 10, 2009 1:13 PM
 To: Herrera-Bendezu, Luis
 Cc: xenomai-core@gna.org
 Subject: Re: rtdm_iomap_to_user with phys addr > 4GB


 Herrera-Bendezu, Luis wrote:
  

 -Original Message-
 From: Jan Kiszka [mailto:jan.kis...@siemens.com] 
 Sent: Tuesday, November 10, 2009 12:03 PM
 To: Herrera-Bendezu, Luis
 Cc: xenomai-core@gna.org
 Subject: Re: rtdm_iomap_to_user with phys addr > 4GB


 Herrera-Bendezu, Luis wrote:
 Hello,

 I am writing an RTDM driver to replace one that uses UIO. 
 The device
 resides in a physical address > 4 GB on a PPC440EPx. The 
 UIO could
 not handle this address so I made a proposal to address it, 
 details at:
 http://lists.ozlabs.org/pipermail/linuxppc-dev/2009-April/070097.html
 Function rtdm_iomap_to_user() has same issue with the 
 physical I/O
 address
unsigned long src_addr

 I am new to Xenomai and would like to get some ideas on 
 how to solve
 this
 issue.
 I think UIO as well as RTDM suffers from the same problem 
 here: The
 kernel service used to remap the physical memory 
 (remap_pfn_range)
 accepts unsigned long, not phys_addr_t. How is this 
 supposed to work?
 Jan

 -- 
 Siemens AG, Corporate Technology, CT SE 2
 Corporate Competence Center Embedded Linux

 Actually, remap_pfn_range() gets passed the physical address left
 shifted by PAGE_SIZE in both UIO and RTDM 
 (xnarch_remap_io_page_range,
 wrap_remap_io_page_range).
 No, the target address is expressed in pages, the source in bytes.

 That is true for rtdm_mmap_to_user but not for 
 rtdm_iomap_to_user. See
 how
 mmap_data struct is set in both functions.
struct rtdm_mmap_data mmap_data =
{ NULL, src_addr, vm_ops, vm_private_data };

 with src_addr = physical I/O address to be mapped, setting
 mmap_data.src_paddr -- are you looking at different code?

 No, that is the code.
 But there is nothing shifted, the shifting takes place in 
 Xenomai's wrapper.

 Besides this, the key is how remap_pfn_range interprets the source
 address argument.

 I had used UIO with success (as described in link above). 
 The equivalent
 code in UIO is (uio.c):
 static int uio_mmap_physical(struct vm_area_struct *vma)
 {
 	struct uio_device *idev = vma->vm_private_data;
 	int mi = uio_find_mem_index(vma);
 	if (mi < 0)
 		return -EINVAL;
 
 	vma->vm_flags |= VM_IO | VM_RESERVED;
 
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
 	return remap_pfn_range(vma,
 			       vma->vm_start,
 			       idev->info->mem[mi].addr >> PAGE_SHIFT,
 			       vma->vm_end - vma->vm_start,
 			       vma->vm_page_prot);
 }

 where idev->info->mem[mi].addr, mem[] is the list of 
 mappable regions.
 Note that for UIO, the user application needs to mmap these 
 regions to
 user space. This is a step that is not needed on RTDM, right?
 OK, now I got my mistake: Confused by the wrong argument names of our
 wrap_remap_io_page_range (and probably others) I thought that the
 destination is given as page number, not the source.

 But before adding some fancy new service for this use case, I'd like to
 understand how common it actually is (crazy embedded designs 
 tend to pop
 up and deprecate faster than such APIs...).
 I do not think this is a new service but a limitation in the design.
 The kernel supports it (application can mmap the device using /dev/mem)
 and the PPC (440EPx in particular) has PCI and internal peripherals
 located at addresses above 4 GB (I2C, SPI, etc.).

I think /dev/mem works by chance as it uses off_t to address the source,
and that is 64 bit even on 32 bit hosts.

But I just collected the confirmation that this extension of the PPC's
physical address range is indeed an increasingly common thing. It's
still a fairly new one, so the kernel is obviously also still in the
conversion process. Maybe poking those people again makes some sense.
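
To illustrate that path, here is a minimal user-space sketch (hypothetical;
it assumes the program is built with -D_FILE_OFFSET_BITS=64 so that off_t is
64-bit on a 32-bit host, and PHYS_BASE is only a placeholder for the device's
physical address):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define PHYS_BASE 0x100000000ULL	/* placeholder, above 4 GB */
#define MAP_LEN   4096

int main(void)
{
	int fd = open("/dev/mem", O_RDWR | O_SYNC);
	void *regs;

	if (fd < 0)
		return 1;

	/* With a 64-bit off_t, an offset above 4 GB survives the call. */
	regs = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, (off_t)PHYS_BASE);
	if (regs == MAP_FAILED) {
		close(fd);
		return 1;
	}

	/* ... access the device registers here ... */

	munmap(regs, MAP_LEN);
	close(fd);
	return 0;
}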

 And what was the final conclusion on LKML? As far as I understood the
 UIO maintainer, the proposal was rejected. Any different follow-ups on
 this that I missed? Of course, if you have a special design you can
 always patch your kernel and Xenomai to fit these special purposes. But
 for upstream support, kernel or Xenomai, it requires a clean and
 portable model.
 There were no follow-ups and the reply concerning the required 

Re: [Xenomai-core] rtdm_iomap_to_user with phys addr > 4GB

2009-11-11 Thread Herrera-Bendezu, Luis
 

-Original Message-
From: Jan Kiszka [mailto:jan.kis...@siemens.com] 
Sent: Wednesday, November 11, 2009 9:06 AM
To: Herrera-Bendezu, Luis
Cc: xenomai-core@gna.org
Subject: Re: rtdm_iomap_to_user with phys addr > 4GB


Herrera-Bendezu, Luis wrote:
  
 
 -Original Message-
 From: jan.kis...@web.de [mailto:jan.kis...@web.de] 
 Sent: Tuesday, November 10, 2009 4:22 PM
 To: Herrera-Bendezu, Luis
 Cc: xenomai-core@gna.org
 Subject: Re: rtdm_iomap_to_user with phys addr > 4GB

 Herrera-Bendezu, Luis wrote:
  

 -Original Message-
 From: Jan Kiszka [mailto:jan.kis...@siemens.com] 
 Sent: Tuesday, November 10, 2009 1:55 PM
 To: Herrera-Bendezu, Luis
 Cc: xenomai-core@gna.org
 Subject: Re: rtdm_iomap_to_user with phys addr > 4GB


 Herrera-Bendezu, Luis wrote:
  

 -Original Message-
 From: Jan Kiszka [mailto:jan.kis...@siemens.com] 
 Sent: Tuesday, November 10, 2009 1:13 PM
 To: Herrera-Bendezu, Luis
 Cc: xenomai-core@gna.org
 Subject: Re: rtdm_iomap_to_user with phys addr > 4GB


 Herrera-Bendezu, Luis wrote:
  

 -Original Message-
 From: Jan Kiszka [mailto:jan.kis...@siemens.com] 
 Sent: Tuesday, November 10, 2009 12:03 PM
 To: Herrera-Bendezu, Luis
 Cc: xenomai-core@gna.org
 Subject: Re: rtdm_iomap_to_user with phys addr > 4GB


 Herrera-Bendezu, Luis wrote:
 Hello,

 I am writing an RTDM driver to replace one that uses UIO. 
 The device
 resides in a physical address > 4 GB on a PPC440EPx. The 
 UIO could
 not handle this address so I made a proposal to address it, 
 details at:
 
http://lists.ozlabs.org/pipermail/linuxppc-dev/2009-April/070097.html
 Function rtdm_iomap_to_user() has same issue with the 
 physical I/O
 address
unsigned long src_addr

 I am new to Xenomai and would like to get some ideas on 
 how to solve
 this
 issue.
 I think UIO as well as RTDM suffers from the same problem 
 here: The
 kernel service used to remap the physical memory 
 (remap_pfn_range)
 accepts unsigned long, not phys_addr_t. How is this 
 supposed to work?
 Jan

 -- 
 Siemens AG, Corporate Technology, CT SE 2
 Corporate Competence Center Embedded Linux

 Actually, remap_pfn_range() gets passed the physical 
address left
 shifted by PAGE_SIZE in both UIO and RTDM 
 (xnarch_remap_io_page_range,
 wrap_remap_io_page_range).
 No, the target address is expressed in pages, the 
source in bytes.

 That is true for rtdm_mmap_to_user but not for 
 rtdm_iomap_to_user. See
 how
 mmap_data struct is set in both functions.
struct rtdm_mmap_data mmap_data =
{ NULL, src_addr, vm_ops, vm_private_data };

 with src_addr = physical I/O address to be mapped, setting
 mmap_data.src_paddr -- are you looking at different code?

 No, that is the code.
 But there is nothing shifted, the shifting takes place in 
 Xenomai's wrapper.

 Besides this, the key is how remap_pfn_range interprets the source
 address argument.

 I had used UIO with success (as described in link above). 
 The equivalent
 code in UIO is (uio.c):
 static int uio_mmap_physical(struct vm_area_struct *vma)
 {
	struct uio_device *idev = vma->vm_private_data;
	int mi = uio_find_mem_index(vma);
	if (mi < 0)
		return -EINVAL;

	vma->vm_flags |= VM_IO | VM_RESERVED;

	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

	return remap_pfn_range(vma,
			       vma->vm_start,
			       idev->info->mem[mi].addr >> PAGE_SHIFT,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
 }

 where idev->info->mem[mi].addr, mem[] is the list of 
 mappable regions.
 Note that for UIO, the user application needs to mmap these 
 regions to
 user space. This is a step that is not needed on RTDM, right?
 OK, now I got my mistake: Confused by the wrong argument 
names of our
 wrap_remap_io_page_range (and probably others) I thought that the
 destination is given as page number, not the source.

 But before adding some fancy new service for this use case, 
I'd like to
 understand how common it actually is (crazy embedded designs 
 tend to pop
 up and deprecate faster than such APIs...).
 I do not think this is a new service but a limitation in the design.
 The kernel supports it (application can mmap the device 
using /dev/mem)
 and the PPC (440EPx in particular) has PCI and internal peripherals
 located at addresses above 4 GB (I2C, SPI, etc.).

I think /dev/mem works by chance as it uses off_t to address 
the source,
and that is 64 bit even on 32 bit hosts.

But I just collected the confirmation that this extension of the PPC's
physical address range is indeed an increasingly common thing. It's
still a fairly new one, so the kernel is obviously also still in the
conversion process. Maybe poking those people again makes some sense.

 And what was the final conclusion on LKML? As far as I 
understood the
 UIO maintainer, the proposal was rejected. Any different 
follow-ups on
 this that I missed? Of course, if you have a special design you