Hi technical committee,

I am writing to you again about this topic, to show you another case where
the kernel team's policy regarding patches is not working, and this time one
I am not even involved in, so the issue cannot be dismissed on the grounds
of my involvement.

Geoff Levand, one of Sony's upstream developers of the PS3 Linux port,
recently did the work of providing the kernel team with a configuration file
to add PS3 support to our kernels. This is the first step towards proper
Debian support on the PS3, since the d-i (debian-installer) work largely
depends on it.

Furthermore, he proposed two patches which seem to be important, as you can
see below, and Bastian Blank refused them, asking him to go upstream first.

Well, this is not some random guy providing a patch picked up from some
random location; the Sony team is actively bringing these patches upstream.

Furthermore, the nature of these patches, which I would consider vital for
the usability of the memory-starved PS3, does not justify refusing them. One
allows the PS3 to make use of its full memory instead of being limited to
around 80MB, and the second fixes a memory leak.
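
To make the first patch concrete: instead of always backing the vmemmap
array with a 16M page (an allocation that simply fails on a memory-starved
machine), it selects the page size at boot in htab_init_page_sizes(). The
following standalone sketch only models that selection heuristic; the
function names, the 256M "small machine" figure and the 8G figure are
illustrative, and the real code is in the forwarded patch below.

/* Standalone model (userspace, illustrative names and values) of the
 * page-size selection the first patch adds to htab_init_page_sizes():
 * use 16M pages for the vmemmap backing store only when the CPU supports
 * them AND at least 1G of RAM is present at boot, otherwise fall back to
 * 64K, then 4K pages. */
#include <stdio.h>
#include <stdbool.h>

enum psize { PSIZE_4K, PSIZE_64K, PSIZE_16M };

static enum psize pick_vmemmap_psize(bool cpu_has_16m, bool cpu_has_64k,
                                     unsigned long long boot_ram_bytes)
{
        if (cpu_has_16m && boot_ram_bytes >= 0x40000000ULL)     /* >= 1G */
                return PSIZE_16M;
        if (cpu_has_64k)
                return PSIZE_64K;
        return PSIZE_4K;
}

int main(void)
{
        static const char *name[] = { "4K", "64K", "16M" };

        /* Anything under 1G no longer tries (and fails) to bolt a 16M
         * page for vmemmap. */
        printf("small machine (256M at boot): %s pages\n",
               name[pick_vmemmap_psize(true, true, 256ULL << 20)]);
        printf("large machine (8G at boot):   %s pages\n",
               name[pick_vmemmap_psize(true, true, 8ULL << 30)]);
        return 0;
}

The point is simply that a machine booting with well under 1G of RAM, such
as the PS3, gets 64K (or 4K) pages for vmemmap instead of failing the 16M
allocation and then being unable to hotplug the hypervisor's additional
memory.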

It is clear that the kernel team is not able to make a rational decision on
this topic, and is unthinkingly blocked by a "send upstream first" doctrine
that is too rigid and arrogant.

If they do not have enough manpower to handle this correctly, then maybe
they should not expel people from their team without any reasonable
justification; I would gladly handle this myself.

So, I hope that the technical committee will have the decency and basic
politeness to at least acknowledge this bug report, now that it hurts others
besides me, take this very serious issue in hand, and reach a decision, as
its responsibilities dictate.

Sadly,

Sven Luther

----- Forwarded message from Geoff Levand <[EMAIL PROTECTED]> -----

Subject: Bug#483489: linux-2.6: Optional powerpc64 patches for PS3
Message-ID: <[EMAIL PROTECTED]>
Date: Wed, 28 May 2008 17:26:37 -0700
From: Geoff Levand <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]


Package: linux-2.6
Version: 2.6.25
Severity: normal
Tags: patch

Attached are two patches against the debian linux-2.6-2.6.25
sources that would be nice to apply for the PS3.

 - debian-powerpc64-vmemmap-variable-page-size.diff

   This patch changes vmemmap to use a different region (region 0xf) of the
   address space whose page size can be dynamically configured at boot.

   The problem with the current approach of always using 16M pages is that
   it's not well suited to machines that have small amounts of memory such
   as small partitions on pseries, or PS3's.

   In fact, on the PS3, failure to allocate the 16M page backing vmemmap
   tends to prevent hotplugging the HV's "additional" memory, thus limiting
   the available memory even more, from my experience down to something
   like 80M total, which makes it really not very usable.

 - debian-powerpc64-ps3-gelic-wireless-fix-memory-leak.patch

   This fixes the bug that the I/O buffer is not freed at the driver removal.
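
   As a rough illustration of what this second fix restores (a minimal
   standalone sketch with hypothetical names, not code from the driver):
   the I/O buffer the driver allocates at setup has to be released again
   at removal, otherwise every load/unload of the driver leaks it. The
   actual fix is the single free_page() call added to gelic_wl_free() in
   the patch below.

/* Standalone sketch (userspace analogue, hypothetical names) of the
 * pairing the second patch restores: whatever the driver allocates when
 * it is set up must be released again when it is removed, or every
 * bind/unbind cycle leaks the buffer.  In the real driver the buffer is
 * a whole page, and the fix adds the matching free_page() call to
 * gelic_wl_free(). */
#include <stdio.h>
#include <stdlib.h>

struct wl_dev {                 /* stands in for the driver's wl state */
        void *buf;              /* I/O buffer, lives as long as the device */
};

static struct wl_dev *wl_setup(void)
{
        struct wl_dev *wl = calloc(1, sizeof(*wl));
        if (!wl)
                return NULL;
        wl->buf = calloc(1, 4096);      /* analogue of a page allocation */
        if (!wl->buf) {
                free(wl);
                return NULL;
        }
        return wl;
}

static void wl_free(struct wl_dev *wl)
{
        free(wl->buf);          /* the step that was missing before the fix */
        free(wl);
}

int main(void)
{
        struct wl_dev *wl = wl_setup();
        if (!wl)
                return 1;
        wl_free(wl);
        puts("buffer released on removal, nothing leaked");
        return 0;
}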


-- System Information:
Debian Release: lenny/sid
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: powerpc (ppc64)

Kernel: Linux 2.6.25-3-powerpc64 (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/bash



Add the patch powerpc-vmemmap-variable-page-size.diff to the debian
linux-2.6-2.6.25 source tree.  This is a backport of the patch
merged into 2.6.26.

---
 debian/patches/bugfix/powerpc/powerpc-vmemmap-variable-page-size.diff |  214 ++++++++++
 debian/patches/series/1                                               |    1 
 2 files changed, 215 insertions(+)

--- /dev/null
+++ b/debian/patches/bugfix/powerpc/powerpc-vmemmap-variable-page-size.diff
@@ -0,0 +1,214 @@
+ps3-linux-stable-2.6.25
+  o Backported to 2.6.25.4
+  o Removed DEBUG's
+
+Subject: [RFC] [PATCH] vmemmap fixes to use smaller pages
+
+From: Benjamin Herrenschmidt <[EMAIL PROTECTED]>
+
+This patch changes vmemmap to use a different region (region 0xf) of the
+address space whose page size can be dynamically configured at boot.
+
+The problem with the current approach of always using 16M pages is that
+it's not well suited to machines that have small amounts of memory such
+as small partitions on pseries, or PS3's.
+
+In fact, on the PS3, failure to allocate the 16M page backing vmmemmap
+tends to prevent hotplugging the HV's "additional" memory, thus limiting
+the available memory even more, from my experience down to something
+like 80M total, which makes it really not very useable.
+
+The logic used by my match to choose the vmemmap page size is:
+
+ - If 16M pages are available and there's 1G or more RAM at boot, use that size.
+ - Else if 64K pages are available, use that
+ - Else use 4K pages
+
+---
+ arch/powerpc/mm/hash_utils_64.c     |   28 ++++++++++++++++++++++++++--
+ arch/powerpc/mm/init_64.c           |    8 ++++----
+ arch/powerpc/mm/slb.c               |   14 +++++++++++++-
+ arch/powerpc/mm/slb_low.S           |   16 +++++++++++++---
+ include/asm-powerpc/mmu-hash64.h    |    1 +
+ include/asm-powerpc/pgtable-ppc64.h |   10 +++++-----
+ 6 files changed, 62 insertions(+), 15 deletions(-)
+
+--- a/arch/powerpc/mm/hash_utils_64.c
++++ b/arch/powerpc/mm/hash_utils_64.c
+@@ -93,6 +93,9 @@ unsigned long htab_hash_mask;
+ int mmu_linear_psize = MMU_PAGE_4K;
+ int mmu_virtual_psize = MMU_PAGE_4K;
+ int mmu_vmalloc_psize = MMU_PAGE_4K;
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++int mmu_vmemmap_psize = MMU_PAGE_4K;
++#endif
+ int mmu_io_psize = MMU_PAGE_4K;
+ int mmu_kernel_ssize = MMU_SEGSIZE_256M;
+ int mmu_highuser_ssize = MMU_SEGSIZE_256M;
+@@ -363,11 +366,32 @@ static void __init htab_init_page_sizes(
+       }
+ #endif /* CONFIG_PPC_64K_PAGES */
+ 
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++      /* We try to use 16M pages for vmemmap if that is supported
++       * and we have at least 1G of RAM at boot
++       */
++      if (mmu_psize_defs[MMU_PAGE_16M].shift &&
++          lmb_phys_mem_size() >= 0x40000000)
++              mmu_vmemmap_psize = MMU_PAGE_16M;
++      else if (mmu_psize_defs[MMU_PAGE_64K].shift)
++              mmu_vmemmap_psize = MMU_PAGE_64K;
++      else
++              mmu_vmemmap_psize = MMU_PAGE_4K;
++#endif /* CONFIG_SPARSEMEM_VMEMMAP */
++
+       printk(KERN_DEBUG "Page orders: linear mapping = %d, "
+-             "virtual = %d, io = %d\n",
++             "virtual = %d, io = %d"
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++             ", vmemmap = %d"
++#endif
++             "\n",
+              mmu_psize_defs[mmu_linear_psize].shift,
+              mmu_psize_defs[mmu_virtual_psize].shift,
+-             mmu_psize_defs[mmu_io_psize].shift);
++             mmu_psize_defs[mmu_io_psize].shift
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++             ,mmu_psize_defs[mmu_vmemmap_psize].shift
++#endif
++             );
+ 
+ #ifdef CONFIG_HUGETLB_PAGE
+       /* Init large page size. Currently, we pick 16M or 1M depending
+--- a/arch/powerpc/mm/init_64.c
++++ b/arch/powerpc/mm/init_64.c
+@@ -208,12 +208,12 @@ int __meminit vmemmap_populated(unsigned
+ }
+ 
+ int __meminit vmemmap_populate(struct page *start_page,
+-                                      unsigned long nr_pages, int node)
++                             unsigned long nr_pages, int node)
+ {
+       unsigned long mode_rw;
+       unsigned long start = (unsigned long)start_page;
+       unsigned long end = (unsigned long)(start_page + nr_pages);
+-      unsigned long page_size = 1 << mmu_psize_defs[mmu_linear_psize].shift;
++      unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
+ 
+       mode_rw = _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_COHERENT | PP_RWXX;
+ 
+@@ -235,11 +235,11 @@ int __meminit vmemmap_populate(struct pa
+                       start, p, __pa(p));
+ 
+               mapped = htab_bolt_mapping(start, start + page_size,
+-                                      __pa(p), mode_rw, mmu_linear_psize,
++                                      __pa(p), mode_rw, mmu_vmemmap_psize,
+                                       mmu_kernel_ssize);
+               BUG_ON(mapped < 0);
+       }
+ 
+       return 0;
+ }
+-#endif
++#endif /* CONFIG_SPARSEMEM_VMEMMAP */
+--- a/arch/powerpc/mm/slb.c
++++ b/arch/powerpc/mm/slb.c
+@@ -263,13 +263,19 @@ void slb_initialize(void)
+       extern unsigned int *slb_miss_kernel_load_linear;
+       extern unsigned int *slb_miss_kernel_load_io;
+       extern unsigned int *slb_compare_rr_to_size;
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++      extern unsigned int *slb_miss_kernel_load_vmemmap;
++      unsigned long vmemmap_llp;
++#endif
+ 
+       /* Prepare our SLB miss handler based on our page size */
+       linear_llp = mmu_psize_defs[mmu_linear_psize].sllp;
+       io_llp = mmu_psize_defs[mmu_io_psize].sllp;
+       vmalloc_llp = mmu_psize_defs[mmu_vmalloc_psize].sllp;
+       get_paca()->vmalloc_sllp = SLB_VSID_KERNEL | vmalloc_llp;
+-
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++      vmemmap_llp = mmu_psize_defs[mmu_vmemmap_psize].sllp;
++#endif
+       if (!slb_encoding_inited) {
+               slb_encoding_inited = 1;
+               patch_slb_encoding(slb_miss_kernel_load_linear,
+@@ -281,6 +287,12 @@ void slb_initialize(void)
+ 
+               DBG("SLB: linear  LLP = %04x\n", linear_llp);
+               DBG("SLB: io      LLP = %04x\n", io_llp);
++
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++              patch_slb_encoding(slb_miss_kernel_load_vmemmap,
++                                 SLB_VSID_KERNEL | vmemmap_llp);
++              DBG("SLB: vmemmap LLP = %04lx\n", vmemmap_llp);
++#endif
+       }
+ 
+       get_paca()->stab_rr = SLB_NUM_BOLTED;
+--- a/arch/powerpc/mm/slb_low.S
++++ b/arch/powerpc/mm/slb_low.S
+@@ -47,8 +47,7 @@ _GLOBAL(slb_allocate_realmode)
+        * it to VSID 0, which is reserved as a bad VSID - one which
+        * will never have any pages in it.  */
+ 
+-      /* Check if hitting the linear mapping of the vmalloc/ioremap
+-       * kernel space
++      /* Check if hitting the linear mapping or some other kernel space
+       */
+       bne     cr7,1f
+ 
+@@ -62,7 +61,18 @@ BEGIN_FTR_SECTION
+ END_FTR_SECTION_IFCLR(CPU_FTR_1T_SEGMENT)
+       b       slb_finish_load_1T
+ 
+-1:    /* vmalloc/ioremap mapping encoding bits, the "li" instructions below
++1:
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++      /* Check virtual memmap region. To be patches at kernel boot */
++      cmpldi  cr0,r9,0xf
++      bne     1f
++_GLOBAL(slb_miss_kernel_load_vmemmap)
++      li      r11,0
++      b       6f
++1:
++#endif /* CONFIG_SPARSEMEM_VMEMMAP */
++
++      /* vmalloc/ioremap mapping encoding bits, the "li" instructions below
+        * will be patched by the kernel at boot
+        */
+ BEGIN_FTR_SECTION
+--- a/include/asm-powerpc/mmu-hash64.h
++++ b/include/asm-powerpc/mmu-hash64.h
+@@ -177,6 +177,7 @@ extern struct mmu_psize_def mmu_psize_de
+ extern int mmu_linear_psize;
+ extern int mmu_virtual_psize;
+ extern int mmu_vmalloc_psize;
++extern int mmu_vmemmap_psize;
+ extern int mmu_io_psize;
+ extern int mmu_kernel_ssize;
+ extern int mmu_highuser_ssize;
+--- a/include/asm-powerpc/pgtable-ppc64.h
++++ b/include/asm-powerpc/pgtable-ppc64.h
+@@ -65,15 +65,15 @@
+ 
+ #define VMALLOC_REGION_ID     (REGION_ID(VMALLOC_START))
+ #define KERNEL_REGION_ID      (REGION_ID(PAGE_OFFSET))
++#define VMEMMAP_REGION_ID     (0xfUL)
+ #define USER_REGION_ID                (0UL)
+ 
+ /*
+- * Defines the address of the vmemap area, in the top 16th of the
+- * kernel region.
++ * Defines the address of the vmemap area, in its own region
+  */
+-#define VMEMMAP_BASE (ASM_CONST(CONFIG_KERNEL_START) + \
+-                                      (0xfUL << (REGION_SHIFT - 4)))
+-#define vmemmap ((struct page *)VMEMMAP_BASE)
++#define VMEMMAP_BASE          (VMEMMAP_REGION_ID << REGION_SHIFT)
++#define vmemmap                       ((struct page *)VMEMMAP_BASE)
++
+ 
+ /*
+  * Common bits in a linux-style PTE.  These match the bits in the
--- a/debian/patches/series/1
+++ b/debian/patches/series/1
@@ -16,6 +16,7 @@
 + bugfix/powerpc/oldworld-boot-fix.patch
 + bugfix/powerpc/prep-utah-ide-interrupt.patch
 + bugfix/powerpc/serial.patch
++ bugfix/powerpc/powerpc-vmemmap-variable-page-size.diff
 + bugfix/mips/tulip_mwi_fix.patch
 + bugfix/arm/orion-enawrallo.patch
 + features/arm/ixp4xx-net-drivers.patch


Add the patch ps3-gelic-wireless-fix-memory-leak.patch to the debian
linux-2.6-2.6.25 source tree.  This is a backport of the patch
merged into 2.6.26.

---
 debian/patches/bugfix/powerpc/ps3-gelic-wireless-fix-memory-leak.patch |   18 ++++++++++
 debian/patches/series/1                                                |    1 
 2 files changed, 19 insertions(+)

--- /dev/null
+++ b/debian/patches/bugfix/powerpc/ps3-gelic-wireless-fix-memory-leak.patch
@@ -0,0 +1,18 @@
+This fixes the bug that the I/O buffer is not freed at the driver removal.
+
+Signed-off-by: Masakazu Mokuno <[EMAIL PROTECTED]>
+---
+ drivers/net/ps3_gelic_wireless.c |    2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/net/ps3_gelic_wireless.c
++++ b/drivers/net/ps3_gelic_wireless.c
+@@ -2474,6 +2474,8 @@ static void gelic_wl_free(struct gelic_w
+ 
+       pr_debug("%s: <-\n", __func__);
+ 
++      free_page((unsigned long)wl->buf);
++
+       pr_debug("%s: destroy queues\n", __func__);
+       destroy_workqueue(wl->eurus_cmd_queue);
+       destroy_workqueue(wl->event_queue);
--- a/debian/patches/series/1
+++ b/debian/patches/series/1
@@ -17,6 +17,7 @@
 + bugfix/powerpc/prep-utah-ide-interrupt.patch
 + bugfix/powerpc/serial.patch
 + bugfix/powerpc/powerpc-vmemmap-variable-page-size.diff
++ bugfix/powerpc/ps3-gelic-wireless-fix-memory-leak.patch
 + bugfix/mips/tulip_mwi_fix.patch
 + bugfix/arm/orion-enawrallo.patch
 + features/arm/ixp4xx-net-drivers.patch



----- End forwarded message -----


