On 11-Aug-23 11:56 AM, Huang, Ying wrote:
> Hi, Rao,
>
> Bharata B Rao writes:
>
>> On 24-Jul-23 11:28 PM, Andrew Morton wrote:
>>> On Fri, 21 Jul 2023 14:15:31 +1000 Alistair Popple
>>> wrote:
>>>
>>>> Thanks for this Huang, I had
On 24-Jul-23 11:28 PM, Andrew Morton wrote:
> On Fri, 21 Jul 2023 14:15:31 +1000 Alistair Popple wrote:
>
>> Thanks for this Huang, I had been hoping to take a look at it this week
>> but have run out of time. I'm keen to do some testing with it as well.
>
> Thanks. I'll queue this in
On Wed, Apr 07, 2021 at 08:28:07AM +1000, Dave Chinner wrote:
> On Mon, Apr 05, 2021 at 11:18:48AM +0530, Bharata B Rao wrote:
>
> > As an alternative approach, I have this below hack that does lazy
> > list_lru creation. The memcg-specific list is created and initial
On Thu, Apr 15, 2021 at 08:54:43AM +0200, Michal Hocko wrote:
> On Thu 15-04-21 10:53:00, Bharata B Rao wrote:
> > On Wed, Apr 07, 2021 at 08:28:07AM +1000, Dave Chinner wrote:
> > >
> > > Another approach may be to identify filesystem types that do not
> >
ual filesystems that expose system information do not really
> need full memcg awareness because they are generally only visible to
> a single memcg instance...
Would something like below be appropriate?
From f314083ad69fde2a420a1b74febd6d3f7a25085f Mon Sep 17 00:00:00 2001
From: Bharata B Rao
On Wed, Apr 07, 2021 at 01:54:48PM +0200, Michal Hocko wrote:
> On Mon 05-04-21 11:18:48, Bharata B Rao wrote:
> > Hi,
> >
> > When running 1 (more-or-less-empty-)containers on a bare-metal Power9
> > server(160 CPUs, 2 NUMA nodes, 256G memory), it is seen
On Wed, Apr 07, 2021 at 01:07:27PM +0300, Kirill Tkhai wrote:
> > Here is how the calculation turns out to be in my setup:
> >
> > Number of possible NUMA nodes = 2
> > Number of mounts per container = 7 (Check below to see which are these)
> > Number of list creation requests per mount = 2
> >
On Wed, Apr 07, 2021 at 08:28:07AM +1000, Dave Chinner wrote:
> On Mon, Apr 05, 2021 at 11:18:48AM +0530, Bharata B Rao wrote:
> > Hi,
> >
> > When running 1 (more-or-less-empty-)containers on a bare-metal Power9
> > server(160 CPUs, 2 NUMA nodes, 256G memo
On Mon, Apr 05, 2021 at 11:38:44AM -0700, Roman Gushchin wrote:
> > > @@ -534,7 +521,17 @@ static void memcg_drain_list_lru_node(struct
> > > list_lru *lru, int nid,
> > > spin_lock_irq(&nlru->lock);
> > >
> > > src = list_lru_from_memcg_idx(nlru, src_idx);
> > > + if (!src)
> > >
On Mon, Apr 05, 2021 at 11:08:26AM -0700, Yang Shi wrote:
> On Sun, Apr 4, 2021 at 10:49 PM Bharata B Rao wrote:
> >
> > Hi,
> >
> > When running 1 (more-or-less-empty-)containers on a bare-metal Power9
> > server(160 CPUs, 2 NUMA nodes, 256G memory), it is
is
appreciated. Meanwhile the patch looks like below:
From 9444a0c6734c2853057b1f486f85da2c409fdc84 Mon Sep 17 00:00:00 2001
From: Bharata B Rao
Date: Wed, 31 Mar 2021 18:21:45 +0530
Subject: [PATCH 1/1] mm: list_lru: Allocate list_lru_one only when required.
Don't pre-allocate list_lru_one l
On Wed, Jan 27, 2021 at 12:04:01PM +0100, Vlastimil Babka wrote:
> On 1/27/21 10:10 AM, Christoph Lameter wrote:
> > On Tue, 26 Jan 2021, Will Deacon wrote:
> >
> >> > Hm, but booting the secondaries is just a software (kernel) action? They
> >> > are
> >> > already physically there, so it seems
On Fri, Jan 22, 2021 at 02:05:47PM +0100, Jann Horn wrote:
> On Thu, Jan 21, 2021 at 7:19 PM Vlastimil Babka wrote:
> > On 1/21/21 11:01 AM, Christoph Lameter wrote:
> > > On Thu, 21 Jan 2021, Bharata B Rao wrote:
> > >
> > >> > The problem is that cal
On Fri, Jan 22, 2021 at 01:03:57PM +0100, Vlastimil Babka wrote:
> On 1/22/21 9:03 AM, Vincent Guittot wrote:
> > On Thu, 21 Jan 2021 at 19:19, Vlastimil Babka wrote:
> >>
> >> On 1/21/21 11:01 AM, Christoph Lameter wrote:
> >> > O
On Wed, Jan 20, 2021 at 06:36:31PM +0100, Vincent Guittot wrote:
> Hi,
>
> On Wed, 18 Nov 2020 at 09:28, Bharata B Rao wrote:
> >
> > The page order of the slab that gets chosen for a given slab
> > cache depends on the number of objects that can be fit in the
increasing the chances of choosing
a lower conservative page order for the slab.
Signed-off-by: Bharata B Rao
---
This is a generic change and I am unsure how it would affect
other archs, but as a start, here are some numbers from
PowerPC pseries KVM guest with and without this patch:
This table
On Thu, Nov 05, 2020 at 05:47:03PM +0100, Vlastimil Babka wrote:
> On 10/28/20 6:50 AM, Bharata B Rao wrote:
> > slub_max_order
> > --
> > The most promising tunable that shows consistent reduction in slab memory
> > is slub_max_order. Here is a table tha
On Wed, Oct 28, 2020 at 05:07:57PM -0700, Roman Gushchin wrote:
> On Wed, Oct 28, 2020 at 11:20:30AM +0530, Bharata B Rao wrote:
> > I have mostly looked at reducing the slab memory consumption here.
> > But I do understand that default tunable values have been arrived
>
Hi,
On POWER systems, where 64K PAGE_SIZE is default, I see that slub
consumes higher amount of memory compared to any 4K page-size system.
While slub is obviously going to consume more memory on 64K page-size
systems compared to 4K as slabs are allocated in page-size granularity,
I want to check
On Fri, Oct 09, 2020 at 11:45:51AM -0700, Roman Gushchin wrote:
> On Fri, Oct 09, 2020 at 11:34:23AM +0530, Bharata B Rao wrote:
>
> Hi Bharata,
>
> > Object cgroup charging is done for all the objects during
> > allocation, but during freeing, uncharging ends up ha
memcg_slab_free_hook() to take care of bulk uncharging.
Signed-off-by: Bharata B Rao
---
mm/slab.c | 2 +-
mm/slab.h | 42 +++---
mm/slub.c | 3 ++-
3 files changed, 30 insertions(+), 17 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index f658e86ec8cee
7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long
> gpa, struct kvm *kvm)
>
> dpage = pfn_to_page(uvmem_pfn);
> dpage->zone_device_data = pvt;
> - get_page(dpage);
> + init_page_count(dpage);
The powerpc change looks good. Passes a quick sanity test of
booting/rebooting a secure guest on Power.
Tested-by: Bharata B Rao
Regards,
Bharata.
On Tue, Sep 01, 2020 at 08:52:05AM -0400, Pavel Tatashin wrote:
> On Tue, Sep 1, 2020 at 1:28 AM Bharata B Rao wrote:
> >
> > On Fri, Aug 28, 2020 at 12:47:03PM -0400, Pavel Tatashin wrote:
> > > There appears to be another problem that is related to the
> > >
On Fri, Aug 28, 2020 at 12:47:03PM -0400, Pavel Tatashin wrote:
> There appears to be another problem that is related to the
> cgroup_mutex -> mem_hotplug_lock deadlock described above.
>
> In the original deadlock that I described, the workaround is to
> replace crash dump from piping to Linux
uring that time, not in write
> mode since the virtual memory layout is not impacted, and
> kvm->arch.uvmem_lock prevents concurrent operation on the secure device.
>
> Cc: Ram Pai
> Cc: Bharata B Rao
> Cc: Paul Mackerras
> Signed-off-by: Laurent Dufour
> ---
> ar
;
> + mig.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
>
> mutex_lock(&kvm->arch.uvmem_lock);
For the kvmppc changes above,
Reviewed-by: Bharata B Rao
On Fri, Jul 17, 2020 at 12:44:00PM +1000, Nicholas Piggin wrote:
> Excerpts from Nicholas Piggin's message of July 17, 2020 12:08 pm:
> > Excerpts from Qian Cai's message of July 17, 2020 3:27 am:
> >> On Fri, Jul 03, 2020 at 11:06:05AM +0530, Bharata B Rao wrote:
> >
On Thu, Jul 09, 2020 at 09:57:10AM -0700, Ralph Campbell wrote:
> When migrating system memory to device private memory, if the source
> address range is a valid VMA range and there is no memory or a zero page,
> the source PFN array is marked as valid but with no PFN. This lets the
> device
dst = _pfn;
> mig.src_owner = _uvmem_pgmap;
> + mig.dir = MIGRATE_VMA_FROM_DEVICE_PRIVATE;
Reviewed-by: Bharata B Rao
for the above kvmppc change.
On Mon, Jul 06, 2020 at 03:23:42PM -0700, Ralph Campbell wrote:
> The goal for this series is to avoid device private memory TLB
> invalidations when migrating a range of addresses from system
> memory to device private memory and some of those pages have already
> been migrated. The approach
mmap_sem is held in read mode during that time, not in write
> mode since the virtual memory layout is not impacted, and
> kvm->arch.uvmem_lock prevents concurrent operation on the secure device.
>
> Cc: Ram Pai
> Cc: Bharata B Rao
> Cc: Paul Mackerras
o manually patch mm/memremap.c instead of kernel/memremap.c
though)
For the series,
Tested-by: Bharata B Rao
On Fri, Aug 16, 2019 at 08:54:30AM +0200, Christoph Hellwig wrote:
> Hi Dan and Jason,
>
> Bharata has been working on secure page management for kvmppc guests,
> and one I thing I noticed is that he had to fake up a struct device
> just so that it could be passed to the devm_memremap_pages
>
On Wed, Aug 14, 2019 at 08:11:50AM +0200, Christoph Hellwig wrote:
> On Tue, Aug 13, 2019 at 10:26:11AM +0530, Bharata B Rao wrote:
> > Yes, this patchset works non-modular and with kvm-hv as module, it
> > works with devm_memremap_pages_release() and release_mem_region() in the
On Mon, Aug 12, 2019 at 05:00:12PM +0200, Christoph Hellwig wrote:
> On Mon, Aug 12, 2019 at 08:20:58PM +0530, Bharata B Rao wrote:
> > On Sun, Aug 11, 2019 at 10:12:47AM +0200, Christoph Hellwig wrote:
> > > The kvmppc ultravisor code wants a device private memory pool that is
On Sun, Aug 11, 2019 at 10:12:47AM +0200, Christoph Hellwig wrote:
> The kvmppc ultravisor code wants a device private memory pool that is
> system wide and not attached to a device. Instead of faking up one
> provide a low-level memremap_pages for it. Note that this function is
> not exported,
On Tue, May 21, 2019 at 12:55:49AM +1000, Nicholas Piggin wrote:
> Bharata B Rao's on May 21, 2019 12:29 am:
> > On Mon, May 20, 2019 at 01:50:35PM +0530, Bharata B Rao wrote:
> >> On Mon, May 20, 2019 at 05:00:21PM +1000, Nicholas Piggin wrote:
> >> > Bharata B
On Mon, May 20, 2019 at 01:50:35PM +0530, Bharata B Rao wrote:
> On Mon, May 20, 2019 at 05:00:21PM +1000, Nicholas Piggin wrote:
> > Bharata B Rao's on May 20, 2019 3:56 pm:
> > > On Mon, May 20, 2019 at 02:48:35PM +1000, Nicholas Piggin wrote:
> > &
On Mon, May 20, 2019 at 05:00:21PM +1000, Nicholas Piggin wrote:
> Bharata B Rao's on May 20, 2019 3:56 pm:
> > On Mon, May 20, 2019 at 02:48:35PM +1000, Nicholas Piggin wrote:
> >> >> > git bisect points to
> >> >> >
> >> >> > commit 4231aba000f5a4583dd9f67057aadb68c3eca99d
> >> >> > Author:
On Mon, May 20, 2019 at 02:48:35PM +1000, Nicholas Piggin wrote:
> >> > git bisect points to
> >> >
> >> > commit 4231aba000f5a4583dd9f67057aadb68c3eca99d
> >> > Author: Nicholas Piggin
> >> > Date: Fri Jul 27 21:48:17 2018 +1000
> >> >
> >> > powerpc/64s: Fix page table fragment refcount
On Mon, May 20, 2019 at 12:02:23PM +1000, Michael Ellerman wrote:
> Bharata B Rao writes:
> > On Thu, May 16, 2019 at 07:44:20PM +0530, srikanth wrote:
> >> Hello,
> >>
> >> On power9 host, performing memory hotunplug from ppc64le guest results in
> >>
On Thu, May 16, 2019 at 07:44:20PM +0530, srikanth wrote:
> Hello,
>
> On power9 host, performing memory hotunplug from ppc64le guest results in
> kernel oops.
>
> Kernel used : https://github.com/torvalds/linux/tree/v5.1 built using
> ppc64le_defconfig for host and ppc64le_guest_defconfig for
On Tue, Jan 30, 2018 at 10:28:15AM +0100, Michal Hocko wrote:
> On Tue 30-01-18 10:16:00, Michal Hocko wrote:
> > On Tue 30-01-18 14:00:06, Bharata B Rao wrote:
> > > Hi,
> > >
> > > With the latest upstream, I see that memory hotplug is not working
> >
Hi,
With the latest upstream, I see that memory hotplug is not working
as expected. The hotplugged memory isn't seen to increase the total
RAM pages. This has been observed with both x86 and Power guests.
1. Memory hotplug code initially marks pages as PageReserved via
__add_section().
2. Later
On Tue, Sep 08, 2015 at 01:46:52PM +0100, Dr. David Alan Gilbert wrote:
> * Bharata B Rao (bhar...@linux.vnet.ibm.com) wrote:
> > On Tue, Sep 08, 2015 at 09:59:47AM +0100, Dr. David Alan Gilbert wrote:
> > > * Bharata B Rao (bhar...@linux.vnet.ibm.com) wrote:
> > > &
On Tue, Sep 08, 2015 at 09:59:47AM +0100, Dr. David Alan Gilbert wrote:
> * Bharata B Rao (bhar...@linux.vnet.ibm.com) wrote:
> > In fact I had successfully done postcopy migration of sPAPR guest with
> > this setup.
>
> Interesting - I'd not got that far myself on power; I
On Tue, Sep 08, 2015 at 04:08:06PM +1000, Michael Ellerman wrote:
> On Wed, 2015-08-12 at 10:53 +0530, Bharata B Rao wrote:
> > On Tue, Aug 11, 2015 at 03:48:26PM +0200, Andrea Arcangeli wrote:
> > > Hello Bharata,
> > >
> > > On Tue, Aug 11, 2015 at 0
On Fri, Aug 14, 2015 at 10:27:53AM -0500, Nathan Fontenot wrote:
> On 08/13/2015 04:17 AM, Bharata B Rao wrote:
> > Last section of memory block is always initialized to
> >
> > mem->start_section_nr + sections_per_block - 1
> >
> > which will not be tru
number of sections instead of assuming sections_per_block.
Signed-off-by: Bharata B Rao
Cc: Nathan Fontenot
---
drivers/base/memory.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index 2804aed..7f3ce2e 100644
--- a/drivers/base/memor
On Tue, Aug 11, 2015 at 03:48:26PM +0200, Andrea Arcangeli wrote:
> Hello Bharata,
>
> On Tue, Aug 11, 2015 at 03:37:29PM +0530, Bharata B Rao wrote:
> > May be it is a bit late to bring this up, but I needed the following fix
> > to userfault21 branch of your git tree
361
> #define __NR_execveat 362
> #define __NR_switch_endian 363
> +#define __NR_userfaultfd 364
__NR_userfaultfd 364
May be it is a bit late to bring this up, but I needed the following fix
to userfault21 branch of your git tree to compile on powerpc.
powerpc: Bump up __NR_syscalls to account for __NR_userfaultfd
From: Bharata B Rao bhar...@linux.vnet.ibm.com
With userfaultfd syscall
So will it be correct to say that memory hotplug to memory-less node
isn't supported by PowerPC kernel ? Should I enforce the same in QEMU
for PowerKVM ?
On Mon, Jun 22, 2015 at 10:18 AM, Bharata B Rao wrote:
> Hi,
>
> While developing memory hotplug support in QEMU for PowerKVM, I
Hi,
While developing memory hotplug support in QEMU for PowerKVM, I
realized that guest kernel has specific checks to prevent hot addition
of memory to a memory-less node.
I am referring to arch/powerpc/mm/numa.c:hot_add_scn_to_nid() which
has explicit checks to ensure that it returns a nid that
Any feedback on the below patch ?
On Mon, Mar 9, 2015 at 11:00 AM, wrote:
> From: Bharata B Rao
>
> Since KVM isn't equipped to handle closure of vcpu fd from userspace (QEMU)
> correctly, certain workarounds have to be employed to allow reuse of
> vcpu array slot in KVM duri
On Fri, Oct 31, 2014 at 03:41:34PM -0400, Dan Streetman wrote:
> In powerpc pseries platform dlpar operations, Use device_online() and
> device_offline() instead of cpu_up() and cpu_down().
>
> Calling cpu_up/down directly does not update the cpu device offline
> field, which is used to
On Fri, Sep 5, 2014 at 7:38 PM, Nathan Fontenot
wrote:
> On 09/05/2014 04:16 AM, bharata@gmail.com wrote:
>> From: Bharata B Rao
>>
>> - ibm,rtas-configure-connector should treat the RTAS data as big endian.
>> - Treat ibm,ppc-interrupt-server#s
On Thu, Dec 06, 2007 at 11:01:18AM +0100, Jan Blunck wrote:
> On Wed, Dec 05, Dave Hansen wrote:
>
> > I think the key here is what kind of consistency we're trying to
> > provide. If a directory is being changed underneath a reader, what
> > kinds of guarantees do they get about the contents of
Introduce list_for_each_entry_reverse_from() needed by a subsequent patch.
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
---
include/linux/list.h | 13 +
1 file changed, 13 insertions(+)
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -562,6 +562,19 @@ static
places like mkdir, rmdir, mknod etc.
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
---
fs/dcache.c|1
fs/namei.c | 13 +++
fs/union.c | 178 -
include/linux/dcache.h |4 -
include/linu
Directory seek support.
Define the seek behaviour on the stored cache of dirents.
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
---
fs/read_write.c | 11 ---
fs/union.c| 171 +-
include/linux/fs.h|8 ++
i
offsets,
offsets are defined as linearly increasing indices on this cache and the same
is returned to userspace.
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
---
fs/file_table.c |1
fs/readdir.c | 10 -
fs/union.c
Remove the existing readdir implementation.
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
---
fs/readdir.c | 10 +
fs/union.c| 333 --
include/linux/union.h | 23 ---
3 files changed, 8 insertions(+), 358 del
Hi,
In Union Mount, the merged view of directories of the union is obtained
by enhancing readdir(2)/getdents(2) to read and merge the entries of
all the directories by eliminating the duplicates. While we have tried
a few approaches for this, none of them could perfectly solve all the problems.
On 10/29/07, Jan Blunck <[EMAIL PROTECTED]> wrote:
>
>
Did you miss the d_path() caller arch/blackfin/kernel/traps.c:printk_address() ?
Regards,
Bharata.
--
"Men come and go but mountains remain" -- Ruskin Bond.
On Tue, Oct 23, 2007 at 10:43:05AM +0200, Jan Blunck wrote:
>
> The thing is: how do we keep going from here? Do you want to send my patches
> in the future or are you going to ask me before sending things out? We don't
> need to duplicate the work here. I already put my quilt stack into a public
On Mon, Oct 22, 2007 at 03:57:58PM +0200, Christoph Hellwig wrote:
>
> Any reason we've got this patchset posted by three people now? :)
Two reasons actually !
- The set of patches posted by Jan last was on 2.6.23-rc8-mm1. So I
thought let me help Andrew a bit by making them available on latest
Changes the name of d_path() and __d_path() to print_path() and __print_path()
respectively and fixes the kerneldoc comments for print_path().
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
---
arch/blackfin/kernel/traps.c |2 -
drivers/md/bitmap.c |2 -
drive
Replace the (vfsmnt, dentry) arguments in proc_inode operation proc_get_link()
by struct path.
Also, this should eventually allow do_proc_readlink() to call d_path() with
a struct path argument.
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
---
fs/proc/base.c
From: Jan Blunck <[EMAIL PROTECTED]>
In nearly all cases the set_fs_{root,pwd}() calls work on a struct
path. Change the function to reflect this and use path_get() here.
Signed-off-by: Jan Blunck <[EMAIL PROTECTED]>
Signed-off-by: Andreas Gruenbacher <[EMAIL PROTECTED]>
Signed
From: Andreas Gruenbacher <[EMAIL PROTECTED]>
One less argument to __d_path.
All callers to __d_path pass the dentry and vfsmount of a struct
path to __d_path. Pass the struct path directly, instead.
Signed-off-by: Andreas Gruenbacher <[EMAIL PROTECTED]>
Signed-off-by: Bharata B
From: Jan Blunck <[EMAIL PROTECTED]>
* Use struct path in fs_struct.
Signed-off-by: Andreas Gruenbacher <[EMAIL PROTECTED]>
Signed-off-by: Jan Blunck <[EMAIL PROTECTED]>
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
Acked-by: Christoph Hellwig <[EMAIL P
k <[EMAIL PROTECTED]>
Signed-off-by: Andreas Gruenbacher <[EMAIL PROTECTED]>
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
---
arch/alpha/kernel/osf_sys.c |2
arch/mips/kernel/sysirix.c |6 +-
arch/parisc/hpux/sys_hpux.c |
From: Jan Blunck <[EMAIL PROTECTED]>
Use path_put() in a few places instead of {mnt,d}put()
Signed-off-by: Jan Blunck <[EMAIL PROTECTED]>
Signed-off-by: Andreas Gruenbacher <[EMAIL PROTECTED]>
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
Acked-by: Christoph H
ED]>
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
Acked-by: Christoph Hellwig <[EMAIL PROTECTED]>
---
fs/namei.c| 17 +++--
fs/unionfs/super.c|2 +-
include/linux/namei.h |6 --
include/linux/path.h |1 +
4 files changed, 17 insertion
From: Jan Blunck <[EMAIL PROTECTED]>
Move the definition of struct path into its own header file for further
patches.
Signed-off-by: Jan Blunck <[EMAIL PROTECTED]>
Signed-off-by: Andreas Gruenbacher <[EMAIL PROTECTED]>
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
Ack
From: Jan Blunck <[EMAIL PROTECTED]>
path_release_on_umount() should only be called from sys_umount(). I merged the
function into sys_umount() instead of having in in namei.c.
Signed-off-by: Jan Blunck <[EMAIL PROTECTED]>
Signed-off-by: Bharata B Rao <[EMAIL PROTECTED]>
A
From: Jan Blunck <[EMAIL PROTECTED]>
This test seems to be unnecessary since we always have rootfs mounted before
calling a usermodehelper.
Signed-off-by: Andreas Gruenbacher <[EMAIL PROTECTED]>
Signed-off-by: Jan Blunck <[EMAIL PROTECTED]>
Signed-off-by: Bharata B Rao <[EM