that shows in the runtime which also
drops from 3m57s to 3m22s.
So regardless of what aim7 results we get from these changes, I'll
be merging them pending review and further testing...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Sat, Aug 13, 2016 at 06:42:51PM +0100, Ben Hutchings wrote:
> 3.16.37-rc1 review patch. If anyone has any objections, please let me know.
>
> --
>
> From: Dave Chinner <dchin...@redhat.com>
>
> commit b1438f477934f5a4d5a44df26f3079a7575d5946 upstream.
>
> When a fai
On Sat, Aug 13, 2016 at 02:30:54AM +0200, Christoph Hellwig wrote:
> On Fri, Aug 12, 2016 at 08:02:08PM +1000, Dave Chinner wrote:
> > Which says "no change". Oh well, back to the drawing board...
>
> I don't see how it would change things much - for all relevant calculati
On Fri, Aug 12, 2016 at 04:51:24PM +0800, Ye Xiaolong wrote:
> On 08/12, Ye Xiaolong wrote:
> >On 08/12, Dave Chinner wrote:
>
> [snip]
>
> >>lkp-folk: the patch I've just tested is attached below - can you
> >>feed that through your test and see if it fixes
On Thu, Aug 11, 2016 at 10:02:39PM -0700, Linus Torvalds wrote:
> On Thu, Aug 11, 2016 at 9:16 PM, Dave Chinner <da...@fromorbit.com> wrote:
> >
> > That's why running aim7 as your "does the filesystem scale"
> > benchmark is somewhat irrelevant to scaling
That's why running aim7 as your "does the filesystem scale"
benchmark is somewhat irrelevant to scaling applications on high
performance systems these days - users with fast storage will be
expecting to see that 1.9GB/s throughput from their app, not
600MB/s
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Aug 11, 2016 at 07:27:52PM -0700, Linus Torvalds wrote:
> On Thu, Aug 11, 2016 at 5:54 PM, Dave Chinner <da...@fromorbit.com> wrote:
> >
> > So, removing mark_page_accessed() made the spinlock contention
> > *worse*.
> >
> > 36.51% [kernel] [k] _raw_spin_unlock_irqr
On Fri, Aug 12, 2016 at 10:54:42AM +1000, Dave Chinner wrote:
> I'm now going to test Christoph's theory that this is an "overwrite
> doing lots of block mapping" issue. More on that to follow.
Ok, so going back to the profiles, I can say it's not an overwrite
issue, because
On Thu, Aug 11, 2016 at 11:16:12AM +1000, Dave Chinner wrote:
> On Wed, Aug 10, 2016 at 05:33:20PM -0700, Huang, Ying wrote:
> We need to know what is happening that is different - there's a good
> chance the mapping trace events will tell us. Huang, can you get
> a raw event trace f
level - the mapping->tree_lock is a global serialisation
point
I'm now going to test Christoph's theory that this is an "overwrite
doing lots of block mapping" issue. More on that to follow.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
_raw_spin_unlock_irqrestore
I don't think that this is the same as what aim7 is triggering as
there's no XFS write() path allocation functions near the top of the
profile to speak of. Still, I don't recall seeing this before...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Aug 11, 2016 at 10:36:59AM +0800, Ye Xiaolong wrote:
> On 08/11, Dave Chinner wrote:
> >On Thu, Aug 11, 2016 at 11:16:12AM +1000, Dave Chinner wrote:
> >> I need to see these events:
> >>
> >>xfs_file*
> >>xfs_iomap*
> >>
On Thu, Aug 11, 2016 at 11:16:12AM +1000, Dave Chinner wrote:
> I need to see these events:
>
> xfs_file*
> xfs_iomap*
> xfs_get_block*
>
> For both kernels. An example trace from 4.8-rc1 running the command
> `xfs_io -f -c 'pwrite 0 512k -b 128k'
xfs_io-2946 [001] 253971.751234: xfs_file_buffered_write: dev
253:32 ino 0x84 size 0x4 offset 0x4 count 0x2
xfs_io-2946 [001] 253971.751236: xfs_iomap_found: dev 253:32
ino 0x84 size 0x40000 offset 0x4 count 131072 type invalid startoff 0x0
startblock 24 blockcount 0x60
xfs_io-2946 [001] 253971.751381: xfs_file_buffered_write: dev
253:32 ino 0x84 size 0x4 offset 0x6 count 0x2
xfs_io-2946 [001] 253971.751415: xfs_iomap_prealloc_size: dev
253:32 ino 0x84 prealloc blocks 128 shift 0 m_writeio_blocks 16
xfs_io-2946 [001] 253971.751425: xfs_iomap_alloc: dev 253:32
ino 0x84 size 0x4 offset 0x6 count 131072 type invalid startoff 0x60
startblock -1 blockcount 0x90
That's the output I need for the complete test - you'll need to use
a better recording mechanism than this (e.g. trace-cmd record,
trace-cmd report) because it will generate a lot of events. Compress
the two report files (they'll be large) and send them to me offlist.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
lock we previously didn't spin on at all.
We really need instruction level perf profiles to understand
this - I don't have a machine with this many cpu cores available
locally, so I'm not sure I'm going to be able to make any progress
tracking it down in the short term. Maybe the lkp team has more
in-depth cpu usage profiles they can share?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
create mode 100644 fs/xfs/xfs_rmap_item.c
create mode 100644 fs/xfs/xfs_rmap_item.h
create mode 100644 fs/xfs/xfs_trans_rmap.c
--
Dave Chinner
da...@fromorbit.com
On Fri, Aug 05, 2016 at 09:59:35PM +1000, Dave Chinner wrote:
> On Fri, Aug 05, 2016 at 11:54:17AM +0100, Mel Gorman wrote:
> > On Fri, Aug 05, 2016 at 09:11:10AM +1000, Dave Chinner wrote:
> > > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > > index
On Fri, Aug 05, 2016 at 11:54:17AM +0100, Mel Gorman wrote:
> On Fri, Aug 05, 2016 at 09:11:10AM +1000, Dave Chinner wrote:
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index fb975cec3518..baa97da3687d 100644
> > > --- a/mm/page_alloc.c
> > >
On Thu, Aug 04, 2016 at 01:34:58PM +0100, Mel Gorman wrote:
> On Thu, Aug 04, 2016 at 01:24:09PM +0100, Mel Gorman wrote:
> > On Thu, Aug 04, 2016 at 03:10:51PM +1000, Dave Chinner wrote:
> > > Hi folks,
> > >
> > > I just noticed a whacky memory usage prof
and removed from the
page cache. According to the per-node counters, that is not
happening and there are gigabytes of invalidated pages still sitting on
the active LRUs.
Something is broken
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Jul 28, 2016 at 11:25:13AM +0100, Mel Gorman wrote:
> On Thu, Jul 28, 2016 at 03:49:47PM +1000, Dave Chinner wrote:
> > Seems you're all missing the obvious.
> >
> > Add a tracepoint for a shrinker callback that includes a "name"
> > field, h
Add a tracepoint for a shrinker callback that includes a "name"
field, have the shrinker callback fill it out appropriately, e.g.
in the superblock shrinker:
trace_shrinker_callback(shrinker, shrink_control, sb->s_type->name);
And generic code that doesn't want to put a specific context name in
there can simply call:
trace_shrinker_callback(shrinker, shrink_control, __func__);
And now you know exactly what shrinker is being run.
No need to add names to any structures, it's call site defined so is
flexible, and if you're not using tracepoints has no overhead.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
rs
xfs: convert list of extents to free into a regular list
xfs: refactor btree maxlevels computation
Dave Chinner (14):
xfs: reduce lock hold times in buffer writeback
Merge branch 'fs-4.8-iomap-infrastructure' into for-next
Merge branch 'xfs-4.8-iomap-write' into fo
ote.
So it's really only a per-cpu structure for list addition
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
on't get the immediate
attention of my mail filters, so I didn't see it immediately.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Jul 19, 2016 at 02:22:47PM -0700, Calvin Owens wrote:
> On 07/18/2016 07:05 PM, Calvin Owens wrote:
> >On 07/17/2016 11:02 PM, Dave Chinner wrote:
> >>On Sun, Jul 17, 2016 at 10:00:03AM +1000, Dave Chinner wrote:
> >>>On Fri, Jul 15, 2016 at 05:18:
On Sun, Jul 17, 2016 at 10:00:03AM +1000, Dave Chinner wrote:
> On Fri, Jul 15, 2016 at 05:18:02PM -0700, Calvin Owens wrote:
> > Hello all,
> >
> > I've found a nasty source of slab corruption. Based on seeing similar
> > symptoms
> > on boxes at Facebook,
> if (fd == -1) {
> perror("Can't open");
> return 1;
> }
>
> if (!fork()) {
> count = atol(argv[2]);
>
> while (1) {
> for (i = 0; i < count; i++)
> if (write(fd, crap, CHUNK) != CHUNK)
> perror("Eh?");
>
> fsync(fd);
> ftruncate(fd, 0);
> }
H. Truncate is used, but only after fsync. If the truncate
is removed, does the problem go away?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Mon, Jul 11, 2016 at 10:02:24AM +0100, Mel Gorman wrote:
> On Mon, Jul 11, 2016 at 10:47:57AM +1000, Dave Chinner wrote:
> > > I had tested XFS with earlier releases and noticed no major problems
> > > so later releases tested only one filesystem. Given the changes
On Fri, Jul 08, 2016 at 01:05:40PM +0000, Trond Myklebust wrote:
> > On Jul 8, 2016, at 08:55, Trond Myklebust
> > <tron...@primarydata.com> wrote:
> >> On Jul 8, 2016, at 08:48, Seth Forshee
> >> <seth.fors...@canonical.com> wrote: On Fri, Jul 08, 2016 at
> >> 09:53:30AM +1000, Dave Chinner wrote:
> >
On Fri, Jul 08, 2016 at 10:52:03AM +0100, Mel Gorman wrote:
> On Fri, Jul 08, 2016 at 09:27:13AM +1000, Dave Chinner wrote:
> > .
> > > This series is not without its hazards. There are at least three areas
> > > that I'm concerned with even though I could
sys_sync() isn't sufficient to quiesce a
filesystem's operations.
But I'm used to being ignored on this topic (for almost 10 years,
now!). Indeed, it's been made clear in the past that I know
absolutely nothing about what is needed to be done to safely
suspend filesystem operations... :/
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
tests you ran on ext4. It might also be worth running some highly
concurrent inode cache benchmarks (e.g. the 50-million inode, 16-way
concurrent fsmark tests) to see what impact heavy slab cache
pressure has on shrinker behaviour and system balance...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Jun 28, 2016 at 10:13:32AM +0100, Steven Whitehouse wrote:
> Hi,
>
> On 28/06/16 03:08, Dave Chinner wrote:
> >On Fri, Jun 24, 2016 at 02:50:11PM -0500, Bob Peterson wrote:
> >>This patch adds a new prune_icache_sb function for the VFS slab
> >>shrinker
then
move the parts of inode *freeing* that cause problems to a different
context via the ->evict/destroy callouts and trigger that external
context processing on demand. That external context can just do bulk
"if it is on the list then free it" processing, because the reclaim
policy has already been executed to place that inode on the reclaim
list.
This is essentially what XFS does, but it also uses the
->nr_cached_objects/->free_cached_objects() callouts in the
superblock shrinker to provide the reclaim rate feedback mechanism
required to throttle incoming memory allocations.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
n of the inode but do
not destroy/free it - you simply queue it to an internal list and
then do the cleanup/freeing in your own time?
i.e. why do you need a special callout just to defer freeing to
another thread when we already have hooks than enable you to do
this?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
failure, so retry on failure is not required." To then
map KM_MAYFAIL to a flag that implies the allocation will internally
retry to try exceptionally hard to prevent failure seems wrong.
IOWs, KM_MAYFAIL means XFS is just using normal allocator
behaviour here, so I'm not sure what problem this change is actually
solving and it's not clear from the description
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
*extremely* paranoid when it comes to changes to core
locking like this. Performance is secondary to correctness, and we
need much more than just a few benchmarks to verify there aren't
locking bugs being introduced
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Jun 02, 2016 at 02:44:30PM +0200, Holger Hoffstätte wrote:
> On 06/02/16 14:13, Stefan Priebe - Profihost AG wrote:
> >
> > Am 31.05.2016 um 09:31 schrieb Dave Chinner:
> >> On Tue, May 31, 2016 at 08:11:42AM +0200, Stefan Priebe - Profihost AG
> >&g
with the same steps.
Hmmm, Ok. I've been running the lockperf test and kernel builds all
day on a filesystem that is identical in shape and size to yours
(i.e. xfs_info output is the same) but I haven't reproduced it yet.
Is it possible to get a metadump image of your filesystem to see if
I can reproduce it on that?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
597's new affinity list: 0,4,8,12
sh: 1: cannot create /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor:
Directory nonexistent
posix01 -n 8 -l 100
posix02 -n 8 -l 100
posix03 -n 8 -i 100
$
So, I've just removed those tests from your script. I'll see if I
have any luck with reproducing the problem now.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
ormation_should_I_include_when_reporting_a_problem.3F
You didn't run out of space or something unusual like that? Does
'xfs_repair -n ' report any errors?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
e appears to be handling the
dirty page that is being passed to it correctly. We'll work out what
needs to be done to get rid of the warning for this case, whether it
be a mm/ change or an XFS change.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, May 31, 2016 at 12:59:04PM +0900, Minchan Kim wrote:
> On Tue, May 31, 2016 at 12:55:09PM +1000, Dave Chinner wrote:
> > On Tue, May 31, 2016 at 10:07:24AM +0900, Minchan Kim wrote:
> > > On Tue, May 31, 2016 at 08:36:57AM +1000, Dave Chinner wrote:
> > > > B
On Tue, May 31, 2016 at 10:07:24AM +0900, Minchan Kim wrote:
> On Tue, May 31, 2016 at 08:36:57AM +1000, Dave Chinner wrote:
> > [adding lkml and linux-mm to the cc list]
> >
> > On Mon, May 30, 2016 at 09:23:48AM +0200, Stefan Priebe - Profihost AG
> > wrote:
>
and
memory reclaim. It might be worth trying as a workaround for now.
MM-folk - is this analysis correct? If so, why is
shrink_active_list() calling try_to_release_page() on dirty pages?
Is this just an oversight or is there some problem that this is
trying to work around? It seems trivial to fix to me (add a
!PageDirty check), but I don't know why the check is there in the
first place...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, May 26, 2016 at 07:05:11PM -0700, Linus Torvalds wrote:
> On Thu, May 26, 2016 at 5:13 PM, Dave Chinner <da...@fromorbit.com> wrote:
> > On Thu, May 26, 2016 at 10:19:13AM -0700, Linus Torvalds wrote:
> >>
> >> i'm ok with the late branches, it's not like xfs has been a problem sp
On Thu, May 26, 2016 at 10:19:13AM -0700, Linus Torvalds wrote:
> On Wed, May 25, 2016 at 11:13 PM, Dave Chinner <da...@fromorbit.com> wrote:
> >
> > Just yell if this is not OK and I'll drop those branches for this
> > merge and resend the pull request
>
> i'm ok with the late bran
warning in xfs_finish_page_writeback for non-debug builds
Dave Chinner (20):
xfs: Don't wrap growfs AGFL indexes
xfs: build bios directly in xfs_add_to_ioend
xfs: don't release bios on completion immediately
xfs: remove xfs_fs_evict_inode()
xfs: xfs_iflush_cluster fa
et
*exactly* like Linus is now suggesting, I walked away and haven't
looked at your patches since. Is it any wonder that no other
filesystem maintainer has bothered to waste their time on this
since?
Linus - I'd suggest these VFS timestamp patches need to go through
Al's VFS tree. That way we don't get unreviewed VFS infrastructure
changes going into your tree via a door that nobody was paying
attention to...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com