ency_record latency_record[MAXLR];
> int latencytop_enabled;
>
> #ifdef CONFIG_SYSCTL
> -static int sysctl_latencytop(struct ctl_table *table, int write, void
> *buffer,
> - size_t *lenp, loff_t *ppos)
> +static int sysctl_latencytop(const struct ctl_table *table, int write,
> + void *buffer,
> + size_t *lenp, loff_t *ppos)
> {
> int err;
>
And this.
I could go on, but there are so many examples of this in the patch
that I think it needs to be tossed away and regenerated in a
way that doesn't trash the existing function parameter formatting.
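[Editor's note: a minimal, compilable illustration of the formatting point being made above. The dummy `struct ctl_table` and `loff_t` definitions are stand-ins so this builds in userspace; the real declarations live in the kernel headers. The point is kernel style: wrapped parameters stay aligned under the first parameter after the opening parenthesis, rather than being re-broken one per line.]

```c
#include <stddef.h>

/* Stand-ins for kernel types, for illustration only. */
struct ctl_table { int dummy; };
typedef long loff_t;

/*
 * Preferred kernel style: the continuation line is aligned with the
 * first parameter, matching the original code's formatting.
 */
static int sysctl_latencytop(const struct ctl_table *table, int write,
			     void *buffer, size_t *lenp, loff_t *ppos)
{
	/* body elided; parameters unused in this sketch */
	(void)table; (void)write; (void)buffer; (void)lenp; (void)ppos;
	return 0;
}
```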
-Dave.
--
Dave Chinner
da...@fromorbit.com
sis yet.
I suspect the fix may well be to use xfs_trans_buf_get() in the
xfs_inode_item_precommit() path if XFS_ISTALE is already set on the
inode we are trying to log. We don't need a populated cluster buffer
to read data out of or write data into in this path - all we need to
do is attach the inode to the buffer so that when the buffer
invalidation is committed to the journal it will also correctly
finish the stale inode log item.
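[Editor's note: the attach-without-read idea can be modelled in plain userspace C. All names below are invented for illustration; the real logic involves xfs_trans_buf_get(), xfs_inode_item_precommit() and the buffer log item code in fs/xfs. The model shows why no populated buffer is needed: attaching the stale item is enough for the buffer invalidation to complete it.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model: a log item attached to a buffer; committing the buffer
 * invalidation finishes every attached stale item. */
struct log_item {
	bool stale;
	bool done;
	struct log_item *next;
};

struct buffer {
	struct log_item *items;	/* items attached to this buffer */
};

/* Analogue of attaching the inode log item to the cluster buffer.
 * Note: no buffer contents are read or written here, which is why a
 * no-read get-style lookup would suffice in the ISTALE case. */
static void attach_item(struct buffer *bp, struct log_item *lip)
{
	lip->next = bp->items;
	bp->items = lip;
}

/* Analogue of committing the buffer invalidation to the journal:
 * attached stale items are correctly finished along with it. */
static void commit_invalidation(struct buffer *bp)
{
	for (struct log_item *lip = bp->items; lip; lip = lip->next)
		if (lip->stale)
			lip->done = true;
}
```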
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
een processed
before the module is removed. We have an rcu_barrier() in
xfs_destroy_caches() to avoid this ..
Wait. What is xfs_buf_terminate()? I don't recall that function
Yeah, there's the bug.
exit_xfs_fs(void)
{
xfs_buf_terminate();
xfs_mru_cache_uninit();
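[Editor's note: a userspace sketch of the ordering constraint under discussion, with invented names. Deferred "RCU callbacks" that free objects must all run before the cache backing those objects is destroyed; in the kernel this is rcu_barrier() before kmem_cache_destroy() on the module-exit path.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_PENDING 16

/* Toy model of deferred callbacks queued by call_rcu()-like code. */
static void (*pending[MAX_PENDING])(void);
static int npending;
static bool cache_alive = true;

static void queue_callback(void (*fn)(void))
{
	pending[npending++] = fn;
}

/* rcu_barrier() analogue: wait for (here, run) all queued callbacks. */
static void drain_callbacks(void)
{
	for (int i = 0; i < npending; i++)
		pending[i]();
	npending = 0;
}

/* A deferred free: trips the assert if the cache is already gone. */
static void free_object(void)
{
	assert(cache_alive);
}

static void destroy_cache(void)
{
	cache_alive = false;
}

/* Module-exit analogue: drain *before* destroy, never after. */
static void module_exit_path(void)
{
	drain_callbacks();
	destroy_cache();
}
```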
to work
> properly for these modified functions.
>
> Miscellanea:
>
> o Remove extra trailing ; and blank line from xfs_agf_verify
>
> Signed-off-by: Joe Perches <j...@perches.com>
> ---
....
XFS bits look fine.
Acked-by: Dave Chinner <dchin...@redhat.com>
--
Dave Chinner
da...@fromorbit.com
On Mon, Sep 18, 2017 at 05:00:58PM -0500, Eric Sandeen wrote:
> On 9/18/17 4:31 PM, Dave Chinner wrote:
> > On Mon, Sep 18, 2017 at 09:28:55AM -0600, Jens Axboe wrote:
> >> On 09/18/2017 09:27 AM, Christoph Hellwig wrote:
> >>> On Mon, Sep 18, 2017 at 08:26:
should also have a comment like the post
IO invalidation - the comment probably got dropped and not noticed
when the changeover from internal XFS code to generic iomap code was
made...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
g triggered.
It needs to be on by default, but I'm sure we can wrap it with
something like an xfs_alert_tag() type of construct so the tag can
be set in /proc/fs/xfs/panic_mask to suppress it if testers so
desire.
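[Editor's note: a userspace sketch of the suggested construct. The tag name and the `alert_tag()` helper are invented for illustration; in the kernel the mask would be read from /proc/fs/xfs/panic_mask and the alert emitted through the xfs_alert machinery. The point is that each alert carries a tag bit, and testers can set that bit in a mask to suppress it.]

```c
#include <stdio.h>

/* Hypothetical tag bit for this class of alert. */
#define TAG_EXAMPLE_WARNING	(1u << 5)

/* Stand-in for the value written to /proc/fs/xfs/panic_mask. */
static unsigned int suppress_mask;

/* Emit the alert unless its tag is suppressed by the mask.
 * Returns 1 if emitted, 0 if suppressed. */
static int alert_tag(unsigned int tag, const char *msg)
{
	if (suppress_mask & tag)
		return 0;
	fprintf(stderr, "ALERT: %s\n", msg);
	return 1;
}
```

So the alert fires by default, and setting the tag bit in the mask turns just that class of alert off without touching the others.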
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
nobody has been able to reproduce it exactly
outside of the reaim benchmark. We've reproduced other, similar
issues, and the fixes for those are queued for the 4.9 window.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Mon, Mar 23, 2015 at 12:24:00PM +, Mel Gorman wrote:
These are three follow-on patches based on the xfsrepair workload Dave
Chinner reported was problematic in 4.0-rc1 due to changes in page table
management -- https://lkml.org/lkml/2015/3/1/226.
Much of the problem was reduced
), but
otherwise the system libraries are unchanged.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
On Thu, Mar 19, 2015 at 04:05:46PM -0700, Linus Torvalds wrote:
On Thu, Mar 19, 2015 at 3:41 PM, Dave Chinner da...@fromorbit.com wrote:
My recollection wasn't faulty - I pulled it from an earlier email.
That said, the original measurement might have been faulty. I ran
the numbers again
( +- 7.43% )
10.002032292 seconds time elapsed ( +- 0.00% )
Bit more variance there than the pte checking, but runtime
difference is in the noise - 5m4s vs 4m54s - and profiles are
identical to the pte checking version.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Mar 19, 2015 at 06:29:47PM -0700, Linus Torvalds wrote:
On Thu, Mar 19, 2015 at 5:23 PM, Dave Chinner da...@fromorbit.com wrote:
Bit more variance there than the pte checking, but runtime
difference is in the noise - 5m4s vs 4m54s - and profiles are
identical to the pte checking
%)
Hash buckets with 22 entries 1 ( 0%)
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Mar 17, 2015 at 02:30:57PM -0700, Linus Torvalds wrote:
On Tue, Mar 17, 2015 at 1:51 PM, Dave Chinner da...@fromorbit.com wrote:
On the -o ag_stride=-1 -o bhash=101073 config, the 60s perf stat I
was using during steady state shows:
471,752 migrate:mm_migrate_pages
On Mon, Mar 09, 2015 at 09:52:18AM -0700, Linus Torvalds wrote:
On Mon, Mar 9, 2015 at 4:29 AM, Dave Chinner da...@fromorbit.com wrote:
Also, is there some sane way for me to actually see this behavior on a
regular machine with just a single socket? Dave is apparently running
in some fake
a few minutes to run - if you throw 8p at it
then it should run at 100k files/s being created.
Then unmount and run xfs_repair -o bhash=101703 /path/to/file.img
on the resultant image file.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Mar 05, 2015 at 11:54:52PM +, Mel Gorman wrote:
Dave Chinner reported the following on https://lkml.org/lkml/2015/3/1/226
Across the board the 4.0-rc1 numbers are much slower, and the
degradation is far worse when using the large memory footprint
configs. Perf points