3.10.0 failed paging request from kthread_data

2013-07-17 Thread Jim Schutt
Hi, I'm trying to test the btrfs and ceph contributions to 3.11, without testing all of 3.11-rc1 (just yet), so I'm testing with the "next" branch of Chris Mason's tree (commit cbacd76bb3 from git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git) merged into the for-linus branch

3.10-rc3 NFSv3 mount issues

2013-05-30 Thread Jim Schutt
Hi, I've been trying to test 3.10-rc3 on some diskless clients, and found that I can no longer mount my root file system via NFSv3. I poked around looking at NFS changes for 3.10, and found these two commits: d497ab9751 "NFSv3: match sec= flavor against server list" 4580a92d44 "NFS: Use

Re: 3.10-rc3 NFSv3 mount issues

2013-05-30 Thread Jim Schutt
On 05/30/2013 02:26 PM, Chuck Lever wrote: On May 30, 2013, at 4:19 PM, Jim Schutt jasc...@sandia.gov wrote: Hi, I've been trying to test 3.10-rc3 on some diskless clients, and found that I can no longer mount my root file system via NFSv3. 3.10-rc3 appears to be missing the fix

Re: [PATCH v2 3/3] ceph: ceph_pagelist_append might sleep while atomic

2013-05-15 Thread Jim Schutt
On 05/15/2013 10:49 AM, Alex Elder wrote: > On 05/15/2013 11:38 AM, Jim Schutt wrote: >> > Ceph's encode_caps_cb() worked hard to not call __page_cache_alloc() while >> > holding a lock, but it's spoiled because ceph_pagelist_addpage() always >> > calls kmap
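
For context, the bug class described in this thread looks roughly like the sketch below: a spinlock puts the task in atomic context, and kmap() may sleep, so might_sleep()/CONFIG_DEBUG_ATOMIC_SLEEP warns. The names are invented for illustration; this is not the actual encode_caps_cb() code.

/* Illustrative only -- demo_lock and demo_encode_page are made-up names. */
#include <linux/spinlock.h>
#include <linux/highmem.h>

static DEFINE_SPINLOCK(demo_lock);

static void demo_encode_page(struct page *page)
{
	void *kaddr;

	spin_lock(&demo_lock);		/* atomic context begins here */
	kaddr = kmap(page);		/* BUG: kmap() may sleep under a spinlock */
	/* ... encode data through kaddr ... */
	kunmap(page);
	spin_unlock(&demo_lock);
}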

[PATCH v2 3/3] ceph: ceph_pagelist_append might sleep while atomic

2013-05-15 Thread Jim Schutt
: mds0 reconnect success [13490.720032] ceph: mds0 caps stale [13501.235257] ceph: mds0 recovery completed [13501.300419] ceph: mds0 caps renewed Fix it up by encoding locks into a buffer first, and when the number of encoded locks is stable, copy that into a ceph_pagelist. Signed-off-by: Jim Schu
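
The fix described above ("encoding locks into a buffer first ... copy that into a ceph_pagelist") amounts to moving every call that can sleep outside the locked region. A rough sketch of the idea, with invented helper names and a generic spinlock standing in for the real locking of that era:

/* Sketch only: demo_encode_locks() and its parameters are made up; the
 * real patch works inside ceph's MDS reconnect path. */
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/ceph/pagelist.h>

static int demo_encode_locks(struct ceph_pagelist *pagelist, spinlock_t *lock,
			     int num_locks, size_t lock_size)
{
	void *buf;
	int err;

	/* kmalloc() may sleep, so allocate before taking the lock. */
	buf = kmalloc(num_locks * lock_size, GFP_NOFS);
	if (!buf)
		return -ENOMEM;

	spin_lock(lock);
	/* ... walk the lock list and encode each lock into buf;
	 * nothing in this region is allowed to sleep ... */
	spin_unlock(lock);

	/* ceph_pagelist_append() may sleep (it can allocate and kmap pages),
	 * so it runs only after the lock is dropped. */
	err = ceph_pagelist_append(pagelist, buf, num_locks * lock_size);
	kfree(buf);
	return err;
}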

[PATCH v2 2/3] ceph: add missing cpu_to_le32() calls when encoding a reconnect capability

2013-05-15 Thread Jim Schutt
and src/include/encoding.h in the Ceph server code (git://github.com/ceph/ceph). I also checked the server side for flock_len decoding, and I believe that also happens correctly, by virtue of having been declared __le32 in struct ceph_mds_cap_reconnect, in src/include/ceph_fs.h. Signed-off-by: Jim
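
The rule behind this patch: a field declared __le32 in an on-wire structure (like flock_len in struct ceph_mds_cap_reconnect) must be filled via cpu_to_le32(), or big-endian hosts put the wrong byte order on the wire. A minimal illustration using a made-up stand-in struct, not the real ceph_fs.h layout:

#include <linux/types.h>
#include <asm/byteorder.h>

/* Stand-in for an on-wire header; not the real ceph_mds_cap_reconnect. */
struct demo_wire_hdr {
	__le32 flock_len;	/* length of the lock blob that follows */
} __attribute__((packed));

static void demo_fill_hdr(struct demo_wire_hdr *hdr, u32 flock_len)
{
	/* Correct: explicit CPU-to-little-endian conversion for the wire. */
	hdr->flock_len = cpu_to_le32(flock_len);
	/* Wrong (the bug this patch fixes elsewhere): hdr->flock_len = flock_len; */
}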

[PATCH v2 1/3] ceph: fix up comment for ceph_count_locks() as to which lock to hold

2013-05-15 Thread Jim Schutt
Signed-off-by: Jim Schutt --- fs/ceph/locks.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c index 202dd3d..ffc86cb 100644 --- a/fs/ceph/locks.c +++ b/fs/ceph/locks.c @@ -169,7 +169,7 @@ int ceph_flock(struct file *file, int cmd

[PATCH v2 0/3] ceph: fix might_sleep while atomic

2013-05-15 Thread Jim Schutt
/include/ceph_fs.h. Jim Schutt (3): ceph: fix up comment for ceph_count_locks() as to which lock to hold ceph: add missing cpu_to_le32() calls when encoding a reconnect capability ceph: ceph_pagelist_append might sleep while atomic fs/ceph/locks.c | 75

Re: [PATCH] libceph: ceph_pagelist_append might sleep while atomic

2013-05-14 Thread Jim Schutt
On 05/14/2013 10:44 AM, Alex Elder wrote: > On 05/09/2013 09:42 AM, Jim Schutt wrote: >> Ceph's encode_caps_cb() worked hard to not call __page_cache_alloc while >> holding a lock, but it's spoiled because ceph_pagelist_addpage() always >> calls kmap(), which might slee

[PATCH] libceph: ceph_pagelist_append might sleep while atomic

2013-05-09 Thread Jim Schutt
: mds0 reconnect success [13490.720032] ceph: mds0 caps stale [13501.235257] ceph: mds0 recovery completed [13501.300419] ceph: mds0 caps renewed Fix it up by encoding locks into a buffer first, and when the number of encoded locks is stable, copy that into a ceph_pagelist. Signed-off-by: Jim Schu

Re: 3.7.0-rc8 btrfs locking issue

2012-12-12 Thread Jim Schutt
On 12/11/2012 06:37 PM, Liu Bo wrote: > On Tue, Dec 11, 2012 at 09:33:15AM -0700, Jim Schutt wrote: >> On 12/09/2012 07:04 AM, Liu Bo wrote: >>> On Wed, Dec 05, 2012 at 09:07:05AM -0700, Jim Schutt wrote: >>> Hi Jim, >>> >>> Could you please apply the

Re: 3.7.0-rc8 btrfs locking issue

2012-12-11 Thread Jim Schutt
On 12/09/2012 07:04 AM, Liu Bo wrote: On Wed, Dec 05, 2012 at 09:07:05AM -0700, Jim Schutt wrote: Hi, I'm hitting a btrfs locking issue with 3.7.0-rc8. The btrfs filesystem in question is backing a Ceph OSD under a heavy write load from many cephfs clients. I reported

Re: 3.7.0-rc8 btrfs locking issue

2012-12-07 Thread Jim Schutt
On 12/05/2012 09:07 AM, Jim Schutt wrote: Hi, I'm hitting a btrfs locking issue with 3.7.0-rc8. The btrfs filesystem in question is backing a Ceph OSD under a heavy write load from many cephfs clients. I reported this issue a while ago: http://www.spinics.net/lists/linux-btrfs/msg19370.html

3.7.0-rc8 btrfs locking issue

2012-12-05 Thread Jim Schutt
Hi, I'm hitting a btrfs locking issue with 3.7.0-rc8. The btrfs filesystem in question is backing a Ceph OSD under a heavy write load from many cephfs clients. I reported this issue a while ago: http://www.spinics.net/lists/linux-btrfs/msg19370.html when I was testing what I thought might be

Re: [RFC PATCH 0/5] Improve hugepage allocation success rates under load V3

2012-08-13 Thread Jim Schutt
Hi Mel, On 08/12/2012 02:22 PM, Mel Gorman wrote: I went through the patch again but only found the following which is a weak candidate. Still, can you retest with the following patch on top and CONFIG_PROVE_LOCKING set please? I've gotten in several hours of testing on this patch with no

Re: [RFC PATCH 0/5] Improve hugepage allocation success rates under load V3

2012-08-10 Thread Jim Schutt
On 08/10/2012 05:02 AM, Mel Gorman wrote: On Thu, Aug 09, 2012 at 04:38:24PM -0600, Jim Schutt wrote: Ok, this is an untested hack and I expect it would drop allocation success rates again under load (but not as much). Can you test again and see what effect, if any, it has please? ---8

Re: [RFC PATCH 0/5] Improve hugepage allocation success rates under load V3

2012-08-09 Thread Jim Schutt
On 08/09/2012 02:46 PM, Mel Gorman wrote: On Thu, Aug 09, 2012 at 12:16:35PM -0600, Jim Schutt wrote: On 08/09/2012 07:49 AM, Mel Gorman wrote: Changelog since V2 o Capture !MIGRATE_MOVABLE pages where possible o Document the treatment of MIGRATE_MOVABLE pages while capturing o Expand

Re: [RFC PATCH 0/5] Improve hugepage allocation success rates under load V3

2012-08-09 Thread Jim Schutt
der load - to 0% in one case. There is a proposed change to that patch in this series and it would be ideal if Jim Schutt could retest the workload that led to commit [7db8889a: mm: have order > 0 compaction start off where it left]. On my first test of this patch series on top of 3.5,

Re: [RFC PATCH 0/5] Improve hugepage allocation success rates under load V3

2012-08-09 Thread Jim Schutt
ess rates under load - to 0% in one case. There is a proposed change to that patch in this series and it would be ideal if Jim Schutt could retest the workload that led to commit [7db8889a: mm: have order > 0 compaction start off where it left]. I was successful at resolving my Ceph issue on 3

Re: [PATCH 6/6] mm: have order > 0 compaction start near a pageblock with free pages

2012-08-07 Thread Jim Schutt
On 08/07/2012 08:52 AM, Mel Gorman wrote: On Tue, Aug 07, 2012 at 10:45:25AM -0400, Rik van Riel wrote: On 08/07/2012 08:31 AM, Mel Gorman wrote: commit [7db8889a: mm: have order > 0 compaction start off where it left] introduced a caching mechanism to reduce the amount work the free page

Re: splice/vmsplice performance test results

2006-11-27 Thread Jim Schutt
On Thu, 2006-11-23 at 12:24 +0100, Jens Axboe wrote: > On Wed, Nov 22 2006, Jim Schutt wrote: > > > > On Wed, 2006-11-22 at 09:57 +0100, Jens Axboe wrote: > > > On Tue, Nov 21 2006, Jim Schutt wrote: > > [snip] > > > > > > > >

Re: splice/vmsplice performance test results

2006-11-17 Thread Jim Schutt
On Thu, 2006-11-16 at 21:25 +0100, Jens Axboe wrote: > On Thu, Nov 16 2006, Jim Schutt wrote: > > Hi, > > > > My test program can do one of the following: > > > > send data: > > A) read() from file into buffer, write() buffer into socket > > B) mma

Re: splice/vmsplice performance test results

2006-11-16 Thread Jim Schutt
On Thu, 2006-11-16 at 21:25 +0100, Jens Axboe wrote: > On Thu, Nov 16 2006, Jim Schutt wrote: > > Hi, > > > > > > My test program can do one of the following: > > > > send data: > > A) read() from file into buffer, write() buffer into
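
For readers following the comparison: a vmsplice()-based send path of the kind the thread title refers to moves user-buffer pages into a pipe and then splices the pipe into the socket. This is a generic sketch of the technique with minimal error handling, not Jim's actual test program:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/uio.h>
#include <unistd.h>

/* Send len bytes from buf to a connected socket via vmsplice()+splice(). */
static ssize_t vmsplice_buf_to_sock(int sock_fd, const void *buf, size_t len)
{
	struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
	ssize_t total = 0;
	int pipefd[2];

	if (pipe(pipefd) < 0)
		return -1;

	while (iov.iov_len > 0) {
		/* user pages -> pipe */
		ssize_t n = vmsplice(pipefd[1], &iov, 1, 0);
		if (n <= 0)
			break;
		iov.iov_base = (char *)iov.iov_base + n;
		iov.iov_len -= n;
		/* drain the pipe into the socket */
		while (n > 0) {
			ssize_t m = splice(pipefd[0], NULL, sock_fd, NULL, n,
					   SPLICE_F_MOVE | SPLICE_F_MORE);
			if (m <= 0)
				goto out;
			n -= m;
			total += m;
		}
	}
out:
	close(pipefd[0]);
	close(pipefd[1]);
	return total;
}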

splice/vmsplice performance test results

2006-11-16 Thread Jim Schutt
is read+write really the fastest way to get data off a socket and into a file? -- Jim Schutt (Please Cc: me, as I'm not subscribed to lkml.) - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger
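
On the receive side the question above asks about, the splice() alternative to read()+write() is socket -> pipe -> file, avoiding the copy through a userspace buffer. Again a generic sketch, assuming the descriptors are already set up; not the original benchmark code:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Move up to len bytes from a connected socket into a file via splice(). */
static ssize_t splice_sock_to_file(int sock_fd, int file_fd, size_t len)
{
	ssize_t total = 0;
	int pipefd[2];

	if (pipe(pipefd) < 0)
		return -1;

	while (len > 0) {
		/* socket -> pipe (SPLICE_F_MOVE is only a hint; the kernel may copy) */
		ssize_t n = splice(sock_fd, NULL, pipefd[1], NULL, len,
				   SPLICE_F_MOVE | SPLICE_F_MORE);
		if (n <= 0)
			break;
		len -= n;
		/* pipe -> file */
		while (n > 0) {
			ssize_t m = splice(pipefd[0], NULL, file_fd, NULL, n,
					   SPLICE_F_MOVE | SPLICE_F_MORE);
			if (m <= 0)
				goto out;
			n -= m;
			total += m;
		}
	}
out:
	close(pipefd[0]);
	close(pipefd[1]);
	return total;
}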

Re: Spurious interrupts for UP w/ IO-APIC on ServerWorks

2001-04-24 Thread Jim Schutt
bet that helps It does -- no more spurious interrupts. Thanks -- Jim -- Jim Schutt <[EMAIL PROTECTED]> Sandia National Laboratories, Albuquerque, New Mexico USA - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL

Spurious interrupts for UP w/ IO-APIC on ServerWorks

2001-04-24 Thread Jim Schutt
available on request. Thanks -- Jim -- Jim Schutt <[EMAIL PROTECTED]> Sandia National Laboratories, Albuquerque, New Mexico USA - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://

tulip vs. de4x5 (was Re: Linux 2.4 Status / TODO page ...)

2000-11-08 Thread Jim Schutt
Jeff Garzik wrote: > > > de4x5 is becoming EISA-only in 2.5.x too, since its PCI support is > duplicated now in tulip driver. > I've got some DEC Miatas with DECchip 21142/43 ethernet cards, and I don't get the same link speeds when using the de4x5 and tulip drivers, as of 2.4.0-test10. The
