On Sun, 8 Jan 2017 05:46:39 +
Al Viro wrote:
> On Sat, Jan 07, 2017 at 06:15:23PM +, Al Viro wrote:
> > On Sat, Jan 07, 2017 at 05:19:10PM +, Al Viro wrote:
> >
> > > released) simply trigger virtio_queue_notify_vq() again? It *is* a bug
> > > (if we get a burst filling a
On Mon, Jan 09, 2017 at 08:39:31PM +0200, Tuomas Tynkkynen wrote:
> Yes, this does seem to be related to this or otherwise MAX_REQ related!
> - Bumping MAX_REQ up to 1024 makes the hang go away (on 4.7).
> - Dropping it to 64 makes the same hang happen on kernels where it worked
> before (I
On Sat, 7 Jan 2017 17:19:10 +
Al Viro wrote:
> On Sat, Jan 07, 2017 at 04:10:45PM +0100, Greg Kurz wrote:
> > > virtqueue_push(), but pdu freeing is delayed until v9fs_flush() gets woken
> > > up. In the meanwhile, another request arrives into the slot freed by
> > > that
On Sat, Jan 07, 2017 at 06:15:23PM +, Al Viro wrote:
> On Sat, Jan 07, 2017 at 05:19:10PM +, Al Viro wrote:
>
> > released) simply trigger virtio_queue_notify_vq() again? It *is* a bug
> > (if we get a burst filling a previously empty queue all at once, there won't
> > be any slots
On Sat, Jan 07, 2017 at 05:19:10PM +, Al Viro wrote:
> released) simply trigger virtio_queue_notify_vq() again? It *is* a bug
> (if we get a burst filling a previously empty queue all at once, there won't
> be any slots becoming freed
Umm... Nope, even in that scenario we'll have all
On Sat, Jan 07, 2017 at 04:10:45PM +0100, Greg Kurz wrote:
> > virtqueue_push(), but pdu freeing is delayed until v9fs_flush() gets woken
> > up. In the meanwhile, another request arrives into the slot freed by
> > that virtqueue_push() and we are out of pdus.
> >
>
> Indeed. Even if this
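The window described above can be reduced to a tiny model: virtqueue_push() returns the ring slot to the guest right away, but the pdu only goes back on the free list later, once v9fs_flush() has been woken up. A request landing in between finds a free slot but no free pdu. A minimal sketch of that ordering (an illustrative model, not QEMU code):

```python
# Illustrative model of the race window described above, not actual QEMU
# code: virtqueue_push() frees the ring slot immediately, while the pdu
# only returns to the free list later, when v9fs_flush() runs.

def accept_request(free_slots, free_pdus):
    """Decide what happens to a request given current resource counts."""
    if free_slots == 0:
        return "queue-full"   # nothing to do until a slot opens up
    if free_pdus == 0:
        return "no-pdu"       # the window: slot freed, pdu freeing delayed
    return "ok"

# Just after virtqueue_push(), before v9fs_flush() runs: 1 slot, 0 pdus.
print(accept_request(1, 0))   # -> no-pdu
# After v9fs_flush() has returned the pdu to the pool:
print(accept_request(1, 1))   # -> ok
```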
On Sat, 7 Jan 2017 06:26:47 +
Al Viro wrote:
> On Fri, Jan 06, 2017 at 02:52:35PM +0100, Greg Kurz wrote:
>
> > Looking at the tag numbers, I think we're hitting the hardcoded limit of 128
> > simultaneous requests in QEMU (which doesn't produce any error, new requests
> > are silently
On Fri, Jan 06, 2017 at 02:52:35PM +0100, Greg Kurz wrote:
> Looking at the tag numbers, I think we're hitting the hardcoded limit of 128
> simultaneous requests in QEMU (which doesn't produce any error, new requests
> are silently dropped).
>
> Tuomas, can you change MAX_REQ to some higher
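Greg's observation above is that QEMU serves at most 128 simultaneous requests and silently drops the rest, so any burst exceeding the limit leaves some callers waiting forever; raising the limit only moves the threshold. A toy model of that failure mode (illustrative only, not QEMU code; 128 is the MAX_REQ value from the thread):

```python
# Toy model of the silent-drop behavior described above (not QEMU code).
# The server has MAX_REQ request slots; requests arriving while all slots
# are busy get no error reply, so their callers hang forever.

MAX_REQ = 128  # the hardcoded QEMU limit discussed in the thread

def serve_burst(n_requests, max_req=MAX_REQ):
    """Return (served, silently_dropped) for a burst of in-flight requests."""
    served = min(n_requests, max_req)
    dropped = n_requests - served
    return served, dropped

# A burst of 200 simultaneous requests against the default limit:
print(serve_burst(200))        # -> (128, 72): 72 callers hang
# With the limit bumped to 1024, the same burst fits:
print(serve_burst(200, 1024))  # -> (200, 0)
```

This matches the behaviour reported later in the thread: bumping MAX_REQ to 1024 makes the hang go away, and dropping it to 64 reproduces it on kernels that previously worked.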
On Wed, 4 Jan 2017 23:01:01 +
Al Viro wrote:
> > Here's logs that should be complete this time:
> >
> > https://gist.githubusercontent.com/dezgeg/08629d4c8ca79da794bc087e5951e518/raw/a1a82b9bc24e5282c82beb43a9dc91974ffcf75a/9p.qemu.log
> >
> Here's logs that should be complete this time:
>
> https://gist.githubusercontent.com/dezgeg/08629d4c8ca79da794bc087e5951e518/raw/a1a82b9bc24e5282c82beb43a9dc91974ffcf75a/9p.qemu.log
>
On Wed, 4 Jan 2017 01:47:53 +
Al Viro wrote:
> On Wed, Jan 04, 2017 at 01:34:50AM +0200, Tuomas Tynkkynen wrote:
>
> > I got these logs from the server & client with 9p tracepoints enabled:
> >
> >
On Wed, Jan 04, 2017 at 01:34:50AM +0200, Tuomas Tynkkynen wrote:
> I got these logs from the server & client with 9p tracepoints enabled:
>
> https://gist.githubusercontent.com/dezgeg/02447100b3182167403099fe7de4d941/raw/3772e408ddf586fb662ac9148fc10943529a6b99/dmesg%2520with%25209p%2520trace
>
On Mon, 2 Jan 2017 16:23:09 +
Al Viro wrote:
>
> What I'd like to see is a log of 9p traffic in those; to hell with the
> payloads, just the type and tag from each message [...]
Thanks for the suggestions. With the following patch to QEMU:
diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
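The body of that QEMU patch is truncated above. For reference, logging "just the type and tag" is cheap because every 9P message begins with the same little-endian header, size[4] type[1] tag[2]; a standalone sketch of the decoding (a hypothetical helper for illustration, not the patch itself):

```python
# Standalone sketch of decoding the fields Al asked for: every 9P message
# starts with a little-endian header, size[4] type[1] tag[2]. This is a
# hypothetical helper for illustration, not the (truncated) QEMU patch.
import struct

def parse_9p_header(buf):
    """Return (size, type, tag) from the first 7 bytes of a 9P message."""
    return struct.unpack_from('<IBH', buf, 0)

# Example: a Tversion message (type 100) carrying the NOTAG tag (0xffff):
msg = struct.pack('<IBH', 19, 100, 0xffff) + b'...'
print(parse_9p_header(msg))   # -> (19, 100, 65535)
```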
On Mon, Jan 02, 2017 at 10:20:35AM +0200, Tuomas Tynkkynen wrote:
> Hi fsdevel,
>
> I tracked this problem down a bit and it seems that it started happening
> after the VFS merge in 4.7-rc1: 7f427d3a6029331304f91ef4d7cf646f054216d2:
>
> Merge branch 'for-linus' of
>
Hi fsdevel,
I tracked this problem down a bit and it seems that it started happening
after the VFS merge in 4.7-rc1: 7f427d3a6029331304f91ef4d7cf646f054216d2:
Merge branch 'for-linus' of
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull parallel filesystem directory
On Tue, 29 Nov 2016 10:39:39 -0600
Eric Van Hensbergen wrote:
> Any idea of what xfstests is doing at this point in time? I'd be a
> bit worried about some sort of loop in the namespace since it seems to
> be in path traversal. Could also be some sort of resource leak or
> fragmentation, I'll admit that many of the regression tests we do are
> fairly short in
Hi fsdevel,
I have been observing hangs when running xfstests generic/224. Curiously
enough, the test is *not* causing problems on the FS under test (I've
tried both ext4 and f2fs) but instead it's causing the 9pfs that I'm
using as the root filesystem to crap out.
How it shows up is that the