On 15/02/18 16:48, J. Bruce Fields wrote:
On Tue, Feb 13, 2018 at 07:01:22AM +0000, Terry Barnaby wrote:
> The transaction system allows the write delegation to send the data to the
> server's RAM without the overhead of synchronous writes to the disk.
As far as I'm concerned this problem is already solved--did you miss the
discussion of
On Mon, Feb 05, 2018 at 06:06:29PM -0500, J. Bruce Fields wrote:
> Or this?:
>
>
> https://www.newegg.com/Product/Product.aspx?Item=N82E16820156153_re=ssd_power_loss_protection-_-20-156-153-_-Product
Ugh, Anandtech explains that their marketing is misleading, that drive
can't actually
On 13 February 2018 at 02:01, Terry Barnaby wrote:
>> Yes: http://nfs.sourceforge.net/#faq_a8
>>
>> --b.
>
>
> Quite right, it was network limited (disk vs network speed is about the
> same). Using a slower USB stick disk shows that fsync() is not working with
> an NFSv4
On 12/02/18 22:14, J. Bruce Fields wrote:
On Mon, Feb 12, 2018 at 08:12:58PM +0000, Terry Barnaby wrote:
> On 12/02/18 17:35, Terry Barnaby wrote:
> > On 12/02/18 17:15, J. Bruce Fields wrote:
> > > On Mon, Feb 12, 2018 at 05:09:32PM +0000, Terry Barnaby wrote:
> > > > One thing on this, that I forgot to ask, doesn't fsync() work
On Mon, Feb 12, 2018 at 05:35:49PM +0000, Terry Barnaby wrote:
> Well that seems like a major drop off, I always thought that fsync() would
> work in this case.
No, it never has.
> I don't understand why fsync() should not operate as
> intended? Sounds like this NFS async thing needs some work
On 12/02/18 17:35, Terry Barnaby wrote:
On 12/02/18 17:15, J. Bruce Fields wrote:
On Mon, Feb 12, 2018 at 05:09:32PM +0000, Terry Barnaby wrote:
> One thing on this, that I forgot to ask, doesn't fsync() work properly with
> an NFS server side async mount then?
No.
If a server sets "async" on an export, there is absolutely no way for a
client to guarantee that data reaches
On 12/02/18 17:06, J. Bruce Fields wrote:
On Mon, Feb 12, 2018 at 09:08:47AM +0000, Terry Barnaby wrote:
> On 09/02/18 08:25, nicolas.mail...@laposte.net wrote:
> > ----- Original Message -----
> > From: "Terry Barnaby"
> > > If
> > > it was important to get the data to disk it would have been using
> > > fsync(), FS sync, or some other
On 09/02/18 08:25, nicolas.mail...@laposte.net wrote:
----- Original Message -----
From: "Terry Barnaby"
> If
> it was important to get the data to disk it would have been using
> fsync(), FS sync, or some other transaction based app
??? Many people use NFS NAS because doing RAID+Backup on every client is too
expensive. So yes, they *are* using NFS
On Thu, Feb 08, 2018 at 08:21:44PM +0000, Terry Barnaby wrote:
> Doesn't fsync() and perhaps sync() work across NFS then when the server has
> an async export,
No.
On a local filesystem, a file create followed by a sync will ensure
the file create reaches disk. Normally on NFS, the same is
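The local-filesystem guarantee Bruce contrasts with NFS here can be sketched in a few lines. This is an illustrative sketch, not code from the thread; the function name is invented. On a local filesystem the fsync() calls below make both the data and the new directory entry durable, while against an NFS export marked "async" the server may acknowledge the corresponding WRITE/COMMIT from RAM, so the same calls give no such guarantee.

```python
# Sketch of the durability pattern under discussion: write, fsync the
# file, then fsync the parent directory so the file *creation* itself
# is on disk too. All names here are illustrative.
import os

def durable_create(path: str, data: bytes) -> None:
    """Create a file and push data plus the directory entry to stable storage."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)            # flush file data and metadata
    finally:
        os.close(fd)
    # fsync the parent directory so the new directory entry is durable
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

On an async export, both fsync() calls can return before anything has reached the server's disk, which is exactly the gap being described above.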
On 06/02/18 21:48, J. Bruce Fields wrote:
On Tue, Feb 06, 2018 at 08:18:27PM +0000, Terry Barnaby wrote:
> Well, when a program running on a system calls open(), write() etc. to the
> local disk FS the disk's contents are not actually updated. The data is in
> server buffers until the next sync/fsync or some time has passed. So, in
> your
On 06/02/18 18:55, J. Bruce Fields wrote:
On Tue, Feb 06, 2018 at 06:49:28PM +0000, Terry Barnaby wrote:
> On 05/02/18 14:52, J. Bruce Fields wrote:
> > > Yet another poor NFSv3 performance issue. If I do a "ls -lR" of a certain
> > > NFS mounted directory over a slow link (NFS over Openvpn over FTTP
> > > 80/20Mbps), just after mounting
On 05/02/18 23:06, J. Bruce Fields wrote:
On Thu, Feb 01, 2018 at 08:29:49AM +0000, Terry Barnaby wrote:
> 1. Have an OPEN-SETATTR-WRITE RPC call all in one and a SETATTR-CLOSE call
> all in one. This would reduce the latency of a small file to 1ms rather than
> 3ms thus 66% faster. Would require the client to delay the OPEN/SETATTR
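The 1ms-versus-3ms arithmetic in this proposal counts synchronous RPC round trips per small file. A rough way to see this per-file cost on any mount is to time a batch of small-file creates; the sketch below is hypothetical (directory, file count, and sizes are invented for illustration) and can be pointed at a local directory versus an NFS mount to compare.

```python
# Rough per-file create cost measurement, in the spirit of the
# round-trips-per-small-file estimate above. Each iteration does
# open/write/close, which on NFS maps to several RPCs.
import os
import time

def time_small_file_creates(directory: str, count: int = 100) -> float:
    """Return average seconds per small-file create+write+close."""
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(directory, f"f{i}.txt"), "wb") as f:
            f.write(b"x" * 64)
    return (time.perf_counter() - start) / count
```

Comparing the averages on a local filesystem and on an NFS mount makes the latency-per-RPC contribution visible, independent of raw bandwidth.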
On 05/02/18 14:52, J. Bruce Fields wrote:
Yet another poor NFSv3 performance issue. If I do a "ls -lR" of a certain
NFS mounted directory over a slow link (NFS over Openvpn over FTTP
80/20Mbps), just after mounting the file system (default NFSv4 mount with
async), it takes about 9 seconds. If I
On Mon, Feb 05, 2018 at 08:21:06AM +0000, Terry Barnaby wrote:
> On 01/02/18 08:29, Terry Barnaby wrote:
> > On 01/02/18 01:34, Jeremy Linton wrote:
> > > On 01/31/2018 09:49 AM, J. Bruce Fields wrote:
> > > > On Tue, Jan 30, 2018 at 01:52:49PM -0600, Jeremy Linton wrote:
> > > > > Have you tried
http://vger.kernel.org/vger-lists.html#linux-nfs
--
Chris Murphy
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
On 01/02/18 08:29, Terry Barnaby wrote:
On 01/02/18 01:34, Jeremy Linton wrote:
On 01/31/2018 09:49 AM, J. Bruce Fields wrote:
On Tue, Jan 30, 2018 at 01:52:49PM -0600, Jeremy Linton wrote:
Have you tried this with a '-o nfsvers=3' during mount? Did that help?
I noticed a large decrease in
On Wed, Jan 31, 2018 at 07:34:24PM -0600, Jeremy Linton wrote:
> On 01/31/2018 09:49 AM, J. Bruce Fields wrote:
> > In the kernel compile case there's probably also a lot of re-opening and
> > re-reading files too? NFSv4 is chattier there too. Read delegations
> > should help compensate, but we
On 01/02/18 01:34, Jeremy Linton wrote:
On 01/31/2018 09:49 AM, J. Bruce Fields wrote:
On Tue, Jan 30, 2018 at 01:52:49PM -0600, Jeremy Linton wrote:
> Have you tried this with a '-o nfsvers=3' during mount? Did that help?
>
> I noticed a large decrease in my kernel build times across NFS/lan a while
> back after a machine/kernel/10g upgrade. After playing with mount/export
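The version comparison Jeremy suggests can be reproduced with mounts along these lines. Server name and paths are placeholders, not from the thread; this is a sketch of the procedure, not anyone's exact setup (mounting needs root and a live NFS server, so treat it as a config fragment).

```shell
# Mount the same export as NFSv3, run the workload, then repeat as NFSv4.
# "server:/export" and "/mnt/test" are illustrative placeholders.
sudo mount -t nfs -o nfsvers=3 server:/export /mnt/test
# ... run the workload here (e.g. a kernel build or tar -xzf) ...
sudo umount /mnt/test

sudo mount -t nfs -o nfsvers=4.2 server:/export /mnt/test
# ... run the same workload for comparison ...
sudo umount /mnt/test
```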
On 28 January 2018 at 07:48, Terry Barnaby wrote:
> When doing a tar -xzf ... of a big source tar on an NFSv4 file system the
> time taken is huge. I am seeing an overall data rate of about 1 MByte per
> second across the network interface. If I copy a single large file I see
On Tue, Jan 30, 2018 at 10:30:04PM +0000, Terry Barnaby wrote:
> Also, on the 0.5ms. Is this effectively the 1ms system tick ie. the NFS
> processing is not processing based on the packet events (not pre-emptive)
> but on the next system tick ?
>
> An ICMP ping is about 0.13ms (to and fro)
On 30/01/18 21:31, J. Bruce Fields wrote:
On Tue, Jan 30, 2018 at 07:03:17PM +0000, Terry Barnaby wrote:
> It looks like each RPC call takes about 0.5ms. Why do there need to be so
> many RPC calls for this? The OPEN call could set the attribs, no need for
> the later GETATTR or SETATTR calls.
The first SETATTR (which sets ctime and
Hi,
On 01/30/2018 01:03 PM, Terry Barnaby wrote:
Being a daredevil, I have used the NFS async option for 27 years
without an issue on multiple systems :)
I have just mounted my ext4 disk with the same options you were using
and the same NFS export options and the speed here looks the same as I
had previously. As I can't wait 2+ hours so
On 30/01/18 17:54, J. Bruce Fields wrote:
On Tue, Jan 30, 2018 at 12:31:22PM -0500, J. Bruce Fields wrote:
On Tue, Jan 30, 2018 at 04:49:41PM +0000, Terry Barnaby wrote:
> I have just tried running the untar on our work systems. These are again
> Fedora27 but newer hardware.
> I set one of the server's NFS exports to just rw (removed the async option
> in /etc/exports and ran exportfs -arv).
> Remounted
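The export change described above amounts to something like the following; the path and client range are invented examples, not the poster's configuration.

```shell
# /etc/exports: with "async" the server may acknowledge writes/commits
# from RAM; with "sync" (the exportfs default when neither is given) it
# must commit to disk first. Path and client range are illustrative.
#
#   /export  192.168.1.0/24(rw,async)   # fast, unsafe on server crash
#   /export  192.168.1.0/24(rw,sync)    # safe, slower for small files

# Re-export after editing /etc/exports:
sudo exportfs -arv
```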
On 30/01/18 16:22, J. Bruce Fields wrote:
On Tue, Jan 30, 2018 at 03:29:41PM +0000, Terry Barnaby wrote:
> On 30/01/18 15:09, J. Bruce Fields wrote:
> > By comparison on my little home server (Fedora, ext4, a couple WD Black
> > 1TB drives), with sync, that untar takes 7:44, about 8ms/file.
> Ok, that is far more reasonable, so
On 30 January 2018 at 03:12, Petr Pisar wrote:
> On Tue, Jan 30, 2018 at 08:31:05AM +0100, Reindl Harald wrote:
> > On 30.01.2018 08:25, Petr Pisar wrote:
> > > On 2018-01-29, J. Bruce Fields wrote:
> > > > The file create isn't allowed to return until
On 30/01/18 15:09, J. Bruce Fields wrote:
On Tue, Jan 30, 2018 at 08:49:27AM +0000, Terry Barnaby wrote:
On 29/01/18 22:28, J. Bruce Fields wrote:
On Mon, Jan 29, 2018 at 08:37:50PM +0000, Terry Barnaby wrote:
Ok, that's a shame unless NFSv4's write performance with small files/dirs
is
On Tue, Jan 30, 2018 at 10:00:44AM +0100, Reindl Harald wrote:
> On 30.01.2018 09:49, Terry Barnaby wrote:
> > Untar on server to its local disk: 13 seconds, effective data rate: 68
> > MBytes/s
> >
> > Untar on server over NFSv4.2 with async on server: 3 minutes, effective
> > data rate:
On 29/01/18 22:28, J. Bruce Fields wrote:
On Mon, Jan 29, 2018 at 08:37:50PM +0000, Terry Barnaby wrote:
> Ok, that's a shame unless NFSv4's write performance with small files/dirs
> is relatively ok which it isn't on my systems.
> Although async was "unsafe" this was not an issue in main standard
> scenarios such as an NFS mounted home
On 2018-01-29, J. Bruce Fields wrote:
> The file create isn't allowed to return until the server has created the
> file and the change has actually reached disk.
>
Why is there such a requirement? This is not true for local file
systems. This is why fsync() exists.
-- Petr
On 29/01/18 19:50, Steve Dickson wrote:
On 01/29/2018 12:42 PM, Steven Whitehouse wrote:
-------- Forwarded Message --------
Subject: Re: Fedora27: NFS v4 terrible write performance, is async
working
Date: Sun, 28 Jan 2018 21:17:02 +0000
From: Terry Barnaby <ter...@beam.ltd
On 28/01/18 15:47, Richard W.M. Jones wrote:
Please post questions on the users list:
https://lists.fedoraproject.org/admin/lists/users.lists.fedoraproject.org/
Sorry, will move there. Thought developers may be more into NFS than
users in general these days, there being no responses in the
On 28/01/18 14:38, Steven Whitehouse wrote:
Hi,
On 28/01/18 07:48, Terry Barnaby wrote:
When doing a tar -xzf ... of a big source tar on an NFSv4 file system
the time taken is huge. I am seeing an overall data rate of about 1
MByte per second across the network interface. If I copy a single
Please post questions on the users list:
https://lists.fedoraproject.org/admin/lists/users.lists.fedoraproject.org/
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical
When doing a tar -xzf ... of a big source tar on an NFSv4 file system
the time taken is huge. I am seeing an overall data rate of about 1
MByte per second across the network interface. If I copy a single large
file I see a network data rate of about 110 MBytes/sec which is about
the limit of