Re: Announcing creation of Fedora Source-git SIG

2021-04-15 Thread J. Bruce Fields
On Wed, Apr 14, 2021 at 04:53:06PM +0200, Vitaly Zaitsev via devel wrote:
> On 14.04.2021 16:27, Tomas Tomecek wrote:
> > Could you, please, be more constructive and say what the actual
> > problems are for you with such repositories?
> 
> 1. Some upstream repositories (Qt, Chromium, Linux kernel) are very huge
> (more than 100 GiB). I don't want to download them from upstream and then
> upload to Fedora.

Which one exactly is more than 100 GiB?  The Linux kernel certainly
isn't (more like 2.5 GiB, last I checked.)
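
For anyone who wants to check for themselves, a bare clone reports its
on-disk size directly (rough sketch; the number grows over time, and the
URL is the mainline tree):

    git clone --bare https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    git -C linux.git count-objects -vH    # "size-pack" is the packed object store size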

--b.


Re: Proposal to deprecate `fedpkg local`

2021-01-27 Thread J. Bruce Fields
On Wed, Jan 27, 2021 at 05:17:24PM +0100, Vít Ondruch wrote:
> I wonder, what would be the sentiment if I proposed to deprecate the
> `fedpkg local` command. I don't think it should be used. Mock should
> be the preferred way. Would there be anybody really missing this
> functionality?

For what it's worth, when I do a search for "how to build a fedora
kernel", the first three hits I get are:

https://fedoraproject.org/wiki/Building_a_custom_kernel
https://docs.fedoraproject.org/en-US/quick-docs/kernel/build-custom-kernel/
https://fedoramagazine.org/building-fedora-kernel/

which all use "fedpkg local".
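
For comparison, the mock-based workflow those pages would presumably
switch to looks roughly like this (release name illustrative, run from a
dist-git checkout):

    fedpkg --release f33 srpm                      # build the SRPM locally
    mock -r fedora-33-x86_64 --rebuild *.src.rpm   # build it in a clean chroot
    # or, in one step:
    fedpkg --release f33 mockbuild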

--b.


Re: Fedora 34 Change: Route all Audio to PipeWire (System-Wide Change)

2020-11-30 Thread J. Bruce Fields
On Tue, Nov 24, 2020 at 08:06:41PM +0100, Joël Krähemann wrote:
> On Tue, Nov 24, 2020 at 3:27 PM Neal Gompa  wrote:
> > That being said, I have spoken to a few audio engineers, and basically
> > none of them use ALSA directly. They can't because ALSA doesn't
> > support mixing properly, among other things. Most of them use JACK or
> > PulseAudio, depending on their requirements. PipeWire is intended to
> > simplify the pro audio case while bringing those benefits to casual
> > audiophiles who use PulseAudio.
> 
> I really doubt that you listen to youtube while creating music. What do
> you want to mix? Might be for some exotic JACK setup.

FWIW, as an amateur musician, bandmates send me youtube links for songs
they want to cover, and as part of learning the song I may play along
with the youtube video, record it to an Ardour track, etc.

All currently doable on Fedora but there are some hoops to jump through.

(Very much looking forward to PipeWire.)

--b.


Re: Fedora 33 System-Wide Change proposal: Make btrfs the default file system for desktop variants

2020-06-29 Thread J. Bruce Fields
On Mon, Jun 29, 2020 at 01:33:37PM -0400, Josef Bacik wrote:
> On 6/29/20 12:23 PM, J. Bruce Fields wrote:
> > Maybe not a desktop question, but do you know btrfs's change
> > attribute/i_version status?  Does it default to bumping i_version on
> > each change, or does that still need to be opted in?  And has anyone
> > measured the performance delta (i_version vs. noi_version) recently?
> > 
> 
> Yeah it defaults to bumping it all the time, we just use the normal inode
> changing infrastructure so it gets updated the same way everybody else does.
> AFAIK there's no way to opt out of it, unless there's a -o noiversion that
> exists?

Yeah, there's a -o noiversion.

I decided I should actually go check, and a btrfs filesystem created and
mounted with defaults did look like it was doing this right.  Good!
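
A rough way to repeat that check on a throwaway filesystem, if anyone
else is curious (loop image and mount point are made up, and exactly
where an iversion/noiversion flag shows up in the mount options can vary
by kernel):

    truncate -s 1G /tmp/btrfs.img
    mkfs.btrfs /tmp/btrfs.img
    mount -o loop /tmp/btrfs.img /mnt/test
    grep /mnt/test /proc/self/mounts    # inspect the effective mount options
    umount /mnt/test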

--b.


Re: Fedora 33 System-Wide Change proposal: Make btrfs the default file system for desktop variants

2020-06-29 Thread J. Bruce Fields
Maybe not a desktop question, but do you know btrfs's change
attribute/i_version status?  Does it default to bumping i_version on
each change, or does that still need to be opted in?  And has anyone
measured the performance delta (i_version vs. noi_version) recently?

--b.


Re: Fedora27: NFS v4 terrible write performance, is async working

2018-02-15 Thread J. Bruce Fields
On Tue, Feb 13, 2018 at 07:01:22AM +, Terry Barnaby wrote:
> The transaction system allows the write delegation to send the data to the
> servers RAM without the overhead of synchronous writes to the disk.

As far as I'm concerned this problem is already solved--did you miss the
discussion of WRITE/COMMIT in other email?

The problem you're running into is with metadata (file creates) more
than data.

> PS: I have some RPC latency figures for some other NFS servers at work. The
> NFS RPC latency on some of them is nearer the ICMP ping times, ie about
> 100us. Maybe quite a bit of CPU is needed to respond to an NFS RPC call
> these days. The 500us RPC time was on a oldish home server using an Intel(R)
> Core(TM)2 CPU 6300 @ 1.86GHz.

Tracing to figure out the source of the latency might still be
interesting.

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-02-14 Thread J. Bruce Fields
On Mon, Feb 05, 2018 at 06:06:29PM -0500, J. Bruce Fields wrote:
> Or this?:
> 
>   
> https://www.newegg.com/Product/Product.aspx?Item=N82E16820156153_re=ssd_power_loss_protection-_-20-156-153-_-Product

Ugh, Anandtech explains that their marketing is misleading, that drive
can't actually destage its volatile write cache on power loss:

https://www.anandtech.com/show/8528/micron-m600-128gb-256gb-1tb-ssd-review-nda-placeholder

I've been trying to figure this out in part because I wondered what I
might use if I replaced my home server this year.  After some further
looking the cheapest PCIe-attached SSD with real power loss protection
that I've found is this Intel model a little over $300:


http://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/data-center-ssds/dc-p3520-series/dc-p3520-450gb-2-5inch-3d1.html

Kinda ridiculous to buy a 450 gig drive mainly so I can put a half-gig
journal on it.  It might turn out to be best for my case just to RAID a
couple of those SSDs and skip the conventional drives completely.

--b.


Re: Fedora27: NFS v4 terrible write performance, is async working

2018-02-12 Thread J. Bruce Fields
On Mon, Feb 12, 2018 at 08:12:58PM +, Terry Barnaby wrote:
> On 12/02/18 17:35, Terry Barnaby wrote:
> > On 12/02/18 17:15, J. Bruce Fields wrote:
> > > On Mon, Feb 12, 2018 at 05:09:32PM +, Terry Barnaby wrote:
> > > > One thing on this, that I forgot to ask, doesn't fsync() work
> > > > properly with
> > > > an NFS server side async mount then ?
> > > No.
> > > 
> > > If a server sets "async" on an export, there is absolutely no way for a
> > > client to guarantee that data reaches disk, or to know when it happens.
> > > 
> > > Possibly "ignore_sync", or "unsafe_sync", or something else, would be a
> > > better name.
...
> Just tried the use of fsync() with an NFS async mount, it appears to work.

That's expected, it's the *export* option that cheats, not the mount
option.

Also, even if you're using the async export option--fsync will still
flush data to server memory, just not necessarily to disk.

> With a simple 'C' program as a test program I see the following data
> rates/times when the program writes 100 MBytes to a single file over NFS
> (open, write, write .., fsync) followed by close (after the timing):
> 
> NFS Write multiple small files 0.001584 ms/per file 0.615829 MBytes/sec
> CpuUsage: 3.2%
> Disktest: Writing/Reading 100.00 MBytes in 1048576 Byte Chunks
> Disk Write sequential data rate fsync: 1 107.250685 MBytes/sec CpuUsage:
> 13.4%
> Disk Write sequential data rate fsync: 0 4758.953878 MBytes/sec CpuUsage:
> 66.7%
> 
> Without the fsync() call the data rate is obviously to buffers and with the
> fsync() call it definitely looks like it is to disk.

Could be, or you could be network-limited, hard to tell without knowing
more.
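
If anyone wants a rough version of the same comparison without writing a
C harness, dd can approximate it (the NFS path is made up):

    # buffered write: returns once the data is in the client page cache
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=100
    # same write, but dd calls fsync() before exiting
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=100 conv=fsync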

> Interestingly, it appears, that the close() call actually does an effective
> fsync() as well as the close() takes an age when fsync() is not used.

Yes: http://nfs.sourceforge.net/#faq_a8

--b.


Re: Fedora27: NFS v4 terrible write performance, is async working

2018-02-12 Thread J. Bruce Fields
On Mon, Feb 12, 2018 at 05:35:49PM +, Terry Barnaby wrote:
> Well that seems like a major drop off, I always thought that fsync() would
> work in this case.

No, it never has.

> I don't understand why fsync() should not operate as
> intended ? Sounds like this NFS async thing needs some work !

By "NFS async" I assume you mean the export option.  Believe me, I'd
remove it entirely if I thought I could get away with it.

> I still do not understand why NFS doesn't operate in the same way as a
> standard mount on this. The use for async is only for improved performance
> due to disk write latency and speed (or are there other reasons ?)

Reasons for the async export option?  Historically I believe it was a
workaround for the fact that NFSv2 didn't have COMMIT, so even writes of
ordinary file data suffered from the problem that metadata-modifying
operations still have today.

> So with a local system mount:
> 
> async: normal mode: All system calls manipulate in buffer memory disk
> structure (inodes etc). Data/Metadata is flushed to disk on fsync(), sync()
> and occasionally by kernel. Processes data is not actually stored until
> fsync(), sync() etc.
> 
> sync: with sync option. Data/metadata is written to disk before system calls
> return (all FS system calls ?).
> 
> With an NFS mount I would have thought it should be the same.

As a distributed filesystem which aims to survive server reboots, it's
more complicated.

> async: normal mode: All system calls manipulate in buffer memory disk
> structure (inodes etc) this would normally be on the server (so multiple
> clients can work with the same data) but with some options (particular
> usage) maybe client side write buffering/caching could be used (ie. data
> would not actually pass to server during every FS system call).

Definitely required if you want to, for example, be able to use the full
network bandwidth when writing data to a file.

> Data/Metadata is flushed to server disk on fsync(), sync() and occasionally
> by kernel (If client side write caching is used flushes across network and
> then flushes server buffers). Processes data is not actually stored until
> fsync(), sync() etc.

I'd be nervous about the idea of a lot of unsync'd metadata changes sitting
around in server memory.  On server crash/restart that's a bunch of
files and directories that are visible to every client, and that vanish
without anyone actually deleting them.  I wonder what the consequences
would be?

This is something that can only happen on a distributed filesystem: on
ext4, a crash takes down all the users of the filesystem too.

(Thinking about this: don't we already have a tiny window during the rpc
processing, after a change has been made but before it's been committed,
when a server crash could make the change vanish?  But, no, actually, I
believe we hold a lock on the parent directory in every such case,
preventing anyone from seeing the change till the commit has finished.)

Also, delegations potentially hide both network and disk latency,
whereas your proposal only hides disk latency.  The latter is more
important in your case.  I'm not sure what the ratio is for higher-end
setups, actually--probably disk latency is still higher if not as high.

> sync: with client side sync option. Data/metadata is written across NFS and
> to Server disk before system calls return (all FS system calls ?).
> 
> I really don't understand why the async option is implemented on the server
> export although a sync option here could force sync for all clients for that
> mount. What am I missing ? Is there some good reason (rather than history)
> it is done this way ?

So, again, Linux knfsd's "async" export behavior is just incorrect, and
I'd be happier if we didn't have to support it.

See above for why I don't think what you describe as async-like behavior
would fly.

As for adding protocol to allow the server to tell all clients that they
should do "sync" mounts: I don't know, I suppose it's possible, but a) I
don't know how much use it would actually get (I suspect "sync" mounts
are pretty rare), and b) that's meddling with client implementation
behavior a little more than we normally would in the protocol.  The
difference between "sync" and "async" mounts is purely a matter of
client behavior, after all, it's not really visible to the protocol at
all.

--b.


Re: Fedora27: NFS v4 terrible write performance, is async working

2018-02-12 Thread J. Bruce Fields
On Mon, Feb 12, 2018 at 05:09:32PM +, Terry Barnaby wrote:
> One thing on this, that I forgot to ask, doesn't fsync() work properly with
> an NFS server side async mount then ?

No.

If a server sets "async" on an export, there is absolutely no way for a
client to guarantee that data reaches disk, or to know when it happens.

Possibly "ignore_sync", or "unsafe_sync", or something else, would be a
better name.

--b.


Re: Fedora27: NFS v4 terrible write performance, is async working

2018-02-12 Thread J. Bruce Fields
On Mon, Feb 12, 2018 at 09:08:47AM +, Terry Barnaby wrote:
> On 09/02/18 08:25, nicolas.mail...@laposte.net wrote:
> > - Mail original -
> > De: "Terry Barnaby"
> > > If
> > > it was important to get the data to disk it would have been using
> > > fsync(), FS sync, or some other transaction based app
> > ??? Many people use NFS NAS because doing RAID+Backup on every client is 
> > too expensive. So yes, they *are* using NFS because it is important to get 
> > the data to disk.
> > 
> > Regards,
> > 
> Yes, that is why I said some people would be using "FS sync". These people
> would use the sync option, but then they would use "sync" mount option,
> (ideally this would be set on the NFS client as the clients know they need
> this).

The "sync" mount option should not be necessary for data safety.
Carefully written apps know how to use fsync() and related calls at
points where they need data to be durable.

The server-side "async" export option, on the other hand, undermines
exactly those calls and therefore can result in lost or corrupted data
on a server crash, no matter how careful the application.

Again, we need to be very careful to distinguish between the client-side
"sync" mount option and the server-side "sync" export option.

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-02-08 Thread J. Bruce Fields
On Thu, Feb 08, 2018 at 08:21:44PM +, Terry Barnaby wrote:
> Doesn't fsync() and perhaps sync() work across NFS then when the server has
> an async export,

No.

On a local filesystem, a file create followed by a sync will ensure
the file create reaches disk.  Normally on NFS, the same is true--for
the trivial reason that the file create already ensured this.  If your
server is Linux knfsd exporting the filesystem with async, the file
create may still not be on disk after the sync.

> I don't think a program on a remote system is particularly worse off
> if the NFS server dies, it may have to die if it can't do any special
> recovery.

Well-written applications should be able to deal with recovering after a
crash, *if* the filesystem respects fsync() and friends.  If the
filesystem ignores them and loses data silently, the application is left
in a rather more difficult position!

> > > Only difference from the normal FS conventions I am suggesting is to
> > > allow the server to stipulate "sync" on its mount that forces sync
> > > mode for all clients on that FS.
> > Anyway, we don't have protocol to tell clients to do that.
> As I said NFSv4.3 :)

Protocol extensions are certainly possible.

> > So if you have reliable servers and power, maybe you're comfortable with
> > the risk.  There's a reason that's not the default, though.
> Well, it is the default for local FS mounts so I really don't see why it
> should be different for network mounts.

It's definitely not the default for local mounts to ignore sync().  So,
you understand why I say that the "async" export option is very
different from the mount option with the same name.  (Yes, the name was
a mistake.)  And you can see why a filesystem engineer would get nervous
about recommending that configuration.

> But anyway for my usage NFS sync is completely unusable (as would
> local sync mounts) so it has to be async NFS or local disks (13 secs
> local disk -> 3mins NFS async-> 2 hours NFS sync). I would have
> thought that would go for the majority of NFS usage. No issue to me
> though as long as async can be configured and works well :)

So, instead what I personally use is a hardware configuration that allows
me to get similar performance while still using the default export
options.

> > Sure.  The protocol issues are probably more complicated than they first
> > appear, though!
> Yes, they probably are, most things are below the surface, but I still think
> there are likely to be a lot of improvements that could be made that would
> make using NFS async more tenable to the user.
> If necessary local file caching (to local disk) with delayed NFS writes. I
> do use fscache for the NFS - OpenVPN - FTTP mounts, but the NFS caching time
> tests probably hit the performance of this for reads and I presume writes
> would be write through rather than delayed write. Haven't actually looked at
> the performance of this and I know there are other network file systems that
> may be more suited in that case.

fscache doesn't remove the need for synchronous file creates.

So, in the existing protocol write delegations are probably what would
help most; which is why they're near the top of my todo list.

But write delegations just cover file data and attributes.  If we want a
client to be able to, for example, respond to creat() with success, we
want write delegations on *directories*.  That's rather more
complicated, and we currently don't even have protocol proposed for
that.  It's been proposed in the past and I hope there may be sufficient
time and motivation to make it happen some day.

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-02-06 Thread J. Bruce Fields
On Tue, Feb 06, 2018 at 08:18:27PM +, Terry Barnaby wrote:
> Well, when a program running on a system calls open(), write() etc. to the
> local disk FS the disk's contents is not actually updated. The data is in
> server buffers until the next sync/fsync or some time has passed. So, in
> your parlance, the OS write() call lies to the program. So it is by default
> async unless the "sync" mount option is used when mounting the particular
> file system in question.

That's right, but note applications are written with the knowledge that
OS's behave this way, and are given tools (sync, fsync, etc.) to manage
this behavior so that they still have some control over what survives a
crash.

(But sync & friends no longer do what they're supposed to on a Linux
server exporting with async.)

> Although it is different from the current NFS settings methods, I would have
> thought that this should be the same for NFS. So if a client mounts a file
> system normally it is async, ie write() data is in buffers somewhere (client
> or server) unless the client mounts the file system in sync mode.

In fact, this is pretty much how it works, for write().

It didn't used to be that way--NFSv2 writes were all synchronous.

The problem is that if a server power cycles while it still had dirty
data in its caches, what should you do?

You can't ignore it--you'd just be silently losing data.  You could
return an error at some point, but "we just lost some of your data, no
idea what" isn't an error an application can really act on.

So NFSv3 introduced a separation of write into WRITE and COMMIT.  The
client first sends a WRITE with the data, then latter sends a COMMIT
call that says "please don't return till that data I sent before is
actually on disk".

If the server reboots, there's a limited set of data that the client
needs to resend to recover (just data that's been written but not
committed.)

But we only have that for file data, metadata would be more complicated,
so stuff like file creates, setattr, directory operations, etc., are
still synchronous.
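
One crude way to watch that split from the client side is to diff the
per-operation counters around a write (mount point made up; the counter
layout differs a bit between v3 and v4):

    nfsstat -c        # note the current write/commit counts
    dd if=/dev/zero of=/mnt/nfs/big bs=1M count=256 conv=fsync
    nfsstat -c        # writes grow with the data, commits by only one or a few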

> Only difference from the normal FS conventions I am suggesting is to
> allow the server to stipulate "sync" on its mount that forces sync
> mode for all clients on that FS.

Anyway, we don't have protocol to tell clients to do that.

> In the case of a /home mount for example, or a source code build file
> system, it is normally only one client that is accessing the dir and if a
> write fails due to the server going down (an unlikely occurrence, its not
> much of an issue. I have only had this happen a couple of times in 28 years
> and then with no significant issues (power outage, disk fail pre-raid etc.).

So if you have reliable servers and power, maybe you're comfortable with
the risk.  There's a reason that's not the default, though.

> > > 4. The 0.5ms RPC latency seems a bit high (ICMP pings 0.12ms) . Maybe this
> > > is worth investigating in the Linux kernel processing (how ?) ?
> > Yes, that'd be interesting to investigate.  With some kernel tracing I
> > think it should be possible to get high-resolution timings for the
> > processing of a single RPC call, which would make a good start.
> > 
> > It'd probably also interesting to start with the simplest possible RPC
> > and then work our way up and see when the RTT increases the most--e.g
> > does an RPC ping (an RPC with procedure 0, empty argument and reply)
> > already have a round-trip time closer to .5ms or .12ms?
> Any pointers to trying this ? I have a small amount of time as work is quiet
> at the moment.

Hm.  I wonder if testing over loopback would give interesting enough
results.  That might simplify testing even if it's not as realistic.
You could start by seeing if latency is still similar.

You could start by googling around for "ftrace"; I think lwn.net's
articles were pretty good introductions.

I don't do this very often and don't have good step-by-step
instructions.

I believe the simplest way to do it is using "trace-cmd" (which is
packaged for fedora in a package of the same name).  The man page looks
skimpy, but https://lwn.net/Articles/410200/ looks good.  Maybe run it
while just stat-ing a single file on an NFS partition as a start.

I don't know if that will result in too much data.  Figuring out how to
filter it may be tricky.  Tracing everything may be prohibitive.
Several processes are involved so you don't want to restrict by process.
Maybe restricting to functions in the nfsd and sunrpc modules would work,
with something like -l ':mod:nfsd' -l ':mod:sunrpc'.
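
A minimal invocation along those lines, assuming trace-cmd is installed
and run on the server while the client stats a single file:

    # record function entry in the nfsd and sunrpc modules for ten
    # seconds, then browse the result
    trace-cmd record -p function -l ':mod:nfsd' -l ':mod:sunrpc' sleep 10
    trace-cmd report | less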

> We have also found that SSD's or at least NAND flash has quite a few write
> latency peculiarities . We use eMMC NAND flash on a few embedded systems we
> have designed and the write latency patterns are a bit random and not well
> described/defined in datasheets etc. Difficult when you have an embedded
> system with small amounts of RAM doing real-time data capture !

That's one of the reasons you want the "enterprise" 

Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-02-06 Thread J. Bruce Fields
On Tue, Feb 06, 2018 at 06:49:28PM +, Terry Barnaby wrote:
> On 05/02/18 14:52, J. Bruce Fields wrote:
> > > Yet another poor NFSv3 performance issue. If I do a "ls -lR" of a certain
> > > NFS mounted directory over a slow link (NFS over Openvpn over FTTP
> > > 80/20Mbps), just after mounting the file system (default NFSv4 mount with
> > > async), it takes about 9 seconds. If I run the same "ls -lR" again, just
> > > after, it takes about 60 seconds.
> > A wireshark trace might help.
> > 
> > Also, is it possible some process is writing while this is happening?
> > 
> > --b.
> > 
> Ok, I have made some wireshark traces and put these at:
> 
> https://www.beam.ltd.uk/files/files//nfs/
> 
> There are other processing running obviously, but nothing that should be
> doing anything that should really affect this.
> 
> As a naive input, it looks like the client is using a cache but checking the
> update times of each file individually using GETATTR. As it is using a
> simple GETATTR per file in each directory the latency of these RPC calls is
> mounting up. I guess it would be possible to check the cache status of all
> files in a dir at once with one call that would allow this to be faster when
> a full readdir is in progress, like a "GETATTR_DIR " RPC call. The
> overhead of the extra data would probably not affect a single file check
> cache time as latency rather than amount of data is the killer.

Yeah, that's effectively what READDIR is--it can request attributes
along with the directory entries.  (In NFSv4--in NFSv3 there's a
separate call called READDIRPLUS that gets attributes.)

So the client needs some heuristics to decide when to do a lot of
GETATTRs and when to instead do READDIR.  Those heuristics have gotten
some tweaking over time.
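
If you want to see which way the heuristic went on a given run, the
per-mount operation counters are one place to look (sketch only; the
mountstats output format varies a little between nfs-utils versions):

    mountstats /mnt/nfs | grep -iE -A2 'GETATTR|READDIR'   # counts before
    ls -lR /mnt/nfs > /dev/null
    mountstats /mnt/nfs | grep -iE -A2 'GETATTR|READDIR'   # counts after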

What kernel version is your client on again?

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-02-05 Thread J. Bruce Fields
On Thu, Feb 01, 2018 at 08:29:49AM +, Terry Barnaby wrote:
> 1. Have an OPEN-SETATTR-WRITE RPC call all in one and a SETATTR-CLOSE call
> all in one. This would reduce the latency of a small file to 1ms rather than
> 3ms thus 66% faster. Would require the client to delay the OPEN/SETATTR
> until the first WRITE. Not sure how possible this is in the implementations.
> Maybe READ's could be improved as well but getting the OPEN through quick
> may be better in this case ?
> 
> 2. Could go further with an OPEN-SETATTR-WRITE-CLOSE RPC call. (0.5ms vs
> 3ms).

The protocol doesn't currently let us delay the OPEN like that,
unfortunately.

What we can do that might help: we can grant a write delegation in the
reply to the OPEN.  In theory that should allow the following operations
to be performed asynchronously, so the untar can immediately issue the
next OPEN without waiting.  (In practice I'm not sure what the current
client will do.)

I'm expecting to get to write delegations this year.

It probably wouldn't be hard to hack the server to return write
delegations even when that's not necessarily correct, just to get an
idea what kind of speedup is available here.

> 3. On sync/async modes personally I think it would be better for the client
> to request the mount in sync/async mode. The setting of sync on the server
> side would just enforce sync mode for all clients. If the server is in the
> default async mode clients can mount using sync or async as to their
> requirements. This seems to match normal VFS semantics and usage patterns
> better.

The client-side and server-side options are both named "sync", but they
aren't really related.  The server-side "async" export option causes the
server to lie to clients, telling them that data has reached disk even
when it hasn't.  This affects all clients, whether they mounted with
"sync" or "async".  It violates the NFS specs, so it is not the default.

I don't understand your proposal.  It sounds like you believe that
mounting on the client side with the "sync" option will make your data
safe even if the "async" option is set on the server side?
Unfortunately that's not how it works.

> 4. The 0.5ms RPC latency seems a bit high (ICMP pings 0.12ms) . Maybe this
> is worth investigating in the Linux kernel processing (how ?) ?

Yes, that'd be interesting to investigate.  With some kernel tracing I
think it should be possible to get high-resolution timings for the
processing of a single RPC call, which would make a good start.

It'd probably also be interesting to start with the simplest possible RPC
and then work our way up and see when the RTT increases the most--e.g
does an RPC ping (an RPC with procedure 0, empty argument and reply)
already have a round-trip time closer to .5ms or .12ms?
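
A very rough stand-in for that RPC ping using stock tools (server name
made up; each run also pays process startup, an rpcbind lookup, and
connection setup, so treat it as an upper bound and use a packet capture
for the real per-call number):

    # 100003 is the NFS program number; this sends a NULL (procedure 0) call
    time for i in $(seq 100); do rpcinfo -T tcp nfsserver 100003 3 >/dev/null; done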

> 5. The 20ms RPC latency I see in sync mode needs a look at on my system
> although async mode is fine for my usage. Maybe this ends up as 2 x 10ms
> drive seeks on ext4 and is thus expected.

Yes, this is why dedicated file servers have hardware designed to lower
that latency.

As long as you're exporting with "async" and don't care about data
safety across crashes or power outages, I guess you could go all the way
and mount your ext4 export with "nobarrier"; I *think* that will let the
system acknowledge writes as soon as they reach the disk's write cache.
I don't recommend that.

Just for fun I dug around a little for cheap options to get safe
low-latency storage:

For Intel you can cross-reference this list:


https://ark.intel.com/Search/FeatureFilter?productType=solidstatedrives=true

of SSD's with "enhanced power loss data protection" (EPLDP) with
shopping sites and I find e.g. this for US $121:

https://www.newegg.com/Product/Product.aspx?Item=9SIABVR66R5680

See the "device=" option in the ext4 man pages--you can use that to give
your existing ext4 filesystem an external journal on that device.  I
think you want "data=journal" as well; then writes should normally be
acknowledged once they hit that SSD's write cache, which should be quite
quick.
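
For the record the mechanics are short; a sketch with made-up device
names (the filesystem must be unmounted, the journal device needs the
same block size, and it deserves a backup first):

    mke2fs -O journal_dev /dev/sdb1          # turn the SSD partition into an external journal
    tune2fs -O ^has_journal /dev/vg0/export  # drop the existing internal journal
    tune2fs -J device=/dev/sdb1 /dev/vg0/export
    tune2fs -o journal_data /dev/vg0/export  # make data=journal the default mount option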

I was also curious whether there were PCI SSDs, but the cheapest Intel
SSD with EPLDP is the P4800X, at US $1600.

Intel Optane Memory is interesting as it starts at $70.  It doesn't have
EPLDP but latency of the underlying storage might be better even without
that?

I haven't figured out how to get a similar list for other brands.

Just searching for "SSD power loss protection" on newegg:

This also claims "power loss protection" at $53, but I can't find any
reviews:


https://www.newegg.com/Product/Product.aspx?Item=9SIA1K642V2376_re=ssd_power_loss_protection-_-9SIA1K642V2376-_-Product

Or this?:


https://www.newegg.com/Product/Product.aspx?Item=N82E16820156153_re=ssd_power_loss_protection-_-20-156-153-_-Product

This is another interesting discussion of the problem:


https://blogs.technet.microsoft.com/filecab/2016/11/18/dont-do-it-consumer-ssd/

--b.

Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-02-05 Thread J. Bruce Fields
On Mon, Feb 05, 2018 at 08:21:06AM +, Terry Barnaby wrote:
> On 01/02/18 08:29, Terry Barnaby wrote:
> > On 01/02/18 01:34, Jeremy Linton wrote:
> > > On 01/31/2018 09:49 AM, J. Bruce Fields wrote:
> > > > On Tue, Jan 30, 2018 at 01:52:49PM -0600, Jeremy Linton wrote:
> > > > > Have you tried this with a '-o nfsvers=3' during mount? Did that help?
> > > > > 
> > > > > I noticed a large decrease in my kernel build times across
> > > > > NFS/lan a while
> > > > > back after a machine/kernel/10g upgrade. After playing with
> > > > > mount/export
> > > > > options filesystem tuning/etc, I got to this point of timing
> > > > > a bunch of
> > > > > these operations vs the older machine, at which point I
> > > > > discovered that
> > > > > simply backing down to NFSv3 solved the problem.
> > > > > 
> > > > > AKA a nfsv3 server on a 10 year old 4 disk xfs RAID5 on 1Gb
> > > > > ethernet, was
> > > > > slower than a modern machine with a 8 disk xfs RAID5 on 10Gb
> > > > > on nfsv4. The
> > > > > effect was enough to change a kernel build from ~45 minutes
> > > > > down to less
> > > > > than 5.
> > > > 
> > Using NFSv3 in async mode is faster than NFSv4 in async mode (still
> > abysmal in sync mode).
> > 
> > NFSv3 async: sync; time (tar -xf linux-4.14.15.tar.gz -C /data2/tmp;
> > sync)
> > 
> > real    2m25.717s
> > user    0m8.739s
> > sys 0m13.362s
> > 
> > NFSv4 async: sync; time (tar -xf linux-4.14.15.tar.gz -C /data2/tmp;
> > sync)
> > 
> > real    3m33.032s
> > user    0m8.506s
> > sys 0m16.930s
> > 
> > NFSv3 async: wireshark trace
> > 
> > No. Time   Source Destination   Protocol Length Info
> >   18527 2.815884979    192.168.202.2 192.168.202.1 NFS 
> > 216    V3 CREATE Call (Reply In 18528), DH: 0x62f39428/dma.h Mode:
> > EXCLUSIVE
> >   18528 2.816362338    192.168.202.1 192.168.202.2 NFS 
> > 328    V3 CREATE Reply (Call In 18527)
> >   18529 2.816418841    192.168.202.2 192.168.202.1 NFS 
> > 224    V3 SETATTR Call (Reply In 18530), FH: 0x13678ba0
> >   18530 2.816871820    192.168.202.1 192.168.202.2 NFS 
> > 216    V3 SETATTR Reply (Call In 18529)
> >   18531 2.816966771    192.168.202.2 192.168.202.1 NFS 
> > 1148   V3 WRITE Call (Reply In 18532), FH: 0x13678ba0 Offset: 0 Len: 934
> > FILE_SYNC
> >   18532 2.817441291    192.168.202.1 192.168.202.2 NFS 
> > 208    V3 WRITE Reply (Call In 18531) Len: 934 FILE_SYNC
> >   18533 2.817495775    192.168.202.2 192.168.202.1 NFS 
> > 236    V3 SETATTR Call (Reply In 18534), FH: 0x13678ba0
> >   18534 2.817920346    192.168.202.1 192.168.202.2 NFS 
> > 216    V3 SETATTR Reply (Call In 18533)
> >   18535 2.818002910    192.168.202.2 192.168.202.1 NFS 
> > 216    V3 CREATE Call (Reply In 18536), DH: 0x62f39428/elf.h Mode:
> > EXCLUSIVE
> >   18536 2.818492126    192.168.202.1 192.168.202.2 NFS 
> > 328    V3 CREATE Reply (Call In 18535)
> > 
> > This is taking about 2ms for a small file write rather than 3ms for
> > NFSv4. There is an extra GETATTR and CLOSE RPC in NFSv4 accounting for
> > the difference.
> > 
> > So where I am:
> > 
> > 1. NFS in sync mode, at least on my two Fedora27 systems for my usage is
> > completely unusable. (sync: 2 hours, async: 3 minutes, localdisk: 13
> > seconds).
> > 
> > 2. NFS async mode is working, but the small writes are still very slow.
> > 
> > 3. NFS in async mode is 30% better with NFSv3 than NFSv4 when writing
> > small files due to the increased latency caused by NFSv4's two extra RPC
> > calls.
> > 
> > I really think that in 2018 we should be able to have better NFS
> > performance when writing many small files such as used in software
> > development. This would speed up any system that was using NFS with this
> > sort of workload dramatically and reduce power usage all for some
> > improvements in the NFS protocol.
> > 
> > I don't know the details of if this would work, or who is responsible
> > for NFS, but it would be good if possible to have some improvements
> > (NFSv4.3 ?). Maybe:
> > 
> > 1. Have an OPEN-SETATTR-WRITE RPC call all in one and a SETATTR-CLOSE
> > call all in one. This would reduce the latency 

Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-02-01 Thread J. Bruce Fields
On Wed, Jan 31, 2018 at 07:34:24PM -0600, Jeremy Linton wrote:
> On 01/31/2018 09:49 AM, J. Bruce Fields wrote:
> > In the kernel compile case there's probably also a lot of re-opening and
> > re-reading files too?  NFSv4 is chattier there too.  Read delegations
> > should help compensate, but we need to improve the heuristics that
> > decide when they're given out.
> 
> The main kernel include files get repeatedly hammered, despite them in
> theory being in cache, IIRC. So yes, if the concurrent (re)open path is even
> slightly slower its going to hurt a lot.
> 
> > All that aside I can't think what would explain that big a difference
> > (45 minutes vs. 5).  It might be interesting to figure out what
> > happened.
> 
> I had already spent more than my time allotted looking in the wrong
> direction at the filesystem/RAID (did turn off intellipark though) by the
> time I discovered the nfsv3/v4 perf delta. Its been sitting way down on the
> "things to look into" list for a long time now. I'm still using it as a NFS
> server so at some point I can take another look if the problem persists.

OK, understood.

Well, if you ever want to take another look at the v4 issue--I've been
meaning to rework the delegation heuristics.  Assuming you're on a
recent kernel, I could give you some experimental (but probably not too
risky) kernel patches if you didn't mind keeping notes on the results.

I'll probably get around to it eventually on my own, but it'd probably
happen sooner with a collaborator.

But the difference you saw was so drastic, there may have just been some
unrelated NFSv4 bug.

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-31 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 01:52:49PM -0600, Jeremy Linton wrote:
> Have you tried this with a '-o nfsvers=3' during mount? Did that help?
> 
> I noticed a large decrease in my kernel build times across NFS/lan a while
> back after a machine/kernel/10g upgrade. After playing with mount/export
> options filesystem tuning/etc, I got to this point of timing a bunch of
> these operations vs the older machine, at which point I discovered that
> simply backing down to NFSv3 solved the problem.
> 
> AKA a nfsv3 server on a 10 year old 4 disk xfs RAID5 on 1Gb ethernet, was
> slower than a modern machine with a 8 disk xfs RAID5 on 10Gb on nfsv4. The
> effect was enough to change a kernel build from ~45 minutes down to less
> than 5.

Did you mean "faster than"?

Definitely worth trying, though I wouldn't expect it to make that big a
difference in the untarring-a-kernel-tree case--I think the only RPC
avoided in the v3 case would be the CLOSE, and it should be one of the
faster ones.

In the kernel compile case there's probably also a lot of re-opening and
re-reading files too?  NFSv4 is chattier there too.  Read delegations
should help compensate, but we need to improve the heuristics that
decide when they're given out.

All that aside I can't think what would explain that big a difference
(45 minutes vs. 5).  It might be interesting to figure out what
happened.

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 10:30:04PM +, Terry Barnaby wrote:
> Also, on the 0.5ms. Is this effectively the 1ms system tick ie. the NFS
> processing is not processing based on the packet events (not pre-emptive)
> but on the next system tick ?
> 
> An ICMP ping is about 0.13ms (to and fro) between these systems. Although
> 0.5ms is relatively fast, I wouldn't have thought it should have to take
> 0.5ms for a minimal RPC even over TCPIP.

It'd be interesting to break down that latency.  I'm not sure where it's
coming from.  I doubt it has to do with the system tick.

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 04:31:58PM -0500, J. Bruce Fields wrote:
> On Tue, Jan 30, 2018 at 07:03:17PM +, Terry Barnaby wrote:
> > It looks like each RPC call takes about 0.5ms. Why do there need to be some
> > many RPC calls for this ? The OPEN call could set the attribs, no need for
> > the later GETATTR or SETATTR calls.
> 
> The first SETATTR (which sets ctime and mtime to server's time) seems
> unnecessary, maybe there's a client bug.
> 
> The second looks like tar's fault, strace shows it doing a utimensat()
> on each file.  I don't know why or if that's optional.
> 
> > Even the CLOSE could be integrated with the WRITE and taking this
> > further OPEN could do OPEN, SETATTR, and some WRITE all in one.
> 
> We'd probably need some new protocol to make it safe to return from the
> open systemcall before we've gotten the OPEN reply from the server.
> 
> Write delegations might save us from having to wait for the other
> operations.
> 
> Taking a look at my own setup, I see the same calls taking about 1ms.
> The drives can't do that, so I've got a problem somewhere too

Whoops, I totally forgot it was still set up with an external journal on
SSD:

# tune2fs -l /dev/mapper/export-export |grep '^Journal'
Journal UUID: dc356049-6e2f-4e74-b185-5357bee73a32
Journal device:   0x0803
Journal backup:   inode blocks
# blkid --uuid dc356049-6e2f-4e74-b185-5357bee73a32
/dev/sda3
# cat /sys/block/sda/device/model 
INTEL SSDSA2M080

So, most of the data is striped across a couple big hard drives, but the
journal is actually on a small partition on an SSD.

If I remember correctly, I initially tried this with an older intel SSD
and didn't get a performance improvement.  Then I replaced it with this
model which has the "Enhanced Power Loss Data Protection" feature, which
I believe means the write cache is durable, so it should be able to
safely acknowledge writes as soon as they reach the SSD's cache.

And weirdly I think I never actually got around to rerunning these tests
after I installed the new SSD.

Anyway, so that might explain the difference we're seeing.

I'm not sure how to find new SSDs with that feature, but it may be worth
considering as a cheap way to accelerate this kind of workload.  It can
be a very small SSD as it only needs to hold the journal.  Adding an
external journal is a quick operation (you don't have to recreate the
filesystem or anything).

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 07:03:17PM +, Terry Barnaby wrote:
> It looks like each RPC call takes about 0.5ms. Why do there need to be some
> many RPC calls for this ? The OPEN call could set the attribs, no need for
> the later GETATTR or SETATTR calls.

The first SETATTR (which sets ctime and mtime to server's time) seems
unnecessary, maybe there's a client bug.

The second looks like tar's fault, strace shows it doing a utimensat()
on each file.  I don't know why or if that's optional.
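
If that second SETATTR really is just tar restoring mtimes, GNU tar's
-m option (don't extract modification times) is an easy way to test the
theory on the same workload (paths made up):

    # -m skips the per-file utimensat(), so the trailing SETATTR should go away
    time tar -xf linux-4.14.15.tar.gz -m -C /mnt/nfs/tmp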

> Even the CLOSE could be integrated with the WRITE and taking this
> further OPEN could do OPEN, SETATTR, and some WRITE all in one.

We'd probably need some new protocol to make it safe to return from the
open system call before we've gotten the OPEN reply from the server.

Write delegations might save us from having to wait for the other
operations.

Taking a look at my own setup, I see the same calls taking about 1ms.
The drives can't do that, so I've got a problem somewhere too.

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 12:31:22PM -0500, J. Bruce Fields wrote:
> On Tue, Jan 30, 2018 at 04:49:41PM +, Terry Barnaby wrote:
> > I have just tried running the untar on our work systems. These are again
> > Fedora27 but newer hardware.
> > I set one of the servers NFS exports to just rw (removed the async option in
> > /etc/exports and ran exportfs -arv).
> > Remounted this NFS file system on a Fedora27 client and re-ran the test. I
> > have only waited 10mins but the overal network data rate is in the order of
> > 0.1 MBytes/sec so it looks like it will be a multiple hour job as at home.
> > So I have two completely separate systems with the same performance over
> > NFS.
> > With your NFS "sync" test are you sure you set the "sync" mode on the server
> > and re-exported the file systems ?
> 
> Not being a daredevil, I use "sync" by default:
> 
>   # exportfs -v /export 
> (rw,sync,wdelay,hide,no_subtree_check,sec=sys,insecure,no_root_squash,no_all_squash)
> 
> For the "async" case I changed the options and actually rebooted, yes.
> 
> The filesystem is:
> 
>   /dev/mapper/export-export on /export type ext4 
> (rw,relatime,seclabel,nodelalloc,stripe=32,data=journal) 
> 
> (I think data=journal is the only non-default, and I don't remember why
> I chose that.)

Hah, well, with data=ordered (the default) the same untar (with "sync"
export) took 15m38s.  So... that probably wasn't an accident.

It may be irresponsible for me to guess given the state of my ignorance
about ext4 journaling, but perhaps writing everything to the journal and
delaying writing it out to its real location as long as possible allows
some sort of tradeoff between bandwidth and seeks that helps with this
sync-heavy workload.

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 04:49:41PM +, Terry Barnaby wrote:
> I have just tried running the untar on our work systems. These are again
> Fedora27 but newer hardware.
> I set one of the servers NFS exports to just rw (removed the async option in
> /etc/exports and ran exportfs -arv).
> Remounted this NFS file system on a Fedora27 client and re-ran the test. I
> have only waited 10mins but the overal network data rate is in the order of
> 0.1 MBytes/sec so it looks like it will be a multiple hour job as at home.
> So I have two completely separate systems with the same performance over
> NFS.
> With your NFS "sync" test are you sure you set the "sync" mode on the server
> and re-exported the file systems ?

Not being a daredevil, I use "sync" by default:

# exportfs -v /export 
(rw,sync,wdelay,hide,no_subtree_check,sec=sys,insecure,no_root_squash,no_all_squash)

For the "async" case I changed the options and actually rebooted, yes.

The filesystem is:

/dev/mapper/export-export on /export type ext4 
(rw,relatime,seclabel,nodelalloc,stripe=32,data=journal) 

(I think data=journal is the only non-default, and I don't remember why
I chose that.)

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 03:29:41PM +, Terry Barnaby wrote:
> On 30/01/18 15:09, J. Bruce Fields wrote:
> > By comparison on my little home server (Fedora, ext4, a couple WD Black
> > 1TB drives), with sync, that untar takes is 7:44, about 8ms/file.
> Ok, that is far more reasonable, so something is up on my systems :)
> What speed do you get with the server export set to async ?

I tried just now and got 4m2s.

The drives probably still have to do a seek or two per create, the
difference now is that we don't have to wait for one create to start the
next one, so the drives can work in parallel.

So given that I'm striping across two drives, I *think* it makes sense
that I'm getting about double the performance with the async export
option.

But that doesn't explain the difference between async and local
performance (22s when I tried the same untar directly on the server, 25s
when I included a final sync in the timing).  And your numbers are a
complete mystery.

--b.

> > 
> > What's the disk configuration and what filesystem is this?
> Those tests above were to a single: SATA Western Digital Red 3TB, WDC
> WD30EFRX-68EUZN0 using ext4.
> Most of my tests have been to software RAID1 SATA disks, Western Digital Red
> 2TB on one server and Western Digital RE4 2TB WDC WD2003FYYS-02W0B1 on
> another quad core Xeon server all using ext4 and all having plenty of RAM.
> All on stock Fedora27 (both server and client) updated to date.
> 
> > 
> > > Is it really expected for NFS to be this bad these days with a reasonably
> > > typical operation and are there no other tuning parameters that can help  
> > > ?
> > It's expected that the performance of single-threaded file creates will
> > depend on latency, not bandwidth.
> > 
> > I believe high-performance servers use battery backed write caches with
> > storage behind them that can do lots of IOPS.
> > 
> > (One thing I've been curious about is whether you could get better
> > performance cheap on this kind of workload ext3/4 striped across a few
> > drives and an external journal on SSD.  But when I experimented with
> > that a few years ago I found synchronous write latency wasn't much
> > better.  I didn't investigate why not, maybe that's just the way SSDs
> > are.)
> > 
> > --b.
> 
> 


Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 10:00:44AM +0100, Reindl Harald wrote:
> On 30.01.2018 at 09:49, Terry Barnaby wrote:
> > Untar on server to its local disk:  13 seconds, effective data rate: 68
> > MBytes/s
> > 
> > Untar on server over NFSv4.2 with async on server:  3 minutes, effective
> > data rate: 4.9 MBytes/sec
> > 
> > Untar on server over NFSv4.2 without async on server:  2 hours 12
> > minutes, effective data rate: 115 kBytes/s !!
> > 
> > Is it really expected for NFS to be this bad these days with a
> > reasonably typical operation and are there no other tuning parameters
> > that can help  ?
> 
> no, we are running a virtual backup appliance (VMware Data Protection aka
> EMC Avamar) on vSphere 5.5 on a HP microserver running CentOS7 with a RAID10
> built of 4x4 TB consumer desktop disks and the limiting factor currently is
> the Gigabit Ethernet
> 
> 35 TB network IO per month, around 1 TB per day which happens between 1:00
> AM and 2:00 AM as well as garbage collection between 07:AM and 08:00 AM

Again, this is highly dependent on the workload.

Your backup appliance is probably mainly doing large sequential writes
to a small number of big files, and we aim for that sort of workload to
be limited only by available bandwidth, which is what you're seeing.

If you have a single-threaded process creating lots of small files,
you'll be limited by disk write latency long before you hit any
bandwidth limits.
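
The two workloads are easy to tell apart from the client with something
like this (paths made up):

    # large sequential write: limited by bandwidth
    dd if=/dev/zero of=/mnt/nfs/bigfile bs=1M count=1000 conv=fsync
    # many single-threaded small creates: limited by per-file round trips
    time (for i in $(seq 1000); do echo x > /mnt/nfs/dir/f$i; done; sync)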

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 08:49:27AM +, Terry Barnaby wrote:
> On 29/01/18 22:28, J. Bruce Fields wrote:
> > On Mon, Jan 29, 2018 at 08:37:50PM +, Terry Barnaby wrote:
> > > Ok, that's a shame unless NFSv4's write performance with small files/dirs
> > > is relatively ok which it isn't on my systems.
> > > Although async was "unsafe" this was not an issue in main standard
> > > scenarios such as an NFS mounted home directory only being used by one
> > > client.
> > > The async option also does not appear to work when using NFSv3. I guess it
> > > was removed from that protocol at some point as well ?
> > This isn't related to the NFS protocol version.
> > 
> > I think everybody's confusing the server-side "async" export option with
> > the client-side mount "async" option.  They're not really related.
> > 
> > The unsafe thing that speeds up file creates is the server-side "async"
> > option.  Sounds like you tried to use the client-side mount option
> > instead, which wouldn't do anything.
> > 
> > > What is the expected sort of write performance when un-taring, for 
> > > example,
> > > the linux kernel sources ? Is 2 MBytes/sec on average on a Gigabit link
> > > typical (3 mins to untar 4.14.15) or should it be better ?
> > It's not bandwidth that matters, it's latency.
> > 
> > The file create isn't allowed to return until the server has created the
> > file and the change has actually reached disk.
> > 
> > So an RPC has to reach the server, which has to wait for disk, and then
> > the client has to get the RPC reply.  Usually it's the disk latency that
> > dominates.
> > 
> > And also the final close after the new file is written can't return
> > until all the new file data has reached disk.
> > 
> > v4.14.15 has 61305 files:
> > 
> > $ git ls-tree -r  v4.14.15|wc -l
> > 61305
> > 
> > So time to create each file was about 3 minutes/61305 =~ 3ms.
> > 
> > So assuming two roundtrips per file, your disk latency is probably about
> > 1.5ms?
> > 
> > You can improve the storage latency somehow (e.g. with a battery-backed
> > write cache) or use more parallelism (has anyone ever tried to write a
> > parallel untar?).  Or you can cheat and set the async export option, and
> > then the server will no longer wait for disk before replying.  The
> > problem is that on server reboot/crash, the client's assumptions about
> > which operations succeeded may turn out to be wrong.
> > 
> > --b.
> 
> Many thanks for your reply.
> 
> Yes, I understand the above (latency and normally synchronous nature of
> NFS). I have async defined in the servers /etc/exports options. I have,
> later, also defined it on the client side as the async option on the server
> did not appear to be working and I wondered if with ongoing changes it had
> been moved there (would make some sense for the client to define it and pass
> this option over to the server as it knows, in most cases, if the bad
> aspects of async would be an issue to its usage in the situation in
> question).
> 
> It's a server with large disks, so SSD is not really an option. The use of
> async is ok for my usage (mainly /home mounted and users home files only in
> use by one client at a time etc etc.).

Note it's not concurrent access that will cause problems, it's server
crashes.  A UPS may reduce the risk a little.

> However I have just found that async is actually working! I just did not
> believe it was, due to the poor write performance. Without async on the
> server the performance is truly abysmal. The figures I get for untaring the
> kernel sources (4.14.15 895MBytes untared) using "rm -fr linux-4.14.15;
> sync; time (tar -xf linux-4.14.15.tar.gz -C /data2/tmp; sync)" are:
> 
> Untar on server to its local disk:  13 seconds, effective data rate: 68
> MBytes/s
> 
> Untar on server over NFSv4.2 with async on server:  3 minutes, effective
> data rate: 4.9 MBytes/sec
> 
> Untar on server over NFSv4.2 without async on server:  2 hours 12 minutes,
> effective data rate: 115 kBytes/s !!

2:12 is 7920 seconds, and you've got 61305 files to write, so that's
about 130ms/file.  That's more than I'd expect even if you're waiting
for a few seeks on each file create, so there may indeed be something
wrong.

By comparison, on my little home server (Fedora, ext4, a couple of WD Black
1TB drives), with sync, that untar takes 7:44, about 8ms/file.

What's the disk configuration and what filesystem is this?
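
One quick data point, if you want to rule the disk in or out, is to time
some small synchronous writes directly on the server's local filesystem,
e.g. something like (path borrowed from your example, adjust to taste):

  $ dd if=/dev/zero of=/data2/tmp/ddtest bs=4k count=1000 oflag=dsync

The elapsed time divided by 1000 is a rough per-commit latency, which is
more or less what each NFS file create has to wait for.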

> Is it really expected for NFS to be this bad these days with a reasonabl

Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-29 Thread J. Bruce Fields
On Mon, Jan 29, 2018 at 08:37:50PM +, Terry Barnaby wrote:
> Ok, that's a shame unless NFSv4's write performance with small files/dirs
> is relatively ok which it isn't on my systems.
> Although async was "unsafe" this was not an issue in main standard
> scenarios such as an NFS mounted home directory only being used by one
> client.
> The async option also does not appear to work when using NFSv3. I guess it
> was removed from that protocol at some point as well ?

This isn't related to the NFS protocol version.

I think everybody's confusing the server-side "async" export option with
the client-side mount "async" option.  They're not really related.

The unsafe thing that speeds up file creates is the server-side "async"
option.  Sounds like you tried to use the client-side mount option
instead, which wouldn't do anything.

> What is the expected sort of write performance when un-taring, for example,
> the linux kernel sources ? Is 2 MBytes/sec on average on a Gigabit link
> typical (3 mins to untar 4.14.15) or should it be better ?

It's not bandwidth that matters, it's latency.

The file create isn't allowed to return until the server has created the
file and the change has actually reached disk.

So an RPC has to reach the server, which has to wait for disk, and then
the client has to get the RPC reply.  Usually it's the disk latency that
dominates.

And also the final close after the new file is written can't return
until all the new file data has reached disk.

v4.14.15 has 61305 files:

$ git ls-tree -r  v4.14.15|wc -l
61305

So time to create each file was about 3 minutes/61305 =~ 3ms.

So assuming two roundtrips per file, your disk latency is probably about
1.5ms?

You can improve the storage latency somehow (e.g. with a battery-backed
write cache) or use more parallelism (has anyone ever tried to write a
parallel untar?).  Or you can cheat and set the async export option, and
then the server will no longer wait for disk before replying.  The
problem is that on server reboot/crash, the client's assumptions about
which operations succeeded may turn out to be wrong.
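
(In exports syntax that's just the "async" option on the export, e.g.
something along these lines, where the path and client spec are only
placeholders:

  /export   192.168.1.0/24(rw,async,no_subtree_check)

followed by "exportfs -ra" to apply it; the default these days is
"sync".)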

--b.
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: nfs-utils-2.1.1 Changes Everything!

2017-01-17 Thread J. Bruce Fields
Unless somebody's created an /etc/nfs.conf file, we should assume
they're still using /etc/sysconfig/nfs.

That shouldn't be difficult, and I don't see why we can't do that
indefinitely.

The goal should definitely be not to break any working setups on
upgrade.
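
(For anyone who hasn't looked at the new format yet: nfs.conf is an
ini-style file with one section per daemon, so a converted config would
look roughly like this -- the values here are made up, nfs.conf(5) has
the real list of keys:

  [nfsd]
  threads=16
  vers3=y

A compatibility shim would presumably just read the old shell-style
variables out of /etc/sysconfig/nfs and pass the equivalent settings to
the daemons.)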

--b.

On Mon, Jan 16, 2017 at 03:11:38PM -0500, Steve Dickson wrote:
> Hello,
> 
> The latest nfs-utils release drastically changes how the NFS
> servers are configured, for the good IMHO...
> 
> All daemon  configuration now goes through /etc/nfs.conf.
> See nfs.conf(5) for details.
> 
> The command line interfaces in the systemd services files
> have been removed, which means all your current configurations
> will break, because the variables in /etc/sysconfig/nfs are
> no longer used.
> 
> Again, I think this is a move in the right direction and I know
> you might find this surprising 8-) but I really don't want to
> break all the current server configurations. So I'm trying to
> figure out how to do this with the least amount of impact.
> 
> Here is what I see as the options
> 
> 1) Upgrade rawhide w/out a backward compatible patch
> (since it is so early in the release cycle)
> Upgrade f25 with a backwards compatible patch
> 
> 2) Upgrade rawhide and f25 with the backward compatible
> patch... but we have to ween ourselves of the command
> line interface at some point...
> 
> 3) Do nothing and push everything into f27, which is the least
> favorite option.
> 
> I'm leaning toward option 1... but I'm asking... so I'm listening. :-)
> 
> Also, how do I document something like this?
> 
> tia,
> 
> steved.  
> ___
> devel mailing list -- devel@lists.fedoraproject.org
> To unsubscribe send an email to devel-le...@lists.fedoraproject.org
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: state of advanced audio in Fedora

2016-11-30 Thread J. Bruce Fields
On Wed, Nov 30, 2016 at 01:43:38PM +0100, Guido Aulisi wrote:
> IMHO Jack is a must for professional audio, because of its low latency
> and connection facility.
> You should run it with realtime scheduler and high priority, and in
> F24 there was a problem with systemd not configuring correct limits.
> I don't know if this problem has been corrected now.

Probably not.  This bug?:

https://bugzilla.redhat.com/show_bug.cgi?id=1364332
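
(For reference, the realtime limits themselves are normally granted via
pam_limits, e.g. a drop-in under /etc/security/limits.d/ along these
lines, where the group name and the numbers are only an example:

  @audio   -   rtprio    95
  @audio   -   memlock   unlimited

Though if the bug is that systemd sessions ignore those limits, that
alone won't help.)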

> 2016-11-29 21:19 GMT+01:00 Przemek Klosowski <przemek.klosow...@nist.gov>:
> > On 11/29/2016 12:58 PM, J. Bruce Fields wrote:
> >
> > I thought most of those music apps required jack to run--are you running
> > jack or not?  If you are, then it's probably just the usual
> > jack/pulseaudio conflicts, which Fedora seems set up to fix, but for
> > some reason the fix doesn't work; I filed a bug here:
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=1390043
> >
> > I actually stopped using jackd because I felt it introduced more 'hidden
> > state' problems, and I was hobbling along with Pulse---I don't think jackd
> > is an absolute requirement. I don't want to mess around with setup every
> > time before running a music app, so maybe the way to go is to just run jackd
> > all the time?

Obviously the above bug would need fixing.

I doubt that's the only thing preventing jackd from running all the
time.  I think there may have been some cost in CPU time?

--b.
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: state of advanced audio in Fedora

2016-11-29 Thread J. Bruce Fields
On Tue, Nov 29, 2016 at 12:42:06PM -0500, Przemek Klosowski wrote:
> I was always impressed with the amount and quality of audio software in
> Linux. When it all works, and is driven by someone who knows what they're
> doing, it's essentially a high-end DAW production environment. If it all
> worked smoothly, I am sure it could be one of Linux and Fedora showcases.
> 
> I am a musical dilettante, so my attempts have been perhaps haphazard, but I
> had a mixed luck: I was able to get everything to run, but the setup seemed
> very brittle. I was not very successful debugging the problems because the
> audio chain is pretty complex, what with the raw devices, ALSA, PulseAudio
> and Jackd having overlapping roles, and lots of obsolete and conflicting
> information on the web. I decided to write to the development list in the
> hope of starting a technical discussion that would result in either
> technical and/or configuration fixes, or at least some documentation, that I
> could perhaps help develop.
> 
> I have been using the following programs:
> 
> play/aplay  simple .wav players
> espeak   speech synthesizer
> Qsynth/fluidsynth   .midi players/synthesizers
> audacity sound editor
> pianobooster  keyboard play-along teaching tool
> Rosegarden w/lilypond music editor
> Hydrogen   drum synthesizer
> Yoshimi  synthesizer
> rakarrack   guitar effect processor
> 
> As I said, I was able to use all of them successfully, but I had problems
> integrating them and keeping them up and running in the long term. I wonder
> if I am doing something wrong, or are there technical issues that I'm
> running into, currently on Fedora 25 but also on previous versions.
> 
> Out of the box, simple sound obviously works: I can aplay a .wav
> file, espeak works, and some of the synthesizers like audacity and hydrogen
> simply work without any preconditions.
> Other audio programs require starting Qsynth first: that seems to be the
> case for Rosegarden, Yoshimi and pianobooster. What is puzzling is that
> there seems to be a lot of hidden state: after running Qsynth for a while,
> the simple sound (aplay, espeak) tend to no longer work: they hang without
> producing any sound, even though Qsynth is no longer running. I tried
> stracing them, but they just go into nanosleep() busy loops on internal file
> descriptors, so it's not clear what exactly they're blocking on. I ran into
> one glitch where qsynth somehow inserted a .wav file as a soundfont in the
> configuration file, which prevented it from working subsequently (I had to
> delete the ~/.config/rncbc.org/Qsynth.conf file).
> 
> I am planning to log some bugzilla reports, but I am not sure against what
> subsystems: is it ALSA, or PulseAudio, or Gnome/pavucontrol, or Qsynth.
> Specifically, I'd like to address the following issues:
> 
> - simple sound (aplay, espeak) failing after running fancy synthesized sound
> apps (Qsynth): I'd need guidance what to test to find the hidden state that
> causes that.
> 
> - fancy sound apps (Rosegarden/pianobooster) silently failing without the
> synthesizer (Qsynth) running first. I'd like to discuss what could be done
> to at least produce some error messages directing users to set up
> synthesizers first, or maybe to automatically start the required
> synthesizers.

I thought most of those music apps required jack to run--are you running
jack or not?  If you are, then it's probably just the usual
jack/pulseaudio conflicts, which Fedora seems set up to fix, but for
some reason the fix doesn't work; I filed a bug here:

https://bugzilla.redhat.com/show_bug.cgi?id=1390043

Agreed that the Fedora music stuff seems very promising but a bit
frustrating to get set up in practice.  It'd be nice to get some of
these problems sorted out.

I'm an amateur musician more interested in live performance--I use my
laptop as a sound module for organ/piano/synth sounds played from a midi
controller keyboard.

--b.
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: installing RPMs on NFS filesystems

2016-11-28 Thread J. Bruce Fields
On Mon, Nov 28, 2016 at 05:04:07PM -0500, J. Bruce Fields wrote:
> On Wed, Nov 23, 2016 at 08:28:12PM -0500, Stephen John Smoogen wrote:
> > On 23 November 2016 at 19:36, Samuel Sieb <sam...@sieb.net> wrote:
> > > On 11/23/2016 07:39 AM, Chuck Anderson wrote:
> > >>
> > >> Is it supposed to be supported to install RPMs onto NFS filesystems?
> > >> Apparently NFSv3 doesn't support capabilities, so I'm not sure what to
> > >> do with this bug which happens because cap_net_raw is used for the
> > >> fping binaries:
> > >>
> > > I would expect that isn't supported, although I'm somewhat surprised that it
> > > fails instead of just warning.  That's a very unusual setup, having the root
> > > filesystem on NFS.
> > 
> > I doubt that installing on NFS was supported after we began using
> > capabilities on files for security. While installing on NFS was in
> > vogue in the 80's and 90's for thin clients and similar environments,
> > I think it has fallen to the wayside for current development. [In the
> > EPEL environment space I do expect it is still in use for root but
> > probably only in EL6 land versus EL7]
> 
> This isn't the first complaint we've gotten, though admittedly it may
> have been a while.  (And I'm having no luck finding the bugs in
> bugzilla.)
> 
> We could add support for capabilities to the NFS protocol, but that
> could take a while.
> 
> It'd be nice if rpm installs could fall back on something else instead
> of failing, but maybe it's complicated to do that safely.

Oh, here's one:

https://bugzilla.redhat.com/show_bug.cgi?id=648654

--b.
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: installing RPMs on NFS filesystems

2016-11-28 Thread J. Bruce Fields
On Wed, Nov 23, 2016 at 08:28:12PM -0500, Stephen John Smoogen wrote:
> On 23 November 2016 at 19:36, Samuel Sieb  wrote:
> > On 11/23/2016 07:39 AM, Chuck Anderson wrote:
> >>
> >> Is it supposed to be supported to install RPMs onto NFS filesystems?
> >> Apparently NFSv3 doesn't support capabilities, so I'm not sure what to
> >> do with this bug which happens because cap_net_raw is used for the
> >> fping binaries:
> >>
> > I would expect that isn't supported, although I'm somewhat surprised that it
> > fails instead of just warning.  That's a very unusual setup, having the root
> > filesystem on NFS.
> 
> I doubt that installing on NFS was supported after we began using
> capabilities on files for security. While installing on NFS was in
> vogue in the 80's and 90's for thin clients and similar environments,
> I think it has fallen to the wayside for current development. [In the
> EPEL environment space I do expect it is still in use for root but
> probably only in EL6 land versus EL7]

This isn't the first complaint we've gotten, though admittedly it may
have been a while.  (And I'm having no luck finding the bugs in
bugzilla.)

We could add support for capabilities to the NFS protocol, but that
could take a while.

It'd be nice if rpm installs could fall back on something else instead
of failing, but maybe it's complicated to do that safely.
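
(For anyone wanting to reproduce this outside rpm: file capabilities
are stored in the security.capability xattr, so the failing step should
be reproducible with plain setcap on a file sitting on the NFS mount --
roughly, with made-up paths:

  $ cp /usr/sbin/fping /path/to/nfs/mount/
  $ sudo setcap cap_net_raw+ep /path/to/nfs/mount/fping

I'd expect the setcap to fail with something like "Operation not
supported", since the client has nowhere to store that xattr.)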

--b.
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: Self Introduction: Guido Aulisi

2016-09-12 Thread J. Bruce Fields
On Thu, Sep 08, 2016 at 01:37:49AM +0200, Guido Aulisi wrote:
> Il giorno mer, 07/09/2016 alle 00.54 +0200, Germano Massullo ha
> scritto:
> > Hi Guido, welcome! Have you already chosen a sponsor? [1]
> > Have a nice day
> 
> Hi,
> I am looking for a sponsor and I have already made a review request
> https://bugzilla.redhat.com/show_bug.cgi?id=1373641

Oh, neat.  I was just looking around to see if there's a usable open
source clonewheel, and all I found was Bristol's emulation which seemed
very glitchy.  Anyway, apologies for the digression--I probably don't
have enough Fedora experience to be useful as a sponsor.

--b.

> If someone can sponsor me, I'd be happy :-)
> 
> I forgot my gpg key fingerprint:
> E53C 205C 7A5D BC0C 4F58  1573 375A 9989 0261 5187
> 
> Bye.



--
devel mailing list
devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/devel@lists.fedoraproject.org


Re: RFC: Fixing the "nobody" user?

2016-07-20 Thread J. Bruce Fields
On Mon, Jul 18, 2016 at 03:59:09PM +0200, Ondřej Vašík wrote:
> Lennart Poettering píše v Po 18. 07. 2016 v 14:39 +0200:
> > Heya!
> > 
> > I'd like to start a discussion regarding the "nobody" user on Fedora,
> > and propose that we change its definition sooner or later. I am not
> > proposing a feature according to the feature process for this yet, but
> > my hope is that these discussions will lead to one eventually.
> 
> Thanks for starting the discussion on Fedora devel - as there already
> was https://bugzilla.redhat.com/show_bug.cgi?id=1350526 - where it ended
> up closed NOTABUG - as the nfs-utils maintainer is concerned about such
> change ( https://bugzilla.redhat.com/show_bug.cgi?id=1350526#c3 ) - and
> most of the commenters (moved across several components) recommended a "not a
> bug" resolution.

That was me.  (I'm not the nfs-utils maintainer, though.)

I honestly didn't think about it much beyond: there might be some risk
to the change, so it needs some justification.

So, trying to think it through some more from an NFS point of view:

For authentication, rpc uses either numeric ID's or kerberos names.

For referring to principals in file owners, groups, or ACLs, NFSv2/v3
uses numeric IDs, while NFSv4 may use string names instead in some cases.

So in the NFSv4 case you could end up with a read-modify-write of an ACL
resulting in on-disk references to uid 99 turning into 65534's.

That can happen in the local case too if, say, you're using a
commandline utility that uses names, and you map 99 and 65534 both to
"nobody" and "nobody" back to 65534.

NFS users can already see that sort of behavior in mixed-distro
environments.
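
(The knob on the NFS side, for what it's worth, is the idmapper's
fallback in /etc/idmapd.conf -- something like the following, which I
believe are the defaults:

  [Mapping]
  Nobody-User = nobody
  Nobody-Group = nobody

so whatever uid/gid the name "nobody" resolves to locally is what
unmappable principals turn into.)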

Anyway, I don't know.  It'd certainly be nice to see the current
situation cleaned up.  I don't feel like I understand what might break
on transition.

--b.

> 
> I agree that with containers and user namespaces, an overflow uid named
> "nfsnobody" confuses users. But is there really some good and
> non-disruptive solution? E.g. the overflow id could be changed to something
> different than (uint16_t) -2, but is that the right way?
> 
> > Most distributions (in particular Debian/Ubuntu-based ones) map the
> > user "nobody" to UID 65534. I think we should change Fedora to do the
> > same. Background:
> > 
> > On Linux two UIDs are special: that's UID 0 for root, which is the
> > privileged user we all know. And then there's UID 65534
> > (i.e. (uint16_t) -2), which is less well known. The Linux kernel calls
> > it the "overflow" UID. It has four purposes:
> > 
> > 1. The kernel maps UIDs > 65535 to it when some subsystem/API/fs
> >only supports 16bit UIDs, but a 32bit UID is passed to it.
> > 
> > 2. it's used by the kernel's user namespacing as a the internal UID
> >that external UIDs are mapped to that don't have any local mapping.
> > 
> > 3. It's used by NFS for all user IDs that cannot be mapped locally if
> >UID mapping is enabled.
> > 
> > 4. Once upon a time some system daemons chose to run as the "nobody"
> >user, instead of a proper system user of their own. But this is
> >universally frowned upon, and isn't done on any current systems
> >afaics. In fact, to my knowledge Fedora even prohibits this
> >explicitly in its policy (?).
> > 
> > The uses 1-3 are relevant today, use 4 is clearly obsolete
> > afaics. Uses 1-3 can be subsumed pretty nicely as "the UID something
> > that cannot be mapped properly is mapped to".
> > 
> > On Fedora, we currently have a "nobody" user that is defined to UID
> > 99. It's defined unconditionally like this. To my knowledge there's no
> > actual use of this user at all in Fedora however. The UID 65534
> > carries no name by default on Fedora, but as soon as you install the
> > NFS utils it gets mapped to the "nfsnobody" user name, misleadingly
> > indicating that it would be used only by NFS even though it's a much
> > more general concept. I figure the NFS guys adopted the name
> > "nfsnobody" for this, simply because "nobody" was already taken by UID
> > 99 on Fedora, unlike on other distributions.
> 
> It is really a historical reason. I don't think there was common
> agreement at the time when 99 for nobody was selected (at least several
> different approaches were in place these days).
> 
> > In the context of user namespacing the UID 65534 appears a lot more
> > often as owner of various files. For example, if you turn on user
> > namespacing in typical container managers you'll notice that a ton of
> > files in /proc will then be owned by this user. Very confusingly, in a
> > container that includes the NFS utils all those files actually show up
> > as "nfsnobody"-owned now, even though there's no relation to NFS at all
> > for them.
> > 
> > I'd like to propose that we clean this up, and just make Fedora work
> > like all other distributions. After all the reason of having this
> > special UID in the first place is to sidestep mapping problems between
> > different UID "realms". Hence I think it would be wise to at least
> 

Re: 22: nfs = long boot delay

2014-08-14 Thread J. Bruce Fields
On Tue, Aug 12, 2014 at 12:58:13AM -0400, Felix Miata wrote:
> Why when nothing is automounting nfs either as client or server does boot
> not proceed to completion without a 2+ minute pause while nfs-server fails
> to start?

Sounds like a bug.  Is the failure to start expected?

> Exportfs never (on my installations) needed fsid= in exports
> before, why the complaint about its absence now, and with output of
> showmount -e showing no signs of having failed?

fsid=0 shouldn't be required.  Probably file a bug with the details.
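
(fsid=0 was only ever needed to mark the NFSv4 pseudo-root explicitly,
the old-style setup being roughly this, with made-up paths:

  /exports        *(ro,fsid=0,crossmnt)
  /exports/home   *(rw,sync)

Current nfs-utils builds the v4 pseudo-root automatically, so a plain
export line should just work.)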

--b.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct

Re: Summary/Minutes from today's FESCO meeting (2011-12-12 at 1800 UTC)

2011-12-13 Thread J. Bruce Fields
On Tue, Dec 13, 2011 at 02:12:30PM -0500, Stephen Gallagher wrote:
> On Tue, 2011-12-13 at 14:06 -0500, J. Bruce Fields wrote:
> > On Mon, Dec 12, 2011 at 01:22:06PM -0800, Toshio Kuratomi wrote:
> > > To some extent I agree with both sgallagh's sentiment and the logical
> > > conclusion you're drawing.  However, I think the lookaside cache is
> > > a necessary optimization/compromise to the ideal of putting everything into
> > > version control, though.  Current technology would make it prohibitive in
> > > terms of packager time (and for some packages, space on developer's
> > > machines) to put tarballs into git as the cloned repository would then
> > > contain every single new tarball the package ever had.
> > 
> > I'd be curious to know how expensive that actually was.
> > 
> > I'd think delta-compression could make it quite reasonable for the
> > typical project.  (Exceptions including things like games with lots of
> > binary data in each release.)
> 
> Nearly all packages are released as a compressed tarball. So any change
> in the package is likely to result in a delta of the binary image that
> is close enough to 100% as makes no difference.

You'd want to uncompress before checking in.  Or even expand before
checking in--git diff and git grep would then be a lot more useful.

You'd no longer have a copy of exactly what you downloaded, but someone
with a copy of the download could mechanically verify that you'd
imported the same content.
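
(Mechanically as in, roughly -- names and tag made up:

  $ mkdir -p /tmp/upstream /tmp/imported
  $ tar -xf foo-1.2.tar.gz -C /tmp/upstream
  $ git archive foo-1.2-import | tar -xf - -C /tmp/imported
  $ diff -r /tmp/upstream/foo-1.2 /tmp/imported

modulo whatever metadata tar preserves that git doesn't.)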

You could still keep the .tar.gz in the lookaside cache, but you
wouldn't normally need to go look at it.

--b.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel