The attached patch adds a rude-and-crude, libeio pipe-driven event loop to rpmio.

The patch adds an 'N' (mnemonic: O_NONBLOCK) flag to the Fopen() mode flags.
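For illustration, a minimal sketch of what opening a file with the new flag might look like, assuming rpmio's usual "<flags>.<iotype>" Fopen() mode-string convention (the exact placement of 'N' in the mode string is an assumption; consult the patch for the real syntax):

```c
#include <rpm/rpmio.h>

/* Hypothetical usage of the new 'N' (O_NONBLOCK mnemonic) Fopen() flag.
 * The "wN.ufdio" spelling is an assumption based on rpmio's existing
 * "<flags>.<iotype>" mode-string convention; see the patch for the
 * actual syntax. */
static FD_t open_for_async_write(const char *path)
{
    FD_t fd = Fopen(path, "wN.ufdio");
    if (fd == NULL || Ferror(fd))
        fd = Fopen(path, "w.ufdio");    /* fall back to the blocking path */
    return fd;
}
```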

The libeio loop is started lazily, and atexit(3) is used to wait for all queued
events to complete.
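
A minimal sketch of that lazy start and atexit(3) drain, assuming the classic self-pipe integration described in libeio's documentation (the function names here are hypothetical, not taken from the patch):

```c
#include <poll.h>
#include <stdlib.h>
#include <unistd.h>
#include "eio.h"

static int eio_pipe[2];

/* libeio calls want_poll() when results are ready; the byte written here
 * wakes up whoever is waiting on the read end of the pipe. done_poll()
 * consumes the byte once eio_poll() has handled everything outstanding. */
static void want_poll(void) { char c = 0; (void) write(eio_pipe[1], &c, 1); }
static void done_poll(void) { char c;     (void) read(eio_pipe[0], &c, 1); }

/* atexit(3) handler: block until every queued request has completed. */
static void drain_eio(void)
{
    while (eio_nreqs()) {
        struct pollfd pfd = { .fd = eio_pipe[0], .events = POLLIN };
        (void) poll(&pfd, 1, -1);   /* wait for a want_poll() wakeup */
        while (eio_poll() == -1)
            ;                       /* -1: not all requests handled yet */
    }
}

/* Lazily start the event loop on first use (a real implementation
 * would need locking to be thread-safe). */
static void lazy_eio_init(void)
{
    static int initialized;
    if (!initialized) {
        (void) pipe(eio_pipe);
        eio_init(want_poll, done_poll);
        atexit(drain_eio);
        initialized = 1;
    }
}
```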

The fdatasync/fadvise/fsync operations are pipelined (i.e. fdatasync and fsync
are each queued as separate events).
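
For example, the flush-on-close path might chain the stages as separate libeio requests, each callback queuing the next (a hedged sketch building on the init sketch above; the patch's actual ordering, and how it issues the fadvise, may differ):

```c
#include "eio.h"

static int close_cb(eio_req *req)
{
    return 0;                       /* file is durable and closed */
}

static int fsync_cb(eio_req *req)
{
    int fd = (int)(long) req->data;
    eio_close(fd, EIO_PRI_DEFAULT, close_cb, NULL);
    return 0;
}

static int fdatasync_cb(eio_req *req)
{
    int fd = (int)(long) req->data;
    eio_fsync(fd, EIO_PRI_DEFAULT, fsync_cb, (void *)(long) fd);
    return 0;
}

/* Fire-and-forget: queue the first stage and return immediately,
 * leaving the main thread free to keep unpacking files. */
static void async_flush_on_close(int fd)
{
    lazy_eio_init();                /* from the sketch above */
    eio_fdatasync(fd, EIO_PRI_DEFAULT, fdatasync_cb, (void *)(long) fd);
}
```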

Basically libeio is being used as a co-routine monitor to move the costly 
operations off the main thread, in order to do a wall-clock benchmark:

```
$ sudo /usr/bin/time ./rpm -U --root=/var/tmp/xxx --nodeps --force kernel-3.10.0-514.26.2.el7.x86_64.rpm
...
5.67user 0.88system 0:39.75elapsed 16%CPU (0avgtext+0avgdata 32420maxresident)k
0inputs+299992outputs (0major+13681minor)pagefaults 0swaps
```

So this is ~5x slower than the no-fsync behavior currently implemented in RPM.

That is starting to approach a reasonable execution cost for the simultaneous 
benefits of:
* all file operations are durable (in the sense that all installed files are 
known to be written to disk);
* all dirty pages in the buffer cache are invalidated (so there is no I/O cache 
blowout from performing an RPM upgrade; see the posix_fadvise(2) snippet below).
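
The second benefit comes from the fadvise step. For reference, a sketch of the standard call that releases a file's cached pages once its data is durable (the helper name here is hypothetical):

```c
#include <fcntl.h>

/* After fsync has made the data durable, drop the file's pages from the
 * page cache so a large upgrade does not evict everything else;
 * len == 0 means "from offset to end of file". */
static void drop_page_cache(int fd)
{
    (void) posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
}
```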

The underlying goal here was to assess whether a libeio event loop (which is 
far more general than the fdatasync/fsync operations used here) has benefits if 
implemented in RPM. The rude-and-crude implementation here is already 2x faster 
than the synchronous fsync-on-close implementation, and is not much more 
complex. JMHO, YMMV, everyone's does.

[rpm+fsync+libeio.patch.gz](https://github.com/rpm-software-management/rpm/files/1202849/rpm.fsync.libeio.patch.gz)

The next step(s) will be to do a proper RPM+LIBEIO implementation so that 
libeio can be used everywhere as needed, not just on the peculiar 
fire-and-forget fsync-on-close file write path used here.

