Closed #258.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/rpm-software-management/rpm/issues/258#event-1789678681
Rpm-maint mailing list
#187 got sorted out in the end, in a much simpler manner that doesn't pull in
new dependencies. We can revisit the issue if an actual need for an event loop
arises, but closing for now. Thanks for the effort though.
@cgwalters When moving hundreds of megabytes through a finite resource, there
will be a measurable effect on the cache. That's true for any large I/O
operation, including rpm-ostree: have you run mincore to see how many pages are
left in cache after syncfs? Typically syncfs/sync guarantee write to
FWIW, here is the rather pleasant and complete documentation for libeio:
http://pod.tst.eu/http://cvs.schmorp.de/libeio/eio.pod
The libeio event loop adds RPM_CHECK_LIB to handle --with-libeio here:
https://github.com/rpm5/rpm/commit/133c60b4036fd97af89390aea6ae94034239d545
(Presumably you will do your own pkg-config AutoFU to handle --with-libeio)
The patch in comments above has been mildly reworked to remove
@Conan-Kudo I do have something to add to RPM... filesystem durability and
preventing cache blowout at modest cost. Not everything is a "patch".
@n3npq I think he thought you had something ready to merge into RPM already. :)
@ignatenkobrain: send a PR ... for what?
This work is an improvement on PR #197 which hasn't moved in 6+ months.
And -- as stated -- the patches here are "rude-and-crude", needed only to
justify adding an event loop to RPM. There is nothing here that should be
added to RPM.
And one more try at speeding up fsync-on-close by adding more threads that can
block:
```
eio_set_min_parallel(512);
eio_set_max_idle(512);
```
With 512 (the default is 4) backend threads that can block, the kernel install
time is
```
$ sudo /usr/bin/time ./rpm -U
```
Some minor tinkering on the fdatasync -> fsync pipeline with code like this:
```
pthread_yield();
if (eio_npending()) rc = eio_poll();
```
to minimize the number of calls to eio_poll() (which always processes at least
one request) gives these timings
```
$ sudo /usr/bin/time
```
There is one other alternative: libcoro (also by Marc Lehmann). If the only
problem I was trying to solve was making fdatasync+fsync asynchronous, then
libcoro would be the best choice. My patch essentially reduces libeio to
co-routine functionality. libcoro does not require a scheduler, nor
The only sane way to add an event loop within RPM is through embedding, at
which point maintenance is with RPM, not through actively maintained external
libraries. libeio is the only reasonable embeddable choice.
Extending external libeio with additional system calls is trickier with
external
Thanks for the reasoning.
The main reason why libevent was not used:
* Marc Lehmann chose to move on from libevent to libev+libeio and strip out
complex buffer handling. Small, targeted, and embeddable are all KISS;
automating buffer-allocation handling is hardly useful or necessary for rpmio.
The
libeio is fine, I just wanted to know if you had considered the alternatives
and why this one was picked. It looks like libeio is still actively maintained,
which is good.
Yes, there is libevent, then libev+libeio, and now libuv, all tuned towards
slightly different goals.
libeio is narrowly targeted at asynchronous system calls, can be embedded, and
is generally lean-and-mean and well tested.
But go shopping for alternatives if you wish: I chose libeio after
@n3npq I thought nodejs used [libuv](http://libuv.org/) nowadays? AFAIK, they
originally used [libeio](http://software.schmorp.de/pkg/libeio.html) +
[libev](http://software.schmorp.de/pkg/libev.html), but created libuv later on.
Do you have any particular insight into the differences
The attached patch does a rude-and-crude libeio pipe-driven event loop in rpmio.
The patch adds an 'N' (mnemonic: O_NONBLOCK) flag to Fopen() flags.
The libeio loop is lazily started, and atexit(3) is used to wait for all events
to be completed.
The operations of fdatasync/fadvise/fsync are
The attached patch adds allocate/fadvise/fdatasync/fsync functions in rpmio
with the calls on I/O intensive install code paths.
In addition, two flags have been added to Fopen(): 'S' to do fsync-on-close,
and '?' to do per-fd debugging.
Simple benchmarks for a kernel package installation