Closed #258.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/rpm-software-management/rpm/issues/258#event-1789678681
Rpm-maint mailing list
Rpm-maint@lists.rpm
#187 got sorted out in the end, in a much simpler manner that doesn't pull in
new dependencies. We can revisit the issue if an actual need for an event loop
arises, but closing for now. Thanks for the effort though.
--
@cgwalters when moving hundreds of megabytes through a finite resource, there
will be a measurable effect on the cache. That's true for any large I/O
operation, including rpm-ostree: have you run mincore to see how many pages are
left in cache after syncfs? Typically syncfs/sync guarantee the write to disk
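For reference, the mincore check suggested above could look something like this (a minimal Linux sketch; `resident_pages` is a hypothetical helper for illustration, not part of rpm or rpm-ostree):

```c
/* Count how many pages of a file are still resident in the page cache
 * (Linux).  Hypothetical helper illustrating the mincore() check. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static long resident_pages(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return -1; }
    if (st.st_size == 0)    { close(fd); return 0; }

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);
    if (map == MAP_FAILED) return -1;

    long psz = sysconf(_SC_PAGESIZE);
    size_t npages = (st.st_size + psz - 1) / psz;
    unsigned char *vec = malloc(npages);

    long resident = -1;
    if (vec && mincore(map, st.st_size, vec) == 0) {
        resident = 0;
        for (size_t i = 0; i < npages; i++)
            if (vec[i] & 1)          /* low bit set => page is in core */
                resident++;
    }
    free(vec);
    munmap(map, st.st_size);
    return resident;
}
```

Running this before and after a syncfs would show how much of the just-installed data is still occupying cache.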
> all file operations are durable (in the sense that all installed files are
> known written to disk).
Right now many real-world systems rely on files written by scripts (for
example, the bootloader configuration, `/etc/ld.so.cache`, etc.); librpm as
designed today doesn't have visibility into
FWIW, here is the rather pleasant and complete documentation for libeio:
http://pod.tst.eu/http://cvs.schmorp.de/libeio/eio.pod
--
The libeio event loop adds RPM_CHECK_LIB to handle --with-libeio here:
https://github.com/rpm5/rpm/commit/133c60b4036fd97af89390aea6ae94034239d545
(Presumably you will do your own pkg-config AutoFU to handle --with-libeio)
The patch in comments above has been mildly reworked to remove
The generic boilerplate part of this patch (see the rpm+fsync.gz patch in
comment #2) is now committed at
https://github.com/rpm5/rpm/commit/d0d04ae67410ec1113b1ac8dab1c78a1fc2c003f
You will need to enable/disable manually with this change in lib/fsm.c
```
diff --git a/lib/fsm.c b/lib/fsm.c
```
@Conan-Kudo I do have something to add to RPM... filesystem durability and
preventing cache blowout at modest cost. Not everything is a "patch".
--
@n3npq I think he thought you had something ready to merge into RPM already. :)
--
@ignatenkobrain: send a PR ... for what?
This work is an improvement on PR #197 which hasn't moved in 6+ months.
And -- as stated -- the patches here are "rude-and-crude" and necessary to
justify adding an event loop to RPM. There is nothing here that should be
added to RPM ...
And one more try at speeding up fsync-on-close by adding more threads that can
block:
```
eio_set_min_parallel(512);  /* keep up to 512 backend threads alive (default 4) */
eio_set_max_idle(512);      /* let idle backend threads linger instead of exiting */
```
With 512 (the default is 4) backend threads that can block, the kernel install
time is
```
$ sudo /usr/bin/time ./rpm -U --root=/var/tmp
```
Some minor tinkering on the fdatasync -> fsync pipeline with code like this:
```
pthread_yield();                      /* give the backend threads a chance to finish */
if (eio_npending()) rc = eio_poll();  /* only poll when completed requests are queued */
```
to minimize the number of calls to eio_poll() (which always processes at least
one request) gives these timings
```
$ sudo /usr/bin/time ./rpm
```
There is one other alternative: libcoro (also by Marc Lehmann). If the only
problem I was trying to solve was making fdatasync+fsync asynchronous, then
libcoro would be the best choice. My patch essentially reduces libeio to
co-routine functionality. libcoro does not require a scheduler, nor threads
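To illustrate the co-routine handoff libcoro provides, here is a sketch using POSIX ucontext(3) as a stand-in (this is not libcoro's actual coro_create/coro_transfer API, just the same idea without a scheduler or threads):

```c
/* Co-routine handoff sketch: the main context and one coroutine
 * transfer control back and forth explicitly, with no scheduler. */
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static int steps;                     /* counts work done in the coroutine */
static char co_stack[64 * 1024];      /* stack for the coroutine */

static void co_body(void)
{
    steps++;                          /* step 1: runs on first transfer */
    swapcontext(&co_ctx, &main_ctx);  /* yield back to main */
    steps++;                          /* step 2: runs when resumed */
    swapcontext(&co_ctx, &main_ctx);  /* yield back again */
}

void run_demo(void)
{
    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp = co_stack;
    co_ctx.uc_stack.ss_size = sizeof(co_stack);
    co_ctx.uc_link = &main_ctx;
    makecontext(&co_ctx, co_body, 0);

    swapcontext(&main_ctx, &co_ctx);  /* transfer in: runs step 1 */
    swapcontext(&main_ctx, &co_ctx);  /* transfer in: runs step 2 */
}
```

In the fdatasync+fsync case, the coroutine would sit between issuing the sync and checking its completion, which is the slice of libeio my patch actually uses.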
@n3npq all this is great, but you have to send a Pull Request.
--
The only sane way to add an event loop within RPM is through embedding, at
which point maintenance is with RPM, not with actively maintained external
libraries. libeio is the only reasonable embeddable choice.
Extending libeio with additional system calls is trickier when it is an
external library
Thanks for the reasoning. 👍
--
The main reason why libevent was not used:
* Marc Lehmann chose to move on from libevent -> libev+libeio and strip out
complex buffer handling. Small, targeted, and embeddable are all KISS;
automating buffer allocation handling is hardly useful or necessary to rpmio.
The main reason why lib
libeio is fine, I just wanted to know if you had considered the alternatives
and why this one was picked. It looks like libeio is still actively maintained,
which is good.
--
Yes, there is libevent, then libev+libeio, and now libuv, all tuned towards
slightly different goals.
libeio is narrowly targeted at asynchronous system calls, can be embedded, and
is generally lean-and-mean and well tested.
But go shopping for alternatives if you wish: I chose libeio after loo
@n3npq I thought nodejs used [libuv](http://libuv.org/) nowadays? AFAIK, they
originally used [libeio](http://software.schmorp.de/pkg/libeio.html) +
[libev](http://software.schmorp.de/pkg/libev.html), but created libuv later on.
Do you have any particular insight into the differences (pluses/minuses)?
The attached patch does a rude-and-crude libeio pipe-driven event loop for rpmio.
The patch adds an 'N' (mnemonic: O_NONBLOCK) flag to Fopen() flags.
The libeio loop is lazily started, and atexit(3) is used to wait for all events
to be completed.
The operations of fdatasync/fadvise/fsync are pip
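The shape of that lazily started, wait-at-exit loop can be sketched in plain POSIX threads (this stands in for the libeio pipe loop described above; `enqueue_fdatasync` and the single worker thread are hypothetical simplifications, not the patch itself):

```c
/* Lazily started background-sync loop: a worker thread drains a queue
 * of fds, fdatasync()s and closes each, and atexit(3) waits until all
 * queued work has completed.  No backpressure handling in this sketch. */
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

#define QMAX 128

static int q[QMAX];
static int qhead, qtail, qdone;   /* qdone counts completed syncs */
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;
static pthread_once_t  qonce = PTHREAD_ONCE_INIT;

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (qhead == qtail)
            pthread_cond_wait(&qcond, &qlock);
        int fd = q[qhead % QMAX];
        qhead++;
        pthread_mutex_unlock(&qlock);

        fdatasync(fd);            /* the blocking call, off the main thread */
        close(fd);

        pthread_mutex_lock(&qlock);
        qdone++;
        pthread_cond_broadcast(&qcond);
        pthread_mutex_unlock(&qlock);
    }
    return NULL;
}

static void drain(void)           /* registered with atexit(3) */
{
    pthread_mutex_lock(&qlock);
    while (qdone != qtail)
        pthread_cond_wait(&qcond, &qlock);
    pthread_mutex_unlock(&qlock);
}

static void start_worker(void)    /* runs once, on first enqueue */
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_detach(t);
    atexit(drain);
}

/* Hand off an fd; ownership passes to the worker, which closes it. */
void enqueue_fdatasync(int fd)
{
    pthread_once(&qonce, start_worker);
    pthread_mutex_lock(&qlock);
    q[qtail % QMAX] = fd;
    qtail++;
    pthread_cond_broadcast(&qcond);
    pthread_mutex_unlock(&qlock);
}
```

The real patch routes these requests through libeio's thread pool and its pipe-driven result queue instead of a hand-rolled worker, but the lifecycle (lazy start, atexit drain) is the same.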
The attached patch adds allocate/fadvise/fdatasync/fsync functions in rpmio
with the calls on I/O intensive install code paths.
In addition, two flags have been added to Fopen(): 'S' to do fsync-on-close,
and '?' to do per-fd debugging.
Simple benchmarks for a kernel package installation (reche
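As a rough illustration of the 'S' (fsync-on-close) flag, stripped of rpmio specifics (`xopen`/`xclose` and `struct xfile` are hypothetical stand-ins for Fopen/Fclose, not librpm's API):

```c
/* fsync-on-close sketch: a custom 'S' flag in the mode string requests
 * that the file be fsync()ed to stable storage before it is closed. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

struct xfile {
    FILE *fp;
    int sync_on_close;   /* set when the mode string contains 'S' */
};

struct xfile xopen(const char *path, const char *mode)
{
    struct xfile xf = { NULL, strchr(mode, 'S') != NULL };

    /* strip the custom 'S' flag before handing the mode to fopen() */
    char plain[8];
    size_t n = 0;
    for (const char *p = mode; *p && n < sizeof(plain) - 1; p++)
        if (*p != 'S')
            plain[n++] = *p;
    plain[n] = '\0';

    xf.fp = fopen(path, plain);
    return xf;
}

int xclose(struct xfile *xf)
{
    int rc = 0;
    if (!xf->fp) return -1;
    if (xf->sync_on_close) {
        fflush(xf->fp);               /* push stdio buffers to the kernel */
        rc = fsync(fileno(xf->fp));   /* then force them to stable storage */
    }
    return fclose(xf->fp) ? -1 : rc;
}
```

The per-fd '?' debugging flag would hang off the same structure; the point is that the caller opts into durability per file rather than syncing the world.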
Projects that use autoconf are expected to detect build prerequisites and
libraries.
This usually leads to a set of ad hoc, de facto m4 macros that deal with
various details like API and path incompatibilities, much of which is
platform-, OS-, and distro-dependent.
Various standard tools (like