Here's the error I run into on my desktop:
Problem: problem with installed package eclipse-jgit-5.4.0-4.fc30.noarch
- eclipse-jgit-5.4.0-4.fc30.noarch does not belong to a distupgrade repository
- nothing provides jgit = 5.3.0-5.fc31 needed by
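For what it's worth, the usual way past this kind of conflict (a sketch, not a guaranteed fix) is to let dnf drop the stale F30 package during the upgrade, since it apparently has no matching build in the F31 repos:

```shell
# Option 1: remove the conflicting package before upgrading
sudo dnf remove eclipse-jgit

# Option 2: let the upgrade transaction erase whatever blocks it
sudo dnf system-upgrade download --releasever=31 --allowerasing
```

--allowerasing permits dnf to remove installed packages to resolve the transaction, so review the proposed removals before confirming.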
Does zswap actually keep the data compressed when the DRAM-based swap is full,
and it writes to the spill-over non-volatile swap device?
I'm not an expert on this at all, but my understanding is that zswap must
decompress the data before it writes it to the backing swap. But
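If anyone wants to poke at this on a running system, the parameters and counters zswap exposes make the writeback behaviour visible (standard sysfs/debugfs paths; the debugfs side needs root):

```shell
# Runtime parameters: enabled, compressor, zpool, pool size cap
grep -r . /sys/module/zswap/parameters/

# Counters (debugfs): written_back_pages counts pages pushed out to the
# backing swap device -- i.e. the decompress-and-write path in question
sudo grep -r . /sys/kernel/debug/zswap/
```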
> On Sat, Jan 25, 2020 at 11:07 PM Bill Chatfield via devel
> True. Nobody cares about Java packages in fedora, not even Red Hat
> employees. If you look at the members of the Java SIG, a lot of them
> were (or still are) Red Hat employees. For example, even JBoss /
> WildFly (a pretty big
I have some concerns about this proposal. Given that this change was
essentially unanimously rejected, this line stood out to me:
> * As soon as feature is accepted by the community, there will be a
> smooth process to update baseline in the main Fedora, as all packages
> will be already
I think this would be a really big improvement for Workstation and other
desktop spins; the handling of out-of-memory situations has been a consistent
pain point on Linux. However, may I ask why EarlyOOM was chosen over
something like NoHang? I am a bit concerned that EarlyOOM's
Yep, I just ran "dnf info kernel" and then right after that "dnf changelog
kernel", in both cases dnf spent over 20 seconds syncing. I haven't seen other
package managers require this much network traffic, and I wonder if a lot of it
could be avoided.
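For a one-off query you can at least skip the sync entirely (a sketch using standard dnf options):

```shell
# Use the cached metadata instead of re-syncing
dnf -C info kernel            # -C / --cacheonly

# To refresh less often in general, raise metadata_expire under [main]
# in /etc/dnf/dnf.conf, e.g.:
#   metadata_expire=24h
```

This doesn't reduce the size of the metadata itself; it only controls how often dnf decides the cache is stale.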
> It's super annoying for me to post, because benchmarks drive me crazy,
> and yet here I am posting one - this is almost like self flagellation
> to paste this...
> None of these benchmarks are representative of a generic
> The main argument is that for typical and varied workloads in Fedora,
> mostly on consumer hardware, we should use mq-deadline scheduler
> rather than either none or bfq.
> It may be true most folks with NVMe won't see anything bad with
> the latter, but considering they're a broad variety of workloads I
> think it's misleading to call them server workloads as if that's one
> particular type of thing, or not applicable to a desktop under IO
> pressure. Why? (a) they're using consumer storage devices (b) these
> are real workloads
> Given that Hans' proposal introduced systemd/grub2/GNOME upstream changes,
> it begs the question whether now would not be the time to stop supporting
> booting in legacy BIOS mode and move to UEFI-only supported boot, which
> has been available on any common Intel-based x86 platform since at least
> It doesn't use compression so not relevant to the cited statement?
Well, the paper compares ext2, ext4, xfs, f2fs, and btrfs in terms of IO
amplification and states:
"In fact, in all our experiments, btrfs was an outlier, producing the highest
read, write, and space amplification."
> (Yes, that means applications need to start being conscious of what fs
> they are being run on, or at least the fedora configuration needs to do
> that check for them)
Right, and it's concerning to me that Fedora is committing to btrfs by default
before important applications have become more
> What changes?
I don't see a reason for this level of snark; in your next paragraph you
describe the changes I'm talking about.
> Discussion is happening upstream to determine the best location for
> such optimization to happen.
I'm glad work is happening upstream and I hope it goes
I forgot to mention that bfq appears to be the only IO scheduler that supports
cgroups-v2 IO controllers. Perhaps I am wrong, but I wasn't able to find
documentation indicating that mq-deadline is cgroup-aware; at the very least,
it's not documented in the official deadline tunables section
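For anyone who wants to compare schedulers locally, the active one is visible and switchable per device through sysfs (the device name sda below is just an example):

```shell
# The scheduler in [brackets] is the active one for each device
grep . /sys/block/*/queue/scheduler

# Switch one device to bfq at runtime
echo bfq | sudo tee /sys/block/sda/queue/scheduler

# To make it persistent, a udev rule along these lines works:
#   /etc/udev/rules.d/60-ioscheduler.rules
#   ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="bfq"
```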
> I'm not convinced it's the domain of an IO scheduler to be involved,
> rather than it being explicit UX intended by the desktop environment.
> Seems to me the desktop environment is in a better position to know
> what users expect.
Well, wouldn't bfq just be enforcing the bandwidth weights, if
> I'd like to propose a few guidelines:
> 1. If btrfs causes noticeable performance issues for users, that's not
> OK. It's understood and expected that it might be slower at many
> workloads, but if the difference is large enough that users notice a
> significant regression in desktop
> The context of that is: the default when the user does not specify. If
> the user chooses 'raid1' in the installer, they get 'raid1' for both
> data and metadata.
This does not seem to be the case, and from what I can tell Garry experienced
this problem as well.
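For anyone reproducing this, the profiles actually in use are easy to check, and a balance can convert them after the fact (a sketch; /mnt stands in for wherever the filesystem is mounted):

```shell
# Shows e.g. 'Data, RAID0' vs 'Metadata, RAID1' on the affected install
sudo btrfs filesystem df /mnt

# Convert data (and metadata, if needed) to raid1 after the fact
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```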
I tested this in a VM with two
> For btrfs, it's raid0 data, raid1 metadata.
Surely this is considered a serious installer bug? Users who choose an option
called "raid1" with btrfs would, and should, expect to have data redundancy.
Even if this bug has existed for a long time, it doesn't make it any less
> You didn't make a mistake. Pretty sure it's a blocker bug too so I've
> proposed it as such.
Thank you for doing that, I appreciate it.
devel mailing list -- firstname.lastname@example.org
To unsubscribe send an email to
> I don't want to give the impression that nodatacow (chattr +C) is what
> apps should be doing "to be fast on btrfs". It might be that they can
> reduce their fsync footprint. Or the problem might be lock contention
> related, and an easy optimization for a heavy metadata writing apps
> would be
> On Sat, Jun 27, 2020 at 7:32 PM Garry T. Williams wrote:
> Just a PSA: btrfs raid1 does not have a concept of automatic degraded
> mount in the face of a device failure. By default systemd will not
> even attempt to mount it if devices are missing.
Is this hopefully seen by upstream as a
> I'm not sure where it is in the priority list.
> If you're doing a preemptive replace, there's no degraded state. Even
> if there's a crash during this replace, all devices are present, so
> it'll boot normally. The difficulty is if a drive has died, and
> there's a reboot before a replace
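Concretely, recovering from a dead device in a two-device raid1 looks something like this (a sketch; device names are examples):

```shell
# Mount the surviving device explicitly in degraded mode
sudo mount -o degraded /dev/sdb1 /mnt

# Add a replacement, then drop the missing device
sudo btrfs device add /dev/sdc1 /mnt
sudo btrfs device remove missing /mnt

# Chunks written while degraded may be 'single'; the soft filter converts
# only chunks that aren't already raid1
sudo btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt
```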
> BIOS-based systems make up a minuscule minority of the current market.
> Pretending otherwise is delusional, and delusions are no basis for
> technical decisions.
> - Solomon
In terms of physical x86 systems, you are right that UEFI is the overwhelming
majority. But as stated elsewhere
> spin is a blocker edition, so its default installation must pass our
> release criteria.
Right, but have there been any investigations to see if those release criteria
are fulfilled on Plasma + Wayland? If it doesn't currently meet
Has anyone compiled a (non-exhaustive) list of known issues that are specific
to KDE Plasma with Wayland? Are there currently any issues that would block
Wayland from becoming the default if they aren't resolved in time for F34?
> On Tue, Sep 15, 2020 at 7:57 PM Kevin Kofler wrote:
> I hate to break it to you, but this problem is not just in
> filesystems, it's in basically everything in the kernel. And we've had
> variations of problems like this for years (endianness, page size,
> pointer size, single bit vs