Re: fuse2 vs fuse3

2021-05-04 Thread Steven Whitehouse

Hi,

On 28/04/2021 16:27, Neal Gompa wrote:

On Wed, Apr 28, 2021 at 11:27 AM Richard W.M. Jones  wrote:

Is there any preference for fuse3 over fuse2?  I moved a package over
to fuse3 yesterday[1].  The API[2] seems a bit cleaner, but it's no
big deal.  However it's not really feasible to support both.  Since I
maintain several other fuse-using packages in Fedora, we have to pick
one or the other.

I notice also that almost no other packages have moved to fuse3:

   $ sudo dnf repoquery --whatrequires 'libfuse.so.2()(64bit)' --qf '%{name}' | sort -u | wc -l
   62

   $ sudo dnf repoquery --whatrequires 'libfuse3.so.3()(64bit)' --qf '%{name}' | sort -u | wc -l
   7

Are we going to deprecate fuse2 at some point?  Encourage upstreams to
upgrade?

One other minor point: BSDs can emulate fuse (which is essentially a
Linux-only API), but their emulation seems to be of fuse2 only.

Rich.

[1] 
https://gitlab.com/nbdkit/libnbd/-/commit/c74c7d7f01975e708b510e518895088fc61b5623
[2] https://github.com/libfuse/libfuse/releases/tag/fuse-3.0.0


I think we eventually want everything moving to fuse3, but that's a
*slow* process...



Miklos, is that your eventual plan?

Steve.



Re: Meaning of Size Directories

2021-03-16 Thread Steven Whitehouse

Hi,

On 16/03/2021 16:51, John Reiser wrote:

On 3/16/21, David Howells wrote:

John Reiser  wrote:


See the manual page "man 2 getdents".


Um, which bit?  I don't see anything obvious to that end.


On that manual page:
=
The system call getdents() reads several linux_dirent structures from
the directory referred to by the open file descriptor fd into the
buffer pointed to by dirp.

   [snip]

On success, the number of bytes read is returned.
=

So the return value is related to the size of the directory; the sum of
the values returned before End-Of-File should be quite close to the
.st_size of the directory. If a program is walking through the
directory, reading all the entries via getdents64(), then .st_size of
the directory is the only thing known in advance about the total size.
(Of course anything involving a directory can depend on concurrent
create/delete/rename of files within the directory.)


If you are looking for a hint on how large a buffer to allocate, then 
st_blksize is generally used as the hint for directory reads; 
otherwise, a fixed-size buffer of a page or two will do. The st_size 
field is meaningless for directories and you'll get all kinds of odd 
results depending on the filesystem in use, so it is best avoided.
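
For illustration, a rough, untested sketch of this on Linux: st_blksize
as the buffer-size hint, calling getdents64() directly via syscall():

   /* list a directory using getdents64(), buffer sized from st_blksize */
   #define _GNU_SOURCE
   #include <dirent.h>
   #include <fcntl.h>
   #include <stdio.h>
   #include <stdlib.h>
   #include <sys/stat.h>
   #include <sys/syscall.h>
   #include <unistd.h>

   int main(int argc, char **argv)
   {
       int fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY);
       if (fd < 0) { perror("open"); return 1; }

       struct stat st;
       if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

       /* st_blksize is the fs's preferred I/O size; fall back to a page */
       size_t bufsz = st.st_blksize > 0 ? (size_t)st.st_blksize : 4096;
       char *buf = malloc(bufsz);
       if (!buf) { perror("malloc"); return 1; }

       for (;;) {
           long n = syscall(SYS_getdents64, fd, buf, bufsz);
           if (n < 0)  { perror("getdents64"); return 1; }
           if (n == 0) break;                 /* end of directory */
           for (long off = 0; off < n; ) {
               struct dirent64 *d = (struct dirent64 *)(buf + off);
               printf("%s\n", d->d_name);
               off += d->d_reclen;
           }
       }
       free(buf);
       close(fd);
       return 0;
   }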


Steve.




Re: booting successfully with read-only file system

2020-07-03 Thread Steven Whitehouse

Hi,

On 03/07/2020 14:18, Colin Walters wrote:

On Thu, Jul 2, 2020, at 11:53 AM, Zbigniew Jędrzejewski-Szmek wrote:


It would be great if we could fairly reliably boot with a read-only
root file system,

Eh, just mount a tmpfs for /var, and an overlayfs for /etc (backed by a tmpfs).

That's what we do for Fedora CoreOS based live images, see
https://github.com/coreos/fedora-coreos-config/blob/testing-devel/overlay.d/05core/usr/lib/dracut/modules.d/20live/live-generator
It works.

(Which we recently switched to involve a loopback-mounted xfs on tmpfs because 
SELinux, but that is mostly only necessary because we want to support Ignition 
which does system provisioning in the initramfs, which is not true on 
non-Ignition based systems)
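
For anyone unfamiliar with the pattern Colin describes, a rough,
untested sketch of a tmpfs-backed overlay for /etc (paths made up;
CoreOS does this from the initramfs rather than by hand):

   mount -t tmpfs tmpfs /run/etc-rw
   mkdir -p /run/etc-rw/upper /run/etc-rw/work
   mount -t overlay overlay \
       -o lowerdir=/etc,upperdir=/run/etc-rw/upper,workdir=/run/etc-rw/work \
       /etc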


If there is additional support required in overlayfs, then please do 
file a bug and request it,


Steve.




Re: Fedora 33 System-Wide Change proposal: Make btrfs the default file system for desktop variants

2020-07-01 Thread Steven Whitehouse

Hi,

On 01/07/2020 12:09, Zbigniew Jędrzejewski-Szmek wrote:

On Wed, Jul 01, 2020 at 11:28:10AM +0100, Steven Whitehouse wrote:

Hi,

On 01/07/2020 07:54, Zbigniew Jędrzejewski-Szmek wrote:

On Mon, Jun 29, 2020 at 03:15:23PM -0400, Solomon Peachy wrote:

So yes, I think an explicit "let's all test btrfs (as anaconda
configures it) before we make it default" period is warranted.

Perhaps one can argue that Fedora has already been doing that for the
past two years (since 2018-or-later-btrfs is what everyone with positive
results appears to be talking about), but it's still not clear that
those deployments utilize the same feature set as Fedora's defaults, and
how broad the hardware sample is.

Making btrfs opt-in for F33 and (assuming the results go well) opt-out for F34
could be a good option. I know technically it is already opt-in, but it's not
very visible or popular. We could make the btrfs option more prominent and
ask people to pick it if they are ready to handle potential fallout.

Normally we just switch the default or we don't, without half measures. But
the fs is important enough and complicated enough to be extra careful about
any transitions.

Zbyszek

Indeed, it is an important point, and taking care is very important
when dealing with other people's data, which is in effect what we
are discussing here.

When we looked at btrfs support in RHEL, we took quite a long time
over it. In fact I'm not quite sure how long, since the process had
started before I was involved, but it was not a decision that was
made quickly, and a great deal of thought went into it. It was
difficult to get concrete information about the stability aspects at
the time. Just like the discussions that have taken place on this
thread, there was a lot of anecdotal evidence, but that is not
always a good indicator. Since time has passed since then, and there
is now more evidence, this part of the process should be easier.
That said, to get a meaningful comparison, one would ideally want
to compare on the basis of user populations of similar size and
technical skill level, and look not just at the overall number of
bugs reported, but also at the rate at which those bugs are being reported.

Yeah. I have no doubt that the decision was made carefully back then.
That said, time has passed, and btrfs has evolved and our use cases
have evolved too, so a fresh look is good.

We have https://fedoraproject.org/wiki/Changes/DNF_Better_Counting,
maybe this could be used to collect some statistics about the fs type
too.


Yes, and the questions that Fedora is trying to answer are 
different too, so I don't think our analysis for RHEL is directly 
applicable here. The method that we went through may, in general 
terms, be helpful though.




It is often tricky to be sure of the root cause of bugs - just
because a filesystem reports an error doesn't mean that it is at
fault, it might be a hardware problem, or an issue with volume
management. Figuring out where the real problem lies is often very
time consuming work. Without that work though, the raw numbers of
bugs reported can be very misleading.
It would be worth taking that step here, and
asking each of the spins what features they would most like to see
from the storage/fs stack. Comparing filesystems in the abstract is
a difficult task; it is much easier against a context. I know that
some of the issues have already been discussed in this thread, but
if someone were to gather up a list of requirements from those
messages, that would help to direct further discussion,

Actually that part has been answered pretty comprehensively. The split
between / and /home is hurting users and we completely sidestep it
with this change. The change page lists a bunch of other benefits,
incl. better integration with the new resource allocation mechanisms
we have with cgroups2. So in a way this is a follow-up to the
cgroupsv2-by-default change in F31. Snapshots and subvolumes also give
additional powers to systemd-nspawn and other tools. I'd say that the
huge potential of btrfs is clear. It's the possibility of the loss of
stability that is my (and others') worry and the thing which is hard
to gauge.

Zbyszek


If the / and /home split is the main issue, then dm-thin might be an 
alternative solution, and we should check whether some of the issues 
listed on the change page have been addressed. I'm copying in Jon for 
additional comment on that. Are the btrfs benefits listed on the 
change page in priority order?


File system resize is mentioned there, but pretty much all local 
filesystems support grow, and no use cases are listed for that 
benefit. Shrink is trickier, and can easily result in poor file 
layouts, particularly if there are repeated grow/shrink operations, not 
to mention potential complications with NFS if the fs is exported. So is 
there some specific use

Re: Fedora 33 System-Wide Change proposal: Make btrfs the default file system for desktop variants

2020-07-01 Thread Steven Whitehouse

Hi,

On 01/07/2020 07:54, Zbigniew Jędrzejewski-Szmek wrote:

On Mon, Jun 29, 2020 at 03:15:23PM -0400, Solomon Peachy wrote:

So yes, I think an explicit "let's all test btrfs (as anaconda
configures it) before we make it default" period is warranted.

Perhaps one can argue that Fedora has already been doing that for the
past two years (since 2018-or-later-btrfs is what everyone with positive
results appears to be talking about), but it's still not clear that
those deployments utilize the same feature set as Fedora's defaults, and
how broad the hardware sample is.

Making btrfs opt-in for F33 and (assuming the results go well) opt-out for F34
could be a good option. I know technically it is already opt-in, but it's not
very visible or popular. We could make the btrfs option more prominent and
ask people to pick it if they are ready to handle potential fallout.

Normally we just switch the default or we don't, without half measures. But
the fs is important enough and complicated enough to be extra careful about
any transitions.

Zbyszek
Indeed, it is an important point, and taking care is very important when 
dealing with other people's data, which is in effect what we are 
discussing here.


When we looked at btrfs support in RHEL, we took quite a long time over 
it. In fact I'm not quite sure how long, since the process had started 
before I was involved, but it was not a decision that was made quickly, 
and a great deal of thought went into it. It was difficult to get 
concrete information about the stability aspects at the time. Just like 
the discussions that have taken place on this thread, there was a lot of 
anecdotal evidence, but that is not always a good indicator. Since time 
has passed since then, and there is now more evidence, this part of the 
process should be easier. That said, to get a meaningful comparison, 
one would ideally want to compare on the basis of user populations of 
similar size and technical skill level, and look not just at the overall 
number of bugs reported, but also at the rate at which those bugs are 
being reported.


It is often tricky to be sure of the root cause of bugs - just because a 
filesystem reports an error doesn't mean that it is at fault, it might 
be a hardware problem, or an issue with volume management. Figuring out 
where the real problem lies is often very time consuming work. Without 
that work though, the raw numbers of bugs reported can be very misleading.


It is also worth noting that when we made the decision for RHEL it was 
not just a question of stability, although that is obviously an 
important consideration. We looked at a wide range of factors, including 
the overall design and features. We had reached out to a number of 
potential users and asked them what features they wanted from their 
filesystems and tried to understand where we had gaps in our existing 
offerings. It would be worth taking that step here, and asking each of 
the spins what features they would most like to see from the 
storage/fs stack. Comparing filesystems in the abstract is a difficult 
task; it is much easier against a context. I know that some of the 
issues have already been discussed in this thread, but if someone were 
to gather up a list of requirements from those messages, that would 
help to direct further discussion,


Steve.




Re: Fedora 33 System-Wide Change proposal: Make btrfs the default file system for desktop variants

2020-06-30 Thread Steven Whitehouse

Hi,

On 29/06/2020 19:54, Igor Raits wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On Mon, 2020-06-29 at 12:26 -0400, Matthew Miller wrote:

On Sun, Jun 28, 2020 at 09:59:52AM -0700, John M. Harris Jr wrote:

We cannot include ZFS in Fedora for legal reasons. Additionally,
ZFS is not
really intended for the laptop use case.

Has that actually been explored? How does Canonical get around the
legal
issues with OpenZFS' licensing?

I can't really speculate on Canonical's legal stance, and I encourage
everyone else not to either.

I can point to Red Hat's, though: the knowledge base article here
https://access.redhat.com/solutions/79633 says:

* ZFS is not included in the upstream Linux kernel due to licensing
  reasons.

* Red Hat applies the upstream first policy for kernel modules
  (including filesystems). Without upstream presence, kernel modules
  like ZFS cannot be supported by Red Hat.

This is not fully true to my knowledge. Red Hat ships VDO and that has
not even been sent upstream (yet?).


It has taken a bit longer than perhaps expected. However, the intention 
is very much that it will go upstream,


Steve.




Re: Fedora 33 System-Wide Change proposal: Make btrfs the default file system for desktop variants

2020-06-30 Thread Steven Whitehouse

Hi,

On 30/06/2020 13:58, Neal Gompa wrote:

On Tue, Jun 30, 2020 at 5:42 AM Steven Whitehouse  wrote:

Hi,

On 29/06/2020 23:57, Markus Larsson wrote:

On Mon, 2020-06-29 at 18:51 -0400, James Cassell wrote:

On Mon, Jun 29, 2020, at 6:43 PM, Markus S. wrote:

Why not Stratis?

Stratis cannot be used to build the root filesystem. (It's been
answered elsewhere in the thread.)

Are we sure?
https://github.com/stratis-storage/stratisd/issues/635
While it might not be all there yet, it seems to be technically
working (I may be wrong, I have done zero tests).
But given how new that is, and that the tooling around it isn't there,
it's pretty far from being a viable default.

It is perhaps also worth mentioning, since I've not seen it elsewhere in
this thread, that Stratis is part of the (larger) Project Springfield.
This is aimed at improving the overall storage/fs management experience,
and there are a number of parts of that landing in various places at the
moment. There is more to come, of course, but the overall aim is
improved user experience for whatever combination of fs/block devices
are in use,


This is the first time I've ever heard that codename, and you should
really change it, because that name is already used for cloud-based
security fuzzing from Microsoft Research. It's a great idea, though!

Improving the UX of storage management is generally a good thing, in
my view. Btrfs provides significant improvements in this regard, but
there can be even more. Tools like SSM[1] were great attempts at
making the LVM experience not suck. Cockpit does a good job of making
handling storage management a lot more approachable, too.

I'd be curious if you are only thinking of server cases, or if desktop
cases are also being considered. Historically, projects like these
from Red Hat are largely only for the server...


[1]: https://github.com/system-storage-manager/ssm

So yes, SSM has been subsumed into Springfield too. There was a long 
debate over the project name, but nobody came up with anything better, 
so it has stuck...


There are a lot of things going on, although few of them have actually 
been labelled with Springfield, so it is perhaps not too surprising that 
the name is not so well known. There has been a new mount API upstream, 
for example, which is part of that, as are the fs notifications (the 
notifications core was merged in the most recent merge window, but the 
mount notifications and the fsinfo syscall are still forthcoming).


There has also been work on PCP, to ensure that we have good metrics for 
a wide variety of filesystems, and there is a dashboard for GFS2 in 
Cockpit as part of that work. Cockpit is one of the important consumers 
of the APIs that fall under the Springfield umbrella.


There is libmount (which will get an update to take advantage of the 
kernel changes mentioned above) as well as udisks2, libstoragemgmt and 
blivet. The overall aim here is not to focus on one specific tool, but 
instead to look at the overall stack and figure out how to make the 
components work better with each other to provide a better user experience.


I know it has been rather confined to Red Hat internally; however, that 
was not the intention, and in fact I would like to strongly encourage 
community involvement. There is an upstream mailing list, which 
currently has almost no traffic: springfi...@sourceware.org so please do 
join and ask questions, if anybody is interested in finding out more.


There is no Springfield codebase as such - it is an umbrella project 
that involves a number of subprojects. The reason that it is 
interesting is that the intent is to look at both the kernel and 
userspace parts of managing storage and filesystems and to improve the 
whole stack, rather than looking at small pieces in isolation. Our aim 
is to encourage discussion and cooperation between the individual 
subprojects.


To answer the earlier question: yes, it is intended for both 
workstation and server use cases. That is perhaps getting a bit off 
topic here, but hopefully it will help to clear up any confusion about 
what Springfield is/does,


Steve.




Re: Fedora 33 System-Wide Change proposal: Make btrfs the default file system for desktop variants

2020-06-30 Thread Steven Whitehouse

Hi,

On 29/06/2020 23:57, Markus Larsson wrote:

On Mon, 2020-06-29 at 18:51 -0400, James Cassell wrote:

On Mon, Jun 29, 2020, at 6:43 PM, Markus S. wrote:

Why not Stratis?

Stratis cannot be used to build the root filesystem. (It's been
answered elsewhere in the thread.)

Are we sure?
https://github.com/stratis-storage/stratisd/issues/635
While it might not be all there yet, it seems to be technically
working (I may be wrong, I have done zero tests).
But given how new that is, and that the tooling around it isn't there,
it's pretty far from being a viable default.


It is perhaps also worth mentioning, since I've not seen it elsewhere in 
this thread, that Stratis is part of the (larger) Project Springfield. 
This is aimed at improving the overall storage/fs management experience, 
and there are a number of parts of that landing in various places at the 
moment. There is more to come, of course, but the overall aim is 
improved user experience for whatever combination of fs/block devices 
are in use,


Steve.



Re: Fedora 33 System-Wide Change proposal: Make btrfs the default file system for desktop variants

2020-06-30 Thread Steven Whitehouse

Hi,

On 27/06/2020 11:00, Florian Weimer wrote:

* Josef Bacik:


As for your ENOSPC issue, I've made improvements in that area.  I
see this in production as well, and I have monitoring in place to deal
with the machine before it gets to this point.  That being said, if
you run the box out of metadata space, things get tricky to fix.
I've been working my way down the list of issues in this area for
years; the last round of patches I sent addressed these corner
cases.

Is there anything we need to do in userspace to improve the behavior
of fflush and similar interfaces?

This is not strictly a btrfs issue: Some of us are worried about
scenarios where the write system call succeeds and the data never
makes it to storage *without a catastrophic failure*.  (I do not
consider running out of disk space a catastrophic failure.)  NFS
apparently has this property, and you have to call fsync or close the
descriptor to detect this.  fsync is not desirable due to its
performance impact.


It doesn't matter which filesystem you use: you can't be sure that the 
data is really safe on disk without calling fsync. In the case of a new 
inode, that means fsync on the file and on the containing directory.


There can be performance issues depending on how that is done; however, 
there are a number of solutions which can reduce the performance 
effects to the point where they are usually no longer a problem. That 
is with the caveat that slow storage will always be slow, of course!


The usual tricks are to avoid doing lots of small fsyncs: gather up 
the smaller files, ideally sorted into inode number order for local 
filesystems, issue the fsyncs asynchronously, and wait for completion 
only once they have all been issued. fadvise/madvise can be useful in 
these situations too.
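
As a rough, untested sketch of that batching pattern (the struct and
function names are illustrative; sync_file_range() is one way to start
writeback asynchronously before the blocking fsync() pass):

   #define _GNU_SOURCE
   #include <fcntl.h>
   #include <stdlib.h>
   #include <sys/stat.h>
   #include <sys/types.h>
   #include <unistd.h>

   struct pending { int fd; ino_t ino; };

   static int by_inode(const void *a, const void *b)
   {
       const struct pending *x = a, *y = b;
       return (x->ino > y->ino) - (x->ino < y->ino);
   }

   int flush_batch(struct pending *files, size_t n, int dirfd)
   {
       /* sort by inode number so a local fs sees the work in
          roughly on-disk order */
       qsort(files, n, sizeof(*files), by_inode);

       /* pass 1: kick off asynchronous writeback on every file */
       for (size_t i = 0; i < n; i++)
           sync_file_range(files[i].fd, 0, 0, SYNC_FILE_RANGE_WRITE);

       /* pass 2: one blocking fsync() per file; the data is already
          in flight, so the waits overlap instead of serialising */
       for (size_t i = 0; i < n; i++)
           if (fsync(files[i].fd) < 0)
               return -1;

       /* new inodes also need an fsync of the containing directory */
       return fsync(dirfd);
   }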


Steve.



Re: I would like to propose that we turn on XFS Reflink in Fedora 29 by default

2018-04-28 Thread Steven Whitehouse

Hi,


On 28/04/18 14:55, Peter Robinson wrote:

On Sat, Apr 28, 2018 at 11:09 AM, Daniel Walsh  wrote:

We are adding some features to container projects for User Namespace support
that can take advantage of XFS Reflink.  I have talked to some of the XFS
Reflink kernel engineers in Red Hat and they have informed me that they
believe it is ready to be turned on by default.

I am not sure who in Red Hat I should talk to about this, or whether we
should turn it on in the installer or in the mkfs.xfs command.

Who should I be talking to, to make this happen?

I would speak to Eric Sandeen I believe he's the Red Hat maintainer
(or one of them) of XFS.

Peter
Indeed, and we should also look at this in the context of what is done 
upstream. Ideally Fedora would just inherit the changes there, and 
nothing special should be required for Fedora,


Steve.


Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-28 Thread Steven Whitehouse

Hi,


On 28/01/18 07:48, Terry Barnaby wrote:
When doing a tar -xzf ... of a big source tar on an NFSv4 file system 
the time taken is huge. I am seeing an overall data rate of about 1 
MByte per second across the network interface. If I copy a single 
large file I see a network data rate of about 110 MBytes/sec which is 
about the limit of the Gigabit Ethernet interface I am using.


Now, in the past I have used the NFS "async" mount option to help with 
write speed (lots of small files in the case of an untar of a set of 
source files).


However, this does not seem to speed this up in Fedora27 and also I 
don't see the "async" option listed when I run the "mount" command. 
When I use the "sync" option it does show up in the "mount" list.


The question is, is the "async" option actually working with NFS v4 in 
Fedora27 ?
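
As an aside, the mount options that were actually negotiated, including
the NFS version, can be inspected on the client (nfsstat comes with
nfs-utils):

   $ nfsstat -m
   $ grep nfs /proc/mounts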



What server is in use? Is that Linux too? Also, is this v4.0 or v4.1? 
I've copied in some of the NFS team who should be able to assist,


Steve.


Re: Deprecating old networking protocols

2017-11-14 Thread Steven Whitehouse

Hi,


On 14/11/17 19:20, Laura Abbott wrote:

The kernel has seen an uptick in testing from fuzzers lately. This
has been great for the kernel as it's exposed a number of bugs. See
https://googleprojectzero.blogspot.com/2017/05/exploiting-linux-kernel-via-packet.html 


as an example. Part of this has also shown what areas of the kernel
are undermaintained. One of those areas is older networking protocols.
Maintainers have started pushing patches to deprecate some of these:

https://marc.info/?l=linux-netdev&m=151067745601327&w=2
https://marc.info/?l=linux-netdev&m=151060836604824&w=2

As the upstream kernel starts to deprecate these, Fedora is going
to follow suit as well. My plan is to turn off options that are
going to be deprecated. I already turned off DCCP in rawhide based
on some other conversations.

As always, your feedback is important here. Nobody is quite sure
how much these protocols are being used so if you have use cases,
please let us know.

Thanks,
Laura


I think it is probably overdue in the DECnet case. However, I did get a 
message this morning from someone who is apparently still using it, and 
who is very happy with it for the most part. In any case, it is clear 
that nobody is maintaining it, and it seems sensible that it should be 
removed unless someone with sufficient time wants to step forward. That 
has not happened so far,


Steve.


Re: BTRFS dropped by RedHat

2017-08-04 Thread Steven Whitehouse



On 04/08/17 16:23, Fernando Nasser wrote:

On 2017-08-04 11:12 AM, Przemek Klosowski wrote:


The release notes for RHEL 7.4 announce that Red Hat gave up on btrfs:



Is it only RHEL?
Yes, it is only RHEL. It does not have any effect on Fedora, which is 
entirely independent,


Steve.


Re: btrfs as default filesystem for F22?

2014-10-03 Thread Steven Whitehouse

Hi,

On 03/10/14 07:42, Juan Orti Alcaine wrote:

El 2014-10-03 05:31, Andre Robatino escribió:

openSUSE 13.2, scheduled for release in November, will have btrfs as the
default filesystem. What are the chances that F22 will follow suit, 
assuming

openSUSE has no major problems with it?

https://news.opensuse.org/2014/09/22/


I've been using btrfs for a while now, and while the kernels 3.15.x 
and 3.16.{0,1} have been problematic, the latest one is working smoothly 
again.


Anyway, I recommend using only the core features (snapshots, raid1, 
scrubs, balances, cp --reflink, etc...), because the others have many 
quirks: send/receive gets corrupted from time to time, raid 5/6 is a 
work in progress, and there are problems related to low free space.


To implement btrfs as the default, grubby must support installing 
/boot on btrfs (bug #1094489). I have to run grub2-mkconfig with every 
kernel update to circumvent this problem.


It is also worth considering adding some scheduled tasks for 
maintenance, like rebalances, or scrubs.




I think "problems related to low free space" is a big issue for a 
default file system. If users have a bad experience due to a problem on 
the default file system, then that will very likely reflect on their 
feelings about Fedora as a whole, so it is vitally important that 
whatever fs is the default is as stable as possible.


It is already possible to do all of the things (and more) which you've 
listed under "core features" using LVM/dm/md, with the exception of 
cp --reflink, so it wouldn't result in a big difference in 
functionality unless and until the more experimental (for want of a 
better term) btrfs features mature. There is also currently a much 
greater developer community around LVM/dm/md than there is around the 
same (volume level) features in btrfs, and LVM/dm/md supports a wider 
range of functionality.
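
For example, a volume-level snapshot with LVM (volume names made up):

   # reserve 1G of copy-on-write space for a snapshot of vg0/home
   lvcreate --size 1G --snapshot --name home_snap /dev/vg0/home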


I should also add (just in case anybody gets the wrong idea!) that I 
think it should definitely be made as easy as possible for anybody who 
wants to evaluate running btrfs on Fedora, but it is far too early to 
make it the default yet,


Steve.




Re: kernel packaging split up landing in Rawhide

2014-04-30 Thread Steven Whitehouse

Hi,

On 29/04/14 22:41, Josh Boyer wrote:

Hi All,

As part of the F21 "Modular Kernel Packaging for Cloud" Feature[1],
I've committed and pushed the kernel packaging split up into
kernel-core and kernel-drivers subpackages.  For those of you running
rawhide, this really shouldn't be a major impact at all.  When you do
a yum update, you will see "kernel", "kernel-core", and
"kernel-drivers" packages being installed.  The end result should be
in line with today's rawhide kernels.

Note: Unless you're using a typical VM or Cloud image, don't uninstall
the kernel or kernel-drivers packages.  The machine may boot with just
kernel-core, but it will lack drivers for a significant portion of
bare-metal hardware without kernel-drivers installed.

Despite best efforts in testing, it's always possible a bug or two
snuck through.  In the event that you do have an issue with this,
please file a bug against the kernel package.

josh

[1]https://fedoraproject.org/wiki/Changes/Modular_Kernel_Packaging_for_Cloud


Just wondering how this will (or will not) affect kernel-modules-extra?

Currently there is a dependency (largely for backwards compatibility 
purposes) on kernel-modules-extra from gfs2-utils, and I'm wondering if 
that will need to be changed (or dropped) as a result of this,


Steve.


Re: Network configuration future

2012-08-29 Thread Steven Whitehouse
Hi,

I wonder if one way to deal with the network configuration issue is to
try and help different configuration systems work with each other,
rather than to try and create one system which does everything. Last
time I looked into this there were various things which could be done to
allow peaceful coexistence of various configuration tools, but which
were missing.

One of these is related to routing. Routes can be tagged with an
originator protocol (/etc/iproute2/rt_protos), with the default being
"kernel" for routes which are generated automatically, say when an
interface is brought up. This means that utilities which do some form
of dynamic routing (and I include NetworkManager, dhcp clients, etc. in
this) should only remove or change routes which are either marked
"kernel" or marked with their own protocol id. I brought this issue up
some time ago (I did look for a reference but didn't find one
immediately), but without seeing much enthusiasm for resolving it.
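
To illustrate the idea (addresses made up): a tool tags the routes it
installs, and can then inspect or remove only its own routes without
touching anybody else's:

   ip route add 192.0.2.0/24 via 198.51.100.1 proto static
   ip route show proto static
   ip route flush proto static

("static" is one of the predefined ids; a tool could equally register
its own name in /etc/iproute2/rt_protos.)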

My point is not so much that this particular issue should be fixed (and
maybe it has been since it is a while since I last checked) but that by
careful use of the available functions (and maybe with the odd extension
here and there) it should be possible to allow many different tools to
cooperate with each other. I'm just using the route issue as an example.

Obviously there needs to be some coordination when it comes to dealing
with an interface coming up and which particular tool will deal with the
set up, but there is still scope for allowing multiple tools to
cooperate too. That way we can have specialist tools to deal with the
less common situations, without needing to clutter up the tools dealing
with the common case with more advanced features.

So my take on the problem is to consider how it would be possible to
have different tools cooperating in the same space, and then maybe to
extract the common functions which allow this into a small library which
could then be used by each tool.

I'm not sure if this is useful in the context of the original post, but
it is what the post brought to mind while I was reading it,

Steve.



Re: kernel-modules-extra and GFS2

2012-04-11 Thread Steven Whitehouse
Hi,

I've opened bug #811547 for this issue,

Steve.



Re: kernel-modules-extra and GFS2

2012-04-11 Thread Steven Whitehouse
Hi,

On Wed, 2012-04-11 at 07:19 -0400, Josh Boyer wrote:
> On Wed, Apr 11, 2012 at 6:03 AM, Steven Whitehouse  
> wrote:
> > Hi,
> >
> > On Wed, 2012-04-11 at 10:55 +0100, Daniel P. Berrange wrote:
> >> On Wed, Apr 11, 2012 at 10:52:19AM +0100, Steven Whitehouse wrote:
> >> > Hi,
> >> >
> >> > I've had some reports recently that appeared to suggest that in F17,
> >> > GFS2 was no longer being supported by the kernel. Having investigated
> >> > this, it appears that the root cause is that the gfs2.ko module has been
> >> > moved to a package called kernel-modules-extra (although the kernel RPM
> >> > still contains the directory in which the gfs2 module sits, which is a
> >> > bit odd - why package an empty directory?)
> >> >
> >> > Now, I'm wondering whether I should add a dependency on
> >> > kernel-modules-extra in the gfs2-utils package?
> >>
> >> Why not just open a BZ requesting that gfs2 be moved back into the
> >> main kernel RPM. IMHO having gfs2 in a separate kernel RPM just creates
> >> unnecessary complexity/pain for users.
> >
> > Well that is one possibility - I'm trying to find the documentation that
> > explains the criteria for modules being moved into the
> > kernel-modules-extra package and I've not found any so far
> 
> Essentially, it's:
> 
> "Things that are not widely used in a typical Fedora setup, or things
> that we might disable entirely but are moving to see if there are users
> that notice."
> 
> GFS2 falls into the first set, not the second.
> 
Yes, but this makes no sense at all. Looking at the selection that
has been made, we have:

 o DLM in the main kernel package
 o OCFS2 and GFS2 - the only two in-kernel users of DLM in
kernel-modules-extra

I know that cLVM also uses DLM, but from userland, and I wonder just how
many people use cLVM who don't use one of the cluster filesystems - probably
a few, but most likely not a huge number. Perhaps more importantly, DLM
depends on SCTP, and SCTP is only in kernel-modules-extra, so I think
this needs a rethink.

> > However, if that is the correct solution, then I'm quite happy with it,
> > but it isn't immediately obvious as to whether it is or not,
> 
> We can move it back if needs be.  Honestly, we might wind up just
> disabling the rest of the stuff contained in there and dropping the
> sub-package entirely.  We're still kind of undecided on whether it's
> worth doing at all.  Thus far there have been 3 requests to move a
> module back.  The rest seem to be unnoticed.
> 
> josh

I can certainly open a bug to request a more sane assignment of modules
to packages, but just wanted to be sure of the criteria so that I am
asking for the correct things,

Steve.




Re: kernel-modules-extra and GFS2

2012-04-11 Thread Steven Whitehouse
Hi,

On Wed, 2012-04-11 at 10:55 +0100, Daniel P. Berrange wrote:
> On Wed, Apr 11, 2012 at 10:52:19AM +0100, Steven Whitehouse wrote:
> > Hi,
> > 
> > I've had some reports recently that appeared to suggest that in F17,
> > GFS2 was no longer being supported by the kernel. Having investigated
> > this, it appears that the root cause is that the gfs2.ko module has been
> > moved to a package called kernel-modules-extra (although the kernel RPM
> > still contains the directory in which the gfs2 module sits, which is a
> > bit odd - why package an empty directory?)
> > 
> > Now, I'm wondering whether I should add a dependency on
> > kernel-modules-extra in the gfs2-utils package?
> 
> Why not just open a BZ requesting that gfs2 be moved back into the
> main kernel RPM. IMHO having gfs2 in a separate kernel RPM just creates
> unnecessary complexity/pain for users.
> 
> Regards,
> Daniel

Well that is one possibility - I'm trying to find the documentation that
explains the criteria for modules being moved into the
kernel-modules-extra package, and I've not found any so far.

However, if that is the correct solution, then I'm quite happy with it,
but it isn't immediately obvious as to whether it is or not,

Steve.
 


kernel-modules-extra and GFS2

2012-04-11 Thread Steven Whitehouse
Hi,

I've had some reports recently that appeared to suggest that in F17,
GFS2 was no longer being supported by the kernel. Having investigated
this, it appears that the root cause is that the gfs2.ko module has been
moved to a package called kernel-modules-extra (although the kernel RPM
still contains the directory in which the gfs2 module sits, which is a
bit odd - why package an empty directory?)

Now, I'm wondering whether I should add a dependency on
kernel-modules-extra in the gfs2-utils package?

There was never a dep on the kernel package in gfs2-utils - it seemed a
bit pointless, really :-) However, I can see that we'll end up with a
lot of confused users when they get an error message implying
that the Fedora kernel no longer supports GFS2.

Also, I wonder whether we could arrange for an attempt to load a kernel
module that isn't installed, but that is in the kernel-modules-extra
package, to trigger some kind of notice to the user that this is the
case and that they need to install the additional package?

Steve.



Re: systemd and mounting filesystems

2011-10-10 Thread Steven Whitehouse
Hi,

On Wed, 2011-10-05 at 09:18 +, "Jóhann B. Guðmundsson" wrote:
> On 10/05/2011 08:55 AM, Steven Whitehouse wrote:
> > Ok, excellent, so there is really just one issue to try and resolve in
> > that case I think, which is the ordering of mounts vs. gfs_controld
> > start,
> 
> Hum...
> 
> Could that be solved either by creating mount/path units (for the mount 
> point) and/or by adding Before=local-fs.target to gfs_controld.service? 
> If it needs network support, the unit section of that service file would 
> look something like this:
> 
> [Unit]
> Requires=network.target
> After=network.target
> Before=local-fs.target
> 
> JBG

Sorry for the delay... I'll have a look into this a bit later in the
week, when I'm back in front of a suitable test box. One concern is
whether that sequence of dependencies might create a circular dependency,
since I assume that normally local-fs.target would be ordered before
network.target.

While we could create a mount/path unit for the mount point, that gets
us back to the previous problem of having to special case gfs2 mounts
which is what I'd like to avoid, if possible. At least if I've
understood your proposal correctly.

In the mean time I had a further thought... I wondered if it would be
possible to trigger starting gfs_controld from udev/sysfs. That would
require the following conditions to be fulfilled:

 1. A maximum of one instance of gfs_controld should be started
 2. While any gfs2 filesystems are mounted, gfs_controld must be running
(and we cannot allow restarts, it must be the same instance, always)
 3. We would need to buffer any sysfs messages from gfs2 and ensure that
gfs_controld saw them, in order, after it was started.

The down side of this is that it would not be at all easy to deal with
dependencies (in this case cman and friends) if they had not been
started at mount time. Not to mention that it would probably also
require modification to gfs_controld.

So that might not be a good plan, but I thought I'd mention it for
completeness,

Steve.



Re: systemd and mounting filesystems

2011-10-05 Thread Steven Whitehouse
Hi,

On Wed, 2011-10-05 at 00:01 +0200, Lennart Poettering wrote:
> On Tue, 04.10.11 14:39, Steven Whitehouse (swhit...@redhat.com) wrote:
> 
> > Hi,
> 
> Heya,
> > 
> > I'm looking for some info on systemd and how filesystems are mounted in
> > Fedora. I've started looking into converting the gfs2-utils package to
> > the new init system and run into things which are not documented (so far
> > as I can tell).
> > 
> > Currently there are two init scripts in gfs2-utils, one is called gfs2
> > and the other gfs2-cluster.
> > 
> > Converting gfs2-cluster is trivial. It simply runs the gfs_controld
> > daemon on boot.
> > 
> > The more complicated conversion is the gfs2 script. This has been used
> > historically to mount gfs2 filesystems (rather than using the system
> > scripts for this). I assume that under the new systemd regime it should
> > be possible to simply tell systemd that gfs2 filesystem mounting
> > requires gfs_controld to be running in addition to the normal filesystem
> > requirement of having the mount point accessible, and then systemd would
> > do the mounting itself.
> 
> systemd will automatically order all network mounts after
> network.target. It recognizes network mounts either by "_netdev" in the
> options field in fstab, or by the file system type (it has a short
> static list of known network file systems built in, and gfs2 is actually
> listed in it).
> 
> systemd automatically orders mounts by their path. i.e. /foo will always
> be mounted before /foo/bar.
> 
> So, probably you should simply order gfs2-cluster before network.target
> and that's already all you need to do:
> 
> [Unit]
> Before=network.target
> 
Unfortunately I have:
After=network.target

because gfs_controld requires the network to be up and working in order
to communicate with its peers on other nodes. gfs2-cluster has some
prerequisites which require the network (i.e. dlm via the cman
initscript and corosync) too.

Historically people have used _netdev with gfs2, but it isn't really a
good fit since although we require the network to be up and working, we
are not a network filesystem as such.

> > Things are slightly more complicated in that gfs_controld is only a
> > requirement for gfs2 when lock_dlm is in use. For lock_nolock
> > filesystems, mounting is just like any other local filesystem. The
> > locking type can be specified either in fstab, or in the superblock
> > (with fstab taking priority).
> 
> Well, I'd probably recommend to just ask people to enable gfs_controld
> manually with "systemctl enable" if they want to make use of it. But if
> you want an automatic pulling in depending on the mount option you could
> write a generator. That's a tiny binary (or script) you place in
> /lib/systemd/system-generators/. It will be executed very very early at
> boot and could generate the necessary deps by parsing fstab and creating
> .wants symlinks in the directory the generator gets passed as
> argv[1]. This is fairly simple to do, but I am tempted to say that
> manually enabling this service is nicer in this case. Automatisms in
> some areas are good but manually enabling the service is sometimes an
> option too. There's little documentation available on generators right
> now, simply because we don't want to advertise them too widely yet, and
> prefer if people ping us if they plan to make use of it in some package.
> 
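
For reference, a generator along the lines Lennart describes could be
as small as the following untested sketch (the unit name and fstab
match are illustrative):

   #!/bin/sh
   # placed in /lib/systemd/system-generators/; argv[1] is the
   # directory in which to create the .wants symlinks
   wantdir="$1/multi-user.target.wants"
   if grep -q '[[:space:]]gfs2[[:space:]]' /etc/fstab; then
       mkdir -p "$wantdir"
       ln -sf /lib/systemd/system/gfs2-cluster.service "$wantdir/"
   fi
   exit 0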
Ok, that's not a problem. The manual system you suggest is very similar
to the current system, so the doc change will not be very great.

> > Another issue which I suspect is already resolved, but I'm not quite
> > sure how it can be specified in fstab, etc, is that of mount order of
> > filesystems. In particular how to set up bind mounts such that they
> > occur either before or after a specified filesystem?
> 
> systemd should be smart enought to handle that automatically. For bind
> mounts we wait until all mount points that are prefixes of either the
> mount source or the mount destination are mounted before we apply the
> bind mounts.
> 
> Lennart
> 
> -- 
> Lennart Poettering - Red Hat, Inc.

Ok, excellent, so there is really just one issue to try and resolve in
that case I think, which is the ordering of mounts vs. gfs_controld
start,

Steve.




Re: systemd and mounting filesystems

2011-10-04 Thread Steven Whitehouse
Hi,

On Tue, 2011-10-04 at 14:54 +0100, Paul Howarth wrote:
> On 10/04/2011 02:39 PM, Steven Whitehouse wrote:
> > Hi,
> >
> > I'm looking for some info on systemd and how filesystems are mounted in
> > Fedora. I've started looking into converting the gfs2-utils package to
> > the new init system and run into things which are not documented (so far
> > as I can tell).
> >
> > Currently there are two init scripts in gfs2-utils, one is called gfs2
> > and the other gfs2-cluster.
> >
> > Converting gfs2-cluster is trivial. It simply runs the gfs_controld
> > daemon on boot.
> >
> > The more complicated conversion is the gfs2 script. This has been used
> > historically to mount gfs2 filesystems (rather than using the system
> > scripts for this). I assume that under the new systemd regime it should
> > be possible to simply tell systemd that gfs2 filesystem mounting
> > requires gfs_controld to be running in addition to the normal filesystem
> > requirement of having the mount point accessible, and then systemd would
> > do the mounting itself.
> >
> > Things are slightly more complicated in that gfs_controld is only a
> > requirement for gfs2 when lock_dlm is in use. For lock_nolock
> > filesystems, mounting is just like any other local filesystem. The
> > locking type can be specified either in fstab, or in the superblock
> > (with fstab taking priority).
> >
> > Another issue which I suspect is already resolved, but I'm not quite
> > sure how it can be specified in fstab, etc, is that of mount order of
> > filesystems. In particular how to set up bind mounts such that they
> > occur either before or after a specified filesystem?
> >
> > I hope to thus resolve the long standing bug that we have open (bz
> > #435096) for which the original response was "Wait for upstart" but for
> > which I'm hoping that systemd can resolve the problem.
> 
> I think you mean http://bugzilla.redhat.com/435906
> 
Yes, apologies for the typo.

> I ran into a similar problem last month. I foolishly set up a bind mount 
> for a local filesystem, with the new mountpoint living on top of an NFS 
> filesystem, and set it up in fstab to mount on boot in an F-16 VM. When 
> I next rebooted, the attempted bind mount happened very early in the 
> boot process (long before the network was up) and failed, resulting in a 
> boot failure at an even earlier point than the usual single-user mode, 
> where all the volume groups hadn't even been scanned and devices added 
> in /dev, which was tricky to fix until I figured out what had happened 
> and removed the bind mount entry from fstab.
> 
> Paul.

That is very much the kind of thing I'm pondering at the moment,

Steve.




systemd and mounting filesystems

2011-10-04 Thread Steven Whitehouse
Hi,

I'm looking for some info on systemd and how filesystems are mounted in
Fedora. I've started looking into converting the gfs2-utils package to
the new init system and run into things which are not documented (so far
as I can tell).

Currently there are two init scripts in gfs2-utils, one is called gfs2
and the other gfs2-cluster.

Converting gfs2-cluster is trivial. It simply runs the gfs_controld
daemon on boot.

The more complicated conversion is the gfs2 script. This has been used
historically to mount gfs2 filesystems (rather than using the system
scripts for this). I assume that under the new systemd regime it should
be possible to simply tell systemd that gfs2 filesystem mounting
requires gfs_controld to be running in addition to the normal filesystem
requirement of having the mount point accessible, and then systemd would
do the mounting itself.

Things are slightly more complicated in that gfs_controld is only a
requirement for gfs2 when lock_dlm is in use. For lock_nolock
filesystems, mounting is just like any other local filesystem. The
locking type can be specified either in fstab, or in the superblock
(with fstab taking priority).
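
For concreteness, illustrative fstab lines (devices made up); the first
forces local locking via the lockproto option, while the second leaves
the choice to whatever the superblock records:

   /dev/vg0/test   /mnt/test   gfs2  lockproto=lock_nolock  0 0
   /dev/vg0/share  /mnt/share  gfs2  defaults,_netdev       0 0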

Another issue which I suspect is already resolved, but I'm not quite
sure how it can be specified in fstab, etc, is that of mount order of
filesystems. In particular how to set up bind mounts such that they
occur either before or after a specified filesystem?
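
For reference, the kind of fstab bind-mount entry in question (paths
made up), where both /srv and /export may themselves be mounts:

   /srv/data  /export/data  none  bind  0 0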

I hope to thus resolve the long standing bug that we have open (bz
#435096) for which the original response was "Wait for upstart" but for
which I'm hoping that systemd can resolve the problem.

So I'm wondering how to express these requirements in systemd correctly,

Steve.




debugfs query

2011-01-28 Thread Steven Whitehouse
Hi,

Currently Fedora doesn't automatically mount debugfs at boot time. So I
thought that it might be worth asking whether this should be the case?

One alternative would be to only mount it when a package is installed
that requires debugfs. The issue there is that there are potentially
multiple such packages, and they would need to cooperate somehow in
mounting debugfs.

They could do it by appending a line to fstab, or they could use their
own scripts to mount it. Is there a preferred method?

Bearing in mind that it would be a lot simpler just to have debugfs
mounted as part of the default fstab, that would be my suggested
solution, unless there is a good reason not to do that?
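
For reference, the fstab line in question would be:

   debugfs  /sys/kernel/debug  debugfs  defaults  0 0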

Steve.

