Re: default file system, was: Comparison to Workstation Technical Specification

2014-03-01 Thread James Harshaw
As a side note, there have been *some* attempts at adding shrink
capability to XFS, but none of them seem to be actively developed or even complete.

Shrinking, in my experience, is extremely important. Unexpected growth of
the / partition, with no ability to make room for it, can be a major issue;
this happened on one of my servers and it was not a pretty situation.
On Mar 1, 2014 4:43 PM, Jacob Yundt jyu...@gmail.com wrote:

 
  People do shrink volumes, and this lack of flexibility is an important
  consideration I feel was ignored in the Server WG decision.
 
  What is the use case for volume shrinking in a server context? Dual boot
 is a total edge case for servers.

 I shrink ext4 filesystems on servers pretty frequently. Most recently
 because:

 *) Received bad information from an end user which required changing
 several LVs/FSs.
 *) An oops situation where a filesystem was incorrectly increased by
 an extra order of magnitude
 *) Unexpected (e.g. emergency) growth of an application which required
 increasing one filesystem and shrinking another, lesser-used filesystem.

 Yes, in all three aforementioned cases we had to unmount the ext4
 filesystem in order to shrink it; however, we would _not_ have been
 able to do this at all with XFS.
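 (For reference, a minimal sketch of the offline shrink workflow, assuming an
 LVM-backed ext4 filesystem; the VG/LV name, mount point and sizes below are
 placeholders rather than the real ones from the cases above:

   umount /srv/data          # ext4 can only be shrunk offline
   e2fsck -f /dev/VG/LV      # resize2fs requires a fresh fsck before shrinking
   resize2fs /dev/VG/LV 20G  # shrink the filesystem first
   lvreduce -L 20G VG/LV     # then the logical volume, never the other way around
   resize2fs /dev/VG/LV      # let the fs grow back out to exactly fill the rounded LV
   mount /dev/VG/LV /srv/data

 lvreduce -r should collapse the filesystem steps into the LV step by calling
 fsadm/resize2fs itself.)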

 On a semi related note: I grow/shrink JFS2 filesystems (on AIX) all
 the time. It would be great if ext4 had online shrink.
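 (On AIX that is essentially a one-liner; assuming a JFS2 filesystem mounted
 at /data, something like:

   chfs -a size=+2G /data   # grow by 2 GB while mounted
   chfs -a size=-2G /data   # shrink by 2 GB while mounted

 with the filesystem staying online the whole time.)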

 -Jacob
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct

Re: lvresize and XFS, was: default file system

2014-02-27 Thread James Harshaw
So far my small amount of research shows it isn't that big of a problem. We
should look into it more, though.
On Feb 27, 2014 5:41 PM, Chris Murphy li...@colorremedies.com wrote:


 On Feb 27, 2014, at 3:32 PM, Chris Murphy li...@colorremedies.com wrote:

 
  On Feb 27, 2014, at 3:02 PM, Jochen Schmitt joc...@herr-schmitt.de
wrote:
 
  On Thu, Feb 27, 2014 at 04:08:46PM -0500, James Wilson Harshaw IV
wrote:
  A question I have: is XFS worth it?
 
  I have done some testing with RHEL 7 Beta, which uses XFS as the default
  file system.
 
  I have to admit that the -r switch of the lvresize command doesn't
  cooperate with XFS, in contrast to ext4.
 
  Were you growing or shrinking the fs, was it mounted at the time, and
 what error did you get? XFS doesn't support shrink and can only be grown
 online. I'm pretty sure lvresize -r supports xfs_growfs via fsadm.

 worksforme

 Starting with a 10TB XFS volume, 5TB x 5 disk VG.


 # lvresize -r -v --size 15T VG/LV
 Finding volume group VG
 Executing: fsadm --verbose check /dev/VG/LV
 fsadm: xfs filesystem found on /dev/mapper/VG-LV
 fsadm: Skipping filesystem check for device /dev/mapper/VG-LV as the
filesystem is mounted on /mnt
 fsadm failed: 3
 Archiving volume group VG metadata (seqno 2).
   Extending logical volume LV to 15.00 TiB
 Loading VG-LV table (253:0)
 Suspending VG-LV (253:0) with device flush
 Resuming VG-LV (253:0)
 Creating volume group backup /etc/lvm/backup/VG (seqno 3).
   Logical volume LV successfully resized
 Executing: fsadm --verbose resize /dev/VG/LV 16106127360K
 fsadm: xfs filesystem found on /dev/mapper/VG-LV
 fsadm: Device /dev/mapper/VG-LV size is 16492674416640 bytes
 fsadm: Parsing xfs_info /mnt
 fsadm: Resizing Xfs mounted on /mnt to fill device /dev/mapper/VG-LV
 fsadm: Executing xfs_growfs /mnt
 meta-data=/dev/mapper/VG-LV      isize=256    agcount=10, agsize=268435455 blks
          =                       sectsz=512   attr=2
 data     =                       bsize=4096   blocks=2684354550, imaxpct=5
          =                       sunit=0      swidth=0 blks
 naming   =version 2              bsize=4096   ascii-ci=0
 log      =internal               bsize=4096   blocks=521728, version=2
          =                       sectsz=512   sunit=0 blks, lazy-count=1
 realtime =none                   extsz=4096   blocks=0, rtextents=0
 data blocks changed from 2684354550 to 4026531825

 # df -h
 Filesystem Size  Used Avail Use% Mounted on
 /dev/mapper/VG-LV   15T   33M   15T   1% /mnt


 However, I don't know what fsadm failed: 3 means.


 Chris Murphy
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct

Re: lvresize and XFS, was: default file system

2014-02-27 Thread James Harshaw
Haha! Error: stuff happened.
On Feb 27, 2014 6:02 PM, Eric Sandeen sand...@redhat.com wrote:

 On 2/27/14, 4:40 PM, Chris Murphy wrote:
 
  On Feb 27, 2014, at 3:32 PM, Chris Murphy li...@colorremedies.com
 wrote:
 
 
  On Feb 27, 2014, at 3:02 PM, Jochen Schmitt joc...@herr-schmitt.de
 wrote:
 
  On Thu, Feb 27, 2014 at 04:08:46PM -0500, James Wilson Harshaw IV
 wrote:
   A question I have: is XFS worth it?
 
   I have done some testing with RHEL 7 Beta, which uses XFS as the default
   file system.
  
   I have to admit that the -r switch of the lvresize command doesn't
   cooperate with XFS, in contrast to ext4.
 
   Were you growing or shrinking the fs, was it mounted at the time, and
  what error did you get? XFS doesn't support shrink and can only be grown
  online. I'm pretty sure lvresize -r supports xfs_growfs via fsadm.
 
  worksforme
 
  Starting with a 10TB XFS volume, 5TB x 5 disk VG.
 
 
  # lvresize -r -v --size 15T VG/LV
  Finding volume group VG
  Executing: fsadm --verbose check /dev/VG/LV
  fsadm: xfs filesystem found on /dev/mapper/VG-LV
  fsadm: Skipping filesystem check for device /dev/mapper/VG-LV as the
 filesystem is mounted on /mnt
  fsadm failed: 3

 snip

  However, I don't know what fsadm failed: 3 means.

 fsadm.sh:

 if detect_mounted ; then
         verbose "Skipping filesystem check for device \"$VOLUME\" as the filesystem is mounted on $MOUNTED";
         cleanup 3
 fi

 ...
 cleanup() {
 ...
 exit ${1:-1}
 }

 the script exits with error 3 meaning, well, 3, I guess, when the fs
 is mounted.  Not the nicest error reporting IMHO :)
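 (Easy enough to confirm by hand against a mounted LV, using the same
 hypothetical VG/LV naming as above:

   # fsadm --verbose check /dev/VG/LV ; echo $?

 should print the same Skipping filesystem check message and then 3, which
 lvresize simply reports back as fsadm failed: 3.)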

 -Eric

-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct

Re: devel Digest, Vol 115, Issue 94

2013-09-23 Thread James Harshaw
 Greetings testers!

 It's meeting time again on Monday! Alpha is now done and ready to go,
 but we may have a few more things to do to prepare for it, and it's time
 to look ahead to Beta as well.

 This is a reminder of the upcoming QA meeting. Please add any topic
 suggestions to the meeting wiki page:
 https://fedoraproject.org/wiki/QA/Meetings/20130923

 The current proposed agenda is included below.

 == Proposed Agenda Topics ==
 1. Previous meeting follow-up
 2. Fedora 20 Alpha final work and retrospective
 3. Fedora 20 Beta planning
 4. Open floor
   

I would like to propose a more guided partitioning GUI for anaconda: one
that warns you when deleting a partition may damage the OS currently on the
box, rather than just a generic warning. Many inexperienced people to whom I
recommend Fedora tend to delete their Windows partitions unknowingly. I
believe I will not be able to make the meeting.

Regards

-Absal0m

-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct