Re: resizing mounted filesystems
On Thu, Nov 07, 2002 at 02:05:13PM -0800, Terry Lambert wrote:
> Lukas Ertl wrote:
> > how hard would it be to implement resizing of mounted filesystems?
> > Currently, growfs requires the filesystem to be unmounted, and this
> > is definitely a showstopper for FreeBSD when it comes to production
> > use.
> >
> > I'd really like to promote FreeBSD more in my organisation, where we
> > currently use mostly AIX, and I often hear (and have to say that it's
> > true) that the AIX LVM is so robust, stable and quite easy to use.
> >
> > Could this feature be implemented once FreeBSD 5.0 is out with its
> > filesystem snapshot?
>
> Nearly impossible, without a JFS.  You would need to be able to add
> new PP's to an LP, as you can do on AIX, or assign PP's to a hog
> partition, and then provide each LP with hog limits, so that they can
> allocate PP's to themselves automatically, as needed, up to some high
> watermark.

It is doable - just not done.  E.g. Solstice DiskSuite for Solaris does
this.

> The problem is that the allocation space is spread over all cylinder
> groups, effectively as a hash.  This is the same reason it is
> recommended that you backup and restore to defrag when you run growfs.

That's a performance reason.

-- 
B. Walter              COSMO-Project         http://www.cosmo-project.de
[EMAIL PROTECTED]      Usergroup             [EMAIL PROTECTED]

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message
Re: resizing mounted filesystems
Bernd Walter wrote:
> > Nearly impossible, without a JFS.  You would need to be able to add
> > new PP's to an LP, as you can do on AIX, or assign PP's to a hog
> > partition, and then provide each LP with hog limits, so that they
> > can allocate PP's to themselves automatically, as needed, up to
> > some high watermark.
>
> It is doable - just not done.  E.g. Solstice DiskSuite for Solaris
> does this.

Not quite.  It supports growing FS's, but fails to defrag them; it
basically utilizes a version of growfs(8) from the BSD world:

http://docs.sun.com/db/doc/806-3204/6jccb3g8l?a=view#ch1basics-ix63

> > The problem is that the allocation space is spread over all
> > cylinder groups, effectively as a hash.  This is the same reason it
> > is recommended that you backup and restore to defrag when you run
> > growfs.
>
> That's a performance reason.

No, it's an implementation problem.  I've explained this before, with
nice ASCII-art diagrams.

There's an implicit expectation in the allocation policy that
allocations will be spread more or less evenly across all cylinder
groups.  When you violate this expectation, you get *internal
fragmentation*, which simply can't happen any other way when using the
FS normally.  In a normal FFS, there is no such thing as internal
fragmentation.

If, instead of hashing allocations across cylinder groups, as FFS
does, you were to use a journal, log-structured storage, or extents
for storage, then the problem goes away (it then becomes the cleaner's
problem, on FS's that have cleaners).  UFS (FFS) does not have a
cleaner to unmess the disk.

A disk fragmented this way is not the same thing as a Windows disk
fragmented by a poor intrinsic layout policy, which merely suffers
slightly degraded performance or slightly less overall storage.  A
disk fragmented this way is *broken* for future allocation attempts,
if the hashes happen to fall at the front of the disk enough times for
the soft failure to be treated as a hard failure.

We used to have this problem with the IDE drivers in UnixWare when
using VxFS: you would get several soft failures in a row, and your
/usr partition would disappear, and, with the way bad sectoring was
handled by issuing controller commands, only a low-level format would
recover writeability for that section of the disk (VxFS is
UFS-derived, which is FFS-derived, in case the connection isn't
obvious).

Rather than snapshots, it would have been nice if we had the ability
to lock and unlock writeability on a per-cylinder-group basis, and
used that instead of snapshots for background fsck; it would also let
us do things like background defragging, which you simply can't
implement using snapshots.

-- Terry
resizing mounted filesystems
Hi hackers,

how hard would it be to implement resizing of mounted filesystems?
Currently, growfs requires the filesystem to be unmounted, and this is
definitely a showstopper for FreeBSD when it comes to production use.

I'd really like to promote FreeBSD more in my organisation, where we
currently use mostly AIX, and I often hear (and have to say that it's
true) that the AIX LVM is so robust, stable and quite easy to use.

Could this feature be implemented once FreeBSD 5.0 is out with its
filesystem snapshot?

best regards,
le

-- 
Lukas Ertl                          eMail: [EMAIL PROTECTED]
UNIX-Systemadministrator            Tel.:  (+43 1) 4277-14073
Zentraler Informatikdienst (ZID)    Fax.:  (+43 1) 4277-9140
der Universität Wien                http://mailbox.univie.ac.at/~le/
Re: resizing mounted filesystems
Lukas Ertl wrote:
> how hard would it be to implement resizing of mounted filesystems?
> Currently, growfs requires the filesystem to be unmounted, and this
> is definitely a showstopper for FreeBSD when it comes to production
> use.
>
> I'd really like to promote FreeBSD more in my organisation, where we
> currently use mostly AIX, and I often hear (and have to say that it's
> true) that the AIX LVM is so robust, stable and quite easy to use.
>
> Could this feature be implemented once FreeBSD 5.0 is out with its
> filesystem snapshot?

Nearly impossible, without a JFS.  You would need to be able to add
new PP's to an LP, as you can do on AIX, or assign PP's to a hog
partition, and then provide each LP with hog limits, so that they can
allocate PP's to themselves automatically, as needed, up to some high
watermark.

It should be technically possible to modify Vinum/ccd/GEOM so that, if
you start with a logical rather than a physical partition intermediated
by one of those technologies, you could grow the size of the logical
partition while the system is active, with a few small code changes
(or a lot of them, in the GEOM case).  Even then, you would still need
to inform the FS of the additional space, and deal with the
consequences of a size change on the FS (e.g. defrag FFS after you are
done growing it).

The problem is that the allocation space is spread over all cylinder
groups, effectively as a hash.  This is the same reason it is
recommended that you backup and restore to defrag when you run growfs.

-- Terry