On Tue, Apr 01, 2014 at 08:25:39PM +0200, a b wrote:
> > I believe the above depends on the kernel version. My reading tells
> > me that as of kernel 2.6.33 write barriers are implemented correctly,
> > in which case, could I assume that it would be safe to use LVM?
...
> Also, as others have said, slicing up / into separate filesystems
> is something we did in the '80s and '90s of the past century on
> Solaris.
Yepp, and many people still do it for very good reasons! Not for
/usr/openwin anymore ;-), but ...
> Barring availability of ZFS, having a single "whole disk /" is
> the most efficient use of disk storage capacity, and is adequate
Heh, what does this mean? IIRC, ZFS filesystems "grow automatically"
unless restricted by quotas/reservations. Sounds like a MS Windows
engineer ... ;-)
> unless one is dealing with massive amounts of storage, which is
> where /var/opt on additional filesystems comes into play.
Not sure what "massive amounts of storage" means, nor why /var/opt is
mentioned here; however, it seems to be a wrong assumption anyway.
> If systems are running the risk of filling filesystems to capacity,
> then the problem is architectural, not administrative; one
In the sense of "if the filesystem is getting full, stuff some more
disks into it" this might be true, but welcome to the real world ...
> should look into why this is happening, and implement appropriate
> measures. For example, one system with a filesystem full should
An appropriate measure IS to create a separate ZFS and put
reservations/quotas on it! This is cheap, efficient, very easy to do,
and usually completely sufficient. Especially for SMEs it doesn't make
much sense to waste a lot of time/money setting up sophisticated
monitoring systems or even monitoring the stuff actively. They usually
want a server which gets put into a corner and runs for the next 10
years without making any "noise". No SNMP or bla; smtp-notify is
appropriate for them. They are not DCs ...
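For illustration, a minimal sketch of such a setup (pool/dataset names
are made up, adjust to your layout):

  # give the app its own dataset, cap it and guarantee it some space
  zfs create -p rpool/data/app1
  zfs set quota=10G rpool/data/app1        # can never grow beyond 10G
  zfs set reservation=2G rpool/data/app1   # 2G are always kept for it

Once that's in place, a runaway app1 fills its own 10G and nothing else.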
> never be able to prevent any application from functioning; if it
> does, it is time to redesign the application, or send the vendor
> back to the drawing board until the application is resilient
Good luck with that, especially with non-open-source SW. You probably
live in a different reality :(
> enough to be able to withstand random nodes going down for any
> reason. A lot of the time, "rolling up one’s own sleeves" is the
> approach called for.
Hmmm, buy or build rock-solid servers (not the Dell and * crap; they
seem to sell you old technologies/stuff pimped for Windows) ...
> I have also seen /var split into its own separate filesystem in a
> lot of places, under the false logic that that action will prevent
> the system from choking up, but the fact is, if /var is full, the
> system will come to a grinding halt. Also, "classic" filesystems,
> where this is most prevalent, provide a -m switch for minimum
> reserved space, where only root can write, thus preventing
> applications from bringing the system to a halt, so the entire
Ohh, back to UFS? ;-) Anyway, wrong assumption again. First, it is not
only processes running as root that write to /var. Many applications
use it as their default, e.g. /var/db/*, /var/lib/*, /var/cache/*,
/var/mysql/*, etc. And since many people believe in "pseudo
standards"/don't want the hassle of reconfiguring the apps to use
something different and documenting it in an appropriate manner, it
certainly makes sense to create a separate ZFS there and put some
reservation on it.
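E.g. something like this (hypothetical names; the mountpoint trick
keeps the app's default path working unchanged):

  # back /var/mysql with its own dataset instead of reconfiguring the app
  zfs create -o mountpoint=/var/mysql rpool/var-mysql
  zfs set reservation=5G rpool/var-mysql   # guaranteed space for the DB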
Second, since /var is essential for many apps, it also makes very good
sense to put it on a separate ZFS with a safe reservation as well. In
an ideal world, they would notify you about 'not enough disk space'
and handle it accordingly, but in the real world, many just go mad and
throw obscure, unrelated messages at you. And obviously, you can't
re-write/fix them all by yourself.
Similarly, you might say: OK, /var is safe, so I put no restriction on
/home (or any other "data closet") - not that important, users will
notify you if they can't write to it/their apps go crazy - e.g. for
small servers or desktops with only one pool available.
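In that case, a plain quota on the data closet is enough (again, names
made up):

  # /home gets a hard cap but no reservation, so runaway users
  # can't starve /var and friends
  zfs create -o mountpoint=/home rpool/home
  zfs set quota=50G rpool/home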
Another one is /var/cores. If you have an application which coredumps
frequently and you have global coredumps enabled, or you are
developing/trying out some stuff which even crashes the machine, it
might make sense to put /var/cores and /var/crash on a separate ZFS as
well and put some quota on it ...
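On Solaris/illumos that could look roughly like this (sizes are made
up; coreadm is the usual way to point global cores there):

  # separate dataset for core files, capped so they can't eat the pool
  zfs create -o mountpoint=/var/cores -o quota=8G rpool/cores
  coreadm -e global -g /var/cores/core.%f.%p   # global cores go there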
Last but not least, you didn't take BEs (boot environments) into
consideration. E.g. you are running a mail server, which per default
writes to /var/mail/, which is not a separate ZFS. If you run into
problems and decide to switch back to the previous BE, the chances
that you lose mails / see old, deleted stuff again are very high.
Similar for databases which use /var/* - one will lose data! Or other
applications which store state beneath /var, like NFSv3 or */cache/*,
etc. - some might just go crazy and even destroy more data, or will
not work without further maintenance work ...
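A hedged sketch of the idea (names made up; Solaris 11's VARSHARE does
roughly this for you):

  # dataset outside rpool/ROOT/<be>, so every BE sees the same /var/mail
  zfs create -o mountpoint=/var/mail rpool/shared-mail
  # after 'beadm activate <old-be>' + reboot, /var/mail still holds the
  # current mail instead of the old BE's stale copy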
> argument of separate filesystems is a straw man. Separate /,
> /usr, /var, and /tmp filesystems do not prevent the system from
> grinding to a halt if any part of that system misbehaves.
Wrong. It makes a lot of sense to use a separate ZFS for /var and, as
Solaris 11.1 now does, for /var/share/ (they got it halfway baked),
where per default data get stored which need to survive booting into
different BEs (like mail, cores, crash, nfs, statmon, audit - don't
ask me why not db, etc. ...). Yes, it is much better to create a
separate ZFS for certain apps and not mount it beneath /var; however,
in reality there are many so-called admins who don't even know what a
man page is ... ;-)
We have always (~20 years) used a separate fs for /var, which
certainly saved our butts several times. It was a bit of wasted
space/not so flexible before ZFS, but since then there is no real
reason to stop doing it anymore (and yes, we used it on S10 from the
very first time (u4?) as well ;-)).
Have fun,
jel.
--
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 52768