I appreciate the feedback on the other filesystems available. Hypothetically, if my choices were (1) ext2 with routine fsck vs. (2) ext3 with no fsck, is one better than the other? I know "better" is a loaded word, so how about "safer" in terms of data preservation and recovery, ignoring other factors like speed/performance. I'm also not concerned with downtime, since this is purely a test environment. Is journaling designed to reduce the need to run fsck? In my case, it seems like running it at all on ext3 isn't an option, but perhaps I just need to familiarize myself further with the program's options.
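Partially answering my own question after some digging: journaling reduces the *need* for routine checks, but e2fsck can still be forced on ext3. A minimal sketch I tried against a scratch image file rather than a real disk (the /tmp path and sizes are just examples for illustration):

```shell
# Build a small ext3 filesystem inside an ordinary file -- no root needed.
IMG=/tmp/ext3-demo.img
dd if=/dev/zero of="$IMG" bs=1M count=16 2>/dev/null
mke2fs -q -F -j "$IMG"    # -j adds an ext3 journal; -F allows a non-device target

# -f forces a full check even though the journal is clean (otherwise e2fsck
# just replays the journal and exits); -n opens read-only, answering "no" to fixes.
e2fsck -f -n "$IMG"

# Periodic boot-time checks are driven by these counters (tunable with tune2fs -c / -i).
tune2fs -l "$IMG" | grep -i 'mount count'
```

So ext3 plus an occasional forced `e2fsck -f` looks like it gets both the journal and the deep check, though I'd want to verify that on this setup before trusting it.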
Note: I'm nowhere close to 24GB of memory on this.. er, system. --- On Mon, 3/22/10, [email protected] <[email protected]> wrote: > From: [email protected] <[email protected]> > Subject: OpenAFS-info digest, Vol 1 #4842 - 12 msgs > To: [email protected] > Date: Monday, March 22, 2010, 12:01 PM > Send OpenAFS-info mailing list > submissions to > [email protected] > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.openafs.org/mailman/listinfo/openafs-info > or, via email, send a message with subject or body 'help' > to > [email protected] > > You can reach the person managing the list at > [email protected] > > When replying, please edit your Subject line so it is more > specific > than "Re: Contents of OpenAFS-info digest..." > > > Today's Topics: > > 1. Re: Re: about failover - 2 servers > (one "master" one > replicas) - a bit long > (Vladimir Konrad) > 2. Re: about failover - 2 servers (one > "master" one > replicas) - a bit long > (Harald Barth) > 3. Re: Filesystem Types & FSCK > (Harald Barth) > 4. Re: Re: about failover - 2 servers > (one "master" one > replicas) - a bit long > (Harald Barth) > 5. Re: Filesystem Types & FSCK (Dirk > Heinrichs) > 6. Re: about failover - 2 servers (one > "master" one replicas) - a bit > long (Andrew Deason) > 7. Re: Filesystem Types & FSCK (Lars > Schimmer) > 8. Re: Filesystem Types & FSCK (Chaz > Chandler) > > --__--__-- > > Message: 1 > Date: Mon, 22 Mar 2010 15:00:55 +0000 > From: Vladimir Konrad <[email protected]> > To: [email protected] > Organization: lse > Subject: Re: [OpenAFS] Re: about failover - 2 servers (one > "master" one > replicas) - a bit long > > > Hello Andrew, > > > > Cheers, I forgot to say _by hand_. > > You can do this with 'vos convertROtoRW', but it's > intended to be more > > of a tool for disaster recovery (when you've > permanently lost the RW, > > and all you have are ROs). Not generally for keeping > up availability > > while a server is temporarily down. 
> > > Note that if A goes down, you convertROtoRW on B, and > A comes back up, > > you'll now have 2 copies of the RW. The one on B will > be the one used, > > but A has another copy that may contain data you want. > This can get > > rather confusing if you try to sync the VLDB with the > list of volumes > > that are on each server. > > Thank you, good to know this, it would be used as the last > resort. > > > Automatic failover has been done using multiple > servers sharing the same > > backend storage; I don't think anyone's done it with > separate storage, > > but we're not stopping you from doing so. You could in > theory do > > something like that with some other HA software, and > writing some > > scripts to issue 'vos' commands to do the > conversions. > > Cheers, it is quite possible some servers would get hooked > into SAN, > so it is an option. > > > But it's usually a lot easier if you can just treat RO > volumes as > > high-availability, and RW volumes not. > > Makes sense, it looks having multiple RW volumes would not > scale that well - > writes would have to go to each volume, + synchronisation > would get messy > I guess... > > Thank you all, I have done the replicas. > > Do I understand it correctly (observation), a read-only > replica placed on > the same partition as the read-write volume does not "cost" > much in terms > of disc-space? I have released few replicas and the disc > usage did not go > up. Is it along the principle of LVM snapshots? > > Kind regards, > > Vladimir > > ------ > > because it reverses the logical flow of conversation + > it is hard to follow. > >> why not? > >>> do not put a reply at the top of the message, > please... 
> --__--__-- > > Message: 2 > Date: Mon, 22 Mar 2010 16:06:16 +0100 (CET) > To: [email protected] > From: Harald Barth <[email protected]> > Subject: Re: [OpenAFS] about failover - 2 servers (one "master" one replicas) - a bit long > > > > OpenAFS is not designed for automatic failover. > > serverA volume.readonly -> serverB volume.readonly works automatically > > serverA volume.readonly -> serverB volume (readwrite) does _not_ fail over automatically > > Harald. > > --__--__-- > > Message: 3 > Date: Mon, 22 Mar 2010 16:08:00 +0100 (CET) > To: [email protected] > Cc: [email protected] > From: Harald Barth <[email protected]> > Subject: Re: [OpenAFS] Filesystem Types & FSCK > > > I use xfs on Linux for /vicep*. > > Harald. > > --__--__-- > > Message: 4 > Date: Mon, 22 Mar 2010 16:10:17 +0100 (CET) > To: [email protected] > Cc: [email protected] > From: Harald Barth <[email protected]> > Subject: Re: [OpenAFS] Re: about failover - 2 servers (one "master" one replicas) - a bit long > > > > I have released a few replicas and the disc usage did not go > > up. > > Space is shared unless you change the RW so it differs from the RO. > After the next vos release, it will be shared again. > > > Is it along the principle of LVM snapshots? > > Kindasorta. > > Harald. > > --__--__-- > > Message: 5 > To: [email protected] > Date: Mon, 22 Mar 2010 16:33:18 +0100 > From: "Dirk Heinrichs" <[email protected]> > Organization: Privat > Subject: Re: [OpenAFS] Filesystem Types & FSCK > > On Monday, 22 March 2010 15:57:23, J wrote: > > > So I'm wondering whether you have any advice or comments about any of this. > > You could use XFS, it doesn't even have fsck (it's a dummy, to make distribution's boot scripts happy). > > Bye...
> > Dirk > > --__--__-- > > Message: 6 > To: [email protected] > From: Andrew Deason <[email protected]> > Date: Mon, 22 Mar 2010 10:45:21 -0500 > Organization: Sine Nomine Associates > Subject: [OpenAFS] Re: about failover - 2 servers (one > "master" one replicas) - a bit > long > > On Mon, 22 Mar 2010 15:00:55 +0000 > Vladimir Konrad <[email protected]> > wrote: > > > > But it's usually a lot easier if you can just > treat RO volumes as > > > high-availability, and RW volumes not. > > > > Makes sense, it looks having multiple RW volumes would > not scale that > > well - writes would have to go to each volume, + > synchronisation would > > get messy I guess... > > I think the hardest part is conflict resolution, but I'm > not too > familiar with it. Coda is able to do RW replication, but as > I recall can > require manual conflict resolution (2 writes happened at > the same time, > and you must manually specify which one wins). > > I believe there have been at least one or two attempts to > do this > in-band in AFS (you can read about one proposed way of > doing it at > <http://www.student.nada.kth.se/~noora/exjobb/filer.html>). > But nobody's > been able to do it yet; it is a hard problem to solve. It's > also one of > the suggested OpenAFS GSOC projects: <http://www.openafs.org/gsoc.html>. > > > Do I understand it correctly (observation), a > read-only replica placed > > on the same partition as the read-write volume does > not "cost" much in > > terms of disc-space? > > Yes, as long as your RW does not differ much from your RO. > That is one > reason why it's almost always a good idea to have an RO on > the same > server/partition as the RW, if you have any ROs for that > RW. > > > I have released few replicas and the disc usage did > not go up. Is it > > along the principle of LVM snapshots? > > Sort of, but arguably not as good. With LVM snapshots and > similar > systems, you get charged space for each block that is > changed. 
With OpenAFS volume clones, you get charged for each file (vnode) that is changed. > > -- > Andrew Deason > [email protected] > > > --__--__-- > > Message: 7 > Date: Mon, 22 Mar 2010 16:46:12 +0100 > From: Lars Schimmer <[email protected]> > Cc: [email protected] > Subject: Re: [OpenAFS] Filesystem Types & FSCK > > Dirk Heinrichs wrote: > > On Monday, 22 March 2010 15:57:23, J wrote: > >> So I'm wondering whether you have any advice or comments about any of this. > > You could use XFS, it doesn't even have fsck (it's a dummy, to make distribution's boot scripts happy). > > XFS has got xfs_check and xfs_repair. > BUT if you have lots of files, xfs_check needs a HUGE amount of memory to run. Even with 24GB of memory my 2TB data directory (non OpenAFS) threw an out-of-memory error on xfs_check. > > > Bye... > > Dirk > > MfG, > Lars Schimmer > -- > TU Graz, Institut für ComputerGraphik & WissensVisualisierung > Tel: +43 316 873-5405 > E-Mail: [email protected] > Fax: +43 316 873-5402 > PGP-Key-ID: 0x4A9B1723 > > --__--__-- > > Message: 8 > Date: Mon, 22 Mar 2010 11:57:02 -0400 > From: Chaz Chandler <[email protected]> > To: [email protected] > Subject: Re: [OpenAFS] Filesystem Types & FSCK > > > XFS has got xfs_check and xfs_repair. > > BUT if you have lots of files, xfs_check needs a HUGE amount of memory to run. Even with 24GB of memory my 2TB data directory (non OpenAFS) threw an out-of-memory error on xfs_check. > > True, but xfs_check != fsck_xfs, which is what would be run at boot.
> xfs_check doesn't need to be run much unless you suspect a problem. > > --__--__-- > > _______________________________________________ > OpenAFS-info mailing list > [email protected] > https://lists.openafs.org/mailman/listinfo/openafs-info > > > End of OpenAFS-info Digest
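P.S. For anyone else following the failover sub-thread: here is my reading of the `vos convertROtoRW` suggestion, sketched as a dry-run wrapper. The server, partition, and volume names are made up, and the caveat Andrew raised still applies — once the old server comes back, you can end up with two RW copies to reconcile:

```shell
# Hedged sketch of a manual RO->RW failover. Nothing is run for real by
# default: DRY_RUN=1 just echoes each command. Set DRY_RUN=0 only against
# a cell you are prepared to clean up afterwards.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

VOLUME=myvol                  # hypothetical volume name
SERVER_B=afs2.example.com     # surviving server holding the RO clone
PART=vicepa                   # partition containing that RO copy

# Promote the RO clone on the surviving server to a read-write volume.
run vos convertROtoRW -server "$SERVER_B" -partition "$PART" -id "$VOLUME"
# Re-release so any remaining ROs track the new RW.
run vos release "$VOLUME"
# Reconcile the VLDB with what is actually on the server's partitions.
run vos syncvldb -server "$SERVER_B"
```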
