Linda Walsh wrote:
> 
> Michael Tokarev wrote:
>> Unfortunately a UPS does not *really* help here.  Because unless
>> it has a control program which properly shuts the system down on
>> the loss of input power, and the battery really has the capacity
>> to power the system while it's shutting down (anyone tested this?
> ----
>     Yes.  I must say, I am not connected with or paid by APC.
> 
>> With a new UPS?
>> And after a year of use, when the battery is not new?), -- unless
>> the UPS actually has the capacity to shut the system down, it will
>> cut the power at an unexpected time, while the disk(s) still have
>> dirty caches...
> --------
> If you have a "SmartUPS" by "APC", there is a freeware daemon that monitors
[...]

Good stuff.  I knew at least SOME UPSes are good... ;)
Too bad I rarely see such stuff in use by regular
home users...
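
(Just to illustrate what such a control program has to do - a toy
watchdog in C, not the real thing.  The "ups_status" command below is
hypothetical; with something like apcupsd you'd query the daemon
instead.  It shuts the box down once the UPS has been on battery for
a minute, while there is hopefully still charge left to finish the
shutdown and let the disks flush their caches.)

/* toy UPS watchdog: poll a status command, shut down cleanly
 * once input power has been gone for too long */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int on_battery = 0;             /* seconds spent on battery so far */

    for (;;) {
        char status[64] = "";
        FILE *p = popen("ups_status", "r");     /* hypothetical command */

        if (p) {
            if (!fgets(status, sizeof(status), p))
                status[0] = '\0';
            pclose(p);
        }

        if (strncmp(status, "ONBATT", 6) == 0)
            on_battery += 10;
        else
            on_battery = 0;

        if (on_battery >= 60) {
            /* shut down while the battery can still carry us
             * through a clean unmount and cache flush */
            system("shutdown -h now");
            return 0;
        }
        sleep(10);
    }
}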
[]
>> Note also that with linux software raid barriers are NOT supported.
> ------
>     Are you sure about this?  I used to have 3 new IDE drives and
> one older one; when my system booted, XFS checked each drive for
> barriers and turned off barriers for the disk that didn't support
> them.  ... or are you referring specifically to linux-raid setups?

I'm referring specifically to linux-raid setups (software raid).
md devices don't support barriers, for a very simple reason:
once more than one disk drive is involved, the md layer can't
guarantee ordering ACROSS the drives as well.  The problem is
that in case of a power loss during writes, when an array needs
recovery/resync (at least for the parts which were being written,
if bitmaps are in use), the md layer will choose an arbitrary
drive as the "master" and copy its data to the other drive
(speaking of the simplest case, a 2-drive raid1 array).  But the
thing is that one drive may have the last two barriers written
(I mean the data that was "associated" with the barriers), and
the other neither of the two - in two different places.  And
hence we may see quite some inconsistency here.

This is regardless of whether the underlying component devices
support barriers or not.
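
Just to make the 2-drive raid1 case concrete - a toy model, not real
md code.  Suppose the filesystem writes a data block and then a
commit record, with a barrier in between, so the commit record must
never become visible without its data.  Power is cut at a point where
mirror 0 got both writes and mirror 1 got neither; until the resync
finishes, a read may be served from either drive, so the "impossible"
combination can show up:

/* toy model of a 2-drive raid1 right after a power cut */
#include <stdio.h>

#define DATA_BLK   0
#define COMMIT_BLK 1

int main(void)
{
    /* per-mirror block contents: 0 = old data, 1 = new data */
    int mirror[2][2] = {
        { 1, 1 },   /* drive 0: data and commit both made it */
        { 0, 0 },   /* drive 1: neither made it */
    };
    int d_from, c_from;

    /* before resync, each read may come from either drive */
    for (d_from = 0; d_from < 2; d_from++) {
        for (c_from = 0; c_from < 2; c_from++) {
            int data   = mirror[d_from][DATA_BLK];
            int commit = mirror[c_from][COMMIT_BLK];

            printf("data from drive %d (%s), commit from drive %d (%s)%s\n",
                   d_from, data   ? "new" : "old",
                   c_from, commit ? "new" : "old",
                   (commit && !data) ?
                   "  <- commit visible without its data" : "");
        }
    }
    return 0;
}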

>     Would it be possible on boot to have xfs probe the RAID array,
> physically, to see if barriers are really supported (or not), and disable
> them if they are not (and optionally disable write caching, but that's
> a major performance hit in my experience)?

XFS already probes the devices as you describe, exactly the
same way as you've seen with your IDE disks, and disables
barriers when they aren't supported.

The question and the confusion were about what happens when
barriers are disabled (provided, again, that we don't rely
on a UPS or other external things).  As far as I understand,
when barriers are working properly, XFS should be safe with
respect to power loss (I'm still a bit unsure about this).
Now, when barriers are turned off (for whatever reason), is
it still as safe?  I don't know.  Does it use regular cache
flushes in place of barriers in that case (which ARE
supported by the md layer)?
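
If it doesn't, the usual (and costly) fallback is the one you hinted
at earlier - turn off the drives' write-back cache, e.g. hdparm -W0.
A minimal sketch of the same thing via ioctl; /dev/hda, the legacy
IDE driver and the drive actually honouring the request are all
assumptions here:

/* disable the write-back cache on an IDE drive (like hdparm -W0) */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

int main(void)
{
    int fd = open("/dev/hda", O_RDONLY);

    if (fd < 0) {
        perror("open /dev/hda");
        return 1;
    }
    if (ioctl(fd, HDIO_SET_WCACHE, 0) < 0) {    /* 0 = write cache off */
        perror("HDIO_SET_WCACHE");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}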

Generally, it has been said numerous times that XFS is not
"powercut-friendly", and that it should only be used where
everything is stable, including power.  Hence I'm afraid to
deploy it where I know the power is not stable (we have about
70 such sites here, with servers at each, where UPS batteries
aren't always replaced in time - ext3 has never crashed so
far, while ext2 did).

Thanks.

/mjt