On 29.01.2011 00:48, J. Ryan Earl wrote:
> On Fri, Jan 28, 2011 at 1:44 PM, Joseph Hauptmann <[email protected]> wrote:
>> Yes, I did try that. It doesn't make much of a (speed) difference.
>> It seems that the problem is less that rm gets stuck for good than that
>> it takes really long pauses (about 20 sec.) while deleting; during those
>> pauses the whole partition is stuck and iostat reports 100% utilization,
>> compared to ~95% while files are actually being deleted.
> What does iostat report on the request queue? What's the average queue
> length? What's the average I/O request latency? I suspect they are high.
Not really, even when the DRBD device blocks I/O access completely for a
few seconds.
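(For anyone wanting to check the numbers JR asks about: sysstat's extended iostat output reports the average queue length and per-request latency per device. The device name sdX below is a placeholder for the actual member disk or md device.)

```shell
# Extended per-device statistics, refreshed every 2 seconds.
# avgqu-sz = average length of the request queue,
# await    = average time (ms) a request spends queued plus serviced,
# %util    = fraction of time the device was busy (saturation).
iostat -x 2 /dev/sdX
```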
>> The filesystem on resource 0 is ext3 with a block size of 4096 and lies
>> on a SW-RAID5 (far from ideal - I know).
> Far from ideal is an understatement. You're actually using the worst
> possible RAID configuration: RAID stripe parity without a write cache. The
> performance you see will quickly become asymptotically bound by the
> performance of a single spindle in your RAID group. You're running internal
> metadata (like a journal) on DRBD, and you're getting double small FUA
> writes from the filesystem journal in addition to DRBD barriers.
>
> Turn off ext3 journalling. Turn off DRBD barriers. Run your maintenance to
> remove the files. Turn them both back on.
> -JR
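For readers following along, JR's sequence might look roughly like the sketch below. The resource name r0, device /dev/drbd0, mount point /mnt/data, and the rm target are all placeholders, and the barrier options assume a DRBD 8.3-style disk section; adapt to your own setup.

```shell
# Sketch of the journalling/barrier toggle JR describes (all names are
# placeholders, not taken from the original thread).

# 1. Disable DRBD barriers/flushes: in the disk { } section of drbd.conf
#    add "no-disk-barrier; no-disk-flushes;" then apply it:
drbdadm adjust r0

# 2. Drop the ext3 journal (the filesystem must be unmounted first):
umount /mnt/data
tune2fs -O ^has_journal /dev/drbd0
e2fsck -f /dev/drbd0            # force a check after changing features
mount /dev/drbd0 /mnt/data

# 3. Run the maintenance (whatever the actual cleanup is):
rm -rf /mnt/data/spool/old/*

# 4. Re-create the journal and re-enable barriers:
umount /mnt/data
tune2fs -j /dev/drbd0
mount /dev/drbd0 /mnt/data
# ...then revert the drbd.conf change and run "drbdadm adjust r0" again.
```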
Disconnecting the peer was the first thing I did. What really bugs me is
that deleting the same kind of files on an ext3 LVM volume that lies next
to the DRBD resource on the same md device takes about a minute for 100k
files - without blocking access.
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user