Re: 3.10: discard/trim support on md-raid1?

2013-07-16 Thread CoolCold
Neil, I've tried looking through the commit logs but failed to find the
commits where discard/trim support was added.
I was searching via
http://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/log/drivers/md?id=9f2a940965286754f3a34d5737c3097c05db8725&qt=grep&q=discard+support
and also tried just "discard" without "support" - that turned up commits
about dm, not md, though.
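[For reference, the equivalent search against a local clone would be
something like this - a sketch; the grep patterns and file paths are my
guesses, not taken from the cgit query above:

# git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
# cd linux-stable
# git log --oneline -i --grep=discard --grep=TRIM -- drivers/md/raid1.c
]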

Can you shed some light here? Thanks in advance.

(resent in plaintext mode, please reply to this one)


On Tue, Jul 16, 2013 at 11:15 AM, NeilBrown <ne...@suse.de> wrote:
> On Sat, 13 Jul 2013 06:34:19 -0400 "Justin Piszcz" <jpis...@lucidpixels.com>
> wrote:
>
>> Hello,
>>
>> Running 3.10 and I see the following for an md-raid1 of two SSDs:
>>
>> Checking /sys/block/md1/queue:
>> add_random: 0
>> discard_granularity: 512
>> discard_max_bytes: 2147450880
>> discard_zeroes_data: 0
>> hw_sector_size: 512
>> iostats: 0
>> logical_block_size: 512
>> max_hw_sectors_kb: 32767
>> max_integrity_segments: 0
>> max_sectors_kb: 512
>> max_segment_size: 65536
>> max_segments: 168
>> minimum_io_size: 512
>> nomerges: 0
>> nr_requests: 128
>> optimal_io_size: 0
>> physical_block_size: 512
>> read_ahead_kb: 8192
>> rotational: 1
>> rq_affinity: 0
>> scheduler: none
>> write_same_max_bytes: 0
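[Aside: all of these values can be dumped in one pass - a sketch, assuming
the array is md1:

# grep . /sys/block/md1/queue/*
]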
>>
>> What should be seen:
>> rotational: 0
>
> What has "rotational" got to do with "supports discard"?
> There may be some correlation, but it isn't causal.
>
>> And possibly:
>> discard_zeroes_data: 1
>
> This should be set as the 'or' of the same value from the component devices.  It
> does not enable or disable the use of discard.
>
> I don't think that "does this device support discard" appears in sysfs.
>
> I believe trim does work on md/raid1 if the underlying devices all support it.
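[One way to check this in practice, I believe - a sketch; the device name
and mount point are assumptions:

# cat /sys/block/md1/queue/discard_max_bytes   # non-zero: the queue advertises discard
# fstrim -v /mnt/md1                           # issues TRIM via the filesystem; errors out if unsupported
]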
>
> NeilBrown
>
>
>
>>
>> Can anyone confirm whether there is a workaround to allow TRIM when using
>> md-raid1?
>>
>> Some related discussion here:
>> http://us.generation-nt.com/answer/md-rotational-attribute-help-206571222.html
>> http://www.progtown.com/topic343938-ssd-strange-itself-conducts.html
>>
>>
>> Justin.
>>
>>
>



-- 
Best regards,
[COOLCOLD-RIPN]

Re: RAID extremely slow

2012-07-25 Thread CoolCold
On Thu, Jul 26, 2012 at 5:55 AM, Kevin Ross <ke...@familyross.net> wrote:
>
> Thank you very much for taking the time to look into this.
>
>
> On 07/25/2012 06:00 PM, Phil Turmel wrote:
>>
>> Piles of small reads scattered across multiple drives, and a
>> concentration of queued writes to /dev/sda.  What's on /dev/sda?
>> It's not a member of the raid, so it must be some other system task
>> involved.
>
>
> /dev/sda1 is the root filesystem.  The writes were most likely by MySQL,
> but I would have to run iotop to be sure.
>
>
>> [ The output of "lsdrv" [1] might be useful here, along with
>> "mdadm -D /dev/md0" and "mdadm -E /dev/[b-j]" ]
>
>
> Here you go: http://pastebin.ca/2174740
>
>
>> MythTV is trying to flush recorded video to disk, I presume.  Sync is
>> known to cause stalls--a great deal of work is ongoing to improve
>> this.  How old is this kernel?
>
>
> After rebooting, MythTV is currently recording two shows, and the resync
> is running at full speed.
>
>
> # cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : active raid6 sdh1[0] sdd1[9] sde1[10] sdb1[6] sdi1[7] sdc1[4]
> sdf1[3] sdg1[8] sdj1[1]
>   6837311488 blocks super 1.2 level 6, 512k chunk, algorithm 2 [9/9]
> [UUUUUUUUU]
>   [=>...................]  resync =  9.3% (91363840/976758784)
> finish=1434.3min speed=10287K/sec
>
> unused devices: <none>
>
> atop shows the avio of all the drives to be less than 1ms, whereas before
> they were much higher.  It will run for a couple of days under load just fine,
> and then it will come to a halt.
>
> It's a 3.2.21 kernel.  I'm running Debian Testing, and the exact Debian
> package version is:
>
> ii  linux-image-3.2.0-3-686-pae    3.2.21-3    Linux 3.2 for modern PCs
>
>
>>
>>> [51000.672258]  [<c12c409f>] ? sysenter_do_call+0x12/0x28
>>> [51000.672261]  [<c12b>] ? quirk_usb_early_handoff+0x4a9/0x522
>>>
>>> Here is some other possibly relevant info:
>>>
>>> # cat /proc/mdstat
>>> Personalities : [raid6] [raid5] [raid4]
>>> md0 : active raid6 sdh1[0] sdd1[9] sde1[10] sdb1[6] sdi1[7] sdc1[4]
>>> sdf1[3] sdg1[8] sdj1[1]
>>>    6837311488 blocks super 1.2 level 6, 512k chunk, algorithm 2 [9/9]
>>> [UUUUUUUUU]
>>>    [==========>..........]  resync = 51.3% (501954432/976758784)
>>> finish=28755.6min speed=275K/sec
>>
>> Is this resync a weekly check, or did something else trigger it?
>
>
> This is not a scheduled check.  It was triggered by, I believe, an unclean
> shutdown.  An unclean shutdown will trigger a resync.  I don't think it used
> to do this, but I could be remembering wrong.
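[One way to tell the two apart - a sketch, assuming the array is md0: look
at the sync_action attribute while the operation is running:

# cat /sys/block/md0/md/sync_action    # prints check, resync, recover, repair, or idle

A scheduled (e.g. weekly checkarray) run shows "check"; a post-crash pass
shows "resync".]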
>
>
>>
>>> unused devices: <none>
>>>
>>> # cat /proc/sys/dev/raid/speed_limit_min
>>> 1
>>
>> MD is unable to reach its minimum rebuild rate while other system
>> activity is ongoing.  You might want to lower this number to see if that
>> gets you out of the stalls.
>>
>> Or temporarily shut down mythtv.
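[Lowering it on the fly would be something like this sketch - the value
here is picked arbitrarily:

# echo 1000 > /proc/sys/dev/raid/speed_limit_min

or, equivalently:

# sysctl -w dev.raid.speed_limit_min=1000
]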
>
>
> I will try lowering those numbers next time this happens, which will
> probably be within the next day or two.  That's about how often this
> happens.
You might be interested in a write-intent bitmap then; it should help a lot
(a sketch of enabling one is below).
(resending in plain text)
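[Enabling one on an existing array is a one-liner - a sketch, assuming the
array is /dev/md0:

# mdadm --grow --bitmap=internal /dev/md0

With an internal write-intent bitmap, the resync after an unclean shutdown
only revisits regions marked dirty in the bitmap instead of rescanning the
whole array.]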
>
>
>>> # cat /proc/sys/dev/raid/speed_limit_max
>>> 20
>>>
>>> Thanks in advance!
>>> -- Kevin
>>
>> HTH,
>>
>> Phil
>>
>> [1] http://github.com/pturmel/lsdrv
>>
>
> Thanks!
> -- Kevin
>
>




--
Best regards,
[COOLCOLD-RIPN]