Re: [zfs-discuss] slow speed problem with a new SAS shelf

2012-08-27 Thread Sašo Kiselkov
On 08/26/2012 07:40 AM, Yuri Vorobyev wrote:
 Can someone with a Supermicro JBOD equipped with SAS drives and an LSI
 HBA do this sequential read test?

I did that on an SC847 with 45 drives; read speeds around 2 GB/s aren't a
problem.

 Don't forget to set primarycache=none on the testing dataset.

There's your problem. By disabling the cache you've essentially
disabled prefetch. Why are you doing that?

--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slow speed problem with a new SAS shelf

2012-08-27 Thread Yuri Vorobyev

On 27.08.2012 14:02, Sašo Kiselkov wrote:


Can someone with a Supermicro JBOD equipped with SAS drives and an LSI
HBA do this sequential read test?


I did that on an SC847 with 45 drives; read speeds around 2 GB/s aren't a
problem.

Thanks for the info.


Don't forget to set primarycache=none on the testing dataset.


There's your problem. By disabling the cache you've essentially
disabled prefetch. Why are you doing that?


Hm. The box has 96 GB of RAM. I was trying to exclude the influence of the
ARC cache. I hadn't thought about prefetch...

readspeed is:
readspeed ()  { dd if=$1 of=/dev/null bs=1M ;}


root@atom:/sas1/test# zfs set primarycache=metadata sas1/test
root@atom:/sas1/test# readspeed 3g
3000+0 records in
3000+0 records out
3145728000 bytes (3.1 GB) copied, 19.2203 s, 164 MB/s

Is prefetch still disabled?

root@atom:/sas1/test# zfs set  primarycache=all sas1/test
root@atom:/sas1/test# readspeed 3g
3000+0 records in
3000+0 records out
3145728000 bytes (3.1 GB) copied, 3.99195 s, 788 MB/s

This seems to be the disk read speed with prefetch enabled.

root@atom:/sas1/test# readspeed 3g
3000+0 records in
3000+0 records out
3145728000 bytes (3.1 GB) copied, 0.901665 s, 3.5 GB/s
root@atom:/sas1/test# readspeed 3g
3000+0 records in
3000+0 records out
3145728000 bytes (3.1 GB) copied, 1.02127 s, 3.1 GB/s
root@atom:/sas1/test# readspeed 3g
3000+0 records in
3000+0 records out
3145728000 bytes (3.1 GB) copied, 0.86884 s, 3.6 GB/s

These results are obviously from memory.

Is there any way to disable ARC for testing and leave prefetch enabled?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slow speed problem with a new SAS shelf

2012-08-27 Thread Sašo Kiselkov
On 08/27/2012 10:37 AM, Yuri Vorobyev wrote:
 Is there any way to disable ARC for testing and leave prefetch enabled?

No. The reason is quite simple: prefetch is a mechanism separate from
your application's direct read requests. Prefetch runs ahead of your
anticipated read requests and places blocks it expects you'll need in
the ARC, so by disabling the ARC you've disabled prefetch as well.

You can get around the problem by exporting and importing the pool
between test runs, which will clear the ARC, so do:

# dd if=/dev/zero of=testfile bs=1024k count=1
# zpool export sas1
# zpool import sas1
# dd if=testfile of=/dev/null bs=1024k
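
A minimal sketch of wrapping those steps into a reusable script - the pool
name sas1 and the test-file path below are assumptions, so adjust them to
your layout:

#!/usr/bin/sh
# Flush the ARC by exporting/importing the pool, then time one cold
# sequential read of a pre-created test file.
POOL=sas1
FILE=/sas1/test/bigfile         # created beforehand with dd from /dev/zero

zpool export $POOL || exit 1    # drops every cached block belonging to the pool
zpool import $POOL || exit 1    # re-import: the ARC starts cold, prefetch stays active
dd if=$FILE of=/dev/null bs=1M  # cold read - this is the number to compare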

Cheers,
--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slow speed problem with a new SAS shelf

2012-08-27 Thread Yuri Vorobyev

On 27.08.2012 14:43, Sašo Kiselkov wrote:


Is there any way to disable ARC for testing and leave prefetch enabled?


No. The reason is quite simple: prefetch is a mechanism separate from
your application's direct read requests. Prefetch runs ahead of your
anticipated read requests and places blocks it expects you'll need in
the ARC, so by disabling the ARC you've disabled prefetch as well.

You can get around the problem by exporting and importing the pool
between test runs, which will clear the ARC, so do:

# dd if=/dev/zero of=testfile bs=1024k count=1
# zpool export sas1
# zpool import sas1
# dd if=testfile of=/dev/null bs=1024k


Thank you very much, Sašo.
Now I see the hardware works without problems.

I created another 10-disk zpool of mirror pairs for testing:

root@atom:/# zpool export sas2 ; zpool import sas2
root@atom:/# readspeed /sas2/5g
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 5.73728 s, 936 MB/s
root@atom:/# zpool export sas2 ; zpool import sas2
root@atom:/# readspeed /sas2/5g
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 5.63869 s, 952 MB/s



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slow speed problem with a new SAS shelf

2012-08-27 Thread Sašo Kiselkov
On 08/27/2012 12:58 PM, Yuri Vorobyev wrote:
 On 27.08.2012 14:43, Sašo Kiselkov wrote:
 
 Is there any way to disable ARC for testing and leave prefetch enabled?

 No. The reason is quite simple: prefetch is a mechanism separate from
 your application's direct read requests. Prefetch runs ahead of your
 anticipated read requests and places blocks it expects you'll need in
 the ARC, so by disabling the ARC you've disabled prefetch as well.

 You can get around the problem by exporting and importing the pool
 between test runs, which will clear the ARC, so do:

 # dd if=/dev/zero of=testfile bs=1024k count=1
 # zpool export sas1
 # zpool import sas1
 # dd if=testfile of=/dev/null bs=1024k
 
 Thank you very much, Sašo.

You're very welcome.

 Now I see the hardware works without problems.
 
 I created another 10-disk zpool of mirror pairs for testing:
 
 root@atom:/# zpool export sas2 ; zpool import sas2
 root@atom:/# readspeed /sas2/5g
 5120+0 records in
 5120+0 records out
 5368709120 bytes (5.4 GB) copied, 5.73728 s, 936 MB/s
 root@atom:/# zpool export sas2 ; zpool import sas2
 root@atom:/# readspeed /sas2/5g
 5120+0 records in
 5120+0 records out
 5368709120 bytes (5.4 GB) copied, 5.63869 s, 952 MB/s

Sounds about right, that's ~94 MB/s from each drive. Have you tried
running multiple dd's in parallel? ZFS likes to have its pipelines
fairly saturated, so chances are you'll get higher total throughput
with multiple parallel readers, like this:

(create multiple files like /sas2/5g_1, /sas2/5g_2, /sas2/5g_3, etc...)
# zpool export sas2 && zpool import sas2
# readspeed /sas2/5g_1 & readspeed /sas2/5g_2 & readspeed /sas2/5g_3

Then simply sum up the MB/s from each dd operation. Also, if possible,
use larger files, not just 5 GB - something on the order of a few
hundred GB. You can then watch your pool's performance via
zpool iostat sas2 5 (that's how I usually do it).
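
For what it's worth, a rough sketch of that parallel-reader idea as a script
(the file names, the number of readers, and the pool name sas2 are
assumptions - adjust to whatever you actually created):

#!/usr/bin/sh
# Hypothetical parallel sequential-read test. Assumes /sas2/big_1 .. /sas2/big_4
# were pre-created with dd from /dev/zero and are large enough to defeat caching.
zpool export sas2 && zpool import sas2     # start every run with a cold ARC
for f in /sas2/big_1 /sas2/big_2 /sas2/big_3 /sas2/big_4; do
        dd if=$f of=/dev/null bs=1M &      # one background reader per file
done
wait                                       # each dd prints its own MB/s when done
# In a second terminal, watch the aggregate throughput:
#   zpool iostat sas2 5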

Cheers,
--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Zpool recovery after too many failed disks

2012-08-27 Thread Mark Wolek
RAIDz set, lost a disk, replaced it... lost another disk during resilver.  
Replaced it, ran another resilver, and now it shows all disks with too many 
errors.

Is it safe to say this is getting rebuilt and restored, or is there hope of
recovering some of the data? I assume that's the case because rpool/filemover
has errors - is that fixable?



# zpool status -v
  pool: rpool
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
scrub: resilver completed after 4h51m with 190449 errors on Sat Aug 25 05:45:12 2012
config:

        NAME            STATE     READ WRITE CKSUM
        rpool           DEGRADED  455K     0     0
          raidz1        DEGRADED  455K     0     0
            c3t0d0      DEGRADED     0     0     0  too many errors
            c2t1d0      DEGRADED     0     0     0  too many errors
            replacing   UNAVAIL      0     0     0  insufficient replicas
              c2t0d0s0/o  FAULTED    0     0     0  too many errors
              c2t0d0    FAULTED      0     0     0  too many errors
            c3t1d0      DEGRADED     0     0     0  too many errors
            c4t0d0      DEGRADED     0     0     0  too many errors
            c4t1d0      DEGRADED     0     0     0  too many errors

errors: Permanent errors have been detected in the following files:

        rpool/filemover:<0x1>

# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool            6.64T      0  29.9K  /rpool
rpool/filemover  6.64T   323G  6.32T  -

Thanks
Mark

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool recovery after too many failed disks

2012-08-27 Thread Sašo Kiselkov
On 08/27/2012 09:02 PM, Mark Wolek wrote:
 RAIDz set, lost a disk, replaced it... lost another disk during resilver.  
 Replaced it, ran another resilver, and now it shows all disks with too many 
 errors.
 
 Is it safe to say this is getting rebuilt and restored, or is there hope of
 recovering some of the data? I assume that's the case because rpool/filemover
 has errors - is that fixable?

It seems you fell into the standard two-disk-failure-during-resilver
scenario. If that is really the case, your pool is most likely lost,
because raidz works by treating each block as a single raidz stripe and
spreading it over the component devices - as a result, most of your
blocks will probably be missing some data.

You can try to retrieve as much data from the pool as possible (via
something like rsync or tar), though I'm not exactly certain how well
(or whether at all) that will work.
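
A very rough sketch of that kind of copy-out, assuming the pool still imports
(the destination path is hypothetical, and note that rpool/filemover itself is
a zvol, so a file-level copy only covers datasets mounted as file systems):

# List the files flagged with permanent errors first:
zpool status -v rpool
# Copy whatever is still readable; rsync reports read failures and keeps going:
rsync -a /rpool/ /backup/rpool-salvage/
# Or archive with GNU tar, skipping files it cannot read:
tar --ignore-failed-read -cf /backup/rpool-salvage.tar /rpool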

--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Help! OS drive lost, can I recover data?

2012-08-27 Thread Adam
Hi All,

Bit of a newbie here, in desperate need of help.

I had a fileserver based on FreeNAS/ZFS - 4 SATA drives in RaidZ, with the
OS on a USB stick (actually, a spare MicroSD card in a USB adapter).
Yesterday we had a power outage - that seems to have fried the MicroSD card.
The other disks *appear* to be OK (although I've done nothing much to check
them yet - they're being recognised on boot), but the OS is gone - the
MicroSD is completely unreadable. I think I was using FreeNAS 0.7 - I
honestly can't remember.

Can I recover the data? Can anyone talk me through the process?

Thanks - Adam...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help! OS drive lost, can I recover data?

2012-08-27 Thread eXeC001er
If your data-disks are OK, then do not worry. Just reinstall your OS and
import the data-pool.
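
For example (a minimal sketch - the pool name tank is only a placeholder,
since the original post doesn't name the pool):

# After the OS reinstall, list the pools visible on the attached disks:
zpool import
# Then import the data pool by name; -f is usually needed because the old
# system never exported it cleanly:
zpool import -f tank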

Thanks.

2012/8/28 Adam adamthek...@gmail.com

 Hi All,

 Bit of a newbie here, in desperate need of help.

 I had a fileserver based on FreeNAS/ZFS - 4 SATA drives in RaidZ, with the
 OS on a USB stick (actually, a spare MicroSD card in a USB adapter).
 Yesterday we had a power outage - that seems to have fried the MicroSD
 card. The other disks *appear* to be OK (although I've done nothing much to
 check them yet - they're being recognised on boot), but the OS is gone -
 the MicroSD is completely unreadable. I think I was using FreeNAS 0.7 - I
 honestly can't remember.

 Can I recover the data? Can anyone talk me through the process?

 Thanks - Adam...


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss