Re: [zfs-discuss] ZFS volume corrupted?

2009-03-03 Thread Lars-Gunnar Persson
I thought a ZFS file system wouldn't destroy a ZFS volume? Hmm, I'm  
not sure what to do now ...


First of all, this ZFS volume, Data/subversion1, had been working for a
year, and then suddenly, after a reboot of the Solaris server and a run of
the zpool export and zpool import commands, I started getting problems with
this ZFS volume.


Today I checked some more, after reading this guide: 
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

My main question is: Is my ZFS volume which is part of a zpool lost or  
can I recover it?


Would upgrading the Solaris server to the latest release and then doing a
zpool export and zpool import help?


All advice appreciated :-)

Here is some more information:

-bash-3.00$ zfs list -o  
name,type,used,avail,ratio,compression,reserv,volsize Data/subversion1

NAME              TYPE    USED   AVAIL  RATIO  COMPRESS  RESERV  VOLSIZE
Data/subversion1  volume  22.5K  511G   1.00x  off       250G    250G

I've also learned that the AVAIL column reports what's available in the
zpool and NOT what's available in the ZFS volume.
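
To separate the volume-level numbers from the pool-level ones, the standard
property queries should work here (output omitted):

-bash-3.00$ zfs get used,referenced,reservation,volsize Data/subversion1
-bash-3.00$ zpool list Data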


-bash-3.00$ sudo zpool status -v
Password:
  pool: Data
 state: ONLINE
 scrub: scrub in progress, 5.86% done, 12h46m to go
config:

NAME STATE READ WRITE CKSUM
Data ONLINE   0 0 0
  c4t5000402001FC442Cd0  ONLINE   0 0 0

errors: No known data errors

The interesting thing here is that the scrub should have finished today,
but the progress is much slower than reported here. And will the scrub
help at all in my case?



-bash-3.00$ sudo fmdump
TIME UUID SUNW-MSG-ID
Nov 15 2007 10:16:38 8aa789d2-7f3a-45d5-9f5c-c101d73b795e ZFS-8000-CS
Oct 14 09:31:40.8179 8c7d9847-94b7-ec09-8da7-c352de405b78 FMD-8000-2K

bash-3.00$ sudo fmdump -ev
TIME CLASS ENA
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e688d11500401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68926e600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68a3d3900401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68bc6741
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68d8bb600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68da5b51
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68da5b51
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68da5b51
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68e98191
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e69038551
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e69038551
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e692a4ca1
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68bc6741
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68bc6741
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e69038551
Nov 15 2007 09:33:52 ereport.fs.zfs.data
0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68a3d3900401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68a3d3900401
Nov 15 2007 10:16:12 ereport.fs.zfs.vdev.open_failed
0x0533bb1b56400401
Nov 15 2007 10:16:12 ereport.fs.zfs.zpool   
0x0533bb1b56400401
Oct 14 09:31:31.6092 ereport.fm.fmd.log_append  
0x02eb96a8b6502801
Oct 14 09:31:31.8643 ereport.fm.fmd.mod_init
0x02ec89eadd100401



On 3. mars. 2009, at 08.10, Lars-Gunnar Persson wrote:


I've turned off iSCSI sharing at the moment.

My first question is: how can ZFS report that available is larger than the
reservation on a ZFS volume? I also know that used should be larger than
22.5K. Isn't this strange?

Re: [zfs-discuss] ZFS volume corrupted?

2009-03-03 Thread O'Shea, Damien

Hi,

The reason ZFS is saying that the available is larger is because in ZFS the
size of the pool is always available to all the ZFS filesystems that
reside in the pool. Setting a reservation will guarantee that the reservation
size is reserved for the filesystem/volume, but you can change that on the
fly.

You can see that if you create another filesystem within the pool, the
reservation in use by your volume will be deducted from the available
size.

Like below:



r...@testfs# zfs create -V 10g testpool/test
r...@testfs# zfs get all testpool
NAME  PROPERTY VALUE  SOURCE
testpool  type filesystem -
testpool  creation Wed Feb 11 13:17 2009  -
testpool  used 10.1G  -
testpool  available124G   -
testpool  referenced   100M   -
testpool  compressratio1.00x  -
testpool  mounted  yes-

Here the available is 124g as the volume has been set to 10g from a pool of
134g. If we set a reservation like this

r...@test1# zfs set reservation=10g testpool/test
r...@test1# zfs get all testpool/test
NAME   PROPERTY VALUE  SOURCE
testpool/test  type volume -
testpool/test  creation Tue Mar  3 10:13 2009  -
testpool/test  used 10G-
testpool/test  available134G   -
testpool/test  referenced   16K-
testpool/test  compressratio1.00x  -

We can see that the available is now 134G, which is the available size of the
rest of the pool + the 10G reservation that we have set. So in theory this
volume can grow to the complete size of the pool.

So if we have a look at the available space now in the pool, we see

r...@test1# zfs get all testpool
NAME  PROPERTY VALUE  SOURCE
testpool  type filesystem -
testpool  creation Wed Feb 11 13:17 2009  -
testpool  used 10.1G  -
testpool  available124G   -
testpool  referenced   100M   -
testpool  compressratio1.00x  -
testpool  mounted  yes-

124G, with 10G used to account for the size of the volume!

So if we now create another filesystem like this

r...@test1# zfs create testpool/test3
r...@test1# zfs get all testpool/test3
NAMEPROPERTY VALUE  SOURCE
testpool/test3  type filesystem -
testpool/test3  creation Tue Mar  3 10:19 2009  -
testpool/test3  used 18K-
testpool/test3  available124G   -
testpool/test3  referenced   18K-
testpool/test3  compressratio1.00x  -
testpool/test3  mounted  yes-

We see that the total amount available to the filesystem is the amount of
space in the pool minus the 10G reservation. Let's set the reservation to
something bigger.

r...@test1# zfs set volsize=100g testpool/test
r...@test1# zfs set reservation=100g testpool/test
r...@test1# zfs get all testpool/test
NAME   PROPERTY VALUE  SOURCE
testpool/test  type volume -
testpool/test  creation Tue Mar  3 10:13 2009  -
testpool/test  used 100G   -
testpool/test  available134G   -
testpool/test  referenced   16K-

So the available is still 134G, which is the rest of the pool + the
reservation set.

r...@test1# zfs get all testpool
NAME  PROPERTY VALUE  SOURCE
testpool  type filesystem -
testpool  creation Wed Feb 11 13:17 2009  -
testpool  used 100G   -
testpool  available33.8G  -
testpool  referenced   100M   -
testpool  compressratio1.00x  -
testpool  mounted  yes-

The pool however now only has 33.8G left, which should be the same for all
the other filesystems in the pool.
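
As a rough cross-check of that accounting (assuming zfs get accepts -H and
-p on this build, so the values come back as exact byte counts):

r...@test1# zfs get -Hp -o value available testpool          # free space left in the pool, in bytes
r...@test1# zfs get -Hp -o value reservation testpool/test   # the volume's reservation, in bytes

Adding the two numbers should land at roughly the 134G that testpool/test
reports as available (33.8G free in the pool + the 100G reservation).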

Hope that helps.




-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org]on Behalf Of Lars-Gunnar
Persson
Sent: 03 March 2009 07:11
To: Richard Elling
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS volume corrupted?



I've turned off iSCSI sharing at the moment.

My first question is: how can ZFS report that available is larger than the
reservation on a ZFS volume? I also know that used should be larger than
22.5K. Isn't this strange?

Lars-Gunnar Persson

On 3. mars. 2009, at 00.38, 

Re: [zfs-discuss] ZFS volume corrupted?

2009-03-03 Thread Lars-Gunnar Persson
Thank you for your long reply. I don't believe that will help me get
my ZFS volume back, though.


From my last reply to this list I confirm that I do understand what  
the AVAIL column is reporting when running the zfs list command.


hmm, still confused ...

Regards,

Lars-Gunnar Persson

On 3. mars. 2009, at 11.26, O'Shea, Damien wrote:



Hi,

The reason ZFS is saying that the available is larger is because in ZFS the
size of the pool is always available to all the ZFS filesystems that
reside in the pool. Setting a reservation will guarantee that the
reservation size is reserved for the filesystem/volume, but you can change
that on the fly.

You can see that if you create another filesystem within the pool, the
reservation in use by your volume will be deducted from the available
size.

Like below:



r...@testfs# zfs create -V 10g testpool/test
r...@testfs# zfs get all testpool
NAME  PROPERTY VALUE  SOURCE
testpool  type filesystem -
testpool  creation Wed Feb 11 13:17 2009  -
testpool  used 10.1G  -
testpool  available124G   -
testpool  referenced   100M   -
testpool  compressratio1.00x  -
testpool  mounted  yes-

Here the available is 124g as the volume has been set to 10g from a  
pool of

134g. If we set a reservation like this

r...@test1# zfs set reservation=10g testpool/test
r...@test1# zfs get all testpool/test
NAME   PROPERTY VALUE  SOURCE
testpool/test  type volume -
testpool/test  creation Tue Mar  3 10:13 2009  -
testpool/test  used 10G-
testpool/test  available134G   -
testpool/test  referenced   16K-
testpool/test  compressratio1.00x  -

We can see that the available is now 134G, which is the available size of
the rest of the pool + the 10G reservation that we have set. So in theory
this volume can grow to the complete size of the pool.

So if we have a look at the available space now in the pool, we see

r...@test1# zfs get all testpool
NAME  PROPERTY VALUE  SOURCE
testpool  type filesystem -
testpool  creation Wed Feb 11 13:17 2009  -
testpool  used 10.1G  -
testpool  available124G   -
testpool  referenced   100M   -
testpool  compressratio1.00x  -
testpool  mounted  yes-

124G, with 10G used to account for the size of the volume!

So if we now create another filesystem like this

r...@test1# zfs create testpool/test3
r...@test1# zfs get all testpool/test3
NAMEPROPERTY VALUE  SOURCE
testpool/test3  type filesystem -
testpool/test3  creation Tue Mar  3 10:19 2009  -
testpool/test3  used 18K-
testpool/test3  available124G   -
testpool/test3  referenced   18K-
testpool/test3  compressratio1.00x  -
testpool/test3  mounted  yes-

We see that the total amount available to the filesystem is the amount of
space in the pool minus the 10G reservation. Let's set the reservation to
something bigger.

r...@test1# zfs set volsize=100g testpool/test
r...@test1# zfs set reservation=100g testpool/test
r...@test1# zfs get all testpool/test
NAME   PROPERTY VALUE  SOURCE
testpool/test  type volume -
testpool/test  creation Tue Mar  3 10:13 2009  -
testpool/test  used 100G   -
testpool/test  available134G   -
testpool/test  referenced   16K-

So the available is still 134G, which is the rest of the pool + the
reservation set.

r...@test1# zfs get all testpool
NAME  PROPERTY VALUE  SOURCE
testpool  type filesystem -
testpool  creation Wed Feb 11 13:17 2009  -
testpool  used 100G   -
testpool  available33.8G  -
testpool  referenced   100M   -
testpool  compressratio1.00x  -
testpool  mounted  yes-

The pool however now only has 33.8G left, which should be the same  
for all

the other filesystems in the pool.

Hope that helps.




-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org]on Behalf Of Lars-Gunnar
Persson
Sent: 03 March 2009 07:11
To: Richard Elling
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS volume corrupted?



Re: [zfs-discuss] ZFS volume corrupted?

2009-03-03 Thread Lars-Gunnar Persson

I ran a new command now, zdb. Here is the current output:

-bash-3.00$ sudo zdb Data
version=4
name='Data'
state=0
txg=9806565
pool_guid=6808539022472427249
vdev_tree
type='root'
id=0
guid=6808539022472427249
children[0]
type='disk'
id=0
guid=2167768931511572294
path='/dev/dsk/c4t5000402001FC442Cd0s0'
devid='id1,s...@n6000402001fc442c6e1a0e97/a'
whole_disk=1
metaslab_array=14
metaslab_shift=36
ashift=9
asize=11801587875840
Uberblock

magic = 00bab10c
version = 4
txg = 9842225
guid_sum = 8976307953983999543
timestamp = 1236084668 UTC = Tue Mar  3 13:51:08 2009

Dataset mos [META], ID 0, cr_txg 4, 392M, 1213 objects
... [snip]

Dataset Data/subversion1 [ZVOL], ID 3527, cr_txg 2514080, 22.5K, 3  
objects


... [snip]
Dataset Data [ZPL], ID 5, cr_txg 4, 108M, 2898 objects

Traversing all blocks to verify checksums and verify nothing leaked ...

and I'm still waiting for this process to finish.


On 3. mars. 2009, at 11.18, Lars-Gunnar Persson wrote:

I thought a ZFS file system wouldn't destroy a ZFS volume? Hmm, I'm  
not sure what to do now ...


First of all, this ZFS volume, Data/subversion1, had been working for a
year, and then suddenly, after a reboot of the Solaris server and a run of
the zpool export and zpool import commands, I started getting problems with
this ZFS volume.


Today I checked some more, after reading this guide: 
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

My main question is: Is my ZFS volume which is part of a zpool lost  
or can I recover it?


Would upgrading the Solaris server to the latest release and then doing a
zpool export and zpool import help?


All advice appreciated :-)

Here is some more information:

-bash-3.00$ zfs list -o  
name,type,used,avail,ratio,compression,reserv,volsize Data/subversion1
NAMETYPE   USED  AVAIL  RATIO  COMPRESS  RESERV   
VOLSIZE
Data/subversion1  volume  22.5K   511G  1.00x   off250G  
250G


I've also learned that the AVAIL column reports what's available in
the zpool and NOT what's available in the ZFS volume.


-bash-3.00$ sudo zpool status -v
Password:
 pool: Data
state: ONLINE
scrub: scrub in progress, 5.86% done, 12h46m to go
config:

   NAME STATE READ WRITE CKSUM
   Data ONLINE   0 0 0
 c4t5000402001FC442Cd0  ONLINE   0 0 0

errors: No known data errors

The interesting thing here is that the scrub should have finished today,
but the progress is much slower than reported here. And will the scrub
help at all in my case?



-bash-3.00$ sudo fmdump
TIME UUID SUNW-MSG-ID
Nov 15 2007 10:16:38 8aa789d2-7f3a-45d5-9f5c-c101d73b795e ZFS-8000-CS
Oct 14 09:31:40.8179 8c7d9847-94b7-ec09-8da7-c352de405b78 FMD-8000-2K

bash-3.00$ sudo fmdump -ev
TIME CLASS ENA
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e688d11500401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68926e600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68a3d3900401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68bc6741
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68d8bb600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68da5b51
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68da5b51
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68da5b51
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68e98191
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e69038551
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e69038551
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e692a4ca1
Nov 15 2007 09:33:52 ereport.fs.zfs.io  
0x915e68bc6741

Re: [zfs-discuss] ZFS volume corrupted?

2009-03-03 Thread Lars-Gunnar Persson

And then the zdb process ends with:

Traversing all blocks to verify checksums and verify nothing leaked ...
out of memory -- generating core dump
Abort (core dumped)

hmm, what does that mean??


I also ran these commands:

-bash-3.00$ sudo fmstat
module ev_recv ev_acpt wait  svc_t  %w  %b  open solve   
memsz  bufsz
cpumem-retire0   0  0.00.1   0   0 0  
0  0  0
disk-transport   0   0  0.04.1   0   0 0 0 
32b  0
eft  0   0  0.05.7   0   0 0 0
1.4M  0
fmd-self-diagnosis   0   0  0.00.2   0   0 0  
0  0  0
io-retire0   0  0.00.2   0   0 0  
0  0  0
snmp-trapgen 0   0  0.00.1   0   0 0 0 
32b  0
sysevent-transport   0   0  0.0 1520.8   0   0 0  
0  0  0
syslog-msgs  0   0  0.00.1   0   0 0  
0  0  0
zfs-diagnosis  301   0  0.00.0   0   0 2 0
120b80b
zfs-retire   0   0  0.00.3   0   0 0  
0  0  0

-bash-3.00$ sudo fmadm config
MODULE   VERSION STATUS  DESCRIPTION
cpumem-retire1.1 active  CPU/Memory Retire Agent
disk-transport   1.0 active  Disk Transport Agent
eft  1.16active  eft diagnosis engine
fmd-self-diagnosis   1.0 active  Fault Manager Self-Diagnosis
io-retire1.0 active  I/O Retire Agent
snmp-trapgen 1.0 active  SNMP Trap Generation Agent
sysevent-transport   1.0 active  SysEvent Transport Agent
syslog-msgs  1.0 active  Syslog Messaging Agent
zfs-diagnosis1.0 active  ZFS Diagnosis Engine
zfs-retire   1.0 active  ZFS Retire Agent
-bash-3.00$ sudo zpool upgrade -v
This system is currently running ZFS version 4.

The following versions are supported:

VER  DESCRIPTION
---  
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history

For more information on a particular version, including supported  
releases, see:


http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.


I hope I've provided enough information for all you ZFS experts out  
there.


Any tips or solutions in sight? Or is this ZFS volume gone completely?

Lars-Gunnar Persson


On 3. mars. 2009, at 13.58, Lars-Gunnar Persson wrote:


I ran a new command now, zdb. Here is the current output:

-bash-3.00$ sudo zdb Data
   version=4
   name='Data'
   state=0
   txg=9806565
   pool_guid=6808539022472427249
   vdev_tree
   type='root'
   id=0
   guid=6808539022472427249
   children[0]
   type='disk'
   id=0
   guid=2167768931511572294
   path='/dev/dsk/c4t5000402001FC442Cd0s0'
   devid='id1,s...@n6000402001fc442c6e1a0e97/a'
   whole_disk=1
   metaslab_array=14
   metaslab_shift=36
   ashift=9
   asize=11801587875840
Uberblock

   magic = 00bab10c
   version = 4
   txg = 9842225
   guid_sum = 8976307953983999543
   timestamp = 1236084668 UTC = Tue Mar  3 13:51:08 2009

Dataset mos [META], ID 0, cr_txg 4, 392M, 1213 objects
... [snip]

Dataset Data/subversion1 [ZVOL], ID 3527, cr_txg 2514080, 22.5K, 3  
objects


... [snip]
Dataset Data [ZPL], ID 5, cr_txg 4, 108M, 2898 objects

Traversing all blocks to verify checksums and verify nothing  
leaked ...


and I'm still waiting for this process to finish.


On 3. mars. 2009, at 11.18, Lars-Gunnar Persson wrote:

I thought a ZFS file system wouldn't destroy a ZFS volume? Hmm, I'm  
not sure what to do now ...


First of all, this ZFS volume, Data/subversion1, had been working for a
year, and then suddenly, after a reboot of the Solaris server and a run of
the zpool export and zpool import commands, I started getting problems with
this ZFS volume.


Today I checked some more, after reading this guide: 
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

My main question is: Is my ZFS volume which is part of a zpool lost  
or can I recover it?


Would upgrading the Solaris server to the latest release and then doing a
zpool export and zpool import help?


All advice appreciated :-)

Here is some more information:

-bash-3.00$ zfs list -o  
name,type,used,avail,ratio,compression,reserv,volsize Data/ 
subversion1
NAMETYPE   USED  AVAIL  RATIO  COMPRESS  RESERV   
VOLSIZE
Data/subversion1  volume  22.5K   511G  1.00x   off250G  
250G


I've also learned that the AVAIL column reports what's available in
the zpool and NOT what's available in the ZFS volume.


-bash-3.00$ sudo zpool status -v
Password:
pool: Data
state: ONLINE
scrub: scrub in progress, 5.86% 

Re: [zfs-discuss] ZFS volume corrupted?

2009-03-03 Thread Sanjeev
 Lars-Gunnar,


On Tue, Mar 03, 2009 at 11:18:27AM +0100, Lars-Gunnar Persson wrote:
 -bash-3.00$ zfs list -o  
 name,type,used,avail,ratio,compression,reserv,volsize Data/subversion1
 NAMETYPE   USED  AVAIL  RATIO  COMPRESS  RESERV  VOLSIZE
 Data/subversion1  volume  22.5K   511G  1.00x   off250G 250G

This shows that the volume still exists.
Correct me if I am wrong here:
- Did you mean that the contents of the volume subversion1 are corrupted?

What does that volume have on it? Does it contain a filesystem which
can be mounted on Solaris? If so, we could try mounting it locally on the
Solaris box. This is to rule out any iSCSI issues.
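
For example, a quick local check on the Solaris box could look something
like this (the zvol device path follows the usual /dev/zvol layout; fstyp
only guesses the filesystem type and does not write anything):

-bash-3.00$ sudo fstyp /dev/zvol/dsk/Data/subversion1
-bash-3.00$ sudo mount -o ro /dev/zvol/dsk/Data/subversion1 /mnt   # only if fstyp reports something Solaris can mount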

Also, do you have any snapshots of the volume? If so, you could roll back
to the latest snapshot. But that would mean we lose some amount of data.
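
For reference, that would be something along the lines of the following,
with <snapshot> standing in for the most recent snapshot name:

zfs rollback Data/subversion1@<snapshot>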

Also, you mentioned that the volume was in use for a year. But, I see in the
above output that it has only about 22.5K used. Is that correct ? I would
have expected it to be higher.

You should also check what 'zpool history -i ' says.

Thanks and regards,
Sanjeev

 I've also learned that the AVAIL column reports what's available in the
 zpool and NOT what's available in the ZFS volume.

 -bash-3.00$ sudo zpool status -v
 Password:
   pool: Data
  state: ONLINE
  scrub: scrub in progress, 5.86% done, 12h46m to go
 config:

 NAME STATE READ WRITE CKSUM
 Data ONLINE   0 0 0
   c4t5000402001FC442Cd0  ONLINE   0 0 0

 errors: No known data errors

 Interesting thing here is that the scrub process should be finished  
 today but the progress is much slower than reported here. And will the  
 scrub process help anything in my case?


 -bash-3.00$ sudo fmdump
 TIME UUID SUNW-MSG-ID
 Nov 15 2007 10:16:38 8aa789d2-7f3a-45d5-9f5c-c101d73b795e ZFS-8000-CS
 Oct 14 09:31:40.8179 8c7d9847-94b7-ec09-8da7-c352de405b78 FMD-8000-2K

 bash-3.00$ sudo fmdump -ev
 TIME CLASS ENA
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e6850ff400401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e6850ff400401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e6897db600401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e688d11500401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68926e600401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e6897db600401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68a3d3900401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68bc6741
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68d8bb600401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68da5b51
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68da5b51
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68f0c9800401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68da5b51
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e6897db600401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68e98191
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68f0c9800401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e690a11000401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68f0c9800401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e69038551
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e69038551
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e690a11000401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e692a4ca1
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68bc6741
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e690a11000401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e6850ff400401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e6850ff400401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68bc6741
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e69038551
 Nov 15 2007 09:33:52 ereport.fs.zfs.data
 0x915e6850ff400401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68a3d3900401
 Nov 15 2007 09:33:52 ereport.fs.zfs.io  
 0x915e68a3d3900401
 Nov 15 2007 10:16:12 ereport.fs.zfs.vdev.open_failed
 0x0533bb1b56400401
 Nov 15 2007 10:16:12 ereport.fs.zfs.zpool   
 0x0533bb1b56400401
 Oct 14 09:31:31.6092 ereport.fm.fmd.log_append  
 0x02eb96a8b6502801
 Oct 14 09:31:31.8643 ereport.fm.fmd.mod_init
 0x02ec89eadd100401


 On 3. mars. 2009, at 08.10, 

Re: [zfs-discuss] ZFS volume corrupted?

2009-03-03 Thread Lars-Gunnar Persson


On 3. mars. 2009, at 14.51, Sanjeev wrote:

Thank you for your reply.


Lars-Gunnar,


On Tue, Mar 03, 2009 at 11:18:27AM +0100, Lars-Gunnar Persson wrote:

-bash-3.00$ zfs list -o
name,type,used,avail,ratio,compression,reserv,volsize Data/ 
subversion1
NAMETYPE   USED  AVAIL  RATIO  COMPRESS  RESERV   
VOLSIZE
Data/subversion1  volume  22.5K   511G  1.00x   off250G  
250G


This shows that the volume still exists.
Correct me if I am wrong here :
- Did you mean that the contents of the volume subversion1 are  
corrupted ?
I'm not 100% sure if it's the content of this volume or the zpool itself
that is corrupted. It was exported over iSCSI to a Linux host, where it
was formatted as an ext3 file system.




What does that volume have on it ? Does it contain a filesystem  
which can
can be mounted on Solaris ? If so, we could try mounting it locally  
on the

Solaris box. This is to rule out any iSCSI issues.

I don't think that Solaris supports mounting ext3 file systems, or does it?
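
For what it's worth, a read-only sanity check of the ext3 data could be
done from the Linux initiator side; the device name below is hypothetical
and neither command writes to the volume:

# on the Linux host that imports the iSCSI LUN
file -s /dev/sdX         # should still identify an ext3 superblock
fsck.ext3 -n /dev/sdX    # -n = check only, answer 'no' to every repair prompt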


Also, do you have any snapshots of the volume ? If so, you could  
rollback
to the latest snapshot. But, that would mean we lose some amount of  
data.
Nope, no snapshots, since this is a Subversion repository with
versioning built in. I didn't think I'd end up in this situation.




Also, you mentioned that the volume was in use for a year. But, I  
see in the
above output that it has only about 22.5K used. Is that correct ? I  
would

have expected it to be higher.
You're absolutely right, the 22.5K is wrong. That is why I suspect zfs  
is doing something wrong ...




You should also check what 'zpool history -i ' says.


it says:

-bash-3.00$ sudo zpool history Data | grep subversion
2008-04-02.09:08:53 zfs create -V 250GB Data/subversion1
2008-04-02.09:08:53 zfs set shareiscsi=on Data/subversion1
2008-08-14.14:13:58 zfs set shareiscsi=off Data/subversion1
2008-08-29.15:08:50 zfs set shareiscsi=on Data/subversion1
2009-03-02.10:37:36 zfs set shareiscsi=off Data/subversion1
2009-03-02.10:37:55 zfs set shareiscsi=on Data/subversion1
2009-03-02.11:37:22 zfs set shareiscsi=off Data/subversion1
2009-03-03.09:37:34 zfs set shareiscsi=on Data/subversion1

and:

2009-03-01.11:26:22 zpool export -f Data
2009-03-01.13:21:58 zpool import Data

2009-03-01.14:32:04 zpool scrub Data




Thanks and regards,
Sanjeev


More info:
I just rebooted the Solaris server and there is no change in status:

-bash-3.00$ zpool status -v
  pool: Data
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
Data ONLINE   0 0 0
  c4t5000402001FC442Cd0  ONLINE   0 0 0

errors: No known data errors


The scrubbing has stopped and the zdb command crashed the server.





I've also learned that the AVAIL column reports what's available in the
zpool and NOT what's available in the ZFS volume.

-bash-3.00$ sudo zpool status -v
Password:
 pool: Data
state: ONLINE
scrub: scrub in progress, 5.86% done, 12h46m to go
config:

   NAME STATE READ WRITE CKSUM
   Data ONLINE   0 0 0
 c4t5000402001FC442Cd0  ONLINE   0 0 0

errors: No known data errors

Interesting thing here is that the scrub process should be finished
today but the progress is much slower than reported here. And will  
the

scrub process help anything in my case?


-bash-3.00$ sudo fmdump
TIME UUID SUNW-MSG-ID
Nov 15 2007 10:16:38 8aa789d2-7f3a-45d5-9f5c-c101d73b795e ZFS-8000-CS
Oct 14 09:31:40.8179 8c7d9847-94b7-ec09-8da7-c352de405b78 FMD-8000-2K

bash-3.00$ sudo fmdump -ev
TIME CLASS ENA
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e688d11500401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e68926e600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e68a3d3900401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e68bc6741
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e68d8bb600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e68da5b51
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e68da5b51
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e68da5b51
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e68e98191
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e69038551
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e69038551
Nov 15 2007 09:33:52 ereport.fs.zfs.io
0x915e690a11000401
Nov 15 2007 

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-03 Thread George Wilson

Matthew Ahrens wrote:

Blake wrote:

zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)



I'd like to see:

pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)


I'm working on it.


install to mirror from the liveCD gui

zfs recovery tools (sometimes bad things happen)


We've actually discussed this at length and there will be some work 
started soon.


automated installgrub when mirroring an rpool

I'm working on it.

- George


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-03 Thread George Wilson

Richard Elling wrote:

David Magda wrote:


On Feb 27, 2009, at 18:23, C. Bergström wrote:


Blake wrote:

Care to share any of those in advance?  It might be cool to see input
from listees and generally get some wheels turning...


raidz boot support in grub 2 is pretty high on my list to be honest..

Which brings up another question of where is the raidz stuff mostly?

usr/src/uts/common/fs/zfs/vdev_raidz.c ?

Any high level summary, docs or blog entries of what the process 
would look like for a raidz boot support is also appreciated.


Given the threads that have appeared on this list lately, how about 
codifying / standardizing the output of zfs send so that it can be 
backed up to tape? :)


It wouldn't help.  zfs send is a data stream which contains parts of 
files,

not files (in the usual sense), so there is no real way to take a send
stream and extract a file, other than by doing a receive.

At the risk of repeating the Best Practices Guide (again):
The zfs send and receive commands do not provide an enterprise-level 
backup solution.

-- richard
Along these lines you can envision a restore tool that is capable of 
reading multiple 'zfs send' streams to construct the various versions of 
files which are available. In addition, it would be nice if the tool 
could read in the streams and then make it easy to traverse and 
construct a single file from all available streams. For example, if I 
have 5 send streams then the tool would be able to ingest all the data 
and provide a directory structure similar to .zfs which would allow you 
to restore any file which is completely intact.
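
As things stand today, the only supported way to get individual files back
out of a saved stream is to receive the whole thing into a pool first; a
minimal sketch, with hypothetical dataset and path names:

zfs send tank/fs@monday > /backup/fs-monday.zstream       # stream written to a file (or tape)
zfs receive tank/fs-restored < /backup/fs-monday.zstream  # must be received before any file is readable
ls /tank/fs-restored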


- George
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs list extentions related to pNFS

2009-03-03 Thread Richard Elling

Yes.
-- richard

Lisa Week wrote:

Hi,

I am soliciting input from the ZFS engineers and/or ZFS users on an 
extension to zfs list.  Thanks in advance for your feedback.


Quick Background:
The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding a new DMU 
object set type which is used on the pNFS data server to store pNFS 
stripe DMU objects.  A pNFS dataset gets created with the zfs create 
command and gets displayed using zfs list.


Specific Question:
Should the pNFS datasets show up in the default zfs list output? 
 Just as with ZFS file systems and ZVOLs, the number of pNFS datasets 
that exist on a data server will vary depending on the configuration.


The following is output from the modified command and reflects the 
current mode of operation (i.e. zfs list lists filesystems, volumes 
and pnfs datasets by default):


(pnfs-17-21:/home/lisagab):6 % zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool                          30.0G  37.0G  32.5K  /rpool
rpool/ROOT                     18.2G  37.0G    18K  legacy
rpool/ROOT/snv_105             18.2G  37.0G  6.86G  /
rpool/ROOT/snv_105/var         11.4G  37.0G  11.4G  /var
rpool/dump                     9.77G  37.0G  9.77G  -
rpool/export                     40K  37.0G    21K  /export
rpool/export/home                19K  37.0G    19K  /export/home
rpool/pnfsds                     31K  37.0G    15K  -   ---pNFS dataset
rpool/pnfsds/47C80414080A4A42    16K  37.0G    16K  -   ---pNFS dataset
rpool/swap                     1.97G  38.9G  4.40M  -

(pnfs-17-21:/home/lisagab):7 % zfs list -t pnfsdata
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool/pnfsds                     31K  37.0G    15K  -
rpool/pnfsds/47C80414080A4A42    16K  37.0G    16K  -

(pnfs-17-21:/home/lisagab):8 % zfs get all rpool/pnfsds
NAME  PROPERTY  VALUE  SOURCE
rpool/pnfsds  type  pnfs-dataset   -
rpool/pnfsds  creation  Mon Feb  2 13:56 2009  -
rpool/pnfsds  used  31K-
rpool/pnfsds  available 37.0G  -
rpool/pnfsds  referenced15K-
rpool/pnfsds  compressratio 1.00x  -
rpool/pnfsds  quota none   default
rpool/pnfsds  reservation   none   default
rpool/pnfsds  recordsize128K   default
rpool/pnfsds  checksum  on default
rpool/pnfsds  compression   offdefault
rpool/pnfsds  zoned offdefault
rpool/pnfsds  copies1  default
rpool/pnfsds  refquota  none   default
rpool/pnfsds  refreservationnone   default
rpool/pnfsds  sharepnfs offdefault
rpool/pnfsds  mds   none   default

Thanks,
Lisa



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-03 Thread Richard Elling

George Wilson wrote:
Along these lines you can envision a restore tool that is capable of 
reading multiple 'zfs send' streams to construct the various versions 
of files which are available. In addition, it would be nice if the 
tool could read in the streams and then make it easy to traverse and 
construct a single file from all available streams. For example, if I 
have 5 send streams then the tool would be able to ingest all the data 
and provide a directory structure similar to .zfs which would allow 
you to restore any file which is completely intact.


In essence, this is how HSM works (qv ADM project).  You get a view of the
file system for which the data in the files may reside on media elsewhere.
Good stuff.
-- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs with PERC 6/i card?

2009-03-03 Thread Kristin Amundsen
I am trying to set up OpenSolaris on a Dell 2950 III that has 8 SAS drives
connected to a PERC 6/i card.  I am wondering about the best way to configure
the RAID in the BIOS for ZFS.

Part of the problem is there seems to be some confusion inside Dell as to
what can be done with the card.  Their tech support suggested making the 8
drives show up by making them 8 raid0 devices.  I researched online to see
if I could find anyone doing that and the only person I found indicated
there were issues with needing to reboot the machine because the controller
would take drives totally offline when there were problems.  The sales rep I
have been working with said the card can be configured with a no-raid
option.  I am not sure why tech support did not know about this (I spent a
long time talking with them about whether we could turn RAID off on the machine).
I could not find anyone talking about running zfs on a system configured
this way.

I would like to hear if anyone is using ZFS with this card and how you set
it up, and what, if any, issues you've had with that set up.

Thanks

-Kristin

-- 
http://tomorrowisobsolete.blogspot.com  http://kamundse.blogspot.com
http://flickr.com/photos/kamundse/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-03 Thread Greg Mason

Just my $0.02, but would pool shrinking be the same as vdev evacuation?

I'm quite interested in vdev evacuation as an upgrade path for 
multi-disk pools. This would be yet another reason for folks to use 
ZFS at home (you only have to buy cheap disks), but it would also be 
good to have that ability from an enterprise perspective, as I'm sure 
we've all engineered ourselves into a corner one time or another...


It's a much cleaner, safer, and possibly much faster alternative to 
systematically pulling drives and letting zfs resilver onto a larger 
disk, in order to upgrade a pool in-place, and in production.


basically, what I'm thinking is:

zpool remove mypool <list of devices/vdevs>

Allow time for ZFS to vacate the vdev(s), and then light up the "OK to 
remove" light on each evacuated disk.


-Greg

Blake Irvin wrote:

Shrinking pools would also solve the right-sizing dilemma.

Sent from my iPhone

On Feb 28, 2009, at 3:37 AM, Joe Esposito j...@j-espo.com wrote:


I'm using opensolaris and zfs at my house for my photography storage
as well as for an offsite backup location for my employer and several
side web projects.

I have an 80g drive as my root drive.  I recently took possession of 2
74g 10k drives which I'd love to add as a mirror to replace the 80 g
drive.

From what I gather it is only possible if I zfs export my storage
array and reinstall solaris on the new disks.

So I guess I'm hoping zfs shrink and grow commands show up sooner or 
later.


Just a data point.

Joe Esposito
www.j-espo.com

On 2/28/09, C. Bergström cbergst...@netsyncro.com wrote:

Blake wrote:

Gnome GUI for desktop ZFS administration



On Fri, Feb 27, 2009 at 9:13 PM, Blake blake.ir...@gmail.com wrote:


zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)



I'd like to see:

pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)


This may be interesting... I'm not sure how often you need to shrink a
pool though?  Could this be classified more as a Home or SME level 
feature?

install to mirror from the liveCD gui


I'm not working on OpenSolaris at all, but for when my projects
installer is more ready /we/ can certainly do this..

zfs recovery tools (sometimes bad things happen)


Agreed.. part of what I think keeps zfs so stable though is the complete
lack of dependence on any recovery tools..  It forces customers to bring
up the issue instead of applying a dirty hack that nobody knows about.

automated installgrub when mirroring an rpool


This goes back to an installer option?

./C

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can VirtualBox run a 64 bit guests on 32 bit host

2009-03-03 Thread Miles Nordin
 hp == Harry Putnam rea...@newsguy.com writes:

hp I'm thinking of turning to Asus again and making sure there is
hp onboard SATA with at least 4 ports and preferably 6.

I would like 64-bit hardware with ECC, 8GB RAM, and a good Ethernet
chip, that can run both Linux and Solaris.  I do not plan to use the
onboard SATA.  So far I'm having nasty problems with an nForce 750a
board from asus (M3N72-D) under Linux.

 * random powerdowns.  about once per week.

 * freezes setting the rtc clock.  as in, locks the whole board hard.
   and confuses something deep inside the board, not just Linux.  Why
   do I think this?  I've set it to ``power on after power failure''
   which it normally obeys, and I know it normally obeys because I
   always do cold-boots of this board because of other strange
   intermittent problems below.  But after such a clock-set freeze, I
   try to cold-boot, and it stays off until I press the front panel
   button disrespecting the BIOS setting (and making a Baytech strip
   useless.  piece of shit!)

 * sound card attaches sometimes, not other times.  In 'cat
   /proc/interrupts' I see the sound card shares an interrupt with USB
   (which AIUI it should not need to do with ACPI and ioapic, aren't
   there plenty of interrupts now?), and it seems to march through
   interrupt wirings after each warm boot, always picking a different
   one, so I blame the Asus BIOS.  I think it attaches about 1/3 the
   time.  It does play sound when it attaches.

 * a USB key that works with other motherboards does not work with
   this one.  I am not trying to boot off the key.  I boot with
   Etherboot PXE.  I'm only trying to use the key under the booted OS,
   and it does not work, while the same OS on a different motherboard
   is able to use the key for months without problem.

 * with the Supermicro/Marvell 8-SATA-port PCI card installed, the
   Asus board will not enter the Blue Screen of Setup.  Sometimes it
   does, though, maybe 1 in 10 times.  It runs fine with this card
   installed, just won't enter setup unless I remove it.

 * with ``fill memory holes'' enabled in BSoS, as it is by default,
   memtest86+ crashes and reboots the machine.  When I turn this off,
   memtest86+ runs fine.  I'm not sure how I even found this
   workaround, but seriously, I would rather waste time on mailing
   lists than mess around with such junk.

maybe other things that I'm forgetting.

Under Solaris, and I think also Linux, the nForce 750a's nvidia
ethernet chip is pretty performant.  (newegg says realtek ethernet,
but this is wrong, it's an nvidia MAC).  And also nvidia AHCI SATA
which is supposedly better than AMD AHCI SATA.  I can verify at least
four SATA ports work well, on Linux, but I haven't tried the other two
``RAID'' SATA ports which are rumored to be on a crappy JMicron
controller with weird mandatory fakeRAID or something.  Maybe they
work fine, I don't know.

The ATI/AMD chipset boards actually do have crappy realtek
ethernet (crappy performance on every OS), and there is supposedly a
bug in their AMD AHCI that has persisted through several chip
steppings that makes SATA slow under Linux and buggy under Solaris, and
I've heard no resolution so I'm operating under the assumption the
most recent 790{X,GX,FX}/SB750 still have the bug.

There's dispute about updating the BIOS.  in general you have to
update it to work with the latest CPU's.  Sometimes you need an old
CPU to run the BIOS updater before you can run the new CPU, even if
you ordered the CPU and the motherboard on the same invoice.  That
seems highly bogus to me.  I did not update mine at first, then
updated it to try to solve the random powerdowns, which it did not.
Other forum posters say the first BIOS revisions are written by the
BIOS programmers who did well in college, while the later updates for
newer CPUs are written by the ones who barely scraped by and are full
of regressions, and since they're not selling a new motherboard with
the update they don't give a shit and would almost rather brick the
board and force you to upgrade, so these posters say the newest BIOS
builds should be *AVOIDED*.  I've no idea!

Maybe you should buy two or three cheap boards and try them all, since
the CPU and RAM make up a bigger fraction of the cost.  I've just
bought this one:

 http://www.newegg.com/Product/Product.aspx?Item=N82E16813186150

because it has Broadcom ethernet, which is actually a good-performing
chip with a decent driver, but this chip has a lot of errata among
revisions so your Ethernet driver has to be updated more aggressively
than it does with the Intel gigabit chips.  I'm hoping it is the older
non-RNIC version, because the newest Broadcom chips have a
(newly-added, cough COUGH) Proprietary driver in Solaris, while I
think the older chips have a free driver.  Broadcom, like AHCI SATA,
is decent and much easier to obtain onboard than on a PCI card, so
it's nice to find a board which has it.

This one is also 

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-03 Thread Matthew Ahrens

Greg Mason wrote:

Just my $0.02, but would pool shrinking be the same as vdev evacuation?


Yes.


basically, what I'm thinking is:

zpool remove mypool <list of devices/vdevs>

Allow time for ZFS to vacate the vdev(s), and then light up the "OK to 
remove" light on each evacuated disk.


That's the goal.

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can VirtualBox run a 64 bit guests on 32 bit host

2009-03-03 Thread Bob Friesenhahn

On Tue, 3 Mar 2009, Miles Nordin wrote:


I would like 64-bit hardware with ECC, 8GB RAM, and a good Ethernet
chip, that can run both Linux and Solaris.  I do not plan to use the
onboard SATA.  So far I'm having nasty problems with an nForce 750a
board from asus (M3N72-D) under Linux.


Did you accidentally blast this to the wrong group?  This is the ZFS 
discussion group, not Linux on my broken hardware group.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-03 Thread C. Bergström


For reasons which I don't care about, Sun may not apply to be a GSoC 
organization this year.  However, I'm not discouraged from trying to 
propose some exciting zfs related ideas.  On/off list feel free to send 
your vote, let me know if you can mentor or if you know a company that 
could use it.


Here's more or less what I've collected...

  1) Excess ditto block removing + other green-bytes zfs+ features - 
*open source* (very hard.. can't be done in two months)
  2) raidz boot support (planning phase and suitable student already 
found. could use more docs/info for proposal)

  3) Additional zfs compression choices (good for archiving non-text files?)
  4) zfs cli interface to add safety checks  (save your butt from 
deleting a pool worth more than your job)

  5) Web or gui based admin interface
  6) zfs defrag (was mentioned by someone working around petabytes of 
data..)
  7) vdev evacuation as an upgrade path (which may depend or take 
advantage of zfs resize/shrink code)

  8) zfs restore/repair tools (being worked on already?)
  9) Timeslider ported to kde4.2 ( *cough* couldn't resist, but put 
this on the list)

  10) Did I miss something..

#2 Currently planning and collecting as much information for the 
proposal as possible.  Today all ufs + solaris grub2 issues were 
resolved and will likely be committed to upstream soon.  There is a one 
liner fix in the solaris kernel also needed, but that can be binary 
hacked worst case.


#5/9 This also may be possible for an outside project.. either web 
showcase or tighter desktop integration..


The rest may just be too difficult in a two month period, not something 
which can go upstream or not enough time to really plan well enough..  
Even if this isn't done for gsoc it may still be possible for the 
community to pursue some of these..


To be a mentor will most likely require answering daily/weekly technical 
questions, ideally being on irc and having patience.  On top of this 
I'll be available to help as much as technically possible, keep the 
student motivated and the projects on schedule.


./Christopher


#ospkg irc.freenode.net - (Mostly OpenSolaris development rambling)


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-03 Thread Richard Elling

C. Bergström wrote:


For reasons which I don't care about Sun may not apply to be a gsoc 
organization this year.  However, I'm not discouraged from trying to 
propose some exciting zfs related ideas.  On/off list feel free to 
send your vote, let me know if you can mentor or if you know a company 
that could use it.


Here's more or less what I've collected...

  1) Excess ditto block removing + other green-bytes zfs+ features - 
*open source* (very hard.. can't be done in two months)
  2) raidz boot support (planning phase and suitable student already 
found. could use more docs/info for proposal)
  3) Additional zfs compression choices (good for archiving non-text 
files?)
  4) zfs cli interface to add safety checks  (save your butt from 
deleting a pool worth more than your job)

  5) Web or gui based admin interface


FWIW, I just took a look at the BUI in b108 and it seems to have
garnered some love since the last time I looked at it (a year ago?)
I encourage folks to take a fresh look at it.
   https://localhost:6789

-- richard

  6) zfs defrag (was mentioned by someone working around petabytes of 
data..)
  7) vdev evacuation as an upgrade path (which may depend or take 
advantage of zfs resize/shrink code)

  8) zfs restore/repair tools (being worked on already?)
  9) Timeslider ported to kde4.2 ( *cough* couldn't resist, but put 
this on the list)

  10) Did I miss something..

#2 Currently planning and collecting as much information for the 
proposal as possible.  Today all ufs + solaris grub2 issues were 
resolved and will likely be committed to upstream soon.  There is a 
one liner fix in the solaris kernel also needed, but that can be 
binary hacked worst case.


#5/9 This also may be possible for an outside project.. either web 
showcase or tighter desktop integration..


The rest may just be too difficult in a two month period, not 
something which can go upstream or not enough time to really plan well 
enough..  Even if this isn't done for gsoc it may still be possible 
for the community to pursue some of these..


To be a mentor will most likely require answering daily/weekly 
technical questions, ideally being on irc and having patience.  On top 
of this I'll be available to help as much as technically possible, 
keep the student motivated and the projects on schedule.


./Christopher


#ospkg irc.freenode.net - (Mostly OpenSolaris development rambling)


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-03 Thread Nicolas Williams
On Tue, Mar 03, 2009 at 11:35:40PM +0200, C. Bergström wrote:
   7) vdev evacuation as an upgrade path (which may depend or take 
 advantage of zfs resize/shrink code)

IIRC Matt Ahrens has said on this list that vdev evacuation/pool
shrinking is being worked.  So (7) would be duplication of effort.

   8) zfs restore/repair tools (being worked on already?)

IIRC Jeff Bonwick has said on this list that uberblock rollback on
import is now his higher priority.  So working on (8) would be
duplication of effort.

   1) Excess ditto block removing + other green-bytes zfs+ features - 
 *open source* (very hard.. can't be done in two months)

Using the new block pointer re-write code you might be able to deal with
re-creating blocks with more/fewer ditto copies (and compression, ...)
with incremental effort.  But ask Matt Ahrens.

   6) zfs defrag (was mentioned by someone working around petabytes of 
 data..)

(6) probably depends on the new block pointer re-write code as well.
But (6) may also be implied in vdev evac/pool shrink, so it may be
duplication of effort.

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs with PERC 6/i card?

2009-03-03 Thread James C. McPherson
On Tue, 03 Mar 2009 09:50:51 -0800
Kristin Amundsen avatarofsl...@gmail.com wrote:

 I am trying to set up OpenSolaris on a Dell 2950 III that has 8 SAS drives
 connected to a PERC 6/i card.  I am wondering what the best way to configure
 the RAID in the BIOS for ZFS.
 
 Part of the problem is there seems to be some confusion inside Dell as to
 what can be done with the card.  Their tech support suggested making the 8
 drives show up by making them 8 raid0 devices.  I researched online to see
 if I could find anyone doing that and the only person I found indicated
 there were issues with needing to reboot the machine because the controller
 would take drives totally offline when there were problems.  The sales rep I
 have been working with said the card can be configured with a no-raid
 option.  I am not sure why tech support did not know about this (I spent a
 long time talking with them about whether we could turn RAID off on the machine).
 I could not find anyone talking about running zfs on a system configured
 this way.
 
 I would like to hear if anyone is using ZFS with this card and how you set
 it up, and what, if any, issues you've had with that set up.


Gday Kristin,
I didn't specifically test ZFS with this card when I was 
making the changes for 

6712499 mpt should identify and report Dell SAS6/iR family of controllers

However I would expect that if you could present 8 raid0 luns to 
the host then that should be at least a decent config to start
using for ZFS. 
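
As a sketch (device names are hypothetical), once the eight single-disk
LUNs are visible to Solaris you could build, say, a double-parity pool
directly on top of them:

zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
zpool status tank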


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs with PERC 6/i card?

2009-03-03 Thread Julius Roberts
 I would like to hear if anyone is using ZFS with this card and how you set
 it up, and what, if any, issues you've had with that set up.

 However I would expect that if you could present 8 raid0 luns to
 the host then that should be at least a decent config to start
 using for ZFS.

I can confirm that we are doing that here (with 3 drives) and it's
been fine for almost a year now.

Jules
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-03 Thread Blake
When I go here:
http://opensolaris.org/os/project/isns/bui

I get an error.  Where are you getting BUI from?


On Tue, Mar 3, 2009 at 5:16 PM, Richard Elling richard.ell...@gmail.comwrote:


 FWIW, I just took a look at the BUI in b108 and it seems to have
 garnered some love since the last time I looked at it (a year ago?)
 I encourage folks to take a fresh look at it.
   https://localhost:6789

 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-03 Thread Maurice Volaski

   10) Did I miss something..


Somehow, what I posted on the web forum didn't make it to the mailing 
list digest...


How about implementing dedup? This has been listed as an RFE for 
almost a year, http://bugs.opensolaris.org/view_bug.do?bug_id=6677093 
and discussed here, 
http://www.opensolaris.org/jive/thread.jspa?messageID=256373.

--

Maurice Volaski, mvola...@aecom.yu.edu
Computing Support, Rose F. Kennedy Center
Albert Einstein College of Medicine of Yeshiva University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-03 Thread Tim
On Tue, Mar 3, 2009 at 3:35 PM, C. Bergström cbergst...@netsyncro.comwrote:


 For reasons which I don't care about Sun may not apply to be a gsoc
 organization this year.  However, I'm not discouraged from trying to propose
 some exciting zfs related ideas.  On/off list feel free to send your vote,
 let me know if you can mentor or if you know a company that could use it.

 Here's more or less what I've collected...

  1) Excess ditto block removing + other green-bytes zfs+ features - *open
 source* (very hard.. can't be done in two months)
  2) raidz boot support (planning phase and suitable student already found.
 could use more docs/info for proposal)
  3) Additional zfs compression choices (good for archiving non-text files?)
  4) zfs cli interface to add safety checks  (save your butt from deleting a
 pool worth more than your job)
  5) Web or gui based admin interface
  6) zfs defrag (was mentioned by someone working around petabytes of
 data..)
  7) vdev evacuation as an upgrade path (which may depend on or take advantage
 of zfs resize/shrink code)
  8) zfs restore/repair tools (being worked on already?)
  9) Timeslider ported to kde4.2 ( *cough* couldn't resist, but put this on
 the list)
  10) Did I miss something..

 #2 Currently planning and collecting as much information for the proposal
 as possible.  Today all ufs + solaris grub2 issues were resolved and will
 likely be committed upstream soon.  There is also a one-liner fix needed in
 the Solaris kernel, but that can be binary-hacked in the worst case.

 #5/9 This also may be possible for an outside project.. either web showcase
 or tighter desktop integration..

 The rest may just be too difficult in a two-month period, not something
 which can go upstream, or there may not be enough time to really plan it
 well enough.  Even if this isn't done for gsoc, it may still be possible
 for the community to pursue some of these.

 To be a mentor will most likely require answering daily/weekly technical
 questions, ideally being on irc and having patience.  On top of this I'll be
 available to help as much as technically possible, keep the student
 motivated and the projects on schedule.

 ./Christopher



I know plenty of home users who would like the ability to add a single disk to
a raid-z vdev in order to grow a pool one disk at a time.

--Tim


Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-03 Thread Richard Elling

Blake wrote:

When I go here:

http://opensolaris.org/os/project/isns/bui

I get an error.  Where are you getting BUI from?


The BUI is in webconsole, which is available on your local machine at
port 6789:
   https://localhost:6789

If you want to access it remotely, you'll need to change the configuration
as documented
   http://docs.sun.com/app/docs/doc/817-1985/gdhgt?a=view
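
From memory, on Solaris 10 the change is something like the following; this is
a hedged sketch only, so follow the linked doc for the exact procedure on your
release:

svccfg -s svc:/system/webconsole setprop options/tcp_listen=true
svcadm refresh svc:/system/webconsole
/usr/sbin/smcwebserver restart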

-- richard


On Tue, Mar 3, 2009 at 5:16 PM, Richard Elling richard.ell...@gmail.com wrote:



FWIW, I just took a look at the BUI in b108 and it seems to have
garnered some love since the last time I looked at it (a year ago?)
I encourage folks to take a fresh look at it.
  https://localhost:6789

-- richard





[zfs-discuss] Oracle database on zfs

2009-03-03 Thread Vahid Moghaddasi
Hi,

I am wondering if there is a guideline on how to configure ZFS on a server
with an Oracle database?
We are experiencing some slowness on writes to the ZFS filesystem. It takes
about 530 ms to write 2 KB of data.
We are running Solaris 10 u5 127127-11 and the back-end storage is a RAID5
EMC EMX.
This is a small database with about 18 GB of storage allocated.
Are there tunable parameters that we can apply to ZFS to make it a little
faster for writes?
Oracle is using an 8k block size; can we match the ZFS recordsize to Oracle's
without destroying the data?
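
For illustration only (the dataset name oradata is hypothetical): recordsize is
set per dataset and only affects blocks written after the change, so existing
datafiles would need to be copied into a dataset created with the new value:

zfs get recordsize zpraid0_e2/oradata              # check the current value
zfs create -o recordsize=8k zpraid0_e2/oradata8k   # new dataset with 8k records
# copy the datafiles over so they are rewritten with 8k blocks
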
$ zpool status zpraid0_e2
  pool: zpraid0_e2
 state: ONLINE
 scrub: none requested
config:
NAME                                 STATE     READ WRITE CKSUM
zpraid0_e2                           ONLINE       0     0     0
  c3t6006048190101941533030453434d0  ONLINE       0     0     0
errors: No known data errors
Thanks,


Re: [zfs-discuss] Oracle database on zfs

2009-03-03 Thread Richard Elling

Vahid Moghaddasi wrote:

Hi,
 
I am wondering if there is a guideline on how to configure ZFS on a 
server with Oracle database?


Start with the Best Practices Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

We are experiencing some slowness on writes to the ZFS filesystem. It takes
about 530 ms to write 2 KB of data.


This seems unusual, unless the EMC is mismatched wrt how they may have
implemented cache flush.  The issues around this are described in the Evil
Tuning Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
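
For reference, a hedged sketch of the system-wide tunable that guide discusses;
it is only appropriate when the array write cache is nonvolatile
(battery/NVRAM backed), because it stops ZFS from issuing cache-flush requests:

* /etc/system entry (comment lines in /etc/system start with '*'); reboot required
set zfs:zfs_nocacheflush = 1
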
-- richard

We are running Solaris 10 u5 127127-11 and the back-end storage is a 
RAID5 EMC EMX.

This is a small database with about 18gb storage allocated.
Is there a tunable parameters that we can apply to ZFS to make it a 
little faster for writes.
Oracle is using 8k block size, can we match zfs block size to oracle 
without destroying the data?

$ zpool status zpraid0_e2
  pool: zpraid0_e2
 state: ONLINE
 scrub: none requested
config:
NAME                                 STATE     READ WRITE CKSUM
zpraid0_e2                           ONLINE       0     0     0
  c3t6006048190101941533030453434d0  ONLINE       0     0     0

errors: No known data errors
Thanks,




Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-03 Thread Blake
That's what I thought you meant, and I got excited thinking that you were
talking about OpenSolaris :)
I'll see about getting the new packages and trying them out.



On Tue, Mar 3, 2009 at 8:36 PM, Richard Elling richard.ell...@gmail.com wrote:

 Blake wrote:

 When I go here:

 http://opensolaris.org/os/project/isns/bui

 I get an error.  Where are you getting BUI from?


 The BUI is in webconsole which is available on your local machine at
 port 6789
   https://localhost:6789

 If you want to access it remotely, you'll need to change the configuration
 as documented
   http://docs.sun.com/app/docs/doc/817-1985/gdhgt?a=view

 -- richard


 On Tue, Mar 3, 2009 at 5:16 PM, Richard Elling richard.ell...@gmail.com wrote:


FWIW, I just took a look at the BUI in b108 and it seems to have
garnered some love since the last time I looked at it (a year ago?)
I encourage folks to take a fresh look at it.
  https://localhost:6789

-- richard





Re: [zfs-discuss] Oracle database on zfs

2009-03-03 Thread David Magda


On Mar 3, 2009, at 20:51, Richard Elling wrote:


This seems unusual, unless the EMC is mismatched wrt how they may have
implemented cache flush.  The issues around this are described in  
the Evil

Tuning Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes


Under the 5/08 and snv_72 note, the following text appears:

The sd and ssd drivers should properly handle the SYNC_NV bit, so no  
changes should be needed.


I'm assuming this relates to:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6462690

So the cache-flushing scenario shouldn't be a problem with newer
Solaris releases on higher-end arrays (assuming they support SBC-2's
SYNC_NV).




Re: [zfs-discuss] zfs with PERC 6/i card?

2009-03-03 Thread Bryant Eadon

Julius Roberts wrote:

I would like to hear if anyone is using ZFS with this card and how you set
it up, and what, if any, issues you've had with that set up.

However I would expect that if you could present 8 raid0 luns to
the host then that should be at least a decent config to start
using for ZFS.


I can confirm that we are doing that here (with 3 drives) and it's
been fine for almost a year now.



I've accomplished that with a similar RAID card: set each drive to 'JBOD' mode
in the RAID BIOS, which properly presents individual devices to the OS so ZFS
can do its thing.  One thing to note, however, is that if you remove a drive
and reboot without replacing it, the device names may shift forward a number,
causing havoc with a ZFS pool.


Additionally, depending on the OS, be careful with attaching removable storage
at boot, as this may also shift the names of the devices -- I've personally
experienced it on FreeBSD 7.1 with a USB stick attached during a reboot.  The
stick happened to take the name of the first drive on this RAID device.
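
A hedged aside (the pool name tank is hypothetical): because ZFS identifies
vdevs by their on-disk labels rather than by device path, an export/import will
usually re-match a pool whose device names have shifted:

zpool export tank
zpool import          # scan for importable pools and show their new device paths
zpool import tank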



-Bryant


Re: [zfs-discuss] zfs list extentions related to pNFS

2009-03-03 Thread Sanjeev
Lisa,

On Mon, Mar 02, 2009 at 09:58:08PM -0700, Lisa Week wrote:
 Should the pNFS datasets show up in the default zfs list output?  Just 
 as with ZFS file systems and ZVOLs, the number of pNFS datasets that 
 exist on a data server will vary depending on the configuration.

I think they should be listed. 

Thanks and regards,
Sanjeev

 The following is output from the modified command and reflects the  
 current mode of operation (i.e. zfs list lists filesystems, volumes  
 and pnfs datasets by default):

 (pnfs-17-21:/home/lisagab):6 % zfs list
 NAME                            USED  AVAIL  REFER  MOUNTPOINT
 rpool                          30.0G  37.0G  32.5K  /rpool
 rpool/ROOT                     18.2G  37.0G    18K  legacy
 rpool/ROOT/snv_105             18.2G  37.0G  6.86G  /
 rpool/ROOT/snv_105/var         11.4G  37.0G  11.4G  /var
 rpool/dump                     9.77G  37.0G  9.77G  -
 rpool/export                     40K  37.0G    21K  /export
 rpool/export/home                19K  37.0G    19K  /export/home
 rpool/pnfsds                     31K  37.0G    15K  -      ---pNFS dataset
 rpool/pnfsds/47C80414080A4A42    16K  37.0G    16K  -      ---pNFS dataset
 rpool/swap                     1.97G  38.9G  4.40M  -

 (pnfs-17-21:/home/lisagab):7 % zfs list -t pnfsdata
 NAME                            USED  AVAIL  REFER  MOUNTPOINT
 rpool/pnfsds                     31K  37.0G    15K  -
 rpool/pnfsds/47C80414080A4A42    16K  37.0G    16K  -

 (pnfs-17-21:/home/lisagab):8 % zfs get all rpool/pnfsds
 NAME          PROPERTY        VALUE                  SOURCE
 rpool/pnfsds  type            pnfs-dataset           -
 rpool/pnfsds  creation        Mon Feb  2 13:56 2009  -
 rpool/pnfsds  used            31K                    -
 rpool/pnfsds  available       37.0G                  -
 rpool/pnfsds  referenced      15K                    -
 rpool/pnfsds  compressratio   1.00x                  -
 rpool/pnfsds  quota           none                   default
 rpool/pnfsds  reservation     none                   default
 rpool/pnfsds  recordsize      128K                   default
 rpool/pnfsds  checksum        on                     default
 rpool/pnfsds  compression     off                    default
 rpool/pnfsds  zoned           off                    default
 rpool/pnfsds  copies          1                      default
 rpool/pnfsds  refquota        none                   default
 rpool/pnfsds  refreservation  none                   default
 rpool/pnfsds  sharepnfs       off                    default
 rpool/pnfsds  mds             none                   default

 Thanks,
 Lisa



-- 

Sanjeev Bagewadi
Solaris RPE 
Bangalore, India


[zfs-discuss] Comstar production-ready?

2009-03-03 Thread Stephen Nelson-Smith
Hi,

I recommended a ZFS-based archive solution to a client needing to have
a network-based archive of 15TB of data in a remote datacentre.  I
based this on an X2200 + J4400, Solaris 10 + rsync.

This was enthusiastically received, to the extent that the client is
now requesting that their live system (15TB data on cheap SAN and
Linux LVM) be replaced with a ZFS-based system.

The catch is that they're not ready to move their production systems
off Linux - so web, db and app layer will all still be on RHEL 5.

As I see it, if they want to benefit from ZFS at the storage layer,
the obvious solution would be a NAS system, such as a 7210, or
something built from a JBOD and a head node that does something
similar.  The 7210 is out of budget - and I'm not quite sure how it
presents its storage - is it NFS/CIFS?  If so, presumably it would be
relatively easy to build something equivalent, but without the
(awesome) interface.

The interesting alternative is to set up Comstar on SXCE, create
zpools and volumes, and make these available either over a fibre
infrastructure, or iSCSI.  I'm quite excited by this as a solution,
but I'm not sure if it's really production ready.

What other options are there, and what advice/experience can you share?

Thanks,

S.
-- 
Stephen Nelson-Smith
Technical Director
Atalanta Systems Ltd
www.atalanta-systems.com


Re: [zfs-discuss] Comstar production-ready?

2009-03-03 Thread Fajar A. Nugraha
On Wed, Mar 4, 2009 at 2:07 PM, Stephen Nelson-Smith sanel...@gmail.com wrote:
 As I see it, if they want to benefit from ZFS at the storage layer,
 the obvious solution would be a NAS system, such as a 7210, or
 something built from a JBOD and a head node that does something
 similar.  The 7210 is out of budget - and I'm not quite sure how it
 presents its storage - is it NFS/CIFS?  If so, presumably it would be

it can also share a block device (zvol) via iSCSI

 The interesting alternative is to set up Comstar on SXCE, create
 zpools and volumes, and make these available either over a fibre
 infrastructure, or iSCSI.  I'm quite excited by this as a solution,
 but I'm not sure if it's really production ready.

If you want production-ready software, then starting from the Solaris 10 8/07
release you can create a ZFS volume as a Solaris iSCSI target device
by setting the shareiscsi property on the ZFS volume. It's not
Comstar, but it works.
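
A minimal sketch of that approach (the pool and volume names are hypothetical):

zfs create -V 100g tank/lun0        # create a 100 GB zvol
zfs set shareiscsi=on tank/lun0     # export it as an iSCSI target
iscsitadm list target -v            # confirm the target and note its IQN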

You may want to consider OpenSolaris (I like it better than SXCE) instead
of Solaris if you want to stay on the bleeding edge, or even Nexenta, which
recently integrated Comstar:

http://www.gulecha.org/2009/03/03/nexenta-iscsi-with-comstarzfs-integration/

Regards,

Fajar


Re: [zfs-discuss] Comstar production-ready?

2009-03-03 Thread Erast Benson
Hi Stephen,

NexentaStor v1.1.5+ could be an alternative, I think. It includes new, cool
COMSTAR integration; i.e., the ZFS shareiscsi property actually implements
the COMSTAR iSCSI target share functionality that is not available in
SXCE. http://www.nexenta.com/nexentastor-relnotes 

On Wed, 2009-03-04 at 07:07 +, Stephen Nelson-Smith wrote:
 Hi,
 
 I recommended a ZFS-based archive solution to a client needing to have
 a network-based archive of 15TB of data in a remote datacentre.  I
 based this on an X2200 + J4400, Solaris 10 + rsync.
 
 This was enthusiastically received, to the extent that the client is
 now requesting that their live system (15TB data on cheap SAN and
 Linux LVM) be replaced with a ZFS-based system.
 
 The catch is that they're not ready to move their production systems
 off Linux - so web, db and app layer will all still be on RHEL 5.
 
 As I see it, if they want to benefit from ZFS at the storage layer,
 the obvious solution would be a NAS system, such as a 7210, or
 something built from a JBOD and a head node that does something
 similar.  The 7210 is out of budget - and I'm not quite sure how it
 presents its storage - is it NFS/CIFS?  If so, presumably it would be
 relatively easy to build something equivalent, but without the
 (awesome) interface.
 
 The interesting alternative is to set up Comstar on SXCE, create
 zpools and volumes, and make these available either over a fibre
 infrastructure, or iSCSI.  I'm quite excited by this as a solution,
 but I'm not sure if it's really production ready.
 
 What other options are there, and what advice/experience can you share?
 
 Thanks,
 
 S.



Re: [zfs-discuss] Comstar production-ready?

2009-03-03 Thread Scott Lawson



Stephen Nelson-Smith wrote:

Hi,

I recommended a ZFS-based archive solution to a client needing to have
a network-based archive of 15TB of data in a remote datacentre.  I
based this on an X2200 + J4400, Solaris 10 + rsync.

This was enthusiastically received, to the extent that the client is
now requesting that their live system (15TB data on cheap SAN and
Linux LVM) be replaced with a ZFS-based system.

The catch is that they're not ready to move their production systems
off Linux - so web, db and app layer will all still be on RHEL 5.
  

At some point I am sure you will convince them to see the light! ;)

As I see it, if they want to benefit from ZFS at the storage layer,
the obvious solution would be a NAS system, such as a 7210, or
something built from a JBOD and a head node that does something
similar.  The 7210 is out of budget - and I'm not quite sure how it
presents its storage - is it NFS/CIFS?
The 7000 series devices can present NFS, CIFS and iSCSI. They look very nice
if you need a nice GUI, don't know the command line, or need nice analytics.
I had a play with one the other day and am hoping to get my mitts on one
shortly for testing. I would like to give it a real good crack with VMware
for VDI VMs.

  If so, presumably it would be
relatively easy to build something equivalent, but without the
(awesome) interface.
  
For sure, the above gear would be fine for that. If you use standard
Solaris 10 10/08 you have NFS and iSCSI ability directly in the OS, with
support available via a support contract if needed. The best bet would
probably be NFS for the Linux machines, but you would need
to test in *their* environment with *their* workload.
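
For example (hypothetical pool, dataset and host names), sharing a dataset to
the RHEL boxes over NFS is just:

# on the Solaris head node
zfs create tank/archive
zfs set sharenfs=on tank/archive
# on a RHEL 5 client
mount -t nfs filer:/tank/archive /mnt/archive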

The interesting alternative is to set up Comstar on SXCE, create
zpools and volumes, and make these available either over a fibre
infrastructure, or iSCSI.  I'm quite excited by this as a solution,
but I'm not sure if it's really production ready.
  
If you want a fibre channel target then you will need to use OpenSolaris
or SXDE, I believe. It's not available in mainstream Solaris yet. I am
personally waiting until it has been *well* tested in the bleeding-edge
community. I have too much data to take big risks with it.

What other options are there, and what advice/experience can you share?
  
I do very similar stuff here with J4500's and T2K's for compliance 
archives, NFS and iSCSI targets
for Windows machines. Works fine for me. Biggest system is 48TB on J4500 
for Veritas Netbackup
DDT staging volumes. Very good throughput indeed. Perfect in fact, based 
on the large files that
are created in this environment. One of these J4500's can keep 4 LTO4 
drives in a SL500  saturated with

data on a T5220. (4 streams at ~160 MB/sec)

I think you have pretty much the right idea though. Certainly if you use 
Sun kit you will be able to deliver

a commercially supported solution for them.

Thanks,

S.
  


--
_

Scott Lawson
Systems Architect
Information Communication Technology Services

Manukau Institute of Technology
Private Bag 94006
South Auckland Mail Centre
Manukau 2240
Auckland
New Zealand

Phone  : +64 09 968 7611
Fax: +64 09 968 7641
Mobile : +64 27 568 7611

mailto:sc...@manukau.ac.nz

http://www.manukau.ac.nz

__

perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

__


