Re: [zfs-discuss] zfs + NFS + FreeBSD with performance prob

2013-02-05 Thread Albert Shih
 On 04/02/2013 at 11:21:12-0500, Paul Kraus wrote:
 On Jan 31, 2013, at 5:16 PM, Albert Shih wrote:
 
  Well, I have a server running FreeBSD 9.0 (not counting /, which is on
  different disks) with a ZFS pool of 36 disks.
  
  The performance is very, very good on the server.
  
  I have one NFS client running FreeBSD 8.3 and the performance over NFS
  is very good:
  
  For example: read from the client and write over NFS to ZFS:
  
  [root@ .tmp]# time tar xf /tmp/linux-3.7.5.tar
  
  real 1m7.244s  user 0m0.921s  sys 0m8.990s
  
  This client is on a 1 Gbit/s network link and the same network switch as
  the server.
  
  I have a second NFS client running FreeBSD 9.1-STABLE, and on this
  second client the performance is catastrophic. After 1 hour the tar
  still isn't finished.  OK, this second client is connected at 100 Mbit/s and
  not on the same switch, but still, going from ~2 min to ~90 min... :-(
  
  For this second client I tried, on the ZFS/NFS server,
  
  zfs set sync=disabled
  
  and that changed nothing.
 
 I have been using FreeBSD 9 with ZFS and NFS to a couple Mac OS X
 (10.6.8 Snow Leopard) boxes and I get between 40 and 50 MB/sec

Thanks for your answer.

Can you give me the average ping time between your client and the NFS server?
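For reference, a minimal way to measure it on the client side (the hostname below is a placeholder, and this assumes the server answers ICMP); with synchronous NFS writes every file created by the tar costs at least one network round trip, so a high RTT shows up directly in the total runtime:

  ping -c 20 -q nfs-server    # the "round-trip min/avg/max" summary line is the number of interest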

Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
mar 5 fév 2013 16:15:11 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs + NFS + FreeBSD with performance prob

2013-01-31 Thread Albert Shih
Hi all,

I'm not sure if the problem is with FreeBSD or ZFS or both, so I'm cross-posting
(I know it's bad).

Well, I have a server running FreeBSD 9.0 (not counting /, which is on
different disks) with a ZFS pool of 36 disks.

The performance is very, very good on the server.

I have one NFS client running FreeBSD 8.3, and the performance over NFS is
very good:

For example: read from the client and write over NFS to ZFS:

[root@ .tmp]# time tar xf /tmp/linux-3.7.5.tar 

real    1m7.244s
user    0m0.921s
sys     0m8.990s

This client is on a 1 Gbit/s network link and the same network switch as the
server.

I have a second NFS client running FreeBSD 9.1-STABLE, and on this second
client the performance is catastrophic. After 1 hour the tar still isn't finished.
OK, this second client is connected at 100 Mbit/s and not on the same switch,
but still, going from ~2 min to ~90 min... :-(

For this second client I tried, on the ZFS/NFS server,

zfs set sync=disabled 

and that changed nothing.

On a third NFS client running Linux (recent Ubuntu) I get almost the same catastrophic
performance, with or without sync=disabled.

All three NFS clients use TCP.

If I do a classic scp I get a normal speed of ~9-10 Mbytes/s, so the network is
not the problem.
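One way to narrow it down further, as a rough sketch (the mount point and path below are placeholders): check what mount options the slow client actually uses, and compare one large sequential write with the tar of many small files, since per-file synchronous NFS operations behave very differently from bulk throughput:

  # on the slow FreeBSD client
  mount -t nfs                                            # shows the NFS mounts and their options
  dd if=/dev/zero of=/mnt/nfs/testfile bs=1m count=1000   # one big sequential write over NFS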

I tried something like this (found with Google):

net.inet.tcp.sendbuf_max: 2097152 - 16777216
net.inet.tcp.recvbuf_max: 2097152 - 16777216
net.inet.tcp.sendspace: 32768 - 262144
net.inet.tcp.recvspace: 65536 - 262144
net.inet.tcp.mssdflt: 536 - 1452
net.inet.udp.recvspace: 42080 - 65535
net.inet.udp.maxdgram: 9216 - 65535
net.local.stream.recvspace: 8192 - 65535
net.local.stream.sendspace: 8192 - 65535


and that changed nothing either.
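For completeness, a minimal sketch of how such tunables are usually applied on FreeBSD (at runtime with sysctl(8), persistently via /etc/sysctl.conf); the values are simply the ones listed above, not a recommendation:

  sysctl net.inet.tcp.sendbuf_max=16777216
  sysctl net.inet.tcp.recvbuf_max=16777216
  sysctl net.inet.tcp.sendspace=262144
  sysctl net.inet.tcp.recvspace=262144
  # to keep them across reboots, add the same name=value lines to /etc/sysctl.conf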

Does anyone have any idea?

Regards.

JAS

-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
jeu 31 jan 2013 23:04:47 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-12-06 Thread Albert Shih
 On 01/12/2012 at 08:33:31-0700, Jan Owoc wrote:
Hi,

Sorry, I've been very busy these past few days.

  
   http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
 
 The commands described on that page do not have direct equivalents in
 zfs. There is currently no way to reduce the number of top-level
 vdevs in a pool or to change the RAID level.

OK. 

   I have a zpool with 48 disks in 4 raidz2 vdevs (12 disks each). Of those 48
   disks, 36 are 3 TB and 12 are 2 TB.
   Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool,
   ask the zpool to migrate all data from those 12 old disks onto the new ones,
   and then remove the old disks?
 
 In your specific example this means that you have 4 RAIDZ2 vdevs of 12
 disks each. ZFS doesn't allow you to reduce the number of vdevs from 4. ZFS
 doesn't allow you to change any of them from RAIDZ2 to any other
 configuration (e.g. RAIDZ). ZFS doesn't allow you to change the fact
 that you have 12 disks in a vdev.

OK thanks. 

 
 If you don't have a full set of new disks on a new system, or enough
 room on backup tapes to do a backup-restore, there are only two ways
 to add capacity to the pool:
 1) add a 5th top-level vdev (eg. another set of 12 disks)

That's not a problem. 

 2) replace the disks with larger ones one-by-one, waiting for a
 resilver in between

This is the point where I don't see how to do it. I currently have 48 disks, from
/dev/da0 to /dev/da47 (I'm on FreeBSD 9.0), let's say 3 TB each.

I have 4 raidz2 vdevs, the first from /dev/da0 to /dev/da11, etc.

So I physically add a new enclosure with 12 new disks, for example 4 TB disks.

I'm going to have new devices /dev/da48 -- /dev/da59.

Say I want to remove /dev/da0 - /dev/da11. First I pull out /dev/da0.
The first raidz2 is going to be in a «degraded» state. So I tell the
pool that the new disk is /dev/da48.

I repeat this process until /dev/da11 has been replaced by /dev/da59.

But at the end, how much space am I going to use on those /dev/da48 --
/dev/da59 disks? Am I going to have 3 TB or 4 TB per disk? Since during each
replacement ZFS only uses 3 TB of the new disk, how at the end is it going to
magically use 4 TB?
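For illustration, a hedged sketch of the sequence being described (the pool name "tank" is a placeholder, and this assumes the installed ZFS supports the autoexpand pool property); the extra space only becomes usable once every disk in the raidz2 vdev has been replaced, and only if the pool is allowed to expand:

  zpool set autoexpand=on tank     # let the vdev grow once all of its disks are bigger
  zpool replace tank da0 da48      # start the first replacement
  zpool status tank                # wait for the resilver to finish before the next disk
  # ... repeat for da1/da49 through da11/da59 ...
  # with autoexpand left off, 'zpool online -e tank da48' (and so on) expands the devices afterwards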

Second question: when I pull out the first enclosure, meaning the
old /dev/da0 -- /dev/da11, and reboot the server, the kernel is going to give
those disks new numbers, meaning

old /dev/da12 -- /dev/da0
old /dev/da13 -- /dev/da1
etc...
old /dev/da59 -- /dev/da47

How is ZFS going to manage that?

  When I change the disks, I would also like to change the disk
  enclosure; I don't want to use the old one.
 
 You didn't give much detail about the enclosure (how it's connected,
 how many disk bays it has, how it's used etc.), but are you able to
 power off the system and transfer all the disks at once?

Server : Dell PowerEdge 610
4 x enclosure : MD1200 with 12 disks of 3 TB each
Connection : SAS
SAS Card : LSI
enclosures are chained : 

server -- MD1200.1 -- MD1200.2 -- MD1200.3 -- MD1200.4


 
 
  And what happens if I have 24 or 36 disks to change? It would take months to do
  that.
 
 Those are the current limitations of zfs. Yes, with 12x2TB of data to
 copy it could take about a month.

OK. 

 
 If you are feeling particularly risky and have backups elsewhere, you
 could swap two drives at once, but then you lose all your data if one
 of the remaining 10 drives in the vdev failed.

OK. 

Thanks for the help

Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
jeu 6 déc 2012 09:20:55 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-12-01 Thread Albert Shih
 On 30/11/2012 at 15:52:09+0100, Tomas Forsman wrote:
 On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:
 
  Hi all,
  
  I would like to know if with ZFS it's possible to do something like this:
  
  http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
  
  meaning : 
  
  I have a zpool with 48 disks in 4 raidz2 vdevs (12 disks each). Of those 48 disks,
  36 are 3 TB and 12 are 2 TB.
  Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool, ask
  the zpool to migrate all data from those 12 old disks onto the new ones, and
  then remove the old disks?
 
 You pull out one 2T, put in a 4T, wait for resilver (possibly tell it to
 replace, if you don't have autoreplace on)
 Repeat until done.
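As a small illustration of the autoreplace remark above (the pool name "tank" is a placeholder):

  zpool get autoreplace tank     # "off" by default, so a manual 'zpool replace' is needed
  zpool set autoreplace=on tank  # a new disk in the same slot is then used for replacement automatically
  zpool status -v tank           # shows resilver progress and an estimated completion time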

Well, in fact it's a little more complicated than that.

When I change the disks, I would also like to change the disk
enclosure; I don't want to use the old one.

And what happens if I have 24 or 36 disks to change? It would take months to do
that.

Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
sam 1 déc 2012 12:17:39 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Remove disk

2012-11-30 Thread Albert Shih
Hi all,

I would like to know if with ZFS it's possible to do something like this:

http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html

meaning : 

I have a zpool with 48 disks in 4 raidz2 vdevs (12 disks each). Of those 48 disks,
36 are 3 TB and 12 are 2 TB.
Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool, ask
the zpool to migrate all data from those 12 old disks onto the new ones, and
then remove the old disks?

Regards.


-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
ven 30 nov 2012 15:18:32 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How many disk in one pool

2012-10-05 Thread Albert Shih
Hi all,

I'm currently running ZFS under FreeBSD. I have a question about how many
disks I «can» have in one pool.

At this moment I'm running one server (FreeBSD 9.0) with 4 MD1200 enclosures
(Dell), meaning 48 disks. I've configured 4 raidz2 vdevs in the pool (one on
each MD1200).

From what I understand I can add more MD1200s. But if I lose one MD1200
for any reason I lose the entire pool.

In your experience, what's the «limit»? 100 disks?

How does FreeBSD manage 100 disks? /dev/da100?

Regards.

JAS

-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
ven 5 oct 2012 22:52:22 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] snapshot size

2012-06-05 Thread Albert Shih
Hi all,

Two questions from a newbie.

1/ What does REFER mean in zfs list?

2/ How can I know the total size of all snapshots for a filesystem?
(OK, I can add up the sizes from zfs list -t snapshot)

Regards.

JAS


-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@jabber.obspm.fr
Heure local/Local time:
mar 5 jui 2012 16:57:38 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snapshot size

2012-06-05 Thread Albert Shih
 On 05/06/2012 at 17:08:51+0200, Stefan Ring wrote:
  Two questions from a newbie.
 
         1/ What does REFER mean in zfs list?
 
 The amount of data that is reachable from the file system root. It's
 just what I would call the contents of the file system.

OK thanks. 

 
         2/ How can I know the total size of all snapshots for a filesystem?
         (OK, I can add up the sizes from zfs list -t snapshot)
 
 zfs get usedbysnapshots zfs-name

Thanks.

Can I say

USED - REFER = snapshot size?
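For reference, the space accounting properties let you check this directly (the dataset name below is a placeholder):

  zfs get used,referenced,usedbysnapshots,usedbydataset,usedbychildren,usedbyrefreservation tank/data
  # used = usedbydataset + usedbysnapshots + usedbychildren + usedbyrefreservation,
  # so USED - REFER is not, in general, the snapshot usage; usedbysnapshots is the reliable number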


Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@jabber.obspm.fr
Heure local/Local time:
mar 5 jui 2012 17:16:07 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs disappeared on FreeBSD.

2012-01-17 Thread Albert Shih
Hi all. 

I'm a total newbie with ZFS, so if I ask a stupid question, please don't
send angry mail to the mailing list, send it directly to me ;-)

Well, I have a Dell server running FreeBSD 9.0 with 4 MD1200 enclosures and 48 disks.
They're connected through an LSI card, so I can see all of /dev/da0 -- /dev/da47.

I've created a zpool with 4 raidz2 vdevs (one for each MD1200).

After that I reinstalled the server (I had set the wrong swap size), and now I
can't find my zpool at all.

Is that normal?

Regards.

JAS
-- 
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
mar 17 jan 2012 15:12:47 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs disappeared on FreeBSD.

2012-01-17 Thread Albert Shih
 On 17/01/2012 at 06:31:22-0800, Brad Stone wrote:
 Try zpool import

Thanks. 

It's working.
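For anyone hitting the same thing, the usual sequence looks like this (the pool name is a placeholder); a reinstall doesn't erase the pool, it just leaves it un-imported:

  zpool import            # with no argument: scans the devices and lists pools that can be imported
  zpool import tank       # imports the pool by name
  zpool import -f tank    # force it if the pool wasn't cleanly exported before the reinstall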

Regards.

-- 
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
mar 17 jan 2012 15:35:31 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] CPU sizing for ZFS/iSCSI/NFS server

2011-12-12 Thread Albert Chin
We're preparing to purchase an X4170M2 as an upgrade for our existing
X4100M2 server for ZFS, NFS, and iSCSI. We have a choice for CPU, some
more expensive than others. Our current system has a dual-core 1.8Ghz
Opteron 2210 CPU with 8GB. Seems like either a 6-core Intel E5649
2.53Ghz CPU or 4-core Intel E5620 2.4Ghz CPU would be more than
enough. Based on what we're using the system for, it should be more
I/O bound than CPU bound. We are doing compression in ZFS but that
shouldn't be too CPU intensive. Seems we should be caring more about
more cores than high Ghz.

Recommendations?

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CPU sizing for ZFS/iSCSI/NFS server

2011-12-12 Thread Albert Chin
On Mon, Dec 12, 2011 at 02:40:52PM -0500, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. 
wrote:
 please check out the ZFS appliance 7120 spec 2.4Ghz /24GB memory and
 ZIL(SSD)
 may be try the ZFS simulator SW

Good point. Thanks.

 regards
 
 On 12/12/2011 2:28 PM, Albert Chin wrote:
 We're preparing to purchase an X4170M2 as an upgrade for our existing
 X4100M2 server for ZFS, NFS, and iSCSI. We have a choice for CPU, some
 more expensive than others. Our current system has a dual-core 1.8Ghz
 Opteron 2210 CPU with 8GB. Seems like either a 6-core Intel E5649
 2.53Ghz CPU or 4-core Intel E5620 2.4Ghz CPU would be more than
 enough. Based on what we're using the system for, it should be more
 I/O bound than CPU bound. We are doing compression in ZFS but that
 shouldn't be too CPU intensive. Seems we should be caring more about
 more cores than high Ghz.
 
 Recommendations?
 
 
 -- 
 Hung-Sheng Tsao Ph D.
 Founder  Principal
 HopBit GridComputing LLC
 cell: 9734950840
 http://laotsao.wordpress.com/
 http://laotsao.blogspot.com/
 

 


-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CPU sizing for ZFS/iSCSI/NFS server

2011-12-12 Thread Albert Chin
On Mon, Dec 12, 2011 at 03:01:08PM -0500, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. 
wrote:
 4c@2.4ghz

Yep, that's the plan. Thanks.

 On 12/12/2011 2:44 PM, Albert Chin wrote:
 On Mon, Dec 12, 2011 at 02:40:52PM -0500, Hung-Sheng Tsao (Lao Tsao 老曹) 
 Ph.D. wrote:
 please check out the ZFS appliance 7120 spec 2.4Ghz /24GB memory and
 ZIL(SSD)
 may be try the ZFS simulator SW
 Good point. Thanks.
 
 regards
 
 On 12/12/2011 2:28 PM, Albert Chin wrote:
 We're preparing to purchase an X4170M2 as an upgrade for our existing
 X4100M2 server for ZFS, NFS, and iSCSI. We have a choice for CPU, some
 more expensive than others. Our current system has a dual-core 1.8Ghz
 Opteron 2210 CPU with 8GB. Seems like either a 6-core Intel E5649
 2.53Ghz CPU or 4-core Intel E5620 2.4Ghz CPU would be more than
 enough. Based on what we're using the system for, it should be more
 I/O bound than CPU bound. We are doing compression in ZFS but that
 shouldn't be too CPU intensive. Seems we should be caring more about
 more cores than high Ghz.
 
 Recommendations?
 
 -- 
 Hung-Sheng Tsao Ph D.
 Founder   Principal
 HopBit GridComputing LLC
 cell: 9734950840
 http://laotsao.wordpress.com/
 http://laotsao.blogspot.com/
 
 
 
 
 -- 
 Hung-Sheng Tsao Ph D.
 Founder  Principal
 HopBit GridComputing LLC
 cell: 9734950840
 http://laotsao.wordpress.com/
 http://laotsao.blogspot.com/

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-27 Thread Albert Shih
 On 19/10/2011 at 19:23:26-0700, Rocky Shek wrote:

 Hi. 

 Thanks for this information. 

 I also recommend LSI 9200-8E or new 9205-8E with the IT firmware based on
 past experience

Do you know if the LSI-9205-8E HBA or the LSI-9202-16E HBA works under FreeBSD
9.0?

Best regards.

Regards.
-- 
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
jeu 27 oct 2011 17:20:11 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Albert Shih
 On 19/10/2011 at 21:30:31+0700, Fajar A. Nugraha wrote:
  Sorry for cross-posting. I don't know which mailing list I should post this
  message to.
 
  I would like to use FreeBSD with ZFS on some Dell servers with some
  MD1200 enclosures (classic DAS).
 
  When we buy an MD1200 we need a PERC H800 RAID card in the server, so we have
  two options:
 
         1/ create one LV on the PERC H800 so the server sees one volume, put
         the zpool on this single volume, and let the hardware manage the
         RAID.
 
         2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
         and ZFS manage the RAID.
 
  which one is the best solution?
 
 Neither.
 
 The best solution is to find a controller which can pass the disk as
 JBOD (not encapsulated as virtual disk). Failing that, I'd go with (1)
 (though others might disagree).

Thanks. That's going to be very complicated... but I'm going to try.

 
 
  Any advice about the RAM I need on the server (currently one MD1200, so
  12 x 2 TB disks)?
 
 The more the better :)

Well, my employer is not so rich. 

It's the first time I'm going to use ZFS on FreeBSD in production (I use it on my
laptop but that means nothing), so in your opinion what's the minimum RAM
I need? Is something like 48 GB enough?

 Just make sure you do NOT use dedup until you REALLY know what you're
 doing (which usually means buying lots of RAM and SSD for L2ARC).

Ok. 

Regards.

JAS
--
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
jeu 20 oct 2011 11:30:49 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Albert Shih
 On 19/10/2011 at 10:52:07-0400, Krunal Desai wrote:
 On Wed, Oct 19, 2011 at 10:14 AM, Albert Shih albert.s...@obspm.fr wrote:
  When we buy an MD1200 we need a PERC H800 RAID card in the server, so we have
  two options:
 
         1/ create one LV on the PERC H800 so the server sees one volume, put
         the zpool on this single volume, and let the hardware manage the
         RAID.
 
         2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
         and ZFS manage the RAID.
 
  which one is the best solution?
 
  Any advice about the RAM I need on the server (currently one MD1200, so
  12 x 2 TB disks)?
 
 I know the PERC H200 can be flashed with IT firmware, making it in
 effect a dumb HBA perfect for ZFS usage. Perhaps the H800 has the
 same? (If not, can you get the machine configured with a H200?)

I'm not sure what you mean by «H200 flashed with IT firmware»?

 If that's not an option, I think Option 2 will work. My first ZFS
 server ran on a PERC 5/i, and I was forced to make 8 single-drive RAID
 0s in the PERC Option ROM, but Solaris did not seem to mind that.

OK.

I don't have a choice (too complex to explain and it's meaningless here), but I
can only buy from Dell at this moment.

On the Dell website I have the choice between:


SAS 6Gbps External Controller
PERC H800 RAID Adapter for External JBOD, 512MB Cache, PCIe 
PERC H800 RAID Adapter for External JBOD, 512MB NV Cache, PCIe 
PERC H800 RAID Adapter for External JBOD, 1GB NV Cache, PCIe
PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 256MB Cache
PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 512MB Cache
LSI2032 SCSI Internal PCIe Controller Card

I've no idea what the first thing is. But from what I understand, the best
solution is either the first or the last?

Regards.

JAS

-- 
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
jeu 20 oct 2011 11:44:39 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS on Dell with FreeBSD

2011-10-19 Thread Albert Shih
Hi 

Sorry for cross-posting. I don't know which mailing list I should post this
message to.

I would like to use FreeBSD with ZFS on some Dell servers with some
MD1200 enclosures (classic DAS).

When we buy an MD1200 we need a PERC H800 RAID card in the server, so we have
two options:

1/ create one LV on the PERC H800 so the server sees one volume, put
the zpool on this single volume, and let the hardware manage the
RAID.

2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
and ZFS manage the RAID.

Which one is the best solution?

Any advice about the RAM I need on the server (currently one MD1200, so 12 x 2 TB
disks)?

Regards.

JAS
-- 
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
mer 19 oct 2011 16:11:40 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [OpenIndiana-discuss] zpool lost, and no pools available?

2011-05-24 Thread Albert Lee
man zpool /failmode
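That is, the pool-wide failmode property; a quick sketch of checking and changing it (pool names are placeholders):

  zpool get failmode rpool            # "wait" is the default: I/O blocks until the device returns
  zpool set failmode=continue tank    # return EIO to new writes instead of blocking
  # the third setting, failmode=panic, deliberately panics the host on catastrophic pool failure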

-Albert

On Tue, May 24, 2011 at 1:20 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net 
wrote:
 Hi all

 I just attended this HTC conference and had a chat with a guy from UiO 
 (university of oslo) about ZFS. He claimed Solaris/OI will die silently if a 
 single pool fails. I have seen similar earlier, then due to a bug in ZFS (two 
 drives lost in a RAIDz2, spares taking over, resilvering and then a third 
 drive lost), and the system is hanging. Not even the rpool seems to be 
 available.

 Can someone please confirm this or tell me if there is a workaround available?

 Vennlige hilsener / Best regards

 roy
 --
 Roy Sigurd Karlsbakk
 (+47) 97542685
 r...@karlsbakk.net
 http://blogg.karlsbakk.net/
 --
 In all pedagogy it is essential that the curriculum be presented intelligibly. It
 is an elementary imperative for all pedagogues to avoid excessive use of
 idioms of foreign origin. In most cases adequate and relevant synonyms exist
 in Norwegian.

 ___
 OpenIndiana-discuss mailing list
 openindiana-disc...@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Albert

On 04.04.2011 12:44, Fajar A. Nugraha wrote:

On Mon, Apr 4, 2011 at 4:49 PM, For@llfor...@stalowka.info  wrote:

What can I do that zpool show new value?

zpool set autoexpand=on TEST
zpool set autoexpand=off TEST
  -- richard

I tried your suggestion, but no effect.

Did you modify the partition table?

IIRC if you pass a DISK to zpool create, it would create
partition/slice on it, either with SMI (the default for rpool) or EFI
(the default for other pool). When the disk size changes (like when
you change LUN size on storage node side), you PROBABLY need to resize
the partition/slice as well.

When I test with openindiana b148, simply setting zpool set
autoexpand=on is enough (I tested with Xen, and an openindiana reboot is
required). Again, you might need to set both autoexpand=on and
resize partition slice.

As a first step, try choosing c2t1d0 in format, and see what the
size of this first slice is.


Hi,

I ran format and changed the type to auto configure, and now I see the new 
value if I choose partition > print, but when I exit format and 
reboot, the old value stays. How can I write the new settings?
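In case it helps, the usual sequence in format(1M) is roughly this (disk name as in the earlier suggestion); the new geometry only sticks once the label is written back to the disk:

  format                          # select c2t1d0 from the menu
  format> type                    # choose "0. Auto configure"
  format> label                   # writes the new label/partition table to the disk
  format> quit
  zpool set autoexpand=on TEST    # then the pool can grow into the new space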



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Disk failed, System not booting

2010-12-20 Thread Albert Frenz
hi there,

i got freenas installed with a raidz1 pool of 3 disks. one of them has now failed 
and it gives me errors like Unrecovered read errors: auto reallocate failed or 
MEDIUM ERROR asc:11,4, and the system won't even boot up. so i bought a 
replacement drive, but i am a bit concerned since normally you should detach the 
drive via the terminal. i can't do that, since it won't boot up. so am i safe if i 
just shut down the machine, replace the drive with the new one and resilver?
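for what it's worth, a rough sketch of what the replacement usually looks like once the machine is back up (pool and device names below are placeholders):

  zpool status tank                              # the failed disk shows as UNAVAIL or REMOVED
  zpool replace tank <old-device> <new-device>   # or just 'zpool replace tank <device>' if the
                                                 # new disk appears under the old device name
  zpool status tank                              # resilver starts; the raidz1 pool stays usable meanwhile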

thanks in advance
adrian
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Best choice - file system for system

2010-12-08 Thread Albert

Hi,
I wonder which is the better option: install the system on Solaris UFS and keep 
only sensitive data on ZFS, or is it best to have everything on ZFS?

What are the pros and cons of such a solution?

f...@ll

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot limit?

2010-12-01 Thread Albert

On 2010-12-01 15:19, Menno Lageman wrote:

f...@ll wrote:

Hi,

I must send a ZFS snapshot from one server to another. The snapshot has a size
of 130 GB. Now I have a question: does zfs have any limit on the size it can send?


If you are sending the snapshot to another zpool (i.e. using 'zfs send |
zfs recv') then no, there is no limit. If you however send the snapshot
to a file on the other system (i.e. 'zfs send > somefile') then you are
limited by what the file system you are creating the file on supports.

Menno
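As a concrete sketch of the two forms described above (dataset, snapshot and host names are placeholders):

  # pool to pool: no size limit imposed by zfs itself
  zfs send tank/data@snap | ssh otherhost zfs recv -F backuppool/data

  # to a file: limited only by the file system holding the file
  zfs send tank/data@snap > /backup/data-snap.zfs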



Hi,

In my situation it's the first option: I send the snapshot to another server using 
zfs send | zfs recv, and I have a problem when the data send is completed: 
after a reboot the zpool has errors or is in the FAULTED state.
The first server is physical, the second is a virtual machine running under 
xenserver 5.6.


f...@ll

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Open Solaris installation help for backup application

2010-06-23 Thread Albert Davis
This forum has been tremendously helpful, but I decided to get some help from a 
Solaris guru to install Solaris for a backup application.

I do not want to disturb the flow of this forum, but where can I post to get 
some paid help on this forum? We are located in the San Francisco Bay Area. Any 
help would be appreciated.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] upgrade zfs stripe

2010-04-20 Thread Albert Frenz
ok thanks for the fast info. that sounds really awesome. i am glad i tried out 
zfs, so i no longer have to worry about these issues, and the fact that i can 
switch back and forth between stripe and mirror is amazing. money was short, so 
only 2 disks had been put in, and since the data is not that valuable i was aware 
of the non-redundancy. though now that i know about that feature i will surely 
add a disk for it soon. thanks again.

adrian
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] upgrade zfs stripe

2010-04-19 Thread Albert Frenz
hi there,

since i am really new to zfs, i have 2 important questions to start with. i have a 
nas up and running zfs in stripe mode with 2x 1.5tb hdds. my question, for future 
proofing, would be: could i just add another drive to the pool and have zfs 
integrate it flawlessly? and second, could this hdd also be a different size than 
1.5tb? so could i put in a 2tb drive and integrate it as well?
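for reference, a minimal sketch of what that looks like (pool and device names are placeholders); in a plain stripe each added disk becomes another top-level vdev, and it may be a different size:

  zpool add tank ada2    # adds the new disk; capacity grows by roughly its size
  zpool list tank        # shows the new total size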

thanks in advance

adrian
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] raidz using partitions

2010-01-27 Thread Albert Frenz
hi there,

maybe this is a stupid question, yet i haven't found an answer anywhere ;)
let's say i have 3x 1.5tb hdds; can i create equal partitions out of each and make 
a raid5 out of them? sure, the safety would drop, but that is not that important 
to me. with roughly 500gb partitions and the raid5 formula of (n-1) * smallest 
drive, i should be able to get 4tb of storage instead of 3tb when using 3x 1.5tb in 
a normal raid5.

thanks for your answers

greetings
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz using partitions

2010-01-27 Thread Albert Frenz
ok nice to know :) thank you very much for your quick answer
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] cannot receive new filesystem stream: invalid backup stream

2009-12-27 Thread Albert Chin
I have two snv_126 systems. I'm trying to zfs send a recursive snapshot
from one system to another:
  # zfs send -v -R tww/opt/chro...@backup-20091225 |\
  ssh backupserver zfs receive -F -d -u -v tww
  ...
  found clone origin tww/opt/chroots/a...@ab-1.0
  receiving incremental stream of tww/opt/chroots/ab-...@backup-20091225 into 
tww/opt/chroots/ab-...@backup-20091225
  cannot receive new filesystem stream: invalid backup stream

If I do the following on the origin server:
  # zfs destroy -r tww/opt/chroots/ab-1.0
  # zfs list -t snapshot -r tww/opt/chroots | grep ab-1.0 
  tww/opt/chroots/a...@ab-1.0
  tww/opt/chroots/hppa1.1-hp-hpux11...@ab-1.0
  tww/opt/chroots/hppa1.1-hp-hpux11...@ab-1.0
  ...
  # zfs list -t snapshot -r tww/opt/chroots | grep ab-1.0 |\
  while read a; do zfs destroy $a; done
then another zfs send like the above, the zfs send/receive succeeds.
However, If I then perform a few operations like the following:
  zfs snapshot tww/opt/chroots/a...@ab-1.0
  zfs clone tww/opt/chroots/a...@ab-1.0 tww/opt/chroots/ab-1.0
  zfs rename tww/opt/chroots/ab/hppa1.1-hp-hpux11.00 
tww/opt/chroots/ab-1.0/hppa1.1-hp-hpux11.00
  zfs rename tww/opt/chroots/hppa1.1-hp-hpux11...@ab 
tww/opt/chroots/hppa1.1-hp-hpux11...@ab-1.0
  zfs destroy tww/opt/chroots/ab/hppa1.1-hp-hpux11.00
  zfs destroy tww/opt/chroots/hppa1.1-hp-hpux11...@ab
  zfs snapshot tww/opt/chroots/hppa1.1-hp-hpux11...@ab
  zfs clone tww/opt/chroots/hppa1.1-hp-hpux11...@ab 
tww/opt/chroots/ab/hppa1.1-hp-hpux11.00
  zfs rename tww/opt/chroots/ab/hppa1.1-hp-hpux11.11 
tww/opt/chroots/ab-1.0/hppa1.1-hp-hpux11.11
  zfs rename tww/opt/chroots/hppa1.1-hp-hpux11...@ab 
tww/opt/chroots/hppa1.1-hp-hpux11...@ab-1.0
  zfs destroy tww/opt/chroots/ab/hppa1.1-hp-hpux11.11
  zfs destroy tww/opt/chroots/hppa1.1-hp-hpux11...@ab
  zfs snapshot tww/opt/chroots/hppa1.1-hp-hpux11...@ab
  zfs clone tww/opt/chroots/hppa1.1-hp-hpux11...@ab 
tww/opt/chroots/ab/hppa1.1-hp-hpux11.11
  ...
and then perform another zfs send/receive, the error above occurs. Why?

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-11-05 Thread Albert Chin
On Thu, Nov 05, 2009 at 01:01:54PM -0800, Chris Du wrote:
 I think I finally see what you mean.
 
 # luactivate b126
 System has findroot enabled GRUB
 ERROR: Unable to determine the configuration of the current boot environment 
 b125.

A possible solution was posted in the thread:
  http://opensolaris.org/jive/thread.jspa?threadID=115503&tstart=0

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send... too slow?

2009-10-26 Thread Albert Chin
On Sun, Oct 25, 2009 at 01:45:05AM -0700, Orvar Korvar wrote:
 I am trying to backup a large zfs file system to two different
 identical hard drives. I have therefore started two commands to backup
 myfs and when they have finished, I will backup nextfs
 
 zfs send mypool/m...@now | zfs receive backupzpool1/now & zfs send
 mypool/m...@now | zfs receive backupzpool2/now ; zfs send
 mypool/nex...@now | zfs receive backupzpool3/now
 
 in parallel. The logic is that the same file data is cached and
 therefore easy to send to each backup drive.
 
 Should I instead have done one zfs send... and waited for it to
 complete, and then started the next?
 
 It seems that zfs send... takes quite some time? 300GB takes 10
 hours, this far. And I have in total 3TB to backup. This means it will
 take 100 hours. Is this normal? If I had 30TB to back up, it would
 take 1000 hours, which is more than a month. Can I speed this up?

It's not immediately obvious what the cause is. Maybe the server running
zfs send has slow MB/s performance reading from disk. Maybe the network.
Or maybe the remote system. This might help:
  http://tinyurl.com/yl653am
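One way to split the problem, as a sketch: time the send on its own, with no receive in the pipeline, to see whether the source pool can even read and generate the stream quickly:

  time zfs send mypool/myfs@now > /dev/null
  # if this alone is slow, the bottleneck is reading/generating the stream on the
  # source pool; if it is fast, look at the network path or the receiving pool next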

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

2009-10-26 Thread Albert Chin
On Mon, Oct 26, 2009 at 09:58:05PM +0200, Mertol Ozyoney wrote:
 In all 2500 and 6000 series you can assign raid set's to a controller and
 that controller becomes the owner of the set. 

When I configured all 32-drives on a 6140 array and the expansion
chassis, CAM automatically split the drives amongst controllers evenly.

 The advantage of the 2540 over its bigger brothers (the 6140, which is EOL'ed)
 and competitors is that the 2540 uses dedicated data paths for cache mirroring,
 just like the higher-end units (6180, 6580, 6780), improving write performance
 significantly.
 
 Splitting load between controllers can increase performance most of the time,
 but you do not need to split into two equal partitions.
 
 Also do not forget that the first tray has dedicated data lines to the
 controller, so generally it's wise not to mix those drives with other drives
 on other trays.

But, if you have an expansion chassis, and create a zpool with drives on
the first tray and subsequent trays, what's the difference? You cannot
tell zfs which vdev to assign writes to so it seems pointless to balance
your pool based on the chassis when reads/writes are potentially spread
across all vdevs.

 Best regards
 Mertol  
 
 
 
 
 Mertol Ozyoney 
 Storage Practice - Sales Manager
 
 Sun Microsystems, TR
 Istanbul TR
 Phone +902123352200
 Mobile +905339310752
 Fax +90212335
 Email mertol.ozyo...@sun.com
 
 
 
 -Original Message-
 From: zfs-discuss-boun...@opensolaris.org
 [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
 Sent: Tuesday, October 13, 2009 10:59 PM
 To: Nils Goroll
 Cc: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)
 
 On Tue, 13 Oct 2009, Nils Goroll wrote:
 
  Regarding my bonus question: I haven't found yet a definite answer if
 there 
  is a way to read the currently active controller setting. I still assume
 that 
  the nvsram settings which can be read with
 
  service -d arrayname -c read -q nvsram region=0xf2 host=0x00
 
  do not necessarily reflect the current configuration and that the only way
 to 
  make sure the controller is running with that configuration is to reset
 it.
 
 I believe that in the STK 2540, the controllers operate Active/Active 
 except that each controller is Active for half the drives and Standby 
 for the others.  Each controller has a copy of the configuration 
 information.  Whichever one you communicate with is likely required to 
 mirror the changes to the other.
 
 In my setup I load-share the fiber channel traffic by assigning six 
 drives as active on one controller and six drives as active on the 
 other controller, and the drives are individually exported with a LUN 
 per drive.  I used CAM to do that.  MPXIO sees the changes and does 
 map 1/2 the paths down each FC link for more performance than one FC 
 link offers.
 
 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Performance problems with Thumper and 7TB ZFS pool using RAIDZ2

2009-10-24 Thread Albert Chin
On Sat, Oct 24, 2009 at 03:31:25PM -0400, Jim Mauro wrote:
 Posting to zfs-discuss. There's no reason this needs to be
 kept confidential.

 5-disk RAIDZ2 - doesn't that equate to only 3 data disks?
 Seems pointless - they'd be much better off using mirrors,
 which is a better choice for random IO...

Is it really pointless? Maybe they want the insurance RAIDZ2 provides.
Given the choice between insurance and performance, I'll take insurance,
though it depends on your use case. We're using 5-disk RAIDZ2 vdevs.
While I want the performance a mirrored vdev would give, it scares me
that you're just one drive away from a failed pool. Of course, you could
have two mirrors in each vdev but I don't want to sacrifice that much
space. However, over the last two years, we haven't had any
demonstrable failures that would give us cause for concern. But, it's
still unsettling.

Would love to hear other opinions on this.

 Looking at this now...

 /jim


 Jeff Savit wrote:
 Hi all,

 I'm looking for suggestions for the following situation: I'm helping  
 another SE with a customer using Thumper with a large ZFS pool mostly  
 used as an NFS server, and disappointments in performance. The storage  
 is an intermediate holding place for data to be fed into a relational  
 database, and the statement is that the NFS side can't keep up with  
 data feeds written to it as flat files.

 The ZFS pool has 8 5-volume RAIDZ2 groups, for 7.3TB of storage, with  
 1.74TB available.  Plenty of idle CPU as shown by vmstat and mpstat.   
 iostat shows queued I/O and I'm not happy about the total latencies -  
 wsvc_t in excess of 75ms at times.  Average of ~60KB per read and only  
 ~2.5KB per write. Evil Tuning guide tells me that RAIDZ2 is happiest  
 for long reads and writes, and this is not the use case here.

 I was surprised to see commands like tar, rm, and chown running  
 locally on the NFS server, so it looks like they're locally doing file  
 maintenance and pruning at the same time it's being accessed remotely.  
 That makes sense to me for the short write lengths and for the high  
 ZFS ACL activity shown by DTrace. I wonder if there is a lot of sync  
 I/O that would benefit from separately defined ZILs (whether SSD or  
 not), so I've asked them to look for fsync activity.

 Data collected thus far is listed below. I've asked for verification  
 of the Solaris 10 level (I believe it's S10u6) and ZFS recordsize.   
 Any suggestions will be appreciated.

 regards, Jeff
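On the fsync question raised above, a hedged sketch of one quick way to look for that activity on the server with DTrace:

  dtrace -n 'syscall::fsync:entry { @[execname] = count(); }'
  # let it run for a while, then Ctrl-C; it prints fsync(2) counts per process,
  # and heavy callers are the candidates that a separate log device would help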

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help! System panic when pool imported

2009-10-20 Thread Albert Chin
On Mon, Oct 19, 2009 at 09:02:20PM -0500, Albert Chin wrote:
 On Mon, Oct 19, 2009 at 03:31:46PM -0700, Matthew Ahrens wrote:
  Thanks for reporting this.  I have fixed this bug (6822816) in build  
  127.
 
 Thanks. I just installed OpenSolaris Preview based on 125 and will
 attempt to apply the patch you made to this release and import the pool.

Did the above and the zpool import worked. Thanks!

  --matt
 
  Albert Chin wrote:
  Running snv_114 on an X4100M2 connected to a 6140. Made a clone of a
  snapshot a few days ago:
# zfs snapshot a...@b
# zfs clone a...@b tank/a
# zfs clone a...@b tank/b
 
  The system started panicing after I tried:
# zfs snapshot tank/b...@backup
 
  So, I destroyed tank/b:
# zfs destroy tank/b
  then tried to destroy tank/a
# zfs destroy tank/a
 
  Now, the system is in an endless panic loop, unable to import the pool
  at system startup or with zpool import. The panic dump is:
panic[cpu1]/thread=ff0010246c60: assertion failed: 0 == 
  zap_remove_int(mos, ds_prev-ds_phys-ds_next_clones_obj, obj, tx) (0x0 == 
  0x2), file: ../../common/fs/zfs/dsl_dataset.c, line: 1512
 
ff00102468d0 genunix:assfail3+c1 ()
ff0010246a50 zfs:dsl_dataset_destroy_sync+85a ()
ff0010246aa0 zfs:dsl_sync_task_group_sync+eb ()
ff0010246b10 zfs:dsl_pool_sync+196 ()
ff0010246ba0 zfs:spa_sync+32a ()
ff0010246c40 zfs:txg_sync_thread+265 ()
ff0010246c50 unix:thread_start+8 ()
 
  We really need to import this pool. Is there a way around this? We do
  have snv_114 source on the system if we need to make changes to
  usr/src/uts/common/fs/zfs/dsl_dataset.c. It seems like the zfs
  destroy transaction never completed and it is being replayed, causing
  the panic. This cycle continues endlessly.
 

 
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 -- 
 albert chin (ch...@thewrittenword.com)
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help! System panic when pool imported

2009-10-19 Thread Albert Chin
On Mon, Oct 19, 2009 at 03:31:46PM -0700, Matthew Ahrens wrote:
 Thanks for reporting this.  I have fixed this bug (6822816) in build  
 127.

Thanks. I just installed OpenSolaris Preview based on 125 and will
attempt to apply the patch you made to this release and import the pool.

 --matt

 Albert Chin wrote:
 Running snv_114 on an X4100M2 connected to a 6140. Made a clone of a
 snapshot a few days ago:
   # zfs snapshot a...@b
   # zfs clone a...@b tank/a
   # zfs clone a...@b tank/b

 The system started panicing after I tried:
   # zfs snapshot tank/b...@backup

 So, I destroyed tank/b:
   # zfs destroy tank/b
 then tried to destroy tank/a
   # zfs destroy tank/a

 Now, the system is in an endless panic loop, unable to import the pool
 at system startup or with zpool import. The panic dump is:
   panic[cpu1]/thread=ff0010246c60: assertion failed: 0 == 
 zap_remove_int(mos, ds_prev-ds_phys-ds_next_clones_obj, obj, tx) (0x0 == 
 0x2), file: ../../common/fs/zfs/dsl_dataset.c, line: 1512

   ff00102468d0 genunix:assfail3+c1 ()
   ff0010246a50 zfs:dsl_dataset_destroy_sync+85a ()
   ff0010246aa0 zfs:dsl_sync_task_group_sync+eb ()
   ff0010246b10 zfs:dsl_pool_sync+196 ()
   ff0010246ba0 zfs:spa_sync+32a ()
   ff0010246c40 zfs:txg_sync_thread+265 ()
   ff0010246c50 unix:thread_start+8 ()

 We really need to import this pool. Is there a way around this? We do
 have snv_114 source on the system if we need to make changes to
 usr/src/uts/common/fs/zfs/dsl_dataset.c. It seems like the zfs
 destroy transaction never completed and it is being replayed, causing
 the panic. This cycle continues endlessly.

   

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iscsi/comstar performance

2009-10-13 Thread Albert Chin
On Tue, Oct 13, 2009 at 01:00:35PM -0400, Frank Middleton wrote:
 After a recent upgrade to b124, decided to switch to COMSTAR for iscsi
 targets for VirtualBox hosted on AMD64 Fedora C10. Both target and
 initiator are running zfs under b124. This combination seems
 unbelievably slow compared to  the old iscsi subsystem.

 A scrub of a local 20GB disk on the target took 16 minutes. A scrub of
 a 20GB iscsi disk took 106 minutes! It seems to take much longer to
 boot from iscsi, so it seems to be reading more slowly too.

 There are a lot of variables - switching to Comstar, snv124, VBox
 3.08, etc., but such a dramatic loss of performance probably has a
 single cause. Is anyone willing to speculate?

Maybe this will help:
  
http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Quickest way to find files with cksum errors without doing scrub

2009-09-28 Thread Albert Chin
Without doing a zpool scrub, what's the quickest way to find files in a
filesystem with cksum errors? Iterating over all files with find takes
quite a bit of time. Maybe there's some zdb fu that will perform the
check for me?

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quickest way to find files with cksum errors without doing scrub

2009-09-28 Thread Albert Chin
On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
 On Mon, 28 Sep 2009, Richard Elling wrote:

 Scrub could be faster, but you can try
  tar cf - . > /dev/null

 If you think about it, validating checksums requires reading the data.
 So you simply need to read the data.

 This should work but it does not verify the redundant metadata.  For
 example, the duplicate metadata copy might be corrupt but the problem
 is not detected since it did not happen to be used.

Too bad we cannot scrub a dataset/object.

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quickest way to find files with cksum errors without doing scrub

2009-09-28 Thread Albert Chin
On Mon, Sep 28, 2009 at 10:16:20AM -0700, Richard Elling wrote:
 On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:

 On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
 On Mon, 28 Sep 2009, Richard Elling wrote:

 Scrub could be faster, but you can try
 tar cf - . > /dev/null

 If you think about it, validating checksums requires reading the  
 data.
 So you simply need to read the data.

 This should work but it does not verify the redundant metadata.  For
 example, the duplicate metadata copy might be corrupt but the problem
 is not detected since it did not happen to be used.

 Too bad we cannot scrub a dataset/object.

 Can you provide a use case? I don't see why scrub couldn't start and
 stop at specific txgs for instance. That won't necessarily get you to a
 specific file, though.

If your pool is borked but mostly readable, yet some file systems have
cksum errors, you cannot zfs send that file system (err, snapshot of
filesystem). So, you need to manually fix the file system by traversing
it to read all files to determine which must be fixed. Once this is
done, you can snapshot and zfs send. If you have many file systems,
this is time consuming.

Of course, you could just rsync and be happy with what you were able to
recover, but if you have clones branched from the same parent, with a
few differences between snapshots, having to rsync *everything* rather
than just the differences is painful. Hence the reason to try to get
zfs send to work.

But, this is an extreme example and I doubt pools are often in this
state so the engineering time isn't worth it. In such cases though, a
zfs scrub would be useful.

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] refreservation not transferred by zfs send when sending a volume?

2009-09-28 Thread Albert Chin
snv114# zfs get 
used,reservation,volsize,refreservation,usedbydataset,usedbyrefreservation 
tww/opt/vms/images/vios/mello-0.img
NAME PROPERTY  VALUE  SOURCE
tww/opt/vms/images/vios/mello-0.img  used  30.6G  -
tww/opt/vms/images/vios/mello-0.img  reservation   none   
default
tww/opt/vms/images/vios/mello-0.img  volsize   25G-
tww/opt/vms/images/vios/mello-0.img  refreservation25Glocal
tww/opt/vms/images/vios/mello-0.img  usedbydataset 5.62G  -
tww/opt/vms/images/vios/mello-0.img  usedbyrefreservation  25G-

Sent tww/opt/vms/images/vios/mello-0.img from snv_114 server
to snv_119 server.

On snv_119 server:
snv119# zfs get 
used,reservation,volsize,refreservation,usedbydataset,usedbyrefreservation 
t/opt/vms/images/vios/mello-0.img 
NAME   PROPERTY  VALUE  SOURCE
t/opt/vms/images/vios/mello-0.img  used  5.32G  -
t/opt/vms/images/vios/mello-0.img  reservation   none   default
t/opt/vms/images/vios/mello-0.img  volsize   25G-
t/opt/vms/images/vios/mello-0.img  refreservationnone   default
t/opt/vms/images/vios/mello-0.img  usedbydataset 5.32G  -
t/opt/vms/images/vios/mello-0.img  usedbyrefreservation  0  -

Any reason the refreservation and usedbyrefreservation properties are
not sent?

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Should usedbydataset be the same after zfs send/recv for a volume?

2009-09-28 Thread Albert Chin
When transferring a volume between servers, is it expected that the
usedbydataset property should be the same on both? If not, is it cause
for concern?

snv114# zfs list tww/opt/vms/images/vios/near.img
NAME   USED  AVAIL  REFER  MOUNTPOINT
tww/opt/vms/images/vios/near.img  70.5G   939G  15.5G  -
snv114# zfs get usedbydataset tww/opt/vms/images/vios/near.img
NAME  PROPERTY   VALUE   SOURCE
tww/opt/vms/images/vios/near.img  usedbydataset  15.5G   -

snv119# zfs list t/opt/vms/images/vios/near.img 
NAME USED  AVAIL  REFER  MOUNTPOINT
t/opt/vms/images/vios/near.img  14.5G  2.42T  14.5G  -
snv119# zfs get usedbydataset t/opt/vms/images/vios/near.img 
NAMEPROPERTY   VALUE   SOURCE
t/opt/vms/images/vios/near.img  usedbydataset  14.5G   -

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Should usedbydataset be the same after zfs send/recv for a volume?

2009-09-28 Thread Albert Chin
On Mon, Sep 28, 2009 at 07:33:56PM -0500, Albert Chin wrote:
 When transferring a volume between servers, is it expected that the
 usedbydataset property should be the same on both? If not, is it cause
 for concern?
 
 snv114# zfs list tww/opt/vms/images/vios/near.img
 NAME   USED  AVAIL  REFER  MOUNTPOINT
 tww/opt/vms/images/vios/near.img  70.5G   939G  15.5G  -
 snv114# zfs get usedbydataset tww/opt/vms/images/vios/near.img
 NAME  PROPERTY   VALUE   SOURCE
 tww/opt/vms/images/vios/near.img  usedbydataset  15.5G   -
 
 snv119# zfs list t/opt/vms/images/vios/near.img 
 NAME USED  AVAIL  REFER  MOUNTPOINT
 t/opt/vms/images/vios/near.img  14.5G  2.42T  14.5G  -
 snv119# zfs get usedbydataset t/opt/vms/images/vios/near.img 
 NAMEPROPERTY   VALUE   SOURCE
 t/opt/vms/images/vios/near.img  usedbydataset  14.5G   -

Don't know if it matters but disks on both send/recv server are
different, 300GB FCAL on the send, 750GB SATA on the recv.

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive should allow to keep received system

2009-09-28 Thread Albert Chin
On Mon, Sep 28, 2009 at 03:16:17PM -0700, Igor Velkov wrote:
 Not as good as I hoped.
 zfs send -R xxx/x...@daily_2009-09-26_23:51:00 |ssh -c blowfish r...@xxx.xx 
 zfs recv -vuFd xxx/xxx
 
 invalid option 'u'
 usage:
 receive [-vnF] filesystem|volume|snapshot
 receive [-vnF] -d filesystem
 
 For the property list, run: zfs set|get
 
 For the delegated permission list, run: zfs allow|unallow
 r...@xxx:~# uname -a
 SunOS xxx 5.10 Generic_13-03 sun4u sparc SUNW,Sun-Fire-V890
 
 What's wrong?

Looks like -u was a recent addition.

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help! System panic when pool imported

2009-09-27 Thread Albert Chin
On Sun, Sep 27, 2009 at 10:06:16AM -0700, Andrew wrote:
 This is what my /var/adm/messages looks like:
 
 Sep 27 12:46:29 solaria genunix: [ID 403854 kern.notice] assertion failed: ss 
 == NULL, file: ../../common/fs/zfs/space_map.c, line: 109
 Sep 27 12:46:29 solaria unix: [ID 10 kern.notice]
 Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a97a0 
 genunix:assfail+7e ()
 Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9830 
 zfs:space_map_add+292 ()
 Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a98e0 
 zfs:space_map_load+3a7 ()
 Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9920 
 zfs:metaslab_activate+64 ()
 Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a99e0 
 zfs:metaslab_group_alloc+2b7 ()
 Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9ac0 
 zfs:metaslab_alloc_dva+295 ()
 Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9b60 
 zfs:metaslab_alloc+9b ()
 Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9b90 
 zfs:zio_dva_allocate+3e ()
 Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9bc0 
 zfs:zio_execute+a0 ()
 Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9c40 
 genunix:taskq_thread+193 ()
 Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9c50 
 unix:thread_start+8 ()

I'm not sure that aok=1/zfs:zfs_recover=1 would help you because
zfs_panic_recover isn't in the backtrace (see
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6638754).
Sometimes a Sun zfs engineer shows up on the freenode #zfs channel. I'd
pop up there and ask. There are somewhat similar bug reports at
bugs.opensolaris.org. I'd post a bug report just in case.

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help! System panic when pool imported

2009-09-25 Thread Albert Chin
On Fri, Sep 25, 2009 at 05:21:23AM +, Albert Chin wrote:
 [[ snip snip ]]
 
 We really need to import this pool. Is there a way around this? We do
 have snv_114 source on the system if we need to make changes to
 usr/src/uts/common/fs/zfs/dsl_dataset.c. It seems like the zfs
 destroy transaction never completed and it is being replayed, causing
 the panic. This cycle continues endlessly.

What are the implications of adding the following to /etc/system:
  set zfs:zfs_recover=1
  set aok=1

And importing the pool with:
  # zpool import -o ro

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Help! System panic when pool imported

2009-09-24 Thread Albert Chin
Running snv_114 on an X4100M2 connected to a 6140. Made a clone of a
snapshot a few days ago:
  # zfs snapshot a...@b
  # zfs clone a...@b tank/a
  # zfs clone a...@b tank/b

The system started panicing after I tried:
  # zfs snapshot tank/b...@backup

So, I destroyed tank/b:
  # zfs destroy tank/b
then tried to destroy tank/a
  # zfs destroy tank/a

Now, the system is in an endless panic loop, unable to import the pool
at system startup or with zpool import. The panic dump is:
  panic[cpu1]/thread=ff0010246c60: assertion failed: 0 == 
zap_remove_int(mos, ds_prev->ds_phys->ds_next_clones_obj, obj, tx) (0x0 == 
0x2), file: ../../common/fs/zfs/dsl_dataset.c, line: 1512

  ff00102468d0 genunix:assfail3+c1 ()
  ff0010246a50 zfs:dsl_dataset_destroy_sync+85a ()
  ff0010246aa0 zfs:dsl_sync_task_group_sync+eb ()
  ff0010246b10 zfs:dsl_pool_sync+196 ()
  ff0010246ba0 zfs:spa_sync+32a ()
  ff0010246c40 zfs:txg_sync_thread+265 ()
  ff0010246c50 unix:thread_start+8 ()

We really need to import this pool. Is there a way around this? We do
have snv_114 source on the system if we need to make changes to
usr/src/uts/common/fs/zfs/dsl_dataset.c. It seems like the zfs
destroy transaction never completed and it is being replayed, causing
the panic. This cycle continues endlessly.

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs snapshot -r panic on b114

2009-09-23 Thread Albert Chin
While a resilver was running, we attempted a recursive snapshot which
resulted in a kernel panic:
  panic[cpu1]/thread=ff00104c0c60: assertion failed: 0 == 
zap_remove_int(mos, next_clones_obj, dsphys->ds_next_snap_obj, tx) (0x0 == 
0x2), file: ../../common/fs/zfs/dsl_dataset.c, line: 1869

  ff00104c0960 genunix:assfail3+c1 ()
  ff00104c0a00 zfs:dsl_dataset_snapshot_sync+4a2 ()
  ff00104c0a50 zfs:snapshot_sync+41 ()
  ff00104c0aa0 zfs:dsl_sync_task_group_sync+eb ()
  ff00104c0b10 zfs:dsl_pool_sync+196 ()
  ff00104c0ba0 zfs:spa_sync+32a ()
  ff00104c0c40 zfs:txg_sync_thread+265 ()
  ff00104c0c50 unix:thread_start+8 ()

System is a X4100M2 running snv_114.

Any ideas?

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to recover from can't open objset, cannot iterate filesystems?

2009-09-21 Thread Albert Chin
Recently upgraded a system from b98 to b114. Also replaced two 400G
Seagate Barracuda 7200.8 SATA disks with two WD 750G RE3 SATA disks
from a 6-device raidz1 pool. Replacing the first 750G went ok. While
replacing the second 750G disk, I noticed CKSUM errors on the first
disk. Once the second disk was replaced, I halted the system, upgraded
to b114, and rebooted. Both b98 and b114 gave the errors:
  WARNING: can't open objset for tww/opt/dists/cd-8.1
  cannot iterate filesystems: I/O error

How do I recover from this?

# zpool status tww
  pool: tww
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
tww         ONLINE       0     0     3
  raidz1    ONLINE       0     0    12
    c4t0d0  ONLINE       0     0     0
    c4t1d0  ONLINE       0     0     0
    c4t4d0  ONLINE       0     0     0
    c4t5d0  ONLINE       0     0     0
    c4t6d0  ONLINE       0     0     0
    c4t7d0  ONLINE       0     0     0

errors: 855 data errors, use '-v' for a list
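
For what it's worth, the generic cleanup sequence in this situation (a sketch
only; it may not be enough if an entire dataset's objset is damaged) is:

  # zpool status -v tww      (list the affected files/datasets)
  ... restore or delete the files it names ...
  # zpool scrub tww
  # zpool clear tww          (reset the error counters once the scrub comes back clean)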

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool replace complete but old drives not detached

2009-09-06 Thread Albert Chin
$ cat /etc/release
  Solaris Express Community Edition snv_114 X86
   Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
  Assembled 04 May 2009

I recently replaced two drives in a raidz2 vdev. However, after the
resilver completed, the old drives were not automatically detached.
Why? How do I detach the drives that were replaced?

# zpool replace tww c6t600A0B800029996605B04668F17Dd0 \
c6t600A0B8000299CCC099B4A400A9Cd0
# zpool replace tww c6t600A0B800029996605C24668F39Bd0 \
c6t600A0B8000299CCC0A744A94F7E2d0
... resilver runs to completion ...

# zpool status tww
  pool: tww
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver completed after 25h11m with 23375 errors on Sun Sep  6 
02:09:07 2009
config:

NAME                                     STATE     READ WRITE CKSUM
tww                                      DEGRADED     0     0  207K
  raidz2                                 ONLINE       0     0     0
    c6t600A0B800029996605964668CB39d0    ONLINE       0     0     0
    c6t600A0B8000299CCC06C84744C892d0    ONLINE       0     0     0
    c6t600A0B8000299CCC05B44668CC6Ad0    ONLINE       0     0     0
    c6t600A0B800029996605A44668CC3Fd0    ONLINE       0     0     0
    c6t600A0B8000299CCC05BA4668CD2Ed0    ONLINE       0     0     0
    c6t600A0B800029996605AA4668CDB1d0    ONLINE       0     0     0
    c6t600A0B8000299966073547C5CED9d0    ONLINE       0     0     0
  raidz2                                 DEGRADED     0     0  182K
    replacing                            DEGRADED     0     0     0
      c6t600A0B800029996605B04668F17Dd0  DEGRADED     0     0     0  too many errors
      c6t600A0B8000299CCC099B4A400A9Cd0  ONLINE       0     0     0  255G resilvered
    c6t600A0B8000299CCC099E4A400B94d0    ONLINE       0     0  218K  10.2M resilvered
    c6t600A0B8000299CCC0A6B4A93D3EEd0    ONLINE       0     0   242  246G resilvered
    spare                                DEGRADED     0     0     0
      c6t600A0B8000299CCC05CC4668F30Ed0  DEGRADED     0     0     3  too many errors
      c6t600A0B8000299CCC05D84668F448d0  ONLINE       0     0     0  255G resilvered
    spare                                DEGRADED     0     0     0
      c6t600A0B800029996605BC4668F305d0  DEGRADED     0     0     0  too many errors
      c6t600A0B800029996605C84668F461d0  ONLINE       0     0     0  255G resilvered
    c6t600A0B800029996609EE4A89DA51d0    ONLINE       0     0     0  246G resilvered
    replacing                            DEGRADED     0     0     0
      c6t600A0B800029996605C24668F39Bd0  DEGRADED     0     0     0  too many errors
      c6t600A0B8000299CCC0A744A94F7E2d0  ONLINE       0     0     0  255G resilvered
  raidz2                                 ONLINE       0     0  233K
    c6t600A0B8000299CCC0A154A89E426d0    ONLINE       0     0     0
    c6t600A0B800029996609F74A89E1A5d0    ONLINE       0     0   758  6.50K resilvered
    c6t600A0B8000299CCC0A174A89E520d0    ONLINE       0     0   311  3.50K resilvered
    c6t600A0B800029996609F94A89E24Bd0    ONLINE       0     0 21.8K  32K resilvered
    c6t600A0B8000299CCC0A694A93D322d0    ONLINE       0     0     0  1.85G resilvered
    c6t600A0B8000299CCC0A0C4A89DDE8d0    ONLINE       0     0 27.4K  41.5K resilvered
    c6t600A0B800029996609F04A89DB1Bd0    ONLINE       0     0 7.13K  24K resilvered
spares
  c6t600A0B8000299CCC05D84668F448d0      INUSE     currently in use
  c6t600A0B800029996605C84668F461d0      INUSE     currently in use
  c6t600A0B80002999660A454A93CEDBd0      AVAIL
  c6t600A0B80002999660ADA4A9CF2EDd0      AVAIL
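
For reference, one way to clear a stuck replacement by hand (a sketch only,
reusing the device names from the status above) is to detach the old half of
each replacing pair once its new disk shows as resilvered:

  # zpool detach tww c6t600A0B800029996605B04668F17Dd0
  # zpool detach tww c6t600A0B800029996605C24668F39Bd0

Whether that is appropriate here depends on why ZFS kept the old devices
attached in the first place.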

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool scrub started resilver, not scrub

2009-08-31 Thread Albert Chin
On Wed, Aug 26, 2009 at 02:33:39AM -0500, Albert Chin wrote:
 # cat /etc/release
   Solaris Express Community Edition snv_105 X86
Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
 Use is subject to license terms.
Assembled 15 December 2008

 So, why is a resilver in progress when I asked for a scrub?

Still seeing the same problem with snv_114.
  # cat /etc/release
Solaris Express Community Edition snv_114 X86
 Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
  Use is subject to license terms.
Assembled 04 May 2009

How do I scrub this pool?
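
For reference, the basic sequence (assuming whatever keeps kicking off the
resilver, such as snapshots or replaces, has stopped) would be:

  # zpool status -x tww      (make sure no resilver is still running)
  # zpool scrub tww
  # zpool status tww         (the scrub: line should now report a scrub, not a resilver)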

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-08-27 Thread Albert Chin
On Thu, Aug 27, 2009 at 06:29:52AM -0700, Gary Gendel wrote:
 It looks like It's definitely related to the snv_121 upgrade.  I
 decided to roll back to snv_110 and the checksum errors have
 disappeared.  I'd like to issue a bug report, but I don't have any
 information that might help track this down, just lots of checksum
 errors.

So, on snv_121, can you read the files with checksum errors? Is it
simply the reporting mechanism that is wrong or are the files really
damaged?
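
A quick way to tell the two cases apart (pool and path names hypothetical) is
to read the flagged files back:

  # zpool status -v                  (lists the files with checksum errors)
  # cksum /pool/path/to/file         (an I/O error here means real damage;
                                      a clean read points at the accounting)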

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool scrub started resilver, not scrub

2009-08-26 Thread Albert Chin
# cat /etc/release
  Solaris Express Community Edition snv_105 X86
   Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 15 December 2008
# zpool status tww
  pool: tww
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver completed after 6h15m with 27885 errors on Wed Aug 26 07:18:03 
2009
config:

NAME                                     STATE     READ WRITE CKSUM
tww                                      ONLINE       0     0 54.5K
  raidz2                                 ONLINE       0     0     0
    c6t600A0B800029996605964668CB39d0    ONLINE       0     0     0
    c6t600A0B8000299CCC06C84744C892d0    ONLINE       0     0     0
    c6t600A0B8000299CCC05B44668CC6Ad0    ONLINE       0     0     0
    c6t600A0B800029996605A44668CC3Fd0    ONLINE       0     0     0
    c6t600A0B8000299CCC05BA4668CD2Ed0    ONLINE       0     0     0
    c6t600A0B800029996605AA4668CDB1d0    ONLINE       0     0     0
    c6t600A0B8000299966073547C5CED9d0    ONLINE       0     0     0
  raidz2                                 ONLINE       0     0     0
    c6t600A0B800029996605B04668F17Dd0    ONLINE       0     0     0
    c6t600A0B8000299CCC099E4A400B94d0    ONLINE       0     0     0
    c6t600A0B800029996605B64668F26Fd0    ONLINE       0     0     0
    c6t600A0B8000299CCC05CC4668F30Ed0    ONLINE       0     0     0
    c6t600A0B800029996605BC4668F305d0    ONLINE       0     0     0
    c6t600A0B8000299CCC099B4A400A9Cd0    ONLINE       0     0     0
    c6t600A0B800029996605C24668F39Bd0    ONLINE       0     0     0
  raidz2                                 ONLINE       0     0  109K
    c6t600A0B8000299CCC0A154A89E426d0    ONLINE       0     0     0
    c6t600A0B800029996609F74A89E1A5d0    ONLINE       0     0    18  2.50K resilvered
    c6t600A0B8000299CCC0A174A89E520d0    ONLINE       0     0    39  4.50K resilvered
    c6t600A0B800029996609F94A89E24Bd0    ONLINE       0     0   486  75K resilvered
    c6t600A0B80002999660A454A93CEDBd0    ONLINE       0     0     0  2.55G resilvered
    c6t600A0B8000299CCC0A0C4A89DDE8d0    ONLINE       0     0    34  2K resilvered
    c6t600A0B800029996609F04A89DB1Bd0    ONLINE       0     0   173  18K resilvered
spares
  c6t600A0B8000299CCC05D84668F448d0      AVAIL
  c6t600A0B800029996605C84668F461d0      AVAIL

errors: 27885 data errors, use '-v' for a list

# zpool scrub tww
# zpool status tww
  pool: tww
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver in progress for 0h11m, 2.82% done, 6h21m to go
config:
...

So, why is a resilver in progress when I asked for a scrub?

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Resilver complete, but device not replaced, odd zpool status output

2009-08-25 Thread Albert Chin
After the resilver completed:
  # zpool status tww
  pool: tww
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver completed after 6h9m with 27886 errors on Tue Aug 25 08:32:41 2009
config:

NAME                                       STATE     READ WRITE CKSUM
tww                                        DEGRADED     0     0 76.0K
  raidz2                                   ONLINE       0     0     0
    c6t600A0B800029996605964668CB39d0      ONLINE       0     0     0
    c6t600A0B8000299CCC06C84744C892d0      ONLINE       0     0     0
    c6t600A0B8000299CCC05B44668CC6Ad0      ONLINE       0     0     0
    c6t600A0B800029996605A44668CC3Fd0      ONLINE       0     0     0
    c6t600A0B8000299CCC05BA4668CD2Ed0      ONLINE       0     0     0
    c6t600A0B800029996605AA4668CDB1d0      ONLINE       0     0     0
    c6t600A0B8000299966073547C5CED9d0      ONLINE       0     0     0
  raidz2                                   ONLINE       0     0     0
    c6t600A0B800029996605B04668F17Dd0      ONLINE       0     0     0
    c6t600A0B8000299CCC099E4A400B94d0      ONLINE       0     0     0
    c6t600A0B800029996605B64668F26Fd0      ONLINE       0     0     0
    c6t600A0B8000299CCC05CC4668F30Ed0      ONLINE       0     0     0
    c6t600A0B800029996605BC4668F305d0      ONLINE       0     0     0
    c6t600A0B8000299CCC099B4A400A9Cd0      ONLINE       0     0     0
    c6t600A0B800029996605C24668F39Bd0      ONLINE       0     0     0
  raidz2                                   DEGRADED     0     0  153K
    c6t600A0B8000299CCC0A154A89E426d0      ONLINE       0     0     1  1K resilvered
    c6t600A0B800029996609F74A89E1A5d0      ONLINE       0     0 2.14K  5.67M resilvered
    c6t600A0B8000299CCC0A174A89E520d0      ONLINE       0     0   299  34K resilvered
    c6t600A0B800029996609F94A89E24Bd0      ONLINE       0     0 29.7K  23.5M resilvered
    replacing                              DEGRADED     0     0  118K
      c6t600A0B8000299CCC0A194A89E634d0    OFFLINE     20 1.28M     0
      c6t600A0B800029996609EE4A89DA51d0    ONLINE       0     0     0  1.93G resilvered
    c6t600A0B8000299CCC0A0C4A89DDE8d0      ONLINE       0     0   247  54K resilvered
    c6t600A0B800029996609F04A89DB1Bd0      ONLINE       0     0 24.2K  51.3M resilvered
spares
  c6t600A0B8000299CCC05D84668F448d0        AVAIL
  c6t600A0B800029996605C84668F461d0        AVAIL

errors: 27886 data errors, use '-v' for a list

  # zpool replace c6t600A0B8000299CCC0A194A89E634d0 \
  c6t600A0B800029996609EE4A89DA51d0
  invalid vdev specification
  use '-f' to override the following errors:
  /dev/dsk/c6t600A0B800029996609EE4A89DA51d0s0 is part of active ZFS
  pool tww. Please see zpool(1M).

So, what is going on?

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resilver complete, but device not replaced, odd zpool status output

2009-08-25 Thread Albert Chin
On Tue, Aug 25, 2009 at 06:05:16AM -0500, Albert Chin wrote:
 [[ snip snip ]]
 
 After the resilver completed:
   # zpool status tww
   pool: tww
  state: DEGRADED
 status: One or more devices has experienced an error resulting in data
 corruption.  Applications may be affected.
 action: Restore the file in question if possible.  Otherwise restore the
 entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
  scrub: resilver completed after 6h9m with 27886 errors on Tue Aug 25
 08:32:41 2009
 config:
 
 NAME STATE READ
 WRITE CKSUM
 tww  DEGRADED 0
 0 76.0K
   raidz2 ONLINE   0
 0 0
 c6t600A0B800029996605964668CB39d0ONLINE   0
 0 0
 c6t600A0B8000299CCC06C84744C892d0ONLINE   0
 0 0
 c6t600A0B8000299CCC05B44668CC6Ad0ONLINE   0
 0 0
 c6t600A0B800029996605A44668CC3Fd0ONLINE   0
 0 0
 c6t600A0B8000299CCC05BA4668CD2Ed0ONLINE   0
 0 0
 c6t600A0B800029996605AA4668CDB1d0ONLINE   0
 0 0
 c6t600A0B8000299966073547C5CED9d0ONLINE   0
 0 0
   raidz2 ONLINE   0
 0 0
 c6t600A0B800029996605B04668F17Dd0ONLINE   0
 0 0
 c6t600A0B8000299CCC099E4A400B94d0ONLINE   0
 0 0
 c6t600A0B800029996605B64668F26Fd0ONLINE   0
 0 0
 c6t600A0B8000299CCC05CC4668F30Ed0ONLINE   0
 0 0
 c6t600A0B800029996605BC4668F305d0ONLINE   0
 0 0
 c6t600A0B8000299CCC099B4A400A9Cd0ONLINE   0
 0 0
 c6t600A0B800029996605C24668F39Bd0ONLINE   0
 0 0
   raidz2 DEGRADED 0
 0  153K
 c6t600A0B8000299CCC0A154A89E426d0ONLINE   0
 0 1  1K resilvered
 c6t600A0B800029996609F74A89E1A5d0ONLINE   0
 0 2.14K  5.67M resilvered
 c6t600A0B8000299CCC0A174A89E520d0ONLINE   0
 0   299  34K resilvered
 c6t600A0B800029996609F94A89E24Bd0ONLINE   0
 0 29.7K  23.5M resilvered
 replacingDEGRADED 0
 0  118K
   c6t600A0B8000299CCC0A194A89E634d0  OFFLINE 20
 1.28M 0
   c6t600A0B800029996609EE4A89DA51d0  ONLINE   0
 0 0  1.93G resilvered
 c6t600A0B8000299CCC0A0C4A89DDE8d0ONLINE   0
 0   247  54K resilvered
 c6t600A0B800029996609F04A89DB1Bd0ONLINE   0
 0 24.2K  51.3M resilvered
 spares
   c6t600A0B8000299CCC05D84668F448d0  AVAIL   
   c6t600A0B800029996605C84668F461d0  AVAIL   
 
 errors: 27886 data errors, use '-v' for a list
 
   # zpool replace c6t600A0B8000299CCC0A194A89E634d0 \
   c6t600A0B800029996609EE4A89DA51d0
   invalid vdev specification
   use '-f' to override the following errors:
   /dev/dsk/c6t600A0B800029996609EE4A89DA51d0s0 is part of active ZFS
   pool tww. Please see zpool(1M).
 
 So, what is going on?

Rebooted the server and see the same problem. So, I ran:
  # zpool detach tww c6t600A0B8000299CCC0A194A89E634d0
and now the zpool status output looks normal:
  # zpool status tww
  pool: tww
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver in progress for 0h16m, 7.88% done, 3h9m to go
config:

NAME   STATE READ WRITE CKSUM
tww                                    ONLINE   0 0 5
  raidz2   ONLINE   0 0 0
c6t600A0B800029996605964668CB39d0  ONLINE   0 0 0
c6t600A0B8000299CCC06C84744C892d0  ONLINE   0 0 0
c6t600A0B8000299CCC05B44668CC6Ad0  ONLINE   0 0 0
c6t600A0B800029996605A44668CC3Fd0  ONLINE   0 0 0
c6t600A0B8000299CCC05BA4668CD2Ed0  ONLINE   0 0 0
c6t600A0B800029996605AA4668CDB1d0  ONLINE   0 0 0
c6t600A0B8000299966073547C5CED9d0  ONLINE   0 0 0
  raidz2   ONLINE   0 0 0
c6t600A0B800029996605B04668F17Dd0  ONLINE   0 0 0
c6t600A0B8000299CCC099E4A400B94d0  ONLINE   0 0 0
c6t600A0B800029996605B64668F26Fd0  ONLINE   0 0 0

[zfs-discuss] Why so many data errors with raidz2 config and one failing drive?

2009-08-24 Thread Albert Chin
Added a third raidz2 vdev to my pool:
  pool: tww
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver in progress for 0h57m, 13.36% done, 6h9m to go
config:

NAME                                       STATE     READ WRITE CKSUM
tww                                        DEGRADED     0     0 16.9K
  raidz2                                   ONLINE       0     0     0
    c6t600A0B800029996605964668CB39d0      ONLINE       0     0     0
    c6t600A0B8000299CCC06C84744C892d0      ONLINE       0     0     0
    c6t600A0B8000299CCC05B44668CC6Ad0      ONLINE       0     0     0
    c6t600A0B800029996605A44668CC3Fd0      ONLINE       0     0     0
    c6t600A0B8000299CCC05BA4668CD2Ed0      ONLINE       0     0     0
    c6t600A0B800029996605AA4668CDB1d0      ONLINE       0     0     0
    c6t600A0B8000299966073547C5CED9d0      ONLINE       0     0     0
  raidz2                                   ONLINE       0     0     0
    c6t600A0B800029996605B04668F17Dd0      ONLINE       0     0     0
    c6t600A0B8000299CCC099E4A400B94d0      ONLINE       0     0     0
    c6t600A0B800029996605B64668F26Fd0      ONLINE       0     0     0
    c6t600A0B8000299CCC05CC4668F30Ed0      ONLINE       0     0     0
    c6t600A0B800029996605BC4668F305d0      ONLINE       0     0     0
    c6t600A0B8000299CCC099B4A400A9Cd0      ONLINE       0     0     0
    c6t600A0B800029996605C24668F39Bd0      ONLINE       0     0     0
  raidz2                                   DEGRADED     0     0 34.0K
    c6t600A0B8000299CCC0A154A89E426d0      ONLINE       0     0     0
    c6t600A0B800029996609F74A89E1A5d0      ONLINE       0     0     7  4K resilvered
    c6t600A0B8000299CCC0A174A89E520d0      ONLINE       0     0     2  4K resilvered
    c6t600A0B800029996609F94A89E24Bd0      ONLINE       0     0    48  24.5K resilvered
    replacing                              DEGRADED     0     0 78.7K
      c6t600A0B8000299CCC0A194A89E634d0    UNAVAIL     20  277K     0  experienced I/O failures
      c6t600A0B800029996609EE4A89DA51d0    ONLINE       0     0     0  38.1M resilvered
    c6t600A0B8000299CCC0A0C4A89DDE8d0      ONLINE       0     0     6  6K resilvered
    c6t600A0B800029996609F04A89DB1Bd0      ONLINE       0     0    86  92K resilvered
spares
  c6t600A0B8000299CCC05D84668F448d0        AVAIL
  c6t600A0B800029996605C84668F461d0        AVAIL

errors: 17097 data errors, use '-v' for a list


Seems some of the new drives are having problems, resulting in CKSUM
errors. I don't understand why I have so many data errors though. Why
does the third raidz2 vdev report 34.0K CKSUM errors?

The number of data errors also appears to be increasing as the
resilver process continues.

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why so many data errors with raidz2 config and one failing drive?

2009-08-24 Thread Albert Chin
On Mon, Aug 24, 2009 at 02:01:39PM -0500, Bob Friesenhahn wrote:
 On Mon, 24 Aug 2009, Albert Chin wrote:

 Seems some of the new drives are having problems, resulting in CKSUM
 errors. I don't understand why I have so many data errors though. Why
 does the third raidz2 vdev report 34.0K CKSUM errors?

 Is it possible that this third raidz2 is inflicted with a shared
 problem such as a cable, controller, backplane, or power supply? Only
 one drive is reported as being unscathed.

Well, we're just using unused drives on the existing array. No other
changes.

 Do you periodically scrub your array?

No. Guess we will now :) But, I think all of the data loss is a result
of the new drives, not ones that were already part of the two previous
vdevs.
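
A minimal sketch of scheduling that from root's crontab (the day and hour are
arbitrary):

  # crontab -e
  0 3 * * 0 /usr/sbin/zpool scrub tww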

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread Albert Chin
On Mon, Nov 24, 2008 at 08:43:18AM -0800, Erik Trimble wrote:
 I _really_ wish rsync had an option to copy in place or something like 
 that, where the updates are made directly to the file, rather than a 
 temp copy.

Isn't this what --inplace does?
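
For reference, a sketch of the relevant rsync flags (paths hypothetical;
--no-whole-file only matters when both ends are local, since the delta
algorithm is already the default over the network):

  $ rsync -a --inplace --no-whole-file /data/src/ /tank/dst/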

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL NVRAM partitioning?

2008-09-06 Thread Albert Chin
On Sat, Sep 06, 2008 at 11:16:15AM -0700, Kaya Bekiroğlu wrote:
  The big problem appears to be getting your hands on these cards.   
  Although I have the drivers now my first supplier let me down, and  
  while the second insists they have shipped the cards it's been three  
  weeks now and there's no sign of them.
 
 Thanks to Google Shopping I was able to order two of these cards from:
 http://www.printsavings.com/01390371OP-discount-MICRO+MEMORY-MM5425--512MB-NVRAM-battery.aspx
 
 They appear to be in good working order, but unfortunately I am unable
 to verify the driver. pkgadd -d umem_Sol_Drv_Cust_i386_v01_11.pkg
 hangs on ## Installing  part 1 of 3. on snv_95.  I do not have other
 Solaris versions to  experiment with; this is really just a hobby for
 me.

Does the card come with any programming specs to help debug the driver?

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do you grow a ZVOL?

2008-07-17 Thread Albert Chin
On Thu, Jul 17, 2008 at 04:28:34PM -0400, Charles Menser wrote:
 I've looked for anything I can find on the topic, but there does not
 appear to be anything documented.
 
 Can a ZVOL be expanded?

I think setting the volsize property expands it. Dunno what happens on
the clients though.
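
A minimal sketch (hypothetical zvol name); whatever sits on top of the zvol,
such as UFS or an iSCSI initiator, still has to be told about the new size:

  # zfs get volsize tank/vol01
  # zfs set volsize=20G tank/vol01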

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] J4200/J4400 Array

2008-07-03 Thread Albert Chin
On Thu, Jul 03, 2008 at 01:43:36PM +0300, Mertol Ozyoney wrote:
 You are right that J series do not have nvram onboard. However most Jbods
 like HP's MSA series have some nvram. 
 The idea behind not using nvram on the Jbod's is 
 
 -) There is no use to add limited ram to a JBOD as disks already have a lot
 of cache.
 -) It's easy to design a redundant Jbod without nvram. If you have nvram and
 need redundancy you need to design more complex HW and more complex firmware
 -) Batteries are the first thing to fail 
 -) Servers already have too much ram

Well, if the server attached to the J series is doing ZFS/NFS,
performance will increase with zfs:zfs_nocacheflush=1. But, without
battery-backed NVRAM, this really isn't safe. So, for this usage case,
unless the server has battery-backed NVRAM, I don't see how the J series
is good for ZFS/NFS usage.
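
For reference, the tunable in question goes in /etc/system on the server and
takes effect after a reboot; it is only reasonable when every device in the
pool really does have non-volatile cache behind it:

  * /etc/system
  set zfs:zfs_nocacheflush = 1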

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Albert Chin
On Wed, Jul 02, 2008 at 04:49:26AM -0700, Ben B. wrote:
 According to the Sun Handbook, there is a new array :
 SAS interface
 12 disks SAS or SATA
 
 ZFS could be used nicely with this box.

Doesn't seem to have any NVRAM storage on board, so seems like JBOD.

 There is an another version called
 J4400 with 24 disks.
 
 Doc is here :
 http://docs.sun.com/app/docs/coll/j4200

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration for VMware

2008-06-27 Thread Albert Chin
On Fri, Jun 27, 2008 at 08:13:14AM -0700, Ross wrote:
 Bleh, just found out the i-RAM is 5v PCI only.  Won't work on PCI-X
 slots which puts that out of the question for the motherboad I'm
 using.  Vmetro have a 2GB PCI-E card out, but it's for OEM's only:
 http://www.vmetro.com/category4304.html, and I don't have any space in
 this server to mount a SSD.

Maybe you can call Vmetro and get the names of some resellers whom you
could call to get pricing info?

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SXCE build 90 vs S10U6?

2008-06-12 Thread Albert Lee

On Thu, 2008-06-12 at 17:52 -0700, Paul B. Henson wrote:
 How close is Solaris Express build 90 to what will be released as the
 official Solaris 10 update 6?
 
 We just bought five x4500 servers, but I don't really want to deploy in
 production with U5. There are a number of features in U6 I'd like to have
 (zfs allow for better integration with our local identity system, refquota
 support to minimize user confusion, ZFS boot, ...)
 
 On the other hand, I don't really want to let these five servers sit around
 as insanely expensive and heavy paperweights all summer waiting for U6 to
 hopefully be released by September.
 
 My understanding is that SXCE maintains the same packaging system and
 jumpstart installation procedure as Solaris 10 (as opposed to OpenSolaris,
 which is completely different). If SXCE is close enough to what will become
 Solaris 10U6, I could do my initial development and integration on top of
 that, and be ready to go into production almost as soon as U6 is released,
 rather than wait for it to be released and then have to spin my wheels
 working with it.


While the S10 updates include features backported from Nevada you can
only upgrade from S10 to Solaris Express, not the other way around
(which would technically be a downgrade).

(As you probably know Solaris 10 and Nevada are completely separate
lines of development. Solaris Express is built from Nevada, as are the
other OpenSolaris distributions.)

 
 Would it be feasible to develop a ZFS boot jumpstart configuration with
 SXCE that would be mostly compatible with U6? Does SXCE have any particular
 ZFS features above and beyond what will be included in U6 I should be sure
 to avoid? Any other caveats I would want to take into consideration?
 

I don't think there will be any spec changes for S10u6 from the ZFS boot
support currently available in SX, but the JumpStart configuration for
SX might not be compatible for other reasons (install-discuss may know
better).

-Albert

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] system backup and recovery

2008-06-06 Thread Albert Lee

On Thu, 2008-06-05 at 11:53 -0700, Brandon High wrote:
 On Thu, Jun 5, 2008 at 12:44 AM, Aubrey Li [EMAIL PROTECTED] wrote:
  for windows we use ghost to backup system and recovery.
  can we do similar thing for solaris by ZFS?
 
 You could probably use a Ghost bootdisk to create an image of a
 OpenSolaris system.
 
 I've also used Drive Snapshot to image Windows and Linux systems. It
 might work for Opensolaris as well. It would create a block level
 backup, and the restore might not work on a system which isn't
 identical. http://www.drivesnapshot.de/en/



On Thu, 2008-06-05 at 23:01 -0700, Ross wrote:
 Ghost should work just fine. It's not just a windows program, it's
 best used on a bootable cd or floppy (or network boot for the
 adventurous), and it'll backup any hard drive, not just windows.
 
 If you're using ghost it's always best to have a bootable CD or
 similar, since you can recover the drive if you can't boot your
 computer otherwise.
 

Raw disk images are, uh, nice and all, but I don't think that was what
Aubrey had in mind when asking zfs-discuss about a backup solution. This
is 2008, not 1960.

If he wanted that he could just use dd (or partimage for a slight
optimisation). =P

-Albert

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Live Upgrade to snv_90 on a system with existing ZFS boot

2008-06-05 Thread Albert Lee
Hi,

I have a computer running snv_81 with ZFS root set up previously, and
wanted to test the new LU bits in snv_90. Of course, I expected this
wouldn't be as easy as it should be.

The first annoyance was that installing the new LU packages from the DVD
doesn't update the manpages. Fortunately,
http://opensolaris.org/os/community/arc/caselog/2006/370/commitment-materials/spec-txt/
has the details (as well as the info which I would need later).

My current setup is snv_81 with root at zpl/root, /opt at zpl/opt, and
separate filesystems for data. This differs from the new hierarchy in
the spec, where root is zpool/ROOT/<BE name>, and any child
filesystems are zpool/ROOT/<BE name>/<fs>. The new filesystems also
use defined mountpoints (relying on canmount=noauto to not automount
them during creation) instead of mountpoint=legacy.

A simple lucreate -n snv_90 fails because the ABE isn't defined - I
don't have a /etc/lutab. The new lu expects lutab to have been populated
at install time with the current BE as mentioned in the spec. It seems
to prefer naming the BE after the ZFS filesystem which contains the root
(in my case zpl/root so it looked for a BE named root).

I created a fresh /etc/lutab:
1:root:C:0
1:/:zpl/root:1
1:boot-device:/dev/dsk/c1t0d0s0:2

Where the BE ID is 1, root is the BE name, zpl/root is the
filesystem, and /dev/dsk/c1t0d0s0 is the device for the zpl pool.

I attempt lucreate again, and it now asks me to
run /usr/lib/lu/lux86menu_propagate, which presumably is supposed to
synchronise the GRUB menu and make it safe to delete a BE containing
GRUB's stage2 file.

So I do it:

# /usr/lib/lu/lux86menu_propagate

It leaves /zpl/boot/grub/bootsign/BE_root as a hint that the BE named
root currently contains GRUB's files.

Now lucreate -n snv_90 gets further - it snapshots zpl/[EMAIL PROTECTED] and
attempts to clone it to zpl/ROOT/snv_90 - unfortunately the expected
zpl/ROOT hierarchy doesn't exist yet. To fix that:

# zfs create zpl/ROOT

The error handling doesn't seem to be complete, as I had to delete the
snapshot from the failed attempt before trying again. lucreate is able
to create the clone now and even snapshots zpl/[EMAIL PROTECTED] and clones to
ROOT/snv_90/opt correctly. It fails later (surprise), trying to set the
ZFS property canmount to the value noauto on the clones.
Unfortunately this value for canmount was introduced at the same time as
the installer changes, so it's not supported in Nevada before snv_88.

I edited /usr/lib/lu/luclonefs to set canmount=on instead (which is fine
because the current filesystems use mountpoint=legacy so the clones
won't be automounted).
# cp -p /usr/lib/lu/luclonefs /usr/lib/lu/luclonefs.orig
# perl -pi -e s,canmount=noauto,canmount=on,g /usr/lib/lu/luclonefs


# lucreate -n snv_90

Now succeeds.

With the snv_90 DVD image mounted at /mnt:

# luupgrade -u -n snv_90 -s /mnt

It even completes.


Since we can't set the mountpoint of the new filesystems correctly yet
under this Nevada build (or it will try to automount since
canmount=noauto can't be used), we set them to mountpoint=legacy again,
and update the existing vfstab entries for the new locations.

# zfs set mountpoint=legacy zpl/ROOT/snv_90
# zfs set mountpoint=legacy zpl/ROOT/snv_90/opt
# mount -F zfs zpl/ROOT/snv_90 /a
# perl -pi -e s,zpl/root,zpl/ROOT/snv_90,g /a/etc/vfstab
# perl -pi -e s,zpl/opt,zpl/ROOT/snv_90/opt,g /a/etc/vfstab


Now would also be a good time to
check /a/var/sadm/system/data/upgrade_cleanup

# umount /a


# sudo lustatus

Boot Environment   Is   Active ActiveCanCopy  
Name   Complete NowOn Reboot Delete Status
--  -- - -- --
root   yes  yesyes   no - 
snv_90 yes  no noyes-



# sudo luactivate snv_90

...
Activation of boot environment snv_90 successful.

It doesn't seem to change the bootfs property on zpl or GRUB's menu.lst
on the zpool, so we do it manually:
# zpool set bootfs=zpl/ROOT/snv_90 zpl
Update the boot archive (which is stale for some reason):
# mount -F zfs zpl/ROOT/snv_90 /a
# bootadm update-archive -R /a
# umount /a

Cross fingers, reboot!
# init 6

-Albert


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root compressed ?

2008-06-05 Thread Albert Lee

On Fri, 2008-06-06 at 06:47 +0300, Cyril Plisko wrote:
 On Fri, Jun 6, 2008 at 2:58 AM,  [EMAIL PROTECTED] wrote:
  Bill Sommerfeld writes:
 
  2. How can I do it ? (I think I can run zfs set compression=on
  rpool/ROOT/snv_90 in the other window, right after the installation
  begins, but I would like less hacky way.)
 
  what I did was to migrate via live upgrade, creating the pool and the
  pool/ROOT filesystem myself, tweaking both  copies and compression on
  pool/ROOT before using lucreate.
  I haven't tried this on a fresh install yet.
  after install, I'd think you could play games with zfs send | zfs
  receive on an inactive BE to rewrite everything with the desired
  attributes (more important for copies than compression).
 
  Would it be possible to create a new BE on a compressed filesystem and
  activate it?  Is snap upgrade implemted yet?  If so this should be quick.
 
 Wouldn't snapupgrade clone the original BE ? In this case no data
 would be rewritten.

Correct, and Live Upgrade also clones the active BE when you do
lucreate. Unless you copy all the data manually, it's going to inherit
the uncompressed blocks from the current filesystem.
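
A rough sketch of rewriting an inactive BE with compression (pool and BE names
are made up; a plain send|recv rewrites the data through the normal write
path, so the copy picks up the compression it inherits from rpool/ROOT):

  # zfs set compression=on rpool/ROOT
  # zfs snapshot rpool/ROOT/snv_90@precompress
  # zfs send rpool/ROOT/snv_90@precompress | zfs receive rpool/ROOT/snv_90c

Making the copy bootable still needs the usual bootfs, menu.lst and vfstab
updates.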

-Albert

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Liveupgrade snv_77 with a ZFS root to snv_89

2008-05-29 Thread Albert Lee
On Thu, 2008-05-29 at 07:07 -0700, Jim Klimov wrote:
 We have a test machine installed with a ZFS root (snv_77/x86 and 
 rootpol/rootfs with grub support).
 
 Recently tried to update it to snv_89 which (in Flag Days list) claimed more 
 support for ZFS boot roots, but the installer disk didn't find any previously 
 installed operating system to upgrade.
 
 Then we tried to install SUNWlu* packages from snv_89 disk onto snv_77 
 system. It worked in terms of package updates, but lucreate fails:
 
 # lucreate -n snv_89
 ERROR: The system must be rebooted after applying required patches.
 Please reboot and try again.
 
 Apparently we rebooted a lot and it did not help...
 
 How can we upgrade the system?
 
 In particular, how does LU do it? :)
 
 Now working on an idea to update all existing packages in the cloned root, 
 using pkgrm/pkgadd -R. Updating only some packages didn't help much 
 (kernel, zfs, libs).
 
 A backup plan is to move the ZFS root back to UFS, update and move it back. 
 Probably would work, but not an elegant job ;)
 
 Suggestions welcome, maybe we'll try out some of them and report ;)


The LU support for ZFS root is part of a set of updates to the installer
that are not available until snv_90. There is a hack to do an offline
upgrade from DVD/CD (zfs_ttinstall), if you can't wait.

-Albert



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ? ZFS boot in nv88 on SPARC ?

2008-04-30 Thread Albert Lee

On Tue, 2008-04-29 at 15:02 +0200, Ulrich Graef wrote:
 Hi,
 
 ZFS won't boot on my machine.
 
 I discovered, that the lu manpages are there, but not
 the new binaries.
 So I tried to set up ZFS boot manually:
 
   zpool create -f Root c0t1d0s0
  
   lucreate -n nv88_zfs -A "nv88 finally on ZFS" -c nv88_ufs -p Root -x 
  /zones
  
   zpool set bootfs=Root/nv88_zfs Root
  
   ufsdump 0f - / | ( cd /Root/nv88_zfs; ufsrestore -rf - ; )
  
   eeprom boot-device=disk1
  
   Correct vfstab of the boot environment to:
  Root/nv88_zfs   -   /   zfs -   no  -
  
   zfs set mountpoint=legacy Root/nv88_zfs
  
   mount -F zfs Root/nv88_zfs /mnt
  
   bootadm update-archive -R /mnt
  
   umount /mnt
  
   installboot /usr/platform/SUNW,Ultra-60/lib/fs/zfs/bootblk 
  /dev/rdsk/c0t1d0s0
 
 When I try to boot I get the message in the ok prompt:
 
 Can't mount root
 Fast Data Access MMU Miss
 
 Same with: boot disk1 -Z Root/nv88_zfs
 
 What is missing in the setup?
 Unfortunately opensolaris contains only the preliminary setup for x86,
 so it does not help me...
 
 Regards,
 
   Ulrich
 

Does newboot automatically construct the SPARC-specific zfs-bootobj
property from the bootfs pool property?

Make sure you also didn't export the pool. The pool must be imported
and /etc/zfs/zpool.cache must be in sync between running system and the
ZFS root.
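
One way to refresh that copy, reusing the names from the steps above (a sketch
only, untested):

  # mount -F zfs Root/nv88_zfs /mnt
  # cp -p /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache
  # bootadm update-archive -R /mnt
  # umount /mnt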

-Albert

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Downgrade zpool version?

2008-04-08 Thread Albert Lee

On Mon, 2008-04-07 at 20:21 -0600, Keith Bierman wrote:
 On Apr 7, 2008, at 1:46 PM, David Loose wrote:
   my Solaris samba shares never really played well with iTunes.
 
 
 Another approach might be to stick with Solaris on the server, and  
 run netatalk (netatalk.sourceforge.net) instead of SAMBA (or, you  
 know your macs can speak NFS ;).

Alternatively you could run Banshee or mt-daapd on the Solaris box and
just rely on iTunes sharing. =P

Seriously, NFS is a totally reasonable way to go.

-Albert

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/recv question

2008-03-11 Thread Albert Chin
On Thu, Mar 06, 2008 at 10:34:07PM -0800, Bill Shannon wrote:
 Darren J Moffat wrote:
  I know this isn't answering the question but rather than using today 
  and yesterday why not not just use dates ?
 
 Because then I have to compute yesterday's date to do the incremental
 dump.

Not if you set a ZFS property with the date of the last backup.
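
A sketch of that idea using a ZFS user property as the bookmark (the property
name, dataset and dates are made up, and user properties need a reasonably
recent build):

  # zfs set com.example:lastsent=2008-03-10 tank/home
  # LAST=`zfs get -H -o value com.example:lastsent tank/home`
  # zfs snapshot tank/home@2008-03-11
  # zfs send -i tank/home@$LAST tank/home@2008-03-11 | ssh backuphost zfs receive tank/home
  # zfs set com.example:lastsent=2008-03-11 tank/home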

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Panic when ZFS pool goes down?

2008-02-25 Thread Albert Lee

On Mon, 2008-02-25 at 11:48 -0800, Vincent Fox wrote:
 Is it still the case that there is a kernel panic if the device(s) with the 
 ZFS pool die?
 
 I was thinking to attach some cheap SATA disks to a system to use for 
 nearline storage.  Then I could use ZFS send/recv on the local system 
 (without ssh) to keep backups of the stuff in the main pool.  The main pool 
 is replicated across 2 arrays on the SAN and we have multipathing and it's 
 quite robust.
 
 However I don't want my mail server going DOWN if the non-replicated pool 
 goes down.
 
 I tested with a fully-patched 10u4 system and this panic still occurs.  I 
 don't have at this moment a Nevada system to test because that system got 
 pulled apart and it'll be a few days before I can put something together 
 again.  Anyone have the current scoop?
 
 Thanks!
  

PSARC 2007/567 added the failmode option to the zpool(1) command to
specify what happens when a pool fails. This was integrated in Nevada
b77, it probably won't be available in S10 until the next update.
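
For reference, once on a build with that support, the property looks like this
(pool name hypothetical; wait is the default, panic is the old behaviour):

  # zpool get failmode tank
  # zpool set failmode=continue tank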

-Albert



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Albert Chin
On Fri, Feb 15, 2008 at 09:00:05PM +, Peter Tribble wrote:
 On Fri, Feb 15, 2008 at 8:50 PM, Bob Friesenhahn
 [EMAIL PROTECTED] wrote:
  On Fri, 15 Feb 2008, Peter Tribble wrote:
   
May not be relevant, but still worth checking - I have a 2530 (which 
  ought
to be that same only SAS instead of FC), and got fairly poor performance
at first. Things improved significantly when I got the LUNs properly
balanced across the controllers.
 
   What do you mean by properly balanced across the controllers?  Are
   you using the multipath support in Solaris 10 or are you relying on
   ZFS to balance the I/O load?  Do some disks have more affinity for a
   controller than the other?
 
 Each LUN is accessed through only one of the controllers (I presume the
 2540 works the same way as the 2530 and 61X0 arrays). The paths are
 active/passive (if the active fails it will relocate to the other path).
 When I set mine up the first time it allocated all the LUNs to controller B
 and performance was terrible. I then manually transferred half the LUNs
 to controller A and it started to fly.

http://groups.google.com/group/comp.unix.solaris/browse_frm/thread/59b43034602a7b7f/0b500afc4d62d434?lnk=stq=#0b500afc4d62d434

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send / receive between different opensolaris versions?

2008-02-07 Thread Albert Lee

On Wed, 2008-02-06 at 13:42 -0600, Michael Hale wrote:
 Hello everybody,
 
 I'm thinking of building out a second machine as a backup for our mail  
 spool where I push out regular filesystem snapshots, something like a  
 warm/hot spare situation.
 
 Our mail spool is currently running snv_67 and the new machine would  
 probably be running whatever the latest opensolaris version is (snv_77  
 or later).
 
 My first question is whether or not zfs send / receive is portable  
 between differing releases of opensolaris.  My second question (kind  
 of off topic for this list) is that I was wondering the difficulty  
 involved in upgrading snv_67 to a later version of opensolaris given  
 that we're running a zfs root boot configuration


For your first question, zfs(1) says:

 zfs upgrade [-r] [-V version] [-a | filesystem]

 Upgrades file systems to a  new  on-disk  version.  Once
 this  is done, the file systems will no longer be acces-
 sible on systems running older versions of the software.
 zfs send streams generated from new snapshots of these
 file systems can not  be  accessed  on  systems  running
 older versions of the software.

The format of the stream depends only on the ZFS filesystem version
at the time of the snapshot, and since stream formats are backwards
compatible, a system with newer ZFS bits can always receive an older
snapshot. The
current filesystem version is 3 (not to be confused with zpool which is
at 10), so it's unlikely to have changed recently.
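
As a rough sketch of checking this before relying on it (pool, dataset and
host names are made up):

  # zfs upgrade                   (run on each machine; reports the version in use)
  # zfs get version tank/spool    (per-dataset filesystem version)
  # zfs send tank/spool@snap | ssh backuphost zfs receive -d backup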


The officially supported method for upgrading a zfs boot system is to
BFU (which upgrades ON but breaks package support). However, you should
be able to do an in-place upgrade with the zfs_ttinstall wrapper for
ttinstall (the Solaris text installer). This means booting from CD/DVD
(or netbooting) and then running the script:
http://opensolaris.org/jive/thread.jspa?threadID=46588tstart=255
You will have to edit it to fit your zfs layout.

 --
 Michael Hale  
 [EMAIL PROTECTED] 
  
 Manager of Engineering SupportEnterprise 
 Engineering Group
 Transcom Enhanced Services
 http://www.transcomus.com
 
 
 

-Albert



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration for a thumper

2008-02-01 Thread Albert Shih
 Le 01/02/2008 à 11:17:14-0800, Marion Hakanson a écrit
 [EMAIL PROTECTED] said:
  Depending on needs for space vs. performance, I'd probably pixk eithr  5*9 
  or
  9*5,  with 1 hot spare. 
 
 [EMAIL PROTECTED] said:
  How you can check the speed (I'm totally newbie on Solaris) 
 
 We're deploying a new Thumper w/750GB drives, and did space vs performance
 tests comparing raidz2 4*11 (2 spares, 24TB) with 7*6 (4 spares, 19TB).
 Here are our bonnie++ and filebench results:
   http://acc.ohsu.edu/~hakansom/thumper_bench.html
 

Many thanks for doing this work, and for letting me read it. 

Regards.

--
Albert SHIH
Observatoire de Paris Meudon
SIO batiment 15
Heure local/Local time:
Ven 1 fév 2008 23:03:59 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS configuration for a thumper

2008-01-30 Thread Albert Shih
Hi all

I've a Sun X4500 with 48 disk of 750Go

The server comes with Solaris installed on two disks. That means I've got 46
disks for ZFS.

When I look at the default configuration of the zpool 

zpool create -f zpool1 raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0
zpool add -f zpool1 raidz c0t1d0 c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0
zpool add -f zpool1 raidz c0t2d0 c1t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0
zpool add -f zpool1 raidz c0t3d0 c1t3d0 c4t3d0 c5t3d0 c6t3d0 c7t3d0
zpool add -f zpool1 raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0
zpool add -f zpool1 raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c6t5d0 c7t5d0
zpool add -f zpool1 raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 c7t6d0
zpool add -f zpool1 raidz c0t7d0 c1t7d0 c4t7d0 c5t7d0 c6t7d0 c7t7d0

that means there are raidz vdevs with 5 disks and others with 6 disks.

When I try to do the same I get this message:

mismatched replication level: pool uses 5-way raidz and new vdev uses 6-way 
raidz

I can force this with the «-f» option.

But what does that mean (sorry if the question is stupid)?

What kind of pool would you use with 46 disks? (46 = 2*23 and 23 is a prime
number, so I can make raidz vdevs with 6 or 7 or any other number of disks.)

Regards.

--
Albert SHIH
Observatoire de Paris Meudon
SIO batiment 15
Heure local/Local time:
Mer 30 jan 2008 16:36:49 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration for a thumper

2008-01-30 Thread Albert Shih
 Le 30/01/2008 à 11:01:35-0500, Kyle McDonald a écrit
 Albert Shih wrote:
 What's kind of pool you use with 46 disk ? (46=2*23 and 23 is prime number
 that's mean I can make raidz with 6 or 7 or any number of disk).
 
   
 Depending on needs for space vs. performance, I'd probably pixk eithr 5*9 
 or 9*5,  with 1 hot spare.

Thanks for the tips...

How can you check the speed? (I'm a total newbie on Solaris.)

I've used 

mkfile 10g

for writes, and I got the same performance with 5*9 as with 9*5.

Do you have any advice about tools like iozone? 
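
For a rough sequential check before reaching for iozone, something like this
(file name and size arbitrary) shows per-vdev throughput while the write runs:

  # /usr/bin/time mkfile 10g /zpool1/testfile
  # zpool iostat -v zpool1 5      (in another terminal while mkfile runs)
  # rm /zpool1/testfile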

Regards.

--
Albert SHIH
Observatoire de Paris Meudon
SIO batiment 15
Heure local/Local time:
Mer 30 jan 2008 17:10:55 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] LowEnd Batt. backed raid controllers that will deal with ZFS commit semantics correctly?

2008-01-25 Thread Albert Chin
On Fri, Jan 25, 2008 at 12:59:18AM -0500, Kyle McDonald wrote:
 ... With the 256MB doing write caching, is there any further benefit
 to moving the ZIL to a flash or other fast NV storage?

Do some tests with/without ZIL enabled. You should see a big
difference. You should see something equivalent to the performance of
ZIL disabled with ZIL/RAM. I'd do ZIL with a battery-backed RAM in a
heartbeat if I could find a card. I think others would as well.

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharenfs with over 10000 file systems

2008-01-23 Thread Albert Chin
On Wed, Jan 23, 2008 at 08:02:22AM -0800, Akhilesh Mritunjai wrote:
 I remember reading a discussion where these kind of problems were
 discussed.
 
 Basically it boils down to everything not being aware of the
 radical changes in filesystems concept.
 
 All these things are being worked on, but it might take sometime
 before everything is made aware that yes it's no longer unusual that
 there can be 1+ filesystems on one machine.

But shouldn't sharemgr(1M) be aware? It's relatively new.

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] LowEnd Batt. backed raid controllers that will deal with ZFS commit semantics correctly?

2008-01-22 Thread Albert Chin
On Tue, Jan 22, 2008 at 12:47:37PM -0500, Kyle McDonald wrote:
 
 My primary use case, is NFS base storage to a farm of software build 
 servers, and developer desktops.

For the above environment, you'll probably see a noticable improvement
with a battery-backed NVRAM-based ZIL. Unfortunately, no inexpensive
cards exist for the common consumer (with ECC memory anyways). If you
convince http://www.micromemory.com/ to sell you one, let us know :)

Set set zfs:zil_disable = 1 in /etc/system to gauge the type of
improvement you can expect. Don't use this in production though.
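
On a live test box the same variable can also be flipped without a reboot, for
example (a sketch; it is only consulted when a dataset is mounted, so remount
or export/import the test filesystem afterwards):

  # echo zil_disable/W0t1 | mdb -kw
  # echo zil_disable/W0t0 | mdb -kw     (turn it back off)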

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Not Offlining Disk on SCSI Sense Error (X4500)

2008-01-03 Thread Albert Chin
On Thu, Jan 03, 2008 at 02:57:08PM -0700, Jason J. W. Williams wrote:
 There seems to be a persistent issue we have with ZFS where one of the
 SATA disk in a zpool on a Thumper starts throwing sense errors, ZFS
 does not offline the disk and instead hangs all zpools across the
 system. If it is not caught soon enough, application data ends up in
 an inconsistent state. We've had this issue with b54 through b77 (as
 of last night).
 
 We don't seem to be the only folks with this issue reading through the
 archives. Are there any plans to fix this behavior? It really makes
 ZFS less than desirable/reliable.

http://blogs.sun.com/eschrock/entry/zfs_and_fma

FMA For ZFS Phase 2 (PSARC/2007/283) was integrated in b68:
  http://www.opensolaris.org/os/community/arc/caselog/2007/283/
  http://www.opensolaris.org/os/community/on/flag-days/all/

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] In-place upgrades of Solaris Express on ZFS root (updated)

2007-12-06 Thread Albert Lee
Hi folks,

I've updated zfs_ttinstall to work correctly (finally =P) and for
slightly better error handling.

This allows you to perform an in-place upgrade on a Solaris Express
system that is using ZFS root. Sorry, no Live Upgrade for now. You will
have to boot from DVD media for the upgrade (CDs are untested, and may
fail because of the way it handles /sbin/umount).

The script is here:
http://trisk.acm.jhu.edu/zfs_ttinstall

To use it:
1)
Edit ROOTDEV, ROOTPOOL, ROOTFS, and DATAPOOLs accordingly.
Copy the zfs_ttinstall to your root pool (for this example, it will be
rootpool).
# cp zfs_ttinstall /rootpool

2)
Boot from SXCE or SXDE DVD.
Select the Solaris Express (not DE) boot option.
Choose one of the graphical install options (having X is much nicer).

3)
After the installer starts, cancel it, and open a new terminal window.

4)
Type:
# mount -o remount,rw /
# zpool import -f rootpool
# sh /rootpool/zfs_ttinstall

5)
Check /var/sadm/system/logs/zfs_upgrade_log for problems.
# reboot
*Important* Don't zpool export your pools after the upgrade.


Please let me know if this works for you. I last tested it with updating
snv_73 to snv_78, and it works for earlier builds, too.


Good luck,

-Albert

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Trial x4500, zfs with NFS and quotas.

2007-11-28 Thread Albert Chin
On Wed, Nov 28, 2007 at 05:40:57PM +0900, Jorgen Lundman wrote:
 
 Ah it's a somewhat mis-leading error message:
 
 bash-3.00# mount -F lofs /zpool1/test /export/test
 bash-3.00# share -F nfs -o rw,anon=0 /export/test
 Could not share: /export/test: invalid path
 bash-3.00# umount /export/test
 bash-3.00# zfs set sharenfs=off zpool1/test
 bash-3.00# mount -F lofs /zpool1/test /export/test
 bash-3.00# share -F nfs -o rw,anon=0 /export/test
 
 So if any zfs file-system has sharenfs enabled, you will get invalid 
 path. If you disable sharenfs, then you can export the lofs.

I reported bug #6578437. We recently upgraded to b77 and this bug
appears to be fixed now.

 Lund
 
 
 J.P. King wrote:
 
  I can not export lofs on NFS. Just gives invalid path,
  
  Tell that to our mirror server.
  
  -bash-3.00$ /sbin/mount -p | grep linux
  /data/linux - /linux lofs - no ro
  /data/linux - /export/ftp/pub/linux lofs - no ro
  -bash-3.00$ grep linux /etc/dfs/sharetab
  /linux  -   nfs ro  Linux directories
  -bash-3.00$ df -k /linux
  Filesystem   1K-blocks  Used Available Use% Mounted on
  data 3369027462 3300686151  68341312  98% /data
  
  and:
 
  http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6578437
  
  I'm using straight Solaris, not Solaris Express or equivalents:
  
  -bash-3.00$ uname -a
  SunOS leprechaun.csi.cam.ac.uk 5.10 Generic_127111-01 sun4u sparc 
  SUNW,Sun-Fire-V240 Solaris
  
  I can't comment on the bug, although I notice it is categorised under 
  nfsv4, but the description doesn't seem to match that.
  
  Jorgen Lundman   | [EMAIL PROTECTED]
  
  Julian
  -- 
  Julian King
  Computer Officer, University of Cambridge, Unix Support
  
 
 -- 
 Jorgen Lundman   | [EMAIL PROTECTED]
 Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
 Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
 Japan| +81 (0)3 -3375-1767  (home)
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFSboot : Initial disk layout

2007-11-26 Thread Albert Lee

On Mon, 2007-11-26 at 08:21 -0800, Roman Morokutti wrote:
 Hi
 
 I am very interested in using ZFS as a whole: meaning
 on the whole disk in my laptop. I would now make a
 complete reinstall and don't know how to partition
 the disk initially for ZFS. 
 
 --
 Roman
  
 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

AFAIK, ZFS boot will only work with disk slices at the moment, so you
still need to create an x86 (fdisk) partition for Solaris and at least
one slice for root. 

Your options are to netinstall or install to UFS and copy the files to
ZFS.
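
A minimal sketch of the disk preparation (device and pool names hypothetical;
the root pool has to live on a slice rather than a whole disk):

  # format                        (create a Solaris fdisk partition, then size slice 0)
  # zpool create -f rpool c0d0s0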

See http://www.opensolaris.org/os/community/zfs/boot/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why did resilvering restart?

2007-11-21 Thread Albert Chin
On Tue, Nov 20, 2007 at 11:39:30AM -0600, Albert Chin wrote:
 On Tue, Nov 20, 2007 at 11:10:20AM -0600, [EMAIL PROTECTED] wrote:
  
  [EMAIL PROTECTED] wrote on 11/20/2007 10:11:50 AM:
  
   On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote:
Resilver and scrub are broken and restart when a snapshot is created
-- the current workaround is to disable snaps while resilvering,
the ZFS team is working on the issue for a long term fix.
  
   But, no snapshot was taken. If so, zpool history would have shown
   this. So, in short, _no_ ZFS operations are going on during the
   resilvering. Yet, it is restarting.
  
  
  Does 2007-11-20.02:37:13 actually match the expected timestamp of
  the original zpool replace command before the first zpool status
  output listed below?
 
 No. We ran some 'zpool status' commands after the last 'zpool
 replace'. The 'zpool status' output in the initial email is from this
 morning. The only ZFS command we've been running is 'zfs list', 'zpool
 list tww', 'zpool status', or 'zpool status -v' after the last 'zpool
 replace'.

I think the 'zpool status' command was resetting the resilvering. We
upgraded to b77 this morning which did not exhibit this problem.
Resilvering is now done.

 Server is on GMT time.
 
  Is it possible that another zpool replace is further up on your
  pool history (ie it was rerun by an admin or automatically from some
  service)?
 
 Yes, but a zpool replace for the same bad disk:
   2007-11-20.00:57:40 zpool replace tww c0t600A0B8000299966059E4668CBD3d0
   c0t600A0B800029996606584741C7C3d0
   2007-11-20.02:35:22 zpool detach tww c0t600A0B800029996606584741C7C3d0
   2007-11-20.02:37:13 zpool replace tww c0t600A0B8000299966059E4668CBD3d0
   c0t600A0B8000299CCC06734741CD4Ed0
 
 We accidentally removed c0t600A0B800029996606584741C7C3d0 from the
 array, hence the 'zpool detach'.
 
 The last 'zpool replace' has been running for 15h now.
 
  -Wade
  
  
   
[EMAIL PROTECTED] wrote on 11/20/2007 09:58:19 AM:
   
 On b66:
   # zpool replace tww c0t600A0B8000299966059E4668CBD3d0 \
   c0t600A0B8000299CCC06734741CD4Ed0
some hours later
   # zpool status tww
 pool: tww
state: DEGRADED
   status: One or more devices is currently being resilvered.  The pool will
   continue to function, possibly in a degraded state.
   action: Wait for the resilver to complete.
scrub: resilver in progress, 62.90% done, 4h26m to go
some hours later
   # zpool status tww
 pool: tww
state: DEGRADED
   status: One or more devices is currently being resilvered.  The pool will
   continue to function, possibly in a degraded state.
   action: Wait for the resilver to complete.
scrub: resilver in progress, 3.85% done, 18h49m to go

   # zpool history tww | tail -1
   2007-11-20.02:37:13 zpool replace tww
c0t600A0B8000299966059E4668CBD3d0
   c0t600A0B8000299CCC06734741CD4Ed0

 So, why did resilvering restart when no zfs operations occurred? I
 just ran zpool status again and now I get:
   # zpool status tww
 pool: tww
state: DEGRADED
   status: One or more devices is currently being resilvered.  The pool will
   continue to function, possibly in a degraded state.
   action: Wait for the resilver to complete.
scrub: resilver in progress, 0.00% done, 134h45m to go

 What's going on?

 --
 albert chin ([EMAIL PROTECTED])

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why did resilvering restart?

2007-11-20 Thread Albert Chin
On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote:
 Resilver and scrub are broken and restart when a snapshot is created
 -- the current workaround is to disable snaps while resilvering;
 the ZFS team is working on the issue for a long-term fix.

But, no snapshot was taken. If so, zpool history would have shown
this. So, in short, _no_ ZFS operations are going on during the
resilvering. Yet, it is restarting.
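
For reference, the sort of checks that should rule that out (a sketch;
'tww' is our pool):

  # any snapshot/replace/scrub entries logged since the last replace?
  zpool history tww | egrep 'snapshot|replace|scrub|detach'

  # list any snapshots that currently exist in the pool
  zfs list -t snapshot -r tww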

 -Wade
 
 [EMAIL PROTECTED] wrote on 11/20/2007 09:58:19 AM:
 
  On b66:
# zpool replace tww c0t600A0B8000299966059E4668CBD3d0 \
c0t600A0B8000299CCC06734741CD4Ed0
 some hours later
# zpool status tww
  pool: tww
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 62.90% done, 4h26m to go
 some hours later
# zpool status tww
  pool: tww
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 3.85% done, 18h49m to go
 
# zpool history tww | tail -1
2007-11-20.02:37:13 zpool replace tww
 c0t600A0B8000299966059E4668CBD3d0
c0t600A0B8000299CCC06734741CD4Ed0
 
  So, why did resilvering restart when no zfs operations occurred? I
  just ran zpool status again and now I get:
# zpool status tww
  pool: tww
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.00% done, 134h45m to go
 
  What's going on?
 
  --
  albert chin ([EMAIL PROTECTED])
 
 

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why did resilvering restart?

2007-11-20 Thread Albert Chin
On Tue, Nov 20, 2007 at 11:10:20AM -0600, [EMAIL PROTECTED] wrote:
 
 [EMAIL PROTECTED] wrote on 11/20/2007 10:11:50 AM:
 
  On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote:
   Resilver and scrub are broken and restart when a snapshot is created
   -- the current workaround is to disable snaps while resilvering;
   the ZFS team is working on the issue for a long-term fix.
 
  But, no snapshot was taken. If so, zpool history would have shown
  this. So, in short, _no_ ZFS operations are going on during the
  resilvering. Yet, it is restarting.
 
 
 Does 2007-11-20.02:37:13 actually match the expected timestamp of
 the original zpool replace command before the first zpool status
 output listed below?

No. We ran some 'zpool status' commands after the last 'zpool
replace'. The 'zpool status' output in the initial email is from this
morning. The only ZFS commands we've run since the last 'zpool replace'
are 'zfs list', 'zpool list tww', 'zpool status', and 'zpool status -v'.

Server is on GMT time.

 Is it possible that another zpool replace is further up on your
 pool history (ie it was rerun by an admin or automatically from some
 service)?

Yes, but a zpool replace for the same bad disk:
  2007-11-20.00:57:40 zpool replace tww c0t600A0B8000299966059E4668CBD3d0
  c0t600A0B800029996606584741C7C3d0
  2007-11-20.02:35:22 zpool detach tww c0t600A0B800029996606584741C7C3d0
  2007-11-20.02:37:13 zpool replace tww c0t600A0B8000299966059E4668CBD3d0
  c0t600A0B8000299CCC06734741CD4Ed0

We accidentally removed c0t600A0B800029996606584741C7C3d0 from the
array, hence the 'zpool detach'.

The last 'zpool replace' has been running for 15h now.

 -Wade
 
 
  
   [EMAIL PROTECTED] wrote on 11/20/2007 09:58:19 AM:
  
On b66:
  # zpool replace tww c0t600A0B8000299966059E4668CBD3d0 \
  c0t600A0B8000299CCC06734741CD4Ed0
   some hours later
  # zpool status tww
pool: tww
   state: DEGRADED
  status: One or more devices is currently being resilvered.  The pool will
  continue to function, possibly in a degraded state.
  action: Wait for the resilver to complete.
   scrub: resilver in progress, 62.90% done, 4h26m to go
   some hours later
  # zpool status tww
pool: tww
   state: DEGRADED
  status: One or more devices is currently being resilvered.  The pool will
  continue to function, possibly in a degraded state.
  action: Wait for the resilver to complete.
   scrub: resilver in progress, 3.85% done, 18h49m to go
   
  # zpool history tww | tail -1
  2007-11-20.02:37:13 zpool replace tww
   c0t600A0B8000299966059E4668CBD3d0
  c0t600A0B8000299CCC06734741CD4Ed0
   
So, why did resilvering restart when no zfs operations occurred? I
just ran zpool status again and now I get:
  # zpool status tww
pool: tww
   state: DEGRADED
  status: One or more devices is currently being resilvered.  The pool will
  continue to function, possibly in a degraded state.
  action: Wait for the resilver to complete.
   scrub: resilver in progress, 0.00% done, 134h45m to go
   
What's going on?
   
--
albert chin ([EMAIL PROTECTED])
  
  
 
  --
  albert chin ([EMAIL PROTECTED])
 
 

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Oops (accidentally deleted replaced drive)

2007-11-19 Thread Albert Chin
Running ON b66 and had a drive fail. Ran 'zpool replace' and resilvering
began, but then I accidentally deleted the replacement drive on the array
via CAM.

# zpool status -v
...
  raidz2                                   DEGRADED     0     0     0
    c0t600A0B800029996605964668CB39d0      ONLINE       0     0     0
    spare                                  DEGRADED     0     0     0
      replacing                            UNAVAIL      0 79.14     0  insufficient replicas
        c0t600A0B8000299966059E4668CBD3d0  UNAVAIL     27   370     0  cannot open
        c0t600A0B800029996606584741C7C3d0  UNAVAIL      0 82.32     0  cannot open
      c0t600A0B8000299CCC05D84668F448d0    ONLINE       0     0     0
    c0t600A0B8000299CCC05B44668CC6Ad0      ONLINE       0     0     0
    c0t600A0B800029996605A44668CC3Fd0      ONLINE       0     0     0
    c0t600A0B8000299CCC05BA4668CD2Ed0      ONLINE       0     0     0


Is there a way to recover from this?
  # zpool replace tww c0t600A0B8000299966059E4668CBD3d0 \
  c0t600A0B8000299CCC06734741CD4Ed0
  cannot replace c0t600A0B8000299966059E4668CBD3d0 with
  c0t600A0B8000299CCC06734741CD4Ed0: cannot replace a replacing device

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oops (accidentally deleted replaced drive)

2007-11-19 Thread Albert Chin
On Mon, Nov 19, 2007 at 06:23:01PM -0800, Eric Schrock wrote:
 You should be able to do a 'zpool detach' of the replacement and then
 try again.

Thanks. That worked.
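
For the archives, the sequence that worked was roughly (device names as
in the status output above):

  # detach the accidentally-deleted replacement from the 'replacing' vdev
  zpool detach tww c0t600A0B800029996606584741C7C3d0

  # then redo the replace onto the new disk
  zpool replace tww c0t600A0B8000299966059E4668CBD3d0 \
    c0t600A0B8000299CCC06734741CD4Ed0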

 - Eric
 
 On Mon, Nov 19, 2007 at 08:20:04PM -0600, Albert Chin wrote:
  Running ON b66 and had a drive fail. Ran 'zpool replace' and resilvering
  began, but then I accidentally deleted the replacement drive on the array
  via CAM.
  
  # zpool status -v
  ...
    raidz2                                   DEGRADED     0     0     0
      c0t600A0B800029996605964668CB39d0      ONLINE       0     0     0
      spare                                  DEGRADED     0     0     0
        replacing                            UNAVAIL      0 79.14     0  insufficient replicas
          c0t600A0B8000299966059E4668CBD3d0  UNAVAIL     27   370     0  cannot open
          c0t600A0B800029996606584741C7C3d0  UNAVAIL      0 82.32     0  cannot open
        c0t600A0B8000299CCC05D84668F448d0    ONLINE       0     0     0
      c0t600A0B8000299CCC05B44668CC6Ad0      ONLINE       0     0     0
      c0t600A0B800029996605A44668CC3Fd0      ONLINE       0     0     0
      c0t600A0B8000299CCC05BA4668CD2Ed0      ONLINE       0     0     0
  
  
  Is there a way to recover from this?
# zpool replace tww c0t600A0B8000299966059E4668CBD3d0 \
c0t600A0B8000299CCC06734741CD4Ed0
cannot replace c0t600A0B8000299966059E4668CBD3d0 with
c0t600A0B8000299CCC06734741CD4Ed0: cannot replace a replacing device
  
  -- 
  albert chin ([EMAIL PROTECTED])
 
 --
 Eric Schrock, FishWorkshttp://blogs.sun.com/eschrock
 
 

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Did ZFS boot/root make it into Solaris Express Developer Edition 9/07 ?

2007-10-20 Thread Albert Lee
On Sun, 21 Oct 2007, Ian Collins wrote:

 Carl Brewer wrote:
 As the subject says: a quick grovel around didn't tell me whether ZFS
 boot/root had made it into SEDE 9/07. Before I download it and try, can
 anyone save me the bandwidth?
 Thanks!


 It didn't.  It still isn't supported by the installer in SXCE either.

 Ian

ZFS root and boot have worked fine since the later snv_6x builds; the
installer is a different matter. You'll have to install to UFS and move to
ZFS - see http://www.opensolaris.org/os/community/zfs/boot/ .

-Albert
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS array NVRAM cache?

2007-09-25 Thread Albert Chin
On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrote:
 I don't understand.  How do you
 
 setup one LUN that has all of the NVRAM on the array dedicated to it
 
 I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
 thick here, but can you be more specific for the n00b?

If you're using CAM, disable NVRAM on all of your LUNs. Then, create
another LUN equivalent to the size of your NVRAM. Assign the ZIL to
this LUN. You'll then have an NVRAM-backed ZIL.

I posted a question along these lines to storage-discuss:
  http://mail.opensolaris.org/pipermail/storage-discuss/2007-July/003080.html

You'll need to determine the performance impact of removing NVRAM from
your data LUNs. Don't blindly do it.
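
Once the host sees the small NVRAM-backed LUN, attaching it is a one-liner
(sketch; the LUN name below is made up, and the pool must be on a build
with separate intent log support, i.e. b68 or later):

  # add the NVRAM-backed LUN as the pool's separate intent log (slog)
  zpool add tww log c0t600A0B8000299CCC0000DEADBEEF00d0

  # it should then show up under a "logs" section here
  zpool status tww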

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Successful SXCE upgrade from DVD media on ZFS root

2007-09-24 Thread Albert Lee
Hi,

I converted my laptop to use ZFS root, loosely following the instructions: 
http://opensolaris.org/os/community/zfs/boot/zfsboot-manual/

In my case I added separate filesystems under rootfs for /usr, /var, /opt, and 
/export, which slightly complicates things (they have to be added to vfstab and 
mounted for the upgrade).

Then I booted from a SXCE b73 DVD to upgrade from b72, using the text installer 
(ttinstall). ttinstall assumes you use UFS and has several checks that fail on 
ZFS. After wrangling with ttinstall for a while, I was able to coax it into 
upgrading my system to SXCE b73.



I've written a script to automate this process:

http://trisk.acm.jhu.edu/zfs_ttinstall

Put the script somewhere you can access it during the install (e.g. on 
your root pool, on a USB stick, or on another filesystem you can mount).

When booting from the DVD, choose Solaris Express at the grub prompt, and 
select one of the text-based install options when prompted later (Desktop is 
preferable, but Console should also work).

Set ROOTPOOL, ROOTFS, and ROOTDEV appropriately at the top of the script.
It assumes that if you have any filesystems under /, they are located at 
$ROOTPOOL/$ROOTFS/filesystem (e.g. /usr is rootpool/rootfs/usr). Those will 
be mounted automatically. It also assumes that ROOTDEV is the pool location and 
also where you want the grub boot sector installed.
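
For example, with the layout described above, the top of the script would
look something like this (values are illustrative, not defaults):

  # pool, root filesystem, and boot device -- edit to match your system
  ROOTPOOL=rootpool
  ROOTFS=rootfs
  ROOTDEV=c0d0s0

  # child filesystems are expected at $ROOTPOOL/$ROOTFS/<name>, e.g.
  #   rootpool/rootfs/usr, rootpool/rootfs/var, rootpool/rootfs/opt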

Run the script, and hopefully, it will do the right thing. ;)

If you have any issues, please read the script.

Let me know if this was helpful,

-Albert
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs log device (zil) ever coming to Sol10?

2007-09-18 Thread Albert Chin
On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
 I think we are very close to using ZFS in our production environment.  Now
 that I have snv_72 installed and my pools set up with NVRAM log devices
 things are hauling butt.

How did you get NVRAM log devices?

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate intent log blog

2007-07-27 Thread Albert Chin
On Fri, Jul 27, 2007 at 08:32:48AM -0700, Adolf Hohl wrote:
 What is necessary to get it working from the Solaris side? Is a
 driver included, or is no special driver needed?

I'd imagine so.

 I just got a packaged MM-5425CN with 256 MB. However, I am lacking a
 64-bit PCI-X slot and am not sure whether it is worth the whole effort
 for my personal purposes.

Huh? So your MM-5425CN doesn't fit into a PCI slot?

 Any comments are very much appreciated.

How did you obtain your card?

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate intent log blog

2007-07-18 Thread Albert Chin
On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
 I wrote up a blog on the separate intent log called slog blog
 which describes the interface; some performance results; and
 general status:
 
 http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on

So, how did you get a pci Micro Memory pci1332,5425 card :) I
presume this is the PCI-X version.

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate intent log blog

2007-07-18 Thread Albert Chin
On Wed, Jul 18, 2007 at 01:00:22PM -0700, Eric Schrock wrote:
 You can find these at:
 
 http://www.umem.com/Umem_NVRAM_Cards.html
 
 And the one Neil was using in particular:
 
 http://www.umem.com/MM-5425CN.html

They only sell to OEMs. Our Sun VAR looked for one as well but could not
find anyone selling them.

 - Eric
 
 On Wed, Jul 18, 2007 at 01:54:23PM -0600, Neil Perrin wrote:
  
  
  Albert Chin wrote:
   On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
   I wrote up a blog on the separate intent log called slog blog
   which describes the interface; some performance results; and
   general status:
  
   http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
   
   So, how did you get a pci Micro Memory pci1332,5425 card :) I
   presume this is the PCI-X version.
  
  I wasn't involved in the acquisition but was just sent one internally
  for testing. Yes, it's PCI-X. I assume you're asking because they cannot
  (or can no longer) be obtained?
  
  Neil.
 
 --
 Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
 
 

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate intent log blog

2007-07-18 Thread Albert Chin
On Wed, Jul 18, 2007 at 01:54:23PM -0600, Neil Perrin wrote:
 Albert Chin wrote:
  On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
  I wrote up a blog on the separate intent log called slog blog
  which describes the interface; some performance results; and
  general status:
 
  http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
  
  So, how did you get a pci Micro Memory pci1332,5425 card :) I
  presume this is the PCI-X version.
 
 I wasn't involved in the acquisition but was just sent one internally
 for testing. Yes, it's PCI-X. I assume you're asking because they cannot
 (or can no longer) be obtained?

Sadly, not from any reseller I know of.

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-10 Thread Albert Chin
On Tue, Jul 10, 2007 at 07:12:35AM -0500, Al Hopper wrote:
 On Mon, 9 Jul 2007, Albert Chin wrote:
 
  On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
 
  On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
  It would also be nice for extra hardware (PCI-X, PCIe card) that
  added NVRAM storage to various sun low/mid-range servers that are
  currently acting as ZFS/NFS servers.
 
  You can do it yourself very easily -- check out the umem cards from
  Micro Memory, available at http://www.umem.com.  Reasonable prices
  ($1000/GB), they have a Solaris driver, and the performance
  absolutely rips.
 
  The PCIe card is in beta, they don't sell to individual customers, and
  the person I spoke with didn't even know a vendor (Tier 1/2 OEMs) that
  had a Solaris driver. They do have a number of PCI-X cards though.
 
  So, I guess we'll be testing the "dedicate all NVRAM to a LUN" solution
  once b68 is released.
 
 or ramdiskadm(1M) might be interesting...

Well, that's not really an option as a panic of the server would not
be good. While the on-disk data would be consistent, data the clients
wrote to the server might not have been committed.
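
Just to spell out what that would look like (illustration only, on a build
with slog support, and exactly the risk above -- a ramdisk slog loses
anything not yet on the data disks if the server panics):

  # create a 256 MB ramdisk and use it as the separate intent log
  ramdiskadm -a zilram 256m
  zpool add tww log /dev/ramdisk/zilram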

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-09 Thread Albert Chin
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
 
 On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
  It would also be nice for extra hardware (PCI-X, PCIe card) that
  added NVRAM storage to various sun low/mid-range servers that are
  currently acting as ZFS/NFS servers. 
 
 You can do it yourself very easily -- check out the umem cards from
 Micro Memory, available at http://www.umem.com.  Reasonable prices
 ($1000/GB), they have a Solaris driver, and the performance
 absolutely rips.

The PCIe card is in beta, they don't sell to individual customers, and
the person I spoke with didn't even know a vendor (Tier 1/2 OEMs) that
had a Solaris driver. They do have a number of PCI-X cards though.

So, I guess we'll be testing the "dedicate all NVRAM to a LUN" solution
once b68 is released.

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-03 Thread Albert Chin
PSARC 2007/171 will be available in b68. Any documentation anywhere on
how to take advantage of it?

Some of the Sun storage arrays contain NVRAM. It would be really nice
if the array NVRAM would be available for ZIL storage. It would also
be nice for extra hardware (PCI-X, PCIe card) that added NVRAM storage
to various sun low/mid-range servers that are currently acting as
ZFS/NFS servers. Or maybe someone knows of cheap SSD storage that
could be used for the ZIL? I think several HD's are available with
SCSI/ATA interfaces.
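
Presumably (I haven't seen the b68 documentation yet) the separate intent
log is just another vdev type, something like:

  # attach a fast NVRAM/SSD device as the intent log at pool creation ...
  zpool create tank mirror c1t0d0 c1t1d0 log c2t0d0

  # ... or add one to an existing pool
  zpool add tank log c2t0d0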

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-03 Thread Albert Chin
On Tue, Jul 03, 2007 at 05:31:00PM +0200, [EMAIL PROTECTED] wrote:
 
 PSARC 2007/171 will be available in b68. Any documentation anywhere on
 how to take advantage of it?
 
 Some of the Sun storage arrays contain NVRAM. It would be really nice
 if the array NVRAM would be available for ZIL storage. It would also
 be nice for extra hardware (PCI-X, PCIe card) that added NVRAM storage
 to various sun low/mid-range servers that are currently acting as
 ZFS/NFS servers. Or maybe someone knows of cheap SSD storage that
 could be used for the ZIL? I think several HD's are available with
 SCSI/ATA interfaces.
 
 Would flash memory be fast enough (current flash memory has reasonable
 sequential write throughput but horrible I/O ops)?

Good point. The speeds for the following don't seem very impressive:
  http://www.adtron.com/products/A25fb-SerialATAFlashDisk.html
  http://www.sandisk.com/OEM/ProductCatalog(1321)-SanDisk_SSD_SATA_5000_25.aspx

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-03 Thread Albert Chin
On Tue, Jul 03, 2007 at 09:01:50AM -0700, Richard Elling wrote:
 Albert Chin wrote:
  Some of the Sun storage arrays contain NVRAM. It would be really nice
  if the array NVRAM would be available for ZIL storage. It would also
  be nice for extra hardware (PCI-X, PCIe card) that added NVRAM storage
  to various sun low/mid-range servers that are currently acting as
  ZFS/NFS servers. Or maybe someone knows of cheap SSD storage that
  could be used for the ZIL? I think several HD's are available with
  SCSI/ATA interfaces.
 
 First, you need a workload where the ZIL has an impact.

ZFS/NFS with zil_disable is faster than ZFS/NFS without it. So, I presume,
ZFS/NFS with an NVRAM-backed ZIL would be noticeably faster than ZFS/NFS
with the default in-pool ZIL.
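
For reference, zil_disable is typically flipped one of these (unsupported)
ways -- not something to leave enabled in production:

  # persistent across reboots, via /etc/system
  echo 'set zfs:zil_disable = 1' >> /etc/system

  # or on the live kernel with mdb; it only affects filesystems mounted afterwards
  echo zil_disable/W0t1 | mdb -kw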

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

