Re: [zfs-discuss] (no subject)

2009-01-15 Thread JZ
Open Folks,

Did I give you the impression that only Sun folks can speak on the list 
discussion?

Or did I give you the impression that you have to make the digital name as 
such and such to do this?

No, that was not what I meant, if that would kill the open discussion.



Someone, earlier, asked me a question off-list: what is the difference 
between your Zhou style and the western style?

I think that was a question that asked to explain who I am and why I say 
things and do things in an unusual fashion.

And I copied the list for my response. The server did not deliver.

And in my religion, that act of copying the list can potentially help the 
list better understand me and be less fearful.  It was a noble act and I am 
still wondering why that email did not get through. And therefore I question 
the integrity of the mail server, before I can understand its policy.

I tested the mail server with another method; that didn't get through either.



Some friends asked me to be really open. But can you folks take my real open 
state of mind?





Now, users have questions, and are asking the list. No Sun folks are 
responding. I guess any worldwide users who know the answers should post 
some help then?



Best,

z





- Original Message - 
From: JZ j...@excelsioritsolutions.com
To: David Shirley david.shir...@nec.com.au; 
zfs-discuss@opensolaris.org
Sent: Thursday, January 15, 2009 12:04 AM
Subject: Re: [zfs-discuss] (no subject)


 Ok, it's also important, in many many cases, but not all -
 taking the problem into tomorrow is also not very good.

 IMHO, maybe all you smart open folks that know all about this and that, 
 but
 dunno how to fix your darn email address to appear zfs user on the darn
 list discussion?
 do I have to spell this out to you?

 OMG,
 your Solaris mail server is too much for me, kicking my ass, you win
 chatting on, z open folks.




 best,
 z, at home

 [Daisy baby getting off late, you babies don't understand!]

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Swap ZFS pool disks to another host hardware

2009-01-15 Thread Nikhil
Hi,

I am running a Solaris 10 box on a v20z with the 11/06 release. It has a ZFS
pool configured on an S1 storage box with 3 146GB disks (it holds a lot of
data).
I am planning to upgrade the machine to new hardware, an X2200 M2, running the
newer Solaris 10 8/07 release.

I am wondering whether there is anything I need to take care of before moving
the S1 storage box from the old hardware to the new X2200, meaning: do I need
to sync any data before switching the storage over to the new system?

Do I need to do any kind of sync of ZFS metadata?

If I simply pull out the S1 storage box (on which ZFS has been configured for a
long time now, with data) and plug it into the new hardware, will the OS with
the new release detect the pool as it is, and still have the data on it? Or
will it detect the disks as new disks to be configured again from scratch? I do
not want to lose any data.

Kindly suggest what is to be done here.

Thanks,
Nikhil
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Swap ZFS pool disks to another host hardware

2009-01-15 Thread JZ
[Sun folks are not working?]

Hi Nikhil,
doing IT so late at this hour?

I had an email that also got blocked by the mail server.
that one talked about metadata too, which might be satisfying, with some warm 
sake...

http://en.wikipedia.org/wiki/Sake
The first alcoholic drink in Japan may have been kuchikami no sake (mouth-chewed)...



best,
z

  - Original Message - 
  From: Nikhil 
  To: zfs-discuss@opensolaris.org 
  Sent: Thursday, January 15, 2009 3:39 AM
  Subject: [zfs-discuss] Swap ZFS pool disks to another host hardware


  Hi,

  I am running a Solaris 10 box on a v20z with 11/06 release. It has got ZFS 
pool configured on S1 storage box with 3 146gb disks (it has got lot of data)
  I am planning to upgrade the machine to the new hardware with the new Solaris 
10  release of 8/07 and to the new hardware of X2200M2.

  I am wondering will there be any thing that I need to take care of before 
putting the S1 storage box onto the new hardware X2200 pulling from the old 
hardware, meaningly do I need to sync any data before the switch of the system 
to the new hardware (essentially switch of the storage to be plugged to the new 
hardware).

  Do I need to do any kind of sync of zfs metadata?

  If I simply pull out the S1 storage box(on which the zfs is configured for a 
long time now with data) and plug it to the new hardware, will the OS with new 
release detect the pool as it is ? and also have the data on it? or Will it 
detect them as the new disks to be configured again from scratch? I do not want 
to lose any data.

  Kindly suggest what is to be done here.

  Thanks,
  Nikhil


--


  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (no subject)

2009-01-15 Thread JZ
Another Zhou approach is that -
when there is a fight that needs to be fought, a Zhou will fight that fight, 
walking onto the battlefield in front of the troops

- kind of like the western way of fighting in the earlier days

best,
z 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on partitions

2009-01-15 Thread Casper . Dik


ZFS does turn it off if it doesn't have the whole disk.  That's where the
performance issues come from.

But it doesn't touch it so ZFS continues to work if you enable
write caching.  And I think we default to write-cache enabled for
ATA/IDE disks.  (The reason is that they're shipped with write cache 
enabled and that's the only setting tested by the manufacturer)
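
If you want to check or change what a particular drive is doing, format's
expert mode exposes the cache settings on most sd-driven disks, roughly as
below (menu names from memory, so verify on your own system):

# format -e                # expert mode; select the disk from the menu
#   format> cache
#   cache> write_cache
#   write_cache> display   # show whether the drive's write cache is enabled
#   write_cache> enable    # or 'disable'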

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Swap ZFS pool disks to another host hardware

2009-01-15 Thread JZ
[still only me speaking? ok, more spam for whoever out there confused...]


http://en.wikipedia.org/wiki/Beer

if you are too lazy to read through Sake...

best,
z



  - Original Message - 
  From: JZ 
  To: Nikhil ; zfs-discuss@opensolaris.org 
  Sent: Thursday, January 15, 2009 4:18 AM
  Subject: Re: [zfs-discuss] Swap ZFS pool disks to another host hardware


  [Sun folks are not working?]

  Hi Nikhi,
  doing IT so late at this hour?

  I had an email that also got blocked by the mail server.
  that one talked about metadata too, which might be satisfying, with some warm 
sake...

  http://en.wikipedia.org/wiki/Sake

  best,
  z

- Original Message - 
From: Nikhil 
To: zfs-discuss@opensolaris.org 
Sent: Thursday, January 15, 2009 3:39 AM
Subject: [zfs-discuss] Swap ZFS pool disks to another host hardware


Hi,

I am running a Solaris 10 box on a v20z with 11/06 release. It has got ZFS 
pool configured on S1 storage box with 3 146gb disks (it has got lot of data)
I am planning to upgrade the machine to the new hardware with the new 
Solaris 10  release of 8/07 and to the new hardware of X2200M2.

I am wondering will there be any thing that I need to take care of before 
putting the S1 storage box onto the new hardware X2200 pulling from the old 
hardware, meaningly do I need to sync any data before the switch of the system 
to the new hardware (essentially switch of the storage to be plugged to the new 
hardware).

Do I need to do any kind of sync of zfs metadata?

If I simply pull out the S1 storage box(on which the zfs is configured for 
a long time now with data) and plug it to the new hardware, will the OS with 
new release detect the pool as it is ? and also have the data on it? or Will it 
detect them as the new disks to be configured again from scratch? I do not want 
to lose any data.

Kindly suggest what is to be done here.

Thanks,
Nikhil





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



--


  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Swap ZFS pool disks to another host hardware

2009-01-15 Thread JZ
beer and http://en.wikipedia.org/wiki/Code_of_Hammurabi

enlightening?
fulfilling?

best,
z


  - Original Message - 
  From: JZ 
  To: Nikhil ; zfs-discuss@opensolaris.org 
  Cc: yunlai...@hotmail.com ; guo_r...@hotmail.com ; ??? ?? ; Lu Bin ; 
liaohelen ; Liao, Jane ; ??? ; gmsi...@sina.com ; dongmingjia ; Jee Kim ; 
xpao2...@yahoo.com ; wagner@263.net ; clare...@yahoo.com ; ken wren ; 
Matthew Zhang ; sc...@mohegansun.com 
  Sent: Thursday, January 15, 2009 4:52 AM
  Subject: Re: [zfs-discuss] Swap ZFS pool disks to another host hardware


  [still only me speaking? ok, more spam for whoever out there confused...]


  http://en.wikipedia.org/wiki/Beer

  if you are lazy to read through Sake...

  best,
  z



- Original Message - 
From: JZ 
To: Nikhil ; zfs-discuss@opensolaris.org 
Sent: Thursday, January 15, 2009 4:18 AM
Subject: Re: [zfs-discuss] Swap ZFS pool disks to another host hardware


[Sun folks are not working?]

Hi Nikhi,
doing IT so late at this hour?

I had an email that also got blocked by the mail server.
that one talked about metadata too, which might be satisfying, with some 
warm sake...

http://en.wikipedia.org/wiki/Sake

best,
z

  - Original Message - 
  From: Nikhil 
  To: zfs-discuss@opensolaris.org 
  Sent: Thursday, January 15, 2009 3:39 AM
  Subject: [zfs-discuss] Swap ZFS pool disks to another host hardware


  Hi,

  I am running a Solaris 10 box on a v20z with 11/06 release. It has got 
ZFS pool configured on S1 storage box with 3 146gb disks (it has got lot of 
data)
  I am planning to upgrade the machine to the new hardware with the new 
Solaris 10  release of 8/07 and to the new hardware of X2200M2.

  I am wondering will there be any thing that I need to take care of before 
putting the S1 storage box onto the new hardware X2200 pulling from the old 
hardware, meaningly do I need to sync any data before the switch of the system 
to the new hardware (essentially switch of the storage to be plugged to the new 
hardware).

  Do I need to do any kind of sync of zfs metadata?

  If I simply pull out the S1 storage box(on which the zfs is configured 
for a long time now with data) and plug it to the new hardware, will the OS 
with new release detect the pool as it is ? and also have the data on it? or 
Will it detect them as the new disks to be configured again from scratch? I do 
not want to lose any data.

  Kindly suggest what is to be done here.

  Thanks,
  Nikhil


--


  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Swap ZFS pool disks to another host hardware

2009-01-15 Thread Sanjeev
Nikhil,

Comments inline...

On Thu, Jan 15, 2009 at 02:09:16PM +0530, Nikhil wrote:
 Hi,
 
 I am running a Solaris 10 box on a v20z with 11/06 release. It has got ZFS
 pool configured on S1 storage box with 3 146gb disks (it has got lot of
 data)
 I am planning to upgrade the machine to the new hardware with the new
 Solaris 10  release of 8/07 and to the new hardware of X2200M2.
 
 I am wondering will there be any thing that I need to take care of before
 putting the S1 storage box onto the new hardware X2200 pulling from the old
 hardware, meaningly do I need to sync any data before the switch of the
 system to the new hardware (essentially switch of the storage to be plugged
 to the new hardware).
 
 Do I need to do any kind of sync of zfs metadata?

So, from what I understand, you have the entire pool built out of disks
from the same S1 box. In that case all you need to do is:
- Export the pool on the old box: zpool export poolname
- Connect the S1 to the new machine
- Import the pool: zpool import poolname

Hope that helps.
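
A minimal command sketch of that sequence (the pool name is a placeholder):

# on the old v20z: cleanly export the pool so the labels are consistent
zpool export poolname

# move the S1 shelf to the X2200 M2, then on the new host:
zpool import              # lists importable pools found on the attached disks
zpool import poolname     # imports the pool, data intact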

Thanks and regards,
Sanjeev

 
 If I simply pull out the S1 storage box(on which the zfs is configured for a
 long time now with data) and plug it to the new hardware, will the OS with
 new release detect the pool as it is ? and also have the data on it? or Will
 it detect them as the new disks to be configured again from scratch? I do
 not want to lose any data.
 
 Kindly suggest what is to be done here.
 
 Thanks,
 Nikhil

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs backups

2009-01-15 Thread Joerg Schilling
Ian Collins i...@ianshome.com wrote:

 satya wrote:
   Any idea if we can use pax command to backup ZFS acls? will -p option of 
  pax utility do the trick? 
 

 pax should, according to
 http://docs.sun.com/app/docs/doc/819-5461/gbchx?a=view

 tar and cpio do.

 It should be simple enough to test, just generate an archive and have a
 look.

What do you understand by backups?

Neither tar, cpio, nor pax (as found on Solaris) supports all the features that 
people expect from backup tools.
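
For the narrow question of whether ACLs survive an archive round-trip, Ian's
suggestion above is easy to try. A rough sketch, assuming a scratch ZFS dataset
and the Solaris tar -p handling described in the docs linked above (the file
name and ACL entry are made up for illustration):

cd /tank/scratch
touch testfile
chmod A+user:webservd:read_data:allow testfile   # add an NFSv4-style ACL entry
ls -v testfile                                   # note the ACL entries ZFS stores
tar cpf /var/tmp/acltest.tar testfile
mkdir restore && cd restore
tar xpf /var/tmp/acltest.tar
ls -v testfile                                   # compare with the original listing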

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Swap ZFS pool disks to another host hardware

2009-01-15 Thread JZ
[OMG, sorry, I cannot resist]


Hi Nikhil, so you were playing?

another Zhou thing is that -
we like playful chic


goodnight
z
  - Original Message - 
  From: Nikhil 
  To: Sanjeev 
  Cc: zfs-discuss@opensolaris.org 
  Sent: Thursday, January 15, 2009 5:20 AM
  Subject: Re: [zfs-discuss] Swap ZFS pool disks to another host hardware


  Actually yes.  I figured it out while reading other archive posts in 
zfs-discuss :-)

  Thanks Sanjeev :-)


  On Thu, Jan 15, 2009 at 3:29 PM, Sanjeev sanjeev.bagew...@sun.com wrote:

Nikhil,

Comments inline...


On Thu, Jan 15, 2009 at 02:09:16PM +0530, Nikhil wrote:
 Hi,

 I am running a Solaris 10 box on a v20z with 11/06 release. It has got ZFS
 pool configured on S1 storage box with 3 146gb disks (it has got lot of
 data)
 I am planning to upgrade the machine to the new hardware with the new
 Solaris 10  release of 8/07 and to the new hardware of X2200M2.

 I am wondering will there be any thing that I need to take care of before
 putting the S1 storage box onto the new hardware X2200 pulling from the 
old
 hardware, meaningly do I need to sync any data before the switch of the
 system to the new hardware (essentially switch of the storage to be 
plugged
 to the new hardware).

 Do I need to do any kind of sync of zfs metadata?


So, from what I understand you have the entire pool built out of disks
from the same S1 box. In that case all you need to do is :
- Export the pool on the old box : zpool export poolname
- Connect the S1 to the new machine
- Import the pool : zpool import poolname

Hope that helps.

Thanks and regards,
Sanjeev



 If I simply pull out the S1 storage box(on which the zfs is configured 
for a
 long time now with data) and plug it to the new hardware, will the OS with
 new release detect the pool as it is ? and also have the data on it? or 
Will
 it detect them as the new disks to be configured again from scratch? I do
 not want to lose any data.

 Kindly suggest what is to be done here.

 Thanks,
 Nikhil






--


  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Swap ZFS pool disks to another host hardware

2009-01-15 Thread JZ
Thank you!
This is now a real open storage discussion!

really goodnight now
cheers,
z



- Original Message - 
From: Tomas Ögren st...@acc.umu.se
To: JZ j...@excelsioritsolutions.com
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, January 15, 2009 5:36 AM
Subject: Re: [zfs-discuss] Swap ZFS pool disks to another host hardware


On 15 January, 2009 - JZ sent me these 7,9K bytes:

 [OMG, sorry, I cannot resist]

Please do.

 Hi Nikhi, so you were playing?

Please stop sending random crap to this list. Keep it about ZFS, not
Beer/sake/whatever comes to your mind. It's not a random chat channel.

And stop attaching large .wma files.

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs backups

2009-01-15 Thread Joerg Schilling
satya g...@zmanda.com wrote:

 Any update on star ability to backup ZFS ACLs? Any idea if we can use pax 
 command to backup ZFS acls? will -p option of pax utility do the trick? 

I am looking for people who like to discuss the archive format for ZFS ACLs and
for extended attribute files for star.

Please choose one of the following mailing lists:

star-disc...@opensolaris.org    The mailing list for Solaris integration

see http://mail.opensolaris.org/mailman/listinfo/star-discuss


or

star-develop...@lists.berlios.de    The general developer mailing list

See: http://developer.berlios.de/mail/?group_id=9


Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Swap ZFS pool disks to another host hardware

2009-01-15 Thread Nikhil
Actually yes.  I figured it out while reading other archive posts in
zfs-discuss :-)

Thanks Sanjeev :-)

On Thu, Jan 15, 2009 at 3:29 PM, Sanjeev sanjeev.bagew...@sun.com wrote:

 Nikhil,

 Comments inline...

 On Thu, Jan 15, 2009 at 02:09:16PM +0530, Nikhil wrote:
  Hi,
 
  I am running a Solaris 10 box on a v20z with 11/06 release. It has got
 ZFS
  pool configured on S1 storage box with 3 146gb disks (it has got lot of
  data)
  I am planning to upgrade the machine to the new hardware with the new
  Solaris 10  release of 8/07 and to the new hardware of X2200M2.
 
  I am wondering will there be any thing that I need to take care of before
  putting the S1 storage box onto the new hardware X2200 pulling from the
 old
  hardware, meaningly do I need to sync any data before the switch of the
  system to the new hardware (essentially switch of the storage to be
 plugged
  to the new hardware).
 
  Do I need to do any kind of sync of zfs metadata?

 So, from what I understand you have the entire pool built out of disks
 from the same S1 box. In that case all you need to do is :
 - Export the pool on the old box : zpool export poolname
 - Connect the S1 to the new machine
 - Import the pool : zpool import poolname

 Hope that helps.

 Thanks and regards,
 Sanjeev

 
  If I simply pull out the S1 storage box(on which the zfs is configured
 for a
  long time now with data) and plug it to the new hardware, will the OS
 with
  new release detect the pool as it is ? and also have the data on it? or
 Will
  it detect them as the new disks to be configured again from scratch? I do
  not want to lose any data.
 
  Kindly suggest what is to be done here.
 
  Thanks,
  Nikhil


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Swap ZFS pool disks to another host hardware

2009-01-15 Thread Tomas Ögren
On 15 January, 2009 - JZ sent me these 7,9K bytes:

 [OMG, sorry, I cannot resist]

Please do.

 Hi Nikhi, so you were playing?

Please stop sending random crap to this list. Keep it about ZFS, not
Beer/sake/whatever comes to your mind. It's not a random chat channel.

And stop attaching large .wma files.

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Using ZFS for replication

2009-01-15 Thread Ian Mather
Fairly new to ZFS. I am looking to replicate data between two thumper boxes.
Found quite a few articles about using zfs incremental snapshot send/receive. 
Just a cheeky question to see if anyone has anything working in a live 
environment and is happy to share the scripts, to save me reinventing the wheel. 
Thanks in advance.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lackluster ZFS performance trials using various ZIL and L2ARC configurations...

2009-01-15 Thread Will Murnane
On Thu, Jan 15, 2009 at 02:36, Gray Carper gcar...@umich.edu wrote:
 In the third test, we rebuilt the ZFS pool with the ZIL on a 32GB SSD and
 the L2ARC on four 80GB SSDs.
An obvious question: what SSDs are these?  Where did you get them?
Many, many consumer-level MLC SSDs have controllers by JMicron (also
known for their lousy sata controllers, BTW) which cause stalling of
all I/O under certain fairly common conditions (see [1]).  Spending
the cash for an SLC drive (such as the Intel X-25S) may solve the
problem.

Will

[1]: http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=8
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Using ZFS for replication

2009-01-15 Thread Ahmed Kamal
You might want to look at AVS for realtime replication
http://www.opensolaris.org/os/project/avs/
However, I have had huge performance hits after enabling that. The
replicated volume is almost 10% the speed of normal ones

On Thu, Jan 15, 2009 at 1:28 PM, Ian Mather ian.mat...@northtyneside.gov.uk
 wrote:

 Fairly new to ZFS. I am looking to replicate data between two thumper
 boxes.
 Found quite a few articles about using zfs incremental snapshot
 send/receive. Just a cheeky question to see if anyone has anything working
 in a live environment and are happy to share the scripts,  save me
 reinventing the wheel. thanks in advance.
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lackluster ZFS performance trials using various ZIL and L2ARC configurations...

2009-01-15 Thread Gray Carper
Hey there, Will! Thanks for the quick reply and the link.

And: Oops! Yes - the SSD models would probably be useful information. ;) The
32GB SSD is an Intel X-25E (SLC). The 80GB SSDs are Intel X-25M (MLC). If
MLC drives can be naughty, perhaps we should try an additional test: keep
the 80GB SSDs out of the chassis, but leave the 32GB SSD in.

-Gray

On Thu, Jan 15, 2009 at 10:00 PM, Will Murnane will.murn...@gmail.comwrote:

 On Thu, Jan 15, 2009 at 02:36, Gray Carper gcar...@umich.edu wrote:
  In the third test, we rebuilt the ZFS pool with the ZIL on a 32GB SSD and
  the L2ARC on four 80GB SSDs.
 An obvious question: what SSDs are these?  Where did you get them?
 Many, many consumer-level MLC SSDs have controllers by JMicron (also
 known for their lousy sata controllers, BTW) which cause stalling of
 all I/O under certain fairly common conditions (see [1]).  Spending
 the cash for an SLC drive (such as the Intel X-25S) may solve the
 problem.

 Will

 [1]: http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=8




-- 
Gray Carper
MSIS Technical Services
University of Michigan Medical School
gcar...@umich.edu  |  skype:  graycarper  |  734.418.8506
http://www.umms.med.umich.edu/msis/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lackluster ZFS performance trials using various ZIL and L2ARC configurations...

2009-01-15 Thread Gray Carper
D'oh - I take that back. Upon re-reading, I expect that you weren't
indicting MLC drives generally, just the JMicron-controlled ones. It looks
like we aren't suffering from those, though.

-Gray

On Thu, Jan 15, 2009 at 11:12 PM, Gray Carper gcar...@umich.edu wrote:

 Hey there, Will! Thanks for the quick reply and the link.

 And: Oops! Yes - the SSD models would probably be useful information. ;
 The 32GB SSD is an Intel X-25E (SLC). The 80GB SSDs are Intel X-25M (MLC).
 If MLC drives can be naughty, perhaps we should try an additional test: keep
 the 80GB SSDs out of the chassis, but leave the 32GB SSD in.

 -Gray


 On Thu, Jan 15, 2009 at 10:00 PM, Will Murnane will.murn...@gmail.comwrote:

 On Thu, Jan 15, 2009 at 02:36, Gray Carper gcar...@umich.edu wrote:
  In the third test, we rebuilt the ZFS pool with the ZIL on a 32GB SSD
 and
  the L2ARC on four 80GB SSDs.
 An obvious question: what SSDs are these?  Where did you get them?
 Many, many consumer-level MLC SSDs have controllers by JMicron (also
 known for their lousy sata controllers, BTW) which cause stalling of
 all I/O under certain fairly common conditions (see [1]).  Spending
 the cash for an SLC drive (such as the Intel X-25S) may solve the
 problem.

 Will

 [1]: http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=8




 --
 Gray Carper
 MSIS Technical Services
 University of Michigan Medical School
 gcar...@umich.edu  |  skype:  graycarper  |  734.418.8506
 http://www.umms.med.umich.edu/msis/




-- 
Gray Carper
MSIS Technical Services
University of Michigan Medical School
gcar...@umich.edu  |  skype:  graycarper  |  734.418.8506
http://www.umms.med.umich.edu/msis/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-15 Thread Jim Klimov
Is it possible to create a (degraded) zpool with placeholders specified instead
of actual disks (parity or mirrors)? This is possible in Linux mdadm (the 
"missing" keyword), so I kinda hoped this could be done in Solaris, but didn't manage to.

Usecase scenario: 

I have a single server (or home workstation) with 4 HDD bays, sold with 2 
drives.
Initially the system was set up with a ZFS mirror for data slices. Now we got 2 
more drives and want to replace the mirror with a larger RAIDZ2 set (say I 
don't 
want a RAID10 which is trivial to make). 

Technically I think that it should be possible to force creation of a degraded
raidz2 array with two actual drives and two missing drives. Then I'd copy data
from the old mirror pool to the new degraded raidz2 pool (zfs send | zfs recv),
destroy the mirror pool and attach its two drives to repair the raidz2 pool.

While obviously not an enterprise approach, this is useful while expanding
home systems when I don't have a spare tape backup to dump my files on it 
and restore afterwards.

I think it's an (intended?) limitation in zpool command itself, since the kernel
can very well live with degraded pools.

//Jim
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on partitions

2009-01-15 Thread Jim Klimov
For the sake of curiosity, is it safe to have components of two different ZFS 
pools on the same drive, with and without HDD write cache turned on?

How will ZFS itself behave, would it turn on the disk cache if the two imported 
pools co-own the drive?

An example is a multi-disk system like mine which had UFS-mirrored root and 
a ZFS-raidz2 for data, then was upgraded to a ZFS-mirrored root with another 
ZFS pool for data (partially since grub doesn't do ZFS roots on zfs-raidz*).

PS: for a system with 1 or 2 drives, is there any preference to either layout:
1) sliced with 2 zpools (root and data)
2) sliced with one zpool for all
3) whole-disk with 1 zpool for all (is that supported by GRUB for boot disks at 
all?)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-15 Thread Richard Elling
Jim Klimov wrote:
 Is it possible to create a (degraded) zpool with placeholders specified 
 instead
 of actual disks (parity or mirrors)? This is possible in linux mdadm 
 (missing 
 keyword), so I kinda hoped this can be done in Solaris, but didn't manage to.

 Usecase scenario: 

 I have a single server (or home workstation) with 4 HDD bays, sold with 2 
 drives.
 Initially the system was set up with a ZFS mirror for data slices. Now we got 
 2 
 more drives and want to replace the mirror with a larger RAIDZ2 set (say I 
 don't 
 want a RAID10 which is trivial to make). 

 Technically I think that it should be possible to force creation of a degraded
 raidz2 array with two actual drives and two missing drives. Then I'd copy data
 from the old mirror pool to the new degraded raidz2 pool (zfs send | zfs 
 recv),
 destroy the mirror pool and attach its two drives to repair the raidz2 pool.

 While obviously not an enterprise approach, this is useful while expanding
 home systems when I don't have a spare tape backup to dump my files on it 
 and restore afterwards.
   

I would say it is definitely not a recommended approach for those who
love their data, whether enterprise or not.  But my opinion is really a
result of our environment at Sun (or any systems vendor).  Being here
blinds us to some opportunities. Please file an RFE at
http://bugs.opensolaris.org
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-15 Thread Tomas Ögren
On 15 January, 2009 - Jim Klimov sent me these 1,3K bytes:

 Is it possible to create a (degraded) zpool with placeholders specified 
 instead
 of actual disks (parity or mirrors)? This is possible in linux mdadm 
 (missing 
 keyword), so I kinda hoped this can be done in Solaris, but didn't manage to.
 
 Usecase scenario: 
 
 I have a single server (or home workstation) with 4 HDD bays, sold with 2 
 drives.
 Initially the system was set up with a ZFS mirror for data slices. Now we got 
 2 
 more drives and want to replace the mirror with a larger RAIDZ2 set (say I 
 don't 
 want a RAID10 which is trivial to make). 
 
 Technically I think that it should be possible to force creation of a degraded
 raidz2 array with two actual drives and two missing drives. Then I'd copy data
 from the old mirror pool to the new degraded raidz2 pool (zfs send | zfs 
 recv),
 destroy the mirror pool and attach its two drives to repair the raidz2 pool.
 
 While obviously not an enterprise approach, this is useful while expanding
 home systems when I don't have a spare tape backup to dump my files on it 
 and restore afterwards.
 
 I think it's an (intended?) limitation in zpool command itself, since the 
 kernel
 can very well live with degraded pools.

You can fake it..

kalv:/tmp# mkfile 64m realdisk1
kalv:/tmp# mkfile 64m realdisk2
kalv:/tmp# mkfile -n 64m fakedisk1
kalv:/tmp# mkfile -n 64m fakedisk2
kalv:/tmp# ls -la real* fake*
-rw--T 1 root root 67108864 2009-01-15 17:02 fakedisk1
-rw--T 1 root root 67108864 2009-01-15 17:02 fakedisk2
-rw--T 1 root root 67108864 2009-01-15 17:02 realdisk1
-rw--T 1 root root 67108864 2009-01-15 17:02 realdisk2
kalv:/tmp# du real* fake*
6   realdisk1
6   realdisk2
133 fakedisk1
133 fakedisk2


In reality, those realdisk* should be pointing at real disks, but
fakedisk* should still point at sparse mkfile's with the same size as
your real disks (300GB or whatever).

kalv:/tmp# zpool create blah raidz2 /tmp/realdisk1 /tmp/realdisk2 
/tmp/fakedisk1 /tmp/fakedisk2
kalv:/tmp# zpool status blah
  pool: blah
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
blahONLINE   0 0 0
  raidz2ONLINE   0 0 0
/tmp/realdisk1  ONLINE   0 0 0
/tmp/realdisk2  ONLINE   0 0 0
/tmp/fakedisk1  ONLINE   0 0 0
/tmp/fakedisk2  ONLINE   0 0 0

errors: No known data errors

Ok, so it's created fine. Let's accidentally introduce some problems..


kalv:/tmp# rm /tmp/fakedisk1
kalv:/tmp# rm /tmp/fakedisk2
kalv:/tmp# zpool scrub blah
kalv:/tmp# zpool status blah
  pool: blah
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: scrub completed after 0h0m with 0 errors on Thu Jan 15 17:03:38
2009
config:

NAMESTATE READ WRITE CKSUM
blahDEGRADED 0 0 0
  raidz2DEGRADED 0 0 0
/tmp/realdisk1  ONLINE   0 0 0
/tmp/realdisk2  ONLINE   0 0 0
/tmp/fakedisk1  UNAVAIL  0 0 0  cannot open
/tmp/fakedisk2  UNAVAIL  0 0 0  cannot open

errors: No known data errors


Still working.

At this point, you can start filling blah with data. Then after a while,
let's bring in the other real disks:

kalv:/tmp# mkfile 64m realdisk3
kalv:/tmp# mkfile 64m realdisk4
kalv:/tmp# zpool replace blah /tmp/fakedisk1 /tmp/realdisk3
kalv:/tmp# zpool replace blah /tmp/fakedisk2 /tmp/realdisk4
kalv:/tmp# zpool status blah
  pool: blah
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Thu Jan 15 17:04:31 2009
config:

NAMESTATE READ WRITE CKSUM
blahONLINE   0 0 0
  raidz2ONLINE   0 0 0
/tmp/realdisk1  ONLINE   0 0 0
/tmp/realdisk2  ONLINE   0 0 0
/tmp/realdisk3  ONLINE   0 0 0
/tmp/realdisk4  ONLINE   0 0 0


Of course, try it out a bit before doing it for real.

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on partitions

2009-01-15 Thread Roch
Tim writes:
  On Tue, Jan 13, 2009 at 6:26 AM, Brian Wilson bfwil...@doit.wisc.eduwrote:
  
  
   Does creating ZFS pools on multiple partitions on the same physical drive
   still run into the performance and other issues that putting pools in 
   slices
   does?
  
  
  
  Is zfs going to own the whole drive or not?  The *issue* is that zfs will
  not use the drive cache if it doesn't own the whole disk since it won't know
  whether or not it should be flushing cache at any given point in time.
  

  It could cause corruption if you had UFS and zfs on the same disk.
  

Let me correct a few things. ZFS unconditionally flushes the
write caches when it needs to, and owning a drive or not is
not important for the consistency of ZFS.

If ZFS owns a disk it will enable the write cache on the
drive, but I'm not positive this has a great performance
impact today. It used to, but that was before we had a proper NCQ
implementation. Today I don't know that it helps much. That
is because we always flush the cache when consistency
requires it.

The performance issue of using a drive for multiple unrelated
consumers (ZFS & UFS) is that, if both are active at the
same time, this will defeat the I/O scheduling smarts
implemented in ZFS. Rather than have data streaming to some
physical location of the rust, the competition of UFS for
I/O will cause extra head movement.

-r

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Using ZFS for replication

2009-01-15 Thread Greg Mason
zfs-auto-snapshot (SUNWzfs-auto-snapshot) is what I'm using. Only trick 
is that on the other end, we have to manage our own retention of the 
snapshots we send to our offsite/backup boxes.

zfs-auto-snapshot can handle the sending of snapshots as well.

We're running this in OpenSolaris 2008.11 (snv_100).

Another use I've seen is using zfs-auto-snapshot to take and manage 
snapshots on both ends, using rsync to replicate the data, but that's 
less than ideal for most folks...
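
If you end up scripting it by hand instead of letting zfs-auto-snapshot drive 
it, the core incremental send/receive loop is roughly this (pool, dataset, 
snapshot, and host names are placeholders):

# one-time full copy of the dataset to the backup thumper
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backuphost zfs receive -F backup/data

# afterwards, periodically ship only the changes since the previous snapshot
zfs snapshot tank/data@2009-01-15
zfs send -i @base tank/data@2009-01-15 | ssh backuphost zfs receive backup/data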

-Greg

Ian Mather wrote:
 Fairly new to ZFS. I am looking to replicate data between two thumper boxes.
 Found quite a few articles about using zfs incremental snapshot send/receive. 
 Just a cheeky question to see if anyone has anything working in a live 
 environment and are happy to share the scripts,  save me reinventing the 
 wheel. thanks in advance.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-15 Thread Jim Klimov
Thanks Tomas, I haven't checked yet, but your workaround seems feasible.

I've posted an RFE and referenced your approach as a workaround.
That's nearly what zpool should do under the hood, and perhaps can be done 
temporarily with a wrapper script to detect min(physical storage sizes)  ;)

//Jim
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] a Min Wang person emailed me for free knowledge

2009-01-15 Thread Kees Nuyt
On Wed, 14 Jan 2009 22:40:19 -0500, JZ
j...@excelsioritsolutions.com wrote:

ok, you open folks are really .
just one more, and I hope someone replies so we can save some open time.

[snip]

JZ, would you please be so kind to refrain from including
any attachments in your postings to our beloved
zfs-discuss@opensolaris.org , especially large, binary ones?

They are not welcome here, and I'm pretty sure I'm not the
only one with that opinion.

Thanks in advance for your cooperation.

Regards,
-- 
  (  Kees Nuyt
  )
c[_]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] a Min Wang person emailed me for free knowledge

2009-01-15 Thread JZ
[last 5 minutes on my lunch, just to say thank you and sorry]

Yes, I was wondering how the first one even made it to the list.
None of those emails with large attachments should have been approved by the 
mail server policy.

And I feel bad that I tested the server with some bad text and those got 
through.

best,
z


- Original Message - 
From: Kees Nuyt k.n...@zonnet.nl
To: zfs-discuss@opensolaris.org
Sent: Thursday, January 15, 2009 3:15 PM
Subject: Re: [zfs-discuss] a Min Wang person emailed me for free knowledge


 On Wed, 14 Jan 2009 22:40:19 -0500, JZ
 j...@excelsioritsolutions.com wrote:

ok, you open folks are really .
just one more, and I hope someone replies so we can save some open time.

 [snip]

 JZ, would you please be so kind to refrain from including
 any attachments in your postings to our beloved
 zfs-discuss@opensolaris.org , especially large, binary ones?

 They are not welcome here, and I'm pretty sure I'm not the
 only one with that opinion.

 Thanks in advance for your cooperation.

 Regards,
 -- 
  (  Kees Nuyt
  )
 c[_]
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenSolaris better Than Solaris10u6 with requards to ARECA Raid Card

2009-01-15 Thread Charles Wright
I've tried putting this in /etc/system and rebooting
set zfs:zfs_vdev_max_pending = 16

Are we sure that number equates to a scsi command?
Perhaps I should set it to 8 and see what happens.
(I have 256 scsi commands I can queue across 16 drives)

I still got these error messages in the log.

Jan 15 15:29:40 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many 
outstanding commands (257 > 256)
Jan 15 15:29:40 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many 
outstanding commands (256 > 256)
Jan 15 15:29:43 yoda last message repeated 73 times

I watched iostat -x a good bit and usually it is 0.0 or 0.1

r...@yoda:~# iostat -x
 extended device statistics 
devicer/sw/s   kr/s   kw/s wait actv  svc_t  %w  %b 
sd0   0.00.00.00.0  0.0  0.00.0   0   0 
sd1   0.42.0   22.3   13.5  0.1  0.0   39.3   1   2 
sd2   0.52.0   25.6   13.5  0.1  0.0   40.4   2   2 
sd3   0.3   21.5   18.7  334.4  0.7  0.1   40.1  13  15 
sd4   0.3   21.6   18.9  334.4  0.7  0.1   40.6  13  15 
sd5   0.3   21.5   19.2  334.4  0.7  0.1   39.7  12  15 
sd6   0.3   21.6   18.6  334.4  0.7  0.2   40.4  13  15 
sd7   0.3   21.6   18.7  334.4  0.7  0.1   40.3  12  15 
sd8   0.3   21.6   18.7  334.4  0.7  0.2   40.1  13  15 
sd9   0.3   21.5   18.5  334.5  0.7  0.1   40.0  12  14 
sd10  0.3   21.4   18.9  333.6  0.7  0.1   40.2  12  14 
sd11  0.3   21.4   18.9  333.6  0.7  0.1   39.3  12  15 
sd12  0.3   21.4   19.4  333.6  0.7  0.2   40.0  13  15 
sd13  0.3   21.4   18.9  333.6  0.7  0.1   40.3  13  15 
sd14  0.3   21.4   19.0  333.6  0.7  0.1   38.8  12  14 
sd15  0.3   21.4   19.1  333.6  0.7  0.1   39.6  12  14 
sd16  0.3   21.4   18.7  333.6  0.7  0.1   39.3  12  14
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] recent ZFS Admin Guide/troubleshooting wiki updates

2009-01-15 Thread Cindy . Swearingen

Hi everyone,

Recent ZFS admin guide updates/troubleshooting wiki include the
following updates.

1. Revised root pool recovery steps.

The process has been changed slightly due to a recently uncovered zfs
receive problem. You can create a recursive root pool snapshot as was
previously documented. In the revised process, you must send and receive
individual root pool datasets from the recursive root pool snapshot.
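
In outline the change looks something like this (dataset and host names are
examples only; the linked guides have the authoritative steps):

# the recursive root pool snapshot is taken as before
zfs snapshot -r rpool@backup

# but each root pool dataset is now sent and received individually,
# rather than as a single recursive stream
zfs send rpool/ROOT/s10be@backup | ssh remotehost zfs receive -F backuppool/s10be
zfs send rpool/dump@backup       | ssh remotehost zfs receive -F backuppool/dump
zfs send rpool/swap@backup       | ssh remotehost zfs receive -F backuppool/swap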

The revised Solaris 10 10/08 root pool recovery steps are here:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery

The revised SXCE root pool recovery steps are in the ZFS Admin Guide,
here:

http://opensolaris.org/os/community/zfs/docs/

2. Enhanced disk label information.

Added more disk labeling information or provided links to more info in
the boot/install chapter and pools chapter in the ZFS Admin Guide, here:

http://opensolaris.org/os/community/zfs/docs/

On an x86 based system, I retested both an initial install of a ZFS root 
file system and also attached a disk to create a mirrored root pool and
found no problems.

3. Added new text about the pax command's inability to translate
NFSv4-style ACLs in the snapshots chapter in the ZFS Admin Guide, here:

http://opensolaris.org/os/community/zfs/docs/


I will push these versions to docs.sun.com as soon as is feasible.

Thanks,

Cindy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on partitions

2009-01-15 Thread Casper . Dik


The performance issue of using a drive for multiple unrelated
consumers (ZFS & UFS) is that, if both are active at the
same time, this will defeat the I/O scheduling smarts
implemented in ZFS. Rather than have data streaming to some
physical location of the rust, the competition of UFS for
I/O will cause extra head movement.


Solaris still makes sure that blocks are sorted, whether they
come from UFS or from ZFS.

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenSolaris better Than Solaris10u6 with requards to ARECA Raid Card

2009-01-15 Thread Richard Elling
Charles Wright wrote:
 I've tried putting this in /etc/system and rebooting
 set zfs:zfs_vdev_max_pending = 16

You can change this on the fly, without rebooting.
See the mdb command at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurrency.29
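
For example, a sketch of the kind of mdb invocation that guide describes,
using the same value you put in /etc/system:

# read the current value from the running kernel
echo zfs_vdev_max_pending/D | mdb -k

# set it to 16 on the fly (immediate, but not persistent across reboots)
echo zfs_vdev_max_pending/W0t16 | mdb -kw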

 Are we sure that number equates to a scsi command?

yes, though actually it pertains to all devices used by ZFS,
even if they are not SCSI devices.

 Perhaps I should set it to 8 and see what happens.
 (I have 256 scsi commands I can queue across 16 drives)
 
 I still got these error messages in the log.
 
 Jan 15 15:29:40 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many 
 outstanding commands (257 > 256)
 Jan 15 15:29:40 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many 
 outstanding commands (256 > 256)
 Jan 15 15:29:43 yoda last message repeated 73 times
 
 I watched iostat -x a good bit and usually it is 0.0 or 0.1

iostat -x, without any intervals, shows the average since boot
time, which won't be useful.  Try iostat -x 1 to see 1-second
samples while your load is going.

 r...@yoda:~# iostat -x
  extended device statistics 
 devicer/sw/s   kr/s   kw/s wait actv  svc_t  %w  %b 
 sd0   0.00.00.00.0  0.0  0.00.0   0   0 
 sd1   0.42.0   22.3   13.5  0.1  0.0   39.3   1   2 
 sd2   0.52.0   25.6   13.5  0.1  0.0   40.4   2   2 
 sd3   0.3   21.5   18.7  334.4  0.7  0.1   40.1  13  15 
 sd4   0.3   21.6   18.9  334.4  0.7  0.1   40.6  13  15 
 sd5   0.3   21.5   19.2  334.4  0.7  0.1   39.7  12  15 
 sd6   0.3   21.6   18.6  334.4  0.7  0.2   40.4  13  15 
 sd7   0.3   21.6   18.7  334.4  0.7  0.1   40.3  12  15 
 sd8   0.3   21.6   18.7  334.4  0.7  0.2   40.1  13  15 
 sd9   0.3   21.5   18.5  334.5  0.7  0.1   40.0  12  14 
 sd10  0.3   21.4   18.9  333.6  0.7  0.1   40.2  12  14 
 sd11  0.3   21.4   18.9  333.6  0.7  0.1   39.3  12  15 
 sd12  0.3   21.4   19.4  333.6  0.7  0.2   40.0  13  15 
 sd13  0.3   21.4   18.9  333.6  0.7  0.1   40.3  13  15 
 sd14  0.3   21.4   19.0  333.6  0.7  0.1   38.8  12  14 
 sd15  0.3   21.4   19.1  333.6  0.7  0.1   39.6  12  14 
 sd16  0.3   21.4   18.7  333.6  0.7  0.1   39.3  12  14

NB 40ms average service time (svc_t) is considered very slow
for modern disks.  You should look at this on the intervals
to get a better idea of the svc_t under load.  You want to see
something more like 10ms, or less, for good performance on HDDs.
  -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-15 Thread Jonny Gerold
Hello,
I was hoping that this would work:
http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror

I have 4x(1TB) disks, one of which is filled with 800GB of data (that I 
can't delete or back up somewhere else).

 r...@fsk-backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 
 /dev/lofi/1
 r...@fsk-backup:~# zpool list
 NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
 ambry   592G   132K   592G 0%  ONLINE  -
I get this (592GB???). When I bring the virtual device offline, the pool becomes 
degraded, yet I won't be able to copy my data over. I was wondering if 
anyone else had a solution.

Thanks, Jonny

P.S. Please let me know if you need any extra information.
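
One likely explanation, offered as a guess: a raidz vdev is sized by its
smallest member, so if the sparse file behind /dev/lofi/1 was much smaller
than the real 1TB disks, the whole 4-way raidz1 gets capped accordingly, which
would produce a number like 592G. A rough sketch of the workaround with a
full-size sparse backing file (sizes and the spare disk name are illustrative;
adjust to match the real 1TB drives exactly):

# sparse file the same size as the real disks, so it costs almost no space
mkfile -n 931g /var/tmp/fake1tb
lofiadm -a /var/tmp/fake1tb          # prints the lofi device, e.g. /dev/lofi/1

# build the raidz1 from the three empty disks plus the full-size placeholder
zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1
zpool offline ambry /dev/lofi/1      # run degraded while the 800GB is copied in

# once the data is copied and the fourth real disk is freed up:
# zpool replace ambry /dev/lofi/1 c5t2d0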

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on partitions

2009-01-15 Thread Will Murnane
On Thu, Jan 15, 2009 at 21:51,  casper@sun.com wrote:


The performance issue of using a drive for multiple unrelated
consumers (ZFS & UFS) is that, if both are active at the
same time, this will defeat the I/O scheduling smarts
implemented in ZFS. Rather than have data streaming to some
physical location of the rust, the competition of UFS for
I/O will cause extra head movement.


 Solaris still makes sure that blocks are sorted, whether they
 come from UFS or from ZFS.
Yes, but consider the common case where UFS and ZFS are on separate
slices of the disk.  Then writes that ZFS thinks will be contiguous
aren't, because the disk has seeked (sought?) to the UFS slice, a long
way away.  Solaris will optimize this as much as possible, sure, but
there's nothing you can do to avoid moving the head back and forth
from one slice to the other.

There's only so far you can sort blocks 12 and 10**7 ;)

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Segregating /var in ZFS root/boot

2009-01-15 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

The text is in Spanish, but the article/commands are pretty verbose. Hope
you can find it useful.

My approach creates a new BE, to be able to recover if there is any problem.

Separar el /var de un Boot Enviroment en ZFS root/boot
(Splitting /var out of a Boot Environment on ZFS root/boot)
http://www.jcea.es/artic/sol10lu6zfs2.htm

- --
Jesus Cea Avion _/_/  _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:j...@jabber.org _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.8 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iQCVAwUBSW/L8plgi5GaxT1NAQJTzwP+ID4KUhJAJe5v+NBYCKyFhcElHXkAAaVh
OU9FxxKNo8hKpbH3Mm83FmZ8TgIGVF827BvA1IZGnokYet9NImNaw4ld/W9/LUQc
TE4dV51FyUi6nHTiuRuHEabplqDw0ughx7nW3lAqg8K8DI5bU/y+1WUHbEpxyqTx
s+4dVA9woWc=
=khBR
-END PGP SIGNATURE-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lackluster ZFS performance trials using various ZIL and L2ARC configurations...

2009-01-15 Thread Gray Carper
Hey, Eric!

Now things get complicated. ;) I was naively hoping to avoid revealing our
exact pool configuration, fearing that it might lead to lots of tangential
discussion, but I can see how it may be useful so that you have the whole
picture. Time for the big reveal, then...

Here's the exact line used for the baseline test...

create volume data raidz1 c3t600144F049471924d0
c3t600144F0494719D4d0 c3t600144F049471A5Fd0
c3t600144F049471A6Cd0 c3t600144F049471A82d0
c3t600144F049471A8Ed0

...the line for the 32GB SSD ZIL + 4x146GB SAS L2ARC test...

create volume data raidz1 c3t600144F049471924d0
c3t600144F0494719D4d0 c3t600144F049471A5Fd0
c3t600144F049471A6Cd0 c3t600144F049471A82d0
c3t600144F049471A8Ed0 cache c1t2d0 c1t3d0 c1t5d0 c1t6d0 log
c1t4d0

...the line for the 32GB SSD ZIL + 80GB SSD L2ARC...

create volume data raidz1 c3t600144F049471924d0
c3t600144F0494719D4d0 c3t600144F049471A5Fd0
c3t600144F049471A6Cd0 c3t600144F049471A82d0
c3t600144F049471A8Ed0 cache c1t7d0 c1t8d0 c1t9d0 c1t10d0 log
c1t4d0

Now I'm sure someone is asking, "What are those crazy
c3t600144F049471924d0, etc. pool devices?" They are iSCSI
targets. Our X4240 is the head node for virtualizing and aggregating six
Thumpers-worth of storage. Each X4500 has its own raidz2 pool that is
exported via 10GbE iSCSI, the X4240 collects them all with raidz1, and the
resulting pool is about 140TB.

To head off a few questions that might lead us astray: We have compelling
NAS use-cases for this, it does work, and it is surprisingly fault-tolerant
(for example: while under heavy load, we can reboot an entire iSCSI node
without losing client connections, data, etc).
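
For anyone curious how this kind of aggregation gets wired up with the pieces
available in b104, a rough sketch under assumptions (the pool layout, zvol
size, and addresses are invented; our actual configuration differs in detail):

# on each X4500 (target side): a raidz2 pool, carved into a zvol exported over iSCSI
zpool create thumper1 raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zfs create -V 20T thumper1/export
zfs set shareiscsi=on thumper1/export

# on the X4240 head node (initiator side): discover the six targets, then build
# the raidz1 pool on top of the resulting c3t...d0 devices
iscsiadm add discovery-address 192.168.10.11:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi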

Using the X25-E for the L2ARC, but having no separate ZIL, sounds like a
worthwhile test. Is 32GB large enough for a good L2ARC, though?

Thanks!
-Gray

On Fri, Jan 16, 2009 at 1:16 AM, Eric D. Mudama
edmud...@bounceswoosh.orgwrote:

 On Thu, Jan 15 at 15:36, Gray Carper wrote:

  Hey, all!

  Using iozone (with the sequential read, sequential write, random read,
 and
  random write categories), on a Sun X4240 system running OpenSolaris b104
  (NexentaStor 1.1.2, actually), we recently ran a number of relative
  performance tests using a few ZIL and L2ARC configurations (meant to try
  and uncover which configuration would be the best choice). I'd like to
  share the highlights with you all (without bogging you down with raw
 data)
  to see if anything strikes you.

  Our first (baseline) test used a ZFS pool which had a self-contained ZIL
  and L2ARC (i.e. not moved to other devices, the default configuration).
  Note that this system had both SSDs and SAS drive attached to the
  controller, but only the SAS drives were in use.


 Can you please provide the exact config, in terms of how the zpool was
 built?

   In the second test, we rebuilt the ZFS pool with the ZIL on a 32GB SSD
 and
  the L2ARC on four 146GB SAS drives. Random reads were significantly worse
  than the baseline, but all other categories were slightly better.


 In this case, ZIL on the X25-E makes sense for writes, but the SAS
 drives read slower than SSDs, so they're probably not the best L2ARC
 units unless you're using 7200RPM devices in your main zpool.

   In the third test, we rebuilt the ZFS pool with the ZIL on a 32GB SSD and
  the L2ARC on four 80GB SSDs. Sequential reads were better than the
  baseline, but all other categories were worse.


 I'm wondering if the single X25-E is not enough faster than the core
 pool, making a separate ZIL not worth it.

   In the fourth test, we rebuilt the ZFS pool with no separate ZIL, but
 with
  the L2ARC on four 146GB SAS drives. Random reads were significantly worse
  than the baseline and all other categories were about the same as the
  baseline.

  As you can imagine, we were disappointed. None of those configurations
  resulted in any significant improvements, and all of the configurations
  resulted in at least one category being worse. This was very much not
 what
  we expected.


 Have you tried using the X25-E as a L2ARC, keep the ZIL default, and
 use the SAS drives as your core pool?

 Or were you using X25-M devices as your core pool before?  How much
 data is in the zpool?


 --
 Eric D. Mudama
 edmud...@mail.bounceswoosh.org




-- 
Gray Carper
MSIS Technical Services
University of Michigan Medical School
gcar...@umich.edu  |  skype:  graycarper  |  734.418.8506
http://www.umms.med.umich.edu/msis/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-15 Thread JZ
Hi Jonny,
So far there are no Sun comments here or at the blog site; I guess your 
approach is considered good by the Sun folks.

I also noticed that the blog's hit count today is only 5.
If I tell my folks to visit the blog often, can they also do Chinese? Most 
of them are blogging in Chinese, not English, today. And how would 
non-China folks be able to visit the blog without getting hit by all the Chinese 
text?  So, if you would like more visitors, you would have to have a 
solution to deal with the Chinese traffic.

Just some thoughts if you are serious about global open storage.



[BTW, I see Zhang in the URL. That is the name I honor with Zhou. For that, 
if you need help, just let me know.]

Best,
张寒星


- Original Message - 
From: Jonny Gerold j...@thermeon.com
To: zfs-discuss@opensolaris.org
Sent: Thursday, January 15, 2009 5:20 PM
Subject: [zfs-discuss] 4 disk raidz1 with 3 disks...


 Hello,
 I was hoping that this would work:
 http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror

 I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
 cant delete/backup somewhere else)

 r...@fsk-backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0
 /dev/lofi/1
 r...@fsk-backup:~# zpool list
 NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
 ambry   592G   132K   592G 0%  ONLINE  -
 I get this (592GB???) I bring the virtual device offline, and it becomes
 degraded, yet I wont be able to copy my data over. I was wondering if
 anyone else had a solution.

 Thanks, Jonny

 P.S. Please let me know if you need any extra information.

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lackluster ZFS performance trials using various ZIL and L2ARC configurations...

2009-01-15 Thread Brendan Gregg - Sun Microsystems
G'Day Gray,

On Thu, Jan 15, 2009 at 03:36:47PM +0800, Gray Carper wrote:
 
Hey, all!
Using iozone (with the sequential read, sequential write, random read,
and random write categories), on a Sun X4240 system running
OpenSolaris b104 (NexentaStor 1.1.2, actually), we recently ran a
number of relative performance tests using a few ZIL and L2ARC
configurations (meant to try and uncover which configuration would be
the best choice). I'd like to share the highlights with you all
(without bogging you down with raw data) to see if anything strikes
you.
Our first (baseline) test used a ZFS pool which had a self-contained
ZIL and L2ARC (i.e. not moved to other devices, the default
configuration). Note that this system had both SSDs and SAS drives
attached to the controller, but only the SAS drives were in use.
In the second test, we rebuilt the ZFS pool with the ZIL on a 32GB SSD
and the L2ARC on four 146GB SAS drives. Random reads were
significantly worse than the baseline, but all other categories were
slightly better.
In the third test, we rebuilt the ZFS pool with the ZIL on a 32GB SSD
and the L2ARC on four 80GB SSDs. Sequential reads were better than the

The L2ARC trickle charges (especially since it feeds from random I/O, which
by nature has low throughput), and with 4 x 80GB of it online - you could be
looking at an 8 hour warmup, or longer.  How long did you run iozone for?

Also, the zfs recsize makes a difference for random I/O to the L2ARC - you
probably want it set to 8 Kbytes or so, before creating files.
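
For example (the dataset name below is just a placeholder):

  zfs set recordsize=8k tank/iozone
  # only files created after this point pick up the 8K recordsize;
  # files written earlier keep the recordsize they were created with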

... The L2ARC code shipped with the Sun Storage 7000 has had some performance
improvements that aren't in OpenSolaris yet, but will be soon.

Brendan

-- 
Brendan Gregg, Sun Microsystems Fishworks.http://blogs.sun.com/brendan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-15 Thread JZ
Beloved Jonny,

I am just like you.


There was a day when I was hungry and went to an interview for a sysadmin position.
They asked me: what is a protocol?
I could not give a definition, and they said, no, not qualified.

But they did not ask me about CICS and mainframe. Too bad.



baby, even if there is a day when you can break daddy's pride, you won't want 
to, I am sure.   ;-)

[if you want a solution, ask Orvar, I would guess he thinks on his own now, 
not baby no more, teen now...]

best,
z

- Original Message - 
From: Jonny Gerold j...@thermeon.com
To: JZ j...@excelsioritsolutions.com
Sent: Thursday, January 15, 2009 10:19 PM
Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...


Sorry that I broke your pride (all knowing) bubble by challenging you.
But you're just as stupid as I am since you did not give me a solution.
Find a solution, and I will rock with your Zhou style, otherwise you're
just like me :) I am in the U.S. Great weather...

Thanks, Jonny


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-15 Thread JZ
Hi James,
I have done nothing wrong. It was OK in my religion. Sue me if you care.


He asked for a solution to a ZFS problem.

I was calling for help, Zhou style.



All my C and Z and J folks, are we going to help Jonny or what???


darn!!!  Do I have to put down my other work to make a solution that may not 
be open?



best,
z





- Original Message - 
From: James C. McPherson james.mcpher...@sun.com
To: JZ j...@excelsioritsolutions.com
Sent: Thursday, January 15, 2009 10:35 PM
Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...



 Hello JZ,
 I fail to see what your email has to do with ZFS.

 I am also at a loss as to why you appear to think that
 it is acceptable to include public mailing lists on
 what are clearly personal emails.


 James C. McPherson
 --
 Senior Kernel Software Engineer, Solaris
 Sun Microsystems
 http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-15 Thread JZ
Very nice.
Ok.
If I don't see any post promising some help with Jonny's problem in 
the next 8 minutes --
I will go to Chinatown and get some commitment.
I will have that commitment in 48 hours and a working and tested 
blog site in 60 days.
But it will not be open.

Please, open folks, are you going to help Jonny or what?

Best,
z


- Original Message - 
From: JZ j...@excelsioritsolutions.com
To: James C. McPherson james.mcpher...@sun.com
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, January 15, 2009 10:42 PM
Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...


 Hi James,
 I have done nothing wrong. It was OK in my religion. Sue me if you care.


 He asked for a solution to a ZFS problem.

 I was calling for help, Zhou style.



 All my C and Z and J folks, are we going to help Jonny or what???


 darn!!!  Do I have to put down my other work to make a solution that may 
 not
 be open?



 best,
 z





 - Original Message - 
 From: James C. McPherson james.mcpher...@sun.com
 To: JZ j...@excelsioritsolutions.com
 Sent: Thursday, January 15, 2009 10:35 PM
 Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...



 Hello JZ,
 I fail to see what your email has to do with ZFS.

 I am also at a loss as to why you appear to think that
 it is acceptable to include public mailing lists on
 what are clearly personal emails.


 James C. McPherson
 --
 Senior Kernel Software Engineer, Solaris
 Sun Microsystems
 http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-15 Thread Wes Morgan
On Thu, 15 Jan 2009, Jonny Gerold wrote:

 Hello,
 I was hoping that this would work:
 http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror

 I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
 can't delete or back up somewhere else)

 r...@fsk-backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0
 /dev/lofi/1
 r...@fsk-backup:~# zpool list
 NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
 ambry   592G   132K   592G    0%  ONLINE  -
 I get this (592GB???). I bring the virtual device offline and it becomes
 degraded, yet I won't be able to copy my data over. I was wondering if
 anyone else had a solution.

 Thanks, Jonny

 P.S. Please let me know if you need any extra information.

Are you certain that you created the sparse file as the correct size? If I 
had to guess, it is only in the range of about 150GB. The smallest device 
size will limit the total size of your array. Try using this for your 
sparse file and recreating the raidz:

dd if=/dev/zero of=fakedisk bs=1k seek=976762584 count=0
lofiadm -a fakedisk
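
That seek value makes the sparse file roughly 976 million KB, i.e. the size
of a real 1TB drive. After that, the rest of the procedure would look
roughly like this (an untested sketch -- c5t2d0 below just stands for
whatever your data-filled fourth disk actually is):

zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1
zpool offline ambry /dev/lofi/1    # run degraded while you copy the 800GB in
# copy the data off the fourth disk into the pool, then reuse that disk:
zpool replace ambry /dev/lofi/1 c5t2d0
zpool status ambry                 # and wait for the resilver to finish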
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-15 Thread JZ
Thank you!


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (no subject)

2009-01-15 Thread JZ
Sorry, open folks, please, chatting on.

This is why my help cannot be provided very often.

Because the Zhou style of fighting is this –



Yes, if we have to step onto the battlefield, a Zhou will walk in front of 
the real troops and say: kill me, if you dare, and I am so sure, with my 
life on it, that you will not live beyond this fight.







Folks, please, if you know how to read Chinese, just for this one word.



士

A 士 does not need to be known, but cannot be challenged.



Best,

z

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SDXC and the future of ZFS

2009-01-15 Thread Eric D. Mudama
On Mon, Jan 12 at 10:00, casper@sun.com wrote:
My impression is not that other OS's aren't interested in ZFS, they
are, it's that the licensing restrictions limit native support to
Solaris, BSD, and OS-X.

If you wanted native support in Windows or Linux, it would require a
significant effort from Sun.


Why is that a problem for Windows?  Linux, yes, but if they want they can 
change that.

Who is "they"?

It's not a problem; it just is what it is.

The significant effort I am referring to is changes to the
licensing, which is a tricky endeavour as soon as you have
contributors instead of a contributor.

Doesn't really matter who changes it, or really whether anyone changes it at all.

--eric


-- 
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-15 Thread Jonathan
Tomas Ögren wrote:
 On 15 January, 2009 - Jim Klimov sent me these 1,3K bytes:
 
 Is it possible to create a (degraded) zpool with placeholders specified 
 instead
 of actual disks (parity or mirrors)? This is possible in linux mdadm 
 (missing 
 keyword), so I kinda hoped this can be done in Solaris, but didn't manage to.

 Usecase scenario: 

 I have a single server (or home workstation) with 4 HDD bays, sold with 2 
 drives.
 Initially the system was set up with a ZFS mirror for data slices. Now we 
 got 2 
 more drives and want to replace the mirror with a larger RAIDZ2 set (say I 
 don't 
 want a RAID10 which is trivial to make). 

 Technically I think that it should be possible to force creation of a 
 degraded
 raidz2 array with two actual drives and two missing drives. Then I'd copy 
 data
 from the old mirror pool to the new degraded raidz2 pool (zfs send | zfs 
 recv),
 destroy the mirror pool and attach its two drives to repair the raidz2 
 pool.

 While obviously not an enterprise approach, this is useful while expanding
 home systems when I don't have a spare tape backup to dump my files on it 
 and restore afterwards.

 I think it's an (intended?) limitation in zpool command itself, since the 
 kernel
 can very well live with degraded pools.
 
 You can fake it..

[snip command set]

Summary: yes, that actually works and I've done it, but it's very slow!

I essentially did this myself when I migrated a 4x 2-way-mirror pool to
two 4-disk raidz vdevs (4x 500GB and 4x 1.5TB).  I can say from experience
that it works, but since I used 2 sparse files to simulate 2 disks on a
single physical disk, performance suffered and the migration took a long
time.  IIRC it took over 2 days to transfer 2TB of data.  I used rsync; at
the time I either didn't know about or had forgotten about zfs
send/receive, which would probably work better.  It took a couple more
days to verify that everything transferred correctly with no bit rot
(rsync -c).
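
For reference, the fake-disk trick looks roughly like this (a sketch only,
not necessarily the exact command set snipped above -- the sizes and device
names are made up, it's untested, and as noted the copy will crawl if the
sparse files live on a disk that is also the source of the copy):

mkfile -n 1500g /var/tmp/fake1
mkfile -n 1500g /var/tmp/fake2
zpool create -f newpool raidz2 c1t2d0 c1t3d0 /var/tmp/fake1 /var/tmp/fake2
zpool offline newpool /var/tmp/fake1   # run degraded so the files stay empty
zpool offline newpool /var/tmp/fake2
# zfs send | zfs recv (or rsync) the old pool's data over, destroy it, then:
zpool replace newpool /var/tmp/fake1 c1t0d0
zpool replace newpool /var/tmp/fake2 c1t1d0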

I think Sun avoids making things like this too easy because, from a
business standpoint, it's easier just to spend the money on enough
hardware to do it properly, without the chance of data loss and the
extended downtime.  "Doesn't invest the time in" may be a better phrase
than "avoids", though.  I doubt Sun actually goes out of their way to
make things harder for people.

Hope that helps,
Jonathan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lackluster ZFS performance trials using various ZIL and L2ARC configurations...

2009-01-15 Thread Eric D. Mudama
On Fri, Jan 16 at  9:29, Gray Carper wrote:
   Using the X25-E for the L2ARC, but having no separate ZIL, sounds like a
   worthwhile test. Is 32GB large enough for a good L2ARC, though?

Without knowing much about ZFS internals, I'd just ask how your
average working data set compares to the sizes of the SSDs.

--eric


-- 
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Using ZFS for replication

2009-01-15 Thread Ashish Nabira

These are good articles explaining how to use ZFS for replication:


http://swik.net/MySQL/Planet+MySQL/ZFS+Replication+for+MySQL+data/ckjo2
http://www.markround.com/archives/38-ZFS-Replication.html
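
The core of most such setups is an incremental zfs send piped into zfs
receive on the remote box. A bare-bones sketch of that step (the pool,
dataset, snapshot, and host names are only examples, and it's untested):

# on the source: snapshot today, then send the delta since yesterday's snapshot
zfs snapshot tank/data@2009-01-15
zfs send -i tank/data@2009-01-14 tank/data@2009-01-15 | \
    ssh backuphost zfs receive -F backup/data
# once the receive succeeds, older snapshots can be pruned on both ends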

=
Free India Opensource India.
=
Thanks and regards;
Ashish Nabira
nab...@sun.com
http://sun.com
Mobile: 9845082183
=

On 15-Jan-09, at 11:01 PM, Greg Mason wrote:

zfs-auto-snapshot (SUNWzfs-auto-snapshot) is what I'm using. The only
trick is that on the other end, we have to manage our own retention of
the snapshots we send to our offsite/backup boxes.

zfs-auto-snapshot can handle the sending of snapshots as well.

We're running this in OpenSolaris 2008.11 (snv_100).

Another use I've seen is using zfs-auto-snapshot to take and manage
snapshots on both ends, using rsync to replicate the data, but that's
less than ideal for most folks...

-Greg

Ian Mather wrote:
Fairly new to ZFS. I am looking to replicate data between two Thumper
boxes. Found quite a few articles about using ZFS incremental snapshot
send/receive. Just a cheeky question to see if anyone has anything
working in a live environment and is happy to share the scripts, to
save me reinventing the wheel. Thanks in advance.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Using ZFS for replication

2009-01-15 Thread Ian Collins
Ian Mather wrote:
 Fairly new to ZFS. I am looking to replicate data between two Thumper boxes.
 Found quite a few articles about using ZFS incremental snapshot send/receive.
 Just a cheeky question to see if anyone has anything working in a live
 environment and is happy to share the scripts, to save me reinventing the
 wheel. Thanks in advance.
   
I have a tool that automates snapshots, replication and retention
between 3 Thumpers. 

The process works well with one show-stopping exception: toxic streams.
At least on Solaris 10, it's all too easy to produce incremental streams
that panic the receiving system.

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss