[zfs-discuss] ZFS with Veritas DMP?

2009-02-08 Thread Andras Spitzer
Hi,

I'm aware that when we're talking about multipathing on Solaris the preferred 
way is to use MPxIO, but I still have a question: do any of you have experience 
with ZFS on top of Veritas DMP?

Does it work? Is it supported? Any real-life experience or tests in this area?
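
To be concrete, what I have in mind is simply handing the DMP metanodes to
zpool instead of the native c#t#d# paths. A rough sketch, with made-up device
names (this is only how I imagine it would look, not something I have tested):

    # DMP metanodes normally live under /dev/vx/dmp (block) and /dev/vx/rdmp (raw)
    ls /dev/vx/dmp

    # hypothetical enclosure-based names; substitute whatever vxdisk list shows
    zpool create dmppool /dev/vx/dmp/emc0_0 /dev/vx/dmp/emc0_1
    zpool status dmppool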

Regards,
sendai


Re: [zfs-discuss] A question on non-consecutive disk failures

2009-02-08 Thread Sriram Narayanan
On Sun, Feb 8, 2009 at 1:56 AM, Peter Tribble peter.trib...@gmail.com wrote:
[snip]
 No. That quote is part of the discussion of ditto blocks.

 See the following:

 http://blogs.sun.com/bill/entry/ditto_blocks_the_amazing_tape


Thank you, Peter.

-- Sriram


Re: [zfs-discuss] A question on non-consecutive disk failures

2009-02-08 Thread Sriram Narayanan
On Sun, Feb 8, 2009 at 1:29 AM, Frank Cusack fcus...@fcusack.com wrote:

 what mirror?  there is no mirror.  you have a raidz.  you can have 1
 disk failure.

Thanks for the correction. I was thinking RAIDZ, but typed mirror. I
have only RAIDZs on my servers.


 - if disks a and c fail, then I will be able to read from disks b
 and d. Is this understanding correct ?

 no.  you will lose all the data if 2 disks fail.

 The part of the slides you are referring to is in reference to ditto
 blocks, which allow failure of PARTS of a SINGLE disk.


Thanks. I've started reading through the ZFS documentation.
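
(A side note on the ditto-block point above: as I understand it, the number of
ditto copies kept for user data can also be raised per dataset via the copies
property. A minimal sketch, with a hypothetical dataset name:)

    # keep two copies of every data block in this dataset; metadata
    # already gets extra ditto copies by default
    zfs set copies=2 tank/docs
    zfs get copies tank/docs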


-- Sriram


Re: [zfs-discuss] New RAM disk from ACARD might be interesting

2009-02-08 Thread Orvar Korvar
Too bad. I will follow this thread; others and I hope you find a solution, and 
we would like to hear how this setup turns out.


Re: [zfs-discuss] New RAM disk from ACARD might be interesting

2009-02-08 Thread Tim
On Sat, Feb 7, 2009 at 1:55 AM, Will Murnane will.murn...@gmail.com wrote:

 On Thu, Jan 29, 2009 at 23:00, Will Murnane will.murn...@gmail.com
 wrote:
  *sigh*  The 9010b is ordered.  Ground shipping, unfortunately, but
  eventually I'll post my impressions of it.
 Well, the drive arrived today.  It's as nice-looking as it appears in
 the pictures, and building a zpool out of it alone makes for some
 speedy I/O.

 That said, I can't test it nearly as well as I'd hoped because a
 feature I had assumed was present isn't: you can't take a pool with a
 log device of size A and replace the log device with one of size B < A.
 So my pool, with its 8 GB log device, can't use the ACARD device
 unless I find 8GB of memory somewhere.  I am willing to play with
 development bits if someone can get me ZFS that knows how to remove
 log devices; I have a backup of all my data.

 I did find [1], which suggests that one can cause a log device to fail
 and then cause it to be removed from the pool, but I'd rather not try
 this---backups or no, restoring from them takes a long time.  I did
 try booting without the log device plugged in, but this caused the
 system to go into maintenance mode and not boot.  Any other
 suggestions of ways around this?

 Will

 [1]: http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view
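
(For context, the step that fails above would presumably look something like
this, with hypothetical device names:)

    # attempt to swap the existing 8 GB log device for the smaller ACARD device
    zpool replace tank c4t0d0 c5t0d0
    # expected to fail along the lines of:
    #   cannot replace c4t0d0 with c5t0d0: device is too small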


http://www.newegg.com/Product/Product.aspx?Item=N82E16820134652

I wouldn't think grabbing 8GB of memory would be a big deal after dropping that
much on the controller?


--Tim


[zfs-discuss] Drive labels and Solaris 10U6 ZFS install

2009-02-08 Thread Bob Friesenhahn
Yesterday I gained my first experience with installing Solaris 10U6 on 
a SPARC workstation with ZFS boot.  I had one non-Sun disk (c0t1d0) 
which was previously used in a Solaris 10U5 install and already had a 
ZFS partition.  Another non-Sun disk (c0t0d0) was brand new.  It was 
my intention to clean slate the two drives and use them as a ZFS 
mirror.

Using the text installer, I was unceremoniously dumped to a shell 
prompt after the installer declared that both of my drives were either 
broken or needed to be labeled.  There was no message printed as to 
how I could restart the installer after the labels were repaired.  So 
I ran format and successfully labeled the first drive with an SMI 
label.  By default, format was not able to label the second drive such 
that the installer would use it.  After the first drive had been 
labeled, the installer behaved a bit better and at least told me that 
I could use 'install-solaris' to continue with the installation.

I ended up installing on just the first drive, with the expectation 
that I would complete the mirror once the problem with the label on 
the second drive was figured out.

Today I learned that the problem is that if a drive has been used with 
ZFS before, it will carry an EFI label, and the Solaris 10U6 installer 
rejects EFI labels out of hand even if the user intends to overwrite 
them.  This is very unfriendly behavior.

By following the instructions at 
http://sunblog.mbrannigan.com/2007/03/removing-efi-label.html I have 
been able to remove the nasty EFI label and apply an SMI label, so 
hopefully this disk can now be used successfully.
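
For anyone else who hits this, the procedure boils down to roughly the
following (format -e is the key part, since it offers the label-type choice;
the exact prompts may vary):

    # expert mode exposes the SMI/EFI label menu
    format -e c0t0d0
    format> label
      [0] SMI Label
      [1] EFI Label
    Specify Label type[1]: 0
    format> quit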

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



[zfs-discuss] Max size of log device?

2009-02-08 Thread Vincent Fox
So I have read in the ZFS Wiki:

#  The minimum size of a log device is the same as the minimum size of device 
in pool, which is 64 Mbytes. The amount of in-play data that might be stored on 
a log device is relatively small. Log blocks are freed when the log transaction 
(system call) is committed.
# The maximum size of a log device should be approximately 1/2 the size of 
physical memory because that is the maximum amount of potential in-play data 
that can be stored. For example, if a system has 16 Gbytes of physical memory, 
consider a maximum log device size of 8 Gbytes. 

What is the downside of an over-large log device?

Let's say I have a 3310 with 10 older 72-gig 10K RPM drives and put them in a 
RAIDZ2.  Then I throw an entire 72-gig 15K RPM drive in as a slog.
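
In zpool terms I'm picturing something like this (device names invented, and
prtconf is only there to sanity-check the memory guidance quoted above):

    # how much physical memory do we actually have?
    prtconf | grep Memory

    # ten 72-gig 10K drives as raidz2, plus the lone 15K drive as a dedicated slog
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
                             c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
                 log c2t0d0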

What is behind this maximum size recommendation?


Re: [zfs-discuss] Max size of log device?

2009-02-08 Thread Neil Perrin


On 02/08/09 11:50, Vincent Fox wrote:
 So I have read in the ZFS Wiki:
 
 #  The minimum size of a log device is the same as the minimum size of device 
 in
 pool, which is 64 Mbytes. The amount of in-play data that might be stored on 
 a log
 device is relatively small. Log blocks are freed when the log transaction 
 (system call)
 is committed.
 # The maximum size of a log device should be approximately 1/2 the size of 
 physical
 memory because that is the maximum amount of potential in-play data that can 
 be stored.
 For example, if a system has 16 Gbytes of physical memory, consider a maximum 
 log device
 size of 8 Gbytes. 
 
 What is the downside of over-large log device?

- Wasted disk space.

 
 Let's say I have  a 3310 with 10 older 72-gig 10K RPM drives and RAIDZ2 them.
 Then I throw an entire 72-gig 15K RPM drive in as slog.
 
 What is behind this maximum size recommendation?

- Just guidance on what might be needed in the most stressed environment.
Personally, I've never seen anything close to the maximum actually used,
but it's theoretically possible.

Neil.


Re: [zfs-discuss] Max size of log device?

2009-02-08 Thread Andrew Gabriel
Neil Perrin wrote:
 On 02/08/09 11:50, Vincent Fox wrote:
   
 So I have read in the ZFS Wiki:

 #  The minimum size of a log device is the same as the minimum size of 
 device in
 pool, which is 64 Mbytes. The amount of in-play data that might be stored on 
 a log
 device is relatively small. Log blocks are freed when the log transaction 
 (system call)
 is committed.
 # The maximum size of a log device should be approximately 1/2 the size of 
 physical
 memory because that is the maximum amount of potential in-play data that can 
 be stored.
 For example, if a system has 16 Gbytes of physical memory, consider a 
 maximum log device
 size of 8 Gbytes. 

 What is the downside of over-large log device?
 

 - Wasted disk space.

   
 Let's say I have  a 3310 with 10 older 72-gig 10K RPM drives and RAIDZ2 them.
 Then I throw an entire 72-gig 15K RPM drive in as slog.

 What is behind this maximum size recommendation?
 

 - Just guidance on what might be used in the most stressed environment.
 Personally I've never seen anything like the maximum used but it's
 theoretically possible. 
   

Just thinking out loud here, but given such a disk (i.e. one which is 
bigger than required), I might be inclined to slice it up, creating a 
slice for the log at the outer edge of the disk. The outer edge of the 
disk has the highest data rate, and by effectively constraining the head 
movement to only a portion of the whole disk, average seek times should 
be significantly improved (not to mention fewer seeks due to more 
data/cylinder at the outer edge). The log can't be using the write 
cache, so the normal penalty for not using the write cache when not 
giving the whole disk to ZFS is irrelevant in this case. By allocating, 
say, a 32GB slice from the outer edge of a 72GB disk, you should get 
really good performance. If you turn out not to need anything like 32GB, 
then making it smaller will make it even faster (depending on how ZFS 
allocates space on a log device, which I don't know). Obviously, don't 
use the rest of the disk in order to achieve this performance.
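
A rough sketch of what I mean, assuming the disk is c2t0d0 and slice 0 has been
sized to ~32GB starting at cylinder 0 (the outer edge) in format's partition
menu:

    # carve slice 0 from the outer cylinders with format/partition, then:
    zpool add tank log c2t0d0s0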

-- 
Andrew



Re: [zfs-discuss] terabyte or terror-byte disk drives

2009-02-08 Thread Vincent Fox
You were just lucky before and unlucky now.

I had a PC back in the Pentium-133 days go CRASH because I moved it too roughly 
while the drive was spinning.

I have moved many PCs in my life with the drives spinning and had no problems, 
but I don't COUNT on it and I avoid it if humanly possible.  Don't people do it 
all the time with laptops?  It makes me cringe a bit when I see someone toss a 
laptop onto a desk with a thud while it's still active.


Re: [zfs-discuss] Max size of log device?

2009-02-08 Thread Bob Friesenhahn
On Sun, 8 Feb 2009, Andrew Gabriel wrote:

 Just thinking out loud here, but given such a disk (i.e. one which is
 bigger than required), I might be inclined to slice it up, creating a
 slice for the log at the outer edge of the disk. The outer edge of the
 disk has the highest data rate, and by effectively constraining the head
 movement to only a portion of the whole disk, average seek times should
 be significantly improved (not to mention fewer seeks due to more
 data/cylinder at the outer edge). The log can't be using the write

This is good thinking, but the ZFS implementors were likely already aware 
of it, so ZFS probably already does what you want without this extra work.

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] Max size of log device?

2009-02-08 Thread Vincent Fox
Thanks, I think I get it now.

Do you think having the log on a 15K RPM drive, with the main pool composed of 
10K RPM drives, will show a worthwhile improvement?  Or am I chasing a few 
percentage points?

I don't have money for new hardware or an SSD.  I'm just recycling some old 
components here and there, and there are a few 15K RPM drives on the shelf 
that I thought I could throw strategically into the mix.

The application will likely be NFS serving.  I might use the same setup for a 
list-serve system, which does have local storage for archived emails, etc.


Re: [zfs-discuss] Max size of log device?

2009-02-08 Thread Mattias Pantzare
On Sun, Feb 8, 2009 at 22:12, Vincent Fox vincent_b_...@yahoo.com wrote:
 Thanks I think I get it now.

 Do you think having log on a 15K RPM drive with the main pool composed of 10K 
 RPM drives will show worthwhile improvements?  Or am I chasing a few 
 percentage points?

 I don't have money for new hardware or an SSD.  I'm just recycling some old
 components here and there, and there are a few 15K RPM drives on the shelf
 that I thought I could throw strategically into the mix.

 Application will likely be NFS serving.  Might use same setup for a 
 list-serve system which does have local storage for archived emails etc.

The 3310 has a battery-backed write cache, which is faster than any disk.
You might get more from the cache if you use it only for the log.

The RPM of the disks used for the log is not important when you have a
RAM write cache in front of the disk.


Re: [zfs-discuss] Max size of log device?

2009-02-08 Thread David Magda
On Feb 8, 2009, at 16:12, Vincent Fox wrote:

 Do you think having log on a 15K RPM drive with the main pool  
 composed of 10K RPM drives will show worthwhile improvements?  Or am  
 I chasing a few percentage points?

Another important question is whether it would be sufficient to 
purchase only one 15K disk, or whether two should be purchased and 
mirrored. What would happen if the device that the ZIL lives on 
suddenly goes away or starts returning checksum errors?

While an SSD is theoretically less likely to fail in some respects (no  
mechanical parts), what happens if it fails?

How important is mirroring on log devices?
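
(Mechanically, at least, mirroring the log looks straightforward; something
like the following, with invented device names:)

    # add a mirrored log vdev in one step
    zpool add tank log mirror c3t0d0 c3t1d0

    # or turn an existing single log device into a mirror later
    zpool attach tank c3t0d0 c3t1d0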

Another question comes to mind: if you have multiple pools, can they 
all share one log device? For example, say you have twelve disks in a 
JBOD and assign disks 1-4 to mypool0, disks 5-8 to mypool1, and disks 
9-12 to mypool2. Can you then have one SSD allocated as the log device 
for all the pools?


Re: [zfs-discuss] terabyte or terror-byte disk drives

2009-02-08 Thread Brian Hechinger
 I've always felt squeamish when I had to move boxes with spinning  
 disks,
 or when I had to watch someone else do it.  Thanks for justifying my
 paranoia... and good luck with the replacement drives.

This reminds me of a story.  Many years ago a friend of mine had to move
some servers from one office to another.  There was a SPARCstation 2 running
SunOS4 that had an uptime of 3 or 4 years or so.  In order to not ruin the
uptime, he unplugged the UPS from the wall and hauled the SS2, UPS and some
external disk enclosures to his car, and drove to the new office listening 
to the UPS scream at him about the missing AC power the whole time.

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)


Re: [zfs-discuss] A question on non-consecutive disk failures

2009-02-08 Thread Sanjeev
Sriram,

On Sun, Feb 08, 2009 at 12:04:22AM +0530, Sriram Narayanan wrote:
 From the presentation ZFS - The last word in filesystems, Page 22
   In a multi-disk pool, ZFS survives any non-consecutive disk failures
 
 Questions:
 If I have a 3 disk RAIDZ with disks A, B and C, then:
 - if disk b fails, then will I be able to continue to read data if
 disks A and C are still available ?

Yes. With RAIDZ, ZFS can sustain one disk failure.

 If I have a 4 disk RAIDZ with disks A, B, C, and D, then:
 - if disks a and b fail, then I won't be able to read from the mirror
 any more. Is this understanding correct ?

Since this is a RAIDZ configuration, it can sustain at most one disk failure.
So the answer is no.

RAIDZ is similar to RAID-5 with respect to redundancy.


 - if disks a and c fail, then I will be able to read from disks b
 and d. Is this understanding correct ?

Same as above.

However, if you had configured it as RAIDZ-2, it could sustain up to 2 disk
failures.


The other option would be to configure it as a mirrored pool (two 2-way
mirrors). That too can sustain 2 disk failures, as long as each mirror loses
at most one disk.
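
For example, with four disks (placeholder device names), the two layouts would
be created like this:

    # raidz2: any two of the four disks may fail
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0

    # two 2-way mirrors: two failures are survivable only if they land in
    # different mirrors
    zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0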

Hope that helps.

Thanks and regards,
Sanjeev

 
 -- Sriram


Re: [zfs-discuss] Nested ZFS file systems are not visible over an NFS export

2009-02-08 Thread Sanjeev
Sriram,

sharenfs is an inheritable property.
It looks like, in your case, you set sharenfs=on on datapool/vmwarenfs
after the underlying filesystems were created.

If you had set sharenfs=on before creating the underlying filesystems,
the property would have been inherited by the children, and they would
have been shared as well.

I guess that would help in your case.
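
A rough sketch, using the dataset names from your mail (VMImage03 is just a
hypothetical new child):

    # share the parent; children with no local setting inherit the property
    zfs set sharenfs=on datapool/vmwarenfs
    zfs create datapool/vmwarenfs/VMImage03
    zfs get -r sharenfs datapool/vmwarenfs

    # if a child was ever given its own local value, clear it so it
    # inherits again
    zfs inherit sharenfs datapool/vmwarenfs/VMImage01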

Hope that helps.

Thanks and regards,
Sanjeev

On Sun, Feb 08, 2009 at 12:43:26AM +0530, Sriram Narayanan wrote:
 An update:
 
 I'm using VMWare ESX 3.5 and VMWare ESXi 3.5 as the NFS clients.
 
  I'm using zfs set sharenfs=on datapool/vmwarenfs to make that zfs file
 system accessible over NFS.
 
 -- Sriram
 On Sun, Feb 8, 2009 at 12:07 AM, Sriram Narayanan sriram...@gmail.com wrote:
  Hello:
 
  I have the following zfs structure
  datapool/vmwarenfs - which is available over NFS
 
  I have some ZFS filesystems as follows:
  datapool/vmwarenfs/basicVMImage
  datapool/vmwarenfs/basicvmim...@snapshot
  datapool/vmwarenfs/VMImage01- zfs cloned from basicvmim...@snapshot
  datapool/vmwarenfs/VMImage02- zfs cloned from basicvmim...@snapshot
 
  These are accessible via NFS as /datapool/vmwarenfs
 with the subfolders VMImage01 and VMImage02
 
  What's happening right now:
  a. When I connect to datapool/vmwarenfs over NFS,
  - the contents of /datapool/vmwarenfs are visible and usable
  - VMImage01 and VMImage02 are appearing as empty sub-folders at the
  paths /datapool/vmwarenfs/VMImage01 and /datapool/vmwarenfs/VMImage02,
   but their contents are not visible.
 
   b. When I explicitly share VMImage01 and VMImage02 via NFS, then
  /datapool/vmwarenfs/VMImage01 - usable as a separate NFS share
  /datapool/vmwarenfs/VMImage02 - usable as a separate NFS share
 
  What I'd like to have:
  - attach over NFS to /datapool/vmwarenfs
  - view the ZFS filesystems VMImage01 and VMImage02 as sub folders
  under /datapool/vmwarenfs
 
  If needed, I can move VMImage01 and VMImage02 from datapool/vmwarenfs,
  and even re-create them elsewhere.
 
  -- Sriram
 