Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-22 Thread Gary Mills
On Sat, Aug 21, 2010 at 06:36:37PM -0400, Toby Thain wrote:
 
 On 21-Aug-10, at 3:06 PM, Ross Walker wrote:
 
 On Aug 21, 2010, at 2:14 PM, Bill Sommerfeld bill.sommerf...@oracle.com wrote:
 
 On 08/21/10 10:14, Ross Walker wrote:
 ...
 Would I be better off forgoing resiliency for simplicity, putting all my faith into the Equallogic to handle data resiliency?
 
 IMHO, no; the resulting system will be significantly more brittle.
 
 Exactly how brittle I guess depends on the Equallogic system.
 
 If you don't let zfs manage redundancy, Bill is correct: it's a more fragile system that *cannot* self-heal data errors in the (deep) stack. Quantifying the increased risk is a question that Richard Elling could probably answer :)

That's because ZFS has no way to handle a large class of storage designs, specifically those where raw storage and disk management are provided by reliable SAN devices.

-- 
-Gary Mills--Unix Group--Computer and Network Services-


Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-22 Thread Richard Elling
On Aug 21, 2010, at 3:36 PM, Toby Thain wrote:
 On 21-Aug-10, at 3:06 PM, Ross Walker wrote:
 On Aug 21, 2010, at 2:14 PM, Bill Sommerfeld bill.sommerf...@oracle.com wrote:
 On 08/21/10 10:14, Ross Walker wrote:
 ...
 Would I be better off forgoing resiliency for simplicity, putting all my 
 faith into the Equallogic to handle data resiliency?
 
 IMHO, no; the resulting system will be significantly more brittle.
 
 Exactly how brittle I guess depends on the Equallogic system.
 
 If you don't let zfs manage redundancy, Bill is correct: it's a more fragile system that *cannot* self-heal data errors in the (deep) stack. Quantifying the increased risk is a question that Richard Elling could probably answer :)

The risk of data loss isn't very different between ZFS and a hardware RAID array. The difference is the impact of the loss. What we've observed over the years is that older-generation or budget RAID arrays that do not verify data will happily pass corrupted data up the stack. ZFS will detect this, providing two important control points:

1. The silent error is silent no more -- the corrupted file will be noted in the logs and in the output of zpool status -xv. This is a huge win for the recovery effort because you can quickly determine the scope of the damage and plan a recovery from backups, if needed.
 
2. The policy on how to present this information to the application requesting the data and to the systems administrator is somewhat flexible. See the failmode property in the zpool(1m) man page and the sketch below.
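
A minimal sketch of both control points, assuming a hypothetical pool named tank:

   # Report only pools with problems, including the names of damaged files:
   zpool status -xv

   # Inspect or set how the pool reacts when an error exceeds its
   # redundancy (failmode is one of wait, continue, or panic):
   zpool get failmode tank
   zpool set failmode=continue tank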

If I may put it in order of preference:
+ Minimum effort: protect your data using regular backups
+ Better: add RAID of some sort to protect against whole-disk failure
+ Best: add ZFS for data management, end-to-end error detection, copies, etc.

 -- richard

-- 
OpenStorage Summit, October 25-27, San Francisco
http://nexenta-summit2010.eventbrite.com

Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com






[zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Ross Walker

I'm planning on setting up an NFS server for our ESXi hosts, using a virtualized Solaris or Nexenta host to serve ZFS over NFS.

The storage I have available is provided by Equallogic boxes over 10 GbE iSCSI.

I am trying to figure out the best way to provide both performance and resiliency, given that the Equallogic provides the redundancy.

Since I am hoping to provide a 2 TB datastore, I am thinking of carving out either three 1 TB LUNs or six 500 GB LUNs that will be RDM'd to the storage VM, and within the storage server setting up either one raidz vdev with the 1 TB LUNs (fewer RDMs) or two raidz vdevs with the 500 GB LUNs (more fine-grained expandability, working in 1 TB increments).
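
For illustration, a sketch of the two layouts, assuming the RDM'd LUNs appear in the storage VM as the hypothetical devices c2t0d0 through c2t5d0:

   # Option 1: one raidz vdev over three 1 TB LUNs (~2 TB usable):
   zpool create datastore raidz c2t0d0 c2t1d0 c2t2d0

   # Option 2: two raidz vdevs over six 500 GB LUNs (~2 TB usable):
   zpool create datastore raidz c2t0d0 c2t1d0 c2t2d0 \
                          raidz c2t3d0 c2t4d0 c2t5d0

   # Option 2 can later grow in ~1 TB steps by adding another 3-LUN vdev:
   zpool add datastore raidz c2t6d0 c2t7d0 c2t8d0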

Given the 2 GB of write-back cache on the Equallogic, I think the integrated ZIL would work fine (it needs benchmarking, though).

The vmdk files themselves won't be backed up (more data than I can store), just the essential data contained within, so I would think resiliency is important here.

My questions are these:

Does this setup make sense?

Would I be better off forgoing resiliency for simplicity, putting all my faith 
into the Equallogic to handle data resiliency?

Will this setup perform? Does anybody have experience with this type of setup?

-Ross




Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Bill Sommerfeld

On 08/21/10 10:14, Ross Walker wrote:

I am trying to figure out the best way to provide both performance and resiliency, given that the Equallogic provides the redundancy.


(I have no specific experience with Equallogic; the following is just 
generic advice)


Every bit stored in zfs is checksummed at the block level; zfs will not 
use data or metadata if the checksum doesn't match.


zfs relies on redundancy (storing multiple copies) to provide 
resilience; if it can't independently read the multiple copies and pick 
the one it likes, it can't recover from bitrot or failure of the 
underlying storage.


if you want resilience, zfs must be responsible for redundancy.

You imply having multiple storage servers.  The simplest thing to do is 
export one large LUN from each of two different storage servers, and 
have ZFS mirror them.
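
As a minimal sketch, assuming the two exported LUNs show up as the hypothetical devices c3t0d0 and c4t0d0:

   # Mirror one LUN from each storage server, so zfs holds two
   # independently readable copies and can self-heal checksum failures:
   zpool create tank mirror c3t0d0 c4t0d0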


While this reduces the available space, depending on your workload, you 
can make some of it back by enabling compression.


And, given sufficiently recent software, and sufficient memory and/or 
ssd for l2arc, you can enable dedup.
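
Both are one-line property changes on the hypothetical pool above:

   # Compression often wins back some of the space lost to mirroring:
   zfs set compression=on tank

   # Dedup needs recent bits plus enough RAM and/or L2ARC to hold the
   # dedup table; its effectiveness is workload-dependent:
   zfs set dedup=on tank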


Of course, the effectiveness of both dedup and compression depends on 
your workload.



Would I be better off forgoing resiliency for simplicity, putting all my faith 
into the Equallogic to handle data resiliency?


IMHO, no; the resulting system will be significantly more brittle.


Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Ross Walker
On Aug 21, 2010, at 2:14 PM, Bill Sommerfeld bill.sommerf...@oracle.com wrote:

 On 08/21/10 10:14, Ross Walker wrote:
 I am trying to figure out the best way to provide both performance and resiliency, given that the Equallogic provides the redundancy.
 
 (I have no specific experience with Equallogic; the following is just generic 
 advice)
 
 Every bit stored in zfs is checksummed at the block level; zfs will not use 
 data or metadata if the checksum doesn't match.

I understand that much, and it is the reason I picked ZFS for persistent data storage.

 zfs relies on redundancy (storing multiple copies) to provide resilience; if 
 it can't independently read the multiple copies and pick the one it likes, it 
 can't recover from bitrot or failure of the underlying storage.

It can't auto-recover, but it will report the failure so the data can be restored from backup. But since the vmdk files are too big to back up...

 if you want resilience, zfs must be responsible for redundancy.

It must have redundancy, though it doesn't necessarily need full control.

 You imply having multiple storage servers.  The simplest thing to do is 
 export one large LUN from each of two different storage servers, and have ZFS 
 mirror them.

Well... You need to know that the multiple storage servers act as a single pool with tiered storage levels (SAS 15K in RAID10 and SATA in RAID6), and LUNs are auto-tiered across these based on demand performance. So a pool of mirrors won't really provide any more performance than a raidz (same physical RAID), and a raidz will only waste 33% as opposed to 50%.

 While this reduces the available space, depending on your workload, you can 
 make some of it back by enabling compression.
 
 And, given sufficiently recent software, and sufficient memory and/or ssd for 
 l2arc, you can enable dedup.

The host is a blade server with no room for SSDs, but if SSD investment is 
needed in the future I can add an SSD Equallogic box to the storage pool.

 Of course, the effectiveness of both dedup and compression depends on your 
 workload.
 
 Would I be better off forgoing resiliency for simplicity, putting all my 
 faith into the Equallogic to handle data resiliency?
 
 IMHO, no; the resulting system will be significantly more brittle.

Exactly how brittle I guess depends on the Equallogic system.

-Ross



Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Richard Elling
On Aug 21, 2010, at 10:14 AM, Ross Walker wrote:
 I'm planning on setting up an NFS server for our ESXi hosts and plan on using 
 a virtualized Solaris or Nexenta host to serve ZFS over NFS.

Please follow the joint EMC+NetApp best practices for VMware ESX servers.
The recommendations apply to any NFS implementation for ESX.

 The storage I have available is provided by Equallogic boxes over 10 GbE iSCSI.
 
 I am trying to figure out the best way to provide both performance and resiliency, given that the Equallogic provides the redundancy.
 
 Since I am hoping to provide a 2 TB datastore, I am thinking of carving out either three 1 TB LUNs or six 500 GB LUNs that will be RDM'd to the storage VM, and within the storage server setting up either one raidz vdev with the 1 TB LUNs (fewer RDMs) or two raidz vdevs with the 500 GB LUNs (more fine-grained expandability, working in 1 TB increments).
 
 Given the 2 GB of write-back cache on the Equallogic, I think the integrated ZIL would work fine (needs benchmarking though).

This should work fine.

 The vmdk files themselves won't be backed up (more data than I can store), just the essential data contained within, so I would think resiliency would be important here.
 
 My questions are these.
 
 Does this setup make sense?

Yes, it is perfectly reasonable.

 Would I be better off forgoing resiliency for simplicity, putting all my 
 faith into the Equallogic to handle data resiliency?

I don't have much direct experience with Equallogic, but I would expect that they do a reasonable job of protecting data, or they would be out of business.

You can also use the copies parameter to set extra redundancy for the important files. ZFS will also tell you if corruption is found in a single file, so that you can recover just the file and not be forced to recover everything else. I think this fits into your backup strategy.
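
A minimal sketch, assuming a hypothetical dataset tank/essential that holds the important files:

   # Store two copies of every block in this dataset, even on a single
   # LUN, so zfs can repair a corrupted copy from the surviving one:
   zfs set copies=2 tank/essential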

 Will this setup perform? Anybody with experience in this type of setup?

Many people are quite happy with RAID arrays and still take advantage of 
the features of ZFS: checksums, snapshots, clones, send/receive, VMware
integration, etc. The decision of where to implement data protection (RAID) 
is not as important as the decision to protect your data.  

My advice: protect your data.
 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com





Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Ross Walker
On Aug 21, 2010, at 4:40 PM, Richard Elling rich...@nexenta.com wrote:

 On Aug 21, 2010, at 10:14 AM, Ross Walker wrote:
 I'm planning on setting up an NFS server for our ESXi hosts and plan on 
 using a virtualized Solaris or Nexenta host to serve ZFS over NFS.
 
 Please follow the joint EMC+NetApp best practices for VMware ESX servers.
 The recommendations apply to any NFS implementation for ESX.

Thanks, I'll check that out! Always looking for advice on how best to tweak NFS 
for ESX.

I have a current ZFS over NFS implementation, but on direct attached storage 
using Sol10. I will be interested to see how Nexenta compares.

 The storage I have available is provided by Equallogic boxes over 10 GbE iSCSI.
 
 I am trying to figure out the best way to provide both performance and resiliency, given that the Equallogic provides the redundancy.
 
 Since I am hoping to provide a 2 TB datastore, I am thinking of carving out either three 1 TB LUNs or six 500 GB LUNs that will be RDM'd to the storage VM, and within the storage server setting up either one raidz vdev with the 1 TB LUNs (fewer RDMs) or two raidz vdevs with the 500 GB LUNs (more fine-grained expandability, working in 1 TB increments).
 
 Given the 2 GB of write-back cache on the Equallogic, I think the integrated ZIL would work fine (needs benchmarking though).
 
 This should work fine.
 
 The vmdk files themselves won't be backed up (more data than I can store), just the essential data contained within, so I would think resiliency would be important here.
 
 My questions are these.
 
 Does this setup make sense?
 
 Yes, it is perfectly reasonable.
 
 Would I be better off forgoing resiliency for simplicity, putting all my 
 faith into the Equallogic to handle data resiliency?
 
 I don't have much direct experience with Equallogic, but I would expect that they do a reasonable job of protecting data, or they would be out of business.
 
 You can also use the copies parameter to set extra redundancy for the important files. ZFS will also tell you if corruption is found in a single file, so that you can recover just the file and not be forced to recover everything else. I think this fits into your backup strategy.

I thought of the copies parameter, but figured a raidz laid on top of the storage pool would only waste 33% instead of 50%. And since this sits on top of a conceptually single RAID volume, the IOPS bottleneck won't come into play: any single drive's IOPS will be equal to the IOPS of the array as a whole.
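
The space arithmetic behind the 33% vs. 50% figures, taking three 1 TB LUNs as the example:

   raidz over 3 LUNs : ~2 TB usable of 3 TB raw -- ~33% lost to parity
   copies=2          : ~1.5 TB usable of 3 TB raw -- 50% lost, since
                       every block is written twice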

 Will this setup perform? Anybody with experience in this type of setup?
 
 Many people are quite happy with RAID arrays and still take advantage of 
 the features of ZFS: checksums, snapshots, clones, send/receive, VMware
 integration, etc. The decision of where to implement data protection (RAID) 
 is not as important as the decision to protect your data.  
 
 My advice: protect your data.

Always good advice.

So I suppose this just confirms my analysis.

Thanks,

-Ross



Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Richard Elling
On Aug 21, 2010, at 2:21 PM, Ross Walker wrote:

 On Aug 21, 2010, at 4:40 PM, Richard Elling rich...@nexenta.com wrote:
 
 On Aug 21, 2010, at 10:14 AM, Ross Walker wrote:
 I'm planning on setting up an NFS server for our ESXi hosts and plan on 
 using a virtualized Solaris or Nexenta host to serve ZFS over NFS.
 
 Please follow the joint EMC+NetApp best practices for VMware ESX servers.
 The recommendations apply to any NFS implementation for ESX.
 
 Thanks, I'll check that out! Always looking for advice on how best to tweak 
 NFS for ESX.

In this case, they are ESX-over-NFS recommendations; you will want to change the settings on the ESX server itself:
http://www.vmware.com/files/pdf/partners/netapp_esx_best_practices_whitepaper.pdf
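
For illustration only: on classic ESX these kinds of advanced settings are changed with esxcfg-advcfg from the service console. The option names below are the usual NFS-tuning knobs, but treat the values as placeholders and take the real recommendations from the whitepaper:

   # Raise the number of NFS datastores a host may mount (default is 8):
   esxcfg-advcfg -s 64 /NFS/MaxVolumes

   # Grow the TCP/IP heap to accommodate the additional NFS mounts:
   esxcfg-advcfg -s 30 /Net/TcpipHeapSize
   esxcfg-advcfg -s 120 /Net/TcpipHeapMax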

 -- richard

-- 
OpenStorage Summit, October 25-27, San Francisco
http://nexenta-summit2010.eventbrite.com

Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com






Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Toby Thain


On 21-Aug-10, at 3:06 PM, Ross Walker wrote:

On Aug 21, 2010, at 2:14 PM, Bill Sommerfeld bill.sommerf...@oracle.com wrote:



On 08/21/10 10:14, Ross Walker wrote:
...
Would I be better off forgoing resiliency for simplicity, putting all my faith into the Equallogic to handle data resiliency?


IMHO, no; the resulting system will be significantly more brittle.


Exactly how brittle I guess depends on the Equallogic system.


If you don't let zfs manage redundancy, Bill is correct: it's a more fragile system that *cannot* self-heal data errors in the (deep) stack. Quantifying the increased risk is a question that Richard Elling could probably answer :)


--Toby



-Ross
