Re: [zfs-discuss] Directory is not accessible

2012-11-23 Thread Sami Tuominen
How can one remove a directory containing corrupt files, or a corrupt file
itself? For me, rm just gives an input/output error.

Sami

Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Sami Tuominen
 
 Unfortunately there aren't any snapshots.
 The version of zpool is 15. Is it safe to upgrade that?
 Is zpool clear -F supported or of any use here?

The only thing that will be of use to restore your data will be a backup.

To forget about the lost data and make the error message go away, simply rm the 
bad directory (and/or its parent).
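
A rough sketch of that cleanup sequence (the pool name "tank" and the path
are placeholders, not taken from the original report):

  # list the files flagged with permanent errors
  zpool status -v tank

  # remove the affected directory, then reset the error counters
  rm -rf /tank/path/to/bad-directory
  zpool clear tank

  # a scrub confirms whether the pool is clean again
  zpool scrub tank
  zpool status -v tank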

You're probably wondering: you have redundancy and no faulted devices, so how 
could this happen?  There are a few possible explanations, but they all have 
one thing in common: at some point the data was corrupted before it was 
written, so the redundant copy was written corrupted as well.  It might have 
been a CPU error, a parity error in non-ECC RAM, a bus glitch, or bad firmware 
in the HBA, for example.  Whatever the cause, both copies on disk are corrupt.  
All you can do is restore from a snapshot, restore from a backup, or accept it 
for what it is and make the error go away.

Sorry to hear it...



Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-23 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Jim Klimov
  
 I wonder if it would make weird sense to get the boxes, forfeit the
 cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
 get the most flexibility and bang for a buck from the owned hardware...

This is what we decided to do at work, and this is the reason why.
But we didn't buy the appliance-branded boxes; we just bought normal servers 
running Solaris.



[zfs-discuss] dm-crypt + ZFS on Linux

2012-11-23 Thread John Baxter
After searching for dm-crypt and ZFS on Linux and finding too little
information, I shall ask here. Please keep in mind this is in the context of
running this in a production environment.

We need to encrypt our data, approximately 30TB on three ZFS
volumes under Solaris 10. The volumes currently reside on iSCSI SANs
connected via 10Gb/s Ethernet. We have tested Solaris 11 with ZFS encrypted
volumes, found the performance to be very poor, and have an open bug
report with Oracle.

We are a Linux shop, and since performance is so poor and there is still no
resolution, we are considering ZFS on Linux with dm-crypt.
I have read once or twice that if we implemented ZFS + dm-crypt we would
lose features, but which features are not specified.
We currently mirror the volumes across identical iSCSI SANs with ZFS, and we
use hourly ZFS snapshots to update our DR site.

Which features of ZFS are lost if we use dm-crypt? My guess is that they
are related to raidz, but I'm not sure.


Re: [zfs-discuss] dm-crypt + ZFS on Linux

2012-11-23 Thread Tim Cook
On Fri, Nov 23, 2012 at 9:49 AM, John Baxter johnleebax...@gmail.com wrote:


 We need to encrypt our data, approximately 30TB on three ZFS
 volumes under Solaris 10. The volumes currently reside on iSCSI SANs
 connected via 10Gb/s Ethernet. We have tested Solaris 11 with ZFS encrypted
 volumes, found the performance to be very poor, and have an open bug
 report with Oracle.

 We are a Linux shop, and since performance is so poor and there is still no
 resolution, we are considering ZFS on Linux with dm-crypt.
 I have read once or twice that if we implemented ZFS + dm-crypt we would
 lose features, but which features are not specified.
 We currently mirror the volumes across identical iSCSI SANs with ZFS, and
 we use hourly ZFS snapshots to update our DR site.

 Which features of ZFS are lost if we use dm-crypt? My guess is that they
 are related to raidz, but I'm not sure.



Why don't you just use a SAN that supports full drive encryption?  There
should be basically 0 performance overhead.

--Tim


Re: [zfs-discuss] dm-crypt + ZFS on Linux

2012-11-23 Thread John Baxter
Replacing the SANs is cost prohibitive.


On Fri, Nov 23, 2012 at 10:24 AM, Tim Cook t...@cook.ms wrote:

 Why don't you just use a SAN that supports full drive encryption?  There
 should be basically 0 performance overhead.

 --Tim



Re: [zfs-discuss] dm-crypt + ZFS on Linux

2012-11-23 Thread Fabian Keil
John Baxter johnleebax...@gmail.com wrote:

 After searching for dm-crypt and ZFS on Linux and finding too little
 information, I shall ask here. Please keep in mind this is in the context of
 running this in a production environment.
 
 We need to encrypt our data, approximately 30TB on three ZFS
 volumes under Solaris 10. The volumes currently reside on iSCSI SANs
 connected via 10Gb/s Ethernet. We have tested Solaris 11 with ZFS encrypted
 volumes, found the performance to be very poor, and have an open bug
 report with Oracle.

Was the performance acceptable without encryption?

 We are a Linux shop, and since performance is so poor and there is still no
 resolution, we are considering ZFS on Linux with dm-crypt.
 I have read once or twice that if we implemented ZFS + dm-crypt we would
 lose features, but which features are not specified.
 We currently mirror the volumes across identical iSCSI SANs with ZFS, and we
 use hourly ZFS snapshots to update our DR site.

 Which features of ZFS are lost if we use dm-crypt? My guess is that they
 are related to raidz, but I'm not sure.

It depends on where you put the encryption layer. If you put it below ZFS,
no ZFS features need to be lost, although bugs in the encryption layer may
make the whole setup less reliable. Of course, that's true for Oracle's
ZFS encryption as well.
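
A minimal sketch of that layout on Linux (device names, mapping names and
the pool name are made up, and I haven't measured the performance myself):

  # encrypt each backing device with LUKS/dm-crypt
  cryptsetup luksFormat /dev/sdb
  cryptsetup luksOpen /dev/sdb crypt-sdb
  cryptsetup luksFormat /dev/sdc
  cryptsetup luksOpen /dev/sdc crypt-sdc

  # build the pool on the decrypted mappings; ZFS only sees plain block
  # devices, so snapshots, send/receive, raidz etc. behave as usual
  zpool create tank mirror /dev/mapper/crypt-sdb /dev/mapper/crypt-sdc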

If you put the encryption layer on top of ZFS, features like compression
and deduplication become ineffective, because ZFS only ever sees ciphertext.
This layout does not encrypt the ZFS metadata, but it does let you keep parts
of the data on the pool intentionally (or unintentionally) unencrypted. If
your application doesn't work with raw devices, you need another file system
on top of the encryption layer.
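
The on-top variant usually means dm-crypt over a zvol with another file
system above it; roughly (again, all names are placeholders):

  # carve a zvol out of the pool and encrypt it
  zfs create -V 500G tank/cryptvol
  cryptsetup luksFormat /dev/zvol/tank/cryptvol
  cryptsetup luksOpen /dev/zvol/tank/cryptvol cryptvol

  # a conventional file system goes on top of the encrypted zvol
  mkfs.ext4 /dev/mapper/cryptvol
  mount /dev/mapper/cryptvol /mnt/secure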

I'm not aware of anything raidz-related that is lost in either setup.

I haven't used ZFS with dm-crypt on GNU/Linux, but if I had to, I'd put
dm-crypt below ZFS and would rather split the pool than put dm-crypt on
top of ZFS.

My impression is that ext4 on dm-crypt on ZFS is a popular setup
(among bloggers), but I have no idea why and certainly wouldn't
want to use it in a production environment.

Just in case your GNU/Linux experiments don't work out, you could
also try ZFS on Geli on FreeBSD which works reasonably well.

Fabian




Re: [zfs-discuss] dm-crypt + ZFS on Linux

2012-11-23 Thread Richard Elling
On Nov 23, 2012, at 11:56 AM, Fabian Keil freebsd-lis...@fabiankeil.de wrote:
 
 Just in case your GNU/Linux experiments don't work out, you could
 also try ZFS on Geli on FreeBSD which works reasonably well.
 

For illumos-based distros or Solaris 11, using ZFS with lofi has been
well discussed for many years. Prior to the crypto option being integrated
as a first-class citizen in OpenSolaris, the codename used was xlofi, so
try that in your Google searches, or look at the man page for lofiadm.
 -- richard



Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-23 Thread Erik Trimble
On 11/23/2012 5:50 AM, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
  
I wonder if it would make weird sense to get the boxes, forfeit the

cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
get the most flexibility and bang for a buck from the owned hardware...

This is what we decided to do at work, and this is the reason why.
But we didn't buy the appliance-branded boxes; we just bought normal servers 
running Solaris.




I gave up and am now buying HP-branded hardware for running Solaris on 
it. Particularly if you get off-lease used hardware (for which HP is 
still very happy to let you buy a HW support contract), it's cheap, and 
HP has a lot of Solaris drivers for their branded stuff. Their whole 
SmartArray line of adapters has much better Solaris driver coverage than 
the generic stuff or the equivalent IBM or Dell items.


For instance, I just got a couple of DL380 G5 systems with dual 
Harpertown CPUs, fully loaded with eight 2.5" SAS drives and 32GB of RAM, 
for about $800 total.  You can attach their MSA30/50/70-series (or 
DS2700-series, if you want new) as dumb JBODs via SAS, and the nice 
SmartArray controllers have 1GB of NVRAM, which is sufficient for many 
purposes, so you don't even have to cough up the dough for a nice ZIL SSD.


HP even made a sweet little appliance thing that was designed for 
Windows, but happens to run Solaris really, really well.  The DL320s 
(the "s" is part of the model designation): 14x 3.5" SAS/SATA hot-swap 
bays, a Xeon 3070 dual-core CPU, a SmartArray controller, 2x GbE NICs, LOM, 
and a free 1x PCI-E expansion slot.  The only drawback is that it only 
takes up to 8GB of RAM.  It makes a *fabulous* little backup system for 
logs and stuff, and it's under about $2,000 even after you splurge on 
1TB drives and an SSD for the thing.


I am in the market for something newer than that, though. Anyone know 
what HP's using as a replacement for the DL320s?


-Erik

