Hi,
Couldn't agree more.. but I just asked if there was such a tool :)
Bruno
Richard Elling wrote:
On Dec 9, 2009, at 11:07 AM, Bruno Sousa wrote:
Hi,
Despite the fact that I agree in general with your comments, in reality
it all comes down to money..
So in this case, if I could prove that ZFS
So far I'm using file container encryption using TrueCrypt on the client,
but I would seriously like native encryption support on Solaris itself,
especially in ZFS. From
http://hub.opensolaris.org/bin/view/Project+zfs-crypto/ I see it's hopefully
coming in Q1 2010?
Are there any alternatives
Matthew Carras wrote:
So far I'm using file container encryption using TrueCrypt on the
client, but I would seriously like native encryption support on Solaris
itself, especially in ZFS. From
http://hub.opensolaris.org/bin/view/Project+zfs-crypto/ I see it's
hopefully coming in Q1 2010?
Cyril Plisko wrote:
On Thu, Dec 10, 2009 at 12:37 AM, James Lever j...@jamver.id.au wrote:
On 10/12/2009, at 5:36 AM, Adam Leventhal wrote:
The dedup property applies to all writes so the settings for the pool of origin
don't matter, just those on the destination pool.
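In practice that means checking and setting the property on the receiving dataset; a minimal sketch, with hypothetical pool and dataset names:

  # only the destination's dedup setting matters for the incoming writes
  zfs set dedup=on destpool/backup
  zfs get dedup destpool/backup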
Just a quick related
BTW, are there any implications of having dedup=on on rpool/dump ? I
know that the compression is turned off explicitly for rpool/dump.
It will be ignored because when you write to the dump ZVOL it doesn't go
through the normal ZIO pipeline so the deduplication code is never run in
that
We've been using ZFS for about two years now and make a lot of use of zfs
send/receive to send our data from one X4500 to another. This has been
working well for the past 18 months that we've been doing the sends.
I recently upgraded the receiving thumper to Solaris 10 u8 and since then,
I've
Cyril Plisko wrote:
BTW, are there any implications of having dedup=on on rpool/dump ? I
know that the compression is turned off explicitly for rpool/dump.
It will be ignored because when you write to the dump ZVOL it doesn't go
through the normal ZIO pipeline so the deduplication code is never
On Thu, Dec 10, 2009 at 09:50:43AM +, Andrew Robert Nicols wrote:
We've been using ZFS for about two years now and make a lot of use of zfs
send/receive to send our data from one X4500 to another. This has been
working well for the past 18 months that we've been doing the sends.
I
http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
Is there a way, other than buying enterprise (RAID-specific) drives for an array,
to use normal drives?
Does anyone have any success stories regarding a particular model?
The TLER cannot be edited on newer drives from Western Digital
Nathan wrote:
http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
Is there a way, other than buying enterprise (RAID-specific) drives for an array,
to use normal drives?
Does anyone have any success stories regarding a particular model?
The TLER cannot be edited on newer drives from
Thanks for the info, Alexander... I will test this out. I'm just wondering
what it's going to see after I install PowerPath. Since each drive will
have 4 paths, plus the PowerPath pseudo-device... after doing a zpool import, how will I
force it to use a specific path? Thanks again! Good to know that this can
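A hedged sketch of one way to steer that, assuming PowerPath exposes pseudo-devices such as emcpower0c under /dev/dsk; all names here are hypothetical:

  # put links to only the preferred pseudo-devices in a private directory,
  # then restrict the import's device search to that directory
  mkdir /emcdevs
  ln -s /dev/dsk/emcpower0c /emcdevs/
  zpool import -d /emcdevs mypool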
Sorry I probably didn't make myself exactly clear.
Basically, drives without suitable TLER settings drop out of RAID arrays randomly.
* Error Recovery - This is called various things by various manufacturers
(TLER, ERC, CCTL). In a Desktop drive, the goal is to do everything possible to
recover the
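Where the drive still honors it, the SCT error-recovery timer can usually be queried and set with smartmontools; a hedged sketch (the device name and the 7-second value are only illustrative, and many newer desktop drives ignore or forget the setting):

  # show the current SCT error recovery control (ERC) read/write timers
  smartctl -l scterc /dev/rdsk/c1t0d0
  # set both timers to 7.0 seconds (values are in tenths of a second)
  smartctl -l scterc,70,70 /dev/rdsk/c1t0d0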
http://www.stringliterals.com/?p=77
This guy talks about it too under Hard Drives.
Hi, I created a ZFS volume and shared it as iSCSI with shareiscsi=on.
My Windows 2008 servers can map it with no problems, but cluster validation
fails, saying that persistent reservation is not supported.
At http://hub.opensolaris.org/bin/view/Project+iscsitgt/ I can read that PGR is
supported.
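For reference, a minimal sketch of the setup described above (volume name and size are hypothetical); whether SCSI-3 persistent reservations actually work then depends on the iSCSI target implementation behind shareiscsi:

  # create a ZFS volume and export it over iSCSI via the legacy shareiscsi property
  zfs create -V 200G tank/w2k8cluster
  zfs set shareiscsi=on tank/w2k8cluster
  iscsitadm list target -v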
Yeah, this is my main concern with moving from my cheap Linux server with no
redundancy to ZFS RAID on OpenSolaris; I don't really want to have to pay twice
as much to buy the 'enterprise' disks which appear to be exactly the same
drives with a flag set in the firmware to limit read retries,
Mark Grant wrote:
Yeah, this is my main concern with moving from my cheap Linux server with no
redundancy to ZFS RAID on OpenSolaris; I don't really want to have to pay twice
as much to buy the 'enterprise' disks which appear to be exactly the same
drives with a flag set in the firmware to
From what I remember the problem with the hardware RAID controller is that the
long delay before the drive responds causes the drive to be dropped from the
RAID and then if you get another error on a different drive while trying to
repair the RAID then that disk is also marked failed and your
On Dec 10, 2009, at 8:36 AM, Mark Grant wrote:
From what I remember the problem with the hardware RAID controller
is that the long delay before the drive responds causes the drive
to be dropped from the RAID and then if you get another error on a
different drive while trying to repair the
Thanks, sounds like it should handle all but the worst faults OK then; I
believe the maximum retry timeout is typically set to about 60 seconds in
consumer drives.
We have just updated a major file server to Solaris 10 update 9 so that we can
control user and group disk usage on a single filesystem.
We were using QFS, and one nice thing about samquota was that it told you your
soft limit, your hard limit, and your usage on disk space and on the number of
On Thu, Dec 10, 2009 at 1:50 AM, Andrew Robert Nicols
andrew.nic...@luns.net.uk wrote:
The last snapshot received was named thumperpool/m...@200911301000 and since
then we've been completely unable to receive any snapshots -- even if I've
literally just snapshotted, removed back to the previous
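For context, a hedged sketch of the incremental send/receive cycle being described; the dataset and newer snapshot names below are hypothetical stand-ins for the elided ones:

  # send the increment between the last successfully received snapshot and a new one
  zfs snapshot thumperpool/data@200912100900
  zfs send -i thumperpool/data@200911301000 thumperpool/data@200912100900 | \
      ssh receiver zfs receive -F thumperpool/data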
We have just updated a major file server to Solaris 10 update 9 so that we
can control user and group disk usage on a single filesystem.
We were using QFS, and one nice thing about samquota was that it told you
your soft limit, your hard limit, and your usage on disk space and on the
number of
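For what it's worth, a hedged sketch of the ZFS side of that (dataset, user, and group names are hypothetical); ZFS user/group quotas report usage against a single hard limit, with no separate soft limit:

  # set per-user and per-group quotas on one filesystem
  zfs set userquota@alice=10G tank/home
  zfs set groupquota@staff=500G tank/home
  # report per-user and per-group space consumption alongside the quotas
  zfs userspace tank/home
  zfs groupspace tank/home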
I'm playing around with snv_128 on one of my systems, and trying to
see what kind of benefits enabling dedup will give me.
The standard practice for reprocessing data that's already stored to
add compression and now dedup seems to be a send / receive pipe
similar to:
zfs send -R old
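Something like the following is the usual shape of that pipe; a sketch only, with hypothetical dataset and snapshot names, and note the data is rewritten, so the pool temporarily needs room for both copies:

  # enable dedup (and optionally compression) on the receiving dataset first
  zfs set dedup=on tank/new
  # recursively snapshot the old data and replicate it into the new dataset
  zfs snapshot -r tank/old@migrate
  zfs send -R tank/old@migrate | zfs receive -d tank/new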
Hi guys,
I have a pool made of three LUNs, striped.
After some retryable SCSI errors that occurred during storage activity, zpool
status started to report one checksum error on one file only.
A zpool scrub finds it but doesn't fix it, and when I try to read the file, I
get an I/O error.
Again,
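If it helps, a hedged sketch of the usual triage (pool name is hypothetical); with a plain stripe there is no redundancy, so the affected file generally has to be restored from a backup before the error can be cleared:

  # list the files affected by permanent (unrecoverable) errors
  zpool status -v mypool
  # after restoring or deleting the damaged file, clear the counters and re-verify
  zpool clear mypool
  zpool scrub mypool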
Mark Grant wrote:
I don't think ZFS does any timing out. It's up to the drivers underneath to
time out and send an error back to ZFS - only they know what's reasonable for
a given disk type and bus type.
I think that is the issue. By my reading, many (if not most) consumer drives
don't
Brandon High wrote:
I'm playing around with snv_128 on one of my systems, and trying to
see what kind of benefits enabling dedup will give me.
The standard practice for reprocessing data that's already stored to
add compression and now dedup seems to be a send / receive pipe
similar to:
On Thu, Dec 10, 2009 at 2:15 PM, Tom Erickson tom.erick...@sun.com wrote:
After upgrading your pool on the receive side and doing 'zfs receive'
once to initialize the new behavior, you can thereafter set a property
locally and it will not be overwritten by 'zfs receive'.
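As a hedged illustration of that sequence (dataset name and stream file are hypothetical): once the property is set locally on the destination, its source shows as 'local' and later receives leave it alone:

  # first receive after the pool upgrade initializes the new behavior
  zfs receive -F tank/backup < /tank/streams/full.zfs
  # set the property locally on the received dataset; subsequent receives keep it
  zfs set compression=on tank/backup
  zfs get -o name,property,value,source compression tank/backup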
Maybe I don't
On Thu, Dec 10, 2009 at 2:53 PM, Matthew Ahrens matthew.ahr...@sun.com wrote:
Well, changing the compression property doesn't really interrupt service,
but I can understand not wanting to have even a few blocks with the wrong
I was thinking of sharesmb or sharenfs settings when I wrote that.
You may be interested in PSARC 2009/670: Read-Only Boot from ZFS Snapshot.
Here's the description from:
http://arc.opensolaris.org/caselog/PSARC/2009/670/20091208_joep.vesseur
Allow for booting from a ZFS snapshot. The boot image will be read-only.
Early in boot a clone of the root is
We've been using ZFS for about two years now and make a lot of use of zfs
send/receive to send our data from one X4500 to another. This has been
working well for the past 18 months that we've been doing the sends.
I recently upgraded the receiving thumper to Solaris 10 u8 and since then,
My import is still going (I hope, as I can't confirm since my system appears to
be totally locked except for the little blinking console cursor); it's been well
over a day.
I'm less hopeful now, but will still let it do its thing for another couple
of days.
Hi.
Is it possible, on Solaris 10 5/09, to roll back to a ZFS snapshot
WITHOUT destroying later-created clones or snapshots?
Example:
--($ ~)-- sudo zfs snapshot rpool/r...@01
--($ ~)-- sudo zfs snapshot rpool/r...@02
--($ ~)-- sudo zfs clone rpool/r...@02 rpool/ROOT-02
--($ ~)-- LC_ALL=C
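A plain 'zfs rollback' to @01 would need -r/-R here, which destroys the later snapshot and clone, so the usual non-destructive route is to clone the older snapshot instead (and promote the clone if it needs to outlive the original); a hedged sketch with hypothetical names standing in for the elided ones:

  # get a writable copy of the state at @01 without touching @02 or ROOT-02
  zfs clone rpool/ROOT@01 rpool/ROOT-01
  # optionally move the origin snapshots to the clone so the original
  # filesystem could later be destroyed
  zfs promote rpool/ROOT-01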
Hi James,
I just spent about a week recovering about 10TB of file data
for someone who encountered a (somewhat?) similar problem to what you
are seeing.
If you are still having problems with this, please contact me off-list.
Regards,
max
James Risner wrote:
It was created on AMD64 FreeBSD