On 26.08.2010 at 04:38, Edward Ned Harvey wrote:
There is no such thing as a reliable external disk. Not unless you want to
pay $1000 each, which is dumb. You have to scrap your mini and use
internal (or hot-swappable) disks.
Never expect a mini to be reliable. They're designed to be
Hello,
Actually, this is bad news.
I always assumed that the mirror redundancy of the ZIL could also be used to
handle bad blocks on the ZIL device (just as the main pool's self-healing does
for data blocks).
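For reference, attaching a mirrored log is straightforward; a minimal sketch,
assuming a pool named tank and hypothetical device names:

    # zpool add tank log mirror c4t0d0 c4t1d0
    # zpool status tank
      ...
      logs
        mirror-1   ONLINE       0     0     0
          c4t0d0   ONLINE       0     0     0
          c4t1d0   ONLINE       0     0     0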
I actually don't know how SSDs die; because of the wear-out characteristics
I can think of
This paper is exactly what is needed -- giving an overview to a wide audience
of the ZFS fundamental components and benefits.
I found several grammar errors -- to be expected in a draft -- and I think at
least one technical error.
The paper seems to imply that multiple vdevs will induce striping
Is it currently, or in the near future, possible to shrink a zpool /
remove a disk?
As others have noted, no, not until the mythical bp_rewrite() function is
introduced.
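For illustration, attempting to remove a top-level data vdev today fails along
these lines (pool and device names hypothetical, error text approximate),
whereas log, cache, and hot-spare devices can be removed:

    # zpool remove tank c1t2d0
    cannot remove c1t2d0: only inactive hot spares, cache, or log devices can be removed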
So far I have found no documentation on bp_rewrite(), other than that it is
the solution to evacuating a vdev, restriping a vdev,
From: Neil Perrin [mailto:neil.per...@oracle.com]
Hmm, I need to check, but if we get a checksum mismatch then I don't think we
try the other mirror(s). This is automatic for the 'main pool', but of course
the ZIL code is different by necessity. This problem can of course be fixed.
(It will be
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of StorageConcepts
So I would say there are 2 bugs / missing features here:
1) The ZIL needs to report truncated transactions on ZIL corruption.
2) The ZIL should use its mirrored counterpart to recover.
On Aug 26, 2010, at 9:14 AM, Edward Ned Harvey wrote:
* After the introduction of log device removal (ldr), and before this bug fix
is available, it is pointless to mirror log devices.
That's a bit of an overstatement. Mirrored logs protect against a wide variety
of failure modes. Neil just isn't sure if it does the
On Aug 26, 2010, at 2:40 AM, StorageConcepts wrote:
1) The ZIL needs to report truncated transactions on ZIL corruption.
As Neil outlined, this isn't possible while preserving current ZIL performance.
There is no way to distinguish the last ZIL block without incurring
additional writes for every
This paper is exactly what is needed -- giving an
overview to a wide audience of the ZFS fundamental
components and benefits.
Thanks :)
I found several grammar errors -- to be expected in a draft -- and I think
at least one technical error.
Will be fixed :)
The paper seems to imply that
Thanks for the response, Victor. It is certainly still relevant in the sense
that I am hoping to recover the data (although I've been informed the odds
are strongly against me).
My understanding is that Nexenta has been backporting ZFS code changes post
build 134. I suppose that it could be an error
Peter,
Here is where I am at right now.
I can obviously read/write when using anon=0; that definitely works.
But you pointed out it is also a security risk.
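For context, the risky-but-working share setting was presumably along these
lines (options reconstructed, not verbatim); anon=0 maps unmapped remote users
to UID 0, i.e. root, which is the risk in question:

    # zfs set sharenfs=rw,anon=0 backup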
NFS-Server# zfs get sharenfs backup
NAME    PROPERTY  VALUE  SOURCE
backup  sharenfs
Problem solved. Using the FQDN on the server end made it work;
the client did not have to use the FQDN.
zfs set sharenfs=rw=nfsclient.domain.com,rw=nfsclient.domain.com,nosuid backup
That worked.
Both systems have nsswitch.conf set correctly for DNS.
So this is an issue when trying to use DNS.
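For reference, the relevant nsswitch.conf lines on both ends should read
something like the standard Solaris defaults:

    hosts:    files dns
    ipnodes:  files dns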
Peter,
I ran truss from the client side. Below is what I am getting.
What strikes me as odd is that the client does a stat(64) call on the remote.
It cannot find the NFS-SERVER:/backup volume at all. Just before that
You get the IOCTL error just before that for the same reason.
Keep in mind when I use
Does that mean that when the beginning of the intent log chain gets corrupted,
all intent log data after the corruption area is lost, because the checksum of
the first corrupted block doesn't match?
Regards,
Markus
Neil Perrin neil.per...@oracle.com wrote on 23 August 2010 at 19:44
If I might add my $0.02: it appears that the ZIL is implemented as a
kind of circular log buffer. As I understand it, when a corrupt checksum
is detected, it is taken to be the end of the log, but this kind of
defeats the checksum's original purpose,
Actually, I can't read ZFS code, so the next assumptions are more or less
based on brainware -- excuse me in advance :)
How does ZFS detect up-to-date ZILs? With the txg check of the uberblock,
right?
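You can inspect the uberblock's txg yourself with zdb; a sketch, with the pool
name hypothetical and the output abbreviated from memory:

    # zdb -u tank
    Uberblock:
            magic = 0000000000bab10c
            txg = 123456
            ...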
In our corruption case, we had 2 valid uberblocks at the end and ZFS used
those to
On 26/08/2010 15:08, Saso Kiselkov wrote:
If I might add my $0.02: it appears that the ZIL is implemented as a
kind of circular log buffer. As I understand it, when a corrupt checksum
It is NOT circular, since that would imply a limited number of entries that
get overwritten.
is detected, it is
On Wed, August 25, 2010 23:00, Neil Perrin wrote:
On 08/25/10 20:33, Edward Ned Harvey wrote:
It's commonly stated that even with log device removal supported, the most
common failure mode for an SSD is to blindly write without reporting any
errors, and only detect that the device has failed
I see, thank you for the clarification. So it is possible to have something
equivalent to main-storage self-healing on the ZIL, with a ZIL scrub to
activate it. Or is that already implemented as well? (Sorry for asking
these obvious questions, but I'm not
Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Neil Perrin
This is a consequence of the design for performance of the ZIL code.
Intent log blocks are dynamically allocated and chained together.
When reading the
David Magda wrote:
On Wed, August 25, 2010 23:00, Neil Perrin wrote:
Does a scrub go through the slog and/or L2ARC devices, or only the
primary storage components?
A scrub will go through slogs and primary storage devices. The L2ARC
device is considered volatile and data loss is not possible
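So an ordinary scrub is already the closest thing to a "ZIL scrub"; e.g.
(pool name hypothetical):

    # zpool scrub tank
    # zpool status tank    (progress and per-device error counts, slogs included)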
Edward Ned Harvey wrote:
Add to that:
During scrubs, perform some reads on log devices (even if there's nothing to
read).
We do read from log devices if there is data stored on them.
In fact, during scrubs, perform some reads on every device (even if it's
actually empty).
Reading from the
On Wed, Aug 25, 2010 at 10:57 PM, StorageConcepts
presa...@storageconcepts.de wrote:
Thanks for the feedback. The idea is to give people new to ZFS an
understanding of the terms and modes of operation, to avoid common problems
(wide-stripe pools, etc.). Also agreed that it is a little
On Wed, Aug 25, 2010 at 6:18 PM, Wilkinson, Alex
alex.wilkin...@dsto.defence.gov.au wrote:
On Wed, Aug 25, 2010 at 02:54:42PM -0400, LaoTsao wrote:
IMHO, you want -E for ZIL and -M for L2ARC
Why ?
-E uses SLC flash, which is optimised for fast writes. Ideal for a
ZIL which is
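In zpool terms, that split would look something like this (hypothetical
device names; the SLC part as slog, the MLC part as L2ARC):

    # zpool add tank log c2t0d0
    # zpool add tank cache c2t1d0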
Hi,
I'd like to know if there is a way to use a WORM property on ZFS.
Thanks
Douglas
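As far as I know there is no native WORM property in ZFS; the closest
approximation is a read-only dataset plus snapshots. A sketch, assuming a
dataset tank/archive:

    # zfs set readonly=on tank/archive
    # zfs snapshot tank/archive@locked

Note that an administrator can simply flip readonly back off, so this is not
true WORM in the compliance sense.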
On Wed, Aug 25, 2010 at 12:29 PM, Dr. Martin Mundschenk m.mundsch...@me.com wrote:
Well, I wonder what the components are to build a stable system without
having an enterprise solution: eSATA, USB, FireWire, FibreChannel?
If it is possible to get a card to fit into a Mac Mini, eSATA would be a
pb( == Phillip Bruce (Mindsource) v-phb...@microsoft.com writes:
pb( Problem solved.. Try using FQDN on the server end and that
pb( work. The client did not have to use FQDN.
1. Your syntax is wrong. You must use netgroup syntax to specify an
IP; otherwise it will think you mean the
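If I read the share_nfs(1M) access-list rules correctly, an IP network needs a
leading '@', since a bare name is taken as a hostname or netgroup; a
hypothetical example for the dataset from this thread:

    # zfs set sharenfs=rw=@192.168.1.0/24,nosuid backup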
Hi,
I'm trying to track down an error with a 64-bit x86 OpenSolaris 2009.06 ZFS
volume shared via iSCSI to an Ubuntu 10.04 client. The client can successfully
log in, but no device node appears. I captured a session with Wireshark. When
the client attempts a SCSI: Inquiry LUN: 0x00, OpenSolaris
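One thing worth checking, assuming the target is COMSTAR-based: a client can
log in successfully and still see no LUNs if the logical unit has no view
entry. A sketch (<lu-guid> is a placeholder):

    # stmfadm list-lu -v
    # stmfadm list-view -l <lu-guid>
    # stmfadm add-view <lu-guid>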
On Thu, August 26, 2010 13:58, Tom Buskey wrote:
I usually see 17 MB/s max on an external USB 2.0 drive.
Interesting; I routinely see 27 MB/s peaking at 30 MB/s on the cheap WD 1TB
external drives I use for backups. (Backup is probably the best case; the
only user of that drive is a zfs receive
Hi Daniel,
We were looking into very much the same solution you've tested. Thanks
for your advice. I think we will look for something else. :)
Just out of curiosity, what ZFS tweaking did you do? And which much
pricier competing solution did you end up with in the end?
Regards,
IMHO, if you use backup software that supports dedupe in the software,
then ZFS is still a viable solution.
On 8/26/2010 6:13 PM, Sigbjørn Lie wrote:
Hi Daniel,
We were looking into very much the same solution you've tested.
Thanks for your advice. I think we will look for something else. :)
Just
Hey all,
I currently work for a company that has purchased a number of different SAN
solutions (whatever was cheap at the time!) and I want to set up an HA ZFS
file store over Fibre Channel.
Basically I've taken slices from each of the SANs and added them to a ZFS pool
on this box (which I'm
Be very careful here!!
On 8/26/2010 9:16 PM, Michael Dodwell wrote:
Hey all,
I currently work for a company that has purchased a number of different SAN
solutions (whatever was cheap at the time!) and I want to set up an HA ZFS
file store over Fibre Channel.
Basically I've taken slices from
Lao,
I had a look at HAStoragePlus etc., and from what I understand that's to
mirror local storage across 2 nodes so services can access it, 'DRBD style'.
Having read through the documentation on the Oracle site, the cluster
software, from what I gather, is about how to cluster services
We are using a 7210, 44 disks I believe, 11 stripes of RAIDZ sets. When I
installed, I selected the best bang for the buck on the speed vs. capacity
chart.
We run about 30 VMs on it, across 3 ESX 4 servers. Right now it's all running
NFS, and it sucks... sooo slow.
iSCSI was no better.
I