I have to help set up a configuration where a ZPOOL on MPXIO on OpenSolaris is
being used with Symmetrix devices, with replication being handled via Symmetrix
Remote Data Facility (SRDF).
So I am curious whether anyone has used this configuration and has any
feedback/suggestions.
Will there be
Hi Mattias, Miles.
To test the version mismatch theory, I set up an snv_91 VM (using VirtualBox) on
my snv_95 desktop and tried the zfs receive again. Unfortunately the symptoms
are exactly the same: at around the 20GB mark, the justhome.zfs stream still
bombs out with the checksum error.
I
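For reference, the kind of pipeline being exercised here is roughly the
following; pool, dataset and path names are placeholders, not the ones
actually used:

   # save the send stream to a file, then restore it later on the file server
   zfs send tank/home@backup > /backup/justhome.zfs
   zfs receive tank/home-restored < /backup/justhome.zfs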
Miles Nordin [EMAIL PROTECTED]
cs == Cromar Scott [EMAIL PROTECTED] writes:
cs It appears that the metadata on that pool became corrupted
cs when the processor failed. The exact mechanism is a bit of a
cs mystery,
[...]
cs We were told that the probability of metadata
2008/8/13 Jonathan Wheeler [EMAIL PROTECTED]:
So far we've established that in this case:
*Version mismatches aren't causing the problem.
*Receiving across the network isn't the issue (because I have the exact same
issue restoring the stream directly on
my file server).
*All that's left was
Mattias Pantzare wrote:
2008/8/13 Jonathan Wheeler [EMAIL PROTECTED]:
So far we've established that in this case:
*Version mismatches aren't causing the problem.
*Receiving across the network isn't the issue (because I have the exact same
issue restoring the stream directly on
my file
jw == Jonathan Wheeler [EMAIL PROTECTED] writes:
mp == Mattias Pantzare [EMAIL PROTECTED] writes:
jw Miles: zfs receive -nv works ok
one might argue 'zfs receive' should validate checksums with the -n
option, so you can check if a just-written dump is clean before
counting on it. Without
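Something like the following would be that sanity check (dataset name is made
up); as it stands, a dump that passes -nv can still blow up on the real
receive, as seen earlier in this thread:

   zfs receive -nv tank/restore-test < /backup/justhome.zfs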
cs == Cromar Scott [EMAIL PROTECTED] writes:
cs We opened a call with Sun support. We were told that the
cs corruption issue was due to a race condition within ZFS. We
cs were also told that the issue was known and was scheduled for
cs a fix in S10U6.
nice. Is there a bug
Thanks for the information, I'm learning quite a lot from all this.
It seems to me that zfs send *should* be doing some kind of verification, since
some work has clearly been put into ZFS so that send streams can be dumped into
files/pipes. It's a great feature to have, and I can't believe that this
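One workaround sketch in the meantime: checksum the stream externally as it is
written, and verify it again before counting on the dump. digest(1) ships with
Solaris; paths here are made up:

   zfs send tank/home@backup | tee /backup/justhome.zfs | \
       digest -a sha1 > /backup/justhome.zfs.sha1
   # later, before trusting the file on disk:
   digest -a sha1 /backup/justhome.zfs | diff - /backup/justhome.zfs.sha1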
We have done several benchmarks on Thumpers. Config 1 is definitely better for
most of the loads.
Some RAID-1 configs perform better on certain loads.
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax
Miles Nordin [EMAIL PROTECTED]
cs == Cromar Scott [EMAIL PROTECTED] writes:
cs We opened a call with Sun support. We were told that the
cs corruption issue was due to a race condition within ZFS. We
cs were also told that the issue was known and was scheduled for
cs a fix in
On Tue, 12 Aug 2008, Lori Alt wrote:
There are no plans to add zfs root support to the existing
install GUI. GUI install support for zfs root will be
provided by the new Caiman installer.
Latest BeleniX OpenSolaris uses the Caiman installer so it may be
worth installing it just to see what
Latest BeleniX OpenSolaris uses the Caiman installer so it may be
worth installing it just to see what it is like. I installed it under
VirtualBox yesterday. Installing using whole disk did not work with
VirtualBox but the suggested default partitioning did work.
OpenSolaris 2008.05
Did you ever figure this out?
I have the same hardware: Intel DG33TL motherboard with Intel gigabit NIC and
ICH9R, but with Hitachi 1TB drives.
I'm getting 2MB/s write speeds.
I've tried the zeroing-out trick. No luck.
Network is fine. Disks are fine; they write at around 50MB/s when formatted
Oh, Jeff's write script gives around 60MB/s IIRC.
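For comparison, a crude sequential-write test along these lines (pool and file
names are made up) shows whether the pool itself can sustain more than 2MB/s;
divide the 2GB written by the elapsed real time:

   ptime dd if=/dev/zero of=/tank/ddtest bs=1024k count=2048
   rm /tank/ddtest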
I used BCwipe to zero the drives. How do you "boot Knoppix again and zero out
the start and end sectors manually (erasing all GPT data)"?
thanks
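In case it helps, here is roughly what that means, assuming the disk shows up
as /dev/sda under Knoppix (double-check the device name first; this is
destructive). GPT keeps its tables in the first and last ~34 sectors, so:

   # wipe the protective MBR plus the primary GPT at the start of the disk
   dd if=/dev/zero of=/dev/sda bs=512 count=34
   # wipe the backup GPT at the end of the disk
   dd if=/dev/zero of=/dev/sda bs=512 count=34 \
       seek=$(( $(blockdev --getsz /dev/sda) - 34 ))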
Hello Moinak,
Wednesday, August 13, 2008, 1:58:34 PM, you wrote:
MG I have to help setup a configuration where a ZPOOL on MPXIO on
MG OpenSolaris is being used with Symmetrix devices with replication
MG being handled via Symmetrix Remote Data Facility (SRDF).
MG So I am curious whether anyone
Robert Milkowski wrote:
Wednesday, August 13, 2008, 1:58:34 PM, you wrote:
MG I have to help setup a configuration where a ZPOOL on MPXIO on
MG OpenSolaris is being used with Symmetrix devices with replication
MG being handled via Symmetrix Remote Data Facility (SRDF).
MG So I am
I see that a driver patch has now been released for marvell88sx
hardware. I expect that this is the patch that Thumper owners have
been anxiously waiting for. The patch ID is 138053-02.
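Assuming it is the one, applying it should be the usual routine once the patch
is downloaded and unpacked, e.g.:

   unzip 138053-02.zip -d /var/tmp
   patchadd /var/tmp/138053-02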
Bob
==
Bob Friesenhahn
[EMAIL PROTECTED],
Given that the checksum algorithms utilized in zfs are already fairly CPU
intensive, I can't help but wonder: if it's verified that a majority of
checksum inconsistency failures appear to be single-bit, might it be
advantageous to utilize some computationally simpler hybrid form of a
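For what it's worth, the cost is already tunable per dataset; something like
this (dataset name made up) switches between the cheaper fletcher algorithms
and sha256:

   zfs set checksum=fletcher4 tank/data
   zfs get checksum tank/data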
On Aug 13, 2008, at 5:58 AM, Moinak Ghosh wrote:
I have to help setup a configuration where a ZPOOL on MPXIO on
OpenSolaris is being used with Symmetrix devices with replication
being handled via Symmetrix Remote Data Facility (SRDF).
So I am curious whether anyone has used this
Actually the SRDF copy has to be imported on the standby(R2) host if the
primary host/storage has to be offlined for some reason; but thanks for the
note.
Regards,
Moinak.
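For the archives, the failover itself is then just an import of the replicated
devices on the R2 host once the SRDF pair has been split or failed over; the
pool name is hypothetical, and -f is needed because the pool was last imported
on the R1 host:

   # on the standby (R2) host
   zpool import -f mypool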
On Wed, 13 Aug 2008, paul wrote:
Given that the checksum algorithms utilized in zfs are already fairly CPU
intensive, I can't help but wonder: if it's verified that a majority of
checksum inconsistency failures appear to be single-bit, might it be
advantageous to utilize some
Mario,
Latest BeleniX OpenSolaris uses the Caiman installer so it may be
worth installing it just to see what it is like. I installed it under
VirtualBox yesterday. Installing using whole disk did not work with
VirtualBox but the suggested default partitioning did work.
jw == Jonathan Wheeler [EMAIL PROTECTED] writes:
jw A common example used all over the place is zfs send | ssh
jw $host. In these examples is ssh guaranteeing the data delivery
jw somehow?
it is really all just apologetics. It sounds like a zfs bug to me.
The only alternative is
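For clarity, the pattern being argued about is the usual one, roughly (host
and dataset names are placeholders):

   zfs send tank/home@backup | ssh backuphost zfs receive backuppool/home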
On Wed, 13 Aug 2008, paul wrote:
Shy of extremely noisy hardware and/or literal hard failure, most
errors will most likely be expressed as 1 bit out of some
very large number N of bits.
This claim ignores the fact that most computers today are still based
on synchronously clocked
paul wrote:
Bob wrote:
... Given the many hardware safeguards against single (and several) bit errors,
the most common data error will be large. For example, the disk drive may
return data from the wrong sector.
- actually data integrity check bits as may exist within memory
Jonathan Wheeler wrote:
Thanks for the information, I'm learning quite a lot from all this.
It seems to me that zfs send *should* be doing some kind of verification,
since some work has clearly been put into ZFS so that send streams can be
dumped into files/pipes. It's a great feature to have, and
There is an explicit checksum check in ZFS, as you deduced. I suspect
that by disabling this check you could recover much, if not all, of your data.
You could probably do this with mdb by 'simply' writing a NOP over the branch
in dmu_recv_stream.
It appears that 'zfs send' was designed
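Very roughly, and only as a sketch: disassemble the function, find the
conditional branch taken when the checksum comparison fails, and overwrite it
byte-by-byte, e.g. with 0x90 (NOP) on x86. The offset below is a placeholder
that depends on the exact kernel build, so have a backout plan:

   # mdb -kw
   > dmu_recv_stream::dis
   > dmu_recv_stream+0x<offset>/v 0x90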