Uwe Dippel wrote:
We have seen some unfortunate miscommunication here, and misinterpretation. This extends into differences of culture. One of the vocal persons here is surely not 'Anti-xyz'; rather, I sense his intense desire to further the progress by pointing his finger at some potential
Hey Cindy
Thanks for your help.
How would I configure a 2-way mirror pool for a root pool?
Basically I'd do it this way.
zpool create pool mirror disk0 disk2 mirror disk1 disk3
or with an already configured root pool mirror
zpool add rpool mirror disk1 disk3
But when I try to add this it seems to fail with:
cannot add to 'rpool': root pool can not have multiple vdevs or separate logs
What you want is attach instead of add:
zpool attach [-f] pool device new_device
Attaches new_device to an
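For example (hypothetical device names; this just adds a second side to the existing root-pool vdev rather than a new vdev):
# c0t0d0s0 is the current root disk, c0t1d0s0 the new mirror side
zpool attach rpool c0t0d0s0 c0t1d0s0
Once the resilver completes the root pool is a two-way mirror; the new disk typically still needs boot blocks installed (e.g. installgrub on x86) before you can boot from it.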
Thanks Volker.
I am aware of that.
I was just asking because Cindy said there could be a 2-way mirror
config for a root pool.
Guess I'll either get bigger disks or live with these smaller ones..
Volker A. Brandt wrote:
zpool add rpool mirror disk1 disk3
But when I try to add this it seems
I'm rather tired of hearing this mantra.
[...]
Every file system needs a repair utility
Hey, wait a minute -- that's a mantra too!
I don't think there's actually any substantive disagreement here -- stating
that one doesn't need a separate program called /usr/sbin/fsck is not the
same as
Mario Goebbels wrote:
The good news is that ZFS is getting popular enough on consumer-grade hardware. The bad news is that said hardware has a different set of failure modes, so it takes a bit of work to become resilient to them.
This is pretty high on my short list.
One
g == Gino dandr...@gmail.com writes:
g we lost many zpools with multimillion$ EMC, Netapp and HDS arrays just simulating fc switches power fails.
g The problem is that ZFS can't properly recover itself.
I don't like what you call ``the problem''---I think it assumes too much.
This is CR 6667683
http://bugs.opensolaris.org/view_bug.do?bug_id=6667683
I think that would solve 99% of ZFS corruption problems!
Based on the reports I've seen to date, I think you're right.
Is there any EDT for this patch?
Well, because of this thread, this has gone from on my list
[Still waiting for answers on my earlier questions]
So I take it that ZFS solves one problem perfectly well: Integrity of data
blocks. It uses CRC and atomic writes for this purpose, and as far as I could
follow this list, nobody has ever had any problems in this respect.
However, it also - at
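As an aside, that block-level integrity is what a scrub verifies end to end; a sketch with a hypothetical pool name:
# read back and verify every allocated block, then show per-device checksum error counts
zpool scrub tank
zpool status -v tank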
I was just asking because Cindy said there could be a 2-way mirror
config for a root pool.
Guess I'll either get bigger disks or live with these smaller ones..
What you want is attach instead of add:
Ah, OK. So the problem hinges more on the question of what a two-way mirror is. :-)
But I'm
This is CR 6667683
http://bugs.opensolaris.org/view_bug.do?bug_id=6667683
I think that would solve 99% of ZFS corruption
problems!
Based on the reports I've seen to date, I think
you're right.
Is there any EDT for this patch?
Well, because of this thread, this has gone from
On 2/10/2009 3:37 PM, D. Eckert wrote:
(...)
Possibly so. But if you had that ufs/reiserfs on a LVM or on a RAID0
spanning removable drives, you probably wouldn't have been so lucky.
(...)
we are not talking about a RAID 5 array or an LVM. We are talking about a
single FS setup as a zpool over
On 2/10/2009 4:48 PM, Roman V. Shaposhnik wrote:
On Wed, 2009-02-11 at 09:49 +1300, Ian Collins wrote:
These posts do sound like someone who is blaming their parents after
breaking a new toy before reading the instructions.
It looks like there's a serious denial of the fact that bad
On Tue, 10 Feb 2009 21:43:00 PST
Uwe Dippel udip...@gmail.com wrote:
Back to where I started from, with some questions:
1. Can the relevant people confirm that drives might turn dead when
leaving a pool at unfortunate moments? Despite complete physical integrity?
I have not experienced
On Tue, February 10, 2009 23:43, Uwe Dippel wrote:
1. Can the relevant people confirm that drives might turn dead when
leaving a pool at unfortunate moments? Despite complete physical
integrity? [I'd really appreciate an answer here, because this is what I
am starting to implement here:
On Wed, February 11, 2009 02:33, Eric D. Mudama wrote:
BTW, funky/busted bridge hardware in external USB devices doesn't count.
They do for me; I'm currently using external USB drives for my backup
datasets (in the process of converting to use zfs send/recv to get the
data there). My normal
On Wed, February 11, 2009 02:28, Sandro wrote:
How would I configure a 2-way mirror pool for a root pool?
Basically I'd do it this way.
zpool create pool mirror disk0 disk2 mirror disk1 disk3
or with an already configured root pool mirror
zpool add rpool mirror disk1 disk3
But when I try
Dear ZFS experts,
somehow one of my zpools got corrupted. Symptom is that I cannot
import it any more. To me it is of lesser interest why that happened.
What is really challenging is the following.
Any effort to import the zpool hangs and is unkillable. E.g. if I
issue a zpool import
On 11-Feb-09, at 10:08 AM, David Dyer-Bennet wrote:
On Tue, February 10, 2009 23:43, Uwe Dippel wrote:
1. Can the relevant people confirm that drives might turn dead when
leaving a pool at unfortunate moments? Despite complete physical
integrity? [I'd really appreciate an answer here,
Hi,
In a scenario where multiple sites replicate their zpools (EMC storage,
hardware based replication) to a single storage in a central site, and
given that all zpools have the same name, can the host in the central
site correctly identify and mount the different zpools?
Thanks,
On Tue, Feb 10, 2009 at 11:44 PM, Fredrich Maney fredrichma...@gmail.comwrote:
Ah... an illiterate AND idiotic bigot. Have you even read the manual
or *ANY* of the replies to your posts? *YOU* caused the situation that
resulted in your data being corrupted. Not Sun, not OpenSolaris, not
ZFS
On February 10, 2009 11:53:39 PM -0500 Toby Thain
t...@telegraphics.com.au wrote:
On 10-Feb-09, at 10:36 PM, Frank Cusack wrote:
On February 10, 2009 4:41:35 PM -0800 Jeff Bonwick
jeff.bonw...@sun.com wrote:
Not if the disk drive just *ignores* barrier and flush-cache commands
and returns
Hi,
(I am sorry but I don't have a system where I can run commands).
Is it OK to create a zpool adding log and cache options?
Thanks, Rafael.
--
= Rafael Friedlander
= Sun Microsystems
= OEM Specialist
= +972 544 971 564
Tim;
The proper procedure for ejecting a USB drive in Windows is to right
click the device icon and eject the appropriate listed device.
I've done this without ejecting before and lost data.
My personal experience with ZFS is that it is a very reliable FS. I've
not lost data on it yet
I just did a test install of opensolaris 2008.11 on a Seagate 1.5TB drive with
the option of using the entire disk.
Afterwards, df -H reports that the available space in /export/home is only
about 970GB ... all counted, there are at least 400GB of space missing. I am new
to zfs, however, this
(...)
Good. It looks like this thread can finally die. I received the
following in response to my message below:
(...)
I apologize that your eMail could not be delivered.
This is to either the mail server you use is considered as a machine from a
dynamic ip pool or your mail server is anywhere
On Wed, Feb 11, 2009 at 10:33 AM, Steven Sim unixan...@gmail.com wrote:
Tim;
The proper procedure for ejecting a USB drive in Windows is to right click
the device icon and eject the appropriate listed device.
I'm well aware of what the proper procedure is. My point is, I've done it
for
On Wed, Feb 11, 2009 at 11:25, Rafael Friedlander r...@sun.com wrote:
Hi,
(I am sorry but I don't have a system where I can run commands).
Is it OK to create a zpool adding log and cache options?
Yes, this usage is explicitly mentioned in the man page [1].
Will
[1]:
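For instance, something along these lines should be accepted in a single create (device names are made up):
# mirrored data vdev plus a separate intent-log device and an L2ARC cache device
zpool create tank mirror c1t0d0 c1t1d0 log c2t0d0 cache c3t0d0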
On 02/11/09 01:28, Sandro wrote:
Hey Cindy
Thanks for your help.
How would I configure a 2-way mirror pool for a root pool?
Basically I'd do it this way.
zpool create pool mirror disk0 disk2 mirror disk1 disk3
This command does not create a valid root pool. Root pools cannot
have more
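In other words, the striped-mirror layout above has two top-level vdevs, and a root pool has to be a single one. A sketch of a layout that should be accepted, assuming SMI-labelled slices with made-up names:
zpool create rpool mirror c0t0d0s0 c0t1d0s0
Even then, the disks still need boot blocks installed (e.g. installgrub on x86) before the pool is actually bootable.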
On Wed, Feb 11 at 8:38, Oliver wrote:
I just did a test install of opensolaris 2008.11 on a Seagate 1.5TB drive with option of
using the entire disk.
Afterwards, df -H reports that the available space in /export/home is only
about 970GB ... all counted, there are at least 400GB space
On Wed, 11 Feb 2009, David Dyer-Bennet wrote:
This all-or-nothing behavior of ZFS pools is kinda scary. Turns out I'd
rather have 99% of my data than 0% -- who knew? :-) I'd much rather have
100.00% than either of course, and I'm running ZFS with mirroring, and
doing regular backups, because
(...)
Ah... an illiterate AND idiotic bigot.
(...)
I apologize for my poor English. Yes, it's not my mother tongue, but I have no
doubt at all that this discussion could be continued in German as well.
But just to make it clear:
Finally I did understand very well where I went wrong. But it
could be ... I am hoping for a better clue to figure this out.
After this, I also installed Ubuntu Server on the same box, and I am
getting around 1.4TB of space as expected ... so it is not something in the
hardware path, I guess.
thanks
Oliver
On Wed, 11 Feb 2009, Oliver wrote:
I just did a test install of opensolaris 2008.11 on a Seagate 1.5TB
drive with option of using the entire disk.
Afterwards, df -H reports that the available space in /export/home
is only about 970GB ... all counted, there are at least 400GB space
missing.
On Wed, 11 Feb 2009, Tim wrote:
All that and yet the fact remains: I've never ejected a USB drive from OS
X or Windows, I simply pull it and go, and I've never once lost data, or had
it become unrecoverable or even corrupted.
And yes, I do keep checksums of all the data sitting on them and
On 2/11/2009 12:11 PM, Bob Friesenhahn wrote:
My understanding is that 1TB is the maximum bootable disk size since
EFI boot is not supported. It is good that you were allowed to use
the larger disk, even if its usable space is truncated.
I don't dispute that, but I don't understand it
On 11-Feb-09, at 11:19 AM, Tim wrote:
...
And yes, I do keep checksums of all the data sitting on them and
periodically check it. So, for all of your ranting and raving, the
fact remains even a *crappy* filesystem like fat32 manages to
handle a hot unplug without any prior notice
On 2/11/2009 12:35 PM, Toby Thain wrote:
On 11-Feb-09, at 11:19 AM, Tim wrote:
...
And yes, I do keep checksums of all the data sitting on them and
periodically check it. So, for all of your ranting and raving, the
fact remains even a *crappy* filesystem like fat32 manages to handle
a hot
On 11 February, 2009 - Kyle McDonald sent me these 1,2K bytes:
On 2/11/2009 12:11 PM, Bob Friesenhahn wrote:
My understanding is that 1TB is the maximum bootable disk size since
EFI boot is not supported. It is good that you were allowed to use
the larger disk, even if its usable space
On 2/11/2009 12:57 PM, Tomas Ögren wrote:
On 11 February, 2009 - Kyle McDonald sent me these 1,2K bytes:
On 2/11/2009 12:11 PM, Bob Friesenhahn wrote:
My understanding is that 1TB is the maximum bootable disk size since
EFI boot is not supported. It is good that you were allowed to
On 2/11/2009 1:03 PM, Kyle McDonald wrote:
Since you can't mix EFI and FDisk partition tables, and you can't have
more than one Solaris fdisk partition (that I'm aware of anyway) it
looks like 1TB is all you can give Solaris at the moment.
I should have qualified that with If you need to
On Wed, February 11, 2009 11:21, Bob Friesenhahn wrote:
On Wed, 11 Feb 2009, Tim wrote:
All that and yet the fact remains: I've never ejected a USB drive from OS X or Windows, I simply pull it and go, and I've never once lost data, or had it become unrecoverable or even corrupted.
And
On Wed, February 11, 2009 11:35, Toby Thain wrote:
On 11-Feb-09, at 11:19 AM, Tim wrote:
...
And yes, I do keep checksums of all the data sitting on them and
periodically check it. So, for all of your ranting and raving, the
fact remains even a *crappy* filesystem like fat32 manages to
On Wed, February 11, 2009 10:49, Bob Friesenhahn wrote:
On Wed, 11 Feb 2009, David Dyer-Bennet wrote:
This all-or-nothing behavior of ZFS pools is kinda scary. Turns out I'd
rather have 99% of my data than 0% -- who knew? :-) I'd much rather have 100.00% than either of course, and I'm
On Wed, 11 Feb 2009, David Dyer-Bennet wrote:
Then again, I've never lost data during the learning period, nor on the
rare occasions where I just get it wrong. This is good; not quite
remembering to eject a USB memory stick is *so* easy.
With Windows and OS-X, it is up to the *user* to
Hello,
I noted this problem on build 98 of 2008.11 and have recently verified it
exists in the production release as well.
I have installed 2008.11 as a guest under Xen. If you make an exact
block-level copy of the image and attach the copy as a disk on the original OS,
zpool import does not
On Wed, February 11, 2009 12:23, Bob Friesenhahn wrote:
On Wed, 11 Feb 2009, David Dyer-Bennet wrote:
Then again, I've never lost data during the learning period, nor on the
rare occasions where I just get it wrong. This is good; not quite
remembering to eject a USB memory stick is *so*
Kyle McDonald wrote:
On 2/11/2009 12:57 PM, Tomas Ögren wrote:
On 11 February, 2009 - Kyle McDonald sent me these 1,2K bytes:
On 2/11/2009 12:11 PM, Bob Friesenhahn wrote:
My understanding is that 1TB is the maximum bootable disk size since
EFI boot is not supported. It is good that
On 2/11/2009 1:50 PM, Richard Elling wrote:
Solaris can now (as of b105) use extended partitions.
http://www.opensolaris.org/os/community/on/flag-days/pages/2008120301/
That's interesting, but I'm not sure how it helps.
It's my understanding that Solaris doesn't like it if more than one of
I have a non-bootable disk and need to recover files from /root... When
I import the disk via zpool import, /root isn't mounted...
Thanks, Jonny
On February 11, 2009 12:21:03 PM -0600 David Dyer-Bennet d...@dd-b.net
wrote:
I've spent $2000 on hardware and, by now, hundreds of hours of my time
trying to get and keep a ZFS-based home NAS working. Because it's the
only affordable modern practice, my backups are on external drives (USB
On Wed, Feb 11, 2009 at 11:19 AM, Tim t...@tcsac.net wrote:
On Tue, Feb 10, 2009 at 11:44 PM, Fredrich Maney fredrichma...@gmail.com
wrote:
Ah... an illiterate AND idiotic bigot. Have you even read the manual
or *ANY* of the replies to your posts? *YOU* caused the situation that
resulted in
On February 11, 2009 6:17:58 PM +0200 Rafael Friedlander r...@sun.com
wrote:
In a scenario where multiple sites replicate their zpools (EMC storage,
hardware based replication) to a single storage in a central site, and
given that all zpools have the same name, can the host in the central
site
On February 11, 2009 2:07:47 AM -0800 Gino dandr...@gmail.com wrote:
I agree but I'd like to point out that the MAIN problem with ZFS is that
because of a corruption you'll lose ALL your data and there is no way to
recover it. Consider an example where you have 100TB of data and a fc
switch
David Dyer-Bennet wrote:
I've spent $2000 on hardware and, by now, hundreds of hours of my time
trying to get and keep a ZFS-based home NAS working.
Hundreds of hours doing what? I just plugged in the drives, built the
pool and left the box in a corner for the past couple of years. It's
You can also import pools by their unique ID instead of by name. If the
pool is not imported, 'zpool import' with no arguments should list the
pool IDs. If the pool is imported, 'zpool get guid poolname' will list
the pool ID.
Beware that if the zpools have the same mountpoints set within any
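A rough sketch of that flow, with made-up names and a made-up pool ID:
# with nothing imported, list the pools available for import, including their numeric IDs
zpool import
# import one of them by ID, giving it a new local name to avoid the name clash
zpool import 6930342628108110421 rpool_siteA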
On Wed, February 11, 2009 13:45, Ian Collins wrote:
David Dyer-Bennet wrote:
I've spent $2000 on hardware and, by now, hundreds of hours of my time
trying to get and keep a ZFS-based home NAS working.
Hundreds of hours doing what? I just plugged in the drives, built the
pool and left the
On Wed, Feb 11, 2009 at 11:46 AM, Kyle McDonald kmcdon...@egenera.comwrote:
Yep. I've never unplugged a USB drive on purpose, but I have left a drive
plugged into the docking station, hibernated Windows XP Professional,
undocked the laptop, and then woken it up later undocked. It routinely
On Wed, Feb 11, 2009 at 1:36 PM, Frank Cusack fcus...@fcusack.com wrote:
if you have 100TB of data, wouldn't you have a completely redundant
storage network -- dual FC switches on different electrical supplies,
etc. i've never designed or implemented a storage network before but
such
Great, thanks!
On February 11, 2009 3:02:48 PM -0600 Tim t...@tcsac.net wrote:
On Wed, Feb 11, 2009 at 1:36 PM, Frank Cusack fcus...@fcusack.com wrote:
if you have 100TB of data, wouldn't you have a completely redundant
storage network -- dual FC switches on different electrical supplies,
etc. i've never
On Wed, 11 Feb 2009, Tim wrote:
Right, except the OP stated he unmounted the filesystem in question, and it
was the *ONLY* one on the drive, meaning there is absolutely 0 chance of
there being pending writes. There's nothing to write to.
This is an interesting assumption leading to a wrong
We're using some X4540s, with OpenSolaris 2008.11.
According to my testing, to optimize our systems for our specific
workload, I've determined that we get the best performance with the
write cache disabled on every disk, and with zfs:zfs_nocacheflush=1 set
in /etc/system.
The only issue is
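For reference, the /etc/system tunable mentioned above is normally written as follows and takes effect at the next reboot:
set zfs:zfs_nocacheflush = 1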
On Wed, February 11, 2009 15:51, Frank Cusack wrote:
On February 11, 2009 3:02:48 PM -0600 Tim t...@tcsac.net wrote:
It's hardly uncommon for an entire datacenter to go down, redundant power or not. When it does, if it means I have to restore hundreds of terabytes if not petabytes from
Thanks to John K. and Richard E. for an answer that would have never, ever
occurred to me...
The problem was with the shell. For whatever reason, /usr/bin/ksh can't rejoin
the files correctly. When I switched to /sbin/sh, the rejoin worked fine, the
cksums matched, and the zfs recv worked
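For anyone searching the archives later, the flow being described is roughly this (dataset and file names are made up):
# sending side: dump the stream to a file and split it into chunks
zfs send tank/data@snap > /var/tmp/stream
split -b 1000m /var/tmp/stream /var/tmp/stream.part.
# receiving side: verify and rejoin the chunks, then feed them to zfs recv
cat /var/tmp/stream.part.* | cksum
cat /var/tmp/stream.part.* | zfs recv tank/data_restored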
Yup, it was an absolute nightmare to diagnose on top of everything else. It definitely doesn't happen in Windows either. I really want somebody to try snv_94 on a Thumper to see if you get the same behaviour there, or whether it's unique to Supermicro's Marvell card.
On a Thumper under S10U5
On Wed, February 11, 2009 15:52, Bob Friesenhahn wrote:
On Wed, 11 Feb 2009, Tim wrote:
Right, except the OP stated he unmounted the filesystem in question, and it was the *ONLY* one on the drive, meaning there is absolutely 0 chance of there being pending writes. There's nothing to write
so, basically, my question is: Is there a way to quickly or permanently
disable the write cache on every disk in an X4540?
Hmmm... the only idea I have is to see how format(1M) does it and
steal the code to write a small disable-cache tool. :-)
Have a look at uscsi(7I) and specifically the
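A very rough, untested sketch of the format(1M) route, assuming the expert-mode menu path on these disks is cache -> write_cache -> disable (repeat for each disk; the change may not persist across a reboot or power cycle):
format -e -d c0t0d0 <<EOF
cache
write_cache
disable
quit
quit
quit
EOF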
Hello Greg,
Wednesday, February 11, 2009, 10:13:39 PM, you wrote:
GM We're using some X4540s, with OpenSolaris 2008.11.
GM According to my testing, to optimize our systems for our specific
GM workload, I've determined that we get the best performance with the
GM write cache disabled on every
On Wed, 11 Feb 2009, David Dyer-Bennet wrote:
As a practical matter, it seems unreasonable to me that there would be
uncommitted data in the pool after some quite short period of time when
there's no new IO activity to the pool (not just the filesystem). 5 or 10
seconds, maybe? (Possibly
Hi,
just found on a X4500 with S10u6:
fmd: [ID 441519 daemon.error] SUNW-MSG-ID: ZFS-8000-GH, TYPE: Fault, VER: 1,
SEVERITY: Major
EVENT-TIME: Wed Feb 11 16:03:26 CET 2009
PLATFORM: Sun Fire X4500, CSN: 00:14:4F:20:E0:2C , HOSTNAME: peng
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID:
I need to disappoint you here: an LED that is inactive for a few seconds is a very bad indicator that there are no pending writes. I used to experience this with a stick on Ubuntu, which was silent until the 'umount', and then it started to write for some 10 seconds.
On the other hand, you are spot-on w.r.t. 'umount'. Once
On 11-Feb-09, at 5:52 PM, David Dyer-Bennet wrote:
On Wed, February 11, 2009 15:52, Bob Friesenhahn wrote:
On Wed, 11 Feb 2009, Tim wrote:
Right, except the OP stated he unmounted the filesystem in question, and it was the *ONLY* one on the drive, meaning there is absolutely 0 chance
On 11-Feb-09, at 7:16 PM, Uwe Dippel wrote:
I need to disappoint you here: an LED that is inactive for a few seconds is a very bad indicator that there are no pending writes. I used to experience this with a stick on Ubuntu, which was silent until the 'umount', and then it started to write for some 10 seconds.
On the
On Wed, Feb 11, 2009 at 2:13 PM, Greg Mason gma...@msu.edu wrote:
We're using some X4540s, with OpenSolaris 2008.11.
According to my testing, to optimize our systems for our specific workload,
I've determined that we get the best performance with the write cache
disabled on every disk, and
Does anyone know if this card will work in a standard pci express slot?
No, I don't believe so, I found out that it's a port unique to Supermicro
boards, and you only get one port per board, which pretty much rules this card
out.
However, there's a PCI-e card from LSI using the same chipset. LSISAS3081, or
something like that. A quick search for LSISAS and
Toby,
sad that you fall for the last resort of the marketing droids here. All
manufacturers (and there are only a few left) will sue the hell out of you if
you state that their drives don't 'sync'. And each and every drive I have ever
used did. So the talk about a distinct borderline between
Hmm... somebody needs to tell Supermicro's sales staff then. I specifically
didn't buy their cards after they told me it wouldn't work.
On Wed, February 11, 2009 17:25, Bob Friesenhahn wrote:
Regardless, it seems that the ZFS problems with crummy hardware are
primarily due to the crummy hardware writing the data to the disk in
a different order than expected. ZFS expects that after a sync that
all pending writes are
On Wed, Feb 11, 2009 at 8:46 PM, Ross myxi...@googlemail.com wrote:
Hmm... somebody needs to tell Supermicro's sales staff then. I
specifically didn't buy their cards after they told me it wouldn't work.
Looks like Brandon Wagoner was the one who got it working here. Guess we
can see if
On Wed, February 11, 2009 18:25, Toby Thain wrote:
Absolutely. You should never get actual corruption (inconsistency)
at any time *except* in the case Jeff Bonwick explained: i.e. faulty/
misbehaving hardware! (That's one meaning of always consistent on
disk.)
I think this is well
May I doubt that there are drives that don't 'sync'? That means you have a good
chance of corrupted data at a normal 'reboot'; or just at a 'umount' (without
considering ZFS here).
May I doubt the marketing drab that you need to buy a USCSI or whatnot to have
functional 'sync' at a shutdown or
Brent wrote:
Does anyone know if this card will work in a standard pci express slot?
Yes. I have an AOC-USAS-L8i working in a regular PCI-E slot in my Tyan
2927 motherboard.
The AOC-SAT2-MV8 also works in a regular PCI slot (although it is a PCI-X
card).