Is there a difference between setting mountpoint=legacy and mountpoint=none?
--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
Just spotted one - is this intentional?
You can't delegate a dataset to a zone if mountpoint=legacy.
Changing it to 'none' works fine.
vera / # zfs create tank/delegated
vera / # zfs get mountpoint tank/delegated
NAME            PROPERTY    VALUE   SOURCE
tank/delegated
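For context, a minimal sketch of the delegation step in question (the zone name 'myzone' is only illustrative; per the report above, it succeeds once the mountpoint is changed from 'legacy' to 'none'):

vera / # zfs set mountpoint=none tank/delegated
vera / # zonecfg -z myzone
zonecfg:myzone> add dataset
zonecfg:myzone:dataset> set name=tank/delegated
zonecfg:myzone:dataset> end
zonecfg:myzone> commit
zonecfg:myzone> exit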
Is there a difference - Yep.
'legacy' tells ZFS to refer to the /etc/vfstab file for the filesystem's
mount point and options, whereas 'none' tells ZFS not to mount the
filesystem at all. You would then need to set a mountpoint again with
'zfs set mountpoint=/mountpoint poolname/fsname' before ZFS will mount it.
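A short sketch of both settings (the filesystem name tank/home and the
mount point /export/home are only illustrative). With 'legacy', mounting
is left to /etc/vfstab and mount(1M):

vera / # zfs set mountpoint=legacy tank/home
vera / # echo 'tank/home - /export/home zfs - yes -' >> /etc/vfstab
vera / # mount /export/home

With 'none', nothing mounts it and no vfstab entry applies; to get it
mounted again (after umounting and removing the vfstab line above), give
it a real path:

vera / # zfs set mountpoint=none tank/home
vera / # zfs set mountpoint=/export/home tank/home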
Erast Benson wrote:
On Tue, 2006-11-28 at 09:46, Darren J Moffat wrote:
Lori Alt wrote:
Latest plan is to release zfs boot with U5. It definitely isn't going
to make U4.
We have new prototype bits, but they haven't been putback yet. There are
a number of design decisions that
Lori Alt wrote:
OK, got the message. We'll work on making the ON bits available ASAP.
The soonest we could putback is build 56, which should become available
to the OpenSolaris community in late January. (Note that I'm not saying
that we WILL putback into build 56 because I won't know that
On Tue, Nov 28, 2006 at 06:06:24PM, Ceri Davies wrote:
But you could presumably get that exact effect by not listing a
filesystem in /etc/vfstab.
Yes, but someone could still manually mount the filesystem using 'mount
-F zfs ...'. If you set the mountpoint to 'none', then it cannot be mounted at all.
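For illustration only (the dataset name is made up), the manual-mount route that 'none' closes off:

vera / # zfs set mountpoint=legacy tank/secret
vera / # mount -F zfs tank/secret /mnt     <- allowed: legacy filesystems are mounted by hand
vera / # umount /mnt
vera / # zfs set mountpoint=none tank/secret
vera / # mount -F zfs tank/secret /mnt     <- refused, per the discussion above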
So is there a command to make the spare get used, or
do I have to remove it as a spare and re-add it if it doesn't
get used automatically?
Is this a bug to be fixed, or will this always be the case when
the disks aren't exactly the same size?
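For what it's worth, a spare can also be pressed into service by hand with 'zpool replace' (the pool and device names below are illustrative):

# zpool replace tank c1t2d0 c1t5d0     (swap the failed disk for the spare)
# zpool status tank                    (the spare should now show as in use)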
Hi All,
I've got myself into the situation where I have multiple pools with the
same name (long story). How can I get the ids for these pools so I can
address them individually and delete or import them?
thanks,
peter
Peter Buckingham wrote:
I've got myself into the situation where I have multiple pools with the
same name (long story). How can I get the ids for these pools so I can
address them individually and delete or import them?
Never mind - as is always the case, I figured this out just after hitting send.
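For the archives, the sequence that sorts this out (the names below are illustrative): 'zpool import' with no arguments lists every importable pool along with a numeric id, and that id can stand in for the name when names collide:

# zpool import                   (lists the candidate pools and their ids)
# zpool import <id> tank_old     (import one of them under a new name)
# zpool destroy tank_old         (or keep it, as appropriate)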
So I rebuilt my production mail server as Solaris 10 06/06 with zfs, it ran
for three months, and it's had no hardware errors. But my zfs file system
seems to have died a quiet death. Sun engineering response was to point to
the FMRI, which says to throw out the zfs partition and start over. I'm
On 11/28/06, David Dyer-Bennet [EMAIL PROTECTED] wrote:
Looks to me like another example of ZFS noticing and reporting an
error that would go quietly by on any other filesystem. And if you're
concerned with the integrity of the data, why not use some ZFS
redundancy? (I'm guessing you're
On Tue, Nov 28, 2006 at 03:02:59PM -0500, Elizabeth Schwartz wrote:
So I rebuilt my production mail server as Solaris 10 06/06 with zfs, it ran
for three months, and it's had no hardware errors. But my zfs file system
seems to have died a quiet death. Sun engineering response was to point to
Do both RAID-Z and Mirror redundancy use checksums on ZFS? Or just RAID-Z?
Thanks in advance,
J
On 11/28/06, David Dyer-Bennet [EMAIL PROTECTED] wrote:
On 11/28/06, Elizabeth Schwartz [EMAIL PROTECTED] wrote:
So I rebuilt my production mail server as Solaris 10 06/06 with zfs, it ran
for
They both use checksums and can provide self-healing data.
--Bill
On Tue, Nov 28, 2006 at 02:54:56PM -0700, Jason J. W. Williams wrote:
Do both RAID-Z and Mirror redundancy use checksums on ZFS? Or just RAID-Z?
Thanks in advance,
J
On 11/28/06, David Dyer-Bennet [EMAIL PROTECTED]
Jason J. W. Williams wrote:
Do both RAID-Z and Mirror redundancy use checksums on ZFS? Or just RAID-Z?
Both do. Not only that, but even if you have no redundancy in the pool you
still get checksumming. That is what has actually happened in this case:
the checksumming in ZFS has detected errors
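To see the checksum activity on any pool, redundant or not, something like the following works (the pool name is illustrative); the CKSUM column counts checksum errors per device, and 'zpool status -v' also lists any files affected by permanent errors:

# zpool status -v tank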
Elizabeth Schwartz wrote:
On 11/28/06, David Dyer-Bennet [EMAIL PROTECTED] wrote:
Looks to me like another example of ZFS noticing and reporting an
error that would go quietly by on any other filesystem. And if you're
concerned with the integrity of the
On 11/28/06, Elizabeth Schwartz [EMAIL PROTECTED] wrote:
On 11/28/06, David Dyer-Bennet [EMAIL PROTECTED] wrote:
Looks to me like another example of ZFS noticing and reporting an
error that would go quietly by on any other filesystem. And if you're
concerned with the integrity of the data,
Matthew Ahrens wrote:
Elizabeth Schwartz wrote:
How would I use more redundancy?
By creating a zpool with some redundancy, e.g. 'zpool create poolname
mirror disk1 disk2'.
After the fact, you can add a mirror using 'zpool attach'.
-- richard
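Putting those two commands together (the disk names are illustrative):

# zpool create poolname mirror c1t0d0 c1t1d0     (new pool as a two-way mirror)
# zpool attach poolname c1t0d0 c2t0d0            (mirror an existing device after the fact)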
I suspect this will be the #1 complaint about zfs as it becomes more
popular. It worked before with ufs and hw raid, now with zfs it says
my data is corrupt! zfs sux0rs!
#2: how do I grow a raid-z?
The answers to these should probably be in a faq somewhere. I'd argue
that the best practices
Jason J. W. Williams wrote:
Is it possible to non-destructively change RAID types in a zpool while
the data remains online?
Yes. With constraints, however. What exactly are you trying to do?
-- richard
On 11/28/06, Frank Cusack [EMAIL PROTECTED] wrote:
I suspect this will be the #1 complaint about zfs as it becomes more
popular. It worked before with ufs and hw raid, now with zfs it says
my data is corrupt! zfs sux0rs!
That's not the problem, so much as zfs says my file system is
Hi Richard,
Originally, my thinking was that I'd like to drop one member out of a
3-member RAID-Z and turn it into a RAID-1 zpool.
Although, at the moment I'm not sure.
Currently, I have 3 volume groups in my array with 4 disks each (12
disks total). These VGs are sliced into 3 volumes each. I then
On 28-Nov-06, at 7:02 PM, Elizabeth Schwartz wrote:
On 11/28/06, Frank Cusack [EMAIL PROTECTED] wrote:
I suspect this will be the #1 complaint about zfs as it becomes more
popular. It worked before with ufs and hw raid, now with zfs it says
my data is corrupt! zfs sux0rs!
That's not the
comment below...
Jason J. W. Williams wrote:
Hi Richard,
Originally, my thinking was that I'd like to drop one member out of a
3-member RAID-Z and turn it into a RAID-1 zpool.
Although, at the moment I'm not sure.
Currently, I have 3 volume groups in my array with 4 disks each (12
disks total). These
On Tue, 28 Nov 2006, Matthew Ahrens wrote:
Elizabeth Schwartz wrote:
On 11/28/06, David Dyer-Bennet [EMAIL PROTECTED] wrote:
Looks to me like another example of ZFS noticing and reporting an
error that would go quietly by on any other filesystem.
Oh my, one day after I posted my horror story, another one strikes. This is
validation of the design objectives of ZFS; it looks like this type of thing
happens more often than one might think. In the past we'd have just attributed
this type of problem to some application-induced corruption; now ZFS is pinning
Well, I fixed the HW, but I had one bad file, and the problem was that ZFS
was saying delete the pool and restore from tape when, it turns out, the
answer is just to find the file with the bad inode, delete it, clear the device
and scrub. Maybe more of a documentation problem, but it sure is
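For anyone else who hits this, a sketch of that sequence, assuming a pool named 'tank' ('zpool status -v' prints the paths of files with permanent errors on builds that support it):

# zpool status -v tank     (identify the damaged file)
# rm /path/to/damaged/file
# zpool clear tank
# zpool scrub tank
# zpool status tank        (the error counts should stay at zero afterwards)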
On 28-Nov-06, at 10:01 PM, Elizabeth Schwartz wrote:
Well, I fixed the HW but I had one bad file, and the problem was
that ZFS was saying delete the pool and restore from tape when,
it turns out, the answer is just find the file with the bad inode,
delete it, clear the device and scrub.
With zfs, there's this ominous message saying destroy the filesystem and
restore from tape. That's not so good, for one corrupt file.
It is strictly correct that to restore the data you'd need to refer to a
backup, in this case.
It is not, however, correct that to restore the data you
On 11/28/06, Elizabeth Schwartz [EMAIL PROTECTED] wrote:
Well, I fixed the HW but I had one bad file, and the problem was that ZFS
was saying delete the pool and restore from tape when, it turns out, the
answer is just find the file with the bad inode, delete it, clear the device
and scrub.
No, you still have the hardware problem.
What hardware problem?
There seems to be an unspoken assumption that any checksum error detected by
ZFS is caused by a relatively high error rate in the underlying hardware.
There are at least two classes of hardware-related errors. One class are those
Jim,
James F. Hranicky wrote:
Sanjeev Bagewadi wrote:
Jim,
We did hit a similar issue yesterday on build 50 and build 45, although the
node did not hang.
In one of the cases we saw that the hot spare was not of the same
size... can you check if this is true?
It looks like they're all
On Tue, Nov 28, 2006 at 08:03:33PM -0500, Toby Thain wrote:
As others have pointed out, you wouldn't have reached this point with
redundancy - the file would have remained intact despite the hardware
failure. It is strictly correct that to restore the data you'd need
to refer to a
Glad it worked for you. I suspect in your case the corruption happened way down
in the tree and you could get around it by pruning the tree (rm the file) below
the point of corruption. I suspect this could be due to very localized
corruption, like an alpha-particle problem where a bit was flipped
Anton B. Rang wrote:
Clearly, the existence of a high error rate (say, more than one error every two
weeks on a server pushing 100 MB/second) would point to a hardware or software
problem; but fewer errors may simply be “normal” for standard hardware
I currently have a server that has a much