the same size).
Shrinking the vdevs requires moving data. Once you move data, you've
got to either invalidate the snapshots or update them. I think that
will be one of the more difficult parts.
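Until something like shrink-in-place exists, the closest workaround I'm
aware of is to copy the data into a second, smaller pool and carry the
snapshots across with a full send plus incremental sends. This is only a
rough sketch; the pool, filesystem, and snapshot names (tank/fs, snap1,
snap2, and a new pool called small) are made up for illustration:

  # zfs send tank/fs@snap1 | zfs receive small/fs
  # zfs send -i snap1 tank/fs@snap2 | zfs receive small/fs
  # zpool destroy tank       (only after verifying the copy on "small")

That sidesteps the snapshot problem by recreating the snapshots on the
destination rather than trying to rewrite them in place.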
--
Darren Dunham
import a pool on two hosts at once
definitely still applies.
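The safe hand-off is still to export on one host before importing on the
other. A minimal sketch, with the pool name tank assumed:

  hostA# zpool export tank
  hostB# zpool import tank

"zpool import -f" merely overrides the "pool may be in use" check, which
is exactly how a pool ends up imported on two hosts at once, so reserve
it for pools that really are dead on the other side.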
--
Darren Dunham
the pool.
--
Darren Dunham
a given (full disk) pool, it could be scanned and imported
anyway. I don't know if such a feature would be useful for the
implementation of ZFS root or not. Either way it would have to wait for
the hostid stuff to go in.
--
Darren Dunham
What about ZFS root? And compatibility with Live Upgrade? Any estimate
of the timetable?
ZFS root has been previously announced as targeted for update 4.
--
Darren Dunham
of them was
exported).
I hope ZFS won't get too worried about them if I do this and they're not
both imported (thinking about moving LUNs over from a test system that
had been using the same pool name).
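If both sets of LUNs ever are visible at once, "zpool import" with no
arguments lists every importable pool along with a numeric pool ID, so
you can pick one by ID and rename it on import rather than guessing
which "tank" is which. Sketch only; the ID and names below are made up:

  # zpool import
    (two pools named tank are listed, each with its own numeric id)
  # zpool import 6389742316731271 tanktest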
--
Darren Dunham
to some limitations in how ZFS and the sd driver communicate: the sd
driver will take a really long time to time out each of what may be
several outstanding I/Os to it.
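If the stalls really are sd's per-command timeout and retry behavior,
the tunable usually mentioned is sd_io_time (ssd_io_time for the
fibre-channel driver) in /etc/system. The value below is only an
illustration and should be tested before being relied on:

  * shorten sd's per-command timeout from the 60-second default
  set sd:sd_io_time = 0x14

A reboot is needed for /etc/system changes to take effect.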
--
Darren Dunham
to this.
Until very recently there was no general tool to help with this. The
unsupported method of destroying volume information to create new unique
volumes wasn't dangerous enough to keep people from using this
technique. :-)
ZFS is different enough that the techniques used on VxVM do not apply.
--
Darren Dunham
no way to set a global default for these, so you have to
remember it each time, making the SMF solution more attractive.
Perfect (although I have to try it). In a cluster framework, the
cluster can remember to do it each time, so that shouldn't be an issue.
--
Darren Dunham
Anything that does had better make its own determination, independently,
of which host owns the pool.
--
Darren Dunham
and be a
performance hit. Then you go and do the same thing with the other
pools.
Today this isn't possible because you cannot migrate data off of a VDEV
to reclaim the storage.
--
Darren Dunham
to do it easily. In the future, you may be
able to expand the RAIDz device (or, if you could remove a VDEV from a
pool, you could rotate through and remove each of the RAIDz devices,
followed by the addition of a new 8-column RAIDz).
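What does work today is adding another top-level RAIDz vdev alongside
the existing ones; the existing RAIDz can't be widened in place. A
sketch, assuming eight new disks:

  # zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
  # zpool status tank    (the new 8-disk raidz appears as an additional vdev)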
--
Darren Dunham
this as a completely new
file, losing potential space savings from snapshots.
--
Darren Dunham
-loadable library that wrapped acl(2)?
Have it just hand back the same result, except return 0 when the actual
call failed with errno set to ENOSYS.
Backups would still have to mess with either legacy mounts or explicit
save set specification, but those are much easier tasks.
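For the legacy-mount route, the filesystem just needs to stop being
mounted by ZFS itself so the backup product sees an ordinary vfstab
entry. Roughly, with tank/home and /export/home as assumed names:

  # zfs set mountpoint=legacy tank/home
  # echo "tank/home - /export/home zfs - yes -" >> /etc/vfstab
  # mount /export/home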
--
Darren Dunham
(that would be zfs send or the like).
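At its simplest, something along these lines (dataset and file names
assumed):

  # zfs snapshot tank/data@tonight
  # zfs send tank/data@tonight > /backup/tank_data_tonight.zfs

or pipe the stream through ssh to "zfs receive" on another machine
instead of writing it to a file.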
--
Darren Dunham
...)
True.
--
Darren Dunham
support'?
--
Darren Dunham
a business/management issue and not a technical
issue to resolve.
So pretend it's 500G. The suggestions still seem very valid to me.
--
Darren Dunham
Do you know if this is for 7.3, or will it work for 7.2 too?
We are still using 7.2 and have no plan to update to 7.3 yet...
Right now we are doing snapshots and sending them to tape with tar,
which is ugly...
Do you have ACLs you need to maintain? Can you just specify a snapshot
as a saveset directly?
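Every snapshot shows up as a read-only directory under .zfs/snapshot at
the top of the filesystem, so tar (or the backup software) can be
pointed straight at it instead of at the live data; note that plain tar
won't carry ZFS/NFSv4 ACLs. Names below are assumed:

  # zfs snapshot tank/data@nightly
  # cd /tank/data/.zfs/snapshot/nightly
  # tar cf /dev/rmt/0 .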
--
Darren Dunham
the long term solution for this type of corruption? Will there
be a 'fsck'-like utility that can find all valid items and make sure
they're connected properly, or is something else possible?
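There's no offline fsck-style pass today; the closest existing tool is a
scrub, which walks everything the pool currently references, verifies
checksums, and repairs from redundancy where it can, but it won't
reattach items whose linking metadata is already gone:

  # zpool scrub tank
  # zpool status tank    (shows scrub progress and any errors found)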
--
Darren Dunham
.
--
Darren Dunham
still misunderstanding. :-)
--
Darren Dunham
with the underlying scsi driver?)
and more importantly why none of the data written after the snapshot was
taken seems to be around any longer. Is the fact that the files were
empty relevant?
Thanks.
--
Darren Dunham
-rw-r--r-- 1 root root 0 May 10 15:37 c
-rw-r--r-- 1 root root 0 May 10 15:37 d
-rw-r--r-- 1 root root 0 May 10 15:37 e
# rmdir snap1 snap2
rmdir: directory snap2: Directory is a mount point or in use
# zfs destroy zpool/[EMAIL PROTECTED]
#
--
Darren Dunham
all over the place?
How does this affect sequential read performance? Does ZFS' read-
ahead mean that this isn't the problem it could be?
That was discussed to some extent in this thread:
http://www.opensolaris.org/jive/thread.jspa?messageID=14997
--
Darren Dunham