Hi
We have the following scenario/problem:
Our zpool resides on a single LUN on a Hitachi Storage Array. We are
thinking about making a physical clone of the zpool with the ShadowImage
functionality.
ShadowImage takes a snapshot of the LUN, and copies all the blocks to a
new LUN (physical copy). In
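A minimal sketch of how such a physical copy is usually brought up on a second host (pool name "tank" is hypothetical): split the ShadowImage pair, present the S-VOL LUN to the other host, then:

  # zpool import          (lists importable pools found on the new LUN)
  # zpool import -f tank  (-f is needed because the copied labels still record the original host)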
Hi Chris,
Both servers have the same setup: OS on a local HW RAID mirror, other filesystems on a SAN.
We found really bad performance, but also that under that heavy I/O the ZFS pool was
more or less frozen.
I mean, a zone living on the same zpool was completely unusable because of the I/O
load.
We use FSS, but
Hi colleagues,
IHAC who wants to use ZFS with his HDS box. He now asks how he can do the
following:
- Create ZFS pool/fs on HDS LUNs
- Create Copy with ShadowImage inside HDS
- Disconnect ShadowImage
- Import ShadowImage with ZFS in addition to the existing ZFS pool/fs
I wonder how ZFS is
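A rough sketch of what the last step would look like, assuming hypothetical pool names (note: with the ZFS bits of this era the copy carries the same pool GUID as the original, which is exactly the complication discussed below):

  # zpool import                    (should list the copy as an importable pool)
  # zpool import -f tank tank_copy  (import it under a new name, alongside the original)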
Hi Gino,
Can you post 'zpool status' for each pool and 'zfs get all' for each fs?
Any interesting data in the dmesg output?
Sure.
1) nothing on dmesg (are you thinking about shared IRQ?)
2) Only using one pool for tests:
# zpool status
pool: zpool1
state: ONLINE
scrub: none
We use FSS, but CPU load was really low under the tests.
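For what it's worth, a couple of standard observation commands that are often useful while a pool appears frozen under load (pool name taken from the output above):

  # zpool iostat -v zpool1 5
  # iostat -xnz 5
  # prstat -Z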
I don't see a patch for this on the SunSolve website. I've opened a service
request to get this patch for Sol10 06/06. Stay tuned.
I would go with:
(3) Three 4D+1P h/w RAID-5 groups, no hot spare, mapped to one LUN each.
Set up a ZFS pool of one RAID-Z group consisting of those three LUNs.
Only ~3200GB of available space, but what looks like very good resiliency
in the face of multiple disk failures.
IMHO building
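A minimal sketch of that layout, with hypothetical device names (each cXtYdZ being one of the 4D+1P RAID-5 LUNs):

  # zpool create tank raidz c4t0d0 c4t1d0 c4t2d0
  # zpool status tank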
Robert Milkowski wrote:
Hello James,
I believe that storing the hostid, etc. in a label and checking whether it
matches on auto-import is the right solution.
Before that is implemented, you can use -R with home-clusters right now and
not worry about auto-import.
hostid isn't sufficient (got a scar), so
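For reference, a minimal sketch of the -R approach (pool and path names are hypothetical):

  # zpool import -R /a tank
A pool imported with an alternate root is not recorded in /etc/zfs/zpool.cache, so it will not be auto-imported at the next boot.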
On Mon, Sep 18, 2006 at 02:20:24PM -0400, Torrey McMahon wrote:
1 - ZFS is self-consistent, but if you take a LUN snapshot then any
transactions in flight might not be completed, and the pool - which you
need to snap in its entirety - might not be consistent. The more LUNs
you have in the
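One conservative way around the in-flight-transaction problem, sketched with a hypothetical pool name, is to export before taking the array snapshot and re-import afterwards:

  # zpool export tank
  (take the ShadowImage/array snapshot of every LUN in the pool)
  # zpool import tank
An exported pool has no transactions in flight, so the copied LUNs are consistent as a set.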
It's a valid use case in the high-end enterprise space.
While it probably makes good sense to use ZFS for snapshot creation,
there are still cases where array-based snapshots/clones/BCVs make
sense. (DR/Array-based replication, data-verification, separate
spindle-pool, legacy/migration reasons,
Hi everyone,
I am planning on creating a local SAN via NFS(v4) and several redundant
nodes.
I have been using DRBD on Linux before and am now asking whether some of
you have experience with on-demand network filesystem mirrors.
I still have little Solaris sysadmin know-how, but I am
I'm really not an expert on ZFS, but at least from my point of view, to
handle such cases ZFS has to handle at least the following points:
- GUID: a new/different GUID has to be assigned
- LUNs: ZFS has to be aware that the device trees are different, if
these are part of some kind of metadata stored
On Mon, Sep 18, 2006 at 03:29:49PM -0400, Jonathan Edwards wrote:
err .. I believe the point is that you will have multiple disks
claiming to be the same disk, which can wreak havoc on a system (e.g.
I've got a 4-disk pool with a unique GUID and 8 disks claiming to be
part of that same
On Mon, Sep 18, 2006 at 10:06:21PM +0200, Joerg Haederli wrote:
I'm really not an expert on ZFS, but at least from my point of view, to
handle such cases ZFS has to handle at least the following points:
- GUID: a new/different GUID has to be assigned
As I mentioned previously, ZFS handles this
Eric Schrock wrote:
On Mon, Sep 18, 2006 at 10:06:21PM +0200, Joerg Haederli wrote:
It looks as if this has not been implemented yet, nor even tested.
What hasn't been implemented? As far as I can tell, this is a request
for the previously mentioned RFE (ability to change GUIDs on
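(For what it's worth, that RFE eventually shipped in much later ZFS releases as a reguid subcommand - not available in the Solaris 10 / OpenSolaris bits discussed here; pool name hypothetical:

  # zpool reguid tank
)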
Torrey McMahon wrote:
A day later I turn the host off. I go to the array and offer all six
LUNs, the pool that was in use as well as the snapshot that I took a
day previously, and offer all three LUNs to the host.
Errr... that should be
A day later I turn the host off. I go to the
Joerg Haederli wrote:
I'm really not an expert on ZFS, but at least from my point of view, to
handle such cases ZFS has to handle at least the following points:
- GUID: a new/different GUID has to be assigned
- LUNs: ZFS has to be aware that the device trees are different, if
these are part of some
In my experience, we would not normally try to mount two different
copies of the same data at the same time on a single host. To avoid
confusion, we would especially not want to do this if the data represents
two different points in time. I would encourage you to stick with more
On September 18, 2006 5:45:08 PM +0200 Jakob Praher [EMAIL PROTECTED] wrote:
Hi everyone,
I am planning on creating a local SAN via NFS(v4) and several redundant nodes.
Huh. How do you create a SAN with NFS?
I have been using DRBD on Linux before and am now asking whether some of you
The initial idea was to make a dataset/snapshot and
clone (fast) and then separate the clone from its
snapshot. The clone could then be used as a new
independent dataset.
The send/receive subcommands are probably the only
way to duplicate a dataset.
I'm still not sure I understand
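A minimal sketch of the two approaches being compared, with hypothetical dataset names:

  # zfs snapshot tank/data@golden
  # zfs clone tank/data@golden tank/copy    (fast, but the clone keeps depending on the snapshot)
  # zfs promote tank/copy                   (reverses the dependency; the origin snapshot now belongs to the clone)
  # zfs send tank/data@golden | zfs receive tank/copy2   (a truly independent duplicate)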
[apologies for being away from my data last week]
David Dyer-Bennet wrote:
The more I look at it the more I think that a second copy on the same
disk doesn't protect against very much real-world risk. Am I wrong
here? Are partial (small) disk corruptions more common than I think?
I don't have
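For context, the mechanism being debated is per-dataset ditto blocks for user data, which surfaced as the copies property; a minimal sketch with a hypothetical dataset name:

  # zfs set copies=2 tank/home
  # zfs get copies tank/home
Each block is then stored twice, spread apart on the disk where possible, which helps against localized corruption but not against losing the whole disk.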
On 9/1/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
Marlanne DeLaSource wrote:
Thanks for all your answers.
The initial idea was to make a dataset/snapshot and clone (fast) and then
separate the clone from its snapshot. The clone could then be used as a new
independent dataset.
The
On 9/18/06, Richard Elling - PAE [EMAIL PROTECTED] wrote:
[apologies for being away from my data last week]
David Dyer-Bennet wrote:
The more I look at it the more I think that a second copy on the same
disk doesn't protect against very much real-world risk. Am I wrong
here? Are
On Mon, Sep 18, 2006 at 06:03:47PM -0400, Torrey McMahon wrote:
It's not the transport layer. It works fine as the LUN IDs are different
and the devices will come up with different /dev/dsk entries. (And if
not then you can fix that on the array in most cases.) The problem is
that devices
more below...
David Dyer-Bennet wrote:
On 9/18/06, Richard Elling - PAE [EMAIL PROTECTED] wrote:
[apologies for being away from my data last week]
David Dyer-Bennet wrote:
The more I look at it the more I think that a second copy on the same
disk doesn't protect against very much
Jan Hendrik Mangold wrote:
I didn't ask the original question, but I have a scenario where I
want to use clone as well and encounter a (designed?) behaviour I am
trying to understand.
I create a filesystem A with ZFS and modify it to a point where I
create a snapshot [EMAIL PROTECTED] Then I
Mike Gerdts wrote:
A couple of scenarios from environments that I work in, using legacy
file systems and volume managers:
1) Various test copies need to be on different spindles to remove any
perceived or real performance impact imposed by one or the other.
Arguably, by having the I/O activity