2009/9/7 Ritesh Raj Sarraf r...@researchut.com:
The Discard/Trim command is also available as part of the SCSI standard now
(as UNMAP). If you look at it from a SAN perspective, you will need a little
of both: filesystems need to be able to deallocate blocks, and the same
should then be triggered as
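For illustration only (not from the thread): on a Linux initiator, the
sg3_utils package can issue a raw SCSI UNMAP down to a LUN. The device path
and LBA range below are placeholders:
# sg_unmap --lba=0x1000 --num=8 /dev/sg1
This unmaps 8 blocks starting at LBA 0x1000 on the given SCSI device.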
Piero Gramenzi wrote:
Hi,
I have a disk array that is providing striped LUNs to my Solaris box. Hence
I'd like to simply concatenate those LUNs without adding another layer of
striping. Is this possible with ZFS?
As far as I understand, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get a RAID0
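For context: ZFS has no plain concat mode. In a setup like the one sketched
above, each LUN becomes its own top-level vdev, and ZFS stripes writes
dynamically across all top-level vdevs, so the command shown does behave
like a RAID0 across the LUNs. Pool and device names here are the poster's
placeholders:
# zpool create myPool lun-1 lun-2
# zpool add myPool lun-3
The add creates another top-level vdev; new writes are then spread across
all three.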
On 09/07/09 07:29 PM, David Dyer-Bennet wrote:
Is anybody doing this [zfs send/recv] routinely now on OpenSolaris
2009.06, and if so can I see your commands?
Wouldn't a simple recursive send/recv work in your case? I
imagine all kinds of folks are doing it already. The only problem
with it,
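A minimal sketch of the recursive form, with pool, filesystem, and snapshot
names as placeholders:
# zfs snapshot -r tank/home@backup1
# zfs send -R tank/home@backup1 | zfs recv -d backup
The -R flag replicates the whole dataset hierarchy under the snapshot, and
-d tells the receiving side to recreate the sent dataset names under the
target pool.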
True, this setup is not designed for high random I/O, but rather for lots of
storage with fair performance. This box is for our dev/test backend storage.
Our production VI runs in the 500-700 IOPS range on average (80+ VMs,
production plus dev/test), so for our development VI we are expecting half of
The context is a file in a dataset cloned from a snapshot.
If the file has not been modified since the clone was created,
I'd like to ascribe to the file the attributes associated with
the origin snapshot.
1) Is it feasible to determine, from the vnode relating to
the file in the clone, whether that
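From userland (not at the vnode level the question is about), a rough
heuristic is to compare the file's timestamps against the creation time of
the origin snapshot; dataset and file names below are placeholders:
# zfs get -H -o value creation tank/fs@snap
# ls -E /tank/clone/somefile
ls -E on Solaris prints full-resolution timestamps, so a ctime older than
the snapshot's creation time suggests the file has not changed since the
clone was taken. This is only an approximation, not a reliable in-kernel
test.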
I'm new to ZFS and a scenario recently came up that I couldn't figure out.
We are used to using Veritas Volume Manager, so that may affect our
thinking on this approach.
Here it is.
1. ServerA was originally built, let's say, in January '09 with the
Solaris 10 build from 10/08, with zfs as
Hi Jon,
If the zpool import command shows the old rpool and its associated disk
(c1t1d0s0), then you might be able to import it like this:
# zpool import rpool rpool2
which renames the original pool, rpool, to rpool2 upon import.
If the disk c1t1d0s0 was overwritten in any way, then I'm not sure
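A sketch of the check that precedes the rename (run as root; output elided):
# zpool import
With no pool argument, zpool import just lists the pools that are available
to import and the devices they were found on.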
Hello experts,
I have Sun Cluster 3.2/ZFS and AVS 4 at the main site and ZFS/AVS 4 at the
DR site. I am trying to replicate ZFS volumes using AVS, and I am getting
the error below:
sndradm: Error: volume
/dev/rdsk/c4t600A0B80005B1E5702934A27A8CCd0s0 is not part of a
disk group,
please specify
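A guess at the shape of the enable command involved, not a confirmed fix: an
SNDR set is normally enabled with sndradm -e, and in a Sun Cluster
configuration the set definition can carry an I/O group (g) and a cluster
tag (C) naming the device group the volumes belong to. The DR host, bitmap
devices, and group names below are placeholders:
# sndradm -n -e main-host /dev/rdsk/c4t600A0B80005B1E5702934A27A8CCd0s0 \
    /dev/rdsk/c4t600A0B80005B1E5702934A27A8CCd0s1 \
    dr-host /dev/rdsk/cXtYd0s0 /dev/rdsk/cXtYd0s1 \
    ip async g zfsgroup C zfs-dg
Here -n suppresses the confirmation prompt and -e enables the set.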
I left the scrub running all day:
scrub: scrub in progress for 67h57m, 100.00% done, 0h0m to go
but as you can see, it didn't finish. So, I ran pkg image-update,
rebooted, and am now running b122. On reboot, the scrub restarted
from the beginning, and it currently estimates 17h to go. I'll post
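For reference, the commands involved here (pool name is a placeholder):
# zpool status -v tank
shows scrub progress and any errors found so far, and
# zpool scrub -s tank
cancels a scrub that is in progress.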