Yep, compression is generally a nice win for backups. The amount of
compression will depend on the nature of the data. If it's all mpegs,
you won't see any advantage because they're already compressed. But
for just about everything else, 2-3x is typical.
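To illustrate (the pool and dataset names here are made up for the example), turning on compression and checking what you actually got looks like:

```shell
# Enable compression on a hypothetical backup dataset
zfs set compression=on tank/backups

# After writing data, check the ratio ZFS actually achieved
zfs get compressratio tank/backups
```

Already-compressed data (MPEGs, JPEGs, zip archives) will report close to 1.00x, while text, logs, and databases often land in the 2x-3x range mentioned above.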
As for hot spares, they are indeed
I am trying to understand the ZFS layout from a PDF available on the Solaris site.
I have a couple of queries related to it.
1. The layout says the root vdev is the main vdev, with the other vdevs (physical
and top-level vdevs) arranged in a tree format. So is the root vdev a
single entity for the whole
Hello all..
I'm running some tests with iozone on Linux (initiator), writing to
a Solaris target (ZVOL). I think there is a bug in the Linux initiator software
(open-iscsi), but I just want your opinion, to see if the target could be the
problem. It seems to me like a corruption in the
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
What does 'which chmod' show? I think that Indiana chose to have
/usr/gnu/bin at the head of the path, so you're probably picking up the
GNU chmod, which doesn't handle NFSv4 ACLs. Manually running
/usr/bin/chmod should solve your problem.
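A quick sketch of the situation (the ACL spec and filename below are made-up examples):

```shell
# On Indiana, /usr/gnu/bin precedes /usr/bin, so the PATH lookup
# finds the GNU chmod first:
which chmod            # reports the first chmod on $PATH

# Bypass the PATH lookup entirely by invoking the Solaris chmod,
# which understands NFSv4 ACL syntax like A+...:allow
/usr/bin/chmod A+user:webservd:read_data:allow somefile
```

The GNU chmod only understands the traditional mode syntax, so handing it an NFSv4 ACL spec fails regardless of the file or filesystem.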
Would it be better if this issue is brought to
ZFS is based on storage pools, and a pool consists of vdevs in a tree
structure. There is only one root vdev for each pool, and any number of vdevs
as the root's children.
A top-level vdev can be a logical vdev or a physical vdev. If a top-level vdev is a
logical vdev, its type can be mirror
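To make the tree concrete (device names below are hypothetical), a pool built from one mirror and one plain disk looks like this:

```shell
# Root vdev with two top-level children: a mirror (logical vdev)
# and a bare disk (physical vdev)
zpool create tank mirror c1t0d0 c1t1d0 c2t0d0

# zpool status shows the resulting vdev tree:
#   tank           <- root vdev (exactly one per pool)
#     mirror-0     <- top-level logical vdev
#       c1t0d0     <- physical vdev (leaf)
#       c1t1d0     <- physical vdev (leaf)
#     c2t0d0       <- top-level physical vdev
zpool status tank
```

Data is striped across the top-level vdevs; redundancy, if any, lives inside each logical vdev.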
Hi bu manzhao,
Thanks for your quick response. I have some further queries on your answer
(written with the sridhar tag). It would be great if you could provide inputs for
them to understand ZFS better.
ZFS is based on storage pools, and a pool consists of vdevs in a tree
structure.
This sounds like the result of the FUSE-ZFS for Linux developer being hired by
the Lustre folks. I'm not sure of the implications of Lustre using the ZFS DMU.
Does this mean a subset of ZFS functionality, binary compatibility of written
disks or what?
You can read about it in the developer's
On Thu, Dec 20, 2007 at 04:19:52AM -0800, Ivan Wang wrote:
What does 'which chmod' show? I think that Indiana
chose to have
/usr/gnu/bin at the head of the path, so you're
probably picking up the
GNU chmod, which doesn't handle NFSv4 ACLs. Manually
running
/usr/bin/chmod should
Hello Mr. Irvine,
Did you fix that?
Do you have formal Solaris support? I mean, will they fix the problem on
your Solaris 10 production server, or will you need to upgrade to an
OpenSolaris version?
I'm deploying a ZFS environment, but when I think about terabytes and how mature
ZFS is... I don't
I looked through the solarisinternals ZFS best practices and am not
completely sure of the best scenario.
I have a Solaris 10 6/06 Generic_125100-10 box with an attached 3510 array
and would like to use ZFS on it. Should I create multiple logical disks
through the RAID controller and then create a ZFS RAID file
Hi,
On 21/12/2007, at 6:43 AM, msl wrote:
Hello Mr. Irvine,
Did you fix that?
No
Do you have formal Solaris support?
No. However I did upload the crash dump files and I was hoping that a
Solaris engineer might have found them interesting.
I was going to move the zpool to a new
The recursive option creates a separate snapshot for every child filesystem,
making backup management more difficult if there are many child filesystems.
The ability to natively create a single snapshot/backup of an entire ZFS
hierarchy of filesystems would be a very nice thing indeed.
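For illustration (the dataset and snapshot names are invented), the behavior described above looks like:

```shell
# -r snapshots the dataset AND every descendant filesystem,
# each getting its own snapshot with the same name:
zfs snapshot -r tank/home@backup-20071221

# This produces one snapshot object per child, e.g.
#   tank/home@backup-20071221
#   tank/home/alice@backup-20071221
#   tank/home/bob@backup-20071221
# rather than a single combined snapshot of the hierarchy.
zfs list -t snapshot -r tank/home
```

So each child's snapshot must be sent/managed individually, which is exactly the backup-management burden described above.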
Aye, that's what I was hoping for. Since ZFS seems to support nesting
filesystems, I was hoping that each filesystem would contain the others, so a
snapshot of the root filesystem would also be a snapshot of the others.
I'm still not sure if that's the case or not, it could be that -r just
I've only started using ZFS this week, and hadn't even touched a Unix
welcome to ZFS... here is a simple script you can start with:
#!/bin/sh
snaps=15
today=`date +%j`
nuke=`expr $today - $snaps`
yesterday=`expr $today - 1`
# day-of-year from `date +%j` is 1-based, so wrap when we reach 0
if [ $yesterday -lt 1 ] ; then
yesterday=365
fi
if [ $nuke -lt 1 ] ; then
nuke=`expr $nuke + 365`
fi
Morris Hooten wrote:
I looked through the solarisinternals ZFS best practices and am not
completely sure of the best scenario.
ok, perhaps we should add some clarifications...
I have a Solaris 10 6/06 Generic_125100-10 box with an attached 3510 array
and would like to use ZFS on it. Should I
Thanks Richard.
Richard Elling wrote:
Morris Hooten wrote:
I looked through the solarisinternals ZFS best practices and am not
completely sure of the best scenario.
ok, perhaps we should add some clarifications...
I have a Solaris 10 6/06 Generic_125100-10 box with an attached 3510 array
Ross wrote:
Aye, that's what I was hoping for. Since ZFS seems to support nesting
filesystems, I was hoping that each filesystem would contain the others, so a
snapshot of the root filesystem would also be a snapshot of the others.
IMHO, a file system is created only when you need to
sudarshan sridhar wrote:
Hi bu manzhao,
Thanks for your quick response. I have some further queries on your
answer (written with the sridhar tag). It would be great if you could
provide inputs for them to understand ZFS better.
ZFS is based on storage pools, and a pool consists of
Moving to indiana-discuss...
Please do not start battling each other; bringing this issue to indiana-discuss
is only meant to show a potential gotcha when assuming a specific PATH setting in
utilities/scripts.
Cheers,
Ivan
Hi Owen,
Owen Davies wrote:
I'm not sure of the implications of Lustre using the ZFS DMU. Does this mean a subset of ZFS functionality, binary compatibility of written disks, or what?
It means Lustre will be using ZFS (instead of ext4/ldiskfs) as its disk
storage backend in metadata and