Hi,
Following is what I am looking for. I need to take a hardware snapshot of
the devices that sit under a ZFS file system. I can identify the type of
configuration using zpool status and get the list of disks, for example
when a single LUN is assigned to a zpool and a file system is created on it.
I would like to know whether I can create a ZFS file system without a ZFS
storage pool. I would also like to know whether I can create a ZFS pool on
a Veritas volume.
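For reference, a minimal sketch of that kind of inspection; the pool name
'tank' is hypothetical:

  # Show the pool layout; the config section lists each vdev and disk.
  zpool status tank

  # Per-device view of the same pool:
  zpool iostat -v tank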
A ZFS filesystem without a zpool doesn't make much sense. Unless I'm badly
mistaken, you have to have the pool to get the filesystem.
As far as using a Veritas volume for the zpools, that is easily done. We do
that where I work for almost all of our ZFS filesystems as a way to
facilitate
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ramesh Babu
I would like to know whether I can create a ZFS file system without a ZFS
storage pool. I would also like to know whether I can create a ZFS pool on
a Veritas volume.
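A minimal sketch of the Veritas case; the disk group and volume names here
are invented, not taken from the thread:

  # Create a zpool on top of an existing VxVM volume (hypothetical names).
  zpool create tank /dev/vx/dsk/mydg/myvol

  # The pool comes with a root filesystem; more can be created under it.
  zfs create tank/data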
On Sep 14, 2010, at 6:59 AM, Wolfraider wrote:
We are looking into the possibility of adding dedicated ZIL and/or L2ARC
devices to our pool. We are looking into getting four 32 GB Intel X25-E SSD
drives. Would this be a good solution to slow write speeds?
Maybe, maybe not. Use zilstat to
We have two Intel X25-E 32GB SSD drives in one of our servers. I'm using
one for ZIL and one for L2ARC, and we are having great results so far.
Cheers,
-Chris
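For reference, a sketch of how such devices are attached; the pool and
device names are hypothetical:

  # Add a dedicated log (slog/ZIL) device to the pool 'tank':
  zpool add tank log c2t0d0

  # Add a cache (L2ARC) device:
  zpool add tank cache c2t1d0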
Any resolution to this issue? I'm experiencing the same annoying lockd
problem with Mac OS X 10.6 clients. I am at pool version 14, filesystem
version 3. Would somehow going back to the earlier 8/2 setup make things
better?
-Nabil
Has anyone done much testing of just using the solid state devices (F20 or
F5100) as devices for ZFS pools? Are there any concerns with running in this
mode versus using solid state devices for L2ARC cache? Second, has anyone
done this sort of testing with MLC based solid state drives? What has your
We actually did some pretty serious testing with SATA SLCs from Sun directly
hosting zpools (not as L2ARC). We saw some really bad performance, as though
something were wrong, but we couldn't find the cause.
If you search my name on this list you'll find the description of the problem.
--m
On Wed, Sep 15, 2010 at 12:08:20PM -0700, Nabil wrote:
Any resolution to this issue? I'm experiencing the same annoying
lockd problem with Mac OS X 10.6 clients. I am at pool version 14,
filesystem version 3. Would somehow going back to the earlier 8/2
setup make things better?
As noted in the earlier
On Sep 14, 2010, at 4:58 AM, Edward Ned Harvey wrote:
From: Haudy Kazemi [mailto:kaze0...@umn.edu]
With regard to multiuser systems and how that negates the need to
defragment, I think that is only partially true. As long as the files
are defragmented enough so that each particular read
From: Richard Elling [mailto:rich...@nexenta.com]
Suppose you want to ensure at least 99% efficiency of the drive: at most 1%
of the time wasted by seeking.
This is practically impossible on a HDD. If you need this, use SSD.
Lately, Richard, you're saying some of the craziest illogical
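To put rough numbers on the 99% figure (the seek time and throughput below
are typical assumed values, not from the thread): if an average seek costs
about 8 ms, wasting at most 1% of the time on seeks allows roughly one seek
every 800 ms. At ~100 MB/s sequential throughput, that means about 80 MB of
contiguous transfer between seeks, which may be the sense in which 99%
efficiency is called practically impossible on a busy HDD.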
On Wed, Sep 15, 2010 at 05:18:08PM -0400, Edward Ned Harvey wrote:
It is absolutely not difficult to avoid fragmentation on a spindle drive, at
the level I described. Just keep plenty of empty space in your drive, and
you won't have a fragmentation problem. (Except as required by COW.) How
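For anyone who wants to watch this in practice, pool capacity is easy to
monitor; 'tank' is a hypothetical pool name, and the ~80% rule of thumb is
a commonly cited guideline rather than anything from this thread:

  # Keep an eye on CAP; performance commonly degrades as a pool runs
  # much past roughly 80% full.
  zpool list tank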
On Wed, Sep 15, 2010 at 12:27 PM, Brad Diggs brad.di...@oracle.com wrote:
Has anyone done much testing of just using the solid state devices (F20 or
F5100) as devices for ZFS pools? Are there any concerns with running in
this mode versus using solid state devices for L2ARC cache?
Ed,
See my answers inline:
I don't think your question is clear. What do you mean by Oracle backed by
storage LUNs?
We'll be using LUNs from a storage array rather than local controller
disks. The LUNs are mapped to the DB server and from there initialized
under ZFS.
Do you mean on Oracle hardware?
When using compression, are the on-disk record sizes determined before
or after compression is applied? In other words, if record size is set
to 128k, is that the amount of data fed into the compression engine,
or is the output size trimmed to fit? I think it's the former, but I'm
not certain.
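One way to check empirically (a sketch; the dataset name is hypothetical,
and on ZFS du reports physical blocks while ls reports logical size):

  # Create a compressed dataset with a 128k recordsize.
  zfs create -o recordsize=128k -o compression=on tank/test

  # Write some compressible data, then compare logical vs physical size.
  cp /var/adm/messages /tank/test/
  ls -lh /tank/test/messages   # logical size
  du -h /tank/test/messages    # on-disk (post-compression) size
  zfs get compressratio tank/test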
From: Richard Elling [mailto:rich...@nexenta.com]
It is practically impossible to keep a drive from seeking. It is also
The first time somebody (Richard) said you can't prevent a drive from
seeking, I just decided to ignore it. But then it was said twice. (Ian.)
I don't get why anybody is
14x 256 GB MLC SSDs in a raidz2 array have worked fine for us. Performance
seems to be mostly limited by the RAID controller operating in JBOD mode.
Raidz2 allows sufficient redundancy to replace any MLC drives that develop
issues, and when you have that many consumer-level SSDs, some will
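A sketch of that replacement step; the pool and device names are invented:

  # Swap a failed SSD in place and let the raidz2 vdev resilver:
  zpool replace tank c3t5d0
  zpool status tank   # watch resilver progress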