On 18 Oct 2010, at 08:44, Habony, Zsolt zsolt.hab...@hp.com wrote:
Hi,
I have seen a similar question on this list in the archive but
haven’t seen the answer.
Can I avoid striping across top level vdevs ?
If I use a zpool which is one LUN from the
On 18 Oct 2010, at 12:40, Habony, Zsolt wrote:
Is there a way to avoid it, or can we be sure that the problem does not
exist at all ?
Grow the existing LUN rather than adding another one.
The only way to have ZFS not stripe is to not give it devices to stripe
over. So stick with simple
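The advice above — grow the one LUN instead of adding a second — can be sketched with ZFS's expansion support. Pool and device names here are hypothetical; the storage team grows the LUN first, then ZFS is told to use the new space:

```shell
# After the SAN-side LUN has been grown (hypothetical names):
zpool set autoexpand=on apppool    # pick up future LUN growth automatically
zpool online -e apppool c2t0d0     # or expand this one device explicitly now
zpool list apppool                 # SIZE should reflect the larger LUN
```

Because the pool still has a single top-level vdev, no striping across vdevs can occur.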
On 18 Oct 2010, at 17:44, Habony, Zsolt wrote:
Thank you all for the comments.
You should imagine a datacenter with
- standards that do not depend entirely on me,
- a SAN serving many OSs, one of them Solaris (and not the majority).
So you get LUNs from the storage team and there is
Hi,
I have seen a similar question on this list in the archive but
haven't seen the answer.
Can I avoid striping across top level vdevs ?
If I use a zpool which is one LUN from the SAN, and when it
becomes full I add a new LUN to it.
But I cannot guarantee that the
On 18/10/2010 07:44, Habony, Zsolt wrote:
I have seen a similar question on this list in the archive but haven’t
seen the answer.
Can I avoid striping across top level vdevs ?
If I use a zpool which is one LUN from the SAN, and when it becomes full
I add a new LUN to it.
But I cannot
Is there a way to avoid it, or can we be sure that the problem does not exist
at all ?
-Original Message-
From: Darren J Moffat [mailto:darr...@opensolaris.org]
Sent: 18 October 2010 10:19
To: Habony, Zsolt
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] How to avoid striping ?
No. The basic principle of the zpool is dynamic striping across vdevs in order
to ensure that all available spindles are contributing to the workload. If
you want/need more granular control over what data goes to which disk, then
you'll need to create multiple pools.
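The dynamic-striping behaviour described above, and the multiple-pools alternative, can be sketched as follows (pool and device names are hypothetical):

```shell
# Adding a second LUN creates a second top-level vdev;
# ZFS then dynamically stripes new writes across both:
zpool create apppool c2t0d0        # pool backed by a single LUN
zpool add apppool c2t1d0           # second top-level vdev -- striping begins

# The only way to keep data confined to one LUN is a separate pool per LUN:
zpool create apppool2 c2t1d0
```

Note that `zpool add` is irreversible on these releases: once the vdev is added, it cannot be removed and striping cannot be undone.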
Just create a new pool
On 18/10/2010 09:28, Habony, Zsolt wrote:
I worry about head thrashing. Though the memory cache of a large storage array should make
the problem
Is that really something you should be worried about with all the other
software and hardware between ZFS and the actual drives ?
If that is a problem then
On 18/10/2010 10:01, Habony, Zsolt wrote:
If I can force concatenation, then I do not have to investigate, where are the
existing parts of the filesystems.
You can't; the code for concatenation rather than striping does not
exist, and there are no plans to add it.
Instead of assuming you
On Mon, Oct 18, 2010 at 1:28 AM, Habony, Zsolt zsolt.hab...@hp.com wrote:
Is there a way to avoid it, or can we be sure that the problem does not exist
at all ?
ZFS will coalesce asynchronous writes, which should help with most of
the head thrash on write. Using a log device will convert sync
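Adding a separate log device, as suggested above, is a one-line operation (device names are hypothetical; a low-latency LUN is the usual choice):

```shell
# Dedicate a fast LUN as a separate intent log (slog):
zpool add apppool log c3t0d0
zpool status apppool    # the device now appears under a "logs" section
```

Synchronous writes then land on the log device first, rather than forcing seeks on the main data LUNs.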
Hi,
Habony, Zsolt writes:
You have an application filesystem from one LUN. (vxfs is expensive, ufs/svm
is not really able to handle online filesystem increase. Thus we plan to use
zfs for application filesystems.)
What do you mean by not really?
Use metattach to grow a metadevice or soft
On 10/18/10 2:13 AM, Rainer J.H. Brandt wrote:
Habony, Zsolt writes:
You have an application filesystem from one LUN. (vxfs is
expensive, ufs/svm is not really able to handle online filesystem
increase. Thus we plan to use zfs for application filesystems.)
What do you mean by not really? Use
You have an application filesystem from one LUN. (vxfs is expensive, ufs/svm
is not really able to handle online filesystem increase. Thus we plan to use
zfs for application filesystems.)
What do you mean by not really?
...
Use growfs to grow UFS on the grown device.
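The metattach/growfs path mentioned above — growing a UFS file system online on SVM — looks roughly like this (metadevice, slice, and mount point are hypothetical):

```shell
# Grow an SVM metadevice, then grow UFS on it while mounted:
metattach d10 c1t2d0s0              # concatenate another slice onto d10
growfs -M /app /dev/md/rdsk/d10     # expand UFS online at mount point /app
```

This is the mechanism the "not really able to handle online filesystem increase" remark is disputing: growth is possible, though it takes two manual steps and a volume manager layer.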
I know it's off-topic
Is there a way to avoid it, or can we be sure that the problem does not
exist at all ?
Grow the existing LUN rather than adding another one.
The only way to have ZFS not stripe is to not give it devices to stripe
over. So stick with simple mirrors ...
(I do not mirror, as the storage gives
On 10/18/2010 4:28 AM, Habony, Zsolt wrote:
I worry about head thrashing.
Why?
If your SAN group gives you a LUN that is at the opposite end of the
array, I would think that was because they had already assigned the
space in the middle to other
On 10/18/2010 5:40 AM, Habony, Zsolt wrote:
(I do not mirror, as the storage gives redundancy behind LUNs.)
By not enabling redundancy (Mirror or RAIDZ[123]) at the ZFS level,
you are opening yourself to corruption problems that the underlying
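The point being made above — that SAN-side redundancy does not let ZFS repair corruption — can be illustrated with two hedged options (names hypothetical):

```shell
# Option 1: give ZFS two LUNs and mirror at the ZFS level,
# so checksum failures can be self-healed from the other copy:
zpool create apppool mirror c2t0d0 c2t1d0

# Option 2: on a single-LUN pool, keep two copies of each block
# within the one device (protects against bad blocks, not LUN loss):
zfs set copies=2 apppool/data
```

Without either, ZFS checksums still detect corruption but have no second copy to repair from.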
To: Habony, Zsolt
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] How to avoid striping ?
On 18/10/2010 07:44, Habony, Zsolt wrote:
I have seen a similar question on this list in the archive but haven't
seen the answer.
Can I avoid striping across top level vdevs ?
If I use a zpool
On 2010-Oct-18 17:45:34 +0800, casper@sun.com wrote:
Write-lock (wlock) the specified file system. wlock
suspends writes that would modify the file system.
Access times are not kept while a file system is
write-locked.
All
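The wlock behaviour quoted from the man page above corresponds to the `lockfs` command on Solaris; a minimal usage sketch (mount point hypothetical):

```shell
# Apply and release a write lock on a mounted UFS file system:
lockfs -w /export/home    # wlock: suspend writes that would modify the fs
lockfs /export/home       # report the current lock type
lockfs -u /export/home    # unlock, resuming normal operation
```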