ZFS will try to enable the write cache if a whole disk is given.
Additionally, keep in mind that the outer region of a disk is much faster.
And it's portable. If you use whole disks, you can export the
pool from one machine and import it on another. There's no way
to export just one slice and leave
Is ZFS any less efficient using just a portion of a
disk versus the entire disk?
As others mentioned, if we're given a whole disk (i.e. no slice
is specified) then we can safely enable the write cache.
One other effect -- probably not huge -- is that the block placement
algorithm is most
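As a rough sketch of the whole-disk vs. slice distinction (the pool name "tank" and the device names are only examples, not from the thread):

  # zpool create tank c0t2d0      (whole disk given; ZFS puts its own label on it and can safely enable the write cache)
  # zpool create tank c0t2d0s0    (only a slice given; ZFS leaves the drive's write cache setting alone)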
Jeff Bonwick wrote:
Is ZFS any less efficient using just a portion of a
disk versus the entire disk?
As others mentioned, if we're given a whole disk (i.e. no slice
is specified) then we can safely enable the write cache.
With all of the talk about performance problems due to
ZFS doing a sync to force the drives to commit data to
disk, how much of a benefit is this - especially
for NFS?
It depends. For some drives it's literally 10x.
Also, if I was lucky enough to have a working prestoserv
card
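For reference, on many Solaris drives the write cache can be inspected (and toggled) from the expert mode of format(1M); the exact menu entries vary by drive type, so treat this as a sketch only:

  # format -e
  (select the disk)
  format> cache
  cache> write_cache
  write_cache> display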
Greetings all,
I have been given the task of playing around with ZFS and a StorEdge 9970 (HDS
9970) disk array. This setup will be duplicated into a production system later
with zones as well.
Since I am new to ZFS and big storage arrays such as the 9970, I have a few
thoughts/questions that
Well,
You're spot on. It turns out that our datacentre boys changed the umask of root to
0027.
:-(
On Aug 3, 2006, at 8:17 AM, Jeff Bonwick wrote:
ZFS will try to enable the write cache if a whole disk is given.
Additionally, keep in mind that the outer region of a disk is much faster.
And it's portable. If you use whole disks, you can export the
pool from one machine and import it on another.
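A sketch of that workflow, with a made-up pool name, assuming the same disks are visible to both machines (directly or after being physically moved):

  oldhost# zpool export tank
  (move or re-present the disks to the new machine)
  newhost# zpool import tank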
Patrick Petit wrote:
Hi,
Using a ZFS emulated volume, I wasn't expecting to see a system [1]
hang caused by a SCSI error. What do you think? The error is
intermittent. When it happens, the Solaris/Xen dom0 console keeps
displaying the following message and the system hangs.
*Aug 3
Robert Milkowski [EMAIL PROTECTED] writes:
Additionally, keep in mind that the outer region of a disk is much faster.
So if you want to put the OS on the disk and then designate the rest of the
disk for an application, then putting ZFS on a slice beginning at cyl 0 is
probably best in most scenarios.
This has the
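A sketch of that kind of layout (slice numbers, cylinder ranges, and the pool name are invented):

  s0   cyl 0 - 2000      ZFS (the outer, faster region of the disk)
  s1+  cyl 2001 - end    remaining slices for the OS (root, swap, ...)
  # zpool create tank c0t0d0s0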
Darren Reed wrote:
Patrick Petit wrote:
Hi,
Using a ZFS emulated volume, I wasn't expecting to see a system [1]
hang caused by a SCSI error. What do you think? The error is
intermittent. When it happens, the Solaris/Xen dom0 console keeps
displaying the following message and the system
Path failover is not handled by ZFS. You would use mpxio, or other
software, to take care of path failover.
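For example, on Solaris 10 with supported HBAs, MPxIO can usually be enabled with stmsboot; details differ by release and driver, so this is only a sketch:

  # stmsboot -e    (enable MPxIO multipathing; a reboot is required)
  # stmsboot -L    (after the reboot, list the old-to-new device name mappings)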
Pierre Klovsjo wrote:
Greetings all,
I have been given the task of playing around with ZFS and a StorEdge 9970 (HDS 9970) disk array. This setup will be duplicated into a production
And it's portable. If you use whole disks, you can export the
pool from one machine and import it on another. There's no way
to export just one slice and leave the others behind...
I got the impression that the export command exported the contents
of the pool, not the underlying
I'd filed 6452505 (zfs create should set permissions on underlying mountpoint)
so that this shouldn't cause problems in the future
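A rough illustration of the symptom and the interim workaround (the dataset name is made up): with a restrictive root umask, the new mountpoint can end up inaccessible to other users until its mode is fixed by hand.

  # umask 027
  # zfs create tank/home
  # ls -ld /tank/home     (can show drwxr-x--- rather than drwxr-xr-x)
  # chmod 755 /tank/home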
On Aug 3, 2006, at 5:14 PM, Darren Dunham wrote:
And it's portable. If you use whole disks, you can export the
pool from one machine and import it on another. There's no way
to export just one slice and leave the others behind...
I got the impression that the export command exported the
Anton B. Rang wrote:
I'd filed 6452505 (zfs create should set permissions on underlying mountpoint)
so that this shouldn't cause problems in the future
6238072 might also be of interest.
Darren
Ahh, interesting information. Thanks, folks; I have a better
understanding of this now.
--joe
Jeff Bonwick wrote:
Is ZFS any less efficient using just a portion of a
disk versus the entire disk?
As others mentioned, if we're given a whole disk (i.e. no slice
is specified) then
Folks,
I realize this thread has run its course, but I've got a variant of
the original question: What performance problems or anomalies might
one see if mixing both whole disks _and_ slices within the same pool?
I have in mind some Sun boxes (V440, T2000, X4200) with four internal
drives.
Eric Schrock wrote:
On Thu, Aug 03, 2006 at 10:24:12AM -0700, Marion Hakanson wrote:
zpool create mirror c0t2d0 c0t3d0 mirror c0t0d0s5 c0t1d0s5
Is this allowed? Is it stupid? Will performance be so bad/bizarre that
it should be avoided at all costs? Anybody tried it?
Yes, it's
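For the curious, a sketch of such a mixed pool (the pool name is invented, devices as in the example above):

  # zpool create tank mirror c0t2d0 c0t3d0 mirror c0t0d0s5 c0t1d0s5
  # zpool status tank    (shows one mirror of whole disks plus one mirror of slices)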
Apologies for the internal URL; I'm including the list of patches for
everyone's benefit:
sparc Patches
* ZFS Patches
o 118833-17 SunOS 5.10: kernel patch
o 118925-02 SunOS 5.10: unistd header file patch
o 119578-20 SunOS 5.10: FMA Patch
o
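A quick way to check whether one of the listed patches is already on a system (a sketch; the patch ID is taken from the list above):

  # showrev -p | grep 118833    (shows the installed revision of the kernel patch, if any)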
Hi,
Some additional information. Irrespective of the SCSI error reported
earlier, I have established that Solaris dom0 hangs anyway when a domU
is booted from a disk image located on an emulated ZFS volume. Has this
also been observed by other members of the community? Is there a known
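Here an "emulated ZFS volume" is a zvol; a sketch of how one might be created and handed to a domU (names and size are invented, not Patrick's actual configuration):

  # zfs create -V 8g tank/domu1
  (point the domU's disk definition at /dev/zvol/dsk/tank/domu1)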
Patrick Petit wrote:
Hi,
Some additional information. Irrespective of the SCSI error reported
earlier, I have established that Solaris dom0 hangs anyway when a domU
is booted from a disk image located on an emulated ZFS volume. Has this
also been observed by other members of the community? Is
On Thu, Aug 03, 2006 at 03:50:20PM -0700, Philip Brown wrote:
Err... the way you have described that seems backward to me, and it violates
existing, expected Solaris behaviour, not to mention the logical
separation of filesystems.
zfs should not go changing the permissions on the
Anton B. Rang wrote:
I'd filed 6452505 (zfs create should set permissions on underlying
mountpoint) so that this shouldn't cause problems in the future
Err... the way you have described that seems backward to me, and it violates
existing, expected Solaris behaviour, not to mention
On Thu, Aug 03, 2006 at 01:35:54AM -0700, Tom Simpson wrote:
Well,
You're spot on. It turns out that our datacentre boys changed the umask of root
to 0027.
:-(
Many years ago, back in the days of Solaris 2.5.1, changing root's umask
to 027 caused problems if you, say, restarted the
Richard Lowe wrote:
Patrick Petit wrote:
Hi,
Some additional information. Irrespective of the SCSI error reported
earlier, I have established that Solaris dom0 hangs anyway when a domU
is booted from a disk image located on an emulated ZFS volume. Has
this also been observed by other members