Do you want data availability, data retention, space, or performance?
-- richard
Robert Milkowski wrote:
Hello zfs-discuss,
While waiting for the Thumpers to arrive I'm thinking about how to
configure them. I would like to use raid-z. As a Thumper has 6 SATA
controllers with 8 ports each, then maybe it
Is there any change regarding fsflush, such as the autoup tunable, for ZFS?
Thanks
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
ZFS/NFSv4 introduced a new ACL model (see acl(2) ...nevada (OpenSolaris),
Solaris 10u2). There is no compatibility bridge between the
GETACL/SETACL/GETACLCNT and ACE_GETACL/ACE_SETACL/ACE_GETACLCNT functions of
the acl(2) syscall. Because this is Solaris-specific (samba.org defines its
internal
Nicolas Williams [EMAIL PROTECTED] wrote:
On Wed, Oct 11, 2006 at 08:24:13PM +0200, Joerg Schilling wrote:
Before we start defining the first official functionality for this Sun
feature, we should define a mapping for Mac OS, FreeBSD and Linux. It may
make sense to define a sub
Spencer Shepler [EMAIL PROTECTED] wrote:
I didn't comment on the error conditions that can occur during
the writing of data upon close(). What you describe is the preferred
method of obtaining any errors that occur during the writing of data.
This occurs because the NFS client is writing
Hello Richard,
Friday, October 13, 2006, 8:05:18 AM, you wrote:
REP Do you want data availability, data retention, space, or performance?
data availability, space, performance
However we're talking about quite a lot of small IOs (r+w).
The real question was what do you think about creating
Hi,
Sorry if this has been raised before.
Question: Is it possible to
1. Have the Solaris 10 OS partitions under SDS, and have a single partition on
that same disk (without SDS) be a ZFS slice?
2. Partition the ZFS slice into many partitions, with each partition holding a
zone? The idea is to create many
Hello Roshan,
Friday, October 13, 2006, 1:12:12 PM, you wrote:
RP Hi,
RP Sorry if this has been raised before.
RP Question: Is it possible to
RP 1. Have the Solaris 10 OS partitions under SDS, and have a single partition
RP on that same disk (without SDS) be a ZFS slice?
Yes.
RP 2. Partition the zfs
Roshan Perera wrote:
Hi,
Sorry if this has been raised before.
Question: Is it possible to
1. Have the Solaris 10 OS partitions under SDS, and have a single partition on
that same disk (without SDS) be a ZFS slice?
Yes.
2. Partition the ZFS slice into many partitions, with each partition holding a
On Fri, Joerg Schilling wrote:
Spencer Shepler [EMAIL PROTECTED] wrote:
I didn't comment on the error conditions that can occur during
the writing of data upon close(). What you describe is the preferred
method of obtaining any errors that occur during the writing of data.
This occurs
Hi Jeff, Robert,
Thanks for the reply. Your interpretation is correct and the answer spot on.
This is going to be at a VIP client's QA/production environment and their first
introduction to Solaris 10, zones and ZFS. Anything unsupported is not allowed.
Hence I may have to wait for the fix. Do you know
Spencer Shepler [EMAIL PROTECTED] wrote:
Sorry, the code in Solaris would behave as I described. Upon the
application closing the file, modified data is written to the server.
The client waits for completion of those writes. If there is an error,
it is returned to the caller of close().
Jeff Victor [EMAIL PROTECTED] wrote:
Your wording did not match reality; this is why I wrote this.
You wrote that upon close() the client will first do something similar to
fsync on that file. The problem is that this is done asynchronously and the
close() return value
On Fri, Joerg Schilling wrote:
Spencer Shepler [EMAIL PROTECTED] wrote:
Sorry, the code in Solaris would behave as I described. Upon the
application closing the file, modified data is written to the server.
The client waits for completion of those writes. If there is an error,
it is
Does it matter if the /dev names of the partitions change (e.g. from
/dev/dsk/c2t2250CC611005d3s0 to a different/shorter name on another machine
not using Sun HBA drivers)?
thanks
keith
If the file does not exist then ZFS will not attempt to open any
pools at boot. You must
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
--
Regards,
Jeremy
Does it matter if the /dev names of the partitions change (e.g. from
/dev/dsk/c2t2250CC611005d3s0 to a different/shorter name on another machine
not using Sun HBA drivers)?
It should not. As long as all the disks are visible and ZFS can read
the labels, it should be able to import
Most ZFS improvements should be available through patches. Some may require
moving to a future update (for instance, ZFS booting, which may have other
implications throughout the system).
On most systems, you won’t see a lot of difference between hardware or software
mirroring.
The benefit of
Jeremy Teo wrote:
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
It would be really great to automatically select the proper recordsize
for each file! How do you suggest doing so?
--matt
Roshan Perera wrote:
Hi Jeff Robert, Thanks for the reply. Your interpretation is
correct and the answer spot on.
This is going to be at a VIP clients QA/production environment and
first introduction to 10, zones and zfs. Anything unsupported is not
allowed. Hence I may have to wait for the
Robert Milkowski wrote:
Hello Richard,
Friday, October 13, 2006, 8:05:18 AM, you wrote:
REP Do you want data availability, data retention, space, or performance?
data availability, space, performance
However we're talking about quite a lot of small IOs (r+w).
Then you should seriously
Hello Experts
I would appreciate it if somebody could comment on a sendmail environment on
Solaris 10.
How will ZFS perform if one has millions of files in a sendmail message
store directory under a ZFS filesystem, compared to UFS or VxFS?
--
Thanks Regards,
On Fri, Oct 13, 2006 at 11:03:51AM +0200, Joerg Schilling wrote:
Nicolas Williams [EMAIL PROTECTED] wrote:
On Wed, Oct 11, 2006 at 08:24:13PM +0200, Joerg Schilling wrote:
Before we start defining the first official functionality for this Sun
feature,
we should define a mapping for
For what it's worth, close-to-open consistency was added to Linux NFS in the
2.4.20 kernel (late 2002 timeframe). This might be the source of some of the
confusion.
One technique would be to keep a histogram of read/write sizes.
Presumably one would want to do this only during a “tuning phase” after the
file was first created, or when access patterns change. (A shift to smaller
record sizes can be detected by a large proportion of write operations which
- Original Message -
Subject: no tool to get expected disk usage reports
From: Dennis Clarke [EMAIL PROTECTED]
Date: Fri, October 13, 2006 14:29
To: zfs-discuss@opensolaris.org
On Fri, Oct 13, 2006 at 08:30:27AM -0700, Matthew Ahrens wrote:
Jeremy Teo wrote:
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
It would be really great to automatically select the proper recordsize
for each file! How do
I don't understand why you can't use 'zpool status'? That will show
the pools and the physical devices in each and is also a pretty basic
command. Examples are given in the sysadmin docs and manpages for
ZFS on the opensolaris ZFS community page.
I realize it's not quite the same
Hello Matthew,
Friday, October 13, 2006, 5:37:45 PM, you wrote:
MA Robert Milkowski wrote:
Hello Richard,
Friday, October 13, 2006, 8:05:18 AM, you wrote:
REP Do you want data availability, data retention, space, or performance?
data availability, space, performance
However we're
Hello Ramneek,
Friday, October 13, 2006, 6:07:22 PM, you wrote:
RS Hello Experts
RS Would appreciate if somebody can comment on sendmail environment on
RS solaris 10.
RS How will ZFS perform if one has millions of files in a sendmail message
RS store directory under a ZFS filesystem compared to UFS
Hello Noel,
Friday, October 13, 2006, 11:22:06 PM, you wrote:
ND I don't understand why you can't use 'zpool status'? That will show
ND the pools and the physical devices in each and is also a pretty basic
ND command. Examples are given in the sysadmin docs and manpages for
ND ZFS on the
Robert Milkowski wrote:
Hello Noel,
Friday, October 13, 2006, 11:22:06 PM, you wrote:
ND I don't understand why you can't use 'zpool status'? That will show
ND the pools and the physical devices in each and is also a pretty basic
ND command. Examples are given in the sysadmin docs and
Group,
If there is a bad vfs ops template, why wouldn't you just return(error)
rather than trying to create the vnode ops template? My suggestion is:
after the cmn_err(), then return(error);
Mitchell Erblich
On 10/13/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
Using ZFS for a zone's root is currently planned to be supported in
Solaris 10 update 5, but we are working on moving it up to update 4.
Are there any areas where the community can help with this? Would
code or "me too!" support calls help the
ZFS ignores fsflush. Here's a snippet of the code in zfs_sync():
	/*
	 * SYNC_ATTR is used by fsflush() to force old filesystems like UFS
	 * to sync metadata, which they would otherwise cache indefinitely.
	 * Semantically, the only requirement is that the sync be initiated.
	 * The DMU syncs out txgs frequently, so there's nothing to do.
	 */
	if (flag & SYNC_ATTR)
		return (0);
Group,
I am not sure I agree with the 8K size.
Since recordsize sets the size of filesystem blocks
for large files, my first consideration is what will be
the max size of the file object.
For extremely large files (25 to 100GB) that are accessed