RM:
I do not understand - why, in some cases with smaller blocks, could writing a
block twice actually be faster than writing it once every time?
I am definitely missing something here...
In addition to what Neil said, I want to add that
when an application's O_DSYNC write covers only part of
Robert Milkowski writes:
Hello Neil,
Thursday, August 10, 2006, 7:02:58 PM, you wrote:
NP> Robert Milkowski wrote:
Hello Matthew,
Thursday, August 10, 2006, 6:55:41 PM, you wrote:
MA> On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
btw: wouldn't
Hi there
Are there any consideration given to this feature...?
I would also agree that this will not only be a testing feature, but will
find its way into production.
It would probably work on the same principle as swap -a and swap -d ;) Just a
little bit more complex.
Darren:
With all of the talk about performance problems due to
ZFS doing a sync to force the drives to commit to data
being on disk, how much of a benefit is this - especially
for NFS?
I would not call those things problems, more like setting
proper expectations.
My
Hi,
I'm looking at moving two UFS quota-ed filesystems to ZFS under
Solaris 10 release 6/06, and the quota issue is gnarly.
One filesystem is user home directories and I'm aiming towards the
one zfs filesystem per user model, attempting to use Casper
Dik's auto_home script for on-the-fly zfs
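The rough idea of such an on-the-fly setup, done as an executable automount
map, would be something like the sketch below (this is only my sketch of the
concept, not Casper Dik's actual script; the pool name, quota and sharing
settings are assumptions):

    #!/bin/ksh
    # Executable auto_home map sketch (concept only, not Casper Dik's script).
    # automountd runs an executable map with the lookup key (the username) as
    # $1 and uses whatever the script prints on stdout as the map entry.
    # Pool name, quota and sharenfs below are assumptions for illustration.
    user=$1
    fs=homepool/home/$user
    if ! zfs list "$fs" >/dev/null 2>&1; then
        zfs create "$fs" || exit 1
        zfs set quota=1g "$fs"
        zfs set sharenfs=on "$fs"
    fi
    echo "localhost:/homepool/home/$user"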
On Aug 9, 2006, at 8:18 AM, Roch wrote:
So while I'm feeling optimistic :-) we really ought to be
able to do this in two I/O operations. If we have, say, 500K
of data to write (including all of the metadata), we should
be able to allocate a contiguous 500K block on disk and
Thanks for replying (I thought nobody would bother.)
So, if I understand correctly, I won't give up ANYTHING available in
EVMS, LVM, or Linux RAID by going to ZFS and RAID-Z. Right?
Following up on earlier mail, here's a proposal for create-time
properties. As usual, any feedback or suggestions are welcome.
For those curious about the implementation, this finds its way all the
way down to the create callback, so that we can pick out true
create-time properties (e.g.
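As a very rough sketch of how create-time properties might look from the
command line (the exact syntax here is my assumption, not necessarily what the
proposal specifies):

    # Hypothetical usage: properties are applied atomically at creation time
    # instead of with a separate 'zfs set' afterwards. The option syntax and
    # property choices are assumptions for illustration.
    zfs create -o compression=on -o quota=10g tank/home/alice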
No, there are some features we haven't implemented, that may or may not
be available in other RAID solutions. In particular:
- A ZFS storage pool cannot be 'shrunk', i.e. you cannot remove an entire
toplevel device (mirror, RAID group, etc.). Devices can be removed by attaching
and detaching to
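Roughly, that attach/detach workflow looks like this (pool and device names
are placeholders):

    # Migrate data off c1t2d0 by temporarily mirroring it onto c3t4d0,
    # then dropping the old device. Names are placeholders.
    zpool attach tank c1t2d0 c3t4d0   # form a mirror of old and new device
    # wait for the resilver to finish ('zpool status tank')
    zpool detach tank c1t2d0          # remove the old device from the mirror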
Just wanted to point this out --
I have a large web tree that used to have UFS user quotas on it. I converted
to ZFS using the model that each user has their own ZFS filesystem quota
instead. I worked around some NFS/automounter issues, and it now seems to be
working fine.
Except now I
On Fri, Aug 11, 2006 at 11:04:06AM -0500, Anton Rang wrote:
Once the data blocks are on disk we have the information
necessary to update the indirect blocks iteratively up to
the uberblock. Those are the smaller I/Os; I guess that
because of ditto blocks they go to physically
On Aug 11, 2006, at 12:38 PM, Jonathan Adams wrote:
The problem is that you don't know the actual *contents* of the
parent block
until *all* of its children have been written to their final
locations.
(This is because the block pointer's value depends on the final
location)
But I know
Leon Koll wrote:
On 8/11/06, eric kustarz [EMAIL PROTECTED] wrote:
Leon Koll wrote:
...
So having 4 pools isn't a recommended config - I would destroy those 4
pools and just create 1 RAID-0 pool:
#zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0 c4t001738010140001Cd0
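Before that create, the four existing pools would be torn down first, roughly
as follows (the pool names here are placeholders, not Leon's actual pools):

    # destroy the existing pools first -- pool names are placeholders
    zpool destroy pool1
    zpool destroy pool2
    zpool destroy pool3
    zpool destroy pool4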
Just a data point -- our netapp filer actually creates additional raid groups
that are added to the greater pool when you add disks, much as zfs does now.
They aren't simply used to expand the one large raid group of the volume.
I've been meaning to rebuild the whole thing to get use of
On Fri, Aug 11, 2006 at 10:02:41AM -0700, Brad Plecs wrote:
There doesn't appear to be a way to move zfspool/www and its
descendants en masse to a new machine with those quotas intact. I have
to script the recreation of all of the descendant filesystems by hand.
Yep, you need
6421959 want
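Until that is available, the by-hand recreation Brad describes could be
scripted along these lines (zfspool/www is from his mail; everything else is a
sketch under assumptions):

    #!/bin/ksh
    # Sketch: capture each descendant of zfspool/www with its quota and emit
    # the commands needed to recreate it on the new machine. Assumes the
    # target pool keeps the same name; the data itself is copied separately
    # (e.g. 'zfs send | zfs receive' per filesystem, or rsync).
    zfs list -H -r -o name zfspool/www | while read fs; do
        quota=$(zfs get -H -o value quota "$fs")
        echo "zfs create $fs"
        [ "$quota" != "none" ] && echo "zfs set quota=$quota $fs"
    done > recreate_www.sh
    # run recreate_www.sh on the new machine before copying the data over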
What about the Asus M2N-SLI Deluxe motherboard? It has 7 SATA ports,
supports ECC memory, socket AM2, generally looks very attractive for
my home storage server. Except that it, and the nvidia nForce 570-SLI
it's built on, don't seem to be on the HCL. I'm hoping that's just not
reported yet
On August 11, 2006 10:31:50 AM -0400 Jeff A. Earickson [EMAIL PROTECTED]
wrote:
Suggestions please?
Ideally you'd be able to move to mailboxes in $HOME instead of /var/mail.
-frank
Follow-up: it looks to me like prstat displays the portion of the system's
physical memory in use by the processes in that zone.
How much memory does that system have? Something seems amiss, as a V490 can hold
up to 32GB, and prstat is showing 163GB of physical memory just for fmtest.
Irma
On August 11, 2006 5:25:11 PM -0700 Peter Looyenga [EMAIL PROTECTED] wrote:
I looked into backing up ZFS and quite honestly I can't say I am convinced
about its usefulness here when compared to the traditional ufsdump/restore.
While snapshots are nice they can never substitute offline
On 8/11/06, Irma Garcia [EMAIL PROTECTED] wrote:
ZONEID  NPROC   SIZE    RSS MEMORY     TIME  CPU ZONE
    15    188   169G   163G   100%  0:46:00  48% fmtest
     0     54   708M   175M   0.1%  2:23:40 0.1% global
    12     27   112M    51M   0.0%  0:02:48 0.0% fmprod
     4     27   281M    66M   0.0%  0:14:13 0.0% fmstage
Questions?
Does the 100% memory usage on