On Fri, Aug 11, 2006 at 05:25:11PM -0700, Peter Looyenga wrote:
I looked into backing up ZFS and quite honestly I can't say I am convinced
about its usefulness here when compared to the traditional ufsdump/restore.
While snapshots are nice they can never substitute for offline backups. And
The test case was build 38, Solaris 11, a 2 GB file, initially created
with 1 MB SW, and a recsize of 8 KB, on a pool with two raid-z 5+1,
accessed with 24 threads of 8 KB RW, for 500,000 ops or 40 seconds, whichever
came first. The result at the pool level was 78% of the
Hi Bob,
Looks like: 6415647 Sequential writing is jumping
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6415647
-r
Roch Bourbonnais, Sun Microsystems, Inc - Grenoble
Senior
Yes James is right this is normal behaviour. Unless the writes are
synchronous (O_DSYNC) or explicitly flushed (fsync()) then they
are batched up, written out and committed as a transaction
every txg_time (5 seconds).
Neil.
James C. McPherson wrote:
Bob Evans wrote:
Just getting my feet wet
James C. McPherson wrote:
Bob
Incidentally, this is part of how QFS gets its performance
for streaming I/O. We use an allocate-forward policy,
allow very large allocation blocks, and separate the
metadata from data. This allows us to write (or read) data
in fairly large I/O requests, without
I'm starting simple, there is no app.
I have a 10GB file (called foo) on the internal FC drive, I did a zfs create
raidz bar
then ran cp foo /bar/, so there is no cpu activity due to an app.
As a test case, this took 7 min 30 sec to copy to the zfs partition. I removed
the pool,
As added information, top reports that cp is using about 25% of the single
cpu. There are no other apps running.
Bob
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
One last tidbit, for what it is worth. Rather than watch top, I ran xcpustate.
It seems that just as the writes pause, the cpu looks like it hits 100% (or
very close), then it falls back down to its lower level.
I'm still getting used to Solaris 10 as well, so if you have a DTrace script
Robert Milkowski wrote:
ps. however I'm really concerned with ZFS behavior when a pool is
almost full, there are a lot of write transactions to that pool, and the
server is restarted forcibly or panics. I observed that file systems
on that pool will mount in 10-30 minutes each during zfs mount -a, and
Hey, Bob -
It might be worth exploring where your data stream for the writes was
coming from. Moreover, it might be worth exploring how fast it was
filling up caches for writing.
Were you delivering enough data to keep the disks busy 100% of the time?
I have been tricked by this before... :)
Eric Schrock wrote:
This case adds a new option, 'zfs create -o', which allows for any ZFS
property to be set at creation time. Multiple '-o' options can appear
in the same subcommand. Specifying the same property multiple times in
the same command results in an error. For example:
Jeff A. Earickson wrote:
Are there any plans/schemes for per-user quotas within a ZFS filesystem,
akin to the UFS quotaon(1M) mechanism? I take it that quotaon won't
work with a ZFS filesystem, right? Suggestions please? My notion right
now is to drop quotas for /var/mail.
An alternative