Michael Shadle wrote:
On Mon, Mar 30, 2009 at 4:13 PM, Michael Shadle mike...@gmail.com wrote:
Sounds like a reasonable idea, no?
Follow up question: can I add a single disk to the existing raidz2
later on (if somehow I found more space in my chassis) so instead of a
7 disk raidz2
I've upgraded my system from ufs to zfs (root pool).
By default, it creates a zvol for dump and swap.
It's a 4GB Ultra-45 and every late night/morning I run a job which takes
around 2GB of memory.
With a zvol swap, the system becomes unusable and the Sun Ray client often
goes into 26B.
So
I'm currently setting up user directories on a zfs filesystem (Solaris
10) which I then nfs mount on an OpenSUSE 9.3 system.
I have a zpool called zpool1. First, I set up a home zfs volume:
zfs create zpool1/la_home_hpc_users
zfs set sharenfs=on zpool1/la_home_hpc_users
Then I create a user
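A sketch of the per-user layout the poster describes (Solaris 10); the pool and filesystem names are the poster's, while "alice" and the 10G quota are illustrative:

```shell
# Parent filesystem for home directories, shared over NFS
# (note: sharenfs is a property, set with "zfs set")
zfs create zpool1/la_home_hpc_users
zfs set sharenfs=on zpool1/la_home_hpc_users

# One child filesystem per user keeps quotas and snapshots per-user;
# "alice" is a hypothetical user
zfs create zpool1/la_home_hpc_users/alice
zfs set quota=10G zpool1/la_home_hpc_users/alice
chown alice /zpool1/la_home_hpc_users/alice
```

Child filesystems inherit sharenfs from the parent, so each new home directory is exported automatically.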
Posting this back to zfs-discuss.
Roland's test case (below) is a single threaded sequential write
followed by a single threaded sequential read. His bandwidth
goes from horrible (~2MB/sec) to expected (~30MB/sec)
when prefetch is disabled. This is with relatively recent nv bits (nv110).
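For anyone wanting to repeat the comparison, these are the two standard ways to turn off ZFS file-level prefetch on nv-era kernels; a sketch, assuming root access:

```shell
# Persistent: add the tunable to /etc/system and reboot
echo 'set zfs:zfs_prefetch_disable = 1' >> /etc/system

# Live: flip the kernel variable in place with mdb (0t1 = decimal 1)
echo 'zfs_prefetch_disable/W0t1' | mdb -kw
```

Setting the variable back to 0 re-enables prefetch without a reboot.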
On Tue, Mar 31, 2009 at 1:31 AM, Scott Lawson
scott.law...@manukau.ac.nz wrote:
No. There is no way to expand a RAIDZ or RAIDZ2 at this point. It is a
feature that is often discussed
and people would like, but has been seen by Sun as more of a feature home
users would like rather
than
casper@sun.com said:
I've upgraded my system from ufs to zfs (root pool).
By default, it creates a zvol for dump and swap.
. . .
So I removed the zvol swap and now I have a standard swap partition. The
performance is much better (night and day). The system is usable and I
don't know
james.ma...@sun.com said:
I'm not yet sure what's broken here, but there's something pathologically
wrong with the IO rates to the device during the ZFS tests. In both cases,
the wait queue is getting backed up, with horrific wait queue latency
numbers. On the read side, I don't understand why
casper@sun.com said:
I've upgraded my system from ufs to zfs (root pool).
By default, it creates a zvol for dump and swap.
. . .
So I removed the zvol swap and now I have a standard swap partition. The
performance is much better (night and day). The system is usable and I
don't know
FYI, I filed this PSARC case yesterday, and expect to integrate into
OpenSolaris in April. Your comments are welcome.
http://arc.opensolaris.org/caselog/PSARC/2009/204/
--matt
2009/3/31 Matthew Ahrens matthew.ahr...@sun.com:
4. New Properties
user/group space accounting information and quotas can be manipulated
with 4 new properties:
zfs get userused@user fs|snap
zfs get groupused@group fs|snap
zfs get userquota@user fs|snap
zfs get groupquota@group fs|snap
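Based on the case materials quoted above, usage might look like the following; the dataset and the names "ahrens"/"staff" are illustrative:

```shell
# Limit a user's and a group's space on a filesystem
zfs set userquota@ahrens=1G tank/home
zfs set groupquota@staff=20G tank/home

# Read back charged space and quotas for one principal
zfs get userused@ahrens tank/home
zfs get groupquota@staff tank/home

# The new subcommands that list all principals at once
# (these properties are deliberately excluded from "zfs get all")
zfs userspace tank/home
zfs groupspace tank/home
```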
Cyril Plisko wrote:
On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling
richard.ell...@gmail.com wrote:
assertion failures are bugs.
Yup, I know that.
Please file one at http://bugs.opensolaris.org
Just did.
Do you have a crash dump from this issue?
- George
You may
Hello Matthew,
Excellent news.
Wouldn't it be better to account for logical disk usage rather than
physical? I mean, when compression is enabled, should the quota be
based on the logical file size, or on the physical size as reported by du?
I'm not saying which one is better; I'm just raising the question.
--
much cheering ensues!
2009/3/31 Matthew Ahrens matthew.ahr...@sun.com:
FYI, I filed this PSARC case yesterday, and expect to integrate into
OpenSolaris in April. Your comments are welcome.
http://arc.opensolaris.org/caselog/PSARC/2009/204/
--matt
-- Forwarded message --
On Tue, Mar 31, 2009 at 02:37:02PM -0500, Mike Gerdts wrote:
The user or group is specified using one of the following forms:
posix name (eg. ahrens)
posix numeric id (eg. 126829)
sid name (eg. ahr...@sun)
sid numeric id (eg. S-1-12345-12423-125829)
How does this work with zones?
Robert Milkowski wrote:
Hello Matthew,
Excellent news.
Wouldn't it be better to account for logical disk usage rather than
physical? I mean, when compression is enabled, should the quota be
based on the logical file size, or on the physical size as reported by du?
The compressed space *is* the amount of
On Tue, Mar 31, 2009 at 01:16:42PM -0700, Matthew Ahrens wrote:
Robert Milkowski wrote:
Hello Matthew,
Excellent news.
Wouldn't it be better to account for logical disk usage rather than
physical? I mean, when compression is enabled, should the quota be
based on the logical file
Nicolas Williams wrote:
On Tue, Mar 31, 2009 at 02:37:02PM -0500, Mike Gerdts wrote:
The user or group is specified using one of the following forms:
posix name (eg. ahrens)
posix numeric id (eg. 126829)
sid name (eg. ahr...@sun)
sid numeric id (eg. S-1-12345-12423-125829)
How does this work
On Tue, Mar 31, 2009 at 11:01 PM, George Wilson george.wil...@sun.com wrote:
Cyril Plisko wrote:
On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling
richard.ell...@gmail.com wrote:
assertion failures are bugs.
Yup, I know that.
Please file one at http://bugs.opensolaris.org
Just did.
On Tue, Mar 31, 2009 at 01:25:35PM -0700, Matthew Ahrens wrote:
quote case materials
These new properties are not printed by zfs get all, since that could
generate a huge amount of output, which would not be very well
organized. The new zfs userspace subcommand should be used instead.
Ah, I
Nicolas Williams wrote:
We could also
disallow them from doing zfs get useru...@name pool/zoned/fs, just make
it an error to prevent them from seeing something other than what they
intended.
I don't see why the g-z admin should not get this data.
They can of course still get the data by
On 31 March, 2009 - Matthew Ahrens sent me these 10K bytes:
FYI, I filed this PSARC case yesterday, and expect to integrate into
OpenSolaris in April. Your comments are welcome.
http://arc.opensolaris.org/caselog/PSARC/2009/204/
Quota reporting over NFS or for userland apps like Samba?
On Mar 31, 2009, at 04:31, Scott Lawson wrote:
http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z
There's a more recent post on bp (block pointer) rewriting that will
allow for moving blocks around (part of cleaning up the scrub code):
http://blogs.sun.com/ahrens/entry/new_scrub_code
Tomas Ögren wrote:
On 31 March, 2009 - Matthew Ahrens sent me these 10K bytes:
FYI, I filed this PSARC case yesterday, and expect to integrate into
OpenSolaris in April. Your comments are welcome.
http://arc.opensolaris.org/caselog/PSARC/2009/204/
Quota reporting over NFS or for userland
Matthew Ahrens:
ZFS user quotas (like other zfs properties) will not be accessible over NFS;
you must be on the machine running zfs to manipulate them.
does this mean that without an account on the NFS server, a user cannot see his
current disk use
bh...@freaks.com said:
Even with a very weak CPU the system is close to saturating the PCI bus for
reads with most configurations.
Nice little machine. I wonder if you'd get some of the bonnie numbers
increased if you ran multiple bonnie instances in parallel. Even though the
sequential throughput
River Tarnell wrote:
Matthew Ahrens:
ZFS user quotas (like other zfs properties) will not be accessible over NFS;
you must be on the machine running zfs to manipulate them.
does this mean that without an account on the NFS server, a user cannot see his
current disk use / quota?
That's
On Wed, Apr 01, 2009 at 01:41:06AM +0300, Dimitar Vasilev wrote:
Hi all,
Could someone give a hint if it's possible to create rpool/tmp, mount
it as /tmp so that tmpfs has some disk-based back-end instead of
memory-based size-limited one.
You mean you want /tmp to be a regular ZFS filesystem
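A sketch of what the question appears to ask for, i.e. replacing the memory-backed tmpfs /tmp with a disk-backed ZFS filesystem; whether that is advisable is a separate question, since /tmp on tmpfs is the Solaris default for a reason:

```shell
# Create a dataset in the root pool for /tmp
zfs create rpool/tmp

# Mount it at /tmp (after removing or commenting out the tmpfs
# /tmp line in /etc/vfstab, and with nothing holding /tmp busy)
zfs set mountpoint=/tmp rpool/tmp

# Restore the expected permissions and optionally cap the size,
# mimicking tmpfs's size= limit
chmod 1777 /tmp
zfs set quota=4G rpool/tmp
```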
On Tue, Mar 31, 2009 at 7:12 PM, Matthew Ahrens matthew.ahr...@sun.com wrote:
River Tarnell wrote:
Matthew Ahrens:
ZFS user quotas (like other zfs properties) will not be accessible over
NFS;
you must be on the machine running zfs to manipulate them.
does this mean that without an account
Ya, I agree that we need some additional data and testing. The iostat
data in itself doesn't suggest to me that the process (dd) is slow but
rather that most of the data is being retrieved elsewhere (ARC). An
fsstat would be useful to correlate with the iostat data.
One thing that also comes
Matthew Ahrens:
does this mean that without an account on the NFS server, a user cannot
see his current disk use / quota?
That's correct.
in this case, might i suggest at least an RFE to add ZFS quota support to
rquotad? i'm sure we aren't the