I'm probably getting this all wrong, but basically OpenSolaris 2009.06 (which is
the latest ISO available IIRC) ships with snv_111b.
My problem is I have a borked zpool and could really use PSARC 2009/479 to fix
it. The problem is PSARC 2009/479 was only built recently and subsequently was
You can use one of the LiveCDs from genunix.
On Fri, Feb 26, 2010 at 3:36 AM, Laurence laure...@mangafish.net wrote:
I'm probably getting this all wrong, but basically OpenSolaris 2009.06
(which is the latest ISO available IIRC) ships with snv_111b.
My problem is I have a borked zpool and
On 02/26/10 09:36, Laurence wrote:
I'm probably getting this all wrong, but basically OpenSolaris 2009.06 (which is
the latest ISO available IIRC) ships with snv_111b.
My problem is I have a borked zpool and could really use PSARC 2009/479 to fix
it. The problem is PSARC 2009/479 was only built
Paul B. Henson wrote:
I've been surveying various forums looking for other places using ZFS ACLs
in production to compare notes and see how, if at all, they've handled some
of the issues we've found deploying them.
So far, I haven't found anybody using them in any substantial way, let
alone
On 26/02/2010 00:56, Paul B. Henson wrote:
I've been surveying various forums looking for other places using ZFS ACLs
in production to compare notes and see how, if at all, they've handled some
of the issues we've found deploying them.
Anyone sharing files over CIFS backed by ZFS is using
Dear All,
our storage system running OpenSolaris b133 + ZFS has a lot of memory for
caching: 72 GB total. While testing we observed that free memory never falls
below 11 GB.
Even if we create a RAM disk, free memory drops below 11 GB but is back at
11 GB shortly after (I assume the ARC cache is shrunk
I thought it was designed to use 2/3 of the available memory
On Fri, Feb 26, 2010 at 8:46 AM, Ronny Egner ronnyeg...@gmx.de wrote:
Dear All,
our storage system running opensolaris b133 + ZFS has a lot of memory for
caching. 72 GB total. While testing we observed free memory never falls
Errr, I mean 3/4... I know it's some fraction anyways
On Fri, Feb 26, 2010 at 8:49 AM, Thomas Burgess wonsl...@gmail.com wrote:
I thought it was designed to use 2/3 of the available memory
On Fri, Feb 26, 2010 at 8:46 AM, Ronny Egner ronnyeg...@gmx.de wrote:
Dear All,
our storage
Hello
I have an amd64 server running OpenSolaris 2009.06. In December I created one
container on this server named 'cpmail' with its own ZFS dataset, and it's been
running ever since. Until earlier this evening, when the server did a kernel
panic and rebooted. Now I can't see any contents in
Jesse Reynolds wrote:
Does ZFS store a log file of all operations applied to it? It feels like someone has gained access and run 'zfs destroy mailtmp' to me, but then again it could just be my own ineptitude.
Yes...
zpool history rpool
--
Andrew Gabriel
Thanks Andrew! Sorry to be dumb.
So, here's the whole history:
r...@marmoset:/rpool/repo/updatelog# zpool history rpool
History for 'rpool':
2009-11-22.21:10:34 zpool create -f rpool c8t0d0s0
2009-11-22.21:10:35 zfs set org.opensolaris.caiman:install=busy rpool
2009-11-22.21:10:36 zfs create -b
On 64-bit platforms it is MAX(3/4 of memory, memory - 1 GB) by default,
so for a system with 72 GB it should be MAX(54 GB, 71 GB), which is 71 GB.
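The default described above can be checked with a quick sketch (illustrative Python, not the ZFS kernel code; `default_arc_max` is a made-up name):

```python
# Sketch of the default ARC size cap described above: on 64-bit
# platforms the cap defaults to MAX(3/4 of physical memory, memory - 1 GB).
# Illustrative only -- not the actual ZFS implementation.

GiB = 1024 ** 3

def default_arc_max(phys_mem_bytes: int) -> int:
    """Return the default ARC cap in bytes."""
    return max(3 * phys_mem_bytes // 4, phys_mem_bytes - 1 * GiB)

mem = 72 * GiB
print(default_arc_max(mem) // GiB)  # 71 -- matches the 71 GB figure above
```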
On 26/02/2010 13:51, Thomas Burgess wrote:
Errr, I mean 3/4... I know it's some fraction anyways
On Fri, Feb 26, 2010 at 8:49 AM, Thomas Burgess
On 26/02/2010 13:46, Ronny Egner wrote:
Dear All,
our storage system running opensolaris b133 + ZFS has a lot of memory for
caching. 72 GB total. While testing we observed free memory never falls below
11 GB.
Even if we create a ram disk free memory drops below 11 GB but will be 11 GB
On 26/02/2010 14:03, Jesse Reynolds wrote:
Hello
I have an amd64 server running OpenSolaris 2009.06. In December I created one
container on this server named 'cpmail' with its own ZFS dataset, and it's been
running ever since. Until earlier this evening, when the server did a kernel
panic and
On 26 February, 2010 - Ronny Egner sent me these 0,6K bytes:
Dear All,
our storage system running opensolaris b133 + ZFS has a lot of memory for
caching. 72 GB total. While testing we observed free memory never falls below
11 GB.
Even if we create a ram disk free memory drops below 11
On 26/02/2010 14:03, Jesse Reynolds wrote:
Hello
I have an amd64 server running OpenSolaris 2009.06. In December I created one
container on this server named 'cpmail' with its own ZFS dataset, and it's been
running ever since. Until earlier this evening, when the server did a kernel
panic and
Ah, thanks Robert!
Yes, I remember now. mailtmp was indeed a separate ZFS dataset I created to use
as the source for the mailbox migration when I built this server. It no longer
exists and is not needed. I see now that I can just remove the reference to it
and all should hopefully be good
On Feb 26, 2010, at 5:46 AM, Ronny Egner wrote:
Dear All,
our storage system running opensolaris b133 + ZFS has a lot of memory for
caching. 72 GB total. While testing we observed free memory never falls below
11 GB.
Even if we create a ram disk free memory drops below 11 GB but will be
Bob,
Thanks for your reply. As you mentioned, adjusting the zfs tunables to
reduce the transaction group size yields shorter but more frequent pauses.
Unfortunately, this workaround doesn't sufficiently meet our needs (pauses
are still too long).
I've reviewed the forum archives and read a
On 2/25/2010 10:24 PM, Paul B. Henson wrote:
The main ACL problem we're having now (having resolved most of
them, yay) is the interaction between chmod() and legacy mode bits, and the
disappointing ease with which an undesired chmod can completely destroy an
ACL.
examples?
Are you using
On Fri, 26 Feb 2010, Ian Collins wrote:
One of my clients makes extensive use of ACLs. Some of them are so
complex, I had to write them an application to interpret and manage them!
Yah, manipulating them directly isn't for the faint of heart ;). But it's
not too hard to abstract them to a
I would probably tune lotsfree down as well. At 72 GB of RAM it's
probably reserving around 1.1 GB.
http://docs.sun.com/app/docs/doc/819-2724/6n50b07bk?a=view
Ethan
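A quick sketch of where the ~1.1 GB figure comes from, assuming lotsfree defaults to 1/64 of physical memory (an assumption based on the linked docs; `default_lotsfree` is a made-up name):

```python
# Rough check of the ~1.1 GB figure above, assuming lotsfree defaults
# to 1/64 of physical memory. Illustrative only.

GiB = 1024 ** 3

def default_lotsfree(phys_mem_bytes: int) -> int:
    return phys_mem_bytes // 64

print(default_lotsfree(72 * GiB) / GiB)  # 1.125 -- about 1.1 GB
```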
On Fri, 26 Feb 2010, Darren J Moffat wrote:
Anyone sharing files over CIFS backed by ZFS is using ACLs, particularly
when there are only Windows clients. There are a large number of deployments,
some very significant in size.
If you're running the opensolaris in-kernel CIFS server, you avoid
I think most people are just confused by ACLs; I know I was when I first
started using them. Having said that, once I got them set correctly, they
work very well for my CIFS shares.
On Fri, Feb 26, 2010 at 11:23 AM, Paul B. Henson hen...@acm.org wrote:
On Fri, 26 Feb 2010, Darren J Moffat
On Fri, 26 Feb 2010, Thomas Burgess wrote:
I think most people are just confused by ACLs; I know I was when I first
started using them. Having said that, once I got them set correctly,
they work very well for my CIFS shares.
Are you using the in-kernel CIFS server or samba? Are the files
comment below...
On Feb 25, 2010, at 5:34 PM, Jesus Cea wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 02/24/2010 11:42 PM, Robert Milkowski wrote:
mi...@r600:~# ls -li /bin/bash
1713998 -r-xr-xr-x 1 root bin 799040 2009-10-30 00:41 /bin/bash
mi...@r600:~# zdb -v
On Fri, Feb 26, 2010 at 08:23:40AM -0800, Paul B. Henson wrote:
So far it's been quite a struggle to deploy ACL's on an enterprise central
file services platform with access via multiple protocols and have them
actually be functional and reliable. I can see why the average consumer
might give
Hi,
please find below the requested information:
r...@openstorage:~# mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc pcplusmp
rootnex scsi_vhci zfs sockfs ip hook neti sctp arp usba uhci fctl stmf md lofs
idm random mpt sd nfs crypto fcp fcip cpc smbsrv ufs logindmux
On Fri, 26 Feb 2010, Shane Cox wrote:
I've reviewed the forum archives and read a number of threads related to this
issue. However, I didn't find a root-cause explanation for these pauses, only
talk of how to ameliorate them. In my particular case, I would like to know
why zfs_log_writes
On Fri, 26 Feb 2010, Nicolas Williams wrote:
Can you describe your struggles? What could we do to make it easier to
use ACLs? Is this about chmod [and so random apps] clobbering ACLs? or
something more fundamental about ACLs?
I understand and accept that ACL's are complicated, and have no
On Fri, 26 Feb 2010, Jason King wrote:
Did you try adding:
nfs4: mode = special
vfs objects = zfsacl
To the shares in smb.conf? While we haven't done extensive work on
S10, it appears to work well enough for our (limited) purposes (along
with setting the acl properties to
explanation below...
On Feb 26, 2010, at 10:11 AM, Ronny Egner wrote:
Hi,
please find below the requested information:
r...@openstorage:~# mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc pcplusmp
rootnex scsi_vhci zfs sockfs ip hook neti sctp arp usba uhci
On 02/26/10 10:45, Paul B. Henson wrote:
I've already posited as to an approach that I think would make a pure-ACL
deployment possible:
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-February/037206.html
Via this concept or something else, there needs to be a way to configure
Hello list,
ZFS can be used for both file-level (zfs) and block-level access (zvol). When
using zvols, those are always thin provisioned (space is allocated on first
write). We use zvols with COMSTAR to do iSCSI and FC access - and excuse me in
advance - but this may also be a more COMSTAR
I'm considering adding an L2ARC to my home system and was wondering if
anyone had recommendations. I've also considered slicing the drive and
using it for both a ZIL and L2ARC, but I don't think that my workload would
really benefit from a ZIL. I'm running a few VMs and serving up
content via CIFS
On 26 February, 2010 - Lutz Schumann sent me these 2,2K bytes:
Hello list,
ZFS can be used in both file level (zfs) and block level access (zvol). When
using zvols, those are always thin provisioned (space is allocated on first
write). We use zvols with comstar to do iSCSI and FC access
On 02/26/10 11:42, Lutz Schumann wrote:
Idea:
- If the guest writes a block with 0's only, the block is freed again
- If someone reads this block again, it will get the same 0's it would get
if the 0's had been written
- The checksum of an all-0 block can be hard-coded for SHA1 /
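The idea above might look like this in toy form (illustrative only; the block size, class, and names are invented, and this is not how ZFS/COMSTAR actually implement zvols):

```python
# Toy sketch of the zero-write reclaim idea above: detect an all-zero
# block on write and drop its allocation instead of storing it.
# Illustrative only -- not actual ZFS/COMSTAR code.

BLOCK_SIZE = 8192  # hypothetical zvol block size
ZERO_BLOCK = bytes(BLOCK_SIZE)

class SparseZvol:
    def __init__(self):
        self.blocks = {}  # block index -> data; absence means "all zeros"

    def write(self, idx: int, data: bytes):
        if data == ZERO_BLOCK:
            self.blocks.pop(idx, None)  # free the block again
        else:
            self.blocks[idx] = data

    def read(self, idx: int) -> bytes:
        # Unwritten (or zero-freed) blocks read back as zeros.
        return self.blocks.get(idx, ZERO_BLOCK)

vol = SparseZvol()
vol.write(0, b"x" * BLOCK_SIZE)
vol.write(0, ZERO_BLOCK)  # guest zeroes the block
print(len(vol.blocks))    # 0 -- space reclaimed
```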
This would be an idea, and I thought about this. However, I see the following
problems:
1) Using deduplication
This will reduce the on-disk size, but the DDT will grow forever, and for the
deletion of zvols this will mean a lot of time and work (see other threads
regarding DDT memory issues
On Fri, Feb 26, 2010 at 2:43 PM, Brandon High bh...@freaks.com wrote:
snip
The drives I'm considering are:
OCZ Vertex 30GB
Intel X25V 40GB
Crucial CT64M225 64GB
Personally, I'd go with the Intel product... but save up a few more pennies
and get the X25-M. The extra boost on read and write
nw == Nicolas Williams nicolas.willi...@sun.com writes:
nw What could we do to make it easier to use ACLs?
1. how about AFS-style ones where the effective permission is the AND
of the ACL and the unix permission? You might have to combine this
with an inheritable-by-subdirectories
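The AND semantics proposed above could be sketched as follows (a toy model; `effective_perms` and the permission encoding are invented for illustration, and real ACL evaluation is far more involved):

```python
# Sketch of the AFS-style semantics suggested above: the effective
# permission is the bitwise AND of what the ACL grants and what the
# Unix mode bits grant. Illustrative only.

R, W, X = 4, 2, 1  # classic rwx bit values

def effective_perms(acl_grants: int, mode_bits: int) -> int:
    return acl_grants & mode_bits

# The ACL grants read+write, but the Unix mode only allows read:
print(effective_perms(R | W, R))  # 4 -- read only
```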
On Fri, Feb 26, 2010 at 2:42 PM, Lutz Schumann
presa...@storageconcepts.dewrote:
Now if a virtual machine writes to the zvol, blocks are allocated on disk.
Reads are now partly from disk (for all blocks written) and partly from the ZFS
layer (all unwritten blocks).
If the virtual machine (which may
Hi Richard,
r...@openstorage:~# echo swapfs_minfree/D | mdb -k
swapfs_minfree:
swapfs_minfree: 2358757
2358757 pages * 4 KB = 9435028 KB ≈ 9.0 GB
So there is my memory :-)
If I read the documentation correctly, this parameter has to be set in
/etc/system?
Ronny
--
This message posted from
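A quick check of the swapfs_minfree arithmetic above, assuming 4 KB pages (the x86 page size):

```python
# Checking the swapfs_minfree figure above: pages-to-GiB conversion,
# assuming 4 KB pages.

swapfs_minfree_pages = 2358757
kb = swapfs_minfree_pages * 4
gib = kb / (1024 * 1024)
print(kb, round(gib, 1))  # 9435028 9.0 -- about 9 GB held back by swapfs
```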
I use the Intel X25-V and I like it :)
Actually I have 2 in a striped setup:
40 MB/sec writes (just enough for ZIL filling) and
something like 130 MB/sec reads. Just enough.
with the Intel product... but save up a few more pennies
and get the X25-M. The extra boost on read and
write performance is worth it.
Or use multiple X25-Vs (the L2ARC is not filled fast anyhow, so write speed does
not matter). You can get 4 of them for one 160 GB X25-M. With 4 X25-Vs you get
~500 MB/sec
Hello,
While destroying a dataset, sometimes ZFS kind of hangs the machine. I
imagine it's starving all I/O while deleting the blocks, right?
Here logbias=latency, the commit interval is the default (30 seconds), and
we have SSDs for logs and cache.
Is there a way to slow down the destroy a
On Feb 26, 2010, at 11:55 AM, Lutz Schumann wrote:
This would be an idea, and I thought about this. However, I see the following
problems:
1) Using deduplication
This will reduce the on-disk size, but the DDT will grow forever, and for
the deletion of zvols this will mean a lot of
On Fri, Feb 26, 2010 at 12:24 PM, Lutz Schumann
presa...@storageconcepts.de wrote:
Or use multiple X25-V (L2ARC is not filled fast anyhow, so write does not
matter). You can get 4 of them for 1 160 GB X25-M. With 4 X25-V you get ~500
MB /sec instead of ~ 140 MB / sec with the X25-M - much
On Fri, 26 Feb 2010, Bill Sommerfeld wrote:
I believe this proposal is sound.
Mere words cannot express the sheer joy with which I receive this opinion
from an @sun.com address ;).
There are already per-filesystem tunables for ZFS which allow the system
to escape the confines of POSIX
On Feb 26, 2010, at 12:20 PM, Ronny Egner wrote:
Hi Richard,
r...@openstorage:~# echo swapfs_minfree/D | mdb -k
swapfs_minfree:
swapfs_minfree: 2358757
2358757 pages * 4 KB = 9435028 KB ≈ 9.0 GB
So there is my memory :-)
I believe so.
If I read the documentation correctly, this
On Fri, February 26, 2010 12:45, Paul B. Henson wrote:
I've already posited as to an approach that I think would make a pure-ACL
deployment possible:
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-February/037206.html
Via this concept or something else, there needs to be a
On Fri, Feb 26, 2010 at 02:50:05PM -0800, Paul B. Henson wrote:
On Fri, 26 Feb 2010, Bill Sommerfeld wrote:
I believe this proposal is sound.
Mere words cannot express the sheer joy with which I receive this opinion
from an @sun.com address ;).
I believe we can do a bit better.
A chmod
On Fri, 26 Feb 2010, Nicolas Williams wrote:
I believe we can do a bit better.
A chmod that adds (see below) or removes one of r, w or x for owner is a
simple ACL edit (the bit may turn into multiple ACE bits, but whatever)
modifying / replacing / adding owner@ ACEs (if there is one). A
On Fri, Feb 26, 2010 at 04:26:43PM -0800, Paul B. Henson wrote:
On Fri, 26 Feb 2010, Nicolas Williams wrote:
I believe we can do a bit better.
A chmod that adds (see below) or removes one of r, w or x for owner is a
simple ACL edit (the bit may turn into multiple ACE bits, but whatever)
On Fri, 26 Feb 2010, Nicolas Williams wrote:
a) clobber the ACL;
b) map the change as best you can to an ACL change;
c) ignore the rwx bits in the mode mask (except on create from a POSIX
open(2)/creat(2), in which case the ACL has to be derived from the
initial mode);
d) fail the
On Fri, 26 Feb 2010, Nicolas Williams wrote:
Suppose you deny or ignore chmods. Well, how would you ever set or reset
set-uid/gid and sticky bits? chmod(2) deals only in absolute modes, not
relative changes, which means that in order to distinguish those bits
from the rwx bits the
On 02/26/10 17:38, Paul B. Henson wrote:
As I wrote in that new sub-thread, I see no option that isn't surprising
in some way. My preference would be for what I labeled as option (b).
And I think you absolutely should be able to configure your fileserver to
implement your preference. Why
On Fri, 26 Feb 2010, Bill Sommerfeld wrote:
ACL-chmod interactions have been mishandled so badly in the past that I
think a bit of experimentation with differing policies is in order.
I volunteer to help test discard and deny :). Heck, I volunteer to help
*implement* discard and deny...
On 2/26/2010 6:26 PM, Paul B. Henson wrote:
On Fri, 26 Feb 2010, Nicolas Williams wrote:
I believe we can do a bit better.
A chmod that adds (see below) or removes one of r, w or x for owner is a
simple ACL edit (the bit may turn into multiple ACE bits, but whatever)
modifying / replacing
On 2/26/2010 6:52 PM, Paul B. Henson wrote:
On Fri, 26 Feb 2010, David Dyer-Bennet wrote:
chown ddb /path/to/file
chmod 640 /path/to/file
I'll tell you, if I type that and then find I (I'm ddb) *can't* read the
file, I'm going to be REALLY unhappy.
Then clearly you should
I was in the middle of a lengthy reply to this, which I've abandoned, as it
can pretty much be summarized as "If you don't want this behavior, don't
enable it."
It wouldn't be the default, and if you didn't want it, you wouldn't enable
it. Perhaps it might be enabled on some system you inherit,
On Fri, 26 Feb 2010, David Dyer-Bennet wrote:
So, even if you're willing to completely discard 30 years of legacy
scripts and applications -- how do you propose that a NEW script or
application should be written so as to work in this brave new
environment?
[...]
And how should new utilities
On 2/26/2010 8:45 PM, Paul B. Henson wrote:
On Fri, 26 Feb 2010, David Dyer-Bennet wrote:
So, even if you're willing to completely discard 30 years of legacy
scripts and applications -- how do you propose that a NEW script or
application should be written so as to work in this brave new
On Thu, Feb 25 at 20:21, Bob Friesenhahn wrote:
On Thu, 25 Feb 2010, Alastair Neil wrote:
I do not know and I don't think anyone would deploy a system in that way with
UFS.
This is the model that is imposed in order to take full advantage of zfs
advanced
features such as snapshots,
Hi,
This page may indicate the root cause.
http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle
ZFS throttles writes so that the rate at which data enters the txg matches the
speed of disk I/O. If it detects that the modest measure (a 1-tick pause)
cannot prevent the tx group from becoming too large, it
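A very rough sketch of the first stage of the throttle described above, the per-write tick pause (illustrative only; the constants and names are invented and this is not the actual ZFS write-throttle code):

```python
# Toy model of the write-throttle idea above: when the open txg is
# about to exceed its cap, each incoming write is delayed one "tick"
# to let disk I/O catch up. Illustrative only.

import time

TXG_LIMIT = 64 * 1024 * 1024  # hypothetical txg size cap, bytes
TICK = 0.01                   # hypothetical 1-tick pause, seconds

class WriteThrottle:
    def __init__(self):
        self.txg_bytes = 0

    def write(self, nbytes: int, pause=time.sleep):
        if self.txg_bytes + nbytes > TXG_LIMIT:
            pause(TICK)  # the "modest measure": pause one tick
        self.txg_bytes += nbytes

    def sync(self):
        self.txg_bytes = 0  # txg written out; pressure relieved
```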
On Feb 26, 2010, at 8:25 PM, Eric D. Mudama wrote:
On Thu, Feb 25 at 20:21, Bob Friesenhahn wrote:
On Thu, 25 Feb 2010, Alastair Neil wrote:
I do not know and I don't think anyone would deploy a system in that way
with UFS.
This is the model that is imposed in order to take full advantage
Ironically, it's NFS exporting that is the real hog; CIFS shares seem to come
up pretty fast. The fact that CIFS shares can be fast makes it hard for me
to understand why Sun/Oracle seem to be making such a meal of this bug.
Possibly because it only critically affects poor universities and not
Thanks for your reply. I disabled write throttling, but didn't observe any
change in behavior. After doing some more research, I have a theory as to
the root cause of the pauses that I'm observing.
Near the end of spa_sync, writes are blocked in function zil_itx_assign as
illustrated by the