On Fri, Jan 28, 2011 at 01:38:11PM -0800, Igor P wrote:
I created a zfs pool with dedup with the following settings:
zpool create data c8t1d0
zfs create data/shared
zfs set dedup=on data/shared
The thing I was wondering about is that it seems like ZFS only dedups at
the file level and not the
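(A quick way to see what dedup is actually tracking, assuming the pool and dataset created above; zdb's DDT output has one entry per unique block, not per file:)
$ zfs get dedup data/shared
$ zpool get dedupratio data
$ zdb -DD data     # dumps dedup table (DDT) statistics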
On Tue, Jan 18, 2011 at 07:16:04AM -0800, Orvar Korvar wrote:
BTW, I thought about this. What do you say?
Assume I want to compress data and I succeed in doing so. And then I
transfer the compressed data. So all the information I transferred is
the compressed data. But, then you don't count
On Sat, Jan 15, 2011 at 10:19:23AM -0600, Bob Friesenhahn wrote:
On Fri, 14 Jan 2011, Peter Taps wrote:
Thank you for sharing the calculations. In lay terms, for Sha256,
how many blocks of data would be needed to have one collision?
Two.
Pretty funny.
In this thread some of you are
On Fri, Jan 07, 2011 at 06:39:51AM -0800, Michael DeMan wrote:
On Jan 7, 2011, at 6:13 AM, David Magda wrote:
The other thing to note is that by default (with de-dupe disabled), ZFS
uses Fletcher checksums to prevent data corruption. Add to that the fact that all
other file systems don't have any
On Thu, Jan 06, 2011 at 11:44:31AM -0800, Peter Taps wrote:
I have been told that the checksum value returned by Sha256 is almost
guaranteed to be unique.
All hash functions are guaranteed to have collisions [for inputs larger
than their output anyways].
In fact, if
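(For scale, a back-of-the-envelope birthday estimate, not from the quoted post: with n random 256-bit hashes the chance of any collision is roughly n^2 / 2^257, so you need on the order of 2^128, about 3.4 x 10^38, blocks before a collision becomes likely -- at 128 KB per block that is roughly 4 x 10^43 bytes of unique data.)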
On Thu, Jan 06, 2011 at 06:07:47PM -0500, David Magda wrote:
On Jan 6, 2011, at 15:57, Nicolas Williams wrote:
Fletcher is faster than SHA-256, so I think that must be what you're
asking about: can Fletcher+Verification be faster than
Sha256+NoVerification? Or do you have some other goal
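(For reference, the two configurations being compared can be set per dataset; dataset name reused from the earlier example, purely as a sketch:)
$ zfs set dedup=sha256 data/shared         # trust the SHA-256 match
$ zfs set dedup=sha256,verify data/shared  # on a hash match, also compare the blocks byte for byte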
On Mon, Dec 27, 2010 at 09:06:45PM -0500, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nicolas Williams
Actually I'd say that latency has a direct relationship to IOPS because
it's the time it takes
On Sat, Dec 25, 2010 at 08:37:42PM -0500, Ross Walker wrote:
On Dec 24, 2010, at 1:21 PM, Richard Elling richard.ell...@gmail.com wrote:
Latency is what matters most. While there is a loose relationship between
IOPS and latency, you really want low latency. For 15krpm drives, the
On Thu, Dec 23, 2010 at 09:32:13AM +, Darren J Moffat wrote:
On 22/12/2010 20:27, Garrett D'Amore wrote:
That said, some operations -- and cryptographic ones in particular --
may use floating point registers and operations because for some
architectures (sun4u rings a bell) this can make
On Thu, Dec 23, 2010 at 11:25:43AM +0100, Stephan Budach wrote:
as I have learned from the discussion about which SSD to use as ZIL
drives, I stumbled across this article that discusses short
stroking for increasing IOPs on SAS and SATA drives:
There was a thread on this a while back. I
On Wed, Nov 17, 2010 at 01:58:06PM -0800, Bill Sommerfeld wrote:
On 11/17/10 12:04, Miles Nordin wrote:
black-box crypto is snake oil at any level, IMNSHO.
Absolutely.
As Darren said, much of the design has been discussed in public, and
reviewed by cryptographers. It'd be nicer if we had a
Also, when the IV is stored you can more easily look for accidental IV
re-use, and if you can find hash collisions, then you can even cause IV
re-use (if you can write to the filesystem in question). For GCM IV
re-use is rather fatal (for CCM it's bad, but IIRC not fatal), so I'd
not use GCM with
On Sat, Oct 09, 2010 at 09:52:51PM -0700, Richard Elling wrote:
Are we living in the past?
In the bad old days, UNIX systems spoke NFS and Windows systems spoke
CIFS. The cost of creating a file system was expensive -- slices,
partitions, etc.
With ZFS, file systems (datasets) are
On Wed, Oct 06, 2010 at 04:38:02PM -0400, Miles Nordin wrote:
nw == Nicolas Williams nicolas.willi...@oracle.com writes:
nw The current system fails closed
wrong.
$ touch t0
$ chmod 444 t0
$ chmod A0+user:$(id -nu):write_data:allow t0
$ ls -l t0
-r--r--r--+ 1 carton carton
On Wed, Oct 06, 2010 at 05:19:25PM -0400, Miles Nordin wrote:
nw == Nicolas Williams nicolas.willi...@oracle.com writes:
nw *You* stated that your proposal wouldn't allow Windows users
nw full control over file permissions.
me: I have a proposal
you: op! OP op, wait! DOES
On Mon, Oct 04, 2010 at 04:30:05PM -0600, Cindy Swearingen wrote:
Hi Simon,
I don't think you will see much difference for these reasons:
1. The CIFS server ignores the aclinherit/aclmode properties.
Because CIFS/SMB has no chmod operation :)
2. Your aclinherit=passthrough setting
On Mon, Oct 04, 2010 at 02:28:18PM -0400, Miles Nordin wrote:
nw == Nicolas Williams nicolas.willi...@oracle.com writes:
nw I would think that 777 would invite chmods. I think you are
nw handwaving.
it is how AFS worked. Since no file on a normal unix box besides /tmp
But would
On Thu, Sep 30, 2010 at 08:14:24PM -0400, Miles Nordin wrote:
Can the user in (3) fix the permissions from Windows?
no, not under my proposal.
Let's give it a whirl anyways:
but it sounds like currently people cannot ``fix'' permissions through
the quirky autotranslation anyway,
On Thu, Sep 30, 2010 at 02:55:26PM -0400, Miles Nordin wrote:
nw == Nicolas Williams nicolas.willi...@oracle.com writes:
nw Keep in mind that Windows lacks a mode_t. We need to interop
nw with Windows. If a Windows user cannot completely change file
nw perms because there's
On Thu, Sep 30, 2010 at 03:28:14PM -0500, Nicolas Williams wrote:
Consider this chronologically-ordered sequence of events:
1) File is created via Windows, gets SMB/ZFS/NFSv4-style ACL, including
inheritable ACEs. A mode computed from this ACL might be 664, say.
2) A Unix user does
On Thu, Sep 30, 2010 at 08:14:24PM -0400, Miles Nordin wrote:
Can the user in (3) fix the permissions from Windows?
no, not under my proposal.
Then your proposal is a non-starter. Support for multiple remote
filesystem access protocols is key for ZFS and Solaris.
The impedance
On Wed, Sep 29, 2010 at 03:44:57AM -0700, Ralph Böhme wrote:
On 9/28/2010 2:13 PM, Nicolas Williams wrote:
The version of samba bundled with Solaris 10 seems to insist on
chmod'ing stuff. I've tried all of the various
Just in case it's not clear, I did not write the quoted text. (One
Keep in mind that Windows lacks a mode_t. We need to interop with
Windows. If a Windows user cannot completely change file perms because
there's a mode_t completely out of their reach... they'll be frustrated.
Thus an ACL-and-mode model where both are applied doesn't work. It'd be
nice, but it
On Wed, Sep 29, 2010 at 03:09:22PM -0700, Ralph Böhme wrote:
Keep in mind that Windows lacks a mode_t. We need to
interop with Windows.
Oh my, I see. Another itch to scratch. Now at least Windows users are
happy while me and maybe others are not.
Yes. Pardon me for forgetting to mention
On Wed, Sep 29, 2010 at 05:21:51PM -0500, Nicolas Williams wrote:
On Wed, Sep 29, 2010 at 03:09:22PM -0700, Ralph Böhme wrote:
Keep in mind that Windows lacks a mode_t. We need to
interop with Windows.
Oh my, I see. Another itch to scratch. Now at least Windows users are
happy while
On Tue, Sep 28, 2010 at 12:18:49PM -0700, Paul B. Henson wrote:
On Sat, 25 Sep 2010, Ralph Böhme wrote:
Darwin ACL model is nice and slick, the new NFSv4 one in 147 is just
braindead. chmod resulting in ACLs being discarded is a bizarre design
decision.
Agreed. What's the
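(A minimal illustration of the complaint, assuming a build where chmod discards non-trivial ACEs; the file name is made up:)
$ ls -v report.txt     # shows the explicit ACEs
$ chmod 644 report.txt
$ ls -v report.txt     # only the trivial owner@/group@/everyone@ entries remain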
On Tue, Sep 28, 2010 at 02:03:30PM -0700, Paul B. Henson wrote:
On Tue, 28 Sep 2010, Nicolas Williams wrote:
I've researched this enough (mainly by reading most of the ~240 or so
relevant zfs-discuss posts and several bug reports)
And I think some fair fraction of those posts were from
On Wed, Sep 29, 2010 at 10:15:32AM +1300, Ian Collins wrote:
Based on my own research, experimentation and client requests, I
agree with all of the above.
Good to know.
I have been re-ordering and cleaning (deny) ACEs for one client for a
couple of years now and we haven't seen any user
On Thu, Sep 23, 2010 at 06:58:29AM +, Markus Kovero wrote:
What is an example of where a checksummed outside pool would not be able
to protect a non-checksummed inside pool? Would an intermittent
RAM/motherboard/CPU failure that only corrupted the inner pool's block
before it was
On Wed, Sep 22, 2010 at 07:14:43AM -0700, Orvar Korvar wrote:
There was a guy doing that: Windows as host and OpenSolaris as guest
with raw access to his disks. He lost his 12 TB of data. It turned out
that VirtualBox doesn't honor the write flush flag (or something
similar).
VirtualBox has an
On Wed, Sep 22, 2010 at 12:30:58PM -0600, Neil Perrin wrote:
On 09/22/10 11:22, Moazam Raja wrote:
Hi all, I have a ZFS question related to COW and scope.
If user A is reading a file while user B is writing to the same file,
when do the changes introduced by user B become visible to
On Wed, Sep 15, 2010 at 05:18:08PM -0400, Edward Ned Harvey wrote:
It is absolutely not difficult to avoid fragmentation on a spindle drive, at
the level I described. Just keep plenty of empty space in your drive, and
you won't have a fragmentation problem. (Except as required by COW.) How
On Tue, Sep 14, 2010 at 04:13:31PM -0400, Linder, Doug wrote:
I recently created a test zpool (RAIDZ) on some iSCSI shares. I made
a few test directories and files. When I do a listing, I see
something I've never seen before:
[r...@hostname anewdir] # ls -la
total 6160
drwxr-xr-x 2
On Sat, Aug 28, 2010 at 12:05:53PM +1200, Ian Collins wrote:
Think of this from the perspective of an application. How would
write failure be reported? open(2) returns EACCES if the file can
not be written but there isn't a corresponding return from write(2).
Any open file descriptors would
On Fri, Aug 20, 2010 at 09:23:56AM +1200, Ian Collins wrote:
On 08/20/10 08:30 AM, Garrett D'Amore wrote:
There is no common C++ ABI. So you get into compatibility concerns
between code built with different compilers (like Studio vs. g++).
Fail.
Which is why we have extern "C". Just about
On Fri, Aug 20, 2010 at 09:38:51AM +1200, Ian Collins wrote:
On 08/20/10 09:33 AM, Nicolas Williams wrote:
Any driver C++ code would still need a C++ run-time. Either you must
statically link it in, or you'll have a problem with multiple drivers
using different C++ run-times. If you
On Thu, Aug 12, 2010 at 07:48:10PM -0500, Norm Jacobs wrote:
For single file updates, this is commonly solved by writing data to
a temp file and using rename(2) to move it in place when it's ready.
For anything more complicated you need... a more complicated approach.
Note that transactional
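(The pattern described, as a shell sketch; paths and the generator command are made up. The point is that rename(2) within one filesystem is atomic, so readers see either the old file or the complete new one:)
$ tmp=$(mktemp /tank/app/.config.XXXXXX)   # temp file in the SAME filesystem as the target
$ generate_config > $tmp                   # hypothetical producer of the new contents
$ mv $tmp /tank/app/config                 # mv within a filesystem is rename(2): atomic replace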
On Wed, Jul 14, 2010 at 03:07:59PM -0600, Beau J. Bechdol wrote:
Not sure if this is the correct list to email or not. I am curious to
know: on my machine I have two hard drives (c8t0d0 and c8t1d0). Can someone
explain to me what this exactly means? What do c8, t0 and d0 actually
mean. I
On Thu, Jul 08, 2010 at 08:42:33PM -0700, Garrett D'Amore wrote:
On Fri, 2010-07-09 at 10:23 +1000, Peter Jeremy wrote:
In theory, collisions happen. In practice, given a cryptographic hash,
if you can find two different blocks or files that produce the same
output, please publicise it
On Wed, Jun 30, 2010 at 01:35:31PM -0700, valrh...@gmail.com wrote:
Finally, for my purposes, it doesn't seem like a ZIL is necessary? I'm
the only user of the fileserver, so there probably won't be more than
two or three computers, maximum, accessing stuff (and writing stuff)
remotely.
It
On Wed, Jun 16, 2010 at 04:44:07PM +0200, Arne Jansen wrote:
Please keep in mind I'm talking about a usage as ZIL, not as L2ARC or main
pool. Because ZIL issues nearly sequential writes, due to the NVRAM-protection
of the RAID-controller the disk can leave the write cache enabled. This means
On Fri, Jun 04, 2010 at 12:37:01PM -0700, Ray Van Dolson wrote:
On Fri, Jun 04, 2010 at 11:16:40AM -0700, Brandon High wrote:
On Fri, Jun 4, 2010 at 9:30 AM, Ray Van Dolson rvandol...@esri.com wrote:
The ISO's I'm testing with are the 32-bit and 64-bit versions of the
RHEL5 DVD ISO's.
On Mon, May 24, 2010 at 05:48:56PM -0400, Thomas Burgess wrote:
I recently got a new SSD (ocz vertex LE 50gb)
It seems to work really well as a ZIL performance wise. My question is, how
safe is it? I know it doesn't have a supercap so let's say data loss
occurs... is it just data loss or is
On Thu, May 20, 2010 at 04:23:49PM -0400, Thomas Burgess wrote:
I know I'm probably doing something REALLY stupid... but for some reason I
can't get send/recv to work over ssh. I just built a new media server and
i'd like to move a few filesystem from my old server to my new server but
for
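(For reference, the usual shape of the command; pool, dataset and host names are invented, and the receiving side needs root or the right delegated zfs permissions:)
$ zfs snapshot tank/media@move1
$ zfs send tank/media@move1 | ssh newserver zfs recv -d newpool
$ zfs send -i tank/media@move1 tank/media@move2 | ssh newserver zfs recv -d newpool   # later incremental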
On Wed, May 19, 2010 at 05:33:05AM -0700, Chris Gerhard wrote:
The reason for wanting to know is to try and find versions of a file.
No, there's no such guarantee. The same inode and generation number
pair is extremely unlikely to be re-used, but the inode number itself is
likely to be re-used.
On Wed, May 19, 2010 at 07:50:13AM -0700, John Hoogerdijk wrote:
Think about the potential problems if I don't mirror the log devices
across the WAN.
If you don't mirror the log devices then your disaster recovery
semantics will be that you'll miss any transactions that hadn't been
committed to
On Wed, May 19, 2010 at 02:29:24PM -0700, Don wrote:
Since it ignores the Cache Flush command and it doesn't have any
persistent buffer storage, disabling the write cache is the best you
can do.
This actually brings up another question I had: What is the risk,
beyond a few seconds of lost
On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote:
On 5/6/10 5:28 AM, Robert Milkowski wrote:
sync=disabled
Synchronous requests are disabled. File system transactions
only commit to stable storage on the next DMU transaction group
commit which can be many seconds.
Is there a
On Wed, Apr 21, 2010 at 10:45:24AM -0400, Edward Ned Harvey wrote:
From: Mark Shellenbaum [mailto:mark.shellenb...@oracle.com]
You can create/destroy/rename snapshots via mkdir, rmdir, mv inside the
.zfs/snapshot directory, however, it will only work if you're running the
command
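(Concretely, with an invented dataset tank/home mounted at /tank/home:)
$ mkdir /tank/home/.zfs/snapshot/today                               # zfs snapshot tank/home@today
$ mv /tank/home/.zfs/snapshot/today /tank/home/.zfs/snapshot/monday  # zfs rename tank/home@today tank/home@monday
$ rmdir /tank/home/.zfs/snapshot/monday                              # zfs destroy tank/home@monday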
On Wed, Apr 21, 2010 at 01:03:39PM -0500, Jason King wrote:
ISTR POSIX also doesn't allow a number of features that can be turned
on with zfs (even ignoring the current issues that prevent ZFS from
being fully POSIX compliant today). I think an additional option for
the snapdir property
On Tue, Apr 20, 2010 at 04:28:02PM +, A Darren Dunham wrote:
On Sat, Apr 17, 2010 at 09:03:33AM -0400, Edward Ned Harvey wrote:
zfs list -t snapshot lists in time order.
Good to know. I'll keep that in mind for my zfs send scripts but it's not
relevant for the case at hand.
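(For a send script it may still be worth asking for the order explicitly rather than relying on the default; dataset name invented:)
$ zfs list -H -t snapshot -o name -s creation -r tank/home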
On Fri, Apr 16, 2010 at 01:54:45PM -0400, Edward Ned Harvey wrote:
If you've got nested zfs filesystems, and you're in some subdirectory where
there's a file or something you want to rollback, it's presently difficult
to know how far back up the tree you need to go, to find the correct .zfs
On Fri, Apr 16, 2010 at 02:19:47PM -0700, Richard Elling wrote:
On Apr 16, 2010, at 1:37 PM, Nicolas Williams wrote:
I've a ksh93 script that lists all the snapshotted versions of a file...
Works over NFS too.
% zfshist /usr/bin/ls
History for /usr/bin/ls (/.zfs/snapshot/*/usr/bin/ls
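(The script itself isn't quoted here; a minimal sketch of the same idea -- walk each snapshot's view of a path -- might look like the following. This is a reconstruction, not Nico's zfshist:)
#!/bin/ksh93
# usage: hist <dataset-mountpoint> <path-relative-to-mountpoint>
mnt=$1 rel=$2
for snap in $mnt/.zfs/snapshot/*; do
    [[ -e $snap/$rel ]] && ls -l $snap/$rel
done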
On Fri, Apr 16, 2010 at 01:56:07PM -0400, Edward Ned Harvey wrote:
The typical problem scenario is: Some user or users fill up the filesystem.
They rm some files, but disk space is not freed. You need to destroy all
the snapshots that contain the deleted files, before disk space is available
On Tue, Apr 06, 2010 at 11:53:23AM -0400, Tony MacDoodle wrote:
Can I rollback a snapshot that I did a zfs send on?
ie: zfs send testpool/w...@april6 > /backups/w...@april6_2010
That you did a zfs send does not prevent you from rolling back to a
previous snapshot. Similarly for zfs recv --
One really good use for zfs diff would be: as a way to index zfs send
backups by contents.
Nico
--
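(For example, with invented snapshot names; the output is one line per changed path, prefixed with +, -, M or R:)
$ zfs diff tank/backup@2011-01-01 tank/backup@2011-02-01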
zfs diff is incredibly cool.
On Thu, Mar 25, 2010 at 04:23:38PM +, Darren J Moffat wrote:
If the data is in the L2ARC that is still better than going out to
the main pool disks to get the compressed version.
<advocate customer='devil'>
Well, one could just compress it... If you'd otherwise put compression
in the ssh
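(Either of these puts the compression in the transport rather than in the pool; names invented:)
$ zfs send tank/fs@snap | ssh -C backuphost zfs recv -d backup                     # ssh's built-in compression
$ zfs send tank/fs@snap | gzip -1 | ssh backuphost 'gunzip | zfs recv -d backup'   # explicit in the pipe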
On Thu, Mar 18, 2010 at 10:38:00PM -0700, Rob wrote:
Can a ZFS send stream become corrupt when piped between two hosts
across a WAN link using 'ssh'?
No. SSHv2 uses HMAC-MD5 and/or HMAC-SHA-1, depending on what gets
negotiated, for integrity protection. The chances of random on the wire
On Tue, Mar 02, 2010 at 11:10:52AM -0800, Bill Sommerfeld wrote:
On 03/02/10 08:13, Fredrich Maney wrote:
Why not do the same sort of thing and use that extra bit to flag a
file, or directory, as being an ACL-only file that will negate the rest
of the mask? That accomplishes what Paul is
On Mon, Mar 01, 2010 at 09:04:58PM -0800, Paul B. Henson wrote:
On Mon, 1 Mar 2010, Nicolas Williams wrote:
Yes, that sounds useful. (Group modebits could be applied to all ACEs
that are neither owner@ nor everyone@ ACEs.)
That sounds an awful lot like the POSIX mask_obj, which
BTW, it should be relatively easy to implement aclmode=ignore and
aclmode=deny, if you like.
- $SRC/common/zfs/zfs_prop.c needs to be updated to know about the new
values of aclmode.
- $SRC/uts/common/fs/zfs/zfs_acl.c:zfs_acl_chmod()'s callers need to be
modified:
- in the create
On Fri, Feb 26, 2010 at 03:00:29PM -0500, Miles Nordin wrote:
nw == Nicolas Williams nicolas.willi...@sun.com writes:
nw What could we do to make it easier to use ACLs?
1. how about AFS-style ones where the effective permission is the AND
of the ACL and the unix permission? You
On Fri, Feb 26, 2010 at 08:23:40AM -0800, Paul B. Henson wrote:
So far it's been quite a struggle to deploy ACL's on an enterprise central
file services platform with access via multiple protocols and have them
actually be functional and reliable. I can see why the average consumer
might give
On Fri, Feb 26, 2010 at 02:50:05PM -0800, Paul B. Henson wrote:
On Fri, 26 Feb 2010, Bill Sommerfeld wrote:
I believe this proposal is sound.
Mere words can not express the sheer joy with which I receive this opinion
from an @sun.com address ;).
I believe we can do a bit better.
A chmod
On Fri, Feb 26, 2010 at 04:26:43PM -0800, Paul B. Henson wrote:
On Fri, 26 Feb 2010, Nicolas Williams wrote:
I believe we can do a bit better.
A chmod that adds (see below) or removes one of r, w or x for owner is a
simple ACL edit (the bit may turn into multiple ACE bits, but whatever
On Wed, Feb 24, 2010 at 02:09:42PM -0600, Bob Friesenhahn wrote:
I have a directory here containing a million files and it has not
caused any strain for zfs at all although it can cause considerable
stress on applications.
The biggest problem is always the apps. For example, ls by default
On Wed, Feb 24, 2010 at 03:31:51PM -0600, Bob Friesenhahn wrote:
With millions of such tiny files, it makes sense to put the small
files in a separate zfs filesystem which has its recordsize property
set to a size not much larger than the size of the files. This should
reduce waste,
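(Concretely -- dataset name and size invented; recordsize only affects files written after it is set:)
$ zfs create -o recordsize=8k tank/smallfiles
$ zfs set recordsize=8k tank/smallfiles      # or, for an existing dataset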
On Fri, Feb 05, 2010 at 03:49:15PM -0500, c.hanover wrote:
Two things, mostly related, that I'm trying to find answers to for our
security team.
Does this scenario make sense:
* Create a filesystem at /users/nfsshare1, user uses it for a while,
asks for the filesystem to be deleted
* New
On Fri, Feb 05, 2010 at 04:41:08PM -0500, Miles Nordin wrote:
ch == c hanover chano...@umich.edu writes:
ch is there a way to a) securely destroy a filesystem,
AIUI zfs crypto will include this, some day, by forgetting the key.
Right.
but for SSD, zfs above a zvol, or zfs above a
On Fri, Feb 05, 2010 at 05:08:02PM -0500, c.hanover wrote:
In our particular case, there won't be snapshots of destroyed
filesystems (I create the snapshots, and destroy them with the
filesystem).
OK.
I'm not too sure on the particulars of NFS/ZFS, but would it be
possible to create a 1GB
On Thu, Feb 04, 2010 at 03:19:15PM -0500, Frank Cusack wrote:
BTW, I could just install everything in the global zone and use the
default inheritance of /usr into each local zone to see the data.
But then my zones are not independent portable entities; they would
depend on some non-default
On Thu, Feb 04, 2010 at 04:03:19PM -0500, Frank Cusack wrote:
On 2/4/10 2:46 PM -0600 Nicolas Williams wrote:
In Frank's case, IIUC, the better solution is to avoid the need for
unionfs in the first place by not placing pkg content in directories
that one might want to be writable from zones
On Thu, Jan 21, 2010 at 02:11:31PM -0800, Moshe Vainer wrote:
PS: For data that you want to mostly archive, consider using Amazon
Web Services (AWS) S3 service. Right now there is no charge to push
data into the cloud and it's $0.15/gigabyte to keep it there. Do a
quick (back of the napkin)
On Thu, Dec 17, 2009 at 03:32:21PM +0100, Kjetil Torgrim Homme wrote:
if the hash used for dedup is completely separate from the hash used for
data protection, I don't see any downsides to computing the dedup hash
from uncompressed data. why isn't it?
Hash and checksum functions are slow
On Thu, Dec 03, 2009 at 03:57:28AM -0800, Per Baatrup wrote:
I would like to to concatenate N files into one big file taking
advantage of ZFS copy-on-write semantics so that the file
concatenation is done without actually copying any (large amount of)
file content.
cat f1 f2 f3 f4 f5 > f15
On Thu, Dec 03, 2009 at 12:44:16PM -0800, Per Baatrup wrote:
if any of f2..f5 have different block sizes from f1
This restriction does not sound so bad to me if this only refers to
changes to the blocksize of a particular ZFS filesystem or copying
between different ZFSes in the same pool.
On Mon, Sep 07, 2009 at 09:58:19AM -0700, Richard Elling wrote:
I only know of hole punching in the context of networking. ZFS doesn't
do networking, so the pedantic answer is no.
But a VDEV may be an iSCSI device, thus there can be networking below
ZFS.
For some iSCSI targets (including
On Tue, Nov 10, 2009 at 03:33:22PM -0600, Tim Cook wrote:
You're telling me a scrub won't actively clean up corruption in snapshots?
That sounds absolutely absurd to me.
Depends on how much redundancy you have in your pool. If you have no
mirrors, no RAID-Z, and no ditto blocks for data, well,
On Mon, Nov 02, 2009 at 12:58:32PM -0500, Dennis Clarke wrote:
Looking at FIPS-180-3 in sections 4.1.2 and 4.1.3 I was thinking that the
major leap from SHA256 to SHA512 was a 32-bit to 64-bit step.
ZFS doesn't have enough room in blkptr_t for 512-bit hashes.
Nico
--
On Mon, Nov 02, 2009 at 11:01:34AM -0800, Jeremy Kitchen wrote:
forgive my ignorance, but what's the advantage of this new dedup over
the existing compression option? Wouldn't full-filesystem compression
naturally de-dupe?
If you snapshot/clone as you go, then yes, dedup will do little
On Mon, Oct 26, 2009 at 08:53:50PM -0700, Anil wrote:
I haven't tried this, but this must be very easy with dtrace. How come
no one mentioned it yet? :) You would have to monitor some specific
syscalls...
DTrace is not reliable in this sense: it will drop events rather than
overburden the
On Thu, Oct 01, 2009 at 11:03:06AM -0700, Rudolf Potucek wrote:
Hmm ... I understand this is a bug, but only in the sense that the
message is not sufficiently descriptive. Removing the file from the
source filesystem will not necessarily free any space because the
blocks have to be retained in
On Fri, Sep 04, 2009 at 01:41:15PM -0700, Richard Elling wrote:
On Sep 4, 2009, at 12:23 PM, Len Zaifman wrote:
We have groups generating terabytes a day of image data from lab
instruments and saving them to an X4500.
Wouldn't it be easier to compress at the application, or between the
So, the manpage seems to have a bug in it. The valid values for the
normalization property are:
none | formC | formD | formKC | formKD
Nico
--
On Fri, Aug 21, 2009 at 06:46:32AM -0700, Chris Murray wrote:
Nico, what is a zero-link file, and how would I go about finding
whether I have one? You'll have to bear with me, I'm afraid, as I'm
still building my Solaris knowledge at the minute - I was brought up
on Windows. I use Solaris for
Perhaps an open 14GB, zero-link file?
On Thu, Aug 13, 2009 at 05:57:57PM -0500, Haudy Kazemi wrote:
Therefore, if you need to interoperate with MacOS X then you should
enable the normalization feature.
Thank you for the reply. My goal is to configure the filesystem for the
lowest common denominator without knowing up front
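(For reference, normalization -- like utf8only and casesensitivity -- can only be set when the dataset is created; names invented:)
$ zfs create -o normalization=formD -o utf8only=on tank/export/shared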
On Wed, Jul 29, 2009 at 03:35:06PM +0100, Darren J Moffat wrote:
Andriy Gapon wrote:
What do you think about the following feature?
Subdirectory is automatically a new filesystem property - an
administrator turns
on this magic property of a filesystem, after that every mkdir *in the
On Fri, Jul 24, 2009 at 05:01:15PM +0200, dick hoogendijk wrote:
On Fri, 24 Jul 2009 10:44:36 -0400
Kyle McDonald kmcdon...@egenera.com wrote:
... then it seems like a shame (or a waste?) not to equally
protect the data both before it's given to ZFS for writing, and after
ZFS reads it
On Wed, Jul 22, 2009 at 02:45:52PM -0500, Bob Friesenhahn wrote:
On Wed, 22 Jul 2009, t. johnson wrote:
Lets say I have a simple-ish setup that uses vmware files for
virtual disks on an NFS share from zfs. I'm wondering how zfs'
variable block size comes into play? Does it make the alignment
On Tue, Jul 21, 2009 at 02:45:57PM -0700, Richard Elling wrote:
But to put this in perspective, you would have to *delete* 20 GBytes
Or overwrite (since the overwrites turn into COW writes of new blocks
and the old blocks are released if not referred to from a snapshot).
of data a day on a ZFS
On Fri, Jun 19, 2009 at 04:09:29PM -0400, Miles Nordin wrote:
Also, as I said elsewhere, there's a barrier controlled by Sun to
getting bugs accepted. This is a useful barrier: the bug database is
a more useful drive toward improvement if it's not cluttered. It also
means, like I said,
On Fri, May 22, 2009 at 04:40:43PM -0600, Eric D. Mudama wrote:
As another datapoint, the 111a opensolaris preview got me ~29MB/s
through an SSH tunnel with no tuning on a 40GB dataset.
Sender was a Core2Duo E4500 reading from SSDs and receiver was a Xeon
E5520 writing to a few mirrored
On Thu, Apr 23, 2009 at 09:59:33AM -0600, Matthew Ahrens wrote:
zfs destroy [-r] -p sounds great.
I'm not a big fan of the -t template. Do you have conflicting snapshot
names due to the way your (zones) software works, or are you concerned
about sysadmins creating these conflicting
On Thu, Apr 23, 2009 at 11:25:54AM -0700, Edward Pilatowicz wrote:
An interesting idea. I can file an RFE on this as well, but there are a
couple of side effects to consider with this approach.
setting this property would break zfs snapshot -r if there are
multiple snapshots and clones of a
On Wed, Apr 15, 2009 at 07:39:13PM +0200, Kees Nuyt wrote:
On Wed, 15 Apr 2009 14:28:45 +0800, ??
sky...@gmail.com wrote:
I did some tests of MySQL's insert performance
on ZFS, and ran into a big performance problem,
*I'm not sure what the point is*.
Q1: Did you set the filesystem's
On Wed, Apr 01, 2009 at 10:58:34AM +0200, casper@sun.com wrote:
I know that this is one of the additional protocols developed for NFSv2
and NFSv3; does NFSv4 have a similar mechanism to get the quota?
Yes, NFSv4.0 and 4.1 both provide the same quota information retrieval
interface, three
On Wed, Apr 01, 2009 at 10:04:47AM +0100, Darren J Moffat wrote:
If we had the .zfs/props/propname RFE implemented that would allow
users to see this regardless of what file sharing protocol they use.
As well as lots of other very interesting info about the filesystem.
Indeed!
On Tue, Mar 31, 2009 at 02:37:02PM -0500, Mike Gerdts wrote:
The user or group is specified using one of the following forms:
posix name (eg. ahrens)
posix numeric id (eg. 126829)
sid name (eg. ahr...@sun)
sid numeric id (eg. S-1-12345-12423-125829)
How does this work with zones?
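(For reference, the properties these name forms feed into; 'ahrens' is the example name from the quoted text, the dataset is invented:)
$ zfs set userquota@ahrens=10G tank/home
$ zfs get userused@ahrens tank/home
$ zfs userspace tank/home        # per-user usage and quota summary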