Moved from PSARC to zfs-code... this discussion is separate from the case.
Eric Kustarz wrote:
On Jun 23, 2008, at 1:20 PM, Darren Reed wrote:
eric kustarz wrote:
On Jun 23, 2008, at 1:07 PM, Darren Reed wrote:
Tim Haley wrote:
primarycache=all | none | metadata
Controls what
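For anyone wanting to try the new property, a minimal sketch (the pool
and filesystem names here are illustrative):

   # cache only metadata for this dataset, e.g. a streaming workload
   # zfs set primarycache=metadata tank/backups
   # zfs get primarycache tank/backups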
Yaniv Aknin wrote:
Thanks for the reference.
I read that thread to the end, and saw there are some complex considerations
regarding changing st_dev on an open file, but no decision. Despite this
complexity, I think the situation is quite brain damaged - I'm moving large
files between
Nicolas Williams wrote:
On Mon, Dec 31, 2007 at 07:20:30PM +1100, Darren Reed wrote:
Frank Hofmann wrote:
http://www.opengroup.org/onlinepubs/009695399/functions/rename.html
ERRORS
The rename() function shall fail if:
[ ... ]
[EXDEV]
[CX] The links named
[EMAIL PROTECTED] wrote:
...
That's a sad situation for backup utilities, by the way - a backup
tool would have no way of finding out that file X on fs A already
existed as file Z on fs B. So what? If the file got copied, byte by
byte, the same situation exists, the contents are
Frank Hofmann wrote:
On Fri, 28 Dec 2007, Darren Reed wrote:
[ ... ]
Is this behaviour defined by a standard (such as POSIX or the
VFS design) or are we free to innovate here and do something
that allowed such a shortcut as required?
Wrt. standards, quote from:
http
[EMAIL PROTECTED] wrote:
On Thu, 27 Dec 2007, [EMAIL PROTECTED] wrote:
I would guess that this is caused by different st_dev values in the new
filesystem. In such a case, mv copies the files instead of renaming
them.
No, it's because they are different filesystems and the data needs to
Having just done a largish mv from one ZFS filesystem to another ZFS
filesystem in the same zpool, I was somewhat surprised at how long it
took - I was expecting it to be near instant like it would be within the
same filesystem.
Are there optimisations possible here?
Surely it should be possible
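A quick way to see why mv falls back to copying - rename(2) returns
EXDEV across datasets, even within one zpool. A sketch (dataset names
illustrative), with the relevant line of truss output:

   # zfs create tank/a ; zfs create tank/b
   # touch /tank/a/file
   # truss -t rename mv /tank/a/file /tank/b/
   rename("/tank/a/file", "/tank/b/file")  Err#18 EXDEV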
This changed subject long ago...
[EMAIL PROTECTED] wrote:
That "but it existed only in RAM in my servers" should not be a
defense for failing to retain discoverable evidence is distinct from
the issue of what constitutes discoverable evidence.
But only if you were told you needed to
mike wrote:
it's about time. this hopefully won't spark another license debate,
etc... ZFS may never get into linux officially, but there's no reason
a lot of the same features and ideologies can't make it into a
linux-approved-with-no-arguments filesystem...
Well, there's a dark horse here
Prior to rebooting my system (S10U2) yesterday, I had half
a dozen ZFS shares active...
Today, now that I look at this, I find only 1 of them is
being exported through NFS.
# zfs list -o name,sharenfs
NAME             SHARENFS
biscuit          off
biscuit/crashes  off
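Re-enabling a share by hand is a one-liner, and share(1M) will confirm
it took (filesystem name from the listing above):

   # zfs set sharenfs=on biscuit/crashes
   # share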
Over the weekend I got ZFS up and running under FreeBSD and have
had much the same experience with it that I have with Solaris - it works
great out of the box and once configured, it is easy to forget about.
So far the only real difference is anything you might tune via /etc/system
(or mdb) is
Robert Milkowski wrote:
Hello Darren,
Monday, April 23, 2007, 9:14:35 PM, you wrote:
DRSC The environment that it is running in has less memory than I've used
DRSC it on with Solaris before, so I went to look at how to tune the ARC,
DRSC only to discover that it had already been capped to
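For the record, a sketch of capping the ARC on both platforms - the
Solaris line assumes bits recent enough to have the zfs_arc_max
tunable:

   * Solaris, in /etc/system (512MB cap):
   set zfs:zfs_arc_max = 0x20000000

   * FreeBSD, in /boot/loader.conf:
   vfs.zfs.arc_max="512M"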
Claus Guttesen wrote:
Gents, how come this thread - without any relation to zfs at all - is
discussed on this list? Do move this irrelevant thread to another
forum.
My intention in subscribing to this list was *not* to read about
a layman's perception of this or that license!
Because
From: Joerg Schilling [EMAIL PROTECTED]
Ignatich [EMAIL PROTECTED] wrote:
Joerg Schilling writes:
There are a lot of misunderstandings with the GPL.
Porting ZFS to Linux would not make ZFS a derived work from Linux.
I do not see why anyone could claim that there is a need to publish ZFS
Erblichs wrote:
My two cents,
...
Secondly, if I can add an additional item: would anyone
want to be able to encrypt the data instead of compressing it, or
to be able to combine encryption with compression?
Yes, I might want to encrypt all of my laptop's hard drive contents and
I
Mark Maybee wrote:
Anton B. Rang wrote:
This sounds a lot like:
6417779 ZFS: I/O failure (write on ...) -- need to
reallocate writes
Which would allow us to retry write failures on
alternate vdevs.
Of course, if there's only one vdev, the write should be retried to a
different block on the
From: Toby Thain [EMAIL PROTECTED]
On 11-Apr-07, at 8:25 PM, Ignatich wrote:
Rich Teer writes:
On Wed, 11 Apr 2007, Rayson Ho wrote:
Why does everyone need to be compatible with Linux?? Why not Linux
changes its license and be compatible with *BSD and Solaris??
I agree with this sentiment,
From: Darren J Moffat [EMAIL PROTECTED]
...
The other problem is that you basically need a globally unique registry
anyway so that compress algorithm 1 is always lzjb, 2 is gzip, 3 is
etc etc. Similarly for crypto and any other transform.
I've two thoughts on that:
1) if there is to be a
Adam,
Your blog entry[1] about gzip for ZFS raises
a couple of questions...
1) It would appear that a ZFS filesystem can support files of
varying compression algorithm. If a file is compressed using
method A but method B is now active, if I truncate the file
and
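A sketch of how the mix arises - the compression property only applies
to blocks written after it changes (names illustrative):

   # zfs set compression=on tank/fs      # method A (lzjb)
   # cp /data/big /tank/fs/big           # blocks written with lzjb
   # zfs set compression=gzip tank/fs    # method B; old blocks stay lzjb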
Robert Milkowski wrote:
Hello Darren,
Thursday, March 29, 2007, 12:01:21 AM, you wrote:
DRSC Adam,
...
DRSC 2) The question of whether or not to use bzip2 was raised in
DRSC the comment section of your blog. How easy would it be to
DRSC implement a pluggable (or more generic) interface
Jim Mauro wrote:
All righty...I set c_max to 512MB, c to 512MB, and p to 256MB...
arc::print -tad
{
...
c02e29e8 uint64_t size = 0t299008
c02e29f0 uint64_t p = 0t16588228608
c02e29f8 uint64_t c = 0t33176457216
c02e2a00 uint64_t c_min =
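If your bits are recent enough to export the arcstats kstat (and the
::arc dcmd), the same numbers can be sanity-checked without printing
the raw structure:

   $ kstat -m zfs -n arcstats
   # echo "::arc" | mdb -k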
Robert Milkowski wrote:
Hello Darren,
Tuesday, March 20, 2007, 3:27:26 AM, you wrote:
Using Solaris 10, Update 2
I've just rebooted my desktop and I have discovered that a ZFS
filesystem appears to have gone missing.
The filesystem in question was called
Using Solaris 10, Update 2
I've just rebooted my desktop and I have discovered that a ZFS
filesystem appears to have gone missing.
The filesystem in question was called biscuit/home and should
have been modified to have its mountpoint set to /export/home.
Before the reboot, I did a lot of
James Dickens wrote:
On 3/12/07, [EMAIL PROTECTED] wrote:
What issues, if any, are likely to surface with using Solaris
inside vmware as a guest os, if I choose to use ZFS?
works great in vmware server, IO
What issues, if any, are likely to surface with using Solaris
inside vmware as a guest os, if I choose to use ZFS?
I'm assuming that ZFS's ability to maintain data integrity
will prevail and protect me from any problems that the
addition of vmware might introduce.
Are there likely to be any
Darren J Moffat wrote:
...
Of course. I didn't mention it because I thought it was obvious but
this would NOT break the COW or the transactional integrity of ZFS.
One of the possible ways that the to-be-bleached blocks are dealt
with in the face of a crash is just like everything else -
Darren J Moffat wrote:
One other area where it is useful is when you are in a jurisdiction
where a court order may require you to produce your encryption keys -
yes such jurisdictions exist and I don't want to debate the human
rights angle or social engineering aspects of this just state that
Darren,
A point I don't believe has yet been addressed in this
discussion is: what is the threat model?
Are we targeting NIST requirements for some customers
or just general use by everyday folks?
Darren
Robert Petkus wrote:
Folks,
When using sharenfs, do I really need to NFS export the parent zfs
filesystem *and* all of its children? For example, if I have
/zfshome
/zfshome/user1
/zfshome/user1+n
it seems to me like I need to mount each of these exported filesystems
individually on the NFS
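Inheritance at least keeps the server side to one command; each child
is still a separate filesystem, so the clients need a mount per
filesystem (share options illustrative):

   # zfs set sharenfs=rw zfshome
   # zfs get -r sharenfs zfshome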
I'm doing a putback onto my local workstation, watching the disk
activity with zpool iostat, when I start to notice something
quite strange...
zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
How I managed to make this happen, I'm now no longer sure of.
After upgrading my workstation to Solaris 10, Update 2, I could
not find any ZFS pools to import where I thought they were.
Whether this is due to the partitioning not being correctly preserved
or some other problem remains a
Hi,
my box has started panic'ing in zpool.
I'm using bits around a year old (which doesn't help) on S10FCS - when I
can get a DVD with S10U2, I'll try that but...
But my concern here is that this panic pops up at boot and the only way
around this has been to rename /kernel/drv/amd64/zpool to
David Dyer-Bennet wrote:
On 10/11/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
There are tools around that can tell you if hardware is supported by
Solaris.
One such tool can be found at:
http://www.sun.com/bigadmin/hcl/hcts/install_check.html
Beware of this tool. It reports Y for both
Luke Scharf wrote:
Although regular Solaris is good for what I'm doing at work, I prefer
apt-get or yum for package management for a desktop. So, I've been
playing with Nexenta / GnuSolaris -- which appears to be the
open-sourced Solaris kernel and low-level system utilities with Debian
Jeff Bonwick wrote:
is zfs any less efficient with just using a portion of a
disk versus the entire disk?
As others mentioned, if we're given a whole disk (i.e. no slice
is specified) then we can safely enable the write cache.
With all of the talk about performance problems due to
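The difference shows up at pool creation time - a sketch (device names
illustrative):

   # whole disk: ZFS labels it and can safely enable the write cache
   # zpool create tank c1t0d0
   # slice: the write cache is left alone, other slices may be in use
   # zpool create tank c1t0d0s0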
Patrick Petit wrote:
Hi,
Using a ZFS emulated volume, I wasn't expecting to see a system [1]
hang caused by a SCSI error. What do you think? The error is not
systematic. When it happens, the Solaris/Xen dom0 console keeps
displaying the following message and the system hangs.
Aug 3
Anton B. Rang wrote:
I'd filed 6452505 (zfs create should set permissions on underlying mountpoint)
so that this shouldn't cause problems in the future
6238072 might also be of interest.
Darren
I've had people mention that WAFL does indeed support clones of snapshots.
Is this a "what version of WAFL" problem?
Darren
Bart Smaalders wrote:
...
I just swap on a zvol w/ my ZFS root machine.
I haven't been watching...what's the current status of using
ZFS for swap/dump?
Is a/the swap solution to use mkswap and then specify that file
in vfstab?
Darren
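There's no mkswap on Solaris - the usual recipe is a zvol plus
swap(1M), with an optional vfstab entry to make it stick across boots
(names and size illustrative):

   # zfs create -V 2G rpool/swapvol
   # swap -a /dev/zvol/dsk/rpool/swapvol
   # swap -l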
Mark Shellenbaum wrote:
The following is the delegated admin model that Matt and I have been
working on. At this point we are ready for your feedback on the
proposed model.
-Mark
PERMISSION GRANTING
zfs
Mark Shellenbaum wrote:
Glenn Skinner wrote:
The following is a nit-level comment, so I've directed it only to you,
rather than to the entire list.
Date: Mon, 17 Jul 2006 09:57:35 -0600
From: Mark Shellenbaum [EMAIL PROTECTED]
Subject: [zfs-discuss] Proposal: delegated
Jeff Bonwick wrote:
PERMISSION GRANTING
zfs allow [-l] [-d] everyone|user|group ability[,ability...] \
...
zfs unallow dataset [-r] [-l] [-d]
If we're going to use English words, it should be allow and disallow.
The problem with 'disallow' is that it implies
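Following the proposed syntax above, granting joe the basics on his
own home dataset might look like this (abilities illustrative; in the
implementation that eventually shipped, zfs allow with just a dataset
prints what has been granted):

   # zfs allow -l joe create,mount,snapshot tank/home/joe
   # zfs allow tank/home/joe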
Bill Moore wrote:
On Wed, Jul 19, 2006 at 03:10:00AM +0200, [EMAIL PROTECTED] wrote:
So how many of the 128 bits of the blockpointer are used for things
other than to point where the block is?
128 *bits*? What filesystem have you been using? :) We've got
luxury-class block
[EMAIL PROTECTED] wrote:
Hello,
because creating/using filesystems in ZFS becomes cheap, it is now useful
to create/organize filesystems in a hierarchy:
bash-3.00# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
dns-pool        136K  43.1G  25.5K  /dns-pool
dns-pool/zones
Darren J Moffat wrote:
Jeff Victor wrote:
[EMAIL PROTECTED] wrote:
bash-3.00# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
dns-pool              136K  43.1G  25.5K  /dns-pool
dns-pool/zones         50K  43.1G  25.5K  /dns-pool/zones
dns-pool/zones/dns1  24.5K  43.1G
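One payoff of the hierarchy is property inheritance - set something on
an interior dataset and the children pick it up (property choice
illustrative):

   # zfs set compression=on dns-pool/zones
   # zfs get -r compression dns-pool/zones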
To put the cat amongst the pigeons here, there were those
within Sun that tried to tell the ZFS team that a backup
program such as zfsdump was necessary, but we got told
that amanda and other tools were what people used these
days (in corporate accounts) and therefore zfsdump and
zfsrestore wasn't
Siegfried Nikolaivich wrote:
Hello,
What kind of x86 CPU does ZFS prefer? In particular, what kind of CPU is
optimal when using RAID-Z with a large number of disks (8)?
My experience is that for hardware that will be used in a
server orientated role, there are a lot of considerations
Darren J Moffat wrote:
The rest is just uninformed licensing-related FUD.
More fool them for not getting it!
Indeed. There was a followup to that email that went
through and debunked that posting along exactly those
lines and to which the OP did not respond.
Darren
What danger is there in stripping off the leading / from zfs
command args and using what is left as a filesystem name?
Quite often I do a quick copy-paste to get from df output
to the zfs command line and every time I need to re-edit
the command line because the copy-paste takes the leading
/
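Until then, a shell wrapper can do the stripping - a sketch for
ksh/bash, assuming the datasets use the default mountpoints (the
wrapper name zf is made up):

   zf() { cmd=$1; shift; zfs "$cmd" "${@#/}"; }
   # now "zf list /export/home" runs "zfs list export/home"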
grant beattie wrote:
On Tue, Jun 27, 2006 at 10:14:06AM +0200, Patrick wrote:
Hi,
I've just started using ZFS + NFS, and i was wondering if there is
anything I can do to optimise it for being used as a mailstore? (
small files, lots of them, with lots of directories and high
concurrent
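One commonly suggested first knob for a mailstore is turning off atime
updates, which saves a metadata write on every message read (dataset
name illustrative):

   # zfs set atime=off tank/mail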
Anton B. Rang wrote:
Actually, while Seagate's little white paper doesn't explicitly say so, the
FLASH is used for a write cache and that provides one of the major benefits:
Writes to the disk rarely need to spin up the motor. Probably 90+% of all
writes to disk will fit into the cache in a
Jonathan Adams wrote:
On Tue, Jun 20, 2006 at 09:32:58AM -0700, Richard Elling wrote:
Flash is (can be) a bit more sophisticated. The problem is that they
have a limited write endurance -- typically spec'ed at 100k writes to
any single bit. The good flash drives use block relocation,
[EMAIL PROTECTED] wrote:
Also, options such as -nomtime and -noctime have been introduced
alongside -noatime in some free operating systems to limit the amount
of meta data that gets written back to disk.
Those seem rather pointless. (mtime and ctime generally imply other
changes,
Mike Gerdts wrote:
On 6/17/06, Dale Ghent [EMAIL PROTECTED] wrote:
The concept of shifting blocks in a zpool around in the background as
part of a scrubbing process and/or on the order of a explicit command
to populate newly added devices seems like it could be right up ZFS's
alley. Perhaps
Jeff Bonwick wrote:
...
Since we know that intent log blocks don't live for more than a
single transaction group (which is about five seconds), there's
no reason to allocate them space-efficiently. It would be far
better, when allocating a B-byte intent log block in an N-disk
RAID-Z group, to
Darren J Moffat wrote:
Roland Mainz wrote:
Darren J Moffat wrote:
James Dickens wrote:
I think ZFS should add the concept of ownership to a ZFS filesystem,
so if i create a filesystem for joe, he should be able to use his
space however he sees fit, if he wants to turn on compression or