Darren J Moffat [EMAIL PROTECTED] wrote:
Jeff Bonwick wrote:
I personally hate this device naming semantic (/dev/rdsk/c-t-d
not meaning what you'd logically expect it to). (It's a generic
Solaris bug, not a ZFS thing.) I'll see if I can get it changed.
Because
Dennis Clarke [EMAIL PROTECTED] wrote:
As near as I can tell the ZFS filesystem has no way to back up easily to a
tape in the same way that ufsdump has served for years and years.
...
# mt -f /dev/rmt/0cbn status
HP DAT-72 tape drive:
sense key(0x0)= No Additional Sense residual= 0
Darren Reed [EMAIL PROTECTED] wrote:
To put the cat amongst the pigeons here, there were those
within Sun that tried to tell the ZFS team that a backup
program such as zfsdump was necessary but we got told
that amanda and other tools were what people used these
days (in corporate accounts)
Justin Stringfellow [EMAIL PROTECTED] wrote:
Why aren't you using amanda or something else that uses
tar as the means by which you do a backup?
Using something like tar to take a backup forgoes the ability to do things
like the clever incremental backups that ZFS can achieve though; e.g.
Dennis Clarke [EMAIL PROTECTED] wrote:
# mt -f /dev/rmt/0cbn status
HP DAT-72 tape drive:
sense key(0x0)= No Additional Sense residual= 0 retries= 0
file no= 0 block no= 0
# zfs send zfs0/[EMAIL PROTECTED] > /dev/rmt/0cbn
cannot write stream: I/O error
#
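The intended shape of this is to redirect the send stream straight to the tape device and later pull it back with zfs receive. A minimal sketch, assuming a working pool and tape drive; the pool, dataset, and snapshot names here are hypothetical:

```shell
# Create the snapshot to dump, then stream it to the tape device.
zfs snapshot zfs0/home@backup1
zfs send zfs0/home@backup1 > /dev/rmt/0cbn

# To restore: rewind the tape and feed the stream to zfs receive.
mt -f /dev/rmt/0cbn rewind
zfs receive zfs0/home_restored < /dev/rmt/0cbn
```

Note that zfs send writes to stdout, so without the redirect it has nowhere to put the stream.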
This looks
Richard Elling [EMAIL PROTECTED] wrote:
I'll call your bluff. Is a zpool create any different for backup
than the original creation? Neither ufsdump nor tar-like programs
do a mkfs or tunefs. In those cases, the sys admin still has to
create the file system using whatever volume manager
Dick Davies [EMAIL PROTECTED] wrote:
As an aside, is there a general method to generate bootable
opensolaris DVDs? The only way I know of getting opensolaris on
is installing sxcr and then BFUing on top.
A year ago, I did publish a toolkit to create bootable SchilliX CDs/DVDs.
Would this
Luke Scharf [EMAIL PROTECTED] wrote:
Karen Chau wrote:
I understand Legato doesn't work with ZFS yet. I looked through the
email archives, cpio and tar were mentioned. What is my best option
if I want to dump approx 40G to tape?
Am I correct in saying that the issue was not getting
Jeremy Teo [EMAIL PROTECTED] wrote:
A couple of use cases I was considering off hand:
1. Oops, I truncated my file.
2. Oops, I saved over my file.
3. Oops, an app corrupted my file.
4. Oops, I rm -rf'd the wrong directory.
All of which can be solved by periodic snapshots, but versioning gives
us
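The snapshot-based recovery those cases rely on can be sketched as follows; the dataset, snapshot, and file names are hypothetical:

```shell
# Take a periodic snapshot (e.g. from cron).
zfs snapshot tank/home@hourly1

# After an accidental truncate/overwrite/rm, the old version is still
# visible read-only under the .zfs/snapshot directory; copy it back:
cp /tank/home/.zfs/snapshot/hourly1/report.txt /tank/home/report.txt
```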
Nicolas Williams [EMAIL PROTECTED] wrote:
On Fri, Oct 06, 2006 at 12:02:16PM -0700, Matthew Ahrens wrote:
In my opinion, the marginal benefit of per-write(2) versions over
snapshots (which can be per-transaction, ie. every ~5 seconds) does not
outweigh the complexity of implementation
David Dyer-Bennet [EMAIL PROTECTED] wrote:
On 10/6/06, Erik Trimble [EMAIL PROTECTED] wrote:
First of all, let's agree that this discussion of File Versioning makes
no more reference to its usage as Version Control. That is, we aren't
going to talk about it being useful for source code,
Erik Trimble [EMAIL PROTECTED] wrote:
In order for an FV implementation to be useful for this stated purpose,
it must fulfill the following requirements:
(1) Clean interface for users. That is, one must NOT be presented with
a complete list of all versions unless explicitly asked for
Roch [EMAIL PROTECTED] wrote:
I would add that this is not a bug or deficiency in
implementation. Any NFS implementation tweak to make 'tar x'
go as fast as direct attached will lead to silent data
corruption (tar x succeeds but the files don't checksum
ok).
Erik Trimble [EMAIL PROTECTED] wrote:
The only idea I get that matches this criterion is to have the versions
in the extended attribute name space.
Jörg
Realistically speaking, that's my conclusion, if we want a nice clean,
well-designed solution. You need to hide the versioning
Nicolas Williams [EMAIL PROTECTED] wrote:
On Sat, Oct 07, 2006 at 01:43:29PM +0200, Joerg Schilling wrote:
The only idea I get that matches this criterion is to have the versions
in the extended attribute name space.
Indeed. All that's needed then, CLI UI-wise, beyond what we have now
Nicolas Williams [EMAIL PROTECTED] wrote:
You're arguing for treating FV as extended/named attributes :)
I think that'd be the right thing to do, since we have tools that are
aware of those already. Of course, we're talking about somewhat magical
attributes, but I think that's fine (though,
Nicolas Williams [EMAIL PROTECTED] wrote:
On Mon, Oct 09, 2006 at 12:44:34PM +0200, Joerg Schilling wrote:
Nicolas Williams [EMAIL PROTECTED] wrote:
You're arguing for treating FV as extended/named attributes :)
I think that'd be the right thing to do, since we have tools
Spencer Shepler [EMAIL PROTECTED] wrote:
The close-to-open behavior of NFS clients is what ensures that the
file data is on stable storage when close() returns.
In the 1980s this was definitely not the case. When did this change?
The meta-data requirements of NFS are what ensure that file
Spencer Shepler [EMAIL PROTECTED] wrote:
On Thu, Joerg Schilling wrote:
Spencer Shepler [EMAIL PROTECTED] wrote:
The close-to-open behavior of NFS clients is what ensures that the
file data is on stable storage when close() returns.
In the 1980s this was definitely not the case
Nicolas Williams [EMAIL PROTECTED] wrote:
On Wed, Oct 11, 2006 at 08:24:13PM +0200, Joerg Schilling wrote:
Before we start defining the first official functionality for this Sun
feature, we should define a mapping for Mac OS, FreeBSD and Linux. It may
make sense to define a sub
Spencer Shepler [EMAIL PROTECTED] wrote:
I didn't comment on the error conditions that can occur during
the writing of data upon close(). What you describe is the preferred
method of obtaining any errors that occur during the writing of data.
This occurs because the NFS client is writing
Spencer Shepler [EMAIL PROTECTED] wrote:
Sorry, the code in Solaris would behave as I described. Upon the
application closing the file, modified data is written to the server.
The client waits for completion of those writes. If there is an error,
it is returned to the caller of close().
Jeff Victor [EMAIL PROTECTED] wrote:
Your wording did not match reality; that is why I wrote this.
You wrote that upon close() the client will first do something similar to
fsync on that file. The problem is that this is done asynchronously and the
close() return value
Spencer Shepler [EMAIL PROTECTED] wrote:
On Wed, Jonathan Edwards wrote:
On Oct 25, 2006, at 15:38, Roger Ripley wrote:
IBM has contributed code for NFSv4 ACLs under AIX's JFS; hopefully
Sun will not tarry in following their lead for ZFS.
Erik Trimble [EMAIL PROTECTED] wrote:
There have been extensive discussions on loadable modules and licensing
w/r/t the GPLv2 in the linux kernel. nVidia, amongst others, pushed hard
to allow for non-GPL-compatible licensed code to be allowed as a Linux
kernel module. However, the kernel
Paul van der Zwan [EMAIL PROTECTED] wrote:
Sure UFS and ZFS can be faster, but having fast, but possibly dangerous,
defaults gives you nice benchmark figures ;-)
In real life I prefer the safe, but a bit slower, defaults, as should
anybody who values his data.
There is another point
There is another point
dudekula mastan [EMAIL PROTECTED] wrote:
1) On Linux, to detect the presence of ext2/ext3 file systems on a device we
use the tune2fs command. Similar to tune2fs, is there any command to detect
the presence of a ZFS file system on a device?
2) When a device is shared between two
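For the first question, a few Solaris commands can identify ZFS on a device; a sketch (the device paths are placeholders, and all of these need real hardware):

```shell
# fstyp inspects a raw device and prints the filesystem type it finds,
# e.g. "zfs" if a ZFS label is present:
fstyp /dev/rdsk/c0t0d0s0

# zdb -l dumps the four ZFS vdev labels from a device, if any exist:
zdb -l /dev/rdsk/c0t0d0s0

# zpool import (with no arguments) scans attached devices and lists
# exported pools that could be imported:
zpool import
```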
Al Hopper [EMAIL PROTECTED] wrote:
On Tue, 5 Dec 2006, Krzys wrote:
Thanks, ah another weird thing is that when I run format on that drive I
get a coredump :(
... snip
Try zeroing out the disk label with something like:
dd if=/dev/zero of=/dev/rdsk/c?t?d?p0 bs=1024k
Eric Schrock [EMAIL PROTECTED] wrote:
On Sat, Jan 13, 2007 at 12:11:26PM -0800, Richard Elling wrote:
So, what is in your format.dat? I haven't seen an MD21 in over 15 years.
I would have thought that we removed it from format.dat long ago...
-- richard
This sounds like:
5020503
Rob Logan [EMAIL PROTECTED] wrote:
FWIW, the Micropolis 1355 is a 141 MByte (!) ESDI disk.
The MD21 is an ESDI to SCSI converter.
yup... its the board in the middle left of
http://rob.com/sun/sun2/md21.jpg
If you are talking about the middle right, this
is a ACB-4000 series controller
[EMAIL PROTECTED] wrote:
Alpha particles which hit CPUs must have their origin inside said CPU.
(Alpha particles do not penetrate skin or paper, let alone system cases
or CPU packaging.)
Gamma rays cannot be shielded in any sensible way.
Jörg
--
EMail:[EMAIL PROTECTED] (home) Jörg
Rich Teer [EMAIL PROTECTED] wrote:
Hi all,
Is tar/star/gtar still the recommended $0 method of backing up files
that live on ZFS (assuming one is using tape for off-site storage)?
If so, would I be correct in thinking that it is possible to extract
just the file(s) one is interested in
Richard Elling [EMAIL PROTECTED] wrote:
Link to the paper is http://labs.google.com/papers/disk_failures.pdf
As for the spares debate, that is easy: use spares :-)
What they failed to say is that you need to access the whole disk
frequently enough in order to give SMART the ability to
Richard Elling [EMAIL PROTECTED] wrote:
If a disk fitness test were available to verify disk read/write and
performance, future drive problems could be avoided.
Some example tests:
- full disk read
- 8kb r/w iops
- 1mb r/w iops
- raw throughput
Some problems can be seen by
Thomas Nau [EMAIL PROTECTED] wrote:
fflush(fp);
fsync(fileno(fp));
fclose(fp);
and check errors.
(It's remarkable how often people get the above sequence wrong and only
do something like fsync(fileno(fp)); fclose(fp);)
Thanks for clarifying! Seems I really need to
Toby Thain [EMAIL PROTECTED] wrote:
I hope this isn't turning into a License flame war. But why do Linux
contributors not deserve the right to retain their choice of license
as equally as Sun, or any other copyright holder, does?
The anti-GPL kneejerk just witnessed on this list is
Rich Teer [EMAIL PROTECTED] wrote:
On Wed, 11 Apr 2007, Toby Thain wrote:
I hope this isn't turning into a License flame war. But why do Linux
contributors not deserve the right to retain their choice of license as
equally as Sun, or any other copyright holder, does?
Read what I wrote
Ignatich [EMAIL PROTECTED] wrote:
Joerg Schilling writes:
There is a lot of misunderstanding with the GPL.
Porting ZFS to Linux would not make ZFS a derived work from Linux.
I do not see why anyone could claim that there is a need to publish ZFS
under
GPL in case you use
Darren Reed [EMAIL PROTECTED] wrote:
You see no problems, I see no problems but various Linux people do,
including Linus. But as all we have is a collection of different viewpoints
and nothing has been decided in a court of law, the exact meaning is
open to interpretation/discussion.
This
Pawel Jakub Dawidek [EMAIL PROTECTED] wrote:
Hi there.
We have something called system flags in FreeBSD. Those are basically
some additional flags you can set on files/directories (not extended
attributes nor ACLs).
Bill Sommerfeld [EMAIL PROTECTED] wrote:
(the system flags on *BSD are tied to securelevel; the closest Solaris
equivalent would be to define new "set system flag" and "clear system
flag" privileges).
If that privilege is only present in single user mode and thus disallows
clearing e.g. the
Nicolas Williams [EMAIL PROTECTED] wrote:
Sigh. We have devolved. Every thread on OpenSolaris discuss lists
seems to devolve into a license discussion.
It is funny to see that in our case, the technical problems (those caused
by the fact that linux implements a different VFS interface layer)
Paul Fisher [EMAIL PROTECTED] wrote:
Is there any reason that the CDDL dictates, or that Sun would object,
to zfs being made available as an independently distributed Linux kernel
module? In other words, if I made an Nvidia-like distribution available,
would that be OK from the OpenSolaris
David R. Litwin [EMAIL PROTECTED] wrote:
Well, I tried.
It seems that a Linux port is simply impossible, due purely to licensing
issues. I know I said I'd not bring up licensing, mainly because I did not
want this thread to devolve like the other one; and because I wanted this
thread to
David R. Litwin [EMAIL PROTECTED] wrote:
On 17/04/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:
On 4/17/07, David R. Litwin [EMAIL PROTECTED] wrote:
So, it comes to this: Why, precisely, can ZFS not be
released under a License which _is_ GPL
compatible?
So why do you think should it
David R. Litwin [EMAIL PROTECTED] wrote:
If you refer to the licensing, yes. Coding-wise, I have no idea except
to say that I would be VERY surprised if ZFS can not be ported to
Linux, especially since there already
exists the FUSE project.
So if you are interested in this project, I would
Erik Trimble [EMAIL PROTECTED] wrote:
This is obviously a misunderstanding. You do not need to make
ZFS _part_ of the Linux kernel, as it is some kind of driver.
Using ZFS with Linux would be mere aggregation (see GPL text).
Jörg
No, the general consensus amongst Linux folks
Toby Thain [EMAIL PROTECTED] wrote:
Therein lies the difference in perspective. Linux folks think it's
OpenSolaris's fault that ZFS cannot be integrated into Linux.
OpenSolaris folks do not think so.
The OpenSolaris folks here seem to think it's Linux' fault. Impasse.
Let me repeat it
[EMAIL PROTECTED] wrote:
Actually sitting down and doing something hard (like porting
ZFS - one way or another - to Linux), well, the word
procrastination comes to mind and gee, isn't it easier to
come up with reasons /not/ to do it?
If someone really wanted ZFS on Linux, they'd just do it
Nicolas Williams [EMAIL PROTECTED] wrote:
zfs send as backup is probably not generally acceptable: you can't
expect to extract a single file out of it (at least not out of an
incremental zfs send), but that's certainly done routinely with ufsdump,
tar, cpio, ...
Then an incremental star
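The incremental zfs send being contrasted here looks like the following; the pool, dataset, snapshot, and file names are hypothetical:

```shell
# Two snapshots bracket a period of changes.
zfs snapshot tank/data@mon
# ... a day of modifications ...
zfs snapshot tank/data@tue

# An incremental send writes only the blocks that changed between
# the two snapshots:
zfs send -i tank/data@mon tank/data@tue > /backup/data-mon-tue.zsend

# Restoring requires a target that already has the @mon snapshot:
zfs receive tank/restore < /backup/data-mon-tue.zsend
```

The trade-off discussed above is that, unlike ufsdump/tar/cpio archives, a single file generally cannot be extracted from such a stream; it must be received into a dataset first.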
Dennis Clarke [EMAIL PROTECTED] wrote:
I don't believe that there are any good/useful solutions which are free
that will store both the data and all the potential meta-data in the
filesystem in a recoverable way.
I think that star (Joerg Schilling) has a good grasp on all the
metadata
Yaniv Aknin [EMAIL PROTECTED] wrote:
Following my previous post across several mailing lists regarding multi-tera
volumes with small files on them, I'd be glad if people could share real life
numbers on large filesystems and their experience with them. I'm slowly
coming to a realization
Claus Guttesen [EMAIL PROTECTED] wrote:
I'm currently using 4 TB partitions with vxfs. When hosted on FreeBSD
I was limited to 2 TB but using UFS2/FreeBSD was impractical for
several reasons. With vxfs 4 TB is a practical limit, when files are
Could you please give some hints on these
Claus Guttesen [EMAIL PROTECTED] wrote:
I'm currently using 4 TB partitions with vxfs. When hosted on FreeBSD
I was limited to 2 TB but using UFS2/FreeBSD was impractical for
several reasons. With vxfs 4 TB is a practical limit, when files are
Could you please give some hints on
Erblichs [EMAIL PROTECTED] wrote:
Jorg,
Do you really think that ANY FS actually needs to support
more FS objects? If that would be an issue, why not create
more FSs?
A multi-TB FS SHOULD support 100MB+/GB size FS objects, which
IMO is the more common use. I
Leon Koll [EMAIL PROTECTED] wrote:
May be this link could help you?
http://www.nabble.com/VFS-module-handling-ACL-on-ZFS-t3730348.html
Looks like exactly what we need. It's strange it wasn't posted to zfs-discuss. So
many people were waiting for this code.
The NFSv4 ACLs are bitwise
[EMAIL PROTECTED] wrote:
You're right of course and lots of people use them. My point is that
Solaris has been 64 bits longer than most others. I think AIX got
64 bits after Solaris and Linux (via Alpha) did.
Irix was 64 bit near the same time as Solaris but the end of the Irix
[EMAIL PROTECTED] wrote:
IRIX was much earlier than Solaris; Solaris was pretty late in the 64 bit
game with Solaris 7.
And Alpha did not have a real 64 bit port as they did implement ILP64.
With ILP64 your application does not really notice that it runs in 64 bits
if you only use
Mike Dotson [EMAIL PROTECTED] wrote:
Create 20k zfs file systems and reboot. Console login waits for all the
zfs file systems to be mounted (fully loaded 880, you're looking at
about 4 hours so have some coffee ready).
Does this mean, we will get quotas for ZFS in the future?
We need it
[EMAIL PROTECTED] wrote:
Mike Dotson [EMAIL PROTECTED] wrote:
Create 20k zfs file systems and reboot. Console login waits for all the
zfs file systems to be mounted (fully loaded 880, you're looking at
about 4 hours so have some coffee ready).
Does this mean, we will get quotas for
Toby Thain [EMAIL PROTECTED] wrote:
I'll just add, but not for Mac OS X. It was way back in Finder 7
days,
when they used to ship A/UX. (That was where I cut my unix teeth.)
I was actually thinking more of NEXTSTEP, certainly a generation
beyond A/UX; and OS X, a generation further
Ed Ravin [EMAIL PROTECTED] wrote:
Why does ZFS report such small directory sizes? For example, take a maildir
directory with ten entries:
total 2385
drwx------   8 17121    vmail         10 Jun  8 23:50 .
drwx--x--x  14 root     root          14 May 12  2006 ..
drwx------   5 17121
[EMAIL PROTECTED] wrote:
Oh, I see, this is bug 6479267: st_size (struct stat) is unreliable in
ZFS. Any word on when the fix will be out?
It's a bug in scandir (obviously) and it is filed as such.
A very old bug. I fixed it for a Berthold AG customer in 1992
when Novell Netware did start
Jeff Bonwick [EMAIL PROTECTED] wrote:
What was the reason to make ZFS use directory sizes as the number of
entries rather than the way other Unix filesystems use it?
In UFS, the st_size is the size of the directory inode as though it
were a file. The only reason it's like that is that UFS
[EMAIL PROTECTED] wrote:
On Sat, Jun 09, 2007 at 10:16:34PM +0200, [EMAIL PROTECTED] wrote:
Oh, I see, this is bug 6479267: st_size (struct stat) is unreliable in
ZFS. Any word on when the fix will be out?
It's a bug in scandir (obviously) and it is filed as such.
Does scandir
Frank Batschulat [EMAIL PROTECTED] wrote:
Only one byte per directory entry? This confuses
programs that assume that the st_size reported for a
directory is a multiple of sizeof(struct dirent) bytes.
Sorry, but a program making this assumption is just flawed and should be
fixed.
Bill Sommerfeld [EMAIL PROTECTED] wrote:
On Mon, 2007-06-11 at 23:03 +0200, [EMAIL PROTECTED] wrote:
Maybe some additional pragmatism is called for here. If we want NFS
over ZFS to work well for a variety of clients, maybe we should set
st_size to larger values..
+1; let's teach the
Frank Cusack [EMAIL PROTECTED] wrote:
On June 13, 2007 11:26:07 PM -0400 Ed Ravin [EMAIL PROTECTED] wrote:
On Wed, Jun 13, 2007 at 09:42:26PM -0400, Ed Ravin wrote:
As mentioned before, NetBSD's scandir(3) implementation was one. The
NetBSD project has fixed this in their CVS. OpenBSD
Ed Ravin [EMAIL PROTECTED] wrote:
15 years ago, Novell Netware started to return a fixed size of 512 for all
directories via NFS.
If there is still unfixed code, there is no help.
The Novell behavior, commendable as it is, did not break the BSD scandir()
code, because BSD scandir()
MC [EMAIL PROTECTED] wrote:
On the heels of the LZO compression thread, I bring you a 7zip compression
thread!
Shown here as the open source system with the best compression ratio:
http://en.wikipedia.org/wiki/Data_compression#Comparative
Shown here on a SPARC system with the best
Richard L. Hamilton [EMAIL PROTECTED] wrote:
* disks are probably cheaper than CPUs
* it looks to me like 7z may also be RAM-hungry; and there are probably
better ways to use the RAM, too
The main problem with the currently available 7z implementation is that
it has been written in C++ and
Robert Olinski [EMAIL PROTECTED] wrote:
I have a customer who is running into bug 6538387. This is a problem
with HP-UX clients accessing NFS mounts which are on a ZFS file
system. This has to do with ZFS using nanosecond times and the HP
client does not use this amount of precision.
Boyd Adamson [EMAIL PROTECTED] wrote:
Or alternatively, are you comparing ZFS(Fuse) on Linux with XFS on
Linux? That doesn't seem to make sense since the userspace
implementation will always suffer.
Someone has just mentioned that all of UFS, ZFS and XFS are available on
FreeBSD. Are you
Pawel Jakub Dawidek [EMAIL PROTECTED] wrote:
On Mon, Aug 27, 2007 at 10:00:10PM -0700, RL wrote:
Hi,
Does ZFS flag blocks as bad so it knows to avoid using them in the future?
No it doesn't. This would be a really nice feature to have, but
currently when ZFS tries to write to a bad
[EMAIL PROTECTED] wrote:
It's worse than this. Consider the read-only clients. When you
access a filesystem object (file, directory, etc.), UFS will write
metadata to update atime. I believe that there is a noatime option to
mount, but I am unsure as to whether this is sufficient.
[EMAIL PROTECTED] wrote:
AFAIK, a read-only UFS mount will unroll the log and thus write to the medium.
It does not (that's what code inspection suggests).
It will update the in-memory image with the log entries but the
log will not be rolled.
Why then does fsck mount the fs read-only
Joerg Moellenkamp [EMAIL PROTECTED] wrote:
Hello,
in a different benchmark run on the same system, the gfind took 15
minutes whereas the standard find took 18 minutes. With find and
noatime=off the benchmark took 14 minutes. But even this is slow
compared to 2-3 minutes of the xfs
Nicolas Williams [EMAIL PROTECTED] wrote:
On Wed, Sep 05, 2007 at 03:43:38PM -0500, Rob Windsor wrote:
http://news.com.com/NetApp+files+patent+suit+against+Sun/2100-1014_3-6206194.html
I'm curious how many of those patent filings cover technologies that
they carried over from Auspex.
mike [EMAIL PROTECTED] wrote:
On 9/5/07, Joerg Schilling [EMAIL PROTECTED] wrote:
As I wrote before, my wofs (designed and implemented 1989-1990 for SunOS
4.0,
published May 23rd 1991) is copy on write based, does not need fsck and
always
offers a stable view on the media because
James C. McPherson [EMAIL PROTECTED] wrote:
If COW is such an old concept, why haven't there been many filesystems
that have become popular that use it? ZFS, BTRFS (I think) and maybe
WAFL? At least that I know of. It seems like an excellent guarantee of
disk commitment, yet we're all
Matthew Ahrens [EMAIL PROTECTED] wrote:
Joerg Schilling wrote:
The best documented one is the inverted meta data tree that allows wofs to
write
only one new generation node for one modified file while ZFS needs to also
write new
nodes for all directories above the file including
W. Wayne Liauh [EMAIL PROTECTED] wrote:
http://cdrecord.berlios.de/new/private/wofs.ps.gz
Jörg
Hi Jörg,
This link doesn't work. If possible, could you make it as an attachment?
Thanks.
I see no reason why it should not work, it works for me.
Could you give more information?
David Hopwood [EMAIL PROTECTED] wrote:
Al Hopper wrote:
So back to patent portfolios: yes there will be (public and private)
posturing; yes there will be negotiations; and, ultimately, there will
be a resolution. All of this won't affect ZFS or anyone running ZFS.
It matters a great
Stephen Usher [EMAIL PROTECTED] wrote:
Oliver Schinagl wrote:
not to start a flamewar or the like, but Linux can run 32bit bins, just not
natively afaik, you need some sort of emu library. But since I use gentoo,
and pretty much everything is compiled from source anyway, I only have
Stephen Usher [EMAIL PROTECTED] wrote:
Joerg Schilling wrote:
I am not sure about the current state, but 2 years ago, Linux was only
able to run a few simple programs in 32 bit mode because the drivers did
not support 32 bit ioctl interfaces. This made e.g. a 32 bit cdrecord on 64 bit
[EMAIL PROTECTED] wrote:
*me thinks it would be cool to finally have a generic filesystem
community*
_Do_ we finally get one ? Can't wait :-)
I would like to have a generic filesystem community.
... or declare the ufs community to be the generic part in addition.
Jörg
Dickon Hood [EMAIL PROTECTED] wrote:
ZFS would be lovely. Pity about the licence issues.
There is no license issue: the CDDL allows a combination
with any other license and the GPL does not forbid a GPL
project to use code under other licenses in case that the
non-GPL code does not become a
Dickon Hood [EMAIL PROTECTED] wrote:
On Fri, Nov 09, 2007 at 21:34:35 +0100, Joerg Schilling wrote:
: Dickon Hood [EMAIL PROTECTED] wrote:
: ZFS would be lovely. Pity about the licence issues.
: There is no license issue: the CDDL allows a combination
: with any other license and the GPL
can you guess? [EMAIL PROTECTED] wrote:
: In case of a filesystem, I do not see why the
filesystem could
: be a derived work from e.g. Linux.
Indeed not, however AIUI the FSF do.
My impression is that GPFS on Linux was (and may still be) provided as a
binary proprietary loadable
Darren Reed [EMAIL PROTECTED] wrote:
Having just done a largish mv from one ZFS filesystem to another ZFS
filesystem in the same zpool, I was somewhat surprised at how long it
took - I was expecting it to be near instant like it would be within the
same filesystem.
I would guess that this is
Darren Reed [EMAIL PROTECTED] wrote:
if (fromvp != tovp) {
        vattr.va_mask = AT_FSID;
        if (error = VOP_GETATTR(fromvp, &vattr, 0, CRED(), NULL))
                goto out;
        fsid = vattr.va_fsid;
        vattr.va_mask =
Frank Hofmann [EMAIL PROTECTED] wrote:
I don't think the standards would prevent us from adding cross-fs rename
capabilities. It's beyond the standards as of now, and I'd expect that
were it ever added to that it'd be an optional feature as well, to be
queried for via e.g. pathconf().
Why
Frank Hofmann [EMAIL PROTECTED] wrote:
On Fri, 28 Dec 2007, Joerg Schilling wrote:
[ ... ]
POSIX grants that st_dev and st_ino together uniquely identify a file
on a system. As long as neither st_dev nor st_ino change during the
rename(2) call, POSIX does not prevent this rename
Jonathan Edwards [EMAIL PROTECTED] wrote:
On Dec 29, 2007, at 2:33 AM, Jonathan Loran wrote:
Hey, here's an idea: We snapshot the file as it exists at the time of
the mv in the old file system until all referring file handles are
closed, then destroy the single file snap. I know, not
Darren Reed [EMAIL PROTECTED] wrote:
Wrt. to standards, quote from:
http://www.opengroup.org/onlinepubs/009695399/functions/rename.html
ERRORS
The rename() function shall fail if:
[ ... ]
[EXDEV]
[CX] The links named by new and old are on different file
Carsten Bormann [EMAIL PROTECTED] wrote:
On Dec 29 2007, at 08:33, Jonathan Loran wrote:
We snapshot the file as it exists at the time of
the mv in the old file system until all referring file handles are
closed, then destroy the single file snap. I know, not easy to
implement, but
Torrey McMahon [EMAIL PROTECTED] wrote:
http://www.philohome.com/hammerhead/broken-disk.jpg :-)
Be careful, things like this can result in device corruption!
Jörg
Christopher Gorski [EMAIL PROTECTED] wrote:
can you try
(cd pond/photos; tar cf - *) | (cd /pond/copytestsame; tar xf -)
CG> I tried it, and it worked. The new tree is an exact copy of the old one.
could you run your cp as 'truss -t open -o /tmp/cp.truss cp * '
and
Will Murnane [EMAIL PROTECTED] wrote:
On Jan 30, 2008 1:34 AM, Carson Gaspar [EMAIL PROTECTED] wrote:
If this is Sun's cp, file a bug. It's failing to notice that it didn't
provide a large enough buffer to getdents(), so it only got partial results.
Of course, the getdents() API is
Christopher Gorski [EMAIL PROTECTED] wrote:
Of course, the getdents() API is rather unfortunate. It appears the only
safe algorithm is:
while ((r = getdents(...)) > 0) {
        /* process results */
}
if (r < 0) {
        /* handle error */
}
You _always_ have to call it at least
1 - 100 of 405 matches