).
Or am I reading this wrong?
Chad
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
be:
rootpool/export/home  40G  20G  10G  10G  0  0
Then imagine that across more than three snapshots. I can't wrap my head
around logic that would work there.
I would love if someone could figure out a good way though...
- Chad
On Wed, Aug 29, 2012 at 8:58 PM, Timothy Coalson tsc...@mst.edu wrote:
As I understand it, the used space of a snapshot does not include anything
that is in more than one snapshot.
True. It shows the amount that would be freed if you destroyed the
snapshot right away. Data held onto by more
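A minimal Python sketch (a toy model, not ZFS internals) of why shared data shows up in no single snapshot's "used": a snapshot's used space counts only blocks referenced by nothing else, i.e. what destroying that snapshot alone would free.

```python
# Toy model: each snapshot references a set of block ids. A snapshot's
# "used" is the count of blocks no other snapshot or the live filesystem
# still references -- exactly what destroying it right away would free.

def snapshot_used(snaps, live, name):
    others = set(live)
    for n, blocks in snaps.items():
        if n != name:
            others |= blocks
    return len(snaps[name] - others)

snaps = {
    "snap1": {1, 2, 3},
    "snap2": {2, 3, 4},
}
live = {3, 5}

# Block 1 is unique to snap1 and block 4 to snap2; blocks 2 and 3 are
# shared, so they appear in neither snapshot's "used".
print(snapshot_used(snaps, live, "snap1"))  # 1
print(snapshot_used(snaps, live, "snap2"))  # 1
```

This is also why the per-snapshot numbers need not sum to the space that destroying all snapshots together would free.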
Now that is interesting. But how do you do a receive before you reinstall?
Live cd??
Just boot off of the CD (or jumpstart server) to single user mode. Format your
new disk, create a zpool, zfs recv, installboot (or installgrub), reboot and
done.
be able to help me.
I am a little concerned I am going to find out that there is no real way to
show it and that makes for one sad SysAdmin.
Thanks,
Chad
On Nov 15, 2010, at 8:32 AM, Bryan Horstmann-Allen wrote:
+--
| On 2010-11-15 10:21:06, Edward Ned Harvey wrote:
|
| Backups.
|
| Even if you upgrade your hardware to better stuff... with ECC and so on ...
|
On Nov 12, 2010, at 5:54 AM, Edward Ned Harvey wrote:
Why are you sharing iscsi from nexenta to freebsd? Wouldn't it be better
for nexenta to simply create zfs filesystems, and then share nfs? Much more
flexible in a lot of ways. Unless your design requirements require limiting
the
problems
2) I am thinking about formatting the virtual disks served from the Nexenta
iSCSI target as ZFS on the FreeBSD machine even though it has no redundancy. I
see this as safe since the backing store on the Nexenta machine is a redundant
ZFS zvol... Is this correct thinking?
Thanks
Chad
On Nov 11, 2010, at 7:18 PM, Xin LI wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
On 11/11/10 17:57, Chad Leigh -- Shire.Net LLC wrote:
I will be setting up a NexentaStor Community Edition based ZFS file
server. I will be serving some zvols over iSCSI to some FreeBSD
machines
non-debug kernel and within a few seconds of reimporting the pool
the scrub was up to ~400 MB/s, so it does indeed seem like the Nexenta
CD kernel is either in debug mode, or something else is slowing it down.
Chad
On Wed, Jul 21, 2010 at 09:12:35AM -0700, Garrett D'Amore wrote:
On Wed, 2010-07
the debug bits. I'm about to test again with an
actual
non-debug 142, and after that a non-debug 145 which just came out.
Thanks,
Chad
On Wed, Jul 21, 2010 at 02:21:51AM -0400, Richard Lowe wrote:
I built in the normal fashion, with the CBE compilers
(cc: Sun C 5.9 SunOS_i386 Patch 124868-10
gave it in my
prior tests
for the scrub to reach its normal speed, although I can't do that until this
evening
when I'm home again.
Chad
On Wed, Jul 21, 2010 at 09:44:42AM -0700, Chad Cantwell wrote:
Hi,
My bits were originally debug because I didn't know any better. I thought I
had
On Mon, Jul 19, 2010 at 07:01:54PM -0700, Chad Cantwell wrote:
On Tue, Jul 20, 2010 at 10:54:44AM +1000, James C. McPherson wrote:
On 20/07/10 10:40 AM, Chad Cantwell wrote:
fyi, everyone, I have some more info here. in short, rich lowe's 142 works
correctly (fast) on my hardware, while
.
Thanks,
Chad
On Tue, Jul 20, 2010 at 08:39:42AM +0100, Robert Milkowski wrote:
On 20/07/2010 07:59, Chad Cantwell wrote:
I've just compiled and booted into snv_142, and I experienced the same slow
dd and
scrubbing as I did with my 142 and 143 compilations and with the Nexenta 3
RC2 CD
No, this wasn't it. A non-debug build with the same NIGHTLY_OPTIONS
as Rich Lowe's 142 build is still very slow...
On Tue, Jul 20, 2010 at 09:52:10AM -0700, Chad Cantwell wrote:
Yes, I think this might have been it. I missed the NIGHTLY_OPTIONS variable
in
opensolaris and I think
On Tue, Jul 20, 2010 at 10:45:58AM -0700, Brent Jones wrote:
On Tue, Jul 20, 2010 at 10:29 AM, Chad Cantwell c...@iomail.org wrote:
No, this wasn't it. A non-debug build with the same NIGHTLY_OPTIONS
as Rich Lowe's 142 build is still very slow...
On Tue, Jul 20, 2010 at 09:52:10AM -0700
with how I'm compiling the kernel that makes the hardware not
perform up to its specifications with a zpool, and possibly the Nexenta 3
RC2 ISO has the same problem as my own compilations.
Chad
On Tue, Jul 06, 2010 at 03:08:50PM -0700, Chad Cantwell wrote:
Hi all,
I've noticed something strange
On Tue, Jul 20, 2010 at 10:54:44AM +1000, James C. McPherson wrote:
On 20/07/10 10:40 AM, Chad Cantwell wrote:
fyi, everyone, I have some more info here. in short, rich lowe's 142 works
correctly (fast) on my hardware, while both my compilations (snv 143, snv
144)
and also the Nexenta 3 RC2
On Mon, Jul 19, 2010 at 06:00:04PM -0700, Brent Jones wrote:
On Mon, Jul 19, 2010 at 5:40 PM, Chad Cantwell c...@iomail.org wrote:
fyi, everyone, I have some more info here. in short, rich lowe's 142 works
correctly (fast) on my hardware, while both my compilations (snv 143, snv
144
a configuration parameter.
I'm not sure offhand if installing source-compiled ON builds from a bfu'd
rpool is supported, although I suppose it's simple enough to try.
Thanks,
Chad Cantwell
I was trying to think of a way to set compression=on at the beginning of a
jumpstart. The only idea I've come up with is to do so with a flash
archive predeployment script. Has anyone else tried this approach?
Thanks,
Chad
I'm looking to migrate a pool from using multiple smaller LUNs to one larger
LUN. I don't see a way to do a zpool replace for multiple to one. Anybody know
how to do this? It needs to be non disruptive.
--
This message posted from opensolaris.org
partly since it has PCI-X slots and I thought
those
might be useful for AOC-SAT2-MV8 cards if I couldn't shake the mpt issues, but
now
that the mpt issues are gone I can continue with that controller if I want.
Thanks everyone for your help,
Chad
On Sun, Dec 06, 2009 at 11:12:50PM -0800, Chad
cards,
but it will probably be several days until I get to the bottom of this since
it takes awhile to test after making a change...
Thanks,
Chad
On Mon, Dec 07, 2009 at 11:09:39AM +1000, James C. McPherson wrote:
Gday Chad,
the more swaptronics you partake in, the more difficult it
is going
to resort to motherboard
swapping again.
Chad
On Thu, Dec 03, 2009 at 10:44:53PM -0800, Chad Cantwell wrote:
I eventually performed a few more tests, adjusting some zfs tuning options
which had no effect, and trying the
itmpt driver which someone had said would work, and regardless my system
as one device,
whereas with the IT firmware they were always mpt0 and mpt1. Could also be the
IR works with one card
but not well when two cards are combined...
Chad
On Sat, Dec 05, 2009 at 02:47:55PM -0800, Calvin Morrow wrote:
I found this thread after fighting the same problem in Nexenta
better and wipe again in the event
of problems)
Chad
On Tue, Dec 01, 2009 at 03:06:31PM -0800, Chad Cantwell wrote:
To update everyone, I did a complete zfs scrub, and it generated no errors
in iostat, and I have 4.8T of
data on the filesystem so it was a fairly lengthy test. The machine also
errors
fairly rapidly)
Thanks again for your help. Sorry for wasting your time if the previously
posted workaround fixes things.
I'll let you know tomorrow either way.
Chad
On Tue, Dec 01, 2009 at 05:57:28PM +1000, James C. McPherson wrote:
Chad Cantwell wrote:
After another crash I checked
Well, ok, the msi=0 thing didn't help after all. A few minutes after my last
message a few errors showed
up in iostat, and then in a few minutes more the machine was locked up hard...
Maybe I will try just
doing a scrub instead of my rsync process and see how that does.
Chad
On Tue, Dec 01
upgrades it creates a second root environment, but my forte isn't
solaris so I
just reformatted the root device)
On Tue, Dec 01, 2009 at 08:09:32AM -0500, Mark Johnson wrote:
Chad Cantwell wrote:
Hi,
I was using for quite awhile OpenSolaris 2009.06
with the opensolaris-provided mpt driver
though, I'm sure it would
generate errors and crash again.
Chad
On Tue, Dec 01, 2009 at 12:29:16AM -0800, Chad Cantwell wrote:
Well, ok, the msi=0 thing didn't help after all. A few minutes after my last
message a few errors showed
up in iostat, and then in a few minutes more the machine
a
couple minutes later:
(if there's any other info I can provide or more things to test just let me
know. Thanks, --Chad )
Nov 29 04:42:55 the-vault scsi: [ID 243001 kern.warning] WARNING:
/p...@0,0/pci8086,2...@3/pci111d,8...@0/pci111d,8...@1/pci1000,3...@0 (mpt1):
Nov 29 04:42:55 the-vault
to two backplanes with 1m
SFF-8087 (both ends) cables. For more details if they are important see my
other post. I haven't tried the MSI workaround yet (although I'm not sure what
MSI is) but from what I've read the workaround won't fix the issues in my case
with non-sun hardware.
Thanks,
Chad
On Tue
Hi,
Replied to your previous general query already, but in summary, they are in the
server chassis. It's a Chenbro 16 hotswap bay case. It has 4 mini backplanes
that each connect via an SFF-8087 cable (1m) to my LSI cards (2 cables / 8
drives
per card).
Chad
On Tue, Dec 01, 2009 at 01:02
Hi,
The Chenbro chassis contains everything - the motherboard/CPU, and the disks.
As far as
I know the chenbro backplanes are basically electrical jumpers that the LSI
cards shouldn't
be aware of. They pass through the SATA signals directly from SFF-8087 cables
to the
disks.
Thanks,
Chad
0 0
c2t10d0 ONLINE 0 0 0
errors: No known data errors
#
On Mon, Nov 30, 2009 at 06:46:13PM -0800, Chad Cantwell wrote:
Hi,
Sorry for not replying to one of the already open threads on this topic;
I've just joined the list for the purposes of this discussion
I apologize if this has been answered already, but I've tried to RTFM and
haven't found much. I'm trying to get HDS shadow copy to work for zpool
replication. We do this with VXVM by modifying each target disk ID after
it's been shadowed from the source LUN. This allows us to import each
On Jul 31, 2008, at 2:56 PM, Bob Netherton wrote:
On Thu, 2008-07-31 at 13:25 -0700, Ross wrote:
Hey folks,
I guess this is an odd question to be asking here, but I could do
with some
feedback from anybody who's actually using ZFS in anger.
ZFS in anger ? That's an interesting way of
for the 1.20.00.15 and there is a
-71010 extension.
Otherwise, file a bug with Areca. They are pretty good about
responding.
Chad
---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net
http://www.opensolaris.org/bug/report.jspa
You'll need an OpenSolaris.org account to file the RFE of course.
On Jul 17, 2008, at 10:52 AM, Will Murnane wrote:
I would like to request an additional flag for the command line zfs
tools. Specifically, I'd like to have a -t flag for zfs destroy,
Here's the announcement for those new Sun JBOD devices mentioned the
other day.
http://www.sun.com/aboutsun/pr/2008-07/sunflash.20080709.1.xml
ckl
On 11/20/07, Asif Iqbal [EMAIL PROTECTED] wrote:
On Nov 20, 2007 7:01 AM, Chad Mynhier [EMAIL PROTECTED] wrote:
On 11/20/07, Asif Iqbal [EMAIL PROTECTED] wrote:
On Nov 19, 2007 1:43 AM, Louwtjie Burger [EMAIL PROTECTED] wrote:
On Nov 17, 2007 9:40 PM, Asif Iqbal [EMAIL PROTECTED] wrote
Apparently known bug, fixed in snv_70.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6577473
On Aug 14, 2007, at 8:28 AM, Bill Moloney wrote:
using hyperterm, I captured the panic message as:
SunOS Release 5.11 Version snv_69 32-bit
Copyright 1983-2007 Sun Microsystems, Inc.
the gun on something.
Chad
I wonder when we will see Johnny-cat and Steve-o in the same room
talking
about it.
On 6/12/07 8:23 AM, Sunstar Dude [EMAIL PROTECTED] wrote:
Yea, What is the deal with this? I am so bummed :( What the heck
was Sun's CEO
talking about the other day? And why
). I'll be there and
try to remember to post what is said (though it will probably be in a
billion other places as well)
Chad
With US patent laws the way they are, no one but a patent lawyer
could safely give you an answer.
If by some chance a patent lawyer is lurking and decided to comment,
none of the rest of us
could safely read such comments. No one working on ZFS could even
safely look at the patent
you've
of redundancy in ZFS as much the same thing as packet
retransmission in TCP. If the data comes through bad the first time,
checksum verification will catch it, and you get a second chance to
get the correct data. A single-LUN zpool is the moral equivalent of
disabling retransmission in TCP.
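The analogy above can be made concrete with a small Python sketch (illustrative only, using SHA-256 in place of ZFS's actual checksums): with a redundant copy, a failed checksum gives you a second chance at the data; with a single LUN there is nothing to retry against.

```python
import hashlib

# Toy sketch of the TCP-retransmission analogy: try each redundant copy
# in turn, returning the first one that passes checksum verification.

def read_verified(copies, expected_digest):
    for data in copies:  # each element stands in for one redundant device
        if hashlib.sha256(data).hexdigest() == expected_digest:
            return data  # checksum verified: good read
    raise IOError("all copies failed checksum verification")

good = b"payload"
digest = hashlib.sha256(good).hexdigest()
corrupt = b"paylaod"  # silently flipped bytes

# Mirrored pool: the bad copy is caught and the good one is returned.
assert read_verified([corrupt, good], digest) == good

# Single-LUN pool: the corruption is detected, but unrecoverable.
try:
    read_verified([corrupt], digest)
except IOError:
    print("unrecoverable: no second copy to retry")
```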
Chad Mynhier
:-)
Yep
Chad
--Toby
mostly for its
built-in reliability, free snapshots, built-in compression and
cryptography (soon) and easy to use.
ps. few days ago I encountered my first checksum error on my
desktop system on a submirror (two sata drives in a zfs
mirror). Thanks to zfs
On Apr 17, 2007, at 10:03 AM, Toby Thain wrote:
On 17-Apr-07, at 12:15 PM, Chad Leigh -- Shire.Net LLC wrote:
On Apr 17, 2007, at 7:47 AM, Toby Thain wrote:
On 17-Apr-07, at 8:33 AM, Robert Milkowski wrote:
...
I believe that ZFS definitely belongs on a desktop,
Apple (and I
Chad
---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net
business to improve performance.
Thanks for any insight on how I might have set this up wrong.
Thanks
Chad
---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net
.
It is what I am using
Chad
---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net
should be
mounted.
This is an example where this feature is convenient. There might be
other examples where this feature is necessary.
Chad Mynhier
[1] Note that the purpose of the script was mostly to guard against
operator error rather than system problems. With vfstab, it would
take two
On Dec 2, 2006, at 10:56 AM, Al Hopper wrote:
On Sat, 2 Dec 2006, Chad Leigh -- Shire.Net LLC wrote:
On Dec 2, 2006, at 6:01 AM, [EMAIL PROTECTED] wrote:
While other file systems, when they become corrupt, allow you to
salvage data :-)
They allow you to salvage what you *think
On Dec 2, 2006, at 12:29 PM, Jeff Victor wrote:
Chad Leigh -- Shire.Net LLC wrote:
On Dec 2, 2006, at 10:56 AM, Al Hopper wrote:
On Sat, 2 Dec 2006, Chad Leigh -- Shire.Net LLC wrote:
On Dec 2, 2006, at 6:01 AM, [EMAIL PROTECTED] wrote:
While other file systems, when they become
systems on RAID systems that they trust, but don't realize may have
bugs
(causing data corruption) that have yet to be discovered?
And this is different from any other storage system, how? (ie, JBOD
controllers and disks can also have subtle bugs that corrupt data)
Chad
---
Chad Leigh
On Dec 1, 2006, at 4:34 PM, Dana H. Myers wrote:
Chad Leigh -- Shire.Net LLC wrote:
On Dec 1, 2006, at 9:50 AM, Al Hopper wrote:
Followup: When you say you fixed the HW, I'm curious as to what
you
found and if this experience with ZFS convinced you that your
trusted
RAID
H/W did
On Dec 1, 2006, at 10:17 PM, Ian Collins wrote:
Chad Leigh -- Shire.Net LLC wrote:
On Dec 1, 2006, at 4:34 PM, Dana H. Myers wrote:
Chad Leigh -- Shire.Net LLC wrote:
And this is different from any other storage system, how? (ie,
JBOD
controllers and disks can also have subtle bugs
On Dec 1, 2006, at 10:42 PM, Toby Thain wrote:
On 1-Dec-06, at 6:36 PM, Chad Leigh -- Shire.Net LLC wrote:
On Dec 1, 2006, at 4:34 PM, Dana H. Myers wrote:
Chad Leigh -- Shire.Net LLC wrote:
On Dec 1, 2006, at 9:50 AM, Al Hopper wrote:
Followup: When you say you fixed the HW, I'm
On Nov 22, 2006, at 4:11 PM, Al Hopper wrote:
No problem there! ZFS rocks. NFS/ZFS is a bad combination.
Has anyone tried sharing a ZFS fs using samba or afs or something
else besides nfs? Do we have the same issues?
Chad
---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting
not synching the entire pool on every sync but just the stuff needed
or something like that. I heard it kind of 2nd or 3rd hand so cannot
be too detailed in my description. Can someone here in the know
confirm that this is so (or not)?
Thanks
Chad
---
Chad Leigh -- Shire.Net LLC
Your Web App
at.
Chad
---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net
-6 will be installed before XMas)
Chad
---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net
to do so.
Chad Mynhier
not affect its usefulness.
Chad
-Erik
---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net
On Oct 6, 2006, at 3:53 PM, Nicolas Williams wrote:
On Fri, Oct 06, 2006 at 03:30:20PM -0600, Chad Leigh -- Shire.Net
LLC wrote:
On Oct 6, 2006, at 3:08 PM, Erik Trimble wrote:
OK. So, now we're on to FV. As Nico pointed out, FV is going to
need a new API. Using the VMS convention
are editing geometrically more files.
Chad
---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net
back to the
previous version (or 2 ago or whatever). I cannot do that in the
current situation.
Chad
-Erik
---
Chad Leigh
versions.
Chad
---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net
On Oct 5, 2006, at 7:47 PM, Chad Leigh -- Shire.Net LLC wrote:
I find the unix conventions of storying a file and file~ or any
of the other myriad billion ways of doing it that each app has
invented to be much more unwieldy.
sorry, storing a file, not storying
---
Chad Leigh
On Oct 5, 2006, at 6:48 PM, Frank Cusack wrote:
On October 5, 2006 5:25:17 PM -0700 David Dyer-Bennet [EMAIL PROTECTED]
b.net wrote:
Well, unless you have a better VCS than CVS or SVN. I first met this
as an obscure, buggy, expensive, short-lived SUN product, actually; I
believe it was
On Sep 26, 2006, at 12:26 PM, Chad Leigh -- Shire.Net LLC wrote:
On Sep 26, 2006, at 12:24 PM, Mike Kupfer wrote:
Chad == Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED]
writes:
Chad snoop does not show me the reply packets going back. What do I
Chad need to do to go both ways?
It's
On Sep 26, 2006, at 12:24 PM, Mike Kupfer wrote:
Chad == Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED] writes:
Chad snoop does not show me the reply packets going back. What do I
Chad need to do to go both ways?
It's possible that performance issues are causing snoop to miss the
replies
mounts and using a traditional share command to share them over nfs?
I am mostly a Solaris noob and am happy to learn and can try anything people
want me to test.
Thanks in advance for any comments or help.
thanks
Chad
On Sep 25, 2006, at 12:18 PM, eric kustarz wrote:
Chad Leigh wrote:
I have set up a Solaris 10 U2 06/06 system that has basic patches
to the latest -19 kernel patch and latest zfs genesis etc as
recommended. I have set up a basic pool (local) and a bunch of
sub-pools (local/mail, local
On Sep 25, 2006, at 1:15 PM, Mike Kupfer wrote:
Chad == Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED] writes:
Chad On Sep 25, 2006, at 12:18 PM, eric kustarz wrote:
You can also grab a snoop trace to see what packets are not being
responded to?
Chad If I can catch it happening. Most
On Sep 25, 2006, at 2:49 PM, Mike Kupfer wrote:
Chad == Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED] writes:
Chad There seems to be no packet headers or time stamps or
anything --
Chad just a lot of binary data. What am I looking for?
Use snoop -i capture_file to decode the capture
On Sep 25, 2006, at 3:54 PM, Mike Kupfer wrote:
Chad == Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED] writes:
Chad so -t a should show wall clock time
The capture file always records absolute time. So you (just) need to
use -t a when you decode the capture file.
Sorry for not making
On Sep 14, 2006, at 1:32 PM, Henk Langeveld wrote:
Bady, Brant RBCM:EX wrote:
Part of the archiving process is to generate checksums (I happen
to use
MD5), and store them with other metadata about the digital object in
order to verify data integrity and demonstrate the authenticity of
the
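The checksum-plus-metadata step described above can be sketched in a few lines of Python (the record layout here is illustrative, not from any particular archiving tool):

```python
import hashlib
import json

# Sketch of the archiving workflow: compute an MD5 digest of a digital
# object and store it with other metadata, so integrity can be verified
# (and authenticity demonstrated) later.

def make_record(name, payload):
    return {
        "name": name,
        "size": len(payload),
        "md5": hashlib.md5(payload).hexdigest(),
    }

def verify(record, payload):
    # Recompute the digest and compare against the stored value.
    return hashlib.md5(payload).hexdigest() == record["md5"]

obj = b"digital object contents"
record = make_record("object-001", obj)
print(json.dumps(record))

assert verify(record, obj)             # object is intact
assert not verify(record, obj + b"!")  # any modification is detected
```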
On Sep 12, 2006, at 4:39 PM, Celso wrote:
On 12/09/06, Celso [EMAIL PROTECTED] wrote:
I think it has already been said that in many
peoples experience, when a disk fails, it completely
fails. Especially on laptops. Of course ditto blocks
wouldn't help you in this situation either!
Exactly.
On 7/18/06, Brian Hechinger [EMAIL PROTECTED] wrote:
On Tue, Jul 18, 2006 at 09:46:44AM -0400, Chad Mynhier wrote:
On 7/18/06, Brian Hechinger [EMAIL PROTECTED] wrote:
Being able to remove devices from a pool would be a good thing. I can't
personally think of any reason that I would ever
ZFS to treat
a subset of disks as read-only.
Chad Mynhier
http://cmynhier.blogspot.com/
It uses extra space in the middle of the write, in order to hold the
new data, but once
the write is complete, the space occupied by the old version is now
free for use.
ckl
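A toy Python model of that point (the class and numbers are illustrative, not ZFS code): a copy-on-write overwrite briefly holds both old and new blocks, then frees the old ones, so steady-state usage is unchanged.

```python
# Toy copy-on-write accounting: rewriting a file consumes extra space
# only transiently -- new blocks are written first, and the old blocks
# become free once the write commits.

class CowStore:
    def __init__(self):
        self.blocks = {}  # file name -> number of live blocks
        self.used = 0     # total blocks in use

    def write(self, name, nblocks):
        old = self.blocks.get(name, 0)
        self.used += nblocks      # new copy written alongside the old
        peak = self.used          # momentary peak during the write
        self.blocks[name] = nblocks
        self.used -= old          # old copy freed after commit
        return peak

s = CowStore()
s.write("f", 10)                  # initial write: 10 blocks used
peak = s.write("f", 10)           # rewrite: peaks at 20, settles at 10
print(peak, s.used)               # 20 10
```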
On Jul 12, 2006, at 8:05 PM, Robert Chen wrote:
I still could not understand why Copy on Write does not waste file