zfs send -Rv rp...@0908 > /net/remote/rpool/snaps/rpool.0908
The recommended thing is to zfs send | zfs receive ... or more likely,
zfs send | ssh somehost 'zfs receive'. You should ensure the source and
destination OSes are precisely the same version, because then you're
assured the zfs
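For illustration, a minimal sketch of that pipeline (host, pool, and
snapshot names are placeholders):

zfs snapshot tank/data@now
zfs send tank/data@now | ssh somehost 'zfs receive backuppool/data'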
How can I prevent /usr/bin/chmod from following symbolic links? I
can't find any
-P option in the documentation (and it doesn't work either..).
Maybe find can be used in some way?
Not possible; in Solaris we don't have an lchmod(2) system call, which
makes adding a chmod option (like
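As for the find suggestion in the question, a sketch (path and mode are
made up): without -L, find never reports a symlink as -type f, so chmod
is only ever handed real files and never follows a link.

find /some/dir -type f -exec chmod 644 {} \;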
It's a strange question anyway - you want a single file to have
permissions (suppose 755) in one directory, and some different permissions
(suppose 700) in some other directory? Then some users could access the
file if they use path A, but would be denied access to the same file if
they
Hi. If I am using slightly more reliable SAS drives versus SATA, SSDs
for both L2Arc and ZIL and lots of RAM, will a mirrored pool of say 24
disks hold any significant advantages over a RAIDZ pool?
Generally speaking, striping mirrors will be faster than raidz or raidz2,
but it will require a
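For concreteness, a sketch of the two layouts under discussion (device
names are hypothetical):

zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0   # striped mirrors: 3 vdevs, 50% usable
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0                 # raidz2: one vdev, 4/6 usable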
Suppose I have a storagepool: /storagepool
And I have snapshots on it. Then I can access the snaps under
/storagepool/.zfs/snapshot
But is there any way to enable this within all the subdirs? For example,
cd /storagepool/users/eharvey/some/foo/dir
cd
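As an aside, the .zfs directory exists only at the root of each
filesystem; whether it shows up in listings is controlled by a property.
A hedged sketch:

zfs set snapdir=visible storagepool   # make .zfs visible at the fs root

This still doesn't put a .zfs inside every subdirectory; a symlink into
.zfs/snapshot is one common workaround.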
System:
Dell 2950
16G RAM
16 1.5T SATA disks in a SAS chassis hanging off of an LSI 3801e, no
extra drive slots, a single zpool.
snv_124, but with my zpool still running at the 2009.06 version (14).
My plan is to put the SSD into an open disk slot on the 2950, but will
have to configure
Thanks Ed. It sounds like you have run in this mode? No issues with
the perc?
You can JBOD with the perc. It might be technically a raid0 or
raid1 with a
single disk in it, but that would be functionally equivalent to JBOD.
The only time I did this was ...
I have a Windows server, on
Replacing failed disks is easy when PERC is doing the RAID. Just remove
the failed drive and replace with a good one, and the PERC will rebuild
automatically.
Sorry, not correct. When you replace a failed drive, the perc card doesn't
know for certain that the new drive you're adding is meant
The Intel specified random write IOPS are with the cache enabled and
without cache flushing. They also carefully only use a limited span
of the device, which fits most perfectly with how the device is built.
How do you know this? This sounds much more detailed than any average
person could
I built a fileserver on solaris 10u6 (10/08) intending to back it up to
another server via zfs send | ssh othermachine 'zfs receive'
However, the new server is too new for 10u6 (10/08) and requires a later
version of Solaris ... presently available is 10u8 (10/09)
Is it crazy for me to try the
*snip*
I hope that's clear.
Yes, perfectly clear, and very helpful. Thank you very much.
It says at the end of the zfs send section of the man page: "The format
of the stream is committed. You will be able to receive your streams on
future versions of ZFS."
'Twas not always so. It used to say: "The format of the stream is
evolving. No backwards compatibility is guaranteed." You may
I previously had a linux NFS server that I had mounted 'ASYNC' and, as
one would expect, NFS performance was pretty good, getting close to
900Mb/s. Now that I have moved to opensolaris, NFS performance is not
very good, I'm guessing mainly due to the 'SYNC' nature of NFS. I've
seen various
If there were a "zfs send" datastream saved someplace, is there a way to
verify the integrity of that datastream without doing a "zfs receive" and
occupying all that disk space?
I am aware that "zfs send" is not a backup solution, due to vulnerability
of even a single bit error, and lack of
Depending on your version of OS, I think the following post from Richard
Elling will be of great interest to you:
http://richardelling.blogspot.com/2009/10/check-integrity-of-zfs-send-streams.html
Thanks! :-)
No, wait!
According to that page, if you zfs receive -n then you should
If feasible, you may want to generate MD5 sums on the streamed output
and then use these for verification.
That's actually not a bad idea. It should be kinda obvious, but I hadn't
thought of it because it's sort-of duplicating existing functionality.
I do have a multipipe script that behaves
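Something like this, as a minimal sketch (snapshot and paths are
placeholders):

zfs send tank@snap | tee /backup/tank.zfs | md5sum > /backup/tank.zfs.md5
# verify later, without receiving: recompute over stdin and compare
md5sum < /backup/tank.zfs | diff - /backup/tank.zfs.md5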
Where exactly do you get zstreamdump?
I found a link to zstreamdump.c ... but is that it? Shouldn't it be part of
a source tarball or something?
Does it matter what OS? Every reference I see for zstreamdump is about
opensolaris. But I'm running solaris.
Gzip can be a bit slow. Luckily there is 'lzop', which is quite a lot
more CPU efficient on i386 and AMD64, and even on SPARC. If the
compressor is able to keep up with the network and disk, then it is
fast enough. See http://www.lzop.org/.
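To illustrate, a sketch of how lzop might slot into a replication
pipeline (host and dataset names are invented):

zfs send tank@snap | lzop -c | ssh otherhost 'lzop -dc | zfs receive tank/backup'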
In my development/testing this week, I did time
OS means Operating System, or OpenSolaris. It was in the second sense
that I wrote OS in my answer. It was not obvious you were using
Solaris 10, though. Sorry about that.
(FYI, zstreamdump seems to be an addition to build 125.)
Oh - I never connected OS to OpenSolaris. ;-)
So I gather
I see 3.6X less CPU consumption from 'lzop -3' than from 'gzip -3'.
Where do you get lzop from? I don't see any binaries on their site, nor
blastwave, nor opencsw. And I am having difficulty building it from source.
Oh well. I built LZO, but can't seem to link it in the lzop build, despite
correctly setting the FLAGS variables as the INSTALL file says. I'd love
to provide an lzop comparison, but can't get it. I give up ... Also,
can't build python-lzo. That would also be sweet, but hey.
For whoever
cat my_log_file | tee >(gzip > my_log_file.gz) >(wc -l) >(md5sum) |
sort | uniq -c
That is great. ;-) Thank you very much.
We've been using ZFS for about two years now and make a lot of use of zfs
send/receive to send our data from one X4500 to another. This has been
working well for the past 18 months that we've been doing the sends.
I recently upgraded the receiving thumper to Solaris 10 u8 and since then,
This is especially important, because if you have 1 failed drive, and you
pull the wrong drive, now you have 2 failed drives. And that could destroy
the dataset (depending on whether you have raidz-1 or raidz-2).
Whenever possible, always get the hotswappable hardware that will blink a
red
I'll first suggest questioning the measurement of speed you're getting,
12.5Mb/sec. I'll suggest another, more accurate method:
date ; zfs send somefilesystem | pv -b | ssh somehost zfs receive foo ; date
At any given time, you can see how many bytes have transferred in aggregate,
and what time
I'm seeing similar results, though my file systems currently have
de-dupe disabled, and only compression enabled, both systems being
I can't say this is your issue, but you can count on slow writes with
compression on. How slow is slow? Don't know. Irrelevant in this case?
Possibly.
I'm willing to accept slower writes with compression enabled, par for
the course. Local writes, even with compression enabled, can still
exceed 500MB/sec, with moderate to high CPU usage.
These problems seem to have manifested after snv_128, and seemingly
only affect ZFS receive speeds. Local
Hi all,
I need to move a filesystem off of one host and onto another, smaller
one. The fs in question, with no compression enabled, is using 1.2 TB
(refer). I'm hoping that zfs compression will dramatically reduce this
requirement and allow me to keep the dataset on an 800 GB store.
I've taken to creating an unmounted empty filesystem with a
reservation to prevent the zpool from filling up. It gives you
behavior similar to ufs's reserved blocks.
So ... Something like this?
zpool create -m /path/to/mountpoint myzpool c1t0d0
and then... Assuming it's a 500G disk ...
zfs
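A guess at what the truncated command might look like (the reservation
size is invented for a 500G disk):

zfs create -o mountpoint=none -o reservation=25G myzpool/spacer

The empty, unmounted filesystem never uses its reservation, so the rest
of the pool can never consume those last 25G, much like ufs's reserved
blocks.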
What is the best way to back up a zfs pool for recovery? Recover
entire pool or files from a pool... Would you use snapshots and
clones?
I would like to move the backup to a different disk and not use
tapes.
Personally, I use zfs send | zfs receive to an external disk. Initially a
full
I am considering building a modest sized storage system with zfs. Some
of the data on this is quite valuable, some small subset to be backed
up forever, and I am evaluating back-up options with that in mind.
You don't need to store the zfs send data stream on your backup media.
This would be
NO, zfs send is not a backup.
Understood, but perhaps you didn't read my whole message. Here, I will
spell out the whole discussion:
If you zfs send to a file, it is well understood there are two big
problems with this method of backup. #1 If a single bit error is
introduced into the file,
Personally, I use zfs send | zfs receive to an external disk.
Initially a
full image, and later incrementals.
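A sketch of what that might look like (pool and snapshot names are
placeholders):

zfs snapshot -r tank@back1
zfs send -R tank@back1 | zfs receive -Fd extpool          # initial full image
zfs snapshot -r tank@back2
zfs send -R -i back1 tank@back2 | zfs receive -d extpool  # later incremental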
Do these incrementals go into the same filesystem that received the
original zfs stream?
Yes. In fact, I think that's the only way possible. The end result is ... On
my
I still believe that a set of compressed incremental star archives gives
you more features.
Big difference there is that in order to create an incremental star archive,
star has to walk the whole filesystem or folder that's getting backed up,
and do a stat on every file to see which files have
Consider then, using a zpool-in-a-file as the file format, rather than
zfs send streams.
That's a pretty cool idea. Then you've still got the entire zfs volume
inside of a file, but you're able to mount and extract individual files if
you want, and you're able to pipe your zfs send directly to
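A rough sketch of the zpool-in-a-file idea (size and paths are invented;
note the caveat elsewhere in this thread that file-backed pools are not
fully supported):

mkfile 100g /backup/pool.img
zpool create filepool /backup/pool.img
zfs send tank@snap | zfs receive filepool/tank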
Personally, I like to start with a fresh full image once a month,
and then do daily incrementals for the rest of the month.
This doesn't buy you anything. ZFS isn't like traditional backups.
If you never send another full, then eventually the delta from the original
to the present will
Star implements this in a very effective way (by using libfind) that is
even faster than the find(1) implementation from Sun.
Even if I just run find over my filesystem, it will run for 7 hours. But
zfs can create my whole incremental snapshot in a minute or two. There is
no way star or any other
zpool create -f testpool \
    mirror c0t0d0 c1t0d0 mirror c4t0d0 c6t0d0 mirror c0t1d0 c1t1d0 \
    mirror c4t1d0 c5t1d0 mirror c6t1d0 c7t1d0 mirror c0t2d0 c1t2d0 \
    mirror c4t2d0 c5t2d0 mirror c6t2d0 c7t2d0 mirror c0t3d0 c1t3d0 \
    mirror c4t3d0 c5t3d0 mirror c6t3d0 c7t3d0 mirror
Zfs does not strictly support RAID 1+0. However, your sample command
will create a pool based on mirror vdevs which is written to in a
load-shared fashion (not striped). This type of pool is ideal for
Although it's not technically striped according to the RAID definition of
striping, it does
zpool create testpool disk1 disk2 disk3
In the traditional sense of RAID, this would create a concatenated data set.
The size of the data set is the size of disk1 + disk2 + disk3. However,
since this is ZFS, it's not constrained to linearly assigning virtual disk
blocks to physical disk blocks
Are there any plans to have a tool to restore individual files from zfs
send streams - like ufsrestore?
The best advice I've heard so far is thus:
On your backup media, create a zpool in a file container. When you zfs
send don't save the data stream. Instead, feed it directly into zfs
Replacing my current media server with another larger capacity media
server. Also switching over to solaris/zfs.
Anyhow we have 24 drive capacity. These are for large sequential
access (large media files) used by no more than 3 or 5 users at a time.
What type of disks are you using, and
Thanks for the responses guys. It looks like I'll probably use RaidZ2
with 8 drives. The write bandwidth isn't that great as it'll be a
hundred gigs every couple weeks but in a bulk load type of environment.
So, not a major issue. Testing with 8 drives in a raidz2 easily
saturated a GigE
I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2
mirrored boot drives.
You want to use compression and deduplication and raidz2. I hope you didn't
want to get any performance out of this system, because all of those are
compute or IO intensive.
FWIW ... 5 disks in raidz2
Data in raidz2 is striped so that it is split across multiple disks.
Partial truth.
Yes, the data is on more than one disk, but parity must be computed,
requiring computation overhead and a write operation on each and every
disk. It's not simply striped. Whenever you read or write, you need to
I want my VMs to run fast - so is it deduplication that really slows
things down?
Are you saying raidz2 would overwhelm current I/O controllers to where
I could not saturate a 1Gb network link?
Is the CPU I am looking at not capable of doing dedup and compression?
Or are no CPUs capable
(4) Hold backups from windows machines, mac (time machine), linux.
for time machine you will probably find yourself using COMSTAR and the
GlobalSAN iSCSI initiator because Time Machine does not seem willing
to work over NFS. Otherwise, for Macs you should definitely use NFS,
There are also questions of case sensitivity, locking, being mounted at
boot time rather than login time, accommodating more than one user.
I've also heard SMB is far slower.
The Macs I've switched to automounted NFS are causing me less trouble.
If you are in a ``share almost everything''
amber ~ # zpool list data
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
data   930G   295G  635G  31%  1.00x  ONLINE  -
amber ~ # zfs send -RD d...@prededup |zfs recv -d ezdata
cannot receive new filesystem stream: destination 'ezdata' exists
must specify -F to overwrite it
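The error message suggests the fix, presumably something like the
following (keeping in mind that -F overwrites whatever is on the
receiving side):

zfs send -RD d...@prededup | zfs recv -dF ezdata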
I have a new server, with 7 disks in it. I am performing benchmarks on it
before putting it into production, to substantiate claims I make, like
striping mirrors is faster than raidz and so on. Would anybody like me to
test any particular configuration? Unfortunately I don't have any SSD, so I
IMHO, sequential tests are a waste of time. With default configs, it will
be difficult to separate the raw performance from prefetched performance.
You might try disabling prefetch as an option.
Let me clarify:
Iozone does a nonsequential series of sequential tests, specifically
Never mind. I have no interest in performance tests for Solaris 10.
The code is so old, that it does not represent current ZFS at all.
Whatever. Regardless of what you say, it does show:
- Which is faster, raidz, or a stripe of mirrors?
- How much does raidz2 hurt
iozone -m -t 8 -T -O -r 128k -o -s 12G
Actually, it seems that this is more than sufficient:
iozone -m -t 8 -T -r 128k -o -s 4G
Good news, cuz I kicked off the first test earlier today, and it seems like
it will run till Wednesday. ;-) The first run, on a single disk, took 6.5
hrs,
http://nedharvey.com/iozone_weezer/neds%20method/raw_results.zip
From: Edward Ned Harvey [mailto:sola...@nedharvey.com]
Sent: Saturday, February 13, 2010 9:07 AM
To: opensolaris-disc...@opensolaris.org; zfs-discuss@opensolaris.org
Subject: ZFS performance benchmarks in various
A most excellent set of tests. We could use some units in the PDF
file though.
Oh, hehehe. ;-) The units are written in the raw txt files. On your
tests, the units were ops/sec, and in mine, they were Kbytes/sec. If you
like, you can always grab the xlsx and modify it to your tastes, and
A most excellent set of tests. We could use some units in the PDF
file though.
Oh, by the way, you originally requested that the 12G file be used in the
benchmark, and later changed to 4G. But by that time, two of the tests had
already completed on the 12G, and I didn't throw away those results,
/10 8:08 AM, Edward Ned Harvey sola...@nedharvey.com wrote:
Ok, I've done all the tests I plan to complete. For highest performance,
it seems:
- The measure I think is the most relevant for typical operation is the
fastest random read / write / mix. (Thanks Bob, for suggesting I do
ZFS has intelligent prefetching. AFAIK, Solaris disk drivers do not
prefetch.
Can you point me to any reference? I didn't find anything stating yay or
nay, for either of these.
Doesn't this mean that if you enable write back, and you have
a single, non-mirrored raid-controller, and your raid controller
dies on you so that you lose the contents of the nvram, you have
a potentially corrupt file system?
It is understood that any single point of failure could result
I wonder if it is a real problem, i.e. will it for example cause longer
backup times, and will it be addressed in the future?
It doesn't cause longer backup time, as long as you're doing a zfs send |
zfs receive. But it could cause longer backup time if you're using
something like tar.
The only way to solve it
Once the famous bp rewriter is integrated and a defrag functionality is
built on top of it, you will be able to re-arrange your data so it is
sequential again.
Then again, this would also rearrange your data to be sequential again:
cp -p somefile somefile.tmp ; mv -f somefile.tmp somefile
I have a system with a bunch of disks, and I'd like to know how much faster
it would be if I had an SSD for the ZIL; however, I don't have the SSD and
I don't want to buy one right now. The reasons are complicated, but it's
not a cost barrier. Naturally I can't do the benchmark right now...
But
I don't know the answer to your question, but I am running the same version
of OS you are, and this bug could affect us. Do you have any link to any
documentation about this bug? I'd like to forward something to inform the
other admins at work.
From: zfs-discuss-boun...@opensolaris.org
Sorry for double-post. This thread was posted separately to
opensolaris-help and zfs-discuss. So I'm replying to both lists.
I'm wondering what the possibilities of two-way replication are for a
ZFS storage pool.
Based on all the description you gave, I wouldn't call this two-way
Is there any work on an upgrade of zfs send/receive to handle resuming
on next media?
Please see Darren's post, pasted below.
-Original Message-
From: opensolaris-discuss-boun...@opensolaris.org [mailto:opensolaris-
discuss-boun...@opensolaris.org] On Behalf Of Darren Mackay
Is there any work on an upgrade of zfs send/receive to handle resuming
on next media?
See Darren's post, regarding mkfifo. The purpose is to enable you to use
normal backup tools that support changing tapes, to backup your zfs send
to multiple split tapes. I wonder though - During a restore,
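Not the mkfifo trick itself, but to make the split-across-media idea
concrete, a crude sketch (sizes and paths invented):

zfs send -R tank@snap | split -b 2000m - /media/stream.part.
# restore: concatenate the parts in order and receive
cat /media/stream.part.* | zfs receive -Fd tank2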
In this email, when I say PERC, I really mean either a PERC, or any other
hardware WriteBack buffered raid controller with BBU.
For future server purchases, I want to know which is faster: (a) A bunch of
hard disks with PERC and WriteBack enabled, or (b) A bunch of hard disks,
plus one SSD
Recently, I'm benchmarking all kinds of stuff on my systems. And one
question I can't intelligently answer is what blocksize I should use in
these tests.
I assume there is something which monitors present disk activity, that I
could run on my production servers, to give me some statistics of
From everything I've seen, an SSD wins simply because it's 20-100x the
size. HBAs almost never have more than 512MB of cache, and even fancy
SAN boxes generally have 1-2GB max. So, HBAs are subject to being
overwhelmed with heavy I/O. The SSD ZIL has a much better chance of
being able to
You are running into this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6929751
Currently, building a pool from files is not fully supported.
I think Cindy and I interpreted the question differently. If you want the
zpool inside a file to stay mounted while the system is
It all depends on how they are connecting to the storage. iSCSI, CIFS,
NFS, database, rsync, ...?
The reason I say this is because ZFS will coalesce writes, so just looking
at iostat data (ops versus size) will not be appropriate. You need to look
at the data flowing between ZFS and
I don't have an answer to this question, but I can say, I've seen a similar
surprising result. I ran iozone on various raid configurations of spindle
disks ... and on a ramdisk. I was surprised to see the ramdisk is only
about 50% to 200% faster than the next best competitor in each category ... I
In my case, when I reboot the server I cannot get the pool to come back
up. It shows UNAVAIL. I have tried to export before reboot and reimport
it, and have not been successful, and I don't like this in case a power
issue of some sort happens. My other option was to mount using lofiadm
I don't think retransmissions of b0rken packets are a problem anymore;
most people use ssh, which provides good error detection at a fine grain.
It is rare that one would need to resend an entire ZFS dump stream when
using ssh (or TLS or ...)
Archival tape systems are already designed to
In addition to backups on tape, I like to backup my ZFS to removable hard
disk. (Created a ZFS filesystem on removable disk, and zfs send | zfs
receive onto the removable disk). But since a single hard disk is so prone
to failure, I like to scrub my external disk regularly, just to verify the
The one thing that I keep thinking, and which I have yet to see
discredited, is that ZFS file systems use POSIX semantics. So, unless you
are using specific features (notably ACLs, as Paul Henson is), you should
be able to back up those file systems using well known tools.
This is
I think what you're saying is: Why bother trying to back up with zfs send,
when the recommended practice, fully supportable, is to use other tools
for backup, such as tar, star, Amanda, Bacula, etc. Right?
The answer to this is very simple.
#1 ...
#2 ...
Oh, one more thing. zfs send
Why do we want to adapt zfs send to do something it was never intended
to do, and probably won't be adapted to do (well, if at all) anytime
soon, instead of optimizing existing technologies for this use case?
The only time I see or hear of anyone using zfs send in a way it wasn't
intended is
My own stuff is intended to be backed up by a short-cut combination --
zfs send/receive to an external drive, which I then rotate off-site (I
have three of a suitable size). However, the only way that actually
works so far is to destroy the pool (not just the filesystem) and
recreate it from
From what I've read so far, zfs send is a block level api and thus
cannot be used for real backups. As a result of being block level
oriented, the
Weirdo. The above "cannot be used for real backups" is obviously
subjective, is incorrect, and widely discussed here, so I just say weirdo.
I'm tired
ZFS+CIFS even provides Windows Volume Shadow Services so that Windows
users can do this on their own.
I'll need to look into that, when I get a moment. Not familiar with
Windows Volume Shadow Services, but having people at home able to do this
directly seems useful.
I'd like to spin
Even in
I'll say it again: neither 'zfs send' nor (s)tar is an enterprise (or
even home) backup system on their own; one or both can be components of
the full solution.
I would be pretty comfortable with a solution thusly designed:
#1 A small number of external disks, zfs send onto the disks and
1. NDMP for putting zfs send streams on tape over the network. So
Tell me if I missed something here. I don't think I did. I think this
sounds like crazy talk.
I used NDMP up till November, when we replaced our NetApp with a Sun
Solaris box. In NDMP, to choose the source files, we had the
It would appear that the bus bandwidth is limited to about 10MB/sec
(~80Mbps) which is well below the theoretical 400Mbps that 1394 is
supposed to be able to handle. I know that these two disks can go
significantly higher since I was seeing 30MB/sec when they were used on
Macs previously in
I'll say it again: neither 'zfs send' nor (s)tar is an enterprise (or
even home) backup system on their own; one or both can be components of
the full solution.
Up to a point. zfs send | zfs receive does make a very good back up
scheme for the home user with a moderate amount of
5+ years ago the variety of NDMP that was available with the
combination of NetApp's OnTap and Veritas NetBackup did backups at the
volume level. When I needed to go to tape to recover a file that was
no longer in snapshots, we had to find space on a NetApp to restore
the volume. It could
That would add unnecessary code to the ZFS layer for something that
cron can handle in one line.
Actually ... Why should there be a ZFS property to share NFS, when you can
already do that with share and dfstab? And still the zfs property
exists.
I think the proposed existence of a ZFS scrub
Most software introduced in Linux clearly violates the UNIX
philosophy.
Hehehe, don't get me started on OSX. ;-) And for the love of all things
sacred, never say OSX is not UNIX. I made that mistake once. Which is not
to say I was proven wrong or anything - but it's apparently a subject
The only tool I'm aware of today that provides a copy of the data,
and all of the ZPL metadata and all the ZFS dataset properties is 'zfs
send'.
AFAIK, this is correct.
Further, the only type of tool that can back up a pool is a tool like
dd.
How is it different to back up a pool, versus
ln -s .zfs/snapshot snapshots
Voila. All Windows or Mac or Linux or whatever users are able to
easily access snapshots.
Clever.
Just one minor problem though, you've circumvented the reason why the
snapdir
property defaults to hidden. This probably won't affect clients that
Actually ... Why should there be a ZFS property to share NFS, when you
can already do that with share and dfstab? And still the zfs property
exists.
Probably because it is easy to create new filesystems and clone them; as
NFS only works per filesystem, you need to edit dfstab every time
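Which is exactly what the property avoids, since sharenfs is inherited by
child filesystems; a quick sketch:

zfs set sharenfs=on tank/home
zfs create tank/home/newuser    # inherits sharenfs=on; shared with no dfstab edit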
Does cron happen to know how many other scrubs are running, bogging down
your IO system? If the scrub scheduling was integrated into zfs itself,
It doesn't need to.
Crontab entry: /root/bin/scruball.sh
/root/bin/scruball.sh:
#!/usr/bin/bash
for filesystem in filesystem1 filesystem2
No, it is not a subdirectory; it is a filesystem mounted on top of the
subdirectory.
So unless you use NFSv4 with mirror mounts or an automounter, other NFS
versions will show you the contents of a directory and not a filesystem.
It doesn't matter if it is zfs or not.
Ok, I learned something
IIRC it's zpool scrub, and last time I checked, the zpool command exited
(with status 0) as soon as it had started the scrub. Your command would
start _ALL_ scrubs in parallel as a result.
You're right. I did that wrong. Sorry 'bout that.
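For the record, a sketch of a sequential version (pool names are
placeholders):

#!/usr/bin/bash
for pool in pool1 pool2 ; do
    zpool scrub "$pool"
    # zpool scrub returns immediately, so poll until this scrub finishes
    while zpool status "$pool" | grep -q "in progress" ; do
        sleep 300
    done
done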
So either way, if there's a zfs property for scrub,
Not being a CIFS user, could you clarify/confirm for me.. is this
just a presentation issue, ie making a directory icon appear in a
gooey windows explorer (or mac or whatever equivalent) view for people
to click on? The windows client could access the .zfs/snapshot dir
via typed pathname if
This may be a bit dimwitted since I don't really understand how
snapshots work. I mean the part concerning COW (copy on write) and
how it takes so little room.
COW and snapshots are very simple to explain. Suppose you're chugging along
using your filesystem, and then one moment, you tell the
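In command form, the upshot (dataset name invented):

zfs snapshot tank/home@monday   # nearly instant, consumes almost no space
# blocks are copied only as files change afterwards (copy-on-write), so the
# snapshot grows only by the amount of data that is subsequently overwritten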
In other words, there is no case where multiple scrubs compete for the
resources of a single disk, because a single disk only participates in
one pool.
Excellent point. However, the problem scenario was described as SAN. I
can easily imagine a scenario where some SAN administrator created