> > You have me at a disadvantage here, because I'm not even a Unix (let
> > alone Solaris and Linux) aficionado.  But don't Linux snapshots in
> > conjunction with rsync (leaving aside other possibilities that I've
> > never heard of) provide rather similar capabilities (e.g., incremental
> > backup or re-synching), especially when used in conjunction with
> > scripts and cron?
> 
> Which explains why you keep ranting without knowing what you're talking
> about.

Au contraire, cookie:  I present things in detail to make it possible for 
anyone capable of understanding the discussion to respond substantively if 
there's something that requires clarification or further debate.

You, by contrast, babble on without saying anything substantive at all - which 
makes you kind of amusing, but otherwise useless.  You could at least have 
tried to answer my question above, since you took the trouble to quote it - but 
of course you didn't, just babbled some more.

> Copy-on-write.  Even a bookworm with 0 real-life-experience should be
> able to apply this one to a working situation.

As I may well have been designing and implementing file systems since before 
you were born (or not:  you just have a conspicuously callow air about you), my 
'real-life' experience with things like COW is rather extensive.  And while I 
don't have experience with Linux adjuncts like rsync, unlike some people I'm 
readily able to learn from the experience of others (who seem far more credible 
when describing their successful use of rsync and snapshots on Linux than 
anything I've seen you offer up here).

> There's a reason ZFS (and netapp) can take snapshots galore without
> destroying their filesystem performance.

Indeed:  it's because ZFS already sacrificed a significant portion of that 
performance by disregarding on-disk contiguity, so there's relatively little 
left to lose.  By contrast, systems that respect the effects of contiguity on 
performance (and WAFL does to a greater degree than ZFS) reap its benefits all 
the time (whether snapshots exist or not) while only paying a penalty when data 
is changed (and they don't have to change as much data as ZFS does because they 
don't have to propagate changes right back to the root superblock on every 
update).
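To make the write-amplification point concrete, here's a back-of-the-envelope model (the tree height and block counts are purely illustrative, not ZFS's actual on-disk layout): in a COW tree of height h, rewriting one leaf block also rewrites every indirect block on the path up to the root, while an update-in-place system pays an extra before-image write only while a snapshot is active.

```python
def cow_blocks_written(tree_height):
    """Blocks written per logical update when every change must
    propagate up through the indirect blocks to the root."""
    return tree_height + 1  # the data block plus one block per level

def update_in_place_blocks_written(snapshot_active):
    """Blocks written per logical update for an update-in-place
    filesystem: one in-place write, plus one before-image copy
    only while a snapshot is being maintained."""
    return 2 if snapshot_active else 1

# A 4-level tree: COW pays 5 physical writes per update all the time;
# update-in-place pays 1 normally and 2 only while a snapshot exists.
print(cow_blocks_written(4))                  # 5
print(update_in_place_blocks_written(False))  # 1
print(update_in_place_blocks_written(True))   # 2
```

The model ignores batching and transaction-group amortization, of course, but it captures why the COW cost is paid whether or not any snapshot exists.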

It is possible to have nearly all of the best of both worlds, but unfortunately 
not with any current implementations that I know of.  ZFS could at least come 
considerably closer, though, if it reorganized opportunistically as discussed 
in the database thread.

(By the way, since we're talking about snapshots here rather than about clones 
it doesn't matter at all how many there are, so your 'snapshots galore' bluster 
above is just more evidence of your technical incompetence:  with any 
reasonable implementation the only run-time overhead occurs in keeping the most 
recent snapshot up to date, regardless of how many older snapshots may also be 
present.)
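The point about snapshot count is easy to demonstrate with a toy model (purely illustrative, not any particular product's implementation): a block's old contents are saved once, on the first write after the most recent snapshot; older snapshots simply share that copy, so the run-time cost is independent of how many snapshots exist.

```python
class SnapshottedVolume:
    """Toy before-image snapshot scheme: only the newest snapshot
    ever needs a new before-image copy."""

    def __init__(self, blocks):
        self.blocks = dict(blocks)  # block number -> current contents
        self.snapshots = []         # each snapshot: {block: before-image}
        self.copies_done = 0        # count of before-image writes performed

    def take_snapshot(self):
        self.snapshots.append({})   # newest snapshot starts empty

    def write(self, blkno, data):
        if self.snapshots:
            newest = self.snapshots[-1]
            if blkno not in newest:          # first write since last snapshot?
                newest[blkno] = self.blocks[blkno]
                self.copies_done += 1        # the only copy ever made
        self.blocks[blkno] = data

vol = SnapshottedVolume({0: 'a', 1: 'b'})
for _ in range(10):          # ten snapshots...
    vol.take_snapshot()
vol.write(0, 'A')
print(vol.copies_done)       # 1 -- not 10
```

Ten snapshots, one write, one before-image copy: the overhead tracks the write rate since the newest snapshot, not the number of snapshots retained.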

But let's see if you can, for once, actually step up to the plate and discuss 
something technically, rather than spout buzzwords that you apparently don't 
come even close to understanding:

Are you claiming that writing snapshot before-images of modified data (as, 
e.g., Linux LVM snapshots do) for the relatively brief period that it takes to 
transfer incremental updates to another system 'destroys' performance?  First 
of all, that's clearly dependent upon the update rate during that interval, so 
if it happens at a quiet time (which presumably would be arranged if its 
performance impact actually *was* a significant issue), your assertion is 
flat-out wrong.  Even if the snapshot must be processed during normal 
operation, maintaining it still won't be any problem if the run-time workload 
is read-dominated.

And I suppose Sun must be lying in its documentation for fssnap (which Sun has 
offered since Solaris 8 with good old update-in-place UFS) where it says "While 
the snapshot is active, users of the file system might notice a slight 
performance impact [as contrasted with your contention that performance is 
'destroyed'] when the file system is written to, but they see no impact when 
the file system is read" 
(http://docsun.cites.uiuc.edu/sun_docs/C/solaris_9/SUNWaadm/SYSADV1/p185.html). 
 You'd really better contact them right away and set them straight.

Normal system cache mechanisms should typically keep about-to-be-modified data 
around long enough to avoid having to read it back in from disk to create a 
snapshot's before-image.  And using a log-structured approach to storing these 
BIs in the snapshot file or volume (though I don't know what specific 
approaches fssnap and LVM actually use:  do you?) would be extremely 
efficient - resulting in minimal impact on normal system operation regardless 
of write activity.
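A log-structured BI store along those lines might look like the following sketch (a hypothetical layout; I make no claim this is what fssnap or LVM actually do): before-images are appended sequentially to a log, with an index mapping block numbers to log positions, so each copy costs one sequential write.

```python
class BeforeImageLog:
    """Append-only store for snapshot before-images: every save is a
    sequential append, never a seek-and-overwrite."""

    def __init__(self):
        self.log = []    # append-only list of saved block contents
        self.index = {}  # block number -> position in the log

    def save(self, blkno, old_data):
        """Capture a block's pre-modification contents, at most once."""
        if blkno in self.index:
            return                       # already captured this snapshot
        self.index[blkno] = len(self.log)
        self.log.append(old_data)        # sequential append, O(1)

    def lookup(self, blkno, current):
        """Snapshot-time contents of a block: the logged copy if one
        exists, otherwise the (unmodified) live block."""
        pos = self.index.get(blkno)
        return self.log[pos] if pos is not None else current

log = BeforeImageLog()
log.save(7, 'old')
log.save(7, 'newer-old')      # ignored: block already captured
print(log.lookup(7, 'live'))  # old
print(log.lookup(3, 'live'))  # live
```

Reads of the snapshot pay an index lookup, but the write path - the part that matters for run-time impact - stays purely sequential.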

C'mon, cookie:  surprise us for once - say something intelligent.  With 
guidance and practice, you might even be able to make a habit of it.

- bill
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
