Re: [zfs-discuss] zfs and raid 51

2009-02-21 Thread Fajar A. Nugraha
2009/2/18 Ragnar Sundblad ra...@csc.kth.se:
 For our file and mail servers we have been using mirrored RAID-5
 chassis, with DiskSuite and UFS. This has served us well, and the

 For some reason that I haven't yet understood, ZFS doesn't allow you
 to layer RAIDs on top of each other (mirrors/stripes/parity RAIDs on
 mirrors/stripes/parity RAIDs) in a single pool.

Is there any reason why you don't want to use striped mirrors (i.e.
stripes of mirrored vdevs, a.k.a. RAID 10) with online spares? This
should provide a high level of availability while greatly reducing the
downtime needed for resilvering in the event of a disk failure. And if
you're REALLY paranoid you could go with a 3-way (or wider) mirror for
each vdev.

Regards,

Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up ZFS snapshots

2009-02-21 Thread David Abrahams
David Magda dmagda at ee.ryerson.ca writes:

 
  The format of the [zfs send] stream is evolving. No backwards  
  compatibility is guaranteed. You may not be able to receive your  
  streams on future versions of ZFS.
 
 http://docs.sun.com/app/docs/doc/819-2240/zfs-1m
 
 If you want to do back ups of your file system use a documented  
 utility (tar, cpio, pax, zip, etc.).

Well understood.  But does anyone know the long-term intentions of
the ZFS developers in this area?  The one big disadvantage of the
recommended approaches shows up when you start taking advantage
of ZFS to clone filesystems without replicating storage.  Using zfs send
will avoid representing the data twice to the backup system (and allow
easy reconstruction of the clones), but I don't think the same goes for
the other techniques.
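
For example, a rough sketch with made-up dataset and path names: a
replication stream preserves the clone relationship, so the shared
blocks only travel once, while a file-level tool copies them once per
filesystem:

  zfs snapshot tank/base@gold
  zfs clone tank/base@gold tank/dev1            # dev1 shares its blocks with base
  zfs snapshot -r tank@backup
  zfs send -R tank@backup > /backup/tank.zfs    # clones ride along as clones, so
                                                # the shared data is written once
  tar cf /backup/tank.tar /tank/base /tank/dev1 # stores the shared data twice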

It would be nice to know that they're thinking about a way to address
these issues.

--
Dave Abrahams
Boostpro Computing
http://boostpro.com





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up ZFS snapshots

2009-02-21 Thread Bob Friesenhahn

On Sat, 21 Feb 2009, David Abrahams wrote:


If you want to do back ups of your file system use a documented
utility (tar, cpio, pax, zip, etc.).


Well understood.  But does anyone know the long-term intentions of
the ZFS developers in this area?  The one big disadvantage of the

It would be nice to know that they're thinking about a way to address
these issues.


You are requesting that the ZFS developers be able to predict the 
future.  How can they do that?


Imposing a requirement for backward compatibility will make it more 
difficult for ZFS to adapt to changing requirements as it evolves.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up ZFS snapshots

2009-02-21 Thread Tim
On Sat, Feb 21, 2009 at 11:26 AM, Bob Friesenhahn 
bfrie...@simple.dallas.tx.us wrote:

 On Sat, 21 Feb 2009, David Abrahams wrote:


 If you want to do back ups of your file system use a documented
 utility (tar, cpio, pax, zip, etc.).


 Well understood.  But does anyone know the long-term intentions of
 the ZFS developers in this area?  The one big disadvantage of the

 It would be nice to know that they're thinking about a way to address
 these issues.


 You are requesting that the ZFS developers be able to predict the future.
  How can they do that?

 Imposing a requirement for backward compatibility will make it more
 difficult for ZFS to adapt to changing requirements as it evolves.

 Bob


No, he's not asking them to predict the future.  Don't be a dick.  He's
asking if they can share some of their intentions based on their current
internal roadmap.  If you're telling me Sun doesn't have a 1yr/2yr/3yr
roadmap for ZFS, I'd say we're all in some serious trouble.  "We make it
up as we go along" does NOT inspire the least bit of confidence, and I
HIGHLY doubt that's how they're operating.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up ZFS snapshots

2009-02-21 Thread Bob Friesenhahn

On Sat, 21 Feb 2009, Tim wrote:


No, he's not asking them to predict the future.  Don't be a dick.  He's
asking if they can share some of their intentions based on their current
internal roadmap.  If you're telling me Sun doesn't have a 1yr/2yr/3yr
roadmap for ZFS, I'd say we're all in some serious trouble.  "We make it
up as we go along" does NOT inspire the least bit of confidence, and I
HIGHLY doubt that's how they're operating.


ZFS is, for the most part, already developed.  It is now undergoing 
feature, performance, and stability updates.  Entries in the OpenSolaris 
bug tracking system may reveal what has been requested and what is being 
worked on.


In the current economy, I think that "we make it up as we go along" is 
indeed the best plan.  That is what most of us are doing now. 
Multi-year roadmaps are continually being erased and restarted due to 
changes (reductions) in staff, funding, and customer base. Yes, most 
of us are in some serious trouble and if you are not, then you are 
somehow blessed.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up ZFS snapshots

2009-02-21 Thread Tim
On Sat, Feb 21, 2009 at 12:18 PM, Bob Friesenhahn 
bfrie...@simple.dallas.tx.us wrote:


 ZFS is, for the most part, already developed.  It is now undergoing
 feature, performance, and stability updates.  Entries in the OpenSolaris
 bug tracking system may reveal what has been requested and what is being
 worked on.

 In the current economy, I think that "we make it up as we go along" is
 indeed the best plan.  That is what most of us are doing now. Multi-year
 roadmaps are continually being erased and restarted due to changes
 (reductions) in staff, funding, and customer base. Yes, most of us are in
 some serious trouble and if you are not, then you are somehow blessed.

 Bob



Well given that I *KNOW* Sun isn't making shit up as they go along, and I
have *SEEN* some of their plans under NDA, I'll just outright call bullshit.
 I was trying to be nice about it.  If you're making stuff up as you go
along that's likely why you're struggling.  Modifying plans is one thing.
 Not having any is another thing entirely.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up ZFS snapshots

2009-02-21 Thread Ian Collins

David Abrahams wrote:

David Magda dmagda at ee.ryerson.ca writes:

 
  
The format of the [zfs send] stream is evolving. No backwards  
compatibility is guaranteed. You may not be able to receive your  
streams on future versions of ZFS.
  

http://docs.sun.com/app/docs/doc/819-2240/zfs-1m

If you want to do back ups of your file system use a documented  
utility (tar, cpio, pax, zip, etc.).



Well understood.  But does anyone know the long-term intentions of
the ZFS developers in this area?  The one big disadvantage of the
recommended approaches shows up when you start taking advantage
of ZFS to clone filesystems without replicating storage.  Using zfs send
will avoid representing the data twice to the backup system (and allow
easy reconstruction of the clones), but I don't think the same goes for
the other techniques.

  
I wouldn't have any serious concerns about backing up snapshots provided 
the stream version was on the tape label and I had a backup of the 
Solaris release (or a virtual machine) that produced them.
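
For example, recording the relevant versions alongside the dump
(dataset names are only placeholders):

  zfs get -H -o value version tank/home     # filesystem version, for the label
  zpool get version tank                    # pool version, for good measure
  zfs send tank/home@2009-02-21 > /dev/rmt/0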


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RFE for two-level ZFS

2009-02-21 Thread Gary Mills
On Thu, Feb 19, 2009 at 12:36:22PM -0800, Brandon High wrote:
 On Thu, Feb 19, 2009 at 6:18 AM, Gary Mills mi...@cc.umanitoba.ca wrote:
  Should I file an RFE for this addition to ZFS?  The concept would be
  to run ZFS on a file server, exporting storage to an application
  server where ZFS also runs on top of that storage.  All storage
  management would take place on the file server, where the physical
  disks reside.  The application server would still perform end-to-end
  error checking but would notify the file server when it detected an
  error.
 
 You could accomplish most of this by creating an iSCSI volume on the
 storage server, then using ZFS with no redundancy on the application
 server.

That's what I'd like to do, and what we do now.  The RFE is to take
advantage of the end-to-end checksums in ZFS in spite of having no
redundancy on the application server.  Having all of the disk
management in one place is a great benefit.
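
For reference, that setup looks roughly like this (names are made up;
shareiscsi is the old pre-COMSTAR property):

  # on the storage server: redundant pool, zvol exported over iSCSI
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
  zfs create -V 500g tank/appvol
  zfs set shareiscsi=on tank/appvol

  # on the application server: single-device pool, no redundancy,
  # so checksums detect corruption but cannot repair it
  zpool create appdata c3t0d0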

 You'll have two layers of checksums, one on the storage server's
 zpool and a second on the application server's filesystem. The
 application server won't be able to notify the storage server that
 it's detected a bad checksum, other than through retries, but you can
 write a user-space monitor that watches for ZFS checksum errors and
 sends notifications to the storage server.

The RFE is to enable the two instances of ZFS to exchange information
about checksum failures.

 To poke a hole in your idea: What if the app server does find an
 error? What's the storage server to do at that point? Provided that
 the storage server's zpool already has redundancy, the data written to
 disk should already be exactly what was received from the client. If
 you want to have the ability to recover from errors on the app server,
 you should use a redundant zpool - either a mirror or a raidz.

Yes, if the two instances of ZFS disagree, we have a problem that
needs to be resolved: they need to cooperate in this endeavour.

 If you're concerned about data corruption in transit, then it sounds
 like something akin to T10 DIF (which others mentioned) would fit the
 bill. You could also tunnel the traffic over a transit layer such as
 TLS or SSH that provides a measure of validation. Latency should be
 fun to deal with however.

I'm mainly concerned that ZFS on the application server will detect a
checksum error and then be unable to preserve the data.  iSCSI already
has TCP checksums.  I assume that FC-AL does as well.  Using more
reliable checksums has no benefit if ZFS will still detect end-to-end
checksum errors.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up ZFS snapshots

2009-02-21 Thread Bob Friesenhahn

On Sat, 21 Feb 2009, Tim wrote:

Well given that I *KNOW* Sun isn't making shit up as they go along, and I
have *SEEN* some of their plans under NDA, I'll just outright call bullshit.
I was trying to be nice about it.  If you're making stuff up as you go
along that's likely why you're struggling.  Modifying plans is one thing.
Not having any is another thing entirely.


If plans are important to you, then I suggest that you write to your 
local congressman and express your concern.


Otherwise plans are stymied by a severely faltering economy, a 
failing banking system, and unpredictable government response.


At this point we should be happy that Sun has reiterated its support 
for OpenSolaris and ZFS during these difficult times.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up ZFS snapshots

2009-02-21 Thread Miles Nordin
 da == David Abrahams d...@boostpro.com writes:
 ic == Ian Collins i...@ianshome.com writes:

da disadvantage of the recommended approaches shows up when you
da start taking advantage of ZFS to clone filesystems without
da replicating storage.  Using zfs send will avoid representing
da the data twice

Two or three people wanted S3 support, but IMVHO maybe S3 is too
expensive and is better applied to getting another decade out of
aging, reliable operating systems, and ZFS architecture should work
towards replacing S3 not toward pandering to it.

Many new ZFS users are convinced to try ZFS because they want to back
up non-ZFS filesystems onto zpool's because it's better than tape, so
that's not a crazy idea.

It's plausible to want one backup server with a big slow pool,
deliberately running a Solaris release newer than anything else in
your lab.  Then have tens of Solaris's of various tooth-length 'zfs
send'ing from many pools toward one pool on the backup server.  The
obvious way to move a filesystem rather than a pool from older Solaris
to newer, is 'zfs send | zfs recv'.
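
For example (host and pool names invented):

  # on the backup server, once per client (recv -d needs the target to exist):
  zfs create -p bigpool/lab/hostA

  # on each older box:
  zfs snapshot -r tank@2009-02-21
  zfs send -R tank@2009-02-21 | ssh backuphost zfs recv -F -d bigpool/lab/hostA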

The obvious problem: this doesn't always work.

The less obvious problem: how do you restore?  It's one thing to say,
``I want it to always work to zfs send from an older system to a
newer,'' which we are NOT saying yet.  To make restore work, we need
to promise more: ``the format of the 'zfs send' stream depends only on
the version number of the ZFS filesystem being sent, not on the zpool
version and not on the build of the sending OS.''  That's a more
aggressive compatibility guarantee than anyone's suggested so far,
never mind what we have.  

At least it's more regression-testable than the weaker compatibility
promises: you can replay ('zfs recv') a hundred stored test streams
from various old builds into the system under test, then 'zfs send'
them back to simulate a restore, and, modulo some possible headers you
could strip off, they should be bit-for-bit identical coming out as
going in.  The zpool becomes a non-fragile way of storing fragile
'zfs send' streams.
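
Sketched out, the round trip for one stored stream might look like
this (file and dataset names hypothetical):

  zfs recv -d testpool < stream-from-b70.zfs       # replay a stream captured on an old build
  zfs send testpool/somefs@s1 > stream-resent.zfs  # send it back out of the system under test
  cmp stream-from-b70.zfs stream-resent.zfs        # bit-for-bit, modulo any stripped headers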

And to make this comparable in trustworthiness to pre-ZFS backup
systems, we need a THIRD thing---a way to TEST the restore without
disrupting the old-Solaris system in production, a restore-test we are
convinced will expose the problems we know 'zfs recv' sometimes has,
including lazy-panic problems---and I think the send|recv architecture
has painted us into a corner there, since gobs of kernel code are
involved in receiving streams, so there's no way to fully test a recv
other than to make some room for it, recv it, then 'zfs destroy'.

so... yeah.  I guess the lack of 'zfs send' stream compatibility does
make my answer ``just use another zpool for backup.  Tape's going out
of fashion anyway'' into shit.  And when you add compatibility
problems to the scenario, storing backups in a zpool rather than in
'zfs send' stream format no longer resolves the problem I raised before
with the lack of a recv-test.  I guess the only thing we really have for
backup is rsync --inplace --no-whole-file.
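
In other words, something like this, with the backup pool snapshotted
after each run (paths and names made up); --inplace plus
--no-whole-file makes rsync rewrite only the changed blocks of each
file, so successive snapshots on the backup pool stay small:

  rsync -a --inplace --no-whole-file /export/home/ /backup/home/
  zfs snapshot backup/home@2009-02-21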

ic I wouldn't have any serious concerns about backing up
ic snapshots provided the stream version was on the tape label
ic and I had a backup of the Solaris release (or a virtual
ic machine) that produced them.

I would have serious concerns doing that because of the numerous other
problems I always talk about that you haven't mentioned.

But, I don't wish for 'zfs send' to become a backup generator.  I like
it as is.  Here are more important problems:

 * are zfs send and zfs recv fast enough now, post-b105?

 * endian-independence (fixed b105?)

 * toxic streams that panic the receiving system (AFAIK unfixed)

though, if I had to add one more wish to that list, the next one would
probably be more stream format compatibility across Solaris releases.

Understand the limitations of your VM approach.  Here is the way you
get access to your data through it:

 * attach a huge amount of storage to the VM and create a zpool on it
   inside the VM

 * pass the streams through the VM and onto the pool, hoping none are
   corrupt or toxic since they're now stored and you no longer have
   the chance to re-send them.  but nevermind that problem for now.

 * export the pool, shut down the VM

   [this is the only spot where backward-compatibility is guaranteed,
and where it seems trustworthy so far]

 * import the pool on a newer Solaris

 * upgrade the pool and the filesystems in it

so, you have to assign disks to the VM, zpool export, zpool import.
If what you're trying to restore is tiny, you can make a file vdev.
And if it's Everything, then you can destroy the production pool,
recreate it inside the VM, and so on.  No problem.  But what if you're
trying to restore something that uses 12 disks worth of space on your
48-disk production pool?  You have free space for it on the production
pool, but (1) you do not have 12 unassigned disks 

Re: [zfs-discuss] [basic] zfs operations on zpool

2009-02-21 Thread Harry Putnam
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:

I created a receptacle with zpool
zpool create zbk raidz1 c5t0d0 c5t1d0 c5t2d0

(With compression turned on)

As seen by zfs
 zfs list zbk
  NAME   USED  AVAIL  REFER  MOUNTPOINT
  zbk106G  50.5G   106G  /zbk

As seen by zpool
 zpool list zbk
  NAME   SIZE   USED  AVAILCAP  HEALTH  ALTROOT
  zbk238G   158G  79.6G66%  ONLINE  -

BobF wrote:
 Other than when dealing with the top zfs pools, and zfs filesystems,
 the answer to this is yes.  If this was not the case, then the system
 would not be very useful.

It sure wouldn't.  That was my first take, but like I said, I wasn't
that confident about being right.

It's looking like I botched the job already.  My intent was to create
one top-level zfs filesystem in the pool.  But after your helpful and
explanatory reply I see I carelessly mixed things up, so that I used
`zfs create' where I should have used mkdir:

   zfs list -r zbk
  NAME  USED  AVAIL  REFER  MOUNTPOINT
  zbk   106G  50.5G   106G  /zbk
  zbk/mob1  101K  50.5G  28.0K  /zbk/mob1
  zbk/mob1/acronis 49.3K  50.5G  25.3K  /zbk/mob1/acronis
  zbk/mob1/acronis/022009  24.0K  50.5G  24.0K  /zbk/mob1/acronis/022009
  zbk/mob1/ghost   24.0K  50.5G  24.0K  /zbk/mob1/ghost

   ls -lR /zbk|grep '^/'
  /zbk:
  /zbk/chub:
  /zbk/chub/ghost:
  /zbk/chub/ghost/021909:
  /zbk/harvey:
  /zbk/harvey/ghost:
  /zbk/harvey/ghost/022009:
  /zbk/mob1:
  /zbk/mob1/acronis:
  /zbk/mob1/acronis/022009:
  /zbk/mob1/ghost:

  zfs rename zbk/hosts/mob1022009-full.tib zbk/hosts/mob1/022009-full.tib

Probably the wrong move, now that it's clear how I screwed this up.

I'm thinking something like this might clean things up?

 cd /rbk

  Starting with:
  ls -F .
  chub/  harvey/  mob1/  mob1MyBackup.tib

 zfs destroy -r mob1

 mkdir -p mob1/acronis/022009/  mob1/ghost

 mv  mob1MyBackup.tib mob1/acronis/022009/mob1_01.tib

Is this about right... since there are no actual files under the zfs
file system `mob1/', I can just get rid of it as shown above.  And
create the hierarchy I intended with standard tools mkdir and mv?

I think I'll wait for a reply before I do any of that...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up ZFS snapshots

2009-02-21 Thread Ian Collins

Miles Nordin wrote:

ic I wouldn't have any serious concerns about backing up
ic snapshots provided the stream version was on the tape label
ic and I had a backup of the Solaris release (or a virtual
ic machine) that produced them.

I would have serious concerns doing that because of the numerous other
problems I always talk about that you haven't mentioned.

But, I don't wish for 'zfs send' to become a backup generator.  I like
it as is.  Here are more important problems:

 * are zfs send and zfs recv fast enough now, post-b105?

 * endian-independence (fixed b105?)

 * toxic streams that panic the receiving system (AFAIK unfixed)

  
We should see a resolution for this soon; I have a support case 
open and I now have a reproducible test case.  I haven't been able to 
panic any recent SXCE builds with the streams that panic Solaris 10.



though, if I had to add one more wish to that list, the next one would
probably be more stream format compatibility across Solaris releases.

  
Luckily for us, they haven't broken it yet on a production release.  
They would give themselves a massive headache if they did.  One point 
that has been overlooked is replication: I'm sure I'm not alone in 
sending older stream formats to newer staging servers.
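
For example, the sort of replication in question, sketched with
invented names (-I carries the intermediate snapshots):

  zfs snapshot tank/mail@2009-02-21
  zfs send -I tank/mail@2009-02-14 tank/mail@2009-02-21 | \
      ssh staging zfs recv -F backup/mail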



Understand the limitations of your VM approach.  Here is the way you
get access to your data through it:

 * attach a huge amount of storage to the VM and create a zpool on it
   inside the VM

  

I currently use iSCSI.


 * pass the streams through the VM and onto the pool, hoping none are
   corrupt or toxic since they're now stored and you no longer have
   the chance to re-send them.  but nevermind that problem for now.

  

I receive the stream as well as archive it.


 * export the pool, shut down the VM

   [this is the only spot where backward-compatibility is guaranteed,
and where it seems trustworthy so far]

 * import the pool on a newer Solaris

 * upgrade the pool and the filesystems in it

  

Not necessary.


so, you have to assign disks to the VM, zpool export, zpool import.
If what you're trying to restore is tiny, you can make a file vdev.
And if it's Everything, then you can destroy the production pool,
recreate it inside the VM, and so on.  No problem.  But what if you're
trying to restore something that uses 12 disks worth of space on your
48-disk production pool?  You have free space for it on the production
pool, but (1) you do not have 12 unassigned disks sitting around nor
anywhere to mount all 12 at once, and (2) you do not have twice enough
free space for it on the production pool so that you could use iSCSI
or a file vdev on NFS; you only have one times enough space for it.

  
I don't do this for handy backups.  We only do this to archive a 
filesystem.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [basic] zfs operations on zpool

2009-02-21 Thread Bob Friesenhahn

On Sat, 21 Feb 2009, Harry Putnam wrote:

Probably the wrong move, now that it's clear how I screwed this up.

I'm thinking something like this might clean things up?

cd /rbk

 Starting with:
 ls -F .
 chub/  harvey/  mob1/  mob1MyBackup.tib

zfs destroy -r mob1

mkdir -p mob1/acronis/022009/  mob1/ghost

mv  mob1MyBackup.tib mob1/acronis/022009/mob1_01.tib

Is this about right... since there are no actual files under the zfs
file system `mob1/', I can just get rid of it as shown above.  And
create the hierarchy I intended with standard tools mkdir and mv?


You might want to think a bit more before you get started.  While 
there is an implicit usable filesystem at the pool root ('/rbk'), 
there is considerable value in creating subordinate filesystems 
using 'zfs create', because then you will be able to manage them much 
better with different settings such as block sizes, mount points, 
quotas, and other goodies that ZFS provides.  If the directories are 
for users, then being able to set a quota is quite useful, since some 
users need a firewall to ensure that they don't use all of the disk 
space.


Note that you can set the mountpoint for any ZFS filesystem via the 
mountpoint property (see zfs manual page) and this will cause that 
filesystem to appear via the path you specify.  Don't feel that the 
name of the pool needs to drive your directory hierarchy.  In fact, it 
is wise if the pool name is not part of any of the paths used.  By 
using the mountpoint property you can create any number of mounted 
directories directly off of root ('/'), or under any other directory. 
For example, you can easily set the mount path to /mydir.
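
For example (names purely illustrative):

  zfs create rbk/home
  zfs create rbk/home/harry
  zfs set quota=20g rbk/home/harry           # the per-user firewall mentioned above
  zfs set mountpoint=/export/home rbk/home   # children then mount under /export/home,
                                             # independent of the pool name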


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up ZFS snapshots

2009-02-21 Thread David Abrahams

on Sat Feb 21 2009, Miles Nordin carton-AT-Ivy.NET wrote:

 da == David Abrahams d...@boostpro.com writes:
 ic == Ian Collins i...@ianshome.com writes:

 da disadvantage of the recommended approaches shows up when you
 da start taking advantage of ZFS to clone filesystems without
 da replicating storage.  Using zfs send will avoid representing
 da the data twice

 Two or three people wanted S3 support, 

Amazon S3 support directly in ZFS?  I'd like that, but I'm not sure what
it looks like.  There are already tools that will send / receive ZFS to
Amazon S3.  Is there something you can only do well if you own the
filesystem code?
 
 but IMVHO maybe S3 is too expensive and is better applied to getting
 another decade out of aging, reliable operating systems, and ZFS
 architecture should work towards replacing S3 not toward pandering to
 it.

Replacing S3?

 Many new ZFS users are convinced to try ZFS because they want to back
 up non-ZFS filesystems onto zpool's because it's better than tape, so
 that's not a crazy idea.

Not crazy, unless you need to get the backups off-site.

 It's plausible to want one backup server with a big slow pool,
 deliberately running a Solaris release newer than anything else in
 your lab.  Then have tens of Solaris's of various tooth-length 'zfs
 send'ing from many pools toward one pool on the backup server.  The
 obvious way to move a filesystem rather than a pool from older Solaris
 to newer, is 'zfs send | zfs recv'.

 The obvious problem: this doesn't always work.

Because they might break the send/recv format across versions.

 The less obvious problem: how do you restore?  It's one thing to say,
 ``I want it to always work to zfs send from an older system to a
 newer,'' which we are NOT saying yet.  To make restore work, we need
 to promise more: ``the format of the 'zfs send' stream depends only on
 the version number of the ZFS filesystem being sent, not on the zpool
 version and not on the build of the sending OS.''  That's a more
 aggressive compatibility guarantee than anyone's suggested so far,
 never mind what we have.  

Sure.  But maybe send/recv aren't the right tools for this problem.  I'm
just looking for *a way* to avoid storing lots of backup copies of
cloned filesystems; I'm not asking that it be called send/recv.

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Confused about zfs recv -d, apparently

2009-02-21 Thread David Dyer-Bennet
First, it fails because the destination directory doesn't exist.  Then it
fails because it DOES exist.  I really expected one of those to work.  So,
what am I confused about now?  (Running 2008.11)

# zpool import -R /backups/bup-ruin bup-ruin
# zfs send -R z...@bup-20090222-054457utc | zfs receive -dv
bup-ruin/fsfs/zp1
cannot receive: specified fs (bup-ruin/fsfs/zp1) does not exist
# zfs create bup-ruin/fsfs/zp1
# zfs send -R z...@bup-20090222-054457utc | zfs receive -dv
bup-ruin/fsfs/zp1
cannot receive new filesystem stream: destination 'bup-ruin/fsfs/zp1' exists
must specify -F to overwrite it

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [basic] zfs operations on zpool

2009-02-21 Thread Harry Putnam
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:

 You might want to think a bit more before you get started.  While
 there is an implicit usable filesystem at the pool root ('/rbk'),
 there is considerable value in creating subordinate filesystems
 using 'zfs create', because then you will be able to manage them much
 better with different settings such as block sizes, mount points,
 quotas, and other goodies that ZFS provides.  If the directories are
 for users, then being able to set a quota is quite useful, since some
 users need a firewall to ensure that they don't use all of
 the disk space.

Ahh I see.

The users are not real users... just my home LAN connecting to back up
other machines onto the zfs pool.  But I see your point.  And the whole
thing is experimental at this point (I'm running zfs from an OpenSolaris
install inside VMware on Windows XP, hoping to find out some of the
gotchas and good practices before putting a real zfs server into
operation on the home LAN).

I think I will scrub this setup, leaving zbk/ as the main pool, then
create zfs filesystems like:

 zbk/HOST1
 zbk/HOST2
 zbk/HOST3
 (etc)
 zbk/misc

And set the HOST[123]/ and pub/ as the CIFS shares, instead of the top
level.  That would give quite a bit more granularity... maybe I'll
learn a little more this way too.
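
Roughly, assuming the in-kernel CIFS service (sharesmb) rather than
Samba, with placeholder share names:

  zfs create zbk/HOST1
  zfs create zbk/HOST2
  zfs create zbk/HOST3
  zfs create zbk/misc
  zfs set sharesmb=name=host1 zbk/HOST1     # and likewise for the others
  zfs set quota=100g zbk/HOST1              # optional, per the earlier quota suggestion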

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss