Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-13 Thread BJ Quinn
I was holding out for the spring release myself, and finally got tired of waiting.  
I've got some new servers to build.

I don't believe you've missed anything.  As I'm sure you know, it was originally 
officially 2010.02, then officially 2010.03, then rumored to be .04, sort of 
leaked as .05, semi-officially .06/.1H, and when that last one passed, even the 
rumor mill went pretty well dead.  The best I can find now is a rumor of Q4 
(although there was some discussion as to whether that meant calendar Q4 or 
Oracle's fiscal Q4, which would put it a year away).  At any rate, I'm done 
waiting on the new release, and on principle I'm not going to use a development 
release in a real-world environment.  I don't care what condition the code is 
in; if Oracle won't declare it a release, then I can't declare it one to my 
clients either.

FYI, 2008.11 doesn't appear to have this problem.  I've done some testing that 
reliably broke 2009.06, and so far 2008.11 has passed it.  That's important to 
me, since I need the zfs_write_limit_override setting, which isn't available in 
2008.05.
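
For reference, on the builds that have it, that tunable just goes in /etc/system; 
the number below is only an example value (it's in bytes):

set zfs:zfs_write_limit_override = 402653184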

So for me it looks like 2008.11 until 2010.Unicorn comes out or BTRFS gets 
deduplication (or maybe even if not).


Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-12 Thread BJ Quinn
I'm actually only running one at a time.  It is recursive / incremental (and 
hundreds of GB), but it's only one at a time.  Were there still problems in 
2009.06 in that scenario?

Does 2008.11 have these problems?  2008.05 didn't, and I'm considering moving 
back to that rather than using a development build.



Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-12 Thread BJ Quinn
Yeah, it's just that I don't think I'll be allowed to put up a dev version, but 
I could probably get away with putting up 2008.11, if it doesn't have the same 
problems with zfs send/recv.  Does anyone know?


Re: [zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-12 Thread BJ Quinn
Actually, my current servers are 2008.05, and I noticed the problems with 
2009.06 BEFORE putting it up on the new servers, so my pools aren't too new to 
go to 2008.11; I'd actually be upgrading from 2008.05.

I do not have paid support, but it's just not going to go over well with the 
client to use a development build (especially if something goes wrong).

I'd really like to use 2008.11 if someone can confirm that the zfs send/recv 
hangs were introduced AFTER 2008.11.  I'm in the process of trying it myself, 
but since it's intermittent, I'd feel better if someone knew when the problems 
were introduced.


[zfs-discuss] zfs send/recv hanging in 2009.06

2010-07-09 Thread BJ Quinn
I have a couple of systems running 2009.06 that hang on relatively large zfs 
send/recv jobs.  With the -v option, I see the snapshots coming across, and at 
some point the process just pauses, IO and CPU usage go to zero, and it takes a 
hard reboot to get back to normal.  The same script running against the same 
data doesn't hang on 2008.05.

There are maybe 100 snapshots, 200GB of data total.  Just trying to send to a 
blank external USB drive in one case, and in the other, I'm restoring from a 
USB drive to a local drive, but the behavior is the same.
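
For concreteness, the job is essentially just the following (pool and snapshot 
names here are placeholders):

zfs send -Rv datapool/shares@nightly20100709 | zfs recv -dvF usbpool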

I see that others have had a similar problem, but there don't seem to be any 
answers:

https://opensolaris.org/jive/thread.jspa?messageID=384540
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg34493.html
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg37158.html

I'd like to stick with a released version of OpenSolaris, so I'm hoping that 
the answer isn't to switch to the dev repository and pull down b134.


[zfs-discuss] combining series of snapshots

2010-06-08 Thread BJ Quinn
I have a series of daily snapshots against a set of data going back several 
months, but then the server crashed.  In a hurry, we set up a new server and 
just copied over the live data without the snapshots (since zfs send/recv was 
too slow and would have taken hours and hours to restore).  We've now run on 
the replacement server for a year or so, and it's time to upgrade to a new, 
faster server.

As part of building up our newest server, I wanted to combine the older 
snapshots with the daily snapshots generated on the server that is currently 
running.  I was wondering what the proper way to do this might be.

I was considering the following process for building up the new server 
(sketched as commands just below the list):

1.  Copy over all the snapshots from a backup of the server that crashed 
(11/01/2008 - 7/14/2009) using zfs send/recv
2.  Copy over the oldest snapshot from the current server (7/15/2009) using 
rsync so that the data from that snapshot is the live filesystem data on the 
new server.
3.  Take a snapshot on the new server and call it the same thing as the 
snapshot that I copied the data from (i.e. datap...@nightly20090715)
4.  Do an incremental zfs send/recv from 7/15/2009 to today from the current 
server.
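
In command form, the plan would look roughly like this.  All pool, host, and 
snapshot names are placeholders, and step 4 is exactly the part I'm unsure ZFS 
will accept, since incremental receives match snapshots by their internal 
identity rather than by name:

# 1. full send of the old snapshot history from the crashed server's backup
zfs send -R oldbackup/data@nightly20090714 | ssh newserver zfs recv -dv newpool
# 2. lay the oldest snapshot from the current server on top as the live data
rsync -a --delete --inplace --no-whole-file /currentpool/data/.zfs/snapshot/nightly20090715/ newserver:/newpool/data/
# 3. snapshot it on the new server under the same name the current server uses
ssh newserver zfs snapshot newpool/data@nightly20090715
# 4. incremental send of everything since then from the current server
zfs send -R -I @nightly20090715 currentpool/data@nightly20100608 | ssh newserver zfs recv -dv newpool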

I don't know if this would work, or if it would leave me in a consistent state 
even if I did make it work.  Any suggestions?


Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread BJ Quinn
Is there any way to merge them back together?  I really need the history data 
going back as far as possible, and I'd like to be able to access it all from 
the same place.  I mean, worst case scenario, I could rsync the contents of 
each snapshot to the new filesystem and take a snapshot for each one, but 
surely there's a better way than that?


Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread BJ Quinn
Not exactly sure how to do what you're recommending -- are you suggesting I go 
ahead with using rsync to bring in each snapshot, but bring it into a clone of 
the old set of snapshots?  Is there another way to bring my recent stuff into 
the clone?

If so, then as for the storage savings, I learned a long time ago that rsync 
--inplace --no-whole-file has the same effect - it makes sure to only touch 
blocks that changed, so in theory I ought to be able to rsync over my snapshots 
intelligently without wasting any more space than they took up to begin with.
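
Something like this loop is what I have in mind (dataset and snapshot names are 
made up; --delete is there so that files removed between snapshots go away too):

for snap in nightly20081101 nightly20081102 nightly20081103; do
    rsync -a --delete --inplace --no-whole-file /oldpool/data/.zfs/snapshot/$snap/ /newpool/data/
    zfs snapshot newpool/data@$snap
done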

Not sure if that's what you meant.

Thanks!


Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread BJ Quinn
Ugh, yeah, I've learned by now that you always want at least that one snapshot 
in common to keep continuity in the dataset.  Wouldn't I be able to recreate 
effectively the same thing by rsync'ing over each snapshot one by one?  It may 
take a while, and I'd have to use the --inplace and --no-whole-file switches to 
make sure I only overwrite changed blocks when bringing over each snapshot (so 
I don't mark every block as changed and waste all sorts of space), but 
shouldn't that work, at least?  I'd hate to have to resort to the two-data-sets 
approach.


Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread BJ Quinn
In my case, snapshot creation time and atime don't matter.  I think rsync can 
preserve mtime, though; I'll have to double check that.  (ctime is set by the 
filesystem itself, so I doubt any tool can carry it over.)

I'd love to enable dedup.  I'm trying to stay on stable releases of OpenSolaris, 
for whatever that's worth, and I can't seem to find a download link for 
2010.06.  :)

At any rate, thanks for the help!  I tried rsync'ing a few snapshots, and it 
doesn't look like it will take as long as I thought.  At first I feared it 
might run for weeks!


Re: [zfs-discuss] PSARC recover files?

2009-11-10 Thread BJ Quinn
Say I end up with a handful of unrecoverable bad blocks that just so happen to 
be referenced by ALL of my snapshots (in some file that's been around forever). 
 Say I don't care about the file or two in which the bad blocks exist.  Is 
there any way to purge those blocks from the pool (and all snapshots) without 
having to restore the whole pool from backup?


Re: [zfs-discuss] PSARC recover files?

2009-11-10 Thread BJ Quinn
I believe it was physical corruption of the media.  The strange thing is that 
the last time it happened, the bad blocks also made it over to my backup 
server, which is replicated with SNDR...

And yes, it IS read only, and a scrub will NOT actively clean up corruption in 
snapshots.  It will DETECT corruption, but if it's unrecoverable, that's that.  
It's unrecoverable.  If there's not enough redundancy in the pool, I'm ok with 
the data not being recoverable.  But wouldn't there be a way to purge out the 
bad blocks if, for example, they were confined to a single bad file out of 
millions of files, and I didn't care about the file in question?  I don't want 
to recover the file; I want a working version of my pool+snapshots minus the 
tiny bit that was obviously corrupt.

Barring another solution, I'd have to take the pool in question, delete the bad 
file, and delete ALL the snapshots.  Then I'd restore the old snapshots from 
backup to another pool, and copy the current data from the problem pool over to 
the new pool.  I can get most of my snapshots back that way, with the best 
known current data sitting on top as the active data set.  The problem is that 
with hundreds of snapshots plus compression, zfs send/recv takes over 24 hours 
to restore a full backup like that to a new storage device.  Last time this 
happened, I just had to say goodbye to all my snapshots and deal with it, all 
over a couple of kilobytes of temp files.


[zfs-discuss] How to purge bad data from snapshots

2009-11-09 Thread BJ Quinn
So, I had a fun ZFS learning experience a few months ago.  A server of mine 
suddenly dropped off the network, or so it seemed.  It was an OpenSolaris 
2008.05 box serving up Samba shares from a ZFS pool, but ZFS noticed too many 
checksum errors and decided it was time to take the pool down to save the 
(apparently) dying disk from further damage.  That seemed inconvenient at the 
time, but in hindsight it's a cool feature.  I haven't actually found any 
problems with the drive (an SSD), which has worked fine ever since.  Bit rot?  
Power failure (we had a lot of those for a while)?  Who knows.  At first I was 
afraid my ZFS pool had corrupted itself, until I realized that this was a 
unique feature of ZFS actually protecting me from further damage rather than 
ZFS itself being the problem.

At any rate, in this case the corruption managed to make it over to my backup 
server, replicated with SNDR.  One of the corrupted blocks happened to be 
referenced by every single one of my daily snapshots going back nearly a year.  
I had no mirrored storage and copies set to 1.  Arguably a bad setup, I'm sure, 
but that's why I had a replicated server.  Anyway, I didn't care about the file 
referencing the corrupt block.  I would just as soon have deleted it, but it 
was still referenced by all the snapshots.  It was a crisis at the time, so I 
just switched over to my replicated server (in case the drive on the primary 
server actually was bad), deleted the files containing corrupt blocks, and then 
deleted all the snapshots so ZFS would quit unmounting the pool, just to get 
going again.

Things have been fine ever since, but I still wonder: is there something 
different I could have done to get rid of the corrupt blocks without losing all 
my snapshots?  (I could have restored them from backup, but it would have taken 
forever.)  I guess I could just do clones and then have the ability to delete 
stuff, but then I don't believe I'd be able to back the thing up; if I don't do 
incremental zfs send/recv, the backup takes over 24 hours since there are so 
many snapshots, and I wouldn't think clones work with incremental zfs send/recv 
(especially if you start deleting files willy-nilly).  Am I just missing 
something altogether, or is restoring from backup the only option?


Re: [zfs-discuss] PSARC 2009/571: ZFS deduplication properties

2009-10-23 Thread BJ Quinn
Does anyone know whether this means it will actually show up in SNV soon, or 
whether it will make 2010.02?  (On-disk dedup specifically.)


Re: [zfs-discuss] De-duplication before SXCE EOL ?

2009-09-11 Thread BJ Quinn
Personally I don't care about SXCE EOL, but what about before 2010.02?


Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-02-11 Thread BJ Quinn
Great, thanks!


Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-02-02 Thread BJ Quinn
Then what if I ever need to export the pool on the primary server and import it 
on the replicated server?  Will ZFS know which drives should be part of the 
stripe even though the device names may not be the same across servers?
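
In other words, would something like the following just work, with ZFS matching 
the disks by the labels it wrote on them rather than by device path (pool name 
is a placeholder)?

zpool export datapool     # on the primary
zpool import              # on the replica: scans devices and lists importable pools
zpool import datapool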


Re: [zfs-discuss] zfs send -R slow

2009-01-28 Thread BJ Quinn
>> What about when I pop in the drive to be resilvered, but right before I add
>> it back to the mirror, will Solaris get upset that I have two drives both
>> with the same pool name?
>
> No, you have to do a manual import.

So what you mean is that if Solaris/ZFS detects a drive with a pool name 
identical to that of a currently mounted pool, it will safely leave the mounted 
pool alone and simply not mount the same-named pool on the newly inserted drive?

Can I import a pool under another pool name?
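
That is, something like the following, if zpool import really does accept a new 
name as the second argument (pool names made up):

zpool import backuppool backuppool2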



[zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-01-28 Thread BJ Quinn
I have two servers set up, with two drives each.  The OS is stored on one 
drive, and the data on the second drive.  I have SNDR replication set up 
between the two servers for the data drive only.

I'm running out of space on my data drive, and I'd like to do a simple zpool 
add to grow the pool with a second data drive.  Of course, this will break my 
replication unless I can also get the second drive replicating.

What can I do?  Do I simply add a second data drive to both servers and format 
them as I did the first drive (space for bitmap partitions, etc.) and then do a 
command like the following --

sndradm -ne server1 /dev/rdsk/[2nd data drive s0] /dev/rdsk/[2nd data drive s1] \
        server2 /dev/rdsk/[2nd data drive s0] /dev/rdsk/[2nd data drive s1] \
        ip sync g [some name other than my first synced drive's group name]

(with s0 as the data slice and s1 as the bitmap slice on each server)

Is that all there is to it?  In other words, zfs will be happy as long as both 
drives are being synced?  And is this the way to sync them, independently, with 
a sndradm -ne command set up and running for each drive to be replicated, or 
is there a better way to do it?
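
On the ZFS side, I'm assuming the growing part itself is just this, with zpool 
add extending the stripe (device names are placeholders; zpool attach would 
instead mirror an existing drive):

zpool add datapool c1t2d0s0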

Thanks!


Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-01-28 Thread BJ Quinn
> The means to specify this is sndradm -nE ..., when 'E' is equal enabled.

Got it.  Nothing on the disk, nothing to replicate (yet).

> The manner in which SNDR can guarantee that two or more volumes are
> write-order consistent, as they are replicated, is to place them in the same
> I/O consistency group.

Ok, so my sndradm -nE command with g [same name as the first data drive's 
group] simply ADDS a set of drives to the group; it doesn't stop or replace the 
replication on the first set of drives, and in fact, by keeping the same group 
name, I keep the two sets of drives on each server write-order consistent.  
THEN I run my zpool add command to put the non-bitmap slice into my existing 
pool.  Do I have that all right?

Thanks!


Re: [zfs-discuss] zfs send -R slow

2009-01-26 Thread BJ Quinn
That sounds like a great idea, if I can get it to work.

I get how to attach a drive to a ZFS mirror, but for the life of me I can't 
figure out how to safely remove a drive from a mirror.
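
Is it just this pair of commands, with a resilver in between (device names made 
up)?

zpool attach backuppool c1t0d0 c2t0d0     # adds the second drive as a mirror and starts a resilver
zpool status backuppool                   # wait here until the resilver finishes
zpool detach backuppool c2t0d0            # drops that drive back out of the mirror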

Also, if I do remove the drive from the mirror and then pop it into some 
unsuspecting (and unrelated) Solaris box, will it just see a drive with a pool 
on it and let me import it?  And what about when I pop the drive back in to be 
resilvered: right before I add it back to the mirror, will Solaris get upset 
that I have two drives with the same pool name?


[zfs-discuss] zfs send -R slow

2009-01-22 Thread BJ Quinn
I'm using OpenSolaris with ZFS as a backup server.  I copy all my data from 
various sources onto the OpenSolaris server daily and run a snapshot at the end 
of each backup.  Using gzip-1 compression, mount -F smbfs, and the --inplace 
and --no-whole-file switches for rsync, I get efficient space usage, only 
storing the blocks that changed each day.  This way, I have a backup server 
containing all backups for all days, going back effectively indefinitely.  
Works great.

Of course, I also want to have something that can be rotated and/or taken 
offsite.  What I've done is use an internal drive in the backup server to 
actually receive and store all the backups and snapshots themselves.  Then at 
the end of the actual backup I run a snapshot, and then do a zfs send -R of my 
backup pool and all its snapshots to an external drive.  Not being able to 
trust what's on the drive (its contents could possibly have changed since last 
time I used it, and I want every snapshot on every external drive), I wipe the 
external drive clean and then have it receive the full contents of the 
non-incremental zfs send -R backuppool I mentioned above.
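
Concretely, the external-drive step looks roughly like this (pool, snapshot, 
and device names are placeholders):

zfs snapshot -r backuppool@nightly20090122
zpool destroy extpool                     # wipe whatever was on the external drive
zpool create extpool c5t0d0
zfs send -R backuppool@nightly20090122 | zfs recv -dvF extpool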

This works.  However, it's painfully slow.  I get the impression that zfs is 
de-compressing and then re-compressing the data instead of transferring it in 
its compressed state, and then when the incrementals start copying over (the 
snapshots themselves), it gets drastically slower.  The whole process works, 
but I'm thinking that when I start getting too many snapshots, it won't finish 
overnight and will run into the next day.

I don't want to just copy over the contents of my most recent snapshot on my 
backup server to the external drive then run a snapshot on the external drive, 
because I'd like each external drive to contain ALL the snapshots from the 
internal drive.

Is there any way to speed up a compressed zfs send -R?  Or is there some other 
way to approach this?  Maybe some way to do a bit-level clone of the internal 
drive to the external drive (the internal backup drive is not the same as the 
OS drive, so it could be unmounted), or SNDR replication or something?

Thanks!


Re: [zfs-discuss] ZFS space efficiency when copying files from

2008-12-01 Thread BJ Quinn
Oh.  Yup, I had figured this out on my own but forgot to post back.  --inplace 
accomplishes what we're talking about.  --no-whole-file is also necessary when 
copying files locally (not over the network): rsync defaults to transferring 
only changed blocks, but it turns that behavior off (implying --whole-file) 
when it isn't copying over the network.

Also, has anyone figured out a best-case blocksize to use with rsync?  I tried 
zfs get volblocksize [pool], but it just returns -.
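
In case it helps anyone else: as far as I can tell, volblocksize only applies 
to zvols, which is why it shows '-' for a filesystem; the filesystem-level knob 
is recordsize (128K by default).  So the matching experiment would be something 
like this (names made up):

zfs get recordsize backuppool/shares
rsync -a --inplace --no-whole-file --block-size=131072 /mnt/winshare/ /backuppool/shares/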


Re: [zfs-discuss] ZFS space efficiency when copying files from

2008-12-01 Thread BJ Quinn
Should I set that as rsync's block size?


Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread BJ Quinn
Here's an idea - I understand that I need rsync on both sides if I want to 
minimize network traffic.  What if I don't care about that - the entire file 
can come over the network, but I specifically only want rsync to write the 
changed blocks to disk.  Does rsync offer a mode like that?


[zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-17 Thread BJ Quinn
We're considering using an OpenSolaris server as a backup server.  Some of the 
servers to be backed up would be Linux and Windows servers, and potentially 
Windows desktops as well.  What I had imagined was that we could copy files 
over to the ZFS-based server nightly, take a snapshot, and only the changed 
blocks of the copied files would be stored on disk.

What I found was that you can take a snapshot, make a small change to a large 
file on a ZFS filesystem, take another snapshot, and you'll only store a few 
blocks extra.  However, if you copy the same file of the same name from another 
source to the ZFS filesystem, it doesn't conserve any blocks.  To a certain 
extent, I understand why - when copying a file from another system (even if 
it's the same file or a slightly changed version of the same file), the 
filesystem actually does write to every block of the file, which I guess marks 
all those blocks as changed.
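
A quick way to see the effect (paths and names made up): copy the "same" file 
in from another box, and the previous snapshot ends up holding a whole file's 
worth of blocks.

zfs snapshot backuppool/shares@before
cp /mnt/otherserver/bigfile /backuppool/shares/bigfile    # rewrites every block of the file
zfs snapshot backuppool/shares@after
zfs list -t snapshot -o name,used,referenced              # USED on @before grows by roughly the file size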

Is there any way to have ZFS notice that the blocks being copied from another 
system aren't actually different, or that only a few of them are?  Perhaps 
there's another way to copy the file across the network that only copies the 
changed blocks.  I believe rsync can do this, but some of the servers in 
question are Windows servers, and rsync/cygwin might not be an option.


Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-17 Thread BJ Quinn
Thank you both for your responses.  Let me see if I understand correctly - 

1.  Dedup is what I really want, but it's not implemented yet.

2.  The only other way to accomplish this sort of thing is rsync (in other 
words, don't overwrite the block in the first place if it's not different), and 
if I'm on Windows, I'll just have to go ahead and install rsync on my Windows 
boxes if I want it to work correctly.

Wmurnane, you mentioned there was a Windows-based rsync daemon.  Did you mean 
one other than the cygwin-based version?  I didn't know of any native Windows 
rsync software.


Re: [zfs-discuss] Segmentation fault / core dump with recursive

2008-10-14 Thread BJ Quinn
Well, I haven't solved everything yet, but I do feel better now that I realize 
it was setting mountpoint=none that caused the zfs send/recv to hang.  Allowing 
the default mountpoint setting fixed that problem.  I'm now trying with 
mountpoint=legacy, because I'd really rather leave it unmounted, especially 
during the backup itself, to prevent changes from happening while the 
incrementals are copying over, and also, in the end, to hopefully let me avoid 
using -F.

The incrementals (copying all the snapshots beyond the first one copied) are 
really slow, however.  Is there anything that can be done to speed that up?  
I'm using compression (gzip-1) on the source filesystem.  I wanted the backup 
to retain the same compression.  Can ZFS copy the compressed version over to 
the backup, or does it really have to uncompress it and recompress it?  That 
takes time and lots of CPU cycles.  I'm dealing with highly compressible data 
(at least 6.5:1).
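
For what it's worth, the overall shape of the job is below (snapshot name made 
up).  As far as I can tell, the send stream itself carries uncompressed data, 
so the receiving side has to recompress it, which would explain the CPU cost; 
setting the property on the target pool before the receive at least keeps 
gzip-1 on disk:

zfs set compression=gzip-1 backup
zfs send -R datapool/shares@BACKUP20081014 | zfs recv -dvF backup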


Re: [zfs-discuss] Segmentation fault / core dump with recursive

2008-10-13 Thread BJ Quinn
Ok, so I left the thumb drive trying to back up all weekend.  It got *most* of 
the first snapshot copied over, about 50MB, and that's it.  So I tried an 
external USB hard drive today, and it actually copied over the snapshots, but 
very slowly.  It copied the first snapshot (the full stream) at about 25MB/s 
(uncompressed).  Seems silly that it won't retain compression during the copy, 
which would speed up my transfer, but oh well.  However, once it got to the 
incrementals, it slowed down to about 1MB-5MB/s.  Then, once it got to the last 
incremental snapshot, it copied it over and then hung.  (By the way, when I say 
incrementals, I don't mean I'm doing a zfs send -i, just a zfs send -R to a 
clean drive.)  Now if I try to do a zfs list on my backup pool, it hangs and 
never comes back.  My zfs send -R command has now also hung.  iostat and the 
little green light on the external drive both show no further activity on the 
drive.

This is crazy.  I feel like I'm beta testing a brand new feature or something.  
Nothing works.  Does anybody actually use this?


Re: [zfs-discuss] Segmentation fault / core dump with recursive

2008-10-11 Thread BJ Quinn
readonly=on worked (at least with -F), but then it got the "error creating a 
mountpoint" I mentioned above.  So I took away readonly=on, and it got past 
that part; however, the snapshots past the first one take an eternity.  I left 
it overnight, and it managed to get from 21MB copied for the second snapshot to 
26MB.  Admittedly, I'm trying to back up to a (fast) thumb drive, but the first 
snapshot it copies over, which contains 13GB, doesn't take very long.  It slows 
to a crawl after the first snapshot is copied.  iostat shows the thumb drive at 
100% busy, but it's not actually doing much.  I'll try turning off atime, not 
using -F, and possibly also not auto-mounting the backup drive 
(mountpoint=legacy or mountpoint=none, I guess) so as to eliminate other 
possible sources of changed data.  Anyone know why it would go so slow after 
the first snapshot?


Re: [zfs-discuss] Segmentation fault / core dump with recursive

2008-10-10 Thread BJ Quinn
Ok, in addition to my "why do I have to use -F" post above, now I've tried it 
with -F, but after the first in the series of snapshots gets sent, it gives me 
"cannot mount '/backup/shares': failed to create mountpoint".


Re: [zfs-discuss] Segmentation fault / core dump with recursive

2008-10-10 Thread BJ Quinn
You've seen -F be necessary on some systems and not on others?

Also, was the mountpoint=legacy suggestion for my problem with not wanting to 
use -F, or for my "cannot create mountpoint" problem?  Or both?

If you use legacy mountpoints, does that mean that mounting the parent 
filesystem doesn't actually mount each sub-filesystem, and you have to mount 
them all individually?
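
That is, would every dataset need to be handled like this (paths made up)?

zfs set mountpoint=legacy backup/shares
mount -F zfs backup/shares /backup/shares     # or an entry in /etc/vfstab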


Re: [zfs-discuss] Segmentation fault / core dump with recursive

2008-10-09 Thread BJ Quinn
Yeah, -F should probably work fine (I'm trying it as we speak, but it takes a 
little while), but it makes me a bit nervous.  I mean, it should only be 
necessary if (as the error message suggests) something HAS actually changed, 
right?

So, here's what I tried: first of all, I set the backup FS to readonly.  That 
resulted in the same error message.  Strange; how could something have changed 
since the last snapshot if I CONSCIOUSLY didn't change anything or cd into it 
or anything, AND it was set to readonly?

Oh well, so I tried another idea - I had been setting compression to gzip-1 on 
my backup, but my source filesystem had compression=gzip-1 and recordsize=16k.  
So, I set both of those settings on my backup FS (and readonly OFF).  Now the 
only difference between my backup and my backup source should be the fact that 
they have different FS names (datapool/shares and backup/shares), but they 
would kinda have to, wouldn't they?
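
For the record, the property changes on the backup side were just these (using 
the backup/shares name mentioned above):

zfs set compression=gzip-1 backup/shares
zfs set recordsize=16k backup/shares
zfs set readonly=off backup/shares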

At any rate, I'm trying -F now, but that makes me a bit uncomfortable.  Why 
does zfs think something has changed?  Am I truly creating a backup that could 
be restored and won't have something screwed up either with some of the older 
snapshots or with the current, promoted version of the FS?  If something 
really has changed between snapshots, or the incrementals aren't copied over 
just right, I could end up with my best backup being all corrupted.  (My other 
backup methods, of course, contain only current data, not the whole series of 
snapshots.)


Re: [zfs-discuss] Segmentation fault / core dump with recursive send/recv

2008-10-08 Thread BJ Quinn
Ok, I'm taking a step back here.  Forgetting the incremental for a minute 
(which is the part causing the segmentation fault), I'm simply trying to use 
zfs send -R to get a whole filesystem and all of its snapshots.  I ran the 
following, after creating a compressed pool called backup:

zfs send -R datapool/shares@BACKUP20081008 | zfs recv -dv backup

datapool/shares has three snapshots - BACKUP081007, BACKUP20081008, and just 
BACKUP, in that age order.  However, the command above creates backup/shares 
and backup/shares@BACKUP081007, and the contents of backup/shares seem to be 
from that (the oldest) snapshot.  The newer (and explicitly specified) 
BACKUP20081008 snapshot doesn't get copied over as a snapshot, and its contents 
don't get copied into backup/shares as the current data.  Am I doing something 
wrong here?  Possibly having that additional, newer, generic BACKUP snapshot (I 
just wanted a snapshot to always represent the newest backup, so I created 
BACKUP20081008 and then BACKUP immediately afterwards) messed it up?  Can I not 
transfer over snapshots from a certain point backwards in time, or does zfs 
send -R require sending from the newest snapshot backwards, or am I off 
altogether?

Thanks!


Re: [zfs-discuss] Segmentation fault / core dump with recursive send/recv

2008-10-08 Thread BJ Quinn
Oh, and I had been doing this remotely, so I didn't notice the following error 
before:

receiving incremental stream of datapool/shares@BACKUP20081008 into backup/shares@BACKUP20081008
cannot receive incremental stream: destination backup/shares has been modified
since most recent snapshot

This is reported after the first snapshot, BACKUP081007, gets copied, and then 
it quits.  I don't see why it would have been modified.  I guess it's possible 
I cd'ed into the backup directory at some point during the send/recv, but I 
don't think so.  Should I set the readonly property on the backup FS or 
something?


Re: [zfs-discuss] Segmentation fault / core dump with recursive send/recv

2008-09-30 Thread BJ Quinn
Is there more information that I need to post in order to help diagnose this 
problem?


Re: [zfs-discuss] Segmentation fault / core dump with recursive send/recv

2008-09-30 Thread BJ Quinn
Please forgive my ignorance.  I'm fairly new to Solaris (a Linux convert), and 
although I recognize that Linux has the same concept of segmentation faults / 
core dumps, I believe my typical response to a segmentation fault was to 
upgrade the kernel, and that always fixed the problem (i.e. somebody else had 
filed the bug and fixed it before I got around to doing it myself).

So - I'm running stock OpenSolaris 2008.05.  Even if the bug has been fixed, I 
imagine the fix would require a Solaris kernel upgrade anyway, right?  Perhaps 
I could simply try that first?  Are the kernel upgrades stable?  I know for a 
while there, before the 2008.05 release, OpenSolaris just put out a new 
development build every two weeks.  I don't think I want to haphazardly upgrade 
to some random bi-weekly development build.  Are there actually stable kernel 
upgrades for OpenSolaris, and how would I go about upgrading if there are?
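
Is it just the IPS route, something along these lines?

pkg refresh
pkg image-update    # as I understand it, this builds a new boot environment with the updated packages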


Re: [zfs-discuss] Segmentation fault / core dump with recursive send/recv

2008-09-30 Thread BJ Quinn
True, but a search for "zfs segmentation fault" returns 500 bugs.  It's 
possible one of those is related to my issue, but it would take all day to find 
out.  If it's not flaky or unstable, I'd like to try upgrading to the newest 
kernel first, unless my Linux mindset is truly out of place here, or it's not 
relatively easy to do.  Are these kernels truly considered stable?  How would I 
upgrade?