Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-17 Thread Edward Ned Harvey
 NO, zfs send is not a backup.

Understood, but perhaps you didn't read my whole message.  Here, I will
spell out the whole discussion:

If you zfs send > somefile, it is well understood that there are two big
problems with this method of backup.  #1: If a single bit error is introduced
into the file, then the whole data stream is corrupt.  #2: If you want to
restore just a subset of the filesystem, you cannot.  The only option
available is to restore the whole filesystem.

Instead, it is far preferable to zfs send | zfs receive  ...  That is,
receive the data stream on external media as soon as you send it.  By
receiving the data stream onto external media, instead of just saving the
datastream as a file on external media ... You solve both of the above
problems.  Obviously this is only possible with external disks, and not
possible with tapes.
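
To make that concrete, here is a minimal sketch of the initial full backup done
that way (the pool, dataset and device names are only examples):

    # one-time: create a pool on the external disk (device name assumed)
    zpool create backup c9t0d0

    # snapshot the source and receive the stream directly into the backup pool
    zfs snapshot tank/data@2010-01-17
    zfs send tank/data@2010-01-17 | zfs receive backup/data

The result on the external disk is a browsable filesystem rather than an opaque
stream file, so individual files can be restored and the pool can be scrubbed.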

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up a ZFS pool

2010-01-17 Thread Edward Ned Harvey
  Personally, I use zfs send | zfs receive to an external disk.
 Initially a
  full image, and later incrementals.
 
 Do these incrementals go into the same filesystem that received the
 original zfs stream?

Yes.  In fact, I think that's the only way possible.  The end result is ... On
my external disk, I have a ZFS filesystem, with snapshots.  Each snapshot
corresponds to one incremental send|receive.

Personally, I like to start with a fresh full image once a month, and then do 
daily incrementals for the rest of the month.
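
The daily step is then just an incremental send between yesterday's and today's
snapshots; a minimal sketch (snapshot and dataset names are only illustrative):

    # daily: snapshot the source and send only the delta since the last snapshot
    zfs snapshot tank/data@2010-01-18
    zfs send -i tank/data@2010-01-17 tank/data@2010-01-18 | zfs receive backup/data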

There is one drawback:  If I have a 500 GB filesystem to back up, and I have
1 TB target media ...  Once per month, I have to zpool destroy the pool on the
target media before I can write a new full backup onto it.  This leaves a gap
where the backup has been destroyed and the new image has yet to be written.

To solve this problem, I have more than one external disk, and occasionally
rotate them.  So there's still another offline backup available if something
were to happen to my system during the window when the monthly backup is
being destroyed and rewritten.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-17 Thread Joerg Schilling
Toby Thain t...@telegraphics.com.au wrote:

  Yet it is used in ZFS flash archives on Solaris 10

 I can see the temptation, but isn't it a bit under-designed? I think  
 Mr Nordin might have ranted about this in the past...

Isn't flash cpio based and thus not prepared for the future? Cpio does not
support sparse files and is unable to archive files > 8 GB.

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home)  Jörg Schilling  D-13353 Berlin
        j...@cs.tu-berlin.de (uni)
        joerg.schill...@fokus.fraunhofer.de (work)
 Blog:  http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/  ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-17 Thread Joerg Schilling
Edward Ned Harvey sola...@nedharvey.com wrote:

  NO, zfs send is not a backup.

 Understood, but perhaps you didn't read my whole message.  Here, I will
 spell out the whole discussion:
...
 Instead, it is far preferable to zfs send | zfs receive  ...  That is,
 receive the data stream on external media as soon as you send it.  By
 receiving the data stream onto external media, instead of just saving the
 datastream as a file on external media ... You solve both of the above
 problems.  Obviously this is only possible with external disks, and not
 possible with tapes.

Here is the big difference: for professional backups, people still typically
use tapes, although tapes have become expensive.

I still believe that a set of compressed incremental star archives gives you
more features.

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home)  Jörg Schilling  D-13353 Berlin
        j...@cs.tu-berlin.de (uni)
        joerg.schill...@fokus.fraunhofer.de (work)
 Blog:  http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/  ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-17 Thread Thomas Burgess

 Cpio does not support sparse files and is unable to archive files > 8 GB.

 Jörg


I found this out the hard way last time I used it.  I was backing up all my
data from one system to another using cpio and I had a bunch of movies over
8GB (720p and 1080p mkv files).

None of them worked.  I never use cpio anymore.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up a ZFS pool

2010-01-17 Thread Richard Elling
On Jan 17, 2010, at 2:38 AM, Edward Ned Harvey wrote:

 Personally, I use zfs send | zfs receive to an external disk.
 Initially a
 full image, and later incrementals.
 
 Do these incrementals go into the same filesystem that received the
 original zfs stream?
 
 Yes.  In fact, I think that's the only way possible.  The end result is ... 
 On my external disk, I have a ZFS filesystem, with snapshots.  Each snapshot 
 corresponds to each incremental send|receive.
 
 Personally, I like to start with a fresh full image once a month, and then 
 do daily incrementals for the rest of the month.

This doesn't buy you anything. ZFS isn't like traditional backups.

 There is one drawback:  If I have 500G filesystem to backup, and I have 1Tb 
 target media ...  Once per month, I have to zpool destroy the target media 
 before I can write a new full backup onto it.  This leaves a gap where the 
 backup has been destroyed and the new image has yet to be written.

Just make a rolling snapshot. You can have different policies for destroying
snapshots on the primary and each backup tier.
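For example, something along these lines (dataset names and retention are
purely illustrative):

    # keep receiving incrementals into the same backup filesystem indefinitely,
    # and prune snapshots on each side according to its own policy
    zfs destroy tank/data@2009-12-17      # e.g. keep ~30 days on the primary
    zfs destroy backup/data@2009-07-17    # e.g. keep ~6 months on the backup disk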
 -- richard

 
 To solve this problem, I have more than one external disk, and occasionally 
 rotate them.  So there's still another offline backup available, if something 
 were to happen to my system during the moment when the backup was being 
 destroyed once per month.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up a ZFS pool

2010-01-17 Thread Gaëtan Lehmann


On 17 Jan 2010, at 11:38, Edward Ned Harvey wrote:


 Personally, I use zfs send | zfs receive to an external disk.  Initially a
 full image, and later incrementals.

 Do these incrementals go into the same filesystem that received the
 original zfs stream?

 Yes.  In fact, I think that's the only way possible.  The end result is ...
 On my external disk, I have a ZFS filesystem, with snapshots.  Each
 snapshot corresponds to each incremental send|receive.

 Personally, I like to start with a fresh full image once a month, and then
 do daily incrementals for the rest of the month.

 There is one drawback:  If I have 500G filesystem to backup, and I have 1Tb
 target media ...  Once per month, I have to zpool destroy the target media
 before I can write a new full backup onto it.  This leaves a gap where the
 backup has been destroyed and the new image has yet to be written.

 To solve this problem, I have more than one external disk, and occasionally
 rotate them.  So there's still another offline backup available, if
 something were to happen to my system during the moment when the backup was
 being destroyed once per month.



ZFS can check the pool and make sure that there is no error.
Running 'zpool scrub' on the two pools from time to time - let's say  
every month - should give you a similar level of protection without  
the need for a full backup.


Even when backing up with rsync+zfs snapshot, a fresh full image every month
may not be required.  An rsync run with the --checksum option every month may
be good enough.  It forces a read of the full data on both sides, but at least
it avoids the network transfer if the pools are on different hosts, and it
avoids increasing the space used by the snapshots.
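
For example (the host and paths are placeholders):

    # monthly: force a full read and checksum comparison on both sides,
    # without retransferring files that are actually unchanged
    rsync -a --checksum /tank/data/ backuphost:/backup/data/
    ssh backuphost zfs snapshot backup/data@2010-01-17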


Gaëtan


--
Gaëtan Lehmann
Biologie du Développement et de la Reproduction
INRA de Jouy-en-Josas (France)
tel: +33 1 34 65 29 66    fax: 01 34 65 29 09
http://voxel.jouy.inra.fr  http://www.itk.org
http://www.mandriva.org  http://www.bepo.fr



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-17 Thread Travis Tabbal
I've been having good luck with Samsung green 1.5TB drives.  I have had 1 DOA,
but I currently have 10 of them, so that's not so bad; in a purchase of that
size, I've had one bad drive from just about every manufacturer.  I've avoided
WD for RAID because of the error-handling behavior kicking drives out of
arrays, though I don't know if that's currently an issue.  And with Seagate's
recent record, I didn't feel confident in their larger drives.  I was concerned
about the 5400RPM speed being a problem, but I can read over 100MB/s from the
array, and 95% of my use is over a gigabit LAN, so they are more than fast
enough for my needs.

I just set up a new array with them, 6 in raidz2. The replacement time is high 
enough that I decided the extra parity was worth the cost, even for a home 
server. I need 2 more drives, then I'll migrate my other 4 from the older array 
over as well into another 6 drive raidz2 and add it to the pool. 

I have decided to treat HDDs as completely untrustworthy.  So when I get new
drives, I test them by creating a temporary pool in a mirror config and filling
the drives up by copying data from the primary array.  Then I do a scrub.  When
it's done, if there are no errors, and no other errors in dmesg, I wait a week
or so and do another scrub test.  I found a bad SATA hotswap backplane and a
bad drive this way.  There are probably faster ways, but this works for me.
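
Roughly, that burn-in looks like this (device names are just examples; the
scratch pool is destroyed afterwards):

    # temporary pool purely for testing the new disks
    zpool create testpool mirror c8t0d0 c8t1d0
    # fill it with real data from the primary pool, then verify every block
    rsync -a /tank/media/ /testpool/
    zpool scrub testpool
    zpool status -v testpool    # repeat the scrub a week or so later
    zpool destroy testpool      # when satisfied, reuse the disks for real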
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Recordsize...

2010-01-17 Thread Tristan Ball

Hi Everyone,

Is it possible to use send/recv to change the recordsize, or does each 
file need to be individually recreated/copied within a given dataset?


Is there a way to check the recordsize of a given file, assuming that
the filesystem's recordsize was changed at some point?


Also - Am I right in thinking that if a 4K write is made to a filesystem 
block with a recordsize of 8K, then the original block is read (assuming 
it's not in the ARC), before the new block is written elsewhere (the 
copy, from copy on write)? This would be one of the reasons that 
aligning application IO size and filesystem record sizes is a good 
thing, because where such IO is aligned, you remove the need for that 
original read?


Thanks,
Tristan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] opensolaris fresh install lock up

2010-01-17 Thread Thomas Burgess
I just installed OpenSolaris build 130, which I downloaded from genunix.  The
install went fine...and the first reboot after install seemed to work, but
when I powered down and rebooted fully, it locks up as soon as I log in.
Gnome is still showing the icon it shows when stuff hasn't finished
loading...is there any way I can find out why it's locking up and how to
fix it?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] opensolaris fresh install lock up

2010-01-17 Thread Jürgen Keil
 I just installed opensolaris build 130 which i
 downloaded from genunix.  The install went
 fine...and the first reboot after install seemed to
 work but when i powered down and rebooted fully, it
 locks up as soon as i log in.

Hmm, seems you're asking in the wrong forum.
Sounds more like a desktop or x-window problem
to me.  Why do you think this is a zfs problem?

 Gnome is still showing
 the icon it shows when stuff hasn't finished
 loadingis there any way i can find out why
 it's locking up and how to fix it?

Hmm, in the build 130 announcement you can find this:
( http://www.opensolaris.org/jive/thread.jspa?threadID=120631&tstart=0 )


13540 Xserver crashes and freezes a system installed with LiveCD on bld 130
http://defect.opensolaris.org/bz/show_bug.cgi?id=13540

After installation, the X server may crash and appears to not
be restarted by the GNOME Display Manager (gdm).

Work-around: None at this time.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recordsize...

2010-01-17 Thread Bob Friesenhahn

On Mon, 18 Jan 2010, Tristan Ball wrote:

Is there a way to check the recordsize of a given file, assuming that the 
filesystems recordsize was changed at some point?


This would be problematic since a file may consist of different size 
records (at least I think so).  If the record size was changed after 
the file was already created, then new/updated parts would use the new 
record size.


Also - Am I right in thinking that if a 4K write is made to a filesystem 
block with a recordsize of 8K, then the original block is read (assuming it's 
not in the ARC), before the new block is written elsewhere (the copy, from 
copy on write)? This would be one of the reasons that aligning application IO 
size and filesystem record sizes is a good thing, because where such IO is 
aligned, you remove the need for that original read?


This is exactly right.  There is a very large performance hit if the 
block to be updated is no longer in the ARC and the update does not 
perfectly align to the origin and size of the underlying block. 
Applications which are aware of this (and which expect the total 
working set to be much larger than available cache) could choose to 
read and write more data than absolutely required so that zfs does not 
need to read an existing block in order to update it.  This also 
explains why the l2arc can be so valuable, if the data then fits in 
the ARC.
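
For a fixed-record workload, that alignment is mostly a matter of setting the
dataset's recordsize before the data files are created; for example (the
dataset name is assumed):

    # match an application that does 8K reads/writes, e.g. a database
    zfs set recordsize=8k tank/db
    zfs get recordsize tank/db
    # note: the property only applies to files written after it is set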


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] opensolaris fresh install lock up

2010-01-17 Thread Thomas Burgess
 Hmm, seems you're asking in the wrong forum.
 Sounds more like a desktop or x-window problem
 to me.  Why do you think this is a zfs problem?


 Hmm, in the build 130 announcement you can find this:
 ( http://www.opensolaris.org/jive/thread.jspa?threadID=120631&tstart=0 )


 13540 Xserver crashes and freezes a system installed with LiveCD on bld 130
 http://defect.opensolaris.org/bz/show_bug.cgi?id=13540

 After installation, the X server may crash and appears to not
 be restarted by the GNOME Display Manager (gdm).

 Work-around: None at this time.


Yes, I am sorry, I asked in the wrong forum, but you still answered it for
me.  It is for sure this bug.
This is OK, I can do most of what I need via ssh.  I just wasn't sure if it
was a bug or if I had done something wrong...I had tried installing 2-3
times and it kept happening...it was driving me insane.

I can deal with it if it's something that will be fixed in 131 (which is
what the bug page seems to hint at).

Thanks again
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS automatic rollback

2010-01-17 Thread Rodney Lindner
Hi all,
I am running 
Solaris Express Community Edition snv_130 X86
   Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 14 December 2009
with 2 pools, rpool (installed as version 22) and brick (upgraded to version 
22).


Yesterday I had a scenario where I had a hard hang. I power-cycled the system
and all looked OK.

Later I noticed that some logfiles had been truncated (and the modify
timestamp had been set to 1 minute into the reboot), and that some files that
had been moved and renamed were back in their previous state (about 4 GB of
files).  These changes had been made 40-60 minutes before the hang, so I was
expecting that the writes would have been committed to disk.

It looks like some file systems have rolled back to a previous state (I thought
this would happen at the pool level).
I can find no logging that an old uberblock was used.

It looks like during the import we had to roll back to an old uberblock.

So my questions are:
1) Does this scenario make sense?
2) How long should it be before the writes are committed to disk?
3) Should this sort of recovery happen on a fs or pool basis?
4) Is this type of rollback logged anywhere?


Regards
Rodney
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] opensolaris fresh install lock up

2010-01-17 Thread Jürgen Keil
  in the build 130 announcement you can find this:
  13540 Xserver crashes and freezes a system installed with LiveCD on bld 130

 It is for sure this bug.  This is ok, i
 can do most of what i need via ssh.  I just
 wasn't sure if it was a bug or if i had done
 something wrongi had tried installing 2-3 times
 and it kept happening...was driving me insane.
 
 I can deal with it if it's something that
 will be fixed in 131 (which is what the bug page
 seems to hint at)

A part of the problem will be fixed in b131: CR 6913965
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6913965

But it seems the segfault from CR 6913157
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6913157
is not yet fixed in b131.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up a ZFS pool

2010-01-17 Thread Daniel Carosone
On Sun, Jan 17, 2010 at 08:05:27AM -0800, Richard Elling wrote:
  Personally, I like to start with a fresh full image once a month, and 
  then do daily incrementals for the rest of the month.
 
 This doesn't buy you anything. 

.. as long as you scrub both the original pool and the backup pool
with the same regularity.  sending the full backup from the source is
basically the same as a scrub of the source.

If a scrub ever finds an error on your backup pool, you will need to
re-send the snapshots as a full stream from scratch (or at least from
a snapshot from before where the bad blocks are referenced).  You
can't just copy the damaged file over into the top filesystem on the
backup media, because if you write to that filesystem you will no
longer be able to recv new relative snapshots into it (without
rolling back with zfs recv -F).
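
In other words, if the backup filesystem does get written to, something like
this is needed before the next incremental will apply (a sketch; names are
illustrative):

    # discard local changes on the backup by rolling back to its latest snapshot
    zfs rollback backup/data@2010-01-16
    # or let receive force the rollback itself
    zfs send -i tank/data@2010-01-16 tank/data@2010-01-17 | zfs receive -F backup/data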

  To solve this problem, I have more than one external disk, and
  occasionally rotate them. 

That's a good idea regardless, with one on-site to be used regularly,
and one off-site in case of theft/fire/etc.  If you rotate, say, once
a month, and can keep at least a month-and-a-day's worth of snapshots
on the primary pool, then you can fully catch up the month-old disk
after a changeover.

 ZFS isn't like normal backups

Hooray!

--
Dan.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up a ZFS pool

2010-01-17 Thread Bob Friesenhahn

On Mon, 18 Jan 2010, Daniel Carosone wrote:


.. as long as you scrub both the original pool and the backup pool
with the same regularity.  sending the full backup from the source is
basically the same as a scrub of the source.


This is not quite true.  The send only reads/verifies as much as it 
needs to send the data.  It won't read a redundant copy if it does not 
have to.  It won't traverse metadata that it does not have to.  A 
scrub reads/verifies all data and metadata.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-17 Thread Daniel Carosone
On Sun, Jan 17, 2010 at 05:31:39AM -0500, Edward Ned Harvey wrote:
 Instead, it is far preferable to zfs send | zfs receive  ...  That is,
 receive the data stream on external media as soon as you send it. 

Agree 100% - but..

.. it's hard to beat the convenience of a backup file format, for
all sorts of reasons, including media handling, integration with other
services, and network convenience. 

Consider then, using a zpool-in-a-file as the file format, rather than
zfs send streams.

Make a backup pool out of N conveniently-sized files, sparse to start
with if it suits you.  Go ahead and make it a raidz[23] pool, in case
some of your backup media goes bad. zfs recv your backups into this
pool.  zpool export the pool, to quiesce the file contents.
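
A rough sketch of that construction (file count, sizes, paths and pool names
are only examples):

    # create N sparse backing files, e.g. 8 x 100 GB
    for i in 0 1 2 3 4 5 6 7; do
        mkfile -n 100g /store/backup/vdev$i
    done

    # build a raidz2 pool out of the files and receive the backups into it
    zpool create filepool raidz2 /store/backup/vdev0 /store/backup/vdev1 \
        /store/backup/vdev2 /store/backup/vdev3 /store/backup/vdev4 \
        /store/backup/vdev5 /store/backup/vdev6 /store/backup/vdev7
    zfs send -R tank@2010-01-17 | zfs receive -d filepool

    # export to quiesce the backing files before copying them anywhere
    zpool export filepool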

If the files are themselves in zfs, snapshot that filesystem now for
good measure; then you can immediately bring your backup pool online
again, as well as have local reference copies of old backups, as
storage space allows. 

Send these files to whatever backup system/service you want to use.  
Handle them like any other large archive file format.

If you choose file sizes well, you can burn them to dvd's or write
them to tapes.  Multiple smaller files (rather than one file per
medium) can be easier to handle, but it's best to make multiple raidz
vdevs and arrange a file from each vdev per medium (so you can still
recover from a lost dvd/tape). 

If you have a non-zfs remote storage server, rsync works well to
update the remote copies incrementally after each backup cycle (and to
bring back older versions later if needed).  Lots of cloud backup
providers exist now that can do similar incremental replication.

gpg sign and encrypt the files before sending, if you need to. Some
day soon, zfs crypto will allow backups encrypted within the pool,
without defeating incremental replication of the files (as gpg will).
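
E.g. (the key id and filename are placeholders):

    # produces vdev0.gpg, signed and encrypted to the backup key
    gpg --sign --encrypt -r backups@example.org /store/backup/vdev0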

Another option is to mount these files directly on whatever generic NAS
devices you want to hold the backups, and import the pool from there.
I'd be wary, but if you were to consider that, fortunately there's a
good testing tool (zfs scrub) to help you be sure the NAS service is
reliable.  

You do need all (or at least most) of your files available in order to
do a restore.  Given that you should be preferring to use local
snapshots for small recovery jobs anyway, that's not really a burden
for a full recovery.  At this point, you get back all the snapshots
that were in the backup pool at the time it was saved.

The zfs send stream format used to not be committed, though it appears
this has recently changed.  It still has all the other drawbacks
previously noted. The zpool format is committed and does have a tested
and supported backwards compatibility/upgrade path. 

--
Dan.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up a ZFS pool

2010-01-17 Thread Daniel Carosone
On Sun, Jan 17, 2010 at 04:38:03PM -0600, Bob Friesenhahn wrote:
 On Mon, 18 Jan 2010, Daniel Carosone wrote:

 .. as long as you scrub both the original pool and the backup pool
 with the same regularity.  sending the full backup from the source is
 basically the same as a scrub of the source.

 This is not quite true.  The send only reads/verifies as much as it  
 needs to send the data.  It won't read a redundant copy if it does not  
 have to.  It won't traverse metadata that it does not have to.  A scrub 
 reads/verifies all data and metadata.

Sure, but I was comparing to not doing scrubs at all, since the more
dangerous interpretation is that always-incremental sends are fully
equivalent to the OP's method.  I was pointing out the lack of a
scrub-like side-effect in that method.  I shouldn't have glossed over
the differences with "basically".

If one was not doing scrubs, and switched from sending full streams
monthly to continuous replication streams, old data might go unread
and unreadable over time.   

We all agree scrubs and incrementals are the way to go, but don't do
either alone.

--
Dan.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-17 Thread Thomas Burgess



 What do you use instead?

tar cvf - /some/dir | (cd /some/other/dir; tar xf -)



 BTW: I recommend star and to use either the H=exustar or -dump option.

 Jörg


I will have to check it out.  I recently migrated to OpenSolaris from
FreeBSD and I have a LOT to learn.  I am really enjoying OpenSolaris so far.


So star is better than tar?
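
For reference, a guess at the kind of star invocation Jörg means: H=exustar and
-dump are the options he names, while f=, -sparse and the paths here are only
assumptions.

    # create an exustar-format archive in dump mode, with sparse-file support
    star -c f=/backup/home.star H=exustar -dump -sparse /export/home
    # list or extract it later
    star -t f=/backup/home.star
    star -x f=/backup/home.star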
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Root Mirror - Permission Denied

2010-01-17 Thread Brian Fitzhugh
I have a system that I'm trying to bring up with a mirrored rpool.  I'm using 
DarkStar's ZFS Root Mirror blog post as a guide 
(http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html).  

When I get to step 3 I execute:
pfexec prtvtoc /dev/rdsk/c7d0s2 | fmthard -s - /dev/rdsk/c7d1s2

I get:  fmthard:  Cannot open device /dev/rdsk/c7d1s2 - Permission denied

Any ideas as to what I might be doing wrong here?

Thanks, Brian
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Root Mirror - Permission Denied

2010-01-17 Thread Thomas Burgess
On Sun, Jan 17, 2010 at 6:57 PM, Brian Fitzhugh brian.fitzh...@gmail.com wrote:

 I have a system that I'm trying to bring up with a mirrored rpool.  I'm
 using DarkStar's ZFS Root Mirror blog post as a guide (
 http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html).

 When I get to step 3 I execute:
 pfexec prtvtoc /dev/rdsk/c7d0s2 | fmthard -s - /dev/rdsk/c7d1s2

 I get:  fmthard:  Cannot open device /dev/rdsk/c7d1s2 - Permission denied


you need to use a second pfexec after the |
like this:
 pfexec prtvtoc /dev/rdsk/c7d0s2 | pfexec fmthard -s - /dev/rdsk/c7d1s2


 Any ideas as to what i might be doing wrong here?

 Thanks, Brian
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Root Mirror - Permission Denied

2010-01-17 Thread Brian Fitzhugh
Got an answer emailed to me that said, "you need to use a second pfexec after
the |, like this: pfexec prtvtoc /dev/rdsk/c7d0s2 | pfexec fmthard -s -
/dev/rdsk/c7d1s2".

Thanks for the quick response email'er.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshot that won't go away.

2010-01-17 Thread Daniel Carosone
On Sun, Jan 17, 2010 at 06:21:45PM +1300, Ian Collins wrote:
 I have a Solaris 10 update 6 system with a snapshot I can't remove.

 zfs destroy -f snap  reports the device as being busy.  fuser doesn't
 show any process using the filesystem and it isn't shared.

Is it the parent snapshot for a clone?

--
Dan.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-17 Thread Travis Tabbal
HD154UI/1AG01118

They have been great drives for a home server. Enterprise users probably need 
faster drives for most uses, but they work great for me.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz2 import, some slices, some not

2010-01-17 Thread Thomas Burgess
 I am in the middle of converting a FreeBSD 8.0-Release system to OpenSolaris
 b130.

 In order to import my stuff, the only way I knew to make it work (from
 testing in virtualbox) was to do this: label a bunch of drives with an EFI
 label by using the opensolaris live cd, then use those drives in FreeBSD to
 create a zpool.

 This worked fine.  (Though I did get a warning in freebsd about GPT
 corruption; I assume this is due to differences in the efi label from what
 FreeBSD uses.)

 From here, I copied everything from my FreeBSD system to the newly made
 zpool.

 Then I exported the pool, installed opensolaris and imported the pool.

 All of this seems to have worked...but here is what I find strange...some
 of the drives in the pool show a slice and some don't.

 Does this even matter?  Can I fix it?

 Here's what I mean:

   pool: tank
  state: ONLINE
 status: The pool is formatted using an older on-disk format.  The pool can
         still be used, but some features are unavailable.
 action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the pool
         will no longer be accessible on older software versions.
  scrub: none requested
 config:

         NAME          STATE     READ WRITE CKSUM
         tank          ONLINE       0     0     0
           raidz2-0    ONLINE       0     0     0
             c5t4d0    ONLINE       0     0     0
             c3t5d0p0  ONLINE       0     0     0
             c4t4d0    ONLINE       0     0     0
             c3t2d0p0  ONLINE       0     0     0
             c4t6d0    ONLINE       0     0     0
             c5t6d0p0  ONLINE       0     0     0
             c4t7d0p0  ONLINE       0     0     0

 errors: No known data errors

 Thanks for any help.

I can't find any information on this using google...I'm sure I've done
something wrong...I'm wondering if I should just replace all the drives with
new drives...does anyone know of a quicker method to fix this?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] I can't seem to get the pool to export...

2010-01-17 Thread Richard Elling
On Jan 16, 2010, at 10:03 PM, Travis Tabbal wrote:

 Hmm... got it working after a reboot. Odd that it had problems before that. I 
 was able to rename the pools and the system seems to be running well now. 
 Irritatingly, the settings for sharenfs, sharesmb, quota, etc. didn't get 
 copied over with the zfs send/recv. I didn't have that many filesystems 
 though, so it wasn't too bad to reconfigure them.

What OS or build?  I've had similar issues with b130 on all sorts of mounts
besides ZFS.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recordsize...

2010-01-17 Thread Richard Elling
On Jan 17, 2010, at 11:59 AM, Tristan Ball wrote:

 Hi Everyone,
 
 Is it possible to use send/recv to change the recordsize, or does each file 
 need to be individually recreated/copied within a given dataset?

Yes.  The former does the latter.

 Is there a way to check the recordsize of a given file, assuming that the 
 filesystems recordsize was changed at some point?

I don't know of an easy way to do this.  But it is also rarely needed.  For
most file system use it is best to let the recordsize scale to large values.
It is only for fixed record length workloads (e.g. databases) that recordsize
matching can significantly improve efficiency.

 Also - Am I right in thinking that if a 4K write is made to a filesystem 
 block with a recordsize of 8K, then the original block is read (assuming it's 
 not in the ARC), before the new block is written elsewhere (the copy, from 
 copy on write)? This would be one of the reasons that aligning application IO 
 size and filesystem record sizes is a good thing, because where such IO is 
 aligned, you remove the need for that original read?

No.  Think of recordsize as a limit.  As long as the recordsize is >= 4 KB, a
4 KB file will only use one 4 KB record.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshot that won't go away.

2010-01-17 Thread Ian Collins

Daniel Carosone wrote:

On Sun, Jan 17, 2010 at 06:21:45PM +1300, Ian Collins wrote:
  

I have a Solaris 10 update 6 system with a snapshot I can't remove.

zfs destroy -f snap  reports the device as being busy.  fuser doesn't
show any process using the filesystem and it isn't shared.



Is it the parent snapshot for a clone?

  
I'm almost certain it isn't.  I haven't created any clones and none show 
in zpool history.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (snv_129, snv_130) can't import zfs pool

2010-01-17 Thread Jack Kielsmeier
Just curious if anything has happened here.

I had a similar issue that was solved by upgrading from 4GB to 8GB of RAM.

I now have the issue again, and my box hard-locks when doing the import after
about 30 minutes (this time not using de-dup, but using iscsi).  I debated
upgrading to 16GB of RAM, but can't justify the cost.

Hoping there is some sort of bug found and fixed in a future release so that I 
may get my 4.5 TB pool back.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss