Re: [zfs-discuss] I can't seem to get the pool to export...

2010-01-16 Thread Travis Tabbal
Hmm... got it working after a reboot. Odd that it had problems before that. I 
was able to rename the pools and the system seems to be running well now. 
Irritatingly, the settings for sharenfs, sharesmb, quota, etc. didn't get 
copied over with the zfs send/recv. I didn't have that many filesystems though, 
so it wasn't too bad to reconfigure them.
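
For what it's worth, a replication stream can carry the properties along, which avoids the reconfiguration step. A rough sketch, assuming a source pool 'raid', a target pool 'newraid' and a throwaway snapshot name (all placeholders), on a build recent enough to have zfs send -R:

zfs snapshot -r raid@migrate
# -R sends descendant filesystems, snapshots and locally-set properties
zfs send -R raid@migrate | zfs receive -d -F newraid

# Or list the locally-set properties so they can be re-applied by hand:
zfs get -r -s local -H -o name,property,value all raid
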
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Snapshot that won't go away.

2010-01-16 Thread Ian Collins

I have a Solaris 10 update 6 system with a snapshot I can't remove.

zfs destroy -f  reports the device as being busy.  fuser doesn't 
show any process using the filesystem and it isn't shared.


I can unmount the filesystem OK.

Any clues or suggestions of bigger sticks to hit it with?
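
Two things worth ruling out before reaching for bigger sticks; a rough sketch with placeholder names (pool/fs@snap):

# Is the snapshot the origin of a clone?
zfs get -r -H -o name,value origin pool | grep 'pool/fs@snap'

# Is the snapshot still mounted under .zfs/snapshot, e.g. held open by
# an NFS client or a shell sitting in that directory?
mount | grep '.zfs/snapshot'
fuser -c /pool/fs/.zfs/snapshot/snap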

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] I can't seem to get the pool to export...

2010-01-16 Thread Travis Tabbal
r...@nas:~# zpool export -f raid
cannot export 'raid': pool is busy

I've disabled all the services I could think of. I don't see anything accessing 
it. I also don't see any of the filesystems mounted with mount or "zfs mount". 
What's the deal?  This is not the rpool, so I'm not booted off it or anything 
like that. I'm on snv_129. 

I'm attempting to move the main storage to a new pool. I created the new pool, 
used "zfs send | zfs recv" for the filesystems. That's all fine. The plan was 
to export both pools, and use the import to rename them. I've got the new pool 
exported, but the older one refuses to export. 

Is there some way to get the system to tell me what's using the pool?
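
A few places a pool can be "busy" even when nothing appears mounted; a rough checklist, with the mountpoint guessed as /raid:

# Lingering mounts (including snapshots under .zfs) and open files:
zfs mount | grep '^raid'
fuser -c /raid

# Zvols in use as swap or dump devices:
swap -l
dumpadm

# Shares still active, or zones rooted on the pool:
sharemgr show -vp
zoneadm list -cv
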
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Toby Thain


On 16-Jan-10, at 6:51 PM, Mike Gerdts wrote:

> On Sat, Jan 16, 2010 at 5:31 PM, Toby Thain  wrote:
>> On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:
>>
>>>> I am considering building a modest sized storage system with zfs. Some
>>>> of the data on this is quite valuable, some small subset to be backed
>>>> up "forever", and I am evaluating back-up options with that in mind.
>>>
>>> You don't need to store the "zfs send" data stream on your backup media.
>>> This would be annoying for the reasons mentioned - some risk of being able
>>> to restore in future (although that's a pretty small risk) and inability to
>>> restore with any granularity, i.e. you have to restore the whole FS if you
>>> restore anything at all.
>>>
>>> A better approach would be "zfs send" and pipe directly to "zfs receive" on
>>> the external media.  This way, in the future, anything which can read ZFS
>>> can read the backup media, and you have granularity to restore either the
>>> whole FS, or individual things inside there.
>>
>> There have also been comments about the extreme fragility of the data stream
>> compared to other archive formats. In general it is strongly discouraged for
>> these purposes.
>
> Yet it is used in ZFS flash archives on Solaris 10

I can see the temptation, but isn't it a bit under-designed? I think
Mr Nordin might have ranted about this in the past...

--Toby

> and are slated for
> use in the successor to flash archives.  This initial proposal seems
> to imply using the same mechanism for a system image backup (instead
> of just system provisioning).
>
> http://mail.opensolaris.org/pipermail/caiman-discuss/2010-January/015909.html
>
> --
> Mike Gerdts
> http://mgerdts.blogspot.com/


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Mike Gerdts
On Sat, Jan 16, 2010 at 5:31 PM, Toby Thain  wrote:
> On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:
>
>>> I am considering building a modest sized storage system with zfs. Some
>>> of the data on this is quite valuable, some small subset to be backed
>>> up "forever", and I am evaluating back-up options with that in mind.
>>
>> You don't need to store the "zfs send" data stream on your backup media.
>> This would be annoying for the reasons mentioned - some risk of being able
>> to restore in future (although that's a pretty small risk) and inability
>> to
>> restore with any granularity, i.e. you have to restore the whole FS if you
>> restore anything at all.
>>
>> A better approach would be "zfs send" and pipe directly to "zfs receive"
>> on
>> the external media.  This way, in the future, anything which can read ZFS
>> can read the backup media, and you have granularity to restore either the
>> whole FS, or individual things inside there.
>
> There have also been comments about the extreme fragility of the data stream
> compared to other archive formats. In general it is strongly discouraged for
> these purposes.
>

Yet it is used in ZFS flash archives on Solaris 10 and are slated for
use in the successor to flash archives.  This initial proposal seems
to imply using the same mechanism for a system image backup (instead
of just system provisioning).

http://mail.opensolaris.org/pipermail/caiman-discuss/2010-January/015909.html

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is the disk a member of a zpool?

2010-01-16 Thread Morten-Christian Bernson
Thanks for the tip, both of you.  The zdb approach seems viable.
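
For anyone finding this later, the zdb check amounts to dumping the vdev labels off the device; a minimal sketch (the device name is just an example):

# A disk that belongs to a pool will show the pool name, GUIDs and the
# rest of its vdev label; a disk that was never in a pool typically
# reports that the labels could not be unpacked.
zdb -l /dev/rdsk/c0t1d0s0
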
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Toby Thain


On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:

>> I am considering building a modest sized storage system with zfs. Some
>> of the data on this is quite valuable, some small subset to be backed
>> up "forever", and I am evaluating back-up options with that in mind.
>
> You don't need to store the "zfs send" data stream on your backup media.
> This would be annoying for the reasons mentioned - some risk of being able
> to restore in future (although that's a pretty small risk) and inability to
> restore with any granularity, i.e. you have to restore the whole FS if you
> restore anything at all.
>
> A better approach would be "zfs send" and pipe directly to "zfs receive" on
> the external media.  This way, in the future, anything which can read ZFS
> can read the backup media, and you have granularity to restore either the
> whole FS, or individual things inside there.

There have also been comments about the extreme fragility of the data
stream compared to other archive formats. In general it is strongly
discouraged for these purposes.

--Toby

> Plus, the only way to guarantee the integrity of a "zfs send" data stream is
> to perform a "zfs receive" on that data stream.  So by performing a
> successful receive, you've guaranteed the datastream is not corrupt.  Yet.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-16 Thread Simon Breden
Which drive model/revision number are you using?
I presume you are using the 4-platter version: WD15EADS-00R6B0, but perhaps I 
am wrong.

Also, did you run WDTLER.EXE on the drives first, to shorten the error-reporting 
time?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-16 Thread Freddie Cash
We're in the process of upgrading our storage servers from Seagate RE.2 500 GB 
and WD 500 GB "black" drives to WD 1.5 TB "green" drives (ones with 512B 
sectors).  So far, no problems to report.

We've replaced 6 out of 8 drives in one raidz2 vdev so far (1 drive each 
weekend).  Resilver times have dropped from over 80 hours for the first drive 
to just under 60 for the 6th (the pool is 10 TB with <150 GB free).  No checksum 
errors of any kind reported so far, no drive timeouts reported by the 
controller; everything is working as normal.

We're running ZFSv13 on FreeBSD 7.2-STABLE.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-16 Thread Simon Breden
Which consumer-priced 1.5TB drives do people currently recommend?

I have had zero read/write/checksum errors in 2 years with my trusty old 
Western Digital WD7500AAKS drives, but now I want to upgrade to a new set of 
drives that are big, reliable and cheap.

As of Jan 2010 it seems the price sweet spot is the 1.5TB drives.

As I had a lot of success with Western Digital drives I thought I would stick 
with WD.

However, this time I might have to avoid Western Digital (see below), so I 
wondered which other recent drives people have found to be decent drives.

WD15EADS:
The model I was looking at was the WD15EADS.
The older 4-platter WD15EADS-00R6B0 revision seems to work OK, from what I 
have found, but I prefer fewer platters from a noise, vibration, heat and 
reliability perspective.
The newer 3-platter WD15EADS-00P8B0 revision seems to have serious problems - 
see links below.

WD15EARS:
Also, very recently WD brought out a 3-platter WD15EARS-00Z5B1 revision, based 
on 'Advanced Format', which uses 4 KB sectors instead of the traditional 
512-byte sectors.
Again, these drives seem to have serious issues - see links below.
Does ZFS handle this new 4 KB sector size automatically and transparently, or 
does something need to be done for it to work?
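
On the 4 KB question, as far as the sector size itself goes: ZFS derives its minimum device block size from the sector size the drive advertises (recorded per top-level vdev as 'ashift'), and since the EARS drives emulate 512-byte sectors, a pool built on them will normally end up with ashift=9 rather than 12. A quick way to check what an existing pool got (pool and device names are placeholders):

# ashift=9 means 512-byte sectors, ashift=12 means 4 KB sectors
zdb -C tank | grep ashift
# or read it straight from a vdev label:
zdb -l /dev/rdsk/c0t0d0s0 | grep ashift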

Reference:
1. On the Synology forum, it seems the older 4-platter 1.5TB EADS is OK 
(WD15EADS-00R6B0), but the newer 3-platter EADS has problems (WD15EADS-00P8B0):
http://forum.synology.com/enu/viewtopic.php?f=151&t=19131&sid=c1c446863595a5addb8652a4af2d09ca
2. A mac user has problems with WD15EARS-00Z5B1:
http://community.wdc.com/t5/Desktop/WD-1-5TB-Green-drives-Useful-as-door-stops/td-p/1217/page/2
  (WD 1.5TB Green drives - Useful as door stops)
http://community.wdc.com/t5/Desktop/WDC-WD15EARS-00Z5B1-awful-performance/m-p/5242
  (WDC WD15EARS-00Z5B1 awful performance)

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Thomas Burgess
>
>
>
> NO, zfs send is not a backup.
>
> From a backup, you could restore individual files.
>
> Jörg
>
>
I disagree.

It is a backup.  It's just not "an enterprise backup solution".
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Joerg Schilling
Edward Ned Harvey  wrote:

> > I am considering building a modest sized storage system with zfs. Some
> > of the data on this is quite valuable, some small subset to be backed
> > up "forever", and I am evaluating back-up options with that in mind.
>
> You don't need to store the "zfs send" data stream on your backup media.

NO, zfs send is not a backup.

From a backup, you could restore individual files.

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
        j...@cs.tu-berlin.de (uni)
        joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up a ZFS pool

2010-01-16 Thread dick hoogendijk
On Sat, 2010-01-16 at 07:24 -0500, Edward Ned Harvey wrote:

> Personally, I use "zfs send | zfs receive" to an external disk.  Initially a
> full image, and later incrementals.

Do these incrementals go into the same filesystem that received the
original zfs stream?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs over iscsi bad status

2010-01-16 Thread Arnaud Brand
OK, the third question (localhost transmission failure) should have been posted 
to storage-discuss.
I'll subscribe to that list and ask there.

 
Regarding the first question, after removing the LUN from the target, 
devfsadm -C removes the device and the pool then shows as unavailable. I guess 
that's the proper behaviour.
Still, the processes are hung and I can't destroy the pool.

This leaves me unable to open a new session for any user that has a home 
dir.

I copy-pasted some mdb results I gathered while looking for a way to get rid of 
the pool.
Please note I had failmode=wait for the failing pool.

But since you can't change it once you're stuck, you're bound to reboot in case 
of an iSCSI failure.
Or am I misunderstanding something?
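
On the failmode point, the property can at least be checked and changed ahead of time; a minimal sketch, using the tsmvol pool name from the output below:

# wait (the default) blocks I/O until the device comes back; continue
# returns EIO to new writes instead of hanging; panic crashes the box.
zpool get failmode tsmvol
zpool set failmode=continue tsmvol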


d...@nc-tanktsm:/tsmvol2# ps -ef | grep zpool
root 5 0   0 01:47:33 ?   0:06 zpool-rpool
root   327 0   0 01:47:50 ?  86:36 zpool-tank
root  4721  4042   0 15:13:27 pts/3   0:00 zpool online tsmvol 
c9t600144F05DF34C004B51BF950003d0
root  4617 0   0 14:36:35 ?   0:00 zpool-tsmvol
root  4752 0   0 15:14:40 ?   0:39 zpool-tsmvol2
root  4664  4042   0 15:08:34 pts/3   0:00 zpool destroy -f tsmvol
root  4861  4042   0 15:27:33 pts/3   0:00 grep zpool

d...@nc-tanktsm:/tsmvol2# echo "0t4721::pid2proc|::walk thread|::findstack -v" 
| mdb -k
stack pointer for thread ff040c813c20: ff00196a3aa0
[ ff00196a3aa0 _resume_from_idle+0xf1() ]
  ff00196a3ad0 swtch+0x145()
  ff00196a3b00 cv_wait+0x61(ff03f7ea4e52, ff03f7ea4e18)
  ff00196a3b50 txg_wait_synced+0x7c(ff03f7ea4c40, 0)
  ff00196a3b90 spa_vdev_state_exit+0x78(ff0402d9da80, ff040c832700,
  0)
  ff00196a3c00 vdev_online+0x20a(ff0402d9da80, abe9a540ed085f5c, 0,
  ff00196a3c14)
  ff00196a3c40 zfs_ioc_vdev_set_state+0x83(ff046c08f000)
  ff00196a3cc0 zfsdev_ioctl+0x175(0, 5a0d, 8042310, 13, ff04054f4528
  , ff00196a3de4)
  ff00196a3d00 cdev_ioctl+0x45(0, 5a0d, 8042310, 13, ff04054f4528,
  ff00196a3de4)
  ff00196a3d40 spec_ioctl+0x5a(ff03e3218180, 5a0d, 8042310, 13,
  ff04054f4528, ff00196a3de4, 0)
  ff00196a3dc0 fop_ioctl+0x7b(ff03e3218180, 5a0d, 8042310, 13,
  ff04054f4528, ff00196a3de4, 0)
  ff00196a3ec0 ioctl+0x18e(3, 5a0d, 8042310)
  ff00196a3f10 _sys_sysenter_post_swapgs+0x149()
d...@nc-tanktsm:/tsmvol2# echo "0t4664::pid2proc|::walk thread|::findstack -v" 
| mdb -k
stack pointer for thread ff03ec9898a0: ff00195ccc20
[ ff00195ccc20 _resume_from_idle+0xf1() ]
  ff00195ccc50 swtch+0x145()
  ff00195ccc80 cv_wait+0x61(ff0403008658, ff0403008650)
  ff00195cccb0 rrw_enter_write+0x49(ff0403008650)
  ff00195ccce0 rrw_enter+0x22(ff0403008650, 0, f79da8a0)
  ff00195ccd40 zfsvfs_teardown+0x3b(ff0403008580, 1)
  ff00195ccd90 zfs_umount+0xe1(ff0403101b80, 400, ff04054f4528)
  ff00195ccdc0 fsop_unmount+0x22(ff0403101b80, 400, ff04054f4528)
  ff00195cce10 dounmount+0x5f(ff0403101b80, 400, ff04054f4528)
  ff00195cce60 umount2_engine+0x5c(ff0403101b80, 400, ff04054f4528,
  1)
  ff00195ccec0 umount2+0x142(80c1fd8, 400)
  ff00195ccf10 _sys_sysenter_post_swapgs+0x149()
d...@nc-tanktsm:/tsmvol2# ps -ef | grep iozone
root  4631  3809   0 14:37:16 pts/2   0:00 
/usr/benchmarks/iozone/iozone -a -b results2.xls
root  4879  4042   0 15:28:06 pts/3   0:00 grep iozone
d...@nc-tanktsm:/tsmvol2# echo "0t4631::pid2proc|::walk thread|::findstack -v" 
| mdb -k
stack pointer for thread ff040c7683e0: ff001791e050
[ ff001791e050 _resume_from_idle+0xf1() ]
  ff001791e080 swtch+0x145()
  ff001791e0b0 cv_wait+0x61(ff04ec895328, ff04ec895320)
  ff001791e0f0 zio_wait+0x5d(ff04ec895020)
  ff001791e150 dbuf_read+0x1e8(ff0453f1ea48, 0, 2)
  ff001791e1c0 dmu_buf_hold+0x93(ff03f60bdcc0, 3, 0, 0, ff001791e1f8
  )
  ff001791e260 zap_lockdir+0x67(ff03f60bdcc0, 3, 0, 1, 1, 0,
  ff001791e288)
  ff001791e2f0 zap_lookup_norm+0x55(ff03f60bdcc0, 3, ff001791e720, 8
  , 1, ff001791e438, 0, 0, 0, 0)
  ff001791e350 zap_lookup+0x2d(ff03f60bdcc0, 3, ff001791e720, 8, 1,
  ff001791e438)
  ff001791e3d0 zfs_match_find+0xfd(ff0403008580, ff040aeb64b0,
  ff001791e720, 0, 1, 0, 0, ff001791e438)
  ff001791e4a0 zfs_dirent_lock+0x3d1(ff001791e4d8, ff040aeb64b0,
  ff001791e720, ff001791e4d0, 6, 0, 0)
  ff001791e540 zfs_dirlook+0xd9(ff040aeb64b0, ff001791e720,
  ff001791e6f0, 1, 0, 0)
  ff001791e5c0 zfs_lookup+0x25f(ff040b230300, ff001791e720,
  ff001791e6f0, ff001791ea30, 1, ff03e1776d80, ff03f84053a0, 0,
  0, 0)
  ff001791e660 fop_lookup+0xed(ff040b230300, ff001791e720,
  ff001791e6f0, ff001791ea30, 1, ff03e1776

Re: [zfs-discuss] zpool fragmentation issues? (dovecot)

2010-01-16 Thread Damon Atkins
In my previous post I was referring more to mdbox (multi-dbox) rather than dbox. 
However, I believe the metadata is stored with the mail message in version 1.x, 
whereas in 2.x the metadata is not updated within the message, which would be 
better for ZFS.

What I am saying is that one message per file, which is never updated in place, 
is better for snapshots.  I believe the 2.x version of single-dbox should be 
better for snapshots than 1.x dbox (i.e. the metadata is no longer stored with 
the message).

Cheers
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool fragmentation issues? (dovecot)

2010-01-16 Thread Damon Atkins
According to the Dovecot wiki, dbox files are re-written by a secondary process; 
i.e. deletes do not happen immediately, they happen later as a background process 
and the whole message file is re-written.  You can set a size limit on message 
files.

Some time ago I emailed Tim with a few ideas to make it more ZFS friendly, i.e. 
to try and prevent rewrites.  If you use dbox and keep snapshots, you will eat 
your disk up.  Maildir is a lot friendlier to snapshots, but it will be slower 
for backups or for searching text within the bodies of lots of emails.  So there 
are pros and cons with ZFS.  Personally I rate snapshots as more important, as I 
take them about 10 times a day and keep them for 7 days.  Maildir also makes it 
easier to restore an individual email.  It comes down to pros and cons; 
unfortunately, performance is always the most important goal.
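
For what it's worth, that kind of rotation only needs a small cron job; a rough sketch (the dataset name and shell are assumptions; on OpenSolaris the zfs-auto-snapshot service does the same job):

#!/bin/ksh
# Snapshot the mail dataset and keep only the newest $KEEP automatic
# snapshots (roughly 10 per day for 7 days).  Run several times a day
# from cron.
DS=tank/mail
KEEP=70

zfs snapshot "$DS@auto-$(date '+%Y%m%d-%H%M%S')"

# Timestamped names sort lexically, so the oldest snapshots come first.
SNAPS=$(zfs list -r -H -t snapshot -o name "$DS" | grep "^$DS@auto-" | sort)
TOTAL=$(echo "$SNAPS" | wc -l)
EXCESS=$((TOTAL - KEEP))

if [ "$EXCESS" -gt 0 ]; then
    echo "$SNAPS" | head -n "$EXCESS" | while read snap; do
        zfs destroy "$snap"
    done
fi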

Cheers
Damon.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL to disk

2010-01-16 Thread Jeffry Molanus
Thx all, I understand now.

BR, Jeffry
> 
> if an application requests a synchronous write then it is committed to
> the ZIL immediately; once that is done, the I/O is acknowledged to the
> application. But data written to the ZIL is still in memory as part of the
> currently open txg and will be committed to the pool with no need to read
> anything from the ZIL. Then there is the optimization you wrote about above,
> so the data blocks do not necessarily need to be written, just pointers
> which point to them.
> 
> Now it is slightly more complicated, as you need to take into account the
> logbias property and the possibility that a dedicated ZIL device could be
> present.
> 
> As Neil wrote, zfs will read from the ZIL only if, while importing a pool,
> it is detected that there is some data in the ZIL which hasn't been
> committed to the pool yet, which could happen due to a system reset, power
> loss or devices suddenly disappearing.
> 
> --
> Robert Milkowski
> http://milek.blogspot.com
> 
> 
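
For anyone who wants to try the knobs mentioned above, they look roughly like this (pool, dataset and device names are invented, and logbias needs a fairly recent build):

# Add a dedicated log (slog) device so synchronous writes land there
# instead of on the main pool disks:
zpool add tank log c4t0d0

# Per-dataset bias: 'latency' (the default) uses the log device,
# 'throughput' bypasses it and writes straight to the pool:
zfs set logbias=throughput tank/db
zfs get logbias tank/db
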
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Edward Ned Harvey
> I am considering building a modest sized storage system with zfs. Some
> of the data on this is quite valuable, some small subset to be backed
> up "forever", and I am evaluating back-up options with that in mind.

You don't need to store the "zfs send" data stream on your backup media.
This would be annoying for the reasons mentioned - some risk of being able
to restore in future (although that's a pretty small risk) and inability to
restore with any granularity, i.e. you have to restore the whole FS if you
restore anything at all.

A better approach would be "zfs send" and pipe directly to "zfs receive" on
the external media.  This way, in the future, anything which can read ZFS
can read the backup media, and you have granularity to restore either the
whole FS, or individual things inside there.

Plus, the only way to guarantee the integrity of a "zfs send" data stream is
to perform a "zfs receive" on that data stream.  So by performing a
successful receive, you've guaranteed the datastream is not corrupt.  Yet.
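
To make that concrete, a minimal sketch of the send-piped-into-receive approach, with invented pool and snapshot names:

# One-time: create a pool on the external disk and seed it with a full
# replication stream (-R carries snapshots and properties along).
zpool create backup c5t0d0
zfs snapshot -r tank@2010-01-16
zfs send -R tank@2010-01-16 | zfs receive -d -F backup

# Later runs ship only the changes since the previous snapshot.
zfs snapshot -r tank@2010-01-17
zfs send -R -I tank@2010-01-16 tank@2010-01-17 | zfs receive -d -F backup

Because the stream is received immediately, a bad stream fails at backup time rather than at restore time, and individual files can be copied straight back out of the backup pool.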

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up a ZFS pool

2010-01-16 Thread Edward Ned Harvey
> What is the best way to back up a zfs pool for recovery?  Recover
> entire pool or files from a pool...  Would you use snapshots and
> clones?
> 
> I would like to move the "backup" to a different disk and not use
> tapes.

Personally, I use "zfs send | zfs receive" to an external disk.  Initially a
full image, and later incrementals.  This way, you've got the history of
what previous snapshots you've received on the external disk, it's instantly
available if you connect to a new computer, and you can restore either the
whole FS, or a single file if you want.
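
A minimal sketch of what that looks like for a single filesystem (all names invented):

# Initial full copy into a pool that lives on the external disk:
zfs snapshot tank/home@2010-01-09
zfs send tank/home@2010-01-09 | zfs receive backup/home

# Later runs send only the delta between the last two snapshots; it is
# received into the same backup/home filesystem, which accumulates the
# snapshot history.
zfs snapshot tank/home@2010-01-16
zfs send -i tank/home@2010-01-09 tank/home@2010-01-16 | zfs receive -F backup/home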

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss