[zfs-discuss] Migrate from iscsitgt to comstar?

2009-09-21 Thread Markus Kovero
Is it possible to migrate data from iscsitgt to the COMSTAR iSCSI target? I guess COMSTAR wants metadata at the beginning of the volume, and this makes things difficult? Yours, Markus Kovero
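
For the record, the basic COMSTAR path for exposing an existing zvol looks roughly like the sketch below (tank/iscsivol is a hypothetical pool/volume; assumes the COMSTAR packages are installed). The catch Markus raises is real: as far as I know, sbdadm create-lu keeps its metadata at the start of the backing store by default, so pointing it at a volume already carrying iscsitgt data is not safe as-is.

    # enable the STMF framework and the COMSTAR iSCSI target service
    pfexec svcadm enable stmf
    pfexec svcadm enable -r svc:/network/iscsi/target:default

    # register the existing zvol as a logical unit; NOTE: by default this
    # writes SCSI metadata into the head of the volume
    pfexec sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol

    # expose the LU to all initiators (the GUID comes from the sbdadm output)
    pfexec stmfadm add-view <GUID-from-sbdadm>

    # create an iSCSI target portal
    pfexec itadm create-target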

Re: [zfs-discuss] Real help

2009-09-21 Thread Chris Ridd
On 20 Sep 2009, at 19:46, dick hoogendijk wrote: On Sun, 2009-09-20 at 11:41 -0700, vattini giacomo wrote: Hi there, I'm in a bad situation. Under Ubuntu I was trying to import a Solaris zpool that is in /dev/sda1, while Ubuntu is in /dev/sda5; not being able to mount the Solaris pool I

[zfs-discuss] Re-import RAID-Z2 with faulted disk

2009-09-21 Thread Kyle J. Aleshire
Hi all, I have a RAID-Z2 setup with 6x 500 GB SATA disks. I exported the array to use under a different system, but during or after the export one of the disks failed:

    k...@localhost:~$ pfexec zpool import
      pool: chronicle
        id: 11592382930413748377
     state: DEGRADED
    status: One or more devices

Re: [zfs-discuss] Real help

2009-09-21 Thread David Magda
On Sep 21, 2009, at 06:52, Chris Ridd wrote: Does "zpool destroy" prompt "are you sure" in any way? Some admin tools do (beadm destroy, for example) but there's not a lot of consistency. No, it doesn't, which I always found strange. Personally I always thought you should be queried for a zfs

Re: [zfs-discuss] Re-import RAID-Z2 with faulted disk

2009-09-21 Thread Casper . Dik
The disk has since been replaced, so now:

    k...@localhost:~$ pfexec zpool import
      pool: chronicle
        id: 11592382930413748377
     state: DEGRADED
    status: One or more devices contains corrupted data.
    action: The pool can be imported despite missing or damaged devices. The fault tolerance
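
For completeness, the usual recovery path for this state is sketched below, using the pool name from the thread and a hypothetical replacement device c1t5d0:

    # import the degraded pool despite the damaged device
    pfexec zpool import chronicle

    # tell ZFS the disk at that location was replaced
    # (c1t5d0 is a placeholder; use the device shown by zpool status)
    pfexec zpool replace chronicle c1t5d0

    # watch the resilver complete
    pfexec zpool status -v chronicle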

Re: [zfs-discuss] Real help

2009-09-21 Thread Mattias Pantzare
On Mon, Sep 21, 2009 at 13:34, David Magda dma...@ee.ryerson.ca wrote: On Sep 21, 2009, at 06:52, Chris Ridd wrote: Does "zpool destroy" prompt "are you sure" in any way? Some admin tools do (beadm destroy, for example) but there's not a lot of consistency. No, it doesn't, which I always found
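
For reference, zpool destroy really does act immediately, but a destroyed pool whose disks haven't been reused can usually be brought back; a minimal sketch with a hypothetical pool name:

    # no "are you sure" prompt; the pool is gone at once
    pfexec zpool destroy tank

    # list destroyed pools whose labels are still intact
    pfexec zpool import -D

    # recover one by name
    pfexec zpool import -D tank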

[zfs-discuss] lots of zil_clean threads

2009-09-21 Thread Nils Goroll
Hi All, out of curiosity: can anyone come up with a good idea about why my snv_111 laptop should run more than 1000 zil_clean threads?

    ff0009a9dc60 fbc2c0300 tq:zil_clean
    ff0009aa3c60 fbc2c0300 tq:zil_clean
    ff0009aa9c60

Re: [zfs-discuss] ZFS file disk usage

2009-09-21 Thread Andrew Deason
On Sun, 20 Sep 2009 20:31:57 -0400 Richard Elling richard.ell...@gmail.com wrote: If you are just building a cache, why not just make a file system and put a reservation on it? Turn off auto snapshots and set other features as per best practices for your workload? In other words, treat it
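
A minimal sketch of the approach Richard describes, with a hypothetical pool tank and a 10 GB cache:

    # dedicated filesystem with its space both guaranteed and capped
    pfexec zfs create tank/cache
    pfexec zfs set reservation=10G tank/cache
    pfexec zfs set quota=10G tank/cache

    # opt it out of the automatic snapshot service
    pfexec zfs set com.sun:auto-snapshot=false tank/cache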

Re: [zfs-discuss] lots of zil_clean threads

2009-09-21 Thread Neil Perrin
Nils, a zil_clean() is started for each dataset after every txg; this includes snapshots (which is perhaps a bit inefficient). Still, zil_clean() is fairly lightweight if there's nothing to do (grab an uncontended lock; find nothing on a list; drop the lock; exit). Neil. On 09/21/09 08:08,

Re: [zfs-discuss] resizing zpools by growing LUN

2009-09-21 Thread Sascha
Hi Darren, sorry it took so long for me to answer. The good thing: I found out what went wrong. What I did: after resizing a disk on the storage, Solaris recognizes it immediately. Every time you resize a disk, the EVA storage updates the description, which contains the size. So typing
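
For anyone following along, the steps after growing a LUN on the array look roughly like this (hypothetical pool tank and device c2t0d0; the autoexpand property only exists on builds from around snv_117 on):

    # on recent builds, let the pool absorb LUN growth automatically
    pfexec zpool set autoexpand=on tank

    # or expand a single vdev in place after the LUN has grown
    pfexec zpool online -e tank c2t0d0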

Re: [zfs-discuss] resizing zpools by growing LUN

2009-09-21 Thread Richard Elling
On Sep 21, 2009, at 8:59 AM, Sascha wrote: Hi Darren, sorry it took so long for me to answer. The good thing: I found out what went wrong. What I did: after resizing a disk on the storage, Solaris recognizes it immediately. Every time you resize a disk, the EVA storage updates the

Re: [zfs-discuss] Incremental backup via zfs send / zfs receive

2009-09-21 Thread David Pacheco
Frank Middleton wrote: The problem with the regular stream is that most of the file system properties (such as mountpoint) are not copied as they are with a recursive stream. This may seem an advantage to some (e.g., if the remote mountpoint is already in use, the mountpoint seems to default to
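
The behaviour Frank describes is the plain-versus-replication-stream split; a sketch with hypothetical dataset and host names:

    # plain incremental: dataset contents only; local properties such as
    # mountpoint are not carried to the receiving side
    zfs send -i tank/fs@snap1 tank/fs@snap2 | ssh backuphost zfs receive tank/fs

    # replication stream: -R carries properties, descendants and snapshots;
    # -u (where the receiving side supports it) avoids mounting over a
    # mountpoint that is already in use
    zfs send -R -I tank/fs@snap1 tank/fs@snap2 | ssh backuphost zfs receive -u -d backup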

Re: [zfs-discuss] resizing zpools by growing LUN

2009-09-21 Thread Sascha
Hi Richard, I think I'll update all our servers to the same version of ZFS... That will hopefully make sure this doesn't happen again :-) Darren and Richard: thank you very much for your help! Sascha

Re: [zfs-discuss] lots of zil_clean threads

2009-09-21 Thread Neil Perrin
Thinking more about this, I'm confused about what you are seeing. The function dsl_pool_zil_clean() will serialise separate calls to zil_clean() within a pool. I don't expect you have 1037 pools on your laptop! So I don't know what's going on. What is the typical call stack for those zil_clean()
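
One way to answer Neil's question about the stacks, assuming mdb -k and DTrace are available (GNU grep assumed for the -A context flag):

    # dump kernel thread stacks and pick out the zil_clean taskq threads
    echo "::threadlist -v" | pfexec mdb -k | grep -A10 zil_clean

    # or aggregate the stacks that actually reach zil_clean()
    pfexec dtrace -n 'fbt::zil_clean:entry { @[stack()] = count(); }'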

Re: [zfs-discuss] ZFS Recv slow with high CPU

2009-09-21 Thread Matthew Ahrens
Tristan Ball wrote: Hi Everyone, I have a couple of systems running opensolaris b118, one of which sends hourly snapshots to the other. This has been working well, however as of today, the receiving zfs process has started running extremely slowly, and is running at 100% CPU on one core,
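
A sketch of the usual first profiling steps for a CPU-bound zfs receive, assuming a single zfs process and stock Solaris tooling:

    # which LWP is hot, and whether it is user or system time
    prstat -mLp $(pgrep -x zfs) 5

    # kernel profiling interrupt: where the CPU time is going
    pfexec lockstat -kIW -D 20 sleep 30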

[zfs-discuss] zfsdle eating all resources..

2009-09-21 Thread Nilsen, Vidar
Hi, I've got some strange problems with my server today. When I boot b123, it stops at reading the ZFS config. I've tried several times to get past this point, but it seems to freeze there. Then I tried single-user mode from GRUB, and it seems to get me a little further. After a few minutes, however,

[zfs-discuss] Directory size value

2009-09-21 Thread Chris Banal
It appears as though ZFS reports the size of a directory to be one byte per file. Traditional file systems such as UFS or ext3 report the actual size of the data needed to store the directory. This causes some trouble with the default behavior of some NFS clients (Linux) to decide to use a

Re: [zfs-discuss] Directory size value

2009-09-21 Thread Tomas Ögren
On 21 September, 2009 - Chris Banal sent me these 4,4K bytes: It appears as though zfs reports the size of a directory to be one byte per file. Traditional file systems such as ufs or ext3 report the actual size of the data needed to store the directory. Or rather, the size needed at some
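
The behaviour is easy to see directly; a minimal sketch on a scratch ZFS dataset (the path is hypothetical):

    # on ZFS, a directory's st_size tracks its entry count rather than
    # the bytes used to store the directory itself
    mkdir /tank/scratch/d
    touch /tank/scratch/d/a /tank/scratch/d/b /tank/scratch/d/c
    ls -ld /tank/scratch/d   # size column is a small per-entry count, not a byte total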

Re: [zfs-discuss] ZFS file disk usage

2009-09-21 Thread Richard Elling
On Sep 21, 2009, at 7:11 AM, Andrew Deason wrote: On Sun, 20 Sep 2009 20:31:57 -0400 Richard Elling richard.ell...@gmail.com wrote: If you are just building a cache, why not just make a file system and put a reservation on it? Turn off auto snapshots and set other features as per best

Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-21 Thread Gary Mills
On Fri, Sep 18, 2009 at 01:51:52PM -0400, Steffen Weiberle wrote: I am trying to compile some deployment scenarios of ZFS. # of systems: One, our e-mail server for the entire campus. Amount of storage: 2 TB, 58% used. Application profile(s): This is our Cyrus IMAP spool. In addition

Re: [zfs-discuss] ZFS file disk usage

2009-09-21 Thread Andrew Deason
On Mon, 21 Sep 2009 17:13:26 -0400 Richard Elling richard.ell...@gmail.com wrote: OK, so the problem you are trying to solve is "how much stuff can I place in the remaining free space?" I don't think this is knowable for a dynamic file system like ZFS, where metadata is dynamically allocated.
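
The closest ZFS gets to answering "how much can I still write" is its per-dataset space accounting; a sketch, with a hypothetical dataset name:

    # available space as ZFS sees it, broken down by snapshots,
    # descendants and refreservation
    zfs list -o space tank/cache

    # pool-wide and POSIX views for comparison
    zpool list tank
    df -h /tank/cache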

Re: [zfs-discuss] possibilities of AFP ever making it into ZFS like NFS an

2009-09-21 Thread Ron Mexico
I was able to get Netatalk built on OpenSolaris for my ZFS NAS at home. Everything is running great so far, and I'm planning on using it on the 96TB NAS I'm building for my office. It would be nice to have this supported out of the box, but there are probably licensing issues involved.
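
For anyone wanting to reproduce the setup, a rough sketch of sharing a ZFS filesystem over AFP with Netatalk 2.x; the paths and options below are assumptions and depend on the configure prefix:

    # /usr/local/etc/netatalk/AppleVolumes.default
    /tank/share "NAS" options:usedots,upriv

    # then start the AFP daemon
    pfexec /usr/local/sbin/afpd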

Re: [zfs-discuss] Re-import RAID-Z2 with faulted disk

2009-09-21 Thread Kyle J. Aleshire
On Mon, Sep 21, 2009 at 3:37 AM, casper@sun.com wrote: The disk has since been replaced, so now:

    k...@localhost:~$ pfexec zpool import
      pool: chronicle
        id: 11592382930413748377
     state: DEGRADED
    status: One or more devices contains corrupted data.
    action: The pool can be

Re: [zfs-discuss] Re-import RAID-Z2 with faulted disk

2009-09-21 Thread Kyle J. Aleshire
I'm running vanilla 2009.06 since its release. I'll definitely give it a shot with the Live CD. Also I tried importing with only the five good disks physically attached and get the same message. - Kyle On Mon, Sep 21, 2009 at 3:50 AM, Chris Murray chrismurra...@googlemail.comwrote: That

[zfs-discuss] How to recover from can't open objset, cannot iterate filesystems?

2009-09-21 Thread Albert Chin
Recently upgraded a system from b98 to b114. Also replaced two 400G Seagate Barracuda 7200.8 SATA disks with two WD 750G RE3 SATA disks in a 6-device raidz1 pool. Replacing the first 750G went ok. While replacing the second 750G disk, I noticed CKSUM errors on the first disk. Once the second
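
A sketch of the usual read-only triage for this sort of damage, assuming the pool imports at all (the pool name is hypothetical; zdb only reads, it changes nothing):

    # which datasets/objects does ZFS consider damaged
    pfexec zpool status -v tank

    # walk the datasets from the on-disk state
    pfexec zdb -d tank

    # full verification pass once imported
    pfexec zpool scrub tank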