[zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Gabriele Bulfon
I found this today: http://blog.lastinfirstout.net/2010/06/sunoracle-finally-announces-zfs-data.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+LastInFirstOut+%28Last+In%2C+First+Out%29&utm_content=FriendFeed+Bot How can I be sure my Solaris 10 systems are fine? Is latest

Re: [zfs-discuss] ZFS root recovery SMI/EFI label weirdness

2010-06-28 Thread Sean .
Thanks. I don't know how I missed it. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Ian Collins
On 06/28/10 08:15 PM, Gabriele Bulfon wrote: I found this today: http://blog.lastinfirstout.net/2010/06/sunoracle-finally-announces-zfs-data.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+LastInFirstOut+%28Last+In%2C+First+Out%29&utm_content=FriendFeed+Bot How can I be sure my

Re: [zfs-discuss] ZFS on Ubuntu

2010-06-28 Thread Joe Little
All true. I just saw too many "need Ubuntu and ZFS" posts and thought to state the obvious, in case the patch set for Nexenta happens to differ enough to provide a working set. I've had Nexenta succeed where OpenSolaris quarterly releases failed, and vice versa. On Jun 27, 2010, at 9:54 PM, Erik Trimble

Re: [zfs-discuss] Kernel panic on zpool status -v (build 143)

2010-06-28 Thread Andrej Podzimek
I ran 'zpool scrub' and will report what happens once it's finished. (It will take pretty long.) The scrub finished successfully (with no errors) and 'zpool status -v' doesn't crash the kernel any more. Andrej

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Gabriele Bulfon
Yes, I did read it. And what worries me is patch availability...

Re: [zfs-discuss] ZFS on Ubuntu

2010-06-28 Thread Roy Sigurd Karlsbakk
I think ZFS on Ubuntu is currently a rather bad idea. See the test below with Ubuntu Lucid 10.04 (amd64):

r...@bigone:~# cat /proc/partitions
major minor  #blocks  name
   8     0  312571224 sda
   8     1     979933 sda1
   8     2    3911827 sda2
   8     3   48829567 sda3
   8

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Gabriele Bulfon
Mmm... I double-checked some of the running systems. Most of them have the first patch (sparc-122640-05 and x86-122641-06), but not the second one (sparc-142900-09 and x86-142901-09)... ...I feel I'm right in the middle of the problem... How much am I risking?! These systems are all mirrored via

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Dick Hoogendijk
On 28-6-2010 12:13, Gabriele Bulfon wrote: *sweat* These systems have all been running for years now and I considered them safe... Have I been at risk all this time?! They're still running, are they not? So, stop sweating. <g> But you're right about the changed patching service from Oracle. It

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Gabriele Bulfon
Yes... they're still running... but being aware that a power failure causing an unexpected poweroff may make the pool unreadable is a pain. Yes, patches should be available. Or adoption may be lowering a lot...

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Victor Latushkin
On 28.06.10 16:16, Gabriele Bulfon wrote: Yes... they're still running... but being aware that a power failure causing an unexpected poweroff may make the pool unreadable is a pain. Pool integrity is not affected by this issue.

Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-28 Thread Frank Cusack
On 6/26/10 9:47 AM -0400 David Magda wrote: Crickey. Who's the genius who thinks of these URLs? SEOs

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Garrett D'Amore
On Mon, 2010-06-28 at 05:16 -0700, Gabriele Bulfon wrote: Yes... they're still running... but being aware that a power failure causing an unexpected poweroff may make the pool unreadable is a pain. Yes, patches should be available. Or adoption may be lowering a lot... I don't have access

[zfs-discuss] Announce: zfsdump

2010-06-28 Thread Tristram Scott
For quite some time I have been using zfs send -R fsn...@snapname | dd of=/dev/rmt/1ln to make a tape backup of my zfs file system. A few weeks back the size of the file system grew to larger than would fit on a single DAT72 tape, and I once again searched for a simple solution to allow
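For what it's worth, the multi-tape problem described here boils down to splitting one send stream into fixed-size pieces. A minimal Python sketch of that idea (the chunk_stream helper is hypothetical, not part of zfsdump; real tape positioning, headers, and checksums are omitted):

```python
import io

def chunk_stream(src, chunk_size):
    """Yield successive chunks of at most chunk_size bytes from a binary stream."""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Example: a 10-byte stream split into 4-byte "tapes".
stream = io.BytesIO(b"0123456789")
sizes = [len(c) for c in chunk_stream(stream, 4)]
print(sizes)  # [4, 4, 2]
```

In practice each chunk would be written to a separate tape (or tape file), and restore is the reverse: concatenate the chunks back into one stream for zfs receive.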

Re: [zfs-discuss] Announce: zfsdump

2010-06-28 Thread Brian Kolaci
I use Bacula which works very well (much better than Amanda did). You may be able to customize it to do direct zfs send/receive, however I find that although they are great for copying file systems to other machines, they are inadequate for backups unless you always intend to restore the whole

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-28 Thread Andrew Jones
Now at 36 hours since zdb process start, and:

PID USERNAME  SIZE   RSS STATE PRI NICE      TIME  CPU PROCESS/NLWP
827 root     4936M 4931M sleep  59    0   0:50:47 0.2% zdb/209

Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where

Re: [zfs-discuss] Announce: zfsdump

2010-06-28 Thread Tristram Scott
I use Bacula which works very well (much better than Amanda did). You may be able to customize it to do direct zfs send/receive, however I find that although they are great for copying file systems to other machines, they are inadequate for backups unless you always intend to restore the

Re: [zfs-discuss] Announce: zfsdump

2010-06-28 Thread Brian Kolaci
On Jun 28, 2010, at 12:26 PM, Tristram Scott wrote: I use Bacula which works very well (much better than Amanda did). You may be able to customize it to do direct zfs send/receive, however I find that although they are great for copying file systems to other machines, they are inadequate

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-28 Thread Andrew Jones
Update: have given up on the zdb write mode repair effort, at least for now. Hoping for any guidance / direction anyone's willing to offer... Re-running 'zpool import -F -f tank' with some stack trace debug, as suggested in similar threads elsewhere. Note that this appears hung at near idle.

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-28 Thread Roy Sigurd Karlsbakk
- Original Message - Now at 36 hours since zdb process start and: PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP 827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209 Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-28 Thread Malachi de Ælfweald
I had a similar issue on boot after upgrade in the past and it was due to the large number of snapshots I had... don't know if that could be related or not... Malachi de Ælfweald http://www.google.com/profiles/malachid On Mon, Jun 28, 2010 at 8:59 AM, Andrew Jones andrewnjo...@gmail.com wrote:

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-28 Thread Andrew Jones
Dedup had been turned on in the past for some of the volumes, but I had turned it off altogether before entering production due to performance issues. GZIP compression was turned on for the volume I was trying to delete.

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-28 Thread Andrew Jones
Malachi, Thanks for the reply. There were no snapshots for the CSV1 volume that I recall... very few snapshots on any volume in the tank.

[zfs-discuss] Kernel Panic on zpool clean

2010-06-28 Thread George
Hi, I have a machine running 2009.06 with 8 SATA drives in a SCSI-connected enclosure. I had a drive fail and accidentally replaced the wrong one, which unsurprisingly caused the rebuild to fail. The status of the zpool then ended up as:

pool: storage2
state: FAULTED
status: An intent log

[zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-28 Thread valrh...@gmail.com
I'm putting together a new server, based on a Dell PowerEdge T410. I have a simple SAS controller, with six 2TB Hitachi DeskStar 7200 RPM SATA drives. The processor is a quad-core 2 GHz Core i7-based Xeon. I will run the drives as one set of three mirror pairs striped together, for 6 TB of

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-28 Thread Roy Sigurd Karlsbakk
- Original Message - Dedup had been turned on in the past for some of the volumes, but I had turned it off altogether before entering production due to performance issues. GZIP compression was turned on for the volume I was trying to delete. Was there a lot of deduped data still on

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-28 Thread Andrew Jones
Just re-ran 'zdb -e tank' to confirm the CSV1 volume is still exhibiting error 16: [snip] Could not open tank/CSV1, error 16 [snip] Considering my attempt to delete the CSV1 volume led to the failure in the first place, I have to think that if I can either 1) complete the deletion of this volume

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-28 Thread Roy Sigurd Karlsbakk
2. Are the RAM requirements for ZFS with dedup based on the total available zpool size (I'm not using thin provisioning), or just on how much data is in the filesystem being deduped? That is, if I have 500 GB of deduped data but 6 TB of possible storage, which number is relevant for
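As a rough back-of-the-envelope check (hedged: the ~320 bytes per in-core DDT entry and the 128K average block size below are assumptions often quoted on this list, not fixed constants; 'zdb -DD' reports the real figures for a given pool), the dedup table scales with data actually stored, not with pool capacity:

```python
def ddt_ram_bytes(stored_bytes, avg_block_size=128 * 1024, entry_size=320):
    """Estimate in-core dedup table size: one entry per unique block stored."""
    entries = stored_bytes // avg_block_size
    return entries * entry_size

# 500 GiB of deduped data vs. a 6 TB pool: only the 500 GiB matters.
ram = ddt_ram_bytes(500 * 2**30)
print(ram)                      # 1310720000
print(round(ram / 2**30, 2))    # 1.22 (GiB)
```

So for the poster's 500 GB of deduped data, the DDT would want on the order of a gigabyte of ARC (or L2ARC), regardless of the 6 TB of raw capacity.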

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-28 Thread Erik Trimble
On 6/28/2010 12:33 PM, valrh...@gmail.com wrote: I'm putting together a new server, based on a Dell PowerEdge T410. I have simple SAS controller, with six 2TB Hitachi DeskStar 7200 RPM SATA drives. The processor is a quad-core 2 GHz Core i7-based Xeon. I will run the drives as one set of

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-28 Thread Erik Trimble
On 6/28/2010 12:53 PM, Roy Sigurd Karlsbakk wrote: 2. Are the RAM requirements for ZFS with dedup based on the total available zpool size (I'm not using thin provisioning), or just on how much data is in the filesystem being deduped? That is, if I have 500 GB of deduped data but 6 TB of possible

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-28 Thread Victor Latushkin
On Jun 28, 2010, at 9:32 PM, Andrew Jones wrote: Update: have given up on the zdb write mode repair effort, as least for now. Hoping for any guidance / direction anyone's willing to offer... Re-running 'zpool import -F -f tank' with some stack trace debug, as suggested in similar threads

Re: [zfs-discuss] Resilvering onto a spare - degraded because of read and cksum errors

2010-06-28 Thread Cindy Swearingen
Hi Donald, I think this is just a reporting error in the zpool status output, depending on which Solaris release it is. Thanks, Cindy On 06/27/10 15:13, Donald Murray, P.Eng. wrote: Hi, I awoke this morning to a panic'd opensolaris zfs box. I rebooted it and confirmed it would panic each time

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-28 Thread Victor Latushkin
On Jun 28, 2010, at 11:27 PM, George wrote: I've tried removing the spare and putting back the faulty drive to give:

pool: storage2
state: FAULTED
status: An intent log record could not be read. Waiting for administrator intervention to fix the faulted pool.
action: Either restore

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-28 Thread George
I've attached the output of those commands. The machine is a v20z if that makes any difference. Thanks, George

mdb (logging to debug.txt):
::status
debugging crash dump vmcore.0 (64-bit) from crypt
operating system: 5.11 snv_111b (i86pc)
panic message:

[zfs-discuss] COMSTAR ISCSI - configuration export/import

2010-06-28 Thread bso...@epinfante.com
Hi all, Having osol b134 exporting a couple of iSCSI targets to some hosts, how can the COMSTAR configuration be migrated to another host? I can use ZFS send/receive to replicate the LUNs, but how can I replicate the targets and views from serverA to serverB? Are there any best procedures to

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-28 Thread Andrew Jones
Thanks Victor. I will give it another 24 hrs or so and will let you know how it goes... You are right, a large 2TB volume (CSV1) was not in the process of being deleted, as described above. It is showing error 16 on 'zdb -e'

Re: [zfs-discuss] COMSTAR ISCSI - configuration export/import

2010-06-28 Thread Mike Devlin
I haven't tried it yet, but supposedly this will backup/restore the COMSTAR config: $ svccfg export -a stmf > comstar.bak.${DATE} If you ever need to restore the configuration, you can attach the storage and run an import: $ svccfg import comstar.bak.${DATE} - Mike On 6/28/10,

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Gabriele Bulfon
Oh well, thanks for this answer. It makes me feel much better! What are the possible risks? Gabriele Bulfon - Sonicle S.r.l. Tel +39 028246016 Int. 30 - Fax +39 028243880 Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY http://www.sonicle.com

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-28 Thread Geoff Shipman
Andrew, Looks like the zpool is telling you the devices are still doing work of some kind, or that there are locks still held. The errors are listed in the intro(2) man page. Number 16 looks to be EBUSY: 16 EBUSY Device busy An
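As a quick cross-check of the intro(2) table quoted above, error 16 maps to EBUSY on POSIX-ish systems; Python's errno module (used here as a sketch instead of the man page's C constants) exposes the same values:

```python
import errno
import os

# intro(2)'s "16 EBUSY Device busy" corresponds to the same constant here.
print(errno.EBUSY)               # 16
print(errno.errorcode[16])       # 'EBUSY'
print(os.strerror(errno.EBUSY))  # e.g. "Device or resource busy" (wording varies by OS)
```

That is consistent with zdb reporting "error 16" while something still holds the dataset open.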

Re: [zfs-discuss] Announce: zfsdump

2010-06-28 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Tristram Scott If you would like to try it out, download the package from: http://www.quantmodels.co.uk/zfsdump/ I haven't tried this yet, but thank you very much! Other people have pointed

Re: [zfs-discuss] Resilvering onto a spare - degraded because of read and cksum errors

2010-06-28 Thread Donald Murray, P.Eng.
Thanks Cindy. I'm running 111b at the moment. I ran a scrub last night, and it still reports the same status. r...@weyl:~# uname -a SunOS weyl 5.11 snv_111b i86pc i386 i86pc Solaris r...@weyl:~# zpool status -x pool: tank state: DEGRADED status: One or more devices could not be opened.

Re: [zfs-discuss] Announce: zfsdump

2010-06-28 Thread Asif Iqbal
On Mon, Jun 28, 2010 at 11:26 AM, Tristram Scott tristram.sc...@quantmodels.co.uk wrote: For quite some time I have been using zfs send -R fsn...@snapname | dd of=/dev/rmt/1ln to make a tape backup of my zfs file system. A few weeks back the size of the file system grew to larger than would

[zfs-discuss] Processes hang in /dev/zvol/dsk/poolname

2010-06-28 Thread James Dickens
After multiple power outages caused by storms coming through, I can no longer access /dev/zvol/dsk/poolname, which holds l2arc and slog devices used in another pool. I don't think this is related, since the pools are offline pending access to the volumes. I tried running find /dev/zvol/dsk/poolname