Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-26 Thread Karel Gardas
Thank you Christopher and Edward for all the detailed information provided. 
Indeed, the DDRdrive looks like the right tool for a fast ZIL, but for my development 
workstation I'm rather looking for an L2ARC cache device, where, as you note, the 
RevoDrive might do the job nicely.
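For the record, the kind of setup I have in mind looks roughly like this (the pool and device names below are placeholders, not my actual configuration):

```shell
# Add an SSD as an L2ARC (read cache) device to an existing pool:
zpool add tank cache c5t0d0

# By contrast, a fast device dedicated to the ZIL is added as a log vdev:
zpool add tank log c6t0d0

# A cache device can later be removed without harming the pool:
zpool remove tank c5t0d0
```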
Thanks,
Karel
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] panic[cpu0]/thread=ffffff0016e03c40: zfs: allocating allocated segment...

2010-11-25 Thread Karel Gardas
Hello Victor,

thanks a lot for your help. I'm just going to buy some new drives here to be 
able to back up this pool, but anyway, what do you suggest doing with such a pool 
afterwards? A complete destroy and reinstall? Or is it possible to run a scrub on a 
read-only pool to recover it? I see this bug report: 
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6616286 which is 
already closed, so I'm wondering whether this bug is still present but not logged 
in your bug DB. Any silent-corruption bug in ZFS makes me a little 
nervous...

Thanks!
Karel
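Just so I understand the options, the destroy-and-restore route I'd expect looks roughly like this (pool, device, and backup names are only placeholders, not my real layout):

```shell
# Only after the data is safely backed up elsewhere!
zpool destroy tank

# Recreate the mirror from scratch on the same two drives:
zpool create tank mirror c1t0d0 c1t1d0

# Restore the datasets from the previously saved backup stream:
zfs receive -d tank < /backup/tank.zstream
```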

 
 With Solaris 11 Express you can import your pool in
 readonly fashion like this:
 
 zpool import -o readonly=on poolname
 
 that way you should be able to backup all your
 important data.
 
 Victor



[zfs-discuss] How to zfs send current stage of fs from read-only pool?

2010-11-25 Thread Karel Gardas
Hello,
I'm trying to recover from the zfs pool crash described here: 
http://www.opensolaris.org/jive/thread.jspa?threadID=135675&tstart=0 -- I've 
successfully used Victor's advice and now have my pool imported in 
read-only mode. I can see the data on the filesystems and would like to back all of 
them up using zfs send/zfs receive to another backup pool.
My problem is that `zfs send' supports sending only snapshots, and I cannot 
create any new snapshot on the read-only pool. Is there any workaround for 
this? Something like a virtual @now snapshot would be great to have 
for these purposes, IMHO...
Thanks for any idea!
Karel
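In the meantime, the only workaround I can think of is a file-level copy instead of zfs send (pool and path names below are placeholders):

```shell
# zfs snapshot fails on a pool imported with readonly=on, so fall back
# to copying the mounted filesystems into a dataset on the backup pool:
zfs create backup/tank-rescue
rsync -a /tank/ /backup/tank-rescue/
```

This loses dataset properties and snapshot history, of course, which is why a virtual @now snapshot would still be nicer.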


[zfs-discuss] OCZ RevoDrive ZFS support

2010-11-25 Thread Karel Gardas
Hello,
I'm curious whether there is support in Solaris for the OCZ RevoDrive SSD, or any 
other SSD attached directly to PCIe. The RevoDrive looks particularly interesting for 
its low price, and why buy something SATA-based when you can have twice 
the speed over PCIe for the same money?
Thanks,
Karel


[zfs-discuss] Strange behavior of b151a and .zfs directory

2010-11-25 Thread Karel Gardas
Hello,
after upgrading to Sol11Express I've noticed somewhat strange behavior of the .zfs 
directory of any ZFS filesystem. Go into the .zfs directory and type `find . 
-type f' for the first time after you've mounted the filesystem: it shows 
nothing. Type it a second time and you get the expected list of files from all 
the snapshots.
Is this expected, or is it a bug? I don't remember seeing this on OS 2009.06.
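To be concrete, the reproduction is roughly this (the dataset path is just an example):

```shell
# Immediately after mounting the filesystem:
cd /tank/export/home/.zfs
find . -type f   # first run: prints nothing
find . -type f   # second run: lists files from all snapshots
```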
Thanks,
Karel


[zfs-discuss] panic[cpu0]/thread=ffffff0016e03c40: zfs: allocating allocated segment...

2010-11-24 Thread Karel Gardas
Hello,

during my attempts to update my workstation OS to the latest Solaris 11 Express 
2010.11, I've reached the point where the machine no longer boots. That happened 
just after the last reboot of Sol11Express, when everything had been updated and 
set up in the usual way. During that last Sol11Express reboot, i.e. during 
the boot, I got the following panic (I hadn't managed to capture it the first time, 
so I booted with -s -k to see it). Also please note that this crash is taken 
from the console of a second computer, to which I moved the disks in order to see 
whether my workstation hardware (mobo) is broken, whether the disks are failing, or 
something else. In any case, the message on the workstation was the same (zfs: 
allocating allocated segment etc.).

panic[cpu0]/thread=ff0016e03c40: zfs: allocating allocated 
segment(offset=335092685312 size=4096)


Warning - stack not written to the dump buffer
ff0016e03500 genunix:vcmn_err+2c ()
ff0016e035f0 zfs:zfs_panic_recover+ae ()
ff0016e03690 zfs:space_map_add+fb ()
ff0016e03750 zfs:space_map_load+294 ()
ff0016e037b0 zfs:metaslab_activate+95 ()
ff0016e03870 zfs:metaslab_group_alloc+246 ()
ff0016e03930 zfs:metaslab_alloc_dva+2a4 ()
ff0016e03a00 zfs:metaslab_alloc+d4 ()
ff0016e03a60 zfs:zio_dva_allocate+db ()
ff0016e03a90 zfs:zio_execute+8d ()
ff0016e03b30 genunix:taskq_thread+248 ()
ff0016e03b40 unix:thread_start+8 ()

panic: entering debugger (no dump device, continue to reboot)

Welcome to kmdb
Loaded modules: [ scsi_vhci mac uppc sd unix cpu_ms.AuthenticAMD.15 mpt zfs 
krtld sata apix genunix specfs pcplusmp cpu.generic ]
[0] 


Please note that my workstation (where the issue happened first, although I 
remember just the panic's first line) is an Asus P5W64 WS Pro + 6GB ECC RAM + 2x 500GB 
Hitachi Travelstar 2.5" 7200RPM drives connected to the mobo's ICHx (I don't know 
which ICH the i975X chipset provides...). When this happened I put a Supermicro 
AOC-SAT2-MV8 into the workstation and tested it with 3 old drives 
on which an OS 2009.06 instance was luckily still present. The machine worked as 
expected, so I hooked my 500GB Hitachis to the card and got the same panic. I 
should also note that before this I ran 2 whole passes of memtest86 without 
any memory error. So it looks more like a drive issue, hence I put the Supermicro 
card into my testing HP585/4-Opteron box, hooked the 3 old drives to it, verified that 
they behave as they should, and then tested my workstation's 500GB Hitachis. 
Again the same panic -- at which point I took a notebook and wrote the 
panic down from the HP585's console output. So the panic above is from the 
2x Hitachi Travelstar 500GB 2.5" 7200RPM drives hooked to the 
Supermicro AOC-SAT2-MV8 card in a 133MHz PCI-X slot inside the HP585 box, 
during an attempt to boot Solaris 11 Express 2010.11. Ask if you also 
need the panic transcribed from the original workstation.

Anyway, now for the sad part. This is my workstation rpool, holding 2-3 months of 
un-backed-up e-mail and the last 3-4 weeks of un-backed-up work. 
The pool also still holds preserved BEs of OS 2009.11 and OpenSolaris snv_134b. 
I've tried to boot both while the drives were still in the workstation hooked to 
the ICHx, but the panic was still the same (allocating already allocated 
segment).
Now, is there any chance to get my data back?

One last note: this pool is a mirror of the two drives. Question: shall I 
attempt attaching just one of the drives for a test, and then the other if the 
first test does not succeed? I've not attempted this yet, as I would like to get 
some ZFS expert advice on it first.

Thanks for any idea how to proceed!
Karel
PS: shall I also enter some bugreport for it?
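One thing I've seen mentioned for this class of panic, and on which I'd like an expert opinion before trying it, is turning zfs_panic_recover() into a warning via /etc/system so the pool can at least be imported for data evacuation. Treat this as a hearsay sketch, not verified advice:

```
* /etc/system fragment: turn the "allocating allocated segment"
* panic into a logged warning; for data rescue only.
set zfs:zfs_recover = 1
set aok = 1
```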


Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-09-07 Thread Karel Gardas
What's your uptime? The scrubber usually runs during idle time and usually 
waits quite a long while, nearly until the deadline, which is IIRC 12 hours. So do 
you have more than 12 hours of uptime?


Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-09-03 Thread Karel Gardas
Hello,
your impression that (Open)Solaris dropped ECC support in 2009.06 is a 
misunderstanding. OS 2009.06 supports ECC just as 2005 did. Just 
install it and use my updated ecccheck.pl script to get informed about errors. 
You can also verify that Solaris' memory scrubber is really running, if you 
are that curious: 
http://developmentonsolaris.wordpress.com/2009/03/06/how-to-make-sure-memory-scrubber-is-running/
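As a quick check, something along these lines should reveal whether the scrub thread exists (the "memscrub" name is from memory, so treat it as an assumption; the blog post above has the authoritative steps):

```shell
# Look for the kernel memory-scrub thread in the kernel thread list
# (requires root, runs against the live kernel):
echo "::threadlist -v" | mdb -k | grep -i memscrub
```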
Karel