Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-01-29 Thread kristof
additional guests. I'm now waiting for new SSD disks (STEC Zeus 18GB and STEC Mach 100GB), since those are used in the Sun 7000 product. I hope they perform better. Kristof

Re: [zfs-discuss] Comparison between the S-TEC Zeus and the Intel X25-E

2009-01-20 Thread kristof
I have been testing the 32 GB X25-E last week. When I connect it to one of the onboard (Tyan 2925) SATA ports, it's not detected by OpenSolaris 2008.11. When I connect it to a PCIe LSI 3081, the disk is found, but I'm getting into trouble when I run performance tests via filebench. Filebench

Re: [zfs-discuss] ZFS iSCSI (For VirtualBox target) and SMB

2009-01-04 Thread kristof
I've seen this error often, but mostly the volume is shared. I think it happens as soon as the volume has snapshots. To check whether the volume is exposed or not, you can run: iscsitadm list target -v If the volume shows up, it's OK and you should ignore the message. K
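
A minimal sketch of that check; the snapshot listing is my addition, based on the snapshot theory above:

  # list all configured targets with details; if the volume shows up
  # here, it is exposed and the message can be ignored
  iscsitadm list target -v

  # check whether the volume in question has snapshots
  zfs list -t snapshot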

Re: [zfs-discuss] How to mount rpool and edit vfstab from LiveCD?

2009-01-04 Thread kristof
If you have snapshots of the root filesystem, you can recover the file. To check for snapshots, run: zfs list -t all If you see something like rpool/ROOT/opensola...@x then you are lucky; you will find the original vfstab file in: /b/.zfs/snapshot/<snapshotname>/etc/vfstab K
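
Spelled out as commands (the /b mountpoint is from the LiveCD procedure in this thread; the snapshot name is a placeholder):

  # list filesystems and snapshots to see what exists
  zfs list -t all

  # snapshots are browsable under the hidden .zfs directory of the
  # mounted root filesystem; copy the old vfstab back into place
  cp /b/.zfs/snapshot/<snapshotname>/etc/vfstab /b/etc/vfstab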

Re: [zfs-discuss] cp: Operation not supported

2008-12-11 Thread Kristof Van Damme
mypool/normal
# cp UTF8-Köln.txt /mypool/mixed/
# cp ISO8859-K?ln.txt /mypool/mixed/
cp: cannot stat `/mypool/mixed/ISO8859-K\366ln.txt': Operation not supported
# cp UTF8-Köln.txt /mypool/normal/
# cp ISO8859-K?ln.txt /mypool/normal/
#
Kristof/ (attachment: koeln.tar)
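
This behavior looks like what the utf8only / normalization dataset properties produce; whether that explains the ENOTSUP here is a guess. A quick way to compare the two filesystems from the transcript:

  # both properties are set at creation time; if mypool/mixed has
  # utf8only=on or a normalization setting, non-UTF-8 names are rejected
  zfs get utf8only,normalization mypool/mixed mypool/normal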

Re: [zfs-discuss] cp: Operation not supported

2008-12-10 Thread Kristof Van Damme
with the same result. A truss of the failing cp can be found in truss-ENOTSUP.txt. Notice the \xf6 in the filename passed to stat64, which gives an ENOTSUP. When we use the UTF-8 name, the stat64 succeeds just fine. A truss of this can be found in truss-OK.txt. Cheers, Kristof/

[zfs-discuss] cp: Operation not supported

2008-12-09 Thread Kristof Van Damme
/ # Kristof/

Re: [zfs-discuss] Custom Jumpstart and RAID-10 ZFS rpool

2008-10-29 Thread kristof
I don't think this is possible. I already tried to add extra vdevs after install, but I got an error message telling me that multiple vdevs for rpool are not allowed. K

Re: [zfs-discuss] zpool import: all devices online but: insufficient replicas

2008-10-28 Thread kristof
Hi, today I tried one more time from scratch. I re-installed server B with the latest available OpenSolaris 2008.11 (b99); by the way, server A runs OpenSolaris 2008 b98. I also re-labeled all my disks. This time I can successfully import the pool on server B: [EMAIL PROTECTED]:~# zpool import pool:

[zfs-discuss] zpool import: all devices online but: insufficient replicas

2008-10-27 Thread kristof
ONLINE c8t600144F048FFCCD8E081B33B9800d0 ONLINE But the pool can still be imported on server1, so the pool is still OK. How come I cannot import the pool on another node? Thanks in advance. Kristof

Re: [zfs-discuss] zpool import: all devices online but:

2008-10-27 Thread kristof
post. If something is still unclear, please let me know. Kristof

Re: [zfs-discuss] Slow zpool import with b98

2008-10-20 Thread kristof
I'm also seeing a very slow import on the 2008.11 build 98 prerelease. I have the following setup: a striped zpool of 2 mirrors; both mirrors have 1 local disk and 1 iSCSI disk. I was testing a setup with iscsiboot (Windows Vista) with gpxeboot; every client was booted from an iSCSI-exposed

Re: [zfs-discuss] Slow zpool import with b98

2008-10-20 Thread kristof
I'm also seeing a slow import on the 2008.11 build 98 prerelease, but my situation is a little different. I have the following setup: a striped zpool of 2 mirrors; both mirrors have 1 local disk and 1 iSCSI disk. I was testing iscsiboot (Windows Vista) with gpxeboot; every client was booted

Re: [zfs-discuss] ZFS Mirrors braindead?

2008-10-07 Thread kristof
I don't know if this is already available in S10 10/08, but since OpenSolaris build 71 you can set the zpool failmode property; see: http://opensolaris.org/os/community/arc/caselog/2007/567/ The property can be set to one of three options: wait, continue, or panic. The
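
As a minimal sketch (the pool name tank is a placeholder):

  # show the current failure-mode policy for the pool
  zpool get failmode tank

  # return EIO to new writes instead of blocking when the last device
  # connection is lost; wait (the default) blocks until the device
  # returns, panic crash-dumps the host
  zpool set failmode=continue tank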

Re: [zfs-discuss] iSCSI targets mapped to a VMWare ESX server

2008-04-11 Thread kristof
A colleague told me IET is no longer an ongoing project, so it's obsolete. There's another new Linux iSCSI target: iSCSI-SCST, a fork (with all due respect) of IET with updates to work over SCST as well as many improvements and bugfixes. See http://scst.sourceforge.net/

Re: [zfs-discuss] iSCSI targets mapped to a VMWare ESX server

2008-04-10 Thread kristof
Thanks for pointing me to that article. I see there is a solution (patch) for ietd. Is there any solution for zfs zvols? Is Sun planning any action to solve this issue? K

Re: [zfs-discuss] iSCSI targets mapped to a VMWare ESX server

2008-04-07 Thread kristof
Some time ago I experienced the same issue. Only 1 target could be connected from an ESX host; the others were shown as alternative paths to that target. If I remember correctly, I read on a forum that it has something to do with the disk's serial number. Normally every single (i)SCSI disk

Re: [zfs-discuss] [storage-discuss] OpenSolaris ZFS NAS Setup

2008-04-05 Thread kristof
If you have a mirrored iSCSI zpool, it will NOT panic when 1 of the submirrors is unavailable. zpool status will hang for some time, but after (I think) 300 seconds it will mark the device as unavailable. The panic was the default in the past, and it only occurs if all devices are unavailable.

Re: [zfs-discuss] Per filesystem scrub

2008-03-31 Thread kristof
I would be very happy to have a filesystem-based zfs scrub. We have an 18TB zpool, and it takes more than 2 days to do the scrub. Since we cannot take snapshots during the scrub, this is unacceptable. Kristof

Re: [zfs-discuss] Best practices for ZFS plaiding

2008-03-26 Thread kristof
The best option is to stripe pairs of mirrors. So in your case, create a pool which stripes over 3 mirrors; this will look like:

  pool
    mirror: thumper1 thumper2
    mirror: thumper3 thumper4
    mirror: thumper5 thumper6

So this will stripe over those 3
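
As a sketch, the corresponding create command would be something like this (thumper1..thumper6 stand in for the actual device names):

  # stripe across three two-way mirrors
  zpool create pool \
      mirror thumper1 thumper2 \
      mirror thumper3 thumper4 \
      mirror thumper5 thumper6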

Re: [zfs-discuss] I.O error: zpool metadata corrupted after powercut

2008-01-31 Thread kristof
I don't have an exact copy of the error, but the following message was reported by zpool status: Pool degraded. Metadata corrupted. Please restore pool from backup. All devices were online, but the pool could not be imported. During import we got an I/O error. Krdoor

[zfs-discuss] I.O error: zpool metadata corrupted after powercut

2008-01-30 Thread kristof
In the last 2 weeks we had 2 zpools corrupted. The pool was visible via zpool import, but could not be imported anymore; during the import attempt we got an I/O error. After a first powercut we lost our jumpstart/nfsroot zpool (another pool was still OK). Luckily the jumpstart data was backed up and easily

[zfs-discuss] zpool version 3 Uberblock version 9 , zpool upgrade only half succeeded?

2007-12-13 Thread kristof
We are currently experiencing a very large performance drop on our ZFS storage server. We have 2 pools: stor is a raidz out of 7 iSCSI nodes, and home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not
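
To see how the on-disk pool version compares with what the installed bits support (relevant to the half-succeeded upgrade in the subject line); stor is the pool name from this post:

  # list all pool format versions this build knows about
  zpool upgrade -v

  # show which imported pools are below the current version
  zpool upgrade

  # bring a specific pool up to the current on-disk format
  zpool upgrade stor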

[zfs-discuss] iscsi target secured by CHAP

2007-05-30 Thread kristof
never see the server sending back the challenge in response. What could be going on? Thanks for all your help! Kristof
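
For reference, a hedged sketch of one-way CHAP setup on the Solaris iSCSI target as I understand it from the docs; the initiator IQN, alias, and target name are placeholders and may not match the poster's setup:

  # register the initiator under a local alias
  iscsitadm create initiator --iqn iqn.1986-03.com.sun:01:example client1

  # set the CHAP user name and secret the initiator must present
  # (the second command prompts for the secret)
  iscsitadm modify initiator --chap-name chapuser client1
  iscsitadm modify initiator --chap-secret client1

  # restrict the target to that initiator
  iscsitadm modify target --acl client1 mytarget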