Re: [zfs-discuss] ZFS is very slow in our test, when the capacity is high

2007-10-12 Thread Thomas Liesner
Hi, did you read the following? http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide Currently, pool performance can degrade when a pool is very full and filesystems are updated frequently, such as on a busy mail server. Under these circumstances, keep pool space under 80%
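A quick way to watch that threshold from cron is a sketch like the following (the pool name "tank" and the 80% cutoff are placeholders taken from the guide's rule of thumb, not anything from this thread):

    #!/bin/sh
    # Warn when a pool crosses roughly 80% utilization, the point at which
    # the Best Practices Guide says performance can start to degrade.
    CAP=`zpool list -H -o capacity tank | tr -d '%'`
    if [ "$CAP" -ge 80 ]; then
        echo "WARNING: pool tank is at ${CAP}% full" | mailx -s "zpool capacity" root
    fi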

Re: [zfs-discuss] zfs: allocating allocated segment(offset=77984887808

2007-10-12 Thread Jürgen Keil
size=66560) how does one free

Re: [zfs-discuss] zfs: allocating allocated segment (offset=

2007-10-12 Thread Rob Logan
I suspect that the bad RAM module might have been the root cause for that "freeing free segment" zfs panic, perhaps. I removed two 2G SIMMs but left the two 512M SIMMs, and also removed kernelbase, but the zpool import still crashed the machine. It's also registered ECC RAM; memtest86 v1.7

Re: [zfs-discuss] Zone root on a ZFS filesystem and Cloning zones

2007-10-12 Thread Dick Davies
On 11/10/2007, Dick Davies [EMAIL PROTECTED] wrote: No, they aren't (i.e. zoneadm clone on S10u4 doesn't use zfs snapshots). I have a workaround I'm about to blog. Here it is - hopefully it will be of some use: http://number9.hellooperator.net/articles/2007/10/11/fast-zone-cloning-on-solaris-10
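For readers who skip the link, the general shape of a snapshot-based zone clone (a rough sketch of the technique only, not necessarily the exact steps in the blog post; dataset and zone names are placeholders) is:

    # Keep each zone root on its own ZFS dataset, then snapshot and clone it
    zfs snapshot tank/zones/zone1@golden
    zfs clone tank/zones/zone1@golden tank/zones/zone2
    # The clone is nearly instant and shares blocks with the original; the
    # remaining work is configuring a new zone whose zonepath points at
    # /tank/zones/zone2 and fixing up its identity (hostname, sysidcfg).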

[zfs-discuss] zfs/zpools iscsi

2007-10-12 Thread Krzys
Hello all, sorry if somebody has already asked this. I was playing today with iSCSI and I was able to create a zpool and then, via iSCSI, I can see it on two other hosts. I was curious if I could use zfs to have it shared on those two hosts, but apparently I was unable to do it for obvious reasons.

Re: [zfs-discuss] Inherited quota question

2007-10-12 Thread Rahul Mehta
Has there been any solution to the problem discussed above in ZFS version 8??

[zfs-discuss] XFS_IOC_FSGETXATTR XFS_IOC_RESVSP64 like options in ZFS ?

2007-10-12 Thread Manoj Nayak
Hi, I am using XFS_IOC_FSGETXATTR in an ioctl() call on Linux running the XFS file system. I want to use a similar thing on Solaris running the ZFS file system. struct fsxattr fsx; ioctl(fd, XFS_IOC_FSGETXATTR, &fsx); The above call gets additional attributes associated with files in XFS file systems. The

Re: [zfs-discuss] zfs/zpools iscsi

2007-10-12 Thread Mattias Pantzare
2007/10/12, Krzys [EMAIL PROTECTED]: Hello all, sorry if somebody has already asked this. I was playing today with iSCSI and I was able to create a zpool and then, via iSCSI, I can see it on two other hosts. I was curious if I could use zfs to have it shared on those two hosts but apparently

Re: [zfs-discuss] XFS_IOC_FSGETXATTR XFS_IOC_RESVSP64 like options in ZFS ?

2007-10-12 Thread Darren J Moffat
Manoj Nayak wrote: Hi, I am using XFS_IOC_FSGETXATTR in an ioctl() call on Linux running the XFS file system. I want to use a similar thing on Solaris running the ZFS file system. See openat(2). -- Darren J Moffat
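Assuming the openat(2) pointer refers to Solaris' O_XATTR named-attribute interface, the quickest way to poke at those attributes from the shell is runat(1), which is roughly a front end to the same mechanism. A small sketch, with file and attribute names as placeholders:

    # List the extended attributes attached to a file
    runat /export/data/file.txt ls -l
    # Read one attribute
    runat /export/data/file.txt cat myattr
    # Add or replace an attribute by copying a regular file into place
    runat /export/data/file.txt cp /tmp/attrvalue myattr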

Re: [zfs-discuss] ZFS is very slow in our test, when the capacity is high

2007-10-12 Thread LI Xin
eSX wrote: We are testing ZFS in OpenSolaris, writing TBs of data to ZFS. But when the capacity is close to 90%, ZFS slows down badly. We do ls, rm, and some writes, and those operations perform terribly. For example, an ls in a directory which has about 4000 directories takes about 5-10s!

[zfs-discuss] practicality of zfs send/receive for failover

2007-10-12 Thread Paul B. Henson
We've been evaluating ZFS as a possible enterprise file system for our campus. Initially, we were considering one large cluster, but it doesn't look like that will scale to meet our needs. So, now we are thinking about breaking our storage across multiple servers, probably three. However, I
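For context, the mechanism being weighed here (a minimal sketch; pool, filesystem and host names are placeholders) is periodic snapshot replication to a standby box:

    # One-time full copy to the standby server
    zfs snapshot tank/home@rep1
    zfs send tank/home@rep1 | ssh standby zfs receive backup/home

    # Thereafter, ship only the blocks changed since the last snapshot
    zfs snapshot tank/home@rep2
    zfs send -i tank/home@rep1 tank/home@rep2 | ssh standby zfs receive -F backup/home

The standby copy lags by one replication interval, so "failover" here really means accepting the loss of whatever changed since the last successful send.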

Re: [zfs-discuss] zfs/zpools iscsi

2007-10-12 Thread roland
"I was curious if I could use zfs to have it shared on those two hosts" - no, that's not possible for now. "but apparently I was unable to do it for obvious reasons." - you will corrupt your data! "On my Linux Oracle RAC I was using OCFS which works just as I need it" - yes, because OCFS is built for

Re: [zfs-discuss] zfs/zpools iscsi

2007-10-12 Thread Richard Elling
roland wrote: Are there any solutions out there of this kind? I'm not that deep into Solaris, but IIRC there isn't one for free. Veritas is quite popular, but you need to spend lots of bucks for this. Maybe SAM-QFS? We have lots of customers using shared QFS with RAC. QFS is on the road to open

Re: [zfs-discuss] ZFS array NVRAM cache

2007-10-12 Thread Vincent Fox
So what are the failure modes to worry about? I'm not exactly sure what the implications of this nocache option are for my configuration. Say, from a recent example: I have an overtemp, and first one array shuts down, then the other one. I come in after A/C is restored, shut down and repower

Re: [zfs-discuss] io:::start and zfs filenames?

2007-10-12 Thread Matthew Ahrens
Jim Mauro wrote: Hi Neel - Thanks for pushing this out. I've been tripping over this for a while. You can instrument zfs_read() and zfs_write() to reliably track filenames: #!/usr/sbin/dtrace -s #pragma D option quiet zfs_read:entry, zfs_write:entry { printf(%s of
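For reference, a self-contained reconstruction of that kind of one-liner (the probe body here is assumed from the description, not quoted from the original post) is:

    # Print the file path for every ZFS-level read and write
    dtrace -qn '
    zfs_read:entry, zfs_write:entry
    {
        printf("%s of %s\n", probefunc, stringof(args[0]->v_path));
    }'

It relies on the vnode being the first argument to zfs_read()/zfs_write() and on v_path being populated, which is usually, but not always, the case.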

Re: [zfs-discuss] ZFS Space Map optimalization

2007-10-12 Thread Matthew Ahrens
Łukasz K wrote: Now space maps, intent log, spa history are compressed. All normal metadata (including space maps and spa history) is always compressed. The intent log is never compressed. Can you tell me where the space map is compressed? We specify that it should be compressed in

Re: [zfs-discuss] practicality of zfs send/receive for failover

2007-10-12 Thread Vincent Fox
So the problem with the zfs send/receive thing is: what if your network glitches out during the transfers? We have these once a day due to some as-yet-undiagnosed switch problem, a chop-out of 50 seconds or so which is enough to trip all our IPMP setups and enough to abort SSH transfers in
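One workaround sketch for a flaky network path (standard commands, but whether the extra latency is acceptable for failover is a separate question) is to stage the stream in a file, so a dropped connection only restarts the copy rather than the whole send:

    # Land the incremental stream in a file instead of piping it over ssh
    zfs snapshot tank/home@rep2
    zfs send -i tank/home@rep1 tank/home@rep2 > /var/tmp/home.rep2.stream

    # rsync --partial can pick up where an interrupted transfer left off
    rsync --partial /var/tmp/home.rep2.stream standby:/var/tmp/

    # Apply it on the standby only once the file has arrived intact
    ssh standby 'zfs receive -F backup/home < /var/tmp/home.rep2.stream'

Keep in mind a stored send stream has no redundancy of its own; verify the copy before destroying the source snapshots.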

Re: [zfs-discuss] Some test results: ZFS + SAMBA + Sun Fire X4500 (Thumper)

2007-10-12 Thread Matthew Ahrens
Tim Thomas wrote: Hi, this may be of interest: http://blogs.sun.com/timthomas/entry/samba_performance_on_sun_fire I appreciate that this is not a frightfully clever set of tests, but I needed some throughput numbers and the easiest way to share the results is to blog. It seems

Re: [zfs-discuss] ZFS 60 second pause times to read 1K

2007-10-12 Thread Matthew Ahrens
Michael Kucharski wrote: We have an x4500 set up as a single 4*(raidz2 9+2)+2 spare pool and have the file system mounted over v5 krb5 NFS and accessed directly. The pool is a 20TB pool and is using . There are three filesystems: backup, test and home. Test has about 20 million files and

[zfs-discuss] enlarge a mirrored pool

2007-10-12 Thread Ivan Wang
Hi all, Forgive me if this is a dumb question. Is it possible for a two-disk mirrored zpool to be seamlessly enlarged by gradually replacing the existing disks with larger ones? Say, in a constrained desktop where there is only space for two internal disks, could I just begin with two 160G disks,

Re: [zfs-discuss] enlarge a mirrored pool

2007-10-12 Thread Erik Trimble
Ivan Wang wrote: Hi all, Forgive me if this is a dumb question. Is it possible for a two-disk mirrored zpool to be seamlessly enlarged by gradually replacing the existing disks with larger ones? Say, in a constrained desktop, only space for two internal disks is available, could I just

Re: [zfs-discuss] enlarge a mirrored pool

2007-10-12 Thread Neil Perrin
Erik Trimble wrote: Ivan Wang wrote: Hi all, Forgive me if this is a dumb question. Is it possible for a two-disk mirrored zpool to be seamlessly enlarged by gradually replacing the existing disks with larger ones? Say, in a constrained desktop, only space for two internal disks is
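For completeness, the usual recipe for growing a two-way mirror in place (a sketch; device names are placeholders, and the extra space only shows up after both halves are the larger size) is:

    # Physically swap the first 160G disk for a larger one, then resilver it
    zpool replace tank c0t0d0
    zpool status tank        # wait for the resilver to finish

    # Repeat for the second disk
    zpool replace tank c0t1d0
    zpool status tank

    # Once both sides of the mirror are the larger size the pool can use the
    # extra capacity; on some releases an export/import is needed to see it.
    zpool export tank && zpool import tank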