Re: [zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?

2008-03-27 Thread Volker A. Brandt
Hello Kyle!

> All of these mounts are failing at bootup with messages about non-existent mountpoints. My guess is that it's because when /etc/vfstab is processed, the ZFS '/export/OSImages' isn't mounted yet?

Yes, that is absolutely correct. For details, look at the start method of

Re: [zfs-discuss] Periodic flush

2008-03-27 Thread Selim Daoud
The question is: does the I/O pausing behaviour you noticed penalize your application? What are the consequences at the application level? For instance, we have seen applications doing some kind of data capture from an external device (video, for example) requiring a constant throughput to disk (data

[zfs-discuss] UFS Formatted ZVOLs and Oracle Databases / MDBMS

2008-03-27 Thread Brandon Wilson
Hi all, here are a couple of questions. Has anyone run Oracle databases off of a UFS-formatted ZVOL? If so, how does it compare in speed to UFS direct I/O? I'm trying my best to get rid of UFS, but ZFS isn't up to par with the speed of UFS direct I/O for MDBMS. So I'm trying to come up with some

Re: [zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?

2008-03-27 Thread Kyle McDonald
Volker A. Brandt wrote:
> Hello Kyle!
>
>> All of these mounts are failing at bootup with messages about non-existent mountpoints. My guess is that it's because when /etc/vfstab is processed, the ZFS '/export/OSImages' isn't mounted yet?
>
> Yes, that is absolutely correct. For details, look

Re: [zfs-discuss] Periodic flush

2008-03-27 Thread Bob Friesenhahn
On Wed, 26 Mar 2008, Neelakanth Nadgir wrote:
> When you experience the pause at the application level, do you see an increase in writes to disk? This might be the regular syncing of the transaction group to disk.

If I use 'zpool iostat' with a one second interval what I see is two or three
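The bursty pattern visible in one-second `zpool iostat` samples can be illustrated with a made-up capture of the write-bandwidth column; the sample values below are assumptions for illustration, not numbers from the thread:

```shell
# Made-up write-bandwidth samples, one per second, mimicking a pool that
# idles for a few seconds and then syncs a transaction group in a burst.
# On a live system these would come from e.g. `zpool iostat tank 1`.
samples='0 0 0 118M 96M 0 0 0 0 122M'

# Count the seconds in which a write burst (non-zero bandwidth) occurred.
bursts=0
for s in $samples; do
  [ "$s" != "0" ] && bursts=$((bursts + 1))
done
echo "burst seconds: $bursts"
```

With a steady writer, the intervals between bursts are where the application may appear to pause while data accumulates for the next transaction group.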

Re: [zfs-discuss] Periodic flush

2008-03-27 Thread Richard Elling
Selim Daoud wrote:
> The question is: does the I/O pausing behaviour you noticed penalize your application? What are the consequences at the application level? For instance, we have seen applications doing some kind of data capture from an external device (video, for example) requiring a constant

Re: [zfs-discuss] Periodic flush

2008-03-27 Thread Neelakanth Nadgir
Bob Friesenhahn wrote:
> On Wed, 26 Mar 2008, Neelakanth Nadgir wrote:
>> When you experience the pause at the application level, do you see an increase in writes to disk? This might be the regular syncing of the transaction group to disk.
>
> If I use 'zpool iostat' with a one second interval what I

Re: [zfs-discuss] Periodic flush

2008-03-27 Thread Bob Friesenhahn
On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:
> This causes the sync to happen much faster, but as you say, suboptimal. Haven't had the time to go through the bug report, but probably CR 6429205 "each zpool needs to monitor its throughput and throttle heavy writers" will help.

I hope that this

[zfs-discuss] ClearCase support for ZFS?

2008-03-27 Thread Nissim Ben-Haim
Hi, does anybody know the latest status of ClearCase support for ZFS? I noticed this from IBM: http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg21155708 I would like to make sure someone has installed and tested it before recommending it to a customer. Regards, Nissim Ben-Haim

Re: [zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?

2008-03-27 Thread Volker A. Brandt
>> The only way I could find was to set the mountpoint of the file system to legacy, and add it to /etc/vfstab. Here's an example:
>
> I tried this last night also, after sending the message, and I made it work. Seems clunky though.

Yes, I also would have liked something more streamlined. But
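The legacy-mountpoint workaround discussed in this thread can be sketched as follows; the pool name, dataset name, and mountpoint are assumptions for illustration, not taken from the messages:

```shell
# Hand mount control of the dataset over to /etc/vfstab. "legacy" means
# ZFS itself will no longer mount it automatically at boot.
zfs set mountpoint=legacy tank/export/OSImages

# A vfstab entry then mounts it in the usual place, ordered with the
# other vfstab entries ("device to fsck" and "fsck pass" are "-" because
# ZFS does not use fsck):
#
#   tank/export/OSImages  -  /export/OSImages  zfs  -  yes  -
```

The trade-off is exactly the "clunky" part above: the dataset loses automatic ZFS mounting, but gains a defined position in the vfstab mount order.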

Re: [zfs-discuss] UFS Formatted ZVOLs and Oracle Databases / MDBMS

2008-03-27 Thread Brandon Wilson
Well, I don't have any hard numbers 'yet'. But sometime in the next couple of weeks, when the Hyperion Essbase install team gets Essbase up and running on a Sun M4000, I plan on taking advantage of the situation to do some stress and performance testing on ZFS and MDBMS. Stuff like UFS+directio, ZFS,

Re: [zfs-discuss] UFS Formatted ZVOLs and Oracle Databases / MDBMS

2008-03-27 Thread Richard Elling
Brandon Wilson wrote:
> Well, I don't have any hard numbers 'yet'. But sometime in the next couple of weeks, when the Hyperion Essbase install team gets Essbase up and running on a Sun M4000, I plan on taking advantage of the situation to do some stress and performance testing on ZFS and MDBMS.

[zfs-discuss] pool hangs for 1 full minute?

2008-03-27 Thread Neal Pollack
For the last few builds of Nevada, if I come back to my workstation after a long idle period such as overnight and try any command that would touch the ZFS filesystem, it hangs for approximately 60 seconds. This includes ls, zpool status, etc. Does anyone have a hint as to how I

Re: [zfs-discuss] pool hangs for 1 full minute?

2008-03-27 Thread Tomas Ögren
On 27 March, 2008 - Neal Pollack sent me these 1,9K bytes:
> Also given: I have been doing live upgrade every other build since approx Nevada build 46. I am running on a Sun Ultra 40 modified to include 8 disks (second backplane and SATA quad cable).

It appears that the ZFS filesystems are

Re: [zfs-discuss] pool hangs for 1 full minute?

2008-03-27 Thread Neal Pollack
Tomas Ögren wrote:
> On 27 March, 2008 - Neal Pollack sent me these 1,9K bytes:
>> Also given: I have been doing live upgrade every other build since approx Nevada build 46. I am running on a Sun Ultra 40 modified to include 8 disks (second backplane and SATA quad cable).
>
> It appears that

Re: [zfs-discuss] Periodic flush

2008-03-27 Thread eric kustarz
On Mar 27, 2008, at 9:24 AM, Bob Friesenhahn wrote:
> On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:
>> This causes the sync to happen much faster, but as you say, suboptimal. Haven't had the time to go through the bug report, but probably CR 6429205 "each zpool needs to monitor its throughput

[zfs-discuss] kernel memory and zfs

2008-03-27 Thread Matt Cohen
We have a 32 GB RAM server running about 14 zones. There are multiple databases, application servers, web servers, and FTP servers running in the various zones. I understand that using ZFS will increase kernel memory usage; however, I am a bit concerned at this point.

Re: [zfs-discuss] Periodic flush

2008-03-27 Thread abs
You may want to try disabling the disk write cache on the single disk. Also, for the RAID, disable 'host cache flush' if such an option exists. That solved the problem for me. Let me know.

Bob Friesenhahn wrote:
> On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:
>> This causes the

Re: [zfs-discuss] kernel memory and zfs

2008-03-27 Thread Richard Elling
Matt Cohen wrote:
> We have a 32 GB RAM server running about 14 zones. There are multiple databases, application servers, web servers, and FTP servers running in the various zones. I understand that using ZFS will increase kernel memory usage; however, I am a bit concerned at this point.

[zfs-discuss] nfs and smb performance

2008-03-27 Thread abs
Hello all, I have two Xraids connected via fibre to a PowerEdge 2950. The two Xraids are configured with two RAID5 volumes each, giving me a total of four RAID5 volumes. These are striped across in ZFS. The read and write speeds local to the machine are as expected, but I have noticed some performance

Re: [zfs-discuss] kernel memory and zfs

2008-03-27 Thread Thomas Maier-Komor
Richard Elling wrote:
> The size of the ARC (cache) is available from kstat in the zfs module (kstat -m zfs). Neel wrote a nifty tool to track it over time called arcstat; see http://www.solarisinternals.com/wiki/index.php/Arcstat Remember that this is a cache and subject to eviction
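On a live Solaris system the ARC size quoted above comes from the arcstats kstat; as a minimal sketch, the "size" statistic can be pulled out with awk. The two lines of sample output below are made up (the real values depend on the machine), standing in for a pipe from `kstat -p -m zfs -n arcstats`:

```shell
# Hypothetical `kstat -p -m zfs -n arcstats` output (values are invented);
# on a real system, pipe the actual command in instead of this sample.
sample='zfs:0:arcstats:size    536870912
zfs:0:arcstats:c       1073741824'

# Extract the current ARC size (the "size" statistic, in bytes).
arc_size=$(printf '%s\n' "$sample" | awk '$1 ~ /:size$/ { print $2 }')
echo "ARC size: $arc_size bytes"
```

Sampling this in a loop is essentially what arcstat automates, alongside hit/miss counters from the same kstat module.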

Re: [zfs-discuss] nfs and smb performance

2008-03-27 Thread Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun Microsystems
Hello abs,

Would you be able to repeat the same tests with the in-kernel CIFS option in ZFS instead of using Samba? It would be interesting to see how the kernel CIFS performance compares to Samba's.

Peter

abs wrote:
> Hello all, I have two Xraids connected via fibre to a PowerEdge 2950. The two Xraids are

Re: [zfs-discuss] nfs and smb performance

2008-03-27 Thread Dale Ghent
Have you turned on the "Ignore cache flush commands" option on the Xraids? You should ensure this is on when using ZFS on them.

/dale

On Mar 27, 2008, at 6:16 PM, abs wrote:
> Hello all, I have two Xraids connected via fibre to a PowerEdge 2950. The two Xraids are configured with two RAID5