[zfs-discuss] Help with the best layout

2008-02-15 Thread Kim Tingkær
Hi everybody, thanks for a very good source of information! I hope maybe you guys can help out a little. I have 3 disks: one 300GB USB and 2x150GB IDE. I would like to get the most space out of whatever configuration I apply. So I've been thinking (and testing without success), is it at all

Re: [zfs-discuss] Help with the best layout

2008-02-15 Thread Kim Tingkær
Using ZFS of course *g*

[zfs-discuss] iscsi connection aborted.

2008-02-15 Thread James Nord
Hi, I'm trying to boot an HP DL360 G5 via iSCSI from a Solaris 10 u4 ZFS device, but it's failing the login at boot. POST messages from the DL360: Starting iSCSI boot option ROM initialization... Connecting...connected. Logging in...error - failing. Interestingly (and correctly) the
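For context, the Solaris-side target in a setup like this would typically be a ZFS volume exported through the shareiscsi/iscsitadm target stack of that era; a minimal sketch, assuming that stack and hypothetical pool/volume names:

  # create a 32GB volume and advertise it as an iSCSI target
  zfs create -V 32g tank/dl360boot
  zfs set shareiscsi=on tank/dl360boot
  # confirm the target is being advertised
  iscsitadm list target

The failure reported here is in the option ROM's login phase, so the initiator side is just as likely a suspect as the target configuration.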

Re: [zfs-discuss] Which DTrace provider to use

2008-02-15 Thread Roch Bourbonnais
On 14 Feb. 08 at 02:22, Marion Hakanson wrote: [EMAIL PROTECTED] said: It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP. Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty handily pull 120MB/sec from it, and write at over 100MB/sec. It falls
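For readers following the thread title: the io provider is the usual DTrace starting point for watching disk traffic, e.g. this one-liner counting I/O starts per executable:

  dtrace -n 'io:::start { @[execname] = count(); }'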

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Roch Bourbonnais
On 15 Feb. 08 at 03:34, Bob Friesenhahn wrote: On Thu, 14 Feb 2008, Tim wrote: If you're going for best single file write performance, why are you doing mirrors of the LUNs? Perhaps I'm misunderstanding why you went from one giant raid-0 to what is essentially a raid-10. That

[zfs-discuss] ZFS write throttling

2008-02-15 Thread Philip Beevers
Hi everyone, This is my first post to zfs-discuss, so be gentle with me :-) I've been doing some testing with ZFS - in particular, in checkpointing the large, proprietary in-memory database which is a key part of the application I work on. In doing this I've found what seems to be some fairly

Re: [zfs-discuss] ZFS taking up to 80 seconds to flush a single 8KB O_SYNC block.

2008-02-15 Thread Roch Bourbonnais
On 10 Feb. 08 at 12:51, Robert Milkowski wrote: Hello Nathan, Thursday, February 7, 2008, 6:54:39 AM, you wrote: NK For kicks, I disabled the ZIL: zil_disable/W0t1, and that made not a NK pinch of difference. :) Have you exported and then imported the pool to get zil_disable into
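The point being raised: zil_disable is only consulted when a dataset's ZIL is opened at mount time, so the usual test sequence is (pool name hypothetical):

  echo zil_disable/W0t1 | mdb -kw   # set the tunable in the live kernel
  zpool export tank                 # unmount the datasets...
  zpool import tank                 # ...and remount them so the setting takes effect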

Re: [zfs-discuss] ZFS write throttling

2008-02-15 Thread Roch Bourbonnais
On 15 Feb. 08 at 11:38, Philip Beevers wrote: Hi everyone, This is my first post to zfs-discuss, so be gentle with me :-) I've been doing some testing with ZFS - in particular, in checkpointing the large, proprietary in-memory database which is a key part of the application I work

Re: [zfs-discuss] ZFS write throttling

2008-02-15 Thread Philip Beevers
Hi Roch, Thanks for the response. Throttling is being addressed. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6429205 BTW, the new code will adjust write speed to disk speed very quickly. You will not see those ultra fast initial checkpoints. Is this a concern?

Re: [zfs-discuss] Help with the best layout

2008-02-15 Thread Ross
I thought that too, but actually, I'm not sure you can. You can stripe multiple mirror or raid sets with zpool create, but I don't see any documentation or examples for mirroring a raid set. However, in this case even if you could, you might not want to. Creating a stripe that way will
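For reference, the striped-mirrors form Ross mentions looks like this (device names hypothetical); ZFS stripes across the two mirror vdevs automatically:

  zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

There is indeed no vdev type for mirroring a whole RAID-Z set; zpool expresses redundancy per vdev, not on top of the stripe.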

Re: [zfs-discuss] ZFS write throttling

2008-02-15 Thread Tao Chen
On 2/15/08, Roch Bourbonnais [EMAIL PROTECTED] wrote: On 15 Feb. 08 at 11:38, Philip Beevers wrote: [...] Obviously this isn't good behaviour, but it's particularly unfortunate given that this checkpoint is stuff that I don't want to retain in any kind of cache anyway - in fact,

Re: [zfs-discuss] [storage-discuss] Preventing zpool imports on boot

2008-02-15 Thread Mike Gerdts
On Thu, Feb 14, 2008 at 11:17 PM, Dave [EMAIL PROTECTED] wrote: I don't want Solaris to import any pools at bootup, even when there were pools imported at shutdown/at crash time. The process to prevent importing pools should be automatic and not require any human intervention. I want to

Re: [zfs-discuss] Help with the best layout

2008-02-15 Thread Richard Elling
Ross wrote: I thought that too, but actually, I'm not sure you can. You can stripe multiple mirror or raid sets with zpool create, but I don't see any documentation or examples for mirroring a raid set. Split the USB disk in half, then mirror each IDE disk to a USB disk half. However,
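A sketch of Richard's suggestion, assuming the 300GB USB disk is split into two ~150GB slices (device names hypothetical):

  # s0 and s1 are two ~150GB slices on the USB disk
  zpool create tank mirror c2t0d0 c3t0d0s0 mirror c2t1d0 c3t0d0s1

That gives roughly 300GB of mirrored space, at the cost of one side of both mirrors sharing the single USB spindle.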

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-15 Thread Neil Perrin
Nathan Kroenert wrote: And something I was told only recently - It makes a difference if you created the file *before* you set the recordsize property. If you created them after, then no worries, but if I understand correctly, if the *file* was created with 128K recordsize, then it'll
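The ordering issue under discussion, sketched with a hypothetical dataset:

  zfs create tank/db
  zfs set recordsize=8k tank/db   # set BEFORE creating the data files
  # files created from here on use 8k records; a file that already grew
  # past one record while recordsize was 128k keeps 128k records

For a database workload the property is typically matched to the database block size before any data is loaded.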

[zfs-discuss] How to set ZFS metadata copies=3?

2008-02-15 Thread Vincent Fox
Let's say you are paranoid and have built a pool with 40+ disks in a Thumper. Is there a way to set metadata copies=3 manually? After having built RAIDZ2 sets with 7-9 disks and then pooled these together, it just seems like a little bit of extra insurance to increase metadata copies. I don't

Re: [zfs-discuss] ZFS write throttling

2008-02-15 Thread Bob Friesenhahn
On Fri, 15 Feb 2008, Roch Bourbonnais wrote: The latter appears to be bug 6429855. But the underlying behaviour doesn't really seem desirable; are there plans afoot to do any work on ZFS write throttling to address this kind of thing? Throttling is being addressed.

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Roch Bourbonnais
On 15 Feb. 08 at 18:24, Bob Friesenhahn wrote: On Fri, 15 Feb 2008, Roch Bourbonnais wrote: As mentioned before, the write rate peaked at 200MB/second using RAID-0 across 12 disks exported as one big LUN. What was the interlace on the LUN? The question was about LUN interlace, not

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Bob Friesenhahn
On Fri, 15 Feb 2008, Roch Bourbonnais wrote: What was the interlace on the LUN? The question was about LUN interlace, not interface. 128K to 1M works better. The segment size is set to 128K. The max the 2540 allows is 512K. Unfortunately, the StorageTek 2540 and CAM documentation does not

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Peter Tribble
On Fri, Feb 15, 2008 at 12:30 AM, Bob Friesenhahn [EMAIL PROTECTED] wrote: Under Solaris 10 on a 4-core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives, connected via load-shared 4Gbit FC links. This week I have tried many different

Re: [zfs-discuss] ZFS write throttling

2008-02-15 Thread Marion Hakanson
[EMAIL PROTECTED] said: I also tried using O_DSYNC, which stops the pathological behaviour but makes things pretty slow - I only get a maximum of about 20MBytes/sec, which is obviously much less than the hardware can sustain. I may misunderstand this situation, but while you're waiting for

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Bob Friesenhahn
On Fri, 15 Feb 2008, Peter Tribble wrote: Each LUN is accessed through only one of the controllers (I presume the 2540 works the same way as the 2530 and 61X0 arrays). The paths are active/passive (if the active fails it will relocate to the other path). When I set mine up the first time it

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Bob Friesenhahn
On Fri, 15 Feb 2008, Peter Tribble wrote: May not be relevant, but still worth checking - I have a 2530 (which ought to be the same, only SAS instead of FC), and got fairly poor performance at first. Things improved significantly when I got the LUNs properly balanced across the controllers.

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Luke Lonergan
Hi Bob, I'm assuming you're measuring sequential write speed - posting the iozone results would help guide the discussion. For the configuration you describe, you should definitely be able to sustain 200 MB/s write speed for a single file, single thread due to your use of 4Gbps Fibre Channel
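Since Luke asks for iozone output, a typical sequential-write invocation for this kind of test might be (file path and sizes illustrative; the file should be larger than RAM to defeat caching):

  iozone -i 0 -s 8g -r 128k -f /tank/iozone.tmp

-i 0 selects the write/rewrite test, -s the file size, and -r the record size.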

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Bob Friesenhahn
On Fri, 15 Feb 2008, Luke Lonergan wrote: I only managed to get 200 MB/s write when I did RAID 0 across all drives using the 2540's RAID controller and with ZFS on top. Ridiculously bad. I agree. :-( While I agree that data is sent twice (actually up to 8X if striping across four mirrors)

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Albert Chin
On Fri, Feb 15, 2008 at 09:00:05PM, Peter Tribble wrote: On Fri, Feb 15, 2008 at 8:50 PM, Bob Friesenhahn [EMAIL PROTECTED] wrote: On Fri, 15 Feb 2008, Peter Tribble wrote: May not be relevant, but still worth checking - I have a 2530 (which ought to be the same, only SAS

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-15 Thread Richard Elling
Nathan Kroenert wrote: And something I was told only recently - It makes a difference if you created the file *before* you set the recordsize property. Actually, it has always been true for RAID-0, RAID-5, RAID-6. If your I/O strides over two sets then you end up doing more I/O, perhaps twice

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Bob Friesenhahn
On Fri, 15 Feb 2008, Bob Friesenhahn wrote: Notice that the first six LUNs are active to one controller while the second six LUNs are active to the other controller. Based on this, I should rebuild my pool by splitting my mirrors across this boundary. I am really happy that ZFS makes such
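Bob's planned rebuild, sketched with hypothetical device names where the first six LUNs are active on one controller and the second six on the other:

  zpool create tank mirror lun0 lun6 mirror lun1 lun7 mirror lun2 lun8 \
      mirror lun3 lun9 mirror lun4 lun10 mirror lun5 lun11

Each mirror then has one side on each controller, spreading writes across both active paths.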

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Peter Tribble
On Fri, Feb 15, 2008 at 8:50 PM, Bob Friesenhahn [EMAIL PROTECTED] wrote: On Fri, 15 Feb 2008, Peter Tribble wrote: May not be relevant, but still worth checking - I have a 2530 (which ought to be the same, only SAS instead of FC), and got fairly poor performance at first. Things

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Richard Elling
Bob Friesenhahn wrote: On Fri, 15 Feb 2008, Luke Lonergan wrote: I only managed to get 200 MB/s write when I did RAID 0 across all drives using the 2540's RAID controller and with ZFS on top. Ridiculously bad. I agree. :-( While I agree that data is sent twice

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Bob Friesenhahn
On Fri, 15 Feb 2008, Albert Chin wrote: http://groups.google.com/group/comp.unix.solaris/browse_frm/thread/59b43034602a7b7f/0b500afc4d62d434?lnk=stq=#0b500afc4d62d434 This is really discouraging. Based on these newsgroup postings I am thinking that the Sun StorageTek 2540 was not a good

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-15 Thread Mattias Pantzare
If you created them after, then no worries, but if I understand correctly, if the *file* was created with 128K recordsize, then it'll keep that forever... Files have nothing to do with it. The recordsize is a file system parameter. It gets a little more complicated because the

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Joel Miller
The segment size is the amount of contiguous space that each drive contributes to a single stripe. So if you have a 5-drive RAID-5 set @ 128k segment size, a single stripe = (5-1)*128k = 512k. BTW, did you tweak the cache sync handling on the array? -Joel

[zfs-discuss] Cannot do simultaneous read/write to ZFS over smb.

2008-02-15 Thread Sam
Me again, thanks for all the previous help - my 10-disk RAIDZ2 is running mostly great. Just ran into a problem though: I have the RAIDZ2 partition mounted to OS X via smb, and I can upload OR download data to it just fine; however, if I start an upload then start a download, the upload fails and

[zfs-discuss] SunMC module for ZFS

2008-02-15 Thread Torrey McMahon
Anyone have a pointer to a general ZFS health/monitoring module for SunMC? There isn't one baked into SunMC proper, which means I get to write one myself if someone hasn't already done it. Thanks.

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-15 Thread Nathan Kroenert
What about new blocks written to an existing file? Perhaps we could make that clearer in the manpage too... hm. Mattias Pantzare wrote: If you created them after, then no worries, but if I understand correctly, if the *file* was created with 128K recordsize, then it'll keep that

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-15 Thread Nathan Kroenert
Hey, Richard - I'm confused now. My understanding was that any files created after the recordsize was set would use that as the new maximum recordsize, but files already created would continue to use the old recordsize. Though I'm now a little hazy on what will happen when the new existing

Re: [zfs-discuss] [storage-discuss] Preventing zpool imports on boot

2008-02-15 Thread George Wilson
Mike Gerdts wrote: On Feb 15, 2008 2:31 PM, Dave [EMAIL PROTECTED] wrote: This is exactly what I want - Thanks! This isn't in the man pages for zfs or zpool in b81. Any idea when this feature was integrated? Interesting... it is in b76. I checked several other releases both
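The feature is not named in the preview; my assumption is that it is the pool cachefile handling, under which a pool whose cachefile property is none is left out of /etc/zfs/zpool.cache and therefore not imported automatically at boot. A sketch under that assumption (pool name hypothetical):

  zpool import -o cachefile=none tank   # import without recording in zpool.cache
  zpool set cachefile=none tank         # or clear it on an already-imported pool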

Re: [zfs-discuss] How to set ZFS metadata copies=3?

2008-02-15 Thread George Wilson
Vincent Fox wrote: Let's say you are paranoid and have built a pool with 40+ disks in a Thumper. Is there a way to set metadata copies=3 manually? After having built RAIDZ2 sets with 7-9 disks and then pooled these together, it just seems like a little bit of extra insurance to increase
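For background on the question: ZFS already stores metadata redundantly via ditto blocks (two copies of most metadata, three of pool-wide metadata) regardless of any user setting. The user-visible knob is the copies property, which applies to file data, with the metadata ditto counts scaling alongside it; to my knowledge there is no property that sets metadata copies directly:

  zfs set copies=2 tank/important   # store two copies of each data block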

[zfs-discuss] 'du' is not accurate on zfs

2008-02-15 Thread Bob Friesenhahn
I have a script which generates a file and then immediately uses 'du -h' to obtain its size. With Solaris 10 I notice that this often returns an incorrect value of '0' as if ZFS is lazy about reporting actual disk use. Meanwhile, 'ls -l' does report the correct size. Bob
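A quick way to reproduce the behavior Bob describes (paths illustrative): du reports allocated blocks, which ZFS only assigns once the transaction group commits, while ls -l reports the logical file length immediately.

  dd if=/dev/urandom of=/tank/testfile bs=1024k count=10
  du -h /tank/testfile   # may report 0 right after the write
  sync                   # push the pending txg (or just wait a few seconds)
  du -h /tank/testfile   # now shows the real on-disk size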