Re: [zfs-discuss] zfs as a cache server
Hello Jean-Noël,

Thursday, April 9, 2009, 3:39:50 PM, you wrote:

JNM> Hi François,
JNM> You should take care of the recordsize in your filesystems. This should
JNM> be tuned according to the size of the most accessed files.

I don't think this is necessary, and in the squid case it will do more harm than good. With the default recordsize of 128KB, zfs will use a smaller block size for smaller files: recordsize is a maximum block size (a limit), not a fixed block size. This is a common misconception about zfs.

Under normal circumstances, when for example Oracle creates a large file, zfs will create the file using a block size equal to recordsize (if the file is larger than recordsize). But in most cases Oracle (or any other program) will access the data in a much smaller logical block (db_block_size in Oracle's case), which forces zfs to read the entire fs block in order to verify its checksums. If the working set is larger than memory and the reads are mostly random reads with a logical size much smaller than the zfs recordsize, this will negatively impact performance.

In the squid case most (all?) files will be read entirely and sequentially, or not at all, so generally the larger the fs block the better, which means the default 128KB (the maximum supported value) is the best option. If a file is, for example, 8KB in size, then zfs will use an 8KB fs block as long as recordsize is >=8KB.

The only issue I'm aware of is the tail block (the last block in a file): if it is considerably smaller than the fixed block size for the file, it will unnecessarily allocate too much disk space. Lowering the recordsize can alleviate the extra disk usage (depending on the particular working set) but generally won't matter from a performance point of view.

--
Best regards,
Robert Milkowski
http://milek.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
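For anyone following along, the point that recordsize is a per-dataset maximum can be checked directly. A minimal sketch, assuming a pool named tank with a dataset tank/squid-cache (both names hypothetical); requires a live ZFS host:

```shell
# Confirm the dataset is using the 128KB default (a maximum, not a fixed size):
zfs get recordsize tank/squid-cache

# If it had previously been lowered, restore the default:
zfs set recordsize=128K tank/squid-cache

# Compare a file's apparent size against its on-disk allocation
# (first column of -s is 512-byte blocks), which shows that small
# files get small blocks even with recordsize=128K:
ls -ls /tank/squid-cache/somefile
```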
Re: [zfs-discuss] zfs as a cache server
Francois,

Your best bet is probably a stripe of mirrors, i.e. a zpool made of many mirrors. This way you have redundancy and fast reads as well. You'll also enjoy pretty quick resilvering in the event of a disk failure. For even faster reads, you can add dedicated L2ARC cache devices (folks typically use SSDs or very fast (15k RPM) SAS drives for this).

-Greg

Francois wrote:
> Hello list,
>
> What would be the best zpool configuration for a cache/proxy server
> (probably based on squid)? In other words, with which zpool configuration
> could I expect the best read performance? (There'll be some writes too,
> but much fewer.)
>
> Thanks.
>
> --
> Francois
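The layout Greg describes can be sketched with standard zpool commands. Pool and device names below are hypothetical (adapt c0t0d0 etc. to your system); this needs a live system with spare disks:

```shell
# Create a pool striped across three two-way mirror vdevs:
zpool create cachepool \
    mirror c0t0d0 c0t1d0 \
    mirror c0t2d0 c0t3d0 \
    mirror c0t4d0 c0t5d0

# Add an SSD as a dedicated L2ARC cache device for faster reads:
zpool add cachepool cache c1t0d0

# Verify the resulting layout:
zpool status cachepool
```

Reads are served from both sides of each mirror and striped across the vdevs, and a failed disk only requires resilvering its own mirror, which is why rebuilds are quick compared to raidz.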
Re: [zfs-discuss] zfs as a cache server
Hi François,

You should take care of the recordsize in your filesystems. This should be tuned according to the size of the most accessed files. Disabling atime is probably also a good idea (but it's probably something you already know ;) ). We've also noticed some cases where enabling compression gave better I/O results (but don't use gzip); this should be done only if your machine is exclusively running the proxy server.

As for the topology of your pool, performance-wise, prefer striped mirrors if you can afford them, or raidz if not!

HTH,
Jnm.

Francois wrote:
> Hello list,
>
> What would be the best zpool configuration for a cache/proxy server
> (probably based on squid)? In other words, with which zpool configuration
> could I expect the best read performance? (There'll be some writes too,
> but much fewer.)
>
> Thanks.
>
> --
> Francois
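The atime and compression tweaks above are one-liners. A minimal sketch, assuming a hypothetical dataset tank/squid-cache on a live ZFS host:

```shell
# Skip access-time updates, which saves a metadata write on every read:
zfs set atime=off tank/squid-cache

# Enable lightweight compression (lzjb rather than the heavier gzip):
zfs set compression=lzjb tank/squid-cache

# Check the resulting settings and the achieved compression ratio:
zfs get atime,compression,compressratio tank/squid-cache
```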