[zfs-discuss] Any company willing to support a 7410 ?

2012-07-19 Thread sol
Other than Oracle, do you think any other companies would be willing to take over support for a clustered 7410 appliance with 6 JBODs? (Some non-Oracle names which popped out of Google: Joyent/Coraid/Nexenta/Greenbytes/NAS/RackTop/EraStor/Illumos/???)

Re: [zfs-discuss] Question on 4k sectors

2012-07-19 Thread Hans J. Albertsson
I think the problem is with disks that are 4k-organised but report their blocksize as 512. If the disk reports its blocksize correctly as 4096, then ZFS should not have a problem. At least my 2TB Seagate Barracuda disks seemed to report their blocksizes as 4096, and my zpools on those
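For anyone wanting to verify what the OS and ZFS actually saw, a minimal sketch (FreeBSD-flavoured commands; the pool name tank and device ada0 are placeholders, and output labels vary between releases):

  # Logical and physical sector sizes as reported by the drive
  $ diskinfo -v ada0 | grep -E 'sectorsize|stripesize'
  # The sector size the pool was actually built with:
  # ashift=9 means 512-byte blocks, ashift=12 means 4k blocks
  $ zdb -C tank | grep ashift

The zdb check works the same way on illumos-based systems.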

Re: [zfs-discuss] Any company willing to support a 7410 ?

2012-07-19 Thread Hung-Sheng Tsao Ph.D.
Hi, you have two issues here: 1) the HW support, and 2) the SW support. No one but Oracle can provide SW support, even if you find someone for HW support. Regards. The others you mentioned are all OpenSolaris forks; some do provide a GUI, but the pricing models are very different. AFAIK Nexenta is

Re: [zfs-discuss] Question on 4k sectors

2012-07-19 Thread Hans Rosenfeld
On Thu, Jul 19, 2012 at 02:29:38PM +0200, Hans J. Albertsson wrote: I think the problem is with disks that are 4k-organised but report their blocksize as 512. If the disk reports its blocksize correctly as 4096, then ZFS should not have a problem. At least my 2TB Seagate Barracuda disks

Re: [zfs-discuss] Any company willing to support a 7410 ?

2012-07-19 Thread Gordon Ross
On Thu, Jul 19, 2012 at 5:38 AM, sol a...@yahoo.com wrote: Other than Oracle, do you think any other companies would be willing to take over support for a clustered 7410 appliance with 6 JBODs? (Some non-Oracle names which popped out of Google:

Re: [zfs-discuss] Question on 4k sectors

2012-07-19 Thread Freddie Cash
On Thu, Jul 19, 2012 at 5:29 AM, Hans J. Albertsson hans.j.alberts...@branneriet.se wrote: I think the problem is with disks that are 4k-organised but report their blocksize as 512. If the disk reports its blocksize correctly as 4096, then ZFS should not have a problem. At least my 2TB
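On FreeBSD of that era, the usual workaround for drives that lie about their sector size was the gnop shim, which forces the pool to be created with ashift=12. A rough sketch, with device names as placeholders:

  # Put a 4k-sector gnop provider on top of the first disk
  $ gnop create -S 4096 /dev/ada0
  # Build the pool against the .nop device so ZFS picks ashift=12
  $ zpool create tank raidz2 ada0.nop ada1 ada2 ada3
  # The shim is only needed at creation time
  $ zpool export tank
  $ gnop destroy ada0.nop
  $ zpool import tank

Later FreeBSD releases grew a vfs.zfs.min_auto_ashift sysctl that makes the shim unnecessary, but whether it is present depends on the version.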

Re: [zfs-discuss] Very poor small-block random write performance

2012-07-19 Thread Bob Friesenhahn
On Wed, 18 Jul 2012, Michael Traffanstead wrote: I have an 8-drive ZFS array (RAIDZ2 - 1 spare) using 5900rpm 2TB SATA drives with an hpt27xx controller under FreeBSD 10 (but I've seen the same issue with FreeBSD 9). The system has 8 GB of RAM and I'm letting FreeBSD auto-size the ARC. Running
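If the ARC is being auto-sized, it is worth confirming how big it actually got before blaming the disks; a quick check, assuming the FreeBSD sysctl names of that era:

  # Current ARC size in bytes, and the configured ceiling
  $ sysctl kstat.zfs.misc.arcstats.size
  $ sysctl vfs.zfs.arc_max
  # To pin the ceiling, set it at boot (the 6G figure is only an example)
  $ echo 'vfs.zfs.arc_max="6G"' >> /boot/loader.conf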

Re: [zfs-discuss] Any company willing to support a 7410 ?

2012-07-19 Thread Bob Friesenhahn
On Thu, 19 Jul 2012, Gordon Ross wrote: On Thu, Jul 19, 2012 at 5:38 AM, sol a...@yahoo.com wrote: Other than Oracle, do you think any other companies would be willing to take over support for a clustered 7410 appliance with 6 JBODs? (Some non-Oracle names which popped out of Google:

Re: [zfs-discuss] Very poor small-block random write performance

2012-07-19 Thread Jim Klimov
This is normal. The problem is that with the ZFS 128k block size, ZFS needs to re-read the original 128k block so that it can compose and write the new 128k block. With sufficient RAM, this is normally avoided because the original block is already cached in the ARC. If you were to reduce the zfs
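If the workload really does rewrite in small blocks, the usual advice is to match the dataset recordsize to the write size before the data is written, since the property only affects newly written blocks. A minimal sketch (tank/data and 8K are placeholders for the real dataset and I/O size):

  # Set the record size before creating or re-copying the files
  $ zfs set recordsize=8K tank/data
  $ zfs get recordsize tank/data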

Re: [zfs-discuss] Very poor small-block random write performance

2012-07-19 Thread Bob Friesenhahn
On Fri, 20 Jul 2012, Jim Klimov wrote: I am not sure if I misunderstood the question or Bob's answer, but I have a gut feeling it is not fully correct: ZFS block sizes for files (filesystem datasets) are, at least by default, dynamically-sized depending on the contiguous write size as queued by
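For a file that has already been written, the block size ZFS ended up choosing can be inspected directly; one way, treating the dataset name and object number as placeholders:

  # A file's object number is its inode number
  $ ls -i /tank/data/testfile
  # Dump the dnode; the dblk field shows the block size in use
  $ zdb -ddddd tank/data <object-number>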

Re: [zfs-discuss] Very poor small-block random write performance

2012-07-19 Thread John Martin
On 07/19/12 19:27, Jim Klimov wrote: However, if the test file was written in 128K blocks and then is rewritten with 64K blocks, then Bob's answer is probably valid - the block would have to be re-read once for the first rewrite of its half; it might be taken from cache for the second half's

Re: [zfs-discuss] Very poor small-block random write performance

2012-07-19 Thread Traffanstead, Mike
vfs.zfs.txg.synctime_ms: 1000 vfs.zfs.txg.timeout: 5 On Thu, Jul 19, 2012 at 8:47 PM, John Martin john.m.mar...@oracle.com wrote: On 07/19/12 19:27, Jim Klimov wrote: However, if the test file was written in 128K blocks and then is rewritten with 64K blocks, then Bob's answer is probably
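Those two tunables control how often ZFS pushes a transaction group out to disk; on a FreeBSD box they can be read (and the timeout adjusted) at runtime, assuming the sysctl names shown in the thread:

  # Current transaction-group settings
  $ sysctl vfs.zfs.txg.timeout vfs.zfs.txg.synctime_ms
  # A shorter timeout means smaller, more frequent bursts of writes (an experiment, not a recommendation)
  $ sysctl vfs.zfs.txg.timeout=1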