Re: [zfs-discuss] slog device
Hi,

Anyway, are there other devices out there that you would recommend using as a slog device, other than this NVRAM card, that would offer similar performance gains?

Thanks,
Gilberto

On 7/8/08 9:40 PM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

> Ross wrote:
>> Hi Gilberto, I bought a Micro Memory card too, so I'm very likely going
>> to end up in the same boat. I saw Neil Perrin's blog about the MM-5425
>> card, found that Vmetro don't seem to want to sell them, but then last
>> week spotted five of those cards on eBay so snapped them up. I'm still
>> waiting for the hardware for this server, but regarding the drivers, if
>> these cards don't work out of the box I was planning to pester Neil
>> Perrin and see if he still has some drivers for them :)
>
> Unfortunately, there are a couple of problems:
>
> 1. It's been a while since I used that board and driver. I recently tried
> pkgadd-ing it on the latest Nevada build and it hung. I'm not sure if the
> latest Nevada is somehow incompatible; I didn't have time to track down
> the cause.
>
> 2. I received the board and driver from another group within Sun. It
> would be better to contact Micro Memory (or whoever took them over)
> directly, as it's not my place to give out 3rd-party drivers or provide
> support for them.
>
> Sorry for the bad news,
> Neil.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] zfs-discuss Digest, Vol 33, Issue 19
Hello Ross,

We're trying to accomplish the same goal over here, i.e. serving multiple VMware images from an NFS server. Could you tell us what kind of NVRAM device you ended up choosing? We bought a Micro Memory PCI card but can't get a Solaris driver for it...

Thanks,
Gilberto

On 7/6/08 9:54 AM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

> Message: 6
> Date: Sun, 06 Jul 2008 06:37:40 PDT
> From: Ross [EMAIL PROTECTED]
> Subject: [zfs-discuss] Measuring ZFS performance - IOPS and throughput
> To: zfs-discuss@opensolaris.org
>
> Can anybody tell me how to measure the raw performance of a new system
> I'm putting together? I'd like to know what it's capable of in terms of
> IOPS and raw throughput to the disks.
>
> I've seen Richard's raidoptimiser program, but I've only seen results
> for random read IOPS performance, and I'm particularly interested in
> write performance. That's because the live server will be fitted with
> 512MB of NVRAM for the ZIL, and I'd like to see what effect that
> actually has.
>
> The disk system will be serving NFS to VMware to act as the datastore
> for a number of virtual machines. I plan to benchmark the individual
> machines to see what kind of load they put on the server, but I need the
> raw figures from the disks to get an idea of how many machines I can
> serve before I need to start thinking bigger.
>
> I'd also like to know if there's an easy way to see the current
> performance of the system once it's in use? I know VMware has
> performance monitoring built into the console, but I'd prefer to take
> figures directly off the storage server if possible.
>
> thanks,
>
> Ross
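For the write side of Ross's question, a rough sketch like the following (not anything recommended on the list; path and parameters are illustrative) can measure synchronous write IOPS, which is the figure a ZIL on NVRAM most directly affects, since NFS and VMware workloads issue mostly synchronous writes. Point it at a file on the pool under test. For watching a live system, `zpool iostat <pool> 5` on the storage server itself is the usual starting point.

```python
# Hedged sketch: measure synchronous small-write IOPS against a file on
# the pool being tested. Every O_SYNC write must be committed stably
# (i.e. through the ZIL on ZFS), so a slog device should raise this number.
import os
import time

def sync_write_iops(path, block_size=4096, duration=5.0):
    """Issue O_SYNC writes of `block_size` bytes for roughly `duration`
    seconds and return the achieved writes per second."""
    buf = b"\0" * block_size
    # O_SYNC forces each write to stable storage before returning.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    count = 0
    start = time.monotonic()
    try:
        while time.monotonic() - start < duration:
            os.write(fd, buf)
            count += 1
    finally:
        elapsed = time.monotonic() - start
        os.close(fd)
        os.unlink(path)  # remove the scratch file
    return count / elapsed
```

Usage would be something like `sync_write_iops("/tank/iops_test")`, run once with the slog attached and once without, to see the difference the NVRAM card makes. Multiply the IOPS figure by the block size for a rough sequential-sync throughput number.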
[zfs-discuss] Does block allocation for small writes work over iSCSI?
Hello list,

I'm thinking about this topology:

NFS Client ---NFS--- ZFS Host ---iSCSI--- ZFS Node 1, 2, 3, etc.

The idea here is to create a scalable NFS server by plugging in more nodes as more space is needed, striping data across them.

A question: we know from the docs that ZFS optimizes random write speed by consolidating what would be many random writes into a single sequential operation. I imagine that for ZFS to be able to do that, it has to have some knowledge of the hard disk geometry. Now, if that geometry is abstracted away by iSCSI, is the optimization still valid?

Thanks,
Gilberto