[zfs-discuss] VDI iops with caching

2013-01-02 Thread Geoff Nordli

I am looking at the performance numbers in the Oracle VDI admin guide.

http://docs.oracle.com/html/E26214_02/performance-storage.html

From my calculations, 200 desktops running a Windows 7 knowledge-user
workload (15 IOPS each) with a 30/70 read/write split come to 5,100
IOPS. Using 7,200 rpm disks, the requirement works out to 68 disks.
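
For reference, here is a minimal sketch of the arithmetic that
reproduces those figures. The 2x write penalty (mirrored vdevs) and
~75 IOPS per 7,200 rpm disk are my assumptions; they are what make
the guide's numbers come out:

import math

desktops = 200
iops_per_desktop = 15             # Windows 7 "knowledge user" profile
read_frac, write_frac = 0.3, 0.7  # 30/70 read/write split
write_penalty = 2                 # assumed: each write hits both sides of a mirror
iops_per_disk = 75                # assumed: typical 7,200 rpm drive

frontend = desktops * iops_per_desktop                         # 3,000 IOPS
backend = frontend * (read_frac + write_frac * write_penalty)  # 5,100 IOPS
disks = math.ceil(backend / iops_per_disk)                     # 68 disks
print(f"back-end: {backend:.0f} IOPS, disks: {disks}")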


This doesn't seem right: if you are using clones with caching, you
should be able to satisfy most of your reads from the ARC and L2ARC.
As well, Oracle VDI caches writes by default, so the writes will be
coalesced and there will be no ZIL activity.
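
To make that concrete, here is the same arithmetic with hypothetical
cache parameters (the 90% ARC/L2ARC read hit rate and 3:1 write
coalescing factor below are illustrative guesses, not Oracle's
numbers):

import math

frontend = 200 * 15                      # 3,000 IOPS
read_miss = frontend * 0.3 * (1 - 0.90)  # reads that still reach disk: 90
writes = frontend * 0.7 / 3 * 2          # coalesced writes, 2x mirror penalty: 1,400
backend = read_miss + writes             # ~1,490 IOPS instead of 5,100
print(f"back-end with caching: {backend:.0f} IOPS -> "
      f"{math.ceil(backend / 75)} disks instead of 68")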


Anyone have other guidelines on what they are seeing for IOPS with VDI?

Happy New Year!

Geoff

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-02 Thread Richard Elling

On Jan 2, 2013, at 2:03 AM, Eugen Leitl wrote:

> On Sun, Dec 30, 2012 at 10:40:39AM -0800, Richard Elling wrote:
>> On Dec 30, 2012, at 9:02 AM, Eugen Leitl wrote:
> 
>>> The system is an MSI E350DM-E33 with 8 GByte PC1333 DDR3
>>> memory, no ECC. All the systems have Intel NICs with mtu 9000
>>> enabled, including all switches in the path.
>> 
>> Does it work faster with the default MTU?
> 
> No, it was even slower; that's why I went from 1500 to 9000.
> I estimate it brought ~20 MByte/s more peak throughput with Windows 7 64-bit CIFS.

OK, then you have something else very wrong in your network.

>> Also check for retrans and errors, using the usual network performance
>> debugging checks.
> 
> Wireshark or tcpdump on Linux/Windows? What would
> you suggest for OI?

Look at all of the stats for all NICs and switches on both ends of each wire.
Look for collisions (should be 0), drops (should be 0), dups (should be 0),
retrans (should be near 0), flow control (server shouldn't see flow control
activity), etc. There is considerable written material on how to diagnose
network flakiness.
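
For OI specifically, here is a minimal sketch (my own example, not a
standard tool) that pulls per-link error counters through dladm(1M)'s
parseable output and flags anything nonzero; switch-side counters
still have to be read on the switch itself:

import subprocess

# dladm show-link -s prints per-link statistics; -p/-o select
# colon-delimited fields, one line per link, e.g. "e1000g0:0:0".
out = subprocess.run(
    ["dladm", "show-link", "-s", "-p", "-o", "link,ierrors,oerrors"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    link, ierrors, oerrors = line.split(":")
    if int(ierrors) or int(oerrors):
        print(f"{link}: ierrors={ierrors} oerrors={oerrors}  <-- investigate")
    else:
        print(f"{link}: no errors")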

> 
>>> P.S. Not sure whether this is pathological, but the system
>>> does produce occasional soft errors, e.g. in dmesg
>> 
>> More likely these are due to SMART commands not being properly handled
> 
> Apart from that, napp-it reports full SMART support.
> 
>> for SATA devices. They are harmless.


Yep, this is a SATA/SAS/SMART interaction that relies on assumptions
which might not hold. Usually it means the SMART probes are issuing
SCSI commands to SATA disks.
 -- richard

--

richard.ell...@richardelling.com
+1-760-896-4422