John,

Thanks.  We are currently on RAID 10, and according to our IT folks, while the 
UV VM is not the only VM on the machine, the other VMs are low usage compared 
to the UV VM and there are plenty of free resources.  They've been monitoring 
it, and the RAM, while still only set to 4 GB, never gets fully utilized. 
There's no de-duplication or compression either.  FYI, we're running about 
100-120 concurrent users at any given time on Windows Server 2008 R2.
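
In case it helps, this is roughly the kind of sampling I'm thinking of asking
them to leave running on the UV VM, so we have numbers behind the "never gets
fully utilized" observation rather than spot checks.  It assumes Python and
the psutil package are available on the guest (which may not be the case),
and the interval and sample count are arbitrary:

# Rough utilization sampler -- assumes Python with psutil is installed on
# the UV VM (an assumption; substitute whatever monitoring tool is in place).
import time
import psutil

def sample(interval_secs=60, samples=30):
    """Print memory pressure and disk I/O rates once per interval."""
    prev = psutil.disk_io_counters()
    for _ in range(samples):
        time.sleep(interval_secs)
        mem = psutil.virtual_memory()
        cur = psutil.disk_io_counters()
        print("mem %5.1f%% used, %6d MB free | reads/s %7.1f writes/s %7.1f"
              % (mem.percent,
                 mem.available // (1024 * 1024),
                 (cur.read_count - prev.read_count) / interval_secs,
                 (cur.write_count - prev.write_count) / interval_secs))
        prev = cur

if __name__ == "__main__":
    sample()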

Adam Taylor
Director of Software Development
O: (713) 795-2352
www.INXI.com




-----Original Message-----
From: u2-users-boun...@listserver.u2ug.org 
[mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of John Thompson
Sent: Tuesday, August 09, 2011 12:44 PM
To: U2 Users List
Subject: Re: [U2] UV 11.1 64-bit on Cisco UCS with NetApp Filer?

Googling during my lunch...

I'm guessing this is the box you have, or something similar:
http://www.netapp.com/us/products/storage-systems/fas3100/fas3100-tech-specs.html

You might try disabling that RAID-DP stuff they have and just running RAID 1
or RAID 10, as a first step in your process of elimination.

Any of the options I suggested are a project.
Reconfiguring just the disk shelf is less work than reconfiguring a host for
it to run on.

If you are running a fiber link to it, you probably can't use NFS, so that
idea is out.

If you are doing any de-duplication/compression, turn that off too.

On Tue, Aug 9, 2011 at 12:44 PM, John Thompson <jthompson...@gmail.com> wrote:

> Is the disk shelf RAID 5 or RAID 10?  Or some other fancy RAID technology?
>
> If it's RAID 5 or 6 or whatever NetApp uses, you could try just doing RAID
> 10 (1+0).
> Of course, that would probably require completely redoing the disk layout
> on the shelf and loading from a backup.
>
> Sometimes RAID arrays that have to write parity data (i.e., 5 or 6) can slow
> down an MV system, and most databases for that matter.
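>
> Quick back-of-the-envelope illustration of why (the write-penalty factors
> are the usual rule-of-thumb numbers, and the 10,000 raw IOPS figure is made
> up, not measured on your filer):
>
> # Rough RAID write-penalty arithmetic.  RAID 10 costs 2 physical writes per
> # logical write, RAID 5 costs 4 (read old data, read old parity, write new
> # data, write new parity), RAID 6 costs 6.  Numbers below are hypothetical.
> WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}
>
> def effective_iops(raw_iops, write_fraction, penalty):
>     """Usable front-end random IOPS once the write penalty is paid."""
>     return raw_iops / ((1 - write_fraction) + write_fraction * penalty)
>
> raw = 10000      # hypothetical raw IOPS for the shelf
> writes = 0.5     # guess at the write share of a busy hashed-file workload
> for level, penalty in WRITE_PENALTY.items():
>     eff = effective_iops(raw, writes, penalty)
>     print("%-7s -> ~%5.0f effective IOPS" % (level, eff))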
>
> Another thing to check or try...
>
> So you said it's a 4 gig fiber network card for the SAN.
>
> What NIC is the VM using to connect to the LAN?
> Is it the only VM using that NIC on the Cisco server that is your host?
>
> From this quote:
> "The only caveat is that even though it is on its own disk shelf, traffic
> for all our virtualized servers goes through the singular connection between
> the UCS and the filer."
>
> It sounds like your IT folks have separated everything "virtually"
> properly, but not "physically" enough.  (Which is no discredit to them;
> they are just not used to running resource-intensive systems virtually.)
>
> It sounds like you are sharing one physical copper NIC on the host
> with multiple VMs.
> It also sounds like you are sharing one physical fibre NIC with multiple
> VMs.
> Or maybe I'm misunderstanding...
>
> If this is correct, then depending on how many UniVerse users you have... I
> would be willing to bet this is half of your problem.
>
> Make sense?
>
> Anyway, I would make sure of the following if you are really intent on
> using this environment and eliminating your performance issues:
>
> 1) Make sure that your VM host (the Cisco machine in your case) has only
> one VM on it (i.e., the UniVerse system).  Make sure you assign it plenty
> of RAM and CPU cores.  I would say if you are on Linux and at 100 users,
> don't go with any less than 8 GB of RAM and 4 CPU cores (see the quick
> sanity check after this list).
>
> 2) Make sure the Cisco VM host has two regular copper network cards (or at
> least one with two ports): one for the host and one for the UniVerse VM,
> each on its own VLAN.
>
> 3) Then make sure you have at least one fibre network card to connect to
> your SAN (i.e., your dedicated disk shelf, which I'm guessing is your
> NetApp box).  For production, you might want to consider making that
> redundant somehow.
>
> 4) Make your disk shelf RAID 10 or RAID 1; no RAID 5 or 6 or anything
> parity-based.
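>
> As the quick sanity check on point 1 (assuming Python and the psutil
> package are available on the guest, which may not be the case), you can
> confirm what the VM actually sees, since what the hypervisor is configured
> for and what the guest gets are not always the same thing:
>
> # Check what the guest OS actually sees -- assumes Python with psutil is
> # installed on the VM (an assumption); the minimums are the rule-of-thumb
> # figures from point 1 above.
> import os
> import psutil
>
> MIN_RAM_GB = 8
> MIN_CORES = 4
>
> ram_gb = psutil.virtual_memory().total / (1024 ** 3)
> cores = os.cpu_count()
> print("RAM seen by guest : %.1f GB (want >= %d)" % (ram_gb, MIN_RAM_GB))
> print("CPU cores seen    : %d (want >= %d)" % (cores, MIN_CORES))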
>
> Then you can do some testing.  If you see the same behavior, then you know
> you have a much deeper problem.
>
> If the performance improves... then you can add VMs to that host one at a
> time so that you can make better use of it, BUT I would still use one
> physical copper NIC for VMs with heavy traffic, and I would definitely try
> to use one physical fibre NIC ONLY for the UniVerse VM's disk traffic.
>
> Your techs may scoff at that, but I think that is the only good way to
> set up the test and start eliminating issues.  Then eventually you might be
> able to narrow it down to the real problem.
>
> Of course, if you need a new Cisco VM host and NIC cards, then my idea
> will cost you more money :(
>
> That's my two cents...
>
> I would like to hear about your successes/frustrations though.
>
> The irony here is that some salesman from our ISP the other day tried to
> sell us a similar solution, after we had a metro-E network service outage a
> few days back...
> I, of course, shook my head and thought... I don't need that headache right
> now.
>
> On Tue, Aug 9, 2011 at 11:39 AM, Adam Taylor <adam.tay...@inxi.com> wrote:
>
>> John,
>>
>> Thanks.  I just discussed these with our IT folks in charge of the UCS,
>> and here is what I found out:
>>
>> 1)  Our NetApp filer is on its own VLAN for the connection between the
>> filer and UCS.  The drives in our filer are SAS, and our UV server is
>> actually on our fastest 15K disk shelf using fiber.
>> 2)  The connection into our UCS is a 4 gig network card, and from there
>> the connection to the UV server is dedicated.
>>
>> From what he was telling me, the setup is fairly well separated in terms of
>> traffic to and from each server.  The only caveat is that even though it is
>> on its own disk shelf, traffic for all our virtualized servers goes through
>> the singular connection between the UCS and the filer.
>>
>> Adam Taylor
>> Director of Software Development
>> O: (713) 795-2352
>> www.INXI.com
>>
>>
>>
>>
>> -----Original Message-----
>> From: u2-users-boun...@listserver.u2ug.org [mailto:
>> u2-users-boun...@listserver.u2ug.org] On Behalf Of John Thompson
>> Sent: Tuesday, August 09, 2011 9:44 AM
>> To: U2 Users List
>> Subject: Re: [U2] UV 11.1 64-bit on Cisco UCS with NetApp Filer?
>>
>> Ugh... I wish I did.
>>
>> I will say one thing.  In my experience with virtualization, it is a brave
>> new world... and way more complex than the salesmen make it out to be.
>>
>> I don't know what type of hypervisor Cisco is using, but I suspect you are
>> having issues with the performance of the virtualized network interface and
>> the virtual disk.
>>
>> Some thoughts though:
>>
>> 1) If you are using a SAN (which I think the NetApp is), make sure that the
>> virtual disk your UV machine is living on is on its own VLAN.  Don't
>> share the traffic with anything else, period.
>>
>> Are you using SSDs or SAS drives?
>> If SSDs are in the mix, be aware that MV data files and apps (because of
>> the hashed filesystem) are very random-read and random-write intensive.
>> They behave differently from a MySQL or Oracle database.  Some SSDs don't
>> like random writes.  And of course our furry friends at Cisco and NetApp
>> have probably never tested with an MV database.
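>>
>> A toy illustration of why (this is not UniVerse's actual hashing
>> algorithm, just a sketch of how hashed files turn "sequential" key access
>> into scattered disk offsets; modulo and group size are made up):
>>
>> # Toy model of a hashed file: record IDs hash to groups, and each group
>> # sits at a fixed offset, so reading keys "in order" still means jumping
>> # all over the file.  Not UniVerse's real hash; numbers are hypothetical.
>> import zlib
>>
>> MODULO = 1009        # number of groups (hypothetical)
>> GROUP_SIZE = 4096    # bytes per group (hypothetical)
>>
>> def group_offset(record_id):
>>     group = zlib.crc32(record_id.encode()) % MODULO
>>     return group * GROUP_SIZE
>>
>> for rec in ("INV1001", "INV1002", "INV1003", "INV1004"):
>>     print(rec, "-> byte offset", group_offset(rec))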
>>
>> So if it's SSDs, you might try just putting it on regular SAS drives and
>> see what happens.
>>
>> Is it Fibre Channel, iSCSI, or NFS?
>>
>> You might even try using NFS if the NetApp supports it.  I have heard of
>> folks having better performance with NFS than iSCSI on certain SANs.  Of
>> course, this is a shot in the dark, and both protocols have a myriad of
>> configuration options.
>>
>> 2) Try to put the virtual NIC (the one users SSH to for a UV session) on
>> its own VLAN and separate that traffic too.
>>
>> The more you can separate and organize all of these virtual NICs and disks,
>> the better off you will be.
>>
>> That's all I've got...
>>
>> On Tue, Aug 9, 2011 at 10:16 AM, Adam Taylor <adam.tay...@inxi.com>
>> wrote:
>>
>> > All,
>> >
>> > We recently upgraded and virtualized our UniVerse servers to UV 11.1
>> > 64-bit on a Cisco UCS B Series platform with a NetApp 3040A/A Filer.
>> > Going to 64-bit with an upgrade in the virtualized environment, we
>> > expected to see a noticeable (if not significant) increase in performance
>> > over what we were achieving on our old physical 32-bit server.  However,
>> > not only has performance not notably increased, but we are also having
>> > random slowdowns in various areas of the system that were not occurring
>> > before.  I say random because while it does seem to affect certain areas
>> > more, it is not consistent.
>> >
>> > Some examples of behavior that has changed since the virtualization:
>> >
>> > An SB screen with some calculated fields on it will take a minute to
>> > load, but the next time it loads it comes up in just a couple of seconds
>> > for the same record.
>> > An ASP web page with 10+ queries against different files (heavy-duty
>> > processes), using Web DE to connect, takes 5-6 minutes to load when it
>> > was only taking a minute before.
>> > Randomly, saving a change in the SBClient screen designer on our Dev
>> > environment will hang for 20-30 seconds.  (Same virtualized specs, but
>> > far less traffic since it is a dev environment.)
>> > Simply initiating a telnet session sometimes takes 10-15 seconds before
>> > the login prompt appears to enter username and password.
>> >
>> > Does anyone out there have any experience with any of these pieces (UV
>> > 11.1 64-bit, Cisco UCS, NetApp Filer) who could shed some light on why
>> > we are now seeing these performance issues?
>> >
>> > Thanks.
>> >
>> >
>> > Adam Taylor
>> > Director of Software Development
>> > O: (713) 795-2352
>> > www.INXI.com
>> >
>> >
>> >
>> >
>> > IMPORTANT/CONFIDENTIAL: This message from INX Inc. is intended only for
>> the
>> > use of the addressees shown above.
>> > It contains information that may be privileged, confidential and/or
>> exempt
>> > from disclosure under applicable law.
>> > If you are not the intended recipient of this message, you are hereby
>> > notified that the copying, use or distribution of any information or
>> > materials transmitted in or with this message is strictly prohibited.
>> > If you received this message by mistake, please immediately email or
>> call
>> > us collect at (469) 549-3800 and delete/destroy the original message.
>> >
>>
>>
>>
>> --
>> John Thompson
>>
>
>
>
> --
> John Thompson
>



-- 
John Thompson
_______________________________________________
U2-Users mailing list
U2-Users@listserver.u2ug.org
http://listserver.u2ug.org/mailman/listinfo/u2-users
