Stefan and Sun Ray friends,
Below is the network and filer configuration:

Five Sun Fire 4600s with 8 network interfaces each = 40
- ESX VM servers

Three Sun T2000s with 2 network interfaces each = 6
- Sun Ray Server 4.0

Three Sun X2200s with 2 network interfaces each = 6
- Windows 2003 Enterprise 64-bit Terminal Servers, with applications served
from Softricity SoftGrid

Raritan VGA KVM over IP = 1
- Remote management for the X2200s and 4600s

Raritan Serial KVM over IP = 1
- Remote management of the T2000s

NetApp 3020 cluster = 4
- Filer: 2 controller heads and 4 storage shelves

Brocade Switch  = 2
- 4Gb Fibre Channel switches



Here is our volume plan for the filer.
 
FlexVol Name   # of LUNs   VM Size (GB)   Vol Size (GB)   LUN IDs   IGROUP
Walker1        40          25             2,500           101-140   ESX1A
Walker2        20          25             1,250           151-170   ESX1B
Walker3        45          25             2,813           201-245   ESX2A
Walker5        30          25             1,875           251-280   ESX2B
Walker6        30          25             1,875           301-330   ESX3A
Walker7        30          25             1,875           351-380   ESX3B
Walker8        30          25             1,875           401-430   ESX4A
Walker9        30          25             1,875           451-480   ESX4B
Walker10       30          25             1,875           501-530   ESX5A
Walker11       30          25             1,875           551-580   ESX5B

This allows space for each LUN, plus one snapshot of each LUN, plus 50%
headroom on each FlexVol. Snap reserve is set to 0% per best practice.
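As a quick sanity check, the sizing rule works out to a x2.5 multiplier on
the raw LUN space (LUNs + one snapshot each = x2, plus 50% headroom = x2.5).
A small sketch using the Walker1 row; the multiplier is inferred from the
table, not stated as a formula in the original:

```shell
#!/bin/sh
# FlexVol sizing rule: LUN space + 1 snapshot per LUN + 50% headroom
# = luns * lun_gb * 2.5 (kept as * 5 / 2 for integer arithmetic).
luns=40        # Walker1 row
lun_gb=25
vol_gb=$(( luns * lun_gb * 5 / 2 ))
echo "Walker1: ${vol_gb} GB"   # matches the 2,500 GB in the table
```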
 
We use one LUN for each VM to allow maximum bandwidth. This is a best
practice; we did try putting 10 VMs on a single LUN, and performance was
poor.
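In Data ONTAP 7-mode terms, carving one LUN per VM and mapping it to an ESX
igroup looks roughly like this (a hedged sketch; the LUN path, VM name, and
WWPN are placeholders, though the size, LUN ID, and igroup name follow the
Walker1 row of the table):

```shell
# Create a 25GB VMware-type LUN for one VM inside the Walker1 FlexVol:
lun create -s 25g -t vmware /vol/Walker1/vm01.lun

# FCP igroup for the first ESX host's HBA (WWPN is a placeholder):
igroup create -f -t vmware ESX1A 50:01:43:80:xx:xx:xx:xx

# Map the LUN to the igroup at LUN ID 101 (first ID in the 101-140 range):
lun map /vol/Walker1/vm01.lun ESX1A 101
```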
 
We also have one CIFS network file share, a FlexVol named WHFILES, sized at
1,000GB. On each controller we use a single-mode VIF with Gb interfaces
connected to two redundant network switches.
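On 7-mode, the single-mode (active/standby) VIF setup is roughly the
following, run on each controller (a sketch; the interface names e0a/e0b and
the IP are placeholders, not the site's actual values):

```shell
# Bundle two Gb ports, each cabled to a different redundant switch,
# into one active/standby vif, then bring it up:
vif create single vif0 e0a e0b
ifconfig vif0 192.0.2.10 netmask 255.255.255.0 up
```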
 
One onboard FC port on Controller 1 is connected to each of the 4 shelves,
and the onboard FC ports on Controller 2 are connected to the Out FC port on
each of the 4 disk shelves. The other option is to go Active/Active (one
aggregate per controller) with each controller driving 2 shelves.
Active/Active distributes the FC I/O across more channels, which improves
performance.
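The Active/Active layout can be sketched in 7-mode terms as one aggregate per
head (the RAID type and disk counts here are assumptions for illustration,
not the deployed values):

```shell
# One aggregate per controller, each owning the disks from 2 of the 4 shelves:
aggr create aggr1 -t raid_dp 28    # run on Controller 1
aggr create aggr2 -t raid_dp 28    # run on Controller 2
```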

        For the Sun Ray Server I put all the IPs in the hosts table. The way
a user logs into a VM is through an entry I put in the .cshrc file that maps
the user to the appropriate VM. As an example, if someone is using vmuser01,
they log in as that user, and vmuser01 connects to the RDP session on the VM
running on the ESX cluster that maps to vmuser01. The ESX Servers are all
clustered, so as we fire up 100 VMs the load is distributed across all five
4600s. Management of the VMs is done via the VM console; all users are
domain users, and we run one physical AD server and one virtual AD server.
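A minimal sketch of the kind of .cshrc entry described (csh syntax; the VM
host name vm01 is a placeholder for whichever XP VM maps to vmuser01):

```shell
# In vmuser01's ~/.cshrc: on an interactive Sun Ray login, launch a
# full-screen Windows connector session to this user's XP VM.
if ( $?DISPLAY ) then
    /opt/SUNWuttsc/bin/uttsc -m vm01
endif
```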


We also run the Sun Ray Servers in Smart Card CAM mode: if a user inserts a
smart card, I launch a Windows Terminal Server session. I use Craig Bender's
Terminal Server load-balancing script, available from the ThinkThin site:
http://blog.sun.com/ThinkThin/entry/cam_mass_storage_workaround
If you don't know who Craig is, check out his blog:
http://blogs.sun.com/ThinGuy/category/Sun+Ray


The entire solution sits in two racks. We will scale to 450 DTUs, and we are
in the process of installing a system to support another 1,200 DTUs with
this configuration. The limitation of the ESX Servers is not CPU but memory.
Our future plan is to migrate to 128GB of RAM once the memory becomes
available. If you have any questions or want more specifics, let me know.

Take Care


David Partington
Ft Huachuca, AZ


----------------------------------------------------------------------

58 Network Connections Total

-----Original Message-----
From: Stefan Varga [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 08, 2007 10:44 AM
To: [EMAIL PROTECTED]
Subject: Re: [SunRay-Users] RE: Wow...

Partington, David R Mr (NGIT) USAIC&FH wrote:
> Mike,
> We have close to 3,000 Sun Rays in production at Ft Huachuca. We have
> some Solaris desktops, but the majority are Terminal Server desktops.
> The problem with terminal servers is that the individual students don't
> have individual Windows IPs, and a few apps don't work well under
> Terminal Services. To solve this issue we conducted a couple of pilots
> running Windows XP VMs with VMware ESX. The pilot was very successful
> and we were able to establish a baseline for scalability. What we
> determined was that we could run 80 users on a single 4600. Our five
> 4600s have 64GB of RAM and 8 dual-core CPUs. Each 4600 has 8 4Gb FCAL
> ports attached to a 27TB file server. For each VM session we allocate
> 2GB of RAM and 70GB of disk. The five 4600s provide 450 simultaneous
> XP VMs. The XP VMs are served out to the Sun Rays via the uttsc
> Windows connector. The cost savings over a fat client is 70%. I will
> post more results later. If you have any further questions, e-mail me.
>   

Can you post more information please? I'm interested in the whole setup (disk
array, file system, Sun Ray servers, domain, etc.).

Regards,
Stefan
_______________________________________________
SunRay-Users mailing list
[email protected]
http://www.filibeto.org/mailman/listinfo/sunray-users
