On Mon, Jun 9, 2008 at 2:30 PM, Michael J. Barton <[EMAIL PROTECTED]>
wrote:

>  Scott is quite right  : )
>
>
>
> Scott… with respect to the links that Andrew shared… I'm willing to
> contribute the Princeton University configuration. Who do I talk to about
> getting it posted? ;-)
>
Thanks!  It should be user-editable as long as you have a wiki account.

-Scott

>
>
>
>
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
> Behalf Of Scott Battaglia
> Sent: Monday, June 09, 2008 2:01 PM
> To: Yale CAS mailing list
> Subject: Re: CAS in Production... What kind of set up? (hardware,
> etc...)
>
>
>
> >> Two disks (C: D:) 8GB each (just enough for the O/S and CAS)
>
> Just a note that the majority of that space would be the O/S... CAS isn't
> THAT big ;-)
>
> -Scott Battaglia
> PGP Public Key Id: 0x383733AA
> LinkedIn: http://www.linkedin.com/in/scottbattaglia
>
> On Mon, Jun 9, 2008 at 1:45 PM, Michael J. Barton <[EMAIL PROTECTED]>
> wrote:
>
> Duran,
>
> At Princeton, we went the virtual machine route for our production CAS
> environment.
>
> We have two VMs running on our VMware ESX server cluster.
> The VMs are configured as:
>        3GHz Intel Xeon
>        1GB RAM
>        1Gb NIC
>        10Mb NIC (private network for cluster communication)
>        Two disks (C: D:) 8GB each (just enough for the O/S and CAS)
>        Windows 2003 O/S
>
> We are running Apache to serve HTTPS and Tomcat as the Java servlet engine
> for CAS.
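>
> As a rough sketch, the Apache side of that front end might look something
> like the following (the hostname, certificate paths, and AJP port here are
> placeholders rather than our actual values, and mod_jk would work just as
> well as mod_proxy_ajp):
>
>        <VirtualHost *:443>
>            ServerName cas.example.edu
>            SSLEngine on
>            SSLCertificateFile    /etc/httpd/ssl/cas.example.edu.crt
>            SSLCertificateKeyFile /etc/httpd/ssl/cas.example.edu.key
>
>            # Hand the /cas context to Tomcat over AJP; Tomcat's AJP
>            # connector listens on port 8009 by default.
>            ProxyPass        /cas ajp://localhost:8009/cas
>            ProxyPassReverse /cas ajp://localhost:8009/cas
>        </VirtualHost>
>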
> We are using the JBoss clustering as described on the CAS site for the
> shared ticket store.
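>
> In CAS 3.x terms, that mostly comes down to swapping the default ticket
> registry bean for the JBoss Cache-backed one in the Spring context.
> Something along these lines (bean class and property names are from the
> cas-server-integration-jboss module as I remember them; double-check them
> against the release you deploy):
>
>        <bean id="ticketRegistry"
>              class="org.jasig.cas.ticket.registry.JBossCacheTicketRegistry">
>            <property name="cache" ref="cache" />
>        </bean>
>
>        <bean id="cache" class="org.jasig.cas.util.JBossCacheFactoryBean">
>            <!-- cluster name and UDP/TCP transport settings live in the
>                 referenced JBoss Cache replication config file -->
>            <property name="configLocation"
>                      value="classpath:jbossTicketCacheReplicationConfig.xml" />
>        </bean>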
>
> We front the two virtual machines with a Foundry Networks load balancer
> that
> supports SSL.
>
> This arrangement has proven to be very robust for us thus far, and we figure
> we can scale horizontally by adding more virtual machines should the need
> arise.
>
> -Michael
>
>
>
>
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
> Behalf Of Goodyear, Duran
> Sent: Monday, June 09, 2008 12:21 PM
> To: Yale CAS mailing list
> Subject: CAS in Production... What kind of set up? (hardware, etc...)
>
> As I continue my investigation into CAS as a SSO provider... I am
> curious as to how different users are configuring their use of it?
>
> What kind of hardware do you run it on?
> What kind of fault tolerance do you have?  (cluster? Redundancy?
> Hot/cold spare?)
>
> Obviously, as a critical part of the infrastructure, it needs more than
> just a spare old server in the corner... But I'm of course wondering...
> How much?
>
>
> Thanks.
>
>
> ] duran goodyear
> ] web developer
> ] administrative computing  // office of information technology
> ] the university of the arts
> ] [EMAIL PROTECTED]
> ] 215.717.6068
> ] skype://duran.goodyear
>
_______________________________________________
Yale CAS mailing list
[email protected]
http://tp.its.yale.edu/mailman/listinfo/cas
