Let's see... Kansas State University's ITS has three (or four, or five) offerings for web 
hosting.

Before ITS, there was CNS... and under it was the web team and the central web 
servers.  Those serve static HTML, SJS, some CGI, and other weird stuff (like SAS), 
with new stuff being injected until it finally gets replaced.  Until about 
the middle of last year it was Netscape Enterprise Server 3.6 behind our F5.  Perhaps 
once upon a time the F5 was for load balancing, but NES crashed so much that 
they were really expecting the F5 to give them HA (scripts restarted the webservers 
every 10 minutes or when they crashed, whichever came first).  To improve 
HA further, they later went from 2 servers to 4.  But with increasing 
frequency users would still hit the error page saying there were no members 
available in the pool.  I think NES still lurks in the background doing some 
SSL, but all the HTTP is Apache now.  (They have the F5 redirecting traffic for 
certain URLs and pages to a smaller/older set of servers which have Django or 
other frameworks available... and a plan to put their own reverse proxy 
w/cache behind the F5, with pools of many different sorts of things 
behind it, initially including the legacy web servers.)
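The restart scripts themselves are long gone, but the idea was roughly this (a sketch only: the probe URL, the function names, and the restart action are hypothetical stand-ins, not the real scripts):

```shell
#!/bin/sh
# Hypothetical NES watchdog sketch: cron runs this every 10 minutes;
# it restarts the web server whenever a trivial HTTP probe fails.

check_alive() {
    # Return 0 if the server answers an HTTP request within 5 seconds.
    curl -fsS --max-time 5 "$1" >/dev/null 2>&1
}

restart_server() {
    echo "restarting web server on $1"
    # Site-specific in real life: kill the old process, re-run the
    # NES start script, log the event, etc.
}

check_alive "http://localhost/" || restart_server "localhost"
```

With the unconditional every-10-minutes restart layered on top, "up" mostly meant "recently restarted."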

Departments and organizations can also have content under this 
server (either self-managed, or 'pay' the web team to help).  The doc root is 
a large NFS share (accessible from the public Unix servers and the campus Samba 
server) with Sun ACLs to control access.  The NFS share is served by a pair of 
Sun V240s running Solaris 9 and Sun Cluster (VxVM & UFS), with storage from 
our 9990.  We want to retire the V210/V240 NFS clusters... originally the 
NetApp was supposed to do it, but they couldn't make ACLs work, Oracle 
Financials slowed to a crawl on it, and eventually we ran out of space.  
The NetApp fell off support back in November.  We have a 7420, which we told Oracle 
during the Try-and-Buy that Oracle Financials had to work off of...  Not sure 
if the web team is up to migrating ACLs, or how that even works.  But a few 
weeks ago we tried to move production Oracle Financials to it, and it didn't 
like it (somehow it also took the test stack down).  Think I heard 
we need to get more SSDs.  When the web team needed more space, we told them 
it's time to move to the 7420... after which they freed up space and postponed the 
move.

The 2nd, sort-of, option is personal pages.

A public_html directory in the user's public Unix account, subject to antiquated 
disk and bandwidth quota policies.  (Everybody gets only 8MB of disk quota 
automatically.  Grad students can ask to have this increased to 16MB, and 
faculty/staff can ask to have this increased to 32MB... and then there's EST 
and people who are friends....)  But we don't control the bandwidth quota, so 
making a big fancy personal page isn't too useful.  The daily limit is 20 million 
points, where each page request is 500 points and each byte sent is 1 
point....
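To get a feel for what 20 million points buys (using the 500-points-per-request, 1-point-per-byte formula above; the 1000-hits-per-day figure is just an assumed workload):

```shell
# Daily budget: 20,000,000 points; each request costs 500, each byte 1.
daily_limit=20000000
requests=1000                                   # assumed page hits per day
bytes_budget=$(( daily_limit - 500 * requests ))
echo "$(( bytes_budget / 1048576 )) MB of transfer left for the day"   # → 18 MB
```

So even a modestly popular page with large images can blow through the quota in a day.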

Recently a faculty member found out the hard way that they shouldn't depend 
on their personal page for a grant application.

The 3rd option is with LANTECH... the former Novell group, now the Windows server 
group.  I would guess it's for anything that requires Windows as a host.  I don't 
know much about it, but they don't have anything behind our F5.  They have their 
own NetBackup media server to take care of their backups, and some of their 
servers get SAN storage, which is then shared around among themselves.

And then the 4th option was OME... it used to be under DCE (continuing 
education), but in a reorg they were moved under ITS, and then they became 
the developer group.  The old web team became part of them, and now that their 
associate director has gone over to Communications & Marketing, they are 
finally starting to get assimilated.  OME also assimilated the developers 
from telecom, and the developers that did KEAS (IDM)... so they're the group 
people are sent to when they want something done that violates policy, and 
we're the group people are sent to when they want the policy enforced (making 
us look bad).  OME is of the position that since they control the developers 
that do IDM, they in turn also control its data.  A topic that has come up 
again, because the marketing group wants to be able to spam all the students 
every day... and LISTSERV won't let them do that, because LISTSERV allows 
users to unsubscribe or suppress messages.  (And they want to get rid of the 
guy whose job, in part, used to be manually resubscribing users 
that unsubscribed from such lists... he's one of the few remaining people with 
administrative tenure.  They originally thought they could shove him off the 
loading dock along with the mainframe.)

Anyways... OME has all sorts of servers around, mainly pairs of Apache + some 
framework (JBoss, Tomcat, Django, etc.).  They're working on the reverse proxy 
w/cache solution that will then go in front of the legacy central web servers 
and all their other stuff and handle HA/load balancing, etc.  Not sure why it's 
still going behind the F5; they've hated it for years.  Though they were 
talking about moving SSL certs to the F5 (except that we still do real SSL 
certs behind the F5).  Specifically, they had a pair of servers called 
hosted1/hosted2 for hosting departmental web servers, mainly those 
wanting PHP.  Though they were playing around with Django stuff for certain 
units, like the President's office.  And then those units went public... so I 
had to make the F5 redirect those URLs to the content in development.  They 
were a pair of old Dell servers that were past support... one died, but we 
kind of got it working again, minus its PERC card, so no RAID 1 
(it was also their primary MySQL server).  Told them to work on getting off of it.  
A few months later the other Dell server started having similar issues.  And 
then finally they were both gone.  We did set up new systems for them, a pair of 
test and a pair of prod servers... but they haven't started using them yet.  
Instead they brought up services on another pair of past-support Dell boxes that 
were in the same VLAN (there's an FWSM around every VLAN in the datacenter)... 
the VLAN happens to be named OldOMEServers.

Anyways... OME servers mainly use NFS.  They still have stuff on the old V240 
cluster, because we ran out of space during the work to move them to the NetApp.  
When we asked them for their 5-year storage growth plans, they drew an 
exponential graph that goes to infinity after 3 years.  The NetApp purchase was 
triggered because we hit a UFS bug with their >1TB data store.  The filesystem 
started out life at 40MB and had been grown bit by bit until it crossed 
1TB.  Nothing ever gets deleted on that filesystem.  So eventually the time it 
took for the filesystem to find free inodes was causing all NFS service to go 
away for minutes, many times a day.  We solved the problem temporarily by 
remaking it from scratch at 1TB and only growing it in 256GB increments; the bug 
should be fixed by now, but we haven't applied updates to the NFS cluster in 
forever.  Meanwhile, some suits were at a trade show and fell in love with a 
NetApp that they just had to get for us.

Now they have a bunch of filesystems on there, all under 1TB, totaling over 
6TB.  The devs wrote some kind of multi-FS storage management system for their 
app.  But now they're saying they don't want that anymore and want to go 
back to one big FS.  Which we can do whenever they say so, since we've migrated 
them to the 7420, where all their shares come from a 46TB zpool.  We have two 
zpools, for storage tiers (raidz2 or mirrored).
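Conceptually, the two tiers map to pool layouts like the following (disk names are made up, and in practice the 7420's pools are built through the appliance's own interface rather than raw zpool commands):

```shell
# Hypothetical layouts for the two storage tiers; device names are
# placeholders, not the appliance's actual configuration.
zpool create fast mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0           # mirrored tier
zpool create bulk raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0    # raidz2 tier
```

Mirrors buy IOPS for the hot data; raidz2 buys capacity with double-parity for the bulk tier.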

The 5th is WordPress... a lonely VM that until recently had only local storage 
(more and more groups are moving onto it, so it's been switched to NFS).  The 
WordPress server is part of OME as well.  It was originally created because 
we got a new president and he wanted to blog.  But now other departments are 
using WordPress as their publishing platform as well....

I suppose there's a 6th as well... OME was playing around with the Amazon cloud, 
and we don't know what's there... but apparently we have production services in it.  
The first we heard of it was when they wanted to transfer its domain name to our 
servers.

Officially, we've signed up with OmniUpdate for the central web CMS... and 
eventually there's supposed to be a mandate forcing all the departments onto the 
central web CMS and servers.  Most of the people in the big departments I've 
spoken to have taken a "we'll be ready when it happens, but we're staying with 
our own setup for now" stance.  Sure, that means they have to update their content 
twice....  Though it's their names that keep being swung around in trying to get 
the small groups on board.

----- Original Message -----
> The University of Georgia's central IT department, EITS, has two
> general offerings for web hosting.  For students, we offer static
> HTML hosting on load-balanced Apache web servers which mount content
> on Novell Cluster Services via NFS; BTW, we hope to change this
> architecture soon.  For departments, faculty, and student
> organizations, we offer static HTML, CGI (via suexec), and PHP (via
> suexec+fastcgi) on load-balanced Apache web servers.  The shared NFS
> storage is not currently redundant.  We offer distinct
> sub-environments for the development cycle: dev, staging, and
> production.  We offer MySQL using a multi-master configuration.  We
> are hoping to bolster our storage redundancy with glusterfs.  We
> also plan to roll out Tomcat hosting any time now.  We are trying to
> balance features, redundancy, and simplicity.  Redundancy and
> features are crowding out simplicity right now.
>
> Before we deploy gluster as a storage backend, we wanted to poll peer
> institutions about their hosting environments.  Your use of gluster,
> other cluster filesystem, or decision not to use such is of
> particular interest.  But, while we're asking questions, we'd like
> to get a general sense of your central hosting offerings.
>
> 1. Does central IT at your University offer web hosting to
> departments, committees, or similar affiliates?
>
Yes

> 2. Does central IT at your University offer web hosting to students
> or student organizations?  If so, are the offerings the same as
> those offered to departments?  Could you briefly describe any
> difference in the environments offered?
>
Yes... student organizations can get the same as departments.  Personal pages 
are available for students/staff/faculty.

> 3. What operating systems do you use for hosting?
>
Central web is Solaris 10 on SPARC.  Don't know what LANTECH uses.  OME is mainly 
RHAS 4/5/6, with some Solaris 10 on x64 and SPARC.

> 4. Which programming languages do you host (i.e. PHP, ASP, RoR, JSP)?
>
Central is traditionally CGI/SJS....OME does lots of other stuff.

> 5. Are your web servers load-balanced?  How?
>
Yes, an F5 BIG-IP 6400 LTM HA pair.

> 6. Are your storage backends redundant? How? Have you tried glusterfs
> for any purpose or decided against its use for any reason?
>
Mostly... either Sun Cluster NFS from 9990 storage, or a dual-head NAS appliance.  
The NetApp was active/active, and the 7420 is active/standby.

Administration/policy wouldn't allow us to try glusterfs....  I think once 
upon a time an offer to get VxCFS was made, but that was before my time....

> 7. Which databases do you offer?  What redundancy do you have for
> those servers?
>

ISO (the DBA group) only does Oracle, and they're pretty much all RAC now.  
OME uses MySQL... they've been working on making it redundant for longer than I've 
been here... Someday they expect ISO to take it over.

> 8. Do you manage hosting accounts with a commercial tool, such as
> Plesk?  Which one?
>
We've signed up with OmniUpdate....

> 9. Do you offer a development and/or staging environment?
>
OmniUpdate provides this, right?

Currently, people make edits to live websites, and major panic ensues when 
they mess something up and have to wait for it to be restored from backups.  
It was really bad the time they corrupted the common CSS file.  It's always been 
that everybody should use the central templates; that's going to be a requirement soon.

> 10. Do you have any other comments on your environment, such as
> off-site redundancy or use of a cloud provider like Ec2?
>

There has been lots of talk, but little action yet.  Until recently we didn't 
have off-campus secondary DNS, so when we lost connectivity, even sites hosted 
off-campus couldn't be reached.  There's been discussion of having the 
secondary DNS site switch zone files in the event of an outage and send people 
to another server.  But the other server doesn't exist yet.  (Though I have 
scripts for creating additional zone files, signing them, and scp'ing them for 
such an event.)
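Those zone-switch scripts are roughly in this spirit (a sketch only: the zone name, addresses, hosts, and the use of BIND's dnssec-signzone are illustrative assumptions, not our actual setup):

```shell
#!/bin/sh
# Sketch: build an alternate zone file that points key names at a
# stand-in server, sign it, and stage it on the off-site secondary.
# All names and addresses below are hypothetical.
ZONE=example.edu
SRC=/var/named/$ZONE.zone
ALT=/var/named/outage/$ZONE.zone

# Rewrite the record for www to the outage server's address.
sed 's/^www[[:space:]].*/www  IN A 192.0.2.10/' "$SRC" > "$ALT"

dnssec-signzone -o "$ZONE" "$ALT"            # emits $ALT.signed (BIND tools)
scp "$ALT.signed" secondary-ns:/var/named/   # push to the secondary site
```

The signing step matters because swapping unsigned data into a signed zone would just make resolvers reject it.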

Cloud has come up in various discussions....

There is also talk about using our other campus locations...and changing 
routing for portions of our address space.

> 11. Do you charge a fee for any of your web hosting services?
>
Yes, but no.

If a web team, OME, or LANTECH person worked on the development of a 
site, there used to be a chargeback.  Otherwise, we would just provide a 
directory for them and let them loose.

Now that we're pushing for adoption of the new central CMS and getting people to 
move back under central web services, we aren't charging.  But there's talk that 
we need to find some other way to pay for these things....

> 12. What are your single points of failure, if any?
>
Tons....and I'm one of them.

> 13. What is the approximate enrollment at your university?  How many
> fac/staff?

23,863

Not sure about fac/staff now... when I started, I heard it was >9000.  Now I hear 
numbers of 6000+.

I know we have half the number of bodies in EST (Enterprise Server 
Technologies) that we once did... and there are times when there's only one 
person around... and none of us can do everything (though I try).

>
> 14. May I compile your reply to share with the LOPSA list?
>
>
> Thanks in advance for any replies.
>
> _______________________________________________
> Discuss mailing list
> [email protected]
> https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
> This list provided by the League of Professional System
> Administrators
>  http://lopsa.org/
>

--
Who: Lawrence K. Chen, P.Eng. - W0LKC - Senior Unix Systems Administrator
For: Enterprise Server Technologies (EST) -- & SafeZone Ally
Snail: Computing and Telecommunications Services (CTS)
Kansas State University, 109 East Stadium, Manhattan, KS 66506-3102
Phone: (785) 532-4916 - Fax: (785) 532-3515 - Email: [email protected]
Web: http://www-personal.ksu.edu/~lkchen - Where: 11 Hale Library