[MCN-L] Hosting hardware requirements
[Sorry if you receive this twice. I sent it 24 hours ago but it still hasn't appeared.]

I'm the website manager at a mid-sized art museum (220 full-time staff, 1.35 million physical visitors p.a.) in Sydney, Australia. Currently we host our websites externally (in a hosting facility in the USA, for cost reasons), but it is clear that our server is now underpowered. So we are considering hosting internally on TWO more powerful servers, one for the application and one for the database. The company that provides support for our content management system (Squiz.net) also manages our server in the USA remotely, so they could continue to do that. We would just need to upgrade our Internet connection.

The question I have is this: how powerful a system do we need? Squiz.net have quoted for 2 quad-core dual-Xeon commercial-grade servers running at 2.0 GHz (detailed specs below). Our network manager believes this is MASSIVE overkill. I COULD ask Squiz.net to provide details of other, comparable organisations and THEIR web server specs, but since they'd probably all be their clients too, this may not be a strong argument for management.

So, I would actually appreciate answers to ANY of the following 3 questions:

1. From your own experience, do these specs seem reasonable, allowing for some room to grow?
2. If your institution and/or websites are comparable to ours, what are your server specs... and are they adequate?
3. If your hosting setup is similar to what we were recommended, how big is your website (or websites)?

To give you a better idea of our needs, here's what we have now:

* Total web traffic: approx. 150-200 GB per month
* 1 main website + 8 smaller, CMS-driven websites + 9 static HTML websites
* 2 content management systems (1 phasing out the other) + a customised web interface to the collection management system
* Monthly email newsletter: approx. 150,000 subscribers
* Online video: new content (~25 minutes, 55 MB) weekly, currently hosted on an internal server
* Online audio: currently 2 audio tours, but set to expand, currently hosted on an internal server

And here are the detailed specs we were recommended for each server:

Dell PowerEdge 2950 dual-Xeon commercial-grade server
* CPU: dual Xeon 2.0 GHz (1333 MHz bus), quad-core (8 cores total)
* Memory: 8 GB ECC registered DDR
* Application server: 2 x 73 GB SAS/SCSI hard disks - RAID 1
* Database server: 6 x 73 GB SAS/SCSI hard disks - RAID 1+0
* Intel 10/100 Mb network card
* Intel 10/100/1000 Mbps TX network card
* Red Hat Enterprise Linux

Thanks.

Regards,
Jonathan Cooper
Manager of Information / Website
Art Gallery of New South Wales
Sydney, Australia
http://www.artgallery.nsw.gov.au

- - - Please consider the environment before printing my email - - -
This e-mail message is intended only for the addressee(s) and contains information which may be confidential. If you are not the intended recipient please advise the sender by return email, do not use or disclose the contents, and delete the message and any attachments from your system. Unless specifically indicated, this email does not constitute formal advice or commitment by the sender or the Art Gallery of NSW (ABN 24 934 492 575) or its related entities.
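[Editor's note: for sizing the upgraded Internet connection, the monthly transfer figure in the post above translates to a fairly modest average bitrate. A rough back-of-envelope sketch — the 200 GB figure is from the post; the 10x peak-to-average ratio is a rule-of-thumb assumption, not a measurement from this site:

```python
# Rough link-sizing estimate from monthly transfer volume.
# 200 GB/month is the upper figure quoted in the post above.
GB = 10**9  # decimal gigabyte, as bandwidth is normally quoted

monthly_bytes = 200 * GB
seconds_per_month = 30 * 24 * 3600

avg_bps = monthly_bytes * 8 / seconds_per_month  # average bits/second
avg_mbps = avg_bps / 10**6

# Web traffic is bursty; a 10x peak-to-average ratio is a common
# rule-of-thumb ASSUMPTION, not something measured here.
peak_mbps = avg_mbps * 10

print(f"average: {avg_mbps:.2f} Mbps, assumed peak: {peak_mbps:.1f} Mbps")
```

On those numbers, even a 10 Mbit/s uplink would cover the web traffic with headroom, though the weekly video and the 150,000-recipient newsletter bursts deserve their own checks.]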
[MCN-L] Hosting hardware requirements
Hi Jonathan

Try this, it may help: http://www.mediatemple.net/

Regards
Tim Roberts
Arts Research Ticketing Services
AUSTRALIA

Tim Roberts, ARTS Australia
280 Barcom Avenue, Paddington NSW 2021, AUSTRALIA
t: 61 (0)2 9356 3777
m: 61 (0)419 277 694
e: tim.roberts at artsoz.com.au
w: http://www.artsoz.com.au

The Australia Council for the Arts, with the assistance of Arts Victoria, the WA Department for Culture and the Arts, Arts Queensland, Arts SA and Arts NT, commissioned Roger Tomlinson and Tim Roberts to revise and update the book Boxing Clever for Australia. Boxing Clever, originally published by Arts Council England in 1993, discusses ticketing and its greater potential to facilitate sophisticated arts marketing. The new book, FULL HOUSE: Turning Data into Audiences, was published in print in Australia in November 2006, followed by an edition commissioned for New Zealand by Creative New Zealand in December 2006. Editions in other markets and languages are in development for 2008/9. Available for purchase online now.
[MCN-L] Hosting hardware requirements
This question is one of the reasons why we set up our repository on Amazon Web Services, and why we are moving our general websites in that direction. We just don't want to be in the business of sinking capital into hardware that we may not need. Moving to metered service in such a situation lets you pay for what you need, and removes the cost of forecasting and maintaining the physical servers. It also makes it easier to move away from the metaphor that every significant application requires its own server--you use virtual servers instead (the sort of situation that VMware supports, as one good example; AWS has its own virtualization software).

It is also critical that you think not in terms of a single production set, but that you accommodate development and staging sets as well. (You never want to be in a situation where you are manually updating your production server--you would stage changes, ensure that they are okay, then automatically update production; similarly, you want your development environment entirely out of the path of regular staging and production.) This becomes significantly more affordable when all of these servers are virtualized (which may or may not happen on AWS, although we are now moving in that direction).

Beyond that, attempts to right-size your physical infrastructure depend on the database traffic and webserver traffic, something that you can triangulate by looking at your average and peak load averages on the servers and the response-time degradation when you move from average to peak. Building for future growth should probably not be a large factor unless you are, in fact, experiencing significant growth in traffic (or have reason to believe that it will happen), or unless you are adding significant new content and believe that the new content will lead to significant growth.

In our experience, for those operations still based on physical co-located servers, we have generally been able to move periodically to faster servers with larger hard disks every year or two, for about the same cost as we had been paying for the previous services. At times we are paying for servers far in excess of need, but that level of service is worth purchasing because the price is reasonable and lets us sleep at night.

Hope some of this helps,
Ari Davidow
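[Editor's note: Ari's "triangulate from average and peak load" suggestion can be sketched as a small calculation. Everything below is illustrative — the load figures and the 30% headroom target are hypothetical assumptions, not measurements from any system in this thread:

```python
# Illustrative right-sizing check from load-average samples, in the
# spirit of the triangulation described above. All numbers are
# HYPOTHETICAL placeholders, not measurements.

cores = 8                 # e.g. a dual quad-core box
avg_load = 1.2            # typical 15-min load average (assumed)
peak_load = 5.5           # worst observed load average (assumed)
target_headroom = 0.30    # keep 30% of capacity free at peak (assumed)

utilization_at_peak = peak_load / cores
headroom_at_peak = 1.0 - utilization_at_peak
peak_to_avg = peak_load / avg_load

print(f"peak utilization: {utilization_at_peak:.0%}")
print(f"headroom at peak: {headroom_at_peak:.0%}")
print(f"peak/average ratio: {peak_to_avg:.1f}x")

# If headroom at peak falls below the target, or response times degrade
# sharply between average and peak load, the box is undersized.
undersized = headroom_at_peak < target_headroom
print("undersized" if undersized else "adequately sized")
```

The same two inputs (average and peak load, plus observed response-time degradation between them) are what Ari suggests collecting from the current server before deciding on new hardware.]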
[MCN-L] Hosting hardware requirements
Ari,

Your statement about using Amazon as a repository is very interesting. Can you discuss the size of the images you are sending to the repository and how many MBs or TBs you are storing each month? How is the speed on ingest and retrieval? I've been looking at Amazon as well, but have concerns about the speed and security of the images. We have images of approx. 200 MB each to store and will have approx. 10 TB by the end of 2009. Are any other museums using cloud computing as a repository?

Thanks,
David

David Parsell
Systems Manager
Yale Center for British Art
1080 Chapel Street
PO Box 208280
New Haven, CT 06520-8280
203 432-9603
203 432-9414 f
david.parsell at yale.edu
[MCN-L] Hosting hardware requirements
We are moving about 6 TB of data, mostly audio and video, to AWS. I think we're only about 500 GB in, though--it's a long project, since we invested in a T1 and everything has to upload through that pipe. We have found no serving issues--this is the same service that delivers Amazon's own web pages. The way the pieces fit together is a bit different from what is done in a non-virtualized environment.

The security issues are probably on the same level as with your ISP in terms of hackability--maybe somewhat less, depending on what you might introduce in your own configuration. Integrity issues (the other security headache) have been non-existent--we have had no data go missing or corrupted--but that doesn't mean we don't keep our local RAID server backup. I actually like the slight decrease in worry when I compare AWS's staff and 24/7 coverage vs. our remaining ISP--which has been good, but is still much smaller and much more vulnerable to disaster (however unlikely disaster is, overall, in this context).

We are actually also using AWS to back up our network drives--the day-to-day working files of the Archive--via an inexpensive utility called JungleDisk. I believe that the Indianapolis Museum is also using AWS--Rob Stein is speaking on the subject at Museums and the Web this spring. Not sure who else.

Ari
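[Editor's note: Ari's "long project" comment is easy to quantify. A T1 line tops out at 1.544 Mbit/s, so moving terabytes through it takes months even at full, uninterrupted utilization. A quick sanity check — the line rate is the standard T1 figure; assuming 100% utilization with no protocol overhead is deliberately optimistic:

```python
# Best-case time to move 6 TB over a T1 line.
T1_BPS = 1.544 * 10**6   # standard T1 line rate, bits per second
TB = 10**12              # decimal terabyte

payload_bits = 6 * TB * 8
seconds = payload_bits / T1_BPS  # assumes 100% utilization, no overhead
days = seconds / 86400

print(f"{days:.0f} days at full line rate")
```

Nearly a year of saturated uplink before any protocol overhead or daytime throttling, which is why being only 500 GB into a 6 TB migration is unsurprising.]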
[MCN-L] Hosting hardware requirements
One of the advantages of internal management vs. hosting is that massive overkill on hardware isn't a lot more expensive, in the scheme of things, than barely good enough. What are those Dells going to cost, maybe US$6,000 each if you stretch it? An adequate server would only save you US$2,000. And you want something that's going to last 3-5 years, so you're talking about a difference of $400-$600/yr. You and your team probably make a lot more than that annually, so the difference is not worth quibbling over.

If you're committed to bringing the servers inside, which it sounds like you are, I vote for massive overkill. Make them twice as powerful, even.

Matt
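[Editor's note: Matt's amortization argument works out as follows, using his own round figures; the 3-5-year lifetime range is his estimate, not a vendor quote:

```python
# Annualized cost difference between "overkill" and "adequate" servers,
# using the round figures from the post above.
overkill_cost = 6000         # USD per server (estimate from the post)
savings_if_adequate = 2000   # USD per server saved by downsizing

for lifetime_years in (3, 5):
    per_year = savings_if_adequate / lifetime_years
    print(f"{lifetime_years}-yr lifetime: ~${per_year:.0f}/yr saved per server")
```

Even doubled for two boxes, the annual gap stays well under typical staff-time costs, which is the crux of the argument.]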
___ You are currently subscribed to mcn-l, the listserv of the Museum Computer Network (http://www.mcn.edu) To post to this list, send messages to: mcn-l at mcn.edu To unsubscribe or change mcn-l delivery options visit: http://toronto.mediatrope.com/mailman/listinfo/mcn-l The MCN-L archives can be found at: http://toronto.mediatrope.com/pipermail/mcn-l/