Agreed, the network will be the bottleneck even with SSD on a shared resource. For a stable environment, a dedicated hosted server will be the best approach, and cheaper too.
Jai Rangi
Www.didforsale.com
www.cebodtelecom.com
www.cebod.com

> On Mar 8, 2015, at 9:10 AM, Jeff LaCoursiere <[email protected]> wrote:
>
> Still a shared resource. I don't see the benefit.
>
> Even beyond the shared-resource bit, with the kind of IO you are likely to be
> pushing, you will want a decent NAS with lots of spindles and fibre channel
> to your hosts.
>
> j
>
>> On 03/08/2015 10:51 AM, Jai Rangi wrote:
>> Digital Ocean offers SSD on all the virtual machines. Uptime is good.
>>
>> Jai Rangi
>> Www.didforsale.com
>> www.cebodtelecom.com
>> www.cebod.com
>>
>> On Mar 8, 2015, at 8:11 AM, Jeff LaCoursiere <[email protected]> wrote:
>>
>>> Amazon instances are shared resources. I wouldn't want to count on timing
>>> or disk throughput, and you can't just ask them to do "SSD" - it's a virtual
>>> machine! 500 simultaneous recordings is a hefty load, and I would want to
>>> know that the underlying hardware is dedicated to the task.
>>>
>>> Sure, you see lots of posts about hosting Asterisk and/or FreeSWITCH on EC2.
>>> I have done it myself and even have some clients doing it now *for proof
>>> of concept*. I've never heard of anyone using it for the kind of load you
>>> are talking about. I'm assuming with such a giant load you are making a
>>> decent profit. Buy some hefty hardware and do the architecture properly.
>>> You can rent half a rack at lots of high-end datacenters for less than
>>> $1000/month.
>>>
>>> j
>>>
>>>> On 03/07/2015 12:43 AM, Amit Patkar wrote:
>>>> Hi Jeff,
>>>>
>>>> Are you aware of any challenges of hosting it on AWS? That will help me
>>>> work out an alternate plan. Is there any recommendation? Should I split it
>>>> into multiple instances and balance traffic across multiple small server
>>>> instances? I can use Kamailio to balance traffic.
>>>>
>>>> I see many posts referring to AWS deployment. Please help me choose an AWS
>>>> server instance.
>>>>
>>>> Thanks & Regards,
>>>> Amit Patkar
>>>>
>>>>> On 3/7/2015 12:19 AM, Jeff LaCoursiere wrote:
>>>>>
>>>>> Why use Amazon? With that kind of load I would want dedicated servers.
>>>>> Call Rackspace or Softlayer.
>>>>>
>>>>> j
>>>>>
>>>>>> On 03/06/2015 11:59 AM, Amit Patkar wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I plan to host Asterisk instances on AWS/EC2 servers.
>>>>>> The requirement is to run an Asterisk instance with transcoding (G.729 +
>>>>>> G.711) and full recording. The number of concurrent calls expected is
>>>>>> 500+. Two instances will be configured for 100% redundancy. A heartbeat
>>>>>> will be used to determine the active instance.
>>>>>> How should I choose an EC2 instance?
>>>>>> How many vCPUs and how much RAM should I select? I am assuming that a
>>>>>> server with SSD is required, as all 500+ calls need to be recorded.
>>>>>>
>>>>>> Regards,
>>>>>> Amit Patkar
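
To put rough numbers behind the bandwidth and disk concerns raised above, here is a minimal back-of-envelope sketch in Python. The 20 ms packetization, 40-byte per-packet header overhead, and 16-bit 8 kHz two-channel WAV recording format are assumptions chosen for illustration, not figures taken from the thread.

# Back-of-envelope load estimate for 500 concurrent G.711 calls with recording.
# Assumptions (not from the thread): 20 ms packetization, 40 bytes of
# RTP/UDP/IPv4 header per packet, both call legs bridged through Asterisk,
# and recordings written as 16-bit 8 kHz two-channel WAV.

CALLS = 500
PAYLOAD_KBPS = 64          # G.711 codec bit rate
PACKETS_PER_SEC = 50       # 20 ms packetization
HEADER_BYTES = 40          # RTP (12) + UDP (8) + IPv4 (20)

header_kbps = PACKETS_PER_SEC * HEADER_BYTES * 8 / 1000   # 16 kbps of headers
stream_kbps = PAYLOAD_KBPS + header_kbps                   # ~80 kbps per RTP stream

# A bridged call means 4 RTP streams through the server (2 legs x 2 directions).
rtp_mbps = CALLS * 4 * stream_kbps / 1000
print(f"RTP through the box: ~{rtp_mbps:.0f} Mbps")         # ~160 Mbps sustained

# Recording both directions as 16-bit 8 kHz PCM is 256 kbps of disk writes per call.
disk_mbps = CALLS * 256 / 1000
print(f"Recording writes:    ~{disk_mbps:.0f} Mbps (~{disk_mbps / 8:.0f} MB/s)")
print(f"Storage growth:      ~{disk_mbps / 8 * 3600 / 1000:.0f} GB/hour")

Under these assumptions the disk-write side (~16 MB/s) is modest even without SSD; the sustained RTP throughput and the CPU cost of transcoding 500 channels of G.729 are the harder constraints, which is the case being made above for dedicated hardware rather than shared instances.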
