I understand the minimum requirement as posted for Arches is 8-16GB of 
RAM, which is fairly economical for institutions that already have 
in-house servers or dedicated server hosting. 

But for small organizations with little to no budget, a VPS or cloud setup 
would be nice. However, keeping 8-16GB of RAM in reserve typically pushes a 
VPS plan into uneconomical and over-specced territory. For instance, 
on Linode, 8GB of RAM also gets you 6 CPU cores, a 192GB SSD, and 8TB of 
transfer at $80/month, which seems ridiculously overpowered. With AWS EC2, 
you're looking at a t2.large instance at minimum, which puts you at 
$76/month. 

For those of you running Linux VPS or cloud instances out there, I'm 
curious what your average load has been and what resources your instance 
needs in reserve. For those on AWS, are you on a t2 instance, and if so, 
how are the CPU credits working out?
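In case it helps anyone gather comparable numbers, here's a rough sketch of the standard Linux commands I'd use to snapshot load and memory headroom (assuming a stock Ubuntu install; the thresholds and sampling interval are just illustrative):

```shell
# Load averages (1/5/15 min) - compare against core count from nproc
uptime
nproc

# Memory actually in use vs. available; buffers/cache shouldn't be
# counted against your "in reserve" headroom
free -m

# CPU and disk activity sampled twice, 5 seconds apart
# (the first line is an average since boot, so read the second)
vmstat 5 2
```

Running these during normal usage and again under peak load should give the kind of before/after stats that are useful for sizing an instance.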

I understand this all scales with dataset size and number of requests, so 
any stats people can share would be really helpful for estimating budgets 
and planning future steps up in instance specs as the dataset grows.

Many thanks,
Angela

P.S. I can share the following from my testing instance: a Linode with 1GB 
RAM and 1 CPU core running Ubuntu Trusty with the sample Arches-HIP data. 
With one demo user accessing it, the load was minimal: max CPU usage of 8%, 
an average I/O rate of 76 with the highest spike at 1890, and 1% of monthly 
bandwidth used to install and access it (over two work weeks of fairly 
regular usage). 
But getting some production data would be nice...

-- 
-- To post, send email to [email protected]. To unsubscribe, send 
email to [email protected]. For more information, 
visit https://groups.google.com/d/forum/archesproject?hl=en