We use m1.large instances in EC2 <http://aws.amazon.com/ec2/instance-types/> for both the nimbus and supervisor machines (though the m1 family has since been deprecated in favor of m3). Our use case is to do some pre-aggregation before persisting the data to a store. (The main bottleneck in this setup is the downstream datastore, but memory is the primary constraint on the worker machines because of the in-memory cache that wraps the Trident state.)
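To make that concrete, here's a minimal sketch of that kind of pre-aggregation topology. Everything in it is illustrative: the spout, the "word" field, and the batch data are made up, and MemoryMapState stands in for whatever state implementation actually fronts your downstream store.

import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import storm.trident.TridentTopology;
import storm.trident.operation.builtin.Count;
import storm.trident.testing.FixedBatchSpout;
import storm.trident.testing.MemoryMapState;

public class PreAggTopology {
    public static void main(String[] args) {
        // Toy spout for illustration; in a real setup this would be a
        // Kafka (or similar) spout.
        FixedBatchSpout spout = new FixedBatchSpout(new Fields("word"), 3,
                new Values("a"), new Values("b"), new Values("a"));

        TridentTopology topology = new TridentTopology();
        topology.newStream("events", spout)
                .groupBy(new Fields("word"))
                // persistentAggregate pre-aggregates each batch in memory and
                // issues one bulk write per batch to the backing state, so the
                // slow downstream store sees far fewer writes.
                .persistentAggregate(new MemoryMapState.Factory(), new Count(),
                        new Fields("count"));
        // topology.build() would then go to StormSubmitter or LocalCluster.
    }
}

The relevant bit is the persistentAggregate: Trident's map states keep recently read and written keys in an in-memory cache (the CachedMap wrapper), which is what lets it batch writes downstream and also why RAM ends up being the binding constraint on our workers.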
For what it's worth, Infochimps suggests <https://github.com/infochimps-labs/big_data_for_chimps/blob/master/25-storm%2Btrident-tuning.asciidoc> c1.xlarge or m3.xlarge machines. Using the Amazon cloud machines as a reference, we like to use either the c1.xlarge machines (7 GB RAM, 8 cores, $424/month, giving the highest CPU-performance-per-dollar) or the m3.xlarge machines (15 GB RAM, 4 cores, $365/month, the best balance of CPU-per-dollar and RAM-per-dollar). You shouldn't use fewer than four worker machines in production, so if your needs are modest, feel free to downsize the hardware accordingly. Not sure what others would recommend.
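The price/performance arithmetic behind those two picks works out like so. This is just a back-of-the-envelope check using the on-demand prices quoted above; AWS prices drift, so treat the numbers as illustrative:

public class InstancePricePerf {
    public static void main(String[] args) {
        // On-demand monthly prices as quoted in this thread.
        print("c1.xlarge", 424.0, 8, 7.0);   // cheapest per core
        print("m3.xlarge", 365.0, 4, 15.0);  // cheapest per GB of RAM
    }

    static void print(String name, double usdPerMonth, int cores, double gbRam) {
        System.out.printf("%-10s $%6.2f/core-month  $%6.2f/GB-month%n",
                name, usdPerMonth / cores, usdPerMonth / gbRam);
    }
}

That prints roughly $53/core-month and $61/GB-month for c1.xlarge versus $91/core-month and $24/GB-month for m3.xlarge, which is where the "highest CPU per dollar" vs. "best CPU/RAM balance" characterization comes from.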
-Cody

On Wed, Apr 30, 2014 at 5:57 PM, Software Dev <[email protected]> wrote:

> What kind of specs are we looking at for
>
> 1) Nimbus
> 2) Workers
>
> Any recommendations?

--
Cody A. Ray, LEED AP
[email protected]
215.501.7891