In article <mailman.11202.1403534666.18130.python-l...@python.org>,
William Ray Wing <w...@mac.com> wrote:
> On Jun 23, 2014, at 12:26 AM, smur...@gmail.com wrote:
> > On Sunday, June 22, 2014 3:49:53 PM UTC+2, Roy Smith wrote:
> >> Can you give us some more quantitative idea of your requirements? How
> >> many objects? How much total data is being stored? How many queries
> >> per second, and what is the acceptable latency for a query?
> > Not yet, A whole lot, More than fits in memory, That depends.
> > To explain. The data is a network of diverse related objects. I can keep
> > the most-used objects in memory but not all of them. Indeed, I _need_ to
> > keep them, otherwise this will be too slow, even when using Mongo instead
> > of SQLAlchemy. Which objects are "most-used" changes over time.
> Are you sure it won't fit in memory? Default server memory configs these
> days tend to start at 128 Gig, and scale to 256 or 384 Gig.
I'm not sure what "default" means, but it's certainly possible to get
machines with that much RAM. On the other hand, even the amount of RAM
in a single machine is not really a limit. There are very easy-to-use
technologies these days (e.g., memcached) which let you build clusters
that effectively aggregate the physical RAM of multiple machines. And
database sharding lets you do a different flavor of memory aggregation.
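To make the aggregation idea concrete, here's a minimal sketch of how a
memcached-style client spreads keys across several nodes by hashing. The
class and its structure are my own invention for illustration, with plain
dicts standing in for the servers; a real client (e.g. pymemcache) would
speak the memcached protocol to actual machines, but the key-to-node
routing works on the same principle:

```python
import hashlib

class ShardedCache:
    """Toy cache that shards keys across several in-memory dicts,
    mimicking how a memcached client spreads keys over a cluster.
    Illustration only -- each dict stands in for one server's RAM."""

    def __init__(self, n_nodes):
        self.nodes = [{} for _ in range(n_nodes)]

    def _node_for(self, key):
        # Hash the key so the same key always routes to the same node.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def set(self, key, value):
        self._node_for(key)[key] = value

    def get(self, key):
        return self._node_for(key).get(key)

cache = ShardedCache(n_nodes=4)
cache.set("user:42", {"name": "Ada"})
print(cache.get("user:42"))  # {'name': 'Ada'}
```

With N nodes you get roughly N times the cache capacity of one box, at
the cost of a network hop per lookup. (Production clients use consistent
hashing instead of plain modulo, so adding a node doesn't remap nearly
every key.)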