On Wed, Sep 24, 2014 at 09:43:54PM -0700, Preston L. Bannister wrote:
> Sorry, I am jumping into this without enough context, but ...
>
> On Wed, Sep 24, 2014 at 8:37 PM, Qiming Teng <[email protected]>
> wrote:
>
> >   mysql> select count(*) from metadata_text;
> >   +----------+
> >   | count(*) |
> >   +----------+
> >   | 25249913 |
> >   +----------+
> >   1 row in set (3.83 sec)
>
> There are problems where a simple sequential log file is superior to a
> database table. The above looks like a log ... a very large number of
> events, without an immediate customer. For sequential access, a simple
> file is *vastly* superior to a database table.
>
> If you are thinking about indexed access to the above as a table, think
> about the cost of adding items to the index, for that many items. The
> cost of building the index is not small. Running a map/reduce on
> sequential files might be faster.
>
> Again, I do not have enough context, but ... 25 million rows?
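Point taken about the indexing overhead. For what it's worth, a quick way
to see how much room the data versus the indexes of that table occupy is
to ask information_schema. This is only a rough check, and the schema
name 'ceilometer' below is a guess on my part -- substitute whatever your
metering database is actually called:

    -- Rough size check: table data vs. index size, plus an estimated
    -- row count (table_rows is only an estimate for InnoDB).
    -- 'ceilometer' is an assumed schema name; substitute your own.
    SELECT table_name,
           table_rows,
           ROUND(data_length  / 1024 / 1024) AS data_mb,
           ROUND(index_length / 1024 / 1024) AS index_mb
      FROM information_schema.tables
     WHERE table_schema = 'ceilometer'
       AND table_name   = 'metadata_text';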
Yes, those 25 million rows are real -- just about 3 VMs running on two
hosts, for at most 3 weeks.

This is leading me to another question -- are there any best practices or
tools for retiring the old data on a regular basis?

Regards,
Qiming
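P.S. In case it moves the discussion along: if I remember right,
ceilometer-expirer, driven by the time_to_live option in the [database]
section and run periodically from cron, is the intended tool for this.
Failing that, the crude fallback I can think of is a periodic batched
delete of old rows. This is only a sketch -- I am guessing at the table
and column names ('sample' with a 'recorded_at' timestamp), and the
metadata tables would presumably have to be purged along with it:

    -- Purge samples older than ~3 weeks in small batches to keep each
    -- transaction short; repeat until no rows are affected.
    -- Table and column names ('sample', 'recorded_at') are assumptions.
    DELETE FROM sample
     WHERE recorded_at < NOW() - INTERVAL 21 DAY
     LIMIT 10000;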
