On 13 Jul 2008, at 10:13, Brian Aker wrote:
Hi!
On Jul 13, 2008, at 12:29 AM, Antony T Curtis wrote:
20,000 tables, definitions stored in memory as simple SQL
statements, plucking a number out of the sky, let's assume 2,000 chars
average per table definition. That gives us about 40 MB of memory.
What happens when the DB is up for a year or two?
Take for instance a shop who builds daily tables, works on them, and
then drops them. One load? Awesome... no problem.
Over time, though?
Practical upshot - all the INFORMATION_SCHEMA operations will be
quicker and won't need to hit the disk at all.
I think we can get around this anyway. Current design is FUBAR.
What about dropping the central cache, keeping table info around and
just using your async communication method to flush?
Sure... Perhaps only hold complete schemas in memory (useful when
foreign keys are implemented in a meaningful way), so if no connection
is using a specific schema, it can be removed from memory. The idea I
suggested, where a single file holds the SQL DDL, would be split apart
so that there is one such file per schema.

Make this a 'pluggable' behaviour so that storage engines which only
know whether a table exists at use time can implement their own schema
cache. This would mean that such uncooperative storage engines cannot
have their tables co-exist in the same schema as other engines, but
that would be a minor impediment which can be relieved with views etc.

The server would ask all SCHEMA plugins whether they 'own' a schema
the first time that schema is accessed. Of course, the 'mysql' schema
must only use the inbuilt schema plugin ... and the
information_schema would be its own as well... Hmmm, this could be an
elegant way to get rid of some of the I_S hacks and abstract them
nicely too.
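A minimal sketch of what that SCHEMA plugin idea might look like; every name here (SchemaPlugin, InbuiltSchemaPlugin, resolve_schema) is hypothetical and purely illustrative, not Drizzle's actual plugin API:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// A SCHEMA plugin claims ownership of whole schemas and serves their
// table definitions from its own cache.
class SchemaPlugin {
public:
    virtual ~SchemaPlugin() {}
    // Does this plugin 'own' the named schema?
    virtual bool owns_schema(const std::string& name) = 0;
    // The SQL DDL for every table in the schema, from the plugin's cache.
    virtual std::vector<std::string> table_ddl(const std::string& name) = 0;
};

// The inbuilt plugin: one DDL file's worth of statements per schema,
// held in memory while the schema is in use.
class InbuiltSchemaPlugin : public SchemaPlugin {
    std::map<std::string, std::vector<std::string>> schemas_;
public:
    void add_table(const std::string& schema, const std::string& ddl) {
        schemas_[schema].push_back(ddl);
    }
    bool owns_schema(const std::string& name) override {
        return schemas_.count(name) != 0;
    }
    std::vector<std::string> table_ddl(const std::string& name) override {
        auto it = schemas_.find(name);
        return it == schemas_.end() ? std::vector<std::string>() : it->second;
    }
    // Drop a schema's cache once no connection is using it.
    void evict(const std::string& name) { schemas_.erase(name); }
};

// On first access, the server asks each registered plugin in turn
// whether it owns the schema; the first claimant serves it.
SchemaPlugin* resolve_schema(std::vector<SchemaPlugin*>& plugins,
                             const std::string& name) {
    for (SchemaPlugin* p : plugins)
        if (p->owns_schema(name))
            return p;
    return nullptr;  // no plugin claims this schema
}
```

An uncooperative storage engine would supply its own SchemaPlugin that answers owns_schema() for the schemas it monopolises, which is exactly why its tables could not co-exist in a schema served by another plugin.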
Regards,
Antony
_______________________________________________
Mailing list: https://launchpad.net/~drizzle-discuss
Post to : [email protected]
Unsubscribe : https://launchpad.net/~drizzle-discuss
More help : https://help.launchpad.net/ListHelp