The varchars are in place because the values represent Java UUIDs outside the database, and that representation is a requirement on our side.
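As a side note on the VARCHAR question: if I understand the docs correctly, H2 also has a native UUID column type that stores the value as 16 bytes rather than a 36-character string, which may index and compare faster. A rough sketch (table and column names here are made up for illustration, not our actual schema):

```sql
-- Hypothetical table: stores the Java UUID natively (16 bytes)
-- instead of VARCHAR(36); the JDBC driver can map the column
-- back to java.util.UUID on the application side.
CREATE TABLE data (
    id   UUID PRIMARY KEY,
    body VARCHAR(255)
);
```

Whether the mapping back to java.util.UUID is automatic may depend on the H2 version in use.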
There is no specific reason we call ANALYZE every 10 minutes. We are currently inserting between 60,000 and 100,000 records every 10 minutes (and that volume is almost guaranteed to increase). It should also be noted that the database is truncated during "setup" when the application runs, and again when the application is reset, so we don't have a limitless upward bound on the data. Our practical upper bound is about 24 hours of inserts, which totals just under 9 million records in the data table. Do you have a recommendation for a more optimal schedule on which ANALYZE should run?

Each SELECT query (they are usually called in groups of 1-5) returns an upward bound of just over 85,000 records. My guess at this point is that the real issue is MAX_MEMORY_ROWS being set to 50,000, which causes the last ~35,000 rows to be buffered to the temporary file. Would there be memory or performance issues if I raised MAX_MEMORY_ROWS to 100,000?

As always, thanks Thomas for your expertise and for H2.

On Jan 30, 5:41 am, Thomas Mueller <[email protected]> wrote:
> Hi,
>
> The query is quite simple, I don't know what the problem could have
> been. Also, ANALYZE is usually not required that often. Is VARCHAR
> really required? INT would be faster. If you see the problem again
> please let me know!
>
> Regards,
> Thomas
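For reference, both settings discussed above can be changed at runtime in H2; something like the following (the values are the ones from the discussion, not tuned recommendations, and SAMPLE_SIZE is optional):

```sql
-- Raise the threshold above which large result sets spill to a
-- temporary file (the default discussed above is 50000 rows).
SET MAX_MEMORY_ROWS 100000;

-- Re-gather optimizer statistics; SAMPLE_SIZE limits how many
-- rows per table are scanned, keeping the statement cheap enough
-- to run on a schedule.
ANALYZE SAMPLE_SIZE 10000;
```

Since the table is truncated at setup/reset and then grows steadily, running ANALYZE once shortly after the first large batch of inserts (rather than every 10 minutes) might be enough for the optimizer to pick reasonable plans.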
