On Fri, Oct 28, 2011 at 01:33:31PM +0200, Gert Corthout scratched on the wall:
> 
> hello all,
>  
> we have a database that only performs insert statements on a table 
> (database contains only this table), is it in this case a good idea
> to reduce the cache size to, say, 5?

  No, that would be a bad idea.  Smaller than the default of 2000 pages
  might work, but you don't want to go that small.
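
  (For reference, the cache is set per-connection and measured in pages.
  A minimal sketch of adjusting it, from the sqlite3 shell or anywhere
  you can issue a PRAGMA:

      PRAGMA cache_size;          -- show the current setting (in pages)
      PRAGMA cache_size = 100;    -- shrink the cache to ~100 pages

  The change only lasts for the life of that connection.)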

> Because if I understand things correctly the cache doesn't really
> help you when you do nothing but inserts, is this correct?

  Not really.  *All* I/O is done through the cache.  Inserts require
  accessing and manipulating the table's B-Tree, in addition to the
  pages that contain the actual data.  If you have to keep re-reading
  the intermediate nodes, you'll see a performance hit, especially
  if you're working on a platform that doesn't have a good OS-level
  file-system cache.  
  
  This issue is multiplied if you have any indexes on the table, since
  each index has its own B-Tree that must be updated as well.

  Cutting the cache way back may not cause any issues for small test
  cases, but if the table already has a million rows in it, I would
  expect a slow-down.

  The best advice is to just run some tests, but be sure you run them
  with realistic data sets.  This is a case where the size of the
  table counts.  If you have no indexes on the table, I wouldn't be
  surprised if you can get away with a ~100 page cache, but I'd be very
  hesitant to cut it way way down unless performance is not an issue.
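
  For example, with the sqlite3 command-line shell you can time the same
  bulk insert under different cache sizes (the table names below are just
  placeholders for your own schema):

      .timer on
      PRAGMA cache_size = 2000;               -- default-sized cache
      INSERT INTO t SELECT * FROM staging;    -- representative insert load
      PRAGMA cache_size = 100;                -- cut-down cache
      INSERT INTO t SELECT * FROM staging;    -- compare the elapsed times

  Run each test against a fresh copy of the database, and with a table
  that is already at a realistic size, so the numbers are comparable.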

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson