The standard computer answer applies:  The fastest way to perform I/O is not to 
do it (and, contrary to recent press, this discovery was not made by Microsoft 
Research a couple of weeks ago but was "discovered" in the 1950s when, on 
average, a good secretary could find a file in the filing cabinets faster than 
the computer could find the same record in several miles of magnetic tape).

You achieve this by using a disk cache -- preferably a block cache, though that 
technology has largely gone by the wayside and gets periodically "re-discovered" 
every few years as the way to run a good cache.  It dates back to (once again) 
the 1960s and 70s.  Modern crap uses filesystem-based caching, which is 
ill-conceived and usually more-or-less totally brain-dead.
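
In SQLite terms, the per-connection page cache is such a block-level cache (it 
caches fixed-size pages) and is sized with PRAGMA cache_size.  A minimal sketch, 
assuming the C API -- the filename "test.db" and the 200000 KiB budget are 
made-up values for illustration:

    #include <sqlite3.h>
    #include <stdio.h>

    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open("test.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }
        /* A negative cache_size means "this many KiB", independent of page size. */
        sqlite3_exec(db, "PRAGMA cache_size = -200000;", 0, 0, 0);
        /* ... run your workload here ... */
        sqlite3_close(db);
        return 0;
    }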

In any case, your cache size should be tuned so that you average a 90% hit rate 
or better (which, with a properly designed block cache, is not that hard to 
achieve and does not require a very large cache).  Or, in these days of 
humongous amounts of medium-speed (dynamic) RAM, the cache should use all space 
not otherwise being used for the code and data working set (if you bought 
memory and it is "free" as in unused, you flushed your money down the toilet).
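
To tune toward that 90% figure you can watch the per-connection hit/miss 
counters.  A rough sketch, assuming the SQLite C API; the helper name 
report_hit_rate() is made up for illustration:

    #include <sqlite3.h>
    #include <stdio.h>

    /* Print the cache hit rate for one connection after running a
       representative workload; grow PRAGMA cache_size until this stays
       at 90% or better. */
    static void report_hit_rate(sqlite3 *db)
    {
        int hits = 0, misses = 0, hiwtr = 0;
        sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_HIT,  &hits,   &hiwtr, 0);
        sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_MISS, &misses, &hiwtr, 0);
        if (hits + misses > 0)
            printf("cache hit rate: %.1f%%\n",
                   100.0 * hits / (hits + misses));
    }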

You do not want to use shared cache.  Shared cache is designed for use in 
really itty-bitty boxes where memory is measured in bytes (phones, watches, 
TVs, hand-held TV remote controls, etc.).  If you are using something that 
qualifies as a "computer" then you do not want a shared cache.
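
In SQLite that simply means opening each connection with its own private page 
cache.  A minimal sketch, assuming the C API (the path is whatever your 
application uses):

    #include <sqlite3.h>
    #include <stddef.h>

    /* Open a connection that explicitly does NOT participate in shared cache. */
    int open_private(const char *path, sqlite3 **db)
    {
        return sqlite3_open_v2(path, db,
                               SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                               SQLITE_OPEN_PRIVATECACHE,
                               NULL);
    }

(A private cache is also the default as long as shared cache has not been 
globally enabled.)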

Of course, bear in mind process memory limits (you may not be able to use more 
than 256 MB or 512 MB of total cache per process on a 32-bit computer that only 
allocates 2 GB of virtual address space per process) and also the fact that 
just because you "said" to use 4 TB of RAM as cache does not mean that 4 TB 
will be used.  A 1 GB file will use a maximum of 1 or perhaps 2 GB of cache 
(depending on your operations) even if you tell it to use 4 TB of cache.
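
The arithmetic is worth doing explicitly.  A small worked sketch (all numbers 
are illustrative):

    #include <stdio.h>

    int main(void)
    {
        long long db_bytes     = 1LL << 30;   /* 1 GiB database file      */
        long long page_size    = 4096;        /* common SQLite page size  */
        long long budget_bytes = 4LL << 40;   /* the "4 TB" you asked for */

        long long db_pages     = db_bytes / page_size;
        long long budget_pages = budget_bytes / page_size;
        long long useful_pages = db_pages < budget_pages ? db_pages : budget_pages;

        /* No matter what you request, only as many pages as exist can be cached. */
        printf("cacheable pages: %lld of %lld requested\n",
               useful_pages, budget_pages);
        return 0;
    }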

So, the optimal answer is often "as much as possible without impacting the 
multiprogramming ratio", especially on operating systems of brain-dead design 
from the get-go which favour bad uses of memory over good ones.

> -----Original Message-----
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Mark Hamburg
> Sent: Tuesday, 29 November, 2016 11:39
> To: SQLite mailing list
> Subject: Re: [sqlite] Read-only access which does not block writers
> 
> One other question about WAL mode and simultaneous readers and writers:
> How are people setting their page caches? My read is that shared cache is
> probably not what's wanted. I was setting my reader caches to be bigger
> than my writer cache under the assumption that writers write and then move
> on whereas readers would benefit from having more data cached, but I'm now
> thinking that the disk cache should be getting me the latter effect and
> increasing the size of the write cache should allow the writer to run
> longer without having to flush from memory to disk. Is there any standard
> advice in this regard or is this the sort of question where the answer is
> "experiment".
> 
> Mark
> 
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


