On 2016-10-07 10:46 PM, Damien Sykes-Lindley wrote:
Hi there,
My name is Damien Lindley, and I am, among other things, an independent,
hobbyist programmer. I have been blind since birth, so all my computer
work relies on screen-reader software and the keyboard.
I have only just crossed over from scripting into compiled programming,
and so
Machines with >100 GB of RAM have been commonplace for several years. These
days, 384 GB is quite common.
Even 1 TB is no longer a "special build" -- you can buy such machines "off
the shelf" from Dell ... (Dell no longer makes custom machines but only
sells fixed configurations off the boat
(My two cents.) I just set up two brand-new machines in our colo for ESX.
Both machines had 256 GB of memory. Not unheard of in server situations. ;)
On Fri, Oct 7, 2016 at 4:48 PM, Simon Slavin wrote:
>
> On 7 Oct 2016, at 9:37pm, Daniel Meyer wrote:
>
> > We have database files that are on the order of 100GB [...] in memory
>
> You have 100GB of memory?
>
> Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
Daniel Meyer wrote:
>
> How can we allow many reader threads on an in memory, write once read many
> times database and achieve multi-core performance? Is this possible with
> sqlite?
>
Have you tried using the URI "file::memory:?cache=shared" with one of the
sqlite3_open*() C APIs? Further
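For anyone who wants to try this, here is a minimal sketch of what the
shared-cache URI does, written in Python because its stdlib sqlite3 module
wraps the same C API (sqlite3.connect with uri=True corresponds to passing
the URI to sqlite3_open_v2 with SQLITE_OPEN_URI). The table name and values
are just illustrative:

```python
import sqlite3

# With cache=shared, every connection that opens this URI sees the SAME
# in-memory database. Without it, each ":memory:" connection would get
# its own private, empty database.
uri = "file::memory:?cache=shared"

writer = sqlite3.connect(uri, uri=True)
writer.execute("CREATE TABLE t(x)")
writer.execute("INSERT INTO t VALUES (1), (2), (3)")
writer.commit()

# A second, independent connection reads the data the first one wrote.
reader = sqlite3.connect(uri, uri=True)
total = reader.execute("SELECT sum(x) FROM t").fetchone()[0]
print(total)  # 6

reader.close()
writer.close()  # the in-memory DB vanishes once the last connection closes
```

One caveat worth knowing: shared-cache connections coordinate through
table-level locks, so this by itself may not deliver the multi-core read
scaling the original poster is after.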
We are interested in using sqlite as a read only, in memory, parallel
access database. We have database files that are on the order of 100GB
that we are loading into memory. We have found great performance when
reading from a single thread. We need to scale up to have many parallel
reader
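The write-once-read-many pattern described above can also be sketched with
one connection per reader thread against a read-only file database -- again
in Python for brevity, where the mode=ro URI parameter corresponds to
SQLITE_OPEN_READONLY in the C API. The file path, table, and row counts
here are stand-ins for the 100 GB database in the question:

```python
import os
import sqlite3
import tempfile
import threading

# Build a small file-backed database to stand in for the large read-only DB.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE t(x INTEGER)")
db.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
db.commit()
db.close()

results = []
lock = threading.Lock()

def reader():
    # One connection per thread; mode=ro opens the file read-only,
    # so readers never block each other on write locks.
    conn = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    total = conn.execute("SELECT sum(x) FROM t").fetchone()[0]
    conn.close()
    with lock:
        results.append(total)

threads = [threading.Thread(target=reader) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # four identical sums, one per reader thread
```

With a separate connection per thread the readers do not share any SQLite
state, so read throughput can scale across cores; the trade-off versus the
shared-cache approach is that each connection keeps its own page cache.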