On Aug 31, 2016 5:29 AM, "sanhua.zh" <sanhua...@foxmail.com> wrote:
>
> BTW, what do you think if I map separately instead of the whole db file, ...

I suspect that it wouldn't really help you much, if any.

One, there is overhead in making that many system calls to map a bunch of 4
MiB buffers.

Two, once you've mapped that many buffers, you now have to determine where
each part of your file is in memory before accessing it. You've lost the
benefit of easy address calculations and now have to perform an indirect
lookup, first finding the correct mapping, then computing the offset into
it. It would be potentially worse if a piece of data ever spanned two
mappings, but I doubt that would be an issue for SQLite.

Three, I don't know what per-process limits the OS imposes on the number of
active mappings, but it wouldn't surprise me if this many mappings ran into
one of them.

Four, the one potential benefit is that mapping the pages would avoid the
penalty of copying read data from a kernel buffer to your user space
buffer. I think the additional complexity and my other reasons above make
it a less than ideal solution.

If your problem space requires higher speed access to the data than SQLite
is capable of delivering, it seems to me that you'd be better off with a
data storage solution tailored to your requirements. I don't make that
suggestion lightly, and would have to be really desperate for performance
to do it myself, but a more specialized solution can gain performance at
the expense of not being as generally useful. Even then, it might be
difficult to improve on SQLite, a highly optimized library for data
storage, manipulation, and access.
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
