On Sun, Mar 25, 2012 at 07:48:51PM +0200, Tal Tabakman wrote:
> Hi,
> I am writing an application that performs a lot of DB writes. I am
> already using the recommended optimizations (such as wrapping writes
> in transactions). I want to improve my recording time by reducing the
> amount of I/O, and one way to do so is to compress the data before
> dumping it to disk.
> I am evaluating an SQLite extension called zipvfs. This VFS extension
> compresses pages before writing them to disk; I am using zlib
> compress/uncompress as the compression callback functions for this
> VFS. I assumed that database writing would be faster with this VFS,
> since compression means less I/O. In reality I see no difference (but
> the data is indeed compressed)...
> Any idea why I don't see any improvement in recording time?

Yes. If you are using a drive with rotating magnetic platters, then the
most critical factor is seek (latency) time, rather than linear
throughput. In other words, you are limited by the _number_ of random
I/O operations, not by the _amount_ of information written. A 7200 rpm
disk manages only on the order of a hundred random I/O operations per
second, so a commit costs the same few seeks and syncs whether the
pages it writes are compressed or not.

So, if you want to improve your write performance, you have to use
low-latency storage, such as an SSD for small databases or a RAID
array with plenty of write-cache memory for huge ones.
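You can see this effect directly with a minimal C sketch (mine, not
from this thread) that performs the same inserts twice: once with
durable commits (PRAGMA synchronous=FULL) and once with syncs turned
off. On a rotating disk the FULL run is dramatically slower even
though the byte volume written is almost identical, which is exactly
the "number of operations, not amount of data" effect. The file name
and iteration count are arbitrary; error checking is omitted for
brevity.

    #include <sqlite3.h>
    #include <stdio.h>
    #include <time.h>

    static double now(void)            /* wall-clock seconds */
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static double run(const char *pragma)
    {
        sqlite3 *db;
        double t0;
        int i;

        sqlite3_open("bench.db", &db);
        sqlite3_exec(db, "DROP TABLE IF EXISTS t; CREATE TABLE t(x)",
                     0, 0, 0);
        sqlite3_exec(db, pragma, 0, 0, 0);

        t0 = now();
        /* One transaction per insert: with synchronous=FULL each
         * COMMIT forces syncs, so we are timing seek/fsync count,
         * not bytes written. */
        for (i = 0; i < 200; i++)
            sqlite3_exec(db, "BEGIN; INSERT INTO t VALUES(1); COMMIT",
                         0, 0, 0);
        sqlite3_close(db);
        return now() - t0;
    }

    int main(void)
    {
        printf("synchronous=FULL: %.2f s\n",
               run("PRAGMA synchronous=FULL"));
        printf("synchronous=OFF:  %.2f s\n",
               run("PRAGMA synchronous=OFF"));
        return 0;
    }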

> Is there an overhead with zipvfs?

You can easily measure this overhead yourself on an in-memory
database. RAM is cheap now ;-)
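For instance, a rough harness along these lines (again my sketch; the
path and row count are assumptions) times a bulk insert into a
database kept entirely in RAM, e.g. on /dev/shm on Linux. Run it once
with the plain VFS and once with the database opened through zipvfs as
its documentation describes; with no disk latency left to hide behind,
the difference you see is the compression CPU overhead.

    #include <sqlite3.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        sqlite3 *db;
        sqlite3_stmt *stmt;
        struct timespec t0, t1;
        int i;

        /* Database file on a RAM disk, so no seek latency is
         * involved; error checking omitted for brevity. */
        sqlite3_open("/dev/shm/bench.db", &db);
        sqlite3_exec(db, "CREATE TABLE t(x INTEGER)", 0, 0, 0);
        sqlite3_prepare_v2(db, "INSERT INTO t VALUES(?)", -1, &stmt, 0);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        for (i = 0; i < 1000000; i++) {   /* one big transaction */
            sqlite3_bind_int(stmt, 1, i);
            sqlite3_step(stmt);
            sqlite3_reset(stmt);
        }
        sqlite3_exec(db, "COMMIT", 0, 0, 0);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("%.2f s\n", (t1.tv_sec - t0.tv_sec)
                           + (t1.tv_nsec - t0.tv_nsec) / 1e9);

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }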

Valentin Davydov.