The prices of large-capacity Solid State Disks (SLCs, of course) are still
too high for most of us.

But it's already possible to find small SSDs (8 to 32 GB) today at
affordable prices and with good performance (0.1 ms access time and at
least 150 MB/s read/write transfer rates).

So, would it be possible to use these Small Size SSDs (S5Ds) as a buffer to
improve PostgreSQL's write performance?

So far, I see two opportunities:

1) use an S5D to temporarily store the WAL log files until a daemon
process copies them to the regular HD.
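A rough sketch of what such a daemon's core step could look like, in Python.
The directory names and the WAL file name are made up for illustration; a
real implementation would also have to fsync the copies and coordinate with
WAL recycling:

```python
import os
import shutil

def drain_wal(ssd_dir, hd_dir):
    """Copy finished WAL segments from the small SSD buffer to the
    regular HD, then delete them from the SSD to free its limited space."""
    moved = []
    for name in sorted(os.listdir(ssd_dir)):
        src = os.path.join(ssd_dir, name)
        dst = os.path.join(hd_dir, name)
        shutil.copy2(src, dst)   # sequential copy: cheap for the HD
        os.remove(src)           # reclaim SSD space as soon as possible
        moved.append(name)
    return moved
```

The point is that the database only ever does fast sequential writes to the
SSD; the HD sees the same data later, off the critical path.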

2) use an S5D to store the instructions for making a checkpoint. Instead of
writing the "dirty" pages directly to the database files, PostgreSQL could
dump to the SSD the dirty pages plus instructions on how to update the data
files. Later, a daemon process would update the data files following these
instructions and erase the instruction files afterwards. I guess that MVCC
spreads writes across the file area, requiring lots of seeks to find the
correct disk blocks, so SSDs should give good performance in burst
situations.
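To make idea (2) concrete, here is a toy sketch, assuming a trivial
instruction format I invented for illustration (a block number followed by
the page image, repeated): the checkpoint writes the records sequentially to
the SSD, and the daemon later does the random seeks into the data file.

```python
import os
import struct

PAGE = 8192  # PostgreSQL's default page size

def dump_checkpoint(instr_path, dirty_pages):
    """Checkpoint side: sequentially append (block number, page image)
    records to an instruction file on the SSD.
    dirty_pages maps block number -> bytes object of length PAGE."""
    with open(instr_path, "wb") as f:
        for blkno, page in sorted(dirty_pages.items()):
            f.write(struct.pack("<I", blkno))
            f.write(page)

def apply_checkpoint(instr_path, datafile_path):
    """Daemon side: replay the instruction file into the data file
    (the expensive seeks happen here, off the critical path),
    then erase the instruction file."""
    with open(instr_path, "rb") as f, open(datafile_path, "r+b") as df:
        while True:
            hdr = f.read(4)
            if not hdr:
                break
            (blkno,) = struct.unpack("<I", hdr)
            df.seek(blkno * PAGE)
            df.write(f.read(PAGE))
    os.remove(instr_path)
```

The checkpoint itself then costs only one sequential SSD write, while the
scattered data-file updates are deferred to the daemon.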

I guess these ideas could improve write performance significantly (3x to
50x) in database systems that perform synchronous writes and have many
write bursts, or that handle large (20 MB+) BLOBs (many WAL segments and
pages to write at checkpoint).

Of course, PostgreSQL would have to be patched to handle not only the new
behaviours but also a possible SSD-full condition.

One solution for (1) could be a fast/main volume scheme: if the fast volume
fills up, PostgreSQL falls back to the main volume.
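The fallback decision itself is simple; a minimal sketch, assuming the two
directories are mount points on different devices (the function name and
threshold logic are my own invention):

```python
import shutil

def pick_wal_dir(fast_dir, main_dir, needed_bytes):
    """Prefer the fast (SSD) volume; fall back to the main volume when
    the SSD does not have room for the next WAL segment."""
    if shutil.disk_usage(fast_dir).free >= needed_bytes:
        return fast_dir
    return main_dir
```

The daemon from (1) would then gradually empty the fast volume, so later
segments can go back to the SSD.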

Solution (2) is more delicate because it introduces a new type of file to
hold data. But if the gain is worth it, it should be examined ...

Well, that's it.



-- 
Nilson
