On 7/26/16 9:54 AM, Joshua D. Drake wrote:

The following article is a very good look at some of our limitations and highlights some of the pains many of us have been working "around" since we started using the software.



* Inefficient architecture for writes
* Inefficient data replication
* Issues with table corruption
* Poor replica MVCC support
* Difficulty upgrading to newer releases

It is a very good read and I encourage our hackers to read it with an open mind.



It was a good read.

Having built both a high-performance web tracking service and a high-performance security appliance on PostgreSQL, I too have been bitten by these issues.

I have a few questions that perhaps folks with core knowledge can answer:

1) Would it be possible to create a "star-like" schema to mitigate the write-amplification problem? For example, let's say you have a table similar to Uber's:
col0pk, col1, col2, col3, col4, col5

All columns are indexed, and assume updates touch only one column at a time.
Why not figure out some way to encourage or automate splitting this table into multiple tables that present themselves as a single table?

What I mean is that you would then wind up with the following tables:
table1: col0pk, col1
table2: col0pk, col2
table3: col0pk, col3
table4: col0pk, col4
table5: col0pk, col5

Now when you update "col5" on a row, you only have to update the indexes on table5:col5 and table5:col0pk, as opposed to before, where you would have to update many more indexes. In addition, I believe vacuum overhead would be somewhat mitigated in this case as well.
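The split I have in mind could be sketched in plain SQL along these lines (column types and the view name are just placeholders for illustration):

```sql
-- Each piece carries only the shared primary key plus one payload column.
CREATE TABLE table1 (col0pk bigint PRIMARY KEY, col1 text);
CREATE TABLE table2 (col0pk bigint PRIMARY KEY, col2 text);
CREATE TABLE table3 (col0pk bigint PRIMARY KEY, col3 text);
CREATE TABLE table4 (col0pk bigint PRIMARY KEY, col4 text);
CREATE TABLE table5 (col0pk bigint PRIMARY KEY, col5 text);

-- A view presents the pieces as one logical table.
CREATE VIEW whole_table AS
SELECT t1.col0pk, t1.col1, t2.col2, t3.col3, t4.col4, t5.col5
  FROM table1 t1
  JOIN table2 t2 USING (col0pk)
  JOIN table3 t3 USING (col0pk)
  JOIN table4 t4 USING (col0pk)
  JOIN table5 t5 USING (col0pk);

-- An update of col5 now writes a new row version only in table5,
-- touching only that table's heap and its two indexes.
UPDATE table5 SET col5 = 'new value' WHERE col0pk = 42;
```

To make the view itself updatable you would need INSTEAD OF triggers or rewrite rules routing each column's update to the right underlying table, which is where the "encourage or automate" part comes in.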

2) Why not take a look at how InnoDB does its storage? Would it be possible to do something similar?

3) For the small-ish table that Uber mentioned, is there a way to "have it in memory" while providing some level of sync to disk so that it stays consistent?
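For what it's worth, two existing knobs come close to that, assuming some loss of recent writes on a crash is tolerable (the table name below is just an example):

```sql
-- Relax the WAL flush on commit: the database stays consistent after
-- a crash, but the last few transactions may be lost.
SET synchronous_commit = off;

-- Or skip WAL entirely for one table: much faster writes, kept in
-- shared buffers like any table, but truncated after a crash.
CREATE UNLOGGED TABLE hot_lookup (k int PRIMARY KEY, v text);
```

Neither gives "in memory but always consistent on disk", though, which is what I take the question to really be asking for.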


Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)