-hackers,

So this is more of a spitballing thread than anything. As I understand it, if we crash with a long running transaction in flight or a large amount of WAL outstanding, we then have to replay those logs on restart up to the last known good point. No problem.

I recently ran a very long transaction. I was building up a large number of rows into a two-column table to test index performance. I ended up having to kill the connection, and thus the transaction, after I realized I had an extra zero in my generate_series(). (Side note: amazing the difference a single zero can make ;)). While the server was coming back up, I watched the machine and writes were averaging anywhere from 60MB/s to 97MB/s. That in itself isn't bad for a single thread on a single SSD, given what it is doing.

However, since I know this machine can sustain well over 400MB/s of writes when using multiple connections, I can't help but wonder: is there anything we can do to make recovery more efficient without sacrificing the guarantees it is there to provide?

Could we have multiple readers pull WAL into shared_buffers (during recovery only), sort the good transactions back into order, and then hand them off to the walwriter or bgwriter to apply to the pages?
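
To make that a bit more concrete, here is a throw-away, stand-alone C toy of the rough shape I'm picturing. To be clear, nothing in it is real backend code: FakeWalRecord, reader_main and every other name and number below are invented purely for illustration, and the "segments" and "records" are faked in memory.

/*
 * Toy sketch only: several readers decode fake WAL segments in parallel,
 * the decoded records get sorted back into LSN order, and a single
 * applier then replays them sequentially.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_SEGMENTS 8
#define NUM_READERS  4
#define RECS_PER_SEG 4
#define NUM_RECORDS  (NUM_SEGMENTS * RECS_PER_SEG)

typedef struct
{
    unsigned long lsn;          /* position in the WAL stream */
} FakeWalRecord;

static FakeWalRecord decoded[NUM_RECORDS];
static int ndecoded = 0;
static int next_segment = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each reader claims the next unread segment and "decodes" its records. */
static void *
reader_main(void *arg)
{
    (void) arg;

    for (;;)
    {
        int seg;

        pthread_mutex_lock(&lock);
        seg = next_segment++;
        pthread_mutex_unlock(&lock);

        if (seg >= NUM_SEGMENTS)
            return NULL;

        for (int i = 0; i < RECS_PER_SEG; i++)
        {
            FakeWalRecord rec;

            /* Pretend this is the expensive read + decode of one record. */
            rec.lsn = (unsigned long) (seg * RECS_PER_SEG + i);

            /* Hand it back in whatever order the readers finish. */
            pthread_mutex_lock(&lock);
            decoded[ndecoded++] = rec;
            pthread_mutex_unlock(&lock);
        }
    }
}

/* Comparator: restore WAL (LSN) order before applying anything. */
static int
lsn_cmp(const void *a, const void *b)
{
    const FakeWalRecord *ra = a;
    const FakeWalRecord *rb = b;

    if (ra->lsn < rb->lsn)
        return -1;
    if (ra->lsn > rb->lsn)
        return 1;
    return 0;
}

int
main(void)
{
    pthread_t readers[NUM_READERS];

    /* Phase 1: readers pull segments into memory in parallel. */
    for (int i = 0; i < NUM_READERS; i++)
        pthread_create(&readers[i], NULL, reader_main, NULL);
    for (int i = 0; i < NUM_READERS; i++)
        pthread_join(readers[i], NULL);

    /* Phase 2: put the decoded records back into LSN order. */
    qsort(decoded, NUM_RECORDS, sizeof(FakeWalRecord), lsn_cmp);

    /* Phase 3: a single applier replays them sequentially, as today. */
    for (int i = 0; i < NUM_RECORDS; i++)
        printf("apply record at lsn %lu\n", decoded[i].lsn);

    return 0;
}

The point being that only the read/decode side fans out across readers, while the actual application of records stays strictly serial, so the ordering guarantees shouldn't change.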

Anyway, like I said, this is just spitballing, but I thought I would start the thread.

Sincerely,

JD

--
Command Prompt, Inc.                  http://the.postgres.company/
                        +1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.


