Hello everyone,

First-time poster here; sorry if I've picked the wrong list for my question.

I am planning to use PostgreSQL as storage for application logs (lines of text) with the following properties (a rough table sketch follows the list):

- Ingest logs at a high rate: 3K lines per second minimum, but the more the better, as it would mean we could use one Postgres instance for more than one app.

- Only store logs for a short while: days, maybe weeks.

- Efficiently query logs by an arbitrary time period.

- A "live feed" output, akin to `tail -f` on a file.

For context, I have only used Postgres for bog-standard read-heavy web apps, so I'm well outside my expertise with this kind of workload. Here are my questions:

- Is it even possible/advisable to use an actual ACID RDBMS for such a load? Put another way, can Postgres be tuned to achieve the required write throughput on mid-level AWS hardware, maybe at the expense of relaxing transaction isolation or durability somewhere? (Rough ingest sketch after this list.)

- Is there an efficient kind of index that would allow me to do `where "time" between ...` on a constantly updated table? (Index sketch after this list.)

- Is there such a thing as a "live cursor" in Postgres for producing the `tail -f`-style output, or should I just query in a loop (and skip records if the client can't keep up)? (LISTEN/NOTIFY sketch after this list.)
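
To make the ingest question more concrete, here is the kind of batched load I could do on my side; it assumes the sketch table above, and I realize the exact settings are probably the crux of the question:

    -- If losing the last fraction of a second of logs on a crash is acceptable,
    -- relax commit durability for the ingesting session only:
    SET synchronous_commit = off;

    -- Batch many lines into a single COPY instead of one INSERT per line
    -- (the writer would accumulate lines for, say, 100 ms and flush them together):
    COPY app_log (logged_at, app_name, line) FROM STDIN;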
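
For the time-range question, I was eyeing a BRIN index, since rows arrive roughly in timestamp order, but I don't know whether it holds up under constant inserts; a plain btree is the obvious alternative:

    -- Small, cheap-to-maintain index for data that is inserted in time order:
    CREATE INDEX app_log_logged_at_brin ON app_log USING brin (logged_at);

    -- The kind of query I need to run efficiently:
    SELECT logged_at, line
    FROM app_log
    WHERE logged_at BETWEEN '2017-06-01 00:00' AND '2017-06-01 12:00'
    ORDER BY logged_at;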
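
And for the live feed, the only thing I could think of is LISTEN/NOTIFY fed by a trigger, roughly as below (channel and function names are made up); I have no idea whether per-row NOTIFY is sane at thousands of lines per second, hence the question about polling instead:

    -- Notify listeners about every new log line; the payload is the line itself
    -- (NOTIFY payloads are limited to 8000 bytes, so long lines are truncated).
    CREATE FUNCTION notify_new_log_line() RETURNS trigger AS $$
    BEGIN
        PERFORM pg_notify('new_log_line', left(NEW.line, 7000));
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER app_log_notify
        AFTER INSERT ON app_log
        FOR EACH ROW EXECUTE PROCEDURE notify_new_log_line();

    -- A consumer session would then just:
    LISTEN new_log_line;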

Thanks in advance for all the answers!


