If I run the database on a server with enough RAM to hold all the indexes
and tables in memory, and then flush the index to the HDD every X seconds,
would that work to increase performance dramatically?
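(For what it's worth, a large shared_buffers plus relaxed checkpoint settings already approximate "hold it in RAM, flush periodically". A minimal postgresql.conf sketch; the values below are purely illustrative, not tuned recommendations:

```
# postgresql.conf -- illustrative values only
shared_buffers = '64GB'        # keep hot tables and indexes cached in RAM
checkpoint_timeout = '15min'   # flush dirty pages to disk at most this often
max_wal_size = '32GB'          # avoid forced checkpoints between timeouts
synchronous_commit = off       # trades a few seconds of durability for
                               # faster commits; data is still crash-safe
```

Note that synchronous_commit = off risks losing the last few transactions on a crash, but never corrupts the database.)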

On Wed, Feb 5, 2020, 12:15 PM Justin Pryzby <pry...@telsasoft.com> wrote:

> On Wed, Feb 05, 2020 at 12:03:52PM -0500, Arya F wrote:
> > I'm looking to write about 1100 rows per second to tables up to 100
> > million rows. I'm trying to come up with a design where I can do all the
> > writes to a database with no indexes. With indexes, the write performance
> > slows down dramatically after the table gets bigger than 30 million rows.
> >
> > I was thinking of having one server dedicated to all the writes and
> > another server, with indexes, for reads, using logical replication to
> > keep the read-only server up to date.
>
> Wouldn't the read-only server still have bad performance for all the
> writes being replicated to it?
>
> > Would that work? Or do you have any recommendations for how I can
> > achieve good performance for a lot of writes?
>
> Can you use partitioning so the updates mostly affect only one table at a
> time, and its indices are of reasonable size, such that they fit easily
> in shared_buffers?
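(For example, declarative range partitioning on a timestamp keeps each partition's indexes small. A minimal sketch; the table and column names below are hypothetical, not from the thread:

```sql
-- Hypothetical schema: new writes land in the current month's partition,
-- so only that partition's (small) indexes need to stay in cache.
CREATE TABLE events (
    id         bigint NOT NULL,
    created_at timestamptz NOT NULL,
    payload    text
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2020_02 PARTITION OF events
    FOR VALUES FROM ('2020-02-01') TO ('2020-03-01');
```

Indexes created on each partition then cover at most one month of rows instead of the full 100 million.)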
>
> BRIN indices may help for some, but likely not for all, of your indices.
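(A BRIN index stores only min/max values per block range, so it stays tiny and cheap to maintain at hundreds of millions of rows; it only works well when the indexed column correlates with physical row order, such as an insert timestamp. A sketch, with a hypothetical table and column name:

```sql
-- Hypothetical events(created_at) table; BRIN is effective here only
-- because rows are appended in roughly created_at order.
CREATE INDEX events_created_at_brin ON events USING brin (created_at);
```
)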
>
> Justin
>
