On Mon, Jul 29, 2019 at 11:46 AM Jean Baro <jfb...@gmail.com> wrote:

> Hello there.
>
> I am not a PG expert, as I currently work as an Enterprise Architect (who
> believes in OSS and in particular PostgreSQL 😍). So please forgive me if
> this question is too simple. 🙏
>
> Here it goes:
>
> We have a new Inventory system running on its own database (PG 10 AWS
> RDS.m5.2xlarge 1TB SSD EBS - Multizone). The DB effective size is less than
> 10GB at the moment. We provisioned 1TB to get more IOPS from EBS.
>
> As we don't have a lot of different products in our catalogue, it's quite
> common (especially when a particular product is on sale) to have a high
> rate of concurrent updates against the same row. There is also a frequent
> (every 30 minutes) update to all items whose current stock/inventory
> changed at the warehouses (SAP); the latter is a batch process. We have
> just installed this system for a new tenant (one of the smallest ones),
> and although it's running great so far, we believe this solution would not
> scale as we roll out the system to new (and bigger) tenants. Currently
> there are up to 1,500 transactions per second (mostly SELECTs and one
> particular UPDATE, which I believe is the one being aborted/deadlocked at
> times) in this inventory database.
>
Monitoring the locks and activity, as described here, may help:
https://wiki.postgresql.org/wiki/Lock_Monitoring
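As a starting point, something along these lines shows which sessions are
currently waiting on a lock and what they are running (a simplified sketch
of the queries on that wiki page, using the pg_locks and pg_stat_activity
views available in PG 10):

    SELECT w.pid,
           a.wait_event_type,
           a.wait_event,
           a.query
    FROM pg_locks w
    JOIN pg_stat_activity a ON a.pid = w.pid
    WHERE NOT w.granted;   -- sessions blocked waiting for a lock

You may also want to set log_lock_waits = on, so that any wait longer than
deadlock_timeout (1s by default) is written to the server log; actual
deadlocks are always logged, with the statements involved, which should
help identify the UPDATE that is being aborted.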

Regards,
Jayadevan
