SELECT n_dead_tup AS "dead_tuples"
FROM pg_stat_user_tables
WHERE schemaname = 'schemaFOO' AND relname = 'bucket';
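A related sketch (assuming the same schema and table names as the query above; this is not from the thread itself) that also pulls autovacuum activity from the same statistics view, to see how quickly dead tuples are being reclaimed:

```sql
-- Sketch: dead tuples plus when/how often autovacuum has run on the table
SELECT n_dead_tup, last_autovacuum, autovacuum_count
FROM pg_stat_user_tables
WHERE schemaname = 'schemaFOO' AND relname = 'bucket';
```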
[image: image.png]
On Mon, Jul 29, 2019 at 9:26 PM Jean Baro wrote:
[image: image.png]
The dead tuple count goes up at a high rate, but then it gets cleaned up.
If you guys need any further information, please let me know!
On Mon, Jul 29, 2019 at 9:06 PM Jean Baro wrote:
The UPDATE was something like:
UPDATE bucket SET qty_available = qty_available + 1
WHERE bucket_uid = 0940850938059380590;
Thanks for all your help guys!
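If many sessions run that increment against the same bucket row concurrently, they serialize on the row-level lock, which is one plausible source of the stalls discussed here. A sketch of what each session effectively does (table and column names taken from the UPDATE above; the uid is just the illustrative value from the thread):

```sql
BEGIN;
-- The UPDATE takes a row-level lock on the matched row, so concurrent
-- increments of the same bucket_uid execute strictly one at a time;
-- each waiter blocks until the previous transaction commits or aborts.
UPDATE bucket SET qty_available = qty_available + 1
WHERE bucket_uid = 0940850938059380590;
COMMIT;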
On Mon, Jul 29, 2019 at 9:04 PM Jean Baro wrote:
All the failures come from the Bucket Table (see image below).
I don't have access to the DB, nor to the code, but the last time I was
shown the UPDATE it was changing (incrementing or decrementing)
*qty_available*. Tomorrow morning I can confirm, once the developers
and DBAs are back to
> "log_checkpoints" to get more info.
>
> Regards,
> Michael Vitale
>
> Rick Otten wrote on 7/29/2019 8:35 AM:
>
>
> On Mon, Jul 29, 2019 at 2:16 AM Jean Baro wrote:
>
Hello there.
I am not a PG expert; I currently work as an Enterprise Architect (who
believes in OSS, and in particular PostgreSQL). So please forgive me if
this question is too simple.
Here it goes:
We have a new Inventory system running on its own database (PG 10, AWS
RDS m5.2xlarge, 1TB
performance between PG and
Lambda?
I am sorry for wasting your time, guys; it did help us find the problem,
though, even if it wasn't a PG problem.
BTW, what performance! I am impressed.
Thanks PG community!
On Dec 27, 2017 at 14:34, "Jean Baro" <jfb...@gmail.com> wrote:
General purpose, 500GB, but we are planning to increase it to 1TB before
going into production.
500GB: 1,500 IOPS (with bursts of 3,000 IOPS)
1TB: 3,000 IOPS
On Dec 27, 2017 at 14:23, "Jeff Janes" <jeff.ja...@gmail.com> wrote:
> On Sun, Dec 24, 2017 at 11:51 AM, Jean Bar
r data/structure that could result in such terrible performance.
Mike Sofen
*From:* Jean Baro [mailto:jfb...@gmail.com]
*Sent:* Wednesday, December 27, 2017 7:14 AM
Hello,
We are still seeing queries (by UserID + UserCountry) taking over 2
seconds, even when there is no batch insert going on.
Thanks Jeremy,
We will provide a more complete EXPLAIN as other people have suggested.
I am glad we might end up with much better performance (currently each
query takes around 2 seconds!).
Cheers
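A more complete plan would typically come from EXPLAIN (ANALYZE, BUFFERS), which adds actual row counts, timings, and buffer hits to the estimates. A sketch of such a request for the kind of query discussed in the thread (table and filter columns come from the EXPLAIN output quoted in the thread; the literal values are illustrative):

```sql
-- Sketch: collect the full execution plan with runtime and I/O statistics
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM card
WHERE user_id = '4684' AND user_country = 'BR';
```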
On Dec 27, 2017 at 14:02, "Jeremy Finzel" wrote:
> ... ON public.card USING btree
> (campaign ASC NULLS LAST)
>
> The EXPLAIN
>
> 'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460
> width=922)'
> ' Index Cond: (((user_id)::text = '4684'::text) AND (user_country =
> 'BR'::bpchar))'
>
>
>
> On 25
concurrently?
>
> > On Dec 24, 2017, at 2:51 PM, Jean Baro <jfb...@gmail.com> wrote:
Hi there,
We are testing a new application to try to find performance issues.
AWS RDS m4.large 500GB storage (SSD)
One table only, called Messages:
Uuid
Country (ISO)
Role (Text)
User id (Text)
GroupId (integer)
Channel (text)
Title (Text)
Payload (JSON, up to 20kb)
Starts_in (UTC)
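From the column list above, the Messages table might look roughly like this (a sketch only; the types are guesses from the annotations in parentheses, and the thread does not show the actual DDL):

```sql
-- Hypothetical DDL reconstructed from the column list; not the real schema
CREATE TABLE messages (
    uuid       uuid PRIMARY KEY,
    country    char(2),       -- ISO country code
    role       text,
    user_id    text,
    group_id   integer,
    channel    text,
    title      text,
    payload    jsonb,         -- up to ~20kb per row
    starts_in  timestamptz    -- UTC
);
```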