SELECT n_dead_tup AS "dead_tuples"
FROM pg_stat_user_tables
WHERE schemaname = 'schemaFOO' AND relname = 'bucket';
On Mon, Jul 29, 2019 at 9:26 PM Jean Baro wrote:
The dead tuples count goes up at a high rate, but then it gets cleaned up.
If you need any further information, please let me know!
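For completeness, one way to confirm it is autovacuum doing that cleanup is to check the per-table autovacuum counters in the same view (schema and table names here match the earlier query, but adjust as needed):

```sql
-- When autovacuum last cleaned the table, and how many times it has run
SELECT relname,
       last_autovacuum,
       autovacuum_count,
       n_dead_tup
FROM pg_stat_user_tables
WHERE schemaname = 'schemaFOO'
  AND relname = 'bucket';
```

If last_autovacuum advances each time the dead-tuple count drops, the cleanup is autovacuum rather than a manual VACUUM.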
On Mon, Jul 29, 2019 at 9:06 PM Jean Baro wrote:
The UPDATE was something like:
UPDATE bucket SET qty_available = qty_available + 1 WHERE bucket_uid = 0940850938059380590;
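An increment like that serializes every concurrent transaction touching the same bucket_uid, since each UPDATE holds the row lock until commit. A sketch for spotting sessions stuck behind such a row lock (PostgreSQL 9.6+, using pg_blocking_pids; this is a generic diagnostic, not something from the thread):

```sql
-- Sessions currently waiting on a lock, and which backends block them
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

If many backends show the same blocker running the bucket UPDATE, the hot-row contention theory fits.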
Thanks for all your help guys!
On Mon, Jul 29, 2019 at 9:04 PM Jean Baro wrote:
All the failures come from the Bucket Table (see image below).
I don't have access to the DB, nor the code, but the last time I was shown
the UPDATE it was changing (incrementing or decrementing)
*qty_available*; I can confirm tomorrow morning, once the developers
and DBAs are back.
> Turn on "log_checkpoints" to get more info.
>
> Regards,
> Michael Vitale
>
> Rick Otten wrote on 7/29/2019 8:35 AM:
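The "log_checkpoints" suggestion can be applied without a restart on stock PostgreSQL; a minimal sketch (on RDS, where ALTER SYSTEM is not permitted, the same setting goes through the DB parameter group instead):

```sql
-- Enable checkpoint logging cluster-wide; a reload picks it up, no restart
ALTER SYSTEM SET log_checkpoints = on;
SELECT pg_reload_conf();
```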
Hello there.
I am not a PG expert; I currently work as an Enterprise Architect (who
believes in OSS and in particular PostgreSQL :) ). So please forgive me if
this question is too simple. :)
Here it goes:
We have a new Inventory system running on its own database (PG 10 AWS
RDS.m5.2xlarge 1TB S
t the moment, like PGBouncer): is it the Lambda start-up time? Is it the
network performance between PG and Lambda?
I am sorry for taking up your time; it helped us find the problem,
though, even if it wasn't a PG problem.
BTW, what a performance! I am impressed.
Thanks PG community!
General purpose, 500GB, but we are planning to increase it to 1TB before
going into production.
500GB: 1,500 IOPS (with bursts of 3,000 IOPS)
1TB: 3,000 IOPS
On Dec 27, 2017 at 14:23, "Jeff Janes" wrote:
> On Sun, Dec 24, 2017 at 11:51 AM, Jean Baro wrote:
>
>>
could result in such terrible performance.
Mike Sofen
*From:* Jean Baro [mailto:jfb...@gmail.com]
*Sent:* Wednesday, December 27, 2017 7:14 AM
Hello,
We are still seeing queries (by UserID + UserCountry) taking over 2
seconds, even when there is no batch insert going on at the same time.
Thanks Jeremy,
We will provide a more complete EXPLAIN as other people have suggested.
I am glad we might end up with a much better performance (currently each
query takes around 2 seconds!).
Cheers
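For reference, the "more complete EXPLAIN" people usually ask for includes actual timings and buffer counts; a sketch against the card table seen later in this thread (the literal values are just examples):

```sql
-- ANALYZE runs the query and reports real times; BUFFERS shows I/O per node
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM card
WHERE user_id = '4684'
  AND user_country = 'BR';
```

Comparing "shared hit" against "read" counts in the output quickly shows whether the 2-second queries are going to disk.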
On Dec 27, 2017 at 14:02, "Jeremy Finzel" wrote:
> The EXPLAIN
>
> 'Index Scan using i
Thanks Rick,
We are now partitioning the DB (one table) into 100 sets of data.
As soon as we finish this new experiment we will provide a better EXPLAIN
as you suggested. :)
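As an aside, declarative partitioning in PG 10 supports only RANGE and LIST keys (HASH arrived in PG 11), so a 100-way split like the one described might look like this hypothetical sketch; table and column names are illustrative, not from the thread:

```sql
-- Hypothetical 100-way hash partitioning (PG 11+; on PG 10 the usual
-- workaround is a LIST key over a precomputed bucket column)
CREATE TABLE card_part (
    user_id      text    NOT NULL,
    user_country char(2) NOT NULL,
    payload      jsonb
) PARTITION BY HASH (user_id);

CREATE TABLE card_part_0 PARTITION OF card_part
    FOR VALUES WITH (MODULUS 100, REMAINDER 0);
-- ...repeat for remainders 1 through 99
```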
On Dec 27, 2017 at 13:38, "Rick Otten" wrote:
On Wed, Dec 27, 2017 at 10:13 AM, Jean Baro wrote:
y on card  (cost=0.57..1854.66 rows=460 width=922)'
'  Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'
On Dec 25, 2017 at 01:10, "Jean Baro" wrote:
> Thanks for the clarification guys.
>
> It will be supe
it read
> so much in a write only process, and AWS support didn't answer yet.
>
> So, for you: try to throttle inserts so the WAL is never overfilled and you
> don't experience WALWrite locks, and then increase wal_buffers to the max.
>
> On Dec 24, 2017 at 21:51, "Jean Baro" wrote:
Sent from my iPhone
>
> > On Dec 24, 2017, at 2:51 PM, Jean Baro wrote:
Hi there,
We are testing a new application to try to find performance issues.
AWS RDS m4.large 500GB storage (SSD)
One table only, called Messages:
Uuid
Country (ISO)
Role (Text)
User id (Text)
GroupId (integer)
Channel (text)
Title (Text)
Payload (JSON, up to 20kb)
Starts_in (UTC)
Expires_in
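A possible DDL reading of that column list (the types are guesses from the names; the JSON payload maps naturally to jsonb):

```sql
-- Sketch only: column types inferred from the description above
CREATE TABLE messages (
    uuid        uuid PRIMARY KEY,
    country     char(2),        -- ISO code
    role        text,
    user_id     text,
    group_id    integer,
    channel     text,
    title       text,
    payload     jsonb,          -- up to ~20 kB per row
    starts_in   timestamptz,    -- stored in UTC
    expires_in  timestamptz
);
```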
Hi there,
We are creating a new DB which will behave much like a file system; I mean,
there will be no complex queries or joins running in the DB. The idea is to
grab the WHOLE set of messages for a particular user and then filter,
order, combine or full-text search in the function itself (AWS Lam
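That access pattern reduces the SQL side to a single indexed lookup per invocation; a sketch under the assumption the table is keyed by user and country (index and column names are assumptions, not from the thread):

```sql
-- One composite index serves the only query shape this workload needs
CREATE INDEX messages_user_idx ON messages (user_id, country);

-- Fetch the whole message set for one user; all filtering happens client-side
SELECT payload
FROM messages
WHERE user_id = $1
  AND country = $2;
```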