On Wed, Dec 27, 2017 at 2:10 PM, Mike Sofen wrote:
> In my experience, that 77ms will stay quite constant even if your db grew
> to > 1TB. Postgres IS amazing. BTW, for a db, you should always have
> provisioned IOPS or else your performance can vary wildly, since the SSDs
> are shared.
Sent: Wednesday, December 27, 2017 11:03 AM
Subject: Re: Batch insert heavily affecting query performance.
Sorry guys,
The performance problem is not caused by PG.
'Index Scan using idx_user_country on public.old_card (cost=0.57..1854.66
rows=460 width=922) (actual time=3.442..76.606 rows=200 loops=1)'
In my experience, that 77ms will stay quite constant even if your db grew to >
1TB. Postgres IS amazing. BTW, for a db, you should always have provisioned
IOPS or else your performance can vary wildly, since the SSDs are shared.
Re Lambda: another team is working on a new web app using Lambda
On 27/12/17 18:02, Jean Baro wrote:
Sorry guys,
The performance problem is not caused by PG.
'Index Scan using idx_user_country on public.old_card (cost=0.57..1854.66
rows=460 width=922) (actual time=3.442..76.606 rows=200 loops=1)'
' Output: id, user_id, user_country, user_channel, user_role,
created_by_system_key, created_by_us
General purpose, 500GB, but we are planning to increase it to 1TB before
going into production.
500GB: 1,500 IOPS (bursts up to 3,000 IOPS)
1TB: 3,000 IOPS
On Dec 27, 2017 at 14:23, "Jeff Janes" wrote:
> On Sun, Dec 24, 2017 at 11:51 AM, Jean Baro wrote:
>
>> Hi there,
>>
>> We are testing a new application to try to find performance issues.
Thanks Mike,
We are using the standard RDS instance m4.large; it's not Aurora, which is
a much more powerful server (according to AWS).
Yes, we could install it on EC2, but it would take some extra effort on
our side; it could be an investment, though, in case it helps us find the
bottleneck.
Thanks Jeremy,
We will provide a more complete EXPLAIN as other people have suggested.
I am glad we might end up with much better performance (currently each
query takes around 2 seconds!).
Cheers
On Dec 27, 2017 at 14:02, "Jeremy Finzel" wrote:
> The EXPLAIN
>
> 'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460 width=922)'
Thanks Rick,
We are now partitioning the DB (one table) into 100 sets of data.
As soon as we finish this new experiment we will provide a better EXPLAIN
as you suggested. :)
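One way such a 100-way split can be expressed (a minimal sketch, assuming PostgreSQL 10 declarative partitioning and a precomputed bucket column; the column list is invented and would have to match the real card table):

-- Parent table partitioned by a bucket derived from the user id,
-- e.g. bucket = abs(hashtext(user_id)) % 100, filled in on insert.
CREATE TABLE card_part (
    id            bigint,
    user_id       text,
    user_country  char(2),
    bucket        smallint NOT NULL
) PARTITION BY LIST (bucket);

-- Create the 100 child tables, one per bucket value.
DO $$
BEGIN
    FOR i IN 0..99 LOOP
        EXECUTE format(
            'CREATE TABLE card_part_%s PARTITION OF card_part FOR VALUES IN (%s)',
            i, i);
    END LOOP;
END $$;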
On Dec 27, 2017 at 13:38, "Rick Otten" wrote:
On Wed, Dec 27, 2017 at 10:13 AM, Jean Baro wrote:
> Hello,
>
> We are still seeing queries (by UserID + UserCountry) taking over 2
> seconds, even when there is no batch insert going on at the same time.
On Sun, Dec 24, 2017 at 11:51 AM, Jean Baro wrote:
> Hi there,
>
> We are testing a new application to try to find performance issues.
>
> AWS RDS m4.large 500GB storage (SSD)
>
Is that general purpose SSD, or provisioned IOPS SSD? If provisioned, what
is the level of provisioning?
Cheers,
Jeff
>
> The EXPLAIN
>
> 'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460
> width=922)'
> ' Index Cond: (((user_id)::text = '4684'::text) AND (user_country =
> 'BR'::bpchar))'
>
Show three runs of the full EXPLAIN ANALYZE plan for the given condition so that
we can also see cold vs. warm cache behavior.
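For reference, what is being asked for would look roughly like this from psql (a sketch; \timing is psql-specific, and the table, index and literal values are taken from the plan already posted in this thread):

\timing on
EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
SELECT *
FROM   old_card
WHERE  user_id = '4684'
  AND  user_country = 'BR';
-- Run the same statement three times in a row: the first run shows the
-- cold-cache cost ("shared read=..." in the Buffers lines), the later
-- runs the warm-cache cost ("shared hit=...").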
Hi Jean,
I’ve used Postgres on a regular EC2 instance (an m4.xlarge), storing complex
genomic data, hundreds of millions of rows in a table, and “normal” queries that
used an index returned in 50-100ms, depending on the query… so this isn’t a
Postgres issue per se.
Your table and index s
On Wed, Dec 27, 2017 at 10:13 AM, Jean Baro wrote:
> Hello,
>
> We are still seeing queries (by UserID + UserCountry) taking over 2
> seconds, even when there is no batch insert going on at the same time.
>
> Each query returns from 100 to 200 messages, which would be a 400kb
> payload, which
Hello,
We are still seeing queries (by UserID + UserCountry) taking over 2
seconds, even when there is no batch insert going on at the same time.
Each query returns from 100 to 200 messages, which would be a 400kb
payload, which is super tiny.
I don't know what else I can do with the limitations
Thanks for the clarification guys.
It will be super useful. After trying this I'll post the results!
Merry Christmas!
On Dec 25, 2017 at 00:59, "Danylo Hlynskyi" wrote:
> I had an opportunity to perform insertion of 700MM rows into Aurora
> Postgresql, for which performance insights are available.
I had an opportunity to perform insertion of 700MM rows into Aurora
PostgreSQL, for which performance insights are available. It turns out that
there are two stages of insert slowdown: the first happens when the max WAL
buffers limit is reached, the second around 1 hour after.
The first stage cuts insert
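Some of the usual knobs for that first, WAL-related stage on a self-managed instance look like the following (a sketch with purely illustrative values, not a recommendation for this workload; on RDS/Aurora the same parameters are set through a parameter group instead of ALTER SYSTEM):

ALTER SYSTEM SET max_wal_size = '16GB';        -- fewer forced checkpoints during the load
ALTER SYSTEM SET checkpoint_timeout = '30min';
ALTER SYSTEM SET wal_buffers = '64MB';         -- needs a server restart to take effect
SELECT pg_reload_conf();                       -- picks up the first two settings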
Yes, it would/does make a difference! When you do it with one connection
you should see a big performance gain. Waits on relation extension locks
(locktype=extend) can happen when many concurrent connections try to
insert into the same table at the same time. Each insert request
results in a
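A quick way to check whether that is happening during the batch insert (a sketch; pg_stat_activity has the wait_event columns from 9.6 on):

SELECT a.pid, a.wait_event_type, a.wait_event, a.query
FROM   pg_stat_activity a
JOIN   pg_locks l ON l.pid = a.pid
WHERE  l.locktype = 'extend'
  AND  NOT l.granted;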
Multiple connections, but we are going to test it with only one. Would it
make any difference?
Thanks
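For the single-connection test, the two usual shapes of a batched insert are a multi-row INSERT and COPY (a sketch; the column list is invented and has to match the real card table):

-- 1. Multi-row INSERT: one statement, many rows.
INSERT INTO card (id, user_id, user_country)
VALUES (1, '4684', 'BR'),
       (2, '4685', 'BR'),
       (3, '4686', 'AR');

-- 2. COPY, usually the fastest bulk-load path.
-- On RDS use psql's \copy (client-side) instead of a server-side file path.
COPY card (id, user_id, user_country)
FROM '/path/to/batch.csv' WITH (FORMAT csv);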
On Dec 24, 2017 at 21:52, "michael...@sqlexec.com" wrote:
> Are the inserts being done through one connection or multiple connections
> concurrently?
>
> Sent from my iPhone
>
> > On Dec 24, 2017, at 2:51 PM, Jean Baro wrote:
Are the inserts being done through one connection or multiple connections
concurrently?
Sent from my iPhone
> On Dec 24, 2017, at 2:51 PM, Jean Baro wrote:
>
> Hi there,
>
> We are testing a new application to try to find performance issues.
>
> AWS RDS m4.large 500GB storage (SSD)
>
> One