Ok, will do that. Thanks a lot.
Hi All,
I was wondering whether the case is solved or still continuing. As a
Postgres newbie, I can't understand any of the terms (JIT, tuple
deformation) as you mentioned above. Please, anyone, let me know what is
the current scenario.
Thanks,
Dineshkumar.
On Mon, Jul 30, 2018 at 3:02 PM, Andres Freund wrote:
> Hi,
>
> On 2018-07-30 13:31:33 -0400, Jeff Janes wrote:
> > I don't know where the time is going with the as-committed JIT. None of
> > the JIT-specific timings reported by EXPLAIN (ANALYZE) add up to anything
> > close to the slow-down
On Mon, Jul 30, 2018 at 1:23 PM, Andres Freund wrote:
> On 2018-07-30 07:19:07 -0400, Jeff Janes wrote:
>
> > And indeed, in my hands JIT makes it almost 3 times worse.
>
> Not in my measurement. Your example won't use JIT at all, because it's
> below the cost threshold. So I think you might
Hi,
On 2018-07-30 13:31:33 -0400, Jeff Janes wrote:
> I don't know where the time is going with the as-committed JIT. None of
> the JIT-specific timings reported by EXPLAIN (ANALYZE) add up to anything
> close to the slow-down I'm seeing. Shouldn't compiling and optimization
> time show up
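For reference, a minimal sketch (assuming PostgreSQL 11 or later built with
LLVM support) of how to force JIT below its default cost threshold so that
the compile and optimization timings show up in the plan:

SET jit = on;
SET jit_above_cost = 0;  -- default is 100000; 0 forces JIT for every query
EXPLAIN (ANALYZE)
SELECT sum(g) FROM generate_series(1, 1000000) AS g;
-- the plan footer then carries a "JIT:" section with Generation, Inlining,
-- Optimization and Emission timings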
Hi,
On 2018-07-30 18:01:34 +0200, Pavel Stehule wrote:
> look on
> http://www.postgresql-archive.org/PATCH-LLVM-tuple-deforming-improvements-td6029385.html
> thread, please.
Given the results I just posted in the sibling email I don't think those
issues apply here.
Greetings,
Andres Freund
On 29 July 2018 at 17:38, Dinesh Kumar wrote:
> I found performance variance between accessing int1 and int200 column which
> is quite large.
Have a look at slot_deform_tuple and heap_deform_tuple. You'll see
that tuples are deformed starting at the first attribute. If you ask
for attribu
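A minimal sketch of how one could reproduce the effect being described
(table and column names are hypothetical, modeled on the int1/int200
columns from the original report):

DO $$
DECLARE
  ddl text;
BEGIN
  -- build a 200-column table: int1 ... int200
  SELECT 'CREATE TABLE wide_t (' ||
         string_agg(format('int%s int', g), ', ') || ')'
    INTO ddl
    FROM generate_series(1, 200) AS g;
  EXECUTE ddl;
END $$;

-- populate only the first column; the rest stay NULL
INSERT INTO wide_t (int1)
SELECT (random() * 1000)::int FROM generate_series(1, 1000000);

EXPLAIN ANALYZE SELECT sum(int1) FROM wide_t;   -- first attribute: cheap
EXPLAIN ANALYZE SELECT sum(int200) FROM wide_t; -- last attribute: all 199
                                                -- earlier ones walked first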
random data (from
generate_series) and the remaining columns are all null. The table has 100
rows.
I found performance variance between accessing int1 and int200 column which
is quite large.
Reports from pg_stat_statements:
query | total_time | min_time | max_time
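A report like that can be produced with a query along these lines (column
names as in pg_stat_statements on PostgreSQL 10; later releases renamed
total_time to total_exec_time):

SELECT query, total_time, min_time, max_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;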
(attached
- test with secondary index and without secondary index - only primary
keys), the SSD always wins in performance.
Only for this query was it the loser, so I brought this topic to the list.
Today I will not be able to check your test information in more
detail, but I will return with more
And perhaps more interesting:
Re-running query 9 against the (single) HDD setup *but* with pgsql_tmp
symlinked to the 2x SSD RAID0: 15 minutes
I'm thinking that you have inadvertently configured your HDD test in
this way (you get 9 minutes because you have 2x HDDs). Essentially most
of the
FWIW:
re-running query 9 using the SSD setup as 2x crucial M550 RAID0: 10 minutes.
One more thought on this:
Query 9 does a lot of sorting to disk - so there will be writes for that
and all the reads for the table scans. Thus the location of your
instance's pgsql_tmp directory(s) will significantly influence results.
I'm wondering if in your HDD test the pgsql_tmp on the
>Model: 850 Evo 500 GB SATA III 6Gb/s -
please check the SSD "DRIVE HEALTH STATUS" and the "S.M.A.R.T. values of
the specified disk", for example with the "smartctl" tool
( https://www.smartmontools.org/ ) ( -x "Show all information for device" ).
Expected output with "Samsung SSD 850
On Wed, 18 Jul 2018 09:46:32 +0200, Fabio Pardi
wrote:
RAID 0 to store production data should never be used. Never a good
idea, in my opinion.
RAID 0 by itself should never be used. Combined with other RAID
levels, it can boost performance without sacrificing reliability.
https
Ok, so you are using 1 instance and tablespaces. Also I see you are
restarting the instance between HDD and SSD tests, so all good there.
The point I made about having the OS on the SSDs means that if these
tests make your system swap, and your swap device is on the SSDs (which
is probably
Hi Neto,
RAID 0 to store production data should never be used. Never a good idea, in my
opinion.
Simple reason is that when you lose one disk, you lose everything.
If your goal is to bench the disk, go for single disk.
If you want to be closer to a production setup, go for RAID 10, or pick a
2018-07-17 22:13 GMT-03:00 Neto pr :
> 2018-07-17 20:04 GMT-03:00 Mark Kirkwood :
>> Ok, so dropping the cache is good.
>>
>> How are you ensuring that you have one test setup on the HDDs and one on the
>> SSDs? i.e do you have 2 postgres instances? or are you using one instance
>> with
2018-07-17 20:04 GMT-03:00 Mark Kirkwood :
> Ok, so dropping the cache is good.
>
> How are you ensuring that you have one test setup on the HDDs and one on the
> SSDs? i.e do you have 2 postgres instances? or are you using one instance
> with tablespaces to locate the relevant tables? If the 2nd
Yeah,
A +1 to telling us the model. In particular the later EVOs use TLC NAND
with a small SLC cache... and when you exhaust the SLC cache the
performance can be worse than a HDD...
On 18/07/18 01:44, Nicolas Charles wrote:
Hi Neto,
You should list the SSD model also - there are plenty
Ok, so dropping the cache is good.
How are you ensuring that you have one test setup on the HDDs and one on
the SSDs? i.e do you have 2 postgres instances? or are you using one
instance with tablespaces to locate the relevant tables? If the 2nd case
then you will get pollution of
On 07/17/2018 04:05 PM, Neto pr wrote:
> 2018-07-17 10:55 GMT-03:00 Fabio Pardi :
>> Also I think it makes little sense to test on RAID 0. I would start by
>> performing tests on a single disk, bypassing RAID (or, as mentioned, at
>> least disabling the cache).
>>
>
> But in my case, both the 2
f Samsung's advertisement
says yes. If I have to buy another SSD I will prefer Intel SSDs.
I had a previous problem with one (a Samsung EVO): it lost in
performance to a SAS HDD, but that SAS HDD had a 12 Gb/s transfer
rate while the SSD was 6 Gb/s. But now I tested against an HDD
(7200 RPM) that has the same transfer rate as the SSD, 6 Gb/s.
If you have a RAID cache, I would disable it, since we are only focusing
on the disks. The cache can give you inconsistent data (even if it looks
like that is not the case here).
Also, we can take a step backward and exclude Postgres from the picture
for the moment:
try to perform a dd test in reading from
On 17.07.2018 15:44, Nicolas Charles wrote:
> - Partitions are not correctly aligned on the SSD blocks
Does that really make a noticeable difference? If yes, have you got some further
reading material on that?
Sorry, I replied to the wrong message before...
My response follows.
-
Thanks all, but I still have not figured it out.
This is really strange because the tests were done on the same machine
(I use an HP ML110 ProLiant, 8 GB RAM, Xeon 2.8 GHz processor (4
cores)), and PostgreSQL 10.1.
-
As already mentioned by Robert, please let us know if you made sure that
nothing was fetched from RAM during the faster test.
In other words, make sure that all caches are dropped between one test
and another.
Also, to better picture the situation, it would be good to know:
- which SSD (brand/model)
> Why did the HDD (7200 rpm) perform better?
Are these different systems? Have you ruled out that during the HDD test the
data was available in memory?
Can you post make and model of the SSD concerned? In general the cheaper
consumer grade ones cannot do sustained read/writes at anything like
their quoted max values.
regards
Mark
Can you show the configuration of postgresql.conf?
To query the configuration:
Select name, setting from pg_settings where name ~ 'buffers|cpu|^enable';
On 2018-07-17 13:17, Benjamin Scherrey wrote:
What's the on disk cache size for each drive? The better HDD
performance problem won't
Dear all,
Can some of you help me understand this?
This query plan is executed in the query below (query 9 of TPC-H
Benchmark, with scale 40, database with approximately 40 gb).
The experiment consisted of running the query on an HDD (RAID 0).
Then the same query is executed on an SSD (RAID 0).
Tom and Jeff,
Thanks very much for the suggestions!
Here's what I've found so far after playing around for a few more days:
> What is your default_statistics_target? What can you tell us about the
> distribution of parent_id? (exponential, power law, etc?). Can you show
> the results for
On Tue, Jul 10, 2018 at 11:07 AM, Lincoln Swaine-Moore <
lswainemo...@gmail.com> wrote:
>
>
>
> Something about the estimated row counts (this problem persisted after I
> tried ANALYZEing)
>
What is your default_statistics_target? What can you tell us about the
distribution of parent_id?
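A hedged sketch of the usual follow-up to those questions (the table name
parent_table is hypothetical; only the column parent_id is named in the
thread):

SHOW default_statistics_target;  -- 100 unless changed
-- raise the sample size for the skewed column, then re-analyze:
ALTER TABLE parent_table ALTER COLUMN parent_id SET STATISTICS 1000;
ANALYZE parent_table;
-- inspect what the planner now believes about the distribution:
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'parent_table' AND attname = 'parent_id';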
On Tue, Jul 10, 2018 at 11:38 AM, Justin Pryzby
wrote:
>
> > 2. Make stats available in `pg_stat_statements` (or alternate view that
> > could be joined on). The block stats are already available here, but
> > others like CPU usage, page faults, and context switches are not.
>
>
Lincoln Swaine-Moore writes:
> Here's the result (I turned off the timeout and got it to finish):
> ...
I think the core of the problem here is bad rowcount estimation. We can't
tell from your output how many rows really match
> WHERE "a"."parent_id" IN (
> 49188,14816,14758,8402
> )
but
"a"."parent_id" IN (34226, 24506, 40987, 27162)
ORDER BY "a"."tmstmp" DESC
LIMIT 20;
Maybe that LIMIT 20 is not pushed down to partitions in your version?
Regards
PAscal
I'm looking for a way of gathering performance stats in a more usable
way than turning on `log_statement_stats` (or other related modules).
The problem I have with the log_*_stats family of modules is that they
log every single query, which makes them unusable in production. Aside
from consuming
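For comparison, a minimal sketch of the pg_stat_statements alternative
mentioned in this thread; it aggregates per statement rather than logging
every execution, though as noted it lacks CPU, page-fault and
context-switch counters:

-- requires shared_preload_libraries = 'pg_stat_statements' in postgresql.conf
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
SELECT query, calls, total_time, shared_blks_hit, shared_blks_read
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;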
On 01.07.18 11:31, AJG wrote:
> Where trigger overhead was mentioned, and an alternative solution provided
> (logical replication slot monitoring).
>
> My 2 part question is.
>
> 1) Does anybody have any benchmarks re: trigger overhead/performance or have
> any expe
> Have you run an analyze on all your tables after the upgrade to 10? The
> estimates are way off.
Thank you. I embarrassingly missed that step. That fixed the problem. In the
future, if estimates are way off… I’ll run analyze.
Have you run an analyze on all your tables after the upgrade to 10? The
estimates are way off.
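(For the record: a pg_upgrade-style major-version upgrade does not carry
planner statistics over, so the minimal fix is simply:)

ANALYZE;  -- rebuilds statistics for every table in the current database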
Greetings,
I have several similar queries that are all suffering from a dramatic slow down
after upgrading a RDS instance from 9.3 to 10.3. The query time goes from 28
milliseconds to over 70 seconds. I could use some help trying to figure out the
problem. This is one of the queries:
SELECT
It is certainly the TCP loopback overhead you are experiencing.
On Mon, May 07, 2018 at 07:33:17PM -0400, Julian Wilson wrote:
> Hi,
>
> I'm trying to debug and improve the performance of my time bucketing query.
> What I'm trying to do is essentially bucket by an arbitrary interval and
> then do some aggregations within that interval (min,max,
On Sun, Apr 29, 2018 at 10:05:23AM +0530, Kaushal Shriyan wrote:
> # SELECT serorgid,appname,sum(message_count) AS mtrc0,date_trunc('day',
> client_received_start_timestamp at time zone '+5:30:0')::timestamp without
> time zone AS time_unit FROM analytics."test.prod.fact" WHERE
>
gt;> >
>> > I am running postgresql db server 9.4.14 on AWS of C4.2xlarge instance
>> type
>> > with 500 GB volume of volume type io1 with 25000 IOPS and I am seeing
>> > performance issues. The sql query response takes around *127713.413 ms
>> *time
>
e of volume type io1 with 25000 IOPS and I am seeing
> > performance issues. The SQL query response takes around 127713.413 ms.
> > Is there a way to find out the bottleneck?
>
> Send the output of "explain(analyze,buffers)" for the query?
>
> J
Thank you Pavel for those ideas.
I should probably have mentioned we don't have access to the file
system on the PostgreSQL server, as it's provided by Amazon AWS RDS
service.
These functions look good when you can push the file to be loaded into
the database file system.
I'll see if it's
2018-03-21 13:56 GMT+01:00 Pavel Stehule :
>
>
> 2018-03-21 13:03 GMT+01:00 Gary Cowell :
>
>> We are trying to implement postgresql code to load a large object into
>> a postgresql bytea in chunks to avoid loading the file into memory in
>> the
Can you use a materialized view to do the bytea_agg() and then refresh
concurrently whenever you need updated data?
The refresh concurrently might take a few hours or days to run to keep the
matview up to date, but your queries would be pretty fast.
A possible problem is that you are running out
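A hedged sketch of that suggestion (all names hypothetical; the thread's
bytea_agg() may be a custom aggregate, so the built-in string_agg over
bytea is used here instead):

CREATE MATERIALIZED VIEW blob_mv AS
SELECT file_id,
       string_agg(chunk, ''::bytea ORDER BY chunk_no) AS whole_file
FROM file_chunks
GROUP BY file_id;

-- REFRESH ... CONCURRENTLY requires a unique index on the matview
CREATE UNIQUE INDEX ON blob_mv (file_id);

REFRESH MATERIALIZED VIEW CONCURRENTLY blob_mv;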
David Osborne writes:
> Hi, yes I've run "analyse" against the newly restored database. Should that
> be enough?
My apologies, you did say that further down in the original message.
It looks like the core of the problem is the poor rowcount estimation
here:
SQL
It is crazy to create 3000 partia
DML operations on a particular form scan all the Indexes created for the
entire table instead of scanning the Indexes created for that particular
form ID Range. This degrades Planner Performance and Query Time more than
10 times, as below.
Query Result for the table with 3000 Partial Indexes (15 Indexes per form):
explain analyse select id from form_data_copy where id between 3001 and
4000 and bigint50=789;
QUERY PLAN
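For context, a hedged reconstruction of the pattern under discussion,
reusing the names from the quoted query: one partial index per form's id
range, repeated for every form.

CREATE INDEX ON form_data_copy (bigint50)
WHERE id BETWEEN 3001 AND 4000;
-- ... and ~2999 more like it. At plan time the planner must test every
-- index's predicate against the query, which is where the >10x planning
-- overhead comes from.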
On 23.02.2018 at 20:29, Daulat Ram wrote:
We have the following requirements in single query or any proper solution.
Please help on this.
How many sessions are currently opened.
ask pg_stat_activity, via select * from pg_stat_activity
-and if opened then how many queries have
Hello experts,
We have the following requirements in single query or any proper solution.
Please help on this.
How many sessions are currently opened.
-and if opened then how many queries have executed on that session.
-and also we have to trace how much time each query is taking.
-and
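As answered above, pg_stat_activity covers the first point directly; a
minimal sketch (note it only exposes each session's current or most recent
statement, so per-session query counts and per-query timings need
pg_stat_statements or log analysis instead):

SELECT count(*) AS open_sessions FROM pg_stat_activity;

SELECT pid, usename, state,
       now() - query_start AS running_for,
       query
FROM pg_stat_activity
WHERE state <> 'idle';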
+1 for atop. Be sure to adjust the sampling interval so it suits your
needs. It'll tell you what caused the spike.
Alternatively you could probably use sysdig, but I expect that'd result in
a fair performance hit if your system is already struggling.
Micky
On 14 February 2018 at 08:15, Gunnar
On Tue, 2018-02-06 at 15:31 +0100, Thomas Güttler wrote:
> But one thing is still unclear. Imagine I see a peak in the chart. The peak
> was some hours ago. AFAIK sar has only the aggregated numbers.
>
> But I need to know details if I want to answer the question "Why?". The peak
> has gone
On 05.02.2018 at 17:22, Andrew Kerber wrote:
Oracle has a problem with transparent hugepages, postgres may well
have the same problem, so consider disabling transparent hugepages.
yes, that's true.
Regards, Andreas
--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
On 05.02.2018 at 14:14, Thomas Güttler wrote:
What do you suggest to get some reliable figures?
sar is often recommended, see
https://blog.2ndquadrant.com/in-the-defense-of-sar/.
Can you exclude other reasons like vacuum / vacuum freeze?
Regards, Andreas
--
2ndQuadrant - The
This is a bit off-topic, since it is not about the performance of PG itself.
But maybe some have the same issue.
We run PostgreSQL in virtual machines which are provided by our customer.
We are not responsible for the hypervisor and have no access to it.
The IO performance of our application
Thomas Kellerer wrote on 19.01.2018 at 17:48:
>
> I wonder what performance implications that has on a server with
> around 50-100 active connections (through pgBouncer).
>
> My understanding of the documentation is, that Postgres will work
> just fine if we lower the limit,
On Tue, Jan 9, 2018 at 4:09 PM, Kumar, Virendra
wrote:
> It did not seem to help.
> See attachment.
Yeah, while it's still writing, it's about half as much, but most of the
time seems to be in merging multiple data sets. I'm wondering
what non-default values you
Try it with something reasonable like 64MB and then post your query
plans to expla
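(Presumably the setting being discussed is work_mem, which caps the memory
a single sort or hash may use before spilling to disk; a minimal sketch of
the experiment:)

SET work_mem = '64MB';
EXPLAIN (ANALYZE)
SELECT g FROM generate_series(1, 1000000) AS g ORDER BY g;
-- the Sort node reports "Sort Method: quicksort Memory: ..." once the sort
-- fits, versus "external merge Disk: ..." when work_mem is too small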
machine out of memory if you do, but doubling or tripling it and
seeing the effect on the query performance is a good place to start.
The good news is that most of your row estimates are about right, so
the query planner is doing what it can to make the query fast, but I'm
guessing if you get
Hello Gurus,
I am struggling to tune a query which is doing join on top of aggregate for
around 3 million rows. The plan and SQL is attached to the email.
Below is system Details:
PGSQL version - 10.1
OS - RHEL 3.10.0-693.5.2.el7.x86_64
Binary - Downloaded from postgres.org, compiled and
On Dec 27, 2017, at 8:20 PM, Justin Pryzby wrote:
>
> That's one of the major use cases for partitioning (DROP rather than DELETE
> and
> thus avoiding any following vacuum+analyze).
>
On Wed, Dec 27, 2017 at 07:54:23PM -0500, Robert Blayzor wrote:
> Question on large tables…
>
> When should one consider table partitioning vs. just stuffing 10 million rows
> into one table?
IMO, whenever constraint exclusion, DROP vs DELETE, or seq scan on individual
children justify the
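A hedged sketch of the DROP-instead-of-DELETE retention pattern with
PostgreSQL 10 declarative partitioning (names hypothetical, modeled on the
CDR table described below):

CREATE TABLE cdr (
    call_id   bigint,
    call_time timestamptz NOT NULL
) PARTITION BY RANGE (call_time);

CREATE TABLE cdr_2017_12 PARTITION OF cdr
    FOR VALUES FROM ('2017-12-01') TO ('2018-01-01');

-- retention: dropping a whole partition is a near-instant metadata change
-- and leaves no dead rows for VACUUM, unlike DELETE on one big table
DROP TABLE cdr_2017_12;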
Question on large tables…
When should one consider table partitioning vs. just stuffing 10 million rows
into one table?
I currently have CDRs that are injected into a table at the rate of over
100,000 a day, which is large.
At some point I’ll want to prune these records out, so being able
to determine the cause. We
typically see low CPU and high IOPS just prior to our degraded performance. Our
production environment runs provisioned IOPS to avoid this very issue.
Regards, David
In my experience, that 77ms will stay quite constant even if your db grew to >
1TB. Postgres IS amazing. BTW, for a db, you should always have provisioned
IOPS or else your performance can vary wildly, since the SSDs are shared.
Re Lambda: another team is working on a new web app us
Sorry guys,
The performance problem is not caused by PG.
'Index Scan using idx_user_country on public.old_card (cost=0.57..1854.66
rows=460 width=922) (actual time=3.442..76.606 rows=200 loops=1)'
' Output: id, user_id, user_country, user_channel, user_role,
created_by_system_key
On Sun, Dec 24, 2017 at 11:51 AM, Jean Baro <jfb...@gmail.com> wrote:
>
>> Hi there,
>>
>> We are testing a new application to try to find performance issues.
>>
>> AWS RDS m4.large 500GB storage (SSD)
>>
>
> Is that general purpose SSD, or provisioned IOPS SSD? If provisioned,
> what is the level of provisioning?
>
> Cheers,
>
> Jeff
>
r data/structure that could result in such terrible performance.
Mike Sofen
From: Jean Baro [mailto:jfb...@gmail.com]
Sent: Wednesday, December 27, 2017 7:14 AM
Hello,
We are still seeing queries (by UserID + UserCountry) taking over 2
seconds, even when there is no batch insert going on
Thanks Jeremy,
We will provide a more complete EXPLAIN as other people have suggested.
I am glad we might end up with a much better performance (currently each
query takes around 2 seconds!).
Cheers
On 27 Dec 2017 14:02, "Jeremy Finzel" <finz...@gmail.com> wrote:
Dec 2017 01:10, "Jean Baro" <jfb...@gmail.com> wrote:
>
>> Thanks for the clarification guys.
>>
>> It will be super useful. After trying this I'll post the results!
>>
>> Merry Christmas!
>>
>> On 25 Dec 2017 00:59, "D