On 5.4.2012 20:43, Merlin Moncure wrote:
> The original problem is a read-based performance issue, though, and this
> will not have any effect on that whatsoever (although it's still
> excellent advice). Also, dd should bypass the o/s buffer cache. I'm
> still pretty much convinced that there is a fundamental…
On Thu, Apr 5, 2012 at 2:47 PM, Julien Cigar wrote:
> - http://www.pastie.org/3731956 : with default config
> - http://www.pastie.org/3731960 : this is with enable_seq_scan = off
It looks like the join selectivity of (context_to_context_links,
ancestors) is being overestimated by almost two orders of magnitude…
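A quick way to verify a misestimate like this is to compare the planner's row
estimates against the actual counts from EXPLAIN ANALYZE. A minimal sketch;
the table names are from the thread, but the join condition (ancestor_id) is
an assumption, since the real query is only in the pasties above:

    -- Compare "rows=<estimate>" with the actual rows on the join node.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM context_to_context_links l
    JOIN ancestors a ON a.id = l.ancestor_id;

    -- If the estimate is ~100x off, refreshing statistics is the first
    -- thing to try:
    ANALYZE context_to_context_links;
    ANALYZE ancestors;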
On Thu, Apr 5, 2012 at 10:49 AM, Tomas Vondra wrote:
> On 5.4.2012 17:17, Cesar Martin wrote:
>> Well, I have installed megacli on the server and attached the results in
>> the file megacli.txt. Also we have "Dell Open Manage" installed on the
>> server, which can generate a log of the H800. I attach it to this mail
>> with the name lsi_0403.
On 5.4.2012 17:17, Cesar Martin wrote:
> Well, I have installed megacli on the server and attached the results in
> the file megacli.txt. Also we have "Dell Open Manage" installed on the
> server, which can generate a log of the H800. I attach it to this mail
> with the name lsi_0403.
>
> About dirty limits, I have default values:
Istvan Endredy wrote:
> I've run into a planning problem.
> If effective_cache_size has a greater value (6GB), this select gets
> a bad plan and a long query time (2000ms):
Could you try that configuration with one change and let us know how
it goes?:
set cpu_tuple_cost = '0.05';
I've s…
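For anyone wanting to try this without touching postgresql.conf,
cpu_tuple_cost can be changed per session; a minimal sketch (the problem
query itself is not shown in this excerpt, so a comment stands in for it):

    SET cpu_tuple_cost = '0.05';  -- session-local; the default is 0.01
    SHOW cpu_tuple_cost;          -- confirm the new value
    -- re-run the problem query under EXPLAIN ANALYZE here
    RESET cpu_tuple_cost;         -- back to the server default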
I know this is a very general question, but if you guys had to specify a
system (could be one server or a cluster) with a sustainable transaction
rate of 1.5M tps running PostgreSQL, what configuration and hardware
would you be looking for?
The transaction distribution there is 90% writes/updates and 1…
Kim Hansen wrote:
> I have a query where the planner makes a wrong cost estimate, it
> looks like it underestimates the cost of a "Bitmap Heap Scan"
> compared to an "Index Scan".
> What can I do to fix the cost estimate?
Could you try running the query with cpu_tuple_cost = 0.05 and let
us know how it goes?
Tomas Vondra wrote:
> On 5.4.2012 15:44, superman0920 wrote:
>> Today I installed PG and MySQL on a server. I inserted 85 rows of
>> records into each db.
>> I ran "select count(*) from poi_all_new" on both dbs.
>> MySQL takes 0.9s
>> PG takes 364s
> Are you sure the comparison was fair, i.e. both…
On 5.4.2012 15:44, superman0920 wrote:
> Sure, I will post that tomorrow.
>
> Today I installed PG and MySQL on a server. I inserted 85 rows of
> records into each db.
> I ran "select count(*) from poi_all_new" on both dbs.
> MySQL takes 0.9s
> PG takes 364s
First of all, keep the list (pgsql-performance…
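A count(*) that slow usually points at severe table bloat (many dead rows)
or an unfair test (cold cache, different data volumes) rather than raw
engine speed. A hedged diagnostic sketch, using the table name from the
thread:

    -- On-disk size of the table; a huge size for few rows suggests bloat.
    SELECT pg_size_pretty(pg_relation_size('poi_all_new'));

    -- Live vs. dead tuples as tracked by the statistics collector.
    SELECT n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'poi_all_new';

    -- Show where the 364s actually go.
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM poi_all_new;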
On Wed, Apr 4, 2012 at 3:22 AM, ahchuan wrote:
> Hi All,
>
>
> I am new to using PostgreSQL; I now support a system that has been
> running on PostgreSQL. Recently I found that the database is
> consuming disk space rapidly: it started from 9GB and has now grown
> to 40GB within 4-5 months.
>
>
Julien Cigar wrote:
> I tried to play with the various cost settings but it doesn't
> change anything, except setting random_page_cost to 1 (which would
> lead to bad plans for other queries, so it's not a solution)
Yeah, you clearly don't have the active portion of your database
fully cached, so you…
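If lower page costs do help a mostly-cached workload, one way to avoid the
"random_page_cost = 1 globally" problem is to scope the change to a single
transaction with SET LOCAL; a minimal sketch (the cost values are
illustrative, not from the thread):

    BEGIN;
    SET LOCAL random_page_cost = 2;  -- default is 4; lower when data is cached
    SET LOCAL seq_page_cost = 1;     -- keep the seq/random ratio explicit
    -- EXPLAIN ANALYZE the problem query here
    COMMIT;                          -- settings revert after the transaction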
On 04/04/2012 05:22 AM, ahchuan wrote:
Hi All,
I am new to using PostgreSQL; I now support a system that has been
running on PostgreSQL. Recently I found that the database is
consuming disk space rapidly: it started from 9GB and has now grown
to 40GB within 4-5 months.
I tried to do a full…
Hi All,
I am new to using PostgreSQL; I now support a system that has been
running on PostgreSQL. Recently I found that the database is
consuming disk space rapidly: it started from 9GB and has now grown
to 40GB within 4-5 months.
I tried to do a full vacuum of the database but then I get this…
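Before vacuuming blindly, it usually pays to find out where the 40GB
actually lives. A sketch listing the largest tables including their indexes
and TOAST data (no names from the thread are assumed):

    SELECT c.relname,
           pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
      AND c.relkind = 'r'          -- ordinary tables only
    ORDER BY pg_total_relation_size(c.oid) DESC
    LIMIT 10;

Note that VACUUM FULL rewrites the table under an exclusive lock; if
autovacuum is keeping up, a plain VACUUM plus sensible autovacuum settings
is usually the better answer.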
Hi All
I have a query where the planner makes a wrong cost estimate; it looks
like it underestimates the cost of a "Bitmap Heap Scan" compared to an
"Index Scan".
These are the two plans, which I have also pasted below:
Slow (189ms): http://explain.depesz.com/s/2Wq
Fast (21ms): http://explain.depesz.com/s/…
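One session-local way to confirm the misestimate is to disable bitmap scans
and compare the two EXPLAIN ANALYZE runs directly (the actual query is in
the depesz links above, so a placeholder comment is used):

    SET enable_bitmapscan = off;
    -- EXPLAIN (ANALYZE, BUFFERS) <the query>;  -- should now pick the index scan
    RESET enable_bitmapscan;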
-----Original Message-----
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of David Kerr
Sent: Wednesday, 4 April 2012 11:37 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] pg_autovacuum in PG9.x
Howdy,
What is/is there a replacement…
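For context: the pg_autovacuum catalog from 8.x is gone; since 8.4 its
per-table knobs live on as storage parameters, so the built-in autovacuum
daemon plus ALTER TABLE is the replacement. A sketch with a hypothetical
table name and illustrative values:

    ALTER TABLE my_big_table SET (
        autovacuum_vacuum_scale_factor  = 0.05,  -- vacuum after ~5% dead rows
        autovacuum_analyze_scale_factor = 0.02   -- analyze after ~2% churn
    );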
Hi,
I've run into a planning problem.
Dedicated PostgreSQL Server:
"PostgreSQL 9.1.3 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2
20080704 (Red Hat 4.1.2-46), 64-bit"
Memory: 8GB
4 CPUs
The problem is reduced to the following: there are 2 tables:
- product (3 million rows, 1GB)
- produc…
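effective_cache_size is a session-settable planner parameter, so both plans
can be reproduced side by side without a restart; a minimal sketch (the
query itself follows in the thread and is not repeated here):

    SET effective_cache_size = '6GB';   -- the value that yields the bad plan
    -- EXPLAIN ANALYZE <the query>;
    SET effective_cache_size = '1GB';   -- a smaller value for comparison
    -- EXPLAIN ANALYZE <the query>;
    RESET effective_cache_size;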
> From: Tomas Vondra
> But the fluctuation, that surely is strange. What are the page cache
> dirty limits, i.e.
>
> cat /proc/sys/vm/dirty_background_ratio
> cat /proc/sys/vm/dirty_ratio
>
> That's probably the #1 source I've seen responsible for such issues (on
> machines with a lot of RAM).
>
Hello,
I have an extremely bad plan for one of my colleague's queries. Basically
PostgreSQL chooses a seq scan instead of an index scan. This is on:
antabif=# select version();
version…
On Tue, Apr 3, 2012 at 7:04 PM, Dave Crooke wrote:
>On Tue, Apr 3, 2012 at 10:38 AM, Claudio Freire wrote:
>> You perform 8 roundtrips minimum per event, so that's 375us per query.
>> It doesn't look like much. That's probably Nagle and task-switching
>> time; I don't think you can get it much lower…
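Since most of those 375us are per-round-trip overhead rather than query
work, cutting the number of round trips usually helps more than shaving
each one. A hypothetical sketch: one multi-row INSERT replacing several
single-row statements (the events table and its columns are assumptions,
not from the thread):

    INSERT INTO events (ts, payload)
    VALUES (now(), 'a'),
           (now(), 'b'),
           (now(), 'c');  -- one round trip instead of three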