Hello,
we have some performance problems with PostgreSQL 8.0.4, more precisely
with vacuuming a 'large' database with a lot of deleted rows.
We had a 3.2 GB database, consisting mainly of 4 large tables, two of
them (say tables A and B) having about 14,000,000 rows and 1 GB of
size each, and two
Tom Lane wrote:
Tomas Vondra [EMAIL PROTECTED] writes:
1) drop, create and fill table B (aggregated data from A, C, D)
2) copy 'old' data from A, C and D to A_old, C_old and D_old
3) delete old data from A, C, D
4) dump data from A_old, C_old and D_old
5) truncate tables A, C, D
6) vacuum
Probably the indexes are bloated after the vacuum full. I think the
best way to get rid of the fat is to recreate both tables and indexes
anew. For this the best tool would be to CLUSTER the tables on some
index, probably the primary key. This will be much faster than
VACUUMing the tables,
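On the 8.0.x release discussed here, the suggested approach might look roughly like this ("a" and "a_pkey" are placeholder names standing in for the thread's table A and its primary key, not names from the thread):

```sql
-- Sketch only: pre-8.3 syntax is CLUSTER indexname ON tablename.
-- CLUSTER rewrites the table in index order and rebuilds all of its
-- indexes, which removes the bloat in one step.
CLUSTER a_pkey ON a;
ANALYZE a;   -- refresh planner statistics for the rewritten table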
I guess you're right. I forgot to mention there are 12 composite indexes
on the largest (and not deleted) table B, having about 14,000,000 rows
and 1 GB of data. I'll try to dump/reload the database ...
Aaargh, the problem probably is not caused by the largest table, as it
was dropped, filled
Huh, I didn't suggest to dump/reload. I suggested CLUSTER. You need to
apply it only to tables where you have lots of dead tuples, which IIRC
are A, C and D.
Sorry, I should read more carefully. Will clustering a table according
to one index solve problems with all the indexes on the table?
Hi Tomas,
Tomas wrote:
We've decided to remove unneeded 'old' data, which means removing about
99.999% of rows from tables A, C and D (about 2 GB of data). At the
beginning, the B table (containing data aggregated from A, C and D) was
emptied (dropped and created) and filled in with current
If you want to benefit from the usage of an index, the query has to
contain some WHERE conditions (on the indexed columns). This is a
'select all' query - there is no way to speed it up using an index.
Tomas
Why does Postgres 8.1 not use the makse_kuupaev_idx index in the
following query?
How
More precisely - Postgres could use the index to speed up the
sorting, but in this case the sorting is very fast (less than one
second according to the output), so Postgres probably decided not
to use the index because it would be slower.
Btw. have you run ANALYZE on the table recently? What
It depends what is the purpose of the function. If it's mainly a
container for a heap of SQL queries along with some simple IF, ELSE
etc. then I use two simple ways to analyze the performance (or lack
of performance):
1) I use a lot of debug messages
2) I print out all the SQL and then execute
Assuming the table's NOT bloated, you may do well to increase the
effective_cache_size, which doesn't allocate anything,
snip
try setting it to something like 512MB or so.
It's currently set to 1000MB.
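For reference, a minimal sketch of the setting being discussed; the value is illustrative, and note that on 8.0/8.1 it is given in 8kB pages rather than memory units:

```
# postgresql.conf - effective_cache_size is only a planner hint;
# it does not allocate any memory
effective_cache_size = 65536    # 65536 x 8kB pages = 512MB
```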
If your table is bloating, and you don't have idle transactions hanging
of the
That's about 32% dead rows. Might be worth scheduling a vacuum full,
but it's not as bad as I was afraid it might be. It looks to me like you
could probably use a faster I/O subsystem in that machine though.
If the random page cost being lower fixes your issues, then I'd just run
with it lower
And finally, any ideas on planner constants? Here's what I'm using:
seq_page_cost = 0.5 # measured on an arbitrary scale
random_page_cost = 1.0 # same scale as above
cpu_tuple_cost = 0.001 # same scale as above
cpu_index_tuple_cost =
Hi,
I find various references on the list to this issue of queries
being too slow because the planner miscalculates things and
decides to go for a sequential scan when an index is available
and would lead to better performance.
Is this still an issue with the latest version? I'm
Can anybody help me out
I just wanted to know what the configuration settings would be for a
partitioned table, so as to make inserts faster on the partitioned tables.
Well, that depends. Many questions are important here. Will you insert
into several partitions or only to a single one? Do you
Following is one of the update queries and its explain plan, which takes
about 6 mins to execute. I am trying to find a way to execute it faster.
The functions used in the update statement are simple if-then-else tests
that return one value or the other.
If you want to disable it only for some tables, you can put special
values into pg_autovacuum. This won't disable the autovacuum daemon, but
some of the tables won't be vacuumed.
Tomas
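On 8.1 those per-table overrides live in the pg_autovacuum system catalog; a sketch of the "special values" mentioned above ('some_table' is a hypothetical name, and -1 means "use the default" for the threshold columns):

```sql
-- Disable autovacuum for a single table on 8.1
INSERT INTO pg_autovacuum
  (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
   anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit)
VALUES ('some_table'::regclass, false, -1, -1, -1, -1, -1, -1);
```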
Not quite a performance question, but I can't seem to find a simple
answer to this. We're using 8.1.4 and
I understand query tuning and table design play a large role in
performance, but taking that factor away
and focusing on just hardware, what is the best hardware to get for Pg
to work at the highest level
(meaning speed at returning results)?
Depends heavily on the particular application, but
Index is not used for an IS NULL condition:
create index makse_dokumnr_idx on makse(dokumnr);
explain select
sum( summa)
from MAKSE
where dokumnr is null
Aggregate (cost=131927.95..131927.96 rows=1 width=10)
  -> Seq Scan on makse (cost=0.00..131927.94 rows=1 width=10)
This is the critical point. You have this line:
There were 132969 unused item pointers.
Which says there are 132k or so dead rows in your table. Which means
vacuum / autovacuum isn't keeping up. Did you try to stop the update
several times? Each time it starts and then gets killed, it creates
Well, you're obviously right - I didn't know this. I guess I've found
that the index is not used for null values, and deduced somehow that
NULL values are not stored in the index.
Thanks, it's nice to find out a 'bug' before it's too late :-)
regards
Tomas
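On older releases like the 8.1 mentioned in this thread, where a plain btree index was not considered for an IS NULL condition, a common workaround was a partial index covering just the NULL rows (the index name below is made up for illustration):

```sql
-- Partial index holding only the rows the query asks for
CREATE INDEX makse_dokumnr_null_idx ON makse (dokumnr)
 WHERE dokumnr IS NULL;

-- The planner can then satisfy this from the (tiny) partial index:
-- SELECT sum(summa) FROM makse WHERE dokumnr IS NULL;
```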
Are you sure NULL values are not
Mark Roberts wrote:
On Tue, 2008-11-18 at 17:38 +0100, [EMAIL PROTECTED] wrote:
I bet there is no 'critical' length - this is just another case of
index scan vs. seqscan. The efficiency depends on the size of the
table / row, amount of data in the table, variability of the column used in
Thank you.
My 8.1.4 postgresql.conf does not contain such an option, so
vacuum_cost_delay is probably off.
Since the docs do not recommend any value, I planned to use 2000.
Will a value of 30 allow other clients to work while VACUUM FULL is running?
No, as someone already noted the VACUUM FULL is
2. Run the following commands periodically in this order:
VACUUM FULL;
vacuum full pg_shdepend;
CLUSTER rid on (toode);
CLUSTER dok on (kuupaev);
REINDEX DATABASE mydb;
REINDEX SYSTEM mydb;
ANALYZE;
Are all those commands required, or can something be left out?
Running CLUSTER after VACUUM
Risky to try in the prod server. Requires creating a randomly distributed
product_id testcase to measure the difference.
What should I do next?
I guess you have backups - take them, restore the database on a
different machine (preferably with the same / similar hw config) and
tune the queries on
My test computer has PostgreSQL 8.3, 4 GB RAM, SSD disks, and an Intel
X2Extreme CPU, so it is much faster than this prod server.
No idea how to emulate this environment.
I can create a new db in the prod server as a copy of the old one, but it
can be used late at night only.
Well, a faster but comparable system may not
Scott,
thank you.
work_mem = 512
This is very easy to try. You can change work_mem for just a single
session, and this can in some cases help performance quite a bit, and in
others not at all.
I would not recommend having it lower than at least 4MB on a server like
that unless you have a
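Trying a different work_mem for a single session might look like this (the value is arbitrary; the query placeholder is left as an ellipsis):

```sql
SET work_mem = '32MB';          -- affects only the current session
EXPLAIN ANALYZE SELECT ...;     -- re-run the query in question
RESET work_mem;                 -- back to the server-wide default
```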
On 17.11.2010 05:47, Pavel Stehule wrote:
2010/11/17 Humair Mohammed huma...@hotmail.com:
There are no indexes on the tables either in SQL Server or Postgresql - I am
comparing apples to apples here. I ran ANALYZE on the postgresql tables,
Actually no, you're not comparing apples to
On 16.2.2011 21:51, Chris Hoover wrote:
All,
I'm trying to estimate the size of my hot data set, and wanted to get
some validation that I'm doing this correctly.
Basically, I'm using the sum(heap_blks_read + idx_blks_read) from
pg_statio_all_tables, and diffing the numbers over a
On 6.4.2011 17:33, Kevin Grittner wrote:
Maria L. Wilson maria.l.wilso...@nasa.gov wrote:
Autovacuum is not running - but regular vacuums are being done
twice daily.
Is the ANALYZE keyword used on those VACUUM runs? What version of
PostgreSQL is this? If it's old enough to need fsm
On 4.4.2011 16:32, Kevin Grittner wrote:
Nothing there makes a write glut on checkpoint less likely to be the
cause. Without a BBU write-back cache it is actually *more* likely,
and having enough RAM to hold the whole database makes it *more*
likely. If you haven't placed your pg_xlog
On 31.3.2011 19:26, Laszlo Nagy wrote:
For this query:
select pp.id,pp.product_id,pp.selling_site_id,pp.asin
from product_price pp
where
(pp.asin is not null and pp.asin <> '')
and (pp.upload_status_id <> 1)
and pp.selling_site_id in (8,7,35,6,9)
and (pp.last_od > 'now'::timestamp - '1
There's a quite nice description in the docs:
http://www.postgresql.org/docs/9.0/interactive/row-estimation-examples.html
and some more details for indexes:
http://www.postgresql.org/docs/9.0/interactive/index-cost-estimation.html
A bit more info about how this is used is available in this
On 12.4.2011 19:23, Ogden wrote:
On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:
Ogden li...@darkstatic.com wrote:
I have been wrestling with the configuration of the dedicated Postgres 9.0.3
server at work and granted, there's more activity on the production server,
but
On 12.4.2011 20:28, Ogden wrote:
On Apr 12, 2011, at 1:16 PM, Tomas Vondra wrote:
On 12.4.2011 19:23, Ogden wrote:
On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:
Ogden li...@darkstatic.com wrote:
I have been wrestling with the configuration of the dedicated Postgres
On 12.4.2011 23:19, Ogden wrote:
On Apr 12, 2011, at 4:09 PM, Tomas Vondra wrote:
On 12.4.2011 20:28, Ogden wrote:
On Apr 12, 2011, at 1:16 PM, Tomas Vondra wrote:
On 12.4.2011 19:23, Ogden wrote:
On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:
Ogden li
On 12.4.2011 09:33, Claudio Freire wrote:
I actually implemented a statistical system for measuring these kinds
of correlations.
It's complex, but it might be adaptable to pgsql. Furthermore, one of
the latest projects of mine was to replace the purely statistical
approach with SVCs.
On 12.4.2011 01:22, Radhya sahal wrote:
Thanks Mr Nathan Boley,
I want these equations to solve for the parameters and the total time,
in order to get a formula for each parameter.
I need these formulas for my experiments; it is very important to know
the rate for each parameter in the total cost.
On 14.4.2011 00:05, Nathan Boley wrote:
If you model the costing to reflect the reality on your server, good
plans will be chosen.
Wouldn't it be better to derive those costs from actual performance
data measured at runtime?
Say, pg could measure random/seq page cost, *per tablespace*
On 14.4.2011 01:10, Claudio Freire wrote:
On Thu, Apr 14, 2011 at 12:19 AM, Tomas Vondra t...@fuzzy.cz wrote:
Another issue is that when measuring multiple values (processing of
different requests), the decisions may be contradictory so it really
can't be fully automatic.
I don't
On 26.4.2011 07:35, Robert Haas wrote:
On Apr 13, 2011, at 6:19 PM, Tomas Vondra t...@fuzzy.cz wrote:
Yes, I've had some lectures on non-linear programming so I'm aware that
this won't work if the cost function has multiple extremes (valleys /
hills etc.) but I somehow suppose that's
On 2.5.2011 06:53, Adarsh Sharma wrote:
I am also in need of proper documentation that explains how to set the
SHMMAX and SHMALL variables for Postgres.
What things need to be taken into consideration before doing that?
What are the values of SHMMAX and SHMALL if you have 16 GB RAM for Postgres
On 10.5.2011 18:22, Shaun Thomas wrote:
On 05/10/2011 10:06 AM, Maciek Sakrejda wrote:
I have an 8-core server. I wanted to ask whether a query can be divided
across multiple processors or cores, and if it can, what to do in postgresql
No, at this time (and for the foreseeable future), a
On 12.5.2011 17:39, Willy-Bas Loos wrote:
Hi,
We have some indexes that don't seem to be used at all.
I'd like to know since when they have not been used.
That is, the time when postgres started counting to reach the number
that is in pg_stat_user_indexes.idx_scan
Is there a way to
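The counter itself can be read like this; it accumulates from the last statistics reset, so pg_postmaster_start_time() is only a lower bound on the counting period unless pg_stat_reset() has been called since:

```sql
-- Indexes with no recorded scans since the stats were last reset
SELECT schemaname, relname, indexrelname, idx_scan
  FROM pg_stat_user_indexes
 WHERE idx_scan = 0
 ORDER BY schemaname, relname;
```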
On 12.5.2011 22:03, Willy-Bas Loos wrote:
Then, are the index scans counted in a memory variable and written at
analyze time?
No, I believe raghu mixed two things - stats used by the planner and
stats about access to the data (how many tuples were read using an
index, etc.)
Stats for the
On 19.5.2011 23:13, Strange, John W wrote:
Am I reading this right in that the sort is taking almost 8 seconds?
You're probably reading it wrong. The sort itself takes about 1 ms (just
subtract the numbers in actual=). If you include all the overhead it
takes about 2.3 seconds (the hash
On 23.5.2011 19:01, Maciek Sakrejda wrote:
You're probably reading it wrong. The sort itself takes about 1 ms (just
subtract the numbers in actual=).
I thought it was cost=startup_cost..total_cost. That is not quite the
same thing, since startup_cost is effectively cost to produce first
On 23.5.2011 15:30, Shaun Thomas wrote:
On 05/17/2011 07:45 AM, Andrey Vorobiev wrote:
2011-05-17 18:55:51 NOVST LOG: checkpoint starting: xlog
2011-05-17 18:57:20 NOVST LOG: checkpoint complete: wrote 62861 buffers
(24.0%); 0 transaction log file(s) added, 0 removed, 0 recycled;
On 23.5.2011 21:05, Robert Haas wrote:
On Mon, May 23, 2011 at 2:46 PM, Tomas Vondra t...@fuzzy.cz wrote:
Really? He already has 64 checkpoint segments, which is about 1GB of
xlog data. The real problem is that the amount of buffers to write is
constantly growing. At the beginning there's
On 24.5.2011 07:24, Terry Schmitt wrote:
As near as I can tell from your test configuration description, you have
JMeter -> J2EE -> Postgres.
Have you ruled out the J2EE server as the problem? This problem may not
be the database.
I would take a look at your app server's health and look
On 20.6.2011 18:04, Sushant Sinha wrote:
On Mon, 2011-06-20 at 10:58 -0500, Kevin Grittner wrote:
Sushant Sinha sushant...@gmail.com wrote:
I have a tsvector column docvector and a gin index on it
docmeta1_docvector_idx
I have a simple query select * from docmeta1 where docvector
On 20.6.2011 09:57, Didik Prasetyo wrote:
Hi friends, I have a problem with database performance in Postgres: how do
I find slow queries in Postgres? I want to kill or stop the queries that
make Postgres slow; in the server status in pgAdmin, sometimes the query
and how long the query runs do not
On 27.6.2011 17:58, Jenish wrote:
Hi,
I have already checked all the statements present in the trigger; none
is taking more than 20 ms.
I am using an 8-processor, quad-core server; CPU utilization is more than
90-95% for all cores (htop result).
So all cores are 95% utilized? That means
On 27.6.2011 17:01, Jenish wrote:
Hi,
DB : POSTGRES 8.4.8
OS : Debian
HD : SAS 10k rpm
shared_buffers is 4096 (25% of RAM), effective_cache_size is 8GB (75% of RAM)
How much data are we talking about? Does that fit into the shared
buffers or is it significantly larger? Do the triggers
On 27.6.2011 22:14, Jenish wrote:
And I am testing the limit for concurrent active users. When I am
running my test for 400 concurrent users, i.e. active connections, I am
getting good performance, but when I am running the same test
for 950 concurrent users I am getting very bad
On 28.6.2011 23:28, Craig McIlwee wrote:
Daily table explain analyze: http://explain.depesz.com/s/iLY
Half month table explain analyze: http://explain.depesz.com/s/Unt
Are you sure those two queries are exactly the same? Because the daily
case output says the width is 50B, while the
On 29.6.2011 01:26, Craig McIlwee wrote:
On 28.6.2011 23:28, Craig McIlwee wrote:
Are you sure those two queries are exactly the same? Because the daily
case output says the width is 50B, while the half-month case says it's
75B. This might be why the sort/aggregate steps are
On 29.6.2011 01:50, Craig McIlwee wrote:
work_mem: 512MB
shared_buffers: 64MB, 512MB, and 1024MB, each yielded the same query
plan and took the same amount of time to execute give or take a few
seconds
shared_buffers doesn't normally impact the query plan; it impacts how
much churn
On 6.7.2011 15:30, bakkiya wrote:
Any help, please?
According to the EXPLAIN ANALYZE output (please, don't post it to the
mailing list directly - use something like explain.depesz.com, I've done
that for you this time: http://explain.depesz.com/s/HMN), you're doing a
UNIQUE over a lot of
On 18.7.2011 22:11, k...@rice.edu wrote:
In my testing I have a 32bit CentOS on the x3450, but a 64bit CentOS
on the E5335. Can this make such a big difference, or should they
perform fairly close to the same speed? Both servers have 8GB of
RAM, and the database I tested with is only
On 25.7.2011 09:11, Adarsh Sharma wrote:
I went through the link, so it is impossible to get the data back.
I have following files in my pg_xlog directory :
000100070091
000100070092
000100070093
000100070094
000100070095
On 10 August 2011, 16:26, Grzegorz Blinowski wrote:
Now, the query above takes about 60sec to execute; exactly: 70s for the
first run and 60s for the next runs. In my opinion this is too long: It
should take 35 s to read the whole table into RAM (assuming 100 MB/s
transfers - half the HDD
On 10.8.2011 19:01, Grzegorz Blinowski wrote:
However, changing shared_mem didn't help. We also checked system I/O
stats during the query - and in fact there is almost no IO (even with
suboptimal shared_memory). So the problem is not disk transfer/access
but rather the way Postgres
Hi,
I've run a lot of pgbench tests recently (trying to compare various fs,
block sizes etc.), and I've noticed several really strange results.
Each benchmark consists of three simple steps:
1) set-up the database
2) read-only run (10 clients, 5 minutes)
3) read-write run (10 clients, 5
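The three steps above can be sketched with the standard pgbench invocations; the database name and scale factor are assumptions, not from the thread:

```shell
# Sketch only: "bench" and the scale are placeholders
createdb bench
pgbench -i -s 100 bench          # 1) set-up the database
pgbench -S -c 10 -T 300 bench    # 2) read-only run (10 clients, 5 minutes)
pgbench -c 10 -T 300 bench       # 3) read-write run (10 clients, 5 minutes)
```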
On 17 August 2011, 3:35, Ogden wrote:
Hope all is well. I have received tremendous help from this list prior and
therefore wanted some more advice.
I bought some new servers and instead of RAID 5 (which I think greatly
hindered our writing performance), I configured 6 SCSI 15K drives with
On 17 August 2011, 16:28, Ogden wrote:
I was wondering what the best parameters would be with my new setup. The
work_mem obviously will increase as will everything else as it's a 64Gb
machine as opposed to a 16Gb machine. The configuration I posted was for
a 16Gb machine but this new one is
On 17 August 2011, 18:39, Ogden wrote:
Yes, but it greatly depends on the amount of WAL and your workload. If
you
need to write a lot of WAL data (e.g. during bulk loading), this may
significantly improve performance. It may also help when you have a
write-heavy workload (a lot of clients
On 17 August 2011, 21:22, Ogden wrote:
This is a very important point. I've found on most machines with
hardware caching RAID and 8 or fewer 15k SCSI drives it's just as
fast to put it all on one big RAID-10 and if necessary partition it to
put the pg_xlog on its own file system. After that
On 18 August 2011, 0:40, hyelluas wrote:
Should I start with replacing the SQL with procedures?
Should I start with replacing the views with procedures, to save time on
recreating an execution plan and parsing?
Should I start with tuning server parameters?
Yes, you should start by
On 24 August 2011, 20:48, gnuo...@rcn.com wrote:
It's worth knowing exactly what that means. Turns out that NAND quality
is price specific. There's gooduns and baduns. Is this a failure in the
controller(s) or the NAND?
Why is that important? It's simply a failure of electronics and it has
On 24 August 2011, 21:41, Merlin Moncure wrote:
On Wed, Aug 24, 2011 at 2:32 PM, Tomas Vondra t...@fuzzy.cz wrote:
On 24 August 2011, 20:48, gnuo...@rcn.com wrote:
Also, given that PG is *nix centric and support for TRIM is win centric,
having that makes a big difference in performance
On 24 August 2011, 21:42, gnuo...@rcn.com wrote:
Original message
Date: Wed, 24 Aug 2011 21:32:16 +0200
From: pgsql-performance-ow...@postgresql.org (on behalf of Tomas Vondra
t...@fuzzy.cz)
Subject: Re: [PERFORM] Reports from SSD purgatory
To: gnuo...@rcn.com
Cc: pgsql-performance
On 29 August 2011, 11:13, Tasdassa Asdasda wrote:
Hi. I have a table called work (id bigserial, userid int4, kind1 enum,
kind2 enum, kind3 enum, value bigint, modified timestamp)
Table will have about 2*10^6 rows (at same time - overall it can have
higher IDs but old records are eventually
On 31 August 2011, 13:19, Jayadevan M wrote:
Hello,
Please run EXPLAIN ANALYZE on the query and post that; it's hard to say
what's wrong from just the query plan, without knowing where the time is
actually spent.
Here is the explain analyze
http://explain.depesz.com/s/MY1
Going
On 31 August 2011, 15:59, Andy Colson wrote:
I assume:
Buckets: 16384 Batches: 1 Memory Usage: 4531kB
That means a total of 4.5 meg of ram was used for the hash, so if my
work_mem was lower than that it would swap? (or choose a different plan?)
Why don't you try that? Just set the work_mem
On 4 September 2011, 20:06, Jayadevan wrote:
I don't think I understood all that. Anyway, is there a way to fix this -
either by rewriting the query or by creating an index? The output does
match what I am expecting. It does take more than 10 times the time taken
by Oracle for the same result,
On 3 September 2011, 9:26, Gerhard Wohlgenannt wrote:
Dear list,
we are encountering serious performance problems with our database.
Queries which took around 100ms or less last week now take several
seconds.
The database runs on Ubuntu Server 10.04.3 (kernel: 2.6.32-33) on
hardware as follows:
On 5 September 2011, 15:51, Andy Colson wrote:
On 09/05/2011 02:48 AM, Tomas Vondra wrote:
That seems a bit slow ... 27MB/s for writes and 41MB/s for reads is a bit
slow with 8 drives.
Tomas
Agreed, that's really slow. A single SATA drive will get 60 MB/s. Did
you run Bonnie while the VM's
On 5 September 2011, 16:08, Gerhard Wohlgenannt wrote:
Below please find the results of vmstat 2 over some period of time,
with normal database / system load.
What does a normal load mean? Does that mean a time when the queries are
slow?
Are you sure the machine really has 48GB of RAM? Because
On 5 September 2011, 21:07, Andy Colson wrote:
On 09/05/2011 01:45 PM, Scott Marlowe wrote:
On Mon, Sep 5, 2011 at 8:08 AM, Gerhard Wohlgenanntwo...@ai.wu.ac.at
wrote:
Below please find the results of vmstat 2 over some period of time,
with normal database / system load.
2 1 1344204 240924
On 6 September 2011, 10:26, Gerhard Wohlgenannt wrote:
Thanks a lot to everybody for their helpful hints!!!
I am running all these benchmarks while the VMs are up .. with the
system under something like typical loads ..
The RAID is hardware based. One of my colleagues will check if there is
any
On 6 September 2011, 10:55, Gerhard Wohlgenannt wrote:
That's why I love dstat, just do this
$ dstat -C 0,1,2,3,4,5,6,7
and you know all you need.
dstat looks like a very nice tool, results below ..
(now the system load seems a bit lower than before, when generating
results for vmstat and
On 8 September 2011, 14:51, Anibal David Acosta wrote:
Hi!
I have a table, not too big but with approx. 5 million rows; this table
must handle 300 to 400 selects per second, but also 10~20
delete/insert/update operations per second.
So, I need to know if the inserts/deletes/updates really affect
On 12.9.2011 00:44, Anthony Presley wrote:
We've currently got PG 8.4.4 running on a whitebox hardware set up,
with (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM
SATA drives, using the onboard IDE controller and ext3.
A few weeks back, we purchased two refurb'd HP DL360's
On 14 September 2011, 15:09, MirrorX wrote:
here is the explain analyze output-
server=# explain analyze select count(*) from temp_by_hour where xid > 100
and xdate > now() - interval '1 week';
QUERY PLAN
On 14 September 2011, 17:14, MirrorX wrote:
thx for the answer.
- What is the problem, i.e. what behaviour you expect?
- How much data is the table?
- What portion of it matches the conditions?
- What is the index definition?
i think in my first post i provided most of these details but -
On 15.9.2011 01:40, Tom Lane wrote:
Stefan Keller sfkel...@gmail.com writes:
2011/9/14 Tom Lane t...@sss.pgh.pa.us writes:
(...) I think that
the current state of affairs is still what depesz said, namely that
there might be cases where they'd be a win to use, except the lack of
WAL
On 28 September 2011, 9:05, Greg Smith wrote:
Venkat Balaji wrote:
1. Big Full Table Scans
2. Table with high IOs (hot tables)
3. Highly used Indexes
4. Tables undergoing high DML with index scans = 0 (i.e. with unused indexes)
5. Index usage for heap blk hits
6. Tracking Checkpoints
This is fairly
Hi,
On 28 October 2011, 19:27, Sorbara, Giorgio (CIOK) wrote:
Dear all,
I am new to PG but I have a solid background on tuning in Oracle and
MSSQL.
I have a query coming out from a piece of software from our SW-Stack (I
can't change it) and of course it takes a large amount of time.
The
On 28 October 2011, 18:11, Merlin Moncure wrote:
On Fri, Oct 28, 2011 at 10:28 AM, Marcus Engene meng...@engene.se wrote:
Hi list,
Every now and then I have write peaks which cause annoying delays on my
website. No particular reason it seems, just that the laws of probability
dictate that there
Hi,
On 28 October 2011, 17:28, Marcus Engene wrote:
Hi list,
Every now and then I have write peaks which cause annoying delays on my
website. No particular reason it seems, just that the laws of probability
dictate that there will be peaks every now and then.
Anyway, thinking of ways to make
On 28 October 2011, 20:40, Merlin Moncure wrote:
sure, but then you have to have a more complicated setup with a
drive(s) designated for WAL, another for storage, etc. Also, your
argument falls away if the WAL is shared with another drive. The era
of the SSD is here. All new systems I plan will
On 1 November 2011, 10:57, Marcus Engene wrote:
Hi Hashim,
One workaround I've done is if something looking like this
select
...
from
table_linking_massive_table tlmt
,massive_table mt
,some_table1 st1
,some_table2 st2
,some_table3 st3
,some_table4
On 3 November 2011, 16:02, Mario Weilguni wrote:
On 02.11.2011 08:12, Mohamed Hashim wrote:
Dear All
Thanks for your suggestions and replies.
Below is the sample query which I ran for one particular bill_id:
EXPLAIN ANALYZE SELECT abd.bill_no as
On 3 November 2011, 16:52, siva palanisamy wrote:
I basically have 3 tables, one being the core table and the other 2
depending on the 1st. I have the requirement to add up to 7 records in the
tables.
I do have constraints (primary and foreign keys, indexes, unique, etc.) set
for the tables. I can't
On 7 November 2011, 11:36, Lucas Mocellin wrote:
Hi everybody,
I'm having some issues with PostgreSQL 9.0.3 running on FreeBSD 8.2 on top
of VMware ESXi 4.1 U1.
The problem is queries are taking too long, and sometimes one query
blocks everybody else from using the DB as well.
I'm a network
On 8 November 2011, 4:21, Mohamed Hashim wrote:
Hi all,
Thanks for all your responses.
Sorry for the late response.
Earlier we used Postgres 8.3.10 with a desktop computer (as server) and a
configuration of the system (I2 core with 4GB RAM), and the application
was slow too; I didn't change any
On 8 November 2011, 13:15, Mohamed Hashim wrote:
Hi Sam, Tomas
In my first post I mentioned how much shared memory I use (shared buffers,
effective cache size, work mem, etc.), my OS and hardware information,
and what basic settings I have changed.
Sorry, I've missed that first message -
On 14 November 2011, 19:16, Cody Caughlan wrote:
shared_buffers = 3584MB
wal_buffers = 16MB
checkpoint_segments = 32
max_wal_senders = 10
checkpoint_completion_target = 0.9
wal_keep_segments = 1024
maintenance_work_mem = 256MB
work_mem = 88MB
shared_buffers = 3584MB
effective_cache_size
On 14 November 2011, 22:58, Cody Caughlan wrote:
Seems reasonable, although I'd bump up the checkpoint_timeout (the 5m is
usually too low).
Ok, will do.
Yes, but find out what that means and think about the possible impact
first. It usually improves the checkpoint behaviour but increases the