8 checkpoint segments is very small, try 50
2011/10/4, Venkat Balaji venkat.bal...@verse.in:
Hello,
Sorry. I should have put some more details in the email.
I have a situation where I see the production system loaded with
checkpoints, and at least 1000+ buffers are being
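The suggestion above, as a postgresql.conf sketch (checkpoint_segments applies to releases before 9.5; the value 50 is the figure suggested here, not a tuned recommendation):

```
# postgresql.conf fragment: allow more WAL between checkpoints
checkpoint_segments = 50
checkpoint_completion_target = 0.9   # optional: spread checkpoint I/O over the interval
```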
I think you could add a ramdisk as a tablespace for temporary tables.
This should work similarly to a larger work_mem.
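A minimal sketch of the ramdisk idea, assuming a tmpfs mount at /mnt/pgtmp writable by the postgres user (the mount point and tablespace name are illustrative):

```sql
-- assumes beforehand, as root:  mount -t tmpfs -o size=2g tmpfs /mnt/pgtmp
CREATE TABLESPACE ramtemp LOCATION '/mnt/pgtmp';
-- route temporary tables and sort spill files to the ramdisk
SET temp_tablespaces = 'ramtemp';
```

Note that anything in a tablespace on tmpfs is lost on reboot, so this is only safe for temporary objects.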
2011/9/12, Robert Schnabel schnab...@missouri.edu:
On 9/12/2011 12:57 PM, Shaun Thomas wrote:
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use.
For a 10 TB table scanned in 3 hours, the disks need a transfer rate of about 1 GB/s (seqscan).
2011/9/11, Scott Marlowe scott.marl...@gmail.com:
On Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov ichu...@gmail.com wrote:
I have a server with about 18 TB of storage and 48 GB of RAM, and 12
CPU cores.
1 or 2 fast
Look at the developer FAQ.
2011/9/9, Antonio Rodriges antonio@gmail.com:
Hello,
Does anyone know whether PostgreSQL uses DMA (Direct Memory Access) in
certain cases to improve networking IO performance?
I mean a simple query which doesn't require any CPU processing, for example:
SELECT column_a
Hi.
The auto_explain module allows logging the plans and statistics of live queries. Try it.
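The module ships in contrib; a minimal configuration sketch (the threshold value is illustrative):

```
# postgresql.conf fragment for auto_explain
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '250ms'   # log plans of statements slower than this
auto_explain.log_analyze = on             # include actual row counts and timings
```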
2011/9/3, Gerhard Wohlgenannt wo...@ai.wu.ac.at:
Dear list,
we are encountering serious performance problems with our database.
Queries which took around 100ms or less last week now take several seconds.
The
I think checkpoint_segments is too low; try 30.
2011/9/5, Andy Colson a...@squeakycode.net:
On 09/05/2011 05:28 AM, Richard Shaw wrote:
Hi,
I have a database server that's part of a web stack and is experiencing
prolonged load average spikes of up to 400+ when the db is restarted and
first
Try to use a single regular expression.
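The idea, as a hypothetical example (table and patterns are illustrative): several OR'ed matches scan the text repeatedly, while one alternation scans it once.

```sql
-- instead of:  WHERE body ~ 'foo' OR body ~ 'bar' OR body ~ 'baz'
SELECT id FROM docs WHERE body ~ '(foo|bar|baz)';
```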
2011/8/10, Grzegorz Blinowski g.blinow...@gmail.com:
Dear All,
I have some problems with regexp queries performance - common sense tells me
that my queries should run faster than they do.
The database - table in question has 590 K records, table's size
Hi. I have an idea: hints attached to a function. For example, instead of
WHERE table1.field1=table2.field2
write:
WHERE specificeq(table1.field1,table2.field2)
and add the hints to the declaration of the specificeq function.
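A sketch of what "hints on the function declaration" could mean with the knobs PostgreSQL already exposes (COST and volatility on the wrapper; the function body is illustrative):

```sql
CREATE FUNCTION specificeq(integer, integer) RETURNS boolean AS
$$ SELECT $1 = $2 $$
LANGUAGE sql IMMUTABLE
COST 1;  -- planner "hint": estimated per-call cost

-- then:  WHERE specificeq(table1.field1, table2.field2)
```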
2011/2/23, Robert Haas robertmh...@gmail.com:
On Wed, Feb 16, 2011 at 4:22 PM, Bruce
hi.
I think you're missing the point, which is that all the hash work is
just pure overhead in such a case (and it is most definitely not
zero-cost overhead). You might as well just do a nestloop join.
Hashing is only beneficial to the extent that it allows a smaller subset
of the inner relation
2011/3/30, Lars Feistner feist...@uni-heidelberg.de:
Hello Kevin,
On 03/29/2011 09:28 PM, Kevin Grittner wrote:
Lars Feistnerfeist...@uni-heidelberg.de wrote:
The log tells me that certain update statements take sometimes
about 3-10 minutes. But we are talking about updates on tables
Hi.
I did a small test of PL/pgSQL and Perl. The result is that Perl may be
2x faster in simple loops.
CREATE OR REPLACE FUNCTION _.test1() RETURNS void AS
$BODY$
declare
  i integer;
  j bigint := 0;
begin
  for i in 1..100 loop
    j := j + i;
  end loop;
end;
$BODY$ LANGUAGE plpgsql VOLATILE COST 100;
Result
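The Perl half of the comparison is not shown; a plausible plperl counterpart of the loop above (assuming the plperl language is installed) might be:

```sql
CREATE OR REPLACE FUNCTION _.test1_perl() RETURNS void AS
$BODY$
  my $j = 0;
  $j += $_ for 1 .. 100;   -- same accumulation as the plpgsql loop
  return;
$BODY$ LANGUAGE plperl VOLATILE COST 100;
```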
Try
order by created_at+0
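The "+0" turns the sort key into an expression, so the planner can no longer walk an index on created_at to satisfy the ORDER BY with a small LIMIT; a hypothetical shape (the LIMIT value is illustrative):

```sql
SELECT * FROM landing_page.messages
WHERE messages.topic = E'x'
ORDER BY created_at + 0   -- expression defeats the index on created_at
LIMIT 10;
```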
On 1/6/11, Mike Broers mbro...@gmail.com wrote:
Thanks for the assistance.
Here is an explain analyze of the query with the problem limit:
production=# explain analyze select * from landing_page.messages where
((messages.topic = E'x') AND (messages.processed =
Thanks for the reply.
I tested random changes, and the query runs fastest after:
set seq_page_cost = 0.1;
set random_page_cost = 0.1;
set cpu_operator_cost = 0.01;
pasman
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
Hi.
I installed the auto_explain module for monitoring queries.
By the way, is there any tool to tune the planner automatically?
pasman
hello.
I have the table NumeryA with 3 indexes. The query below uses the wrong index.
SELECT
A.NKA,
A.NTA,
Min(PołączeniaMin) || ',' || Max(PołączeniaMax) AS Biling,
Sum(Ile)::text AS "Ilość CDR",
R.LP::text AS Sprawa,
R."Osoba weryfikująca" AS Osoba,
to_char(min(Wartość),'FM990D00') AS
UNION removes all duplicates, so the result additionally has to
be sorted.
Right; to avoid the Sort and Unique operations you can use UNION ALL.
By the way, maybe applying hashing to compute the UNION would be better?
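The difference, sketched on illustrative tables t1 and t2:

```sql
SELECT a FROM t1 UNION     SELECT a FROM t2;  -- deduplicates: Sort/Unique or hash step
SELECT a FROM t1 UNION ALL SELECT a FROM t2;  -- just concatenates, no extra step
```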
pasman
hello.
I tested how the values of xmin and xmax are distributed across pages.
In my tables there are typically no more than 80 records
per page.
Maybe it's possible to compress the xmin/xmax values to
1 byte per record (plus a table of transactions per page)?
That would reduce the space used when more than 1 record is
from the
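A back-of-the-envelope version of the proposed saving per page, under stated assumptions (80 tuples per page as observed above, 4-byte xmin and xmax per tuple, and an assumed per-page table of 16 distinct transaction ids):

```sql
SELECT 80 * 2 * 4          AS current_bytes,   -- 640
       80 * 2 * 1 + 16 * 4 AS proposed_bytes;  -- 224
```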
FWIW, it isn't going to happen anyway, because the TID scan mechanism
doesn't support scanning based on a join condition. That hasn't gotten
to the top of the to-do list because the use case is almost vanishingly
small. ctids generally aren't stable enough for it to be useful to
store references
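What the TID scan machinery does support is a constant ctid qual; a minimal sketch against the test1 table from the question:

```sql
SELECT * FROM test1 WHERE ctid = '(0,1)';
-- EXPLAIN shows "Tid Scan on test1"; a join on a stored tid column
-- instead falls back to the ordinary join methods.
```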
Hello.
How can I use TID scans? The example below does not work :-(
A merge join is always used.
DROP TABLE IF EXISTS test1;
CREATE TABLE test1 as select i,hashint4(i)::text from
generate_series(1,1) as a(i);
DROP TABLE IF EXISTS test2;
CREATE TABLE test2 as select j,j%1 as i,null::tid as ct from
Hello.
I have a query which runs a bit slowly.
It is run on a desktop computer: AMD Athlon X2 2GHz, Win XP SP2, 1GB RAM.
Postgres 8.4.5 with some changes in the config:
shared_buffers = 200MB # min 128kB
# (change requires restart)
temp_buffers = 8MB