[PERFORM] Slow query, where am I going wrong?

2012-10-30 Thread Andy
Hello all, I have been pulling my hair out over the last few days trying to get any useful performance out of the following painfully slow query. The query is JPA-created; I've just cleaned up the aliases to make it more readable. Using 'distinct' or 'group by' delivers about the same results,

Re: [PERFORM] Slow query, where am I going wrong?

2012-10-30 Thread Albe Laurenz
Andy wrote: I have been pulling my hair out over the last few days trying to get any useful performance out of the following painfully slow query. The query is JPA-created; I've just cleaned up the aliases to make it more readable. Using 'distinct' or 'group by' delivers about the same results,

Re: [PERFORM] Replaying 48 WAL files takes 80 minutes

2012-10-30 Thread Albe Laurenz
On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz laurenz.a...@wien.gv.at wrote: I am configuring streaming replication with hot standby with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64). PostgreSQL was compiled from source. It works fine, except that starting the standby took for

Re: [PERFORM] Request for help with slow query

2012-10-30 Thread Albe Laurenz
Sean Woolcock wrote: I have a large (3 million row) table called tape that represents files, which I join to a small (100 row) table called filesystem that represents filesystems. I have a web interface that allows you to sort by a number of fields in the tape table and view

[PERFORM] out of memory

2012-10-30 Thread Mahavir Trivedi
Hi, I have an SQL file (its size is 1 GB). When I execute it, the error "String of 987098801 bytes is too long for encoding conversion" occurs. Please give me a solution. I have XP 64-bit with 8 GB RAM, shared_buffers = 1GB, checkpoint_segments = 34. With thanks, Mahavir

[PERFORM] PostgreSQL server failed to start

2012-10-30 Thread vignesh
Hi, when I start my Postgres I am getting this error. I have 8.4 and 9.1 installed; it was working fine yesterday but not now. service postgresql restart * Restarting PostgreSQL 8.4 database server * The PostgreSQL server failed to start. Please check the log output. If I look at the log, it

Re: [PERFORM] Slow query, where am I going wrong?

2012-10-30 Thread AndyG
Thanks very much Laurenz. I'll put your suggestions into motion right away and let you know the results. Albe Laurenz *EXTERN* wrote: BTW, you seem to have an awful lot of indexes defined, some of which seem redundant. I am in the process of pruning unused/useless indexes on this database -
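Pruning candidates like the ones Laurenz mentions can be located through the statistics collector. A sketch using the standard pg_stat_user_indexes and pg_index catalogs (run per database; note that idx_scan counters reset with the statistics, and unique/primary-key indexes enforce constraints even when never scanned, so they are filtered out here):

```sql
-- List never-scanned, non-unique indexes, largest first.
-- Verify each candidate by hand before dropping anything.
SELECT s.schemaname,
       s.relname       AS table_name,
       s.indexrelname  AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique
ORDER BY pg_relation_size(s.indexrelid) DESC;
```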

Re: [PERFORM] Replaying 48 WAL files takes 80 minutes

2012-10-30 Thread Heikki Linnakangas
On 30.10.2012 10:50, Albe Laurenz wrote: Why does WAL replay read much more than it writes? I thought that pretty much every block read during WAL replay would also get dirtied and hence written out. Not necessarily. If a block is modified and written out of the buffer cache before next

Re: [PERFORM] out of memory

2012-10-30 Thread Tatsuo Ishii
I have an SQL file (its size is 1 GB). When I execute it, the error "String of 987098801 bytes is too long for encoding conversion" occurs. Please give me a solution. You have hit the upper limit of PostgreSQL's internal memory allocation. IMO, there's no way to avoid the error unless you
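One way around that allocation ceiling is to break the dump into pieces before loading it, so no single string handed to the server approaches 1 GB. A sketch using GNU coreutils' split; file and database names are examples, and the demo input here is tiny:

```shell
# Demo with a tiny file; in practice the input is the 1 GB dump.
printf 'INSERT INTO t VALUES (1);\nINSERT INTO t VALUES (2);\n' > big_dump.sql
# --line-bytes caps each chunk without cutting a line (and thus a
# one-line statement) in half; use something like 100m for a real dump.
split --line-bytes=30 big_dump.sql chunk_
# Feed the pieces to the server in order (needs a live database):
# for f in chunk_*; do psql -d mydb -f "$f"; done
```

Splitting on line boundaries only helps if no single statement is itself near 1 GB; a dump dominated by one enormous INSERT literal is better re-exported with COPY.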

Re: [PERFORM] How to keep queries low latency as concurrency increases

2012-10-30 Thread Kevin Grittner
Catalin Iacob wrote: Hardware: Virtual machine running on top of VMWare 4 cores, Intel(R) Xeon(R) CPU E5645 @ 2.40GHz 4GB of RAM You should carefully test transaction-based pools limited to around 8 DB connections. Experiment with different size limits.
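In pgbouncer terms, Kevin's suggestion corresponds roughly to a configuration fragment like this (section and option names are pgbouncer's own; the sizes are example values to experiment with, not figures from the thread):

```ini
; pgbouncer.ini sketch: transaction-level pooling with a small server pool
[pgbouncer]
pool_mode = transaction
default_pool_size = 8
max_client_conn = 200
```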

[PERFORM] Seq scan on 10million record table.. why?

2012-10-30 Thread Vincenzo Melandri
Hi all, I have a problem with a data import procedure that involves the following query: select a,b,c,d from big_table b join data_sequences_table ds on b.key1 = ds.key1 and b.key2 = ds.key2 where ds.import_id=xx The big table has something like 10,000,000 records or more (depending on

[PERFORM] High %SYS CPU usage

2012-10-30 Thread Cesar Martin
Hello there, I have a PostgreSQL 8.3.18 server running on CentOS 6.2 (2.6.32-220.7.1) with these specs: 2x CPU AMD Opteron 6282, 128GB RAM, RAID 10 (12 HD 15k rpm, 1GB cache) with data, RAID 10 (4 HD 15k rpm, 1GB cache) with xlog, RAID 1 (15k rpm, 1GB cache shared with xlog) with system. On this server I

Re: [PERFORM] Replaying 48 WAL files takes 80 minutes

2012-10-30 Thread k...@rice.edu
On Tue, Oct 30, 2012 at 09:50:44AM +0100, Albe Laurenz wrote: On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz laurenz.a...@wien.gv.at wrote: I am configuring streaming replication with hot standby with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64). PostgreSQL was compiled from

Re: [PERFORM] Replaying 48 WAL files takes 80 minutes

2012-10-30 Thread Albe Laurenz
Heikki Linnakangas wrote: Why does WAL replay read much more than it writes? I thought that pretty much every block read during WAL replay would also get dirtied and hence written out. Not necessarily. If a block is modified and written out of the buffer cache before next checkpoint, the

Re: [PERFORM] Replaying 48 WAL files takes 80 minutes

2012-10-30 Thread Albe Laurenz
k...@rice.edu wrote: If you do not have good random IO performance, log replay is nearly unbearable. Also, what IO scheduler are you using? If it is cfq, change that to deadline or noop; that can make a huge difference. We use the noop scheduler. As I said, an identical system performed
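For reference, the active scheduler can be checked through sysfs; the bracketed entry in the output is the one in use. A sketch (device paths vary by machine, and switching requires root; the paths may be absent inside containers):

```shell
# Print the active IO scheduler for each block device, if the
# sysfs interface is present on this system.
for f in /sys/block/*/queue/scheduler; do
  if [ -e "$f" ]; then
    echo "$f: $(cat "$f")"
  fi
done
# To switch at runtime (as root; not persistent across reboots):
# echo deadline > /sys/block/sda/queue/scheduler
```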

Re: [PERFORM] Replaying 48 WAL files takes 80 minutes

2012-10-30 Thread k...@rice.edu
On Tue, Oct 30, 2012 at 02:16:57PM +0100, Albe Laurenz wrote: k...@rice.edu wrote: If you do not have good random IO performance, log replay is nearly unbearable. Also, what IO scheduler are you using? If it is cfq, change that to deadline or noop; that can make a huge difference.

Re: [PERFORM] Seq scan on 10million record table.. why?

2012-10-30 Thread Gabriele Bartolini
Hi Vincenzo, On Tue, 30 Oct 2012 13:15:10 +0100, Vincenzo Melandri vmelan...@imolinfo.it wrote: I have indexes on both the key on the big table and the import_id on the sequence table. Forgive my quick answer, but it might be that the data you are retrieving is scattered throughout the

Re: [PERFORM] Seq scan on 10million record table.. why?

2012-10-30 Thread Shaun Thomas
On 10/30/2012 07:15 AM, Vincenzo Melandri wrote: Merge Join (cost=2604203.98..2774528.51 rows=129904 width=20) Merge Cond: (((big_table.key1)::numeric = data_sequences_table.key1) AND ((big_table.key2)::numeric = data_sequences_table.key2)) - Sort (cost=2602495.47..2635975.81

Re: [PERFORM] How to keep queries low latency as concurrency increases

2012-10-30 Thread Shaun Thomas
On 10/30/2012 06:55 AM, Kevin Grittner wrote: Is there a good transaction-based connection pooler in Python? You're better off with a good pool built in to the client application than with a good pool running as a separate process between the client and the database, IMO. Could you explain

Re: [PERFORM] Slow query, where am I going wrong?

2012-10-30 Thread AndyG
A marginal improvement. http://explain.depesz.com/s/y63 I am going to normalize the table some more before partitioning.

Re: [PERFORM] PostgreSQL server failed to start

2012-10-30 Thread Maciek Sakrejda
On Tue, Oct 30, 2012 at 2:24 AM, vignesh vignes...@snovabits.net wrote: Hi, when I start my Postgres I am getting this error. You may want to ask on the pgsql-general mailing list [1]. This list is just for Postgres performance questions. While, technically, failing to start outright

Re: [PERFORM] Slow query, where am I going wrong?

2012-10-30 Thread Albe Laurenz
AndyG wrote: A marginal improvement. http://explain.depesz.com/s/y63 That's what I thought. Increasing the statistics for test_result.id_recipe_version had no effect? I am going to normalize the table some more before partitioning. How do you think that partitioning will help? Yours,
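The statistics change Laurenz refers to can be expressed as follows; the table and column names come from the thread, while 1000 is an example target (the default is 100):

```sql
-- Give the planner a finer histogram for this column, then refresh stats.
ALTER TABLE test_result ALTER COLUMN id_recipe_version SET STATISTICS 1000;
ANALYZE test_result;
```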

Re: [PERFORM] Seq scan on 10million record table.. why?

2012-10-30 Thread Vincenzo Melandri
1) Make all types the same. 2) If you are using some narrow type for big_table (say, int2) to save space, you can force a narrowing conversion, e.g. b.key1 = ds.key1::int2. Note that if ds.key1 has any values that don't fit into int2, you will have problems. And of course, use your type used
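A sketch of suggestion 2, assuming big_table's keys are int2 and the sequence table's are numeric (42 stands in for the real import_id). Casting the join condition to the narrow type lets the planner use the index on big_table, instead of casting every big_table row to numeric, which is what forces the seq scan visible in the plan:

```sql
-- Narrowing cast on the small table's side; fails at runtime if any
-- ds.key value does not fit into int2, as noted above.
SELECT b.*
FROM big_table b
JOIN data_sequences_table ds
  ON b.key1 = ds.key1::int2
 AND b.key2 = ds.key2::int2
WHERE ds.import_id = 42;  -- placeholder value
```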

Re: [PERFORM] High %SYS CPU usage

2012-10-30 Thread Josh Berkus
Cesar, On this server I have only one database with 312GB of data. The database ran fine for four months, but starting two months ago, during high-workload periods the server collapses under %sys-type load. Hmmm. Have you updated Linux any time recently? I'm wondering if this is a
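Two kernel settings frequently implicated in high %sys on RHEL-era database servers are transparent huge pages and zone_reclaim_mode; the thread does not name either, so this is only a diagnostic sketch that prints their current values (the paths may be absent depending on the kernel build):

```shell
# Print two memory-management settings often behind system-CPU spikes;
# each path exists only if the feature is compiled into the kernel.
for f in /sys/kernel/mm/transparent_hugepage/enabled \
         /proc/sys/vm/zone_reclaim_mode; do
  if [ -e "$f" ]; then
    echo "$f: $(cat "$f")"
  else
    echo "$f: not present on this kernel"
  fi
done
```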

Re: [PERFORM] How to keep queries low latency as concurrency increases

2012-10-30 Thread Kevin Grittner
Shaun Thomas wrote: On 10/30/2012 06:55 AM, Kevin Grittner wrote: Is there a good transaction-based connection pooler in Python? You're better off with a good pool built in to the client application than with a good pool running as a separate process between the client and the database, IMO.

Re: [PERFORM] set-returning calls and overhead

2012-10-30 Thread Jon Nelson
On Thu, Jul 19, 2012 at 11:07 AM, Jon Nelson jnelson+pg...@jamponi.net wrote: Recently I found myself wondering what was taking a particular query so long. I immediately assumed it was a lack of I/O, because lack of I/O is a thorn in my side. Nope, the I/O was boring. CPU? Well, the process

Re: [PERFORM] How to keep queries low latency as concurrency increases

2012-10-30 Thread Jeff Janes
On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob iacobcata...@gmail.com wrote: pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine as Postgres. Django connects via TCP/IP to pgbouncer (it does one connection and one transaction per request) and pgbouncer keeps connections open

Re: [PERFORM] How to keep queries low latency as concurrency increases

2012-10-30 Thread Greg Williamson
Jeff / Catalin -- Jeff Janes wrote: On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob iacobcata...@gmail.com wrote: pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine as Postgres. Django connects via TCP/IP to pgbouncer (it does one connection and one transaction per request)

Re: [PERFORM] How to keep queries low latency as concurrency increases

2012-10-30 Thread Scott Marlowe
On Tue, Oct 30, 2012 at 4:11 PM, Greg Williamson gwilliamso...@yahoo.com wrote: Jeff / Catalin -- Jeff Janes wrote: On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob iacobcata...@gmail.com wrote: pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine as Postgres. Django connects