Re: [PERFORM] serious problems with vacuuming databases

2006-04-10 Thread Ahmad Fajar
Hi Tomas, Tomas wrote: We've decided to remove unneeded 'old' data, which means removing about 99.999% of rows from tables A, C and D (about 2 GB of data). At the beginning, the B table (containing data aggregated from A, C and D) was emptied (dropped and re-created) and filled in with current data.
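
For concreteness, a minimal sketch of reclaiming the space after such a mass delete (table and index names here are hypothetical; plain VACUUM only marks the dead space reusable, while CLUSTER or VACUUM FULL actually shrinks the files):

    psql -d mydb <<'SQL'
    CLUSTER a_pkey ON a;   -- 8.1 syntax: rewrite table a in a_pkey order
    VACUUM ANALYZE a;      -- refresh planner statistics afterwards
    SQL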

[PERFORM] Takes too long to fetch the data from database

2006-04-10 Thread soni de
Hello, I have difficulty in fetching the records from the database. The database table contains more than 1 GB of data. Fetching the records takes more than 1 hour, and that's why it is slowing down the performance. Please provide some help regarding improving the performance and how do I run

Re: [PERFORM]

2006-04-10 Thread Doron Baranes
Hi, I attached a file with details about the tables, the queries and the EXPLAIN ANALYZE plans. Hope this can be helpful to analyze my problem. 10x, Doron. -Original Message- From: Ragnar [mailto:[EMAIL PROTECTED] Sent: Sunday, April 09, 2006 2:37 PM To: Doron Baranes Subject: RE:

[PERFORM] Restore performance?

2006-04-10 Thread Jesper Krogh
Hi, I'm currently upgrading a PostgreSQL 7.3.2 database to an 8.1.something-good. I'd run pg_dump | gzip > sqldump.gz on the old system. That took about 30 hours and gave me a 90GB zipped file. Running cat sqldump.gz | gunzip | psql into the 8.1 database seems to take about the same time. Are
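
One era-appropriate trick, sketched under the assumption that the 8.1 client tools can reach both servers: dump in pg_dump's custom format, whose built-in compression replaces the separate gzip stage, and restore with pg_restore:

    pg_dump -Fc -Z 6 mydb > mydb.dump    # -Z sets the compression level
    pg_restore -d newdb mydb.dump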

[PERFORM] Dump restore performance 7.3 - 8.1

2006-04-10 Thread Jesper Krogh
Hi, I'm currently upgrading a PostgreSQL 7.3.2 database to an 8.1.something-good. I'd run pg_dump | gzip > sqldump.gz on the old system. That took about 30 hours and gave me a 90GB zipped file. Running cat sqldump.gz | gunzip | psql into the 8.1 database seems to take about the same time. Are

Re: [PERFORM] Restore performance?

2006-04-10 Thread Andreas Pflug
Jesper Krogh wrote: Hi, I'm currently upgrading a PostgreSQL 7.3.2 database to an 8.1.something-good. I'd run pg_dump | gzip > sqldump.gz on the old system. That took about 30 hours and gave me a 90GB zipped file. Running cat sqldump.gz | gunzip | psql into the 8.1 database seems to take about

Re: [PERFORM] Restore performance?

2006-04-10 Thread Jesper Krogh
If they both took the same amount of time, then you are almost certainly bottlenecked on gzip. Try a faster CPU or use gzip --fast. gzip does not seem to be the bottleneck; on restore, psql is the no. 1 consumer of CPU time. Jesper Sorry for the double post. -- Jesper Krogh

Re: [PERFORM] Restore performance?

2006-04-10 Thread Marcin Mańk
I'd run pg_dump | gzip > sqldump.gz on the old system. That took about 30 hours and gave me a 90GB zipped file. Running cat sqldump.gz | gunzip | psql into the 8.1 database seems to take about the same time. Are there any tricks I can use to speed this dump+restore process up? The

Re: [PERFORM] OT: Data structure design question: How do they count

2006-04-10 Thread Richard Huxton
Brendan Duddridge wrote: Now, initially I thought they would just pre-compute these counts, but the problem is, when you click on any of the above attribute values, they reduce the remaining possible set of matching products (and set of possible remaining attributes and attribute values) by

Re: [PERFORM] pg 8.1.3, AIX, huge box, painfully slow.

2006-04-10 Thread Simon Riggs
On Fri, 2006-04-07 at 19:05 -0400, Tom Lane wrote: It's plausible though that we are seeing contention across members of the LWLock array, with the semop storm just being a higher-level symptom of the real hardware-level problem. You might try increasing LWLOCK_PADDED_SIZE to 64 or even 128,

Re: [PERFORM] pg 8.1.3, AIX, huge box, painfully slow.

2006-04-10 Thread Gavin Hamill
Simon Riggs wrote: pSeries cache lines are 128 bytes wide, so I'd go straight to 128. Hello :) OK, that line of code is: #define LWLOCK_PADDED_SIZE (sizeof(LWLock) <= 16 ? 16 : 32) What should I change this to? I don't understand the syntax of the <= 16 ? : stuff... would a simple

Re: [PERFORM]

2006-04-10 Thread Ragnar
On Mon, 2006-04-10 at 10:30 +0200, Doron Baranes wrote: I attached a file with details about the tables, the queries and the EXPLAIN ANALYZE plans. Hope this can be helpful to analyze my problem first query: explain analyze SELECT date_trunc('hour'::text, i.entry_time) AS datetime,

Re: [PERFORM] Restore performance?

2006-04-10 Thread Tom Lane
Jesper Krogh [EMAIL PROTECTED] writes: gzip does not seem to be the bottleneck; on restore, psql is the no. 1 consumer of CPU time. Hm. We've seen some situations where readline mistakenly decides that the input is interactive and wastes lots of cycles doing useless processing (like keeping
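
A sketch of the workaround Tom is hinting at: make psql read the dump as a file instead of a pipe, so the interactive/readline code path is never taken (file name taken from the thread, database name hypothetical):

    gunzip -c sqldump.gz > sqldump.sql
    psql -q -d newdb -f sqldump.sql    # -f bypasses the interactive path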

Re: [PERFORM] pg 8.1.3, AIX, huge box, painfully slow.

2006-04-10 Thread Tom Lane
Gavin Hamill [EMAIL PROTECTED] writes: would a simple #define LWLOCK_PADDED_SIZE 128 be sufficient? Yeah, that's fine. regards, tom lane

Re: [PERFORM] Restore performance?

2006-04-10 Thread Rajesh Kumar Mallah
On 4/10/06, Jesper Krogh [EMAIL PROTECTED] wrote: Hi, I'm currently upgrading a PostgreSQL 7.3.2 database to an 8.1.something-good. I'd run pg_dump | gzip > sqldump.gz on the old system. That took about 30 hours and gave me a 90GB zipped file. Running cat sqldump.gz | gunzip | psql into the 8.1 database

Re: [PERFORM] Restore performance?

2006-04-10 Thread Rajesh Kumar Mallah
Sorry for the post, I didn't see the other replies until after posting. On 4/10/06, Rajesh Kumar Mallah [EMAIL PROTECTED] wrote: On 4/10/06, Jesper Krogh [EMAIL PROTECTED] wrote: Hi, I'm currently upgrading a PostgreSQL 7.3.2 database to an 8.1.something-good. I'd run pg_dump | gzip > sqldump.gz on the old

Re: [PERFORM] Takes too long to fetch the data from database

2006-04-10 Thread Rajesh Kumar Mallah
What is the query? Use LIMIT or a restricting WHERE clause. regds, mallah. On 4/10/06, soni de [EMAIL PROTECTED] wrote: Hello, I have difficulty in fetching the records from the database. The database table contains more than 1 GB of data. Fetching the records takes more than 1 hour, and that's why
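
A sketch of what Rajesh means (table and column names are hypothetical, since the original query was never posted):

    psql -d mydb <<'SQL'
    SELECT * FROM bigtable
    WHERE  created >= '2006-04-01'   -- restricting clause cuts the scan
    LIMIT  100;                      -- fetch a page, not the whole 1 GB
    SQL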

Re: [PERFORM] Restore performance?

2006-04-10 Thread Rajesh Kumar Mallah
4. fsync can also be turned off while loading a huge dataset, but seek others' comments too (and study the docs) as I am not sure about the reliability. I think it can make a lot of difference. On 4/10/06, Jesper Krogh [EMAIL PROTECTED] wrote: Rajesh Kumar Mallah wrote: I'd run pg_dump | gzip

Re: [PERFORM] Restore performance?

2006-04-10 Thread Alvaro Herrera
Rajesh Kumar Mallah wrote: 4. fsync can also be turned off while loading a huge dataset, but seek others' comments too (and study the docs) as I am not sure about the reliability. I think it can make a lot of difference. Also be sure to increase maintenance_work_mem so that index creation
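
A sketch of the bulk-load settings being discussed (values are illustrative; re-enable fsync once the restore is verified, since a crash with fsync=off can corrupt the cluster):

    # postgresql.conf, for the duration of the restore only:
    #   fsync = off                     # unsafe if the box crashes mid-load
    #   maintenance_work_mem = 524288   # in KB on 8.1; speeds CREATE INDEX
    #   checkpoint_segments = 32        # fewer checkpoints during the load
    pg_ctl reload -D $PGDATA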

[PERFORM] Better index stategy for many fields with few values

2006-04-10 Thread Oscar Picasso
Hi, I want to optimize something like this.
- My items table:
code int -- can take one of 100 values
property varchar(250) -- can take one of 5000 values
param01 char(10) -- can take one of 10 values
param02 char(10) -- can take one of 10 values
... [ 20 similar columns ] ...
param20 char(10) -- can take

Re: [PERFORM] Better index stategy for many fields with few values

2006-04-10 Thread PFC
- My items table: code int -- can take one of 100 values property varchar(250) -- can take one of 5000 values param01 char(10) -- can take one of 10 values param02 char(10) -- can take one of 10 values ... [ 20 similar columns ] ... param20 char(10) -- can take one
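
One conservative sketch for such a table: instead of 21 single-column indexes on low-cardinality fields, index only the columns with useful selectivity (names copied from the post):

    psql -d mydb <<'SQL'
    -- property has ~5000 distinct values and code ~100; together they
    -- narrow the result far more than any char(10) param ever could
    CREATE INDEX items_property_code_idx ON items (property, code);
    SQL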

Re: [PERFORM] Takes too long to fetch the data from database

2006-04-10 Thread Joshua D. Drake
Rajesh Kumar Mallah wrote: what is the query? Use LIMIT or a restricting WHERE clause. You could also use a cursor. Joshua D. Drake regds mallah. On 4/10/06, soni de [EMAIL PROTECTED] wrote: Hello, I have difficulty in fetching the records from
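
A sketch of the cursor approach Joshua mentions, so the client fetches rows in batches instead of materializing the whole set at once (table name hypothetical):

    psql -d mydb <<'SQL'
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM bigtable;
    FETCH 1000 FROM c;   -- repeat until no rows come back
    CLOSE c;
    COMMIT;
    SQL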

Re: [PERFORM] Restore performance?

2006-04-10 Thread Vivek Khera
On Apr 10, 2006, at 3:55 AM, Jesper Krogh wrote: I'd run pg_dump | gzip > sqldump.gz on the old system. That took about 30 hours and gave me a 90GB zipped file. Running cat sqldump.gz | gunzip | psql into the 8.1 database seems to take about the same time. Are there any tricks I can use to

Re: [PERFORM] pg 8.1.3, AIX, huge box, painfully slow.

2006-04-10 Thread Brad Nicholson
Tom Lane wrote: This is unfortunately not going to help you as far as getting that machine into production now (unless you're brave enough to run CVS tip as production, which I certainly am not). I'm afraid you're most likely going to have to ship that pSeries back at the end of the month,

Re: [PERFORM] OT: Data structure design question: How do they count so fast?

2006-04-10 Thread Brendan Duddridge
Hi Richard (and anyone else who wants to comment!), I'm not sure it will really work pre-computed. At least not in an obvious way (for me! :-)) It's fine to display a pre-computed list of product counts for the initial set of attribute and attribute values, but we need to be able to
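
A sketch of computing the counts on the fly for the current filter; the schema is entirely hypothetical, just to make the shape of the query concrete:

    psql -d mydb <<'SQL'
    -- product_attribute(product_id, attr_id, value_id)
    SELECT pa.attr_id, pa.value_id, count(DISTINCT pa.product_id)
    FROM   product_attribute pa
    WHERE  pa.product_id IN (SELECT product_id FROM product_attribute
                             WHERE attr_id = 1 AND value_id = 42)
    GROUP  BY pa.attr_id, pa.value_id;
    SQL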

Re: [PERFORM] bad performance on Solaris 10

2006-04-10 Thread Chris Mair
Chris, Just to make sure the x4100 config is similar to your Linux system, can you verify the default setting for disk write cache and make sure they are both enabled or disabled. Here's how to check in Solaris. As root, run format -e -> pick a disk -> cache -> write_cache -> display

Re: [PERFORM] Encouraging multi-table join order

2006-04-10 Thread Tom Lane
Dan Harris [EMAIL PROTECTED] writes: I have a query that is intended to select from multiple small tables to get a limited subset of incidentid and then join with a very large table. One of the operations will require a sequential scan, but the planner is doing the scan on the very large

Re: [PERFORM] Encouraging multi-table join order

2006-04-10 Thread Dan Harris
Tom Lane wrote: That's very strange --- the estimated cost of the seqscan is high enough that the planner should have chosen a nestloop with inner indexscan on the big table. I'm not sure about the join-order point, but the hash plan for the first join seems wrong in any case. Um, you do have

Re: [PERFORM] Encouraging multi-table join order

2006-04-10 Thread Tom Lane
Dan Harris [EMAIL PROTECTED] writes: Yes, eventactivity.incidentid is indexed. The datatype is varchar(40). Although, by checking this, I noticed that k_h.incidentid was varchar(100). Perhaps the difference in length between the keys caused the planner to not use the fastest method? No,
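
For experimenting with this thread's problem, one 8.1-era way to pin the join order while testing plans (eventactivity, k_h and incidentid are from the thread; the rest of the query shape is hypothetical):

    psql -d mydb <<'SQL'
    SET join_collapse_limit = 1;   -- planner keeps the written JOIN order
    EXPLAIN ANALYZE
    SELECT e.*
    FROM   k_h
    JOIN   eventactivity e ON e.incidentid = k_h.incidentid;
    SQL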

[PERFORM] pgmemcache

2006-04-10 Thread C Storm
I was wondering if anyone on the list has a successful installation of pgmemcache running that uses LISTEN/NOTIFY to signal a successfully completed transaction, i.e., to get around the fact that TRIGGERS are transaction unaware. Or perhaps any other information regarding a successful deployment
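
A sketch of the pattern being asked about: because NOTIFY is delivered only after COMMIT, a trigger can flag changes and an external listener invalidates memcache post-commit (channel name hypothetical; on 8.1 NOTIFY carries no payload, so the listener re-reads a queue table):

    # listener side (a daemon would keep this connection open):
    psql -d mydb <<'SQL'
    LISTEN cache_dirty;
    SQL
    # the trigger function executes:  NOTIFY cache_dirty;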

[PERFORM] slow IN clause

2006-04-10 Thread FavoYang
I have a slow SQL query: SELECT * FROM mytable WHERE id IN (1,3,5,7,3k here...); mytable is about 10k rows. If I don't use the IN clause, it costs 0.11 seconds; otherwise it costs 2.x seconds. I guess pg uses a linear search to deal with the IN clause; is there any way to let pg use another search method
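
A common workaround for a 3000-element IN list (a sketch; mytable and id are from the post): load the ids into a temp table so the planner can join instead of testing every row against every literal:

    psql -d mydb <<'SQL'
    CREATE TEMP TABLE wanted (id int PRIMARY KEY);
    -- COPY wanted FROM stdin;   (load the 3k ids here)
    ANALYZE wanted;
    SELECT m.* FROM mytable m JOIN wanted w ON w.id = m.id;
    SQL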

Re: [PERFORM] slow IN clause

2006-04-10 Thread Vinko Vrsalovic
On Mon, 2006-04-10 at 12:44 +0800, Qingqing Zhou wrote: [EMAIL PROTECTED] wrote I have a slow SQL query: SELECT * FROM mytable WHERE id IN (1,3,5,7,3k here...); mytable is about 10k rows. If I don't use the IN clause, it costs 0.11 seconds; otherwise it costs 2.x seconds. I guess