Re: [PERFORM] Quick Performance Poll

2006-04-20 Thread Milen Kulev
Hi Luke, Thank you very much for your prompt reply. I have got ***much*** more information than expected ;) Obviously there are things to be improved in the current implementation of BMP indexes, but anyway they are worth using (I was pretty impressed by the BMP index performance, after Readin

Re: [PERFORM] Takes too long to fetch the data from database

2006-04-20 Thread soni de
I don't want to query exactly 81900 rows into a set. I just want to fetch 50 or 100 rows at a time, in decreasing order of stime (i.e. 50 or 100 rows starting from the last and going back). If we fetch sequentially, there is also a problem in fetching all the records (select * from wan where kname='pluto' orde
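A minimal sketch of page-at-a-time fetching without a growing OFFSET, assuming the wan(kname, stime) schema from the thread (the index name and the timestamp literal are illustrative placeholders):

  CREATE INDEX wan_kname_stime_idx ON wan (kname, stime);

  -- First page, newest rows first:
  SELECT * FROM wan
  WHERE kname = 'pluto'
  ORDER BY stime DESC
  LIMIT 50;

  -- Next page: continue from the smallest stime already seen,
  -- rather than re-scanning with a larger OFFSET:
  SELECT * FROM wan
  WHERE kname = 'pluto'
    AND stime < '2006-04-20 12:00:00'
  ORDER BY stime DESC
  LIMIT 50;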

Re: [PERFORM] Introducing a new linux readahead framework

2006-04-20 Thread Jim C. Nasby
On Fri, Apr 21, 2006 at 09:38:26AM +0800, Wu Fengguang wrote: > Greetings, > > I'd like to introduce a new readahead framework for the linux kernel: > http://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1021.html > > HOW IT WORKS > > In adaptive readahead, the context based method may be of par

[PERFORM] Introducing a new linux readahead framework

2006-04-20 Thread Wu Fengguang
Greetings, I'd like to introduce a new readahead framework for the linux kernel: http://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1021.html HOW IT WORKS In adaptive readahead, the context based method may be of particular interest to postgresql users. It works by peeking into the file cache

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Gavin Sherry
On Thu, 20 Apr 2006, Brendan Duddridge wrote: > Hi Tomas, > > Hmm... ktrace -p PID -c returns immediately without doing anything > unless I've previously done a ktrace -p PID. > > According to the man page for ktrace's -c flag: >-c Clear the trace points associated with the specified file

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Brendan Duddridge
Hi Tomas, Hmm... ktrace -p PID -c returns immediately without doing anything unless I've previously done a ktrace -p PID. According to the man page for ktrace's -c flag: -c Clear the trace points associated with the specified file or processes. When I run ktrace on OS X Server 10.4

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Brendan Duddridge
Hi Tom, Well, we started the restore back up with the WAL archives copied to our local disk. It's going at about the same pace as with the restore over NFS. So I tried ktrace -p PID and it created a really big file. I had to do 'ktrace -p PID -c' to get it to stop. The ktrace.out file is

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Brendan Duddridge
Thanks Tom, We are storing only the WAL archives on the NFS volume. It must have been a hiccup in the NFS mount. Jeff Frost asked if we were using hard or soft mounts. We were using soft mounts, so that may be where the problem lies with the PANIC. Is it better to use the boot volume of t

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Tom Lane
Brendan Duddridge <[EMAIL PROTECTED]> writes: > Oops... forgot to mention that both files that postgres said were > missing are in fact there: Please place the blame where it should fall: it's your archive restore command that's telling postgres that. > There didn't seem to be any issues with t

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Brendan Duddridge
Well our restore command is pretty basic: restore_command = 'gunzip %p' I'm not sure why that would succeed then fail. Brendan Duddridge | CTO | 403-277-5591 x24 | [EMAIL PROTECTED] ClickSpace Interactive Inc. Suite L100, 23

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Jeff Frost
Brendan, Is your NFS share mounted hard or soft? Do you have space to copy the files locally? I suspect you're seeing NFS slowness in your restore since you aren't using much in the way of disk IO or CPU. -Jeff On Thu, 20 Apr 2006, Brendan Duddridge wrote: Oops... forgot to mention that

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Tom Lane
Brendan Duddridge <[EMAIL PROTECTED]> writes: > However, as I just finished posting to the list, the process died > with a PANIC error: > [2006-04-20 16:41:28 MDT] LOG: restored log file > "0001018F0034" from archive > [2006-04-20 16:41:35 MDT] LOG: restored log file > "00010

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Brendan Duddridge
Oops... forgot to mention that both files that postgres said were missing are in fact there: A partial listing from our wal_archive directory: -rw------- 1 postgres staff 4971129 Apr 19 20:08 0001018F0036.gz -rw------- 1 postgres staff 4378284 Apr 19 20:09 0001018F0

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Brendan Duddridge
Hi Tom, I found it... it's called ktrace on OS X Server. However, as I just finished posting to the list, the process died with a PANIC error: [2006-04-20 16:41:28 MDT] LOG: restored log file "0001018F0034" from archive [2006-04-20 16:41:35 MDT] LOG: restored log file "00

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Brendan Duddridge
Hi Jeff, The WAL files are stored on a separate server and accessed through an NFS mount located at /wal_archive. However, the restore failed about 5 hours in after we got this error: [2006-04-20 16:41:28 MDT] LOG: restored log file "0001018F0034" from archive [2006-04-20 16:41

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Jeff Frost
On Thu, 20 Apr 2006, Brendan Duddridge wrote: Hi, We had a database issue today that caused us to have to restore to our most recent backup. We are using PITR so we have 3120 WAL files that need to be applied to the database. After 45 minutes, it has restored only 230 WAL files. At this rat

Re: [PERFORM] Quick Performance Poll

2006-04-20 Thread Luke Lonergan
Milen, On 4/20/06 12:45 PM, "Milen Kulev" <[EMAIL PROTECTED]> wrote: > I (still) haven't tried Bizgres, but what do you mean with "The current > drawback to bitmap index is that it isn't very > maintainable under insert/update, although it is safe for those operations"? Yes. > Do you mean tha

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Luke Lonergan
Brendan, strace -p -c Then do a "CTRL-C" after a minute to get the stats of system calls. - Luke On 4/20/06 2:13 PM, "Brendan Duddridge" <[EMAIL PROTECTED]> wrote: Hi Tom, Do you mean do a kill -QUIT on the postgres process in order to g

Re: [PERFORM] Performance decrease

2006-04-20 Thread Guido Neitzer
On 20.04.2006, at 18:10, Radovan Antloga wrote: I have an update on many records (~6000) once or twice a month, but not more often than that. I did not expect PG would have problems with updating 15800 records. It has no problems with that. We have a database where we often update/insert rows with about

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Tom Lane
Brendan Duddridge <[EMAIL PROTECTED]> writes: > Do you mean do a kill -QUIT on the postgres process in order to > generate a stack trace? Not at all! I'm talking about tracing the kernel calls it's making. Depending on your platform, the tool for this is called strace, ktrace, truss, or maybe e

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Brendan Duddridge
Hi Tom, Do you mean do a kill -QUIT on the postgres process in order to generate a stack trace? Will that affect the currently running process in any bad way? And where would the output go? stdout? Thanks, Brendan Duddr

Re: [PERFORM] Recovery will take 10 hours

2006-04-20 Thread Tom Lane
Brendan Duddridge <[EMAIL PROTECTED]> writes: > We had a database issue today that caused us to have to restore to > our most recent backup. We are using PITR so we have 3120 WAL files > that need to be applied to the database. > After 45 minutes, it has restored only 230 WAL files. At this rat

Re: [PERFORM] Quick Performance Poll

2006-04-20 Thread Milen Kulev
Hi Luke, I (still) haven't tried Bizgres, but what do you mean by "The current drawback to bitmap index is that it isn't very maintainable under insert/update, although it is safe for those operations"? Do you mean that INSERT/UPDATE operations against bitmap indexes perform poorly? If ye

Re: [PERFORM] Inserts optimization?

2006-04-20 Thread Vivek Khera
On Apr 14, 2006, at 8:00 AM, Marc Cousin wrote: So, you'll probably end up being slowed down by WAL fsyncs ... and you won't have a lot of solutions. Maybe you should start with trying to set fsync=no as a test to confirm that (you should have a lot of iowaits right now if you haven't dis

Re: [PERFORM] Inserts optimization?

2006-04-20 Thread Vivek Khera
On Apr 13, 2006, at 2:59 PM, Francisco Reyes wrote: This particular server is pretty much what I inherited for now for this project, and it's RAID 5. There is a new server I am setting up soon... 8 disks, which we are planning to set up as 6 disks in RAID 10 plus 2 hot spares. In RAID 10 would it matte

[PERFORM] Recovery will take 10 hours

2006-04-20 Thread Brendan Duddridge
Hi, We had a database issue today that caused us to have to restore to our most recent backup. We are using PITR so we have 3120 WAL files that need to be applied to the database. After 45 minutes, it has restored only 230 WAL files. At this rate, it's going to take about 10 hours to rest

Re: [PERFORM] Hardware: HP StorageWorks MSA 1500

2006-04-20 Thread Mark Lewis
Hmmm. We use an MSA 1000 with Fibre Channel interconnects. No real complaints, although I was a little bit disappointed by the RAID controller's battery-backed write cache performance; tiny random writes are only about 3 times as fast with write caching enabled as with it disabled. I had (perhaps

[PERFORM] Hardware: HP StorageWorks MSA 1500

2006-04-20 Thread Mikael Carneholm
We're going to get one for evaluation next week (equipped with dual 2Gbit HBAs and 2x14 disks, iirc). Anyone with experience of them, performance-wise? Regards, Mikael

Re: [PERFORM] Performance decrease

2006-04-20 Thread Jim C. Nasby
On Thu, Apr 20, 2006 at 06:10:21PM +0200, Radovan Antloga wrote: > I have once or twice a month update on many records (~6000) but > not so many. I did not expect PG would have problems with > updating 15800 records. And generally speaking, it doesn't. But you do need to ensure that you're vacuumi
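A minimal sketch of the maintenance step being described: after a bulk UPDATE rewrites every row, a VACUUM reclaims the dead versions so the table does not keep growing (the table and column names are illustrative):

  UPDATE mytable SET field = NULL;
  VACUUM ANALYZE mytable;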

Re: [PERFORM] Quick Performance Poll

2006-04-20 Thread Jim C. Nasby
Interested in doing a case study for the website? On Thu, Apr 20, 2006 at 09:36:25AM -0400, Jim Buttafuoco wrote: > > Simon, > > I have many databases over 1T with the largest being ~6T. All of my > databases store telecom data, such as call detail > records. The access is very fast when look

Re: [PERFORM] SELECT FOR UPDATE performance is bad

2006-04-20 Thread Jim C. Nasby
On Wed, Apr 19, 2006 at 10:20:54AM +0200, Mario Splivalo wrote: > This works perfectly, but sometimes the game has no codes, and I still > need to know exactley who came first, who was second, and so on... So a > locking table as Tom suggested is, I guess, a perfect solution for my > situation...
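One possible shape of the locking-table approach referred to above, assuming a dedicated table used purely for serialization (its name is illustrative):

  BEGIN;
  -- Only one transaction at a time gets past this lock, so the
  -- order in which contenders arrive is unambiguous:
  LOCK TABLE code_assignment_lock IN EXCLUSIVE MODE;
  -- ... assign the next code / record the winner here ...
  COMMIT;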

Re: [PERFORM] merge>hash>loop

2006-04-20 Thread Jim C. Nasby
On Wed, Apr 19, 2006 at 01:25:28AM -0400, Tom Lane wrote: > Mark Kirkwood <[EMAIL PROTECTED]> writes: > > Jim C. Nasby wrote: > >> Good point. :/ I'm guessing there's no easy way to see how many blocks > >> for a given relation are in shared memory, either... > > > contrib/pg_buffercache will tell
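For reference, this is the kind of query contrib/pg_buffercache enables, counting shared buffers per relation in the current database (the shape follows the module's documented example):

  SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
  GROUP BY c.relname
  ORDER BY buffers DESC
  LIMIT 10;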

Re: [PERFORM] Performance decrease

2006-04-20 Thread Radovan Antloga
190 fields in a table seems like rather a lot ... is that actually representative of your intended applications? The test table is like the table I use in production with Firebird and Oracle. The table has a lot of smallint and integer fields. As you can see, I use Firebird for low-cost projects (small c

Re: [PERFORM] Takes too long to fetch the data from database

2006-04-20 Thread Merlin Moncure
> SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900; you need to try and solve the problem without using 'offset'. you could do: BEGIN; DECLARE crs cursor FOR SELECT * FROM wan ORDER BY stime; FETCH ABSOLUTE 81900 in crs; FETCH 49 in crs; CLOSE crs; COMMIT; this may be a bit faster but will
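Formatted, the cursor sketch from the message reads:

  BEGIN;
  DECLARE crs CURSOR FOR SELECT * FROM wan ORDER BY stime;
  FETCH ABSOLUTE 81900 IN crs;  -- jump straight to row 81900
  FETCH 49 IN crs;              -- then read the next 49 rows
  CLOSE crs;
  COMMIT;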

Re: [PERFORM] Performance decrease

2006-04-20 Thread Tom Lane
"Radovan Antloga" <[EMAIL PROTECTED]> writes: > My test table has 15830 records with 190 fields. 190 fields in a table seems like rather a lot ... is that actually representative of your intended applications? > I do like this: > update table > set field = null Again, is that representative of

[PERFORM] Performance decrease

2006-04-20 Thread Radovan Antloga
I'm new to PG and I'm testing default PG settings for now. I have PG 8.1.3 installed with autovacuum=on. My test table has 15830 records with 190 fields. I have different field types (date, numeric, varchar, integer, smallint, ...). I decided to evaluate PG because I need to use schemas. Fir

Re: [PERFORM] Quick Performance Poll

2006-04-20 Thread Luke Lonergan
Markus, On 4/20/06 8:11 AM, "Markus Schaber" <[EMAIL PROTECTED]> wrote: > Are they capable to index custom datatypes like the PostGIS geometries > that use the GIST mechanism? This could probably speed up our Geo > Databases for Map rendering, containing static data that is updated > approx. 2 ti

[PERFORM] IBM pSeries - overrated bucket of crud?

2006-04-20 Thread Gavin Hamill
Hi again :) This is a follow-up to the mega thread which made a Friday night more interesting [1] - the summary is various people thought there was some issue with shared memory access on AIX. I then installed Debian (kernel 2.6.11) on the 8-CPU p650 (native - no LPAR) and saw just as woeful

Re: [PERFORM] Quick Performance Poll

2006-04-20 Thread Markus Schaber
Hi, Luke, Luke Lonergan wrote: > The current drawback to bitmap index is that it isn't very maintainable > under insert/update, although it is safe for those operations. For now, you > have to drop index, do inserts/updates, rebuild index. So they effectively turn the table into a read-only tab
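The drop/reload/rebuild cycle this implies, sketched with illustrative names (the USING bitmap syntax is assumed from Bizgres, not stock PostgreSQL):

  DROP INDEX calls_carrier_bmp;
  -- ... run the batch of INSERTs / UPDATEs ...
  CREATE INDEX calls_carrier_bmp ON calls USING bitmap (carrier_id);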

Re: [PERFORM] Quick Performance Poll

2006-04-20 Thread Jim Buttafuoco
I have been following your work with great interest. I believe I spoke to someone from Greenplum at linux world in Boston a couple of weeks ago. -- Original Message --- From: "Luke Lonergan" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED], "Simon Dale" <[EMAIL PROTECTED]>, pgsql-perf

Re: [PERFORM] Quick Performance Poll

2006-04-20 Thread Luke Lonergan
Jim, On 4/20/06 7:40 AM, "Jim Buttafuoco" <[EMAIL PROTECTED]> wrote: > First of all this is NOT a single table and yes I am using partitioning and > the constraint exclusion stuff. The largest > set of tables is over 2T. I have not had to rebuild the biggest database yet, > but for a smaller one

Re: [PERFORM] Perfrmance Problems (7.4.6)

2006-04-20 Thread Ruben Rubio Rey
Did you try to index the expression? Did it work? Doron Baranes wrote: Ok. But that means I need a trigger on the original column to update the new column on each insert/update, and that overhead. -Original Message- From: Ruben Rubio Rey [mailto:[EMAIL PROTECTED] Sent: Thursday, Apri

Re: [PERFORM] Quick Performance Poll

2006-04-20 Thread Jim Buttafuoco
First of all this is NOT a single table, and yes, I am using partitioning and the constraint exclusion stuff. The largest set of tables is over 2T. I have not had to rebuild the biggest database yet, but for a smaller one ~1T the restore takes about 12 hours, including many indexes on both large an
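A minimal sketch of the partitioning-plus-constraint-exclusion setup described, in the 8.1-era inheritance style (table and column names are illustrative):

  CREATE TABLE cdr (call_time timestamp, duration integer);
  CREATE TABLE cdr_2006_04 (
      CHECK (call_time >= '2006-04-01' AND call_time < '2006-05-01')
  ) INHERITS (cdr);

  SET constraint_exclusion = on;
  -- A query constrained on call_time now skips non-matching children:
  SELECT count(*) FROM cdr
  WHERE call_time >= '2006-04-10' AND call_time < '2006-04-11';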

Re: [PERFORM] Quick Performance Poll

2006-04-20 Thread Luke Lonergan
Jim, On 4/20/06 6:36 AM, "Jim Buttafuoco" <[EMAIL PROTECTED]> wrote: > The access is very fast when looking for a small subset of the data. I guess you are not using indexes because building a (non bitmap) index on 6TB on a single machine would take days if not weeks. So if you are using table

Re: [PERFORM] Inserts optimization?

2006-04-20 Thread Scott Marlowe
On Wed, 2006-04-19 at 20:07, Christopher Kings-Lynne wrote: > > Scott Marlowe <[EMAIL PROTECTED]> writes: > >> It's the refusal of people to stop using MyISAM table types that's the > >> real issue. > > > > Isn't MyISAM still the default over there? It's hardly likely that the > > average MySQL u

Re: [PERFORM] Identical query on two machines, different plans....

2006-04-20 Thread Csaba Nagy
OK, I marked the wrong row counts, but the conclusion is the same. Cheers, Csaba. > QUERY PLAN: Index Scan using ticketing_codes_uq_value_

Re: [PERFORM] Identical query on two machines, different plans....

2006-04-20 Thread Mario Splivalo
On Thu, 2006-04-20 at 15:59 +0200, Csaba Nagy wrote: > You very likely forgot to run ANALYZE on your laptop after copying the > data. Observe the different row count estimates in the 2 plans... > > HTH, > Csaba. Sometimes I wish I were Dumbo the Elephant, so I could cover myself with my ears... Th

Re: [PERFORM] Identical query on two machines, different plans....

2006-04-20 Thread Csaba Nagy
You very likely forgot to run ANALYZE on your laptop after copying the data. Observe the different row count estimates in the 2 plans... HTH, Csaba. > QUERY PLAN
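The fix is a one-liner after loading the dump (the table name here is inferred from the truncated plan above and may differ):

  ANALYZE;                  -- refresh statistics for the whole database
  ANALYZE ticketing_codes;  -- or just the table in question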

[PERFORM] Identical query on two machines, different plans....

2006-04-20 Thread Mario Splivalo
I have copied the database from the production server to my laptop (pg_dump, etc...) to do some testing. While testing I have found that one particular query is much slower on my machine than on the server (it's not just because my laptop is much slower than the server), and found out that

Re: [PERFORM] Quick Performance Poll

2006-04-20 Thread Jim Buttafuoco
Simon, I have many databases over 1T, with the largest being ~6T. All of my databases store telecom data, such as call detail records. The access is very fast when looking for a small subset of the data. For servers, I am using white-box Intel Xeon and P4 systems with SATA disks, 4G of memory

[PERFORM] Quick Performance Poll

2006-04-20 Thread Simon Dale
Hi, I was just wondering whether anyone has had success with storing more than 1TB of data with PostgreSQL and how they have found the performance. We need a database that can store in excess of this amount and still show good performance. We will probably be implementing several ta

Re: [PERFORM] Perfrmance Problems (7.4.6)

2006-04-20 Thread Doron Baranes
Ok. But that means I need a trigger on the original column to update the new column on each insert/update and that overhead. -Original Message- From: Ruben Rubio Rey [mailto:[EMAIL PROTECTED] Sent: Thursday, April 20, 2006 12:49 PM To: Doron Baranes; pgsql-performance@postgresql.org Subje

Re: [PERFORM] Perfrmance Problems (7.4.6)

2006-04-20 Thread Ruben Rubio Rey
I think that the problem is the GROUP BY (datetime), that is date_trunc('hour'::text, i.entry_time). You should create an index on this expression (if it's possible). http://www.postgresql.org/docs/7.4/interactive/indexes-expressional.html If it is not possible, I would create a column with the value
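A sketch of such an expression index, matching the GROUP BY expression (the table name is illustrative, since only the alias "i" appears in the quoted query; this assumes entry_time is timestamp without time zone, because date_trunc on timestamptz is not immutable and cannot be indexed directly):

  CREATE INDEX entries_hour_idx
      ON entries (date_trunc('hour'::text, entry_time));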

[PERFORM] Perfrmance Problems (7.4.6)

2006-04-20 Thread Doron Baranes
Hi, I am running on postgres 7.4.6. I did a vacuum analyze on the database but there was no change. I attached a file with details about the tables, the queries and the Explain analyze plans. Hope this can be helpful to analyze my problem 10x Doron TABLES