Re: [PERFORM] filesystem performance with lots of files

2005-12-02 Thread David Lang
On Fri, 2 Dec 2005, Qingqing Zhou wrote: I don't have all the numbers readily available (and I didn't do all the tests on every filesystem), but I found that even with only 1000 files/directory ext3 had some problems, and if you enabled dir_hash some functions would speed up, but writing lots o
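The ext3 feature referred to above as "dir_hash" is normally exposed as dir_index (hashed b-tree directory lookups). A minimal sketch of enabling it, assuming an ext3 volume on a hypothetical /dev/sdX:

    tune2fs -O dir_index /dev/sdX   # index newly created directories
    e2fsck -fD /dev/sdX             # run unmounted; -D rebuilds existing directories with the index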

Re: [PERFORM] Database restore speed

2005-12-02 Thread Luke Lonergan
David, > Luke, would it help to have one machine read the file and > have it connect to postgres on a different machine when doing > the copy? (I'm thinking that the first machine may be able to > do a lot of the parsing and conversion, leaving the second > machine to just worry about doing

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Ron
Agreed, and I apologize for the imprecision of my post below. I should have written: "Best practice seems to be to use a journaling fs and log metadata only and put it on separate dedicated spindles." I've seen enough HD failures that I tend to be paranoid and log the metadata of fs dedicated
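On ext3, "log metadata only" maps to the journaling mode chosen at mount time: both data=ordered (the default) and data=writeback journal metadata only, with writeback also dropping the data-before-metadata write ordering. A hedged /etc/fstab sketch, device and mount point hypothetical:

    /dev/sdb1  /pg/wal  ext3  noatime,data=writeback  0 2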

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Michael Stone
On Fri, Dec 02, 2005 at 03:15:00AM -0500, Ron wrote: I've seen enough HD failures that I tend to be paranoid and log the metadata of fs dedicated to WAL as well, but that may very well be overkill. Especially since it wouldn't gain anything. Journalling doesn't give you any advantage whatsoever

Re: [PERFORM] COPY into table too slow with index: now an I/O

2005-12-02 Thread Michael Stone
On Fri, Dec 02, 2005 at 12:15:57AM -0500, Luke Lonergan wrote: That's good to know - makes sense. I suppose we might still thrash over a 1GB range in seeks if the BG writer starts running at full rate in the background, right? Or is there some write combining in the BG writer? That part your
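For reference, the background writer's pacing in 8.1 is governed by a handful of postgresql.conf settings; the values below are illustrative (near the 8.1 defaults), shown only to name the knobs:

    bgwriter_delay = 200ms         # sleep between rounds
    bgwriter_lru_percent = 1.0     # share of recently-used buffers scanned per round
    bgwriter_lru_maxpages = 5      # write cap per round from that scan
    bgwriter_all_percent = 0.333   # share of the whole buffer pool scanned per round
    bgwriter_all_maxpages = 5      # write cap per round from the full scan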

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Alex Stapleton
On 1 Dec 2005, at 16:03, Tom Lane wrote: Michael Riess <[EMAIL PROTECTED]> writes: (We NEED that many tables, please don't recommend to reduce them) No, you don't. Add an additional key column to fold together different tables of the same structure. This will be much more efficient than
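Tom Lane's suggestion as a sketch, with hypothetical names: fold the thousands of identical per-user tables into one table keyed by an extra column.

    -- one shared table instead of N structurally identical ones
    CREATE TABLE item (
        owner_id  integer NOT NULL,   -- the added key column
        item_id   integer NOT NULL,
        payload   text,
        PRIMARY KEY (owner_id, item_id)
    );
    -- what used to be "SELECT * FROM item_1234" becomes:
    SELECT * FROM item WHERE owner_id = 1234;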

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Alex Stapleton
On 2 Dec 2005, at 14:16, Alex Stapleton wrote: On 1 Dec 2005, at 16:03, Tom Lane wrote: Michael Riess <[EMAIL PROTECTED]> writes: (We NEED that many tables, please don't recommend to reduce them) No, you don't. Add an additional key column to fold together different tables of the same

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Jan Wieck
On 12/1/2005 2:34 PM, Michael Riess wrote: VACUUM FULL was probably always overkill, unless "always" includes versions prior to 7.3... Well, we tried switching to daily VACUUM ANALYZE and weekly VACUUM FULL, but the database got considerably slower near the end of the week. This indicates th

[PERFORM] Network performance under windows

2005-12-02 Thread Teracat
Hello, We used Postgresql 7.1 under Linux and recently we have changed it to Postgresql 8.1 under Windows XP. Our application uses ODBC and when we try to get some information from the server through a TCP connection, it's very slow. We have also tried it using psql and pgAdmin III, and we get

Re: [PERFORM] Network performance under windows

2005-12-02 Thread Merlin Moncure
> We used Postgresql 7.1 under Linux and recently we have changed it to > Postgresql 8.1 under Windows XP. Our application uses ODBC and when we > try to get some information from the server through a TCP connection, it's > very slow. We have also tried it using psql and pgAdmin III, and we get > the

Re: [PERFORM] pg_dump slow

2005-12-02 Thread Merlin Moncure
> > That was the command used to restore a database > > pg_restore.exe -i -h localhost -p 5432 -U postgres -d temp2 -v > "D:\d\temp.bkp" > > The database was created before using LATIN1 charset > > With 100 rows you can't feel the test, so I decided to send the whole > table. > > Many thanks >

Re: [PERFORM] pg_dump slow

2005-12-02 Thread Merlin Moncure
> How are you dumping out your archive? I confirmed unreasonably slow dump > with pg_dump -Z temp2 > temp2.bkp on windows 2000 server. I normally use > bzip to compress my dumps. > > Can you measure time to dump uncompressed and also with bzip and compare? > > Merlin oops...cancel that. I was
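The comparison being proposed, spelled out (database name from the thread, file names hypothetical):

    pg_dump temp2 > temp2.sql                # uncompressed baseline
    pg_dump -Z 6 temp2 > temp2.bkp           # built-in zlib compression
    pg_dump temp2 | bzip2 > temp2.sql.bz2    # external compressor, as Merlin uses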

Re: [PERFORM] Open request for benchmarking input (fwd)

2005-12-02 Thread Qingqing Zhou
"David Lang" <[EMAIL PROTECTED]> wrote > here are the suggestions from the MySQL folks, what additional tests > should I do. > I think the tests you list are enough in this stage, Regards, Qingqing ---(end of broadcast)--- TIP 9: In versions b

[PERFORM] two disks - best way to use them?

2005-12-02 Thread Rick Schumeyer
I installed another drive in my linux pc in an attempt to improve performance on a large COPY to a table with a geometry index. Based on previous discussion, it seems there are three things competing for the hard drive: 1) the input data file, 2) the pg table, 3)

Re: [PERFORM] Network performance under windows

2005-12-02 Thread Josep Maria Pinyol Fontseca
Dear Merlin, For instance, we have this table (with 22900 tuples): CREATE TABLE tbl_empresa ( id_empresa int4 NOT NULL DEFAULT nextval(('seq_empresa'::text)::regclass), ref_poblacio int4 NOT NULL, nom varchar(50) NOT NULL, nif varchar(12), carrer varchar(50), telefon varchar(13), fax varchar(13

Re: [PERFORM] two disks - best way to use them?

2005-12-02 Thread Ron
At 01:58 PM 12/2/2005, Rick Schumeyer wrote: I installed another drive in my linux pc in an attempt to improve performance on a large COPY to a table with a geometry index. Based on previous discussion, it seems there are three things competing for the hard drive: 1) the input data file
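One common two-disk split in that era was to leave the data directory on one drive and relocate the WAL (pg_xlog) to the other; a sketch with hypothetical paths, run with the server stopped:

    mv /var/lib/pgsql/data/pg_xlog /disk2/pg_xlog
    ln -s /disk2/pg_xlog /var/lib/pgsql/data/pg_xlog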

Re: [PERFORM] Database restore speed

2005-12-02 Thread Stephen Frost
* Luke Lonergan ([EMAIL PROTECTED]) wrote: > > Luke, would it help to have one machine read the file and > > have it connect to postgres on a different machine when doing > > the copy? (I'm thinking that the first machine may be able to > > do a lot of the parsing and conversion, leaving the se

Re: [PERFORM] Network performance under windows

2005-12-02 Thread me
we experienced the same. had 2 win2003 servers - www and db connected to the same router through 100mbit. the performance was quite bad. now we run the db on the same machine as the web and everything runs smooth. cheers, thomas

Re: [PERFORM] Network performance under windows

2005-12-02 Thread Qingqing Zhou
"Josep Maria Pinyol Fontseca" <[EMAIL PROTECTED]> wrote > > When we select all data in local machine, we obtain results in 2-3 seconds > aprox. In remote connections: > > Postgresql 7.1 usign pgAdminII: > Network traffic generated with remote applications is about 77-80% in a > 10Mb connection.

Re: [PERFORM] Database restore speed

2005-12-02 Thread Luke Lonergan
Stephen, On 12/2/05 12:18 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote: > Just a thought, but couldn't psql be made to use the binary mode of > libpq and do at least some of the conversion on the client side? Or > does binary mode not work with copy (that wouldn't surprise me, but > perhaps copy
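Binary COPY does exist at the SQL level; what the thread debates is whether clients can safely generate that format themselves. Illustrative 8.x-era syntax with a hypothetical table and path:

    COPY mytable TO '/tmp/mytable.bin' WITH BINARY;
    COPY mytable FROM '/tmp/mytable.bin' WITH BINARY;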

Re: [PERFORM] Network performance under windows

2005-12-02 Thread Josep Maria Pinyol Fontseca
Yes, with psql, pgAdminIII and our application with ODBC I experience the same situation... the statements that I execute are like "select * ..." or similar. Qingqing Zhou wrote: "Josep Maria Pinyol Fontseca" <[EMAIL PROTECTED]> wrote When we select all data on the local machine, w
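A quick way to separate server-side execution time from network transfer time, using psql's \timing and the tbl_empresa table from this thread:

    \timing
    SELECT count(*) FROM tbl_empresa;  -- same scan, but only one row crosses the wire
    SELECT * FROM tbl_empresa;         -- adds the transfer of all 22900 rows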

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Andrew Sullivan
On Thu, Dec 01, 2005 at 08:34:43PM +0100, Michael Riess wrote: > Well, we tried switching to daily VACUUM ANALYZE and weekly VACUUM FULL, > but the database got considerably slower near the end of the week. If you have your FSM configured correctly and you are vacuuming tables often enough for yo
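The 8.1-era free space map settings Andrew refers to live in postgresql.conf; the values below are illustrative only and should be sized from VACUUM VERBOSE output:

    max_fsm_relations = 20000   # must cover all ~15,000 tables plus their indexes
    max_fsm_pages = 500000      # total pages with free space tracked across the cluster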

Re: [PERFORM] Database restore speed

2005-12-02 Thread Stephen Frost
* Luke Lonergan ([EMAIL PROTECTED]) wrote: > On 12/2/05 12:18 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote: > > Just a thought, but couldn't psql be made to use the binary mode of > > libpq and do at least some of the conversion on the client side? Or > > does binary mode not work with copy (that

Re: [PERFORM] Database restore speed

2005-12-02 Thread Luke Lonergan
Stephen, On 12/2/05 1:19 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote: > I've used the binary mode stuff before, sure, Postgres may have to > convert some things but I have a hard time believing it'd be more > expensive to do a network_encoding -> host_encoding (or toasting, or > whatever) than

Re: [PERFORM] Database restore speed

2005-12-02 Thread Luke Lonergan
Stephen, On 12/2/05 1:19 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote: > >> I've used the binary mode stuff before, sure, Postgres may have to >> convert some things but I have a hard time believing it'd be more >> expensive to do a network_encoding -> host_encoding (or toasting, or >> whatever)

Re: [PERFORM] Database restore speed

2005-12-02 Thread Michael Stone
On Fri, Dec 02, 2005 at 01:24:31PM -0800, Luke Lonergan wrote: From a performance standpoint no argument, although you're betting that you can do parsing / conversion faster than the COPY core in the backend can Not necessarily; you may be betting that it's more *efficient* to do the parsing o

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Francisco Reyes
Michael Riess writes: Sorry, I should have included that info in the initial post. You're right in that most of these tables have a similar structure. But they are independent and can be customized by the users. How about creating 50 databases and giving each its own tablespace? It's not onl
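Francisco's suggestion as a sketch (names and paths hypothetical; tablespaces exist from 8.0 on):

    CREATE TABLESPACE app01 LOCATION '/disks/d01/pgdata';
    CREATE DATABASE app01 TABLESPACE = app01;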

Re: [PERFORM] Database restore speed

2005-12-02 Thread Tom Lane
"Luke Lonergan" <[EMAIL PROTECTED]> writes: > One more thing - this is really about the lack of a cross-platform binary > input standard for Postgres IMO. If there were such a thing, it *would* be > safe to do this. The current Binary spec is not cross-platform AFAICS, it > embeds native represen

[PERFORM] 15,000 tables - next step

2005-12-02 Thread Michael Riess
Hi, thanks for your comments so far - I appreciate it. I'd like to narrow down my problem a bit: As I said in the other thread, I estimate that only 20% of the 15,000 tables are accessed regularly. So I don't think that vacuuming or the number of file handles is a problem. Have a look at thi

Re: [PERFORM] Database restore speed

2005-12-02 Thread Luke Lonergan
Michael, On 12/2/05 1:46 PM, "Michael Stone" <[EMAIL PROTECTED]> wrote: > Not necessarily; you may be betting that it's more *efficient* to do the > parsing on a bunch of lightly loaded clients than your server. Even if > you're using the same code this may be a big win. If it were possible in l

[PERFORM] Small table or partial index?

2005-12-02 Thread Francisco Reyes
I am in the process of designing a new system. There will be a long list of words in a word table (word_id integer, word varchar, special boolean). Some "special" words are used to determine if some work is to be done, and they are what we care about most for one type of operation. Will it be more
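The two designs being weighed, as hypothetical DDL:

    -- (a) one table plus a partial index covering only the special words
    CREATE INDEX word_special_idx ON word (word) WHERE special;
    -- (b) a second, small table holding only the special words
    CREATE TABLE special_word (
        word_id  integer PRIMARY KEY,
        word     varchar
    );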

Re: [PERFORM] Database restore speed

2005-12-02 Thread Mitch Skinner
On Fri, 2005-12-02 at 13:24 -0800, Luke Lonergan wrote: > It's a matter of safety and generality - in general you > can't be sure that client machines / OS'es will render the same conversions > that the backend does in all cases IMO. Can't binary values be safely sent cross-platform in DataRow

Re: [PERFORM] Database restore speed

2005-12-02 Thread Luke Lonergan
And how do we compose the binary data on the client? Do we trust that the client encoding conversion logic is identical to the backend's? If there is a difference, what happens if the same file loaded from different client machines has different results? Key conflicts when loading a restore f