On Tue, 8 May 2007, 李彦 Ian Li wrote:
In #postgresql on freenode, somebody once mentioned that ZFS from Solaris
helps the performance of pgsql a lot, so does anyone have information
about that?
the filesystem you use will affect the performance of postgres
significantly. I've heard a
I am about to order a new server for my Postgres cluster. I will
probably get a Dual Xeon Quad Core instead of my current Dual Xeon.
Which OS would you recommend to optimize Postgres behaviour (i/o
access, multithreading, etc) ?
I am hesitating between Fedora Core 6, CentOS and Debian. Can
[EMAIL PROTECTED] wrote:
if you don't journal your data then you avoid the problems above, but in
a crash you may find that you lost data, even though the filesystem is
'intact' according to fsck.
PostgreSQL itself journals its data to the WAL, so that shouldn't happen.
--
Heikki
In #postgresql on freenode, somebody once mentioned that ZFS from Solaris
helps the performance of pgsql a lot, so does anyone have information
about that?
the filesystem you use will affect the performance of postgres
significantly. I've heard a lot of claims for ZFS, unfortunately many
On Tue, 8 May 2007, Claus Guttesen wrote:
In #postgresql on freenode, somebody once mentioned that ZFS from Solaris
helps the performance of pgsql a lot, so does anyone have information
about that?
the filesystem you use will affect the performance of postgres
significantly. I've
[EMAIL PROTECTED] wrote:
On Tue, 8 May 2007, Claus Guttesen wrote:
In #postgresql on freenode, somebody once mentioned that ZFS from Solaris
helps the performance of pgsql a lot, so does anyone have information
about that?
the filesystem you use will affect the performance of
On Mon, May 07, 2007 at 11:56:14PM -0400, Greg Smith wrote:
Debian packages PostgreSQL in a fashion unique to it; it's arguable
whether it's better or not (I don't like it), but going with that will
assure your installation is a bit non-standard compared with most Linux
installs. The main
Hello to all,
I have a table that is used as a spool for various events. Some processes
write data into it, and another process reads the resulting rows, does some
work, and deletes the rows that were just processed.
As you can see, with hundreds of thousands events a day, this table will
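The pattern Nicolas describes can be sketched roughly as follows (all names and batch sizes are hypothetical, not his actual schema):

```sql
-- Hypothetical schema for the event spool.
CREATE TABLE event_spool (
    event_id   serial PRIMARY KEY,
    payload    text NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Reader: fetch a batch, process it in the application, then delete it.
BEGIN;
SELECT event_id, payload
FROM event_spool
ORDER BY event_id
LIMIT 1000;
-- ... application work on the fetched rows ...
DELETE FROM event_spool
WHERE event_id <= 1000;  -- last id processed in this batch
COMMIT;
```

Every processed row becomes a dead row, which is why the table bloats without frequent vacuuming.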
On 5/8/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
[snip]
I personally don't trust reiserfs, jfs seems to be a tool for
transitioning from AIX more than anything else [...]
What makes you say this? I have run JFS for years with complete
satisfaction, and I have never logged into an AIX
On Mon, May 07, 2007 at 03:14:08PM -0700, Joshua D. Drake wrote:
It is my understanding (and I certainly could be wrong) that FreeBSD
doesn't handle SMP nearly as well as Linux (and Linux not as well as
Solaris).
I'm not actually sure about the last part. There are installations as big as
1024
Pomarede Nicolas npomarede 'at' corp.free.fr writes:
Hello to all,
I have a table that is used as a spool for various events. Some
processes write data into it, and another process reads the resulting
rows, does some work, and deletes the rows that were just processed.
As you can see, with
Pomarede Nicolas wrote:
But for the data (dead rows), even running a vacuum analyze every day is
not enough, and doesn't truncate some empty pages at the end, so the
data size remains in the order of 200-300 MB, when only a few effective
rows are there.
For a table like that you should run
On Tue, 8 May 2007, [EMAIL PROTECTED] wrote:
On Tue, 8 May 2007, Pomarede Nicolas wrote:
As you can see, with hundreds of thousands of events a day, this table will
need to be vacuumed regularly to avoid taking too much space (data and index).
Note that processing rows is quite fast in fact,
On Tue, 8 May 2007, Guillaume Cottenceau wrote:
Pomarede Nicolas npomarede 'at' corp.free.fr writes:
Hello to all,
I have a table that is used as a spool for various events. Some
processes write data into it, and another process reads the resulting
rows, does some work, and deletes the rows
On Tue, 8 May 2007, Trygve Laugstøl wrote:
Currently ZFS is only available on Solaris; parts of it have been released
under GPLv2, but it doesn't look like enough of it to be ported to Linux
(enough was released for grub to be able to access it read-only, but not
the full filesystem). there
Pomarede Nicolas [EMAIL PROTECTED] writes:
But for the data (dead rows), even running a vacuum analyze every day is not
enough, and doesn't truncate some empty pages at the end, so the data size
remains in the order of 200-300 MB, when only a few effective rows are there.
Try running vacuum
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
But for the data (dead rows), even running a vacuum analyze every day is
not enough, and doesn't truncate some empty pages at the end, so the data
size remains in the order of 200-300 MB, when only a few effective rows are
On Tue, 8 May 2007, Pomarede Nicolas wrote:
As you can see, with hundreds of thousands of events a day, this table will
need to be vacuumed regularly to avoid taking too much space (data and index).
Note that processing rows is quite fast in fact, so at any time a count(*) on
this table
Pomarede Nicolas wrote:
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
But for the data (dead rows), even running a vacuum analyze every day
is not enough, and doesn't truncate some empty pages at the end, so
the data size remains in the order of 200-300 MB, when only a
[EMAIL PROTECTED] wrote:
On Tue, 8 May 2007, Trygve Laugstøl wrote:
currently ZFS is only available on Solaris, parts of it have been released
under GPLv2, but it doesn't look like enough of it to be ported to Linux
(enough was released for grub to be able to access it read-only, but not
Pomarede Nicolas [EMAIL PROTECTED] writes:
Yes, I already do this on another spool table; I run a vacuum after
processing it, but I wondered if there was another way to keep the disk
size low for this table.
after processing it might be too soon if there are still transactions
around that
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
But for the data (dead rows), even running a vacuum analyze every day is
not enough, and doesn't truncate some empty pages at the end, so the data
size
Pomarede Nicolas wrote:
There aren't too many simultaneous transactions on the database; most of
the time it shouldn't exceed one minute (worst case). Except, as I need
to run a vacuum analyze on the whole database every day, it now takes 8
hours to do the vacuum (I changed vacuum values to be a
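The "vacuum values" mentioned are presumably the cost-based vacuum delay settings in postgresql.conf; a sketch with the 8.x parameter names (the numbers below are illustrative examples, not the poster's actual configuration):

```
# Cost-based vacuum delay: a non-zero delay throttles VACUUM's I/O,
# making it slower overall but less disruptive to concurrent queries.
vacuum_cost_delay = 10      # ms to sleep each time the cost limit is hit
vacuum_cost_limit = 200     # accumulated page-cost that triggers a sleep
```

Raising the delay (or lowering the limit) trades a longer-running VACUUM, like the 8 hours described here, for less impact on foreground traffic.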
I'm really not a senior member around here, and while all this licensing
stuff and underlying fs between OSs is very interesting, can we please
think twice before continuing it.
Thanks for the minute,
./C
Heikki Linnakangas heikki 'at' enterprisedb.com writes:
Pomarede Nicolas wrote:
But for the data (dead rows), even running a vacuum analyze every
day is not enough, and doesn't truncate some empty pages at the end,
so the data size remains in the order of 200-300 MB, when only a few
Guillaume Cottenceau wrote:
Heikki, is there theoretical need for frequent VACUUM when
max_fsm_pages is large enough to hold references of dead rows?
Not really, if you don't mind that your table with 10 rows takes
hundreds of megabytes on disk. If max_fsm_pages is large enough, the
table
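Heikki's point depends on the free space map being large enough to remember every page with dead-row space between VACUUM runs; in 8.x these were manual postgresql.conf settings (values below are illustrative, not from the thread):

```
# If the FSM overflows, dead space stops being reused and tables grow
# instead; size these above the cluster's steady-state dead-page count.
max_fsm_pages = 200000      # total pages with free space tracked, cluster-wide
max_fsm_relations = 1000    # total relations tracked
```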
WRT ZFS on Linux, if someone were to port it, the license issue would get
worked out IMO (with some discussion to back me up). From discussions with the
developers, the biggest issue is a technical one: the Linux VFS layer makes the
port difficult.
I don't hold any hope that the FUSE port
I'm trying to come up with a way to estimate the need for a
VACUUM FULL and/or a REINDEX on some tables.
According to documentation[1], VACUUM FULL's only benefit is
returning unused disk space to the operating system; am I correct
in assuming there's also the benefit of optimizing the
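One rough, purely illustrative way to spot candidate tables is to compare each table's physical size with its estimated live row count from pg_class (both maintained by ANALYZE); any threshold on the ratio is your own judgment call:

```sql
-- relpages = actual size in 8 kB pages; reltuples = estimated live rows.
-- A pages-per-row figure far above its historical norm suggests bloat
-- that plain VACUUM has not reclaimed.
SELECT relname,
       relpages,
       reltuples,
       relpages::float / GREATEST(reltuples, 1) AS pages_per_row
FROM pg_class
WHERE relkind = 'r'
ORDER BY relpages DESC
LIMIT 20;
```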
I'm really not a senior member around here, and while all this licensing
stuff and underlying fs between OSs is very interesting, can we please
think twice before continuing it.
Agreed, there are other lists for this stuff; and back to what one of
the original posters said: it doesn't matter
Guillaume Cottenceau wrote:
According to documentation[1], VACUUM FULL's only benefit is
returning unused disk space to the operating system; am I correct
in assuming there's also the benefit of optimizing the
performance of scans, because rows are physically compacted on
the disk?
That's
In response to Guillaume Cottenceau [EMAIL PROTECTED]:
I'm trying to come up with a way to estimate the need for a
VACUUM FULL and/or a REINDEX on some tables.
You shouldn't vacuum full unless you have a good reason. Vacuum full
causes index bloat.
According to documentation[1], VACUUM
Hi,
Despite numerous efforts, we're unable to solve a severe performance
limitation between Pg 7.3.2 and Pg 8.1.4.
The query and 'explain analyze' plan below runs in
26.20 msec on Pg 7.3.2, and
2463.968 ms on Pg 8.1.4,
and the Pg 7.3.2 is on older hardware and OS.
Multiply
Susan Russo [EMAIL PROTECTED] writes:
Despite numerous efforts, we're unable to solve a severe performance
limitation between Pg 7.3.2 and Pg 8.1.4.
The query and 'explain analyze' plan below, runs in
26.20 msec on Pg 7.3.2, and
2463.968 ms on Pg 8.1.4,
You're not getting
On Tue, May 08, 2007 at 10:18:34AM -0400, Susan Russo wrote:
explain analyze output on Pg7.3.2:
- Index Scan using dbxref_idx2 on dbxref dx
(cost=0.00..5.83 rows=1 width=21) (actual time=25.58..25.58 rows=0 loops=1)
Index Cond: ((accession =
--- Susan Russo [EMAIL PROTECTED] wrote:
and accession like 'AY851043%'
I don't know if you've tried refactoring your query, but you could try:
AND accession BETWEEN 'AY8510430' AND 'AY8510439' -- where the last digit
runs from 0 through 9
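Written out in full, the suggested rewrite looks like this (only dbxref and accession come from the thread; the query shape is illustrative, and the range form matches the LIKE only when exactly one digit follows the prefix):

```sql
-- Range form of the left-anchored pattern match: usable by a plain
-- btree index regardless of database locale.
SELECT dx.accession
FROM dbxref dx
WHERE dx.accession BETWEEN 'AY8510430' AND 'AY8510439';
-- Equivalent to: WHERE dx.accession LIKE 'AY851043%'
-- only if accession values have exactly one trailing digit here.
```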
On 5/8/07, Tom Lane [EMAIL PROTECTED] wrote:
You're not getting the indexscan optimization of the LIKE clause, which
is most likely due to having initdb'd the 8.1 installation in something
other than C locale. You can either redo the initdb in C locale (which
might be a good move to fix other
In response to Alexander Staubo [EMAIL PROTECTED]:
On 5/8/07, Tom Lane [EMAIL PROTECTED] wrote:
You're not getting the indexscan optimization of the LIKE clause, which
is most likely due to having initdb'd the 8.1 installation in something
other than C locale. You can either redo the
Alexander Staubo [EMAIL PROTECTED] writes:
why is there no support for changing the database locale after the fact?
It'd corrupt all your indexes (or all the ones on textual columns anyway).
There are some TODO entries related to this, but don't hold your breath
waiting for a fix ...
This query does some sort of analysis on an email archive:
SELECT
eh_subj.header_body AS subject,
count(distinct eh_from.header_body)
FROM
email JOIN mime_part USING (email_id)
JOIN email_header eh_subj USING
Hi,
You could always try
CREATE INDEX test_index ON dbxref (accession varchar_pattern_ops);
WOW! we're now at runtime 0.367ms on Pg8
Next step is to initdb w/C Locale (tonight) (Thanks Tom et al.!).
Thanks again - will report back soon.
Susan
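For context, varchar_pattern_ops makes the btree compare strings character by character instead of by locale collation, which lets the planner turn a left-anchored LIKE into an index range scan even in a non-C locale. A sketch (the index name is made up):

```sql
-- Index usable by left-anchored LIKE in any locale:
CREATE INDEX dbxref_accession_pattern_idx
    ON dbxref (accession varchar_pattern_ops);

-- The planner can now rewrite the pattern into a range scan:
SELECT * FROM dbxref WHERE accession LIKE 'AY851043%';
```

Note that such an index handles LIKE but not ordinary `<`/`>` comparisons or ORDER BY in the database's locale; those still need a regular index.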
Susan Russo wrote:
Hi,
You could always try
CREATE INDEX test_index ON dbxref (accession varchar_pattern_ops);
WOW! we're now at runtime 0.367ms on Pg8
Next step is to initdb w/C Locale (tonight) (Thanks Tom et al.!).
That's an alternative to the pattern_ops index; it won't help you
Peter Eisentraut [EMAIL PROTECTED] writes:
Note how spectacularly overpriced this plan is.
Hmm, I'd have expected it to discount the repeated indexscans a lot more
than it seems to be doing for you. As an example in the regression
database, note what happens to the inner indexscan cost estimate
I've seen the FUSE port of ZFS, and it does run sslloowwllyy. It
appears that a native linux port is going to be required if we want
ZFS to be reasonably performant.
WRT which FS to use for pg; the biggest issue is what kind of DB you
will be building. The best pg FS for OLTP and OLAP are
Alvaro Herrera [EMAIL PROTECTED] writes:
Susan Russo wrote:
Next step is to initdb w/C Locale (tonight) (Thanks Tom et al.!).
That's an alternative to the pattern_ops index; it won't help you obtain a
plan faster than this one.
No, but since their old DB was evidently running in C locale, this
I am back with the chatlog, and it seems it's the transparent compression
that helps a lot, very interesting...
here is the log of #postgresql on Apr. 21st around 13:20 GMT (snipped):
Solatis why is that, when hard disk i/o is my bottleneck ?
Solatis well i have 10 disks in a raid1+0
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
There aren't too many simultaneous transactions on the database; most of the
time it shouldn't exceed one minute (worst case). Except, as I need to run
a vacuum analyze on the whole database every day, it now takes 8 hours to
Scott Marlowe wrote:
On Thu, 2007-05-03 at 21:37, Merlin Moncure wrote:
On 5/3/07, Fei Liu [EMAIL PROTECTED] wrote:
Hello, Andreas, I too am having exactly the same issue as you do.
Comparing my partitioned and plain table performance, I've found that
the plain tables perform about 25%
On Tue, 2007-05-08 at 13:41, Fei Liu wrote:
Scott Marlowe wrote:
On Thu, 2007-05-03 at 21:37, Merlin Moncure wrote:
On 5/3/07, Fei Liu [EMAIL PROTECTED] wrote:
Hello, Andreas, I too am having exactly the same issue as you do.
Comparing my partitioned and plain table
Does using DISTINCT in a query force PG to abandon any index search it might
have embarked upon?
--
Yudhvir Singh Sidhu
408 375 3134 cell
On Tue, May 08, 2007 at 12:52:35PM -0700, Y Sidhu wrote:
Does using DISTINCT in a query force PG to abandon any index search it might
have embarked upon?
No.
If you need help with a specific query, please post it, along with your table
definitions and EXPLAIN ANALYZE output.
/* Steinar */
--
Y Sidhu wrote:
Does using DISTINCT in a query force PG to abandon any index search it
might have embarked upon?
Depends on the where clause.
On Tue, 2007-05-08 at 14:52, Y Sidhu wrote:
Does using DISTINCT in a query force PG to abandon any index search it
might have embarked upon?
explain analyze select distinct request from businessrequestsummary
where lastflushtime between now() - interval '30 minutes' and now();
I'm building a kiosk with a 3D front end accessing PostGIS/PostgreSQL
via Apache/PHP. The 3D display is supposed to show smooth motion from
location to location, with PostGIS giving dynamically updated
information on the locations. Everything runs on the same machine,
and it all works, but
On Tue, 8 May 2007, [EMAIL PROTECTED] wrote:
one issue with journaling filesystems: if you journal the data as well as the
metadata you end up with a very reliable setup, however it means that all
your data needs to be written twice, once to the journal, and once to the
final location. the
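On ext3, for instance, that choice is a mount option; since PostgreSQL already journals its data changes in the WAL, metadata-only journaling is often chosen for the data partition (an illustrative /etc/fstab line; device and mount point are examples, not from the thread):

```
# data=journal   - journals data AND metadata (everything written twice)
# data=ordered   - journals metadata; data flushed before metadata commits
# data=writeback - journals metadata only (fastest, relies on Pg's WAL)
/dev/sdb1  /var/lib/pgsql  ext3  noatime,data=writeback  0 2
```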
On Tue, May 08, 2007 at 04:27:10PM -0400, Daniel Griscom wrote:
3: ... some other solution I haven't thought of.
On a wild guess, could you try setting the CPU costs higher, to make the
planner choose a less CPU-intensive plan?
Other (weird) suggestions would include calling a user-defined
In response to Daniel Griscom [EMAIL PROTECTED]:
I'm building a kiosk with a 3D front end accessing PostGIS/PostgreSQL
via Apache/PHP. The 3D display is supposed to show smooth motion from
location to location, with PostGIS giving dynamically updated
information on the locations.
On Tue, 8 May 2007, Daniel Griscom wrote:
I'm building a kiosk with a 3D front end accessing PostGIS/PostgreSQL via
Apache/PHP. The 3D display is supposed to show smooth motion from location to
location, with PostGIS giving dynamically updated information on the
locations. Everything runs on
1. If you go the route of using nice, you might want to run the 3D
front-end at a higher priority instead of running PG at a lower
priority. That way apache, php and the other parts all run at the same
priority as PG and just the one task that you want to run smoothly is
elevated.
2. You may not
I am trying to follow a message thread. One guy says we should be running
vacuum analyze daily and the other says we should be running vacuum multiple
times a day. I have tried looking for what a vacuum analyze is to help me
understand but no luck.
On Tue, 8 May 2007, Y Sidhu wrote:
I am trying to follow a message thread. One guy says we should be running
vacuum analyze daily and the other says we should be running vacuum multiple
times a day. I have tried looking for what a vacuum analyze is to help me
understand but no luck.
vacuum
Y Sidhu wrote:
I am trying to follow a message thread. One guy says we should be running
vacuum analyze daily and the other says we should be running vacuum multiple
times a day. I have tried looking for what a vacuum analyze is to help me
understand but no luck.
VACUUM ANALYZE is like
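For reference, the two maintenance commands the thread keeps contrasting look like this (table name hypothetical):

```sql
-- Reclaims dead-row space for reuse; cheap enough to run many times a day
-- on a high-churn table:
VACUUM events;

-- Same reclamation, plus a refresh of the planner statistics; the usual
-- nightly variant:
VACUUM ANALYZE events;
```

VACUUM ANALYZE is simply a VACUUM followed by an ANALYZE of the same tables, so "vacuum often, analyze daily" and "vacuum analyze daily" are not mutually exclusive advice.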
On Tue, May 08, 2007 at 05:52:13PM -0400, Alvaro Herrera wrote:
I am trying to follow a message thread. One guy says we should be running
vacuum analyze daily and the other says we should be running vacuum multiple
times a day. I have tried looking for what a vacuum analyze is to help me
Steinar H. Gunderson wrote:
On Tue, May 08, 2007 at 05:52:13PM -0400, Alvaro Herrera wrote:
I am trying to follow a message thread. One guy says we should be running
vacuum analyze daily and the other says we should be running vacuum multiple
times a day. I have tried looking for what a
Steinar H. Gunderson wrote:
Or use a dual-core system. :-)
Am I missing something?? There is just *one* instance of this idea in, what,
four replies?? I find it so obvious, and so obviously the only solution that
has any hope to work, that it makes me think I'm missing something ... Is
Carlos Moreno wrote:
Steinar H. Gunderson wrote:
Or use a dual-core system. :-)
Am I missing something?? There is just *one* instance of this idea in, what,
four replies?? I find it so obvious, and so obviously the only solution that
has any hope to work, that it makes me think I'm
On Tue, May 08, 2007 at 06:32:14PM -0400, Carlos Moreno wrote:
Or use a dual-core system. :-)
Am I missing something?? There is just *one* instance of this idea in,
what, four replies?? I find it so obvious, and so obviously the only
solution that has any hope to work, that it makes me think
Joshua D. Drake wrote:
Am I missing something?? There is just *one* instance of this idea in, what,
four replies?? I find it so obvious, and so obviously the only solution that
has any hope to work, that it makes me think I'm missing something ...
Is it that multiple PostgreSQL processes
Thanks for all the feedback. Unfortunately I didn't specify that this
is running on a WinXP machine (the 3D renderer is an ActiveX plugin),
and I don't even think nice is available. I've tried using the
Windows Task Manager to set every postgres.exe process to a low
priority, but that didn't
On Tue, 8 May 2007, Carlos Moreno wrote:
Daniel Griscom wrote:
Several people have mentioned having multiple processors; my current
machine is a uni-processor machine, but I believe we could spec the actual
runtime machine to have multiple processors/cores.
My estimate is that yes, you
Hi Everybody,
I was trying to see how many inserts per seconds my application could
handle on various machines.
Those are the machines I used to run my app:
1) Pentium M 1.7GHz
2) Pentium 4 2.4GHz
3) DMP Xeon 3GHz
Sure, I was expecting the dual Xeon to outperform the
Orhan Aglagul wrote:
Hi Everybody,
I was trying to see how many inserts per seconds my application could
handle on various machines.
I read that postgres does have issues with MP Xeon (costly context
switching). But I still think that with fsync=on 65 seconds is ridiculous.
CPU is
Joshua D. Drake wrote:
CPU is unlikely your bottleneck.. You failed to mention anything
about your I/O setup. [...]
He also fails to mention whether he is doing the inserts one at a time or
as a batch.
Would this really be important? I mean, would it affect a *comparison*??
As long as he
On Tue, 2007-05-08 at 17:59, Orhan Aglagul wrote:
Hi Everybody,
I was trying to see how many inserts per seconds my application could
handle on various machines.
Here is the data:
Time for 10,000 inserts:

                fsync=on    fsync=off
Pentium M 1.7   ~17 sec     ~6 sec
Forgot to reply to the mailing list. Sorry (new here)
Here are responses to previous questions
-Original Message-
From: Orhan Aglagul
Sent: Tuesday, May 08, 2007 5:30 PM
To: 'Joshua D. Drake'
Subject: RE: [PERFORM]
I am using a prepared statement and inserting in a loop 10,000
-Original Message-
From: Orhan Aglagul
Sent: Tuesday, May 08, 2007 5:37 PM
To: 'Scott Marlowe'
Subject: RE: [PERFORM]
But 10,000 records in 65 sec comes to ~153 records per second. On a dual
3.06 Xeon
What range is acceptable?
-Original Message-
From: Scott Marlowe
No, it is one transaction per insert.
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 08, 2007 5:38 PM
To: Orhan Aglagul
Subject: RE: [PERFORM]
On Tue, 2007-05-08 at 19:36, Orhan Aglagul wrote:
But 10,000 records in 65 sec comes to ~153 records per
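Scott's question matters because with fsync=on every COMMIT waits for a WAL flush to disk, so one transaction per insert caps throughput near the drive's rotation rate; batching pays that cost once. A sketch with a hypothetical table:

```sql
CREATE TABLE t (val integer);   -- hypothetical target table

-- Per-row commits: every INSERT is its own transaction, so with fsync=on
-- each one waits for its own WAL flush:
INSERT INTO t (val) VALUES (1);
INSERT INTO t (val) VALUES (2);

-- Batched: the WAL flush happens once, at COMMIT:
BEGIN;
INSERT INTO t (val) VALUES (3);
INSERT INTO t (val) VALUES (4);
-- ... remaining rows ...
COMMIT;
```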
On Tue, 8 May 2007, Orhan Aglagul wrote:
No, it is one transaction per insert.
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 08, 2007 5:38 PM
To: Orhan Aglagul
Subject: RE: [PERFORM]
On Tue, 2007-05-08 at 19:36, Orhan Aglagul wrote:
But 10,000
On Tue, 8 May 2007, Tom Lane wrote:
What Debian has done is set up an arrangement that lets you run two (or
more) different PG versions in parallel. Since that's amazingly helpful
during a major-PG-version upgrade, most of the other packagers are
scheming how to do something similar.
I
On Tue, 8 May 2007, Luke Lonergan wrote:
From discussions with the developers, the biggest issue is a technical
one: the Linux VFS layer makes the [ZFS] port difficult.
Difficult on two levels. First you'd have to figure out how to make it
work at all; then you'd have to reshape it into a
You can use the workload management feature that we've contributed to
Bizgres. That allows you to control the level of statement concurrency by
establishing queues and associating them with roles.
That would provide the control you are seeking.
- Luke
On 5/8/07 4:24 PM, [EMAIL PROTECTED]
On Tue, 8 May 2007, Orhan Aglagul wrote:
Time for 10,000 inserts:
Pentium M 1.7
~17 sec fsync=on
~6 sec fsync=off
This is 588 inserts/second with fsync on. It's impossible to achieve that
without write caching at either the controller or hard drive. My bet
would be that your hard drive in