Is there any MS-SQL Server like 'Profiler' available for PostgreSQL? A profiler is a tool that monitors the database server and outputs a detailed trace of all the transactions/queries that are executed on a database during a specified period of time. Kindly let me know if any of you knows of such a
Dan Harris [EMAIL PROTECTED] writes:
I keep the entire database vacuumed regularly.
How often is regularly? We get frequent posts from people who think daily or
every 4 hours is often enough. If the table is very busy you may need vacuums
as often as every 15 minutes.
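That kind of schedule is easy to express in cron; a minimal sketch, where the database and table names are placeholders rather than anything from the thread:

```shell
# Illustrative crontab entries: plain-vacuum the hot table every 15
# minutes, and VACUUM ANALYZE the whole database nightly.
# "mydb" and "hot_table" are placeholder names.
*/15 * * * *  vacuumdb --table=hot_table mydb
0 3 * * *     vacuumdb --analyze mydb
```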
Also, if you've done
Try turning on query logging and using the 'pqa' utility on pgfoundry.org.
Chris
Agha Asif Raza wrote:
Is there any MS-SQL Server like 'Profiler' available for PostgreSQL? A
profiler is a tool that monitors the database server and outputs a
detailed trace of all the transactions/queries that
[reposted due to delivery error -jwb]
I just took delivery of a new system, and used the opportunity to
benchmark postgresql 8.0 performance on various filesystems. The system
in question runs Linux 2.6.12, has one CPU and 1GB of system memory, and
5 7200RPM SATA disks attached to an Areca
Agha Asif Raza wrote:
Is there any MS-SQL Server like 'Profiler' available for PostgreSQL? A
profiler is a tool that monitors the database server and outputs a detailed
trace of all the transactions/queries that are executed on a database during
a specified period of time. Kindly let me
David Mitchell wrote:
What is the load average on this machine? Do you do many updates? If
you do a lot of updates, perhaps you haven't vacuumed recently. We
were seeing similar symptoms when we started load testing our stuff
and it turned out we were vacuuming too infrequently.
The load
I was wondering - have you had a chance to run the same benchmarks on
ReiserFS (ideally both 3 and 4, with notail)?
I'd be quite interested to see how it performs in this situation since
it's my fs of choice for most things.
Thanks,
Dmitri
-Original Message-
From: [EMAIL PROTECTED]
I'm trying to improve the speed of this query:
explain select recordtext from eventactivity inner join ( select
incidentid from k_r where id = 94 ) a using ( incidentid ) inner join
( select incidentid from k_b where id = 107 ) b using ( incidentid );
On 7/14/05, Jeffrey W. Baker [EMAIL PROTECTED] wrote:
[reposted due to delivery error -jwb]
I just took delivery of a new system, and used the opportunity to
benchmark postgresql 8.0 performance on various filesystems. The system
in question runs Linux 2.6.12, has one CPU and 1GB of system
Quoting Jeffrey W. Baker [EMAIL PROTECTED]:
Here's the result, in transactions per second.
             ext3   jfs   xfs
------------------------------
 10 Clients   55     81    68
100 Clients   61    100    64
Was fsync true? And have you tried ext2?
Did you separate the data from the transaction log? I've noticed less than
optimal performance on xfs if the transaction log is on the xfs data
partition, and it's silly to put the xlog on a journaled filesystem
anyway. Try putting xlog on an ext2 for all the tests.
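The usual way to do that relocation is a symlink. Here is a sketch against throwaway temporary directories (the WAL segment name is fake; on a real cluster you must stop the postmaster first and use the actual data directory and ext2 mount point):

```shell
# Sketch of relocating pg_xlog onto another partition via a symlink.
# Uses mktemp directories so it is safe to run as-is.
PGDATA=$(mktemp -d)                 # stand-in for the real data directory
EXT2MOUNT=$(mktemp -d)              # stand-in for the ext2 partition
mkdir -p "$PGDATA/pg_xlog"
touch "$PGDATA/pg_xlog/000000010000000000000001"   # fake WAL segment
mv "$PGDATA/pg_xlog" "$EXT2MOUNT/pg_xlog"          # move WAL off the data disk
ln -s "$EXT2MOUNT/pg_xlog" "$PGDATA/pg_xlog"       # point the cluster at it
```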
Mike Stone
On Thu, 2005-07-14 at 10:06 +1000, Marc McIntyre wrote:
I'm having a problem with a query that performs a sequential scan on a
table when it should be performing an index scan. The interesting thing
is, when we dumped the database on another server, it performed an index
scan on that
Dawid Kuroczko wrote:
|
| If you still have a chance, could you do tests with other journaling
| options for ext3 (journal=writeback, journal=data)? And could you
| give figures about performace of other IO elevators? I mean, you
| wrote that
On Thu, 2005-07-14 at 10:03 +0200, Dawid Kuroczko wrote:
On 7/14/05, Jeffrey W. Baker [EMAIL PROTECTED] wrote:
[reposted due to delivery error -jwb]
I just took delivery of a new system, and used the opportunity to
benchmark postgresql 8.0 performance on various filesystems. The system
On Thu, Jul 14, 2005 at 02:15:52PM +0200, Grega Bremec wrote:
I'm curious as to what this means - did they have problems integrating
it into their toolchain or are there actual problems going on in jfs
currently?
I've found jfs to be the least stable linux filesystem and won't allow
it
Dan Harris wrote:
I'm trying to improve the speed of this query:
explain select recordtext from eventactivity inner join ( select
incidentid from k_r where id = 94 ) a using ( incidentid ) inner join (
select incidentid from k_b where id = 107 ) b using ( incidentid );
You might try giving
On Thu, Jul 14, 2005 at 12:28:05AM -0600, Dan Harris wrote:
Ok, so I remounted this drive as ext2 shortly before sending my first
email today. It wasn't enough time for me to notice the ABSOLUTELY
HUGE difference in performance change. Ext3 must really be crappy
for postgres, or at
On Jul 14, 2005, at 9:47 AM, Alvaro Herrera wrote:
On Thu, Jul 14, 2005 at 12:28:05AM -0600, Dan Harris wrote:
. Ext3 must really be crappy
for postgres, or at least is on this box.
Were you using the default journal settings for ext3?
Yes, I was. Next time I get a chance to reboot
On Thu, Jul 14, 2005 at 03:08:30PM +1000, Alison Winters wrote:
Hi,
Our application requires a number of processes to select and update rows
from a very small (10 rows) Postgres table on a regular and frequent
basis. These processes often run for weeks at a time, but over the
space of a
On Thu, 2005-07-14 at 15:08 +1000, Alison Winters wrote:
Hi,
Our application requires a number of processes to select and update rows
from a very small (10 rows) Postgres table on a regular and frequent
basis. These processes often run for weeks at a time, but over the
Are these long
On Wed, Jul 13, 2005 at 11:33:41PM -0700, Jeffrey W. Baker wrote:
[reposted due to delivery error -jwb]
I just took delivery of a new system, and used the opportunity to
benchmark postgresql 8.0 performance on various filesystems. The system
in question runs Linux 2.6.12, has one CPU and
On Wed, Jul 13, 2005 at 12:08:54PM +0200, Nicolas Beaume wrote:
Hello
I have a large database with 4 large tables (each containing at least
200 000 rows, perhaps even 1 or 2 million) and I ask myself if it's
better to split them into small tables (e.g. tables of 2000 rows) to
speed the
On Thu, 2005-07-14 at 14:29 +0800, Christopher Kings-Lynne wrote:
Try turning on query logging and using the 'pqa' utility on pgfoundry.org.
Have you got that to work for 8 ?
pqa 1.5 doesn't even work with its own test file.
Best Regards, Simon Riggs
Dan Harris [EMAIL PROTECTED] writes:
Well, once every day, but there aren't a ton of inserts or updates going on a
daily basis. Maybe 1,000 total inserts?
It's actually deletes and updates that matter, not inserts.
I have a feeling I'm going to need to do a cluster soon. I have done
On Thu, Jul 14, 2005 at 04:29:58PM -0600, Dan Harris wrote:
Ok, I tried this one. My ssh keeps getting cut off by a router
somewhere between me and the server due to inactivity timeouts, so
all I know is that both the select and explain analyze are taking
over an hour to run.
Try running
Dan Harris wrote:
On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:
...
Did you try doing this to see how good the planners selectivity
estimates are?
Well, postgres is estimating around 500 rows each, is that way off? Try
just doing:
EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id =
Dan Harris [EMAIL PROTECTED] writes:
Here's the explain select for that one, since
that's the best I can get.
explain select recordtext from eventactivity,k_r,k_b where
eventactivity.incidentid = k_r.incidentid and
eventactivity.incidentid = k_b.incidentid and k_r.id = 94 and k_b.id
Dan Harris wrote:
On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:
You might try giving it a little bit more freedom with:
EXPLAIN ANALYZE
SELECT recordtext FROM eventactivity, k_r, k_b
WHERE eventactivity.incidentid = k_r.incidentid
AND eventactivity.incidentid = k_b.incidentid
John A Meinel [EMAIL PROTECTED] writes:
What I don't understand is that the planner is actually estimating that
joining against the new table is going to *increase* the number of
returned rows.
It evidently thinks that incidentid in the k_r table is pretty
nonunique. We really need to look at
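One way to look at that, sketched against the table and column names in the thread (this needs a live server, so treat it as illustrative rather than tested):

```sql
-- Inspect the planner's notion of incidentid's uniqueness in k_r,
-- then raise the per-column statistics target and re-analyze
-- (syntax valid in PostgreSQL 8.0).
SELECT n_distinct, most_common_vals
  FROM pg_stats
 WHERE tablename = 'k_r' AND attname = 'incidentid';

ALTER TABLE k_r ALTER COLUMN incidentid SET STATISTICS 100;
ANALYZE k_r;
```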
Tom Lane wrote:
John A Meinel [EMAIL PROTECTED] writes:
What I don't understand is that the planner is actually estimating that
joining against the new table is going to *increase* the number of
returned rows.
It evidently thinks that incidentid in the k_r table is pretty
nonunique. We
Hi,
Our application requires a number of processes to select and update rows
from a very small (10 rows) Postgres table on a regular and frequent
basis. These processes often run for weeks at a time, but over the
space of a few days we find that updates start getting painfully slow.
We
[EMAIL PROTECTED] (Alison Winters) writes:
Our application requires a number of processes to select and update rows
from a very small (10 rows) Postgres table on a regular and frequent
basis. These processes often run for weeks at a time, but over the
space of a few days we find that updates
Alison Winters wrote:
Hi,
Our application requires a number of processes to select and update rows
from a very small (10 rows) Postgres table on a regular and frequent
basis. These processes often run for weeks at a time, but over the
space of a few days we find that updates start getting
On Jul 14, 2005, at 5:12 PM, John A Meinel wrote:
Dan Harris wrote:
Well, postgres is estimating around 500 rows each, is that way
off? Try
just doing:
EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;
EXPLAIN ANALYZE SELECT incidentid FROM k_r WHERE id = 94;
Once again, do
On Fri, Jul 15, 2005 at 09:42:12AM +1000, Alison Winters wrote:
Our application requires a number of processes to select and update rows
from a very small (10 rows) Postgres table on a regular and frequent
basis. These processes often run for weeks at a time, but over the
space of a
On Thu, Jul 14, 2005 at 08:28:24PM -0400, Alvaro Herrera wrote:
On Fri, Jul 15, 2005 at 09:42:12AM +1000, Alison Winters wrote:
INFO: Pages 1221: Changed 3, reaped 256, Empty 0, New 0; Tup 108137: Vac
4176, Keep/VTL 108133/108133, UnUsed 19, MinLen 84, MaxLen 84; Re-using:
Free/Avail.
Hi all,
No wonder, considering that your less-than-10-rows table contains
something upwards of 100,000 tuples:
INFO: --Relation public.plc_fldio--
INFO: Pages 1221: Changed 3, reaped 256, Empty 0, New 0; Tup 108137: Vac
4176, Keep/VTL 108133/108133, UnUsed 19, MinLen 84, MaxLen 84;
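Given that output, the usual fix is a one-off VACUUM FULL to reclaim the dead space, then much more frequent plain vacuums to keep it small (relation name taken from the VACUUM output above; needs a live server, so this is a sketch):

```sql
-- Reclaim the ~1200 pages of dead tuples in the bloated 10-row table.
VACUUM FULL VERBOSE plc_fldio;
-- Thereafter, a plain (non-blocking) vacuum every few minutes suffices.
VACUUM plc_fldio;
```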
On Jul 14, 2005, at 7:15 PM, John A Meinel wrote:
Is the distribution of your rows uneven? Meaning do you have more rows
with a later id than an earlier one?
There are definitely some id's that will have many times more than
the others. If I group and count them, the top 10 are fairly
Dan Harris wrote:
On Jul 14, 2005, at 7:15 PM, John A Meinel wrote:
Is the distribution of your rows uneven? Meaning do you have more rows
with a later id than an earlier one?
There are definitely some id's that will have many times more than the
others. If I group and count them, the
The question appeared because of strange issues with functional indexes.
It seems they are recalculated even where it is obviously not needed.
\d+ test:
i | integer |  |
t | text    |  |
x | text    |
Nicolas,
These sizes would not be considered large. I would leave them
as single tables.
Ken
On Wed, Jul 13, 2005 at 12:08:54PM +0200, Nicolas Beaume wrote:
Hello
I have a large database with 4 large tables (each containing at least
200 000 rows, perhaps even 1 or 2 million) and I ask
Before I ask, I don't want to start a war.
Can someone here give me an honest opinion of how PostgreSQL (PG) is better
than Firebird on Windows?
I've just recently started reading the Firebird NG and a poster over there
has brought up some serious issues with Firebird, but they seem to not
I just took delivery of a new system, and used the opportunity to
benchmark postgresql 8.0 performance on various filesystems. The system
in question runs Linux 2.6.12, has one CPU and 1GB of system memory, and
5 7200RPM SATA disks attached to an Areca hardware RAID controller
having 128MB of
VACUUM FULL ANALYZE is performed right before tests.
UPDATE test SET t = xpath_string(x, 'movie/rating'::text); is performed also
to make selects equal.
Xpath_string is IMMUTABLE.
Table public.test
Column | Type | Modifiers | Description
It seems functional indexes are recalculated even where it is obviously not
needed.
\d+ test:
i | integer |  |
t | text    |  |
x | text    |  |
i_i btree (i)
x_iii btree
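If the goal is to have the xpath_string() result precomputed, an expression index is the usual approach; a sketch using the thread's table and function (the index name and the 'PG-13' value are made up for illustration, and this needs a live server):

```sql
-- Hypothetical expression index on the IMMUTABLE function's result.
-- A query whose WHERE clause repeats the exact same expression can
-- then use the index instead of re-evaluating xpath_string() per row.
CREATE INDEX test_rating_idx ON test (xpath_string(x, 'movie/rating'::text));
EXPLAIN SELECT i FROM test
 WHERE xpath_string(x, 'movie/rating'::text) = 'PG-13';
```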