Hi,
my database seems to be taking too long for a SELECT count(*).
I think there are a lot of dead rows. When I do a VACUUM FULL it improves,
but the performance drops again in a short while.
Can anyone please tell me if anything is wrong with my FSM settings?
current fsm=55099264 (not sure how i
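For readers following this thread: the usual diagnostic is to run VACUUM VERBOSE and compare the reported page counts against the free-space-map settings. A minimal sketch (the table name "profiles" is taken from later in this thread; the config values shown are illustrative only, not recommendations):

```sql
-- VACUUM VERBOSE reports, per table, how many pages contain dead tuples.
-- If the total across all tables exceeds max_fsm_pages, freed space is
-- forgotten and the table keeps bloating between vacuums.
VACUUM VERBOSE ANALYZE profiles;

-- postgresql.conf (7.3-era parameters; values are illustrative):
-- max_fsm_pages     = 200000   -- must cover all pages with free space, cluster-wide
-- max_fsm_relations = 1000     -- must cover the number of tables and indexes
```

A plain VACUUM run often enough to keep dead-tuple counts inside the FSM limits avoids the need for recurring VACUUM FULL.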
On Friday 14 November 2003 12:51, Rajesh Kumar Mallah wrote:
Hi,
my database seems to be taking too long for a SELECT count(*).
I think there are a lot of dead rows. When I do a VACUUM FULL it improves,
but the performance drops again in a short while.
Can anyone please tell me if anything is wrong
Hi Everyone,
I am using PostgreSQL 7.3.2 and have used earlier versions (7.1.x onwards),
and with all of them I noticed the same problem with INSERTs when there is a
large data set. Just so you guys can compare, the time it takes to insert
one row into a table when there are only a few rows present and
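The message is truncated before the cause is identified, but a common first check for slow INSERTs on a growing table is per-row commit overhead. A hedged sketch (the table "mytable" and its columns are hypothetical, not from the thread):

```sql
-- Wrapping many INSERTs in one transaction means one commit/fsync for
-- the whole batch instead of one per row.
BEGIN;
INSERT INTO mytable (id, val) VALUES (1, 'a');
INSERT INTO mytable (id, val) VALUES (2, 'b');
COMMIT;

-- For bulk loads, COPY bypasses per-statement parsing and planning and
-- is far faster than row-at-a-time INSERT (path must be server-side):
COPY mytable (id, val) FROM '/path/to/data.csv' WITH DELIMITER ',';
```

Foreign-key checks and index maintenance are the other usual suspects when insert time grows with table size.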
Heya,
FYI just spotted this and thought I would pass it on, for all those who are
looking at new boxes.
http://www.theinquirer.net/?article=12665
http://www.promise.com/product/product_detail_eng.asp?productId=112&familyId=2
Looks like a four-channel hot-swap IDE (SATA) hardware RAID controller
On Fri, 14 Nov 2003 20:38:33 +1100 (EST)
Slavisa Garic [EMAIL PROTECTED] wrote:
Any help would be greatly appreciated, even just pointing me in the right
direction where to ask this question. By the way, I designed the
database this way as my application uses PGSQL a lot during
execution, so
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Rajesh Kumar Mallah)
wrote:
INFO: profiles: found 0 removable, 369195 nonremovable row versions in 43423 pages
DETAIL: 246130 dead row versions cannot be removed yet.
Nonremovable row versions range from 136 to 2036 bytes long.
Neil Conway [EMAIL PROTECTED] writes:
Interesting -- I wonder if it would be possible for the optimizer to
detect this and avoid the redundant inner sort ... (/me muses to
himself)
I think the ability to generate two sort steps is a feature, not a bug.
This has been often requested in
Hannu Krosing [EMAIL PROTECTED] writes:
Christopher Browne wrote on Fri, 14.11.2003 at 16:13:
I have seen this happen somewhat-invisibly when a JDBC connection
manager opens transactions for each connection, and then no processing
happens to use those connections for a long time. The open
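The idle-open-transaction situation described above can be spotted from the statistics views. A sketch (in 7.3, pg_stat_activity shows query text only when stats_command_string is enabled, and the exact column set varies by version):

```sql
-- Backends sitting in an open transaction without running anything show
-- "<IDLE> in transaction"; these pin dead rows so VACUUM cannot reclaim them.
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';
```

If such backends belong to a connection pool, the fix is in the pool's configuration: commit or roll back before returning a connection to the pool.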
Hannu Krosing wrote:
Christopher Browne wrote on Fri, 14.11.2003 at 16:13:
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Rajesh Kumar Mallah) wrote:
INFO: profiles: found 0 removable, 369195 nonremovable row versions in 43423 pages
DETAIL: 246130 dead row versions cannot
After a long battle with technology, [EMAIL PROTECTED] (Hannu Krosing), an earthling,
wrote:
Christopher Browne wrote on Fri, 14.11.2003 at 16:13:
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Rajesh Kumar Mallah)
wrote:
INFO: profiles: found 0 removable, 369195
Will LaShell [EMAIL PROTECTED] writes:
Hannu Krosing wrote:
Can't the backend be made to delay the real start of a transaction until
the first query gets executed?
That seems counterintuitive, doesn't it? Why write more code in the
server when the client is the thing that has the problem?
Hi-
I'm seeing estimates for n_distinct that are way off for a large table
(8,700,000 rows). They get better by setting the stats target higher, but
are still off by a factor of 10 with the stats set to 1000. I've noticed and
reported a similar pattern before on another table. Because this
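ANALYZE estimates n_distinct from a sample of the table, and sample-based distinct counting is biased low on skewed columns: rare values simply never appear in the sample. A minimal Python simulation of that effect (synthetic data and parameter values are illustrative, not PostgreSQL's actual estimator):

```python
import random

def sample_distinct(n_rows=1_000_000, n_distinct=100_000,
                    sample_size=30_000, seed=42):
    """Build a skewed synthetic column, sample it, and compare the
    distinct count seen in the sample to the true distinct count."""
    rng = random.Random(seed)
    # Cubing a uniform draw skews values toward 0, mimicking a column
    # where a few values dominate (common in real data).
    column = [int(n_distinct * rng.random() ** 3) for _ in range(n_rows)]
    true_distinct = len(set(column))
    seen_distinct = len(set(rng.sample(column, sample_size)))
    return seen_distinct, true_distinct

seen, actual = sample_distinct()
# "seen" is a fraction of "actual": rare values are absent from the
# sample, so a naive sample-based n_distinct estimate skews low.
```

Raising the statistics target enlarges the sample, which is why the estimates improve but do not fully converge.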
When I execute a transaction using embedded SQL statements in a C program,
I get the error "Error in transaction processing". I could see from the
documentation that it means Postgres signalled to us that we cannot start,
commit, or rollback the transaction.
I don't find any mistakes in the
The only thing you're adding to the query is a second SORT step, so it
shouldn't require any more time/memory than the query's first SORT
did.
Interesting -- I wonder if it would be possible for the optimizer to
detect this and avoid the redundant inner sort ... (/me muses to
himself)
That's
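The two-sort plan under discussion can be reproduced with a query shaped like the following (table and column names are hypothetical, not from the thread): an inner ORDER BY feeds DISTINCT ON, and an outer ORDER BY re-sorts the result, so EXPLAIN shows two Sort nodes.

```sql
EXPLAIN
SELECT * FROM (
    SELECT DISTINCT ON (customer_id) customer_id, order_date
    FROM orders
    ORDER BY customer_id, order_date DESC  -- inner sort: latest row per customer
) latest
ORDER BY order_date DESC;                  -- outer sort: final presentation order
```

As noted above, the second sort is what makes this pattern expressible at all, which is why it is a feature rather than a redundancy to optimize away.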