On Thu, 2006-08-17 at 14:33 -0400, Tom Lane wrote:
There's a more interesting issue, which I'm afraid we do not have time
to fix for PG 8.2. The crux of the matter is that given
SELECT ...
FROM SHEEP_FLOCK f1 JOIN
(SELECT f.regn_no, MAX(f.transfer_date) as last_xfer_date
FROM
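The truncated query above is the classic "latest row per group" pattern (a MAX() subquery joined back to the base table). For reference, a hedged sketch of how the inner part is often written more directly in PostgreSQL, using only the names visible in the fragment (everything else is assumed):

```sql
-- Hypothetical sketch: DISTINCT ON keeps one row per regn_no, and the
-- ORDER BY makes that row the one with the latest transfer_date.
-- This replaces the MAX()-subquery-plus-self-join pattern.
SELECT DISTINCT ON (f.regn_no)
       f.regn_no, f.transfer_date AS last_xfer_date
FROM SHEEP_FLOCK f
ORDER BY f.regn_no, f.transfer_date DESC;
```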
Hi, to all!
Recently I tried increasing the shared buffers memory values on one
IBM xSeries 255 (Quad Xeon 2.8 GHz, 8 GB RAM, 2 SCSI 36 GB disks (RAID 1), 1
storage array).
I tried changing these shared memory values to use 25% of RAM (2048
MB) and effective_cache_size to 50% (4096 MB) of memory. All
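For reference, the settings described would look like this postgresql.conf fragment (a restatement of the poster's values, not a recommendation; memory units such as `MB` are accepted from PostgreSQL 8.2 on, while older releases take these settings as a count of 8 kB pages):

```
# Poster's settings on an 8 GB machine -- restated, not a recommendation.
shared_buffers = 2048MB         # ~25% of RAM (262144 8 kB pages on older releases)
effective_cache_size = 4096MB   # ~50% of RAM; a planner hint, allocates nothing
```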
On Thursday 28 September 2006 17:18, Ben wrote:
explain select
distinct public.album.id
from
public.album,public.albumjoin,public.track,umdb.node
where
node.dir=2811
and albumjoin.album = public.album.id
and public.albumjoin.track = public.track.id
and
On Tue, 2006-08-08 at 16:42 -0400, Tom Lane wrote:
Patrice Beliveau [EMAIL PROTECTED] writes:
SELECT * FROM TABLE
WHERE TABLE.COLUMN1=something
AND TABLE.COLUMN2=somethingelse
AND function(TABLE.COLUMN3,TABLE.COLUMN4) 0;
I found that the function is evaluated for every row even if the row
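On releases before per-function cost estimates existed, a common workaround for this is to force the cheap filters to be applied before the expensive function, using a subquery with `OFFSET 0` as an optimization fence. A hedged sketch, reusing the placeholders from the post (the comparison operator was garbled in the original, so `> 0` is an assumption):

```sql
-- Hypothetical sketch: OFFSET 0 acts as an optimization fence, so the
-- planner cannot flatten the subquery; the expensive function is then
-- evaluated only on rows that survive the cheap filters.
SELECT *
FROM (
    SELECT * FROM TABLE
    WHERE TABLE.COLUMN1 = something
      AND TABLE.COLUMN2 = somethingelse
    OFFSET 0
) AS filtered
WHERE function(filtered.COLUMN3, filtered.COLUMN4) > 0;  -- operator assumed
```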
On Fri, 2006-09-29 at 11:55 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
PreallocXlogFiles() adds only a *single* xlog file, sometimes.
Hm, you are right. I wonder why it's so unaggressive ... perhaps
because under normal circumstances we soon settle into a steady
state
Marcelo Costa wrote:
Hi, to all!
Recently I tried increasing the shared buffers memory values on one
IBM xSeries 255 (Quad Xeon 2.8 GHz, 8 GB RAM, 2 SCSI 36 GB disks (RAID 1), 1
storage array).
You haven't specified your OS so I am going to assume Linux.
When I start up the cluster, I see many messages about
Yes, my system is Debian Sarge 3.0.
Thanks,
Marcelo

2006/10/2, Joshua D. Drake [EMAIL PROTECTED]:
Marcelo Costa wrote:
Hi, to all!
Recently I tried increasing the shared buffers memory values on one IBM xSeries 255 (Quad Xeon 2.8 GHz, 8 GB RAM, 2 SCSI 36 GB disks (RAID 1), 1 storage array).
You haven't specified
Please unsubscribe me! Thank you!
Also, it would be better to have a message footer saying how to unsubscribe.
On Sat, Sep 30, 2006 at 02:55:54PM +0530, Nimesh Satam wrote:
I am trying to vacuum one of the tables using the following command:
VACUUM FULL ANALYZE VERBOSE table_name;
Are you sure you want to do a vacuum full? Normally, that shouldn't be
required.
--
Jim Nasby
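As the reply notes, plain VACUUM is normally all that's needed; a minimal sketch using the table name from the post (VACUUM FULL takes an exclusive lock and rewrites the table, so it is reserved for reclaiming large amounts of space):

```sql
-- Plain VACUUM marks dead rows reusable without VACUUM FULL's exclusive
-- lock and table rewrite; ANALYZE refreshes planner statistics.
VACUUM VERBOSE ANALYZE table_name;
```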
On Sun, Oct 01, 2006 at 11:01:19PM -0400, Tom Lane wrote:
Steve Martin [EMAIL PROTECTED] writes:
I am trying to determine if there is a way to improve the performance
when selecting data from the information_schema.columns view.
In my experience, there isn't any single one of the
Hi, Uwcssa,
uwcssa wrote:
Please unsubscribe me! Thank you!
Sorry, but we (the list members) are unable to do that; we have no
administrative power over the list. :-(
Also, it would be better to have a message footer saying how to unsubscribe.
List unsubscribe information is contained in the
Hi
Thanks for your replies.
Regarding newsysviews, what is its current state? I have had a quick
look at the pgFoundry site and the last updates were 9 months ago.
The most efficient way in the short term I can find to improve
performance for our application is to create a table from
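The short-term idea described (caching the slow view in a plain table) might look like this hedged sketch; the table name, column list, and index are assumptions, and the cache must be rebuilt whenever the schema changes:

```sql
-- Hypothetical sketch: snapshot the view into a real table that can be
-- indexed, trading freshness for lookup speed.
CREATE TABLE columns_cache AS
    SELECT table_schema, table_name, column_name, data_type
    FROM information_schema.columns;

CREATE INDEX columns_cache_idx
    ON columns_cache (table_name, column_name);
```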
On Oct 2, 2006, at 7:31 PM, Steve Martin wrote:
Regarding newsysviews, what is its current state? I have had a
quick look at the pgFoundry site and the last updates were 9 months
ago.
Well, the system catalogs don't change terribly often, so it's not
like a lot needs to be done. We'd
I have loaded three of the four cores by running three different versions of
the import program, each importing a different segment of the table. The
server jumps to 75% usage, with three postgresql processes eating up 25%
each; the actual client itself takes up just a few ticks.
Did you think about putting the whole data into PostgreSQL using COPY in
a nearly unprocessed manner, index it properly, and then use SQL and
stored functions to transform the data inside the database to the
desired result?
This is actually what we are doing. The slowness is on the row-by-row
My experience with that type of load process is that doing this
row-by-row is a very expensive approach and your results bear that out.
I expected this, and had warned the client before the project started that
this is exactly where SQL underperforms.
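The bulk-load approach raised above (COPY the raw data in nearly unprocessed, index it, then transform with set-based SQL) can be sketched as follows; the table layout, file path, and column names are assumptions, not from the thread:

```sql
-- Hypothetical sketch of COPY-then-transform: one bulk load, then one
-- set-based pass instead of row-by-row client logic.
CREATE TABLE staging_raw (key_col integer, amount numeric);

COPY staging_raw FROM '/tmp/import.dat' WITH DELIMITER '|';

CREATE INDEX staging_raw_key_idx ON staging_raw (key_col);

INSERT INTO final_table
SELECT key_col, sum(amount)
FROM staging_raw
GROUP BY key_col;
```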
It is often better to write each step as
1. fork your import somehow to get all 4 cores running
This is already happening, albeit with only 3. No improvement - it appears we
have taken the same problem and divided it by 3: same projected completion
time. This is really curious, to say the least.
2. write the code that actually does the
Here is an EXPLAIN ANALYZE for the query that performs slowly; I hope this
helps. Unfortunately I can't reproduce the version of the query that ran
quickly, and therefore can't provide an EXPLAIN ANALYZE for it.
Aggregate (cost=88256.32..88256.32 rows=1 width=0) (actual
Tim Truman [EMAIL PROTECTED] writes:
Here is an explain analyze for the query that performs slowly,
This shows that the planner is exactly correct in thinking that all
the runtime is going into the seqscan on transaction:
Aggregate (cost=88256.32..88256.32 rows=1 width=0) (actual
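When all the runtime sits in a sequential scan like this, the usual next step is an index matching the query's filter. A hedged sketch; the column name is purely illustrative, since the slow query's WHERE clause isn't visible in the fragment:

```sql
-- Hypothetical: give the planner an index scan alternative to the seqscan
-- on "transaction". The real column(s) must come from the query's WHERE.
CREATE INDEX transaction_filter_idx ON transaction (some_column);

ANALYZE transaction;  -- refresh planner statistics after building the index
```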