I'm doing the writes individually. Is there a better way? Combining
them all into a transaction or something?
Use COPY of course :)
Or at worst, bundle 1000 inserts at a time in a transaction...
And if you seriously do not care about your data at all, set fsync = off
in your postgresql.conf for a
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> On Thu, May 19, 2005 at 05:21:07PM -0500, Steve Bergman wrote:
>> I'm doing the writes individually. Is there a better way? Combining
>> them all into a transaction or something?
> Batching them all in one or a few transactions will speed it
On Thu, May 19, 2005 at 05:21:07PM -0500, Steve Bergman wrote:
> I'm doing the writes individually. Is there a better way? Combining
> them all into a transaction or something?
Batching them all in one or a few transactions will speed it up a _lot_.
Using COPY would help a bit more on top of th
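The advice above (batch, or better, COPY) can be sketched quickly. The example below uses Python's built-in sqlite3 purely to illustrate the transaction-batching pattern, since it needs no running server; against PostgreSQL the same shape applies with psycopg2, and COPY (e.g. `cursor.copy_expert`) avoids per-row overhead entirely.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

rows = [(i, "row-%d" % i) for i in range(10000)]

# Slow pattern: one commit per INSERT (what the original poster was doing).
# for r in rows:
#     conn.execute("INSERT INTO t VALUES (?, ?)", r)
#     conn.commit()

# Fast pattern: all inserts in a single transaction, committed once.
with conn:  # opens a transaction; commits on clean exit
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)

count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)
```

The single-transaction form pays the durability cost (fsync of the commit record) once instead of once per row, which is where most of the speedup comes from.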
Hi,
I am using postgresql in a small (almost trivial) application in which I
pull some data out of a Cobol C/ISAM file and write it into a pgsql
table. My users can then use the data however they want by interfacing
to the data from OpenOffice.org.
The amount of data written is about 60MB and tak
Hello,
It always depends on the dataset but you should try an explain analyze
on each query. It will tell you which one is more efficient for your
particular data.
Sincerely,
Joshua D. Drake
Here's the join:
# explain select child_pid from ssv_product_children, nv_products where
nv_products.id
Looking for some general advice on correlated subqueries vs. joins.
Which of these plans is likely to perform better. One table is a master
record table for entities and their IDs (nv_products), the other
represents a transitive closure of parent/child relationships (for a
tree) of ID's in the
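For the join-vs-correlated-subquery question, EXPLAIN ANALYZE on each form (as suggested above) is the real answer for any particular dataset; the sketch below just shows the two query shapes returning identical rows. It uses sqlite3 as a server-free stand-in, and the schema is guessed from the thread (ssv_product_children's `parent_id` column is an assumed name; only `child_pid` appears in the original query).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nv_products (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE ssv_product_children (parent_id INTEGER, child_pid INTEGER);
    INSERT INTO nv_products VALUES (1, 'root'), (2, 'leaf-a'), (3, 'leaf-b');
    INSERT INTO ssv_product_children VALUES (1, 2), (1, 3);
""")

# Join form.
join_rows = conn.execute("""
    SELECT c.child_pid
    FROM ssv_product_children c
    JOIN nv_products p ON p.id = c.parent_id
    WHERE p.name = 'root'
    ORDER BY c.child_pid
""").fetchall()

# Correlated-subquery form of the same question.
sub_rows = conn.execute("""
    SELECT c.child_pid
    FROM ssv_product_children c
    WHERE EXISTS (SELECT 1 FROM nv_products p
                  WHERE p.id = c.parent_id AND p.name = 'root')
    ORDER BY c.child_pid
""").fetchall()

print(join_rows == sub_rows, [r[0] for r in join_rows])
```

Since both forms express the same result, the planner's choice (visible in EXPLAIN ANALYZE output) is what decides which one is cheaper for a given data distribution.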
Yes, I am using it with another DB/application. A few more days and I'll have a
free hand on this box as well.
Thanks,
Anjan
-----Original Message-----
From: Josh Berkus [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 19, 2005 3:58 PM
To: Anjan Dave
Cc: Donald Courtney; Tom Lane; pgsql-performance@postgr
Josh Berkus wrote:
>Anjan,
>
>
>
>>As far as disk I/O is concerned for flushing the buffers out, I am not
>>ruling out the combination of Dell PERC4 RAID card, and the RH AS 3.0
>>Update3 being a problem.
>>
>>
>
>You know that Update4 is out, yes?
>Update3 is currently throttling your I/O
Anjan,
> As far as disk I/O is concerned for flushing the buffers out, I am not
> ruling out the combination of Dell PERC4 RAID card, and the RH AS 3.0
> Update3 being a problem.
You know that Update4 is out, yes?
Update3 is currently throttling your I/O by about 50%.
--
--Josh
Josh Berkus
Ag
On May 19, 2005, at 2:12 PM, Anjan Dave wrote:
As far as disk I/O is concerned for flushing the buffers out, I am
not ruling out the combination of Dell PERC4 RAID card
That'd be my first guess as to I/O speed issues. I have some dell
hardware that by all means should be totally blowing out m
Anjan Dave wrote:
What platform is this?
It's a DELL RH 4 with the xlog on a separate externally mounted file system.
The data directory is on an externally mounted file system as well.
We had a similar issue (PG 7.4.7). Raising the number of checkpoint segments to 125,
separating the WAL to a different LU
What platform is this?
We had a similar issue (PG 7.4.7). Raising the number of checkpoint segments to 125,
separating the WAL to a different LUN helped, but it's still not completely
gone.
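The settings mentioned above, as a postgresql.conf sketch (7.4-era parameter names; 125 is the value from this thread, not a general recommendation):

```
# postgresql.conf -- PG 7.4-era names
checkpoint_segments = 125   # raised from the default of 3 in this thread
# WAL on a separate LUN: in 7.4 this was typically done by moving
# $PGDATA/pg_xlog to the other device and symlinking it back.
```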
As far as disk I/O is concerned for flushing the buffers out, I am not ruling
out the combination of Dell PE
Tom,
> for rec in explain analyze ... loop
> insert into table values(rec."QUERY PLAN");
> end loop;
I need to go further than that and parse the results as well. And preserve
relationships and nesting levels.
Hmmm ... what's the indenting formula for nesting leve
Josh Berkus writes:
> I think a first step would be, in fact, to develop a tool that allows us to
> put EXPLAIN ANALYZE results in a database table. Without that, there is no
> possibility of statistical-scale analysis.
AFAIK you can do that today using, eg, plpgsql:
for rec in explai
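Tom's loop, filled out as a complete (but untested) plpgsql sketch; `plan_lines` and `some_table` are hypothetical names, and the EXPLAIN has to go through EXECUTE so the FOR loop can iterate over its output rows:

```sql
-- Hypothetical target table: one row per line of EXPLAIN output.
CREATE TABLE plan_lines (line text);

CREATE FUNCTION capture_plan() RETURNS void AS $$
DECLARE
    rec record;
BEGIN
    FOR rec IN EXECUTE 'EXPLAIN ANALYZE SELECT * FROM some_table' LOOP
        INSERT INTO plan_lines VALUES (rec."QUERY PLAN");
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Parsing nesting levels back out of the captured text is then string work on the leading whitespace and "->" markers of each stored line.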
Tom
Thanks for the post - I think I am getting this problem for
a synthetic workload at high connection loads. The whole
system seems to stop.
Can you give some examples on what to try out in the .conf file?
I tried
bgwriter_all_percent = 30, 10, and 3
Which I understand to mean 30%, 10% and 3%
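For reference, the knobs being tried read like this in an 8.1-era postgresql.conf (the value shown is one of the ones from this message, not a recommendation):

```
# 8.1-era background-writer settings (replaced in later releases)
bgwriter_delay = 200          # ms between bgwriter rounds
bgwriter_all_percent = 3      # % of the whole buffer pool scanned per round
bgwriter_all_maxpages = 5     # cap on dirty pages written per round
```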
Jim,
> I've been doing some work to try and identify the actual costs
> associated with an index scan with some limited success. What's been run
> so far can be seen at http://stats.distributed.net/~decibel. But there are
> a couple of problems. First, I can't use the box exclusively for this
> testing,