Jim C. Nasby jnasby 'at' pervasive.com writes:
On Tue, Mar 21, 2006 at 02:03:19PM +0100, Guillaume Cottenceau wrote:
Jim C. Nasby jnasby 'at' pervasive.com writes:
On Tue, Mar 21, 2006 at 10:40:45PM +1200, Mark Kirkwood wrote:
I was going to recommend higher - but not knowing what
On Mon, 2006-03-20 at 15:59 +0100, Mikael Carneholm wrote:
This gives that 10GB takes ~380s = ~27MB/s (with fsync=off), compared to
the raw dd result (~75.5MB/s).
I assume this difference is due to:
- simultaneous WAL write activity (assumed: for each byte written to the
table, at least
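The arithmetic behind the figures quoted above can be checked directly; a small sketch (the dd invocation is the usual kind of raw-baseline test, with a hypothetical target path):

```shell
# 10 GB loaded in ~380 s, per the post; express the effective rate in MB/s.
table_mb=$((10 * 1024))   # 10 GB in MB
secs=380
awk -v mb="$table_mb" -v s="$secs" 'BEGIN { printf "%.1f MB/s\n", mb / s }'
# prints "26.9 MB/s", i.e. the ~27MB/s quoted above

# A typical raw sequential-write baseline (path and size are illustrative):
# dd if=/dev/zero of=/mnt/data/testfile bs=8k count=1310720 conv=fsync
```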
I found another Java program for monitoring application queries:
http://www.p6spy.com/
with an interface at
http://www.jahia.net/jahia/page597.html
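For reference, p6spy normally hooks in as a JDBC driver wrapper rather than a standalone tool; a minimal sketch of the wiring (property names vary between p6spy versions, so treat this as an assumption to verify against its docs):

```properties
# spy.properties
module.log=com.p6spy.engine.logging.P6LogFactory
realdriver=org.postgresql.Driver   # the driver p6spy delegates to
logfile=spy.log
```

The application then registers com.p6spy.engine.spy.P6SpyDriver in place of the PostgreSQL driver, keeping the same JDBC URL, and every statement gets logged to spy.log.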
Thomas Pundt wrote:
Hi,
On Tuesday 21 March 2006 09:25, Marco Furetto wrote:
| I'm managing the db of a Content Management environment and I'm
| searching
On Wed, 2006-03-22 at 10:04 +0100, Mikael Carneholm wrote:
but that gave a more uneven insert rate
Not sure what you mean, but happy to review test results.
You should be able to tweak other parameters from here as you had been
trying. Your bgwriter will be of some benefit now if you set it
2006/3/21, Reimer [EMAIL PROTECTED]:
Fernando,
If you need to read all the table, for example, it would be better to read
only the data pages instead of reading data and index pages.
Reimer
- Original Message -
From: Fernando Lujan [EMAIL PROTECTED]
To: pgsql-performance@postgresql.org
Sent: Tuesday,
On Wed, Mar 22, 2006 at 08:50:20AM -0300, Alejandro D. Burne wrote:
Explains:
With SET ENABLE_SEQSCAN TO ON;
HashAggregate (cost=251306.99..251627.36 rows=11650 width=78)
You'll need to post EXPLAIN ANALYZE results, not just EXPLAIN.
/* Steinar */
--
Homepage:
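Steinar's point: EXPLAIN only prints the planner's estimates, while EXPLAIN ANALYZE also executes the query and reports actual row counts and timings, which is what makes misestimates visible. A minimal sketch (table and query are hypothetical):

```sql
-- Estimates only; the query is not executed:
EXPLAIN SELECT customer_id, count(*) FROM orders GROUP BY customer_id;

-- Executes the query and annotates each plan node with
-- "actual time=... rows=...":
EXPLAIN ANALYZE SELECT customer_id, count(*) FROM orders GROUP BY customer_id;
```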
2006/3/22, Steinar H. Gunderson [EMAIL PROTECTED]:
On Wed, Mar 22, 2006 at 08:50:20AM -0300, Alejandro D. Burne wrote:
Explains:
With SET ENABLE_SEQSCAN TO ON;
HashAggregate (cost=251306.99..251627.36 rows=11650 width=78)
You'll need to post EXPLAIN ANALYZE results, not just EXPLAIN.
/*
On Tue, Mar 21, 2006 at 05:04:16PM -0800, Craig A. James wrote:
Tom Lane wrote:
Craig A. James [EMAIL PROTECTED] writes:
It looks to me like the problem is the use of nested loops when a hash
join should be used, but I'm no expert at query planning.
Given the sizes of the tables involved,
On Tue, Mar 21, 2006 at 08:33:50PM +, Simon Riggs wrote:
On Tue, 2006-03-21 at 06:22 -0600, Jim C. Nasby wrote:
Currently, it appears that SELECT * INTO new_table FROM old_table logs
each page as it's written to WAL. Is this actually needed? Couldn't the
database simply log that the
On Wed, Mar 22, 2006 at 10:04:49AM +0100, Mikael Carneholm wrote:
It does (LOG: checkpoints are occurring too frequently (2 seconds apart))
However, I tried increasing checkpoint_segments to 32 (512MB) making it
checkpoint every 15 seconds or so, but that gave a more uneven insert rate
than
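The settings being traded off here, as a postgresql.conf sketch (8.1-era parameter names; values illustrative, not a recommendation):

```
checkpoint_segments = 32      # 32 x 16MB WAL segments between checkpoints
checkpoint_timeout = 300      # force a checkpoint at least every 300s
wal_buffers = 64              # WAL buffer pages (8kB each) in shared memory
```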
On Wed, 2006-03-22 at 06:47 -0600, Jim C. Nasby wrote:
Also, why do we log rows for CTAS/SELECT INTO when PITR is in use for
simple SELECTs (ones that don't call non-deterministic functions)? The
data should already be available AFAICS...
Not sure what you're asking... SELECTs don't produce
Thanks, will try that. I'll report on the progress later, I have some unit
tests to set up first but as soon as that is done I'll go back to optimizing
insert performance.
Regards,
Mikael.
-Original Message-
From: Jim C. Nasby [mailto:[EMAIL PROTECTED]
Sent: den 22 mars 2006 13:55
To:
On Wed, Mar 22, 2006 at 01:08:34PM +, Simon Riggs wrote:
On Wed, 2006-03-22 at 06:47 -0600, Jim C. Nasby wrote:
Also, why do we log rows for CTAS/SELECT INTO when PITR is in use for
simple SELECTs (ones that don't call non-deterministic functions)? The
data should already be available
All,
Has anyone tested PostgreSQL 8.1.x compiled with Intel's Linux C/C++
compiler?
Greg
--
Greg Spiegelberg
[EMAIL PROTECTED]
ISOdx Product Development Manager
Cranel, Inc.
Jim C. Nasby [EMAIL PROTECTED] writes:
On Wed, Mar 22, 2006 at 10:04:49AM +0100, Mikael Carneholm wrote:
It does (LOG: checkpoints are occurring too frequently (2 seconds apart))
However, I tried increasing checkpoint_segments to 32 (512MB) making it
checkpoint every 15 seconds or so, but
Jim C. Nasby [EMAIL PROTECTED] writes:
PITR wants all changes. Without PITR we can optimise certain logging
actions.
The only change here is that we're creating a new table based on the
results of a SELECT. If that SELECT doesn't use anything that's
non-deterministic, then the machine doing
On Wed, Mar 22, 2006 at 10:06:05AM -0500, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
PITR wants all changes. Without PITR we can optimize certain logging
actions.
The only change here is that we're creating a new table based on the
results of a SELECT. If that SELECT doesn't
Jim C. Nasby [EMAIL PROTECTED] writes:
Oh, so in other words, SELECT * INTO temp FROM table is inherently
non-deterministic at the physical level, so the only way to be able to
allow PITR to work is to duplicate all the physical changes. Darn.
Well, lemme put it this way: I'm not prepared to
I have a database with foreign keys enabled on the schema. I receive different
files, some of them are huge. And I need to load these files in the database
every night. There are several scenarios that I want to design an optimal
solution for -
1. One of the file has around 80K records and I
For both cases, you could COPY your file into a temporary table and then
run two bulk queries against your existing table: one to insert new rows,
and one to update existing rows.
Doing a large bulk query is a lot more efficient than doing a lot of
selects. Vacuum afterwards, and you'll be fine.
Load the files into a temp table and go from there...
COPY temp_table FROM 'file';
UPDATE existing_table SET ... WHERE ...;
INSERT INTO existing_table
SELECT t.* FROM temp_table t     -- correlate NOT EXISTS on the real key;
WHERE NOT EXISTS (               -- "id" is a stand-in column name
    SELECT 1 FROM existing_table e WHERE e.id = t.id);
On Wed, Mar 22, 2006 at 10:32:10AM -0500, ashah wrote:
I have a
On Wed, 2006-03-22 at 02:04, Guillaume Cottenceau wrote:
Jim C. Nasby jnasby 'at' pervasive.com writes:
On Tue, Mar 21, 2006 at 02:03:19PM +0100, Guillaume Cottenceau wrote:
Jim C. Nasby jnasby 'at' pervasive.com writes:
On Tue, Mar 21, 2006 at 10:40:45PM +1200, Mark Kirkwood
On Wed, 2006-03-22 at 16:35, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
Oh, so in other words, SELECT * INTO temp FROM table is inherently
non-deterministic at the physical level, so the only way to be able to
allow PITR to work is to duplicate all the physical changes. Darn.
Hi Scott,
Scott Marlowe smarlowe 'at' g2switchworks.com writes:
On Wed, 2006-03-22 at 02:04, Guillaume Cottenceau wrote:
[...]
Yes, we use 7.4.5 actually, because it just works, so production
wants to first deal with all the things that don't work before
upgrading. I have recently
Greg,
On 3/22/06 5:56 AM, Spiegelberg, Greg [EMAIL PROTECTED] wrote:
Has anyone tested PostgreSQL 8.1.x compiled with Intel's Linux C/C++
compiler?
We used to compile 8.0 with icc and 7.x before that. We found very good
performance gains for Intel P4 architecture processors and some gains
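For reference, an icc build of that era was usually just a compiler override at configure time; a sketch with illustrative flags (not a tested recipe):

```shell
CC=icc CFLAGS='-O2 -xN' ./configure --prefix=/usr/local/pgsql
make && make install
```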
On Wed, 22 Mar 2006, Jim C. Nasby wrote:
Ok, I saw disk activity on the base directory and assumed it was pg_xlog
stuff. Turns out that both SELECT INTO and CREATE TABLE AS ignore
default_tablespace and create the new tables in the base directory. I'm
guessing that's a bug... (this is on
Simon Riggs [EMAIL PROTECTED] wrote
On Tue, 2006-03-21 at 06:22 -0600, Jim C. Nasby wrote:
Currently, it appears that SELECT * INTO new_table FROM old_table logs
each page as it's written to WAL. Is this actually needed? Couldn't the
database simply log that the SELECT ... INTO statement
I'd like to know if the latest PostgreSQL release can scale up by
utilizing multiple CPUs or dual-core CPUs to speed up SQL
execution.
I have already searched the PostgreSQL mailing archives and only
found old threads dating back to 2000. A lot of things have improved with
PostgreSQL and