On 23 Oct 2006, at 22:59, Jim C. Nasby wrote:
http://stats.distributed.net used to use a perl script to do some
transformations before loading data into the database. IIRC, when we
switched to using C we saw 100x improvement in speed, so I suspect that
if you want performance perl isn't the
On 10/25/06, Craig A. James [EMAIL PROTECTED] wrote:
Jim C. Nasby wrote:
Well, given that perl is using an entire CPU, it sounds like you should
start looking either at ways to remove some of the overhead from perl,
or to split that perl into multiple processes.
I use Perl for big database
Hi, Worky,
Worky Workerson wrote:
$ psql -c "COPY my_table TO STDOUT" > my_data
$ ls my_data
2018792 edgescape_pg_load
$ time cat my_data | psql -c "COPY mytable FROM STDIN"
real    5m43.194s
user    0m35.412s
sys     0m9.567s
That's via PSQL, and you get about 5 MB/Sec.
On a table with no
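The ~5 MB/sec figure can be reproduced from the numbers in the snippet above. A quick back-of-the-envelope check, assuming the `ls` size is in KB (as from `ls -s`) and the real time rounds to 343 seconds:

```shell
# sanity-check the quoted COPY throughput
# assumptions: 2018792 is a size in KB, real time = 5m43s = 343 s
bytes=$((2018792 * 1024))
secs=343
echo "$((bytes / secs / 1024 / 1024)) MB/s"   # prints: 5 MB/s
```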
On 10/23/06, Worky Workerson [EMAIL PROTECTED] wrote:
The disk load is where I start to get a little fuzzy, as I haven't
played with iostat to figure what is normal. The local drives
contain PG_DATA as well as all the log files, but there is a
tablespace on the FibreChannel SAN that contains
Mr. Worky Workerson,
On 10/25/06 5:03 AM, Worky Workerson [EMAIL PROTECTED] wrote:
However, I am still curious as to the rather slow COPYs from psql to
local disks. Like I mentioned previously, I was only seeing about 5.7
MB/s (1.8 GB / 330 seconds), where it seemed like others were doing
On Tue, Oct 24, 2006 at 10:36:04PM -0700, Craig A. James wrote:
Jim C. Nasby wrote:
Well, given that perl is using an entire CPU, it sounds like you should
start looking either at ways to remove some of the overhead from perl,
or to split that perl into multiple processes.
I use Perl for
On Wed, Oct 25, 2006 at 08:03:38AM -0400, Worky Workerson wrote:
I'm just doing CSV style transformations (and calling a lot of
functions along the way), but the end result is a straight bulk load
of data into a blank database. And we've established that Postgres
can do *way* better than what
On Wed, Oct 25, 2006 at 07:55:38AM -0300, Mara Dalponte wrote:
On 10/24/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
On Mon, Oct 23, 2006 at 04:54:00PM -0300, Mara Dalponte wrote:
Hello,
I have a query with several join operations and applying the same
filter condition over each involved
On 10/25/06, Worky Workerson [EMAIL PROTECTED] wrote:
I'm guessing the high bursts are checkpoints. Can you check your log
files for pg and see if you are getting warnings about checkpoint
frequency? You can get some mileage here by increasing wal files.
Nope, nothing in the log. I have
(I tried this question on the interface forum and got no result, but I don't
know how to tell if it's an interface issue or not)
I have a TCL app which typically takes hours to complete. I found out that
it is taking longer than it should because it occasionally stalls
inexplicably (for tens
I'm guessing the high bursts are checkpoints. Can you check your log
files for pg and see if you are getting warnings about checkpoint
frequency? You can get some mileage here by increasing wal files.
Nope, nothing in the log. I have set:
wal_buffers=128
checkpoint_segments=128
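For reference, checkpoint_segments is counted in WAL segments of 16 MB each (the default build size), so the setting above spaces forced checkpoints roughly 2 GB of WAL apart:

```shell
# checkpoint_segments = 128, each WAL segment = 16 MB (default build)
echo "$((128 * 16)) MB of WAL between forced checkpoints"   # prints: 2048 MB ...
```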
Merlin,
On 10/25/06 8:38 AM, Merlin Moncure [EMAIL PROTECTED] wrote:
in theory, with 10 10k disks in raid 10, you should be able to keep
your 2fc link saturated all the time unless your i/o is extremely
random. random i/o is the wild card here, ideally you should see at
least 2000 seeks in
Jim C. Nasby wrote:
Wait... so you're using perl to copy data between two tables? And using
a cursor to boot? I can't think of any way that could be more
inefficient...
What's wrong with a plain old INSERT INTO ... SELECT? Or if you really
need to break it into multiple transaction blocks, at
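A minimal sketch of what Jim is suggesting, with hypothetical table and column names (src, dst, id, val) — the point is that one set-based statement replaces the per-row cursor loop entirely:

```sql
-- set-based copy between tables: no client round-trips, no cursor
INSERT INTO dst (id, val)
SELECT id, upper(val)        -- any transformation SQL can express goes here
FROM src
WHERE val IS NOT NULL;
```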
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Craig A. James
Sent: Wednesday, October 25, 2006 12:52 PM
To: Jim C. Nasby
Cc: Worky Workerson; Merlin Moncure; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Best COPY Performance
Jim
Spiegelberg, Greg wrote:
The data are on two different computers, and I do processing
of the data as it passes through the application. Otherwise,
the INSERT INTO ... SELECT is my first choice.
Would dblink() help in any way?
It might if perl wasn't so damned good at this. ;-)
Craig
Tom Lane wrote:
This isn't going to be changed, because the likely direction of future
development is that the planner will start making use of constraints
even for SELECT queries. This means that a DROP CONSTRAINT operation
could invalidate the plan of a SELECT query, so the locking will be
Carlo Stonebanks wrote:
The delays are so long that I used to think the app was hopelessly frozen.
By accident, I left the app alone in its frozen state, came back a good
deal later, and saw that it was running again.
Sometimes I decide it *IS* frozen and have to restart. Because
Merlin/Luke:
in theory, with 10 10k disks in raid 10, you should be able to keep
your 2fc link saturated all the time unless your i/o is extremely
random. random i/o is the wild card here, ideally you should see at
least 2000 seeks in bonnie... let's see what comes up.
I suspect the
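Merlin's seek figure can be sanity-checked with rough drive math. This is only a sketch: the 4.5 ms average seek is an assumption, and how many spindles serve reads in RAID 10 depends on the controller:

```shell
# 10k RPM: half-rotation latency = 60000 ms / 10000 rpm / 2 = 3 ms
# assume ~4.5 ms average seek  ->  1000 / 7.5 = ~133 random IOPS per disk
# 10 disks in RAID 10 can serve reads from all spindles
awk 'BEGIN { per_disk = 1000 / (3 + 4.5); print int(10 * per_disk), "IOPS (rough)" }'
# prints: 1333 IOPS (rough)
```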
Hi, Craig,
Craig A. James wrote:
Would dblink() help in any way?
It might if perl wasn't so damned good at this. ;-)
You know that you can use Perl inside PostgreS via plperl?
HTH,
Markus
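A sketch of Markus's suggestion, with a hypothetical function name; plperl must be installed in the database first (CREATE LANGUAGE plperl in the 8.x releases of that era):

```sql
-- run the Perl transformation inside the backend instead of a client script
CREATE FUNCTION clean_field(text) RETURNS text AS $$
    my ($v) = @_;
    $v =~ s/^\s+|\s+$//g;   # e.g. trim leading/trailing whitespace
    return $v;
$$ LANGUAGE plperl;
```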
You may try to figure out what's the process doing (the backend
obviously, not the frontend (Tcl) process) by attaching to it with
strace.
It's so sad when us poor Windows guys get helpful hints from people who assume
that we're smart enough to run *NIX... ;-)
Maybe it's swamped by a context
Mr. Worky,
On 10/25/06 11:26 AM, Worky Workerson [EMAIL PROTECTED] wrote:
And here are the dd results for 16GB RAM, i.e. 4,000,000 8K blocks:
So, if we divide 32,000 MB by the real time, we get:
/home (WAL):
53 MB/s write
84 MB/s read
/data (data):
89 MB/s write
38 MB/s read
The write and
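The numbers above come from the usual dd test; a minimal version looks like the following (scaled way down here for illustration — the original moved ~32 GB precisely to defeat the 16 GB of RAM acting as cache):

```shell
# sequential write, then sequential read, with 8 KB blocks;
# divide bytes moved by the real time of each pass to get MB/s
dd if=/dev/zero of=/tmp/ddtest bs=8k count=1000 2>/dev/null
dd if=/tmp/ddtest of=/dev/null bs=8k 2>/dev/null
```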
I have a question
for you: did you have a long running query keeping open a transaction?
I've just noticed the same problem here, but things cleaned up immediately when
I aborted the long-running transaction.
No, the only processes are from those in
the import applications themselves:
Hi
Please help. I have got a postgres 7.3.4
database running on RedHat ES 3, with 8GB of physical memory in it.
The machine is shared with my application which is pretty intensive
in doing selects and updates against the database, but there are usually no
more than 10 connections to
Mark Lonsdale wrote:
Hi
Please help. I have got a postgres 7.3.4 database running on RedHat ES
3, with 8GB of physical memory in it. The machine is shared with my
application which is pretty intensive in doing selects and updates
against the database, but there are usually no
Carlo Stonebanks wrote:
You may try to figure out what's the process doing (the backend
obviously, not the frontend (Tcl) process) by attaching to it with
strace.
It's so sad when us poor Windows guys get helpful hints from people who assume
that we're smart enough to run
Hi Josh
Thanks for the feedback, that is most useful. When you said one of the
settings was likely killing us, was it all of the settings for
max_fsm_relations, max_fsm_pages, and sort_mem or just the setting for
sort_mem ?
Can you explain why the setting would be killing me :-)
Thanks
Mark Lonsdale wrote:
Hi Josh
Thanks for the feedback, that is most useful. When you said one of the
settings was likely killing us, was it all of the settings for
max_fsm_relations, max_fsm_pages, and sort_mem or just the setting for
sort_mem ?
Can you explain why the setting would be
Richard Huxton wrote:
Mark Lonsdale wrote:
Hi Josh
Thanks for the feedback, that is most useful. When you said one of the
settings was likely killing us, was it all of the settings for
max_fsm_relations, max_fsm_pages, and sort_mem or just the setting for
sort_mem ?
Can you explain why
Thanks guys, I think we'll certainly look to get the app certified with
7.4 and 8.x but that may take a little while. In the interim, I'm
thinking of making the following changes then:
Change shared_buffers from 393216 to 80,000 (~15% of 4GB of RAM.
Server is 8GB but I want to leave space
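As a sanity check on the arithmetic (shared_buffers is counted in 8 KB pages):

```shell
# 80,000 pages * 8 KB = 625 MB, i.e. roughly 15% of a 4 GB budget
echo "$((80000 * 8 / 1024)) MB"   # prints: 625 MB
```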
Mark Lonsdale wrote:
Thanks guys, I think we'll certainly look to get the app certified with
7.4 and 8.x but that may take a little while. In the interim, I'm
thinking of making the following changes then:
Change shared_buffers from 393216 to 80,000 (~15% of 4GB of RAM.
Server is 8GB
On Wed, 2006-10-25 at 15:07, Carlo Stonebanks wrote:
You may try to figure out what's the process doing (the backend
obviously, not the frontend (Tcl) process) by attaching to it with
strace.
It's so sad when us poor Windows guys get helpful hints from people who assume
that we're smart
Maybe it's swamped by a context switch storm (but in that case, probably
the other processes would be affected as well).
What is a context switch storm? (and what a great name for a heavy metal
rock band!)
I can just see the postgresql group getting together at the next
O'Reilley's
On Wed, Oct 25, 2006 at 11:25:01AM -0400, Worky Workerson wrote:
I'm guessing the high bursts are checkpoints. Can you check your log
files for pg and see if you are getting warnings about checkpoint
frequency? You can get some mileage here by increasing wal files.
Nope, nothing in the
On Wed, Oct 25, 2006 at 04:32:16PM -0400, Carlo Stonebanks wrote:
I have a question for you: did you have a long running query keeping open
a transaction? I've just noticed the same problem here, but things cleaned
up immediately when I aborted the long-running transaction.
No, the only