Marc Mamin wrote:
Postgres configuration for 64 CPUs, 128 GB RAM...
there are probably not that many installations out there that large -
comments below
Hello,
We have the opportunity to benchmark our application on a large server. I
have to prepare the Postgres configuration and I'd appreciate some
comments on it as I am not experienced with servers of such a scale.
On Tue, Jul 17, 2007 at 04:10:30PM +0200, Marc Mamin wrote:
shared_buffers= 262143
You should at least try some runs with this set far, far larger. At
least 10% of memory, but it'd be nice to see what happens with this set
to 50% or higher as well (though don't set it larger than the database
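To make the numbers concrete, here is a sketch of what such a configuration fragment might look like in postgresql.conf. All values except shared_buffers are illustrative assumptions for a 128 GB machine, not settings taken from this thread; the point is only which knobs the benchmark runs should vary. (262143 buffers of 8 kB is roughly 2 GB, which is why trying far larger values is suggested above.)

```ini
# Illustrative starting point for a 64-CPU / 128 GB server (assumed values),
# using the 8.2-style unit syntax:
shared_buffers = 16GB          # ~12% of RAM; also benchmark 32GB and 64GB runs
effective_cache_size = 96GB    # planner hint: roughly the RAM left to the OS cache
work_mem = 64MB                # per sort/hash node per backend; keep modest
maintenance_work_mem = 1GB     # helps VACUUM and index builds
checkpoint_segments = 64       # spread out checkpoint I/O on a busy box
```

Note that older releases (8.1 and before) take shared_buffers as a count of 8 kB pages rather than with a unit suffix, as in the original shared_buffers = 262143.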
Marc Mamin [EMAIL PROTECTED] writes:
We have the opportunity to benchmark our application on a large server. I
have to prepare the Postgres configuration and I'd appreciate some
comments on it as I am not experienced with servers of such a scale.
Moreover the configuration should be fail-proof as I won't be able to
attend the tests.
On Tue, 17 Jul 2007, Marc Mamin wrote:
Moreover the configuration should be fail-proof as I won't be able to
attend the tests.
This is unreasonable. The idea that you'll get a magic perfect
configuration in one shot suggests a fundamental misunderstanding of how
work like this is done. If
It appears my multi-threaded application (100 connections every 5 seconds)
stalls when working with the PostgreSQL database server. I have limited the
number of connections in my connection pool to PostgreSQL to 20. At the
beginning, a connection is allocated and released from the connection pool as
Hi
I was doing some testing on insert compared to select into. I
inserted 100 000 rows (with 8 column values) into a table, which took 14
seconds, compared to a select into, which took 0.8 seconds.
(fyi, the inserts were batched, autocommit was turned off and it all
happened on the local
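Since a self-contained snippet cannot assume a running Postgres server, here is a hedged sketch of the same comparison using Python's stdlib sqlite3 as a stand-in (SQLite spells the table-to-table form CREATE TABLE ... AS rather than SELECT INTO). Absolute timings will differ from Postgres, where per-row logging and round trips dominate, but the two access patterns are the same:

```python
# Sketch: row-by-row INSERT vs. a single table-to-table statement.
# sqlite3 is a stand-in for Postgres here; table and column names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Source table with 8 columns and 100 000 rows, mirroring the test above.
cur.execute("CREATE TABLE src (c1,c2,c3,c4,c5,c6,c7,c8)")
cur.executemany(
    "INSERT INTO src VALUES (?,?,?,?,?,?,?,?)",
    ((i,) * 8 for i in range(100_000)),
)

# Variant 1: one INSERT statement per row into a new table.
cur.execute("CREATE TABLE dst_insert (c1,c2,c3,c4,c5,c6,c7,c8)")
for row in cur.execute("SELECT * FROM src").fetchall():
    conn.execute("INSERT INTO dst_insert VALUES (?,?,?,?,?,?,?,?)", row)

# Variant 2: one table-to-table statement. In Postgres this would be
# SELECT * INTO dst_select FROM src; SQLite's spelling is CREATE TABLE ... AS.
cur.execute("CREATE TABLE dst_select AS SELECT * FROM src")

conn.commit()
n_insert = cur.execute("SELECT count(*) FROM dst_insert").fetchone()[0]
n_select = cur.execute("SELECT count(*) FROM dst_select").fetchone()[0]
print(n_insert, n_select)  # 100000 100000
```

Both variants end up with identical tables; the per-row variant simply pays per-statement overhead 100 000 times.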
Have you also tried the COPY statement? AFAIK select into is similar to
what happens there.
Best regards,
Arjen
Michael Glaesemann [EMAIL PROTECTED] writes:
It would be helpful if you included the actual queries you're using,
as there are a number of variables:
Not to mention which PG version he's testing. Since (I think) 8.1,
SELECT INTO knows that it can substitute one fsync for WAL-logging
the
Tom Lane wrote:
Michael Glaesemann [EMAIL PROTECTED] writes:
It would be helpful if you included the actual queries you're using,
as there are a number of variables:
Not to mention which PG version he's testing.
It's pg 8.1 for now; I'll be upgrading to a compile-optimised 8.2 when I
On Tue, Jul 17, 2007 at 10:50:22PM +0200, Thomas Finneid wrote:
I havent done this test in a stored function yet, nor have I tried it
with a C client so far, so there is the chance that it is java/jdbc that
makes the insert so slow. I'll get to that test soon if there is any
chance my theory
If you're performing via JDBC, are you using addBatch/executeBatch, or
are you directly executing each insert? If you directly execute each
insert, then your code will wait for a server round-trip between each
insert.
That still won't get you to the speed of select into, but it should
help. You
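For illustration, the difference between executing one statement per row and one batched call can be sketched with Python's stdlib sqlite3 (again a stand-in, since no Postgres server can be assumed in a snippet; with a real server the batched form additionally saves one network round trip per row, which is exactly the point made above about addBatch/executeBatch):

```python
# Sketch: per-row execution vs. one batched call. Table name "t" is made up.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a, b)")
rows = [(i, 2 * i) for i in range(50_000)]

# One statement per row (the JDBC analog: executeUpdate in a loop).
t0 = time.perf_counter()
for r in rows:
    conn.execute("INSERT INTO t VALUES (?, ?)", r)
per_row = time.perf_counter() - t0

# One batched call carrying all rows (the JDBC analog: addBatch/executeBatch).
conn.execute("DELETE FROM t")
t0 = time.perf_counter()
conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
batched = time.perf_counter() - t0

n = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(f"{n} rows; per-row {per_row:.3f}s, batched {batched:.3f}s")
```

The row counts are identical either way; only the number of client/server interactions changes.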
Arjen van der Meijden wrote:
Have you also tried the COPY-statement? Afaik select into is similar to
what happens in there.
No, because it only works from file to DB or vice versa, not table to table.
regards
Thomas
On Jul 17, 2007, at 15:50 , Thomas Finneid wrote:
Michael Glaesemann wrote:
2a) Are you using INSERT INTO foo (foo1, foo2, foo3) SELECT foo1,
foo2, foo3 FROM pre_foo or individual inserts for each row? The
former would be faster than the latter.
performed with JDBC
insert into
Hi
During the tests I did, I noticed that it does not necessarily seem to be
true that one needs the fastest disks to have a pg system that is fast.
It seems to me that it's more important to:
- choose the correct methods to use for the operation
- tune the pg memory settings
- tune/disable pg
It seems Linux has I/O scheduling through a program called ionice.
Has anyone here experimented with using it rather than the
vacuum sleep settings?
http://linux.die.net/man/1/ionice
This program sets the I/O scheduling class and priority
for a program. As of this writing, Linux supports 3 scheduling classes.
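A hedged example of how this might be combined with vacuum (the command lines are illustrative: "mydb" and PID 1234 are placeholders, and whether the idle class ends up starving vacuum on a constantly busy disk is exactly the open question):

```shell
# Run a manual vacuum of database "mydb" in the idle I/O scheduling class,
# so its reads and writes only proceed when no other process needs the disk.
# (Class 3 = idle; class 2 = best-effort with priorities 0-7; class 1 = real time.)
ionice -c 3 vacuumdb --dbname=mydb --verbose

# Or lower the I/O priority of an already-running backend by PID:
ionice -c 2 -n 7 -p 1234
```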