On Fri, Jun 9, 2017 at 10:55 PM, Cocco Gianfranco wrote:
> Is there a way to fix “wal_segsize” to about 1 GB in version 9.2, and
> “rebuild” the PostgreSQL server?
As long as you are able to compile your own version of Postgres, and
your distribution does not prevent that, there is nothing preventing
Frits,
Would you mind sharing the source code of your benchmark?
> BTW: It seems you need a recent driver for this; I'm
> using postgresql-42.1.1.jar
Technically speaking, reWriteBatchedInserts was introduced in 9.4.1209
(2016-07-15)
Vladimir
On Fri, Jun 9, 2017 at 6:04 AM, Frits Jalvingh wrote:
>
> I already changed the following config parameters:
> work_mem 512MB
> synchronous_commit off
>
Since you are already batching up commits into large chunks, this setting
is not very useful, but it does risk losing supposedly-committed data.
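For what it's worth, a minimal sketch of how this setting can be confined to a single
bulk-load transaction from JDBC instead of being turned off globally; the connection
URL, credentials, and table are illustrative assumptions, not taken from this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AsyncCommitSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative connection details; adjust to your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/datavault_12_tst", "user", "secret")) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                // SET LOCAL lasts only until the end of this transaction, so the
                // relaxed durability applies to this batch alone.
                st.execute("SET LOCAL synchronous_commit = off");
                st.execute("INSERT INTO perf_test (id, payload) VALUES (1, 'x')");
            }
            con.commit();
        }
    }
}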
The parameter was
reWriteBatchedInserts = true
Either added to the connection properties or in the connection URL, like
jdbc:postgresql://localhost:5432/datavault_12_tst?reWriteBatchedInserts=true
BTW: It seems you need a recent driver for this; I'm
using postgresql-42.1.1.jar
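To make the mechanics concrete, here is a rough sketch of a batched insert with that
option enabled; the table, columns, batch size, and credentials are illustrative
assumptions rather than the benchmark code from this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "postgres");
        props.setProperty("password", "secret");
        // Same effect as appending ?reWriteBatchedInserts=true to the URL.
        props.setProperty("reWriteBatchedInserts", "true");

        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/datavault_12_tst", props);
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO perf_test (id, payload) VALUES (?, ?)")) {
            con.setAutoCommit(false);
            for (int i = 0; i < 1_000_000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.addBatch();
                if (i % 10_000 == 9_999) {
                    // The driver rewrites each batch into multi-row INSERT statements.
                    ps.executeBatch();
                }
            }
            ps.executeBatch();
            con.commit();
        }
    }
}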
On Fri, Jun 09, 2017 at 03:22:35PM, Frits Jalvingh wrote:
> Hi Babu,
>
> That was all already done, as it is common practice for JDBC. Your
> parameter was added to the code that already did all that - and worked
> brilliantly there ;)
>
Hi Frits,
What was the parameter? I did not see an E
Hi Babu,
That was all already done, as it is common practice for JDBC. Your
parameter was added to the code that already did all that - and worked
brilliantly there ;)
>
On Fri, Jun 9, 2017 at 9:12 AM, Frits Jalvingh wrote:
> Hi John,
>
> Yes, I was aware and amazed by that ;) It is actually the fetch size in
> combination with autocommit being on; that dies the sweet OOM death as soon
> as the table gets big.
>
> But Postgres read performance, with autocommit off
Hi John,
Yes, I was aware and amazed by that ;) It is actually the fetch size in
combination with autocommit being on; that dies the sweet OOM death as soon
as the table gets big.
But Postgres read performance, with autocommit off and fetch size around
64K, is quite OK. But it's good to get this m
You need to be careful with setFetchSize: we have tables with over 10
million rows and many columns, and the PostgreSQL JDBC driver silently fails;
it ignores the fetch size and tries to read the entire table content into memory.
I spent many agonizing days on this.
ps.setFetchSize(65536);
Regards,
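A short sketch of the combination being described, autocommit off plus an explicit
fetch size so the driver reads through a cursor instead of buffering the whole result
set; the table name and connection details are made-up placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class StreamingReadSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/datavault_12_tst", "user", "secret")) {
            // Without this, pgjdbc materializes the entire result set in memory.
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement("SELECT * FROM big_table")) {
                ps.setFetchSize(65536); // stream in ~64K-row chunks
                try (ResultSet rs = ps.executeQuery()) {
                    long count = 0;
                    while (rs.next()) {
                        count++;
                    }
                    System.out.println("Read " + count + " rows");
                }
            }
            con.commit();
        }
    }
}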
Hi Babu,
No, I did not, and the effect is quite great:
Inserted 1000000 rows in 2535 milliseconds, 394477.3175542406 rows per second
Inserted 1000000 rows in 2553 milliseconds, 391696.0438699569 rows per second
compared to (without your parameter):
Inserted 1000000 rows in 7643 milliseconds, 130
Hi Kenneth,
I tried unlogged before, but as long as the commit interval is long it had
no discernible effect that I could see.
I am not doing anything special I guess. I am adding the results of the
tests and the programs I'm using to the following page:
https://etc.to/confluence/display/~admjal/PostgreSQL+performance+tests
The copy example, in Java, is at the end. All of the examples use trivial
data and the same data.
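For readers who do not want to follow the link, a rough sketch of what driving COPY
through pgjdbc's CopyManager looks like; the generated CSV rows and the table
definition are illustrative stand-ins for the real test data:

import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopySketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/datavault_12_tst", "user", "secret")) {
            // Build some trivial CSV data in memory.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 100_000; i++) {
                sb.append(i).append(",row ").append(i).append('\n');
            }
            CopyManager copy = con.unwrap(PGConnection.class).getCopyAPI();
            long rows = copy.copyIn(
                    "COPY perf_test (id, payload) FROM STDIN WITH (FORMAT csv)",
                    new StringReader(sb.toString()));
            System.out.println("Copied " + rows + " rows");
        }
    }
}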
On Fri, Jun 09, 2017 at 02:39:37PM, Frits Jalvingh wrote:
> Hi all,
>
> Thanks a lot for the many responses!
>
> About preparing statements: this is done properly in Java, and pgsql does
> it by itself. So that cannot be done better ;)
>
> I tried the copy command, and that indeed works qu
Frits,
When you use the copy command, are you doing anything special to get the
run time that you are indicating?
On Fri, Jun 9, 2017 at 10:39 AM, Frits Jalvingh wrote:
> Hi all,
>
> Thanks a lot for the many responses!
>
> About preparing statements: this is done properly in Java, and pgsql do
Hi all,
Thanks a lot for the many responses!
About preparing statements: this is done properly in Java, and pgsql does
it by itself. So that cannot be done better ;)
I tried the copy command, and that indeed works quite brilliantly:
Inserted 24000000 rows in 22004 milliseconds, 1090710.779858207
On Fri, Jun 09, 2017 at 01:56:58PM, Frits Jalvingh wrote:
> Hi Kenneth, Andreas,
>
> Thanks for your tips!
>
> I increased shared_buffers to 8GB but it has no measurable effect at all. I
> think that is logical: shared buffers are important for querying but not
> for inserting; for that the
On Fri, Jun 9, 2017 at 7:56 AM, Frits Jalvingh wrote:
> Hi Kenneth, Andreas,
>
> Thanks for your tips!
>
> I increased shared_buffers to 8GB but it has no measurable effect at all. I
> think that is logical: shared buffers are important for querying but not for
> inserting; for that the speed to w
Hi Kenneth, Andreas,
Thanks for your tips!
I increased shared_buffers to 8GB but it has no measurable effect at all. I
think that is logical: shared buffers are important for querying but not
for inserting; for that, the speed of writing to disk seems most important, so there is
no big reason to cache the data.
On Fri, Jun 09, 2017 at 03:24:15PM +0200, Andreas Kretschmer wrote:
>
>
> On 09.06.2017 at 15:04, Frits Jalvingh wrote:
> >Hi all,
> >
> >I am trying to improve the runtime of a big data warehouse
> >application. One significant bottleneck found was insert
> >performance, so I am investigating w
On 09.06.2017 at 15:04, Frits Jalvingh wrote:
Hi all,
I am trying to improve the runtime of a big data warehouse
application. One significant bottleneck found was insert performance,
so I am investigating ways of getting Postgresql to insert data faster.
* use COPY instead of Insert, it i
Well after some messing around I have eventually dug out some meaningful(?)
information.
We engineered a simple test with a stored procedure that returned immediately
– so the SQL is very simple (SELECT * from storedProc). This means no DB
activity.
On a test machine that took ~44 usecs mea
Hi all,
I am trying to improve the runtime of a big data warehouse application. One
significant bottleneck found was insert performance, so I am investigating
ways of getting Postgresql to insert data faster. I ran several tests on a
fast machine to find out what performs best, and compared the re