Re: [PERFORM] Insert Concurrency

2017-04-18 Thread ROBERT PRICE
to speed on postgres partitioning for a future need. On Mon, Apr 17, 2017 at 8:55 PM, ROBERT PRICE wrote

Re: [PERFORM] Insert Concurrency

2017-04-18 Thread David McKelvie
>> To understand recursion, one must first understand recursion. This makes no sense unless you also provide the base case. David -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Insert Concurrency

2017-04-18 Thread Scott Marlowe
On Mon, Apr 17, 2017 at 8:55 PM, ROBERT PRICE wrote: > I come from an Oracle background and am porting an application to postgres. > App has a table that will contain 100 million rows and has to be loaded by a > process that reads messages off a SQS queue and makes web service calls to > insert re

Re: [PERFORM] Insert Concurrency

2017-04-18 Thread Claudio Freire
On Tue, Apr 18, 2017 at 2:45 AM, Daniel Blanch Bataller wrote: > > But if you are going to move a record at a time you are going to be limited > by the fastest transaction rate you can achieve, which is going to be a few > hundred per second, and limited at the end by the disk hardware you have.

Re: [PERFORM] Insert Concurrency

2017-04-17 Thread Daniel Blanch Bataller
Yes, postgres has partitions: https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html But this is not going to help much in the scenario you have. Postgres can ingest data very very fast, 100M records in seconds - minu
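The 9.6 documentation linked here implements partitioning with table inheritance, non-overlapping CHECK constraints, and an insert-routing trigger. A condensed sketch along the lines of those docs (table and column names are invented for illustration, not taken from the thread):

```sql
-- Parent table; children carry non-overlapping CHECK constraints.
CREATE TABLE measurements (
    id      bigserial,
    logdate date NOT NULL,
    reading numeric
);

CREATE TABLE measurements_2017_04 (
    CHECK (logdate >= DATE '2017-04-01' AND logdate < DATE '2017-05-01')
) INHERITS (measurements);

-- Route inserts on the parent to the matching child.
CREATE OR REPLACE FUNCTION measurements_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2017-04-01' AND NEW.logdate < DATE '2017-05-01' THEN
        INSERT INTO measurements_2017_04 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for logdate %', NEW.logdate;
    END IF;
    RETURN NULL;  -- the row has already been stored in the child
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_measurements
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();
```

With constraint_exclusion = partition set, the planner can skip children whose CHECK constraints rule them out of a query.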

Re: [PERFORM] Insert Concurrency

2017-04-17 Thread David Rowley
On 18 April 2017 at 14:55, ROBERT PRICE wrote: > I come from an Oracle background and am porting an application to postgres. > App has a table that will contain 100 million rows and has to be loaded by a > process that reads messages off a SQS queue and makes web service calls to > insert records

[PERFORM] Insert Concurrency

2017-04-17 Thread ROBERT PRICE
I come from an Oracle background and am porting an application to postgres. App has a table that will contain 100 million rows and has to be loaded by a process that reads messages off a SQS queue and makes web service calls to insert records one row at a time in a postgres RDS instance. I know
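As the replies in this thread note, this design is limited by one transaction per row rather than by raw insert speed. If the loader can accumulate SQS messages and flush them in groups, a multi-row insert inside one transaction is a simple first step; a hedged sketch (table and column names invented for illustration):

```sql
BEGIN;
INSERT INTO queue_records (msg_id, body)
VALUES ('m-001', '...'),
       ('m-002', '...'),
       ('m-003', '...');   -- a few hundred rows per statement is common
COMMIT;
```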

Re: [PERFORM] insert performance

2016-01-24 Thread Jim Nasby
On 1/21/16 11:54 PM, Jinhua Luo wrote: There is another problem. When the autovacuum recycles the old pages, the ReadBuffer_common() would do mdread() instead of mdextend(). The read is synchronous, while the write could be mostly asynchronous, so the frequent read is much worse than write versio

Re: [PERFORM] insert performance

2016-01-21 Thread Jinhua Luo
Hi, There is another problem. When the autovacuum recycles the old pages, the ReadBuffer_common() would do mdread() instead of mdextend(). The read is synchronous, while the write could be mostly asynchronous, so the frequent read is much worse than write version. Any help? Please. Regards, Jinh

Re: [PERFORM] insert performance

2016-01-18 Thread Jinhua Luo
Hi, I thought with async commit enabled, the backend process would rarely do file io. But in fact, it still involves a lot of file io. After inspecting the vfs probes using systemtap, and reading the source codes of postgresql, I found the tight loop of insert or update will cause heavy file io u
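For reference, the "async commit" the poster enabled is the synchronous_commit setting: with it off, COMMIT returns without waiting for the WAL flush, so a crash can lose the last few reported commits but cannot corrupt the database. A minimal sketch:

```sql
-- Per session:
SET synchronous_commit = off;

-- Or cluster-wide in postgresql.conf:
--   synchronous_commit = off
```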

Re: [PERFORM] insert performance

2016-01-13 Thread Jinhua Luo
Hi All, I found it's not related to file I/O. I use systemtap to diagnose the postgres backend process. The systemtap script is a modified version of sample-bt-off-cpu: https://gist.github.com/kingluo/15b656998035cef193bc Test process: 1) create a simple table: --- create table test

Re: [PERFORM] insert performance

2016-01-13 Thread Jinhua Luo
Hi, I found the insert performance is not related to the table schema. In fact, I could reproduce the issue using a simple table: create table test(k bigserial primary key, a int, b int, c text, d text); test.sql: insert into test(a, b, c, d) values(3438, 1231, 'o'

Re: [PERFORM] insert performance

2016-01-11 Thread Jeff Janes
On Sat, Jan 9, 2016 at 9:57 PM, Jinhua Luo wrote: > > To make a clean test env, I clone a new table, removing the indexes (keeping > the primary key) and triggers, and use pgbench to test insert statement > purely. Can you share the pgbench command line, and the sql file you feed to it (and whate

Re: [PERFORM] insert performance

2016-01-10 Thread Jim Nasby
On 1/9/16 11:57 PM, Jinhua Luo wrote: But I do not understand that why the process do so many IO with async commit? And it does not even happen at the shared buffer flushing and locks waiting. Where's the code path doing these IO? I assume you're asking about all the IO to the heap table. That

[PERFORM] insert performance

2016-01-09 Thread Jinhua Luo
Hi All, The database is postgresql 9.3, running on debian7, with 8 cpu cores and 8096MB physical memory. There is a big table with more than 70 columns. It is inserted into constantly at about 700 rows/sec. It's not feasible to use COPY, because the data is not predefined or provisioned, and it's generated on dem

Re: [PERFORM] Insert vs Update

2015-07-15 Thread Guillaume Lelarge
Le 15 juil. 2015 11:16 PM, "David G. Johnston" a écrit : > > On Wed, Jul 15, 2015 at 4:53 PM, Michael Nolan wrote: >> >> On Wed, Jul 15, 2015 at 3:16 PM, Robert DiFalco wrote: >>> >>> >>> Thanks David, my example was a big simplification, but I appreciate your guidance. The different event types

Re: [PERFORM] Insert vs Update

2015-07-15 Thread David G. Johnston
On Wed, Jul 15, 2015 at 4:53 PM, Michael Nolan wrote: > On Wed, Jul 15, 2015 at 3:16 PM, Robert DiFalco > wrote: > >> >> Thanks David, my example was a big simplification, but I appreciate your >> guidance. The different event types have differing amounts of related data. >> Query speed on this

Re: [PERFORM] Insert vs Update

2015-07-15 Thread Michael Nolan
On Wed, Jul 15, 2015 at 3:16 PM, Robert DiFalco wrote: > > Thanks David, my example was a big simplification, but I appreciate your > guidance. The different event types have differing amounts of related data. > Query speed on this schema is not important, it's really the write speed > that matte

Re: [PERFORM] Insert vs Update

2015-07-15 Thread Robert DiFalco
On Wed, Jul 15, 2015 at 12:32 PM, David G. Johnston < david.g.johns...@gmail.com> wrote: > > You may be able to put Table Inheritance to good use here... > > I do not know (but doubt) if HOT optimization works when going from NULL > to non-NULL since the former is stored in a bitmap while the late

Re: [PERFORM] Insert vs Update

2015-07-15 Thread David G. Johnston
On Wed, Jul 15, 2015 at 3:16 PM, Robert DiFalco wrote: > The different event types have differing amounts of related data. > On this basis alone I would select the multiple-table version as my baseline and only consider something different if the performance of this was insufficient and I could

Re: [PERFORM] Insert vs Update

2015-07-15 Thread Robert DiFalco
On Wed, Jul 15, 2015 at 11:15 AM, David G. Johnston < david.g.johns...@gmail.com> wrote: > > Yes, you are trying to choose between a bunch of one-to-one (optional) > relationships versus adding additional columns to a table all of which can > be null. > > I'd argue that neither option is "normal

Re: [PERFORM] Insert vs Update

2015-07-15 Thread David G. Johnston
On Wed, Jul 15, 2015 at 1:56 PM, Robert DiFalco wrote: > > > On Wed, Jul 15, 2015 at 10:33 AM, David G. Johnston < > david.g.johns...@gmail.com> wrote: > >> On Wednesday, July 15, 2015, Robert DiFalco >> wrote: >> >>> First off I apologize if this is question has been beaten to death. I've >>> l

Re: [PERFORM] Insert vs Update

2015-07-15 Thread Robert DiFalco
On Wed, Jul 15, 2015 at 10:33 AM, David G. Johnston < david.g.johns...@gmail.com> wrote: > On Wednesday, July 15, 2015, Robert DiFalco > wrote: > >> First off I apologize if this is question has been beaten to death. I've >> looked around for a simple answer and could not find one. >> >> Given a

Re: [PERFORM] Insert vs Update

2015-07-15 Thread David G. Johnston
On Wednesday, July 15, 2015, Robert DiFalco wrote: > First off I apologize if this is question has been beaten to death. I've > looked around for a simple answer and could not find one. > > Given a database that will not have it's PKEY or indices modified, is it > generally faster to INSERT or UP

Re: [PERFORM] Insert vs Update

2015-07-15 Thread Michael Nolan
On Wed, Jul 15, 2015 at 12:16 PM, Robert DiFalco wrote: > First off I apologize if this is question has been beaten to death. I've > looked around for a simple answer and could not find one. > > Given a database that will not have it's PKEY or indices modified, is it > generally faster to INSE

Re: [PERFORM] Insert vs Update

2015-07-15 Thread David G. Johnston
On Wed, Jul 15, 2015 at 12:16 PM, Robert DiFalco wrote: > First off I apologize if this is question has been beaten to death. I've > looked around for a simple answer and could not find one. > > Given a database that will not have it's PKEY or indices modified, is it > generally faster to INSERT

[PERFORM] Insert vs Update

2015-07-15 Thread Robert DiFalco
First off I apologize if this question has been beaten to death. I've looked around for a simple answer and could not find one. Given a database that will not have its PKEY or indices modified, is it generally faster to INSERT or UPDATE data? And if there is a performance difference is it subs
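The replies bring up HOT updates: an UPDATE that touches no indexed column can stay on the same heap page and skip index maintenance entirely, but only if the page has free space. A hedged sketch of the relevant knob (table and column names invented):

```sql
CREATE TABLE events (
    event_id  bigserial PRIMARY KEY,
    payload   text,
    processed boolean
) WITH (fillfactor = 80);  -- leave ~20% of each page free for HOT updates

-- Updating only a non-indexed column can then be a HOT update,
-- avoiding any insertion into the primary-key index:
UPDATE events SET processed = true WHERE event_id = 1;
```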

Re: [PERFORM] Insert performance for large transaction with multiple COPY FROM

2013-01-18 Thread Horst Dehmer
Hey Jeff (and others)! First of all: Thanks for your detailed explanations and guidelines. On 17.01.2013, at 18:12, Jeff Janes wrote: > So the theory is that the presence of idx_4 is causing the trigger to > pick a poor plan (i.e. one using idx_4) while its absence removes that > temptation?

Re: [PERFORM] Insert performance for large transaction with multiple COPY FROM

2013-01-17 Thread Jeff Janes
On Tue, Jan 15, 2013 at 3:44 PM, Horst Dehmer wrote: > idx_4 together with a simple select in the tables on-insert trigger is > slowing things down considerably. So the theory is that the presence of idx_4 is causing the trigger to pick a poor plan (i.e. one using idx_4) while its absence remov

Re: [PERFORM] Insert performance for large transaction with multiple COPY FROM

2013-01-15 Thread Horst Dehmer
After more testing I have gained some insights: The foreign key constraints are NOT responsible for the low COPY FROM performance in my case. I forgot about the indexes which are created along with the FK constraints. Besides the primary key CONSTRAINT obj_item_loc_pkey PRIMARY KEY (obj_item_i
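The remedy discussed in this thread is the usual bulk-load pattern: drop the secondary indexes before the COPY and rebuild them afterwards, since one sort-based index build is far cheaper than millions of incremental index insertions. A hedged sketch using the thread's obj_item_loc table, with invented column names and file path:

```sql
-- idx_4 is the index named in this thread; its column list is invented here.
DROP INDEX IF EXISTS idx_4;

COPY obj_item_loc FROM '/tmp/obj_item_loc.tsv';

-- Rebuild after the load.
CREATE INDEX idx_4 ON obj_item_loc (loc_id, obj_item_id);
```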

Re: [PERFORM] Insert performance for large transaction with multiple COPY FROM

2013-01-12 Thread Claudio Freire
On Sat, Jan 12, 2013 at 7:41 PM, Horst Dehmer wrote: > Since the complete schema (of about 300 tables) is generated, I will just try > char(20) instead of numeric(20) in the next days to see if it makes any > difference. Which I somehow doubt. I think that might just make it worse. Well, maybe

Re: [PERFORM] Insert performance for large transaction with multiple COPY FROM

2013-01-12 Thread Horst Dehmer
The types referenced by the foreign keys are the same Numeric(20). Since the complete schema (of about 300 tables) is generated, I will just try char(20) instead of numeric(20) in the next days to see if it makes any difference. Which I somehow doubt. But first I'm following the lead of the ta

Re: [PERFORM] Insert performance for large transaction with multiple COPY FROM

2013-01-12 Thread Tom Lane
Jeff Janes writes: > Will PG allow you to add a FK constraint where there is no usable > index on the referenced side? It will not, because the referenced side must have a unique constraint, ie an index. The standard performance gotcha here is not having an index on the referencing side. But th

Re: [PERFORM] Insert performance for large transaction with multiple COPY FROM

2013-01-12 Thread Claudio Freire
On Sat, Jan 12, 2013 at 5:16 PM, Horst Dehmer wrote: > Yes, the ids is something I don't like either. > They carry additional semantics, which I cannot make go away. > How are chances char(20) is more time efficient than numeric(20)? > Disk space is no problem here. What are the other tables like

Re: [PERFORM] Insert performance for large transaction with multiple COPY FROM

2013-01-12 Thread Jeff Janes
On Fri, Jan 11, 2013 at 5:17 PM, Claudio Freire wrote: > On Fri, Jan 11, 2013 at 8:55 PM, Horst Dehmer wrote: >> Except - and that's the wall I'm hitting - for one table which yielded just >> 75 records/second. >> The main 'problem' seem to be the FK constraints. Dropping just them >> restored in

Re: [PERFORM] Insert performance for large transaction with multiple COPY FROM

2013-01-12 Thread Horst Dehmer
Yes, the ids is something I don't like either. They carry additional semantics, which I cannot make go away. How are chances char(20) is more time efficient than numeric(20)? Disk space is no problem here. On 12.01.2013, at 02:17, Claudio Freire wrote: > On Fri, Jan 11, 2013 at 8:55 PM, Horst D

Re: [PERFORM] Insert performance for large transaction with multiple COPY FROM

2013-01-11 Thread Claudio Freire
On Fri, Jan 11, 2013 at 8:55 PM, Horst Dehmer wrote: > Except - and that's the wall I'm hitting - for one table which yielded just > 75 records/second. > The main 'problem' seem to be the FK constraints. Dropping just them > restored insert performance for this table to 6k records/s. The table in

[PERFORM] Insert performance for large transaction with multiple COPY FROM

2013-01-11 Thread Jeff Janes
On Friday, January 11, 2013, Horst Dehmer wrote: > > Except - and that's the wall I'm hitting - for one table which yielded > just 75 records/second. > The main 'problem' seem to be the FK constraints. Dropping just them > restored insert performance for this table to 6k records/s. > It sure soun

[PERFORM] Insert performance for large transaction with multiple COPY FROM

2013-01-11 Thread Horst Dehmer
Hi! I see a massive performance drop when writing a large transaction. I'm writing data for 33 tables with COPY FROM directly from streams in Scala/Java. Over all tables there are 2.2M records which are unevenly distributed from 1 record to 315k records in some tables. For comparison I ran a t

Re: [PERFORM] insert

2011-08-01 Thread Kevin Grittner
Vitalii Tymchyshyn wrote: > Please note that in multitasking environment you may have problems > with your code. Two connections may check if "a" is available and > if not (and both got empty "select" result), try to insert. One > will succeed, another will fail if you have a unique constraint o

Re: [PERFORM] insert

2011-08-01 Thread Vitalii Tymchyshyn
Hello. Please note that in multitasking environment you may have problems with your code. Two connections may check if "a" is available and if not (and both got empty "select" result), try to insert. One will succeed, another will fail if you have a unique constraint on category name (and you
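The race Vitalii describes (two sessions both find no row, both insert, and one fails on the unique constraint) is exactly what INSERT ... ON CONFLICT, added in PostgreSQL 9.5 (well after this 2011 thread), addresses. A sketch with invented names, assuming a unique constraint on the category name:

```sql
-- Requires: a unique constraint on the name column, e.g.
--   ALTER TABLE categories ADD CONSTRAINT categories_name_key UNIQUE (name);

-- PostgreSQL 9.5+: race-free "insert if absent".
INSERT INTO categories (name)
VALUES ('widgets')
ON CONFLICT (name) DO NOTHING;

SELECT category_id FROM categories WHERE name = 'widgets';
```

On versions that predate 9.5, the standard approach is a retry loop that catches the unique-violation error and re-selects.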

Re: [PERFORM] insert

2011-07-31 Thread Gavin Flower
On 30/07/11 08:14, Kevin Grittner wrote: alan wrote: Can I write a BEFORE ROW trigger for the products table to runs on INSERT or UPDATE to 1. insert a new category& return the new category_id OR 2. return the existing category_id for the (to be inserted row) What would you be using

Re: [PERFORM] insert

2011-07-29 Thread Kevin Grittner
alan wrote: > Can I write a BEFORE ROW trigger for the products table to runs > on INSERT or UPDATE to > 1. insert a new category & return the new category_id OR > 2. return the existing category_id for the (to be inserted row) What would you be using to match an existing category? If th

[PERFORM] insert

2011-07-29 Thread alan
next question. I have a product table with a 'category' column that I want to maintain in a separate table. CREATE TABLE products ( product_id INTEGER DEFAULT nextval('product_id_seq'::regclass) NOT NULL, name VARCHAR(60) NOT NULL, category SMALLINT

Re: [PERFORM] insert

2011-07-29 Thread alan
I think I figured it out myself. If anyone sees issues with this (simple) approach, please let me know. I changed my table definitions to this: CREATE SEQUENCE public.product_id_seq CREATE TABLE products ( product_id INTEGER DEFAULT nextval('product_id_seq'::regclass) NOT NULL, name VARCH

Re: [PERFORM] INSERT query times

2011-07-10 Thread Tom Lane
sergio mayoral writes: > i am using libpq library and postgresql 8.4 for my linux application running > on ARM with 256 MB. I am just doing: > PQconnectdb(); > PQexec(INSERT INTO table1 ); (0.009661 sec.) > PQexec(INSERT INTO table1 ); (0.004208 sec.) > PQexec(INSERT INTO table2 ); (

Re: [PERFORM] INSERT query times

2011-07-10 Thread Pavel Stehule
Hello a) look on COPY statement and COPY API protocol - it can be 100x faster than INSERTS http://www.postgresql.org/docs/8.3/static/libpq-copy.html b) if you can't to use COPY use: * outer transaction - BEGIN, INSERT, INSERT ... COMMIT if this is possible * use a prepared statement http://www.p
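Pavel's point (b) can be sketched in plain SQL; table1 is the poster's table, while the parameter types and column list here are invented for illustration:

```sql
BEGIN;

-- Parse and plan the statement once, then execute it many times.
PREPARE ins (integer, text) AS
    INSERT INTO table1 (id, payload) VALUES ($1, $2);

EXECUTE ins(1, 'first');
EXECUTE ins(2, 'second');
-- ... many more EXECUTEs inside the one transaction ...

COMMIT;
DEALLOCATE ins;
```

Wrapping the whole batch in one transaction avoids paying a WAL flush per row; the prepared statement avoids re-planning per row.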

[PERFORM] INSERT query times

2011-07-10 Thread sergio mayoral
Hi, i am using libpq library and postgresql 8.4 for my linux application running on ARM with 256 MB. I am just doing: PQconnectdb(); PQexec(INSERT INTO table1 ); (0.009661 sec.) PQexec(INSERT INTO table1 ); (0.004208 sec.) PQexec(INSERT INTO table2 ); (0.007352 sec.) PQexec(INSERT IN

Re: [PERFORM] Insert performance with composite index

2010-11-02 Thread Cédric Villemain
2010/11/2 hubert depesz lubaczewski : > On Tue, Nov 02, 2010 at 12:04:42PM +0100, Cédric Villemain wrote: >> 2010/11/2 hubert depesz lubaczewski : >> > On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote: >> >> >   CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex) >> >> > ) >> >> > t

Re: [PERFORM] Insert performance with composite index

2010-11-02 Thread Divakar Singh
Maybe a query that is filtering based on these 2 columns? Best Regards, Divakar

Re: [PERFORM] Insert performance with composite index

2010-11-02 Thread hubert depesz lubaczewski
On Tue, Nov 02, 2010 at 12:04:42PM +0100, Cédric Villemain wrote: > 2010/11/2 hubert depesz lubaczewski : > > On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote: > >> >   CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex) > >> > ) > >> > the index definition is > >> > CREATE INDEX "P

Re: [PERFORM] Insert performance with composite index

2010-11-02 Thread Cédric Villemain
2010/11/2 hubert depesz lubaczewski : > On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote: >> >   CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex) >> > ) >> > the index definition is >> > CREATE INDEX "PK_AT2" >> >   ON ABC >> >   USING btree >> >   (event, tableindex) >> > TABLES

Re: [PERFORM] Insert performance with composite index

2010-11-02 Thread hubert depesz lubaczewski
On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote: > >   CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex) > > ) > > the index definition is > > CREATE INDEX "PK_AT2" > >   ON ABC > >   USING btree > >   (event, tableindex) > > TABLESPACE sample; > > Indexing twice the same column
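Cédric's observation, laid out side by side (column types are invented; the table and index names are from the thread):

```sql
CREATE TABLE abc (
    event      integer,
    tableindex integer,
    CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)  -- already indexes tableindex
);

-- The composite index repeats tableindex as its second column:
CREATE INDEX "PK_AT2" ON abc USING btree (event, tableindex);

-- If queries only ever filter on event, an index on (event) alone is
-- smaller and cheaper to maintain during bulk loads:
CREATE INDEX abc_event_idx ON abc (event);
```

Whether the single-column variant suffices depends on the workload, hence Divakar's follow-up question about which queries filter on both columns.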

Re: [PERFORM] Insert performance with composite index

2010-11-01 Thread Andres Freund
On Monday 01 November 2010 15:28:19 Divakar Singh wrote: > Do you mean these parameters have been removed starting 9.X? > As I see on > http://www.network-theory.co.uk/docs/postgresql/vol3/BackgroundWriter.html > ,these parameters were added starting from 8.0 right? No, I mean setting to 0 is a b

Re: [PERFORM] Insert performance with composite index

2010-11-01 Thread Divakar Singh
On Monday 01 November 2010 15:16:49 Divakar Singh wrote: > I am using 9.0.1 Either that's not true or you cargo-culted loads of your config fro

Re: [PERFORM] Insert performance with composite index

2010-11-01 Thread Andres Freund
On Monday 01 November 2010 15:16:49 Divakar Singh wrote: > I am using 9.0.1 Either that's not true or you cargo-culted loads of your config from a significantly older pg version. Things like: #bgwriter_delay = 200 # 10-10000 milliseconds between rounds bgwriter_lru_percent = 0 #

Re: [PERFORM] Insert performance with composite index

2010-11-01 Thread Divakar Singh
I am using 9.0.1 Best Regards, Divakar

Re: [PERFORM] Insert performance with composite index

2010-11-01 Thread Andres Freund
On Monday 01 November 2010 15:08:10 Divakar Singh wrote: > here are my parameters: Which pg version is that?

Re: [PERFORM] Insert performance with composite index

2010-11-01 Thread Andres Freund
Hi, On Monday 01 November 2010 13:49:14 Divakar Singh wrote: > When I tried inserting 1M rows into a table with a Primary Key, it took > almost 62 seconds. > After adding a composite index of 2 columns, the performance degrades to > 125 seconds. > I am using COPY to insert all data in 1 transactio

Re: [PERFORM] Insert performance with composite index

2010-11-01 Thread Cédric Villemain
2010/11/1 Divakar Singh : > Hi, > I am trying to tune my libpq program for insert performance. > When I tried inserting 1M rows into a table with a Primary Key, it took > almost 62 seconds. > After adding a composite index of 2 columns, the performance degrades to 125 > seconds. > I am using COPY t

Re: [PERFORM] Insert performance with composite index

2010-11-01 Thread Marti Raudsepp
On Mon, Nov 1, 2010 at 14:56, Divakar Singh wrote: > Thanks for your tips. i will try those. > I am on Solaris Sparc 5.10 Sorry, I assumed you were running Linux. But still it could be the same problem as I had. Be careful changing your wal_sync_method, as it has the potential to corrupt your da

Re: [PERFORM] Insert performance with composite index

2010-11-01 Thread Marti Raudsepp
On Mon, Nov 1, 2010 at 14:49, Divakar Singh wrote: > I am trying to tune my libpq program for insert performance. > When I tried inserting 1M rows into a table with a Primary Key, it took > almost 62 seconds. > After adding a composite index of 2 columns, the performance degrades to 125 > seconds.

Re: [PERFORM] Insert performance with composite index

2010-11-01 Thread Divakar Singh
Hi Marti, Thanks for your tips. I will try those. I am on Solaris Sparc 5.10 Best Regards, Divakar

[PERFORM] Insert performance with composite index

2010-11-01 Thread Divakar Singh
Hi, I am trying to tune my libpq program for insert performance. When I tried inserting 1M rows into a table with a Primary Key, it took almost 62 seconds. After adding a composite index of 2 columns, the performance degrades to 125 seconds. I am using COPY to insert all data in 1 transaction. t

Re: [PERFORM] Insert performance and multi-column index order

2009-06-30 Thread Bob Lunney
same reason, but forgot to apply the lesson to PostgreSQL. BTW, this is PG 8.2.1 and 8.3.7 running on SLES 10.3, although I don't think it matters. Thanks for the help, Greg and Tom!

Re: [PERFORM] Insert performance and multi-column index order

2009-06-26 Thread Greg Smith
On Fri, 26 Jun 2009, bob_lun...@yahoo.com wrote: The original unique index was in the order (timestamptz, varchar, text, text) and most queries against it were slow.  I changed the index order to (varchar, text, timestamptz, text) and queries now fly, but loading data (via copy from stdin) in

Re: [PERFORM] Insert performance and multi-column index order

2009-06-26 Thread Tom Lane
bob_lun...@yahoo.com writes: > Why would changing the column order on a unique index cause data loading or > index servicing to slow down? Page splits in the b-tree, maybe? Yeah, perhaps. Tell us about the data distributions in the columns? Is there any ordering to the keys that're being insert

[PERFORM] Insert performance and multi-column index order

2009-06-26 Thread bob_lunney
I have a partitioned table with a multi-column unique index.  The table is partitioned on a timestamp with time zone column.  (I realize this has nothing to do with the unique index.)  The original unique index was in the order (timestamptz, varchar, text, text) and most queries against it were
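Tom Lane's reply above points at b-tree page splits, which fits the symptom: when the leading index column is the (monotonically increasing) timestamptz, rows loaded in time order append at the right edge of the tree; with a varchar leading, each incoming row lands at an effectively random position and forces splits throughout the tree. A sketch of the two orderings (all names invented; the column types match the poster's description):

```sql
-- Loads fast: time-ordered input appends at the index's right edge,
-- so few page splits -- but queries filtering on label first are slower.
CREATE UNIQUE INDEX t_load_fast ON t (ts, label, note_a, note_b);

-- Queries fly, loads slow: inserts scatter across the tree and split
-- pages, which is what the poster observed after reordering.
CREATE UNIQUE INDEX t_query_fast ON t (label, note_a, ts, note_b);
```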

Re: [PERFORM] INSERT times - same storage space but more fields -> much slower inserts

2009-04-15 Thread Matthew Wakeling
On Wed, 15 Apr 2009, Matthew Wakeling wrote: If anyone needs this code in Java, we have a version at http://www.intermine.org/ Download source code: http://www.intermine.org/wiki/SVNCheckout Javadoc: http://www.intermine.org/api/ Sorry, that should be http://www.flymine.org/api/ Matthew --

Re: [PERFORM] INSERT times - same storage space but more fields -> much slower inserts

2009-04-15 Thread Matthew Wakeling
On Tue, 14 Apr 2009, Stephen Frost wrote: What does your test harness currently look like, and what would you like to see to test the binary-format COPY? I'd be happy to write up the code necessary to implement binary-format COPY for this. If anyone needs this code in Java, we have a version a

Re: [PERFORM] INSERT times - same storage space but more fields -> much slower inserts

2009-04-14 Thread Stephen Frost
Craig, * Craig Ringer (cr...@postnewspapers.com.au) wrote: > In other cases, binary-format COPY would be unsafe without some way to > determine remote endianness and sizeof(various types). As Tom mentioned already, the binary protocol is actually pretty well defined, and it's in network-byte-orde

Re: [PERFORM] INSERT times - same storage space but more fields -> much slower inserts

2009-04-14 Thread Tom Lane
Craig Ringer writes: > Unlike PQexecPrepared(...), binary-format COPY doesn't handle endian and > type size issues for you. You need to convert the data to the database > server's endianness and type sizes, but I don't think the PostgreSQL > protocol provides any way to find those out. The on-the

Re: [PERFORM] INSERT times - same storage space but more fields -> much slower inserts

2009-04-14 Thread Craig Ringer
Stephen Frost wrote: > * Matthew Wakeling (matt...@flymine.org) wrote: >> On Tue, 14 Apr 2009, Stephen Frost wrote: >>> Bacula should be using COPY for the batch data loads, so hopefully won't >>> suffer too much from having the fields split out. I think it would be >>> interesting to try doing PQ

Re: [PERFORM] INSERT times - same storage space but more fields -> much slower inserts

2009-04-14 Thread Stephen Frost
* Matthew Wakeling (matt...@flymine.org) wrote: > On Tue, 14 Apr 2009, Stephen Frost wrote: >> Bacula should be using COPY for the batch data loads, so hopefully won't >> suffer too much from having the fields split out. I think it would be >> interesting to try doing PQexecPrepared with binary-fo

Re: [PERFORM] INSERT times - same storage space but more fields -> much slower inserts

2009-04-14 Thread Matthew Wakeling
On Tue, 14 Apr 2009, Stephen Frost wrote: Bacula should be using COPY for the batch data loads, so hopefully won't suffer too much from having the fields split out. I think it would be interesting to try doing PQexecPrepared with binary-format data instead of using COPY though. I'd be happy to

Re: [PERFORM] INSERT times - same storage space but more fields -> much slower inserts

2009-04-14 Thread Stephen Frost
Craig, * Craig Ringer (cr...@postnewspapers.com.au) wrote: > I've been doing some testing for the Bacula project, which uses > PostgreSQL as one of the databases in which it stores backup catalogs. We also use Bacula with a PostgreSQL backend. > I've been evaluating a schema change for Bacula th

[PERFORM] INSERT times - same storage space but more fields -> much slower inserts

2009-04-14 Thread Craig Ringer
Hi I've been doing some testing for the Bacula project, which uses PostgreSQL as one of the databases in which it stores backup catalogs. Insert times are critical in this environment, as the app may insert millions of records a day. I've been evaluating a schema change for Bacula that takes a f
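The replies in this thread weigh text-format COPY against binary-format COPY. For orientation, both forms at the SQL level (table, columns, and paths are invented; the pre-9.0 spelling is WITH BINARY, newer releases also accept WITH (FORMAT binary)):

```sql
-- Text format: portable, endian-safe, but the server parses every field.
COPY file_records (path, size, mtime) FROM '/tmp/catalog.tsv';

-- Binary format: skips text parsing, but as discussed above the client
-- must supply the server's binary wire representation for each type.
COPY file_records (path, size, mtime) FROM '/tmp/catalog.bin' WITH BINARY;
```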

Re: [PERFORM] insert and Update slow after implementing slony.

2008-12-17 Thread David Rees
On Tue, Dec 16, 2008 at 8:03 PM, Nimesh Satam wrote: > We are trying to implement slony as a replication tool for one of our > database. The Insert and updates have increased by approximately double > making some of our important script slow. What version of PostgreSQL are you running and on what

[PERFORM] insert and Update slow after implementing slony.

2008-12-16 Thread Nimesh Satam
Hi, We are trying to implement slony as a replication tool for one of our database. The Insert and updates have increased by approximately double making some of our important script slow. The database in concern is a warehouse and we have added additional primary key to support slony by using def

Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-04 Thread Heikki Linnakangas
andrew klassen wrote: I am using the c-library interface and for these particular transactions I preload PREPARE statements. Then as I get requests, I issue a BEGIN, followed by at most 300 EXECUTES and then a COMMIT. That is the general scenario. What value beyond 300 should I try? Make sure

Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-04 Thread Tom Lane
andrew klassen <[EMAIL PROTECTED]> writes: > I am using the c-library interface and for these particular transactions > I preload PREPARE statements. Then as I get requests, I issue a BEGIN, > followed by at most 300 EXECUTES and then a COMMIT. That is the > general scenario. What value beyond 300

Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-04 Thread PFC
size of the table+indexes when it gets slow?

Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-04 Thread andrew klassen
involves file I/O) improve the above scenario? Thanks.

Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-04 Thread James Mansion
andrew klassen wrote: I'll try adding more threads to update the table as you suggest. You could try materially increasing the update batch size too. As an exercise you could see what the performance of COPY is by backing out the data and reloading it from a suitable file.

Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-04 Thread James Mansion
Matthew Wakeling wrote: If you're running a "work queue" architecture, that probably means you only have one thread doing all the updates/inserts? It might be worth going multi-threaded, and issuing inserts and updates through more than one connection. Postgres is designed pretty well to scale

Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-04 Thread andrew klassen
[EMAIL PROTECTED]> To: pgsql-performance@postgresql.org Sent: Wednesday, June 4, 2008 10:10:38 AM Subject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rows On Wed, 4 Jun 2008, andrew klassen wrote: > I am using multiple threads, but only one worker thread for insert/u


Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-04 Thread Matthew Wakeling
On Wed, 4 Jun 2008, andrew klassen wrote: I am using multiple threads, but only one worker thread for insert/updated to this table. I don't mind trying to add multiple threads for this table, but my guess is it would not help because basically the overall tps rate is decreasing so dramatically.

Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-04 Thread andrew klassen
sen <[EMAIL PROTECTED]> Cc: pgsql-performance@postgresql.org Sent: Wednesday, June 4, 2008 5:31:22 AM Subject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rows On Tue, 3 Jun 2008, andrew klassen wrote: > Basically, I have a somewhat constant rate of inserts/updates that go

Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-04 Thread Matthew Wakeling
On Tue, 3 Jun 2008, andrew klassen wrote: Basically, I have a somewhat constant rate of inserts/updates that go into a work queue and then get passed to postgres. The cpu load is not that high, i.e. plenty of idle cpu. I am running an older version of freebsd and the iostat output is not very

Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-03 Thread Scott Marlowe
On Tue, Jun 3, 2008 at 4:36 PM, andrew klassen <[EMAIL PROTECTED]> wrote: > The cpu load is not that high, i.e. plenty of idle cpu. I am running an > older > version of freebsd and the iostat output is not very detailed. > During this time, the number is low < 10Mbs. The system has an > LSI Logic

Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-03 Thread andrew klassen
3, 2008 7:15:10 PM Subject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rows On Wed, 04 Jun 2008 00:36:09 +0200, andrew klassen <[EMAIL PROTECTED]> wrote: > Running postgres 8.2.5 > I have a table that has 5 indices, no foreign keys or any > depende

Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-03 Thread PFC
On Wed, 04 Jun 2008 00:36:09 +0200, andrew klassen <[EMAIL PROTECTED]> wrote: Running postgres 8.2.5. I have a table that has 5 indices, no foreign keys or any dependency on any other table. If I delete the database and start entering entries, everything works very well until I get to some poin

[PERFORM] insert/update tps slow with indices on table > 1M rows

2008-06-03 Thread andrew klassen
Running postgres 8.2.5. I have a table that has 5 indices, no foreign keys or any dependency on any other table. If I delete the database and start entering entries, everything works very well until I get to some point (let's say 1M rows). Basically, I have a somewhat constant rate of inserts/upda

[PERFORM] Insert time

2008-04-01 Thread Ioana Danes
Hi everyone, I am running a test with 1 thread calling a stored procedure in an endless loop. The stored procedure inserts 1000 records in a table that does not have indexes or constraints. In the log file I see that the time to execute the procedure sometimes jumps from 100 ms to 700 ms. The a

Re: [PERFORM] Insert Statements Hanging

2007-07-25 Thread Alan Hodgson
On Wednesday 25 July 2007 13:27, Pallav Kalva <[EMAIL PROTECTED]> wrote: > I am hoping "SELECT 1 FROM ONLY "provisioning"."account" x WHERE > "accountid" = $1 FOR UPDATE OF x" is causing the problem. If that is the > case why doesn't it show in the pg_stat_activity view ? or am I missing > somethin

[PERFORM] Insert Statements Hanging

2007-07-25 Thread Pallav Kalva
Hi, I am having problems with some of the Insert statements in the prod database. Our client application is trying to insert some of the records and they are not going through; they are just hanging. They are running in a transaction and somehow it is not telling us what it is waiting on. He

Re: [PERFORM] insert vs select into performance

2007-07-23 Thread Michael Stone
On Wed, Jul 18, 2007 at 09:13:14PM +0200, Thomas Finneid wrote: Michael Stone wrote: I don't understand how the insert you described is table to table? SELECT INTO is table to table, so is INSERT INTO SELECT FROM. I could have sworn that at least one of the examples you gave didn't have any

Re: [PERFORM] insert vs select into performance

2007-07-18 Thread Heikki Linnakangas
Adriaan van Os wrote: > So, how does one (temporarily) disable WAL logging ? Or, for example, > disable WAL logging for a temporary table ? Operations on temporary tables are never WAL logged. Operations on other tables are, and there's no way to disable it. -- Heikki Linnakangas EnterpriseD

Re: [PERFORM] insert vs select into performance

2007-07-18 Thread Adriaan van Os
Michael Stone wrote: On Tue, Jul 17, 2007 at 10:58:01PM +0200, Thomas Finneid wrote: I am not sure I understand you correctly here, are you saying that SELECT INTO in 8.1 disables WAL logging and uses just a single fsync at the end? in that case it means that I could disable WAL as well and ac

Re: [PERFORM] insert vs select into performance

2007-07-18 Thread Thomas Finneid
Michael Stone wrote: On Tue, Jul 17, 2007 at 11:01:15PM +0200, Thomas Finneid wrote: Arjen van der Meijden wrote: Have you also tried the COPY-statement? Afaik select into is similar to what happens in there. No, because it only works on file to db or vice versa not table to table. I don'
