Re: [PERFORM] very very slow inserts into very large table

2012-07-16 Thread Satoshi Nagayasu
2012/07/16 22:37, Jon Nelson wrote:
> I have a single *table* that is some 560GB in size, 6 columns, average
> row width 63. There are approximately 6.1 billion rows. It has 4 indices,
> 131, 313, 131 and 190 GB in size, respectively. All are btree indices.
> I tried inserting new data into the table, and it's taking a *very*
> long time. …

Re: [PERFORM] very very slow inserts into very large table

2012-07-16 Thread Craig Ringer
On 07/17/2012 01:56 AM, Jon Nelson wrote:
> What is the greater lesson to take away here? If you are working with
> data that is larger (substantially larger) than available memory, is
> the architecture and design of PostgreSQL such that the only real
> approach is some type of data partitioning? …
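
For readers wondering what that partitioning looks like in practice, here is a minimal sketch of the inheritance-plus-trigger scheme available in the 9.1 era (the table, column, and date range are hypothetical, not from the thread):

    -- Parent table; each child holds one date-ranged slice of the data.
    CREATE TABLE events (ts timestamptz NOT NULL, payload integer);

    CREATE TABLE events_2012_07 (
        CHECK (ts >= '2012-07-01' AND ts < '2012-08-01')
    ) INHERITS (events);

    -- Route inserts on the parent to the matching child.
    CREATE OR REPLACE FUNCTION events_insert_router() RETURNS trigger AS $$
    BEGIN
        IF NEW.ts >= '2012-07-01' AND NEW.ts < '2012-08-01' THEN
            INSERT INTO events_2012_07 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for timestamp %', NEW.ts;
        END IF;
        RETURN NULL;  -- the row has already been stored in the child
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER events_insert_trg
        BEFORE INSERT ON events
        FOR EACH ROW EXECUTE PROCEDURE events_insert_router();

The relevance to the insert problem is that only the active partition's much smaller indexes need to stay hot in memory.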

Re: [PERFORM] very very slow inserts into very large table

2012-07-16 Thread Claudio Freire
On Mon, Jul 16, 2012 at 4:16 PM, Mark Thornton wrote:
>> Actually, it should create a temporary index btree and merge[0] them.
>> Only worth it if there are really a lot of rows.
>>
>> [0] http://www.ccs.neu.edu/home/bradrui/index_files/parareorg.pdf
>
> I think 93 million would qualify as a lot of rows. …
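
The paper describes work inside the engine, but a rough user-level approximation of the same idea — staging a batch and inserting it pre-sorted on one index's key, so that b-tree leaf pages are visited in order rather than at random — can be sketched like this (table and column names are hypothetical):

    -- Stage the batch, then insert in index-key order to improve locality.
    CREATE TEMP TABLE batch (LIKE big_table INCLUDING DEFAULTS);
    COPY batch FROM '/tmp/batch.csv' WITH (FORMAT csv);
    INSERT INTO big_table SELECT * FROM batch ORDER BY key_col;

Sorting can only match one index's order, of course, which is exactly why the observation elsewhere in this thread — that the other three indexes are still updated in random order — continues to apply.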

Re: [PERFORM] very very slow inserts into very large table

2012-07-16 Thread Mark Thornton
On 16/07/12 20:08, Claudio Freire wrote:
> On Mon, Jul 16, 2012 at 3:59 PM, Mark Thornton wrote:
>> 4. The most efficient way for the database itself to do the updates
>> would be to first insert all the data in the table, and then update
>> each index in turn, having first sorted the inserted keys in the
>> appropriate order for that index. …

Re: [PERFORM] very very slow inserts into very large table

2012-07-16 Thread Claudio Freire
On Mon, Jul 16, 2012 at 3:59 PM, Mark Thornton wrote:
> 4. The most efficient way for the database itself to do the updates
> would be to first insert all the data in the table, and then update
> each index in turn, having first sorted the inserted keys in the
> appropriate order for that index. …
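
That observation is also the rationale for the usual bulk-load workaround: drop the indexes, load the data, and rebuild, so that each CREATE INDEX performs one big sort instead of billions of random leaf-page updates. A sketch under hypothetical names (whether it wins depends on how large the batch is relative to the table):

    DROP INDEX big_a_idx, big_b_idx, big_c_idx, big_d_idx;
    COPY big_table FROM '/tmp/new_rows.csv' WITH (FORMAT csv);
    SET maintenance_work_mem = '1GB';  -- more sort memory speeds index builds
    CREATE INDEX big_a_idx ON big_table (a);
    -- ...and likewise for the remaining three indexes.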

Re: [PERFORM] very very slow inserts into very large table

2012-07-16 Thread Mark Thornton
On 16/07/12 18:56, Jon Nelson wrote:
> It is not my intent to insult or even disparage my favorite software,
> but it took less time to *build* the indices for 550GB of data than it
> would have to insert 1/20th as much. That doesn't seem right.

My explanation would apply to many databases, not just PostgreSQL…

Re: [PERFORM] very very slow inserts into very large table

2012-07-16 Thread Jeff Janes
On Mon, Jul 16, 2012 at 10:35 AM, Samuel Gendler wrote:
> On Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton wrote:
>> Every insert updates four indexes, so at least 3 of those will be in
>> random order. The indexes don't fit in memory, so all those updates
>> will involve reading most of the relevant b-tree pages from disk (or
>> at least the leaf level). …

Re: [PERFORM] very very slow inserts into very large table

2012-07-16 Thread Jon Nelson
On Mon, Jul 16, 2012 at 12:35 PM, Samuel Gendler wrote:
> On Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton wrote:
>> Every insert updates four indexes, so at least 3 of those will be in
>> random order. The indexes don't fit in memory, so all those updates
>> will involve reading most of the relevant b-tree pages from disk (or
>> at least the leaf level). …

Re: [PERFORM] very very slow inserts into very large table

2012-07-16 Thread Samuel Gendler
On Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton wrote:
> Every insert updates four indexes, so at least 3 of those will be in
> random order. The indexes don't fit in memory, so all those updates
> will involve reading most of the relevant b-tree pages from disk (or
> at least the leaf level). …
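
A back-of-envelope check makes the same point numerically. Assuming a commodity disk at roughly 8 ms per random read (an assumption, not a figure from the thread) and taking the thread's 93 million rows with three out-of-order index updates each:

    93e6 rows x 3 random leaf reads x ~8 ms/read ≈ 2.2e6 s ≈ 26 days

of pure seek time — which is why the cache-miss behaviour of the indexes dominates everything else here.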

Re: [PERFORM] query overhead

2012-07-16 Thread Craig Ringer
On 07/16/2012 06:13 PM, Andy Halsall wrote:
> Thanks for the responses. I take the points - the times we're dealing
> with are very small. Sorry, but I'm a bit confused by the suggestions
> around function types / prepared statements; I probably haven't been
> clear in my question: I'm invoking a PL/PgSQL…
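
For anyone following the prepared-statement suggestion: the idea is simply to parse and plan the call once and reuse it. A minimal sketch, with a hypothetical function name:

    -- Plan the function call once...
    PREPARE call_it(integer) AS SELECT my_func($1);
    -- ...then each subsequent execution skips parse/plan overhead.
    EXECUTE call_it(42);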

Re: [PERFORM] very very slow inserts into very large table

2012-07-16 Thread Mark Thornton
On 16/07/12 14:37, Jon Nelson wrote:
> I have a single *table* that is some 560GB in size, 6 columns, average
> row width 63. There are approximately 6.1 billion rows. It has 4 indices,
> 131, 313, 131 and 190 GB in size, respectively. All are btree indices.
> I tried inserting new data into the table, and it's taking a *very*
> long time. …

[PERFORM] very very slow inserts into very large table

2012-07-16 Thread Jon Nelson
I have a single *table* that is some 560GB in size, 6 columns, average row width 63. There are approximately 6.1 billion rows. It has 4 indices, 131, 313, 131 and 190 GB in size, respectively. All are btree indices. I tried inserting new data into the table, and it's taking a *very* long time. …
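
The index sizes quoted above can be read straight out of the statistics views; a sketch for reproducing that kind of report (table name hypothetical):

    SELECT indexrelname,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM pg_stat_user_indexes
    WHERE relname = 'my_big_table';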

Re: [PERFORM] PostgreSQL index issue

2012-07-16 Thread Albe Laurenz
codevally wrote:
> I have a question regarding PostgreSQL 9.1 indexing.
>
> I have a table and want to create an index on a column, and I want to
> store the data with time zone in that column. The questions are:
>
> 1. Can I create an index on a column which stores timestamp with time
> zone? …
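
The short answer to question 1 is yes: a timestamp with time zone column is indexable like any other. A minimal example (table and column names hypothetical):

    CREATE TABLE events (created_at timestamptz NOT NULL);
    CREATE INDEX events_created_at_idx ON events (created_at);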