2012/07/16 22:37, Jon Nelson wrote:
I have a single *table* that is some 560GB in size, 6 columns, average
row width 63.
There are approximately 6.1 billion rows.
It has 4 indices, 131, 313, 131 and 190 GB in size, respectively. All
are btree indices.
I tried inserting new data into the table, and it's taking a *very* long time.
On 07/17/2012 01:56 AM, Jon Nelson wrote:
What is the greater lesson to take away, here? If you are working with
data that is larger (substantially larger) than available memory, is
the architecture and design of postgresql such that the only real
approach is some type of data partitioning? It
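For reference, partitioning in the 9.1/9.2 era meant table inheritance plus CHECK constraints rather than a declarative syntax. A minimal sketch, with table and column names invented purely for illustration (they are not from the thread):

    CREATE TABLE events (
        id         bigint      NOT NULL,
        created_at timestamptz NOT NULL,
        payload    text
    );

    -- One child table per time range; inserts for a given period touch only
    -- that child's (much smaller) indexes, which can stay resident in memory.
    CREATE TABLE events_2012_07 (
        CHECK (created_at >= '2012-07-01' AND created_at < '2012-08-01')
    ) INHERITS (events);
    CREATE INDEX events_2012_07_created_at_idx ON events_2012_07 (created_at);

    -- Lets the planner skip partitions whose CHECK constraint rules them out.
    SET constraint_exclusion = partition;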
On Mon, Jul 16, 2012 at 4:16 PM, Mark Thornton wrote:
>> Actually, it should create a temporary index btree and merge[0] them.
>> Only worth it if there are really a lot of rows.
>>
>> [0] http://www.ccs.neu.edu/home/bradrui/index_files/parareorg.pdf
>
> I think 93 million would qualify as a lot of rows.
On 16/07/12 20:08, Claudio Freire wrote:
On Mon, Jul 16, 2012 at 3:59 PM, Mark Thornton wrote:
4. The most efficient way for the database itself to do the updates would be
to first insert all the data in the table, and then update each index in
turn having first sorted the inserted keys in the appropriate order for that index.
On Mon, Jul 16, 2012 at 3:59 PM, Mark Thornton wrote:
> 4. The most efficient way for the database itself to do the updates would be
> to first insert all the data in the table, and then update each index in
> turn having first sorted the inserted keys in the appropriate order for that
> index.
A
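From plain SQL, the closest approximation to that strategy is to drop the indexes, bulk-load, and rebuild, since CREATE INDEX reads the heap once and writes each btree from sorted data in a single pass. A sketch, assuming hypothetical index and staging-table names:

    BEGIN;
    DROP INDEX big_table_col1_idx, big_table_col2_idx,
               big_table_col3_idx, big_table_col4_idx;

    -- Bulk-load the new rows (COPY, or INSERT ... SELECT from a staging table).
    INSERT INTO big_table SELECT * FROM staging;

    -- Each rebuild is one sequential heap scan plus an external sort,
    -- instead of one random leaf-page update per row per index.
    SET LOCAL maintenance_work_mem = '2GB';
    CREATE INDEX big_table_col1_idx ON big_table (col1);
    CREATE INDEX big_table_col2_idx ON big_table (col2);
    CREATE INDEX big_table_col3_idx ON big_table (col3);
    CREATE INDEX big_table_col4_idx ON big_table (col4);
    COMMIT;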
On 16/07/12 18:56, Jon Nelson wrote:
It is not my intent to
insult or even disparage my favorite software, but it took less time
to *build* the indices for 550GB of data than it would have to insert
1/20th as much. That doesn't seem right.
My explanation would apply to many databases, not just PostgreSQL.
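A rough back-of-envelope estimate makes the asymmetry concrete. Taking the ~93 million new rows mentioned earlier in the thread, three indexes whose relevant pages are not cached, and roughly 8 ms per random page read (the 8 ms figure is an assumption, not a number from the thread):

    93,000,000 rows x 3 uncached indexes x ~8 ms per random leaf read
      ~= 2,200,000 s ~= 26 days of random I/O, before any page writes

    A sorted index build instead does one sequential heap scan plus an external
    sort per index, so its I/O is mostly sequential and far cheaper per row.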
On Mon, Jul 16, 2012 at 10:35 AM, Samuel Gendler wrote:
> On Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton wrote:
>>>
>>>
>> Every insert updates four indexes, so at least 3 of those will be in
>> random order. The indexes don't fit in memory, so all those updates will
>> involve reading most of the relevant b-tree pages from disk (or at least
>> the leaf level).
On Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton wrote:
>
>> Every insert updates four indexes, so at least 3 of those will be in
> random order. The indexes don't fit in memory, so all those updates will
> involve reading most of the relevant b-tree pages from disk (or at least
> the leaf level).
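A quick way to see how badly those index page reads are missing PostgreSQL's buffer cache is the per-index I/O statistics view (the table name below is a placeholder):

    SELECT indexrelname,
           idx_blks_read,   -- block reads that had to go to the OS/disk
           idx_blks_hit,    -- block reads served from shared_buffers
           round(idx_blks_hit::numeric
                 / nullif(idx_blks_hit + idx_blks_read, 0), 3) AS hit_ratio
    FROM pg_statio_user_indexes
    WHERE relname = 'big_table';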
On 07/16/2012 06:13 PM, Andy Halsall wrote:
Thanks for the responses. I take the points - the times we're dealing
with are very small. Sorry but I'm a bit confused by the suggestions
around function types / prepared statements, but probably haven't been
clear in my question: I'm invoking a PL/P
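For readers following the function-versus-prepared-statement suggestion, the two shapes being compared look roughly like this (table, column, and function names are made up for illustration):

    -- A server-side PL/pgSQL function wrapping a single lookup:
    CREATE FUNCTION get_item(p_id integer) RETURNS text
    LANGUAGE plpgsql STABLE AS $$
    BEGIN
        RETURN (SELECT name FROM items WHERE id = p_id);
    END;
    $$;

    -- Roughly the same work as a client-managed prepared statement,
    -- planned once and reused on every EXECUTE:
    PREPARE get_item_stmt(integer) AS
        SELECT name FROM items WHERE id = $1;
    EXECUTE get_item_stmt(42);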
On 16/07/12 14:37, Jon Nelson wrote:
I have a single *table* that is some 560GB in size, 6 columns, average
row width 63.
There are approximately 6.1 billion rows.
It has 4 indices, 131, 313, 131 and 190 GB in size, respectively. All
are btree indices.
I tried inserting new data into the table, and it's taking a *very* long time.
I have a single *table* that is some 560GB in size, 6 columns, average
row width 63.
There are approximately 6.1 billion rows.
It has 4 indices, 131, 313, 131 and 190 GB in size, respectively. All
are btree indices.
I tried inserting new data into the table, and it's taking a *very* long time.
I p
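Per-index sizes like those quoted above can be reproduced with the catalog size functions (the table name is a placeholder):

    SELECT c.relname AS index_name,
           pg_size_pretty(pg_relation_size(c.oid)) AS index_size
    FROM pg_class c
    JOIN pg_index i ON i.indexrelid = c.oid
    WHERE i.indrelid = 'big_table'::regclass;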
codevally wrote:
> I have a question regarding PostgreSQL 9.1 indexing.
>
> I have a table and want to create an index on a column, and I want to
> store the data with time zone in that column. The questions are:
>
> 1. Can I create an index on a column which stores timestamp with time
> zone?
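Timestamp with time zone is an ordinary btree-indexable type, so the answer to that first question is yes. A minimal sketch, with illustrative names:

    CREATE TABLE events (
        id         serial PRIMARY KEY,
        created_at timestamp with time zone NOT NULL
    );

    CREATE INDEX events_created_at_idx ON events (created_at);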