You could even start from the source of the standard pg_dump. You could then
eliminate the per-customer schemas/tables, add an extra 'customer' key column
to each table, and modify pg_dump so it dumps only the rows of each table that
match a given customer id.
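If modifying pg_dump is too heavy, a minimal client-side sketch of the same
filtering idea is just a per-table COPY with a WHERE clause (table and column
names below are invented for illustration; psql's \copy does the same thing
without needing superuser rights):

    -- Invented names: every table is assumed to carry a customer_id column.
    COPY (SELECT * FROM orders   WHERE customer_id = 42)
        TO '/tmp/orders_customer42.csv'   WITH (FORMAT csv, HEADER);
    COPY (SELECT * FROM invoices WHERE customer_id = 42)
        TO '/tmp/invoices_customer42.csv' WITH (FORMAT csv, HEADER);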
Mark Thornton
On 16/07/12 20:08, Claudio Freire wrote:
On Mon, Jul 16, 2012 at 3:59 PM, Mark Thornton wrote:
4. The most efficient way for the database itself to do the updates would be
to first insert all the data in the table, and then update each index in
turn, having first sorted the inserted keys in the order appropriate for that index.
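From the client side, the closest practical approximation of that (table and
index names invented; a sketch, not the thread's actual commands) is to load
with the indexes absent and rebuild them afterwards, so each index is built
from one sort instead of millions of random page updates:

    -- Invented names; sketch only.
    DROP INDEX IF EXISTS big_table_key_idx;
    COPY big_table FROM '/path/to/data.csv' WITH (FORMAT csv);
    CREATE INDEX big_table_key_idx ON big_table (key);   -- one sort per index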
On 16/07/12 18:56, Jon Nelson wrote:
It is not my intent to
insult or even disparage my favorite software, but it took less time
to *build* the indices for 550GB of data than it would have to insert
1/20th as much. That doesn't seem right.
My explanation would apply to many databases, not just PostgreSQL. Each
inserted row requires reading the relevant b-tree pages from disk (or at least
the leaf level). A total of 10ms of random read from disk (per inserted
row) wouldn't surprise me ... which adds up to more than 10 days for
your 93 million rows (93,000,000 × 10 ms ≈ 930,000 s ≈ 10.8 days).
Mark Thornton
On 11/07/12 21:18, Craig James wrote:
It strikes me as a contrived case rather than a use case. What sort
of app repeatedly fills and truncates a small table thousands of times
... other than a test app to see whether you can do it or not?
If I have a lot of data which updates/inserts an existing table ...
On 11/06/12 14:52, Shaun Thomas wrote:
On 06/11/2012 08:46 AM, Mark Thornton wrote:
500MB --- though it isn't clear whether CLUSTER uses maintenance memory or the
regular work memory. I could readily use a higher value for
maintenance_work_mem.
For an operation like that, having a full GB would help.
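Since maintenance_work_mem can be raised per session without touching
postgresql.conf, a minimal way to try that (table and index names invented):

    SET maintenance_work_mem = '1GB';          -- affects this session only
    CLUSTER some_big_table USING some_big_table_geom_idx;
    RESET maintenance_work_mem;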
1. 327600 rows, 105MB, 15.8s
2. 770165 rows, 232MB, 59.5s
3. 1437041 rows, 424MB, 140s
4. 3980922 rows, 1167MB, 276s
5. 31843368 rows, 9709MB, ~ 10 hours
Server is version 9.1, with PostGIS 1.5.4.
Regards,
Mark Thornton
PostgreSQL sometimes gets updates involving a FROM clause wrong --- the
resulting table is correct but the running time is quadratic. You might want to
try a series of smaller examples to see if your query displays this
behaviour.
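For reference, a generic example of the UPDATE ... FROM shape being discussed
(table names invented); when the join is handled badly, the source table ends
up effectively re-scanned for every target row, which is what the quadratic
running time looks like:

    -- Invented names; the join on id is what should keep this roughly linear.
    UPDATE target t
       SET value = s.value
      FROM staging s
     WHERE t.id = s.id;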
Mark Thornton
On 05/03/2011 09:37, Tobias Brox wrote:
Sorry for not responding directly to your question and for changing
the subject ... ;-)
On 4 March 2011 18:18, Landreville wrote:
That is partitioned into about 3000 tables by the switchport_id (FK to
a lookup table), each table has about 30 000 rows currently ...
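For readers unfamiliar with that layout: in PostgreSQL of this era such
partitioning is inheritance-based, roughly as follows (all names invented,
shown only to illustrate the structure being described):

    CREATE TABLE samples (
        switchport_id integer     NOT NULL,
        sampled_at    timestamptz NOT NULL,
        octets        bigint
    );
    -- One child table per switchport_id, with a CHECK constraint so
    -- constraint_exclusion can prune it from queries on other switchports.
    CREATE TABLE samples_sp_17 (CHECK (switchport_id = 17)) INHERITS (samples);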
On 04/03/2011 16:07, Robert Haas wrote:
That seems quite surprising. There are only 14 rows in the table but
PG thinks 2140? Do you have autovacuum turned on? Has this table
been analyzed recently?
I think autovacuum is enabled, but as a temporary table LinkIds has only
existed for a very short time ...
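One point worth adding: autovacuum never processes temporary tables, so a temp
table only gets statistics if you ANALYZE it yourself. A minimal sketch (the
real LinkIds is built by the thread's query; the contents here are invented):

    CREATE TEMP TABLE linkids (id bigint);
    INSERT INTO linkids VALUES (1), (2), (3);   -- stand-in for the real population step
    -- Without this, the planner falls back to a default estimate (such as the
    -- 2140-row figure discussed above) instead of the actual row count.
    ANALYZE linkids;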
On 04/03/2011 16:07, Robert Haas wrote:
On Fri, Mar 4, 2011 at 6:40 AM, Mark Thornton wrote:
I can achieve this manually by rewriting the query as a union between
queries against each of the child tables. Is there a better way? (I'm using
PostGreSQL 8.4 with PostGIS 1.4).
Can you post ...
It would be much faster if it did an index scan on each of the child
tables and merged the results.
I can achieve this manually by rewriting the query as a union between
queries against each of the child tables. Is there a better way? (I'm
using PostGreSQL 8.4 with PostGIS 1.4).
Regards,
Mark Thornton
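As an illustration of the manual rewrite mentioned above (all names invented;
the real schema uses PostGIS geometry and many more child tables): querying
each child explicitly lets every branch use its own index scan, and the
results are merged with UNION ALL.

    -- Invented names; one branch per child table of interest.
    SELECT * FROM child_table_1 WHERE sampled_at >= '2011-03-01'
    UNION ALL
    SELECT * FROM child_table_2 WHERE sampled_at >= '2011-03-01';
    -- ... repeated for each remaining child table.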