Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Torsten Zühlsdorff
Mladen Gogala wrote: Well, the problem will not go away. As I've said before, all other databases have that feature, and none of the reasons listed here convinced me that everybody else has a crappy optimizer. The problem may go away altogether if people stop using PostgreSQL. A common
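
A workaround frequently suggested for slow exact counts, added here only as a hedged illustration and not taken from the quoted message, is to read the planner's row estimate from the catalog instead of scanning the whole table. The table name my_table is an assumption:

    -- Approximate row count from planner statistics; kept up to date by
    -- VACUUM/ANALYZE, so it is an estimate, not an exact count.
    SELECT reltuples::bigint AS approximate_rows
    FROM pg_class
    WHERE relname = 'my_table';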

Re: [PERFORM] Using more tha one index per table

2010-07-24 Thread Torsten Zühlsdorff
Craig James wrote: The problem is that Google ranks pages based on inbound links, so older versions of Postgres *always* come up before the latest version in page ranking. Since 2009 you can deal with this by defining a canonical version.

Re: [PERFORM] Using more tha one index per table

2010-07-23 Thread Torsten Zühlsdorff
Craig James wrote: A useful trick to know is that if you replace the version number with current, you'll get to the latest version most of the time (sometimes the name of the page changes between versions, too, but this isn't that frequent). The docs pages could perhaps benefit from an

Re: [PERFORM] How to insert a bulk of data with unique-violations very fast

2010-06-09 Thread Torsten Zühlsdorff
Pierre C wrote: Within the data to import, most rows have 20 to 50 duplicates, sometimes much more, sometimes less. In that case (the source data has lots of redundancy), after importing the data chunks in parallel, you can run a first pass of de-duplication on the chunks, also in parallel,
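
A hedged sketch of that first per-chunk de-duplication pass; the names loader1, chunk_dedup, key_col and payload are assumptions, not from the thread:

    -- Collapse duplicates inside one imported chunk before merging it with
    -- the main table; each chunk can be handled in its own session.
    CREATE TEMPORARY TABLE chunk_dedup AS
    SELECT DISTINCT key_col, payload
    FROM loader1;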

Re: [PERFORM] How to insert a bulk of data with unique-violations very fast

2010-06-07 Thread Torsten Zühlsdorff
Pierre C wrote: Since you have lots of data you can use parallel loading. Split your data into several files and then do: CREATE TEMPORARY TABLE loader1 ( ... ) COPY loader1 FROM ... Use a TEMPORARY TABLE for this: you don't need crash recovery, since if something blows up, you can COPY it
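
A hedged, fleshed-out version of that loading step; the quoted message only shows the fragments above, so the names (loader1, target, key_col, payload) and the file path are assumptions:

    -- One loader session: fill a TEMPORARY table via COPY, then merge only
    -- rows whose key is not yet present in the target table.
    CREATE TEMPORARY TABLE loader1 (key_col text, payload text);

    COPY loader1 FROM '/path/to/chunk1.csv' WITH CSV;

    INSERT INTO target (key_col, payload)
    SELECT DISTINCT ON (key_col) key_col, payload
    FROM loader1 l
    WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.key_col = l.key_col)
    ORDER BY key_col;

With several sessions merging concurrently, a unique index on target.key_col is still needed to catch the rare race where two sessions try to insert the same key at the same moment.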

Re: [PERFORM] How to insert a bulk of data with unique-violations very fast

2010-06-06 Thread Torsten Zühlsdorff
Cédric Villemain wrote: I think you need to have a look at pgloader. It does COPY with error handling. Very effective. Thanks for this advice. I will have a look at it. Greetings from Germany, Torsten

Re: [PERFORM] How to insert a bulk of data with unique-violations very fast

2010-06-06 Thread Torsten Zühlsdorff
Scott Marlowe wrote: I have a set of unique data of about 150,000,000 rows. Regularly I get a list of data which contains many times more rows than the already stored set, often around 2,000,000,000 rows. Within these rows are many duplicates, and often the whole set of already stored data. I

[PERFORM] How to insert a bulk of data with unique-violations very fast

2010-06-02 Thread Torsten Zühlsdorff
Hello, I have a set of unique data of about 150,000,000 rows. Regularly I get a list of data which contains many times more rows than the already stored set, often around 2,000,000,000 rows. Within these rows are many duplicates, and often the whole set of already stored data. I want to store
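
A hedged sketch of one way to attack this on the PostgreSQL releases of that era (before ON CONFLICT existed): load the incoming list into a staging table, then insert only the keys the main table does not already hold. All object names (staging, stored_data, key_col) are assumptions:

    -- Anti-join merge: keep only rows whose key is not yet stored.
    INSERT INTO stored_data (key_col)
    SELECT DISTINCT s.key_col
    FROM staging s
    LEFT JOIN stored_data d ON d.key_col = s.key_col
    WHERE d.key_col IS NULL;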

Re: [PERFORM] Are folks running 8.4 in production environments? and 8.4 and slon 1.2?

2009-10-17 Thread Torsten Zühlsdorff
Tory M Blue wrote: Any issues, has it baked long enough, is it time for us 8.3 folks to deal with the pain and upgrade? I've upgraded all my databases to 8.4. The pain was not so big; the new -j parameter of pg_restore is fantastic. I really like the new functions around PL/pgSQL. All

Re: [PERFORM] Why is vacuum_freeze_min_age 100m?

2009-08-12 Thread Torsten Zühlsdorff
Tom Lane wrote: Josh Berkus j...@agliodbs.com writes: I've just been tweaking some autovacuum settings for a large database, and came to wonder: why does vacuum_max_freeze_age default to such a high number? What's the logic behind that? (1) not destroying potentially useful forensic evidence
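
For readers following up on the freeze discussion, a standard catalog query (not taken from the quoted messages, added only as a hedged illustration) shows the current setting and how old each table's oldest unfrozen transaction ID is, which is what these parameters govern:

    -- Current freeze threshold and the tables closest to needing a freeze.
    SHOW vacuum_freeze_min_age;

    SELECT relname, age(relfrozenxid) AS xid_age
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY age(relfrozenxid) DESC
    LIMIT 10;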