Mladen Gogala wrote:
Well, the problem will not go away. As I've said before, all other
databases have that feature and none of the reasons listed here
convinced me that everybody else has a crappy optimizer. The problem
may go away altogether if people stop using PostgreSQL.
A common
Craig James wrote:
The problem is that Google ranks pages based on inbound links, so
older versions of Postgres *always* come up before the latest version
in page ranking.
Since 2009 you can deal with this by declaring a canonical version (a rel="canonical" link that points search engines at the current docs page).
Craig James wrote:
A useful trick to know: if you replace the version number in the URL
with "current", you'll get to the latest version most of the time
(sometimes the name of the page changes between versions too, but that
isn't frequent).
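For example, http://www.postgresql.org/docs/current/static/sql-copy.html
points at the newest release, while
http://www.postgresql.org/docs/8.4/static/sql-copy.html stays on 8.4.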
The docs pages could perhaps benefit from an
Pierre C wrote:
Within the data to import, most rows have 20 to 50 duplicates,
sometimes much more, sometimes less.
In that case (source data has lots of redundancy), after importing the
data chunks in parallel, you can run a first pass of de-duplication on
the chunks, also in parallel,
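A minimal sketch of such a per-chunk pass, assuming each chunk was loaded
into its own staging table (loader1, loader2, ...), each in its own session
so the passes run in parallel:

-- run in the same session that COPYed the chunk;
-- one session per chunk keeps the passes parallel
CREATE TEMPORARY TABLE loader1_dedup AS
    SELECT DISTINCT * FROM loader1;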
Pierre C wrote:
Since you have lots of data you can use parallel loading.
Split your data into several files and then do:
CREATE TEMPORARY TABLE loader1 ( ... )
COPY loader1 FROM ...
Use a TEMPORARY TABLE for this: you don't need crash recovery, since if
something blows up, you can COPY it
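Fleshed out a little, the per-file sequence might look like this (the
column list and file path are placeholder assumptions; run one such
session per input file):

-- session for the first file; repeat with loader2, loader3, ... for the rest
CREATE TEMPORARY TABLE loader1 (id bigint, payload text);
COPY loader1 FROM '/data/import/chunk1.dat';

Note that COPY FROM a server-side file needs superuser rights; from psql
you can use \copy instead, which reads the file on the client.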
Cédric Villemain wrote:
I think you need to have a look at pgloader. It does COPY with error
handling. Very effective.
Thanks for this advice. I will have a look at it.
Greetings from Germany,
Torsten
Scott Marlowe wrote:
I have a set of unique data of about 150,000,000 rows. Regularly I get
a list of data which contains many times more rows than the already
stored set, often around 2,000,000,000 rows. Within these rows are many
duplicates, and often the already stored data is contained as well.
I
Hello,
I have a set of unique data of about 150,000,000 rows. Regularly I get
a list of data which contains many times more rows than the already
stored set, often around 2,000,000,000 rows. Within these rows are many
duplicates, and often the already stored data is contained as well.
I want to store
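Presumably the final step is to add only the rows that are not already
stored. On 8.3/8.4 (before INSERT ... ON CONFLICT existed) a common
pattern is an anti-join against the main table; a hedged sketch, where
data_store, id and payload are assumed names and loader1_dedup is a
de-duplicated staging table:

-- insert only rows whose key is not already present in the main table
INSERT INTO data_store (id, payload)
SELECT d.id, d.payload
FROM loader1_dedup d
WHERE NOT EXISTS (
    SELECT 1 FROM data_store s WHERE s.id = d.id
);

If several sessions run this merge at the same time they can still trip
over each other, so the final merge is usually done serially, or the
unique index on the main table is left to reject the stragglers.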
Tory M Blue wrote:
Any issues, has it baked long enough, is it time for us 8.3 folks to deal
with the pain and upgrade?
I've upgraded all my databases to 8.4. The pain was not so big, and the
new -j parameter of pg_restore is fantastic. I really like the new
features around PL/pgSQL. All
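One of the 8.4 additions, for example, is default values for function
arguments; a tiny illustrative sketch (the function name and body are
made up):

-- 8.4 lets arguments have defaults, so callers can omit them
CREATE FUNCTION row_batch(batch_size integer DEFAULT 1000)
RETURNS integer AS $$
BEGIN
    RETURN batch_size;
END;
$$ LANGUAGE plpgsql;

SELECT row_batch();   -- returns 1000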
Tom Lane wrote:
Josh Berkus <j...@agliodbs.com> writes:
I've just been tweaking some autovac settings for a large database, and
came to wonder: why does vacuum_max_freeze_age default to such a high
number? What's the logic behind that?
(1) not destroying potentially useful forensic evidence