Hey All,
I previously posted about the troubles I was having dumping a 1 TB (size
with indexes) table. The rows in the table can be very large. Using
Perl's DBD::Pg we were somehow able to add these very large rows
without running into the 1 GB row bug. With everyone's help I
determined
I was hoping to use pg_dump and not have to do a manual dump, but if
the latest solution (moving the ~300 MB rows elsewhere and dealing with
them later) does not work, I'll try that.
Thanks everyone.
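For anyone following along, here is a rough sketch of that "move the big rows aside" approach. The names here are hypothetical stand-ins (`big_table` with a bytea column `payload`), and the 300 MB cutoff is the one mentioned above; adjust both to your schema before trying anything like this:

```sql
-- Sketch only: big_table and payload are assumed names.
BEGIN;

-- Copy rows over ~300 MB into a side table...
CREATE TABLE big_table_oversized AS
  SELECT * FROM big_table
  WHERE octet_length(payload) > 300 * 1024 * 1024;

-- ...and remove them from the main table so pg_dump only sees
-- rows of a manageable size.
DELETE FROM big_table
  WHERE octet_length(payload) > 300 * 1024 * 1024;

COMMIT;
```

After dumping the main table normally, the oversized rows can be exported separately (for example with a custom DBD::Pg script, as in the original load).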
Merlin Moncure wrote:
On Fri, Dec 26, 2008 at 12:38 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Ted,
From: Tom Lane [...@sss.pgh.pa.us]
Sent: Wednesday, December 24, 2008 12:49 PM
To: Ted Allen
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Troubles dumping a very large table.
Ted Allen tal...@blackducksoftware.com writes:
during the upgrade
(NOTE: I tried sending this email from my Excite account and it appears
to have been blocked for whatever reason. If the message does get
double-posted, sorry for the inconvenience.)
Hey all,
Merry Christmas Eve, Happy Holidays, and all that good stuff. At my
work, I'm trying to
I've found that using joins tends to produce better results for the
big-table queries I run, though that's not always the case.
How about this option:
SELECT distinct ip_info.* FROM ip_info RIGHT JOIN network_events USING
(ip) RIGHT JOIN host_events USING (ip) WHERE
How many rows were deleted the last time you ran the query?
Chad's query looks good but here is another variation that may help.
Delete From ceroriesgo.salarios Where numero_patrono In (Select
ceroriesgo.salarios.numero_patrono From ceroriesgo.salarios Left Join
ceroriesgo.patronos Using
What indexes do those tables have? Any?
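If neither table is indexed on the join key, adding indexes is usually the first thing to try. A sketch using the names from the queries above (assuming `numero_patrono` is not already covered by a primary key in `patronos`):

```sql
-- Index the join key on both sides of the anti-join.
CREATE INDEX salarios_numero_patrono_idx
  ON ceroriesgo.salarios (numero_patrono);
CREATE INDEX patronos_numero_patrono_idx
  ON ceroriesgo.patronos (numero_patrono);

-- Refresh planner statistics so the new indexes get considered.
ANALYZE ceroriesgo.salarios;
ANALYZE ceroriesgo.patronos;
```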
Sidar López Cruz wrote:
Check this:
query: Delete From ceroriesgo.salarios Where numero_patrono Not In
(Select numero_patrono From ceroriesgo.patronos)
Seq Scan on salarios (cost=51021.78..298803854359.95 rows=14240077
width=6)
Filter: (NOT
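A cost estimate that large is typical of NOT IN with a big subquery: the planner (particularly on older releases) often cannot turn it into an anti-join and ends up re-scanning the subquery result per row. An untested sketch of the same delete rewritten with NOT EXISTS, which usually plans much better:

```sql
-- Same semantics as the NOT IN delete above (when numero_patrono
-- is non-null), but planned as an anti-join.
DELETE FROM ceroriesgo.salarios
WHERE NOT EXISTS (
    SELECT 1
    FROM ceroriesgo.patronos
    WHERE patronos.numero_patrono = salarios.numero_patrono
);
```

Note one semantic difference: NOT IN returns no rows if the subquery produces any NULLs, while NOT EXISTS does not have that trap.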
Stephan Szabo wrote:
On Wed, 6 Dec 2006, Rafael Martinez wrote:
We are having some problems with an UPDATE ... FROM SQL statement on
pg 8.1.4. It takes ages to finish. The problem is the Seq Scan of the
table 'mail': this table is over 6 GB without indexes, and when we send
thousands of
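The statement itself is cut off above, but the usual fix for a seq scan in UPDATE ... FROM is an index on the column the FROM clause joins against. A hypothetical sketch (`mail_id` and `queue` are assumed names, not from the original message):

```sql
-- Index the join column so the UPDATE can probe 'mail' instead of
-- scanning all 6 GB of it.
CREATE INDEX mail_mail_id_idx ON mail (mail_id);
ANALYZE mail;

-- The UPDATE ... FROM should then use the index, e.g.:
-- UPDATE mail SET status = q.status
--   FROM queue q
--  WHERE mail.mail_id = q.mail_id;
```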