larry price wrote:
On 12/19/05, Matthew Jarvis <[EMAIL PROTECTED]> wrote:
Anyhoo, last Thurs for the first time since I've been here (about 5 mo)
that data transfer didn't complete.
Have you figured out why it didn't complete?
No idea... I did notice that when I tried to manually ftp the dump file
(9.2 MB) up to park it on that server, the ftp timed out... don't know if
I ever saw that before, and it was only something like 30 minutes,
maybe 45....
For better or worse the update process uses a DELETE then COPY sequence
to update the tables on the site. The DELETE takes place, table is
updated, next DELETE, next table etc.... sometimes the process isn't
being completed as though something is timing out, so I was looking to
a) speed up the whole process and b) park at least a snapshot of the
data up on the server so I can just import it in and get the site back
to functionality
If the data is relatively similar from night to night you can speed up
the transfer quite a lot by using rsync, which works very well on
line-oriented data formats like SQL text dumps.
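A minimal sketch of what that nightly transfer might look like — the
paths, user, and host here are made up for illustration:

```shell
# Hypothetical sketch: /home/foo/bar.dump, "matthew", and the host name
# are assumptions, not from this thread. On a dump that is mostly the
# same night to night, rsync only sends the blocks that changed; -z also
# compresses what it does send, and --partial lets an interrupted
# transfer pick up where it left off instead of starting over.
rsync -az --partial /home/foo/bar.dump \
    [email protected]:/home/foo/bar.dump
```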
If you want to make sure that the tables never go away, even if the
process is interrupted partway through, make sure that each
delete/copy operation is wrapped in a transaction, something like:
BEGIN WORK;
DELETE FROM bar;
COPY bar FROM '/home/foo/bar.dump' WITH DELIMITER AS '|' NULL AS '';
COMMIT WORK;
that way if it blows up on you during the COPY operation, it's as if
the DELETE never happened.
Thanks for the idea Larry - that's a good one... I'm hampered by a)
ignorance and b) the assumption that my predecessor, who set all this
up, did things in the 'best' way possible... clearly from your suggestion
(and other things I've seen) that isn't the case....
Here is a chunk of the script which is repeated over and over for about
15 tables:
DELETE FROM bikes_componentgrp_bars;
--
-- PostgreSQL database dump
--
SET search_path = public, pg_catalog;
--
-- Data for TOC entry 1 (OID 5610955)
-- Name: bikes_componentgrp_bars; Type: TABLE DATA; Schema: public; Owner: postgres
--
COPY bikes_componentgrp_bars (barstypeid, name, sort, descr) FROM stdin;
1	drop bars	1	\N
2	flat bars	5	\N
3	STI touring bars	3	\N
4	"H"-style bars	4	\N
5	above-seat bars	6	\N
6	under-seat bars	7	\N
7	bullhorn bars	8	\N
8	triathlon bars	2	\N
9	choice	1000	\N
10	\N	0	\N
\.
I can get away with wrapping from before the DELETE to after the COPY
text with BEGIN WORK / COMMIT WORK - yes?
I'd rather that *nothing* get updated than just some of it, otherwise I
risk getting parent/child records out of whack... if it fails, that at
least buys me time to deal with it at my leisure, like 3am... <g>
Does BEGIN/COMMIT return an error code so I can determine success of the
whole process? Might be nice to get an email upon failure....
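One way to get both things at once — the whole-or-nothing update and a
failure email — is to check psql's exit status, which is nonzero when
the run fails. A hedged sketch; the database name "bikedb", the dump
path, and the notification address are made-up assumptions, and
--single-transaction requires a reasonably recent psql (8.1 or later;
with older versions, keep the BEGIN/COMMIT pairs in the file instead):

```shell
#!/bin/sh
# Hypothetical nightly-update wrapper. ON_ERROR_STOP=1 makes psql abort
# on the first error instead of plowing ahead, and --single-transaction
# wraps the whole file in one BEGIN/COMMIT, so any failure rolls every
# table back to its previous state. psql's exit status is the "error
# code": 0 on success, nonzero on failure, which is what we test here.
if ! psql --single-transaction --set ON_ERROR_STOP=1 \
        -f /home/foo/nightly.sql bikedb
then
    echo "nightly update failed at $(date)" |
        mail -s "site update FAILED" [email protected]
fi
```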
I like this plan.... <g>
Matthew S. Jarvis
IT Manager
Bike Friday - "Performance that Packs."
www.bikefriday.com
541/687-0487 x140
[EMAIL PROTECTED]
_______________________________________________
EUGLUG mailing list
[email protected]
http://www.euglug.org/mailman/listinfo/euglug