Ha. Looks like I'm going to have to write the two-db app regardless.
The surprising thing is, Postgres allowed me to insert strings that differ
only in case into tables where the column was declared "unique". MySQL,
however, considers them the same and throws an IntegrityError exception
when I try to load the CSV. Because of the way the CSV file loading works,
that means I have to scrub the database, rebuild it empty, and then try
again. I've loaded the same database 5 times now and hit a new
IntegrityError on each attempt.
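For what it's worth, the difference seems to come down to collation defaults: Postgres compares strings case-sensitively, while MySQL's default `*_ci` collations compare case-insensitively, so a UNIQUE column treats 'Joe' and 'JOE' as duplicates. Here's a minimal sketch (not web2py code) that emulates both behaviors with stdlib sqlite3, since SQLite lets you pick the collation per column:

```python
# Sketch of the Postgres-vs-MySQL difference using SQLite collations:
# BINARY is case-sensitive (Postgres-like), NOCASE is case-insensitive
# (MySQL default *_ci collations). The UNIQUE constraint follows the
# column's collation, so only NOCASE rejects the second insert.
import sqlite3

def try_case_variant_inserts(collation):
    conn = sqlite3.connect(":memory:")
    conn.execute(f"CREATE TABLE t (name TEXT UNIQUE COLLATE {collation})")
    conn.execute("INSERT INTO t VALUES ('Joe')")
    try:
        conn.execute("INSERT INTO t VALUES ('JOE')")  # differs only in case
        return "ok"
    except sqlite3.IntegrityError:
        return "IntegrityError"

print(try_case_variant_inserts("BINARY"))  # case-sensitive: ok
print(try_case_variant_inserts("NOCASE"))  # case-insensitive: IntegrityError
```

If I understand the MySQL docs right, one possible workaround on the MySQL side would be giving the column a case-sensitive (binary) collation such as `utf8mb4_bin`, so the Postgres data loads without collisions.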
Time for Plan B.
On Friday, September 16, 2016 at 10:36:35 PM UTC-7, Joe Barnhart wrote:
> Huh. Just tried db.export_to_csv_file() again and it wasn't nearly as
> slow as I thought before. It took about 2 minutes for a largish (but not
> my biggest) database, exported from Postgres. As a fun thing to try, I ran
> the same command under the most recent PyPy and the same export took only
> 23 seconds!! I had to do it twice and compare the output files to convince
> myself it actually worked. But it did.
> I'm not sure how long the import will take using Python into MySQL. PyPy
> is not an option here as I don't have an all-Python driver for MySQL that
> works with PyPy. I'm sure it will take longer than the export, but I don't
> have to do this very often...
> The idea about setting a "fence" and getting changes added since the last
> export is a pretty compelling one, however.
> -- Joe
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)