Huh.  Just tried db.export_to_csv_file() again and it wasn't nearly as slow 
as I thought before.  It took about 2 minutes for a largish (but not my 
biggest) database exported from Postgres.  As a fun thing to try, I ran the 
same command under the most recent PyPy and the same export took only 23 
seconds!  I had to run it twice and compare the output files to convince 
myself it had actually worked.  But it did.
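
For reference, the whole export is just a couple of lines run from the 
web2py shell (the app and file names below are only examples):

    # python web2py.py -S myapp -M   <- gives you `db` with models loaded
    # 'wb' as in the book's Python 2 examples
    with open('pg_dump.csv', 'wb') as ofile:
        db.export_to_csv_file(ofile)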

I'm not sure how long the import into MySQL will take using Python.  PyPy 
is not an option here, as I don't have an all-Python MySQL driver that 
works with PyPy.  I'm sure it will take longer than the export, but I don't 
have to do this very often...
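
The import side should be about as short.  Something like this is what I 
have in mind (untested; the connection string and filename are 
placeholders, and if I remember right import_from_csv_file takes an id_map 
dict so the DAL can renumber ids and fix up reference fields):

    from gluon.dal import DAL   # or: from pydal import DAL

    newdb = DAL('mysql://user:password@localhost/newdb')  # placeholder DSN
    # ... define the same tables on newdb here ...

    with open('pg_dump.csv', 'rb') as ifile:
        newdb.import_from_csv_file(ifile, id_map={})
    newdb.commit()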

-- Joe

The idea of setting a "fence" and picking up only the changes added since 
the last export is a pretty compelling one, however.
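
If I end up going the two-connection route from my earlier message below, 
the core of the "transition" script would look roughly like this (only a 
sketch; the connection strings, the person table, and the old_id/fence 
bookkeeping are made-up examples):

    from gluon.dal import DAL, Field   # or: from pydal import DAL, Field

    olddb = DAL('postgres://user:pass@localhost/olddb')  # placeholder DSN
    newdb = DAL('mysql://user:pass@localhost/newdb')     # placeholder DSN

    # The new-side table carries an extra old_id column during the move;
    # migrate=False keeps the DAL from touching the existing Postgres table.
    olddb.define_table('person', Field('name'), migrate=False)
    newdb.define_table('person', Field('name'), Field('old_id', 'integer'))

    def copy_table(tablename, fence=0):
        """Copy rows with id > fence from olddb to newdb; return new fence."""
        oldtable = olddb[tablename]
        for row in olddb(oldtable.id > fence).select(orderby=oldtable.id):
            values = row.as_dict()
            values.pop('id', None)       # let MySQL assign a fresh id
            values['old_id'] = row.id    # remember where it came from
            newdb[tablename].insert(**values)
            fence = row.id
        newdb.commit()
        return fence

    # Remapping a reference later means looking up the new id via old_id:
    # newdb(newdb.person.old_id == old_value).select(newdb.person.id).first()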

On Friday, September 16, 2016 at 2:28:09 PM UTC-7, Joe Barnhart wrote:
>
> I've been using PostgreSQL for my production site for quite a while.  My 
> future improved site, however, will feature MySQL as its database engine. 
> (Not a diss on Postgres; it's just a lot easier to find hosting companies 
> who will rent me a supported-and-maintained MySQL instance.)  I have 
> thousands of user logins and lots o' history I want, yea even NEED, to 
> preserve.  I've been reading up on strategies for converting from one 
> database to another.
>
> For example, I can dump the Postgres database in its entirety to CSV 
> using the DAL, then read the data back into MySQL.  That has some appeal, 
> but I couldn't help noticing how s--l-o-w the process is.  I let the dump 
> run for about an hour and it wasn't done yet.  Then there's the little 
> problem of the developer.  He just can't leave things alone.  The MySQL 
> database design is "much better" (read: different) than the Postgres 
> design.  (It makes no difference that I am actually the developer of 
> both.)  So I can't just export CSV and import it.  Not without editing 
> the CSV on the way.
>
> What if, instead, I create the MySQL tables with an extra column to hold 
> the "old" id from the Postgres database?  I could then develop a 
> "transition" class that opens connections to both databases, reads 
> records from the Postgres one, and inserts them (with whatever mods are 
> required) into the MySQL database.  When I process records from Postgres 
> that depend on the "id" field of another table, I'd use the "old_id" 
> field to map it to the new id value.
>
> This process could take as long as it wants and run concurrently with 
> the old system.  I could shut it down and restart it as needed if I kept 
> a "fence" of the highest id processed from the old database.  I would 
> need to process the tables in order, from those with no dependencies to 
> those with the most, to make sure the required dependent records are 
> already in place when each table is processed.
>
> After all is said and done, I could simply delete the "old_id" columns 
> from the affected MySQL tables.  If I even care at that point.
>
> Am I missing something?  Or does this seem viable for a system with a 
> large database to migrate?
>
> -- Joe
>
>
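
One more thought on the ordering question above: the DAL already knows 
which fields are references, so the "no dependencies to fullest 
dependencies" order can be computed instead of maintained by hand.  A 
rough sketch (assumes the foreign keys are all plain "reference" fields 
and there are no circular references between tables):

    def tables_in_dependency_order(db):
        """Order db.tables so referenced tables come before their referrers."""
        deps = {}
        for tablename in db.tables:
            table = db[tablename]
            deps[tablename] = set()
            for fieldname in table.fields:
                ftype = table[fieldname].type
                if isinstance(ftype, str) and ftype.startswith('reference '):
                    # field type looks like 'reference othertable'
                    deps[tablename].add(ftype.split()[1].split('.')[0])
        ordered = []
        while deps:
            ready = [t for t, d in deps.items() if d <= set(ordered) | {t}]
            if not ready:
                raise RuntimeError('circular references among %s' % list(deps))
            for t in sorted(ready):
                ordered.append(t)
                deps.pop(t)
        return ordered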
