On 24/04/06, jvanasco <[EMAIL PROTECTED]> wrote:
>
> why not just use the dump feature in sqlite/mysql/pgsql
>
> all dbs have a 'dump' function/app that exports the contents to a
> generic format that can be read/altered to be read by another db
>
> going to pgsql you might have to run it through a filtering app, but
> mysql is pretty lax (just test your data to make sure it's all there;
> probably best to run in 'traditional sql' mode if you have 5.0)

Part of my reason for this is that Python seems to be a lot easier
than fiddling around with SQL and databases, but the dump thing seems
quite easy.
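For the SQLite case at least, you don't even need the command-line tool: Python's own sqlite3 module can produce the portable SQL dump. A minimal sketch (the table and data here are made up for illustration):

```python
import sqlite3

# Build a throwaway database just to have something to dump
# (table and column names are invented for this example)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO person (name) VALUES ('Ed')")
conn.commit()

# iterdump() yields the whole database as generic SQL statements,
# which you can filter or edit before feeding to another database
dump = "\n".join(conn.iterdump())
print(dump)
```

The resulting text contains the CREATE TABLE and INSERT statements, so it can be post-processed with ordinary string tools before loading into MySQL or PostgreSQL.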

Testing my data to see if it's all there could be a nightmare, though:
loads of tables with thousands of entries each.

> > I need to bear in mind here that when I put the data back in (with a
> > simple script) some of the class names or field names might have
> > changed, so I will need another nested dictionary (?) of changes that
> > will override the other dictionary.
>
> That's a nightmare.  Back up your db on box 1, coerce your data into the
> new schema, and export to ASCII with a dump utility. (When I do stuff
> like that, I keep a log of all the necessary SQL commands.)  Making
> filters to change data will just be a nightmare.

I was hoping to avoid all this (I know quite little SQL), but I
guess you're right.

It does seem like there's a need for tooling like this in TG, though.
Altering your class definitions is a lot more work than it should be
once you've got data in the table.
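For what it's worth, the rename-override dictionary I had in mind would look roughly like this (the table and column names here are invented, not from any real schema):

```python
# Hypothetical mapping from old table/column names to new ones.
# Tables or columns not listed pass through unchanged.
renames = {
    "person": {"table": "member", "columns": {"fullname": "name"}},
}

def translate(table, row):
    """Rewrite one row dict from the old schema into the new one."""
    mapping = renames.get(table, {})
    new_table = mapping.get("table", table)
    col_map = mapping.get("columns", {})
    new_row = {col_map.get(col, col): val for col, val in row.items()}
    return new_table, new_row

print(translate("person", {"id": 1, "fullname": "Ed"}))
# -> ('member', {'id': 1, 'name': 'Ed'})
```

Each dumped row would be run through translate() before being re-inserted, so the script absorbs any class or field renames without hand-editing the dump.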

Thanks

Ed

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"TurboGears" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/turbogears
-~----------~----~----~----~------~----~------~--~---
