#5253: serializer for csv format for loaddata and dumpdata
-------------------------------------------------+--------------------------
   Reporter:  Adam Schmideg <[EMAIL PROTECTED]>  |                Owner:  jacob
     Status:  new                                |            Component:  Serialization
    Version:  SVN                                |           Resolution:
   Keywords:  csv, foreign keys                  |                Stage:  Accepted
  Has_patch:  1                                  |           Needs_docs:  1
Needs_tests:  1                                  |   Needs_better_patch:  1
-------------------------------------------------+--------------------------
Comment (by russellm):

 Replying to [comment:4 Adam Schmideg <[EMAIL PROTECTED]>]:
 > After some googling I still couldn't find a reference to putting
 multiple table data into a single csv file.
 
 Before this gets committed, I'll need to see some consensus amongst others
 that this is indeed the case. "I couldn't find it" isn't a particularly
 compelling argument by itself :-)
 
 Once the patch is ready for inclusion, we will need to raise the design
 for public comment. This should shake out any objections or
 counterexamples for the design you are proposing.
 
 >  * headers behave almost like normal csv headers with the exception that
 the first column will be called ''<tablename>:id''
 
 <tablename>:pk would be better here, both for consistency with the other
 backends and because the primary key isn't always called id. pk is the
 reserved name for the primary key in lookups and the like, which is why it
 is used in the JSON and YAML serializers.
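 For illustration only, a minimal sketch of what parsing that header
 convention could look like. The fixture format, model label, and field
 names below are hypothetical and not taken from the actual patch; the
 sketch just shows the first column carrying ''<tablename>:pk'':

```python
import csv
import io

# Hypothetical single-table fixture in the format under discussion:
# the first header column is "<tablename>:pk" rather than
# "<tablename>:id", since the primary key field isn't always named "id".
FIXTURE = """\
myapp.article:pk,headline,pub_year
1,Hello world,2007
2,Second post,2007
"""

def parse_csv_fixture(text):
    """Parse a single-table CSV fixture into (model_label, rows)."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    model_label, pk_marker = header[0].rsplit(":", 1)
    if pk_marker != "pk":
        raise ValueError("first column must be <tablename>:pk")
    fields = header[1:]
    rows = [
        {"model": model_label,
         "pk": int(row[0]),
         "fields": dict(zip(fields, row[1:]))}
        for row in reader
    ]
    return model_label, rows

model, rows = parse_csv_fixture(FIXTURE)
```

 This mirrors the shape of the JSON/YAML serializer output (model, pk,
 fields), which is the consistency argument above.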
 
 > > I haven't dug in detail to work out why 'nice_foreign_keys' exists,
 but by the looks of it, it isn't required functionality at all. Foreign
 keys should be integers (ignoring non-numerical keys for the moment) - not
 strings containing some magic syntax.
 >
 > I found this feature very useful in the following situation.  The first
 version of my initial_data.csv looked something like this
 
 1) CSV is a terse syntax by design. This is a very pythonic extension.
 2) Your original complaint was that JSON syntax was too verbose - and then
 you introduce the most verbose and repetitive component of JSON syntax
 into your CSV syntax? Am I the only one who thinks this is a little
 strange?
 3) You are attempting to solve a problem that isn't part of what a
 serializer should be doing. If you require contenttypes as part of your
 fixtures, serialize (and deserialize) the contenttypes table. That way you
 can be guaranteed the IDs for the contenttypes you use.
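 Point 3 needs nothing beyond the standard management commands; for
 example (app labels and file names here are illustrative, not from the
 ticket):

```shell
# Dump the contenttypes table so its IDs are fixed in the fixture set.
./manage.py dumpdata contenttypes > fixtures/contenttypes.json

# Load it back before any fixture that references ContentType pks.
./manage.py loaddata fixtures/contenttypes.json
./manage.py loaddata fixtures/initial_data.json
```

 With the contenttypes fixture loaded first, later fixtures can refer to
 those IDs without any magic foreign-key syntax in the serializer.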

-- 
Ticket URL: <http://code.djangoproject.com/ticket/5253#comment:5>
Django Code <http://code.djangoproject.com/>
The web framework for perfectionists with deadlines