I am trying to optimize some code to use bulk creation of models (on the order of dozens, not hundreds), and I do still need model instances, so raw dataset methods aren't suitable.
I don't see a way to do this with Sequel. We have Dataset#import and #multi_insert, and they can return an array of primary keys, but the internal machinery in Model (@new and @modified among other instance variables) isn't publicly accessible, so even if I set the primary key value on an instance, the model still thinks it is new.

It would be nice at times to "enqueue" an unsaved model for bulk insert, save it, and then use it once saved. Is this something reasonable to add? Django, which for the most part I loathe, has this sort of thing (of course its bulk create is entirely hand-rolled and does not use the underlying database machinery, so it is filled with other issues).

It would also seem possible to work around this by being able to update a model in place, or by "refreshing" it from a hash of data rather than from the database (_clear_changed_columns is not enough because of the internal instance variables). Anyway, this can be worked around (rough sketch at the end of this message), but I did want to bring it up in case there is interest.

Thanks as always for Sequel!

- Rob
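
P.S. For reference, here is roughly the workaround I'm experimenting with. The Widget model, connection string, and column values are made up, and the :return => :primary_key option to #multi_insert is only supported on some databases (e.g. PostgreSQL), so treat this as a sketch rather than a general solution:

    require "sequel"

    DB = Sequel.connect("postgres://localhost/mydb")  # hypothetical connection

    class Widget < Sequel::Model
    end

    rows = [{name: "a"}, {name: "b"}, {name: "c"}]

    # Bulk insert; on supporting databases this returns the generated primary keys.
    ids = Widget.dataset.multi_insert(rows, return: :primary_key)

    # Model.load builds instances that Sequel treats as existing records,
    # so the internal @new/@modified state is correct without another query.
    widgets = rows.zip(ids).map do |values, id|
      Widget.load(values.merge(Widget.primary_key => id))
    end

The obvious gap is that Model.load only sees the hash I give it, so any database-set defaults or timestamps are missing unless I refresh each instance afterwards.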
