Re: [openstack-dev] [MagnetoDB] Bulk load API draft

2014-05-28 Thread Illia Khudoshyn
Hi Dima, Sounds good, thank you for pointing that out. On Tue, May 27, 2014 at 7:34 PM, Dmitriy Ukhlov dukh...@mirantis.com wrote: Hi Illia, Looks good, but I suggest returning all of these fields for a positive response as well as for an error response: read: string, processed: string, ...

Re: [openstack-dev] [MagnetoDB] Bulk load API draft

2014-05-28 Thread Illia Khudoshyn
Hi Ilya, As for 'string' vs 'number': in the MDB REST API we pass numbers as strings since we want to support big ints, so I just wanted to be consistent with that. As for the last parameter name, I'd prefer 'failed_items', 'cos we 1) already have 'failed' and I think it would be good if they match, 2) they were ...
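
To make the "numbers as strings" point concrete, here is a minimal Python sketch (not MagnetoDB code, just an illustration): many JSON consumers, JavaScript in particular, only guarantee 53 bits of integer precision, so a bare JSON number above that can silently lose digits, while a string value round-trips exactly.

    import json

    # 2**63 - 1 is well above JavaScript's Number.MAX_SAFE_INTEGER (2**53 - 1),
    # so a client that decodes JSON numbers into doubles would round it.
    big_value = 2**63 - 1

    # Encoding the value as a string keeps it exact for every client.
    payload = json.dumps({"processed": str(big_value)})
    print(payload)  # {"processed": "9223372036854775807"}

    # The string form comes back unchanged regardless of the parser's
    # native number type.
    assert json.loads(payload)["processed"] == "9223372036854775807"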

Re: [openstack-dev] [MagnetoDB] Bulk load API draft

2014-05-27 Thread Illia Khudoshyn
Hi openstackers, While working on bulk load, I found the previously proposed batch-oriented asynchronous approach both resource-consuming on the server side and somewhat complicated to use, so I tried to outline a more straightforward streaming way of uploading data. At the link below you can find a ...
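
For illustration only, a rough client-side sketch of the streaming idea in Python, assuming a hypothetical endpoint that accepts newline-delimited JSON rows; the URL, path, and content type below are placeholders, not the actual MagnetoDB API:

    import requests

    # Hypothetical bulk-load endpoint; the real path comes from the draft spec.
    URL = "http://magnetodb.example.com/v1/data/tables/mytable/bulk_load"

    def rows(path):
        # Yield one pre-encoded JSON row per line so the whole file never
        # has to be buffered in memory on the client.
        with open(path, "rb") as f:
            for line in f:
                yield line

    # requests sends a generator body with chunked transfer encoding, which
    # matches the "push rows as they are read" streaming approach.
    resp = requests.post(
        URL,
        data=rows("items.ndjson"),
        headers={"Content-Type": "application/x-ndjson"},
    )
    print(resp.json())  # e.g. {"read": "...", "processed": "...", "failed": "..."}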

Re: [openstack-dev] [MagnetoDB] Bulk load API draft

2014-05-27 Thread Dmitriy Ukhlov
Hi Illia, Looks good, but I suggest returning all of these fields for a positive response as well as for an error response: read: string, processed: string, failed: string, and leaving the next fields optional, filling them only in the case of an error response (failed > 0) to specify what exactly was ...
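
As a sketch only, the response shape under discussion might look like the following; the field names are those mentioned in this thread (including the 'failed_items' name proposed in the 2014-05-28 reply above), the counters are strings for consistency with the rest of the MDB REST API, and the item details are purely illustrative:

    # Successful upload: all three counters are present and failed is "0".
    success_response = {
        "read": "1000000",
        "processed": "1000000",
        "failed": "0",
    }

    # Error response: the same counters are always present, and optional
    # fields (here the proposed "failed_items") are filled only when
    # failed > 0 to say what exactly went wrong.
    error_response = {
        "read": "1000000",
        "processed": "999998",
        "failed": "2",
        "failed_items": [
            "line 15: malformed JSON",
            "line 982: missing key attribute",
        ],
    }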

[openstack-dev] [MagnetoDB] Bulk load API draft

2014-05-14 Thread Illia Khudoshyn
Hi openstackers, I'm working on bulk load for MagnetoDB, the facility for inserting large amounts of data (millions of rows, gigabytes of data). Below is the link to the draft API description. https://wiki.openstack.org/wiki/MagnetoDB/bulkload#.5BDraft.5D_MagnetoDB_Bulk_Load_workflow_and_API