The good news is that I've figured out a way of implementing the upload
functionality using the requests package.

https://github.com/apache/libcloud/pull/923/commits/5e04dbce554830eca3f9812272076a2fbdbe7cdc
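
In short, requests will stream a request body if you hand it a file-like
object (or a generator, for chunked transfer-encoding) as `data`. A rough
sketch of the idea, with a placeholder URL, headers and file name rather
than the actual code in the PR:

    import requests

    url = 'https://storage.example.com/bucket/object'       # placeholder endpoint
    headers = {'Content-Type': 'application/octet-stream'}  # placeholder headers

    # A file-like object is streamed (Content-Length taken from the file);
    # a generator would be sent with chunked transfer-encoding instead.
    with open('local.bin', 'rb') as fh:
        response = requests.put(url, data=fh, headers=headers)

    response.raise_for_status()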

I've tested it against a Google Cloud Storage account: downloaded and
uploaded a file using both the direct file_path option and the option of
passing a context manager (IOStream or ByteStream).
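
For reference, the kind of calls I exercised look roughly like this through
the public storage API (the credentials, project, bucket name and paths
below are placeholders):

    from libcloud.storage.types import Provider
    from libcloud.storage.providers import get_driver

    cls = get_driver(Provider.GOOGLE_STORAGE)
    driver = cls('sa@example-project.iam.gserviceaccount.com', './key.pem',
                 project='example-project')
    container = driver.get_container('example-bucket')

    # Upload directly from a file path
    obj = driver.upload_object('/tmp/example.bin', container, 'example.bin')

    # Upload from an open file handle / stream
    with open('/tmp/example.bin', 'rb') as fh:
        driver.upload_object_via_stream(fh, container, 'example-stream.bin')

    # Download it back
    driver.download_object(obj, '/tmp/example-copy.bin', overwrite_existing=True)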

The bad news is:
- I've removed the S3 multipart upload. I don't have time right now to
implement this feature from scratch.
- The unit tests are all coupled to the private methods, the callback
system and a bunch of other bad coupling practices, so they are broken,
BUT the code does actually work.

It's nearly there.

Ant

On Fri, Jan 6, 2017 at 7:28 AM, anthony shaw <anthony.p.s...@gmail.com> wrote:
> it's more of an existential question :-)
>
> The _upload_object method inside the libcloud.storage.base submodule
> makes a 'raw' call to the LibcloudConnection, which will send the top
> part of the HTTP request and some headers, then leave the connection
> open (i.e. not read the response).
>
> Then, depending on the driver, the file and other things, it will call
> back into one of the methods like _stream_data, which writes directly
> to the HTTP session using the `send()` method, which is only available
> in httplib.
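
For context, the low-level flow described above is roughly equivalent to this
plain httplib / http.client sketch (the host, path and file name are
placeholders, not the actual driver code):

    import os
    import http.client  # 'httplib' on Python 2

    conn = http.client.HTTPSConnection('storage.example.com')
    conn.putrequest('PUT', '/bucket/object')  # send the request line
    conn.putheader('Content-Type', 'application/octet-stream')
    conn.putheader('Content-Length', str(os.path.getsize('local.bin')))
    conn.endheaders()  # headers sent; connection stays open, response not read

    with open('local.bin', 'rb') as fh:
        for chunk in iter(lambda: fh.read(8192), b''):
            conn.send(chunk)  # write the body straight to the socket

    response = conn.getresponse()  # only now read the response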
>
> httplib is a very low-level library; requests is very high-level. You
> don't get access to the HTTP session directly in requests.
>
> That means that I would have to throw away the code we already have
> (which I am definitely in favour of in the long term since it is
> fragile) and replace it with requests' APIs for doing chunked uploads
> using file streams.
>
> It would probably take me another day or two to finish that implementation.
>
> I've always preached that you should change one thing at a time, in
> small amounts, and keep testing. So far this has been more like pulling
> a thread on a sweater; I've touched practically every single file in
> the code base!
> The odds are I will have missed something, so 2.0.0rc1 (if we do call
> it that), despite my best intentions, will introduce a new bug just
> based on the number of things I have changed.
>
> On Fri, Jan 6, 2017 at 1:11 AM, Tom Melendez <t...@supertom.com> wrote:
>> Hi Anthony,
>>
>> Nice job getting this going!
>>
>> Would you mind elaborating on this point?
>> "The raw connection still uses httplib. I decided it was too risky to
>> swap that for requests' method of uploading files."
>>
>> Since you're going through the trouble, it would be ideal to go to Requests
>> completely. What's blocking us on the upload code (admittedly, I haven't
>> studied the upload code)? Anything the community can do to help?
>>
>> Thanks,
>>
>> Tom
>>
>>
>> On Thu, Jan 5, 2017 at 3:43 AM, Chris DeRamus <chris.dera...@gmail.com>
>> wrote:
>>
>>> For what it's worth, my company (DivvyCloud) has been using the good work
>>> you've done now for almost six months. We had to make a few tweaks, but the
>>> core code contributed has worked flawlessly across AWS, Azure, OpenStack,
>>> Google, VMware and more. The only issue I've seen that I believe still
>>> stands is when LIBCLOUD_DEBUG is set to true: logging doesn't appear to
>>> function properly, but that may have been addressed since your initial
>>> submission last year.
>>>
>>> Nice work on this and we sincerely appreciate the contribution.
>>>
>>> On Thu, Jan 5, 2017 at 12:21 AM, anthony shaw <anthony.p.s...@gmail.com>
>>> wrote:
>>>
>>> > That package had a dumb error in it; I've since fixed it and, against a
>>> > live API (GoDaddy), I've tested the following scenarios:
>>> >
>>> > - Applying a custom proxy via the environment variable
>>> > - Using libcloud.security to disable SSL verification
>>> > - Using libcloud.security to set a custom CA certificate
>>> > - Combining all of those scenarios
>>> > - Verification of custom headers applied by the driver using Charles
>>> > Proxy and inspecting the HTTP messages manually
>>> > - Decoding JSON messages - although this still uses the existing
>>> > methods, not requests' own json() decoder.
>>> >
>>> > IMO, this is ready to merge. I would like to test the raw connections
>>> > and file uploads, if anyone has an account on one of those providers.
>>> >
>>> > On Thu, Jan 5, 2017 at 8:30 AM, Tomaz Muraus <to...@apache.org> wrote:
>>> > > Thanks for working on this again!
>>> > >
>>> > > Once we get a green light from people testing those changes, I propose
>>> > > to first roll out v2.0.0-rc1 and eventually, after we are happy with
>>> > > the stability, call it v2.0.0.
>>> > >
>>> > > On Wed, Jan 4, 2017 at 6:20 AM, anthony shaw <anthony.p.s...@gmail.com>
>>> > > wrote:
>>> > >
>>> > >> Hi,
>>> > >>
>>> > >> I tried doing this a year ago but put it in the 'too hard' bucket.
>>> > >> I've opened a PR (again) replacing the use of httplib with the
>>> > >> requests package.
>>> > >>
>>> > >> The consequences are:
>>> > >> - Connection.conn_classes is no longer a tuple; it is now
>>> > >> Connection.conn_class. There is no separation between an HTTP and an
>>> > >> HTTPS connection. I could have just hacked around this, but I updated
>>> > >> all the tests instead.
>>> > >> - Mock implementations no longer use the tuple as above.
>>> > >> - We cannot support Python 3.2 officially anymore; requests does not
>>> > >> support 3.2.
>>> > >> - The raw connection still uses httplib. I decided it was too risky to
>>> > >> swap that for requests' method of uploading files.
>>> > >>
>>> > >> https://github.com/apache/libcloud/pull/923#partial-pull-merging
>>> > >>
>>> > >> Please fetch this branch from the remote and test it out on some
>>> > >> working code talking to real APIs. Mocks can only go so far.
>>> > >>
>>> > >> I've uploaded the package here -
>>> > >> http://people.apache.org/~anthonyshaw/libcloud/1.5.0.post0/
>>> > >>
>>> > >> I would like to get this merged, but want some additional nods
>>> > >> before it goes into trunk.
>>> > >>
>>> > >> Ant
>>> > >>
>>> >
>>>
