Hi!

I think that, given the output from your benchmarks, it is OK to proceed
with the async approach. The async and OutputStream versions are not
mutually exclusive, and the benchmarks show no reason not to start with
the implementation of the async feature.
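
Just to illustrate why I see the two as complementary, here is a minimal,
hypothetical sketch (not the current jclouds BlobStore API) of how an
async entry point could at first be bridged onto a blocking,
OutputStream-based download and later be backed by a real async HTTP
driver without touching callers:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

// Hypothetical shape only -- not an existing jclouds interface.
public interface BlobDownloads {

    // "OutputStream" flavour: the caller supplies the sink and the payload
    // is written into it as it arrives.
    void downloadBlob(String container, String name, OutputStream destination);

    // "Async" flavour: a non-blocking entry point. As a first step it can be
    // bridged onto the blocking path on a separate executor; later it can be
    // backed by a real async HTTP driver with the same signature.
    default CompletableFuture<InputStream> downloadBlobAsync(
            String container, String name, Executor executor) {
        return CompletableFuture.supplyAsync(() -> {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            downloadBlob(container, name, buffer);
            return new ByteArrayInputStream(buffer.toByteArray());
        }, executor);
    }
}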

As far as I'm concerned, let's take the next step: open a pull request
with the PoC code, continue the discussion there, or whatever plan you had
in mind :)


I.

On Jun 12, 2017 4:48 AM, "Battula Kishore" <batt...@adobe.com.invalid>
wrote:

Hi Andrew, Ignasi

Checking in once again: is there any update on this?

-- Thanks
-- Kishore

On 06/06/17, 12:10 PM, "Ignasi Barrera" <n...@apache.org> wrote:

>Hi,
>
>I can pick it up, but I think we should have some more feedback from
>Andrew first, at least to see if he still has some concerns about the
>implementation.
>I do like the current async approach, but it wouldn't be right to move
>forward without the OK of the team member who had concerns and has been
>more involved in the design discussion.
>
>
>I,
>
>On 6 June 2017 at 08:28, Felix Meschberger <fmesc...@adobe.com.invalid> wrote:
>> Hi Jclouds dev
>>
>> Is there anyone else who could pick this up instead of AndrewG?
>>
>> It would be great to have this support added to jclouds.
>>
>> Thanks
>> Felix
>>
>>> On 06.06.2017 at 05:49, Andrew Phillips <andr...@apache.org> wrote:
>>>
>>> Hi Kishore
>>>
>>> Andrew G. is travelling for most of June, as far as I understand, so he
>>> will likely be a bit slower to respond. Thanks for your patience!
>>>
>>> ap
>>>
>>> On 2017-06-05 23:42, Battula Kishore wrote:
>>>> Hi Andrew,
>>>> Any update on this?
>>>> -- Thanks
>>>> -- Kishore
>>>> On 29/05/17, 1:06 PM, "Battula Kishore" <batt...@adobe.com.INVALID> wrote:
>>>>> Hi Andrew,
>>>>> Thanks, Andrew, for the quick response. Here is the GitHub repo with
>>>>> instructions on how to run the tests:
>>>>> https://github.com/kishore25kumar/s3proxy-async-test-setup
>>>>> The README.md also has the repo details of the s3proxy and jclouds
>>>>> implementations. Hope this helps; let me know if you need anything else.
>>>>> In the meantime, while you review these results, if you can let me know
>>>>> the design review process, I can prepare for it.
>>>>> -- Thanks
>>>>> -- Kishore
>>>>> On 26/05/17, 12:40 PM, "Andrew Gaul" <g...@apache.org> wrote:
>>>>>> Kishore, these are promising results! I reformatted the most important
>>>>>> rows, which show a 2x improvement in throughput and latency:
>>>>>> 10 10,000 Async Http Lib 209 282 48
>>>>>> 10 10,000 OutputStream   392 542 25
>>>>>> Can you share the implementation and include instructions on how to
>>>>>> replicate these tests?
>>>>>> On Wed, May 24, 2017 at 05:50:52AM +0000, Battula Kishore wrote:
>>>>>>> Hi,
>>>>>>> This is Kishore; I have been working on the async PoC using the mail id
>>>>>>> kishore25ku...@gmail.com. I work at Adobe and we want to implement async
>>>>>>> support for the jclouds library and contribute it back.
>>>>>>> From the last discussion, I was asked to get performance numbers for the
>>>>>>> two approaches (a rough sketch of each follows below):
>>>>>>> Approach 1: Using the Http Async Library
>>>>>>> Approach 2: Using OutputStream
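>>>>>>>
>>>>>>> To give an idea of the difference between the two (illustration only,
>>>>>>> using the JDK HttpClient rather than the actual async library used in
>>>>>>> the PoC):
>>>>>>>
>>>>>>> import java.io.InputStream;
>>>>>>> import java.io.OutputStream;
>>>>>>> import java.net.URI;
>>>>>>> import java.net.http.HttpClient;
>>>>>>> import java.net.http.HttpRequest;
>>>>>>> import java.net.http.HttpResponse;
>>>>>>> import java.util.concurrent.CompletableFuture;
>>>>>>>
>>>>>>> public class DownloadApproaches {
>>>>>>>
>>>>>>>     static final HttpClient CLIENT = HttpClient.newHttpClient();
>>>>>>>
>>>>>>>     // Approach 1: an async HTTP client releases the calling (Jetty
>>>>>>>     // worker) thread while the backend responds; completion arrives
>>>>>>>     // as a callback on the client's own threads.
>>>>>>>     static CompletableFuture<byte[]> asyncDownload(URI blobUri) {
>>>>>>>         HttpRequest request = HttpRequest.newBuilder(blobUri).GET().build();
>>>>>>>         return CLIENT.sendAsync(request,
>>>>>>>                 HttpResponse.BodyHandlers.ofByteArray())
>>>>>>>                 .thenApply(HttpResponse::body);
>>>>>>>     }
>>>>>>>
>>>>>>>     // Approach 2: the response body is copied straight into a
>>>>>>>     // caller-supplied OutputStream, avoiding an extra in-memory copy,
>>>>>>>     // but the calling thread stays blocked for the whole transfer.
>>>>>>>     static void streamingDownload(URI blobUri, OutputStream destination)
>>>>>>>             throws Exception {
>>>>>>>         HttpRequest request = HttpRequest.newBuilder(blobUri).GET().build();
>>>>>>>         try (InputStream body = CLIENT.send(request,
>>>>>>>                 HttpResponse.BodyHandlers.ofInputStream()).body()) {
>>>>>>>             body.transferTo(destination);
>>>>>>>         }
>>>>>>>     }
>>>>>>> }
>>>>>>>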
>>>>>>> Test setup:
>>>>>>> 1. Both the s3proxy server and the test runner run in the same Docker
>>>>>>>    container in the Azure west-us region.
>>>>>>> 2. The Azure storage account also resides in the same west-us region.
>>>>>>> 3. A bucket is prepopulated with 100,000 files, each 1 MB in size,
>>>>>>>    before the test starts.
>>>>>>> 4. The test runner sends unique requests to s3proxy to download files
>>>>>>>    (a simplified runner sketch follows below).
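>>>>>>>
>>>>>>> A simplified sketch of what the runner does (endpoint and object names
>>>>>>> below are placeholders; the real harness is in the linked repository):
>>>>>>>
>>>>>>> import java.net.URI;
>>>>>>> import java.net.http.HttpClient;
>>>>>>> import java.net.http.HttpRequest;
>>>>>>> import java.net.http.HttpResponse;
>>>>>>> import java.util.ArrayList;
>>>>>>> import java.util.List;
>>>>>>> import java.util.concurrent.ConcurrentLinkedQueue;
>>>>>>> import java.util.concurrent.ExecutorService;
>>>>>>> import java.util.concurrent.Executors;
>>>>>>> import java.util.concurrent.Future;
>>>>>>>
>>>>>>> public class TestRunner {
>>>>>>>     public static void main(String[] args) throws Exception {
>>>>>>>         int threads = 10;                 // "Test runner threads"
>>>>>>>         int iterationsPerThread = 10_000; // "Iterations/thread"
>>>>>>>         HttpClient client = HttpClient.newHttpClient();
>>>>>>>         ExecutorService pool = Executors.newFixedThreadPool(threads);
>>>>>>>         ConcurrentLinkedQueue<Long> latenciesMs = new ConcurrentLinkedQueue<>();
>>>>>>>
>>>>>>>         List<Future<?>> results = new ArrayList<>();
>>>>>>>         for (int t = 0; t < threads; t++) {
>>>>>>>             final int threadId = t;
>>>>>>>             results.add(pool.submit(() -> {
>>>>>>>                 for (int i = 0; i < iterationsPerThread; i++) {
>>>>>>>                     // Unique key per request so every download hits a
>>>>>>>                     // different prepopulated 1 MB object.
>>>>>>>                     URI uri = URI.create("http://localhost:8080/test-bucket/file-"
>>>>>>>                             + threadId + "-" + i);
>>>>>>>                     long start = System.nanoTime();
>>>>>>>                     client.send(HttpRequest.newBuilder(uri).GET().build(),
>>>>>>>                             HttpResponse.BodyHandlers.ofByteArray());
>>>>>>>                     latenciesMs.add((System.nanoTime() - start) / 1_000_000);
>>>>>>>                 }
>>>>>>>                 return null;
>>>>>>>             }));
>>>>>>>         }
>>>>>>>         for (Future<?> result : results) {
>>>>>>>             result.get(); // propagate any failure from the workers
>>>>>>>         }
>>>>>>>         pool.shutdown();
>>>>>>>         // Average, 99th percentile and throughput are then computed from
>>>>>>>         // latenciesMs and the total wall-clock time.
>>>>>>>         System.out.println("completed requests: " + latenciesMs.size());
>>>>>>>     }
>>>>>>> }
>>>>>>>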
>>>>>>> Virtual machine spec: CPU - 8 cores, memory - 28 GB (Standard_D4 Azure
>>>>>>> machine).
>>>>>>> s3proxy runs with 1 Jetty worker thread in all scenarios, and the payload
>>>>>>> is a 1 MB file. Here are the performance numbers:
>>>>>>> Threads  Iterations/thread  Approach        Avg resp (ms)  99%tile (ms)  Throughput (req/s)
>>>>>>>       1             10,000  Async Http Lib             45            87                  22
>>>>>>>       5             10,000  Async Http Lib            107           159                  47
>>>>>>>      10             10,000  Async Http Lib            209           282                  48
>>>>>>>       1             10,000  OutputStream               41            85                  24
>>>>>>>       5             10,000  OutputStream              190           283                  26
>>>>>>>      10             10,000  OutputStream              392           542                  25
>>>>>>> Summary: under load, the Http Async Library approach provides more
>>>>>>> throughput than the OutputStream approach.
>>>>>>> Both approaches improve performance. The OutputStream approach, which
>>>>>>> gives roughly a 3-5 ms improvement in latency, can also be used alongside
>>>>>>> the Http Async Library approach.
>>>>>>> Each approach can be developed independently. At this point I am keen to
>>>>>>> take up the Http Async Library development.
>>>>>>> -- Thanks
>>>>>>> -- Kishore
>>>>>> --
>>>>>> Andrew Gaul
>>>>>> http://gaul.org/
>>
