Hi Anand,
I tested with and without pipelining and it doesn't make a difference. First of
all, unlimited pipelining is not a good idea, because we still have to handle
the responses and need to be able to correlate each response with its request
on return, i.e. store the context of the request.
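The bookkeeping described above can be sketched roughly as follows. This is a minimal illustration, not Mesos code; the class and method names are hypothetical. It relies on the HTTP/1.1 rule that pipelined responses arrive in the same order as their requests, so a FIFO of per-request contexts is enough to correlate them:

```python
# Hypothetical sketch: correlating pipelined HTTP/1.1 responses with
# their requests. HTTP/1.1 guarantees responses come back in request
# order, so a FIFO queue of request contexts suffices.
from collections import deque

class PipelinedClient:
    def __init__(self):
        self.pending = deque()  # contexts of in-flight requests, FIFO order

    def send(self, request, context):
        # Writing the request to the socket is elided; we only show the
        # bookkeeping: remember the context of every in-flight request.
        self.pending.append(context)

    def on_response(self, response):
        # The oldest pending context belongs to this response, because
        # response order matches request order under pipelining.
        context = self.pending.popleft()
        return context, response

client = PipelinedClient()
client.send("GET /a", context={"id": 1})
client.send("GET /b", context={"id": 2})
ctx, _ = client.on_response("200 OK")
assert ctx["id"] == 1  # first response maps back to the first request
```

Note that this per-request state is exactly the overhead the message refers to: with unlimited pipelining the `pending` queue grows without bound.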
Hi Anand,
I tested with current HEAD. After I saw low throughput on our own HTTP API
client, I wrote a small server that sends out fake events and accepts calls and
our client was able to send a lot more calls to that server. I also wrote a
small tool that simply sends as many calls to Mesos
Hi haosdent,
Thanks for the pointer! Your results show exactly what I'm experiencing. I
think this could be very problematic, especially for bigger clusters. It would
be great to get some input from the folks working on the HTTP API, especially
Anand.
Thanks,
Dario
> On Oct 16, 2016, at 12:01
Hmm, this is an interesting topic. @anandmazumdar created a benchmark test
case to compare the v1 and v0 APIs before. You can run it via
```
./bin/mesos-tests.sh --benchmark \
  --gtest_filter="*SchedulerReconcileTasks_BENCHMARK_Test*"
```
Here is the result of running it on my machine.
```
[ RUN ]
```