Hello, Igniters.

I have not seen use cases where a heavy client Ignite node is placed in a
much worse network than the servers. I'm not sure we should encourage a bad
cluster architecture.

Usually, in my use cases, the servers and clients are located in the same
network. And if the cluster has SSL enabled, it makes sense to enable
compression even if the network is fast. It also makes sense when the
network is under high load and the CPU is poorly utilized.
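To illustrate why compression pays off together with SSL (a rough sketch,
not Ignite's actual code path): the payload is compressed before it enters
the encrypted channel, so SSL has far fewer bytes to encrypt and push
through its throughput-limited connection. A minimal example with the
standard java.util.zip.Deflater:

```java
import java.util.Arrays;
import java.util.zip.Deflater;

public class CompressSketch {

    // Compress a buffer with DEFLATE and return only the bytes produced.
    static byte[] deflate(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[input.length + 64]; // large enough for one pass
        int n = deflater.deflate(buf);
        deflater.end();
        return Arrays.copyOf(buf, n);
    }

    public static void main(String[] args) {
        // Highly repetitive payload, standing in for text/SQL traffic (assumption).
        byte[] payload = new byte[100_000];
        Arrays.fill(payload, (byte) 'x');

        byte[] compressed = deflate(payload);

        // Far fewer bytes would go through the SSL channel after compression.
        System.out.println(payload.length + " -> " + compressed.length);
    }
}
```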

I'll run Yardstick benchmarks for real operations such as get and put, and
for SQL requests.

I propose to add configurable compression for the thin client/ODBC/JDBC as
a separate issue, because it would enlarge the current PR.

Even if it really makes sense to compress traffic only between client and
server Ignite nodes, that should also be a separate issue, so it would not
enlarge the PR. Especially since this compression architecture may not be
accepted by the community.

2018-02-05 13:02 GMT+03:00 Nikita Amelchev <nsamelc...@gmail.com>:

> Thanks for your comments,
>
> I will try to separate network compression for clients and servers.
>
> It makes sense to enable compression on servers if we have SSL turned on.
> I tested rebalancing time, and compression+SSL is faster. SSL throughput
> is limited to ~800 Mbit/s per connection, and with compression enabled it
> increased to ~1100 Mbit/s.
>
> 2018-02-02 18:52 GMT+03:00 Alexey Kuznetsov <akuznet...@apache.org>:
>
>> I think Igor is right.
>>
>> Usually servers are connected via a fast local network,
>> but clients could be in an external, slow network.
>> In this scenario compression will be very useful.
>>
>> Once I had such a scenario: a client connected to the cluster via a
>> 300 KB/s network and tried to transfer ~10 MB of uncompressed data,
>> so it took ~30 seconds.
>> After I implemented compression, it became ~1 MB and transferred in
>> ~3 seconds.
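[The arithmetic in the anecdote above is easy to check; the sizes and the
link speed below are taken from the message, the ~10x ratio is the one
reported:]

```java
public class TransferTime {
    public static void main(String[] args) {
        double linkKBps = 300;          // 300 KB/s link from the anecdote
        double rawKB = 10 * 1024;       // ~10 MB uncompressed
        double compressedKB = 1024;     // ~1 MB after compression

        // Transfer time = size / link speed.
        System.out.printf("raw: ~%.0f s, compressed: ~%.0f s%n",
                rawKB / linkKBps, compressedKB / linkKBps);
    }
}
```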
>>
>> I think we should take care of all the mentioned problems with NIO
>> threads in order to not slow down the whole cluster.
>>
>>
>> On Fri, Feb 2, 2018 at 10:05 PM, gvvinblade <gvvinbl...@gmail.com> wrote:
>>
>> > Nikita,
>> >
>> > Yes, you're right. Maybe I wasn't clear enough.
>> >
>> > Usually server nodes are placed in the same fast network segment (one
>> > datacenter); in any case, we need the ability to set up compression per
>> > connection using a filter like useCompression(ClusterNode, ClusterNode)
>> > to compress traffic only between server and client nodes.
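[A possible shape for such a filter. This is a hypothetical sketch:
`CompressionFilter`, `NodeInfo`, and the `isClient()` flag are illustrative
names standing in for ClusterNode, not existing Ignite interfaces:]

```java
// Hypothetical sketch: decide per connection whether to compress,
// enabling compression only when at least one endpoint is a client node.
public class CompressionFilterSketch {

    // Stand-in for ClusterNode; only the client flag matters here.
    record NodeInfo(String id, boolean isClient) {}

    @FunctionalInterface
    interface CompressionFilter {
        boolean useCompression(NodeInfo local, NodeInfo remote);
    }

    // Compress client<->server links, leave server<->server uncompressed.
    static final CompressionFilter CLIENT_LINKS =
            (a, b) -> a.isClient() || b.isClient();

    public static void main(String[] args) {
        NodeInfo srv1 = new NodeInfo("srv1", false);
        NodeInfo srv2 = new NodeInfo("srv2", false);
        NodeInfo cli = new NodeInfo("cli", true);

        System.out.println(CLIENT_LINKS.useCompression(srv1, srv2)); // server-server
        System.out.println(CLIENT_LINKS.useCompression(srv1, cli));  // server-client
    }
}
```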
>> >
>> > But the issue is still there: since the same NIO worker serves both
>> > client and server connections, enabling compression may impact
>> > whole-cluster performance, because NIO threads will compress client
>> > messages instead of processing servers' compute requests. That was my
>> > concern.
>> >
>> > Compression for clients is a really cool feature and useful in some
>> > cases. It probably makes sense to have two NIO servers, with and
>> > without compression, to process server and client requests separately,
>> > or to somehow pin worker threads to client or server sessions...
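[The "separate pools" idea above can be sketched as follows; this is a
hypothetical illustration of the routing decision only (`route`, the pool
sizes, and the session flag are made-up names), not Ignite's NIO server:]

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: separate worker pools so that compressing client
// traffic cannot starve server-to-server message processing.
public class SplitWorkersSketch {

    // Pick the pool by session type, known at accept time (assumption).
    static ExecutorService route(boolean fromClient,
                                 ExecutorService serverWorkers,
                                 ExecutorService clientWorkers) {
        return fromClient ? clientWorkers : serverWorkers;
    }

    public static void main(String[] args) {
        ExecutorService serverWorkers = Executors.newFixedThreadPool(4); // plain I/O
        ExecutorService clientWorkers = Executors.newFixedThreadPool(4); // compressing I/O

        // A client session lands on the compressing pool only.
        route(true, serverWorkers, clientWorkers)
                .submit(() -> System.out.println("client message handled"));

        serverWorkers.shutdown();
        clientWorkers.shutdown();
    }
}
```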
>> >
>> > We also have to think about client connections (JDBC, ODBC, .NET thin
>> > client, etc.) and set up compression for them separately.
>> >
>> > Anyway, I would compare put, get, putAll, getAll, and SQL SELECT
>> > operations for strings and POJOs, with one server and several clients,
>> > with and without compression, setting up the server to utilize all
>> > cores with NIO workers, just to get a sense of the possible impact.
>> >
>> > Possible configuration for servers with 16 cores:
>> >
>> > Selectors cnt = 16
>> > Connections per node = 4
>> >
>> > Where client nodes perform operations in 16 threads
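[For reference, the selector and per-node connection counts above map onto
TcpCommunicationSpi settings in Ignite's public API. A configuration sketch,
assuming Ignite 2.x; not runnable standalone:]

```java
// Sketch of the proposed benchmark configuration for a 16-core server
// (assumes Ignite 2.x: TcpCommunicationSpi.setSelectorsCount and
// setConnectionsPerNode).
IgniteConfiguration cfg = new IgniteConfiguration();

TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setSelectorsCount(16);    // one NIO selector per core
commSpi.setConnectionsPerNode(4); // 4 connections per remote node

cfg.setCommunicationSpi(commSpi);
```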
>> >
>> > Regards,
>> > Igor
>> >
>> >
>> >
>> >
>> > --
>> > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>> >
>>
>>
>>
>> --
>> Alexey Kuznetsov
>>
>
>
>
> --
> Best wishes,
> Amelchev Nikita
>



-- 
Best wishes,
Amelchev Nikita
