Re: Throughput test of OpenHFT networking

2018-05-12 Thread John Hening
Why do you assume that I have a production system? I am doing this for learning purposes.


On Saturday, 12 May 2018 at 21:59:53 UTC+2, Greg Young wrote:
>
> Will your production system be running over loopback? 
>

Re: Throughput test of OpenHFT networking

2018-05-12 Thread Greg Young
Will your production system be running over loopback?

On Sun, May 13, 2018 at 1:59 AM, John Hening wrote:

> I know that testing over loopback isn't the best idea (and that it omits
> part of the network stack), but it is the simplest option: I have just a
> laptop with one physical network interface, so I don't see a way to test
> it without a second physical host.
>
>> Not to mention that OpenHFT may employ socket mechanics different from
>> what's available in the JDK.
>>
> AFAIK (perhaps I am wrong) OpenHFT uses the networking available in the JDK.
>

Re: Throughput test of OpenHFT networking

2018-05-12 Thread Wojciech Kudla
It's probably not the response that you were hoping for, but I'd avoid
testing performance over the loopback interface.
Whole parts of the network stack are omitted by the Linux kernel in
such scenarios.
Not to mention that OpenHFT may employ socket mechanics different from
what's available in the JDK.

On Sat, 12 May 2018, 19:03, John Hening wrote:

> Hello,
> I am trying to measure the throughput of OpenHFT networking, version
> 1.12.2, to compare it with my own toy implementation.
>
> My test works in the following way:
>
> A few concurrent clients each send (over loopback) 131,072 messages of
> 4 KB (roughly 512 MB of payload per client) to the server (written with
> OpenHFT networking) and block waiting for a one-byte response that
> confirms the message was processed by the server.
>
> I've run the test with -DServerThreadingStrategy=CONCURRENT,
> MULTI_THREADED_BUSY_WAITING and SINGLE_THREADED.
>
>
>
> number of clients            OpenHFT (avg. messages per second)                          my toy (avg. messages per second)
> (run on different threads)   CONCURRENT   SINGLE_THREADED   MULTI_THREADED_BUSY_WAITING  CONCURRENT (no single-threaded strategy)
>   1                          52.47        50.4              49.95                        41.4
>   2                          40.21        48.57             39.54                        44.65
>   4                          21.92        23.68             21.51                        32.04
>   8                          10.78        12.83             10.91                        23.06
>  16                           5.53         6.02              5.57                        11.77
>  32                           2.68         2.79              2.76                         6.46
>
> I suppose the problem is with my usage of the library, but I cannot
> figure out what is wrong; it is not easy to write a test because the
> documentation and examples are sparse or out of date.
>
> The server is started as follows (I skipped unimportant details to keep
> it shorter):
>
> private static EventLoop eg;
>
> public static void startServer() {
>     eg = new EventGroup(true);
>     eg.start();
>     TCPRegistry.createServerSocketChannelFor(desc);
>     // Confirmer sends a one-byte message back to the client to confirm
>     // that a message has been processed
>     AcceptorEventHandler eah = new AcceptorEventHandler(desc,
>             LegacyHanderFactory.legacyTcpEventHandlerFactory(nc -> new Confirmer()),
>             VanillaNetworkContext::new);
>     eg.addHandler(eah);
> }
>
> class Confirmer implements TcpHandler {
>     @Override
>     public void process(@NotNull final Bytes in, @NotNull final Bytes out,
>                         NetworkContext nc) {
>         if (in.readRemaining() == 0) {
>             return;
>         }
>         out.write(in, in.readPosition(), 1); // send a one-byte confirmation
>         in.readSkip(Math.min(in.readRemaining(), out.writeRemaining()));
>     }
> }
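>
> For contrast, here is a minimal plain-JDK sketch of the same request/ack
> protocol (this is not my actual "toy" and not OpenHFT code, just an
> illustration; the port, the one-thread-per-connection model and the fixed
> 4 KB frame are assumptions):
>
> import java.io.IOException;
> import java.net.InetSocketAddress;
> import java.nio.ByteBuffer;
> import java.nio.channels.ServerSocketChannel;
> import java.nio.channels.SocketChannel;
>
> public class BaselineAckServer {
>     public static void main(String[] args) throws IOException {
>         try (ServerSocketChannel server = ServerSocketChannel.open()) {
>             server.bind(new InetSocketAddress(9090));     // port is an assumption
>             while (true) {
>                 SocketChannel client = server.accept();   // blocking accept
>                 new Thread(() -> handle(client)).start(); // one thread per client
>             }
>         }
>     }
>
>     private static void handle(SocketChannel client) {
>         ByteBuffer request = ByteBuffer.allocateDirect(4096); // one 4 KB message
>         ByteBuffer ack = ByteBuffer.allocateDirect(1);
>         try {
>             while (true) {
>                 request.clear();
>                 while (request.hasRemaining()) {          // read the full 4 KB frame
>                     if (client.read(request) < 0) return; // peer closed the connection
>                 }
>                 ack.clear();
>                 ack.put((byte) 1).flip();
>                 while (ack.hasRemaining()) {
>                     client.write(ack);                    // one-byte confirmation
>                 }
>             }
>         } catch (IOException e) {
>             // connection dropped; good enough for a throughput sketch
>         } finally {
>             try { client.close(); } catch (IOException ignored) { }
>         }
>     }
> }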
>
>
>
> The client code (run on different threads):
>
> private void sender(long numberOfMessage, int sizeOfMessage) throws IOException {
>     SocketChannel sc = TCPRegistry.createSocketChannel(desc);
>     sc.configureBlocking(true);
>     ByteBuffer buffer = ByteBuffer.allocateDirect(sizeOfMessage);
>     ByteBuffer recv = ByteBuffer.allocateDirect(1);
>     buffer.putInt(64); // required bytes; to be frank, I don't understand exactly why 64
>     buffer.put(bytes.getBytes()); // nearly 4 KB of text
>
>     long start = System.nanoTime();
>     for (int i = 0; i < numberOfMessage; i++) {
>         buffer.clear();   // rewind so the same payload is sent again
>         recv.clear();
>         sc.write(buffer); // send the 4 KB message
>         sc.read(recv);    // wait for the one-byte confirmation
>     }
>     times.add(System.nanoTime() - start); // used later to compute the average
>     sc.close();
> }
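>
> A side note on the buffer handling above (plain JDK semantics, nothing
> OpenHFT-specific): clear() only resets position and limit, it does not wipe
> the contents, which is why the same length prefix and payload are re-sent
> on every iteration. A tiny self-contained illustration:
>
> import java.nio.ByteBuffer;
>
> public class ClearDemo {
>     public static void main(String[] args) {
>         ByteBuffer b = ByteBuffer.allocateDirect(8);
>         b.putInt(64);                   // position advances to 4
>         b.clear();                      // position = 0, limit = capacity; bytes untouched
>         System.out.println(b.getInt()); // prints 64: the data is still there
>     }
> }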
>
>
>
> I then use times to compute the average throughput.
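>
> The averaging looks roughly like this (a sketch, not the exact code;
> it assumes times holds the per-client loop durations in nanoseconds and
> numberOfMessage is the per-client message count from above):
>
> import java.util.List;
> import java.util.concurrent.TimeUnit;
>
> class ThroughputStats {
>     static double averageMessagesPerSecond(List<Long> times, long numberOfMessage) {
>         double sum = 0;
>         for (long elapsedNanos : times) {
>             double seconds = elapsedNanos / (double) TimeUnit.SECONDS.toNanos(1);
>             sum += numberOfMessage / seconds; // per-client messages per second
>         }
>         return sum / times.size();            // average over all clients
>     }
> }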
>
> P.S. I can see that the system is loaded (in terms of threads and the
> kernel network stack), but the results still seem low. I don't have the
> experience to judge whether these results are normal.
> P.S.2 I also know that my test is primitive.
>

-- 
You received this message because you are subscribed to the Google Groups 
"mechanical-sympathy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to mechanical-sympathy+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.