That's fair, but I have dealt with team members who have "optimized" code over loopback only to find it's slower over an actual network :) Hence the question.
On Sun, May 13, 2018 at 3:04 AM, John Hening <gocio...@gmail.com> wrote:

> Why do you think that I have a production system? I do it for learning purposes.
>
> On Saturday, May 12, 2018 at 21:59:53 UTC+2, Greg Young wrote:
>>
>> Will your production system be running over loopback?
>>
>> On Sun, May 13, 2018 at 1:59 AM, John Hening <goci...@gmail.com> wrote:
>>
>>> I know that testing over loopback isn't the best idea (and that it omits part of the network stack), but it is the simplest approach, and I only have a laptop with one physical network interface, so I don't see a way to test without a second physical host.
>>>
>>>> Not to mention that OpenHFT may employ socket mechanics different from what's available in the JDK.
>>>
>>> AFAIK (perhaps I am wrong) OpenHFT uses the networking available in the JDK.
>>>
>>> On Saturday, May 12, 2018 at 20:34:57 UTC+2, Wojciech Kudla wrote:
>>>>
>>>> It's probably not the response that you were hoping to see, but I'd avoid testing for performance over the loopback interface.
>>>> There are whole parts of the network stack omitted by the Linux kernel in such scenarios.
>>>> Not to mention that OpenHFT may employ socket mechanics different from what's available in the JDK.
>>>>
>>>> On Sat, 12 May 2018, 19:03 John Hening <goci...@gmail.com> wrote:
>>>>
>>>>> Hello,
>>>>> I am trying to measure the throughput of OpenHFT networking (version 1.12.2) to compare it with my toy implementation.
>>>>>
>>>>> My test works in the following way:
>>>>>
>>>>> A few concurrent clients each send (over loopback) 131,072 messages of 4 KB to the server (written with OpenHFT networking) and blocking-wait for a 1-byte response confirming that the message was processed by the server.
>>>>>
>>>>> I've run the test with -DServerThreadingStrategy=CONCURRENT, MULTI_THREADED_BUSY_WAITING and SINGLE_THREADED.
>>>>>
>>>>>     Number of clients            OpenHFT: avg. messages per second                            My toy: avg. messages per second
>>>>>     (run on different threads)   CONCURRENT   SINGLE_THREADED   MULTI_THREADED_BUSY_WAITING   CONCURRENT (no single-threaded strategy)
>>>>>      1                           52.47        50.40             49.95                         41.40
>>>>>      2                           40.21        48.57             39.54                         44.65
>>>>>      4                           21.92        23.68             21.51                         32.04
>>>>>      8                           10.78        12.83             10.91                         23.06
>>>>>     16                            5.53         6.02              5.57                         11.77
>>>>>     32                            2.68         2.79              2.76                          6.46
>>>>>
>>>>> I suppose the problem is with my usage of the library, but I cannot figure out what is wrong; it is not easy to write a test because of the lack of (or obsolete) documentation and examples.
>>>>>
>>>>> The server is run as follows (I skipped unimportant details to keep it shorter):
>>>>>
>>>>>     private static EventLoop eg;
>>>>>
>>>>>     public static void startServer() {
>>>>>         eg = new EventGroup(true);
>>>>>         eg.start();
>>>>>         TCPRegistry.createServerSocketChannelFor(desc);
>>>>>         AcceptorEventHandler eah = new AcceptorEventHandler(desc,
>>>>>                 LegacyHanderFactory.legacyTcpEventHandlerFactory(nc -> new Confirmer()),
>>>>>                 VanillaNetworkContext::new);
>>>>>         // Confirmer sends a one-byte message back to the client to confirm that a message was processed
>>>>>         eg.addHandler(eah);
>>>>>     }
>>>>>
>>>>>     class Confirmer implements TcpHandler<NetworkContext> {
>>>>>         @Override
>>>>>         public void process(@NotNull final Bytes in, @NotNull final Bytes out, NetworkContext nc) {
>>>>>             if (in.readRemaining() == 0) {
>>>>>                 return;
>>>>>             }
>>>>>             out.write(in, in.readPosition(), 1); // send a one-byte confirmation
>>>>>             in.readSkip(Math.min(in.readRemaining(), out.writeRemaining()));
>>>>>         }
>>>>>     }
>>>>>
>>>>> The client code (run on different threads):
>>>>>
>>>>>     private void sender(long numberOfMessage, int sizeOfMessage) {
>>>>>         SocketChannel sc = TCPRegistry.createSocketChannel(desc);
>>>>>         sc.configureBlocking(true);
>>>>>         ByteBuffer buffer = ByteBuffer.allocateDirect(sizeOfMessage);
>>>>>         ByteBuffer recv = ByteBuffer.allocateDirect(1);
>>>>>         buffer.putInt(64); // required bytes. To be frank, I don't understand exactly why 64.
>>>>>         buffer.put(bytes.getBytes()); // nearly 4 KB text message
>>>>>
>>>>>         long start = System.nanoTime();
>>>>>         for (int i = 0; i < numberOfMessage; i++) {
>>>>>             buffer.clear();
>>>>>             recv.clear();
>>>>>             sc.write(buffer);
>>>>>             sc.read(recv);
>>>>>         }
>>>>>         times.add(System.nanoTime() - start); // used to compute an average afterwards
>>>>>         sc.close();
>>>>>     }
>>>>>
>>>>> And I use times to get the average throughput.
>>>>>
>>>>> P.S. I can see that the system is loaded (in terms of threads and the kernel network stack), but the results still seem low. I don't have the experience to judge whether these results are normal.
>>>>> P.S.2 I also know that my test is primitive.

--
Studying for the Turing test
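For reference, a minimal sketch of the kind of throughput calculation described above ("I use times to get the average throughput"), assuming times holds one elapsed-nanoseconds value per client thread and that every client sends the same 131,072 messages; the class name, the list type and the per-client averaging are illustrative assumptions, not the original harness:

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class ThroughputStats {
        // Assumption taken from the post above: each client sends 131,072 messages.
        static final long MESSAGES_PER_CLIENT = 131_072;

        // Each client thread adds its total elapsed time in nanoseconds
        // (the value passed to times.add(...) in the sender loop above).
        static final List<Long> times = new CopyOnWriteArrayList<>();

        // Average per-client throughput in messages per second:
        // messages sent divided by elapsed seconds, averaged over all client threads.
        static double averageMessagesPerSecond() {
            return times.stream()
                    .mapToDouble(nanos -> MESSAGES_PER_CLIENT / (nanos / 1_000_000_000.0))
                    .average()
                    .orElse(0.0);
        }
    }

The thread does not say whether the figures in the table are per client or summed across all clients; a per-client average like this one would have to be multiplied by the client count to give an aggregate rate.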