Hi,

We're testing Ignite DataStreamer throughput on a 1 Gbps network. If we
send a big file with scp we get a throughput of 150 Mb/s.
We have one Ignite server and one Ignite client, both on version 2.0. On
the client we have a HashMap with 1 million entries which we stream into
the server's cache. The cache is configured as partitioned, atomic, with no
backups. With a simple DataStreamer and no StreamTransformer, sending the
HashMap into the cache gives a throughput of 20-25 Mb/s. Is this a limit of
the DataStreamer?


Here is the code:

  try (IgniteDataStreamer<String, Long> stmr = ignite.dataStreamer("default")) {
      for (Entry<String, Long> entry : mappa1.entrySet())
          stmr.addData(entry.getKey(), entry.getValue());
  }
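One variant we have not benchmarked yet is passing the whole map in a
single call instead of calling addData per entry; the streamer batches
internally either way, but it avoids a million method calls (sketch only,
reusing the same ignite instance, "default" cache, and mappa1 map as above):

```java
// Sketch, not benchmarked: stream the whole HashMap in one bulk call.
// Assumes the same ignite instance, "default" cache and mappa1 as above.
try (IgniteDataStreamer<String, Long> stmr = ignite.dataStreamer("default")) {
    stmr.addData(mappa1); // bulk overload of addData(K, V)
} // close() flushes any remaining buffered entries
```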


When we add another server node, each server node receives 11 Mb/s (the
effective total throughput is the same 20-25 Mb/s).

If we add a StreamTransformer to update values in the cache and send more
than one HashMap, the throughput is 15 Mb/s, and adding another server node
does not change it.


public static class UpdateWord extends StreamTransformer<String, Long> {

    @Override
    public Object process(MutableEntry<String, Long> e, Object... args)
        throws EntryProcessorException {
        // Value streamed for this key; accumulate it into the cache entry.
        Long streamed = Long.valueOf(args[0].toString());
        Long val = e.getValue();
        e.setValue(val == null ? streamed : val + streamed);
        return null;
    }
}
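For completeness, this is how we wire the transformer into the streamer
(sketch, same cache name as above); note that a custom receiver only runs
when allowOverwrite is enabled:

```java
// Sketch: attach the transformer to the streamer.
// A StreamReceiver is only invoked when allowOverwrite(true) is set,
// which also disables the streamer's fastest initial-load path.
try (IgniteDataStreamer<String, Long> stmr = ignite.dataStreamer("default")) {
    stmr.allowOverwrite(true);       // required for a custom receiver
    stmr.receiver(new UpdateWord()); // StreamTransformer is a StreamReceiver
    for (Entry<String, Long> entry : mappa1.entrySet())
        stmr.addData(entry.getKey(), entry.getValue());
}
```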


Is there some limit on DataStreamer usage, or a bottleneck somewhere? How
can we use all of the network bandwidth?
Maybe some DataStreamer configuration?
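The per-streamer tuning knobs we have found in the IgniteDataStreamer
javadoc are below; we have not yet verified which of them helps, and the
values shown are guesses, not recommendations:

```java
// Sketch of IgniteDataStreamer tuning options (values are placeholders).
try (IgniteDataStreamer<String, Long> stmr = ignite.dataStreamer("default")) {
    stmr.perNodeBufferSize(1024);       // entries buffered per node before a batch is sent
    stmr.perNodeParallelOperations(16); // concurrent in-flight batches per node
    stmr.autoFlushFrequency(1000);      // flush buffered data at least every 1000 ms
    // ... addData(...) as before ...
}
```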

Thanks,
Mimmo



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-DataStream-Troughput-tp14918.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
