Re: How many nodes within your ignite cluster

2016-11-10 Thread Vladislav Pyatkov
Hi Duke,

I think the JVM restart is there so that Ignite can cope with any issue, for
example a looping thread that does not handle the interrupted flag.

But you can always try to do that yourself by implementing your own
segmentation behavior.
Something like this:

ignite.events().localListen(new IgnitePredicate<Event>() {
    @Override public boolean apply(Event event) {
        new Thread(new Runnable() {
            @Override public void run() {
                // Stop the segmented node and start a new one in the same JVM.
                Ignition.ignite().close();
                Ignition.start(new IgniteConfiguration());
            }
        }).start();

        return true;
    }
}, EventType.EVT_NODE_SEGMENTED);

Do not forget to create a new IgniteConfiguration, because Ignite changes the
state of the configuration instance it was started with.

On Fri, Nov 11, 2016 at 4:51 AM, Duke Dai  wrote:

> Hi vdpyatkov,
>
> Finally, I figured out that not just one but all of the nodes became
> segmented, due to some unknown issue in the VMware infrastructure.
> I changed failureDetectionTimeout/socketTimeout/networkTimeout, and the
> cluster survived the last day; I need more time to observe its behavior.
>
> I'm thinking about SegmentationPolicy.
> Why was SegmentationPolicy.RESTART_JVM provided (it must work with
> CommandLineStartup)? Is there any limitation or implication that a soft
> restart (in the same JVM) won't work?
>
>
> Thanks,
> Duke
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/How-many-nodes-within-your-ignite-cluster-tp8808p8892.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Connection Cleanup when IgniteCallable Killed

2016-11-10 Thread jaime spicciati
All,
I currently broadcast an IgniteCallable in my cluster which opens
connections to various resources, specifically Zookeeper via Apache
Curator.

If the originating node (the client that launched the IgniteCallable) is
stopped prematurely, I see that Ignite will rightfully cancel the broadcast
call within the cluster. This is all great, but Apache Curator has a thread
in the background watching Zookeeper. So when Ignite stops the
IgniteCallable in the cluster, the connection to Zookeeper is still open,
which keeps ephemeral nodes from being deleted.

I tried implementing logic to handle thread interrupts to close the
zookeeper connection but it doesn't look like IgniteCallable is cancelled
through interrupts. I looked through the Ignite code base and can't quite
figure out how it is cancelling my IgniteCallable so that I can hook into
the IgniteCallable life cycle.

Long story short, how do I do resource/connection cleanup in an
IgniteCallable when the client disconnects ungracefully, and the connection
is held by a thread launched from within the IgniteCallable?
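
One possible workaround, sketched below under assumptions: the callable is
given the originating node's ID (passing that ID in is an assumption, not an
Ignite-provided hook), node lifecycle events are enabled via
IgniteConfiguration.setIncludeEventTypes, and closing the CuratorFramework is
enough to release the ephemeral nodes.

import java.util.UUID;
import org.apache.curator.framework.CuratorFramework;
import org.apache.ignite.Ignite;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class CuratorCleanup {
    /**
     * Closes the given Curator client when the node identified by originatorId
     * leaves or fails. This is a sketch, not an Ignite lifecycle callback.
     */
    public static void closeOnOriginatorExit(Ignite ignite, final UUID originatorId,
        final CuratorFramework curator) {
        ignite.events().localListen(new IgnitePredicate<Event>() {
            @Override public boolean apply(Event evt) {
                DiscoveryEvent discoEvt = (DiscoveryEvent)evt;

                if (discoEvt.eventNode().id().equals(originatorId)) {
                    curator.close(); // Drops the Zookeeper session and its ephemeral nodes.

                    return false; // Unregister this listener.
                }

                return true; // Keep listening.
            }
        }, EventType.EVT_NODE_LEFT, EventType.EVT_NODE_FAILED);
    }
}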

Thanks


Re: Hive job submission failed with exception "java.io.UTFDataFormatException"

2016-11-10 Thread lapalette
Hi, Andrey:
 Thanks for your attention. I tried both the OptimizedMarshaller and the
JdkMarshaller, but it did not work. I am using Ignite version 1.6; do I need
to upgrade to 1.7? Or how can I work around the limitation of the
ObjectOutputStream?
Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Hive-job-submsiion-failed-with-exception-java-io-UTFDataFormatException-tp8863p8893.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How many nodes within your ignite cluster

2016-11-10 Thread Duke Dai
Hi vdpyatkov,

Finally, I figured out that not just one but all of the nodes became
segmented, due to some unknown issue in the VMware infrastructure.
I changed failureDetectionTimeout/socketTimeout/networkTimeout, and the
cluster survived the last day; I need more time to observe its behavior.

I'm thinking about SegmentationPolicy.
Why was SegmentationPolicy.RESTART_JVM provided (it must work with
CommandLineStartup)? Is there any limitation or implication that a soft
restart (in the same JVM) won't work?


Thanks,
Duke



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-many-nodes-within-your-ignite-cluster-tp8808p8892.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Visor console

2016-11-10 Thread Paolo Di Tommaso
Hi,

Is it possible to deploy the Visor console in an embedded manner? I mean,
just including the Visor dependencies in an application classpath and
launching it from there?


Is there any example of that?


Cheers,
Paolo


Re: Multiple servers in a Ignite Cluster

2016-11-10 Thread Tracyl
Hi Vlad,

The ideal workflow for my use case is: I host two clusters, one a
computation cluster that runs Spark jobs, the other a data cluster that
hosts Ignite nodes and caches hot data. At run time, multiple Spark jobs
share this data cluster and query it. The problem I have is that I am
pre-loading the Ignite cache using a Spark job. Once the IgniteContext gets
instantiated, it launches an Ignite node in each of the Spark executors I
allocated. The distributed cache then places data on those nodes within my
computation cluster as well, which I don't want, because the partial data
hosted by those nodes won't be there once my pre-load job dies.
Currently I force these nodes to be in client mode so that the cache is only
distributed to the data cluster when I execute my pre-load job. Is there a
better way to solve this?
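
For reference, a minimal sketch of the client-mode forcing described above,
assuming the configuration closure passed to IgniteContext is where the
Spark-side nodes are configured; everything beyond setClientMode(true) is a
placeholder.

import org.apache.ignite.configuration.IgniteConfiguration;

public class PreloadNodeConfig {
    /**
     * Configuration for the Spark-side (pre-load) nodes. Client nodes own no
     * cache partitions, so all data stays on the separate data cluster.
     */
    public static IgniteConfiguration clientConfig() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        cfg.setClientMode(true); // Join as a client only.

        return cfg;
    }
}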

Thanks,
Tracy



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Multiple-servers-in-a-Ignite-Cluster-tp8840p8887.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Hive job submission failed with exception "java.io.UTFDataFormatException"

2016-11-10 Thread Andrey Mashenkov
Hi lapalette,

There is a limitation in ObjectOutputStream: it fails to write UTF strings
longer than 64KB.

Have you tried using another marshaller?

https://ignite.apache.org/releases/1.7.0/javadoc/index.html?org/apache/ignite/marshaller/Marshaller.html
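
For reference, a minimal sketch of setting another marshaller
programmatically (the same can be done via the marshaller property in Spring
XML). This only illustrates the suggestion above; whether it helps with the
Hadoop accelerator's job submission path is not guaranteed.

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.marshaller.optimized.OptimizedMarshaller;

public class MarshallerConfig {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Replace the default marshaller with OptimizedMarshaller.
        OptimizedMarshaller marsh = new OptimizedMarshaller();
        marsh.setRequireSerializable(false); // Allow classes without Serializable.
        cfg.setMarshaller(marsh);

        Ignition.start(cfg);
    }
}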

On Thu, Nov 10, 2016 at 6:20 AM, lapalette  wrote:

> Hi, there is an error when I run the TPC-DS test sets on Ignite-enhanced
> Hive 2.0 over Hadoop 2.6.0, and here is the log:
>
> java.io.IOException: Failed to submit job.
> at
> org.apache.ignite.internal.processors.hadoop.proto.HadoopClientProtocol.
> submitJob(HadoopClientProtocol.java:128)
> at
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(
> JobSubmitter.java:536)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1628)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.JobClient.submitJobInternal(
> JobClient.java:557)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.
> java:548)
> at
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:432)
> at
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:138)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:158)
> at
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(
> TaskRunner.java:101)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1840)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1584)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1361)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1184)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1172)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(
> CliDriver.java:233)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(
> CliDriver.java:184)
> at org.apache.hadoop.hive.cli.CliDriver.processLine(
> CliDriver.java:400)
> at org.apache.hadoop.hive.cli.CliDriver.processLine(
> CliDriver.java:336)
> at org.apache.hadoop.hive.cli.CliDriver.processReader(
> CliDriver.java:433)
> at org.apache.hadoop.hive.cli.CliDriver.processFile(
> CliDriver.java:449)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(
> CliDriver.java:152)
> at org.apache.hadoop.hive.cli.CliDriver.processLine(
> CliDriver.java:400)
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(
> CliDriver.java:778)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:717)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:645)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: class
> org.apache.ignite.internal.client.GridServerUnreachableException: Failed
> to
> communicate with grid nodes (maximum count of retries reached).
> at
> org.apache.ignite.internal.client.impl.GridClientAbstractProjection.
> withReconnectHandling(GridClientAbstractProjection.java:153)
> at
> org.apache.ignite.internal.client.impl.GridClientComputeImpl.executeAsync(
> GridClientComputeImpl.java:132)
> at
> org.apache.ignite.internal.client.impl.GridClientComputeImpl.execute(
> GridClientComputeImpl.java:121)
> at
> org.apache.ignite.internal.processors.hadoop.proto.HadoopClientProtocol.
> submitJob(HadoopClientProtocol.java:119)
> ... 40 more
> Caused by: class
> org.apache.ignite.internal.client.impl.connection.
> GridClientConnectionResetException:
> Failed to send message over connection (will try to reconnect):
> /192.168.100.31:11211
> at
> org.apache.ignite.internal.client.impl.connection.
> GridClientNioTcpConnection.makeRequest(GridClientNioTcpConnection.
> java:495)
> at
> org.apache.ignite.internal.client.impl.connection.
> 

Re: One server node seems to hang onto heap memory after clear

2016-11-10 Thread vdpyatkov
Hi,

If your caches are off-heap, they basically do not consume heap.

Could you please take a heap dump and provide a brief analysis of which
objects consume the heap?
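
If it helps, here is a minimal sketch for taking a heap dump programmatically
from inside the JVM (the standard jmap tool works just as well); the output
file name is a placeholder.

import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDump {
    public static void dump(String path) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
            ManagementFactory.getPlatformMBeanServer(),
            "com.sun.management:type=HotSpotDiagnostic",
            HotSpotDiagnosticMXBean.class);

        // live = true dumps only reachable objects, which is what matters here.
        bean.dumpHeap(path, true);
    }

    public static void main(String[] args) throws Exception {
        dump("ignite-heap.hprof"); // Placeholder name; open it in MAT or VisualVM.
    }
}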



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/One-server-node-seems-to-hang-onto-heap-memory-after-clear-tp8838p8884.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: rest-http can't get data if key is Integer or others

2016-11-10 Thread ptupitsyn
Hi Victor,

Please have a look at our contribution guidelines:
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute

* You are writing to user list, but this discussion should be on the dev
list (d...@ignite.apache.org)
* Pull request name should include JIRA ticket (IGNITE-4195)
* JIRA ticket should be moved to Patch Available status
* Make sure you follow the coding guidelines

Thank you for your interest in Ignite,

Pavel



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/rest-http-can-t-get-data-if-key-is-Integer-or-others-tp8762p8883.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ignite-spring conflict

2016-11-10 Thread Andrey Mashenkov
It looks like you are now running out of memory. You can try to increase the
heap size.
Please attach the full stack trace, so we can see whether there is another
possible issue.

On Thu, Nov 10, 2016 at 11:54 AM, ewg  wrote:

> Yeah, that was the problem. We use spring.version 4.1.4.RELEASE; after
> downgrading to 4.1.0 the exception is gone. One step forward, and I am
> getting some other exception:
>
> [08:47:37,702][SEVERE][tcp-disco-sock-reader-#102%null%][TcpDiscoverySpi]
> Runtime error caught during grid runnable execution: Socket reader [id=968,
> name=tcp-disco-sock-reader-#102%null%,
> nodeId=f8acf485-73f6-498a-846d-1a7a8851acc1]
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>
> Let's see if I can get it fixed myself, though it looks very suspicious,
> since it takes a very long time to run the "basic" code before the exception comes up.
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/ignite-spring-conflict-tp8837p8871.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite with cassandra

2016-11-10 Thread Dmitriy Govorukhin
Hi, I think the problem is related to the Cassandra configuration or the
network; check your firewall. You can try changing the Cassandra
configuration to enable port 9160. Also check which version of the
cassandra-jdbc driver you are using; different versions use different ports.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-with-cassandra-tp8777p8881.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Very high memory consumption in apache ignite

2016-11-10 Thread rishi007bansod
The cache configuration I have used is:

CacheConfiguration<order_lineKey, order_line> ccfg_order_line = new
CacheConfiguration<>();
ccfg_order_line.setIndexedTypes(order_lineKey.class, order_line.class);
ccfg_order_line.setName("order_line_cache");
ccfg_order_line.setCopyOnRead(false);
ccfg_order_line.setMemoryMode(CacheMemoryMode.ONHEAP_TIERED);
ccfg_order_line.setSwapEnabled(false);
ccfg_order_line.setBackups(0);
IgniteCache<order_lineKey, order_line> cache_order_line =
ignite.createCache(ccfg_order_line);

The JVM configuration I have used is:

-server 
-Xms10g 
-Xmx10g 
-XX:+UseParNewGC 
-XX:+UseConcMarkSweepGC 
-XX:+UseTLAB 
-XX:NewSize=128m 
-XX:MaxNewSize=128m 
-XX:MaxTenuringThreshold=0 
-XX:SurvivorRatio=1024 
-XX:+UseCMSInitiatingOccupancyOnly 
-XX:CMSInitiatingOccupancyFraction=40
-XX:MaxGCPauseMillis=1000 
-XX:InitiatingHeapOccupancyPercent=50 
-XX:+UseCompressedOops
-XX:ParallelGCThreads=8 
-XX:ConcGCThreads=8 
-XX:+DisableExplicitGC

same as provided at link 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Very-high-memory-consumption-in-apache-ignite-tp8822p8880.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Very high memory consumption in apache ignite

2016-11-10 Thread Dmitriy Govorukhin
Hi, could you please provide your final JVM options and cache configuration?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Very-high-memory-consumption-in-apache-ignite-tp8822p8879.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Jdbc connection

2016-11-10 Thread Anil
Any help in understanding the questions below?

On 10 November 2016 at 16:31, Anil  wrote:

> I have a couple of questions on the Ignite JDBC connection. Could you please
> clarify?
>
> 1. Should the connection be closed like any other JDBC connection? I see
> that closing the connection shuts down the Ignite client node.
> 2. Why are connection objects not getting released, leaving all connections
> busy?
> 3. Is a connection pool really required for an Ignite client? I hope one
> Ignite connection can handle a number of queries in parallel.
> 4. What is the recommended configuration for an Ignite client to support
> failover?
>
> Thanks.
>


Re: What's the difference between EntryProcessor and distributed closure?

2016-11-10 Thread vdpyatkov
Hi Tracyl,

You can use "invoke", the method will be most effective for retrieve part of
value from cache. For the withKeepBinary off_heap cache "invoke" takes lock
on entry and can to manipulate with off_heap pointer wrapped on
BinaryObject.
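
A minimal sketch of that pattern, assuming a cache of BinaryObject values;
the cache name, key type, and field name are placeholders.

import javax.cache.processor.EntryProcessorException;
import javax.cache.processor.MutableEntry;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.CacheEntryProcessor;

public class InvokeExample {
    /** Reads a single field server-side, so only that field travels back to the caller. */
    public static Object readField(Ignite ignite, String cacheName, Object key,
        final String fieldName) {
        IgniteCache<Object, BinaryObject> cache =
            ignite.<Object, BinaryObject>cache(cacheName).withKeepBinary();

        return cache.invoke(key, new CacheEntryProcessor<Object, BinaryObject, Object>() {
            @Override public Object process(MutableEntry<Object, BinaryObject> entry,
                Object... args) throws EntryProcessorException {
                BinaryObject val = entry.getValue();

                // Extract just one field instead of deserializing the whole value.
                return val == null ? null : val.field(fieldName);
            }
        });
    }
}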



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/What-s-the-difference-between-EntryProcessor-and-distributed-closure-tp8759p8877.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: rest-http can't get data if key is Integer or others

2016-11-10 Thread Dmitriy Govorukhin
Hi, 

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply. 

Could you please provide a test to reproduce the issue? I also think the
Cross-Origin change and your fix need to go into separate pull requests, and
tests need to be added.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/rest-http-can-t-get-data-if-key-is-Integer-or-others-tp8762p8876.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How many nodes within your ignite cluster

2016-11-10 Thread vdpyatkov
Hi Duke,

I don't know why the failure of one node leads to the failure of all the
others; that is not normal behavior.
What is the reason the first node failed, and then the others?
Can you increase failureDetectionTimeout if you think this is a network
issue (or the network timeouts in the discovery SPI)?
Could I look at the logs from each node?
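
For reference, a minimal sketch of where those knobs live; the values are
placeholders, not recommendations, and should be tuned to the environment.

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class TimeoutConfig {
    public static IgniteConfiguration timeouts() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Overall failure detection timeout, in milliseconds (placeholder value).
        cfg.setFailureDetectionTimeout(30_000);

        // Or tune the discovery SPI timeouts explicitly.
        TcpDiscoverySpi disco = new TcpDiscoverySpi();
        disco.setSocketTimeout(10_000);  // Placeholder values.
        disco.setNetworkTimeout(10_000);
        cfg.setDiscoverySpi(disco);

        return cfg;
    }
}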



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-many-nodes-within-your-ignite-cluster-tp8808p8875.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: server node is java , and client is c++, client can't join cluster

2016-11-10 Thread Igor Sapego
Hi,

Can it be that you use JDKs from different vendors for different nodes?

Best Regards,
Igor

On Wed, Nov 9, 2016 at 2:27 PM, smile  wrote:

> Hi, all,
>  When I start one C++ server node and then start a Java server node, it
> successfully joins the cluster. But when I finally start one C++ client
> node, I find that the C++ client can't join the cluster, and the Java
> server node throws the following exception:
>
> *log4j:WARN No appenders could be found for logger
> (org.springframework.core.env.StandardEnvironment).*
> *log4j:WARN Please initialize the log4j system properly.*
> *log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
>  for more info.*
> *[19:12:18]__   *
> *[19:12:18]   /  _/ ___/ |/ /  _/_  __/ __/ *
> *[19:12:18]  _/ // (7 7// /  / / / _/   *
> *[19:12:18] /___/\___/_/|_/___/ /_/ /___/  *
> *[19:12:18] *
> *[19:12:18] ver. 1.7.0#19700101-sha1:DEV*
> *[19:12:18] 2016 Copyright(C) Apache Software Foundation*
> *[19:12:18] *
> *[19:12:18] Ignite documentation: http://ignite.apache.org
> *
> *[19:12:18] *
> *[19:12:18] Quiet mode.*
> *[19:12:18]   ^-- Logging to file
> 'E:\ignite1.7.0\apache-ignite-1.7.0-src\work\log\ignite-ae00a82f.log'*
> *[19:12:18]   ^-- To see **FULL** console log here add
> -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}*
> *[19:12:18] *
> *[19:12:18] OS: Windows 7 6.1 amd64*
> *[19:12:18] VM information: Java(TM) SE Runtime Environment 1.7.0_79-b15
> Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 24.79-b02*
> *[19:12:18] Initial heap size is 254MB (should be no less than 512MB, use
> -Xms512m -Xmx512m).*
> *[19:12:20] Configured plugins:*
> *[19:12:20]   ^-- None*
> *[19:12:20] *
> *[19:12:21] Security status [authentication=off, tls/ssl=off]*
> *[19:12:28] Performance suggestions for grid  (fix if possible)*
> *[19:12:28] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true*
> *[19:12:28]   ^-- Disable grid events (remove 'includeEventTypes' from
> configuration)*
> *[19:12:28] *
> *[19:12:28] To start Console Management & Monitoring run
> ignitevisorcmd.{sh|bat}*
> *[19:12:28] *
> *[19:12:28] Ignite node started OK (id=ae00a82f)*
> *[19:12:28] Topology snapshot [ver=2, servers=2, clients=0, CPUs=8,
> heap=4.4GB]*
> *[19:12:35,578][ERROR][tcp-disco-sock-reader-#5%null%][TcpDiscoverySpi]
> Failed to read message [sock=Socket[addr=/127.0.0.1
> ,port=9120,localport=47501],
> locNodeId=ae00a82f-6e0d-4316-98a3-a1db78ba80bd,
> rmtNodeId=e7d0ba75-e47c-4dc0-8d2a-ed6c128c2e7d]*
> *class org.apache.ignite.IgniteCheckedException: Failed to deserialize
> object with given class loader: sun.misc.Launcher$AppClassLoader@6fd7bd04*
> * at
> org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal(JdkMarshaller.java:105)*
> * at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$SocketReader.body(ServerImpl.java:5457)*
> * at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)*
> *Caused by: java.io.InvalidClassException:
> org.apache.ignite.internal.util.lang.GridFunc$49$1; local class
> incompatible: stream classdesc serialVersionUID = 1953108849692953835,
> local class serialVersionUID = -4878603819884545190*
> * at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:617)*
> * at
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)*
> * at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)*
> * at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)*
> * at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)*
> * at
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)*
> * at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)*
> * at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)*
> * at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)*
> * at
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)*
> * at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)*
> * at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)*
> * at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)*
> * at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)*
> * at
> org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal(JdkMarshaller.java:102)*
> * ... 2 more*
> *[19:12:58] Topology snapshot [ver=3, servers=1, clients=0, CPUs=8,
> heap=3.5GB]*
> *[19:12:58] Ignite node stopped OK [uptime=00:00:30:097]*
>
> But if I start C++ server nodes and then start one C++ client node, the
> client can join the cluster and access data successfully.
>
> How can I solve this?
>
> Thank you very much!
>


Re: Ignite Jdbc connection

2016-11-10 Thread Anil
I have a couple of questions on the Ignite JDBC connection. Could you please
clarify?

1. Should the connection be closed like any other JDBC connection? I see
that closing the connection shuts down the Ignite client node.
2. Why are connection objects not getting released, leaving all connections
busy?
3. Is a connection pool really required for an Ignite client? I hope one
Ignite connection can handle a number of queries in parallel.
4. What is the recommended configuration for an Ignite client to support
failover?
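
For context, a minimal usage sketch with the cfg-based JDBC driver; the cache
name, configuration path, and table are placeholders, and this does not by
itself answer the pooling question above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IgniteJdbcExample {
    public static void main(String[] args) throws Exception {
        // Registers org.apache.ignite.IgniteJdbcDriver.
        Class.forName("org.apache.ignite.IgniteJdbcDriver");

        // Placeholder cache name and config path.
        String url = "jdbc:ignite:cfg://cache=myCache@file:///path/to/ignite-client-config.xml";

        // try-with-resources closes the connection when done.
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM Person")) {
            while (rs.next())
                System.out.println(rs.getLong(1));
        }
    }
}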

Thanks.


Re: ON HEAP vs OFF HEAP memory mode performance Apache Ignite

2016-11-10 Thread Vladislav Pyatkov
Hi,

Working with OFF_HEAP requires more CPU time than ON_HEAP, because each "get"
has to read the bytes from memory (via the off-heap pointer) and deserialize
those bytes into a business object.

However, you can use the binary interface in order to avoid deserialization:

IgniteCache<Integer, BinaryObject> cache =
ignite.cache(CACHE_NAME).withKeepBinary();

BinaryObject binaryObject = cache.get(key);
binaryObject.field(FIELD_NAME);

and do not copy the object on each "get" (if the object does not change after
"get") by disabling copyOnRead in the cache configuration:

<property name="copyOnRead" value="false"/>

...

Also, if remote data processing fits your case, you can use "invoke" (on a
withKeepBinary cache). In this case the work is done over the off-heap
pointer (without copying the bytes to the heap).


public Object process(MutableEntry<Integer, BinaryObject> entry, Object... arguments) throws
EntryProcessorException {
    BinaryObject binaryObject = entry.getValue();
    ...

On Thu, Nov 10, 2016 at 11:17 AM, rishi007bansod 
wrote:

> Following are the average execution times for running 14 queries against 16
> million entries (DB size: 370 MB):
>
> OFF HEAP memory mode - 47 ms
> ON HEAP memory mode - 16 ms
>
> Why is there a difference in execution times between the off-heap and
> on-heap memory modes when both are in-memory? What performance tuning can
> be applied to the off-heap memory mode for better results? (I have also
> tried the JVM tuning mentioned in the Ignite documentation, but it is not
> giving any better results.)
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/ON-HEAP-vs-OFF-HEAP-memory-mode-performance-Apache-Ignite-tp8870.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: ignite-spring conflict

2016-11-10 Thread ewg
Yeah, that was the problem. We use spring.version 4.1.4.RELEASE; after
downgrading to 4.1.0 the exception is gone. One step forward, and I am
getting some other exception:

[08:47:37,702][SEVERE][tcp-disco-sock-reader-#102%null%][TcpDiscoverySpi]
Runtime error caught during grid runnable execution: Socket reader [id=968,
name=tcp-disco-sock-reader-#102%null%,
nodeId=f8acf485-73f6-498a-846d-1a7a8851acc1]
java.lang.OutOfMemoryError: GC overhead limit exceeded

Let's see if I can get it fixed myself, though it looks very suspicious,
since it takes a very long time to run the "basic" code before the exception comes up.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-spring-conflict-tp8837p8871.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


ON HEAP vs OFF HEAP memory mode performance Apache Ignite

2016-11-10 Thread rishi007bansod
Following are the average execution times for running 14 queries against 16
million entries (DB size: 370 MB):

OFF HEAP memory mode - 47 ms
ON HEAP memory mode - 16 ms

Why is there a difference in execution times between the off-heap and
on-heap memory modes when both are in-memory? What performance tuning can be
applied to the off-heap memory mode for better results? (I have also tried
the JVM tuning mentioned in the Ignite documentation, but it is not giving
any better results.)
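
A small sketch of cache-level settings that are commonly adjusted for
OFFHEAP_TIERED SQL workloads; the values are placeholders, not
recommendations, and whether they help depends on the queries involved.

import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class OffHeapTuning {
    public static CacheConfiguration<Object, Object> offHeapCache(String name) {
        CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>(name);

        ccfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
        ccfg.setOffHeapMaxMemory(0);            // 0 = unlimited off-heap memory.
        ccfg.setSqlOnheapRowCacheSize(100_000); // Keep hot SQL rows on-heap (placeholder size).
        ccfg.setCopyOnRead(false);              // Avoid copying values on reads.

        return ccfg;
    }
}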



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ON-HEAP-vs-OFF-HEAP-memory-mode-performance-Apache-Ignite-tp8870.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.